doi (string, len 10–10) | chunk-id (int64, 0–936) | chunk (string, len 401–2.02k) | id (string, len 12–14) | title (string, len 8–162) | summary (string, len 228–1.92k) | source (string, len 31–31) | authors (string, len 7–6.97k) | categories (string, len 5–107) | comment (string, len 4–398, nullable ⌀) | journal_ref (string, len 8–194, nullable ⌀) | primary_category (string, len 5–17) | published (string, len 8–8) | updated (string, len 8–8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2306.05685
| 30 |
[Table fragment: inter-judge agreement under Setup S1 (Random = 33%) and Setup S2 (Random = 50%); judges include G4 (GPT-4), G4-S, G3.5 (GPT-3.5), C (Claude), and H (Human); each cell pairs an agreement percentage with the number of votes compared (e.g., G4 vs. G4-S under S1: 72%, 2968 votes).]
Figure 2: Agreement and win rate difference. Each point corresponds to a model pair and counts only the non-tie votes between the two models. The x-axis value is the win rate difference between the two models. The y-axis value is the GPT-4 and human agreement.
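To make the agreement statistic concrete, here is a minimal sketch (ours, not the authors' released code) of how judge–human agreement over non-tie votes can be tallied; the vote-record keys are assumptions:

```python
# Minimal sketch: agreement between an LLM judge and humans on non-tie votes.
# The `votes` format (dicts with hypothetical keys) is an assumption,
# not the authors' actual data schema.

def agreement(votes):
    """Fraction of battles where judge and human pick the same winner,
    counting only battles where neither verdict is a tie."""
    matched = total = 0
    for v in votes:
        if v["judge_winner"] == "tie" or v["human_winner"] == "tie":
            continue  # Figure 2 counts only the non-tie votes
        total += 1
        matched += v["judge_winner"] == v["human_winner"]
    return matched / total if total else float("nan")

votes = [
    {"judge_winner": "model_a", "human_winner": "model_a"},
    {"judge_winner": "model_b", "human_winner": "model_a"},
    {"judge_winner": "tie", "human_winner": "model_a"},
]
print(agreement(votes))  # 0.5 (the tie vote is excluded)
```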
Figure 4: Average win rate of nine models under different judges on Chatbot Arena.
# Table 7: Category-wise win rate of models.
|
2306.05685#30
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement as between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 30 |
⁸The semicolons theme initially emerged as a catch-all category for miscellaneous issues. In the end, however, all these issues involved semicolons immediately after if statements, as in if (condition); {...}.
As for whether the LLMs followed our instructions not to provide sample code or tests, performance was poor across the board. The responses from GPT-3.5 practically always included code, and very often included model-solution-like code. This was less common for Codex, which however did produce automated tests for some of the help requests.
4.3 Deeper Analysis of GPT-3.5 Responses

4.3.1 Results from an Extended Dataset. As described in Section 3.4.2, we proceeded by analyzing all 150 responses produced by GPT-3.5 with the English prompts. Table 3 summarizes the findings, which are similar to those we obtained for GPT-3.5 with the smaller dataset and reported above.
|
2306.05715#30
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 30 |
[5] Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lapata, Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, Nicolas Pinto, and Joseph Turian. Experience grounds language. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8718–8735, Online, November 2020. Association for Computational Linguistics.
[6] Grzegorz Chrupała, Bertrand Higy, and Afra Alishahi. Analyzing analytical methods: The case of phonology in neural models of spoken language. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4146–4156, Online, July 2020. Association for Computational Linguistics.
[7] Alexis Conneau, German Kruszewski, Guillaume Lample, Loïc Barrault, and Marco Baroni. What you can cram into a single vector: Probing sentence embeddings for linguistic properties. arXiv preprint arXiv:1805.01070, 2018.
[8] David Eigen, Christian Puhrsch, and Rob Fergus. Depth map prediction from a single image using a multi-scale deep network. Advances in neural information processing systems, 27, 2014.
|
2306.05720#30
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process, well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
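As a rough illustration of the linear-probing methodology the abstract describes (not the authors' implementation), a probe is just a linear classifier fit on internal activations; everything below, including the synthetic "activations" and labels, is a stand-in:

```python
# Minimal linear-probe sketch on synthetic "activations" (not the paper's code).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 200, 64                       # 200 samples, 64-dim activations (synthetic)
acts = rng.normal(size=(n, d))       # stand-in for LDM internal activations
w_true = rng.normal(size=d)
labels = (acts @ w_true > 0).astype(int)  # stand-in for salient-object / background labels

# Fit the probe on a training split, report held-out accuracy.
probe = LogisticRegression(max_iter=1000).fit(acts[:150], labels[:150])
print("probe accuracy:", probe.score(acts[150:], labels[150:]))
```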
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05783
| 30 |
[Table fragment, first column: the evaluated models — Random-Guess, Bloomz-560m, Bloomz-1b1, Bloomz-1b7, Bloomz-3b, Bloomz-7b1, Bloomz-7b1-mt, Bloomz-7b1-p3, Bloomz, Bloomz-mt, Bloomz-p3, llama-7b, llama-13b, llama-30b, llama-65b, baize-7b (lora), baize-7b-healthcare (lora), baize-13b (lora), baize-30b (lora), Belle-0.2M, Belle-0.6M, Belle-1M, Belle-2M, chatglm-6B, doctorglm-6b, moss-base-16B, moss-sft-16B, vicuna-7b, vicuna-13b, alpaca-7b, pythia-1.4b, pythia-2.8b, pythia-6.9b, pythia-12b, gpt-neox-20b, h2ogpt-12b, h2ogpt-20b, dolly-3b, dolly-7b, dolly-12b, stablelm-3b, stablelm-7b, falcon-7b, falcon-7b-instruct, falcon-40b, falcon-40b-instruct — followed by the 0-shot results column.]
|
2306.05783#30
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Language Processing (NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises 249,587 multiple-choice questions across 516
diverse disciplines spanning 13 different subjects, accompanied by
Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k questions. We
conduct an evaluation of 47 cutting-edge LLMs on Xiezhi. Results indicate that
LLMs exceed the average performance of humans in science, engineering,
agronomy, medicine, and art, but fall short in economics, jurisprudence,
pedagogy, literature, history, and management. We anticipate Xiezhi will help
analyze important strengths and shortcomings of LLMs, and the benchmark is
released at https://github.com/MikeGu721/XiezhiBenchmark.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 30 |
els to ensure the training efficiency and multi-modality enhanced representations. As shown in Figure 3, since CRM is involved and LLM is tunable, the research works in quadrant 1 could better align to the data distribution of recommender systems and thus all achieve satisfying performance. However, they only leverage small-scale language models as feature encoders, and thus the key capacities (e.g., reasoning, instruction following) of large foundation models still remain underexplored in this quadrant.
|
2306.05817#30
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles in matching users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language models (LLMs) have shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 30 |
Limitations

There is still a lot of uncertainty around certain variables, such as the relative contribution of added parameters to their energy consumption and carbon footprint, as well as the proportion of energy used for pre-training versus fine-tuning models for different tasks and architectures [267]. Conducting further research on these variables can benefit the field both from the perspective of sustainability and overall efficiency.
# 4.1.7 Data and Content Moderation Labor
Human labor is a substantial component of machine learning model development, including generative AI systems. This labor is typically completed via a process called crowd computation, where distributed data laborers, also called crowdworkers, complete large volumes of individual tasks that contribute to model development. This can occur in all stages of model development: before a model is trained, crowdworkers can be employed to gather training data, curate and clean this data, or provide data labels. While a model is being developed, crowdworkers evaluate and provide feedback to model generations before the final deployed model is released, and after model deployment, crowdworkers are often employed in evaluating, moderating, or correcting a model's output. Crowdwork is often contracted out by model developers to third-party companies.
|
2306.05949#30
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 30 |
Three Levels of Generalization. All models perform best on the Cross-Task setting, with over 10% absolute gap (step SR) on average compared with Cross-Website and Cross-Domain settings, indicating that generalizing to unseen environments is still a major challenge. On the contrary, we note that the performance of Cross-Website and Cross-Domain settings are notably similar, which is also reinforced in Figure 6, where there is no clear distinction in performance across these settings. This suggests that the challenges primarily stem from the diversity in website designs and interaction logic rather than domain specifics. Tasks across domains tend to share common operations, and pretrained LMs may already have the capability to decompose complex tasks at a high level based on commonsense knowledge. Yet, grounding such knowledge into actionable steps in specific and varying environments remains a considerable challenge.
|
2306.06070#30
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites is often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still substantial room for improvement towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06264
| 30 |
We investigated the application of our metrics in two crucial areas: factual alignment and hallucination detection for in-context learning based models. Upon applying our proposed metrics to these tasks, we exhibit promising results, offering valuable insights into aligning generated output with factual knowledge and identifying and mitigating hallucinated facts. Furthermore, our observations indicate that even in these significantly enhanced LLMs, explicit knowledge instillation continues to encounter challenges when it comes to location and language-related queries. All code and data necessary to reproduce the results reported in this paper is available at: https://github.com/rit-git/lm-know.
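A minimal sketch of the entropy and KL-divergence measurements described here (our illustration with toy distributions, not the released lm-know code):

```python
# Sketch: entropy and KL divergence over next-token distributions
# before vs. after knowledge instillation. The probability vectors
# below are toy stand-ins for real model outputs.
import numpy as np

def entropy(p, eps=1e-12):
    p = np.asarray(p, dtype=float)
    return float(-(p * np.log(p + eps)).sum())

def kl_divergence(p, q, eps=1e-12):
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float((p * (np.log(p + eps) - np.log(q + eps))).sum())

p_before = np.array([0.25, 0.25, 0.25, 0.25])  # uncertain: the fact is not known
p_after  = np.array([0.85, 0.05, 0.05, 0.05])  # peaked after instilling the fact

print(entropy(p_before), entropy(p_after))  # entropy drops after instillation
print(kl_divergence(p_after, p_before))     # large shift => fact was not already known
```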
# Acknowledgements
We would like to thank Tom Mitchell, Estevam Hruschka, Nikita Bhutani, Eser Kandogan, and Yasaman Razeghi for their valuable comments.
# References
Badr AlKhamissi, Millicent Li, Asli Celikyilmaz, Mona Diab, and Marjan Ghazvininejad. 2022. A review on language models as knowledge bases. arXiv preprint arXiv:2204.06031.
|
2306.06264#30
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework for estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.06283
| 30 |
Figure 2: GA using an LLM. This figure illustrates how different aspects of a GA can be performed by an LLM. GPT-3.5 was used to fragment, reproduce, and optimize molecules represented by SMILES strings. The first column illustrates how an LLM can fragment a molecule represented by a SMILES string (input molecule on top, output LLM fragments below). The middle column showcases how an LLM can reproduce/mix two molecules as is done in a GA (input molecule on top, output LLM below). The right column illustrates an application in which an LLM is used to optimize molecules given their SMILES and an associated score. The LLM suggested potential modifications to optimize molecules. The plot shows best (blue) and mean (orange) Tanimoto similarity to Vitamin C per LLM-produced generation.
operation (two independent organic chemists judged the LLM-GA-generated molecules to be chemically reasonable in 32/32 cases, but only in 21/32 cases for the random recombination operation).
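For reference, a Tanimoto fitness score like the one above can be computed with RDKit roughly as follows (a sketch; the Morgan-fingerprint settings are our assumptions, not necessarily the team's):

```python
# Sketch: Tanimoto similarity of a candidate molecule to vitamin C (ascorbic acid)
# using Morgan fingerprints. Radius/bit settings are illustrative assumptions.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

VITAMIN_C = Chem.MolFromSmiles("OC[C@H](O)[C@H]1OC(=O)C(O)=C1O")

def tanimoto_to_vitamin_c(smiles: str) -> float:
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return 0.0  # invalid SMILES from the LLM gets the worst score
    fp1 = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    fp2 = AllChem.GetMorganFingerprintAsBitVect(VITAMIN_C, 2, nBits=2048)
    return DataStructs.TanimotoSimilarity(fp1, fp2)

print(tanimoto_to_vitamin_c("OCC(O)C(O)C(O)C(O)C=O"))  # open-chain sugar, partial match
```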
|
2306.06283#30
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 30 |
In order to verify the robustness of REMEMBERER, experiments with different initial experience combinations or a different training set are conducted. The results are depicted in Table 3. The initial experience combination E0 denotes the particular trajectory adopted by the original implementation of ReAct, while E1 and E2 are randomly sampled from S0. It is observed that the proposed REMEMBERER can achieve better and more stable results with different initialization and training sets compared to ReAct. Thus, REMEMBERER can mitigate, to some extent, the workload of searching for an optimal exemplar combination.
We compare the training efficiency of REMEMBERER with the conventional IL and RL methods in Table 4 and Table 5. In contrast to IL, REMEMBERER requires only a few annotated samples to initialize the experience memory, while IL needs far more human annotations. The REMEMBERER agent can be trained on only 10 tasks for 74 steps, while RL and IL are expected to be trained for about 100 thousand steps to achieve acceptable performance. Consequently, the proposed REMEMBERER offers a much more efficient way to build a practical agent.
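A minimal sketch of the experience-memory idea (our illustration, not the authors' code): store (observation, action) pairs with Q-value estimates updated by a one-step Q-learning rule, and retrieve the most similar stored observations as in-context exemplars:

```python
# Sketch of an RLEM-style experience memory (illustrative, not the paper's code).
from difflib import SequenceMatcher

class ExperienceMemory:
    def __init__(self, alpha=0.5, gamma=0.9):
        self.q = {}                    # (observation, action) -> Q estimate
        self.alpha, self.gamma = alpha, gamma

    def update(self, obs, action, reward, next_obs):
        # One-step Q-learning update over the discrete memory.
        nxt = max((v for (o, _), v in self.q.items() if o == next_obs), default=0.0)
        old = self.q.get((obs, action), 0.0)
        self.q[(obs, action)] = old + self.alpha * (reward + self.gamma * nxt - old)

    def retrieve(self, obs, k=3):
        # Return the k stored experiences most similar to `obs` (string similarity
        # here), to be formatted as in-context exemplars for the LLM.
        scored = [(SequenceMatcher(None, obs, o).ratio(), o, a, v)
                  for (o, a), v in self.q.items()]
        return sorted(scored, reverse=True)[:k]

mem = ExperienceMemory()
mem.update("search page for shoes", "click[buy]", 1.0, "checkout page")
print(mem.retrieve("search page for boots"))
```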
# 4.3 Results on WikiHow
|
2306.07929#30
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, surpassing
an LLM-based agent with fixed exemplars or one equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 31 |
Figure 4: Average win rate of nine models under different judges on Chatbot Arena.
# Table 7: Category-wise win rate of models.
Model | Writing | Roleplay | Reasoning | Math | Coding | Extraction | STEM | Humanities
---|---|---|---|---|---|---|---|---
GPT-4 | 61.2% | 67.9% | 49.3% | 66.1% | 56.3% | 66.2% | 76.6% | 72.2%
GPT-3.5 | 50.9% | 60.6% | 32.6% | 63.8% | 55.0% | 48.8% | 52.8% | 53.8%
Vicuna-13B | 39.7% | 39.2% | 20.1% | 18.0% | 36.9% | 29.2% | 47.0% | 47.5%
LLaMA-13B | 15.1% | 15.1% | 7.8% | 2.1% | 7.5% | 9.3% | 10.1% | 6.8%
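For reference, a category-wise win rate like Table 7's can be tallied from per-battle judge verdicts roughly as follows (a sketch; the record format and tie handling are assumptions):

```python
# Sketch: tally category-wise win rates from judge verdicts.
# The battle-record format is an illustrative assumption.
from collections import defaultdict

battles = [
    {"category": "Math", "model_a": "GPT-4", "model_b": "Vicuna-13B", "winner": "model_a"},
    {"category": "Math", "model_a": "GPT-4", "model_b": "LLaMA-13B", "winner": "model_a"},
    {"category": "Writing", "model_a": "GPT-4", "model_b": "GPT-3.5", "winner": "tie"},
]

wins = defaultdict(int)   # (model, category) -> wins
games = defaultdict(int)  # (model, category) -> non-tie games

for b in battles:
    if b["winner"] == "tie":
        continue  # ties excluded here; the paper's exact tie handling may differ
    winner = b[b["winner"]]
    loser = b["model_b" if b["winner"] == "model_a" else "model_a"]
    for m in (winner, loser):
        games[(m, b["category"])] += 1
    wins[(winner, b["category"])] += 1

for (m, c), g in sorted(games.items()):
    print(f"{m:12s} {c:8s} {wins[(m, c)] / g:.0%}")
```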
# 4.3 Win rates under different judges
|
2306.05685#31
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement as between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 31 |
For 123 help requests out of 150, GPT-3.5 correctly identified and mentioned at least one actual issue; for 82 of those, it identified and mentioned all actual issues. The LLM identified non-existing issues in 72 help requests.
Even when it did not mention the actual issues, GPT-3.5 often generated model-solution-like code. Almost every response included code, and the code was of model-solution quality in roughly two responses out of three.
Given that we had grouped the issues in student code (Section 4.1 above), it was easy to break down the GPT-3.5 analysis by issue type, so we did that. Table 4 summarizes.
Note: In many cases, a help request had more than one issue (1.9 on average), and our analysis does not account for whether the help request responses addressed a specific issue type.
Consider the logic errors theme in Table 4. When issues related to Conditionals are present, the LLM addresses all the issues in 35% of cases; when Iteration issues are present, the same proportion is 73%; and when Arithmetic issues are present, it is 57%. For the input/output theme, the proportions are somewhat lower: 44%, 54%, and 50% for formatting issues, unwanted outputs, and missing outputs, respectively.
|
2306.05715#31
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 31 |
[9] Yanai Elazar, Shauli Ravfogel, Alon Jacovi, and Yoav Goldberg. Amnesic probing: Behavioral explanation with amnesic counterfactuals. Transactions of the Association for Computational Linguistics, 9:160–175, 2021.
[10] Mureji Fatunde and Crystal Tse. Stability ai raises seed round at $1 billion value. Bloomberg, 2022.
[11] Maxwell Forbes, Ari Holtzman, and Yejin Choi. Do neural language representations learn physical commonsense? arXiv preprint arXiv:1908.02899, 2019.
[12] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840–6851, 2020.
[13] Adrian Johnston and Gustavo Carneiro. Self-supervised monocular trained depth estimation using self-attention and discrete disparity volume. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4756–4765, 2020.
[14] Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. Elucidating the design space of diffusion-based generative models. arXiv preprint arXiv:2206.00364, 2022.
|
2306.05720#31
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process, well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05783
| 31 |
[Table fragment: 0-shot and 1-shot MMLU accuracy values for the models listed above, ranging roughly from 0.038 to 0.266.]
|
2306.05783#31
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Language Processing (NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises 249,587 multiple-choice questions across 516
diverse disciplines spanning 13 different subjects, accompanied by
Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k questions. We
conduct an evaluation of 47 cutting-edge LLMs on Xiezhi. Results indicate that
LLMs exceed the average performance of humans in science, engineering,
agronomy, medicine, and art, but fall short in economics, jurisprudence,
pedagogy, literature, history, and management. We anticipate Xiezhi will help
analyze important strengths and shortcomings of LLMs, and the benchmark is
released at https://github.com/MikeGu721/XiezhiBenchmark.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 31 |
3.2 Not Tune LLM; Infer w/o CRM (Quadrant 3)

With the emergence of large foundation models, especially ChatGPT, researchers intend to analyze the zero-shot or few-shot performance of LLM in recommendation domains, where LLM is frozen and CRM is not involved. Sileo et al. [2022] apply zero-shot learning on GPT-2 by inferring the next item according to the user's behavior history, which merely defeats the random baseline. Other works [Wang and Lim, 2023; Liu et al., 2023a; Sun et al., 2023; Dai et al., 2023; Li et al., 2023g] investigate the zero-shot and few-shot recommendation setting based on the ChatGPT API, with delicate prompt engineering to instruct the LLM to perform tasks like rating prediction, pairwise comparison, and listwise ranking. Chat-REC [Gao et al., 2023] instructs ChatGPT to not
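To make the zero-shot setting concrete, a listwise-ranking prompt might be assembled as in the sketch below; the wording is ours, and call_llm is a hypothetical stand-in for whatever chat-completion client is used, not an API from the surveyed papers:

```python
# Sketch: zero-shot listwise ranking prompt for an LLM recommender.
# `call_llm` is a hypothetical placeholder for a chat-completion API call.

def build_listwise_prompt(history, candidates):
    lines = ["You are a recommender system.",
             "The user has interacted with these items, most recent last:"]
    lines += [f"  {i + 1}. {item}" for i, item in enumerate(history)]
    lines.append("Rank the following candidate items from most to least relevant,")
    lines.append("answering with the candidate letters only:")
    lines += [f"  {chr(65 + i)}. {item}" for i, item in enumerate(candidates)]
    return "\n".join(lines)

prompt = build_listwise_prompt(
    history=["The Matrix", "Blade Runner", "Ghost in the Shell"],
    candidates=["Akira", "Titanic", "Ex Machina"],
)
print(prompt)
# ranking = call_llm(prompt)  # hypothetical LLM call; parse letters into an item order
```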
|
2306.05817#31
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles in matching users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language models (LLMs) have shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 31 |
Two key ethical concerns in the use of crowdwork for generative AI systems are: crowdworkers are frequently subject to working conditions that are taxing and debilitative to both physical and mental health, and there is a widespread deficit in documenting the role crowdworkers play in AI development. This contributes to a lack of transparency and explainability in resulting model outputs. Manual review is necessary to limit the harmful outputs of AI systems, including generative AI systems. A common harmful practice is to intentionally employ crowdworkers with few labor protections, often taking advantage of highly vulnerable workers, such as refugees [119, p. 18], incarcerated people [54], or individuals experiencing immense economic hardship [98, 181]. This precarity allows a myriad of harmful practices, such as companies underpaying or even refusing to pay workers for completed work (see Gray and Suri [93, p. 90] and Berg et al. [29, p. 74]), with no avenues for worker recourse. Finally, critical aspects of crowdwork are often left poorly documented, or entirely undocumented [88].
|
2306.05949#31
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 31 |
In-context Learning with LLMs. We also experiment with two popular LLMs, GPT-3.5-turbo and GPT-4 [25], through in-context learning. We use the same multiple-choice formulation as MINDACT, and include three demonstration examples for in-context learning. We can see that both models are comparable to the two baselines with only three in-context examples. Note that this is not a fair comparison with the Flan-T5 models, which are fine-tuned on the full training data. We also include the zero-shot results with Flan-T5XL in Appendix D.2, but the model fails to perform the task without fine-tuning. Meanwhile, GPT-3.5 only has around 20% element selection accuracy, despite the superior performance people have observed on other datasets. Further analysis reveals that one possible problem is the model's propensity to select the None option, asserting that the task cannot be finished on the current webpage. This is somewhat accurate since tasks typically necessitate navigation through multiple webpages and performing a series of actions before reaching the final result. This aspect indeed represents the primary difficulty of our task. On the other hand, we observe highly promising outcomes with GPT-4. The performance
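A rough sketch of the multiple-choice formulation (our paraphrase; the element rendering, option wording, and operation set shown are assumptions):

```python
# Sketch: multiple-choice prompt over candidate DOM elements, in the spirit of
# MINDACT. Field names and wording are illustrative assumptions.

def build_choice_prompt(task, page_snippet, candidates):
    options = [f"{chr(65 + i)}. {c}" for i, c in enumerate(candidates)]
    options.append(f"{chr(65 + len(candidates))}. None of the above")
    return (f"Task: {task}\n"
            f"Webpage (filtered HTML):\n{page_snippet}\n"
            "Which element should be acted on next? Choose one option,\n"
            "then give the operation (CLICK / TYPE / SELECT):\n"
            + "\n".join(options))

print(build_choice_prompt(
    task="Book a one-way flight from CMH to SFO",
    page_snippet="<button id=7>Search flights</button> <input id=3 name=origin>",
    candidates=["<input id=3 name=origin>", "<button id=7>Search flights</button>"],
))
```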
|
2306.06070#31
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites is often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still substantial room for improvement towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06264
| 31 |
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877â1901.
Boxi Cao, Hongyu Lin, Xianpei Han, Le Sun, Lingyong Yan, Meng Liao, Tong Xue, and Jin Xu. 2021. Knowledgeable or educated guess? revisiting language models as knowledge bases. arXiv preprint arXiv:2106.09231.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.
|
2306.06264#31
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework for estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
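The measurement idea in this abstract reduces to comparing a model's prediction distribution before and after the target fact is instilled. The sketch below is a minimal illustration under assumed inputs: the probability vectors are invented placeholders for an LLM's distribution over candidate answers, not the output of any specific model.

    # Minimal sketch of entropy/KL-based knowledge measurement.
    import numpy as np
    from scipy.stats import entropy

    def knowledge_shift(p_before, p_after):
        # Shannon entropy before/after instilling the fact, plus the
        # KL-divergence KL(p_before || p_after) between the two distributions.
        p_before = np.asarray(p_before, dtype=float)
        p_after = np.asarray(p_after, dtype=float)
        return {
            "entropy_before": entropy(p_before),
            "entropy_after": entropy(p_after),
            "kl_divergence": entropy(p_before, p_after),
        }

    # Invented example: instilling the fact sharpens the distribution only
    # slightly, suggesting the model already "knew" it.
    print(knowledge_shift([0.70, 0.20, 0.10], [0.85, 0.10, 0.05]))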
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.06283
| 31 |
Encouraged by these findings, they prompted an LLM with 30 parent molecules and their performance scores (Tanimoto similarity to vitamin C), with the task of coming up with n new molecules that the LLM "believes" will improve the score. A preliminary visual inspection suggests that the LLM might produce chemically reasonable modifications. Future work will need to systematically investigate potential improvements compared to conventional GAs.
The importance of the McGill team's results is that they indicate that these LLMs (when suitably conditioned) might not only reproduce known structures but also generate new structures that make chemical sense [32, 59].
A current limitation of this approach is that most LLMs still struggle to output valid SMILES without explicit fine-tuning [33]. We anticipate that this problem might be mitigated by building foundation models for chemistry (with more suitable tokenization [60, 61]), as, for instance, the ChemNLP project of OpenBioML.org attempts to do (https://github.com/OpenBioML/chemnlp). In addition, the context length limits the number of parent molecules that can be provided as examples.
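A minimal sketch of the fitness function used in this loop is shown below, assuming RDKit is available; the fingerprint type (Morgan, radius 2) is a common default rather than the team's documented choice, and the validity guard reflects the invalid-SMILES problem noted above.

    # Tanimoto-similarity-to-vitamin-C fitness with a validity guard for
    # LLM-generated SMILES (illustrative sketch).
    from rdkit import Chem, DataStructs
    from rdkit.Chem import AllChem

    VITAMIN_C = Chem.MolFromSmiles("OC[C@H](O)[C@H]1OC(=O)C(O)=C1O")  # ascorbic acid

    def score(smiles: str) -> float:
        # Return Tanimoto similarity to vitamin C, or 0.0 for invalid SMILES.
        mol = Chem.MolFromSmiles(smiles)
        if mol is None:  # LLMs often emit syntactically invalid SMILES
            return 0.0
        fp_ref = AllChem.GetMorganFingerprintAsBitVect(VITAMIN_C, 2, nBits=2048)
        fp_new = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
        return DataStructs.TanimotoSimilarity(fp_ref, fp_new)

    print(score("CC(=O)O"))       # acetic acid: low similarity
    print(score("not-a-smiles"))  # invalid string: scored 0.0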
|
2306.06283#31
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 31 |
# 4.3 Results on WikiHow
REMEMBERER is applied to WikiHow with 2-shot in-context learning. The experience memory is initialized with two annotated experiences of the decision step. The agent is trained for 3 epochs on a training set containing 10 different tasks selected from WikiHow, excluding the test tasks. As in the WebShop experiments, tasks that succeed in the first epoch are excluded from training in the following two epochs. Since most tasks are observed to require fewer than 5 interaction steps, any trajectory exceeding 15 steps is regarded as failed. The main results are shown in Table 2. The exemplars of the "LLM only" baseline are the initial experiences of REMEMBERER. The proposed REMEMBERER surpasses the baseline as well as the original result in Zhang et al. [2023]. In addition, 10 tasks are annotated to form an annotated experience memory. The REMEMBERER agent with this annotated experience memory is evaluated without further training; the result is denoted as "RMMBR. (A)" in the table. This result demonstrates that REMEMBERER is capable of
Table 7: Comparison of the average reward estimation of the full model and the ablation model without bootstrapping policy. The error is the absolute difference between the average reward estimation from the experience memory and the real training reward.
|
2306.07929#31
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from past episodes even for different task goals, and thus surpasses
an LLM-based agent with fixed exemplars or one equipped with only a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 32 |
# 4.3 Win rates under different judges
We plot the average win rate of models under different judges on MT-bench and Chatbot Arena in Figure 3 and Figure 4, respectively. The win rate curves from LLM judges closely match the curves from humans. On the MT-bench second turn, proprietary models like Claude and GPT-3.5 are preferred by humans more often than on the first turn, meaning that a multi-turn benchmark can better differentiate some advanced abilities of models. We also list the per-category win rate of
# Table 8: Evaluation results of several model variants.
|
2306.05685#32
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 32 |
4.3.2 Exercise-Specific Results. We briefly looked into how specific exercises interplay with the performance of GPT-3.5. Table 5 summarizes the results of this supplementary analysis.
As shown in the table, there are exercise-specific differences in the extent to which the responses address the issues; there is no obvious pattern, however. In the worst-case scenario, the responses address all of the issues in only one response out of the ten that we sampled; in the best case, all issues are addressed in ten of ten responses. Even in the latter case, however, four of the ten responses featured false positives. To illustrate, here is some student code.
for (var i = list.length; i >= 0; i--) {
  var value = list[i];
  print('$value');
}
|
2306.05715#32
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 32 |
[15] Gyeongnyeon Kim, Wooseok Jang, Gyuseong Lee, Susung Hong, Junyoung Seo, and Seungryong Kim. DAG: Depth-aware guidance with denoising diffusion probabilistic models. arXiv preprint arXiv:2212.08861, 2022.
[16] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[17] Min Seok Lee, WooSeok Shin, and Sung Won Han. TRACER: Extreme attention guided salient object tracing network (student abstract). In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 12993–12994, 2022.
[18] Kenneth Li, Aspen K. Hopkins, David Bau, Fernanda Viégas, Hanspeter Pfister, and Martin Wattenberg. Emergent world representations: Exploring a sequence model trained on a synthetic task. arXiv preprint arXiv:2210.13382, 2022.
[19] Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H. Miller, and Sebastian Riedel. Language models as knowledge bases? arXiv preprint arXiv:1909.01066, 2019.
|
2306.05720#32
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process, well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
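The linear-probe methodology in this abstract reduces to a few lines: cache intermediate activations, flatten them per spatial position, and fit a linear classifier against depth or saliency labels. The sketch below uses synthetic arrays as stand-ins for cached LDM activations and labels; the shapes and feature dimension are illustrative assumptions.

    # Minimal linear-probe sketch over (assumed) cached LDM activations.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    activations = rng.normal(size=(5000, 320))           # (positions, feature_dim), stand-in
    salient_mask = (rng.random(5000) > 0.5).astype(int)  # binary salient/background labels

    probe = LogisticRegression(max_iter=1000)  # strictly linear decoder
    probe.fit(activations, salient_mask)
    print("probe accuracy:", probe.score(activations, salient_mask))

Because the probe is linear, above-chance accuracy on real activations is evidence that the property is linearly encoded, which is exactly the claim the paper tests.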
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05783
| 32 |
[Table fragment: per-discipline accuracy scores under 0-shot and 3-shot settings; row labels were lost in extraction.]
|
2306.05783#32
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Language Processing (NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines spanning 13 different subjects, with 249,587 questions in total,
and is accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We evaluate 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed the average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released at https://github.com/MikeGu721/XiezhiBenchmark.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 32 |
only serve as the score/ranking function, but also take control over the recommendation pipeline, e.g., deciding when to call an independent pre-ranking model API. As illustrated in Figure 3, although a larger model size might bring performance improvement, the zero-shot or few-shot learning of LLM is still much inferior compared with the light-weight CRM tuned on the training data, indicating the importance of in-domain collaborative knowledge from recommender systems.
# 3.3 Not Tune LLM; Infer with CRM (Quadrant 2)
Research works in quadrant 2 utilize different key capabilities (e.g., rich semantic information, reasoning ability) of LLM without tuning to assist CRM in better completing recommendation tasks.
|
2306.05817#32
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 32 |
What to Evaluate Researchers and developers close to the system development should check that crowdworking is conducted under basic ethical standards, such as the 18 Criteria for Fairer Microwork proposed by Berg et al. [29, p. 105] in Digital Labour Platforms and the Future of Work or the Oxford Internet Institute's Fairwork Principles [75]. Concurrently, researchers and developers should document the role of crowdwork in all dataset development undertaken during generative AI system development, e.g. using frameworks like CrowdWorkSheets [70] and sections 3.3 and 3.4 in Datasheets for Datasets [86]. Basic details such as crowdworkers' demographics, the instructions given to them, or how they were assessed and compensated, are foundational for interpreting the output of AI systems shaped by this labor [147]. All aspects of data labor should be transparently reported (as done by Glaese et al. [89], for example), both as a tool for understanding model output and as a means to audit unethical labor practices.
|
2306.05949#32
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 32 |
final result. This aspect indeed represents the primary difficulty of our task. On the other hand, we observe highly promising outcomes with GPT-4. Its performance is on par with the tuned Flan-T5 models under the Cross-Website and Cross-Domain settings for element selection, indicating great potential for developing generalist agents using LLMs. Nevertheless, GPT-4's high operational cost remains a concern. Developing smaller models specialized for the web is an interesting future avenue.
|
2306.06070#32
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites is often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06264
| 32 |
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416.
Roi Cohen, Mor Geva, and Amir Globerson. 2023. Crawling the internal knowledge-base of language models. arXiv preprint arXiv:2301.12810.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186.
Hady Elsahar, Pavlos Vougiouklis, Arslen Remaci, Christophe Gravier, Jonathon Hare, Frederique Laforest, and Elena Simperl. 2018. T-REx: A large scale alignment of natural language with knowledge base triples. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).
|
2306.06264#32
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework for estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over 35% in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.06283
| 32 |
Overall, we see that the flexibility of the natural language input and the in-context learning abilities allow using LLMs in very different ways: to very efficiently build predictive models, or to approach molecular and material design in entirely unprecedented ways, for example by providing context (such as "fuzzy" design rules) or simply prompting the LLM to come up with new structures. However, we also find that some "old" ideas, such as Δ-ML and data augmentation, can also be applied in this new paradigm.
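As a concrete reading of the Δ-ML remark, the idea is to learn only the correction between a cheap baseline prediction and an expensive reference value. The sketch below is an illustration on synthetic data; the descriptors, baseline, and reference are all invented.

    # Δ-ML sketch: model the residual y_ref - y_cheap instead of y_ref directly.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(42)
    X = rng.normal(size=(200, 8))          # toy molecular descriptors
    y_cheap = X @ rng.normal(size=8)       # cheap method (e.g., a force field)
    y_ref = y_cheap + np.sin(X[:, 0])      # expensive reference (e.g., DFT)

    delta_model = RandomForestRegressor(random_state=0).fit(X, y_ref - y_cheap)
    y_pred = y_cheap + delta_model.predict(X)  # baseline plus learned correction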
# B. Automation and novel interfaces
|
2306.06283#32
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 32 |
Task Set   Setting         Avg Reward Estimation   Avg Training Reward   Abs Error
WebShop    Full Model      0.86                    0.84                  0.02
WebShop    w/o bootstrp.   0.62                    0.84                  0.22
WikiHow    Full Model      2.48                    2.60                  0.12
WikiHow    w/o bootstrp.   1.98                    2.70                  0.72
# Table 8: Results of ablation study
Task Set   Setting              Avg Reward/Score   Success Rate
WebShop    Full Model           0.66               0.37
WebShop    w/o bootstrp.        0.67               0.36
WebShop    w/o random           0.65               0.37
WikiHow    Full Model           2.63               0.93
WikiHow    w/o bootstrp.        2.54               0.89
WikiHow    w/o random           2.64               0.90
WikiHow    w/o discouraged      2.48               0.81
WikiHow    w/o task sim. fg     2.63               0.94
WikiHow    w/o obsrv. sim. fo   2.47               0.87
exploiting expert experiences, which can be regarded as analogous to conventional imitation learning. Nevertheless, the annotated experiences may not offset the exact shortcomings of the particular LLM. In contrast, RL training has the opportunity to collect more specific experiences and achieve more promising performance.
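A minimal sketch of the experience-memory update behind these ablations is shown below. It reduces RLEM to a tabular Q-learning update keyed by (observation, action) and replaces the task/observation similarity functions (fg and fo in Table 8) with exact-match lookup; all of this is a simplification for illustration, not the released implementation.

    # RLEM-style experience memory reduced to a tabular Q-update.
    from collections import defaultdict

    class ExperienceMemory:
        def __init__(self, alpha: float = 0.1, gamma: float = 0.9):
            self.q = defaultdict(float)  # (obs, action) -> value estimate
            self.alpha, self.gamma = alpha, gamma

        def update(self, obs, action, reward, next_obs, next_actions):
            # One Q-learning step; successes and failures both update the
            # memory, while the LLM's own parameters stay frozen.
            best_next = max((self.q[(next_obs, a)] for a in next_actions), default=0.0)
            key = (obs, action)
            self.q[key] += self.alpha * (reward + self.gamma * best_next - self.q[key])

        def best_exemplars(self, k: int = 2):
            # Highest-valued experiences, to be formatted into the LLM prompt.
            return sorted(self.q.items(), key=lambda kv: kv[1], reverse=True)[:k]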
|
2306.07929#32
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from past episodes even for different task goals, and thus surpasses
an LLM-based agent with fixed exemplars or one equipped with only a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 33 |
# Table 8: Evaluation results of several model variants.
Model                  #Training Tokens   MMLU (5-shot)   TruthfulQA (0-shot)   MT-Bench Score (GPT-4)
LLaMA-7B               1T                 35.2            0.22                  2.74
LLaMA-13B              1T                 47.0            0.26                  2.61
Alpaca-7B              4.4M               40.1            0.26                  4.54
Alpaca-13B             4.4M               48.1            0.30                  4.53
Vicuna-7B (selected)   4.8M               37.3            0.32                  5.95
Vicuna-7B (single)     184M               44.1            0.30                  6.04
Vicuna-7B (all)        370M               47.1            0.32                  6.00
Vicuna-13B (all)       370M               52.1            0.35                  6.39
GPT-3.5                -                  70.0            -                     7.94
GPT-4                  -                  86.4            -                     8.99
|
2306.05685#33
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 33 |
be list.length - 1, which GPT-3.5's response correctly identified and mentioned. However, the response also suggested an "imaginary" issue: "Also, you have an extra closing curly brace at the end of the code block. Remove that to avoid a syntax error."
Exploring the Responses of Large Language Models to Beginner Programmersâ Help Requests
ICER â23 V1, August 7â11, 2023, Chicago, IL, USA
Table 2: Comparison of responses by GPT-3.5 and Codex. En = English prompts; Fi = Finnish prompts.
Aspect                                               GPT-3.5 (En)   GPT-3.5 (Fi)   Codex (En)   Codex (Fi)
Identifies and mentions at least one actual issue.   90%            90%            70%          33%
Identifies and mentions all actual issues.           57%            53%            13%          17%
Identifies non-existent issues.                      40%            23%            40%          43%
Includes duplicate or superfluous content.           0.0%           0.0%           60%          50%
Includes code.                                       100%           90%            33%          67%
Includes a model solution.                           67%            70%            13%          40%
Includes some automated tests.                       0.0%           0.0%           6.7%         10%
|
2306.05715#33
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 33 |
[20] Ben Poole, Ajay Jain, Jonathan T Barron, and Ben Mildenhall. Dreamfusion: Text-to-3d using 2d diffusion. arXiv preprint arXiv:2209.14988, 2022.
[21] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR, 2021.
[22] René Ranftl, Alexey Bochkovskiy, and Vladlen Koltun. Vision transformers for dense prediction. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12179–12188, 2021.
[23] René Ranftl, Katrin Lasinger, David Hafner, Konrad Schindler, and Vladlen Koltun. Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(3), 2022.
|
2306.05720#33
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process, well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05783
| 33 |
[Table fragment: per-discipline accuracy scores under CEval 0-shot and 1-shot settings; row labels were lost in extraction.]
|
2306.05783#33
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Language Processing (NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines spanning 13 different subjects, with 249,587 questions in total,
and is accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We evaluate 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed the average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released at https://github.com/MikeGu721/XiezhiBenchmark.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 33 |
Early works [Ding et al., 2021; Hou et al., 2022; Hou et al., 2023a] propose to extract transferable text embeddings from a fixed BERT model with rich semantic information. The text embeddings are then fed into several projection layers to better produce cross-domain representations for trainable conventional recommendation models. The projection layers are designed as a single-layer neural network for ZESRec [Ding et al., 2021], a self-attention layer for TransRec [Wang et al., 2022], an MoE-enhanced network for UniSRec [Hou et al., 2022], and a vector quantization based embedding lookup table for VQ-Rec [Hou et al., 2023a]. We can observe from Figure 3 that the direct usage of a single-layer neural network as an adapter does not yield satisfactory results. However, with a carefully designed adapter module, the semantic representations from the fixed BERT parameters can be better aligned with the subsequent recommendation module, leading to impressive recommendation performance.
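A minimal sketch of such an adapter is shown below; the dimensions and the two-layer design are illustrative assumptions rather than any of the cited papers' released architectures.

    # Adapter that maps frozen BERT text embeddings into a recommendation
    # model's input space (illustrative sketch).
    import torch
    import torch.nn as nn

    class TextEmbeddingAdapter(nn.Module):
        def __init__(self, text_dim: int = 768, rec_dim: int = 64):
            super().__init__()
            # A carefully designed projection; the cited works range from a
            # single linear layer to self-attention and MoE-enhanced variants.
            self.proj = nn.Sequential(
                nn.Linear(text_dim, 256), nn.ReLU(), nn.Linear(256, rec_dim)
            )

        def forward(self, text_emb: torch.Tensor) -> torch.Tensor:
            return self.proj(text_emb)

    # Frozen-encoder output for a batch of item texts -> rec-model input.
    item_text_emb = torch.randn(32, 768)               # stand-in for BERT [CLS] vectors
    rec_input = TextEmbeddingAdapter()(item_text_emb)  # shape: (32, 64)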
|
2306.05817#33
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 33 |
External evaluators can use evaluation metrics designed specifically around crowdwork, such as those proposed by Fair Work [75], to evaluate the quality of working conditions. Relevant labor law interventions by jurisdiction may also apply. Since many critical crowdworking jobs, and the evaluation of this work, involve long-term exposure to traumatic content [199], such as child sexual abuse material or graphic depictions of violence [181], it may also be necessary to consider professional support for mental health and practices to limit the degree of exposure in any one work day.
Limitations The lack of regulation and rules around crowdworker protection for AI contributes to minimal to no documentation or transparency. The lack of information makes crowdwork difficult to evaluate. Incentives to conduct crowdwork at a low cost with little transparency contribute to less literature on evaluating crowdwork. Outsourcing labor also creates barriers to evaluation by further complicating reporting structures, communication, and working conditions.
# Impacts: People and Society
|
2306.05949#33
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 33 |
# 5 Related Work
Autonomous Agents for Web and Mobile Applications. Considerable initiatives have been invested in automating web navigation, driven by a vision of facilitating effortless human-web interaction. Yet, previous research has been limited by the types of tasks and websites it can handle, either confined to simplified simulation environments [22, 31, 40], or limited to a narrow set of domains and websites [39, 40]. Recent studies [5, 21, 35] have utilized similar techniques for mobile applications; however, these are often simpler and offer fewer functions compared with full-fledged websites. In contrast, MIND2WEB aims to adapt to a realistic web environment, characterized by its high diversity.
Also related is the research on web automation systems [1, 19]. These technologies often demand programming skills, which can make them less accessible to general users. We aim to equip the web automation system with a natural language interface, thereby reducing the entry barrier significantly.
|
2306.06070#33
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites is often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06264
| 33 |
Ben Goodrich, Vinay Rao, Peter J Liu, and Mohammad Saleh. 2019. Assessing the factual accuracy of generated text. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 166–175.
Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. 2023. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1–38.
Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, et al. 2022. Language models (mostly) know what they know. arXiv preprint arXiv:2207.05221.
Stephanie Lin, Jacob Hilton, and Owain Evans. 2022. Teaching models to express their uncertainty in words. arXiv preprint arXiv:2205.14334.
|
2306.06264#33
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.06283
| 33 |
# B. Automation and novel interfaces
Yao et al. [62] and Schick et al. [25] have shown that LLMs can be used as agents that autonomously make use of external tools such as Web APIs, a paradigm that some call MRKL (pronounced "miracle") systems: modular reasoning, knowledge, and language systems [26]. By giving LLMs access to tools and forcing them to think step-by-step [63], we can convert LLMs from hyperconfident models that often hallucinate into systems that can reason based on observations made by querying robust tools. As the technical report for GPT-4 highlighted [64], giving LLMs access to tools can lead to emergent behavior, i.e., enabling the system to do things that none of its parts could do before. In addition, this approach can make external tools more accessible, since users no longer have to learn tool-specific APIs. It can also make tools more interoperable, by using natural language instead of "glue code" to connect tools.
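To make the observation-driven loop concrete, the sketch below shows one minimal way such a tool-augmented agent can be wired up. It is an illustration of the paradigm only: the `TOOLS` registry, the `ANSWER:`/`TOOL:` reply convention, and the `llm` callable are assumptions of this sketch, not the interface of any of the cited systems.

```python
from typing import Callable, Dict

# Toy tool registry; real systems would register search, retrosynthesis, etc.
TOOLS: Dict[str, Callable[[str], str]] = {
    "web_search": lambda q: f"(top search snippets for {q!r})",   # stub
    "calculator": lambda e: str(eval(e, {"__builtins__": {}})),   # toy only
}

def run_tool_agent(question: str, llm: Callable[[str], str],
                   max_steps: int = 5) -> str:
    """Drive an LLM that replies 'ANSWER: <text>' or 'TOOL: <name> | <input>'."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        reply = llm(
            transcript
            + "Think step by step. Reply 'ANSWER: <text>' or "
              "'TOOL: <name> | <input>' using one of: " + ", ".join(TOOLS)
        )
        if reply.startswith("ANSWER:"):
            return reply[len("ANSWER:"):].strip()
        if reply.startswith("TOOL:"):
            name, _, tool_input = reply[len("TOOL:"):].partition("|")
            tool = TOOLS.get(name.strip())
            observation = tool(tool_input.strip()) if tool else "unknown tool"
            # Feed the observation back so the next step can reason over it.
            transcript += f"{reply}\nObservation: {observation}\n"
    return "(no answer within the step budget)"
```

The key design point is that the model never acts directly: every tool call is mediated by the loop, and the returned observation grounds the next reasoning step.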
|
2306.06283#33
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 33 |
The experiments with different initial experience combinations or a different training set are conducted on WikiHow as well, and the results are shown in Table 6. The proposed REMEMBERER achieves a consistent improvement compared to the baseline with fixed exemplars, which proves the effectiveness and robustness of REMEMBERER.
# 4.4 Ablation study
Several ablation studies are conducted to verify the design of REMEMBERER framework.
Ablation on n-step bootstrapping policy Ablation studies are conducted to verify the necessity of the n-step bootstrapping policy used to update the Q value estimations in the experience memory. As stated in Subsection 3.2, updating without bootstrapping may lead to inaccurate value estimations owing to the few training steps available to explore and exploit. To verify this, an average reward estimation is calculated by averaging the sum of the maximum Q value and the history reward stored for each observation in the experience memory:
$$\hat{R} = \frac{1}{M} \sum_{i=1}^{M} \Big( R^{h}_{i} + \max_{a} Q(g_i, o_i, a) \Big)$$
|
2306.07929#33
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which outperforms
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 34 |
representative models in Table 7 to show how MT-bench differentiates models, in which we see GPT-4 is significantly better than the others. Vicuna-13B is noticeably worse than GPT-3.5/4 in the reasoning, math, and coding categories. Note that in the math/coding categories, GPT-3.5 and GPT-4 have similar overall win rates because they both failed to answer some hard questions, but GPT-4 is still significantly better than GPT-3.5 in direct pairwise comparison or single-answer grading. Please see a performance breakdown of MT-bench scores for each category in Appendix D.4.
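For reference, a category-wise win rate like those in Table 7 can be derived from pairwise, non-tie judgments as sketched below; the tuple layout is an assumption for illustration, not the paper's exact data format.

```python
from collections import defaultdict

def category_win_rates(judgments, model):
    """judgments: iterable of (category, winner, loser) tuples, ties excluded."""
    wins, totals = defaultdict(int), defaultdict(int)
    for category, winner, loser in judgments:
        if model in (winner, loser):       # count only this model's battles
            totals[category] += 1
            wins[category] += int(winner == model)
    return {cat: wins[cat] / totals[cat] for cat in totals}

votes = [("math", "gpt-4", "vicuna-13b"), ("math", "gpt-3.5", "gpt-4"),
         ("coding", "gpt-4", "gpt-3.5")]
print(category_win_rates(votes, "gpt-4"))  # {'math': 0.5, 'coding': 1.0}
```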
# 5 Human Preference Benchmark and Standardized Benchmark
Human preference benchmarks such as MT-bench and Chatbot Arena serve as valuable additions to the current standardized LLM benchmarks. They focus on different aspects of a model, and we recommend evaluating models comprehensively with both kinds of benchmarks.
|
2306.05685#34
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 34 |
# Table 3: The performance of GPT-3.5 on 150 help requests, prompted in English.
| Aspect | Proportion |
| --- | --- |
| Identifies and mentions at least one actual issue. | 82% |
| Identifies and mentions all actual issues. | 55% |
| Identifies non-existent issues. | 48% |
| Includes duplicate or superfluous content. | 0.0% |
| Includes code. | 99% |
| Includes a model solution. | 65% |
| Includes some automated tests. | 0.0% |
highlighted in Table 3, almost all of the responses included code, and most effectively provided model solutions. Second, in 17 of the responses (over 10%), the LLM suggested adding functionality that had not been covered in the course and was not in the course plan; these suggestions included error handling, null safety features of Dart, and specific library functions for list processing. Third, and again related to the model solutions, for all six help requests where we classified the student's code as very incomplete (i.e., far from the actual solution), the response was pedagogically unsuitable in that it did not focus on what would be relevant to the student at such a stage. The following scenario outlines one instance of this.
# 4.4 Further Insights: Thematic Analysis of
|
2306.05715#34
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 34 |
[24] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684–10695, 2022.
[25] Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. Laion-400m: Open dataset of clip-filtered 400 million image-text pairs. arXiv preprint arXiv:2111.02114, 2021.
[26] Ian Tenney, Dipanjan Das, and Ellie Pavlick. Bert rediscovers the classical nlp pipeline. arXiv preprint arXiv:1905.05950, 2019.
[27] Mycal Tucker, Peng Qian, and Roger Levy. What if this modified that? syntactic interventions via counterfactual embeddings. arXiv preprint arXiv:2105.14002, 2021.
|
2306.05720#34
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process$-$well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05783
| 34 |
(Table fragment: per-model accuracy scores under the Xiezhi 0/3-shot and M3KE 0-shot settings; the row and column labels did not survive extraction.)
|
2306.05783#34
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Language Processing (NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released at https://github.com/MikeGu721/XiezhiBenchmark.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 34 |
As discussed in Section 2.5, the emergent abilities and abundant open-world knowledge enable large foundation models to extend their roles to the feature engineering stage. MINT [Mysore et al., 2023] synthesizes training query examples with InstructGPT for narrative-driven recommendations. KAR [Xi et al., 2023b] extracts both reasoning and factual knowledge from LLMs to enhance the performance of arbitrary downstream recommendation models. AnyPredict [Wang et al., 2023] leverages ChatGPT APIs to consolidate tabular samples, overcoming the barrier across tables with varying schemas and yielding unified, expanded training data for follow-up conventional predictive models. GENRE [Liu et al., 2023c] utilizes ChatGPT to perform news piece generation, user profiling, and news summarization, and thus augments the news recommendation model with LLM-generated features.
In these works, although the LLM is frozen, involving a CRM in the inference phase generally yields better recommendation performance than works from quadrant 3 in Section 3.2, in terms of the best baseline they defeat.
# 3.4 Tune LLM; Infer w/o CRM (Quadrant 4)
|
2306.05817#34
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 34 |
# Impacts: People and Society
Evaluating the effect AI has on people and societies, and evaluating people and groups themselves, encounters challenges similar to those arising in sampling [20], surveying [126], determining preferences [270], and working with human subjects [131, 12], in addition to challenges that stem from the planetary scale at which AI development seeks to be applied, and therefore comes to engage with national and global social systems, e.g., economies and cultures. Taxonomies of risks and harms of generative AI systems [80], including their impacts on human rights [111, 186], strongly overlap with what should be evaluated. However, most societal impact taxonomies lack evaluations or examples of evaluating society. We must understand the reason for our evaluation; often we are seeking proof, in the form of evaluations, that is necessary for further action against harmful impacts.
|
2306.05949#34
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 34 |
Large Language Models. In recent years, there has been a surge in the development and application of large language models (LLMs). These models, often encompassing billions of parameters, are pre-trained on massive corpora of text data [3, 44, 45], enabling them to capture intricate linguistic patterns, nuances, and relationships, resulting in unprecedented performance on a wide array of NLP tasks. One of the most noteworthy attributes of LLMs is their few-shot learning capability. Unlike traditional machine learning models that necessitate extensive labeled data for task-specific fine-tuning, LLMs can often perform tasks with minimal task-specific examples. Furthermore, LLMs such as GPT-3 [4] and PaLM [9] have also demonstrated the ability to do in-context learning, where they can adapt to novel tasks by simply providing context within the input prompt, eliminating the need for explicit retraining. In this work, we explore the use of LLMs to build a generalist agent on top of MIND2WEB by either tuning medium-sized LMs with only around 1,000 examples or prompting an LLM such as GPT-4, and have observed promising results.
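As a concrete illustration of the in-context learning setup mentioned above, the sketch below assembles a few-shot prompt from labeled demonstrations; the field names and formatting are assumptions for illustration, not the exact prompt format used with MIND2WEB.

```python
from typing import Dict, List

def build_icl_prompt(instruction: str,
                     demos: List[Dict[str, str]],
                     test_input: str) -> str:
    """Prepend k demonstrations so a frozen LLM adapts without weight updates."""
    parts = [instruction]
    for demo in demos:  # each demo: {"input": ..., "output": ...}
        parts.append(f"Input: {demo['input']}\nOutput: {demo['output']}")
    parts.append(f"Input: {test_input}\nOutput:")  # the model completes this
    return "\n\n".join(parts)
```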
|
2306.06070#34
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites is often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06264
| 34 |
Stephanie Lin, Jacob Hilton, and Owain Evans. 2022. Teaching models to express their uncertainty in words. arXiv preprint arXiv:2205.14334.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
OpenAI. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744.
|
2306.06264#34
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.06283
| 34 |
This paradigm has recently been used by Bran et al. [65] to create digital assistants that can call and combine various tools such as Google search and the IBM RXN retrosynthesis tool when prompted with natural language. Boiko et al. [66] used a similar approach and gave LLMs access to laboratories via cloud lab APIs. In their system, the LLM could use external tools to plan a synthesis, which it could execute using the cloud lab.
# a. MAPI-LLM
Electronic structure calculations have reached such a high level of accuracy that one can answer questions like "Is the material AnByCz stable?" Indeed, the Materials Project [67] stores thermodynamic data on many compounds, from which one can obtain a reasonable estimate of the stability of a given material. Or, if the material is not in the database, one can run a simulation instead. Similarly, to answer prompts such
as "Give me a reaction to produce CaCO3", there is a lot of helpful information in the Materials Project database and on the internet that can help to come up with an answer.
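The logic behind such a stability query can be sketched as follows. This is a hedged illustration only: `fetch_energy_above_hull` is a hypothetical stand-in for a Materials Project lookup (or a simulation fallback when the formula is absent), and the "energy above the convex hull ≤ 0 eV/atom" test is the usual thermodynamic convention, not MAPI-LLM's exact implementation.

```python
from typing import Optional

def fetch_energy_above_hull(formula: str) -> Optional[float]:
    """Hypothetical data access; swap in a real Materials Project client."""
    toy_database = {"CaCO3": 0.0, "TiO2": 0.0}  # eV/atom, illustrative values
    return toy_database.get(formula)

def is_thermodynamically_stable(formula: str) -> Optional[bool]:
    e_hull = fetch_energy_above_hull(formula)
    if e_hull is None:
        return None  # not in the database: fall back to a simulation
    return e_hull <= 0.0  # on the convex hull => thermodynamically stable

print(is_thermodynamically_stable("CaCO3"))  # True for the toy entry
```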
|
2306.06283#34
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 34 |
$$\hat{R} = \frac{1}{M} \sum_{i=1}^{M} \Big( R^{h}_{i} + \max_{a} Q(g_i, o_i, a) \Big)$$
where $R^{h}_{i}$ denotes the total reward of the steps before $(g_i, o_i)$ on the trajectory and $M$ is the size of the memory. The deduced average reward estimation $\hat{R}$ is compared to the real training reward, and an absolute error is reported in Table 7. The reward estimation from the experience memory trained without bootstrapping suffers a far greater error than that with bootstrapping. Meanwhile, the performance on the test set is shown in Table 8. Although there is no apparent disparity in final performance on the WebShop task set, a visible degradation is observed on WikiHow, which reveals the latent risk of a non-bootstrapping update.
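A minimal sketch of this check, assuming each memory entry stores a history reward and per-action Q values (the data layout here is an assumption for illustration, not REMEMBERER's actual structures):

```python
from typing import Dict, List

def average_reward_estimate(memory: List[Dict]) -> float:
    """memory entries: {'history_reward': float, 'q_values': {action: float}}.

    Averages, over the M entries, the stored history reward plus the maximum
    Q value over actions for that (goal, observation) pair, giving R_hat.
    """
    total = sum(
        entry["history_reward"] + max(entry["q_values"].values())
        for entry in memory
    )
    return total / len(memory)

# The absolute error |R_hat - real training reward| is then compared between
# memories updated with and without n-step bootstrapping (cf. Table 7).
```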
|
2306.07929#34
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 35 |
We evaluate several model variants derived from LLaMA on MMLU [19], TruthfulQA [26] (MC1), and MT-bench (GPT-4 judge). The training details are in Appendix E. Since we have shown that GPT-4 single-answer grading also performs well in Section 4.2, we use GPT-4 single-answer grading for MT-bench in favor of its scalability and simplicity. We ask GPT-4 to give a score on a scale of 10 for each turn using our prompt templates (Figure 6, Figure 10) and report the average over 160 judgments (80 questions × 2 turns). Table 8 shows the results. We find that fine-tuning on high-quality dialog datasets (i.e., ShareGPT) can consistently improve model performance on MMLU, and the improvement scales with fine-tuning data size. On the other hand, a small high-quality conversation dataset can quickly teach the model a style preferred by GPT-4 (approximately, by humans) but cannot improve MMLU significantly, as shown by Vicuna-7B (selected), which is trained with only 4.8M tokens or 3K conversations. In Table 8, no single benchmark can determine model quality, meaning
|
2306.05685#35
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05720
| 35 |
[28] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
[29] Patrick von Platen, Suraj Patil, Anton Lozhkov, Pedro Cuenca, Nathan Lambert, Kashif Rasul, Mishig Davaadorj, and Thomas Wolf. Diffusers: State-of-the-art diffusion models. https://github.com/huggingface/diffusers, 2022.
[30] Haochen Wang, Xiaodan Du, Jiahao Li, Raymond A Yeh, and Greg Shakhnarovich. Score jacobian chaining: Lifting pretrained 2d diffusion models for 3d generation. arXiv preprint arXiv:2212.00774, 2022.
[31] Kelly W Zhang and Samuel R Bowman. Language modeling teaches you more syntax than trans- lation does: Lessons learned through auxiliary task analysis. arXiv preprint arXiv:1809.10040, 2018.
|
2306.05720#35
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process$-$well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05783
| 35 |
(Table fragment: per-model accuracy scores under the Xiezhi-Specialty-Chinese and Xiezhi-Interdiscipline-Chinese 0/1/3-shot settings, using generation probability for ranking; the row and column labels did not survive extraction.)
|
2306.05783#35
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Language Processing (NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released at https://github.com/MikeGu721/XiezhiBenchmark.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05949
| 35 |
Concretely, when evaluating impact, timing will change how we view a system. What is being trained on and generated may not reflect the current world in which it is deployed [235]. Further, when we seek to evaluate society, we cannot escape the ways in which our perception of society, and society itself, has already been influenced by existing AI and social media tools. In crafting and conducting evaluations, we can often encroach on others' privacy and autonomy due to the need for highly personal information to evaluate how harms are enacted and distributed across populations. For this reason, it is necessary that any engagements with impact assessments also critically examine how consent is obtained, and what the limits of consent are, when it comes to being subject to bias evaluation and assessment. Similarly, impact assessments must also take into consideration the existing and possible future impacts of being included as a data subject. Participatory justice-led initiatives provide particularly promising avenues for such considerations and engagements. Long-term effects of systems embedded in society, such as economic or labor impact, largely require ideation of generative AI systems' possible use cases and have fewer available general evaluations.
The following categories are high-level, non-exhaustive, and present a synthesis of the findings across different modalities. They refer solely to what can be evaluated in people and society:
Trustworthiness and Autonomy
â Trust in Media and Information â Overreliance on Outputs â Personal Privacy and Sense of Self
|
2306.05949#35
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 35 |
Grounded Language Understanding. Our work also aligns with the field of grounded language understanding, which aims to map natural language utterances onto executable plans in a target environment [13]. Many studies have centered around environments underpinned by a well-structured schema or ontology, including relational databases [36, 43] and knowledge bases [12, 42], which may not adequately reflect the more heterogeneous conditions in real-world situations. Our work instead grounds natural language in the noisy and schemaless web environment. Our setting is also connected to embodied AI, where an agent, guided by language instructions, carries out tasks in a physical environment [2, 32, 33]. Nonetheless, existing research primarily focuses on a specific setting (e.g., household environments), limiting its diversity. MIND2WEB provides a unique testbed for studying a broad range of grounding challenges in real-world environments.
|
2306.06070#35
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06264
| 35 |
Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng, Yujia Xie, Yu Hu, Qiuyuan Huang, Lars Liden, Zhou Yu, Weizhu Chen, et al. 2023. Check your facts and try again: Improving large language models with external knowledge and automated feedback. arXiv preprint arXiv:2302.12813.
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Sebastian Riedel. 2019. Language models as knowledge bases? arXiv preprint arXiv:1909.01066.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683.
Kurt Shuster, Spencer Poff, Moya Chen, Douwe Kiela, and Jason Weston. 2021. Retrieval augmentation reduces hallucination in conversation. arXiv preprint arXiv:2104.07567.
|
2306.06264#35
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.06283
| 35 |
as "Give me a reaction to produce CaCO3", there is a lot of helpful information in the Materials Project database and the internet that can help to come up with an answer.
To answer these questions, state-of-the-art computational tools or existing databases can be used. However, their use often requires expert knowledge. To use existing databases, one must choose which database to use, how to query the database, and what representation of the compound is used (e.g., international chemical identifier (InChI), SMILES, etc.). Otherwise, if the data is not in a database, one must run calculations, which requires a deep understanding of technical details. LLMs can simplify the use of such tools. By typing in a question, we can prompt the LLM to translate this question into a workflow that leads to the answer.
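A minimal sketch of this idea: prompt an LLM to translate a free-form chemistry question into an explicit tool plan. The tool names and the call_llm helper are hypothetical placeholders, not part of any specific package.

import json

# Hypothetical tool inventory; in practice these would wrap database
# queries (e.g., the Materials Project) and simulation codes.
TOOLS = {
    "lookup_database": "query a property of a known compound",
    "run_calculation": "compute a property when no database entry exists",
}

PLAN_PROMPT = """You can use these tools: {tools}.
Translate the user's question into a JSON list of steps, where each
step is an object {{"tool": <name>, "input": <string>}}.
Question: {question}
JSON plan:"""

def plan_workflow(question: str, call_llm) -> list[dict]:
    """Ask the LLM for a JSON tool plan and parse it."""
    prompt = PLAN_PROMPT.format(tools=", ".join(TOOLS), question=question)
    return json.loads(call_llm(prompt))

For "Give me a reaction to produce CaCO3", such a planner might emit a single lookup step; a malformed JSON plan would simply be re-prompted.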
|
2306.06283#35
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 35 |
Ablation on the advice generation strategy As stated in Subsection 3.3, owing to the non-exhaustive exploration in the brief training stage, there may be no suitable candidates for the action advice in the exemplars. For instance, there may be no recorded actions with a poor enough Q value estimation, or none recorded as high-reward. In this case, action advice can be generated from a randomly sampled action that is not in the record, or it can be given by directly encouraging the action with the highest Q value estimation and discouraging the action with the lowest estimation, regardless of their actual values. These two strategies are compared in Table 8. As the results illustrate, the random plan appears to hold a minor advantage over the non-random plan. This is attributed to the fact that advice with improper value expectations can mislead the LLM into wrong judgments about the true value of the available actions.
Additional experiments are conducted to investigate the necessity of the discouraged actions in the output part of exemplars and the impact of the similarity function components. Owing to budget limits, these experiments are conducted only on the WikiHow task set.
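A minimal sketch of the two advice-generation strategies above, assuming an experience record mapping actions to Q-value estimates; the function name, thresholds, and back-off logic are illustrative, not taken from the paper's implementation.

import random

def make_advice(q_values: dict[str, float], action_space: list[str],
                good_thr: float = 0.5, bad_thr: float = 0.0,
                random_plan: bool = True) -> tuple[str, str]:
    """Return (encouraged_action, discouraged_action) for one exemplar.

    Assumes q_values is a non-empty record of Q estimates.
    """
    good = [a for a, q in q_values.items() if q >= good_thr]
    bad = [a for a, q in q_values.items() if q <= bad_thr]
    unseen = [a for a in action_space if a not in q_values]
    if random_plan and unseen:
        # Strategy 1 ("random plan"): back off to a randomly sampled
        # action that is absent from the record.
        if not good:
            good = [random.choice(unseen)]
        if not bad:
            bad = [random.choice(unseen)]
    # Strategy 2 ("non-random plan"): fall back to the extreme Q
    # estimates, regardless of their actual values.
    if not good:
        good = [max(q_values, key=q_values.get)]
    if not bad:
        bad = [min(q_values, key=q_values.get)]
    encouraged = max(good, key=lambda a: q_values.get(a, 0.0))
    discouraged = min(bad, key=lambda a: q_values.get(a, 0.0))
    return encouraged, discouraged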
|
2306.07929#35
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 36 |
(selected) which is trained with only 4.8M tokens or 3K conversations. In Table 8, no single benchmark can determine model quality, meaning that a comprehensive evaluation is needed. Our results indicate that using LLM-as-a-judge to approximate human preferences is highly feasible and could become a new standard in future benchmarks. We are also hosting a regularly updated leaderboard with more models 2. Notably, DynaBench [21], a research platform dedicated to dynamic data collection and benchmarking, aligns with our spirit. DynaBench addresses the challenges posed by static standardized benchmarks, such as saturation and overfitting, by emphasizing dynamic data with human-in-the-loop. Our LLM-as-a-judge approach can automate and scale platforms of this nature.
|
2306.05685#36
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 36 |
print('Type in jump lengths. Negative input stops reading.');
int.parse(stdin.readLineSync());
while (true) }
All of the LLM responses were phrased as actual attempts to help. A large majority had a confident tone; this was the case even where the advice was completely wrong. Fewer than ten of the responses had a somewhat non-confident tone, employing phrases such as "the issue might be," "the code seems," or "the issue seems." Of the 150 responses, 27 encouraged the student with phrases such as "you are close to the solution," "you are on the right track," "your code looks good," "your code is mostly correct but ..." We observed no negativity in any of the responses.
|
2306.05715#36
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05783
| 36 |
|
2306.05783#36
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 36 |
As an early attempt, LMRecSys [Zhang et al., 2021b] tunes language models to estimate the score of each candidate item, resulting in unsatisfying performance. The reason might be that its scoring manners are somehow problematic, which may result from the limitations of the designed scoring method. Prompt4NR [Zhang and Wang, 2023] finetunes BERT by predicting the key answer words based on the prompting templates. PTab [Liu et al., 2022a] transforms tabular data into text and finetunes a BERT model based on a masked language modeling task and classification tasks. UniTRec [Mao et al., 2023] finetunes a BART model with a joint contrastive loss to optimize the discriminative score and a perplexity-based score. RecFormer [Li et al., 2023b] adopts two-stage finetuning based on masked language modeling loss and item-item contrastive loss with LongFormer as the backbone model. P5 [Geng et al., 2022], FLAN-T5 [Kang et al., 2023], PBNR [Li et al., 2023f] and InstructRec
|
2306.05817#36
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 36 |
Trustworthiness and Autonomy
- Trust in Media and Information
- Overreliance on Outputs
- Personal Privacy and Sense of Self

Inequality, Marginalization, and Violence
- Community Erasure
- Long-term Amplifying Marginalization by Exclusion (and Inclusion)
- Abusive or Violent Content

Concentration of Authority
- Militarization, Surveillance, and Weaponization
- Imposing Norms and Values

Labor and Creativity
- Intellectual Property and Ownership
- Economy and Labor Market

Ecosystem and Environment
- Widening Resource Gaps
- Environmental Impacts
These context-specific categories heavily depend on how generative AI systems are deployed, including sector and application. In the broader ecosystem, methods of deployment [229] affect social impact.
# 4.2.1 Trustworthiness and Autonomy
Human trust in systems, institutions, and people represented by system outputs evolves as generative AI systems are increasingly embedded in daily life. With the increased ease of creating machine-generated content, which can produce misinformation [260] as a product, distinguishing between human- and machine-generated content, and between verified information and misinformation, will become increasingly difficult, posing a series of threats to trust in media and in what we can experience with our own hearing and vision.
# 4.2.1.1 Trust in Media and Information
|
2306.05949#36
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 36 |
Tool Learning. Recent developments have underscored LLMs' potential in using a myriad of tools (i.e., taking actions) to augment their capacity [23, 27], including search engine, translator, calculator, etc. Example works include Toolformer [29], ReAct [41], and ToolkenGPT [15]. The creation of recent benchmarks on tool learning [20, 26] further highlights the growing interest in evaluating LLMs' proficiency in tool usage. However, existing research primarily concentrates on short-term tool invocation, neglecting long-term planning. MIND2WEB can bridge this lacuna by necessitating LLMs to take actions within realistic web-browsing environments that demand prolonged decision-making sequences. Furthermore, MIND2WEB may stimulate the development of more advanced tools based on LLMs that interface the web with natural language. These advanced tools could be subsequently employed by another LLM for more challenging problem-solving tasks [11, 30].
# 6 Limitations and Potential Societal Impact
|
2306.06070#36
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06264
| 36 |
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
Neeraj Varshney, Swaroop Mishra, and Chitta Baral. 2022. Investigating selective prediction approaches across several tasks in iid, ood, and adversarial settings. arXiv preprint arXiv:2203.00211.
Ivan Vulić, Edoardo Maria Ponti, Robert Litschko, Goran Glavaš, and Anna Korhonen. 2020. Probing pretrained language models for lexical semantics. arXiv preprint arXiv:2010.05731.
# A Experimental Details
|
2306.06264#36
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.06283
| 36 |
The MAPI-LLM team (Mayk Caldas Ramos, Sam Cox, Andrew White) made the first steps towards developing such a system (MAPI-LLM) and created a procedure to convert a text prompt into a query of the Materials Project API (MAPI) to answer questions such as "Is the material AnByCz stable?" In addition, MAPI-LLM is capable of handling classification queries, such as "Is Fe2O3 magnetic?", as well as regression problems, such as "What is the band gap of Mg(Fe2O3)2?".
Because an LLM is used to create the workflow, MAPI-LLM can process even more complex questions. For instance, the question "If Mn23FeO32 is not metallic, what is its band gap?" should create a two-step workflow, first to check if the material is metallic and then to calculate its band gap if it is not.
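A minimal sketch of such a two-step workflow, using the Materials Project API client (mp_api); method and field names follow recent mp_api releases and may need adjusting across versions, and the routing step that MAPI-LLM derives with an LLM is hard-coded here for illustration.

from mp_api.client import MPRester

def band_gap_if_not_metallic(formula: str, api_key: str):
    """Step 1: check metallicity; step 2: report the band gap if non-metallic."""
    with MPRester(api_key) as mpr:
        docs = mpr.materials.summary.search(
            formula=formula, fields=["is_metal", "band_gap"])
    if not docs:
        return None  # MAPI-LLM would fall back to an ICL prompt here
    doc = docs[0]
    return None if doc.is_metal else doc.band_gap

For "If Mn23FeO32 is not metallic, what is its band gap?", the planner would route to this function with formula="Mn23FeO32".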
|
2306.06283#36
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 36 |
Ablation on the necessity of the discouraged actions The proposed output format "action advice" comprises both encouraged and discouraged actions. The discouraged actions are believed to help the LLM avoid similar failures. Results in Table 8 prove the necessity of the discouraged actions. Without access to the discouraged actions, the agent achieves much poorer performance than the full model. As the case shown in the supplementary material illustrates, there may be no proper actions to encourage in the retrieved experience. In such cases, the discouraged actions are especially crucial for the agent to avoid repeating similar mistakes.
|
2306.07929#36
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 37 |
# 6 Discussion
Limitations. This paper emphasizes helpfulness but largely neglects safety. Honesty and harmlessness are crucial for a chat assistant as well [2]. We anticipate similar methods can be used to evaluate these metrics by modifying the default prompt. Additionally, within helpfulness, there are multiple dimensions like accuracy, relevance, and creativity, but they are all combined into a single metric in this study. A more comprehensive evaluation can be developed by analyzing and separating these dimensions. We propose preliminary solutions to address the limitations and biases of LLM-as-a-judge in Section 3.4, but we anticipate more advanced methods can be developed.
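As a hypothetical illustration of such a prompt modification, the sketch below extends a pairwise judge prompt beyond helpfulness to also weigh harmlessness; the wording is illustrative rather than the paper's actual template, and call_llm stands in for any chat-completion API.

JUDGE_PROMPT = """Please act as an impartial judge and compare the two
assistant answers to the user question below along two axes: helpfulness
and harmlessness. Output "[[A]]", "[[B]]", or "[[C]]" for a tie.

[Question]
{question}

[Answer A]
{answer_a}

[Answer B]
{answer_b}"""

def judge_pair(question: str, answer_a: str, answer_b: str, call_llm) -> str:
    """Return the judge's verdict string for one answer pair."""
    return call_llm(JUDGE_PROMPT.format(
        question=question, answer_a=answer_a, answer_b=answer_b))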
Data collection and release. Appendix C describes the detailed data collection and release processes, which include the instructions we give to users, the screenshots of the data collection interface, the information about participating users, and the content of the released data.
2. https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard
Societal impacts. The societal impact of this study is multi-faceted. Our evaluation methods can help enhance chatbot quality and user experiences. However, addressing biases in these methods is crucial. Our dataset enables better studies of human preferences and model behavior. Advanced chat assistants may replace certain human tasks, resulting in job displacements and new opportunities.
|
2306.05685#37
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 37 |
There was some variation in terms of agency. 78 responses attributed actions to the student: what they did or should do, as in "when you initialize," "you need to," or "you can." Nineteen responses implied a shared activity or a passive "we," as in "we need to," "we can," or "we should." In nine responses, the LLM itself was given agency, as in "I would."
The student's code has a syntax error and would have an infinite loop if the syntax were to be corrected. The LLM responds in very encouraging tones, but not as a teacher might prefer:
You are on the right track by using a while loop to continuously ask the user for input until a negative value is entered. However, there are a few issues with your code.
Firstly, you are not storing the input values in a variable. You need to read the user input with stdin.readLineSync() and store it in a variable. You can then convert the input to an integer with 'int.parse()'.
Secondly, you need to check if the input value is negative and break out of the loop if it is. You can do this with an if statement.
|
2306.05715#37
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 37 |
[Figure: probing performance for salient object segmentation (left) and monocular depth estimation (right), comparing self-attention layers vs. convolutional layers at denoising steps 5 and 15.]
|
2306.05720#37
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process$-$well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05783
| 37 |
[Table: Xiezhi results under the "Instruction For Ranking" setting, reported for 0-shot and 3-shot evaluation.]
|
2306.05783#37
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05949
| 37 |
# 4.2.1.1 Trust in Media and Information
High capability generative AI systems create believable outputs across modalities and level of risk depends on use case. From impersonation spurring spamming to disinformation campaigns, the spread of misinformation online can be perpetuated by reinforcement and volume; people are more likely to believe false information when they see it more than once, for example if it has been shared by multiple people in their network [179]. This can have devastating real world impacts, from attempting dangerous COVID-19 treatments [160], to inciting violence [146], and the loss of trust in mainstream news [95]. The increasing sophistication of generative AI in recent years has expanded the possibilities of misinformation and disinformation campaigns, and made it harder for people to know when they should trust what they see or hear [41].
|
2306.05949#37
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 37 |
# 6 Limitations and Potential Societal Impact
MIND2WEB is designed to facilitate the development and evaluation of generalist agents for the web. Such agents hold great potential for making the web more accessible and easy to use, especially for individuals who are less familiar with information technology or have disabilities and may struggle to navigate through complex web apps and get overwhelmed by the options available. However, there are still potential concerns and limitations regarding the current data collection, system design, and safety for deployment in the real world.
Diversity and Representation in Data Collection. Although we strive to choose representative websites covering diverse domains, the present selection predominantly comprises English-language websites primarily used in the U.S. Meanwhile, all our annotators are sourced through the Amazon MTurk platform, which might be biased towards a group that is more proficient in web use. Therefore, the tasks and websites embodied in our dataset may represent only a subset of all potential tasks that can be performed on the web. Bearing this limitation in mind, the design of MIND2WEB and our data collection protocol allow for easy expansion to encompass more tasks and websites. The inclusion of additional websites, potentially from different countries and languages, and tasks from more diverse demographics, such as individuals from different age groups, those traditionally facing web accessibility challenges, and professionals from specific domains like software development, research, law, and more, present exciting directions for future development.
|
2306.06070#37
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06264
| 37 |
# A Experimental Details
Datasets Our experimental evaluations involved fact-checking benchmarks such as T-REx (Elsahar et al., 2018), which is a curated subset of Wikipedia triples aligned with corresponding Wikipedia ab- stracts. T-REx encompasses a vast collection of 11 million triples and 3.09 million Wikipedia ab- stracts, covering over 600 distinct Wikidata predi- cates. To facilitate the mapping of triples from T- REx to natural language expressions, we employed the LAMA framework introduced by Petroni et al. (2019). LAMA provides natural language tem- plates specifically designed for 41 predicates de- rived from the T-REx benchmark.
Models For our evaluations, we utilized two popular models, BERT (Devlin et al., 2019) and T5 (Raffel et al., 2019), to gauge the accuracy of various knowledge metrics and to compare the effectiveness of explicit and implicit knowledge instillation techniques. Additionally, we employed InstructGPT (text-davinci-003) (Ouyang et al., 2022) and FLAN-T5 (XL) (Chung et al., 2022) to investigate the applicability of our proposed methods across different tasks using in-context learning-based models.
# B In-Context Learning Results Based on Entropy Metric
|
2306.06264#37
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.06283
| 37 |
Moreover, MAPI-LLM applies ICL if the data for a material's property is unavailable via the MAPI. MAPI-LLM generates an ICL prompt, building context based on the data for similar materials available in the Materials Project database. This context is then leveraged by an LLM to infer properties of the unknown material. This innovative use of ICL bridges data gaps and enhances MAPI-LLM's robustness and versatility.
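A minimal sketch of how such an ICL prompt could be assembled; the function, field names, and property values below are hypothetical and only illustrate the pattern, not MAPI-LLM's actual implementation.

```python
def build_icl_prompt(formula: str, prop: str, neighbors: list[dict]) -> str:
    """Assemble a few-shot prompt from similar materials whose property
    values are available in the Materials Project database."""
    lines = [f"Predict the {prop} of inorganic materials."]
    for n in neighbors:
        lines.append(f"Material: {n['formula']} -> {prop}: {n[prop]} eV")
    lines.append(f"Material: {formula} -> {prop}:")
    return "\n".join(lines)

# Hypothetical usage; in MAPI-LLM the neighbors would come from a
# similarity search over Materials Project entries. Values are illustrative.
prompt = build_icl_prompt(
    "InP", "band_gap",
    [{"formula": "GaAs", "band_gap": 1.52}, {"formula": "GaP", "band_gap": 2.26}],
)
print(prompt)  # the completed prompt is then sent to the LLM
```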
b. sMolTalk
The previous application already touches on the problem that software for chemical applications requires scientists to invest a significant amount of time in learning even the most basic tools. An example of this is visualization software. Depending on the package and its associated documentation, chemists and materials scientists might spend hours to days learning the details of specific visualization software that is sometimes poorly documented. In particular, for occasional use, software that takes a long time to learn simply won't be used.
|
2306.06283#37
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 37 |
Ablation on the similarity function As stated in Subsection 3.3, a similarity function is required to select related experiences from the memory. In experiments, the similarity is implemented as two components: task similarity f_g and observation similarity f_o. Ablation studies are conducted to draw a brief perspective on the impact of these two components. As shown in Table 8, removing task similarity does not affect the performance remarkably, while removing observation similarity causes a serious degradation. This may indicate that on these tasks, the tested LLM benefits more from experiences with similar observations than from those with similar instruction patterns. On the other hand, the pattern-based task similarity for WikiHow introduced in Subsection 4.1 may be too coarse to cluster the experiences. During interaction, the agent may receive instructions of the same pattern (e.g., "access article ABC") while facing different types of observation (e.g., a search result page or a category page). The appropriate actions in the two situations also differ. Removing observation similarity eliminates this difference in experience selection and results in misleading exemplars. A case study in the supplementary material illustrates this.
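A minimal sketch of the kind of combined scoring this implies, assuming vector encodings of tasks and observations and cosine similarity; the paper's exact f_g and f_o differ (e.g., the pattern-based task similarity used for WikiHow).

```python
import numpy as np

def select_experiences(task_vec, obs_vec, memory, k=4, lam=0.5):
    """Rank stored experiences by a weighted sum of task similarity (f_g)
    and observation similarity (f_o) and return the top-k as exemplars."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

    scored = sorted(
        memory,
        key=lambda m: lam * cos(task_vec, m["task_vec"])
                      + (1 - lam) * cos(obs_vec, m["obs_vec"]),
        reverse=True,
    )
    return scored[:k]

# Setting lam = 1.0 drops f_o and mimics the "w/o observation similarity"
# ablation, which causes the serious degradation reported in Table 8.
```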
# 5 Conclusion
|
2306.07929#37
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 38 |
Future directions. 1) Benchmarking chatbots at scale with a broader set of categories; 2) open-source LLM judge aligned with human preference; 3) enhancing open models' math/reasoning capability.
# 7 Conclusion
In this paper, we propose LLM-as-a-judge for chatbot evaluation and systematically examine its efficacy using human preference data from 58 experts on MT-bench, as well as thousands of crowd users on Chatbot Arena. Our results reveal that strong LLMs can achieve an agreement rate of over 80%, on par with the level of agreement among human experts, establishing a foundation for an LLM-based evaluation framework.
# Acknowledgement
This project is partly supported by gifts from Anyscale, Astronomer, Google, IBM, Intel, Lacework, Microsoft, MBZUAI, Samsung SDS, Uber, and VMware. Lianmin Zheng is supported by a Meta Ph.D. Fellowship. We extend our thanks to Xinyang Geng, Hao Liu, Eric Wallace, Xuecheng Li, Tianyi Zhang, Qirong Ho, and Kevin Lin for their insightful discussions.
# References
|
2306.05685#38
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 38 |
Secondly, you need to check if the input value is negative and break out of the loop if it is. You can do this with an if statement.
At least twenty of the 150 responses featured a discrepancy between the explanation and the code in the response. For example, one response emphasized how a readlineSync function might return a line break, even though the actual issue (deliberately inserted in the starter code) involves letter case when calling Dart's readLineSync; however, the model-solution-like code in the LLM's response used the correct letter case. In another example, the LLM suggested moving a return statement to the end of a function, but did not actually implement this advice in the code that it provided.

Three more topics surfaced from this analysis that we felt were of potential significance for instructional design. First, and as also
Lastly, you need to calculate the average length of jumps and print it to the console. You can keep track of the total sum of jumps and the number of jumps entered in two separate variables. You can then divide the total sum by the number of jumps to get the average length. If no jumps were entered, you should print "No jumps." instead.
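For reference, the logic this response describes amounts to the following (a minimal Python rendering; the course exercise itself is written in Dart):

```python
total = 0.0
count = 0
while True:
    value = float(input("Enter jump length (negative to stop): "))
    if value < 0:              # a negative value ends input
        break
    total += value
    count += 1

if count == 0:
    print("No jumps.")
else:
    print(f"Average: {total / count}")
```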
|
2306.05715#38
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 38 |
Figure 9: We found stronger depth and salient object representations in the activations of self-attention layers (solid lines). The probing performance on the activations of convolutional layers (dashed lines) is generally weaker.
In addition to probing the self-attention layers, we also investigated the depth representations in convolutional layers, which produced weaker results. The convolutional layers' performance only starts to improve when the input has been aggressively downsampled (8 × 8 spatial size) such that a 3 × 3 filter kernel can cover a large portion of the input latent image. Intuitively, encoding depth information will require accessing the global context of images, which is only possible when using self-attention.
Our findings revealed that, during generative training without any prior depth information, self-attention layers outperformed convolutional layers in capturing saliency and depth representations.
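As a sketch of the probing methodology at issue here, the following trains a linear probe on (synthetic stand-ins for) cached self-attention activations; the loss and optimizer choices are assumptions, not the paper's exact training recipe.

```python
import torch
import torch.nn as nn

# Stand-ins for cached self-attention activations (N, h*w, c) and
# per-pixel depth targets (N, h*w) resized to the layer's resolution.
N, hw, c = 64, 16 * 16, 1280            # e.g. an Encoder 3 layer (Table 2)
acts, depth = torch.randn(N, hw, c), torch.randn(N, hw)

probe = nn.Linear(c, 1)                 # the linear probe
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
for _ in range(100):
    pred = probe(acts).squeeze(-1)      # (N, hw) linear read-out
    loss = nn.functional.mse_loss(pred, depth)
    opt.zero_grad(); loss.backward(); opt.step()
```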
# B Spatial and Feature Dimensions of Self-Attention Layers in the LDM
Table 2: Spatial and feature dimensions of self-attention layer activations across transformer blocks of the LDM.
|
2306.05720#38
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process$-$well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05783
| 38 |
[Table fragment: numeric accuracy scores from the Xiezhi benchmark results; the row and column labels are not recoverable in this chunk.]
|
2306.05783#38
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 38 |
The works mentioned above all adopt full finetuning, which could be considerably expensive and unscalable as the size of the language model continuously increases. To this end, PALR [Chen, 2023] fully finetunes LLaMA [Touvron et al., 2023] based on only 20% of the user data, which not only achieves overall training efficiency but also demonstrates strong inductive learning capabilities of LLM. Besides, parameter-efficient finetuning methods are usually required to efficiently adapt LLM to RS, e.g., option tuning for M6-Rec [Cui et al., 2022], layerwise adapter tuning for VIP5 [Geng et al., 2023], and low-rank adaptation (LoRA) [Hu et al., 2021] for TALLRec [Bao et al., 2023].
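As a sketch of what low-rank adaptation looks like in code, here is a generic LoRA layer in the spirit of Hu et al. [2021]; this is not TALLRec's actual implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank update B @ A,
    so only r * (d_in + d_out) parameters are tuned per layer."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                 # freeze pretrained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T
```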
As shown in Figure 3, the performance of finetuning LLMs on recommendation data is promising with proper task formulation, even if the model size is still relatively small.
# 3.5 Discussion
|
2306.05817#38
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 38 |
What to Evaluate. Surveying trust can apply to trust in AI systems [184, 107] to output factual information, trust in researchers, developers, and organizations developing and deploying AI [143], mitigation and detection measures [222], and trust in overall media and how it is distributed [251]. Trust can be evaluated in the category of information, such as information about democratic and policy institutions [177]. Evaluations and countermeasures of false and misleading information remain challenging. There is no universal agreement about what constitutes misinformation, and much of the research on intervention remains siloed [94]. Furthermore, current research efforts towards watermarking text remain brittle, and developing watermarks for machine-generated outputs is an active research area [128].
Mitigation and Interventions. Interventions on technical systems include encouraging people to shift their attention to the accuracy of posts they might share [180], using crowd-sourced fact checking [90], and using digital forensics to detect AI-generated content [76]. However, technical tools such as detection are less accurate as AI systems become more powerful [204].
|
2306.05949#38
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 38 |
Use of Multimodal Information. Our current approach, MINDACT, models the web environment using only textual context from webpage snapshots. Nevertheless, crucial information can also be gleaned from the visual representation of a rendered webpage. While not currently utilized, we have included complete webpage snapshots in MIND2WEB, enabling rendering of the webpage for visual interpretation. Using this multimodal information presents a viable prospect for improving model performance.
Modeling of Interaction Dynamics. In MINDACT, we encode each webpage independently at every step, with only the previous actions provided as historical context. However, the changes of the web environment could also provide significant cues for task completion, such as the appearance of a dropdown menu following a button click. Exploring effective ways to model such dynamic environment transformations during interaction could be an essential aspect for developing robust web agents.
Human-Agent Interaction. In the current design of MIND2WEB, the user provides a single description of the task goal up front, and the agent carries out the task from start to finish. In real-world settings, the user may wish to adjust or add task requirements in the middle, or the agent might seek user confirmation for more accurate task understanding. Extending Mind2Web to an interactive or conversational setting, thereby allowing diverse forms of human-agent interactions, could be an interesting future direction.
|
2306.06070#38
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06264
| 38 |
# B In-Context Learning Results Based on Entropy Metric
The results of the factual alignment and hallucination experiments for the entropy-based metric can be found in Table 4. In the table, we have highlighted in green the relation types where there is a meaningful difference in our metrics across these classes. Firstly, it is evident that most of the relations where our metrics are unable to differentiate between the classes involve location or language as their object. Additionally, when comparing appeared with hallucinated facts, we observe that for relations with a location as the object, the model mostly possesses more knowledge about appeared facts in comparison to hallucinated ones.
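For context, a minimal sketch of the entropy and KL quantities that metrics of this kind build on, comparing the model's prediction distribution over the object slot before and after the target fact is instilled; the paper's exact metric definitions differ in detail.

```python
import torch

def entropy(p: torch.Tensor) -> torch.Tensor:
    """Shannon entropy of a prediction distribution (last dim)."""
    return -(p * p.clamp_min(1e-12).log()).sum(-1)

def knowledge_signals(p_before: torch.Tensor, p_after: torch.Tensor):
    """Entropy drop and KL divergence between the distributions before
    and after knowledge instillation; small changes suggest the model
    already encoded the fact."""
    d_entropy = entropy(p_before) - entropy(p_after)
    kl = (p_after * (p_after.clamp_min(1e-12).log()
                     - p_before.clamp_min(1e-12).log())).sum(-1)
    return d_entropy, kl
```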
Further analysis of the results reveals interesting trends in relation to the need for factual alignment and hallucination for InstructGPT and FLAN-T5. The relations admin territory, country, field of work, work location, instrument, headquarters location, location of formation, employer, member of show a lower requirement for explicit knowledge instillation to appear in the generated output in InstructGPT. On the other hand, for
|
2306.06264#38
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.07929
| 38 |
# 5 Conclusion
We introduce Reinforcement Learning with Experience Memory (RLEM) to aid the LLM in learning from its interaction experiences for decision-making tasks. A novel LLM-based agent framework called REMEMBERER is then designed with RLEM by equipping the LLM with a persistent experience memory and updating the memory with the RL algorithm. The REMEMBERER agent is capable of exploiting the interaction experiences to improve its policy and gains a significant improvement compared to the baseline, as our experimental results demonstrate. Owing to the simplicity and effectiveness of REMEMBERER, we believe that this work provides a valuable perspective on designing evolvable LLM-based agents with RLEM.
# 6 Limitations
The proposed REMEMBERER agent demonstrates strong superiority on the tested benchmarks. Nevertheless, it remains to be seen how this framework would apply to environments with longer episodes or with more extensive or visually rich observations. Besides, it is observed that the performance of REMEMBERER saturates quickly during training. This may be due to the limited number of active exemplars. Further efforts are needed to make the agent's performance evolve continuously. Furthermore, as an early exploration, we did not make use of sophisticated RL techniques. How recent advances in the RL domain work under RLEM is also an interesting problem.
# Acknowledgements
|
2306.07929#38
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 39 |
# References
[1] Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report. arXiv preprint arXiv:2305.10403, 2023.
[2] Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022.
[3] Niels J Blunch. Position bias in multiple-choice questions. Journal of Marketing Research, 21(2):216–220, 1984.

[4] Jonathon D Brown. Evaluations of self and others: Self-enhancement biases in social judgments. Social Cognition, 4(4):353–376, 1986.
|
2306.05685#39
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05720
| 39 |
Table 2: Spatial and feature dimensions of self-attention layer activations across transformer blocks of the LDM.
Blocks        Number of Self-Attn Layers    Spatial h × w    Feature c
Encoder 1     2                             64 × 64          320
Encoder 2     2                             32 × 32          640
Encoder 3     2                             16 × 16          1280
Encoder 4     0                             —                —
Bottleneck    1                             8 × 8            1280
Decoder 1     0                             —                —
Decoder 2     3                             16 × 16          1280
Decoder 3     3                             32 × 32          640
Decoder 4     3                             64 × 64          320
In this section, we review the architecture of Stable Diffusion, which helps explain why we need to upsample the predictions from the probing classifier. We will use the information in Table 2 again in Appendix F.

As Table 2 shows, the self-attention layers of the LDM operate at different spatial and feature dimensions across vision transformer blocks. The probing classifier takes the original activations from the self-attention layers as input, and outputs predictions at the same resolution as those activations. We upsampled the low-resolution predictions to the same spatial size as the original images (512 × 512) when comparing the predictions with the synthetic labels.
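A minimal sketch of that upsampling step; the bilinear mode is an assumption, and any resampling that maps the probe's native resolution to 512 × 512 would serve.

```python
import torch
import torch.nn.functional as F

# Probe output at a layer's native resolution, e.g. 16 x 16 for an
# Encoder 3 / Decoder 2 self-attention layer (see Table 2).
pred = torch.randn(1, 1, 16, 16)
pred_full = F.interpolate(pred, size=(512, 512),
                          mode="bilinear", align_corners=False)
print(pred_full.shape)  # torch.Size([1, 1, 512, 512])
```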
# C Visualizations of Emerging Depth Representations in the LDM
|
2306.05720#39
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process$-$well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05783
| 39 |
[Table fragment: numeric accuracy scores on Xiezhi-Specialty (English) under 0-shot, 1-shot, and 3-shot settings; the remaining row and column labels are not recoverable in this chunk.]
|
2306.05783#39
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 39 |
# 3.5 Discussion
We first conclude the necessity of collaborative knowledge injection when adapting LLM to RS, and then discuss the relationship between the recommendation performance and the size of the adapted LLM. Finally, we discuss an interesting property found in ChatGPT-like large language models.
Collaborative Knowledge is Needed
From Figure 3, we could observe a clear performance boundary between works from quadrant 3 and quadrants 1, 2, 4. Research works from quadrant 3 are inferior even though they adapt large-scale models, i.e., ChatGPT. This indicates that the recommender system is a highly specialized area, which demands a lot of in-domain collaborative knowledge. LLM cannot learn such knowledge from its general pretraining corpus. Therefore, we have to involve in-domain collaborative knowledge for better performance when adapting LLM to RS,
and there are generally two ways to achieve the goal (corresponding to quadrants 1, 2, 4):
• Tune LLM during the training phase, which injects collaborative knowledge from a data-centric aspect.
• Infer with CRM during the inference phase, which injects
|
2306.05817#39
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 39 |
Emerging legal and regulatory approaches around the world include the EU AI Act, which requires labeling AI-generated content, and certain U.S. state laws that criminalize non-consensual deepfake pornography and deepfake content that interferes with elections [38], where lessons can be extrapolated to generated AI outputs. Policymakers and developers can also ban use cases where false outputs carry the highest risks.
# 4.2.1.2 Overreliance on Outputs
Overreliance on automation in general is a long-studied problem [174], and carries over in novel and important ways to AI-generated content [178]. People are prone to overestimate and put a higher degree of trust in AI-generated content, especially when outputs appear authoritative or when people are in time-sensitive situations [45].
This can be dangerous because many organizations are pursuing the use of large language models to help analyze information despite persistent flaws and limitations, which can lead to the spread of biased and inaccurate information [103]. The study of human-generative AI relationships is nascent, but growing, and highlights that the anthropomorphism [13] of these technologies may contribute to unfounded trust and reliance [192, 225]. Improving the trustworthiness of AI systems is an important ongoing effort across sectors [159, 161].
|
2306.05949#39
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 39 |
Evaluation with Offline/Online Environments. Following recent works [5, 35], we evaluate the system with cached offline environments, which allows us to test using snapshots of complex real-world websites. However, a downside to this is that the task will fail immediately if an action was not cached during data collection, potentially leading to false negatives due to the existence of multiple paths for completing the same task. As described in Appendix C.1, we normalize the actions to address equivalent elements within the same page. In addition, we include complete network traffic in the dataset, presenting possibilities for future research to enable some degree of replay and exploration within the cached environment. Given that MIND2WEB faithfully replicates real-world webpages, systems trained on the dataset should be readily transferable to live websites. Conducting end-to-end live evaluation on real websites with human assistance is a very promising direction worth exploring.
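A minimal sketch of the kind of action matching such normalization enables; the attribute choices and field names are illustrative, not MIND2WEB's exact rule.

```python
def normalize(el: dict) -> tuple:
    """Reduce an element to attributes that are stable across equivalent
    nodes on the same page."""
    return (el.get("tag"), el.get("text", "").strip().lower(),
            el.get("aria_label"))

def matches_cached(pred: dict, cached: list[dict]) -> bool:
    """Count a predicted action as correct if some cached action targets
    an equivalent element with the same operation."""
    return any(normalize(pred["element"]) == normalize(c["element"])
               and pred["op"] == c["op"]
               for c in cached)
```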
|
2306.06070#39
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites is often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still substantial room for improvement towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06264
| 39 |
FLAN-T5, the relations headquarters location, employer, country of origin, original language of work, owned by exhibit a similar characteristic. Moreover, certain relations demonstrate higher resistance to hallucination in InstructGPT and FLAN-T5. Specifically, the relations headquarters location, location of formation, employer, member of exhibit a greater resistance to hallucination in InstructGPT, while the relations official language, sister city, headquarters location, instance of, developer, owned by demonstrate a higher resistance to hallucination in FLAN-T5.
|
2306.06264#39
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
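As a rough illustration of the entropy/KL idea described above (a sketch under assumed inputs, not the paper's code), one could compare the model's answer distribution before and after the target fact is instilled, e.g., by placing it in the context:

```python
import numpy as np
from scipy.special import softmax
from scipy.stats import entropy

# Hypothetical logits over a fixed set of candidate answers, before and
# after instilling the target knowledge (e.g., via in-context learning).
logits_before = np.array([2.1, 1.9, 0.3, -0.5])
logits_after = np.array([4.0, 0.2, -1.0, -1.2])

p, q = softmax(logits_before), softmax(logits_after)
print(f"entropy before: {entropy(p):.3f}, after: {entropy(q):.3f}")  # should drop
print(f"KL(after || before): {entropy(q, p):.3f}")  # size of the update
```

A large entropy drop together with a large KL shift would indicate that the instilled fact actually moved the model's prediction.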
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.06283
| 39 |
As the sMolTalk team (Jakub Lála, Sean Warren, Samuel G. Rodriques) showed, one can use LLMs to write code for visualization tools such as 3dmol.js to address this inefficiency [68]. Interestingly, few-shot prompting with several examples of user input paired with the expected JavaScript code that manipulates the 3dmol.js viewer is all that is needed to create a prototype of an interface that can retrieve protein structures from the protein data bank (PDB) and create custom visualization solutions, e.g., to color parts of a structure in a certain way (Figure 4). The beauty of the language models is that the user can write the prompt in many different ("fuzzy") ways: whether one writes "color" or "colour", or terms like "light yellow" or "pale yellow", the LLM translates it into something the visualization software can interpret.
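A minimal sketch of this few-shot pattern (with made-up example shots; the actual sMolTalk prompts are not reproduced here) might look like:

```python
# Each shot pairs a natural-language request with the 3dmol.js JavaScript
# that fulfils it; the user's fuzzy request is appended at the end.
FEW_SHOT = [
    ("show protein 1UBQ as a cartoon",
     "viewer.setStyle({}, {cartoon: {}}); viewer.render();"),
    ("colour the helices pale yellow",
     "viewer.setStyle({ss: 'h'}, {cartoon: {color: '0xFFFFCC'}}); viewer.render();"),
]

def build_prompt(user_request: str) -> str:
    shots = "\n\n".join(f"User: {req}\nJS: {js}" for req, js in FEW_SHOT)
    return f"{shots}\n\nUser: {user_request}\nJS:"

# The assembled prompt is sent to the LLM and the completion is executed
# inside the 3dmol.js viewer.
print(build_prompt("make everything light yellow"))
```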
|
2306.06283#39
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.05685
| 40 |
[4] Jonathon D Brown. Evaluations of self and others: Self-enhancement biases in social judgments. Social cognition, 4(4):353–376, 1986.
[5] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023.
[6] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
[7] Cheng-Han Chiang and Hung-yi Lee. Can large language models be an alternative to human evaluations? arXiv preprint arXiv:2305.01937, 2023.
|
2306.05685#40
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 40 |
Table 4: The performance of GPT-3.5, in English, on the 150 help requests, split by issue type.
GPT-3.5 identifies and mentions issues:
Theme of issue   Sub-theme               One (or more)   All    Non-existent(s)
Logic error      Conditionals (n=37)     86%             35%    49%
Logic error      Iteration (n=30)        97%             73%    40%
Logic error      Arithmetic (n=23)       91%             57%    35%
Input / output   Formatting (n=25)       72%             44%    52%
Input / output   Unwanted (n=24)         75%             54%    63%
Input / output   Missing (n=10)          70%             50%    50%
Other            Syntax (n=12)           92%             50%    100%
Other            Very incomplete (n=8)   100%            50%    25%
Other            Semicolons (n=4)        100%            63%    0.0%
Other            Hidden req (n=3)        33%             13%    67%
|
2306.05715#40
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 40 |
# C Visualizations of Emerging Depth Representations in the LDM
[Figure 10: denoising steps 1–5, 11, and 15 for the prompt "Luttrell - Espresso 6 Piece Sectional"; rows compare depth and salient-object predictions probed from the LDM's internal representations with baseline predictions from the decoded images.]
Figure 10: Row 1: Images decoded at each denoising step. The latent vector from which the image is decoded serves as the input to the LDM at the next step. Rows 2 & 4: Predictions of probing classifiers based on the LDM's internal activations. Rows 3 & 5: Baseline predictions of depth and salient object from external models that take decoded images as input.
We observed that the position of the salient object and the depth of the scene were both determined at the very early denoising stage (see Figure 10). Despite the LDM being conditioned on noisy latent vectors, the probing classifier's predictions of the salient object and depth already resembled those in the fully denoised image.
Visit this link to see more examples of how depth representations develop in the early denoising stages of LDM.
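For intuition, a linear probe of the kind described here can be as simple as a logistic regression over per-position activations; the sketch below uses synthetic stand-ins for the real activations and masks (a depth probe would use linear regression on continuous depth values instead):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

n_positions, n_channels = 64 * 64, 1280                  # spatial grid x channels
activations = np.random.randn(n_positions, n_channels)   # stand-in for LDM activations
labels = np.random.randint(0, 2, size=n_positions)       # salient / background mask

probe = LogisticRegression(max_iter=1000).fit(activations, labels)
predicted_mask = probe.predict(activations).reshape(64, 64)
print("probe accuracy:", probe.score(activations, labels))
```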
# D Choices of Intervened Denoising Steps
|
2306.05720#40
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process$-$well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05783
| 40 |
0.183 0.195 0.204 0.190 0.164 0.189 0.235 0.146 0.172 0.137 0.106 0.120 0.053 0.077 0.070 0.209 0.139 0.179 0.124 0.118 0.088 0.098 0.101 0.050 0.068 0.063 0.059 0.088 0.089 0.103 0.103 0.203 0.077 0.143 0.123 0.114 0.144 0.156 0.207 0.219 0.162 0.056 0.394 0.069 0.161 0.215 0.161 0.172 0.194 0.173 0.239 0.168 0.148 0.183 0.149 0.166 0.124 0.043 0.080 0.059 0.195 0.128 0.128 0.124 0.112 0.083 0.110 0.116 0.065 0.086 0.077 0.086 0.098 0.081 0.100 0.107 0.156 0.066 0.124 Xiezhi-Inter.-English 1-shot 0.089 0-shot 0.089 3-shot 0.089 0.124 0.144 0.150 0.175 0.217 0.228 0.157
|
2306.05783#40
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Language Processing (NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in \url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 40 |
⢠Infer with CRM during the inference phase, which injects
collaborative knowledge from a model-centric aspect. As shown in Figure 3, we could observe a clear trajec- tory evolving from quadrant 3 to quadrant 2 and 4 through in-domain collaborative knowledge injection. Therefore, it is natural to draw the future prospect to further ï¬ll in the blank in quadrant 1, where we tune large foundation models for alignments and also involve CRM for inference.
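A toy sketch of the "infer with CRM" idea (hypothetical scores, not any specific system) is simply a blended ranking score:

```python
# Toy relevance estimates from the LLM and from a conventional
# recommendation model (CRM); alpha trades off the two sources.
llm_scores = {"item_a": 0.8, "item_b": 0.4, "item_c": 0.6}
crm_scores = {"item_a": 0.2, "item_b": 0.9, "item_c": 0.5}

def rank(items, alpha=0.5):
    blended = {i: alpha * llm_scores[i] + (1 - alpha) * crm_scores[i] for i in items}
    return sorted(items, key=blended.get, reverse=True)

print(rank(["item_a", "item_b", "item_c"]))  # CRM knowledge shifts the order
```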
|
2306.05817#40
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 40 |
Persistent security vulnerabilities in large language models and other generative AI systems are another reason why overreliance can be dangerous. For example, data poisoning, backdoor attacks, and prompt injection attacks can all trick large language models into providing inaccurate information in specific instances [220].
For language, in the case of AI chatbots specifically, the conversational interface can additionally elicit trust and other strong emotions from people, even when they understand the limitations of the technology [201]. Overreliance on such tools can not only make people prone to believe inaccurate information, but can also be abused to subtly change or manipulate people's behaviors, for example to make them more likely to purchase particular products or even to encourage self-harm [99].
For language models trained on code and code generative systems, inaccurate outputs [60] can nullify potential benefits. Code generative systems can be evaluated for their limitations [56] and hazards [127], from alignment questions like producing bugs and harmful biases, to economic and environmental impacts (see Section 4.1 Impacts: The Technical Base System).
|
2306.05949#40
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 40 |
Safety in Deployment. While the development of general-purpose web agents holds great potential to enhance efficiency, optimize user experiences, and promote web accessibility universally, the accompanying safety considerations for real-world deployment cannot be ignored. These include effectively managing sensitive actions like financial transactions, enhancing transparency and interpretability, and keeping users in control during task execution. Additionally, there is the risk of these agents possessing the capability to breach existing security measures such as CAPTCHA and being exploited for malicious activities, such as disseminating false information. Therefore, it is also important for cybersecurity research to consider these potential uses and develop preemptive protective measures.
# 7 Conclusion
In this work, we introduced MIND2WEB, the first dataset for developing and evaluating generalist agents for the web. We also proposed MINDACT, an agent that leverages the power of (large) language models for effectively tackling this task. Our work opens up a wide range of promising future directions, including integrating multi-modal information, reinforcement learning with feedback from real websites, and specialized LMs for web understanding and action taking. We hope that MIND2WEB will serve as a valuable platform for the research community to advance towards generalist agents for the web.
# Acknowledgements
|
2306.06070#40
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still substantial room for improvement towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |