doi (string, length 10) | chunk-id (int64, 0-936) | chunk (string, length 401-2.02k) | id (string, length 12-14) | title (string, length 8-162) | summary (string, length 228-1.92k) | source (string, length 31) | authors (string, length 7-6.97k) | categories (string, length 5-107) | comment (string, length 4-398, nullable) | journal_ref (string, length 8-194, nullable) | primary_category (string, length 5-17) | published (string, length 8) | updated (string, length 8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2306.05783
| 9 |
In light of the aforementioned needs, we propose a comprehensive, multi-disciplinary, auto-updating benchmark for domain knowledge evaluation. We call this benchmark Xiezhi, named after a mythical creature that symbolizes fairness and judgement. Xiezhi consists of 249,587 questions with 516 disciplines, ranging from 13 different categories: philosophy, economics, law, education, literature, history, natural sciences, engineering, agriculture, medicine, military science, management, and arts. These 516 disciplines are derived from the Chinese Disciplinary Taxonomy, a comprehensive hierarchical classification system of domain knowledge proposed by the Chinese Ministry of Education and widely acknowledged in China. We manually selected and annotated 20k questions from the Chinese Graduate Entrance Examination covering these 516 labels to form the Xiezhi-Meta dataset. To facilitate automatic updating, this dataset was utilized to train an annotation model capable of estimating the relevance between questions and disciplinary labels. We subsequently amassed 170k multiple-choice questions originating from diverse examinations, along with 80k multiple-choice questions generated from academic surveys, annotating all of them using the annotation
|
2306.05783#9
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Language Processing~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 9 |
• Scoring/Ranking function is the core part of recommendation, where various neural methods are designed to select the top-relevant items to satisfy users' information needs.
[Figure 2 graphic: representative works adapting LLMs to different pipeline stages (feature engineering, feature encoder, scoring/ranking function, pipeline controller, user data collection), including U-BERT '21, UNBERT '21, LMRecSys '21, P5 '22, PTab '22, Tiny-NewsRec '22, PREC '22, UniSRec '22, M6-Rec '22, VQ-Rec '23, MoRec '23, GPT4Rec '23, RecFormer '23, UniTRec '23, GENRE '23, AnyPredict '23, GReaT '23, FewGen '23, KGC-LLM '23, TagGPT '23, ICPC '23, BookGPT '23, TabLLM '23, LANISTR '23, StructGPT '23, Prompt4NR '23, TALLRec '23, ChatRec '23, RecLLM '23]
Figure 2: The illustration of deep learning based recommender system pipeline. We list representative works that adapt LLM to different parts of the system pipeline denoted by different colors.
|
2306.05817#9
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 9 |
In tandem with the rise of AI systems' integration with society, many legal jurisdictions have begun to propose AI regulation, which includes or mentions assessing the impact of an AI system. Regulatory bodies that have announced plans and guidelines skew heavily toward Western and East Asian governmental bodies: the European Union [74], United States of America [250], Canada [148], United Kingdom [68], South Korea [196], Japan [240], and China [69]. While many of these proposed requirements only apply to systems that fall into "high risk" categories as defined by the proposed regulation, generative AI systems are largely being scoped.
# 2.1 Methodology
|
2306.05949#9
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 9 |
# 2 MIND2WEB Dataset
Unlike existing datasets predominantly constructed within simulated environments [31, 40], our objective is to bridge the gap between simulation and reality so that agents trained on our dataset can work on real-world websites out of the box. To achieve this, our approach for data collection adheres to the following principles. Firstly, instead of recreating websites in simulation, which often leads to oversimplified environments, we engage directly with real-world websites and capture snapshots of these environments. Secondly, we collate a diverse set of websites from varied domains and crowdsource realistic tasks that cover a wide range of functionalities provided by these websites. Finally, acknowledging the challenge of perfectly replicating the complexity of real-world environments, we strive to capture a comprehensive snapshot of each website and the full interaction trace, to the extent that all the tasks can be seamlessly replayed offline. This supports rich modeling and evaluation approaches, ensuring a robust and practical dataset for research.
# 2.1 Task Definition
The primary objective of MIND2WEB is for the agent to complete a specific task on the target website through a series of actions. Each instance in our dataset contains three components:
Task description, which outlines the high-level goal of the task. We intentionally avoid low-level, step-by-step instructions, aiming to foster the development of agents that can comprehend and carry out tasks in a more autonomous fashion, rather than merely following prescriptive directives.
|
2306.06070#9
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06264
| 9 |
we will have:
$$KL_{\text{score}}(\text{prompt}) = -\sum_{k \in V} P^{a}_{k} \log\!\left(\frac{P^{b}_{k}}{P^{a}_{k}}\right) \tag{3}$$
To approximate the predicted probability distribution of language models such as GPT-3 when the full vocabulary (V) is not accessible, we adopt a specific approach. First, we obtain the top-k probable tokens (with their predicted probability) from the model before knowledge instillation (V_b) and after knowledge instillation (V_a). Then, we approximate the vocabulary by creating a new vocabulary (V′) that includes only the tokens present in V_a and V_b, along with an out-of-vocabulary (oov) token. The size of V′ is determined by the union of V_a and V_b, denoted as |V_a ∪ V_b| + 1.
Next, we uniformly distribute the missing probability mass from the sum of the top-k predictions among the remaining tokens in V′ (for both before and after knowledge instillation). This ensures that the probability distribution remains consistent even when some tokens are missing from V_a and V_b. Finally, we utilize this resultant distribution for our factual knowledge measurements. This approach allows us to approximate the predicted probability distribution of the language model despite not having access to the full vocabulary.
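A minimal sketch of this approximation follows, assuming toy top-k outputs; the token names and probability values are hypothetical, and the KL direction is one possible reading of Eq. (3), not the paper's released code.

```python
import math

def approx_distribution(topk, vocab):
    """Spread the missing probability mass uniformly over tokens of the merged
    vocabulary (V') that this model's top-k did not cover, including <oov>."""
    missing = 1.0 - sum(topk.values())
    uncovered = [t for t in vocab if t not in topk]
    fill = missing / len(uncovered) if uncovered else 0.0
    return {t: topk.get(t, fill) for t in vocab}

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p.values() if pi > 0)

def kl_divergence(p, q):
    return sum(p[t] * math.log(p[t] / q[t]) for t in p if p[t] > 0 and q[t] > 0)

# Hypothetical top-k predictions before (V_b) and after (V_a) knowledge instillation.
before_topk = {"Michelle": 0.30, "Hillary": 0.25, "Laura": 0.10}
after_topk = {"Michelle": 0.85, "Obama": 0.05, "Hillary": 0.02}

# V' = (V_a ∪ V_b) plus an out-of-vocabulary token, so |V'| = |V_a ∪ V_b| + 1.
vocab = sorted(set(before_topk) | set(after_topk)) + ["<oov>"]
p_before = approx_distribution(before_topk, vocab)
p_after = approx_distribution(after_topk, vocab)

print("entropy before:", entropy(p_before))
print("entropy after :", entropy(p_after))
print("KL(after || before):", kl_divergence(p_after, p_before))
```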
# Implicit vs Explicit Knowledge Instillation
We consider two forms of knowledge instillation for LLMs:
|
2306.06264#9
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.06283
| 9 |
One of the conclusions of this work is that without these LLMs, such projects would take many months. The diversity of topics these projects address illustrates the broad applicability of LLMs; the projects touch many different aspects of materials science and chemistry, from the wet lab to the computational chemistry lab, software interfaces, and even the classroom. While the examples below are not yet polished products, the simple observation that such capabilities could be created in hours underlines that we need to start thinking about how LLMs will impact the future of materials science, chemistry, and beyond [35]. The diverse applications show that LLMs are here to stay and are likely a foundational capability that will be integrated into most aspects of the research process. Even so, the pace of the developments highlights that we are only beginning to scratch the surface of what LLMs can do for chemistry and materials science.
|
2306.06283#9
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 9 |
in a dynamic process by ChatDB [Hu et al., 2023]. Reflexion [Shinn et al., 2023] exploits a working memory to store experiences for a dedicated task to improve the performance of the agent through several trials. However, as illustrated in Figure 1, the histories stored in working memory cannot benefit the episode for different task goals. Instead, a long-term cross-goal experience memory should be considered. MemPrompt [Madaan et al., 2022] and Ret-LLM [Modarressi et al., 2023] adopt a persistent memory to store human feedback and remind the chatbot of the conversational knowledge and improve it continuously. Voyager [Wang et al., 2023a] designs a skill library to store the past learned skills as JavaScript functions. A simple text experience pool is adopted by GITM [Zhu et al., 2023] to store the successful trajectories for future referencing. Somewhat similar to GITM, REMEMBERER adopts a persistent environment-grounded experience memory to store the experiences and assist in future decision-making even for different task goals. However, instead of plain text records of successful trajectories, REMEMBERER uses a structured memory and designs a
|
2306.07929#9
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 10 |
Core-knowledge benchmarks, including MMLU [19], HellaSwag [50], ARC [9], WinoGrande [36], HumanEval [6], GSM-8K [10], and AGIEval [51], evaluate the core capabilities of pre-trained LLMs using zero-shot and few-shot benchmark sets. They typically require LLMs to generate a short, specific answer to benchmark questions that can be automatically validated. • Instruction-following benchmarks, such as Flan [27, 46], Self-instruct [44], NaturalInstructions [28], Super-NaturalInstructions [45], expand to slightly more open-ended questions and more diverse tasks and are used to evaluate LLMs after instruction fine-tuning.
• Conversational benchmarks, like CoQA [35], MMDialog [15] and OpenAssistant [23], are closest to our intended use cases. However, the diversity and complexity of their questions often fall short in challenging the capabilities of the latest chatbots.
While largely overlooked by existing LLM benchmarks, human preferences serve as a direct measure of a chatbot's utility in open-ended, multi-turn human-AI interactions. To bridge this gap, we introduce two novel benchmarks expressly tailored to assess human preferences. Simultaneously, these benchmarks are designed to distinguish the core capabilities of state-of-the-art models.
# 2.2 MT-Bench
|
2306.05685#10
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 10 |
A deep learning model is trained by providing training data to the network and adjusting the weights of the neurons so that the overall network learns to produce a desired output. Training requires large amounts of data, especially when the data is complex, for example, when sequential relations like word order are involved. For this reason, methods such as the long-short term memory recurrent neural network (RNN) [28] have emerged, which allow neurons to be connected with a directed graph that can represent a temporal sequence, and where the output of each neuron can be fed back to the network (in a recursion of sorts). The introduction of the attention mechanism to RNN [4] enhanced the capture of long-range dependencies, leading to substantially improved performance on natural language processing. The attention mechanism further led to the transformer architecture [85], which removed recurrent connections in favor of a self-attention mechanism that improved the parallelization of training and reduced training time. The transformer architecture played a key role in the emergence of the generative pre-trained transformer (GPT) [72]. GPT was initially pre-trained (unsupervised learning) on a large data set in
|
2306.05715#10
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 10 |
# 3.3 Continuous relative depth information
We are also interested in whether a more fine-grained depth dimension exists inside the LDM. Similar to the probing of binary depth, we extract the output from self-attention layers and train a linear regressor on them to predict the MiDaS relative depth map $d_c \in \mathbb{R}^{512 \times 512 \times 1}$.
$$\hat{d}_c = \mathrm{Interp}\!\left(W\,\epsilon_\theta(l, t),\ \frac{512}{h_l},\ \frac{512}{w_l}\right) \tag{2}$$
We update $W \in \mathbb{R}^{c \times 1}$ by minimizing the Huber loss between $\hat{d}_c$ and $d_c$. We also experimented with regularizing depth predictions with a smoothness constraint [13]. However, the smoothness constraint had a negative impact on probing (see Appendix F). Following existing practices [8, 23], the accuracy of depth probing is measured using root mean squared error.
We probed the internal representations of salient regions and depth across all self-attention layers {l1, l2, . . . , l16} of LDM at all sampling steps (t = {1, 2, . . . , 15}). Probing classifiers were trained separately on each layer and step. For a fair comparison, we fixed the number of training epochs, optimization algorithm and learning rate for all training.
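As a rough sketch of this probing setup, the code below assumes feature maps `feats` of shape [N, h_l, w_l, c] already extracted from one self-attention layer at one sampling step, and MiDaS depth maps `depth` of shape [N, 512, 512]; the class name and hyperparameters are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

class DepthProbe(torch.nn.Module):
    """Linear probe: a single weight matrix W (c -> 1) applied per spatial location."""
    def __init__(self, c: int):
        super().__init__()
        self.W = torch.nn.Linear(c, 1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        d = self.W(feats).permute(0, 3, 1, 2)                         # [N, 1, h_l, w_l]
        d = F.interpolate(d, size=(512, 512),
                          mode="bilinear", align_corners=False)       # upsample as in Eq. (2)
        return d.squeeze(1)                                           # [N, 512, 512]

def train_probe(feats, depth, epochs=50, lr=1e-3):
    """Train one probe for one (layer, step) pair with a fixed training budget."""
    probe = DepthProbe(feats.shape[-1])
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.huber_loss(probe(feats), depth)                      # Huber loss vs. MiDaS depth
        loss.backward()
        opt.step()
    rmse = torch.sqrt(F.mse_loss(probe(feats), depth)).item()         # report RMSE
    return probe, rmse
```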
|
2306.05720#10
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process$-$well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05783
| 10 |
multiple-choice questions originating from diverse examinations, along with 80k multiple-choice questions generated from academic surveys, annotating all of them using the annotation model. To facilitate the usage of Xiezhi and align with the inclination to "consolidate increasing capabilities into single LLMs", we also present Xiezhi-Specialty and Xiezhi-Interdiscipline, each consisting of 15k more balanced, less sensitive, and less China-centric questions. Xiezhi-Specialty encompasses questions solvable using knowledge from a single domain, while Xiezhi-Interdiscipline incorporates questions necessitating knowledge from multiple domains for resolution.
|
2306.05783#10
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Language Processing~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 10 |
Figure 2: The illustration of deep learning based recommender system pipeline. We list representative works that adapt LLM to different parts of the system pipeline denoted by different colors.
Pipeline controller monitors and controls the operations of the recommendation pipeline mentioned above. It can even provide fine-grained control over different stages for recommendation (e.g., matching, ranking, reranking). Next, we will elaborate on the adaptation of LLM to different parts of the recommendation pipeline, except for user data collection.
|
2306.05817#10
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 10 |
# 2.1 Methodology
We convened thirty experts across industry, academia, civil society, and government to contribute to a two-part workshop series. The first workshop created a framework for defining and categorizing social impacts that can be evaluated. The second workshop examined categories' ability to be evaluated, including past approaches to evaluations and metrics, limitations, and future directions for improvements. For the first workshop, we asked experts to discuss possible impacts of systems for each of the five modalities of generative systems. For the second workshop, we created meta categories of impacts and collected existing methods for evaluation within these categories. The findings from the discussions inform our framework and evaluation method sections. Both workshops were conducted under modified Chatham House Rules, where contributors could opt in to authorship. Another workshop in the form of a CRAFT session at ACM FAccT 2023 will inform an updated version of this paper.
# 3 Related Work
|
2306.05949#10
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 10 |
Action sequence, which is the sequence of actions required to accomplish the task on the website. Each action in the sequence comprises a (Target Element, Operation) pair. The Target Element is an interactable element on the current webpage, and the Operation refers to the action to be executed on that element. We support three common operations: Click (also including Hover and Press Enter), Type, and Select Option. For Type and Select Option, an additional value is also required as an argument. Actions in a sequence often span multiple webpages of a website. A minimal sketch of this instance structure is shown after the example below.
[Figure: an illustrative Mind2Web instance]
Task Description: "Show me the reviews for the auto repair business closest to 10002."
Webpage Snapshots and Action Sequence (Target Element, Operation):
1. [searchbox] Find, TYPE: auto repair
2. [button] Auto Repair, CLICK
3. [textbox] Near, TYPE: 10002
4. [button] 10002, CLICK
5. [button] Search, CLICK
6. [switch] Show BBB Accredited only, CLICK
7. [svg], CLICK
8. [button] Sort By, CLICK
9. [link] Fast Lane 24 Hour Auto Repair, CLICK
10. [link] Read Reviews, CLICK
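To make the instance structure concrete, here is a minimal sketch of how such a (task description, action sequence) record could be represented; the class and field names are illustrative assumptions rather than the dataset's actual schema, and the usage example abbreviates the action sequence above.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class Operation(Enum):
    CLICK = "CLICK"            # also covers Hover and Press Enter
    TYPE = "TYPE"              # requires an additional value
    SELECT_OPTION = "SELECT"   # requires an additional value

@dataclass
class Action:
    target_element: str              # interactable element on the current webpage
    operation: Operation
    value: Optional[str] = None      # argument for TYPE / SELECT_OPTION

@dataclass
class Mind2WebInstance:
    task_description: str            # high-level goal, no step-by-step instructions
    action_sequence: List[Action] = field(default_factory=list)

# Abbreviated, hypothetical encoding of the example task above.
example = Mind2WebInstance(
    task_description="Show me the reviews for the auto repair business closest to 10002.",
    action_sequence=[
        Action("[searchbox] Find", Operation.TYPE, "auto repair"),
        Action("[button] Auto Repair", Operation.CLICK),
        Action("[textbox] Near", Operation.TYPE, "10002"),
        Action("[button] Search", Operation.CLICK),
        Action("[link] Read Reviews", Operation.CLICK),
    ],
)
```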
|
2306.06070#10
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06264
| 10 |
# Implicit vs Explicit Knowledge Instillation
We consider two forms of knowledge instillation for LLMs:
Explicit knowledge instillation refers to incorporating knowledge into an LLM by explicitly including it in the prompt. For instance, to incorporate information about Barack Obama's marriage into an LLM, instead of asking "Barack Obama is married to ...", we would prompt the LLM by probing "Barack Obama is married to Michelle Obama. Barack Obama is married to ...".
Implicit knowledge instillation involves incorporating knowledge into an LLM by fine-tuning the LLM on that particular knowledge. For example, we can implicitly incorporate information about Barack Obama's marriage into BERT by fine-tuning it on the prompt "Barack Obama is married to [MASK]".
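As a toy illustration of the two forms (the strings below are stand-ins from the running example; the fine-tuning step is only indicated, not implemented):

```python
fact = "Barack Obama is married to Michelle Obama."
query = "Barack Obama is married to"

# Explicit instillation: the fact is supplied in-context at inference time,
# with no parameter updates to the model.
explicit_prompt = f"{fact} {query}"
print("explicit prompt:", explicit_prompt)

# Implicit instillation: the fact becomes fine-tuning data (masked-LM style for
# BERT-like models), and the model is later probed with the bare query.
finetune_example = ("Barack Obama is married to [MASK].", "Michelle")
print("fine-tune on:", finetune_example, "then probe with:", query)
```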
Our goal in this work is to answer the research question of when it is appropriate to instill information explicitly as opposed to through fine-tuning. This is an important question to address as
Metrics | BERT | T5
---|---|---
Ranking | 51.6 | 30.9
Entropy | 72.2 | 66.4
KL-Divergence | 74.5 | 67.8
Table 1: Accuracy of the knowledge metrics in correctly assigning a higher level of knowledge to facts with better representation, evaluated by fine-tuning LLMs on synthetically provided facts.
|
2306.06264#10
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.06283
| 10 |
Table I lists the different projects created in this collaborative effort across eight countries and 22 institutions (SI section V). One might expect that 1.5 days of intense collaborations would, at best, allow a cursory exploration of a topic. However, the diversity of topics and the diversity in the participants' expertise, combined with the need to deliver a working prototype (within a short window of time) and the ease of prototyping with LLMs, generated not only many questions but also pragmatic solutions. In the remainder of this article, we focus on the insights we obtained from this collective effort. For the details of each project, we refer to the SI.
We have grouped the projects into four categories: 1. predictive modeling, 2. automation and novel interfaces, 3. knowledge extraction, and 4. education. The projects in the predictive modeling category use LLMs for classification and regression tasks, and also investigate ways to incorporate established concepts such as Δ-ML [36] or novel concepts such as "fuzzy" context into the modeling. The automation and novel interfaces
Table I: Overview of the developed tools and links to source code repositories. Full descriptions of the projects can be found in the Supplementary Material.
|
2306.06283#10
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.05685
| 11 |
# 2.2 MT-Bench
We create MT-bench, a benchmark consisting of 80 high-quality multi-turn questions. MT-bench is designed to test multi-turn conversation and instruction-following ability, covering common use cases and focusing on challenging questions to differentiate models. We identify 8 common categories of user prompts to guide its construction: writing, roleplay, extraction, reasoning, math, coding,
knowledge I (STEM), and knowledge II (humanities/social science). For each category, we then manually designed 10 multi-turn questions. Table 1 lists several sample questions.
# 2.3 Chatbot Arena
Our second approach is Chatbot Arena, a crowdsourcing benchmark platform featuring anonymous battles. On this platform, users can interact with two anonymous models simultaneously, posing the same question to both. They vote for which model provides the preferred response, with the identities of the models disclosed post-voting. After running Chatbot Arena for one month, we have collected around 30K votes. Since the platform does not use pre-defined questions, it allows gathering a wide range of unrestricted use cases and votes in the wild, based on the diverse interests of users. A screenshot of the platform can be found at Appendix C.2.
# 3 LLM as a Judge
|
2306.05685#11
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 11 |
of the generative pre-trained transformer (GPT) [72]. GPT was initially pre-trained (unsupervised learning) on a large data set in order for the model to infer fundamental rules such as grammar. This was followed by a fine-tuning phase, where the pre-trained model was further trained to handle various specific tasks such as classification, similarity detection, and so on. The original GPT had 117 million parameters (weights or neurons) and outperformed contemporary models on a number of natural language processing benchmarks [72]. Subsequent LLMs such as GPT-2 [73], GPT-3 [11], and InstructGPT [67] have built on these advances, increasing the number of parameters by several orders of magnitude and improving the fine-tuning process [11, 67, 73].
|
2306.05715#11
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 11 |
Controlled Probing Experiment: Because of the large feature space of LDM, probing classifiers may find a spurious representation of depth. We need a baseline to quantify the strength of the internal representation found by probing classifiers. For both probing tasks, we measured the baseline performance using probing classifiers trained on the internal representations of a randomized LDM.
# 4 Linear Representations of Depth and Salient Objects
For both probing tasks, we obtained high probing performance using the internal representations of LDM, especially in the later denoising steps. As shown in Figure 2, the performance increased significantly in the first 5 steps and gradually converged in the late denoising process. At the last step, probing classifiers achieved an average Dice coefficient of 0.85 for salient object segmentation and an average RMSE of 0.47 for depth estimation when given the decoder-side self-attention outputs as input.
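A linear probe of this kind can be trained with ordinary linear models on flattened intermediate activations. The sketch below is a minimal illustration under assumed array shapes and file names; it is not the authors' released code.

```python
# Minimal sketch of linear probing on pre-extracted LDM activations.
# File names and array shapes are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

acts = np.load("activations_step15.npy")      # (num_images, h, w, channels) self-attention outputs
salient_masks = np.load("salient_masks.npy")  # (num_images, h, w) binary salient-object labels
depth_maps = np.load("depth_maps.npy")        # (num_images, h, w) continuous depth labels

X = acts.reshape(-1, acts.shape[-1])          # one feature vector per spatial location

# Salient-object probe: per-location binary classification with a linear model
clf = LogisticRegression(max_iter=1000).fit(X, salient_masks.reshape(-1))

# Depth probe: per-location linear regression
reg = LinearRegression().fit(X, depth_maps.reshape(-1))

# Repeating the same procedure on a randomized model gives the baseline
# against which the probing performance is compared.
```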
|
2306.05720#11
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process$-$well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05783
| 11 |
To give more precise evaluation results, we propose a new evaluation setting in this paper. Whereas previous researchers use only 4 options, we set 50 options for each multiple-choice question, which significantly reduces the accuracy of random guessing and thus better reveals the model's real capabilities. Whereas previous researchers use instructions to query the choice made by each model, we rank all options by each model's generation probability, which avoids inaccurate evaluations caused by a model's inability to answer multiple-choice questions or by errors in extracting answers from the generated content.
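Ranking options by generation probability amounts to scoring each candidate answer with the model's log-likelihood conditioned on the question and sorting the candidates. The following is a minimal sketch with Hugging Face transformers; the model name is a placeholder and this is our own illustration, not the benchmark's released code.

```python
# Sketch: rank multiple-choice options by a causal LM's log-likelihood of
# generating each option after the question. "gpt2" is only a placeholder model.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def option_logprob(question: str, option: str) -> float:
    """Sum of token log-probabilities of `option` given `question` as prefix.
    Assumes the question tokens form a clean prefix of the concatenation."""
    q_len = tokenizer(question, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(question + " " + option, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)   # row i predicts token i+1
    target_ids = full_ids[0, 1:]
    option_rows = range(q_len - 1, full_ids.shape[1] - 1)   # rows predicting option tokens
    return sum(log_probs[i, target_ids[i]].item() for i in option_rows)

def rank_options(question: str, options: list) -> list:
    """Return the options sorted from most to least likely under the model."""
    return sorted(options, key=lambda o: option_logprob(question, o), reverse=True)
```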
|
2306.05783#11
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 11 |
2.1 LLM for Feature Engineering In the feature engineering stage, LLM takes as input the original data (e.g., user profiles, item descriptions), and generates auxiliary textual features as data augmentations, where prompting and templating techniques are involved to extract the open-world knowledge and reasoning ability from the LLM. GReaT [Borisov et al., 2023] tunes a generative language model to synthesize realistic tabular data as augmentations for the training phase. Carranza et al. [2023] explore to train a differentially private (DP) large language model for synthetic user query generation, in order to address the privacy problem in recommender systems. GENRE [Liu et al., 2023c] applies manually designed prompts to obtain additional news summarization, user profiles, and synthetic news pieces for news recommendation. KAR [Xi et al., 2023b] extracts the reasoning knowledge on user preferences and the factual knowledge on items from LLM, which can be proactively acquired by the designed factorization prompting. The obtained knowledge serves as augmented features to promote the performance
|
2306.05817#11
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 11 |
# 3 Related Work
Toolkits and repositories for evaluating qualitative aspects of AI systems are broad and constantly evolving. Many are aimed at public agency procurement and deployment. In 2018, AI Now released their framework for algorithmic impact assessments aimed at public agencies [63]. Many public interest organizations and government initiatives have since published frameworks and assessment tools, such as the OECD's Classification Framework for AI risks [168] and Canada's Algorithmic Impact Assessment Tool [247]. The U.S. National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework (AI RMF) [159] is also intended to be applicable to all AI systems, although specific applications to generative AI systems are in progress.
Evaluation suites across system characteristics for specific generative system modalities, such as language, include Holistic Evaluation of Language Models (HELM) [139], BigBench [232], Language Model Evaluation Harness [85]. These evaluation suites incorporate capabilities evaluations as well as evaluations across the categories in this paper, and are similarly living resources. We are not aware of research on evaluation or an evaluation suite dedicated to social impacts or across modalities.
|
2306.05949#11
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 11 |
Figure 2: A sample data instance of our dataset with the three components. Actions marked in red will result in a transition to a new webpage.
Webpage snapshots, which constitute the environment within which the task is performed. We provide the snapshots in a variety of formats to accommodate different modeling approaches: self-contained MHTML file that includes the raw HTML code of the webpage, DOM snapshot containing the DOM tree along with the layout and style information of the screenshot of the rendered webpage, HAR file that includes all the network traffic for replaying the interaction if needed, and trace file that comprises the complete interaction trace during the task annotation process.
The agent receives the task description in the beginning. At each step, it also receives the current webpage and the history of previous actions. The objective is to accurately predict the subsequent action, which encompasses the target element for interaction and the operation.
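In code, the step interface implied by this setup can be summarized as below; the class and function names are our own illustrative assumptions, not the dataset's released API.

```python
# Sketch of the agent's step interface: given the task, the current webpage,
# and the action history, predict the next (target element, operation) pair.
# Names and types are illustrative assumptions, not the released Mind2Web API.
from dataclasses import dataclass

@dataclass
class Action:
    element_id: str     # identifier of the target DOM element
    operation: str      # e.g. "CLICK", "TYPE", "SELECT"
    value: str = ""     # text to type or option to select, if any

def predict_next_action(task: str, webpage_html: str, history: list[Action]) -> Action:
    """Placeholder for a model that ranks candidate elements and operations."""
    raise NotImplementedError
```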
# 2.2 Data Collection
Our data collection process consists of four stages: website selection, task proposal, task demonstra- tion, and task verification. Website selection and task verification are done by the authors. For task proposal and demonstration, we develop a sophisticated annotation tool using Playwright 2 and hire annotators through Amazon Mechanical Turk. Refer to Supplementary for annotation tool details.
|
2306.06070#11
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06264
| 11 |
Table 1: Accuracy of knowledge metrics in correctly assigning a higher level of knowledge to the facts with better representation is evaluated by fine-tuning LLMs on the synthetically provided facts.
fine-tuning (implicit instillation) can be costly and may not even be feasible for LLMs such as GPT-3 (Brown et al., 2020) and GPT-4 (OpenAI, 2023). By comparing the two forms of knowledge instillation, we aim to determine the conditions under which explicit instillation is accurate and effective.
# 4 Experiment Setup
Datasets We conducted various experiments on fact-checking benchmarks T-REx (Elsahar et al., 2018) and LAMA (Petroni et al., 2019) to assess different knowledge metrics. To compare implicit and explicit knowledge instillation in Section 5.2, we randomly sampled 100 facts from T-REx for each relation that appears in LAMA (more details in Appendix).
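Once the model's prediction distributions over candidate answers are extracted before and after the target fact is provided, the knowledge measurement reduces to comparing the two distributions. A minimal sketch of that comparison is given below; it assumes the two probability vectors have already been obtained and is not the paper's released implementation.

```python
# Sketch: entropy / KL-divergence style knowledge measurement, assuming
# p_before and p_after are the model's probability distributions over candidate
# answers before and after instilling the target fact.
import numpy as np
from scipy.stats import entropy

def knowledge_scores(p_before: np.ndarray, p_after: np.ndarray) -> dict:
    return {
        # low entropy before instillation suggests the model is already confident
        "entropy_before": float(entropy(p_before)),
        # small divergence suggests the instilled fact changed the prediction little
        "kl_after_vs_before": float(entropy(p_after, p_before)),
    }
```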
|
2306.06264#11
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.06283
| 11 |
name | authors | links
Predictive modeling:
- Accurate Molecular Energy Predictions | Ankur K. Gupta, Garrett W. Merz, Alishba Imran, Wibe A. de Jong | ankur56/ChemLoRA, 10.5281/zenodo.8104930
- Text2Concrete | Sabine Kruschwitz, Christoph Völker, Ghezal Ahmad Zia | ghezalahmad/LLMs-for-the-Design-of-Sustainable-Concretes, 10.5281/zenodo.8091195
- Molecule Discovery by Context | Zhi Hong, Logan Ward | globuslabs/ScholarBERT-XL, 10.5281/zenodo.8122087
- Genetic algorithm without genes | Benjamin Weiser, Jerome Genzling, Nicolas Gastellu, Sylvester Zhang, Tao Liu, Alexander Al-Feghali, Nicolas Moitessier, Anne Labarre, Steven Ma | BenjaminWeiser/LLM-Guided-GA, 10.5281/zenodo.8125541
- Text-template paraphrasing | Michael Pieler | micpie/text-template-paraphrasing-chemistry, 10.5281/zenodo.8093615
Automation and novel interfaces: BOLLaMa, sMolTalk
|
2306.06283#11
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 11 |
LLM learning from failure Learning from failure is one of the characteristic capabilities of humans and has become an important topic for general artificial intelligence. Some work has explored the ability of the LLM to learn from its failure [Huang et al., 2022b, Raman et al., 2022, Wang et al., 2023b]. Nonetheless, most of such work takes advantage of immediate feedback from the environment, and the correction is used only once. In practice, several late failures may be due to some early mistaken actions in an episode. Reflexion [Shinn et al., 2023] designs a heuristic function to detect late failure from the interaction history and stores the LLM-generated reflection in a working memory for use in the next trial. However, these reflections cannot be applied to different task goals. Madaan et al. [2022] stores the failure corrections for the long term, but relies on human feedback. In contrast, REMEMBERER adopts RL to learn from both late and immediate failures from the environment rewards, without the need for human feedback. Also, REMEMBERER enables the experiences to be reused in future episodes, even for a different task goal, with a long-term experience memory.
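A minimal sketch of such a long-term experience memory, updated with a simple Q-learning rule and queried for exemplars, is shown below; it is our own simplification of the idea, not the released REMEMBERER implementation.

```python
# Sketch: a long-term experience memory keyed by (task, observation), storing
# per-action value estimates updated with a Q-learning rule. The retrieved
# best/worst past actions can be inserted as exemplars into the LLM prompt.
from collections import defaultdict

class ExperienceMemory:
    def __init__(self, alpha: float = 0.1, gamma: float = 0.9):
        self.q = defaultdict(dict)          # (task, observation) -> {action: value}
        self.alpha, self.gamma = alpha, gamma

    def update(self, task, obs, action, reward, next_obs):
        next_best = max(self.q[(task, next_obs)].values(), default=0.0)
        old = self.q[(task, obs)].get(action, 0.0)
        self.q[(task, obs)][action] = old + self.alpha * (reward + self.gamma * next_best - old)

    def retrieve(self, task, obs, k: int = 3):
        """Return up to k highest- and lowest-valued past actions for this state."""
        ranked = sorted(self.q[(task, obs)].items(), key=lambda kv: kv[1], reverse=True)
        return ranked[:k], ranked[-k:]
```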
|
2306.07929#11
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 12 |
# 3 LLM as a Judge
While our initial evaluations using MT-bench and Chatbot Arena rely on human ratings, collecting human preferences can be costly and laborious [44, 38, 31, 2, 13]. To overcome this, we aim to develop a more scalable and automated approach. Given that most questions in MT-bench and Chatbot Arena are open-ended without reference answers, devising a rule-based program to assess the outputs is extremely challenging. Traditional evaluation metrics based on the similarity between outputs and reference answers (e.g., ROUGE [25], BLEU [32]) are also ineffective for these questions.
As LLMs continue to improve, they show potential in replacing human annotators in many tasks [17, 20]. Specifically, we are interested in whether LLMs can effectively evaluate the responses of chat assistants and match human preferences. Next, we discuss the use and limitations of LLM-as-a-judge.
# 3.1 Types of LLM-as-a-Judge
We propose 3 LLM-as-a-judge variations. They can be implemented independently or in combination:
• Pairwise comparison. An LLM judge is presented with a question and two answers, and tasked to determine which one is better or declare a tie. The prompt used is given in Figure 5 (Appendix).
• Single answer grading. Alternatively, an LLM judge is asked to directly assign a score to a
|
2306.05685#12
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 12 |
Discussions about LLMs often feature humanizing phrases such as "hallucination" [35] or "the AI thinks X." Nevertheless, and despite the dramatic advances, LLMs are at heart probabilistic models whose behavior is determined by data. Any output generated by an LLM is based on the input (the prompt) and the previously learned parameters.
2.2 Large Language Models in CER The emergence of large language models has sparked significant interest within CER, too [6, 52]. Some of the initial studies focused on the performance of LLMs on introductory programming problems. For example, Finnie-Ansley et al. [22] noted that the Codex LLM performed better than most introductory-level students, and similar observations were made in a data structures course as well [23]; others have reported somewhat lower performance for GitHub Copilot, which is built on top of Codex [89]. Researchers have also evaluated LLMs' usefulness for creating new, personalized programming exercises [76] and explored "robosourcing" [18], where LLMs generate input for learnersourcing, that is, students take LLM-generated materials and improve on them.
|
2306.05715#12
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 12 |
The Dice coefficient suggests that our probing classifiers achieved high-quality salient object segmentation in the late denoising steps. As Figure 3a shows, the probing classifier captured salient objects in various scenes when inferring on a decoder-side self-attention output at the last denoising step. Even though the spatial dimension of the input representations is only 1/16 that of the output images, the predicted masks still catch fine-grained details such as the limbs of humans and the legs of chairs. Our synthetic dataset covers a variety of objects, including humans, vehicles, furniture, animals, machinery, etc. It's unlikely the linear classifiers learned to select features corresponding to certain kinds of objects or memorized their locations in images. Probing classifiers also obtained accurate depth estimations at late denoising steps. As shown in Figure 3b, the probing predictions matched the synthetic depth maps for images of both indoor and outdoor scenes.
In the controlled experiment, the probing classifiers performed significantly worse when trained on the randomized LDM. At the last denoising step, the highest Dice coefficient achieved on the randomized LDM was only 0.30 and the smallest RMSE in depth estimation was only 0.95. These substantial differences indicate the internal representations of trained LDM have a strong correlation with the depth which cannot be easily captured by a random model.
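The two probing metrics reported above are straightforward to compute; the sketch below is a generic illustration of both.

```python
# Sketch: the two evaluation metrics used for the probes.
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Overlap measure for binary salient-object masks (1.0 = perfect overlap)."""
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return float(2.0 * intersection / (pred.sum() + true.sum()))

def rmse(pred_depth: np.ndarray, true_depth: np.ndarray) -> float:
    """Root-mean-square error for continuous depth predictions."""
    return float(np.sqrt(np.mean((pred_depth - true_depth) ** 2)))
```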
|
2306.05720#12
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process$-$well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05783
| 12 |
To provide a detailed analysis of the current development status of LLMs, and to demonstrate the effectiveness of the Xiezhi Benchmark and our proposed evaluation process, we conduct experiments on 47 recently released LLMs across four benchmarks proposed in different works, all under our evaluation setting. The experiments encompass 0-shot, 1-shot, and 3-shot configurations, with all LLMs evaluated on both the Chinese and English versions of Xiezhi. This enables us to analyze each LLM's results based on its optimal performance. Results show that the best-performing LLMs have surpassed the level of average practitioners in science, engineering, agronomy, and medicine, but humans still greatly outperform all LLMs in the domains of economics, jurisprudence, pedagogy, literature, history, and management. We also examined the differences in performance of various LLMs across different benchmarks. Our experiments demonstrate that Xiezhi, which covers the most domains, contains the largest number of questions, and consists of the freshest data, is also best at discerning capability differences between LLMs ranging from GPT-4 to models with only 560M parameters, and is hence the benchmark most suitable for evaluating LLMs at all levels of ability.
# 2 Related Works
# 2.1 Large Language Models
|
2306.05783#12
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 12 |
knowledge on items from LLM, which can be proactively acquired by the designed factorization prompting. The obtained knowledge serves as augmented features to promote the performance of recommendation models in a model-agnostic manner. MINT [Mysore et al., 2023] instructs LLM to generate synthetic queries from existing user-item interactions and thus enrich the training set for narrative-driven recommendations. AnyPredict [Wang et al., 2023] leverages LLM to consolidate datasets with different feature fields, and align out-domain datasets for a shared target task. Other works also utilize LLM to further enrich the training data from different perspectives, e.g., knowledge graph completion [Chen et al., 2023], tag generation [Li et al., 2023a], and user interest modeling [Christakopoulou et al., 2023].
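As a concrete illustration of this prompting-based feature engineering, the sketch below asks an LLM to summarize a user's interests as an auxiliary textual feature; the `call_llm` helper and the prompt wording are hypothetical stand-ins, not any specific system described above.

```python
# Sketch: use an LLM to synthesize an auxiliary user-profile feature from raw
# interaction data. `call_llm` is a hypothetical wrapper around any chat API.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client here")

def augment_user_profile(read_titles: list[str]) -> str:
    prompt = (
        "Below are news articles a user recently read:\n"
        + "\n".join(f"- {title}" for title in read_titles)
        + "\nIn one sentence, summarize the user's topical interests."
    )
    # The returned text is stored as an extra feature and fed, e.g. after
    # encoding, to the downstream recommendation model.
    return call_llm(prompt)
```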
|
2306.05817#12
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 12 |
Technical evaluation suites are often specific to a type of system and harm; for example, biases in natural language processing systems [33]. Partnership on AI's ABOUT ML (Annotation and Benchmarking on Understanding and Transparency of Machine learning Lifecycles) project crafted a resource library for developers, deployers, and procurers to better document the system lifecycle [176]. Auditing frameworks (e.g., [190]) are powerful tools that necessarily depend on the sector of deployment. A growing literature taxonomizes dangers [26], social impacts [110], sociotechnical harms [219], and social risks of all [80] or certain generative AI systems like language models [258], but evaluating these risks and impacts is a complementary yet distinct ongoing research area.
# 4 Categories of Social Impact
|
2306.05949#12
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 12 |
Website Selection. We start with 5 top-level domains: Travel, Shopping, Service, Entertainment, and Information, which are subsequently broken down into 31 (secondary) domains. We select websites within each domain based on their popularity in the US, as ranked by similarweb.com. We manually select 3-5 representative websites per domain, resulting in a collection of 137 websites in total.
Task Proposal. We present the annotators with a target website, a concise description of the website, and a few sample tasks associated with it. The annotators are then asked to propose open-ended and realistic tasks based on three criteria: the tasks should be of diverse types, require multiple rounds of interaction, and describe the high-level goal instead of step-by-step instructions. To further stimulate creativity and boost diversity, we use ChatGPT to generate seed tasks by prompting it to test different functionalities of a website. We generate 50 seed tasks per website, of which 10 are randomly sampled and presented to the annotator each time. These seed tasks are mainly for inspiration; annotators are explicitly instructed not to directly use them, and we reject task proposals that are highly similar to the seed tasks. All the proposed tasks are further screened by the authors to ensure quality and diversity before entering the demonstration phase.
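A sketch of how such seed tasks could be generated by prompting an LLM is given below; the `call_llm` helper and the prompt wording are our own assumptions, not the exact prompt used for Mind2Web.

```python
# Sketch: generate seed tasks for a website by prompting an LLM.
# `call_llm` is a hypothetical wrapper around whichever chat API is used.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client here")

def generate_seed_tasks(website: str, description: str, n: int = 50) -> list[str]:
    prompt = (
        f"Website: {website}\nDescription: {description}\n"
        f"Propose {n} diverse, realistic user tasks for this website, each testing "
        "a different functionality and described as a high-level goal. One task per line."
    )
    lines = call_llm(prompt).splitlines()
    return [line.strip("- ").strip() for line in lines if line.strip()]
```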
|
2306.06070#12
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06264
| 12 |
Models For our evaluations, we utilized two popular large language models, BERT (Devlin et al., 2019) and T5 (Raffel et al., 2019), to gauge the accuracy of various knowledge metrics and to compare the effectiveness of explicit and implicit knowledge instillation techniques. Additionally, we employed InstructGPT (text-davinci-003) (Ouyang et al., 2022) and FLAN-T5 (XL) (Chung et al., 2022) to investigate the applicability of our proposed methods across different tasks for in-context learning based models.
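For the masked-language-model case, the prediction probability distribution that such metrics operate on can be read off the [MASK] position. The sketch below is our own illustration using Hugging Face transformers, not the paper's released code.

```python
# Sketch: extract BERT's probability distribution over the masked object of a
# fact, the quantity that entropy / KL-divergence metrics are computed from.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def mask_distribution(cloze: str) -> torch.Tensor:
    """`cloze` must contain a single [MASK], e.g. 'Paris is the capital of [MASK].'"""
    inputs = tokenizer(cloze, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    return torch.softmax(logits[0, mask_pos], dim=-1)   # distribution over the vocabulary
```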
# 5 Experiments
In this section, we first evaluate the accuracy of various metrics in measuring an LLM's knowledge of a fact. We then explore the differences between implicit and explicit knowledge instillation, determining when it is appropriate to instill knowledge explicitly. Lastly, we examine the application of our proposed metrics in in-context learning based models.
# 5.1 Accuracy of Knowledge Measurements
As we lack access to the amount of knowledge that language models possess for any given fact,
|
2306.06264#12
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.06283
| 12 |
[Table residue: overview of hackathon projects, authors, code repositories, and archived records.] Automation and novel interfaces: BOLLaMa (Bojana Ranković, Andres M. Bran, Philippe Schwaller; doncamilom/BOLLaMa; 10.5281/zenodo.8096827), sMolTalk (Jakub Lála, Sean Warren, Samuel G. Rodriques; jakublala/smoltalk-legacy; 10.5281/zenodo.8081749), MAPI-LLM (Mayk Caldas Ramos, Sam Cox, Andrew White; maykcaldas/MAPI_LLM; 10.5281/zenodo.8097336), Conversational ELN interface (Whinchat) (Joshua D. Bocarsly, Matthew L. Evans, Ben E. Smith; the-grey-group/datalab; 10.5281/zenodo.8127782). Knowledge extraction: InsightGraph (Defne Circi, Shruti Badhwar), Extracting Structured Data from Free-form Organic Synthesis Text (Qianxiang Ai, Jacob N. Sanders, Jiale Shi, Stefan ...), TableToJson: Structured information from scientific data in tables, AbstractToTitle & TitleToAbstract: text summarization and generation.
|
2306.06283#12
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 12 |
LLM for decision-making The powerful capability of LLMs is exploited by recent work [Huang et al., 2022a, Raman et al., 2022, Mees et al., 2022, Chen et al., 2022, Ichter et al., 2022, Huang et al., 2022b, Liang et al., 2022] to generate better control plans for various robots and agents. Kim et al. [2023] and Zhang et al. [2023] design LLM-based agents for user interface (UI) interaction. ReAct [Yao et al., 2022b] combines action decisions with natural language reasoning and achieves promising performance. To the best of our knowledge, this work is the first to combine an LLM-based agent with an RL algorithm so that the agent learns from interaction experiences and evolves on its own.
The proposed REMEMBERER equips the LLM with an external experience memory to help it learn from both successful and failed experiences, and it is thereby the first work to combine an LLM-based agent with an RL algorithm to improve the agent's capability.
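A minimal sketch of the RLEM idea as described here (not the authors' implementation): an external memory stores Q-value estimates for (observation, action) pairs, is updated with a standard Q-learning rule from each transition (o_t, a_t, r_t, o_{t+1}), and its highest-valued entries can be retrieved as exemplars for the LLM prompt. The keying and retrieval scheme below is an assumed simplification.

```python
from collections import defaultdict

class ExperienceMemory:
    """External memory of Q-value estimates, updated by Q-learning (RLEM sketch)."""

    def __init__(self, alpha=0.1, gamma=0.9):
        self.q = defaultdict(float)       # (observation, action) -> Q estimate
        self.actions = defaultdict(set)   # observation -> actions seen so far
        self.alpha, self.gamma = alpha, gamma

    def update(self, obs, act, reward, next_obs):
        """Q-learning update from one transition (o_t, a_t, r_t, o_{t+1})."""
        next_vals = [self.q[(next_obs, a)] for a in self.actions[next_obs]] or [0.0]
        target = reward + self.gamma * max(next_vals)
        self.q[(obs, act)] += self.alpha * (target - self.q[(obs, act)])
        self.actions[obs].add(act)

    def retrieve(self, k=2):
        """Return the k highest-valued experiences to inject as prompt exemplars."""
        return sorted(self.q.items(), key=lambda kv: kv[1], reverse=True)[:k]

mem = ExperienceMemory()
mem.update(obs="search page", act="type query", reward=0.0, next_obs="results page")
mem.update(obs="results page", act="click first result", reward=1.0, next_obs="item page")
print(mem.retrieve(k=1))   # experiences reused as exemplars in the LLM prompt
```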
Figure 2: Pipeline of RLEM and architecture of REMEMBERER
# 3 Method
|
2306.07929#12
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 13 |
single answer. The prompt used for this scenario is in Figure 6 (Appendix).
⢠Reference-guided grading. In certain cases, it may be beneficial to provide a reference solution if applicable. An example prompt we use for grading math problems is in Figure 8 (Appendix).
These methods have different pros and cons. For example, the pairwise comparison may lack scalability when the number of players increases, given that the number of possible pairs grows quadratically; single answer grading may be unable to discern subtle differences between specific pairs, and its results may become unstable, as absolute scores are likely to fluctuate more than relative pairwise results if the judge model changes.
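To make the scalability point concrete, a small sketch with illustrative numbers only: with n models to compare, pairwise comparison needs on the order of n(n-1)/2 judgments per question, while single answer grading needs only n.

```python
from math import comb

for n in (2, 5, 10, 50):
    pairwise = comb(n, 2)   # number of distinct model pairs per question
    single = n               # one grading call per model per question
    print(f"n={n:>3}: pairwise judgments={pairwise:>5}, single-answer gradings={single:>3}")
```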
# 3.2 Advantages of LLM-as-a-Judge
LLM-as-a-judge offers two key benefits: scalability and explainability. It reduces the need for human involvement, enabling scalable benchmarks and fast iterations. Additionally, LLM judges provide not only scores but also explanations, making their outputs interpretable, as shown in Figure 1.
# 3.3 Limitations of LLM-as-a-Judge
We identify certain biases and limitations of LLM judges. However, we will also present solutions later and show the agreement between LLM judges and humans is high despite these limitations.
|
2306.05685#13
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 13 |
Another line of work in CER [43, 51, 53, 76] has looked at code explanations constructed by the Codex and GPT-3 LLMs, which have been optimized for source code and natural language, respectively. Overall, LLMs have been found capable of explaining source code in natural language, which can be helpful for novices; there is some evidence that GPT-3 outperforms Codex [51], and that LLM-generated code explanations may be of higher quality than
those created by students [43]. Recent work has also explored using Codex to explain and enhance error messages [45].
Classroom evaluations are still relatively rare, as sufficiently performant LLMs emerged only very recently. Most research in CER has involved expert evaluations (e.g., [45, 76]) or lab studies (e.g., [71]). A notable exception is the work of MacNeil et al. [51], who evaluated LLM-generated code explanations in an online course on web software development; another is the controlled study by Kazemitabaar et al. [39], where a group of novices with access to Codex outperformed a control group on code-authoring tasks.
|
2306.05715#13
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 13 |
[Figure: two panels, "Probing Performance for Salient Object Segmentation" and "Probing Performance for Monocular Depth Estimation", plotting probing performance at each of denoising steps 1-15, with a random baseline.]
Figure 2: Probing performance of salient object segmentation and depth estimation on test set. For both tasks, performance grew rapidly in the early denoising process and converged after step 5. The black dashed line is the baseline performance measured on a randomized LDM at step 15, significantly lower than its trained counterpart.
|
2306.05720#13
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process$-$well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05783
| 13 |
# 2 Related Works
# 2.1 Large Language Models
Recently, various companies released their LLMs, such as BARD, ERNIE Bot, Bloom Scao et al. (2022), pythia Biderman et al. (2023), Llama Touvron et al. (2023), Claude, ChatGPT OpenAI (2023a), GPT-4 OpenAI (2023b), and ChatGLM. Apart from their outstanding performance on trained tasks, researchers have also discovered that they exhibit emergent, strong performance on many unseen tasks Zhou et al. (2023); Chung et al. (2022). Consequently, the evaluation of LLMs' capabilities should focus more on a wide range of tasks over numerous diverse domains and contain samples with different difficulty levels.
|
2306.05783#13
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 13 |
# 2.2 LLM as Feature Encoder
In conventional recommender systems, the structured data are usually formulated as one-hot encodings, and a simple embedding layer is adopted as the feature encoder to obtain dense embeddings. With the emergence of language models, researchers propose to adopt LLM as auxiliary textual feature encoders to gain two major benefits: (1) further enriching the user/item representations with semantic information for the later neural recommendation models; (2) achieving cross-domain recommendation with natural language as the bridge, where feature fields might not be shared.
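As a rough sketch of this LLM-as-feature-encoder pattern (the model choice, item texts, and [CLS] pooling below are illustrative assumptions, not a specific method from the surveyed papers), the snippet encodes item descriptions with a pretrained BERT model to obtain dense item features for a downstream recommendation model.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()

item_texts = [
    "Wireless noise-cancelling headphones with 30h battery life",
    "Stainless steel French press, 1 litre",
]

with torch.no_grad():
    batch = tokenizer(item_texts, padding=True, truncation=True, return_tensors="pt")
    outputs = encoder(**batch)
    # [CLS] token embedding as a dense item representation (768-dim for BERT-base)
    item_embeddings = outputs.last_hidden_state[:, 0, :]

print(item_embeddings.shape)   # torch.Size([2, 768]) -> fed into the recommender
```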
|
2306.05817#13
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 13 |
# 4 Categories of Social Impact
We divide impacts into two categories for evaluation: what can be evaluated in a technical system and its components, and what can be evaluated among people and society. The former section includes evaluations for base systems and evaluations popularly run or published in top AI conferences. Base systems refer to AI systems, including models and components, that have no predetermined application. The latter section examines systems in context and includes recommendations for infrastructurally mitigating harmful impacts. Both sections can have overlap as the same category can be evaluated differently in a system (see 4.1.4 Privacy and Data Protection) and impact on people and society (see 4.2.1.3 Personal Privacy and Sense of Self).
# Impacts: The Technical Base System
Below we list the aspects relatively able to be evaluated in a generative system from training to deployment testing. These categories, and the suggested evaluations afford application and use-case independent tests of the base model. Evaluation of base systems can be qualitative or quantitative, but only provide a narrow insight into the described aspect of the type of generative AI system. The depth of literature and research on evaluations differ by modality, but the themes for evaluations can be applied to most systems.
The following categories are high-level, non-exhaustive, and present a synthesis of the findings across different modalities. They refer solely to what can be evaluated in a base technical system:
|
2306.05949#13
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 13 |
Task Demonstration. We develop a Playwright-based tool for demonstration (Figure 2). Workers will use the tool to demonstrate how to perform the tasks they have proposed within a web browser. To ensure accuracy, each interaction round is split into two parts: element selection and operation selection. At each step, the worker first selects an element on the webpage by clicking within the browser. They are then asked to confirm the selection and choose the operation to execute on the selected element. Once the task is completed, the worker is given another opportunity to review and modify the task description.
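The annotation tool itself is not included in this excerpt; the sketch below only illustrates the two-part interaction round (element selection, then operation) with the Playwright Python API, using a hypothetical action trace, placeholder selectors, and a stand-in page rather than the workers' real websites.

```python
from playwright.sync_api import sync_playwright

# Hypothetical demonstration: each step is (selector, operation, value).
steps = [
    ("#q",  "fill",  "wireless headphones"),   # element selection + TYPE operation
    ("#go", "click", None),                    # element selection + CLICK operation
]

trace = []
with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    # Stand-in page content; the real tool loads the actual website for the task.
    page.set_content("<input id='q'><button id='go'>Search</button>")
    for selector, op, value in steps:
        if op == "fill":
            page.fill(selector, value)
        elif op == "click":
            page.click(selector)
        trace.append({"selector": selector, "operation": op, "value": value})
    browser.close()

print(trace)   # recorded action sequence for the demonstrated task
```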
# 2https://playwright.dev/
Table 1: Statistics of MIND2WEB compared with existing datasets.
|
2306.06070#13
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06264
| 13 |
we conducted a synthetic experiment to evaluate the accuracy of different metrics. We fine-tuned BERT/T5 on a fill-in-the-blank task to create a gold label for these models' knowledge of a specific fact. For each relation in LAMA, we collected instances where ranking metrics performed poorly, i.e., facts that the models lacked knowledge of. Then, we iteratively removed parts of the prompts corresponding to those facts for each relation to create instances that conveyed less and less information to the model. For instance, for the relation is married to, starting with instances (1) John is married to [Niki], (2) Mark is married to [Emma], (3) Liam is married to [Ava], (4) William is married to [Sophia], and (5) Noah is married to [Katherine], we modified them to (1) John is married to [Niki], (2) Mark married to [Emma], (3) Liam to [Ava], (4) William [Sophia], and (5) Noah. We then fine-tuned the models to predict the object over the modified instances. Finally, we evaluated the fine-tuned models over the initial examples and calculated the average
|
2306.06264#13
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.06283
| 13 |
[Table residue, continued: knowledge-extraction and education projects with authors, code repositories, and archived records.] Knowledge extraction: InsightGraph (Defne Circi, Shruti Badhwar; defnecirci/InsightGraph; 10.5281/zenodo.8092575), Extracting Structured Data from Free-form Organic Synthesis Text (Qianxiang Ai, Jacob N. Sanders, Jiale Shi, Stefan Bringuier, Brenden Pelkie, Marcus Schwarting; qai222/LLM_organic_synthesis; 10.5281/zenodo.8091902), TableToJson: Structured information from scientific data in tables (María Victoria Gil; vgvinter/TableToJson; 10.5281/zenodo.8093731), AbstractToTitle & TitleToAbstract: text summarization and generation (Kamal Choudhary; usnistgov/chemnlp; 10.5281/zenodo.8122419). Education: Beatriz Mouriño, Elias Moubarak, ...
|
2306.06283#13
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.05685
| 14 |
We identify certain biases and limitations of LLM judges. However, we will also present solutions later and show the agreement between LLM judges and humans is high despite these limitations.
Position bias is when an LLM exhibits a propensity to favor certain positions over others. This bias is not unique to our context and has been seen in human decision-making [3, 34] and other ML domains [22, 41].
Figure 11 (Appendix) shows an example of position bias. GPT-4 is tasked to evaluate two responses from GPT-3.5 and Vicuna-13B to an open-ended question. When GPT-3.5's answer is positioned
Table 2: Position bias of different LLM judges. Consistency is the percentage of cases where a judge gives consistent results when swapping the order of two assistants. "Biased toward first" is the percentage of cases when a judge favors the first answer. "Error" indicates wrong output formats. The two largest numbers in each column are in bold.
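A small sketch of how the consistency and "biased toward first" statistics in a table like this could be computed from paired judgments, where each record holds the judge's verdict with the original answer order and with the two answers swapped; the verdict encoding ("A", "B", "error") is an assumption for illustration.

```python
# Each record: (verdict_with_original_order, verdict_with_swapped_order).
judgments = [
    ("A", "B"),      # consistent: the same underlying assistant wins after the swap
    ("A", "A"),      # inconsistent: the judge favors whichever answer comes first
    ("B", "A"),      # consistent
    ("A", "error"),  # wrong output format
]

def consistent(first, second):
    # The winner should flip labels when the positions are swapped.
    return {first, second} == {"A", "B"}

n = len(judgments)
consistency = sum(consistent(a, b) for a, b in judgments) / n
biased_first = sum(a == "A" and b == "A" for a, b in judgments) / n
errors = sum("error" in (a, b) for a, b in judgments) / n

print(f"consistency={consistency:.0%}, biased toward first={biased_first:.0%}, error={errors:.0%}")
```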
|
2306.05685#14
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 14 |
As noted above, an LLM's outputs are determined by prompts and the model's parameters. Coming up with good inputs is key to generating meaningful output, so it makes sense that much of the LLM-based work in CER has involved some prompt engineering. As an example, Denny et al. [14] improved the performance of GitHub Copilot on introductory programming exercises from approximately 50% to 80% by exploring alternative prompts. Similarly, Leinonen et al. [45] explored five different prompts for enhancing programming error messages and chose the prompt that led to the best initial results. Prompt engineering may also involve a comparison of different LLMs [51]. For a literature review on prompting (from a machine learning perspective), see Liu et al. [49].
To the best of our knowledge, there is no prior work on how LLMs perform on responding to help requests on programming problems, that is, scenarios where students have explicitly signaled that they require help.
|
2306.05715#14
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 14 |
[Figure: probing predictions compared with synthetic ground truth on five test samples, with per-sample Dice scores of 0.88, 0.86, 0.85, 0.82, and 0.80 for salient object segmentation and per-sample RMSE of 0.49-0.51 for depth estimation; panels labeled "Probing Prediction" and "Synthetic Ground-Truth", with nearest/farthest depth indicated.]
Figure 3: Row 2 shows the predictions of two probing classifiers. The probing classifiers achieved an average Dice score of 0.85 for salient object segmentation and an average RMSE of 0.50 for depth estimation on 371 test samples.
# 4.1 Depth emerges at early denoising steps
We saw a dramatic increase in probing performance during the first 5 steps. For both probing tasks, the performance difference between successive denoising steps vanished after step 5. High probing performance at the early steps suggests an interesting behavior of the LDM: the depth dimension develops at a stage when the decoded image still appears extremely noisy to a human.
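A minimal sketch of the linear-probing setup described here, with random arrays standing in for the LDM's internal activations and for the depth/saliency labels; extracting the real activations would require hooking the diffusion U-Net at a given denoising step, which is not shown.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n_pixels, n_features = 2000, 64   # stand-ins for spatial positions x channels

activations = rng.normal(size=(n_pixels, n_features))   # placeholder LDM activations
depth = activations @ rng.normal(size=n_features) + 0.1 * rng.normal(size=n_pixels)
salient = (depth > np.median(depth)).astype(int)          # placeholder saliency labels

# Linear probe for continuous depth (regression) ...
depth_probe = Ridge(alpha=1.0).fit(activations[:1500], depth[:1500])
rmse = mean_squared_error(depth[1500:], depth_probe.predict(activations[1500:])) ** 0.5

# ... and for the salient-object / background distinction (classification).
sal_probe = LogisticRegression(max_iter=1000).fit(activations[:1500], salient[:1500])
acc = sal_probe.score(activations[1500:], salient[1500:])

print(f"probe RMSE (depth): {rmse:.3f}, probe accuracy (saliency): {acc:.3f}")
```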
|
2306.05720#14
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process$-$well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05783
| 14 |
The development of LLMs has spurred the growth of a series of small-scale conversational LLMs, such as Alpaca Taori et al. (2023), Vicuna Chiang et al. (2023), H2Ogpt H2O.ai (2023), and Moss Sun et al. (2023a). Most of these small conversational LLMs are fine-tuned from existing pre-trained LLMs on high-quality dialog data generated by LLMs Ji et al. (2023b); Xu et al. (2023), using parameter-efficient tuning methods Hu et al. (2021, 2023). In order to achieve excellent performance, these models continuously acquire the latest data from the internet, and their iteration speed is much faster than that of LLMs. Any new benchmark will quickly become outdated as it is incorporated into the model's training data.
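As a hedged illustration of the parameter-efficient tuning route mentioned above (the base model, target modules, and hyperparameters below are placeholders, not choices from the cited papers), a LoRA configuration with the peft library looks roughly like this:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base = "facebook/opt-350m"                      # placeholder base LLM
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],        # module names depend on the base model
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()              # only a small fraction of weights is trained

# The adapted model would then be fine-tuned on LLM-generated dialog data
# with a standard causal-LM training loop (omitted here).
```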
# 2.2 Benchmarks
|
2306.05783#14
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 14 |
For item representation enhancement, LLM is leveraged as a feature encoder for scenarios with abundant textual features available (e.g., item title, textual body, description), including but not limited to: document ranking [Zou et al., 2021; Liu et al., 2021], news recommendation [Zhang et al., 2021a; Wu et al., 2021; Wu et al., 2022; Yu et al., 2022b; Liu et al., 2022b], tweet search [Zhang et al., 2022], tag selection [He et al., 2022], and code example recommendation [Rahmani et al., 2023]. TCF [Li et al., 2023d] further explores the performance limits of such an LLM-as-item-encoder paradigm by scaling the LLM size up to 175 billion parameters. As for user-side enrichment, U-BERT [Qiu et al., 2021] ameliorates the user representation by encoding review texts into dense vectors via BERT.
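A toy sketch of the user-side idea: precomputed vectors stand in for BERT-encoded review texts, and the mean pooling plus cosine scorer are illustrative simplifications rather than U-BERT itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for BERT-encoded review texts of one user and two candidate items.
review_vecs = rng.normal(size=(5, 768))   # five reviews -> five dense vectors
item_vecs = rng.normal(size=(2, 768))     # candidate item representations

user_vec = review_vecs.mean(axis=0)       # user representation built from review texts

def score(u, v):
    """Cosine similarity as a simple relevance scorer."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

scores = [score(user_vec, item) for item in item_vecs]
print(scores)   # higher score -> item ranked higher for this user
```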
|
2306.05817#14
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 14 |
The following categories are high-level, non-exhaustive, and present a synthesis of the findings across different modalities. They refer solely to what can be evaluated in a base technical system:
• Bias, Stereotypes, and Representational Harms
• Cultural Values and Sensitive Content
• Disparate Performance
• Privacy and Data Protection
• Financial Costs
• Environmental Costs
• Data and Content Moderation Labor
# 4.1.1 Bias, Stereotypes, and Representational Harms
Generative AI systems can embed and amplify harmful biases that are most detrimental to marginalized peoples. Categories of bias, from system to human to statistical, interact with each other and are intertwined [211]. For bias evaluations that do not narrowly capture biases as they occur in Generative AI systems, it is necessary to consider work outside of the field of question. For instance, for natural language processing, bias evaluations must seriously engage with the relationship between the modality (i.e. language) and social hierarchies [33]. When thinking about representational harms [125], it is also important to consider the extent to which any representation could confer harm (see 4.2.2.2 Long-term Amplifying Marginalization by Exclusion (and Inclusion)).
|
2306.05949#14
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 14 |
# 2https://playwright.dev/
Table 1: Statistics of MIND2WEB compared with existing datasets.
MiniWoB++ [22]: # Dom. -, # Env. 100, Env. Type: Simplified mobile websites, Avg. # Elements 28, # Tasks 100, Task Info. Low-level, Avg. # Actions 3.6
WebShop [40]: # Dom. 1, # Env. 1, Env. Type: Simplified shopping websites, Avg. # Elements 38, # Tasks 12,000 products, Task Info. High-level, Avg. # Actions 11.3
RUSS [39]: # Dom. -, # Env. 22, Env. Type: Real-world websites, Avg. # Elements 801, # Tasks 80, Task Info. High & low, Avg. # Actions 5.4
PixelHelp [21]: # Dom. 4, # Env. 4, Env. Type: Mobile apps, Avg. # Elements -, # Tasks 187, Task Info. High & low, Avg. # Actions -
META-GUI [35]: # Dom. 6, # Env. 11, Env. Type: Mobile apps, Avg. # Elements 79, # Tasks 1,125 dialogues, Task Info. High-level, Avg. # Actions 4.3
MoTIF [5]: # Dom. 15, # Env. 125, Env. Type: Mobile apps, Avg. # Elements 188, # Tasks 756, Task Info. High & Low, Avg. # Actions 4.4
MIND2WEB: # Dom. 5 / 31, # Env. 137, Env. Type: Real-world websites, Avg. # Elements 1,135, # Tasks 2,350, Task Info. High-level, Avg. # Actions 7.3
|
2306.06070#14
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06264
| 14 |
fine-tuned the models to predict the object over the modified instances. Finally, we evaluated the fine-tuned models over the initial examples and calculated the average pairwise accuracy of metrics in selecting the instance that the model should have more knowledge about (e.g., the model's knowledge of fact (1) should be higher than that of fact (2)). Table 1 presents the accuracy of different knowledge metrics. The results reveal that KL-divergence and entropy-based metrics surpass ranking methods by more than 20% and 35% respectively in BERT and T5, showcasing the superior accuracy of our proposed metrics in capturing the factual knowledge of LLMs. Additionally, KL-divergence exhibits a slight advantage over entropy in both LLMs.
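To make the measurement idea concrete, here is a minimal sketch (not the paper's code) of comparing a model's prediction distribution over candidate objects before and after the target fact is instilled; the distributions and the interpretation are illustrative assumptions.

```python
# Illustrative sketch: quantify how much a model's prediction distribution over
# candidate objects shifts once the target fact is instilled (e.g., via context
# or fine-tuning). Large shifts suggest the fact was not already stored.
import numpy as np

def entropy(p, eps=1e-12):
    p = np.asarray(p, dtype=float) + eps
    p = p / p.sum()
    return float(-np.sum(p * np.log(p)))

def kl_divergence(p, q, eps=1e-12):
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical distributions over four candidate objects for one fact.
p_before = [0.20, 0.30, 0.25, 0.25]  # before instilling the fact
p_after = [0.05, 0.85, 0.05, 0.05]   # after instilling the fact

print("entropy drop:", round(entropy(p_before) - entropy(p_after), 3))
print("KL(after || before):", round(kl_divergence(p_after, p_before), 3))
```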
|
2306.06264#14
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.06283
| 14 |
# Beatriz Mouriño, Elias Moubarak, Joren Van Herck, Sauradeep Majumdar, Xiaoqi Zhang
# XiaoqZhang/i-Digest, DOI: 10.5281/zenodo.8080962
projects show that natural language might be the universal "glue" connecting our tools; perhaps in the future, we will not need to focus on new formats or standards but rather use natural language descriptions to connect across the existing diversity and different modalities [35].
LLMs can also help make knowledge more accessible, as the projects in the "knowledge extraction" category show; they can extract structured information from unstructured text. In addition, as the project in the "education" category shows, LLMs can also offer new educational opportunities.
# A. Predictive modeling
Predictive modeling is a common application of ML in chemistry. Based on the language-interfaced fine-tuning (LIFT) framework [37], Jablonka et al. [32] have shown
|
2306.06283#14
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 14 |
RLEM (Reinforcement Learning with Experience Memory) is proposed for an LLM-based agent to learn from its interaction experiences by updating an external persistent memory. The pipeline of RLEM and the architecture of the REMEMBERER agent are depicted in Figure 2. The REMEMBERER agent consists of two components: an LLM making decisions and an experience memory storing the interaction experiences. At the decision step, the LLM first takes an observation $o_t$ from the environment. The observation $o_t$ is then adopted to retrieve several related experiences from the connected experience memory according to some similarity functions. The experiences are represented as a group of observations $O_x$, actions $A_x$, and the corresponding Q value estimations $Q_x$. Here $x$ denotes the index set of retrieved experiences and depends on the specific runtime observation $o_t$. Subsequently, the LLM will decide the action $a_t$ in accordance with $o_t$, the feedback from the last interaction (e.g., the reward $r_{t-1}$), as well as the retrieved experiences $(O_x, A_x, Q_x)$. $a_t$ will be executed in the environment and the resulting reward $r_t$ will be returned to the LLM as feedback. The transition tuple $(o_t, a_t, r_t, o_{t+1})$, comprising the last observation, the taken action, the corresponding reward, and the new observation, will be used to update the experience memory. The following subsections will detail the structure and updating policy of the REMEMBERER experience memory and the usage of the retrieved experiences.
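A minimal sketch of this decision step follows; the `similarity`, `build_prompt`, and `llm` callables and the field names are hypothetical stand-ins rather than the paper's implementation.

```python
# Minimal sketch of one RLEM decision step; names are hypothetical stand-ins.
from dataclasses import dataclass

@dataclass
class Experience:
    observation: str
    action: str
    q_value: float

def build_prompt(o_t, last_reward, retrieved):
    exemplars = "\n".join(
        f"obs: {e.observation} -> action: {e.action} (Q={e.q_value:.2f})" for e in retrieved
    )
    return f"{exemplars}\nlast reward: {last_reward}\ncurrent obs: {o_t}\naction:"

def decision_step(llm, memory, o_t, last_reward, similarity, k=3):
    # Retrieve the k experiences whose observations are most similar to o_t.
    retrieved = sorted(memory, key=lambda e: similarity(e.observation, o_t), reverse=True)[:k]
    # The LLM picks the next action given the observation, the last feedback,
    # and the retrieved (observation, action, Q) triples used as exemplars.
    return llm(build_prompt(o_t, last_reward, retrieved))

def record_transition(memory, o_t, a_t, q_estimate):
    # The transition feeds back into the memory; how the Q estimate is computed
    # is detailed later in the paper and is simply passed in here.
    memory.append(Experience(o_t, a_t, q_estimate))
```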
|
2306.07929#14
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 15 |
| Judge | Prompt | Consistency | Biased toward first | Biased toward second | Error |
| Claude-v1 | default | 23.8% | 75.0% | 0.0% | 1.2% |
| Claude-v1 | rename | 56.2% | 11.2% | 28.7% | 3.8% |
| GPT-3.5 | default | 46.2% | 50.0% | 1.2% | 2.5% |
| GPT-3.5 | rename | 51.2% | 38.8% | 6.2% | 3.8% |
| GPT-4 | default | 65.0% | 30.0% | 5.0% | 0.0% |
| GPT-4 | rename | 66.2% | 28.7% | 5.0% | 0.0% |

Table 3: Failure rate under "repetitive list" attack for different LLM judges on 23 answers.

| Judge | Claude-v1 | GPT-3.5 | GPT-4 |
| Failure rate | 91.3% | 91.3% | 8.7% |

Table 4: Judge failure rate on 10 math questions with different prompts. We test LLaMA-13B vs. Vicuna-13B and swap positions. A failure means when GPT-4 says an incorrect answer is correct.

| Prompt | Default | CoT | Reference |
| Failure rate | 14/20 | 6/20 | 3/20 |
|
2306.05685#15
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 15 |
To the best of our knowledge, there is no prior work on how LLMs perform on responding to help requests on programming problems, that is, scenarios where students have explicitly signaled that they require help.
# 2.3 Novice Programmers and Errors
Students learning to program are bound to face errors. In CER, early studies of novice errors focused on specific problems such as the "Rainfall Problem" [36, 79, 81, 82]. Later studies have evolved alongside new capabilities for data collection. Using data from automated assessment [2, 19, 29, 68] and programming environments that track students' process [31], researchers have quantified the types of errors that students face while programming [15, 20, 32, 56, 86]. Some errors are more frequent than others [83], some errors take more time to fix than others [10, 15, 57, 80], and the types of errors that students face tend to evolve [3]. Data on errors informs teachers about the issues that their students frequently face, which does not always match the teachers' expectations [10].
|
2306.05715#15
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 15 |
As Figure 4a shows, the images decoded at early denoising steps are heavily corrupted. A human viewer can barely capture any meaningful information from them. Unexpectedly, the representation of LDM delineates the salient object in the second denoising step. For comparison, we ran TRACER on the images decoded from partially denoised latents. This state-of-the-art salient object detection model cannot find the salient objects inside noisy images.
A continuous depth dimension also appears at the earlier steps. Figure 4b shows that the internal representation has determined the layout of the hotel room as early as at step 5. We ran MiDaS on the partially denoised images, but it failed until a significant amount of noise was removed (at step 11).
# Is depth represented in latent space?
Because an LDM operates on a latent-space representation of an image, one might ask whether this representation itself contains depth information. To address this question, we performed a probing study of the VAE self-attention layer.
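A minimal sketch of this kind of probing is shown below; the array shapes, the ridge-regression probe, and the synthetic targets are illustrative assumptions, not the paper's exact setup.

```python
# Illustrative linear-probe sketch: fit a linear map from intermediate activations
# (e.g., a self-attention layer) to a per-position target such as depth.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_positions, n_features = 4096, 512                       # assumed spatial positions x channels
activations = rng.normal(size=(n_positions, n_features))  # stand-in for layer activations
depth_target = rng.normal(size=n_positions)               # stand-in for pseudo-depth labels

probe = Ridge(alpha=1.0).fit(activations, depth_target)
pred = probe.predict(activations)
rmse = float(np.sqrt(np.mean((pred - depth_target) ** 2)))
print(f"probe RMSE: {rmse:.3f}")
# A low RMSE on held-out images would indicate depth is linearly decodable from the
# activations; the saliency case uses a classification probe scored with Dice.
```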
For the salient object detection task, the self-attention layer in VAE cannot decode salient objects from the corrupted latents at early steps (see Table 1). Its performance starts to improve when details in latent vectors become more perceptible. After the latent is completely denoised at step 15, the
|
2306.05720#15
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process$-$well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05783
| 15 |
# 2.2 Benchmarks
A number of studies concentrate on assessing a model's knowledge and reasoning ability. Certain works, including HellaSwag Zellers et al. (2019), Physical IQA Bisk et al. (2020), and CosmosQA Huang et al. (2019), focus on evaluating LLMs' understanding of commonsense knowledge. Meanwhile, other research, such as MMLU Hendrycks et al. (2021), AGI-Eval Zhong et al. (2023), MMCU Zeng (2023), C-Eval Huang et al. (2023), M3KE Liu et al. (2023), and LexTreme Niklaus et al. (2023), targets evaluating the models' proficiency in domain knowledge. However, whether these benchmarks provide effective evaluations for all language models remains debatable: only the most capable LLMs show meaningful disparities on these datasets, while small LLMs perform close to random guessing, leading different evaluation studies to report divergent or even contradictory results for small LLMs. Furthermore, as the training corpora for models become increasingly large, these benchmarks may lose their evaluative significance shortly after they are proposed, because they are incorporated into the training sets of LLMs.
|
2306.05783#15
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 15 |
Apart from user/item representation improvement, adopting LLM as feature encoders also enables transfer learning and cross-domain recommendation, where natural language serves as the bridge to link the heterogeneous information from different domains. ZESRec [Ding et al., 2021] applies BERT to convert item descriptions into universal continuous representations for zero-shot recommendation. Wang et al. [2022] train a general-purpose recommendation model based on items with mixture-of-modality features, which are encoded by language or vision foundation models. In UniSRec [Hou et al., 2022], the item representations are learned for cross-domain sequential recommendation via a fixed BERT model followed by a lightweight MoE-enhanced network. Built upon UniSRec, VQ-Rec [Hou et al., 2023a] introduces vector quantization techniques to better align the textual embeddings generated by LLMs to the recommendation space. Fu et al. [2023a] further explore layerwise adaptor tuning on the language model to obtain better embeddings over textual features from different domains.
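The shared recipe in these works (a frozen text encoder producing universal item embeddings from descriptions, optionally followed by a small trainable adapter) can be sketched roughly as follows; the checkpoint name, embedding sizes, and the linear adapter are illustrative choices, not a specific system from the survey.

```python
# Rough sketch: frozen text encoder -> universal item embedding -> small adapter.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased").eval()  # kept frozen
adapter = torch.nn.Linear(768, 64)  # lightweight trainable head into the rec space

def item_embedding(description: str) -> torch.Tensor:
    inputs = tokenizer(description, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, 768)
    return adapter(hidden[:, 0])  # project the [CLS] vector

emb = item_embedding("Wireless noise-cancelling over-ear headphones, 30h battery")
print(emb.shape)  # torch.Size([1, 64])
```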
|
2306.05817#15
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 15 |
Although bias evaluations in data have been subject to a large body of research, bias is not only a "data problem." Biases are not only introduced in the data pipeline but throughout the entire machine learning pipeline [237]. The overall level of harm is also impacted by modeling choice [108]. These can include choices about many stages of the optimization process [237, 129]; privacy constraints [24], widely used compression techniques [109, 15, 169], and the choice of hardware [273] have all been found to amplify harm on underrepresented protected attributes [28]. The geographic location, demographic makeup, and team structures of researcher and developer organizations can also introduce biases.
What to Evaluate While the degree of harm depends on many factors from type of output to the cultural context of training and deployment, focus on bias evaluations has centered on protected classes as defined by United States [77] and United Nations [249] guidelines. These guidelines are non-exhaustive and harms exist outside of their proposed categories but can be evaluated by adding categories. For instance, for generative AI systems developed on data from the South Asian subcontinent, it may also be useful to include considerations of caste bias [217]. Additional harmful biases include misrepresentations of humans generally, such as associating humans or a group of humans with other animals [223].
|
2306.05949#15
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 15 |
Task Verification. Lastly, all task demonstrations are verified by the authors to ensure the following: First, all actions are accurately reflected in the task description. The authors will modify the task description if needed to align it with the annotated actions; Second, the recorded actions are correct and clean, with extraneous steps discarded. Finally, the starting and ending points of tasks are consistent, such as excluding actions for closing popup windows, or ending the annotation at the search result page if the task was to find a certain item without clicking on specific items. After verification, we discarded 61 out of the total 2,411 tasks. Among the 2,350 retained tasks, the task description was refined in 390 instances to better correspond with the demonstrated actions, while some extraneous steps were discarded in 187 instances. Overall, the data collection pipeline has been proven effective and produces high-quality data.
# 2.3 Comparison with Existing Work and Research Challenges
|
2306.06070#15
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06264
| 15 |
# 5.2 Implicit vs Explicit Knowledge Instillation
Given the high cost and sometimes infeasibility of implicit knowledge instillation (fine-tuning), it is important to determine when explicit knowledge instillation can be a viable alternative. To address this question, we conducted a comparison between the two knowledge instillation methods on the BERT and T5 language models, using our proposed metrics over the LAMA benchmark. The resulting scatter plots, depicting instances of implicit versus explicit instillation for BERT and T5, are shown in Figures 3 and 4, respectively. Based on these results, we first observe a strong correlation between implicit and explicit knowledge instillation. Furthermore, T5 exhibits a higher/better level of correlation between these two methods compared to BERT, indicating that we can estimate implicit
(a) Entropy. (b) KL-Divergence.
Figure 3: The correlation between explicit and implicit knowledge instillation using entropy and KL-divergence metric for BERT language model. For the entropy metric, mismatch happens when the sign of the metric differs between implicit and explicit instillation. In the KL-divergence case, the mismatch arises when the metric for implicit instillation is significantly higher or lower than that of explicit instillation.
knowledge with greater accuracy in this model using explicit instillation.
|
2306.06264#15
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.06283
| 15 |
that LLMs can be employed to predict various chemical properties, such as solubility or HOMO-LUMO gaps, based on line representations of molecules such as self-referencing embedded strings (SELFIES) [38, 39] and SMILES. Taking this idea even further, Ramos et al. [34] used this framework (with in-context learning (ICL)) for Bayesian optimization, guiding experiments without even training models.
The projects in the following build on top of those initial results and extend them in novel ways as well as by leveraging established techniques from quantum machine learning.
Given that these encouraging results could be achieved with and without fine-tuning (i.e., updates to the weights of the model) for the language-interfaced training on tabular datasets, we use the term LIFT also for ICL settings in which structured data is converted into text prompts for an LLM.
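As a toy illustration of this LIFT/ICL idea (the property values, prompt wording, and parsing are made up for the sketch):

```python
# Toy LIFT/ICL sketch: tabular (molecule, property) rows rendered as text exemplars.
examples = [
    ("CCO", 1.10),       # (SMILES, hypothetical property value)
    ("c1ccccc1", 4.49),
    ("CC(=O)O", 1.27),
]

def lift_prompt(query_smiles: str) -> str:
    lines = [f"What is the property of {smi}? Answer: {val}" for smi, val in examples]
    lines.append(f"What is the property of {query_smiles}? Answer:")
    return "\n".join(lines)

print(lift_prompt("CCN"))
# The prompt is sent to an LLM and the completion is parsed back into a number;
# in the ICL setting no gradient updates are needed.
```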
a. Molecular Energy Predictions
|
2306.06283#15
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 15 |
# 3.2 Experience memory of REMEMBERER
The experience memory is one of the pivotal components of the proposed REMEMBERER framework. It is adopted to store the interaction experiences, and the LLM is expected to benefit from the stored experiences in future decision-making. The memory can be regarded as a group of external parameters of the LLM-based agent. Such an agent is a semi-parametric system that can evolve through the RL process. During the interaction, new experiences are added to the experience memory so that the overall system can attain a more capable interaction ability compared to agents with just a fixed LLM and fixed exemplars. This procedure can be considered analogous to the training stage of conventional parametric agents.
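A minimal sketch of such an external memory and its update is given below; the field names and the TD-style update rule are illustrative assumptions rather than the paper's specification.

```python
# Illustrative external experience memory with a TD-style value update (assumed rule).
from dataclasses import dataclass

@dataclass
class Record:
    task: str
    observation: str
    action: str
    q: float

memory = []  # list of Record; seeded with a few initial records before training

def add_or_update(rec, reward, max_next_q, gamma=0.9, lr=0.5):
    """Insert a new record, or move an existing record's Q toward a bootstrapped target."""
    target = reward + gamma * max_next_q
    for existing in memory:
        if (existing.task, existing.observation, existing.action) == (rec.task, rec.observation, rec.action):
            existing.q += lr * (target - existing.q)
            return
    rec.q = target
    memory.append(rec)
```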
| Task & Obs. | Action | Q Value |
| (g1, o1) | a1 | q1 |
| (g2, o2) | a2 | q2 |
| (g3, o3) | a3 | q3 |
To be specific, the experience memory is designed as a table storing the task information, observation, action, and the corresponding Q value estimation. The Q value is the expectation of the accumulated future reward and gives an assessment of the value of the action candidate. Figure 3 depicts a demonstration of the proposed experience memory. There are two stages to build a practical REMEMBERER agent with experience memory: initialization and training. The experience memory is supposed to be first initialized with some initial records before the training stage. The initial
|
2306.07929#15
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 16 |
Failure rate under the "repetitive list" attack (Table 3): Claude-v1 91.3%, GPT-3.5 91.3%, GPT-4 8.7%.
Judge failure rate on the math questions with different prompts (Table 4): Default 14/20, CoT 6/20, Reference 3/20.
first, GPT-4 considers GPT-3.5's answer more detailed and superior. However, upon switching the positions of the two responses, GPT-4's judgement flips, favoring Vicuna's answer.
To analyze the position bias, we construct two similar answers to each first-turn question in MT-bench by calling GPT-3.5 twice with a temperature of 0.7. We then try three LLMs with two different prompts: "default" is our default prompt in Figure 5 (Appendix); "rename" renames the assistants in our default prompt to see whether the bias is on positions or names. As shown in Table 2, we found that all of them exhibit strong position bias. Most LLM judges favor the first position. Claude-v1 also shows a name bias which makes it favor "Assistant A", as illustrated by the "rename" prompt. The position bias can be very significant. Only GPT-4 outputs consistent results in more than 60% of cases.
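The swap test described above can be sketched schematically as follows; the `judge` callable and verdict labels are placeholders for an actual LLM-judge call.

```python
# Schematic position-swap consistency check; `judge` is a placeholder that returns
# "A", "B", or "tie" given a question and two answers shown in a fixed order.
def consistency_rate(judge, questions, answers_1, answers_2):
    consistent = 0
    for q, a1, a2 in zip(questions, answers_1, answers_2):
        verdict_original = judge(q, first=a1, second=a2)
        verdict_swapped = judge(q, first=a2, second=a1)
        # Consistent only if the same underlying answer wins (or both verdicts are
        # ties) regardless of the position it is shown in.
        flipped = {"A": "B", "B": "A", "tie": "tie"}[verdict_swapped]
        consistent += (verdict_original == flipped)
    return consistent / len(questions)
```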
|
2306.05685#16
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 16 |
Only some of the errors that students face are related to syntax, of course [3, 21]; logic errors are also common, and varied. Ettles et al. [21] sorted common logic errors into three categories: algorithmic errors have a fundamentally flawed approach, misinterpretations involve misinterpreting the task, and misconceptions are flaws in programming knowledge. A related stream of research has sought to improve error messages, which when done right could lead to better learning [7, 17], especially as regular error messages do not always match the underlying cause [7, 20, 56].
# 3 METHODOLOGY
# 3.1 Context and Data
Our study is based on data from an open, online introductory programming course organized by Aalto University in Finland. The
workload, level of expectations, and breadth differ from normal introductory programming courses at Aalto and in Finland, however. The estimated workload of this course is only 2 ECTS credits (ca. 50 to 60 hours of study) as opposed to the more typical 5 ECTS (ca. 125 to 150h). There are no deadlines, and students can work at their own pace. The course is open to both lifelong learners and Aalto students; we will refer to all participants as "students."
|
2306.05715#16
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 16 |
Table 1: Weak depth information was found in VAE's bottleneck activations, but LDM encodes a much stronger representation of depth. This table shows the probing performance achieved by the LDM's and VAE's self-attention layers at steps 5 and 15.
| Step | Model | Saliency Detection (average Dice ↑) | Depth Estimation (average RMSE ↓) |
| 5 | LDM | 0.84 | 0.53 |
| 5 | VAE | 0.15 | 0.96 |
| 15 | LDM | 0.85 | 0.47 |
| 15 | VAE | 0.71 | 0.84 |
|
2306.05720#16
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process$-$well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05783
| 16 |
Moreover, the rise of generative LLMs presents its own difficulties in evaluation Sai et al. (2022). Beginning with MMLU Hendrycks et al. (2021), numerous works have proposed the use of multiple-choice questions to assess generative models. Recently, a variety of evaluation studies,
such as SuperClue, employed an identical prompt to query all LLMs and then extract the choice made by each LLM from its output. This approach requires models to have strong instruction-understanding abilities, especially in multiple-choice answering; many LLMs cannot meet that need, leading to unfair evaluation results.
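Such answer extraction is typically a simple parsing step over the generated text; a small sketch (with assumed output formats) is given below to make the dependence on instruction following concrete.

```python
# Small sketch of extracting a multiple-choice option from free-form model output;
# the expected answer formats are assumptions for illustration.
import re

def extract_choice(generation, options=("A", "B", "C", "D")):
    # Prefer explicit patterns such as "Answer: C" or "answer is (C)", then fall back
    # to the first bare option letter in the text; return None if nothing matches.
    m = re.search(r"[Aa]nswer\s*(?:is)?\s*[:\-]?\s*\(?([A-D])\)?", generation)
    if m:
        return m.group(1)
    for token in re.findall(r"\b([A-D])\b", generation.upper()):
        if token in options:
            return token
    return None

print(extract_choice("I think the correct answer is (B) because ..."))  # -> B
```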
# 3 Xiezhi Benchmark
# 3.1 Chinese Discipline Taxonomy
Chinese Discipline Taxonomy, developed by the Chinese Ministry of Education, organizes the disciplines of different domains in college education. The taxonomy divides all domains into discipline categories and various levels of disciplines. The meanings of these levels are as follows:
Discipline Categories: This is the highest level of the discipline taxonomy, divided according to the nature and characteristics of subjects. There are 14 subject categories in the Chinese Discipline Taxonomy, including philosophy, economics, law, education, literature, history, science, engineering, agriculture, medicine, military science, management, art, and Inter-discipline.
|
2306.05783#16
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 16 |
# 2.3 LLM as Scoring/Ranking Function
In the stage of scoring/ranking, the ultimate goal of LLM is to provide a ranked list of items $[i_k]_{k=1}^{N}$, $i_k \in I$, where $I$ is the universal item set (next item prediction is a special case where $N = 1$). Such a goal could be achieved by various kinds of tasks specially designed for LLM (e.g., rating prediction, item ID generation). According to different tasks to be solved by LLM, we classify them into three categories: (1) item scoring task, (2) item generation task, and (3) hybrid task.
Item Scoring Task In item scoring tasks, the large language model serves as a pointwise function $F(u, i), \forall u \in U, \forall i \in I$, which estimates the score of each candidate item $i$ for the target user $u$. Here $U$ and $I$ denote the universal set of users and items, respectively.
2 Different domains means data sources with different distributions, e.g., scenarios, datasets, platforms, etc.
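A schematic of the pointwise formulation $F(u, i)$ is sketched below; the prompt template and the `llm_score` callable are placeholders rather than any specific system from the survey.

```python
# Schematic pointwise scoring F(u, i): score each candidate, then sort to rank.
def score(llm_score, user_profile, item_description):
    prompt = (
        f"User history: {user_profile}\n"
        f"Candidate item: {item_description}\n"
        "On a scale of 1-5, how likely is the user to enjoy this item? Answer with a number:"
    )
    return float(llm_score(prompt))  # parse the generated rating

def rank(llm_score, user_profile, candidates):
    # The ranked list is obtained by sorting candidates by their pointwise scores.
    return sorted(candidates, key=lambda item: score(llm_score, user_profile, item), reverse=True)
```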
|
2306.05817#16
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 16 |
Popular evaluations for biases use association tests [46] or examine stereotypes [157, 156, 138], correlations and co-occurrences [272], and sentiment analysis [66]. In language, these evaluations can occur at the word or sentence level. For images, additional tools such as captioning systems can be used. For certain modalities, such as language, biases can be represented differently [142]. Across modalities, biases can be evaluated using intrinsic and extrinsic methods [91], where the former seeks to evaluate biases within model weights and the latter evaluates the expression of biases in the outputs for downstream tasks (e.g. captioning). Evaluations can also be specific to a certain function of a modality, such as question-answering in language [175].
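To make the co-occurrence style of evaluation above concrete, here is a minimal sketch (not from the cited works): it counts how often sentences mentioning each group also contain a word from a small negative-valence lexicon. The identity term lists, the lexicon, and the sample generations are illustrative assumptions.

```python
# Minimal sketch of a co-occurrence bias check over generated text.
# Identity terms, the valence lexicon, and the sample outputs are
# illustrative placeholders, not resources from the cited evaluations.
from collections import Counter

identity_terms = {"group_a": {"she", "her", "woman"}, "group_b": {"he", "him", "man"}}
negative_lexicon = {"angry", "weak", "criminal"}

def negative_cooccurrence_rate(generations):
    """Fraction of sentences mentioning each group that also contain a negative word."""
    mentions, negatives = Counter(), Counter()
    for text in generations:
        tokens = set(text.lower().replace(".", "").split())
        for group, terms in identity_terms.items():
            if tokens & terms:
                mentions[group] += 1
                if tokens & negative_lexicon:
                    negatives[group] += 1
    return {g: negatives[g] / mentions[g] for g in mentions}

print(negative_cooccurrence_rate(["She was angry.", "He fixed the car.", "The woman won."]))
```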
Limitations There are often legal obstacles around collecting certain protected attributes, which leads to selection bias in the availability of protected feature annotations. Moreover, as geographic and cultural contexts shift, so do the meanings of different categories. Annotators often have different perceptions of concepts like race or are influenced by their own lived experience when categorizing protected categories.
|
2306.05949#16
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 16 |
MIND2WEB presents a unique ensemble of research challenges for the development of generalist agents for the web in real-world settings. As shown in Table 1, MIND2WEB distinguishes itself from existing literature in several ways. Firstly, MIND2WEB spans across 137 websites from 31 domains, allowing comprehensive testing of an agent's ability in generalizing across varied environments. Secondly, we utilize real-world websites without manual simplification. Consequently, the included environments exhibit complexity far surpassing that encountered in previous studies, yet better reflecting the intricacy of the modern web. With an average of over 1,000 elements per page embedded within complex DOM structures, how to effectively process such long and highly structured documents presents a significant challenge for modeling. Lastly, we direct the annotators to propose open-ended tasks that explore different functionalities of the website to mimic genuine web usage. Meanwhile, contrary to prior studies [5, 21, 22, 39] that provide step-by-step directives and primarily focus on testing the agent's ability to translate low-level instructions into actions, e.g., "Type New York in the location field, click the search button and choose the tomorrow tab," we opted for the setting where only high-level goals are available, e.g., "What is the weather for New York tomorrow?" This poses a much greater yet realistic planning and grounding challenge for the agent.
|
2306.06070#16
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06264
| 16 |
knowledge with greater accuracy in this model using explicit instillation.
Notably, there are specific regions in the plot where a mismatch occurs between the two forms of instillation. Specifically, for the entropy metric, these regions correspond to instances where the sign of the metrics differs between implicit and explicit instillation. In the case of KL-divergence, the mismatch arises when the metric for implicit instillation is significantly higher or lower than that of explicit instillation. Upon further investigation of the instances falling into these mismatched areas, we find that the majority of them are samples with labels related to location (e.g., has capital relation) or language (e.g., has official language relation) for both BERT and T5. This demonstrates that we cannot approximate implicit instillation with the explicit approach for these types of relations. Additionally, T5 exhibits fewer instances of mismatch compared to BERT. Considering that both BERT and T5 share relation types that require implicit instillation for accurately inheriting knowledge, the question that remains is whether these problematic relations also affect in-context learning models like InstructGPT. We address this question in the next section.
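As a rough illustration of the entropy and KL-divergence measurements discussed in this paper, the sketch below compares a model's answer distribution before and after the target fact is instilled (e.g., prepended to the prompt). The two distributions are made-up placeholders, not outputs of BERT, T5, or InstructGPT.

```python
# Minimal sketch of the entropy / KL-divergence style knowledge measurement:
# compare the prediction distribution over candidate answers before and after
# instilling the target fact. The numbers below are illustrative placeholders.
import math

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def kl_divergence(p, q, eps=1e-12):
    return sum(pi * math.log(pi / max(qi, eps)) for pi, qi in zip(p, q) if pi > 0)

before = [0.40, 0.35, 0.25]  # P(answer | question) without the fact in context
after = [0.90, 0.07, 0.03]   # P(answer | fact + question)

print("entropy drop:", entropy(before) - entropy(after))
print("KL(after || before):", kl_divergence(after, before))
```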
|
2306.06264#16
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.06283
| 16 |
a. Molecular Energy Predictions
A critical property in quantum chemistry is the atomization energy of a molecule, which gives us the basic thermochemical data used to determine a molecule's stability or reactivity. State-of-the-art quantum chemical methods (i.e., G4(MP2) [40]) can predict this energy with an accuracy of 0.034 eV (or 0.79 kcal/mol) [41, 42]. This accuracy is similar to, and in some cases even better than, the accuracy that can be reached experimentally. This motivated Ramakrishnan et al. [41] and Narayanan et al. [42] to compute these atomization energies for the 134,000 molecules in the QM9-G4MP2 dataset.
|
2306.06283#16
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 16 |
records are necessary to inform the LLM of the format of the input and the output. Then, during the analogical training stage, the agent interacts with the environment to collect new experiences, and conducts off-policy learning [Sutton and Barto, 1999]. Particularly, given the task information $g$ and the new transition $(o_t, a_t, r_t, o_{t+1})$, as a quadruple of the last observation, action, reward, and the new observation, a new estimation is calculated first according to the estimated Bellman optimality equation [Bellman, 1952] as

$Q'(g, o_t, a_t) = r_t + \gamma \max_a Q(g, o_{t+1}, a). \quad (1)$
Here $\max$ can be calculated from the actions already recorded for $(g, o_{t+1})$ by treating the Q value of unrecorded actions as 0, if the action space cannot be traversed, e.g., an action space involving free-form language. Then a new record is inserted directly if there does not exist a record associated with $(g, o_t, a_t)$ in the memory:

$Q(g, o_t, a_t) = Q'(g, o_t, a_t). \quad (2)$

If $(g, o_t, a_t)$ has already been inserted into the memory, the recorded Q value estimation will be updated by Q-Learning [Watkins and Dayan, 1992]:
|
2306.07929#16
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 17 |
Note that this test is challenging because the answers are very similar and occasionally indistinguishable even to humans. We will show that position bias is less prominent in some cases in Appendix D.1. As for the origin of this bias, we suspect that it could be rooted in the training data or inherent to the left-to-right architecture of causal transformers, but leave a deeper study as future work.
Verbosity bias is when an LLM judge favors longer, verbose responses, even if they are not as clear, high-quality, or accurate as shorter alternatives.
|
2306.05685#17
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 17 |
The course materials are written in Finnish and the programming language is Dart4. The topics are typical of classic introductory courses and include standard input and output, variables, conditionals, loops, functions, lists, and maps.
The course has a bespoke online ebook, which covers the content with a combination of reading materials, worked examples, videos, quizzes, and programming exercises. Students program in their web browser, using a customized DartPad5 embedded in the ebook. In addition to DartPad's default behavior of continuously highlighting syntax errors and running code in the browser, our custom version supports in-browser standard I/O. The exercises are automatically assessed, the platform provides exercise-specific feedback, and there is no limit on the number of submissions.
|
2306.05715#17
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 17 |
[Figure 4 panels: (a) Prompt = "Lapierre Pulsium 600 FDJ Road Bike"; (b) Prompt = "Perfect suite for our honeymoon, honeymoon suite luxury hotels". The panels show LDM and VAE probing results (Dice / RMSE) across denoising steps alongside the decoded images and synthetic labels.]
Figure 4: LDM's internal representations of salient object (a) and depth (b) appear surprisingly early in the denoising. Probing the internal representations of VAE, however, cannot find the salient objects and depth when latents are corrupted. We reported the Dice coefficient and RMSE between probing results at each step and the synthetic label obtained at step 15.
|
2306.05720#17
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process$-$well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05783
| 17 |
First-level disciplines: A discipline category is divided into numerous first-level disciplines, each possessing relatively independent research content. For example, the "Economics" category is divided into the first-level disciplines "Applied Economics" and "Theoretical Economics", and "Art Studies" consists of "Theatre & Film Studies", "Fine Art" and so on.
Second-level disciplines: These disciplines represent more subdivided areas of study or topics within the first-level discipline. For example, within the first-level discipline of "Applied Economics", further divisions include "Financial Markets", "Banking", "Insurance" and many other second-level disciplines.
As shown in Fig. 1, the Xiezhi Benchmark consists of a total of 13 disciplinary categories, 118 first-level disciplines, and 385 second-level disciplines used as question labels. Detailed information on the disciplines and the number of questions used in the Xiezhi Benchmark is listed in Tab. 4 in the Appendix.
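For illustration only, the snippet below sketches one way the three-level label hierarchy could be represented when attaching discipline labels to a question; the discipline names come from the examples above, while the question record and helper function are made-up assumptions rather than the authors' annotation pipeline.

```python
# Minimal sketch of the three-level discipline hierarchy used as question labels.
# The question text below is a made-up placeholder.
taxonomy = {
    "Economics": {
        "Applied Economics": ["Financial Markets", "Banking", "Insurance"],
        "Theoretical Economics": [],
    },
    "Art Studies": {
        "Theatre & Film Studies": [],
        "Fine Art": [],
    },
}

def expand_labels(category, first_level, second_level=None):
    """Attach all ancestor labels to a question annotated at the finest level."""
    labels = [category, first_level]
    if second_level:
        labels.append(second_level)
    return labels

question = {
    "text": "Which instrument is traded on the interbank money market?",
    "labels": expand_labels("Economics", "Applied Economics", "Financial Markets"),
}
print(question["labels"])
```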
# 3.2 Dataset Construction
# 3.2.1 Data collection
|
2306.05783#17
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 17 |
2Different domains mean data sources with different distributions, e.g., scenarios, datasets, platforms, etc.
The final ranked list of items is obtained by sorting the scores, requiring $N$ forward passes of the function $F(u, i)$:

$[i_k]_{k=1}^{N} = \mathrm{Sort}\left(\{F(u, i_k)\}_{k=1}^{N}\right). \quad (1)$

PTab [Liu et al., 2022a] models the prediction task as a text classification problem, and tunes the language model based on pure textual inputs generated by prompting. Kang et al. [2023] finetune a large language model for rating prediction in a regression manner, which exhibits a surprising performance by scaling the model size of the finetuned LLM up to 11 billion. RecFormer [Li et al., 2023b] estimates the matching score between the semantic representations of the user interaction sequence and the candidate items. Another line of research intends to concatenate the item description (e.g., title) to the user behavior history with different prompts, and estimates the score as the overall perplexity [Mao et al., 2023], log-likelihood [Sileo et al., 2022], or joint probability [Zhang et al., 2021b] of the prompting text.
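A minimal sketch of Eq. (1) above: score every candidate item with a pointwise function and sort. The toy keyword-overlap scorer is only a stand-in for the LLM-based scorers surveyed here (rating regression, perplexity, log-likelihood); the user and items are made up.

```python
# Minimal sketch of pointwise scoring followed by sorting (Eq. 1).
# `toy_score` is a placeholder for an LLM-based scorer, not any paper's model.
def rank_items(user, candidates, score_fn):
    scores = {item: score_fn(user, item) for item in candidates}  # N forward passes
    return sorted(candidates, key=lambda item: scores[item], reverse=True)

def toy_score(user, item):
    # e.g., overlap between the user's history keywords and the item title
    return len(user["history_keywords"] & set(item.lower().split()))

user = {"history_keywords": {"road", "bike", "helmet"}}
items = ["Road Bike 600", "Honeymoon Suite", "Bike Helmet"]
print(rank_items(user, items, toy_score))  # ['Road Bike 600', 'Bike Helmet', 'Honeymoon Suite']
```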
|
2306.05817#17
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 17 |
Due to its contextual and evolving nature [83], bias evaluation cannot be fully standardized and static [117]. Protected class categorization itself cannot be exhaustive and can be inherently harmful. By framing work within such considerations, it is possible to delineate which qualities are evaluated for. Precisely identifying which framing is used for bias evaluation and mitigation can help delineate the particular areas where robust evaluation has been done, where developers expect biases to arise, and the groups for whom they believe biases are unlikely to arise or for whom bias evaluations have not been as rigorous, e.g., due to a lack of bias evaluation resources. Certain protected classes, such as race and gender, are often more represented in publications and publication venues around biases of (generative) systems. Many evaluations focus on distinct or binary groups, due to the complexity of operationalising intersectionality [257, 133]; in many cases, assumptions used to simplify for the sake of mathematical notation and interpretation result in obscuring the very phenomena they seek to describe [64].
|
2306.05949#17
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 17 |
# 3 Method: MINDACT
Employing the data from MIND2WEB, we introduce an exploratory framework, MINDACT, for our task, leveraging the power of LLMs. Raw HTML documents, which could consist of thousands of elements, are either infeasible or cost-prohibitive to be directly fed into LLMs. We propose a two-stage process that synergizes the strength of small and large LMs, as shown in Figure 3. In the first stage, a fine-tuned small LM is used to rank the elements present on a webpage, yielding a small pool of promising candidates. In the second stage, these candidate elements are consolidated to form a representative snippet of the webpage, which is then processed by an LLM to predict the final action, including predicting both the element for interaction and the corresponding operation.
# 3.1 Candidate Generation with Small LMs
Given the task description, the snapshot of the webpage at step t, and the actions performed in the preceding t − 1 steps, we treat candidate generation as a ranking task. The task is to select the top-k
[Figure 3 diagram: an HTML document is ranked into candidate elements that form an HTML snippet; the task description and previous actions are combined with the snippet to predict the target element.]
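To illustrate the first stage described above, here is a minimal sketch of top-k candidate generation; the keyword-overlap scorer stands in for the fine-tuned small LM ranker, and the example elements are invented.

```python
# Minimal sketch of first-stage candidate generation: score each page element
# against the task context and keep the top-k. The overlap scorer is only a
# stand-in for the fine-tuned small LM; the example elements are made up.
import re

def rank_elements(task, previous_actions, elements, k=5):
    context = set(re.findall(r"\w+", (task + " " + " ".join(previous_actions)).lower()))
    def score(element):  # placeholder for the small-LM relevance score
        return len(context & set(re.findall(r"\w+", element.lower())))
    return sorted(elements, key=score, reverse=True)[:k]

elements = [
    '<input id="location" placeholder="New York">',
    '<button id="search">Search weather</button>',
    '<a href="/careers">Careers</a>',
]
print(rank_elements("What is the weather for New York tomorrow?", [], elements, k=2))
```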
|
2306.06070#17
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06264
| 17 |
knowledge. This is especially true for in-context learning based methods, for which implicit instillation may not be a viable option. Therefore, in this section, we aim to explore the real-world applications of our metrics in two tasks: (1) factual alignment, where we investigate how our metrics can ensure that specific facts appear in LLMs' generation, and (2) avoiding hallucination, by measuring the correlation between our knowledge metrics for hallucinated and non-hallucinated facts.
Factual Alignment Factual alignment refers to the task of ensuring that a specific fact appears in the generated output of an LLM. This task is particularly important in cases where a more accurate or controllable generation is required. To investigate factual alignment using our knowledge metrics, we ask the LLM to write a summary about an entity and categorize the facts about that entity into two categories: (1) facts that appear in the summary, and (2) facts that didn't appear in the generated output.
# In-Context Learning Based Applications
While our proposed methods have demonstrated superior performance compared to ranking-based metrics, it remains unclear whether they have practical utility beyond the realm of analyzing LLMs
|
2306.06264#17
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.06283
| 17 |
The Berkeley-Madison team (Ankur Gupta, Garrett Merz, Alishba Imran, and Wibe de Jong) used this dataset to fine-tune different LLMs using the LIFT framework. The team investigated if they could use an LLM to predict atomization energies with chemical accuracy. Jablonka et al. [32] emphasized that these LLMs might be particularly useful in the low-data limit. Here, we have a relatively large dataset, so it is an ideal system to gather insights into the performance of these models for datasets much larger than those used by Jablonka et al. [32].
The Berkeley-Madison team showed that the LIFT framework based on simple line representations such as SMILES and SELFIES [38, 39] can yield good predictions ($R^2 > 0.95$ on a holdout test set), that are, however, still inferior to dedicated models that have access to 3D information [43, 44]. An alternative approach to achieve chemical accuracy with LLMs tuned only on string representations is to leverage a Δ-ML scheme [45] in which the LLM is tuned to predict the difference between G4(MP2) and B3LYP [46]
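For illustration, the sketch below shows how LIFT-style prompt/completion pairs for such a Δ-ML target (the G4(MP2) − B3LYP correction) might be assembled; the two molecules, the energy values, and the "###"/"@@@" delimiters are placeholder assumptions, not entries or conventions taken from the QM9-G4MP2 work.

```python
# Minimal sketch of building LIFT-style prompt/completion pairs for a
# delta-ML target (G4(MP2) minus B3LYP atomization energy).
# Molecules, energies, and delimiters below are illustrative placeholders.
records = [
    {"smiles": "CO",  "e_b3lyp_ev": -24.71, "e_g4mp2_ev": -24.83},
    {"smiles": "CCO", "e_b3lyp_ev": -39.52, "e_g4mp2_ev": -39.70},
]

def to_lift_example(rec):
    delta = rec["e_g4mp2_ev"] - rec["e_b3lyp_ev"]
    return {
        "prompt": f"What is the G4(MP2)-B3LYP atomization energy correction of {rec['smiles']}?###",
        "completion": f" {delta:.3f} eV@@@",
    }

for rec in records:
    print(to_lift_example(rec))
```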
|
2306.06283#17
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 17 |
$Q(g, o_t, a_t) \leftarrow (1 - \alpha) Q(g, o_t, a_t) + \alpha Q'(g, o_t, a_t). \quad (3)$

Here the learning rate $\alpha$ is $1/N$, where $N$ denotes the number of times this value has been updated. As Equation 1 may lead to an inaccurate estimation owing to insufficient sampling during the few training steps of REMEMBERER, n-step bootstrapping [Mnih et al., 2016] is adopted to ameliorate this problem, which estimates $Q'$ by

$Q'(g, o_t, a_t) = \sum_{i=0}^{n-1} \gamma^i r_{t+i} + \gamma^n \max_a Q(g, o_{t+n}, a), \quad (4)$

where $n$ is the number of steps to expand. The ablation study in Subsection 4.4 supports this perspective.
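A minimal sketch of the memory update in Equations (1)-(3), assuming a dictionary keyed by (goal, observation, action); the keys, rewards, and discount value are illustrative, and the n-step variant of Eq. (4) is omitted for brevity.

```python
# Minimal sketch of the experience-memory update (Eqs. 1-3): compute the
# Bellman target, insert it for a new record, or average it into an existing
# record with learning rate alpha = 1/N. Keys and values are illustrative.
GAMMA = 0.9
memory = {}  # (goal, observation, action) -> [Q estimate, update count N]

def max_q(goal, observation):
    values = [q for (g, o, _), (q, _) in memory.items() if (g, o) == (goal, observation)]
    return max(values, default=0.0)  # unrecorded actions are treated as Q = 0

def update(goal, obs, action, reward, next_obs):
    target = reward + GAMMA * max_q(goal, next_obs)              # Eq. (1)
    if (goal, obs, action) not in memory:
        memory[(goal, obs, action)] = [target, 1]                # Eq. (2)
    else:
        q, n = memory[(goal, obs, action)]
        n += 1
        memory[(goal, obs, action)] = [q + (target - q) / n, n]  # Eq. (3), alpha = 1/N

update("buy a road bike", "search page", "type 'road bike'", 0.0, "results page")
update("buy a road bike", "search page", "type 'road bike'", 1.0, "results page")
print(memory)
```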
# 3.3 Usage of the experiences
In order to assist the LLM in making decisions, the stored experiences are adopted as dynamic exemplars for few-shot in-context learning. Given the task goal $g$ and the current observation $o_t$, a similarity function $f$ is used to calculate the similarity of $(g, o_t)$ with $(g_i, o_i)$ from the memory.
|
2306.07929#17
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 18 |
Verbosity bias is when an LLM judge favors longer, verbose responses, even if they are not as clear, high-quality, or accurate as shorter alternatives.
To examine this bias, we design a "repetitive list" attack with model answers from MT-bench. We first select 23 model answers from MT-bench that contain a numbered list. We then make them unnecessarily verbose by asking GPT-4 to rephrase the list without adding any new information and insert the rephrased new list at the beginning of the original list. For example, if the original response contains 5 items, then the new response will contain 10 items but the first 5 items are rephrased from the original 5 items. An example is shown in Figure 12 (Appendix). We define the attack as successful if an LLM judge thinks the new response is better than the old response. Table 3 shows the failure rate of LLM judges under this attack, demonstrating that all LLMs may be prone to verbosity bias, though GPT-4 defends significantly better than others. As a calibration, we find LLM judges are able to correctly judge identical answers (i.e., they always return a tie for two identical answers) but cannot pass the more advanced "repetitive list" attack.
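A minimal sketch of how such a "repetitive list" attack and its success check could be set up; the duplication here is a naive renumbering rather than a GPT-4 rephrase, and the length-biased judge is a placeholder for an actual LLM judge call.

```python
# Minimal sketch of the "repetitive list" verbosity attack: duplicate the
# numbered list (the paper rephrases it with GPT-4; here we just renumber it),
# prepend the copy, and check whether a judge prefers the verbose answer.
def make_verbose(answer_lines):
    items = [line.split(". ", 1)[1] for line in answer_lines]
    doubled = items + items  # rephrased copy first in the paper; duplicated here
    return [f"{i}. {text}" for i, text in enumerate(doubled, 1)]

def attack_succeeds(original_lines, judge):
    verbose = make_verbose(original_lines)
    return judge("\n".join(verbose)) > judge("\n".join(original_lines))

def length_biased_judge(answer):  # stand-in for an LLM judge that rewards verbosity
    return len(answer)

original = ["1. Pack light.", "2. Book early.", "3. Check the weather."]
print(attack_succeeds(original, length_biased_judge))  # True for this biased judge
```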
|
2306.05685#18
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 18 |
A key feature of the platform is the ability to ask for help from teachers. Asking for help is done by clicking a "Request help" button. The button resides next to feedback from automated assessment and is at first inactive, but becomes active whenever a student submits an exercise for automated assessment and the solution does not pass the automated tests. Clicking the button opens up a dialog for a help request that gets sent to a queue with the associated exercise details and source code. Course staff responds to the help requests manually. The students also have access to an unofficial chatroom (Slack) with other course participants.
Our data is from 2022. During the year, there were 4,247 distinct students in the course, who collectively made 120,583 submissions to programming exercises. 831 help requests were submitted. In this article, we focus on the fifteen programming exercises with the most help requests (out of 64 exercises in total). The fifteen exercises, which are summarized in Table 1, account for more than 65% of all the help requests during the year.
For this study, we translated the programming exercise handouts (problem descriptions) to English. For each of the 15 exercises with the most help requests, we randomly sampled ten help requests, which yielded a body of 150 help requests in total.
|
2306.05715#18
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 18 |
segmentation performance of VAE self-attention is slightly lower than the average performance of LDM decoder side self-attention (Dice coefficient of 0.71 vs 0.85). For the depth estimation, the self-attention layer of VAE failed across all steps. The average RMSE obtained by VAE self-attention at the final step is still 0.84.
These results suggest that the VAE bottleneck does not contain a significant linear representation of depth early in the denoising process. Later in the process, some saliency / background information emerges. In all cases, it seems the LDM has a stronger representation of depth.
# 5 Causal Role of Depth Representation
Probing experiments show a high correlation between the internal representations of LDM and the depth of its output images. But does this representation play a causal role? To answer this question, we designed causal intervention experiments for salient (foreground) object and depth representations. Essentially, we want to change the model's output by solely modifying the internal representations in LDM using the projection learned by our probing classifiers.
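As a rough illustration of the probing-and-intervention idea described above (a sketch under assumed shapes and update rule, not the authors' implementation), a linear probe can be fit on flattened activations and its learned projection reused to push the activations toward an edited target:

```python
# Hypothetical sketch: fit a linear probe from intermediate activations to a per-pixel
# target (depth or saliency), then nudge the activations so the probe's prediction
# matches an edited target map. Shapes and the update rule are illustrative assumptions.
import numpy as np

def fit_linear_probe(acts, target):
    """acts: (N, C) flattened activation vectors; target: (N,) depth/saliency values."""
    X = np.concatenate([acts, np.ones((acts.shape[0], 1))], axis=1)  # add a bias column
    w, *_ = np.linalg.lstsq(X, target, rcond=None)                   # least-squares fit
    return w[:-1], w[-1]                                             # weights, bias

def probe_predict(acts, w, b):
    return acts @ w + b

def intervene(acts, w, b, edited_target, lr=0.1, steps=50):
    """Take gradient steps on the activations so the probe predicts the edited target."""
    acts = acts.copy()
    for _ in range(steps):
        err = probe_predict(acts, w, b) - edited_target   # (N,) prediction error
        acts -= lr * np.outer(err, w)                      # gradient of 0.5*err^2 w.r.t. acts
    return acts
```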
# Intervention experiment
|
2306.05720#18
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process$-$well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05783
| 18 |
# 3.2 Dataset Construction
# 3.2.1 Data collection
Xiezhi consists of 249,587 questions drawn mainly from two different sources. The first category includes 170k multiple-choice questions collected from six different examinations: elementary school exams, middle school entrance exams, college entrance exams, undergraduate exams, graduate entrance exams, and adult education exams. The second category is 80k multiple-choice questions generated by our auto-updating framework, which is described in the following section.
|
2306.05783#18
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 18 |
The methods mentioned above generally follow the conventional paradigm of recommendation models, where the output of LLM is fed into a delicately designed projection layer to calculate the final score for classification or regression tasks. Recently, researchers also propose to enable LLM to directly output the score or user's preference towards a target item in natural language manners (e.g., integers 1-5 for rating, yes/no for preference). Prompt4NR [Zhang and Wang, 2023] transforms the score estimation into a cloze [MASK] prediction task for binary key answer words (e.g., related/unrelated, good/bad) with multi-prompt ensembling. TabLLM [Hegselmann et al., 2023] and TALLRec [Bao et al., 2023] train the decoder-only LLM to follow instructions and answer a binary question appended behind the contextual prompting information. PBNR [Li et al., 2023f] tunes an encoder-decoder LLM (i.e., T5) to predict the yes/no answer about user preference towards each candidate news article. Zhiyuli et al. [2023] instruct LLM to predict the user rating in a textual manner, and restrict the output format as a value with two decimal places through manually designed prompts.
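For concreteness, the "binary question appended behind the contextual prompt" style of scoring could look roughly like the sketch below; the template and the yes/no parsing are assumptions for illustration, not taken from the cited papers.

```python
# Minimal sketch of instruction-style binary preference scoring (TALLRec-like yes/no
# answering). Prompt wording and parsing are assumed for illustration only.
def build_preference_prompt(user_history, target_item):
    history = "\n".join(f"- {item}" for item in user_history)
    return (
        "The user has interacted with the following items:\n"
        f"{history}\n"
        f"Will the user enjoy the target item: {target_item}?\n"
        "Answer with Yes or No."
    )

def parse_binary_answer(text):
    """Map the model's free-form answer to a preference label (1 = yes, 0 = no)."""
    words = text.strip().split()
    return 1 if words and words[0].lower().startswith("yes") else 0

prompt = build_preference_prompt(
    ["wireless earbuds", "running shoes"], "fitness tracker"
)
# response = some_llm(prompt)               # model call omitted; depends on the LLM used
# preference = parse_binary_answer(response)
```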
|
2306.05817#18
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 18 |
Obtaining data for bias evaluations is not straightforward, as there are often legal obstacles around collecting data about protected attributes, which leads to selection bias in the availability of protected features annotations [21, 252]. Moreover, as geographic and cultural contexts shift, so do the meanings of different categories [206, 112], which must be interpreted according to their local meaning. Annotators often have different perceptions of concepts like race or are influenced by their own lived experience [234] when categorizing protected categories [187].
When conducting association tests, although based in human associations, one should remain aware that general societal attitudes do not always represent subgroups of people and cultures. Evaluations for stereotype detection can raise false positives and can flag relatively neutral associations based in fact (e.g. population x has a high proportion of lactose intolerant people) [238]. Whenever additional tooling is used to aid in identifying biases, e.g., an image captioning system used in addition to the base system, each added tool introduces its own biases, accrued at each step of the tool's development, which become embedded in the overall ecosystem of biases of the system under study.
# 4.1.2 Cultural Values and Sensitive Content
|
2306.05949#18
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 18 |
[Figure content: a candidate representation — ancestors: /html/div dialog/ul location search results; target: (button id=5 (span (span Boston) (span NY, USA))) — paired with a task query — Task: Check for pickup restaurant available in Boston, NY on March 18, 5pm with just one guest; Previous Actions: [combobox] Reservation type -> SELECT: Pickup; [svg] -> CLICK; [searchbox] Find a location -> TYPE: Boston.]
Figure 3: The overall pipeline for MINDACT with a small ranking LM for candidate generation, and a large prediction LM for action prediction.
Figure 4: Illustration of the candidate generation module and the templates for constructing task query and candidate representation.
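A rough sketch of how the textual inputs illustrated in Figure 4 might be assembled (field layout inferred from the figure, not the released Mind2Web code):

```python
# Assumed templates for the candidate representation (ancestors + target element)
# and the task query (task description + previous actions).
def candidate_representation(ancestor_path, target_element):
    return f"ancestors: {ancestor_path} target: {target_element}"

def task_query(task, previous_actions):
    actions = "\n".join(previous_actions) if previous_actions else "None"
    return f"Task: {task}\nPrevious Actions:\n{actions}"

cand = candidate_representation(
    "/html/div dialog/ul location search results",
    "(button id=5 (span (span Boston) (span NY, USA)))",
)
query = task_query(
    "Check for pickup restaurant available in Boston, NY on March 18, 5pm "
    "with just one guest",
    ["[combobox] Reservation type -> SELECT: Pickup",
     "[svg] -> CLICK",
     "[searchbox] Find a location -> TYPE: Boston"],
)
# A small ranking LM scores (query, cand) pairs to filter candidates before the
# large prediction LM chooses the next action.
```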
|
2306.06070#18
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06264
| 18 |
While our proposed methods have demonstrated superior performance compared to ranking-based metrics, it remains unclear whether they have practical utility beyond the realm of analyzing LLMs
To conduct the factual alignment experiment, we selected a set of the most popular human and non-human entities and their corresponding facts from the T-REx benchmark. We gathered 500 entities and their 5,175 corresponding facts from the T-REx benchmark. We prompted the LLMs to generate a paragraph about each entity by using the prompt, "Write a paragraph about [entity]."
(a) Entropy. (b) KL-Divergence.
Figure 4: The correlation between explicit and implicit knowledge instillation using the entropy and KL-divergence metrics for the T5 language model. The mismatch regions are identified as before.
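For reference, a minimal sketch of the entropy and KL-divergence measurements referred to above, applied to toy prediction distributions (obtaining the distributions from an actual LLM is model-specific and omitted):

```python
# Toy illustration: entropy of the model's prediction distribution over candidate
# answers, and KL-divergence between distributions before and after instilling a fact.
import numpy as np

def entropy(p, eps=1e-12):
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    return -np.sum(p * np.log(p + eps))

def kl_divergence(p, q, eps=1e-12):
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    return np.sum(p * np.log((p + eps) / (q + eps)))

before = [0.25, 0.25, 0.25, 0.25]   # prediction over candidate answers, no context
after  = [0.85, 0.05, 0.05, 0.05]   # after the target fact is instilled

print(entropy(before), entropy(after))   # entropy should drop if the fact is absorbed
print(kl_divergence(after, before))      # large divergence means the fact changed predictions
```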
|
2306.06264#18
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.06283
| 18 |
Table II: LIFT for molecular atomization energies on the QM9-G4MP2 dataset. Metrics for models tuned on 90% of the QM9-G4MP2 dataset (117,232 molecules), using 10% (13,026 molecules) as a holdout test set. GPTChem refers to the approach reported by Jablonka et al. [32], GPT-2-LoRA to PEFT of the GPT-2 model using LoRA. The results indicate that the LIFT framework can also be used to build predictive models for atomization energies that can reach chemical accuracy using a Δ-ML scheme. Baseline performance (mean absolute error reported by Ward et al. [44]): 0.0223 eV for FCHL-based prediction of G4(MP2) atomization energies and 0.0045 eV (SchNet) and 0.0052 eV (FCHL) for the Δ-ML scheme.
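As a toy illustration of the Δ-ML idea named in the caption (a placeholder regressor and synthetic data, not the LIFT/GPTChem setup), the model learns only the correction between a cheap reference energy and the expensive target:

```python
# Δ-ML sketch: instead of predicting the expensive target energy directly, learn the
# (smaller) correction on top of a cheaper reference value. Features and energies
# below are synthetic placeholders.
from sklearn.linear_model import Ridge
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))                 # molecular features (placeholder)
e_cheap = X @ rng.normal(size=16)               # cheap-level energies (placeholder)
e_target = e_cheap + 0.05 * X[:, 0] + 0.01      # "expensive" energies = cheap + small correction

model = Ridge(alpha=1.0).fit(X, e_target - e_cheap)   # learn only the delta
e_pred = e_cheap + model.predict(X)                    # add the correction back

mae = np.abs(e_pred - e_target).mean()
print(f"MAE of the Δ-ML prediction: {mae:.4f} (arbitrary units)")
```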
|
2306.06283#18
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 18 |
[Figure content: an example prompt showing the last actions (e.g., search[3 ounce bottle bright citrus deodorant sensitive skin]), the current observation with available actions, and annotated encouraged (click[b078gwrclj] -> 1.0) and discouraged (click[b087wksr2g] -> 0.0) actions with short explanations.]
$S_i = f((g, o_t), (g_i, o_i)).$ (5)

Commonly, a similarity function f can be divided into two components, task similarity $f_g$ and observation similarity $f_o$:

$S_i = \lambda f_g(g, g_i) + (1 - \lambda) f_o(o_t, o_i).$
The m records with the highest similarities are retrieved to form the exemplars in the prompt. The particular similarity function designed for each task set is detailed in Subsection 4.1.
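A minimal sketch of this retrieval step, with placeholder similarity functions (the concrete f_g and f_o are task-set-specific and not reproduced here):

```python
# Combine task similarity f_g and observation similarity f_o with weight lambda,
# then keep the m highest-scoring memory records as exemplars.
def retrieve_exemplars(goal, observation, memory, f_g, f_o, lam=0.5, m=2):
    scored = []
    for record in memory:
        s = lam * f_g(goal, record["goal"]) + (1 - lam) * f_o(observation, record["observation"])
        scored.append((s, record))
    scored.sort(key=lambda x: x[0], reverse=True)
    return [record for _, record in scored[:m]]

# Example with a trivial word-overlap similarity:
overlap = lambda a, b: len(set(a.split()) & set(b.split()))
memory = [
    {"goal": "buy citrus deodorant", "observation": "search results page"},
    {"goal": "buy running shoes", "observation": "product detail page"},
]
print(retrieve_exemplars("buy bright citrus deodorant", "search results page",
                         memory, overlap, overlap, lam=0.5, m=1))
```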
|
2306.07929#18
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 19 |
Self-enhancement bias. We adopt the term "self-enhancement bias" from social cognition literature [4] to describe the effect that LLM judges may favor the answers generated by themselves.
We examine this effect statistically. Figure 3(b) shows the win rate (w/o tie) of six models under different LLM judges and humans. Compared to humans, we do observe that some judges favor certain models. For example, GPT-4 favors itself with a 10% higher win rate; Claude-v1 favors itself with a 25% higher win rate. However, they also favor other models and GPT-3.5 does not favor itself. Due to limited data and small differences, our study cannot determine whether the models exhibit a self-enhancement bias. Conducting a controlled study is challenging because we cannot easily rephrase a response to fit the style of another model without changing the quality.
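For clarity, the statistic behind this comparison, per-model win rate excluding ties computed separately for each judge, can be sketched as follows (the record format is an assumption):

```python
# Compute per-model win rate (ties excluded) for each judge from pairwise battles.
from collections import defaultdict

def win_rates_by_judge(battles):
    """battles: iterable of dicts with keys 'judge', 'model_a', 'model_b', 'winner',
    where winner is 'model_a', 'model_b', or 'tie'."""
    wins = defaultdict(lambda: defaultdict(int))
    games = defaultdict(lambda: defaultdict(int))
    for b in battles:
        if b["winner"] == "tie":
            continue                          # win rate w/o tie: skip tied battles
        for side in ("model_a", "model_b"):
            games[b["judge"]][b[side]] += 1
        winner_model = b[b["winner"]]
        wins[b["judge"]][winner_model] += 1
    return {j: {m: wins[j][m] / games[j][m] for m in games[j]} for j in games}

example = [
    {"judge": "GPT-4", "model_a": "gpt-4", "model_b": "vicuna-13b", "winner": "model_a"},
    {"judge": "GPT-4", "model_a": "gpt-4", "model_b": "vicuna-13b", "winner": "tie"},
]
print(win_rates_by_judge(example))
```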
|
2306.05685#19
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 19 |
# 3.2 Generating LLM Responses to Help Requests
We generated responses to the help requests with two LLMs: the OpenAI Codex model (code-davinci-002), which is optimized for code, and the GPT-3.5 model (gpt-3.5-turbo), which handles both free-form text and code.
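For orientation, a minimal sketch of requesting such a response from gpt-3.5-turbo via the chat completions API is shown below; the prompt wording is an assumption and does not reproduce the study's prompts.

```python
# Assumed sketch of generating a response to a help request with gpt-3.5-turbo.
# Uses the legacy openai-python interface (pre-1.0); adjust for newer client versions.
import openai

def respond_to_help_request(handout: str, student_code: str) -> str:
    prompt = (
        "A student asked for help with the following programming exercise.\n\n"
        f"Exercise handout:\n{handout}\n\n"
        f"Student's code:\n{student_code}\n\n"
        "Identify and explain the issues in the code. Do not write a model solution."
    )
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return completion["choices"][0]["message"]["content"]
```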
We started the analysis with a prompt engineering phase, trying out different types of prompts to find out what produced the most consistent and helpful outputs. We considered the following as potential parts of the prompt:
|
2306.05715#19
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 19 |
# Intervention experiment
Intervention: Repositioning foreground objects. Our goal is to see if changing the depth representation, with the same prompt and initial input, will lead to a corresponding change in apparent depth in the output image. To modify the geometry while preserving the semantics of original output, we test a minimal change: we identify a foreground object, and translate its representation in 2D space. When translating the object's representation, we used a modified salient object mask d′ reference. The mask d′
[Figure content: intervention pipeline for the prompt "Southern living container plants" — the original denoising run with the probed representation versus an intervened run in which the internal representation is modified so the salient object is randomly translated, affecting the output.]
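A small sketch (assumptions only) of producing a translated reference mask d′ for this intervention by shifting the original salient-object mask by a 2D offset:

```python
# Shift a binary salient-object mask by (dy, dx); pixels shifted out of frame are dropped.
import numpy as np

def translate_mask(mask: np.ndarray, dy: int, dx: int) -> np.ndarray:
    shifted = np.zeros_like(mask)
    h, w = mask.shape
    ys, xs = np.nonzero(mask)
    ys2, xs2 = ys + dy, xs + dx
    keep = (ys2 >= 0) & (ys2 < h) & (xs2 >= 0) & (xs2 < w)
    shifted[ys2[keep], xs2[keep]] = 1
    return shifted

mask = np.zeros((8, 8), dtype=int)
mask[2:4, 2:4] = 1                      # a small foreground object
d_prime = translate_mask(mask, dy=3, dx=2)
```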
|
2306.05720#19
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process$-$well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05783
| 19 |
Figure 2: The figure on the right shows the statistics of all questions collected by Xiezhi. The middle figure shows statistics for Xiezhi-Specialty, and the left shows Xiezhi-Interdiscipline.
# 3.2.2 Auto Updating
Our auto-updating method comprises three primary components: the manual annotation of the Xiezhi-Meta dataset, the generation of questions from open academic documents, and the subsequent automated annotation process.
# Manual Annotation
We annotated 20k questions collected from the Graduate Entrance Examination to form the meta version of Xiezhi through both manual efforts and ChatGPT. The aim of the annotation is to remove unanswerable questions and to tag each question with as many disciplines as possible.
We first used ChatGPT to tag each question with first- or second-level disciplines. In the process of tagging, we construct a prompt by concatenating the description of a question, together with its options, answers, and exam information, with the description of each discipline to increase ChatGPT's understanding of the question so that it can be tagged more accurately.
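A rough sketch of such a tagging prompt (wording assumed, not the authors' prompt):

```python
# Assumed template: question text, options, answer, and exam information concatenated
# with a discipline description, asking whether the discipline label applies.
def build_tagging_prompt(question, options, answer, exam, discipline_name, discipline_desc):
    option_text = "\n".join(f"{chr(65 + i)}. {opt}" for i, opt in enumerate(options))
    return (
        f"Exam: {exam}\n"
        f"Question: {question}\n"
        f"Options:\n{option_text}\n"
        f"Answer: {answer}\n\n"
        f"Discipline: {discipline_name}\n"
        f"Discipline description: {discipline_desc}\n\n"
        "Does this question belong to the discipline above? Reply Yes or No."
    )
```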
|
2306.05783#19
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05949
| 19 |
# 4.1.2 Cultural Values and Sensitive Content
Cultural values are specific to groups and sensitive content is normative. Sensitive topics also vary by culture and can include hate speech, which itself is contingent on cultural norms of acceptability [242]. Abusive and offensive language form a large umbrella for unsafe content, which can also include abuse and hate speech [151, 236]. What is considered a sensitive topic, such as egregious violence or adult sexual content, can vary widely by viewpoint. Due to norms differing by culture, region, and language, there is no standard for what constitutes sensitive content.
Increasing politicization of model training and outputs, as seen in projects like RightWingGPT [202], raises urgency in evaluating the complexity of political values. Distinct cultural values present a challenge for deploying models into a global sphere, as what may be appropriate in one culture may be unsafe in others [238]. Generative AI systems cannot be neutral or objective, nor can they encompass truly universal values. There is no "view from nowhere"; in evaluating anything, a particular frame of reference [207] is imposed [237].
4.1.2.1 Hate, Toxicity, and Targeted Violence Beyond hate speech and toxic language, generations may also produce harmful biases [87], stereotypes [165] (overlapping with 4.1.1 Bias, Stereotypes
|
2306.05949#19
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |