doi
stringlengths 10
10
| chunk-id
int64 0
936
| chunk
stringlengths 401
2.02k
| id
stringlengths 12
14
| title
stringlengths 8
162
| summary
stringlengths 228
1.92k
| source
stringlengths 31
31
| authors
stringlengths 7
6.97k
| categories
stringlengths 5
107
| comment
stringlengths 4
398
⌀ | journal_ref
stringlengths 8
194
⌀ | primary_category
stringlengths 5
17
| published
stringlengths 8
8
| updated
stringlengths 8
8
| references
list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2306.06283
| 65 |
A Video tink https://www.youtube.com/watch?v=aKLOIFTwxsQ B Questions: 1. What inspired you to pursue research in the field of carbon capture? 2. Can you discuss the main challenges faced by scientists in the field of carbon capture? Path to audio How do you think the general public can become more aware ofthe importance of carbon capture and reducing C02 emissions? Are there any potential negative impacts or drawbacks of using carbon capture technology? Go Can you discuss any current or ongoing projects related to carbon capture that you are involved in? How do you see the field of carbon capture evolving in the next few years, and what role do you see computational modeling playing in that evolution? Keywords: 1. Monte Carlo simulation Chemical compounds: 2. Metropolis algorithm âcopper, co2, carbon, carbon dioxide, zine 3. Importance sampling âSummary: 0 o 0 1 oO = n ° p 4 The speaker Beard Sith, iscontinunga lecture on Monte Caro simulation focusing onthe dtl ofthe (e) Ln iupac: Nove algorithm. The audiences introduced ta basic form ofthe metropolis algrth, in which prices ina . lis
|
2306.06283#65
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.05685
| 66 |
The processes involved in the central dogma are:
The processes involved in the central dogma are:
1. Transcription rephrased: The act of copying genetic information from DNA to RNA. 2. Translation rephrased: The act of using genetic information in RNA to create proteins. 3. Transcription: The process by which the genetic information in DNA is copied into RNA. 4. Translation: The process by which the genetic information in RNA is used to synthesize proteins.
1. Transcription: The process by which the genetic information in DNA is copied into RNA. 2. Translation: The process by which the genetic information in RNA is used to synthesize proteins.
The term "central dogma" was coined by Francis Crick, one of the co-discoverers of the structure of DNA, in 1958.
The term "central dogma" was coined by Francis Crick, one of the co-discoverers of the structure of DNA, in 1958.
# GPT-3.5 Judgment:
.
# While
# both
assistants correctly state the central dogma of molecular biology and who coined the term, Assistant A provides a more comprehensive explanation of the processes involved in the central dogma, including the rephrased definitions of transcription and translation. Therefore, I choose A as the better assistant.
# Claude-v1 Judgment:
|
2306.05685#66
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 66 |
[21] Andrew Ettles, Andrew Luxton-Reilly, and Paul Denny. 2018. Common Logic Errors Made by Novice Programmers. In Proc. of the 20th Australasian Computing Education Conf. (Brisbane, Queensland, Australia) (ACE â18). ACM, New York, NY, USA, 83â89. https://doi.org/10.1145/3160489.3160493
[22] James Finnie-Ansley, Paul Denny, Brett A. Becker, Andrew Luxton-Reilly, and James Prather. 2022. The Robots Are Coming: Exploring the Implications of Ope- nAI Codex on Introductory Programming. In Australasian Computing Education
ICER â23 V1, August 7â11, 2023, Chicago, IL, USA
Conf. ACM, 10â19.
[23] James Finnie-Ansley, Paul Denny, Andrew Luxton-Reilly, Eddie Antonio Santos, James Prather, and Brett A Becker. 2023. My AI Wants to Know if This Will Be on the Exam: Testing OpenAIâs Codex on CS2 Programming Exercises. In Proceedings of the 25th Australasian Computing Education Conference. 97â104.
|
2306.05715#66
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05783
| 66 |
OpenAI (2023a). Chatgpt: Optimizing language models for dialogue. https://openai.com/ blog/chatgpt.
OpenAI (2023b). Gpt-4 technical report.
Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., et al. (2022). Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730â27744.
Perez, E., Ringer, S., LukoÅ¡i¯utËe, K., Nguyen, K., Chen, E., Heiner, S., Pettit, C., Olsson, C., Kundu, S., Kadavath, S., et al. (2022). Discovering language model behaviors with model-written evaluations. arXiv preprint arXiv:2212.09251.
|
2306.05783#66
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 66 |
[Li et al., 2023e] Xiangyang Li, Bo Chen, Lu Hou, and Ruiming Tang. Ctrl: Connect tabular and language model for ctr prediction. arXiv preprint arXiv:2306.02841, 2023. [Li et al., 2023f] Xinyi Li, Yongfeng Zhang, and Edward C Malthouse. Pbnr: Prompt-based news recommender sys- tem. arXiv preprint arXiv:2304.07862, 2023.
[Li et al., 2023g] Xinyi Li, Yongfeng Zhang, and Edward C Malthouse. A preliminary study of chatgpt on news recommendation: Personalization, provider fairness, fake news. arXiv preprint arXiv:2306.10702, 2023.
[Lin and Zhang, 2023] Guo Lin and Yongfeng Zhang. Sparks of artiï¬cial general recommender (agr): Early ex- periments with chatgpt. arXiv preprint arXiv:2305.04518, 2023.
|
2306.05817#66
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 66 |
Reducing carbon emissions should start with standards and transparency for carbon emissions reporting and accounting for efficiency. Having a more standardized approach, such as ISO standards, to reporting the carbon emissions of AI can help better understand their evolution, and to compare the emissions of different approaches and models. While certain conferences such as NeurIPS are starting to include compute information in submissions in submission checklists, there is still a lot of variability in carbon reporting, and figures can vary widely depending on what factors are included. The current pursuit of accuracy above all else is often at odds with other aspects of model
18
performance, including efficiency. Including these metrics when comparing two or more models (e.g. in benchmarks and leaderboards) can help users make trade-offs that consider both aspects and choose the model that best corresponds to their use case and criteria.
# 5 Broader Impacts of Evaluations
Understanding an AI system from conception to training to deployment requires insight into training data, the model itself, and the use case/application into which the system is deployed. It also requires understanding people, society, and how societal processes, institutions, and power are changed and shifted by an AI system.
# 5.1 Context for the System
|
2306.05949#66
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 66 |
[45] Ce Zhou, Qian Li, Chen Li, Jun Yu, Yixin Liu, Guangjing Wang, Kai Zhang, Cheng Ji, Qiben Yan, Lifang He, Hao Peng, Jianxin Li, Jia Wu, Ziwei Liu, Pengtao Xie, Caiming Xiong, Jian Pei, Philip S. Yu, and Lichao Sun. A comprehensive survey on pretrained foundation models: A history from BERT to chatgpt. CoRR, abs/2302.09419, 2023. doi: 10.48550/ARXIV.2302.09419. URL https://doi.org/10.48550/arXiv.2302.09419.
15
# A Overview
Our supplementary includes the following sections:
⢠Section B: Data Collection Details. Details for crowsourcing and implementation details for the three data collection phases: task proposal, task demonstration, and task verification.
Section C: Experiment Details. Details for evaluation and model implementation. ⢠Section D: Additional Results. Results for additional auxiliary experiments.
Following NeurIPS Dataset and Benchmark track guidelines, we have shared the following artifacts:
|
2306.06070#66
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06283
| 66 |
ofthe (e) Ln iupac: Nove algorithm. The audiences introduced ta basic form ofthe metropolis algrth, in which prices ina . lis algorithm, in which pa Cu > tessfeubehem ne nim ih govfeom Zn questions about the lgorth's efectveness and implementation including the selection ofprtlsat SMILES:C{-0)=0 random, the order of displacement, and the siz of the displacement. The talk will delve into these questions and provide answers to them, Sortby » Search roa oF Questions: Can you explain the concept of importance sampling in Monte Carlo simulation? How do you calculate the energy of the system in the new position of the particle? Why is itimportant to space out the sampling of thermodynamic properties during Monte Carlo simulation?
|
2306.06283#66
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 66 |
- search[3 ounce bottle bright citrus deodorant sensitive skin] Observation: | Instruction: i would like a 3 ounce bottle of bright citrus deodorant for sensitive skin, and price lower than 40.00 dollars Back to Search] Page | (Total results: 50) Next >] BO78GWRCI1J] Bright Citrus Deodorant by Earth Mama | Natural and Safe for Sensitive Skin, Pregnancy and Breastfeeding, Contains Organic Calendula 3-Ounce $10.99 BO78GTKVXY] Ginger Fresh Deodorant by Earth Mama | Natural and Safe for Sensitive Skin, Pregnancy and Breastfeeding, Contains Organic Calendula 3-Ounce $10.99 BO8KBVJ4XN] Barrel and Oak - Aluminum-Free Deodorant, Deodorant for Men, Essential Oil-Based Scent, 24-Hour Odor Protection, Cedar & Patchouli Blend, Gentle on Sensitive Skin (Mountain Sage, 2.7 0z, 2-Pack) $15.95 Available Actions: - back to search - next > - b078gwre1j - bO78gtkvxy - bO8kbvj4xn Encouraged: click[b078gwrclj] ->
|
2306.07929#66
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 67 |
# Claude-v1 Judgment:
Both assistants answer the question by defining the central dogma and detailing the two processes involved, transcription and translation. However, . Assistant A also mentions who coined the term "central dogma", which provides useful context and shows a more comprehensive understanding of the topic. Overall, assistant A gives a fuller and more insightful response, so I have chosen it as the better assistant for this question.
# GPT-4 Judgment:
Both Assistant A and Assistant B provide accurate and relevant information about the central dogma of molecular biology, the processes involved, and who named it. . Assistant B's answer is more concise and avoids this redundancy. Therefore, Assistant B's answer is better.
# XX
Figure 12: An example of ârepetitive listâ attack to examine verbosity bias. Except for the two rephrased items (highlighted in red), Assistant Aâs answer is exactly the same as Assistant B. Both GPT-3.5 and Claude-v1 show a verbosity bias towards the longer and repetitive answer. Only GPT-4 successfully detected this attack.
19
# yy
Question: Benjamin went to a bookstore and purchased a variety of books. He bought 5 copies of a sci-fi novel, each priced at $20, 3 copies of a history book priced at $30 each, and 2 copies of a philosophy book for $45 each. What was the total cost of his purchases?
|
2306.05685#67
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 67 |
[24] Elena L Glassman, Jeremy Scott, Rishabh Singh, Philip J Guo, and Robert C Miller. 2015. OverCode: Visualizing variation in student solutions to programming problems at scale. ACM Transactions on Computer-Human Interaction (TOCHI) 22, 2 (2015), 1â35.
[25] Andrew Head, Elena Glassman, Gustavo Soares, Ryo Suzuki, Lucas Figueredo, Loris DâAntoni, and Björn Hartmann. 2017. Writing reusable code feedback at scale with mixed-initiative program synthesis. In Proceedings of the Fourth (2017) ACM Conference on Learning@ Scale. 89â98.
[26] Arto Hellas, Juho Leinonen, and Petri Ihantola. 2017. Plagiarism in take-home exams: help-seeking, collaboration, and systematic cheating. In Proc. of the 2017 ACM conf. on innovation and technology in computer science education. 238â243. [27] Andrew D Hilton, Genevieve M Lipp, and Susan H Rodger. 2019. Translation from Problem to Code in Seven Steps. In Proceedings of the ACM Conference on Global Computing Education. 78â84.
|
2306.05715#67
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05783
| 67 |
Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., and Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485â5551.
Sai, A. B., Mohankumar, A. K., and Khapra, M. M. (2022). A survey of evaluation metrics used for nlg systems. ACM Computing Surveys (CSUR), 55(2):1â39.
Scao, T. L., Fan, A., Akiki, C., Pavlick, E., Ili´c, S., Hesslow, D., Castagné, R., Luccioni, A. S., Yvon, F., Gallé, M., et al. (2022). Bloom: A 176b-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100.
|
2306.05783#67
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 67 |
[Lin et al., 2021] Jianghao Lin, Weiwen Liu, Xinyi Dai, Weinan Zhang, Shuai Li, Ruiming Tang, Xiuqiang He, Jianye Hao, and Yong Yu. A graph-enhanced click model for web search. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1259â1268, 2021.
[Liu et al., 2019] Weiwen Liu, Jun Guo, Nasim Sonboli, Robin Burke, and Shengyu Zhang. Personalized fairness- In Proceedings of aware re-ranking for microlending. the 13th ACM conference on recommender systems, pages 467â471, 2019.
[Liu et al., 2021] Yiding Liu, Weixue Lu, Suqi Cheng, Dait- ing Shi, Shuaiqiang Wang, Zhicong Cheng, and Dawei Yin. Pre-trained language model for web-scale retrieval in baidu search. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pages 3365â3375, 2021.
|
2306.05817#67
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 67 |
# 5.1 Context for the System
Context is critical to robust evaluation; the way in which we properly define and evaluate harm in any given application requires an understanding of the target industry, task, end-user, and model architecture. Communication across model developers, model deployers, and end-users is key to developing a comprehensive evaluation and risk mitigation strategy. Actors across the ecosystem should collaborate to craft robust evaluations and invest in the safeguards needed to prevent harm.
# 5.2 Context of the Evaluation
Systems can be deployed in contexts where there is not sufficient attention towards evaluating and moderating performance. This means disparate performance is not caught, as seen with social media platform moderation outside of the most commonly-written languages and wealthiest countries [203]. Moreover, as cultural values change between cultural contexts, both within and outside of any given language, the particular cultural values that are being evaluated should be made explicit. A byproduct of such specificity is that it becomes clear where evaluations should be extended while providing a framework for such extensions.
# 5.3 Choosing Evaluations
|
2306.05949#67
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 67 |
Following NeurIPS Dataset and Benchmark track guidelines, we have shared the following artifacts:
Artifcat Link License Homepage https://osu-nlp-group.github.io/Mind2Web/ - Code Repository https://github.com/OSU-NLP-Group/Mind2Web MIT License Training Data https://huggingface.co/datasets/osunlp/Mind2Web CC BY 4.0 Test Data https://shorturl.at/iGI45 (password âmind2webâ) CC BY 4.0
The authors and the OSU NLP group are committed to ensuring its regular upkeep and updates.
16
Table 3: Prompt for ChatGPT to generate seed tasks to inspire the annotators during task proposal.
Prompt We are testing the website
|
2306.06070#67
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06283
| 67 |
Figure 10: The I-digest interface. A video (e.g., of a lecture recording) can be described using the Whisper model. Based on the transcript, an LLM can generate questions (and answers). Those can assist students in their learning. The LLM can also detect mentions of chemicals and link to further information about them (e.g., on PubChem [100â102]).
novel educational opportunities [99], for example, by personalizing content or providing almost limitless varied examples.
The I-Digest (Information-Digestor) hackathon team (Beatriz MouriËno, Elias Moubarak, Joren Van Herck, Sauradeep Majumdar, Xiaoqi Zhang) created a path toward such a new educational opportunity by providing students with a digital tutor based on course material such as lecture recordings. Using the Whisper model [71], videos of lec- ture recordings can be transcribed to text transcripts. The transcripts can then be fed into an LLM with the prompt to come up with questions about the content presented in the video (Figure 10). In the future, these questions might be shown to students before a video starts, allowing them to skip parts they already know or after the video, guiding students to the relevant timestamps or additional material in case of an incorrect answer.
|
2306.06283#67
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.05685
| 68 |
$45 each. What was the total cost of his purchases? Assistant A: Assistant B: The total cost of Benjaminâs purchases was $115. The total cost of his purchases is $295. GPT-4 Judgment: Assistant B provided the correct answer to the userâs question. Assistant Aâs response was incorrect. To calculate the total cost of Benjaminâs purchases, we need to multiply the number of books by their respective prices and then add the results: 5 sci-fi novels * $20 = $100 3 history books * $3@ = $90 2 philosophy books * $45 = $90 Assistant Bâs response is accurate and directly answers the userâs question. Therefore, my final verdict is: [[B]] GPT-4âs Answer (if asked separately): The cost of the sci-fi novels is 5 * $20 = $<<5*20=100>>100. The cost of the history books is 3 * $30 = $<<3*30=90>>90. The cost of the philosophy books is 2 * $45 = $<<2*45=90>>90. The total cost of his purchases is
Figure 13: With a default prompt, GPT-4 shows limited capability in grading math questions. Despite being able to answer the question itself, its judgment was influenced by the given answers, leading to arithmetic mistakes highlighted in yellow.
|
2306.05685#68
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 68 |
[28] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neu- ral computation 9, 8 (1997), 1735â1780.
[29] Petri Ihantola, Tuukka Ahoniemi, Ville Karavirta, and Otto Seppälä. 2010. Re- view of recent systems for automatic assessment of programming assignments. In Proc. of the 10th Koli calling int. conf. on computing education research. 86â93. [30] Petri Ihantola, Juha Sorva, and Arto Vihavainen. 2014. Automatically detectable indicators of programming assignment diï¬culty. In Proceedings of the 15th An- nual Conference on Information technology education. 33â38.
[31] Petri Ihantola, Arto Vihavainen, Alireza Ahadi, Matthew Butler, Jürgen Börstler, Stephen H Edwards, Essi Isohanni, Ari Korhonen, Andrew Petersen, Kelly Rivers, et al. 2015. Educational data mining and learning analytics in programming: Literature review and case studies. In Proc. of the 2015 ITiCSE on Working Group Reports. ACM, 41â63.
|
2306.05715#68
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05783
| 68 |
Srivastava, A., Rastogi, A., Rao, A., Shoeb, A. A. M., Abid, A., Fisch, A., Brown, A. R., Santoro, A., Gupta, A., Garriga-Alonso, A., et al. (2022). Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615.
12
StabilityAI (2023). Stability-AI/StableLM. Stablelm: Stability ai language models. https://github.com/
Sun, T., Xiaotian, Z., Zhengfu, H., Peng, L., Qinyuan, C., Hang, Y., Xiangyang, L., Yunfan, S., Qiong, T., Xingjian, Z., Ke, C., Yining, Z., Zhejian, Z., Ruixiao, L., Jun, Z., Yunhua, Z., Linyang, L., Xiaogui, Y., Lingling, W., Zhangyue, Y., Xuanjing, H., and Xipeng, Q. (2023a). Fudannlp moss.
|
2306.05783#68
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 68 |
[Liu et al., 2022a] Guang Liu, Jie Yang, and Ledell Wu. Ptab: Using the pre-trained language model for modeling tabular data. arXiv preprint arXiv:2209.08060, 2022. [Liu et al., 2022b] Qijiong Liu, Jieming Zhu, Quanyu Dai, and Xiaoming Wu. Boosting deep ctr prediction with a plug-and-play pre-trainer for news recommendation. In Proceedings of the 29th International Conference on Com- putational Linguistics, pages 2823â2833, 2022.
[Liu et al., 2023a] Junling Liu, Chao Liu, Renjie Lv, Kang Zhou, and Yan Zhang. Is chatgpt a good recommender? a preliminary study. arXiv preprint arXiv:2304.10149, 2023.
[Liu et al., 2023b] Peng Liu, Lemei Zhang, and Jon Atle Gulla. Pre-train, prompt and recommendation: A comprehensive survey of language modelling paradigm arXiv preprint adaptations in recommender systems. arXiv:2302.03735, 2023.
|
2306.05817#68
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 68 |
# 5.3 Choosing Evaluations
The evaluations selected to determine a modelâs performance will impact the values that it propagates out during deployment. There is no universal evaluation by which to evaluate a modelâs performance, and any evaluation metrics should be used with deployment context in mind [210, 191]. Furthermore, notable work at top AI ethics publication venues has not adequately centered on the least powerful in society [32], thereby further exacerbating disparate outcomes by only providing avenues for mitigation for some.
# 6 Conclusion
Just as generative AI systems undergo performance evaluations, they must be evaluated for social impacts. The seven categories in our framework for technical base systems move toward a standard for all modalities of a base system. Our analyses of popular evaluation methods per category can help to improve research in producing novel evaluations. Evaluating people and society interacts with risk and harms taxonomies for generative AI systems. Existing people and societal evaluations are limited and must consider challenges and ethics of determining human responses. Since social impact evaluations can only give limited information about each impact type, we recommend that all categories are given equal importance, and that all relevant stakeholders are meaningfully consulted throughout the development, evaluation, and deployment processes.
# Acknowledgments and Disclosure of Funding
|
2306.05949#68
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 68 |
https://aa.com | American Airlines - Airline tickets and low fares at aa.com Book low fares to destinations around the world and find the latest deals on airline tickets, hotels, car rentals and vacations at aa.com. As an AAdantage member you earn miles on every trip and everyday spend. We want to test practical daily tasks that a user would do on the web- site. Come up with a list of 5 example tasks and try to cover different cases. Requirements: - Each example should be a single sentence and not just click one of the elements. - Donât give step-by-step instructions or directly mention the element to interact. - Describe the goal of the task and provide concrete information or constraints. Use mock-up information (identifier, number, personal information, name, date, attributes, constraints, etc.) to make the task more specific and realistic. Sample Responses Retrieve the confirmation number for a reservation made under the name jane smith departing from dallas (dfw) to miami (mia) on january 20th. Find and book a round-trip flight from new york to london departing on december 15th and returning on december 22nd for under $800. Find
|
2306.06070#68
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06283
| 68 |
Importantly, and in contrast to conventional educational materials, this approach can generate a practically inï¬nite number of questions and could, in the future, be contin- uously be improved by student feedback. In addition, it is easy to envision extending this approach to consider lecture notes or books to guide the students further or even recommend speciï¬c exercises.
25
# II. CONCLUSION
The fact that the groups were able to present prototypes that could do quite complex tasks in such a short time illustrates the power of LLMs. Some of these prototypes would have taken many months of programming just a few months ago, but the fact that LLMs could reduce this time to a few hours is one of the primary reasons for the success of our hackathon. Combined with the time-constrained environment in teams (with practically zero cost of âfailureâ), we found more energy and motivation. The teams delivered more results than in most other hackathons we participated in.
|
2306.06283#68
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 68 |
Bright Citrus Deodorant by Earth Mama | Natural and Safe for Sensitive Skin, Pregnancy and Breastfeeding, Contains Organic Calendula 3-Ounce $10.99
Ginger Fresh Deodorant by Earth Mama | Natural and Safe for Sensitive Skin, Pregnancy and Breastfeeding, Contains Organic Calendula 3-Ounce $10.99
Barrel and Oak - Aluminum-Free Deodorant, Deodorant for Men, Essential Oil-Based Scent, 24-Hour Odor Protection, Cedar & Patchouli Blend, Gentle on Sensitive Skin (Mountain Sage, 2.7 0z, 2-Pack)
Figure 8: Exemplar for WebShop. YAML markups are adopted to avoid confusing the keywords like âObservation:â with the colon-ended titles in the page representation.
18
# Task:
Search an article to learn how to hide gauges. Then, access the article "How to Hide Gauges" Last 5 Actions:
|
2306.07929#68
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 69 |
Question: Each problem consists of three statements. Based on the first two statements, the third statement may be true, false, or uncertain. 1. Oranges cost more than apples. 2. Oranges cost less than bananas. 3. Bananas cost more than apples and bananas cost more than orange. If the first two statements are true, then the third statement is Assistant A: Assistant B: false. False. If oranges cost more than apples (statement 1) and If the first two statements are true, then the oranges cost less than bananas (statement 2), then third statement is false. The first statement it is not possible for bananas to cost less than says that oranges cost more than apples, and the oranges and also cost more than both apples and second statement says that oranges cost less oranges (statement 3). Therefore, statement 3 is than bananas. This means that oranges cost less false. than bananas, which contradicts the third statement that bananas cost more than apples and more than oranges. Therefore, the third statement must be false. GPT-4 Judgment: and relevant responses to the userâs question. They both concluded that the third statement is false based on the first two statements. However, Assistant Bâs response is slightly clearer and more concise in explaining the
|
2306.05685#69
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 69 |
[32] Matthew C Jadud. 2005. A ï¬rst look at novice compilation behaviour using BlueJ. Computer Science Education 15, 1 (2005), 25â40.
[33] Matthew C Jadud. 2006. Methods and tools for exploring novice compilation behaviour. In Proceedings of the second international workshop on Computing ed- ucation research. 73â84.
[34] Johan Jeuring, Hieke Keuning, Samiha Marwan, Dennis Bouvier, Cruz Izu, Natalie Kiesler, Teemu Lehtinen, Dominic Lohr, Andrew Peterson, and Sami Towards Giving Timely Formative Feedback and Hints to Sarsa. 2022. Novice Programmers. the 2022 Working Group Reports on Innovation and Technology in Computer Science Education (ITiCSE-WGR â22). Association for Computing Machinery, New York, NY, USA, 95â115. https://doi.org/10.1145/3571785.3574124
[35] Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Yejin Bang, Andrea Madotto, and Pascale Fung. 2022. Survey of hallucination in natural language generation. Comput. Surveys (2022).
|
2306.05715#69
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05783
| 69 |
Sun, T., Zhang, X., He, Z., Li, P., Cheng, Q., Yan, H., Liu, X., Shao, Y., Tang, Q., Zhao, X., Chen, K., Zheng, Y., Zhou, Z., Li, R., Zhan, J., Zhou, Y., Li, L., Yang, X., Wu, L., Yin, Z., Huang, X., and Qiu, X. (2023b). Moss: An open-source tool-augmented conversational language model from fudan university. https://github.com/OpenLMLab/MOSS.
Suzgun, M., Scales, N., Schärli, N., Gehrmann, S., Tay, Y., Chung, H. W., Chowdhery, A., Le, Q. V., Chi, E. H., Zhou, D., et al. (2022). Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261.
|
2306.05783#69
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 69 |
[Liu et al., 2023c] Qijiong Liu, Nuo Chen, Tetsuya Sakai, and Xiao-Ming Wu. A ï¬rst look at llm-powered generative news recommendation. arXiv preprint arXiv:2305.06566, 2023.
[Ma et al., 2023] Yubo Ma, Yixin Cao, YongChing Hong, and Aixin Sun. Large language model is not a good few- shot information extractor, but a good reranker for hard samples! arXiv preprint arXiv:2303.08559, 2023.
[Mao et al., 2023] Zhiming Mao, Huimin Wang, Yiming Du, and Kam-fai Wong. Unitrec: A uniï¬ed text-to-text trans- former and joint contrastive learning framework for text- based recommendation. arXiv preprint arXiv:2305.15756, 2023.
|
2306.05817#69
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 69 |
# Acknowledgments and Disclosure of Funding
We thank the unnamed workshop contributors. We also thank Hugging Face for supporting this work. Thank you to Rishi Bommasani, Nima Boscarino, Deep Ganguli, and Andrew Smart for their thoughtful feedback. Any remaining errors are the authorsâ alone.
19
Contributions, listed in alphabetical order by last name:
Workshop Leads: Irene Solaiman, Zeerak Talat
# FAccT 2023 CRAFT Leads: William Agnew, Marie-Therese Png, Irene Solaiman, Zeerak Talat
Major contributors: William Agnew, Lama Ahmad, Dylan Baker, Ellie Evans, Sara Hooker, Yacine Jernite, Alberto Lusoli, Sasha Luccioni, Jessica Newman, Marie-Therese Png, Irene Solaiman, Zeerak Talat
Workshop Contributors: Willie Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Sasha Luccioni, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Irene Solaiman, Andrew Strait, Zeerak Talat, Apostol Vassilev
# References
[1] Analyse and automate every message | Re:infer, . URL https://www.reinfer.io/.
|
2306.05949#69
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 69 |
20th. Find and book a round-trip flight from new york to london departing on december 15th and returning on december 22nd for under $800. Find the lowest fare for a one-way flight from new york city (jfk) to los angeles (lax) leaving on august 15th and returning on august 20th, with no layovers exceeding 2 hours. Rent a car in honolulu for one week starting on august 1st that can fit at least four passengers and has gps navigation included. Cancel a car rental reservation for a compact car in orlando, florida that was scheduled to be picked up on may 25th at 5 pm without incurring any cancellation fees.
|
2306.06070#69
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06283
| 69 |
Through the LIFT framework, one can use LLMs to address problems that could already be addressed with conventional approachesâbut in a much more accessible way (using the same approach for diï¬erent problems), while also reusing established concepts such as â-ML. At the same time, however, we can use LLMs to model chemistry and materials science in novel ways; for example, by incorporating context information such as âfuzzyâ design rules or directly operating on unstructured data. Overall, a common use case has been to use LLMs to deal with âfuzzinessâ in programming and tool de- velopment. We can already see tools like Copilot and ChatGPT being used to convert âfuzzy abstractionsâ or hard-to-deï¬ne tasks into code. These advancements may soon allow everyone to write small apps or customize them to their needs (end-user program- ming). Additionally, we can observe an interesting trend in tool development: most of the logic in the showcased tools is written in English, not in Python or another pro- gramming language. The resulting code is shorter, easier to understand, and has fewer dependencies because LLMs are adept at handling fuzziness that is
|
2306.06283#69
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 69 |
18
# Task:
Search an article to learn how to hide gauges. Then, access the article "How to Hide Gauges" Last 5 Actions:
Search an article to learn how to hide gauges. Then, access the article "How to Hide Gauges" Last 5 Actions: Screen: <button alt="Open navigation drawer" id="0" clickable="true"></button> <img class="Wwikihow toolbar logo" id="1" clickable="false"> <img class="search button" alt="Search" id="2" clickable="true"> <div class="WwebView" id="3" clickable="true"></div> <div class="statusBarBackground" id="4" clickable="false"></div> Instruction: Last Reward: 0.0 Total Reward: Encouraged: INPUT(2, hide gauges) -> 2.0 <img class="search button" alt="Search" id="2" clickable="true"> Discouraged: SCROLL(RIGHT) -> 0.0
Screen:
Figure 9: Exemplar for WikiHow
19
|
2306.07929#69
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 70 |
They both concluded that the third statement is false based on the first two statements. However, Assistant Bâs response is slightly clearer and more concise in explaining the contradiction between the statements. Therefore, I choose Assistant B as the better response. ((8]] GPT-4âs Answer (if asked separately): True. If oranges cost more than apples (statement 1) and oranges cost less than bananas (statement 2), it logically follows that bananas cost more than apples and bananas cost more than oranges, which is the third statement. So, if the first two statements are true, the third one is also true.
|
2306.05685#70
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 70 |
[36] W. Lewis Johnson, Elliot Soloway, Benjamin Cutler, and Steven Draper. 1983. Bug Catalogue: I. Technical Report. Yale University, YaleU/CSD/RR #286. [37] Slava Kalyuga. 2007. Expertise reversal eï¬ect and its implications for learnertailored instruction. Educational psychology review 19 (2007), 509â539.
[38] Stuart A Karabenick. 2004. Perceived achievement goal structure and college student help seeking. J of educational psychology 96, 3 (2004).
[39] Majeed Kazemitabaar, Justin Chow, Carl Ka To Ma, Barbara J Ericson, David Weintrop, and Tovi Grossman. 2023. Studying the eï¬ect of AI Code Generators on Supporting Novice Learners in Introductory Programming. arXiv preprint arXiv:2302.07427 (2023).
[40] Hieke Keuning, Johan Jeuring, and Bastiaan Heeren. 2018. A systematic litera- ture review of automated feedback generation for programming exercises. ACM Transactions on Computing Education (TOCE) 19, 1 (2018), 1â43.
|
2306.05715#70
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05783
| 70 |
Taori, R., Gulrajani, I., Zhang, T., Dubois, Y., Li, X., Guestrin, C., Liang, P., and Hashimoto, T. B. (2023). Stanford alpaca: An instruction-following llama model. https://github.com/ tatsu-lab/stanford_alpaca.
Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al. (2023). Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
Wang, E. J., Alexiuk, C., Bo, Y., Yang, Z., Kwok, K., Gusev, I., Echavez, A., et al. (2023a). Alpaca- lora. https://github.com/tloen/alpaca-lora.
|
2306.05783#70
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 70 |
[Muhamed et al., 2021] Aashiq Muhamed, Iman Keivanloo, Sujan Perera, James Mracek, Yi Xu, Qingjun Cui, San- tosh Rajagopalan, Belinda Zeng, and Trishul Chilimbi. Ctr-bert: Cost-effective knowledge distillation for billion- In NeurIPS Efï¬cient Natural parameter teacher models. Language and Speech Processing Workshop, 2021.
[Mysore et al., 2023] Sheshera Mysore, Andrew McCallum, and Hamed Zamani. Large language model aug- mented narrative driven recommendations. arXiv preprint arXiv:2306.02250, 2023.
[Ouyang et al., 2022] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instruc- tions with human feedback. Advances in Neural Informa- tion Processing Systems, 35:27730â27744, 2022.
|
2306.05817#70
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 70 |
# References
[1] Analyse and automate every message | Re:infer, . URL https://www.reinfer.io/.
[2] Global Comprehensive Privacy Law Mapping Chart, . URL https://iapp.org/ resources/article/global-comprehensive-privacy-law-mapping-chart/.
[3] Methodologies and and Adaptation, Tools to Evaluate Climate Change Im- https://unfccc.int/ pacts methodologies-and-tools-to-evaluate-climate-change-impacts-and-adaptation-2. . URL
[4] California Consumer Privacy Act of 2018, 2018. URL https://leginfo.legislature. ca.gov/faces/codes_displayText.xhtml?division=3.&part=4.&lawCode=CIV& title=1.81.5.
[5] AB-5 Worker status: employees and independent contractors., 2019. URL https://leginfo. legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB5.
|
2306.05949#70
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 70 |
# B Data Collection Details
# B.1 Crowdsourcing Details
In this study, we employ annotators from the Amazon Mechanical Turk (mturk) platform. The annotators are required to have a minimum of 1,000 approved HITs with an approval rate exceeding 98% on the platform. We design the compensation with an estimated hourly rate of $10.10 to respect the minimum wage guidelines in Ohio, United States. Every worker passing our qualification receives a bonus, and we pay $0.80 for each approved final task. We do not collect any identifiable private information during the study, and explicitly instruct the annotators to refrain from entering personal or sensitive data into the system. Annotators engage with our annotation tool only within a secure, remote sandbox environment, posing no foreseeable harm. The study complies with the IRB exemption criteria, per the Office of Responsible Research Practices at The Ohio State University. All annotators are presented with a consent form, to which they must agree before participating in the study. To prepare the workers for the task, we provide a comprehensive training document and a video tutorial, followed by a qualification assessment comprising a questionnaire and a series of test demonstrations using our tool. It is noteworthy that the task is divided into two phases: task proposal and task demonstration. The proposal phase comes with a nominal reward, with the majority of the compensation dispensed upon successful completion of the demonstration.
|
2306.06070#70
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06283
| 70 |
pro- gramming language. The resulting code is shorter, easier to understand, and has fewer dependencies because LLMs are adept at handling fuzziness that is diï¬cult to address with conventional code. This suggests that we may not need more formats or standards for interoperability; instead, we can simply describe existing solutions in natural lan- guage to make them interoperable. Exploring this avenue further is exciting, but it is equally important to recognize the limitations of LLMs, as they currently have limited
|
2306.06283#70
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 70 |
Experience Instruction: Access the article "How to Become a Specialist Physician" Observation: Action QValue âOpen navigation drawerâ i search button" alt="Search" i cuck@ ff] 0 More options" id="2" clickable=' v4-460px-Become-a Doctor Step-6-Version-6"id='3" clickable="false CLICK(D (Link w Relonnvos) D '5" clickable="false">Take the MCAT exam and submit your scores... </div> " clickable="false">The Medical College Admission Test (MCAT). <button class=" ref-6" id="7" clickable="true">Link to Reference 6</button> i clickable="false">* </div> </p> INPUT(2, How to Become a Specialist Physician) fg ° ="false">Look for study guides online or at bookstores... </p> -460px-Become-a-Doct fep-7-Version- "10" clickable="false"> "statusBarBackground" id="1 1" clickable="false"></div> INPUT(2, become a specialist 0 physician) Encouraged: CLICK(6) -> 0.3 <p id="6" clickable="false">The Medical College Admission Test (MCAT)...
|
2306.07929#70
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05715
| 71 |
[41] Teemu Koivisto and Arto Hellas. 2022. Evaluating CodeClusters for Eï¬ectively Providing Feedback on Code Submissions. In 2022 IEEE Frontiers in Education Conference (FIE). IEEE, 1â9.
[42] Juho Leinonen, Francisco Enrique Vicente Castro, and Arto Hellas. 2022. Time- on-Task Metrics for Predicting Performance. In Proceedings of the 53rd ACM Tech- nical Symposium on Computer Science Education V. 1. 871â877.
[43] Juho Leinonen, Paul Denny, Stephen MacNeil, Sami Sarsa, Seth Bernstein, Joanne Kim, Andrew Tran, and Arto Hellas. 2023. Comparing Code Expla- arXiv preprint nations Created by Students and Large Language Models. arXiv:2304.03938 (2023).
|
2306.05715#71
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05783
| 71 |
Wang, J., Liang, Y., Meng, F., Shi, H., Li, Z., Xu, J., Qu, J., and Zhou, J. (2023b). Is chatgpt a good nlg evaluator? a preliminary study. arXiv preprint arXiv:2303.04048.
Wei, J., Tay, Y., Bommasani, R., Raffel, C., Zoph, B., Borgeaud, S., Yogatama, D., Bosma, M., Zhou, D., Metzler, D., et al. (2022). Emergent abilities of large language models. arXiv preprint arXiv:2206.07682.
Xiong, H., Wang, S., Zhu, Y., Zhao, Z., Liu, Y., Huang, L., Wang, Q., and Shen, D. (2023). Doctorglm: Fine-tuning your chinese doctor is not a herculean task.
Xu, C., Guo, D., Duan, N., and McAuley, J. (2023). Baize: An open-source chat model with parameter-efficient tuning on self-chat data. arXiv preprint arXiv:2304.01196.
|
2306.05783#71
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 71 |
[Patro et al., 2020] Gourab K Patro, Arpita Biswas, Niloy Ganguly, Krishna P Gummadi, and Abhijnan Chakraborty. Fairrec: Two-sided fairness for personalized recommenda- In Proceedings of the web tions in two-sided platforms. conference 2020, pages 1194â1204, 2020.
[Petrov and Macdonald, 2023] Aleksandr V Petrov and Craig Macdonald. Generative sequential recommendation with gptrec. arXiv preprint arXiv:2306.11114, 2023. [Qiu et al., 2021] Zhaopeng Qiu, Xian Wu, Jingyue Gao, and Wei Fan. U-bert: Pre-training user representations for In Proceedings of the AAAI improved recommendation. Conference on Artiï¬cial Intelligence, volume 35, pages 4320â4327, 2021.
[Radford et al., 2019] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
|
2306.05817#71
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 71 |
[6] DOD Adopts of Ethical Defense Principles for Artificial Intelligence Department //www.defense.gov/News/Releases/Release/Article/2091996/ dod-adopts-ethical-principles-for-artificial-intelligence/. > Release, Feb. 2020. URL > U.S. https:
[7] COM (2021) 762: Proposal for a DIRECTIVE OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL on improving working conditions in platform work, 2021. URL https: //eur-lex.europa.eu/procedure/EN/2021_414. Doc ID: 2021_414 Doc Title: Proposal for a DIRECTIVE OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL on improving working conditions in platform work Usr_lan: en.
[8] Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL LAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE (ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION LEGISLA- TIVE ACTS, 2021. URL https://eur-lex.europa.eu/legal-content/EN/TXT/ ?uri=celex%3A52021PC0206.
|
2306.05949#71
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 71 |
Quality and diversity are ensured through a two-stage review process. The first author reviews all tasks after task proposal and manually select the tasks for demonstration. After task demonstration, a thorough final verification of all collected data is conducted by all authors to authenticate the tasks and recorded actions. Each demonstration is first verified by one of the authors, and uncertain ones are further verified by the first author to reach consensus.
17
Thanks for working on the HIT! Worker ID A DELTAS ook cniciin MvTnPs rucistarUS owl Ne scnu BSN Target Website: https://inw.delta.con ir Line Lights & Plane Tickets + Hotel: *e Rental Car CMH Current Target Task Find New York from June 2nd At any time, you can press âctrliqâ to quit the Hi l NEW ROUTES NEW ADVENTURES oH Explore seasonal routes to jumpstart your travel plans and discover new 2. [ink] CMH Columbus, Of -> CLTOK destinations we're flying tolike Auckland, Geneva and Berlin Actions 1, [link] OW Departure Airport or City Columbus VIEW OFFERS You selected: [link] To Destination Airport or City| Your Destination Which action do you want to take ) Click Type Hover Press Enter Click (Fake) Ignore
|
2306.06070#71
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06283
| 71 |
# interpretability and lack robustness.
It is interesting to note that none of the projects relied on the knowledge or under- standing of chemistry by LLMs. Instead, they relied on general reasoning abilities and provided chemistry information through the context or ï¬ne-tuning. However, this also brings new and unique challenges. All projects used the models provided by OpenAIâs API. While these models are powerful, we cannot examine how they were built or have any guarantee of continued reliable access to them.
26
|
2306.06283#71
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 71 |
specialist 0 physician) Encouraged: CLICK(6) -> 0.3 <p id="6" clickable="false">The Medical College Admission Test (MCAT)... </p> Random encouraged action Omitted discouraged actions is | Discouraged: B | ciicw( > img e option "2 2 true! E CLICK > utton ref-6" i ble="true">Link x to Reference 6</button | rneut(2, How to me a cialist Physician) -> my alt="M F c lickab "true" INPUT(2, become a specialist physician) -> img alt="Moré t is" 7 "2" ib Le rue" s = Encouraged: 3 INPUT(2, get a hottie girlfriend) -> 0.2 <img alt="More 2 options" i i 3
|
2306.07929#71
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 72 |
Question: How many integers are in the solution of the inequality |x + 5| < 10? Assistant A: Assistant B: There are 19 integers in the solution of the To solve the inequality |x + 5| < 10, we need to inequality |x + 5| < 10. consider two cases: Case 1: x + 5 is positive or zero In this case, the inequality becomes x + 5 < 10, which simplifies to x < 5. Case 2: x + 5 is negative In this case, the inequality becomes -(x +5) < 10, which simplifies to -x - 5 < 10. Adding 5 to both sides gives -x < 15, and multiplying by -1 (and flipping the inequality) gives x > -15. So the solution to the inequality is -15 < x < 5. The integers in this range are . Therefore, there are 20 integers in the solution of the inequality. GPT-4 Judgment: To solve the inequality |x + 5| < 10, we need to consider two cases: Case 1: x + 5 is positive or zero In this case, the inequality becomes x + 5 < 10, which simplifies to x < 5. Case 2: x + 5 is negative In this case, the inequality becomes -(x +
|
2306.05685#72
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 72 |
[44] Juho Leinonen, Paul Denny, and Jacqueline Whalley. 2022. A Comparison of Immediate and Scheduled Feedback in Introductory Programming Projects. In Proc. of the 53rd ACM Technical Symposium on Computer Science Education V. 1 (Providence, RI, USA) (SIGCSE 2022). ACM, New York, NY, USA, 885â891. https://doi.org/10.1145/3478431.3499372
[45] Juho Leinonen, Arto Hellas, Sami Sarsa, Brent Reeves, Paul Denny, James Prather, and Brett A Becker. 2023. Using large language models to enhance pro- gramming error messages. In Proceedings of the 54th ACM Technical Symposium
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, and Juha Sorva
on Computer Science Education V. 1. 563â569.
|
2306.05715#72
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05783
| 72 |
Zellers, R., Holtzman, A., Bisk, Y., Farhadi, A., and Choi, Y. (2019). Hellaswag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830.
Zeng, A., Liu, X., Du, Z., Wang, Z., Lai, H., Ding, M., Yang, Z., Xu, Y., Zheng, W., Xia, X., Tam, W. L., Ma, Z., Xue, Y., Zhai, J., Chen, W., Liu, Z., Zhang, P., Dong, Y., and Tang, J. (2023). GLM-130b: An open bilingual pre-trained model. In The Eleventh International Conference on Learning Representations (ICLR).
Zeng, H. (2023). Measuring massive multitask chinese understanding. arXiv preprint arXiv:2304.12986.
Zhao, W. X., Zhou, K., Li, J., Tang, T., Wang, X., Hou, Y., Min, Y., Zhang, B., Zhang, J., Dong, Z., et al. (2023). A survey of large language models. arXiv preprint arXiv:2303.18223.
|
2306.05783#72
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 72 |
[Raffel et al., 2020] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a uniï¬ed text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485â 5551, 2020.
AmirHossein Naghshzan, and Latifa Guerrouj. Improving code exam- ple recommendations on informal documentation using bert and query-aware lsh: A comparative study. arXiv preprint arXiv:2305.03017, 2023.
[Sileo et al., 2022] Damien Sileo, Wout Vossen, and Robbe Zero-shot recommendation as language Raymaekers. In Advances in Information Retrieval: 44th modeling. European Conference on IR Research, ECIR 2022, Sta- vanger, Norway, April 10â14, 2022, Proceedings, Part II, pages 223â230. Springer, 2022.
[Singh and Joachims, 2018] Ashudeep Singh and Thorsten Joachims. Fairness of exposure in rankings. In Proceed- ings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2219â 2228, 2018.
|
2306.05817#72
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 72 |
and Ha- rassment URL https://www.adl.org/resources/blog/ americans-views-generative-artificial-intelligence-hate-and-harassment.
[10] Cyber girls, cyber-violence-growing-threat-especially-women-and-girls?language_ content_entity=en.
[11] REAIM 2023 | Ministry of Foreign Affairs | Government.nl, 2023. URL https://www.government.nl/ministries/ministry-of-foreign-affairs/ activiteiten/reaim.
20
[12] L. Abbott and C. Grady. A Systematic Review of the Empirical Literature Evaluating IRBs: What We Know and What We Still Need to Learn. Journal of empirical research on human research ethics : JERHRE, 6(1):3â19, Mar. 2011. ISSN 1556-2646. doi: 10.1525/jer.2011.6.1. 3. URL https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3235475/.
|
2306.05949#72
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 72 |
Figure 7: Illustration of our annotation tool, which consists of two side-by-side windows. On the left we provide a dialogue window for the user to control the tool and select operations to take. On the right we provide the browser window for the user to interact with and select web elements.
# Complete Demonstration
ao Expl id Select Website orethe Task Select the Confirm the and Task P Demonstration â Task Website âSaumur an element
Figure 8: The overall procedure for task demonstration.
# B.2 Task Proposal
|
2306.06070#72
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06283
| 72 |
26
Although there are open-source language models and techniques available, they are generally more diï¬cult to use compared to simply using OpenAIâs API. Furthermore, the performance of language models can be fragile, especially for zero- or few-shot applica- tions. To further investigate this, new benchmarks are needed that go beyond the tabular datasets we have been using for ML for molecular and materials scienceâwe simply have no frameworks to compare and evaluate predictive models that use context, unstructured data, or tools. Without automated tests, however, it is diï¬cult to improve these systems systematically. On top of that, consistent benchmarking is hard because de-duplication is ill-deï¬ned even if the training data are known. To enable a scientiï¬c approach to the development and analysis of these systems, we will also need to revisit versioning frame- works to ensure reproducibility as systems that use external tools depend on the exact versions of training data, LLM, as well as of the external tools and prompting setup.
The diversity of the prototypes presented in this work shows that the potential appli- cations are almost unlimited, and we can probably only see the tip of the icebergâfor instance, we didnât even touch modalities other than text thus far.
|
2306.06283#72
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.05685
| 73 |
the inequality becomes x + 5 < 10, which simplifies to x < 5. Case 2: x + 5 is negative In this case, the inequality becomes -(x + 5) < 10, which simplifies to -x - 5 < 10. Adding 5 to both sides gives -x < 15, and multiplying by -1 (and flipping the inequality) gives x > -15. So the solution to the inequality is -15 < x < 5. The integers in this range are . Therefore, there are 2@ integers in the solution of the inequality. Comparing the answers provided by the two AI assistants: Assistant A: 19 integers Assistant B: 20 integers The correct answer is 2@ integers, so Assistant B provided the correct and more helpful response. Final verdict: [[B]] GPT-4âs Answer (if asked separatedly): We can split the inequality into two cases: 1) If $x+5\ge 0$, then $|x+5|=x+5$. So, we have $x+5<10$, which means $x<5$. Since $x+5\ge @$, we have $x\ge -5$. So in this case, $x$ can be any integer from $-5$ to $4$, inclusive, which
|
2306.05685#73
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 73 |
on Computer Science Education V. 1. 563â569.
[46] Juho Leinonen, Leo Leppänen, Petri Ihantola, and Arto Hellas. 2017. Comparison of Time Metrics in Programming. In Proc. of the 2017 ACM Conf. on Int. Comput- ing Education Research (Tacoma, Washington, USA) (ICER â17). ACM, NY, NY, USA, 200â208. https://doi.org/10.1145/3105726.3106181
[47] Leo Leppänen, Arto Hellas, and Juho Leinonen. 2022. Piloting Natural Language Generation for Personalized Progress Feedback. In 2022 IEEE Frontiers in Educa- tion Conference (FIE). IEEE, 1â8.
[48] Soohyun Nam Liao, Sander Valstar, Kevin Thai, Christine Alvarado, Daniel Zin- garo, William G Griswold, and Leo Porter. 2019. Behaviors of higher and lower performing students in CS1. In Proceedings of the 2019 ACM Conference on Inno- vation and Technology in Computer Science Education. 196â202.
|
2306.05715#73
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05783
| 73 |
Zhong, W., Cui, R., Guo, Y., Liang, Y., Lu, S., Wang, Y., Saied, A., Chen, W., and Duan, N. (2023). Agieval: A human-centric benchmark for evaluating foundation models. arXiv preprint arXiv:2304.06364.
Zhou, C., Li, Q., Li, C., Yu, J., Liu, Y., Wang, G., Zhang, K., Ji, C., Yan, Q., He, L., et al. (2023). A comprehensive survey on pretrained foundation models: A history from bert to chatgpt. arXiv preprint arXiv:2302.09419.
Zhu, M., Deng, Z., Xiong, W., Yu, M., Zhang, M., and Wang, W. Y. (2019). Neural correction model for open-domain named entity recognition. arXiv preprint arXiv:1909.06058.
13
|
2306.05783#73
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 73 |
[Sun et al., 2019] Fei Sun, Jun Liu, Jian Wu, Changhua Pei, Xiao Lin, Wenwu Ou, and Peng Jiang. Bert4rec: Se- quential recommendation with bidirectional encoder rep- resentations from transformer. In Proceedings of the 28th ACM international conference on information and knowl- edge management, pages 1441â1450, 2019.
[Sun et al., 2023] Weiwei Sun, Lingyong Yan, Xinyu Ma, Pengjie Ren, Dawei Yin, and Zhaochun Ren. Is chatgpt good at search? investigating large language models as re- ranking agent. arXiv preprint arXiv:2304.09542, 2023.
[Touvron et al., 2023] Hugo Touvron, Thibaut Lavril, Gau- tier Izacard, Xavier Martinet, Marie-Anne Lachaux, Tim- oth´ee Lacroix, Baptiste Rozi`ere, Naman Goyal, Eric Ham- bro, Faisal Azhar, et al. Llama: Open and efï¬cient founda- tion language models. arXiv preprint arXiv:2302.13971, 2023.
|
2306.05817#73
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 73 |
[13] G. Abercrombie, A. C. Curry, T. Dinkar, and Z. Talat. Mirages: On Anthropomorphism in Dia- logue Systems, May 2023. URL http://arxiv.org/abs/2305.09800. arXiv:2305.09800 [cs].
[14] D. Acemoglu and P. Restrepo. Tasks, Automation, and the Rise in U.S. Wage Inequal- ity. Econometrica, 90(5):1973â2016, 2022. ISSN 1468-0262. doi: 10.3982/ECTA19815. URL https://onlinelibrary.wiley.com/doi/abs/10.3982/ECTA19815. _eprint: https://onlinelibrary.wiley.com/doi/pdf/10.3982/ECTA19815.
|
2306.05949#73
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 73 |
Figure 8: The overall procedure for task demonstration.
# B.2 Task Proposal
We use ChatGPT to generate sample tasks to provide inspiration to the annotators, and the prompt used is shown in Table 3. For each HIT, we ask the annotator to select a website of their interest first. Following this, we present them with ten sample tasks produced by ChatGPT, and request them to propose a maximum of five additional tasks. The annotator is instructed not to directly copy the sample tasks. We manually evaluate all submitted tasks and reject those that demonstrate low quality or are too similar to previously accepted tasks. We set a nominal reward of $0.05 for each task proposal HIT, and the annotator will receive it no matter whether the tasks are accepted or not. For accepted ones, the annotator will receive the full reward of $0.80 on successful demonstration of the task. Once we have collected a total of around 20 tasks for a specific website, we desist from showing it to the user, aiming for a balanced distribution among websites and increased task diversity.
# B.3 Task Demonstration
|
2306.06070#73
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06283
| 73 |
Given these new ways of working and thinking, combined with the rapid pace of developments in the ï¬eld, we believe that we urgently need to rethink how we work and teach. We must discuss how we ensure safe use [103], standards for evaluating and sharing those models, and robust and reliable deployments. But we also need to discuss how we ensure that the next generation of chemists and materials scientists are proï¬cient and critical users of these toolsâthat can use them to work more eï¬ciently while critically reï¬ecting on the outputs of the systems. We believe that to truly leverage the power of LLMs in the molecular and material sciences, we need a community eï¬ortâincluding not only chemists and computer scientists but also lawyers, philosophers, and ethicists: the possibilities and challenges are too broad and profound to tackle alone.
# Acknowledgements
We would like to speciï¬cally thank Jim Warren (NIST) for his contributions to discus- sions leading up to the hackathon and his participation as a judge during the event. We would also like to thank Anthony Costa and Christian Dallago (NVIDIA) for supporting the hackathon.
B.B., I.T.F, and ZH acknowledge support from the the National Science Foundation awards #2226419 and #2209892. This work was performed under the following ï¬nancial
27
|
2306.06283#73
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 73 |
Current Step Exemplar w/o Obsrv. Sim. Instruction: Instruction: Access the article "How to Get a Hottie Access the article "How to Avoid Girlfriend" Misgenderingâ Getting a Date Getting a date can be difficult and intimidating, but you can do so confidently with the help of wikiHowâs Getting a Date category. Learn how to get a guy to like you or make a gil fall for you. When youre ready, ask that special someone out, plan the perfect outing, start a relationship with them, and more with our 490+ articles. Featured Articles How to Avoid Misgendering + 21,959 views + Updated year ago How to Avoid Alimony + 45,106 views + Updated? years ago + Expert Co-Authored How to Avoid Aspartame + 21,684 views + Updated? years ago + Expert Co-Authored How to Avoid Blushing + 1,086,470 views + Updated? years ago How to Avoid Conflict, + 204,873 views + Updated? years ago + Expert Co-Authored How to Avoid Dehydration + 127,974 views + Updatedt year ago How to Avoid Discrimination + 205,383 views + Updatedt year ago How to Avoid Distractions fo = 1.0 Similariti Exemplar w/ Obsrv. Sim. Instruction: Access the
|
2306.07929#73
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05715
| 74 |
[49] Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2023. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. Comput. Surveys 55, 9 (2023), 1â35.
[50] Dastyni Loksa, Amy J Ko, Will Jernigan, Alannah Oleson, Christopher J Mendez, and Margaret M Burnett. 2016. Programming, problem solving, and self- awareness: Eï¬ects of explicit guidance. In Proceedings of the 2016 CHI conference on human factors in computing systems. 1449â1461.
[51] Stephen MacNeil, Andrew Tran, Arto Hellas, Joanne Kim, Sami Sarsa, Paul Denny, Seth Bernstein, and Juho Leinonen. 2023. Experiences from Using Code Explanations Generated by Large Language Models in a Web Software Develop- ment E-Book. In Proc. SIGCSEâ23. ACM, 6 pages.
|
2306.05715#74
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05783
| 74 |
Model BLOOM-560m Scao et al. (2022) 3 BLOOMZ-560m Muennighoff et al. (2022) 4 Pythia-1B Biderman et al. (2023) 5 BLOOM-1b7 Scao et al. (2022) 6 BLOOMZ-1b7 Muennighoff et al. (2022) 7 Dolly-v2-3b Conover et al. (2023) 8 Pythia-2.8B Biderman et al. (2023) 9 BLOOM-3b Scao et al. (2022) 10 BLOOMZ-3b Muennighoff et al. (2022) 11 StableLM-Tuned-Alpha-3B StabilityAI (2023) 12 ChatGLM-6B Zeng et al. (2023); Du et al. (2022) 13 DoctorGLM Xiong et al. (2023) 14 Dolly-v2-7b Conover et al. (2023) 15 h2ogpt-oig-oasst1-512-6.9b H2O.ai (2023) 16 Pythia-6.9B Biderman et al. (2023) 17 Alpaca-7B Taori et al.
|
2306.05783#74
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 74 |
[Wang and Lim, 2023] Lei Wang and Ee-Peng Lim. Zero- shot next-item recommendation using large pretrained lan- guage models. arXiv preprint arXiv:2304.03153, 2023.
[Wang et al., 2019] Zhiguo Wang, Patrick Ng, Xiaofei Ma, Ramesh Nallapati, and Bing Xiang. Multi-passage bert: A globally normalized bert model for open-domain question answering. arXiv preprint arXiv:1908.08167, 2019.
[Wang et al., 2022] Jie Wang, Fajie Yuan, Mingyue Cheng, Joemon M Jose, Chenyun Yu, Beibei Kong, Zhijin Wang, Bo Hu, and Zang Li. Transrec: Learning transferable rec- ommendation from mixture-of-modality feedback. arXiv preprint arXiv:2206.06190, 2022.
[Wang et al., 2023] Zifeng Wang, Chufan Gao, Cao Xiao, and Jimeng Sun. Anypredict: Foundation model for tabu- lar prediction. arXiv preprint arXiv:2305.12081, 2023.
|
2306.05817#74
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 74 |
[15] O. Ahia, J. Kreutzer, and S. Hooker. The Low-Resource Double Bind: An Empiri- In Findings of the As- cal Study of Pruning for Low-Resource Machine Translation. sociation for Computational Linguistics: EMNLP 2021, pages 3316â3333. Association for Computational Linguistics, 2021. doi: 10.18653/v1/2021.findings-emnlp.282. URL https://aclanthology.org/2021.findings-emnlp.282.
[16] N. Ahmed and M. Wahed. The De-democratization of AI: Deep Learning and the Compute Divide in Artificial Intelligence Research, Oct. 2020. URL http://arxiv.org/abs/2010. 15581. arXiv:2010.15581 [cs].
[17] A. Al Odhayani, W. J. Watson, and L. Watson. Behavioural consequences of child abuse. Canadian Family Physician, 59(8):831â836, Aug. 2013. ISSN 0008-350X. URL https: //www.ncbi.nlm.nih.gov/pmc/articles/PMC3743691/.
|
2306.05949#74
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 74 |
# B.3 Task Demonstration
We develop a dedicated annotation tool for task demonstration using Playwright,3 which allows us to interact with the browser and record user actions. As shown in Figure 7. the tool is composed of two windows. The dialogue window on the left serves as the annotatorâs control panel for guiding the interaction flow and choosing operations. The browser window on the right is where the annotator navigates the website and selects elements for interaction. Figure 8 shows the overall procedure for task demonstration. The annotator starts by selecting the website and task to be demonstrated. Once selected, the tool will bring up the website in the browser. The annotator is then instructed to explore the website and practice the task. To collect clean actions during actual demonstration, the workers are asked to close pop-up windows during exploration. We also provide anonymous accounts for the workers to use so that no private information is entered. The exploration stage is
3https://playwright.dev/
18
You selected: [link] To Destination Airport or City Your Destination What text do you want to type in the field: New Yorkl 7
You selected: [combobox] Make Which option do you want to select: >t ) Toyota oyota FIA Lotus Bentley
Figure 9: Dialogue window for the Type operation.
Figure 10: Dialogue window for the Select Option operation.
|
2306.06070#74
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06283
| 74 |
27
assistance award 70NANB19H005 from the U.S. Department of Commerce, National Institute of Standards and Technology as part of the Center for Hierarchical Materials Design (CHiMaD).
K.J.S, A.S. acknowledge support from the the National Science Foundation award #1931306.
K.M.J., S.M., J.v.H., X.Z., B.M., E.M., and B.S. were supported by the MAR- VEL National Centre for Competence in Research funded by the Swiss National Science Foundation (grant agreement ID 51NF40-182892) and the USorb-DAC Project, which is funded by a grant from The Grantham Foundation for the Protection of the Environ- ment to RMIâs climate tech accelerator program, Third Derivative. B.M. was further supported by the European Unionâs Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 945363.
M.C.R., S.C., and A.D.W. were supported by the National Science Foundation and the National Institute of General Medical Sciences under Grant No. 1764415 and award number R35GM137966, respectively.
|
2306.06283#74
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 74 |
Discrimination + 205,383 views + Updatedt year ago How to Avoid Distractions fo = 1.0 Similariti Exemplar w/ Obsrv. Sim. Instruction: Access the article "How to Get Gallade in Pokémon" Pokemon Black and White Learn everything you want about Pokemon Black and White with the wikiHow Pokemon Black and White Category. Learn about topics such as How to Get Zorua in Pokémon White, How to Catch Landorus in Pokérnon Black and White, How to Find All of the 7 Sages in Pokémon Black and White, and more with our helpful step-by-step instructions with photos and videos. Catch Landorus in Pokémon Black and White Get Zorua in Pokémon White [Get Gallade in Pokémon| Similarities: Tg = 1.0 fo © 0.77 I | I | | | | I | | | | I | | | I I Articles about Pokemon Black and White | I I | I | I I | I | | | | I | | 1
|
2306.07929#74
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 75 |
2) If $x+5<0$, then $|x+5|=-(x+5)$. So, we have $-(x+5)<10$, which means $x+5>-10$, which means $x>- 15$. Since $x+5<0$, we have $x<-5$. So in this case, $x$ can be any integer from $-14$ to $-6$, inclusive, which gives us $-6-(-14)+1=9$ integers.
Figure 15: An example of GPT-4âs wrong judgment with chain-of-thought prompt. We can see GPT-4 exactly copied Assistant Bâs answer (which contains arithmetic errors) and determined Assistant Aâs answer is incorrect. This suggest GPT-4âs chain-of-thought process can be significantly influenced by the given answers despite we ask it to think independently.
21
|
2306.05685#75
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 75 |
[52] Stephen MacNeil, Andrew Tran, Juho Leinonen, Paul Denny, Joanne Kim, Arto Hellas, Seth Bernstein, and Sami Sarsa. 2023. Automatically Generating CS Learning Materials with Large Language Models. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 2 (Toronto ON, Canada) (SIGCSE 2023). Association for Computing Machinery, New York, NY, USA, 1176. https://doi.org/10.1145/3545947.3569630
[53] Stephen MacNeil, Andrew Tran, Dan Mogil, Seth Bernstein, Erin Ross, and Zi- heng Huang. 2022. Generating Diverse Code Explanations Using the GPT-3 Large Language Model. In Proc. of the 2022 ACM Conf. on Int. Computing Educa- tion Research - Volume 2. ACM, 37â39.
[54] Ye Mao. 2019. One minute is enough: Early prediction of student success and event-level diï¬culty during novice programming tasks. In In: Proceedings of the 12th International Conference on Educational Data Mining (EDM 2019).
|
2306.05715#75
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05783
| 75 |
H2O.ai (2023) 16 Pythia-6.9B Biderman et al. (2023) 17 Alpaca-7B Taori et al. (2023) 18 Alpaca-LoRA-7B Wang et al. (2023a) 19 Baize-7B Xu et al. (2023) 20 Baize Healthcare-7B Xu et al. (2023) 21 LLaMA-7B Touvron et al. (2023) 22 StableLM-Tuned-Alpha-7B StabilityAI (2023) 23 Vicuna-7b-delta-v1.1 Chiang et al. (2023) 24 BELLE-7B-0.2M 25/0.6M 26/1M 27/2M 28 Ji et al. (2023a) BLOOM-7b1 Scao et al. (2022) 29 BLOOMZ-7b1 30/mt 31/p3 32 Muennighoff et al. (2022) Dolly-v2-12b Conover et al. (2023) 33 h2ogpt-oasst1-512-12b H2O.ai (2023) 34 Open-Assistant-SFT-4-12B Kpf et al.
|
2306.05783#75
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 75 |
[Wei et al., 2022] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yo- gatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. Emergent abilities of large language models. arXiv preprint arXiv:2206.07682, 2022.
[Wu et al., 2021] Chuhan Wu, Fangzhao Wu, Tao Qi, and Yongfeng Huang. Empowering news recommendation with pre-trained language models. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1652â 1656, 2021.
[Wu et al., 2022] Chuhan Wu, Fangzhao Wu, Tao Qi, Chao Zhang, Yongfeng Huang, and Tong Xu. Mm-rec: Visiolin- guistic model empowered multimodal news recommenda- tion. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2560â2564, 2022.
|
2306.05817#75
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 75 |
[18] A. Allen. Protecting Oneâs Own Privacy in a Big Data Economy. Harvard Law Re- view, 130(2), Dec. 2016. URL https://harvardlawreview.org/forum/vol-130/ protecting-ones-own-privacy-in-a-big-data-economy/.
[19] C. Alonso, S. Kothari, Intelli- the Gap Between Rich Dec. https://www.imf.org/en/Blogs/Articles/2020/12/02/ and S. Rehman. How Artificial gence Could Widen URL 2020. blog-how-artificial-intelligence-could-widen-the-gap-between-rich-and-poor-nations. and Poor Nations,
[20] C. Andrade. The Limitations of Online Surveys. Indian Journal of Psychological Medicine, 42(6):575â576, Nov. 2020. doi: https://doi.org/10.1177/0253717620957496. URL https: //journals.sagepub.com/doi/epub/10.1177/0253717620957496.
|
2306.05949#75
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 75 |
not recorded and primarily serves to familiarize the annotator with the website and task, as well as to prepare the website to prevent future pop-ups, thereby ensuring a clean, streamlined set of final recorded actions. After exploration, the annotator is directed to return to the homepage and reset any altered values, allowing us to begin the demonstration in a fresh state. During the demonstration, the annotator will illustrate how to accomplish the task step-by-step using both the browser and the dialogue window. To ensure a clean set of annotated actions, annotators are restricted from directly engaging with the browser during the demonstration phase. Instead, we divide each action step into two stages: Element selection and operation selection. At each step, the annotator first selects the target element by clicking it in the browser. We will highlight the selected element in the browser window but block the actual click event. The annotator is then prompted to select the operation to perform within the dialogue window, which is then carried out by the annotation tool in the browser. We provide 6 operations: Click, Type, Hover, Press Enter, Click (Fake) and Ignore. For the Type operation, the annotator is additionally required to supply the value as shown in Figure 9. If the chosen element is
|
2306.06070#75
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06283
| 75 |
Q.A.âs contribution to this work was supported by the National Center for Advanc- ing Translational Sciences of the National Institutes of Health under award number U18TR004149. The content is solely the responsibility of the authors and does not necessarily represent the oï¬cial views of the National Institutes of Health.
M.V.G. acknowledges support from the Spanish National Research Council (CSIC) through the Programme for internationalization i-LINK 2021 (Project LINKA20412), and from the Spanish Agencia Estatal de Investigaci´on (AEI) through the Grant TED2021- 131693B-I00 funded by MCIN/AEI/ 10.13039/501100011033 and by the âEuropean Union NextGenerationEU/PRTRâ and through the Ram´on y Cajal Grant RYC-2017- 21937 funded by MCIN/AEI/ 10.13039/501100011033 and by âESF Investing in your futureâ.
|
2306.06283#75
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.05685
| 76 |
Question: Share ideas for adapting art masterpieces into interactive experiences for children. List 5 specific artworks and associated ideas. Assistant A: Assistant B: Sure, here are five ideas for adapting art masterpieces into interactive (omitted) experiences for children: 1. âThe Starry Nightâ by Vincent van Gogh - Create a virtual reality experience where children can explore the night sky and see the stars and constellations as van Gogh saw them. They can also create their own starry night scenes using different colors and brush strokes. - Create a sensory experience where children can touch, feel, and smell different objects to explore the surrealist themes of Daliâs painting. They can also create their own surreal landscapes using different materials and textures. 3. (omitted) Follow-up Question: Write a concrete plan for your second example. Include budget estimates. Assistant A: Assistant B: Sure, here is a concrete plan for adapting âThe Persistence of Memoryâ by Salvador (omitted) Dali into an interactive experience for children: 1. Budget: $5,000 2. Objectives: - To create a sensory experience that allows children to explore the surrealist themes of Daliâs painting. -
|
2306.05685#76
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 76 |
[55] Samiha Marwan, Ge Gao, Susan Fisk, Thomas W Price, and Tiï¬any Barnes. 2020. Adaptive immediate feedback can improve novice programming engagement and intention to persist in computer science. In Proceedings of the 2020 ACM conference on international computing education research. 194â203.
[56] Davin McCall and Michael Kölling. 2014. Meaningful categorisation of novice programmer errors. In 2014 IEEE Frontiers in Education Conf. (FIE) Proc. IEEE, 1â8.
[57] Davin McCall and Michael Kölling. 2019. A new look at novice programmer
errors. ACM Transactions on Computing Education (TOCE) 19, 4 (2019), 1â30. [58] Silvia Muller, Monica Babes-Vroman, Mary Emenike, and Thu D Nguyen. 2020. Exploring Novice Programmersâ Homework Practices: Initial Observations of Information Seeking Behaviors. In Proceedings of the 51st ACM Technical Sympo- sium on Computer Science Education.
[59] Moin Nadeem, Anna Bethke, and Siva Reddy. 2020. StereoSet: Measuring stereo- arXiv preprint arXiv:2004.09456 typical bias in pretrained language models. (2020).
|
2306.05715#76
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05783
| 76 |
H2O.ai (2023) 34 Open-Assistant-SFT-4-12B Kpf et al. (2023) 35 Pythia-12B Biderman et al. (2023) 36 Baize-13B Xu et al. (2023) 37 LLaMA-13B Touvron et al. (2023) 38 Vicuna-13b-delta-v1.1 Chiang et al. (2023) 39 moss-moon-003-sft Sun et al. (2023b) 40 moss-moon-003-sft-plugin Sun et al. (2023b) 41 GPT-NeoX-20B Black et al. (2022) 42 h2ogpt-oasst1-512-20b H2O.ai (2023) 43 Baize-30B Xu et al. (2023) 44 LLaMA-30B Touvron et al. (2023) 45 LLaMA-65B Touvron et al. (2023) 46 BLOOM Scao et al. (2022) 47 BLOOMZ 48/mt 49/p3 50 Muennighoff et al. (2022) ChatGPT (2023.05.01) OpenAI (2023a) 51 GPT-4
|
2306.05783#76
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 76 |
[Xi et al., 2023a] Yunjia Xi, Jianghao Lin, Weiwen Liu, Xinyi Dai, Weinan Zhang, Rui Zhang, Ruiming Tang, and Yong Yu. A birdâs-eye view of reranking: from list level to page level. In Proceedings of the Sixteenth ACM Interna- tional Conference on Web Search and Data Mining, pages 1075â1083, 2023.
[Xi et al., 2023b] Yunjia Xi, Weiwen Liu, Jianghao Lin, Jieming Zhu, Bo Chen, Ruiming Tang, Weinan Zhang, Rui Zhang, and Yong Yu. Towards open-world recommen- dation with knowledge augmentation from large language models. arXiv preprint arXiv:2306.10933, 2023.
[Yu et al., 2022a] Junliang Yu, Hongzhi Yin, Xin Xia, Tong Chen, Jundong Li, and Zi Huang. Self-supervised learn- ing for recommender systems: A survey. arXiv preprint arXiv:2203.15876, 2022.
|
2306.05817#76
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 76 |
[21] M. Andrus, E. Spitzer, J. Brown, and A. Xiang. What We Canât Measure, We Canât Understand: Challenges to Demographic Data Procurement in the Pursuit of Fairness. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pages 249â 260. ACM, 2021-03-03. ISBN 978-1-4503-8309-7. doi: 10.1145/3442188.3445888. URL https://dl.acm.org/doi/10.1145/3442188.3445888.
[22] D. Autor. The Labor Market Impacts of Technological Change: From Unbridled Enthusiasm to Qualified Optimism to Vast Uncertainty, May 2022. URL https://www.nber.org/ papers/w30074.
|
2306.05949#76
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 76 |
Press Enter, Click (Fake) and Ignore. For the Type operation, the annotator is additionally required to supply the value as shown in Figure 9. If the chosen element is a select HTML element, and the annotator opts for Click, it translates to a Select Option operation and we will prompt the annotator to select one of the options as shown in Figure 10. To avoid ambiguity, the Click, Hover and Press Enter operations are all mapped to Click in the final dataset. Click (Fake) is a special operation. It will be recorded the same as a normal Click but will not get executed in the browser. This is designed for safeguarding against state-changing actions (i.e., actions that produce side effects to the world), such as posting a comment or scheduling an appointment, since it will interfere with other real users of the website. In practice, once a model predicts Click (Fake), it may prompt the user for confirmation before executing such state-changing actions. Finally, the annotator can also choose Ignore in case they select a wrong element. Once all the actions have been annotated, the annotator can choose to complete the task. They will then be asked to confirm the task description again and make any necessary modifications.
|
2306.06070#76
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06283
| 76 |
The datalab project (M.L.E., B.E.S. and J.D.B.) has received funding from the Eu- ropean Unionâs Horizon 2020 research and innovation programme under grant agreement 957189 (DOI: 10.3030/957189), the Battery Interface Genome - Materials Acceleration Platform (BIG-MAP), as an external stakeholder project. M.L.E. additionally thanks the BEWARE scheme of the Wallonia-Brussels Federation for funding under the European Commissionâs Marie Curie-Sklodowska Action (COFUND 847587). B.E.S. acknowledges support from the UKâs Engineering and Physical Sciences Research Council (ESPRC).
28
B.P. acknowledges support from the National Science Foundation through NSF-CBET Grant No. 1917340. The authors thank Phung Cheng Fei, Hassan Harb, and Vinayak Bhat for their helpful comments on this project.
D.C. and L.C.B. thank NSF DGE-2022040 for the aiM NRT funding support.
|
2306.06283#76
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.05685
| 77 |
$5,000 2. Objectives: - To create a sensory experience that allows children to explore the surrealist themes of Daliâs painting. - To encourage children to create their own surreal landscapes using different materials and textures. GPT-4 Judgment: Assistant A provided a concrete plan for adapting âThe Persistence of Memoryâ by Salvador Dali into an interactive experience for children, while Assistant B provided a plan for the âMona Lisa Interactive Experience for Children.â
|
2306.05685#77
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 77 |
[60] Matti Nelimarkka and Arto Hellas. 2018. Social help-seeking strategies in a pro- gramming MOOC. In Proceedings of the 49th ACM Technical Symposium on Com- puter Science Education. 116â121.
[61] Andy Nguyen, Christopher Piech, Jonathan Huang, and Leonidas Guibas. 2014. Codewebs: scalable homework search for massive open online programming courses. In Proceedings of the 23rd international conference on World wide web. 491â502.
[62] Henrik Nygren, Juho Leinonen, and Arto Hellas. 2019. Non-restricted Access to Model Solutions: A Good Idea?. In Proceedings of the 2019 ACM Conference on Innovation and Technology in Computer Science Education. 44â50.
[63] Henrik Nygren, Juho Leinonen, Nea Pirttinen, Antti Leinonen, and Arto Hellas. 2019. Experimenting with model solutions as a support mechanism. In Proceed- ings of the 2019 Conference on United Kingdom & Ireland Computing Education Research. 1â7.
[64] Daniel FO Onah, Jane Sinclair, and Russell Boyatt. 2014. Dropout rates of mas- sive open online courses: behavioural patterns. EDULEARN14 proceedings (2014), 5825â5834.
|
2306.05715#77
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05783
| 77 |
49/p3 50 Muennighoff et al. (2022) ChatGPT (2023.05.01) OpenAI (2023a) 51 GPT-4 (2023.05.01) OpenAI (2023b) 52 #Parameter 0.56B 0.56B 1B 1.7B 1.7B 2.8B 2.8B 3B 3B 3B 6B 6B 6.9B 6.9B 6.9B 7B 7B 7B 7B 7B 7B 7B 7.1B 7.1B 7.1B 12B 12B 12B 12B 13B 13B 13B 16B 16B 20B 20B 33B 33B 65B 176B 176B - - Base Model - BLOOM-560m - - BLOOM-1b7 Pythia-2.8B - - BLOOM-3b StableLM-Base-Alpha-3B - ChatGLM-6B Pythia-6.9B Pythia-6.9B - LLaMA-7B LLaMA-7B LLaMA-7B LLaMA-7B - StableLM-Base-Alpha-7B LLaMA-7B Bloomz-7b1-mt - BLOOM-7b1 Pythia-12B
|
2306.05783#77
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 77 |
[Yu et al., 2022b] Yang Yu, Fangzhao Wu, Chuhan Wu, Jingwei Yi, and Qi Liu. Tiny-newsrec: Effective and ef- ï¬cient plm-based news recommendation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 5478â5489, 2022.
[Yuan et al., 2023] Zheng Yuan, Fajie Yuan, Yu Song, Youhua Li, Junchen Fu, Fei Yang, Yunzhu Pan, and Yongxin Ni. Where to go next for recommender sys- tems? id-vs. modality-based recommender models revis- ited. arXiv preprint arXiv:2303.13835, 2023.
[Zafrir et al., 2019] Oï¬r Zafrir, Guy Boudoukh, Peter Izsak, and Moshe Wasserblat. Q8bert: Quantized 8bit bert. In 2019 Fifth Workshop on Energy Efï¬cient Machine Learn- ing and Cognitive Computing-NeurIPS Edition (EMC2- NIPS), pages 36â39. IEEE, 2019.
[Zhang and Wang, 2023] Zizhuo Zhang and Bang Wang. Prompt learning for news recommendation. arXiv preprint arXiv:2304.05263, 2023.
|
2306.05817#77
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 77 |
[23] F. O. Baah, A. M. Teitelman, and B. Riegel. Marginalization: Conceptualizing patient vulnerabilities in the framework of social determinants of health â An integrative review. Nursing inquiry, 26(1):e12268, Jan. 2019. ISSN 1320-7881. doi: 10.1111/nin.12268. URL https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6342665/.
[24] E. Bagdasaryan, O. Poursaeed, and V. Shmatikov. Differential privacy has disparate impact on model accuracy. In Proceedings of the 33rd International Conference on Neural Information Processing Systems. Curran Associates Inc., 2019.
[25] Y. Bai, S. Kadavath, S. Kundu, A. Askell, J. Kernion, A. Jones, A. Chen, A. Goldie, A. Mirho- seini, C. McKinnon, C. Chen, C. Olsson, C. Olah, D. Hernandez, D. Drain, D. Ganguli, D. Li, E. Tran-Johnson, E. Perez, J. Kerr, J. Mueller, J. Ladish, J. Landau, K. Ndousse, K. Lukosuite,
21
|
2306.05949#77
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 77 |
Pop-ups and CAPTCHAs. In this study, we emphasize on clean and direct task execution actions, intentionally omitting extraneous steps like pop-ups and CAPTCHAs that might introduce ambiguity in evaluation. We carefully select only those websites that pose no access issues when used with our tool. Before recording the task demonstration, annotators are requested to familiarize themselves with the website, and preemptively close pop-up windows and clear CAPTCHAs to avoid their recurrence during the actual demonstration. Annotators are further guided not to engage in extra steps such as closing ads unless necessary during the task demonstration. In the final task verification, we revisit the actions and filter out those unrelated to direct task execution. At the same time, we acknowledge that these instances constitute a significant aspect of the dynamic web environment in the real world. Enhancing systems to robustly tackle such scenarios on-the-go could form an interesting avenue for future research.
Mitigating Disruptions on the Websites. The annotator are advised against actions that could potentially interfere the normal operation of the website. To handle tasks such as scheduling appointments, we introduce a Click (Fake) operation that annotators can utilize to indicate the action without actually executing it on the website.
# B.4 Task Verification
|
2306.06070#77
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06283
| 77 |
D.C. and L.C.B. thank NSF DGE-2022040 for the aiM NRT funding support.
D.C. and L.C.B. thank NSF DGE-2022040 for the aiM NRT funding support. K.C. thank the National Institute of Standards and Technology for funding, compu- tational, and data-management resources. Please note certain equipment, instruments, software, or materials are identiï¬ed in this paper in order to specify the experimental procedure adequately. Such identiï¬cation is not intended to imply recommendation or endorsement of any product or service by NIST, nor is it intended to imply that the materials or equipment identiï¬ed are necessarily the best available for the purpose.
A.K.G., G.W.M., A.I., and W.A.d.J. were supported by the U.S. Department of Energy, Oï¬ce of Science, Basic Energy Sciences, Materials Sciences and Engineering Division under Contract No. DE-AC02-05CH11231, FWP No. DAC-LBL-Long, and by the U.S. Department of Energy, Oï¬ce of Science, Oï¬ce of High Energy Physics under Award Number DE-FOA-0002705.
|
2306.06283#77
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.05685
| 78 |
Assistant A provided a concrete plan for adapting âThe Persistence of Memoryâ by Salvador Dali into an interactive experience for children, while Assistant B provided a plan for the âMona Lisa Interactive Experience for Children.â Therefore, Assistant Bâs answer is more relevant and directly addresses the userâs request. My final verdict is: [[B]]
Figure 16: In this example, despite Assistant A correctly followed userâs instruction to generate a concrete plan for the second example of its previous response, GPT-4 wrongly referred to the second example in Assistant Bâs response, resulting in a wrong judgment. This suggests the prompt design that breaks the questions into two prompts may cause LLM judge struggle to locate assistantsâ previous responses.
22
# J
# C Data Collection
We describe our data collection process for both MT-bench and Chatbot Arena.
# C.1 MT-bench human evaluation
Figure 17 shows the normal voting interface. Figure 18 shows that we additionally show GPT-4âs judgment to users and ask if it is reasonable when a human differs from GPT-4.
|
2306.05685#78
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 78 |
[65] OpenAI. 2023. GPT-4 Technical https://doi.org/10.48550/arXiv.2303.08774 arXiv:2303.08774 [cs] Report.
[66] Claudia Ott, Anthony Robins, and Kerry Shephard. 2016. Translating principles of eï¬ective feedback for students into the CS1 context. ACM Transactions on Computing Education (TOCE) 16, 1 (2016), 1â27.
[67] Long Ouyang, Jeï¬ Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. arXiv
Exploring the Responses of Large Language Models to Beginner Programmersâ Help Requests
preprint arXiv:2203.02155 (2022).
[68] José Carlos Paiva, José Paulo Leal, and Ãlvaro Figueira. 2022. Automated Assess- ment in Computer Science Education: A State-of-the-Art Review. ACM Transac- tions on Computing Education (TOCE) (2022).
|
2306.05715#78
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05783
| 78 |
- StableLM-Base-Alpha-7B LLaMA-7B Bloomz-7b1-mt - BLOOM-7b1 Pythia-12B Pythia-12B Pythia-12B-deduped - LLaMA-13B - LLaMA-13B moss-moon-003-base moss-moon-003-base - GPT-NeoX-20B LLaMA-30B - - - BLOOM GPT-3.5 - #Layer 24 24 16 24 24 32 32 30 30 16 28 28 32 32 32 32 32 32 32 32 16 32 30 30 30 36 36 36 36 40 40 40 34 34 44 44 60 60 80 70 70 - - #Encoder - - - - - - - - - - 28 28 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - #Decoder 24 24 16 24 24 32 32 30 30 16 28 28 32 32 32 32 32 32 32 32 16 32 30 30 30 36 36 36 36 40 40 40 34 34 44 44 60 60 80 70 70 - - #Pretrain Tokens 350B tokens - 300B tokens 350B tokens - - 300B tokens 350B tokens - - 1T tokens - - - 300B tokens
|
2306.05783#78
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 78 |
[Zhang et al., 2021a] Qi Zhang, Jingjie Li, Qinglin Jia, Chuyuan Wang, Jieming Zhu, Zhaowei Wang, and Xi- uqiang He. Unbert: User-news matching bert for news recommendation. In IJCAI, pages 3356â3362, 2021. [Zhang et al., 2021b] Yuhui Zhang, Hao Ding, Zeren Shui, Yifei Ma, James Zou, Anoop Deoras, and Hao Wang. Lan- guage models as recommender systems: Evaluations and limitations. 2021.
[Zhang et al., 2022] Xinyang Zhang, Yury Malkov, Omar Florez, Serim Park, Brian McWilliams, Jiawei Han, and Ahmed El-Kishky. Twhin-bert: A socially-enriched pre- trained language model for multilingual tweet representa- tions. arXiv preprint arXiv:2209.07562, 2022.
[Zhang et al., 2023a] Jizhi Zhang, Keqin Bao, Yang Zhang, Is chat- Wenjie Wang, Fuli Feng, and Xiangnan He. gpt fair for recommendation? evaluating fairness in large language model recommendation. arXiv preprint arXiv:2305.07609, 2023.
|
2306.05817#78
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 78 |
21
L. Lovitt, M. Sellitto, N. Elhage, N. Schiefer, N. Mercado, N. DasSarma, R. Lasenby, R. Lar- son, S. Ringer, S. Johnston, S. Kravec, S. E. Showk, S. Fort, T. Lanham, T. Telleen-Lawton, T. Conerly, T. Henighan, T. Hume, S. R. Bowman, Z. Hatfield-Dodds, B. Mann, D. Amodei, N. Joseph, S. McCandlish, T. Brown, and J. Kaplan. Constitutional AI: Harmlessness from AI Feedback, Dec. 2022. URL http://arxiv.org/abs/2212.08073. arXiv:2212.08073 [cs].
|
2306.05949#78
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 78 |
# B.4 Task Verification
All collected data undergoes an additional verification process conducted by the authors, as demon- strated in Figure 11. The verification interface is shown in Figure 11. This verification consists of three tasks. Firstly, we evaluate whether a task should be discarded due to its low quality. Secondly, we examine each step to determine if any action should be discarded. This includes reviewing the
19
12510 Website: healthgrades Browse primary care dectors who are within 1 mile of distance Feel better about finding healthcare {searchbox} Search doctors, conditions, * or procedures -> 16 a TYPE: primary care , Find the care you need aad [searchbox] Search doctors, conditions, or fem] Primary -> procedures -> TYPE: primary care click e& [button] All Filters + Keep it -> CLICK a Remove [button] Close -> CLICK {svgl -> cuck Popular Searches on Healthgrades Unsure [button] Apply kb Filters -> CLICK = @ 7] @ co) (TY Hatt of a Americans who see doctors each year
Figure 11: Illustration of our verification tool.
|
2306.06070#78
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06283
| 78 |
M.B, B.R., and P.S. were supported by the NCCR Catalysis (grant number 180544), a National Centre of Competence in Research funded by the Swiss National Science Foundation.
S.G.R. and J.L. acknowledge the generous support of Eric and Wendy Schmidt, and the core funding of the Francis Crick Institute, which receives its funding from Cancer Research UK, the UK Medical Research Council, and the Wellcome Trust.
[1] Butler, K. T.; Davies, D. W.; Cartwright, H.; Isayev, O.; Walsh, A. Machine learning for molecular and materials science. Nature 2018, 559, 547â555.
[2] Moosavi, S. M.; Jablonka, K. M.; Smit, B. The Role of Machine Learning in the Under- standing and Design of Materials. J. Am. Chem. Soc. 2020, 142, 20273â20287.
[3] Morgan, D.; Jacobs, R. Opportunities and Challenges for Machine Learning in Materials Science. Annu. Rev. Mater. Res. 2020, 50, 71â103.
|
2306.06283#78
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.05685
| 79 |
Instruction Please act as an impartial judge and evaluate the quality of the responses provided by two Al assistants to the user question displayed below. You should choose the assistant that follows the userâs instructions and answers the userâs question better. Your evaluation should consider factors such as the helpfulness, relevance, accuracy, depth, creativity, and level of detail of their responses. Begin your evaluation by comparing the two responses and provide a short explanation. Avoid any positional biases and ensure that the order in which the responses were presented does not influence your decision. Do not allow the length of the responses to influence your evaluation. Do not favor certain names of the assistants. Be as objective as possible. Finally, indicate your verdict by clicking one button. Please use the tie button sparsely. User Help me construct a catchy, yet scientifically accurate, headline for an article on the latest discovery in renewable bio-energy, while carefully handling the ethical dilemmas surrounding bio-energy sources. Propose 4 options. Assistant A Assistant B 1. âRevolutionizing Renewable Energy: New Breakthroughs in Bio-Energy Production While 1. Breakthrough in Renewable Bio-Energy
|
2306.05685#79
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 79 |
[69] Andrew Petersen, Jaime Spacco, and Arto Vihavainen. 2015. An exploration of error quotient in multiple contexts. In Proceedings of the 15th Koli Calling Conference on Computing Education Research. 77â86.
[70] James Prather, Raymond Pettit, Brett A Becker, Paul Denny, Dastyni Loksa, Alani Peters, Zachary Albrecht, and Krista Masci. 2019. First things ï¬rst: Providing metacognitive scaï¬olding for interpreting problem prompts. In Proceedings of the 50th ACM technical symposium on computer science education. 531â537. [71] James Prather, Brent N Reeves, Paul Denny, Brett A Becker, Juho Leinonen, An- drew Luxton-Reilly, Garrett Powell, James Finnie-Ansley, and Eddie Antonio Santos. 2023. "Itâs Weird That it Knows What I Want": Usability and Interactions with Copilot for Novice Programmers. arXiv preprint arXiv:2304.02491 (2023).
[72] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018. Im- proving language understanding by generative pre-training. (2018).
|
2306.05715#79
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05783
| 79 |
- 300B tokens 350B tokens - - 300B tokens 350B tokens - - 1T tokens - - - 300B tokens - - - - 1T tokens - - - 350B tokens - - - - 300B tokens - 1T tokens - - - 825GB - - 1.4T tokens 1.4T tokens 366B tokens - - - #IFT Sample - 3.67B tokens - - 8.39B tokens 15K - - 8.39B tokens 632K â 6.38M 15K 398K - 52K 52K 263K 201K - 632K 70K 0.2M/0.6M/1M/2M - 4.19B tokens 15K 94.6K 161K - 263K - 70K 1.1M 1.4M - 94.6K 263K - - - 2.09B tokens â â
|
2306.05783#79
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Langauge Process~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.