doi (stringlengths 10–10) | chunk-id (int64 0–936) | chunk (stringlengths 401–2.02k) | id (stringlengths 12–14) | title (stringlengths 8–162) | summary (stringlengths 228–1.92k) | source (stringlengths 31–31) | authors (stringlengths 7–6.97k) | categories (stringlengths 5–107) | comment (stringlengths 4–398, ⌀) | journal_ref (stringlengths 8–194, ⌀) | primary_category (stringlengths 5–17) | published (stringlengths 8–8) | updated (stringlengths 8–8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2306.06264
| 40 |
[Table: per-relation results for InstructGPT and FLAN-T5, each under three conditions (Appeared / Didn't Appear / Hallucinated). Relations covered: shares border with, official language, named after, part of, capital, diplomatic relation, sister city, continent, capital of, place of birth, genre, located in the admin territory, country, has part, religion, country of citizenship, field of work, occupation, position held, work location, instrument, place of death, position played, headquarters location, location of formation, employer, member of, instance of, developer, language of work or name, country of origin, original language of work, owned by. The numeric cells (values roughly 0.003–0.713, many missing) lost their row alignment in PDF extraction.]
|
2306.06264#40
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
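The entropy- and KL-based measurement described in this abstract can be sketched in a few lines. A minimal illustration, assuming hypothetical pre- and post-instillation prediction probabilities over a small candidate set (the numbers below are invented, not from the paper):

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in nats) of a probability distribution."""
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p * np.log(p + 1e-12)))

def kl_divergence(p, q):
    """KL(p || q) for two distributions over the same candidate set."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + 1e-12) / (q + 1e-12))))

# Hypothetical LLM prediction probabilities over four candidate objects,
# before and after instilling the target fact in the context.
before = [0.40, 0.30, 0.20, 0.10]
after = [0.85, 0.05, 0.05, 0.05]

# A sharp drop in entropy and a large KL shift suggest the model absorbed
# the injected fact; a near-zero shift suggests it already knew it.
print(f"entropy before: {entropy(before):.3f} nats")
print(f"entropy after:  {entropy(after):.3f} nats")
print(f"KL(after || before): {kl_divergence(after, before):.3f}")
```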
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.06283
| 40 |
However, this application also highlights that further developments of these LLM-based tools are needed. For example, a challenge the sMolTalk tool faces is robustness: fragments from the prompt tend to leak into the output and must be handled with more involved mechanisms, such as retries in which one gives the LLM access to the error messages, or prompt engineering. Further improvement can also be expected if the application leverages a knowledge base such as the documentation of 3dmol.js.
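A retry mechanism of the kind described above can be sketched as a loop that feeds the runtime error back to the model. A minimal sketch, where `llm` and `run_js` are hypothetical stand-ins for a completion API and a JavaScript executor (neither name comes from sMolTalk):

```python
def generate_with_retries(task, llm, run_js, max_retries=3):
    """Ask the LLM for 3dmol.js code; on failure, retry with the error message."""
    prompt = f"Write 3dmol.js code for: {task}\nReturn only code, no prose."
    for _ in range(max_retries):
        code = llm(prompt)
        ok, error = run_js(code)  # stand-in executor: returns (success, message)
        if ok:
            return code
        # Give the LLM access to the error message, as suggested above.
        prompt = (
            f"This 3dmol.js code failed for the task '{task}':\n{code}\n"
            f"Error: {error}\nReturn a corrected version, code only."
        )
    raise RuntimeError(f"no working code after {max_retries} attempts")
```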
As the work of Glenn Hocky and Andrew White shows [69], an LLM interface for software can also be used with other programs such as VMD [70] and extended with speech-to-text models (such as Whisper [71]) to enable voice control of such programs. In
|
2306.06283#40
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 40 |
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html.
|
2306.07929#40
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which outperforms
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
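The experience-memory idea in this abstract can be illustrated with a small data structure. A rough sketch under stated assumptions: a one-step Q-style update and cosine-similarity retrieval, which follow the general RLEM recipe but not necessarily the paper's exact formulas:

```python
import numpy as np

class ExperienceMemory:
    """Sketch of an RLEM-style long-term memory: past (observation, action)
    pairs carry value estimates updated by rewards, so the agent improves
    without fine-tuning any LLM parameters."""

    def __init__(self, gamma=0.9):
        self.records = []  # list of (embedding, action, value)
        self.gamma = gamma

    def add(self, embedding, action, reward, next_best_value=0.0):
        # One-step bootstrapped target (illustrative, not the paper's exact rule).
        value = reward + self.gamma * next_best_value
        self.records.append((np.asarray(embedding, float), action, value))

    def retrieve(self, embedding, k=3):
        """Return the k most similar past experiences to use as exemplars."""
        e = np.asarray(embedding, float)
        sims = [float(e @ emb) / (np.linalg.norm(e) * np.linalg.norm(emb) + 1e-9)
                for emb, _, _ in self.records]
        top = np.argsort(sims)[::-1][:k]
        return [self.records[i] for i in top]
```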
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 41 |
[8] Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023.
[9] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803.05457, 2018.
[10] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
|
2306.05685#41
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
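The judge-human agreement reported above is a simple matching rate. A small illustration with invented verdicts (not MT-bench data), where each comparison's human label is the majority over annotators:

```python
from collections import Counter

def majority(votes):
    """Majority verdict ('A', 'B', or 'tie') among human annotators."""
    return Counter(votes).most_common(1)[0][0]

def agreement(judge, humans):
    """Fraction of comparisons where the LLM judge matches the human majority."""
    return sum(j == h for j, h in zip(judge, humans)) / len(judge)

human_votes = [["A", "A", "B"], ["B", "B", "B"], ["tie", "A", "A"],
               ["A", "A", "A"], ["B", "tie", "B"]]
judge_verdicts = ["A", "B", "A", "A", "tie"]

humans = [majority(v) for v in human_votes]
print(f"judge-human agreement: {agreement(judge_verdicts, humans):.0%}")  # 80%
```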
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 41 |
Table 5: GPT-3.5 performance on help requests related to specific programming exercises. Each row describes GPT's behavior on requests related to that exercise. The figures are out of ten, as we sampled ten help requests per exercise. Columns give the number of responses in which GPT-3.5 identifies and mentions one (or more), all, or non-existent issues.

Exercise | One (or more) | All | Non-existent(s)
---|---|---|---
Difference between two numbers | 6 | 1 | 9
Asking for a password | 10 | 8 | 4
Average of entered numbers | 10 | 6 | 5
Counting positive numbers | 10 | 7 | 2
Authentication | 9 | 6 | 5
Verification of input | 8 | 4 | 5
On calculating an average | 8 | 6 | 4
Searching from a phone book | 8 | 4 | 5
Fixing a bit! | 7 | 5 | 5
Average distance of long jumps | 8 | 6 | 4
Sum between | 9 | 6 | 5
Count of entered numbers | 8 | 7 | 4
Explaining the number | 5 | 1 | 9
First and last name | 9 | 9 | 1
In reverse order | 10 | 10 | 4
|
2306.05715#41
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 41 |
Visit this link to see more examples of how depth representations develop in the early denoising stages of LDM.
# D Choices of Intervened Denoising Steps
[Figure 11 panels: columns show the original output and label, modified label, intervened output, and new synthetic label; left, saliency interventions over the first 3–7 denoising steps (prompt "Cubic Fun 3D Parca Puzzle The Hanging Temple"); right, depth interventions over the first 1–5 steps (prompt "Carson Palmer tosses four picks at Buffalo in the fourth quarter").]
Figure 11: Denoising steps: we experimented with intervening in multiple numbers of steps, both for saliency (left) and depth representations (right). For the saliency representation, the intervention was effective after intervening 5 steps. Further increasing the number of intervened steps had almost no change on the model's output. For the depth representation, the intervention was effective after intervening three steps. Intervening additional steps did not improve the results.
|
2306.05720#41
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process, well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
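The linear-probe methodology in this abstract amounts to fitting a linear classifier on internal activations. A toy sketch assuming scikit-learn is available; the random arrays stand in for LDM activations and salient/background labels (which, in the paper, come from an off-the-shelf segmenter):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))    # stand-in for per-position LDM activations
y = rng.integers(0, 2, size=1000)  # stand-in labels: 1 = salient, 0 = background

# A linear probe: if it classifies well above chance on held-out data,
# the activations linearly encode the property being probed.
probe = LogisticRegression(max_iter=1000).fit(X[:800], y[:800])
print(f"held-out probe accuracy: {probe.score(X[800:], y[800:]):.2f}")  # ~0.5 here
```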
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05783
| 41 |
[Table fragment: 0-shot, 1-shot, and 3-shot scores for the evaluated LLMs; row and column alignment was lost in PDF extraction, so individual cells cannot be reliably reconstructed.]
|
2306.05783#41
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Language Processing (NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released at https://github.com/MikeGu721/XiezhiBenchmark.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 41 |
Is Bigger Always Better? By injecting in-domain collaborative knowledge from either data-centric or model-centric aspects, research works from quadrants 1, 2, and 4 can achieve satisfying recommendation performance compared with attention-based baselines, except for a few cases. Among these studies, although we could observe that the size of the adapted LLM gradually increases along the timeline, a fine-grained cross comparison among them (i.e., a unified benchmark) remains vacant. Hence, it is difficult to directly conclude that a larger LLM can definitely yield better results for recommender systems. We prefer to leave this as an open question for future works: Are bigger language models always better for recommender systems? Or is it good enough to use small-scale language models in combination with collaborative knowledge injection?
|
2306.05817#41
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 41 |
Mitigation and Interventions There are few protections against these risks. Vulnerability disclosure, bug bounties, and AI incident databases can help report the vulnerabilities and limitations of generative AI systems. Several components of the EU AI Act may also be helpful, for example requiring labeling of AI-generated content, and prohibiting certain kinds of manipulation. For example, Section 5.2.2 of the 2021 proposal prohibits "practices that have a significant potential to manipulate persons through subliminal techniques beyond their consciousness or exploit vulnerabilities of specific vulnerable groups such as children or persons with disabilities in order to materially distort their behavior in a manner that is likely to cause them or another person psychological or physical harm." The proposal also notes, "Other manipulative or exploitative practices affecting adults that might be facilitated by AI systems could be covered by the existing data protection, consumer protection and digital service legislation that guarantee that natural persons are properly informed and have free choice not to be subject to profiling or other practices that might affect their behavior." [8]
# 4.2.1.3 Personal Privacy and Sense of Self
|
2306.05949#41
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 41 |
# Acknowledgements
The authors would like to thank colleagues from the OSU NLP group for constructive feedback and all contributors from the Amazon Mechanical Turk platform who participated in our study and assisted in data collection. This research was sponsored in part by NSF OAC 2112606, NSF CAREER #1942980, ARL W911NF2220144 and Ohio Supercomputer Center [6]. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official
policies, either expressed or implied, of the U.S. government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notice herein.
# References
[1] Puppeteer headless chrome node.js api. https://github.com/puppeteer/puppeteer, 2021.
|
2306.06070#41
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
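The filter-before-LLM step described in the abstract can be sketched as scoring candidate DOM elements with a cheap model and keeping only the top-k. A rough sketch; `overlap_score` is a crude word-overlap stand-in for the small LM ranker, not Mind2Web's actual model:

```python
import re

def overlap_score(query, element):
    """Stand-in for the small LM: crude word-overlap relevance score."""
    q = set(re.findall(r"[a-z]+", query.lower()))
    e = set(re.findall(r"[a-z]+", element.lower()))
    return len(q & e) / (len(q) + 1e-9)

def filter_candidates(elements, query, score, top_k=50):
    """Keep only the top-k scored elements so the HTML snippet passed to the
    large LLM stays within its context window."""
    ranked = sorted(elements, key=lambda el: score(query, el), reverse=True)
    return ranked[:top_k]

elements = ['<button id="search">Search flights</button>',
            '<a href="/cart">View cart</a>',
            '<input name="from" placeholder="Departure city">']
print(filter_candidates(elements, "search for flights", overlap_score, top_k=2))
```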
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06264
| 41 |
[Table fragment: numeric per-relation measurement values for the evaluated models, continued from the previous chunk; many cells are missing ('-') and row alignment was lost in PDF extraction.]
|
2306.06264#41
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.06283
| 41 |
Figure 4: The sMolTalk interface. Based on few-shot prompting, LLMs can create code for visualization tools such as 3dmol.js that produce custom visualizations based on a natural-language description of the desired output. The top left box is the input field where users can enter commands in natural language. The top right box prints the code the LLM generates. This code generates the visualization shown in the lower box. In this example, the user entered a sequence of four commands: the LLM (1) generates code for retrieving the structure, (2) colors the carbons blue, (3) displays the hydrogens as red spheres, and (4) reduces the size of the spheres.
particular, such an LLM-based agent approach might be implemented for the PyMOL program, where various tools for protein engineering could be interfaced through a chat interface, lowering the barrier to entry for biologists to use recent advancements within in silico protein engineering (such as RosettaFold [72] or RFDiffusion [73]).
# c. ELN interface: whinchat
|
2306.06283#41
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 41 |
Boyuan Chen, Fei Xia, Brian Ichter, Kanishka Rao, Keerthana Gopalakrishnan, Michael S. Ryoo, Austin Stone, and Daniel Kappler. Open-vocabulary queryable scene representations for real world planning. CoRR, abs/2209.09874, 2022. doi: 10.48550/arXiv.2209.09874. URL https: //doi.org/10.48550/arXiv.2209.09874.
Chenxu Hu, Jie Fu, Chenzhuang Du, Simian Luo, Junbo Zhao, and Hang Zhao. Chatdb: Augmenting llms with databases as their symbolic memory. CoRR, abs/2306.03901, 2023. doi: 10.48550/arXiv. 2306.03901. URL https://doi.org/10.48550/arXiv.2306.03901.
|
2306.07929#41
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which outperforms
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 42 |
[11] Tri Dao, Dan Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. Flashattention: Fast and memory-efficient exact attention with io-awareness. Advances in Neural Information Processing Systems, 35:16344–16359, 2022.
[12] Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. Qlora: Efficient finetuning of quantized llms. arXiv preprint arXiv:2305.14314, 2023.
[13] Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, and Tong Zhang. Lmflow: An extensible toolkit for finetuning and inference of large foundation models. arXiv preprint arXiv:2306.12420, 2023.
[14] Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. Alpacafarm: A simulation framework for methods that learn from human feedback. arXiv preprint arXiv:2305.14387, 2023.
|
2306.05685#42
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 42 |
Our notes also recorded whenever a response was good enough to give to students as is. This was rare, especially since most responses included model solutions: fewer than ten of the 150 responses were considered presentable without editing, assuming that the objective was to help students in a pedagogically sensible way. This number would go up if a script were to prune out code from the responses, but we did not explore this further.
5 DISCUSSION Subsections 5.1–5.4 below discuss our main research interest: answering help requests with LLMs. Subsection 5.5 provides additional observations on student help seeking, issues in student code, and contextual factors.
issue-hunting, but it is far from reliable in terms of finding all the issues, and false positives are common as well.
|
2306.05715#42
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 42 |
One variable in our intervention experiment is how many denoising steps to intervene on. In a preliminary study, we compared the modelâs outputs after intervening different numbers of denoising steps on saliency and depth representations.
[Figure 12 panels: original output/label, modified label, intervened output, and new synthetic label, for four prompts: "3-Tier Black Plant Stand, Planter", "Jaime Lannister from Game of Thrones bust 200mm", "Great Blue Looking out at Sea", and "10 best ceramic artists 10 best images about ceramic on ceramic", shown at same and increasing depth.]
Figure 12: To make modified depth labels consistent with the rules of perspective, we scaled down the salient object that was farther away from the viewpoint in the modified label. For each intervention, we progressively increased the depth of the added salient object.
As Figure 11 shows, the intervention on salient object representations was effective after intervening the first 5 steps of denoising. Further increasing the number of intervened steps had no significant influence on the generated image. For the intervention on depth, we observed that intervening the first 3 steps of denoising was sufficient to change the depth of generated scenes.
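The step-limited intervention described above can be phrased as a denoising loop that edits activations only on the first k steps. A schematic sketch; `denoise_step` and `intervene` are placeholders for the LDM update and the probe-guided activation edit, not the paper's actual implementation:

```python
import numpy as np

def denoise_with_intervention(x, steps, denoise_step, intervene, k=5):
    """Reverse-diffusion loop that applies an activation-level intervention
    only on the first k denoising steps (per Figure 11, ~5 steps sufficed
    for saliency and ~3 for depth)."""
    for t in range(steps):
        if t < k:
            x = intervene(x)  # e.g., shift activations along the probe direction
        x = denoise_step(x, t)
    return x

# Toy usage with stand-in callables (purely illustrative):
out = denoise_with_intervention(
    np.random.randn(4, 4), steps=10,
    denoise_step=lambda x, t: 0.9 * x,
    intervene=lambda x: x + 0.1,
    k=5)
```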
|
2306.05720#42
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process, well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05783
| 42 |
[Table fragment, continued: per-model scores whose row alignment was lost in PDF extraction, followed by paired ChatGPT vs. GPT-4 performance averages: (0.240, 0.402), (0.298, 0.415), (0.371, 0.517), (0.286, 0.413), (0.289, 0.410), (0.360, 0.486), (0.290, 0.404), (0.218, 0.392), (0.418, 0.496), (0.487, 0.565), (0.217, 0.396), (0.361, 0.434), (0.428, 0.495), (0.305, 0.463), (0.452, 0.506), (0.517, 0.576); row label "Performance-Average".]
|
2306.05783#42
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Language Processing (NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released at https://github.com/MikeGu721/XiezhiBenchmark.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 42 |
LLM is Good at Reranking Hard Samples. Although works in quadrant 3 suffer from inferior performance in zero/few-shot learning since little in-domain collaborative knowledge is involved, researchers [Ma et al., 2023; Hou et al., 2023b] have found that large language models such as ChatGPT are more likely to be a good reranker for hard samples. They introduce the filter-then-rerank paradigm, which leverages a pre-ranking function from traditional recommender systems (e.g., the matching or pre-ranking stage in industrial applications) to pre-filter those easy negative items, and thus generate a set of candidates with harder samples for the LLM to rerank. In this way, the listwise reranking performance of the LLM (especially ChatGPT-like APIs) can be promoted. This finding is instructive for industrial applications, where we could require the LLM to handle only hard samples and leave the rest to light-weight models to save computational costs.
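The filter-then-rerank paradigm described above separates a cheap pre-ranking pass from an expensive LLM pass. A minimal sketch with placeholder callables (`pre_rank_score` for the conventional model, `llm_rerank` for the LLM call; neither name is from the cited works):

```python
def filter_then_rerank(user, candidates, pre_rank_score, llm_rerank, k=20):
    """Stage 1: a light-weight pre-ranking model filters out easy negatives.
    Stage 2: the LLM listwise-reranks only the remaining hard candidates."""
    shortlist = sorted(candidates, key=lambda c: pre_rank_score(user, c),
                       reverse=True)[:k]
    return llm_rerank(user, shortlist)

# Toy usage with stand-in scorers (purely illustrative):
ranked = filter_then_rerank(
    user="u42",
    candidates=list(range(1000)),
    pre_rank_score=lambda u, c: -abs(c - 500),  # cheap CRM stand-in
    llm_rerank=lambda u, cs: sorted(cs),        # LLM API stand-in
)
```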
|
2306.05817#42
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 42 |
# 4.2.1.3 Personal Privacy and Sense of Self
Privacy is linked with autonomy; to have privacy is to have control over information related to oneself. Privacy can protect both powerful and vulnerable peoples and is interpreted and protected differently by culture and social classes throughout history [152]. Personal and private information has many legal definitions and protections globally [2] and when violated, can be distinct from harm [47] and refer to content that is shared, seen, or experienced outside of the sphere a person has consented to.
What to Evaluate As seen in the Technical Base System section on 4.1.4 Privacy and Data Protection, privacy can be evaluated in a system as well as its impacts on society. Impacts [230] and harms [59] from the loss and violation of privacy are difficult to enumerate and evaluate, such as loss of opportunity or reputational damage. Harms can lead to shifts in power differentials and less respect or influence in an affected environment, in addition to personal changes in expectations of privacy [144] and autonomy. The type of private information violated, such as medical information, can trigger different impacts and responses.
|
2306.05949#42
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 42 |
[2] Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alexander Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil J. Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, and Mengyuan Yan. Do as I can, not as I say: Grounding language in robotic affordances. CoRR, abs/2204.01691, 2022. doi: 10.48550/arXiv.2204.01691. URL https://doi.org/10.48550/arXiv.2204.01691.
|
2306.06070#42
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06264
| 42 |
Table 4: Per-relation breakdown of three classes of facts, categorized by their appearance in the generated paragraphs produced by InstructGPT and FLAN-T5, is presented to evaluate the effectiveness of the entropy metric in distinguishing between facts across these classes. Relations in which our metric demonstrates effective differentiation between different fact classes are highlighted in green.
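The measurement behind this table compares the model's prediction distribution for a fact before and after the target knowledge is instilled, using entropy and KL-divergence. A minimal sketch of the two quantities is below; the function name, tensor shapes, and answer-slot convention are illustrative assumptions, not the paper's actual code.

```python
import torch

def knowledge_scores(logits_before: torch.Tensor, logits_after: torch.Tensor):
    """Entropy/KL sketch over an LLM's next-token distribution at a fact's
    answer position, before vs. after instilling the target knowledge."""
    p = torch.softmax(logits_before, dim=-1)  # distribution before instillation
    q = torch.softmax(logits_after, dim=-1)   # distribution after instillation
    entropy = -(p * p.clamp_min(1e-12).log()).sum(dim=-1)  # uncertainty about the fact
    kl = (q * (q.clamp_min(1e-12).log() - p.clamp_min(1e-12).log())).sum(dim=-1)
    return entropy, kl
```

Intuitively, a fact the model already knows should show low entropy beforehand and a small KL shift after instillation, while a large shift points to an unlearned or mislearned fact.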
|
2306.06264#42
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
|
http://arxiv.org/pdf/2306.06264
|
Pouya Pezeshkpour
|
cs.CL, cs.LG
| null | null |
cs.CL
|
20230609
|
20230609
|
[
{
"id": "2302.13971"
},
{
"id": "1909.01066"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2106.09231"
},
{
"id": "1907.11692"
},
{
"id": "2104.07567"
},
{
"id": "2010.05731"
},
{
"id": "1910.10683"
},
{
"id": "2207.05221"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
},
{
"id": "2203.00211"
},
{
"id": "2301.12810"
}
] |
2306.07929
| 42 |
Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato, editors, International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 9118–9147. PMLR, 2022a. URL https://proceedings.mlr.press/v162/huang22a.html.
|
2306.07929#42
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 43 |
[15] Jiazhan Feng, Qingfeng Sun, Can Xu, Pu Zhao, Yaming Yang, Chongyang Tao, Dongyan Zhao, and Qingwei Lin. Mmdialog: A large-scale multi-turn dialogue dataset towards multi-modal open-domain conversation. arXiv preprint arXiv:2211.05719, 2022.
[16] Xinyang Geng, Arnav Gudibande, Hao Liu, Eric Wallace, Pieter Abbeel, Sergey Levine, and Dawn Song. Koala: A dialogue model for academic research. Blog post, April 2023.
[17] Fabrizio Gilardi, Meysam Alizadeh, and Maël Kubli. Chatgpt outperforms crowd-workers for text-annotation tasks. arXiv preprint arXiv:2303.15056, 2023.
[18] Arnav Gudibande, Eric Wallace, Charlie Snell, Xinyang Geng, Hao Liu, Pieter Abbeel, Sergey Levine, and Dawn Song. The false promise of imitating proprietary llms. arXiv preprint arXiv:2305.15717, 2023.
|
2306.05685#43
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 43 |
5.1 LLM Performance on Help Requests
Both large language models were able to identify some issues in the help requests, but GPT-3.5 was considerably more accurate than Codex. Overall, GPT-3.5 might be described as quite effective at issue-hunting, but it is far from reliable in terms of finding all the issues, and false positives are common as well.
Our main analysis focused on the LLM responses that were produced with GPT-3.5 in English. We observed that the model identified and mentioned at least one actual issue in 82% of the help requests; all were identified and mentioned in 55% of the cases. "Mentioning" an issue, in our sense, implies also suggesting how to fix the issue; this is more than most feedback systems for programming exercises do, as they tend to focus on identifying student mistakes [40].
A significant limitation to the quality of GPT-3.5's responses is that 48% of them reported on issues that did not actually exist in the student's code. Such responses may lead students down a "debugging rabbit hole" [84] as the student tries to fix non-existent
|
2306.05715#43
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 43 |
# E Fine-grained Intervention on Depth Representation
Adding salient objects at different depths: In this experiment, we aimed to test the continuity of the LDM's depth representation. Specifically, if we increase the depth disparity between two objects in the LDM's representations, will the depth difference between the two objects enlarge accordingly in the output image?
To perform this intervention, we copied the depth map of a salient object onto a horizontally translated position in the depth label. The modified depth label initially has two salient objects at the same depth. To create the depth disparity, we increased the depth value of the added salient object so it became distant in the depth label. It is arguable that having two objects of the same size at different depths breaks the rules of perspective. To make the modified depth label geometrically coherent, we scaled down the salient object that was distant from the viewpoint. This experiment required manually modifying the depth label for the intervention, and it is therefore hard to generalize to the entire dataset for quantitative evaluation.
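Although the authors performed this intervention by hand, the label manipulation itself is mechanical. A rough numpy sketch under assumed conventions (an (H, W) depth array in which larger values mean farther away, a binary salient-object mask, and illustrative shift/scale parameters) could look like:

```python
import numpy as np

def add_distant_object(depth, mask, shift=128, extra_depth=0.5, scale=2):
    """Copy the salient object's depth patch, translate it horizontally,
    push it farther away, and shrink it so the modified depth label stays
    geometrically plausible. All names and default values are illustrative."""
    _, w = depth.shape
    ys, xs = np.nonzero(mask)
    patch = depth[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    pmask = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1].astype(bool)

    # Shrink the copy by nearest-neighbour subsampling to respect perspective.
    patch, pmask = patch[::scale, ::scale], pmask[::scale, ::scale]

    out = depth.copy()
    y0 = ys.min()
    x0 = min(xs.min() + shift, w - patch.shape[1])
    region = out[y0:y0 + patch.shape[0], x0:x0 + patch.shape[1]]
    region[pmask] = patch[pmask] + extra_depth  # the added object sits farther away
    return out
```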
Table 3: Smoothness constraint negatively affects the performance of probing regressors, especially when the resolution of probing prediction is low. The resolution of the probing prediction is equal to the spatial size of input activations, which can be as small as 8 × 8 (see Table 2 of Appendix B). The spatial size of the ground truth label is 512 × 512. We observed that training probing regressors without smoothness loss improved the probing performance.
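The excerpt does not specify the exact form of the smoothness constraint; a common choice is a total-variation penalty added to the per-pixel regression term, as in the hypothetical sketch below (setting smooth_weight=0 recovers the better-performing unregularized probe):

```python
import torch
import torch.nn.functional as F

def probe_loss(pred: torch.Tensor, target: torch.Tensor, smooth_weight: float = 0.0):
    """Loss for a probing regressor on per-pixel depth; pred/target: (B, H, W).
    The TV smoothness term is an assumed stand-in for the paper's constraint."""
    loss = F.huber_loss(pred, target)
    if smooth_weight > 0:
        tv = (pred[:, 1:, :] - pred[:, :-1, :]).abs().mean() \
           + (pred[:, :, 1:] - pred[:, :, :-1]).abs().mean()
        loss = loss + smooth_weight * tv  # penalizes abrupt changes between neighbors
    return loss
```

At a 16 × 16 prediction resolution, each pair of neighboring probe pixels spans 32 generated-image pixels, so genuine depth discontinuities get penalized, which is consistent with the degradation reported above.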
|
2306.05720#43
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process$-$well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05817
| 43 |
4 Challenges from Industrial Applications
Since the research of recommender systems is highly application-oriented, in this section, we highlight the key challenges in adapting LLM to RS, which mainly arise from the unique characteristics of recommender systems and industrial applications. Accordingly, we will also discuss the preliminary efforts done by existing works, as well as other possible solutions. The following challenges are proposed from three aspects: (1) efficiency (training efficiency, inference latency), (2) effectiveness (in-domain long text modeling, ID indexing & modeling), and (3) ethics (fairness).
|
2306.05817#43
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 43 |
Mitigation and Interventions: Mitigation first should determine who is responsible for an individual's privacy, while recognizing that all individuals may not have the same level of technical or data literacy. Robustly protecting privacy and autonomy requires both individual and collective action; an individual must be data-conscious in addition to technical and policy privacy protection provisions [18]. Outside of an individualistic framework, certain rights such as refusal [58] and inclusion also require consideration of individual self-determination: establishing how an individual wants to interact with technology.
Technical methods to preserve privacy in a generative AI system, as seen in privacy-preserving approaches to language modeling [39], cannot guarantee full protection. Upholding privacy regulations requires engagement from multiple affected parties [189] and can protect individuals but fail at loopholes, as seen with tracking continuing when an individual opts out [42] from data collection [140]. Improving common practices and better global regulation for collecting training data can help. Opt-in approaches can protect individuals but are often not practiced due to economic incentives that stem from collecting data [244]. Privacy options for users should ease accessibility [263], such as standardized form factors when users visit a website requesting privacy permissions.
# 4.2.2 Inequality, Marginalization, and Violence
|
2306.05949#43
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 43 |
[3] Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ B. Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Bryn- jolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri S. Chatterji, Annie S. Chen, Kathleen Creel, Jared Quincy Davis, Dorottya Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah D. Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte
|
2306.06070#43
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06283
| 43 |
tured and unstructured lab data in a manner that is actionable by both humans and computers. However, one challenge in developing these systems is that it is difficult for a traditional user interface to have enough flexibility to capture the richness and diversity of real, interconnected, experimental data. Interestingly, LLMs can interpret and contextualize both structured and unstructured data and can therefore be used to create a novel type of flexible, conversational interface to such experimental data. The whinchat team (Joshua D. Bocarsly, Matthew L. Evans, and Ben E. Smith) embedded an LLM chat interface within datalab, an open source materials chemistry data management system, where the virtual LLM-powered assistant can be "attached" to a given sample. The virtual assistant has access to responses from the JavaScript object notation (JSON) API of datalab (containing both structured and unstructured/free text data) and can use them to perform several powerful tasks: First, it can contextualize existing data by explaining related experiments from linked responses, resolving acronyms/short-hand notations used by experimentalists, or creating concise textual summaries of complex and nested
|
2306.06283#43
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 43 |
Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, Pierre Sermanet, Tomas Jackson, Noah Brown, Linda Luu, Sergey Levine, Karol Hausman, and Brian Ichter. Inner monologue: Embodied reasoning through planning with language models. In Karen Liu, Dana Kulic, and Jeffrey Ichnowski, editors, Conference on Robot Learning, CoRL 2022, 14-18 December 2022, Auckland, New Zealand, volume 205 of Proceedings of Machine Learning Research, pages 1769–1782. PMLR, 2022b. URL https://proceedings.mlr.press/v205/huang23c.html.
|
2306.07929#43
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 44 |
[19] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020.
[20] Fan Huang, Haewoon Kwak, and Jisun An. Is chatgpt better than human annotators? potential and limitations of chatgpt in explaining implicit hate speech. arXiv preprint arXiv:2302.07736, 2023.
[21] Douwe Kiela, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vidgen, Grusha Prasad, Amanpreet Singh, Pratik Ringshia, et al. Dynabench: Rethinking benchmarking in nlp. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4110–4124, 2021.
|
2306.05685#44
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 44 |
issues while remaining oblivious to actual ones. This phenomenon of LLMs often "hallucinating" false information has been highlighted by many [35]. The confident tone of the LLM responses (we observed just a handful of responses in less-than-confident tones) may exacerbate the problem.
In our brief exploration of non-English prompts, GPT-3.5 performed similarly in Finnish as in English in terms of the LLM's ability to identify issues in code. The Finnish in the LLM's responses was also in general understandable and could have been shown to students, as far as language quality was concerned. This suggests that responses from large language models are potentially viable in non-English-speaking classrooms.
|
2306.05715#44
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 44 |
At denoising step 15 (RMSE ↓):

| Probed layer | RMSE w/o smoothness | RMSE w/ smoothness | p | Spatial size h × w |
| --- | --- | --- | --- | --- |
| Encoder Block 3, Layer 2 | 0.559 | 0.583 | < 0.05 | 16 × 16 |
| Encoder Block 3, Layer 1 | 0.519 | 0.545 | < 0.05 | 16 × 16 |
| Bottleneck, Layer 1 | 0.594 | 0.639 | ≪ 0.05 | 8 × 8 |
| Decoder Block 2, Layer 2 | 0.485 | 0.501 | ≪ 0.05 | 16 × 16 |
| Decoder Block 2, Layer 1 | 0.474 | 0.511 | ≪ 0.05 | 16 × 16 |
| Decoder Block 2, Layer 3 | 0.522 | 0.543 | < 0.05 | 16 × 16 |
Figure 13 panels: Generated Image, Probing Prediction, Ground Truth Label.
Figure 13: The probing predictions have much smaller spatial resolutions compared to that of the generated images. In this example, the probing prediction has a spatial size of only 16 × 16, whereas the ground truth depth label of the generated image has a spatial size of 512 × 512. Each pixel of the probing prediction represents 32 × 32 pixels of the ground truth label. While we anticipate the per-pixel depth to change smoothly in the high resolution depth label, this is not true for the low resolution probing prediction. Applying the smoothness regularization on the low resolution prediction adversely affects probing performance.
|
2306.05720#44
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process$-$well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05817
| 44 |
4.1 Training Efficiency
There are two key aspects to keeping good performance in modern deep learning based recommender systems: (1) enlarging the volume of training data (e.g., billion-level training samples), and (2) increasing the model update frequency (from day-level to hour-level, or even minute-level). Both place heavy demands on training efficiency. Although, as suggested in Section 3.5, tuning LLM (possibly with CRM) is a promising approach to align LLM to RS for better performance, it brings prohibitive adaptation costs in terms of both memory usage and time consumption. Therefore, ensuring efficiency when LLM is involved in the training phase is a key challenge for industrial applications.
Existing works mainly propose to leverage parameter-efficient finetuning strategies (e.g., option tuning [Cui et al., 2022] and layerwise adaptor tuning [Geng et al., 2023]), which mainly address the memory usage problem, while the time consumption remains high.
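For illustration, a parameter-efficient setup of this kind can be sketched with the Hugging Face peft library; the backbone checkpoint and target modules below are assumptions made for the sketch, not the configurations used in the cited works:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# Hypothetical backbone LLM; the cited works tune various models.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Low-rank adapters on the attention projections: only these small matrices
# receive gradients, which sharply reduces optimizer and gradient memory.
config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% trainable
```

Because forward and backward passes still traverse the frozen backbone, such adapters address memory usage but not, by themselves, the time consumption noted above.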
|
2306.05817#44
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 44 |
# 4.2.2 Inequality, Marginalization, and Violence
Generative AI systems are capable of exacerbating inequality, as seen in sections on 4.1.1 Bias, Stereotypes, and Representational Harms and 4.1.2 Cultural Values and Sensitive Content, and Disparate Performance. When deployed or updated, systems' impacts on people and groups can directly and indirectly be used to harm and exploit vulnerable and marginalized groups.
# 4.2.2.1 Community Erasure
Biases in a systemâs development process and safety provisions for generative AI systems, such as content moderation, can lead to community erasure [97]. Avoiding the generation of the harms outlined is seen as a generally desirable outcome. However, the removal of harmful content can come with its own costs of lower general performances for sub-populations that use models for generation [269]. Mitigation thus currently serves as a double-edged sword, where removal of toxic content also has negative implications, in particular for marginalized communities. Both the benefits and the costs of content moderation are unequally distributed. The automatic systems that remove undesirable content can perform next to randomly or be harmful for marginalized populations [208], while the selection criteria for what constitutes safe content are aligned with technical safety and mitigation decisions. These impacts compound to make marginalized populations pay a greater cost for an intervention that they benefit from less.
|
2306.05949#44
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06283
| 44 |
related experiments from linked responses, resolving acronyms/short-hand notations used by experimentalists, or creating concise textual summaries of complex and nested entries. Second, it can reformat or render the data, for instance, by creating (mermaid.js) flowcharts or (Markdown) tables (Figure 5). Third, it can use its generic reasoning abilities to suggest future experiments, for instance, related materials to study, synthesis protocols to try, or additional characterization techniques. This is shown in the examples given in SI section 2C, where whinchat was able to provide hints about which
|
2306.06283#44
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 44 |
Brian Ichter, Anthony Brohan, Yevgen Chebotar, Chelsea Finn, Karol Hausman, Alexander Herzog, Daniel Ho, Julian Ibarz, Alex Irpan, Eric Jang, Ryan Julian, Dmitry Kalashnikov, Sergey Levine, Yao Lu, Carolina Parada, Kanishka Rao, Pierre Sermanet, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Mengyuan Yan, Noah Brown, Michael Ahn, Omar Cortes, Nicolas Sievers, Clayton Tan, Sichun Xu, Diego Reyes, Jarek Rettinghouse, Jornell Quiambao, Peter Pastor, Linda Luu, Kuang-Huei Lee, Yuheng Kuang, Sally Jesmonth, Nikhil J. Joshi, Kyle Jeffrey, Rosario Jauregui Ruano, Jasmine Hsu, Keerthana Gopalakrishnan, Byron David, Andy Zeng, and Chuyuan Kelly Fu. Do as I can, not as I say: Grounding language in robotic affordances. In Karen Liu, Dana Kulic, and Jeffrey Ichnowski, editors, Conference on Robot Learning, CoRL 2022,
|
2306.07929#44
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which surpasses
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 45 |
[22] Miyoung Ko, Jinhyuk Lee, Hyunjae Kim, Gangwoo Kim, and Jaewoo Kang. Look at the first sentence: Position bias in question answering. arXiv preprint arXiv:2004.14602, 2020. [23] Andreas Köpf, Yannic Kilcher, Dimitri von Rütte, Sotiris Anagnostidis, Zhi-Rui Tam, Keith Stevens, Abdullah Barhoum, Nguyen Minh Duc, Oliver Stanley, Richárd Nagyfi, et al. Openassistant conversations -- democratizing large language model alignment. arXiv preprint arXiv:2304.07327, 2023.
[24] Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110, 2022.
[25] Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81, 2004.
|
2306.05685#45
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 45 |
5.2 Pedagogical Quality of LLM Feedback 5.2.1 The Problem of Model Solutions. Even though we explicitly prompted GPT-3.5 not to produce model solutions, corrected code, or automated tests, almost every response did include code, and two responses out of three essentially provided a model solution for the exercise. Similar phenomena have been acknowledged as a limitation of LLMs, and recent research efforts have improved LLMs' ability to follow instructions [67]; this has been claimed as an improvement in the recently released GPT-4, for example [65]. The instant provision of model solutions poses some obvious problems from a pedagogical point of view. Nevertheless, we note that there are cases where model solutions are useful: for example, model solutions have been deliberately provided to students in some prior research [62, 63], and they are also often provided in automated learning environments that are not focused on grading [34]. It would also be possible to create a parser for LLM responses that strips away code before relaying the response to students.
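(A minimal sketch of such a code-stripping parser, assuming the model marks code with Markdown-style fences or indentation; real responses would need more robust handling.)

```python
import re

FENCED = re.compile(r"```.*?```", re.DOTALL)      # matches ```python ... ``` blocks
INDENTED = re.compile(r"(?m)^(?: {4}|\t).*\n?")   # matches 4-space/tab indented lines

def strip_code(response: str) -> str:
    # Replace fenced blocks with a marker, drop indented code lines entirely.
    text = FENCED.sub("[code removed]", response)
    return INDENTED.sub("", text).strip()

reply = "You are close!\n```python\nprint('model solution')\n```\nCheck your loop bounds."
print(strip_code(reply))  # -> "You are close!\n[code removed]\nCheck your loop bounds."
```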
|
2306.05715#45
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05720
| 45 |
As Figure 12 shows, inserting the depth map of a salient object in the LDM's depth representation created another salient object at the corresponding location in the generated image. Increasing the depth of the added object pushed it farther away from the viewpoint. In Figure 12ab, increasing the depth of the added objects resulted in a blurred effect and created the perception of greater depth within the scene. In Figure 12d, the object with increased depth also exhibited a reduction in its physical size.
F Smoothness regularization
When training the probing regressors for depth estimation, we also experimented with applying a smoothness constraint [13] to the probing prediction. The local changes in per-pixel depth within a high resolution image are mostly small. The smoothness constraint leverages this property of the image and penalizes rapid local changes to improve the depth prediction.
However, the depth predictions from our probing regressors have much lower resolution compared to the generated images, since LDM operates on a latent vector with smaller spatial size. In the low resolution probing prediction (see Figure 13), one pixel in the depth map represents a much larger region in the generated image, and the change in depth between two large regions is often
unsmooth. We observed that training the probing regressor without smoothness loss improved its depth estimation performance, especially when the resolution of the probing prediction is low (see Table 3).
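(For concreteness, a sketch of a linear probe with the kind of first-difference smoothness penalty discussed here; the feature dimension, resolution, and weight `lam` are illustrative assumptions, not the paper's exact setup.)

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthProbe(nn.Module):
    # A 1x1 convolution is a per-position linear map from features to depth.
    def __init__(self, feat_dim: int):
        super().__init__()
        self.linear = nn.Conv2d(feat_dim, 1, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:  # (B,C,H,W) -> (B,1,H,W)
        return self.linear(feats)

def smoothness_loss(depth: torch.Tensor) -> torch.Tensor:
    # Penalize rapid local changes via horizontal and vertical first differences.
    dx = (depth[..., :, 1:] - depth[..., :, :-1]).abs().mean()
    dy = (depth[..., 1:, :] - depth[..., :-1, :]).abs().mean()
    return dx + dy

probe = DepthProbe(feat_dim=1280)
feats = torch.randn(2, 1280, 16, 16)  # low-resolution latent features
target = torch.randn(2, 1, 16, 16)    # stand-in depth labels
pred = probe(feats)
lam = 0.1  # per the text, lam = 0 (no smoothness) worked better at low probe resolution
loss = F.mse_loss(pred, target) + lam * smoothness_loss(pred)
loss.backward()
```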
|
2306.05720#45
|
Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model
|
Latent diffusion models (LDMs) exhibit an impressive ability to produce
realistic images, yet the inner workings of these models remain mysterious.
Even when trained purely on images without explicit depth information, they
typically output coherent pictures of 3D scenes. In this work, we investigate a
basic interpretability question: does an LDM create and use an internal
representation of simple scene geometry? Using linear probes, we find evidence
that the internal activations of the LDM encode linear representations of both
3D depth data and a salient-object / background distinction. These
representations appear surprisingly early in the denoising process, well
before a human can easily make sense of the noisy images. Intervention
experiments further indicate these representations play a causal role in image
synthesis, and may be used for simple high-level editing of an LDM's output.
Project page: https://yc015.github.io/scene-representation-diffusion-model/
|
http://arxiv.org/pdf/2306.05720
|
Yida Chen, Fernanda Viégas, Martin Wattenberg
|
cs.CV, cs.AI, cs.LG
|
A short version of this paper is accepted in the NeurIPS 2023
Workshop on Diffusion Models: https://nips.cc/virtual/2023/74894
| null |
cs.CV
|
20230609
|
20231104
|
[
{
"id": "2209.14988"
},
{
"id": "1805.01070"
},
{
"id": "2210.13382"
},
{
"id": "1909.01066"
},
{
"id": "1809.10040"
},
{
"id": "2212.00774"
},
{
"id": "1610.01644"
},
{
"id": "2206.00364"
},
{
"id": "1905.05950"
},
{
"id": "2212.08861"
},
{
"id": "1908.02899"
},
{
"id": "2111.02114"
},
{
"id": "2105.14002"
},
{
"id": "2112.03126"
}
] |
2306.05783
| 45 |
Category (rows): Philosophy, Economics, Jurisprudence, Pedagogy, Literature, History, Science, Engineering, Agronomy, Medicine, Management, Art Studies, Xiezhi Overall, MMLU Overall, C-Eval Overall, M3KE Overall
Top: 0.856, 0.871, 0.761, 0.854, 0.825, 0.854, 0.926, 0.928, 0.902, 0.805, 0.857, 0.821; for the Overall rows: GPT-4 0.431, GPT-4 0.402, GPT-4 0.413, GPT-4 0.404
Human Average: 0.453, 0.520, 0.460, 0.510, 0.560, 0.460, 0.394, 0.380, 0.333, 0.430, 0.513, 0.400; for the Overall rows: bloomz-mt 0.337, Bloomz-mt 0.266, ChatGPT 0.286, ChatGPT 0.290
Next column (continuation truncated): ChatGPT 0.477, GPT-4 0.419, GPT-4 0.368, GPT-4 0.472, GPT-4 0.417, GPT-4 0.437, GPT-4 0.436, GPT-4 0.420, GPT-4 0.515
|
2306.05783#45
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Language Processing (NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released at https://github.com/MikeGu721/XiezhiBenchmark.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 45 |
From an industrial perspective, we suggest adopting the long-short update strategy when we leverage LLM for feature engineering and feature encoder. To be specific, we can cut down the training data volume and relax the update frequency for LLM (e.g., week-level) while maintaining full training data and high update frequency for CRM. The basis to support this approach is that researchers [Chen, 2023; Zhou et al., 2023] point out that LLM has strong inductive learning capacities to produce generalized and reliable outputs via a handful of supervisions. In this way, LLM can provide aligned in-domain knowledge to CRM, while CRM acts as a frequently updated adapter for LLM.
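(A schematic of this long-short schedule; the day granularity, sampling rate, and the `finetune_*` callables are placeholders for whatever training routines a production system actually uses, not a definitive recipe.)

```python
import random

def long_short_update(full_data_by_day, finetune_crm, finetune_llm,
                      llm_period=7, llm_sample_rate=0.1):
    # Short cycle: CRM sees the full, fresh data every day.
    # Long cycle: LLM is refreshed week-level on a small subsample.
    llm_buffer = []
    for day, batch in enumerate(full_data_by_day):
        finetune_crm(batch)
        k = max(1, int(len(batch) * llm_sample_rate))
        llm_buffer.extend(random.sample(batch, k))
        if (day + 1) % llm_period == 0:
            finetune_llm(llm_buffer)  # a handful of supervisions can suffice
            llm_buffer.clear()

logs = [[f"impression-{d}-{i}" for i in range(100)] for d in range(14)]
long_short_update(logs,
                  finetune_crm=lambda b: None,
                  finetune_llm=lambda b: print(f"LLM refresh on {len(b)} samples"))
```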
|
2306.05817#45
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 45 |
The production of harmful content is currently mitigated using combinations of four methods: data sourcing [30]; human moderation of content included in training data [65]; automated moderation of content included in training data [101]; and keyword deny-lists [149]. Given that the exclusion of harmful content within datasets stands to create distinct harms to marginalized communities, efforts towards mitigating the generation of harmful content become a question of the politics of classification [36, 135, 72, 242] and its potential harms.
What to Evaluate Evaluating Disparate Performance once systems have undergone safety provisions can give signal to possible erasure. Accounting for the demographics and composition of human crowdworkers can also provide information [209] about subsequent impacts. Longer-term impacts of erasure depend on the system's deployment context, leading to opportunity loss or reinforced biases and norms.
Mitigation and Interventions Better democratic processes for developing and deploying systems and safety provisions such as content moderation should work with marginalized populations. This should include more investment in representative crowdworkers and appropriate compensation and mental health support. Lessons from social media content moderation can apply, such as working with groups who have been erased and documenting patterns of erasure to improve future approaches [213].
# 4.2.2.2 Long-term Amplifying Marginalization by Exclusion (and Inclusion)
|
2306.05949#45
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 45 |
[4] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 1877â1901. Curran Associates, Inc., 2020.
[5] Andrea Burns, Deniz Arsan, Sanjna Agrawal, Ranjitha Kumar, Kate Saenko, and Bryan A. Plummer. A dataset for interactive vision-language navigation with unknown command feasibility. In European Conference on Computer Vision, 2022.
|
2306.06070#45
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06283
| 45 |
NMR-active nuclei can be probed in the given sample.
It is easy to envision that this tool could be even more helpful by fine-tuning or conditioning it on a research group's knowledge base (e.g., group Wiki or standard operating procedures) and communication history (e.g., a group's Slack history). An important limitation of the current implementation is that the small context window of available LLMs limits the amount of JSON data one can directly provide within the prompt, limiting each conversation to analyzing a relatively small number of samples. Therefore, one needs to either investigate the use of embeddings to determine which samples to include in the context or adopt an "agent" approach where the assistant is allowed to query the API of the ELN (interleaved with extraction and summarization calls).
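(The embedding-based selection mentioned here could look roughly like the sketch below; the toy `embed` function stands in for a real embedding model, and the serialization and `top_k` are assumptions.)

```python
import hashlib
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy stand-in: a real system would call an embedding model here.
    seed = int(hashlib.md5(text.encode()).hexdigest()[:8], 16)
    v = np.random.default_rng(seed).normal(size=384)
    return v / np.linalg.norm(v)

def select_samples(query: str, samples: list, top_k: int = 5) -> list:
    # Rank serialized ELN samples by cosine similarity to the user's question,
    # so only the most relevant ones are placed in the limited context window.
    q = embed(query)
    scores = [float(q @ embed(s)) for s in samples]
    order = np.argsort(scores)[::-1][:top_k]
    return [samples[i] for i in order]

samples = [f"sample jdb1-{i}: NMC cathode, cycled at C/10" for i in range(50)]
print(select_samples("Which cells used an NMC cathode?", samples, top_k=3))
```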
d. BOLLaMa: facilitating Bayesian optimization with large language models
Bayesian optimization (BO) is a powerful tool for optimizing expensive functions, such as mapping of reaction conditions to the reaction yield. Chemists would greatly benefit from using this method to reduce the number of costly experiments they need
|
2306.06283#45
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.05685
| 46 |
[25] Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81, 2004.
[26] Stephanie Lin, Jacob Hilton, and Owain Evans. Truthfulqa: Measuring how models mimic human falsehoods. arXiv preprint arXiv:2109.07958, 2021.
[27] Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V Le, Barret Zoph, Jason Wei, et al. The flan collection: Designing data and methods for effective instruction tuning. arXiv preprint arXiv:2301.13688, 2023.
[28] Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. Cross-task generalization via natural language crowdsourcing instructions. In ACL, 2022.
[29] OpenAI. Evals is a framework for evaluating llms and llm systems, and an open-source registry of benchmarks. https://github.com/openai/evals.
[30] OpenAI. Gpt-4 technical report, 2023.
|
2306.05685#46
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 46 |
Even if LLM responses to help requests are not directly sent to students, they might be used to help teachers respond to requests. One option is to employ an LLM to create template responses, which are then edited by teachers. This might also be explored in the context of programming error messages [7, 17, 45] as well as in feedback systems that group similar submissions together so that feedback may be provided to many students at once [24, 25, 41, 61].
5.2.2 The Problem of Effective Feedback. Some of the LLM responses included words of encouragement. The LLM might state, for example, that "you are on the right path" or vaguely praise the student. Positivity can certainly be desirable in feedback, but it is challenging to provide just the right kind of supportive feedback that takes the student's level, context, and other factors into account [66]. Praising on easy tasks may lead students simply to dismiss the feedback; at worst, it may implicitly suggest that the student lacks ability [8] and demotivate the student.
|
2306.05715#46
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05783
| 46 |
0.472 GPT-4 0.417 GPT-4 0.437 GPT-4 0.436 GPT-4 0.420 GPT-4 0.515 GPT-4 0.469 GPT-4 0.390 GPT-4 0.437 ChatGPT 0.267 ChatGPT 0.240 Bloomz-mt 0.204 baize-7b (lora) 0.231 bloomz-mt 0.453 bloomz-mt 0.310 llama-65b-hf 0.323 bloomz-mt 0.442 bloomz-mt 0.405 bloomz-mt 0.272 bloomz-mt 0.408 ChatGPT 0.412 bloomz-mt 0.366 baize-healthcare-lora-7B 0.279 baize-lora-30B 0.375 baize-healthcare-lora-7B 0.417 BELLE-7B-0.2M 0.211 baize-30b (lora) 0.193 baize-7b (lora) 0.194 baize-7b-healthcare (lora) 0.203 GPT-4 0.413 llama-65b-hf 0.290 baize-lora-7B 0.230 ChatGPT 0.280
|
2306.05783#46
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Language Processing (NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released at https://github.com/MikeGu721/XiezhiBenchmark.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 46 |
4.2 Online recommender systems are usually real-time services and extremely time-sensitive, where all stages (e.g., matching, ranking, reranking) should be done within around tens of milliseconds. The involvement of LLM during the inference phase gives rise to the inference latency problem. The inference time of the LLM is expensive, not to mention the additional time cost brought by prompt template generation. Pre-computing and caching the outputs or middle representations of LLM is the common strategy to ensure low-latency inference when involving LLM during the inference phase. When adapting the LLM as the scoring/ranking functions, M6-Rec [Cui et al., 2022] proposes the multi-segment late interaction strategy. The textual features of user and item are split into finer-grained segments that are more static, e.g., by representing each clicked item as an individual segment. Then, we can pre-compute and cache the encoded representations of each segment using the first several transformer layers, while the rest of the layers are leveraged to perform late interaction between segments when the recommendation request arrives.
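(A toy version of the pre-compute-and-cache idea; the encoder stub, cache policy, and MaxSim-style aggregation below are illustrative assumptions, not M6-Rec's actual implementation.)

```python
import hashlib
import numpy as np

CACHE = {}  # segment text -> pre-computed representation

def encode_segment(text: str) -> np.ndarray:
    # Stand-in for the expensive first transformer layers.
    seed = int(hashlib.md5(text.encode()).hexdigest()[:8], 16)
    return np.random.default_rng(seed).normal(size=64)

def cached_encode(segment: str) -> np.ndarray:
    # Offline/warm path: static segments are encoded once and reused across requests.
    if segment not in CACHE:
        CACHE[segment] = encode_segment(segment)
    return CACHE[segment]

def late_interaction_score(user_segments, item_segments) -> float:
    # Online path: only the cheap interaction between cached vectors runs at request time.
    U = np.stack([cached_encode(s) for s in user_segments])
    I = np.stack([cached_encode(s) for s in item_segments])
    return float((U @ I.T).max(axis=1).sum())

score = late_interaction_score(
    ["clicked: red sneakers", "clicked: running shorts"],
    ["title: trail running shoes"],
)
```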
|
2306.05817#46
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 46 |
# 4.2.2.2 Long-term Amplifying Marginalization by Exclusion (and Inclusion)
Biases, dominant cultural values, and disparate performance seen in lack of representation in training and development of generative AI systems can exacerbate marginalization when those systems are deployed. For example, increasing resourcing and performance for already highly resourced languages reinforces those languagesâ dominance.
Inclusion without consent can also harm marginalized groups. While some research strives to improve performance for underrepresented Indigenous languages [116], the same Indigenous groups resist AI approaches to use of their language [158]. Profiting from Indigenous languages, and from groups who have been systematically exploited, continues both directly and indirectly.
Disparate Performance in Critical Infrastructure Generative AI use in critical infrastructure that directly impacts human wellbeing can also be classified as high-risk use cases. This includes use in judicial systems, healthcare such as mental health and medical advice, and democratic processes, such as election or political information. An example is generative AI systems used to replace care work, such as crisis intervention and research [82] and action [153] to use chatbots for eating disorder prevention. Technical tooling used in human systems and processes that have long-recorded discrimination patterns [261] can instead exacerbate harm [134].
13
|
2306.05949#46
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 46 |
[6] Ohio Supercomputer Center. Ohio supercomputer center, 1987. URL http://osc.edu/ark:/19495/f5s1ph73.
[7] Radoslav Chakarov. How many websites are there? How many are active in 2023? https://webtribunal.net/blog/how-many-websites/. 2023.
[8] Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870â1879, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-1171. URL https://aclanthology.org/P17-1171.
[9] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope,
|
2306.06070#46
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06283
| 46 |
LLM Chat Block with contextual data (powered by GPT-3.5-turbo)
Assistant: Hello! I am your virtual assistant and I am here to help you manage your experimental data. What can I help you with today?
User: Please make me a mermaid flowchart showing how this cell was made from other items in the database
Assistant: Certainly! Here is a Mermaid flowchart that shows how the cell with item_id jdb1-1_e1_cl was made from other items in the database: In this flowchart, each rectangular box represents an item in the database, and the arrows between the boxes represent the relationships between the items. For example, SM1 and SM2 were used to create sample JDB1-1, which was then used to create sample E1, which was finally used to create the cell cl.
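(Once the precursor relations are known, a provenance chart like the one in this exchange can also be assembled deterministically; the helper below and its item IDs are illustrative, not part of the whinchat code.)

```python
def provenance_to_mermaid(edges) -> str:
    # Render (precursor, product) pairs as a top-down mermaid.js flowchart.
    return "\n".join(["graph TD"] + [f"    {src} --> {dst}" for src, dst in edges])

# Relations described in the chat above: SM1, SM2 -> JDB1-1 -> E1 -> cl
print(provenance_to_mermaid(
    [("SM1", "JDB1-1"), ("SM2", "JDB1-1"), ("JDB1-1", "E1"), ("E1", "cl")]
))
```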
|
2306.06283#46
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 46 |
Geunwoo Kim, Pierre Baldi, and Stephen McAleer. Language models can solve computer tasks. CoRR, abs/2303.17491, 2023. doi: 10.48550/arXiv.2303.17491. URL https://doi.org/10.48550/arXiv.2303.17491.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. In NeurIPS, 2022. URL http://papers.nips.cc/paper_files/paper/2022/hash/8bb0d291acd4acf06ef112099c16f326-Abstract-Conference.html.
|
2306.07929#46
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 47 |
[30] OpenAI. Gpt-4 technical report, 2023.
[31] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.
[32] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311–318, 2002.
[33] Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. Instruction tuning with gpt-4. arXiv preprint arXiv:2304.03277, 2023.
[34] Priya Raghubir and Ana Valenzuela. Center-of-inattention: Position biases in decision-making. Organizational Behavior and Human Decision Processes, 99(1):66–80, 2006.
|
2306.05685#47
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 47 |
Instructional guidance should attend to the student's current level of domain knowledge; a mismatch will result in poorer learning outcomes [37]. Although the LLM responses sought to address the technical issues and at times provided positive feedback, we saw little indication of the feedback being adjusted to the (beginner) level of the programming exercises being solved or to the context (the introductory course that we mentioned in the prompt). A handful of the LLM responses included suggestions that were well beyond the scope of an introductory programming course. In this
work, we did not even attempt to describe student-specific levels of prior knowledge to the LLMs.
|
2306.05715#47
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05783
| 47 |
[Flattened table fragment from PDF extraction (per-discipline scores; column assignments lost). Recoverable (model, score) pairs: GPT-4 0.413; llama-65b-hf 0.290; baize-lora-7B 0.230; ChatGPT 0.280; baize-healthcare-lora-7B 0.284; ChatGPT 0.233; ChatGPT 0.220; bloomz-mt 0.387; ChatGPT 0.311; ChatGPT 0.265; pythia-2.8b 0.367; bloomz-mt 0.377; BELLE-7B-1M 0.209; Bloomz-7b1-mt 0.189; baize-30b (lora) 0.191; Bloomz-mt 0.161; (column header: "Language Models") pythia-1.4b 0.321; BELLE-7B-1M 0.255; BELLE-7B-0.2M 0.217; BELLE-7B-0.2M 0.251; baize-lora-13B 0.249; BELLE-7B-0.2M 0.214; BELLE-7B-1M 0.210; bloomz-7b1 0.274; bloomz-7b1-mt 0.224; doctorglm-6b 0.253; bloomz-p3 0.280; ChatGPT 0.339; bloomz-7b1 0.203. The leading score 0.203 pairs with a model named in the preceding chunk.]
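If only the alternating name/score pattern survives extraction, the pairs can be rebuilt programmatically. A small sketch (the regex and helper are ours; stray header tokens such as "Language Models" would still need filtering):

```python
import re

def parse_pairs(fragment):
    """Rebuild (model, score) pairs from a PDF-flattened benchmark table.
    Assumes the text alternates 'model-name score' with scores like 0.413."""
    pairs, name = [], []
    for tok in fragment.split():
        if re.fullmatch(r"0\.\d+", tok):
            if name:                      # a score closes the preceding name
                pairs.append((" ".join(name), float(tok)))
                name = []
            # leading scores with no preceding name are silently dropped
        else:
            name.append(tok)
    return pairs

fragment = "0.203 GPT-4 0.413 llama-65b-hf 0.290 baize-lora-7B 0.230 ChatGPT 0.280"
print(parse_pairs(fragment))
# [('GPT-4', 0.413), ('llama-65b-hf', 0.29), ('baize-lora-7B', 0.23), ('ChatGPT', 0.28)]
```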
|
2306.05783#47
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Language Processing (NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises 249,587 multiple-choice questions across 516
diverse disciplines drawn from 13 different subjects, accompanied by
Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k questions. We
evaluate 47 cutting-edge LLMs on Xiezhi. Results indicate that LLMs exceed the
average performance of humans in science, engineering, agronomy, medicine, and
art, but fall short in economics, jurisprudence, pedagogy, literature, history,
and management. We anticipate that Xiezhi will help analyze important strengths
and shortcomings of LLMs; the benchmark is released at
https://github.com/MikeGu721/XiezhiBenchmark.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05949
| 47 |
Generative AI used in medical education and potentially in clinical decision-making will continue to underserve and expose institutionally marginalised individuals and communities to life-impacting risks. From inaccurate skin cancer diagnosis [262], to the scoring of Black patients in the U.S. medical system as less sick than the reality of their complex health and resource allocation needs [167], the use of generative AI in medical settings must be sensitive to existing challenges to equality within medical practice [114].
What to Evaluate Systems should again undergo Disparate Performance evaluations once updated for a high-risk task in critical infrastructure, and these evaluations should account for the additional deployment context. Long-term impacts, in addition to marginalization, can include erasure. Evaluating marginalization will depend on context and should account for cases where work by marginalized populations is less visible or uncredited [264]. Evaluating marginalization impacts on individuals, such as through health [23], is ongoing research.
Mitigation and Intervention Improving evaluation work for underrepresented populations, such as low-resource languages, and crediting local researchers [34] can help provide more information about disparate performance. Engagement with populations should be done in ways that embody local approaches [37]. Policies should be crafted to better respect rights to refusal [224]. Regulations for AI that address these discriminatory patterns should be coordinated with other nations to ensure protections are global and regulations are not "patchworked".
|
2306.05949#47
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 47 |
James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. Palm: Scaling language modeling with pathways, 2023. URL http://jmlr.org/papers/v24/22-1144.html.
|
2306.06070#47
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
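The two-stage idea from the abstract, filtering DOM elements with a cheap scorer before the LLM sees them, can be sketched as follows (the token-overlap stub stands in for the paper's actual small-LM ranker):

```python
import re

def tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def overlap_score(task, element):
    # Stub "small LM": crude token overlap between task and element text.
    return len(tokens(task) & tokens(element))

def rank_elements(score_fn, task, elements, keep=5):
    """Stage 1: score every DOM element with a cheap model and keep the
    top-k.  Stage 2 (not shown) feeds only these candidates to the LLM."""
    return sorted(elements, key=lambda el: score_fn(task, el), reverse=True)[:keep]

elements = [
    '<button id="buy">Buy now</button>',
    '<a href="/careers">Careers</a>',
    '<input name="search" placeholder="Search flights">',
]
print(rank_elements(overlap_score, "search for flights to Tokyo", elements, keep=2))
```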
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06283
| 47 |
Figure 5: Using an LLM as an interface to an ELN/data management system. LLM-based assistants can provide powerful interfaces to digital experimental data. The figure shows a screenshot of a conversation with whinchat in the datalab data management system (https://github.com/the-grey-group/datalab). Here, whinchat is provided with data from the JSON API of datalab for an experimental battery cell. The user then prompts (green box) the system to build a flowchart of the provenance of the sample. The assistant responds with mermaid.js markdown code, which the datalab interface automatically recognizes and translates into a visualization.
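The pattern the caption describes, handing structured sample data to an LLM and asking for mermaid.js markdown that the front end renders, might look roughly like this (complete() is a stand-in for a real LLM call; this is not datalab's actual implementation):

```python
import json

def provenance_flowchart(complete, sample_record):
    """Ask an LLM to express a sample's provenance as mermaid.js markdown,
    which a front end like datalab's can render as a flowchart."""
    prompt = (
        "Given this JSON record of an experimental battery cell, output a "
        "mermaid.js flowchart of its provenance and nothing else:\n"
        + json.dumps(sample_record, indent=2)
    )
    return complete(prompt)

# Stub LLM so the sketch runs end to end:
stub = lambda prompt: "graph TD; precursor --> electrode --> cell_A1"
record = {"id": "cell_A1", "electrode": {"precursor": "LiOH"}}
print(provenance_flowchart(stub, record))
```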
|
2306.06283#47
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 47 |
Michael Laskin, Luyu Wang, Junhyuk Oh, Emilio Parisotto, Stephen Spencer, Richie Steigerwald, DJ Strouse, Steven Hansen, Angelos Filos, Ethan A. Brooks, Maxime Gazeau, Himanshu Sahni, Satinder Singh, and Volodymyr Mnih. In-context reinforcement learning with algorithm distillation. CoRR, abs/2210.14215, 2022. doi: 10.48550/arXiv.2210.14215. URL https://doi.org/10. 48550/arXiv.2210.14215.
Jacky Liang, Wenlong Huang, Fei Xia, Peng Xu, Karol Hausman, Brian Ichter, Pete Florence, and Andy Zeng. Code as policies: Language model programs for embodied control. CoRR, abs/2209.07753, 2022. doi: 10.48550/arXiv.2209.07753. URL https://doi.org/10.48550/ arXiv.2209.07753.
|
2306.07929#47
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which lets it
surpass an LLM-based agent with fixed exemplars or one equipped with a
transient working memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 48 |
[35] Siva Reddy, Danqi Chen, and Christopher D Manning. Coqa: A conversational question answering challenge. Transactions of the Association for Computational Linguistics, 7:249–266, 2019.
[36] Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: An adversarial winograd schema challenge at scale. Communications of the ACM, 64(9):99–106, 2021.
[37] Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615, 2022.
[38] Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca, 2023.
|
2306.05685#48
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 48 |
work, we did not even attempt to describe student-specific levels of prior knowledge to the LLMs.
Future work should explore the creation of large language models that are "aware" of students' evolving prior knowledge and competence in programming. Such LLMs might then generate feedback messages that match the level of the particular student. One potential direction for this work is to track the time that the student has spent on a task, which has been observed as one of the indicators of programming exercise difficulty [30] and which correlates with performance [42, 46]; the LLM could be fine-tuned to take task difficulty into consideration. Fine-tuning an LLM to match specific course progressions is also a possibility. Moreover, it might be fruitful to distinguish feedback on progress from suggestions about fixing specific issues. Here, approaches such as adaptive immediate feedback [55] and personalized progress feedback [47] could be meaningful directions.
|
2306.05715#48
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05783
| 48 |
[Flattened table fragment from PDF extraction (per-discipline scores; column assignments lost). Recoverable (model, score) pairs: doctorglm-6b 0.253; bloomz-p3 0.280; ChatGPT 0.339; bloomz-7b1 0.203; Bloomz-7b1 0.167; baize-13b (lora) 0.184; llama-7b 0.158; llama-7b-hf 0.241; llama-7b-hf 0.234; ChatGPT 0.213; baize-lora-13B 0.244; baize-lora-7B 0.213; BELLE-7B-1M 0.207; bloomz-3b 0.200; bloomz-7b1-mt 0.253; BELLE-7B-0.2M 0.216; BELLE-7B-0.2M 0.223; BELLE-7B-0.2M 0.268; BELLE-7B-0.2M 0.238; baize-lora-7B 0.200; llama-13b 0.166; baize-7b-healthcare (lora) 0.178; baize-13b (lora) 0.155; BELLE-7B-0.2M 0.228; falcon-7b 0.233; llama-7b-hf 0.210; pythia-1.4b 0.241. The leading score 0.224 pairs with a model named in the preceding chunk.]
|
2306.05783#48
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Language Processing (NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises 249,587 multiple-choice questions across 516
diverse disciplines drawn from 13 different subjects, accompanied by
Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k questions. We
evaluate 47 cutting-edge LLMs on Xiezhi. Results indicate that LLMs exceed the
average performance of humans in science, engineering, agronomy, medicine, and
art, but fall short in economics, jurisprudence, pedagogy, literature, history,
and management. We anticipate that Xiezhi will help analyze important strengths
and shortcomings of LLMs; the benchmark is released at
https://github.com/MikeGu721/XiezhiBenchmark.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 48 |
Moreover, we could seek ways to reduce the model size for final inference, where methods have been well explored in other deep learning domains, e.g., distillation [Jiao et al., 2019], pruning [Chen et al., 2020], and quantization [Zafrir et al., 2019]. For instance, CTRL [Li et al., 2023e] proposes to perform contrastive learning to distill the semantic knowledge from the LLM into the CRM, which is then finetuned for the inference phase. These strategies generally serve as a tradeoff between model performance and inference latency. Alternatively, we could involve the LLM in the feature engineering stage, which does not add extra computation to the inference phase.
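A contrastive distillation objective of the kind CTRL describes can be sketched as an InfoNCE loss that pulls each CRM item embedding toward the frozen LLM embedding of the same item. A minimal PyTorch sketch under our own assumptions, not CTRL's exact loss:

```python
import torch
import torch.nn.functional as F

def contrastive_distill_loss(llm_emb, crm_emb, tau=0.07):
    """InfoNCE-style alignment of CRM item embeddings to frozen LLM text
    embeddings: matching (LLM, CRM) pairs in a batch are positives, all
    other pairings in the batch are negatives."""
    llm = F.normalize(llm_emb, dim=-1)      # (B, d), frozen in practice
    crm = F.normalize(crm_emb, dim=-1)      # (B, d), trainable
    logits = crm @ llm.t() / tau            # (B, B) similarity matrix
    targets = torch.arange(len(crm))        # diagonal entries are positives
    return F.cross_entropy(logits, targets)

crm = torch.randn(8, 64, requires_grad=True)
loss = contrastive_distill_loss(torch.randn(8, 64), crm)
loss.backward()  # only the CRM side would be updated in a real setup
```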
4.3
When adapting LLM, we have to construct in-domain textual inputs via prompting templates and insert proper instructions and demonstrations at the front if needed. However, the general guideline of industrial recommender systems requires longer user histories, larger candidate sets, and more features to achieve better recommendation performance, possibly leading to long-text inputs for the LLM. Such long-text inputs from RS domains (i.e., in-domain long texts) raise two key challenges, as follows.
|
2306.05817#48
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 48 |
When attempting to improve performance for underrepresented indigenous languages, it is important to adhere to established principles of Indigenous Data Sovereignty, e.g., the CARE Principles for Indigenous Data Governance [51] or the FAIR principles [52].
Participatory methodologies in AI development [31] have included engaging locally led and compensated focus groups with impacted community members, in collaboration with engineers, to think through potentially harmful outcomes. "Red-teaming" (testing AI models for potential vulnerabilities, biases, and weaknesses through real-world simulations) is also an entry point for engaging the "epistemic privilege" [246] of those most affected by the social impacts of generative AI systems.
Addressing barriers to evaluation is difficult, and at times impossible, when the model is enclosed in software or only available through an API. Therefore, given the overlaps with the public sphere, advocacy for open-source or licensed access is increasingly popular and compelling [231].
|
2306.05949#48
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 48 |
[10] Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Y. Zhao, Yanping Huang, Andrew M. Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. Scaling instruction-finetuned language models. CoRR, abs/2210.11416, 2022. doi: 10.48550/arXiv.2210.11416. URL https://doi.org/10.48550/arXiv.2210.11416.
|
2306.06070#48
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06283
| 48 |
to run [75, 76]. However, BO faces an interface and accessibility problem, too. The existing frameworks require significant background knowledge and coding experience not conventionally taught in chemistry curricula. Therefore, many chemists cannot benefit from tools such as BO. The BOLLaMa-team (Bojana Ranković, Andres M. Bran, Philippe Schwaller) showed that LLMs can lower the barrier for the use of BO by providing a natural language chat-like interface to BO algorithms. Figure 6 shows a prototype of a chat interface in which the LLM interprets the user request, initializes a BO run by suggesting initial experimental conditions, and then uses the feedback of the user to drive the BO algorithm and suggest new experiments. The example used data on various additives for a cooperative nickel-photoredox catalyzed reaction [77] and the BO code from Ranković et al. [78]. This ideally synergizes with an LLM interface to a data management solution (as discussed in the previous project) as one could directly persist the experimental results and leverage prior records to "bootstrap" BO runs.
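One step of the loop the figure describes, fitting a surrogate to the yields reported so far and suggesting the most promising untried conditions, can be sketched with an off-the-shelf Gaussian process (the descriptor encoding and UCB acquisition are our simplifications, not the BO code of Ranković et al.):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def suggest_next(tried_x, tried_y, candidates, kappa=1.0):
    """One BO step: fit a GP surrogate to observed (conditions, yield)
    pairs and pick the candidate maximizing an upper-confidence-bound
    acquisition.  The chat layer would parse user messages into these
    arrays and render the suggestion back as natural language."""
    gp = GaussianProcessRegressor().fit(np.array(tried_x), np.array(tried_y))
    mu, sigma = gp.predict(np.array(candidates), return_std=True)
    return candidates[int(np.argmax(mu + kappa * sigma))]

# Toy example: additives encoded by two numeric descriptors.
tried_x, tried_y = [[0.1, 0.3], [0.8, 0.2]], [0.42, 0.57]
candidates = [[0.5, 0.5], [0.9, 0.1], [0.2, 0.8]]
print("Next experiment:", suggest_next(tried_x, tried_y, candidates))
```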
|
2306.06283#48
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 48 |
Xinnian Liang, Bing Wang, Hui Huang, Shuangzhi Wu, Peihao Wu, Lu Lu, Zejun Ma, and Zhoujun Li. Unleashing infinite-length input capacity for large-scale language models with self-controlled memory system. CoRR, abs/2304.13343, 2023. doi: 10.48550/arXiv.2304.13343. URL https: //doi.org/10.48550/arXiv.2304.13343.
Aman Madaan, Niket Tandon, Peter Clark, and Yiming Yang. Memory-assisted prompt editing to improve gpt-3 after deployment. arXiv preprint arXiv:2201.06009, 2022.
Oier Mees, Jessica Borja-Diaz, and Wolfram Burgard. Grounding language with visual affordances over unstructured data. CoRR, abs/2210.01911, 2022. doi: 10.48550/arXiv.2210.01911. URL https://doi.org/10.48550/arXiv.2210.01911.
|
2306.07929#48
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which lets it
surpass an LLM-based agent with fixed exemplars or one equipped with a
transient working memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 49 |
[39] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
[40] Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. Large language models are not fair evaluators. arXiv preprint arXiv:2305.17926, 2023.
[41] Xuanhui Wang, Nadav Golbandi, Michael Bendersky, Donald Metzler, and Marc Najork. Position bias estimation for unbiased learning to rank in personal search. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pages 610–618, 2018.
|
2306.05685#49
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 49 |
5.3 The Need to Comprehend Code
The proliferation of LLMs and their inevitable use by both novices and professional programmers lends further emphasis to program comprehension as a key skill. Programmers need to understand code and learn to debug code created by others, where "others" now includes LLMs. Although LLMs are a partial cause of the situation, they may also be part of the solution. Even with the deficiencies that LLMs now have (e.g., inaccuracy and confident hallucinations), they could potentially be adopted in programming courses as long as the issues are acknowledged. For example, if it is clear enough to students that the code created by LLMs is often faulty, a novel type of learning activity might involve students evaluating LLM-created code to spot issues and improve the code, with the aim of teaching code comprehension, debugging, and refactoring in the process. In addition to potentially being educational for the students, such activities could be used to further tune the LLM by giving it the improved code as feedback.
|
2306.05715#49
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05783
| 49 |
[Flattened table fragment from PDF extraction (per-discipline scores; column assignments lost). Recoverable (model, score) pairs: falcon-7b 0.233; llama-7b-hf 0.210; pythia-1.4b 0.241; alpaca-lora-7b 0.206; baize-lora-7B 0.202; BELLE-7B-0.6M 0.197; falcon-7b 0.228; bloomz-7b1 0.215; bloomz-7b1 0.222; baize-lora-7B 0.268; baize-lora-13B 0.229; bloomz-7b1-mt 0.196; stablelm-7b 0.158; Bloomz-3b 0.168; alpaca-7b 0.142; BELLE-7B-1M 0.226; baize-lora-7B 0.222; BELLE-7B-1M 0.199; llama-65b-hf 0.237; BELLE-7B-0.2M 0.194; alpaca-lora-7b 0.192; BELLE-7B-0.2M 0.191; alpaca-lora-7b 0.224; bloomz-3b 0.200; bloomz-7b1-mt 0.219; baize-healthcare-lora-7B 0.263. The leading score 0.228 pairs with a model named in the preceding chunk.]
|
2306.05783#49
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Language Processing (NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises 249,587 multiple-choice questions across 516
diverse disciplines drawn from 13 different subjects, accompanied by
Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k questions. We
evaluate 47 cutting-edge LLMs on Xiezhi. Results indicate that LLMs exceed the
average performance of humans in science, engineering, agronomy, medicine, and
art, but fall short in economics, jurisprudence, pedagogy, literature, history,
and management. We anticipate that Xiezhi will help analyze important strengths
and shortcomings of LLMs; the benchmark is released at
https://github.com/MikeGu721/XiezhiBenchmark.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 49 |
First, Hou et al. [2023b] discover that LLMs have difficulty dealing with long texts, especially when we extend the text with a longer user history or a larger candidate set, even though the total number of input tokens does not exceed the length of the context window (e.g., 512 for BERT, 4096 for ChatGPT). The reason might be that the distribution of in-domain long text is quite different from the pretraining corpora of the LLM. Furthermore, an excessively long text input will cause memory inefficiency, and might even break the context window limitation, leading to partial information loss and inferior outputs from the LLM.
To this end, it is of great importance to investigate how to properly filter, select, and arrange the textual information as input for the LLM during prompt engineering, as well as how to instruct or tune the LLM to better align with the distribution of these in-domain long-text inputs. Besides, in NLP domains, a range of works have been proposed to address the context window limitation (e.g., sliding windows [Wang et al., 2019], memory mechanisms [Ding et al., 2020]), which could be considered in recommender systems.
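As a concrete instance of the sliding-window idea, a long behavior history can be split into overlapping windows that each fit the context budget, optionally carrying a summary of earlier windows forward as a crude memory mechanism (a minimal sketch, not the cited methods):

```python
def sliding_windows(history, window=32, stride=16):
    """Split a long user-behavior history into overlapping windows so each
    prompt stays inside the LLM's context budget; summaries of earlier
    windows can be carried forward as a simple memory mechanism."""
    last_start = max(len(history) - window, 0)
    return [history[i:i + window] for i in range(0, last_start + 1, stride)]

history = [f"item_{i}" for i in range(80)]
for w in sliding_windows(history):
    pass  # each window would be rendered into one prompt
print(len(sliding_windows(history)), "windows")  # 4 windows for 80 items
```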
|
2306.05817#49
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 49 |
Smuha [226] proposes accountability and monitoring mechanisms at a public oversight level, for example mandatory impact assessments of AI systems which incorporate opportunities for societal feedback. Smuha also emphasises the importance of independent information collection and distribution about AI's societal impact. Further, it will be necessary to introduce procedural rights, for example a "right to access to information, access to justice, and participation in public decision-making on AI, regardless of the demonstration of individual harm".
# 4.2.2.3 Abusive or Violent Content
Generative AI systems can generate outputs that are used for abuse, constitute non-consensual content, or are threats of violence and harassment [9]. Non-consensual sexual representations of people include representations of minors as generative child sexual abuse material (CSAM) [155]. Abuse and violence can disparately affect groups, such as women and girls [10].
|
2306.05949#49
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 49 |
[11] Yingqiang Ge, Wenyue Hua, Jianchao Ji, Juntao Tan, Shuyuan Xu, and Yongfeng Zhang. Openagi: When LLM meets domain experts. CoRR, abs/2304.04370, 2023. doi: 10.48550/arXiv.2304.04370. URL https://doi.org/10.48550/arXiv.2304.04370.
[12] Yu Gu, Sue Kase, Michelle Vanni, Brian M. Sadler, Percy Liang, Xifeng Yan, and Yu Su. Beyond I.I.D.: three levels of generalization for question answering on knowledge bases. In Jure Leskovec, Marko Grobelnik, Marc Najork, Jie Tang, and Leila Zia, editors, WWW '21: The Web Conference 2021, Virtual Event / Ljubljana, Slovenia, April 19-23, 2021, pages 3477-3488. ACM / IW3C2, 2021. doi: 10.1145/3442381.3449992. URL https://doi.org/10.1145/3442381.3449992.
|
2306.06070#49
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06283
| 49 |
[Figure 6 graphic: a chat between a user and BoLLama (LLM + BO backend). The bot greets the user; the user asks to optimize additives to increase the yield of a decarboxylative amination reaction, starting with 4 experiments; the bot suggests 4 additives as SMILES strings; after each round of reported results, a bo_step(data) call over the accumulated data returns the next suggested additive.]
Figure 6: Schematic overview of BoLLama. An LLM can act as an interface to a BO algorithm. An experimental chemist can bootstrap an optimization and then, via a chat interface, update the state of the simulation to which the bot responds with the recommended next steps.
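To make the loop concrete, the following is a minimal sketch, not the paper's implementation, of a `bo_step` routine such a chat interface could call; it assumes additives have already been featurized into numeric vectors and uses a Gaussian process with expected improvement via scikit-learn and SciPy.

```python
# Hypothetical bo_step: fit a GP on measured yields and suggest the
# pool index with the highest expected improvement (maximization).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def bo_step(X_seen, y_seen, X_pool):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X_seen, y_seen)
    mu, sigma = gp.predict(X_pool, return_std=True)
    best = y_seen.max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    return int(np.argmax(ei))  # index of the next additive to test
```

The chat layer then only has to translate the user's reported yields into `(X_seen, y_seen)` and render the returned additive back as a SMILES string.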
|
2306.06283#49
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 49 |
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Maria-Florina Balcan and Kilian Q. Weinberger, editors, Proceedings of the 33nd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, volume 48 of JMLR Workshop and Conference Proceedings, pages 1928â1937. JMLR.org, 2016. URL http://proceedings.mlr.press/v48/mniha16.html.
Ali Modarressi, Ayyoob Imani, Mohsen Fayyaz, and Hinrich Schütze. RET-LLM: towards a general read-write memory for large language models. CoRR, abs/2305.14322, 2023. doi: 10.48550/arXiv.2305.14322. URL https://doi.org/10.48550/arXiv.2305.14322.
|
2306.07929#49
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 50 |
[42] Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, Wei Ye, Shikun Zhang, and Yue Zhang. Pandalm: An automatic evaluation benchmark for llm instruction tuning optimization, 2023.
[43] Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Raghavi Chandu, David Wadden, Kelsey MacMillan, Noah A Smith, Iz Beltagy, et al. How far can camels go? exploring the state of instruction tuning on open resources. arXiv preprint arXiv:2306.04751, 2023.
[44] Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-Instruct: Aligning language model with self generated instructions, 2022.
|
2306.05685#50
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 50 |
5.4 On the Evolution of LLMs The evolution of large language models has been extremely rapid recently, and only seems to accelerate. We conducted our analysis in March 2023, at a time when GPT-3.5-turbo from March 1st was the most recent model readily available. At the time of writing, however, the most recent model is GPT-4, which reportedly performs better on most tasks.
Our results suggest that this evolution is also visible in performance on the task we are interested in, responding to student help requests. Comparing the older Codex LLM to the newer GPT-3.5, we found that GPT-3.5 outperformed Codex. This raises interesting questions about how long the results of LLM performance studies remain valid. For example, much of the prior work in CER has employed LLMs that are already "ancient."
The rapid evolution can be troublesome for research replication and for the integration of LLMs into teaching. For example, on March 21st, 2023, OpenAI announced that support for the Codex API would be discontinued within days. This renders our results on Codex performance nearly impossible to replicate. Such developments highlight the importance of truly open LLMs that can be run locally without relying on third-party APIs.
|
2306.05715#50
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05783
| 50 |
(Table 2 fragment, alternating scores and model names, truncated at the chunk boundaries:) 0.224 bloomz-3b 0.200 bloomz-7b1-mt 0.219 baize-healthcare-lora-7B 0.263 alpaca-lora-7b 0.227 alpaca-lora-7b 0.194 llama-65b 0.143 llama-65b 0.154 falcon-40b-instruct 0.141 vicuna-13b-delta-v1.1 0.223 falcon-7b-instruct 0.214 alpaca-lora-7b 0.192 BELLE-7B-1M 0.237 bloomz-3b 0.187 baize-healthcare-lora-7B 0.181 vicuna-7b-delta-v1.1 0.188 BELLE-7B-1M 0.215 pythia-1.4b 0.193 BELLE-7B-1M 0.210 BELLE-7B-1M 0.259 moss-moon-003-base 0.224 vicuna-7b-delta-v1.1 0.191 Bloomz-3b 0.139 llama-13b 0.152 stablelm-7b 0.140
|
2306.05783#50
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Language Processing~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 50 |
4.4 In recommender systems, there exists a kind of pure ID feature that inherently contains no semantic information (e.g., user ID, item ID). If we include these ID features in the prompting text, the tokenization is actually meaningless to language models (e.g., user ID AX1265 might be tokenized as [AX, 12, 65]). Many works [Cui et al., 2022; Hou et al., 2023a] tend to directly abandon these ID features (e.g., replacing item IDs with item titles or descriptions) for unified cross-domain recommendation via the natural language interface, since the IDs are usually not shared across different domains. However, some works [Geng et al., 2022; Yuan et al., 2023] point out that bringing in ID features can greatly promote the recommendation performance, although sacrificing the cross-domain generalization ability. Therefore, it is still an open question whether we should retain the ID features or not, which divides the key challenges regarding ID indexing & modeling into two directions.
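The tokenization problem is easy to reproduce; a small illustration (assuming the Hugging Face `transformers` library, with "AX1265" being the hypothetical ID from the text):

```python
# Show how a BPE tokenizer fragments a raw user ID into meaningless pieces.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
print(tok.tokenize("user AX1265"))
# e.g., ['user', 'ĠAX', '12', '65'] -- the sub-tokens carry no user semantics
```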
|
2306.05817#50
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 50 |
What to Evaluate Sensitive topics and trauma's impacts on people are by nature challenging to evaluate, and evaluation must be done with care. Consequences of abuse of children and minors can be long-term or lifelong [17]. Impacts and trauma can resurface throughout a person's life in many aspects. Evaluations for generative AI impacts can overlap with similar harms such as image-based sexual abuse [122]. As seen in 4.1.2 Cultural Values and Sensitive Content, consent from existing people should be evaluated with the person themselves.
Mitigation and Intervention Research to detect, mitigate, and report abusive and violent content such as CSAM is ongoing [241] and tools specific to modalities such as images can help identify content that is not yet labeled as CSAM [243]. Relevant regulation should be updated to address generated content that may not accurately portray an existing person or their body or self, but lead to real harms.
# 4.2.3 Concentration of Authority
Use of generative AI systems to contribute to authoritative power and reinforce dominant values systems can be intentional and direct or more indirect. Concentrating authoritative power can also exacerbate inequality and lead to exploitation.
# 4.2.3.1 Militarization, Surveillance, and Weaponization
|
2306.05949#50
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 50 |
[13] Yu Gu, Xiang Deng, and Yu Su. Don't generate, discriminate: A proposal for grounding language models to real-world environments. CoRR, abs/2212.09736, 2022. doi: 10.48550/arXiv.2212.09736. URL https://doi.org/10.48550/arXiv.2212.09736.
[14] Izzeddin Gur, Ofir Nachum, Yingjie Miao, Mustafa Safdari, Austin Huang, Aakanksha Chowdhery, Sharan Narang, Noah Fiedel, and Aleksandra Faust. Understanding html with large language models, 2023.
[15] Shibo Hao, Tianyang Liu, Zhen Wang, and Zhiting Hu. Toolkengpt: Augmenting frozen language models with massive tools via tool embeddings. CoRR, abs/2305.11554, 2023. doi: 10.48550/arXiv.2305.11554. URL https://doi.org/10.48550/arXiv.2305.11554.
|
2306.06070#50
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06283
| 50 |
As the examples in this section show, we find that LLMs have the potential to greatly enhance the efficiency of a diverse array of processes in chemistry and materials science by providing novel interfaces to tools or by completely automating their use. This can help streamline workflows, reduce human error, and increase productivity, often by replacing "glue code" with natural language or by studying a software library through chatting with an LLM.
# C. Knowledge Extraction
Beyond providing novel interfaces for tools, LLMs can also serve as powerful tools for extracting knowledge from the vast amount of chemical literature available. With LLMs, researchers can rapidly mine and analyze large volumes of data, enabling them to uncover novel insights and advance the frontiers of chemical knowledge. Tools such as paper-qa [28] can help to dramatically cut down the time required for literature search by automatically retrieving, summarizing, and contextualizing relevant fragments from the entire corpus of the scientific literature, for example, answering questions (with suitable citations) based on a library of hundreds of documents [35]. As the examples in the
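A minimal sketch of this retrieval-and-answer pattern, assuming the `Docs` interface from the paper-qa README (file names hypothetical):

```python
# Index a few local PDFs and ask a cited question over them.
from paperqa import Docs

docs = Docs()
for path in ["mof_synthesis_2021.pdf", "mof_stability_2022.pdf"]:
    docs.add(path)  # parse, embed, and index the document

answer = docs.query("Which linkers improve the water stability of MOFs?")
print(answer.formatted_answer)  # answer text with citations to the sources
```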
|
2306.06283#50
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 50 |
Xiaoman Pan, Wenlin Yao, Hongming Zhang, Dian Yu, Dong Yu, and Jianshu Chen. Knowledge-in-context: Towards knowledgeable semi-parametric language models. CoRR, abs/2210.16433, 2022. doi: 10.48550/arXiv.2210.16433. URL https://doi.org/10.48550/arXiv.2210.16433.
Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng, Yujia Xie, Yu Hu, Qiuyuan Huang, Lars Liden, Zhou Yu, Weizhu Chen, and Jianfeng Gao. Check your facts and try again: Improving large language models with external knowledge and automated feedback. CoRR, abs/2302.12813, 2023. doi: 10.48550/arXiv.2302.12813. URL https://doi.org/10.48550/arXiv.2302.12813.
|
2306.07929#50
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from the past episodes even for different task goals, which excels
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 51 |
[45] Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, et al. Super-NaturalInstructions: Generalization via declarative instructions on 1600+ tasks. In EMNLP, 2022.
[46] Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652, 2021.
[47] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022.
|
2306.05685#51
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 51 |
5.5 Additional Observations 5.5.1 Each help request is linked to a specific exercise submission. However, of all the submissions in the course, only a tiny fraction have associated help requests. During 2022, we got 120,583 submissions but only 831 (0.7%) of them had a help request. We checked whether this could be due to students' mostly submitting correct solutions, but that was not the case: only 56,855 submissions (47%) passed all the tests. This means that the students asked for help with only 1.3% of the 63,728 failed submissions.
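The quoted percentages are easy to sanity-check (illustrative arithmetic only, using the counts reported above):

```python
# Reproduce the help-request percentages from the reported counts.
submissions = 120_583
help_requests = 831
passed = 56_855
failed = submissions - passed                 # 63,728 failed submissions

print(f"{help_requests / submissions:.1%}")   # ~0.7% of all submissions
print(f"{passed / submissions:.1%}")          # ~47% passed all tests
print(f"{help_requests / failed:.1%}")        # ~1.3% of failed submissions
```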
|
2306.05715#51
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05783
| 51 |
Table 2: Ranking of all LLMs in each category in the 0-shot setting. One marker denotes categories in which human performance exceeds the state-of-the-art LLMs, whereas the other signifies categories in which LLMs have surpassed human performance.
absolute guarantee that each LLM will exhibit enhanced performance in response to an increased number of demonstrations. On the contrary, several LLMs exhibit a decline in performance as the quantity of learning examples expands. In contrast, GPT-4 and ChatGPT demonstrate a more stable improvement in their performance through few-shot learning. This can be attributed to the extensive domain knowledge possessed by GPT-4 and ChatGPT, enabling them to effectively comprehend the features embedded within the learning samples.
|
2306.05783#51
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Language Processing~(NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct evaluation of the 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released in~\url{https://github.com/MikeGu721/XiezhiBenchmark}.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 51 |
On the one hand, we could sacrifice the cross-domain generalization ability to obtain better in-domain recommendation performance by keeping the ID features. P5 [Geng et al., 2022] and its variants [Geng et al., 2023; Hua et al., 2023a; Hua et al., 2023b] retain the ID features as textual inputs in the prompting templates. P5 designs a whole-word embedding layer to assign the same whole-word embedding to tokens from the same ID feature. The whole-word embeddings are added to the token embeddings in the same way as position embeddings in language models. Based on P5, Hua et al. [2023b] further explore various item ID indexing strategies (e.g., sequential indexing, collaborative indexing) to ensure that the IDs of similar items consist of similar sub-tokens. RecFormer [Li et al., 2023b] and UniSRec [Hou et al., 2022] omit the item IDs in prompting texts, but introduce additional ID embeddings at either the bottom embedding layer or the top projection layer. In this line, researchers should focus on how to associate the LLM with ID features via carefully designed ID indexing & modeling strategies.
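A minimal sketch of such a whole-word embedding layer, under our own assumptions about shapes and vocabulary sizes rather than P5's exact code:

```python
# Sub-tokens originating from the same ID share one extra embedding,
# added to the token embeddings like position embeddings.
import torch
import torch.nn as nn

class WholeWordEmbedding(nn.Module):
    def __init__(self, vocab_size: int, num_whole_words: int, d_model: int):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.whole_emb = nn.Embedding(num_whole_words, d_model)

    def forward(self, token_ids, whole_word_ids):
        # whole_word_ids gives sub-tokens of the same ID the same index,
        # e.g., tokens [AX, 12, 65] -> whole_word_ids [7, 7, 7]
        return self.token_emb(token_ids) + self.whole_emb(whole_word_ids)

emb = WholeWordEmbedding(vocab_size=32128, num_whole_words=512, d_model=768)
out = emb(torch.tensor([[101, 102, 103]]), torch.tensor([[7, 7, 7]]))
```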
|
2306.05817#51
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 51 |
# 4.2.3.1 Militarization, Surveillance, and Weaponization
Concentrating power can occur at increasing levels, from small groups to national bodies. Code generative systems can improve development for technical surveillance systems and language models can be used to surveil text communication within work, social, and other environments [1].
Generative AI mechanisms for accumulating power and control at a national level, such as surveillance, have not yet materialized, but government and military interest in deploying and weaponizing generative AI systems is growing [106]. Uses include generating synthetic data for training AI systems [102] and military planning [78]. Military use is not inherently weaponization, and the risk depends on the use case and government interest. Favorable arguments use AI to protect national security and require differentiating national security interests from undue harm [44].
Generative AI systems are also enabling new kinds of cyberattacks, and amplifying the possibilities of existing cyberattacks. For example, synthetic audio has been used to copy the sound of someone's voice for more compelling fraud and extortion [124]. Large language models are also facilitating disinformation campaigns, influence operations, and phishing attacks [92].
|
2306.05949#51
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 51 |
[16] Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. Deberta: decoding-enhanced bert with disentangled attention. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL https://openreview.net/forum?id=XPZIaotutsD.
[17] Jinhao Jiang, Kun Zhou, Zican Dong, Keming Ye, Wayne Xin Zhao, and Ji-Rong Wen. StructGPT: A general framework for large language model to reason over structured data. CoRR, abs/2305.09645, 2023. doi: 10.48550/arXiv.2305.09645. URL https://doi.org/10.48550/arXiv.2305.09645.
|
2306.06070#51
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites are often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still a substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06283
| 51 |
[Figure 7 graphic: an input abstract (15 nm diameter SiO2 nanoparticles grafted with a block copolymer, with a 5 nm rubbery inner block and a 30 nm matrix-compatible outer block, synthesized to toughen an epoxy; graft density varied from 0.07 to 0.7 chains/nm2 and block molecular weight from 20 to 80 kg/mol) is converted into a JSON knowledge graph with "nodes" entries (id, name, label, attributes) and "edges" entries (source, target, properties).]
Figure 7: The InsightGraph interface. A suitably prompted LLM can create knowledge graph representations of scientific text that can be visualized using tools such as neo4j's visualization tools. [81]
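To make the idea concrete, here is a minimal sketch (not the InsightGraph implementation itself) of how such LLM-produced nodes/edges JSON could be turned into Cypher CREATE statements for neo4j; the specific node names, labels, and the "relation" property key are illustrative assumptions.

```python
# Hypothetical sketch: convert an LLM-produced knowledge-graph JSON
# (mirroring the nodes/edges structure in Figure 7) into Cypher statements.
# The concrete names, labels, and the "relation" key are assumptions.
graph = {
    "nodes": [
        {"id": 1, "name": "SiO2 nanoparticles", "label": "Material"},
        {"id": 6, "name": "epoxy", "label": "Material"},
    ],
    "edges": [
        {"source": 1, "target": 6, "properties": {"relation": "TOUGHENS"}},
    ],
}

statements = []
for node in graph["nodes"]:
    # One CREATE per node; the n<id> variable lets edges reference it below.
    statements.append(
        f'CREATE (n{node["id"]}:{node["label"]} {{name: "{node["name"]}"}})'
    )
for edge in graph["edges"]:
    rel = edge["properties"]["relation"]
    statements.append(f'CREATE (n{edge["source"]})-[:{rel}]->(n{edge["target"]})')

# Joined into a single script so the node variables stay in scope.
print("\n".join(statements))
```

The resulting Cypher string could then be pasted into the neo4j browser or sent through the official Python driver for visualization.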
previous section indicated, this is particularly useful if the model is given access to search engines on the internet.
|
2306.06283#51
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 51 |
Shreyas Sundara Raman, Vanya Cohen, Eric Rosen, Ifrah Idrees, David Paulius, and Stefanie Tellex. Planning with large language models via corrective re-prompting. CoRR, abs/2211.09935, 2022. doi: 10.48550/arXiv.2211.09935. URL https://doi.org/10.48550/arXiv.2211.09935.
Nils Reimers and Iryna Gurevych. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Kentaro Inui, Jing Jiang, Vincent Ng, and Xiaojun Wan, editors, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 3980–3990. Association for Computational Linguistics, 2019. doi: 10.18653/v1/D19-1410. URL https://doi.org/10.18653/v1/D19-1410.
|
2306.07929#51
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from past episodes even for different task goals, which surpasses
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |
2306.05685
| 52 |
[48] Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. WizardLM: Empowering large language models to follow complex instructions. arXiv preprint arXiv:2304.12244, 2023.
[49] Zongheng Yang, Zhanghao Wu, Michael Luo, Wei-Lin Chiang, Romil Bhardwaj, Woosuk Kwon, Siyuan Zhuang, Frank Sifei Luan, Gautam Mittal, Scott Shenker, and Ion Stoica. SkyPilot: An intercloud broker for sky computing. In 20th USENIX Symposium on Networked Systems Design and Implementation (NSDI 23), pages 437–455, Boston, MA, April 2023. USENIX Association.
[50] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. HellaSwag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830, 2019.
|
2306.05685#52
|
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
|
Evaluating large language model (LLM) based chat assistants is challenging
due to their broad capabilities and the inadequacy of existing benchmarks in
measuring human preferences. To address this, we explore using strong LLMs as
judges to evaluate these models on more open-ended questions. We examine the
usage and limitations of LLM-as-a-judge, including position, verbosity, and
self-enhancement biases, as well as limited reasoning ability, and propose
solutions to mitigate some of them. We then verify the agreement between LLM
judges and human preferences by introducing two benchmarks: MT-bench, a
multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our
results reveal that strong LLM judges like GPT-4 can match both controlled and
crowdsourced human preferences well, achieving over 80% agreement, the same
level of agreement between humans. Hence, LLM-as-a-judge is a scalable and
explainable way to approximate human preferences, which are otherwise very
expensive to obtain. Additionally, we show our benchmark and traditional
benchmarks complement each other by evaluating several variants of LLaMA and
Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with
human preferences are publicly available at
https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
|
http://arxiv.org/pdf/2306.05685
|
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
|
cs.CL, cs.AI
|
NeurIPS 2023 Datasets and Benchmarks Track
| null |
cs.CL
|
20230609
|
20231224
|
[
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2305.10403"
},
{
"id": "2304.07327"
},
{
"id": "2201.11903"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.12420"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2306.04751"
},
{
"id": "2211.09110"
},
{
"id": "2301.13688"
},
{
"id": "2004.14602"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2211.05719"
},
{
"id": "2206.04615"
},
{
"id": "2204.05862"
},
{
"id": "2305.01937"
},
{
"id": "2305.14387"
},
{
"id": "2305.17926"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2306.05715
| 52 |
Asking for help is difficult [12, 38, 74, 75, 78], but even so, the low proportion of help requests underlines that nearly all failing submissions are ones where students do not explicitly request help. This raises a question related to research based on students' code submissions and errors therein: the vast majority of prior research has not explicitly collected information on whether students want help, so some earlier findings about student "failure" may in fact be related to students employing the submission system as a feedback mechanism, not necessarily needing help but simply checking whether they are on the right path. If so, prior research such as the reported mismatch between educators' beliefs about students' mistakes and logged data about mistakes [9] might be explained in part by students asking for help from educators only when they really need help, which might differ from how they employ automated systems.
|
2306.05715#52
|
Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests
|
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
http://arxiv.org/pdf/2306.05715
|
Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
|
cs.CY, cs.AI, cs.CL, cs.HC, cs.SE
|
13 pages, 1 figure. To be published in Proceedings of the 2023 ACM
Conference on International Computing Education Research V.1 (ICER '23 V1)
| null |
cs.CY
|
20230609
|
20230609
|
[
{
"id": "2004.09456"
},
{
"id": "2302.07427"
},
{
"id": "2203.02155"
},
{
"id": "2304.02491"
},
{
"id": "2211.04715"
},
{
"id": "2306.02608"
},
{
"id": "2303.08774"
},
{
"id": "2304.03938"
}
] |
2306.05783
| 52 |
Observation 3: More LLM parameters don't guarantee better performance
Numerous studies have posited that an increase in the number of model parameters corresponds to an enhancement in a model's performance. This notion holds true when comparing LLMs that exhibit an order-of-magnitude difference in their parameters. For instance, Bloomz-mt with 146 billion parameters significantly outperforms Bloomz-560m with 560 million parameters. However, this argument does not consistently hold. For instance, Bloomz-7b1 surpasses Bloomz-p3 in the majority of domain tasks, and Pythia-1.4b outperforms other Pythia models with larger parameter counts across most benchmarks. A possible explanation for this phenomenon could be that LLMs with different parameter quantities are optimally suited to different amounts of pre-training and fine-tuning data (Hoffmann et al., 2022).
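A rough way to see why this can happen is the compute-optimal scaling analysis of Hoffmann et al. (2022), which suggests on the order of 20 training tokens per parameter; the sketch below applies that rule of thumb to illustrative model sizes (the sizes and the constant are assumptions for illustration, not measurements from this benchmark).

```python
# Illustrative sketch of the Chinchilla rule of thumb from Hoffmann et al.
# (2022): a compute-optimal model needs roughly ~20 training tokens per
# parameter, so under a fixed token budget a smaller model may be trained
# closer to optimality than a larger one. Model sizes here are examples only.
TOKENS_PER_PARAM = 20

def compute_optimal_tokens(n_params: float) -> float:
    """Approximate token count needed to train a model compute-optimally."""
    return TOKENS_PER_PARAM * n_params

for n_params in (560e6, 1.4e9, 7.1e9):
    print(f"{n_params / 1e9:.2f}B params -> ~{compute_optimal_tokens(n_params) / 1e9:.0f}B tokens")
```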
|
2306.05783#52
|
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation
|
New Natural Language Processing (NLP) benchmarks are urgently needed to align
with the rapid development of large language models (LLMs). We present Xiezhi,
the most comprehensive evaluation suite designed to assess holistic domain
knowledge. Xiezhi comprises multiple-choice questions across 516 diverse
disciplines ranging from 13 different subjects with 249,587 questions and
accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, both with 15k
questions. We conduct an evaluation of 47 cutting-edge LLMs on Xiezhi. Results
indicate that LLMs exceed average performance of humans in science,
engineering, agronomy, medicine, and art, but fall short in economics,
jurisprudence, pedagogy, literature, history, and management. We anticipate
Xiezhi will help analyze important strengths and shortcomings of LLMs, and the
benchmark is released at https://github.com/MikeGu721/XiezhiBenchmark.
|
http://arxiv.org/pdf/2306.05783
|
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
|
cs.CL
|
Under review of NeurIPS 2023
| null |
cs.CL
|
20230609
|
20230615
|
[
{
"id": "2301.13126"
},
{
"id": "2302.13971"
},
{
"id": "2303.04048"
},
{
"id": "1905.07830"
},
{
"id": "2304.12986"
},
{
"id": "2304.07854"
},
{
"id": "2211.05100"
},
{
"id": "1909.00277"
},
{
"id": "2305.10263"
},
{
"id": "1909.06058"
},
{
"id": "2206.07682"
},
{
"id": "2304.06364"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2212.08073"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2303.18223"
},
{
"id": "2302.09419"
},
{
"id": "2303.14742"
},
{
"id": "2111.10952"
},
{
"id": "2301.12726"
},
{
"id": "2304.01933"
},
{
"id": "2106.09685"
},
{
"id": "2303.12712"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
},
{
"id": "2105.09938"
}
] |
2306.05817
| 52 |
On the other hand, we could abandon the ID features to achieve unified cross-domain recommendation via a natural language interface. Maintaining a unified model to serve various domains is very promising, especially when we involve large language models [Cui et al., 2022; Hou et al., 2023a]. In this direction, in order to achieve similar performance to those works that keep ID features, researchers could investigate ways to introduce ID features in an implicit manner, e.g., contrastive learning between representations of LLMs and corresponding ID embeddings, as sketched below.
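As a minimal sketch of what such implicit ID injection could look like (a hypothetical illustration, not a method from the surveyed papers), the snippet below aligns projected LLM text embeddings of items with trainable ID embeddings via a symmetric InfoNCE-style loss; all dimensions, module names, and the temperature are assumptions.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch: contrastively align LLM representations of items
# with their collaborative-filtering ID embeddings. All sizes are assumed.
n_items, llm_dim, id_dim, shared_dim = 256, 768, 64, 128

llm_emb = torch.randn(n_items, llm_dim)          # frozen LLM item encodings
id_emb = torch.nn.Embedding(n_items, id_dim)     # trainable ID embeddings
proj_llm = torch.nn.Linear(llm_dim, shared_dim)  # projection heads into a
proj_id = torch.nn.Linear(id_dim, shared_dim)    # shared contrastive space

def alignment_loss(temperature: float = 0.07) -> torch.Tensor:
    z_text = F.normalize(proj_llm(llm_emb), dim=-1)
    z_id = F.normalize(proj_id(id_emb.weight), dim=-1)
    logits = z_text @ z_id.T / temperature       # pairwise similarities
    labels = torch.arange(n_items)               # the matching item is the positive
    # Symmetric InfoNCE over text->ID and ID->text directions.
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels))

alignment_loss().backward()  # gradients reach ID embeddings and projection heads
```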
4.5 Fairness
Researchers have discovered that bias in the pretraining corpus could mislead LLMs to generate harmful or offensive content, e.g., discriminating against disadvantaged groups. Although there are strategies (e.g., RLHF [Ouyang et al., 2022]) to reduce the harmfulness of LLMs, existing works have already detected the unfairness problem in recommender systems brought by LLMs from both user-side [Hua et al., 2023a; Zhang et al., 2023a] and item-side [Hou et al., 2023b] perspectives.
|
2306.05817#52
|
How Can Recommender Systems Benefit from Large Language Models: A Survey
|
Recommender systems (RS) play important roles to match users' information
needs for Internet applications. In natural language processing (NLP) domains,
large language model (LLM) has shown astonishing emergent abilities (e.g.,
instruction following, reasoning), thus giving rise to the promising research
direction of adapting LLM to RS for performance enhancements and user
experience improvements. In this paper, we conduct a comprehensive survey on
this research direction from an application-oriented view. We first summarize
existing research works from two orthogonal perspectives: where and how to
adapt LLM to RS. For the "WHERE" question, we discuss the roles that LLM could
play in different stages of the recommendation pipeline, i.e., feature
engineering, feature encoder, scoring/ranking function, and pipeline
controller. For the "HOW" question, we investigate the training and inference
strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to
tune LLMs or not, and whether to involve conventional recommendation model
(CRM) for inference. Detailed analysis and general development trajectories are
provided for both questions, respectively. Then, we highlight key challenges in
adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and
ethics. Finally, we summarize the survey and discuss the future prospects. We
also actively maintain a GitHub repository for papers and other related
resources in this rising direction:
https://github.com/CHIANGEL/Awesome-LLM-for-RecSys.
|
http://arxiv.org/pdf/2306.05817
|
Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
|
cs.IR, cs.AI
|
15 pages; 3 figures; summarization table in appendix
| null |
cs.IR
|
20230609
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2304.05263"
},
{
"id": "2305.07001"
},
{
"id": "2305.11700"
},
{
"id": "2305.06566"
},
{
"id": "2305.15756"
},
{
"id": "2105.08318"
},
{
"id": "2304.03879"
},
{
"id": "2303.08559"
},
{
"id": "1703.04247"
},
{
"id": "2206.07682"
},
{
"id": "2305.07961"
},
{
"id": "2305.05973"
},
{
"id": "2305.15498"
},
{
"id": "2306.11114"
},
{
"id": "2305.04518"
},
{
"id": "2305.00447"
},
{
"id": "2305.02182"
},
{
"id": "2305.08845"
},
{
"id": "2305.12090"
},
{
"id": "2212.10403"
},
{
"id": "2304.03022"
},
{
"id": "2305.07609"
},
{
"id": "2209.08060"
},
{
"id": "2209.07562"
},
{
"id": "2304.09542"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "1908.08167"
},
{
"id": "1909.10351"
},
{
"id": "2305.15036"
},
{
"id": "2305.12081"
},
{
"id": "2304.07862"
},
{
"id": "2305.03017"
},
{
"id": "2305.09858"
},
{
"id": "2305.06474"
},
{
"id": "2305.13731"
},
{
"id": "2304.03153"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2306.10702"
},
{
"id": "2306.02250"
},
{
"id": "2303.13835"
},
{
"id": "2305.14302"
},
{
"id": "2302.03735"
},
{
"id": "2306.02841"
},
{
"id": "2305.11206"
},
{
"id": "2203.15876"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
}
] |
2306.05949
| 52 |
What to Evaluate
If deployed covertly, under NDA, or without transparency, generative AI systems used for surveillance or weaponization cannot be tracked or evaluated. Evaluations can at best gauge the extent of deployment of such systems, for example via the number of devices sold or the number of system deployments, as a brute-force measure.
Mitigation and Intervention
For procurement of technical systems, developers can restrict surveillance and weaponization as use cases. Government development of generative AI systems for surveillance and weaponization requires additional protocols. Governments and militaries can make commitments toward ethical and responsible uses of AI [6], and joint commitments from multiple countries [11] can create accountability among military powers. Regulatory approaches can draw boundaries for harmful uses by militaries, but will grapple with tensions over what constitutes national security [266].
# 4.2.3.2 Imposing Norms and Values
|
2306.05949#52
|
Evaluating the Social Impact of Generative AI Systems in Systems and Society
|
Generative AI systems across modalities, ranging from text, image, audio, and
video, have broad social impacts, but there exists no official standard for
means of evaluating those impacts and which impacts should be evaluated. We
move toward a standard approach in evaluating a generative AI system for any
modality, in two overarching categories: what is able to be evaluated in a base
system that has no predetermined application and what is able to be evaluated
in society. We describe specific social impact categories and how to approach
and conduct evaluations in the base technical system, then in people and
society. Our framework for a base system defines seven categories of social
impact: bias, stereotypes, and representational harms; cultural values and
sensitive content; disparate performance; privacy and data protection;
financial costs; environmental costs; and data and content moderation labor
costs. Suggested methods for evaluation apply to all modalities and analyses of
the limitations of existing evaluations serve as a starting point for necessary
investment in future evaluations. We offer five overarching categories for what
is able to be evaluated in society, each with their own subcategories:
trustworthiness and autonomy; inequality, marginalization, and violence;
concentration of authority; labor and creativity; and ecosystem and
environment. Each subcategory includes recommendations for mitigating harm. We
are concurrently crafting an evaluation repository for the AI research
community to contribute existing evaluations along the given categories. This
version will be updated following a CRAFT session at ACM FAccT 2023.
|
http://arxiv.org/pdf/2306.05949
|
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev
|
cs.CY, cs.AI
| null | null |
cs.CY
|
20230609
|
20230612
|
[
{
"id": "2007.04068"
},
{
"id": "2305.09800"
},
{
"id": "1908.09203"
},
{
"id": "2202.05520"
},
{
"id": "2302.10329"
},
{
"id": "2107.03374"
},
{
"id": "2210.06245"
},
{
"id": "2211.02001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2301.10226"
},
{
"id": "2202.02647"
},
{
"id": "2112.10752"
},
{
"id": "2206.04615"
},
{
"id": "2202.00885"
},
{
"id": "2010.15581"
},
{
"id": "2305.09941"
},
{
"id": "2301.04246"
},
{
"id": "2304.12298"
},
{
"id": "2203.09509"
},
{
"id": "2207.14157"
},
{
"id": "2102.09692"
},
{
"id": "1804.10999"
},
{
"id": "2303.11156"
},
{
"id": "2104.06390"
},
{
"id": "2002.05651"
}
] |
2306.06070
| 52 |
[18] Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.550. URL https://aclanthology.org/2020.emnlp-main.550.
[19] Gilly Leshed, Eben M. Haber, Tara Matthews, and Tessa Lau. CoScripter: Automating & sharing how-to knowledge in the enterprise. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 1719–1728, Florence, Italy, April 2008. ACM. ISBN 978-1-60558-011-1. doi: 10.1145/1357054.1357323.
|
2306.06070#52
|
Mind2Web: Towards a Generalist Agent for the Web
|
We introduce Mind2Web, the first dataset for developing and evaluating
generalist agents for the web that can follow language instructions to complete
complex tasks on any website. Existing datasets for web agents either use
simulated websites or only cover a limited set of websites and tasks, thus not
suitable for generalist web agents. With over 2,000 open-ended tasks collected
from 137 websites spanning 31 domains and crowdsourced action sequences for the
tasks, Mind2Web provides three necessary ingredients for building generalist
web agents: 1) diverse domains, websites, and tasks, 2) use of real-world
websites instead of simulated and simplified ones, and 3) a broad spectrum of
user interaction patterns. Based on Mind2Web, we conduct an initial exploration
of using large language models (LLMs) for building generalist web agents. While
the raw HTML of real-world websites is often too large to be fed to LLMs, we
show that first filtering it with a small LM significantly improves the
effectiveness and efficiency of LLMs. Our solution demonstrates a decent level
of performance, even on websites or entire domains the model has never seen
before, but there is still substantial room to improve towards truly
generalizable agents. We open-source our dataset, model implementation, and
trained models (https://osu-nlp-group.github.io/Mind2Web) to facilitate further
research on building a generalist agent for the web.
|
http://arxiv.org/pdf/2306.06070
|
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
|
cs.CL
|
Website: https://osu-nlp-group.github.io/Mind2Web. Updated with
supplementary material. NeurIPS'23 Spotlight
| null |
cs.CL
|
20230609
|
20231209
|
[] |
2306.06283
| 52 |
previous section indicated, this is particularly useful if the model is given access to search engines on the internet.
a. InsightGraph
To facilitate downstream use of the information, LLMs can also convert unstructured data (the typical form of these literature reports) into structured data. The use of GPT for this application has been reported by Dunn et al. [79] and Walker et al. [80], who used an iterative fine-tuning approach to extract data structured in JSON from papers. In their approach, initial (zero-shot) completions of the LLM are corrected by domain experts. Those corrected completions are then used to fine-tune LLMs, showing improved performance on this task.
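A minimal sketch of the zero-shot starting point of such a pipeline (not the exact setup of refs. [79, 80]) is shown below; the model name, schema, and prompt are illustrative assumptions, and the snippet uses the OpenAI Python client.

```python
import json
from openai import OpenAI  # assumes the openai>=1.0 Python client is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SCHEMA_HINT = (
    "Extract materials data from the user's text. Return only JSON of the "
    'form {"material": str, "property": str, "value": float, "unit": str}.'
)

def extract_record(paragraph: str) -> dict:
    """Zero-shot extraction of one structured record from paper text."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        messages=[
            {"role": "system", "content": SCHEMA_HINT},
            {"role": "user", "content": paragraph},
        ],
    )
    # A production pipeline would validate/repair the JSON before loading it.
    return json.loads(response.choices[0].message.content)
```

In the iterative scheme described above, expert-corrected outputs of such calls would then become the fine-tuning examples.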
However, for certain applications, one can construct powerful prototypes using only
|
2306.06283#52
|
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
|
Large-language models (LLMs) such as GPT-4 caught the interest of many
scientists. Recent studies suggested that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
|
http://arxiv.org/pdf/2306.06283
|
Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L. Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K. Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N. Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren, Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
|
cond-mat.mtrl-sci, cs.LG, physics.chem-ph
| null | null |
cond-mat.mtrl-sci
|
20230609
|
20230714
|
[
{
"id": "2209.08203"
},
{
"id": "2212.04450"
}
] |
2306.07929
| 52 |
Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. CoRR, abs/2302.04761, 2023. doi: 10.48550/arXiv.2302.04761. URL https://doi.org/10.48550/arXiv.2302.04761.
Dale Schuurmans. Memory augmented large language models are computationally universal. CoRR, abs/2301.04589, 2023. doi: 10.48550/arXiv.2301.04589. URL https://doi.org/10.48550/arXiv.2301.04589.
Colleen M Seifert, Andrea L Patalano, Kristian J Hammond, and Timothy M Converse. Experience and expertise: The role of memory in planning for opportunities. 1997.
|
2306.07929#52
|
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
|
Inspired by the insights in cognitive science with respect to human memory
and reasoning mechanism, a novel evolvable LLM-based (Large Language Model)
agent framework is proposed as REMEMBERER. By equipping the LLM with a
long-term experience memory, REMEMBERER is capable of exploiting the
experiences from past episodes even for different task goals, which surpasses
an LLM-based agent with fixed exemplars or equipped with a transient working
memory. We further introduce Reinforcement Learning with Experience Memory
(RLEM) to update the memory. Thus, the whole system can learn from the
experiences of both success and failure, and evolve its capability without
fine-tuning the parameters of the LLM. In this way, the proposed REMEMBERER
constitutes a semi-parametric RL agent. Extensive experiments are conducted on
two RL task sets to evaluate the proposed framework. The average results with
different initialization and training sets exceed the prior SOTA by 4% and 2%
for the success rate on two task sets and demonstrate the superiority and
robustness of REMEMBERER.
|
http://arxiv.org/pdf/2306.07929
|
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, Kai Yu
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230609
|
20231030
|
[
{
"id": "2201.06009"
}
] |