doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2308.07124 | 38 | Training models to follow instructions has led to new capabilities in text (Ouyang et al., 2022; Wang et al., 2022b; Chung et al., 2022) and visual modalities (Xu et al., 2023b; OpenAI, 2023). Prior work has shown its benefits for traditional language tasks (Sanh et al., 2022; Wei et al., 2022; Longpre et al., 2023a; Iyer et al., 2022), multilingual tasks (Muennighoff et al., 2022b; Yong et al., 2022), and helpfulness in dialog (Köpf et al., 2023; Bai et al., 2022; Ganguli et al., 2022). For coding applications, PanGu-Coder2 (Shen et al., 2023), WizardCoder (Luo et al., 2023) and InstructCodeT5+ (Wang et al., 2023c) are recent models trained with coding instructions. However, they all use the CodeAlpaca dataset (Chaudhary, 2023), which is synthetically generated from OpenAI models. Using data from powerful closed-source models provides a strong advantage, but limits the model use and has other limitations highlighted in §1. CoEditor (Wei et | 2308.07124#38 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 38 | human programmers. Liu et al. (2023a) utilize a sandbox environment to curate reliable datasets in better alignment with human preference and train a socially-aligned LLM. Liang et al. (2023) and Du et al. (2023) also make use of the multi-agent debate framework in other scenarios such as translation and arithmetic problems resulting in better results. Wang et al. (2023d) propose an alternative method called self-collaboration to enable the communication of agents by utilizing a single LLM prompted by multi-persona descriptions. Mandi et al. (2023) propose a novel framework | 2308.07201#38 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
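The ChatEval excerpt above describes multi-agent debate: several LLM agents, each with its own persona, take turns commenting on candidate responses before a verdict is reached. The sketch below is a minimal illustration of such a loop, not the authors' ChatEval implementation; `query_llm` and the prompt wording are hypothetical placeholders for whatever chat-completion call and role descriptions are actually used.

```python
from typing import Callable, List

def debate_evaluate(
    question: str,
    response_a: str,
    response_b: str,
    personas: List[str],
    query_llm: Callable[[str], str],  # hypothetical wrapper around any chat-completion API
    rounds: int = 2,
) -> List[str]:
    """Minimal multi-agent debate loop: each persona sees the question, both candidate
    responses, and the discussion so far, then appends its own assessment."""
    history: List[str] = []
    for _ in range(rounds):
        for persona in personas:
            prompt = (
                f"You are {persona}, one of several referees judging two answers.\n"
                f"Question: {question}\n"
                f"Assistant 1: {response_a}\n"
                f"Assistant 2: {response_b}\n"
                "Discussion so far:\n" + "\n".join(history) + "\n"
                "Give your assessment of which answer is better and why."
            )
            history.append(f"{persona}: {query_llm(prompt)}")
    return history
```

A final verdict could then be aggregated from the last round, for example by majority vote or by a dedicated summarizer agent.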
2308.06921 | 39 | The above issues apply to most LLM-based tools, and the likelihood of an LLM's response being incorrect, harmful, off-topic, or otherwise "off the rails" increases with additional rounds of user input and model response. Therefore, by design, every query to CodeHelp is a one-shot request, independent of any others and with no possibility for follow-up or dialogue. This limits the usefulness of the system, as asking a follow-up question or requesting additional information in the context of an initial response could be very helpful, but the one-shot limitation is imposed to mitigate many of the risks of using LLMs. Users can submit revised queries with additional information or questions informed by an earlier response if they choose to.
5 EXPERIENCES AND RESULTS We used CodeHelp in two sections of an undergraduate introductory-level computer- and data-science course taught by an author of this paper in the Spring semester of 2023. Fifty-two students completed the course. Of those students, our analyses include data from 49 who used CodeHelp at least once during the semester, and data from 45 who completed a survey about using CodeHelp at the end of the semester. The course is designed to serve a broad audience and attracts students from across the institution who take the course to meet general education requirements or to meet requirements for data-analytic or data-science related credentials. | 2308.06921#39 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
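The CodeHelp excerpt above stresses that every query is a one-shot request with no dialogue state, which bounds how far a response can drift over multiple turns. Below is a minimal sketch of that stateless pattern under stated assumptions: `complete` is a hypothetical wrapper around an LLM API, and the prompt wording is invented for illustration rather than taken from CodeHelp's actual pipeline.

```python
from typing import Callable

def one_shot_help(
    language: str,
    code: str,
    error: str,
    issue: str,
    complete: Callable[[str], str],  # hypothetical wrapper around an LLM completion API
) -> str:
    """Stateless, single-turn request: the prompt is rebuilt from the student's
    inputs on every call, and no conversation history is retained."""
    prompt = (
        "You are a teaching assistant. Help the student understand the problem "
        "without writing the solution code for them.\n"
        f"Language: {language}\n"
        f"Code:\n{code}\n"
        f"Error:\n{error}\n"
        f"Question: {issue}\n"
    )
    return complete(prompt)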
2308.07107 | 39 | BEQUE [113] leverages LLMs for rewriting queries in e-commerce product searches. It designs three Supervised Fine-Tuning (SFT) tasks: quality classification of e-commerce query rewrites, product title prediction, and CoT query rewriting. To our knowledge, it is the first model to directly fine-tune LLMs, including ChatGLM [68, 114], ChatGLM2.0 [68, 114], Baichuan [115], and Qwen [116], specifically for the query rewriting task. After the SFT stage, BEQUE uses an offline system to gather feedback on the rewrites and further aligns the rewriters with e-commerce search objectives through an object alignment stage. Online A/B testing demonstrates the effectiveness of the method.
# 3.3.3 Knowledge Distillation
Although LLM-based methods have demonstrated significant improvements in query rewriting tasks, their practical implementation for online deployment is hindered by the substantial latency caused by the computational requirements of LLMs. To address this challenge, knowledge distillation has emerged as a prominent technique in the
TABLE 2. Summary of existing LLM-enhanced query rewriting methods. "Docs" and "KD" stand for document corpus and knowledge distillation, respectively. | 2308.07107#39 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
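The excerpt above covers LLM-based query rewriting (for example, BEQUE's CoT rewriting task) and motivates distillation for deployment. As a rough illustration of the prompting-based flavor of query rewriting, here is a sketch assuming a hypothetical `generate` function and invented prompt text; it is not BEQUE's fine-tuned setup.

```python
from typing import Callable, List

def rewrite_query(
    query: str,
    generate: Callable[[str], str],  # hypothetical LLM text-generation call
    n_variants: int = 3,
) -> List[str]:
    """Prompting-style query rewriting: ask the model to reason about the intent,
    emit one reformulation per line, and keep the original query alongside them."""
    prompt = (
        "Rewrite the search query below so that it retrieves more relevant documents. "
        "Briefly state the likely intent first, then output one rewritten query per line.\n"
        f"Query: {query}\n"
        f"Rewrites ({n_variants} lines):"
    )
    lines = [line.strip() for line in generate(prompt).splitlines() if line.strip()]
    return [query] + lines[-n_variants:]
```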
2308.07124 | 39 | models. Using data from powerful closed-source models provides a strong advantage, but limits the model use and has other limitations highlighted in §1. CoEditor (Wei et al., 2023) proposes an "auto-editing" task, trained on 1650 Python commit history repositories. Our work expands this proposal to more general coding tasks (using instructions), more languages, and orders of magnitude more commit data. | 2308.07124#39 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.06921 | 40 | The course provides twelve weeks of instruction in Python foundations and three weeks of instruction in Pandas2 and Seaborn3. The format of the course is "flipped," with students responsible for reading course materials prior to class, while class time is spent working through assignments on lab computers. The instructor and a TA assist students and provide instruction/support as needed. CodeHelp was introduced in the fourth week of the semester with a quick demonstration in class. During class, students were encouraged to use CodeHelp for assistance first before asking the instructor or TA for help, but they were otherwise free to make their own choices about when and how to use it. | 2308.06921#40 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 40 | Table 2 rows. Ad-hoc target: HyDE [97], Jagerman et al. [102], Query2Doc [86], Ma et al. [100], PromptCase [103], GRF+PRF [99], GRM [96], InteR [98], LameR [108], CAR [107], QUILL [97]. Conversational target: LLMCS [95], CONVERSER [105], Ye et al. [104]. Generation is prompting-based for most entries; Ma et al. [100] uses finetuning. | 2308.07107#40 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 40 | 5.3 CODE BENCHMARKS
Many code synthesis benchmarks have been proposed (Wang et al., 2022d;c; Yu et al., 2023; Lai et al., 2023; Du et al., 2023). HumanEval (Chen et al., 2021; Liu et al., 2023b) has emerged as the standard for this task. Prior work has extended HumanEval to new programming languages via automatic translation mechanisms (Athiwaratkun et al., 2022; Cassano et al., 2023; Orlanski et al., 2023). These approaches are error-prone and only translate tests, not the actual solutions, which are needed for tasks like code explanation. Thus, we rely only on humans to create all parts of HUMANEVALPACK including test cases, correct solutions, buggy solutions, and other metadata across 6 languages. | 2308.07124#40 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 40 |
Both Assistant 1 and Assistant 2 provided comprehensive and detailed answers. They both suggested similar strategies for dealing with stress, such as exercise, relaxation techniques, getting enough sleep, eating a healthy diet, connecting with others, time management, and seeking professional help. However, Assistant 2 went a step further by mentioning mindfulness, social support, and relaxation techniques like progressive muscle relaxation, aromatherapy, and massage. Therefore, I would lean towards Assistant 2 for providing a slightly more detailed response.
I agree with Alice's assessments. However, I would like to point out that Assistant 1's answer was more concise and to the point that ... . On the other hand, Assistant 2's answer was more detailed and ... . In your opinion, does the additional detail provided by Assistant 2 make its response more helpful, or could the conciseness of Assistant 1's response be seen as more beneficial? | 2308.07201#40 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
2308.06921 | 41 | 5.1 Student Use Even with no firm requirement to do so, students used CodeHelp consistently throughout the semester. Figure 9 shows that roughly half of the class used CodeHelp each week, and we saw that roughly 70% of the students used CodeHelp in four or more different weeks. We also observed a wide range of intensity of use between students. Roughly 80% of the class submitted 10 or more queries (indicating more than initial trial usage), roughly 50% submitted 30 or more, and seven of the 49 submitted over 100 queries, including one student with more than 600 queries. The heatmap in Figure 10 shows the usage concentrated during two separate class sessions (1 and 2pm on Mon/Wed/Fri) and before assignments were due on Saturday. Otherwise, there was some use across nearly all hours, including many when no instructor or TA would have been available. Overall,
2Pandas. Available at: https://pandas.pydata.org/ [accessed 2023-06-20] 3Seaborn. Available at: https://seaborn.pydata.org/ [accessed 2023-06-20]
[Figure 9: Percentage of Students (y axis) by Week (x axis)] | 2308.06921#41 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 41 | industry. In the QUILL [97] framework, a two-stage distillation method is proposed. This approach entails utilizing a retrieval-augmented LLM as the professor model, a vanilla LLM as the teacher model, and a lightweight BERT model as the student model. The professor model is trained on two extensive datasets, namely Orcas-I [117] and EComm [97], which are specifically curated for query intent understanding. Subsequently, a two-stage distillation process is employed to transfer knowledge from the professor model to the teacher model, followed by knowledge transfer from the teacher model to the student model. Empirical findings demonstrate that this knowledge distillation methodology surpasses the simple scaling up of model size from base to XXL, resulting in even more substantial improvements. In a recently proposed "rewrite-retrieve-read" framework [100], an LLM is first used to rewrite the queries by prompting, followed by a retrieval-augmented reading process. To improve framework effectiveness, a trainable rewriter, implemented as a small language model, is incorporated to further adapt search queries to align with both the frozen retriever and the LLM | 2308.07107#41 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
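The QUILL excerpt above transfers knowledge from a retrieval-augmented professor model down to a lightweight student. The following is a generic single-stage knowledge-distillation objective in PyTorch, shown only to illustrate the idea; it is not QUILL's actual two-stage professor-teacher-student recipe, whose exact losses are not specified in the excerpt.

```python
import torch
import torch.nn.functional as F

def distillation_loss(
    student_logits: torch.Tensor,  # (batch, num_classes)
    teacher_logits: torch.Tensor,  # (batch, num_classes)
    labels: torch.Tensor,          # (batch,) gold class indices
    temperature: float = 2.0,
    alpha: float = 0.5,
) -> torch.Tensor:
    """Blend of soft-target KL against the teacher (at a temperature) and the
    usual cross-entropy against gold labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```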
2308.07124 | 41 | Code repair is commonly evaluated on Quixbugs (Lin et al., 2017; Prenner & Robbes, 2021; Ye et al., 2021; Xia & Zhang, 2023; Jiang et al., 2023; Sobania et al., 2023) or Python bugs (He et al., 2022; Bradley et al., 2023). The latter does not support code execution, which limits its utility. While Quixbugs supports execution with unit tests, it only contains 40 samples in Python and Java. Further, the problems in Quixbugs are generic functions, such as bucket sort. This makes them easy to solve and hard to decontaminate training data for. Our benchmark, HUMANEVALFIX, contains 164 buggy functions for six languages with solutions and unit tests. Further, our coding problems, derived from HumanEval, are very specific, such as keeping track of a bank account balance (see Figure 14).
Prior work on evaluating code explanations (Lu et al., 2021; Cui et al., 2022) has relied on metrics such as METEOR (Banerjee & Lavie, 2005) or BLEU (Papineni et al., 2002). By chaining code explanation with code synthesis, we can evaluate this task using the execution-based pass@k metric overcoming the major limitations of BLEU and other heuristics-based metrics (Reiter, 2018). | 2308.07124#41 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
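The excerpt above scores code tasks with the execution-based pass@k metric. For reference, the widely used unbiased estimator from Chen et al. (2021) can be computed as follows; this is a sketch of the standard formulation rather than the paper's exact evaluation harness.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: given n generated samples of which c pass the
    unit tests, estimate the probability that at least one of k samples passes."""
    if n - c < k:
        return 1.0
    return float(1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))
```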
2308.07201 | 41 | In terms of consistency with the question, both responses are equally consistent as they both provided effective ways to deal with stress. Assistant 1's response was indeed more concise, while Assistant 2 provided more detailed techniques. The effectiveness of these responses could depend on the individual's preference for brevity or detail. Therefore, I believe both responses are equally valuable.
I appreciate the points made by both of you. While Assistant 1's response was indeed more concise, I believe that the additional detail provided by Assistant 2 could be more beneficial for someone looking for a variety of strategies to manage stress. The inclusion of mindfulness practices and ... . Therefore, I still lean towards Assistant 2's response as being slightly more helpful.
However, I would like to emphasize that the effectiveness of a response can also depend on the context and the individual's needs. For someone who is overwhelmed and looking for quick, straightforward advice, the conciseness of Assistant 1's response might be more beneficial. On the other hand, for someone who is looking for a variety of strategies to try, the detailed response of Assistant 2 might be more helpful. Therefore, I believe both responses have their merits and it's difficult to definitively say one is better than the other. What do you think about this perspective? | 2308.07201#41 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
2308.06921 | 42 |
Figure 9: Percentage of the class (y axis) using CodeHelp each week (x axis) across the semester [7 = spring break]. Note that the y axis scale only extends to 70. The figure shows consistent use across the whole semester.
the continuing, consistent usage strongly suggests that the students generally found the tool beneficial.
5.2 Student Survey At the end of the course we distributed an online survey to understand students' perceptions of CodeHelp. Taking the survey was optional, but students did receive extra-credit for completing it. A total of 45 students (87 percent of the class) completed the survey. Table 1 shows the results for a selection of questions about students' perceptions of the tool and its value to them. Overall, students found it valuable, and a large majority (95%) were interested in using it in future CS courses.
For additional detail, the survey included the following open-response questions, which were designed to elicit both positive and negative responses:
⢠Q1: What did you find most beneficial about using Code- Help?
⢠Q2: Do you think there is anything negative about students using CodeHelp? | 2308.06921#42 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 42 | a trainable rewriter, implemented as a small language model, is incorporated to further adapt search queries to align with both the frozen retriever and the LLM reader's requirements. The rewriter's refinement involves a two-step training process. Initially, supervised warm-up training is conducted using pseudo data. Then, the retrieve-then-read pipeline is described as a reinforcement learning scenario, with the rewriter's training acting as a policy model to maximize pipeline performance rewards. | 2308.07107#42 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 42 | Large-scale benchmarking has proven useful in many areas of natural language processing (Wang et al., 2019; Kiela et al., 2021; Srivastava et al., 2022; Muennighoff et al., 2022a). By producing 18 scores (6 languages across 3 tasks) for 9 models, we take a step towards large-scale benchmarking of code models. However, we lack many models capable of generating code (Black et al., 2021; Fried et al., 2022; Black et al., 2022; Wang & Komatsuzaki, 2021; Biderman et al., 2023b). Future work may consider more models or extending HUMANEVALPACK to new languages or tasks, such as code efficiency (Madaan et al., 2023a; Yetistiren et al., 2022) or code classification (Khan et al., 2023).
# 6 CONCLUSION | 2308.07124#42 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.06921 | 43 | • Q1: What did you find most beneficial about using CodeHelp?
• Q2: Do you think there is anything negative about students using CodeHelp?
In general, responses were relatively short but tended to be longer for the first question on beneficial aspects (word count; M = 16.2, SD = 10.3) compared to the second question on negative aspects (M = 12.0, SD = 13.0). To understand the patterns present in the responses, we conducted a thematic analysis in which interesting features of each response were extracted as codes and then collated into higher-level themes [2]. We identified five prominent themes in the responses to Q1, highlighted in bold in the text that follows. The most prominent theme by a clear margin, appearing in 19 of the student responses, was around "availability", specifically that students valued the convenience of being able to ask for assistance outside of the classroom when TAs and the professor were busy or unavailable. Responses representative of this theme include: "it was a tool that was always there when I needed it, I didn't have to go to office or TA hours or email" and "the ability to get help without talking to professor or TA". | 2308.06921#43 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 43 | # 3.4 Limitations
While LLMs offer promising capabilities for query rewriting, they also face several challenges. Here, we outline two main limitations of LLM-based query rewriters.
# 3.4.1 Concept Drifts
When used for query rewriting, LLMs may introduce unrelated information, known as concept drift, due to their extensive knowledge base and their tendency to produce detailed and redundant content. While this can enrich the query, it also risks generating irrelevant or off-target results.
This phenomenon has been reported in several studies [107, 113, 118]. These studies highlight the need for a balanced approach in LLM-based query rewriting, ensuring
that the essence and focus of the original query are maintained while leveraging the LLM's ability to enhance and clarify the query. This balance is crucial for effective search and IR applications.
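To make this concrete, the following is a minimal, hypothetical sketch of one way to keep LLM-based rewriting anchored to the original query; the prompt wording, the `llm_complete` callable, and the lexical-overlap guard are illustrative assumptions, not a method prescribed by the surveyed papers.

```python
# Illustrative sketch only: constrain LLM query rewriting and guard against
# concept drift with a simple lexical-overlap check. `llm_complete` stands in
# for any text-completion client and is an assumption of this example.
from typing import Callable

REWRITE_PROMPT = (
    "Rewrite the search query below so it is clearer and more specific. "
    "Preserve the original intent and do not introduce new topics.\n"
    "Query: {query}\nRewritten query:"
)

def rewrite_query(query: str,
                  llm_complete: Callable[[str], str],
                  min_overlap: float = 0.5) -> str:
    rewritten = llm_complete(REWRITE_PROMPT.format(query=query)).strip()
    # Guard: if the rewrite shares too few terms with the original query,
    # assume concept drift and fall back to the user's own wording.
    orig_terms = set(query.lower().split())
    new_terms = set(rewritten.lower().split())
    overlap = len(orig_terms & new_terms) / max(len(orig_terms), 1)
    return rewritten if overlap >= min_overlap else query
```

A stricter variant could compare embeddings of the original and rewritten queries instead of surface terms.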
# 3.4.2 Correlation between Retrieval Performance and Expansion Effects | 2308.07107#43 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 43 | # 6 CONCLUSION
This work studies training and evaluation of Code LLMs that follow instructions. We introduce COMMITPACK, a 4TB dataset of Git commits covering 350 programming languages. We filter this large-scale dataset to create COMMITPACKFT, 2GB of high-quality code with commit messages that assimilate instructions. To enable a comprehensive evaluation of instruction code models, we construct HUMANEVALPACK, a human-written benchmark covering 3 different tasks for 6 programming languages. We ablate several instruction datasets and find that COMMITPACKFT combined with natural language data leads to the best performance. While our models, OCTOCODER and OCTOGEEX, are the best permissively licensed Code LLMs available, they are outperformed by closed-source models such as GPT-4. In addition to improving the instruction tuning paradigm, future work should consider training more capable base models.
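To illustrate the idea of treating commits as instruction data, here is a minimal sketch of how a single Git commit could be turned into an instruction-tuning sample, with the commit message as the instruction and the before/after code as input and target. The field names and the toy example are assumptions for illustration, not the exact COMMITPACK/COMMITPACKFT pipeline.

```python
# Illustrative sketch (not the exact CommitPack/CommitPackFT pipeline):
# turn one Git commit into an instruction-tuning sample, using the commit
# message as the instruction and the before/after file contents as the
# input/target. Field names are assumptions for illustration.

def commit_to_sample(commit_message: str, old_code: str, new_code: str) -> dict:
    return {
        "instruction": commit_message.strip(),  # human-written change request
        "input": old_code,                      # code before the commit
        "output": new_code,                     # code after the commit
    }

sample = commit_to_sample(
    "Handle empty input list in average()",
    "def average(xs):\n    return sum(xs) / len(xs)\n",
    "def average(xs):\n    if not xs:\n        return 0.0\n    return sum(xs) / len(xs)\n",
)
print(sample["instruction"])
```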
# ACKNOWLEDGEMENTS | 2308.07124#43 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.06921 | 44 | Figure 10: Queries by hour (y axis) and day (x axis) over the whole term. The time span between 4 and 7 AM is not shown due to no activity. The high activity blocks on Mon, Wed, and Fri correspond to the times students were in the classroom. The higher activity on Saturday evening is prior to a recurring deadline for weekly assignments.
Many students (11) explicitly appreciated that CodeHelp could aid them in "fixing errors", which was the next most common theme. This included getting help to understand error messages and producing explanations of errors. The following are two examples of typical quotes supporting this theme: "it was helpful in understanding some of the error message we hadn't learned about in class" and "it really helps with trouble shooting when it comes to semantic errors". | 2308.06921#44 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 44 | # 3.4.2 Correlation between Retrieval Performance and Expansion Effects
Recently, a comprehensive study [119] conducted experiments on various expansion techniques and downstream ranking models, revealing a notable negative correlation between retriever performance and the benefits of expansion. Specifically, while expansion tends to enhance the scores of weaker models, it generally hurts stronger ones. This observation suggests a strategic approach: employ expansion with weaker models or in scenarios where the target dataset differs substantially in format from the training corpus. In other cases, it is advisable to avoid expansion to maintain the clarity of the relevance signal.
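The strategy above amounts to a simple decision rule, sketched below; the threshold and the way a "weaker" retriever is detected are assumptions for illustration, not values from the cited study [119].

```python
# Illustrative decision rule for when to apply LLM-based query expansion,
# following the heuristic described above. The threshold is an assumption.

def should_expand(retriever_dev_score: float,
                  target_out_of_domain: bool,
                  weak_threshold: float = 0.35) -> bool:
    """Expand only for weaker retrievers or clear train/target domain mismatch."""
    return retriever_dev_score < weak_threshold or target_out_of_domain

# Example: a strong, in-domain retriever keeps the original query.
print(should_expand(0.52, target_out_of_domain=False))  # False
```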
# 4 RETRIEVER
In an IR system, the retriever serves as the first-pass document filter to collect broadly relevant documents for user queries. Given the enormous number of documents in an IR system, the retriever's efficiency in locating relevant documents is essential for maintaining search engine performance. Meanwhile, high recall is also important for the retriever, as the retrieved documents are then fed into the ranker to generate final results for users, which determines the ranking quality of search engines. | 2308.07107#44 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 44 | # ACKNOWLEDGEMENTS
We thank Hugging Face for providing compute instances. We are extremely grateful to Rodrigo Garcia for the Rust translations, Dimitry Ageev and Calum Bird for help with GPT-4 evaluation, Loubna Ben Allal for help on evaluation, Arjun Guha for insightful discussions on chaining evaluation tasks to avoid evaluating with BLEU, Lewis Tunstall for help on the OASST data, Victor Sanh and Nadav Timor for discussions, Jiaxi Yang for logo editing and domain classification prompting design, Ghosal et al. (2023); Zeng et al. (2023) for design inspiration, Harm de Vries for feedback and all members of BigCode for general support. Finally, we thank every programmer who takes the time to write informative commit messages.
# REFERENCES
Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. A transformer-based approach for source code summarization. arXiv preprint arXiv:2005.00653, 2020. | 2308.07124#44 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 44 | After discussing with my co-workers, I believe both Assistants performed equally well in providing comprehensive and accurate responses.
(a) Acc. and Kap. vs Role Numbers (b) Acc. and Kap. vs Discussion Turns
Figure 3: (a) Performance with Increased Different Roles on FairEval Dataset. We adopt one-by-one communication strategy and 2 discussion turns. Additional role descriptions are shown in Appendix A. (b) Performance with Increased Discussion Turns on FairEval Dataset. We adopt one-by-one communication strategy and 2 different roles.
designed for the collaboration of multiple robots, utilizing multiple LLMs to enhance coordination and strategic planning among the robots. Concurrent with our work, Li et al. (2023b) propose Peer Rank and Discussion (PRD), which is similar to our approach. However, they probe different dimensions of evaluation by using different models as agents and do not explore alternative communication strategies.
# 6 CONCLUSION | 2308.07201#44 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
2308.06921 | 45 | One interesting theme that emerged (10 students), distinct from the "availability" of CodeHelp, was that it supported "independence" by enabling students to make progress without the need to seek external help when they were stuck. This included providing initial support to students who had difficulty starting work, nudging students in the right direction when they were close to a solution, and helping students who were anxious to ask for help without the fear of embarrassment. Comments that supported this theme included "It was nice to have a source to ask when I was unsure how to begin coding", "it helped lead me in the right direction if I almost had the right code" and "I felt like I could ask it any question, even dumb ones, which I often did to avoid embarrassing myself in front of the Professor or TA". | 2308.06921#45 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 45 | In recent years, retrieval models have shifted from relying on statistical algorithms [29] to neural models [3, 31]. The latter approaches exhibit superior semantic capability and excel at understanding complicated user intent. The success of neural retrievers relies on two key factors: data and model. From the data perspective, a large amount of high-quality training data is essential. This enables retrievers to acquire comprehensive knowledge and accurate matching patterns. Furthermore, the intrinsic quality of search data, i.e., issued queries and document corpus, significantly influences retrieval performance. From the model perspective, a strongly representational neural architecture allows retrievers to effectively store and apply knowledge obtained from the training data. | 2308.07107#45 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 45 | Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, et al. Santacoder: don't reach for the stars! arXiv preprint arXiv:2301.03988, 2023.
Ben Athiwaratkun, Sanjay Krishna Gouda, Zijian Wang, Xiaopeng Li, Yuchen Tian, Ming Tan, Wasi Uddin Ahmad, Shiqi Wang, Qing Sun, Mingyue Shang, et al. Multi-lingual evaluation of code generation models. arXiv preprint arXiv:2210.14868, 2022.
Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021. | 2308.07124#45 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 45 | # 6 CONCLUSION
In this paper, we present evidence that ChatEval contributes to improving the evaluation performance concerning text quality, aligning more closely with human preferences. We emphasize the necessity of the diverse role specification and propose distinct communication strategies as integral components within ChatEval. Our qualitative analysis of the discussion process conveys insightful intuitions about how a text is evaluated by ChatEval and substantiates our approach's ability to support comprehensive evaluations akin to human judgment, thereby demonstrating the reliability and efficacy of our framework.
# REFERENCES
Yushi Bai, Jiahao Ying, Yixin Cao, Xin Lv, Yuze He, Xiaozhi Wang, Jifan Yu, Kaisheng Zeng, Yijia Xiao, Haozhe Lyu, et al. Benchmarking foundation models with language-model-as-an-examiner. arXiv preprint arXiv:2306.04181, 2023.
Satanjeev Banerjee and Alon Lavie. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, pp. 65–72, 2005. | 2308.07201#45 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
2308.06921 | 46 | The remaining themes, which were less common, focused on the "speed" (6) with which students could make progress or obtain feedback and the use of CodeHelp to assist with "learning/understanding" (7). Typical comments aligning with these themes included "Helped me work faster" and "it helped understand the code I was writing sometimes". Students also appreciated that CodeHelp would provide guidance rather than directly revealing the solution, as exemplified by the comment "It gave us help on the answer not just the answer itself". Overall, the responses to Q1 tell a story that CodeHelp was seen as a useful resource for obtaining rapid assistance and a complementary tool to traditional TA and instructor support.
As to the concerns (Q2), we also identified five prominent themes, again highlighted in bold. Around half of the students (24) stated that they had "no concerns". Some of the students would even suggest the use of the tool should have been more extensive: "We
Table 1: Results for selected questions in the student survey (n = 45 of 52 students). Rows may not sum to 100% due to rounding. | 2308.06921#46 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 46 | Unfortunately, there are some long-term challenges that hinder the advancement of retrieval models. First, user queries are usually short and ambiguous, making it difficult for retrievers to precisely understand the user's search intent. Second, documents typically contain lengthy content and substantial noise, posing challenges in encoding long documents and extracting relevant information for retrieval models. Additionally, the collection of human-annotated relevance labels is time-consuming and costly. It restricts the retrievers' knowledge boundaries and their ability to generalize across different application domains. Moreover, existing model architectures, primarily built on BERT [59], exhibit inherent limitations, thereby constraining the performance potential of retrievers. Recently, LLMs have exhibited extraordinary abilities in language understanding, text generation, and reasoning. This has motivated researchers to use these abilities to tackle the aforementioned challenges
and aid in developing superior retrieval models. Roughly, these studies can be categorized into two groups, i.e., (1) leveraging LLMs to generate search data, and (2) employing LLMs to enhance model architecture.
# 4.1 Leveraging LLMs to Generate Search Data | 2308.07107#46 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 46 | Hannah McLean Babe, Sydney Nguyen, Yangtian Zi, Arjun Guha, Molly Q Feldman, and Carolyn Jane Anderson. Studenteval: A benchmark of student-written prompts for large language models of code. arXiv preprint arXiv:2306.04556, 2023.
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022. URL https://arxiv.org/abs/2204.05862.
Satanjeev Banerjee and Alon Lavie. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, pp. 65–72, 2005.
Antonio Valerio Miceli Barone and Rico Sennrich. A parallel corpus of python functions and documentation strings for automated code documentation and code generation. arXiv preprint arXiv:1707.02275, 2017. | 2308.07124#46 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 46 | Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
Chris Callison-Burch. Fast, cheap, and creative: Evaluating translation quality using amazon's mechanical turk. In Proceedings of the 2009 conference on empirical methods in natural language processing, pp. 286–295, 2009.
Asli Celikyilmaz, Elizabeth Clark, and Jianfeng Gao. Evaluation of text generation: A survey. CoRR, abs/2006.14799, 2020. URL https://arxiv.org/abs/2006.14799.
Cheng-Han Chiang and Hung-yi Lee. Can large language models be an alternative to human evaluations? arXiv preprint arXiv:2305.01937, 2023. | 2308.07201#46 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
2308.06921 | 47 | Table 1: Results for selected questions in the student survey (n = 45 of 52 students). Rows may not sum to 100% due to rounding.
Statement: Strongly Agree / Agree / Disagree / Strongly Disagree
CodeHelp helped me complete my work successfully: 9% / 71% / 18% / 2%
CodeHelp helped me learn the course material: 7% / 56% / 33% / 4%
If I took more Computer Science courses, I would like to be able to use CodeHelp in those classes: 31% / 64% / 4% / 0%
should even use it during quizzes". Others explained why they did not have any concerns: "No, absolutely not, especially considering it never handed me the answer on a silver platter."
The most prominent theme as to the concerns was the perceived "difficulty" in using CodeHelp. Multiple students (14) stated that the tool is difficult to use when the problem is not understood: "sometimes i didnt know exactly what to ask.. but i usually got there eventually" and "I did not like how hard it was to ask something I do not understand.". Several students also reported receiving answers that were difficult to utilize or not helpful: "There were many times that CodeHelp misunderstood my question and gave me advice which confused me even more." and "Sometimes it gives really strange responses that are not related to the problem". | 2308.06921#47 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 47 | # 4.1 Leveraging LLMs to Generate Search Data
In light of the quality and quantity of search data, there are two prevalent perspectives on how to improve retrieval performance via LLMs. The first perspective revolves around search data refinement methods, which concentrate on reformulating input queries to precisely present user intents. The second perspective involves training data augmentation methods, which leverage LLMs' generation ability to enlarge the training data for dense retrieval models, particularly in zero- or few-shot scenarios.
# 4.1.1 Search Data Refinement | 2308.07107#47 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 47 | Mohammad Bavarian, Heewoo Jun, Nikolas A. Tezak, John Schulman, Christine McLeavey, Jerry Tworek, and Mark Chen. Efficient training of language models to fill in the middle. arXiv preprint arXiv:2207.14255, 2022.
Loubna Ben Allal, Niklas Muennighoff, Logesh Kumar Umapathi, Ben Lipkin, and Leandro von Werra. A framework for the evaluation of code generation models. https://github.com/bigcode-project/bigcode-evaluation-harness, 2022.
Stella Biderman, USVSN Sai Prashanth, Lintang Sutawika, Hailey Schoelkopf, Quentin Anthony, Shivanshu Purohit, and Edward Raf. Emergent and predictable memorization in large language models. arXiv preprint arXiv:2304.11158, 2023a. | 2308.07124#47 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 47 | Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. See https://vicuna.lmsys.org (accessed 14 April 2023), 2023.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
Yilun Du, Shuang Li, Antonio Torralba, Joshua B. Tenenbaum, and Igor Mordatch. Improving factuality and reasoning in language models through multiagent debate. arXiv preprint arXiv:2305.14325, 2023.
Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. Gptscore: Evaluate as you desire. arXiv preprint arXiv:2302.04166, 2023.
Mingqi Gao, Jie Ruan, Renliang Sun, Xunjian Yin, Shiping Yang, and Xiaojun Wan. Human-like summarization evaluation with chatgpt. arXiv preprint arXiv:2304.02554, 2023. | 2308.07201#47 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
2308.06921 | 48 | CodeHelp was easy to introduce to the class. As an instructional resource, its utility is immediately and obviously apparent. Students required little convincing to give it a try. While in class, we requested that students ask CodeHelp for help before seeking help from the instructor or teaching assistant. We did not enforce this as a rule but encouraged it throughout the semester. The idea was that CodeHelp could provide an initial level of support and handle relatively straightforward but common concerns, such as syntax errors. CodeHelp performed very well in this capacity, and given its flexibility and low cost, it is a great addition to the classroom for this functionality alone. However, CodeHelp also provided much more sophisticated help on a huge range of introductory CS problems throughout the semester. | 2308.06921#48 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 48 | # 4.1.1 Search Data Refinement
Typically, input queries consist of short sentences or keyword-based phrases that may be ambiguous and contain multiple possible user intents. Accurately determining the specific user intent is essential in such cases. Moreover, documents usually contain redundant or noisy information, which poses a challenge for retrievers to extract relevance signals between queries and documents. Leveraging the strong text understanding and generation capabilities of LLMs offers a promising solution to these challenges. As yet, research efforts in this domain primarily concentrate on employing LLMs as query rewriters, aiming to refine input queries for more precise expressions of the user's search intent. Section 3 has provided a comprehensive overview of these studies, so this section refrains from further elaboration. In addition to query rewriting, an intriguing avenue for exploration involves using LLMs to enhance the effectiveness of retrieval by refining lengthy documents. This area remains open for further investigation and advancement.
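To make the query-rewriting idea above concrete, the following is a minimal sketch. It assumes a generic `llm` text-completion callable and a hand-written prompt template; neither the function nor the prompt wording is taken from any specific system covered in this survey.

```python
# Illustrative sketch of LLM-based query rewriting (assumed prompt and llm stub,
# not the exact setup of any surveyed method).
from typing import Callable

REWRITE_TEMPLATE = (
    "Rewrite the ambiguous search query into one precise question that "
    "states the user's intent explicitly.\n"
    "Query: {query}\n"
    "Rewritten query:"
)

def rewrite_query(query: str, llm: Callable[[str], str]) -> str:
    """Ask an LLM to reformulate a short, ambiguous query before retrieval."""
    return llm(REWRITE_TEMPLATE.format(query=query)).strip()

if __name__ == "__main__":
    # Stub LLM so the sketch runs without a real model.
    fake_llm = lambda prompt: "How much caffeine is safe to consume during pregnancy?"
    print(rewrite_query("caffeine pregnancy", fake_llm))
```

In practice, the rewritten query would simply replace, or be concatenated with, the original query before it is passed to the retriever.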
# 4.1.2 Training Data Augmentation
Due to the expensive economic and time costs of human-annotated labels, a common problem in training neural retrieval models is the lack of training data. Fortunately, the excellent capability of LLMs in text generation offers a potential solution. A key research focus lies in devising strategies to leverage LLMs' capabilities to generate pseudo-relevant signals and augment the training dataset for the retrieval task. | 2308.07107#48 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 48 | Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, et al. Pythia: A suite for analyzing large language models across training and scaling. In International Conference on Machine Learning, pp. 2397–2430. PMLR, 2023b.
Sid Black, Leo Gao, Phil Wang, Connor Leahy, and Stella Biderman. Gpt-neo: Large scale autoregressive language modeling with mesh-tensorflow. If you use this software, please cite it using these metadata, 58, 2021.
Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, et al. Gpt-neox-20b: An open-source autoregressive language model. arXiv preprint arXiv:2204.06745, 2022. | 2308.07124#48 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 48 | Karthik Gopalakrishnan, Behnam Hedayatnia, Qinlang Chen, Anna Gottardi, Sanjeev Kwatra, Anushree Venkatesh, Raefer Gabriel, and Dilek Hakkani-Tür. Topical-chat: Towards knowledge-grounded open-domain conversations. 2019.
Marzena Karpinska, Nader Akoury, and Mohit Iyyer. The perils of using mechanical turk to evaluate open-ended text generation. arXiv preprint arXiv:2109.06835, 2021.
Grzegorz Kondrak. N-gram similarity and distance. In International symposium on string processing and information retrieval, pp. 115–126. Springer, 2005.
Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. Camel: Communicative agents for "mind" exploration of large scale language model society. arXiv preprint arXiv:2303.17760, 2023a.
Ruosen Li, Teerth Patel, and Xinya Du. Prd: Peer rank and discussion improve large language model based evaluations. arXiv preprint arXiv:2307.02762, 2023b. | 2308.07201#48 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
2308.06921 | 49 | Several students (5) reported that sometimes an answer provided by CodeHelp contained elements that were "not covered" in class and, hence, the students were not expected to have knowledge of those elements. Responses representative of this theme included: "Sometimes it tells you to do code that we haven't learned in class" and "I would run into the issue where it wanted me to use concepts that I haven't been taught yet. This is both and good and a bad thing because it can introduce students to resources, but also confuse them.". A small number of students' responses (3) hinted at using CodeHelp without investing proper effort at solving the problem independently (i.e., "over-reliance"). The responses suggest that the students were aware this could have negative effects on their learning, yet they would still engage in that practice: "I think some people could complete the code without help and by going directly to CodeHelp their limiting themselves" and "I do think that sometimes I can get to dependent on CodeHelp and I have to scale it back a bit.". Several responses (3) stated that CodeHelp is "not human" and, hence, its capabilities are in | 2308.06921#49 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 49 | Why do we need data augmentation? Previous studies of neural retrieval models focused on supervised learning, namely training retrieval models using labeled data from specific domains. For example, MS MARCO [111] provides a vast repository, containing a million passages, more than 200,000 documents, and 100,000 queries with human-annotated relevance labels, which has greatly facilitated the development of supervised retrieval models. However, this paradigm inherently constrains the retriever's generalization ability for out-of-distribution data from other domains. The application spectrum of retrieval models varies from natural question-answering to biomedical IR, and it is expensive to annotate relevance labels for data from different domains. As a result, there is an emerging need for zero-shot
| 2308.07107#49 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 49 | Herbie Bradley, Honglu Fan, Harry Saini, Reshinth Adithyan, Shivanshu Purohit, and Joel Lehman. Diff models - a new way to edit code. CarperAI Blog, Jan 2023. URL https://carper.ai/diff-model/.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Conference on Neural Information Processing Systems (NeurIPS), 2020. URL https://papers.nips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html.
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023. | 2308.07124#49 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 49 | Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, and Shuming Shi. Encouraging divergent thinking in large language models through multi-agent debate. arXiv preprint arXiv:2305.19118, 2023.
Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pp. 74–81, 2004.
Ruibo Liu, Ruixin Yang, Chenyan Jia, Ge Zhang, Denny Zhou, Andrew M Dai, Diyi Yang, and Soroush Vosoughi. Training socially aligned language models in simulated human society. arXiv preprint arXiv:2305.16960, 2023a.
Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. Gpteval: Nlg evaluation using gpt-4 with better human alignment. arXiv preprint arXiv:2303.16634, 2023b. | 2308.07201#49 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
2308.06921 | 50 | and I have to scale it back a bit.". Several responses (3) stated that CodeHelp is "not human" and, hence, its capabilities are in some way limited as compared to the assistance provided by an instructor or a TA. However, the responses do not go into much detail as to why this might be the case: "less personal" and "No, but it cannot be a substitute for a real person." One of the responses explained the preference for human assistance in terms of the difficulty (see above) of formulating the proper question for CodeHelp: "no but personally I prefer to ask a real person because its difficult to phrase you questions in a way that won't confuse CodeHelp". | 2308.06921#50 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 50 | [Fig. 5, left: prompt examples] Few-shot prompt: "Example 1: Document: ...If you are pregnant, limit caffeine to 200 milligrams each day. This is about the amount in 1½ 8-ounce cups of coffee or one 12-ounce cup of coffee. Relevant Query: Is a little caffeine ok during pregnancy? ... Example N: Document: Passiflora herbertiana. A rare passion fruit native to Australia... Relevant Query: What fruit is native to Australia? Example N + 1: Document: {#Document} Relevant Query:". Zero-shot prompt: "Write a Question answered by the given passage. Passage: {#Passage} Query:". [Fig. 5, right: flowcharts] Framework of pseudo query generation (LLM, filtered relevant queries, augmented training corpus) and framework of relevance label generation (retriever, retrieved passages, LLM-based relevance estimator, pseudo queries, question, soft relevance, augmented training corpus).
Fig. 5. Two typical frameworks for LLM-based data augmentation in the retrieval task (right), along with their prompt examples (left). Note that the methods of relevance label generation do not treat questions as inputs but regard their generation probabilities conditioned on the retrieved passages as soft relevance labels.
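The few-shot prompt on the left of Fig. 5 can be assembled programmatically. The sketch below is illustrative only: the `llm` callable, the demonstration pairs, and the exact wording are placeholders rather than the prompts used by the methods compared in Table 3.

```python
# Sketch of few-shot pseudo-query generation in the style of Fig. 5 (illustrative).
from typing import Callable, List, Sequence, Tuple

def build_fewshot_prompt(examples: Sequence[Tuple[str, str]], document: str) -> str:
    """Concatenate (document, relevant query) demonstrations, then the target document."""
    parts: List[str] = []
    for i, (doc, query) in enumerate(examples, start=1):
        parts.append(f"Example {i}:\nDocument: {doc}\nRelevant Query: {query}\n")
    parts.append(f"Example {len(examples) + 1}:\nDocument: {document}\nRelevant Query:")
    return "\n".join(parts)

def generate_pseudo_query(document: str,
                          examples: Sequence[Tuple[str, str]],
                          llm: Callable[[str], str]) -> str:
    """Return a pseudo query for `document`, to be filtered and added to the training corpus."""
    return llm(build_fewshot_prompt(examples, document)).strip()

if __name__ == "__main__":
    demos = [("If you are pregnant, limit caffeine to 200 milligrams each day.",
              "Is a little caffeine ok during pregnancy?")]
    doc = "Passiflora herbertiana. A rare passion fruit native to Australia."
    fake_llm = lambda prompt: "What fruit is native to Australia?"  # stub model
    print(generate_pseudo_query(doc, demos, fake_llm))
```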
TABLE 3. The comparison of existing data augmentation methods powered by LLMs for training retrieval models. | 2308.07107#50 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 50 | Nghi DQ Bui, Hung Le, Yue Wang, Junnan Li, Akhilesh Deepak Gotmare, and Steven CH Hoi. Codetf: One-stop transformer library for state-of-the-art code llm. arXiv preprint arXiv:2306.00029, 2023.
Federico Cassano, John Gouwar, Daniel Nguyen, Sydney Nguyen, Luna Phipps-Costin, Donald Pinckney, Ming-Ho Yee, Yangtian Zi, Carolyn Jane Anderson, Molly Q Feldman, et al. Multipl-e: a scalable and polyglot approach to benchmarking neural code generation. IEEE Transactions on Software Engineering, 2023.
Sahil Chaudhary. Code alpaca: An instruction-following llama model for code generation. https://github.com/sahil280114/codealpaca, 2023.
Bei Chen, Fengji Zhang, Anh Nguyen, Daoguang Zan, Zeqi Lin, Jian-Guang Lou, and Weizhu Chen. Codet: Code generation with generated tests. arXiv preprint arXiv:2207.10397, 2022.
Lingjiao Chen, Matei Zaharia, and James Zou. How is chatgpt's behavior changing over time?, 2023a. | 2308.07124#50 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 50 | Andrea I Luppi, Pedro AM Mediano, Fernando E Rosas, Negin Holland, Tim D Fryer, John T O'Brien, James B Rowe, David K Menon, Daniel Bor, and Emmanuel A Stamatakis. A synergistic core for human brain evolution and cognition. Nature Neuroscience, 25(6):771–782, 2022.
Zhao Mandi, Shreeya Jain, and Shuran Song. Roco: Dialectic multi-robot collaboration with large language models. arXiv preprint arXiv:2307.04738, 2023.
Mary L McHugh. Interrater reliability: the kappa statistic. Biochemia medica, 22(3):276–282, 2012.
Shikib Mehri and Maxine Eskenazi. Usr: An unsupervised and reference free evaluation metric for dialog generation. arXiv preprint arXiv:2005.00456, 2020. | 2308.07201#50 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
2308.06921 | 51 | CodeHelp appeared to provide accurate and helpful responses to students the majority of the time. CodeHelp did not "give away the answer" or otherwise become a complete replacement for actively working through problems. It appears to strike a nice balance between providing enough information to move students forward without undermining the intent of the assignments.
CodeHelp was a great addition to the course in terms of serving students who had difficulty attending office hours or who needed frequent reassurance or feedback as they worked through assignments outside of class time. It was also exceptional in providing a novel avenue for delivering support to students who did not take advantage of traditional avenues of support. For example, some students who seemed uncomfortable, embarrassed, or otherwise reluctant to ask for help from the instructor or TA had no reservations about asking CodeHelp.
CodeHelp sometimes provided assistance that was inconsistent with the content of the class and the knowledge-level of the students. For example, CodeHelp might suggest solving problems with methods that had not yet been introduced. This was confusing and frustrating for some students. During the semester, the avoid set functionality (Section 3.3) was added to allow the instructor to explicitly prohibit certain kinds of content in CodeHelp responses, which largely resolved the problem. Students sometimes provided too little information describing their problem to get a useful response and required some coaching to provide detailed or thoughtful descriptions of problems to CodeHelp. | 2308.06921#51 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
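The avoid-set guardrail mentioned in the CodeHelp chunk above can be pictured with a small sketch. The Python below is not CodeHelp's actual pipeline: the prompt wording, the `complete` callable standing in for the LLM, and the keyword-based post-check are assumptions made purely for illustration.

```python
# Hypothetical sketch of an instructor-defined "avoid set" guardrail.
# NOT CodeHelp's real implementation; prompt text, `complete`, and the
# substring check are illustrative assumptions only.

from typing import Callable, List


def build_guarded_prompt(issue: str, code: str, avoid_set: List[str]) -> str:
    """Compose a tutoring prompt that asks the model not to use prohibited topics."""
    avoid_clause = ""
    if avoid_set:
        avoid_clause = (
            "Do not suggest or mention any of the following, because the class has "
            "not covered them yet: " + ", ".join(avoid_set) + ".\n"
        )
    return (
        "You are a teaching assistant. Help the student understand their problem "
        "without writing the full solution for them.\n"
        + avoid_clause
        + "Student code:\n" + code + "\n\nStudent question:\n" + issue + "\n"
    )


def violates_avoid_set(response: str, avoid_set: List[str]) -> bool:
    """Cheap post-hoc check: does the response mention a prohibited term?"""
    lowered = response.lower()
    return any(term.lower() in lowered for term in avoid_set)


def guarded_help(issue: str, code: str, avoid_set: List[str],
                 complete: Callable[[str], str]) -> str:
    """Query an LLM through `complete` and retry once if the avoid set is violated."""
    prompt = build_guarded_prompt(issue, code, avoid_set)
    response = complete(prompt)
    if violates_avoid_set(response, avoid_set):
        response = complete(prompt + "\nReminder: strictly avoid the listed topics.")
    return response
```

An instructor would then pass a course-specific list such as `avoid_set=["list comprehensions", "recursion"]` (hypothetical values) with every request.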
2308.07107 | 51 | TABLE 3. The comparison of existing data augmentation methods powered by LLMs for training retrieval models.
- InPairs [120]: # examples: 3; generator: Curie; synthetic data: relevant query; filter method: generation probability; LLMs' tuning: fixed
- Ma et al. [121]: # examples: 0-2; generator: Alpaca-LLaMA & tk-Instruct; synthetic data: relevant query; filter method: -; LLMs' tuning: fixed
- InPairs-v2 [122]: # examples: 3; generator: GPT-J; synthetic data: relevant query; filter method: relevance score from fine-tuned monoT5-3B; LLMs' tuning: fixed
- PROMPTAGATOR [123]: # examples: 0-8; generator: FLAN; synthetic data: relevant query; filter method: round-trip filtering; LLMs' tuning: fixed
- TQGen [124]: # examples: 0; generator: T0; synthetic data: relevant query; filter method: generation probability; LLMs' tuning: fixed
- UDAPDR [125]: # examples: 0-3; generator: GPT3 & FLAN-T5-XXL; synthetic data: relevant query; filter method: round-trip filtering; LLMs' tuning: fixed
- SPTAR [126]: # examples: 1-2; generator: LLaMA-7B & Vicuna-7B; synthetic data: relevant query; filter method: BM25 filtering; LLMs' tuning: soft prompt tuning
- ART [127]: # examples: 0; generator: T5-XL & T5-XXL; synthetic data: soft relevance labels; filter method: -; LLMs' tuning: fixed
and few-shot learning models to address this problem [128]. A common practice to improve the models' effectiveness in a target domain without adequate label signals is through data augmentation. | 2308.07107#51 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
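The few-shot pseudo query generation recipe that several rows of Table 3 share (InPairs-style) can be sketched as follows. The `generate` callable stands in for whichever LLM is used (Curie, GPT-J, and so on); the prompt template and function names are assumptions for illustration, not the papers' exact prompts.

```python
# Hedged sketch of InPairs-style pseudo query generation: a handful of
# (document, query) demonstrations are concatenated with a new document and an
# LLM is asked to continue with a plausible relevant query.

from typing import Callable, List, Tuple


def build_fewshot_prompt(demos: List[Tuple[str, str]], document: str) -> str:
    """demos: (document, relevant query) pairs used as in-context examples."""
    parts = []
    for demo_doc, demo_query in demos:
        parts.append(f"Document: {demo_doc}\nRelevant query: {demo_query}\n")
    parts.append(f"Document: {document}\nRelevant query:")
    return "\n".join(parts)


def generate_pseudo_queries(demos: List[Tuple[str, str]],
                            corpus: List[str],
                            generate: Callable[[str], str]) -> List[Tuple[str, str]]:
    """Pair every corpus document with an LLM-written pseudo query."""
    pairs = []
    for doc in corpus:
        query = generate(build_fewshot_prompt(demos, doc)).strip()
        if query:  # keep non-empty generations; real systems also filter by score
            pairs.append((query, doc))
    return pairs
```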
2308.07124 | 51 | Lingjiao Chen, Matei Zaharia, and James Zou. How is chatgpt's behavior changing over time?, 2023a.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
Shouyuan Chen, Sherman Wong, Liangjian Chen, and Yuandong Tian. Extending context window of large language models via positional interpolation. arXiv preprint arXiv:2306.15595, 2023b.
Xinyun Chen, Maxwell Lin, Nathanael Schärli, and Denny Zhou. Teaching large language models to self-debug. arXiv preprint arXiv:2304.05128, 2023c.
Fenia Christopoulou, Gerasimos Lampouras, Milan Gritta, Guchun Zhang, Yinpeng Guo, Zhongqi Li, Qi Zhang, Meng Xiao, Bo Shen, Lin Li, et al. Pangu-coder: Program synthesis with function-level language modeling. arXiv preprint arXiv:2207.11280, 2022. | 2308.07124#51 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 51 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, and Verena Rieser. Why we need new evaluation metrics for NLG. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 2241–2252, Copenhagen, Denmark, September 2017. Association for Computational Linguistics. doi: 10.18653/v1/D17-1238. URL https://aclanthology.org/D17-1238.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pp. 311–318, 2002. | 2308.07201#51 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
2308.06921 | 52 | 5.3 Instructor Reflections After the conclusion of the semester, the instructor, who is also one of the authors, reflected on what did and did not work:
Reviewing student queries submitted to CodeHelp provided an entirely new type of insight into student learning. In comparison to submitted work, the queries were a much more direct and unfiltered look into student thinking as they worked through problems. On
some occasions, this feedback guided modifications of assignments and additional class instruction during the semester.
Overall, given its great utility in a wide range of circumstances, its ease of use, and low cost, I found CodeHelp to be a tremendous asset in my course. I intend to continue using it in all of my introductory courses moving forward.
6 RECOMMENDED PRACTICES Based on our experiences, we have collected a few recommendations for integrating CodeHelp into a class effectively. | 2308.06921#52 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 52 | and few-shot learning models to address this problem [128]. A common practice to improve the models' effectiveness in a target domain without adequate label signals is through data augmentation.
How to apply LLMs for data augmentation? In the scenario of IR, it is easy to collect numerous documents. However, the challenging and costly task lies in gathering real user queries and labeling the relevant documents accordingly. Considering the strong text generation capability of LLMs, many researchers [120, 122] suggest using LLM-driven pro- cesses to create pseudo queries or relevance labels based on existing collections. These approaches facilitate the con- struction of relevant query-document pairs, enlarging the training data for retrieval models. According to the type of generated data, there are two mainstream approaches that complement the LLM-based data augmentation for retrieval models, i.e., pseudo query generation and relevance label generation. Their frameworks are visualized in Figure 5. Next, we will give an overview of the related studies. | 2308.07107#52 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
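The chunk above names two mainstream branches: pseudo query generation and relevance label generation. One plausible realization of the second branch (in the spirit of soft relevance labels such as ART's) is sketched below. The `query_loglik` callable is an assumption, standing in for any language model that can return the summed token log-probability of a query conditioned on a passage.

```python
# Hedged sketch of relevance label generation: the LLM scores how likely a given
# query is under each candidate passage, and the normalized scores become soft
# relevance labels for training a retriever. `query_loglik` is an assumed callable.

import math
from typing import Callable, List


def soft_relevance_labels(query: str,
                          passages: List[str],
                          query_loglik: Callable[[str, str], float],
                          temperature: float = 1.0) -> List[float]:
    """Softmax-normalize per-passage log-likelihoods of the query into soft labels."""
    if not passages:
        return []
    scores = [query_loglik(passage, query) / temperature for passage in passages]
    max_score = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - max_score) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]
```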
2308.07124 | 52 | Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022. URL https://arxiv.org/abs/2210.11416.
Haotian Cui, Chenglong Wang, Junjie Huang, Jeevana Priya Inala, Todd Mytkowicz, Bo Wang, Jianfeng Gao, and Nan Duan. Codeexp: Explanatory code document generation. arXiv preprint arXiv:2211.15395, 2022.
Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. Transformer-xl: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860, 2019. | 2308.07124#52 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 52 | Joon Sung Park, Joseph C O'Brien, Carrie J Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442, 2023.
Chen Qian, Xin Cong, Cheng Yang, Weize Chen, Yusheng Su, Juyuan Xu, Zhiyuan Liu, and Maosong Sun. Communicative agents for software development. arXiv preprint arXiv:2307.07924, 2023.
Yossi Rubner, Carlo Tomasi, and Leonidas J Guibas. The earth mover's distance as a metric for image retrieval. International journal of computer vision, 40:99–121, 2000.
Victor Sanh, Albert Webson, Colin Raffel, Stephen H Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. Multitask prompted training enables zero-shot task generalization. arXiv preprint arXiv:2110.08207, 2021. | 2308.07201#52 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
2308.06921 | 53 | 6 RECOMMENDED PRACTICES Based on our experiences, we have collected a few recommendations for integrating CodeHelp into a class effectively.
Initial introduction. When first introducing CodeHelp to students, motivate its use by sharing some of the benefits identified in this work, as relevant to your course. Explain carefully its strengths and limitations in the context of your course: how it will likely be able to help, and where it may produce incorrect responses. Provide guidance on how to ask for help most effectively. This includes providing the relevant portions of one's code, identifying and copying the important information from error messages, and providing enough information for the issue to be identified. These are the same skills one needs to effectively communicate issues to instructors or peers. Providing good and bad examples or taking a moment to roleplay a few situations may help here. Demonstrate CodeHelp with a few issues similar to those you expect your students to encounter. Model how to provide sufficient information and communicate clearly. | 2308.06921#53 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 53 | to GPT-3, which subsequently generates possible relevant queries for the given document. By combining the same demonstration with various documents, it is easy to create a vast pool of synthetic training samples and support the fine-tuning of retrievers on specific target domains. Recent studies [121] have also leveraged open-sourced LLMs, such as Alpaca-LLaMA and tk-Instruct, to produce sufficient pseudo queries and applied curriculum learning to pre-train dense retrievers. To enhance the reliability of these synthetic samples, a fine-tuned model (e.g., a monoT5-3B model fine-tuned on MSMARCO [122]) is employed to filter the generated queries. Only the top pairs with the highest estimated relevance scores are kept for training. This "generating-then-filtering" paradigm can be conducted iteratively in a round-trip filtering manner, i.e., by first fine-tuning a retriever on the generated samples and then filtering the generated samples using this retriever. Repeating these EM-like steps until convergence can produce high-quality training sets [123]. Furthermore, by adjusting the prompt given to LLMs, they can generate queries of different types. This capability allows for a more accurate simulation of real queries with various patterns [124].
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
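The "generating-then-filtering" loop described in the 2308.07107 chunk above can be sketched in a few lines. Here `score` rates a (query, document) pair (e.g., a monoT5-style reranker or the current retriever) and `finetune` returns an updated scorer trained on the kept pairs; both callables, the keep ratio, and the round count are assumptions for illustration rather than the papers' exact settings.

```python
# Hedged sketch of round-trip ("EM-like") filtering of synthetic training pairs.

from typing import Callable, List, Tuple

Pair = Tuple[str, str]  # (pseudo query, document)


def filter_top_pairs(pairs: List[Pair],
                     score: Callable[[str, str], float],
                     keep_ratio: float = 0.5) -> List[Pair]:
    """Keep only the pairs with the highest estimated relevance scores."""
    ranked = sorted(pairs, key=lambda p: score(p[0], p[1]), reverse=True)
    keep_n = max(1, int(len(ranked) * keep_ratio))
    return ranked[:keep_n]


def round_trip_filtering(pairs: List[Pair],
                         score: Callable[[str, str], float],
                         finetune: Callable[[List[Pair]], Callable[[str, str], float]],
                         rounds: int = 3) -> List[Pair]:
    """Filter with the current scorer, retrain on the survivors, and repeat."""
    kept = pairs
    for _ in range(rounds):
        kept = filter_top_pairs(kept, score)
        score = finetune(kept)  # the retrained retriever becomes the next filter
    return kept
```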
2308.07124 | 53 | Tri Dao, Dan Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. Flashattention: Fast and memory-efficient exact attention with io-awareness. Advances in Neural Information Processing Systems, 35:16344–16359, 2022.
Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. Qlora: Efficient finetuning of quantized llms. arXiv preprint arXiv:2305.14314, 2023.
Kaustubh D Dhole, Varun Gangal, Sebastian Gehrmann, Aadesh Gupta, Zhenhao Li, Saad Mahamood, Abinaya Mahendiran, Simon Mille, Ashish Srivastava, Samson Tan, et al. Nl-augmenter: A framework for task-sensitive natural language augmentation. arXiv preprint arXiv:2112.02721, 2021.
Yangruibo Ding, Zijian Wang, Wasi Uddin Ahmad, Murali Krishna Ramanathan, Ramesh Nallapati, Parminder Bhatia, Dan Roth, and Bing Xiang. Cocomic: Code completion by jointly modeling in-file and cross-file context. arXiv preprint arXiv:2212.10007, 2022. | 2308.07124#53 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 53 | Thibault Sellam, Dipanjan Das, and Ankur P Parikh. Bleurt: Learning robust metrics for text generation. arXiv preprint arXiv:2004.04696, 2020.
Chenhui Shen, Liying Cheng, Yang You, and Lidong Bing. Are large language models good evaluators for abstractive summarization? arXiv preprint arXiv:2305.13091, 2023.
Amanda Stent, Matthew Marge, and Mohit Singhai. Evaluating evaluation methods for generation in the presence of variation. In International conference on intelligent text processing and computational linguistics, pp. 341–351. Springer, 2005.
Chris Van Der Lee, Albert Gatt, Emiel Van Miltenburg, Sander Wubben, and Emiel Krahmer. Best practices for the human evaluation of automatically generated text. In Proceedings of the 12th International Conference on Natural Language Generation, pp. 355–368, 2019.
Jiaan Wang, Yunlong Liang, Fandong Meng, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, and Jie Zhou. Is chatgpt a good nlg evaluator? a preliminary study. arXiv preprint arXiv:2303.04048, 2023a. | 2308.07201#53 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
2308.06921 | 54 | During Use. Throughout the course, while students are using CodeHelp, it is helpful to view the students' queries regularly. You can gain detailed insight into where they are struggling at each point in the term that may lead to adapting course plans. Additionally, you might identify students whose usage is not effective (e.g., repeatedly submitting ineffective queries or demonstrating over-reliance), and reach out to them directly to provide guidance or a nudge.
Instructors and TAs should sample CodeHelp's responses in each section of the course to spot and mitigate issues. For example, if CodeHelp suggests a technique, function, or concept that does not fit the design of your course, you can add that to the avoid set (Section 3.3) to prevent it from being used in future responses. | 2308.06921#54 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 54 | • Pseudo query generation. Given the abundance of documents, a straightforward idea is to use LLMs for generating their corresponding pseudo queries. One such illustration is presented by inPairs [120], which leverages the in-context learning capability of GPT-3. This method employs a collection of query-document pairs as demonstrations. These pairs are combined with a document and presented as input
In practice, it is costly to generate a substantial number of pseudo queries through LLMs. Balancing the generation costs and the quality of generated samples has become an urgent problem. To tackle this, UDAPDR [125] is proposed, which first produces a limited set of synthetic queries using
| 2308.07107#54 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
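The cost-saving two-stage recipe attributed to UDAPDR in the 2308.07107 chunk above can be sketched as follows: an expensive LLM writes only a handful of in-domain queries, which then serve as few-shot demonstrations for a cheaper generator run over the whole corpus. Both `expensive_generate` and `cheap_generate` are assumed callables, and the prompt templates are illustrative, not the paper's.

```python
# Hedged sketch of a two-stage (large LLM -> small LLM) pseudo query pipeline.

import random
from typing import Callable, List, Tuple


def seed_queries(sample_docs: List[str],
                 expensive_generate: Callable[[str], str]) -> List[Tuple[str, str]]:
    """Stage 1: a few high-quality (document, query) demonstrations from a large LLM."""
    demos = []
    for doc in sample_docs:
        q = expensive_generate(
            f"Write a search query answered by this passage:\n{doc}\nQuery:")
        demos.append((doc, q.strip()))
    return demos


def scale_up(demos: List[Tuple[str, str]],
             corpus: List[str],
             cheap_generate: Callable[[str], str],
             demos_per_prompt: int = 3) -> List[Tuple[str, str]]:
    """Stage 2: reuse the demonstrations to prompt a smaller model over the corpus."""
    pairs = []
    for doc in corpus:
        chosen = random.sample(demos, k=min(demos_per_prompt, len(demos)))
        prompt = "".join(f"Passage: {d}\nQuery: {q}\n\n" for d, q in chosen)
        prompt += f"Passage: {doc}\nQuery:"
        pairs.append((cheap_generate(prompt).strip(), doc))
    return pairs
```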
2308.07124 | 54 | Yihong Dong, Xue Jiang, Zhi Jin, and Ge Li. Self-collaboration code generation via chatgpt. arXiv preprint arXiv:2304.07590, 2023.
Dawn Drain, Colin B Clement, Guillermo Serrato, and Neel Sundaresan. Deepdebug: Fixing python bugs using stack traces, backtranslation, and code skeletons. arXiv preprint arXiv:2105.09352, 2021.
Xueying Du, Mingwei Liu, Kaixin Wang, Hanlin Wang, Junwei Liu, Yixuan Chen, Jiayi Feng, Chaofeng Sha, Xin Peng, and Yiling Lou. Classeval: A manually-crafted benchmark for evaluating llms on class-level code generation. arXiv preprint arXiv:2308.01861, 2023.
Jane Dwivedi-Yu, Timo Schick, Zhengbao Jiang, Maria Lomeli, Patrick Lewis, Gautier Izacard, Edouard Grave, Sebastian Riedel, and Fabio Petroni. Editeval: An instruction-based benchmark for text improvements. arXiv preprint arXiv:2209.13331, 2022. | 2308.07124#54 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 54 | Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. Large language models are not fair evaluators. arXiv preprint arXiv:2305.17926, 2023b.
Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, et al. Pandalm: An automatic evaluation benchmark for llm instruction tuning optimization. arXiv preprint arXiv:2306.05087, 2023c.
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, and Heng Ji. Unleashing cognitive synergy in large language models: A task-solving agent through multi-persona self-collaboration. arXiv preprint arXiv:2307.05300, 2023d.
Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652, 2021. | 2308.07201#54 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
2308.06921 | 55 | 7 CONCLUSION AND FUTURE WORK This work shows that LLMs, when properly implemented and integrated into a learning environment, can be a valuable aid to both students and educators. We developed CodeHelp to provide immediate, high-quality support to students working on programming exercises while mitigating the risk of fostering an over-reliance on the automated assistance. Providing an automated option for this kind of help can increase the level of support students receive throughout a course due to a combination of being constantly available and avoiding the anxiety associated with asking a professor or TA for help. In our pilot study, students found CodeHelp to be a welcome addition to direct support from a professor and teaching assistants.
Going forward, we intend to continue developing and improving CodeHelp. The "avoid set" functionality proved to be critical for obtaining course-appropriate responses in many cases, and we
| 2308.06921#55 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 55 |
LLMs for the target domain. These high-quality examples are subsequently used as prompts for a smaller model to generate a large number of queries, thereby constructing the training set for that specific domain. It is worth noting that the aforementioned studies primarily rely on fixed LLMs with frozen parameters. Empirically, optimizing LLMs' parameters can significantly improve their performance on downstream tasks. Unfortunately, this pursuit is impeded by the prohibitively high demand for computational resources. To overcome this obstacle, SPTAR [126] introduces a soft prompt tuning technique that only optimizes the prompts' embedding layer during the training process. This approach allows LLMs to better adapt to the task of generating pseudo-queries, striking a favorable balance between training cost and generation quality.
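To make the soft prompt tuning idea concrete, below is a minimal, hedged PyTorch sketch: the backbone LLM stays frozen and only a small matrix of prepended "virtual token" embeddings is trained to steer the model toward pseudo-query generation. The model name, token count, and prompt text are illustrative placeholders, not details taken from SPTAR [126].

```python
# Minimal sketch of soft prompt tuning: freeze the LLM, train only a small set of
# prepended "virtual token" embeddings. Model and hyperparameters are placeholders.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

class SoftPromptedLM(nn.Module):
    def __init__(self, model_name="gpt2", n_virtual_tokens=20):
        super().__init__()
        self.lm = AutoModelForCausalLM.from_pretrained(model_name)
        for p in self.lm.parameters():                      # backbone stays frozen
            p.requires_grad = False
        dim = self.lm.get_input_embeddings().embedding_dim
        # the only trainable parameters: the soft prompt ("prompt embedding layer")
        self.soft_prompt = nn.Parameter(torch.randn(n_virtual_tokens, dim) * 0.02)

    def forward(self, input_ids, attention_mask, labels=None):
        tok_emb = self.lm.get_input_embeddings()(input_ids)
        bsz, n_virt = input_ids.size(0), self.soft_prompt.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(bsz, -1, -1)
        inputs_embeds = torch.cat([prompt, tok_emb], dim=1)
        prompt_mask = torch.ones(bsz, n_virt, dtype=attention_mask.dtype,
                                 device=attention_mask.device)
        attention_mask = torch.cat([prompt_mask, attention_mask], dim=1)
        if labels is not None:                              # no loss on virtual-token positions
            ignore = torch.full((bsz, n_virt), -100,
                                dtype=labels.dtype, device=labels.device)
            labels = torch.cat([ignore, labels], dim=1)
        return self.lm(inputs_embeds=inputs_embeds,
                       attention_mask=attention_mask, labels=labels)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = SoftPromptedLM()
batch = tokenizer(["Document: ... Relevant query:"], return_tensors="pt")
out = model(batch["input_ids"], batch["attention_mask"], labels=batch["input_ids"])
out.loss.backward()                                         # gradients reach only soft_prompt
```

Because the LLM weights never change, the cost of adapting to a new domain is limited to the prompt parameters, which is the training-cost/generation-quality trade-off described above.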
In addition to the above studies, pseudo query generation methods are also introduced in other application scenarios, such as conversational dense retrieval [105] and multilingual dense retrieval [129]. | 2308.07107#55 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 55 | Aryaz Eghbali and Michael Pradel. Crystalbleu: precisely and efficiently measuring the similarity of code. In Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering, pp. 1–12, 2022.
Sarah Fakhoury, Saikat Chakraborty, Madan Musuvathi, and Shuvendu K Lahiri. Towards generating functionally correct code edits from natural language issue descriptions. arXiv preprint arXiv:2304.03816, 2023.
Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, and Mike Lewis. Incoder: A generative model for code infilling and synthesis. arXiv preprint arXiv:2204.05999, 2022.
Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. Gptscore: Evaluate as you desire. arXiv preprint arXiv:2302.04166, 2023. | 2308.07124#55 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 55 | Anita Williams Woolley, Christopher F Chabris, Alex Pentland, Nada Hashmi, and Thomas W Malone. Evidence for a collective intelligence factor in the performance of human groups. Science, 330(6004):686–688, 2010.
Ning Wu, Ming Gong, Linjun Shou, Shining Liang, and Daxin Jiang. Large language models are diverse role-players for summarization evaluation. arXiv preprint arXiv:2303.15078, 2023.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. Bertscore: Evaluating text generation with bert. arXiv preprint arXiv:1904.09675, 2019.
Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Christian M Meyer, and Steffen Eger. Moverscore: Text generation evaluating with contextualized embeddings and earth mover distance. arXiv preprint arXiv:1909.02622, 2019. | 2308.07201#55 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
2308.06921 | 56 |
plan to give instructors more ways to provide context about their courses and thus further tailor the LLM responses for their students. Additionally, we plan to explore different forms or levels of intervention that might be appropriate depending on the complexity of the task, the experience level of the student, or even the specific learning objectives of the course. And we see many opportunities for the tool to be more individualized, adapting to the needs of each student. For example, it could record and maintain information about each individual student's mastery of different topics, using that to guide the responses generated for them.
While encouraging, this work presents only an initial exploration into the effective deployment of LLMs in computing education. For example, while students positively rated CodeHelp and the instructor found it easy to use and deploy, future work should establish more robust metrics for gauging efficacy, such as measuring impact on student learning outcomes or comparing student performance in classrooms that use CodeHelp to those that do not.
We also recognize that further work needs to be conducted with larger, more diverse populations of students. It would also be interesting to deploy CodeHelp in different educational settings, such as in distance learning or self-paced programming courses, to evaluate its flexibility and adaptability. | 2308.06921#56 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 56 | In addition to the above studies, pseudo query generation methods are also introduced in other application scenarios, such as conversational dense retrieval [105] and multilingual dense retrieval [129].
Relevance label generation. In some downstream tasks of retrieval, such as question-answering, the collection of questions is also sufficient. However, the relevance labels connecting these questions with the passages of supporting evidence are very limited. In this context, leveraging the capability of LLMs for relevance label generation is a promising approach that can augment the training corpus for retrievers. A recent method, ART [127], exemplifies this approach. It first retrieves the top-relevant passages for each question. Then, it employs an LLM to produce the generation probabilities of the question conditioned on these top passages. After a normalization process, these probabilities serve as soft relevance labels for the training of the retriever. Additionally, to highlight the similarities and differences among the corresponding methods, we present a comparative result in Table 3. It compares the aforementioned methods from various perspectives, including the number of examples, the generator employed, the type of synthetic data produced, the method applied to filter synthetic data, and whether LLMs are fine-tuned. This table serves to facilitate a clearer understanding of the landscape of these methods.
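As a rough illustration of the ART-style soft-labeling step described above, the sketch below scores each retrieved passage by a frozen causal LM's log-likelihood of generating the question conditioned on that passage, then normalizes the scores into a soft relevance distribution. The scoring model, prompt format, and temperature are assumptions for illustration and differ from the actual setup of ART [127].

```python
# Illustrative sketch: turn LLM question-likelihoods into soft relevance labels.
# Model choice and prompt wording are assumptions, not ART's exact recipe.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def question_logprob(passage: str, question: str) -> float:
    prefix = f"Passage: {passage}\nQuestion:"
    prefix_len = tok(prefix, return_tensors="pt").input_ids.size(1)
    full_ids = tok(prefix + " " + question, return_tensors="pt").input_ids
    logits = lm(full_ids).logits
    logprobs = torch.log_softmax(logits[:, :-1, :], dim=-1)   # token t+1 predicted from its prefix
    targets = full_ids[:, 1:]
    token_lp = logprobs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return token_lp[:, prefix_len - 1:].sum().item()          # keep only the question tokens

def soft_relevance_labels(question: str, passages: list, temperature: float = 1.0):
    scores = torch.tensor([question_logprob(p, question) for p in passages])
    return torch.softmax(scores / temperature, dim=0)          # normalized soft labels

labels = soft_relevance_labels(
    "who wrote hamlet",
    ["Hamlet is a tragedy written by William Shakespeare around 1600.",
     "The Eiffel Tower is a wrought-iron tower in Paris."],
)
print(labels)   # most probability mass should land on the Shakespeare passage
```

These soft labels can then supervise the retriever, for example by pushing the retriever's own score distribution over the same passages toward the LLM-derived distribution.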
# 4.2 Employing LLMs to Enhance Model Architecture | 2308.07107#56 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 56 | Deep Ganguli, Liane Lovitt, Jackson Kernion, Amanda Askell, Yuntao Bai, Saurav Kadavath, Ben Mann, Ethan Perez, Nicholas Schiefer, Kamal Ndousse, et al. Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned. arXiv preprint arXiv:2209.07858, 2022.
Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot language model evaluation, 2021. URL https://doi.org/10.5281/zenodo.5371628.
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. Pal: Program-aided language models. In International Conference on Machine Learning, pp. 10764–10799. PMLR, 2023. | 2308.07124#56 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 56 | Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and chatbot arena. arXiv preprint arXiv:2306.05685, 2023.
Ming Zhong, Yang Liu, Da Yin, Yuning Mao, Yizhu Jiao, Pengfei Liu, Chenguang Zhu, Heng Ji, and Jiawei Han. Towards a unified multi-dimensional evaluator for text generation. arXiv preprint arXiv:2210.07197, 2022.
# A PROMPT TEMPLATE AND DIVERSE ROLE PROMPT
The overall prompt template is shown in Table 6; we draw inspiration from Wu et al. (2023) and design several different role descriptions as follows.
General Public You are now General Public, one of the referees in this task. You are interested in the story and looking for updates on the investigation. Please think critically by yourself and note that it's your responsibility to choose one of which is the better first. | 2308.07201#56 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
2308.06921 | 57 | Our findings could have implications beyond computing education. LLMs such as those used in CodeHelp could potentially be adapted to support learning in other domains. We hope that our work serves as an impetus for other researchers and educators to explore the use of LLMs in diverse educational contexts, continuing the dialogue around the opportunities and challenges they present.
REFERENCES [1] Brett A Becker, Paul Denny, James Finnie-Ansley, Andrew Luxton-Reilly, James Prather, and Eddie Antonio Santos. 2023. Programming Is Hard-Or at Least It Used to Be: Educational Opportunities and Challenges of AI Code Generation. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1. 500–506.
[2] Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology 3, 2 (2006), 77–101. https://doi.org/10.1191/1478088706qp063oa
[3] Peter Brusilovsky, Barbara J Ericson, Cay S Horstmann, and Christian Servin. 2023. The Future of Computing Education Materials. (2023). | 2308.06921#57 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 57 | # 4.2 Employing LLMs to Enhance Model Architecture
Leveraging the excellent text encoding and decoding capabilities of LLMs, it is feasible to understand queries and documents with greater precision compared to earlier smaller-sized models [59]. Researchers have endeavored to utilize LLMs as the foundation for constructing advanced retrieval models. These methods can be grouped into two categories, i.e., dense retrievers and generative retrievers.
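Before the two categories are detailed below, the following sketch shows the basic shape of the dense (bi-encoder) option: queries and documents are mapped to single vectors by an LLM-based encoder and ranked by similarity. The tiny GPT-2 stand-in and the last-token pooling are illustrative assumptions only, not the configuration of any specific retriever surveyed here (such as cpt-text, GTR, or RepLLaMA).

```python
# Minimal bi-encoder sketch: encode text into one vector with an LLM backbone,
# then rank documents by similarity to the query vector. Backbone and pooling
# are illustrative placeholders.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
encoder = AutoModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def embed(text: str) -> torch.Tensor:
    ids = tok(text + tok.eos_token, return_tensors="pt")
    hidden = encoder(**ids).last_hidden_state          # [1, seq_len, dim]
    vec = hidden[0, -1]                                # hidden state of the appended EOS token
    return torch.nn.functional.normalize(vec, dim=-1)

query_vec = embed("what is dense retrieval")
docs = [
    "Dense retrieval encodes queries and documents as vectors and matches them by similarity.",
    "The 2018 FIFA World Cup was held in Russia.",
]
doc_vecs = torch.stack([embed(d) for d in docs])
scores = doc_vecs @ query_vec                          # cosine similarity (vectors are normalized)
ranking = scores.argsort(descending=True)
print(ranking, scores)
```

In practice the encoder is further fine-tuned so that relevant query-document pairs score higher than non-relevant ones, which is what the fine-tuning strategies discussed in this subsection address.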
# 4.2.1 Dense Retriever
In addition to the quantity and quality of the data, the representative capability of models also greatly influences the efficacy of retrievers. Inspired by the LLM's excellent capability to encode and comprehend natural language, some researchers [130–132] leverage LLMs as retrieval encoders and investigate the impact of model scale on retriever performance. | 2308.07107#57 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 57 | Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, and Soujanya Poria. Flacuna: Unleashing the problem solving power of vicuna using flan fine-tuning. arXiv preprint arXiv:2307.02053, 2023.
Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen, Yujiu Yang, Nan Duan, and Weizhu Chen. Critic: Large language models can self-correct with tool-interactive critiquing. arXiv preprint arXiv:2305.11738, 2023.
Arnav Gudibande, Eric Wallace, Charlie Snell, Xinyang Geng, Hao Liu, Pieter Abbeel, Sergey Levine, and Dawn Song. The false promise of imitating proprietary llms. arXiv preprint arXiv:2305.15717, 2023. | 2308.07124#57 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 57 | Critic You are now Critic, one of the referees in this task. You will check fluent writing, clear sentences, and good wording in summary writing. Your job is to question others' judgment to make sure their judgment is well-considered and offer an alternative solution if two responses are at the same level.
News Author You are News Author, one of the referees in this task. You will focus on the consistency with the original article. Please help other people to determine which response is the better one.
Psychologist You are Psychologist, one of the referees in this task. You will study human behavior and mental processes in order to understand and explain human behavior. Please help other people to determine which response is the better one.
Scientist You are Scientist, one of the referees in this task. You are a professional engaged in systematic study who possesses a strong background in the scientific method, critical thinking, and problem-solving abilities. Please help other people to determine which response is the better one.
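A small sketch of how these role descriptions might be plugged into the debate prompt of Table 6 (reproduced in a later chunk): each referee agent receives the shared template with its own role description, agent name, and the running discussion history substituted in. The slot-filling code and the call_llm placeholder are assumptions about the plumbing, not ChatEval's actual implementation.

```python
# Illustrative sketch only: fill the Table 6 slots per referee agent.
# call_llm is a hypothetical stand-in for querying the underlying LLM.
ROLES = {
    "Critic": "You are now Critic, one of the referees in this task. ...",
    "Scientist": "You are Scientist, one of the referees in this task. ...",
}

TEMPLATE = (
    "[Question] {source text} "
    "[The Start of Assistant 1's Answer] {compared text one} [The End of Assistant 1's Answer] "
    "[The Start of Assistant 2's Answer] {compared text two} [The End of Assistant 2's Answer] "
    "[System] ... Here is your discussion history: {chat history} {role description} "
    "Now it's your time to talk, please make your talk short and clear, {agent name} !"
)

def build_prompt(agent, question, answer_1, answer_2, history):
    slots = {
        "{source text}": question,
        "{compared text one}": answer_1,
        "{compared text two}": answer_2,
        "{chat history}": "\n".join(history),
        "{role description}": ROLES[agent],
        "{agent name}": agent,
    }
    prompt = TEMPLATE
    for slot, value in slots.items():        # slot names contain spaces, so use replace()
        prompt = prompt.replace(slot, value)
    return prompt

history = []                                  # shared, growing discussion history
for agent in ROLES:
    prompt = build_prompt(agent, "Summarize the article ...",
                          "candidate response A ...", "candidate response B ...", history)
    # reply = call_llm(prompt)                # hypothetical LLM call
    # history.append(f"{agent}: {reply}")
```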
# B FORMAL DEPICTION OF DIFFERENT COMMUNICATION STRATEGY
| 2308.07201#57 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
2308.06921 | 58 | [3] Peter Brusilovsky, Barbara J Ericson, Cay S Horstmann, and Christian Servin. 2023. The Future of Computing Education Materials. (2023).
[4] Gustavo Carreira, Leonardo Silva, Antonio Jose Mendes, and Hugo Goncalo Oliveira. 2022. Pyo, a Chatbot Assistant for Introductory Programming Students. In 2022 International Symposium on Computers in Education (SIIE). IEEE, Coimbra, Portugal, 1–6. https://doi.org/10.1109/SIIE56031.2022.9982349
[5] Bei Chen, Fengji Zhang, Anh Nguyen, Daoguang Zan, Zeqi Lin, Jian-Guang Lou, and Weizhu Chen. 2022. CodeT: Code Generation with Generated Tests. arXiv:2207.10397 [cs.CL] | 2308.06921#58 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 58 | General Retriever. Since the effectiveness of retrievers primarily relies on the capability of text embedding, the evolution of text embedding models often has a significant impact on the progress of retriever development. In the era of LLMs, a pioneering effort was made by OpenAI [130]. They view adjacent text segments as positive pairs to facilitate the unsupervised pre-training of a set of text embedding models, denoted as cpt-text, whose parameter counts range from 300M to 175B. Experiments conducted on the MS MARCO [111] and BEIR [128] datasets indicate that larger model scales have the potential to yield improved performance in unsupervised learning and transfer learning for text search tasks. Nevertheless, pre-training LLMs from scratch is prohibitively expensive for most researchers. To overcome this limitation, some studies [131, 133] use pre-trained LLMs to initialize the bi-encoder of dense retrievers. Specifically, GTR [133] adopts T5-family models, including T5-base, Large, XL, and XXL, to initialize and fine-tune dense retrievers. RepLLaMA [131] further fine-tunes the LLaMA | 2308.07107#58 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 58 | Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, et al. Textbooks are all you need. arXiv preprint arXiv:2306.11644, 2023.
Jingxuan He, Luca Beurer-Kellner, and Martin Vechev. On distribution shift in learning-based bug detectors. In International Conference on Machine Learning, pp. 8559–8580. PMLR, 2022.
Vincent J Hellendoorn, Charles Sutton, Rishabh Singh, Petros Maniatis, and David Bieber. Global relational models of source code. In International conference on learning representations, 2019.
Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, et al. Measuring coding challenge competence with apps. arXiv preprint arXiv:2105.09938, 2021. | 2308.07124#58 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 58 | # B FORMAL DEPICTION OF DIFFERENT COMMUNICATION STRATEGY
[Question] {source text} [The Start of Assistant 1's Answer] {compared text one} [The End of Assistant 1's Answer] [The Start of Assistant 2's Answer] {compared text two} [The End of Assistant 2's Answer] [System] We would like to request your feedback on the performance of two AI assistants in response to the user question displayed above. Please consider the helpfulness, relevance, accuracy, and level of detail of their responses. Each assistant receives an overall score on a scale of 1 to 10, where a higher score indicates better overall performance. There are a few other referees assigned the same task, it's your responsibility to discuss with them and think critically before you make your final judgment. Here is your discussion history: {chat history} {role description} Now it's your time to talk, please make your talk short and clear, {agent name} !
Table 6: The prompt template for FairEval Dataset. We replace the colored slot with real text before querying the LLMs. Note that we use the same template when conducting single-agent-based experiments and ignore the chat history and role description slot. | 2308.07201#58 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
2308.06921 | 59 | [6] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. arXiv:2107.03374 [cs.LG] [7] Jonathan E Collins. 2023. Policy Solutions: Policy questions for ChatGPT and artificial intelligence. Phi Delta Kappan 104, 7 (2023), 60–61.
[8] Tyne Crow, Andrew Luxton-Reilly, and Burkhard Wuensche. 2018. Intelligent tutoring systems for programming education: a systematic review. In Proceedings of the 20th Australasian Computing Education Conference. ACM, Brisbane Queensland Australia, 53–62. https://doi.org/10.1145/3160489.3160492
[9] Paul Denny, Viraj Kumar, and Nasser Giacaman. 2023. Conversing with Copilot: Exploring Prompt Engineering for Solving CS1 Problems Using Natural Language. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1. ACM, Toronto ON Canada, 1136–1142. https://doi.org/10.1145/3545945.3569823 | 2308.06921#59 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 59 | Large, XL, and XXL, to initialize and fine-tune dense retrievers. RepLLaMA [131] further fine-tunes the LLaMA model on multiple stages of IR, including retrieval and reranking. For the dense retrieval task, RepLLaMA appends an end-of-sequence token "</s>" to the input sequences, i.e., queries or documents, and regards its output embedding as the representation of the query or document. The experiments confirm again that larger model sizes can lead to better performance, particularly in zero-shot settings. Notably, the researchers of RepLLaMA [131] also study the effectiveness of applying LLaMA in the reranking stage, which will be introduced in Section 5.1.3. (A minimal sketch of this EOS-token pooling idea follows this record.) | 2308.07107#59 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
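A hedged sketch of the EOS-pooling idea described in the chunk above: append "</s>" to a query or document and take the hidden state of that final token as its dense representation. The model name and pooling details are illustrative assumptions, and the retrieval-specific fine-tuning that makes such embeddings effective is omitted.

```python
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # assumption: any decoder-only LM behaves similarly
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, torch_dtype=torch.float16)
model.eval()

def embed(text: str) -> torch.Tensor:
    # Append the end-of-sequence token so its contextual state summarizes the whole input.
    inputs = tokenizer(text + tokenizer.eos_token, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, dim)
    return hidden[0, -1]  # representation = hidden state of the final (</s>) token

query_vec = embed("what is dense retrieval?")
doc_vec = embed("Dense retrieval encodes queries and documents into vectors ...")
print(float(torch.cosine_similarity(query_vec, doc_vec, dim=0)))
```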
2308.07124 | 59 | Yi Hu, Haotong Yang, Zhouchen Lin, and Muhan Zhang. Code prompting: a neural symbolic method for complex reasoning in large language models. arXiv preprint arXiv:2305.18507, 2023.
Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. Summarizing source code using a neural attention model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 2073–2083, 2016.
Srinivasan Iyer, Xi Victoria Lin, Ramakanth Pasunuru, Todor Mihaylov, Daniel Simig, Ping Yu, Kurt Shuster, Tianlu Wang, Qing Liu, Punit Singh Koura, Xian Li, Brian O'Horo, Gabriel Pereyra, Jeff Wang, Christopher Dewan, Asli Celikyilmaz, Luke Zettlemoyer, and Ves Stoyanov. Opt-iml: Scaling language model instruction meta learning through the lens of generalization. arXiv preprint arXiv:2212.12017, 2022. URL https://arxiv.org/abs/2212.12017. | 2308.07124#59 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 59 | Algorithm 1: One-by-One
input: agents number N, discuss turn T, a group of debate agents [D1, · · · , DN], chat history of each agent [H1, · · · , HN], answer extractor (either majority vote or average score) EXT
output: Final results for text evaluation ANS
1  for t ← 0 to T do
2      for n ← 1 to N do
3          hn ← Dn(Hn);  // utilize agents to generate responses
4          for m ← n to N do
5              if m > 1 then
6                  Hm ← Hm + hn;  // concatenate current response to later agents' chat history
7              end
8          end
9      end
10 end
11 ANS ← EXT([H1, · · · , HN]);
12 return ANS;
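A minimal Python sketch of the one-by-one loop above. The agents are assumed to be callables that map a chat history to a new response (e.g., thin wrappers around an LLM API), and extract_answer stands in for EXT; the "m > 1" guard from the pseudocode is dropped in this simplified version.

```python
def one_by_one_debate(agents, histories, turns, extract_answer):
    n_agents = len(agents)
    for _ in range(turns):
        for n in range(n_agents):
            response = agents[n](histories[n])      # h_n <- D_n(H_n)
            for m in range(n, n_agents):            # current and later agents see it
                histories[m] = histories[m] + [response]
    return extract_answer(histories)
```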
Algorithm 2: Simultaneous-Talk input: agents number N, discuss turn T, a group of debate agents [D1, · · · , DN], chat history | 2308.07201#59 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
2308.07107 | 60 | Task-aware Retriever. While the aforementioned studies primarily focus on using LLMs as text embedding models for downstream retrieval tasks, retrieval performance can be greatly enhanced when task-specific instructions are integrated. For example, TART [132] devises a task-aware retrieval model that introduces a task-specific instruction before the question. This instruction includes descriptions of the task's intent, domain, and desired retrieved unit. For instance, given that the task is question-answering, an effective prompt might be "Retrieve a Wikipedia text that answers this question. {question}". Here, "Wikipedia" (domain) indicates the expected source of retrieved documents, "text" (unit) suggests the type of content to retrieve, and "answers this question" (intent) demonstrates the intended relationship between the retrieved texts and the question. This approach can take advantage of the powerful language modeling capability and extensive knowledge of LLMs to precisely capture the user's search intents across various retrieval tasks. (A sketch of this instruction-prefixing step follows this record.) Considering the efficiency of retrievers, it first fine-tunes a TART-full model with cross-encoder archi- | 2308.07107#60 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
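A small sketch of the TART-style instruction prefix described above. The helper simply composes the intent/domain/unit description with the user question before it is handed to whatever query encoder is in use; the wording is paraphrased from the example in the chunk.

```python
def build_task_aware_query(question: str, intent: str, domain: str, unit: str) -> str:
    # Instruction pattern: "Retrieve a <domain> <unit> that <intent>."
    instruction = f"Retrieve a {domain} {unit} that {intent}."
    return f"{instruction} {question}"

query = build_task_aware_query(
    question="Who wrote The Old Man and the Sea?",
    intent="answers this question",
    domain="Wikipedia",
    unit="text",
)
print(query)
# -> "Retrieve a Wikipedia text that answers this question. Who wrote The Old Man and the Sea?"
```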
2308.07124 | 60 | Mingi Jeon, Seung-Yeop Baik, Joonghyuk Hahn, Yo-Sub Han, and Sang-Ki Ko. Deep Learning-based Code Complexity Prediction. openreview, 2022.
Nan Jiang, Kevin Liu, Thibaud Lutellier, and Lin Tan. Impact of code language models on automated program repair. arXiv preprint arXiv:2302.05020, 2023.
Tae-Hwan Jung. Commitbert: Commit message generation using pre-trained programming language model. arXiv preprint arXiv:2105.14242, 2021.
Mohammad Abdullah Matin Khan, M Saiful Bari, Xuan Long Do, Weishi Wang, Md Rizwan Parvez, and Shafiq Joty. xcodeeval: A large scale multilingual multitask benchmark for code understanding, generation, translation and retrieval. arXiv preprint arXiv:2303.03004, 2023.
Douwe Kiela, Hamed Firooz, Aravind Mohan, Vedanuj Goswami, Amanpreet Singh, Casey A Fitzpatrick, Peter Bull, Greg Lipstein, Tony Nelli, Ron Zhu, et al. The hateful memes challenge: Competition report. In NeurIPS 2020 Competition and Demonstration Track, pp. 344–360. PMLR, 2021. | 2308.07124#60 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 60 | Algorithm 2: Simultaneous-Talk
input: agents number N, discuss turn T, a group of debate agents [D1, · · · , DN], chat history of each agent [H1, · · · , HN], answer extractor (either majority vote or average score) EXT, buffer BUF
output: Final results for text evaluation ANS
1  for t ← 0 to T do
2      for n ← 1 to N do
3          hn ← Dn(Hn);  // utilize agents to generate responses
4          buf ← buf + hn;  // add the responses in the current turn to the buffer
5      end
6      for n ← 1 to N do
7          Hn ← Hn + buf;  // add the buffer to all agents' chat history
8      end
9  end
10 ANS ← EXT([H1, · · · , HN]);
11 return ANS;
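A minimal Python sketch of the simultaneous-talk variant above: every agent answers in the same turn, the responses are pooled in a buffer, and the whole buffer is then appended to every agent's history. Names are illustrative.

```python
def simultaneous_talk(agents, histories, turns, extract_answer):
    for _ in range(turns):
        buffer = [agent(histories[n]) for n, agent in enumerate(agents)]  # all agents speak
        for n in range(len(agents)):
            histories[n] = histories[n] + buffer                          # broadcast the buffer
    return extract_answer(histories)
```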
Algorithm 3: Simultaneous-Talk-with-Summarizer input : agents number N , discuss turn T , a group of debate agents [D1, · · · , DN ], chat history | 2308.07201#60 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
2308.06921 | 61 | [11] Paul Denny, James Prather, Brett A. Becker, James Finnie-Ansley, Arto Hellas, Juho Leinonen, Andrew Luxton-Reilly, Brent N. Reeves, Eddie Antonio Santos, and Sami Sarsa. 2023. Computing Education in the Era of Generative AI. arXiv:2306.02608 [cs.CY] [12] James Finnie-Ansley, Paul Denny, Brett A Becker, Andrew Luxton-Reilly, and James Prather. 2022. The robots are coming: Exploring the implications of openai codex on introductory programming. In Proceedings of the 24th Australasian Computing Education Conference. 10–19. https://doi.org/10.1145/3511861.3511863 [13] Zhikai Gao, Sarah Heckman, and Collin Lynch. 2022. Who Uses Office Hours? A Comparison of In-Person and Virtual Office Hours Utilization. In Proceedings of the 53rd ACM Technical Symposium on Computer Science Education - Volume 1 (Providence, RI, USA) (SIGCSE 2022). Association for Computing Machinery, New York, NY, USA, 300–306. | 2308.06921#61 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07124 | 61 | Denis Kocetkov, Raymond Li, Loubna Ben Allal, Jia Li, Chenghao Mou, Carlos Muñoz Ferrandis, Yacine Jernite, Margaret Mitchell, Sean Hughes, Thomas Wolf, et al. The stack: 3 tb of permissively licensed source code. arXiv preprint arXiv:2211.15533, 2022.
Andreas Köpf, Yannic Kilcher, Dimitri von Rütte, Sotiris Anagnostidis, Zhi-Rui Tam, Keith Stevens, Abdullah Barhoum, Nguyen Minh Duc, Oliver Stanley, Richárd Nagyfi, et al. Openassistant conversations - democratizing large language model alignment. arXiv preprint arXiv:2304.07327, 2023.
Yuhang Lai, Chengxi Li, Yiming Wang, Tianyi Zhang, Ruiqi Zhong, Luke Zettlemoyer, Wen-tau Yih, Daniel Fried, Sida Wang, and Tao Yu. Ds-1000: A natural and reliable benchmark for data science code generation. In International Conference on Machine Learning, pp. 18319–18345. PMLR, 2023. | 2308.07124#61 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 61 | of each agent [H1, · · · , HN], answer extractor (either majority vote or average score) EXT, buffer BUF, summarizer SUM
output: Final results for text evaluation ANS
1  for t ← 0 to T do
2      for n ← 1 to N do
3          hn ← Dn(Hn);  // utilize agents to generate responses
4          buf ← buf + hn;  // add the responses in the current turn to the buffer
5      end
6      for n ← 1 to N do
7          Hn ← Hn + SUM(BUF);  // add the summarized buffer to all agents' chat history
8      end
9  end
10 ANS ← EXT([H1, · · · , HN]);
11 return ANS;
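A short sketch of the summarizer variant above: instead of broadcasting the raw buffer, a summarizer compresses it first and only the summary is appended to each agent's history. summarize is an illustrative stand-in for SUM.

```python
def simultaneous_talk_with_summarizer(agents, histories, turns,
                                      summarize, extract_answer):
    for _ in range(turns):
        buffer = [agent(histories[n]) for n, agent in enumerate(agents)]
        summary = summarize(buffer)                    # SUM(BUF)
        for n in range(len(agents)):
            histories[n] = histories[n] + [summary]    # agents only see the summary
    return extract_answer(histories)
```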
| 2308.07201#61 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
2308.07107 | 62 | # 4.2.2 Generative Retriever
Traditional IR systems typically follow the "index-retrieval-rank" paradigm to locate relevant documents based on user queries, which has proven effective in practice. However, these systems usually consist of three separate modules: the index module, the retrieval module, and the reranking module. Therefore, optimizing these modules collectively
can be challenging, potentially resulting in sub-optimal retrieval outcomes. Additionally, this paradigm demands additional space for storing pre-built indexes, further burdening storage resources. Recently, model-based generative retrieval methods [135–137] have emerged to address these challenges. These methods move away from the traditional "index-retrieval-rank" paradigm and instead use a unified model to directly generate document identifiers (i.e., DocIDs) relevant to the queries. In these model-based generative retrieval methods, the knowledge of the document corpus is stored in the model parameters, eliminating the need for additional storage space for the index. Existing methods have explored generating document identifiers through fine-tuning and prompting of LLMs [138, 139] | 2308.07107#62 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 62 | Hugo Laurençon, Lucile Saulnier, Thomas Wang, Christopher Akiki, Albert Villanova del Moral, Teven Le Scao, Leandro Von Werra, Chenghao Mou, Eduardo González Ponferrada, Huu Nguyen, et al. The bigscience roots corpus: A 1.6 tb composite multilingual dataset. Advances in Neural Information Processing Systems, 35:31809–31826, 2022.
Joel Lehman, Jonathan Gordon, Shawn Jain, Kamal Ndousse, Cathy Yeh, and Kenneth O Stanley. Evolution through large models. arXiv preprint arXiv:2206.08896, 2022.
Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, and Ziwei Liu. Otter: A multi-modal model with in-context instruction tuning. arXiv preprint arXiv:2305.03726, 2023a.
Hongyu Li, Seohyun Kim, and Satish Chandra. Neural code search evaluation dataset. arXiv preprint arXiv:1908.09804, 2019. | 2308.07124#62 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.06921 | 63 | [15] Sajed Jalil, Suzzana Rafi, Thomas D. LaToza, Kevin Moran, and Wing Lam. 2023. ChatGPT and Software Testing Education: Promises & Perils. In 2023 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW). IEEE. https://doi.org/10.1109/icstw58534.2023.00078 arXiv:2302.03287
[16] Enkelejda Kasneci, Kathrin Sessler, Stefan Küchemann, Maria Bannert, Daryna Dementieva, Frank Fischer, Urs Gasser, Georg Groh, Stephan Günnemann, Eyke Hüllermeier, Stepha Krusche, Gitta Kutyniok, Tilman Michaeli, Claudia Nerdel, Jürgen Pfeffer, Oleksandra Poquet, Michael Sailer, Albrecht Schmidt, Tina Seidel, Matthias Stadler, Jochen Weller, Jochen Kuhn, and Gjergji Kasneci. 2023. ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences 103 (2023), 102274. https://doi.org/10.1016/j.lindif.2023.102274 | 2308.06921#63 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 63 | Fine-tuning LLMs. Given the vast amount of world knowledge contained in LLMs, it is intuitive to leverage them for building model-based generative retrievers. DSI [138] is a typical method that fine-tunes the pre-trained T5 models on retrieval datasets. The approach involves encoding queries and decoding document identifiers directly to perform retrieval. They explore multiple techniques for generating document identifiers and find that constructing semantically structured identifiers yields optimal results. In this strategy, DSI applies hierarchical clustering to group documents according to their semantic embeddings and assigns a semantic DocID to each document based on its hierarchical group. To ensure the output DocIDs are valid and do represent actual documents in the corpus, DSI constructs a trie using all DocIDs and utilizes a constrained beam search during the decoding process. Furthermore, this approach observes that the scaling law, which suggests that larger LMs lead to improved performance, also applies to generative retrievers. (A toy sketch of the DocID trie constraint follows this record.) | 2308.07107#63 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
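A toy sketch of the DocID trie used for constrained decoding in DSI-style generative retrieval, as described above: at each step only tokens that keep the partial output a prefix of some real DocID are allowed. The DocIDs here are toy stand-ins; a real system would plug the allowed-token filter into beam search over the seq2seq model's logits.

```python
from typing import Dict, List

def build_trie(docids: List[List[int]]) -> Dict:
    root: Dict = {}
    for docid in docids:
        node = root
        for tok in docid:
            node = node.setdefault(tok, {})
        node[None] = True              # marks a complete DocID
    return root

def allowed_next_tokens(trie: Dict, prefix: List[int]) -> List[int]:
    node = trie
    for tok in prefix:
        node = node.get(tok, {})
    return [t for t in node if t is not None]

# Semantic DocIDs, e.g. produced by hierarchical clustering of document embeddings.
docids = [[2, 4, 1], [2, 4, 7], [2, 5, 3], [9, 0, 0]]
trie = build_trie(docids)
print(allowed_next_tokens(trie, []))      # [2, 9]
print(allowed_next_tokens(trie, [2, 4]))  # [1, 7]
```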
2308.07124 | 63 | Hongyu Li, Seohyun Kim, and Satish Chandra. Neural code search evaluation dataset. arXiv preprint arXiv:1908.09804, 2019.
Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, et al. Starcoder: may the source be with you! arXiv preprint arXiv:2305.06161, 2023b.
Xueyang Li, Shangqing Liu, Ruitao Feng, Guozhu Meng, Xiaofei Xie, Kai Chen, and Yang Liu. Transrepair: Context-aware program repair for compilation errors. In Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering, pp. 1–13, 2022a.
Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. Competition-level code generation with alphacode. Science, 378(6624):1092–1097, 2022b. | 2308.07124#63 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.06921 | 64 | [17] Majeed Kazemitabaar, Justin Chow, Carl Ka To Ma, Barbara J. Ericson, David Weintrop, and Tovi Grossman. 2023. Studying the Effect of AI Code Generators on Supporting Novice Learners in Introductory Programming. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (Hamburg, Germany) (CHI '23). Association for Computing Machinery, New York, NY, USA, Article 455, 23 pages. https://doi.org/10.1145/3544548.3580919
[18] Hieke Keuning, Johan Jeuring, and Bastiaan Heeren. 2019. A Systematic Literature Review of Automated Feedback Generation for Programming Exercises. ACM Transactions on Computing Education 19, 1 (March 2019), 1–43. https://doi.org/10.1145/3231711 | 2308.06921#64 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 64 | Prompting LLMs. In addition to fine-tuning LLMs for retrieval, it has been found that LLMs (e.g., GPT-series models) can directly generate relevant web URLs for user queries with a few in-context demonstrations [139]. This unique capability of LLMs is believed to arise from their training exposure to various HTML resources. As a result, LLMs can naturally serve as generative retrievers that directly generate document identifiers to retrieve relevant documents for input queries. To achieve this, an LLM-URL [139] model is proposed. It utilizes the GPT-3 text-davinci-003 model to yield candidate URLs. Furthermore, it designs regular expressions to extract valid URLs from these candidates to locate the retrieved documents. (A minimal sketch of this prompt-then-extract loop follows this record.)
To provide a comprehensive understanding of this topic, Table 4 summarizes the common and unique characteristics of the LLM-based retrievers discussed above.
# 4.3 Limitations
Though some efforts have been made for LLM-augmented retrieval, many areas still require more detailed investigation. For example, a critical requirement for retrievers is fast response, whereas existing LLMs suffer from huge parameter counts and long inference times. Addressing this limitation so that retrievers can still respond quickly is a critical task. Moreover, even when employing LLMs to augment datasets (a context | 2308.07107#64 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 64 | Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pp. 74–81, 2004.
Derrick Lin, James Koppel, Angela Chen, and Armando Solar-Lezama. Quixbugs: A multi-lingual program repair benchmark set based on the quixey challenge. In Proceedings Companion of the 2017 ACM SIGPLAN international conference on systems, programming, languages, and applications: software for humanity, pp. 55–56, 2017.
Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. arXiv preprint arXiv:2304.08485, 2023a.
Jiawei Liu, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. Is your code generated by chatgpt really correct? rigorous evaluation of large language models for code generation. arXiv preprint arXiv:2305.01210, 2023b.
Nelson F Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. Lost in the middle: How language models use long contexts. arXiv preprint arXiv:2307.03172, 2023c. | 2308.07124#64 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.06921 | 65 | [19] Mario Konecki, Nikola Kadoic, and Rok Piltaver. 2015. Intelligent assistant for helping students to learn programming. In 2015 38th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO). IEEE, Opatija, Croatia, 924–928. https://doi.org/10.1109/MIPRO.2015.7160406 [20] Juho Leinonen, Paul Denny, Stephen MacNeil, Sami Sarsa, Seth Bernstein, Joanne Kim, Andrew Tran, and Arto Hellas. 2023. Comparing Code Explanations Created by Students and Large Language Models. arXiv:2304.03938 [cs.CY] | 2308.06921#65 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 65 | TABLE 4. The comparison of retrievers that leverage LLMs as the foundation. "KD" is short for "Knowledge Distillation".
Methods          Backbone       Architecture  LLM's tuning
cpt-text [130]   GPT-series     Dense         Pre-training & Fine-tuning
GTR [133]        T5             Dense         Pre-training & Fine-tuning
RepLLaMA [131]   LLaMA          Dense         Fine-tuning
TART-full [132]  T0 & Flan-T5   Dense         Fine-tuning & Prompting
TART-dual [132]  Contriever     Dense         KD & Prompting
DSI [138]        T5             Generative    Fine-tuning
LLM-URL [139]    GPT-3          Generative    Prompting
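To make the Architecture column concrete, here is a toy sketch contrasting the two styles: a dense retriever ranks documents by embedding similarity, whereas a generative retriever emits a document identifier directly. The corpus, the bag-of-characters `embed`, and the hard-coded `generative_retrieve` rule are all hypothetical placeholders; none of them come from the surveyed systems.

```python
from math import sqrt

# Toy corpus of document identifiers and texts (illustrative only).
CORPUS = {
    "D1": "large language models can rewrite queries for search engines",
    "D2": "dense retrieval encodes queries and documents into vectors",
}

def embed(text: str) -> list[float]:
    # Stand-in for an LLM encoder: a tiny bag-of-characters vector.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def dense_retrieve(query: str) -> str:
    # Dense style (e.g., cpt-text, GTR, RepLLaMA, TART): score every document by similarity.
    q = embed(query)
    return max(CORPUS, key=lambda doc_id: cosine(q, embed(CORPUS[doc_id])))

def generative_retrieve(query: str) -> str:
    # Generative style (e.g., DSI, LLM-URL): the model itself outputs an identifier.
    # A hard-coded rule stands in for the trained or prompted LLM.
    return "D2" if "retrieval" in query.lower() else "D1"

if __name__ == "__main__":
    print(dense_retrieve("how does dense retrieval work"),
          generative_retrieve("how does dense retrieval work"))
```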
TABLE 5. Summary of existing LLM-based re-ranking methods. "Enc" and "Dec" denote encoder and decoder, respectively. | 2308.07107#65 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 65 | Tianyang Liu, Canwen Xu, and Julian McAuley. Repobench: Benchmarking repository-level code auto-completion systems. arXiv preprint arXiv:2306.03091, 2023d.
Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. Gpteval: Nlg evaluation using gpt-4 with better human alignment. arXiv preprint arXiv:2303.16634, 2023e.
Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V Le, Barret Zoph, Jason Wei, et al. The flan collection: Designing data and methods for effective instruction tuning. arXiv preprint arXiv:2301.13688, 2023a.
Shayne Longpre, Gregory Yauney, Emily Reif, Katherine Lee, Adam Roberts, Barret Zoph, Denny Zhou, Jason Wei, Kevin Robinson, David Mimno, et al. A pretrainer's guide to training data: Measuring the effects of data age, domain coverage, quality, & toxicity. arXiv preprint arXiv:2305.13169, 2023b.
| 2308.07124#65 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |