doi (string) | chunk-id (int64) | chunk (string) | id (string) | title (string) | summary (string) | source (string) | authors (string) | categories (string) | comment (string, nullable) | journal_ref (string, nullable) | primary_category (string) | published (string) | updated (string) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pertinent to our work is the recent paper by Hellas et al. exploring responses generated by Codex and GPT-3.5 to 150 student help requests from a historical dataset [14]. The data had previously been collected via a platform that allowed students to click a "Request help" button when their code did not pass automated tests. This added their request to a queue that was monitored by a teacher who could respond manually. When assessing the GPT-3.5 model, they found that many of the generated responses were accurate and that 99% of the responses contained source code. Interestingly, the authors | 2308.06921#12 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 12 | The field of IR has witnessed significant advancement with the emergence of various models over time. One such early model is the Boolean model, which employs Boolean logic operators to combine query terms and retrieve documents that satisfy specific conditions [25]. Based on the "bag-of-words" assumption, the vector space model [26] represents documents and queries as vectors in term-based space. Relevance estimation is then performed by assessing the lexical similarity between the query and document vectors. The efficiency of this model is further improved through the effective organization of text content using the inverted index. Moving towards more sophisticated approaches, statistical language models have been introduced to estimate the likelihood of term occurrences and incorporate context information, leading to more accurate and context-aware retrieval [27, 54]. In recent years, the neural IR [30, 55, 56] paradigm has gained considerable attention in the research community. By harnessing the powerful representation capabilities of neural networks, this paradigm can capture semantic relationships between queries and documents, thereby significantly enhancing retrieval performance.
Researchers have identified several challenges with implications for the performance and effectiveness of IR systems, such as query ambiguity and retrieval efficiency. In
6. https://github.com/RUC-NLPIR/LLM4IR-Survey | 2308.07107#12 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
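The chunk above (2308.07107#12) walks through the classic term-based pipeline: bag-of-words vectors, lexical similarity between query and document, and an inverted index for efficiency. Below is a minimal Python sketch of that idea; the toy documents, the TF-IDF weighting, and the cosine scoring are illustrative choices of this sketch, not code from the survey.

```python
# Minimal sketch of the "bag-of-words" vector space idea: documents and queries
# become term vectors, an inverted index narrows the candidate set, and
# relevance is lexical overlap scored here with TF-IDF weighted cosine similarity.
import math
from collections import Counter, defaultdict

docs = {
    "d1": "large language models improve information retrieval",
    "d2": "boolean retrieval combines query terms with logic operators",
    "d3": "neural networks capture semantic relationships between queries and documents",
}

# Build the inverted index and per-document term frequencies.
index = defaultdict(set)
doc_tf = {}
for doc_id, text in docs.items():
    terms = text.split()
    doc_tf[doc_id] = Counter(terms)
    for term in terms:
        index[term].add(doc_id)

n_docs = len(docs)
idf = {t: math.log(n_docs / len(ids)) + 1.0 for t, ids in index.items()}

def score(query: str):
    """Rank only documents that share at least one query term (found via the index)."""
    q_tf = Counter(query.split())
    q_vec = {t: tf * idf.get(t, 0.0) for t, tf in q_tf.items()}
    candidates = set().union(*(index.get(t, set()) for t in q_tf)) if q_tf else set()
    results = []
    for doc_id in candidates:
        d_vec = {t: tf * idf[t] for t, tf in doc_tf[doc_id].items()}
        dot = sum(q_vec.get(t, 0.0) * w for t, w in d_vec.items())
        norm = math.sqrt(sum(w * w for w in q_vec.values())) * math.sqrt(sum(w * w for w in d_vec.values()))
        results.append((doc_id, dot / norm if norm else 0.0))
    return sorted(results, key=lambda x: -x[1])

print(score("neural retrieval of documents"))
```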
2308.07124 | 12 | [Figure 2 graphic; the recoverable task-distribution labels from the bottom panel are: New Features (25.57%), Deprecation (0.28%), Build System/Tooling (1.30%), User Interface (0.88%), Documentation (3.93%), Dependencies (5.38%), Testing (13.32%), Configuration (4.61%), Logging/Instrumentation (0.62%), Release Management (4.14%), Formatting/Linting (0.40%), Bug Fixes (19.02%), Refactoring/Code Cleanup (19.78%), Performance Improvements (0.64%)]
Figure 2: Overview of COMMITPACK and COMMITPACKFT. Top: Language distribution of the full commit data (COMMITPACK) and the variant filtered for high-quality instructions (COMMITPACKFT). See Appendix C for the full distribution. Bottom: Task distribution of commits on the Python subset of COMMITPACKFT (59K samples) according to GPT-4. | 2308.07124#12 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 12 | IMPLEMENTATION DETAILS
We choose to utilize models from OpenAI's GPT family as our LLMs in ChatEval, including GPT-4 and ChatGPT (GPT-3.5-turbo) and set the temperature to 0 to ensure reproducibility. The rationale behind this selection is the exceptional performance these models offer, being among the most advanced and powerful in the world. Additionally, their accessibility and ease of use through APIs enable us to directly call and interact with the models during our research, significantly simplifying the process. In our current research, we focus on homogeneous groups of LLMs. That is, within a given multi-agent group, all LLMs belong to the same GPT family model, either all GPT-4 or all ChatGPT. We acknowledge the potential of heterogeneous groups for future research, which could provide fascinating insights into how strong models and weak models can cooperate in a multi-agent setting.
3.2 BENCHMARKS
The detailed introductions of the different categories and benchmarks are listed as follows: | 2308.07201#12 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
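The chunk above (2308.07201#12) describes the ChatEval setup: a homogeneous group of agents, all drawn from the same GPT-family model and queried with temperature 0 for reproducibility. The sketch below illustrates that configuration; it assumes the v1-style `openai` Python client, and the role prompts are placeholders of this sketch rather than the paper's actual ChatEval prompts.

```python
# A minimal sketch of a homogeneous agent group: every agent uses the same
# GPT-family model, and temperature 0 keeps the calls deterministic.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

MODEL = "gpt-3.5-turbo"  # homogeneous group: the same model for all agents

AGENT_ROLES = [
    "You are a strict critic who focuses on factual accuracy.",
    "You are a reviewer who focuses on helpfulness and clarity.",
]

def agent_reply(system_prompt: str, discussion: str) -> str:
    """One agent turn: same model for every agent, temperature 0 for reproducibility."""
    response = client.chat.completions.create(
        model=MODEL,
        temperature=0,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": discussion},
        ],
    )
    return response.choices[0].message.content

# Example: both agents comment once on a pair of candidate answers.
task = "Question: ...\nAssistant 1's answer: ...\nAssistant 2's answer: ...\nWhich is better and why?"
for role in AGENT_ROLES:
    print(agent_reply(role, task))
```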
2308.06921 | 13 | CodeHelp: Using Large Language Models with Guardrails
characterise the language model as an "unreliable tutor" that has a "penchant for blurting out model solutions even when you directly ask them not to". Again, this work emphasises the need for tools that can provide assistance to students without immediately revealing answers.
Our work differs from these recent studies in several key ways. Our primary contribution is the explicit design of appropriate guardrails to avoid student over-reliance on model-generated code. Like Kazemitabaar et al. [17], we deployed our tool in the classroom; however, our evaluation ran for 12 weeks, and we explore how students interact with it outside of scheduled class sessions. In the dataset used by Hellas et al. [14], students infrequently used the "Request help" button, likely because requests were added to a queue and responded to manually by a teacher. In our work, students receive immediate feedback from CodeHelp at any time of the day or night.
# 3 CODEHELP DESIGN AND IMPLEMENTATION | 2308.06921#13 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 13 | 6. https://github.com/RUC-NLPIR/LLM4IR-Survey
light of these challenges, researchers have directed their attention toward crucial modules within the retrieval process, aiming to address specific issues and effectuate corresponding enhancements. The pivotal role of these modules in ameliorating the IR pipeline and elevating system performance cannot be overstated. In this survey, we focus on the following four modules, which have been greatly enhanced by LLMs.
Query Rewriter is an essential IR module that seeks to improve the precision and expressiveness of user queries. Positioned at the early stage of the IR pipeline, this module assumes the crucial role of refining or modifying the initial query to align more accurately with the user's information requirements. As an integral part of query rewriting, query expansion techniques, with pseudo relevance feedback being a prominent example, represent the mainstream approach to achieving query expression refinement. In addition to its utility in improving search effectiveness across general scenarios, the query rewriter finds application in diverse specialized retrieval contexts, such as personalized search and conversational search, thus further demonstrating its significance.
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
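The chunk above (2308.07107#13) names pseudo relevance feedback as the prominent query-expansion technique in the query rewriter module. Below is a small illustrative Python sketch of that idea; the `initial_search` callable, the stopword list, and the thresholds are assumptions of this sketch, not anything prescribed by the survey.

```python
# Pseudo relevance feedback, sketched: assume the top-ranked documents of an
# initial retrieval run are relevant and append their most frequent terms to
# the query before a second retrieval pass.
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for"}

def expand_query(query: str, initial_search, k_docs: int = 3, k_terms: int = 5) -> str:
    """Expand `query` with frequent terms from the top-k documents of a first-pass search."""
    top_docs = initial_search(query)[:k_docs]  # texts of the top-ranked documents
    query_terms = set(query.lower().split())
    term_counts = Counter(
        term
        for doc in top_docs
        for term in doc.lower().split()
        if term not in STOPWORDS and term not in query_terms
    )
    expansion = [term for term, _ in term_counts.most_common(k_terms)]
    return query + " " + " ".join(expansion)

# Usage with any first-stage retriever that returns document texts:
# expanded = expand_query("llm retrieval", my_bm25_search)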
2308.07124 | 13 | Dataset (↓), base dataset vs. subset statistics: xP3x: base 8 languages, 532,107,156 samples, 0.67% code; subset 8 languages, 5,000 samples, 100% code. StarCoder Self-Instruct: base 12 languages, 5,003 samples, 100% code; subset 12 languages, 5,003 samples, 100% code. OASST: base 49 languages, 161,443 samples, 0.9% code; subset 28 languages, 8,587 samples, 2.5% code. COMMITPACKFT: base 350 languages, 742,273 samples, 100% code; subset 6 languages, 5,000 samples, 100% code.
Table 1: Statistics of code instruction data we consider. We display the number of programming languages, total samples, and fraction of samples that contain code for permissive instruction datasets. For finetuning on these datasets, we use small subsets with around 5,000 samples each.
languages during instruction tuning boosts their performance further. We hypothesize that code data exhibits the same behavior. To improve performance on code-related tasks, we thus construct a code instruction dataset leveraging the natural structure of Git commits. | 2308.07124#13 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 13 | 3.2 BENCHMARKS
The detailed introductions of the different categories and benchmarks are listed as follows:
Open-ended Question Answer is a key component within the field of NLP and generative AI. It necessitates an AI system to provide comprehensive, detailed, and human-like responses to questions that don't have a predefined or fixed set of possible answers. The work of Chiang et al. (2023) encompasses a collection of 80 open-ended questions originating from a wide array of categories, including common-sense, counterfactual, coding, etc. We then take the human annotation results from Wu et al. (2023) to conduct the experiments in this paper. For each question, they direct three annotators to evaluate the replies given by Vicuna-13B and ChatGPT through the given rules and finally derive the results by the majority votes among the annotators.
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
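The chunk above (2308.07201#13) derives the human reference label per question by a majority vote over three annotators. A tiny illustrative sketch of that aggregation follows; the annotation values are made-up placeholders.

```python
# Majority-vote aggregation of per-question annotator verdicts.
from collections import Counter

def majority_vote(labels):
    """Return the most common label among annotators (e.g. 'Vicuna-13B', 'ChatGPT', 'Tie')."""
    return Counter(labels).most_common(1)[0][0]

annotations = {
    "q1": ["ChatGPT", "ChatGPT", "Vicuna-13B"],
    "q2": ["Vicuna-13B", "Tie", "Vicuna-13B"],
}
final_labels = {q: majority_vote(votes) for q, votes in annotations.items()}
print(final_labels)  # {'q1': 'ChatGPT', 'q2': 'Vicuna-13B'}
```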
2308.06921 | 14 | # 3 CODEHELP DESIGN AND IMPLEMENTATION
We designed CodeHelp to augment and complement the learning support students receive from instructors and teaching assistants. We aimed to provide a tool in which a student could 1) request help with issues they face in programming activities and 2) immediately receive a helpful response that provides guidance and explanation without providing a complete solution. To accomplish this, we created CodeHelp with a simple, clear interface for students (Sec. 3.1); developed a workflow of multiple LLM prompts to generate the desired responses, with guardrails, from a student's input (Sec. 3.2); and implemented features specifically for instructors to manage and observe their students' usage (Sec. 3.3). For broad accessibility, CodeHelp is implemented as a web application; it is accessible at https://codehelp.app/.
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 14 | Retriever, as discussed here, is typically employed in the early stages of IR for document recall. The evolution of retrieval technologies reflects a constant pursuit of more effective and efficient methods to address the challenges posed by ever-growing text collections. In numerous experiments on IR systems over the years, the classical "bag-of-words" model BM25 [29] has demonstrated its robust performance and high efficiency. In the wake of the neural IR paradigm's ascendancy, prevalent approaches have primarily revolved around projecting queries and documents into high-dimensional vector spaces, and subsequently computing their relevance scores through inner product calculations. This paradigmatic shift enables a more efficient understanding of query-document relationships, leveraging the power of vector representations to capture semantic similarities.
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
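The chunk above (2308.07107#14) describes the dense-retrieval paradigm: queries and documents are projected into a shared vector space and relevance is an inner product. The numpy sketch below illustrates only the scoring mechanics; the `encode` function is a random stand-in for a real neural dual encoder, not part of the survey.

```python
# Dense retrieval scoring sketch: pre-encode documents offline, encode the
# query online, and rank by inner product.
import numpy as np

rng = np.random.default_rng(0)

def encode(texts, dim=128):
    """Placeholder encoder: returns one L2-normalized vector per text."""
    vectors = rng.normal(size=(len(texts), dim))
    return vectors / np.linalg.norm(vectors, axis=1, keepdims=True)

documents = ["doc one ...", "doc two ...", "doc three ..."]
doc_matrix = encode(documents)            # (num_docs, dim), built offline

query_vector = encode(["user query ..."])[0]
scores = doc_matrix @ query_vector        # inner products as relevance scores
ranking = np.argsort(-scores)             # best-matching documents first
print([documents[i] for i in ranking])
```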
2308.07124 | 14 | COMMITPACK To construct the dataset, we use commit metadata from the GitHub action dump on Google BigQuery.3 We apply several quality filters, filter for commercially-friendly licenses, and discard all commits that affect more than a single file to ensure commit messages are very specific and to avoid additional complexity from dealing with multiple files. We use the filtered metadata to scrape the affected code files prior to and after the commit from GitHub. This leads to close to 4 terabytes of data covering 350 programming languages (COMMITPACK). As instruction tuning does not necessarily require so much data (Zhou et al., 2023a; Touvron et al., 2023), we apply several
3. https://www.gharchive.org/
strict filters to reduce the dataset to 2 gigabytes (COMMITPACKFT). These strict filters include filtering for samples where the commit message has specific words in uppercase imperative form at the start (e.g. "Verify ..."), consists of multiple words and does not contain external references. All filters are detailed in Appendix D. Figure 2 depicts the distribution of both datasets and the tasks contained in COMMITPACKFT. For instruction tuning our models, we select 5,000 random samples from COMMITPACKFT across the 6 programming languages that we evaluate on. | 2308.07124#14 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
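The chunk above (2308.07124#14) summarizes the strict COMMITPACKFT filters: the commit message must start with specific uppercase imperative-form words, consist of multiple words, and contain no external references. The Python sketch below is a hedged illustration of that kind of filter; the word list, regex, and thresholds are assumptions of this sketch, and the actual filters are detailed in Appendix D of the paper.

```python
# Illustrative commit-message filter in the spirit of the COMMITPACKFT rules.
import re

IMPERATIVE_STARTS = {"Add", "Fix", "Remove", "Update", "Verify", "Refactor"}  # illustrative subset
EXTERNAL_REF = re.compile(r"(https?://|#\d+|@\w+)")  # URLs, issue numbers, user mentions

def keep_commit_message(message: str) -> bool:
    words = message.strip().split()
    if len(words) < 2:                      # must consist of multiple words
        return False
    if words[0] not in IMPERATIVE_STARTS:   # must start with an imperative-form word
        return False
    if EXTERNAL_REF.search(message):        # must not contain external references
        return False
    return True

print(keep_commit_message("Verify the upload size before saving"))  # True
print(keep_commit_message("fixed stuff, see #123"))                 # False
```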
Dialogue Response Generation is a task that involves creating a coherent and contextually appropriate response to a given input dialogue. We draw upon the Topical-Chat (Gopalakrishnan et al., 2019) dataset for our study. We then take the human annotation results from Mehri & Eskenazi (2020), where they carry out the annotations on 60 dialogue contexts with each response generated by 6 different systems. Human evaluators analyzed these responses based on naturalness, coherence, engagingness, groundedness, and understandability, where we take the first four dimensions for experiments in our paper following Zhong et al. (2022).
# 3.3 BASELINES | 2308.07201#14 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
3.1 Student Interfaces CodeHelp's student interfaces are simple, with minimal choices and clear guidance. Students accessing CodeHelp are brought directly to the Help Request form, shown in Figure 1. We opted for a structured input, organizing it into several specific fields rather than having a single free-form text input. This both provides guidance to students about what information is typically needed for an effective query and gives more context and structure to the prompt that is ultimately fed to an LLM, which increases the chances of supporting the student successfully. Moreover, the structured input provides students an opportunity to practice asking technical questions, providing the necessary relevant context.
Students are asked to provide:
⢠The programming language in which they are working. The instructor can set a class-wide initial default, and the form then defaults to each studentâs most recently selected language.
⢠The relevant snippet of code. This is optional, as not all queries reference existing code.
⢠The error message. This is optional as well. If an error message is provided, the underlying LLM is prompted to explain the error message to the student. | 2308.06921#15 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
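The chunk above (2308.06921#15) explains that CodeHelp gathers four structured inputs (language, code, error message, issue) and feeds them, with more context and structure, into the prompt for the LLM. The sketch below illustrates how such structured fields could be assembled into a single prompt; the template wording is an assumption for illustration, not CodeHelp's actual prompt pipeline.

```python
# Assemble the four structured Help Request fields into one LLM prompt,
# keeping optional fields optional and ending with a no-solution instruction.
def build_help_prompt(language: str, code: str, error: str, issue: str) -> str:
    parts = [
        f"The student is working in {language}.",
        f"Relevant code:\n{code}" if code.strip() else "No code was provided.",
        f"Error message:\n{error}" if error.strip() else "No error message was provided.",
        f"Student's question: {issue}",
        "Give guidance and explanation, but do not reveal a complete solution.",
    ]
    return "\n\n".join(parts)

prompt = build_help_prompt(
    language="Python",
    code="for i in range(10)\n    print(i)",
    error="SyntaxError: expected ':'",
    issue="Why does my loop not run?",
)
print(prompt)
```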
2308.07107 | 15 | Reranker, as another crucial module in the retrieval pipeline, primarily focuses on fine-grained reordering of documents within the retrieved document set. Different from the retriever, which emphasizes the balance of efficiency and effectiveness, the reranker module places a greater emphasis on the quality of document ranking. In pursuit of enhancing the search result quality, researchers delve into more complex matching methods than the traditional vector inner product, thereby furnishing richer matching signals to the reranker. Moreover, the reranker facilitates the adoption of specialized ranking strategies tailored to meet distinct user requirements, such as personalized and diversified search results. By integrating domain-specific objectives, the reranker module can deliver tailored and purposeful search results, enhancing the overall user experience.
Reader has evolved as a crucial module with the rapid development of LLM technologies. Its ability to comprehend real-time user intent and generate dynamic responses based on the retrieved text has revolutionized the presentation of IR results. In comparison to presenting a list of candidate
documents, the reader module organizes answer texts more intuitively, simulating the natural way humans access information. To enhance the credibility of generated responses, the integration of references into generated responses has been an effective technique of the reader module.
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
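The chunk above (2308.07107#15) places the reranker after first-stage retrieval, applying a more expensive, fine-grained query-document matcher to reorder a candidate list. The sketch below illustrates only that reordering step; `cross_encoder_score` is a placeholder stand-in for any learned relevance model (here simple term overlap), not an API from the survey.

```python
# Rerank a candidate list from a first-stage retriever with a finer-grained
# query-document scoring function.
def cross_encoder_score(query: str, document: str) -> float:
    """Placeholder for a learned relevance model; here, simple term overlap."""
    q_terms, d_terms = set(query.lower().split()), set(document.lower().split())
    return len(q_terms & d_terms) / (len(q_terms) or 1)

def rerank(query: str, candidates: list[str], top_k: int = 10) -> list[str]:
    scored = sorted(candidates, key=lambda doc: cross_encoder_score(query, doc), reverse=True)
    return scored[:top_k]

first_stage = [
    "neural rerankers reorder candidates",
    "unrelated cooking recipe",
    "query document matching signals",
]
print(rerank("reorder candidates with matching signals", first_stage, top_k=2))
```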
Alternatives We consider three additional datasets for instruction tuning presented in Table 1. xP3x: xP3x is a large-scale collection of multilingual instruction data with around 532 million samples (Muennighoff et al., 2022b). We focus only on the code subset of xP3x, excluding NeuralCodeSearch (Li et al., 2019) which is not licensed permissively, and select 5,000 samples. Self-Instruct: Using the Self-Instruct method (Wang et al., 2022a) and the StarCoder model (Li et al., 2023b), we create 5,003 synthetic instructions and corresponding answers. OASST: OASST is a diverse dataset of multi-turn chat dialogues (Köpf et al., 2023). While most dialogues center around natural language, some also contain code. We reuse a filtered variant of OASST from prior work (Dettmers et al., 2023) and additionally filter out moralizing assistant answers (Appendix D) leading to 8,587 samples.
# 3 HUMANEVALPACK: EVALUATING INSTRUCTION TUNED CODE MODELS | 2308.07124#15 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
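The OctoPack rows above repeatedly quote HumanEval pass@1 numbers (e.g. 46.2%). For context, the sketch below implements the standard unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021); it is background for interpreting those numbers, not code from OctoPack itself.

```python
# Unbiased pass@k estimate for one problem: n generated samples, c of which pass.
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Estimate pass@k as 1 - C(n-c, k) / C(n, k), computed in a numerically stable way."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

print(pass_at_k(n=20, c=3, k=1))  # with k == 1 this reduces to c / n = 0.15
```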
2308.07201 | 15 |
# 3.3 BASELINES
We evaluate ChatEval against the following methods. As the main portion of our comparison, we primarily focus on the single-agent-based method. Single-Agent means that we directly query an LLM to generate the response towards the evaluation3. We use Multi-Agent to represent ChatEval, where several agents discuss towards the evaluation. By default, we configure the communication strategy to one-by-one, agent numbers to 2, and discussion turns to 2 in this section and employ position calibration techniques in both single-agent and multi-agent settings. We will discuss more debate configurations in Section 4 for completeness. For the open-ended question answer task, we also compare our method with FairEval (Wang et al., 2023b). They propose various strategies to improve the evaluation performance of an LLM, including Multiple Evidence Calibration (MEC) and Balanced Position Calibration (BPC). For the dialogue response generation task, we also compare our method with G-EVAL (Liu et al., 2023b). They utilize CoT and probability-weighted summation for their method. Additionally, we include results from n-gram-based metrics, such as ROUGE (Lin, 2004) and BLEU (Papineni et al., 2002), and embedding-based metrics such as BERTScore (Zhang et al., 2019).
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
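The ChatEval chunk in the row above fixes a default debate configuration: one-by-one communication, two agents, and two discussion turns. The sketch below illustrates only that loop structure; `call_llm`, the agent roles, and the prompt wording are assumptions for illustration, not ChatEval's actual implementation.

```python
# Minimal sketch of a one-by-one debate loop with 2 agents and 2 turns,
# mirroring the default configuration described in the chunk above.

def call_llm(system_prompt: str, transcript: str) -> str:
    # Hypothetical stand-in for a real chat-model API call.
    return "I lean towards Response A; it answers the question more directly."

def debate(question: str, response_a: str, response_b: str,
           roles=("Critic", "General Public"), turns: int = 2) -> str:
    transcript = (f"Question: {question}\n"
                  f"Response A: {response_a}\n"
                  f"Response B: {response_b}\n")
    for turn in range(1, turns + 1):
        for role in roles:  # one-by-one: agents speak in a fixed order each turn
            system = (f"You are the {role} in a referee team. Discuss which "
                      f"response is better and state a verdict (A, B, or tie).")
            reply = call_llm(system, transcript)
            transcript += f"[{role}, turn {turn}] {reply}\n"
    return transcript  # a final verdict would be extracted from this discussion

print(debate("What is overfitting?", "Answer 1 ...", "Answer 2 ..."))
```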
2308.06921 | 16 |
• The error message. This is optional as well. If an error message is provided, the underlying LLM is prompted to explain the error message to the student.
[Screenshot of the Help Request form: labeled text areas for Language, Code, Error Message, and Issue/Question, each with guidance text, and a Submit Request button; see the Figure 1 caption below.]
Figure 1: The Help Request form (text areas have been shrunk here to save space). The four separate inputs (language, code, error, and issue) and connected guidance text help students structure their request and encourage good practices when requesting support. | 2308.06921#16 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
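The CodeHelp chunk in the row above describes a help request made of four structured inputs (language, code, error message, and issue). Below is a minimal sketch of how such a request could be represented and lightly checked before submission; the field names and checks are illustrative assumptions, not CodeHelp's actual schema.

```python
from dataclasses import dataclass

@dataclass
class HelpRequest:
    language: str        # e.g. "Python"
    code: str = ""       # only the most relevant part of the student's code (optional)
    error: str = ""      # error message, if any (optional)
    issue: str = ""      # the student's question or description of the issue

    def missing_fields(self) -> list:
        """Names of inputs the student may still want to fill in."""
        missing = []
        if not self.issue.strip():
            missing.append("issue")
        if self.error and not self.code.strip():
            missing.append("code")  # an error is hard to explain without the related code
        return missing

req = HelpRequest(language="Python", error="NameError: name 'total' is not defined")
print(req.missing_fields())  # ['issue', 'code']
```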
2308.07107 | 16 |
Furthermore, researchers explore unifying the above modules to develop a novel LLM-driven search model known as the Search Agent. The search agent is distinguished by its simulation of an automated search and result understanding process, which furnishes users with accurate and readily comprehensible answers. WebGPT [24] serves as a pioneering work in this category, which models the search process as a sequence of actions of an LLM-based agent within a search engine environment, autonomously accomplishing the whole search pipeline. By integrating the existing search stack, search agents have the potential to become a new paradigm in future IR.
# 2.2 Large Language Models
Language models (LMs) are designed to calculate the generative likelihood of word sequences by taking into account the contextual information from preceding words, thereby predicting the probability of subsequent words. Consequently, by employing certain word selection strategies (such as greedy decoding or random sampling), LMs can proficiently generate natural language texts. Although the primary objective of LMs lies in text generation, recent studies [57] have revealed that a wide array of natural language processing problems can be effectively reformulated into a text-to-text format, thus rendering them amenable to resolution through text generation. This has led to LMs becoming the de facto solution for the majority of text-related problems. | 2308.07107#16 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
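The survey chunk in the row above mentions greedy decoding and random sampling as word-selection strategies. The toy sketch below contrasts the two over a hand-written next-token distribution; no actual language model is involved.

```python
import random

# Toy next-token distribution an LM might assign after the prefix "information".
next_token_probs = {"retrieval": 0.55, "systems": 0.25, "theory": 0.15, "overload": 0.05}

def greedy_choice(probs):
    # Greedy decoding: always pick the single most probable token.
    return max(probs, key=probs.get)

def sampled_choice(probs):
    # Random sampling: draw a token in proportion to its probability.
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(greedy_choice(next_token_probs))   # always "retrieval"
print(sampled_choice(next_token_probs))  # usually "retrieval", occasionally another token
```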
2308.07124 | 16 |
HumanEvalPack. Languages: Python, JavaScript, Java, Go, C++, Rust; Subtasks: HumanEvalFix, HumanEvalExplain, HumanEvalSynthesize; Metric: Pass@k; Creation: Humans.
[Garbled screenshot text from the HumanEvalPack overview figure (Figure 3), showing the has_close_elements HumanEval problem across the Fix Code, Explain Code, and Synthesize Code panels; omitted.] | 2308.07124#16 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
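The HumanEvalPack chunk in the row above lists pass@k as the metric. Below is a sketch of the standard unbiased pass@k estimator popularized by the original HumanEval work; whether OctoPack uses exactly this estimator and these sample counts is an assumption here.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate for one problem: n generated samples, c of them correct."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 20 samples per problem; number of samples passing the tests for each problem.
passing_counts = [3, 0, 20, 7]
pass_at_1 = sum(pass_at_k(20, c, 1) for c in passing_counts) / len(passing_counts)
print(round(pass_at_1, 3))
```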
2308.06921 | 17 |
[Screenshot of the Response View: a student's Python help request and the generated response explaining how to use the range function in a for loop, with a "Was this helpful? Yes / No" feedback control; see the Figure 2 caption below.]
Figure 2: The Response View. Students are shown the details of their request followed by the system's response. A prominent warning reminds students that the response may be incorrect. A simple feedback form allows the students to indicate whether the answer was helpful. | 2308.06921#17 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 17 |
The evolution of LMs can be categorized into four primary stages, as discussed in prior literature [52]. Initially, LMs were rooted in statistical learning techniques and were termed statistical language models. These models tackled the issue of word prediction by employing the Markov assumption to predict the subsequent word based on preceding words. Thereafter, neural networks, particularly recurrent neural networks (RNNs), were introduced to calculate the likelihood of text sequences and establish neural language models. These advancements made it feasible to utilize LMs for representation learning beyond mere word sequence modeling. ELMo [58] first proposed to learn contextualized word representations through pre-training a bidirectional LSTM (biLSTM) network on large-scale corpora, followed by fine-tuning on specific downstream tasks. Similarly, BERT [59] proposed to pre-train a Transformer [60] encoder with a specially designed Masked Language Modeling (MLM) task and Next Sentence Prediction (NSP) task on large corpora. These studies initiated a new era of pre-trained language models (PLMs), with the "pre-training then | 2308.07107#17 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
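The survey chunk in the row above mentions BERT's Masked Language Modeling pre-training task. The sketch below shows only the data-side masking step; the 15% masking rate and the single [MASK] replacement strategy are simplifying assumptions.

```python
import random

def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]", seed=0):
    """Randomly hide some tokens; the model would be trained to predict the hidden originals."""
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            masked.append(mask_token)
            targets[i] = tok
        else:
            masked.append(tok)
    return masked, targets

tokens = "language models are pre-trained on large scale corpora".split()
print(mask_tokens(tokens))
```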
2308.07124 | 17 |
[Garbled screenshot text from the HumanEvalPack overview figure (Figure 3): fragments of the has_close_elements implementation, the instruction to provide a concise natural language description, and test assertions; omitted.] | 2308.07124#17 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 17 |
We adopt the same evaluation approach as Wang et al. (2023b) to assess the annotation results produced by different methods and annotators. Specifically, we calculate the Accuracy (Acc.), which measures the proportion of correctly classified instances out of the total instances, and the Kappa correlation coefficient (Kap.) (McHugh, 2012), which gauges the agreement between results from models and human annotators while taking into account the possibility of agreement occurring by chance. Both metrics provide insights into the reliability and consistency of the annotations. We take the human annotation results and FairEval's (Wang et al., 2023b) best results from their paper. As shown in Table 1, different annotators can reach a relatively high agreement and perform better than any other LLM-based approach. Still, the average human annotation accuracy of 71.7% shows that there is a certain degree of discrepancy among individual annotators, revealing that text evaluation is an arduous task. The second and third parts of Table 1 show the results of FairEval's method and the results of our proposed method, respectively. We find that (1) ChatEval can enhance the performance of the evaluation process, achieving higher alignment with | 2308.07201#17 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
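The ChatEval chunk in the row above reports accuracy and the kappa coefficient against human annotations. The sketch below computes both for a toy set of labels (Cohen's kappa between one system and one human reference); it is illustrative only and not the paper's evaluation script.

```python
from collections import Counter

def accuracy(pred, gold):
    return sum(p == g for p, g in zip(pred, gold)) / len(gold)

def cohens_kappa(pred, gold):
    n = len(gold)
    observed = accuracy(pred, gold)
    pred_counts, gold_counts = Counter(pred), Counter(gold)
    # Chance agreement: probability both sides pick the same label independently.
    expected = sum(pred_counts[l] * gold_counts[l] for l in set(pred) | set(gold)) / (n * n)
    return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

system = ["A", "B", "A", "Tie", "B"]
human  = ["A", "B", "B", "Tie", "B"]
print(accuracy(system, human), round(cohens_kappa(system, human), 3))  # 0.8 0.688
```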
2308.06921 | 18 |
• The question or description of the issue with which the student needs help.
After submitting a request for help, the student is brought to the response view, an example of which is shown in Figure 2. This view displays the query (for reference) and the generated response. Because there is a chance the LLM may generate an incorrect or confusing answer (discussed further in Section 4), a warning reminder is displayed prominently above every response. A simple feedback form allows the student to note whether the response was helpful. The query, response, and any feedback are stored for the student's future reference and made available to the instructor.
In practice, students do not always provide sufficient information or context to provide accurate assistance. CodeHelp attempts to | 2308.06921#18 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
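The CodeHelp chunk in the row above explains that an LLM both judges whether a request needs clarification and still produces an attempted answer. Below is a heavily simplified sketch of that two-call flow; `call_llm` and the prompt wording are placeholders, not CodeHelp's actual prompts or pipeline.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call; returns a canned reply here.
    return "OK"

def handle_request(language: str, code: str, error: str, issue: str) -> dict:
    request = f"Language: {language}\nCode:\n{code}\nError: {error}\nIssue: {issue}"

    # Call 1: decide whether the request is missing information and what to ask for.
    clarification = call_llm(
        "If this help request lacks information needed to assist the student, "
        "say what should be clarified; otherwise reply exactly 'OK'.\n\n" + request)

    # Call 2: always attempt a response so the student is never left without support.
    answer = call_llm(
        "Help the student understand their issue without giving a complete solution.\n\n"
        + request)

    needs_clarification = clarification.strip().upper() != "OK"
    return {"clarification": clarification if needs_clarification else None,
            "response": answer}

print(handle_request("Python", "for i in range(10) print(i)", "SyntaxError", "Why does this crash?"))
```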
2308.07107 | 18 |
tion (NSP) task on large corpora. These studies initiated a new era of pre-trained language models (PLMs), with the "pre-training then fine-tuning" paradigm emerging as the prevailing learning approach. Along this line, numerous generative PLMs (e.g., GPT-2 [33], BART [61], and T5 [57]) have been developed for text generation problems including summarization, machine translation, and dialogue generation. Recently, researchers have observed that increasing the scale of PLMs (e.g., model size or data amount) can | 2308.07107#18 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 18 |
[Garbled screenshot text from the HumanEvalPack overview figure (Figure 3): fragments of the has_close_elements docstring, example calls, and test assertions; omitted.] | 2308.07124#18 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 18 |
method and the results of our proposed method respectively. We find that (1) ChatEval can enhance the performance of the evaluation process, achieving higher alignment with human preference compared with single-agent evaluation. Specifically, the multi-agent-based method improves the accuracy by 6.2% for ChatGPT and 2.5% for GPT-4; (2) ChatEval surpasses FairEval's best results within both ChatGPT and GPT-4 settings showing the effectiveness of our proposed method. | 2308.07201#18 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
2308.06921 | 19 |
In practice, students do not always provide sufficient information or context to provide accurate assistance. CodeHelp attempts to
[Screenshot of an example response (Figure 3): a clarification request asking the student to provide the code defining their DataFrame, followed by an attempted response explaining a KeyError on a 'State' column and the use of & rather than / to combine conditions; garbled screenshot text omitted.] | 2308.06921#19 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 19 |
[Garbled rendering of a timeline diagram of encoder-decoder and decoder-only LLMs released between 2019 and 2023; omitted.]
Fig. 2. The evolution of LLMs (encoder-decoder and decoder-only structures).
consistently improve their performance on downstream tasks (a phenomenon commonly referred to as the scaling law [62, 63]). Moreover, large-sized PLMs exhibit promising abilities (termed emergent abilities [42]) in addressing complex tasks, which are not evident in their smaller counterparts. Therefore, the research community refers to these large-sized PLMs as large language models (LLMs). | 2308.07107#19 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 19 |
[Garbled screenshot text from the HumanEvalPack overview figure (Figure 3): fragments of the has_close_elements docstring, the "Write functional code in Python according to the description" and "Fix bugs in has_close_elements" instructions, and the model input / target output panels; omitted.] | 2308.07124#19 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.06921 | 20 | Figure 3: Example response with a request for clarification (the yellow panel in the middle). The retry button takes the student to a help request form pre-filled with the same query.
determine whether each request is lacking in this way, and if so, it presents the student with a request for clarification as shown in Figure 3. The clarification request attempts to help the student identify what additional information is needed. The determination and clarification request are generated by an LLM as well (described in Section 3.2), and because it could be incorrect, the student is also given a response to their request as written. This is mostly done to prevent students becoming stuck in a series of clarification requests without receiving any support. When a clarification is requested, the system describes the main response as an "attempt" at a response to indicate to the student that it may be less accurate given the missing information.
# 3.2 Generating Responses
We designed CodeHelp to generate responses to student requests that are similar to those of a human tutor or instructor helping a student in a one-on-one session. Specifically, our goals for the responses were:
⢠Provide explanations and guidance to support the student in their learning.
⢠Never include complete solutions that the student can copy without thinking or learning.
⢠Identify incomplete or ambiguous queries and prompt the student for additional information. | 2308.06921#20 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
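The CodeHelp chunk in the row above lists three goals for generated responses. One way to picture such goals is as a system prompt plus a chat-style message list; the wording and structure below are illustrative assumptions and not the prompts CodeHelp actually uses.

```python
# Illustrative system prompt reflecting the three stated goals; not CodeHelp's real prompt.
TUTOR_SYSTEM_PROMPT = """\
You are a tutor helping a student in an introductory programming course.
- Give explanations and guidance that support the student's learning.
- Never include complete solution code the student could copy without thinking.
- If the request is incomplete or ambiguous, ask for the missing information.
"""

def build_messages(student_request: str) -> list:
    """Assemble a generic chat-completion style message list."""
    return [{"role": "system", "content": TUTOR_SYSTEM_PROMPT},
            {"role": "user", "content": student_request}]

print(build_messages("My loop prints nothing. What should I check?"))
```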
2308.07107 | 20 |
As shown in Figure 2, existing LLMs can be categorized into two groups based on their architectures: encoder-decoder [57, 61, 64–69] and decoder-only [33–35, 70–80] models. The encoder-decoder models incorporate an encoder component to transform the input text into vectors, which are then employed for producing output texts. For example, T5 [57] is an encoder-decoder model that converts each natural language processing problem into a text-to-text form and resolves it as a text generation problem. In contrast, decoder-only models, typified by GPT, rely on the Transformer decoder architecture. It uses a self-attention mechanism with a diagonal attention mask to generate a sequence of words from left to right. Building upon the success of GPT-3 [34], which is the first model to encompass over 100B parameters, several noteworthy models have been inspired, including GPT-J, BLOOM [78], OPT [75], Chinchilla [81], and LLaMA [35]. These models follow a Transformer decoder structure similar to that of GPT-3 and are trained on various combinations of datasets. | 2308.07107#20 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
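To make the "diagonal attention mask" mentioned in the survey chunk above concrete, here is a minimal NumPy sketch (illustrative only, not any particular model's implementation) of how a decoder-only model restricts each position to attend only to itself and earlier tokens:

```python
import numpy as np

def causal_attention_weights(scores: np.ndarray) -> np.ndarray:
    """scores: (seq_len, seq_len) raw attention logits; returns row-normalized weights."""
    seq_len = scores.shape[0]
    future = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)  # positions after the diagonal
    masked = np.where(future, -1e9, scores)                         # block attention to future tokens
    exp = np.exp(masked - masked.max(axis=-1, keepdims=True))
    return exp / exp.sum(axis=-1, keepdims=True)

weights = causal_attention_weights(np.random.randn(4, 4))
print(np.round(weights, 2))  # row i carries (near-)zero weight on every column j > i
```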
2308.07124 | 20 | Figure 3: HUMANEVALPACK overview. The first HumanEval problem is depicted across the three scenarios for Python. The bug for HUMANEVALFIX consists of a missing "abs" statement.
When instruction tuning LLMs using natural language (NL) data, the input is an NL instruction with optional NL context and the target output is the NL answer to the task (Wei et al., 2022). When instruction tuning with code (C) data, code may either appear only in the input alongside the NL instruction (NL+C→NL, e.g. code explanation), only in the output (NL→C, e.g. code synthesis), or in both input and output (NL+C→C, e.g. code modifications like bug fixing). While prior benchmarks commonly only cover variants of code synthesis, users may want to use models in all three scenarios. Thus, we expand the code synthesis benchmark HumanEval (Chen et al., 2021; Zheng et al., 2023) to cover all three input-output combinations for six languages (Figure 3). | 2308.07124#20 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 20 | For the dialogue response generation benchmarks, we align the evaluation method with Zhong et al. (2022), calculating the turn-level Spearman and Kendall-Tau correlation in correspondence with human judgments on four aspects (naturalness, coherence, engagingness and groundedness). Results can be found in Table 2. In the first part of Table 2, we demonstrate that n-gram-based metrics and embedding-based metrics perform overall poorly on all the aspects evaluated, illustrating that these methods can hardly reveal human preference. In the second part of Table 2, we show the results from the G-eval (Liu et al., 2023b) paper. They first ask the LLM to generate intermediate thought and finally calculate the weighted summation of the output scores based on the probability. The results show that their method outperforms previous traditional metrics, depicting the fact that the LLM-based evaluator is effective and reliable for evaluating the dialogue response generation task. While their method delivers sound results, our proposed approach raises the bar in terms of performance for GPT-4. Specifically, ChatEval improves the average Spearman and Kendall-Tau correlation by | 2308.07201#20 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
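The turn-level correlation protocol described in the ChatEval chunk above can be sketched as follows; the per-turn scores are hypothetical and `scipy.stats` is used for the Spearman and Kendall-Tau statistics:

```python
from scipy.stats import spearmanr, kendalltau

# Hypothetical per-turn ratings for one aspect (e.g. naturalness) on Topical-Chat.
human_scores = [3, 1, 2, 3, 2, 1]
model_scores = [2.5, 1.0, 2.0, 3.0, 1.5, 1.0]

rho, _ = spearmanr(human_scores, model_scores)
tau, _ = kendalltau(human_scores, model_scores)
print(f"Spearman rho = {rho:.3f}, Kendall tau = {tau:.3f}")
```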
2308.06921 | 21 | • Never include complete solutions that the student can copy without thinking or learning.
• Identify incomplete or ambiguous queries and prompt the student for additional information.
• Only respond to questions relevant to the course (to prevent abuse of the tool as unrestricted access to an LLM).
In CodeHelp, we achieve these goals via careful design of multiple prompts for the LLMs generating responses. The LLMs used in CodeHelp operate by repeatedly predicting the next word in a sequence, and so they are commonly used by providing a text prompt from which the LLM generates a completion, i.e., a sequence of words predicted to follow the prompt. LLMs are limited in the number and complexity of instructions they can accurately follow in a single prompt and completion, and we found that current LLMs could not consistently achieve all of the desired goals with a single prompt and its completion. Therefore, the current design of CodeHelp employs three separate prompts. The response workflow using these prompts is shown in Figure 4.
| 2308.06921#21 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
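The three-prompt workflow described in the CodeHelp chunk above can be sketched roughly as below. All names here (`llm`, `answer_query`, `penalty`, `FENCE`) are illustrative assumptions; the actual prompts, scoring, and API calls are those described in the paper (Figures 4-7), not this code:

```python
from concurrent.futures import ThreadPoolExecutor

FENCE = "`" * 3  # markdown code-block delimiter

def llm(prompt: str) -> str:
    # Hypothetical stand-in for a completion API call; the real system queries an LLM here.
    return "OK. Consider checking the condition of your while loop before appending."

def answer_query(query: str, avoid_words: list) -> str:
    # The sufficiency check and two main-response completions are generated in parallel.
    with ThreadPoolExecutor() as pool:
        sufficiency = pool.submit(llm, "Assess whether this help request is complete:\n" + query)
        candidates = [pool.submit(llm, "Help the student; do not write any example code:\n" + query)
                      for _ in range(2)]

    def penalty(text: str) -> int:
        # Lower is better: penalize code blocks and any instructor-avoided keywords.
        return 10 * text.count(FENCE) + sum(1 for w in avoid_words if w.lower() in text.lower())

    best = min((c.result() for c in candidates), key=penalty)
    if FENCE in best:
        # Third prompt: rewrite the chosen response so it stays clear with the code removed.
        best = llm("Rewrite this answer so it contains no code blocks:\n" + best)

    verdict = sufficiency.result()
    return best if verdict.startswith("OK") else verdict + "\n\n" + best

print(answer_query("My loop never stops. while True: print(1)", ["break"]))
```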
2308.07107 | 21 | Owing to their vast number of parameters, fine-tuning LLMs for specific tasks, such as IR, is often deemed impractical. Consequently, two prevailing methods for applying LLMs have been established: in-context learning (ICL) and parameter-efficient fine-tuning. ICL is one of the emergent abilities of LLMs [34] empowering them to comprehend and furnish answers based on the provided input context, rather than relying merely on their pre-training knowledge. This method requires only the formulation of the task description and demonstrations in natural language, which are then fed as input to the LLM. Notably, parameter tuning is not
[Figure: an example of Query2Doc-style prompting: an instruction ("Write a passage to answer the given query"), a demonstration query-passage pair for "what state is this zip code 85282", and the new query "when was pokemon green released?", for which the LLM generates a pseudo-passage ("Pokemon Green was released in Japan on February 27th, 1996. ...") that is then passed to the IR system.] | 2308.07107#21 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
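A minimal sketch of the in-context learning setup described above, in the Query2Doc style shown in the survey's Figure 3: demonstrations and the new query are packed into one prompt, and the generated pseudo-passage would then be appended to the query before retrieval. The demonstration text is abbreviated and the `build_prompt` helper is a hypothetical name:

```python
demos = [
    ("what state is this zip code 85282",
     "Welcome to TEMPE, AZ 85282. 85282 is a rural zip code in Tempe, Arizona. ..."),
]

def build_prompt(query: str) -> str:
    parts = ["Write a passage to answer the given query:"]
    for demo_query, demo_passage in demos:
        parts.append("Query: " + demo_query + "\nPassage: " + demo_passage)
    parts.append("Query: " + query + "\nPassage:")
    return "\n\n".join(parts)

print(build_prompt("when was pokemon green released?"))
# The LLM's completion (a pseudo-passage) is then concatenated with the original
# query before the combined text is sent to the retriever.
```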
2308.07124 | 21 | HUMANEVALFIX (NL+C→C) Given an incorrect code function with a subtle bug and accompanying unit tests, the model is tasked to fix the function. We manually add a bug to each of the 164 HumanEval solutions across all 6 languages (984 total bugs). For a given sample, the bugs are as similar as possible across the 6 languages enabling meaningful comparison of scores across languages. Bugs are written such that the code still runs but produces an incorrect result leading to at least one unit test failing. Bug statistics and examples are in Appendix K. We also evaluate an easier variant of this task where instead of unit tests, models are provided with the correct function docstring as the source of truth to fix bugs, see Appendix I. | 2308.07124#21 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
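To illustrate the kind of subtle, still-running bug that HUMANEVALFIX targets, here is a small Python example mirroring the "missing abs" case from the Figure 3 caption above (function names are illustrative; the benchmark uses the original HumanEval problems and their unit tests):

```python
def has_close_elements(numbers, threshold):
    """Check if any two numbers in the list are closer to each other than threshold."""
    for i, a in enumerate(numbers):
        for j, b in enumerate(numbers):
            if i != j and abs(a - b) < threshold:   # correct version
                return True
    return False

def has_close_elements_buggy(numbers, threshold):
    for i, a in enumerate(numbers):
        for j, b in enumerate(numbers):
            if i != j and a - b < threshold:        # subtle bug: missing abs()
                return True
    return False

# The buggy code still runs, but it gives the wrong answer on at least one unit test:
assert has_close_elements([1.0, 2.0, 3.0], 0.5) is False
assert has_close_elements_buggy([1.0, 2.0, 3.0], 0.5) is True
```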
2308.07201 | 21 | proposed approach raises the bar in terms of performance for GPT-4. Specifically, ChatEval improves the average Spearman and Kendall-Tau correlation by 0.096 (16.3%) and 0.057 (10.0%) respectively. Additionally, compared with the single-agent method, ChatEval amplifies the performance both for ChatGPT and GPT-4, showing the effectiveness of our method, which is aligned with the results in Section 3.4. | 2308.07201#21 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
2308.06921 | 22 | A student's request for help (query) is included in a "sufficiency check" prompt and in a prompt for generating the main response. Because we want the system to provide its main response even in cases when the query is determined to be insufficient as written, CodeHelp generates the sufficiency check in parallel with the main response. If the sufficiency check determines clarification is needed, we display the clarification request above the main response (Figure 3); otherwise, only the main response is shown. From the "main response" prompt, two different completions are generated and scored for quality (described below). The higher-scoring response is kept and checked for the presence of code blocks, and a third prompt is used to remove them if found. | 2308.06921#22 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 22 | Fig. 3. An example of LLM-based query rewriting for ad-hoc search. The example is cited from the Query2Doc paper [86]. LLMs are used to generate a passage to supplement the original query, where N = 0 and N > 0 correspond to zero-shot and few-shot scenarios.
required for ICL. Additionally, the efficacy of ICL can be further augmented through the adoption of chain-of-thought prompting, involving multiple demonstrations (describe the chain of thought examples) to guide the model's reasoning process. ICL is the most commonly used method for applying LLMs to IR. Parameter-efficient fine-tuning [82–84] aims to reduce the number of trainable parameters while maintaining satisfactory performance. LoRA [82], for example, has been widely applied to open-source LLMs (e.g., LLaMA and BLOOM) for this purpose. Recently, QLoRA [85] has been proposed to further reduce memory usage by leveraging a frozen 4-bit quantized LLM for gradient computation. Despite the exploration of parameter-efficient fine-tuning for various NLP tasks, its implementation in IR tasks remains relatively limited, representing a potential avenue for future research. | 2308.07107#22 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
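A from-scratch NumPy sketch of the LoRA idea referenced in the survey chunk above (not the official `peft` implementation): the frozen weight W is augmented with a trainable low-rank update B @ A, so far fewer parameters are updated than with full fine-tuning:

```python
import numpy as np

d_in, d_out, r, alpha = 1024, 1024, 8, 16.0
W = np.random.randn(d_out, d_in) * 0.02   # frozen pretrained weight (not updated)
A = np.random.randn(r, d_in) * 0.01       # trainable low-rank factor
B = np.zeros((d_out, r))                  # trainable, zero-initialized so training starts at W

def lora_forward(x):
    # Frozen path plus the scaled low-rank adapter path.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = np.random.randn(d_in)
print(lora_forward(x).shape)                                    # (1024,)
print("trainable:", A.size + B.size, "vs full fine-tuning:", W.size)
```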
2308.07124 | 22 | HUMANEVALEXPLAIN (NL+C→NL) Given a correct code function, the model is tasked to generate an explanation of the code. Subsequently, the same model is tasked to regenerate the code given only its own explanation. The second step allows us to score this task via code execution and measure pass@k (Chen et al., 2021) instead of evaluating the explanation itself using heuristic-based metrics like BLEU (Papineni et al., 2002) or ROUGE (Lin, 2004) which have major limitations (Reiter, 2018; Schluter, 2017; Eghbali & Pradel, 2022; Zhou et al., 2023b). To prevent models from copying the solution into the description, we remove any solution overlap of at least 20 characters from the description. We further enforce a character length limit on the model-generated explanation equivalent to the length of the docstring describing the function. This limit is specified in the prompt for the model. Note that the function docstring itself is never provided to the model for this task. | 2308.07124#22 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
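A rough sketch (not the authors' exact code) of the two safeguards described in the HUMANEVALEXPLAIN chunk above: removing verbatim solution overlaps of at least 20 characters from the generated explanation and capping its length at the docstring length:

```python
def sanitize_explanation(explanation: str, solution: str, docstring: str,
                         min_overlap: int = 20) -> str:
    # Remove every verbatim solution substring of length `min_overlap` from the explanation.
    for i in range(len(solution) - min_overlap + 1):
        snippet = solution[i:i + min_overlap]
        if snippet in explanation:
            explanation = explanation.replace(snippet, "")
    # Enforce a character budget equal to the length of the reference docstring.
    return explanation[: len(docstring)]

print(sanitize_explanation(
    explanation="It loops over all pairs and checks abs(a - b) < threshold for each pair.",
    solution="if i != j and abs(a - b) < threshold:",
    docstring="Check if any two numbers in the list are closer than a given threshold.",
))
```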
2308.07201 | 22 | [Footnote 3] We use the same prompt template as in our multi-agent debate settings for the single-agent baseline, except that we ignore some slots.
Table 1: Accuracy (Acc.) and Kappa correlation coefficient (Kap.) of different methods on FairEval benchmark.
Evaluator  Model       Method        Acc. (%)  Kap.
Human      Annotator1  -             68.8      0.5
Human      Annotator2  -             76.3      0.62
Human      Annotator3  -             70        0.5
FairEval   ChatGPT     MEC+BPC       58.7      0.31
FairEval   GPT-4       MEC+BPC       62.5      0.37
Ours       ChatGPT     Single-Agent  53.8      0.27
Ours       ChatGPT     Multi-Agent   60.0      0.33
Ours       GPT-4       Single-Agent  61.3      0.36
Ours       GPT-4       Multi-Agent   63.8      0.40
Table 2: Turn-level Spearman (ρ) and Kendall-Tau (τ) correlations of different methods on Topical-Chat benchmark, SA means Single-Agent and MA means Multi-Agent. Our ChatGPT settings should be compared to G-EVAL-3.5, and GPT-4 settings should be compared to G-EVAL-4. | 2308.07201#22 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
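A minimal sketch of how the Table 1 metrics above are typically computed from paired verdicts; the labels are hypothetical and Cohen's kappa is used here as the agreement coefficient, which may differ in detail from the paper's exact kappa statistic:

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

human   = ["A", "B", "Tie", "A", "B", "A", "Tie", "B"]   # hypothetical human verdicts
referee = ["A", "B", "A",   "A", "B", "B", "Tie", "B"]   # hypothetical ChatEval verdicts

print("Acc. (%):", 100 * accuracy_score(human, referee))
print("Kap.:", round(cohen_kappa_score(human, referee), 2))
```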
2308.06921 | 23 | Sufficiency Check. To check for insufficient or incomplete queries, the student's query is included in a prompt with instructions that explain the context, describe the meaning of each field in the student's input, and request an assessment of sufficiency. The full prompt is shown in Figure 5. To improve the accuracy of the LLM's response, we include instructions in the prompt for the LLM to summarize the request and state its reasoning before generating the final determination. This is a specific instance of a technique generally referred to as "chain of thought prompting" (CoT), which has been found to improve the accuracy of LLM responses in various contexts [35].
Main Response. Similar to the sufficiency check, the main prompt, shown in Figure 6, inserts the individual fields of a student's query into instructions explaining the system context and meaning of each field. As one part of preventing solution code in the response, the system modifies the student's provided issue to append, "Please do not write any example code in your response." Additionally, if the instructor has specified any keywords they want the LLM to avoid for the current class (discussed in Section 3.3), the prompt includes text listing those. | 2308.06921#23 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
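A hypothetical prompt template in the spirit of the sufficiency check described above (the real prompt is the one in the paper's Figure 5): the LLM is asked to summarize the request and state its reasoning before giving a final determination, a simple chain-of-thought pattern:

```python
SUFFICIENCY_PROMPT = """\
You are assisting students in an introductory programming class.
A student has submitted a request for help with the following fields:

Relevant code:
{code}

Error message:
{error}

Issue description:
{issue}

First, summarize the request in one sentence.
Then, state your reasoning about whether it contains enough information to provide help.
Finally, answer on one line with either "OK." or "Insufficient: <clarification question>".
"""

print(SUFFICIENCY_PROMPT.format(
    code="for i in range(10) print(i)",
    error="SyntaxError: invalid syntax",
    issue="My loop will not run.",
))
```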
2308.07107 | 23 | 3 QUERY REWRITER Query rewriting in modern IR systems is essential for improving search query effectiveness and accuracy. It reformulates users' original queries to better match search results, alleviating issues like vague queries or vocabulary mismatches between the query and target documents. This task goes beyond mere synonym replacement, requiring an understanding of user intent and query context, particularly in complex searches like conversational queries. Effective query rewriting enhances search engine performance.
Traditional methods for query rewriting improve retrieval performance by expanding the initial query with information from highly-ranked relevant documents. Commonly used methods include relevance feedback [87–92], word-embedding-based methods [93, 94], etc. However, their limited capacity for semantic understanding and for comprehending user search intent constrains their ability to capture the full scope of user intent.
Recent advancements in LLMs present promising opportunities to boost query rewriting capabilities. On one hand, | 2308.07107#23 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
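A minimal sketch of the classical pseudo-relevance-feedback style of query expansion mentioned in the chunk above: frequent terms from the top-ranked documents are appended to the original query. The helper name and the toy documents are illustrative:

```python
from collections import Counter

def expand_query(query: str, top_docs: list, n_terms: int = 3) -> str:
    # Skip query terms and a few common function words when picking expansion terms.
    stop = set(query.lower().split()) | {"the", "a", "of", "and", "to", "in", "is"}
    counts = Counter(w for doc in top_docs for w in doc.lower().split() if w not in stop)
    expansion = [w for w, _ in counts.most_common(n_terms)]
    return query + " " + " ".join(expansion)

top_docs = ["tempe arizona zip code 85282 neighborhood guide",
            "85282 tempe arizona housing and population statistics"]
print(expand_query("zip code 85282", top_docs))  # expanded with frequent feedback terms
```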
2308.07124 | 23 | HUMANEVALSYNTHESIZE (NL→C) Given a natural language docstring or comment describing the desired code, the model is tasked to synthesize the correct code. This task corresponds to the original HumanEval benchmark (Chen et al., 2021). For instruction tuned models, we add an explicit instruction to the input explaining what the model should do. For models that have only gone through language model pretraining, we follow Chen et al. (2021) and provide the model with the function header and docstring to evaluate its completion of the function.
For all tasks we execute the code generations to compute performance using the pass@k metric (Chen et al., 2021): a problem is considered solved if any of k code generations passes every test case. We focus on the simplest version of pass@k, which is pass@1: the likelihood that the model solves a problem in a single attempt. Like Chen et al. (2021), we use a sampling temperature of 0.2 and top-p = 0.95 to estimate pass@1. We generate n = 20 samples, which is enough to get reliable pass@1 estimates (Li et al., 2023b). For GPT-4, we generate n = 1 samples. Using n = 1 instead of n = 20 for GPT-4 only changes scores by around 2% while providing 20x cost savings. | 2308.07124#23 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
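The chunk above instantiates the standard unbiased pass@k estimator from Chen et al. (2021) with n = 20 samples and k = 1; the estimator itself can be written as:

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """n: total samples, c: samples passing all unit tests, k: attempt budget."""
    if n - c < k:
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

print(pass_at_k(n=20, c=5, k=1))   # 0.25; with k = 1 the estimator reduces to c / n
```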
2308.07201 | 23 | [Table 2 body, flattened during extraction; row/column alignment could not be reliably reconstructed. Rows: ROUGE-L, BLEU-4, BERTScore, G-EVAL-3.5, G-EVAL-4, ChatGPT(SA), ChatGPT(MA), GPT-4(SA), GPT-4(MA); columns: Spearman (ρ) and Kendall-Tau (τ) for Naturalness, Coherence, Engagingness, and Groundedness. Cell values in extraction order: 0.146 0.176 0.203 0.193 0.300 0.295 0.175 0.180 0.235 0.131 0.316 0.232 0.209 0.226 0.233 0.214 0.335 0.317 0.539 0.532 0.544 0.519 0.691 0.660 0.565 0.549 0.605 0.594 0.631 0.627 0.474 0.421 0.527 0.482 0.599 0.549 0.441 0.396 0.500 0.454 0.664 0.607 0.532 0.483 0.591 0.535 0.734 0.676 0.630 0.571 0.619 0.561 0.765 0.695 0.327 0.310 0.310 0.213 0.317 0.291 0.567 0.586 0.551 0.531 0.576 0.558 0.602 0.583 0.774 0.750 0.722 0.700] | 2308.07201#23 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
2308.06921 | 24 | Even with the main prompt explicitly instructing the LLM to not include solution or example code in its response, the response may still contain code. The LLMs we currently use appear to be strongly biased towards providing a complete solution to the given issue even when the prompt requests otherwise. Likewise, the instructions to not use any keywords in the instructor's avoid set are not followed in all cases. Therefore, CodeHelp generates two different completions for the main response, scores them based on whether they include a code block or any of the keywords in the instructor's avoid set, and takes the better of the two.
Code Removal. In cases where the highest-scoring response includes a code block, CodeHelp uses a third prompt (Figure 7) to clean up the response and remove the code. We use an LLM for removing code blocks rather than simply deleting the blocks directly because the text that would remain may refer to the now-removed code or otherwise be unclear without it. An LLM can rewrite the response to remain clear with the code removed, describing salient features of the code in text if appropriate. | 2308.06921#24 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
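A minimal sketch of the "generate two completions, keep the better one" guardrail described in the CodeHelp chunk above: candidates are penalized for fenced code blocks and for instructor-avoided keywords. Function names and weights are illustrative assumptions, not the tool's actual implementation:

```python
import re

FENCE = "`" * 3  # markdown code-block delimiter

def score(completion: str, avoid: set) -> int:
    # Lower is better: count fenced code blocks and instructor-avoided keywords.
    code_blocks = len(re.findall(re.escape(FENCE) + r".*?" + re.escape(FENCE), completion, flags=re.S))
    keyword_hits = sum(1 for w in avoid if w.lower() in completion.lower())
    return 5 * code_blocks + keyword_hits

def pick_better(completion_a: str, completion_b: str, avoid: set) -> str:
    return min((completion_a, completion_b), key=lambda c: score(c, avoid))

a = "Think about what value your loop variable has on the final iteration."
b = "Here is the full solution:\n" + FENCE + "python\nprint(sum(range(10)))\n" + FENCE
print(pick_better(a, b, avoid={"list comprehension"}))  # prints the hint without code
```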
2308.07107 | 24 | Recent advancements in LLMs present promising opportunities to boost query rewriting capabilities. On one hand,
[Figure: an example of LLM-based conversational query rewriting: the instruction "Reformulate the current question into a de-contextualized rewrite under the multi-turn information-seeking dialog context and generate a correct response", a demonstration dialog about buying a phone, and a test dialog about the Watergate scandal in which "So, what happened to Nixon?" is rewritten to "So, what happened to Nixon after the events of the Watergate scandal?" before being passed to the IR system.] | 2308.07107#24 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
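The few-shot prompt structure transcribed in the 2308.07107#24 chunk above, with demonstration turns (question, rewrite, response) followed by the current conversation, could be assembled roughly as follows. The demonstration wording is abridged from the figure, and the helper names, model choice, and formatting details are illustrative assumptions rather than the LLMCS implementation.

```python
from openai import OpenAI  # assumes openai>=1.0

client = OpenAI()

INSTRUCTION = (
    "Reformulate the current question into a de-contextualized rewrite under the "
    "multi-turn information-seeking dialog context and generate a correct response."
)

def build_rewrite_prompt(demos, history, current_question):
    """demos: list of demo dialogs, each a list of (question, rewrite, response) turns;
    history: earlier (question, response) turns of the current dialog."""
    parts = [INSTRUCTION]
    for demo in demos:
        for i, (q, rewrite, response) in enumerate(demo, start=1):
            parts.append(f"Turn {i}:\nQuestion: {q}\nRewrite: {rewrite}\nResponse: {response}")
    for i, (q, response) in enumerate(history, start=1):
        parts.append(f"Turn {i}:\nQuestion: {q}\nResponse: {response}")
    parts.append(f"Turn {len(history) + 1}:\nQuestion: {current_question}\nRewrite:")
    return "\n\n".join(parts)

prompt = build_rewrite_prompt(
    demos=[[("What should I consider when buying a phone?",
             "This is the first turn. So, the question should be rewritten as: "
             "What should I consider when buying a phone?",
             "The design of the phone and the overall ...")]],
    history=[("What was the basis of the Watergate scandal?", "...")],
    current_question="So, what happened to Nixon?",
)
rewrite_and_response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content  # the rewritten query can then be sent to the retriever
```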
2308.07124 | 24 | Python HumanEval is the most commonly used code benchmark, thus many training datasets have already been decontaminated for HumanEval to enable fair evaluation. By reusing HumanEval and manually expanding it to more scenarios and languages, we ensure that existing decontamination remains valid. This enables a fair comparison across a large variety of models.
4 OCTOCODER: BEST COMMERCIALLY LICENSED CODE LLM
4.1 ABLATING INSTRUCTION DATA CHOICES
[Bar chart: Pass@1 (%) on Code Fixing, Code Explanation, Code Synthesis, and their Average, comparing no instruction tuning against instruction tuning on OASST, xP3x-Code + OASST, Self-Instruct, Self-Instruct + OASST, and CommitPackFT + OASST.]
Figure 4: Comparing permissively licensed instruction datasets by instruction tuning StarCoder. Models are evaluated on the Python subset of HUMANEVALPACK.
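The Pass@1 numbers summarized in the figure above follow the HumanEval evaluation methodology; the standard unbiased pass@k estimator from that methodology (Chen et al., 2021) is reproduced below for reference. Whether each score here is computed from multiple samples or from greedy decoding is not stated in this excerpt.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: n = samples generated, c = samples passing the unit tests."""
    if n - c < k:
        return 1.0
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# e.g., 20 samples, 9 of which pass -> pass@1 estimate
print(round(pass_at_k(n=20, c=9, k=1), 3))  # 0.45
```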
We instruction tune the pretrained StarCoder model (Li et al., 2023b) on different combinations of our instruction datasets (§2). We evaluate all models on the Python subset of HUMANEVALPACK as depicted in Figure 4. Similar to prior work (Taori et al., 2023), we format all instructions into a consistent schema to distinguish question and answer (see Figure 17). | 2308.07124#24 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
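The 2308.07124#24 chunk above notes that all instructions are formatted into a consistent schema distinguishing question and answer (its Figure 17, not reproduced here). A plausible sketch of such a template is below; the function name, field wording, and example values are assumptions for illustration, not the paper's verbatim schema.

```python
def format_instruction_sample(instruction: str, answer: str, context: str = "") -> str:
    # Hypothetical question/answer template; OctoPack's actual schema is given in its Figure 17.
    question = f"{context}\n{instruction}".strip() if context else instruction
    return f"Question: {question}\n\nAnswer: {answer}"

sample = format_instruction_sample(
    instruction="Fix the off-by-one error in the loop.",   # e.g., a commit message as the instruction
    context="def head(xs):\n    return xs[1]",             # code before the change
    answer="def head(xs):\n    return xs[0]",              # code after the change
)
print(sample)
```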
2308.06921 | 25 | Large Language Models. Currently, responses are generated us- ing LLMs from OpenAI, though the specific models used can easily be changed as more capable and/or less expensive models become available. Specifically, the âSufficiency Checkâ and âMain Responseâ completions are currently performed by the gpt-3.5-turbo-0301
[Workflow diagram: the query inputs (language, code, error, and issue) pass through a sufficiency check, main response generation, response scoring, and code removal; insufficient queries are presented back as a clarification request, and the highest-scoring, code-free response is presented as the main response. A sketch of the sufficiency-check call appears after this row's metadata.]
Figure 4: CodeHelp's response workflow. Steps using a large language model completion are tagged LLM.
You are a system for assisting students like me with programming.
You are a system for assisting a student with programming.
My inputs provide: [brief description of each input]
The students provide: [brief description of each input]
Please assess the following submission to determine whether it is sufficient for you to provide help or if you need additional information. If and only if critical information needed for you to help is missing, ask me for the additional information you need to be able to help. State your reasoning first. Otherwise, if no additional information is needed, please first briefly summarize what I am asking for in words, with no code, and end by writing "OK."
Inputs: [delimited query inputs] | 2308.06921#25 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
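The sufficiency check whose prompt is excerpted in the 2308.06921#25 chunk above can be approximated as a single chat completion whose output either ends with "OK" (proceed to the main response) or contains a request for more information to show back to the student. The prompt below is condensed from Figure 5, and the function name, input delimiting, and model choice are illustrative simplifications rather than CodeHelp's actual code.

```python
from openai import OpenAI  # assumes openai>=1.0

client = OpenAI()

def sufficiency_check(language: str, code: str, error: str, issue: str) -> tuple[bool, str]:
    """Return (is_sufficient, reply); reply is a clarification request when insufficient."""
    inputs = (f"Language: {language}\n--\nCode:\n{code}\n--\n"
              f"Error:\n{error}\n--\nIssue: {issue}")
    prompt = (
        "You are a system for assisting a student with programming.\n"
        "Assess the following submission. If critical information needed for you to help "
        "is missing, ask for it. Otherwise, briefly summarize the request in words, with "
        "no code, and end by writing \"OK\".\n\n" + inputs
    )
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0.0,
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    return reply.strip().endswith("OK"), reply
```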
2308.07107 | 25 | Fig. 4. An example of LLM-based query rewriting for con- versational search. The example is cited from LLMCS [95]. The LLM is used to generate a query based on the demon- strations and previous search context. Additional responses are required to be generated for improving the query un- derstanding. N = 0 and N > 0 correspond to zero-shot and few-shot scenarios.
given the context and subtleties of a query, LLMs can pro- vide more accurate and contextually relevant rewrites. On the other hand, LLMs can leverage their extensive knowl- edge to generate synonyms and related concepts, enhancing queries to cover a broader range of relevant documents, thereby effectively addressing the vocabulary mismatch problem. In the following sections, we will introduce the recent works that employ LLMs in query rewriting.
# 3.1 Rewriting Scenario
Query rewriting typically serves two scenarios: ad-hoc re- trieval, which mainly addresses vocabulary mismatches between queries and candidate documents, and conver- sational search, which refines queries based on evolving conversations. The upcoming section will delve into the role of query rewriting in these two domains and explore how LLMs enhance this process.
# 3.1.1 Ad-hoc Retrieval | 2308.07107#25 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 25 | COMMITPACKFT enables CodeLLMs to fix bugs COMMITPACKFT is critical for the perfor- mance boost on code repair (HUMANEVALFIX), where instruction tuning on only OASST or other variants results in a significantly lower score. This is likely due to COMMITPACKFT including around 20% of bug fixes among other code-related tasks (Figure 2).
Importance of samples with natural language targets The pretrained StarCoder model, as well as the Self-Instruct variant, perform poorly on code explanation (HUMANEVALEXPLAIN). This is because both models are only conditioned to write code instead of natural language. We find that to perform well at explaining code, it is necessary to include samples with natural language as the target output during instruction tuning. Only relying on data with code as the target, such as the Self-Instruct data, will lead to models always outputting code even if the question requires a natural language output. Thus, we mix all other ablations with OASST, which contains many natural language targets. While the xP3x subset also contains samples with natural language output, many of its target outputs are short, which leads to models with a bias for short answers. This is impractical for the explanation task leading to the comparatively low score of mixing xP3x with OASST. | 2308.07124#25 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 25 | # 4 ANALYSIS
In this section, we further explore the key components encompassed in ChatEval. We discuss the importance of diverse role prompts in Section 4.1, the effect of different communication strategies in Section 4.2, and the impact of role numbers and discussion turns in Section 4.3. If not specified otherwise, we choose the FairEval benchmark and ChatGPT as the backbone LLM for the analysis.
4.1 THE IMPORTANCE OF DIVERSE ROLE PROMPTS
Previously in Table 1 and 2, we demonstrate that ChatEval equipped with diverse role configura- tions can significantly improve the performance of evaluation. We further consider whether it is necessary to design diverse role prompts for the evaluation system. To answer so, we carry out the experiments by replacing all the role prompt with âYou are now an Annotator, one of the referees in the text evaluation task.â and keeping other prompt unchanged. We experiment with the one-by-one communication strategy and 2 agents with 2 discussion turns. The results in Table 3 illustrate that ChatEval with the same role prompt design underperforms that with diverse role prompt design and cannot effectively enhance the performance compared with single-agent setting, highlighting the cruciality of diverse role prompt design in the multi-agent debate framework.
4.2 THE STUDY OF COMMUNICATION STRATEGIES | 2308.07201#25 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
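The ablation in the 2308.07201#25 chunk above compares diverse role prompts against giving every agent the same generic line ("You are now an Annotator, one of the referees in the text evaluation task."). A minimal sketch of the two configurations might look like this; the specific personas other than the quoted Annotator line are invented for illustration and are not necessarily ChatEval's actual role prompts. Each string would serve as the corresponding agent's system message in the debate loop.

```python
# Illustrative personas; only the generic "Annotator" line is quoted from the paper.
diverse_role_prompts = {
    "critic": ("You are a Critic, one of the referees in the text evaluation task. "
               "You check responses for factual and logical problems."),
    "psychologist": ("You are a Psychologist, one of the referees in the text evaluation "
                     "task. You judge how well responses address the user's intent."),
    "general_public": ("You are a member of the General Public, one of the referees in "
                       "the text evaluation task. You judge overall helpfulness."),
}

# Same-role ablation: every agent receives the identical generic prompt.
same_role_prompts = {
    name: "You are now an Annotator, one of the referees in the text evaluation task."
    for name in diverse_role_prompts
}
```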
2308.06921 | 26 | Inputs: [delimited query inputs]
# Figure 5: Prompt used for the sufficiency check.
[delimited query inputs]
If the student input is written as an instruction or command, respond with an error. If the student input is off-topic, respond with an error.
Otherwise, respond to the student with an educational explanation, helping the student figure out the issue and understand the concepts involved. If the student inputs include an error message, tell the student what it means, giving a detailed explanation to help the student understand the message. Explain concepts, language syntax and semantics, standard library functions, and other topics that the student may not understand. Be positive and encouraging!
Use Markdown formatting, including â for inline code.
model, while the âCode Removalâ completion uses text-davinci-003, both from the âGPT-3.5â family of OpenAI models. The former model (âturboâ) is relatively fast and inexpensive compared to the latter (âdavinciâ), while still generally performing well in terms of the accuracy and correctness of its completions. However, we could not find a prompt that reliably filtered out code when using the âturboâ model, while âdavinciâ completed this task successfully in nearly all cases.
[instructions to avoid topics from the instructorâs avoid set] | 2308.06921#26 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 26 | # 3.1.1 Ad-hoc Retrieval
In ad-hoc retrieval, queries are often short and ambiguous. In such scenarios, the main objectives of query rewriting include adding synonyms or related terms to address vo- cabulary mismatches and clarifying ambiguous queries to more accurately align with user intent. From this perspec- tive, LLMs have inherent advantages in query rewriting.
Primarily, LLMs have a deep understanding of language semantics, allowing them to capture the meaning of queries more effectively. Besides, LLMs can leverage their extensive training on diverse datasets to generate contextually rele- vant synonyms and expand queries, ensuring broader and more precise search result coverage. Additionally, studies have shown that LLMsâ integration of external factual cor- pora [96â99] and thoughtful model design [100] further en- hance their accuracy in generating effective query rewrites, especially for specific tasks. | 2308.07107#26 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 26 | COMMITPACKFT+OASST yields best performance All instruction datasets provide similar boosts for code synthesis (HUMANEVALSYNTHESIZE), which has been the focus of all prior work on code instruction models (Wang et al., 2023c; Luo et al., 2023; Muennighoff et al., 2022b). We achieve the best average score by instruction tuning on COMMITPACKFT mixed with our filtered OASST data yielding an absolute 23% improvement over StarCoder. Thus, we select COMMITPACKFT+OASST for our final model dubbed OCTOCODER. Using the same data, we also instruction tune the 6 billion parameter CodeGeeX2 (Zheng et al., 2023) to create OCTOGEEX.
4.2 COMPARING WITH OTHER MODELS
We benchmark OCTOCODER and OCTOGEEX with state-of-the-art Code LLMs on HUMANEVAL- PACK in Table 2. For all models, we use the prompt put forward by the model creators if applicable or else a simple intuitive prompt, see Appendix N. | 2308.07124#26 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 26 | 4.2 THE STUDY OF COMMUNICATION STRATEGIES
As shown in Figure 2, we also design three different communication strategies, termed one-by-one, simultaneous-talk, and simultaneous-talk-with-summarizer. The detailed descriptions and formal for-
mulations can be found in Appendix B. We experiment with 3 agents and 2 discussion turns with diverse role prompts in this section. As is shown in Table 4, we can find that the one-by-one commu- nication strategy is more effective than other strategies for ChatGPT setting. Although the other two communication strategies did not perform as robustly as the one-by-one strategy, it is noteworthy that they still exceeded the performance of the naive single-agent method. Furthermore, the vari- ations in performance among three different communication strategies underscore the influence of different strategies on the effectiveness of the evaluation process, revealing the potential for further exploration and optimization of ChatEval. Thus, future studies could be aimed at a more com- prehensive understanding of different communication strategies, and how they could be effectively employed to enhance performance. This could serve as an avenue for substantial improvements and novel insights in the multi-agent debate framework.
# 4.3 THE IMPACT OF ROLE NUMBERS AND DISCUSSION TURNS | 2308.07201#26 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
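A rough, self-contained sketch of the one-by-one strategy described in the 2308.07201#26 chunk above: agents take turns in a fixed order, and each sees the full debate transcript so far before speaking. The wrapper around the chat API, the aggregation of a final verdict, and all names are illustrative assumptions, not ChatEval's code.

```python
from openai import OpenAI  # assumes openai>=1.0

client = OpenAI()

def ask_agent(role_prompt: str, transcript: list[str], task: str) -> str:
    """One chat completion for one agent, conditioned on the debate so far."""
    messages = [
        {"role": "system", "content": role_prompt},
        {"role": "user", "content": task + "\n\nDebate so far:\n" + "\n".join(transcript)},
    ]
    return client.chat.completions.create(
        model="gpt-3.5-turbo", messages=messages,
    ).choices[0].message.content

def one_by_one_debate(role_prompts: dict[str, str], task: str, turns: int = 2) -> list[str]:
    """Fixed speaking order; each utterance is appended before the next agent speaks."""
    transcript: list[str] = []
    for _ in range(turns):
        for name, role_prompt in role_prompts.items():
            transcript.append(f"{name}: {ask_agent(role_prompt, transcript, task)}")
    return transcript  # a final verdict would then be extracted or aggregated from these utterances
```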
2308.06921 | 27 | [instructions to avoid topics from the instructorâs avoid set]
Do not write any example code blocks. Do not write a corrected or updated version of the studentâs code. You must not write code for the student.
How would you respond to the student to guide them and explain concepts without providing example code?
The cost of using OpenAIâs models scales with the length of the prompts and the completions generated. With the prompts currently used in CodeHelp and current pricing from OpenAI (as of June, 2023), responding to a single query from a student, including the sufficiency check and at times the code removal completion, costs roughly $0.002 on average. We discuss student usage patterns from a pilot course in Section 5, and from that, we estimate that providing 50 students unrestricted access to CodeHelp for a full semester would cost under $10 in OpenAI usage fees.
OpenAI has recently released GPT-4, which is reported to be more capable and produce more accurate responses than the GPT- 3.5 family. We have not yet tested using GPT-4 for CodeHelpâs completions, but based on the current pricing and our prompt and
# Figure 6: Prompt used for the main response.
completion lengths, we estimate it would increase the cost by 25x, reaching roughly $0.05 per student query. | 2308.06921#27 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
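The figures in the 2308.06921#27 chunk above imply a back-of-envelope usage budget: at roughly $0.002 per query, the quoted under-$10 semester estimate for 50 students corresponds to about 5,000 queries in total, i.e., on the order of 100 queries per student, and the estimated 25x multiplier for GPT-4 would scale the same usage to roughly $250. The per-student query count below is inferred from those reported numbers, not reported directly.

```python
cost_per_query = 0.002          # dollars per query with the GPT-3.5 pipeline, as reported
students = 50
queries_per_student = 100       # inferred: ~$10 / $0.002 / 50 students

semester_cost = cost_per_query * students * queries_per_student    # $10.00
semester_cost_gpt4 = semester_cost * 25                            # ~$250 at the quoted 25x multiplier
print(f"GPT-3.5: ${semester_cost:.2f}  GPT-4 (est.): ${semester_cost_gpt4:.2f}")
```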
2308.07107 | 27 | Currently, there are many studies leveraging LLMs to rewrite queries in adhoc retrieval. We introduce the typ- ical method Query2Doc [86] as an example. As shown in Figure 3, Query2Doc prompts the LLMs to generate a relevant passage according to the original query (âwhen was pokemon green released?â). Subsequently, the original query is expanded by incorporating the generated passage. The retriever module uses this new query to retrieve a list of relevant documents. Notably, the generated passage contains additional detailed information, such as âPokemon Green was released in Japan on February 27thâ, which effectively mitigates the âvocabulary mismatchâ issue to some extent.
In addition to addressing the âvocabulary mismatchâ problem [96â99, 101, 102], other works utilize LLMs for dif- ferent challenges in ad-hoc retrieval. For instance, Prompt- Case [103] leverages LLMs in legal case retrieval to simplify complex queries into more searchable forms. This involves using LLMs to identify legal facts and issues, followed by a prompt-based encoding scheme for effective language model encoding.
# 3.1.2 Conversational Search | 2308.07107#27 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
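The Query2Doc procedure recapped in the 2308.07107#27 chunk above, in which the LLM writes a short pseudo-passage for the query and the query is then expanded with it before retrieval, might be sketched as follows. The prompt wording, the repetition of the original query (a common trick to keep its terms dominant for sparse retrievers such as BM25), and the function names are illustrative choices rather than the paper's exact recipe.

```python
from openai import OpenAI  # assumes openai>=1.0

client = OpenAI()

def query2doc_expand(query: str, repeats: int = 3) -> str:
    """Expand a query with an LLM-generated pseudo-document before retrieval."""
    pseudo_doc = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": f"Write a short passage that answers the query: {query}"}],
    ).choices[0].message.content
    return " ".join([query] * repeats + [pseudo_doc])

expanded_query = query2doc_expand("when was pokemon green released?")
# `expanded_query` is handed to the retriever (e.g., BM25 or a dense encoder) in place of the raw query.
```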
2308.07124 | 27 | OCTOCODER performs best among permissive models OCTOCODER has the highest average score across all three evaluation scenarios among all permissive models. With just 6 billion parameters, OCTOGEEX is the smallest model benchmarked, but still outperforms all prior permissive Code LLMs. GPT-4 (OpenAI, 2023) performs best among all models benchmarked with a significant margin. However, GPT-4 is closed-source and likely much larger than all other models evaluated.
Instruction tuning generalizes to unseen programming languages Trained primarily on natu- ral language, not code, BLOOMZ (Muennighoff et al., 2022b) performs worse than other models despite having 176 billion parameters. Go and Rust are not contained in BLOOMZâs instruction data, yet it performs much better than the random baseline of 0.0 for these two languages across most tasks. This confirms our hypothesis that models are capable of generalizing instructions to programming languages only seen at pretraining, similar to crosslingual generalization for natural languages (Muennighoff et al., 2022b). To improve programming language generalization fur- ther, we tune OCTOCODER and OCTOGEEX on many languages from COMMITPACKFT, and this generalization improvement is reflected in the performance on HUMANEVALPACKâs new languages. | 2308.07124#27 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 27 | # 4.3 THE IMPACT OF ROLE NUMBERS AND DISCUSSION TURNS
We then study the impact of different role numbers and discussion turns. From Figure 3a, a dis- cernible trend is observed in the relationship between the role number and both Acc. and Kap. within the FairEval dataset. As the role number increases, there is a corresponding growth in Acc. and Kap. This pattern reaches an apex with an Acc. of 62.5% at role numbers 3 and 4 before de- clining at role number 5. Despite this decline in accuracy, Kap. continues to ascend between role numbers 3 and 4, underscoring the effectiveness of incorporating diverse roles within ChatEval.
Conversely, no significant upward trend is detected with respect to the increase in discussion turns, as is shown in Figure 3b. This observation aligns with the findings in Liang et al. (2023); Du et al. (2023), highlighting a consistent phenomenon where continual discussion often leads to stagna- tion or even degradation of performance. Such a trend may be attributed to issues associated with the ever-increasing context length, which consequently diminishes the performance. This insight prompts a more nuanced understanding of the balance needed between role differentiation and dis- cussion dynamics to optimize the performance of ChatEval.
# 4.4 QUALITATIVE ANALYSIS | 2308.07201#27 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
2308.06921 | 28 | # Figure 6: Prompt used for the main response.
completion lengths, we estimate it would increase the cost by 25x, reaching roughly $0.05 per student query.
Examples. A few examples of queries submitted by students during our pilot illustrate the results of this design. First, in an example where the only input is an instruction to write code for an assigned exercise, we see how CodeHelp provides assistance without writing any code, as intended.
The following was written to help a student in a CS class. However, any example code (such as in ``` Markdown delimiters) can give the student an assignmentâs answer rather than help them figure it out themselves. We need to provide help without including example code. To do this, rewrite the following to remove any code blocks so that the response explains what the student should do but does not provide solution code.
[original response to be rewritten]
# Figure 7: Prompt used for code removal.
Issue/Question: Write a while loop that starts at the last character in the string and works its way backwards to the first character in the string, printing each letter on a separate line, except backwards. | 2308.06921#28 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
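For reference, the exercise quoted in the 2308.06921#28 chunk above (print each letter of a string on its own line, backwards, using a while loop) has a direct few-line solution, which is exactly the kind of example code CodeHelp's guardrails are designed to withhold from the student. A minimal version, with an assumed sample string:

```python
text = "hello"            # assumed sample input
i = len(text) - 1         # start at the last character
while i >= 0:
    print(text[i])        # one letter per line, moving backwards
    i -= 1
```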
2308.07107 | 28 | # 3.1.2 Conversational Search
Query rewrites in conversational search play a pivotal role in enhancing the search experience. Unlike traditional queries in ad-hoc retrieval, conversational search involves a dialogue-like interaction, where the context and user intent evolve with each interaction. In conversational search, query rewriting involves understanding the entire conversationâs context, clarifying any ambiguities, and personalizing re- sponses based on user history. The process includes dy- namic query expansion and refinement based on dialogue information. This makes conversational query rewriting a sophisticated task that goes beyond traditional search, fo- cusing on natural language understanding and user-centric interaction.
In the era of LLMs, leveraging LLMs in conversational search tasks offers several advantages. First, LLMs possess strong contextual understanding capabilities, enabling them to better comprehend users' search intent within the context of multi-turn conversations between users and the system. Second, LLMs exhibit powerful generation abilities, allowing them to simulate dialogues between users and the system, thereby facilitating more robust search intent modeling.
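As a concrete illustration of this idea, the sketch below shows one way an LLM could be prompted to rewrite the latest user turn into a self-contained query. The prompt wording and the `call_llm()` helper are assumptions for illustration, not the LLMCS implementation.

```python
# Illustrative sketch (not the LLMCS implementation): prompting an LLM to rewrite
# the latest user turn into a self-contained query. call_llm() is a hypothetical helper.

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder for a real LLM API call

def rewrite_conversational_query(history, current_turn: str) -> str:
    """Rewrite `current_turn` so it can be understood without the dialogue history."""
    context = "\n".join(f"{role}: {utterance}" for role, utterance in history)
    prompt = (
        "Rewrite the user's last question as a single, fully self-contained search query.\n\n"
        f"Conversation:\n{context}\n\n"
        f"Last question: {current_turn}\n\n"
        "Rewritten query:"
    )
    return call_llm(prompt)
```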
The LLMCS framework [95] is a pioneering approach that employs LLMs to effectively extract and understand user search intent within conversational contexts. As illustrated in their work, LLMCS uses LLMs to produce both query rewrites and extensive hypothetical system responses from various perspectives. These outputs are combined | 2308.07107#28 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 28 | Pretraining weight correlates with programming language performance after instruction tuning. Prior work has shown that the performance on natural languages after instruction tuning is correlated with the weight of these languages during pretraining (Muennighoff et al., 2022b). The more weight during pretraining, the better the performance after instruction tuning. We find the same to be the case for programming languages. Python, Java, and JavaScript collectively make up around 30% of the pretraining data of StarCoder (Li et al., 2023b). After instruction tuning StarCoder to produce OCTOCODER, we see the best performance among these three languages, especially for
Model Python JavaScript Java Go C++ Rust Avg.
HUMANEVALFIX | 2308.07124#28 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 28 | # 4.4 QUALITATIVE ANALYSIS
Table 5 showcases the debate process for evaluating two assistants' responses to the open-ended question "What are the most effective ways to deal with stress?". Assistant 1's response is:
Here are some effective ways to deal with stress: 1. Exercise regularly: Physical activity can help reduce stress and improve mood. 2. Practice relaxation techniques: Deep breathing, meditation, and yoga are some effective ways to relax and reduce stress levels. 3. Get enough sleep: Lack of sleep can increase stress levels. Try to get at least 7-8 hours of sleep each night. 4. Eat a healthy diet: A well-balanced diet can help reduce stress levels and improve overall health. 5. Connect with others: Spending time with friends and family, joining a support group, or participating in social activities can help reduce stress levels and improve mood. 6. Time management: Prioritizing tasks and creating a schedule can help reduce stress and increase productivity. 7. Seek professional help: If stress is impacting your daily life, consider seeking professional help such as therapy or counseling.
Assistant 2's response is: | 2308.07201#28 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
2308.06921 | 29 | Response: You can use a while loop to iterate over the characters in the string in reverse order. To do this, you can use the len() function to get the length of the string, and use the range() function to generate a sequence of numbers from the length of the string down to 0. Then you can use the str[index] syntax to access each character in the string.
To help you understand how this works, think about the following example:
Say you have a string my_str = "Hello World". You can use the len() function to get the length of the string, which is 11. Then you can use the range() function to generate a sequence of numbers from 11 down to 0. This sequence of numbers will represent the indices of the characters in the string. Finally, you can use the str[index] syntax to access each character in the string.
Using this approach, you can iterate over the characters in the string in reverse order. This will print out each character in the string on a separate line, except backwards.
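For illustration only (CodeHelp's response above deliberately contains no code), a minimal version of the loop being described might look like the following; note the starting index is `len(my_str) - 1`.

```python
# One way a student might turn the explanation above into code (not part of
# CodeHelp's response, which intentionally contains no code).
my_str = "Hello World"
index = len(my_str) - 1   # the last character is at length - 1
while index >= 0:
    print(my_str[index])  # prints each character on its own line, in reverse order
    index -= 1
```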
When students provide a problem specification with little or no code, CodeHelp tends to respond with a high-level outline like this as a starting point. | 2308.06921#29 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 29 | into a comprehensive representation that effectively captures the user's full search intent. The experimental results show that including detailed hypothetical responses with concise query rewrites markedly improves search performance by adding more plausible search intent. Ye et al. [104] claim that human query rewrites may lack sufficient information for optimal retrieval performance. They define four essential properties for well-formed LLM-generated query rewrites. Results show that their informative query rewrites can yield substantially improved retrieval performance compared to human rewrites.
Besides, LLMs can be used as a data expansion tool in conversational dense retrieval. Owing to the high cost of producing hand-written dialogues, data scarcity presents a significant challenge in the domain of conversational search. To address this problem, CONVERSER [105] employs LLMs to generate synthetic passage-dialogue pairs through few-shot demonstrations. Furthermore, it efficiently trains a dense retriever using a minimal dataset of six in-domain dialogues, thus mitigating the issue of data sparsity.
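A rough sketch of this general idea is shown below, assuming a hypothetical `call_llm()` helper and a simplified prompt format rather than CONVERSER's actual pipeline.

```python
# Rough sketch of few-shot synthesis of passage-grounded dialogues for training data
# (not CONVERSER's actual pipeline). call_llm() is a hypothetical completion helper.

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder for a real LLM API call

def synthesize_dialogue(passage: str, demonstrations) -> str:
    """Generate a synthetic multi-turn dialogue whose answers are grounded in `passage`."""
    demo_text = "\n\n".join(f"Passage:\n{p}\nDialogue:\n{d}" for p, d in demonstrations)
    prompt = f"{demo_text}\n\nPassage:\n{passage}\nDialogue:\n"
    return call_llm(prompt)
```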
# 3.2 Rewriting Knowledge | 2308.07107#29 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 29 |
HUMANEVALFIX

| Model | Python | JavaScript | Java | Go | C++ | Rust | Avg. |
|---|---|---|---|---|---|---|---|
| *Non-permissive models* | | | | | | | |
| InstructCodeT5+ | 2.7 | 1.2 | 4.3 | 2.1 | 0.2 | 0.5 | 1.8 |
| WizardCoder | 31.8 | 29.5 | 30.7 | 30.4 | 18.7 | 13.0 | 25.7 |
| GPT-4 | 47.0 | 48.2 | 50.0 | 50.6 | 47.6 | 43.3 | 47.8 |
| *Permissive models* | | | | | | | |
| BLOOMZ | 16.6 | 15.5 | 15.2 | 16.4 | 6.7 | 5.7 | 12.5 |
| StarChat-β | 18.1 | 18.1 | 24.1 | 18.1 | 8.2 | 3.6 | 11.2 |
| CodeGeeX2 | 15.9 | 14.7 | 18.0 | 13.6 | 4.3 | 6.1 | 12.1 |
| StarCoder | 8.7 | 15.7 | 13.3 | 20.1 | 15.6 | 6.7 | 13.4 |
| OCTOGEEX | 28.1 | 27.7 | 30.4 | 27.6 | 22.9 | 9.6 | 24.4 |
| OCTOCODER | 30.4 | 28.4 | 30.6 | 30.2 | 26.1 | 16.5 | 27.0 |
HUMANEVALEXPLAIN | 2308.07124#29 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 29 | Assistant 2's response is:
Stress can be caused by a variety of factors, including work, relationships, health, and financial issues. Here are some effective ways to manage stress: 1. Exercise: Regular exercise releases endorphins, which are natural stress- relievers. It can be as simple as going for a walk or jog, practicing yoga, or joining a fitness class. 2. Mindfulness: Mindfulness practices, such as meditation or deep breathing, can
help to reduce stress and improve focus. 3. Social Support: Spending time with family and friends, or seeking support from a mental health professional, can provide a sense of connection and help to reduce feelings of isolation and loneliness. 4. Healthy Eating: A balanced diet that includes plenty of fruits, vegetables, and whole grains can help to reduce stress levels. 5. Good Sleep: Getting enough sleep is essential for good mental and physical health. Aim for 7-8 hours per night and establish a regular sleep routine. 6. Time Management: Prioritizing tasks and setting realistic goals can help to reduce stress levels by giving a sense of control and direction. 7. Relaxation Techniques: Techniques such as progressive muscle relaxation, aro- matherapy, and massage can help to reduce stress levels and promote relaxation. 8. Seek professional help: If stress is interfering with daily life, it may be time to seek professional help from a therapist or counselor. | 2308.07201#29 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
2308.06921 | 30 | When students provide a problem specification with little or no code, CodeHelp tends to respond with a high-level outline like this as a starting point.
When students provide an error message, CodeHelp will typically explain what the message means and suggest a fix. In the following example, the student has not provided the details from the error message, but the LLM is still able to identify where it occurs, providing the student with effective help.
# Code:
def character_remover(word, character):
    word = word.remove(character)
    return word
# Error: Attribute Error
Issue/Question: I want this function to print the word, excluding the character.
Response: The error message you are seeing is an Attribute Error, which means that the object you are trying to access does not have the attribute or method you are trying to use. In this case, the object is the string word and the attribute you are trying to use is remove(). The remove() method is not a valid method for strings, so you are getting this error. | 2308.06921#30 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 30 | # 3.2 Rewriting Knowledge
Query rewriting typically necessitates additional corpora for refining initial queries. Considering that LLMs incorporate world knowledge in their parameters, they are naturally capable of rewriting queries. We refer to these methods, which rely exclusively on the intrinsic knowledge of LLMs, as LLM-only methods. While LLMs encompass a broad spectrum of knowledge, they may be inadequate in specialized areas. Furthermore, LLMs can introduce concept drift, leading to noisy relevance signals. To address this issue, some methods incorporate domain-specific corpora to provide more detailed and relevant information in query rewriting. We refer to methods enhanced by domain-specific corpora to boost LLM performance as corpus-enhanced LLM-based methods. In this section, we will introduce these two types of methods in detail.
# 3.2.1 LLM-only methods | 2308.07107#30 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 30 | HUMANEVALEXPLAIN
| Model | Python | JavaScript | Java | Go | C++ | Rust | Avg. |
|---|---|---|---|---|---|---|---|
| *Non-permissive models* | | | | | | | |
| InstructCodeT5+ | 20.8 | 0.0 | 0.0 | 0.0 | 0.1 | 0.0 | 3.5 |
| WizardCoder | 32.5 | 33.0 | 27.4 | 26.7 | 28.2 | 16.9 | 27.5 |
| GPT-4 | 64.6 | 57.3 | 51.2 | 58.5 | 38.4 | 42.7 | 52.1 |
| *Permissive models* | | | | | | | |
| BLOOMZ | 14.7 | 8.8 | 12.1 | 8.5 | 0.6 | 0.0 | 7.5 |
| StarChat-β | 25.4 | 21.5 | 24.5 | 18.4 | 17.6 | 13.2 | 20.1 |
| CodeGeeX2 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| StarCoder | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| OCTOGEEX | 30.4 | 24.0 | 24.7 | 21.7 | 21.0 | 15.9 | 22.9 |
| OCTOCODER | 35.1 | 24.5 | 27.3 | 21.1 | 24.1 | 14.8 | 24.5 |
HUMANEVALSYNTHESIZE | 2308.07124#30 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 30 | We can find that both of the responses produce similar strategies and equally compelling descriptions for dealing with stress, making it challenging to discern a significant disparity in quality. It is in this context of nuanced evaluation that the significance of the ChatEval process emerges. To understand this complexity better, we first outline the ChatEval process and subsequently delve into the agents' constructive behaviors during discussions.
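A minimal sketch of a debate-style evaluation loop of the kind described here is given below; the role prompts, turn structure, and `call_llm()` helper are illustrative assumptions, not ChatEval's exact implementation.

```python
# Minimal sketch of a debate-style evaluation loop (not ChatEval's exact implementation).
# call_llm() is a hypothetical completion helper.

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder for a real LLM API call

def debate_evaluate(question: str, response_1: str, response_2: str,
                    agents=("Alice", "Bob", "Carol"), rounds: int = 2) -> str:
    history = []
    for _ in range(rounds):
        for agent in agents:
            transcript = "\n".join(history)
            prompt = (
                f"You are {agent}, one of several referees in a debate.\n"
                f"Question: {question}\n"
                f"Assistant 1's response: {response_1}\n"
                f"Assistant 2's response: {response_2}\n"
                f"Discussion so far:\n{transcript}\n"
                "Give your assessment and respond to the other referees."
            )
            history.append(f"{agent}: {call_llm(prompt)}")
    verdict_prompt = "Based on the discussion below, answer '1', '2', or 'tie'.\n" + "\n".join(history)
    return call_llm(verdict_prompt)
```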
As is shown in Table 5, Alice first points out that the response of Assistant 2 contains more detailed information and prefers Assistant 2 as the better response. Bob then agrees with Alice's assessment, but also points out that Assistant 1's response is concise and raises a thought-provoking question. Carol then gives the feedback that she believes both responses are equally valuable. In the subsequent discussion, Bob indicates that Assistant 1's response is straightforward while Assistant 2's is detailed, suggesting that the effectiveness of the response should depend on the context and the individual's needs. At the end of the debate, we extract the final evaluation result that both responses are of the same quality, which is identical to the human annotation results. | 2308.07201#30 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
2308.06921 | 31 | You can use the replace() method instead. This method takes two parameters: the substring you want to replace and the substring you want to replace it with. In this case, you want to replace the character with an empty string. To do this, you can pass an empty string as the second parameter to the replace() method. This will remove the character from the string.
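For illustration (again, not part of CodeHelp's intentionally code-free reply), the fix this paragraph describes could be written out as follows.

```python
# The fix described above, written out for illustration: strings have replace(), not remove().
def character_remover(word, character):
    return word.replace(character, "")  # replace the character with an empty string

print(character_remover("Hello", "l"))  # prints "Heo"
```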
For example, if you have a string word = 'Hello' and a character character = 'l', you can remove the character from the string by calling word.replace(character, ''). This will return a new string with the character removed, 'Heo'. | 2308.06921#31 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 31 | # 3.2.1 LLM-only methods
LLMs are capable of storing knowledge within their parameters, making it natural to capitalize on this knowledge for query rewriting. As a pioneering work in LLM-based query rewriting, HyDE [101] uses an LLM to generate a hypothetical document for the given query and then uses a dense retriever to retrieve documents from the corpus that are relevant to the generated document. Query2doc [86] generates pseudo documents by prompting LLMs with few-shot demonstrations, and then expands the query with the generated pseudo document. Furthermore, the influence of different prompting methods and various model sizes on query rewriting has also been investigated [102]. To better accommodate the frozen retriever and the LLM-based reader, a small language model can be employed as the rewriter and trained with reinforcement learning, using rewards provided by the LLM-based reader [100]. GFF [106] presents a "Generate, Filter, and Fuse" method for query expansion. It employs an LLM to create a set of related keywords via a reasoning chain. Then, a self-consistency filter is used to identify the most important keywords, which are
concatenated with the original queries for the downstream reranking task. | 2308.07107#31 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 31 | HUMANEVALSYNTHESIZE
| Model | Python | JavaScript | Java | Go | C++ | Rust | Avg. |
|---|---|---|---|---|---|---|---|
| *Non-permissive models* | | | | | | | |
| InstructCodeT5+ | 37.0 | 18.9 | 17.4 | 9.5 | 19.8 | 0.3 | 17.1 |
| WizardCoder | 57.3 | 49.5 | 36.1 | 36.4 | 40.9 | 20.2 | 40.1 |
| GPT-4 | 86.6 | 82.9 | 81.7 | 72.6 | 78.7 | 67.1 | 78.3 |
| *Permissive models* | | | | | | | |
| BLOOMZ | 15.6 | 14.8 | 18.4 | 8.4 | 6.5 | 5.5 | 11.5 |
| StarChat-β | 33.5 | 31.4 | 26.7 | 25.5 | 26.6 | 14.0 | 26.3 |
| CodeGeeX2 | 35.9 | 32.2 | 30.8 | 22.5 | 29.3 | 18.1 | 28.1 |
| StarCoder | 33.6 | 30.8 | 30.2 | 17.6 | 31.6 | 21.8 | 27.6 |
| OCTOGEEX | 44.7 | 33.8 | 36.9 | 21.9 | 32.3 | 15.7 | 30.9 |
| OCTOCODER | 46.2 | 39.2 | 38.2 | 30.4 | 35.6 | 23.4 | 35.5 |

| 2308.07124#31 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 31 | From this sequence, we can pinpoint several fascinating behaviors exhibited by the agents: (1) Opening Statement: Alice initiates the debate with a clear stance, establishing the foundational argument and guiding the trajectory of the subsequent discourse. (2) Alternative Proposal: Bob introduces an alternative viewpoint, emphasizing the need to consider diverse interpretations. This not only broadens the discussion but also stimulates critical thinking. In the context of a debate, the introduction of an alternative proposal prevents the stagnation of thought, challenges pre-existing bias, and uncovers considerations that might otherwise be overlooked, ensuring that the discussions are well-rounded. (3) Stance Maintenance: Alice's persistent adherence to her initial stance, even when faced with opposing views, exemplifies commitment and challenges other participants to refine their perspectives. By firmly holding her position, Alice encourages depth in the discourse, prompting others to dive deeper into their arguments and perhaps consider aspects they hadn't previously. It ensures the conversation remains robust, focused, and continually evolving, driving all participants to a higher level of engagement and critical thinking. (4) Seeking Consensus: The discussion's climax reveals a collective agreement amongst the participants, which is reached through mutual understanding and compromise, underlining the value of each presented viewpoint. | 2308.07201#31 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
2308.06921 | 32 | # 3.3 For Instructors
CodeHelp provides instructors with interfaces that enable them to configure their classes and view their students' interactions with the system. When configuring a class, an instructor can set the default language for the class (students can still choose other languages when requesting help) and specify a set of keywords that should be avoided in any response to the student, aka an "avoid set." The avoid set is important because LLMs, with no context about the course for which help is being requested, tend to generate responses using common programming "best practices," and these practices can be too advanced or otherwise inappropriate for a given course context. For example, in an introductory programming course, an instructor may want students to practice summing the values in a list using a for loop and an accumulator variable, while an LLM might suggest a student use the sum() function to solve such a problem. | 2308.06921#32 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
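The "avoid set" described in the CodeHelp chunk above can be pictured as a simple keyword filter over generated responses. The sketch below is a minimal illustration of that idea, not CodeHelp's actual implementation; the keyword list and sample response are hypothetical.

```python
import re

def violates_avoid_set(response: str, avoid_set: list) -> list:
    """Return the avoid-set keywords that appear in a generated response.

    Whole-word, case-insensitive matching is used so that e.g. "sum" does not
    accidentally match "summary".
    """
    hits = []
    for keyword in avoid_set:
        if re.search(rf"\b{re.escape(keyword)}\b", response, flags=re.IGNORECASE):
            hits.append(keyword)
    return hits

# Hypothetical course configuration: the instructor wants students to practice
# explicit loops, so library shortcuts are placed in the avoid set.
avoid_set = ["sum", "list comprehension"]
response = "You could simply call sum(values) to add the numbers."
print(violates_avoid_set(response, avoid_set))  # ['sum']
```

In a real system the flagged response would presumably be regenerated or rephrased rather than shown to the student as-is.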
2308.07107 | 32 | concatenated with the original queries for the downstream reranking task.
It is worth noting that though the designs of these methods are different, all of them rely on the world knowledge stored in LLMs without additional corpora.
# 3.2.2 Corpus-enhanced LLM-based methods
Although LLMs exhibit remarkable capabilities, the lack of domain-specific knowledge may lead to the generation of hallucinatory or irrelevant queries. To address this issue, recent studies [96–99] have proposed a hybrid approach that enhances LLM-based query rewriting methods with an external document corpus.
Why incorporate a document corpus? The integration of a document corpus offers several notable advantages. Firstly, it boosts relevance by using relevant documents to refine query generation, reducing irrelevant content and improving contextually appropriate outputs. Second, enhancing LLMs with up-to-date information and specialized knowledge in specific fields enables them to effectively deal with queries that are both current and specific to certain domains.
How to incorporate a document corpus? Thanks to the flexibility of LLMs, various paradigms have been proposed to incorporate a document corpus into LLM-based query rewriting, which can be summarized as follows. | 2308.07107#32 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
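As a rough illustration of the corpus-enhanced rewriting idea discussed in the chunk above (retrieved passages placed into the rewriting prompt), the sketch below assembles such a prompt. The `retrieve` and `generate` callables are hypothetical placeholders standing in for a retriever and an LLM; this is not code from any specific system covered by the survey.

```python
from typing import Callable, Sequence

def corpus_enhanced_rewrite(
    query: str,
    retrieve: Callable[[str, int], Sequence[str]],  # hypothetical: returns top-k corpus passages
    generate: Callable[[str], str],                 # hypothetical: LLM text completion
    k: int = 3,
) -> str:
    """Rewrite a query with the help of passages retrieved from a document corpus."""
    passages = retrieve(query, k)
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    prompt = (
        "Rewrite the search query so that it is more specific and self-contained.\n"
        f"Relevant passages:\n{context}\n"
        f"Original query: {query}\n"
        "Rewritten query:"
    )
    return generate(prompt).strip()
```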
2308.07124 | 32 | Table 2: Zero-shot pass@1 (%) performance across HUMANEVALPACK. InstructCodeT5+, WizardCoder, StarChat-β, StarCoder and OCTOCODER have 16B parameters. CodeGeeX2 and OCTOGEEX have 6B parameters. BLOOMZ has 176B parameters. In this work, we call models "permissive" if weights are freely accessible and usable for commercial purposes. â: Commercial license available after submitting a form. â : Trained on data that may not be used âto develop models that compete with OpenAIâ thus we classify them as non-permissive in this work (see §1).
HUMANEVALSYNTHESIZE. OCTOCODER performs weakest on Rust, which is the lowest resource language of StarCoder among the languages we benchmark (1.2% of pretraining data). | 2308.07124#32 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
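The pass@1 numbers referenced in the Table 2 caption above are conventionally computed with the unbiased pass@k estimator popularized by the HumanEval benchmark. The sketch below implements that standard formula as background for the metric; it is not code from the OctoPack repository.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimator of pass@k.

    n: number of sampled solutions per problem
    c: number of those samples that pass the unit tests
    k: budget of attempts (k=1 for the pass@1 scores quoted above)
    """
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 20 samples per problem, 9 of them correct; for k=1 this reduces to c/n.
print(round(pass_at_k(20, 9, 1), 2))
```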
2308.07201 | 32 | In light of the above, ChatEval stands out not just as a tool for comparison but as an embodiment of interactive natural language dialogue. By simulating human argumentative interactions, it differentiates itself from static, single-presented opinions. This dynamic interaction showcases the richness and complexity of language, capturing nuances often missed in singular viewpoints. As such, ChatEval offers a reliable evaluation process that not only mirrors human discourse but also highlights the transformative power of collaborative dialogue. This positions it uniquely, underscoring its significant potential to execute text evaluation tasks both reliably and effectively.
5 RELATED WORK
Automatic NLG evaluation In the landscape of NLG, evaluating the quality of generated text represents a particularly arduous task. For a significant period, evaluation was primarily dependent on
Table 3: Effect of diverse role specification on FairEval benchmark.
Evaluator: ChatGPT. Single-Agent: Acc. 53.8%, Kap. 0.27. Multi-Agent (Same Role Prompt): Acc. 53.8%, Kap. 0.25. Multi-Agent (Diverse Role Prompt): Acc. 60%, Kap. 0.33.
Table 4: Comparing of different communication strategies on FairEval benchmark. | 2308.07201#32 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
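Table 3 above reports accuracy alongside a kappa agreement statistic ("Kap.", commonly Cohen's kappa). For readers unfamiliar with the statistic, the sketch below computes Cohen's kappa for two label sequences; the example verdicts are invented for illustration and are not data from the paper.

```python
from collections import Counter

def cohens_kappa(labels_a: list, labels_b: list) -> float:
    """Cohen's kappa: chance-corrected agreement between two annotators."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(labels_a) | set(labels_b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented example: model verdicts vs. human verdicts on 8 response pairs.
human = ["A", "B", "A", "Tie", "B", "A", "B", "A"]
model = ["A", "B", "B", "Tie", "B", "A", "A", "A"]
print(round(cohens_kappa(human, model), 2))
```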
2308.06921 | 33 | Instructors may access their students' queries in the "Instructor View" (Figure 8). This view provides a list of the users in their class with query counts (total and within the past week) and a list of all the student queries. The list of queries shows salient details of each query (with full text for any field appearing when hovering the cursor over it), and any row can be selected to take the instructor to the response view for that query. The list of queries can be filtered to show those from a selected user, and it is searchable (full text) and sortable. Instructors can also download their class data as CSV files.
CodeHelp integrates with learning management systems (LMSes) like Moodle or Canvas that support LTI (Learning Tools Interoperability). With a small amount of setup, an instructor can provide their students access to CodeHelp via a simple link in their course on the LMS. Via this link, students may access CodeHelp and be automatically authenticated without having to create, manage, or use a separate login. Instructors and TAs are identified automatically by LTI, so they have access to the instructor interfaces in CodeHelp with no additional work. They can then configure their course for student use and monitor their students' queries and the responses they are receiving.
| 2308.06921#33 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 33 | • Late fusion of LLM-based re-writing and pseudo relevance feedback (PRF) retrieval results. Traditional PRF methods leverage relevant documents retrieved from a document corpus to rewrite queries, which restricts the query to the information contained in the target corpus. On the contrary, LLM-based rewriting methods provide external context not present in the corpus, which is more diverse. Both approaches have the potential to independently enhance retrieval performance. Therefore, a straightforward strategy for combining them is using a weighted fusion method for retrieval results [99].
⢠Combining retrieved relevant documents in the prompts of LLMs. In the era of LLMs, incorporating instructions within the prompts is the most flexible method for achieving specific functionalities. QUILL [97] and CAR [107] illus- trate how retrieval augmentation of queries can provide LLMs with context that significantly enhances query un- derstanding. LameR [108] takes this further by using LLM expansion to improve the simple BM25 retriever, intro- ducing a retrieve-rewrite-retrieve framework. Experimental results reveal that even basic term-based retrievers can achieve comparable performance when paired with LLM- based rewriters. Additionally, InteR [98] proposes a multi- turn interaction framework between search engines and LLMs. This enables search engines to expand queries using LLM-generated insights, while LLMs refine prompts using relevant documents sourced from the search engines. | 2308.07107#33 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
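The "weighted fusion of retrieval results" mentioned in the first bullet above can be sketched as a linear interpolation of per-document scores from the two runs. The run dictionaries and the 0.6/0.4 weights below are illustrative assumptions, not values taken from the cited work [99].

```python
def fuse_runs(run_llm: dict, run_prf: dict, alpha: float = 0.6) -> list:
    """Late fusion: combine scores from an LLM-rewrite run and a PRF run.

    Scores are min-max normalized per run before the weighted sum so that the
    two retrievers' score scales are comparable.
    """
    def normalize(run: dict) -> dict:
        lo, hi = min(run.values()), max(run.values())
        return {d: (s - lo) / (hi - lo) if hi > lo else 0.0 for d, s in run.items()}

    a, b = normalize(run_llm), normalize(run_prf)
    docs = set(a) | set(b)
    fused = {d: alpha * a.get(d, 0.0) + (1 - alpha) * b.get(d, 0.0) for d in docs}
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# Illustrative scores for a few documents from each retrieval strategy.
run_llm = {"d1": 12.3, "d2": 9.8, "d3": 7.1}
run_prf = {"d2": 0.92, "d3": 0.88, "d4": 0.75}
print(fuse_runs(run_llm, run_prf))
```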
2308.07124 | 33 | Models struggle with small targeted changes HUMANEVALFIX is the most challenging task for most models. They commonly regenerate the buggy function without making any change (e.g. WizardCoder in Figure 33) or they introduce new bugs (e.g. GPT-4 in Figure 32). We analyze model performance by bug type in Appendix L and find bugs that require removing excess code are the most challenging. OCTOCODER performs comparatively well across all languages. Instruction tuning on COMMITPACKFT has likely taught OCTOCODER to make small, targeted changes to fix bugs.
Models struggle switching between code and text Some models fail at HUMANEVALEXPLAIN, as they do not generate natural language explanations. We manually inspect explanations for the first ten samples of the Python split and disqualify a model if none of them are explanations. This is the case for StarCoder and CodeGeeX2, which generate code instead of natural language explanations. BLOOMZ and InstructCodeT5+ also occasionally generate code. Other models exclusively generate natural language explanations, not containing any code for inspected samples. | 2308.07124#33 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.06921 | 34 | [Unrecoverable OCR residue of the Instructor View screenshot (Figure 8): a Users panel listing usernames with total and past-week query counts, and a Queries panel listing each query's id, user, time, language, code, error, issue, response length, and helpfulness rating.] | 2308.06921#34 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 34 | • Enhancing factuality of generative relevance feedback (GRF) by pseudo relevance feedback (PRF). Although generative documents are often relevant and diverse, they exhibit hallucinatory characteristics. In contrast, traditional documents are generally regarded as reliable sources of factual information. Motivated by this observation, GRM [96] proposes a novel technique known as relevance-aware sample estimation (RASE). RASE leverages relevant documents retrieved from
TABLE 1. Partial Examples of different prompting methods in query rewriting.
Methods and Prompts:
Zero-shot / HyDE [101]: Please write a passage to answer the question. Question: {#Question} Passage:
Zero-shot / LameR [108]: Give a question {#Question} and its possible answering passages: A. {#Passage 1} B. {#Passage 2} C. {#Passage 3} ... Please write a correct answering passage.
Few-shot / Query2Doc [101]: Write a passage that answers the given query: Query: {#Query 1} Passage: {#Passage 1} ... Query: {#Query} Passage:
Chain-of-Thought / CoT [102]: Answer the following query based on the context: Context: {#PRF doc 1} {#PRF doc 2} {#PRF doc 3} Query: {#Query} Give the rationale before answering | 2308.07107#34 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
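The prompt formats excerpted in TABLE 1 above differ mainly in how much context they pack around the query. A minimal sketch of how such templates might be filled in is shown below; the helper functions and placeholder filling logic are our own illustration, not code from the surveyed systems.

```python
ZERO_SHOT = (
    "Please write a passage to answer the question.\n"
    "Question: {question}\nPassage:"
)

FEW_SHOT = (
    "Write a passage that answers the given query:\n"
    "{demonstrations}\n"
    "Query: {query}\nPassage:"
)

CHAIN_OF_THOUGHT = (
    "Answer the following query based on the context:\n"
    "Context: {prf_docs}\n"
    "Query: {query}\n"
    "Give the rationale before answering"
)

def build_few_shot(query: str, examples: list) -> str:
    """examples: list of (query, passage) pairs used as in-context demonstrations."""
    demos = "\n".join(f"Query: {q}\nPassage: {p}" for q, p in examples)
    return FEW_SHOT.format(demonstrations=demos, query=query)

def build_cot(query: str, prf_docs: list) -> str:
    """prf_docs: pseudo-relevance-feedback passages to include as context."""
    return CHAIN_OF_THOUGHT.format(prf_docs=" ".join(prf_docs), query=query)

print(ZERO_SHOT.format(question="what causes jet lag?"))
```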
2308.07124 | 34 | Models struggle adhering to a specified output length HUMANEVALEXPLAIN instructs models to fit their explanation within a given character limit (§3). Current models appear to have no understanding of how many characters they are generating. They commonly write very short and thus underspecified explanations (e.g. BLOOMZ in Figure 34) or excessively long explanations that end up being cut off (e.g. StarChat-β in Figure 37). Future work could investigate how to enable models to be aware of their generated output length to improve HUMANEVALEXPLAIN performance. | 2308.07124#34 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
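A small post-processing harness like the one below can check (or enforce) the character limit that HUMANEVALEXPLAIN places on explanations, as discussed in the chunk above. This is an illustrative sketch, not part of the official evaluation code.

```python
def check_length(explanation: str, char_limit: int) -> dict:
    """Report whether a generated explanation respects the character budget."""
    return {
        "length": len(explanation),
        "limit": char_limit,
        "within_limit": len(explanation) <= char_limit,
    }

def truncate_to_limit(explanation: str, char_limit: int) -> str:
    """Hard-truncate at the last sentence boundary that still fits the budget."""
    if len(explanation) <= char_limit:
        return explanation
    cut = explanation[:char_limit]
    last_period = cut.rfind(".")
    return cut[: last_period + 1] if last_period > 0 else cut

explanation = "The function sorts the list, then removes duplicates. Finally it returns the result."
print(check_length(explanation, 60))
print(truncate_to_limit(explanation, 60))
```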
2308.07201 | 34 | human annotations, a process that is labor-intensive and limited by scalability issues. Automatic NLG evaluation attempts to address these challenges by leveraging computational models to assess the quality of a generated text. Previous work lies on the following categories: (1) n-gram-based metrics: ROUGE (Lin, 2004) is a set of metrics that compute the amount of overlap between n-grams in the machine-generated summaries and the reference summaries. BLEU (Papineni et al., 2002) compare the generated text with reference translations, based on the co-occurrence of n-grams in both texts. In spite of being easily and widely used, the above method is incapable of capturing syntactic and semantic similarity (Stent et al., 2005). (2) embedding-based metrics: Word embeddings are vector representations of words that capture their semantic properties, such that words with similar meanings have similar embeddings. A bunch of work leverages word embeddings to evaluate the semantic similarity between two pieces of text. BERTScore (Zhang et al., 2019) use contextualized word embeddings from transformer models like BERT (Devlin et al., 2018), BLEURT | 2308.07201#34 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
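To make the n-gram family of metrics mentioned above concrete, the sketch below computes a toy unigram-overlap recall in the spirit of ROUGE-1. It is a simplified stand-in for the idea, not the official ROUGE implementation, and the example strings are invented.

```python
from collections import Counter

def rouge1_recall(candidate: str, reference: str) -> float:
    """Toy ROUGE-1 recall: fraction of reference unigrams covered by the candidate."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(cand[w], ref[w]) for w in ref)
    return overlap / max(sum(ref.values()), 1)

reference = "the cat sat on the mat"
candidate = "a cat sat on a mat"
print(round(rouge1_recall(candidate, reference), 2))
```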
2308.06921 | 35 | [Unrecoverable OCR residue continuing the Instructor View screenshot (Figure 8): further rows of the student query list and the pagination controls.] | 2308.06921#35 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 35 | the collection to assign weights to generated documents. In this way, GRM ensures that relevance feedback is not only diverse but also maintains a high degree of factuality.
# 3.3 Rewriting Approaches
There are three main approaches used for leveraging LLMs in query rewriting: prompting methods, fine-tuning, and knowledge distillation. Prompting methods involve using specific prompts to direct LLM output, providing flexibility and interpretability. Fine-tuning adjusts pre-trained LLMs on specific datasets or tasks to improve domain-specific performance, mitigating the general nature of LLM world knowledge. Knowledge distillation, on the other hand, transfers LLM knowledge to lightweight models, simplifying the complexity associated with retrieval augmentation. In the following section, we will introduce these three methods in detail.
# 3.3.1 Prompting
Prompting in LLMs refers to the technique of providing a specific instruction or context to guide the modelâs genera- tion of text. The prompt serves as a conditioning signal and influences the language generation process of the model. Existing prompting strategies can be roughly categorized into three groups: zero-shot prompting, few-shot prompt- ing, and chain-of-thought (CoT) prompting [45]. | 2308.07107#35 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
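One way to read the "knowledge distillation" option described in the chunk above is as a two-step recipe: collect rewrites from a large teacher model, then fine-tune a lightweight rewriter on them. The sketch below covers only the data-collection step; the teacher_rewrite callable is a hypothetical placeholder for the large model, and the file name is arbitrary.

```python
import json
from typing import Callable, Iterable

def build_distillation_set(
    queries: Iterable[str],
    teacher_rewrite: Callable[[str], str],  # hypothetical large-model rewriter
    out_path: str = "rewrite_distill.jsonl",
) -> int:
    """Collect (query, teacher rewrite) pairs for training a lightweight rewriter."""
    n = 0
    with open(out_path, "w", encoding="utf-8") as f:
        for q in queries:
            pair = {"query": q, "rewrite": teacher_rewrite(q)}
            f.write(json.dumps(pair) + "\n")
            n += 1
    return n

# The resulting JSONL file would then be used as supervision for a small
# sequence-to-sequence rewriting model.
```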
2308.07124 | 35 | HumanEval code synthesis is close to saturation Pure code synthesis on HUMANEVALSYN- THESIZE is the easiest task for all models. With a pass rate of 86.6% for a single solution, GPT-4 is close to fully saturating the Python subset. GPT-4 was originally found to score 67% on Python HumanEval (OpenAI, 2023) and 81% in later work (Bubeck et al., 2023). Our score for GPT-4 is significantly higher, possibly due to improvements made to the API by OpenAI, contamination of HumanEval in GPT-4 training, or slightly different prompting and evaluation. An example of our prompt is depicted in Figure 3 (right). We perform very careful evaluation to ensure every generation is correctly processed. We reproduce the HumanEval score of WizardCoder (Luo et al., 2023; Xu et al., 2023a) and find it to also perform well across other languages. For BLOOMZ and InstructCodeT5+ our evaluation leads to a higher Python score than they reported, likely because of our more careful processing of generations. OCTOCODER has the highest performance for every language among permissively licensed models. With a pass@1 of 46.2% on the original Python split, OCTOCODER improves by a relative 38% over its base model, StarCoder.
5 RELATED WORK | 2308.07124#35 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 35 | text. BERTScore (Zhang et al., 2019) use contextualized word embeddings from transformer models like BERT (Devlin et al., 2018), BLEURT (Sellam et al., 2020) utilize supervised training data to enhance the performance. MoverScore (Zhao et al., 2019) combine contextualized word embeddings with Earth Mover's Distance (Rubner et al., 2000). (3) LLM-based metrics: Amidst the flourishing advancement of LLM which embodies a wealth of information derived from extensive training data, using LLM as an evaluator has experienced notable progress. GPTScore (Fu et al., 2023) utilize conditional probability to assign the text a score representing its quality. Wang et al. (2023a) explore the potential of utilizing ChatGPT as an NLG evaluator by prompting it to score a text directly. Wang et al. (2023c) curate a reliable dataset containing pairwise comparison and evaluation explanation which can be used to train a foundation model making it a better evaluator. Bai et al. (2023) propose decentralized evaluation to provide fairer evaluation results. G-EVAL | 2308.07201#35 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
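The "conditional probability" idea behind GPTScore mentioned above can be pictured as scoring a candidate text by its length-normalized log-likelihood under a language model. The sketch below assumes a hypothetical token_logprobs function that returns per-token log-probabilities of the candidate given the prompt; the actual GPTScore implementation differs in its details.

```python
from typing import Callable, Sequence

def avg_logprob_score(
    prompt: str,
    candidate: str,
    token_logprobs: Callable[[str, str], Sequence[float]],  # hypothetical: log p(token | prompt, prefix)
) -> float:
    """Length-normalized log-likelihood of a candidate text given an instruction prompt."""
    lps = token_logprobs(prompt, candidate)
    return sum(lps) / max(len(lps), 1)

# Higher (less negative) scores indicate text the model finds more fluent and
# more consistent with the evaluation instruction; candidates can be ranked by it.
```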
2308.07107 | 36 | Zero-shot prompting. Zero-shot prompting involves instructing the model to generate texts on a specific topic without any prior exposure to training examples in that domain or topic. The model relies on its pre-existing knowledge and language understanding to generate coherent and contextually relevant expanded terms for original queries. Experiments show that zero-shot prompting is a simple yet effective method for query rewriting [98, 99, 102, 108–110]. • Few-shot prompting. Few-shot prompting, also known as in-context learning, involves providing the model with a limited set of examples or demonstrations related to the | 2308.07107#36 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 36 | 5 RELATED WORK
5.1 CODE MODELS
There has been extensive work on code models tailored to a specific coding task, such as code summarization (Iyer et al., 2016; Ahmad et al., 2020; Zhang et al., 2022a; Shi et al., 2022) or code editing (Drain et al., 2021; Zhang et al., 2022c; He et al., 2022; Zhang et al., 2022b; Wei et al., 2023; Prenner & Robbes, 2023; Fakhoury et al., 2023; Skreta et al., 2023) (also see work on edit models more generally (Reid & Neubig, 2022; Schick et al., 2022; Dwivedi-Yu et al., 2022; Raheja et al., 2023)). These works use task-specific heuristics that limit the applicability of their methods to other tasks. In contrast, we aim to build models applicable to all kinds of tasks related to code and beyond. | 2308.07124#36 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.06921 | 37 | Figure 8: An instructor's view of student help requests. The full contents of each field are displayed in a tooltip when the user hovers a mouse pointer over it. Note that real usernames have been replaced with pseudonyms.
4 LIMITATIONS AND RISKS CodeHelp is subject to many of the known limitations and risks of using LLMs. In particular, completions can be factually incorrect and can include harmful biases. The problem of inaccuracies in the LLM responses (sometimes called "hallucination" or "confabulation") is present in CodeHelp with the models it is currently using. Sometimes, the response contains one or more false statements, and this may confuse or mislead the user. Users are sensitised to this issue via the prominent notice above each response saying "Remember: It will not always be correct!" In our experience, when inaccuracies did occur, they were often in a particular detail of the response, which still gave correct high-level guidance or pointed the user in the right direction. In our and our students' experiences, the rate of inaccuracies is low enough for the tool to still be valuable and worth the students' time, and as models improve, the accuracy will improve. | 2308.06921#37 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 37 | desired task or domain [86, 102, 109, 110]. These examples serve as a form of explicit instruction, allowing the model to adapt its language generation to the specific task or domain at hand. Query2Doc [86] prompts LLMs to write a document that answers the query with some demo query-document pairs provided by the ranking dataset, such as MSMARCO [111] and NQ [112]. This work experiments with a single prompt. To further study the impact of different prompt designs, recent works [102] have explored eight different prompts, such as prompting LLMs to generate query expansion terms instead of entire pseudo documents and CoT prompting. There are some illustrative prompts in Table 1. This work conducts more experiments than Query2Doc, but the results show that the proposed prompts are less effective than Query2Doc's. | 2308.07107#37 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
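Chunk 2308.07107#37 above describes Query2Doc-style few-shot expansion, in which the generated pseudo-document is used to augment the original query before retrieval. The sketch below shows one plausible way to combine the two for sparse (BM25) retrieval; the toy corpus, the query-repetition factor, and the hard-coded pseudo-document are assumptions for illustration rather than the exact Query2Doc recipe.

```python
# Illustrative sketch only: folding an LLM-generated pseudo-document back
# into sparse retrieval. The toy corpus, the choice of repeating the query
# five times, and the hard-coded pseudo-document are assumptions; they do
# not reproduce the exact Query2Doc setup.
from rank_bm25 import BM25Okapi

corpus = [
    "Tides are caused by the gravitational pull of the moon and the sun.",
    "Hamlet is a tragedy written by William Shakespeare.",
    "BM25 is a ranking function used by many search engines.",
]
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

query = "what causes tides"
pseudo_doc = "Ocean tides result mainly from the moon's gravity acting on Earth's water."

# Repeat the short original query so it is not drowned out by the longer
# pseudo-document, then score the corpus with BM25.
augmented_query = (query + " ") * 5 + pseudo_doc
scores = bm25.get_scores(augmented_query.lower().split())
best = max(range(len(corpus)), key=lambda i: scores[i])
print(f"top document: {corpus[best]} (score={scores[best]:.2f})")
```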
Through large-scale pretraining, more generally applicable code models have been developed (Nijkamp et al., 2022; 2023; Xu et al., 2022a; Christopoulou et al., 2022; Gunasekar et al., 2023; Li et al., 2023b; Bui et al., 2023; Scao et al., 2022a;b). However, these models only continue code, making them hard to use for tasks such as explaining code with natural language (HUMANEVALEXPLAIN). Teaching them to follow human instructions is critical to making them applicable to diverse tasks.
INSTRUCTION MODELS | 2308.07124#37 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 37 | Communicative Agents Most recently, significant attention has been dedicated to the development of communicative agents. These agents, often powered by LLMs like ChatGPT or GPT-4, are designed to interact and communicate effectively with other agents or human users using natural language. The primary goal is to facilitate more productive and efficient interaction and collaboration, as different agents can autonomously communicate and negotiate to tackle a more complex task collectively. Several studies have explored various aspects of communicative agents. Li et al. (2023a) propose a cooperative agent framework dubbed role-playing, enabling agents to autonomously cooperate to solve complex tasks. Park et al. (2023) create a sandbox environment consisting of 25 individual virtual entities endowed with a character description and memory system. Every intelligent agent is capable of autonomously interacting with other agents and the environment, simulating reliable human behavior. Qian et al. (2023) establish a chat-based software development framework that can complete a software design and produce executable software at a reduced cost compared to recruiting human programmers. Liu et al. (2023a) utilize a sandbox environment to curate reliable datasets in better alignment with human preference and train a | 2308.07201#37 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
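Chunk 2308.07201#37 above surveys communicative-agent frameworks in which role-playing LLM agents exchange natural-language messages to tackle a task together. The toy loop below illustrates that general pattern only; it is not the implementation of ChatEval or any of the cited frameworks, and the role descriptions, model name, and turn limit are arbitrary assumptions.

```python
# Illustrative sketch only: a toy loop in which two role-playing LLM agents
# discuss a task in alternating turns. The roles, model name, and turn limit
# are assumptions; this is not the implementation of ChatEval, CAMEL, or any
# other framework cited above.
from openai import OpenAI

client = OpenAI()

ROLES = [
    "You are a critic. Point out weaknesses in the current answer.",
    "You are a reviser. Improve the answer using the critic's feedback.",
]


def agent_reply(role_description: str, transcript: list[str]) -> str:
    """One agent produces its next utterance given the shared transcript."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": role_description},
            {"role": "user", "content": "\n".join(transcript)},
        ],
    )
    return resp.choices[0].message.content


transcript = ["Task: evaluate the answer 'Paris is the capital of France.'"]
for turn in range(4):  # alternate between the two agents
    speaker = turn % 2
    utterance = agent_reply(ROLES[speaker], transcript)
    transcript.append(f"Agent {speaker + 1}: {utterance}")

print("\n".join(transcript))
```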
2308.06921 | 38 | LLMs can learn harmful biases such as gender or racial stereotypes from their training data, which can then be reflected in the completions they generate. This is a well-known and heavily studied issue in language model research [36], and it has been an important issue to the computing education community as well [1]. While the models used by CodeHelp have been specifically trained and improved by OpenAI to reduce these biases, some still exist [37]. These models generally do not make offensive statements unless one actively crafts a prompt to elicit one, but for example they might respond in a way that implicitly reflects a common stereotype. This is highly unlikely to occur in the context of requesting help on a specific programming issue, but the possibility exists. | 2308.06921#38 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 38 | • Chain-of-thought prompting. CoT prompting [45] is a strategy that involves iterative prompting, where the model is provided with a sequence of instructions or partial outputs [102, 109]. In conversational search, query rewriting is multi-turn, which means queries should be refined step by step through the interaction between search engines and users. This process naturally coincides with the CoT process. As shown in 4, users can conduct the CoT process by adding instructions during each turn, such as "Based on all previous turns, xxx". In ad-hoc search, by contrast, query rewriting involves only a single round, so CoT can only be applied in a simple and coarse way. For example, as shown in Table 1, researchers add "Give the rationale before answering" to the instructions to prompt LLMs to think deeply [102].
# 3.3.2 Fine-tuning
Fine-tuning is an effective approach for adapting LLMs to specific domains. This process usually starts with a pre-trained language model, like GPT-3, which is then further trained on a dataset tailored to the target domain. This domain-specific training enables the LLM to learn unique patterns, terminology, and context relevant to the domain, improving its capacity to produce high-quality query rewrites. | 2308.07107#38 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
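Chunk 2308.07107#38 above contrasts chain-of-thought prompting with fine-tuning as routes to better query rewrites. As a rough illustration of the fine-tuning route, the sketch below trains a small seq2seq model on (conversational query, self-contained rewrite) pairs; the model choice, toy data, and hyperparameters are assumptions and do not reproduce any specific paper's recipe.

```python
# Illustrative sketch only: fine-tuning a small seq2seq model as a query
# rewriter on (conversational query, self-contained rewrite) pairs. The
# model choice, toy data, and hyperparameters are assumptions and do not
# reproduce any specific paper's recipe.
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# Toy training pairs: context-dependent query -> stand-alone rewrite.
pairs = [
    ("rewrite: what about its population? context: we discussed Paris",
     "what is the population of Paris?"),
]

model.train()
for epoch in range(3):
    for src, tgt in pairs:
        inputs = tokenizer(src, return_tensors="pt", truncation=True)
        labels = tokenizer(tgt, return_tensors="pt", truncation=True).input_ids
        loss = model(**inputs, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Inference: rewrite a new conversational query into a self-contained one.
model.eval()
test = tokenizer(
    "rewrite: how tall is it? context: we discussed the Eiffel Tower",
    return_tensors="pt",
)
with torch.no_grad():
    out = model.generate(**test, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```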