id: stringlengths 12–15
title: stringlengths 8–162
content: stringlengths 1–17.6k
prechunk_id: stringlengths 0–15
postchunk_id: stringlengths 0–15
arxiv_id: stringlengths 10–10
references: listlengths 1–1
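The schema above describes one chunk record per row, with neighbor links (prechunk_id/postchunk_id) that allow a paper's chunks to be re-joined in reading order. As a hedged illustration only — the dataset's storage format and the filename `chunked_arxiv_papers.jsonl` below are assumptions, not given by this preview — records of this shape could be loaded and stitched back together like so:

```python
import json

# Hypothetical filename; the preview does not say how the dataset is stored.
with open("chunked_arxiv_papers.jsonl") as f:
    chunks = [json.loads(line) for line in f]

by_id = {c["id"]: c for c in chunks}

def reassemble(first_chunk_id: str) -> str:
    """Follow postchunk_id links to rebuild a paper's text in reading order."""
    parts, cur = [], by_id.get(first_chunk_id)
    while cur is not None:
        parts.append(cur["content"])
        # An empty postchunk_id (stringlengths 0) marks the final chunk.
        cur = by_id.get(cur["postchunk_id"])
    return " ".join(parts)

print(reassemble("2308.06921#0")[:300])
```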
2308.07107#212
Large Language Models for Information Retrieval: A Survey
[122] V. Jeronymo, L. Bonifacio, H. Abonizio, M. Fadaee, R. de Alencar Lotufo, J. Zavrel, and R. F. Nogueira, "Inpars-v2: Large language models as efficient dataset generators for information retrieval," CoRR, vol. abs/2301.01820, 2023. [123] Z. Dai, V. Y. Zhao, J. Ma, Y. Luan, J. Ni, J. Lu, A. Bakalov, K. Guu, K. B. Hall, and M. Chang, "Promptagator: Few-shot dense retrieval from 8 examples," in ICLR. OpenReview.net, 2023. [124] R. Meng, Y. Liu, S. Yavuz, D. Agarwal, L. Tu, N. Yu, J. Zhang, M. Bhat, and Y. Zhou, "
2308.07107#211
2308.07107#213
2308.07107
[ "2305.03195" ]
2308.07107#213
Large Language Models for Information Retrieval: A Survey
Augtriever: Unsupervised dense retrieval by scalable data augmentation," 2023. [125] J. Saad-Falcon, O. Khattab, K. Santhanam, R. Florian, M. Franz, S. Roukos, A. Sil, M. A. Sultan, and C. Potts, "UDAPDR: Unsupervised domain adaptation via LLM prompting and distillation of rerankers,"
2308.07107#212
2308.07107#214
2308.07107
[ "2305.03195" ]
2308.07107#214
Large Language Models for Information Retrieval: A Survey
in Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, H. Bouamor, J. Pino, and K. Bali, Eds. Association for Computational Linguistics, 2023, pp. 11265–11279. [126] Z. Peng, X. Wu, and Y. Fang, "
2308.07107#213
2308.07107#215
2308.07107
[ "2305.03195" ]
2308.07107#215
Large Language Models for Information Retrieval: A Survey
Soft prompt tuning for augmenting dense retrieval with large language models," 2023. [127] D. S. Sachan, M. Lewis, D. Yogatama, L. Zettlemoyer, J. Pineau, and M. Zaheer, "Questions are all you need to train a dense passage retriever," Transactions of the Association for Computational Linguistics, vol. 11, pp. 600–616, 2023. [128] N. Thakur, N. Reimers, A. Rücklé, A. Srivastava, and I. Gurevych, "BEIR:
2308.07107#214
2308.07107#216
2308.07107
[ "2305.03195" ]
2308.07107#216
Large Language Models for Information Retrieval: A Survey
A heterogeneous benchmark for zero-shot evaluation of information retrieval models," in NeurIPS Datasets and Benchmarks, 2021. [129] N. Thakur, J. Ni, G. H. Ábrego, J. Wieting, J. Lin, and D. Cer, "Leveraging llms for synthesizing training data across many languages in multilingual dense retrieval," CoRR, vol. abs/2311.05800, 2023. [130] A. Neelakantan, T. Xu, R. Puri, A. Radford, J. M. Han, J. Tworek, Q. Yuan, N. Tezak, J. W. Kim, C. Hallacy, J. Heidecke, P. Shyam, B. Power, T. E. Nekoul, G. Sastry, G. Krueger, D. Schnurr, F. P. Such, K. Hsu, M. Thompson, T. Khan, T. Sherbakov, J. Jang, P. Welinder, and L. Weng, "Text and code embeddings by contrastive pre-training,"
2308.07107#215
2308.07107#217
2308.07107
[ "2305.03195" ]
2308.07107#217
Large Language Models for Information Retrieval: A Survey
CoRR, vol. abs/2201.10005, 2022. [131] X. Ma, L. Wang, N. Yang, F. Wei, and J. Lin, "Fine-tuning llama for multi-stage text retrieval," CoRR, vol. abs/2310.08319, 2023. [132] A. Asai, T. Schick, P. S. H. Lewis, X. Chen, G. Izacard, S. Riedel, H. Hajishirzi, and W. Yih, "Task-aware retrieval with instructions," in Findings of the Association for Computational Linguistics:
2308.07107#216
2308.07107#218
2308.07107
[ "2305.03195" ]
2308.07107#218
Large Language Models for Information Retrieval: A Survey
ACL 2023, Toronto, Canada, July 9-14, 2023, A. Rogers, J. L. Boyd-Graber, and N. Okazaki, Eds. Association for Computational Linguistics, 2023, pp. 3650–3675. [133] J. Ni, C. Qu, J. Lu, Z. Dai, G. H. Ábrego, J. Ma, V. Y. Zhao, Y. Luan, K. B. Hall, M. Chang, and Y. Yang, "
2308.07107#217
2308.07107#219
2308.07107
[ "2305.03195" ]
2308.07107#219
Large Language Models for Information Retrieval: A Survey
Large dual encoders are generalizable retrievers," in EMNLP. Association for Computational Linguistics, 2022, pp. 9844–9855. [134] G. Izacard, M. Caron, L. Hosseini, S. Riedel, P. Bojanowski, A. Joulin, and E. Grave, "Unsupervised dense information retrieval with contrastive learning," Trans. Mach. Learn.
2308.07107#218
2308.07107#220
2308.07107
[ "2305.03195" ]
2308.07107#220
Large Language Models for Information Retrieval: A Survey
Res., vol. 2022, 2022. [135] D. Metzler, Y. Tay, D. Bahri, and M. Najork, "Rethinking search: making domain experts out of dilettantes," SIGIR Forum, vol. 55, no. 1, pp. 13:1–13:27, 2021. [136] Y. Zhou, J. Yao, Z. Dou, L. Wu, and J. Wen, "
2308.07107#219
2308.07107#221
2308.07107
[ "2305.03195" ]
2308.07107#221
Large Language Models for Information Retrieval: A Survey
Dynamicretriever: A pre-trained model-based IR system without an explicit index," Mach. Intell. Res., vol. 20, no. 2, pp. 276–288, 2023. [137] J. Chen, R. Zhang, J. Guo, Y. Liu, Y. Fan, and X. Cheng, "Corpusbrain: Pre-train a generative retrieval model for knowledge-intensive language tasks,"
2308.07107#220
2308.07107#222
2308.07107
[ "2305.03195" ]
2308.07107#222
Large Language Models for Information Retrieval: A Survey
in Proceedings of the 31st ACM International Conference on Information & Knowledge Management, Atlanta, GA, USA, October 17-21, 2022, M. A. Hasan and L. Xiong, Eds. ACM, 2022, pp. 191–200. [138] Y. Tay, V. Tran, M. Dehghani, J. Ni, D. Bahri, H. Mehta, Z. Qin, K. Hui, Z. Zhao, J. P. Gupta, T. Schuster, W. W. Cohen, and D. Metzler, "
2308.07107#221
2308.07107#223
2308.07107
[ "2305.03195" ]
2308.07107#223
Large Language Models for Information Retrieval: A Survey
Transformer memory as a differentiable search index," in NeurIPS, 2022. [139] N. Ziems, W. Yu, Z. Zhang, and M. Jiang, "Large language models are built-in autoregressive search engines," in Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, July 9-14, 2023, A. Rogers, J. L. Boyd-Graber, and N. Okazaki, Eds. Association for Computational Linguistics, 2023, pp. 2666–2678. [140] R. F. Nogueira, W. Yang, K. Cho, and J. Lin, "
2308.07107#222
2308.07107#224
2308.07107
[ "2305.03195" ]
2308.07107#224
Large Language Models for Information Retrieval: A Survey
Multi-stage document ranking with BERT," CoRR, vol. abs/1910.14424, 2019. [141] J. Ju, J. Yang, and C. Wang, "Text-to-text multi-view learning for passage re-ranking," in SIGIR. ACM, 2021, pp. 1803–1807. [142] R. Pradeep, R. F. Nogueira, and J. Lin, "
2308.07107#223
2308.07107#225
2308.07107
[ "2305.03195" ]
2308.07107#225
Large Language Models for Information Retrieval: A Survey
The expando-mono-duo design pattern for text ranking with pre-trained sequence-to-sequence models," CoRR, vol. abs/2101.05667, 2021. [143] H. Zhuang, Z. Qin, R. Jagerman, K. Hui, J. Ma, J. Lu, J. Ni, X. Wang, and M. Bendersky, "Rankt5: Fine-tuning T5 for text ranking with ranking losses," in Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2023, Taipei, Taiwan, July 23-27, 2023, H. Chen, W. E. Duh, H. Huang, M. P. Kato, J. Mothe, and B. Poblete, Eds.
2308.07107#224
2308.07107#226
2308.07107
[ "2305.03195" ]
2308.07107#226
Large Language Models for Information Retrieval: A Survey
ACM, 2023, pp. 2308–2313. [144] L. Zhang, Y. Zhang, D. Long, P. Xie, M. Zhang, and M. Zhang, "Rankinggpt: Empowering large language models in text ranking with progressive enhancement," CoRR, vol. abs/2311.16720, 2023. [145] X. Zhang, S. Hofstätter, P. Lewis, R. Tang, and J. Lin, "
2308.07107#225
2308.07107#227
2308.07107
[ "2305.03195" ]
2308.07107#227
Large Language Models for Information Retrieval: A Survey
Rank-without-gpt: Building gpt-independent listwise rerankers on open-source large language models," arXiv preprint arXiv:2312.02969, 2023. [146] P. Liang, R. Bommasani, T. Lee, D. Tsipras, D. Soylu, M. Yasunaga, Y. Zhang, D. Narayanan, Y. Wu, A. Kumar, B. Newman, B. Yuan, B. Yan, C. Zhang, C. Cosgrove, C. D. Manning, C. Ré, D. Acosta-Navas, D. A. Hudson, E. Zelikman, E. Durmus, F. Ladhak, F. Rong, H. Ren, H. Yao, J. Wang, K. Santhanam, L. J. Orr, L. Zheng, M. Yüksekgönül, M. Suzgun, N. Kim, N. Guha, N. S. Chatterji, O. Khattab, P. Henderson, Q. Huang, R. Chi, S. M. Xie, S. Santurkar, S. Ganguli, T. Hashimoto, T. Icard, T. Zhang, V. Chaudhary, W. Wang, X. Li, Y. Mai, Y. Zhang, and Y. Koreeda, "
2308.07107#226
2308.07107#228
2308.07107
[ "2305.03195" ]
2308.07107#228
Large Language Models for Information Retrieval: A Survey
Holistic evaluation of language models," CoRR, vol. abs/2211.09110, 2022. [147] H. Zhuang, Z. Qin, K. Hui, J. Wu, L. Yan, X. Wang, and M. Bendersky, "Beyond yes and no: Improving zero-shot LLM rankers via scoring fine-grained relevance labels," CoRR, vol. abs/2310.14122, 2023.
2308.07107#227
2308.07107#229
2308.07107
[ "2305.03195" ]
2308.07107#229
Large Language Models for Information Retrieval: A Survey
[148] D. S. Sachan, M. Lewis, M. Joshi, A. Aghajanyan, W. Yih, J. Pineau, and L. Zettlemoyer, "Improving passage retrieval with zero-shot question generation," in EMNLP. Association for Computational Linguistics, 2022, pp. 3781–3797. [149] S. Zhuang, B. Liu, B. Koopman, and G.
2308.07107#228
2308.07107#230
2308.07107
[ "2305.03195" ]
2308.07107#230
Large Language Models for Information Retrieval: A Survey
Zuccon, "Open-source large language models are strong zero-shot query likelihood models for document ranking," in Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, H. Bouamor, J. Pino, and K. Bali, Eds. Association for Computational Linguistics, 2023, pp. 8807–8817. [150] S. Cho, S. Jeong, J. Seo, and J. C.
2308.07107#229
2308.07107#231
2308.07107
[ "2305.03195" ]
2308.07107#231
Large Language Models for Information Retrieval: A Survey
Park, "Discrete prompt optimization via constrained generation for zero-shot re-ranker," in ACL (Findings). Association for Computational Linguistics, 2023, pp. 960–971. [151] A. Drozdov, H. Zhuang, Z. Dai, Z. Qin, R. Rahimi, X. Wang, D. Alon, M. Iyyer, A. McCallum, D. Metzler, and K.
2308.07107#230
2308.07107#232
2308.07107
[ "2305.03195" ]
2308.07107#232
Large Language Models for Information Retrieval: A Survey
Hui, "PaRaDe: Passage ranking using demonstrations with LLMs," in Findings of the Association for Computational Linguistics: EMNLP 2023, H. Bouamor, J. Pino, and K. Bali, Eds. Singapore: Association for Computational Linguistics, Dec. 2023, pp. 14242–14252. [152] W. Sun, L. Yan, X. Ma, S. Wang, P. Ren, Z. Chen, D. Yin, and Z. Ren, "
2308.07107#231
2308.07107#233
2308.07107
[ "2305.03195" ]
2308.07107#233
Large Language Models for Information Retrieval: A Survey
Is chatgpt good at search? Investigating large language models as re-ranking agents," in Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, H. Bouamor, J. Pino, and K. Bali, Eds. Association for Computational Linguistics, 2023, pp. 14918–14937. [153] X. Ma, X. Zhang, R. Pradeep, and J. Lin, "
2308.07107#232
2308.07107#234
2308.07107
[ "2305.03195" ]
2308.07107#234
Large Language Models for Information Retrieval: A Survey
Zero-shot listwise document reranking with a large language model," CoRR, vol. abs/2305.02156, 2023. [154] R. Tang, X. Zhang, X. Ma, J. Lin, and F. Ture, "Found in the middle: Permutation self-consistency improves listwise ranking in large language models," CoRR, vol. abs/2310.07712, 2023. [155] Z. Qin, R. Jagerman, K. Hui, H. Zhuang, J. Wu, J. Shen, T. Liu, J. Liu, D. Metzler, X. Wang et al., "Large language models are effective text rankers with pairwise ranking prompting," arXiv preprint arXiv:2306.17563, 2023.
2308.07107#233
2308.07107#235
2308.07107
[ "2305.03195" ]
2308.07107#235
Large Language Models for Information Retrieval: A Survey
[156] S. Zhuang, H. Zhuang, B. Koopman, and G. Zuccon, "A setwise approach for effective and highly efficient zero-shot ranking with large language models," CoRR, vol. abs/2310.09497, 2023. [157] F. Ferraretto, T. Laitz, R. de Alencar Lotufo, and R. F. Nogueira, "
2308.07107#234
2308.07107#236
2308.07107
[ "2305.03195" ]
2308.07107#236
Large Language Models for Information Retrieval: A Survey
Exaranker: Synthetic explanations improve neural rankers," in Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2023, Taipei, Taiwan, July 23-27, 2023, H. Chen, W. E. Duh, H. Huang, M. P. Kato, J. Mothe, and B. Poblete, Eds. ACM, 2023, pp. 2409–2414. [158] L.
2308.07107#235
2308.07107#237
2308.07107
[ "2305.03195" ]
2308.07107#237
Large Language Models for Information Retrieval: A Survey
Boytsov, P. Patel, V. Sourabh, R. Nisar, S. Kundu, R. Ramanathan, and E. Nyberg, "Inpars-light: Cost-effective unsupervised training of efficient rankers," CoRR, vol. abs/2301.02998, 2023. [159] A. Askari, M. Aliannejadi, E. Kanoulas, and S. Verberne, "
2308.07107#236
2308.07107#238
2308.07107
[ "2305.03195" ]
2308.07107#238
Large Language Models for Information Retrieval: A Survey
Generating synthetic documents for cross-encoder re-rankers: A comparative study of chatgpt and human experts," CoRR, vol. abs/2305.02320, 2023. [160] R. Pradeep, S. Sharifymoghaddam, and J. Lin, "Rankvicuna: Zero-shot listwise document reranking with open-source large language models," CoRR, vol. abs/2309.15088, 2023.
2308.07107#237
2308.07107#239
2308.07107
[ "2305.03195" ]
2308.07107#239
Large Language Models for Information Retrieval: A Survey
[161] R. Pradeep, S. Sharifymoghaddam, and J. Lin, "Rankzephyr: Effective and robust zero-shot listwise reranking is a breeze!" CoRR, vol. abs/2312.02724, 2023. [162] W. Sun, Z. Chen, X. Ma, L. Yan, S. Wang, P. Ren, Z. Chen, D. Yin, and Z. Ren, "Instruction distillation makes large language models efficient zero-shot rankers," arXiv preprint arXiv:2311.01555, 2023. [163] C.
2308.07107#238
2308.07107#240
2308.07107
[ "2305.03195" ]
2308.07107#240
Large Language Models for Information Retrieval: A Survey
J. C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. N. Hullender, "Learning to rank using gradient descent," in ICML, ser. ACM International Conference Proceeding Series, vol. 119. ACM, 2005, pp. 89–96. [164] J. A. Baktash and M. Dawodi, "
2308.07107#239
2308.07107#241
2308.07107
[ "2305.03195" ]
2308.07107#241
Large Language Models for Information Retrieval: A Survey
Gpt-4: A review on advancements and opportunities in natural language processing," arXiv preprint arXiv:2305.03195, 2023. [165] H. Wachsmuth, S. Syed, and B. Stein, "Retrieval of the best counterargument without prior topic knowledge," in ACL (1). Association for Computational Linguistics, 2018, pp. 241–251. [166] K. Guu, K. Lee, Z. Tung, P. Pasupat, and M. Chang, "
2308.07107#240
2308.07107#242
2308.07107
[ "2305.03195" ]
2308.07107#242
Large Language Models for Information Retrieval: A Survey
Retrieval augmented language model pre-training," in Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, ser. Proceedings of Machine Learning Research, vol. 119. PMLR, 2020, pp. 3929–3938. [167] P. S. H. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, H. Küttler, M. Lewis, W. Yih, T. Rocktäschel, S. Riedel, and D. Kiela, "
2308.07107#241
2308.07107#243
2308.07107
[ "2305.03195" ]
2308.07107#243
Large Language Models for Information Retrieval: A Survey
Retrieval-augmented generation for knowledge-intensive NLP tasks," in Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, Eds., 2020. [168] W. Shi, S. Min, M. Yasunaga, M. Seo, R. James, M. Lewis, L. Zettlemoyer, and W. Yih, "REPLUG: Retrieval-augmented black-box language models," CoRR, vol. abs/2301.12652, 2023.
2308.07107#242
2308.07107#244
2308.07107
[ "2305.03195" ]
2308.07107#244
Large Language Models for Information Retrieval: A Survey
[169] G. Izacard, P. S. H. Lewis, M. Lomeli, L. Hosseini, F. Petroni, T. Schick, J. Dwivedi-Yu, A. Joulin, S. Riedel, and E. Grave, "Atlas: Few-shot learning with retrieval augmented language models," J. Mach. Learn. Res., vol. 24, pp. 251:1–251:43, 2023. [170] A. Lazaridou, E. Gribovskaya, W. Stokowiec, and N. Grigorev, "
2308.07107#243
2308.07107#245
2308.07107
[ "2305.03195" ]
2308.07107#245
Large Language Models for Information Retrieval: A Survey
Internet-augmented language models through few-shot prompting for open-domain question answering," CoRR, vol. abs/2203.05115, 2022. [171] H. He, H. Zhang, and D. Roth, "Rethinking with retrieval: Faithful large language model inference," CoRR, vol. abs/2301.00303, 2023. [172] W. Yu, H. Zhang, X. Pan, K. Ma, H. Wang, and D. Yu, "
2308.07107#244
2308.07107#246
2308.07107
[ "2305.03195" ]
2308.07107#246
Large Language Models for Information Retrieval: A Survey
Chain-of-note: Enhancing robustness in retrieval-augmented language models," CoRR, vol. abs/2311.09210, 2023. [173] O. Ram, Y. Levine, I. Dalmedigos, D. Muhlgay, A. Shashua, K. Leyton-Brown, and Y. Shoham, "In-context retrieval-augmented language models," CoRR, vol. abs/2302.00083, 2023.
2308.07107#245
2308.07107#247
2308.07107
[ "2305.03195" ]
2308.07107#247
Large Language Models for Information Retrieval: A Survey
[174] Z. Shao, Y. Gong, Y. Shen, M. Huang, N. Duan, and W. Chen, "Enhancing retrieval-augmented large language models with iterative retrieval-generation synergy," in Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, H. Bouamor, J. Pino, and K. Bali, Eds.
2308.07107#246
2308.07107#248
2308.07107
[ "2305.03195" ]
2308.07107#248
Large Language Models for Information Retrieval: A Survey
Association for Computational Linguistics, 2023, pp. 9248–9274. [175] H. Trivedi, N. Balasubramanian, T. Khot, and A. Sabharwal, "Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions," in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, A. Rogers, J. L. Boyd-Graber, and N. Okazaki, Eds. Association for Computational Linguistics, 2023, pp. 10014–10037. [176] Z. Jiang, F. F. Xu, L. Gao, Z. Sun, Q. Liu, J.
2308.07107#247
2308.07107#249
2308.07107
[ "2305.03195" ]
2308.07107#249
Large Language Models for Information Retrieval: A Survey
Dwivedi-Yu, Y. Yang, J. Callan, and G. Neubig, "Active retrieval augmented generation," in Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, H. Bouamor, J. Pino, and K. Bali, Eds. Association for Computational Linguistics, 2023, pp. 7969–
2308.07107#248
2308.07107#250
2308.07107
[ "2305.03195" ]
2308.07107#250
Large Language Models for Information Retrieval: A Survey
7992. [177] A. Asai, Z. Wu, Y. Wang, A. Sil, and H. Hajishirzi, "Self-rag: Learning to retrieve, generate, and critique through self-reflection," CoRR, vol. abs/2310.11511, 2023. [178] J. Liu, J. Jin, Z. Wang, J. Cheng, Z. Dou, and J. Wen, "
2308.07107#249
2308.07107#251
2308.07107
[ "2305.03195" ]
2308.07107#251
Large Language Models for Information Retrieval: A Survey
RETA-LLM: A retrieval-augmented large language model toolkit," CoRR, vol. abs/2306.05212, 2023. [179] T. Vu, M. Iyyer, X. Wang, N. Constant, J. W. Wei, J. Wei, C. Tar, Y. Sung, D. Zhou, Q. V. Le, and T. Luong, "Freshllms: Refreshing large language models with search engine augmentation,"
2308.07107#250
2308.07107#252
2308.07107
[ "2305.03195" ]
2308.07107#252
Large Language Models for Information Retrieval: A Survey
CoRR, vol. abs/2310.03214, 2023. [180] X. Lyu, S. Grafberger, S. Biegel, S. Wei, M. Cao, S. Schelter, and C. Zhang, "Improving retrieval-augmented large language models via data importance learning," CoRR, vol. abs/2307.03027, 2023. [181] T. Gao, H. Yen, J. Yu, and D. Chen, "
2308.07107#251
2308.07107#253
2308.07107
[ "2305.03195" ]
2308.07107#253
Large Language Models for Information Retrieval: A Survey
Enabling large language models to generate text with citations," in Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, H. Bouamor, J. Pino, and K. Bali, Eds. Association for Computational Linguistics, 2023, pp. 6465–6488. [182] H. Luo, T. Zhang, Y. Chuang, Y. Gong, Y. Kim, X. Wu, H. Meng, and J. R. Glass, "
2308.07107#252
2308.07107#254
2308.07107
[ "2305.03195" ]
2308.07107#254
Large Language Models for Information Retrieval: A Survey
Search augmented instruction learning," in Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, H. Bouamor, J. Pino, and K. Bali, Eds. Association for Computational Linguistics, 2023, pp. 3717–3729. [183] X. V. Lin, X. Chen, M. Chen, W. Shi, M. Lomeli, R. James, P. Rodriguez, J. Kahn, G. Szilvasy, M. Lewis, L. Zettlemoyer, and S. Yih, "
2308.07107#253
2308.07107#255
2308.07107
[ "2305.03195" ]
2308.07107#255
Large Language Models for Information Retrieval: A Survey
RA-DIT: Retrieval-augmented dual instruction tuning," CoRR, vol. abs/2310.01352, 2023. [184] W. Yu, Z. Zhang, Z. Liang, M. Jiang, and A. Sabharwal, "Improving language models via plug-and-play retrieval feedback," CoRR, vol. abs/2305.14002, 2023. [185] Z. Feng, X. Feng, D. Zhao, M. Yang, and B. Qin, "Retrieval-generation synergy augmented large language models," CoRR, vol. abs/2310.05149, 2023. [186] S.
2308.07107#254
2308.07107#256
2308.07107
[ "2305.03195" ]
2308.07107#256
Large Language Models for Information Retrieval: A Survey
Kadavath, T. Conerly, A. Askell, T. Henighan, D. Drain, E. Perez, N. Schiefer, Z. Hatfield-Dodds, N. DasSarma, E. Tran-Johnson, S. Johnston, S. E. Showk, A. Jones, N. Elhage, T. Hume, A. Chen, Y. Bai, S. Bowman, S. Fort, D. Ganguli, D. Hernandez, J. Jacobson, J. Kernion, S. Kravec, L. Lovitt, K. Ndousse, C. Olsson, S. Ringer, D. Amodei, T. Brown, J. Clark, N. Joseph, B. Mann, S. McCandlish, C. Olah, and J. Kaplan, "Language models (mostly) know what they know,"
2308.07107#255
2308.07107#257
2308.07107
[ "2305.03195" ]
2308.07107#257
Large Language Models for Information Retrieval: A Survey
CoRR, vol. abs/2207.05221, 2022. [187] Z. Jiang, J. Araki, H. Ding, and G. Neubig, "How can we know when language models know? On the calibration of language models for question answering," Trans. Assoc. Comput. Linguistics, vol. 9, pp. 962–977, 2021. [188] O. Press, M. Zhang, S. Min, L. Schmidt, N. A. Smith, and M. Lewis, "Measuring and narrowing the compositionality gap in language models," in Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, H.
2308.07107#256
2308.07107#258
2308.07107
[ "2305.03195" ]
2308.07107#258
Large Language Models for Information Retrieval: A Survey
Bouamor, J. Pino, and K. Bali, Eds. Association for Computational Linguistics, 2023, pp. 5687–5711. [189] O. Khattab, K. Santhanam, X. L. Li, D. Hall, P. Liang, C. Potts, and M. Zaharia, "Demonstrate-search-predict: Composing retrieval and language models for knowledge-intensive NLP,"
2308.07107#257
2308.07107#259
2308.07107
[ "2305.03195" ]
2308.07107#259
Large Language Models for Information Retrieval: A Survey
CoRR, vol. abs/2212.14024, 2022. [190] O. Yoran, T. Wolfson, B. Bogin, U. Katz, D. Deutch, and J. Berant, "Answering questions by meta-reasoning over multiple chains of thought," in Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, H.
2308.07107#258
2308.07107#260
2308.07107
[ "2305.03195" ]
2308.07107#260
Large Language Models for Information Retrieval: A Survey
Bouamor, J. Pino, and K. Bali, Eds. Association for Computational Linguistics, 2023, pp. 5942–5966. [191] M. A. Arefeen, B. Debnath, and S. Chakradhar, "Leancontext: Cost-efficient domain-specific question answering using llms," CoRR, vol. abs/2309.00841, 2023. [192] F. Xu, W. Shi, and E. Choi, "
2308.07107#259
2308.07107#261
2308.07107
[ "2305.03195" ]
2308.07107#261
Large Language Models for Information Retrieval: A Survey
RECOMP: Improving retrieval-augmented lms with compression and selective augmentation," CoRR, vol. abs/2310.04408, 2023. [193] Z. Wang, J. Araki, Z. Jiang, M. R. Parvez, and G. Neubig, "Learning to filter context for retrieval-augmented generation," CoRR, vol. abs/2311.08377, 2023. [194] J. Liu, L. Li, T. Xiang, B. Wang, and Y. Qian, "
2308.07107#260
2308.07107#262
2308.07107
[ "2305.03195" ]
2308.07107#262
Large Language Models for Information Retrieval: A Survey
TCRA-LLM: Token compression retrieval augmented large language model for inference cost reduction," in Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, H. Bouamor, J. Pino, and K. Bali, Eds. Association for Computational Linguistics, 2023, pp. 9796–9810. [195] N. F. Liu, K. Lin, J. Hewitt, A. Paranjape, M.
2308.07107#261
2308.07107#263
2308.07107
[ "2305.03195" ]
2308.07107#263
Large Language Models for Information Retrieval: A Survey
Bevilacqua, F. Petroni, and P. Liang, "Lost in the middle: How language models use long contexts," CoRR, vol. abs/2307.03172, 2023. [196] R. Ren, Y. Wang, Y. Qu, W. X. Zhao, J. Liu, H. Tian, H. Wu, J. Wen, and H. Wang, "Investigating the factual knowledge boundary of large language models with retrieval augmentation," CoRR, vol. abs/2307.11019, 2023.
2308.07107#262
2308.07107#264
2308.07107
[ "2305.03195" ]
2308.07107#264
Large Language Models for Information Retrieval: A Survey
[197] Y. Liu, S. Yavuz, R. Meng, M. Moorthy, S. Joty, C. Xiong, and Y. Zhou, "Exploring the integration strategies of retriever and large language models," CoRR, vol. abs/2308.12574, 2023. [198] R. Aksitov, C. Chang, D. Reitter, S. Shakeri, and Y.
2308.07107#263
2308.07107#265
2308.07107
[ "2305.03195" ]
2308.07107#265
Large Language Models for Information Retrieval: A Survey
Sung, "Characterizing attribution and fluency tradeoffs for retrieval-augmented large language models," CoRR, vol. abs/2302.05578, 2023. [199] A. Mallen, A. Asai, V. Zhong, R. Das, D. Khashabi, and H. Hajishirzi, "When not to trust language models: Investigating effectiveness of parametric and non-parametric memories," in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto,
2308.07107#264
2308.07107#266
2308.07107
[ "2305.03195" ]
2308.07107#266
Large Language Models for Information Retrieval: A Survey
Canada, July 9-14, 2023, A. Rogers, J. L. Boyd-Graber, and N. Okazaki, Eds. Association for Computational Linguistics, 2023, pp. 9802–9822. [200] Y. Wang, X. Ma, and W. Chen, "Augmenting black-box llms with medical textbooks for clinical question answering," CoRR, vol. abs/2309.02233, 2023.
2308.07107#265
2308.07107#267
2308.07107
[ "2305.03195" ]
2308.07107#267
Large Language Models for Information Retrieval: A Survey
[201] S. Munikoti, A. Acharya, S. Wagle, and S. Horawalavithana, "ATLANTIC: Structure-aware retrieval-augmented language model for interdisciplinary science," CoRR, vol. abs/2311.12289, 2023. [202] X. Li, E. Nie, and S. Liang, "Crosslingual retrieval augmented in-context learning for bangla," CoRR, vol. abs/2311.00587, 2023. [203] A. Lozano, S. L. Fleming, C. Chiang, and N. Shah, "
2308.07107#266
2308.07107#268
2308.07107
[ "2305.03195" ]
2308.07107#268
Large Language Models for Information Retrieval: A Survey
Clinfo.ai: An open-source retrieval-augmented large language model system for answering medical questions using scientific literature," CoRR, vol. abs/2310.16146, 2023. [204] B. Zhang, H. Yang, T. Zhou, A. Babar, and X. Liu, "Enhancing financial sentiment analysis via retrieval augmented large language models," in 4th ACM International Conference on AI in Finance, ICAIF 2023, Brooklyn, NY, USA, November 27-29, 2023. ACM, 2023, pp. 349–
2308.07107#267
2308.07107#269
2308.07107
[ "2305.03195" ]
2308.07107#269
Large Language Models for Information Retrieval: A Survey
356. [205] A. Louis, G. van Dijck, and G. Spanakis, "Interpretable long-form legal question answering with retrieval-augmented large language models," CoRR, vol. abs/2309.17050, 2023. [206] G. Zyskind, T. South, and A. Pentland, "Don't forget private retrieval: Distributed private similarity search for large language models,"
2308.07107#268
2308.07107#270
2308.07107
[ "2305.03195" ]
2308.07107#270
Large Language Models for Information Retrieval: A Survey
CoRR, vol. abs/2311.12955, 2023. [207] W. Jiang, M. Zeller, R. Waleffe, T. Hoefler, and G. Alonso, "Chameleon: A heterogeneous and disaggregated accelerator system for retrieval-augmented language models," CoRR, vol. abs/2310.09949, 2023. [208] Y. Hoshi, D. Miyashita, Y. Ng, K. Tatsuno, Y. Morioka, O. Torii, and J. Deguchi, "
2308.07107#269
2308.07107#271
2308.07107
[ "2305.03195" ]
2308.07107#271
Large Language Models for Information Retrieval: A Survey
Ralle: A framework for developing and evaluating retrieval-augmented large language models," in Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023 - System Demonstrations, Singapore, December 6-10, 2023, Y. Feng and E. Lefever, Eds. Association for Computational Linguistics, 2023, pp. 52–69. [209] R. Thoppilan, D. De Freitas, J. Hall, N. Shazeer, A. Kulshreshtha, H. Cheng, A. Jin, T. Bos, L. Baker, Y. Du, Y. Li, H. Lee, H. S. Zheng, A. Ghafouri, M. Menegali, Y. Huang, M. Krikun, D. Lepikhin, J. Qin, D. Chen, Y. Xu, Z. Chen, A. Roberts, M. Bosma, Y. Zhou, C. Chang, I. Krivokon, W. Rusch, M. Pickett, K. S. Meier-Hellstern, M. R. Morris, T. Doshi, R. D. Santos, T. Duke, J. Soraker, B. Zevenbergen, V. Prabhakaran, M. Diaz, B. Hutchinson, K. Olson, A. Molina, E. Hoffman-John, J. Lee, L. Aroyo, R. Rajakumar, A. Butryna, M. Lamm, V. Kuzmina, J. Fenton, A. Cohen, R. Bernstein, R. Kurzweil, B. A. y Arcas, C. Cui, M. Croak, E. H. Chi, and Q. Le, "Lamda: Language models for dialog applications,"
2308.07107#270
2308.07107#272
2308.07107
[ "2305.03195" ]
2308.07107#272
Large Language Models for Information Retrieval: A Survey
CoRR, vol. abs/2201.08239, 2022. [210] K. Shuster, M. Komeili, L. Adolphs, S. Roller, A. Szlam, and J. Weston, "Language models that seek for knowledge: Modular search & generation for dialogue and prompt completion," in Findings of the Association for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, Y. Goldberg, Z. Kozareva, and Y. Zhang, Eds. Association for Computational Linguistics, 2022, pp. 373–393. [211] X. Liu, H. Lai, H. Yu, Y. Xu, A. Zeng, Z. Du, P. Zhang, Y. Dong, and J. Tang, "
2308.07107#271
2308.07107#273
2308.07107
[ "2305.03195" ]
2308.07107#273
Large Language Models for Information Retrieval: A Survey
Webglm: Towards an efficient web-enhanced question answering system with human preferences," in Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD 2023, Long Beach, CA, USA, August 6-10, 2023, A. K. Singh, Y. Sun, L. Akoglu, D. Gunopulos, X. Yan, R. Kumar, F. Ozcan, and J. Ye, Eds. ACM, 2023, pp. 4549–4560. [212] I. Gur, H. Furuta, A. Huang, M. Safdari, Y. Matsuo, D. Eck, and A. Faust, "
2308.07107#272
2308.07107#274
2308.07107
[ "2305.03195" ]
2308.07107#274
Large Language Models for Information Retrieval: A Survey
A real-world webagent with planning, long context understanding, and program synthesis," CoRR, vol. abs/2307.12856, 2023. [213] J. Menick, M. Trebacz, V. Mikulik, J. Aslanides, H. F. Song, M. J. Chadwick, M. Glaese, S. Young, L. Campbell-Gillingham, G. Irving, and N. McAleese, "Teaching language models to support answers with verified quotes," CoRR, vol. abs/2203.11147, 2022. [214] X. Shi, J. Liu, Y. Liu, Q. Cheng, and W. Lu, "
2308.07107#273
2308.07107#275
2308.07107
[ "2305.03195" ]
2308.07107#275
Large Language Models for Information Retrieval: A Survey
Know where to go: Make LLM a relevant, responsible, and trustworthy searcher," CoRR, vol. abs/2310.12443, 2023. [215] Y. Qin, Z. Cai, D. Jin, L. Yan, S. Liang, K. Zhu, Y. Lin, X. Han, N. Ding, H. Wang, R. Xie, F. Qi, Z. Liu, M. Sun, and J. Zhou, "
2308.07107#274
2308.07107#276
2308.07107
[ "2305.03195" ]
2308.07107#276
Large Language Models for Information Retrieval: A Survey
Webcpm: Interactive web search for chinese long-form question answering," in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, A. Rogers, J. L. Boyd-Graber, and N. Okazaki, Eds. Association for Computational Linguistics, 2023, pp. 8968–8988. [216] X. Deng, Y. Gu, B. Zheng, S. Chen, S. Stevens, B. Wang, H. Sun, and Y. Su, "Mind2web: Towards a generalist agent for the web,"
2308.07107#275
2308.07107#277
2308.07107
[ "2305.03195" ]
2308.07107#277
Large Language Models for Information Retrieval: A Survey
CoRR, vol. abs/2306.06070, 2023. [217] S. Yao, H. Chen, J. Yang, and K. Narasimhan, "Webshop: Towards scalable real-world web interaction with grounded language agents," in NeurIPS, 2022. [218] S. Zhou, F. F. Xu, H. Zhu, X. Zhou, R. Lo, A. Sridhar, X. Cheng, Y. Bisk, D. Fried, U. Alon, and G. Neubig, "
2308.07107#276
2308.07107#278
2308.07107
[ "2305.03195" ]
2308.07107#278
Large Language Models for Information Retrieval: A Survey
Webarena: A realistic web environment for building autonomous agents," CoRR, vol. abs/2307.13854, 2023. [219] R. Lo, A. Sridhar, F. F. Xu, H. Zhu, and S. Zhou, "Hierarchical prompting assists large language model on web navigation," in Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, H.
2308.07107#277
2308.07107#279
2308.07107
[ "2305.03195" ]
2308.07107#279
Large Language Models for Information Retrieval: A Survey
Bouamor, J. Pino, and K. Bali, Eds. Association for Computational Linguistics, 2023, pp. 10217–10244. [220] S. MacAvaney, C. Macdonald, R. Murray-Smith, and I. Ounis, "Intent5: Search result diversification using causal language models," CoRR, vol. abs/2108.04026, 2021. [221] N.
2308.07107#278
2308.07107#280
2308.07107
[ "2305.03195" ]
2308.07107#280
Large Language Models for Information Retrieval: A Survey
Craswell, "Mean reciprocal rank," in Encyclopedia of Database Systems, L. Liu and M. T. Özsu, Eds. Springer US, 2009, p. 1703. [222] K. Järvelin and J. Kekäläinen, "Cumulated gain-based evaluation of IR techniques," ACM Trans. Inf. Syst., vol. 20, no. 4, pp. 422–446, 2002. [223] K. Papineni, S. Roukos, T. Ward, and W. Zhu, "
2308.07107#279
2308.07107#281
2308.07107
[ "2305.03195" ]
2308.07107#281
Large Language Models for Information Retrieval: A Survey
Bleu: a method for automatic evaluation of machine translation," in Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, July 6-12, 2002, Philadelphia, PA, USA. ACL, 2002, pp. 311–318. [224] C.-Y. Lin, "ROUGE: A package for automatic evaluation of summaries," in Text Summarization Branches Out. Barcelona, Spain: Association for Computational Linguistics, Jul. 2004, pp. 74–81. [225] P. Manakul, A. Liusie, and M. J. F. Gales, "
2308.07107#280
2308.07107#282
2308.07107
[ "2305.03195" ]
2308.07107#282
Large Language Models for Information Retrieval: A Survey
Selfcheckgpt: Zero-resource black-box hallucination detection for generative large language models," CoRR, vol. abs/2303.08896, 2023. [226] H. Qian, Y. Zhu, Z. Dou, H. Gu, X. Zhang, Z. Liu, R. Lai, Z. Cao, J. Nie, and J. Wen, "Webbrain: Learning to generate factually correct articles for queries by grounding on large web corpus," CoRR, vol. abs/2304.04358, 2023.
2308.07107#281
2308.07107#283
2308.07107
[ "2305.03195" ]
2308.07107#283
Large Language Models for Information Retrieval: A Survey
[227] J. Li, X. Cheng, W. X. Zhao, J. Nie, and J. Wen, "Halueval: A large-scale hallucination evaluation benchmark for large language models," CoRR, vol. abs/2305.11747, 2023. [228] L. Chen, Y. Deng, Y. Bian, Z. Qin, B. Wu, T. Chua, and K. Wong, "
2308.07107#282
2308.07107#284
2308.07107
[ "2305.03195" ]
2308.07107#284
Large Language Models for Information Retrieval: A Survey
Beyond factuality: A comprehensive evaluation of large language models as knowledge generators," in Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, H. Bouamor, J. Pino, and K. Bali, Eds. Association for Computational Linguistics, 2023, pp. 6325–6341. [229] S. Xu, D. Hou, L. Pang, J. Deng, J. Xu, H. Shen, and X. Cheng, "
2308.07107#283
2308.07107#285
2308.07107
[ "2305.03195" ]
2308.07107#285
Large Language Models for Information Retrieval: A Survey
Ai-generated images introduce invisible relevance bias to text-image retrieval," CoRR, vol. abs/2311.14084, 2023. [230] S. Dai, Y. Zhou, L. Pang, W. Liu, X. Hu, Y. Liu, X. Zhang, and J. Xu, "Llms may dominate information access: Neural retrievers are biased towards llm-generated texts," CoRR, vol. abs/2310.20501, 2023. [231] J. S. Park, J. C. O'Brien, C. J. Cai, M. R. Morris, P. Liang, and M. S. Bernstein, "
2308.07107#284
2308.07107#286
2308.07107
[ "2305.03195" ]
2308.07107#286
Large Language Models for Information Retrieval: A Survey
Generative agents: Interactive simulacra of human behavior," CoRR, vol. abs/2304.03442, 2023.
2308.07107#285
2308.07107
[ "2305.03195" ]
2308.06921#0
CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes
arXiv:2308.06921v1 [cs.CY] 14 Aug 2023 # CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes # Mark Liffiton [email protected] Illinois Wesleyan University Bloomington, Illinois, USA Brad Sheese [email protected] Illinois Wesleyan University Bloomington, Illinois, USA Jaromir Savelka [email protected] Carnegie Mellon University Pittsburgh, Pennsylvania, USA # Paul Denny [email protected] The University of Auckland Auckland, New Zealand ABSTRACT Computing educators face significant challenges in providing timely support to students, especially in large class settings. Large language models (LLMs) have emerged recently and show great promise for providing on-demand help at a large scale, but there are concerns that students may over-rely on the outputs produced by these models. In this paper, we introduce CodeHelp, a novel LLM-powered tool designed with guardrails to provide on-demand assistance to programming students without directly revealing solutions. We detail the design of the tool, which incorporates a number of useful features for instructors, and elaborate on the pipeline of prompting strategies we use to ensure generated outputs are suitable for students. To evaluate CodeHelp, we deployed it in a first-year computer and data science course with 52 students and collected student interactions over a 12-week period. We examine students' usage patterns and perceptions of the tool, and we report reflections from the course instructor and a series of recommendations for classroom use. Our findings suggest that CodeHelp is well-received by students who especially value its availability and help with resolving errors, and that for instructors it is easy to deploy and complements, rather than replaces, the support that they provide to students.
2308.06921#1
2308.06921
[ "2304.03938" ]
2308.06921#1
CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes
CCS CONCEPTS • Social and professional topics → Computer science education; Software engineering education; • Human-centered computing → Interactive systems and tools. KEYWORDS Intelligent tutoring systems, Intelligent programming tutors, Programming assistance, Novice programmers, Natural language interfaces, Large language models, Guardrails 1 INTRODUCTION AND MOTIVATION As student interest in programming continues to grow and class sizes expand, educators face significant challenges in providing effective and timely support to all students. Traditional approaches of offering on-demand expert help do not scale well in very large settings, and not all students feel comfortable approaching an instructor or a teaching assistant for help [13]. Similarly, authoring static hints or responses to commonly encountered issues that can be presented to students needing help is both time intensive and unlikely to be exhaustive. Thus, there is great need for scalable approaches for providing immediate, high-quality support to students who are learning to program. Large language models (LLMs) have recently garnered considerable interest due to their capabilities for generating human-like text in a wide array of contexts, including computing education [27]. There, LLMs have shown great potential for generating resources such as programming exercises, code explanations and model solutions [11]. Recent work has even shown that LLM-generated explanations of code are perceived as more useful to students than explanations produced by their peers [20]. Thus, the prospect of using LLMs to produce real-time, on-demand help for students appears promising. However, a common concern is that students may rely too heavily on the outputs produced by such models, especially if they can be used to generate solutions directly [1]. Related concerns around student over-reliance on LLM-based tools are common in educational settings [16]. Indeed, when OpenAI recently released the widely publicised GPT-4 model, they showcased the example of a "socratic" tutor, highlighting how the model could be steered away from revealing solutions directly to the user (https://openai.com/research/gpt-4). In this paper we introduce CodeHelp, an LLM-powered tool for generating real-time help for programming and computer science students. A key contribution of CodeHelp is its use of robust "
2308.06921#0
2308.06921#2
2308.06921
[ "2304.03938" ]
2308.06921#2
CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes
guardrails" that are specifically designed to not reveal solutions directly while helping students resolve their issues, thus mitigating the over-reliance trap that direct use of LLMs may cause. We describe the design of the CodeHelp tool and elaborate on the LLM prompting strategies that we use to generate outputs that guide students towards a solution without producing answers directly. We also discuss the tool's useful features for instructors, including the ability to observe, summarise, and review how their students engage with it. To explore its potential, we deployed CodeHelp in a first-year computer- and data-science course with 52 students and monitored its usage over a 12-week period. We investigate when and how frequently students engaged with CodeHelp, what types of help they request, and how useful they found the tool. To date, there has been significant interest in the computing education literature focusing on the accuracy of LLMs, the types of resources they can generate, and comparative analyses involving historical student data [11]. To our knowledge, this work represents the first evaluation of an always-available LLM-powered teaching assistant with guardrails tailored for computer science education. We found that CodeHelp is well-received by students, it is easy and inexpensive to deploy, and most importantly, it appears to effectively complement and expand on the support students receive from course instructors and teaching assistants (TAs). 2 RELATED WORK Providing effective automated assistance to novice programmers has been a longstanding research problem. Considerable attention has been devoted to the development and evaluation of so-called intelligent tutoring systems for programming, sometimes referred to as intelligent programming tutors (IPT). Such systems vary greatly and contain a large range of supplementary features [8]. Most of the work has been devoted to various approaches for the generation of effective hints [21, 22] and feedback [18]. The primary difference between CodeHelp and previous work in this area is that CodeHelp is able to respond to a far wider range of requests and requires little or no configuration or setup for any specific class context due to its underlying use of LLMs. Prior to the development and use of LLMs, similar tools had to rely on various rule-based and machine learning-based natural language processing techniques that were much more specialized and, hence, brittle. For example, they could only support a single programming language or type of support request.
2308.06921#1
2308.06921#3
2308.06921
[ "2304.03938" ]
2308.06921#3
CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes
CodeHelp supports any programming language with sufficient coverage in the underlying LLM's training set. In particular, programming languages that are commonly used in computing education are covered very well. CodeHelp can also respond effectively to a wide variety of request types. Chatbots provide a convenient interaction experience and have previously been deployed as intelligent assistants in programming education contexts. For example, Carreira et al. developed Pyo, a chatbot designed to help novice programmers in online courses by providing definitions of concepts, guiding them through errors, and assisting with exercises [4]. Although the goal of Pyo is very similar to that of CodeHelp, a notable distinction is that Pyo is rule-based with predetermined topics and conversation flows, while CodeHelp is far more flexible. In similar work, Konecki et al. proposed a rule-based intelligent assistant for programming education aiming to increase engagement, motivation and learning time [19]. Although the primary focus of CodeHelp is to assist students in resolving their issues when programming, we expect it may influence engagement and motivation as well. Python-Bot [24] and RevBot [25] are examples of AI-based systems that help students understand Python syntax and practice past exam questions. Here, the focus is not on resolving issues, as with CodeHelp, but rather on helping students understand particular topics and testing their knowledge. Duckbot is another chatbot designed to enhance help-seeking between students and teaching staff in programming tutorials [29]. Walden et al. [34] developed a chatbot for teaching secure programming in PHP. Unlike many existing chatbot tools that have a narrow focus, CodeHelp leverages the power of LLMs to provide support across a wide variety of contexts involving various programming languages. LLMs have been shown to exhibit remarkable performance on a broad range of tasks, including code generation [6]. Finnie-Ansley
2308.06921#2
2308.06921#4
2308.06921
[ "2304.03938" ]
2308.06921#4
CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes
Finnie-Ansley Mark Liffiton, Brad Sheese, Jaromir Savelka, and Paul Denny et al. found that Codex (GitHub Copilot) outperforms typical stu- dents in CS1 programming exams [12]. Similarly, Savelka et al. found that GPT-4 comfortably passes diverse types of assessments from introductory and intermediate Python programming classes at the post-secondary education level [31]. Denny et al. evaluated Copilot on 166 CS1 coding problems and found that it successfully solves around half of these problems on its very first attempt, and that it solves 60% of the remaining problems if the problem de- scription is reformulated appropriately [9]. Tian et al. evaluated ChatGPT as a programming assistant and found that it successfully handles typical programming challenges [33]. LLMs have also been applied to other computing education tasks, such as writing tests [5, 15], and helping novices learn how to craft effective prompts [10]. Moreover, LLMs have been employed to generate example ex- planations as scaffolding to help students learn how to understand and explain code themselves [20] and to generate programming exercises and code explanations [30]. This prior work demonstrates the capabilities and the flexibility of the LLMs that power CodeHelp. Despite their impressive performance at many tasks, LLMs may not be as effective as human tutors in some domains. For instance, LLMs may struggle with certain types of programming multiple- choice questions [32] or certain types of coding exercises [31]. An empirical evaluation of GitHub Copilotâ s code suggestions revealed limitations in generating reliable code [23]. Pardos and Bhandari [26] compared learning gains from hints generated by LLMs and human tutors, finding that although both led to positive learning gains, human-generated hints were superior. They also found that only 70% of ChatGPT-generated hints were usable. Our vision for CodeHelp is that it will serve to augment existing instruction, pro- viding students with another convenient and accessible avenue to seek support, rather than replacing human instructors or TAs. Two recent studies in the computing education literature pro- vide excellent motivation for our work. Both studies highlight the pressing need for a tool that provides appropriate guardrails when generating responses to studentsâ
2308.06921#3
2308.06921#5
2308.06921
[ "2304.03938" ]
2308.06921#5
CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes
requests. The first study, by Kazemitabaar et al., analyses student use of their Coding Steps tool [17]. Coding Steps integrates an AI code generator into the user interface of an online programming tool. When a student uses this code generator, they provide a natural language prompt which is packaged together with their existing code and six static examples and sent to the OpenAI Codex API. The response from the API is then automatically inserted for the student into the code editor. In their study, where students tackled 45 Python programming tasks over ten 90-minute sessions, AI-generated code was submitted by students without any modification 49% of the time. This heavy use of the code generator raises concerns around student over-reliance which has been identified as a key challenge for educators [1, 3, 7, 28]. The second study that is particularly pertinent to our work is the recent paper by Hellas et al. exploring responses generated by Codex and GPT-3.5 to 150 student help requests from a historical dataset [14]. The data had previously been collected via a platform that allowed students to click a "Request help" button when their code did not pass automated tests. This added their request to a queue that was monitored by a teacher who could respond manually. When assessing the GPT-3.5 model, they found that many of the generated responses were accurate and that 99% of the responses contained source code. Interestingly, the authors
2308.06921#4
2308.06921#6
2308.06921
[ "2304.03938" ]
2308.06921#6
CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes
characterise the language model as an "unreliable tutor" that has a "penchant for blurting out model solutions even when you directly ask them not to". Again, this work emphasises the need for tools that can provide assistance to students without immediately revealing answers. Our work differs from these recent studies in several key ways. Our primary contribution is the explicit design of appropriate guardrails to avoid student over-reliance on model-generated code. Like Kazemitabaar et al. [17], we deployed our tool in the classroom; however, our evaluation ran for 12 weeks, and we explore how students interact with it outside of scheduled class sessions. In the dataset used by Hellas et al. [14], students infrequently used the "Request help" button likely due to the fact that requests were added to a queue and responded to manually by a teacher. In our work, students receive immediate feedback from CodeHelp at any time of the day or night. # 3 CODEHELP DESIGN AND IMPLEMENTATION We designed CodeHelp to augment and complement the learning support students receive from instructors and teaching assistants. We aimed to provide a tool in which a student could 1) request help with issues they face in programming activities and 2) immediately receive a helpful response that provides guidance and explanation without providing a complete solution. To accomplish this, we created CodeHelp with a simple, clear interface for students (Sec. 3.1); developed a workflow of multiple LLM prompts to generate the desired responses, with guardrails, from a student's input (Sec. 3.2); and implemented features specifically for instructors to manage and observe their students' usage (Sec. 3.3). For broad accessibility, CodeHelp is implemented as a web application; it is accessible at https://codehelp.app/. 3.1 Student Interfaces CodeHelp's student interfaces are simple, with minimal choices and clear guidance. Students accessing CodeHelp are brought directly to the Help Request form, shown in Figure 1. We opted for a structured input, organizing it into several specific fields rather than having a single free-form text input. This both provides guidance to students about what information is typically needed for an effective query and gives more context and structure to the prompt that is ultimately fed to an LLM, which increases the chances of supporting the student successfully.
2308.06921#5
2308.06921#7
2308.06921
[ "2304.03938" ]
2308.06921#7
CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes
Moreover, the structured input provides students an opportunity to practice asking technical questions, providing the necessary relevant context. Students are asked to provide: • The programming language in which they are working. The instructor can set a class-wide initial default, and the form then defaults to each student's most recently selected language. • The relevant snippet of code. This is optional, as not all queries reference existing code. • The error message. This is optional as well. If an error message is provided, the underlying LLM is prompted to explain the error message to the student. [Figure 1 screenshot: the form's four labeled inputs (Language, Code, Error Message, Your Issue / Question), each with short guidance text, and a Submit Request button.] Figure 1: The Help Request form (text areas have been shrunk here to save space). The four separate inputs (language, code, error, and issue) and connected guidance text help students structure their request and encourage good practices when requesting support.
2308.06921#6
2308.06921#8
2308.06921
[ "2304.03938" ]
2308.06921#8
CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes
[Figure 2 screenshot: an example request submitted by user "Holley" on 2023-03-02 12:06pm (Language: Python; Issue: "I want to print the numbers up to 10, but this just crashes") followed by CodeHelp's generated response, which explains that a Python for loop requires an iterable object and suggests using the range function, e.g. range(1, 11), with a "Was this helpful? Yes / No" feedback control.]
2308.06921#7
2308.06921#9
2308.06921
[ "2304.03938" ]
2308.06921#9
CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes
Figure 2: The Response View. Students are shown the details of their request followed by the system's response. A prominent warning reminds students that the response may be incorrect. A simple feedback form allows the students to indicate whether the answer was helpful. • The question or description of the issue with which the student needs help. After submitting a request for help, the student is brought to the response view, an example of which is shown in Figure 2. This view displays the query (for reference) and the generated response. Because there is a chance the LLM may generate an incorrect or confusing answer (discussed further in Section 4), a warning reminder is displayed prominently above every response. A simple feedback form allows the student to note whether the response was helpful. The query, response, and any feedback are stored for the student's future reference and made available to the instructor. In practice, students do not always provide sufficient information or context to provide accurate assistance. CodeHelp attempts to
Figure 3: Example response with a request for clarification (the yellow panel in the middle). The retry button takes the student to a help request form pre-filled with the same query.

In practice, students do not always provide sufficient information or context for the system to provide accurate assistance. CodeHelp attempts to determine whether each request is lacking in this way, and if so, it presents the student with a request for clarification, as shown in Figure 3. The clarification request attempts to help the student identify what additional information is needed. The determination and clarification request are generated by an LLM as well (described in Section 3.2), and because it could be incorrect, the student is also given a response to their request as written. This is mostly done to prevent students becoming stuck in a series of clarification requests without receiving any support. When a clarification is requested, the system describes the main response as an "attempt" at a response to indicate to the student that it may be less accurate given the missing information.

3.2 Generating Responses

We designed CodeHelp to generate responses to student requests that are similar to those of a human tutor or instructor helping a student in a one-on-one session. Specifically, our goals for the responses were:
• Provide explanations and guidance to support the student in their learning.
• Never include complete solutions that the student can copy without thinking or learning.
• Identify incomplete or ambiguous queries and prompt the student for additional information.
• Only respond to questions relevant to the course (to prevent abuse of the tool as unrestricted access to an LLM).

In CodeHelp, we achieve these goals via careful design of multiple prompts for the LLMs generating responses. The LLMs used in CodeHelp operate by repeatedly predicting the next word in a sequence, and so they are commonly used by providing a text prompt from which the LLM generates a completion, i.e., a sequence of words predicted to follow the prompt. LLMs are limited in the number and complexity of instructions they can accurately follow in a single prompt and completion, and we found that current LLMs could not consistently achieve all of the desired goals with a single prompt and its completion. Therefore, the current design of CodeHelp employs three separate prompts. The response workflow using these prompts is shown in Figure 4.
A student's request for help (query) is included in a "sufficiency check" prompt and in a prompt for generating the main response. Because we want the system to provide its main response even in cases when the query is determined to be insufficient as written, CodeHelp generates the sufficiency check in parallel with the main response. If the sufficiency check determines clarification is needed, we display the clarification request above the main response (Figure 3); otherwise, only the main response is shown.
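As a sketch of this parallelism, the two completions can simply be awaited together. This is our own illustration under stated assumptions: the function names (complete, sufficiency_prompt, main_prompt) are hypothetical stand-ins for the prompt renderers and the underlying LLM API call, not CodeHelp's actual code:

```python
import asyncio

def sufficiency_prompt(query: dict) -> str:
    # Hypothetical renderer for the sufficiency-check prompt (Figure 5).
    return f"Please assess the following submission...\nInputs: {query}"

def main_prompt(query: dict) -> str:
    # Hypothetical renderer for the main-response prompt (Figure 6).
    return f"Respond to the student with an educational explanation...\nInputs: {query}"

async def complete(prompt: str) -> str:
    # Stand-in for a call to the LLM completion API.
    return "..."

async def respond(query: dict) -> tuple[str, str]:
    # Run the sufficiency check in parallel with the main response, so the
    # student receives a main response even when clarification is requested.
    sufficiency, main = await asyncio.gather(
        complete(sufficiency_prompt(query)),
        complete(main_prompt(query)),
    )
    return sufficiency, main
```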
From the "main response" prompt, two different completions are generated and scored for quality (described below). The higher-scoring completion is kept and checked for the presence of code blocks, and a third prompt is used to remove them if found.

Sufficiency Check. To check for insufficient or incomplete queries, the student's query is included in a prompt with instructions that explain the context, describe the meaning of each field in the student's input, and request an assessment of sufficiency. The full prompt is shown in Figure 5. To improve the accuracy of the LLM's response, we include instructions in the prompt for the LLM to summarize the request and state its reasoning before generating the final determination. This is a specific instance of a technique generally referred to as "chain of thought prompting" (CoT), which has been found to improve the accuracy of LLM responses in various contexts [35].

Main Response. Similar to the sufficiency check, the main prompt, shown in Figure 6, inserts the individual fields of a student's query into instructions explaining the system context and the meaning of each field. As one part of preventing solution code in the response, the system modifies the student's provided issue to append, "Please do not write any example code in your response." Additionally, if the instructor has specified any keywords they want the LLM to avoid for the current class (discussed in Section 3.3), the prompt includes text listing those. Even with the main prompt explicitly instructing the LLM not to include solution or example code in its response, the response may still contain code. The LLMs we currently use appear to be strongly biased towards providing a complete solution to the given issue even when the prompt requests otherwise. Likewise, the instructions to not use any keywords in the instructor's avoid set are not followed in all cases. Therefore, CodeHelp generates two different completions for the main response, scores them based on whether they include a code block or any of the keywords in the instructor's avoid set, and takes the better of the two.
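A minimal sketch of this best-of-two selection follows. The penalty weights and helper names are our own assumptions for illustration; the paper specifies only that completions are scored on code blocks and avoid-set keywords:

```python
import re

def score(response: str, avoid_set: set[str]) -> int:
    """Lower is better: penalize fenced code blocks and avoid-set keywords (assumed weights)."""
    penalty = 0
    if re.search(r"`{3}", response):   # response still contains a fenced code block
        penalty += 10
    penalty += sum(1 for kw in avoid_set if kw in response)
    return penalty

def pick_best(completions: list[str], avoid_set: set[str]) -> str:
    # Two candidate completions are generated; keep the one with fewer violations.
    return min(completions, key=lambda r: score(r, avoid_set))
```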
Code Removal. In cases where the highest-scoring response includes a code block, CodeHelp uses a third prompt (Figure 7) to clean up the response and remove the code. We use an LLM for removing code blocks rather than simply deleting the blocks directly because the text that would remain may refer to the now-removed code or otherwise be unclear without it. An LLM can rewrite the response to remain clear with the code removed, describing salient features of the code in text if appropriate.
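The decision of whether to spend this third LLM call can be as simple as checking for a fenced block. The sketch below is our own, assuming Markdown-style fences as the prompts request; the removal prompt text paraphrases Figure 7:

```python
import re

def remove_code_if_present(response: str, complete_fn) -> str:
    # Only invoke the code-removal prompt when the selected response still
    # contains a fenced code block despite the main prompt's instructions.
    if not re.search(r"`{3}", response):
        return response
    removal_prompt = (
        "We need to provide help without including example code. Rewrite the "
        "following to remove any code blocks so that the response explains what "
        "the student should do but does not provide solution code.\n\n" + response
    )
    return complete_fn(removal_prompt)
```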
Large Language Models. Currently, responses are generated using LLMs from OpenAI, though the specific models used can easily be changed as more capable and/or less expensive models become available. Specifically, the "Sufficiency Check" and "Main Response" completions are currently performed by the gpt-3.5-turbo-0301 model, while the "Code Removal" completion uses text-davinci-003, both from the "GPT-3.5" family of OpenAI models. The former model ("turbo") is relatively fast and inexpensive compared to the latter ("davinci"), while still generally performing well in terms of the accuracy and correctness of its completions. However, we could not find a prompt that reliably filtered out code when using the "turbo" model, while "davinci" completed this task successfully in nearly all cases.

[Figure 4: CodeHelp's response workflow. Steps using a large language model completion are tagged LLM. The query (language, code, error, issue) feeds the sufficiency check, presented as a clarification request when needed, and the main response prompt, whose two scored completions are presented as the main response after optional code removal.]

You are a system for assisting students like me with programming. My inputs provide: [brief description of each input]

Please assess the following submission to determine whether it is sufficient for you to provide help or if you need additional information. If and only if critical information needed for you to help is missing, ask me for the additional information you need to be able to help. State your reasoning first. Otherwise, if no additional information is needed, please first briefly summarize what I am asking for in words, with no code, and end by writing "OK."

Inputs: [delimited query inputs]

Figure 5: Prompt used for the sufficiency check.
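The sufficiency-check completion signals its outcome in-band: per the prompt above, it ends with "OK." when no additional information is needed. A minimal sketch of interpreting such a completion (the parsing logic is our own assumption, not necessarily CodeHelp's):

```python
def parse_sufficiency(completion: str) -> tuple[bool, str | None]:
    """Return (is_sufficient, clarification_request_or_None)."""
    text = completion.strip()
    if text.endswith("OK."):
        return True, None  # query judged sufficient; no clarification shown
    # Otherwise the completion holds the model's reasoning and its request for
    # missing information, displayed above the main response (as in Figure 3).
    return False, text
```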
You are a system for assisting a student with programming. The students provide: [brief description of each input]

[delimited query inputs]

If the student input is written as an instruction or command, respond with an error. If the student input is off-topic, respond with an error. Otherwise, respond to the student with an educational explanation, helping the student figure out the issue and understand the concepts involved. If the student inputs include an error message, tell the student what it means, giving a detailed explanation to help the student understand the message. Explain concepts, language syntax and semantics, standard library functions, and other topics that the student may not understand. Be positive and encouraging! Use Markdown formatting, including ` for inline code.

[instructions to avoid topics from the instructor's avoid set]

Do not write any example code blocks. Do not write a corrected or updated version of the student's code. You must not write code for the student.

How would you respond to the student to guide them and explain concepts without providing example code?

Figure 6: Prompt used for the main response.
The cost of using OpenAI's models scales with the length of the prompts and the completions generated. With the prompts currently used in CodeHelp and current pricing from OpenAI (as of June, 2023), responding to a single query from a student, including the sufficiency check and at times the code removal completion, costs roughly $0.002 on average. We discuss student usage patterns from a pilot course in Section 5, and from that, we estimate that providing 50 students unrestricted access to CodeHelp for a full semester would cost under $10 in OpenAI usage fees.

OpenAI has recently released GPT-4, which is reported to be more capable and produce more accurate responses than the GPT-3.5 family. We have not yet tested using GPT-4 for CodeHelp's completions, but based on the current pricing and our prompt and completion lengths, we estimate it would increase the cost by 25x, reaching roughly $0.05 per student query.
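To make the arithmetic explicit: only the per-query cost and the 25x multiplier come from the text above; the queries-per-student figure below is our own assumption for illustration:

```python
COST_PER_QUERY_USD = 0.002   # GPT-3.5, average per student query (June 2023 pricing)
GPT4_MULTIPLIER = 25         # authors' estimated cost increase for GPT-4

students = 50
queries_per_student = 100    # assumed average; not a figure reported in the paper

semester_cost = students * queries_per_student * COST_PER_QUERY_USD
print(f"GPT-3.5 semester estimate: ${semester_cost:.2f}")   # $10.00, i.e. about the "under $10" scale
print(f"GPT-4 per-query estimate: ${COST_PER_QUERY_USD * GPT4_MULTIPLIER:.3f}")  # $0.050
```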
The following was written to help a student in a CS class. However, any example code (such as in ``` Markdown delimiters) can give the student an assignment's answer rather than help them figure it out themselves. We need to provide help without including example code. To do this, rewrite the following to remove any code blocks so that the response explains what the student should do but does not provide solution code.

[original response to be rewritten]

Figure 7: Prompt used for code removal.

Examples. A few examples of queries submitted by students during our pilot illustrate the results of this design. First, in an example where the only input is an instruction to write code for an assigned exercise, we see how CodeHelp provides assistance without writing any code, as intended.

Issue/Question:
Write a while loop that starts at the last character in the string and works its way backwards to the first character in the string, printing each letter on a separate line, except backwards.

Response: You can use a while loop to iterate over the characters in the string in reverse order. To do this, you can use the len() function to get the length of the string, and use the range() function to generate a sequence of numbers from the length of the string down to 0. Then you can use the str[index] syntax to access each character in the string.

To help you understand how this works, think about the following example: Say you have a string my_str = "Hello World". You can use the len() function to get the length of the string, which is 11. Then you can use the range() function to generate a sequence of numbers from 11 down to 0. This sequence of numbers will represent the indices of the characters in the string. Finally, you can use the str[index] syntax to access each character in the string.

Using this approach, you can iterate over the characters in the string in reverse order. This will print out each character in the string on a separate line, except backwards.
When students provide a problem specification with little or no code, CodeHelp tends to respond with a high-level outline like this as a starting point.

When students provide an error message, CodeHelp will typically explain what the message means and suggest a fix. In the following example, the student has not provided the details from the error message, but the LLM is still able to identify where it occurs, providing the student with effective help.

Code:
def character_remover(word, character):
    word = word.remove(character)
    return word

Error: Attribute Error

Issue/Question:
I want this function to print the word, excluding the character.

Response: The error message you are seeing is an Attribute Error, which means that the object you are trying to access does not have the attribute or method you are trying to use. In this case, the object is the string word and the attribute you are trying to use is remove(). The remove() method is not a valid method for strings, so you are getting this error.

You can use the replace() method instead. This method takes two parameters: the substring you want to replace and the substring you want to replace it with. In this case, you want to replace the character with an empty string. To do this, you can pass an empty string as the second parameter to the replace() method. This will remove the character from the string.

For example, if you have a string word = 'Hello' and a character character = 'l', you can remove the character from the string by calling word.replace(character, ''). This will return a new string with the character removed, 'Heo'.
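The fix the response describes is easy to verify directly:

```python
word = 'Hello'
character = 'l'
# replace() returns a new string; strings are immutable, so word is unchanged.
print(word.replace(character, ''))  # prints: Heo
```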
3.3 For Instructors

CodeHelp provides instructors with interfaces that enable them to configure their classes and view their students' interactions with the system. When configuring a class, an instructor can set the default language for the class (students can still choose other languages when requesting help) and specify a set of keywords that should be avoided in any response to the student, known as an "avoid set." The avoid set is important because LLMs, with no context about the course for which help is being requested, tend to generate responses using common programming "best practices," and these practices can be too advanced or otherwise inappropriate for a given course context. For example, in an introductory programming course, an instructor may want students to practice summing the values in a list using a for loop and an accumulator variable, while an LLM might suggest a student use the sum() function to solve such a problem.

Instructors may access their students' queries in the "Instructor View" (Figure 8). This view provides a list of the users in their class with query counts (total and within the past week) and a list of all the student queries. The list of queries shows salient details of each query (with full text for any field appearing when hovering the cursor over it), and any row can be selected to take the instructor to the response view for that query. The list of queries can be filtered to show those from a selected user, and it is searchable (full text) and sortable. Instructors can also download their class data as CSV files.

CodeHelp integrates with learning management systems (LMSes) like Moodle or Canvas that support LTI (Learning Tools Interoperability). With a small amount of setup, an instructor can provide their students access to CodeHelp via a simple link in their course on the LMS. Via this link, students may access CodeHelp and be automatically authenticated without having to create, manage, or use a separate login. Instructors and TAs are identified automatically by LTI, so they have access to the instructor interfaces in CodeHelp with no additional work. They can then configure their course for student use and monitor their students' queries and the responses they are receiving.
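As an illustration of how class configuration might feed the avoid-set slot in the main prompt, consider the sketch below. The configuration class and the instruction wording are our own assumptions; the paper specifies only that the prompt "includes text listing those" keywords:

```python
from dataclasses import dataclass, field

@dataclass
class ClassConfig:
    default_language: str = "python"
    avoid_set: set[str] = field(default_factory=set)  # e.g., {"sum", "list comprehension"}

def avoid_instructions(config: ClassConfig) -> str:
    # Rendered into the main prompt's "[instructions to avoid topics...]" slot.
    if not config.avoid_set:
        return ""
    topics = ", ".join(sorted(config.avoid_set))
    return f"Do not use or mention any of the following in your response: {topics}."
```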