# CMB: A Comprehensive Medical Benchmark in Chinese
The philosophy to create CMB. The CMB dataset as a whole includes multiple-choice questions from qualification examinations (CMB-Exam) and complex clinical diagnostic questions based on actual case studies (CMB-Clin). Each multiple-choice question offers four to six options, with one or more correct answers. Clinical diagnostic questions are set based on actual, complex cases encountered in the teaching process, and the correct answer is determined by the consensus of teaching experts. The sources of existing medical benchmarks include the internet (Li et al., 2023), hospitals, etc. However, these data sources suffer from either privacy or inaccuracy issues. We therefore decided to leverage qualification examinations as the data source, resulting in the CMB-Exam subset. The merits of qualification examinations are twofold: (I) the ground truth of a qualification examination is objective and typically accurate; (II) there is a clear anchor (i.e., 60% accuracy) that is aligned with a qualified expert in a specific domain. As shown in Figure 1, the multiple-choice questions cover four clinical medical professions: physicians, nurses, medical technicians, and pharmacists. The involved exams cover the whole professional career path, ranging from undergraduate medical basic knowledge exams, graduate selection exams, standardized exams, professional qualification exams, and intermediate professional title exams, to advanced professional title exams.
Other than the exams in CMB-Exam, which are related to theoretical knowledge, the second subset of CMB (i.e., CMB-Clin) is more practical. CMB-Clin includes complex clinical diagnostic problems that evaluate the model's ability to synthesize knowledge and reasoning. On the one hand, the knowledge aspect implies the need for the model to draw upon its medical knowledge when answering questions. On the other hand, the reasoning facet necessitates the model's ability to analyze case reports, combining its own medical knowledge to respond to inquiries. We believe CMB-Exam and CMB-Clin are complementary in medicine, and together they constitute a complete evaluation protocol covering not only the career of a medical doctor but also the learning path of a medical LLM.

Take-away messages from CMB. After benchmarking various LLMs on CMB, we make the following observations, which might be insightful. I) GPT-4 exhibits significant superiority in the medical domain, with indigenous large-scale models also demonstrating commendable performance; II) most specialized medical models still lag behind general models in performance, indicating ample room for improvement in the medical modeling field; III) accuracy exhibits significant disparities across professional levels and knowledge areas, notably between traditional Chinese medicine and Western medicine; IV) the effectiveness of the CoT and few-shot prompts varies among models with different accuracy levels, presenting potential risks especially in knowledge-intensive tasks; and V) results of automatic evaluation using GPT-4 agree closely with expert evaluation results.

# 2 Related work

# 2.1 Medical Benchmark

Medical benchmarks have evolved to broadly encompass two types of tasks based on the capabilities of the models they seek to probe: objective tasks and subjective tasks. The former typically assumes the form of multiple-choice questions (Welbl et al., 2018; Jin et al., 2020; Pal et al., 2022; Hendrycks et al., 2021b; Singhal et al., 2022; Li et al., 2021; Abacha and Demner-Fushman, 2019), information retrieval (Abacha et al., 2017; Zhu et al., 2019; Abacha et al., 2019), and cloze-style reading comprehension (Suster and Daelemans, 2018; Pampari et al., 2018; Zhu et al., 2020), which serve to evaluate a model's
medical knowledge with unbiased accuracy. Sources for these tasks range from medical textbooks and exams to case reports such as CliCR (Suster and Daelemans, 2018), Wikipedia-based datasets like MedHop (Welbl et al., 2018), and medical practices exemplified by MMLU (Hendrycks et al., 2021b) and MedMCQA (Pal et al., 2022). In contrast, subjective tasks involve open-ended text generation constructed directly from consumer queries and doctor responses, often sourced from online medical forums. The task typically demands that models generate consumer-oriented replies (Singhal et al., 2022; Li et al., 2023) or explanations for multiple-choice questions (Liu et al., 2023). As of now, there are relatively few open-ended text-generation question-answering tasks that specifically center around providing consultation based on diagnostic reports. Few existing benchmark datasets encapsulate both task types, with MultiMedQA (Singhal et al., 2022) and CMExam (Liu et al., 2023) bearing the closest resemblance to our work. Differing from prior work, our dataset exceeds theirs in size and includes questions not only from the Chinese National Medical Licensing Examination but also from various authoritative medical textbooks. Moreover, our subjective tasks deviate from existing works: they stem from textbook examples requiring answers to diagnosis-related questions based on case reports, resembling real-life consultation scenarios.

# 2.2 Other Benchmarks of Large Language Models

The explosive growth in the number and capability of LLMs has led to a multitude of works aiming to discern their true capacity, evaluating both their general and specific abilities. General-ability benchmarks include comprehensive test suites, each targeting different aspects of LLM proficiency, ranging from handling multi-turn dialogues (Zheng et al., 2023) to gauging language comprehension and reasoning abilities (Srivastava et al., 2022; Zhang et al., 2023b; Zhong et al., 2023). OpenLLM (Beeching et al., 2023) provides a public competition platform to compare and assess the performance of various LLMs across multiple tasks. In terms of specific abilities, several benchmarks, apart from those related to medicine, aim to evaluate different capabilities of models. ARB (Sawada et al., 2023) was introduced to assess LLMs'
performance in high-level reasoning tasks across multiple domains. C-Eval (Huang et al., 2023) serves as the first comprehensive benchmark to evaluate the advanced knowledge and reasoning abilities of Chinese-based models. M3Exam (Zhang et al., 2023b) provides a unique and comprehensive evaluation framework, combining various languages, modalities, and levels, to test the general abilities of LLMs in different contexts. Gaokao (Zhang et al., 2023c), MATH (Hendrycks et al., 2021c), and APPS (Hendrycks et al., 2021a) focus on assessing LLM proficiency on college entrance examinations, mathematical problem solving, and code generation, respectively.

# 3 Dataset

# 3.1 CMB-Exam: Comprehensive Medical Exams

| Category | Subcategory | # Subjects | # Questions |
|---|---|---|---|
| Physician | Resident Physician; Licensed Assistant Physician; Licensed Physician; Associate Professional Physician; Advanced Professional Physician | 81 | 124,926 |
| Nurse | Practicing Nurse; Licensed Practical Nurse; Charge Nurse; Advanced Practice Nurse | 8 | 16,919 |
| Technician | Medical Technician; Medical Technologist; Supervising Technologist | 21 | 27,004 |
| Pharmacist | Licensed Pharmacist; Licensed TCM Pharmacist; Junior Pharmacist; Junior Pharmacist Assistant; Junior TCM Pharmacist; Junior TCM Pharmacist Assistant; Chief Pharmacist; Chief TCM Pharmacist | 8 | 33,354 |
| Undergraduate Disciplines¹ | Fundamental Medicine; Clinical Medicine; Traditional Chinese Medicine (TCM) and Chinese Herbal Medicine; Preventive Medicine and Public Health | 53 | 62,271 |
| Graduate Entrance Exam | Integrated Western Medicine; Integrated TCM; Political Science; Nursing | 5 | 16,365 |
| Total | | 176 | 280,839 |

1 We referenced the National Standard Subject Classification of the People's Republic of China; see https://xkb.pku.edu.cn/docs/2018-10/20220328083301969071.pdf.

Table 1: Statistics of the CMB-Exam categories, subcategories, subjects, and questions.

# 3.1.1 Taxonomy

To obtain a precise taxonomy of medical evaluation, we aligned it with the disciplinary and examination systems of the medical field. First, we chose four main medical professions: physicians, pharmacists, medical technicians, and nurses, covering examinations at various occupational difficulty levels. Considering learning trajectories and professional growth paths, we additionally include discipline examinations and graduate entrance examinations for these four professions, ultimately resulting in six categories: Physician, Nurse, Technician, Pharmacist, Undergraduate Disciplines, and Graduate Entrance Exam. One can refer to Table 1 for the detailed taxonomy. Moreover, we carried out a more detailed subject division within each subcategory, resulting in a total of 174 categories; the detailed directory list can be found in Appendix A. Through this structured arrangement, our directory structure reflects characteristics closely connected to the actual medical field, providing a solid foundation for further analysis and research.

# 3.1.2 Data Collecting and Processing

Data Sources The data used are derived from publicly available mock examination questions, coursework exercises, and summaries of commonly misunderstood examination questions. A significant portion of these materials comes from the Chinese Medical Question Database (https://www.medtiku.com/), from which we obtained explicit permission to share the data.

Manual Verification The data come in various formats, with PDF and JSON being the most prevalent. For PDF documents, we first used Optical Character Recognition (OCR) to transform them into plain text. This text was then processed into structured formats and underwent manual verification to ensure both OCR accuracy and proper formatting.
Data Preprocessing All questions underwent a standardized data preprocessing procedure, including de-duplication and cleansing. In instances where we were unable to verify question quality from the source, we conducted manual validation to ensure the absence of grammatical errors. Additionally, with the aid of the comment system provided by the Chinese Medical Question Database, we enacted a rigorous selection and deletion process for the data, ensuring the accuracy of the knowledge embedded in the questions.

| Split | # Subcategories | # Q per subcategory | Total # Q |
|---|---|---|---|
| Test | 28 | 400 | 11,200 |
| Dev | 28 | 10¹ | 280 |
| Train | 28 | –² | 269,359 |

1 With explanations in the dev set.
2 Each subcategory has a different number of questions.

Table 2: Data split in CMB-Exam.

Data Statistics Finally, we obtained a total of 280,839 multiple-choice questions. To assess a model's comprehension of medical knowledge, we randomly selected 400 questions from each subcategory as a test set. Additionally, to facilitate experimentation with few-shot learning strategies, we randomly selected 10 questions with explanations from each subcategory as a dev set. The remaining 269,359 questions were used as the train set.
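Because the split is defined purely per subcategory, it can be reproduced with a short sampling routine. The sketch below is illustrative only; the `subcategory` field name and the seed are our own assumptions, not part of the released data:

```python
import random
from collections import defaultdict

def split_cmb_exam(questions, seed=0, n_test=400, n_dev=10):
    """Per-subcategory split: 400 test and 10 dev (with explanations)
    questions per subcategory; everything else goes to train."""
    rng = random.Random(seed)
    by_subcat = defaultdict(list)
    for q in questions:                          # q: dict with a 'subcategory' key (assumed)
        by_subcat[q["subcategory"]].append(q)

    test, dev, train = [], [], []
    for items in by_subcat.values():
        rng.shuffle(items)
        test.extend(items[:n_test])              # 28 subcategories x 400 = 11,200
        dev.extend(items[n_test:n_test + n_dev]) # 28 x 10 = 280
        train.extend(items[n_test + n_dev:])     # remaining 269,359
    return test, dev, train
```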
# 3.2 CMB-Clin: Clinical Diagnostic Questions

The QA dataset is based on 74 classical, complex, real-world cases originating from textbooks, offering an opportunity to investigate models' proficiency in knowledge application amidst real-life diagnosis and treatment circumstances. A model's competence is gauged not merely by its mastery of medical knowledge but also by its ability to synthesize and apply this knowledge to solve real-world problems.

# 3.2.1 Task Formulation

In our dataset, we simulate dialogue interactions between an examiner and a candidate, focusing on assessing the model's diagnostic and therapeutic capacities. The data comprise 74 real consultation scenarios (or ailments), each consisting of a case instance with multiple questions, culminating in 208 questions in total. As shown in Figure 1, each case presents a patient description followed by interrelated, sequential questions. It includes three parts: I) Description $D$: patient information, including medical history summaries and chief complaints, physical examinations such as visual and tactile inspection, and ancillary examinations like biopsy and CT scans; II) Questions $Q$: questions related to diagnosis and treatment based on the description; some questions might be interrelated; and III) Solutions $S$: the corresponding solutions to the questions. For instance, in the $k$-th conversation round, the input $x$ is formed by concatenating the patient's description with the previous question-answer pairs and the current question, represented as $x = D_i + Q_i + S_i + \dots + Q_{i+k}$. The expected response is $S_{i+k}$.
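To make this formulation concrete, the $k$-th round input can be assembled as below; this is a minimal sketch in which the function name, the separator, and the data layout are our own illustrative choices:

```python
def build_clin_input(description, qa_history, current_question):
    """Concatenate the case description, previous QA pairs, and the current
    question, i.e. x = D + Q_1 + S_1 + ... + Q_k from Section 3.2.1.
    The expected model response is the next solution S_k."""
    parts = [description]
    for question, solution in qa_history:   # previous (Q, S) rounds, in order
        parts.append(question)
        parts.append(solution)
    parts.append(current_question)
    return "\n".join(parts)
```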
# 4 Experiments on CMB-Exam

# 4.1 Experimental Setup

Models We evaluate the following Chinese medical LLMs to compare their performance on CMB-Exam: HuatuoGPT (Zhang et al., 2023a), BianQue (Chen et al., 2023), ChatMed-Consult (Zhu and Wang, 2023), MedicalGPT (Xu, 2023), ChatGLM-Med (Wang et al., 2023b), Bentsao (Wang et al., 2023a), and DoctorGLM (Xiong et al., 2023). In addition to these specialized models, we also include two proprietary models (i.e., ChatGPT (gpt-3.5-turbo-16k-0613) and GPT-4 (gpt-4-0613)) and two publicly-available general-domain instruction-following models (i.e., ChatGLM-2 (Du et al., 2022; https://github.com/THUDM/ChatGLM2-6B) and Baichuan-13B-Chat (https://github.com/baichuan-inc/Baichuan-13B)). Please refer to Appendix B for more details.

Decoding Hyperparameters For all the aforementioned models (except ChatGPT and GPT-4), we adopt the default hyperparameters specified in transformers.GenerationConfig (https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig). Besides, to reduce variance in generation, we adopt greedy decoding for all models, with min_new_tokens and max_new_tokens set to 1 and 512, respectively, to avoid empty or overly long answers.

Evaluation Details We evaluate the models in both answer-only and chain-of-thought (CoT) settings. We extract answers from model outputs using an empirically designed regular expression. Each extracted answer is compared to the solution and is deemed correct if and only if they match exactly. We adopt accuracy as our metric.

| Model | Open | Physician | Nurse | Pharmacist | Technician | Disciplines | Graduate Entrance Exam | Avg. |
|---|---|---|---|---|---|---|---|---|
| *General models* | | | | | | | | |
| GPT-4 | ✗ | 59.90 (59.90) | 69.31 (69.31) | 52.19 (52.19) | 61.50 (61.50) | 59.69 (59.69) | 54.19 (54.19) | 59.46 (59.46) |
| ChatGLM2-6B | ✓ | 40.20 (40.22) | 48.50 (48.50) | 40.34 (40.38) | 38.67 (38.67) | 37.19 (37.25) | 33.37 (33.43) | 39.71 (39.74) |
| + CoT | | 40.25 (41.13) | 47.56 (48.37) | 36.06 (36.76) | 36.58 (37.17) | 35.56 (36.31) | 35.06 (35.68) | 38.51 (39.23) |
| ChatGPT | ✗ | 40.75 (40.75) | 45.69 (45.69) | 36.59 (36.59) | 40.08 (40.08) | 37.94 (37.94) | 28.81 (28.81) | 38.31 (38.31) |
| + CoT | | 17.75 (17.75) | 19.94 (19.94) | 16.00 (16.00) | 20.25 (20.25) | 19.25 (19.25) | 16.19 (16.19) | 18.23 (18.23) |
| Baichuan-13B-chat | ✓ | 34.80 (37.16) | 41.25 (42.11) | 35.41 (36.91) | 35.17 (36.20) | 31.81 (36.39) | 27.56 (29.03) | 34.33 (36.30) |
| + CoT | | 37.70 (39.92) | 44.75 (46.25) | 41.22 (42.20) | 34.67 (36.52) | 37.94 (39.87) | 32.94 (33.99) | 38.20 (39.79) |
| *Medical models* | | | | | | | | |
| HuatuoGPT (华佗) | ✓ | 29.10 (29.58) | 33.56 (34.26) | 27.41 (28.75) | 30.58 (31.47) | 29.44 (30.13) | 25.06 (25.79) | 29.19 (30.00) |
| + CoT | | 29.90 (30.32) | 34.00 (34.17) | 29.06 (29.35) | 30.92 (31.08) | 27.38 (27.64) | 25.69 (26.05) | 29.49 (29.77) |
| MedicalGPT | ✓ | 26.40 (26.56) | 30.94 (30.94) | 24.72 (24.84) | 27.17 (27.32) | 25.44 (25.62) | 21.50 (21.64) | 26.03 (26.15) |
| + CoT | | 24.80 (25.61) | 27.19 (27.98) | 23.09 (24.07) | 24.58 (26.00) | 23.75 (24.77) | 21.06 (21.79) | 24.08 (25.04) |
| ChatMed-Consult | ✓ | 20.20 (21.41) | 22.31 (23.48) | 20.59 (21.58) | 22.67 (23.55) | 20.38 (21.36) | 17.44 (18.08) | 20.60 (21.58) |
| + CoT | | 19.40 (20.92) | 21.69 (23.56) | 20.00 (21.65) | 22.83 (23.59) | 18.88 (20.44) | 18.56 (19.55) | 20.23 (21.62) |
| ChatGLM-Med | ✓ | 21.75 (23.59) | 22.06 (23.37) | 21.84 (22.67) | 21.00 (21.85) | 18.44 (19.72) | 17.50 (18.14) | 20.43 (21.56) |
| + CoT | | 15.55 (20.89) | 16.25 (22.13) | 17.34 (21.06) | 16.33 (20.65) | 12.63 (17.12) | 12.56 (16.88) | 15.11 (19.79) |
| Bentsao (本草) | ✓ | 21.55 (21.67) | 19.94 (19.99) | 20.94 (21.07) | 22.75 (22.85) | 19.56 (19.83) | 16.81 (16.93) | 20.26 (20.39) |
| + CoT | | 21.00 (21.10) | 20.56 (20.61) | 20.66 (20.78) | 22.17 (22.24) | 19.25 (19.53) | 16.44 (16.54) | 20.01 (20.13) |
| BianQue-2 (扁鹊) | ✓ | 4.90 (19.05) | 4.19 (19.04) | 4.28 (20.36) | 3.58 (18.11) | 3.31 (16.27) | 3.25 (18.63) | 3.92 (18.57) |
| + CoT | | 7.85 (19.62) | 6.63 (19.31) | 7.34 (20.75) | 8.33 (20.47) | 6.63 (18.11) | 5.94 (15.03) | 7.12 (18.88) |
| DoctorGLM | ✓ | 2.70 (16.51) | 3.31 (26.36) | 3.84 (20.86) | 3.75 (18.07) | 3.19 (22.99) | 2.25 (18.02) | 3.17 (20.47) |
| + CoT | | 3.15 (20.61) | 3.13 (26.72) | 3.41 (21.21) | 2.50 (13.35) | 3.38 (25.21) | 2.25 (19.79) | 2.97 (21.15) |

Table 3: Zero-shot accuracy in the answer-only and CoT settings across different categories. "Open" indicates whether model weights are publicly available. Values in parentheses are accuracies computed only over questions for which model answers are not empty (i.e., a valid answer can be extracted from the model outputs).
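The answer-extraction step in Evaluation Details can be sketched as follows. The paper describes the regular expression only as "empirically designed" and does not publish it, so the pattern below is an illustrative stand-in, not the original:

```python
import re

# Stand-in pattern: prefer option letters following the cue word "答案"
# ("answer"); otherwise fall back to isolated option letters A-F.
CUE_RE = re.compile(r"答案[^A-F]{0,10}([A-F][A-F、，,\s]*)")
FALLBACK_RE = re.compile(r"\b[A-F]\b")

def extract_answer(output: str) -> str:
    m = CUE_RE.search(output)
    text = m.group(1) if m else " ".join(FALLBACK_RE.findall(output))
    return "".join(sorted(set(re.findall(r"[A-F]", text))))  # e.g. "ACD"

def accuracy(outputs, solutions):
    # Exact match between extracted letters and the gold answer (Section 4.1);
    # empty extractions simply count as wrong here.
    hits = sum(extract_answer(o) == "".join(sorted(s))
               for o, s in zip(outputs, solutions))
    return hits / len(solutions)
```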
# 4.2 Benchmarking Results

We report the zero-shot results in Table 3. Several observations can be drawn from different aspects.

On general LLMs. Among the generic LLMs, the performance of GPT-4 in medicine significantly surpasses that of other models, leading by a cliff-like margin of roughly 20 percentage points. This impressive performance has deepened our appreciation of the model's capabilities. Simultaneously, two indigenous general-purpose models, ChatGLM2-6B and Baichuan-13B-chat, follow GPT-4 most closely. Notably, ChatGLM2, with only 6B parameters, even outperforms ChatGPT, a testament to the rapid iteration of indigenous large-scale models and their strength in specialized knowledge domains.
On medical LLMs. Among the medical LLMs, there are some regrettable observations. In the medical field, the development of specialized models seems to be overshadowed by updates to general large-scale models. Specifically, we observe that the performance of BianQue-2 and DoctorGLM is underwhelming. These two models, lacking strong instruction-following capabilities and constrained by input-length limitations, struggled to fully understand the intent
of the questions, thereby failing to provide accurate answers. This deficiency resulted in their low scores in the overall evaluation.

In different categories. LLMs show varied performance across clinical specialties. Specifically, scores for pharmacist-related questions tend to be lower, while those concerning nursing staff are typically higher. This difference might arise because the foundational knowledge nurses require is relatively straightforward, compared with the intricate distinctions in drug names and indications that pharmacists deal with. Despite these performance variations among specialties, the models exhibit a consistent trend, suggesting no inherent bias towards any particular domain. These findings are pivotal for our ongoing research and optimization efforts.

# 4.3 Analysis

# 4.3.1 Do few-shot prompting and CoT help?

Protocol To investigate the effects of the few-shot prompting and CoT strategies, we perform three-shot and CoT experiments on CMB-Exam, with the results reported in Appendix C.1.

Results The study reveals that the efficacy of both the few-shot approach and the CoT strategy largely depends on model capacity. The CoT strategy, contrary to expectations, often does not boost accuracy, especially in knowledge-dense tasks (e.g., medical MCQs in CMB-Exam); it might unintentionally confuse models with irrelevant context, hindering their reasoning. For few-shot prompting, its effectiveness is predominantly evident in situations where the model already demonstrates relatively strong accuracy (e.g., accuracy above 25%). In weaker models, few-shot prompting can unintentionally harm results. This can be attributed to two primary factors: first, some models might struggle with processing extensive text; and second, others may need additional refinement to better follow in-context examples.
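The few-shot and CoT prompts themselves are not quoted in this section; a minimal sketch of how such prompts could be assembled from dev-set items is given below, with all wording being our own illustrative choice:

```python
def build_mcq_prompt(question, options, examples=(), cot=False):
    """Assemble an answer-only or CoT prompt for one multiple-choice question.

    `examples` holds (question, options, explanation, answer) tuples drawn
    from the dev set (the paper uses 3-shot); `cot` switches between direct
    answering and chain-of-thought prompting.
    """
    lines = []
    for ex_q, ex_opts, ex_expl, ex_ans in examples:
        lines.append(f"问题：{ex_q}\n{ex_opts}")
        if cot:
            lines.append(f"解析：{ex_expl}")     # explanation precedes the answer
        lines.append(f"答案：{ex_ans}\n")
    lines.append(f"问题：{question}\n{options}")
    lines.append("请逐步分析并给出答案：" if cot else "答案：")
    return "\n".join(lines)
```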
# 4.3.2 On the Perceived Difficulty

Protocol There is a sequential career track for Physicians, Nurses, Technicians, and Pharmacists in China. For example, the career track of a Physician includes Resident Physician, Licensed Assistant Physician, Licensed Physician, Associate Professional Physician, and Advanced Professional Physician, in order of increasing professional difficulty. We aim to examine whether the difficulty degrees perceived by LLMs and humans are consistent. Specifically, we use the average zero-shot accuracy of the top five LLMs as the indicator of the difficulty degree perceived by LLMs; the lower the accuracy, the more difficult the exam.
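This indicator is a simple average; a sketch under an assumed input layout ({model: {exam: accuracy}}) follows:

```python
def perceived_difficulty(acc, top_k=5):
    """Average zero-shot accuracy of the top-k models per exam.

    `acc` maps model -> {exam: accuracy}. A lower average means the exam is
    perceived as more difficult. The layout is an assumption for illustration.
    """
    overall = {m: sum(scores.values()) / len(scores) for m, scores in acc.items()}
    top = sorted(overall, key=overall.get, reverse=True)[:top_k]
    exams = next(iter(acc.values())).keys()
    return {e: sum(acc[m][e] for m in top) / top_k for e in exams}
```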
Figure 2: Accuracy across various clinical medicine fields at different career stages. The accuracies are zero-shot average values for the top-5 models using the direct-response strategy.

Results As depicted in Figure 2, the y-axis shows professional levels rising with the type of examination. The accuracy rates for the physician and nurse tracks decrease as professional levels increase, except for the residency qualification examination, suggesting that it tests nuanced clinical knowledge distinctions (a plausible explanation is that this exam focuses on discerning whether medical students confuse clinical knowledge; the granularity of the knowledge assessed is quite detailed, potentially making it less amicable to larger models). Conversely, medical technicians exhibit the opposite trend, with head-technician examination accuracy being the highest. This is likely due to its focus on personnel management and communication, which does not fall within the medical profession and could be learned from the
massive amount of general corpora. While pharmacist exam results vary, scores on traditional Chinese medicine subjects are consistently lower than those on Western pharmacology, highlighting the need for specialized models in the Chinese medical domain.

# 5 Experiments on CMB-Clin

# 5.1 Experimental Setup

Prompt construction Every prompt comprises two components: a description, which may (or may not) encompass the conversation history $D_i$, and the question $Q_i$. To integrate the conversation history into the description, we prepend the appropriate roles to each question and solution when working with chat LLMs (all models except MedicalGPT). For the non-chat LLM, specifically MedicalGPT, we prefix "问题：" ("question:") to each question and "答案：" ("solution:") to each corresponding solution. These consolidated texts are then used to instruct the models to generate appropriate responses.

Decoding hyperparameters All hyperparameters remain consistent with those used in CMB-Exam. However, we set repetition_penalty=1.1 (previously 1.0) based on the observation that the default setting yields highly repetitive patterns that make the results meaningless. Additionally, to understand the influence of temperature on generation quality, we perform an experiment with decoding temperatures set at 0.2, 0.6, 1.0, and 1.5. This fills a gap left by previous studies (Huang et al., 2023; Zhang et al., 2023c; Zheng et al., 2023; Zhang et al., 2023b; Zhu et al., 2023; Zhong et al., 2023), which often overlooked the impact of decoding strategies.
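The prompt construction above can be sketched as follows; the exact role strings for chat models are assumptions for illustration, while the "问题："/"答案：" prefixes follow the description for MedicalGPT:

```python
def build_cmb_clin_prompt(description, history, question, is_chat_model):
    """Fold the conversation history into the description (Section 5.1)."""
    if is_chat_model:
        # Chat LLMs: keep the history as role-tagged turns for the model's
        # own chat template (role names here are illustrative).
        turns = [("user", description)]
        for q, s in history:
            turns += [("user", q), ("assistant", s)]
        turns.append(("user", question))
        return turns
    # Non-chat LLM (MedicalGPT): prefix each turn with "问题：" / "答案：".
    text = description
    for q, s in history:
        text += f"\n问题：{q}\n答案：{s}"
    return text + f"\n问题：{question}\n答案："
```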
Expert Evaluation To guarantee the precision of our evaluation, we engage three annotators with professional medical knowledge to evaluate a randomly selected subset of 320 responses from a pool of 208 × 11 model-generated responses (11 being the number of models evaluated); this subset constitutes 15% of the total. All annotators follow a uniform set of guidelines. Equipped with a reference solution, they rate each response across four aspects (fluency, relevance, completeness, and medical proficiency) using a grading scale from 1 to 5. Details of the evaluation interface can be found in Appendix C.2.1.

Automatic Evaluation To enhance efficiency and reduce manual evaluation costs, we advocate a robust automatic evaluation approach. We use ChatGPT and GPT-4 to assess the model responses, adhering to the same guidelines as those used in the expert evaluation. Benefiting from definitive scoring criteria for each aspect, our method bypasses the positional bias inherent in conventional side-by-side automated assessments (Wang et al., 2023c). For robustness, ChatGPT reviews each response five times to address variance in the temperature experiment, while GPT-4 assesses each response once for consistency. The prompt template for the automatic evaluation is detailed in Appendix C.2.2.

# 5.2 Benchmarking Results

Figure 3 shows the ranking results of the expert and GPT-4 evaluations. The horizontal axis of Figure 3 is sorted by the ranking of average scores in the GPT-4 evaluation. Detailed scores are presented in Table 4 and Table 5. The first echelon consists of GPT-4, ChatGPT, and Baichuan-13B-chat: they perform significantly better in terms of relevance, completeness, and proficiency than the other models, with a margin of at least 7.4%. ChatGLM2-6B, HuatuoGPT, BianQue-2, and ChatMed-Consult form the second tier; they have mediocre medical proficiency, though their fluency is similar to that of the first tier. Regretfully, MedicalGPT, DoctorGLM, Bentsao, and ChatGLM-Med yield unsatisfactory results due to the deficiencies discussed above.
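The automatic evaluation can be driven by a thin wrapper around the judge model; the sketch below assumes the openai Python client and a prompt already filled from the Appendix C.2.2 template:

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes an API key in the environment

def judge(filled_prompt: str, model: str = "gpt-4", n_rounds: int = 1):
    """Score one model response; the paper queries ChatGPT five times per
    response and GPT-4 once."""
    scores = []
    for _ in range(n_rounds):
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": filled_prompt}],
        )
        # The template instructs the judge to return JSON only, e.g.
        # {"fluency": 3, "relevance": 3, "completeness": 3, "proficiency": 3}
        scores.append(json.loads(reply.choices[0].message.content))
    return scores
```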
# 5.3 Analysis

# 5.3.1 Agreements between Automatic and Expert Evaluation

Figure 3 demonstrates a strong agreement between the rankings produced by GPT-4 and expert evaluation, with a Spearman correlation of 0.93. The rankings agree with each other except for a flip between GPT-4 and ChatGPT (the dashed and solid brown lines are parallel, except for a flip at GPT-4 and ChatGPT).

Figure 3: Rankings by perspective and model. Dashed lines and solid lines are the resulting rankings from expert and GPT-4 evaluation, respectively. For visual clarity, each line is shifted vertically by a small value. A model is better if it has a smaller ranking (a higher position) on the vertical axis.

| Model | Fluency | Relevance | Completeness | Proficiency | Avg. |
|---|---|---|---|---|---|
| GPT-4 | 4.97 | 4.53 | 4.12 | 4.45 | 4.52 |
| ChatGPT | 4.96 | 4.47 | 4.17 | 4.42 | 4.51 |
| Baichuan-13B-chat | 4.96 | 4.19 | 3.97 | 4.23 | 4.34 |
| ChatGLM2-6B | 4.86 | 3.76 | 3.51 | 4.00 | 4.03 |
| HuatuoGPT | 4.89 | 3.75 | 3.38 | 3.86 | 3.97 |
| BianQue-2 | 4.86 | 3.52 | 3.02 | 3.60 | 3.75 |
| ChatMed-Consult | 4.88 | 3.08 | 2.67 | 3.30 | 3.48 |
| MedicalGPT | 4.48 | 2.64 | 2.19 | 2.89 | 3.05 |
| DoctorGLM | 4.74 | 2.00 | 1.65 | 2.30 | 2.67 |
| Bentsao | 3.88 | 2.05 | 1.71 | 2.58 | 2.55 |
| ChatGLM-Med | 3.55 | 1.97 | 1.61 | 2.37 | 2.38 |

Table 4: Results of automatic evaluation using GPT-4 on CMB-Clin. Avg. is the average score of each model across all aspects. Models are displayed in descending order of Avg.

Figure 4 shows the linear correlation between automatic evaluations and expert evaluations averaged over the three experts and all aspects. All four evaluated aspects show positively correlated trends between expert and GPT-4 evaluation (see Appendix C.2.3). The overall Pearson correlation (Figure 4) is 0.84. The two correlations indicate that the automatic evaluation is highly aligned with expert evaluation.

# 5.3.2 Consistent Results with CMB-Exam

We compute the Spearman correlation between the rankings obtained on CMB-Exam and CMB-Clin, yielding a correlation of 0.89 with a two-tailed p-value of 2.3e-4. This suggests a high consistency between the evaluation results on the two datasets.
However, it is worth noting that this observation is not due to an equivalence of the abilities evaluated by CMB-Exam and CMB-Clin. We attribute the consistency of results to the speculation that most current models are trained to inject knowledge without hurting their conversational ability. We hope that after being supervised-finetuned on the CMB-Exam training set, which consists of a large number of multiple-choice questions, a model can still achieve decent scores on CMB-Clin. This objective aligns with our expectation of a doctor: we hope that a doctor is sufficiently informed with medical knowledge and is able to converse with a patient.
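Both correlation statistics can be computed directly with scipy; the ranking values below are placeholders, not the paper's data:

```python
from scipy.stats import pearsonr, spearmanr

# Rankings of the same 11 models under two evaluations (placeholder values).
rank_a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
rank_b = [2, 1, 3, 4, 5, 6, 7, 8, 10, 9, 11]

rho, p = spearmanr(rank_a, rank_b)        # rank agreement, e.g. 0.89 in 5.3.2
print(f"spearman rho={rho:.2f}, two-tailed p={p:.1e}")

# Pearson is used on the raw averaged scores (Section 5.3.1, r=0.84 overall).
r, _ = pearsonr([4.5, 4.3, 3.9, 2.7], [4.5, 4.2, 4.0, 2.6])
```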
| Model | Fluency | Relevance | Completeness | Proficiency | Avg. |
|---|---|---|---|---|---|
| ChatGPT | 4.93 | 4.65 | 4.22 | 4.34 | 4.53 |
| GPT-4 | 4.88 | 4.61 | 4.20 | 4.39 | 4.52 |
| Baichuan-13B-chat | 4.79 | 4.29 | 4.22 | 4.30 | 4.40 |
| ChatGLM2-6B | 4.77 | 4.06 | 3.96 | 3.99 | 4.20 |
| HuatuoGPT | 4.70 | 3.89 | 3.69 | 3.81 | 4.02 |
| BianQue-2 | 4.44 | 3.50 | 3.30 | 3.43 | 3.67 |
| ChatMed-Consult | 4.26 | 3.39 | 3.16 | 3.27 | 3.52 |
| MedicalGPT | 4.21 | 3.40 | 3.09 | 3.10 | 3.45 |
| DoctorGLM | 3.74 | 2.46 | 2.35 | 2.30 | 2.71 |
| Bentsao | 3.52 | 2.62 | 2.36 | 2.30 | 2.70 |
| ChatGLM-Med | 2.92 | 2.23 | 1.98 | 1.92 | 2.26 |

Table 5: Results of expert evaluation on CMB-Clin. Avg. is the average score of each model over all perspectives. Models are arranged in descending order of Avg.

Figure 4: Correlation between expert and automatic evaluation on CMB-Clin (overall Pearson correlation: 0.84).

Figure 5: The effect of different decoding temperatures on scores averaged over the four aspects.

# 5.3.3 Effects of Decoding Hyper-parameters

Figure 5 shows the results under different decoding temperatures. The overall performance drops as the temperature increases from 0 to 1.5. This might be because a higher temperature leads to more randomized (diversified) outputs, which is not desirable in medicine, where precise and definite content is preferred.
However, we find that the pairwise Spearman correlations between rankings under different temperatures are all above 0.87 (see Appendix C.2.4), meaning that the resulting model rankings are robust to temperature changes. This reveals the importance of aligning temperatures when comparing performance across models.
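The temperature experiment can be reproduced with standard transformers generation; the model name below is a placeholder for any of the evaluated open models:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "your-model-name"  # placeholder

tok = AutoTokenizer.from_pretrained(MODEL, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(MODEL, trust_remote_code=True).eval()

inputs = tok("病例描述……请回答下列问题。", return_tensors="pt")  # a CMB-Clin style prompt
for temperature in (0.2, 0.6, 1.0, 1.5):
    with torch.no_grad():
        out = model.generate(
            **inputs,
            do_sample=True,            # sampling, as in the temperature experiment
            temperature=temperature,
            repetition_penalty=1.1,    # the CMB-Clin setting
            min_new_tokens=1,
            max_new_tokens=512,
        )
    print(temperature, tok.decode(out[0], skip_special_tokens=True))
```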
# 6 Conclusion

In conclusion, while LLMs have potential in the realm of medicine, their accurate evaluation remains pivotal for real-world applications. The introduction of the CMB benchmark, tailored to the local cultural environment of China, provides a more contextualized and comprehensive evaluation benchmark. Although not framed as a competitive leaderboard, it serves as a crucial tool for tracking LLM progress in medical domains, particularly within China. This might pave the way for the broader and more effective utilization of LLMs in China's medical landscape.

# Ethical Statement

The permission to release the data The data utilized in this study primarily originate from publicly accessible mock examination questions, coursework exercises, and summaries of commonly misunderstood examination questions. A portion of these items is sourced from the Chinese Medical Question Database (https://www.medtiku.com/), from which we received explicit permission and support to include their questions in our evaluation.

The privacy issue We have removed all personal information from our benchmark.

# References

Asma Ben Abacha, Eugene Agichtein, Yuval Pinter, and Dina Demner-Fushman. 2017.
Overview of the medical question answering task at TREC 2017 LiveQA. In Proceedings of The Twenty-Sixth Text REtrieval Conference, TREC 2017, Gaithersburg, Maryland, USA, November 15-17, 2017, volume 500-324 of NIST Special Publication. National Institute of Standards and Technology (NIST).

Asma Ben Abacha and Dina Demner-Fushman. 2019. A question-entailment approach to question answering. BMC Bioinform., 20(1):511:1–511:23.

Asma Ben Abacha, Yassine Mrabet, Mark Sharp, Travis R. Goodwin, Sonya E. Shooshan, and Dina Demner-Fushman. 2019. Bridging the gap between consumers' medication questions and trusted answers. In MEDINFO 2019: Health and Wellbeing e-Networks for All - Proceedings of the 17th World Congress on Medical and Health Informatics, Lyon, France, 25-30 August 2019, volume 264 of Studies in Health Technology and Informatics, pages 25–29. IOS Press.

Edward Beeching, Clémentine Fourrier, Nathan Habib, Sheon Han, Nathan Lambert, Nazneen Rajani, Lewis Tunstall, Omar Sanseviero, and Thomas Wolf. 2023. Open LLM Leaderboard. https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard.

Yirong Chen, Zhenyu Wang, Xiaofen Xing, Zhipei Xu, Kai Fang, Sihang Li, Junhong Wang, and Xiangmin Xu. 2023. BianQue-1.0: Improving the "question" ability of medical chat models through finetuning with hybrid instructions and multi-turn doctor QA datasets. GitHub.

Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. 2022. GLM: General language model pretraining with autoregressive blank infilling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 320–335.

Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, and Jacob Steinhardt. 2021a. Measuring coding challenge competence with APPS. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual.

Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021b. Measuring massive multitask language understanding. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.

Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021c. Measuring mathematical problem solving with the MATH dataset. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual.

Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. LoRA: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685.

Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, et al. 2023. C-Eval: A multi-level multi-discipline Chinese evaluation suite for foundation models. arXiv preprint arXiv:2305.08322.

Di Jin, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, Hanyi Fang, and Peter Szolovits. 2020. What disease does this patient have? A large-scale open domain question answering dataset from medical exams. CoRR, abs/2009.13081.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.

Jianquan Li, Xidong Wang, Xiangbo Wu, Zhiyi Zhang, Xiaolong Xu, Jie Fu, Prayag Tiwari, Xiang Wan, and Benyou Wang. 2023. Huatuo-26M, a large-scale Chinese medical QA dataset. arXiv preprint arXiv:2305.01526.

Jing Li, Shangping Zhong, and Kaizhi Chen. 2021. MLEC-QA: A Chinese multi-choice biomedical question answering dataset. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 8862–8874. Association for Computational Linguistics.

Junling Liu, Peilin Zhou, Yining Hua, Dading Chong, Zhongyu Tian, Andrew Liu, Helin Wang, Chenyu You, Zhenhua Guo, Lei Zhu, and Michael Lingzhi Li. 2023. Benchmarking large language models on CMExam - a comprehensive Chinese medical exam dataset. CoRR, abs/2306.03030.

Ankit Pal, Logesh Kumar Umapathi, and Malaikannan Sankarasubbu. 2022. MedMCQA: A large-scale multi-subject multi-choice dataset for medical domain question answering. In Conference on Health, Inference, and Learning, CHIL 2022, 7-8 April 2022, Virtual Event, volume 174 of Proceedings of Machine Learning Research, pages 248–260. PMLR.

Anusri Pampari, Preethi Raghavan, Jennifer J. Liang, and Jian Peng. 2018. emrQA: A large corpus for question answering on electronic medical records. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2357–2368. Association for Computational Linguistics.

Yifan Peng, Shankai Yan, and Zhiyong Lu. 2019. Transfer learning in biomedical natural language processing: An evaluation of BERT and ELMo on ten benchmarking datasets. In Proceedings of the 2019 Workshop on Biomedical Natural Language Processing (BioNLP 2019).

Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, and Aran Komatsuzaki. 2023. ARB: Advanced reasoning benchmark for large language models. CoRR, abs/2307.13692.

Karan Singhal, Shekoofeh Azizi, Tao Tu, S. Sara Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales, Ajay Kumar Tanwani, Heather Cole-Lewis, Stephen Pfohl, Perry Payne, Martin Seneviratne, Paul Gamble, Chris Kelly, Nathaneal Schärli, Aakanksha Chowdhery, Philip Andrew Mansfield, Blaise Agüera y Arcas, Dale R. Webster, Gregory S. Corrado, Yossi Matias, Katherine Chou, Juraj Gottweis, Nenad Tomasev, Yun Liu, Alvin Rajkomar, Joelle K. Barral, Christopher Semturs, Alan Karthikesalingam, and Vivek Natarajan. 2022. Large language models encode clinical knowledge. CoRR, abs/2212.13138.

Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Santilli, Andreas Stuhlmüller, Andrew M. Dai, Andrew La, Andrew K. Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karakas, et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. CoRR, abs/2206.04615.

Mujeen Sung, Jinhyuk Lee, Sean Yi, Minji Jeon, Sungdong Kim, and Jaewoo Kang. 2021. Can language models be biomedical knowledge bases? arXiv preprint arXiv:2109.07154.

Simon Suster and Walter Daelemans. 2018. CliCR: A dataset of clinical case reports for machine reading comprehension. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 1551–1563. Association for Computational Linguistics.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.

Tao Tu, Shekoofeh Azizi, Danny Driess, Mike Schaekermann, Mohamed Amin, Pi-Chuan Chang, Andrew Carroll, Chuck Lau, Ryutaro Tanno, Ira Ktena, et al. 2023. Towards generalist biomedical AI. arXiv preprint arXiv:2307.14334.

Haochun Wang, Chi Liu, Nuwa Xi, Zewen Qiang, Sendong Zhao, Bing Qin, and Ting Liu. 2023a. HuaTuo: Tuning LLaMA model with Chinese medical knowledge.

Haochun Wang, Chi Liu, Sendong Zhao, Bing Qin, and Ting Liu. 2023b. ChatGLM-Med: Fine-tuning ChatGLM on Chinese medical knowledge. https://github.com/SCIR-HI/Med-ChatGLM.

Junjie Wang, Yuxiang Zhang, Lin Zhang, Ping Yang, Xinyu Gao, Ziwei Wu, Xiaoqun Dong, Junqing He, Jianheng Zhuo, Qi Yang, Yongfeng Huang, Xiayu Li, Yanghan Wu, Junyu Lu, Xinyu Zhu, Weifeng Chen, Ting Han, Kunhao Pan, Rui Wang, Hao Wang, Xiaojun Wu, Zhongshen Zeng, Chongpei Chen, Ruyi Gan, and Jiaxing Zhang. 2022. Fengshenbang 1.0: Being the foundation of Chinese cognitive intelligence. CoRR, abs/2209.02970.

Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. 2023c. Large language models are not fair evaluators. arXiv preprint arXiv:2305.17926.

Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing datasets for multi-hop reading comprehension across documents. Trans. Assoc. Comput. Linguistics, 6:287–302.

Honglin Xiong, Sheng Wang, Yitao Zhu, Zihao Zhao, Yuxiao Liu, Qian Wang, and Dinggang Shen. 2023. DoctorGLM: Fine-tuning your Chinese doctor is not a herculean task. arXiv preprint arXiv:2304.01097.

Ming Xu. 2023. MedicalGPT: Training medical GPT model. https://github.com/shibing624/MedicalGPT.

Hongbo Zhang, Junying Chen, Feng Jiang, Fei Yu, Zhihong Chen, Jianquan Li, Guiming Chen, Xiangbo Wu, Zhiyi Zhang, Qingying Xiao, et al. 2023a. HuatuoGPT, towards taming language model to be a doctor. arXiv preprint arXiv:2305.15075.

Ningyu Zhang, Mosha Chen, Zhen Bi, Xiaozhuan Liang, Lei Li, Xin Shang, Kangping Yin, Chuanqi Tan, Jian Xu, Fei Huang, Luo Si, Yuan Ni, Guotong Xie, Zhifang Sui, Baobao Chang, Hui Zong, Zheng Yuan, Linfeng Li, Jun Yan, Hongying Zan, Kunli Zhang, Buzhou Tang, and Qingcai Chen. 2022. CBLUE: A Chinese biomedical language understanding evaluation benchmark. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7888–7915, Dublin, Ireland. Association for Computational Linguistics.

Wenxuan Zhang, Sharifah Mahani Aljunied, Chang Gao, Yew Ken Chia, and Lidong Bing. 2023b. M3Exam: A multilingual, multimodal, multilevel benchmark for examining large language models. arXiv preprint arXiv:2306.05179.

Xiaotian Zhang, Chunyang Li, Yi Zong, Zhengyu Ying, Liang He, and Xipeng Qiu. 2023c. Evaluating the performance of large language models on the Gaokao benchmark. arXiv preprint arXiv:2305.12474.

Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. 2023. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. arXiv preprint arXiv:2306.05685.

Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. 2023. AGIEval: A human-centric benchmark for evaluating foundation models.

Kaijie Zhu, Jindong Wang, Jiaheng Zhou, Zichen Wang, Hao Chen, Yidong Wang, Linyi Yang, Wei Ye, Neil Zhenqiang Gong, Yue Zhang, et al. 2023. PromptBench: Towards evaluating the robustness of large language models on adversarial prompts. arXiv preprint arXiv:2306.04528.

Ming Zhu, Aman Ahuja, Da-Cheng Juan, Wei Wei, and Chandan K. Reddy. 2020. Question answering with long multiple-span answers. In Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of Findings of ACL, pages 3840–3849. Association for Computational Linguistics.

Ming Zhu, Aman Ahuja, Wei Wei, and Chandan K. Reddy. 2019. A hierarchical attention retrieval model for healthcare question answering. In The World Wide Web Conference, WWW 2019, San Francisco, CA, USA, May 13-17, 2019, pages 2472–2482. ACM.

Wei Zhu and Xiaoling Wang. 2023. ChatMed: A Chinese medical large language model. https://github.com/michael-wzhu/ChatMed.
# A Dataset

Tables 7, 8, and 9 present a detailed directory structure of CMB-Exam. Initially, the organization is based on clinical professions and the exams commonly undertaken by these professionals, divided into six primary sections. Upon this foundation, each section is further categorized based on career progression and examination subjects. Within each subcategory, we have classified meticulously according to specific departments or courses.

# B Details of Evaluated Models

In this section, we introduce and detail the models utilized in our evaluation. These models fall into three primary categories: seven Chinese medical LLMs, two proprietary LLMs, and two publicly-available general-domain LLMs.

Chinese medical LLMs:
- HuatuoGPT: It leverages real-world and synthetic instruction and conversation data to fine-tune the Baichuan-7B base model (https://github.com/baichuan-inc/Baichuan-13B).
- BianQue: It enhances its questioning ability by asking patients for more information, addressing the issue that patients may not reveal all information in a single-turn conversation.
- ChatMed-Consult: It is built upon Chinese LLaMA using real-world questions and synthetic responses from ChatGPT.
- MedicalGPT:
It is based on Ziya-LLaMA (Wang et al., 2022) and adopts a four-stage training recipe, including continued pre-training, supervised fine-tuning, reward modeling, and reinforcement learning.
- ChatGLM-Med: It is finetuned from ChatGLM-6B (Du et al., 2022) using instruction-tuning data built upon CMeKG (https://github.com/king-yyf/CMeKG_tools).
- Bentsao: It is finetuned from LLaMA-7B (Touvron et al., 2023) using the same data as ChatGLM-Med.
- DoctorGLM: It leverages ChatGPT and BART (Lewis et al., 2019) to construct a large-scale, high-quality Chinese dataset, which is used to tune LoRA (Hu et al., 2021) layers on top of ChatGLM-6B (see the sketch after this list).
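A DoctorGLM-style LoRA setup can be sketched with the peft library; the hyperparameter values below are illustrative, not those actually used by DoctorGLM:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModel

# LoRA adapters on top of ChatGLM-6B; only the adapter weights are trained.
base = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
config = LoraConfig(
    r=8, lora_alpha=32, lora_dropout=0.1,
    target_modules=["query_key_value"],   # ChatGLM's fused attention projection
)
model = get_peft_model(base, config)
model.print_trainable_parameters()        # a small fraction of the base model
```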
Proprietary models:

- ChatGPT: Developed by OpenAI, ChatGPT, rooted in the GPT-3.5 architecture, excels in both understanding and generating natural language.
- GPT-4: Another offering from OpenAI, GPT-4 employs deep learning techniques to elevate natural language processing capabilities, showcasing remarkable advances across diverse tasks.

Publicly-available general-domain LLMs:

- ChatGLM-2: The second version of ChatGLM, an open-source, bilingual dialogue language model.
- Baichuan-13B-chat: An advanced variant of the Baichuan-13B model focused on dialogue tasks, boasting 13 billion parameters for efficient and effective conversation generation.

It is noteworthy that both ChatGLM-2 and Baichuan-13B-chat have exhibited exceptional performance on well-known general-domain benchmarks, such as C-Eval (Huang et al., 2023), Gaokao (Zhang et al., 2023c), and AGIEval (Zhong et al., 2023).
# C Experiment Details

# C.1 CMB-Exam

We present the few-shot experimental results on CMB-Exam in Table 10. After considering inference speed and the studies mentioned previously, we opt for a 3-shot experimental setup. For comparative effectiveness, we experiment with two strategies: direct answer generation and CoT. Since some models are not able to generate valid answers, we provide (in parentheses) the reference accuracy computed using the number of questions for which answers are successfully extracted as the denominator. A detailed analysis is provided in the main text.

# C.2 CMB-Clin

# C.2.1 Screenshot of Human Evaluation UI

We show screenshots of the human evaluation UI in Figure 7 and Figure 8. We split the screenshot into two figures for better visual clarity.

# C.2.2 Prompts for Automatic Evaluation

The prompt for automatic evaluation contains task instructions, metrics, criteria, and placeholders for the information to be evaluated. It was designed based on the suggestions of experts and is used by both ChatGPT and GPT-4.

You are an AI evaluator specializing in assessing the quality of answers provided by other language models. Your primary goal is to rate the answers based on their fluency, relevance, completeness, and proficiency in medicine. Use the following scales to evaluate each criterion:

Fluency:
1: Completely broken and unreadable sentence pieces
2: Mostly broken with few readable tokens
3: Moderately fluent but with limited vocabulary
4: Mostly coherent in expressing complex subjects
5: Human-level fluency

Relevance:
1: Completely unrelated to the question
2: Some relation to the question, but mostly off-topic
3: Relevant, but lacking focus or key details
4: Highly relevant, addressing the main aspects of the question
5: Directly relevant and precisely targeted to the question

Completeness:
1: Extremely incomplete
2: Almost incomplete with limited information
3: Moderate completeness with some information
4: Mostly complete with most of the information displayed
5: Fully complete with all information presented

Proficiency in medicine:
1: Using plain language with no medical terminology
2: Equipped with some medical knowledge but lacking in-depth details
3: Conveying moderately complex medical information with clarity
4: Showing a solid grasp of medical terminology but having some minor mistakes in detail
5: Fully correct in all presented medical knowledge
2308.08833#56 | CMB: A Comprehensive Medical Benchmark in Chinese | You will be provided with the following information : - a description - a conversation based on the description ( optional ) - a question based on the description and conversation - the solution to the question - a model â s answer to the question 16 # [ description ] { description } [ end of description ] [ conversation ] { history } [ end of conversation ] [ question ] { question } [ end of question ] [ solution ] { solution } [ end of solution ] [ answer ] { answer } [ end of answer ] Make sure to provide your evaluation results in JSON format and ONLY the JSON , with separate ratings for each of the mentioned criteria as in the following example : { â fluency â : 3, â relevance â : 3, â completeness â : 3, â proficiency â : 3} Settings Original T-0.2 T-0.6 T-1.0 T-1.5 Original T-0.2 T-0.6 T-1.0 T-1.5 1.00 0.95 0.90 0.87 0.87 0.90 0.98 1.00 0.90 0.90 0.87 0.88 0.90 1.00 1.00 0.87 0.88 0.90 1.00 1.00 0.95 1.00 0.98 0.88 0.88 Table 6: Pairwise Spearman correlations between results under different decoding temperatures. Original: results of greedy decoding (temperature 0). T-x: results of using nucleus sampling under temperature x. | 2308.08833#55 | 2308.08833#57 | 2308.08833 | [
"2306.05685"
]
|
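A minimal sketch of how the automatic evaluation above could be driven programmatically. The bracketed placeholders mirror the quoted prompt; call_gpt4 is a hypothetical wrapper around whatever chat-completion client is available, and the strict-JSON parsing relies on the "ONLY the JSON" instruction in the prompt.

import json

def build_eval_prompt(template: str, case: dict) -> str:
    # case carries the description, history, question, solution and answer fields.
    return template.format(**case)

def score_answer(call_gpt4, template: str, case: dict) -> dict:
    reply = call_gpt4(build_eval_prompt(template, case))
    ratings = json.loads(reply)  # e.g. {"fluency": 3, "relevance": 3, ...}
    expected = {"fluency", "relevance", "completeness", "proficiency"}
    if set(ratings) != expected or not all(1 <= v <= 5 for v in ratings.values()):
        raise ValueError(f"malformed judge output: {reply!r}")
    return ratings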
2308.08833#57 | CMB: A Comprehensive Medical Benchmark in Chinese | [Figure 6 panel titles: Fluency (Pearson=0.71), Relevance (Pearson=0.81), Completeness (Pearson=0.78), Proficiency (Pearson=0.75); x-axis: GPT-4 scores.] Figure 6: Correlation of expert and automatic evaluation on CMB-Clin for each perspective, with Pearson correlation. The four plots show correlations in fluency, relevance, completeness, and proficiency in medicine, respectively. Each plot consists of 320 data points, many of which overlap: the darker a point is, the more overlapping data points there are at that position. Each expert score is averaged over the three expert annotators. # C.2.3 Agreement of Expert and GPT-4 Evaluation per Perspective Figure 6 shows the agreement between expert and GPT-4 evaluation on each perspective. The Pearson correlations are all above 0.71, indicating a strong linear correlation between the two evaluation approaches. | 2308.08833#56 | 2308.08833#58 | 2308.08833 | [
"2306.05685"
]
|
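As a rough illustration of the per-perspective agreement computed for Figure 6, the sketch below averages the three expert annotators per data point and correlates the result with the GPT-4 scores; scipy is assumed to be available.

from scipy.stats import pearsonr

def perspective_agreement(expert_scores_by_annotator, gpt4_scores):
    # expert_scores_by_annotator: three parallel score lists, one per annotator.
    expert_mean = [sum(t) / len(t) for t in zip(*expert_scores_by_annotator)]
    r, p = pearsonr(expert_mean, gpt4_scores)  # r lands around 0.71-0.81 per Figure 6
    return r, p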
2308.08833#58 | CMB: A Comprehensive Medical Benchmark in Chinese | [Screenshot of the human evaluation guideline page and the introduction to the user-interface components (in Chinese); the screenshot text is not recoverable from the extraction. See the Figure 7 caption below.] | 2308.08833#57 | 2308.08833#59 | 2308.08833 | [
"2306.05685"
]
|
2308.08833#59 | CMB: A Comprehensive Medical Benchmark in Chinese | Figure 7: The guideline for human evaluation and the introduction to the components of the user interface (in Chinese). Note that Figure 7 precedes Figure 8 on the same webpage. # C.2.4 Pairwise Correlation of Rankings under Different Temperatures We evaluate the results generated under each setting (i.e., under different temperatures) using ChatGPT. Then, for each setting, we obtain a ranking of all models. We then calculate the pairwise Spearman correlation between all sets of rankings. The results are summarized in Table 6. | 2308.08833#58 | 2308.08833#60 | 2308.08833 | [
"2306.05685"
]
|
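The pairwise comparison in Table 6 can be reproduced along these lines: one list of per-model scores per decoding setting (in a fixed model order), correlated pairwise with scipy's spearmanr. This is a sketch under the assumption that rankings are exchanged as score lists.

from itertools import combinations
from scipy.stats import spearmanr

def pairwise_ranking_correlations(rankings: dict) -> dict:
    # rankings: {"Original": [scores...], "T-0.2": [scores...], ...}
    return {
        (a, b): spearmanr(rankings[a], rankings[b])[0]
        for a, b in combinations(rankings, 2)
    }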
2308.08833#60 | CMB: A Comprehensive Medical Benchmark in Chinese | [Screenshot of the Gradio-based scoring page (in Chinese), showing a case description, a model's answer, and per-criterion 1-5 rating widgets; the screenshot text is not recoverable from the extraction.] Figure 8: | 2308.08833#59 | 2308.08833#61 | 2308.08833 | [
"2306.05685"
]
|
2308.08833#61 | CMB: A Comprehensive Medical Benchmark in Chinese | The user interface for scoring an answer (in Chinese). Note that Figure 8 follows Figure 7 on the same webpage. Category Physician Subcategory Subject Resident Physician Clinical Pathology Oral Otolaryngology Rehabilitation Medicine Ophthalmology Neurology Orthopedics Anesthesiology Pediatrics Dermatology Psychiatry General Practice Medical Imaging Internal Medicine Ultrasound Surgery Obstetrics and Gynecology Pediatric Surgery Licensed Assistant Physician Integrated Chinese and Western Medicine Clinical Chinese Medicine Public Health Oral Licensed Physician Chinese Medicine Public Health Clinical Oral Integrated Chinese and Western Medicine Associate Professional Physician General Medicine Internal Oral Orthopedics Chinese Internal Medicine Surgery Ultrasound Medicine Dermatology and Venereology Otolaryngology Internal Medicine Infectious Diseases Obstetrics and Gynecology Cardiovascular Internal Medicine and Respiratory Internal Medicine Oncology Acupuncture Attending in TCM Pathology Preventive Medicine Pediatrics Psychotherapy Radiology Psychiatry Oral Restoration Dermatology Digestive Internal Medicine Rehabilitation Medicine Infectious Disease Nuclear Medicine Oral Medicine Integrated Chinese and Western Internal Medicine Ophthalmology Anesthesiology Hospital Infection Nutrition Tuberculosis Critical Care Medicine Psychological Counselor Pain Medicine Neurology Orthodontics Oral and Maxillofacial Surgery Plastic Surgery Nephrology Rheumatology and Clinical Immunology Occupational Disease Advanced Professional Physicians # Questions 1124 1074 952 461 951 791 939 907 749 977 903 712 964 752 430 829 800 296 3441 5364 3454 2067 1090 4490 4085 10241 1505 5320 3492 858 894 2896 5071 2218 1158 983 5671 600 2641 617 942 1169 1642 2817 3773 1393 2401 754 1183 909 160 630 861 1250 862 1101 988 923 827 1009 58 579 495 884 126 578 367 187 81 37 54 Respiratory Internal Medicine Orthopedics Endocrinology Cardiology Digestive Internal Medicine General Surgery Senior Gynecology and Obstetrics General Internal Medicine General Practice Pediatrics | 2308.08833#60 | 2308.08833#62 | 2308.08833 | [
"2306.05685"
]
|
2308.08833#62 | CMB: A Comprehensive Medical Benchmark in Chinese | 1522 1245 1326 1604 1577 1850 3249 607 74 65 Table 7: Catalog Structure of Physician Category Undergraduate Disciplines Subcategory Subject Fundamental Medicine Pathophysiology Medical Psychology Biochemistry and Molecular Biology Cell Biology Medical Immunology Pathology Medical Genetics Parasitology Systematic Anatomy Bioinformatics Physiology Pharmacology Medical Microbiology Local Anatomy Histology and Embryology Human Parasitology Medical Statistics Clinical Medicine Medical Imaging Radiology Experimental Diagnostic Medicine Neurology Surgery Dermatology and Venereology Pediatrics Nuclear Medicine Physical Diagnosis Dental Pulp Disease Basic Nursing Diagnostics Ultrasonic Medicine Oral Care Evidence-Based Medicine Fundamental Nursing Epidemiology Oral Tissue Pathology Infectious Disease Oral Anatomy and Physiology Anesthesiology Interventional Radiology TCM and Chinese Herbal Medicine Preventive Medicine Hygiene Medical Ethics Preventive Medicine and Public Health # Questions 1455 932 2402 1399 2485 2786 1369 806 1967 185 2306 2424 1342 489 1398 766 198 1858 541 548 1163 2164 2168 3760 1383 621 346 978 103 192 263 95 393 864 387 287 362 606 81 1926 1316 500 # Table 8: Catalog Structure of Undergraduate Disciplines | 2308.08833#61 | 2308.08833#63 | 2308.08833 | [
"2306.05685"
]
|
2308.08833#63 | CMB: A Comprehensive Medical Benchmark in Chinese | Category Subcategory Subject Practicing Nurse Practicing Nurse Licensed Practical Nurse Licensed Practical Nurse Nurse Charge Nurse Pediatric Internal Medicine Charge Nurse Surgery Obstetrics and Gynecology Advanced Practice Nurse Advanced Practice Nurse Medical Technician Rehabilitation Medicine Therapy Radiology Inspection Oncology Medical Technologist Rehabilitation Medicine Therapy Oncology Radiology Inspection Technician Supervising Technologist Radiation Therapy for Oncology Ultrasonic Medicine Blood Transfusion Technology Microbiological Inspection Radiology Pathology Physical and Chemical Inspection Clinical Medicine Inspection Medical Record Information Nuclear Medicine Electrocardiology Disinfection Technology Rehabilitation Medicine and Treatment Nursing Surgical Nursing Basic Nursing Graduate Entrance Exam Political Science Political Science Integrated Western Medicine Integrated Western Medicine Integrated TCM Integrated TCM Licensed Pharmacist Licensed Pharmacist Licensed TCM Pharmacist Licensed TCM Pharmacist Pharmacist Junior Pharmacist Junior Pharmacist Junior Pharmacist Assistant Junior Pharmacist Assistant Junior TCM Pharmacist Junior TCM Pharmacist Assistant Junior TCM Pharmacist Junior TCM Pharmacist Assistant Chief Pharmacist Chief Pharmacist Chief TCM Pharmacist Chief TCM Pharmacist # Questions 3303 4223 905 958 4558 341 755 1876 1752 1033 1166 1086 1739 1538 1337 1458 1701 145 2199 704 1428 2407 783 1378 1331 1275 1021 575 948 1112 902 1514 8913 3924 8248 4460 2720 3705 3502 4017 3403 3299 # Table 9: Catalog Structure of Nurse, Technician, Graduate Entrance Exam and Pharmacist | 2308.08833#62 | 2308.08833#64 | 2308.08833 | [
"2306.05685"
]
|
2308.08833#64 | CMB: A Comprehensive Medical Benchmark in Chinese | Model Open Physician Nurse Pharmacist Technician Undergraduate Disciplines Graduate Entrance Exam General Models ChatGLM2-6B + CoT ✓ 43.80 (43.84) 41.25 (42.94) 51.94 (51.94) 52.81 (53.86) 40.66 (40.78) 42.56 (44.18) 40.83 (40.90) 41.00 (41.65) 42.13 (42.32) 39.81 (40.72) 43.94 (44.17) 42.12 (42.85) Baichuan-13B-chat + CoT ✓ 35.90 (36.04) 38.15 (39.37) 41.38 (41.43) 48.31 (49.25) 34.53 (34.74) 42.59 (43.73) 28.83 (28.95) 38.50 (39.05) 34.44 (34.58) 41.06 (41.60) 35.19 (35.25) 37.25 (38.20) Medical Models HuatuoGPT (华佗 | 2308.08833#63 | 2308.08833#65 | 2308.08833 | [
"2306.05685"
]
|
2308.08833#65 | CMB: A Comprehensive Medical Benchmark in Chinese | ) + CoT ✓ 31.85 (31.88) 26.90 (29.92) 33.56 (33.56) 32.75 (35.25) 29.06 (29.07) 25.12 (28.78) 32.08 (32.08) 28.58 (30.44) 29.56 (29.60) 27.56 (30.36) 28.25 (28.27) 23.56 (26.47) MedicalGPT + CoT ✓ 23.00 (23.13) 4.75 (17.00) 26.81 (27.02) 15.19 (23.02) 22.97 (22.99) 14.28 (25.16) 22.83 (22.87) 18.58 (23.92) 25.25 (25.33) 17.12 (20.59) 21.56 (21.60) 9.63 (17.86) Bentsao (本草 | 2308.08833#64 | 2308.08833#66 | 2308.08833 | [
"2306.05685"
]
|
2308.08833#66 | CMB: A Comprehensive Medical Benchmark in Chinese | ) + CoT ✓ 20.75 (20.91) 1.30 (12.01) 20.06 (20.06) 4.13 (28.62) 19.69 (19.85) 4.31 (20.45) 23.92 (24.00) 5.58 (19.07) 18.81 (18.98) 4.81 (13.99) 18.69 (18.85) 4.75 (18.44) ChatMed-Consult + CoT ✓ 18.25 (18.33) 9.60 (37.05) 18.88 (18.88) 19.19 (21.37) 20.16 (20.24) 16.03 (18.28) 21.25 (21.30) 18.25 (20.06) 18.12 (18.28) 16.44 (18.16) 20.88 (20.98) 11.94 (17.42) ChatGLM-Med + CoT ✓ 14.70 (20.36) 1.30 (17.81) 14.94 (20.41) 3.88 (18.36) 19.38 (20.90) 9.13 (17.19) 16.00 (19.02) 4.42 (17.48) 12.31 (16.83) 4.44 (15.50) 12.38 (15.02) 2.25 (15.59) DoctorGLM + CoT ✓ 4.40 (16.95) 6.95 (21.56) 5.19 (21.15) 7.31 (23.44) 7.97 (20.74) 7.25 (21.01) 8.08 (21.42) 9.75 (18.61) 5.69 (19.16) 6.94 (17.11) 4.00 (15.75) 6.06 (18.67) BianQue-2 (扁鹊 | 2308.08833#65 | 2308.08833#67 | 2308.08833 | [
"2306.05685"
]
|
2308.08833#67 | CMB: A Comprehensive Medical Benchmark in Chinese | ) 0.10 (9.17) 2.35 (17.17) 0.38 (22.55) 2.50 (16.65) 0.34 (19.84) 3.28 (15.62) 0.37 (28.96) 3.06 (19.82) 0.81 (36.61) 3.88 (16.24) ✓ + CoT Avg 43.88 (44.00) 43.26 (44.37) 35.04 (35.17) 40.98 (41.84) 30.73 (30.74) 27.41 (30.20) 23.74 (23.82) 13.26 (21.26) 20.32 (20.44) 4.15 (18.76) 19.59 (19.67) 15.24 (22.06) 14.95 (18.76) 4.23 (16.99) 5.89 (19.20) 7.38 (20.07) 0.50 (29.44) 0.42 (24.43) 1.17 (17.50) 2.71 (17.17) Table 10: Three-shot average accuracy of direct answer generation versus the CoT strategy across categories. Parenthetical accuracy rates indicate cases with successful answer extraction. | 2308.08833#66 | 2308.08833#68 | 2308.08833 | [
"2306.05685"
]
|
2308.08833#68 | CMB: A Comprehensive Medical Benchmark in Chinese | | 2308.08833#67 | 2308.08833 | [
"2306.05685"
]
|
|
2308.08285#0 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | arXiv:2308.08285v1 [cs.IR] 16 Aug 2023 # Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval Guangyuan Ma1,2*, Xing Wu1,2*, Peng Wang1,2, Zijia Lin3, Songlin Hu1,2 1 Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China 2 School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China 3 Kuaishou Technology {maguangyuan,wuxing,wangpeng2022,husonglin}@iie.ac.cn, [email protected] # Abstract | 2308.08285#1 | 2308.08285 | [
"2203.05765"
]
|
|
2308.08285#1 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | In this paper, we systematically study the potential of pre-training with Large Language Model (LLM)-based document expansion for dense passage retrieval. Concretely, we leverage the capabilities of LLMs for document expansion, i.e., query generation, and effectively transfer the expanded knowledge to retrievers using pre-training strategies tailored for passage retrieval. These strategies include contrastive learning and bottlenecked query generation. Furthermore, we incorporate a curriculum learning strategy to reduce the reliance on LLM inferences. Experimental results demonstrate that pre-training with LLM-based document expansion significantly boosts the retrieval performance on large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain retrieval abilities, making it more widely applicable for retrieval when initializing with no human-labeled data. Introduction Dense passage retrieval (Karpukhin et al. 2020) has broad real-world applications, like web search (Liu et al. 2021; Zou et al. 2023), retrieval-augmented generation (Lewis et al. 2020; Cai et al. 2022) and question answering (Sakata et al. 2019). It utilizes well-trained language-model-based retrievers to extract sentence representations and retrieve relevant passages for given queries. Recent studies have made impressive progress in improving the effectiveness of dense retrievers, such as hard negative mining (Qu et al. 2021), late interaction (Khattab and Zaharia 2020; Santhanam et al. 2022), distillation (Ren et al. 2021; Lu et al. 2022), and ensembling (Gao and Callan 2022; Wu et al. 2023b). Moreover, the development of task-specific pre-training (Gao and Callan 2021; Wu et al. 2023a; Liu and Shao 2022) pushes the limits of retrieval tasks to new boundaries. | 2308.08285#0 | 2308.08285#2 | 2308.08285 | [
"2203.05765"
]
|
2308.08285#2 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | Specifically, those studies usually employ contrastive learning with span corruption (Gao and Callan 2022; Izacard et al. 2021; Ma et al. 2022), or additional decoders with bottlenecked structures (Gao and Callan 2021; Lu et al. 2021; Liu and Shao 2022; Wu et al. 2023a) for better representation learning. Large language models (LLMs), like ChatGPT (Ouyang et al. 2022), PaLM (Chowdhery et al. 2022), LLaMA (Touvron et al. 2023), and tk-Instruct (Wang et al. 2022b), are pre-trained on large-scale web corpora and exhibit excellent abilities in context generation and instruction following. There has been growing interest in incorporating powerful LLMs into retrieval tasks. Existing studies (Gao et al. 2023; Wang, Yang, and Wei 2023; Jagerman et al. 2023; Yu et al. 2023) focus on query expansion with LLMs for enhancing the lexical match of query-passage pairs. They utilize the LLM-generated relevant passages as enriched query contexts. Those studies have yielded better retrieval performances, especially in zero-shot scenarios. Nevertheless, conducting query expansion still needs heavy online inference with LLMs, which slows down the retrieval speed. While query expansion expands the query with generated passages, document expansion, i.e., query generation, is also a popular technique to boost retrieval performance. It exploits a fully fine-tuned model, like T5 (Nogueira et al. 2019) or BART (Cho et al. 2022), to generate relevant queries for a given passage, which either enrich the context of the passage or serve as an additional fine-tuning corpus. Due to the excellent generation ability of LLMs, huge potential lies in the utilization of LLMs as document expansion models. However, we argue that several drawbacks still hinder such usage. Firstly, document expansion relies on the online inference of LLMs in open-domain passage retrieval, particularly when dealing with candidate corpora from new domains. To avoid the need for additional LLM inferences during retrieval, a feasible solution is to pre-train or fine-tune an end-to-end retriever. However, this approach lacks exploration and necessitates training paradigms specifically designed for retrieval. Furthermore, document expansion involves feeding a substantial corpus into LLMs to generate queries, resulting in significant costs associated with LLM inference. Unfortunately, there is a shortage of methods to mitigate these inference costs. To mitigate the high online inference costs of LLM document expansion, as presented in Figure 1, we prompt LLM query generation for a series of pre-training experiments tailored for dense retrieval. We emphasize that our work only involves LLM inference at the pre-training stage of retrievers, but not at the inference stage as in traditional query (Gao et al. 2023; Wang, Yang, and Wei 2023) or document expansion (Nogueira et al. 2019). Two pre-training paradigms, i.e., contrastive learning and bottlenecked query generation, are explored in detail. | 2308.08285#1 | 2308.08285#3 | 2308.08285 | [
"2203.05765"
]
|
2308.08285#3 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | *These authors contributed equally. # LLaMA Prompts ### Instruction: Generate ten search queries for the following passage ### Input: <passage> ### Response: # Tk-Instruct Prompts Definition: Generate one search query in question or phrase format. The generated query should be unambiguous and related to the input. # Positive Example 1 - Input: <Example 1 - Input> Output: <Example 1 - Output> # Positive Example 2 - Input: <Example 2 - Input> Output: <Example 2 - Output> Now complete the following example - Input: <passage> Output: Figure 1: | 2308.08285#2 | 2308.08285#4 | 2308.08285 | [
"2203.05765"
]
|
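A rough sketch of how the zero-shot LLaMA prompt above could be used for document expansion with Hugging Face transformers. The checkpoint name is an illustrative assumption, and the decoding settings match the nucleus sampling configuration reported later in the Experiments section.

from transformers import AutoModelForCausalLM, AutoTokenizer

PROMPT = ("### Instruction:\nGenerate ten search queries for the following passage\n"
          "### Input:\n{passage}\n\n### Response:\n")

tokenizer = AutoTokenizer.from_pretrained("chavinlo/alpaca-native")  # assumed Alpaca checkpoint
model = AutoModelForCausalLM.from_pretrained("chavinlo/alpaca-native")

def expand_document(passage: str) -> str:
    inputs = tokenizer(PROMPT.format(passage=passage), return_tensors="pt")
    out = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=50,
                         temperature=0.7, max_new_tokens=256)
    return tokenizer.decode(out[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)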
2308.08285#4 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | Query Generation prompts for Alpaca-LLaMA and tk-Instruct. For contrastive pre-training, a direct contrastive loss of the generated queries and passages is used to pull together their embeddings, while pushing away in-batch negatives in the latent space. We follow the contrastive architecture in (Gao and Callan 2022) for a fair comparison, and we argue that LLM-generated queries can serve as a better context for effective query-passage alignment. Bottlenecked pre-training techniques are popular in recent works (Lu et al. 2021; Liu and Shao 2022; Wu et al. 2023a), which connect accessional decoders solely through the encoder's representation. To pre-train with bottlenecked query generation, similar to (Wu, Ma, and Hu 2022), we adapt a single-layer Transformer decoder and use the causal language model (CLM) task to generate expanded queries with the assistance of the encoder's embeddings. This bottlenecked encoder-decoder structure first derives a compressed representation through the encoder and then decompresses the context information as LLM-expanded queries via the decoder. As a result, the sentence embeddings contain enriched context information, providing effective initialization for fine-tuning and inference. Notably, LLM-based document expansion requires no human-labeled corpus, unlike previous works (Wu, Ma, and Hu 2022; Cho et al. 2022) that train additional domain-specific generative models like docT5query (Nogueira et al. 2019). | 2308.08285#3 | 2308.08285#5 | 2308.08285 | [
"2203.05765"
]
|
2308.08285#5 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | Furthermore, to mitigate the LLM inference costs of document expansion, we incorporate a two-stage curriculum learning strategy into both pre-training schemas. Span corruption is first used to randomly sample contextual pairs from a long document. Then we leverage the generation abilities of LLMs to produce a relatively small number of queries for the next stage of pre-training. In our study, we use Alpaca-LLaMA (Wang et al. 2023) and tk-Instruct (Wang et al. 2022b) with different parameter sizes for query generation. We conduct the experiments on the large-scale MS-MARCO (Nguyen et al. 2016) dataset and test on the in-domain MS-MARCO passage retrieval task, TREC-DL 2019 & 2020 (Craswell et al. 2020, 2021) and the out-of-domain BEIR (Thakur et al. 2021) task. Several benefits are observed in our studies. 1) LLMs can generate a large number of high-quality queries based on the world knowledge of the LLM itself, which requires no additional human labeling and is suitable for scenarios lacking manually annotated data. 2) Contrastive pre-training with LLM-generated queries has stronger in-domain zero-shot retrieval performance and on-par performance with the state-of-the-art (SOTA) methods after full fine-tuning. | 2308.08285#4 | 2308.08285#6 | 2308.08285 | [
"2203.05765"
]
|
2308.08285#6 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | It also shows better domain adaptation abilities on the out-of-domain BEIR datasets. 3) Bottlenecked query generation shows better initialization abilities after full fine-tuning. 4) With our two-stage curriculum learning strategy, we reduce the number of MS-MARCO passages involved in LLM inference from 8.8 million to 0.4 million, with only minor performance degradation. Our contributions are summarized as follows. • We systematically study the potential of incorporating LLMs into the pre-training stage of dense passage retrieval, which is suitable for settings with scarce human-annotated data. • We find stronger zero-shot and fine-tuned performances with contrastive learning and good initialization abilities with bottlenecked query generation pre-training. | 2308.08285#5 | 2308.08285#7 | 2308.08285 | [
"2203.05765"
]
|
2308.08285#7 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | • We design a two-stage curriculum learning strategy that greatly reduces the usage of LLM-expanded queries while incurring only minor performance degradation. Methodology In this section, we first introduce the definition of dense passage retrieval. Then we introduce our method for LLM query generation, the detailed pre-training designs of contrastive learning and bottlenecked query generation, and the two-stage curriculum learning strategy for extended analyses. Preliminaries Given a query q and a set of passages Pn, the passage retrieval task aims to find the relevant passages based on similarity search. Dense passage retrieval utilizes an encoder model Enc, e.g., a Transformers-based model like BERT (Devlin et al. 2019), to yield the sentence representations and measure query-passage similarities through inner product or cosine distance. Formally, given a query q and a passage p, we can use a query encoder Encq and a passage encoder Encp to derive their corresponding sentence representations, i.e., vq and vp, from the encoder hidden states of the last layer at the [CLS] position, $h_{last}^{[CLS]}$. Then the similarity | 2308.08285#6 | 2308.08285#8 | 2308.08285 | [
"2203.05765"
]
|
2308.08285#8 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | [Figure 2 diagram: a) LLM query generation from a passage; b) bottlenecked query generation pre-training with an encoder (MLM loss) and a decoder (CE loss) connected only through the passage representation; c) contrastive pre-training with a contrastive loss between passage and LLM-generated-query embeddings.] Figure 2: | 2308.08285#7 | 2308.08285#9 | 2308.08285 | [
"2203.05765"
]
|
2308.08285#9 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | Pre-training with LLM-based document expansion for dense passage retrieval. a) We utilize large language models (LLMs) to generate pseudo-queries with zero-shot or few-shot prompts. b) Bottlenecked query generation pre-training appends an auxiliary Transformer decoder to the encoder. Besides the Masked Language Modelling (MLM) loss of the encoder, we connect the encoder and decoder with merely the bottlenecked representation, i.e., the hidden state of the [CLS] token, and make the decoder generate whole LLM-expanded queries with the Cross-Entropy (CE) loss. c) Contrastive pre-training pulls together the representations of the passage and LLM-expanded queries and pushes away in-batch negatives. To minimize reliance on LLM expansions, we implement a two-stage curriculum learning strategy. It first utilizes randomly sampled passages to fully initialize the encoders, and then uses a relatively small amount of LLM-expanded queries in the second phase. | 2308.08285#8 | 2308.08285#10 | 2308.08285 | [
"2203.05765"
]
|
2308.08285#10 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | between q and p, i.e., Sim(q, p), can be calculated as the inner product of vq and vp for simplicity as follows. $Sim(q, p) = Enc_q(q) \cdot Enc_p(p) = v_q^T v_p$ (1) The key to improving retrieval performance is to yield stronger representations vq, vp with better context alignment. The representations can be regarded as the compression of full contexts. We believe that incorporating the strong context-generation abilities of LLMs into the pre-training stage with carefully designed pre-tasks can be a new way of improving such alignment. Bottlenecked Query Generation Pre-training Bottlenecked pre-training trains a monomeric encoder (Enc) with good initialization abilities for subsequent fine-tuning. Given a tokenized sentence $t \in T$ from the training corpus, we randomly select a certain ratio of tokens, with the corresponding indices denoted as M, and replace them with mask tokens [m]: $mask(t) = \{[CLS], t_1, t_2, [m], t_4, ..., t_n, [SEP]\}$ (2) A Cross-Entropy (CE) loss is then used to optimize the Masked Language Model (MLM) loss for the encoder. | 2308.08285#9 | 2308.08285#11 | 2308.08285 | [
"2203.05765"
]
|
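Eq. (1) amounts to encoding both sides and taking the inner product of the [CLS] vectors. A minimal sketch, with one shared BERT-base encoder standing in for both Enc_q and Enc_p:

import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased")

def cls_embedding(text: str) -> torch.Tensor:
    batch = tok(text, return_tensors="pt", truncation=True)
    return enc(**batch).last_hidden_state[:, 0]  # last-layer hidden state at [CLS]

def sim(query: str, passage: str) -> float:
    return (cls_embedding(query) @ cls_embedding(passage).T).item()  # v_q^T v_p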
2308.08285#11 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | # LLM Query Generation Given a passage p, we use a zero-shot prompt for Alpaca-LLaMA and a few-shot prompt for tk-Instruct to expand queries, as illustrated in Figure 1. We empirically find that Alpaca 7B and 13B models work well with the zero-shot prompt, which helps save computation budgets. We manually write a few examples for tk-Instruct, as we find that few-shot prompts make its query generation more stable. LLM-based document expansion enriches the pre-training corpus with additional contextual information. Instead of directly appending the expanded queries onto the passage, we seek to incorporate them into our pre-training stage for better initialization of end-to-end retrievers. Our work only involves LLM inference at the pre-training stage, but not at the retrieval stage like traditional query or document expansion works. Two pre-training paradigms are involved to incorporate the LLM-generated queries into the dense model pre-training. | 2308.08285#10 | 2308.08285#12 | 2308.08285 | [
"2203.05765"
]
|
2308.08285#12 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | $\mathcal{L}_{enc} = -\sum_{t \in T} \sum_{i \in M} \log p(t_i \mid Enc(mask(t)))$ (3) where $t_i$ denotes the ground-truth tokens w.r.t. the corresponding mask tokens [m]. A single-layer accessional Transformer decoder (Dec) is further introduced, which receives as input the concatenation of the encoder representation $h_{last}^{[CLS]}$ and contextual texts x, e.g., LLM-generated queries. $T_{ctx} = \{h_{last}^{[CLS]}, x_1, ..., x_N, [SEP]\}$ (4) Then the decoder uses the Causal Language Model (CLM) loss to generate the whole input context with the assistance of the encoder representation. | 2308.08285#11 | 2308.08285#13 | 2308.08285 | [
"2203.05765"
]
|
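A condensed sketch of the bottlenecked objective in Eqs. (3)-(5): a single-layer decoder must regenerate the LLM-expanded query from the concatenation of the bottleneck vector and the shifted query tokens under a causal mask. Module sizes and the teacher-forcing details are assumptions for illustration, and the encoder MLM term of Eq. (3) is omitted for brevity.

import torch
import torch.nn as nn

class BottleneckedQueryGeneration(nn.Module):
    def __init__(self, encoder, vocab_size: int, hidden: int = 768):
        super().__init__()
        self.encoder = encoder  # e.g. a 12-layer BERT-base
        layer = nn.TransformerDecoderLayer(hidden, nhead=12, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=1)  # single layer
        self.lm_head = nn.Linear(hidden, vocab_size)

    def forward(self, passage_ids, passage_mask, query_embeds, query_labels):
        h = self.encoder(passage_ids, attention_mask=passage_mask).last_hidden_state
        h_cls = h[:, :1]  # the bottleneck representation h_last^[CLS]
        dec_in = torch.cat([h_cls, query_embeds[:, :-1]], dim=1)  # T_ctx, teacher forced
        causal = nn.Transformer.generate_square_subsequent_mask(dec_in.size(1))
        out = self.decoder(dec_in, memory=h_cls, tgt_mask=causal)
        logits = self.lm_head(out)
        return nn.functional.cross_entropy(  # CLM loss of Eq. (5)
            logits.reshape(-1, logits.size(-1)), query_labels.reshape(-1))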
2308.08285#13 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | Model / Zero-shot Evaluation MS-MARCO MRR@10 Recall@50 Recall@1k TREC DL 19 TREC DL 20 nDCG@10 nDCG@10 BM25 SimCSE (Gao, Yao, and Chen 2021)† coCondenser (Gao and Callan 2022)† Contriever (Izacard et al. 2021)† 18.7 8.7 7.5 16.8 59.2 33.7 31.3 60.8 85.7 64.6 58.1 89.1 51.2 24.5 22.1 44.5 47.7 17.9 20.7 43.2 Contrastive Pre-training Baseline + tk-inst 3b queries + Alpaca 7b queries + Alpaca 13b queries 12.5 20.9+8.4 22.6+10.1 22.7+10.2 49.0 70.2+21.2 70.7+21.7 71.7+22.7 82.3 92.8+10.5 93.8+11.5 94.3+12.0 36.0 47.0+11.0 51.0+15.0 53.9+17.9 38.4 48.6+10.2 48.9+10.5 50.1+11.7 Table 1: Zero-shot evaluation of contrastive pre-training with LLM-based document expansion. † denotes our reproduced results. The best scores are marked in bold. Results with the increment over the corresponding baseline have been tested with two-tailed t-tests, demonstrating statistically significant improvements (p-value ≤ 0.01). | 2308.08285#12 | 2308.08285#14 | 2308.08285 | [
"2203.05765"
]
|
2308.08285#14 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | $\mathcal{L}_{dec} = -\sum_{x_i \in T_{ctx}} \log p(x_i \mid Dec(x_{[:i-1]}))$ (5) The final loss $\mathcal{L}$ is then formulated as follows. $\mathcal{L} = \mathcal{L}_{enc} + \mathcal{L}_{dec}$ (6) Through the bottlenecked encoder-decoder structure, we seek to compress the context signal from LLM-generated queries into the encoder representations and give strong initialization ability to the encoder. Contrastive Pre-training For reproduction and fair comparison, we adapt the contrastive pre-training architecture from coCondenser (Gao and Callan 2022). The passage p and its sampled or generated context pctx are directly forwarded through the encoder Enc. Besides the MLM loss $\mathcal{L}_{enc}$ of the encoder, an extra Transformer decoder $Dec_{ext}$ is also introduced for representation pre-training, which takes the concatenation of the [CLS] encoder representation $h_{last}^{[CLS]}$ and the encoder hidden states $h_i^l$ from the l-th layer. Then a cross-entropy loss is used for the decoder's pre-task. | 2308.08285#13 | 2308.08285#15 | 2308.08285 | [
"2203.05765"
]
|
2308.08285#15 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | $\mathcal{L}_{ext} = -\sum_{t \in T} \sum_{i \in M} \log p(t_i \mid Dec_{ext}(h_{last}^{[CLS]}, h_1^l, ..., h_n^l))$ (7) Differently, for pre-training with LLM-expanded queries, assuming $v_p$ and $v_{ctx}$ denote the encoders' representations, a contrastive loss with in-batch negatives is used as follows. $\mathcal{L}_{CL} = -\log \frac{\exp(v_p \cdot v_{ctx}^+)}{\exp(v_p \cdot v_{ctx}^+) + \sum \exp(v_p \cdot v_{ctx}^-)}$ (8) where $v_{ctx}^+$ denotes the representation of the context corresponding to p, and $v_{ctx}^-$ denotes the representations of the context texts of the other passages in the batch. $\mathcal{L} = \mathcal{L}_{enc} + \mathcal{L}_{ext} + \mathcal{L}_{CL}$ (9) Through contrastive pre-training, the representations of the passage and LLM-generated queries are directly pulled together in the same latent space, which gives better query-passage alignment and zero-shot ability to the encoders. | 2308.08285#14 | 2308.08285#16 | 2308.08285 | [
"2203.05765"
]
|
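Eq. (8) is a standard in-batch InfoNCE-style objective; a minimal sketch follows (the temperature knob is a common addition not shown in Eq. (8)):

import torch
import torch.nn.functional as F

def in_batch_contrastive(v_passage, v_context, temperature: float = 1.0):
    # v_passage, v_context: [batch, dim]; row i of each tensor forms the positive
    # pair, and all other rows of v_context act as in-batch negatives.
    logits = v_passage @ v_context.T / temperature
    targets = torch.arange(v_passage.size(0), device=v_passage.device)
    return F.cross_entropy(logits, targets)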
2308.08285#16 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | Curriculum Learning As discussed before, LLM-based document expansion faces the challenge of costly inference due to the large number of documents or passages. Since we intend to pre-train our model with enriched contexts, inspired by the wisdom of curriculum learning (Bengio et al. 2009), we consider 1) a randomly cropped passage span as a coarse-grained context, and 2) the LLM-expanded queries as fine-grained context, as depicted in Figure 2. Following the span corruption strategies in the seed-encoder (Lu et al. 2021) and coCondenser (Gao and Callan 2022), we use the passage itself as the coarse-grained context in bottlenecked generation pre-training, and a randomly sampled passage span in contrastive pre-training. As we focus on LLM-based document expansion, other span corruption strategies (Wu et al. 2023a) are left to our future work. After pre-training on a large amount of randomly cropped contexts, we initialize from the first stage and then use the fine-grained LLM-expanded queries for the second-phase pre-training. Experiments find that this curriculum strategy greatly reduces the need for LLM inference on MS-MARCO passages, while still maintaining similar retrieval performance. Zero-shot evaluation and Fine-tuning We conduct the zero-shot evaluation of the contrastive pre-trained encoder without fine-tuning on the MS-MARCO, TREC-DL, and BEIR datasets. We conduct fine-tuning on both pre-training schemas to verify their retrieval initialization ability. Following DPR (Karpukhin et al. 2020), a simple contrastive loss is applied to optimize the retriever. The final optimization objective is the sum of the above losses. $\mathcal{L} = -\log \frac{\exp(q \cdot p^+)}{\exp(q \cdot p^+) + \sum \exp(q \cdot p^-)}$ (10) | 2308.08285#15 | 2308.08285#17 | 2308.08285 | [
"2203.05765"
]
|
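The two-stage curriculum can be realized with a simple step-dependent switch of the context source; the 75%/25% split used here matches the schedule reported in the Experiments section, while sample_span and llm_queries are assumed data sources.

def pick_context(step, total_steps, passage, sample_span, llm_queries, switch_ratio=0.75):
    if step < switch_ratio * total_steps:
        return sample_span(passage)      # stage 1: coarse-grained random span
    return llm_queries[passage.id]       # stage 2: fine-grained LLM-expanded queries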
2308.08285#17 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | Model / Fine-tuned Results MS-MARCO MRR@10 Recall@50 Recall@1k TREC DL 19 TREC DL 20 nDCG@10 nDCG@10 Contriever (Izacard et al. 2021)† Condenser (Gao and Callan 2021) coCondenser (Gao and Callan 2022) SimLM (Wang et al. 2022a) RetroMAE (Liu and Shao 2022) CoT-MAE (Wu et al. 2023a) 33.4 36.6 38.2 39.1 39.3 39.4 85.0 85.4† 86.5† 87.3† 87.0† 87.0 98.4 97.4 98.4 98.6 98.5 98.7 62.8 69.8 71.7† 68.9† 69.1† 70.9† 63.2 66.5† 68.4† 68.8† 70.0† 70.4 Contrastive Pre-training Baseline + tk-instruct 3b queries + Alpaca 7b queries + Alpaca 13b queries 38.8 39.6+0.8 40.0+1.2 39.6+0.8 87.8 88.8+1.0 89.0+1.2 88.8+1.0 98.8 99.0 99.1 98.9 71.1 72.9+1.8 72.9+1.8 72.6+1.5 68.4 71.1+2.7 71.3+2.9 72.3+3.9 Bottlenecked Query Generation Baseline + tk-instruct 3b queries + Alpaca 7b queries + Alpaca 13b queries 39.3 40.3+1.0 39.9+0.6 39.7 87.9 88.7+0.8 88.2 88.3 98.6 98.9 98.7 98.7 69.9 70.7+0.8 69.6 70.8+0.9 67.4 70.0+2.6 70.7+3.3 69.4+2.0 Table 2: Fine-tuned results of pre-training with LLM-based document expansion. † denotes our reproduced results. The best scores are marked in bold. Results with the increment over the corresponding baseline have been tested with two-tailed t-tests, demonstrating statistically significant improvements (p-value ≤ 0.01). | 2308.08285#16 | 2308.08285#18 | 2308.08285 | [
"2203.05765"
]
|
2308.08285#18 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | where q is a given query, and p+ and p− are its corresponding positive passage and negative passages, respectively. Experiments This section introduces the detailed experiment settings for pre-training and fine-tuning. Then we present the main results. # Pre-training Following the pre-training settings in (Gao and Callan 2022), we choose the MS-MARCO dataset (Nguyen et al. 2016) with 3.2M documents as our pre-training corpus. LLMs of different types and parameter sizes, i.e., Alpaca 7B and 13B (Wang et al. 2023), and tk-Instruct 3B (Wang et al. 2022b), are used to generate the queries for LLM-based document expansion. Nucleus sampling with top_p = 0.95, top_k = 50, and temperature = 0.7 is used for LLM generation. For bottlenecked query generation pre-training, the encoder is initialized from the 12-layer BERT-base model (Devlin et al. 2019), while the single-layer decoder is randomly initialized from scratch. We use the AdamW optimizer with a learning rate of 3e-4, a batch size of 2048, total steps of 80k, and a warmup ratio of 0.1. The pre-training uses 8 Tesla A100 GPUs and trains for 19 hours. For contrastive pre-training, we adapt the code and architecture from (Gao and Callan 2022) and initialize from (Gao and Callan 2021) by following their settings. We use a learning rate of 1e-4, a batch size of 2048, and total steps of 120k, and keep other hyper-parameters the same as above, training for 50 hours. For curriculum learning, 75% of the total steps are used for the first stage of pre-training with sampled spans, and the remaining 25% of the steps are used for the second stage | 2308.08285#17 | 2308.08285#19 | 2308.08285 | [
"2203.05765"
]
|
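For concreteness, the reported optimization setup for bottlenecked pre-training could be wired up as below; the cosine-with-warmup helper from transformers is one plausible realization of the described scheduler, and the model variable is assumed to be defined.

import torch
from transformers import get_cosine_schedule_with_warmup

TOTAL_STEPS, WARMUP_RATIO = 80_000, 0.1
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(TOTAL_STEPS * WARMUP_RATIO),
    num_training_steps=TOTAL_STEPS)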
2308.08285#19 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | of pre-training with LLM-generated queries. We use the cosine scheduler with the same hyper-parameter settings for the first stage, and a constant learning rate for the second stage. All pre-training seeds are set to 42 for reproducibility. The encoders are directly tested on downstream tasks without fine-tuning for zero-shot evaluation. # Fine-tuning The encoder is fine-tuned and tested on the MS-MARCO Passage Ranking task (Nguyen et al. 2016), TREC Deep Learning (DL) 2019 (Craswell et al. 2020) and 2020 (Craswell et al. 2021). The MS-MARCO Passage Ranking dataset contains 8.8 million passages and 500k human-annotated query-passage pairs. Following (Gao and Callan 2021), we report the performance metrics MRR@10, Recall@50, and Recall@1K, and evaluate the models on its development set with 6,980 queries, because its test set is not publicly available. The TREC-DL 2019 and 2020 test sets both contain 200 annotated queries. We adopt the Tevatron pipeline (Gao et al. 2022) with the AdamW optimizer, a learning rate of 2e-5, a batch size of 8, 15 negative samples per passage, and a negative depth of 200, and train for 3 epochs. The performance metrics of TREC and BEIR are reported in nDCG@10. # Baselines We compare to self-contained baselines that do not use LLM-expanded queries, but only use randomly sampled spans as coarse-grained contexts. All other hyper-parameters used in pre-training remain the same as in the main experiments for fair comparison. In fine-tuned experiments, the contrastive pre-training baselines are mainly from (Wu, Ma, and Hu | 2308.08285#18 | 2308.08285#20 | 2308.08285 | [
"2203.05765"
]
|
2308.08285#20 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | Results / nDCG@10 BM25 TREC-COVID NFCorpus 65.6 32.5 NQ HotpotQA FiQA-2018 32.9 60.3 23.6 ArguAna Touché-2020 31.5 36.7 CQADupStack Quora 29.9 78.9 DBPedia 31.3 SCIDOCS 15.8 FEVER Climate-FEVER SciFact 75.3 21.3 66.5 coCondenser Contriever 21.2 13.7 27.3 31.7 10.7 22.3 7.2 25.4 48.1 24.5 34.4 5.8 37.9 16.7 10.5 71.3 28.4 83.5 16.3 29.2 4.6 14.9 16.8 6.4 43.2 68.2 15.5 64.9 SimCSE Baseline 27.5 10.5 16.2 29.9 16.3 23.8 9.7 9.3 24.2 19.6 28.0 13.4 35.8 8.1 13.5 73.7 18.2 75.8 16.7 22.5 6.1 10.4 29.2 14.2 25.0 43.6 8.5 52.7 + tk-Instruct 3b 36.8+20.6 33.1+3.2 34.3+25.0 56.2+32.0 29.8+10.3 44.6+8.8 16.3+8.2 30.9+12.8 83.8+8.0 30.2+7.7 13.6+3.2 61.9+18.3 18.4+9.8 64.4+11.7 39.6+12.8 + Alpaca 7b 52.3+36.1 30.9+1.0 31.8+22.5 51.5+27.3 27.2+7.6 40.5+4.8 13.7+5.5 32.4+14.2 83.3+7.5 28.8+6.3 13.5+3.2 67.2+23.6 13.8+5.3 60.8+8.1 39.1+12.4 + Alpaca 13b 54.7+38.5 33.5+3.5 31.9+22.6 51.8+27.6 28.6+9.0 40.6+4.9 16.9+8.7 33.3+15.1 84.3+8.5 29.6+7.1 14.4+4.1 73.1+29.5 17.2+8.6 60.9+8.2 40.8+14.0 Average 43.0 20.3 36.9 22.0 26.8 Table 3: Out-of-domain zero-shot evaluation of contrastive pre-training with LLM-based document expansion on the BEIR benchmark. All baselines tested on nDCG@10 are based on our reproduction. Results with the increment over the corresponding baseline have been tested with two-tailed t-tests, demonstrating statistically significant improvements (p-value ≤ 0.01). | 2308.08285#19 | 2308.08285#21 | 2308.08285 | [
"2203.05765"
]
|
2308.08285#21 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | 2022) by following their hyper-parameter settings, and the other baselines are based on our settings. We also compare with other notable baselines, including the traditional sparse retrieval method BM25 (Robertson, Zaragoza et al. 2009), the unsupervised sentence similarity encoder SimCSE (Gao, Yao, and Chen 2021), and the unsupervised contrastive pre-training methods coCondenser (Gao and Callan 2022) and Contriever (Izacard et al. 2021) for zero-shot evaluation. For fine-tuned results, we also compare with the latest bottlenecked pre-training methods, including Condenser (Gao and Callan 2021), SimLM (Wang et al. 2022a), RetroMAE (Liu and Shao 2022) and CoT-MAE (Wu et al. 2023a). Note that the recent bottlenecked methods using multi-task pre-training (Zhou et al. 2022) or hybrid retrieval (Liu et al. 2023; Wu et al. 2023b) are not compared, as they are beyond the scope of fair comparison. Zero-shot Evaluation Table 1 reports the in-domain zero-shot evaluation of contrastive pre-training with LLM-based document expansion. Pre-training with LLM-expanded queries shows clear improvements over the baselines that merely use randomly sampled passages. This indicates that our method achieves strong zero-shot retrieval abilities for in-domain evaluation on the MS-MARCO and TREC-DL 19 & 20 datasets. Fine-tuned Retrieval The fine-tuned results of the two pre-training methods, i.e., contrastive pre-training and bottlenecked query generation pre-training, are presented in Table 2. Pre-training with LLM-expanded queries also gives a statistically significant boost over the baselines and counterparts. In addition, we notice that 1) Contrastive pre-training gives better results on the | 2308.08285#20 | 2308.08285#22 | 2308.08285 | [
"2203.05765"
]
|
2308.08285#22 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | MS-MARCO passage task (in Recall@50 and Recall@1k) and TREC-DL 19 & 20 (in nDCG@10). 2) Bottlenecked query generation gives better initialization on MS-MARCO w.r.t. the officially preferred metric MRR@10, but still lies behind contrastive pre-training in other metrics. Out-of-domain Evaluation We also evaluate the out-of-domain zero-shot BEIR benchmark for contrastive pre-training with LLM-based document expansion and report the metric (nDCG@10) in Table 3. BM25 is a very strong baseline w.r.t. all the other contrastive pre-training methods that do not go through human-labeled fine-tuning. Nevertheless, our method still shows strong improvements over its contrastive baseline. Specifically, compared with Contriever (Izacard et al. 2021), which is an unsupervised contrastive method pre-trained on the much larger corpus CCNET (Wenzek et al. 2020), pre-training with LLM expansion also shows superior retrieval performance. Extended Analyses In this section, we analyze the effect of scaling up LLMs and the curriculum learning strategy with expanded queries generated by Alpaca 13b.¹ | 2308.08285#21 | 2308.08285#23 | 2308.08285 | [
"2203.05765"
]
|
2308.08285#23 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | Effects of Scaling up LLMs We use three LLMs with different parameter sizes ranging from 3b to 13b, prompting them for document expansion and integrating the generated queries into pre-training. As shown in Table 1, scaling up the LLMs yields better retrieval performance in zero-shot contrastive pre-training. [Figure 3 plot: MS-MARCO MRR@10 (left axis) and TREC DL 20 nDCG@10 (right axis) for Bottleneck (MARCO) and Bottleneck (DL20), over fine-grained pre-training corpus sizes of 50k, 0.1M, 0.4M, 0.8M, 1M, 4M, and 8.8M; x-axis: Amount of Training Corpus for Fine-grained Pre-training.] Figure 3: Effects of curriculum learning for fine-tuned bottlenecked pre-training with expanded queries generated by Alpaca 13b. The dashed lines are the corresponding baselines from Table 2. ¹ Alpaca 13b is chosen because of better results in zero-shot retrieval and on-par performance in fine-tuned retrieval. But this observation is not valid after fine-tuning in Table 2. We hypothesize that for fine-tuning with human labels, these LLMs are all capable enough of giving a good initialization for retrieval. # Effects of Curriculum Learning To further reduce the need for LLM-expanded queries in pre-training, we attempt to use a curriculum learning strategy as detailed before. We use randomly sampled spans as the coarse-grained context in the first stage of curriculum pre-training for 75% of the total training steps. Then we use a small amount of LLM-expanded queries as the fine-grained context for the remaining pre-training steps. Figures 3 and 4 show that both pre-training schemas benefit from curriculum learning. Bottlenecked query generation outperforms its baseline with just 0.4 million LLM-expanded queries after fine-tuning. Zero-shot contrastive pre-training surpasses the baselines and continues to demonstrate sustainable improvements as the number of fine-grained queries increases. | 2308.08285#22 | 2308.08285#24 | 2308.08285 | [
"2203.05765"
]
|
2308.08285#24 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | [Figure 4 plot: zero-shot MS-MARCO MRR@10 (left axis) and TREC DL 20 nDCG@10 (right axis) for Contrast (MARCO) and Contrast (DL20), over fine-grained pre-training corpus sizes of 50k, 0.1M, 0.4M, 0.8M, 1M, 4M, and 8.8M; x-axis: Amount of Training Corpus for Fine-grained Pre-training.] Figure 4: Effects of curriculum learning for zero-shot contrastive pre-training with LLM-expanded queries. Related Works # Pre-training for Dense Retrieval Dense passage retrieval has gained sustainable improvements with the recent development of pre-training tasks. Some works focus on contrastive pre-training with constructed span relationships (Chang et al. 2020), randomly cropped spans (Gao and Callan 2022) or multiple granularity alignments (Ma et al. 2022). Meanwhile, other works focus on pre-training with auxiliary bottlenecked decoders, like pre-training with a weak generative decoder (Lu et al. 2021), an extreme masking ratio (Liu and Shao 2022), and contextual span sampling (Wu et al. 2023a). Our method is similar to (Gao and Callan 2022) and (Wu et al. 2023a), but our core contribution is the methodology of incorporating expanded queries generated by LLMs into such pre-training schemas, which brings better context alignment and stronger zero-shot and fine-tuned performances. | 2308.08285#23 | 2308.08285#25 | 2308.08285 | [
"2203.05765"
]
|
2308.08285#25 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | LLM-based Query and Document Expansion Traditional query or document expansion generates additional context via query rewriting (Lavrenko and Croft 2017), or with specially fine-tuned T5 (Nogueira et al. 2019) or BART models (Cho et al. 2022). With the bloom of LLMs (Ouyang et al. 2022; Touvron et al. 2023; Wang et al. 2022b), a growing body of research focuses on using LLMs as query expansion models (Gao et al. 2023; Wang, Yang, and Wei 2023; Jagerman et al. 2023; Yu et al. 2023), which enhance the lexical match of query-passage pairs. However, as discussed before, LLM-based document expansion still lacks exploration due to the expensive inference costs brought by the huge number of documents and the online inference issue. We propose to tackle those issues with pre-training techniques and curriculum learning strategies tailored for dense retrieval. Our method is also orthogonal to traditional query and document expansion and can incorporate them at the retrieval stage. Conclusion This paper systematically studies the potential of pre-training with Large Language Model-based document expansion for dense passage retrieval. Strong improvements in zero-shot and out-of-domain performance are observed in contrastive pre-training with LLM-based document expansion. Moreover, both contrastive pre-training and bottlenecked query generation pre-training achieve good retrieval abilities after fine-tuning. We further propose a two-stage curriculum learning strategy that can greatly reduce the need for LLM-expanded queries in pre-training, with only minor performance degradation. LLMs excel at expanding high-quality queries with enriched context information, which is suitable for scenarios lacking human annotations. Researchers can thus deploy quick initialization of an unsupervised dense retrieval system with pre-training based on LLM document expansion, even with NO human labels provided. | 2308.08285#24 | 2308.08285#26 | 2308.08285 | [
"2203.05765"
]
|
2308.08285#26 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | Limitation We are also interested in testing more types of LLMs with different sizes, such as ChatGPT (Ouyang et al. 2022) and LLaMA 2 (Touvron et al. 2023), or different prompts for document expansion, but our experiment budget is too limited to support immediate investigations, and we leave that to our future work. References Bengio, Y.; Louradour, J.; Collobert, R.; and Weston, J. 2009. Curriculum learning. In Danyluk, A. P.; Bottou, L.; and Littman, M. | 2308.08285#25 | 2308.08285#27 | 2308.08285 | [
"2203.05765"
]
|
2308.08285#27 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | L., eds., Proceedings of the 26th Annual International Conference on Machine Learning, ICML 2009, Montreal, Quebec, Canada, June 14-18, 2009, volume 382 of ACM International Conference Proceeding Series, 41-48. ACM. Cai, D.; Wang, Y.; Liu, L.; and Shi, S. 2022. Recent advances in retrieval-augmented text generation. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, 3417-3419. Chang, W.; Yu, F. X.; Chang, Y.; Yang, Y.; and Kumar, S. 2020. | 2308.08285#26 | 2308.08285#28 | 2308.08285 | [
"2203.05765"
]
|
2308.08285#28 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | Pre-training Tasks for Embedding-based Large-scale Retrieval. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Cho, S.; Jeong, S.; Yang, W.; and Park, J. C. 2022. Query Generation with External Knowledge for Dense Retrieval. In Agirre, E.; Apidianaki, M.; and Vulic, I., eds., Proceedings of Deep Learning Inside Out: The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, DeeLIO@ACL 2022, Dublin, Ireland and Online, May 27, 2022, 22- | 2308.08285#27 | 2308.08285#29 | 2308.08285 | [
"2203.05765"
]
|
2308.08285#29 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | 32. Association for Computational Linguistics. Chowdhery, A.; Narang, S.; Devlin, J.; Bosma, M.; Mishra, G.; Roberts, A.; Barham, P.; Chung, H. W.; Sutton, C.; Gehrmann, S.; Schuh, P.; Shi, K.; Tsvyashchenko, S.; Maynez, J.; Rao, A.; Barnes, P.; Tay, Y.; Shazeer, N.; Prabhakaran, V.; Reif, E.; Du, N.; Hutchinson, B.; Pope, R.; Bradbury, J.; Austin, J.; Isard, M.; Gur-Ari, G.; Yin, P.; Duke, T.; Levskaya, A.; Ghemawat, S.; Dev, S.; Michalewski, H.; Garcia, X.; Misra, V.; Robinson, K.; Fedus, L.; Zhou, D.; Ippolito, D.; Luan, D.; Lim, H.; Zoph, B.; Spiridonov, A.; Sepassi, R.; Dohan, D.; Agrawal, S.; Omernick, M.; Dai, A. M.; Pillai, T. S.; Pellat, M.; Lewkowycz, A.; Moreira, E.; Child, R.; Polozov, O.; Lee, K.; Zhou, Z.; Wang, X.; Saeta, B.; Diaz, M.; Firat, O.; Catasta, M.; Wei, J.; Meier-Hellstern, K.; Eck, D.; Dean, J.; Petrov, S.; and Fiedel, N. 2022. PaLM: Scaling Language Modeling with Pathways. CoRR, abs/2204.02311. Craswell, N.; Mitra, B.; Yilmaz, E.; and Campos, D. 2021. Overview of the TREC 2020 deep learning track. arXiv:2102.07662. Craswell, N.; Mitra, B.; Yilmaz, E.; Campos, D.; and Voorhees, E. M. 2020. Overview of the TREC 2019 deep learning track. arXiv:2003.07820. | 2308.08285#28 | 2308.08285#30 | 2308.08285 | [
"2203.05765"
]
|
2308.08285#30 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 4171-4186. Minneapolis, Minnesota: Association for Computational Linguistics. Gao, L.; and Callan, J. 2021. Condenser: a Pre-training Architecture for Dense Retrieval. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 981-993. | 2308.08285#29 | 2308.08285#31 | 2308.08285 | [
"2203.05765"
]
|
2308.08285#31 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | Online and Punta Cana, Dominican Republic: Association for Computational Linguistics. Gao, L.; and Callan, J. 2022. Unsupervised Corpus Aware Language Model Pre-training for Dense Passage Retrieval. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2843-2853. Dublin, Ireland: Association for Computational Linguistics. Gao, L.; Ma, X.; Lin, J.; and Callan, J. 2022. Tevatron: An efficient and flexible toolkit for dense retrieval. arXiv preprint arXiv:2203.05765. Gao, L.; Ma, X.; Lin, J.; and Callan, J. 2023. Precise Zero-Shot Dense Retrieval without Relevance Labels. In Rogers, A.; Boyd-Graber, J. L.; and Okazaki, N., eds., Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, 1762- | 2308.08285#30 | 2308.08285#32 | 2308.08285 | [
"2203.05765"
]
|
2308.08285#32 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | 1777. Association for Computational Linguistics. Gao, T.; Yao, X.; and Chen, D. 2021. SimCSE: Simple Contrastive Learning of Sentence Embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 6894-6910. Online and Punta Cana, Dominican Republic: Association for Computational Linguistics. Izacard, G.; Caron, M.; Hosseini, L.; Riedel, S.; Bojanowski, P.; Joulin, A.; and Grave, E. 2021. Towards Unsupervised Dense Information Retrieval with Contrastive Learning. CoRR, abs/2112.09118. Jagerman, R.; Zhuang, H.; Qin, Z.; Wang, X.; and Bendersky, M. 2023. Query Expansion by Prompting Large Language Models. CoRR, abs/2305.03653. Karpukhin, V.; Oguz, B.; Min, S.; Lewis, P.; Wu, L.; Edunov, S.; Chen, D.; and Yih, W.-t. 2020. Dense Passage Retrieval for Open-Domain Question Answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 6769-6781. Online: Association for Computational Linguistics. | 2308.08285#31 | 2308.08285#33 | 2308.08285 | [
"2203.05765"
]
|
2308.08285#33 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | Khattab, O.; and Zaharia, M. 2020. ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT. In Huang, J. X.; Chang, Y.; Cheng, X.; Kamps, J.; Murdock, V.; Wen, J.; and Liu, Y., eds., Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, SIGIR 2020, Virtual Event, China, July 25-30, 2020, 39-48. ACM. Lavrenko, V.; and Croft, W. B. 2017. Relevance-Based Language Models. SIGIR Forum, 51(2): 260-267. Lewis, P. S. H.; Perez, E.; Piktus, A.; Petroni, F.; Karpukhin, V.; Goyal, N.; Küttler, H.; Lewis, M.; Yih, W.; Rocktäschel, T.; Riedel, S.; and Kiela, D. 2020. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. In Larochelle, H.; Ranzato, M.; Hadsell, R.; Balcan, M.; and Lin, H., eds., Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Liu, Y.; Lu, W.; Cheng, S.; Shi, D.; Wang, S.; Cheng, Z.; and Yin, D. 2021. Pre-trained Language Model for Web-scale Retrieval in Baidu Search. In Zhu, F.; Ooi, B. C.; and Miao, C., eds., KDD '21: The 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual Event, Singapore, August 14-18, 2021, 3365-3375. ACM. | 2308.08285#32 | 2308.08285#34 | 2308.08285 | [
"2203.05765"
]
|
2308.08285#34 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | Liu, Z.; and Shao, Y. 2022. RetroMAE: Pre-training Retrieval-oriented Transformers via Masked Auto-Encoder. arXiv preprint arXiv:2205.12035. Liu, Z.; Xiao, S.; Shao, Y.; and Cao, Z. 2023. RetroMAE-2: Duplex Masked Auto-Encoder For Pre-Training Retrieval-Oriented Language Models. In Rogers, A.; Boyd-Graber, J. L.; and Okazaki, N., eds., Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, 2635-2648. Association for Computational Linguistics. Lu, S.; He, D.; Xiong, C.; Ke, G.; Malik, W.; Dou, Z.; Bennett, P.; Liu, T.-Y.; and Overwijk, A. 2021. Less is More: Pretrain a Strong Siamese Encoder for Dense Text Retrieval Using a Weak Decoder. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2780-2791. Lu, Y.; Liu, Y.; Liu, J.; Shi, Y.; Huang, Z.; Sun, S. F. Y.; Tian, H.; Wu, H.; Wang, S.; Yin, D.; et al. 2022. | 2308.08285#33 | 2308.08285#35 | 2308.08285 | [
"2203.05765"
]
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.