# Response (2/3):
<Redundancy>8</Redundancy>
<Retained>10</Retained>
Reasoning:
The merged NDA <S> contains multiple redundant sections from the original NDAs. For example, the introductory sections, such as the agreement between the parties and the purpose of information sharing, are repeated multiple times. Additionally, sections related to the definition of confidential information, the recipient’s obligations, exclusions, non-competition, term and termination, and general provisions are also redundantly included.
However, in terms of retained information, the merged NDA retains all the key elements from the original NDAs, including the parties involved, the objective, specification of protected information, recipient’s obligations, violation ramifications, general terms, legal reference, and attestation. Overall, while there is redundancy in the merged NDA, it successfully retains all the necessary information from the original NDAs.
# Response (3/3):
<Redundancy>3</Redundancy>
The merged NDA contains some redundancy. There are three sections that have overlapping information between the original NDAs. These sections include:
1. The agreement between the parties (Section 1 in Doc1 and the beginning of Doc2)
2. Information sharing and the purpose of the NDA (Section 2 in both Doc1 and Doc2)
3. General provisions such as governing law and amendments (Section 7 in Doc1 and Doc2)
<Retained>9</Retained>
The merged NDA retains a significant amount of information from the original NDAs. It includes all key clauses and provisions, such as the definition of confidential information, the obligations of the receiving party, exclusions, non-competition, and general provisions. Only a few minor details, such as specific penalties and fine amounts, are not retained in the merged NDA. Overall, the merged NDA effectively combines the essential elements from the original NDAs.
Note: Remember to remove the reasoning from between the tags when providing the final scores.
# Final Overall Score (Harmonic Mean of Averages):
6.87
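For reference, "Harmonic Mean of Averages" presumably denotes the following combination of the per-response scores, with $\bar{r}$ the mean redundancy score and $\bar{k}$ the mean retained-information score across the scoring responses:

$$\text{final score} = \frac{2\,\bar{r}\,\bar{k}}{\bar{r}+\bar{k}}$$

The 6.87 here cannot be recomputed from this excerpt alone, since Response (1/3) is not shown; Table 48 below gives a complete example (see the worked check there).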
Table 41: Merging 4 example NDAs; Note that S1-S3 are the three best scoring merge attempts from the previous step: Step 2 (Prompt)
The following NDAs <S1> - <S3> each merge the initial NDAs <Doc1> - <Doc4>. Combine the merged NDAs <S1> - <S3> into a new one, maximizing their advantages and overall information retention, while minimizing redundancy. Output only the new NDA between the tags <Merged> and </Merged>, without any additional text.
Here are the original NDAs <Doc1> - <Doc4>:
<Doc1>
. . . (Omitted, see Table 31)
</Doc1>
<Doc2>
. . . (Omitted, see Table 31)
</Doc2>
<Doc3>
. . . (Omitted, see Table 32)
</Doc3>
<Doc4>
. . . (Omitted, see Table 33)
</Doc4>
Here are the merged NDAs <S1> - <S3>:
<S1>
NON-DISCLOSURE AGREEMENT (NDA)
1. Agreement between [Your Company Name] and [Recipient Name] on [Date]. . . . (Omitted)
</S1>
<S2>
NON-DISCLOSURE AGREEMENT (NDA)
1. Agreement between [Your Company Name] and [Recipient Name] on [Date]. . . . (Omitted, see Tables 37-38)
</S2>
<S3>
NON-DISCLOSURE AGREEMENT (NDA)
1. Agreement between [Your Company Name] and [Recipient Name] on [Date]. . . . (Omitted)
</S3>
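The aggregation prompt above is a fixed template instantiated with the original documents and the best previous merges. Below is a minimal sketch of such an assembly step; the function and variable names are illustrative and not taken from the GoT codebase.

```python
def build_aggregate_prompt(docs: list[str], merges: list[str]) -> str:
    """Instantiate the Step 2 aggregation template with the original
    NDAs <Doc1>-<DocN> and the best previous merges <S1>-<SM>."""
    instruction = (
        f"The following NDAs <S1> - <S{len(merges)}> each merge the initial "
        f"NDAs <Doc1> - <Doc{len(docs)}>. Combine the merged NDAs into a new one, "
        "maximizing their advantages and overall information retention, while "
        "minimizing redundancy. Output only the new NDA between the tags "
        "<Merged> and </Merged>, without any additional text.\n"
    )
    doc_block = "".join(f"<Doc{i}>\n{d}\n</Doc{i}>\n" for i, d in enumerate(docs, 1))
    merge_block = "".join(f"<S{i}>\n{s}\n</S{i}>\n" for i, s in enumerate(merges, 1))
    return (instruction
            + "Here are the original NDAs:\n" + doc_block
            + "Here are the merged NDAs:\n" + merge_block)
```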
Table 42: Merging 4 example NDAs: Step 2 (Responses)
For this run, none of the responses improve upon the previous best and thus will all be discarded by the next step. For that reason, and due to their length, we do not present them fully and exclude their scoring.
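In GoT terms, this is a keep-best selection over scored candidate thoughts. A hedged sketch of that logic, assuming a `score` callable that returns the final overall score of a candidate merge (names are illustrative, not the GoT API):

```python
def keep_best(candidates: list[str], score, prev_best: float, n: int = 1) -> list[str]:
    """Score each candidate merge and keep up to n of them, discarding
    any that do not beat the best score from the previous step."""
    scored = sorted(((score(c), c) for c in candidates), reverse=True)
    # In the run below, all five scores (6.50, 5.62, 6.60, 6.15, 6.22)
    # fall short of the previous best, so the kept list comes back empty.
    return [c for s, c in scored[:n] if s > prev_best]
```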
# Response (1/5):
<Merged>
NON-DISCLOSURE AGREEMENT (NDA)
1. Agreement between [Your Company Name] and [Recipient Name] on [Date].
2. Information sharing for the purpose of [specific project or purpose].
. . . (Omitted)
Score: 6.50
# Response (2/5):
<Merged>
NON-DISCLOSURE AGREEMENT (NDA)
Effective from [Effective Date], this NDA involves [Your Company Name] (“Disclosing Party”), and [Recipient Name] (“Receiving Party”).
1. Agreement between [Your Company Name] and [Recipient Name] on [Date]. . . . (Omitted)
Score: 5.62
# Response (3/5):
<Merged>
NON-DISCLOSURE AGREEMENT (NDA)
1. Agreement between [Your Company Name] and [Recipient Name] on [Date].
2. Information sharing for the purpose of [specific project or purpose].
. . . (Omitted)
Score: 6.60
# Response (4/5):
<Merged>
NON-DISCLOSURE AGREEMENT (NDA)
1. Agreement between [Your Company Name] and [Recipient Name] on [Date].
2. Information sharing for the purpose of [specific project or purpose].
. . . (Omitted)
Score: 6.15
# Response (5/5):
<Merged>
NON-DISCLOSURE AGREEMENT (NDA)
1. Agreement between [Your Company Name] and [Recipient Name] on [Date].
2. Information sharing for the purpose of [specific project or purpose].
. . . (Omitted)
Score: 6.22
Table 43: Merging 4 example NDAs: Step 3 (Prompt)
The following NDA <S> merges initial NDAs <Doc1> - <Doc4>.
Please improve the merged NDA <S> by adding more information and removing redundancy. Output only the improved NDA, placed between the tags <Merged> and </Merged>, without any additional text.
Here are NDAs <Doc1> - <Doc4>:
<Doc1>
NON-DISCLOSURE AGREEMENT (NDA)
1. Agreement between [Your Company Name] and [Recipient Name] on [Date].
2. Information sharing for the purpose of [specific project or purpose].
. . . (Omitted, see Table 31)
</Doc1>
<Doc2>
NON-DISCLOSURE AGREEMENT (NDA)
Effective from [Effective Date], this NDA involves [Your Company Name] (“Disclosing Party”), and [Recipient Name] (“Receiving Party”).
1. Purpose: The Disclosing Party will disclose confidential information related to [Topic of Research] to the Receiving Party for [Purpose].
. . . (Omitted, see Table 31)
</Doc2>
<Doc3>
CONFIDENTIALITY & NON-DISCLOSURE AGREEMENT
Entities Involved:
Effective [Date], between [AquaBlue Innovations], established in [State], and [PineTree Solutions], a registered entity.
Objective:
To safeguard classified data during talks of a potential . . . (Omitted, see Table 32)
</Doc3>
<Doc4>
SECRECY & DISCLOSURE AGREEMENT
Contracting Parties:
Dated [Date], drawn between [AquaBlue Innovations], a [State]-based corporation, and [PineTree Solutions], a licensed organization.
Aim:
To protect exclusive insights amidst . . . (Omitted, see Table 33)
</Doc4>
Here is the summary NDA <S>:
<S>
NON-DISCLOSURE AGREEMENT (NDA)
1. Agreement between [Your Company Name] and [Recipient Name] on [Date].
2. Information sharing for the purpose of [specific project or purpose].
. . . (Omitted, see Tables 37-38)
</S>
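Step 3 samples this improve prompt several times (ten in Tables 44-45) and keeps only the highest scoring refinement. A small sketch of that loop, assuming `llm(prompt)` returns one completion and `score(nda)` its final overall score (both callables are assumptions, not the GoT API):

```python
def refine_step(prompt: str, llm, score, samples: int = 10) -> tuple[float, str]:
    """Sample several refinements of the current merge and return the
    best one together with its score."""
    candidates = [llm(prompt) for _ in range(samples)]
    scored = [(score(c), c) for c in candidates]
    # In Table 44 below, for instance, Response (4/10) wins with 7.78.
    return max(scored)
```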
Table 44: Merging 4 example NDAs; we only present the highest scoring response fully in Table 46: Step 3 (Responses Part 1)
# Response (1/10):
<Merged>
NON-DISCLOSURE AGREEMENT (NDA)
1. Agreement between [Your Company Name] and [Recipient Name] on [Date].
2. Information sharing for the purpose of [specific project or purpose].
. . . (Omitted)
Score: 6.35
# Response (2/10):
<Merged>
NON-DISCLOSURE AGREEMENT (NDA)
1. Agreement between [Your Company Name] and [Recipient Name] on [Date].
2. Information sharing for the purpose of [specific project or purpose].
. . . (Omitted)
Score: 6.35
# Response (3/10):
<Merged>
NON-DISCLOSURE AGREEMENT (NDA)
1. Agreement between [Your Company Name] and [Recipient Name] on [Date], for the purpose of [specific project or purpose].
2. “Confidential Information” includes all potentially commercially valuable information, specifically software development tactics, processes, in-house research results, system designs, architectural schematics, proprietary computational algorithms, internal software blueprints, and intellectual property awaiting patents.
. . . (Omitted)
Score: 6.87
# Response (4/10):
<Merged>
NON-DISCLOSURE AGREEMENT (NDA)
1. Agreement between [Your Company Name] and [Recipient Name] on [Date] for the purpose of [specific project or purpose].
2. “Confidential Information” includes all potentially commercially valuable information, such as software development tactics, processes, in-house research results, system designs, architectural schematics, proprietary computational algorithms, internal software blueprints, and intellectual property awaiting patents.
. . . (Omitted, see Table 46)
Score: 7.78
# Response (5/10):
<Merged>
NON-DISCLOSURE AGREEMENT (NDA)
1. Agreement between [Your Company Name] and [Recipient Name] on [Date].
2. Information sharing for the purpose of [specific project or purpose].
. . . (Omitted)
Score: 6.43
Table 45: Merging 4 example NDAs; we only present the highest scoring response fully in Table 46: Step 3 (Responses Part 2)
# Response (6/10):
<Merged>
NON-DISCLOSURE AGREEMENT (NDA)
1. Agreement between [Your Company Name] and [Recipient Name] on [Date].
2. Information sharing for the purpose of [specific project or purpose].
. . . (Omitted)
Score: 6.35
# Response (7/10):
<Merged>
NON-DISCLOSURE AGREEMENT (NDA)
1. Agreement between [Your Company Name] and [Recipient Name] on [Date] for the purpose of [specific project or purpose].
2. “Confidential Information” includes all potentially commercially valuable information, specifically software development tactics, processes, in-house research results, system designs, architectural schematics, proprietary computational algorithms, internal software blueprints, and intellectual property awaiting patents.
. . . (Omitted)
Score: 6.70
# Response (8/10):
<Merged>
NON-DISCLOSURE AGREEMENT (NDA)
1. Agreement between [Your Company Name] and [Recipient Name] on [Date] for the purpose of [specific project or purpose].
2. “Confidential Information” includes potentially commercially valuable information, such as software development tactics, processes, in-house research results, non-public reports, data, designs, and other materials provided by the Disclosing Party.
. . . (Omitted)
Score: 5.33
# Response (9/10):
<Merged>
NON-DISCLOSURE AGREEMENT (NDA)
1. Agreement between [Your Company Name] and [Recipient Name] on [Date].
2. Information sharing for the purpose of [specific project or purpose], including [Topic of Research].
. . . (Omitted)
Score: 7.30
# Response (10/10):
<Merged>
NON-DISCLOSURE AGREEMENT (NDA)
Agreement between [Your Company Name] and [Recipient Name] on [Date].
Information sharing for the purpose of [specific project or purpose].
. . . (Omitted)
Score: 6.40
Table 46: Merging 4 example NDAs: Step 3 (Highest Scoring Response)
NON-DISCLOSURE AGREEMENT (NDA)
1. Agreement between [Your Company Name] and [Recipient Name] on [Date] for the purpose of [specific project or purpose].
2. “Confidential Information” includes all potentially commercially valuable information, such as software development tactics, processes, in-house research results, system designs, architectural schematics, proprietary computational algorithms, internal software blueprints, and intellectual property awaiting patents.
3. Receiving party is obligated to protect the Confidential Information, use it solely for the disclosed purpose, and not disclose it without consent.
4. Breach penalties include injunctive relief, other remedies, and a fine per breach (e.g., $200,000 or $280,000 for each infringement, depending on the agreement).
5. The Agreement applies to the Parties and their successors and assigns. It contains all related agreements and lack of enforcement doesn’t imply waiver.
6. The Agreement is under the laws of [State] and is valid for a specified period (e.g., two years) unless terminated earlier with written notice (e.g., 30 days).
7. The Receiving Party agrees not to engage in any competing business against the Disclosing Party during the agreement and a specified period after its termination (e.g., one year).
8. The Agreement can only be amended if written and signed by both parties, and overrules previous related agreements.
9. The Agreement is legally enforceable and binding for signatories and successors.
10. The Agreement is governed by the laws of [State].
Signed as of the Effective Date by [Your Company Name] - Disclosing Party, [Recipient Name] - Receiving Party, along with the other parties involved.
Table 47: Merging 4 example NDAs: Step 3 (Scoring Example – Prompt)
Please score the merged NDA <S> in terms of how much redundant information is contained, independent of the original NDAs, as well as how much information is retained from the original NDAs.
A score of 10 for redundancy implies that absolutely no information is redundant, while a score of 0 implies that at least half of the information is redundant (so everything is at least mentioned twice).
A score of 10 for retained information implies that all information from the original NDAs is retained, while a score of 0 implies that no information is retained.
You may provide reasoning for your scoring, but the final score for redundancy should be between the tags <Redundancy> and </Redundancy>, and the final score for retained information should be between the tags <Retained> and </Retained>, without any additional text within any of those tags.
Here are NDAs <Doc1> - <Doc4>:
<Doc1>
NON-DISCLOSURE AGREEMENT (NDA)
1. Agreement between [Your Company Name] and [Recipient Name] on [Date]. . . . (Omitted, see Table 31)
</Doc1>
<Doc2>
NON-DISCLOSURE AGREEMENT (NDA)
Effective from [Effective Date], this NDA involves [Your Company Name] (“Disclosing Party”), and [Recipient Name] (“Receiving Party”).
. . . (Omitted, see Table 31)
</Doc2>
<Doc3>
CONFIDENTIALITY & NON-DISCLOSURE AGREEMENT
Entities Involved:
Effective [Date], between [AquaBlue Innovations], established in [State], and . . . (Omitted, see Table 32)
</Doc3>
<Doc4>
SECRECY & DISCLOSURE AGREEMENT
Contracting Parties:
Dated [Date], drawn between [AquaBlue Innovations], a [State]-based corporation, and . . . (Omitted, see Table 33)
</Doc4>
Here is the summary NDA <S>:
<S>
NON-DISCLOSURE AGREEMENT (NDA)
1. Agreement between [Your Company Name] and [Recipient Name] on [Date] for the purpose of [specific project or purpose].
2. “Confidential Information” includes all potentially commercially valuable information, such as software development tactics, processes, in-house research results, system designs, architectural schematics, proprietary computational algorithms, internal software blueprints, and intellectual property awaiting patents.
. . . (Omitted, see Table 46)
</S>
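The numeric scores are then read back out of each tagged scoring response. A minimal regex-based extraction sketch, assuming each tag occurs exactly once as the prompt requests:

```python
import re

def parse_scores(response: str) -> tuple[int, int]:
    """Extract the <Redundancy> and <Retained> scores from a scoring
    response such as those in Table 48."""
    redundancy = int(re.search(r"<Redundancy>(\d+)</Redundancy>", response).group(1))
    retained = int(re.search(r"<Retained>(\d+)</Retained>", response).group(1))
    return redundancy, retained

# Response (1/3) in Table 48, for instance, parses to (5, 8).
```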
Table 48: Merging 4 example NDAs: Step 3 (Scoring Example – Responses)
# Response (1/3):
<Redundancy>5</Redundancy>
<Retained>8</Retained>
Reasoning:
The merged NDA <S> retains most of the information from the original NDAs, combining relevant clauses and terms into a single comprehensive agreement. However, some redundancy is present as similar information is mentioned multiple times, such as the definition of “Confidential Information” and the obligations of the receiving party. Additionally, some specific details from the original NDAs are not included in the merged NDA, such as the exclusion criteria in Doc2 and the provisions for non-competition in Doc3 and Doc4. Nonetheless, the merged NDA captures the essence of the original NDAs and includes all key information necessary for a non-disclosure agreement.
# Response (2/3):
<Redundancy>8</Redundancy>
<Retained>10</Retained>
Reasoning:
The merged NDA <S> retains all the key information from the original NDAs, including the purpose of the agreement, the definition of confidential information, the obligations of the receiving party, breach penalties, applicability to parties and successors, governing law, and signature requirements.
There is some redundant information, such as the inclusion of system designs, architectural schematics, proprietary computational algorithms, internal software blueprints, and intellectual property awaiting patents from both Doc3 and Doc4. However, this redundancy is minor and does not significantly impact the overall usefulness of the NDA.
Overall, the merged NDA <S> retains all important information while minimizing redundancy, making it highly effective in conveying the necessary terms and obligations of the agreement. Therefore, it receives a score of 8 for redundancy and a score of 10 for retained information.
# Response (3/3):
<Redundancy>7</Redundancy>
<Retained>10</Retained>
Reasoning:
The merged NDA <S> contains some redundant information, such as the definition of “Confidential Information” and the obligations of the receiving party. However, it retains all the important information from the original NDAs, including the purpose of the agreement, the specific types of confidential information, the breach penalties, the applicability to parties and successors, the governing law, and the provisions regarding non-competition, amendment, and termination. Additionally, it incorporates the additional information about the agreement’s validity period and the option for legal recourse in case of non-compliance. Therefore, the merged NDA <S> scores a 7 for redundancy and a 10 for retained information.
# Final Overall Score (Harmonic Mean of Averages):
7.78
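As a quick check, the three responses above score (redundancy, retained) = (5, 8), (8, 10), and (7, 10); the harmonic mean of the two column averages reproduces the 7.78:

```python
from statistics import harmonic_mean

redundancy = [5, 8, 7]   # Responses (1/3), (2/3), (3/3)
retained = [8, 10, 10]
avg_red = sum(redundancy) / len(redundancy)   # 6.67
avg_ret = sum(retained) / len(retained)       # 9.33
print(round(harmonic_mean([avg_red, avg_ret]), 2))  # 7.78
```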
Table 49: Merging 4 example NDAs: Final Merged NDA
NON-DISCLOSURE AGREEMENT (NDA)
1. Agreement between [Your Company Name] and [Recipient Name] on [Date] for the purpose of [specific project or purpose].
2. "Confidential Information" includes all potentially commercially valuable information, such as software development tactics, processes, in-house research results, system designs, architectural schematics, proprietary computational algorithms, internal software blueprints, and intellectual property awaiting patents.
3. Receiving party is obligated to protect the Confidential Information, use it solely for the disclosed purpose, and not disclose it without consent.
4. Breach penalties include injunctive relief, other remedies, and a fine per breach (e.g., $200,000 or $280,000 for each infringement, depending on the agreement).
5. The Agreement applies to the Parties and their successors and assigns. It contains all related agreements and lack of enforcement doesn't imply waiver.
6. The Agreement is under the laws of [State] and is valid for a specified period (e.g., two years) unless terminated earlier with written notice (e.g., 30 days).
7. The Receiving Party agrees not to engage in any competing business against the Disclosing Party during the agreement and a specified period after its termination (e.g., one year).
8. The Agreement can only be amended if written and signed by both parties, and overrules previous related agreements.
9. The Agreement is legally enforceable and binding for signatories and successors.
10. The Agreement is governed by the laws of [State].
Signed as of the Effective Date by [Your Company Name] - Disclosing Party [Recipient Name] - Receiving Party, along with the other parties involved.
# F Evaluation - GoT Configurations
We detail the concrete operations that GoT was configured with to solve the set intersection and sorting use cases.
Listing 1: GoT configuration for the set intersection use case with 32 elements
Generate(k=1)      # Split second set into two halves of 16 elements
foreach subset:
    Generate(k=5)  # Determine intersected subset of subset and first input set
    Score(k=1)     # Score locally the intersected subsets
    KeepBestN(1)   # Keep the best intersected subset
Aggregate(10)      # Merge both intersected subsets
Score(k=1)         # Score locally the intersected result sets
KeepBestN(1)       # Keep the best result
GroundTruth()      # Compare to precomputed result
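The listings here are pseudocode descriptions of operation graphs. As a rough illustration of how such a configuration could be assembled programmatically, the following is a minimal sketch in Python; the Operation class and the way operation names are passed as constructor arguments are hypothetical stand-ins for this sketch, not the GoT framework's actual API.

from dataclasses import dataclass, field

@dataclass
class Operation:
    # Hypothetical stand-in for one GoT thought transformation.
    name: str
    params: dict = field(default_factory=dict)
    comment: str = ""

def listing1_pipeline() -> list:
    """Sketch of Listing 1: set intersection with 32 elements."""
    ops = [Operation("Generate", {"k": 1}, "Split second set into two halves of 16 elements")]
    for subset in ("half 1", "half 2"):
        ops += [
            Operation("Generate", {"k": 5}, f"Intersect {subset} with the first input set"),
            Operation("Score", {"k": 1}, "Score locally the intersected subsets"),
            Operation("KeepBestN", {"n": 1}, "Keep the best intersected subset"),
        ]
    ops += [
        Operation("Aggregate", {"attempts": 10}, "Merge both intersected subsets"),
        Operation("Score", {"k": 1}, "Score locally the intersected result sets"),
        Operation("KeepBestN", {"n": 1}, "Keep the best result"),
        Operation("GroundTruth", {}, "Compare to precomputed result"),
    ]
    return ops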
Listing 2: GoT configuration for the set intersection use case with 64 elements
Generate(k=1)       # Split second set into four parts of 16 elements
foreach subset:
    Generate(k=5)   # Determine intersected subset of subset and first input set
    Score(k=1)      # Score locally the intersected subsets
    KeepBestN(1)    # Keep the best intersected subset
merge step 1:
    Aggregate(10)   # Merge intersected subsets 1 and 2
    Score(k=1)      # Score locally the intersected result sets
    KeepBestN(1)    # Keep the best result
merge step 2:
    Aggregate(10)   # Merge intersected subsets 3 and 4
    Score(k=1)      # Score locally the intersected result sets
    KeepBestN(1)    # Keep the best result
final merge:
    Aggregate(10)   # Merge intermediate intersected subsets from merge step 1 and 2
    Score(k=1)      # Score locally the intersected result sets
    KeepBestN(1)    # Keep the best result
GroundTruth()       # Compare to precomputed result
Listing 3: GoT configuration for the set intersection use case with 128 elements
Generate(k=1)       # Split second set into eight parts of 16 elements
foreach subset:
    Generate(k=5)   # Determine intersected subset of subset and first input set
    Score(k=1)      # Score locally the intersected subsets
    KeepBestN(1)    # Keep the best intersected subset
merge step 1:
    Aggregate(5)    # Merge intersected subsets 1 and 2
    Score(k=1)      # Score locally the intersected result sets
    KeepBestN(1)    # Keep the best result
merge step 2:
    Aggregate(5)    # Merge intersected subsets 3 and 4
    Score(k=1)      # Score locally the intersected result sets
    KeepBestN(1)    # Keep the best result
merge step 3:
    Aggregate(5)    # Merge intersected subsets 5 and 6
    Score(k=1)      # Score locally the intersected result sets
    KeepBestN(1)    # Keep the best result
merge step 4:
    Aggregate(5)    # Merge intersected subsets 7 and 8
    Score(k=1)      # Score locally the intersected result sets
Listing 4: GoT configuration for the set intersection use case with 128 elements (cont.)
    KeepBestN(1)    # Keep the best result
merge step 5:
    Aggregate(5)    # Merge intermediate intersected subsets from merge step 1 and 2
    Score(k=1)      # Score locally the intersected result sets
    KeepBestN(1)    # Keep the best result
merge step 6:
    Aggregate(5)    # Merge intermediate intersected subsets from merge step 3 and 4
    Score(k=1)      # Score locally the intersected result sets
    KeepBestN(1)    # Keep the best result
final merge:
    Aggregate(5)    # Merge intermediate intersected subsets from merge step 5 and 6
    Score(k=1)      # Score locally the intersected result sets
    KeepBestN(1)    # Keep the best result
GroundTruth()       # Compare to precomputed result
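Listings 2-4 instantiate one pattern: split the second set into 2^k parts of 16 elements, intersect each part with the first set, then merge the partial results pairwise in a binary tree, with one "merge step" per pairwise Aggregate/Score/KeepBestN. The sketch below, with hypothetical part labels, enumerates that merge schedule; for eight parts it yields the seven merges of Listings 3 and 4 (merge steps 1-6 plus the final merge).

def merge_schedule(parts):
    """Pairwise binary-tree merge order behind Listings 2-4.

    Assumes a power-of-two number of parts; returns (left, right, result)
    triples, one per merge step in the listings.
    """
    steps, level, round_no = [], list(parts), 0
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            result = "merge_{}_{}".format(round_no, i // 2)
            steps.append((level[i], level[i + 1], result))
            nxt.append(result)
        level, round_no = nxt, round_no + 1
    return steps

# Eight parts -> 4 + 2 + 1 = 7 merges, matching merge steps 1-6 and the final merge.
print(merge_schedule(["part %d" % i for i in range(1, 9)]))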
Listing 5: GoT configuration for the sorting use case with 32 elements
Generate(k=1)       # Split list into two halves of 16 elements
foreach list part:
    Generate(k=5)   # Sort list part
    Score(k=1)      # Score partially sorted list
    KeepBestN(1)    # Keep the best partially sorted list
Aggregate(10)       # Merge both partially sorted lists
Score(k=1)          # Score locally the sorted result lists
KeepBestN(1)        # Keep the best result
Generate(k=10)      # Try to improve solution
Score(k=1)          # Score locally the sorted result lists
KeepBestN(1)        # Keep the best result
GroundTruth()       # Compare to precomputed result
Listing 6: GoT configuration for the sorting use case with 64 elements
Generate(k=1)       # Split list into four parts of 16 elements
foreach list part:
    Generate(k=5)   # Sort list part
    Score(k=1)      # Score partially sorted list
    KeepBestN(1)    # Keep the best partially sorted list
merge step 1:
    Aggregate(10)   # Merge partially sorted lists 1 and 2
    Score(k=1)      # Score locally the partially sorted result lists
    KeepBestN(1)    # Keep the best result
    Generate(k=5)   # Try to improve the partial solution
    Score(k=1)      # Score locally the partially sorted result lists
    KeepBestN(1)    # Keep the best result
merge step 2:
    Aggregate(10)   # Merge partially sorted lists 3 and 4
    Score(k=1)      # Score locally the partially sorted result lists
    KeepBestN(1)    # Keep the best result
    Generate(k=5)   # Try to improve the partial solution
    Score(k=1)      # Score locally the partially sorted result lists
    KeepBestN(1)    # Keep the best result
final merge:
    Aggregate(10)   # Merge partially sorted lists from merge step 1 and 2
    Score(k=1)      # Score locally the sorted result lists
    KeepBestN(1)    # Keep the best result
    Generate(k=10)  # Try to improve solution
    Score(k=1)      # Score locally the sorted result lists
    KeepBestN(1)    # Keep the best result
GroundTruth()       # Compare to precomputed result
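Every Score operation above rates candidate lists locally. Purely as an illustration of what such a scorer could look like (the paper defines its own error metric elsewhere; this sketch is an assumption, not that definition), one can count sorting errors as adjacent inversions plus the multiset mismatch against the input list:

from collections import Counter

def sorting_errors(candidate, original):
    """Illustrative error count for a candidate sorted list (lower is better).

    Adjacent out-of-order pairs, plus elements dropped from or added to the
    original multiset. A hedged stand-in, not the paper's exact metric.
    """
    inversions = sum(1 for a, b in zip(candidate, candidate[1:]) if a > b)
    missing = Counter(original) - Counter(candidate)
    surplus = Counter(candidate) - Counter(original)
    return inversions + sum(missing.values()) + sum(surplus.values())

# One inversion (3, 2) and one dropped element (7) -> error score 2.
assert sorting_errors([1, 3, 2, 5], [1, 2, 3, 5, 7]) == 2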
Listing 7: GoT configuration for the sorting use case with 128 elements
Generate(k=1)       # Split list into eight parts of 16 elements
foreach list part:
    Generate(k=5)   # Sort list part
    Score(k=1)      # Score partially sorted list
    KeepBestN(1)    # Keep the best partially sorted list
merge step 1:
    Aggregate(10)   # Merge partially sorted lists 1 and 2
    Score(k=1)      # Score locally the partially sorted result lists
    KeepBestN(1)    # Keep the best result
    Generate(k=5)   # Try to improve the partial solution
    Score(k=1)      # Score locally the partially sorted result lists
    KeepBestN(1)    # Keep the best result
merge step 2:
    Aggregate(10)   # Merge partially sorted lists 3 and 4
    Score(k=1)      # Score locally the partially sorted result lists
    KeepBestN(1)    # Keep the best result
    Generate(k=5)   # Try to improve the partial solution
    Score(k=1)      # Score locally the partially sorted result lists
    KeepBestN(1)    # Keep the best result
merge step 3:
    Aggregate(10)   # Merge partially sorted lists 5 and 6
    Score(k=1)      # Score locally the partially sorted result lists
    KeepBestN(1)    # Keep the best result
    Generate(k=5)   # Try to improve the partial solution
    Score(k=1)      # Score locally the partially sorted result lists
    KeepBestN(1)    # Keep the best result
merge step 4:
    Aggregate(10)   # Merge partially sorted lists 7 and 8
    Score(k=1)      # Score locally the partially sorted result lists
    KeepBestN(1)    # Keep the best result
    Generate(k=5)   # Try to improve the partial solution
    Score(k=1)      # Score locally the partially sorted result lists
    KeepBestN(1)    # Keep the best result
merge step 5:
    Aggregate(10)   # Merge partially sorted lists from merge step 1 and 2
    Score(k=1)      # Score locally the partially sorted result lists
    KeepBestN(1)    # Keep the best result
    Generate(k=5)   # Try to improve the partial solution
    Score(k=1)      # Score locally the partially sorted result lists
    KeepBestN(1)    # Keep the best result
merge step 6:
    Aggregate(10)   # Merge partially sorted lists from merge step 3 and 4
    Score(k=1)      # Score locally the partially sorted result lists
    KeepBestN(1)    # Keep the best result
    Generate(k=5)   # Try to improve the partial solution
    Score(k=1)      # Score locally the partially sorted result lists
    KeepBestN(1)    # Keep the best result
final merge:
    Aggregate(10)   # Merge partially sorted lists from merge step 5 and 6
CMB: A Comprehensive Medical Benchmark in Chinese
Xidong Wang, Guiming Hardy Chen, Dingjie Song, Zhiyi Zhang, Zhihong Chen, Qingying Xiao, Feng Jiang, Jianquan Li, Xiang Wan, Benyou Wang, Haizhou Li
The Chinese University of Hong Kong, Shenzhen
Shenzhen Research Institute of Big Data
[email protected]
# Abstract
Large Language Models (LLMs) provide a possibility to make a great breakthrough in medicine. The establishment of a standardized medical benchmark becomes a fundamental cornerstone to measure progression. However, medical environments in different regions have their local characteristics, e.g., the ubiquity and significance of traditional Chinese medicine within China. Therefore, merely translating English-based medical evaluation may result in contextual incongruities to a local region. To solve the issue, we propose a localized medical benchmark called CMB, a Comprehensive Medical Benchmark in Chinese, designed and rooted entirely within the native Chinese linguistic and cultural framework. While traditional Chinese medicine is integral to this evaluation, it does not constitute its entirety. Using this benchmark, we have evaluated several prominent large-scale LLMs, including ChatGPT, GPT-4, dedicated Chinese LLMs, and LLMs specialized in the medical domain. It is worth noting that our benchmark is not devised as a leaderboard competition but as an instrument for self-assessment of model advancements. We hope this benchmark could facilitate the widespread adoption and enhancement of medical LLMs within China. Check details in https://cmedbenchmark.llmzoo.com/.
# Introduction
Over the past two centuries, medical advancements have substantially increased human life expectancy. Medicine's effectiveness often hinges on experience, with veteran physicians typically outperforming novices. In parallel, large language models like ChatGPT are shaped by their vast data experiences. This mutual reliance on experiential learning between physicians and LLMs suggests a promising frontier for the integration of LLMs into the medical domain.
Medical evaluation is highly professional. Although the future of LLMs for medicine is promising, their evaluation is a challenging topic. Deploying LLMs in hospitals raises significant ethical concerns, so real-world feedback is difficult to obtain. Existing works on LLMs tend to leverage subjective evaluation (Zheng et al., 2023), where no references are used during the assessment. However, evaluation in medicine is much more professional than in the general domain. For instance, assessing radiology-related issues poses a challenge for the public, a senior professor in medicine, or even a general practitioner. Subjective evaluation would be difficult to scale up since professional manual judging is expensive.
* The first four authors contributed to this work equally.
† Benyou Wang is the corresponding author.
Preprint. Under review.
Figure 1: Components of the CMB dataset. Left: The structure of CMB-Exam, consisting of multiple-choice and multiple-answer questions. Right: an example of CMB-Clin. Each example consists of a description and a multi-turn conversation.
Benchmark for medical knowledge. Another school of evaluation protocol is objective evaluation, where the expected output has a clear reference. Certain protocols emphasize natural language understanding tasks that are not knowledge-intensive, as seen in studies (Zhang et al., 2022; Peng et al., 2019). In the era of Large Language Models (LLMs), modern NLP evaluations underscore the significance of knowledge (Huang et al., 2023; Hendrycks et al., 2021b). In biomedicine, a typical example to probe knowledge is BioLAMA (Sung et al., 2021); however, it is tailored to evaluate masked language models instead of auto-regressive ones. Another benchmark is MultiMedBench (Tu et al., 2023), covering question answering, report summarization, visual question answering, report generation, and medical image classification. Note that MultiMedBench is only in English.
The necessity to localize medical benchmark. During economic globalization, a unified medical standard may overlook the unique medical needs and practices of different regions and ethnic groups, indicating the necessity to localize medical benchmarks. For example, in Asia, Traditional Chinese Medicine (TCM) not only offers profound insights and localized medical solutions in the prevention, treatment, and rehabilitation of diseases but also has formed a medical paradigm closely associated with regional, climatic, dietary, and lifestyle characteristics, over its long historical evolution. Simultaneously, it poses significant challenges when applying the Western medical framework to a local environment, which needs cross-cultural communication and understanding. Therefore, we should adopt a native medical benchmark instead of a translated medical benchmark for a local environment. Note that the precise translation of medical terminologies necessitates both medical professions and the cultural context in the target language.
The philosophy to create CMB. The CMB dataset as a whole includes multiple-choice questions in qualification examination (CMB-Exam) and complex clinical diagnostic questions based on actual case studies (CMB-Clin). Each multiple-choice question offers four to six options, and there is one or more correct answers. Clinical diagnostic questions are set based on actual and complex cases encountered in the teaching process, and the correct answer is determined by the consensus of teaching experts.
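For concreteness, a CMB-Exam item can be pictured as a question with four to six labeled options and one or more correct answers. The record below is a hypothetical illustration of this structure; the field names and placeholder values are invented for this sketch and are not the dataset's actual schema.

# Hypothetical CMB-Exam-style record; fields and values are illustrative only.
exam_item = {
    "exam_class": "Physician",                      # physician / nurse / technician / pharmacist
    "exam_type": "Professional Qualification Exam",
    "question_type": "multiple_choice",             # or "multiple_answer"
    "question": "...",                              # question stem (in Chinese in the dataset)
    "options": {"A": "...", "B": "...", "C": "...", "D": "..."},
    "answer": ["B"],                                # one or more correct option labels
}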
The sources of existing medical benchmarks could be the internet (Li et al., 2023), hospitals, etc. However, these data sources have either privacy or inaccuracy issues. First, we decide to leverage qualification examinations as the data source, resulting in the CMB-Exam subset. The merits of qualification examinations are twofold: (I) the ground truth of a qualification examination is objective and typically accurate; (II) there is a clear anchor (i.e., 60% accuracy) that is aligned with a qualified expert in a specific domain. As shown in Figure 1, the multiple-choice questions cover four clinical medical professions: physicians, nurses, medical technicians, and pharmacists. The involved exams cover the whole professional career path, ranging from undergraduate medical basic knowledge exams, graduate selection exams, standardized exams, professional qualification exams, intermediate professional title exams, to advanced professional title exams.
Other than the exams in CMB-Exam, which relate to theoretical knowledge, the second subset of CMB (i.e., CMB-Clin) is more practical. CMB-Clin includes complex clinical diagnostic problems that evaluate the model's ability to synthesize knowledge and reasoning. On the one hand, the knowledge aspect implies the need for the model to draw upon its medical knowledge when answering questions. On the other hand, the reasoning facet necessitates the model's ability to analyze case reports, thus combining its own medical knowledge to respond to inquiries. We believe CMB-Exam and CMB-Clin are complementary in medicine, and both as a whole could form a complete evaluation protocol covering not only the career of a medical doctor but also the learning path of a medical LLM.
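Analogously, a CMB-Clin item pairs a case description with a multi-turn conversation, as sketched in Figure 1. Again purely as a hypothetical illustration (invented field names, not the actual schema):

# Hypothetical CMB-Clin-style record; structure is illustrative only.
clin_item = {
    "title": "...",                       # case title
    "description": "...",                 # patient presentation, history, examination findings
    "QA_pairs": [
        {"question": "What is the most likely diagnosis?",
         "answer": "..."},                # reference answer agreed upon by teaching experts
        {"question": "Briefly describe the treatment principles for this patient.",
         "answer": "..."},
    ],
}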
Take-away messages from CMB. After benchmarking various LLMs in CMB, we get the following observations that might be insightful:
I) GPT-4 exhibits significant superiority in the medical domain, with indigenous large-scale models also demonstrating commendable performance;
II) most specialized medical models still lag behind general models in performance, indicating ample room for improvement in the medical modeling field;
III) accuracy exhibits significant disparities across professional levels and knowledge areas, notably between traditional Chinese medicine and Western medicine;
IV) the effectiveness of the CoT and few-shot prompts varies among models with different accuracy levels, especially presenting potential risks in knowledge-intensive tasks;
V) results of automatic evaluation using GPT-4 highly agree with expert evaluation results.
# 2 Related work
# 2.1 Medical Benchmark
Medical benchmarks have evolved to broadly encompass two types of tasks based on the capabilities of the models they seek to probe: objective tasks and subjective tasks. The former typically assumes the form of multiple-choice questions (Welbl et al., 2018; Jin et al., 2020; Pal et al., 2022; Hendrycks et al., 2021b; Singhal et al., 2022; Li et al., 2021; Abacha and Demner-Fushman, 2019), information retrieval (Abacha et al., 2017; Zhu et al., 2019; Abacha et al., 2019), and cloze-style reading comprehension (Suster and Daelemans, 2018; Pampari et al., 2018; Zhu et al., 2020), which serve to evaluate a model's medical knowledge with unbiased accuracy. Sources for these tasks range from medical textbooks and exams to case reports such as CliCR (Suster and Daelemans, 2018), Wikipedia like MedHop (Welbl et al., 2018), and medical practices exemplified by MMLU (Hendrycks et al., 2021b) and MedMCQA (Pal et al., 2022).
2308.08833 | 9 | al., 2021b) and MedMCQA (Pal et al., 2022). In contrast, subjective tasks involve open-ended text generation constructed directly from consumer queries and doctor responses, often sourced from online medical forums. The task typically demands models to generate consumer-oriented replies (Singhal et al., 2022; Li et al., 2023) or explanations for multiple-choice questions (Liu et al., 2023). As of now, there are relatively few open-ended text generation question-answering tasks that specifically center around providing consultation based on diagnostic reports. | 2308.08833#9 | CMB: A Comprehensive Medical Benchmark in Chinese | Large Language Models (LLMs) provide a possibility to make a great
breakthrough in medicine. The establishment of a standardized medical benchmark
becomes a fundamental cornerstone to measure progression. However, medical
environments in different regions have their local characteristics, e.g., the
ubiquity and significance of traditional Chinese medicine within China.
Therefore, merely translating English-based medical evaluation may result in
\textit{contextual incongruities} to a local region. To solve the issue, we
propose a localized medical benchmark called CMB, a Comprehensive Medical
Benchmark in Chinese, designed and rooted entirely within the native Chinese
linguistic and cultural framework. While traditional Chinese medicine is
integral to this evaluation, it does not constitute its entirety. Using this
benchmark, we have evaluated several prominent large-scale LLMs, including
ChatGPT, GPT-4, dedicated Chinese LLMs, and LLMs specialized in the medical
domain. It is worth noting that our benchmark is not devised as a leaderboard
competition but as an instrument for self-assessment of model advancements. We
hope this benchmark could facilitate the widespread adoption and enhancement of
medical LLMs within China. Check details in
\url{https://cmedbenchmark.llmzoo.com/}. | http://arxiv.org/pdf/2308.08833 | Xidong Wang, Guiming Hardy Chen, Dingjie Song, Zhiyi Zhang, Zhihong Chen, Qingying Xiao, Feng Jiang, Jianquan Li, Xiang Wan, Benyou Wang, Haizhou Li | cs.CL, cs.AI | null | null | cs.CL | 20230817 | 20230817 | [
{
"id": "2306.05685"
},
{
"id": "2302.13971"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.14334"
},
{
"id": "1910.13461"
},
{
"id": "2306.05179"
},
{
"id": "2106.09685"
},
{
"id": "2304.01097"
},
{
"id": "2305.08322"
},
{
"id": "2305.12474"
},
{
"id": "2305.15075"
},
{
"id": "2109.07154"
},
{
"id": "2305.01526"
}
] |
Few existing benchmark datasets encapsulate both task types, with MultiMedQA (Singhal et al., 2022) and CMExam (Liu et al., 2023) sharing the closest resemblance to our work. Differing from prior work, our dataset exceeds them in size and includes questions not only from the Chinese National Medical Licensing Examination but also from various authoritative medical textbooks. Moreover, our subjective tasks deviate from the existing works, stemming from textbook examples that require answering diagnosis-related questions based on case reports, resembling real-life consultation scenarios.
# 2.2 Other Benchmarks of Large Language Models
The explosive growth in the number and capability of LLMs has led to a multitude of works aiming to discern their true capacity, evaluating both their general and specific abilities. General ability benchmarks include comprehensive test suites, each targeting different aspects of LLM proficiency, ranging from handling multi-turn dialogues (Zheng et al., 2023) to gauging language comprehension and reasoning abilities (Srivastava et al., 2022; Zhang et al., 2023b; Zhong et al., 2023). OpenLLM (Beeching et al., 2023) provides a public competition platform to compare and assess the performance of various LLM models across multiple tasks.
In terms of specific abilities, several benchmarks, apart from those related to medicine, aim to evaluate different capabilities of models. ARB (Sawada et al., 2023) was introduced to assess LLMs' performance in high-level reasoning tasks across multiple domains. C-Eval (Huang et al., 2023) serves as the first comprehensive benchmark to evaluate the advanced knowledge and reasoning abilities of Chinese-based models. M3Exam (Zhang et al., 2023b) provides a unique and comprehensive evaluation framework, combining various languages, modalities, and levels, to test the general abilities of LLMs in different contexts. Gaokao (Zhang et al., 2023c), MATH (Hendrycks et al., 2021c), and APPS (Hendrycks et al., 2021a) focus on assessing LLM proficiency in complex, context-specific tasks, mathematical problem solving, and code generation, respectively.
# 3 Dataset
# 3.1 CMB-Exam: Comprehensive Medical Exams
| Category | Subcategory | # Subject | # Questions |
| --- | --- | --- | --- |
| Physician (医师) | Resident Physician (住院医师); Licensed Assistant Physician (执业助理医师); Licensed Physician (执业医师); Associate Professional Physician (中级职称); Advanced Professional Physician (高级职称) | 81 | 124,926 |
| Nurse (护理) | Practicing Nurse (护士); Licensed Practical Nurse (护师); Charge Nurse (主管护师); Advanced Practice Nurse (高级护师) | 8 | 16,919 |
| Technician (医技) | Medical Technician (医技士); Medical Technologist (医技师); Supervising Technologist (主管技师) | 21 | 27,004 |
| Pharmacist (药师) | Licensed Pharmacist (执业西药师); Licensed TCM Pharmacist (执业中药师); Junior Pharmacist (初级药师); Junior Pharmacist Assistant (初级药士); Junior TCM Pharmacist (初级中药师); Junior TCM Pharmacist Assistant (初级中药士); Chief Pharmacist (主管药师); Chief TCM Pharmacist (主管中药师) | 8 | 33,354 |
| Undergraduate Disciplines (学科考试)¹ | Fundamental Medicine (基础医学); Clinical Medicine (临床医学); Traditional Chinese Medicine (TCM) and Chinese Herbal Medicine (中医学与中药学); Preventive Medicine and Public Health (预防医学与公共卫生学) | 53 | 62,271 |
| Graduate Entrance Exam (考研) | Integrated Western Medicine (西医综合); Integrated TCM (中医综合); Political Science (政治); ... | 3 | 16,365 |
| Total | | 174 | 280,839 |
¹ We referenced the National Standard Subject Classification of the People's Republic of China; see https://xkb.pku.edu.cn/docs/2018-10/20220328083301969071.pdf.
Table 1: Statistics of the CMB-Exam Categories, Subcategories, Subjects, and Questions.
# 3.1.1 Taxonomy
To obtain a precise taxonomy of medical evaluation, we aligned it with the disciplinary and examination systems of the medical field. First, we chose four main medical professions: physicians, pharmacists, medical technicians, and nurses, covering various occupational difficulty levels of examinations. Considering the learning trajectories and professional growth paths, we additionally include discipline examinations and graduate entrance examinations for these four professions, ultimately resulting in six categories: Physician, Nurse, Technician, Pharmacist, Undergraduate Disciplines, and Graduate Entrance Exam. One could refer to Table 1 for the detailed taxonomy. Moreover, we carried out a more detailed subject division within each subcategory, resulting in a total of 174 subjects, the detailed directory list of which can be found in Appendix A. Through this structured arrangement, our directory structure reflects characteristics closely connected to the actual medical field, providing a solid foundation for further analysis and research.
# 3.1.2 Data Collecting and Processing
Data Sources The data used is derived from publicly available mock examination questions, coursework exercises, and summaries of commonly misunderstood examination questions. A significant portion of these materials comes from the Chinese Medical Question Database³, from which we obtained explicit permission to share the data.
Manual Verification The data has various formats, with PDF and JSON being the most prevalent. For PDF documents, we first used Optical Character Recognition (OCR) to transform them into plain text. This text was then processed into structured formats and underwent manual verification to ensure both OCR accuracy and proper formatting.
³ https://www.medtiku.com/
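For concreteness, the PDF-to-text step described above could be sketched as follows; the paper does not name its OCR tooling, so pdf2image and pytesseract (with Tesseract's simplified-Chinese model) are illustrative stand-ins.

```python
# Illustrative OCR step for PDF sources; the paper does not specify its
# tooling, so pdf2image + pytesseract (with Tesseract's "chi_sim" model)
# are stand-ins here.
from pdf2image import convert_from_path
import pytesseract

def pdf_to_text(path: str) -> str:
    """Render each PDF page to an image and OCR it into plain text."""
    pages = convert_from_path(path, dpi=300)
    return "\n".join(pytesseract.image_to_string(p, lang="chi_sim") for p in pages)
```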
Data Preprocessing All questions underwent a standardized data preprocessing procedure, including de-duplication and cleansing. In instances where we were unable to verify the question quality from the source, we conducted manual validation to ensure the absence of grammatical errors. Additionally, with the aid of the comment system provided by the Chinese Medical Question Database, we enacted a rigorous selection and deletion process for the data, ensuring the accuracy of the knowledge embedded in the questions.
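A minimal de-duplication pass over the parsed questions might look like the sketch below; the normalization and the field names ("question", "options") are assumptions for illustration, not the authors' exact procedure.

```python
# Minimal de-duplication sketch; field names and whitespace normalization
# are assumptions, not the paper's documented procedure.
import hashlib

def deduplicate(questions: list[dict]) -> list[dict]:
    seen, unique = set(), []
    for q in questions:
        text = q["question"] + "".join(q["options"].values())
        # Strip all whitespace so trivially different copies collide.
        key = hashlib.md5("".join(text.split()).encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(q)
    return unique
```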
| Split | # Subcategory | # Q per subcategory | # Q in total |
| --- | --- | --- | --- |
| Test | 28 | 400 | 11,200 |
| Dev | 28 | 10¹ | 280 |
| Train | 28 | -² | 269,359 |

¹ With explanations in the dev set. ² Each subcategory has a different number of questions.

Table 2: Data split in CMB-Exam.
Data Statistics Finally, we obtained a total of 280,839 multiple-choice questions. To assess the model's comprehension of medical knowledge, we randomly selected 400 questions from each subcategory as a test set. Additionally, to facilitate experimentation with few-shot learning strategies, we randomly selected 10 questions with explanations from each subcategory as a dev set. The remaining 269,359 questions were used as the train set.
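A minimal sketch of this per-subcategory split, assuming each question is a dict with an optional "explanation" field:

```python
# Sketch of the split described above: per subcategory, 400 questions for
# test, 10 with explanations for dev, and the remainder for train.
import random

def split_subcategory(items: list[dict], seed: int = 0):
    rng = random.Random(seed)
    rng.shuffle(items)
    test, rest = items[:400], items[400:]
    dev = [q for q in rest if q.get("explanation")][:10]
    dev_ids = set(map(id, dev))
    train = [q for q in rest if id(q) not in dev_ids]
    return test, dev, train
```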
# 3.2 CMB-Clin: Clinical Diagnostic Questions
The QA dataset is based on 74 classical complex and real-world cases originating from textbooks, offering an opportunity to investigate models' proficiency in knowledge application amidst real-life diagnosis and treatment circumstances. A model's competence is gauged not merely by its mastery of medical knowledge but also by its ability to synthesize and apply this knowledge to solve real-world problems.
# 3.2.1 Task Formulation
In our dataset, we simulate dialogue interactions between an examiner and a candidate, focusing on assessing the model's diagnostic and therapeutic capacities. The data comprises 74 real consultation scenarios (or ailments), each consisting of a case instance with multiple questions, culminating in 208 questions in total.
As shown in Figure 1, each case presents a patient description followed by interrelated, sequential questions. It includes three parts: I) Description D: patient information, including medical history summaries and chief complaints, physical examinations such as visual and tactile inspection, and ancillary examinations like biopsy and CT scans; II) Questions Q: questions related to diagnosis and treatment based on the description, some of which might be interrelated; and III) Solutions S: corresponding solutions to the questions.
For instance, in the k-th conversation round, the input x is formed by concatenating the patient's description with previous question-answer pairs and the current question, represented as $x = D_i + Q_i + S_i + \dots + Q_{i+k}$. The expected response is $S_{i+k}$.
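Rendered as code, the concatenation is straightforward; the sketch below assumes plain newline joining, which the paper does not specify.

```python
# A direct rendering of the concatenation above: description, then all
# earlier question-answer pairs, then the current question.
def build_input(description: str, history: list[tuple[str, str]], question: str) -> str:
    parts = [description]
    for q, s in history:      # (Q, S) pairs already answered
        parts.extend([q, s])
    parts.append(question)    # the current question Q_{i+k}
    return "\n".join(parts)
```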
# 4 Experiments on CMB-Exam
# 4.1 Experimental Setup
Models We evaluate the following Chinese medical LLMs to compare their performance on CMB-Exam: HuatuoGPT (Zhang et al., 2023a), BianQue (Chen et al., 2023), ChatMed-Consult (Zhu and Wang, 2023), MedicalGPT (Xu, 2023), ChatGLM-Med (Wang et al., 2023b), Bentsao (Wang et al., 2023a), and DoctorGLM (Xiong et al., 2023). In addition to these specialized models, we also include two proprietary models (i.e., ChatGPT (gpt-3.5-turbo-16k-0613) and GPT-4 (gpt-4-0613)) and two publicly available general-domain instruction-following models (i.e., ChatGLM-2⁴ (Du et al., 2022) and Baichuan-13B-Chat⁵). Please refer to Appendix B for more details.
| Model | Open | Physician | Nurse | Pharmacist | Technician | Disciplines | Graduate Entrance Exam | Avg |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| General Models | | | | | | | | |
| GPT-4 | ✗ | 59.90 (59.90) | 69.31 (69.31) | 52.19 (52.19) | 61.50 (61.50) | 59.69 (59.69) | 54.19 (54.19) | 59.46 (59.46) |
| ChatGLM2-6B | ✓ | 40.20 (40.22) | 48.50 (48.50) | 40.34 (40.38) | 38.67 (38.67) | 37.19 (37.25) | 33.37 (33.43) | 39.71 (39.74) |
| + CoT | | 40.25 (41.13) | 47.56 (48.37) | 36.06 (36.76) | 36.58 (37.17) | 35.56 (36.31) | 35.06 (35.68) | 38.51 (39.23) |
| ChatGPT | ✗ | 40.75 (40.75) | 45.69 (45.69) | 36.59 (36.59) | 40.08 (40.08) | 37.94 (37.94) | 28.81 (28.81) | 38.31 (38.31) |
| + CoT | | 17.75 (17.75) | 19.94 (19.94) | 16.00 (16.00) | 20.25 (20.25) | 19.25 (19.25) | 16.19 (16.19) | 18.23 (18.23) |
| Baichuan-13B-chat | ✓ | 34.80 (37.16) | 41.25 (42.11) | 35.41 (36.91) | 35.17 (36.20) | 31.81 (36.39) | 27.56 (29.03) | 34.33 (36.30) |
| + CoT | | 37.70 (39.92) | 44.75 (46.25) | 41.22 (42.20) | 34.67 (36.52) | 37.94 (39.87) | 32.94 (33.99) | 38.20 (39.79) |
| Medical Models | | | | | | | | |
| HuatuoGPT (华佗) | ✓ | 29.10 (29.58) | 33.56 (34.26) | 27.41 (28.75) | 30.58 (31.47) | 29.44 (30.13) | 25.06 (25.79) | 29.19 (30.00) |
| + CoT | | 29.90 (30.32) | 34.00 (34.17) | 29.06 (29.35) | 30.92 (31.08) | 27.38 (27.64) | 25.69 (26.05) | 29.49 (29.77) |
| MedicalGPT | ✓ | 26.40 (26.56) | 30.94 (30.94) | 24.72 (24.84) | 27.17 (27.32) | 25.44 (25.62) | 21.50 (21.64) | 26.03 (26.15) |
| + CoT | | 24.80 (25.61) | 27.19 (27.98) | 23.09 (24.07) | 24.58 (26.00) | 23.75 (24.77) | 21.06 (21.79) | 24.08 (25.04) |
| ChatMed-Consult | ✓ | 20.20 (21.41) | 22.31 (23.48) | 20.59 (21.58) | 22.67 (23.55) | 20.38 (21.36) | 17.44 (18.08) | 20.60 (21.58) |
| + CoT | | 19.40 (20.92) | 21.69 (23.56) | 20.00 (21.65) | 22.83 (23.59) | 18.88 (20.44) | 18.56 (19.55) | 20.23 (21.62) |
| ChatGLM-Med | ✓ | 21.75 (23.59) | 22.06 (23.37) | 21.84 (22.67) | 21.00 (21.85) | 18.44 (19.72) | 17.50 (18.14) | 20.43 (21.56) |
| + CoT | | 15.55 (20.89) | 16.25 (22.13) | 17.34 (21.06) | 16.33 (20.65) | 12.63 (17.12) | 12.56 (16.88) | 15.11 (19.79) |
| Bentsao (本草) | ✓ | 21.55 (21.67) | 19.94 (19.99) | 20.94 (21.07) | 22.75 (22.85) | 19.56 (19.83) | 16.81 (16.93) | 20.26 (20.39) |
| + CoT | | 21.00 (21.10) | 20.56 (20.61) | 20.66 (20.78) | 22.17 (22.24) | 19.25 (19.53) | 16.44 (16.54) | 20.01 (20.13) |
| BianQue-2 (扁鹊) | ✓ | 4.90 (19.05) | 4.19 (19.04) | 4.28 (20.36) | 3.58 (18.11) | 3.31 (16.27) | 3.25 (18.63) | 3.92 (18.57) |
| + CoT | | 7.85 (19.62) | 6.63 (19.31) | 7.34 (20.75) | 8.33 (20.47) | 6.63 (18.11) | 5.94 (15.03) | 7.12 (18.88) |
| DoctorGLM | ✓ | 2.70 (16.51) | 3.31 (26.36) | 3.84 (20.86) | 3.75 (18.07) | 3.19 (22.99) | 2.25 (18.02) | 3.17 (20.47) |
| + CoT | | 3.15 (20.61) | 3.13 (26.72) | 3.41 (21.21) | 2.50 (13.35) | 3.38 (25.21) | 2.25 (19.79) | 2.97 (21.15) |

Table 3: Zero-shot accuracy in the answer-only and CoT settings across different categories. Values in parentheses are the accuracy that only involves questions for which model answers are not empty (i.e., a valid answer can be extracted from model outputs).
Decoding Hyperparameters For all the aforementioned models (except for ChatGPT and GPT-4), we adopt their default hyperparameters specified in transformers.GenerationConfig⁶. Besides, to reduce the variance in generation, we adopt greedy decoding for all models, with min_new_tokens and max_new_tokens set to 1 and 512, respectively, to avoid empty or overly long answers.
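Expressed with the transformers API, this decoding setup corresponds to a GenerationConfig like the following sketch; all remaining fields stay at each model's defaults, as stated above.

```python
# The decoding setup above: greedy decoding, at least 1 and at most 512
# newly generated tokens.
from transformers import GenerationConfig

gen_config = GenerationConfig(
    do_sample=False,     # greedy decoding
    num_beams=1,
    min_new_tokens=1,    # avoid empty answers
    max_new_tokens=512,  # avoid overly long answers
)
# Typical use: outputs = model.generate(**inputs, generation_config=gen_config)
```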
Evaluation Details We evaluate the models in both answer-only and chain-of-thought (CoT) settings. We extract answers from model outputs using an empirically designed regular expression. Each extracted answer is compared to the solution and deemed correct if and only if they match exactly. We adopt accuracy as our metric.
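A sketch of this scoring loop is shown below; the paper's actual regular expression is not given, so the pattern that grabs a run of option letters A-E is a hypothetical stand-in.

```python
# Sketch of the scoring loop: extract an answer, compare by exact match,
# report accuracy. ANSWER_RE is a hypothetical stand-in pattern.
import re

ANSWER_RE = re.compile(r"[A-E]+")

def extract_answer(output: str) -> str:
    match = ANSWER_RE.search(output.upper())
    return match.group(0) if match else ""

def accuracy(outputs: list[str], solutions: list[str]) -> float:
    correct = sum(extract_answer(o) == s for o, s in zip(outputs, solutions))
    return correct / len(solutions)
```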
# 4.2 Benchmarking Results
We report the zero-shot results in Table 3. There are several observations drawn from different aspects.
On general LLMs. Among the generic LLMs, the performance of GPT-4 in medicine significantly surpasses that of other models, by a cliff-like margin of roughly 20 percentage points. This impressive performance has contributed to our profound appreciation of the capabilities of this model. Simultaneously, two indigenous general-purpose models, ChatGLM2-6B and Baichuan-13B-chat, are closely trailing GPT-4. Notably, the ChatGLM2 model, with only 6B parameters, even outperforms ChatGPT, a testament to the rapid iterative capabilities of indigenous large-scale models and their excellence in specialized knowledge domains.
On medical LLMs. Among the medical LLMs, there are some regrettable observations. In the medical field, the development of specialized models seems to be overshadowed by updates in general large-scale models. Specifically, we observe that the performance of BianQue-2 and DoctorGLM in the medical model domain was underwhelming. These two models, due to their lack of superior directive-following capabilities and input length limitations, struggled to fully understand the intent of the questions, thereby failing to provide accurate answers. This deficiency resulted in their lower scores in the overall evaluation.
⁴ https://github.com/THUDM/ChatGLM2-6B
⁵ https://github.com/baichuan-inc/Baichuan-13B
⁶ https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig
In different categories. LLMs show varied performance across clinical specialties. Specifically, scores for pharmacist-related questions tend to be lower, while those concerning nursing staff are typically higher. This difference might arise from the foundational knowledge nurses require, which is straightforward, compared to the intricate distinctions in drug names and indications pharmacists deal with. Despite these performance variations among specialties, the models exhibit a consistent trend, suggesting no inherent bias towards any particular domain. These findings are pivotal for our ongoing research and optimization efforts.
# 4.3 Analysis
# 4.3.1 Do few-shot prompting and CoT help?
Protocol To investigate the effects of the few-shot prompting and CoT strategies, we perform three-shot and CoT experiments on CMB-Exam, with the results reported in Appendix C.1.
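A three-shot prompt could be assembled from the dev split (whose questions carry explanations, used here for the CoT variant) as in the sketch below; the field names and the joining format are illustrative assumptions, not the paper's exact template.

```python
# Hypothetical three-shot prompt assembly; "question", "answer", and
# "explanation" field names and the Chinese connective are assumptions.
def three_shot_prompt(exemplars: list[dict], question: str, use_cot: bool = False) -> str:
    parts = []
    for ex in exemplars[:3]:
        answer = ex["answer"]
        if use_cot and ex.get("explanation"):
            answer = f'{ex["explanation"]} 所以答案是{ex["answer"]}。'
        parts.append(f'{ex["question"]}\n{answer}')
    parts.append(question)
    return "\n\n".join(parts)
```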
Results The study reveals that the efficacy of both the few-shot approach and the CoT strategy in the evaluated LLMs largely depends on model capacity. The CoT strategy, contrary to expectations, often does not boost accuracy, especially in knowledge-dense tasks (e.g., medical MCQs in CMB-Exam). It might unintentionally confuse models with irrelevant context, hindering their reasoning.
For few-shot prompting, its effectiveness is predominantly evident in situations where the model already demonstrates relatively strong accuracy (e.g., accuracy above 25%). In weaker models, few-shot prompting can unintentionally harm the results. This can be attributed to two primary factors: first, some models might struggle with processing extensive text; and second, others may need additional refinement to better follow in-context examples.
# 4.3.2 On the Perceived Difficulty
Protocol There is a sequential career track for Physicians, Nurses, Technicians, and Pharmacists in China. For example, the career track of a Physician includes Resident Physician, Licensed Assistant Physician, Licensed Physician, Associate Professional Physician, and Advanced Professional Physician, ordered from low to high professional difficulty. We aim to examine whether the difficulty degrees perceived by LLMs and humans are consistent. Specifically, we denote the average zero-shot accuracy of the top five LLMs as the indicator of the difficulty degree perceived by LLMs; the lower the accuracy, the more difficult the exam.
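This indicator can be computed as in the sketch below, assuming per-model, per-subcategory zero-shot accuracies are available.

```python
# Difficulty indicator: mean zero-shot accuracy of the top-5 models per
# subcategory; a lower value means a harder exam as perceived by the LLMs.
def perceived_difficulty(acc: dict[str, dict[str, float]], top_k: int = 5) -> dict[str, float]:
    """acc maps model name -> {subcategory: zero-shot accuracy}."""
    overall = {m: sum(per.values()) / len(per) for m, per in acc.items()}
    top = sorted(overall, key=overall.get, reverse=True)[:top_k]
    subcats = next(iter(acc.values())).keys()
    return {s: sum(acc[m][s] for m in top) / len(top) for s in subcats}
```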
Figure 2: Accuracy across various clinical medicine fields at different career stages. The accuracies are the zero-shot average values of the top-5 models using the direct-response strategy.
Results As depicted in Figure 2, the y-axis orders examination types by rising professional level. The accuracy rates for physician and nurse examinations decrease as professional levels increase, except for the residency qualification examination, suggesting that it tests nuanced clinical knowledge distinctions⁷. Conversely, medical technicians exhibit the opposite trend, with accuracy on the head technician examination being the highest. This is likely due to its focus on personnel management and communication, which does not fall within the medical profession and could be learned from the massive amount of general corpora. While pharmacist exam results vary, models consistently score lower on traditional Chinese medicine pharmacist exams than on Western pharmacology ones, highlighting the need for specialized models in the Chinese medical domain.

⁷ A plausible explanation could be that this exam focuses on discerning whether medical students confuse clinical knowledge. The granularity of the knowledge assessed is quite detailed, potentially making it less amenable to larger models.
# 5 Experiments on CMB-Clin
# 5.1 Experimental Setup
Prompt construction Every prompt comprises two components: a description that may (or may not) encompass the conversation history Di, and the question Qi. To integrate the conversation history into the description, we prepend the appropriate roles to each question and solution when working with chat LLMs (all models except MedicalGPT). For the non-chat LLM, specifically MedicalGPT, we prefix "问题：" ("question:") to each question and "答案：" ("solution:") to each corresponding solution. These consolidated texts are then used to instruct the models to generate appropriate responses.
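A minimal sketch of this construction is shown below (the function, the role labels for the chat case, and the variable names are illustrative assumptions, not taken from the CMB codebase):

```python
# Illustrative sketch of the prompt construction described above.
# The chat-role labels ("用户"/"医生") are assumed; the paper only states
# that "appropriate roles" are prepended for chat LLMs.

def build_prompt(description: str, history: list[tuple[str, str]],
                 question: str, is_chat_model: bool) -> str:
    parts = [description]
    if is_chat_model:
        # Chat LLMs: prepend a role label to every turn of the history.
        for q, a in history:
            parts.append(f"用户：{q}")   # user turn (assumed label)
            parts.append(f"医生：{a}")   # doctor turn (assumed label)
        parts.append(f"用户：{question}")
    else:
        # Non-chat LLM (MedicalGPT): plain "question:"/"solution:" prefixes.
        for q, a in history:
            parts.append(f"问题：{q}")
            parts.append(f"答案：{a}")
        parts.append(f"问题：{question}")
        parts.append("答案：")
    return "\n".join(parts)
```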
Decoding hyperparameters All hyperparameters remain consistent with those used in CMB-Exam. However, we set repetition_penalty=1.1 (previously 1.0) based on the observation that the default setting yields highly repetitive patterns that render the results meaningless. Additionally, to understand the influence of temperature on generation quality, we perform an experiment with decoding temperatures set at 0.2, 0.6, 1.0, and 1.5. This fills a gap left by previous studies (Huang et al., 2023; Zhang et al., 2023c; Zheng et al., 2023; Zhang et al., 2023b; Zhu et al., 2023; Zhong et al., 2023), which often overlooked the impact of decoding strategies.
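For concreteness, these settings map directly onto a standard Hugging Face generation configuration (a sketch under the assumption that the models are served via the transformers library; max_new_tokens is our own placeholder):

```python
# Sketch: the decoding settings above expressed as a transformers
# GenerationConfig. Model and tokenizer loading are elided.
from transformers import GenerationConfig

gen_config = GenerationConfig(
    do_sample=True,          # sampling so that temperature has an effect
    temperature=0.2,         # swept over {0.2, 0.6, 1.0, 1.5} in the experiment
    repetition_penalty=1.1,  # raised from the default 1.0 to curb repetition
    max_new_tokens=512,      # placeholder value, not specified in the paper
)
# outputs = model.generate(**inputs, generation_config=gen_config)
```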
Expert Evaluation To guarantee the precision of our evaluation, we engage three annotators with professional medical knowledge to evaluate a randomly selected subset of 320 responses from a pool of 208×11 model-generated responses. This subset constitutes 15% of the total, with 11 being the number of models evaluated. All annotators follow a uniform set of guidelines. Equipped with a reference solution, they rate each response across four aspects (fluency, relevance, completeness, and medical proficiency) using a grading scale from 1 to 5. Details of the evaluation interface can be found in Appendix C.2.1.
Automatic Evaluation To enhance efficiency and reduce manual evaluation costs, we advocate a robust automatic evaluation approach. We use ChatGPT and GPT-4 to assess the model responses, adhering to the same guidelines as those used in the expert evaluations. Benefiting from definitive scoring criteria for each aspect, our method bypasses the positional bias inherent in conventional side-by-side automated assessments (Wang et al., 2023c). For robustness, ChatGPT reviews each response five times to address the variance in the temperature experiment, while GPT-4 assesses each response once for consistency. The prompt template for the automatic evaluation is detailed in Appendix C.2.2.
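The scoring loop can be sketched as follows (the judge prompt and the output-parsing convention here are stand-ins; the actual template is the one in Appendix C.2.2):

```python
# Sketch of the LLM-as-judge scoring loop described above. call_llm is any
# function that sends a prompt to the judge (ChatGPT or GPT-4) and returns
# its text output; the prompt and parsing format are assumptions.
import re
import statistics

ASPECTS = ["fluency", "relevance", "completeness", "proficiency"]

def parse_scores(raw: str) -> dict[str, int]:
    # Assumes the judge answers with lines like "fluency: 4".
    scores = {}
    for aspect in ASPECTS:
        m = re.search(rf"{aspect}\s*[:：]\s*([1-5])", raw, re.IGNORECASE)
        scores[aspect] = int(m.group(1)) if m else 0
    return scores

def judge(call_llm, reference: str, response: str,
          n_repeats: int = 5) -> dict[str, float]:
    # ChatGPT judges each response five times (n_repeats=5) to smooth
    # sampling variance; GPT-4 is queried once (n_repeats=1).
    prompt = (
        "Rate the response on fluency, relevance, completeness and medical "
        "proficiency, each from 1 to 5, one per line as 'aspect: score'.\n"
        f"Reference solution: {reference}\nModel response: {response}"
    )
    runs = [parse_scores(call_llm(prompt)) for _ in range(n_repeats)]
    return {a: statistics.mean(r[a] for r in runs) for a in ASPECTS}
```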
# 5.2 Benchmarking Results
Figure 3 shows the ranking results of the expert and GPT-4 evaluations. The horizontal axis of Figure 3 is sorted by the ranking of the average scores under GPT-4 evaluation. Detailed scores are presented in Table 4 and Table 5. The first echelon consists of GPT-4, ChatGPT and Baichuan-13B-chat: they perform significantly better than the other models in terms of relevance, completeness and proficiency, with a margin of at least 7.4%. ChatGLM2-6B, HuatuoGPT, BianQue-2 and ChatMed-Consult form the second tier; their fluency is similar to that of the first tier, but their medical proficiency is mediocre. Regrettably, MedicalGPT, DoctorGLM, Bentsao and ChatGLM-Med yield unsatisfactory results, pointing to deficiencies in their medical capability.
# 5.3 Analysis
# 5.3.1 Agreements between Automatic and Expert Evaluation
Figure 3 demonstrates a strong agreement between the rankings resulting from GPT-4 and expert evaluation, with a Spearman correlation of 0.93. The two sets of rankings agree with each other except for a flip between GPT-4 and ChatGPT (the dashed and solid brown lines in Figure 3 are parallel, except for a flip at GPT-4 and ChatGPT).
Figure 3: Rankings by perspective and model. Dashed lines and solid lines are the rankings resulting from expert and GPT-4 evaluation, respectively. For visual clarity, each line is shifted vertically by a small value. A model is better if it has a smaller ranking (a higher position) on the vertical axis.
| Model | Fluency | Relevance | Completeness | Proficiency | Avg. |
| --- | --- | --- | --- | --- | --- |
| GPT-4 | 4.97 | 4.53 | 4.12 | 4.45 | 4.52 |
| ChatGPT | 4.96 | 4.47 | 4.17 | 4.42 | 4.51 |
| Baichuan-13B-chat | 4.96 | 4.19 | 3.97 | 4.23 | 4.34 |
| ChatGLM2-6B | 4.86 | 3.76 | 3.51 | 4.00 | 4.03 |
| HuatuoGPT | 4.89 | 3.75 | 3.38 | 3.86 | 3.97 |
| BianQue-2 | 4.86 | 3.52 | 3.02 | 3.60 | 3.75 |
| ChatMed-Consult | 4.88 | 3.08 | 2.67 | 3.30 | 3.48 |
| MedicalGPT | 4.48 | 2.64 | 2.19 | 2.89 | 3.05 |
| DoctorGLM | 4.74 | 2.00 | 1.65 | 2.30 | 2.67 |
| Bentsao | 3.88 | 2.05 | 1.71 | 2.58 | 2.55 |
| ChatGLM-Med | 3.55 | 1.97 | 1.61 | 2.37 | 2.38 |
Table 4: Results of automatic evaluation using GPT-4 on CMB-Clin. Avg. is the average score of each model across all aspects. Models are displayed in descending order of Avg.
Figure 4 shows the linear correlation between the automatic and expert evaluations, averaged over the three experts and all aspects. All four evaluated aspects show positively correlated trends between expert and GPT-4 evaluation (see Appendix C.2.3). The overall Pearson correlation (Figure 4) is 0.84. Together, the two correlations indicate that the automatic evaluation is highly aligned with the expert evaluation.
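Both agreement statistics are standard and can be computed with scipy (a sketch; the score lists below are illustrative placeholders, not the paper's exact data):

```python
# Sketch: measuring agreement between two evaluations. The reported values
# are a Spearman correlation of 0.93 over rankings and a Pearson correlation
# of 0.84 over averaged scores; the arrays here are made-up placeholders.
from scipy.stats import pearsonr, spearmanr

gpt4_scores   = [4.5, 4.4, 4.3, 4.0, 3.9, 3.7, 3.5, 3.1, 2.7, 2.6, 2.4]
expert_scores = [4.5, 4.5, 4.4, 4.2, 4.0, 3.7, 3.5, 3.4, 2.7, 2.7, 2.3]

rho, p_rho = spearmanr(gpt4_scores, expert_scores)  # rank-based agreement
r, p_r = pearsonr(gpt4_scores, expert_scores)       # linear agreement
print(f"spearman={rho:.2f} (p={p_rho:.1e}), pearson={r:.2f} (p={p_r:.1e})")
```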
# 5.3.2 Consistent results with CMB-Exam
We compute the Spearman correlation between the rankings obtained on CMB-Exam and CMB-Clin, yielding a correlation of 0.89 with a two-tailed p-value of 2.3e-4. This suggests a high consistency between the evaluation results on the two datasets. It is worth noting, however, that this observation does not imply an equivalence of the abilities evaluated by CMB-Exam and CMB-Clin. We attribute the consistency to the speculation that most current models are trained to inject knowledge without hurting their conversational ability. We hope that after being supervised-finetuned on the CMB-Exam training set, which consists of an enormous number of multiple-choice questions, a model can still achieve decent scores on CMB-Clin. This objective aligns with our expectation of a doctor: we hope that a doctor is sufficiently informed with medical knowledge and is able to converse with a patient.
| Model | Fluency | Relevance | Completeness | Proficiency | Avg. |
| --- | --- | --- | --- | --- | --- |
| ChatGPT | 4.93 | 4.65 | 4.22 | 4.34 | 4.53 |
| GPT-4 | 4.88 | 4.61 | 4.20 | 4.39 | 4.52 |
| Baichuan-13B-chat | 4.79 | 4.29 | 4.22 | 4.30 | 4.40 |
| ChatGLM2-6B | 4.77 | 4.06 | 3.96 | 3.99 | 4.20 |
| HuatuoGPT | 4.70 | 3.89 | 3.69 | 3.81 | 4.02 |
| BianQue-2 | 4.44 | 3.50 | 3.30 | 3.43 | 3.67 |
| ChatMed-Consult | 4.26 | 3.39 | 3.16 | 3.27 | 3.52 |
| MedicalGPT | 4.21 | 3.40 | 3.09 | 3.10 | 3.45 |
| DoctorGLM | 3.74 | 2.46 | 2.35 | 2.30 | 2.71 |
| Bentsao | 3.52 | 2.62 | 2.36 | 2.30 | 2.70 |
| ChatGLM-Med | 2.92 | 2.23 | 1.98 | 1.92 | 2.26 |
Table 5: Results of expert evaluation on CMB-Clin. Avg. is the average score of each model over all aspects. Models are arranged in descending order of Avg.
Figure 4: Correlation between expert and automatic evaluation on CMB-Clin.
Figure 5: The effect of different decoding temperatures on scores averaged over the four aspects.
# 5.3.3 Effects of Decoding Hyper-parameters
Figure 5 demonstrates the results under different decoding temperatures. The overall performance drops as the temperature increases from 0.2 to 1.5. This might be because a higher temperature leads to more randomized (diversified) outputs, which is undesirable in medicine, where precise and definite content is preferred. However, we find that the pairwise Spearman correlations under different temperatures are all above 0.87 (see Appendix C.2.4), meaning that the resulting model rankings are robust to temperature changes. This also reveals the importance of aligning decoding temperatures when comparing performance across models.
# 6 Conclusion
In conclusion, while LLMs have potential in the realm of medicine, their accurate evaluation remains pivotal for real-world applications. The CMB benchmark, tailored to the local cultural environment of China, provides a more contextualized and comprehensive evaluation. Although not framed as a competitive leaderboard, it serves as a crucial tool for tracking LLM progress in the medical domain, particularly within China. This might pave the way for the broader and more effective utilization of LLMs in China's medical landscape.
# Ethical Statement
The permission to release the data The data utilized in this study primarily originate from publicly accessible mock examination questions, coursework exercises, and summaries of commonly misunderstood examination questions. A portion of these items are sourced from the Chinese Medical Question Database8, from which we received explicit permission and support to include their questions in our evaluation.
The privacy issue We have removed all personal information from our benchmark.
# References
Asma Ben Abacha, Eugene Agichtein, Yuval Pinter, and Dina Demner-Fushman. 2017. Overview of the medical question answering task at TREC 2017 LiveQA. In Proceedings of The Twenty-Sixth Text REtrieval Conference, TREC 2017, Gaithersburg, Maryland, USA, November 15-17, 2017, volume 500-324 of NIST Special Publication. National Institute of Standards and Technology (NIST).
Asma Ben Abacha and Dina Demner-Fushman. 2019. A question-entailment approach to question answering. BMC Bioinform., 20(1):511:1-511:23.
Asma Ben Abacha, Yassine Mrabet, Mark Sharp, Travis R. Goodwin, Sonya E. Shooshan, and Dina Demner-Fushman. 2019. Bridging the gap between consumers' medication questions and trusted answers. In MEDINFO 2019: Health and Wellbeing e-Networks for All - Proceedings of the 17th World Congress on Medical and Health Informatics, Lyon, France, 25-30 August 2019, volume 264 of Studies in Health Technology and Informatics, pages 25-29. IOS Press.
Edward Beeching, Clémentine Fourrier, Nathan Habib, Sheon Han, Nathan Lambert, Nazneen Rajani, Omar Sanseviero, Lewis Tunstall, and Thomas Wolf. 2023. Open LLM leaderboard. https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard.
Yirong Chen, Zhenyu Wang, Xiaofen Xing, Zhipei Xu, Kai Fang, Sihang Li, Junhong Wang, and Xiangmin Xu. 2023. BianQue-1.0: Improving the "question" ability of medical chat models through finetuning with hybrid instructions and multi-turn doctor QA datasets. GitHub.
Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. 2022. GLM: General language model pretraining with autoregressive blank infilling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 320-335.
Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, and Jacob Steinhardt. 2021a. Measuring coding challenge competence with APPS. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021b. Measuring massive multitask language understanding. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021c. Measuring mathematical problem solving with the MATH dataset. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual.
Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685.
8 https://www.medtiku.com/
Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, et al. 2023. C-Eval: A multi-level multi-discipline Chinese evaluation suite for foundation models. arXiv preprint arXiv:2305.08322.
Di Jin, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, Hanyi Fang, and Peter Szolovits. 2020. What disease does this patient have? A large-scale open domain question answering dataset from medical exams. CoRR, abs/2009.13081.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.
Jianquan Li, Xidong Wang, Xiangbo Wu, Zhiyi Zhang, Xiaolong Xu, Jie Fu, Prayag Tiwari, Xiang Wan, and Benyou Wang. 2023. Huatuo-26M, a large-scale Chinese medical QA dataset. arXiv preprint arXiv:2305.01526.
Jing Li, Shangping Zhong, and Kaizhi Chen. 2021. MLEC-QA: A Chinese multi-choice biomedical question answering dataset. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 8862-8874. Association for Computational Linguistics.
Junling Liu, Peilin Zhou, Yining Hua, Dading Chong, Zhongyu Tian, Andrew Liu, Helin Wang, Chenyu You, Zhenhua Guo, Lei Zhu, and Michael Lingzhi Li. 2023. Benchmarking large language models on CMExam - a comprehensive Chinese medical exam dataset. CoRR, abs/2306.03030.
Ankit Pal, Logesh Kumar Umapathi, and Malaikannan Sankarasubbu. 2022. MedMCQA: A large-scale multi-subject multi-choice dataset for medical domain question answering. In Conference on Health, Inference, and Learning, CHIL 2022, 7-8 April 2022, Virtual Event, volume 174 of Proceedings of Machine Learning Research, pages 248-260. PMLR.
Anusri Pampari, Preethi Raghavan, Jennifer J. Liang, and Jian Peng. 2018. emrQA: A large corpus for question answering on electronic medical records. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2357-2368. Association for Computational Linguistics.
Yifan Peng, Shankai Yan, and Zhiyong Lu. 2019. Transfer learning in biomedical natural language processing: An evaluation of BERT and ELMo on ten benchmarking datasets. In Proceedings of the 2019 Workshop on Biomedical Natural Language Processing (BioNLP 2019).
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, and Aran Komatsuzaki. 2023. ARB: advanced reasoning benchmark for large language models. CoRR, abs/2307.13692.
Karan Singhal, Shekoofeh Azizi, Tao Tu, S. Sara Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales, Ajay Kumar Tanwani, Heather Cole-Lewis, Stephen Pfohl, Perry Payne, Martin Seneviratne, Paul Gamble, Chris Kelly, Nathaneal Schärli, Aakanksha Chowdhery, Philip Andrew Mansfield, Blaise Agüera y Arcas, Dale R. Webster, Gregory S. Corrado, Yossi Matias, Katherine Chou, Juraj Gottweis, Nenad Tomasev, Yun Liu, Alvin Rajkomar, Joelle K. Barral, Christopher Semturs, Alan Karthikesalingam, and Vivek Natarajan. 2022. Large language models encode clinical knowledge. CoRR, abs/2212.13138.
Mujeen Sung, Jinhyuk Lee, Sean Yi, Minji Jeon, Sungdong Kim, and Jaewoo Kang. 2021. Can language models be biomedical knowledge bases? arXiv preprint arXiv:2109.07154.
Simon Suster and Walter Daelemans. 2018. CliCR: A dataset of clinical case reports for machine reading comprehension. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 1551-1563. Association for Computational Linguistics.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
Tao Tu, Shekoofeh Azizi, Danny Driess, Mike Schaekermann, Mohamed Amin, Pi-Chuan Chang, Andrew Carroll, Chuck Lau, Ryutaro Tanno, Ira Ktena, et al. 2023. Towards generalist biomedical ai. arXiv preprint arXiv:2307.14334.
Haochun Wang, Chi Liu, Nuwa Xi, Zewen Qiang, Sendong Zhao, Bing Qin, and Ting Liu. 2023a. HuaTuo: Tuning LLaMA model with Chinese medical knowledge.
Haochun Wang, Chi Liu, Sendong Zhao, Bing Qin, and Ting Liu. 2023b. ChatGLM-Med: ChatGLM fine-tuned on Chinese medical knowledge. https://github.com/SCIR-HI/Med-ChatGLM.
2308.08833 | 52 | Junjie Wang, Yuxiang Zhang, Lin Zhang, Ping Yang, Xinyu Gao, Ziwei Wu, Xiaoqun Dong, Junqing He, Jianheng Zhuo, Qi Yang, Yongfeng Huang, Xiayu Li, Yanghan Wu, Junyu Lu, Xinyu Zhu, Weifeng Chen, Ting Han, Kunhao Pan, Rui Wang, Hao Wang, Xiaojun Wu, Zhongshen Zeng, Chongpei Chen, Ruyi Gan, and Jiaxing Zhang. 2022. Fengshenbang 1.0: Being the foundation of Chinese cognitive intelligence. CoRR, abs/2209.02970.
Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. 2023c. Large language models are not fair evaluators. arXiv preprint arXiv:2305.17926.
Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing datasets for multi-hop reading comprehension across documents. Trans. Assoc. Comput. Linguistics, 6:287–302. | 2308.08833#52 |
2308.08833 | 53 | Honglin Xiong, Sheng Wang, Yitao Zhu, Zihao Zhao, Yuxiao Liu, Qian Wang, and Dinggang Shen. 2023. DoctorGLM: Fine-tuning your Chinese doctor is not a herculean task. arXiv preprint arXiv:2304.01097.
Ming Xu. 2023. MedicalGPT: Training medical GPT model. https://github.com/shibing624/MedicalGPT.
Hongbo Zhang, Junying Chen, Feng Jiang, Fei Yu, Zhihong Chen, Jianquan Li, Guiming Chen, Xiangbo Wu, Zhiyi Zhang, Qingying Xiao, et al. 2023a. HuatuoGPT, towards taming language model to be a doctor. arXiv preprint arXiv:2305.15075. | 2308.08833#53 |
2308.08833 | 54 | Ningyu Zhang, Mosha Chen, Zhen Bi, Xiaozhuan Liang, Lei Li, Xin Shang, Kangping Yin, Chuanqi Tan, Jian Xu, Fei Huang, Luo Si, Yuan Ni, Guotong Xie, Zhifang Sui, Baobao Chang, Hui Zong, Zheng Yuan, Linfeng Li, Jun Yan, Hongying Zan, Kunli Zhang, Buzhou Tang, and Qingcai Chen. 2022. CBLUE: A Chinese biomedical language understanding evaluation benchmark. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7888â7915, Dublin, Ireland. Association for Computational Linguistics.
Wenxuan Zhang, Sharifah Mahani Aljunied, Chang Gao, Yew Ken Chia, and Lidong Bing. 2023b. M3Exam: A multilingual, multimodal, multilevel benchmark for examining large language models. arXiv preprint arXiv:2306.05179.
Xiaotian Zhang, Chunyang Li, Yi Zong, Zhengyu Ying, Liang He, and Xipeng Qiu. 2023c. Evaluating the performance of large language models on Gaokao benchmark. arXiv preprint arXiv:2305.12474. | 2308.08833#54 |
2308.08833 | 55 | Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. 2023. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. arXiv preprint arXiv:2306.05685.
Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. 2023. AGIEval: A human-centric benchmark for evaluating foundation models.
Kaijie Zhu, Jindong Wang, Jiaheng Zhou, Zichen Wang, Hao Chen, Yidong Wang, Linyi Yang, Wei Ye, Neil Zhenqiang Gong, Yue Zhang, et al. 2023. Promptbench: Towards evaluating the robustness of large language models on adversarial prompts. arXiv preprint arXiv:2306.04528. | 2308.08833#55 |
2308.08833 | 56 | Ming Zhu, Aman Ahuja, Da-Cheng Juan, Wei Wei, and Chandan K. Reddy. 2020. Question answering with long multiple-span answers. In Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of Findings of ACL, pages 3840â3849. Association for Computational Linguistics.
Ming Zhu, Aman Ahuja, Wei Wei, and Chandan K. Reddy. 2019. A hierarchical attention retrieval model for healthcare question answering. In The World Wide Web Conference, WWW 2019, San Francisco, CA, USA, May 13-17, 2019, pages 2472â2482. ACM.
Wei Zhu and Xiaoling Wang. 2023. ChatMed: A Chinese medical large language model. https://github.com/michael-wzhu/ChatMed.
# A Dataset
Tables 7, 8, and 9 present a detailed directory structure of CMB-Exam. The organization is based on clinical professions and the exams commonly undertaken by these professionals, divided into six primary sections. On this foundation, each section is further categorized by career progression and examination subjects. Within each subcategory, questions are classified according to specific departments or courses.
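To make the three-level layout concrete, the sketch below shows one way such a catalog could be represented and tallied in Python. The category and subject names follow Table 7, but the nesting shown and the specific counts are illustrative assumptions, not fields of the released dataset.

```python
# A minimal, illustrative sketch of the CMB-Exam hierarchy described above:
# category -> subcategory -> subject -> number of questions.
# Names follow Table 7; the counts here are placeholders, not real values.
cmb_exam_catalog = {
    "Physician": {
        "Resident Physician": {"Clinical Pathology": 1100, "Oral": 1000},
        "Licensed Physician": {"Clinical": 5300},
    },
    "Nurse": {
        "Practicing Nurse": {"Practicing Nurse": 3300},
    },
}

def count_questions(catalog: dict) -> int:
    """Sum the question counts over every subject in the catalog."""
    return sum(
        n
        for subcategories in catalog.values()
        for subjects in subcategories.values()
        for n in subjects.values()
    )

print(count_questions(cmb_exam_catalog))  # 10700 for this toy catalog
```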
# B Details of Evaluated Models | 2308.08833#56 |
2308.08833 | 57 | # B Details of Evaluated Models
In this section, we introduce and detail the models utilized in our evaluation. These models fall under three primary categories: seven Chinese medical LLMs, two proprietary LLMs, and two publicly-available general-domain LLMs.
# Chinese medical LLMs:
⢠HuatuoGPT: It leverages real-world and synthetic instruction and conversation data to fine-tune Baichuan-7B9 base model.
⢠BianQue: It enhances its questioning ability by asking patients for more information to solve the issue that patients may not reveal all information in a single-turn conversation.
⢠ChatMed-Consult: It is built upon Chinese LLaMa (ctt) using real-world questions and synthetic responses from ChatGPT.
⢠MedicalGPT: It is based on Ziya-LLaMa (Wang et al., 2022) and adopts a four-stage training recipe, including continued pre-training, supervised fine-tuning, reward modeling, reinforcement learning.
⢠ChatGLM-Med: It is finetuned on ChatGLM-6B (Du et al., 2022) using instruction tuning data, which are built upon CMeKG10.
⢠Bentsao: It is finetuned on LLaMa-7B (Touvron et al., 2023) using the same data as ChatGLM-Med. | 2308.08833#57 | CMB: A Comprehensive Medical Benchmark in Chinese | Large Language Models (LLMs) provide a possibility to make a great
2308.08833 | 58 | • Bentsao: It is fine-tuned on LLaMA-7B (Touvron et al., 2023) using the same data as ChatGLM-Med.
⢠DoctorGLM: It leverages ChatGPT and BART (Lewis et al., 2019) to construct large-scale, high-quality Chinese dataset, which is used to tune LoRA (Hu et al., 2021)layers on top of ChatGLM-6B.
# Proprietary models:
⢠ChatGPT: Developed by OpenAI, ChatGPT, rooted in the GPT-3.5 architecture, excels in both understanding and generating natural language.
⢠GPT-4: Another offering from OpenAI, GPT-4 employs deep learning techniques to elevate natural language processing capabilities, showcasing remarkable advancements across diverse tasks.
# Publicly-available general-domain LLMs:
⢠ChatGLM-2: The second version of ChatGLM, which is an open source, bilingual dialogue language model.
⢠Baichuan-13B-chat: An advanced variant of Baichuan-13B model, focuses on dialogue tasks, boasting 13 billion parameters for efficient and effective conversation generation. | 2308.08833#58 | CMB: A Comprehensive Medical Benchmark in Chinese | Large Language Models (LLMs) provide a possibility to make a great
2308.08833 | 59 | • Baichuan-13B-chat: An advanced variant of the Baichuan-13B model that focuses on dialogue tasks, with 13 billion parameters for efficient and effective conversation generation.
It is noteworthy that both ChatGLM-2 and Baichuan-13B-chat have exhibited exceptional performance on well-known general-domain benchmarks, such as C-Eval (Huang et al., 2023), Gaokao (Zhang et al., 2023c), and AGIEval (Zhong et al., 2023).
9 https://github.com/baichuan-inc/Baichuan-13B
10 https://github.com/king-yyf/CMeKG_tools
# C Experiment Details
# C.1 CMB-Exam
We present the few-shot experimental results on CMB-Exam in Table 10. After considering inference speed and the studies mentioned previously, we opt for a 3-shot experimental setup. For comparative effectiveness, we experiment with two strategies: direct answer generation and COT. Since some models are not able to generate valid answers, we provide (in parentheses) the reference accuracy using the number of questions for which answers are successfully extracted as the denominator. A detailed analysis is provided in the main text.
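As a concrete reading of the two numbers reported per cell, here is a minimal sketch (assumed, not the authors' evaluation script) of how accuracy and the parenthesized reference accuracy relate:

```python
# Plain accuracy divides by all questions; the parenthesized "reference
# accuracy" divides only by questions where an answer could be extracted.
def accuracies(records):
    """records: dicts with 'extracted' (choice str or None) and 'gold' (str)."""
    extracted = [r for r in records if r["extracted"] is not None]
    correct = sum(r["extracted"] == r["gold"] for r in extracted)
    accuracy = 100 * correct / len(records)
    reference_accuracy = 100 * correct / len(extracted) if extracted else 0.0
    return accuracy, reference_accuracy

acc, ref = accuracies([
    {"extracted": "A", "gold": "A"},
    {"extracted": None, "gold": "B"},  # no valid answer could be extracted
    {"extracted": "C", "gold": "D"},
])
print(f"{acc:.2f} ({ref:.2f})")  # prints: 33.33 (50.00)
```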
# C.2 CMB-Clin
# C.2.1 Screenshot of Human Evaluation UI | 2308.08833#59 |
2308.08833 | 60 | # C.2 CMB-Clin
# C.2.1 Screenshot of Human Evaluation UI
We show the screenshot of the human evaluation UI in Figure 7 and Figure 8. We split the screenshot into two figures for better visual clarity.
# C.2.2 Prompts for Automatic Evaluation
The prompt for automatic evaluation contains task instructions, metrics, criteria, and placeholders for the information to be evaluated. It is designed based on the suggestions of experts and is used by both ChatGPT and GPT-4.

You are an AI evaluator specializing in assessing the quality of answers
provided by other language models. Your primary goal is to rate the answers based on their fluency, relevance, completeness, and proficiency in medicine. Use the following scales to evaluate each criterion: | 2308.08833#60 |
2308.08833 | 61 | provided by other language models. Your primary goal is to rate the answers based on their fluency, relevance, completeness, and proficiency in medicine. Use the following scales to evaluate each criterion:
Fluency:
1: Completely broken and unreadable sentence pieces
2: Mostly broken with few readable tokens
3: Moderately fluent but with limited vocabulary
4: Mostly coherent in expressing complex subjects
5: Human-level fluency

Relevance:
1: Completely unrelated to the question
2: Some relation to the question, but mostly off-topic
3: Relevant, but lacking focus or key details
4: Highly relevant, addressing the main aspects of the question
5: Directly relevant and precisely targeted to the question

Completeness:
1: Extremely incomplete
2: Almost incomplete with limited information
3: Moderate completeness with some information
4: Mostly complete with most of the information displayed
5: Fully complete with all information presented

Proficiency in medicine:
1: Using plain language with no medical terminology
2: Equipped with some medical knowledge but lacking in-depth details
3: Conveying moderately complex medical information with clarity
4: Showing solid grasp of medical terminology but having some minor mistakes in detail
5: Fully correct in all presented medical knowledge
You will be provided with the following information:
- a description
- a conversation based on the description (optional)
- a question based on the description and conversation
- the solution to the question
- a model's answer to the question
| 2308.08833#61 |
2308.08833 | 62 |
[description] {description} [end of description]
[conversation] {history} [end of conversation]
[question] {question} [end of question]
[solution] {solution} [end of solution]
[answer] {answer} [end of answer]
Make sure to provide your evaluation results in JSON format and ONLY the JSON, with separate ratings for each of the mentioned criteria, as in the following example: {'fluency': 3, 'relevance': 3, 'completeness': 3, 'proficiency': 3}
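To show how the template above could be applied in practice, here is a minimal sketch: fill the placeholders, query a judge model, and parse the returned JSON. The judge callable is a hypothetical stand-in for an actual ChatGPT/GPT-4 API call, not part of the paper's released code.

```python
import json

# The slot section of the evaluation prompt, mirroring the template above.
SLOTS = (
    "[description] {description} [end of description]\n"
    "[conversation] {history} [end of conversation]\n"
    "[question] {question} [end of question]\n"
    "[solution] {solution} [end of solution]\n"
    "[answer] {answer} [end of answer]"
)

def rate(judge, instructions, example):
    """Fill the template, query the judge, and validate the JSON ratings."""
    reply = judge(instructions + "\n" + SLOTS.format(**example))
    scores = json.loads(reply)  # e.g. {"fluency": 3, "relevance": 4, ...}
    assert set(scores) == {"fluency", "relevance", "completeness", "proficiency"}
    return scores
```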
Settings   Original  T-0.2  T-0.6  T-1.0  T-1.5
Original       1.00   0.95   0.90   0.87   0.87
T-0.2          0.95   1.00   0.98   0.88   0.88
T-0.6          0.90   0.98   1.00   0.90   0.90
T-1.0          0.87   0.88   0.90   1.00   1.00
T-1.5          0.87   0.88   0.90   1.00   1.00

Table 6: Pairwise Spearman correlations between results under different decoding temperatures. Original: results of greedy decoding (temperature 0). T-x: results of using nucleus sampling under temperature x. | 2308.08833#62 |
2308.08833 | 63 | [Figure 6 panels: Fluency (Pearson r = 0.71), Relevance (r = 0.81), Completeness (r = 0.78), Proficiency (r = 0.75); scatter plots of expert scores against GPT-4 scores.]
Figure 6: Correlation of expert and automatic evaluation on CMB-Clin for each perspective, with Pearson correlation. The four plots show correlations in fluency, relevance, completeness, and proficiency in medicine, respectively. Each plot consists of 320 data points, many of which overlap; the darker a point is, the more overlapping data there are at that position. Each expert score is averaged over the three expert annotators.
# C.2.3 Agreement of Expert and GPT-4 Evaluation per Perspective
Figure 6 shows the agreement between expert and GPT-4 evaluation on each perspective. The Pearson correlations are all above 0.71, indicating a strong linear correlation between the two evaluation approaches.
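A minimal sketch (assumed, not the authors' analysis script) of the agreement computation behind Figure 6: average each sample's three expert ratings, then correlate them with the GPT-4 ratings for the same samples.

```python
import numpy as np
from scipy.stats import pearsonr

expert = np.array([[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2]])  # 3 annotators
gpt4 = np.array([4.5, 3.0, 5.0, 2.5])  # judge scores, illustrative values
r, p = pearsonr(expert.mean(axis=1), gpt4)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```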
| 2308.08833#63 |
2308.08833 | 64 | [Unrecoverable OCR text from the Chinese human evaluation UI screenshot (see Figure 7): annotation guidelines with 1-5 rating descriptions for each criterion and the interface components shown to annotators.] | 2308.08833#64 |
2308.08833 | 66 | Figure 7: The guideline for human evaluation and the introduction to the components of the user interface (in Chinese). Note that Figure 7 precedes Figure 8 on the same webpage.
# C.2.4 Pairwise Correlation of Rankings under Different Temperatures
We evaluate the results generated under each setting (i.e., under different temperatures) using ChatGPT. Then, for each setting, we obtain a ranking of all models. We then calculate the pairwise Spearman correlation between all sets of rankings. The results are summarized in Table 6.
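The procedure maps directly onto a few lines of Python; the sketch below (with illustrative rankings, not the paper's data) computes the pairwise Spearman correlations reported in Table 6:

```python
from itertools import combinations
from scipy.stats import spearmanr

# One ranking of the evaluated models per decoding setting (illustrative).
rankings = {
    "Original": [1, 2, 3, 4, 5],
    "T-0.2":    [1, 2, 3, 5, 4],
    "T-1.0":    [2, 1, 3, 4, 5],
}
for a, b in combinations(rankings, 2):
    rho, _ = spearmanr(rankings[a], rankings[b])
    print(f"{a} vs {b}: rho = {rho:.2f}")
```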
| 2308.08833#66 |
2308.08833 | 67 | [Unrecoverable OCR text from the Chinese human evaluation UI screenshot (see Figure 8): a patient case description with follow-up questions, reference solutions, and a model answer to be rated.] | 2308.08833#67 |
2308.08833 | 70 | Category Physician Subcategory Subject Resident Physician Clinical Pathology Oral Otolaryngology Rehabilitation Medicine Ophthalmology Neurology Orthopedics Anesthesiology Pediatrics Dermatology Psychiatry General Practice Medical Imaging Internal Medicine Ultrasound Surgery Obstetrics and Gynecology Pediatric Surgery Licensed Assistant Physician Integrated Chinese and Western Medicine Clinical Chinese Medicine Public Health Oral Licensed Physician Chinese Medicine Public Health Clinical Oral Integrated Chinese and Western Medicine Associate Professional Physician General Medicine Internal Oral Orthopedics Chinese Internal Medicine Surgery Ultrasound Medicine Dermatology and Venereology Otolaryngology Internal Medicine Infectious Diseases Obstetrics and Gynecology Cardiovascular Internal Medicine and Respiratory Internal Medicine Oncology Acupuncture Attending in TCM Pathology Preventive Medicine Pediatrics Psychotherapy Radiology Psychiatry Oral Restoration Dermatology Digestive Internal Medicine Rehabilitation Medicine Infectious Disease Nuclear Medicine Oral Medicine Integrated Chinese and Western Internal Medicine Ophthalmology Anesthesiology Hospital Infection Nutrition Tuberculosis Critical Care Medicine Psychological Counselor Pain Medicine Neurology Orthodontics Oral and Maxillofacial Surgery | 2308.08833#70 |
2308.08833 | 71 | Hospital Infection Nutrition Tuberculosis Critical Care Medicine Psychological Counselor Pain Medicine Neurology Orthodontics Oral and Maxillofacial Surgery Plastic Surgery Nephrology Rheumatology and Clinical Immunology Occupational Disease Advanced Professional Physicians # Questions 1124 1074 952 461 951 791 939 907 749 977 903 712 964 752 430 829 800 296 3441 5364 3454 2067 1090 4490 4085 10241 1505 5320 3492 858 894 2896 5071 2218 1158 983 5671 600 2641 617 942 1169 1642 2817 3773 1393 2401 754 1183 909 160 630 861 1250 862 1101 988 923 827 1009 58 579 495 884 126 578 367 187 81 37 54 | 2308.08833#71 |
2308.08833 | 72 | Respiratory Internal Medicine Orthopedics Endocrinology Cardiology Digestive Internal Medicine General Surgery Senior Gynecology and Obstetrics General Internal Medicine General Practice Pediatrics
Table 7: Catalog Structure of Physician
1522 1245 1326 1604 1577 1850 3249 607 74 65
Category Undergraduate Disciplines Subcategory Subject Fundamental Medicine Pathophysiology Medical Psychology Biochemistry and Molecular Biology Cell Biology Medical Immunology Pathology Medical Genetics Parasitology Systematic Anatomy Bioinformatics Physiology Pharmacology Medical Microbiology Local Anatomy Histology and Embryology Human Parasitology Medical Statistics Clinical Medicine Medical Imaging Radiology Experimental Diagnostic Medicine Neurology Surgery Dermatology and Venereology Pediatrics Nuclear Medicine Physical Diagnosis Dental Pulp Disease Basic Nursing Diagnostics Ultrasonic Medicine Oral Care Evidence-Based Medicine Fundamental Nursing Epidemiology Oral Tissue Pathology Infectious Disease Oral Anatomy and Physiology Anesthesiology Interventional Radiology TCM and Chinese Herbal Medicine Preventive Medicine Hygiene Medical Ethics Preventive Medicine and Public Health # Questions 1455 932 2402 1399 2485 2786 1369 806 1967 185 2306 2424 1342 489 1398 766 198 1858 541 548 1163 2164 2168 3760 1383 621 346 978 103 192 263 95 393 864 387 287 362 606 81 1926 1316 500
Table 8: Catalog Structure of Undergraduate Disciplines | 2308.08833#72 |
2308.08833 | 74 |
Category Subcategory Subject Practicing Nurse Practicing Nurse Licensed Practical Nurse Licensed Practical Nurse Nurse Charge Nurse Pediatric Internal Medicine Charge Nurse Surgery Obstetrics and Gynecology Advanced Practice Nurse Advanced Practice Nurse Medical Technician Rehabilitation Medicine Therapy Radiology Inspection Oncology Medical Technologist Rehabilitation Medicine Therapy Oncology Radiology Inspection Technician Supervising Technologist Radiation Therapy for Oncology Ultrasonic Medicine Blood Transfusion Technology Microbiological Inspection Radiology Pathology Physical and Chemical Inspection Clinical Medicine Inspection Medical Record Information Nuclear Medicine Electrocardiology Disinfection Technology Rehabilitation Medicine and Treatment Nursing Surgical Nursing Basic Nursing Graduate Entrance Exam Political Science Political Science Integrated Western Medicine Integrated Western Medicine Integrated TCM Integrated TCM Licensed Pharmacist Licensed Pharmacist Licensed TCM Pharmacist Licensed TCM Pharmacist Pharmacist Junior Pharmacist Junior Pharmacist Junior Pharmacist Assistant Junior Pharmacist Assistant Junior TCM Pharmacist Junior TCM Pharmacist Assistant Junior TCM Pharmacist Junior TCM Pharmacist Assistant Chief Pharmacist Chief Pharmacist Chief TCM Pharmacist Chief TCM Pharmacist # Questions 3303 4223 905 958 4558 341 755 1876 1752 1033 1166 1086 1739 1538 1337 1458 1701 145 2199 704 1428 2407 783 1378 1331 1275 1021 575 948 1112 902 1514 8913 3924 8248 4460 2720 3705 3502 4017 3403 3299 | 2308.08833#74 | CMB: A Comprehensive Medical Benchmark in Chinese | Large Language Models (LLMs) provide a possibility to make a great
breakthrough in medicine. The establishment of a standardized medical benchmark
becomes a fundamental cornerstone to measure progression. However, medical
environments in different regions have their local characteristics, e.g., the
ubiquity and significance of traditional Chinese medicine within China.
Therefore, merely translating English-based medical evaluation may result in
\textit{contextual incongruities} to a local region. To solve the issue, we
propose a localized medical benchmark called CMB, a Comprehensive Medical
Benchmark in Chinese, designed and rooted entirely within the native Chinese
linguistic and cultural framework. While traditional Chinese medicine is
integral to this evaluation, it does not constitute its entirety. Using this
benchmark, we have evaluated several prominent large-scale LLMs, including
ChatGPT, GPT-4, dedicated Chinese LLMs, and LLMs specialized in the medical
domain. It is worth noting that our benchmark is not devised as a leaderboard
competition but as an instrument for self-assessment of model advancements. We
hope this benchmark could facilitate the widespread adoption and enhancement of
medical LLMs within China. Check details in
\url{https://cmedbenchmark.llmzoo.com/}. | http://arxiv.org/pdf/2308.08833 | Xidong Wang, Guiming Hardy Chen, Dingjie Song, Zhiyi Zhang, Zhihong Chen, Qingying Xiao, Feng Jiang, Jianquan Li, Xiang Wan, Benyou Wang, Haizhou Li | cs.CL, cs.AI | null | null | cs.CL | 20230817 | 20230817 | [
{
"id": "2306.05685"
},
{
"id": "2302.13971"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.14334"
},
{
"id": "1910.13461"
},
{
"id": "2306.05179"
},
{
"id": "2106.09685"
},
{
"id": "2304.01097"
},
{
"id": "2305.08322"
},
{
"id": "2305.12474"
},
{
"id": "2305.15075"
},
{
"id": "2109.07154"
},
{
"id": "2305.01526"
}
] |
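The chunk above flattens CMB's exam taxonomy (category → subcategory → subject, with per-subject question counts). A small sketch of one way to hold and tally such a taxonomy — the subject-to-count pairings below are illustrative, since the flattened table does not preserve the exact alignment:

```python
# A minimal sketch, not from the CMB paper: a nested mapping for a slice of
# the exam taxonomy, with per-subject question counts. Counts shown are
# illustrative pairings only.
cmb_taxonomy = {
    "Nurse": {
        "Practicing Nurse": 3303,
        "Licensed Practical Nurse": 4223,
    },
    "Pharmacist": {
        "Licensed Pharmacist": 8913,
        "Licensed TCM Pharmacist": 3924,
    },
}

def total_questions(taxonomy: dict) -> int:
    """Sum question counts across every subject in every category."""
    return sum(n for subjects in taxonomy.values() for n in subjects.values())

print(total_questions(cmb_taxonomy))  # 20363 for this illustrative slice
```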
2308.08833 | 76 | Model Open Physician Nurse Pharmacist Technician Undergraduate Disciplines Graduate Entrance Exam General Models ChatGLM2-6B + CoT ✓ 43.80 (43.84) 41.25 (42.94) 51.94 (51.94) 52.81 (53.86) 40.66 (40.78) 42.56 (44.18) 40.83 (40.90) 41.00 (41.65) 42.13 (42.32) 39.81 (40.72) 43.94 (44.17) 42.12 (42.85) Baichuan-13B-chat + CoT ✓ 35.90 (36.04) 38.15 (39.37) 41.38 (41.43) 48.31 (49.25) 34.53 (34.74) 42.59 (43.73) 28.83 (28.95) 38.50 (39.05) 34.44 (34.58) 41.06 (41.60) 35.19 (35.25) 37.25 (38.20) Medical Models HuatuoGPT (华佗) + CoT ✓ 31.85 (31.88) 26.90 (29.92) 33.56 (33.56) 32.75 (35.25) 29.06 | 2308.08833#76 | CMB: A Comprehensive Medical Benchmark in Chinese | Large Language Models (LLMs) provide a possibility to make a great
breakthrough in medicine. The establishment of a standardized medical benchmark
becomes a fundamental cornerstone to measure progression. However, medical
environments in different regions have their local characteristics, e.g., the
ubiquity and significance of traditional Chinese medicine within China.
Therefore, merely translating English-based medical evaluation may result in
\textit{contextual incongruities} to a local region. To solve the issue, we
propose a localized medical benchmark called CMB, a Comprehensive Medical
Benchmark in Chinese, designed and rooted entirely within the native Chinese
linguistic and cultural framework. While traditional Chinese medicine is
integral to this evaluation, it does not constitute its entirety. Using this
benchmark, we have evaluated several prominent large-scale LLMs, including
ChatGPT, GPT-4, dedicated Chinese LLMs, and LLMs specialized in the medical
domain. It is worth noting that our benchmark is not devised as a leaderboard
competition but as an instrument for self-assessment of model advancements. We
hope this benchmark could facilitate the widespread adoption and enhancement of
medical LLMs within China. Check details in
\url{https://cmedbenchmark.llmzoo.com/}. | http://arxiv.org/pdf/2308.08833 | Xidong Wang, Guiming Hardy Chen, Dingjie Song, Zhiyi Zhang, Zhihong Chen, Qingying Xiao, Feng Jiang, Jianquan Li, Xiang Wan, Benyou Wang, Haizhou Li | cs.CL, cs.AI | null | null | cs.CL | 20230817 | 20230817 | [
{
"id": "2306.05685"
},
{
"id": "2302.13971"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.14334"
},
{
"id": "1910.13461"
},
{
"id": "2306.05179"
},
{
"id": "2106.09685"
},
{
"id": "2304.01097"
},
{
"id": "2305.08322"
},
{
"id": "2305.12474"
},
{
"id": "2305.15075"
},
{
"id": "2109.07154"
},
{
"id": "2305.01526"
}
] |
2308.08833 | 77 | + CoT ✓ 31.85 (31.88) 26.90 (29.92) 33.56 (33.56) 32.75 (35.25) 29.06 (29.07) 25.12 (28.78) 32.08 (32.08) 28.58 (30.44) 29.56 (29.60) 27.56 (30.36) 28.25 (28.27) 23.56 (26.47) MedicalGPT + CoT ✓ 23.00 (23.13) 4.75 (17.00) 26.81 (27.02) 15.19 (23.02) 22.97 (22.99) 14.28 (25.16) 22.83 (22.87) 18.58 (23.92) 25.25 (25.33) 17.12 (20.59) 21.56 (21.60) 9.63 (17.86) Bentsao (本草) + CoT ✓ 20.75 (20.91) 1.30 (12.01) 20.06 (20.06) 4.13 (28.62) 19.69 (19.85) 4.31 (20.45) 23.92 (24.00) 5.58 (19.07) 18.81 (18.98) 4.81 | 2308.08833#77 | CMB: A Comprehensive Medical Benchmark in Chinese | Large Language Models (LLMs) provide a possibility to make a great
breakthrough in medicine. The establishment of a standardized medical benchmark
becomes a fundamental cornerstone to measure progression. However, medical
environments in different regions have their local characteristics, e.g., the
ubiquity and significance of traditional Chinese medicine within China.
Therefore, merely translating English-based medical evaluation may result in
\textit{contextual incongruities} to a local region. To solve the issue, we
propose a localized medical benchmark called CMB, a Comprehensive Medical
Benchmark in Chinese, designed and rooted entirely within the native Chinese
linguistic and cultural framework. While traditional Chinese medicine is
integral to this evaluation, it does not constitute its entirety. Using this
benchmark, we have evaluated several prominent large-scale LLMs, including
ChatGPT, GPT-4, dedicated Chinese LLMs, and LLMs specialized in the medical
domain. It is worth noting that our benchmark is not devised as a leaderboard
competition but as an instrument for self-assessment of model advancements. We
hope this benchmark could facilitate the widespread adoption and enhancement of
medical LLMs within China. Check details in
\url{https://cmedbenchmark.llmzoo.com/}. | http://arxiv.org/pdf/2308.08833 | Xidong Wang, Guiming Hardy Chen, Dingjie Song, Zhiyi Zhang, Zhihong Chen, Qingying Xiao, Feng Jiang, Jianquan Li, Xiang Wan, Benyou Wang, Haizhou Li | cs.CL, cs.AI | null | null | cs.CL | 20230817 | 20230817 | [
{
"id": "2306.05685"
},
{
"id": "2302.13971"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.14334"
},
{
"id": "1910.13461"
},
{
"id": "2306.05179"
},
{
"id": "2106.09685"
},
{
"id": "2304.01097"
},
{
"id": "2305.08322"
},
{
"id": "2305.12474"
},
{
"id": "2305.15075"
},
{
"id": "2109.07154"
},
{
"id": "2305.01526"
}
] |
2308.08833 | 78 | (19.85) 4.31 (20.45) 23.92 (24.00) 5.58 (19.07) 18.81 (18.98) 4.81 (13.99) 18.69 (18.85) 4.75 (18.44) ChatMed-Consult + CoT ✓ 18.25 (18.33) 9.60 (37.05) 18.88 (18.88) 19.19 (21.37) 20.16 (20.24) 16.03 (18.28) 21.25 (21.30) 18.25 (20.06) 18.12 (18.28) 16.44 (18.16) 20.88 (20.98) 11.94 (17.42) ChatGLM-Med + CoT ✓ 14.70 (20.36) 1.30 (17.81) 14.94 (20.41) 3.88 (18.36) 19.38 (20.90) 9.13 (17.19) 16.00 (19.02) 4.42 (17.48) 12.31 (16.83) 4.44 (15.50) 12.38 (15.02) 2.25 (15.59) DoctorGLM + CoT ✓ 4.40 (16.95) 6.95 (21.56) | 2308.08833#78 | CMB: A Comprehensive Medical Benchmark in Chinese | Large Language Models (LLMs) provide a possibility to make a great
breakthrough in medicine. The establishment of a standardized medical benchmark
becomes a fundamental cornerstone to measure progression. However, medical
environments in different regions have their local characteristics, e.g., the
ubiquity and significance of traditional Chinese medicine within China.
Therefore, merely translating English-based medical evaluation may result in
\textit{contextual incongruities} to a local region. To solve the issue, we
propose a localized medical benchmark called CMB, a Comprehensive Medical
Benchmark in Chinese, designed and rooted entirely within the native Chinese
linguistic and cultural framework. While traditional Chinese medicine is
integral to this evaluation, it does not constitute its entirety. Using this
benchmark, we have evaluated several prominent large-scale LLMs, including
ChatGPT, GPT-4, dedicated Chinese LLMs, and LLMs specialized in the medical
domain. It is worth noting that our benchmark is not devised as a leaderboard
competition but as an instrument for self-assessment of model advancements. We
hope this benchmark could facilitate the widespread adoption and enhancement of
medical LLMs within China. Check details in
\url{https://cmedbenchmark.llmzoo.com/}. | http://arxiv.org/pdf/2308.08833 | Xidong Wang, Guiming Hardy Chen, Dingjie Song, Zhiyi Zhang, Zhihong Chen, Qingying Xiao, Feng Jiang, Jianquan Li, Xiang Wan, Benyou Wang, Haizhou Li | cs.CL, cs.AI | null | null | cs.CL | 20230817 | 20230817 | [
{
"id": "2306.05685"
},
{
"id": "2302.13971"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.14334"
},
{
"id": "1910.13461"
},
{
"id": "2306.05179"
},
{
"id": "2106.09685"
},
{
"id": "2304.01097"
},
{
"id": "2305.08322"
},
{
"id": "2305.12474"
},
{
"id": "2305.15075"
},
{
"id": "2109.07154"
},
{
"id": "2305.01526"
}
] |
2308.08833 | 79 | 12.38 (15.02) 2.25 (15.59) DoctorGLM + CoT ✓ 4.40 (16.95) 6.95 (21.56) 5.19 (21.15) 7.31 (23.44) 7.97 (20.74) 7.25 (21.01) 8.08 (21.42) 9.75 (18.61) 5.69 (19.16) 6.94 (17.11) 4.00 (15.75) 6.06 (18.67) BianQue-2 (扁鹊) 0.10 (9.17) 2.35 (17.17) 0.38 (22.55) 2.50 (16.65) 0.34 (19.84) 3.28 (15.62) 0.37 (28.96) 3.06 (19.82) 0.81 (36.61) 3.88 (16.24) ✓ + CoT Avg 43.88 (44.00) 43.26 (44.37) 35.04 (35.17) 40.98 (41.84) 30.73 (30.74) 27.41 (30.20) 23.74 (23.82) 13.26 (21.26) 20.32 (20.44) 4.15 (18.76) | 2308.08833#79 | CMB: A Comprehensive Medical Benchmark in Chinese | Large Language Models (LLMs) provide a possibility to make a great
breakthrough in medicine. The establishment of a standardized medical benchmark
becomes a fundamental cornerstone to measure progression. However, medical
environments in different regions have their local characteristics, e.g., the
ubiquity and significance of traditional Chinese medicine within China.
Therefore, merely translating English-based medical evaluation may result in
\textit{contextual incongruities} to a local region. To solve the issue, we
propose a localized medical benchmark called CMB, a Comprehensive Medical
Benchmark in Chinese, designed and rooted entirely within the native Chinese
linguistic and cultural framework. While traditional Chinese medicine is
integral to this evaluation, it does not constitute its entirety. Using this
benchmark, we have evaluated several prominent large-scale LLMs, including
ChatGPT, GPT-4, dedicated Chinese LLMs, and LLMs specialized in the medical
domain. It is worth noting that our benchmark is not devised as a leaderboard
competition but as an instrument for self-assessment of model advancements. We
hope this benchmark could facilitate the widespread adoption and enhancement of
medical LLMs within China. Check details in
\url{https://cmedbenchmark.llmzoo.com/}. | http://arxiv.org/pdf/2308.08833 | Xidong Wang, Guiming Hardy Chen, Dingjie Song, Zhiyi Zhang, Zhihong Chen, Qingying Xiao, Feng Jiang, Jianquan Li, Xiang Wan, Benyou Wang, Haizhou Li | cs.CL, cs.AI | null | null | cs.CL | 20230817 | 20230817 | [
{
"id": "2306.05685"
},
{
"id": "2302.13971"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.14334"
},
{
"id": "1910.13461"
},
{
"id": "2306.05179"
},
{
"id": "2106.09685"
},
{
"id": "2304.01097"
},
{
"id": "2305.08322"
},
{
"id": "2305.12474"
},
{
"id": "2305.15075"
},
{
"id": "2109.07154"
},
{
"id": "2305.01526"
}
] |
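Chunks 76–79 above carry a flattened results table: for each model, one row of accuracies per exam category plus a "+ CoT" row for chain-of-thought prompting. A hedged sketch of the kind of multiple-choice scoring loop behind such numbers — `ask_model` is a hypothetical inference callable, and the answer-extraction regex is an assumption, not the paper's harness:

```python
import re

def extract_choice(output: str) -> str | None:
    """Pull the first standalone option letter A-E out of a model response."""
    m = re.search(r"\b([A-E])\b", output)
    return m.group(1) if m else None

def accuracy(items, ask_model, use_cot: bool = False) -> float:
    """Score items shaped like {"question": str, "options": str, "answer": "A".."E"}."""
    hits = 0
    for item in items:
        prompt = f"{item['question']}\n{item['options']}\n"
        prompt += ("Let's think step by step, then give the option letter."
                   if use_cot else "Answer with the option letter only.")
        pred = extract_choice(ask_model(prompt))
        hits += pred == item["answer"]
    return hits / len(items)
```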
2308.08155 | 0 | arXiv:2308.08155v2 [cs.AI] 3 Oct 2023
# AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
Qingyun Wu†, Gagan Bansal∗, Jieyu Zhang±, Yiran Wu†, Beibin Li∗
Erkang Zhu∗, Li Jiang∗, Xiaoyun Zhang∗, Shaokun Zhang†, Jiale Liu♯
Ahmed Awadallah∗, Ryen W. White∗, Doug Burger∗, Chi Wang∗1
∗Microsoft Research, †Pennsylvania State University
±University of Washington, ♯Xidian University
[Figure 1 residue; recoverable content: an example task — "Plot a chart of NVDA and TESLA stock price change" — solved through multi-agent conversation, with code execution, a "yfinance is not installed" error, a "pip install yfinance" fix, and a revised plot; panel titles: Multi-Agent Conversations, Flexible Conversation Patterns, Example Agent Chat] | 2308.08155#0 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
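The AutoGen abstract above describes agents that converse, execute code, and incorporate human input. A minimal Python sketch of that two-agent loop, assuming the `pyautogen` package and an OpenAI-style model config (option names can vary across AutoGen versions):

```python
# A hedged sketch of AutoGen's assistant/user-proxy pattern, mirroring the
# stock-plotting example from the paper's first figure. "gpt-4" and the
# work_dir value are placeholder assumptions.
import autogen

assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": [{"model": "gpt-4", "api_key": "YOUR_KEY"}]},
)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",  # fully automated back-and-forth
    code_execution_config={"work_dir": "coding"},  # run generated code locally
)

# The proxy executes the assistant's code, feeds back any error (e.g. a
# missing yfinance package), and the assistant revises until the task is done.
user_proxy.initiate_chat(
    assistant,
    message="Plot a chart of NVDA and TESLA stock price change YTD.",
)
```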
2308.08285 | 0 | arXiv:2308.08285v1 [cs.IR] 16 Aug 2023
# Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval
Guangyuan Ma1,2*, Xing Wu1,2*, Peng Wang1,2, Zijia Lin3, Songlin Hu1,2
1 Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China
2 School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China
3 Kuaishou Technology
{maguangyuan,wuxing,wangpeng2022,husonglin}@iie.ac.cn, [email protected]
# Abstract | 2308.08285#0 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | In this paper, we systematically study the potential of pre-training with
Large Language Model(LLM)-based document expansion for dense passage retrieval.
Concretely, we leverage the capabilities of LLMs for document expansion, i.e.
query generation, and effectively transfer expanded knowledge to retrievers
using pre-training strategies tailored for passage retrieval. These strategies
include contrastive learning and bottlenecked query generation. Furthermore, we
incorporate a curriculum learning strategy to reduce the reliance on LLM
inferences. Experimental results demonstrate that pre-training with LLM-based
document expansion significantly boosts the retrieval performance on
large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain
retrieval abilities, making it more widely applicable for retrieval when
initializing with no human-labeled data. | http://arxiv.org/pdf/2308.08285 | Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu | cs.IR, cs.CL | 10 pages, 3 tables, 4 figures, under review | null | cs.IR | 20230816 | 20230816 | [
{
"id": "2203.05765"
},
{
"id": "2205.09153"
},
{
"id": "2204.10641"
},
{
"id": "2212.07841"
},
{
"id": "2304.03158"
},
{
"id": "2205.12035"
},
{
"id": "2102.07662"
},
{
"id": "2003.07820"
}
] |
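The 2308.08285 abstract mentions contrastive learning over LLM-expanded documents (generated queries paired with their source passages). A minimal sketch of that objective with in-batch negatives — the encoder is abstracted away, and nothing here is the paper's actual code:

```python
import torch
import torch.nn.functional as F

# InfoNCE-style contrastive loss for a dual encoder: each LLM-generated query
# should score highest against the passage it was generated from; the other
# passages in the batch act as negatives. Embedding shapes: (batch, dim).
def contrastive_loss(query_emb: torch.Tensor, passage_emb: torch.Tensor,
                     temperature: float = 0.05) -> torch.Tensor:
    q = F.normalize(query_emb, dim=-1)
    p = F.normalize(passage_emb, dim=-1)
    logits = q @ p.T / temperature                      # all query-passage scores
    labels = torch.arange(q.size(0), device=q.device)   # positives on the diagonal
    return F.cross_entropy(logits, labels)

# Toy usage with random tensors standing in for encoder outputs:
loss = contrastive_loss(torch.randn(8, 768, requires_grad=True), torch.randn(8, 768))
loss.backward()
```

The temperature of 0.05 is a common choice in dense-retrieval work, not a value taken from this paper.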