davidkim205 committed
Commit 602f1bb · verified · 1 Parent(s): 63dfa06

Update README.md

Files changed (1): README.md (+25 -3)
@@ -3,11 +3,33 @@ language:
  - ko
  ---
  # ko-bench
- https://github.com/davidkim205/ko-bench/blob/main/data/ko_bench/ko_question.jsonl
- 80 rows in total.
- ## Structure
 
+ To fairly evaluate various LLMs, it is essential to present the same set of questions to all models. This requires a systematically curated benchmark dataset.
+
+ [Ko-Bench](https://github.com/davidkim205/ko-bench/blob/main/data/ko_bench/ko_question.jsonl) is a benchmark designed to assess the Korean language proficiency of different LLMs. Existing LLM evaluation datasets often fail to provide accurate assessments in a Korean context. Ko-Bench addresses this limitation by establishing more objective and finely tuned evaluation criteria for Korean LLMs, enabling more reliable performance comparisons.
+
+ Ko-Bench is based on the [MT-Bench](https://github.com/lm-sys/FastChat/blob/main/fastchat/llm_judge/data/mt_bench/question.jsonl) dataset but has been translated into Korean. In addition, its questions have been modified and supplemented to reflect linguistic and cultural characteristics specific to Korean, allowing for a more accurate evaluation of LLMs in a Korean-language environment.
+
+ ## Ko-Bench Generation Rules
+ Ko-Bench is based on MT-Bench but has been restructured with evaluation criteria optimized for the Korean-language environment. To achieve this, the following modifications were applied.
+ 1. **Incorporating Geographical and Cultural Elements**: Foreign place names, such as "Hawaii," were replaced with Korean landmarks like "Jeju Island" so that Korean LLMs can naturally reflect geographical and cultural aspects in their responses.
+ 2. **Enhancing Linguistic Naturalness**: Foreign words and expressions such as "casual" and "limerick" were adapted to Korean linguistic conventions, ensuring that questions sound natural in a Korean-language context.
+ 3. **Localization of Roleplay Scenarios**: Well-known international figures like "Elon Musk" and "Sheldon" were replaced with Korean celebrities such as "Cheon Song-yi" (from the drama *My Love from the Star*) and "Yoo Jae-suk", allowing the model to be evaluated on its ability to mimic Korean personalities' speech patterns and styles.
+ 4. **Applying Korean Standards**: Elements such as currency units, names, variable names, company names, and job titles were adjusted to align with Korean conventions, ensuring that models generate contextually appropriate responses in a Korean setting.
+
+ ## Ko-Bench Structure
+ Like MT-Bench, Ko-Bench consists of 8 categories, each containing 10 questions, for a total of 80 questions. Each question follows a multi-turn format: every interaction consists of two consecutive turns, just as in MT-Bench.
+
+ - **question_id**: A unique identifier representing the sequence number of the data entry within the dataset.
+ - **category**: Each question falls into one of 8 categories: Coding, Extraction, Humanities, Math, Reasoning, Roleplay, STEM (Science, Technology, Engineering, Mathematics), or Writing.
+ - **pairs**: A set of two question-answer interactions in a multi-turn dialogue.
+   - **prompt**: The initial question related to the category.
+   - **refer**: The reference answer for the prompt. The LLM's response does not have to match `refer` exactly, but it serves as a benchmark for evaluating correctness.
+   - **prompt**: A follow-up question that assumes the LLM remembers the context of the previous prompt and its response.
+   - **refer**: The reference answer for the second prompt, serving as a guideline for evaluating the LLM's response.
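To illustrate the schema described above, here is a minimal sketch that parses and validates one `ko_question.jsonl` record. The exact JSON layout (a `pairs` list of `{prompt, refer}` objects) is an assumption inferred from the field list, not taken from the actual file.

```python
import json

# Hypothetical single record from ko_question.jsonl, laid out according to
# the field descriptions above (the real file's exact layout may differ).
sample_line = json.dumps({
    "question_id": 1,
    "category": "Roleplay",
    "pairs": [
        {"prompt": "첫 번째 질문 (first-turn question)", "refer": None},
        {"prompt": "두 번째 질문 (follow-up question)", "refer": None},
    ],
})

def validate_question(line: str) -> dict:
    """Parse one JSONL line and check the multi-turn structure."""
    q = json.loads(line)
    assert isinstance(q["question_id"], int)
    assert isinstance(q["category"], str)
    assert len(q["pairs"]) == 2      # every question has exactly two turns
    for turn in q["pairs"]:
        assert "prompt" in turn and "refer" in turn
    return q

record = validate_question(sample_line)
print(record["category"])  # prints: Roleplay
```

Reading the full benchmark would then amount to one `validate_question` call per line of the JSONL file.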

  ```json
  [