xuhuang87 committed
Commit 5df7530 · 1 Parent(s): a7d864d

update README.md

Files changed (1)
  1. README.md +41 -3
README.md CHANGED
@@ -59,17 +59,55 @@ configs:
  data_files: arenahard_hu.jsonl
  - config_name: sr
  data_files: arenahard_sr.jsonl
+ tags:
+ - multilingual
+ - instruction-following
  ---
  ## Dataset Sources

  - **Paper**: BenchMAX: A Comprehensive Multilingual Evaluation Suite for Large Language Models
- - **Link**: https://arxiv.org/pdf/2502.07346
+ - **Link**: https://huggingface.co/papers/2502.07346
  - **Repository**: https://github.com/CONE-MT/BenchMAX

  ## Dataset Description
- BenchMAX_Model-based is a dataset of BenchMAX, sourcing from [m-ArenaHard](https://huggingface.co/datasets/CohereForAI/m-ArenaHard).
+ BenchMAX_Model-based is a dataset of BenchMAX, sourced from [m-ArenaHard](https://huggingface.co/datasets/CohereForAI/m-ArenaHard), which evaluates instruction-following capability via model-based judgment.

- We extend the original English dataset to 16 non-English languages by first translating and then manual post-editing.
+ We extend the original dataset by translation to the languages it does not cover.
+ Then manual post-editing is applied to all non-English languages.
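+
+ For example, a single language file can be fetched straight from the Hub; the repo id below is illustrative only, so substitute this dataset's actual id:
+
+ ```bash
+ # Hypothetical repo id, shown only to illustrate the per-language jsonl layout.
+ huggingface-cli download LLaMAX/BenchMAX_Model-based arenahard_sr.jsonl \
+     --repo-type dataset --local-dir .
+ ```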
+
+ ## Usage
+
+ ```bash
+ git clone https://github.com/CONE-MT/BenchMAX.git
+ cd BenchMAX
+ pip install -r requirements.txt
+
+ cd tasks/arenahard
+ bash prepare.sh
+ ```
+
+ Then modify the model configs in `arena-hard-auto/config`.
+ Please add your model config to `api_config.yaml`, and add your model name to the model lists in the other configs, such as `gen_answer_config_*.yaml` and `judge_config_*.yaml`.
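+
+ As a concrete illustration, a new `api_config.yaml` entry might look like the sketch below; the entry name, endpoint, and field values are assumptions, so mirror the structure of an existing entry in the file.
+
+ ```bash
+ # Minimal sketch, assuming an OpenAI-compatible endpoint served locally; the
+ # field names follow the style of existing api_config.yaml entries.
+ cat >> arena-hard-auto/config/api_config.yaml <<'EOF'
+ llama-3.1-8b-instruct:
+     model_name: meta-llama/Llama-3.1-8B-Instruct
+     endpoints:
+         - api_base: http://localhost:8000/v1
+           api_key: empty
+     api_type: openai
+     parallel: 8
+ EOF
+ ```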
+
+ Finally, deploy your model and run the evaluation: your model first generates responses to the prompts, and then DeepSeek-V3 judges them against GPT-4o responses, as in the paper.
+
+ ```bash
+ # serve your model with vLLM
+ vllm serve meta-llama/Llama-3.1-8B-Instruct
+
+ # generate responses for all languages
+ cd arena-hard-auto
+ languages=(en ar bn cs de es fr hu ja ko ru sr sw te th vi zh)
+ for lang in "${languages[@]}"; do
+     python gen_answer.py --setting-file config/gen_answer_config_${lang}.yaml
+ done
+
+ # run LLM-as-a-judge
+ export OPENAI_API_KEY=...
+ for lang in "${languages[@]}"; do
+     python gen_judgment.py --setting-file config/judge_config_${lang}.yaml
+ done
+ ```
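+
+ Once judging finishes, the win rates can be printed. Assuming this fork keeps upstream arena-hard-auto's `show_result.py` (worth verifying in the repository), the final step would be:
+
+ ```bash
+ # Assumed final step: upstream arena-hard-auto computes win rates with
+ # show_result.py; check the fork for the exact script name and flags.
+ python show_result.py
+ ```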

  ## Supported Languages
  Arabic, Bengali, Chinese, Czech, English, French, German, Hungarian, Japanese, Korean, Serbian, Spanish, Swahili, Telugu, Thai, Russian, Vietnamese