update README.md
README.md CHANGED
@@ -70,9 +70,9 @@ tags:
 - **Repository**: https://github.com/CONE-MT/BenchMAX
 
 ## Dataset Description
-BenchMAX_Model-based is a dataset of BenchMAX, sourcing from [m-ArenaHard](https://huggingface.co/datasets/CohereForAI/m-ArenaHard), which evaluates the instruction following capability via model-based judgment.
+BenchMAX_Model-based is a dataset in the [BenchMAX](https://arxiv.org/pdf/2502.07346) suite, sourced from [m-ArenaHard](https://huggingface.co/datasets/CohereForAI/m-ArenaHard), which evaluates instruction-following capability via model-based judgment.
 
-We extend the original dataset to languages that not supported by
+We extend the original dataset to languages not supported by m-ArenaHard using Google Translate.
 Then manual post-editing is applied to all non-English languages.
 
 ## Usage
@@ -87,7 +87,8 @@ bash prepare.sh
 ```
 
 Then modify the model configs in `arena-hard-auto/config`.
-Please add your model config to `api_config.yaml` and add your model name to the model list in other
+Please add your model config to `api_config.yaml` and add your model name to the model list in the other configs, such as `gen_answer_config_*.yaml`.
+If you want to change the judge model, modify `judge_config_*.yaml`.
 
 Finally, deploy your model and run the evaluation: your model first generates responses to the prompts, and DeepSeek-V3 then judges them against GPT-4o responses, as we do in the paper.
 
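For the config step above, a model entry might look like the following sketch. The field names assume the upstream arena-hard-auto `api_config.yaml` schema; `my-model` and the local endpoint URL are placeholders, not BenchMAX-specific values:

```bash
# Sketch only: register a locally served, OpenAI-compatible model in
# arena-hard-auto's API config. Field names assume the upstream
# arena-hard-auto schema; "my-model" and the endpoint are placeholders.
cat >> arena-hard-auto/config/api_config.yaml << 'EOF'
my-model:
    model_name: my-model
    endpoints:
        - api_base: http://localhost:8000/v1
          api_key: empty
    api_type: openai
    parallel: 8
EOF
```

In the upstream harness, `parallel` controls how many concurrent requests are sent to the endpoint during generation.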
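For the final deploy-and-evaluate step, a plausible sequence, assuming the upstream arena-hard-auto entry points (`gen_answer.py`, `gen_judgment.py`) and a vLLM deployment; the exact scripts and per-language config names may differ in the BenchMAX fork:

```bash
# Sketch only: deploy the model, generate answers, then run the judge.
# Script names and flags assume upstream arena-hard-auto; the per-language
# config names below are illustrative stand-ins for gen_answer_config_*.yaml.

# 1. Serve your model behind an OpenAI-compatible API (in a separate shell).
vllm serve my-model --port 8000

# 2. Generate your model's responses to the prompts.
cd arena-hard-auto
python gen_answer.py --setting-file config/gen_answer_config_en.yaml

# 3. Have the judge model score the responses against the GPT-4o baseline.
python gen_judgment.py --setting-file config/judge_config_en.yaml
```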