---
license: cc-by-4.0
task_categories:
- translation
language:
- en
- zh
# … (configs omitted)
---
## Dataset Sources
- **Paper**: BenchMAX: A Comprehensive Multilingual Evaluation Suite for Large Language Models
- **Link**: https://huggingface.co/papers/2502.07346
- **Repository**: https://github.com/CONE-MT/BenchMAX

## Dataset Description
BenchMAX_Domain_Translation is part of the BenchMAX suite and evaluates translation capability in specific domains.

We collect the multi-way parallel domain data from other BenchMAX tasks, such as the math and code tasks. Each sample contains one or three human-annotated translations.

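To inspect the files locally, one option is to download this dataset repository with the Hugging Face CLI. This is a minimal sketch; the repo id below is an assumption, so substitute the id shown at the top of this dataset page if it differs.

```bash
# Download the dataset files for local inspection.
# The repo id is an assumption; use the id shown on this dataset page.
pip install -U "huggingface_hub[cli]"
huggingface-cli download LLaMAX/BenchMAX_Domain_Translation --repo-type dataset --local-dir ./BenchMAX_Domain_Translation
```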
## Usage

```bash
# Clone BenchMAX (with submodules) and install its dependencies.
git clone --recurse-submodules https://github.com/CONE-MT/BenchMAX.git
cd BenchMAX
pip install -r requirements.txt

# For each domain task, translate English into the 16 target languages and back,
# then score the outputs with spBLEU.
# ${model} is the model to evaluate; set it before running (a sketch follows this block).
cd tasks/translation
tasks=("ifeval" "gpqa" "lcb_v4" "mgsm" "humaneval" "nexus" "arenahard")
max_tokens_list=(512 3072 2048 1024 1024 512 3072)
for i in "${!tasks[@]}"; do
    task=${tasks[$i]}
    max_tokens=${max_tokens_list[$i]}
    # Generate en->X and X->en translations with the vLLM backend.
    python generate_translation.py -s en -t zh,es,fr,de,ru,ja,th,sw,bn,te,ar,ko,vi,cs,hu,sr --task-name $task --model-name ${model} --infer-backend vllm --max-tokens ${max_tokens}
    python generate_translation.py -s zh,es,fr,de,ru,ja,th,sw,bn,te,ar,ko,vi,cs,hu,sr -t en --task-name $task --model-name ${model} --infer-backend vllm --max-tokens ${max_tokens}

    # Score both directions with spBLEU.
    python evaluate_translation.py -s en -t zh,es,fr,de,ru,ja,th,sw,bn,te,ar,ko,vi,cs,hu,sr --task-name $task --model-name ${model} --metrics spBLEU
    python evaluate_translation.py -s zh,es,fr,de,ru,ja,th,sw,bn,te,ar,ko,vi,cs,hu,sr -t en --task-name $task --model-name ${model} --metrics spBLEU
done
```

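The loop above reads the model to evaluate from a `model` shell variable that the snippet does not set. A minimal sketch, assuming any model identifier the vLLM backend can load (the id below is only an illustration):

```bash
# Set before running the loop above; the model id here is only an example.
model=Qwen/Qwen2.5-7B-Instruct
```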
## Supported Languages
Arabic, Bengali, Chinese, Czech, English, French, German, Hungarian, Japanese, Korean, Serbian, Spanish, Swahili, Telugu, Thai, Russian, Vietnamese

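For convenience, the language codes passed to `-s`/`-t` in the Usage commands correspond to these languages (standard ISO 639-1 codes, listed here as a sketch):

```bash
# Codes used by generate_translation.py / evaluate_translation.py above.
declare -A lang_names=(
  [ar]=Arabic  [bn]=Bengali [zh]=Chinese   [cs]=Czech    [en]=English
  [fr]=French  [de]=German  [hu]=Hungarian [ja]=Japanese [ko]=Korean
  [sr]=Serbian [es]=Spanish [sw]=Swahili   [te]=Telugu   [th]=Thai
  [ru]=Russian [vi]=Vietnamese
)
```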