from dataclasses import dataclass
from enum import Enum

from src.envs import REPO_ID


@dataclass
class Task:
    benchmark: str
    metric: str
    col_name: str


# Select your tasks here
# ---------------------------------------------------
class Tasks(Enum):
    # task_key in the json file, metric_key in the json file, name to display in the leaderboard
    task1 = Task("PeKA", "acc", "PeKA*")
    task2 = Task("PersBETS", "acc", "PersBETS*")
    task3 = Task("khayyam_challenge", "acc", "Khayyam Challenge")
    task4 = Task("parsinlu_mc", "acc", "ParsiNLU MCQA")
    task5 = Task("parsinlu_nli", "acc", "ParsiNLU NLI")
    task6 = Task("parsinlu_qqp", "acc", "ParsiNLU QQP")
    # task7 = Task("persian_ARC", "acc", "Persian ARC")


NUM_FEWSHOT = 0  # Change with your few shot
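

# A minimal sketch (hypothetical helper, not used elsewhere in this file) of how
# the Tasks enum above is typically consumed: each member carries the task key in
# the results files, the metric key to read, and the column name to display.
def get_display_columns() -> list[str]:
    # e.g. ["PeKA*", "PersBETS*", "Khayyam Challenge", ...]
    return [task.value.col_name for task in Tasks]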

# ---------------------------------------------------


# Your leaderboard name
TITLE = f"""
<img src="https://huggingface.co/spaces/{REPO_ID}/resolve/main/banner_green.png" style="width:70%;display:block;margin-left:auto;margin-right:auto">
"""

# What does your leaderboard evaluate?
INTRODUCTION_TEXT = """
The Persian LLM Leaderboard is designed to be a challenging benchmark and to provide a reliable evaluation of LLMs in the Persian language.

Note: This is a demo version of the leaderboard. Two new benchmarks are introduced: *PeKA* and *PersBETS*, challenging the models' native knowledge along with
their linguistic skills and their level of bias, ethics, and trustworthiness. **These datasets are not yet public, but they will be uploaded to Hugging Face along with a detailed paper
explaining the data and the performance of relevant models.**

Note: **We plan to release an evaluation framework soon in which the details and methods of evaluation are specified.**
"""

# Which evaluations are you running? How can people reproduce what you have?
LLM_BENCHMARKS_TEXT = f"""
## ABOUT

For now, the only competitive open language models capable of properly handling Persian are multilingual ones, Meta's Llama 3.1 being the prime example.
Only a few multilingual LLMs are capable in Persian, and even those derive most of their knowledge from English. A dedicated Persian LLM is still largely out of reach, as very few models specialize in Persian in the first place.

Our goal is to provide a benchmark covering diverse domains and tasks that offers insight into how large the gap between current SOTA models is across different settings.

We use our own framework to evaluate the models on the following benchmarks (TO BE RELEASED SOON).
### Tasks

- <a href="https://arxiv.org/abs/1803.05457" target="_blank"> AI2 Reasoning Challenge </a> (25-shot) - a set of grade-school science questions.
- <a href="https://arxiv.org/abs/1905.07830" target="_blank"> HellaSwag </a> (10-shot) - a test of commonsense inference, which is easy for humans (~95%) but challenging for SOTA models.
- <a href="https://arxiv.org/abs/2009.03300" target="_blank"> MMLU </a> (5-shot) - a test to measure a text model's multitask accuracy. The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more.
- <a href="https://arxiv.org/abs/2109.07958" target="_blank"> TruthfulQA </a> (0-shot) - a test to measure a model's propensity to reproduce falsehoods commonly found online. Note: TruthfulQA is technically a 6-shot task in the Harness because each example is prepended with 6 Q/A pairs, even in the 0-shot setting.
- <a href="https://arxiv.org/abs/1907.10641" target="_blank"> Winogrande </a> (5-shot) - an adversarial and difficult Winograd benchmark at scale, for commonsense reasoning.
- <a href="https://arxiv.org/abs/2110.14168" target="_blank"> GSM8k </a> (5-shot) - diverse grade-school math word problems measuring a model's ability to solve multi-step mathematical reasoning problems.

For all these evaluations, a higher score is a better score.
We chose these benchmarks as they test a variety of reasoning and general knowledge across a wide variety of fields in 0-shot and few-shot settings.

## REPRODUCIBILITY

To reproduce our results, here is the command you can run, using [this version](https://github.com/EleutherAI/lm-evaluation-harness/tree/b281b0921b636bc36ad05c0b0b0763bd6dd43463) of the Eleuther AI Harness:
```
python main.py --model=hf-causal-experimental \
    --model_args="pretrained=<your_model>,use_accelerate=True,revision=<your_model_revision>" \
    --tasks=<task_list> \
    --num_fewshot=<n_few_shot> \
    --batch_size=1 \
    --output_path=<output_path>
```

**Note:** We evaluate all models on a single node of 8 H100s, so the global batch size is 8 for each evaluation. If you don't use parallelism, adapt your batch size to fit.
*You can expect results to vary slightly for different batch sizes because of padding.*

The tasks and few-shot parameters are:

- ARC: 25-shot, *arc-challenge* (`acc_norm`)
- HellaSwag: 10-shot, *hellaswag* (`acc_norm`)
- TruthfulQA: 0-shot, *truthfulqa-mc* (`mc2`)
- MMLU: 5-shot, *hendrycksTest-abstract_algebra,hendrycksTest-anatomy,hendrycksTest-astronomy,hendrycksTest-business_ethics,hendrycksTest-clinical_knowledge,hendrycksTest-college_biology,hendrycksTest-college_chemistry,hendrycksTest-college_computer_science,hendrycksTest-college_mathematics,hendrycksTest-college_medicine,hendrycksTest-college_physics,hendrycksTest-computer_security,hendrycksTest-conceptual_physics,hendrycksTest-econometrics,hendrycksTest-electrical_engineering,hendrycksTest-elementary_mathematics,hendrycksTest-formal_logic,hendrycksTest-global_facts,hendrycksTest-high_school_biology,hendrycksTest-high_school_chemistry,hendrycksTest-high_school_computer_science,hendrycksTest-high_school_european_history,hendrycksTest-high_school_geography,hendrycksTest-high_school_government_and_politics,hendrycksTest-high_school_macroeconomics,hendrycksTest-high_school_mathematics,hendrycksTest-high_school_microeconomics,hendrycksTest-high_school_physics,hendrycksTest-high_school_psychology,hendrycksTest-high_school_statistics,hendrycksTest-high_school_us_history,hendrycksTest-high_school_world_history,hendrycksTest-human_aging,hendrycksTest-human_sexuality,hendrycksTest-international_law,hendrycksTest-jurisprudence,hendrycksTest-logical_fallacies,hendrycksTest-machine_learning,hendrycksTest-management,hendrycksTest-marketing,hendrycksTest-medical_genetics,hendrycksTest-miscellaneous,hendrycksTest-moral_disputes,hendrycksTest-moral_scenarios,hendrycksTest-nutrition,hendrycksTest-philosophy,hendrycksTest-prehistory,hendrycksTest-professional_accounting,hendrycksTest-professional_law,hendrycksTest-professional_medicine,hendrycksTest-professional_psychology,hendrycksTest-public_relations,hendrycksTest-security_studies,hendrycksTest-sociology,hendrycksTest-us_foreign_policy,hendrycksTest-virology,hendrycksTest-world_religions* (average of all the results `acc`; see the sketch after this list)
- Winogrande: 5-shot, *winogrande* (`acc`)
- GSM8k: 5-shot, *gsm8k* (`acc`)
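
As a rough illustration, here is a minimal sketch of how the MMLU average could be computed from a harness output file. The file name and exact JSON layout are assumptions, so adapt them to your actual `--output_path`:

```python
import json

# Load the harness output (path and layout assumed for illustration).
with open("output.json") as f:
    results = json.load(f)["results"]

# Average the per-subject accuracies over all hendrycksTest-* tasks.
mmlu_accs = [v["acc"] for k, v in results.items() if k.startswith("hendrycksTest-")]
mmlu_average = sum(mmlu_accs) / len(mmlu_accs)
print(round(mmlu_average, 4))
```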

Side note on the baseline scores:
- for log-likelihood evaluation, we select the random baseline
- for GSM8K, we select the score obtained in the paper after fine-tuning a 6B model on the full GSM8K training set for 50 epochs
"""

EVALUATION_QUEUE_TEXT = """
## Important Notes

- Right now, the models added **are not automatically evaluated**.
- We may support automatic evaluation in the future on our own clusters.
- An evaluation framework will be available in the future to help everyone reproduce the results.
- We only support models with **a causal language modeling head** for now.

## Don't forget to read the FAQ and the About tabs for more information!

## First steps before submitting a model

### 1) Make sure you can load your model and tokenizer using AutoClasses:
```python
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

config = AutoConfig.from_pretrained("your model name", revision=revision)
model = AutoModelForCausalLM.from_pretrained("your model name", revision=revision)
tokenizer = AutoTokenizer.from_pretrained("your model name", revision=revision)
```
If this step fails, follow the error messages to debug your model before submitting it. It's likely your model has been improperly uploaded.

Note: make sure your model is public!

### 2) Make sure your model has an open license!
This is a leaderboard for Open LLMs, and we'd love for as many people as possible to know they can use your model 🤗

### 3) Fill up your model card
When we add extra information about models to the leaderboard, it will be automatically taken from the model card.

### 4) Select the correct precision
Not all models are converted properly from `float16` to `bfloat16`, and selecting the wrong precision can sometimes cause evaluation errors (loading a `bf16` model in `fp16` can sometimes generate NaNs, depending on the weight range).
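As a quick sanity check (a sketch, not part of the submission flow), you can load the model in the precision you plan to select and verify that the weights are finite; the model name below is a placeholder:
```python
import torch
from transformers import AutoModelForCausalLM

# Load in the precision you intend to select on the submission form.
model = AutoModelForCausalLM.from_pretrained("your model name", torch_dtype=torch.float16)

# If any parameter contains NaN/Inf values, the chosen precision is probably wrong for this checkpoint.
all_finite = all(torch.isfinite(p).all() for p in model.parameters())
print("all weights finite:", all_finite)
```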

## In case of model failure
If your model is displayed in the `FAILED` category, its execution stopped.
Make sure you have followed the above steps first.
"""

CITATION_BUTTON_LABEL = "Copy the following snippet to cite these results"
CITATION_BUTTON_TEXT = r"""
"""