---
library_name: transformers
tags: []
model-index:
- name: Lexora-Medium-7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 41.03
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DeepMount00/Lexora-Medium-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 32.7
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DeepMount00/Lexora-Medium-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 13.75
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DeepMount00/Lexora-Medium-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 7.38
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DeepMount00/Lexora-Medium-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 14.76
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DeepMount00/Lexora-Medium-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 36.95
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DeepMount00/Lexora-Medium-7B
      name: Open LLM Leaderboard
---
## How to Use
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "DeepMount00/Lexora-Medium-7B"

# Load the tokenizer and the model in bfloat16, letting accelerate place it
# on the available device(s).
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Italian prompt: "Marco bought 5 boxes of chocolates. Each box contains 12
# chocolates. He decided to give 3 chocolates to each of his 7 friends. How
# many chocolates will he have left after handing them out to his friends?"
prompt = [{'role': 'user', 'content': """Marco ha comprato 5 scatole di cioccolatini. Ogni scatola contiene 12 cioccolatini. Ha deciso di dare 3 cioccolatini a ciascuno dei suoi 7 amici. Quanti cioccolatini gli rimarranno dopo averli distribuiti ai suoi amici?"""}]

# Apply the chat template and generate. Note that temperature=0.001 with
# do_sample=True makes sampling effectively greedy.
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)
tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.001,
    do_sample=True
)
print(tokenizer.decode(tokens[0], skip_special_tokens=False))
```
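
For interactive use, the output can also be streamed token by token. Below is a minimal sketch (not part of the original card) that reuses the `model`, `tokenizer`, and `inputs` defined above together with transformers' built-in `TextStreamer`:

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated; skip_prompt avoids echoing
# the chat-template prefix. Assumes model, tokenizer, and inputs from the
# snippet above are already in scope.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    do_sample=False,  # plain greedy decoding, close to temperature≈0 sampling
    streamer=streamer,
)
```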
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_DeepMount00__Lexora-Medium-7B).
| Metric |Value|
|-------------------|----:|
|Avg. |24.43|
|IFEval (0-Shot) |41.03|
|BBH (3-Shot) |32.70|
|MATH Lvl 5 (4-Shot)|13.75|
|GPQA (0-shot) | 7.38|
|MuSR (0-shot) |14.76|
|MMLU-PRO (5-shot) |36.95|
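
The reported average is simply the unweighted mean of the six per-task scores; a quick check in Python:

```python
# Unweighted mean of the six per-task scores listed in the table above.
scores = [41.03, 32.70, 13.75, 7.38, 14.76, 36.95]
print(round(sum(scores) / len(scores), 2))  # -> 24.43
```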