leaderboard-pr-bot committed
Commit a7b4f1e · 1 Parent(s): afca346

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions

Files changed (1)
  1. README.md +14 -0
README.md CHANGED
@@ -84,3 +84,17 @@ with torch.inference_mode():
  stopping_criteria=stopping_criteria
  )
  ```
+
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_willnguyen__lacda-2-7B-chat-v0.1)
+
+ | Metric              | Value |
+ |---------------------|-------|
+ | Avg.                | 43.91 |
+ | ARC (25-shot)       | 53.07 |
+ | HellaSwag (10-shot) | 77.57 |
+ | MMLU (5-shot)       | 46.03 |
+ | TruthfulQA (0-shot) | 44.57 |
+ | Winogrande (5-shot) | 74.19 |
+ | GSM8K (5-shot)      | 6.29  |
+ | DROP (3-shot)       | 5.65  |
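
The per-task details behind the table above live in the dataset linked in the diff (`open-llm-leaderboard/details_willnguyen__lacda-2-7B-chat-v0.1`). Below is a minimal sketch, not part of this PR, of loading them with the `datasets` library; the exact config names vary per evaluation run, so the sketch discovers them at runtime rather than assuming one.

```python
# Sketch only: inspect the detailed Open LLM Leaderboard results for this model.
# The repo id comes from the link in the diff above; config names are listed at
# runtime because each benchmark/run is stored as its own dataset config.
from datasets import get_dataset_config_names, load_dataset

repo_id = "open-llm-leaderboard/details_willnguyen__lacda-2-7B-chat-v0.1"

configs = get_dataset_config_names(repo_id)  # one config per benchmark/run
print(configs)

details = load_dataset(repo_id, configs[0])  # load the first listed config
print(details)
```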