Evaluation of the model was conducted using the PoLL (Pool of LLM) technique, as …
(two per evaluator). The evaluators included GPT-4o, Gemini-1.5-pro, and Claude-3.5-Sonnet.

Performance Scores (on a scale of 5):

| Model                                        |  Score   | # params (Billion) | Size (GB) |
|---------------------------------------------:|:--------:|:------------------:|:---------:|
| gpt-4o                                       |   4.13   |        N/A         |    N/A    |
| mistralai/Mixtral-8x7B-Instruct-v0.1         |   3.71   |        46.7        |   93.4    |
| **cmarkea/Mixtral-8x7B-Instruct-v0.1-4bit**  | **3.68** |      **46.7**      | **23.35** |
| gpt-3.5-turbo                                |   3.66   |        175         |    350    |
| TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ     |   3.56   |        46.7        |   46.7    |
| mistralai/Mistral-7B-Instruct-v0.2           |   1.98   |        7.25        |   14.5    |
| cmarkea/bloomz-7b1-mt-sft-chat               |   1.69   |        7.07        |   14.14   |
| cmarkea/bloomz-3b-dpo-chat                   |   1.68   |        3           |    6      |
| cmarkea/bloomz-3b-sft-chat                   |   1.51   |        3           |    6      |
| croissantllm/CroissantLLMChat-v0.1           |   1.19   |        1.3         |    2.7    |
| cmarkea/bloomz-560m-sft-chat                 |   1.04   |        0.56        |   1.12    |
| OpenLLM-France/Claire-Mistral-7B-0.1         |   0.38   |        7.25        |   14.5    |
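As a sketch of how the pooled score is formed (assuming simple mean pooling over all grades in the pool; the grade values below are illustrative, not the actual judgments from this evaluation):

```python
from statistics import mean

# Illustrative grades on the 5-point scale from the pool of three
# LLM judges, with two grades per evaluator, as described above.
grades = {
    "GPT-4o": [4.0, 4.5],
    "Gemini-1.5-pro": [3.5, 4.0],
    "Claude-3.5-Sonnet": [4.0, 3.5],
}

# Pool by averaging all six grades into a single score.
score = mean(g for judge_grades in grades.values() for g in judge_grades)
print(round(score, 2))
```

Averaging across judges rather than relying on a single grader is what makes the pooled score less sensitive to any one judge's bias.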
The impact of 4-bit quantization on performance is negligible: the quantized model scores 3.68 versus 3.71 for the half-precision original, at a quarter of the memory footprint (23.35 GB vs. 93.4 GB).
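The size column is consistent with a simple parameters × bytes-per-parameter estimate; a quick sanity check (a sketch for illustration, the helper name is made up):

```python
def model_size_gb(n_params_billion: float, bits_per_param: int) -> float:
    """Approximate in-memory weight size: parameter count times bytes per parameter."""
    return n_params_billion * bits_per_param / 8

# Half-precision (16-bit) baseline: mistralai/Mixtral-8x7B-Instruct-v0.1
fp16_size = model_size_gb(46.7, 16)   # ~93.4 GB
# 4-bit quantized: cmarkea/Mixtral-8x7B-Instruct-v0.1-4bit
q4_size = model_size_gb(46.7, 4)      # ~23.35 GB
```

This is why the 4-bit model fits in roughly a quarter of the memory of the half-precision original while keeping the same parameter count.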