Training in French also improves the model in English, surpassing the performance of its base model.

Window context = 4k tokens

* **4-bit quantized version** is available here: [jpacifico/Chocolatine-14B-Instruct-DPO-v1.2-Q4_K_M-GGUF](https://huggingface.co/jpacifico/Chocolatine-14B-Instruct-DPO-v1.2-Q4_K_M-GGUF)

### OpenLLM Leaderboard

Chocolatine is the best-performing 14B model on the [OpenLLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) (2024/09/01).