AlejandroOlmedo committed · verified
Commit 892e1a6 · Parent(s): a62628a

Update README.md

Files changed (1): README.md (+6 −1)
README.md CHANGED
@@ -18,8 +18,13 @@ tags:
 **A fine-tuned version of Deepseek-R1-Distilled-Qwen-1.5B that surpasses the performance of OpenAI’s o1-preview with just 1.5B parameters on popular math evaluations.**
 
 *Special thanks to Agentica for fine-tuning this version of Deepseek-R1-Distilled-Qwen-1.5B. More information about it can be found here: https://huggingface.co/agentica-org/DeepScaleR-1.5B-Preview.*
+</a>
+<a href="https://huggingface.co/agentica-org" style="margin: 2px;">
+<img alt="Hugging Face" src="https://img.shields.io/badge/Agentica-fcd022?style=for-the-badge&logo=huggingface&logoColor=000&labelColor" style="display: inline-block; vertical-align: middle;"/>
+</a>
 
-I simply converted it to MLX format with a quantized in 8-bits for better performance on Apple Silicon Macs (M1,M2,M3,M4 Chips).
+
+I simply converted it to MLX format with 8-bit quantization for better performance on Apple Silicon Macs (M1, M2, M3, M4 chips).
 
 # Alejandroolmedo/DeepScaleR-1.5B-Preview-Q8-mlx
 
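The conversion described in the added line can be reproduced with the `mlx-lm` tooling; a minimal sketch, assuming `mlx-lm` is installed and that the flag names below match your installed version (check `python -m mlx_lm.convert --help`):

```shell
# Install Apple's MLX language-model utilities (requires an Apple Silicon Mac)
pip install mlx-lm

# Convert the fine-tuned Hugging Face model to MLX format,
# quantizing the weights to 8 bits (-q enables quantization)
python -m mlx_lm.convert \
    --hf-path agentica-org/DeepScaleR-1.5B-Preview \
    --mlx-path DeepScaleR-1.5B-Preview-Q8-mlx \
    -q --q-bits 8

# Run the quantized model locally to verify the conversion
python -m mlx_lm.generate \
    --model DeepScaleR-1.5B-Preview-Q8-mlx \
    --prompt "Prove that the sum of two even integers is even."
```

The `--mlx-path` output directory here is an illustrative name; the resulting folder is what gets uploaded as the `-Q8-mlx` repository.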