AlejandroOlmedo committed on
Commit 99b083a · verified · 1 Parent(s): 83b2d72

Update README.md

Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -30,12 +30,12 @@ I simply converted it to MLX format with a quantization of 4-bit for better perf
 ## Other Types:
 | Link | Type | Size | Notes |
 |-------|-----------|-----------|-----------|
-| [MLX](https://huggingface.co/Alejandroolmedo/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math-8bit-mlx) | 8-bit | 8.10 GB | **Best Quality** |
-| [MLX](https://huggingface.co/Alejandroolmedo/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math-4bit-mlx) | 4-bit | 4.30 GB | Good Quality |
+| [MLX](https://huggingface.co/AlejandroOlmedo/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math-8bit-mlx) | 8-bit | 8.10 GB | **Best Quality** |
+| [MLX](https://huggingface.co/AlejandroOlmedo/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math-4bit-mlx) | 4-bit | 4.30 GB | Good Quality |
 
-# Alejandroolmedo/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math-4bit-mlx
+# AlejandroOlmedo/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math-4bit-mlx
 
-The Model [Alejandroolmedo/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math-4bit-mlx](https://huggingface.co/Alejandroolmedo/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math-4bit-mlx) was converted to MLX format from [Dongwei/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math](https://huggingface.co/Dongwei/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math) using mlx-lm version **0.20.5**.
+The Model [AlejandroOlmedo/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math-4bit-mlx](https://huggingface.co/AlejandroOlmedo/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math-4bit-mlx) was converted to MLX format from [Dongwei/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math](https://huggingface.co/Dongwei/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math) using mlx-lm version **0.20.5**.
 
 ## Use with mlx
 
@@ -46,7 +46,7 @@ pip install mlx-lm
 ```python
 from mlx_lm import load, generate
 
-model, tokenizer = load("Alejandroolmedo/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math-4bit-mlx")
+model, tokenizer = load("AlejandroOlmedo/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math-4bit-mlx")
 
 prompt="hello"
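The second hunk ends where the diff context ends, mid-snippet, at `prompt="hello"`. For reference, the complete usage pattern that mlx-lm's auto-generated model cards follow looks roughly like the sketch below; the chat-template handling and `generate` call are the standard mlx-lm example, not part of this diff, and running it requires Apple silicon plus downloading the quantized weights from the Hub.

```python
from mlx_lm import load, generate

# Load the 4-bit MLX weights (fetched from the Hugging Face Hub on first use).
model, tokenizer = load("AlejandroOlmedo/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math-4bit-mlx")

prompt = "hello"

# If the tokenizer ships a chat template, wrap the prompt in it so the
# instruction-tuned model sees the conversation format it was trained on.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Generate a completion; verbose=True streams tokens to stdout.
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```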