AlejandroOlmedo committed
Commit 3be7211 · verified · 1 Parent(s): ca5c838

Update README.md

Files changed (1): README.md (+5 −5)
README.md CHANGED
@@ -30,13 +30,13 @@ I simply converted it to MLX format with a quantization of 8-bit for better perf
 ## Other Types:
 | Link | Type | Size| Notes |
 |-------|-----------|-----------|-----------|
-| [MLX] (https://huggingface.co/Alejandroolmedo/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math-8bit-mlx) | 8-bit | 8.10 GB | **Best Quality** |
-| [MLX] (https://huggingface.co/Alejandroolmedo/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math-4bit-mlx) | 4-bit | 4.30 GB | Good Quality|
+| [MLX] (https://huggingface.co/AlejandroOlmedo/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math-8bit-mlx) | 8-bit | 8.10 GB | **Best Quality** |
+| [MLX] (https://huggingface.co/AlejandroOlmedo/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math-4bit-mlx) | 4-bit | 4.30 GB | Good Quality|
 
 
-# Alejandroolmedo/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math-8bit-mlx
+# AlejandroOlmedo/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math-8bit-mlx
 
-The Model [Alejandroolmedo/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math-8bit-mlx](https://huggingface.co/Alejandroolmedo/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math-8bit-mlx) was converted to MLX format from [Dongwei/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math](https://huggingface.co/Dongwei/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math) using mlx-lm version **0.20.5**.
+The Model [AlejandroOlmedo/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math-8bit-mlx](https://huggingface.co/AlejandroOlmedo/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math-8bit-mlx) was converted to MLX format from [Dongwei/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math](https://huggingface.co/Dongwei/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math) using mlx-lm version **0.20.5**.
 
 ## Use with mlx
 
@@ -47,7 +47,7 @@ pip install mlx-lm
 ```python
 from mlx_lm import load, generate
 
-model, tokenizer = load("Alejandroolmedo/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math-8bit-mlx")
+model, tokenizer = load("AlejandroOlmedo/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math-8bit-mlx")
 
 prompt="hello"
 
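The second hunk shows only the start of the README's usage snippet, cutting off at `prompt="hello"`. Since the README follows the stock mlx-lm model-card template, the snippet presumably continues with the usual chat-template handling and `generate` call; a minimal sketch of that full pattern, assuming mlx-lm ≥ 0.20.5 on Apple silicon:

```python
from mlx_lm import load, generate

# Download (on first use) and load the 8-bit MLX weights from the Hub.
model, tokenizer = load("AlejandroOlmedo/DeepSeek-R1-Distill-Qwen-7B-GRPO_Math-8bit-mlx")

prompt = "hello"

# If the tokenizer ships a chat template, wrap the raw prompt in it so the
# model sees the conversation format it was fine-tuned on.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Generate a completion; verbose=True streams tokens to stdout as they decode.
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```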