AlejandroOlmedo committed
Commit fc0f65c · verified · 1 Parent(s): 976ff2c

Update README.md

Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -31,12 +31,12 @@ I simply converted it to MLX format (using mlx-lm version **0.21.4**.) with a qu
 ## Other Types:
 | Link | Type | Size | Notes |
 |-------|-----------|-----------|-----------|
-| [MLX](https://huggingface.co/Alejandroolmedo/OpenThinker-7B-8bit-mlx) | 8-bit | 8.10 GB | **Best Quality** |
-| [MLX](https://huggingface.co/Alejandroolmedo/OpenThinker-7B-4bit-mlx) | 4-bit | 4.30 GB | Good Quality |
+| [MLX](https://huggingface.co/AlejandroOlmedo/OpenThinker-7B-8bit-mlx) | 8-bit | 8.10 GB | **Best Quality** |
+| [MLX](https://huggingface.co/AlejandroOlmedo/OpenThinker-7B-4bit-mlx) | 4-bit | 4.30 GB | Good Quality |
 
-# Alejandroolmedo/OpenThinker-7B-4bit-mlx
+# AlejandroOlmedo/OpenThinker-7B-4bit-mlx
 
-The Model [Alejandroolmedo/OpenThinker-7B-4bit-mlx](https://huggingface.co/Alejandroolmedo/OpenThinker-7B-4bit-mlx) was
+The Model [AlejandroOlmedo/OpenThinker-7B-4bit-mlx](https://huggingface.co/AlejandroOlmedo/OpenThinker-7B-4bit-mlx) was
 converted to MLX format from [open-thoughts/OpenThinker-7B](https://huggingface.co/open-thoughts/OpenThinker-7B)
 using mlx-lm version **0.21.4**.
 
@@ -49,7 +49,7 @@ pip install mlx-lm
 ```python
 from mlx_lm import load, generate
 
-model, tokenizer = load("Alejandroolmedo/OpenThinker-7B-4bit-mlx")
+model, tokenizer = load("AlejandroOlmedo/OpenThinker-7B-4bit-mlx")
 
 prompt = "hello"
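# The usage snippet above is cut off by the diff view. A minimal sketch of how
# the stock mlx-lm example typically continues; the chat-template handling and
# generate() call below are an assumed continuation, not shown in this diff.
if tokenizer.chat_template is not None:
    # Wrap the raw prompt in a chat message and apply the model's chat template
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

# Run generation with the loaded model and tokenizer
response = generate(model, tokenizer, prompt=prompt, verbose=True)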