Commit f4b18fe (verified, parent 01925af) by sbnc: Update README.md

Files changed (1): README.md (+5, -0)
# mlx-community/CodeLlama-7b-Instruct-hf-4bit-mlx-2

Unlike [mlx-community/CodeLlama-7b-Instruct-hf-4bit-MLX](https://huggingface.co/mlx-community/CodeLlama-7b-Instruct-hf-4bit-MLX), this model was converted from the base model using a newer MLX-LM version, which applies an improved quantization scheme and produces the model in a more standard format. For context, see [Issue #130](https://github.com/ml-explore/mlx-lm/issues/130) and [PR #114](https://github.com/ml-explore/mlx-lm/pull/114) in the [MLX-LM](https://github.com/ml-explore/mlx-lm) repo.

This model [mlx-community/CodeLlama-7b-Instruct-hf-4bit-mlx-2](https://huggingface.co/mlx-community/CodeLlama-7b-Instruct-hf-4bit-mlx-2) was
converted to MLX format from [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf)
using mlx-lm version **0.23.2**.
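Since the README describes a model converted with mlx-lm, a usage sketch may help readers try it. This is a minimal example, assuming mlx-lm (0.23.x) is installed via `pip install mlx-lm` and that you are on Apple silicon; the prompt text is illustrative only, and generation parameters are left at their defaults.

```python
# Minimal sketch: load the 4-bit MLX model and generate a completion
# with mlx-lm's high-level API (assumes `pip install mlx-lm`, Apple silicon).
from mlx_lm import load, generate

# Downloads the quantized weights from the Hugging Face Hub on first use.
model, tokenizer = load("mlx-community/CodeLlama-7b-Instruct-hf-4bit-mlx-2")

# Illustrative prompt; CodeLlama-Instruct works best with its chat template.
prompt = "Write a Python function that reverses a string."
if tokenizer.chat_template is not None:
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
print(response)
```

The chat-template step mirrors the snippet mlx-lm itself suggests for instruct models; plain prompts also work, but typically produce lower-quality completions.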