## Model Details

Converted from the XOR weights from PygmalionAI's release https://huggingface.co/PygmalionAI/metharme-7b

**Not uploaded yet.** Currently quantizing for KoboldAI use with https://github.com/0cc4m/GPTQ-for-LLaMa

Metharme 7B is an instruct model based on Meta's LLaMA-7B.
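An earlier revision of this README recorded the GPTQ-for-LLaMa invocation used for the 4-bit conversion. A sketch of that command is below; the model path and output filename are as they appeared in that revision and may differ for the in-progress KoboldAI build:

```sh
# 4-bit GPTQ quantization using the c4 calibration set,
# true-sequential quantization order, and group size 128;
# writes the result as a safetensors checkpoint.
python llama.py /metharme-7b c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors 4bit-128g.safetensors
```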