Initial GPTQ model commit
README.md CHANGED
@@ -53,21 +53,22 @@ Each separate quant is in a different branch. See below for instructions on fet
 
 | Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
 | ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
-| main | 4 |
+| main | 4 | None | True | 35.33 GB | True | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
+| gptq-3bit--1g-actorder_True | 3 | None | True | 26.78 GB | False | AutoGPTQ | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
+| gptq-3bit-128g-actorder_False | 3 | 128 | False | 28.03 GB | False | AutoGPTQ | 3-bit, with group size 128g but no act-order. Slightly higher VRAM requirements than 3-bit None. |
+| gptq-3bit-128g-actorder_True | 3 | 128 | True | 28.03 GB | False | AutoGPTQ | 3-bit, with group size 128g and act-order. Higher quality than 128g-False but poor AutoGPTQ CUDA speed. |
+| gptq-3bit-64g-actorder_True | 3 | 64 | True | 29.30 GB | False | AutoGPTQ | 3-bit, with group size 64g and act-order. Highest quality 3-bit option. Poor AutoGPTQ CUDA speed. |
 | gptq-4bit-32g-actorder_True | 4 | 32 | True | 40.66 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
 | gptq-4bit-64g-actorder_True | 4 | 64 | True | 37.99 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
 | gptq-4bit-128g-actorder_True | 4 | 128 | True | 36.65 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
-| gptq-
-| gptq-8bit-128g-actorder_False | 8 | 128 | False | Processing, coming soon | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
-| gptq-8bit-128g-actorder_True | 8 | 128 | True | Processing, coming soon | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |
-| gptq-8bit-64g-actorder_True | 8 | 64 | True | Processing, coming soon | False | AutoGPTQ | 8-bit, with group size 64g and Act Order for maximum inference quality. Poor AutoGPTQ CUDA speed. |
+| gptq-4bit-128g-actorder_False | 4 | 128 | False | 36.65 GB | True | AutoGPTQ | 4-bit, without Act Order and group size 128g. |
 
 ## How to download from branches
 
-- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/airoboros-l2-70B-gpt4-1.4.1-GPTQ:gptq-
+- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/airoboros-l2-70B-gpt4-1.4.1-GPTQ:gptq-3bit--1g-actorder_True`
 - With Git, you can clone a branch with:
 ```
-git clone --branch gptq-
+git clone --branch gptq-3bit--1g-actorder_True https://huggingface.co/TheBloke/airoboros-l2-70B-gpt4-1.4.1-GPTQ
 ```
 - In Python Transformers code, the branch is the `revision` parameter; see below.
 
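The same branch-specific download can also be scripted. The short sketch below is not part of the README being diffed; it assumes the `huggingface_hub` Python package is installed and uses the `gptq-3bit--1g-actorder_True` branch from the table purely as an example.

```python
# Minimal sketch (editorial example, not from the README): fetch one GPTQ branch.
# Assumes `pip install huggingface_hub`; the branch name comes from the table above.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="TheBloke/airoboros-l2-70B-gpt4-1.4.1-GPTQ",
    revision="gptq-3bit--1g-actorder_True",  # any branch listed under Provided Files
)
print(local_path)  # directory containing the downloaded files for that branch
```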
@@ -79,7 +80,7 @@ It is strongly recommended to use the text-generation-webui one-click-installers
 
 1. Click the **Model tab**.
 2. Under **Download custom model or LoRA**, enter `TheBloke/airoboros-l2-70B-gpt4-1.4.1-GPTQ`.
-- To download from a specific branch, enter for example `TheBloke/airoboros-l2-70B-gpt4-1.4.1-GPTQ:gptq-
+- To download from a specific branch, enter for example `TheBloke/airoboros-l2-70B-gpt4-1.4.1-GPTQ:gptq-3bit--1g-actorder_True`
 - see Provided Files above for the list of branches for each option.
 3. Click **Download**.
 4. The model will start downloading. Once it's finished it will say "Done"
@@ -103,7 +104,7 @@ from transformers import AutoTokenizer, pipeline, logging
 from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
 
 model_name_or_path = "TheBloke/airoboros-l2-70B-gpt4-1.4.1-GPTQ"
-model_basename = "gptq_model-4bit
+model_basename = "gptq_model-4bit--1g"
 
 use_triton = False
 
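Whichever branch is used, `model_basename` needs to match the `.safetensors` file actually present in that branch. As a small sketch (not part of the diff), the file names in a branch can be checked with `huggingface_hub` before setting it:

```python
# Sketch: list the weight files in a branch to confirm the right model_basename.
# Assumes huggingface_hub is installed; the branch name is an example from the table.
from huggingface_hub import list_repo_files

repo_id = "TheBloke/airoboros-l2-70B-gpt4-1.4.1-GPTQ"
branch = "gptq-3bit--1g-actorder_True"

for name in list_repo_files(repo_id, revision=branch):
    if name.endswith(".safetensors"):
        # model_basename is the file name without its extension
        print(name.rsplit(".", 1)[0])
```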
@@ -121,7 +122,7 @@ model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
 To download from a specific branch, use the revision parameter, as in this example:
 
 model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
-        revision="gptq-
+        revision="gptq-3bit--1g-actorder_True",
         model_basename=model_basename,
         use_safetensors=True,
         trust_remote_code=False,
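Assembled, the loading example changed above looks roughly like the sketch below. It is not a verbatim copy of the README: the `device`, `use_triton` and `quantize_config` arguments follow the usual AutoGPTQ pattern and are assumptions here, and `model_basename` should be adjusted to the weights file in whichever branch is chosen.

```python
# Sketch assembling the snippets shown in this diff; not a verbatim copy of the README.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/airoboros-l2-70B-gpt4-1.4.1-GPTQ"
model_basename = "gptq_model-4bit--1g"  # adjust to the file present in the chosen branch

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(
    model_name_or_path,
    revision="gptq-3bit--1g-actorder_True",  # branch to load; see Provided Files
    model_basename=model_basename,
    use_safetensors=True,
    trust_remote_code=False,
    device="cuda:0",        # assumed: load onto the first GPU
    use_triton=False,
    quantize_config=None,   # assumed: read quantize_config.json from the repo
)
```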
@@ -198,10 +199,10 @@ Thank you to all my generous patrons and donaters!
 
 ### Overview
 
-Llama 2
-
-See that model card for all the details.
+Llama 2 70b fine tune using https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1
 
+See the previous llama 65b model card for info:
+https://hf.co/jondurbin/airoboros-65b-gpt4-1.4
 
 ### Licence and usage restrictions
 