Update README.md
README.md
```diff
@@ -35,10 +35,8 @@ Weight quantization also reduces disk size requirements by approximately 50%.
 Only weights and activations of the linear operators within transformer blocks are quantized.
 Weights are quantized with a symmetric static per-channel scheme, where a fixed linear scaling factor is applied between INT8 and floating point representations for each output channel dimension.
 Activations are quantized with a symmetric dynamic per-token scheme, computing a linear scaling factor at runtime for each token between INT8 and floating point representations.
-Linear scaling factors are computed via by
-The [
-Both algorithms are implemented in the [llm-compressor](https://github.com/vllm-project/llm-compressor) library.
-GPTQ used a 1% damping factor and 256 sequences sequences taken from Neural Magic's [LLM compression calibration dataset](https://huggingface.co/datasets/neuralmagic/LLM_compression_calibration).
+Linear scaling factors are computed by minimizing the mean squared error (MSE).
+The [GPTQ](https://arxiv.org/abs/2210.17323) algorithm is applied for quantization, as implemented in the [llm-compressor](https://github.com/vllm-project/llm-compressor) library. GPTQ used a 1% damping factor and 256 sequences taken from Neural Magic's [LLM compression calibration dataset](https://huggingface.co/datasets/neuralmagic/LLM_compression_calibration).
 
 ## Deployment
 
```
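For intuition, here is a minimal, hypothetical PyTorch sketch of the two scaling schemes the README describes. It picks scales with a simple abs-max rule, whereas the README says scales are chosen by minimizing MSE, so treat it as illustration only:

```python
import torch

# Static per-channel weight scale: one fixed scaling factor per output
# channel, computed once from the weight tensor (abs-max used here for
# simplicity; the README states scales are chosen by minimizing MSE).
W = torch.randn(4096, 4096)                     # [out_channels, in_channels]
w_scale = W.abs().amax(dim=1, keepdim=True) / 127.0
W_int8 = torch.clamp(torch.round(W / w_scale), -127, 127).to(torch.int8)

# Dynamic per-token activation scale: one scaling factor per token,
# recomputed at runtime for each batch of activations.
X = torch.randn(16, 4096)                       # [num_tokens, hidden_dim]
x_scale = X.abs().amax(dim=1, keepdim=True) / 127.0
X_int8 = torch.clamp(torch.round(X / x_scale), -127, 127).to(torch.int8)

# Dequantization recovers an approximation of the original tensors:
W_approx = W_int8.float() * w_scale
X_approx = X_int8.float() * x_scale
```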
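The GPTQ recipe itself could be reproduced along these lines with llm-compressor's one-shot API. This is a sketch under assumptions: import paths and argument names vary somewhat across llm-compressor releases, the model paths are placeholders, and the `ignore` and `max_seq_length` values are guesses rather than documented settings:

```python
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

# W8A8 GPTQ recipe matching the description above: INT8 weights and
# activations for the linear operators, with a 1% damping factor.
recipe = GPTQModifier(
    targets="Linear",      # linear operators within the transformer blocks
    scheme="W8A8",         # INT8 weights + INT8 dynamic per-token activations
    ignore=["lm_head"],    # assumption: output head left in floating point
    dampening_frac=0.01,   # the 1% damping factor
)

# Calibrate on 256 sequences from the LLM compression calibration dataset.
oneshot(
    model="path/to/base-model",        # placeholder model id
    dataset="neuralmagic/LLM_compression_calibration",
    recipe=recipe,
    num_calibration_samples=256,
    max_seq_length=2048,               # assumed sequence length
    output_dir="path/to/quantized-model",
)
```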