yuanzu committed · verified · Commit 1f37595 · Parent: d4440e1

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -16,14 +16,14 @@ In benchmarking, we observe **no accuracy loss** and up to **30\%** performance
 ## 1. Benchmarking Result (detailed in [PULL REQUEST](https://github.com/sgl-project/sglang/pull/3730)):
 | Model | Config | Accuracy (GSM8K) | Accuracy (MMLU) | Output Throughput (qps=128) | Output Throughput (bs=1) |
 |--------|--------|-------------------|----------------|------------------------------|--------------------------|
-| BF16 R1 | (A100\*16)x2 | 95.8 | 87.1 | 4450.02 (+33%) | 44.18 (+18%) |
-| INT8 R1 | A100\*32 | 95.5 | 87.1 | 3342.29 | 37.20 |
+| BF16 R1 | A100\*32 | 95.5 | 87.1 | 3342.29 | 37.20 |
+| INT8 R1 | (A100\*16)x2 | **95.8** | **87.1** | 4450.02 **(+33%)** | 44.18 **(+18%)** |

 ## 2. Quantization Process

 We apply INT8 quantization to the BF16 checkpoints.

-The weight scales are determined by dividing the block-wise maximum of element values by the INT8 type maximum.
+The quantization scales are determined by dividing the block-wise maximum of element values by the INT8 type maximum.

 To generate the quantized weights, run the provided script in the ``./inference`` directory:
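The scale rule in the updated line ("block-wise maximum of element values divided by the INT8 type maximum") is easy to sketch. Below is a minimal, illustrative Python version, not the actual `./inference` script: the function name `quantize_block_int8`, the 128x128 block size, and the symmetric (sign-preserving) scales are all assumptions for the sake of the example.

```python
# Illustrative sketch of block-wise INT8 quantization as described above.
# Assumptions (not from this repo): 2-D weight, 128x128 blocks, symmetric scales.
import torch

INT8_MAX = 127  # the INT8 type maximum referenced in the README

def quantize_block_int8(weight: torch.Tensor, block: int = 128):
    """Return (q, scales): an INT8 weight plus one float32 scale per block."""
    rows, cols = weight.shape
    rb = (rows + block - 1) // block  # number of row blocks (ceil division)
    cb = (cols + block - 1) // block  # number of column blocks
    q = torch.empty(rows, cols, dtype=torch.int8)
    scales = torch.empty(rb, cb, dtype=torch.float32)
    for i in range(rb):
        for j in range(cb):
            blk = weight[i * block:(i + 1) * block,
                         j * block:(j + 1) * block].float()
            # scale = block-wise max of |values| / INT8 maximum
            s = blk.abs().max().clamp(min=1e-12) / INT8_MAX
            scales[i, j] = s
            q[i * block:(i + 1) * block,
              j * block:(j + 1) * block] = torch.round(blk / s).to(torch.int8)
    return q, scales

if __name__ == "__main__":
    q, s = quantize_block_int8(torch.randn(512, 1024, dtype=torch.bfloat16))
    print(q.dtype, s.shape)  # torch.int8, torch.Size([4, 8])
```

At inference time each block is recovered as approximately `q * scale`; giving every block its own scale is what lets blocks with very different dynamic ranges share one INT8 format without the accuracy loss the benchmark table rules out.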