id | text |
---|---|
st183300 | Hi @roman.spb , you can use the following equations to convert between the floating point and quantized domains:
q = clamp(std::nearbyint(fp / scale) + zp, qmin, qmax)
fp = float(q - zp) * scale |
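To make these concrete, here is a small sketch that applies the two formulas directly (the scale, zero point, and quint8 range below are arbitrary example values):
def quantize(fp, scale, zp, qmin=0, qmax=255):
    # q = clamp(round(fp / scale) + zp, qmin, qmax)
    return int(min(max(round(fp / scale) + zp, qmin), qmax))

def dequantize(q, scale, zp):
    # fp = (q - zp) * scale
    return (q - zp) * scale

scale, zp = 0.1, 128                      # arbitrary quint8 qparams
q = quantize(1.234, scale, zp)            # 140
print(q, dequantize(q, scale, zp))        # 140 1.2000000000000002 (quantization error is expected)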
st183301 | Hi Vasiliy! Thank you so much for your help! These equations work for a single number. But when I try to add numbers in the quantized domain and then clamp according to the formulas, I still get a very different result from the PyTorch implementation of
nn.quantized.FloatFunctional().add()
So it may be a different issue than just clamping.
I’ve also tried adding the two numbers in the float domain and still had no luck reproducing add()’s behaviour.
Could you help me understand how this function works internally, or point me to the source code implementing it? |
st183302 | The implementation is:
github.com/pytorch/pytorch/blob/c371542efc31b1abfe6f388042aa3ab0cef935f2/torch/nn/quantized/modules/functional_modules.py
from typing import List
import torch
from torch import Tensor
from torch._ops import ops

class FloatFunctional(torch.nn.Module):
    r"""State collector class for float operations.
    The instance of this class can be used instead of the ``torch.`` prefix for
    some operations. See example usage below.
    .. note::
        This class does not provide a ``forward`` hook. Instead, you must use
        one of the underlying functions (e.g. ``add``).
    Examples::
        >>> f_add = FloatFunctional()
(excerpt truncated) |
st183303 | Hi HDCharles.
The code you mention is not what’s needed; it just calls the compiled C++ implementation of ops.quantized.add. It was quite complicated to figure out where the actual code is.
I found it by googling through issues in the PyTorch GitHub repository:
github.com/pytorch/pytorch/blob/b56ba296b1cb5d65a3fe2e33cc1d910481baa644/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp#L1173-L1240
// Note: out is assumed to be the same size as self and other.
// Note: Addition is only supported when self, other, out are of the same dtype.
template <bool ReLUFused = false>
void qadd_kernel(Tensor& out, const Tensor& self, const Tensor& other) {
  int64_t zero_point = out.q_zero_point();
  // NOLINTNEXTLINE(bugprone-narrowing-conversions,cppcoreguidelines-narrowing-conversions)
  float scale = out.q_scale();
  float inv_scale = 1.0f / scale;
  int64_t self_zero_point = self.q_zero_point();
  // NOLINTNEXTLINE(bugprone-narrowing-conversions,cppcoreguidelines-narrowing-conversions)
  float self_scale = self.q_scale();
  int64_t other_zero_point = other.q_zero_point();
  // NOLINTNEXTLINE(bugprone-narrowing-conversions,cppcoreguidelines-narrowing-conversions)
  float other_scale = other.q_scale();
  // Broadcast out the parameters here to amortize out that cost across
  // loop iterations.
  // TODO: we can optimize dequantization by doing a premultiplication
  // of the zero point by scale and doing FMA on scale*x_q - (scale*zero_point)
  auto self_zero_point_vec = Vectorized<float>((float)self_zero_point);
(excerpt truncated)
I still have not reproduced these operations, because the implementation is hidden behind other functions like Vectorized::dequantize, and there are many different implementations of those. |
st183304 | It looks to me like it’s just dequantizing, adding together, and then requantizing:
import torch
x = torch.randn(10,10)
y = torch.randn(10,10)
#arbitrary scales and zero_points
zp_x = 1
zp_y = 2
zp_z = 3
s_x = .1
s_y = .2
s_z = .3
#quantize tensors
xq = torch.quantize_per_tensor(x, s_x, zp_x, torch.qint8)
yq = torch.quantize_per_tensor(y, s_y, zp_y, torch.qint8)
#setup add operation
add_op = torch.nn.quantized.QFunctional()
add_op.scale = s_z
add_op.zero_point = zp_z
zq_qfunc = add_op.add(xq,yq)
print("QFunctional output", zq_qfunc)
#manually do operation
xdq = xq.dequantize()
ydq = yq.dequantize()
#add together
zdq = xdq+ydq
#requantize
zq_manual = torch.quantize_per_tensor(zdq, s_z, zp_z, torch.qint8)
print("Manual output", zq_manual)
print("difference", zq_manual.int_repr()-zq_qfunc.int_repr())
You can get the specifics about the quant/dequant process formulas here: Quantization API Reference — PyTorch master documentation |
st183305 | HDCharles, thank you very much for your help!
It turned out to be as simple as you described; now it makes sense why it is named FloatFunctional. I thought I had tried this at the beginning, but I must have made some mistake.
To add: dequantize() and quantize_per_tensor() are comparatively straightforward to implement with primitive operations.
import numpy as np

def manual_addition(xq1_int, scale1, zp1, xq2_int, scale2, zp2, scale_r, zp_r):
    # from int8 to float
    xdq = scale1 * (xq1_int.astype(np.float64) - zp1)
    ydq = scale2 * (xq2_int.astype(np.float64) - zp2)
    # float addition
    zdq = xdq + ydq
    # from float to int8
    zq_manual_int = ((zdq / scale_r).round() + zp_r).round()
    return zq_manual_int  # clipping might be needed |
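One way to exercise manual_addition above (a sketch that reuses the arbitrary scales and zero points from the earlier example and compares against QFunctional; small off-by-one differences can appear from rounding and the missing clipping):
import numpy as np
import torch

x, y = torch.randn(4, 4), torch.randn(4, 4)
s_x, zp_x, s_y, zp_y, s_z, zp_z = 0.1, 1, 0.2, 2, 0.3, 3
xq = torch.quantize_per_tensor(x, s_x, zp_x, torch.qint8)
yq = torch.quantize_per_tensor(y, s_y, zp_y, torch.qint8)

add_op = torch.nn.quantized.QFunctional()
add_op.scale, add_op.zero_point = s_z, zp_z
zq = add_op.add(xq, yq)

zq_manual = manual_addition(xq.int_repr().numpy(), s_x, zp_x,
                            yq.int_repr().numpy(), s_y, zp_y, s_z, zp_z)
print(np.abs(zq_manual - zq.int_repr().numpy()).max())  # expect 0, or 1 where rounding differs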
st183306 | In the quantization tutorials, there is a mention that the model size is reduced by using Dynamic Quantization (DQ).
After I added the DQ code, I found that the model size is reduced (5 MB -> 2 MB), which is what I expected.
However, I am wondering exactly how the size is reduced,
so I logged the model state_dict; the log is below.
The original model’s state_dict():
'model.layers.3.residual_group.blocks.5.mlp.fc1.weight', tensor([[-0.0133, -0.0458, -0.0438, ..., -0.0109, 0.0203, -0.0292],
[ 0.0185, 0.0241, 0.0071, ..., 0.0204, 0.0048, -0.0240],
[-0.0027, -0.0198, -0.0116, ..., -0.0246, -0.0079, -0.0145],
...,
[-0.0086, 0.0161, 0.0068, ..., 0.0200, 0.0013, -0.0164],
[ 0.0080, -0.0006, -0.0074, ..., 0.0420, -0.0109, 0.0062],
[-0.0169, 0.0129, 0.0252, ..., -0.0208, -0.0016, -0.0064]])), ('model.layers.3.residual_group.blocks.5.mlp.fc1.bias', tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])),
This is the DQ model’s state_dict():
('model.layers.3.residual_group.blocks.5.mlp.fc1.scale', tensor(1.)),
('model.layers.3.residual_group.blocks.5.mlp.fc1.zero_point', tensor(0)),
('model.layers.3.residual_group.blocks.5.mlp.fc1._packed_params.dtype', torch.qint8),
('model.layers.3.residual_group.blocks.5.mlp.fc1._packed_params._packed_params', (tensor([[-0.0131, -0.0459, -0.0435, ..., -0.0107, 0.0203, -0.0292],
[ 0.0185, 0.0238, 0.0072, ..., 0.0203, 0.0048, -0.0238],
[-0.0024, -0.0197, -0.0119, ..., -0.0244, -0.0078, -0.0143],
...,
[-0.0083, 0.0161, 0.0066, ..., 0.0197, 0.0012, -0.0161],
[ 0.0078, -0.0006, -0.0072, ..., 0.0417, -0.0107, 0.0060],
[-0.0167, 0.0131, 0.0250, ..., -0.0209, -0.0018, -0.0066]],
size=(120, 60), dtype=torch.qint8,
quantization_scheme=torch.per_tensor_affine, scale=0.0005962323630228639,
zero_point=0), tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
requires_grad=True))),
The DQ model’s weight/bias values look like fp32, not int8.
For this reason, I am wondering how the model size is reduced if the weights are still shown in an fp32 format.
Could you tell me why this happens?
Thank you for your help. |
st183307 | The answer is in your question.
fp32 is a floating-point number stored in 32 bits.
qint8 is a quantized integer stored in 8 bits.
By that arithmetic alone, the expected size of the model would be about 1.25 MB rather than 2 MB.
However, there is additional information such as the quantization scheme (scales and zero points), and not all tensors inside the model can be converted to qint8.
For more info, please read the docs Quantization — PyTorch 1.10.1 documentation |
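A rough sketch of that back-of-the-envelope check (the layer size here is arbitrary; the qint8 checkpoint comes out somewhat larger than a quarter of the fp32 one because of scales, zero points, and the fp32 bias):
import os
import torch

fp32 = torch.nn.Linear(1024, 1024)
int8 = torch.quantization.quantize_dynamic(
    torch.nn.Linear(1024, 1024), {torch.nn.Linear}, dtype=torch.qint8)
torch.save(fp32.state_dict(), "fp32.pt")
torch.save(int8.state_dict(), "int8.pt")
# fp32 weight: 1024*1024*4 bytes ~= 4 MB; qint8 weight: ~= 1 MB plus metadata
print(os.path.getsize("fp32.pt"), os.path.getsize("int8.pt"))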
st183308 | According to the code you wrote,
the data type of the model before quantization is float32,
and the data type of the model after quantization is qint8.
float32 takes 32 bits and qint8 takes 8 bits, so the size shrinks.
To reduce the model size while keeping float32, methods such as pruning or distillation would be the options. |
st183309 | @thecho7 , @seungtaek94
Thank you for your reply.
First of all, thank you for your detailed reply.
I think my question was a bit unclear.
For example, if there is an fp32 weight like the one below:
torch.float32
tensor([-1.0000, 0.3520, 1.3210, 2.0000])
it can be quantized as follows (I confirmed this is how PyTorch’s Dynamic Quantization model represents its layers/data):
torch.quint8
tensor([-1.0000, 0.4000, 1.3000, 2.0000], size=(4,), dtype=torch.quint8,
quantization_scheme=torch.per_tensor_affine, scale=0.1, zero_point=10)
I understand that the model size is reduced because the weight’s data type is converted from fp32 to quint8, which uses less space.
However, I was wondering why the values are printed as decimal numbers (float) instead of integers (int) when logging.
The quantized weight values appear to contain a little quantization noise from the quant/dequant (fp32 -> int8 -> fp32) round trip.
To summarize my questions:
1. When saving a model with Dynamic Quantization, does PyTorch save it as int and convert it to fp32 when loading? (If so, why?)
2. Does Dynamic Quantization not use quantized operators optimized for int operations?
Thank you. |
st183310 | I looked at the PyTorch code, and it seems to just be the printing convention.
It uses pytorch/_tensor_str.py at master · pytorch/pytorch · GitHub. |
st183311 | However, I was wondering why it is expressed as a decimal number (float) instead of an integer (int) when logging.
For the same reason you don’t output a bunch of 1’s and 0’s when you print an fp32 number. All these numbers are just 1’s and 0’s in your computer; it’s what they represent that’s important. You never print out an fp32 number, you print out the decimal representation of the 1’s and 0’s. The ‘non integer’ values that are being output for qint8 are the same: those are the actual values that they represent. In order to do efficient computations with these values, PyTorch utilizes some aspects of int8 data storage/ops, but that’s only really important if you are trying to mess with things at a really low level. For most purposes it’s better to just think of qint8 as similar to fp16, i.e. not limited to integers, takes up less space, and has lower fidelity compared to fp32.
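In other words, the printed values are just (int_repr - zero_point) * scale; a short sketch using the example values from the question:
import torch

x = torch.quantize_per_tensor(torch.tensor([-1.0, 0.352, 1.321, 2.0]),
                              scale=0.1, zero_point=10, dtype=torch.quint8)
print(x)             # prints the represented float values, e.g. 0.3520 -> 0.4000
print(x.int_repr())  # the underlying uint8 storage: tensor([ 0, 14, 23, 30], dtype=torch.uint8)
print((x.int_repr().float() - x.q_zero_point()) * x.q_scale())  # reproduces the printed values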
When saving a model in Dynamic Quantization, does PyTorch Quant lib save it as int and convert it to fp32 when loading? (If yes, why?)
They are stored as qtensors (qint8) which, as mentioned before, is different from int8. pytorch/linear.py at 3e43c478a8832cec063aa566583a05f87d7dc3b0 · pytorch/pytorch · GitHub This is the deserialization code which takes the weight and bias and then packs them into the format that is necessary for the efficient computation of Linear.
Does Dynamic Quantization not use Quantized_operators optimized for int operations?
Yes, it does generally use int8 operations (in part) when performing quantized operations. Here is a reasonable explanation of how a qint8 Linear can be broken down into other ops, including int8 ops, to speed things up: Quantization for Neural Networks - Lei Mao's Log Book (the first part explains quantization at a reasonable level; the part I’m talking about, though, is the Quantized Matrix Multiplication section) |
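The integer-arithmetic part of that explanation can be sketched with plain numpy (this shows the idea, not the actual fbgemm kernel; all qparams here are arbitrary):
import numpy as np

s_x, zp_x = 0.05, 10      # input qparams
s_w, zp_w = 0.02, 0       # weight qparams
s_y, zp_y = 0.1, 5        # output qparams

x_q = np.random.randint(0, 256, size=(4, 8)).astype(np.int32)     # quint8 activations
w_q = np.random.randint(-128, 128, size=(3, 8)).astype(np.int32)  # qint8 weights

acc = (x_q - zp_x) @ (w_q - zp_w).T              # integer matmul with int32 accumulation
y_fp = s_x * s_w * acc                           # one float rescale at the end
y_q = np.clip(np.round(y_fp / s_y) + zp_y, 0, 255).astype(np.uint8)  # requantize to quint8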
st183312 | When I quantize a model with Hardswish, I get an error:
AttributeError: 'Hardswish' object has no attribute 'activation_post_process'
And my config is like this:
(screenshot of the qconfig)
How can I fix this error? Thanks |
st183313 | you are using qat prepare but normal qconfig. Also in the mapping, nnq.Hardswish isn’t a qat module. If you are intending to do qat you should do something like
import torch
qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
model = torch.nn.Sequential(torch.nn.modules.Hardswish(), torch.nn.modules.Linear(1,1))
model.qconfig = qconfig
model_prep = torch.quantization.prepare_qat(model)
print(model_prep)
model_prep(torch.randn(1,1))
if you are not intending to do qat you should do something like
import torch
qconfig = torch.quantization.get_default_qconfig("fbgemm")
model = torch.nn.Sequential(torch.nn.modules.Hardswish(), torch.nn.modules.Linear(1,1))
model.qconfig = qconfig
model_prep = torch.quantization.prepare(model)
print(model_prep)
model_prep(torch.randn(1,1)) |
st183314 | I found that in some cases the PyTorch result of a QAT linear layer is wrong. I will describe it below.
I inserted some prints into the model’s forward code:
print(layers_3_out)
layers_4_out = layers_4(layers_3_out)
print(layers_4._packed_params)
print(layers_4_out)
layers_4 is defined in __init__ as:
self.layers_4 = nn.Linear(in_features=16, out_features=1, bias=True)
As shown above, I print the input, the linear params, and the output; the result is as follows:
tensor([[5.0954, 1.2068, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.6091,
0.0000, 1.3409, 2.6818, 4.2908, 2.1454, 2.9500, 1.8772],
[5.0954, 1.3409, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.4750,
0.0000, 1.3409, 2.6818, 4.2908, 2.0113, 2.9500, 1.8772],
[5.0954, 1.3409, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.4750,
0.0000, 1.3409, 2.6818, 4.2908, 2.0113, 2.9500, 1.8772],
[5.0954, 1.3409, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.4750,
0.0000, 1.3409, 2.6818, 4.2908, 2.0113, 2.9500, 1.8772],
[5.0954, 1.3409, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.4750,
0.0000, 1.3409, 2.6818, 4.2908, 2.0113, 2.9500, 1.8772],
[5.0954, 1.3409, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.4750,
0.0000, 1.3409, 2.6818, 4.2908, 2.0113, 2.9500, 1.8772],
[5.2295, 1.4750, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.3409,
0.0000, 1.4750, 3.0840, 4.4249, 2.0113, 2.9500, 1.7432],
[5.0954, 1.3409, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.4750,
0.0000, 1.3409, 2.6818, 4.2908, 2.0113, 2.9500, 1.8772],
[4.9613, 1.3409, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.4750,
0.0000, 1.3409, 2.6818, 4.2908, 2.0113, 2.9500, 1.7432],
[4.9613, 1.3409, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.4750,
0.0000, 1.3409, 2.6818, 4.2908, 2.0113, 2.9500, 1.7432],
[4.9613, 1.3409, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.4750,
0.0000, 1.3409, 2.6818, 4.2908, 2.0113, 2.9500, 1.7432],
[4.6931, 1.2068, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.4750,
0.0000, 1.3409, 2.4136, 3.8886, 1.8772, 2.8159, 1.7432],
[4.6931, 1.2068, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.4750,
0.0000, 1.3409, 2.4136, 3.8886, 1.8772, 2.8159, 1.7432],
[4.6931, 1.2068, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.4750,
0.0000, 1.3409, 2.4136, 3.8886, 1.8772, 2.8159, 1.7432],
[4.6931, 1.2068, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.4750,
0.0000, 1.3409, 2.4136, 3.8886, 1.8772, 2.8159, 1.7432],
[4.4249, 1.0727, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.4750,
0.0000, 1.2068, 2.2795, 3.6204, 1.8772, 2.8159, 1.7432],
[4.4249, 1.0727, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.4750,
0.0000, 1.2068, 2.1454, 3.7545, 1.7432, 2.8159, 1.7432],
[4.5590, 1.2068, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.4750,
0.0000, 1.2068, 2.2795, 3.7545, 1.7432, 2.8159, 1.7432],
[4.4249, 1.0727, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.4750,
0.0000, 1.2068, 2.1454, 3.7545, 1.7432, 2.8159, 1.7432],
[4.4249, 1.0727, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.4750,
0.0000, 1.2068, 2.1454, 3.7545, 1.7432, 2.8159, 1.7432],
[4.4249, 1.0727, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.4750,
0.0000, 1.2068, 2.1454, 3.7545, 1.7432, 2.8159, 1.7432],
[4.6931, 1.2068, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.4750,
0.0000, 1.3409, 2.4136, 4.0227, 1.7432, 2.8159, 1.7432],
[4.4249, 1.3409, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.2068,
0.0000, 1.3409, 2.4136, 3.8886, 1.7432, 2.6818, 1.7432],
[4.5590, 1.3409, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.2068,
0.0000, 1.3409, 2.4136, 3.8886, 1.7432, 2.8159, 1.6091],
[4.4249, 1.0727, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.4750,
0.0000, 1.2068, 2.1454, 3.7545, 1.7432, 2.8159, 1.7432],
[4.6931, 1.2068, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.4750,
0.0000, 1.3409, 2.4136, 4.0227, 1.7432, 2.8159, 1.7432],
[4.6931, 1.2068, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.4750,
0.0000, 1.3409, 2.4136, 3.8886, 1.8772, 2.8159, 1.7432],
[4.9613, 1.3409, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.4750,
0.0000, 1.3409, 2.6818, 4.2908, 2.0113, 2.9500, 1.7432],
[5.0954, 1.3409, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.4750,
0.0000, 1.3409, 2.6818, 4.2908, 2.0113, 2.9500, 1.8772],
[5.0954, 1.3409, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.4750,
0.0000, 1.3409, 2.6818, 4.2908, 2.0113, 2.9500, 1.8772],
[5.3636, 1.4750, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.4750,
0.0000, 1.4750, 3.0840, 4.5590, 2.1454, 2.9500, 1.8772],
[5.0954, 1.3409, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.4750,
0.0000, 1.3409, 2.6818, 4.2908, 2.0113, 2.9500, 1.8772]],
size=(32, 16), dtype=torch.quint8,
quantization_scheme=torch.per_tensor_affine, scale=0.13408887386322021,
zero_point=173)
(tensor([[-0.5172, -0.6572, 0.2640, 0.4471, 0.2855, -0.5549, 0.4741, 0.4364,
-0.5118, -0.0162, -0.3448, -0.5172, -0.6896, -0.3448, -0.6465, -0.4902]],
size=(1, 16), dtype=torch.qint8,
quantization_scheme=torch.per_tensor_affine, scale=0.005387257784605026,
zero_point=0), tensor([-0.1708], requires_grad=True))
tensor([[3.7054],
[3.8143],
[3.8143],
[3.8143],
[3.8143],
[3.8143],
[3.5964],
[3.8143],
[3.8143],
[3.8143],
[3.8143],
[3.9233],
[3.9233],
[3.9233],
[3.9233],
[4.0323],
[4.1413],
[4.0323],
[4.1413],
[4.1413],
[4.1413],
[3.9233],
[4.1413],
[4.1413],
[4.1413],
[3.9233],
[3.9233],
[3.8143],
[3.8143],
[3.8143],
[3.5964],
[3.8143]], size=(32, 1), dtype=torch.quint8,
quantization_scheme=torch.per_tensor_affine, scale=0.10898102819919586,
zero_point=186)
Then I calculate the first row of this linear by hand:
input:  5.0954, 1.2068, 0, 0, 0, 0, 0, 0, 1.6091, 0, 1.3409, 2.6818, 4.2908, 2.1454, 2.95, 1.8772
weight: -0.5172, -0.6572, 0.264, 0.4471, 0.2855, -0.5549, 0.4741, 0.4364, -0.5118, -0.0162, -0.3448, -0.5172, -0.6896, -0.3448, -0.6465, -0.4902
bias:   -0.1708
input * weight: -2.63534088, -0.79310896, 0, 0, 0, 0, 0, 0, -0.82353738, 0, -0.46234232, -1.38702696, -2.95893568, -0.73973392, -1.907175, -0.92020344
sum(input * weight) + bias = -12.79820454
So the result should be -12.79820454, but the result from PyTorch is 3.7054, which is wrong.
And before inference, I prepare the QAT model as follows:
net.train()
net.qconfig = torch.quantization.get_default_qat_qconfig("qnnpack")
net.fuse_modules()
torch.quantization.prepare_qat(net, inplace=True)
net.load_state_dict(new_state_dict, strict=True)
net.eval()
net.apply(torch.quantization.disable_observer)
net = torch.quantization.convert(net)
In this model, the other linear layers are computed correctly, except this one. If anyone has met this problem before, can you help me? |
st183315 | The code is as follows:
import torch
import torch.nn as nn
import torch.optim as optim

class mturn_predictor(nn.Module):
    def __init__(self, *args):
        super().__init__()
        self.QuantStub = torch.quantization.QuantStub()
        self.cls_score_parking_goal_combine_multi_layers_0_new = nn.Linear(in_features=16, out_features=1, bias=True)
        self.DeQuantStub = torch.quantization.DeQuantStub()

    def forward(self, x):
        Cat_out = self.QuantStub(x)
        cls_score_parking_goal_combine_multi_layers_0_new = self.cls_score_parking_goal_combine_multi_layers_0_new(Cat_out)
        DeQuantStub_out = self.DeQuantStub(cls_score_parking_goal_combine_multi_layers_0_new)
        return DeQuantStub_out

def generate_input():
    example = torch.tensor([[4.4249,0,0,0,2.5477,0,4.559,0,0,0,0.5364,3.3522,2.1454,0,0,0]])
    return example

if __name__ == "__main__":
    model = mturn_predictor()
    input_data = generate_input()
    print(input_data)
    output_data = model(input_data)

    ## convert into qat model
    model.train()
    model.qconfig = torch.quantization.get_default_qat_qconfig("qnnpack")
    torch.quantization.prepare_qat(model, inplace=True)
    output_data = model(input_data)
    model.eval()
    model_qat = torch.quantization.convert(model)
    print("torch.quantization.convert Done")

    weight_tensor = torch.tensor([[-0.5172,-0.6572,0.264,0.4471,0.2855,-0.5549,0.4741,0.4364,-0.5118,-0.0162,-0.3448,-0.5172,-0.6896,-0.3448,-0.6465,-0.4902]])
    weight_qat = torch.quantize_per_tensor(weight_tensor, 0.00538725778460502, 0, torch.qint8)
    bias_tensor = torch.tensor([-0.17078383266925812])
    model_qat.cls_score_parking_goal_combine_multi_layers_0_new._packed_params.set_weight_bias(weight_qat, bias_tensor)
    model_qat.cls_score_parking_goal_combine_multi_layers_0_new.scale = 0.10898102819919586
    model_qat.cls_score_parking_goal_combine_multi_layers_0_new.zero_point = 186
    model_qat.QuantStub.register_buffer('scale', torch.tensor([0.13408887386322021]))
    model_qat.QuantStub.register_buffer('zero_point', torch.tensor([173], dtype=torch.long))
    print(model_qat)
    #import pdb; pdb.set_trace()
    print("Input:", input_data)
    print("Weight:", model_qat.cls_score_parking_goal_combine_multi_layers_0_new._packed_params)
    output_data = model_qat(input_data)
    print("Output:", output_data)
    print("Matmul: ", torch.matmul(input_data, torch.transpose(weight_tensor, 0, 1)))
The results of matmul and the quantized Linear are different:
Output: tensor([[7.4107]])
Matmul: tensor([[-2.7979]]) |
st183316 | A better test would be to see whether the quantization agrees with the matmul without hardcoding the qparams and things. I moved the weight and bias assignment to the top and it looks fine
not sure what the exact issue is with your code, but the extremely high zero_point value seems suspect.
import torch
import torch.nn as nn
import torch.optim as optim

class mturn_predictor(nn.Module):
    def __init__(self, *args):
        super().__init__()
        self.QuantStub = torch.quantization.QuantStub()
        self.cls_score_parking_goal_combine_multi_layers_0_new = nn.Linear(in_features=16, out_features=1, bias=True)
        self.DeQuantStub = torch.quantization.DeQuantStub()

    def forward(self, x):
        Cat_out = self.QuantStub(x)
        cls_score_parking_goal_combine_multi_layers_0_new = self.cls_score_parking_goal_combine_multi_layers_0_new(Cat_out)
        DeQuantStub_out = self.DeQuantStub(cls_score_parking_goal_combine_multi_layers_0_new)
        return DeQuantStub_out

def generate_input():
    example = torch.tensor([[4.4249,0,0,0,2.5477,0,4.559,0,0,0,0.5364,3.3522,2.1454,0,0,0]])
    return example

if __name__ == "__main__":
    model = mturn_predictor()
    input_data = generate_input()
    weight = torch.tensor([[-0.5172,-0.6572,0.264,0.4471,0.2855,-0.5549,0.4741,0.4364,-0.5118,-0.0162,-0.3448,-0.5172,-0.6896,-0.3448,-0.6465,-0.4902]])
    bias = torch.tensor([-0.17078383266925812])
    model.cls_score_parking_goal_combine_multi_layers_0_new.weight.data = weight
    model.cls_score_parking_goal_combine_multi_layers_0_new.bias.data = bias
    output_data = model(input_data)

    ## convert into qat model
    model.train()
    model.qconfig = torch.quantization.get_default_qat_qconfig("qnnpack")
    torch.quantization.prepare_qat(model, inplace=True)
    output_data_prepared = model(input_data)
    model.eval()
    model_qat = torch.quantization.convert(model)
    print("torch.quantization.convert Done")
    print(model_qat)
    print("matmul: ", bias+torch.matmul(input_data, torch.transpose(weight, 0, 1)))
    print("functional linear: ", torch.nn.functional.linear(input_data, weight, bias))
    print("non_quantized_model result", output_data)
    print("prepared model result", output_data_prepared)
    print("q model result", model_qat(input_data)) |
st183317 | Hello,
does anybody know how to convert a model given as .pkl-file into .pth?
In my master’s thesis I’m working on a solution to deploy a SlowFast network for human action recognition on an embedded platform with an FPGA. I use the open-source codebase PySlowFast and have already tested inference on a GPU (just to see how it works). My embedded platform has an FPGA from Xilinx, and I want to use Vitis AI from Xilinx for the quantization and compilation of the model.
The Vitis AI quantizer needs a pre-trained PyTorch model as input, generally a .pth file plus a float model definition. The SlowFast network is saved as a .pkl file. Does anybody know how I can get a .pth file and a float model definition from this .pkl file, respectively from the PySlowFast codebase?
For example, detectron offers a Python script to convert a model (.yaml and .pkl) to a model.pb. Do you maybe know a similar solution for PyTorch models?
Many thanks! |
st183318 | Hello,
I would be very sceptical of inferring the format from the extension. I have seen lots of different ways of saving PyTorch models of the same format with different extensions (though PyTorch’s documentation suggests a conventional way).
I’m not 100% sure, but it may turn out that .pth and .pkl refer to the same format. |
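One way to check is to simply try loading the file (a sketch; the file name and the "model_state" key are assumptions, and if the checkpoint was pickled in a Caffe2 style rather than with torch.save, torch.load may fail and you would need pickle.load with encoding="latin1" instead):
import torch

ckpt = torch.load("slowfast_checkpoint.pkl", map_location="cpu")  # hypothetical path
print(type(ckpt))
# if it is a dict wrapping a state_dict, pull the state_dict out and re-save it as .pth
state_dict = ckpt["model_state"] if isinstance(ckpt, dict) and "model_state" in ckpt else ckpt
torch.save(state_dict, "slowfast_checkpoint.pth")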
st183319 | Hello Roman,
thanks for your reply. So you think I can maybe use the pkl file as input for the quantizer? Or do you think that I just need to re-save the file as pth? I hope I understood you correctly as I’m new to this topic.
Thank you |
st183320 | Hi, I tried quantization, but the CPU I have supports AVX and not AVX2. So I get this error:
RuntimeError: Didn't find engine for operation quantized::linear_prepack NoQEngine
with this code:
model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
Is there any way to use quantization with a CPU that only supports AVX?
Sincerely |
st183321 | You could try to use the qnnpack engine with SSE4_1, but "AVX but not AVX2" is generally not terribly well-supported. |
st183322 | tom:
the qnnpack engine with SSE4_1
Hi, thanks for your answer. Can you give an example or a link? Google does not find any results for SSE4_1. |
st183323 | QNNPACK is part of the PyTorch GitHub repo and is also needed on ARM.
I don’t know if SSE support is compiled in by default, but I think it is your best bet.
Best regards
Thomas |
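A minimal sketch of switching the engine before quantizing (whether dynamic Linear is actually available on the qnnpack engine depends on your PyTorch version and how QNNPACK was compiled; `model` stands for your fp32 model):
import torch

print(torch.backends.quantized.supported_engines)   # check what your build offers
torch.backends.quantized.engine = "qnnpack"          # instead of the default fbgemm
model_int8 = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8)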
st183324 | When I ran inference on my model with int8 quantization, I got the following error. What should I do to solve it?
NotImplementedError: Could not run ‘quantized::conv2d.new’ with arguments from the ‘CPU’ backend. This could be because the operator doesn’t exist for this backend, or was omitted during the selective/custom build process (if using custom build). |
st183325 | Can you provide the model code which you are trying to quantize. FYI quantization is not implemented yet for CUDA
At the moment PyTorch doesn’t provide quantized operator implementations on CUDA - this is the direction for future work. Move the model to CPU in order to test the quantized functionality. |
st183326 | I run inference on the model in CPU mode. I switched to graph mode to quantize the model, but this error was generated:
(screenshot of the error)
Isn’t the if branch supported?
def forward(self, x):
    if x.ndim == 5:
        return self.forward_time_series(x)
    else:
        return self.forward_single_frame(x) |
st183327 | @FengMu1995 the if branch is definitely supported
Here is an example. Without looking at the entire code it would be difficult to understand the issue. I suspect there are places where you are switching between GPU and CPU, and since quantization does not work on the GPU, it throws an error.
class random_model(nn.Module):
    def __init__(self):
        super(random_model, self).__init__()
        self.model1 = nn.Sequential(
            nn.Linear(100, 10),
            nn.BatchNorm1d(10),
            nn.ReLU(),
            nn.Linear(10, 4),
            nn.BatchNorm1d(4),
            nn.ReLU(),
            nn.Linear(4, 1),
        )
        self.model2 = nn.Sequential(
            nn.Linear(100, 10),
            nn.BatchNorm1d(10),
            nn.ReLU(),
            nn.Linear(10, 1),
        )

    def forward(self, x, flag_condition=True):
        if flag_condition==True:
            return self.model1(x)
        else:
            return self.model2(x)

X = torch.rand(100, 100)
y = torch.randint(2,(100,)).type(torch.FloatTensor)
model = random_model()
criterion = nn.MSELoss()
num_epochs = 100
learning_rate = 1e-2
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
for cur_epoch in range(num_epochs):
    model.zero_grad()
    if cur_epoch % 2 ==0:
        output = model(X, flag_condition=True)
    else:
        output = model(X, flag_condition=False)
    loss = criterion(y, output)
    loss.backward()
    optimizer.step()
    print("Cur Epoch {0} loss is {1}".format(cur_epoch, loss.item())) |
st183328 | FengMu1995:
NotImplementedError: Could not run ‘quantized::conv2d.new’ with arguments from the ‘CPU’ backend. This could be because the operator doesn’t exist for this backend, or was omitted during the selective/custom build process (if using custom build).
please take a look at Quantization — PyTorch 1.10.0 documentation |
st183329 | The "if branch" is not used to switch between GPU and CPU; it is only for checking the input dimension.
The source code is as follows:
if x.ndim == 5:
    return self.forward_time_series(x)
else:
    return self.forward_single_frame(x)
The errors are as follows:
File “/home/wudi/Software/yes/envs/torch1.9/lib/python3.8/site-packages/torch/quantization/quantize_jit.py”, line 54, in _prepare_jit
model_c = torch._C._jit_pass_insert_observers(model._c,
RuntimeError: branches for if should return values that are observed consistently, if node:%5 : Tensor[] = prim::If(%4) # /data00/peterlin/RVM/model/mobilenetv3.py:69:8
block0():
%6 : Tensor[] = prim::CallMethod[name=“forward_time_series”](%self.1, %x.1) # /data00/peterlin/RVM/model/mobilenetv3.py:70:19
→ (%6)
block1():
%7 : Tensor[] = prim::CallMethod[name=“forward_single_frame”](%self.1, %x.1) # /data00/peterlin/RVM/model/mobilenetv3.py:72:19
→ (%7) |
st183330 | I have met the same problem. I guess that’s because your original model is built with conv2d, but your quantized model is built with quantized::conv2d; when you try to restore a quantized model from disk, it cannot run quantized::conv2d.new on conv2d. |
st183331 | Can you try using eager mode to quantize the if branch? Also, can you describe the problem in a bit more detail, in terms of what you are trying to achieve and what the output is? |
st183332 | Hello,
During quantization, I realized that quantized operations such as quantized::mul and quantized::cat are about 10x slower than the fp32 ops.
Is the only workaround wrapping those functions with dequant() and quant()? Please refer to the profiling below…
FP32 Profiling (CPU)
-------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg CPU Mem Self CPU Mem # of Calls
-------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
aten::mkldnn_convolution 36.28% 14.237ms 37.56% 14.741ms 1.340ms 7.88 Mb 0 b 11
aten::upsample_nearest2d 6.71% 2.632ms 8.38% 3.288ms 469.671us 15.33 Mb 15.32 Mb 7
aten::mul 6.10% 2.395ms 6.10% 2.395ms 342.071us 19.41 Mb 19.41 Mb 7
aten::_cat 5.54% 2.175ms 6.69% 2.626ms 375.100us 19.41 Mb 0 b 7
aten::_cat 5.28% 2.074ms 6.45% 2.533ms 361.814us 19.41 Mb 0 b 7
aten::upsample_nearest2d 4.50% 1.766ms 7.26% 2.851ms 407.229us 15.33 Mb 15.32 Mb 7
Quantized Profiling (CPU)
--------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg CPU Mem Self CPU Mem # of Calls
--------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
quantized::mul 20.94% 21.950ms 21.79% 22.839ms 3.263ms 4.85 Mb 0 b 7
quantized::cat 18.42% 19.306ms 18.84% 19.741ms 2.820ms 4.85 Mb 0 b 7
quantized::cat 17.84% 18.692ms 18.27% 19.143ms 2.735ms 4.85 Mb 0 b 7
quantized::conv2d 8.18% 8.576ms 8.86% 9.285ms 1.326ms 1.02 Mb -4.08 Mb 7
quantized::batch_norm2d 4.65% 4.878ms 5.34% 5.598ms 933.033us 980.00 Kb -7.75 Kb 6
quantized::conv2d 4.35% 4.561ms 4.90% 5.134ms 733.500us 981.00 Kb -3.83 Mb 7
quantized::mul 4.34% 4.553ms 5.13% 5.380ms 768.571us 1.02 Mb 0 b 7
quantized::leaky_relu 1.73% 1.810ms 2.13% 2.234ms 372.417us 980.00 Kb 0 b 6 |
st183333 | Yeah, I think a lot of them (quantized::cat, for example) are implemented by using dequant/quant to simulate the quantized operation, but for quantized::mul I remember we do have more efficient implementations in fbgemm/qnnpack. Which quantized engine are you using right now, and which platform did you run this on?
You can print the quantized engine with: print(torch.backends.quantized.engine) |
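If those ops stay slow on your target, the dequant -> fp32 op -> quant wrapper mentioned in the question looks roughly like this (a sketch of the pattern, not code from the thread):
import torch

class MulCatInFp32(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.dequant = torch.quantization.DeQuantStub()
        self.quant = torch.quantization.QuantStub()

    def forward(self, a, b):
        a, b = self.dequant(a), self.dequant(b)
        out = torch.cat([a * b, a], dim=1)   # mul and cat run in fp32
        return self.quant(out)               # requantize for the next int8 op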
st183334 | jerryzh168:
print(torch.backends.quantized.engine)
I am currently running this model in an x64 Windows 10 environment. My final goal is to trace the model with TorchScript and save it to a .pt file, so that I can import it in my C++ application, which will run on an Android device (Galaxy S10). According to my plan, if I set the quantization configuration to qnnpack, will this model give better results when it is deployed to the target device?
It is just a rough plan; please share your thoughts from experience. I really appreciate it, since I am a bit of a newbie to this kind of deployment.
On Windows 10 machine(x64)
print(torch.backends.quantized.engine)
-------------------------------------------
fbgemm |
st183335 | It depends on which qconfig you are using when you quantize the model: are you using get_default_qconfig("fbgemm") or get_default_qconfig("qnnpack")?
If you are using the qnnpack qconfig, then you should only run with qnnpack backend, because fbgemm backend would have overflows.
If you are using fbgemm config, then you can run on both backends. |
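In other words, keep the qconfig and the runtime engine consistent, e.g. (sketch; `model` stands for your fp32 model):
import torch

torch.backends.quantized.engine = "qnnpack"
model.qconfig = torch.quantization.get_default_qconfig("qnnpack")
prepared = torch.quantization.prepare(model)
# ... run calibration data through `prepared` ...
model_int8 = torch.quantization.convert(prepared)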
st183336 | I am trying to fuse some modules in the AlexNet
model = torchvision.models.alexnet()
with the architecture:
AlexNet(
(features): Sequential(
(0): Conv2d(3, 64, kernel_size=(11, 11), stride=(4, 4), padding=(2, 2))
(1): ReLU(inplace=True)
(2): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
(3): Conv2d(64, 192, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(4): ReLU(inplace=True)
(5): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
(6): Conv2d(192, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(7): ReLU(inplace=True)
(8): Conv2d(384, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(9): ReLU(inplace=True)
(10): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(11): ReLU(inplace=True)
(12): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(avgpool): AdaptiveAvgPool2d(output_size=(6, 6))
(classifier): Sequential(
(0): Dropout(p=0.5, inplace=False)
(1): Linear(in_features=9216, out_features=4096, bias=True)
(2): ReLU(inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Linear(in_features=4096, out_features=4096, bias=True)
(5): ReLU(inplace=True)
(6): Linear(in_features=4096, out_features=1000, bias=True)
)
)
This is the code I use to fuse Conv2d and ReLU
torch.quantization.fuse_modules(model, [["features[0]", "features[1]"]])
or
torch.quantization.fuse_modules(model, [["features0", "features.1"]])
but got the same error
AttributeError: 'AlexNet' object has no attribute 'features[0]'
How can I fuse modules within a Sequential block? |
st183337 | Solved by Vasiliy_Kuznetsov in post #2 |
st183338 | Hi @lhkhiem28 , [["features.0", "features.1"]] should work. More generally, you could print out the fully qualified name of every module in the model with something like
for name, mod in model.named_modules():
    print(name, mod) |
st183339 | First of all, I would like to thank you for the awesome torch.quantization. But at the moment, quantization of embeddings is not supported, although it is usually one of the biggest (in terms of size) parts of the model (in NLP).
I tried to use nn.Embedding as nn.Linear because they have a very similar nature, but got the following error:
RuntimeError: Could not run 'aten::index_select' with arguments from the 'QuantizedCPUTensorId' backend. 'aten::index_select' is only available for these backends: [CPUTensorId, CUDATensorId, SparseCPUTensorId, SparseCUDATensorId, VariableTensorId].
So I’m interested whether it’s planned to support nn.Embeddings quantization? |
st183340 | Solved by supriyar in post #15
It is possible to set the qconfig of Embeddings to None if you wish to skip quantizing them. For example
class EmbeddingWithLinear(torch.nn.Module):
def __init__(self):
super().__init__()
self.emb = torch.nn.Embedding(num_embeddings=10, embedding_dim=12)
… |
st183341 | @jerryzh168
Hey, I saw that Embedding quantization was added in 1.7.0, but I can’t reproduce it with the latest version. I tried both static and dynamic quantization. Can you please share a code snippet that converts Embeddings to int8? |
st183342 | When I try static quantization, I’m getting:
AssertionError: The only supported dtype for nnq.Embedding is torch.quint8 |
st183343 | I am having the same error. In Version 1.6 embeddings were not a problem or were ignored. |
st183344 | @skurzhanskyi and @pintonos, could you share a small repro to reproduce the error? |
st183345 | A small repro would be great. Currently can you try setting the qconfig for the embedding module to float_qparams_weight_only_qconfig? We only support float qparams quantization for the Embedding layers. If you use the default qconfig for Embedding layers, you may run into this error.
Small example to demonstrate this
class EmbeddingWithLinear(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = torch.nn.Embedding(num_embeddings=10, embedding_dim=12)
        self.fc = torch.nn.Linear(5, 5)
        self.emb.qconfig = float_qparams_weight_only_qconfig
        self.qconfig = default_qconfig

    def forward(self, indices, linear_in):
        return self.emb(indices), self.fc(linear_in) |
st183346 | Thanks for the example!
However, according to this file, float_qparams_weight_only_qconfig is part of torch.quantization. With the PyTorch 1.7.1 CPU version, torch.quantization.float_qparams_weight_only_qconfig cannot be imported!
Is this configuration not published yet? |
st183347 | Looks like this was renamed after 1.7. If you switch to nightly then you should be able to use it.
If you’re using 1.7 version try using float_qparams_dynamic_qconfig instead. |
st183348 | I updated to the nightly version. Using this code for testing:
import torch
import numpy as np
from torch.quantization import QuantStub, DeQuantStub, float_qparams_weight_only_qconfig, default_qconfig

class EmbeddingWithLinear(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = torch.nn.Embedding(num_embeddings=10, embedding_dim=12)
        self.fc = torch.nn.Linear(5, 5)
        self.emb.qconfig = float_qparams_weight_only_qconfig
        self.qconfig = default_qconfig

    def forward(self, indices, linear_in):
        return self.emb(indices), self.fc(linear_in)

# create a model instance
model_fp32 = EmbeddingWithLinear()
indices_fp32 = torch.tensor(np.array([1, 3, 4, 5])).long()
input_fp32 = torch.randn(5, 5)
model_fp32.eval()
res = model_fp32(indices_fp32, input_fp32)
print(res)
model_fp32_prepared = torch.quantization.prepare(model_fp32)
model_fp32_prepared(indices_fp32, input_fp32)
model_int8 = torch.quantization.convert(model_fp32_prepared)
res = model_int8(indices_fp32, input_fp32)
print(res)
Getting this error:
RuntimeError: Could not run 'quantized::linear' with arguments from the 'CPU' backend. ...
Isn’t that exactly your example code? |
st183349 | Hi @pintonos,
The error is because you are trying to pass in a FP32 input tensor to a quantized operator. If you change the model to include quant/dequant stubs it should work as expected
class EmbeddingWithLinear(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = torch.nn.Embedding(num_embeddings=10, embedding_dim=12)
        self.fc = torch.nn.Linear(5, 5)
        self.emb.qconfig = float_qparams_weight_only_qconfig
        self.qconfig = default_qconfig
        self.quant = QuantStub()
        self.dequant = DeQuantStub()

    def forward(self, indices, linear_in):
        a = self.emb(indices)
        x = self.quant(linear_in)
        quant = self.fc(x)
        return a, self.dequant(quant) |
st183350 | Thanks so far!
Seems to work now, but I am getting an error while slicing my indices tensor after the model was calibrated and quantized. The slicing worked every time before the quantization itself.
emb(Xi[:, i - self.num, :])
Error:
RuntimeError: Expect weight, indices, and offsets to be contiguous.
Using torch.LongTensor(128, 1).random_(0, 10), which gives the same tensor shape as my input, works, but the tensor slicing seems to cause problems.
Any suggestions? |
st183351 | It seems like this was recently modified in https://github.com/pytorch/pytorch/pull/48993. The operator expects the values passed in to the embedding operator to be contiguous.
You could check the inputs by doing x.is_contiguous() and call x.contiguous() if they are not.
I’ll file an issue to support this in the operator itself. |
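For example (an illustrative sketch of a non-contiguous index tensor):
import torch

idx = torch.tensor([[1, 2], [3, 4]]).t()   # a transposed view is non-contiguous
print(idx.is_contiguous())                  # False
idx = idx.contiguous()                      # copy into contiguous memory before the embedding call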
st183352 | Sorry to reopen this topic.
Is it somehow possible to skip quantizing embedding layers in post-training static quantization, so that, for instance, only linear layers get quantized, as was the case with earlier versions? |
st183353 | It is possible to set the qconfig of Embeddings to None if you wish to skip quantizing them. For example
class EmbeddingWithLinear(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = torch.nn.Embedding(num_embeddings=10, embedding_dim=12)
        self.fc = torch.nn.Linear(5, 5)
        self.emb.qconfig = None
        self.qconfig = default_qconfig

    def forward(self, indices, linear_in):
        return self.emb(indices), self.fc(linear_in) |
st183354 | Hi
I have the same error. I try to quantize the DETR.
Error:
“AssertionError: The only supported dtype for nnq.Embedding is torch.quint8”.
class DETR(nn.Module):
    """ This is the DETR module that performs object detection """

    def __init__(self, backbone, transformer, num_classes, num_queries, aux_loss=False):
        """ Initializes the model.
        Parameters:
            backbone: torch module of the backbone to be used. See backbone.py
            transformer: torch module of the transformer architecture. See transformer.py
            num_classes: number of object classes
            num_queries: number of object queries, ie detection slot. This is the maximal number of objects
                DETR can detect in a single image. For COCO, we recommend 100 queries.
            aux_loss: True if auxiliary decoding losses (loss at each decoder layer) are to be used.
        """
        super().__init__()
        self.num_queries = num_queries
        self.transformer = transformer
        hidden_dim = transformer.d_model
        self.class_embed = nn.Linear(hidden_dim, num_classes + 1)
        self.bbox_embed = MLP(hidden_dim, hidden_dim, 4, 3)
        self.query_embed = nn.Embedding(num_queries, hidden_dim)
        self.query_embed.qconfig = None  # --------------------------------------------------
        self.qconfig = default_qconfig
        self.input_proj = nn.Conv2d(backbone.num_channels, hidden_dim, kernel_size=1)
        self.backbone = backbone
        self.aux_loss = aux_loss

    def forward(self, samples: NestedTensor):
        """ The forward expects a NestedTensor, which consists of:
            - samples.tensor: batched images, of shape [batch_size x 3 x H x W]
            - samples.mask: a binary mask of shape [batch_size x H x W], containing 1 on padded pixels
            It returns a dict with the following elements:
            - "pred_logits": the classification logits (including no-object) for all queries.
              Shape= [batch_size x num_queries x (num_classes + 1)]
            - "pred_boxes": The normalized boxes coordinates for all queries, represented as
              (center_x, center_y, height, width). These values are normalized in [0, 1],
              relative to the size of each individual image (disregarding possible padding).
              See PostProcess for information on how to retrieve the unnormalized bounding box.
            - "aux_outputs": Optional, only returned when auxilary losses are activated. It is a list of
              dictionnaries containing the two above keys for each decoder layer.
        """
        samples = self.quant(samples)  # ------------------------------------------------
        if isinstance(samples, (list, torch.Tensor)):
            samples = nested_tensor_from_tensor_list(samples)
        features, pos = self.backbone(samples)
        src, mask = features[-1].decompose()
        assert mask is not None
        hs = self.transformer(self.input_proj(src), mask, self.query_embed.weight, pos[-1])[0]
        outputs_class = self.class_embed(hs)
        outputs_coord = self.bbox_embed(hs).sigmoid()
        out = {'pred_logits': outputs_class[-1], 'pred_boxes': outputs_coord[-1]}
        if self.aux_loss:
            out['aux_outputs'] = self._set_aux_loss(outputs_class, outputs_coord)
        out = self.dequant(out)  # -------------------------------------------------------------
        return out |
st183355 | I am using post-training quantization and am trying to extract the quantized weights for the inference phase, but I failed.
I tried to use directly:
for weight in quantized_model.state_dict():
    np.set_printoptions(suppress=True)
    print(weight, "\n", quantized_model.state_dict()[weight].detach().cpu().clone().numpy())
and I get "TypeError: NumPy conversion for QuantizedCPUQInt8Type is not supported".
Could you give me any advice on extracting the quantized weights from the quantized model?
Thank you very much!! |
st183356 | Solved by jerryzh168 in post #3 |
st183357 | check out https://pytorch.org/docs/stable/quantization.html#quantized-torch-tensor-operations. Some options:
convert your quantized tensor to floating point with x.dequantize()
get the raw integer values with x.int_repr(), this should be used together with x.q_scale() and x.q_zero_point. |
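Applied to the question above, that looks like this (a sketch reusing quantized_model and the weight key from the question):
w = quantized_model.state_dict()['model_fp32.conv1.weight']  # a quantized conv weight
w_fp = w.dequantize().numpy()    # float values
w_int = w.int_repr().numpy()     # raw int8 values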
st183358 | The error message shows that the weight is already a quantized weight; it’s just that numpy operations are not supported on quantized tensors, so if you remove the numpy() call it should work. |
st183359 | If I have a quantized weight, how can I extract it for each layer? How can I separate the tensor, scale, and zero_point into arrays or numpy arrays?
(Pdb) quantized_model.state_dict()['model_fp32.conv1.weight']
tensor([[[[ 6.6680e-03, 5.3344e-03, 2.0004e-02, …, -2.8006e-02,
-6.8014e-02, -1.4670e-02],
[ 1.3336e-02, 9.3353e-03, 2.1338e-02, …, -1.1602e-01,
-3.7341e-02, 5.0677e-02],
[ 2.0004e-02, -1.8671e-02, -5.3344e-02, …, -7.3348e-02,
9.8687e-02, 8.4017e-02],
…,
[-4.9344e-02, -2.9339e-02, 1.3336e-03, …, 1.3203e-01,
9.3353e-03, -6.0012e-02],
[-4.0008e-03, 2.1338e-02, -4.0008e-03, …, 5.3344e-03,
-6.6680e-02, 3.0673e-02],
[ 6.6680e-03, 6.6680e-03, 6.6680e-03, …, -3.0673e-02,
1.2002e-02, 8.4017e-02]],
[[ 1.0669e-02, 2.0004e-02, 3.2007e-02, ..., 4.0008e-03,
-3.2007e-02, 5.3344e-03],
[ 2.9339e-02, 3.6007e-02, 5.6012e-02, ..., -8.9352e-02,
-2.4005e-02, 2.9339e-02],
[ 4.8010e-02, 2.9339e-02, -6.6680e-03, ..., -5.6012e-02,
7.4682e-02, 1.6003e-02],
...,
[-1.8671e-02, 1.0669e-02, 2.8006e-02, ..., 8.5351e-02,
-8.0017e-02, -1.6803e-01],
[ 1.8671e-02, 4.2675e-02, -1.3336e-03, ..., -6.9348e-02,
-1.5070e-01, -4.1342e-02],
[ 2.1338e-02, 0.0000e+00, -2.8006e-02, ..., -1.0936e-01,
-4.0008e-02, 5.3344e-02]],
...,
[ 3.7336e-02, 3.6070e-02, 3.9867e-02, ..., 5.6953e-02,
5.9484e-02, 3.8602e-02],
[ 2.3414e-02, 5.0625e-02, 4.7461e-02, ..., 6.9610e-02,
5.8219e-02, 4.4930e-02],
[ 6.9610e-03, 4.0500e-02, 5.0625e-02, ..., 5.7586e-02,
6.7078e-02, 5.1891e-02]]]], size=(64, 3, 7, 7), dtype=torch.qint8,
quantization_scheme=torch.per_channel_affine,
scale=tensor([1.3336e-03, 1.6211e-03, 1.5573e-03, 1.2130e-03, 4.1713e-04, 1.1682e-03,
3.4338e-04, 1.4215e-03, 1.3556e-03, 1.1205e-03, 1.8750e-03, 8.0779e-07,
7.6834e-04, 1.2095e-03, 4.1902e-04, 1.2486e-03, 9.4629e-04, 2.1297e-03,
3.7952e-07, 4.7981e-04, 1.3093e-04, 1.4277e-03, 2.5236e-03, 1.3021e-03,
2.5295e-03, 3.1860e-04, 3.6392e-04, 2.9317e-03, 1.9204e-03, 3.7749e-07,
8.5399e-04, 9.0634e-04, 2.1589e-03, 1.1871e-03, 2.4718e-03, 3.7885e-07,
2.1255e-06, 1.6729e-03, 8.8754e-07, 1.0482e-05, 1.2293e-03, 2.2856e-04,
4.7812e-04, 3.0442e-03, 2.9528e-03, 1.5815e-03, 6.9944e-04, 1.3110e-03,
2.4429e-03, 3.9633e-04, 1.4372e-03, 2.6385e-04, 2.5296e-04, 6.3496e-07,
3.4408e-04, 3.4195e-04, 1.0956e-06, 6.7232e-04, 6.8186e-04, 2.4606e-03,
3.9087e-04, 5.1462e-04, 8.7948e-07, 6.3281e-04], dtype=torch.float64),
zero_point=tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
axis=0)
Thanks,
Stephen |
st183360 | I type quantized_model.state_dict()['model_fp32.conv1.weight'].q_scale()
*** RuntimeError: Expected quantizer->qscheme() == kPerTensorAffine to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
The data_structure shows: Tensor quantization_scheme=torch.per_channel_affine
What’s wrong with that?
I can print out the int_repr() with dtype = torch.int8
Thanks,
Stephen |
st183361 | I tried to narrow down to a simple question.
x = torch.tensor([[-1.0, 0.0], [1.0, 2.0]])
y = torch.quantize_per_channel(x, torch.tensor([0.1, 0.01]), torch.tensor([10, 0]), 0, torch.quint8)
y
tensor([[-1., 0.],
[ 1., 2.]], size=(2, 2), dtype=torch.quint8,
quantization_scheme=torch.per_channel_affine,
scale=tensor([0.1000, 0.0100], dtype=torch.float64),
zero_point=tensor([10, 0]), axis=0)
y.int_repr()
tensor([[ 0, 10],
[100, 200]], dtype=torch.uint8)
y.q_scale()
*** RuntimeError: Expected quantizer->qscheme() == kPerTensorAffine to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
I installed pytorch version : 1.10.0
Thanks,
Stephen |
st183362 | Hi, not sure if you have already solved this but this is because torch supports two different quantization schemes: per tensor affine and per channel affine. In per tensor affine, a single scale and zero point are saved per tensor. So you can use .q_scale() as you did to get that single value.
However, in your case PyTorch is using per channel affine, which means there are N scale and zero point values, where N = number of channels. In this case you have to use .q_per_channel_scales() to get a tensor of all the scale values. |
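Continuing the small per-channel example from the question (sketch):
import torch

x = torch.tensor([[-1.0, 0.0], [1.0, 2.0]])
y = torch.quantize_per_channel(x, torch.tensor([0.1, 0.01]),
                               torch.tensor([10, 0]), 0, torch.quint8)
print(y.qscheme())                     # torch.per_channel_affine
print(y.q_per_channel_scales())        # tensor([0.1000, 0.0100], dtype=torch.float64)
print(y.q_per_channel_zero_points())   # tensor([10, 0])
print(y.q_per_channel_axis())          # 0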
st183363 | Hello,
I have a ResNet classifier from torchvision; I fine-tuned it on my custom dataset and saved the model (final_model.pth).
Now I want to quantize this model. It quantizes without a problem, and if I run inference right away it works, but if I load my quantized model and then run inference I get an error.
######Quantization Aware Training#######
import torch
model = torch.load("final_model.pth", map_location='cpu')
model.train()
model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
#model_fp32_fused = torch.quantization.fuse_modules(model,[['conv1', 'bn1', 'relu']])
model_fp32_prepared = torch.quantization.prepare_qat(model)
train_annotations_dir = val_annotations_dir = "/content/annotations.json"
train_img_dir = val_img_dir = "/content/images"
epoch = 2
batch_size = 32
image_size = 640
model_path=model_fp32_prepared
gpu_num = 0
learning_rate =0.01
opt = "adam"
momentum = 0.9
patience = 5
output = "/content/output2/"
#
m = run_classification_train(train_annotations_dir,train_img_dir,val_annotations_dir,val_img_dir,epoch,batch_size,image_size,model_path,gpu_num,learning_rate,momentum,opt,patience)
m.to("cpu")
model_int8 = torch.quantization.convert(m)
final_model_path = m
torch.save(model_int8.state_dict(), "state_qmodel.pth")
torch.save(model_int8, "fm.pth")
inference(val_annotations_dir,val_img_dir,image_size,batch_size,final_model_path,output)
If I load state_qmodel.pth I get this error:
Error(s) in loading state_dict for ResNet:
Unexpected key(s) in state_dict: “conv1.bias”, “conv1.scale”, “conv1.zero_point”, “layer1.0.conv1.bias”, “layer1.0.conv1.scale”, “layer1.0.conv1.zero_point”, “layer1.0.conv2.bias”, “layer1.0.conv2.scale”, “layer1.0.conv2.zero_point”, “layer1.0.conv3.bias”, “layer1.0.conv3.scale”, “layer1.0.conv3.zero_point”, “layer1.0.downsample.0.bias”, “layer1.0.downsample.0.scale”, “layer1.0.downsample.0.zero_point”, “layer1.1.conv1.bias”, “layer1.1.conv1.scale”, “layer1.1.conv1.zero_point”, “layer1.1.conv2.bias”, “layer1.1.conv2.scale”, “layer1.1.conv2.zero_point”, “layer1.1.conv3.bias”, “layer1.1.conv3.scale”, “layer1.1.conv3.zero_point”, “layer1.2.conv1.bias”, “layer1.2.conv1.scale”, “layer1.2.conv1.zero_point”, “layer1.2.conv2.bias”, “layer1.2.conv2.scale”, “layer1.2.conv2.zero_point”, “layer1.2.conv3.bias”, “layer1.2.conv3.scale”, “layer1.2.conv3.zero_point”, “layer2.0.conv1.bias”, “layer2.0.conv1.scale”, “layer2.0.conv1.zero_point”, “layer2.0.conv2.bias”, “layer2.0.conv2.scale”, “layer2.0.conv2.zero_point”, “layer2.0.conv3.bias”, “layer2.0.conv3.scale”, “layer2.0.conv3.zero_point”, “layer2.0.downsample.0.bias”, “layer2.0.downsample.0.scale”, “layer2.0.downsample.0.zero_point”, “layer2.1.conv1.bias”, “layer2.1.conv1.scale”, “layer2.1.conv1.zero_point”, “layer2.1.conv2.bias”, “layer2.1.conv2.scale”, “layer2.1.conv2.zero_point”, “layer2.1.conv3.bias”, “layer2.1.conv3.scale”, “layer2.1.conv3.zero_point”, “layer2.2.conv1.bias”, “layer2.2.conv1.scale”, "layer2.2.conv1.zero_point
import torch
model = torch.load("/content/final_model.pth")
final_model_path = model.load_state_dict(torch.load('/content/state_qmodel.pth'))
inference(val_annotations_dir,val_img_dir,image_size,batch_size,final_model_path,output) |
st183364 | Solved by Vasiliy_Kuznetsov in post #2 |
st183365 | Hi @m.safari, when you run the quantization APIs it changes the state dict, because quantized layers can have different fields compared to their floating point counterparts. Therefore, when you load a quantized checkpoint, the recommendation is to create the fp32 architecture, run the quantization APIs (on random weights), and then load the quantized state dict. In your example, it would be something like
# create fp32 model
model = torch.load("/content/final_model.pth")
# quantize it without calibration (weights will not be final)
model.train()
model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
#model_fp32_fused = torch.quantization.fuse_modules(model,[['conv1', 'bn1', 'relu']])
model_fp32_prepared = torch.quantization.prepare_qat(model)
model_int8 = torch.quantization.convert(model_fp32_prepared)
# load the real state dict
model_int8.load_state_dict(torch.load('/content/state_qmodel.pth')) |
st183366 | class TemporalShift(nn.Module):
    def __init__(self, n_segment=3, n_div=8):
        super(TemporalShift, self).__init__()
        self.n_segment = n_segment
        self.fold_div = n_div

    def forward(self, x):
        x = self.shift(x, self.n_segment, fold_div=self.fold_div)
        return x

    @staticmethod
    def shift(x, n_segment, fold_div=3):
        nt, c, h, w = x.size()
        n_batch = nt // n_segment
        x = x.view(n_batch, n_segment, c, h, w)
        fold = c // fold_div
        #out = torch.zeros_like(x)
        out = x.clone().zero_()
        out[:, :-1, :fold] = x[:, 1:, :fold]  # shift left
        out[:, 1:, fold: 2 * fold] = x[:, :-1, fold: 2 * fold]  # shift right
        out[:, :, 2 * fold:] = x[:, :, 2 * fold:]  # not shift
        return out.view(nt, c, h, w)
When using FX to trace TSM, I get this error:
out[:, :-1, :fold] = x[:, 1:, :fold]
TypeError: 'Proxy' object does not support item assignment |
st183367 | Solved by Vasiliy_Kuznetsov in post #2 |
st183368 | Hi @keyky, this is a limitation of symbolic tracing with FX. Here is a workaround using torch.fx.wrap:
@torch.fx.wrap
def shift_left(out, x, fold):
out[:, :-1, :fold] = x[:, 1:, :fold] # shift left
@torch.fx.wrap
def shift_right(out, x, fold):
out[:, 1:, fold: 2 * fold] = x[:, :-1, fold: 2 * fold] # shift right
@torch.fx.wrap
def not_shift(out, x, fold):
out[:, :, 2 * fold:] = x[:, :, 2 * fold:] # not shift
class TemporalShift(nn.Module):
def __init__(self, n_segment=3, n_div=8):
super(TemporalShift, self).__init__()
self.n_segment = n_segment
self.fold_div = n_div
def forward(self, x):
x = self.shift(x, self.n_segment, fold_div=self.fold_div)
return x
@staticmethod
def shift(x, n_segment, fold_div=3):
nt, c, h, w = x.size()
n_batch = nt // n_segment
x = x.view(n_batch, n_segment, c, h, w)
fold = c // fold_div
#out = torch.zeros_like(x)
out = x.clone().zero_()
shift_left(out, x, fold)
shift_right(out, x, fold)
not_shift(out, x, fold)
return out.view(nt, c, h, w)
m = TemporalShift()
ms = torch.fx.symbolic_trace(m) |
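A quick sanity check of the workaround above (a sketch; the input shape is an assumption, it only needs nt divisible by n_segment=3):
import torch

x = torch.randn(6, 16, 8, 8)             # nt=6 is divisible by n_segment=3
torch.testing.assert_close(m(x), ms(x))  # traced module should match eager execution
print(ms.graph)                          # shift_left/shift_right/not_shift appear as call_function nodes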
st183369 | I quantized both my model and input to “int8”, but an error generated,
Screenshot from 2021-12-22 16-23-051631×328 31.8 KB
why does the model require that input type is float |
st183370 | Solved by FengMu1995 in post #3
Hi, I've found the error and removed the “normalize”; it worked. |
st183371 | return torch._C._nn.upsample_bilinear2d(input, output_size, align_corners, scale_factors)
RuntimeError: “compute_indices_weights_linear” not implemented for ‘Half’
Does PyTorch 1.9.1 not support float16?
st183372 | Solved by FengMu1995 in post #8
OK, I have figured out where the problem is. I should add quant(x) at the very beginning of forward(). |
st183373 | You could use float16 on a GPU, but not all operations for float16 are supported on the CPU as the performance wouldn’t benefit from it (if I’m not mistaken). |
st183374 | Howerver, I still have a problem with model-int8 on cpu and gpu,
“RuntimeError: “upsample_bilinear2d_out_frame” not implemented for ‘Char’ ” |
st183375 | int8 is not implemented in native operations and you would need to use the quantization util. for it. |
st183376 | Could you explain it more specifically, please? I haven’t understood the quantization util. |
st183377 | I would probably start with the docs 5 and then take a look at this tutorial 3 and the coverage 3 for more information. |
st183378 | ok, I have known where the problem is. I should add the quant(x) into the forward headmost. |
st183379 | Hi,
I’m trying to perform QAT on GPT2 model, but I’m a bit confused about the documentation regarding the QuantStub.
Where should I place the QuantStub and DeQuantStub? Based on my understanding, I should place the first QuantStub after the Embedding layer and the DeQuantStub after the ReLU activation layer of the FFN; the next QuantStub will then come after the previous DeQuantStub, which is before the second linear layer of the FFN of the previous decoder layer. Is that correct?
I can only fuse the first linear layer and the ReLU activation layer of the FFN in each of the decoder layers. Am I right?
[Figure attached: "Decoder-Only-Architecture-used-by-GPT-2" (722×852)]
Thanks in advance!
JM |
st183380 | I am also having problems using Quantization-Aware with GPT-2, did you find a solution? Can you share it with me? Thank you. |
st183381 | I can talk about how to place quantstub/dequantstub in general
QuantStub should be placed at the point when you want to quantize a floating point Tensor to a quantized Tensor, the module following QuantStub is also expected to be quantized (to int8 quantized module)
DeQuantStub should be placed at the point when you want to dequantize a int8 tensor back to a floating point Tensor.
e.g.
from torch.quantization import QuantStub, DeQuantStub

class M(torch.nn.Module):
def __init__(self):
super().__init__()
self.conv = torch.nn.Conv2d(3, 3, 3)
        self.quant = QuantStub()
self.dequant = DeQuantStub()
def forward(self, x):
# original input assumed to be fp32
x = self.quant(x)
# after quant, x is quantized to int8 tensor
x = self.conv(x)
# we also need to quantize conv module to be a int8 quantized conv module
# which takes int8 Tensor as input and outputs a int8 quantized Tensor
x = self.dequant(x)
# dequant would dequantize a int8 quantized Tensor back to a fp32 Tensor
return x
Please post the exact model if you need more specific help on the model.
Can you include the actual model implementation? In the meantime, you can find all supported fusions here: pytorch/fuser_method_mappings.py at master · pytorch/pytorch · GitHub
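A minimal sketch of eager-mode fusion on a toy Sequential (the module names "0"/"1"/"2" are just the Sequential's child names, not from the thread above):
import torch

m = torch.nn.Sequential(
    torch.nn.Conv2d(3, 3, 3),
    torch.nn.BatchNorm2d(3),
    torch.nn.ReLU(),
)
m.eval()  # conv+bn fusion expects eval mode for post-training quantization
fused = torch.quantization.fuse_modules(m, [["0", "1", "2"]])
print(fused)  # conv/bn/relu collapse into one fused module, the remaining slots become Identity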
st183382 | Hi Jerry,
Thanks for the explanation.
In your example, the input is quantized from fp32 to int8 by the QuantStub module, but how about the weights in the layer (linear or conv, for example)? It seems from your example that we don't need to quantize the weights explicitly?
How about the output from previous layers, for example the output from the previous linear or activation layer? I understand that after the computation the result should be fp32, so do we need to put a QuantStub between two layers?
st183383 | for weights, it’s quantized when we swap the floating point conv to quantized conv, it can be done through attaching a qconfig to the conv module instance
there might still be some misunderstanding, a quantized module takes int8 Tensor as input and outputs a int8 Tensor as well, so if previous linear/activation layer is quantized (meaning we attach a int8 qconfig to that layer), we do not need to put a QuantStub in between the two layers. |
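A minimal sketch of that point, using a small FFN-style block as an assumed stand-in (not GPT-2 itself): one QuantStub/DeQuantStub pair wraps two quantized Linear layers, with nothing extra in between.
import torch
from torch.quantization import QuantStub, DeQuantStub

class FFN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()
        self.fc1 = torch.nn.Linear(16, 32)
        self.relu = torch.nn.ReLU()
        self.fc2 = torch.nn.Linear(32, 16)
        self.dequant = DeQuantStub()

    def forward(self, x):
        x = self.quant(x)        # fp32 -> int8 once
        x = self.fc1(x)          # quantized Linear consumes and produces int8
        x = self.relu(x)
        x = self.fc2(x)          # no extra QuantStub needed between quantized layers
        return self.dequant(x)   # int8 -> fp32 once at the end

m = FFN().eval()
m.qconfig = torch.quantization.get_default_qconfig("fbgemm")
prepared = torch.quantization.prepare(m)
prepared(torch.randn(2, 16))                  # calibration
m_int8 = torch.quantization.convert(prepared)
out = m_int8(torch.randn(2, 16))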
st183384 | if my model consists of deeper and nested layers, should I insert quant&dequant into every layer? |
st183385 | Solved by supriyar in post #2
If all the operators in the model can be quantized, you can insert a quant and dequant at the beginning and end of the model. See (beta) Static Quantization with Eager Mode in PyTorch — PyTorch Tutorials 1.10.0+cu102 documentation for more details.
If there are only specific layers that can be quan… |
st183386 | If all the operators in the model can be quantized, you can insert a quant and dequant at the beginning and end of the model. See (beta) Static Quantization with Eager Mode in PyTorch — PyTorch Tutorials 1.10.0+cu102 documentation 1 for more details.
If there are only specific layers that can be quantized then you would have to wrap them individually using quant-dequant blocks in eager mode. Alternatively, you could try the FX graph mode quantization flow which should automate the process - (prototype) FX Graph Mode Post Training Static Quantization — PyTorch Tutorials 1.10.0+cu102 documentation 2 |
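A minimal sketch of the FX graph mode flow on a toy model (the two-argument prepare_fx call matches the 1.10-era API used in this thread; newer releases also take example_inputs):
import copy
import torch
from torch.quantization import quantize_fx

model_fp32 = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).eval()
qconfig_dict = {"": torch.quantization.get_default_qconfig("fbgemm")}

prepared = quantize_fx.prepare_fx(copy.deepcopy(model_fp32), qconfig_dict)
prepared(torch.randn(1, 3, 32, 32))           # calibration pass
quantized = quantize_fx.convert_fx(prepared)
print(quantized)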
st183387 | I just recently figured out this simple operation returns error.
class QNet(nn.Module):
def __init__(self):
super(QNet, self).__init__()
self.quant = QuantStub()
self.dequant = DeQuantStub()
self.conv = nn.Conv2d(1, 1, 1)
self.bn = nn.BatchNorm2d(1)
self.relu = nn.ReLU()
def forward(self, x):
x = self.quant(x)
x = self.conv(x)
x = x * 2 # <<<< Error
x = self.bn(x)
x = self.relu(x)
x = self.dequant(x)
return x
NotImplementedError: Could not run 'aten::empty_strided' with arguments from the 'QuantizedCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::empty_strided' is only available for these backends: [CPU, CUDA, Meta, BackendSelect, Python, Named, Conjugate, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, UNKNOWN_TENSOR_TYPE_ID, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].
Is this because a quantized tensor only operates with other quantized objects?
st183388 | please take a look at the response here: NotImplementedError: Could not run 'aten::empty.memory_format' with arguments from the 'QuantizedCPU' backend - #3 by jerryzh168 4 |
st183389 | I really appreciated your reply! However, I realized some FloatFunctional() ops are slower than FP32?
Slower ops in quantized::mul, quantized::cat quantization
Hello,
During quantization, I realized quantized operations such as quantized::mul, quantized::cat are x10 slower than fp32 ops.
Is the only workaround wrapping those functions with dequant() and quant()? Please refer to the profiling below…
FP32 Profiling (CPU)
[Truncated profiler table; columns: Name, Self CPU %, Self CPU, CPU total %, CPU total, ...]
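A sketch of how a per-op table like the one above can be produced, so the fp32 and quantized versions can be compared on the same input (the model and input shape are placeholders):
import torch
from torch.profiler import profile, ProfilerActivity

model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).eval()
x = torch.randn(1, 3, 224, 224)

with profile(activities=[ProfilerActivity.CPU]) as prof:
    with torch.no_grad():
        model(x)
print(prof.key_averages().table(sort_by="self_cpu_time_total", row_limit=10))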
st183390 | Hello, I am following FX mode post training static quantization tutorial 1, and got an error when running the last line of this code below.
# 1. FX Mode quantization
model_to_quantize = copy.deepcopy(model_fp)
model_to_quantize.eval()
qconfig_dict = {"": torch.quantization.get_default_qconfig('qnnpack')}
# prepare (Insert observer)
model_prepared = quantize_fx.prepare_fx(model_to_quantize, qconfig_dict)
# calibrate
model_prepared.eval()
model_prepared(img, mask)
-----------------------------------------------------------------------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Users\dalsee\PycharmProjects\PConv\mymodel.py", line 266, in <module>
model_prepared(img, mask)
File "C:\Users\dalsee\Anaconda3\envs\deeplearning\lib\site-packages\torch\fx\graph_module.py", line 512, in wrapped_call
raise e.with_traceback(None)
RuntimeError: Overflow when unpacking long
img and mask are both inputs to the model; each is a 256x256 tensor converted from a graylevel picture.
st183391 | Hi @dalseeroh, did you make any changes when running the tutorial locally? If yes, can you share them here?
cc @jerryzh168 in case we need to update the tutorial. |
st183392 | Yes, I used my custom UNet-like model.
class PartialConvLayer(nn.Module):
def __init__(self, in_channels, out_channels, kernel, bn=True, bias=False, sample="none-3", activation="relu"):
super().__init__()
self.bn = bn
self.kernel_size = kernel
self.in_channel = in_channels
if sample == "down-7":
# Kernel Size = 7, Stride = 2, Padding = 3
self.input_conv = nn.Conv2d(in_channels, out_channels, kernel, 2, 3, bias=bias)
self.mask_conv = nn.Conv2d(in_channels, out_channels, kernel, 2, 3, bias=False)
elif sample == "down-5":
self.input_conv = nn.Conv2d(in_channels, out_channels, kernel, 2, 2, bias=bias)
self.mask_conv = nn.Conv2d(in_channels, out_channels, kernel, 2, 2, bias=False)
elif sample == "down-3":
self.input_conv = nn.Conv2d(in_channels, out_channels, kernel, 2, 1, bias=bias)
self.mask_conv = nn.Conv2d(in_channels, out_channels, kernel, 2, 1, bias=False)
else:
self.input_conv = nn.Conv2d(in_channels, out_channels, kernel, 1, 1, bias=bias)
self.mask_conv = nn.Conv2d(in_channels, out_channels, kernel, 1, 1, bias=False)
nn.init.constant_(self.mask_conv.weight, 1.0)
# Initialize weight using Kaiming Initialization
# a: negative slope of relu set to 0, same as relu
# "fan_in" preserved variance from forward pass
nn.init.kaiming_normal_(self.input_conv.weight, a=0, mode="fan_in")
for param in self.mask_conv.parameters():
param.requires_grad = False
if bn:
# Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
# Applying BatchNorm2d layer after Conv will remove the channel mean
self.batch_normalization = nn.BatchNorm2d(out_channels)
if activation == "relu":
# Used between all encoding layers
self.activation = nn.ReLU()
elif activation == "leaky_relu":
# Used between all decoding layers (Leaky RELU with alpha = 0.2)
self.activation = nn.LeakyReLU(negative_slope=0.2)
def forward(self, input_x, mask):
# output = W^T dot (X .* M) + b
output = self.input_conv(input_x * mask)
# requires_grad = False
with torch.no_grad():
# mask = (1 dot M) + 0 = M
output_mask = self.mask_conv(mask)
if self.input_conv.bias is None:
output_bias = 0
else:
# Only the last layer has a bias term
# ** for ease of of quantization and for future 1-channel conversion,
# bias term is hardcoded
# output_bias = self.input_conv.bias.reshape(1, -1, 1, 1)
output_bias = 0.56
# Mask Update
updated_mask = torch.clamp(output_mask, 0, 1)
# Output image Update
num_scaling = torch.ones_like(output) * self.kernel_size * self.kernel_size
denom_scaling = output_mask / self.in_channel
scaling_factor = num_scaling / denom_scaling
updated_output = (output - output_bias) * scaling_factor + output_bias
updated_output = updated_output * updated_mask
if self.bn:
updated_output = self.batch_normalization(updated_output)
if hasattr(self, 'activation'):
updated_output = self.activation(updated_output)
return updated_output, updated_mask
class PartialConvUNet(nn.Module):
# 256 x 256 image input, 256 = 2^8
def __init__(self, input_size=256, layers=7):
if 2 ** (layers + 1) != input_size:
raise AssertionError
super().__init__()
self.freeze_enc_bn = False
self.layers = layers
# ======================= ENCODING LAYERS =======================
# 3x256x256 --> 64x128x128
self.encoder_1 = PartialConvLayer(3, 64, 7, bn=False, sample="down-7")
# 64x128x128 --> 128x64x64
self.encoder_2 = PartialConvLayer(64, 128, 5, sample="down-5")
# 128x64x64 --> 256x32x32
self.encoder_3 = PartialConvLayer(128, 256, 3, sample="down-3")
# 256x32x32 --> 512x16x16
self.encoder_4 = PartialConvLayer(256, 512, 3, sample="down-3")
# 512x16x16 --> 512x8x8 --> 512x4x4 --> 512x2x2
for i in range(5, layers + 1):
name = "encoder_{:d}".format(i)
setattr(self, name, PartialConvLayer(512, 512, 3, sample="down-3"))
# ======================= DECODING LAYERS =======================
# dec_7: UP(512x2x2) + 512x4x4(enc_6 output) = 1024x4x4 --> 512x4x4
# dec_6: UP(512x4x4) + 512x8x8(enc_5 output) = 1024x8x8 --> 512x8x8
# dec_5: UP(512x8x8) + 512x16x16(enc_4 output) = 1024x16x16 --> 512x16x16
for i in range(layers, 0, -1):
name = "decoder_{:d}".format(i)
setattr(self, name, PartialConvLayer(512 + 512, 512, 3, activation="leaky_relu"))
# UP(512x16x16) + 256x32x32(enc_3 output) = 768x32x32 --> 256x32x32
self.decoder_4 = PartialConvLayer(512 + 256, 256, 3, activation="leaky_relu")
# UP(256x32x32) + 128x64x64(enc_2 output) = 384x64x64 --> 128x64x64
self.decoder_3 = PartialConvLayer(256 + 128, 128, 3, activation="leaky_relu")
# UP(128x64x64) + 64x128x128(enc_1 output) = 192x128x128 --> 64x128x128
self.decoder_2 = PartialConvLayer(128 + 64, 64, 3, activation="leaky_relu")
# UP(64x128x128) + 3x256x256(original image) = 67x256x256 --> 3x256x256(final output)
self.decoder_1 = PartialConvLayer(64 + 3, 3, 3, bn=False, activation="", bias=True)
def forward(self, input_x, mask):
encoder_dict = {}
mask_dict = {}
key_prev = "h_0"
encoder_dict[key_prev], mask_dict[key_prev] = input_x, mask
# Encoder Path
for i in range(1, self.layers + 1):
encoder_key = "encoder_{:d}".format(i)
key = "h_{:d}".format(i)
# Passes input and mask through encoding layer
encoder_dict[key], mask_dict[key] = getattr(self, encoder_key)(encoder_dict[key_prev], mask_dict[key_prev])
key_prev = key
# Gets the final output data and mask from the encoding layers
# 512 x 2 x 2
out_key = "h_{:d}".format(self.layers)
out_data, out_mask = encoder_dict[out_key], mask_dict[out_key]
# Decoder Path
for i in range(self.layers, 0, -1):
encoder_key = "h_{:d}".format(i - 1)
decoder_key = "decoder_{:d}".format(i)
# Upsample to 2 times scale, matching dimensions of previous encoding layer output
out_data = F.interpolate(out_data, scale_factor=2)
out_mask = F.interpolate(out_mask, scale_factor=2)
# concatenate upsampled decoder output with encoder output of same H x W dimensions
# s.t. final decoding layer input will contain the original image
out_data = torch.cat([out_data, encoder_dict[encoder_key]], dim=1)
# also concatenate the masks
out_mask = torch.cat([out_mask, mask_dict[encoder_key]], dim=1)
# feed through decoder layers
out_data, out_mask = getattr(self, decoder_key)(out_data, out_mask)
return out_data
def train(self, mode=True):
super().train(mode)
if self.freeze_enc_bn:
for name, module in self.named_modules():
if isinstance(module, nn.BatchNorm2d) and "enc" in name:
# Sets batch normalization layers to evaluation mode
module.eval()
and just ran this:
# 1. FX Mode quantization
model_fp = PartialConvUNet()
model_to_quantize = copy.deepcopy(model_fp)
model_to_quantize.eval()
qconfig_dict = {"": torch.quantization.get_default_qconfig('qnnpack')}
# prepare (Insert observer)
model_prepared = quantize_fx.prepare_fx(model_to_quantize, qconfig_dict)
# calibrate
model_prepared.eval()
img = torch.ones([1, 3, 256, 256])
mask = torch.ones([1, 3, 256, 256])
model_prepared(img, mask)
Should I switch to eager mode?
st183393 | Think my model’s output values are pretty large, when I pass the input.
model_fp = PartialConvUNet()
img = torch.ones([1, 3, 256, 256])
mask = torch.ones([1, 3, 256, 256])
output = model_fp(img, mask)
print(output)
tensor([[[[ 7.6655, -0.8053, 3.6252, ..., 4.2638, 8.6232, 3.3039],
[ 4.4455, -4.1953, 0.1104, ..., 3.5996, 7.1602, 4.6564],
[ 1.0595, -4.2852, 0.7656, ..., 4.1680, 6.3477, 2.9308],
...,
[ 20.6318, 11.1790, 5.8915, ..., 9.6741, 7.4005, 2.2128],
[ 18.7051, 9.1751, 3.2128, ..., 4.9373, 3.2832, 5.4050],
[ 1.8005, -8.3266, -5.7646, ..., -0.7729, -1.1793, 13.2123]], |
st183394 | dalseeroh:
Overflow when unpacking long
Could you narrow down which line the error happens on? It looks like an error from unpacking an integer number in Python; not sure where that happens…
st183395 | I found out that the scale and zero point parameters of the BatchNorm2d module after conversion with post-training quantization are not included the model.state_dict(). Therefore, when saving the state dict of the converted model to a file, these parameters are lost. This yields different results when doing inference just after post-training quantization, or whether first saving the state dict of the quantized model to a file and later on loading it again.
This does not happen when the batch norm module is fused with another module (like convolution), because then, because the scale and zero point is saved for the module if it is fused (e.g., with convolution module).
An alternative is to save the state dict of the model before running torch.quantization.convert(model). However, is there a specific reason why the scale and zero point of the quantized batch norm module is not included in the state dict of the model after conversion?
Reference: torch.nn.quantized.modules.batchnorm — PyTorch 1.10.0 documentation (here you see that scale and zero point are taken from the activation_post_process.calculate_qparams(), but when loading a new model without doing proper calibration, these values are not set correctly) |
st183396 | Solved by jerryzh168 in post #2
I think this might be a bug, thanks for reporting, opened an issue here: quantized batchnorm parameters/buffers not saved in state_dict · Issue #69808 · pytorch/pytorch · GitHub |
st183397 | I think this might be a bug, thanks for reporting, opened an issue here: quantized batchnorm parameters/buffers not saved in state_dict · Issue #69808 · pytorch/pytorch · GitHub 3 |
st183398 | I am trying to quantize a model, but I got an error when I executes:
model_int = torch.quantization.convert(model_fp_prepared)
print("Conversion completed")
q_output = model_int(img, mask)
NotImplementedError: Could not run ‘aten::empty.memory_format’ with arguments from the ‘QuantizedCPU’ backend. This could be because the operator doesn’t exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit Internal Login 1 for possible resolutions. ‘aten::empty.memory_format’ is only available for these backends: [CPU, CUDA, Meta, MkldnnCPU, SparseCPU, SparseCUDA, BackendSelect, Python, Named, Conjugate, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, UNKNOWN_TENSOR_TYPE_ID, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].
It looks like it happened here:
output = self.input_conv(input_x * mask) |
st183399 | I guess I figured it out. This error seems to happen when I try to multiply those quantized tensors(input_x, mask). The workaround I took is:
# First, dequantize the quantized tensor
input_x = self.dequant(input_x)
mask = self.dequant(mask)
# Do the operation and quantize it back
masked = input_x * mask
masked = self.quant(masked)
input_x = self.quant(input_x)
mask = self.quant(mask)
output = self.input_conv(masked)
It seems like pretty tedious work, but it works. However, can I use self.quant() multiple times like that, or should I use self.quant1(), self.quant2(), self.quant3() separately?
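A minimal sketch of a FloatFunctional-based alternative that stays in the quantized domain and avoids the dequant/quant round trip; it also uses a separate QuantStub per input so each tensor gets its own observer (the toy module below is only an illustration, not the PartialConvLayer itself):
import torch
from torch.quantization import QuantStub, DeQuantStub

class MaskedConv(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.quant_x = QuantStub()
        self.quant_m = QuantStub()
        self.mul = torch.nn.quantized.FloatFunctional()   # quantized elementwise multiply
        self.conv = torch.nn.Conv2d(3, 3, 3, padding=1)
        self.dequant = DeQuantStub()

    def forward(self, x, mask):
        x = self.quant_x(x)
        mask = self.quant_m(mask)
        masked = self.mul.mul(x, mask)        # instead of dequant -> * -> quant
        return self.dequant(self.conv(masked))

m = MaskedConv().eval()
m.qconfig = torch.quantization.get_default_qconfig("fbgemm")
prepared = torch.quantization.prepare(m)
prepared(torch.randn(1, 3, 16, 16), torch.ones(1, 3, 16, 16))   # calibration
qmodel = torch.quantization.convert(prepared)
out = qmodel(torch.randn(1, 3, 16, 16), torch.ones(1, 3, 16, 16))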