st30868
|
It seems like bincount performance drops dramatically when using a high number of bins, i.e. the number of classes squared.
When training on Mapillary V2 with ~115 classes, it takes anywhere between 80 and 400 ms to bincount an image of only 256x512, whereas when training on the higher-resolution Cityscapes with 19 classes at 1024x2048, so with more pixels to bincount, it usually completes in less than 10 ms.
I presume that bincounting is done with atomic add operations, where there should be fewer collisions with more classes, so it should be faster if anything. My thought is that bincount uses on-chip (shared) memory if the bins fit on an SM, and falls back to global memory if they don't, which would make it much slower. If that is the case, any thoughts on how to speed it up? Maybe calculating the confusion matrix in chunks, to try to get it back into on-chip memory? Theoretically 115x115x4 (int32) = 52.9 kB < 99 kB of shared memory available per thread block on SM 8.6. I tried casting the input to int16 to save space, but that didn't change anything; bincount might cast it back to 32 or even 64 bits internally.
Most of my training loop when running Mapillary is consumed by this bincount, so making it speedy would be of great benefit.
Pytorch: 1.8.1+cu111
GPU: RTX3090
Just some snippets of my code below for reference.
def _gen_confusion_mat(self, prediction: torch.Tensor, target: torch.Tensor,
                       mask: torch.Tensor = None) -> torch.Tensor:
    if mask is not None:
        conf_mat = torch.bincount(self._n_classes * target[mask] + prediction[mask],
                                  minlength=self._n_classes**2)
    else:
        conf_mat = torch.bincount(self._n_classes * target + prediction,
                                  minlength=self._n_classes**2)
    return conf_mat.reshape(self._n_classes, self._n_classes)

def add_sample(self, predictions: Dict[str, torch.Tensor],
               targets: Dict[str, torch.Tensor], loss: int = 0, **kwargs) -> None:
    mask = targets['seg'] != 255
    preds = predictions['seg']  # assuming preds refers to the segmentation predictions
    torch.cuda.synchronize()
    s_time = time.time()
    for idx in range(preds.shape[0]):
        conf_mat = self._gen_confusion_mat(preds[idx], targets['seg'][idx], mask[idx])
        self.metric_data["Confusion_Mat"] += conf_mat
    torch.cuda.synchronize()
    print(f"Confusion mat gen: {1000*(time.time() - s_time):.2f} ms")
    s_time = time.time()
|
st30869
|
Solved by 5had3z in post #2
Yeah so if I chunk it down into slices of 999 with a mask each iteration is less than a millisecond, and after looping over all of them, it results in a factor of 10 reduction in overall time spent (max time 40ms, usually ~10ms).
I found a few magic numbers with SummaryOps.cu: THRESH_NUMBER_BINS_F…
|
st30870
|
Yeah, so if I chunk it down into slices of 999 with a mask, each iteration takes less than a millisecond, and after looping over all of them it results in a factor-of-10 reduction in overall time spent (max time 40 ms, usually ~10 ms).
I found a few magic numbers in SummaryOps.cu: THRESH_NUMBER_BINS_FOR_MULTI_BLOCK_MEM = 100 and THRESH_NUMBER_BINS_FOR_GLOBAL_MEM = 1000. I'm not sure why these exist when the proper checks for whether the bins will fit within shared memory are done anyway, and briefly looking at the kernel itself, I'm not sure why there's a limit of only 100 bins for smem. I guess using up all the smem would mean I could only run one block at a time on an SM; I'm sure a CUDA ninja could explain why this is the case.
Either way, to help anyone else out, here's my implementation:
def _gen_confusion_mat(self, prediction: torch.Tensor, target: torch.Tensor,
                       mask: torch.Tensor = None) -> torch.Tensor:
    if mask is not None:
        temp = self._n_classes * target[mask] + prediction[mask]
    else:
        temp = self._n_classes * target + prediction

    i = 0
    conf_mat = torch.zeros(self._n_classes**2, dtype=torch.int32, device=target.device)
    while i < self._n_classes**2:
        t_mask = temp >= i
        t_mask &= temp < i + 999
        minlength = self._n_classes**2 - i if i + 999 > self._n_classes**2 else 999
        conf_mat[i:i+999] = torch.bincount(temp[t_mask] - i, minlength=minlength)
        i += 999
    return conf_mat.reshape(self._n_classes, self._n_classes)
|
st30871
|
Hi all,
I’m trying to convert a PyTorch model to ONNX with torch.onnx.export, but the operation fails upon trying the ‘var’ operator (symbolic_opset9). I have done some reading and found mean variance normalization (mvn) is supported, but I could not find anything about var alone.
The issue lies when torch.onnx.export is called, with the following trace:
/opt/anaconda3/lib/python3.7/site-packages/torch/onnx/utils.py:715: UserWarning: ONNX export failed on ATen operator var because torch.onnx.symbolic_opset9.var does not exist
Traceback (most recent call last):
  File "onnx_conversion_script.py", line 74, in
    torch.onnx.export(model, dummy_input, model_name + ".onnx", verbose=True, input_names=input_name, output_names=output_name)
  File "/opt/anaconda3/lib/python3.7/site-packages/torch/onnx/__init__.py", line 158, in export
    custom_opsets, enable_onnx_checker)
  File "/opt/anaconda3/lib/python3.7/site-packages/torch/onnx/utils.py", line 68, in export
    custom_opsets=custom_opsets, enable_onnx_checker=enable_onnx_checker)
  File "/opt/anaconda3/lib/python3.7/site-packages/torch/onnx/utils.py", line 469, in _export
    fixed_batch_size=fixed_batch_size)
  File "/opt/anaconda3/lib/python3.7/site-packages/torch/onnx/utils.py", line 338, in _model_to_graph
    fixed_batch_size=fixed_batch_size, params_dict=params_dict)
  File "/opt/anaconda3/lib/python3.7/site-packages/torch/onnx/utils.py", line 153, in _optimize_graph
    graph = torch._C._jit_pass_onnx(graph, operator_export_type)
  File "/opt/anaconda3/lib/python3.7/site-packages/torch/onnx/__init__.py", line 189, in _run_symbolic_function
    return utils._run_symbolic_function(*args, **kwargs)
  File "/opt/anaconda3/lib/python3.7/site-packages/torch/onnx/utils.py", line 716, in _run_symbolic_function
    op_fn = sym_registry.get_registered_op(op_name, '', opset_version)
  File "/opt/anaconda3/lib/python3.7/site-packages/torch/onnx/symbolic_registry.py", line 94, in get_registered_op
    return _registry[(domain, version)][opname]
KeyError: 'var'
Is a custom operator necessary? Has anyone else run into this? I have tried both the latest PyTorch as well as the nightly build with no success.
Thanks.
|
st30872
|
Variance doesn't seem to be an ONNX operator. But hopefully, it can be expressed as a combination of simpler operations.
One way to "bypass" it is to remove your call to "var" and replace it with its formula.
A better approach is to add to torch.onnx the var export (I could do it if I get some free time this evening).
Ideally, if its usage is common, it would be interesting to open an issue on the ONNX Repo to ask for a Variance opcode that could be added to the standard.
Do you know how many models use this operator? I never had to use it before personally.
PS: The variance can be expressed as E[x²] − (E[x])².
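For example, one workaround (a rough sketch of the "replace it with its formula" idea, not an official symbolic) is to compute the variance from ops that ONNX already supports before exporting:
import torch

def manual_var(x, dim, unbiased=True, keepdim=False):
    # variance built only from mean/sub/mul, which are all exportable
    mean = x.mean(dim=dim, keepdim=True)
    var = ((x - mean) ** 2).mean(dim=dim, keepdim=keepdim)
    if unbiased:
        n = x.shape[dim]
        var = var * n / (n - 1)
    return var

x = torch.randn(4, 8)
print(torch.allclose(manual_var(x, dim=1), x.var(dim=1, unbiased=True)))  # True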
|
st30873
|
@jaolan I think you should open a feature request on https://github.com/pytorch/pytorch/issues
|
st30874
|
Thanks for your reply!
Adding var to the export is a good idea. This is the first time I've seen it in a model I am exporting as well, but it seems like an operator that could be useful to others, so I will open a feature request in PyTorch.
|
st30875
|
@Jeremy_Cochoy @jaolan
Hi both, I'm using PyTorch 1.4 and got the same issue when converting a PyTorch model to ONNX.
I'm new to PyTorch; could you explain how to "add var to the export"?
Thanks.
|
st30876
|
Oh, I found the following instructions; I'll try them first. Sorry for posting before searching the web.
github.com/onnx/tutorials/blob/master/tutorials/PytorchAddExportSupport.md
## Fail to export the model in PyTorch
When you try to export a model, you may receive a message similar to the following:
```
UserWarning: ONNX export failed on elu because torch.onnx.symbolic.elu does not exist
RuntimeError: ONNX export failed: Couldn't export operator elu
```
The export fails because PyTorch does not support exporting the `elu` operator. If you've already reached out to the ONNX team but haven't received a response, you can add support for this yourself. The difficulty of doing this depends on your answers to the following questions:
### Determine how difficult it is to add support for the operator
#### Question 1: Is the operator you want standardized in ONNX?
Answer:
- **Yes.** Great! It will be straightforward to add support for the missing operator.
- **No.** In this case, it may be difficult to do the work by yourself.
Check the [Standardization Section](#standardize_op).
#### Question 2: Can the ONNX operator be imported by the backend framework, such as Caffe2?
Answer:
- **Yes.** Terrific. We are able to run the exported ONNX model.
|
st30877
|
Is my code for getting the predicted class correct? I don’t understand how softmax is being used here in cross entropy to get the class with the highest probability
for i, (x, y) in enumerate(zip(feature_trainloader, label_trainloader), 0):
    optimizer.zero_grad()
    output = self.forward(x)
    target = y
    loss = self.CrossEntropyLoss(output, target)
    epoch_loss_train.append(loss.item())
    y_true_train = torch.cat((y_true_train, target))
    _, pred_class = torch.max(output, dim=1)
    y_pred_train = torch.cat((y_pred_train, pred_class))
    loss.backward()
    optimizer.step()
|
st30878
|
nn.CrossEntropyLoss applies F.log_softmax and nn.NLLLoss internally to calculate the loss, which is why you shouldn't apply a softmax activation on the model outputs.
To get the predictions you can apply torch.argmax directly on the logits, since the order won't change compared to the probabilities (the highest/lowest logit will also correspond to the highest/lowest probability).
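A quick way to check that claim (a minimal sketch):
import torch
import torch.nn.functional as F

logits = torch.randn(4, 10)                  # raw model outputs
probs = F.softmax(logits, dim=1)             # probabilities (not needed for the prediction)

pred_from_logits = torch.argmax(logits, dim=1)
pred_from_probs = torch.argmax(probs, dim=1)
print(torch.equal(pred_from_logits, pred_from_probs))  # True, softmax is monotonic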
|
st30879
|
You can't use nn.CrossEntropyLoss with softmax in PyTorch.
Use it as per the details given by @ptrblck.
If you really need softmax probabilities at the end, you can get them with torch.exp(output) when the output is log-probabilities (i.e. after log_softmax),
and if you really want to train with a softmax at the end, then use a custom cross-entropy loss,
something like:
def custom_categorical_cross_entropy(y_pred, y_true):
    y_pred = torch.clamp(y_pred, 1e-9, 1 - 1e-9)
    return -(y_true * torch.log(y_pred)).sum(dim=1).mean()
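For example, it could be used with softmax probabilities and one-hot targets like this (a sketch; the tensors are illustrative):
import torch
import torch.nn.functional as F

logits = torch.randn(4, 10, requires_grad=True)
y_pred = F.softmax(logits, dim=1)                            # probabilities in (0, 1)
y_true = F.one_hot(torch.randint(0, 10, (4,)), 10).float()  # one-hot targets

loss = custom_categorical_cross_entropy(y_pred, y_true)
loss.backward()
print(loss.item())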
|
st30880
|
I’m currently using PyTorch for working on a crowd density analysis application and I’m very interested in analysing, in real-time, the video captured using a camera-equipped drone. How does one go about doing this? Sorry if this isn’t the most appropriate forum for this question!
|
st30881
|
Which micro controller are you using in your drone?
Depending on this, it could be better to export your PyTorch model with ONNX to another framework, e.g. Caffe2.
|
st30882
|
Thanks for the helpful reply! I have an Arduino UNO microcontroller to control my drone
|
st30883
|
I’m not familiar with Arduino, but you could try to build Pytorch from source.
What kind of use case do you have in mind?
|
st30884
|
I noticed in one of your answers you talk about running PyTorch on Raspberry Pi 3; I might use that! The specific use case is that I want to run my model real-time on top of a drone (w camera) and observe the results on my computer.
|
st30885
|
Sure, you could try that.
Let me know, if you got stuck compiling it.
May I know what your use case is?
Real-time on image data on an embedded platform could be tricky
|
st30886
|
I suggest using OpenCV if you need to work with real-time video on a Raspberry Pi. Adrian Rosebrock has a complete blog about it: https://www.pyimagesearch.com
|
st30887
|
We are working on making PyTorch models run on microcontrollers, including the Arduino family. Let us know if you want to collaborate.
|
st30888
|
Check out the deepC compiler and Arduino library for an easy approach to putting PyTorch & ONNX models into Arduino microcontroller applications, including drones.
|
st30889
|
@ptrblck: I have quantized my pytorch model using post training quantization. Now, I want to export with ONNX and I see that ONNX does not support quantized models. Is there any other alternative you know? I am aiming to deploy my model to Arduino.
|
st30890
|
Of course @mohaimen.
Use https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html#post-training-static-quantization to quantize the model before exporting and you're all set.
Now, you can use http://tinyml.studio/ without downloading and installing deepC.
|
st30891
|
Here is a brief talk on how to bring PyTorch models to IoT or MCU device on a drone.
ONNX on MCUs
|
st30892
|
@mohaimen @srohit0 I tried using QuantWrapper for a pretrained Faster R-CNN model; the model size was reduced significantly but the speed wasn't.
Could you tell me if there are better ways to quantize a pre-trained model?
|
st30893
|
@kamathis4 I tried using deepC last year and could not make it work. At that time, it did not support quantized models. In the end, I built the same model in TensorFlow (TF), quantized it using TF Lite, and then deployed it on a Sony Spresense using TF Lite Micro. The TF Lite quantization reduced the model accuracy significantly (about 14%).
Therefore, the answer to your question is yes: there are different frameworks on the market, and I found TF was the only mature framework that would let you deploy something on microcontrollers. Having said that, there is always the option to build your nets in C/C++, which can be ported directly to microcontrollers. You just need some programming skills.
|
st30894
|
Hey,
I am sorry if I missed a similar topic.
I want to skip a weight in a convolution layer. (I know, bad description.)
Let's say I have a kernel like [a, b, c].
But I want a == c (or more precisely, I want the optimizer to treat them as one parameter).
Is there a proper way to achieve this?
I am currently just setting them to (a + c) / 2 every step, but this is not very satisfactory.
Sorry if I am just being a little bit stupid.
And thank you very much for your time
NPC
|
st30895
|
Solved by KFrank in post #2
Hi NPC!
To me the conceptually cleanest approach would be to build your
constrained kernel from a separate two-element Parameter that
you actually optimize. Something like:
import torch
from itertools import chain
# initialize a = 1.1, b = 2.2
k = torch.nn.Parameter (torch.tensor ([1.1, 2.2]…
|
st30896
|
Hi NPC!
NPC:
Let's say I have a kernel like [a, b, c].
But I want a == c (or more precisely, I want the optimizer to treat them as one parameter).
To me the conceptually cleanest approach would be to build your
constrained kernel from a separate two-element Parameter that
you actually optimize. Something like:
import torch
from itertools import chain

# initialize a = 1.1, b = 2.2
k = torch.nn.Parameter(torch.tensor([1.1, 2.2]))
opt = torch.optim.SGD(chain(my_model.parameters(), (k,)), lr=0.1)
# ...
# then in your forward pass
# x = apply_some_model_layers (x)
ker = torch.cat((k[:2], k[0:1]))   # builds [a, b, a] from the two free parameters
ker = ker.view(1, 1, 3)            # conv1d expects a weight of shape (out_channels, in_channels, kW)
x = torch.nn.functional.conv1d(x, ker)
# x = apply_some_more_model_layers (x)
return x
I believe that the weight of a torch.nn.Conv1d has to
be a leaf tensor, so we have to use the functional form,
torch.nn.functional.conv1d (). By using differentiable
pytorch tensor operations to build ker from k, you will be able
to backpropagate through the construction of ker and properly
optimize the two elements of k.
Best.
K. Frank
|
st30897
|
Hi.
I have to select certain hidden states from LSTM outputs, the indices of which are different for each sample, and apply pooling on them. A simple way is to write a for loop and select the indices for each sample in the batch, but I'd like to use PyTorch's API to minimize Python overhead. Is there any option to do this?
|
st30898
|
Why do we calculate the square root of the MSE, since minimizing MSE is the same as minimizing RMSE? Is it because of numerical stability or something? Or to avoid exploding gradients, which can result from bigger loss values?
|
st30899
|
Hello Serdar!
serdarrader:
Why do we calculate square root of MSE since minimizing MSE is the same as minimizing RMSE ?
First a comment: I would lean towards using the mean-squared-error
(MSE), as it is a “more natural” measure of error (whatever that might
mean).
(Just to be clear “RMSE” is an acronym for “root-mean-squared-error”,
and is equal to sqrt (MSE).)
Now for some concrete technical differences:
Consider a single variable, x, and minimizing x**2 with respect to
x using gradient descent. Note that sqrt (x**2) = abs (x). x**2
is the one-dimensional version of MSE, and abs (x) = sqrt (x**2)
is the one-dimensional version of RMSE.
Both x**2 and abs (x) are minimized when x = 0 (at which point
both equal zero). The gradient of x**2 is "softer" in that it gets
smaller (and approaches zero) as x gets closer to 0. In contrast,
the gradient of abs (x) is always either +1 or -1, and doesn’t
change in magnitude as x approaches zero.
When x is large (greater than 1/2), x**2 will have the larger gradient,
and, using gradient descent, drive you towards the minimum at zero
more rapidly. But when x is small, abs (x) will have the larger
gradient.
Unless you expect x to start out very large, you might expect
minimization of abs (x) to proceed more rapidly because its
gradient doesn’t get smaller. On the other hand, because the
magnitude of its gradient stays the same, once near the x = 0
minimum, you might expect gradient descent to jump back and
forth from positive x to negative x back to positive x, and so on,
without making further progress towards x = 0.
So … Pick your poison.
(All of these effects can be addressed to some degree by using
variants of plain-vanilla gradient descent, such as adding momentum,
using an optimizer such as Adam, and/or using a learning-rate
scheduler.)
Of course, the realistic case of using either MSE or RMSE as the
loss function to be applied to the output of a complicated network
is much more involved, but, at some level, the above comments still
apply.
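As a quick numerical illustration of the two gradients (a minimal autograd sketch):
import torch

for x0 in (3.0, 0.1):
    x = torch.tensor(x0, requires_grad=True)
    (x ** 2).backward()           # "MSE-like" term
    grad_sq = x.grad.item()       # equals 2 * x

    x = torch.tensor(x0, requires_grad=True)
    x.abs().backward()            # "RMSE-like" term, since sqrt(x**2) = |x|
    grad_abs = x.grad.item()      # equals sign(x)

    print(f"x = {x0}: d/dx x**2 = {grad_sq}, d/dx |x| = {grad_abs}")

# far from the minimum (x = 3.0) the squared term has the larger gradient;
# close to it (x = 0.1) the absolute-value term does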
Best.
K. Frank
|
st30900
|
KFrank:
be addressed to some degree by using
variants of plain-vanilla gradient descent, such as adding momentum,
using an optimizer such as Adam , and/or using a learning-rate
scheduler.)
Of course, the realistic case
That is a very outstanding answer!
|
st30901
|
The RMSE is an indication of the noise levels in the scale of standard deviations.
The RMSE has nice mathematical properties for fast calculations (its gradient is linear and propagates easily).
|
st30902
|
Hi,
My ground truth is complex-valued. The input can be real-valued or complex, but the output needs to be complex. I tried making the output a two-channel image and using torch.view_as_complex (see the torch.view_as_complex documentation), but that doesn't give good results.
So I am trying to make the input as well as the weights of the NN complex. But I get the following error message:
RuntimeError: "unfolded2d_copy" not implemented for 'ComplexFloat'
(minimal program to reproduce the error)
import torch
import torch.nn as nn
c1 = nn.Conv2d(1, 32, 3, 1)
c1.type(torch.cfloat)
c1(torch.ones([3,1,20,20], dtype=torch.cfloat))
|
st30903
|
The feature request seems to be tracked here (as well as in the related issues posted in the linked issue), and so far the workaround seems to be to split these operations into the real and imaginary parts (as you already did).
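For reference, a rough sketch of that real/imaginary workaround for a Conv2d (my own illustration of the idea, using the usual complex multiplication rule):
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    # (a + ib) conv (w_r + i*w_i) = (a*w_r - b*w_i) + i*(a*w_i + b*w_r)
    def __init__(self, in_ch, out_ch, kernel_size, **kwargs):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size, **kwargs)
        self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size, **kwargs)

    def forward(self, x):  # x is a complex tensor
        real = self.conv_r(x.real) - self.conv_i(x.imag)
        imag = self.conv_r(x.imag) + self.conv_i(x.real)
        return torch.complex(real, imag)

c1 = ComplexConv2d(1, 32, 3)
out = c1(torch.ones([3, 1, 20, 20], dtype=torch.cfloat))
print(out.shape, out.dtype)  # torch.Size([3, 32, 18, 18]) torch.complex64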
|
st30904
|
Hi, I have encountered a problem that relates to the time consumption of an indexing operation.
I have a tensor 'output: [B, C, H, W]' and I use unfold to transform it into 'out_patch: [B, C, H//N, W//N, N, N]', where N is the patch size.
There is also a binary mask tensor 'mask: [B, C, H//N, W//N]' so that I can select patches from out_patch with 'patch = out_patch[mask]'.
There is a similar 'patch = out_patch[mask]' operation in a later part of the code, but the time it consumes is nearly 100 times that of the former one. I have tested this many times and the problem still occurs.
Later, I found that the issue appears when some nn.Conv2d operations run before the indexing operation and the whole code runs on the GPU. If I put the indexing operation before the convolutions, or run the code on the CPU, the time is the same as for the former one.
I wonder what causes this phenomenon and whether there is any way to avoid it. In my method, the latter indexing operation has to come after the convolutions and the code has to run on the GPU. Running time is very important to me. Thanks.
|
st30905
|
Solved by ptrblck in post #9
Yes, the nonzero would need some time to be executed, but you are right that your profiling is wrong as it accumulates the model forward time into the nonzero operation, so it looks as if the nonzero op is more expensive than it is.
As mentioned before, this operation is synchronizing, so should be…
|
st30906
|
CUDA operations are executed asynchronously and based on the description of the issue I guess you might not synchronize the code before starting and stopping the timers, which would return a wrong timing profile (e.g. only the kernel launches might be profiled).
Synchronize the code via torch.cuda.synchronize() before starting/stopping the timers, add warmup iterations, and in the best case calculate the average time of multiple iterations to stabilize the profiling.
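A minimal timing pattern along those lines (a sketch; the model and input are placeholders):
import time
import torch

model = torch.nn.Conv2d(3, 64, 3).cuda()
x = torch.randn(8, 3, 224, 224, device='cuda')

# warmup iterations
for _ in range(10):
    out = model(x)

torch.cuda.synchronize()   # wait for all queued kernels before starting the timer
t0 = time.time()
n = 100
for _ in range(n):
    out = model(x)
torch.cuda.synchronize()   # wait for the timed kernels to finish
t1 = time.time()
print(f"avg forward time: {1000 * (t1 - t0) / n:.3f} ms")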
|
st30907
|
Hi, thanks so much for your response. I have revised my code based on your suggestion, and it seems the problem is not the location of the indexing operation, but that the torch.nonzero operation takes up too much time. As convolution is highly accelerated on the GPU, operations like torch.nonzero seem to be less efficient when the code runs on the GPU, and this single operation takes almost half of the time of the whole inference process. I wonder whether this observation is normal and whether there is any way to reduce the time of these operations. Thanks.
|
st30908
|
The nonzero() operation is synchronizing the code, so unless you’ve added the manual synchronizations into your code as suggested, your timings will be wrong and the nonzero() operation will accumulate the execution time of the asynchronously launched CUDA kernels.
|
st30909
|
Actually I have added the synchronizations, but I am not sure whether I have done it right:
t_nonzero = []

class net(nn.Module):
    ......
    def forward(self):
        ......
    def inference(self):
        ......
        t1 = 1000 * time.time()
        idx = torch.nonzero(mask.squeeze(1))
        t2 = 1000 * time.time()
        t_nonzero.append(t2 - t1)
        ......

if __name__ == "__main__":
    ......
    ## warmup
    for _ in range(100):
        out = net.inference(input, ratio, patchsize)
    torch.cuda.synchronize()
    t0 = 1000 * time.time()
    n = 1000
    for _ in range(n):
        out = net.inference(input, ratio, patchsize)
    torch.cuda.synchronize()
    t1 = 1000 * time.time()
    fps = 1000 * n / (t1 - t0)
    print('time_nonzero:', np.mean(t_nonzero[100:]))
    print('Time:', (t1 - t0) / n)
Could you please help me with it? Thanks a lot.
|
st30910
|
You are synchronizing before some timers, but skip it inside the inference operation.
What could happen is:
torch.cuda.synchronize() # valid sync, CPU waits until GPU is ready
t0 = 1000 * time.time() # start timer
out = net.inference(input, ratio, patchsize) # execute inference
# inside inference
...... # execute model forward pass
t1 = 1000*time.time() # start new timer without waiting for the GPU to finish the work
idx = torch.nonzero(mask.squeeze(1)) # nonzero synchronizes, CPU waits
t2 = 1000*time.time() # stop timer, which shows the GPU forward time + nonzero op
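So, to time just the nonzero op itself, the timers inside inference would also need syncs (a sketch of the corrected pattern):
# inside inference()
# ... model forward pass (asynchronous CUDA kernel launches)
torch.cuda.synchronize()      # make sure the forward pass has actually finished
t1 = 1000 * time.time()
idx = torch.nonzero(mask.squeeze(1))
torch.cuda.synchronize()      # make sure nonzero has finished before stopping the timer
t2 = 1000 * time.time()
t_nonzero.append(t2 - t1)     # now measures only the nonzero op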
|
st30911
|
Hi, if I understand correctly, what you mean is that the torch.nonzero operation does take some time, but my measurement of it was wrong and it didn't take that much time. I deleted the torch.nonzero operation and ran the inference again, finding only a small decrease in the whole inference time, which means torch.nonzero is not as time-consuming as I expected. But I believe these indexing operations should still be less efficient than convolutions when the code runs on the GPU, so I guess real-time models should avoid these operations where possible.
|
st30912
|
Yes, the nonzero would need some time to be executed, but you are right that your profiling is wrong as it accumulates the model forward time into the nonzero operation, so it looks as if the nonzero op is more expensive than it is.
As mentioned before, this operation is synchronizing, so should be avoided for this reason (in case you want to remove all syncs, if possible).
markcheung:
But I believe these indexing operations should be less effective than convolutions when the code is running on GPU.
I would claim it depends on the actual workflow, so you could profile it with synchronizations and compare the runtimes, if needed.
|
st30913
|
I have a PyTorch tensor as the output of a language model (GPT-2) and I use argmax to get the index with the highest probability.
predicted_index = torch.argmax(predictions_2[0, -1, :]).item()
But I need all the indices sorted by probability.
How do I do this in pytorch?
#####################################################
print("predicted_index:", predicted_index)
print("predictions_2[0, -1, :]:", predictions_2[0, -1, :])
I get:
predicted_index: 484
predictions_2[0, -1, :] is: tensor([-122.9283, -124.4627, -128.4069, ..., -131.5974, -128.7110,
        -125.7269], device='cuda:0')
|
st30914
|
Have a look at @ptrblck’s answer:
Advanced indexing with torch.topk
Would this work:
x = torch.randn(3, 10, 10)
idx = torch.topk(x, k=2, dim=0)[1]
x.scatter_(0, idx, 100)
print(x)
you can get the topk values and indices using torch.topk() API.
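If you need all indices ordered by probability rather than just the top k, torch.sort (or torch.argsort) with descending=True gives the full ordering (a small sketch):
import torch

logits = torch.randn(50257)                         # e.g. predictions_2[0, -1, :]
sorted_logits, sorted_indices = torch.sort(logits, descending=True)

print(sorted_indices[:3])                           # same indices as torch.topk(logits, k=3)
print(torch.argsort(logits, descending=True)[:3])   # equivalent index-only form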
|
st30915
|
It, works now. Thanks very much.
predicted_k_indexes = torch.topk(predictions_2[0, -1, :], k=3)
prk_0 = predicted_k_indexes[0]
prk_1 = predicted_k_indexes[1]
for item11 in prk_1:
    print(item11.item())
output:
484
523
35075
|
st30916
|
How can I find the top 3 max values in a tensor, and how can I store the index values in one variable?
|
st30917
|
Solved by vinsent_paramanantha in post #3
Thank you @eqy
But my question is about how to pick the top 3 max values of the indexes and store that indexes in a variable . This is a solution I found
frame = sorted(range(len(mul_reward)), key=lambda i: mul_reward[i], reverse=True)[:4]
print(frame)
con_frame = torch.Tensor(frame)
|
st30918
|
The topk function probably does something similar to what you want: torch.topk — PyTorch 1.8.1 documentation
|
st30919
|
Thank you @eqy
But my question is about how to pick the indexes of the top 3 max values and store those indexes in a variable. This is a solution I found:
frame = sorted(range(len(mul_reward)), key=lambda i: mul_reward[i], reverse=True)[:4]
print(frame)
con_frame = torch.Tensor(frame)
|
st30920
|
I’m not sure I understand; what happens when you run
_, frame = torch.topk(mul_reward, 3) ?
|
st30921
|
This question doesn't seem to be PyTorch-specific, so I would recommend posting it in a general discussion/help board (or a Jupyter-specific one).
|
st30922
|
Hi, I am trying to run a simple VAE model written in PyTorch. To that end, I use the classical MNIST dataset via torchvision.datasets. The data have already been downloaded and are recorded in their raw values, i.e. [0, 255]. Once I load them from the directory, the transform defined to normalize the data (zero mean and unit variance) does not appear to be applied, so the data stay in the original range [0, 255]. How could I do the scaling into [0, 1] and then reapply transforms.Normalize? Here is the code:
transform_train = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.1307,), std=(0.3081,))
])
transform_test = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.1307,), std=(0.3081,))
])
use_cuda = torch.cuda.is_available()
trainset = datasets.MNIST(root='../../data/MNIST', train=True, download=False, transform=transform_train)
valset = datasets.MNIST(root='../../data/MNIST', train=False, download=False, transform=transform_test)
|
st30923
|
The provided code is normalizing the data and it’s not in its original range as seen here:
x, y = trainset[0]
print(x.min(), x.max())
> tensor(-0.4242) tensor(2.8215)
Are you sure this code snippet was returning the raw data?
|
st30924
|
The data I have was downloaded without any transformation, and the values I see are in the range [0, 255], even after executing the code I posted in the thread. Notice that I set the download flag to False since I already have the data recorded locally. If the data were in [0, 1], then the mean and std would be 0.1307 and 0.3081, respectively.
|
st30925
|
Your code unfortunately doesn’t show how you are checking the data stats. As you can see in my code snippet the min and max values show the transformed results, not the original data values.
In case you are checking the internal .data attribute, then note that this is the original data, where no transformations were applied. Also, feel free to post an executable code snippet to reproduce this issue.
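To illustrate the difference (a small sketch): indexing the dataset applies the transforms, while the internal .data attribute keeps the raw uint8 values.
x, y = trainset[0]                       # goes through ToTensor + Normalize
print(x.min(), x.max())                  # tensor(-0.4242) tensor(2.8215)

raw = trainset.data[0]                   # internal storage, no transforms applied
print(raw.min(), raw.max(), raw.dtype)   # tensor(0, dtype=torch.uint8) tensor(255, dtype=torch.uint8) torch.uint8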
|
st30926
|
My model has several sub-modules stored in an nn.ModuleList member. In forward(), I call each sub-module like this:
def forward(self, x):
    out = [m(x) for m in self.sub_module_list]  # where self.sub_module_list is an nn.ModuleList
    out = torch.cat(out)
Is forward() of each submodule run in parallel or sequentially?
I suppose if the module is jit traced the answer is clear: it is parallel, since that's exactly what a static computation graph is meant to enable. But if it is not jit traced, will this line of code run sequentially?
|
st30927
|
It looks like there is one input tensor and all of the modules are on the same device, so I would expect them to run sequentially. However, there can be some pipeline parallelism because there are no dependencies between the modules in the list.
|
st30928
|
Hi
I'm trying to implement a model that works on the hidden states of an LSTM. What I'm trying to do is use pack_padded_sequence to make my LSTM model learn better. The problem I have right now is that I cannot understand how this packing (and sorting) affects the hidden states generated by the model. Are they returned in the right order, or should I rearrange them accordingly?
|
st30929
|
My feeling is that this is not possible. During training I believe it is possible because we are feeding the transformer the true tokens (basically doing teacher forcing). However, during testing we don't have the ground truth and have to be truly auto-regressive, so we have to generate one token at a time. Is this right?
Note this is a code example showing what I believe is the common transformer test-time loop:
-NeuralMachineTranslation/translator.py at 7fd9450c88d833748218c1678124cc67e3303065 · azadyasar/NeuralMachineTranslation · GitHub
for i in range(max_len):
    trg_tensor = torch.LongTensor(trg_indexes).unsqueeze(0).to(self.config.device)
    trg_mask = self.model.make_trg_mask(trg_tensor)
    with torch.no_grad():
        output, attention = self.model.decoder(trg_tensor, enc_src, trg_mask, src_mask)
    pred_token = output.argmax(2)[:, -1].item()
    trg_indexes.append(pred_token)
    if pred_token == self.config.trg_vocab.eos_idx:
        break
other example and discussion: How to vectorize decoder translation in transformer? · Issue #1 · azadyasar/NeuralMachineTranslation · GitHub 2
|
st30930
|
I am interested in implementing a more flexible API for the Linear layer, where the only argument is the output feature size; the input feature size is inferred from the size of the input tensor. I have a minimal implementation:
from torch.nn import Module, Linear

class FlexibleLinear(Module):
    def __init__(self, out_feats):
        super(FlexibleLinear, self).__init__()
        self.out_feats = out_feats
        self.initialized = False
        self.linear = None

    def build(self, x):
        if self.initialized:
            return
        in_feats = x.shape[1]
        out_feats = self.out_feats
        self.linear = Linear(in_feats, out_feats)
        self.initialized = True

    def forward(self, x):
        self.build(x)
        y = self.linear(x)
        return y
I am wondering if there is any (better) way to do this, and/or if this can create any problem for a larger scale network.
|
st30931
|
I'm not really convinced by this code. There are issues:
The layer is not allocated properly; it will raise an exception.
The layer is not tracked by the nn.Module, so the optimizer won't track it and this will create a silent bug.
Although I don't know the details of the new lazy modules, I would recommend checking them out to code this.
https://pytorch.org/docs/stable/_modules/torch/nn/modules/conv.html#LazyConv2d
|
st30932
|
Thank you for the issues mentioned and the link.
It seems like LazyLinear is a new feature added in 1.8:
https://pytorch.org/docs/stable/generated/torch.nn.LazyLinear.html
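For reference, a minimal usage sketch of nn.LazyLinear (in_features is inferred on the first forward pass):
import torch
import torch.nn as nn

layer = nn.LazyLinear(out_features=10)   # in_features not specified yet
x = torch.randn(4, 7)
y = layer(x)                             # first call materializes the weight as (10, 7)
print(y.shape, layer.weight.shape)       # torch.Size([4, 10]) torch.Size([10, 7])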
|
st30933
|
While using torch.norm my code is giving NaNs, so I tried changing the dtype argument to dtype=torch.float64 as mentioned here: torch.norm — PyTorch 1.7.0 documentation.
But it is giving the following error:
ValueError: dtype argument is not supported in frobenius norm.
As mentioned in the WARNING on that page (torch.norm — PyTorch 1.7.0 documentation), torch.norm is deprecated and may be removed in a future release.
But the following snippet is also giving ImportError: cannot import name 'linalg':
from torch import linalg
Please look into the issue.
Thanks.
|
st30934
|
Ashima_Garg:
But the following snippet is also giving as ImportError : cannot import name 'linalg'
from torch import linalg
Which torch version are you using?
linalg was introduced in PyTorch 1.7.0, I think.
Also, can you give a short snippet of code to reproduce the issue?
|
st30935
|
Even I am facing the same issue.
from torch import linalg
gives me the error
from torch import linalg
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-11-fe7ec55944c3> in <module>
----> 1 from torch import linalg
ImportError: cannot import name 'linalg' from 'torch' (/home/nithin/anaconda3/envs/Pytorch_cuda_env/lib/python3.7/site-packages/torch/__init__.py)
|
st30936
|
Which PyTorch version are you using?
As given by @InnovArul it seems that this namespace was introduced in PyTorch>=1.7.
|
st30937
|
See torch.nn.modules.loss — PyTorch 1.8.1 documentation, where the classes _Loss and _WeightedLoss are protected and not imported in __init__.
What is the reason for this? For users who want to write their own loss class, it would be useful both to make use of the existing base classes and to use them to better document the purpose of their own implementation, e.g. by identifying it as a weighted loss.
|
st30938
|
The main reason for having the _Loss class is for the backward-compatibility of the reduction method.
There is nothing special about a loss compared to other nn.Module classes and I personally would think that it is good that no-one gets the idea of doing isinstance(l, Loss) anywhere because that would preclude passing in a nn.Module that does some processing and then calls loss.
Best regards
Thomas
|
st30939
|
I really cannot follow that argument; the ability to subclass opens the possibility of re-using code that is already there, and is normally not something where checking with isinstance is a concern. ALL concrete implementations of loss classes can be subclassed anyway, and the same goes for other module classes, but making e.g. _WeightedLoss protected just means that nobody can easily re-use the (reduction) code in that base class. My question was: why would that be a bad thing?
Not allowing it forces people to copy/paste the code or reimplement something comparable.
|
st30940
|
johann-petrak:
nobody can easily re-use the (reduction) code in that base class. My question was: why would that be a bad thing?
The idea here is that for any reasonably modern code, the re-usable part would be exactly self.reduction = reduction.
What I am trying to suggest is exactly this: new code should not provide the deprecated arguments size_average and reduce, so re-using the translation of them into self.reduction = … should not be terribly interesting.
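In other words, a modern custom loss can subclass nn.Module directly and just store the reduction itself (a small sketch):
import torch
import torch.nn as nn

class MySquaredLoss(nn.Module):
    # custom loss that only needs to store the reduction itself
    def __init__(self, reduction: str = 'mean'):
        super().__init__()
        self.reduction = reduction   # the only state _Loss would otherwise manage

    def forward(self, input, target):
        loss = (input - target) ** 2
        if self.reduction == 'mean':
            return loss.mean()
        if self.reduction == 'sum':
            return loss.sum()
        return loss                  # reduction == 'none'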
When PyTorch finally drops the support for the deprecated arguments, _Loss might go away. (Yeah, and I readily admit that there is no reason why CTCLoss should subclass _Loss when adopting this line of reasoning.)
But maybe I am not understanding this right. Given the current constraints, what would you be copy-pasting to make your code work better?
Best regards
Thomas
|
st30941
|
Sorry, my mistake, I misunderstood what the legacy reduction issue is all about.
Totally makes sense to me now!
|
st30942
|
Hey folks. I just discovered the pytorch-forecasting package's TimeSeriesDataSet class and how it helps with taking data from a pandas DataFrame and creating a PyTorch DataLoader. They show one example of creating a TimeSeriesDataSet, but don't have much in the way of a tutorial. I was wondering if I can create a sequence-to-sequence dataset using this class, meaning I would need the TimeSeriesDataSet class to cut the time-series data into windows where, say, the first 10 days are the data and the subsequent 10 days are the labels.
I have code to do this manually right now, and it works. But for the future it would be nice to have a tool like this to set up new datasets faster. So I was just wondering if anyone knows whether I can set up a sequence-to-sequence set of (data, labels) for the DataLoader, and whether anyone knows of a good example of this.
Thanks.
|
st30943
|
I have a pretrained model and want to see a list of the names (parameters, networks, etc.) in my model.
How can I do that?
pretrain = torch.load('address.pth')
checkpoint = pretrain['model']
How do I find the names of the weights, so I can use them later for loading?
If my question is vague, let me explain what I'm looking for.
Let's say I have a model including layers like:
Conv1 = conv2d(...)
Conv2 = conv2d(...)
After saving the model, how can I see the list of names, such as Conv1 and Conv2?
|
st30944
|
I use the Visual Studio Code IDE, and with it it is very easy to check them by adding them to the "watch" section.
|
st30945
|
Hi,
Conv1, Conv2 are names for layers (or in fact modules). Names of parameters look like Conv1.weight, Conv2.bias.
Sometimes your layer names are just numbers (e.g. when you are using Sequential without specifying the names). Assuming what you want are the names of the parameters (for loading or something), you could use model.state_dict().keys() to get all the names.
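For example (a minimal sketch):
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 16, 3))

print(list(model.state_dict().keys()))
# ['0.weight', '0.bias', '2.weight', '2.bias']

for name, param in model.named_parameters():
    print(name, tuple(param.shape))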
Best
|
st30946
|
@kaixin
That's exactly my issue.
They are the names of the layers.
Hmm, so I have a .pth file; is there a way to find them in my .pth file before loading the model?
|
st30947
|
@isalirezag
Not sure what you mean by “before loading the model”, at least you need torch.load() to load the .pth file so you can inspect it.
|
st30948
|
sorry for my confusing question.
This is what i was looking for:
a=torch.load('*.pth')
a_key=a['model'].keys() # so a_key is the name of all parameters that i have
|
st30949
|
If a contains only the pre-trained weights, you could simply call a.keys() to get the names of all parameters because a is actually an OrderedDict object.
|
st30950
|
Hi, while doing torch.load() I am getting an error that MainModule is missing, and it points the error to serialization.py. I am using weights from https://github.com/ox-vgg/vgg_face2
|
st30951
|
isalirezag:
a=torch.load('*.pth')
a_key=a['model'].keys() # so a_key is the name of all parameters that i have
Hi isalirezag,
I'm new to torch, and I loaded a model from a .pth file, so now I have an OrderedDict object. My question is how I can use this object to do object detection?
Thanks
|
st30952
|
I have a NN with 4 heads that I want to train one after the other.
After having trained one head, I save the net. I then want to resume the model, change the requires_grad of the already-trained head, and start training the next head, but I get this error:
ValueError: loaded state dict contains a parameter group that doesn't match the size of optimizer's group
I get the same error even if requires_grad is changed after having loaded the model.
I do not get any error if I make no change to requires_grad, meaning that resuming works fine.
The problem arises when I load the optimiser; the model can be loaded without problems.
Observe that the net structure never changes over time; it is the trainability of some layers that is switched.
How can I solve this problem?
|
st30953
|
Are you filtering the parameters based on their requires_grad attribute before passing them to the optimizer? Also, could you post an executable code snippet to reproduce this issue, as I’m currently unsure what might cause the error?
|
st30954
|
Hello, first of all thanks for helping.
My code is structured in this way:
I initially create the net structure, with all requires_grad=True by default.
Depending on information in a config file, I switch the requires_grad of all heads but one to False. For instance, the code to freeze the heat-map head is the following:
if config["trainable_heads"]["heatmap"] == False:
    enc_hm = map(lambda x: x[1],
                 filter(lambda p: p[1].requires_grad and not ("backbone.decoder_delta" in p[0]),
                        model.named_parameters()))
    for param in enc_hm:
        param.requires_grad = False
I have 3 other identical snippets for the other 3 heads.
Then I train and save the net.
When I resume, I get the error, which only happens if I try to switch the trainable head. If I do not switch heads, the resume works perfectly.
An attempt I made is the following: before resuming from previous training, I set the layers of the network to the exact same trainability status as just before saving. I checked this was the case by comparing the names of all trainable parameters just before saving and just before resuming. The two lists are indeed identical, as I would have expected, which confuses me a lot.
The error is when I load the optimiser:
self.model.load_state_dict(checkpoint['state_dict'])
self.scheduler.load_state_dict(checkpoint['scheduler'])
self.optimizer.load_state_dict(checkpoint['optimizer'])
No error is raised in the first 2.
|
st30955
|
I cannot reproduce the issue using different work flows of freezing different sets of parameters:
model = models.resnet18()
for param in model.fc.parameters():
    param.requires_grad = False

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
output = model(torch.randn(1, 3, 224, 224))
output.mean().backward()
optimizer.step()
optimizer.zero_grad()

torch.save(model.state_dict(), 'model.pt')
torch.save(optimizer.state_dict(), 'opt.pt')

# load plain model and optimizer
model = models.resnet18()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
model.load_state_dict(torch.load('model.pt'))
optimizer.load_state_dict(torch.load('opt.pt'))

# load model with same frozen parameters
model = models.resnet18()
for param in model.fc.parameters():
    param.requires_grad = False
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
model.load_state_dict(torch.load('model.pt'))
optimizer.load_state_dict(torch.load('opt.pt'))

# load model with different frozen parameters
model = models.resnet18()
for param in model.conv1.parameters():
    param.requires_grad = False
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
model.load_state_dict(torch.load('model.pt'))
optimizer.load_state_dict(torch.load('opt.pt'))
|
st30956
|
Hello,
From what I see in your code, you always pass all of the network's parameters to the Adam optimizer, regardless of the requires_grad value of each of them.
I thought I had to pass only the trainable ones to the optimiser. Is that incorrect?
Thanks for helping
|
st30957
|
You could filter them out before, if you like. As long as the .grad attributes aren’t filled in the frozen parameters, the optimizer won’t update them, but your approach would be more explicit.
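For example, the filtering could look like this (a sketch):
trainable_params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable_params, lr=1e-3)
# note: an optimizer state dict can then only be loaded into an optimizer
# built over a matching set of parameter groups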
Were you able to reproduce the issue using my code snippet and the filtering or any other addition to the code?
|
st30958
|
Is there a way to visualize the graph of a model similar to what Tensorflow offers?
|
st30959
|
There will be tensorboard integration in the future.
For the moment, you can use the visualize function from https://github.com/szagoruyko/functional-zoo/blob/master/visualize.py
and an example of how to use it can be found here.
|
st30960
|
I've made a simpler example to visualize resnet-18 using visualize.py, as @fmassa mentioned.
See https://gist.github.com/wangg12/f11258583ffcc4728eb71adc0f38e832.
|
st30961
|
I tried your code snippet. However, it doesn’t seem to visualize ResNet correctly… I also tried AlexNet, VGG-19, and same story…
|
st30962
|
code:
%matplotlib inline
from graphviz import Digraph
import re
import torch
import torch.nn.functional as F
from torch.autograd import Variable
import torchvision.models as models

def make_dot(var):
    node_attr = dict(style='filled',
                     shape='box',
                     align='left',
                     fontsize='12',
                     ranksep='0.1',
                     height='0.2')
    dot = Digraph(node_attr=node_attr, graph_attr=dict(size="12,12"))
    seen = set()

    def add_nodes(var):
        if var not in seen:
            if isinstance(var, Variable):
                value = '(' + (', ').join(['%d' % v for v in var.size()]) + ')'
                dot.node(str(id(var)), str(value), fillcolor='lightblue')
            else:
                dot.node(str(id(var)), str(type(var).__name__))
            seen.add(var)
            if hasattr(var, 'previous_functions'):
                for u in var.previous_functions:
                    dot.edge(str(id(u[0])), str(id(var)))
                    add_nodes(u[0])

    add_nodes(var.creator)
    return dot

inputs = torch.randn(1, 3, 224, 224)
resnet18 = models.resnet18()
y = resnet18(Variable(inputs))
print(y)
g = make_dot(y)
g
result:
(screenshot of the rendered graph omitted)
definitely, the result doesn’t seem like a ResNet… similar things happen for AlexNet, VGG, etc.
|
st30963
|
Yes, the visualization code is currently broken for convnets because certain layers have C++ implementations that don’t expose the graph pointers to Python. It’ll be fixed after the autograd refactor.
|
st30964
|
zym1010:
g = make_dot(y)
This is my result for resnet18().
I can not find any problem. @zym1010
(screenshot of a correctly rendered resnet18 graph omitted)
|
st30965
|
I’m having the same problem as @zym1010 . Is this because I’m using a different pytorch version? If not, can someone suggest how to go about fixing this?
|
st30966
|
I built PyTorch from source. I now get a KeyError when I try to build the graph. I'm using this script to build my graph. I'm trying to generate a graph for this network.
|
st30967
|
I built from the latest master and changed var.creator to var.grad_fn (because there is no creator in master's Variable, according to https://github.com/szagoruyko/functional-zoo/blob/master/visualize.py), but it still does not draw what @wangg12 shows.
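For newer PyTorch versions, a rough sketch of the same idea built on grad_fn / next_functions (my own adaptation, not the original visualize.py) would look like this:
from graphviz import Digraph
import torch
import torchvision.models as models

def make_dot_grad_fn(output):
    dot = Digraph(node_attr=dict(style='filled', shape='box', fontsize='12'))
    seen = set()

    def add_nodes(fn):
        if fn is None or fn in seen:
            return
        seen.add(fn)
        dot.node(str(id(fn)), type(fn).__name__)
        for next_fn, _ in fn.next_functions:   # walk the autograd graph backwards
            if next_fn is not None:
                dot.edge(str(id(next_fn)), str(id(fn)))
                add_nodes(next_fn)

    add_nodes(output.grad_fn)
    return dot

y = models.resnet18()(torch.randn(1, 3, 224, 224))
g = make_dot_grad_fn(y)
g.render('resnet18_graph', format='png')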
|