id | text
---|---
st82268 | One way to do this is to subclass nn.Sequential and change the forward method so that you can pass a number determining how many layers deep it goes. Then you can build your models using your modified version of nn.Sequential. |
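A minimal sketch of that idea (the class name and the depth argument are invented here for illustration):
import torch.nn as nn

class TruncatedSequential(nn.Sequential):
    def forward(self, x, depth=None):
        # run only the first `depth` submodules (all of them if depth is None)
        for i, module in enumerate(self):
            if depth is not None and i >= depth:
                break
            x = module(x)
        return x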
st82269 | Again, thanks for the suggestion but this is not what I am looking for. I want to take in input a model, defined by someone else in some way that I don’t care about, and stop execution at the n-th module, as returned by model.modules(). I don’t want to modify the model definition, or do anything that would imply having knowledge of how the model is defined or how it works.
In short I want it to be completely automated and work on any possible model without any kind of modification on my part |
st82270 | You need to define more specifically what you want. When you say a “module”, do you mean things in nn.* like linear layers nn.Linear, conv layers nn.Conv*d, upsampling nn.Upsample, nonlinearities, etc.? Or do you mean PyTorch Functions, which are the real fundamental ops of the computation graph, e.g. view, thnn_convolution, svd, transpose, matmul, etc.? Each of the former class is usually implemented with one or more Functions from the latter class, so you need to be clear what your notion of the n-th “module” means. If you are referring to the former class, which is probably more reasonable because they are higher-level and more intuitive, there is another difficulty: people can write models using lower-level Functions, e.g. instead of using nn.Linear they use a transpose + addmm, or they use functions, such as linalg ops including trtrs and potrs, that have no corresponding higher-level nn.* module. |
st82271 | As I mentioned before, I will consider a “module” everything that is returned by model.modules(). They are also the only thing you can attach forward hooks to.
Some people may decide to write their models in a different way, such that model.modules() will be empty or missing pieces, as you point out. But since there is no way around it, I will accept this limitation.
Essentially I want this:
for idx, md in model.named_modules():
    if idx == idx_to_stop_execution_to:
        md.register_forward_hook(my_function)
_ = model(inputs)
and my_function is defined in such a way that it will save the output of that module, and then stop the forward pass of model to save time. I tried to raise an exception in my_function, but the outputs were not being correctly saved.
*** EDIT***
It turns out, the error was that I forgot to de-register the forward hooks. The code below works as intended.
for idx, md in model.named_modules():
    if idx == idx_to_stop_execution_to:
        md.register_forward_hook(my_function)
try:
    _ = model(inputs)
except CustomException:
    pass |
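For reference, a possible definition of my_function for the snippet above (a sketch; CustomException and saved_outputs are names assumed here, and the handle returned by register_forward_hook is what needs to be removed afterwards):
saved_outputs = {}

class CustomException(Exception):
    pass

def my_function(module, inputs, output):
    # forward hooks receive (module, inputs, output)
    saved_outputs['value'] = output.detach()
    raise CustomException  # abort the rest of the forward pass

handle = md.register_forward_hook(my_function)
try:
    _ = model(inputs)
except CustomException:
    pass
finally:
    handle.remove()  # de-register the hook, as noted in the edit above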
st82272 | antspy:
As I mentioned before, I will consider a “module” everything that is returned by model.modules().
This is still ill-defined. What if a module contains Sequential(linear1, linear2)? All three will show up in model.modules(), but one of them contains the other two. What do you mean by n-th module?
antspy:
It turns out, the error was that I forgot to de-register the forward hooks. The code below works as intended.
for idx, md in model.named_modules():
    if idx == idx_to_stop_execution_to:
        md.register_forward_hook(my_function)
try:
    _ = model(inputs)
except CustomException:
    pass
This won’t work for a bunch of models, e.g.
def forward(self, x):
    for _ in range(100):
        x = self.fc(x)
    return x

def forward(self, x):
    if x.data[0] < 1:
        return self.fc1(x)
    else:
        return self.fc2(x)

def forward(self, x):
    z = self.subnet(Variable(torch.randn(1, 2)))
    return z + x
Nothing guarantees that named_modules or modules returns the modules in a certain order. It doesn’t even work for sequential modules.
If you want to limit yourself to nn.Sequential, fine, there are a number of ways to do this. But if you want to do it for general modules, I don’t see how any of the proposals you made could work. You should think about what exactly a “module” is, and, with dynamic Python flow control, how to get the n-th actually executed “module”. |
st82273 | You make some very good points; especially about sequential and the possibility of each module showing up twice.
Note though that
1 - My main question was “stop execution when forward hook is triggered”. So essentially everywhere you want to put a forward hook in, stop execution there. The idea of having it stopped at the “n-th” module was just for my convenience, so that I don’t have to inspect any piece of code.
2 - While there are significant pitfalls to be aware of (and I thank you for pointing them out!), these are left to the “user”. You pass in a model and an index n; you get back the result of registering a forward hook on the n-th module returned by model.modules(). It is up to the user to identify which index is correct, whether a layer shows up twice, or whether it just doesn’t make sense (as in your last examples). This is just a utility that is useful in a large enough number of cases, and where it does not make sense, it does not make sense. The only thing required for this to work is that model.modules() is deterministic, i.e. given the same model definition, the modules are returned in the same order. If this holds, the user can supply the correct n and the method will work (if applicable).
What do you think? Thanks for the answer anyhow |
st82274 | Given the limitations, it might not be very helpful in many cases. But a way to do this may be registering a hook on every submodule, where this hook increments a global counter and if the counter reaches N saves the result at a global ptr and throws an exception. It’s very very hacky though |
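A very rough sketch of that counter-based idea (all names here are placeholders):
class StopForward(Exception):
    pass

state = {'count': 0, 'output': None}

def make_hook(n):
    def hook(module, inputs, output):
        state['count'] += 1
        if state['count'] == n:
            state['output'] = output
            raise StopForward
    return hook

handles = [m.register_forward_hook(make_hook(N)) for m in model.modules()]
try:
    model(inputs)
except StopForward:
    pass
finally:
    for h in handles:
        h.remove()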
st82275 | @SimonW I am implementing ManifoldMixup (http://proceedings.mlr.press/v97/verma19a/verma19a.pdf), which is similar to the Mixup regularization technique except that it works at the layer level instead of the input level as in Mixup. I need similar functionality to implement this:
Select a random index and apply forward hook to that layer
Forward pass using data input x_0 and record output at hooked layer
Use this output along with new input x_1 by adding new hook at the same layer to do this mixup operation
It would be nice if I can stop processing model’s forward pass at hook in first step. Is there a better way to do this now since it has been over a year after this thread? |
st82276 | [screenshot of the model loading output omitted]
Hi guys,
I’m having trouble with loading saved weights.
I just trained on my local CPU, and I didn’t use any DataParallel.
When I inspect torch.load('./seq2seq.pth'), it is an OrderedDict.
Please help me!! |
st82277 | This message might be a bit misleading, but no missing or unexpected keys were found, so your code works fine.
The output was changed to <All keys matched successfully> in PyTorch 1.2.0. |
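For example (a small sketch, reusing the file name from above), the return value can be inspected explicitly:
result = model.load_state_dict(torch.load('./seq2seq.pth'))
print(result.missing_keys, result.unexpected_keys)  # both empty lists if everything matched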
st82278 | I found it works fine.
I really appreciate your help, ptrblck!
Have a good day:) |
st82279 | I have a simple Pytorch model with a single dense and relu layer.
I set the seed to have a fixed starting weight and also to have a fixed input to the model
as below.
from collections import OrderedDict
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(0)
net = nn.Sequential(OrderedDict({"fc1": nn.Linear(20, 2, bias=False),
                                 "relu": nn.ReLU()}))
# obtain the initial weights set
dense_weights = np.array(net.fc1.weight.data.numpy())
print("initial_weight")
print(dense_weights)
# this is the input to be supplied to the model
# of size (20,)
np.random.seed(0)
arr = np.random.rand(20)
print("supplied input")
print(arr)
# these are the desired weights if the model were to converge
# we do this so that we know we can have something to achieve
# if the model were to be trained
np.random.seed(1)
rndw2 = np.random.rand(*net.fc1.weight.shape)
potential_final_weights = rndw2
# this is the label so as to say.
sample_output = np.dot(np.array(potential_final_weights), arr)
# SGD with momentum
optimizer = optim.SGD(net.parameters(), lr=1, momentum=0.9)
optimizer.zero_grad()
output = net(torch.Tensor(np.expand_dims(arr, axis=0)))
target = torch.Tensor(np.expand_dims(sample_output, axis=0))
# use MSE loss
criterion = nn.MSELoss()
loss = criterion(output, target.float())
print("loss obtained")
print(loss)
loss.backward()
optimizer.step()
# updated weights after one training input
updated_weights = np.array(net.fc1.weight.data.numpy())
print("updated_weights")
print(updated_weights)
Now I see two types of results when I run and am pretty confused about why that is the case:
sometimes when I run I get this output:
initial_weight
[[-0.0016741 0.11995244 -0.18403849 -0.16456097 -0.08612314 0.0599618
-0.00443037 0.17729548 -0.01984377 0.05916917 -0.0675769 -0.04395336
-0.21362242 -0.14809078 -0.09217589 0.0082832 0.08839965 0.13416916
-0.15159222 -0.09737244]
[ 0.08121783 0.18568039 -0.04601832 0.16732758 -0.03604169 0.02366075
0.20247063 -0.2074334 -0.14076896 -0.05660947 -0.08716191 0.19319645
-0.14493737 -0.10293356 -0.15622076 -0.2094214 -0.13052833 0.19221196
0.09977746 0.10837609]]
supplied input
[0.5488135 0.71518937 0.60276338 0.54488318 0.4236548 0.64589411
0.43758721 0.891773 0.96366276 0.38344152 0.79172504 0.52889492
0.56804456 0.92559664 0.07103606 0.0871293 0.0202184 0.83261985
0.77815675 0.87001215]
loss obtained
tensor(28.8879, grad_fn=<MseLossBackward>)
updated_weights
[[ 2.6501863 3.575739 2.728507 2.4683082 1.960972 3.1809146
2.109986 4.4863324 4.636564 1.911954 3.7580292 2.5116606
2.5311623 4.3243814 0.2510695 0.42929098 0.1860947 4.1573787
3.608452 4.106516 ]
[ 3.3013914 4.382067 3.490707 3.36444 2.4497607 3.8134565
2.7700217 5.0250616 5.5135403 2.193241 4.5582995 3.2964973
3.1880748 5.3280225 0.26058462 0.30181146 -0.01189651 5.077625
4.6656265 5.2131886 ]]
but sometimes I see very different updated weights, even though, as you can see, the initial weights and the input are exactly the same! The loss also looks different. Could someone please help me understand the reason for this difference?
initial_weight
[[-0.0016741 0.11995244 -0.18403849 -0.16456097 -0.08612314 0.0599618
-0.00443037 0.17729548 -0.01984377 0.05916917 -0.0675769 -0.04395336
-0.21362242 -0.14809078 -0.09217589 0.0082832 0.08839965 0.13416916
-0.15159222 -0.09737244]
[ 0.08121783 0.18568039 -0.04601832 0.16732758 -0.03604169 0.02366075
0.20247063 -0.2074334 -0.14076896 -0.05660947 -0.08716191 0.19319645
-0.14493737 -0.10293356 -0.15622076 -0.2094214 -0.13052833 0.19221196
0.09977746 0.10837609]]
supplied input
[0.5488135 0.71518937 0.60276338 0.54488318 0.4236548 0.64589411
0.43758721 0.891773 0.96366276 0.38344152 0.79172504 0.52889492
0.56804456 0.92559664 0.07103606 0.0871293 0.0202184 0.83261985
0.77815675 0.87001215]
loss obtained
tensor(27.1065, grad_fn=<MseLossBackward>)
updated_weights
[[-1.6741008e-03 1.1995244e-01 -1.8403849e-01 -1.6456097e-01
-8.6123139e-02 5.9961796e-02 -4.4303685e-03 1.7729548e-01
-1.9843772e-02 5.9169173e-02 -6.7576900e-02 -4.3953359e-02
-2.1362242e-01 -1.4809078e-01 -9.2175886e-02 8.2831979e-03
8.8399649e-02 1.3416916e-01 -1.5159222e-01 -9.7372442e-02]
[ 3.3013914e+00 4.3820672e+00 3.4907069e+00 3.3644400e+00
2.4497607e+00 3.8134565e+00 2.7700217e+00 5.0250616e+00
5.5135403e+00 2.1932409e+00 4.5582995e+00 3.2964973e+00
3.1880748e+00 5.3280225e+00 2.6058462e-01 3.0181146e-01
-1.1896506e-02 5.0776248e+00 4.6656265e+00 5.2131886e+00]] |
st82280 | Are you wondering about the reproducibility of the script?
I just executed it several times on my machine and get always your second output. |
st82281 | yes I was wondering about the consistency of results. Oh that is weird! I wonder why I get both those outputs when I re-run multiple times. |
st82282 | I tried it again it does seem that I keep getting those two results. My numpy and torch version are both the latest. Not sure what is going on. Any suggestions to debug this? |
st82283 | @ptrblck I think I just found the source of the issue ! So I printed the Pytorch net object and I noticed that there was an ordering issue in the ordered dictionary causing the different outputs.
So when the output is the second case (the one you get), the net looks like this when printed:
Sequential(
(fc1): Linear(in_features=20, out_features=2, bias=False)
(relu): ReLU()
)
but in the first case, the net looks like this !!!:
Sequential(
(relu): ReLU()
(fc1): Linear(in_features=20, out_features=2, bias=False)
)
the relu and fc are flipped! I think the ordered dictionary should have been specified this way, just took a look at an example in the documentation.
OrderedDict([("fc1", nn.Linear(20, 2, bias=False)),
("relu", nn.ReLU())])
instead of the
OrderedDict({"fc1": nn.Linear(20, 2, bias=False),
"relu": nn.ReLU()})
my bad that I failed to realize that. With the other way, it now gives me the same results as you. |
st82284 | It freezes/hangs right at the beginning. It seems to have something to do with multiprocessing/queues.py. I have already read some other posts and tried:
Not using GPU
“from torch.utils.data.dataloader import DataLoader” and “from torch.utils.data import DataLoader”
setting pin_memory=False
adding
if __name__ == "__main__"
running in Administrator mode
reinstalling pytorch
python3.7 on Windows 10, latest stable pyTorch build 1.2
from torch.utils.data.dataset import Dataset
from torch.utils.data import DataLoader

class DriveData(Dataset):
    def __init__(self):
        self.data = [1, 2, 3, 4, 5, 6]

    # Override to give PyTorch access to any image on the dataset
    def __getitem__(self, index):
        return self.data[index]

    # Override to give PyTorch size of dataset
    def __len__(self):
        return len(self.data)

def main():
    dset_train = DriveData()
    train_loader = DataLoader(dset_train, batch_size=2, shuffle=True, num_workers=1)
    for i, data in enumerate(train_loader):
        print(i)
        print(data)

if __name__ == "__main__":
    main()
Output when num_workers is 0:
0
tensor([2, 6])
1
tensor([1, 4])
2
tensor([5, 3])
No output when num_workers is >0. Just hangs. |
st82285 | I tried the 1.2.0 + CUDA 10.0 + Python 3.6 package and can’t reproduce this issue. |
st82286 | Did you copy-paste his code exactly? Because I tried it myself and I had the same issue! |
st82287 | Would you please send a bug report at https://github.com/pytorch/pytorch/issues? BTW, what is the traceback if you press ctrl+c? |
st82288 | I reported the issue.
By traceback do you mean the error text? I didn’t get you. I am using a Jupyter notebook, btw. |
st82289 | Yes, I mean the error text if you kill that process at background. BTW, is it reproducible if you run it through command prompt? |
st82290 | same error when run from the command prompt. Here’s the error message:
BrokenPipeError Traceback (most recent call last)
<ipython-input-10-344640e27da1> in <module>
----> 1 final_model, hist = train_model(model, dataloaders_dict, criterion, optimizer)
<ipython-input-9-fdf91f815fa7> in train_model(model, dataloaders, criterion, optimizer, num_epochs)
23 # Iterate over data.
24 end = time.time()
---> 25 for i, (inputs, labels) in enumerate(dataloaders[phase]):
26 inputs = inputs.to(device, non_blocking=True)
27 labels = labels.to(device , non_blocking=True)
~\Anaconda3\envs\py_gpu\lib\site-packages\torch\utils\data\dataloader.py in __iter__(self)
276 return _SingleProcessDataLoaderIter(self)
277 else:
--> 278 return _MultiProcessingDataLoaderIter(self)
279
280 @property
~\Anaconda3\envs\py_gpu\lib\site-packages\torch\utils\data\dataloader.py in __init__(self, loader)
680 # before it starts, and __del__ tries to join but will get:
681 # AssertionError: can only join a started process.
--> 682 w.start()
683 self.index_queues.append(index_queue)
684 self.workers.append(w)
~\Anaconda3\envs\py_gpu\lib\multiprocessing\process.py in start(self)
110 'daemonic processes are not allowed to have children'
111 _cleanup()
--> 112 self._popen = self._Popen(self)
113 self._sentinel = self._popen.sentinel
114 # Avoid a refcycle if the target function holds an indirect
~\Anaconda3\envs\py_gpu\lib\multiprocessing\context.py in _Popen(process_obj)
221 @staticmethod
222 def _Popen(process_obj):
--> 223 return _default_context.get_context().Process._Popen(process_obj)
224
225 class DefaultContext(BaseContext):
~\Anaconda3\envs\py_gpu\lib\multiprocessing\context.py in _Popen(process_obj)
320 def _Popen(process_obj):
321 from .popen_spawn_win32 import Popen
--> 322 return Popen(process_obj)
323
324 class SpawnContext(BaseContext):
~\Anaconda3\envs\py_gpu\lib\multiprocessing\popen_spawn_win32.py in __init__(self, process_obj)
87 try:
88 reduction.dump(prep_data, to_child)
---> 89 reduction.dump(process_obj, to_child)
90 finally:
91 set_spawning_popen(None)
~\Anaconda3\envs\py_gpu\lib\multiprocessing\reduction.py in dump(obj, file, protocol)
58 def dump(obj, file, protocol=None):
59 '''Replacement for pickle.dump() using ForkingPickler.'''
---> 60 ForkingPickler(file, protocol).dump(obj)
61
62 #
BrokenPipeError: [Errno 32] Broken pipe |
st82291 | This issue is weird! My code runs on Colab smoothly, so I created an environment locally with EXACTLY the same versions of python 3.6.8, pytorch 1.1.0, torchvision 0.3.0, and cudatoolkit 10.0.130. Still having the same bug! |
st82292 | Hi! I was learning to use TensorBoard in PyTorch according to the tutorial at https://pytorch.org/tutorials/intermediate/tensorboard_tutorial.html.
I run the code within the Jupyter Notebook. The “IMAGES” can be visualized as expected. However, there were only two blank boxes in the “GRAPHS”, where the structure of the model was supposed to be shown.
The results are shown below.
[screenshots of the empty GRAPHS tab (graphs.png) and the working IMAGES tab (images.png) omitted]
Has anyone encountered this problem? I got stuck here for two days. I really don’t know what else I can do.
Thanks in advance for your help! |
st82293 | No, I haven’t got a solution yet. I still don’t know why. Now I kind of feel relieved to know that I am not alone . Let’s wait for someone to help us. |
st82294 | This is apparently quite common, as I’m seeing it as well.
There are two open bug reports on it, the more useful of which is https://github.com/pytorch/pytorch/issues/24157.
No known work-arounds.
I am curious if anyone at all has gotten this to work, and if so, what is different about our respective systems. The only thing I can think of that is remotely non-standard is that I run in a Docker container and on a remote JupyterLab, but that has never caused problems in the past. |
st82295 | I get the exact same behavior - two empty rectangles where the model graph should be. |
st82296 | Hi! I’m having the same issue; it seems to be some kind of bug, and actually there are a lot of people with the same problem, so it will be tackled sooner or later. |
st82297 | for example look how nice the optimizer is (with all the fields):
optimizer
Adam (
Parameter Group 0
amsgrad: False
betas: (0.9, 0.999)
eps: 1e-08
initial_lr: 0.001
lr: 0.001
weight_decay: 0
)
but look at the scheduler:
scheduler
<torch.optim.lr_scheduler.MultiStepLR at 0x12035d4a8>
why does the scheduler print so badly? |
st82298 | To answer your question, that’s most likely because the scheduler does not have as important parameters as the optimizer, and the __str__() method has not been implemented.
You can either inherit from MultiStepLR and create your own subclass, with a __str__() method that prints the elements you want, or create an external function that extracts the elements you want directly (e.g. milestones, gamma, last_epoch…) |
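A rough sketch of that subclass approach (not an official API, just an illustration):
from torch.optim.lr_scheduler import MultiStepLR

class VerboseMultiStepLR(MultiStepLR):
    def __str__(self):
        return 'MultiStepLR(milestones={}, gamma={}, last_epoch={})'.format(
            self.milestones, self.gamma, self.last_epoch)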
st82299 | The parameters are obviously important! They affect the final generalization of any model!
Anyway, that’s a pity it’s not implemented. How does one implement it and submit a pull request? |
st82300 | Have a look at the CONTRIBUTING document for some guidelines.
If you are interested in contributing to PyTorch, your contributions will fall into two categories:
You want to propose a new feature and implement it.
Post about your intended feature, and we shall discuss the design and implementation. Once we agree that the plan looks good, go ahead and implement it.
You want to implement a feature or bug-fix for an outstanding issue.
Search for your issue here: https://github.com/pytorch/pytorch/issues
Pick an issue and comment on the task that you want to work on this feature.
If you need more context on a particular issue, please ask and we shall provide.
Once you finish implementing a feature or bug-fix, please send a Pull Request to https://github.com/pytorch/pytorch |
st82301 | One way is similar to what @alex.veuthey mentioned, you can implement __repr__() method in your scheduler class. The following is __repr__() method in optimizer class.
def __repr__(self):
    format_string = self.__class__.__name__ + ' ('
    for i, group in enumerate(self.param_groups):
        format_string += '\n'
        format_string += 'Parameter Group {0}\n'.format(i)
        for key in sorted(group.keys()):
            if key != 'params':
                format_string += ' {0}: {1}\n'.format(key, group[key])
    format_string += ')'
    return format_string
It exactly does what you want when you print(optimizer), so you can do something like this.
Another way is to directly call the state_dict() method, e.g. print(scheduler.state_dict()), which will return the parameters you might want to look into. |
st82302 | I wonder how PyTorch implements convolutional layers to ensure reasonable performance.
How are input feature maps stored in memory?
Is data replicated when applying filters to different patches of the input? |
st82303 | You can have a look at Convolution.cpp, which uses some switches to decide which backend to call or whether to use the ATen implementation.
E.g. if you are using the GPU, in some cases cudnn will be used, which itself uses different algorithms depending on the data type and shape. If you have a static input shape you could set torch.backends.cudnn.benchmark=True to let cudnn choose the fastest algorithm for your workload. |
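For example (assuming the input shapes stay fixed throughout training):
import torch
torch.backends.cudnn.benchmark = True  # let cuDNN benchmark and pick the fastest conv algorithm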
st82304 | I have an encoder-decoder architecture. I want to do two things:
I wanna stack the same network three times
and share the weights of some layers (the layers from green areas), and not all layers (not the pink ones)
Something like this image:
I am new to PyTorch and I am not sure how to do it.
I would appreciate any reference or pseudo-code. |
st82305 | Hi, to do so the easiest way is to generate two nn.Modules: one for the green part and another one for the pink part.
To share weights you only have to instantiate the green class once. To have different weights you need to instantiate classifierX and auxiliary classifierX 3 times.
Something like this (a sketch; GreenNet and PinkClassifier stand in for your own green and pink submodules):
class GlobalNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net_in_green = GreenNet()       # instantiated once -> weights shared by the 3 branches
        self.classifier0 = PinkClassifier()  # instantiated 3 times -> not shared
        self.classifier1 = PinkClassifier()
        self.classifier2 = PinkClassifier()

    def forward(self, inputs):
        # inputs shape: (batch, 3, other dimensions...)
        # squeeze the 3 branch inputs into the batch dimension
        squeezed = inputs.view(-1, *inputs.shape[2:])  # (3*batch, other dimensions)
        # then apply the green (shared) net once
        green_out = self.net_in_green(squeezed)
        # now unsqueeze the result back with view and apply one pink classifier per branch
        green_out = green_out.view(inputs.size(0), 3, -1)
        output0 = self.classifier0(green_out[:, 0])
        output1 = self.classifier1(green_out[:, 1])
        output2 = self.classifier2(green_out[:, 2])
        return output0, output1, output2
The same goes for the auxiliary classifiers.
The only drawback is that batch normalization is also shared…
As far as I know it’s not straightforward to achieve different statistics. |
st82306 | Hi,
I need to design a similar network where the modules in the green area should share weights, but there is feedback from the pink area to the green area. Any idea how to do that? |
st82307 | Since PyTorch’s pretrained ImageNet models are trained on RGB images, is it possible to work with grayscale images?
One possible solution is repeating the grayscale image over three channels, or converting it to RGB, to work with the existing setup.
Is it possible to somehow take the mean of the three channels’ weights and tweak ResNet to accept the mean weights, then train using grayscale images?
Any working example would be great.
Thanks in advance. |
st82308 | Hi @Rosa, models trained on ImageNet work well with RGB images, but in your case you can either change the first conv layer as described in this post by @JuanFMontesinos, or you can easily use the repetition like this in numpy:
rgb = np.repeat(input[..., np.newaxis], 3, -1) |
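For the “mean of the three channel weights” idea asked about above, a rough (unverified) sketch could look like this:
import torch
import torch.nn as nn
import torchvision.models as models

model = models.resnet18(pretrained=True)
old_conv = model.conv1  # Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
new_conv = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
with torch.no_grad():
    # average the pretrained RGB filters into a single grayscale channel
    new_conv.weight.copy_(old_conv.weight.mean(dim=1, keepdim=True))
model.conv1 = new_conv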
st82309 | Thanks for your answer. I have already seen this post. The solution provided there mentions that:
you would be training from scratch.
Which defeats the purpose of using a pretrained model, if I am not wrong. Does this approach make sure that the model uses the pretrained model’s weights?
And the 2nd solution, converting grayscale to RGB, I mentioned already. If the information is the same, why would I want to use 3x the computational resources and time?
One should be able to use only the grayscale image. |
st82310 | @Rosa I think the weights were trained on ImageNet, which is RGB, so they might not be able to generalize well to grayscale images. |
st82311 | I was wondering if there is a specific method to create a well-performing neural network with only positive weights (I already tried clipping the weights before training and initializing the weights with only positive values), but it still doesn’t give good results, so is there any other method to do so?
Thank You |
st82312 | Did you try the solution proposed here?
doesn’t give good results
What is your baseline to establish if the result is good? Are you trying to match the accuracy of the same model but with default relative weights? |
st82313 | Thank you for your reply. By a good result I mean lower but acceptable results, since we are not using negative weights, which has an impact on the backprop algorithm. |
st82314 | You mean that p.data are the weights of the neural net and the clamp function will clip them to 0 (if they are negative)? |
st82315 | I mean by a good result lower results but acceptable because we are not using negative weight that has an impact on the back prob algorithm.
Sure that makes sense.
you mean by P.data are the weight of the neural net and clamp function will clip them to 0 (if they are negative)?
That’s right.
Is it what you already have tried here:
(I already tried clipping the weight before training
? |
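For completeness, the clamping discussed here usually looks something like this (a sketch, run after each optimizer step):
optimizer.step()
for p in model.parameters():
    p.data.clamp_(min=0)  # clip negative weights back to 0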
st82316 | Thank you, I tried that and the results are acceptable, but my real question was: can I have something like model.weight.data.uniform_(0.0, 1.0) to initialize the weights in a positive range and keep clamping them along the way to make sure they stay between 0 and 1? (I tried that as well and the results are horrible, around 20% accuracy.) Is there any solution for this?
Or how can I avoid being affected too much by the lack of negative weights? |
st82317 | The only way would be having an unsigned float type for the parameters, which does not exist. Otherwise, you would need to clamp something (params or grads) at some point anyway.
Btw I am curious, why do you need to have only positive weights? |
st82318 | Dear all,
I am trying to build PyTorch (git clone of the github repo) from sources but I encounter an error at compile time:
FAILED: caffe2/CMakeFiles/torch.dir/operators/torch_generated_channelwise_conv3d_op_cudnn.cu.o
/tmp/pytorch/pytorch/caffe2/operators/channelwise_conv3d_op_cudnn.cu(102): error: identifier "__ldg" is undefined
/tmp/pytorch/pytorch/caffe2/operators/channelwise_conv3d_op_cudnn.cu(123): error: identifier "__ldg" is undefined
/tmp/pytorch/pytorch/caffe2/operators/channelwise_conv3d_op_cudnn.cu(102): error: identifier "__ldg" is undefined
detected during instantiation of "void caffe2::DepthwiseConv3dGPUKernelNCHW(caffe2::DepthwiseArgs, const T *, const T *, T *, int) [with T=float]"
(380): here
/tmp/pytorch/pytorch/caffe2/operators/channelwise_conv3d_op_cudnn.cu(123): error: identifier "__ldg" is undefined
detected during instantiation of "void caffe2::DepthwiseConv3dGPUKernelNCHW(caffe2::DepthwiseArgs, const T *, const T *, T *, int) [with T=float]"
(380): here
/tmp/pytorch/pytorch/caffe2/operators/channelwise_conv3d_op_cudnn.cu(184): error: identifier "__ldg" is undefined
detected during instantiation of "void caffe2::DepthwiseConv3dBackpropFilterGPUKernelNCHW(caffe2::DepthwiseArgs, const T *, const T *, T *, int) [with T=float]"
(505): here
/tmp/pytorch/pytorch/caffe2/operators/channelwise_conv3d_op_cudnn.cu(303): error: identifier "__ldg" is undefined
detected during instantiation of "void caffe2::DepthwiseConv3dBackpropInputGPUKernelNCHW(caffe2::DepthwiseArgs, const T *, const T *, T *, int) [with T=float]"
(515): here
6 errors detected in the compilation of "/tmp/tmpxft_00003524_00000000-13_channelwise_conv3d_op_cudnn.compute_30.cpp1.ii".
CMake Error at torch_generated_channelwise_conv3d_op_cudnn.cu.o.Release.cmake:279 (message):
Error generating file
/tmp/pytorch/pytorch/build/caffe2/CMakeFiles/torch.dir/operators/./torch_generated_channelwise_conv3d_op_cudnn.cu.o
From what I understand, __ldg is available only for devices with compute capabilities >= 3.5, knowing that I am compiling PyTorch against CUDA 9.2.148 on a NVidia V100 GPU (so my hardware definitely meets these requirements).
Any help would be appreciated. |
st82319 | It might be related to this issue 63.
From the issue:
enable the environment variable export TORCH_CUDA_ARCH_LIST=7.0 , and this should be fixed. |
st82320 | I noticed that it is not possible to feed packed_sequences to things like activation functions or linear layers, forcing me to design models having a forward method like :
def forward(self, input, lengths, hidden=None):
    input = nn.utils.rnn.pack_padded_sequence(input, lengths, batch_first=True, enforce_sorted=False)
    out, hidden = self.lstm(input, hidden)
    out = nn.utils.rnn.pad_packed_sequence(out, batch_first=True, padding_value=-100)[0]
    out = self.drop_layer(self.sigmoid(out))
    out = self.softmax(self.linear_layer(out))
where I have to unpack the packed sequence directly after passing it through the LSTM, and later need to filter it before loss calculation. If I don’t, I receive an error message:
TypeError: sigmoid(): argument 'input' (position 1) must be Tensor, not PackedSequence
This seems highly inefficient since a lot of the calculations done by the activation functions and linear layers will have to be thrown away afterwards.
Why is that so ? What is the reason preventing us from passing packed_sequences to activation function or linear layers ? |
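One possible workaround, sketched here (it constructs a PackedSequence manually, which the docs discourage, so treat it as an assumption rather than a recommended API), is to apply the element-wise layers to the packed .data tensor directly, so no work is spent on padding:
packed_out, hidden = self.lstm(packed_in, hidden)
flat = self.drop_layer(torch.sigmoid(packed_out.data))  # (sum_of_seq_lengths, hidden_size)
flat = self.softmax(self.linear_layer(flat))
packed_out = nn.utils.rnn.PackedSequence(flat, packed_out.batch_sizes)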
st82321 | Hi,
I am using PyTorch 1.2.0 self-compiled with CUDA compute capability 5.2 with C++ and everything works as expected.
I read somewhere that everything down to compute capability 3.5 is supported. Hence, as we aim to support as many graphic cards as possible, i tried to compile PyTorch with compute capability 3.5. However, I get an error when using this version:
.THCudaCheck FAIL file=D:/tools/pytorch-v1.2.0/aten/src\THC/generic/THCTensorMath.cu line=16 error=209 : no kernel image is available for execution on the device
exception message: cuda runtime error (209) : no kernel image is available for execution on the device at D:/tools/pytorch-v1.2.0/aten/src\THC/generic/THCTensorMath.cu:16
The above operation failed in interpreter, with the following stack trace:
at code/model-input_rgbip-output_14classes_best_train_2019_08_03_cpu-eval-mode-export_latest_pytorch.py:292:12
_135 = getattr(_131, “1”)
_136 = _135.weight
_137 = _135.bias
_138 = getattr(self.decoder0, “0”)
_139 = _138.weight
_140 = _138.bias
_141 = getattr(self.logit, “0”)
_142 = _141.weight
_143 = _141.bias
input0 = torch._convolution(input, _1, None, [2, 2], [3, 3], [1, 1], False, [0, 0], 1, True, False, True)
~~~~~~~~~~~~~~~~~~ <— HERE
input1 = torch.batch_norm(input0, weight, bias, running_mean, running_var, False, 0.10000000000000001, 1.0000000000000001e-05, True)
input2 = torch.relu(input1)
input3 = torch.max_pool2d(input2, [3, 3], [2, 2], [1, 1], [1, 1], False)
input4 = torch._convolution(input3, 5, None, [1, 1], [1, 1], [1, 1], False, [0, 0], 1, True, False, True)
input5 = torch.batch_norm(input4, weight0, bias0, running_mean0, running_var0, False, 0.10000000000000001, 1.0000000000000001e-05, True)
input6 = torch.relu(input5)
input7 = torch._convolution(input6, 7, None, [1, 1], [1, 1], [1, 1], False, [0, 0], 1, True, False, True)
out = torch.batch_norm(input7, weight1, bias1, running_mean1, running_var1, False, 0.10000000000000001, 1.0000000000000001e-05, True)
input8 = torch.add(out, input3, alpha=1)
Compiled from code /opt/conda/lib/python3.6/site-packages/torch/nn/modules/conv.py(340): forward
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py(523): _slow_forward
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py(537): call
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/container.py(92): forward
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py(523): _slow_forward
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py(537): call
…/dl/models/unet.py(153): forward
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py(523): _slow_forward
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py(537): call
/opt/conda/lib/python3.6/site-packages/torch/jit/init.py(883): trace_module
/opt/conda/lib/python3.6/site-packages/torch/jit/init.py(751): trace
(5):
/opt/conda/lib/python3.6/site-packages/IPython/core/interactiveshell.py(3296): run_code
/opt/conda/lib/python3.6/site-packages/IPython/core/interactiveshell.py(3214): run_ast_nodes
/opt/conda/lib/python3.6/site-packages/IPython/core/interactiveshell.py(3049): run_cell_async
/opt/conda/lib/python3.6/site-packages/IPython/core/async_helpers.py(67): _pseudo_sync_runner
/opt/conda/lib/python3.6/site-packages/IPython/core/interactiveshell.py(2874): _run_cell
/opt/conda/lib/python3.6/site-packages/IPython/core/interactiveshell.py(2848): run_cell
/opt/conda/lib/python3.6/site-packages/ipykernel/zmqshell.py(536): run_cell
/opt/conda/lib/python3.6/site-packages/ipykernel/ipkernel.py(294): do_execute
/opt/conda/lib/python3.6/site-packages/tornado/gen.py(209): wrapper
/opt/conda/lib/python3.6/site-packages/ipykernel/kernelbase.py(534): execute_request
/opt/conda/lib/python3.6/site-packages/tornado/gen.py(209): wrapper
/opt/conda/lib/python3.6/site-packages/ipykernel/kernelbase.py(267): dispatch_shell
/opt/conda/lib/python3.6/site-packages/tornado/gen.py(209): wrapper
/opt/conda/lib/python3.6/site-packages/ipykernel/kernelbase.py(357): process_one
/opt/conda/lib/python3.6/site-packages/tornado/gen.py(742): run
/opt/conda/lib/python3.6/site-packages/tornado/gen.py(781): inner
/opt/conda/lib/python3.6/site-packages/tornado/ioloop.py(743): _run_callback
/opt/conda/lib/python3.6/site-packages/tornado/ioloop.py(690):
/opt/conda/lib/python3.6/asyncio/events.py(145): _run
/opt/conda/lib/python3.6/asyncio/base_events.py(1451): _run_once
/opt/conda/lib/python3.6/asyncio/base_events.py(438): run_forever
/opt/conda/lib/python3.6/site-packages/tornado/platform/asyncio.py(148): start
/opt/conda/lib/python3.6/site-packages/ipykernel/kernelapp.py(505): start
/opt/conda/lib/python3.6/site-packages/traitlets/config/application.py(658): launch_instance
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py(16):
/opt/conda/lib/python3.6/runpy.py(85): _run_code
/opt/conda/lib/python3.6/runpy.py(193): _run_module_as_main
So I assume compute capability 3.5 is also not fully supported anymore?
Down to which compute capability PyTorch 1.2.0 should work correctly?
My system:
Windows 10
Visual Studio 2019 - CUDA 10.1
Python 3.7
Self-Compiled PyTorch 1.2.0
Thanks!
Best,
Thomas |
st82322 | I tried to build with compute capability 5.0 now, and this build is working. So I assume PyTorch requires 5.0 as minimum compute capability. Is this correct? |
st82323 | Hi,
The comments about pytorch working all the way down to cc3.5 are quite old. I’m afraid this is not true anymore. |
st82324 | github.com
pytorch/builder/blob/master/conda/pytorch-1.1.0/bld.bat#L18 5
set build_with_cuda=
) else (
set build_with_cuda=1
set desired_cuda=%CUDA_VERSION:~0,-1%.%CUDA_VERSION:~-1,1%
)
if "%build_with_cuda%" == "" goto cuda_flags_end
set CUDA_PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v%desired_cuda%
set CUDA_BIN_PATH=%CUDA_PATH%\bin
set TORCH_CUDA_ARCH_LIST=3.5;5.0+PTX
if "%desired_cuda%" == "8.0" set TORCH_CUDA_ARCH_LIST=%TORCH_CUDA_ARCH_LIST%;6.0;6.1
if "%desired_cuda%" == "9.0" set TORCH_CUDA_ARCH_LIST=%TORCH_CUDA_ARCH_LIST%;6.0;7.0
if "%desired_cuda%" == "9.2" set TORCH_CUDA_ARCH_LIST=%TORCH_CUDA_ARCH_LIST%;6.0;6.1;7.0
if "%desired_cuda%" == "10.0" set TORCH_CUDA_ARCH_LIST=%TORCH_CUDA_ARCH_LIST%;6.0;6.1;7.0;7.5
set TORCH_NVCC_FLAGS=-Xfatbin -compress-all
:cuda_flags_end
set DISTUTILS_USE_SDK=1
github.com/pytorch/builder/blob/master/windows/cuda100.bat#L36
echo NVTX ^(Visual Studio Extension ^for CUDA^) ^not installed, failing
exit /b 1
goto optcheck
)
IF "%CUDA_PATH_V10_0%"=="" (
echo CUDA 10.0 not found, failing
exit /b 1
) ELSE (
IF "%BUILD_VISION%" == "" (
set TORCH_CUDA_ARCH_LIST=3.5;5.0+PTX;6.0;6.1;7.0;7.5
set TORCH_NVCC_FLAGS=-Xfatbin -compress-all
) ELSE (
set NVCC_FLAGS=-D__CUDA_NO_HALF_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_35,code=sm_35 -gencode=arch=compute_50,code=sm_50 -gencode=arch=compute_60,code=sm_60 -gencode=arch=compute_70,code=sm_70 -gencode=arch=compute_75,code=sm_75 -gencode=arch=compute_50,code=compute_50
)
set "CUDA_PATH=%CUDA_PATH_V10_0%"
set "PATH=%CUDA_PATH_V10_0%\bin;%PATH%"
)
:optcheck
github.com/pytorch/builder/blob/master/windows/cuda90.bat#L36
echo NVTX ^(Visual Studio Extension ^for CUDA^) ^not installed, failing
exit /b 1
goto optcheck
)
IF "%CUDA_PATH_V9_0%"=="" (
echo CUDA 9 not found, failing
exit /b 1
) ELSE (
IF "%BUILD_VISION%" == "" (
set TORCH_CUDA_ARCH_LIST=3.5;5.0+PTX;6.0;7.0
set TORCH_NVCC_FLAGS=-Xfatbin -compress-all
) ELSE (
set NVCC_FLAGS=-D__CUDA_NO_HALF_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_35,code=sm_35 -gencode=arch=compute_50,code=sm_50 -gencode=arch=compute_60,code=sm_60 -gencode=arch=compute_70,code=sm_70 -gencode=arch=compute_50,code=compute_50
)
set "CUDA_PATH=%CUDA_PATH_V9_0%"
set "PATH=%CUDA_PATH_V9_0%\bin;%PATH%"
)
:optcheck
Actually 3.5 is in the CUDA_ARCH_LIST, I wonder why that is not supported. |
st82325 | Interestingly, when I use the prebuild windows version built with CUDA/CuDNN, it works on a graphics card with cc 3.7 (Tesla K80). However, when I build pytorch myself with CUDA but without CuDNN, it does not work on this graphics card.
Is it possible that some instructions are only implemented for a higher cc in CUDA, but if CuDNN is used these instructions are implemented with CuDNN and hence it works? |
st82326 | The loss is not decreasing after around 11 epochs. The model is a slight variant of the one in the PyTorch tutorial. It wasn’t working even for the tutorial’s model, which I thought was because it was underfitting.
[10, 2000] loss: 0.873
[10, 4000] loss: 0.901
[10, 6000] loss: 0.890
[10, 8000] loss: 0.927
[10, 10000] loss: 0.910
[10, 12000] loss: 0.939
[11, 2000] loss: 0.828
[11, 4000] loss: 0.867
[11, 6000] loss: 0.883
[11, 8000] loss: 0.902
[11, 10000] loss: 0.917
[11, 12000] loss: 0.917
[12, 2000] loss: 0.818
[12, 4000] loss: 0.848
[12, 6000] loss: 0.883
[12, 8000] loss: 0.861
[12, 10000] loss: 0.871
[12, 12000] loss: 0.903
[13, 2000] loss: 0.799
[13, 4000] loss: 0.807
[13, 6000] loss: 0.864
[13, 8000] loss: 0.850
[13, 10000] loss: 0.888
[13, 12000] loss: 0.910
[14, 2000] loss: 0.777
[14, 4000] loss: 0.836
[14, 6000] loss: 0.843
[14, 8000] loss: 0.832
[14, 10000] loss: 0.863
[14, 12000] loss: 0.873
[15, 2000] loss: 0.759
[15, 4000] loss: 0.819
[15, 6000] loss: 0.817
[15, 8000] loss: 0.834
[15, 10000] loss: 0.847
[15, 12000] loss: 0.873
[16, 2000] loss: 0.756
[16, 4000] loss: 0.810
[16, 6000] loss: 0.830
[16, 8000] loss: 0.834
[16, 10000] loss: 0.860
[16, 12000] loss: 0.846
[17, 2000] loss: 0.762
[17, 4000] loss: 0.785
[17, 6000] loss: 0.811
[17, 8000] loss: 0.828
[17, 10000] loss: 0.849
[17, 12000] loss: 0.848
[18, 2000] loss: 0.779
[18, 4000] loss: 0.817
[18, 6000] loss: 0.793
[18, 8000] loss: 0.815
[18, 10000] loss: 0.852
[18, 12000] loss: 0.848
[19, 2000] loss: 0.739
[19, 4000] loss: 0.769
[19, 6000] loss: 0.816
[19, 8000] loss: 0.813
[19, 10000] loss: 0.834
[19, 12000] loss: 0.859
[20, 2000] loss: 0.738
[20, 4000] loss: 0.789
[20, 6000] loss: 0.826
[20, 8000] loss: 0.834
[20, 10000] loss: 0.867
[20, 12000] loss: 0.855
[21, 2000] loss: 0.772
[21, 4000] loss: 0.795
[21, 6000] loss: 0.793
[21, 8000] loss: 0.805
[21, 10000] loss: 0.834
[21, 12000] loss: 0.871
[22, 2000] loss: 0.757
[22, 4000] loss: 0.801
[22, 6000] loss: 0.831
[22, 8000] loss: 0.823
[22, 10000] loss: 0.840
[22, 12000] loss: 0.878
[23, 2000] loss: 0.771
[23, 4000] loss: 0.776
[23, 6000] loss: 0.811
[23, 8000] loss: 0.809
[23, 10000] loss: 0.844
[23, 12000] loss: 0.859
import torch
import torchvision
import torchvision.transforms as transforms
import torchvision.models as models
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
                                         shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat',
           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
import matplotlib.pyplot as plt
import numpy as np
# functions to show an image
def imshow(img):
    img = img / 2 + 0.5  # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()
# get some random training images
dataiter = iter(trainloader)
images, labels = dataiter.next()
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.conv3 = nn.Conv2d(16, 16, 5, padding=2)
        self.conv4 = nn.Conv2d(16, 32, 5, padding=2)
        self.fc1 = nn.Linear(800, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 67)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = F.relu(self.conv2(x))
        x = F.relu(self.conv3(x))
        x = self.pool(F.relu(self.conv4(x)))
        x = x.view(-1, 800)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
net = Net()
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
for epoch in range(200):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:  # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training') |
st82327 | Please refer to:
GitHub: https://github.com/beichen2012/pytorchAdamMemLeak
test libtorch train step for adam (RMSProp) memory leaks in CUDA - beichen2012/pytorchAdamMemLeak |
st82328 | Please also refer to:
github.com/pytorch/pytorch, Issue: libtorch (C++) Adam and RMSProp optimizer memory leaks in CUDA (opened by beichen2012 on 2019-08-28)
🐛 Bug: libtorch (C++) Adam and RMSProp optimizer memory leaks in CUDA
To Reproduce: libtorch 1.2, CUDA 10, Win10, VS2015, win64
Steps to reproduce the behavior: please refer to the... |
st82329 | Thanks for the code and for opening an issue.
For the sake of completeness: the issue is tracked here. |
st82330 | I noticed a strange thing with the following input tensor (the 1 is at position 62) :
tensor([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0.]])
and target :
tensor([62])
the following commands gives the following output :
crit = nn.NLLLoss()
soft = nn.Softmax(dim = 1)
crit(input[0,:].unsqueeze(0),target[0].unsqueeze(0))
Out[120]: tensor(-1.)
crit(soft(input[0].unsqueeze(0)),target[0].unsqueeze(0))
Out[135]: tensor(-0.0270)
This is not zero but it should be… why is that so ? The unsqueeze are here so that NLLLoss is satisfied by input dimensions, |
st82331 | Hello,
I think NLLLoss() works with log probabilities, so you should use LogSoftmax() instead of Softmax(). |
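For example, with the tensors above:
log_soft = nn.LogSoftmax(dim=1)
loss = crit(log_soft(input[0].unsqueeze(0)), target[0].unsqueeze(0))  # NLLLoss expects log-probabilities
(This LogSoftmax + NLLLoss combination is what nn.CrossEntropyLoss does in one step.)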
st82332 | But from the documentation (https://pytorch.org/docs/stable/nn.html) you can see that LogSoftmax() computes LogSoftmax(x_i) = log(exp(x_i) / ∑_j exp(x_j)). So in your case the denominator sums 98 terms of exp(0) = 1 plus one term exp(1) = e, and the behavior is normal. Basically, if you want your loss to be 0, the number at the 62nd position should be really high compared to the other ones. Try putting 20 instead of 1 and you will get a loss of 0.
Just tell me if I was not clear enough. |
st82333 | Changing the value of 1 to a higher value solved the issue, thanks for the help. So I guess I can safely implement what I had in mind, the loss will behave as expected. |
st82334 | I know we can calculate outer subtraction using broadcasting as given in
stackoverflow.com: “Outer sum, etc. in pytorch” (python, optimization, pytorch, torch), asked by iago-lito on 12 Oct 18
But how can I do it for batches where first dimension is batch size and then we have second dimension to be the vectors for which we want to calculate outer subtraction between each vector in the first tensor with every other vector in the second tensor. |
st82335 | See https://stackoverflow.com/questions/55739993/pytorch-batch-outer-addition |
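For reference, a small sketch of how batched outer subtraction can be done with broadcasting (assuming tensors of shape (batch, N) and (batch, M)):
import torch

a = torch.randn(4, 7)  # (batch, N)
b = torch.randn(4, 5)  # (batch, M)
diff = a.unsqueeze(2) - b.unsqueeze(1)  # shape (4, 7, 5); diff[k, i, j] = a[k, i] - b[k, j]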
st82336 | Hello,
Here’s a simple code:
class Identity(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        return x

class CNNLSTM(nn.Module):
    def __init__(self):
        super(CNNLSTM, self).__init__()
        self.BackBone = models.resnet18(pretrained=True)
        num_ftrs = self.BackBone.fc.in_features
        self.BackBone.fc = Identity()
        self.lstm = nn.LSTM(512, 512, batch_first=True)
        self.fc = nn.Linear(512, 1)

    def forward(self, video):
        batch_size, time_steps, C, H, W = video.size()
        c_in = video.view(batch_size * time_steps, C, H, W)
        c_out = self.BackBone(c_in)
        r_in = c_out.view(batch_size, time_steps, -1)
        r_out, _ = self.lstm(r_in)
        output = torch.sigmoid(self.fc(r_out[:, -1, :]))
        return output.squeeze()
If I run the training loop without actually using the model, meaning only fetching the batches for a number of epochs, my GPU uses 1.1 GB out of 4.0 GB.
That is, I only do this:
for epoch in range(num_epochs):
    print('Epoch {}/{}'.format(epoch + 1, num_epochs))
    for i, (inputs, labels) in enumerate(dataloader):
        inputs = inputs.to(device, non_blocking=True)
        labels = labels.to(device, non_blocking=True)
If, however, I call my model on the batch, I get CUDA out of memory. That is, if I only add these lines to my loop:
optimizer.zero_grad()
outputs = model(inputs)
where do the 3 GB go? Do the gradients occupy memory? Because I thought that once you call the model in:
model = CNNLSTM()
model.to(device)
The graphs and the gradients have already allocated their memory?
Thank you |
st82337 | When you actually execute the model, the computation graph gets built dynamically (recording all the operations).
What (I think…) takes most of the memory is the outputs of some operations (the edges of the graph). Those will be needed by autograd to compute the gradients so are kept in memory until you call outputs.backward().
Please find more details here, with the autograd concept better explained than I could. |
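For example, if no gradients are needed (e.g. during validation), the graph and the stored activations can be avoided entirely:
with torch.no_grad():
    outputs = model(inputs)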
st82338 | Just like the human brain is able to do both translation and classification, how can I create one model that accomplishes both of these tasks?
So if I give a sentence in one language as input, it returns the sentence in another language; if I give an image as input, it returns a label; it is able to do both. |
st82339 | Hi all,
I have two tensors with shape [B1, N] and [B2, N], N is the vector length.
I want to get a similarity metric with shape [B1, B2] with the current API.
Can I do that in one shot? I found that it always raises a shape mismatch error.
[screenshot of the attempted code and error omitted] |
st82340 | Yes, you can, in the way below.
Assume the shapes of x1 and x2 are [B1, N] and [B2, N]; then the cosine similarity matrix is computed as below.
x1_norm = F.normalize(x1, p=2, dim=1)
x2_norm = F.normalize(x2, p=2, dim=1)
cosine_similarity = torch.mm(x1_norm, x2_norm.t())  # shape [B1, B2] |
st82341 | Yes, but you have to unsqueeze and swap shape, so the [B1, N] becomes [B1, N, 1] and [B2, N] becomes [1, N, B2]:
t1 = torch.randn((17,1024,1))
t2 = torch.randn((1,1024,512))
s1 = torch.nn.functional.cosine_similarity(t1,t2)
print(s1.shape)
gives
torch.Size([17, 512]) |
st82342 | I am trying to use an LSTM to predict daily usage for users. I have data for 30 days.
Based on business knowledge I know users divide roughly into different categories. E.g. daily users would have a non-zero usage almost every day, weekly users have one or two days of non-zero usage every 7 days and monthly users might have a couple of days with non-zero usage per 30 days.
Samples where every column is one day and each row is access usage for one user.
User 1: 50, 80, 33, 19, 30, 15, ...
User 2: 0, 21, 13, 30, 0, 5, 0, 0, 55, 28, 0, 19, 0, ...
User 3: 11, 2, 11, 56, .....
From the above, Users 1 and 3 may be daily users and User 2 may be a weekly user.
Note that I can only get info such as the user name and access usage.
Can a single LSTM model capture these different types of user patterns?
The goal is to predict the daily usage of each user for the next 10 days.
The reason I ask is that I tried 100 epochs with learning rate 0.0001, but the error still did not improve.
The predictions always look the same, even when given a different user as input. |
st82343 | I noticed what I find to be surprising behaviour: if I index a tensor with a python tuple, I get an alias of the indexed element, but if I index with a python list, I get a copy:
t = torch.rand(3,5)
print(t[1,2].data_ptr())
idx = (1,2)
print(t[idx].data_ptr())
idx = [1,2]
print(t[idx].data_ptr())
Output:
94484139998412
94484139998412
94484140672144
Is this expected behaviour? I gather that indexing with a tuple is identical to directly indexing an element, so returning a reference/alias makes sense.
Am I to understand that a python list is converted to a tensor, and that tensor indexing with a non-zero-dim tensor results in a copy? If so, this discussion around indexing with zero-dim tensors seems relevant here.
There’s also this post on a possibly related confusion. |
st82344 | By using a list to index a tensor you can select elements of a dimension in arbitrary order. Due to the internal representation of a tensor over its underlying continuous storage it is in general not possible to create a new tensor that is just a view into the old one and shares its storage object. In contrast by using a tuple you can specify only single elements along each dimension - each tuple entry is responsible for a different dimension.
e.g. lets say you want to index the diagonal elements in reverse:
>>> x = torch.rand(3,3)
>>> x
tensor([[0.1899, 0.9408, 0.0889],
[0.4863, 0.5366, 0.1633],
[0.8910, 0.4463, 0.2007]])
>>> x[[-1,-2,-3],[-1,-2,-3]]
tensor([0.2007, 0.5366, 0.1899]) |
st82345 | Oh, I get it. A tuple indexes an element, a list provides multiple indexes in the first dimension of t.
In [31]: t = torch.rand(3,5)
...: print(t[1,2])
...: print(t[1,2].data_ptr())
tensor(0.1442)
140485274866972
In [32]: idx = (1,2)
...: print(t[idx])
...: print(t[idx].data_ptr())
tensor(0.1442)
140485274866972
In [33]: idx = [1,2]
...: print(t[idx])
...: print(t[idx].data_ptr())
tensor([[0.8098, 0.4710, 0.1442, 0.5391, 0.1699],
[0.7292, 0.6585, 0.7074, 0.4800, 0.6104]])
140485233816192 |
st82346 | I want to use the concept of multiprocessing with PyTorch. Are there any basic tutorials for this? |
st82347 | nn.parallel.DataParallel
https://discuss.pytorch.org/t/is-the-loss-function-paralleled-when-using-dataparallel/3346 |
st82348 | http://pytorch.org/docs/notes/multiprocessing.html 21
also https://docs.python.org/2/library/multiprocessing.html 26 |
st82349 | Hello,
I am looking for sparse matrix solvers (Ax=b) like BiCGStab in PyTorch. I understand that we can use one of the optimizers in PyTorch for that, but I am not sure how well it performs compared with iterative solvers like BiCGStab.
Can anyone help ? |
st82350 | My model takes multiple inputs (9 tensors), how do I pass it as one input in the following form:
torch.onnx.export(model,inputs,'model.onnx')
I’ve tried putting all the tensors in a list and passing it as input, but I get:
TypeError: forward() missing 8 required positional arguments. What can be a workaround for this? |
st82351 | I’m not really experienced in ONNX, so that unfortunately I cannot really help.
However, could you just change the forward method of your model to take a list instead of 9 arguments? |
st82352 | My forward method takes 9 arguments only. But in this case, they (as per docs) assume that only one tensor is passed to model. But what about models which take multiple tensors as input ? |
st82353 | I have a similar problem, but it looks simpler, taking only two input arguments.
For your case, from another forum, the solution is to wrap your inputs in a tuple, e.g. instead of
onnx.export(model, inputs, ...)
use this:
onnx.export(model, (inputs,), ...) |
st82354 | Hello, I want to do some experiments with weight normalization and I was wondering if it would be possible to set the g in weight normalization as 1 or a different function of time in PyTorch.
Thank you in advance! |
st82355 | I believe this might work: net.fc1.weight_g = nn.Parameter(torch.ones(1)) |
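A slightly more explicit sketch of setting g to 1 (assuming the layer was wrapped with torch.nn.utils.weight_norm):
import torch
import torch.nn as nn
from torch.nn.utils import weight_norm

fc = weight_norm(nn.Linear(10, 5))
with torch.no_grad():
    fc.weight_g.fill_(1.0)         # set g to 1
fc.weight_g.requires_grad_(False)  # optionally keep g fixed during training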
st82356 | Hi,
Can we code a classification without onehot encoding, for example;
x, y = dataloader() ##Load input x and label y(integer scalar)
out = model()
z = round(out) ##Rounding scalar "out" to integer scalar "z"
loss = criteria(z, y)
If this is possible, how we treat back prop on the rounding? |
st82357 | You cannot backprop through round.
You could take the (square, or something) difference of z and y, i.e. solving a regression problem.
Regression on class numbers doesn’t work particularly well for classification in general. (We would not be making the distinction if it did.)
Best regards
Thomas |
st82358 | Hi NaN!
111137:
Can we code a classification without onehot encoding,
…
If you’re asking specifically about whether we can avoid
one-hot encoding the target labels, the answer is yes.
Let’s say we have a classification problem with 5 classes.
The most common approach (probably) is to build a network
whose output (for a single sample) is a length-5 vector (of
real numbers, not integers). But the label (for a single sample)
is a single integer class label (so not one-hot encoded).
Then we run the output and label through, for example,
nn.CrossEntropyLoss as the loss criterion.
Again, in this scheme, the output of your network is a vector of
length number-of-classes, but your labels are single numbers.
Best.
K. Frank |
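A minimal sketch of that setup (integer labels, no one-hot encoding):
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.randn(8, 5, requires_grad=True)  # network output: (batch, num_classes)
labels = torch.randint(0, 5, (8,))              # integer class labels
loss = criterion(logits, labels)
loss.backward()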
st82359 | @KFrank -san,
Thank you, yes, that is what I want to do.
Just make the output layer a single output (float scalar) and train with that value and the label (as a float scalar). After that, apply rounding to the output to get a precise integer at inference time. |
st82360 | Can I make such a linear layer in which weights will not change?
I want to compress the output from 10 neurons to 3, but so that the linear layer is not trained.
self.out = nn.Linear(10, 3) |
st82361 | Solved by spanev in post #9
Can you share the code of the model or at least the concerned part of the model?
If out is nested in another layer or class you would have to do something like model.submodule_name.out |
st82362 | eval() won’t work in this case, as it switches the behavior of certain layers like batchnorm and dropout layers.
Set the requires_grad attribute to False for self.out.weight and self.out.bias to fix these parameters.
EDIT: I was too slow, as @spanev already added the right approach |
st82363 | Did you mean that?
def forward(self, x):
    out = self.fc1(x)  # Linear 1
    out = self.relu1(out)
    out = self.bn1(out)
    out = self.fc1(out)  # Linear 2
    self.out.weight.requires_grad = False
    self.out.bias.requires_grad = False
    return out |
st82364 | If you don’t want to train these parameters at all, follow @spanev’s approach and set the requires_grad attribute to False after creating an instance of the model. |
st82365 | @spanev’s example only freezes the parameters of model.out, which would refer to your linear layer. |
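A minimal sketch of that approach (assuming the frozen layer is stored as self.out on your model; Net is a placeholder for your model class):
model = Net()
model.out.weight.requires_grad = False
model.out.bias.requires_grad = False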
st82366 | I can’t figure out where in the code I need to insert this.
AttributeError: 'model' object has no attribute 'out'
Sorry, got it
…
I wonder how to check if the weights are changing))) |
st82367 | Can you share the code of the model or at least the concerned part of the model?
If out is nested in another layer or class you would have to do something like model.submodule_name.out |