st31268
You haven’t pushed the classifier to the GPU, only the model, so you should add classifier.cuda() to your code.
st31269
I ran into a similar issue that wasn’t obvious. I was explicitly setting the weights and the bias parameter of a linear layer. Those weights are tensors, so they also need to be moved to CUDA.
st31270
Hi, in PyTorch Lightning I recently found that if I use F.dropout in the forward step, the dropout is still applied even when I put the model in eval mode; once I replaced it with nn.Dropout as a module attribute, everything behaved normally. So my concern is: if I use F.relu in forward instead of nn.ReLU as an attribute, will the model still behave correctly in the backward pass, i.e. is the ReLU taken into account there, or does it only act in the forward pass? And if I want to use nn.ReLU twice, should I define one self.relu = nn.ReLU() and use it in two different positions, or should I define two, i.e. self.relu1 = nn.ReLU(), self.relu2 = nn.ReLU()? Basically I am not very clear on how backpropagation finds which modules (or functions) it should consider.
st31271
Solved by KFrank in post #2 Hi Shawn! F.relu() (which is to say torch.nn.functional.relu()) is a function. nn.ReLU (torch.nn.ReLU) is a class that simply calls F.relu(). These two ways of packaging the function do the same thing, including when calling .backward(). There is no need to instantiate two instances of the n…
st31272
Hi Shawn! Shawn_Zhuang: "if I use F.relu in forward instead of nn.ReLU as an attribute, will the model still behave correctly in the backward pass?" F.relu() (which is to say torch.nn.functional.relu()) is a function. nn.ReLU (torch.nn.ReLU) is a class that simply calls F.relu(). These two ways of packaging the function do the same thing, including when calling .backward(). Shawn_Zhuang: "if I want to use nn.ReLU twice, should I define one self.relu = nn.ReLU() and use it in two different positions, or should I define two, i.e. self.relu1 = nn.ReLU(), self.relu2 = nn.ReLU()?" There is no need to instantiate two instances of the nn.ReLU function object (but you can if you want). An instance of nn.ReLU doesn't contain any state, so whether you have two instances or only one, they all do the same thing, simply calling F.relu(). (My preference would be to instantiate only one nn.ReLU function object, or, if I didn't need an object instance, simply call F.relu(). But it really doesn't matter.) Best. K. Frank
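A minimal sketch (not from the thread) illustrating the equivalence: the functional and module forms of ReLU give identical outputs and identical gradients, and a single stateless nn.ReLU instance can be reused anywhere.

import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(4, 8, requires_grad=True)

relu = nn.ReLU()                      # one stateless module instance, reused twice below
out_module = relu(relu(x))
out_functional = F.relu(F.relu(x))
print(torch.equal(out_module, out_functional))   # True

out_module.sum().backward()
grad_module = x.grad.clone()
x.grad = None
out_functional.sum().backward()
print(torch.equal(grad_module, x.grad))          # True: same gradients either way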
st31273
Is there a way to create a tensor using per-channel means and stds? For instance, I have two tensors mean and std with k values each, and I would like to generate an output tensor which is a stack of k tensors generated from normal distributions using each mean and std value. What would be an efficient way to do it? Essentially, I would like to do the following:

output = torch.empty(shape)
for i in range(k):
    output[:, i, ...] = output[:, i, ...].normal_(mean=mean[i], std=std[i])

Is there a better way of achieving this? Thanks in advance!
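The thread has no reply, but one hedged, vectorized alternative (shapes below are made up for illustration): broadcast the k per-channel parameters against a standard-normal sample of the full output shape, which avoids the Python loop.

import torch

k = 3
shape = (8, k, 16, 16)                          # assumed (N, C, H, W) layout
mean = torch.tensor([0.0, 1.0, -1.0])
std = torch.tensor([1.0, 0.5, 2.0])

# scale-and-shift a standard normal sample; mean/std broadcast over dims 0, 2, 3
output = mean.view(1, k, 1, 1) + std.view(1, k, 1, 1) * torch.randn(shape)

print(output.mean(dim=(0, 2, 3)))               # roughly the per-channel means
print(output.std(dim=(0, 2, 3)))                # roughly the per-channel stds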
st31274
Currently, I have a Dataloader initialized with a dataset of an array of tensors that does not inherit from Dataset. So far my model has not been learning at all. I was wondering if it is necessary to pass in a dataset class that inherits from Dataset for Dataloader to work or not.
st31275
I don’t think it should really matter unless your dataset is missing some of the required methods (which should throw an error). If the data and label tensors returned by your dataset are reasonable, the problem is likely somewhere else.
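For reference, a minimal sketch of what a map-style dataset needs for DataLoader to work: just __getitem__ and __len__; inheriting from Dataset is not strictly required (the class and shapes here are made up).

import torch
from torch.utils.data import DataLoader

class PlainTensorDataset:                 # note: does not inherit from Dataset
    def __init__(self, data, labels):
        self.data = data
        self.labels = labels

    def __getitem__(self, index):
        return self.data[index], self.labels[index]

    def __len__(self):
        return len(self.data)

data = torch.randn(100, 3)
labels = torch.randint(0, 2, (100,))
loader = DataLoader(PlainTensorDataset(data, labels), batch_size=10, shuffle=True)

x, y = next(iter(loader))
print(x.shape, y.shape)                   # torch.Size([10, 3]) torch.Size([10])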
st31276
I was trying to build a tree-like neural net, where there are nested modules in ModuleList/ModuleDict objects. However I encountered a maximum recursion bug in the top node, the ‘root’ of the tree. To make it simple, I created a minimal example to reproduce this error (PyTorch 1.2):

class TreeNode_Test(nn.Module):
    def __init__(self):
        super(TreeNode_Test, self).__init__()
        self.nodesInLevels = nn.ModuleList([self])

myModel = TreeNode_Test()
myModel  # when calling this or myModel.nodesInLevels I'll get a max recursion error:

File "C:\Users\mk23\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1042, in __repr__
    mod_str = repr(module)
File "C:\Users\mk23\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1042, in __repr__
    mod_str = repr(module)
File "C:\Users\mk23\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1036, in __repr__
    extra_repr = self.extra_repr()
RecursionError: maximum recursion depth exceeded

Any ideas?
st31277
Hi, your model references itself when you use self in your ModuleList, so the print function that tries to print all the modules contained in your model will recurse infinitely.
st31278
That is an explanation but not a solution. I don’t think this behaviour is correct: if I change the parent class to object instead of nn.Module, or the nn.ModuleList to a plain Python list(), then it works as expected - but then it won’t work with DataParallel, as it won’t replicate the model properly across multiple GPUs and I will end up with the dreaded tensors/parameters-on-different-GPUs error… It isn’t just print (which I could avoid by not calling it); pretty much everything ends up in an infinite loop, e.g. module.apply(fn), which I can’t avoid using.
st31279
What would your use case be? If you use self as a module inside nn.Module, even the __call__ function will try to recursively call itself.
st31280
Hi, I have encountered the same issue when following this parallelism tutorial: Multi-GPU Examples — PyTorch Tutorials 1.7.1 documentation → Attributes of the wrapped module. Simple code snippet to reproduce:

>>> import torch
>>> class TorchDataParallel(torch.nn.DataParallel):
...     def __getattr__(self, name):
...         return getattr(self.module, name)
...
>>> block = torch.nn.Module()
>>> parallel_block = TorchDataParallel(block)
>>> parallel_block.stream_names
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in __getattr__
  File "<stdin>", line 3, in __getattr__
  File "<stdin>", line 3, in __getattr__
  [Previous line repeated 995 more times]
RecursionError: maximum recursion depth exceeded

Wondering what would be the right way to access custom attributes?
st31281
Which PyTorch version are you using? I’ve just tried it out on a ~1 week old source build and don’t get an error.
st31282
I get the same on ‘1.7.1+cu101’. Can get it to work doing:

class DataParallel(torch.nn.parallel.DataParallel):
    def __getattr__(self, name):
        module = object.__getattribute__(self, "_modules")["module"]
        if name == "module":
            return module
        return getattr(module, name)
st31283
In the forward function of nn.TransformerEncoderLayer, the input goes through MultiheadAttention, followed by Dropout, then LayerNorm. According to the documentation, the input-output shape of MultiheadAttention is (S, N, E) → (L, N, E), where S is the source sequence length, L is the target sequence length, N is the batch size, and E is the embedding dimension. The input-output shape of LayerNorm is (N, *) → (N, *). Wouldn’t this cause a problem, because the batch size of the MultiheadAttention output is in the second dimension, while LayerNorm expects the batch size to be in the first dimension? Or am I missing something? Thanks!
st31284
From the LayerNorm — PyTorch 1.8.1 documentation: "If a single integer is used, it is treated as a singleton list, and this module will normalize over the last dimension which is expected to be of that specific size." So, TL;DR: it doesn’t care whether seq len and batch size are permuted, as long as the last dim is correct.
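A quick check (my sketch, not from the thread): normalizing over the last dimension gives the same values whether the tensor is laid out as (seq_len, batch, dim) or (batch, seq_len, dim).

import torch
import torch.nn as nn

dim = 16
ln = nn.LayerNorm(dim)

x = torch.randn(10, 4, dim)                    # (seq_len, batch, dim)
out_seq_first = ln(x)
out_batch_first = ln(x.transpose(0, 1))        # same data viewed as (batch, seq_len, dim)

# The same elements get normalized either way; only the layout differs.
print(torch.allclose(out_seq_first, out_batch_first.transpose(0, 1)))   # True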
st31285
In the case of nn.TransformerEncoderLayer (below I do not distinguish between embed_dim and d_model for the internal operations):
- if batch_first=True, the input of the model will be of shape (batch_size, seq_len, dim): here N of nn.LayerNorm corresponds to batch_size, * to (seq_len, dim) and normalized_shape to dim;
- otherwise, the input of the model will be of shape (seq_len, batch_size, dim): here N of nn.LayerNorm corresponds to seq_len, * to (batch_size, dim) and normalized_shape to dim.
But in all cases the normalization is done along the last dimension (dim), because according to the documentation of nn.LayerNorm, when a single integer is used for normalized_shape, it is treated as a singleton list and the module will normalize over the last dimension, which is expected to be of that specific size. There is no need to talk about a target length in the case of the encoder; we have only one input of length seq_len / source_seq_len. It is in the case of nn.TransformerDecoderLayer that we can talk about T and S (target length and source length), and there the layer norms are applied to the decoder stream of shape (target_seq_len, batch_size, dim) (or (batch_size, target_seq_len, dim) for batch_first=True), again always along the last dimension, no matter what value batch_first has. See:
TransformerEncoderLayer: pytorch/transformer.py at master · pytorch/pytorch · GitHub
TransformerDecoderLayer: pytorch/transformer.py at master · pytorch/pytorch · GitHub
st31286
Thank you both, that clarifies my confusion. For nn.LayerNorm, it doesn’t matter whether the batch size is in the first or second index for a 3D tensor, because it acts on the last dimension. A related follow-up question to clarify my understanding of nn.MultiheadAttention: for nn.MultiheadAttention, unlike nn.LayerNorm, it does matter whether the batch size is in the first or second index, right? It doesn’t just act on the last dimension?
st31287
In fact all the computations inside nn.MultiheadAttention are done with the shape (seq_len, batch_size, dim), i.e. batch_first=False. Below, seq_len corresponds to target_seq_len in the case of the query, and to source_seq_len in the case of the key and value. As you can see in the implementation, when batch_first=True, i.e. the entries (query, key, value) are of shape (batch_size, seq_len, dim), they are first transposed to (seq_len, batch_size, dim) before the computations, and then the result (attn_output), of shape (seq_len, batch_size, dim), is transposed back to (batch_size, seq_len, dim) before being returned.
st31288
After training the model, I use torch.save to save the model in .pth format. In another python file, I try to use torch.load to load the model and do the prediction, but it seems like it retrains the model and then does the prediction. Can someone explain that for me?
st31289
What do you mean by “but seems like it retrain the model and then do the prediction”? After loading your model, you usually set it to evaluation mode using model.eval(), since some layers behave differently. BatchNorm and Dropout are examples of the different behavior between training and evaluation. Could you explain a bit more?
st31290
Thanks for your reply. For example, I trained a model on the MNIST dataset: while training I set 5 epochs and print something after each epoch, and at the end of the code I use torch.save(cnn, 'path') to save the model in .pth format. In another python file, when I use model = torch.load('path') to load the model, it prints from the first epoch (does it train again?). So that’s the point I’m confused about. Thanks again for your reply.
st31291
Could you try to save and load the state_dict? You can find some information here.
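A hedged sketch of that pattern (file and class names are placeholders): save only the state_dict, and keep the training loop behind an if __name__ == "__main__" guard, so that importing or unpickling the model definition from another file cannot re-run the training code.

# train.py
import torch
import torch.nn as nn

class CNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(784, 10)

    def forward(self, x):
        return self.fc(x.flatten(1))

if __name__ == "__main__":
    model = CNN()
    # ... the training loop lives here and only runs when train.py is executed directly ...
    torch.save(model.state_dict(), "cnn.pth")

# predict.py
# from train import CNN
# model = CNN()
# model.load_state_dict(torch.load("cnn.pth"))
# model.eval()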
st31292
Hi, I know it’s been years, but did you find a solution for this? I’m stuck with the same issue although I did load the state_dict as the documentation states.
st31293
Hi all. In a few research papers I have found that the pseudocode for an algorithm is written in PyTorch style. Is there any standard format for writing this PyTorch-style pseudocode? If yes, kindly post the related link. PS: It would also be helpful if you could post some more research papers with PyTorch-style pseudocode. Thank you
st31294
can you share some of these papers you mentioned so we can see what the style looks like?
st31295
\usepackage[ruled,vlined]{algorithm2e}
\definecolor{commentcolor}{RGB}{110,154,155}   % define comment color
\newcommand{\PyComment}[1]{\ttfamily\textcolor{commentcolor}{\# #1}}  % add a "#" before the input text "#1"
\newcommand{\PyCode}[1]{\ttfamily\textcolor{black}{#1}}  % \ttfamily is the code font
...
\begin{algorithm}[h]
\SetAlgoLined
    \PyComment{this is a comment} \\
    \PyComment{this is a comment} \\
    \PyComment{} \\
    \PyComment{going to have indentation} \\
    \PyCode{for i in range(N):} \\
    \Indp  % start indent
        \PyComment{your comment} \\
        \PyCode{your code} \PyComment{inline comment} \\
    \Indm  % end indent, must end with this, else all the below text will be indented
    \PyComment{this is a comment} \\
    \PyCode{your code}
\caption{PyTorch-style pseudocode for your-algo}
\label{algo:your-algo}
\end{algorithm}

I use the above LaTeX code to construct a PyTorch-style (more of a Python-style) pseudocode algorithm table.
st31296
Hi all, what is the reshape layer in PyTorch? In torch7 it seems to be nn.View, but what is it in PyTorch? What I want is to add a reshape layer in nn.Sequential. Thanks.
st31297
Solved by allenye0119 in post #8 If you really want a reshape layer, maybe you can wrap it into a nn.Module like this: import torch.nn as nn class Reshape(nn.Module): def __init__(self, *args): super(Reshape, self).__init__() self.shape = args def forward(self, x): return x.view(self.shape)
st31298
We don’t recommend that. Use nn.Sequential only for trivial sequences; if you need to insert some reshaping or views, wrap them in a container module. You can see how the torchvision models are implemented.
st31299
apaszke: "recommend" Hi, good example, thanks. What exactly is not recommended? My network is a bit complex, so I use nn.Sequential. BTW, if I do not use nn.Sequential, what is the reshape layer in PyTorch? Thank you.
st31300
There’s no reshape layer. You just call .view on the output you want to reshape in the forward function of your custom model.
st31301
I didn’t find how reshape is wrapped in a container in that example. Could you elaborate a little more? Thanks!
st31302
you can see this line here: https://github.com/pytorch/examples/blob/master/dcgan/main.py#L178
st31303
Thanks for your reply. But it is still in the forward function. How could I do something like self.node = nn.Sequential(*layers), where layers contains a reshape, so that later I only need to call self.node(input)?
st31304
If you really want a reshape layer, maybe you can wrap it into a nn.Module like this:

import torch.nn as nn

class Reshape(nn.Module):
    def __init__(self, *args):
        super(Reshape, self).__init__()
        self.shape = args

    def forward(self, x):
        return x.view(self.shape)
st31305
Thanks~ but that is still a lot of code; a lambda layer like the one used in Keras would be very helpful.
st31306
We are not big on layers; in fact you can avoid the entire Sequential and just use a for loop.
st31307
Knowing that a Flatten() layer was recently added, how about adding a Reshape layer as well, for the very same reason Flatten() was added? It just makes life easier, especially for newcomers from Keras. Also, it can come in handy in normal sequential models as well.
st31308
I have to ask why reshaping does not count as "trivial"? The current way of working forces me to split the logic of the data flow into two separate places: the definition of the nn.Sequential, and forward().
st31309
I think in PyTorch the way of thinking, differently from TF/Keras, is that layers are generally used for operations that carry learnable parameters; Flatten(), Reshape(), Add(), etc. are just formal operations with no parameters or state involved, so you can just use helper functions like the ones in torch.nn.functional.*. There are some use cases where a Reshape() layer can come in handy, like in embedded systems where you add a reshape at the front of your model, so that the whole model is compacted to be flashed onto the device and the reshape can adjust the incoming data from the sensors. For high-level DL, those layers are more confusing than beneficial.
st31310
I think the layer nn.Unflatten() may do the job. It can be inserted into a Sequential model.
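To close the loop, a small sketch (shapes are made up) showing both options inside nn.Sequential: a slight variant of the custom Reshape module from the post above (this one keeps the batch dimension), and the built-in nn.Flatten/nn.Unflatten layers available in recent PyTorch versions.

import torch
import torch.nn as nn

class Reshape(nn.Module):
    def __init__(self, *shape):
        super().__init__()
        self.shape = shape

    def forward(self, x):
        return x.view(x.size(0), *self.shape)   # keep the batch dimension

model_custom = nn.Sequential(
    nn.Linear(32, 64),
    Reshape(4, 4, 4),            # (N, 64) -> (N, 4, 4, 4)
)

model_builtin = nn.Sequential(
    nn.Flatten(),                # (N, 4, 4, 4) -> (N, 64)
    nn.Linear(64, 32),
    nn.Unflatten(1, (2, 16)),    # (N, 32) -> (N, 2, 16)
)

x = torch.randn(8, 32)
print(model_custom(x).shape)                  # torch.Size([8, 4, 4, 4])
print(model_builtin(model_custom(x)).shape)   # torch.Size([8, 2, 16])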
st31311
I am trying to create a data loader for my dataset. When I run it I am getting the error below. My code is:

import torch
import pandas as pd
from torch.utils.data import DataLoader, Dataset

data = pd.read_csv("data.csv")
data = data.to_numpy()
data = data[:, 1:].astype('float64')

class MyDataset(Dataset):
    def __init__(self, data, transform=None):
        self.data = data
        self.transform = transform

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return len(self.data)

data = torch.tensor(data)
test = data
dataset = MyDataset(data)
train_loader = DataLoader(dataset)

When I run the code below, I am getting the error "in __getitem__ x = self.data[index], TypeError: 'int' object is not subscriptable":

for epoch in range(1000):
    loss_sum = 0.0
    for i, x in enumerate(train_loader):
        optimizer.zero_grad()
        loss = -model.log_prob(x.to(args.device)).mean()
        loss.backward()
        optimizer.step()
        loss_sum += loss.detach().cpu().item()
st31312
Try to print the variable data to see its type (and its shape too): based on the error, it seems to be an integer and not an array.
st31313
I tried that and this is its output <class 'torch.Tensor'> torch.Size([600, 3000])
st31314
class MyDataClassification(nn.Module):
    def __init__(self):
        super(MyDataClassification, self).__init__()
        self.layer_1a = torch.nn.Conv1d(in_channels=ch1, out_channels=32, kernel_size=4, stride=1)
        self.relu = nn.ReLU()
        self.layer_2a = torch.nn.Conv1d(in_channels=32, out_channels=16, kernel_size=3, stride=1)
        self.relu = nn.ReLU()
        self.layer_3a = torch.nn.Conv1d(in_channels=16, out_channels=1, kernel_size=2, stride=1)
        self.relu = nn.ReLU()
        self.layer_1b = torch.nn.Conv1d(in_channels=ch2, out_channels=32, kernel_size=4, stride=1)
        self.relu = nn.ReLU()
        self.layer_2b = torch.nn.Conv1d(in_channels=32, out_channels=16, kernel_size=3, stride=1)
        self.relu = nn.ReLU()
        self.layer_3b = torch.nn.Conv1d(in_channels=16, out_channels=1, kernel_size=2, stride=1)
        self.relu = nn.ReLU()
        self.layer_3 = nn.Linear(whatever_value_makes_this_work, seq_len)
        self.dropout = nn.Dropout(p=0.2)

    def forward(self, x1, x2):
        x1 = self.layer_1a(x1)
        x1 = self.layer_2a(x1)
        x1 = self.layer_3a(x1)
        x2 = self.layer_1b(x2)
        x2 = self.layer_2b(x2)
        x2 = self.layer_3b(x2)
        x = torch.add(x1, x2)
        x = torch.flatten(x, start_dim=2, end_dim=-1)
        x = self.layer_3(x)
        x = self.layer_4(x)
        return x

My input x1 is a tensor of shape [batch_size, ch1, seq_len] and x2 is of shape [batch_size, ch2, seq_len]. My target is of shape [batch_size, seq_len]. The output of the above model is [batch_size, no_of_classes, seq_len]. I am using CrossEntropyLoss() and the model seems to be training. But when I print the loss and accuracy using

print(f'Epoch {e+0:03}: | Train Loss: {train_epoch_loss/len(train_loader.dataset):.5f} | Val Loss: {val_epoch_loss/len(val_loader.dataset):.5f} | Train Acc: {train_epoch_acc/len(train_loader.dataset):.3f}| Val Acc: {val_epoch_acc/len(val_loader.dataset):.3f}')

the accuracy values are extremely high, in the thousands or ten thousands! What is going wrong? Am I making some error that I am not aware of here?
st31315
It seems the accuracy calculation is wrong, so could you post the corresponding code and explain how these values are calculated?
st31316
This is how I calculate the loss:

y_train_pred = model(X1_train_batch, X2_train_batch)
train_loss = criterion(y_train_pred, y_train_batch)
train_acc = multi_acc(y_train_pred, y_train_batch)
train_loss.backward()
optimizer.step()
train_epoch_loss += train_loss.item()
train_epoch_acc += train_acc.item()

where

def multi_acc(y_pred, y_test):
    y_pred_softmax = torch.log_softmax(y_pred, dim=1)
    _, y_pred_tags = torch.max(y_pred_softmax, dim=1)
    correct_pred = (y_pred_tags == y_test).float()
    acc = correct_pred.sum() / len(correct_pred)
    acc = torch.round(acc * 100)
    return acc

and the loss function used is CrossEntropyLoss() as mentioned. This is the snippet I usually use to print the values:

print(f'Epoch {e+0:03}: | Train Loss: {train_epoch_loss/len(train_loader):.5f} | Val Loss: {val_epoch_loss/len(val_loader):.5f} | Train Acc: {train_epoch_acc/len(train_loader):.3f}| Val Acc: {val_epoch_acc/len(val_loader):.3f}')
st31317
It seems that multi_acc is returning the accuracy (in %) for each batch and the training loop accumulates it. Later you are then dividing by the number of batches. An example run for 3 batches and 30 samples would thus be:

train_epoch_acc = 90 + 80 + 70  # returned by multi_acc
train_epoch_acc / len(train_loader) = 240 / 3 = 80

so it looks alright, assuming all batches contain the same number of samples (otherwise you would add a bias to the calculation). Note that you previously divided by len(train_loader.dataset), which gives the number of samples, while len(train_loader) returns the number of batches.
st31318
Yes, so there is nothing wrong with the calculation while using train_loader.dataset right? I shouldn’t be getting accuracy rate in thousands. Do you have any idea of where it could be going wrong?
st31319
No, using len(train_loader.dataset) would be wrong as described before, since you would normalize the accumulated per-batch accuracies by the number of samples instead of the number of batches. Add print/debug statements and check the intermediate values along the lines of the workflow I tried to lay out above.
st31320
Sorry. Yeah, that makes more sense. But I still end up with values like 27855.0 for train_epoch_acc/len(train_loader)
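The thread stops here, but one plausible explanation, given the shapes quoted earlier ([batch_size, num_classes, seq_len] outputs and [batch_size, seq_len] targets), is that correct_pred is 2D, so len(correct_pred) only returns batch_size while correct_pred.sum() counts up to batch_size * seq_len correct elements, inflating the percentage by roughly a factor of seq_len. A hedged drop-in sketch that normalizes by all predicted elements instead:

def multi_acc(y_pred, y_test):
    # y_pred: [batch_size, num_classes, seq_len], y_test: [batch_size, seq_len]
    y_pred_tags = torch.argmax(y_pred, dim=1)           # [batch_size, seq_len]
    correct_pred = (y_pred_tags == y_test).float()
    acc = correct_pred.sum() / correct_pred.numel()     # divide by ALL predictions
    return acc * 100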
st31321
I built a network using customized layers. It runs fine on a single GPU but crashes when using two GPUs of a server. The code and error message are shown below. It seems that one of the tensors was split across the 2 GPUs while the other was not. Was it caused by the customized forward function? How should I solve it? Thanks!

Code:

os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'

class Cov1(layers):
    def __init__(self, in_dim=Fdim, out_dim=Fdim, bias=True):
        super(Cov1, self).__init__(in_dim, out_dim, bias)

    def forward(self, seq, sum_idx):
        simcov1 = torch.zeros(seq.shape).cuda()
        for i in range(0, self.in_dim):
            SeqDist = Vsets(seq[:, i].unsqueeze(1))
            simcov1[:, i] = torch.mean(SeqDist * sum_idx, 1)
        simcov1 = 1 - simcov1
        if self.bias is not None:
            mean_dist = simcov1.matmul(self.weight) + self.bias
            return mean_dist
        else:
            mean_dist = simcov1.matmul(self.weight)
            return mean_dist

Error screen: (screenshot attached in the original post)
st31322
If you are using DataParallel the assumption is all input tensors have the same dimension along the first (batch) dimension. Otherwise the splitting behavior becomes tricky to reason about. What are the input shapes (and the meaning of the dimensions) being passed and is DataParallel being used?
st31323
Thanks eqy. Yes, DataParallel is used for the model: model = nn.DataParallel(model).cuda(). And the dimension of the first tensor SeqDist is 446x446 and the second tensor sum_idx is 446. The multiplication SeqDist * sum_idx is to select the rows specified in sum_idx and calculate each row’s average. If the first dimension of the tensors changes, I don’t know how to make the multiplication work…
st31324
In this case, can you simply make this data parallel by doing something like making seqdist (N, 446, 446) and sum_idx (N, 446)?
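A small sketch of that idea (sizes made up): give both tensors a leading batch dimension so DataParallel can split along dim 0, and let the per-sample mask broadcast over the last dimension of each distance matrix, matching the original SeqDist * sum_idx followed by a mean over that dimension.

import torch

N, D = 4, 446                                      # illustrative batch size and matrix size
seq_dist = torch.randn(N, D, D)                    # one (D, D) distance matrix per sample
sum_idx = torch.randint(0, 2, (N, D)).float()      # one selection mask per sample

masked_mean = (seq_dist * sum_idx.unsqueeze(1)).mean(dim=2)   # (N, D)
print(masked_mean.shape)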
st31325
Hi, I am currently fixing the seed values globally in my script using the following snippet:

seed_value = 123457
np.random.seed(seed_value)         # cpu vars
torch.manual_seed(seed_value)      # cpu vars
random.seed(seed_value)            # Python
torch.cuda.manual_seed(seed_value)
torch.cuda.manual_seed_all(seed_value)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False

Further, I am using multiple processes in my DataLoader (num_workers > 0) to load frames for a video. Does the above snippet assign a fixed seed worker_id + seed_value to every worker in an epoch? I am also using random augmentations as a part of my data loading pipeline, like RandomResizedCrop and RandomHorizontalFlip from torchvision transforms. If the worker seed is fixed at worker_id + seed_value, does that mean that at each epoch the data will go through the same set of augmentations? If someone can clarify this, it would be of great help. Thanks!
st31326
From the documentation: "By default, each worker will have its PyTorch seed set to base_seed + worker_id, where base_seed is a long generated by the main process using its RNG (thereby consuming an RNG state mandatorily). However, seeds for other libraries may be duplicated upon initializing workers (e.g., NumPy), causing each worker to return identical random numbers. (See this section in the FAQ.)" So unless you reset the RNG between two epochs, the creation of the base seed from the RNG (whether this is a good idea for perfect randomness isn't completely obvious to me) ensures that you get a new random seed every time. Best regards, Thomas. P.S.: I would always recommend experimentally verifying things like this. Even just b1 = next(iter(dl)); b2 = next(iter(dl)) and inspecting the results.
st31327
Many thanks for the reply! I cross-checked the generated batches using consecutive calls of next(iter(dl)); the batches are different. So the base_seed generated by the main process is different from the seed fixed manually using torch.manual_seed? Also, if I interpret it correctly, for each epoch the workers will be assigned a new seed base_seed + worker_id, and the data augmentations will be based on the new worker seed?
st31328
Digbalay_Bose: "So the base_seed generated by the main process is different than the seed fixed manually using torch.manual_seed?" Yes, the base_seed is generated by drawing a random number, so it is dependent on / defined by the RNG state (and thus the manual_seed), but not identical to it. Digbalay_Bose: "Also, if I interpret it, for each epoch, the workers will be assigned a new seed base_seed + worker_id. And the data augmentations will be based on the new worker seed?" One thing to keep in mind is that, if enabled, the shuffling of the dataset (i.e. which items go into a minibatch) is done in the parent process. The randomness in the dataset (and potentially the collation function), like augmentations in the dataset, is indeed drawn in the worker based on the new worker seed. All this assuming that you use PyTorch's random functions (which you should, and you need to be super careful to properly initialize your RNG if you don't). All this said, there is a good argument to be made for doing the random augmentation after batching on the GPU if possible. That solves not only any confusion around randomness but is likely much more efficient, too. Best regards, Thomas
st31329
Thank you @tom for clarifying the doubts. In the torch.utils.data.DataLoader arguments, we can pass a function to worker_init_fn. Is it advisable to use worker_init_fn to access the worker's current seed and seed the other accompanying libraries like random based on the same seed?

def seed_worker(worker_id):
    worker_seed = torch.initial_seed() % 2**32
    numpy.random.seed(worker_seed)
    random.seed(worker_seed)

DataLoader(
    train_dataset,
    batch_size=batch_size,
    num_workers=num_workers,
    worker_init_fn=seed_worker
)

(As mentioned in https://pytorch.org/docs/stable/notes/randomness.html#dataloader)
st31330
Well, so it depends on Will you use other libraries’ random functions? Is there a chance you might be using them inadvertently? Will others grab your code and do funny things with it and then claim it’s your fault they didn’t get proper randomness? If any of these is yes or at least might be yes, it is a good idea to initialize randomness. If it is all firmly no, it just adds uninteresting boilerplate to your code. Personally, I think that mixing RNGs from different libraries is not a great idea (and in fact, trying to be clever around RNGs is usually a good way to shoot yourself into the foot unless you exactly know what you are doing). Imagine you do this with two libraries that have identical RNGs. Now you seed them to identical states. By some accident you draw random integers in some range from both of them in sync. They will always be identical. Now you combine them, say, by taking the difference. Instead of getting a random number with a symmetric triangular distribution on [-1, 1] which you would get from independent RVs you now get all 0s. Now this is an obvious and improbable example, but more subtle interactions do exist and happen where people do not expect them. (For another variation of the “don’t try to be clever” thing: here is a link why, while understanding the motivation, I am skeptical of the “seed with a random number” approach: Random number generator seed mistakes & how to seed an RNG 3 .) Best regards Thomas
st31331
I implemented a GAN model, and because I need to train it for 500 epochs, I save the result every 10 epochs for both models:

torch.save({
    'epoch': epoch + 1,
    'gen_state_dict': gen.state_dict(),
    'disc_state_dict': disc.state_dict(),
    'gen_optim': opt_gen.state_dict(),
    'disc_optim': opt_disc.state_dict(),
}, os.path.join("", 'gan_epoch-{}.pt'.format(epoch + 1)))

and I load it:

disc = Discriminator(in_channels=3).to(device)
gen = Generator(in_channels=3).to(device)
checkpoint = torch.load("/content/drive/MyDrive/Epochs/gan_epoch-20.pt")
gen.load_state_dict(checkpoint['gen_state_dict'])
disc.load_state_dict(checkpoint['disc_state_dict'])
opt_disc.load_state_dict(checkpoint['disc_optim'])
opt_gen.load_state_dict(checkpoint['gen_optim'])
disc.train()
gen.train()

The code works well, but I am wondering if the results will be correct. I have noticed that the training becomes faster: before saving the models one epoch took 20 minutes, now it takes only 8 minutes. Also the discriminator loss increases a lot, from 0.xxx to 7.xxx - is this normal?
st31332
Did you make sure the same datasets are loaded and contain the same number of samples? I would be suspicious, if the training suddenly sees a 2x speedup without changing anything else.
st31333
Thank you for your reply. Yes, I’m sure the same dataset is loaded, I ran it for the second time, and now it took 20 minutes, I’m really confused why this happened
st31334
Hi All, I have a few questions related to the topic of modifying gradients and the optimizer. I’m wondering if there is an easy way to perform gradient ascent instead of gradient descent. For example, this would correspond to replacing grad_weight by -grad_weight in linear layer definition as seen in class LinearFunction(Function): from the Extending PyTorch page. My concern here is that this will mess up a downstream function that requires grad_weight instead of -grad_weight, or is this not a concern at all? A suggestion made to me was to try to modify the optimizer. Is there a simple way to go about doing W + dW instead of W - dW in the optimizer? I can’t really tell from the source code for SGD or ADAM. Thanks for reading!
st31335
Hi, The simplest way to do gradient ascent on a loss L is to do gradient descent on -L .
st31336
That is an interesting solution. I think I need to further clarify my original question. I would like to include a negative sign on the updates to the weights, and this corresponds to changing grad_weight to -grad_weight, while grad_input and grad_bias are left untouched. However, I am wary of unintended consequences of doing something like this to the gradients, and was wondering if there was an easy way to change the optimizer such that it performed gradient ascent(W + dW) for the non last layer weights specifically, but left the other parameters alone?
st31337
In that case I guess you will have to create your custom optimizer to handle that. With one group for the descent part and one group for the ascent part for example.
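A hedged sketch of one way to get that behavior without writing a full optimizer subclass: keep two parameter groups and flip the sign of the gradients of the "ascent" group between backward() and step(), so a standard update moves those parameters uphill (the names and the toy objective below are made up).

import torch

w = torch.nn.Parameter(torch.randn(3))           # parameters updated by descent
lam = torch.nn.Parameter(torch.zeros(3))         # parameters updated by ascent (e.g. multipliers)
opt = torch.optim.SGD([{"params": [w]}, {"params": [lam]}], lr=0.1)

for _ in range(100):
    opt.zero_grad()
    loss = (w ** 2).sum() + (lam * (w - 1.0)).sum()   # toy Lagrangian-style objective
    loss.backward()
    lam.grad.neg_()        # flip the sign -> gradient ascent on lam
    opt.step()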
st31338
Continuing the discussion from Gradient Ascent and Gradient Modification/Modifying Optimizer instead of Grad_weight: I'm working on a similar problem where I need to optimize a Lagrangian-style loss function (the equation was attached as an image in the original post). Here w (omega) are the model parameters and the lambdas are Lagrange multipliers. I need to perform gradient descent w.r.t. omega and simultaneously gradient ascent w.r.t. lambda. lambda is not a model parameter and is only included in the loss term. Will your solution of updating lambda using gradient descent on -L work in this case? If it does, then taking a negative learning rate for the lambdas in gradient descent should also be equivalent. And if it doesn't, what would the PyTorch solution be (without changing the optimizer source code)? Or do I need to create a custom optimizer?
st31339
I think that this is a bit too late, but the solution I came up with is to use a custom autograd function which reverses the gradient direction. Like @Tamal_Chowdhury, I have a Lagrangian optimization problem, for which this function works perfectly. A small working example would be:

import torch

class AscentFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input):
        return input

    @staticmethod
    def backward(ctx, grad_input):
        return -grad_input

def make_ascent(loss):
    return AscentFunction.apply(loss)

x = torch.normal(10, 3, size=(10,))
w = torch.ones_like(x, requires_grad=True)

loss = (x * w).sum()
print(f'descent loss: {loss.item():.2f}')
loss.backward()
print(w.grad)

w.grad = None

loss = (x * w).sum()
m_loss = make_ascent(loss)
print(f'ascent loss: {m_loss.item():.2f}')
m_loss.backward()
print(w.grad)

Its output:

descent loss: 96.13
tensor([12.7093, 11.2243, 6.4265, 7.6572, 14.2737, 15.1144, 8.0099, 6.2517, 7.6352, 6.8274])
ascent loss: 96.13
tensor([-12.7093, -11.2243, -6.4265, -7.6572, -14.2737, -15.1144, -8.0099, -6.2517, -7.6352, -6.8274])
st31340
I want to apply label smoothing to MSE loss. Have you ever implemented it? Please help me.
st31341
How would label smoothing work for a regression use case? Or would you like to perform a classification using nn.MSELoss?
st31342
Hello, I am trying to apply PCA and t-SNE on an image dataset. I have the images in a DataLoader but I can't figure out how to use them as the X in the sklearn functions. Can anyone help?
st31343
These scikit-learn methods expect 2-dimensional numpy arrays as the input, so you could either pass a batch returned by the DataLoader to it (after calling numpy() on it and making sure the shape is right) or you could stack (some) batches and pass this larger array to it.
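A hedged sketch of the stacking option (the DataLoader name is a placeholder): collect batches, flatten each image to a feature vector, and hand the resulting (n_samples, n_features) array to scikit-learn.

import torch
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

features, labels = [], []
for images, targets in loader:              # 'loader' is your existing DataLoader
    features.append(images.flatten(1))      # (batch, C*H*W)
    labels.append(targets)

X = torch.cat(features).numpy()
y = torch.cat(labels).numpy()

X_pca = PCA(n_components=50).fit_transform(X)
X_tsne = TSNE(n_components=2).fit_transform(X_pca)
print(X_tsne.shape)                         # (num_images, 2)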
st31344
Hi everyone. Recently I write a function to simulate a complex homography transform. I firstly deal with the output of the network (resnet18) and get the transformed grid using my written function. Then I transform the random tensor and compute the loss. However, I find the loss.backward() is very slow. My code is as following: # -*-coding:utf-8-*- import torch import torch.nn.functional as F import pdb from torch import optim import sys import sys sys.path.append('/home/yongjie/uda_for_convex/pose_estimation') from model import ResNet18 import math import time from tqdm import tqdm def generate_grid(alpha, beta, d): size = (1, 3, 720, 720) N, C, H, W = size B = N Rotx = torch.zeros(B, 3, 3).to(device).clone() ones = torch.ones(B,).to(device).clone() # pdb.set_trace() Rotx[:, 0, 0] = ones Rotx[:,1, 1] = torch.cos(beta).squeeze(1) Rotx[:,1, 2] = -torch.sin(beta).squeeze(1) Rotx[:,2, 1] = torch.sin(beta).squeeze(1) Rotx[:,2, 2] = torch.cos(beta).squeeze(1) Roty = torch.zeros(B, 3, 3).to(device).clone() ones = torch.ones(B,).to(device).clone() Roty[:,1,1] = ones.clone() Roty[:,0,0] = torch.cos(alpha).squeeze(1) Roty[:,0,2] = torch.sin(alpha).squeeze(1) Roty[:,2,0] = -torch.sin(alpha).squeeze(1) Roty[:,2,2] = torch.cos(alpha).squeeze(1) # construct homo R = torch.bmm(Rotx, Roty) R_1 = torch.inverse(R).clone() t = torch.zeros(B,3).to(device) # pdb.set_trace() t[:,2] = d.squeeze(1) # translation vector R_1[:,:,2] = t temp_homo = R_1 homo = torch.inverse(R_1) # ------------------- # construct the circle and find the center and scale C = torch.zeros(B, 3, 3).to(device) C[:,0,0] = torch.tensor(1.) C[:,1,1] = torch.tensor(1.) C[:,2,2] = torch.tensor(-1.) C2 = torch.bmm(torch.inverse(torch.transpose(temp_homo,1,2)), C) C2_ = torch.bmm(C2, torch.inverse(temp_homo)) C3 = torch.inverse(C2_) # dual format a = C3[:,0,0] b = C3[:,0,2]+C3[:,2,0] c = C3[:,2,2] right_x = (-b-torch.sqrt(b.mul(b)-4*a.mul(c)))/(2*a) left_x = (-b+torch.sqrt(b.mul(b)-4*a.mul(c)))/(2*a) right_x = -1./right_x left_x = -1./left_x width = right_x-left_x center_x = (right_x+left_x)/2 a_ = C3[:,1,1] b_ = C3[:,1,2]+C3[:,2,1] c_ = C3[:,2,2] bottom_y = (-b_-torch.sqrt(b_.mul(b_)-4*a_.mul(c_)))/(2*a_) top_y = (-b_+torch.sqrt(b_.mul(b_)-4*a_.mul(c_)))/(2*a_) bottom_y = -1./bottom_y top_y = -1./top_y height = bottom_y-top_y center_y = (top_y+bottom_y)/2 scale = torch.max(width, height) #--------------------- # generate the compact grid according the homo, center and scale # size = (1, 3, 1024, 1024) N, C, H, W = size N=B base_grid = torch.zeros(N, H, W, 2).to(device) linear_points = torch.linspace(-1, 1, W).to(device) if W > 1 else torch.Tensor([-1]).to(device) base_grid[:, :, :, 0] = torch.ger(torch.ones(H).to(device), linear_points).expand_as(base_grid[:, :, :, 0]) linear_points = torch.linspace(-1, 1, H).to(device) if H > 1 else torch.Tensor([-1]).to(device) base_grid[:, :, :, 1] = torch.ger(linear_points, torch.ones(W).to(device)).expand_as(base_grid[:, :, :, 1]) base_grid = base_grid.view(N, H * W, 2) # transform the center and scale center_x = center_x.unsqueeze(1) center_y = center_y.unsqueeze(1) center = torch.cat((center_x,center_y), 1).unsqueeze(1).repeat(1,W*H,1) scale = scale.unsqueeze(1).repeat(1,H*W).unsqueeze(2).repeat(1,1,2) base_grid = base_grid*scale/2 base_grid = base_grid+center # extend the homo, easy to calculate h = homo.unsqueeze(1).repeat(1, W*H, 1, 1) temp1 = (h[:, :, 0, 0] * base_grid[:, :, 0] + h[:, :, 0, 1] * base_grid[:, :, 1] + h[:, :, 0, 2]) temp2 = (h[:, :, 2, 0] * base_grid[:, :, 0] + h[:, :, 2, 1] * 
base_grid[:, :, 1] + h[:, :, 2, 2]) u1 = temp1 / temp2 temp3 = (h[:, :, 1, 0] * base_grid[:, :, 0] + h[:, :, 1, 1] * base_grid[:, :, 1] + h[:, :, 1, 2]) temp4 = (h[:, :, 2, 0] * base_grid[:, :, 0] + h[:, :, 2, 1] * base_grid[:, :, 1] + h[:, :, 2, 2]) v1 = temp3 / temp4 grid1 = u1.view(N, H, W, 1) grid2 = v1.view(N, H, W, 1) grid = torch.cat((grid1, grid2), 3) return grid device = 2 BS = 1 predictor = ResNet18(in_channel=3, num_classes=1).to(device) optimizer = optim.SGD(predictor.parameters(), lr=0.0001, momentum=0.9, weight_decay=0.005) for i in tqdm(range(100)): optimizer.zero_grad() images_source = torch.rand(BS, 3, 720, 720).to(device) temp_image2 = F.interpolate(images_source, size=(256,256), mode='bilinear') output_four = predictor(temp_image2) k_p = output_four[0] alpha_p = output_four[1] beta_p = output_four[2] d_p = output_four[3] K_label = (k_p*(-0.22)+(-0.5)) K_up = (K_label / ((1. * K_label + 1.) ** 2 + 0.0000000001)) alpha = ((alpha_p * 120.-60.) * math.pi/180.) beta = ((beta_p * 60.-30.) * math.pi/180.) d = (d_p * 6.+2.) input_tensor = torch.rand(BS, 3, 720, 720).to(device) homo_grid = generate_grid(alpha, beta, d) h_t = F.grid_sample(input_tensor, homo_grid) temp_loss = F.mse_loss(h_t, torch.tensor([1.]).to(device)) start = time.time() temp_loss.backward() print(time.time()-start) optimizer.step() The output is image749×124 5.84 KB I’m not sure if this phenomenon is related to some internal function in pytorch such as torch.inverse or torch.sqrt. Could you give me some advice? Thanks very much.
st31345
Solved by Yongjie_Shi in post #3 Thank you for your suggestions. Through step-by-step debugging, I found that the slow backpropagation was related to the large computational graph. There are two places in the code that greatly extend the computational graph. The first is center_x = center_x.unsqueeze(1) center_y = center_…
st31346
You could profile the code to further isolate the bottleneck of the script, which could help in further debugging. E.g. we’ve been working on the usage of cusolver in more torch.linalg methods, which could speed up your workflow in case you are using the nightly binaries or a source build.
st31347
Thank you for your suggestions. Through step-by-step debugging, I found that the slow backpropagation was related to the large computational graph. There are two places in the code that greatly extend the computational graph. The first is

center_x = center_x.unsqueeze(1)
center_y = center_y.unsqueeze(1)
center = torch.cat((center_x, center_y), 1).unsqueeze(1).repeat(1, W*H, 1)
scale = scale.unsqueeze(1).repeat(1, H*W).unsqueeze(2).repeat(1, 1, 2)
base_grid = base_grid*scale/2
base_grid = base_grid+center

which can be replaced by

center_x = center_x.unsqueeze(1)
center_y = center_y.unsqueeze(1)
# center = torch.cat((center_x, center_y), 1).unsqueeze(1).repeat(1, W*H, 1)
# scale = scale.unsqueeze(1).repeat(1, H*W).unsqueeze(2).repeat(1, 1, 2)
center = torch.cat((center_x, center_y), 1)
scale = scale
base_grid = base_grid*scale/2.
base_grid = base_grid+center

The repeat operation will greatly expand the computational graph. The other is

h = homo.unsqueeze(1).repeat(1, W*H, 1, 1)
temp1 = (h[:, :, 0, 0] * base_grid[:, :, 0] + h[:, :, 0, 1] * base_grid[:, :, 1] + h[:, :, 0, 2])
temp2 = (h[:, :, 2, 0] * base_grid[:, :, 0] + h[:, :, 2, 1] * base_grid[:, :, 1] + h[:, :, 2, 2])
u1 = temp1 / temp2
temp3 = (h[:, :, 1, 0] * base_grid[:, :, 0] + h[:, :, 1, 1] * base_grid[:, :, 1] + h[:, :, 1, 2])
temp4 = (h[:, :, 2, 0] * base_grid[:, :, 0] + h[:, :, 2, 1] * base_grid[:, :, 1] + h[:, :, 2, 2])

which can be replaced by

h = homo
temp1 = (h[:, 0, 0] * base_grid[:, :, 0] + h[:, 0, 1] * base_grid[:, :, 1] + h[:, 0, 2])
temp2 = (h[:, 2, 0] * base_grid[:, :, 0] + h[:, 2, 1] * base_grid[:, :, 1] + h[:, 2, 2])
u1 = temp1 / temp2
temp3 = (h[:, 1, 0] * base_grid[:, :, 0] + h[:, 1, 1] * base_grid[:, :, 1] + h[:, 1, 2])
temp4 = (h[:, 2, 0] * base_grid[:, :, 0] + h[:, 2, 1] * base_grid[:, :, 1] + h[:, 2, 2])

SkyAndCloud once asked a similar question, which can be seen in this link. In general, it is better not to greatly expand the computational graph during the forward pass, otherwise it will cause the backward to be slower.
st31348
Hi, I am getting the following error (screenshot in the original post) while I am running the following code (screenshot in the original post). This is what is in the train method (screenshot in the original post). I am stuck with this error and I would appreciate your help. Please note that when I change the data type of the masks from bool to long, I get only 0s in pred, only 0s in y_pred, and only 0s in y_true, and this should not be the case. Please advise. Thank you.
st31349
Based on the error message torch_cluster/sampler.py is raising this error, as LongTensors are expected. monee_h.a: Please note when I change the data type from bool to long for masks I am getting 0s only in pred, 0s only in y_pred, and 0s only in y_true, and this is not should be the case. Please advice. Converting a BoolTensor to a LongTensor will only return all zeros, if the BoolTensor was containing only False values, so it might be expected.
st31350
Hello everyone, the question is more about deep learning, than about pytorch specifically. What is the best way to build a many-to-many timeseries model for numerical sequences of constant length p.e. Vehicle Trajectory? If I take the last timestep of the encoder, like this… …what would be the best way to generate a sequence of multiple timesteps, using the last hidden state as the new input? I’ve seen differen versions like this one… and this one … Or a combination of the two, where you concat the last timestep of the encoder ouptut with every timestep of the decoder output… Feel free to suggest a different / better way. Also, what is a better choice for labels? The absolute future values The change per time of the future values → integral of the model output to get prediction I’m working on all of these variations right now. I was just curious if there is already a “go to” solution for my problem. Thanks in advance, Arthur
st31351
First decoder has weaker (bayesian) prior, i.e. potentially discardable starting context as sequence length grows. For models in later pictures, that’s achievable (much easier in gated rnns), but has to be learned. From other perspective, they have a shortcut connection to time zero, that may be beneficial if initial context is strongly informative. Re: integral. I think you may face some issues with gradient flow, if you go that route and do things like cumsum(). Look into neural ODEs if you feel that learning changes is more suitable for your task, but they’re more complex and slower.
st31352
Hello, I have created a class for the Dataset (code below):

class CustomDataset(Dataset):
    def __init__(self, csv_file, id_col, target_col, root_dir, sufix=None, transform=None):
        """
        Args:
            csv_file (string): Path to the csv file with annotations.
            root_dir (string): Directory with all the images.
            id_col (string): csv id column name.
            target_col (string): csv target column name.
            sufix (string, optional): Optional sufix for samples.
            transform (callable, optional): Optional transform to be applied on a sample.
        """
        self.data = pd.read_csv(csv_file)
        self.id = id_col
        self.target = target_col
        self.root = root_dir
        self.sufix = sufix
        self.transform = transform

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        # get the image name at the given idx
        img_name = self.data.loc[idx, self.id]
        # if there is no sufix, nothing happens; in this case sufix is '.jpg'
        if self.sufix is not None:
            img_name = img_name + self.sufix
        # open the image with that name at the specific idx
        image = Image.open(os.path.join(self.root, img_name))
        # if there is no transform nothing happens; here we defined two transforms, for train and for test
        if self.transform is not None:
            image = self.transform(image)
        # define the label based on the idx
        label = pd.read_csv(csv_file).loc[idx, ['healthy', 'multiple_diseases', 'rust', 'scab']].values
        label = torch.from_numpy(label.astype(np.int8))
        # label = label.unsqueeze(-1)
        return image, label

It returns a label of shape torch.Size([4]), and then:

train_dataset = CustomDataset(csv_file=data_dir+'train.csv', root_dir=data_dir+'images', **params)
train_loader = DataLoader(train_dataset, batch_size=4, shuffle=True, num_workers=4)

But I get this error: "ValueError: Expected input batch_size (4) to match target batch_size (16)."

for idx, (data, target) in enumerate(loaders):
    ## find the loss and update the model parameters accordingly
    ## record the average training loss, using something like
    ## train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data - train_loss))
    optimizer.zero_grad()
    # forward pass: compute predicted outputs by passing inputs to the model
    #print(data)
    output = model(data)
    # calculate the batch loss
    #torch.max(target, 1)[1]
    print('output shape: ', output.shape)
    #target = target.view(-1)
    print('target shape: ', target.shape)
    loss = criterion(output, target)

I can see that the output shape is torch.Size([4, 133]) and the target shape is torch.Size([4, 4]). I know that my target should be ([4]), and as the label shape of the dataset is this shape, I don't understand why it changed to ([4, 4]). I don't understand what I missed and how I can get the target shape to be ([4]). Looking forward to reading your clarifications.
st31353
I tried to insert the following into the backpropagation; basically the idea was to replace the idx (data, target), which has dimension ([4, 4]), with a label_inter, which is the same as what is used in the dataset:

label_inter = pd.read_csv(data_dir+'train.csv').iloc[idx, 1:5].values
label_inter = torch.from_numpy(label_inter.astype(np.int64))
label_inter = label_inter.squeeze(-1)
label_inter = label_inter.view(-1)

But then I get the error "ValueError: Expected input batch_size (1) to match target batch_size (4)" after idx 455, although I checked and there is nothing specific about the picture Train_456.jpg. So this doesn't work either. Any suggestions? Why does the dataloader change the dimension of my label? Note: the pictures are categorized as shown below (image in the original post).
st31354
@Arnaud_Mal how did you load the data. I have exact same data, but I am not able to load it.
st31355
How can I use EfficientNet as a backbone CNN model for feature extraction, so that embeddings of images can be generated? Most of the examples on GitHub use a 4-layer ConvNet, so I cannot understand how to do the same thing with a large CNN model. There are implementations of EfficientNet for PyTorch, so what steps do I need to take to use them as a feature extractor? I am using this EfficientNet implementation in PyTorch: https://github.com/lukemelas/EfficientNet-PyTorch/blob/master/efficientnet_pytorch/model.py
st31356
I am working on a classification problem. I split my dataset into sub-folders according to class labels. Then I used ImageFolder and DataLoader from PyTorch to load the data. I have attached my directory structure below. My dataset does not have train and test folders. Now I am trying to do a split in such a way that all four sub-folders get split into train and test sets. Most of the solutions I came across have train and test folders prepared beforehand, with respective .csv files, and use train_test_split from sklearn. Should I split the data before splitting the entire dataset into image sub-folders based on class label?
st31357
It generally makes sense to split across class labels proportionally as splitting blindly across the entire dataset risks omitting smaller classes from either the training or testing set entirely.
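A hedged sketch of one common way to do that proportional split directly on an ImageFolder dataset: stratify the index split on the per-sample class labels and wrap the resulting index lists in Subset (the path and ratio are placeholders).

import torch
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, transforms
from sklearn.model_selection import train_test_split

dataset = datasets.ImageFolder("data_root", transform=transforms.ToTensor())

# stratify on dataset.targets so every class keeps the same train/test ratio
train_idx, test_idx = train_test_split(
    list(range(len(dataset))),
    test_size=0.2,
    stratify=dataset.targets,
    random_state=0,
)

train_loader = DataLoader(Subset(dataset, train_idx), batch_size=32, shuffle=True)
test_loader = DataLoader(Subset(dataset, test_idx), batch_size=32)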
st31358
How can I have a custom input dimension size for the input image when working with any PyTorch model? To put it more simply, I want to change the input size which the model accepts rather than changing the sizes of my input images.
st31359
If you are doing classification most models should work for any input resolution barring memory/computational constraints as convolutional layers do not place any constraints on resolution and average pooling is applied before the fully connected layer(s). However, if you are using a pretrained model you may want to do finetuning if the object scales are very different than what would be expected at training time.
st31360
eqy: "However, if you are using a pretrained model you may want to do finetuning if the object scales are very different than what would be expected at training time." @eqy yes, I want to do this with a pretrained model. I am new to all this, so for example if I want to do it with a pretrained ResNet-50, how can I achieve it? Can you please elaborate a little? Thank you.
st31361
The imagenet example: examples/main.py at master · pytorch/examples (github.com) is a good starting point. If you have an existing dataset, you should just be able to use a pretrained model and plug in your existing dataset with minimal changes (e.g., replace the fully connected layer with one that matches the number of classes you have).
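A minimal sketch of that last step for a pretrained ResNet-50 (the number of classes is an assumption): keep the convolutional backbone and swap only the final fully connected layer; thanks to the adaptive average pooling before model.fc, reasonable non-224 input resolutions also work.

import torch
import torch.nn as nn
from torchvision import models

num_classes = 10                                 # placeholder for your dataset
model = models.resnet50(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, num_classes)

x = torch.randn(2, 3, 320, 320)                  # not restricted to 224x224
print(model(x).shape)                            # torch.Size([2, 10])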
st31362
Hello, I'm now trying to implement a continuous prediction model using an RNN/LSTM. The inputs are multivariate sequences with different lengths, and there is a binary target corresponding to each timestep in each sequence, so predictions are made for every timestep in all sequences. I'm a little bit confused by the pad_sequence and pack_padded_sequence functions: are they necessary in this case? If I pad the sequences, do I need to pad the labels as well? Any help is appreciated!
st31363
Hi, Sorry, I didn’t finish writing and accidentally clicked to post. Is there any particular reason why calculating the batched inverse of a (minibatches, channels, 3, 3) tensor would be ridiculously slow? For perspective, I have a modified Fast SCNN implemented which has one specific portion at the tail end where I perform least squares for every single channel in every single minibatch. My input to the least squares module should have minibatches*channels weight tensors from the earlier layers. I currently calculate the coefficients of a parabola using B = (X^T * X)^-1 * X^T * Y. The only bottleneck in this entire segment is where I calculate (X^T * X)^-1. Essentially, this is just calculating the inverse of one 3x3 matrix once for each channel in each mini batch. Yet, to put into perspective, the calculation of the inverses, this one line of the form torch.inverse(place), takes around 70% - 80% of the total forward pass. This is utilizing CUDA 10. Could this be because of overhead, or is there another issue?
st31364
adzpy: "perspective, the calculation of the inverses, this one line of the form torch.inverse(place), takes around 70% - 80% of the total forward pass. This is utilizing CUDA 10. Could this be because of overhead" I have met the same problem; are there any suggestions about it?
st31365
Hello. I'm trying to run my model with some data and am getting the following error: TypeError: expected Variable[CPUType] (got torch.cuda.FloatTensor). I've checked some of the answers here and it seemed that I hadn't pushed my model onto the device yet. However, I checked the code and I have in fact done that, and I even explicitly pushed it onto the device in the Python debugger interactive shell and am still getting the same error. The code is as follows:

### Module: main.py
def main():
    config = get_args()
    dataset = Data(config)
    model = GCN(config, dataset.num_features, dataset.num_classes)
    trainer = Trainer(config, model, dataset)
    if torch.cuda.is_available():
        model = model.to('cuda')  # I've double checked that torch.cuda.is_available() returns True.
    trainer.train()

### Module: solver.py
class Trainer():
    def __init__(self, config, model, dataset):
        self.config = config
        self.num_epochs = self.config.num_epochs
        self.model = model
        self.dataset = dataset
        self.features = self.dataset.features
        self.adj_hat = self.dataset.adj_hat

    def train(self):
        self.model.train()
        optimizer = get_optimizer(self.config, self.model)
        loss_train = nn.NLLLoss()
        for epoch in range(self.num_epochs):
            optimizer.zero_grad()
            output = self.model(self.features, self.adj_hat)

### Module: models.py
class GraphConv(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight_mat = nn.Linear(in_features=in_features, out_features=out_features)

    def forward(self, x, adj_mat):
        weight_prod = torch.DoubleTensor(self.weight_mat(x))
        output = torch.matmul(adj_mat, weight_prod)
        return output

class GCN(nn.Module):
    def __init__(self, config, num_features, num_classes):
        super().__init__()
        self.config = config
        self.num_hidden = self.config.num_hidden
        self.num_classes = num_classes
        self.num_features = num_features
        self.p = self.config.dropout_rate
        self.graphconv1 = GraphConv(in_features=self.num_features, out_features=self.num_hidden)
        self.graphconv2 = GraphConv(in_features=self.num_hidden, out_features=self.num_clases)

    def forward(self, x, adj_hat):
        x = F.relu(self.graphconv1(x, adj_hat))
        x = F.dropout(input=x, p=self.p, training=self.training)
        output = F.softmax(self.graphconv2(x, adj_hat), dim=1)
        return output

The specific line of code that's triggering the error is x = F.relu(self.graphconv1(x, adj_hat)) inside the GCN model. I don't understand, because if I put self.model on the device, shouldn't that take care of this issue? Thanks in advance!
st31366
Solved by ptrblck in post #2 You are creating new tensors on the CPU in this line of code: weight_prod = torch.DoubleTensor(self.weight_mat(x)) If you want to change the data type, use out = out.double() instead and make sure it’s the right type for all further calculations.
st31367
You are creating new tensors on the CPU in this line of code: weight_prod = torch.DoubleTensor(self.weight_mat(x)) If you want to change the data type, use out = out.double() instead and make sure it’s the right type for all further calculations.
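Applied to the GraphConv.forward above, a hedged sketch of the fix: drop the CPU tensor construction and keep the result of the linear layer as-is (switching to double precision only if really needed, in which case adj_mat must be double as well).

def forward(self, x, adj_mat):
    weight_prod = self.weight_mat(x)        # stays on the same device and dtype as x
    # weight_prod = weight_prod.double()    # only if float64 is really required
    output = torch.matmul(adj_mat, weight_prod)
    return output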