st102100
Hi, This looks like an error in Python’s multiprocessing library… Which version of Python are you using? The source code for Python 3.6 is different from the one in your error message here.
st102101
Hi albanD, Thank you very much for your quick response! I guess the version is the newest one, 3.6, since I installed it just last week. But the version might be the reason, as you say. Sorry, I am out of my office right now and cannot check my work PC. Anyway, I appreciate your reply :) Best regards.
st102102
Hi Everyone, Can anyone suggest reasons for this kind of error and a possible solution: raise AssertionError(“Invalid device id”) AssertionError: Invalid device id
st102103
Hi, The problem is that the current CUDA device id is not valid. What does torch.cuda.device_count() return? If it returns 0, that means no devices are available.
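For example, a quick check (a minimal sketch) could be:
import torch

print(torch.cuda.is_available())   # False means CUDA cannot be used at all
print(torch.cuda.device_count())   # 0 means no visible CUDA devices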
st102104
Thank you so much for the reply! My GPU is working; I am trying this project on my personal computer (which has one GPU) as well as with the server’s GPUs.
st102105
But what does torch.cuda.device_count() return? You need a recent enough NVIDIA GPU for it to be usable.
st102106
DataParallel is written as: model = torch.nn.DataParallel(model, device_ids=args.gpus).cuda() cudnn.benchmark = True
st102107
What does args.gpus contain? In particular, for your system with a single GPU, it should only contain [0], and so no DataParallel should be used in that case.
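A rough sketch of how such a guard could look (args and model are assumed to come from the original script):
import torch
import torch.nn as nn

num_devices = torch.cuda.device_count()
# keep only device ids that actually exist on this machine
gpus = [g for g in (args.gpus or range(num_devices)) if g < num_devices]
if len(gpus) > 1:
    model = nn.DataParallel(model, device_ids=gpus).cuda()
else:
    model = model.cuda()  # single GPU: DataParallel is unnecessary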
st102108
My dataset is videos, so I am running on a server which has to use at least 4 GPUs. if args.gpus is not None: devices = [args.gpus[i] for i in range(args.workers)] else: devices = list(range(args.workers))
st102109
Hey, I’m trying to do the following: I have N input data samples of dimension 1 and N corresponding output samples of dimension 1. I want to train an LSTM model that is fed with a sequence of, say, 10 samples with overlap, i.e.: in iteration 1 we feed x[1:11] and perform BPTT and a parameter update. In iteration 2 we feed x[2:12] and perform BPTT and a parameter update. In iteration 3 we feed x[3:13] and perform BPTT and a parameter update. Etc. Now, I want the second hidden state & cell state of each iteration to be fed as the initial hidden state and cell state for the next iteration. The second hidden state is available by calling: output, (hn, cn) = self.lstm_layer(input, [self.hidden_state, self.cell_state]) Here ‘output’ holds all the hidden states of the sequence. But this doesn’t give me the second cell state for each iteration. Is there any way to overcome this? Or is the cell state not necessary for proper learning, and zeroing it will do?
st102110
Solved by tom in post #2 You could apply the first step separately. Most times I have seen this, people used non-overlapping samples and cached the states (see e.g. fast.ai’s lecture on language models). Best regards Thomas
st102111
You could apply the first step separately. Most times I have seen this, people used non-overlapping samples and cached the states (see e.g. fast.ai’s lecture on language models). Best regards Thomas
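A minimal sketch of “apply the first step separately”, using the names from the question (hidden_state and cell_state are assumed to be detached tensors of the right shape):
# run only the first time step to obtain the state that should seed the next window
out_first, (h1, c1) = self.lstm_layer(input[:1], (self.hidden_state, self.cell_state))
# run the remaining steps from that state
out_rest, (hn, cn) = self.lstm_layer(input[1:], (h1, c1))
output = torch.cat([out_first, out_rest], dim=0)
# cache the post-step-1 state for the next (overlapping) window, detached from the graph
self.hidden_state, self.cell_state = h1.detach(), c1.detach()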
st102112
Generally, the convolution output has multiple dimensions, but how does a sigmoid (or any other activation function) output one value? For example, for a given last convolution output of 1x1x2048, the output of the sigmoid should also be 1x1x2048, so how does the output become a single value (the class number)? Sorry for such a basic question, but I am just a little confused. Thanks!
st102113
Hi, Not sure I understand your question here. The sigmoid function is an element-wise function, so it will not change the shape of the tensor; it just replaces each entry with 1/(1+exp(-entry)).
st102114
So if the sigmoid output of the given convolution is 1x1x2048, how do I get the final category value (for a classification problem)?
st102115
You should use an fc layer after the convolution. The input size of the fc is 1*1*2048, and the output size is your class number. Then put the output into the sigmoid function and you get the final result.
st102116
Usually, there is a fully connected layer after the last conv layer which maps the output to the number of categories. You are talking about sigmoid function so I assume there are only 2 classes and only 1 output value is needed. In this case, the code should be something like: conv_out = torch.ones((1,1,2048)) # map dim 2048 to 1 using a linear transformation. fc = nn.Linear(2048, 1) fc_out = fc(conv_out) # apply sigmoid function to fc_out to get the probability. y_prob = torch.sigmoid(fc_out) print(y_prob)
st102117
Thank you. Yes, there are only 2 classes. In fact, I just need the probability. You mean I should map dim 2048 to 1 first? Is it ok if I don’t use a linear transformation but an additional conv layer instead? For example: nn.Conv2d(opt.ndf * 8, 1, 4, 1, 0, bias=False), # opt.ndf * 8 = 2048 nn.Sigmoid() Is that ok?
st102118
Sorry for not clarifying it. There are 2 classes, and I want to know how to get the final probability using sigmoid for a given 1x1x2048 conv output. I hope I have made it clear.
st102119
Yes, you should map the dim from 2048 to 1. I would recommend using a linear transformation as the last layer, but you can try using another conv layer to see which approach gives you better performance.
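If you want to try the conv variant on a 1x1 spatial map, a sketch (assuming the features are laid out as (N, 2048, 1, 1)) could be:
import torch
import torch.nn as nn

conv_out = torch.randn(1, 2048, 1, 1)             # assumed (N, C, H, W) feature map
head = nn.Conv2d(2048, 1, kernel_size=1)          # maps 2048 channels to a single logit
y_prob = torch.sigmoid(head(conv_out)).view(-1)   # probability per sample
print(y_prob)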
st102120
Ok, Then yes as the others stated, a linear layer to a single output node is what you want.
st102121
Hi I wish to record the log likelihood for the validation set for a few iterations. Can anyone tell me how to go about doing it?
st102122
I am assuming you are using a loss function like NLLLoss to get the log likelihood. criterion = nn.NLLLoss() loss = criterion(output, target) You can get the value of the loss using loss.item() and store it in a list or array. For a nice example of how to keep track of your loss, check out this line from the ImageNet example.
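A minimal sketch of recording it over the validation set (model and val_loader are hypothetical names for your network and validation DataLoader):
import torch
import torch.nn as nn

criterion = nn.NLLLoss()
val_losses = []

with torch.no_grad():
    for data, target in val_loader:
        output = model(data)  # model is assumed to output log-probabilities
        val_losses.append(criterion(output, target).item())  # store the scalar value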
st102123
Thanks a lot Viraat. I wish to clarify my doubt once again. I actually want to record the likelihood of my validation set at the point where the error rate is at its minimum during training. Using this likelihood as my reference, I want to train my model once again, now using a new training set that is the combination of the train set and the validation set (new train set = old train set + validation set), and continue training until the new log likelihood of the validation set (the validation set remains the same as in the previous case) matches the old recorded log likelihood of the validation set.
st102124
No problem Akhilesh. Please let me know if what I understand is right. I’m assuming you mean that you want to get the log likelihood on your validation set when the validation error is lowest during training. You then want to re-train your model (not from scratch?) using the full dataset (train + validation) and train until the new log likelihood of the validation set matches your previous minimum. You say that your validation set remains the same. Traditionally you would want your validation set not to include samples from your training set, but when you combine your train + validation sets and use the same validation set, the model will be predicting for inputs it has already seen. I’m curious as to why exactly you would want to do something like that. Nevertheless, I think you can keep track of your validation_error in your training loop and update the log likelihood whenever you find a lower validation_error. Assume that the log likelihood at the lowest validation error is log_likelihood_target. Then you would re-train on the combined set until the log likelihood of your validation set is equal to log_likelihood_target.
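A rough sketch of the two phases (all names here are hypothetical helpers, not an existing API):
best_val_error = float('inf')
log_likelihood_target = None

# phase 1: train on the training set only and remember the log likelihood at the best epoch
for epoch in range(num_epochs):
    train_one_epoch(model, train_loader)
    val_error, val_log_likelihood = evaluate(model, val_loader)
    if val_error < best_val_error:
        best_val_error = val_error
        log_likelihood_target = val_log_likelihood

# phase 2: retrain on train + validation until the validation log likelihood reaches the target
for epoch in range(max_retrain_epochs):
    train_one_epoch(model, combined_loader)
    _, val_log_likelihood = evaluate(model, val_loader)
    if val_log_likelihood >= log_likelihood_target:  # adjust the comparison to your sign convention
        break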
st102125
When in the model.eval() phase, this is what I’m doing now that I’m on 0.4: with torch.no_grad(): for i, input in enumerate(data_loader): input_var = input.requires_grad_().to(gpu) output = model(input_var) It feels redundant to say with torch.no_grad(): and then .requires_grad_(). Is this the right format? I don’t care about saving gradients because I’m not training.
st102126
Solved by albanD in post #2 Hi .requires_grad_() should be used only to make the tensor require gradients. In this case you don’t want to require gradients so you should remove it. Otherwise this is the right usage for torch.no_grad().
st102127
Hi .requires_grad_() should be used only to make the tensor require gradients. In this case you don’t want to require gradients so you should remove it. Otherwise this is the right usage for torch.no_grad().
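So the loop from the question would simply become (a sketch of the same snippet without requires_grad_()):
with torch.no_grad():
    for i, input in enumerate(data_loader):
        input = input.to(gpu)   # gpu is the device from the original snippet
        output = model(input)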
st102128
Hi there, I am writing a C/CUDA extension for PyTorch. I want to check some conditions within that code and raise exceptions if these conditions are not satisfied. I don’t see how to do that; could you please give me some pointers? Thanks
st102129
Well, you cannot raise in CUDA device code and expect that to propagate to host C/Python before synchronization takes place. That said, in cuda kernels you can use assert(condition);. On the host AT_CHECK(cond, "error message with ", mytensor.dims(), " dynamic content"); works. rgrep in aten/src/ will find examples of either. Best regards Thomas
st102130
Thanks a lot for the reply. I want to go with the AT_CHECK solution; sorry for the basic question, but could you please tell me which header I should include for this one?
st102131
It’s defined in ATen/Error.h, but that should be automatically included from ATen/ATen.h . Best regards Thomas
st102132
Looking at the DataLoader sources, it seems that CPU->GPU data transfers are triggered within a thread instead of a process. Are there any restrictions against calling Tensor.to(“cuda:0”) or Tensor.copy_() from within a worker process? What about Tensor.pin_memory()?
st102133
Bump (and I have edited the title for clarification). It would be nice to have more documentation on this in case a custom dataloader is desired.
st102134
I have created a model, trained it, evaluated it, and saved it with torch.save(model.state_dict(), 'pathToSavedModel') If I create the same model in a different .py file, how can I load the pretrained parameters from the old model into the new one so I can evaluate it directly? The documentation on torch.load does not give me any insight, and I could not relate other discussions to my case.
st102135
Solved by ptrblck in post #2 You can load the state_dict using: model = YourModel() model.load_state_dict(torch.load('pathToSavedModel')) Have a look at the Serialization semantics for more information.
st102136
You can load the state_dict using: model = YourModel() model.load_state_dict(torch.load('pathToSavedModel')) Have a look at the Serialization semantics for more information.
st102137
Hi there, Where is the download link of http://download.pytorch.org/whl/cu92/torch-0.4.1-cp35-cp35m-manylinux1_x86_64.whl ? Torch-0.4.1 + CUDA9.2 + Python-3.5 + manylinux1 Thanks.
st102138
This may help github.com pytorch/pytorch.github.io/blob/master/_data/wizard.yml#L223 40 matcher: 'pip,linux,cudanone,python3.5' cmd: 'pip3 install http://download.pytorch.org/whl/cpu/torch-0.4.1-cp35-cp35m-linux_x86_64.whl <br/> pip3 install torchvision' - matcher: 'pip,linux,cuda8,python3.5' cmd: 'pip3 install http://download.pytorch.org/whl/cu80/torch-0.4.1-cp35-cp35m-linux_x86_64.whl <br/> pip3 install torchvision' - matcher: 'pip,linux,cuda9.0,python3.5' cmd: 'pip3 install torch torchvision' - matcher: 'pip,linux,cuda9.2,python3.5' cmd: 'pip3 install http://download.pytorch.org/whl/cu92/torch-0.4.1-cp35-cp35m-linux_x86_64.whl <br/> pip3 install torchvision' - matcher: 'pip,linux,cudanone,python3.6' cmd: 'pip3 install http://download.pytorch.org/whl/cpu/torch-0.4.1-cp36-cp36m-linux_x86_64.whl <br/> pip3 install torchvision' - matcher: 'pip,linux,cuda8,python3.6' cmd: 'pip3 install http://download.pytorch.org/whl/cu80/torch-0.4.1-cp36-cp36m-linux_x86_64.whl <br/> pip3 install torchvision' - matcher: 'pip,linux,cuda9.0,python3.6' cmd: 'pip3 install torch torchvision' -
st102139
After much effort, I managed to build and install pytorch on my macbook such that I was able to make use of my GPU, using version 0.5.0a0+ba634c1. I also have an older version 0.4.0 with no GPU support in a different conda environment. I started running my model with the new install and felt it was significantly slower, both using the CPU and the GPU. After timing it, I found that a single backwards pass took 28 seconds using 0.5.0 while the same code took 0.6 seconds with 0.4.0. What could be slowing it down to such an extent? I’m not sure how to go about debugging the backwards pass… GPU Supported Info PyTorch version: 0.5.0a0+ba634c1 Is debug build: No CUDA used to build PyTorch: 9.2 OS: Mac OSX 10.13.6 GCC version: Could not collect CMake version: version 3.9.1 Python version: 3.5 Is CUDA available: Yes CUDA runtime version: 9.2.148 GPU models and configuration: Could not collect Nvidia driver version: Could not collect cuDNN version: Probably one of the following: /usr/local/cuda/lib/libcudnn.5.dylib /usr/local/cuda/lib/libcudnn.6.dylib /usr/local/cuda/lib/libcudnn.7.dylib /usr/local/cuda/lib/libcudnn.dylib /usr/local/cuda/lib/libcudnn_static.a /usr/local/cuda8.0/lib/libcudnn.6.dylib /usr/local/cuda8.0/lib/libcudnn_static.a Versions of relevant libraries: [pip3] numpy (1.14.4) [pip3] torch (0.4.0) [conda] torch 0.5.0a0+ba634c1 Note that my GPU is a GeForce 750M, and the output of nvcc --version is: nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2018 NVIDIA Corporation Built on Tue_Jun_12_23:08:12_CDT_2018 Cuda compilation tools, release 9.2, V9.2.148 Non-GPU Supported Info (the older, faster one) PyTorch version: 0.4.0 Is debug build: No CUDA used to build PyTorch: Could not collect OS: Mac OSX 10.13.6 GCC version: Could not collect CMake version: version 3.12.0 Python version: 3.6 Is CUDA available: No CUDA runtime version: 9.2.148 GPU models and configuration: Could not collect Nvidia driver version: Could not collect cuDNN version: Probably one of the following: /usr/local/cuda/lib/libcudnn.5.dylib /usr/local/cuda/lib/libcudnn.6.dylib /usr/local/cuda/lib/libcudnn.7.dylib /usr/local/cuda/lib/libcudnn.dylib /usr/local/cuda/lib/libcudnn_static.a /usr/local/cuda8.0/lib/libcudnn.6.dylib /usr/local/cuda8.0/lib/libcudnn_static.a Versions of relevant libraries: [pip3] numpy (1.14.4) [pip3] torch (0.4.0) [conda] torch 0.4.0 [conda] torchvision 0.1.9 py36_1 soumith
st102140
Having compared the results of the autograd profiler between the two installs, the major bottleneck seems to be a really long “Dropout” call (there are two in total, one takes 65 us and the other takes 24,228 us), and a really long “bernoulli_” call (there are two, one takes 15 us and the other takes 24,049 us). Also, what seems apparent when I run the profiler is that even if I call .cuda() on my model, all the time is on the CPU and CUDA is 0.00 for everything… I do get the warning Found GPU0 GeForce GT 750M which is of cuda capability 3.0. PyTorch no longer supports this GPU because it is too old. However, everything runs and tensors show device='cuda:0'
st102141
Could be because your GPU really isn’t supported anymore. But in any case, I would also try updating the NVIDIA drivers. Could you check the GPU usage info via e.g. nvidia-smi (in the terminal)? Your problem may also be related to this thread: GPU utilized 99% but Cudnn not used, extremely slow. In addition to torch.cuda.is_available() returning True, have you checked that torch.backends.cudnn.version() returns something other than None?
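For reference, those checks can be run directly from a Python prompt, for example:
import torch

print(torch.cuda.is_available())        # should be True
print(torch.backends.cudnn.version())   # should be an integer, not None
if torch.cuda.is_available():
    print(torch.cuda.get_device_capability(0))  # compute capability of GPU 0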
st102142
torch.backends.cudnn.version() returns 7104. Unfortunately there is no decent command-line tool analogous to nvidia-smi for OSX. I’ve used the Activity Monitor and it does show what appears to be consistent full GPU usage, but it’s a graph only and has no other information, so it’s of limited merit! It does appear to be in use, though! My NVIDIA drivers are also up to date. Maybe it’s just an issue with my GPU not being supported. I will try installing with NO_CUDA=1 and see if it does the same. Edit: I installed it with no CUDA and it’s now even slower; it takes 93 seconds versus the 0.6 taken by 0.4.0. I saved the logs from the install: logs here and warnings here
st102143
Hm, it does sound like it’s using your GPU somehow. Maybe it’s not using the GPU (CUDA/cuDNN) implementations for all operations, which is why it’s now slower than before.
st102144
I can deal with not using my GPU but I would like to use the master, do you have any idea what might be causing the speed differences between CPU 0.4.0 and CPU 0.5.0? Or how I can go about diagnosing a potential cause?
st102145
sorry no idea why it would be that slow on CPU now (compared to the CPU performance you got on 0.4)
st102146
The problem you have, @al3x, is that when you compiled pytorch from master, it didn’t detect a good BLAS library (such as MKL or Accelerate) and is using unoptimized code. When you installed the 0.4.0 binary, it correctly linked against MKL. That explains the 28s versus 0.6 seconds. In our build-from-source instructions, the critical part that will make sure you link against MKL is this section: export CMAKE_PREFIX_PATH=[anaconda root directory] conda install numpy pyyaml mkl mkl-include setuptools cmake cffi typing
st102147
I don’t understand why model.load_state_dict() would change the variable that was passed to initialize the weights of the model (in the following example, a); this behavior has never been mentioned in the documentation. pytorch version: 0.4.0 Here is the toy example: class MINIModel(nn.Module): def __init__(self,h0=None): super(MINIModel, self).__init__() self.h = nn.Linear(3,4) if not h0 is None: self.h.weight.data = h0 def forward(self,x): return self.h(x) m = MINIModel() torch.save(m.state_dict(), 'test/params.ckpt') print('saved model weight') print(m.h.weight.data) a = torch.ones(4,3) m = MINIModel(a) print('before loading, a is:') print(a) m.load_state_dict(torch.load('test/params.ckpt')) print('after loading, a is:') print(a) #HERE the a should not get changed print('after loading, weight is:') print(m.h.weight.data) which outputs: saved model weight tensor([[-0.5034, 0.0249, -0.4539], [-0.3079, -0.2559, 0.5453], [ 0.3883, -0.4435, -0.3697], [ 0.1465, 0.1131, -0.1957]]) before loading, a is: tensor([[ 1., 1., 1.], [ 1., 1., 1.], [ 1., 1., 1.], [ 1., 1., 1.]]) after loading, a is: tensor([[-0.5034, 0.0249, -0.4539], [-0.3079, -0.2559, 0.5453], [ 0.3883, -0.4435, -0.3697], [ 0.1465, 0.1131, -0.1957]]) after loading, weight is: tensor([[-0.5034, 0.0249, -0.4539], [-0.3079, -0.2559, 0.5453], [ 0.3883, -0.4435, -0.3697], [ 0.1465, 0.1131, -0.1957]]) After loading, a should still be all ones, shouldn’t it? Can anyone help explain this? Thank you very much.
st102148
Solved by John_Smith in post #2 This issue has nothing to do with Load_state_dict(). The problem is when you created your model using MINIModel(a), you’ve set the reference of a as the data for the h variable via the following lines: if not h0 is None: self.h.weight.data = h0 Basically, any change to self.h.weight.da…
st102149
This issue has nothing to do with Load_state_dict(). The problem is when you created your model using MINIModel(a), you’ve set the reference of a as the data for the h variable via the following lines: if not h0 is None: self.h.weight.data = h0 Basically, any change to self.h.weight.data will be reflected in a because they share the same data. It can be reproduced by the following code: class MINIModel(nn.Module): def __init__(self,h0=None): super(MINIModel, self).__init__() self.h = nn.Linear(3,4) if not h0 is None: self.h.weight.data = h0 def forward(self,x): return self.h(x) a = torch.ones(4,3) m = MINIModel(a) print('before loading, a is:') print(a) m.h.weight.data.add_(1) print('after changing h.weight.data, a is:') print(a) To fix this, you can change: self.h.weight.data = h0 to: self.h.weight.data = h0.clone()
st102150
Hi all, I am not entirely sure if this is a bug or is intended behavior, but I found that if you try to do a = torch.Tensor([1,2,3]).long() a[1:] = a[:-1] a the output is: tensor([1, 1, 1]) whereas if you use numpy, say a = np.array([1,2,3]) a[1:] = a[:-1] a would be array([1, 1, 2]) Is this intended behavior? Thanks!
st102151
I am using pytorch 0.3.0.post4. I am trying to train an SSD model on COCO using code from this repo, and I get this StopIteration and the following messages after around 1000 iterations. I was wondering what could cause this problem and whether there is any idea how to solve it? File “train.py”, line 255, in train() File “train.py”, line 165, in train images, targets = next(batch_iterator) File “/home/rusu5516/.local/lib/python3.5/site-packages/torch/utils/data/dataloader.py”, line 200, in next raise StopIteration StopIteration
st102152
That’s because the dataloader finished iterating once over the whole dataset. You can make an iteration cycle to avoid this: def cycle(iterable): while True: for x in iterable: yield x ... batch_iterator = iter(cycle(data_loader))
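Alternatively, a common pattern is to simply recreate the iterator every epoch instead of holding one long-lived iterator (a sketch; num_epochs and train_step are hypothetical):
for epoch in range(num_epochs):
    for images, targets in data_loader:   # a fresh iterator is created each epoch
        train_step(images, targets)       # hypothetical training step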
st102153
Hey, I wanted to ask if there is a difference between feeding a whole batch to the model and then updating the weights, versus feeding the samples individually, summing up the loss, and updating after a batch’s worth of samples. I am aware that this also depends on the criterion used, but can I do the following in practice? (My GPU memory is too small to train with big batches and I would like to know if this is a workaround.) batch = load_batch() out = model(batch) loss = criterion(out, label) loss.backward() optimizer.step() loss = 0 for data in batch: out = model(data.unsqueeze(0)) loss += criterion(out, label) loss.backward() optimizer.step()
st102154
Solved by ptrblck in post #2 If you use reduction='sum' in your criterion, the gradients should be the same: model = nn.Linear(10, 2) criterion = nn.NLLLoss(reduction='sum') x = torch.randn(2, 10) y = torch.empty(2, dtype=torch.long).random_(2) # use whole batch loss = criterion(model(x), y) loss.backward() model_grad_batch …
st102155
If you use reduction='sum' in your criterion, the gradients should be the same: model = nn.Linear(10, 2) criterion = nn.NLLLoss(reduction='sum') x = torch.randn(2, 10) y = torch.empty(2, dtype=torch.long).random_(2) # use whole batch loss = criterion(model(x), y) loss.backward() model_grad_batch = model.weight.grad.clone() print(model_grad_batch) # use separate samples model.zero_grad() loss = 0 for idx in range(x.size(0)): loss += criterion(model(x[idx].unsqueeze(0)), y[idx].unsqueeze(0)) loss.backward() model_grad_sep = model.weight.grad.clone() print(model_grad_sep) torch.allclose(model_grad_batch, model_grad_sep) Some layers like nn.BatchNorm will behave differently, so maybe you could try to tune the momentum etc.
st102156
I want to implement a linear function: y = W_1x_1 + W_2x_2 + b How can I create this linear layer with nn.Linear()? I am new to pytorch, thanks for any help.
st102157
Solved by John_Smith in post #3 import torch import torch.nn as nn x = torch.tensor([2.0]) class Linear_Model(nn.Module): def __init__(self): super().__init__() self.lm1 = nn.Linear(1,1,bias=False) self.lm2 = nn.Linear(1,1,bias=True) def forward(self, X): out = self.lm1(X) + self.lm2(X) …
st102158
Should be something like x = nn.Linear(input_size, output_size1, bias=False) x = nn.Linear(output_size1, output_size2, bias=True)
st102159
import torch import torch.nn as nn x = torch.tensor([2.0]) class Linear_Model(nn.Module): def __init__(self): super().__init__() self.lm1 = nn.Linear(1,1,bias=False) self.lm2 = nn.Linear(1,1,bias=True) def forward(self, X): out = self.lm1(X) + self.lm2(X) return out mdl = Linear_Model() print(mdl(x))
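If x_1 and x_2 are meant to be two separate inputs, a variant closer to y = W_1 x_1 + W_2 x_2 + b (just a sketch) would take both in forward():
import torch
import torch.nn as nn

class TwoInputLinear(nn.Module):
    def __init__(self):
        super().__init__()
        self.lm1 = nn.Linear(1, 1, bias=False)  # W_1
        self.lm2 = nn.Linear(1, 1, bias=True)   # W_2 and b

    def forward(self, x1, x2):
        return self.lm1(x1) + self.lm2(x2)

mdl = TwoInputLinear()
print(mdl(torch.tensor([2.0]), torch.tensor([3.0])))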
st102160
I had trained C3D in Caffe, and now I am trying to do the same in PyTorch. But I am not seeing the same loss trend in PyTorch as I obtained in Caffe. I am using the SGD solver, and I had read somewhere that the SGD implementation in PyTorch is a bit different. The loss was decreasing at a faster rate in Caffe; in PyTorch it is not coming down as quickly. Is this expected behaviour?
st102161
Hello, I tried to use multiple GPUs to run my CNN based on Pytorch 0.4.1 using the torch.nn.DataParallel function. Here is the code that uses this function: if args.gpus and len(args.gpus) > 1: model = torch.nn.DataParallel(model, args.gpus) It gave me this error message: RuntimeError: torch/csrc/autograd/variable.cpp:166: get_grad_fn: Assertion output_nr_ == 0 failed. The line that triggered this error is: out += self.bias.view(1, -1, 1, 1).expand_as(out) Some people on GitHub suggested using DistributedDataParallel instead of DataParallel, since there’s a bug for multiple GPUs in Pytorch 0.4.x. If that’s true, could anyone show an example of how to use this DistributedDataParallel function? I’m not familiar with its settings… Thanks
st102162
Hi, are you also using an RNN with multi-GPU like in https://github.com/pytorch/pytorch/issues/7092?
st102163
I was wondering if: Conv2D(32, (3, 3), padding='same',input_shape=(32,32,3)) is equivalent to nn.Conv2d(3,32,kernel_size=(3,3), padding=1) Thank you in advance.
st102164
Can you pick unique names for the input args instead of numbers? The many duplicates make the mappings a bit unclear.
st102165
Hi, so if I try to return multiple values from a custom loss function, it throws error that the loss function is not iterable. Demo Exaple import torch import torch.nn as nn from torch.autograd.function import Function class CenterLoss(nn.Module): def __init__(self, num_classes=10, feat_dim=2, size_average=True): super(CenterLoss, self).__init__() self.centers = nn.Parameter(torch.randn(num_classes, feat_dim)) self.centerlossfunc = CenterlossFunc.apply self.feat_dim = feat_dim self.size_average = size_average def forward(self, label, feat): batch_size = feat.size(0) feat = feat.view(batch_size, -1) # To check the dim of centers and features if feat.size(1) != self.feat_dim: raise ValueError("Center's dim: {0} should be equal to input feature's dim: {1}".format(self.feat_dim,feat.size(1))) loss = self.centerlossfunc(feat, label, self.centers) loss /= (batch_size if self.size_average else 1) return loss,feat b,c = CenterLoss() **TypeError Traceback (most recent call last) in () ----> 1 b,c = CenterLoss() TypeError: ‘CenterLoss’ object is not iterable**
st102166
You would have to create an object of the CenterLoss class first and then pass it the inputs. The code below should work. loss = CenterLoss() b, c = loss(label, feat)
st102167
Can you provide what error you get? I am able to run the below code fine. As I don’t have access to the rest of your code I just return the input and feature as it is. import torch import torch.nn as nn from torch.autograd.function import Function class CenterLoss(nn.Module): def __init__(self, num_classes=10, feat_dim=2, size_average=True): super(CenterLoss, self).__init__() self.centers = nn.Parameter(torch.randn(num_classes, feat_dim)) def forward(self, label, feat): batch_size = feat.size(0) feat = feat.view(batch_size, -1) return label, feat loss = CenterLoss() label = torch.ones(2) feat = torch.ones(2) print(loss(label, feat)) The output is (tensor([1., 1.]), tensor([[1.], [1.]]))
st102168
I did a workaround: instead of returning the other variable, I accessed it using class_name.feat. Any suggestions on how to overcome the problem where the loss starts returning NaN after a few epochs?
st102169
Hello, Does anybody know if there is a C++ interface to ONNX so that I can access a saved trained model in C++ using protobuf?
st102170
Due to compatibility issues, I am using pytorch=0.2.0 with python=2.7. I installed it using conda install pytorch=0.2.0 cuda80 -c soumith, as it was pointed out on the forum that this would reduce the lag when using .cuda() for the first time; however, I do not see any improvement and loading still takes ~3 mins. (P.S. I’m using a Tesla V100.)
st102171
It has been some time since PyTorch 0.2, but I can’t remember it being that slow. It should take a few seconds at most, not minutes. Not sure what’s causing the problem in your case; it could be a bug in PyTorch 0.2. PyTorch 0.3 and 0.4 also work with Python 2.7. Regarding the compatibility issues you mentioned, the Tesla V100 should work with CUDA 8 & 9, so I think if you can install 0.3 or 0.4 somehow alongside the 0.2 version, it would help figure out whether it’s a PyTorch 0.2-specific bug or something else.
st102172
I remember this issue occurring when a wrong CUDA version was installed: on the first run it recompiles the kernels for your GPU. Unfortunately there is no CUDA 9 build for pytorch 0.2.0. However, could you print torch.version.cuda after your first CUDA run? @rasbt’s ideas sound good for debugging your issue!
st102173
I tried printing torch.version.cuda after the first .cuda(); it seems that torch.version doesn’t have a cuda attribute, it shows: Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'module' object has no attribute 'cuda'
st102174
A Tesla V100 needs CUDA 9, and CUDA 9 is incompatible with pytorch 0.2.0, even if you build 0.2.0 from source. The solution is to upgrade to 0.4.0 (0.3.0 might work, but I’m not entirely sure; you may well need 0.4.0). You’ll notice that your ~/.nv directory is probably increasing in size without bound, right?
st102175
The same lagging issue is happening on another machine as well, using a GTX 1060 with python 3.5.4 and pytorch 0.3.0 py35cuda8.0cudnn6.0_0. The output of torch.version.cuda after the first .cuda() is '8.0.61'. Is there any way to reduce the lag here?
st102176
In case a future reader is interested, I solved it by installing pytorch 0.3.0.post4 compiled with CUDA 9, using pip install http://download.pytorch.org/whl/cu90/torch-0.3.0.post4-cp35-cp35m-linux_x86_64.whl
st102177
Hello, I have trained a network. I want to visualize the output of each layer when I do a forward pass on an image. I can see how I can visualize the intermediate conv layers, as they are already in (height × width × channels) format. When it comes to the fully connected layers, they are (samples × 1-d vector). How will I reconstruct this back to a (samples × height × width) format so that I can visualize what features it has learnt? Thanks in advance for the help
st102178
Check out DeepDream by Google, which basically generates an image that specifically activates a certain neuron: https://www.mathworks.com/help/nnet/examples/visualize-features-of-a-convolutional-neural-network.html (matlab) https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/deepdream/deepdream.ipynb (tensorflow) I couldn’t find a PyTorch version.
st102179
There are several third-party implementations. You could have a look at this one to begin with.
st102180
Hi, I have a problem with starting a distributed training with pytorch. Following is the sample code to repro this issue: from __future__ import print_function import os import torch import torch.nn as nn import torch.utils.data as data import torch.utils.data.distributed if __name__ == "__main__": # Initialize the model model = nn.Sequential( nn.Linear(20, 10), nn.Linear(10, 20) ) torch.distributed.init_process_group(world_size=2, \ init_method='file:///' + os.path.join(os.environ['HOME'], 'distributedFile'), \ backend='gloo') model.cuda() model = nn.parallel.DistributedDataParallel(model) print(4) for epoch in range(5): total_loss = 0 for idx in range(5): in_val = torch.zeros(20) in_val[idx] = 1.0 print(model) output = model(in_val) The error msg is: Traceback (most recent call last): File "dist_test.py", line 36, in <module> output = model(in_val) File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__ result = self.forward(*input, **kwargs) File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 216, in forward outputs = self.parallel_apply(self._module_copies[:len(inputs)], inputs, kwargs) File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 223, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 65, in parallel_apply raise output File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 41, in _worker output = module(*input, **kwargs) File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__ result = self.forward(*input, **kwargs) File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/torch/nn/modules/container.py", line 91, in forward input = module(input) File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__ result = self.forward(*input, **kwargs) File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 55, in forward return F.linear(input, self.weight, self.bias) File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/torch/nn/functional.py", line 994, in linear output = input.matmul(weight.t()) RuntimeError: size mismatch, m1: [1 x 10], m2: [20 x 10] at /opt/conda/conda-bld/pytorch_1524586445097/work/aten/src/THC/generic/THCTensorMathBlas.cu:249 terminate called after throwing an instance of 'gloo::EnforceNotMet' what(): [enforce fail at /opt/conda/conda-bld/pytorch_1524586445097/work/third_party/gloo/gloo/cuda.cu:249] error == cudaSuccess. 29 vs 0. Error at: /opt/conda/conda-bld/pytorch_1524586445097/work/third_party/gloo/gloo/cuda.cu:249: driver shutting down Any feedback or suggestions would be appreciated.
st102181
A = torch.rand(4,3,5,5) B = A[:,0,:,:] B.size() Out[33]: torch.Size([4, 5, 5]) Is there any way, other than using torch.unsqueeze, to keep the size of tensor B as [4, 1, 5, 5]?
st102182
Solved by klory in post #2 [Capture]
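The screenshot isn’t preserved in the text, but presumably it showed indexing with a range (or a list of indices), which keeps the dimension:
import torch

A = torch.rand(4, 3, 5, 5)
B = A[:, 0:1, :, :]    # or A[:, [0], :, :]
print(B.size())        # torch.Size([4, 1, 5, 5])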
st102183
Haha, I was more interested in some keepdim=True way, but that works too, I guess. Thanks
st102184
Hey ! I’m trying to use LSTM module to predict a rather simple sequence. Basically, the network receives 20 time steps, and I want to predict steps 1 to 21. I thought my model was fine, but I can’t get the loss to decrease. Could someone help ? Here’s my code: import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import numpy as np import matplotlib.pyplot as plt plt.style.use('dark_background') x = np.linspace(0,30.,500) y = x*np.sin(x) + 2*np.sin(5*x) nb_steps = 20 class LSTM(nn.Module): def __init__(self): nn.Module.__init__(self) self.lstm = nn.LSTM(1,100) self.head = nn.Linear(100,1) def forward(self,x): outputs, states = self.lstm(x) outputs = outputs.reshape(x.shape[0]*x.shape[1], -1) pred = self.head(outputs) return pred def load_batch(batch_size = 32): x_b = np.zeros((nb_steps,batch_size,1)) y_b = np.zeros((nb_steps*batch_size,1)) inds = np.random.randint(0, 479, (batch_size)) for i,ind in enumerate(inds): x_b[:,i,0] = y[ind:ind+nb_steps] y_b[i*nb_steps:(i+1)*nb_steps,0] = y[ind+1:ind+nb_steps+1] return torch.tensor(x_b).float(), torch.tensor(y_b).float() rnn = LSTM() adam = optim.Adam(rnn.parameters(), 1e-3) epochs = 1000 batch_size = 32 mean_loss = 0. for epoch in range(1,epochs+1): x_b,y_b = load_batch(batch_size) pred = rnn(x_b) shaped_pred = pred.reshape(-1,1) loss = F.mse_loss(shaped_pred, y_b) adam.zero_grad() loss.backward() adam.step() mean_loss += loss.item() if epoch%100 == 0: print('Epoch: {} | Loss: {:.6f}'.format(epoch, mean_loss/100.)) mean_loss = 0. f, ax = plt.subplots(2,1) while True : x_b, y_b = load_batch(1) pred = rnn(x_b).detach().numpy().reshape(-1) ax[0].plot(x,y, label= 'Real') ax[0].plot(x_b.numpy().reshape(-1),y_b.numpy().reshape(-1), label= 'Real batch') ax[0].plot(x_b.numpy().reshape(-1), pred, label = 'Pred') ax[1].scatter(x_b.numpy().reshape(-1),y_b.numpy().reshape(-1), label= 'Real') ax[1].scatter(x_b.numpy().reshape(-1), pred, label = 'Pred') for a in ax: a.legend() plt.pause(0.1) input() for a in ax: a.clear()
st102185
I had posted some code previous and realized it was wrong. The dangers of monkey patching. I figured it out. While this isn’t perfect, I did get the loss from 14000 to 5000 on my computer with this code. I am not sure it is right though. It doesn’t make my computer work hard enough for 1000 epochs. Maybe someone else can help with an explanation. import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import math import numpy as np import matplotlib.pyplot as plt plt.style.use('dark_background') x = np.linspace(0,30.,500) y = x*np.sin(x) + 2*np.sin(5*x) nb_steps = 20 class LSTM(nn.Module): def __init__(self): nn.Module.__init__(self) self.lstm = nn.LSTM(1,100) self.head = nn.Linear(100,1) def forward(self,x): outputs, states = self.lstm(x) outputs = outputs.reshape(x.shape[0]*x.shape[1], -1) pred = self.head(outputs) return pred def load_batch(batch_size = 32): x_b = np.zeros((nb_steps,batch_size,1)) y_b = np.zeros((nb_steps*batch_size,1)) inds = np.random.randint(0, 479, (batch_size)) for i,ind in enumerate(inds): x_b[:,i,0] = y[ind:ind+nb_steps] y_b[i*nb_steps:(i+1)*nb_steps,0] = y[ind+1:ind+nb_steps+1] return torch.tensor(x_b).float(), torch.tensor(y_b).float() rnn = LSTM() epochs = 1000 batch_size = 32 criterion = nn.MSELoss() mean_loss = 0. for epoch in range(1,epochs+1): x_b,y_b = load_batch(batch_size) # pred = rnn(x_b) # loss = F.mse_loss(abs(shaped_pred), abs(y_b)) # print(loss) # loss.backward() def closure(): global loss optimizer.zero_grad() pred = rnn(x_b) shaped_pred = pred.reshape(-1,1) loss = criterion(abs(shaped_pred), abs(y_b)) # print('loss:', loss.item()) loss.backward() return loss optimizer = optim.Adam(rnn.parameters(), 1e-3) optimizer.step(closure) mean_loss += loss.item() if epoch%100 == 0: print('Epoch: {} | Loss: {:.6f}'.format(epoch, mean_loss)) mean_loss = 0 f, ax = plt.subplots(2,1) while True : x_b, y_b = load_batch(1) pred = rnn(x_b).detach().numpy().reshape(-1) ax[0].plot(x,y, label= 'Real') ax[0].plot(x_b.numpy().reshape(-1),y_b.numpy().reshape(-1), label= 'Real batch') ax[0].plot(x_b.numpy().reshape(-1), pred, label = 'Pred') ax[1].scatter(x_b.numpy().reshape(-1),y_b.numpy().reshape(-1), label= 'Real') ax[1].scatter(x_b.numpy().reshape(-1), pred, label = 'Pred') for a in ax: a.legend() plt.pause(0.1) input() for a in ax: a.clear()
st102186
I was able to get the losses down by using a batch size of 1 and running for 5000 epochs. I think now it is just an issue of tuning.
st102187
Hi! I’m modifying ResNet’s code so that each layer can take another set of inputs. ResNet’s layers are Sequential objects. This is the code I’m using to run a tuple of inputs (image, aux_input): def run_layer(some_layer,x_aux): x,aux_info=x_aux for i in range(len(some_layer)): x=some_layer[i].forward((x,aux_info)) return x Assume that each block does return some kind of image encoding which has nothing to do with aux_info. I’m wondering if my code is correct and if there’s a better way to write it; I have a feeling I’m doing something wrong, which is why I’m posting this.
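For reference, a slightly cleaner sketch of the same loop iterates over the modules directly and calls them (so hooks still run) instead of calling .forward() explicitly:
def run_layer(some_layer, x_aux):
    x, aux_info = x_aux
    for block in some_layer:          # nn.Sequential is iterable
        x = block((x, aux_info))      # __call__ instead of .forward()
    return x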
st102188
I’m getting random jobs aborted when training an LSTM in 2 GPUs, with the following short, cryptic error: Fatal Python error: deallocating None Thread 0x00007f260ec2d700 (most recent call first): Thread 0x00007f260f7c5700 (most recent call first): Thread 0x00007f260ffc6700 (most recent call first): Current thread 0x00007f266b197740 (most recent call first): File "model.py", line 188 in <module> Aborted (core dumped) Line 188 is the training function. Some jobs finish correctly. Some fail like above. Linux AWS 1044 (Ubuntu 16.04.3) AWS 8 GPU instance (Tesla K80) python3.6 pytorch 0.3.0 CUDA 9.0.176 cuDNN 7.0.3 I’m trying to retrieve the kernel core dump to see if there are any clues there. This happens while doing grid-search on hidden size, and seems unrelated to the parameter value. Not entirely sure how to reproduce it, I’ll report back if I’m able to figure something out. A google search shows scattered instances of it that seem to me point to malloc/free. Way over my head.
st102189
A gdb backtrace could help as well. You could do: gdb python catch throw run test.py where test.py is your python script. When gdb catches the exception, typing backtrace will give a backtrace.
st102190
Great suggestion, thanks for the detailed steps. This is the result: Fatal Python error: deallocating None Thread 0x00007fff9da36700 (most recent call first): Thread 0x00007fff9eb78700 (most recent call first): Thread 0x00007fff9e377700 (most recent call first): Current thread 0x00007ffff7fd8740 (most recent call first): File "model.py", line 188 in <module> Thread 1 "python" received signal SIGABRT, Aborted. 0x00007ffff760a428 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54 54 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory. (gdb) backtrace #0 0x00007ffff760a428 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54 #1 0x00007ffff760c02a in __GI_abort () at abort.c:89 #2 0x00005555555b1e4f in Py_FatalError () #3 0x0000555555613817 in buffered_flush.cold () #4 0x00005555556603aa in _PyCFunction_FastCallDict () #5 0x000055555566077f in _PyObject_FastCallDict () #6 0x00005555556eeb6f in _PyObject_CallMethodId_SizeT () #7 0x00005555556603aa in _PyCFunction_FastCallDict () #8 0x000055555566077f in _PyObject_FastCallDict () #9 0x00005555556f0faf in _PyObject_CallMethodId () #10 0x0000555555759a98 in flush_std_files () #11 0x00005555555b1e49 in Py_FatalError () #12 0x0000555555642048 in dict_dealloc () #13 0x00005555556f3e3e in subtype_dealloc () #14 0x0000555555641820 in _PyTrash_thread_destroy_chain () #15 0x00005555556ecfeb in fast_function () #16 0x00005555556f2f95 in call_function () #17 0x000055555571462a in _PyEval_EvalFrameDefault () #18 0x00005555556ed8d9 in PyEval_EvalCodeEx () #19 0x00005555556ee67c in PyEval_EvalCode () #20 0x0000555555768ce4 in run_mod () #21 0x00005555557690e1 in PyRun_FileExFlags () #22 0x00005555557692e4 in PyRun_SimpleFileExFlags () #23 0x000055555576cdaf in Py_Main () #24 0x00005555556338be in main () (gdb) Any thoughts? I’ll keep on digging on my end. This is now happening in every pair of GPUs in which I’m running a separate job (not just randomly).
st102191
Nope, not directly. Haven’t worked on this for a few months, so my versions back then are probably obsolete by now. I hacked my way out by saving the model and other relevant params every successful epoch, and the restart training from the last one. Ugly but it works. Sorry you’re facing this, it’s a nasty, uninformative bug
st102192
Thanks for your reply. Would you mind telling me your data volume and GPU parallelism strategy (DataParallel or DistributedDataParallel)? Best
st102193
Hi, Is it possible to get the topological relationships between layers in PyTorch? I understand that we can do the forward pass and backprop by calling forward(). What if I really need the relationships between the model’s layers? For example, I want to do network pruning: when I delete one conv layer, I need to update all following layers connected to it accordingly.
st102194
Since the graph is dynamic, i.e., you can change the topology at each iteration or do things like if random.random() < 0.01: add three linear layers, there is no general rule, unless you define a representation yourself and stick to it in your network definitions.
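For example, one way to define such a representation yourself (just a sketch, with a hypothetical successor map) is to keep an explicit adjacency structure alongside the model and consult it when pruning:
# hand-maintained mapping from each layer name to the layers that consume its output
successors = {
    'conv1': ['bn1'],
    'bn1':   ['conv2'],
    'conv2': ['fc'],
}

for name, module in model.named_modules():   # model is assumed to exist
    print(name, '->', successors.get(name, []))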
st102195
Thanks for your reply Simon! I am curious how PyTorch finishes backpropagation for each layer if it has no idea of which layers follow it, because to update the weights of that layer we need to sum all the gradients coming from all of its descendants?
st102196
I have an old computer at work with an NVS 300 card which only supports up to CUDA 6.5. It runs Windows 7 (never upgraded to 10 due to compatibility issues with some of the software). I’ve tried compiling Pytorch on this system to no avail (although I’ve successfully compiled Pytorch from source on multiple Windows machines with GPUs that support CUDA 8+). Has anyone ever gotten Pytorch with GPU support to compile on such an old machine? Or should I just give up and use the CPU on that one?
st102197
Hi, You can surely compile without CUDA support. But CUDA 6.5 is too old, and most of the CUDA code uses more recent features, so you won’t be able to compile with CUDA support.
st102198
Oh well, no point in compiling if there are packages available for the CPU version of Pytorch
st102199
Hi, I want to know: when I want to measure the time an image takes to go through a GPU model, which one is correct? 1, start = time.time() result = vgg_gpu_model(image) end = time.time() 2, torch.cuda.synchronize() start = time.time() result = vgg_gpu_model(image) torch.cuda.synchronize() end = time.time() I find that the time results of method 1 and method 2 are very different, but the output result is the same. What’s the difference between them? Thanks for your attention.