st101100 | When I compile PyTorch, it says “cuDNN cannot be found”, which is strange.
I want to use cuDNN. It worked with PyTorch version 0.1, but since I upgraded to PyTorch 0.4, the GPUs can no longer be found by PyTorch. |
st101101 | Yes, this version is probably too old.
You may want to try upgrading to the latest cuDNN version for CUDA 8.0.
Also, to make sure it is detected, you can install it into the lib64 and include folders inside the CUDA install. |
st101102 | Thanks. I installed CUDA 7.0 and PyTorch now compiles successfully with GPU support; torch.cuda.is_available() returns True.
But another error happens: “RuntimeError: cublas runtime error, library not initialized at …/THCGeneral.cpp:377”. |
st101103 | Hi,
What is the code that you run to get this error?
Are the cuda samples working properly on your machine? |
st101104 | thanks.
I ran simpleCUBLAS of cuda sampels and it is OK.
when i ran torch.cuda.is_available(), it returns True.
i just ran lstm = nn.LSTM(3,3)
lstm.cuda()
then I got the error |
st101105 | Hi everyone! I’m currently exploring the possibility of encoding a dynamic computational graph with PyTorch and I’m a little confused about what is happening to my “dynamic model”.
As far as I understand, it’s possible to create models where, for instance, the number of layers and/or neurons per layer can change ([reference]) using Python control-flow operators like loops or conditional statements. However, I cannot figure out what happens to the learnable parameters in such a dynamic graph.
Just to be clearer, consider this snippet.
Basically, at each forward pass (that is to say, for every batch) we randomly throw a “coin” that leads us to different architectures, namely with 0, 1, 2 or 3 hidden layers.
class DynamicNet(torch.nn.Module):
    def __init__(self, D_in, H1, H2, D_out):
        super(DynamicNet, self).__init__()
        self.input_linear = torch.nn.Linear(D_in, H1)
        self.middle_linear1 = torch.nn.Linear(H1, H2)
        self.middle_linear2 = torch.nn.Linear(H2, H1)
        self.middle_linear3 = torch.nn.Linear(H1, H1)
        self.output_linear = torch.nn.Linear(H1, D_out)

    def forward(self, x):
        x = relu(self.input_linear(x))
        coin = random.randint(0, 3)
        if coin == 1:
            x = relu(self.middle_linear1(x))
        elif coin == 2:
            x = relu(self.middle_linear1(x))
            x = relu(self.middle_linear2(x))
        elif coin == 3:
            x = relu(self.middle_linear1(x))
            x = relu(self.middle_linear2(x))
            x = relu(self.middle_linear3(x))
        else:
            x = relu(self.output_linear(x))
        return F.log_softmax(x, dim=1)
My doubts are the following:
Am I really exploiting PyTorch’s dynamic graph capability? From my perspective, I’m basically creating a tree-like structure where we assign some probability of falling into one branch or another.
How are the weight matrices updated?
What will the final model that I eventually save for future use look like?
What are the answers to the previous three questions in this second case?
class DynamicNet(torch.nn.Module):
    def __init__(self, D_in, H, D_out):
        super(DynamicNet, self).__init__()
        self.input_linear = torch.nn.Linear(D_in, H)
        self.middle_linear = torch.nn.Linear(H, H)
        self.output_linear = torch.nn.Linear(H, D_out)

    def forward(self, x):
        x = relu(self.input_linear(x))
        coin = random.randint(0, 3)
        for _ in range(coin):
            x = relu(self.middle_linear(x))
        x = relu(self.output_linear(x))
        return F.log_softmax(x, dim=1)
Thanks a lot in advance for your answers |
st101106 | Solved by ptab in post #2
Regarding point 2: When you feed a batch to your network, forward (including your dice roll) is called. When you calculate gradients by calling .backward on some scalar value (calculated using your network output), the gradient with respect to the weights that were actually used to compute the outpu… |
st101107 | Regarding point 2: When you feed a batch to your network, forward (including your dice roll) is called. When you calculate gradients by calling .backward on some scalar value (calculated using your network output), the gradient with respect to the weights that were actually used to compute the output (this depends on the outcome of the dice roll) is computed. The gradient with respect to unused weights is not calculated. For example, if you roll coin == 2, for some weight w of self.middle_linear3 you should have w.grad == None.
EDIT: Note that as soon as self.middle_linear3 has previously been used at least once for a forward/backward call of a batch, w.grad will not be None anymore. If it is not used during a forward/backward call, it just won’t be updated/changed by calling .backward (usually w.grad will be zeros since one usually sets all gradients to zero between optimization steps).
Here is a code example hopefully explaining it well:
import torch
import torch.nn as nn
import random

class TestNet(nn.Module):
    def __init__(self):
        super(TestNet, self).__init__()
        self.fc_1 = nn.Linear(4, 1)
        self.fc_2 = nn.Linear(4, 1)

    def forward(self, x):
        if random.random() < 0.5:
            x = self.fc_1(x)
        else:
            x = self.fc_2(x)
        return x

net = TestNet()
data = torch.rand(32, 4)

out = net(data)
loss = sum(out)
loss.backward()

# We called forward/backward once, the gradients of the weights (parameters) of
# either fc_1 or fc_2 should be None
print("Weights and gradients after one f/b call")
for param in net.parameters():
    print(param)
    print("Gradient:", param.grad, "\n")

# Let's do 10 more forward/backward steps (setting grad to zero between steps)
for _ in range(10):
    out = net(data)
    loss = sum(out)
    net.zero_grad()
    loss.backward()

# Now (unless we were very unlucky (0.5**10-unlucky))
# all gradients are not None, and only the gradient with respect to the weights
# that were called in the last iteration are non-zero
print("Weights and gradients after 10 f/b calls")
for param in net.parameters():
    print(param)
    print("Gradient:", param.grad, "\n")
Partial answer to point 3: The recommended way is to save only the model weights. So I think this is how it works in your case: by saving only the weights, it does not matter what your forward function looks like. Your DynamicNet object knows which modules (5x Linear) it contains and saves their weights. You could create another net with a completely different forward function, and as long as your modules are the same (and are named the same) you should be able to load your saved weights. |
st101108 | Thanks for your answer @ptab
So, regarding point 3: imagine that I’m no longer rolling a die, but making decisions based on some input properties, something like:
while x.norm(2) < 10:
    x = relu(self.middle_linear1(x))
I will train my module weights following this (forward) update rule. But what if I present the model with a sample that does not satisfy the while condition? I mean, is the forward function something that is only used during training and could be totally different in an eventual future test phase where I’m using the saved model? |
st101109 | The model will always have the same parameters. When you use the forward function in the training phase, you build a dynamic computation graph that tells the backpropagation algorithm how to compute the new gradients in order to update the parameters. But the parameter instances will remain the same, regardless of the forward flow.
It means that, when you use your forward function in the testing phase, it changes the way the parameters are used, but they are there and don’t change.
If you, for some strange reason, change the forward function after training the model, the parameters will still be there and your forward function will define how to use them, but that’s about it. |
st101110 | The recommended way to leverage multiple GPUs in the same box is “DataParallel”. After reading the code of the “DataParallel” class (https://github.com/pytorch/pytorch/blob/master/torch/nn/parallel/data_parallel.py), I noticed that “parallel_apply” does the forward pass for the network on the different devices.
However, “parallel_apply” is implemented with multithreading (please refer to https://github.com/pytorch/pytorch/blob/c8b246abf31a8717105622077bc7669e2dc753a9/torch/nn/parallel/parallel_apply.py#L61 for more details).
If all the ops of a network can run on the GPU, this approach works well because the GPU has an asynchronous execution mode. If only some of the ops can run on the GPU and the others must run on the CPU (e.g. due to GPU memory limitations), does “DataParallel” still work? Multithreading cannot make use of multiple cores for parallel CPU tasks due to the GIL limitation. Maybe multiprocessing is a better choice for such a case? |
st101111 | I want to get the value at idx=[1,2] in the first heatmap, and the value at idx=[3,4] in the second heatmap…
How can I write the code?
a = np.random.randn(5,64,64) # five heatmaps
b =[[1,2], [3,4], [5,6], [7,8], [8,9]]
[a[0,1,2], a[1,3,4], ..., a[4,8,9]] |
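One way to gather these values is NumPy advanced indexing; a minimal sketch (not from the original thread):
import numpy as np

a = np.random.randn(5, 64, 64)   # five heatmaps
b = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [8, 9]])

# one (row, col) pair per heatmap, picked in a single vectorized lookup
vals = a[np.arange(5), b[:, 0], b[:, 1]]   # same as [a[0,1,2], a[1,3,4], ..., a[4,8,9]]
The same advanced-indexing pattern also works on torch tensors, as long as the index tensors have dtype long.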
st101112 | Hi,
I’m trying to build PyTorch from source and want to run it with NNPACK.
I tried to export USE_NNPACK=1 before python setup.py install, but this does not take effect (I don’t see it compiling NNPACK and I cannot find the built NNPACK library).
May I know how to achieve this? |
st101113 | coords[i] is a list containing 3 elements x, y, z, and I want to get the partial derivative of G[i] w.r.t. each of x, y, z, i.e. d(G[i])/d(x_i),
in some sort of functional form f(x) so that I can pass a scalar x to f().
This is one of the functions I am using as an input to a neural network, and I want to find the partial derivative of my NN w.r.t. x. Hence, I am trying to find d(NN)/d(G1[i]) · d(G1[i])/d(x_i)
import torch
import numpy as np

def sym1(coords):
    global avg
    global eeta
    global Rs
    global e
    R_avg = Rc
    G1 = []
    for i, m in enumerate(coords):
        G1.append(0)
        Ri = np.array(coords[i])
        for j in range(i, len(coords)):
            if i != j:
                Rj = np.array(coords[j])
                Rij = Ri - Rj
                Rij_norm = np.linalg.norm(Rij)
                sum1 = e**(-eeta * ((Rij_norm - Rs)**2))
                sum2 = cutoff(Rij_norm)
                summation = sum1 * sum2
                G1[i] = G1[i] + summation
    return G1 |
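For the derivative question, the main obstacle is that the function above goes through NumPy, so autograd cannot track it. A minimal sketch of one pairwise term written purely in torch, with placeholder constants (eeta and Rs are assumptions here, and the cutoff is omitted):
import torch

eeta, Rs = 0.5, 1.0                              # placeholder constants
coords = torch.randn(4, 3, requires_grad=True)   # 4 atoms, (x, y, z) each

diff = coords[0] - coords[1]                     # one pairwise term as an example
r = diff.norm()
g = torch.exp(-eeta * (r - Rs) ** 2)             # torch ops only, so the graph is kept

grad, = torch.autograd.grad(g, coords)           # d(g)/d(x, y, z) for every atom
print(grad.shape)                                # torch.Size([4, 3])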
st101114 | hello everyone,
Is it possible to use two different GPUs for training the same network, i.e. in a multi-GPU scenario, or should all the GPUs be the same?
I’m planning to get a new GTX 1080 Ti (from Asus; my GTX 1080 is from Gigabyte, so the brand is also different), but I don’t know whether I should use each separately or whether I can use both of them for training. Any help is greatly appreciated. |
st101115 | Solved by ptrblck in post #2
You could use different GPUs, but note that the slowest GPU might be the bottleneck for your overall performance. This means your GTX1080TI would have to wait for your GTX1080 to finish its operations before the next training iteration can be started. |
st101116 | You could use different GPUs, but note that the slowest GPU might be the bottleneck for your overall performance. This means your GTX1080TI would have to wait for your GTX1080 to finish its operations before the next training iteration can be started. |
st101117 | I use torchvision.models.inception_v3() to train on my own data. This is my code:
import torch
from torch import nn
from torch.autograd import Variable
import torch.nn.functional as F
from torch import optim
from torch.utils.data import DataLoader
import torchvision
from torchvision import transforms
from torchvision.datasets import ImageFolder
import os
import time

img_transform = {
    'train': transforms.Compose([
        transforms.Scale(150),
        transforms.CenterCrop(299),
        transforms.ToTensor()
    ]),
    'val': transforms.Compose([
        transforms.Scale(150),
        transforms.CenterCrop(299),
        transforms.ToTensor()
    ])
}

root_path = '../data'
batch_size = 24
dset = {
    'train': ImageFolder(os.path.join(root_path, 'train/province'), transform=img_transform['train']),
    'val': ImageFolder(os.path.join(root_path, 'val/province'), transform=img_transform['val'])
}
dataloader = {
    'train': DataLoader(dset['train'], batch_size=batch_size, shuffle=True, num_workers=4),
    'val': DataLoader(dset['val'], batch_size=batch_size, num_workers=4)
}
data_size = {
    x: len(dataloader[x].dataset.imgs)
    for x in ['train', 'val']
}
img_classes = dataloader['train'].dataset.classes
use_gpu = torch.cuda.is_available()

mynet = torchvision.models.inception_v3()
mynet.fc = nn.Linear(2048, 30)
if use_gpu:
    mynet = mynet.cuda()

optimizer = optim.SGD(mynet.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

num_epoch = 1
for epoch in range(num_epoch):
    print(epoch + 1)
    print('*' * 10)
    running_loss = 0.0
    running_acc = 0.0
    since = time.time()
    for i, data in enumerate(dataloader['train'], 1):
        img, label = data
        img = Variable(img).cuda()
        label = Variable(label).cuda()
        # forward
        out, _ = mynet(img)
        loss = criterion(out, label)
        _, pred = torch.max(out, 1)
        # backward
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        running_loss += loss.data[0] * label.size(0)
        num_correct = torch.sum(pred == label)
        running_acc += num_correct.data[0]
        if i % 50 == 0:
            print('Loss:{:.4f}, Acc: {:.4f}'.format(
                running_loss / (i * batch_size),
                running_acc / (i * batch_size)))
    running_loss /= data_size['train']
    running_acc /= data_size['train']
    elips_time = time.time() - since
    print('{}/{}, Loss:{:.4f}, Acc:{:.4f}, Time:{:.0f}s'.format(
        epoch + 1,
        num_epoch,
        running_loss,
        running_acc,
        elips_time))
    print()

# validation
mynet.eval()
num_correct = 0.0
total = 0.0
for data in dataloader['val']:
    img, label = data
    img = Variable(img).cuda()
    out = mynet(img)
    _, pred = torch.max(out.data, 1)
    num_correct += (pred.cpu() == label).sum()
    total += label.size(0)
print(total)
print(data_size['val'])
print('Acc:{}'.format(num_correct / total))
I can train, but when it comes to validation I get a CUDA runtime error.
It seems to be out of memory, but I don’t understand why. I can train, and validation is just a forward pass, no backward. Maybe the loaded training data is not freed, but I don’t know how to do that. Can anyone help me? Thanks |
st101118 | When performing just inference, you can use the volatile flag to reduce memory consumption:
img = Variable(img, volatile=True).cuda() |
st101119 | No, this flag is used to specify that you will not backpropagate through this graph, and thus all intermediary buffers are discarded. |
st101120 | Thank you so much. Do you know how to free the training data from memory at the end of training? |
st101121 | It will be freed when it goes out of scope, so it’s not a problem.
Unless you explicitly keep a reference to it, of course. |
st101122 | Thank you so much. It runs when I use volatile. But I still don’t understand why it runs out of memory when I do not use the volatile flag. |
st101123 | The volatile=True flag will disable backpropagation (which is not necessary for inference), so when you have volatile=False (the default) PyTorch will allocate more memory. |
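For reference, since PyTorch 0.4 the volatile flag is deprecated and the same memory saving is obtained with torch.no_grad(); a minimal sketch reusing mynet and dataloader from the script above:
mynet.eval()
with torch.no_grad():                      # no graph is built, so no intermediate buffers are kept
    for img, label in dataloader['val']:
        out = mynet(img.cuda())
        _, pred = torch.max(out, 1)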
st101124 | Thank you for your explanation. When I trained, I put the data in memory, and after training the data should be freed. Then I put the same data in for evaluation. As far as I know, it should not run out of memory either, because I was able to fit the data during training. So I don’t understand the reason. |
st101125 | Hi, did you solve the problem? I also met this problem… While the model is in the training phase, the memory is enough. However, when the model is in the validation phase, the memory is not enough. |
st101126 | I’m on a kind of meta-learning project and I want to update a parameter (theta) of a module with a gradient (d_loss(theta, x)/d_theta) that is differentiable wrt x.
I could get the differentiable gradient using autograd.grad, but I see no way to update my parameter by hand without detaching the gradient.
for example:
module = nn.Linear(4,2)
loss = (module(input)*x).sum()
grad = autograd.grad(loss, (module.weight), retain_graph=True)[0]
# this does not work:
module.weight = module.weight - alpha*grad
Is there any simple way to do that?
Another solution to my problem would be the equivalent of an nn.Module but with tensors instead of parameters (I just don’t want to re-implement a conv2d mechanism by hand with tensors) |
st101127 | Solved by alexis-jacq in post #2
Solved: nn.functional is exactly what I need. Sorry for the topic. |
st101128 | How do I make sure that the GPU is actually being utilized for computation? I use the “tensor.to()” function to put my model onto the GPU’s memory. I noticed that the GPU memory usage is normal (0.8/2 GB) but for whatever reason the actual GPU usage is shown as 0-1% in Windows Task Manager:
(screenshot: GPU utilization question.png, Windows Task Manager) |
st101129 | Hi I am working on Implementing DeepLabV3, I would like to know how to implement Image Pooling in pytorch.
(screenshot from the paper: arxiv.org/abs/1706.05587) |
st101130 | Atrous convolution is the same as dilated convolutions.
Look here 17, you need to set different values for the dilation. |
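For example, an atrous (dilated) 3x3 convolution with rate 2 that keeps the spatial size could look like this (an illustrative sketch, not code from the thread):
import torch
import torch.nn as nn

x = torch.randn(1, 256, 33, 33)
atrous = nn.Conv2d(256, 256, kernel_size=3, padding=2, dilation=2)  # rate 2, "same" output size
print(atrous(x).shape)  # torch.Size([1, 256, 33, 33])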
st101131 | Hi richard,
I do not know exactly what image pooling is, but since they concatenate all the layers obtained by spatial pyramid pooling afterwards, I expect it is a pooling operation that does not change the spatial dimensions. I was guessing it will decrease the number of channels (I might be wrong). I have gone through the other architecture, DeepLabV3+: Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. Even there, there is no mention of image pooling. |
st101132 | Hi ignacio-rocco,
I did not understand the image pooling method. How do I implement it? |
st101133 | It seems that you want to implement ASPP.
From Figs 2d and 5 it seems that this means running a set of convolutional layers to obtain a dense feature map, and then applying a set of different convolutions with different dilations. The results are then concatenated. This feature is a multi-scale representation of the image. (Somehow also similar to hypercolumns.) |
st101134 | I think you are trying to implement the image-level features. The image-level features are exploited in ParseNet and are implemented by global average pooling. You can confirm it (section 3.3), but I am almost sure about that. I will start doing the same work applied to the COCO 2017 dataset for semantic segmentation. |
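Putting those two answers together, a rough sketch of such image-level features (global average pooling, a 1x1 convolution, then upsampling back to the feature-map size); this is one possible reading of the paper, not code from the thread:
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImagePooling(nn.Module):
    def __init__(self, in_ch, out_ch):
        super(ImagePooling, self).__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                  # global average pooling -> 1x1
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        size = x.shape[2:]
        y = self.conv(self.pool(x))
        # upsample back so the result can be concatenated with the other ASPP branches
        return F.interpolate(y, size=size, mode='bilinear', align_corners=False)

print(ImagePooling(256, 256)(torch.randn(1, 256, 33, 33)).shape)  # torch.Size([1, 256, 33, 33])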
st101135 | Now, I have a matrix A whose size is N x N. I want to decompose A as A = L + T, where L is the lower triangular part of A and T is the strictly upper triangular part of A. How can I do this? |
st101136 | So you are looking to do LU decomposition, right? You can do that with two cholesky factorizations (see
http://www.alecjacobson.com/weblog/#article/2242 34)
Anyway:
Cholesky pytorch function:
torch.potrf(A)
potrf docs:
https://pytorch.org/docs/stable/torch.html#torch.potrf 70 |
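If the goal is literally the split A = L + T from the question (rather than an LU factorization), torch.tril and torch.triu already do it; a quick sketch:
import torch

A = torch.randn(4, 4)
L = torch.tril(A)               # lower triangular part, including the diagonal
T = torch.triu(A, diagonal=1)   # strictly upper triangular part
print(torch.allclose(A, L + T)) # True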
st101137 | The code for DCGAN is here
github.com
maxmatical/pytorch-projects/blob/master/DCGAN_with_GPU_Google_Colab.ipynb 15
The error message is
Epoch [1/20], Step [96/469], d_loss: 0.5494, g_loss: 5.2465, D(x): 0.95, D(G(z)): 0.23
Epoch [1/20], Step [196/469], d_loss: 1.2054, g_loss: 5.1388, D(x): 0.86, D(G(z)): 0.54
Epoch [1/20], Step [296/469], d_loss: 1.2517, g_loss: 4.7196, D(x): 0.91, D(G(z)): 0.61
Epoch [1/20], Step [396/469], d_loss: 1.4117, g_loss: 0.7615, D(x): 0.39, D(G(z)): 0.06
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1594: UserWarning: Using a target size (torch.Size([128, 1, 1, 1])) that is different to the input size (torch.Size([96, 1, 1, 1])) is deprecated. Please ensure they have the same size.
"Please ensure they have the same size.".format(target.size(), input.size()))
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-32-38a5a48d3f53> in <module>()
27 # loss for real images
28 outputs = D(images).view(-1,1,1,1)
---> 29 d_loss_real = criterion(outputs, real_labels)
30 real_score = outputs
31
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
475 result = self._slow_forward(*input, **kwargs)
476 else:
--> 477 result = self.forward(*input, **kwargs)
478 for hook in self._forward_hooks.values():
479 hook_result = hook(self, input, result)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py in forward(self, input, target)
484
485 def forward(self, input, target):
--> 486 return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction)
487
488
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in binary_cross_entropy(input, target, weight, size_average, reduce, reduction)
1595 if input.nelement() != target.nelement():
1596 raise ValueError("Target and input must have the same number of elements. target nelement ({}) "
-> 1597 "!= input nelement ({})".format(target.nelement(), input.nelement()))
1598
1599 if weight is not None:
ValueError: Target and input must have the same number of elements. target nelement (128) != input nelement (96)
It seems like it has something to do with the batch size? Since I set the batch size to 128. But I haven’t encountered any errors similar to this before.
Update: it seems that when I change the batch size to something smaller (i.e. 32) the code runs fine with no errors. Can anyone suggest a reason as to why this is happening? |
st101138 | Probably your number of samples is not divisible by the batch_size without a remainder, which might yield a smaller number of samples in the last batch.
As you create real_labels using the batch_size you could have a size mismatch for the last batch:
real_labels = torch.ones(bs).view(-1,1,1,1)
You could just keep the code and get rid of the last (smaller) batch using drop_last=True in your DataLoader.
Alternatively, you could create your labels using the current number of samples in the batch instead of the global batch_size:
real_labels = torch.ones(images.size(0)).view(-1,1,1,1) |
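For instance, dropping the incomplete final batch is just one extra DataLoader argument (a sketch with an assumed dataset name):
from torch.utils.data import DataLoader

# `dataset` stands for whatever dataset the DCGAN script already builds
loader = DataLoader(dataset, batch_size=128, shuffle=True, drop_last=True)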
st101139 | In tensorflow, creating a meshgrid is pretty easy
x_t, y_t = tf.meshgrid(tf.linspace(0.0, _width_f - 1.0, _width),
tf.linspace(0.0 , _height_f - 1.0 , _height))
How can I create a meshgrid in pytorch?
My try:
a = torch.linspace(0.0, _width_f - 1.0, _width)
b = torch.linspace(0.0 , _height_f - 1.0 , _height)
x_t = a.view(-1, 1).repeat(1, b.size(0))
y_t = b.view(1, -1).repeat(a.size(0), 1) |
st101140 | This 294 seems to work:
a = torch.linspace(0.0, _width_f - 1.0, _width)
b = torch.linspace(0.0 , _height_f - 1.0 , _height)
x_t = a.repeat(_height)
y_t = b.repeat(_width,1).t().contiguous().view(-1) |
st101141 | You can also do this to get X and Y values of a meshgrid:
xv, yv = torch.meshgrid([torch.arange(0,5), torch.arange(0,10)])
I am using PyTorch 0.4.1. |
st101142 | Hi,
I encounter a problem when writing forward function.
def forward(self, x):
    x = self.l1(x)
    x = self.l2(x)
    .......
    return x
To avoid hard coding, I would prefer a loop like
for i in range(10):
    x = self.locals()['l' + str(i)]
But this doesn’t work, since l1, l2, … are attribute names. Any suggestion will be much appreciated. |
st101143 | Solved by InnovArul in post #2
what about this?
x = getattr(self, 'l' + str(i))
Alternatively, you can use nn.ModuleList |
st101144 | what about this?
x = getattr(self, 'l' + str(i))
Alternatively, you can use nn.ModuleList 17 |
st101145 | That works. Thanks a lot! But I still have some problems with the following part:
for i in range(10):
    locals()['l' + str(i)] = nn.Linear(32, 32)
    getattr(self, 'l' + str(i)) = locals()['l' + str(i)]
It seems that getattr can’t be used to define an attribute (only to get one), but I still need to define the layers in a loop. locals() also failed for defining the layers. |
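For defining the layers in a loop, setattr (the counterpart of getattr) or nn.ModuleList both work; a small sketch, not tied to the poster’s exact model:
import torch
import torch.nn as nn

class LoopNet(nn.Module):
    def __init__(self):
        super(LoopNet, self).__init__()
        # option 1: setattr registers the layers as self.l0 ... self.l9
        for i in range(10):
            setattr(self, 'l' + str(i), nn.Linear(32, 32))
        # option 2: an nn.ModuleList is usually cleaner
        self.layers = nn.ModuleList([nn.Linear(32, 32) for _ in range(10)])

    def forward(self, x):
        for i in range(10):
            x = getattr(self, 'l' + str(i))(x)
        for layer in self.layers:
            x = layer(x)
        return x

print(LoopNet()(torch.randn(2, 32)).shape)  # torch.Size([2, 32])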
st101146 | Hi,
I am trying to compare the performance of two models, one with self-attention layers and one without. All hyperparameters are fixed and I am only testing including/excluding the attention layers.
My biggest problem is the weight initialization of the convolutional layers. I am using nn.init.xavier_normal_ for initializing the weights but am still suffering from roller-coaster performance from run to run.
How can I ensure consistency when initializing the weights of my model, such that any difference in performance is for sure due to the architecture change, not the initialization change?
BTW: I am using torch.cuda.manual_seed_all(5), but without any benefit in terms of consistency.
Best |
st101147 | Setting the seed might not be enough to get exactly the same parameters.
Since one model might have more or other layers than the second one, the PRNG might be called differently.
I would suggest to initialize one model and copy all parameters to the other model. This would make sure that at least all common layers have the same parameters.
Here 50 is a small example. |
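A minimal sketch of that copy step, where modelA and modelB are hypothetical models whose common layers share names and shapes:
# copy every parameter/buffer whose name and shape match from modelA into modelB
state_a = modelA.state_dict()
state_b = modelB.state_dict()
common = {k: v for k, v in state_a.items()
          if k in state_b and v.shape == state_b[k].shape}
state_b.update(common)
modelB.load_state_dict(state_b)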
st101148 | Hello, I am training my deep learning model and keep receiving this warning/error. I am not sure what it means but it seems to point to the DataLoader function. The code continues to run even with these errors.
Can someone tell me what this means and how I can fix it ?
Exception ignored in: <bound method _DataLoaderIter.__del__ of <torch.utils.data.dataloader._DataLoaderIter object at 0x7f47480d6f98>>
Traceback (most recent call last):
File "/home/kong/anaconda3/envs/social/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 349, in __del__
self._shutdown_workers()
File "/home/kong/anaconda3/envs/social/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 328, in _shutdown_workers
self.worker_result_queue.get()
File "/home/kong/anaconda3/envs/social/lib/python3.5/multiprocessing/queues.py", line 337, in get
return ForkingPickler.loads(res)
File "/home/kong/anaconda3/envs/social/lib/python3.5/site-packages/torch/multiprocessing/reductions.py", line 70, in rebuild_storage_fd
fd = df.detach()
File "/home/kong/anaconda3/envs/social/lib/python3.5/multiprocessing/resource_sharer.py", line 58, in detach
return reduction.recv_handle(conn)
File "/home/kong/anaconda3/envs/social/lib/python3.5/multiprocessing/reduction.py", line 181, in recv_handle
return recvfds(s, 1)[0]
File "/home/kong/anaconda3/envs/social/lib/python3.5/multiprocessing/reduction.py", line 152, in recvfds
msg, ancdata, flags, addr = sock.recvmsg(1, socket.CMSG_LEN(bytes_size))
ConnectionResetError: [Errno 104] Connection reset by peer |
st101149 | If could be related to this issue 376, which was recently fixed in this PR 247.
As it’s quite new, you could try to build PyTorch from source and check, if your issue still occurs.
You can find the build instructions here 115.
Let me know, if you get stuck. |
st101150 | I just recently switched over to Google Colab for the GPU, and I’m getting the following error for my notebook
github.com
maxmatical/pytorch-projects/blob/generative-models/DCGAN_with_GPU_Google_Colab.ipynb 5
I have set my NNs to cuda, as well as all the variables, but I’m still getting the error
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-28-3334654f75a5> in <module>()
32 # loss for fake images
33 z = torch.randn(bs, 100).view(-1, 100, 1,1) # 100 is input channels for G
---> 34 fake_img = G(z)
35 outputs = D(fake_img).view(-1,1,1,1)
36 d_loss_fake = criterion(outputs, fake_labels)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
475 result = self._slow_forward(*input, **kwargs)
476 else:
--> 477 result = self.forward(*input, **kwargs)
478 for hook in self._forward_hooks.values():
479 hook_result = hook(self, input, result)
<ipython-input-10-ff06956ca9a4> in forward(self, input)
42 self.tanh = nn.Tanh()
43 def forward(self, input):
---> 44 out = self.relu(self.bn1(self.deconv1(input)))
45 out = self.relu(self.bn2(self.deconv2(out)))
46 out = self.relu(self.bn3(self.deconv3(out)))
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
475 result = self._slow_forward(*input, **kwargs)
476 else:
--> 477 result = self.forward(*input, **kwargs)
478 for hook in self._forward_hooks.values():
479 hook_result = hook(self, input, result)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in forward(self, input, output_size)
689 return F.conv_transpose2d(
690 input, self.weight, self.bias, self.stride, self.padding,
--> 691 output_padding, self.groups, self.dilation)
692
693
RuntimeError: Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #2 'weight'
NVM solved, one of the variables wasn’t set to CUDA. I forgot there were two z’s in the training |
st101151 | Program fails while trying to broadcast a simple tensor - any idea why?
def checker(r):
    if rank == r:
        tensor = torch.tensor(0, device="cuda")
    else:
        tensor = torch.tensor(1, device="cuda")
    torch.distributed.broadcast(tensor, r)
nvidia-smi returns
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 396.37 Driver Version: 396.37 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Quadro GV100 Off | 00000000:1A:00.0 Off | Off |
| 38% 47C P2 40W / 250W | 2435MiB / 32508MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 Quadro GV100 Off | 00000000:67:00.0 Off | Off |
| 43% 52C P2 37W / 250W | 11MiB / 32508MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+ |
st101152 | Hello. I have a large network which is too large for some batch sizes. However, it can be decoupled into two sub-modules, each with its own loss for backprop.
Therefore, I wonder whether there is a way to do something like this:
subnet1.forward()
loss1 = calc_loss(subnet1)
loss1.backward()
get the gradient into grad1
optimizer.zero_grad()
subnet2.forward()
loss2 = calc_loss(subnet2)
loss2.backward()
get the gradient into grad2
optimizer.zero_grad()
collected_grad = grad1 + grad2
distribute the collected_grad to the parameters
The key step is to retrieve the gradients and then assign them back.
Can PyTorch accomplish that?
Thanks! |
st101153 | How are you planning on storing the gradient and reassigning it back?
If you just hold it on the GPU, the same amount of memory will be used.
I think you should check out torch.utils.checkpoint 9, which can be used to trade compute for memory. |
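A minimal torch.utils.checkpoint sketch, where subnet1 and subnet2 are placeholders for the two sub-modules:
import torch
from torch.utils.checkpoint import checkpoint

x = torch.randn(8, 128, requires_grad=True)
# activations inside subnet1/subnet2 are not stored; they are recomputed during backward
h = checkpoint(subnet1, x)
out = checkpoint(subnet2, h)
out.sum().backward()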
st101154 | I noticed that torch.SVD 25 and torch.inverse 179 are only defined over 2-dimensional tensors. Why is this the case? Is there no efficient way to implement a batch SVD or batch inverse for tensors with shapes (*, M,N) or (*,M,M), respectively? Or better yet, some way to specify the two axes to use.
Perhaps there’s some reason this is not well-parallelizable? |
st101155 | How would you compute a batch SVD? Anyway, can’t you just iterate over the batch dimension? |
st101156 | I’m not sure how I’d implement it, that’s why I asked the question. On a GPU I imagine a block would run the necessary SVD or inversion operations for each problem in the set. Is there something inherent to these matrix operations that prohibits this type of implementation?
As a reason why you wouldn’t want to just iterate over the batch: perhaps you have an image with a matrix representing each pixel. Then you have BxHxWxKxK, for example. You then might want to invert the KxK matrices for some reason, or run SVD over them. In this case iterating over each one would be absurdly slow, but the operations at each pixel are completely independent from one another, which lends itself to parallelization. |
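Until such a batched op exists, iterating over the flattened batch dimension is the usual workaround; an illustrative sketch (later PyTorch releases added batched support to torch.inverse):
import torch

x = torch.randn(4, 8, 8, 3, 3)                   # B x H x W x K x K
flat = x.view(-1, 3, 3)                          # collapse all leading dims
inv = torch.stack([torch.inverse(m) for m in flat]).view_as(x)
print(inv.shape)                                 # torch.Size([4, 8, 8, 3, 3])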
st101157 | Hi,
I’m trying to implement Population Based Training on GPU. Here I call multiple processes using torch.multiprocessing where each process trains the model with different hyperparameters (learning rate in my example). At regular intervals the accuracy is calculated and each process saves its model and optimizer parameters onto a shared memory space. This memory is managed by torch.multiprocessing.Manager().dict(). Here’s the initialization code:
if __name__ == "__main__":
    try:
        set_start_method('spawn')
    except RuntimeError:
        pass

    train_state_dict = mp.Manager().dict()
    val_acc_dict = mp.Manager().dict()
    net_acc_dict = mp.Manager().dict()
    print(torch.cuda.device_count())
    processes = []
    for rank in range(4):
        learning_rate = [0.01, 0.06, 0.001, 0.008]
        p = mp.Process(target=training_cifar_multi, \
                       args=(train_state_dict, val_acc_dict, net_acc_dict, rank, \
                             return_top_arg, learning_rate[rank]))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()
If any processes’s model’s accuracy is not in the top 20%, it, like any normal human being, caves in to societal pressure and copies the model and optimizer parameter from one of the models in the top 20%, and then tweaks the copied hyperparameters by a small amount to avoid being caught (jokes apart, that’s what it actually does, copy everything and then perturb the hyperparameters).
Here’s how model parameters are saved in any of the processes:
train_state_dict[name] = {'state_dict': model.state_dict(), 'optimizer':
optimizer.state_dict(), 'epoch':epoch}
Here’s how model parameters are loaded if the model in underperforming. Flag is the name of the process which is performing in the top 20%:
flag = return_top_arg(val_acc_dict, valid_accuracy)
if flag:
    model.load_state_dict(train_state_dict[flag]['state_dict'])
    optimizer.load_state_dict(train_state_dict[flag]['optimizer'])
    epoch = train_state_dict[flag]['epoch']
    for param_group in optimizer.param_groups:
        param_group['lr'] = (np.random.uniform(0.5, 2, 1)[0]) * param_group['lr']
However, I run into this error when the process tries to load the model:
Traceback (most recent call last):
File "/home/usr/anaconda2/envs/py36/lib/python3.6/multiprocessing/process.py", line 249, in _bootstrap
self.run()
File "/home/usr/anaconda2/envs/py36/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/home/usr/PBT/cifar_10.py", line 93, in training_cifar_multi
model.load_state_dict(train_state_dict[flag]['state_dict'])
File "<string>", line 2, in __getitem__
File "/home/usr/anaconda2/envs/py36/lib/python3.6/multiprocessing/managers.py", line 772, in _callmethod
raise convert_to_error(kind, result)
multiprocessing.managers.RemoteError:
---------------------------------------------------------------------------
Unserializable message: Traceback (most recent call last):
File "/home/usr/anaconda2/envs/py36/lib/python3.6/multiprocessing/managers.py", line 283, in serve_client
send(msg)
File "/home/usr/anaconda2/envs/py36/lib/python3.6/multiprocessing/connection.py", line 206, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "/home/usr/anaconda2/envs/py36/lib/python3.6/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "/home/usr/anaconda2/envs/py36/lib/python3.6/site-packages/torch/multiprocessing/reductions.py", line 104, in reduce_storage
metadata = storage._share_cuda_()
RuntimeError: invalid device pointer: 0x1020ec00000 at /opt/conda/conda-bld/pytorch_1501971235237/work/pytorch-0.1.12/torch/lib/THC/THCCachingAllocator.cpp:211
I’m not sure how to debug this. Can someone please help me out here? |
st101158 | I have the following error. I have put my cuDNN in some path and set $LD_LIBRARY_PATH with:
export LD_LIBRARY_PATH=`pwd`:$LD_LIBRARY_PATH
How can I find and solve the problem?
Traceback (most recent call last):
File "main.py", line 157, in <module>
train()
File "main.py", line 131, in train
output, hidden = model(data, hidden)
File "/data/disk1/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 202, in __call__
result = self.forward(*input, **kwargs)
File "/data/disk1/workbench/learn/pytorch/examples/word_language_model/model.py", line 28, in forward
output, hidden = self.rnn(emb, hidden)
File "/data/disk1/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 202, in __call__
result = self.forward(*input, **kwargs)
File "/data/disk1/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/nn/modules/rnn.py", line 81, in forward
return func(input, self.all_weights, hx)
File "/data/disk1/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/nn/_functions/rnn.py", line 235, in forward
return func(input, *fargs, **fkwargs)
File "/data/disk1/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/autograd/function.py", line 201, in _do_forward
flat_output = super(NestedIOFunction, self)._do_forward(*flat_input)
File "/data/disk1/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/autograd/function.py", line 223, in forward
result = self.forward_extended(*nested_tensors)
File "/data/disk1/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/nn/_functions/rnn.py", line 180, in forward_extended
cudnn.rnn.forward(self, input, hx, weight, output, hy)
File "/data/disk1/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/backends/cudnn/rnn.py", line 184, in forward
handle = cudnn.get_handle()
File "/data/disk1/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/backends/cudnn/__init__.py", line 337, in get_handle
handle = CuDNNHandle()
File "/data/disk1/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/backends/cudnn/__init__.py", line 128, in __init__
check_error(lib.cudnnCreate(ctypes.byref(ptr)))
File "/data/disk1/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/backends/cudnn/__init__.py", line 324, in check_error
raise CuDNNError(status)
torch.backends.cudnn.CuDNNError: 6: CUDNN_STATUS_ARCH_MISMATCH
Exception ctypes.ArgumentError: "argument 1: <type 'exceptions.TypeError'>: Don't know how to convert parameter 1" in <bound method CuDNNHandle.__del__ of <torch.backends.cudnn.CuDNNHandle instance at 0x7fa7707dd5f0>> ignored |
st101159 | I use cuDNN v5 (May 12, 2016) for CUDA 7.5.
My CUDA version is:
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2015 NVIDIA Corporation
Built on Tue_Aug_11_14:27:32_CDT_2015
Cuda compilation tools, release 7.5, V7.5.17
How can I find more details about the cudnn? |
st101160 | Hi @apaszke , I have got the following Error:
torch.backends.cudnn.version()
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-3-de1bb2d5285f> in <module>()
----> 1 torch.backends.cudnn.version()
/global-hadoop/home/ckyn/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/backends/cudnn/__init__.pyc in version()
73 def version():
74 if not lib:
---> 75 raise RuntimeError("cuDNN not initialized")
76 if len(__cudnn_version) == 0:
77 __cudnn_version.append(lib.cudnnGetVersion())
RuntimeError: cuDNN not initialized
Since I don’t have root privileges, I copied the system cuda folder into my own place and set the CUDA_ROOT and CUDA_HOME variables to that path. Afterwards, I copied the cudnn files into the path following this answer (http://stackoverflow.com/questions/39262468/installing-cudnn-for-theano-without-root-access 27). Is there any suggestion I can take for this situation? |
st101161 | Right, sorry. Do this please:
print(torch.backends.cudnn.is_acceptable(torch.cuda.FloatTensor(1)))
print(torch.backends.cudnn.version()) |
st101162 | Hi, @apaszke, I have tried the code and I got:
In [1]: import torch
In [2]: print(torch.backends.cudnn.is_acceptable(torch.cuda.FloatTensor(1)))
...: print(torch.backends.cudnn.version())
...:
True
5005
Now the error code change to:
torch.backends.cudnn.CuDNNError: 6: CUDNN_STATUS_ARCH_MISMATCH
Exception ctypes.ArgumentError: "argument 1: <type 'exceptions.TypeError'>: Don't know how to convert parameter 1" in <bound method CuDNNHandle.__del__ of <torch.backends.cudnn.CuDNNHandle instance at 0x7f4b9099a320>> ignored
I just found that my GPU is a Tesla M2075. I found a similar issue in Caffe, saying that cuDNN requires a newer GPU architecture than plain CUDA. Is it not supported on this Tesla? Can I run the sample code with only CUDA instead of cuDNN? |
st101163 | M2075 is a Fermi architecture card; cuDNN is not supported on it. You can disable cuDNN by setting torch.backends.cudnn.enabled = False. But you can expect only very modest speed-ups with such an old card. |
st101164 | @ngimel, thanks for your help. However, another problem was encountered.
THCudaCheck FAIL file=/data/users/soumith/miniconda2/conda-bld/pytorch-0.1.9_1487343590888/work/torch/lib/THC/generated/../generic/THCTensorMathPointwise.cu line=246 error=8 : invalid device function
Traceback (most recent call last):
File "main.py", line 157, in <module>
train()
File "main.py", line 131, in train
output, hidden = model(data, hidden)
File "/data/disk1/ckyn/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 202, in __call__
result = self.forward(*input, **kwargs)
File "/data/disk1/ckyn/workbench/learn/pytorch/examples/word_language_model/model.py", line 28, in forward
output, hidden = self.rnn(emb, hidden)
File "/data/disk1/ckyn/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 202, in __call__
result = self.forward(*input, **kwargs)
ons/rnn.py", line 138, in forward
nexth, output = func(input, hidden, weight)
File "/data/disk1/ckyn/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/nn/_functions/rnn.py", line 67, in forward
hy, output = inner(input, hidden[l], weight[l])
File "/data/disk1/ckyn/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/nn/_functions/rnn.py", line 96, in forward
hidden = inner(input[i], hidden, *weight)
File "/data/disk1/ckyn/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/nn/_functions/rnn.py", line 22, in LSTMCell
gates = F.linear(input, w_ih, b_ih) + F.linear(hx, w_hh, b_hh)
File "/data/disk1/ckyn/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/autograd/variable.py", line 752, in __add__
return self.add(other)
File "/data/disk1/ckyn/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/autograd/variable.py", line 292, in add
return self._add(other, False)
File "/data/disk1/ckyn/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/autograd/variable.py", line 286, in _add
return Add(inplace)(self, other)
File "/data/disk1/ckyn/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/autograd/_functions/basic_ops.py", line 13, in forward
return a.add(b)
RuntimeError: cuda runtime error (8) : invalid device function at /data/users/soumith/miniconda2/conda-bld/pytorch-0.1.9_1487343590888/work/torch/lib/THC/generated/../generic/THCTensorMathPointwise.cu:246
Is there any idea about this? |
st101165 | The PyTorch binaries are not built for your architecture:
https://github.com/pytorch/builder/blob/master/conda/pytorch-0.1.9/build.sh#L5 30
(yours is compute capability 2.0). Try compiling from source, but that too may fail as your card is very old. Even if it does not fail, still expect very small speed-ups (if any). |
st101166 | I get a warning message about cuDNN when I run the official example mnist.py 24. The message is
“/usr/local/lib/python3.5/dist-packages/torch/backends/cudnn/init.py:57: UserWarning: cuDNN library not found. Check your LD_LIBRARY_PATH
}.get(sys.platform, ‘LD_LIBRARY_PATH’)))”
Sure, if I export the cuDNN library path in LD_LIBRARY_PATH I can get rid of this warning message.
However, on Linux (e.g. Ubuntu 16.xx) we use dynamic linker run-time bindings (use ldconfig to
make the proper config), and usually there is no need to use the environment variable LD_LIBRARY_PATH.
Can we make it work this way? |
st101167 | We can’t, because some Python bindings load cuDNN dynamically using ctypes, and it has to find it somehow. But we could save the path to the place where cuDNN was found during install. |
st101168 | The problem is not ctypes (it looks in ld cache) and not ld cache per se. The problem is that ld cache typically contains libname.so.MAJOR (verify this with ldconfig -p), and for cudnn pytorch tries to load libcudnn.so.MAJOR.MINOR.PATCH. Try adding libcudnn.so.MAJOR.MINOR.PATCH to your ld cache (ldconfig -l may be?) |
st101169 | Thanks for pointing this out. I tried “ldconfig -l /usr/local/cuda-8.0/lib64/libcudnn.so.5.1.5”;
it seems it doesn’t work (/etc/ld.so.cache doesn’t change), though there is no error message.
“man ldconfig” doesn’t give details or example usage for option -l, and it says
“Intended for use by experts only”. So, I’m not an expert (indeed). |
st101170 | Hi, @apaszke
There is a cuDNN 5.0 lib on my PC, however I got the warning:
UserWarning: PyTorch was compiled without cuDNN support. To use cuDNN, rebuild PyTorch making sure the library is visible to the build system.
"PyTorch was compiled without cuDNN support. To use cuDNN, rebuild "
How do I build PyTorch with cuDNN support?
cudnn.h is in /usr/local/cuda-8.0/include/cudnnv5/ and cudnn.so.5 is in /usr/local/cuda-8.0/lib64/cuDNNv5/. The path has been added to the system environment variables:
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64/cuDNNv5:$LD_LIBRARY_PATH
I built PyTorch from the source code:
cd pytorch-root/ & python setup.py install |
st101171 | Hi – I updated my Pytorch version to the latest from source, and the backpropagation code for WGAN 7 now gives the error “Trying to backward through the graph a second time…”
Here is the code for updating the discriminator:
self.D.zero_grad()
d_real_pred = self.D(real_data)
d_real_err = torch.mean(d_real_pred) #want to push d_real as high as possible
d_real_err.backward(one_neg)
z_input = to_var(torch.randn(self.batch_size, 128))
d_fake_data = self.G(z_input).detach()
d_fake_pred = self.D(d_fake_data)
d_fake_err = torch.mean(d_fake_pred) #want to push d_fake as low as possible
d_fake_err.backward(one)
gradient_penalty = self.calc_gradient_penalty(real_data.data, d_fake_data.data)
gradient_penalty.backward()
d_err = d_fake_err - d_real_err + gradient_penalty
self.D_optimizer.step()
For calculating the gradient penalty:
def calc_gradient_penalty(self, real_data, fake_data):
    alpha = torch.rand(self.batch_size, 1, 1)
    alpha = alpha.expand_as(real_data)
    alpha = alpha.cuda() if self.use_cuda else alpha

    interpolates = alpha * real_data + ((1 - alpha) * fake_data)
    interpolates = interpolates.cuda() if self.use_cuda else interpolates
    interpolates = autograd.Variable(interpolates, requires_grad=True)

    disc_interpolates = self.D(interpolates)

    gradients = autograd.grad(outputs=disc_interpolates, inputs=interpolates,
                              grad_outputs=torch.ones(disc_interpolates.size()).cuda() \
                              if self.use_cuda else torch.ones(disc_interpolates.size()),
                              create_graph=True, retain_graph=True, only_inputs=True)[0]

    gradient_penalty = self.lamda * ((gradients.norm(2, 1).norm(2, 1) - 1) ** 2).mean()  # norm 2 times
    return gradient_penalty
Any guidance on what might be causing this error? |
st101172 | Hi,
This part of the code looks ok.
Could you share what is self.D?
Also where exactly is the error raised? |
st101173 | Hey, I think I have the same issue, where self.D is a discriminator net that looks something like:
class DiscriminatorNet(torch.nn.Module):
    """
    A discriminative neural network
    """
    def __init__(self):
        super(DiscriminatorNet, self).__init__()
        n_out = 1
        self.hidden0 = nn.Sequential(
            nn.Conv1d(in_channels=4, out_channels=100, kernel_size=1)
        )
        self.hidden1 = nn.Sequential(
            ResBlock(DIM),
            ResBlock(DIM),
            ResBlock(DIM),
            ResBlock(DIM),
            ResBlock(DIM)
        )
        self.out = nn.Sequential(
            nn.Linear(DIM * L, n_out),
        )

    def forward(self, x):
        # transpose x to match size of layers
        x = x.permute(0, 2, 1)
        x = self.hidden0(x)
        x = self.hidden1(x)
        x = x.view(-1, DIM * L)
        x = self.out(x)
        return x
And the error occurs at the line gradient_penalty.backward() |
st101174 | I would like to fine-tune a subset of the parameters, so I set the requires_grad flag of the parameters that I want to keep fixed to False. But I find it does not work; it still optimizes all parameters. |
st101175 | That is a good question! My model has two branches which only share shallow layers. When I fine-tune the second branch, I set the requires_grad flag of the first branch’s parameters to False. Then I find the output of the first branch is not the same when testing. |
st101176 | Is it possible to share some code to reproduce this?
Also, please mention which pytorch version are you using. |
st101177 | Hi!
So my pdb output says it all, but just to be clear:
I am trying to multiply two Tensors. According to the documentation here, this should work:
http://pytorch.org/docs/notes/broadcasting.html 7
I’ve tried + too and get the same error, which makes me confused… otherwise the framework is really the best I’ve seen so far.
So… I’m confused. Is broadcasting not supported yet, even though it appears in the documentation? When will it be supported? What should I do in the meantime? I know that I can use Act_weights.expand_as(Act), but that seems a bit awkward code-wise. Also, there are other solutions which seem to rely even more on the underlying code, so I wonder if those solutions are slow…
Thanks in advance!
(Pdb) Act
Variable containing:
0.0000 0.7156
0.0000 0.7219
0.0000 3.1095
[torch.FloatTensor of size 3x2]
(Pdb) Act_weights
Variable containing:
0.5000 0.5000
[torch.FloatTensor of size 1x2]
(Pdb) Act.size()
torch.Size([3, 2])
(Pdb) Act_weights.size()
torch.Size([1, 2])
(Pdb) Act*Act_weights
*** RuntimeError: inconsistent tensor size at /b/wheel/pytorch-src/torch/lib/TH/generic/THTensorMath.c:846
(Pdb) Act.data.numpy()*Act_weights.data.numpy()
array([[ 0. , 0.35781485],
[ 0. , 0.3609345 ],
[ 0. , 1.5547576 ]], dtype=float32) |
st101178 | It will be possible soon:
[announcement] those of you who use the master branch, breaking changes incoming
Dear PyTorch users,
Most of you use our stable releases. Our current stable release is v0.1.2
However, some of you use the master branch of PyTorch.
We wanted to give those of you who use the master branch a heads-up about some breaking changes that will be merged starting today.
These breaking changes are because we will be introducing NumPy-like Broadcasting into PyTorch (See PR#1563).
We will be releasing a comprehensive set of backward-compatibility warnings and codemod mechanisms in v0… |
st101179 | Yeah, it confuses me too! But my colleague can do this… maybe my version is old |
st101180 | root = "/home/hu/.PyCharmCE2018.2/config/scratches/data/corel_5k/"
best_F1 = 0
lr = 0.001
step = 0
viz = visdom.Visdom()
# 定义是否使用GPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# 参数设置,使得我们能够手动输入命令行参数,就是让风格变得和Linux命令行差不多
parser = argparse.ArgumentParser(description='PyTorch CIFAR100 Training')
parser.add_argument('--resume', default='', type=str, metavar='PATH',
help='path to latest checkpoint (default:none)')
parser.add_argument('--epochs', default=160, type=int, metavar='N',
help='number of total epochs to run')
parser.add_argument('--start_epoch', default=0, type=int, metavar='N',
help='number of total epochs to run')
parser.add_argument('-e', '--evaluate', dest='evaluate',
help='evaluate model on validation set')
args = parser.parse_args()
resnet50 = models.resnet50(pretrained=True)
resnet50.conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3, bias=False)
resnet50.fc = nn.Linear(2048, 50)
def feature_layer():
    layers = []
    for name, layer in resnet50._modules.items():
        if isinstance(layer, nn.Conv2d):
            layers += []
        else:
            continue
    features = nn.Sequential(*layers)
    return features

class net(nn.Module):
    def __init__(self):
        super(net, self).__init__()
        self.features = feature_layer()

    def forward(self, x):
        x = self.features(x)
        return x
model = net().to(device)
if device == 'cuda':
    model = torch.nn.DataParallel(model)
    cudnn.benchmark == True

pretrained_dict = resnet50.state_dict()
model_dict = model.state_dict()
pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict}
model_dict.update(pretrained_dict)
model.load_state_dict(model_dict)
class LoadDataset():
    def __init__(self, txt, transform):
        self.txt = txt
        fh = open(self.txt, 'r')
        imgs = []
        self.transform = transform
        for line in fh:
            line = line.strip('\n')
            line = line.rstrip()
            words = line.split()
            image1 = words[0]
            image1 = int(image1)
            image1 = image1 // 1000
            image1 = image1 * 1000
            image1 = '%d' % image1
            imageList = root + 'images/' + image1 + '/' + words[0] + '.jpeg'
            words.pop(0)
            lableList = list(map(int, words))
            lableList = np.array(lableList)
            lableList = torch.from_numpy(lableList)
            imgs.append((imageList, lableList))
        self.imgs = imgs

    def __getitem__(self, item):
        image, label = self.imgs[item]
        image = Image.open(image)
        img = transform(image)
        return img, label

    def __len__(self):
        return len(self.imgs)
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.3865, 0.3995, 0.3425), (0.2316, 0.2202, 0.2197)),
])

trainset = LoadDataset(txt=root + 'labels/training_label', transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True, num_workers=2)
valset = LoadDataset(txt=root + 'labels/val_label', transform=transform)
valloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True, num_workers=2)

x, train_F1, test_F1 = 0, 0, 0
win = viz.line(
    X=np.array([x]),
    Y=np.column_stack((np.array([train_F1]), np.array([test_F1]))),
    opts=dict(
        legend=["train_F1", "test_F1"]
    )
)
def main():
    global args, best_prec1, lr
    args = parser.parse_args()
    print("=> loading checkpoint '{}'".format('model_best.pth.tar'))
    checkpoint = torch.load('model_best.pth.tar')
    args.start_epoch = checkpoint['epoch']
    best_F1 = checkpoint['best_F1']
    model.load_state_dict(checkpoint['state_dict'])
    optimizer.load_state_dict(checkpoint['optimizer'])
    print("=> loaded checkpoint '{}' (epoch {})"
          .format('model_best.pth.tar', checkpoint['epoch']))
    criterion = nn.CrossEntropyLoss().cuda()
    optimizer = torch.optim.Adam(list(model.parameters()), lr=lr)
    if args.evaluate:
        validate(valloader, model, criterion)
        return
    for epoch in range(args.start_epoch, args.epochs):
        print("epoch = %d" % epoch)
        adjust_LR(optimizer, epoch)  # adjust learning_rate
        train_loss, train_F1 = train(trainloader, model, criterion, optimizer, epoch)
        test_loss, test_F1 = validate(valloader, model, criterion)
        is_best = test_F1 > best_F1
        best_F1 = max(test_F1, best_F1)
        viz.line(
            X=np.array([epoch]),
            Y=np.column_stack((np.array([train_F1]), np.array([test_F1]))),
            win=win,
            update="append"
        )
        save_checkpoint({
            'epoch': i + 1,
            'state_dict': model.state_dict(),
            'best_F1': best_F1,
            'optimizer': optimizer.state_dict(),
        }, is_best)
def save_checkpoint(state, is_best, filename='checkpoint.pth.tar'):
    torch.save(state, filename)
    if is_best:
        shutil.copyfile(filename, 'model_best.pth.tar')

def adjust_LR(optimizer, epoch):
    lr = lr * (0.1 ** (epoch // 40))
    for param_group in optimizer.param_group:
        param_group['lr'] = lr
def train(trainloader, model, criterion, optimizer, epoch):
    model.train()
    for i, (input, target) in enumerate(trainloader):
        step += 1
        input = input.to(device)
        if torch.cuda.is_available():
            target = target.cuda()
        output = model(input)
        loss = criterion(output, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        F1 = metrics.f1_score(target, output, average='weighted')
        print('train : step = %d, F1 = %.3f' % (step, loss, F1))
    return loss, F1
def validate(valloader, model, criterion):
    model.eval()
    f1_total = 0
    loss_total = 0
    total = 0
    for i, (input, target) in enumerate(valloader):
        input = input.to(device)
        if torch.cuda.is_available():
            target = target.cuda()
        output = model(input)
        loss = criterion(output, target)
        f1 = metrics.f1_score(target, output, average='weighted')
        loss_total += loss
        f1_total += f1
        total += 1
    loss = loss_total / total
    f1 = f1_total / total
    print('val: test_loss = %.4f, test_F1' % (loss, f1))
    return loss, f1
print("start test")
testset = LoadDataset(txt=root + 'labels/test_label', transform=transform)
testloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True, num_workers=2)
criterion = nn.CrossEntropyLoss().cuda()
optimizer = torch.optim.Adam(list(model.parameters()), lr=lr)
f1_total = 0
loss_total = 0
total = 0
for i, (input, target) in enumerate(testloader):
input = input.to(device)
if torch.cuda.is_available():
target = target.cuda()
output = model(input)
loss = criterion(output, target)
    f1 = metrics.f1_score(target.cpu().numpy(), output.argmax(dim=1).cpu().numpy(), average='weighted')
loss_total += loss
f1_total += f1
total += 1
loss = loss_total / total
f1 = f1_total / total
print('test: test_loss = %.4f, test_F1 = %.4f' % (loss, f1))
Above is my code. When I ran it, it told me:
Traceback (most recent call last):
  File "/home/hu/下载/Corel5k (2).py", line 231, in <module>
    optimizer = torch.optim.Adam(list(model.parameters()), lr=lr)
  File "/home/hu/.local/lib/python3.6/site-packages/torch/optim/adam.py", line 41, in __init__
    super(Adam, self).__init__(params, defaults)
  File "/home/hu/.local/lib/python3.6/site-packages/torch/optim/optimizer.py", line 38, in __init__
    raise ValueError("optimizer got an empty parameter list")
ValueError: optimizer got an empty parameter list
How can I fix it?
I appreciate any answers! |
st101181 | Solved by Ranahanocka in post #4
The optimizer is optimizing the model weights, so there need to be some parameters to optimize. Indeed, a conv or linear layer would work fine. Did that solve your problem? |
st101182 | Git-oNmE:
layers = []
for name, layer in resnet50._modules.items():
    if isinstance(layer, nn.Conv2d):
        layers += []
    else:
        continue
you have not included any trainable layers in your model. |
st101183 | Thank you for your quick answer!
So should I add some conv layers in __init__?
class net(nn.Module):
    def __init__(self):
        super(net, self).__init__()
        self.features = feature_layer()
    def forward(self, x):
        x = self.features(x)
        return x
I'm a beginner, so the question might be kind of stupid, pardon :) |
st101184 | The optimizer is optimizing the model weights, so there need to be some parameters to optimize. Indeed, a conv or linear layer would work fine. Did that solve your problem? |
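For example, a minimal sketch of that idea (the ResNet-50 backbone, the 2048 input features, and num_classes are assumptions, not your exact feature_layer): a model whose parameters() is non-empty because it owns a trainable Linear head.

import torch
import torch.nn as nn
from torchvision import models

resnet50 = models.resnet50(pretrained=True)
backbone = nn.Sequential(*list(resnet50.children())[:-1])  # conv stages + global average pooling

class Net(nn.Module):
    def __init__(self, num_classes=10):  # num_classes is a placeholder
        super(Net, self).__init__()
        self.features = backbone                         # feature extractor (could be frozen)
        self.classifier = nn.Linear(2048, num_classes)   # trainable layer -> non-empty parameters()

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), -1)
        return self.classifier(x)

model = Net()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # no longer raises "empty parameter list"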
st101185 | I am conducting a survey on distributed training. What type of all-reduce algorithm does pytorch use for distributed training?
Facebook mentioned using a scatter-gather + all-gather approach (the halving-doubling algorithm). I'm curious to see what pytorch supports, and hopefully someone can shed light on why a certain method is more popular. |
st101186 | I have a datasource with 5 parameters as input and 1 output; the input is a 100x5 matrix and the output is 100x1.
How should I set my Net's batch size to train on this data? |
st101187 | I don't really understand what you said; in pytorch you have to add an additional dimension corresponding to the batch size (typically dim 0).
You can use torch.unsqueeze to add this dimension to a tensor. |
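For instance (the shapes here are just illustrative):

import torch

x = torch.randn(5)     # one sample with 5 features, shape [5]
x = x.unsqueeze(0)     # add the batch dimension at dim 0
print(x.shape)         # torch.Size([1, 5])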
st101188 | I mean the batch for the input is 5 and the batch for the output is 1,
because my network takes 5 inputs to get 1 output. |
st101189 | As far as I understand your use case, you have 5 input features and 1 output feature.
You also have 100 samples in your dataset.
As @JuanFMontesinos mentioned you might define the batch size as you want. E.g. if you set your batch size to 5, you would have input of [5, 5] and an output of [5, 1]. |
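A small sketch of that setup, with random data standing in for your 100x5 inputs and 100x1 targets:

import torch
from torch.utils.data import TensorDataset, DataLoader

X = torch.randn(100, 5)   # 100 samples, 5 input features
y = torch.randn(100, 1)   # 100 samples, 1 output feature
loader = DataLoader(TensorDataset(X, y), batch_size=5, shuffle=True)

xb, yb = next(iter(loader))
print(xb.shape, yb.shape)  # torch.Size([5, 5]) torch.Size([5, 1])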
st101190 | I can see many tutorials about freezing layers or freezing all the weights in a layer, but I would like to freeze only a subset of weights. For example, if the kernel size is 7x7 I would like to train only the 5x5 block starting from the top left and freeze the bottom-right 2x2. I would like to declare it as 7x7 rather than 5x5. Is it possible to freeze those 2x2 weights? |
st101191 | See this post:
Freezing part of the Layer Weights
Hello Everyone,
How could I freeze some parts of the layer weights to zero and not the entire layer.
I tried below code, but it doesn’t freeze the specific parts(1:10 array in 2nd dimension) of the layer weights.
I am new to ML & started with Pytorch. Appreciate any help. Thanks.
for child in model_ft.children():
    print("Freezing Parameters(1->10) on the Convolution Layer", child)
    for param in child.parameters():
        param.data[:,1:10,:,:].zero_()
        param.data[:,1:10,:,:].requires_grad = False
o…
Essentially, you just don't update those specific parameters' grad. |
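For the 7x7 case above, a minimal sketch (not taken from the linked post) that zeroes the gradient of the bottom-right 2x2 corner of every kernel with a hook, so those weights are never updated:

import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=7)

def mask_grad(grad):
    grad = grad.clone()
    grad[:, :, 5:, 5:] = 0   # bottom-right 2x2 of each 7x7 kernel receives no update
    return grad

conv.weight.register_hook(mask_grad)

conv(torch.randn(1, 3, 32, 32)).sum().backward()
print(conv.weight.grad[0, 0, 5:, 5:])  # all zeros, so optimizer.step() leaves that corner unchanged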
st101192 | I need to sort a 2D tensor and apply the same sorting to a 3D tensor.
I tried direct indexing, but that does not work with multidimensional indexes. I also tried gather, but it does not work because the index and the source do not have the same dimensionality.
Any ideas?
t = torch.tensor([[1,3,2],
[4,2,3]])
t2 = torch.tensor([[[1.1,1.2],[3.1,3.2],[2.1,2.2]],
[[4.1,4.2],[2.1,2.2],[3.1,3.2]]])
t2sorted = torch.tensor([[[1.1,1.2],[2.1,2.2],[3.1,3.2]],
[[2.1,2.2],[3.1,3.2],[4.1,4.2]]])
s,idx = t.sort(-1)
# direct indexing
npt.assert_equal(t2sorted,t2[idx].numpy())
# RuntimeError: invalid argument 2: out of range: 12 out of 12 at c:\programdata\miniconda3\conda-bld\pytorch-cpu_1524541161962\work\aten\src\th\generic/THTensorMath.c:430
# gather
npt.assert_equal(t2sorted,t2.gather(1,idx).numpy())
# RuntimeError: invalid argument 4: Index tensor must have same dimensions as input tensor at c:\programdata\miniconda3\conda-bld\pytorch-cpu_1524541161962\work\aten\src\th\generic/THTensorMath.c:581
# if t2 has same shape as t it gather will work
t2 = torch.tensor([[2,4,3],
[5,3,4]])
s,idx = t.sort(-1)
npt.assert_equal(s,(t2.gather(1,idx)-1).numpy()) |
st101193 | You need to sort each of the nested arrays first, and then sort by the first slice. Assuming the t2 sub-arrays are already sorted:
sort_by = np.argsort(t2[:, 0])
t2sorted = t2[sort_by] |
st101194 | Thanks for looking into this!
Your example is not quite what I need to do. My sort order for the first two dimensions is already defined by the sort order of t stored in the idx. I need to apply idx to t2 with the goal of getting t2sorted. t2sorted in the example is the expected result for purpose of testing. |
st101195 | You need to extract the slice from t2 that contains: 1.1,3.1,2.1,4.1,2.1,3.1 and then sort by these. I didn’t check my code |
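A sketch of one way to write that, reusing the idx from t.sort() in the question (it encodes the same ordering as argsorting that slice):

import torch

t = torch.tensor([[1, 3, 2],
                  [4, 2, 3]])
t2 = torch.tensor([[[1.1, 1.2], [3.1, 3.2], [2.1, 2.2]],
                   [[4.1, 4.2], [2.1, 2.2], [3.1, 3.2]]])

s, idx = t.sort(-1)
# expand idx to match t2's trailing dimension, then gather along dim 1
t2sorted = t2.gather(1, idx.unsqueeze(-1).expand(-1, -1, t2.size(-1)))
print(t2sorted)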
st101196 | Here is the solution I came up with. It is ugly but does not need a loop:
nLastDim = t2.shape[-1]
nLast2Dim = t2.shape[-2]
nLast3Dim = t2.shape[-3]
lastDimCounter = torch.arange(0,nLastDim,dtype=torch.long)
last3DimCounter = torch.arange(0,nLast3Dim,dtype=torch.long)
t2 = t2.reshape(-1)[(idx*nLastDim+(last3DimCounter*nLastDim*nLast2Dim).unsqueeze(-1)).unsqueeze(-1).expand(-1,-1,nLastDim) + lastDimCounter] |
st101197 | shape returns a torch.Size (a tuple), so you can do that in one call.
I.e.,
ndim = t2.shape
Where ndim[0] = nLast3Dim in your code.
You can use the sort function on the slice, as I mentioned, for a cleaner implementation. If you still need help I'll do it when I get to a computer. |
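In code, that single call could look like this (t2 here is just a stand-in tensor):

import torch

t2 = torch.randn(2, 3, 2)
nLast3Dim, nLast2Dim, nLastDim = t2.shape  # one unpacking instead of three separate lookups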
st101198 | I'm implementing a VAE (variational autoencoder) with a prior different from a unit Gaussian. Unfortunately I keep getting NaNs after every few epochs. I have tried reducing the learning rate and clipping gradients.
Any help would be greatly appreciated. Thanks. |
st101199 | First, can you make sure there are no NaNs in your data? Sometimes this can accidentally happen during data augmentation.
I.e., just before the forward pass, please print:
torch.sum(torch.isnan(data)) |
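For example, a tiny self-contained sketch of that check inside a loop (the batches are stand-ins):

import torch

loader = [(torch.randn(4, 3), torch.zeros(4)) for _ in range(3)]  # stand-in batches
loader[1][0][0, 0] = float('nan')                                 # inject a NaN to show the check firing

for step, (data, target) in enumerate(loader):
    n_bad = torch.sum(torch.isnan(data)).item()
    if n_bad > 0:
        print('batch %d contains %d NaN values' % (step, n_bad))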