id | text
---|---
st119568
|
Yes, the type of the coco caption is unicode (not string), which is what causes this problem. [torch.utils.data.CocoCaptions](https://github.com/pytorch/vision/blob/master/torchvision/datasets/coco.py#L20) returns img and target. target is a unicode object.
I solved this problem by converting the return type from unicode to string.
There are two possible options to handle this problem:
i) Add code in torch.utils.data.CocoCaptions to return target as a string type.
ii) Add code in default_collate to handle the case of a unicode object (the problem is that the function currently treats unicode as a collections.Iterable).
Could I open a pull request to solve this problem?
|
st119569
|
I think we should support unicode in default_collate. But this will need an additional guard on sys.version_info[0] < 3, because there's no such thing as a unicode object in Python 3. I'll open an issue; PRs are welcome.
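A minimal sketch of the kind of guard meant here (illustrative only, not the code from the eventual PR; the helper name is made up):
import sys
from torch.utils.data.dataloader import default_collate  # location may vary by version

def collate_with_unicode(batch):
    # On Python 2, unicode is its own type; treat it like str instead of
    # letting it fall into the generic collections.Iterable branch.
    if sys.version_info[0] < 3 and isinstance(batch[0], unicode):  # noqa: F821
        return batch
    return default_collate(batch)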
|
st119570
|
Thanks, I added the code and tested that it works well in Python 2.7.
I have opened pull request below.
github.com/pytorch/pytorch
Support unicode objects in default_collate (Issue #826)
pytorch:master ← yunjey:patch-3, opened Feb 23, 2017 by yunjey (+4 −0)
|
st119571
|
Hey guys,
EDIT: SEE BELOW
I’m working on using FP16 in DenseNets where concatenation is a key part of the architecture, but since HalfTensors don’t support stateless methods I can’t call cat().
I’ve thrown a workaround in by instantiating a temporary variable to store the concatenated results and then just dumping to it by indexing, but then I get an error calling backwards() that looks like it comes from the final layer:
torch.mm(grad_output, weight)
RuntimeError: Type HalfTensor doesn’t implement stateless methods
Is there an easy way to work around this, or is training in half precision not supported yet?
I'm on CUDA 8.0, cuDNN 5105, Ubuntu 14.04, pytorch1.9 (the latest version that you get through conda install) and a Maxwell Titan X. I'm in a situation where memory costs are more important than speed, so I can eat the slowdown of FP16 if it reduces memory usage.
Aside, I get an error when calling torch.backends.cudnn.version() if I don’t call torch.backends.cudnn._loadlib() first (the “cuDNN not initialized” error). cuDNN still works, but when trying to check if a source build has succeeded this can be confusing.
EDIT: Nevermind, it looks like nothing else other than F.linear has this issue, so switching over to the state’d version (three edits) fixed this and is now running smoothly and quickly in FP16. Dongers up for pytorch, woo!
Probably would still make sense to add in a stateless cat() method, though, I suspect the way I’m doing it is inefficient.
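For reference, a rough sketch of the preallocate-and-copy workaround (shapes here are just illustrative):
a = torch.randn(4, 8).cuda().half()
b = torch.randn(4, 8).cuda().half()
out = a.new(4, 16)   # preallocate the concatenated result
out[:, :8].copy_(a)  # fill by slicing/indexing instead of calling the stateless cat()
out[:, 8:].copy_(b)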
Thanks,
Andy
|
st119572
|
if you could give a repro for the F.linear issue, we’re happy to fix this asap. Open an issue on pytorch/pytorch with the repro.
|
st119573
|
Done, and I've included a link to my fix.
Any ideas for a better workaround for the cat() issue would also be appreciated (I'm guessing the reason tensors don't have a tensor.cat() method attached to them is that it would change their shape/memory size?)
|
st119574
|
It’s hard to say what would be a good choice of semantics for tensor.cat() (should it modify the tensor? should it be included in the list?), so we’d rather stick to the stateless one. If the method is implemented in THC, then exposing it is a matter of changing one line in our C wrappers. Otherwise, it will require some changes in the backends.
|
st119575
|
Is there any method that can easily be used to index a one-dimensional tensor with a tensor?
For example:
x = torch.range(0, 24)
y = torch.Tensor([[1, 2], [3, 4]])
I want to get something like x[y] = [[1, 2], [3, 4]]
But PyTorch tensors do not accept an index matrix of a different shape.
|
st119576
|
We’ll have to add support for this kind of indexing too. For now you have to use gather.
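For example, a rough sketch of emulating x[y] with gather: flatten the indices, gather along dim 0, then restore the index shape.
x = torch.range(0, 24)                  # 1-D source tensor (25 elements)
y = torch.LongTensor([[1, 2], [3, 4]])  # 2-D index tensor
out = torch.gather(x, 0, y.view(-1)).view(y.size())
# out is [[1, 2], [3, 4]]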
|
st119577
|
My original model as below:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.encoder = Encoder()
        self.decoder = Decoder()

    def forward(self, x):
        x = self.encoder(x)
        x = F.dropout(x, p=0.2)
        x = self.decoder(x)
        return x
Note that the encoder and decoder consist of multiple modules, respectively.
After saving it and loading it, I want to make the new model by removing the decoder part and adding the classifier.
net = torch.load('net.pth')
class classifier(nn.Module):
    def __init__(self):
        super(classifier, self).__init__()
        self.encoder = nn.Sequential(*list(net.encoder.children()))
        self.fc = nn.Linear(enc_dim, num_class)

    def forward(self, x):
        x = self.encoder(x)
        x = F.dropout(x, p=0.5)
        x = self.fc(x)
        x = F.log_softmax(x)
        return x
When I execute the script, I get the following error:
THCudaCheck FAIL file=torch/csrc/cuda/Module.cpp line=80 error=10 : invalid device ordinal
Can you please tell me how I can solve this error?
Also, when I use DataParallel, I think
nn.Sequential(*list(ae_net.encoder.children()))
needs to be modified to something like
nn.Sequential(*list(ae_net.module.encoder.children()))
Is that right?
|
st119578
|
invalid device ordinal points to you trying to use a CUDA device id that doesn't exist, for example trying to use id=2 on a machine with only 2 GPUs (0 and 1).
|
st119579
|
I do not understand your explanation. Where can I add the GPU id? I have 4 GPUs, but at this time I just use one via CUDA_VISIBLE_DEVICES.
|
st119580
|
you must’ve tried to do:
CUDA_VISIBLE_DEVICES=4 python example.py
CUDA_VISIBLE_DEVICES is 0-indexed, so if you have 4 GPUs, then valid values are 0, 1, 2 or 3.
pytorch (actually CUDA) is complaining that you gave an invalid device id to use
|
st119581
|
@Seungyoung_Park yes, you need to add the additional .module if you saved a model wrapped in DataParallel. BTW, why are you doing that: nn.Sequential(*list(net.encoder.children()))? This will create a sequential with all modules that are already in net.encoder, so why not change it to self.encoder = net.encoder directly?
|
st119582
|
No, I tried it using CUDA_DEVICES_VISIBLE=1. But maybe I used a different GPU when the model was loaded.
|
st119583
|
@Seungyoung_Park yes, you need to unpack the original module from the data parallel
|
st119584
|
I can't seem to find it in the PyTorch documentation anywhere; however, it seems to exist here: https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/container.py
|
st119585
|
It seems that it hasn't been added to the docs. I'll do that when I send some for ModuleList. Thanks for the report!
|
st119586
|
Thanks @apaszke, the reason I am asking is because I am trying to add a "reshape" operation in an nn.Sequential instance, but I am not quite sure how that is supposed to be done. Thanks!
|
st119587
|
We recommend implementing custom containers if you need anything more complex than passing the data forward (including reshaping). See how torchvision models are implemented.
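For instance, a minimal sketch of such a custom container, with the reshape done in forward (layer sizes are made up for illustration):
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.classifier = nn.Linear(16 * 32 * 32, 10)

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), -1)  # the "reshape" step lives in forward, not in Sequential
        return self.classifier(x)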
|
st119588
|
I’d like to access the output of a module inside an nn.Sequential. It could be that I have a big CNN wrapped in nn.Sequential, and I’d like to concat the output of several layers. Or it could be that I’m just checking the intermediate layer outputs to debug stuff.
Doing this used to be quite easy in lua torch because modules cache their output in module.output. But pytorch apparently doesn’t do that anymore. Is there a way I can access intermediate layers’ output elegantly?
Currently I just inherit from nn.Sequential and in forward() I cache the outputs from specific modules I want. I’m not sure if this screws up memory management since it looks like the intermediate values need to be re-allocated at every network pass?
|
st119589
|
Subclassing sequential and overriding forward (just like you are doing now) is the right approach.
We wanted to get rid of Sequential itself, but we kept it as a convenience container.
There is no screwing up of memory management because of subclassing sequential (or caching outputs), it works out fine.
|
st119590
|
What would be the consequence if we cache every module’s output by adding the following to nn.Module’s __call__():
> self.output = var
Would this increase memory usage by prolonging the lifetime of output? I’m very curious about how intermediate values are allocated and deallocated.
|
st119591
|
yes if you do that, all output variables will be cached and you will run out of memory (because in pytorch output buffers are not reused but recreated)
|
st119592
|
You won’t necessarily run out of memory, because if you overwrite them at every forward, you’ll be keeping at most one additional copy of the graph, that’s likely to have its buffers freed if only you called backward on it.
About the original question, you might also want to take a look at register_forward_hook Module method.
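For instance, a rough sketch of caching one intermediate output with register_forward_hook (the model here is made up):
import torch
import torch.nn as nn
from torch.autograd import Variable

saved = {}

def save_output(module, input, output):
    saved['relu_out'] = output  # cache the intermediate activation

relu = nn.ReLU()
model = nn.Sequential(nn.Linear(10, 20), relu, nn.Linear(20, 5))
relu.register_forward_hook(save_output)

y = model(Variable(torch.randn(2, 10)))
print(saved['relu_out'].size())  # output of the ReLU layer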
|
st119593
|
Hi,
I am training a pretrained resnet-18 model imported from torchvision.models with a dataset containing 1050 images of size 3x240x320. After training, I am testing with 399 samples, but I am getting a RunTime CUDA Error: Out of Memory. Also, I have ported the test dataset to CUDA and the volatile attribute is set to True. The model is on the GPU, and after training the nvidia-smi output is 3243MiB/4038MiB.
Is there any way available to free the GPU memory so that I can do the testing?
|
st119594
|
it’s possible that moving both the training and test dataset over to CUDA doesn’t give enough space for the network to forward-prop through. Loading your dataset into GPU memory itself takes 1.3 GB of memory.
Are you giving a large batch size as input to your network? Can you reduce the batch size at inference time?
|
st119595
|
Yes I was giving a large batch size, so I reduced the batch size at inference time. It is working fine now. Thank you very much.
|
st119596
|
Also, remember to use the volatile flag for inference! It will greatly reduce the memory usage.
|
st119597
|
Hi,
I was told to convert a saved cudnn network to a non-cudnn network in Lua first, and then use load_lua (see: https://github.com/pytorch/pytorch/issues/791).
How exactly can I do this?
|
st119598
|
you can use cudnn.convert from soumith/cudnn.torch on GitHub (Torch-7 FFI bindings for NVIDIA cuDNN).
|
st119599
|
rnn = nn.LSTMCell(10, 20)
input = Variable(torch.randn(6, 3, 10))
hx = Variable(torch.randn(3, 20))
cx = Variable(torch.randn(3, 20))
output = []
for i in range(6):
    hx, cx = rnn(input, (hx, cx))
    output.append(hx)
In the above code, in the for loop, I think input should be changed to input[i].
If I'm right, the example code for torch.nn.LSTMCell and torch.nn.GRUCell should be changed.
The example code is here: http://pytorch.org/docs/nn.html#recurrent-layers
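For reference, the corrected loop would read:
rnn = nn.LSTMCell(10, 20)
input = Variable(torch.randn(6, 3, 10))
hx = Variable(torch.randn(3, 20))
cx = Variable(torch.randn(3, 20))
output = []
for i in range(6):
    hx, cx = rnn(input[i], (hx, cx))  # feed one time step at a time
    output.append(hx)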
|
st119600
|
Hi all,
Can I train 3D networks (such as C3D, V2V) with PyTorch?
How can I generate a 5D tensor (batch, channel, length, height, width) from image sequences for the network?
Could you please show me some examples?
Thank you in advance.
|
st119601
|
Yes, we support that. You can find modules like Conv3d and pooling for volumes. I don't understand the question about generating batches; you need to write your own Dataset that will load and return 4D elements from __getitem__. Then use a DataLoader to load the data (you can give it a batch_size argument, and it will batch the 4D elements into a 5D tensor for you).
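A rough sketch of that setup, with made-up shapes (each element is a 4D (C, D, H, W) clip; the DataLoader stacks them into a 5D (N, C, D, H, W) batch):
import torch
from torch.utils.data import Dataset, DataLoader

class ClipDataset(Dataset):
    def __init__(self, num_clips=100):
        self.num_clips = num_clips

    def __len__(self):
        return self.num_clips

    def __getitem__(self, idx):
        clip = torch.randn(3, 16, 112, 112)  # channel, length, height, width
        label = 0
        return clip, label

loader = DataLoader(ClipDataset(), batch_size=8, shuffle=True)
clips, labels = next(iter(loader))  # clips.size() -> (8, 3, 16, 112, 112)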
|
st119602
|
The return is a tuple of 5 elements (N, C, D, H, W).
Link to the documentation: http://pytorch.org/docs/nn.html#convtranspose3d
|
st119603
|
You need to return 4D elements from the dataset, so they will be concatenated into 5D batches. Then, we only support batch mode in nn, so you'll be using 5D tensors throughout the network.
|
st119604
|
Sorry, I meant to ask what dimensionality of elements __getitem__ should return when I write the Dataset.
|
st119605
|
As I said, __getitem__ should return 4D elements, because the DataLoader will concatenate them into a 5D batch.
|
st119606
|
My net:
class convNet(nn.Module):
    # constructor
    def __init__(self):
        super(convNet, self).__init__()
        # defining layers in convnet
        # input size = 32x32x3
        self.conv1 = nn.Conv2d(1, 96, kernel_size=3, stride=1, padding=1)
        # conv1: 28x28x1 -> 28x28x96
        self.conv2 = nn.Conv2d(96, 256, kernel_size=3, stride=1, padding=1)
        # conv2: 28x28x96 -> 28x28x256
        self.conv3 = nn.Conv2d(256, 384, kernel_size=3, stride=1, padding=1)
        # conv3: 28x28x256 -> 28x28x384
        # pooling: 28x28x384 -> 14x14x384
        self.conv4 = nn.Conv2d(384, 512, kernel_size=3, stride=1, padding=1)
        # conv4: 14x14x384 -> 14x14x512
        # pooling: 14x14x512 -> 7x7x512
        self.fc1 = nn.Linear(7*7*512, 300)
        self.fc2 = nn.Linear(300, 100)
        self.fc3 = nn.Linear(100, 10)

    def forward(self, x):
        conv1_relu = nnFunctions.relu(self.conv1(x))
        conv2_relu = nnFunctions.relu(self.conv2(conv1_relu))
        conv3_relu = nnFunctions.max_pool2d(nnFunctions.relu(self.conv3(conv2_relu)), 2)
        conv4_relu = nnFunctions.max_pool2d(nnFunctions.relu(self.conv4(conv3_relu)), 2)
        x = conv4_relu.view(-1, 7*7*512)
        x = nnFunctions.relu(self.fc1(x))
        x = nnFunctions.relu(self.fc2(x))
        x = self.fc3(x)
        return x
I have an image of 1x28x28 and I am feeding it to the above net.
But it gives me the following error:
RuntimeError: CHECK_ARG((long)pad.size() == input->nDimension - 2) failed at torch/csrc/cudnn/Conv.cpp:258
|
st119607
|
Are you sure you are passing the image as (1, 28, 28)? It accepts input as (N, C, H, W). It runs on my computer perfectly.
import torch
import torch.nn as nn
from torch.autograd import Variable
import numpy as np

class channel1(nn.Module):
    def __init__(self):
        super(channel1, self).__init__()
        self.conv1 = nn.Conv2d(1, 96, kernel_size=3, stride=1, padding=1)
        self.relu1 = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(96, 10, kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        x = self.conv1(x)
        x = self.relu1(x)
        x = self.conv2(x)
        return x

if __name__ == '__main__':
    someNet = channel1()
    x = Variable(torch.randn(1, 1, 28, 28))
    print someNet(x)
|
st119608
|
your code runs on my system too:
if __name__ == '__main__':
    net = convNet()
    x = Variable(torch.randn(10, 1, 28, 28))
    print(net(x))
|
st119609
|
@sarthak1996 we only support batch mode. If you want to process a single image you have to unsqueeze an additional dimension at the front, to simulate a batch of 1 image.
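For example, assuming image is a 1x28x28 tensor:
x = image.unsqueeze(0)  # -> 1x1x28x28, a batch of one
output = net(Variable(x))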
|
st119610
|
This is my complete code:
np_data.shape  # -> (42000, 784)
np_data = np_data.astype(np.float32).reshape(42000, 28, 28, 1)
np_labels.shape  # -> (42000, 1)
np_data.shape  # -> (42000, 28, 28, 1)
np_data = np.transpose(np_data, [0, 3, 1, 2])
np_data.shape  # -> (42000, 1, 28, 28)
batch_size = 50
features = torch.from_numpy(np_data)
features = features.contiguous().view(42000, -1)
targets = torch.from_numpy(np_labels)
import torch.utils.data as data_utils
train = data_utils.TensorDataset(features, targets)
train_loader = data_utils.DataLoader(train, batch_size=batch_size, shuffle=True)
criterion = nn.CrossEntropyLoss()  # use a Classification Cross-Entropy loss
import torch.optim as optim

def trainConvNet(train_loader, net, criterion, epochs, total_samples, learning_rate):
    prev_loss = 0
    optimizer = optim.SGD(net.parameters(), lr=learning_rate, momentum=0.9)
    for epoch in range(int(epochs)):  # loop over the dataset multiple times
        running_loss = 0.0
        for i, data in enumerate(train_loader):
            inputs, labels = data
            # wrap them in Variable
            inputs, labels = Variable(inputs).cuda(), Variable(labels).cuda()
            # zero the parameter gradients
            optimizer.zero_grad()
            # forward + backward + optimize
            outputs = net(inputs)
            loss = criterion(outputs, labels[:, 0])
            loss.backward()
            optimizer.step()
            # print statistics
            running_loss += loss.data[0]
    print('Finished Training')
    return net

class convNet(nn.Module):
    # constructor
    def __init__(self):
        super(convNet, self).__init__()
        # defining layers in convnet
        # input size = 32x32x3
        self.conv1 = nn.Conv2d(1, 96, kernel_size=3, stride=1, padding=1)
        # conv1: 28x28x1 -> 28x28x96
        self.conv2 = nn.Conv2d(96, 256, kernel_size=3, stride=1, padding=1)
        # conv2: 28x28x96 -> 28x28x256
        self.conv3 = nn.Conv2d(256, 384, kernel_size=3, stride=1, padding=1)
        # conv3: 28x28x256 -> 28x28x384
        # pooling: 28x28x384 -> 14x14x384
        self.conv4 = nn.Conv2d(384, 512, kernel_size=3, stride=1, padding=1)
        # conv4: 14x14x384 -> 14x14x512
        # pooling: 14x14x512 -> 7x7x512
        self.fc1 = nn.Linear(7*7*512, 300)
        self.fc2 = nn.Linear(300, 100)
        self.fc3 = nn.Linear(100, 10)

    def forward(self, x):
        x = nnFunctions.relu(self.conv1(x))
        x = nnFunctions.relu(self.conv2(x))
        x = nnFunctions.max_pool2d(nnFunctions.relu(self.conv3(x)), 2)
        x = nnFunctions.max_pool2d(nnFunctions.relu(self.conv4(x)), 2)
        x = x.view(-1, 7*7*512)
        x = nnFunctions.relu(self.fc1(x))
        x = nnFunctions.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = convNet()
net.cuda()
net = trainConvNet(train_loader, net, criterion, 1, 42000, 0.01)
I don't understand why I am getting the above error.
|
st119611
|
Add print(inputs.size()) before net(inputs). You've flattened the dataset to be 42000x784, so your batches will be of size 50x784. Remove the .view(42000, -1) call on the features.
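That is, something like this for the data preparation, keeping the (N, C, H, W) layout instead of flattening:
features = torch.from_numpy(np_data)  # stays (42000, 1, 28, 28)
targets = torch.from_numpy(np_labels)
train = data_utils.TensorDataset(features, targets)
train_loader = data_utils.DataLoader(train, batch_size=batch_size, shuffle=True)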
|
st119612
|
Hi, I am trying to implement the "Exploiting linear structure for efficient evaluation" paper. It requires me to perform SVD on the weight tensor and find its rank-1 approximation, and these rank-1 filters then perform the convolution operation.
I wrote a simple test suite to verify this and I am getting good speed ups. Here is the code which does that:
# doing passes
res1 = np.zeros((oH, pw))
out = np.zeros((oH, oW))
t = time.time()
for i in xrange(oH):
    hst, hend = i*stride, i*stride+HH
    patch = padded_x[hst:hend, :]
    res1[i] = np.sum(patch*u, axis=0)
for i in xrange(oW):
    wst, wend = i*stride, i*stride+WW
    patch = res1[:, wst:wend]
    out[:, i] = np.sum(patch*v, axis=1)
elapsed = time.time() - t
print "time taken to do this awesome operation is {}".format(elapsed)
print out
I wanted to know if there is a way I can do this using pytorch and get the speed improvements?
|
st119613
|
All of the operations used here are supported, but we don't support backpropagation through SVD. I'd suggest implementing your own Function to do that. You can find a tutorial in the notes.
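A minimal sketch of the old-style Function API those notes describe (the elementwise multiply here is just a stand-in; the SVD backward itself is not shown):
from torch.autograd import Function

class MyMul(Function):
    def forward(self, x, y):
        self.save_for_backward(x, y)
        return x * y

    def backward(self, grad_output):
        x, y = self.saved_tensors
        return grad_output * y, grad_output * x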
|
st119614
|
Hi Adam,
I am trying to implement this in PyTorch now, but I am not getting the right performance.
If you would guide me in the right direction, I would be grateful.
import numpy as np
import torch
import torch.nn as nn
from torch.autograd import Variable
import torch.nn.functional as nnf
import time
image = torch.randn(512, 512)
image = Variable(image.view(1,1,512, 512))
u = torch.Tensor([1,2,1])
v = torch.Tensor([-1, 0, 1])
w = Variable(torch.ger(u, v))
w = w.view(1,1,3,3)
u = Variable(u.view(1,1,3,1))
v = Variable(v.view(1,1,1,3))
t = time.time()
res1 = nnf.conv2d(image, u, stride=1, padding=1)
out = nnf.conv2d(res1, v, stride=1)
print "time taken is {}".format(time.time()-t)
print out
t = time.time()
out1 = nnf.conv2d(image, w, stride=1, padding=1)
print "time taken by normal is {}".format(time.time()-t)
print out1
|
st119615
|
I don’t think it’s that surprising that a single convolution is faster. For each of them, you have to preprocess the image in some way, most libraries are optimized for 3x3 filters, since they’re one of the most commonly used sizes, and you benefit from data locality if you only do a single conv (no need to transfer the image back and forth).
|
st119616
|
So I would not get any improvements if I implement the forward and backward pass for this, as per your previous suggestion of extending autograd?
Thanks
|
st119617
|
I think the answer should be 1? However, when I switch the tensor to CPU, it returns 1. Why is the difference so significant?
Thank you.
|
st119618
|
Wow, that’s weird. It’s a bug in the C backend, the same thing happens in Lua Torch. We’ll fix that right away.
|
st119619
|
Hi everyone,
I’m trying to implement one of the stability tricks for GAN using pytorch based on the DCGAN example.
I've used torch before and found a WhiteNoise Layer that gave me good results, but now I'd like to port
this to pytorch.
I am no expert in pytorch, therefore I'm having problems defining the forward method and making it compatible
with the multi-gpu dcgan example.
Any hint would be welcome and I'm happy to make a pull request as an added feature once done.
Link to Gist containing my attempt
Edit:
I've followed the "Extending PyTorch" guide in the docs and updated my code accordingly.
There still seems to be an issue in the forward, due to the error
"TypeError: forward() takes exactly 2 arguments (1 given)"
|
st119620
|
There are a number of problems with your example. One that is very important is that you're nesting data parallel invocations (you have one in the forward function, but the noise module is also wrapped in the DataParallel module).
Apart from that I really encourage you to revisit the tutorials; new module implementations should look different than those from Lua Torch. There's no need for having a noise layer: in the forward of your generator you can just sample noise (Variable(torch.randn(size))) and forward that through the network. You might also want to read through the DCGAN example or the code for the Wasserstein GAN paper.
|
st119621
|
Thank you @apaszke !
I believe you may have misunderstood the purpose of this module.
I am aware that for the generator you sample your latent space z (normal distribution) and forward that through.
What I would like to achieve here is additive noise on the inputs of the Discriminator,
that’s why I chose to create another module or layer in the Discriminator.
I will have another look at the tutorials, thank you in the meantime.
|
st119622
|
I’m replying to myself, in case anyone else runs into this issue.
The simpler way was to simply create a new noise variable and add it to the
real and fake images before calling:
input.data.resize_(real_cpu.size()).copy_(real_cpu)
additive_noise.data.resize_(real_cpu.size()).normal_(0, std)
input.data.add_(additive_noise.data)
output = netD(input)
|
st119623
|
You could also just sample white noise, wrap it in a Variable and add that. I think that would be a more elegant solution.
|
st119624
|
I think having a module for white noise would still be useful when we have an nn.Sequential module with multiple child modules and we want to add white noise in the middle of it.
We can definitely add white noise in the forward function, but in that case we would have to split the nn.Sequential module into two modules so that we can put the white noise in the middle, right? I am not sure if there would be a more elegant solution.
|
st119625
|
@supakjk Nothing stops you from doing that, but it’s not the recommended way. I find using layers elegant only for more complex functions, that have lots of parameters, like Conv - you have weight, bias, stride, padding, etc. For something as simple as adding noise, I’d rather add that to the forward function.
If you want to add the noise in the middle of a network, just use two sequentials.
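For instance, a rough sketch of the two-sequentials idea, with made-up sizes and noise level:
import torch
import torch.nn as nn
from torch.autograd import Variable

class NoisyNet(nn.Module):
    def __init__(self):
        super(NoisyNet, self).__init__()
        self.part1 = nn.Sequential(nn.Linear(10, 20), nn.ReLU())
        self.part2 = nn.Sequential(nn.Linear(20, 5))

    def forward(self, x):
        x = self.part1(x)
        noise = Variable(torch.randn(x.size()) * 0.1)  # white noise added in the middle
        return self.part2(x + noise)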
|
st119626
|
Hi. In the testing/eval phase, after one layer is processed, is there any way to free its memory? Or can the output tensors of all layers share the same memory, since the outputs are not needed for calculating the gradients?
Thanks.
|
st119627
|
In test/eval phase you can create the input to your model as a Variable with a volatile flag - Variable(data, volatile=True). This will ensure that no buffers needed for backward or outputs (if you don’t save them anywhere) will be kept, and will be freed as soon as they can.
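For example (test_tensor and model stand in for your own data and network):
test_input = Variable(test_tensor, volatile=True)  # no backward buffers are kept
output = model(test_input)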
Training is already quite memory efficient. If it doesn’t need to hold onto some input/output because it can compute the gradient in another way, it won’t do it.
|
st119628
|
I have an image and I want to pass it through a convolutional net whose output has the same spatial dimensions as the input.
E.g. suppose I have an input of 3x32x32; the output is 32x32.
Now I want to calculate the loss against a label of 32x32, but pytorch does not support feeding a multi-dimensional label to the loss function.
Is there a way to do this?
|
st119629
|
First thing, you probably want your image to be Bx3x32x32 - we only support batched inputs in CHW data ordering, but I think you meant to use a HWC order. You need to transpose the image.
Secondly, what kind of loss would you like to use exactly? If it’s segmentation then, nn.NLLLoss2d should do it.
|
st119630
|
apaszke:
Secondly, what kind of loss would you like to use exactly? If it’s segmentation then, nn.NLLLoss2d should do it.
I have some co-ordinates for bounding boxes in the train dataset and I want the net to train on those.
|
st119631
|
My training labels would be 0 for non-bounding-box regions and 1 for bounding box regions. So I want to use L2 and L1 penalties. Which loss function should I use?
|
st119632
|
Sooo you’re predicting a 0 or 1 for each pixel? Why does your output have 3 channels then?
|
st119633
|
L1 and L2 losses should be easy to write yourself using only autograd operations. It’s just a subtraction + abs or pow.
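For example, minimal versions written directly with autograd ops (output and target are Variables of the same shape):
l1_loss = (output - target).abs().mean()
l2_loss = (output - target).pow(2).mean()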
|
st119634
|
Is it possible to construct a pycuda array from a cuda array created in pytorch, or vice versa? Asking because there's existing code for MPI based on pycuda (https://github.com/seba-1511/mpi4pycuda) that I would like to build on top of.
|
st119635
|
It is possible and you can find an example here. However, it's quite hard to do it properly, as PyCuda initializes its own CUDA contexts instead of using a default one, so sometimes you may end up in a situation where PyTorch pointers are inaccessible from PyCuda and vice versa. Be careful with imports and initialization.
|
st119636
|
pytorch worked fine when I installed with either anaconda or pip.
However, when I tried to build pytorch from the source (python setup.py install), I got the following error messages. (I am using python 2.7.13, gcc 4.8.5, cuda 8.0)
gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/homes/3/kimjook/pytorch -I/homes/3/kimjook/pytorch/torch/csrc -I/homes/3/kimjook/pytorch/torch/lib/tmp_install/include -I/homes/3/kimjook/pytorch/torch/lib/tmp_install/include/TH -I/homes/3/kimjook/pytorch/torch/lib/tmp_install/include/THPP -I/homes/3/kimjook/pytorch/torch/lib/tmp_install/include/THNN -I/u/drspeech/opt/anaconda2/lib/python2.7/site-packages/numpy/core/include -I/usr/local/cuda/include -I/homes/3/kimjook/pytorch/torch/lib/tmp_install/include/THCUNN -I/u/drspeech/opt/anaconda2/include/python2.7 -c torch/csrc/cuda/Storage.cpp -o build/temp.linux-x86_64-2.7/torch/csrc/cuda/Storage.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda/lib64
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [enabled by default]
torch/csrc/nn/THNN_generic.cpp: In function ‘void torch::nn::Abs_updateOutput(thpp::Tensor*, thpp::Tensor*)’:
torch/csrc/nn/THNN_generic.cpp:45:14: error: ‘THCudaHalfTensor’ was not declared in this scope
(THCudaHalfTensor*)input->cdata(),
^
In file included from /homes/3/kimjook/pytorch/torch/lib/tmp_install/include/THPP/tensors/../storages/THCStorage.hpp:11:0,
from /homes/3/kimjook/pytorch/torch/lib/tmp_install/include/THPP/tensors/THCTensor.hpp:22,
from torch/csrc/DynamicTypes.cpp:11:
/homes/3/kimjook/pytorch/torch/lib/tmp_install/include/THPP/tensors/../storages/../TraitsCuda.hpp:9:20: error: ‘half’ was not declared in this scope
struct type_traits<half> {
^
It seems that the compiler cannot correctly recognize "half" precision related data structures or classes. How would I resolve this problem? (I don't need half precision operations, so if possible it's fine for me to disable it.)
Thanks!
|
st119637
|
That's weird. Could you please clean the install (watch out, if you have any uncommitted changes it will delete them!!) with git clean -xfd, then run python setup.py install and upload the full log to pastebin?
|
st119638
|
I've looked at the log and I can't really see what the problem is.
Are you using CUDA 8.0.27 by any chance, instead of the latest 8.0.44?
|
st119639
|
I have no idea what's wrong. You have a fresh version of CUDA, so half should be automatically enabled everywhere.
Can you check if adding #include <cuda.h> after this line helps?
|
st119640
|
Ok, are you sure that /usr/local/cuda-8.0 and /usr/local/cuda contain the same CUDA version? I'm pretty sure there's something wrong with that. All libs are compiled with /usr/local/cuda-8.0 and claim to detect FP16, but not when the extension is compiled.
Another hack would be to add -DCUDA_HAS_FP16=1 here.
|
st119641
|
Oh. That’s it. My system’s /usr/local/cuda is linked to cuda-6.5 since I am sharing the system with others.
I am using CUDA 8 by setting
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64:$LD_LIBRARY_PATH
export PATH=/usr/local/cuda-8.0/bin:$LD_PATH
Would there be any other way I can explicitly specify the directory path?
Thanks!
|
st119642
|
I am trying to threshold Variables using the adaptive value as below.
import torch
from torch.autograd import Variable
import torch.nn as nn
x = Variable(torch.rand(10))
maxx = torch.max(x)
m = nn.Threshold(0.6*maxx,0)
y = m(x)
As you can see, it won't work because 'maxx' is a Variable, not a constant.
How can I use the adaptive threshold ?
Thanks in advance for your help.
|
st119643
|
Hi,
I think you want to use the nn functional version of threshold directly:
import torch
from torch.autograd import Variable
import torch.nn as nn
x = Variable(torch.rand(10))
maxx = torch.max(x)
y = nn.functional.threshold(x, 0.6*maxx.data[0], 0)
EDIT: access the content of the maxx Variable since we don’t want to propagate through it.
|
st119644
|
Thanks for your reply.
Unfortunately, your code also won't work, because maxx is a Variable.
|
st119645
|
Hi,
Yes, my bad.
But given that you will not be able to backpropagate through the threshold value, you don't want the threshold to be a Variable. You can thus just use the content of the maxx Variable. I updated the code sample above.
|
st119646
|
When I use this snippet in my code, I get the following error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
My test code is as follows.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable

class AdaReLU(nn.Module):
    def __init__(self):
        super(AdaReLU, self).__init__()
        self.relu = nn.ReLU()

    def forward(self, x):
        y = x
        for i in range(x.size(0)):
            for ii in range(x.size(1)):
                maxx = torch.max(x[i, ii, 0, :])
                y[i, ii, 0, :] = F.threshold(x[i, ii, 0, :], config.max_th * maxx.data[0], 0)
        x = self.relu(y)
        return x

class TestNet(nn.Module):
    def __init__(self):
        super(TestNet, self).__init__()
        self.conv_1 = nn.Conv2d(len(config.alphabet), config.basic_frames, kernel_size=(1, 3), stride=(1, 2), padding=(0, 0), bias=False)
        self.batchnorm_1 = nn.BatchNorm2d(config.basic_frames)
        self.conv_2 = nn.ConvTranspose2d(config.basic_frames, len(config.alphabet), kernel_size=(1, 3), stride=(1, 2), padding=(0, 0), bias=False)
        self.batchnorm_2 = nn.BatchNorm2d(len(config.alphabet))
        self.adarelu = AdaReLU()

    def forward(self, x):
        x = self.adarelu(self.batchnorm_1(self.conv_1(x)))
        x = F.sigmoid(self.batchnorm_2(self.conv_2(x)))
        return x

x = Variable(torch.rand(32, 68, 1, 2283))
model = TestNet()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
optimizer.zero_grad()
output = model(data)
loss = F.binary_cross_entropy(output, data)
loss.backward()
optimizer.step()
|
st119647
|
Doing y = x was a good idea to prevent changing the input inplace. Unfortunately, doing it that way leads to x and y being the same tensor, so you keep modifying the input inplace.
Try to replace it with y = x.clone().
|
st119648
|
Thanks, it works.
But, due to the for-loops, it is too slow.
Do you have any idea to speed up the process ?
|
st119649
|
Hi,
I think you can replace the loop by:
y = x * (1 - x.lt(maxx.expand_as(x))).float()
|
st119650
|
The whole function can be simplified to this:
def AdaReLU(x):
    max_val, _ = torch.max(x, 3)
    mask = x < max_val.expand_as(x)
    mask.detach_()
    x = x.clone()
    x[mask] = 0
    return F.relu(x)
The detach_() call shouldn’t be necessary, but I’ve found a bug there
|
st119651
|
Because I need to employ nn.Replicate from torch, I tried the below
import torch
import torch.nn.functional as F
from torch.legacy import nn as nn
from torch.autograd import Variable
input = Variable(torch.rand(32,128,1,1))
replicate = nn.Replicate(nf = 32, dim = 3)
output = replicate(input)
But, I got the error message TypeError: ‘Replicate’ object is not callable
Can you tell me how I can use nn.Replicate ?
|
st119652
|
torch.expand can do the same thing, however the arguments are a little different from nn.Replicate.
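For example, a rough sketch with expand, which replicates along a size-1 dimension (the semantics are not identical to nn.Replicate, which inserts a new dimension):
input = Variable(torch.rand(32, 128, 1, 1))
output = input.expand(32, 128, 32, 1)  # replicate 32 times along dim 2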
|
st119653
|
You can’t mix legacy.nn with autograd or new nn. It’s only for loading legacy models.
|
st119654
|
In torch.autograd._functions.reduce, the Prod class is implemented in a way that produces a nan gradient when a zero value is given.
Starting from the product of all inputs, the gradient is calculated by dividing that product by each input entry.
When an input entry is zero, this method returns a 'nan' gradient.
By replacing the backward() function of the Prod class with
if self.dim is None:
    input, = self.saved_tensors
    zero_loc = (input == 0).nonzero()
    if zero_loc.dim() == 0:
        grad_input = grad_output.new(self.input_size).fill_(self.result)
        return grad_input.div(input)
    elif zero_loc.size()[0] > 1:
        return grad_output.new(self.input_size).fill_(0)
    else:
        grad_input = grad_output.new(self.input_size).fill_(0)
        indexing_tuple = tuple(zero_loc[0].numpy())
        input_copy = grad_output.new(self.input_size).fill_(0)
        input_copy.copy_(input)
        input_copy[indexing_tuple] = 1.0
        grad_input[indexing_tuple] = input_copy.prod()
        return grad_input
else:
    input, output = self.saved_tensors
    input_copy = grad_output.new(self.input_size).fill_(0)
    input_copy.copy_(input)
    input_copy[input == 0] = 1.0
    repeats = [1 for _ in self.input_size]
    repeats[self.dim] = self.input_size[self.dim]
    output_zero_cnt = (input == 0).sum(self.dim)
    output_one_zero_ind = (output_zero_cnt == 1).nonzero()
    grad_input = output.mul(grad_output)
    grad_input[output_zero_cnt > 0] = 0.0
    grad_input = grad_input.repeat(*repeats).div_(input_copy)
    if output_one_zero_ind.dim() == 0:
        return grad_input
    for i in range(output_one_zero_ind.size()[0]):
        if output_one_zero_ind.is_cuda:
            output_one_zero_vec_ind = tuple(output_one_zero_ind[i].cpu().numpy())
        else:
            output_one_zero_vec_ind = tuple(output_one_zero_ind[i].numpy())
        output_one_zero_vec_indexing = output_one_zero_vec_ind[:self.dim] + (slice(0, None),) + output_one_zero_vec_ind[self.dim+1:]
        output_one_zero_vec = input.new(self.input_size[self.dim]).fill_(0)
        output_one_zero_vec.copy_(input[output_one_zero_vec_indexing])
        output_one_zero_vec[(output_one_zero_vec == 0).nonzero()[0, 0]] = 1.0
        grad_input[output_one_zero_vec_ind] = output_one_zero_vec.prod() if output_one_zero_vec.numel() > 1 else 1.0
    return grad_input
the nan in Prod.backward() is solved.
The naming is bad, but it works.
It would be better if this modification were reflected in the next release.
|
st119655
|
The previous code only worked for 2-dim tensors.
I modified it to make it work for n-dim tensors.
|
st119656
|
If prod() is applied to a tensor T where T.numel() == 1,
this returns 'inf'.
This should be solved.
|
st119657
|
In Torch, we can assign a thread to each GPU as below.
net = nn.DataParallelTable(1, true, true):add(net, {1, 2}):threads(function() require 'cudnn' cudnn.benchmark = true cudnn.fastest = true end)
Is it possible to do this in pytorch?
If this is not supported now, do you have any plan to support it?
Also, when I employed DataParallel, it seemed to me that pytorch uses only one thread.
The first GPU consumed almost all of its memory, while the others consumed only half of theirs.
Is this normal?
|
st119658
|
unlike LuaTorch, we don't dispatch DataParallel via python threads in PyTorch.
If this is not supported now, do you have any plan to support this ?
No, we plan to (and do) dispatch multi-GPU differently at high performance, without the user needing to do anything.
Also, when I employed DataParallel, it seemed to me that pytorch use only one thread.
This is irrelevant to the user, we are working on improving the internals and multi-gpu perf.
First GPU consumed almost all memory, while the others consumed only half of their memories.
Is this normal ?
Depending on how many parameters you have in your model, this is possible.
|
st119659
|
Repeating a Variable may be really useful in order to avoid loops, but I don’t see it in the methods, while everything seems ready for it. The method should be as simple as:
def repeat(self, repeats):
    return Repeat(repeats)(self)
|
st119660
|
I was struggling to feed my predictions into an sklearn function
M_ = M.data.numpy()
I got error
RuntimeError: numpy conversion for LongTensor is not supported
I tried some other things. Didn’t work and I gave up.
Then I start reading the docs. I found out it should be
M_ = M.data.cpu().numpy()
I suggest improving this error message. Maybe something like
RuntimeError: numpy conversion for LongTensor is not supported on CUDA
That way, novices like me will know what to do
|
st119661
|
Thanks, it's a good suggestion. I'll work on this.
Tracking it here: https://github.com/pytorch/pytorch/issues/779
|
st119662
|
While trying to extend word_language_model example, I’m hitting an error:
Trying to backward through the graph second time, but the buffers have already been freed. Please specify retain_variables=True when calling backward for the first time.
I guess I'm missing something obvious here, but why doesn't running the model again refill the buffers? I was going to implement training in a similar loop.
Code to reproduce:
import torch as th
import torch.nn as nn
from torch.autograd import Variable

# borrowed from `word_language_model`
class RNNModel(nn.Module):
    """Container module with an encoder, a recurrent module, and a decoder."""

    def __init__(self, rnn_type, ntoken, ninp, nhid, nlayers):
        super(RNNModel, self).__init__()
        self.encoder = nn.Embedding(ntoken, ninp)
        self.rnn = getattr(nn, rnn_type)(ninp, nhid, nlayers, bias=False)
        self.decoder = nn.Linear(nhid, ntoken)
        self.init_weights()
        self.rnn_type = rnn_type
        self.nhid = nhid
        self.nlayers = nlayers

    def init_weights(self):
        initrange = 0.1
        self.encoder.weight.data.uniform_(-initrange, initrange)
        self.decoder.bias.data.fill_(0)
        self.decoder.weight.data.uniform_(-initrange, initrange)

    def forward(self, input, hidden):
        emb = self.encoder(input)
        output, hidden = self.rnn(emb, hidden)
        decoded = self.decoder(output.view(output.size(0)*output.size(1), output.size(2)))
        return decoded.view(output.size(0), output.size(1), decoded.size(1)), hidden

    def init_hidden(self, bsz):
        weight = next(self.parameters()).data
        if self.rnn_type == 'LSTM':
            return (Variable(weight.new(self.nlayers, bsz, self.nhid).zero_()),
                    Variable(weight.new(self.nlayers, bsz, self.nhid).zero_()))
        else:
            return Variable(weight.new(self.nlayers, bsz, self.nhid).zero_())

vocab_size = 256
batch_size = 64
model = RNNModel('GRU', vocab_size, 100, 100, 3)
model.train()
optimizer = th.optim.Adam(model.parameters(), lr=1e-2)
criterion = nn.CrossEntropyLoss()
x = Variable(th.LongTensor(50, 64))
x[:] = 1
state = model.init_hidden(batch_size)

print('first pass')
optimizer.zero_grad()
logits, state = model(x, state)
loss = criterion(logits.view(-1, vocab_size), x.view(-1))
loss.backward()
optimizer.step()

print('second pass')
optimizer.zero_grad()
logits, state = model(x, state)
loss = criterion(logits.view(-1, vocab_size), x.view(-1))
loss.backward()  # <-- error occurs here
optimizer.step()
|
st119663
|
You need to call state.detach_(). Otherwise, it will still hold on to the graph that created it, i.e. that of the first pass.
|
st119664
|
Um sorry. This will only work from the next release that's going to be up soon. For now use repackage_hidden from the example.
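For reference, repackage_hidden in that example looks roughly like this:
def repackage_hidden(h):
    # wrap hidden states in new Variables to detach them from their history
    if type(h) == Variable:
        return Variable(h.data)
    else:
        return tuple(repackage_hidden(v) for v in h)

state = repackage_hidden(state)  # call this on the hidden state before the next pass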
|
st119665
|
Thanks! Didn’t realize that I was reusing old graph with this.
PyTorch is awesome!
|
st119666
|
For example,
tensor_a is currently on the CPU.
I want to transfer tensor_a to the CPU or GPU based on
whichever device tensor_b resides on.
|
st119667
|
Here’s another torch.multiprocessing situation I can’t wrap my head around. This example is working:
import torch
import torch.multiprocessing as mp

def put_in_q():
    x = torch.IntTensor(2, 2).fill_(22)
    x = x.cuda()
    print(x)

p = mp.Process(target=put_in_q, args=())
p.start()
p.join()
But not this one:
import torch
import multiprocessing as mp

class CudaProcess(mp.Process):
    def __init__(self):
        mp.Process.__init__(self)

    def run(self):
        x = torch.IntTensor(2, 2).fill_(22)
        x = x.cuda()
        print(x)

p = CudaProcess()
p.start()
p.join()
The error I’m getting is:
terminate called after throwing an instance of 'THException'
what(): cuda runtime error (3) : initialization error at .../pytorch/torch/lib/THC/THCGeneral.c:70
|