st117268
|
a common way to step into this issue is if you do something like total_loss = total_loss + loss inadvertently, say for reporting purposes, instead of total_loss = total_loss + loss.data[0].
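A minimal sketch of the difference, in the 0.x Variable API this thread assumes (model, criterion and loader are hypothetical names):
total_loss = 0
for data, target in loader:
    output = model(data)
    loss = criterion(output, target)
    # total_loss = total_loss + loss        # keeps every graph alive -> memory grows
    total_loss = total_loss + loss.data[0]  # plain float, safe for reporting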
|
st117269
|
Hi,
I need to calculate variances of some numbers that are pretty small and pretty close together, i.e. the variances are pretty small. Unfortunately, using torch.var(), I get zeros for the variance when I'm not supposed to, but the problem occurs only when using the dim=... argument.
EDIT: I managed to resolve my actual problem by using DoubleTensors instead of FloatTensors while I was composing this question. However, I thought it might still be worth checking in: why does my problem occur when using dim=... but not when leaving it out?
See this example:
import torch
my_array = torch.Tensor([-0.008015724,-0.008016450])
torch.var(my_array, dim=0)[0]
#Output: 0
torch.var(my_array)
#Output: 2.6385144069607236e-13
torch.var(my_array.double(), dim=0)[0]
#Output: 2.6385144069607236e-13
Many thanks to anyone who can help me understand this better.
|
st117270
|
i'm just vaguely wondering, as I skim through your question, if this is related to the Bessel's correction that we apply: https://en.wikipedia.org/wiki/Bessel's_correction
|
st117271
|
tl;dr: Are operations (multiplication/addition/slicing/cropping/clamping etc.) faster on numpy arrays or torch tensors, or should I expect the same performance?
I just started using PyTorch (I previously used Torch) and I'm wondering how numpy arrays and torch tensors compare when it comes to performance. I'm used to the many in-place operations that Torch supports; numpy arrays seem to like copying stuff around. Any insights on this?
Example:
I have an image dataset from which I will be loading images and performing multiple operations (cropping, resizing, slicing, multiplication/addition etc.). I only care about the end result of these operations, so in-place operations would be preferred (at least I think).
These operations will be different for each iteration, so loading a preprocessed version of the set won't do it.
I will be using opencv to load the images, so they will be numpy arrays once loaded.
Should I convert them to tensors and do the operations on the tensors, or leave the conversion to the end?
|
st117272
|
honestly, it's a mixed result: some ops are faster in numpy and others are faster in pytorch.
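If you want numbers for your own workload, a rough benchmarking sketch (results vary with the op, tensor size, and build):
import timeit
import numpy as np
import torch

a_np = np.random.rand(1000, 1000).astype('float32')
a_th = torch.from_numpy(a_np).clone()  # clone so the two buffers are independent

print(timeit.timeit(lambda: a_np * 2.0, number=100))      # numpy, out-of-place
print(timeit.timeit(lambda: a_th * 2.0, number=100))      # torch, out-of-place
print(timeit.timeit(lambda: a_th.mul_(2.0), number=100))  # torch, in-place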
|
st117273
|
Hi,
I need a function that does something like torch.Tensor.scatter_(dim, index, src), but accumulating.
x = torch.Tensor([[9, 10, 11, 12]])
id = torch.LongTensor([[4, 3, 4, 7]])  # scatter indices must be a LongTensor
out = torch.zeros(1, 8).scatter_(1, id, x)
When I run the code above, I get the output [0, 0, 0, 10, 11, 0, 0, 12],
while I expect the output to be [0, 0, 0, 10, 20, 0, 0, 12].
Note that the value at index 4 should be 20 (9+11), not 11.
Does anyone know how to implement a scatter that adds the source values into the tensor instead of overwriting them?
|
st117274
|
There’s a function called scatter_add_ that was added into master recently. It’ll be in the next release.
If you want it immediately, you can compile pytorch from source: https://github.com/pytorch/pytorch#from-source
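Once you have a build that includes it, usage should look roughly like this (a sketch; note that scatter indices must be a LongTensor):
import torch

x = torch.Tensor([[9, 10, 11, 12]])
idx = torch.LongTensor([[4, 3, 4, 7]])
out = torch.zeros(1, 8).scatter_add_(1, idx, x)
print(out)  # [0, 0, 0, 10, 20, 0, 0, 12] -- duplicate indices are summed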
|
st117275
|
I'd like to put an RNN on top of a CNN, and I am having trouble finding the right approach to pass the data from the CNN to the RNN. Right now, I reshape the input/output of the CNN by merging the seq and batch dimensions (see the sketch below). Is there a more native way to do this in PyTorch?
I think what I need is something like the TimeDistributed layer of Keras.
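For reference, the reshape you describe is the usual PyTorch idiom; a minimal sketch (cnn and rnn are assumed to be defined elsewhere, with cnn returning flat features and rnn built with batch_first=True):
# x: (batch, seq_len, C, H, W)
b, t = x.size(0), x.size(1)
x = x.view(b * t, x.size(2), x.size(3), x.size(4))  # fold time into batch
feats = cnn(x)                                      # (b*t, feat_dim)
feats = feats.view(b, t, -1)                        # unfold time for the RNN
out, h_n = rnn(feats)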
|
st117276
|
I am trying to train a classification model by finetuning from Resnet50, but I get different results with the following two construction methods.
The performance of Method 2 is much better than that of Method 1. Why do I get such a result?
Method 1
class MyResNet(nn.Module):
    def __init__(self, pretrained=True):
        super(MyResNet, self).__init__()
        self.pretrained = pretrained
        self.base = resnet50(pretrained=pretrained)
        self.classifier = nn.Linear(2048, 100)

    def forward(self, x):
        for name, module in self.base._modules.items():
            if name == 'avgpool':
                break
            else:
                x = module(x)
        x = F.avg_pool2d(x, x.size()[2:])
        x = x.view(x.size(0), -1)
        x = self.classifier(x)
        return x
Method 2
class MyResNet(nn.Module):
    def __init__(self, pretrained=True):
        super(MyResNet, self).__init__()
        self.pretrained = pretrained
        self.base = resnet50(pretrained=pretrained)
        self.classifier = nn.Linear(2048, 100)

    def forward(self, x):
        for name, module in self.base._modules.items():
            if name == 'avgpool':
                break
            if name == 'layer4':
                x = module[0](x)
                residual = x
                x = module[1].conv1(x)
                x = module[1].bn1(x)
                x = module[1].conv2(x)
                x = module[1].bn2(x)
                x = module[1].conv3(x)
                x = module[1].bn3(x)
                x += residual
                x = module[1].relu(x)
                residual = x
                x = module[2].conv1(x)
                x = module[2].bn1(x)
                x = module[2].conv2(x)
                x = module[2].bn2(x)
                x = module[2].conv3(x)
                x = module[2].bn3(x)
                x += residual
                x = module[2].relu(x)
            else:
                x = module(x)
        x = F.avg_pool2d(x, x.size()[2:])
        x = x.view(x.size(0), -1)
        x = self.classifier(x)
        return x
|
st117277
|
did you ever figure this out? I haven’t checked if Method1 and Method2 are exactly equivalent, but I do suspect that maybe they are not…
|
st117278
|
I have 2 conv layers and want their weights to be the same [tied]. How should I implement this in pytorch?
x = self.relu(self.conv1_1(x))
x_bn = self.bn1(x)
y = self.relu(self.conv1_2(y, weight=self.conv1_1.weight))
This was returning an error!
I want y to use the weights of x.
EDIT
I guess using the same conv layer would work!
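Alternatively, a minimal sketch with the functional interface, holding a single nn.Parameter and using it for both convolutions (the shapes here are made up):
import torch
import torch.nn as nn
import torch.nn.functional as F

class TiedConvs(nn.Module):
    def __init__(self):
        super(TiedConvs, self).__init__()
        # one weight tensor shared by both paths: (out_ch, in_ch, kH, kW)
        self.weight = nn.Parameter(torch.randn(16, 3, 3, 3))

    def forward(self, x, y):
        x = F.relu(F.conv2d(x, self.weight, padding=1))
        y = F.relu(F.conv2d(y, self.weight, padding=1))  # same weights by construction
        return x, y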
|
st117279
|
Thanks for this great framework and amazing supports in this forum.
I have a question regarding a slightly more complicated usage of variable-length inputs. Since I work on video understanding, I will use video as an example to explain what I would like to do.
Basically, I would like to develop a framework that works with variable-length sequences of variable-length inputs.
# num_frames: number of frames changes per video
# num_features: each video frame has various number of features
# feature_dim: feature dimension is fixed
video: (num_frames, num_features, feature_dim)
If the batch size is 4 (corresponding to 4 videos), then for each video frame within a video, I can first pad the number of features to make it consistent across the entire video sequence.
# each video has different length and each video frame has *padded* number of features
video_1: (9, 50, 512)
video_2: (10, 30, 512)
video_3: (5, 20, 512)
video_4: (7, 70, 512)
Once I have the batch, I can again pad the number of features for each video and then pad the length of videos (as below).
Since I have two levels of variable-length inputs, I wonder whether the strategy of padding the tensors, getting their corresponding lengths, sorting the tensors, and using pack_padded_sequence will still make sense. A simple model can be an FC layer (512->256) shared across all video frames, followed by a MaxPooling, and finally an RNN run through the video sequence.
# 2-layer padding # shared FC layer + MaxPooling # RNNs
video_1: (10, 70, 512) ------> (10, 1, 256) -------> (1, 1, 256)
video_2: (10, 70, 512) ------> (10, 1, 256) -------> (1, 1, 256)
video_3: (10, 70, 512) ------> (10, 1, 256) -------> (1, 1, 256)
video_4: (10, 70, 512) ------> (10, 1, 256) -------> (1, 1, 256)
While these operations seem to be straightforward, I am a little bit worried about whether the gradient flow will be calculated correctly, since it involves two levels of variable-length inputs.
I have read all the topics in this forum regarding variable-length inputs, but previous questions focus on only one level of variable length (various lengths of sentences for NMT, or various lengths of videos), whereas I have two levels of variable-length inputs.
I am halfway through developing the code for data loading and constructing my model, but I would really appreciate it if anyone could share suggestions or previous experience.
|
st117280
|
pack_padded_sequence only applies to RNNs (nn.RNN, nn.LSTM, nn.GRU).
You can process your initial stage (shared FC layer + MaxPooling) without padding the feature dimension, and then once you have pooled, you can pad the output and send it into the RNNs as packed sequences.
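A minimal sketch of that second stage, assuming pooled is (batch, max_frames, 256), the batch is already sorted by decreasing frame count, and rnn is an nn.GRU/nn.LSTM built with batch_first=True:
from torch.nn.utils.rnn import pack_padded_sequence

lengths = [10, 9, 7, 5]  # true frame counts per video
packed = pack_padded_sequence(pooled, lengths, batch_first=True)
out, h_n = rnn(packed)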
|
st117281
|
I tie the weights of two modules with module1.weight = module2.weight, and put them in one optimizer = torch.optim.SGD([module1, module2], lr=learning_rate). I find that after loss.backward() and optimizer.step(), their weights are updated by -2 * learning_rate * accumulated_gradient. I understand that loss.backward() will accumulate gradients for these two modules. But does it make sense to update the weight twice using the accumulated gradient? That means, if we make some modules in a network share weights, then these modules will effectively have a several times larger learning rate?
|
st117282
|
you can use the functional interface, where:
import torch.nn.functional as F
# in __init__
weight = nn.Parameter(torch.randn(20, 30))  # (out_features, in_features)
bias = nn.Parameter(torch.randn(20))        # bias has out_features elements
# and later in forward(input1, input2):
out1 = F.linear(input1, weight, bias)
out2 = F.linear(input2, weight, bias)
|
st117283
|
Hi all,
I have an NLP model that includes two RNNs: one of them, say RNN_Word, works at the word level, and the other, say RNN_Char, works at the character level. The model receives a sentence as input and outputs a label for each word in the sentence.
For each word, the final state of RNN_Char is concatenated with the word embedding and then this concatenated tensor is fed as input to RNN_Word.
I wonder how I can use mini-batch in training. I could group sentences with the same length (in number of words) and then, for each batch of sentences, I could group words with the same length (in number of characters), but this procedure seems to be rather inefficient.
I think I cannot use padding to force all words to have the same length, because my loss is defined at a word-level and the state of RNN_Char would be changing while I was feeding stuffing characters to it.
What do you suggest?
|
st117284
|
your best approach is to group sentences of the same length together (as you mentioned).
Alternatively, you can mini-batch the char level and the word level separately.
|
st117285
|
Thank you for your reply! Yes, but even if I group sentences of the same length together, then, for each group of sentences, I need to group words of the same length together, right?
|
st117286
|
I am trying to design a network which contains a recurrent sub-network. The input of this sub-network changes over iterations (because its current input depends on its last output). However, I find the network works much worse in evaluation mode than in training mode. Through debugging, I found it's a problem with the batchnorm statistics, i.e. running_mean and running_var. As stated in the recent paper "Recurrent Batch Normalization" (https://arxiv.org/pdf/1603.09025.pdf), the batchnorm statistics should be kept separately for each iteration. I have also seen one implementation of this paper, https://github.com/jihunchoi/recurrent-batch-normalization-pytorch/blob/master/bnlstm.py, but I don't think it works, since it doesn't implement the important separate batch statistics for different iterations. Does someone know how to implement this, or other tricks that have an equivalent effect?
|
st117287
|
Some people may suggest creating multiple BN modules and using the first one for the first iteration, the second one for the second iteration, and so on. The problem is that, as pointed out in the paper mentioned above, the learnable parameters gamma and beta should be shared between these BN modules; only the running_mean and running_var should be kept separate per iteration. If we do it this way, how do we force them to share the learnable weights gamma and beta?
|
st117288
|
you can create the modules and then manually set the gamma and beta of all the modules to those of just one of them (PyTorch's BatchNorm modules store gamma as weight and beta as bias).
For example:
a = nn.BatchNorm2d(...)
b = nn.BatchNorm2d(...)
a.weight = b.weight  # gamma
a.bias = b.bias      # beta
|
st117289
|
Thanks. But I think this is just the overall mean for normalization, not the mean image (3x256x256)?
|
st117290
|
we don't have a full mean image (as it is not necessary to achieve convergence). We just compute one global mean value per image channel, instead of a mean image.
|
st117291
|
I followed this paper (https://arxiv.org/abs/1312.6034) and tried to implement class model visualization for vgg16/alexnet, but I failed, and I'm not sure why. Attached is the failed dumbbell (image id: 544) image after running 200 epochs.
|
st117292
|
The only one I know is: https://github.com/leelabcnbc/cnnvis-pytorch/blob/master/test.ipynb
|
st117293
|
I can quote from the documentation, while I am using torch.nn.LSTM:
If a torch.nn.utils.rnn.PackedSequence has been given as the input, the output will also be a packed sequence.
In my code I declared: rnn = nn.LSTM(inpSiz, hidSiz, 2, batch_first=True)
I then fed in X. X is a packed sequence of an autograd Variable, and deep inside, it has a FloatTensor of size (batchSiz, maxLen, dims). I got the output of the LSTM, which in turn was passed to log_softmax(), producing an output of shape (batchSiz*maxLen, prob_distribution_over_10_classes). The log probabilities were then fed into nn.NLLLoss() with targets t. t is also an autograd.Variable, and deep inside, has size (batchSiz*seqLen,). Please note that seqLen != maxLen. So that gave me the following error:
RuntimeError: Assertion 'THIndexTensor_(size)(target, 0) == batch_size' failed. at /py/conda-bld/pytorch_1493673470840/work/torch/lib/THNN/generic/ClassNLLCriterion.c:50
Upon receiving this error, I padded the targets t with 0s as well, like I did for X. Now t is of size (batchSiz*maxLen,) and everything works. I don't find any useful information on what t should be when an rnn.PackedSequence is fed. Am I doing it right? Another question: is padding with 0s okay? Because in my one-hot encoding the first class is 0.
|
st117294
|
I created my own custom loss. In both forward and backward, I need to enumerate each sample with a Python for loop. There is no way to vectorize the operation, since each sample has different properties.
This approach makes the computation very slow, and using a GPU does not really help. My guess is that the for loop is not being parallelized on the GPU.
How can I write a for loop in forward and backward that can be parallelized by the GPU?
Thanks
|
st117295
|
there is no easy way around this yet. While we are working on a JIT compiler that should give automatic batching, it won't be ready in the near future.
The only (hard) path to making it faster is likely to either partly vectorize your computation or learn GPU programming.
|
st117296
|
I have an interesting observation. If I save a model by using torch.save(model.state_dict(), "models.pth") and reload it, then the test accuracy changes. Here is the code I am using:
#save the model
CUDA_DEVICE = 1
net = torch.load("wideresnet.pth")
torch.save(net.state_dict(), "wideresnetState.pth")
# reload it
net1 = wrn.WideResNet(depth=28, num_classes = 10, widen_factor=10, dropRate=0.0)
net1 = torch.nn.DataParallel(net1, device_ids=[CUDA_DEVICE]).cuda(CUDA_DEVICE)
net1.load_state_dict(torch.load("wideresnetState.pth"))
The test accuracy of "net" is 96.29% while the test accuracy of "net1" is 94.99%.
|
st117297
|
net1 here is likely in .train() mode (modules are by default), and it will matter for BatchNorm.
You'll have to set net1 to eval mode using net1.eval()
|
st117298
|
Hi,
I have an image im and need to crop it to a bounding box (x1, x2, y1, y2). Currently I am doing im[y1:y2, x1:x2], but pytorch reminds me that indices should be integers or None. Is it possible to make the operation back-propagatable with respect to x1, x2, y1, y2?
|
st117299
|
My requirement is a little different from your paper. Rather than learning to focus on and transform a specific region of an image, I would like to deterministically crop a given region.
Thanks anyway!
|
st117300
|
indexing is not differentiable wrt the indices. you will need to use a non-differentiable optimization method, probably reinforcement learning.
|
st117301
|
Is there any advantage in installing from source (other than getting the most up-to-date version)? If so, is there a guide to correctly installing / setting up cuda that anyone could suggest? The docs assume a lot of knowledge that I seem to lack!
|
st117302
|
I think there is no particular advantage except if you want to have the latest version, or to push fixes.
|
st117303
|
I have a list containing a set of image file names and their corresponding labels.
These images are stored in a folder with some other images.
How can I load these images with DataLoader()?
|
st117304
|
torchvision.datasets.ImageFolder(root, transform=None, target_transform=None, loader=default_loader). This may help you.
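If your images share the folder with other files, a custom Dataset may fit better. A minimal sketch of one way to do it (ListDataset, file_label_list and root are hypothetical names):
import os
from PIL import Image
import torch.utils.data as data

class ListDataset(data.Dataset):
    def __init__(self, root, file_label_list, transform=None):
        self.root = root
        self.samples = file_label_list  # [(filename, label), ...]
        self.transform = transform

    def __getitem__(self, index):
        name, label = self.samples[index]
        img = Image.open(os.path.join(self.root, name)).convert('RGB')
        if self.transform is not None:
            img = self.transform(img)
        return img, label

    def __len__(self):
        return len(self.samples)

A DataLoader can then wrap ListDataset(...) as usual.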
|
st117305
|
May I ask what is inside 'loader'?
Is it a list containing a set of image file names as str objects?
|
st117306
|
I am not sure what you meant, but loader is a function that loads files/images.
make_dataset() is where the list of files that will be randomly selected and passed to loader is defined. By checking the code inside make_dataset(), you will get a better idea of what "list of files" will be collected.
github.com
pytorch/vision/blob/master/torchvision/datasets/folder.py#L24
        filename (string): path to a file
    Returns:
        bool: True if the filename ends with a known image extension
    """
    filename_lower = filename.lower()
    return any(filename_lower.endswith(ext) for ext in IMG_EXTENSIONS)

def find_classes(dir):
    classes = [d for d in os.listdir(dir) if os.path.isdir(os.path.join(dir, d))]
    classes.sort()
    class_to_idx = {classes[i]: i for i in range(len(classes))}
    return classes, class_to_idx

def make_dataset(dir, class_to_idx):
    images = []
    dir = os.path.expanduser(dir)
    for target in sorted(os.listdir(dir)):
        d = os.path.join(dir, target)
|
st117307
|
I want to train fully convolutional networks for pixel-wise semantic segmentation, and I want to use vgg_bn or resnet (which have BatchNorm layers in them) as initialization. However, training an FCN is memory-intensive, so batchsize = 1 is often chosen, which makes finetuning the BatchNorm layers pointless. I therefore wish to freeze the parameters and moving averages of the BN layers during finetuning.
My question is how to realize this.
I have tried setting requires_grad to False for all BN layers, but the moving averages still change after each forward propagation.
Can anyone help?
|
st117308
|
You need to set the batchnorm modules to eval() mode. If you are calling batchnorm with the Functional interface, this involves setting “training=false.” Otherwise just make sure all the modules you aren’t training have had .eval() called on them. Note that .train() or .eval() calls propagate to all child modules, so if you call .train() on your whole network you’ll have to call .eval() separately on the modules you don’t want to finetune.
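A minimal sketch of that pattern, assuming model is your network:
import torch.nn as nn

def set_bn_eval(m):
    if isinstance(m, nn.BatchNorm2d):
        m.eval()  # stops running_mean/running_var from updating

model.train()             # training mode for everything...
model.apply(set_bn_eval)  # ...then re-freeze just the BN layers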
|
st117309
|
Hi @ajbrock, thanks for your help. I used nn.BatchNorm2d and called .eval() on them in the init function. I think they are reset by the parent module's .train() calls.
|
st117310
|
Yes, that is precisely what I said. You need to call .eval() on the modules you want to put into inference mode, after you call .train() on the parent modules.
|
st117311
|
There is a Tensor x with shape (3,2,3) and a Tensor y with shape (3,3,480000); computing torch.bmm(x,y) gives Z.
In the meantime, I got Z_n by using torch.from_numpy(x.numpy() @ y.numpy()), and I find that Z is not equal to Z_n… Why is that?
The code looks like this:
X = torch.Tensor(x)  # size (3,2,3)
Y = torch.Tensor(y)  # size (3,3,480000)
Z = torch.bmm(X, Y)  # size (3,2,480000)
Z_n = torch.from_numpy(X.numpy() @ Y.numpy())
|
st117312
|
I just tried your snippet with random numbers, and the difference I got (for floats) was in the order of 1e-7, so it seems to be equivalent to me. Do you have a reproducible snippet?
For reference, here is what I used:
import torch
import numpy as np
X = torch.rand(3, 2, 3)
Y = torch.rand(3, 3, 480000)
Z = torch.bmm(X, Y)
Z_n=torch.from_numpy(X.numpy() @ Y.numpy())
print((Z - Z_n).abs().max())
|
st117313
|
@fmassa
The relevant code is:
batch = torch.from_numpy(batch)  # batch is the matrix of an image ranged from 0 to 1, batch size 3, size (3,1200,1600,3)
initial = np.array([[2,0,0],[0,2,0]])
initial = initial.astype('float32')
initial = initial.flatten()
W = torch.zeros(1200*1600*3, 6)
b = torch.Tensor(initial)
h = torch.from_numpy(torch.zeros(3, 1200*1600*3).numpy() @ W.numpy() + b.numpy())
h = h.view(-1, 2, 3).type(torch.Tensor)  # size (3,2,3)
x_t = torch.mm(torch.ones(600,1), torch.unsqueeze(torch.linspace(-1.0,1.0,800),1).transpose(1,0))
y_t = torch.mm(torch.unsqueeze(torch.linspace(-1.0,1.0,600),1), torch.ones(1,800))
x_t_flat = x_t.view(1,-1)
y_t_flat = y_t.view(1,-1)
ones = torch.ones(x_t_flat.size())
grid = torch.cat([x_t_flat, y_t_flat, ones], 0)
grid = grid.view(-1).repeat(3,1).view(3,3,-1)  # size (3,3,480000)
First,I use this:
T=torch.from_numpy(h.numpy() @ grid.numpy())
and then another way:
T_another=torch.bmm(h,grid)
the subsequent code is omitted.
and the final output image matrix can't be shown normally with imshow() using the second way, but the first is OK. When I looked back at the code, I found that T is unequal to T_another.
|
st117314
|
It's very difficult to debug anything with the snippet you sent. Check the value of the difference between both matrices; I suspect you either have nans, or the difference is such that you go slightly beyond 0 or 1.
|
st117315
|
Now I have found a strange thing,
I have a tensor theta with the size (3,2,3)
theta
2 0 0
0 2 0
[torch.FloatTensor of size 2x3]
2 0 0
0 2 0
[torch.FloatTensor of size 2x3]
2 0 0
0 2 0
[torch.FloatTensor of size 2x3]
and a tensor grid with the size (3,3,480000)
grid
-1.0000 -0.9975 -0.9950 … 0.9950 0.9975 1.0000
-1.0000 -1.0000 -1.0000 … 1.0000 1.0000 1.0000
1.0000 1.0000 1.0000 … 1.0000 1.0000 1.0000
[torch.FloatTensor of size 3x480000]
-1.0000 -0.9975 -0.9950 … 0.9950 0.9975 1.0000
-1.0000 -1.0000 -1.0000 … 1.0000 1.0000 1.0000
1.0000 1.0000 1.0000 … 1.0000 1.0000 1.0000
[torch.FloatTensor of size 3x480000]
-1.0000 -0.9975 -0.9950 … 0.9950 0.9975 1.0000
-1.0000 -1.0000 -1.0000 … 1.0000 1.0000 1.0000
1.0000 1.0000 1.0000 … 1.0000 1.0000 1.0000
[torch.FloatTensor of size 3x480000]
then I use torch.bmm:
torch.bmm(theta,grid)
( 0 ,.,.) =
-2.0000 -1.9950 -1.9900 … 0.0000 0.0000 0.0000
-2.0000 -2.0000 -2.0000 … 0.0000 0.0000 0.0000
( 1 ,.,.) =
-2.0000 -1.9950 -1.9900 … 0.0000 0.0000 0.0000
-2.0000 -2.0000 -2.0000 … 0.0000 0.0000 0.0000
( 2 ,.,.) =
-2.0000 -1.9950 -1.9900 … 0.0000 0.0000 0.0000
-2.0000 -2.0000 -2.0000 … 0.0000 0.0000 0.0000
[torch.FloatTensor of size 3x2x480000]
and obviously the output is wrong, the right answer is:
theta.numpy() @ grid.numpy()
( 0 ,.,.) =
-2.0000 -1.9950 -1.9900 … 1.9900 1.9950 2.0000
-2.0000 -2.0000 -2.0000 … 2.0000 2.0000 2.0000
( 1 ,.,.) =
-2.0000 -1.9950 -1.9900 … 1.9900 1.9950 2.0000
-2.0000 -2.0000 -2.0000 … 2.0000 2.0000 2.0000
( 2 ,.,.) =
-2.0000 -1.9950 -1.9900 … 1.9900 1.9950 2.0000
-2.0000 -2.0000 -2.0000 … 2.0000 2.0000 2.0000
[torch.FloatTensor of size 3x2x480000]
so I wonder how it is happening?
|
st117316
|
The reason seems to be that the dimensions of the matrix are too large, and pytorch omits the bottom part of the matrix. Are there any solutions?
|
st117317
|
If you can come up with a minimal working example (with random inputs) that reproduces the problem, could you please open an issue on pytorch?
|
st117318
|
Memory usage on the two machines is extremely different: the one with the K80 is using more than 2 times that of the one with the Titan X. Any thoughts? The system setups are the same: Ubuntu 16.04, cuda 8.0, cudnn 5.1. Thanks!
|
st117319
|
If cudnn.benchmark = True, it will automatically find the proper algorithms for the installed GPUs, and may choose different convolution algorithms for the K80 and the Titan X, which then have different memory consumptions. Maybe try setting it to False and see if they are the same.
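For example:
import torch.backends.cudnn as cudnn
cudnn.benchmark = False  # pin algorithm selection so both GPUs pick the same convolution algorithms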
|
st117320
|
We have recently been tracking a memory leak on our machine. I created a very simple script which just does this:
a = []
for i in range(100):
    t = torch.ones(10, 1000, 1000)
    if cuda:
        t = t.cuda()
    a.append(t)
and I use a flag to control whether .cuda() is used or not. It seems that the CUDA part is causing some kind of memory leak. In the following data, "used" started at 4.8G, but that was dead memory that did not show up as used by any process in top. After running the cuda version of the loop a few times, 300M more memory was dead. I wonder if anyone has any idea as to what is happening? I am using RedHat 4.4.2, Python 2.7.13 with the newest PyTorch, CUDA 7.5.17.
test$ free -h
total used free shared buffers cached
Mem: 23G 6.0G 17G 9.1M 360M 919M
-/+ buffers/cache: 4.8G 18G
Swap: 82G 0B 82G
test$ python test.py cpu
test$ python test.py cpu
test$ python test.py cpu
test$ python test.py cpu
test$ python test.py cpu
test$ free -h
total used free shared buffers cached
Mem: 23G 6.0G 17G 9.1M 360M 919M
-/+ buffers/cache: 4.8G 18G
Swap: 82G 0B 82G
test$ python test.py cuda
test$ python test.py cuda
test$ python test.py cuda
test$ python test.py cuda
test$ python test.py cuda
test$ free -h
total used free shared buffers cached
Mem: 23G 6.4G 17G 9.1M 360M 919M
-/+ buffers/cache: 5.2G 18G
Swap: 82G 0B 82G
|
st117321
|
Please understand that the kernel does not always have to free memory unless it thinks it needs to; it might be caching some pages for various reasons. This is not a memory leak, unless you literally cannot reclaim the memory (i.e. if you try to allocate the 18G + 0.4G and allocations fail).
|
st117322
|
I do understand that. This used memory is never reclaimed by the kernel at any point, even when it goes into swap because it runs out of memory while a memory-intensive program is running.
|
st117323
|
I see. If you actually tested memory allocations up to the system limits and they seem to fail, maybe it's a CUDA bug. At this point the best thing to try is probably upgrading CUDA versions; I can't think of anything else.
My previous answer was from seeing this behavior on some of my systems and then realizing that the kernel was just caching some pages.
|
st117324
|
It did. We did a very trivial test creating a lot of tensors either on the GPU or on the CPU and found that it was definitely the GPU. The update solved it.
|
st117325
|
Hi all!
I am implementing the model of the paper, Gulcehre, Caglar, et al. “Pointing the unknown words.” arXiv preprint arXiv:1603.08148 (2016).
But I need a special NLL loss function which does not sum the values inside it.
copySwitchValue = sigmoid(CopySwitch(concat([DecGRUState, attentionState], dim=1)));
copyVocabProb = attentionProb;
copyOutProb = copyVocabProb * copySwitchValue + 0.000001;
copyLoss = LogLoss(copyOutProb, trgCopyId) * copyMask;
readoutState = ReadOut(concat([wordEmbed, attentionState, DecGRUState], dim=1));
scoringResult = Scoring.BuildNet(readoutState);
vocabProb = softmax(scoringResult) * (1 - copySwitchValue) + 0.000001;
vocabLoss = LogLoss(vocabProb, trgWordId) * (1 - copyMask);
For example, suppose the source sentence has 20 words, target has 18 words, batch size is 64 and vocab size 30000. So we have copyOutProb (18, 64, 20), vocabProb (18, 64, 30000), copyMask (18, 64).
What I want to implement here is to get two LogLoss value matrices of shape (18, 64) so I can mask them with copyMask. After being masked, they can be summed up to get the final loss value. The current NLLLoss sums everything up, so I am wondering: is there a way to do this loss masking?
Thanks.
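One route that should work is to skip NLLLoss and compute the per-position log loss yourself with gather. A minimal sketch under the shapes above (copy_out_prob (18, 64, 20), trg_copy_id (18, 64) LongTensor, and copy_mask (18, 64) are assumed names for your tensors):
per_token = -copy_out_prob.gather(2, trg_copy_id.unsqueeze(2)).squeeze(2).log()  # (18, 64)
copy_loss = (per_token * copy_mask).sum()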
|
st117326
|
Hi
Is there any plan to support 3d upsampling (e.g. NearestUpsampling3d) soon?
If not, are there any other "shortcut" ways to realize NearestUpsampling3d with current functions?
|
st117327
|
on the master branch, we now have F.upsample that supports both 2D and 3D inputs for nearest and bilinear modes: http://pytorch.org/docs/nn.html?highlight=upsample#torch.nn.functional.upsample
It will be part of the next release 0.2.0
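Usage should look roughly like this for the 3D nearest case (a sketch against the master-branch API):
import torch
import torch.nn.functional as F
from torch.autograd import Variable

x = Variable(torch.rand(1, 4, 8, 8, 8))  # (N, C, D, H, W)
y = F.upsample(x, scale_factor=2, mode='nearest')
print(y.size())  # (1, 4, 16, 16, 16)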
|
st117328
|
Tried to run the ImageNet example (https://github.com/pytorch/examples/tree/master/imagenet) under Python 3.5, and it is trying to write data to Soumith's computer. I don't think my Ethernet cable is long enough to reach San Francisco, so what are my other options?
Process Process-4:
Traceback (most recent call last):
File "/conda3/envs/idp/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
self.run()
File "/conda3/envs/idp/lib/python3.5/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/conda3/envs/idp/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 36, in _worker_loop
data_queue.put((idx, samples))
File "/conda3/envs/idp/lib/python3.5/multiprocessing/queues.py", line 349, in put
obj = ForkingPickler.dumps(obj)
File "/conda3/envs/idp/lib/python3.5/multiprocessing/reduction.py", line 50, in dumps
cls(buf, protocol).dump(obj)
File "/conda3/envs/idp/lib/python3.5/site-packages/torch/multiprocessing/reductions.py", line 113, in reduce_storage
fd, size = storage._share_fd_()
RuntimeError: unable to write to file </torch_6487_1133870694> at /data/users/soumith/miniconda2/conda-bld/pytorch-cuda80-0.1.10_1488757768560/work/torch/lib/TH/THAllocator.c:267
|
st117329
|
Hi,
FuriouslyCurious:
/data/users/soumith/miniconda2/conda-bld/pytorch-cuda80-0.1.10_1488757768560/work/torch/lib/TH/THAllocator.c:267
this does not try to write to soumith's computer; the source file location was embedded into the error message when the pytorch library was compiled. Apparently the conda builds are done in the directory /data/users/soumith/miniconda2/conda-bld.
If you compile pytorch from source, you get your own path there.
The path that it appears to try to write to is /torch_6487_1133870694 (at the filesystem root), and you would not want it there. I suspect that you either don't have TEMP set or have some other path wrong. Maybe you have an empty path where you would want '.' instead.
Best regards
Thomas
|
st117330
|
I had this problem when using the docker image - the solution was to add the flag "--ipc=host" to docker run:
nvidia-docker run --rm -ti --ipc=host pytorch-cudnnv6
Full doc for the docker image - https://github.com/pytorch/pytorch#docker-image
|
st117331
|
not specifically a pytorch question, but does the vocab size matter when training RNNs? I started with a character-level rnn, but I'm wondering: if I move to longer words / a specialized vocabulary, will it improve model performance? From what I understand of RNNs, using longer vocab units means the RNN can effectively cover longer sequences, because there is more information embedded per step. Are there any papers that go over this kind of topic that I should read?
|
st117332
|
For a research project, I need to load a dataset of images, plus numerical labels for drive and motor data. Each image is also labeled with a flag, and I need to be able to choose at runtime which image flags to ignore and which to keep. Currently the data-loading system was written and optimized for Caffe and uses the hdf5 format; during training, extracting and converting the hdf5 data is the main bottleneck. Because of the ignore-flag feature, I was not able to use the default PyTorch Dataset to dynamically load the data, since a call to get item in the dataset must return an item every time. I am looking to convert our current dataset into a format that would be optimized for loading into PyTorch, and that also supports loading a frame and skipping it if its ignore flag is set. My current code for importing the dataset and using it for training is available here:
###Loading hdf5 Dataset w/ Ignore List
github.com
sauhaardac/pytorch-neural-z2color/blob/master/libs/import_utils.py
import pickle
import h5py
from utils import *

Segment_Data = {}

hdf5_runs_path = desktop_dir('bair_car_data/hdf5/runs')
hdf5_segment_metadata_path = desktop_dir('bair_car_data/hdf5/segment_metadata')

def load_hdf5(path):
    F = h5py.File(path)
    labels = {}
    Lb = F['labels']
    for k in Lb.keys():
        if Lb[k][0]:
            labels[k] = True
        else:
            labels[k] = False
(file truncated)
###Use of Dataset in Traning Code
github.com
sauhaardac/pytorch-neural-z2color/blob/master/train.py
import argparse
import datetime
import random

import torch
import torch.nn as nn
import torch.nn.utils as nnutils
from torch.autograd import Variable

from libs.import_utils import *
from nets.z2_color import Z2Color

# Define Arguments and Default Values
parser = argparse.ArgumentParser(description='PyTorch z2_color Training',
                                 formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument('--validate', type=str, metavar='PATH',
                    help='path to model for validation')
# parser.add_argument('--skipfirstval', type=str, metavar='PATH',
#                     help='Skip the first validation (if restoring an end of epoch save file)')
parser.add_argument('--resume', type=str, metavar='PATH',
                    help='path to model for training resume')
parser.add_argument('--ignore', default=['reject_run', 'left', 'out1_in2', 'racing', 'Smyth'], type=str, nargs='+',
(file truncated)
What would be the best format to use for my particular use case? Are there any examples of setting up a similar type of dataset?
|
st117333
|
In general, I write my own Dataset class that inherits from the PyTorch Dataset and it handles all the logic of what data and labels to feed to the network when. Then the PyTorch Data Loader doesn’t have to know about any of that, it just loads pairs.
Disclaimer: I didn’t read the code, so I’m not sure precisely what the problem is.
|
st117334
|
Sorry if I wasn't being clear. My specific problem is that the PyTorch Dataset class has a get_item function that requires an index, and I don't know whether the data at a certain index will be used until I load it and check its flag. Each time get_item is called I need to return something, so this hasn't been working for me. Is there a way I can tell the pytorch dataset class to skip a particular index after a call to get_item?
|
st117335
|
It only takes an index so that the DataLoader can load a certain number of images during training (e.g. one epoch worth of images). get_item could essentially ignore the index and iteratively load data, check the ignore flag, and only return the data if ignore is False. That’s probably better than skipping the index, because you’ll actually go through the same number of datapoints each time you call the DataLoader.
|
st117336
|
How would the DataLoader know when I’m out of data? If the index is ignored and I just iteratively load data until it’s over, how can I signal to the data loader that I am done going through the dataset? Is it possible for get_item to return None and the DataLoader to ignore that index? I have my dataset indexed it’s just the ignore list that needs to happen dynamically.
|
st117337
|
The DataLoader samples data points until it has selected len(dataset) number of samples. So you could just set the length of your dataset to be a fixed number (by overriding the __len__ method).
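Putting the two ideas together, a hypothetical sketch (records and load_image are assumed names):
import torch.utils.data as data

class FilteredDataset(data.Dataset):
    def __init__(self, records, ignore_flags):
        self.records = records  # list of (image_path, label, flag)
        self.ignore_flags = set(ignore_flags)

    def __getitem__(self, index):
        # walk forward (wrapping around) until a non-ignored record is found
        for offset in range(len(self.records)):
            path, label, flag = self.records[(index + offset) % len(self.records)]
            if flag not in self.ignore_flags:
                return load_image(path), label
        raise RuntimeError('every record is flagged as ignored')

    def __len__(self):
        return len(self.records)  # or any fixed "epoch length" you prefer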
|
st117338
|
With this approach could I continue to have epoch training behavior?
Currently I have some saving and validation code that should run after each
epoch of data is shown.
|
st117339
|
I noticed that there is a difference in the resnet architecture between caffe and pytorch, concerning the downsample operation:
in pytorch, downsample is executed at the end of each Bottleneck (if needed), as below:
class Bottleneck(nn.Module):
    expansion = 4

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super(Bottleneck, self).__init__()
        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
                               padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(planes * 4)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        residual = x
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)
        out = self.conv3(out)
        out = self.bn3(out)
        if self.downsample is not None:
            residual = self.downsample(x)
        out += residual
        out = self.relu(out)
        return out
But in caffe prototxt, the position of downsample is at the top of each Botteleneck.
(image: Netscope rendering of the caffe prototxt, 696×674)
I am curious about the reason, and what will happen to this difference?
|
st117340
|
It’s the same in both cases - the downsizing is not at the top or bottom of the main “trunk” path, but rather in the parallel shortcut path.
In the caffe case it’s the res5a_branch1 convolution that’s doing the downsizing. In the PyTorch case note that it’s downsample(x), not downsample(out) - this is the parallel shortcut branch, not an operation being added after bn3(out).
Ben
|
st117341
|
I met an out-of-memory problem:
File "/home/yfwu/ctavatar/tools-pytorch/reg_gan3d.py", line 171, in train
errG.backward()
File "/home/yfwu/pytorch/local/lib/python2.7/site-packages/torch/autograd/variable.py", line 146, in backward
self._execution_engine.run_backward((self,), (gradient,), retain_variables)
RuntimeError: cuda runtime error (2) : out of memory at /b/wheel/pytorch-src/torch/lib/THC/generic/THCStorage.cu:66
I tried several times and it happened at the same point (after 370 batches of training).
This didn't happen before I added the line netG.load_state_dict(torch.load('../models/netG_epoch_6.pth')), and I can train successfully without the pre-trained model.
I am wondering if something is kept around because of this line. Many thanks in advance.
Following is my code:
def train(epoch):
    netD.train()
    netG.load_state_dict(torch.load('../models/netG_epoch_6.pth'))
    netG.train()
    epoch_loss_D = 0
    epoch_loss_G = 0
    for batch_idx, (ins, tgs) in enumerate(train_dataloader):
        ins = Variable(ins.cuda())
        tgs = Variable(tgs.cuda())
        ############################
        # (1) Update D network: maximize log(D(tgs)) + log(1 - D(G(ins)))
        ###########################
        # train with real
        optimizerD.zero_grad()
        output_D = netD(tgs)
        label.resize_(1).fill_(real_label)
        labelv = Variable(label)
        errD_real = criterion_GAN(output_D, labelv)
        errD_real.backward()
        # train with fake
        fake = netG(ins)
        labelv = Variable(label.fill_(fake_label))
        output_D = netD(fake.detach())
        errD_fake = criterion_GAN(output_D, labelv)
        errD_fake.backward()
        errD = errD_real + errD_fake
        optimizerD.step()
        ############################
        # (2) Update G network: maximize log(D(G(z)))
        ###########################
        optimizerG.zero_grad()
        labelv = Variable(label.fill_(real_label))
        output_D = netD(fake)
        errG = criterion_GAN(output_D, labelv)
        errG.backward()
        optimizerG.step()
|
st117342
|
My problem has been solved. See:
CUDA memory continuously increases when net(images) called in every iteration
Hi, I have a very strange error, whereby, when I get by outputs = net(images) within every iteration in a for loop, the CUDA memory usage keeps on increasing, until the GPU runs out of memory.
The weird situation is that if I have this loop inside a function, it causes this issue. If I just have the contents of the function sitting in my normal script, it works just fine. What may be the cause of this??
Thanks
Thanks pytorch forum!
|
st117343
|
Hello,
When I ran my model on multiple GPUs, I ran into an out-of-memory error. So I halved the batch_size, and again ran into the same error (after a longer while).
So I checked the GPU memory usage with nvidia-smi, and have two questions.
Here is the output of nvidia-smi:
| 0 33446 C python 9446MiB |
| 1 33446 C python 5973MiB |
| 2 33446 C python 5973MiB |
| 3 33446 C python 5945MiB |
+-----------------------------------------------------------------------------+
The memory on GPU 0 is roughly twice as used as on the other three. Is it normal for the first GPU to be more heavily used?
The memory usage increases slowly as the program runs, which eventually causes the out-of-memory error. How can I solve the problem?
BTW, I installed pytorch using the latest Dockerfile.
Thanks!
|
st117344
|
Hi Dear RangoHU,
I face the same problem. Do you have any idea why the memory increases during training?
Thanks!
|
st117345
|
My out of memory problem has been solved. Please check
CUDA memory continuously increases when net(images) called in every iteration
Hi, I have a very strange error, whereby, when I get by outputs = net(images) within every iteration in a for loop, the CUDA memory usage keeps on increasing, until the GPU runs out of memory.
The weird situation is that if I have this loop inside a function, it causes this issue. If I just have the contents of the function sitting in my normal script, it works just fine. What may be the cause of this??
Thanks
|
st117346
|
I’m new to Pytorch so I apologize if this is an obvious question.
I'm training in batches. Each item in the batch has N vectors (these are RNN embeddings of sequences); items may have different numbers of vectors, i.e. the first item might have 2, the second might have 7, and so on.
For each item in the batch, I want to multiply each of the RNN embeddings against another vector unique to that batch item. I.e.,
for i in range(batch_size):
    rnn_embs = batch_rnn_embs[i]  # Variable of shape N_i x H
    vecs = batch_vecs[i]          # Variable of shape H x 1
    scores = rnn_embs.mm(vecs)    # Variable of shape N_i x 1
However, because these are in a batch, I eventually want the scores vector to have the same length for every item in the batch, i.e. some N_max (with the "padding" value -inf). I tried the following:
padding = torch.FloatTensor(pad_length, 1).fill_(-float('inf'))
padded_scores = torch.cat([scores, padding], 0)
But I get an error: "expected a Variable argument, but got torch.cuda.FloatTensor". I don't want the padding to be a learnable Variable. It seems I can't concatenate a tensor and a Variable.
I'm also wondering if I could simply extend scores to be of length N_max, then set the padding values using masked_fill. But I'm also not sure how to extend a Variable.
The last resort would be using bmm. It's possible, but it makes everything a bit chunkier, especially on the non-PyTorch side (I'd need to compute masking/padding ahead of time and make sure it works in batch properly). I thought this kind of thing would be easier in PyTorch (as opposed to Tensorflow).
|
st117347
|
I fixed this by doing the following:
torch.autograd.Variable(torch.FloatTensor(pad_length, 1).fill_(-float('inf')), requires_grad=False)
|
st117348
|
I'm playing around with the pretrained models.
I know a network like alexnet has 4096 features before classification. Is there a way in pytorch to see the number of features for all the different pretrained networks using the same code? For example, I'm looking at resnet152 and can't tell from reading the code here how many features it has: https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py
|
st117349
|
You could print(your_model).
It will display detailed information about every layer.
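For the classifier input size specifically, a quick sketch:
import torchvision.models as models

net = models.resnet152()
print(net.fc.in_features)  # 2048 for resnet152
print(net)                 # full per-layer breakdown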
|
st117350
|
Is there any way of plotting or saving a Variable as image after a conv2d operation?
|
st117351
|
you can save it / plot it as an image using make_grid or save_image here: https://github.com/pytorch/vision#utils
|
st117352
|
Something doesn't fit. In the forward method of my module, I am calling save_image(x.data, "before_conv.png") for a Variable of shape [1, 4, 80, 80], and it gives me an error: RuntimeError: inconsistent tensor size at /py/conda-bld/pytorch_1490980628440/work/torch/lib/TH/generic/THTensorCopy.c:51
|
st117353
|
you can't save a 4-channel image. You will have to save each of the 80x80 images separately, e.g.:
save_image(x.data[0, 0, :, :], ...)
|
st117354
|
For a 3-D tensor z = torch.rand(2, 3, 5), z.t() raises RuntimeError: t() expects a 2D tensor, but self is 3D. But when wrapping it as a Variable, z = Variable(torch.rand(2, 3, 5)), z.t() works normally. Is there any plausible explanation for this different behavior? Thanks.
|
st117355
|
@cswhjiang Thanks for your reply. Yes, just transposing the first two dimensions. My confusion is why this method has inconsistent results for Tensor and Variable.
|
st117356
|
This is a bug (i.e. we didn’t have checks).
I've added them in this PR: https://github.com/pytorch/pytorch/pull/1823
|
st117357
|
Hi, I'm building a model using a bidirectional GRU, so I use nn.GRU(bidirectional=True).
From the doc, I gathered that its outputs are
Outputs: output, h_n
- **output** (seq_len, batch, hidden_size * num_directions): tensor containing the output features h_t from
the last layer of the RNN, for each t. If a :class:`torch.nn.utils.rnn.PackedSequence` has been given as the
input, the output will also be a packed sequence.
- **h_n** (num_layers * num_directions, batch, hidden_size): tensor containing the hidden state for t=seq_len
And the first dim of h_n covers all the GRU cells, because in the model there are (num_layers * num_directions) GRU cells.
But I wonder whether h_n is laid out like
[
layer0_forward
layer0_backward
layer1_forward
layer1_backward
layer2_forward
layer2_backward
…
] or
[
layer0_forward
layer1_forward
layer2_forward
layer0_backward
layer1_backward
layer2_backward
…
]
Does anybody know? Or how can I figure it out?
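One way to probe it empirically (a sketch, assuming seq-first output; the last dim of output is [forward | backward] concatenated):
h = h_n.view(num_layers, num_directions, batch, hidden_size)
fwd_last = output[-1, :, :hidden_size]  # forward features at t = seq_len - 1
bwd_last = output[0, :, hidden_size:]   # backward features at t = 0
# check which slices of h[-1] these match to identify the ordering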
|
st117358
|
After checking the GRU implementation, I think I can probably find the answer in the implementation of self._backend.RNN:
func = self._backend.RNN(
self.mode,
self.input_size,
self.hidden_size,
num_layers=self.num_layers,
batch_first=self.batch_first,
dropout=self.dropout,
train=self.training,
bidirectional=self.bidirectional,
batch_sizes=batch_sizes,
dropout_state=self.dropout_state
)
But how can I see this code?
|
st117359
|
Alright, but I still have a question about the difference between the last time-step output of the GRU and the GRU's final state.
They are expected to be the same, right? I print them out and find they are not quite the same.
I ran the code below.
# coding:utf-8
import torch
import torch.nn as nn
import numpy as np
from torch.autograd import Variable
seq_len = 6
batch_size = 1
input_size = 1
hidden_size = 10
num_layers = 3
batch_first = True
bidirectional = True
torch.manual_seed(1)
# **input** (seq_len, batch, input_size)
x = torch.rand(batch_size, seq_len, input_size)
x_rev = torch.from_numpy(np.flip(x.numpy(), 1).copy())
# (num_layers * num_directions, batch, hidden_size)
if bidirectional:
    num_directions = 2
else:
    num_directions = 1
h0 = torch.rand(num_layers * num_directions, batch_size, hidden_size)
rnn = nn.GRU(input_size, hidden_size, num_layers,
             batch_first=batch_first, bidirectional=bidirectional)
print rnn
out, ht = rnn(Variable(x), Variable(h0))
print out.data.size()
print ht.data.size()
assert out.data.numpy().shape == (batch_size, seq_len, hidden_size*num_directions)
assert ht.data.numpy().shape == (num_layers*num_directions, batch_size, hidden_size)
print "output:"
print out.data.numpy()[0, seq_len-1, :] # [hidden_size*num_directions]
print "===================================="
print "ht:"
print ht.data.numpy()[:, 0, :] # [num_layers*num_directions, hidden_size]
So I get the output like this:
>>>
GRU(1, 10, num_layers=3, batch_first=True, bidirectional=True)
(1L, 6L, 20L)
(6L, 1L, 10L)
output:
[ 0.04090527 0.04467951 0.06184166 -0.119278 -0.07899605 -0.17775261
-0.25711796 -0.0560216 -0.06801324 -0.62566853 0.09493496 -0.00143968
0.25473037 0.59195685 0.08295314 0.61662054 0.39969781 0.52175015
0.43700069 -0.04902107]
====================================
ht:
[[ 0.41622654 0.05891414 0.24079823 -0.20317592 0.20570976 0.07495184
0.31944707 -0.3336893 -0.17610091 -0.01868644]
[ 0.188013 -0.27898508 0.13432087 -0.079565 0.19181061 -0.28547999
-0.19238529 0.08653103 -0.33994722 0.12975907]
[-0.1610465 -0.1817638 -0.07482101 -0.04572783 0.27683198 0.16544969
0.10135207 -0.43468314 -0.46809191 -0.00571362]
[-0.27692401 -0.04289184 0.14566612 0.12111901 0.12315567 0.35866803
0.0838761 -0.08178325 0.40468279 -0.1950635 ]
[ 0.04090527 0.04467951 0.06184166 -0.119278 -0.07899605 -0.17775261
-0.25711796 -0.0560216 -0.06801324 -0.62566853]
[ 0.04620094 -0.34189698 0.08069657 0.39240748 -0.09260736 0.61043888
0.26960379 0.2404768 -0.13964601 0.07339926]]
As we can see, output[:hidden_size] == ht[4],
BUT output[hidden_size:] =
0.09493496 -0.00143968
0.25473037 0.59195685 0.08295314 0.61662054 0.39969781 0.52175015
0.43700069 -0.04902107
has no match anywhere in ht.
I check the doc
Outputs: output, h_n
- **output** (seq_len, batch, hidden_size * num_directions): tensor containing the output features h_t from
the last layer of the RNN, for each t. If a :class:`torch.nn.utils.rnn.PackedSequence` has been given as the
input, the output will also be a packed sequence.
- **h_n** (num_layers * num_directions, batch, hidden_size): tensor containing the hidden state for t=seq_len
and I don't know what I've missed. We've got the final-step forward output in h_n, but why not the final-step backward output? h_n should have num_directions final hidden states, right?
|
st117360
|
I'm observing a similar issue: only half of the elements of the final timestep of the output match the hidden state.
|
st117361
|
I believe this is answered here:
BiLSTM hidden states and output don't match
You’re actually comparing the first output of the reverse direction. I think you want out[0, :, 500:]
|
st117362
|
Hi
I’m training an RNN with LSTM cell. It works when I run the model on CPU, but when I load the model and the data on GPU like:
seq = Sequence()
seq.cuda()
input = Variable(torch.from_numpy(data).cuda())
out = seq(input)
and then I get the error:
TypeError: addmm_ received an invalid combination of arguments - got (int, int, torch.FloatTensor,
torch.cuda.FloatTensor), but expected one of:
(torch.FloatTensor mat1, torch.FloatTensor mat2)
(torch.SparseFloatTensor mat1, torch.FloatTensor mat2)
(float beta, torch.FloatTensor mat1, torch.FloatTensor mat2)
(float alpha, torch.FloatTensor mat1, torch.FloatTensor mat2)
(float beta, torch.SparseFloatTensor mat1, torch.FloatTensor mat2)
(float alpha, torch.SparseFloatTensor mat1, torch.FloatTensor mat2)
(float beta, float alpha, torch.FloatTensor mat1, torch.FloatTensor mat2)
didn’t match because some of the arguments have invalid types: (int, int, torch.FloatTensor, !torch.cuda.FloatTensor!)
(float beta, float alpha, torch.SparseFloatTensor mat1, torch.FloatTensor mat2)
didn’t match because some of the arguments have invalid types: (int, int, !torch.FloatTensor!, !torch.cuda.FloatTensor!)
It seems like the input is not on the GPU, but when I print the variable, the datatype is
[torch.cuda.DoubleTensor of size 7x6 (GPU 0)]
So how can I fix the problem?
Thank you.
|
st117363
|
If you implement your own nn.Module which includes some parameters, you should declare them as Parameters so that nn.Module.cuda() transfers them to GPU memory.
|
st117364
|
Hi, there is not enough information about Sequence(), but I think the hidden weight is a FloatTensor rather than a cuda.FloatTensor. It might be the cause of that problem.
|
st117365
|
Here's the source code for nn.Module.cuda():
http://pytorch.org/docs/_modules/torch/nn/modules/module.html#Module.cuda
In short, it iterates over all the sub-modules and nn.Parameters, and calls the .cuda() method recursively.
To make .cuda() work correctly, you should declare the variable in your Module like this:
self.weight = Parameter(torch.Tensor(out_features, in_features))
Ref:
http://pytorch.org/docs/nn.html?highlight=linear#parameters
|
st117366
|
Thanks.
The problem is that the Sequence() module includes some variables which are not loaded to the GPU: the cell states and hidden states of the LSTM. I just tried to declare these variables in the Module, but it seems a Variable cannot be included as a Parameter. So what I've done is call the .cuda() method on those variables in forward(). Is there a more efficient solution?
|
st117367
|
@zed no, that's the best you can do. Only nn.Parameter and registered buffers are typecast:
http://pytorch.org/docs/nn.html?highlight=register#torch.nn.Module.register_buffer
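A minimal sketch of the buffer route (h0/c0 are hypothetical initial states):
import torch
import torch.nn as nn

class Seq(nn.Module):
    def __init__(self, hidden_size):
        super(Seq, self).__init__()
        # registered buffers move with the module on .cuda()/.cpu()
        self.register_buffer('h0', torch.zeros(1, hidden_size))
        self.register_buffer('c0', torch.zeros(1, hidden_size))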
|