st119968
I think in the future we’ll also have to figure out some kind of format that allows for exporting graphs. This will be needed for Caffe2 and TF model exports.
st119969
Let me rephrase to be sure I understand correctly :). In Lua Torch, the nngraph objects are saved in the serialization, but we do need the code that builds the class – each module implements its own load function. Still, we have enough information from the .t7 file and the corresponding logic to build something functional. You mean that in Python the .pt does not contain the graph structure at all? When we save a model, do we need to specify the graph structure in order to load it? In the .pt file I see the following entries:
sys_info => just a dictionary with system info
tensors => the description of the tensors
storage => the actual storage
pickle => I thought this was where I would find the graph structure (or the Variable expression) – isn’t it?
Thanks! Jean
st119970
Yes, because in Lua Torch nngraph objects are the same all the time. PyTorch builds the graph dynamically, so every iteration uses fresh Python objects to represent the graph, and the recipe for the graph construction is the model’s code. Your descriptions are correct, but pickle won’t serialize your model class - it only saves a string like my_models.MyModel, plus its __dict__ entries. The name is a pointer that allows it to find your class at runtime; it will instantiate the class and copy the __dict__ items over. As you see, there’s no graph serialized there - only the parameters and the class name.
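To make this concrete, here is a minimal sketch (MyModel and the file name are hypothetical) showing that loading a saved model requires the class code to be importable - the checkpoint carries only the parameters, never the graph:

import torch
import torch.nn as nn

# Hypothetical model class - its code must be available at load time.
class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        # The graph is rebuilt here on every call; it is never serialized.
        return self.fc(x)

model = MyModel()
torch.save(model.state_dict(), 'my_model.pt')  # saves parameters only

# Loading: instantiate the class from code, then restore the parameters.
model2 = MyModel()
model2.load_state_dict(torch.load('my_model.pt'))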
st119971
I am getting the following error when I try to install on my 64-bit Ubuntu 14.04 LTS system. The command I run is:

pip install https://s3.amazonaws.com/pytorch/whl/cu80/torch-0.1.6.post22-cp27-cp27mu-linux_x86_64.whl

And the error I get is:

torch-0.1.6.post22-cp27-cp27mu-linux_x86_64.whl is not a supported wheel on this platform.

After that it says:

/data/venv/pytorch/local/lib/python2.7/site-packages/pip/vendor/requests/packages/urllib3/util/ssl.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning

For additional context, I made a virtual environment (called pytorch) first, and then within that environment I tried to run the install command.
st119972
This is now fixed. Please use the latest pip install commands from the website pytorch.org (I updated the website).
st119973
Hi, I know that it is not the intended use case for PyTorch, but I am wondering if it can use multiple CPUs if available (and if it does, does it do it automatically)? Say I have a cluster with 128 CPUs; would there be an equivalent to, e.g., TensorFlow’s

sess = tf.Session(config=tf.ConfigProto(intra_op_parallelism_threads=NUM_THREADS))
st119974
Hi, distributed training is going to be supported in the next release of PyTorch, in around 90 days from now. You can check the progress of the distributed implementation here.
st119975
Hi guys, where is the category file list for the models pre-trained in the zoo? We will need to upload that in order to use the model, right?
st119976
The list of category names that correspond to each of the output neurons of the CNN. Even for ImageNet, they need to be specified.
st119977
I think what is missing is a dictionary (or similar) mapping the model output to something meaningful. I.e., right now the network would return integer labels, right? Say I use the model to do predictions on some (the same or a different) dataset - how would I know what object an output of, e.g., 23 corresponds to?
st119978
The indices of the dataset are in alphabetical order thanks to this line in the FolderDataset. Thus, we only need the map from the folder names to the classes, which can be found for example in this file, and is already in alphabetical order as well.
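As a small illustration (the root path is hypothetical), torchvision's folder dataset exposes this alphabetical mapping directly, so you can recover the index-to-class map without rebuilding it by hand:

import torchvision.datasets as datasets

dataset = datasets.ImageFolder(root='path/to/imagenet/train')
# classes are the folder names, sorted alphabetically
print(dataset.classes[:3])
# class_to_idx maps each folder name to its integer label; invert it
idx_to_class = {v: k for k, v in dataset.class_to_idx.items()}
print(idx_to_class[23])  # e.g. which class output index 23 denotes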
st119979
@fmassa thanks a lot, but it does not work for me: https://github.com/e-lab/pytorch-demos/tree/master/visor - I am not getting the right categories…
st119980
@Eugenio_Culurciello I believe the categories.txt file does not contain the categories in the right order. The order should be the one present in the synset_words file from here. The alphabetical ordering I mentioned corresponds to the synset names (n01440764 etc.), and not the category names.
st119981
Hi, I stumbled upon this question because I am trying to control how my convolutional weights are initialized. At any rate, we can create a 2D convolutional layer via nn.functional.conv2d or via nn.Conv2d, but the APIs for the two seem different. For the former:

torch.nn.functional.conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1)

For the latter:

class torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)

So, my questions are the following: Why should I use one over the other? I am trying to initialize my convolutional weights with what I want - is that possible using nn.Conv2d? I guess one way is to copy over the weights that I want, so is that basically it? Thanks
st119982
It is recommended to use nn.Conv2d because it uses the nn.Module abstraction and ties nicely into the torch.optim framework. Yes, here’s an example of initializing the weights of a ConvNet via a custom weight initialization:
https://github.com/pytorch/examples/blob/c6bff8c51eee802be1a77575be0eb3eb5e211ada/dcgan/main.py#L89-L96
https://github.com/pytorch/examples/blob/c6bff8c51eee802be1a77575be0eb3eb5e211ada/dcgan/main.py#L131
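For reference, a minimal sketch of the pattern used in that example - a function applied recursively with Module.apply, matching layers by class name and filling their weight tensors in place (the specific distributions below are illustrative; treat the linked file as authoritative):

import torch.nn as nn

def weights_init(m):
    classname = m.__class__.__name__
    if classname.find('Conv') != -1:
        # fill conv weights from a normal distribution (illustrative values)
        m.weight.data.normal_(0.0, 0.02)
    elif classname.find('BatchNorm') != -1:
        m.weight.data.normal_(1.0, 0.02)
        m.bias.data.fill_(0)

net = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16))
net.apply(weights_init)  # applies weights_init to every submodule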
st119983
I was trying to run a simple model on a dataset where I loaded my data into a np.float32 array and the target labels into a np.int32 array. PyTorch automatically keeps these types when converting them into tensors via from_numpy (e.g., the data would be Float and the labels would be Int). However, the loss function expects Longs instead of Ints (or maybe I made a mistake somewhere). (On a side note, would you recommend using doubles and longs over floats and ints performance-wise?) I posted a simplified example below, where I have to cast the target array to long (loss = F.nll_loss(output, target.long())); otherwise, I get a TypeError:

TypeError: FloatClassNLLCriterion_updateOutput received an invalid combination of arguments - got (int, torch.FloatTensor, torch.IntTensor, torch.FloatTensor, bool, NoneType, torch.FloatTensor), but expected (int state, torch.FloatTensor input, torch.LongTensor target, torch.FloatTensor output, bool sizeAverage, [torch.FloatTensor weights or None], torch.FloatTensor total_weight)

I was wondering if this is desired behavior (i.e., that the loss function expects LongTensors)? (PS: Is there a tensor attribute to return the type, e.g., something like NumPy’s my_array.dtype?)

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 20, kernel_size=5)
        self.conv2 = nn.Conv2d(20, 32, kernel_size=5)
        self.conv2_drop = nn.Dropout2d(p=0.5)
        self.fc1 = nn.Linear(800, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), kernel_size=2, stride=2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 800)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = F.relu(self.fc2(x))
        return F.log_softmax(x)

model = Net()
if torch.cuda.is_available():
    model.cuda()

optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

batch_size = 64
model.train()
for step in range(1000):
    offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
    data = train_data32[offset:(offset + batch_size), :, :, :]
    target = train_labels[offset:(offset + batch_size)]
    print('orig data type', data.dtype)
    print('orig target type', target.dtype)

    data, target = Variable(torch.from_numpy(data)), Variable(torch.from_numpy(target))
    if torch.cuda.is_available():
        data, target = data.cuda(), target.cuda()

    optimizer.zero_grad()
    print('input batch dim:', data.size())
    output = model(data)
    print('output batch dim:', output.size())
    print('target batch dim:', target.size())
    loss = F.nll_loss(output, target.long())
    loss.backward()
    optimizer.step()
    break
st119984
Hi, I’ll try to address your questions in the following points.
For training neural networks, using float is more than enough precision-wise, so there is no need for double.
For keeping track of indices, int32 might not be enough for large models, so int64 (long) is preferred. That’s probably one of the reasons why we use long whenever we pass indices to functions (including NLLLoss).
Note that you can also convert a numpy array to a tensor using torch.Tensor(numpy_array), and you can specify the type of the output tensor you want, in your case torch.LongTensor(numpy_array). This constructor does not share memory with the numpy array, so it’s slower and less memory efficient than the from_numpy equivalent.
You can get the type of a tensor by passing no arguments to the type function, so tensor.type() returns the type of the tensor, and you can do things like:

tensor = torch.rand(3).double()
new_tensor = torch.rand(5).type(tensor.type())
st119985
It’s on our list of things to do to allow Int labels as well, but right now it is expected behavior to ask for LongTensors as labels. We use Long labels because some of the use cases we had in Torch had nClasses that didn’t fit the Int precision limits. Since we use the same C backend in PyTorch, we went with long labels. We do NOT recommend double for performance, especially on the GPU: GPUs have poor double-precision performance and are optimized for float32.

(PS: Is there a tensor attribute to return the type, e.g., sth like NumPy’s my_array.dtype?)

You can simply get the class name:

x = torch.randn(10)
print(x.__class__)
st119986
Hi, perhaps I am missing something, but I cannot seem to figure out how to change the padding and stride amounts in the Conv2d method. From here I see the nn.SpatialConvolution API, and here it mentions what the PyTorch API is, but nowhere can I see how to change the stride and padding amounts… please tell me that this exists? Thanks
st119987
Oh, how did you even find that link? Those are very old docs; use only docs.pytorch.org. When do you want to change them? You can pass them as arguments to the Module constructor, e.g.

nn.Conv2d(16, 32, kernel_size=3, padding=(5, 3))

Alternatively, if you need to change them at runtime, I’d suggest using the functional interface:

import torch.nn.functional as F
...
F.conv2d(input, self.weight, self.bias, padding=(x, y))

(Note that the functional conv2d takes no kernel_size argument - the kernel size is implied by the shape of the weight tensor.)
st119988
Sweet, thank you! I should not watch Netflix while searching for PyTorch docs, it seems. The change at run-time is intriguing… I didn’t even know people did that. How could you guarantee the same dimensionality if you changed padding amounts during run time, though?
st119989
The whole point of run-time graph construction is that you don’t have to provide the framework any guarantees about dimensionality. As long as you do ops that match the sizes, they can vary all the time, and the same applies to module parameters. That’s the whole beauty of dynamic graphs. I don’t know if people do that, but it might be reasonable if you have variably sized inputs, or are doing something completely new.
st119990
Very interesting - I mean, I know that in the context of RNNs we can have variable-length inputs etc., since the back-prop can be unrolled N times, but I didn’t realize that we could have variable-length weights/parameters… I’ll read more about it (and any sources on this are appreciated), but the top question in my mind is: in a dynamic graph, when we change the dimensionality of the weights, how do you guarantee that those new weights are even trained? Does that make sense?
st119991
For example, conv layers can be applied to variably sized inputs with a fixed set of weights. I didn’t mean to say that you necessarily want to alter the weights, but there have been some papers on having networks approximate the parameters of another network (e.g. this one), so it could probably be used for that.
st119992
Hi, before I get to my issue, I just wanted to say thank you for PyTorch. I think the library is clean and the code is easy to follow. I’m trying to replicate a part of this model: https://github.com/harvardnlp/sent-conv-torch/blob/master/model/convNN.lua. I think I’m doing the concat part wrong, where the outputs from three different CNNs are combined (the Torch version uses JoinTable). Although I’m not getting any error, the loss.backward() takes forever to run. I’m really new to both Torch and PyTorch; any help would be much appreciated. Is there something wrong with the way I’ve defined my model?

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.embed = nn.Embedding(vocab.size(), 300)
        #self.embed.weight = Parameter(torch.from_numpy(vocab.get_weights().astype(np.float32)))
        self.conv_3 = nn.Conv2d(1, 50, kernel_size=(3, 300), stride=(1, 1))
        self.conv_4 = nn.Conv2d(1, 50, kernel_size=(4, 300), stride=(1, 1))
        self.conv_5 = nn.Conv2d(1, 50, kernel_size=(5, 300), stride=(1, 1))
        self.decoder = nn.Linear(50 * 3, len(labels))

    def forward(self, x):
        e1 = self.embed(x)
        x = F.dropout(e1, p=0.2)
        x = e1.view(x.size()[0], 1, 50, 300)
        cnn_3 = F.relu(F.max_pool2d(self.conv_3(x), (maxlen - 3 + 1, 1)))
        cnn_4 = F.relu(F.max_pool2d(self.conv_4(x), (maxlen - 4 + 1, 1)))
        cnn_5 = F.relu(F.max_pool2d(self.conv_5(x), (maxlen - 5 + 1, 1)))
        x = torch.cat([e.unsqueeze(0) for e in [cnn_3, cnn_4, cnn_5]])
        x = x.view(-1, 50 * 3)
        return F.log_softmax(self.decoder(F.dropout(x, p=0.2)))
st119993
I’d guess that it’s because you’re using 5x300 convolutional kernels, which are incredibly expensive to compute (7x7 kernels are already considered large and expensive). Apart from this I can’t see any mistakes at a glance. There might be some discrepancies with the original code, but I don’t know it, so it’s hard to tell.
st119994
Hi, I just realized that the Variable.grad field is also a Variable; it thus seems more natural to pass gradients as a Variable when doing backward (currently it must be a tensor). Is there any reason for such an interface?
st119995
.grad was changed to a Variable only a day or two before the release, because we didn’t want user code to break when we add support for multiple backward passes. Still, we’ve left this as-is, because most people don’t care about differentiating the backward graph; they would pass non-volatile Variables, and that would slow down the backward pass. I don’t think you can pass in a Variable there right now, but you certainly will be able to in the future.
st119996
Hi, saving an nn.Module seems obscure, but by digging through the code I can tell it should look like:

torch.save(net.state_dict(), 'net_name.pth')

right? But how do I clean/clear all the .grad fields of each Parameter (which is derived from Variable) before saving, in case I just want a minimal net for inference next time? A user may want something like the shorthand net:clearState() in Torch 7.
st119997
Hi, I think the method you are using, torch.save(net.state_dict(), 'net_name.pth'), is the currently advised way of doing it, and it will only save the weights and not the gradients. Indeed, since https://github.com/pytorch/pytorch/pull/451 the state_dict function only returns tensors, and not the corresponding Variables.
st119998
E.g., in Torch 7 I have a net whose last module is an nn.ConcatTable; I then make gradOutputs a table of tensors and do net:backward(inputs, gradOutputs). How do I do similar things with PyTorch? I tried to call backward() for each output, but it complained that backward should not be called multiple times.
st119999
I think you want torch.autograd.backward(variables, grads): http://pytorch.org/docs/autograd.html#torch.autograd.backward - it takes a list of variables and a list of gradients (one for each variable).
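A minimal sketch of this pattern for a model with two outputs (the shapes and gradient values here are illustrative assumptions):

import torch
from torch.autograd import Variable

x = Variable(torch.randn(4, 3), requires_grad=True)
out1 = x * 2  # first "head"
out2 = x * 3  # second "head"

# One backward call over both outputs, with one gradient tensor per output,
# instead of calling out1.backward() and out2.backward() separately.
torch.autograd.backward([out1, out2], [torch.ones(4, 3), torch.ones(4, 3)])
print(x.grad)  # 2 + 3 = 5 in every entry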
st45000
Hi Rohit!

Rohit_Modee: I have tensor x with shape torch.Size([32, 69, 64]) and y with shape torch.Size([32, 69, 68, 64]). I want the output to be torch.Size([32, 69, 68, 1]) - a matrix multiplication of [1, 64] and [68, 64] to [68, 1].

Try:

torch.einsum('ijl,ijkl->ijk', x, y).unsqueeze(-1)

The einsum() “contracts” over the “64” dimension, while the .unsqueeze(-1) adds the trailing singleton dimension.

Best.
K. Frank
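A quick shape check of that suggestion (random data, just to confirm the contraction):

import torch

x = torch.randn(32, 69, 64)
y = torch.randn(32, 69, 68, 64)
out = torch.einsum('ijl,ijkl->ijk', x, y).unsqueeze(-1)
print(out.shape)  # torch.Size([32, 69, 68, 1])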
st45001
I’m new to Torch, and there’s one thing I find counterintuitive in the argmin and argmax operations. Per the example from the docs we have:

>>> a = torch.randn(4, 4)
>>> a
tensor([[ 0.1139,  0.2254, -0.1381,  0.3687],
        [ 1.0100, -1.1975, -0.0102, -0.4732],
        [-0.9240,  0.1207, -0.7506, -1.0213],
        [ 1.7809, -1.2960,  0.9384,  0.1438]])
>>> torch.argmin(a)
tensor(13)

But wouldn’t it be more natural to return tensor([3, 1])? In my particular case, I have two distinct sets of points A and B, and I look for the indices of a pair (a, b) ∈ A × B. I can measure pairwise distances using cdist, but a 42-like answer is of little use. Sure, the indices can be inferred, but doing so looks messy and superfluous. Is there any better solution? And also, are there any reasons for delivering argmin/argmax this way beyond backward compatibility and seeking in an effectively linear array?
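For reference, a small sketch of the inference the poster calls messy - turning the flat index back into a (row, column) pair (this is a workaround I'm suggesting, not an official API):

import torch

a = torch.randn(4, 4)
flat = torch.argmin(a).item()
row = flat // a.size(1)  # integer division recovers the row
col = flat % a.size(1)   # remainder recovers the column
assert a[row, col] == a.min()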
st45002
Are there any general guidelines, both space- and time-wise, as to which frameworks are best suited to the various stages of ML? For example, are there any guidelines as to why transforms shouldn’t happen on the GPU but on the PyTorch CPU? And if PyTorch on the CPU is slower than NumPy, then why not code the transforms up in NumPy?
st45003
Solved by ptrblck in post #2.
st45004
I would claim it all depends on your use case and there is no silver bullet. By default you would want to use the GPU to train the model and the CPU to load and process the data. Using multiple workers in the DataLoader, the data loading and processing will happen in the background while the GPU is busy training the model. torchvision.transforms uses PIL as the default backend for image transformations, which uses numpy arrays internally. However, if you need differentiable transformations (e.g. via Kornia), you would have to apply them on tensors and cannot use numpy anymore (or would need to write the backward functions manually). Also, on “bigger” systems, you might see that the CPU is not fast enough to feed the GPUs, which shows up as starving GPUs. In that case you could use e.g. NVIDIA DALI to apply the transformations on the GPU, which could give you an overall performance boost.
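A minimal sketch of that default split - CPU-side transforms running in worker processes while the GPU trains (the dataset path and transform choice are illustrative):

import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize(224),  # runs on the CPU, inside the workers
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder('path/to/train', transform=transform)

# num_workers > 0 prepares the next batches in background processes
loader = DataLoader(dataset, batch_size=32, shuffle=True,
                    num_workers=4, pin_memory=True)

for data, target in loader:
    data = data.cuda(non_blocking=True)  # GPU trains while workers load
    target = target.cuda(non_blocking=True)
    # ... forward / backward / step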
st45005
pytorch.org: torch.nn.functional — PyTorch 1.7.0 documentation

There is no note in the description of avg_pool1d on this page saying it “may choose a non-deterministic algorithm” - so is this function deterministic? The similar function conv1d has such a note, so please tell me what avg_pool1d is doing. Thank you!!
st45006
Solved by ptrblck in post #2.
st45007
As described in the note for F.conv1d, cudnn might non-deterministically pick a kernel, or the kernel itself might yield non-deterministic results; this is not the case for the pooling layer.
st45008
Hi, I’m new to PyTorch and Azure Functions. The project I’m currently working on requires an older version of PyTorch (version 1.0.1). The deployed function results in an error (screenshot omitted). I don’t get the above error when I use a higher version. Does this mean Azure Functions does not support older PyTorch versions?
st45009
Based on the error it seems you are right and you would need to stick to newer PyTorch releases. Generally, I would also recommend using the latest PyTorch version to get bug fixes and new utilities.
st45010
I am new to PyTorch and I am afraid I do not understand some concept. I have a binary classifier (dog vs cat) for 64x64 images:

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 50, 5)
        self.pool1 = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(50, 100, 7)
        self.pool2 = nn.MaxPool2d(2, 2)
        self.fc1 = nn.Linear(100 * 12 * 12, 120)
        self.fc2 = nn.Linear(120, 100)
        self.fc3 = nn.Linear(100, 2)

    def forward(self, x):
        x = self.pool1(F.relu(self.conv1(x)))
        x = self.pool2(F.relu(self.conv2(x)))
        print(x.shape)  # <---- problem is HERE
        x = x.view(100, 100 * 12 * 12)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

When I train the network (following a tutorial) I get a wrong-size error:

RuntimeError: shape '[100, 14400]' is invalid for input of size 273600

when I try to call:

net = Net()
output = net(train_data)

where train_data is an array of N (here 19) images:

train_data = []
for i in range(len(train_addrs[:100])):
    # read an image and resize to (IMAGE_SIZE, IMAGE_SIZE)
    # cv2 loads images as BGR, convert to RGB
    addr = train_addrs[i]
    img = cv2.imread(addr)
    img = cv2.resize(img, (IMAGE_SIZE, IMAGE_SIZE), interpolation=cv2.INTER_CUBIC)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    train_data.append([np.array(img), np.array(train_labels[i])])
shuffle(train_data)

In other words: my network expects one image at a time. Why? What am I doing wrong?
st45011
Solved by Ouasfi in post #2.
st45012
Hello, I think the problem occurs during flattening… x.view(100, 100*12*12) only works with a batch of exactly 100.

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 50, 5)
        self.pool1 = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(50, 100, 7)
        self.pool2 = nn.MaxPool2d(2, 2)
        self.fc1 = nn.Linear(100 * 12 * 12, 120)
        self.fc2 = nn.Linear(120, 100)
        self.fc3 = nn.Linear(100, 2)
        self.flatten = nn.Flatten()

    def forward(self, x):
        x = self.pool1(F.relu(self.conv1(x)))
        x = self.pool2(F.relu(self.conv2(x)))
        print(x.shape)
        # x = x.view(100, 100 * 12 * 12)
        x = self.flatten(x)  # or x = x.view(-1, 100 * 12 * 12)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

N, C, H, W = 200, 3, 64, 64
train_data = torch.rand(N, C, H, W)
net = Net()
output = net(train_data)

I prefer using nn.Flatten for this purpose. To specify the output size you want to pass to the fully connected layers, you can use nn.AdaptiveMaxPool2d.
st45013
Ouasfi: [quotes the solution above]

Thank you! I need to read more about Flatten and Adaptive Max Pooling before I use them, but you saved me a lot of time!
st45014
Let’s say I have a tensor with content [0, 55, 49, 38, 2, 0, 96, 28, 2, 0, 73, 2], where 0 is the begin-of-sentence token and 2 is the end-of-sentence token. I want to do a random sentence permutation, e.g. [0, 73, 2, 0, 55, 49, 38, 2, 0, 96, 28, 2] or [0, 73, 2, 0, 96, 28, 2, 0, 55, 49, 38, 2]. How can I do it? Thanks.
st45015
This code might work:

x = torch.tensor([0, 55, 49, 38, 2, 0, 96, 28, 2, 0, 73, 2])
idx = torch.cat((torch.tensor([0]), torch.randperm(len(x) - 2) + 1, torch.tensor([len(x) - 1])))
print(idx)
out = x[idx]
print(out)
st45016
I’m using an IterableDataset inside a DataLoader (multiple workers). Some of the stuff in my IterableDataset code calls numpy.random functions. I noticed after a while that in each epoch, the sequence of values returned by the random functions is exactly the same! In other words, every worker is (somehow) reset to the same random seed at the beginning of the epoch (or when it is created). So if (for example) the worker tried to do random image crops with positions from numpy.random, they would be the same crops for each image in every epoch. How/where is the seed set? Is this expected behavior? WHY does this happen? I would expect the numpy.random seed in each worker to act the same as in a new process, unless numpy.random.seed is explicitly called by the user code. (I was not doing anything explicit to set the seed, using either numpy or torch calls, or anything to make torch deterministic. This seems to just be the default behavior - torch modifying numpy to make it deterministic without being asked to by the user.)

Simple code to reproduce:

import torch
import numpy as np
from torch.utils.data import DataLoader, Dataset

class TestIterableDataset(torch.utils.data.IterableDataset):
    def __init__(self):
        super(TestIterableDataset).__init__()

    def __iter__(self):
        worker_info = torch.utils.data.get_worker_info()
        for n in range(10):
            yield (worker_info.id, np.random.randint(1000000))

ds = TestIterableDataset()
for worker_id, number in DataLoader(ds, batch_size=4, num_workers=2):
    print(worker_id, number)

# This prints the same result every time it is run, and the same sequence from each worker:
# tensor([0, 0, 0, 0]) tensor([ 68669, 230721, 801136, 274196])
# tensor([1, 1, 1, 1]) tensor([ 68669, 230721, 801136, 274196])
# tensor([0, 0, 0, 0]) tensor([617084, 429589, 436968, 718987])
# tensor([1, 1, 1, 1]) tensor([617084, 429589, 436968, 718987])
# tensor([0, 0]) tensor([150977,  59469])
# tensor([1, 1]) tensor([150977,  59469])
st45017
Opened an issue here: https://github.com/pytorch/pytorch/issues/41329 - and apparently it’s not a bug, just a gotcha, and a pretty well documented one at that. The numpy random number generator keeps its seed over a fork(), and DataLoader starts worker processes using a fork, without doing anything special about numpy.random. I’d say it seems like much more of a numpy issue - keeping the seed over a fork by default is surprising behavior, but okay. It’s easy enough to work around: just re-seeding in worker_init_fn or at the beginning of __iter__ works fine.
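A minimal sketch of that workaround - re-seeding numpy per worker from PyTorch's per-worker seed (the modulus is there because numpy seeds must fit in 32 bits):

import numpy as np
import torch
from torch.utils.data import DataLoader

def worker_init_fn(worker_id):
    # torch.initial_seed() differs per worker and per DataLoader iteration
    np.random.seed(torch.initial_seed() % 2**32)

# loader = DataLoader(ds, batch_size=4, num_workers=2,
#                     worker_init_fn=worker_init_fn)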
st45018
That does not solve the randomness problem with numpy + PyTorch. For example, you will get exactly the same numpy random numbers in each epoch! Does anyone know a “clean” solution to this? I have to call (from inside the dataset’s __getitem__) a very messy (possibly wrong) piece of code to make every worker have a different numpy random seed at different epochs:

def set_cuda_rand_seed():
    worker = torch.utils.data.get_worker_info()
    new_seed = np.random.randint(0, 2 ** 32 - 1)
    if worker is not None:
        new_seed = worker.seed
    new_seed = int(new_seed) % (2 ** 32 - 1)
    new_seed = int(new_seed)
    random.seed(new_seed)
    np.random.seed(new_seed)  # todo <-- bug using num_workers>0
    return new_seed
st45019
Hi, I don’t really understand when PyTorch internally allocates memory / does copies. I’m using the fairseq multi-head attention code (https://github.com/pytorch/fairseq/blob/master/fairseq/modules/multihead_attention.py) to build a transformer model that can do incremental encoding. However, I notice that encoding with incremental state is 5x slower than encoding the entire sequence at once. My current hypothesis is that it’s because of this block (pytorch/fairseq/blob/master/fairseq/modules/multihead_attention.py#L266):

if saved_state is not None:
    # saved states are stored with shape (bsz, num_heads, seq_len, head_dim)
    if "prev_key" in saved_state:
        _prev_key = saved_state["prev_key"]
        assert _prev_key is not None
        prev_key = _prev_key.view(bsz * self.num_heads, -1, self.head_dim)
        if static_kv:
            k = prev_key
        else:
            assert k is not None
            k = torch.cat([prev_key, k], dim=1)
    if "prev_value" in saved_state:
        _prev_value = saved_state["prev_value"]
        assert _prev_value is not None
        prev_value = _prev_value.view(bsz * self.num_heads, -1, self.head_dim)
        if static_kv:
            v = prev_value
        else:
            assert v is not None
            v = torch.cat([prev_value, v], dim=1)
    prev_key_padding_mask: Optional[Tensor] = None

This code cats the new key row onto the end of the previously saved key matrix. When this operation is done, does PyTorch internally do a malloc, copy the old matrix over, and then copy the new row over? If so, is there a way to pre-allocate space for rows like this to avoid a copy when cat is called?
st45020
k will be a new tensor and thus use new memory for the concatenated tensors. PyTorch uses a caching allocator internally, which should reuse the GPU memory (assuming you are using the GPU), but this operation would still copy the tensors. You could append the tensors to a list and call torch.cat or torch.stack at the end, if possible.
st45021
Thanks @ptrblck. If I call cat on a list of tensors repeatedly, won’t each cat call require a new malloc + copy? The list would gain a new element for every input token in the sequence. I was planning on allocating a much larger tensor (say, 500 columns), filling in its columns one by one, and resizing to a larger tensor when the original one is full.
st45022
Yes, calling torch.cat multiple times would create new tensors, so you could append all tensors to a list and create the tensor once it’s ready. Based on your description, it seems the size of the final tensor is unknown and you would need to reallocate new memory nevertheless (by calling torch.cat multiple times)?
st45023
Yeah, my plan was to preallocate a tensor with, say, 500 rows and double it in size every time I need to resize, so that I don’t end up doing quite as many copies. I was looking for the cheapest way to do that. Unfortunately, at each input step I need to construct the cat’ed tensor (in order to compute the attention scores at that timestep), so I can’t just use a list and cat it once at the end.
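A minimal sketch of such a growing buffer (the class name and initial capacity are my own; amortized doubling keeps the number of copies logarithmic in the sequence length):

import torch

class KeyBuffer:
    def __init__(self, dim, capacity=500):
        self.buf = torch.empty(capacity, dim)
        self.length = 0

    def append(self, row):
        if self.length == self.buf.size(0):
            # double capacity: one alloc + one copy, amortized O(1) per append
            new_buf = torch.empty(self.buf.size(0) * 2, self.buf.size(1))
            new_buf[:self.length] = self.buf[:self.length]
            self.buf = new_buf
        self.buf[self.length] = row
        self.length += 1

    def view(self):
        # a view of the filled rows; no copy, usable at each timestep
        return self.buf[:self.length]

kb = KeyBuffer(dim=64)
for t in range(1000):
    kb.append(torch.randn(64))
    keys = kb.view()  # use in the attention computation at this timestep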
st45024
Hi, I am trying to use the StepLR scheduler. I have defined it here:

optimizer = optim.SGD(model.parameters(), lr=0.03, momentum=0.9, weight_decay=0.0005, nesterov=True)
scheduler1 = optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.55)

and I call it to step in my training loop:

scheduler1.step()

which results in the error “AttributeError: 'tuple' object has no attribute 'step'”. Can anyone help with this?
st45025
Could you check if you have a trailing comma at the end of the scheduler1 = ... definition? If not, could you post an executable code snippet to reproduce this issue?
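For reference, a trailing comma silently wraps the scheduler in a one-element tuple, which reproduces exactly that error:

import torch
import torch.optim as optim

model = torch.nn.Linear(2, 2)
optimizer = optim.SGD(model.parameters(), lr=0.03)

scheduler1 = optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.55),
#                                                  note the trailing comma ^
print(type(scheduler1))  # <class 'tuple'>
scheduler1.step()        # AttributeError: 'tuple' object has no attribute 'step'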
st45026
I have a 4-dimensional tensor and I want to select all the elements from that tensor except for one row along a dimension. How can I do that? Here is the code snippet I have:

features = Variable(torch.rand(2, 128, 19, 128))
selected = features[:, :, ~2, :]  # I want to select all rows except the 2nd row along the 3rd dimension
st45027
You could use a double for loop and iterate through the tensor, removing the second rows. I’m sure there are better ways to do this, but seeing how your tensor isn’t that big, it shouldn’t take that long.
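One of those better ways, sketched with plain indexing (no loops; index_select with every index except the excluded one):

import torch

features = torch.rand(2, 128, 19, 128)
drop = 2  # row to exclude along dim 2

keep = torch.cat((torch.arange(0, drop), torch.arange(drop + 1, features.size(2))))
selected = features.index_select(2, keep)
print(selected.shape)  # torch.Size([2, 128, 18, 128])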
st45028
Hello, I have a model that classifies single-particle trajectories (according to which algorithm was used to generate the trajectories). My current model structure has 2 convolutional layers, a bidirectional LSTM of length 3, and some Linear layers for output. Here is the structure of my convolutional LSTM:

ConejeroConvNet(
  (ConvBlock): Sequential(
    (0): Conv1d(1, 20, kernel_size=(3,), stride=(1,))
    (1): ReLU()
    (2): Conv1d(20, 64, kernel_size=(3,), stride=(1,))
    (3): ReLU()
    (4): Dropout(p=0.2, inplace=False)
    (5): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (bi_lstm): LSTM(498, 32, num_layers=3, batch_first=True)
  (linearOuts): Sequential(
    (0): Linear(in_features=4096, out_features=1000, bias=True)
    (1): ReLU()
    (2): Linear(in_features=1000, out_features=50, bias=True)
    (3): ReLU()
    (4): Linear(in_features=50, out_features=5, bias=True)
  )
)

I replaced the LSTM with a classifier transformer adapted from the one here: http://peterbloem.nl/blog/transformers. Here is the structure of my transformer:

convTransformer(
  (ConvBlock): Sequential(
    (0): Conv1d(1, 20, kernel_size=(3,), stride=(1,))
    (1): ReLU()
    (2): Conv1d(20, 64, kernel_size=(3,), stride=(1,))
    (3): ReLU()
    (4): Dropout(p=0.2, inplace=False)
    (5): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (cTrans): ClassifierTransformer(
    (transBlocks): Sequential(
      (0): TransformerBlock(
        (attention): SelfAttentionNarrow(
          (toKeys): Linear(in_features=8, out_features=8, bias=False)
          (toQueries): Linear(in_features=8, out_features=8, bias=False)
          (toValues): Linear(in_features=8, out_features=8, bias=False)
          (unifyHeads): Linear(in_features=64, out_features=64, bias=True)
        )
        (norm1): LayerNorm((64,), eps=1e-05, elementwise_affine=True)
        (norm2): LayerNorm((64,), eps=1e-05, elementwise_affine=True)
        (ff): Sequential(
          (0): Linear(in_features=64, out_features=256, bias=True)
          (1): ReLU()
          (2): Linear(in_features=256, out_features=64, bias=True)
        )
        (dropOut): Dropout(p=0.2, inplace=False)
      )
      ..............[blocks (1) through (9) are identical to block (0)]..............
    )
    (dropout): Dropout(p=0.2, inplace=False)
    (linOut): Linear(in_features=64, out_features=5, bias=True)
  )
)

With a data set of size 10k (7.5k for training, 2.5k for test) the transformer outperformed my convolutional LSTM by 9%. However, when I tried it with a data set of size 150k (110k train, 40k test) the training loss went all over the place and it performed only slightly better than guessing randomly. This is what my model does when given 110k training data: the losses jump around and never go below 3. (Training loss plot omitted.)

As you can see below, it behaves as expected with 7.5k training data: the losses don’t jump as much and end around 0.7. I kept everything the same between the runs (batch size, optimizer, criterion, epochs, patience, etc.). I should add that I am using a modified version of the early stopping available here: https://github.com/Bjarten/early-stopping-pytorch/blob/master/MNIST_Early_Stopping_example.ipynb. I can add more detail if necessary. However, I think that my problem is conceptual, as I do not understand why only my transformer model acts like this. When I add more data to my convolutional LSTM model it works fine and improves the model’s predictive capability. Thank you very much for your help. I appreciate any suggestions to help me fix the problem.

edited: formatting and typos
st45029
Responding to my own question in case anyone ever has this problem in the future: I introduced gradient clipping as shown here: https://stackoverflow.com/questions/54716377/how-to-do-gradient-clipping-in-pytorch

for p in model.parameters():
    p.register_hook(lambda grad: torch.clamp(grad, -clip_value, clip_value))

This appears to have remediated the problem.
st45030
I have two separate datasets. Each set has 3×224×224 images. I want to feed those datasets into a network as a 6-channel input. Both datasets have the same labels. I want to use ImageFolder for this:

trainset_cort = torchvision.datasets.ImageFolder(root=data_dir_train_cort, transform=transform)
trainset_slice = torchvision.datasets.ImageFolder(root=data_dir_train_slice, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset_new, batch_size=4, shuffle=True, num_workers=2)

testset_cort = torchvision.datasets.ImageFolder(root=data_dir_val_cort, transform=transform)
testset_slice = torchvision.datasets.ImageFolder(root=data_dir_val_slice, transform=transform)
testloader = torch.utils.data.DataLoader(testset_new, batch_size=4, shuffle=True, num_workers=2)

I’m trying the code above. I want to create both trainset_new and testset_new as 6×224×224 image sets.
st45031
You could create your own Dataset and concatenate both images:

class MyDataset(Dataset):
    def __init__(self, path_cort, path_slice, transform=None):
        self.data_cort = datasets.ImageFolder(
            root=path_cort, transform=transform)
        self.data_slice = datasets.ImageFolder(
            root=path_slice, transform=transform)

    def __getitem__(self, index):
        x_cort, y = self.data_cort[index]
        x_slice, _ = self.data_slice[index]
        x = torch.cat((x_cort, x_slice), dim=0)  # dim=0 is the channel dim of a CHW image
        return x, y

    def __len__(self):
        return len(self.data_cort)  # assert both datasets have equal length
st45032
Using this I get the error: module.__init__() takes at most 2 arguments (3 given). I am using Google Colab, and the code I ran is this:

import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
import numpy as np
import torchvision
from torchvision import datasets, models, transforms
import matplotlib.pyplot as plt
import time
import os
import copy

class CustomDataset(datasets):
    def __init__(self, path_cort, path_slice, transform):
        self.data_cort = datasets.ImageFolder(
            root=path_cort, transform=transform)
        self.data_slice = datasets.ImageFolder(
            root=path_slice, transform=transform)

    def __getitem__(self, index):
        x_cort, y = self.data_cort[index]
        x_slice, _ = self.data_slice[index]
        x = torch.cat((x_cort, x_slice), dim=0)
        return x, y

    def __len__(self):
        return len(self.data_cort)  # assert both datasets have equal length

root = "/content/drive/MyDrive/Rohit/Human Pose Estimation/UTD MHAD/"
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean, std)
])
Dataset = CustomDataset(root + "angle/train", root + "anglevel/train", transform)
num_classes = 27

Inside the respective train folders, each class has several images. I hope you can help me out; thanks in advance.
st45033
Could you derive the CustomDataset from torch.utils.data.Dataset instead of torchvision.datasets and check if it would solve the issue?
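In other words, a sketch of the corrected class header (the body stays the same as in the post above):

from torch.utils.data import Dataset
from torchvision import datasets

class CustomDataset(Dataset):  # derive from Dataset, not from the datasets module
    def __init__(self, path_cort, path_slice, transform=None):
        self.data_cort = datasets.ImageFolder(root=path_cort, transform=transform)
        self.data_slice = datasets.ImageFolder(root=path_slice, transform=transform)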
st45034
Hello, I’m trying to implement neural network pruning strategies. The code was working fine until I converted the pruning part from numpy to PyTorch to leverage GPU speed. The function that does the sparsification:

def magnitude_pruning(A, drop=0.2):
    if drop < 0:
        drop = 0
    shape_a = A.shape
    n_elem = A.numel()
    A = A.view(-1)
    n_drop = int(n_elem * drop)
    drop_idxs = torch.topk(A, n_drop, largest=False, sorted=False)[-1]
    mask = torch.ones(n_elem, device=device)
    mask[drop_idxs] = 0
    A = A * mask
    A = A.view(shape_a)
    return A, mask.view(shape_a)

And here is a snippet where I use the function:

os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
for key, p_drop in p.items():
    attributepath = key.split(".")
    cur = models["resnet18_srn"]
    for attr in attributepath[:-1]:
        if attr.isdecimal():
            cur = cur[int(attr)]
        else:
            cur = getattr(cur, attr)
    W_ = magnitude_pruning(cur.weight.detach(), p_drop)[0]

I get the following error:

RuntimeError                              Traceback (most recent call last)
<ipython-input-13-899364821043> in <module>()
      9         else:
     10             cur = getattr(cur, attr)
---> 11         W_ = magnitude_pruning(cur.weight.detach(), value)[0]

<ipython-input-8-4d745d9774f3> in magnitude_pruning(A, drop)
      7     A = A.view(-1)
      8     n_drop = int(n_elem * drop)
----> 9     drop_idxs = torch.topk(A, n_drop, largest=False, sorted=False)[-1]
     10     mask = torch.ones(n_elem, device=device)
     11     mask[drop_idxs] = 0

RuntimeError: cuda runtime error (710) : device-side assert triggered at /pytorch/aten/src/THC/generic/THCTensorTopK.cu:188

I’m running on Colab and I used both

os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
!export CUDA_LAUNCH_BLOCKING=1

to ensure CUDA raises the error at the spot where it happens. The values of the p dictionary (which represents the drop percentages) are as follows:

{'conv1.weight': 0.5248447873548604,
 'last_linear.0.weight': 0.004661282702045355,
 'layer1.0.conv1.weight': 0.37808877777400185,
 'layer1.0.conv2.weight': 0.06504905686978213,
 'layer1.1.conv1.weight': 0.34116854079977843,
 'layer1.1.conv2.weight': -7.333395402042697e-08,
 'layer2.0.conv1.weight': 0.07128420402834179,
 'layer2.0.conv2.weight': -2.1180268428011573e-08,
 'layer2.0.downsample.0.weight': 0.5666850700441095,
 'layer2.1.conv1.weight': -2.1861888077623348e-08,
 'layer2.1.conv2.weight': 0.9691538018553951,
 'layer3.0.conv1.weight': 0.4329534168219621,
 'layer3.0.conv2.weight': 0.06542427263283024,
 'layer3.0.downsample.0.weight': 0.3568506078785666,
 'layer3.1.conv1.weight': 0.09328176622263307,
 'layer3.1.conv2.weight': 0.6633195976051018,
 'layer4.0.conv1.weight': 0.9914340620049251,
 'layer4.0.conv2.weight': 0.8091775692583867,
 'layer4.0.downsample.0.weight': 0.5552448512140766,
 'layer4.1.conv1.weight': 0.9907790148874497,
 'layer4.1.conv2.weight': 0.8827941427912476}

Note that the error does not occur on the CPU, and it also does not occur with a different p dictionary like this one:

{'conv1.weight': 0.7392659868638237,
 'last_linear.0.weight': 0.10501561799105052,
 'layer1.0.conv1.weight': 0.6349508179361485,
 'layer1.0.conv2.weight': 0.44357375136506516,
 'layer1.1.conv1.weight': 0.6290701837014776,
 'layer1.1.conv2.weight': 0.36933085729608905,
 'layer2.0.conv1.weight': 0.4509790372596437,
 'layer2.0.conv2.weight': 0.42935426060761517,
 'layer2.0.downsample.0.weight': 0.7065802232955889,
 'layer2.1.conv1.weight': 0.3707469940086635,
 'layer2.1.conv2.weight': 0.7227469103776935,
 'layer3.0.conv1.weight': 0.7189300890084274,
 'layer3.0.conv2.weight': 0.46585970510469366,
 'layer3.0.downsample.0.weight': 0.6160370089895277,
 'layer3.1.conv1.weight': 0.5312890654313192,
 'layer3.1.conv2.weight': 0.8089039876828459,
 'layer4.0.conv1.weight': 0.8378576175338739,
 'layer4.0.conv2.weight': 0.7549824163148917,
 'layer4.0.downsample.0.weight': 0.6779744785558353,
 'layer4.1.conv1.weight': 0.9224107770405527,
 'layer4.1.conv2.weight': 0.7217466674953182}

Note that to ensure a non-negative percentage in the magnitude_pruning function, I zero out negative values of drop.
st45035
Solved by ptrblck in post #6.
st45036
Based on the error message and the line of code (assuming the blocking launch was working in the notebook), I guess you might be using an invalid index in:

drop_idxs = torch.topk(A, n_drop, largest=False, sorted=False)[-1]

Run the code on the CPU to get a better error message or, if that doesn’t help, print A and n_drop and check for invalid indices.
st45037
Thank you for your response! I tried running the code on the CPU but the problem doesn’t occur there, so I don’t think the problem is invalid indexing. Also, when I changed the code (for logging purposes) to

def magnitude_pruning(A, drop=0.2):
    if drop < 0:
        drop = 0
    shape_a = A.shape
    n_elem = A.numel()
    A = A.view(-1)
    n_drop = int(n_elem * drop)
    print("....n_drop = ", n_drop)
    print("....A size = ", n_elem)
    top_k_res = torch.topk(A, n_drop, largest=False, sorted=False)
    print("...topk result = ", top_k_res)
    drop_idxs = top_k_res[1]
    mask = torch.ones(n_elem, device=device)
    mask[drop_idxs] = 0
    A = A * mask
    A = A.view(shape_a)
    return A, mask.view(shape_a)

the error somewhat changed to this:

....n_drop =  3816
....A size =  9408
...topk result =  torch.return_types.topk(
values=tensor([-0.0028, -0.0026, -0.0069,  ..., -0.0078, -0.0095, -0.0018], device='cuda:0'),
indices=tensor([   1,    2,    6,  ..., 9401, 9407, 8464], device='cuda:0'))
....n_drop =  13695
....A size =  36864
...topk result =  torch.return_types.topk(
values=tensor([-0.0278, -0.0324, -0.2429,  ..., -0.0029, -0.0083, -0.0025], device='cuda:0'),
indices=tensor([    1,     3,     4,  ..., 36851, 36863, 35230], device='cuda:0'))
....n_drop =  0
....A size =  36864
...topk result =  torch.return_types.topk(
values=tensor([], device='cuda:0'),
indices=tensor([], device='cuda:0', dtype=torch.int64))
....n_drop =  5047
....A size =  36864
...topk result =
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-10-f45d5e9895b6> in <module>()
     10         else:
     11             cur = getattr(cur, attr)
---> 12         W_ = magnitude_pruning(cur.weight.detach(), value)[0]

<ipython-input-7-51b1f75557a0> in magnitude_pruning(A, drop)
     10     print("....A size = ", n_elem)
     11     top_k_res = torch.topk(A, n_drop, largest=False, sorted=False)
---> 12     print("...topk result = ", top_k_res)

/usr/local/lib/python3.6/dist-packages/torch/tensor.py in __repr__(self)
    177             return handle_torch_function(Tensor.__repr__, relevant_args, self)
    178         # All strings are unicode in Python 3.
--> 179         return torch._tensor_str._str(self)

/usr/local/lib/python3.6/dist-packages/torch/_tensor_str.py in _str(self)
    370 def _str(self):
    371     with torch.no_grad():
--> 372         return _str_intern(self)

/usr/local/lib/python3.6/dist-packages/torch/_tensor_str.py in _str_intern(self)
    350         tensor_str = _tensor_str(self.to_dense(), indent)
    351     else:
--> 352         tensor_str = _tensor_str(self, indent)

/usr/local/lib/python3.6/dist-packages/torch/_tensor_str.py in _tensor_str(self, indent)
    239         return _tensor_str_with_formatter(self, indent, summarize, real_formatter, imag_formatter)
    240     else:
--> 241         formatter = _Formatter(get_summarized_data(self) if summarize else self)
    242         return _tensor_str_with_formatter(self, indent, summarize, formatter)

/usr/local/lib/python3.6/dist-packages/torch/_tensor_str.py in __init__(self, tensor)
     88         else:
---> 89             nonzero_finite_vals = torch.masked_select(tensor_view, torch.isfinite(tensor_view) & tensor_view.ne(0))
     90
     91     if nonzero_finite_vals.numel() == 0:

RuntimeError: CUDA error: device-side assert triggered

I tried to look into the source of the problem and found that it occurs when the value of drop is very low (~0), so I tried using torch.topk on an arbitrary tensor with k=0. It works the first time, but the next call gives RuntimeError: CUDA error: device-side assert triggered. By now I can fix the issue by just adding an if statement, but I’m curious about the reason it happens. Thank you!
st45038
The last stack trace is most likely pointing to the wrong line of code, and the CUDA_LAUNCH_BLOCKING env var is not working properly inside your notebook. I would generally recommend exporting the notebook as a Python script and running it via:

CUDA_LAUNCH_BLOCKING=1 python script.py args

to get the proper line of code. That being said, I don’t quite understand the last statement. Could you post a code snippet showing when the assert is triggered?
st45039
I mean that I found the source of the problem: using topk with k=0 makes the GPU “unusable” for the next operation. Example:

x = torch.Tensor([1, 2, 3, 4, 5, 2, 1, -1, -4, 12, 4]).cuda()
torch.topk(x, k=0, largest=False)

After that, running any command that utilizes the GPU (I ran it in a different cell), e.g.

x = x**2

will result in an error:

RuntimeError                              Traceback (most recent call last)
<ipython-input-3-d4dbc0d35a05> in <module>()
----> 1 x = x**2

RuntimeError: CUDA error: device-side assert triggered

When I put it in a script and run it with CUDA_LAUNCH_BLOCKING I get the following stack trace (for the same snippet of code as above):

/pytorch/aten/src/THC/THCTensorTopK.cuh:107: gatherTopK: block: [0,0,0], thread: [0,0,0] Assertion `writeIndex < outputSliceSize` failed.
(... the same assertion failure is repeated for threads [1,0,0] through [10,0,0] ...)
THCudaCheck FAIL file=/pytorch/aten/src/THC/generic/THCTensorTopK.cu line=188 error=710 : device-side assert triggered
Traceback (most recent call last):
  File "script.py", line 3, in <module>
    torch.topk(x, k=0, largest=False)
RuntimeError: cuda runtime error (710) : device-side assert triggered at /pytorch/aten/src/THC/generic/THCTensorTopK.cu:188
st45040
Thanks for the follow-up! This is indeed a bug and should be fixed. I’ll create an issue later to track it.
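Until it is fixed, a minimal sketch of the if-statement guard the poster mentioned (the helper name is my own):

import torch

def safe_smallest_k_indices(A, n_drop):
    # topk with k=0 triggers a device-side assert on the GPU, so guard it
    if n_drop <= 0:
        return torch.empty(0, dtype=torch.long, device=A.device)
    return torch.topk(A, n_drop, largest=False, sorted=False)[1]

x = torch.randn(10)
print(safe_smallest_k_indices(x, 0))  # empty, without calling topk
print(safe_smallest_k_indices(x, 3))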
st45041
Hey folks, I was just trying to understand PyTorch Embedding layers. I am creating a time-series prediction model using an LSTM, but I also have some categorical information that I want to include in the model. I have some time-series variables, such as the year, month, week number, and day, as well as some spatial variables, including US state and county number. Now, I was wondering if I have to create separate embedding layers for each categorical column, or can I just create a single embedding to cover all of the categorical columns? More specifically, do I need to create a separate embedding column for the year, then a separate one for the month, and then one for the week and day? In that case I would need to concatenate all of those layers to pass them to a fully connected layer or something. Or can I just keep the (year, month, week number, and day) as the matrix that I input into the embedding layer? In other words, does the PyTorch implementation of Embedding layers handle having these multiple columns represented by a single output embedding matrix? Hopefully my question is clear, but please let me know if I need to clarify anything. I just wanted to understand how to best use these embeddings for categorical features. Thanks.
st45042
Solved by ptrblck in post #3.
st45043
krishnab: Or can I just keep the (year, month, week number, and day) as the matrix that I input into the embedding layer? In other words, does the PyTorch implementation of Embedding layers handle having these multiple columns represented by a single output embedding matrix?

You could feed the data as a single tensor to the nn.Embedding layer. However, I would use separate embeddings, as your input data would have completely different ranges. nn.Embedding layers accept inputs containing values in [0, nb_embeddings-1]. If your year data contains values in e.g. [1988, 2020], you would waste a lot of embedding vectors, as they would never be used. However, if you normalize the data (subtract the min. value), you would have overlapping indices for all data attributes, i.e. the year, week, day etc. would all index the same embeddings.
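A minimal sketch of the separate-embeddings approach (the value ranges and embedding dims below are illustrative assumptions):

import torch
import torch.nn as nn

year_emb = nn.Embedding(33, 8)   # years 1988-2020 -> indices 0-32
month_emb = nn.Embedding(12, 4)  # months 1-12 -> indices 0-11
day_emb = nn.Embedding(31, 4)    # days 1-31 -> indices 0-30

year = torch.tensor([2020, 1999]) - 1988  # subtract the min value
month = torch.tensor([5, 11]) - 1
day = torch.tensor([17, 3]) - 1

# concatenate the per-attribute embeddings into one feature vector
features = torch.cat([year_emb(year), month_emb(month), day_emb(day)], dim=1)
print(features.shape)  # torch.Size([2, 16])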
st45044
@ptrblck thanks so much for the response. Yes, this makes sense; I was thinking along these lines as well. I suppose if I want to combine some categorical variables, I could take the Cartesian product of the columns and then generate embedding values for that Cartesian product. But I can see how keeping the embeddings separate makes sense most of the time. Thanks again.
st45045
That’s another interesting idea. Let us know which approach worked better in case you try out both.
st45046
Hi, I have a problem with my code. The error first emerged after upgrading to 1.7.0. Before, I used

for name, param in model.named_parameters():
    mom_param = optimizer.state[param]['momentum_buffer']

to get the momentum buffer. As I read, this no longer works in 1.7.0. For all layers except one (a Linear) I can use:

k = 0
for name, param in model.named_parameters():
    mom_param = optimizer.state_dict()[k][param]['momentum_buffer']

What can I do to get the momentum buffer of my Linear layer? I searched how to put my code inside one of those fancy code snippets, but I haven’t found out how to do it. Thanks for helping. Jessi
st45047
The first code snippet still seems to work in 1.8.0.dev20201116:

model = models.resnet18()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
model(torch.randn(1, 3, 224, 224)).mean().backward()
optimizer.step()

for name, param in model.named_parameters():
    mom_param = optimizer.state[param]['momentum_buffer']
    print(mom_param)

Could you share a code snippet where this is failing?
st45048
Could somebody help me? I am new to PyTorch and am trying to predict house prices. I have been able to batch my dataset and do a train-test split. Even though I trained my model for many epochs, my accuracy isn’t really improving or that good: I get an error of about 1e10 (using MSE). Is this the best it can get, or am I doing something wrong?

My training loop:

epochs = 10
final_losses = []
lrs = 2
dif = 3
for j in range(lrs):
    ### Backward propagation -- define the loss function and the optimizer
    loss_function = nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=10**(-j - dif))
    print("\n\n", "-" * 12)
    print("Lr: ", str(10**(-j - dif)))
    print("-" * 12)
    print("\n")
    for i in range(epochs):
        for step, (x, y) in enumerate(train):  # tqdm
            y_pred = model(x)
            loss = loss_function(y_pred, y)
            final_losses.append(loss)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        print("\nEpoch number: {} and the loss : {}".format(i + 1, loss.item()))

Testing loop:

#### Prediction on X_test data
predictions = []
for x, y in test:
    # print(x)
    with torch.no_grad():
        for step, i in enumerate(range(len(x))):
            print("Count: {}".format(step + 1))
            mse = abs(model(x[i]).tolist()[0] - y[i].tolist())**2
            print("Expected:", y[i].tolist(), " Predicted: ", model(x[i]).tolist()[0], " Error: {:.2e}".format(mse))
    break

My error plot and my predictions are in the attached screenshots (omitted here). I am confused why my expected vs. predicted values differ so much.

My complete notebook: https://colab.research.google.com/drive/1Vd9fJs0bfd7IZ8D5dHS859EThz1DEB58?usp=sharing

Any help would be highly appreciated.
st45049
I have a problem similar to yours: my code works in Keras but not in PyTorch. Please, somebody help!
st45050
Hello, I am attempting to save out my best model using minimum validation loss via a custom method ModelCheckpoint. I think this is a folder access issue but I have run the ModelCheckpoint algorithm many times in the past and it worked so it may be a PyTorch issue. For some reason there is permission to access the file for the first two iterations but on the third it seems to fail to write the .pt file to my logs? Below is the code: Unique log path: def generate_unique_logpath(logdir, raw_run_name): i = 0 while(True): run_name = raw_run_name + "_" + str(i) log_path = os.path.join(logdir, run_name) if not os.path.isdir(log_path): return log_path i = i + 1 top_logdir = r"C:\Users\Daniel\OneDrive\Documents\Neural Networks Hw\Best Models Project\logs" if not os.path.exists(top_logdir): os.mkdir(top_logdir) ModelCheckpoint: class ModelCheckpoint: def __init__(self, filepath, model): self.min_loss = None self.filepath = filepath self.model = model def update(self, loss): if (self.min_loss is None) or (loss < self.min_loss): # print("Saving a better model") torch.save(self.model.state_dict(), self.filepath) #torch.save(self.model, self.filepath) self.min_loss = loss Usage: if TESTING: #Hyper-Parameter sets learning_rates = np.array([1.0, 0.1, 0.01, 0.001]) hidden_sizes = np.array([5, 10, 20]) EPOCHS = 40000 # just one since we are using a stopping criteria #output storage lists posttrain_loss_storage = torch.zeros(len(learning_rates), len(hidden_sizes), dtype = torch.float32) train_confusion_matrix_storage = np.empty((4,3,3,3), dtype = np.int_) test_confusion_matrix_storage = np.empty((4,3,3,3), dtype = np.int_) lr_count = 0 hs_cound = 0 for i in learning_rates: hs_count = 0 for j in hidden_sizes: string_path = "MLP_1HL" + str(j) + "_Adam" + str(i) logdir = generate_unique_logpath(top_logdir, string_path) print("Logging to {}".format(logdir)) # -> Prints out Logging to ./logs/linear_0 if not os.path.exists(logdir): os.mkdir(logdir) num_hidden = j model = Network(j) # stopping_criteria = StopCriteria(25) #sending model to cuda model.cuda() #X_train.to(device) criterion = nn.CrossEntropyLoss() #cross-entropy loss optimizer = torch.optim.SGD(model.parameters(), lr = i) # implementing momentum for learning rate #Showing test set loss pre-training print("-----------------------------------------------------------------") print("Learning Rate: " + str(i) + "; " + "Hidden Layer PE: " + str(j)) model.eval() y_pred = model(X_test) before_train = criterion(y_pred.squeeze(), torch.max(y_test, 1)[1]) print("Test loss pre training: " + str(before_train.item())) print() model_checkpoint = ModelCheckpoint(logdir + "/best_model.pt", model) # Training model for epoch in range(EPOCHS): optimizer.zero_grad() output = model.forward(X_train) loss = criterion(output.squeeze(), torch.max(y_train, 1)[1]) # if epoch == 0: # print('Epoch: {}; before training loss: {}'.format(epoch,loss.item())) #implementing stopping criteria val_output = model.forward(X_val) val_loss = criterion(val_output.squeeze(), torch.max(y_val, 1)[1]) model_checkpoint.update(val_loss) # if stopping_criteria.step(val_loss): # print('Epoch: {}; after train loss: {}'.format(epoch,loss.item())) # print() # break #printing epoch and loss if epoch % 4999 == 0: print('Epoch: {} train loss: {}'.format(epoch,loss.item())) #backpropagation loss.backward() optimizer.step() # evaluating the model and storing relevant information posttrain_loss_storage[lr_count, hs_count] = loss model_path = logdir + "/best_model.pt" model = Network(j) model = model.cuda() 
            model.load_state_dict(torch.load(model_path))
            # Switch to eval mode
            model.eval()
            test_pred = model(X_test)
            test_loss = criterion(test_pred.squeeze(), torch.max(y_test, 1)[1])
            test_CM = ConfusionMatrix(test_pred, y_test)
            test_acc = Accuracy(test_CM.float())
            # test_loss, test_acc, confusion_M = test(model, test_loader)
            print()
            print(" Test: Loss : {:.4f}, Acc : {:.4f}".format(test_loss, test_acc))
            print()
            print("Test Confusion Matrix: \n" + str(test_CM))
            print()
            print("-----------------------------------------------------------------")
            # train_confusion_matrix_storage[lr_count, hs_count, :, :] = train_CM
            # test_confusion_matrix_storage[lr_count, hs_count, :, :] = test_CM
            hs_count += 1
        lr_count += 1
    print()
    print("DONE TESTING HYPERPARAMETERS")
Output and Error:
Logging to C:\Users\Daniel\OneDrive\Documents\Neural Networks Hw\Best Models Project\logs\MLP_1HL5_Adam1.0_30
-----------------------------------------------------------------
Learning Rate: 1.0; Hidden Layer PE: 5
Test loss pre training: 0.69432133436203
Epoch: 0 train loss: 0.6944330930709839
Epoch: 4999 train loss: 0.5665101408958435
Epoch: 9998 train loss: 0.543997585773468
Epoch: 14997 train loss: 0.5248210430145264
Epoch: 19996 train loss: 0.5120164752006531
Epoch: 24995 train loss: 0.5025898814201355
Epoch: 29994 train loss: 0.49639782309532166
Epoch: 34993 train loss: 0.4912642538547516
Epoch: 39992 train loss: 0.4892112910747528

 Test: Loss : 0.6203, Acc : 0.6729

Test Confusion Matrix: 
tensor([[65, 36],
        [34, 79]])

-----------------------------------------------------------------
Logging to C:\Users\Daniel\OneDrive\Documents\Neural Networks Hw\Best Models Project\logs\MLP_1HL10_Adam1.0_12
-----------------------------------------------------------------
Learning Rate: 1.0; Hidden Layer PE: 10
Test loss pre training: 0.6900061368942261
Epoch: 0 train loss: 0.6919026970863342
Epoch: 4999 train loss: 0.5735248923301697
Epoch: 9998 train loss: 0.5334906578063965
Epoch: 14997 train loss: 0.5089364647865295
Epoch: 19996 train loss: 0.48545464873313904
Epoch: 24995 train loss: 0.4619404971599579
Epoch: 29994 train loss: 0.4471399486064911
Epoch: 34993 train loss: 0.43656283617019653
Epoch: 39992 train loss: 0.4294586479663849

 Test: Loss : 0.6900, Acc : 0.5374

Test Confusion Matrix: 
tensor([[  0,   0],
        [ 99, 115]])

-----------------------------------------------------------------
Logging to C:\Users\Daniel\OneDrive\Documents\Neural Networks Hw\Best Models Project\logs\MLP_1HL20_Adam1.0_11
-----------------------------------------------------------------
Learning Rate: 1.0; Hidden Layer PE: 20
Test loss pre training: 0.6932849884033203
Epoch: 0 train loss: 0.6937627196311951
---------------------------------------------------------------------------
PermissionError                           Traceback (most recent call last)
<ipython-input-8-4d76ab467289> in <module>
     58         val_output = model.forward(X_val)
     59         val_loss = criterion(val_output.squeeze(), torch.max(y_val, 1)[1])
---> 60         model_checkpoint.update(val_loss)
     61
     62         # if stopping_criteria.step(val_loss):

<ipython-input-4-4ca0ec529d66> in update(self, loss)
      9         if (self.min_loss is None) or (loss < self.min_loss):
     10             # print("Saving a better model")
---> 11             torch.save(self.model.state_dict(), self.filepath)
     12             # torch.save(self.model, self.filepath)
     13             self.min_loss = loss

~\anaconda3\envs\pytorch\lib\site-packages\torch\serialization.py in save(obj, f, pickle_module, pickle_protocol, _use_new_zipfile_serialization)
    325             return
    326
--> 327     with _open_file_like(f, 'wb') as opened_file:
    328         _legacy_save(obj, opened_file, pickle_module, pickle_protocol)
    329

~\anaconda3\envs\pytorch\lib\site-packages\torch\serialization.py in _open_file_like(name_or_buffer, mode)
    210 def _open_file_like(name_or_buffer, mode):
    211     if _is_path(name_or_buffer):
--> 212         return _open_file(name_or_buffer, mode)
    213     else:
    214         if 'w' in mode:

~\anaconda3\envs\pytorch\lib\site-packages\torch\serialization.py in __init__(self, name, mode)
    191 class _open_file(_opener):
    192     def __init__(self, name, mode):
--> 193         super(_open_file, self).__init__(open(name, mode))
    194
    195     def __exit__(self, *args):

PermissionError: [Errno 13] Permission denied: 'C:\\Users\\Daniel\\OneDrive\\Documents\\Neural Networks Hw\\Best Models Project\\logs\\MLP_1HL20_Adam1.0_11/best_model.pt'
st45051
Solved: it turned out to be an issue with having my logs folder inside OneDrive. I moved the logs folder to a local path on my PC and it ran fine.
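(For reference, a minimal sketch of the workaround. OneDrive can hold a sync lock on files it is uploading, which would explain intermittent PermissionErrors; the local path below is a hypothetical example, not from the original post.)

import os

local_logdir = r"C:\ml_logs"  # hypothetical local path, outside any synced folder
os.makedirs(local_logdir, exist_ok=True)

# Fail fast with a quick writability probe instead of crashing mid-training
probe = os.path.join(local_logdir, ".write_test")
with open(probe, "wb") as f:
    f.write(b"ok")
os.remove(probe)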
st45052
I get the error “AssertionError: Torch not compiled with CUDA enabled”. I’m using a GTX 950 GPU. CUDA was available a week ago, but not anymore, and I can’t solve this problem. I could not find a solution in any topic related to this subject. Please help, and thank you in advance.
st45053
Check if you’ve updated any drivers, if the GPU is still recognized by nvidia-smi, or if you’ve accidentally installed the CPU version of PyTorch.
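(A quick way to run those checks from Python; these are standard PyTorch attributes, shown as a minimal sketch.)

import torch

print(torch.__version__)          # a "+cpu" suffix indicates a CPU-only build
print(torch.version.cuda)         # None for CPU-only builds
print(torch.cuda.is_available())  # False if the build or driver is the problem
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))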
st45054
Hi, since I am using Windows 10, I ran nvcc --version and the result is as follows. Also, the PyTorch version isn’t the CPU version.
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Wed_Jul_22_19:09:35_Pacific_Daylight_Time_2020
Cuda compilation tools, release 11.0, V11.0.221
Build cuda_11.0_bu.relgpu_drvr445TC445_37.28845127_0
st45055
Sorry, I might have misunderstood. By “CUDA application”, are you talking about a .cu file or any use of nvcc? By the way, I can access and use the GPU via the “numba” library.
st45056
PyTorch's implementation of Mask R-CNN uses multilevel inheritance:
class GeneralizedRCNN(nn.Module):
    def __init__(self, sm_inputs):
        super(GeneralizedRCNN, self).__init__()
        self.sm_inputs = sm_inputs

    def forward(self, x):
        sm_outputs = self.sm_inputs(x)
        return sm_outputs

class FasterRCNN(GeneralizedRCNN):
    def __init__(self, sm_cool_inputs):
        super(FasterRCNN, self).__init__(sm_inputs=sm_cool_inputs)

class MaskRCNN(FasterRCNN):
    def __init__(self, sm_awsm_inputs):
        super(MaskRCNN, self).__init__(sm_cool_inputs=sm_awsm_inputs)
        self.sm_module.sm_more = sm_awsm_inputs.sm_awsm_input
Finally, you write a script where you call
mymodel = MaskRCNN(**awsm_kwargs)
mymodel(sm_awsm_img)
So how does one actually get to run the forward method defined in the base class? I think it is some ‘magic method’, like __getitem__ for dataset interfaces. Is this correct?
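(For reference, a minimal sketch of the mechanism being asked about: nn.Module implements __call__, which runs any registered hooks and then dispatches to self.forward, so a subclass that never overrides forward inherits the base class's implementation.)

import torch
import torch.nn as nn

class Base(nn.Module):
    def forward(self, x):
        return x * 2

class Child(Base):
    pass  # no forward of its own; Base.forward is inherited

m = Child()
print(m(torch.ones(3)))  # m(...) -> nn.Module.__call__ -> Base.forward -> tensor([2., 2., 2.])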
st45057
Hi, I am trying to implement WGAN-GP (in a conditional probability setting). When I use inplace=True in the ReLU activation layers, I get the error “RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.” during my critic training. I have come across a similar post here, Freeing buffer strange behavior 1, where setting inplace=False in the ReLU activation solved the problem. It “solved” my problem too, but my model became very slow and the model output values dropped drastically (I do not think they are correct). At the time the aforementioned post was written, the poster was using PyTorch 0.4.1, but I am using PyTorch 1.1.0. The strange thing is that this exact code was working fine for a few days, and then this error started popping up. Could I please have some suggestions about what I can do, or any leads on what might trigger this behaviour? Please let me know if I need to provide more information. This is my critic training code:
def train_disc_net(bx, by, gen_net, disc_net, disc_net_optmzr):
    # Zeroing out gradients
    disc_net_optmzr.zero_grad()
    gen_net.zero_grad()
    disc_net.zero_grad()

    # Reset requires_grad
    for p in disc_net.parameters():
        p.requires_grad = True

    # Training sign convention
    one = torch.ones(bx.shape[0], 1).cuda()
    neg_one = -1 * torch.ones(bx.shape[0], 1).cuda()

    # True data
    dval_true = disc_net(by, bx)
    dval_true.backward(neg_one)

    # Generated data
    by_gen = gen_net(bx)
    dval_gen = disc_net(by_gen.detach(), bx)
    dval_gen.backward(one)

    # Wasserstein distance
    was_dist = dval_true.mean() - dval_gen.mean()

    # Get drift regularization
    d_drift_reg = dval_true**2 + dval_gen**2
    d_drift_reg = 10e-9 * d_drift_reg.mean()
    d_drift_reg.backward(one)

    # Train with gradient regularization to ensure the Lipschitz-1 constraint
    grad_reg = 10 * get_grad_reg(by, by_gen.detach(), bx, disc_net)
    grad_reg.backward()  # <----- line giving the error

    # Objective function
    d_cost = -was_dist + grad_reg  # + d_drift_reg

    # Update the networks
    disc_net_optmzr.step()

    return d_cost, was_dist, grad_reg
This is my Lipschitz-constraint code:
def get_grad_reg(by, by_gen, bx, disc_net):
    # Mixing real and fake inputs in a random fashion
    epsilon = torch.FloatTensor(by.shape[0], 1, 1, 1, 1).uniform_(0.0, 1.0).cuda()
    by_hat = epsilon * by + (1 - epsilon) * by_gen
    by_hat = torch.autograd.Variable(by_hat, requires_grad=True)

    # Get output
    d_hat = disc_net(by_hat, bx.detach())

    # I concatenate the inputs of disc_net inside the disc_net function, and I
    # want to compute the gradients with respect to this concatenated input
    d_in = disc_net.x_xyt

    # Getting gradient regularization
    grad = torch.autograd.grad(outputs=d_hat, inputs=d_in,
                               grad_outputs=torch.ones(d_hat.size()).cuda(),
                               retain_graph=True, create_graph=True)[0]
    grad_norm = torch.sqrt(1e-8 + torch.sum(grad**2, dim=(1, 2, 3, 4)))
    one = torch.ones(grad_norm.shape).cuda()
    grad_reg = (grad_norm - one)**2
    grad_reg = grad_reg.mean()

    return grad_reg
st45058
I am sorry to bump this question. I believe I have searched sufficiently for similar issues on the internet, but I was unsuccessful, and this is currently blocking my work. I share resources with other people, and I wanted to know whether an upgrade of the software requirements is needed. I do not have a computer science background, so I do not know if I might unintentionally break other people’s code on the server by trying to upgrade (or, if I cannot upgrade by myself, I will have to convince the server admin that the upgrade is needed).
st45059
Sherine_Brahma: At the time the aforementioned post was written, the poster was using PyTorch 0.4.1, but I am using PyTorch 1.1.0.
I would recommend updating to the latest released version (1.7.0), as it ships with bug fixes and new features.
Sherine_Brahma: The strange thing is that this exact code was working fine for a few days, and then this error started popping up.
You might have accidentally changed something in the code, as it shouldn’t break “by itself”.
Sherine_Brahma: Could I please have some suggestions about what I can do, or any leads on what might trigger this behaviour?
The inplace error is raised because a tensor was overwritten that would be needed for the gradient calculation, as described here 4. That being said, the model behavior and convergence shouldn’t change between the inplace and out-of-place relu.
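(A minimal sketch reproducing this kind of inplace failure: exp saves its output for the backward pass, and the in-place relu then overwrites it.)

import torch

x = torch.randn(4, requires_grad=True)
y = x.exp()   # exp's backward needs the saved output y
y.relu_()     # in-place op bumps y's version counter
y.sum().backward()  # RuntimeError: one of the variables needed for gradient
                    # computation has been modified by an inplace operation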
st45060
Thank you so much. I debugged the problem: the output was different because of the layer-normalization layers I coincidentally started using when I set inplace=False. Since I had found that there was a bug related to this before, I started attributing the change in output to it. Thank you for clearing that up.
st45061
Can someone point out the advantages of this implementation 312 of DropConnect over a simpler method like this:
for i in range(num_batches):
    orig_params = []
    for n, p in model.named_parameters():
        orig_params.append(p.clone())
        p.data = F.dropout(p.data, p=drop_prob) * (1 - drop_prob)
    output = model(input)
    for orig_p, (n, p) in zip(orig_params, model.named_parameters()):
        p.data = orig_p.data
    loss = nn.CrossEntropyLoss()(output, label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
st45062
Some thoughts:
- Don’t use .data these days! It’s bad for you, really! with torch.no_grad(): will do fine. In fact, you probably should copy back the original values between the backward and the step. For a reasonably complicated model (more than one layer), your gradients might be off otherwise, and if you had stayed clear of .data, PyTorch would have told you.
- You multiply with 1 - drop_prob, which seems unusual.
- The convenience of a wrapper: any single instance is easily done manually, but now you would want to apply this to some weights rather than all. Is it still as straightforward? (A rough sketch of a wrapper follows below.)
- The safety of a wrapper: having a well-tested wrapper saves you from implementation mistakes. (See above.)
Best regards
Thomas
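(For illustration, a rough sketch of what such a wrapper could look like. This is not the linked implementation, just a minimal weight-dropping Linear layer.)

import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightDropLinear(nn.Linear):
    """Linear layer that zeroes a random subset of its weights each forward pass."""
    def __init__(self, in_features, out_features, drop_prob=0.05, **kwargs):
        super().__init__(in_features, out_features, **kwargs)
        self.drop_prob = drop_prob

    def forward(self, x):
        weight = self.weight
        if self.training and self.drop_prob > 0:
            # Fresh Bernoulli mask; autograd flows through the multiplication,
            # so dropped weights simply receive zero gradient this step.
            mask = torch.bernoulli(torch.full_like(weight, 1.0 - self.drop_prob))
            weight = weight * mask
        return F.linear(x, weight, self.bias)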
st45063
I’m forced to use p.data, because if I replace p.data with p, the weights won’t be modified. For example, if you print the values of p before and after the dropout line, you will see that only the p.data version actually zeroes out some weights. Unless you mean something else?
Good point about moving the weight restore to after the backward call! Thank you.
I have to multiply by 1 - drop_prob because dropout scales its input internally by 1 / (1 - drop_prob), and if I don’t do this the accuracy drops sharply: with drop_prob=0.05 it does not even converge unless I scale back the weights. I’m not sure what’s going on, but I suspect it might have something to do with batchnorm. Any ideas?
What do you mean by “is it still as straightforward?” With my method it’s much easier to apply DropConnect selectively: I don’t have to create wrappers for every single layer type, and I don’t have to modify my model’s forward function. I agree with your point about safety.
st45064
michaelklachko: open a bug about p not being zeroed
No. p = something just doesn’t overwrite the elements of p; it assigns a new thing to the name p. That’s inherent in how Python works; you want p.copy_(...). There are extremely few reasons to use p.data, and chances are you’re doing it wrong if you’re using it. (And people are getting serious about removing it properly, so hopefully it’ll go away soon.)
For the scaling, I don’t know. From a cursory look at the Gal and Ghahramani paper, maybe they also use the plain Bernoulli. I’d probably multiply with torch.bernoulli(weight, 1 - drop_prob) instead of using dropout and scaling.
Best regards
Thomas
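(For reference, a two-line illustration of the rebinding-versus-mutation difference:)

import torch

p = torch.zeros(3)
q = p
q = torch.ones(3)        # rebinds the name q; p is still all zeros
q = p
q.copy_(torch.ones(3))   # mutates the tensor in place; p is now all ones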
st45065
OK, that makes sense. I replaced all of the “p.data =” assignments with “p.copy_” and added the no_grad() context. No difference in performance that I can see, but if it’s safer, so be it.
I ran a few experiments with scaling, and yes, it seems like scaling is necessary; otherwise batchnorm will screw things up during inference. Recomputing the batch statistics during inference also fixes the issue (same good accuracy with or without scaling), but that’s obviously not a solution.
I’m not sure how to use torch.bernoulli; did you mean binomial? I tried generating binomial masks, but I don’t see a good way to generate them quickly on the GPU. I could only do mask = binomial_distr.sample(p.size()).cuda(), and this is very slow.
st45066
I’ll have to admit that the bernoulli API is more than a little bit awkward (it should be a factory function just like randn, really), but mask = torch.bernoulli(torch.randn(1, 1, device='cuda').expand(*weight.shape), 1-drop_prob) should work. Best regards Thomas
st45067
No, .data is more risky because it also bypasses the version counter. It has long been deprecated 2. The “remove .data” issue 14 has a great technical discussion.
Best regards
Thomas
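(Putting the thread's suggestions together, the original loop might look roughly like this. A sketch reusing the names from the snippet above, such as model, optimizer, and drop_prob; not a drop-in replacement.)

import torch
import torch.nn as nn

for i in range(num_batches):
    # Save the originals and apply Bernoulli masks without tracking gradients
    with torch.no_grad():
        orig_params = [p.clone() for p in model.parameters()]
        for p in model.parameters():
            p.mul_(torch.bernoulli(torch.full_like(p, 1.0 - drop_prob)))

    output = model(input)
    loss = nn.CrossEntropyLoss()(output, label)
    optimizer.zero_grad()
    loss.backward()  # gradients are taken w.r.t. the masked weights

    # Restore the original values between backward and step, as suggested above
    with torch.no_grad():
        for orig_p, p in zip(orig_params, model.parameters()):
            p.copy_(orig_p)
    optimizer.step()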