st103100
Hey, I want to do the same kind of thing too. Did you find a solution, or is there a document I can take inspiration from? Thanks
st103101
See https://github.com/Cadene/tensorflow-model-zoo.torch for some hints / mechanisms.
st103102
In case someone else gets here and has the same issue, I think that the problem is using reshape before transpose. I have loaded TF weights with PyTorch by permuting the weight Tensor, and it worked fine.
st103103
What’s the shortest way to convert a tensor with a single element to a float? Right now I’m doing: x_inp.min().cpu().detach().numpy().tolist() which works, but it’s a lot. If it doesn’t already exist, it might be nice to have something like: x_inp.to_python_number() which would throw an error if x_inp doesn’t have just a single element.
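A shorter route is tensor.item(), which returns a Python number and raises an error if the tensor holds more than one element; a minimal sketch:

import torch

x_inp = torch.randn(3, 4)
value = x_inp.min().item()   # Python float; works for CPU and CUDA tensors alike
print(type(value))           # <class 'float'>

# torch.randn(2).item() would raise an error, since only one-element
# tensors can be converted to Python scalars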
st103104
Hi, I'm trying to use half precision when evaluating a full-precision pretrained model, simply via model.half(), but I get a major drop in accuracy. Is it possible to use half precision only for inference? Maybe some scaling needs to be done? Thanks
st103105
Solved by Adamits in post #2 More neurons in the hidden layer means that you are learning more parameters for more ‘connections’. The LSTM has, for a single state, a structure just like an MLP except that there is a second parameter matrix for the ‘previous state’. So just like with an MLP, more neurons means you are projecting…
st103106
More neurons in the hidden layer means that you are learning more parameters for more ‘connections’. The LSTM has, for a single state, a structure just like an MLP except that there is a second parameter matrix for the ‘previous state’. So just like with an MLP, more neurons means you are projecting your data into a higher dimensional space, and thus able to model different shapes of data. Ultimately, the effect does not seem to be extremely well understood, and to find a ‘good’ number takes experimentation. Here is a stack exchange on that topic: https://ai.stackexchange.com/questions/3156/how-to-select-number-of-hidden-layers-and-number-of-memory-cells-in-lstm
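To make the "second parameter matrix for the previous state" concrete, a tiny sketch that prints the shapes of an LSTM's input-to-hidden and hidden-to-hidden weights (the factor of 4 comes from the input, forget, cell, and output gates):

import torch.nn as nn

lstm = nn.LSTM(input_size=10, hidden_size=20)
print(lstm.weight_ih_l0.shape)  # torch.Size([80, 10]) -> (4 * hidden_size, input_size)
print(lstm.weight_hh_l0.shape)  # torch.Size([80, 20]) -> (4 * hidden_size, hidden_size)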
st103107
I made this simple class as follows:

from vgg import VGG16
import torch.nn as nn

class NewNet(nn.Module):
    def __init__(self, n_classes, List, Input):
        super(NewNet, self).__init__()
        self.Input = Input
        self.n_classes = n_classes
        self.Base = VGG16()
        self.Loc = nn.Conv2d(self.Input.size(1), len(List) * 4, 3, padding=1)
        self.Conf = nn.Conv2d(self.Input.size(1), len(List) * (self.n_classes + 1), 3, padding=1)
        self.ConfMap = nn.Conv2d(self.Input.size(1), len(List), 3, padding=1)

    def forward(self):
        x = self.Input
        Out1 = self.Base(x)
        Loc_Out1 = self.Loc(Out1)
        return Loc_Out1

I am not sure why I get this error when I try to use it: "object is not iterable". Also, I am a newbie when it comes to classes, so I'm not sure whether what I did in __init__ is allowed; if it is, how should I call my class?
st103108
Can you give a more detailed stack trace along with how you are trying to instantiate the class?
st103109
Are you sure that you want to define the function as forward(self)? In that case it won't take any input from outside, and it will also be impossible for you to iterate over the training data.
st103110
Hello. Can somebody help me figure out whether this is normal behaviour of the model or not? I have a model with a GRUCell in it. I'm using it in an RL setting, so I'm feeding it input data one sample at a time (no batches, no tensors for sequences, just separate 1xN tensors for input points in a loop). And I have two identical (?) ways of calculating the loss:

for i_episode in range(max_episodes):
    sim = Sim()
    sim.run(max_iters, model)
    loss = model.loss()
    loss.backward()
    model.reset_train_data()
    if i_episode % update_episode == 0 and i_episode != 0:
        optimizer.step()
        optimizer.zero_grad()

(That is, every training episode I calculate the loss across some sim iterations (<=max_iters), then backprop it, accumulating gradients, and every update_episode use it in the optimizer, zeroing it afterwards.) The other way is this:

loss = torch.tensor([0.0])
for i_episode in range(max_episodes):
    sim = Sim()
    sim.run(max_iters, model)
    loss += model.loss()
    model.reset_train_data()
    if i_episode % update_episode == 0 and i_episode != 0:
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        loss = torch.tensor([0.0])

(Accumulate the sum of losses across update_episode episodes, then backpropagate it.) It should give the same result, I suppose, but the resulting gradients differ ((grad0-grad1).abs().max() is 1.00000e-04 * 1.1635). After 100-200 updates this causes serious divergence in the weights of the models trained the first and second ways. It could be the result of rounding errors, but 10^-4 seems too much for that kind of error. Also, the first approach to calculating gradients seems to have poor convergence, while the second converges better but has long autograd graph dependencies, which slows calculations and sometimes causes stack overflows. Any thoughts? Thanks!
st103111
Solved by albanD in post #4 Given the error and how it changes with update_episode, it looks like numerical precision errors. Do you fix your random seed? Does this trend of one training and one not training as well is the same for many different random seed? Even with two different random seeds and the exact same code, espe…
st103112
Hi, from a quick look I would say it is one of the two:
Your model.reset_train_data actually changes some tensors inplace that are used in the backward pass, or has some unexpected side effect.
If update_episode is large(ish), then yes, it can be numerical precision errors. It is expected that even the slightest difference will lead to completely different weights after training.
To check that I would:
Make sure the weights are the same before running tests. Even the slightest difference will give different gradients.
Check that it works for update_episode=1.
Check what the error is for update_episode=2; if it is already big, then it's potentially the first. If the error increases when you increase update_episode, then it is most likely the second.
st103113
Hi! Thanks for your reply! My reset_train_data is pretty simple - it just creates new lists for storing log-probs of actions, values and rewards, and also inits the hidden state of the GRUCell:

def reset_train_data(self):
    self.hidden = torch.zeros(1, self.hidden_size)
    self.values = []
    self.logprobs = []
    self.rewards = []

Also, reset_train_data is called after each training episode in both cases, so in theory if it affected backprop, it should affect both cases. The max element difference when using update_episode=0 is 0.0, it is 1.00000e-09 * 1.3970 when update_episode=2, and it increases as update_episode grows. But the more important question is why this affects convergence so drastically. Take a look at this graph: (loss graph omitted) As you can see, the two curves begin to diverge around episode 1800 - and update_episode is only 10 (according to measurements the error is about 2e-9).
st103114
Given the error and how it changes with update_episode, it looks like numerical precision errors. Do you fix your random seed? Is the trend of one run training well and the other not the same across many different random seeds? Even with two different random seeds and the exact same code, especially in an RL setting, you can unfortunately have wildly different behaviours.
st103115
You're right, when using a random seed the result is really unpredictable - sometimes sum-of-grads converges, sometimes grad-of-sums. So it must be some weird combination of precision errors and weight initialization causing this effect on my fixed seed… BTW, either this environment turns out to be much harder for RL than I expected or I have some sneaky bug here. I developed a "snake"-like sim - a 40x40-square field with N random wall blocks (and walls on the border), M "apples" and a 3-segment snake controlled by the neural net. Every time step it receives 3x8 vectors of distances in 8 directions to the nearest wall, "apple", and own segment (24 distances total). And it cannot properly learn, even when I disable grow-on-eating. The best result I've got so far - the snake learns to avoid walls, but it is not crazy about "apples". When I add a 25th input to the net representing "satiation" (init it with value 100, every apple adds +100 to it, every step decrements it), training fails completely. Well, not completely - if I enable "die-on-satiation-0", then it fails. If I just penalize the net with reward -1 for each step at satiation=0, it has an amazing effect: while still learning to avoid walls (most deaths caused by collision), the total reward slowly rises. But when it learns to live more than 100 iterations, it begins to receive enormous penalties for "starvation" and the total reward drops to negative values. And again - this all happens with grow-on-eating disabled! (Blue is iterations till death, orange is total reward; the graphs diverge at a value of about 100, when the snake learns to live long enough to starve.) It is really confusing and I am still trying to figure out what I should tune in such cases (this RL task seemed very easy to me, I supposed even a non-recurrent net should learn optimal behaviour in ~1000-5000 episodes…). I am using the same actor-critic code from the pytorch actor-critic example, so there should not be any bugs there.
st103116
Hi, I know that these kinds of applications tend to have very noisy behaviour. But I am not an expert in RL, so I am not sure about your task in particular.
st103117
Thanks for your help, it was great advice to check the behaviour on different seeds! I'll probably commit my code to a repo and post a question about convergence with a link to it on this forum later; maybe someone who is interested in RL will look into it.
st103118
My first GPU is good to go, but every time I run a program on the second GPU, the following error comes out: RuntimeError: cublas runtime error : resource allocation failed at /opt/conda/conda-bld/pytorch_1524586445097/work/aten/src/THC/THCGeneral.cpp:411 Any idea on this kind of issue? I can see both GPUs in the terminal by typing nvidia-smi.
st103119
I am facing a very similar problem; my code runs on the first GPU without any errors. However, when I use the second GPU with device = torch.device("cuda:1") I get this error message: cuda runtime error (77) : an illegal memory access was encountered at c:\programdata\miniconda3\conda-bld\pytorch_1524549877902\work\aten\src\thc\generic/THCTensorCopy.c:20. I am running my code on Windows 10, with PyTorch 0.4 and CUDA. Appreciate any help. Update: According to "Runtime error occurs when using .cuda(1)" there is a workaround for this issue. Before the code block that uses the second GPU, add something like with torch.cuda.device(1).
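A minimal sketch of that workaround, wrapping the second-GPU code in a device context:

import torch

device = torch.device("cuda:1")
with torch.cuda.device(1):          # make GPU 1 the current device for this block
    x = torch.randn(4, 4, device=device)
    y = (x @ x).sum()
print(y.item())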
st103120
Hi, thanks for giving attention to my question. A multi-model class has three independent models (GRU1, GRU2, GRU3); all their outputs are concatenated and then a final small network is attached as an ensemble. The code is below. There might be some problems: although the loss value was calculated fine, at the stage of loss.backward it raised an error. Is there any solution to backprop through multiple models? I think torch cannot track the tensors back to the start point. Where should I fix the code? Is there any example of a model like this? Thanks for spending your time reading my question.

class MultiModels(torch.nn.Module):
    def __init__(self, batchsize, hidden_size, hidden_n, dropout=0.5):
        super(GRU, self).__init__()
        self.batchsize = batchsize
        self.hidden_size = hidden_size
        self.hidden_n = hidden_n
        self.dropout = dropout
        # GRU(number of features, output_size, num_layers)
        self.GRU1 = torch.nn.GRU(3, hidden_size, hidden_n, batch_first=True)
        self.GRU2 = torch.nn.GRU(2, hidden_size, hidden_n, batch_first=True)
        self.GRU3 = torch.nn.GRU(2, hidden_size, hidden_n, batch_first=True)
        self.ensemble_layer1 = torch.nn.Linear(3 * 5, 7)
        self.ensemble_layer2 = torch.nn.Linear(7, 1)

    def forward(self, input, hidden):
        out1, hidden1 = self.GRU1(input, hidden.clone())
        out1 = out1.float()
        out2, hidden2 = self.GRU2(input[:, :, [0, 1]], hidden.clone())
        out2 = out2.float()
        out3, hidden3 = self.GRU3(input[:, :, [1, 2]], hidden.clone())
        out3 = out3.float()
        total_out = torch.cat((out1[:, -1, :], out2[:, -1, :], out3[:, -1, :]), 1)
        out = self.ensemble_layer1(total_out)
        out = self.ensemble_layer2(out)
        out = out.view(self.batchsize, -1)
        return out

    def init_hidden(self):
        hidden = torch.Tensor(torch.zeros(self.hidden_n, self.batchsize, self.hidden_size)).cuda()
        return hidden

model = MultiModels(16, 5, 2)
loss_f = BCEwithLogitLoss()
optim = Adam(model.parameters(), lr=0.001)

------training session------
optim.zero_grad()
hidden = model.init_hidden()
pred = model(input, hidden)
loss = loss_f(pred, label)
loss.backward()  # Error!

(error screenshot omitted)
st103121
jjongjjong: "At the stage of loss.backward, it raised an error" - Could you paste the error?
st103122
You haven't called the model's forward function anywhere in your code. You should do

model = MultiModels(16, 5, 2)
output = model(your_input)

and then use output to compute the loss.
st103123
This error is pretty rare since I have googled it and got nothing. Could you remove the three float() in forward and try again? The data type in the whole neural network should be the same, at least I think so.
st103124
You should keep everything the same type in your neural net, from input to output, and also the weights. You could use torch.set_default_tensor_type('torch.FloatTensor') to set a default tensor type. If you want to train with GPU, you should make sure everything is torch.cuda.FloatTensor type.
st103125
For example, I have a simple net:

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.init = nn.Sequential(
            nn.BatchNorm2d(3),
            nn.Conv2d(3, 5, kernel_size=3),
            nn.ReLU()
        )
        self.conv1 = nn.Conv2d(5, 10, kernel_size=1)

    def forward(self, x):
        x = self.init(x)
        x = self.conv1(x)
        return x

And I want to reinitialize the activation layers like this:

net = Net()
for m in net.modules():
    if type(m) == nn.ReLU:
        m = nn.ELU()

But this code does nothing. I can do it directly like this: net.init[2] = nn.ELU(), but I want a generic approach for this task. How can I do it? Thanks in advance.
st103126
Hi, The assignment m = nn.ELU() just changes the content of the python variable m and set it to whatever is on the other side. The object that used to be pointed by this python variable is deleted. You will need to index it as a list for this to work.
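One generic way to do that indexing, assuming the Net class from the question above and that the activations live inside containers such as nn.Sequential, is to walk the child modules and assign back by name; a sketch:

import torch.nn as nn

def replace_relu_with_elu(module):
    # Recursively swap every nn.ReLU child for a fresh nn.ELU
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, nn.ELU())
        else:
            replace_relu_with_elu(child)

net = Net()                # the Net class defined in the question
replace_relu_with_elu(net)
print(net)                 # the ReLU inside net.init is now an ELU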
st103127
I know this is a very basic question, but it’s my first day with pytorch and I can’t seem to figure it out? What is the difference between no_grad() and requires_grad, and when to use each of them, and when/how to mix them? Thanks guys. Best, Boris
st103128
Solved by ptrblck in post #2 with torch.no_grad() is a context manager and is used to prevent calculating gradients in the following code block. Usually it is used when you evaluate your model and don’t need to call backward() to calculate the gradients and update the corresponding parameters. Also, you can use it to initiali…
st103129
with torch.no_grad() is a context manager and is used to prevent calculating gradients in the following code block. Usually it is used when you evaluate your model and don’t need to call backward() to calculate the gradients and update the corresponding parameters. Also, you can use it to initialize the weights with torch.nn.init functions, since you don’t need the gradients there either. requires_grad on the other hand is used when creating a tensor, which should require gradients. Usually you don’t need this in the beginning, as all parameters which require gradients are already wrapped in nn.Modules in the nn package. You could set this property e.g. on your input tensor, if you need to update your input for example in an adversarial training setup.
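A small sketch contrasting the two:

import torch

w = torch.randn(3, requires_grad=True)    # leaf tensor that will receive gradients
x = torch.ones(3)                         # plain input, requires_grad=False by default

loss = (w * x).sum()
loss.backward()
print(w.grad)                             # tensor([1., 1., 1.])

with torch.no_grad():                     # nothing inside this block tracks history
    y = (w * x).sum()
print(y.requires_grad)                    # False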
st103130
Do you need this transform to be differentiable, or just to use it as part of the data loading?
st103131
If you are referring to upsampling using http://pytorch.org/docs/nn.html#upsamplingbilinear2d then you can upsample by a non-int scale_factor by directly providing the output size. For example:

import torch
import torch.nn as nn
from torch.autograd import Variable

inp = Variable(torch.randn(10, 3, 24, 24))
m = nn.UpsamplingBilinear2d(size=(55, 55))
out = m(inp)
print(out.size())
st103132
I need it to be differentiable, as I am doing a pixel-to-pixel task which requires the output size to be different from the input size.
st103133
This appears to give an error saying that the upsampled size is not divisible by the original size when done with UpsampleNearest2d rather than UpsampleBilinear2d
st103134
Hi, all. I want to compile ATen alone, because I want to use pytorch in C++. When I follow the install guide in https://github.com/pytorch/pytorch/tree/master/aten, I get an error. I just use cmake .., and the error is:

-- Found system Eigen at /usr/local/include/eigen3
-- Could NOT find pybind11 (missing: pybind11_INCLUDE_DIR)
-- Found CUDA: /usr/local/cuda (found suitable version "9.0", minimum required is "7.0")
-- Caffe2: CUDA detected: 9.0
-- Caffe2: CUDA nvcc is: /usr/local/cuda/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda
-- Caffe2: Header version is: 9.0
-- Found cuDNN: v7.1.4 (include: /usr/local/cuda/include, library: /usr/local/cuda/lib64/libcudnn.so)
-- Autodetected CUDA architecture(s): 6.1
-- Added CUDA NVCC flags for: -gencode;arch=compute_61,code=sm_61
CMake Error at /home/mjj/Pytorch/pytorch/cmake/public/utils.cmake:5 (set):
  set given invalid arguments for CACHE mode.
Call Stack (most recent call first):
  /home/mjj/Pytorch/pytorch/cmake/Dependencies.cmake:480 (caffe2_update_option)
  CMakeLists.txt:65 (include)

Could anyone help me? Thanks very much!!
st103135
Does anyone know a convenient and easy way to understand whether your NN is suffering from dead RELU’s in Pytorch?
st103136
Solved by ptrblck in post #2 By dead ReLUs you mean a negative input and thus a zero result? If so, you could count the zeros in the output activation. Here is a toy example for a simple model: model = nn.Sequential( nn.Linear(10, 10), nn.ReLU() ) x = torch.randn(10, 10) output = model(x) (output == 0).sum(1).float(…
st103137
By dead ReLUs you mean a negative input and thus a zero result? If so, you could count the zeros in the output activation. Here is a toy example for a simple model:

model = nn.Sequential(
    nn.Linear(10, 10),
    nn.ReLU()
)
x = torch.randn(10, 10)
output = model(x)
(output == 0).sum(1).float() / output.size(1)

For more complex models, you could use a forward hook to get the intermediate activations.
st103138
Hi, I want to load a Lua model like this:

model = Net()
model.cuda()
from torch.utils.serialization import load_lua
model.load_state_dict(load_lua('/home/nn4.small2.v1.t7'))

But I get an unknown class and cannot load the model; is there any way to fix it?

/home/lab/yifan/venv/local/lib/python2.7/site-packages/torch/utils/serialization/read_lua_file.pyc in read_object(self)
    538     "{}. If you want to ignore this error and load this object "
    539     "as a dict, specify unknown_classes=True in reader's "
--> 540     "constructor").format(cls_name))
    541
    542 def _can_be_list(self, table):

T7ReaderException: don't know how to deserialize Lua class nn.SpatialConvolutionMM. If you want to ignore this error and load this object as a dict, specify unknown_classes=True in reader's constructor

Thanks a lot!
st103139
Thanks Smith! I tried that, but it turns out: AttributeError: 'SpatialBatchNormalization' object has no attribute 'running_var'
st103140
that checkpoint is REALLY old, even by torch standards. Please load the checkpoint in torch and save it again, and then load it in pytorch.
st103141
Hi, I faced the same problem once with “nn.SpatialConvolutionMM.”. To overcome this, I used the following script: https://raw.githubusercontent.com/soumith/cudnn.torch/master/convert.lua (provided by @smth) which allowed me to convert SpatialConvolutionMM to SpatialConvolution Hope this helps.
st103142
In my network, the last layer is an nn.Linear classifier with in_channels=32, out_channels=64*64=4096 (very large). My input batch is very large, and for each input x only some of the class labels are admissible. I have an external algorithm to filter out impossible class labels. As an example, say I have x1, x2, x3 … xN as input. For xi, the only possible labels are yi1, yi2, yi3. For xj, the only possible labels are yj1, yj2, etc. I can get the logits of all samples for all classes by calling logit = classifier([x1, x2, … xN]) = [[z11, z12 … z14096], [z21, z22 … z24096] … [zN1, zN2 … zN4096]]. Since I don't need most of the values, is there a faster way to do this, e.g. using a torch sparse tensor? My current solution uses a loop which is very slow: logit_of_admissible_class = [classifier_k for zip(xi, yik)]
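One loop-free sketch, under the assumption that the admissible labels can be padded to a fixed small number k per sample: index only the needed rows of the classifier's weight matrix instead of computing all 4096 logits (shapes and names here are illustrative, not from the original post):

import torch
import torch.nn as nn

N, in_features, num_classes, k = 8, 32, 4096, 3
classifier = nn.Linear(in_features, num_classes)
x = torch.randn(N, in_features)
admissible = torch.randint(0, num_classes, (N, k))   # hypothetical padded label sets

# Only evaluate the weight rows we actually need:
W = classifier.weight[admissible]          # (N, k, in_features)
b = classifier.bias[admissible]            # (N, k)
logits = torch.einsum('nkf,nf->nk', W, x) + b
print(logits.shape)                        # torch.Size([8, 3])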
st103143
Hello PyTorch community! I'm trying to implement Adam by myself for learning purposes. Here is my Adam implementation: https://gist.github.com/byorxyz/dfe3da1000e67aced1c7d9279351cb88 I think I implemented everything correctly, however the loss graph of my implementation is very spiky compared to that of torch.optim.Adam. My Adam implementation loss graph (plot omitted) vs. the torch.optim.Adam loss graph (plot omitted). If someone could look at my code and tell me what I am doing wrong, I'll be very grateful. Thank you for PyTorch! (For the full code including data (super easy to run): https://github.com/byorxyz/AMS_pytorch/blob/master/AdamFails_1dConvex.ipynb)
st103144
Hi, I have a GPU and CUDA, cuDNN, and NCCL. My OS is Ubuntu 16.04. I followed this tutorial to install Caffe2 with GPU support (conda install -c caffe2 caffe2-cuda9.0-cudnn7) and the installation finished successfully, but this command:

python2 -c 'from caffe2.python import workspace; print(workspace.NumCudaDevices())'

returns:

WARNING:root:This caffe2 python run does not have GPU support. Will run in CPU only mode.
WARNING:root:Debug message: libcurand.so.9.0: cannot open shared object file: No such file or directory
Segmentation fault (core dumped)

Any idea how I can solve it? TNX
st103145
I have a matrix A with dimensions BxCxDxD. I know that via Softmax2d I can apply softmax channel-wise. I was wondering if there is a way to specify how much of the channel dimension to consider. With a loop I would do it like this (say C = 44 and I want to do softmax for the first half and the second half separately):

SoftMax = nn.Softmax2d()
for i in range(2):
    TempM = A[:, i*22:(i+1)*22, :, :]
    TempM = SoftMax(TempM)
    B_Classes[:, i*22:(i+1)*22, :, :] = TempM

Is there any simpler way to do it?
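One loop-free alternative (a sketch, assuming the two-halves case above) is to reshape the channel dimension into (groups, channels_per_group) and apply softmax over the per-group channel axis, which matches what Softmax2d does on each half:

import torch
import torch.nn.functional as F

B, C, D = 4, 44, 7
A = torch.randn(B, C, D, D)

groups = 2
# split channels into groups, softmax over each group's channels, reshape back
B_classes = F.softmax(A.view(B, groups, C // groups, D, D), dim=2).view(B, C, D, D)
print(B_classes.shape)   # torch.Size([4, 44, 7, 7])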
st103146
I have images and targets of size BxCxDxHxW. I want to perform random rotation on tensors. Currently, I was able to use ndimage.affine_transform by feeding each image of size DxHxW to the function, like images = ndimage.affine_transform(images, full_rot_mat), where full_rot_mat is a rotation matrix. I have two questions: How can we use the code below on tensors? For now, I have to convert back to numpy to use it (img_np = images.data.cpu().numpy()). How can we rotate a whole batch (over B)? For now, I have to rotate each image in the batch separately, like img_np_ = img_np[0,:,:,:,0]. This is the code I am using:

def do_random_transform(images, targets, x_rotation_max_angel_deg, y_rotation_max_angel_deg, z_rotation_max_angel_deg):
    x_rot_mat = np.eye(3, 3)
    if x_rotation_max_angel_deg > 0:
        rot_deg = random.randint(-1 * x_rotation_max_angel_deg, x_rotation_max_angel_deg)
        x_rot_mat[1, 1] = math.cos(math.radians(rot_deg))
        x_rot_mat[2, 2] = math.cos(math.radians(rot_deg))
        x_rot_mat[1, 2] = -1 * math.sin(math.radians(rot_deg))
        x_rot_mat[2, 1] = math.sin(math.radians(rot_deg))
    y_rot_mat = np.eye(3, 3)
    if y_rotation_max_angel_deg > 0:
        rot_deg = random.randint(-1 * y_rotation_max_angel_deg, y_rotation_max_angel_deg)
        y_rot_mat[0, 0] = math.cos(math.radians(rot_deg))
        y_rot_mat[2, 2] = math.cos(math.radians(rot_deg))
        y_rot_mat[2, 0] = -1 * math.sin(math.radians(rot_deg))
        y_rot_mat[0, 2] = math.sin(math.radians(rot_deg))
    z_rot_mat = np.eye(3, 3)
    if z_rotation_max_angel_deg > 0:
        rot_deg = random.randint(-1 * z_rotation_max_angel_deg, z_rotation_max_angel_deg)
        z_rot_mat[0, 0] = math.cos(math.radians(rot_deg))
        z_rot_mat[1, 1] = math.cos(math.radians(rot_deg))
        z_rot_mat[0, 1] = -1 * math.sin(math.radians(rot_deg))
        z_rot_mat[1, 0] = math.sin(math.radians(rot_deg))
    full_rot_mat = np.dot(np.dot(x_rot_mat, y_rot_mat), z_rot_mat)
    images = ndimage.affine_transform(images, full_rot_mat)
    targets = ndimage.affine_transform(targets, full_rot_mat)
    return images, targets

And this is how I use it:

images, targets = data  # these are tensors
img_np = images.data.cpu().numpy()
label_np = targets.data.cpu().numpy()
img_np_ = img_np[0, :, :, :, 0]
label_np_ = label_np[0, :, :, :, 0]
images_val, label_val = do_random_transform(img_np_, label_np_, 10, 10, 15)
st103147
Hi, I am trying to use a sparse matrix. When I try to load the batch I get the following error:

for i, data in enumerate(train_loader, 0):
  File "/usr/local/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 259, in next
    batch = self.collate_fn([self.dataset[i] for i in indices])
  File "/usr/local/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 135, in default_collate
    return [default_collate(samples) for samples in transposed]
  File "/usr/local/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 135, in
    return [default_collate(samples) for samples in transposed]
  File "/usr/local/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 112, in default_collate
    return torch.stack(batch, 0, out=out)
  File "/usr/local/lib/python3.5/site-packages/torch/functional.py", line 62, in stack
    inputs = [t.unsqueeze(dim) for t in sequence]
  File "/usr/local/lib/python3.5/site-packages/torch/functional.py", line 62, in
    inputs = [t.unsqueeze(dim) for t in sequence]
AttributeError: 'torch.sparse.FloatTensor' object has no attribute 'unsqueeze'

The version I am using is 0.3.1. Thanks, Ortal
st103148
Like the error says, you can’t use unsqueeze with this kinds of object. I don’t know what dimensions you wish to get but maybe you can use resize_as_ instead?
st103149
I am also dealing with sparse tensors for one project and I am having a hard time understanding how to deal with them correctly without reinventing the wheel. Is there any documentation where you found this resize_as_ function? How should we use it? Are there any other useful functions? Because this function is not listed in the documentation (http://pytorch.org/docs/master/sparse.html).
st103150
I have the same problem of using sparse matrices. Please post if you find any solution.
st103151
@richard is working on unsqueeze for sparse: https://github.com/pytorch/pytorch/pull/5236
st103152
vgg19-dcbb9e9d.pth from torchvision.models takes 0.5 GB. Why is it so large? Is it possible to reduce the file size if the model is only used for evaluation?
st103153
The model is just really big, or rather it has a lot of parameters. You can find more information about the model here. I'm not sure how much you can save by zipping it, but you could try that.
st103154
My problem is best illustrated with an example.

class Generator(nn.Module):
    def __init__(self):
        super(Generator, self).__init__()
        self.l1 = nn.Sequential(
            nn.Linear(1200, 1)
        )

    def forward(self, z):
        zh = self.l1(z)
        return x

works. However,

class Generator(nn.Module):
    def __init__(self):
        super(Generator, self).__init__()
        def l1x(somenumber):
            return nn.Sequential(
                nn.Linear(somenumber, 1)
            )
        self.l1 = l1x

    def forward(self, z):
        zh = self.l1x(1200)(x)
        return x

gives Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor when I use cuda. If I edit it to self.l1x(1200).cuda()(x) this will work too, but I already called .cuda() on my generator object after I initialized it. Hence I want it to work like everything else, without extra steps. What am I doing wrong?
st103155
In your second approach you re-create the layer every time you call forward. Are you sure you would like that? Your model won’t learn anything as it’s random in every step.
st103156
No I would not like that. Full disclosure, I am a newbie so haven’t the faintest clue if anything I’m doing makes sense. All I want is to not write the same block over and over again. I think I can define a function that returns the layers within the init, and can call it like self.l1 = my_func(a, b, c) but since I’ll call something again under the forward() I feel this is verbose. Or is it necessary and the right way because how pytorch works?
st103157
The first example should work just fine. The usual way is to create the layers in __init__ and call them in forward. If you would like, you can also create another member function to create your layers, but you should still call it in __init__. What do you mean by writing the same block over and over again? Do you have an example? Currently your second approach seems to need more lines of code.
st103158
class Generator(nn.Module):
    def __init__(self):
        super(Generator, self).__init__()
        self.l1 = nn.Sequential(
            nn.Linear(1200, 240)
        )
        self.l2 = nn.Sequential(
            nn.Linear(1200, 100)
        )

    def forward(self, z):
        zh = self.l1(z)
        zh2 = self.l2(z)
        return torch.cat((zh, zh2), dim=1)

Ignore the meaningless network. I am wondering why I can't define a function

def my_func(n_out):
    return nn.Sequential(
        nn.Linear(1200, n_out)
    )

and call

def forward(self, z):
    zh = my_func(100)(z)
    zh2 = my_func(200)(z)
    return torch.cat((zh, zh2), dim=1)
st103159
You could call my_func in __init__ to create your layers:

class Generator(nn.Module):
    def __init__(self):
        super(Generator, self).__init__()
        self.l1 = self.my_func(100)

    def my_func(self, n_out):
        return nn.Sequential(
            nn.Linear(1200, n_out)
        )

    def forward(self, z):
        z = self.l1(z)
        return z

Inside my_func you are creating the nn.Sequential with the layer inside, i.e. the layer will be created with randomly initialized parameters. If you call this function in your forward, you will re-create your model in every forward pass.
st103160
I am using the C++ code of pytorch because I have a project in C++, and I need to know where the python code and functions are mapped to C++ functions in the pytorch repository. Could anybody tell me?
st103161
Solved by tom in post #2 This is autogrenerated. Do follow the build instructions for PyTorch and then look in torch/csrc/autograd/generated. The autogeneration code is in tools/autograd (including the template/Function.cpp that includes a lot of backwards definitions). It draws from .cwrap and .yaml files in aten/src/ATen …
st103162
This is autogenerated. Do follow the build instructions for PyTorch and then look in torch/csrc/autograd/generated. The autogeneration code is in tools/autograd (including the template/Function.cpp that includes a lot of backwards definitions). It draws from .cwrap and .yaml files in aten/src/ATen and aten/src/ATen/native. Best regards, Thomas
st103163
Ah, thank you so much. I really needed this. I'm basically using pytorch as though I am a pytorch developer, but I actually am not. Does anybody know where or if there is any documentation on how to build pytorch with debug symbols, or other internal documentation about how it is designed, for new incoming pytorch developers? I didn't see that on the front github page or the pytorch website. Ok, I've built it and am looking through what you said…
st103164
If you set the environment variable DEBUG=1, you get debug builds. I've been meaning to write up a guide on how things in ATen end up as torch.X, but I never get around to it. Best regards, Thomas
st103165
Hi, I have two separately trained networks and I am stitching them together. I also intend (for fine-tuning) to train them jointly based on a loss function on the second network's output. Is this possible in pytorch, or do I need to define a new class putting them into a single network? I look forward to your responses.
st103166
It is as simple as it sounds; you do not need to define anything new. You can just do it this way:

out_1 = network_1(input)
out_2 = network_2(out_1)
loss = loss_function(out_2, target)
loss.backward()

If there is no processing in between, you can also put both of them in a Sequential by doing network = nn.Sequential(network_1, network_2).
st103167
Thanks for the response; this looks good, except that I don't know how to tell my optimizer which parameters should be optimized. If I use net = nn.Sequential(net1, net2), can I simply give the optimizer net.parameters()?
st103168
Yes, it will work that way. If you want to keep 2 separate networks, you have to do as explained in this post: Giving multiple parameters in optimizer
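A minimal sketch of the two-network case, reusing the network_1/network_2 names from the reply above:

import itertools
import torch.optim as optim

# one optimizer over both networks' parameters
optimizer = optim.Adam(
    itertools.chain(network_1.parameters(), network_2.parameters()), lr=1e-3
)

# or, equivalently, parameter groups with per-group options
optimizer = optim.Adam([
    {'params': network_1.parameters(), 'lr': 1e-4},  # e.g. a smaller lr for the pretrained part
    {'params': network_2.parameters()},
], lr=1e-3)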
st103169
Minimal-ish example code:

import torch
from torch import nn

N = 5
seq_len = 3
vocab_size = 7
embedding_size = 8

embedding = nn.Embedding(vocab_size, embedding_size)
h = nn.Linear(embedding_size * seq_len, vocab_size)
encoded = torch.rand(seq_len, N, embedding_size, requires_grad=True)
out_probs = torch.zeros(seq_len, N, vocab_size)
out = torch.LongTensor(seq_len, N)
for t in range(seq_len):
    out_emb = torch.zeros(seq_len, N, embedding_size)
    if t > 0:
        # out_t = out[:t].data
        out_t = out[:t].detach()
        out_emb[:t] = embedding(out_t)
    out_emb = encoded + out_emb
    out_emb = h(out_emb.transpose(0, 1).contiguous().view(N, -1))
    out_probs[t] = out_emb
    _, decoded = out_emb.max(dim=-1)
    out[t] = decoded
loss = out_probs.sum()
loss.backward()

gives the error

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

But if I replace .detach() with .data or .clone().detach(), then there is no error. Why? A bug in .detach()? A bug in my own code? A bug in .data?
st103170
Solved by tom in post #2 I think you are running in the case described in the PyTorch 0.4 migration guide. .detach() is safer in that it catches inplace modifications that will cause autograd to give wrong results when backpropagating through the graph that you have “detached from”. Consider the following more minimal exa…
st103171
I think you are running into the case described in the PyTorch 0.4 migration guide. .detach() is safer in that it catches inplace modifications that will cause autograd to give wrong results when backpropagating through the graph that you have "detached from". Consider the following more minimal example:

a = torch.arange(5., requires_grad=True)
b = a**2
c = a.detach()  # error with detach(), wrong result with .data
c.zero_()
b.sum().backward()
print(a.grad)

This errors, and rightfully so, because it detects that a has changed inplace and this will trip up gradient calculation. If you comment out the c.zero_() or use .clone().detach(), you see that a.grad is 2*a just as it should be. If you use .data, the connection will fully break and you'll silently get the wrong result that a.grad is 0. .data is intended to support some old-style updates (that could use with torch.no_grad(): instead), e.g. in optimizers. Most likely, you should not use it in your own code unless you are exactly sure that it is the right thing to do. Best regards, Thomas
st103172
Ah, ok, I think I see. It is because .detach() doesnt implicitly create a copy of the tensor, so when I later modify that tensor, it’s updating the tensor on the upstream side of .detach() too. By cloning first, this issue doesnt arise, and all is ok?
st103173
Yes, detach doesn't create copies and should only prevent the gradients from being computed, but it shares the data. So in your case, the detach in clone().detach() should also be redundant, except that you save computational resources by not updating the detached variable.
st103174
Does anyone know how I can do weighted learning (e.g. in sklearn.svm.SVC the parameter 'class_weight' allows you to assign different weights to different classes) in a CNN with cross entropy loss? Thanks very much!
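The PyTorch analogue is the weight argument of nn.CrossEntropyLoss (one weight per class); a minimal sketch, with the weights here chosen as an illustrative inverse-frequency assumption:

import torch
import torch.nn as nn

num_classes = 3
class_counts = torch.tensor([100., 10., 1.])   # hypothetical training-set counts
class_weights = class_counts.sum() / (num_classes * class_counts)  # inverse-frequency weights

criterion = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(8, num_classes)           # CNN output for a batch of 8
targets = torch.randint(0, num_classes, (8,))
loss = criterion(logits, targets)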
st103175
Are the weights simply the proportion of cases in each class? Or is it the inverse of this?
st103176
Hi All, Did you get any update on this? Is there any sample code for this? Thanks
st103177
After the code has run for a long time, my dataloader just freezes. It seems like all subprocesses in the dataloader hang and the main process just waits for data loading. I use the DataLoader like this:

train_data_loader = DataLoader(train_data, batch_size=64, shuffle=True, num_workers=10)

train_data is a data generation method. (CPU and memory usage screenshots omitted.) You can see that after several cycles of model training and testing, the cpu and memory usage goes up and then goes down. But in the end, during testing, the cpu usage drops to 1/10, which is where I think the dataloader for testing just hangs. What happens with multiprocessing? There is no error until the job is killed.
st103178
Is there a way that allows splitting non-tensor arguments when using torch.nn.DataParallel ? I am passing annotations to my module for computing the loss, and want to split that list according to the batch size as well. I know I could handle this by transforming the annotations to a tensor, but as these annotations are objects with quite a lot of different properties, that is something I am not keen on doing. It would be nice if there would be a callback function you give when initializing the torch.nn.DataParallel module, just like how the collate_fn for torch.utils.data.DataLoader works.
st103179
The pytorch tutorials I have been following normally just measure running_loss += loss.item() and don't take a validation set into account. At the moment I am calculating the training error with running_loss += loss.item(), and every n epochs I calculate the MSE on the validation set. I calculate the validation error by:

y_val_predicted = net(X_va)
temp1 = mean_squared_error(np.array(y_val_T.detach()), np.array(y_val_predicted.detach()))

This seems inefficient, so I was wondering what other people do in this specific case, and more broadly, what the best practices are for measuring loss as the network is being trained.
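One common pattern is to stay in torch, switch to eval mode, and disable gradient tracking while accumulating the validation loss; a sketch with assumed names (net, criterion, val_loader):

import torch

def evaluate(net, criterion, val_loader):
    net.eval()
    total_loss, n = 0.0, 0
    with torch.no_grad():                 # no graph is built, so this is cheap
        for X_val, y_val in val_loader:
            pred = net(X_val)
            total_loss += criterion(pred, y_val).item() * X_val.size(0)
            n += X_val.size(0)
    net.train()
    return total_loss / n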
st103180
I have read the other similar posts but have not been able to solve the issue and get Conv1d to work. Initially I was getting this error:

RuntimeError: Expected object of type torch.DoubleTensor but found type torch.FloatTensor for argument #2 'weight'

I tried adding a .double() at the end of my nn.Conv1d line, but that changed the error message to:

RuntimeError: Expected object of type torch.FloatTensor but found type torch.DoubleTensor for argument #2 'target'

This latter error was triggered later on in the code. I tried to cast the tensor with x = x.type(torch.FloatTensor) but I got the same error. How can I solve this issue? Code:

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv1d(2, 10, 5).double()  # .double
        self.fc1 = nn.Linear(10*95, 100)
        self.fc2 = nn.Linear(100, 99)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = x.view(-1, 10 * 95)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        x = x.type(torch.FloatTensor)  # I tried adding this line in multiple places to no avail
        return x

net = Net()
st103181
Solved by ptrblck in post #2 Your target seems to be still a torch.DoubleTensor. Could you cast it to .float() and try it again?
st103182
Your target seems to be still a torch.DoubleTensor. Could you cast it to .float() and try it again?
st103183
Thank you! I should have realized the error. I had to cast both the X and Y with float. I didn’t even need .double()
st103184
For example, I have a 3d tensor like this:

a = torch.ones((3, 3, 3))
a[:, 1, 1] = 2
a[:, 2, 2] = 5

And I have a 2d "mask" like this:

b = torch.zeros((3, 3))
b[1, 1] = 1
b[2, 2] = 1

And I want to get a list of 3d vectors from a selected by the mask b: the output should contain two vectors, [[2, 2, 2], [5, 5, 5]]. I tried constructions like torch.masked_select, but it always returns a 1d tensor which does not preserve the "vectorized" order of the elements; it returns a tensor like this: [2, 5, 2, 5, 2, 5]. How can I get the correct result using pytorch operations? Thanks in advance!
st103185
Does advanced indexing (a[:, b]) work for you (note that you need a byte/uint8 tensor to index)?

a = torch.ones((3, 3, 3))
a[:, 1, 1] = 2
a[:, 2, 2] = 5
b = torch.zeros((3, 3), dtype=torch.uint8)
b[1, 1] = 1
b[2, 2] = 1
print(a[:, b])

Best regards, Thomas
st103186
I'm training a seq2seq model for machine translation, but since the number of output classes is very high, each iteration takes too long. I found out about the sampled softmax, which promises to speed up the computation; has anyone done this before?
st103187
I have a center-crop function for 5D tensors: BxCxDxHxW. The function works fine in pytorch 0.5 but it gives an error in pytorch 0.4. This is the code:

def center_crop(x, depth, height, width):
    crop_d = torch.FloatTensor([x.size()[2]]).sub(depth).div(-2)
    crop_h = torch.FloatTensor([x.size()[3]]).sub(height).div(-2)
    crop_w = torch.FloatTensor([x.size()[4]]).sub(width).div(-2)
    return F.pad(x, [
        crop_d.ceil().int()[0], crop_d.floor().int()[0],
        crop_h.ceil().int()[0], crop_h.floor().int()[0],
        crop_w.ceil().int()[0], crop_w.floor().int()[0],
    ])

variable = Variable(torch.randn(8, 2, 24, 32, 32))
print(center_crop(variable))

The error in pytorch 0.4 is:

  File "main.py", line 7, in center_crop
  File "/home/john/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py", line 1929, in pad
    return ConstantPadNd.apply(input, pad, value)
  File "/home/john/anaconda3/lib/python3.6/site-packages/torch/nn/_functions/padding.py", line 27, in forward
    output = input.new(input.size()[:(ctx.l_diff)] + new_dim).fill_(ctx.value)
TypeError: torch.Size() takes an iterable of 'int' (item 2 is 'Tensor')

How could I fix it? Thanks
st103188
The code actually mostly runs for me (except the call missing parameters). There is, however, no reason not to just use python ints for the crop_d/h/w calculations (as in crop_h_floor = (x.size(3) - height) // 2 and crop_h_ceil = (x.size(3) - height + 1) // 2) and then use those in the pad call. Best regards, Thomas
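A sketch of that integer-only version, keeping the same negative-padding trick as the original function (the negation is an assumption about the intended crop direction):

import torch
import torch.nn.functional as F

def center_crop(x, depth, height, width):
    # negative padding crops; split any odd difference between the two sides
    pd, ph, pw = x.size(2) - depth, x.size(3) - height, x.size(4) - width
    return F.pad(x, [-(pw // 2), -((pw + 1) // 2),
                     -(ph // 2), -((ph + 1) // 2),
                     -(pd // 2), -((pd + 1) // 2)])

x = torch.randn(8, 2, 24, 32, 32)
print(center_crop(x, 16, 28, 28).shape)   # torch.Size([8, 2, 16, 28, 28])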
st103189
tom: "There is, however, no reason not to just use python ints for the crop_d/h/w calculations and then use those in the pad call."

I have fixed it by changing crop_d.ceil().int()[0] to crop_d.ceil().int()[0].item(). We have to add .item() to crop_h, crop_d and crop_w. Thanks
st103190
Currently, I am avoiding NaN by doing torch.sqrt(1e-5 + x.pow(2).sum(1)). However, when I check the output, it differs by about 0.001 which is quite big. Is there any way of doing this other than adding 1e-5 (anything smaller will still give NaN, e.g. 1e-6). Or is 0.001 actually fine? My result is not as desired though, still cannot find the culprit and I hope that I can remove this 0.001 discrepancy so that I can eliminate the possibility of this giving me bad result.
st103191
According to torch.optim.lr_scheduler.ReduceLROnPlateau: mode (str) – One of min, max. In min mode, lr will be reduced when the quantity monitored has stopped decreasing; in max mode it will be reduced when the quantity monitored has stopped increasing. Default: 'min'. However, what is the quantity monitored? How do I set this value? In Keras, this function specifies it very clearly: keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=10)
st103192
You pass the value to monitor directly to the scheduler.step(value_to_monitor) function. The example in the docs shows this usage.
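A minimal sketch of passing the monitored value (here a validation loss) to the scheduler:

import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim.lr_scheduler import ReduceLROnPlateau

model = nn.Linear(10, 1)
optimizer = optim.SGD(model.parameters(), lr=0.1)
scheduler = ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=10)

for epoch in range(100):
    # ... training ...
    val_loss = torch.rand(1).item()   # placeholder for your validation metric
    scheduler.step(val_loss)          # the monitored quantity is whatever you pass here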
st103193
Hello, I was using the model below with 1 GRU layer and it was working perfectly, but once I increase the number of GRU layers it starts giving errors like RuntimeError: Expected hidden size (2, 10, 100), got (1, 10, 100). Please advise.

class GRU(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(GRU, self).__init__()
        num_layers = 2
        self.hidden_size = hidden_size
        self.gru = nn.GRU(input_size, hidden_size, num_layers, dropout=0.1)
        self.linear = nn.Linear(hidden_size, output_size)

    def forward(self, input, hidden):
        hx, hn = self.gru(input, hidden)
        rearranged = hn.view(hn.size()[1], hn.size(2))
        out1 = self.linear(rearranged)
        out2 = F.softmax(out1)
        return out2
st103194
Because the hidden state should have its first dimension set to the number of layers. The 'hidden' state is the state that you have labelled, appropriately, 'hidden', and that is passed in to the forward method.
st103195
Thanks. When I set the first dimension of the hidden size to the number of layers, it gives me this warning at the beginning: Using a target size (torch.Size([10, 2])) that is different to the input size (torch.Size([2, 10, 2])) is deprecated. Please ensure they have the same size. "Please ensure they have the same size.".format(target.size(), input.size())) and then this error: ValueError: Target and input must have the same number of elements. target nelement (20) != input nelement (40), after I changed the rearranged variable in forward to rearranged = hn.view(hn.size()[0], hn.size(1), hn.size(2)). What might be the solution? Thanks
st103196
Dear all, we recently want to train a new "resnet50" from scratch in pytorch. Could anyone give some references on how to set the learning rate and weight decay, and whether to use a step or poly schedule? Thanks~
st103197
The hyper-parameters most likely depend on your use case and dataset. However, I think the ImageNet example will give you good default values.
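For reference, the defaults in that ImageNet example are roughly SGD with lr=0.1, momentum=0.9, weight_decay=1e-4, and the lr divided by 10 every 30 epochs over 90 epochs; a sketch, with model being your own network:

import torch.optim as optim
from torch.optim.lr_scheduler import StepLR

optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
scheduler = StepLR(optimizer, step_size=30, gamma=0.1)   # lr / 10 every 30 epochs

for epoch in range(90):
    # train_one_epoch(model, optimizer)  # placeholder for your training loop
    scheduler.step()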
st103198
The std computed on a tensor containing a single element is nan. Precisely:

v = th.randn(1, 128, 4, 4)
v.std(dim=0)
# tensor([[[nan, nan, nan, nan],
#          [nan, nan, nan, nan],
#          [nan, nan, nan, nan],
#          [nan, nan, nan, nan]],
#         ...
#         [[nan, nan, nan, nan],
#          [nan, nan, nan, nan],
#          [nan, nan, nan, nan],
#          [nan, nan, nan, nan]]])

In fact, with just one element:

v = th.randn(1)
v.std()
# tensor(nan)

Theoretically speaking, shouldn't the std of a single element be 0? Please help me clear up this confusion.
st103199
Have you checked the documentation for std? PyTorch uses the unbiased estimate of the std by default, and that is not well-defined for a single element. If you set unbiased=False, you'll get what I suppose you expect. Best regards, Thomas
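A quick check of that:

import torch

v = torch.randn(1)
print(v.std())                    # tensor(nan) -- unbiased estimate divides by n - 1 = 0
print(v.std(unbiased=False))      # tensor(0.)  -- biased estimate, well-defined

v = torch.randn(1, 128, 4, 4)
print(v.std(dim=0, unbiased=False).unique())   # all zeros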