st48668
see this: github.com/pytorch/pytorch issue "High memory usage for CPU inference on variable input shapes (10x compared to pytorch 1.1)" (opened Oct 15, 2019, closed May 16, 2020, by lopuhin): "In pytorch 1.3, when doing inference with resnet34 on CPU with variable input shapes, much more memory is used compared…"
st48669
What I want to do is:

RGB_images = netG(input)  # netG is a pretrained model that does not change during training; RGB_images is a batch of RGB images
YCbCr_images = f(RGB_images)  # YCbCr_images is a batch of images in YCbCr mode
# do things with YCbCr_images

Is there any function f in pytorch that can achieve what I want?
st48670
there isn’t an in-built way to do this. However, you can simply write this as an autograd function:

def rgb_to_ycbcr(input):
    # input is a mini-batch N x 3 x H x W of RGB images with values in [0, 1]
    output = Variable(input.data.new(*input.size()))
    output[:, 0, :, :] = input[:, 0, :, :] * 65.481 + input[:, 1, :, :] * 128.553 + input[:, 2, :, :] * 24.966 + 16
    # Cb and Cr channels, using the formulas from https://en.wikipedia.org/wiki/YCbCr
    output[:, 1, :, :] = input[:, 0, :, :] * -37.797 + input[:, 1, :, :] * -74.203 + input[:, 2, :, :] * 112.0 + 128
    output[:, 2, :, :] = input[:, 0, :, :] * 112.0 + input[:, 1, :, :] * -93.786 + input[:, 2, :, :] * -18.214 + 128
    return output
st48671
kornia.readthedocs.io: kornia.color.ycbcr (Kornia documentation)

def rgb_to_ycbcr(image: torch.Tensor) -> torch.Tensor:
    r"""Convert an RGB image to YCbCr.

    Args:
        image (torch.Tensor): RGB Image to be converted to YCbCr.

    Returns:
        torch.Tensor: YCbCr version of the image.
    """
    if not torch.is_tensor(image):
        raise TypeError("Input type is not a torch.Tensor. Got {}".format(type(image)))
    if len(image.shape) < 3 or image.shape[-3] != 3:
        raise ValueError("Input size must have a shape of (*, 3, H, W). Got {}".format(image.shape))
    r: torch.Tensor = image[..., 0, :, :]
    g: torch.Tensor = image[..., 1, :, :]
    b: torch.Tensor = image[..., 2, :, :]
    delta = .5
    y: torch.Tensor = .299 * r + .587 * g + .114 * b
    cb: torch.Tensor = (b - y) * .564 + delta
    cr: torch.Tensor = (r - y) * .713 + delta
    return torch.stack((y, cb, cr), -3)
st48672
Hi everyone, I read the paper about the LR range test (arxiv.org/abs/1506.01186) and the one about the OneCyclePolicy (arxiv.org/abs/1708.07120). I copied this implementation of the LR range test from "How Do You Find A Good Learning Rate" (Another data science student's blog, 20 Mar 18), and I've also read up on the OneCyclePolicy in "The 1cycle policy" (same blog, 7 Apr 18) and in https://towardsdatascience.com/finding-good-learning-rate-and-the-one-cycle-policy-7159fe1db5d6. It seems to be something many people wonder about:
- stackoverflow.com: "Have I implemented implemenation of learning rate finder correctly?" (deep-learning, computer-vision, pytorch, mnist)
- stackoverflow.com: "How to use Pytorch OneCycleLR in a training loop (and optimizer/scheduler interactions)?"
- mc.ai: "Super-Convergence with JUST PyTorch"
- stackoverflow.com: "How does one use torch.optim.lr_scheduler.OneCycleLR()?"
- this forum: "LR parameter in optimizer larger than max_lr in OneCycleLr", where the poster used:

# Optimizer
wd = 0.01
optimizer = optim.Adam(model.parameters(), lr=10e-4, weight_decay=wd)
# LR scheduler
scheduler = optim.lr_scheduler.OneCycleLR(optimizer, max_lr=1e-4, steps_per_epoch=len(train_loader), epochs=epochs, pct_start=0.3)  # only available on pytorch v1.3.0

and noticed that the optimizer has a learning rate of 10e-4, whereas max_lr in OneCycleLR is 1e-4.

Suppose I have implemented an LR test correctly. What is the correct orchestration between the optimizer and the one-cycle scheduler? The optimizer has the param lr and the scheduler has max_lr; how do they interact? Can somebody provide me with an example? I've been looking for one with no success. Thanks
st48673
Is there any difference between saving a checkpoint when training with a single GPU and saving a checkpoint with 2 GPUs? Example: if I use DataParallel to train on 2 GPUs and I save a checkpoint after each epoch, which parameters will be saved? Is the GPU-1 info or the GPU-2 info saved in the checkpoint? How can I check this while training?
st48674
nn.DataParallel will reduce all parameters to the model on the default device, so you could directly store the model.module.state_dict(). If you are using DistributedDataParallel, you would have to make sure that only one rank is storing the checkpoint, as otherwise multiple processes might be writing to the same file and thus corrupt it. Here is an example of how to do so. CC @Janine
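As a rough sketch (assuming the usual torch.distributed setup and that model is the wrapped module), the rank check could look like this:

import torch
import torch.distributed as dist

def save_checkpoint(model, path):
    # nn.DataParallel / DistributedDataParallel both expose the underlying module as model.module
    state_dict = model.module.state_dict() if hasattr(model, "module") else model.state_dict()
    # with DistributedDataParallel, let only one process write the file
    if not dist.is_initialized() or dist.get_rank() == 0:
        torch.save(state_dict, path)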
st48675
ptrblck: "If you are using DistributedDataParallel, you would have to make sure that only one rank is storing the checkpoint…" Thanks @ptrblck. In nn.DataParallel, if we use 2 GPUs, the checkpoint saves the data from the default device (i.e. GPU-0), right? The second GPU's data is not important?
st48676
Yes, that’s the case since nn.DataParallel will create new model replicas in each iteration and thus when you store the state_dict outside of the forward or backward pass, there should only be a single model on the initially specified device.
st48677
Yes, I verified this with an experiment. If we use two GPUs to train a model in DataParallel(), which portion of the saved data is identical between GPU 0 and GPU 1? Are the weights, gradients, LR, etc. the same across the 2 GPUs? I think only the data from GPU-0 (not from GPU-1) is saved; is that correct?
st48678
@ptrblck, in the case of DataParallel(): if we use two GPUs, the model is initially replicated on both GPUs, so the weights and biases are the same on both. After the calculation, the weights are updated on GPU-0 (the default); does GPU-1 hold the same updated values?
st48679
The replicas will be recreated in each forward pass using nn.DataParallel (which is also why it's slower than DistributedDataParallel). Your code will most likely just use the single model, as seen here:

model = MyModel()
model = nn.DataParallel(model)
model.to('cuda:0')  # push to default device

output = model(data)  # DataParallel will automatically create the replicas
loss = ...
loss.backward()  # DataParallel will automatically call the backward pass on all models and reduce the gradients
optimizer.step()

torch.save(model.module.state_dict(), path)  # use the default model

You can find more information about the underlying workflow in DataParallel in this blog post.
st48680
DataParallel(): in the case of 2 GPUs, both GPUs hold the same weights; is that correct?
st48681
Model parameters on multiple GPUs with DataParallel and DistributedDataParallel are the same (unless a GPU communication glitch happens). You can just save the parameters on GPU 0 and load them later.

Saving:
if isinstance(model, (DataParallel, DistributedDataParallel)):
    torch.save(model.module.state_dict(), model_save_name)
else:
    torch.save(model.state_dict(), model_save_name)

Loading:
state_dict = torch.load(model_name, map_location=current_gpu_device)
if isinstance(model, (DataParallel, DistributedDataParallel)):
    model.module.load_state_dict(state_dict)
else:
    model.load_state_dict(state_dict)
st48682
DataParallel(): if we use two GPUs, we know the model will be replicated on both GPUs, but I would like to know what happens when saving the model in a checkpoint: does the checkpoint save both replicated models? I believe the model present on the default device (GPU-0) will be saved in the checkpoint, right?
st48683
Yes, for DataParallel, if you save by torch.save(model.state_dict()), it will save parameters on GPU 0. But the parameters will be saved under model.module which cannot be loaded to non-DataParallel formats. That’s why I suggest the above code that makes saving/loading compatible with nn.Module format and nn.DataParallel format. If you use DistributedDataParallel, you should only save parameters from local rank 0. Otherwise, each DDP process will try to save model on each GPU and overwrite each other.
st48684
Thanks for the detailed info. Digging deeper: we know models are replicated in DataParallel(), and the weights of both models are the same. Where, or in which part of the code, are the model weights updated? (In parallel_apply() the models are replicated.) And which model's weights are saved in the checkpoint: the weights of the model on GPU-0, or the weights of the model on GPU-1?
st48686
It would be a good idea to review the blog post suggested by ptrblck above. In DataParallel, parallel_apply does not perform parameter updates or synchronization; the replicate function is what synchronizes model parameters across GPUs (see "dataparallel forward"). As I noted, the weights on GPU 0 are saved. You can almost consider the weights on the other GPUs to not be there persistently: parameters on the other GPUs are only created just before their forward computation by the replicate function, and replicate is called on every forward call.
st48687
Can't we load an nn.DataParallel-format checkpoint into an nn.Module-format model? Is this an issue with PyTorch checkpoints?
st48688
No, it is not an issue. nn.DataParallel saves the parameters under self.module. For example, let's assume your original single-GPU model had a self.conv layer. In your DataParallel model, it will move to self.module.conv. That's why I recommend saving self.module.state_dict() as in the above example code.
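If you do end up with a checkpoint that was saved from the wrapped model (so the keys carry the module. prefix), a common workaround (sketched here as an illustration, not an official API) is to strip the prefix before loading:

state_dict = torch.load(path, map_location='cpu')
# keys look like 'module.conv.weight'; remove the leading 'module.' prefix
cleaned = {k.replace('module.', '', 1): v for k, v in state_dict.items()}
model.load_state_dict(cleaned)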
st48689
I need to use solve on sparse equations, but since that's not supported, I'm thinking of using conjugate gradient (or any other iterative solver, really) to solve the sparse system. However, I'm not sure how I can run the conjugate gradient without breaking the computational graph, i.e. I can't run it in no_grad since I need the gradients that flow through the conjugate gradient. What can I do?
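For reference, a plain conjugate-gradient loop written only with differentiable torch ops stays inside the autograd graph, so gradients can flow through the solve. This is just an illustrative sketch (names are made up, and the matrix is assumed symmetric positive definite):

import torch

def conjugate_gradient(A_mv, b, n_iter=50, tol=1e-8):
    # A_mv: callable returning A @ x, e.g. built from torch.sparse.mm for a sparse A
    x = torch.zeros_like(b)
    r = b - A_mv(x)
    p = r.clone()
    rs_old = torch.dot(r, r)
    for _ in range(n_iter):
        Ap = A_mv(p)
        alpha = rs_old / torch.dot(p, Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = torch.dot(r, r)
        if rs_new.sqrt() < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# hypothetical usage with a sparse matrix A and dense vector b:
# x = conjugate_gradient(lambda v: torch.sparse.mm(A, v.unsqueeze(1)).squeeze(1), b)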
st48690
After defining my model:

def forward(self, x):
    C1 = F.relu(self.CONV1(x))
    M1 = self.MAX1(C1)
    C2 = F.relu(self.CONV2(M1))
    M2 = self.MAX2(C2)
    C3 = F.relu(self.CONV3(M2))
    M3 = self.MAX3(C3)
    h0 = Variable(torch.zeros(2, 64, 500))
    c0 = Variable(torch.zeros(2, 64, 500))
    Pred, (hn, cn) = self.LSTM(M3, (h0, c0))

with summary(net, (1, 4000)) I keep getting the following error:

AttributeError: 'tuple' object has no attribute 'size'

Does anyone know how to fix it? Thank you
st48691
Solved by ptrblck in post #4 I think this is a known issue in torchsummary, which doesn’t seem to support RNNs as seen here. Also, there is a fork in torch-summary which has apparently fixed this issue.
st48692
Here is my full code Model definition def __init__(self): super(NET,self).__init__() #Couches de convolution - Encoder self.CONV1=nn.Conv1d(1,16,kernel_size=300,padding=0,stride=2) self.CONV2=nn.Conv1d(16,32,kernel_size=150,padding=0,stride=2) self.CONV3=nn.Conv1d(32,64,kernel_size=75,padding=0,stride=1) #Fonctions d'activation - Encoder self.RELU=nn.ReLU() #Maxpooling - Encoder self.MAX1=nn.MaxPool1d(kernel_size=2,stride=2,padding=0) self.MAX2=nn.MaxPool1d(kernel_size=2,stride=2,padding=0) self.MAX3=nn.MaxPool1d(kernel_size=2,stride=2,padding=0) #Bidirectionnel LSTM self.LSTM=nn.LSTM(input_size=60,hidden_size=500,num_layers=2,batch_first=False) ``` **Definition of forward** ```def forward(self,x): C1=F.relu(self.CONV1(x)) M1=self.MAX1(C1) C2=F.relu(self.CONV2(M1)) M2=self.MAX2(C2) C3=F.relu(self.CONV3(M2)) M3=self.MAX3(C3) h0= Variable(torch.zeros(2,64,500)) c0= Variable(torch.zeros(2,64,500)) Pred,out= self.LSTM(M3,(h0,c0))``` **Trying to have a summary of my model** "from torchsummary import summary summary(net, input_size=(1,4000))" **The full error** AttributeError Traceback (most recent call last) <ipython-input-162-69d32417aabe> in <module>() 1 from torchsummary import summary ----> 2 summary(net, input_size=(1,4000)) 5 frames /usr/local/lib/python3.6/dist-packages/torchsummary/torchsummary.py in summary(model, input_size, batch_size, device) 70 # make a forward pass 71 # print(x.shape) ---> 72 model(*x) 73 74 # remove these hooks /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), <ipython-input-160-c66aa62e34fe> in forward(self, x) 37 h0= Variable(torch.zeros(2,64,500)) 38 c0= Variable(torch.zeros(2,64,500)) ---> 39 Pred,out= self.LSTM(M3,(h0,c0)) /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 724 _global_forward_hooks.values(), 725 self._forward_hooks.values()): --> 726 hook_result = hook(self, input, result) 727 if hook_result is not None: 728 result = hook_result /usr/local/lib/python3.6/dist-packages/torchsummary/torchsummary.py in hook(module, input, output) 21 if isinstance(output, (list, tuple)): 22 summary[m_key]["output_shape"] = [ ---> 23 [-1] + list(o.size())[1:] for o in output 24 ] 25 else: /usr/local/lib/python3.6/dist-packages/torchsummary/torchsummary.py in <listcomp>(.0) 21 if isinstance(output, (list, tuple)): 22 summary[m_key]["output_shape"] = [ ---> 23 [-1] + list(o.size())[1:] for o in output 24 ] 25 else: AttributeError: 'tuple' object has no attribute 'size'
st48693
I think this is a known issue in torchsummary, which doesn't seem to support RNNs, as seen here. Also, there is a fork, torch-summary, which has apparently fixed this issue.
st48694
Hi, I am working with 4-dimensional tensors, and I am using a for loop to check the conditional flow. I wanted to know how I can implement the code below efficiently.

# object_temp shape -> (24, 7, 7, 30)
# object shape      -> (24, 7, 7)
# target shape      -> (24, 7, 7, 30)
for i in range(object.shape[0]):
    for j in range(object.shape[1]):
        for k in range(object.shape[2]):
            if (target[i, j, k, 4] > 0) | (target[i, j, k, 9] > 0):
                object[i, j, k] = 1
                object_temp[i, j, k, :] = 1
st48695
Solved by JuanFMontesinos in post #4: I don't understand at all why you need a nested loop. You are not permuting indices and you are always taking i, j, k in order. You can just do something like
mask = (target[..., 4] > 0) | (target[..., 9] > 0)
object[mask] = 1
object_temp[mask.unsqueeze(-1).expand_as(object_temp)] = 1
st48696
It does not take an input of 1s and 0s. I want to create a tensor with the same dimensions as the original tensor target, but I can only create a tensor with one fewer dimension using torch.where. How can I also get the 4th dimension?
# Wanted object_temp shape -> (24, 7, 7, 30)
# target shape             -> (24, 7, 7, 30)
st48697
I don't understand at all why you need to do a nested loop. You are not permuting indices and you are always taking i, j, k in an ordered way. You can just do something like
mask = (target[..., 4] > 0) | (target[..., 9] > 0)
object[mask] = 1
object_temp[mask.unsqueeze(-1).expand_as(object_temp)] = 1
st48698
I have a fairly standard workflow with sequence data (size (N, C, T) representing a single example) where N can vary in size. I have written a keyed batching pipeline that will gather up examples so every forwards on the model will run with a fixed batch size (B, C, T). Many examples might fit into a single batch or one example might span multiple batches. The problem I’m having is my unbatchify step adds a lot of latency into the pipeline so there is significant deadtime on the gpu before the next batch is run. Any recommendations to gather batches and not block? import torch from random import randint from itertools import groupby from operator import itemgetter BATCH, CHANNELS, TIME = 64, 512, 1000 def generate_data(examples=8): """ create example data of various batch sizes """ for idx in range(1, examples + 1): yield ('key-%s' % idx, torch.rand((randint(2, 48), CHANNELS, TIME))) def batchify(items, batchsize, dim=0): """ Batch up multiple examples up to `batch_size`. """ stack, pos = [], 0 for k, v in items: breaks = range(batchsize - pos, v.shape[dim], batchsize) for start, end in zip([0, *breaks], [*breaks, v.shape[dim]]): sub_batch = v[start:end] stack.append(((k, (pos, pos + end - start)), sub_batch)) if pos + end - start == batchsize: ks, vs = zip(*stack) yield ks, torch.cat(vs) stack, pos = [], 0 else: pos += end - start if len(stack): ks, vs = zip(*stack) yield ks, torch.cat(vs, dim) def unbatchify(batches, dim=0): """ reconstruct batches to original examples """ batches = ( (k, v[start:end]) for sub_batches, v in batches for k, (start, end) in sub_batches ) return ( (k, torch.cat([v for (k, v) in group], dim)) for k, group in groupby(batches, itemgetter(0)) ) def model(data): """ dummy model """ return data + 1 batches = batchify(generate_data(), batchsize=64) results = ((key, model(data)) for key, data in batches) for key, res in unbatchify(results): print(key, res.shape)
st48699
I am trying to build libtorch in order to get what comes in this zip: https://download.pytorch.org/libtorch/nightly/cpu/libtorch-shared-with-deps-latest.zip , mainly the libs, headers and a FindCMake, so that I can link against the libraries when building my C++ inference code. So far I couldn't find any tutorial or example on how to do that. I am not interested in running python scripts that build everything; I just want to create a build directory, run cmake with some flags, and then make. I've seen it done in some other questions but details were lacking.
st48700
Hi, I am interested in this too, but I am confused and have no idea how to do it. Have you made any progress on it?
st48701
Hi, I have implemented an ensemble consisting of 3-layer MLPs with the following architecture: super(MLP, self).__init__() self.linear1 = torch.nn.Linear(D_in, H) self.relu1 = torch.nn.ReLU() self.batch1 = torch.nn.BatchNorm1d(H) self.hidden1 = torch.nn.Linear(H, H) self.relu2 = torch.nn.ReLU() self.batch2 = torch.nn.BatchNorm1d(H) self.hidden2 = torch.nn.Linear(H, H) self.hidden3 = torch.nn.Linear(H, D_out) self.logSoftMax = torch.nn.LogSoftmax(dim=1) self.SoftMax = torch.nn.Softmax(dim=1) When testing the loss for the ensemble I don’t see the loss decreasing when the number of models is increased. For a single model, it starts out with a reasonable value and then as the ensemble increases it goes up and then down. I think it might be something wrong with how we add the predictions. This is how we are doing it: def avg_evaluate(models): loss_fn = torch.nn.NLLLoss() loss = 0 total = 0 for batch_idx, (data, target) in enumerate(test_loader): y_preds = [] for idx, model in enumerate(models): model = model.eval() data = data.view(data.shape[0], -1) y_pred, _ = model(data) y_preds.append(y_pred) loss = loss + loss_fn(torch.div(torch.stack(y_preds, dim=0).sum(dim=0), len(models)), target).item() print("Final loss") print(loss / len(test_loader)) loss = loss / len(test_loader) Does anyone have any idea of what could be wrong? Would appreciate any help! Thanks.
st48702
My study requires me to use a custom optimizer in my framework. I had written a script which used Adam optim from pytorch ecosystem. I want to WOA optimizer and I want to know what I need to do from zeroing the gradients so that I could easily implement using the prebuilt pipeline. I am getting trouble finding out what does specific optim.zero_grad() do since there’s no explicit mention of function in source code provided. I attaching the optimizer file too. image874×215 10.8 KB import torch from torch.optim import Optimizer import numpy as np def schaffer(X, Y): """constraints=100, minimum f(0,0)=0""" numer = np.square(np.sin(X**2 - Y**2)) - 0.5 denom = np.square(1.0 + (0.001*(X**2 + Y**2))) return 0.5 + (numer*(1.0/denom)) def eggholder(X, Y): """constraints=512, minimum f(512, 414.2319)=-959.6407""" y = Y+47.0 a = (-1.0)*(y)*np.sin(np.sqrt(np.absolute((X/2.0) + y))) b = (-1.0)*X*np.sin(np.sqrt(np.absolute(X-y))) return a+b def booth(X, Y): """constraints=10, minimum f(1, 3)=0""" return ((X)+(2.0*Y)-7.0)**2+((2.0*X)+(Y)-5.0)**2 def matyas(X, Y): """constraints=10, minimum f(0, 0)=0""" return (0.26*(X**2+Y**2))-(0.48*X*Y) def cross_in_tray(X, Y): """constraints=10, minimum f(1.34941, -1.34941)=-2.06261 minimum f(1.34941, 1.34941)=-2.06261 minimum f(-1.34941, 1.34941)=-2.06261 minimum f(-1.34941, -1.34941)=-2.06261 """ B = np.exp(np.absolute(100.0-(np.sqrt(X**2+Y**2)/np.pi))) A = np.absolute(np.sin(X)*np.sin(Y)*B)+1 return -0.0001*(A**0.1) def levi(X, Y): """constraints=10, minimum f(1,1)=0.0 """ A = np.sin(3.0*np.pi*X)**2 B = ((X-1)**2)*(1+np.sin(3.0*np.pi*Y)**2) C = ((Y-1)**2)*(1+np.sin(2.0*np.pi*Y)**2) funcs = {'schaffer':schaffer, 'eggholder':eggholder, 'booth':booth, 'matyas':matyas, 'cross':cross_in_tray, 'levi':levi} func_constraints = {'schaffer':100.0, 'eggholder':512.0, 'booth':10.0, 'matyas':10.0, 'cross':10.0, 'levi':10.0} func = 'booth' nsols = 50 ngens = 30 C = func_constraints[func] constraints = [[-C, C], [-C, C]] opt_func = funcs[func] b = 0.5 a = 2.0 a_step = a/ngens maximize = False class WOA(Optimizer): def __init__(self, opt_func, constraints=constraints, nsols=nsols, b=b, a=a, a_step=a_step,maximize=maximize): self._opt_func = opt_func self._constraints = constraints self._sols = self._init_solutions(nsols) self._b = b self._a = a self._a_step = a_step self._maximize = maximize self._best_solutions = [] def get_solutions(self): """return solutions""" return self._sols def step(self): """solutions randomly encircle, search or attack""" ranked_sol = self._rank_solutions() best_sol = ranked_sol[0] #include best solution in next generation solutions new_sols = [best_sol] for s in ranked_sol[1:]: if (np.random.uniform(0.0, 1.0)) > 0.5: A = self._compute_A() norm_A = torch.linalg.norm(A) if norm_A < 1.0: new_s = self._encircle(s, best_sol, A) else: ###select random sol random_sol = self._sols[torch.randint(self._sols.shape[0])] new_s = self._search(s, random_sol, A) else: new_s = self._attack(s, best_sol) new_sols.append(self._constrain_solution(new_s)) self._sols = torch.stack(new_sols) self._a -= self._a_step def _init_solutions(self, nsols): """initialize solutions uniform randomly in space""" sols = [] for c in self._constraints: sols.append(torch.from_numpy(np.random.uniform(c[0], c[1], size=nsols))) sols = torch.stack(sols, axis=-1) return sols def _constrain_solution(self, sol): """ensure solutions are valid wrt to constraints""" constrain_s = [] for c, s in zip(self._constraints, sol): if c[0] > s: s = c[0] elif c[1] < s: s = c[1] constrain_s.append(s) return 
constrain_s def _rank_solutions(self): """find best solution""" fitness = self._opt_func(self._sols[:, 0], self._sols[:, 1]) sol_fitness = [(f, s) for f, s in zip(fitness, self._sols)] #best solution is at the front of the list ranked_sol = list(sorted(sol_fitness, key=lambda x:x[0], reverse=self._maximize)) self._best_solutions.append(ranked_sol[0]) return [ s[1] for s in ranked_sol] def print_best_solutions(self): print('generation best solution history') print('([fitness], [solution])') for s in self._best_solutions: print(s) print('\n') print('best solution') print('([fitness], [solution])') print(sorted(self._best_solutions, key=lambda x:x[0], reverse=self._maximize)[0]) def _compute_A(self): r = torch.from_numpy(np.random.uniform(0.0, 1.0, size=2)) return (2.0*torch.multiply(self._a, r))-self._a def _compute_C(self): return 2.0*torch.from_numpy(np.random.uniform(0.0, 1.0, size=2)) def _encircle(self, sol, best_sol, A): D = self._encircle_D(sol, best_sol) return best_sol - torch.multiply(A, D) def _encircle_D(self, sol, best_sol): C = self._compute_C() D = torch.linalg.norm(torch.multiply(C, best_sol) - sol) return D def _search(self, sol, rand_sol, A): D = self._search_D(sol, rand_sol) return rand_sol - torch.multiply(A, D) def _search_D(self, sol, rand_sol): C = self._compute_C() return torch.linalg.norm(torch.multiply(C, rand_sol) - sol) def _attack(self, sol, best_sol): D = torch.linalg.norm(best_sol - sol) L = torch.from_numpy(np.random.uniform(-1.0, 1.0, size=2)) return torch.multiply(torch.multiply(D,torch.exp(self._b*L)), torch.cos(2.0*np.pi*L))+best_sol Thanks for your help
st48703
Can anyone tell me what zero_grad does? Which parameters' gradients are set to zero?
st48704
optimizer.zero_grad iterates over the parameters in each param_group and sets the .grad attribute either to zero or to None (which was recently introduced for performance reasons).
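Roughly, the logic is equivalent to this simplified sketch of what the optimizer does internally:

for group in optimizer.param_groups:
    for p in group['params']:
        if p.grad is not None:
            p.grad.detach_()
            p.grad.zero_()   # or: p.grad = None, when set_to_none=True is used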
st48705
It’s not of little significance. Without zeroing out the gradients, the parameters would accumulate the gradients, which is most likely not what you want. If you have trouble implementing it in your optimizer, call model.zero_grad() and make sure that all .grad attributes are reset before starting the next iteration.
st48706
I haven't created any generator object; how can I overcome this error? [screenshot of the error message]
st48707
How do I realize something similar to Keras' TimeDistributedDense (https://github.com/fchollet/keras/issues/1029) in pytorch?
st48708
Because of pytorch's dynamic graph, you don't need TimeDistributedDense like in Keras. You can just use Linear, and LSTM networks become very straightforward. See the tutorial for POS tagging with an LSTM: http://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html#example-an-lstm-for-part-of-speech-tagging
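For reference, nn.Linear is applied over the last dimension, so a single layer already behaves like a time-distributed dense layer on a [batch, time, features] tensor. A minimal sketch with made-up sizes:

import torch
import torch.nn as nn

lin = nn.Linear(8, 1)
x = torch.zeros(64, 24, 8)    # [batch, time, features]
out = lin(x)                  # the same weights are applied at every time step
print(out.shape)              # torch.Size([64, 24, 1])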
st48709
# 24 fc timedistributed
num = 24
fc = nn.ModuleList([nn.Linear(8, 1) for i in range(num)])

# forward pass
x = torch.zeros(64, 24, 8)
outs = []
for i in range(x.shape[1]):
    outs.append(fc[i](x[:, i, :].unsqueeze(1)))
outs = torch.cat(outs, axis=1)
st48710
I’d like to export a pretrained model to ONNX format so that I can run it from a browser with JavaScript. The model uses ReflectionPad and ConvTranspose. If I export with an opset version <=10 JS complains that ConvTranspose is not implemented and if I export with an opset version >= 11 JS complains that there are int64 values in my model which it can’t deal with; there aren’t, but ReflectionPad seems to create them. The model definition is as follows: import torch import onnx print("pytorch version :", torch.__version__) print("onnx version :", onnx.__version__) class MyModule(nn.Module): def __init__(self): super(MyModule, self).__init__() self.reflectionpad = nn.ReflectionPad1d(3) self.conv = nn.ConvTranspose1d(4, 4, kernel_size=3) def forward(self, x): x = self.reflectionpad(x) return self.conv(x) model_g = MyModule() x = torch.randn(1, 4, 4) The bug_example.js scipt: async function runExample() { // Create an ONNX inference session with default backend. const session = new onnx.InferenceSession(); await session.loadModel("toy_model.onnx"); const x = new Float32Array(1 * 4 * 4).fill(1); const tensorX = new onnx.Tensor(x, 'float32', [1, 4, 4]); const outputMap = await session.run([tensorX]); const outputData = outputMap.get('output'); // Check if result is expected. console.log('ok'); } The bug_example.html script: <html> <head> <script src="https://cdn.jsdelivr.net/npm/onnxjs/dist/onnx.min.js"></script> <script src="./bug_example.js"></script> </head> <body> <div><input type="button" value="Run" onclick="runExample()"/></div> </body> </html> Using opset=10 (opset=9 behaves the same) torch.onnx.export(model_g, x, "toy_model.onnx", verbose = True, opset_version=10) output: pytorch version : 1.5.1 onnx version : 1.7.0 graph(%input : Float(1, 4, 4), %conv.weight : Float(4, 4, 3), %conv.bias : Float(4)): %3 : Float(1, 4, 10) = onnx::Pad[mode="reflect", pads=[0, 0, 3, 0, 0, 3]](%input) # /home/bram/miniconda3/envs/whispp_env/lib/python3.7/site-packages/torch/nn/functional.py:3397:0 %4 : Float(1, 4, 12) = onnx::ConvTranspose[dilations=[1], group=1, kernel_shape=[3], pads=[0, 0], strides=[1]](%3, %conv.weight, %conv.bias) # /home/bram/miniconda3/envs/whispp_env/lib/python3.7/site-packages/torch/nn/modules/conv.py:647:0 return (%4) Running JS in the browser gives me Uncaught (in promise) TypeError: cannot resolve operator 'ConvTranspose' with opsets: ai.onnx v10 Using opset=11 (opset=12 behaves the same) torch.onnx.export(model_g, x, "toy_model.onnx", verbose = True, opset_version=11) output: pytorch version : 1.5.1 onnx version : 1.7.0 graph(%input : Float(1, 4, 4), %conv.weight : Float(4, 4, 3), %conv.bias : Float(4), %27 : Long(), %28 : Long(2)): %3 : int[] = onnx::Constant[value= 3 3 [ CPULongType{2} ]]() %4 : Tensor = onnx::Constant[value={0}]() %5 : Tensor = onnx::Shape(%3) %6 : Tensor = onnx::Gather[axis=0](%5, %4) %10 : LongTensor = onnx::Sub(%27, %6) %12 : Tensor = onnx::ConstantOfShape[value={0}](%10) %13 : Tensor = onnx::Concat[axis=0](%28, %12) %14 : Tensor = onnx::Constant[value=-1 2 [ CPULongType{2} ]]() %15 : Tensor = onnx::Reshape(%13, %14) %16 : Tensor = onnx::Constant[value={0}]() %17 : Tensor = onnx::Constant[value={-1}]() %18 : Tensor = onnx::Constant[value={-9223372036854775807}]() %19 : Tensor = onnx::Constant[value={-1}]() %20 : Tensor = onnx::Slice(%15, %17, %18, %16, %19) %21 : Tensor = onnx::Transpose[perm=[1, 0]](%20) %22 : Tensor = onnx::Constant[value={-1}]() %23 : Tensor = onnx::Reshape(%21, %22) %24 : Tensor = onnx::Cast[to=7](%23) %25 : Float(1, 4, 10) = 
onnx::Pad[mode="reflect"](%input, %24) # /home/bram/miniconda3/envs/whispp_env/lib/python3.7/site-packages/torch/nn/functional.py:3397:0 %26 : Float(1, 4, 12) = onnx::ConvTranspose[dilations=[1], group=1, kernel_shape=[3], pads=[0, 0], strides=[1]](%25, %conv.weight, %conv.bias) # /home/bram/miniconda3/envs/whispp_env/lib/python3.7/site-packages/torch/nn/modules/conv.py:647:0 return (%26) Running JS in the browser gives me Uncaught (in promise) TypeError: int64 is not supported I guess the earlier versions didn’t have all functionalities yet, but the more recent version has a bug related to padding layers somehow? Note that using opset=10 without convTranspose works fine. What is the advised way to deal with this?
st48711
I have the same issue. According to the ONNX.js documentation, ConvTranspose is not supported: https://github.com/microsoft/onnxjs/blob/master/docs/operators.md
st48712
I believe these two issues on the onnx.js tracker explain the problem best:
- github.com/microsoft/onnxjs: "Feature Request - Dealing with int64 in the exported ONNX model" (opened Oct 7, 2020 by snakers4), about running models from silero-models via onnx.js and the various ONNX export issues hit along the way.
- github.com/microsoft/onnxjs: "onnx model error: int64 is not supported" (opened Jul 17, 2019 by aohan237), about loading the mask-rcnn model from https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/mask-rcnn and hitting the same error.
I have replied there with a link to this issue. I hope it gets picked up on the onnxjs side.
st48713
I have a batch of 256 pairs of vectors of dimension 32, so the shape is [256, 2, 32]. What I would like is to perform the dot product between each pair of vectors in the batch, so that I end up with a tensor of shape [256, 32]. How should I do it? Thank you in advance!
st48714
Solved by KFrank in post #3 Hi Paula (and Alex)! Building on Alex’s suggestion, I believe that X.prod (dim = 1).sum (dim = 1) should do what you want. Best. K. Frank
st48715
I don’t understand how you wish to end up with such dimensions. If one vector is X[:,0,:] and another is X[:,1,:], and you want to dot product them, the result should be a either a scalar, or a vector of length 256 or a vector of length 32 (if you want to perform dot product in one dimension). Maybe you want elementwise product? In that case it can be achieved by torch.prod(X, dim=1)
st48716
Hi Paula (and Alex)! paula_gomez_duran: What I would want would be to perform the dot product between each of the pair of vectors of the batch, so that I end up with a tensor of shape [256, 32]. Building on Alex’s suggestion, I believe that X.prod (dim = 1).sum (dim = 1) should do what you want. Best. K. Frank
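To make the two options concrete, here is a small sketch (random data, shapes only):

import torch

X = torch.rand(256, 2, 32)
elementwise = X.prod(dim=1)         # X[:, 0, :] * X[:, 1, :]  -> shape [256, 32]
dot = X.prod(dim=1).sum(dim=1)      # true dot product per pair -> shape [256]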
st48717
I have a problem: I need to store many outputs of a network in a buffer (for reinforcement learning):

for i in range(1000):
    buffer.append(network(x))
# compute the mean loss of the elements in the buffer

x has shape [B, C, H, W] = [64, 1, 30, 30] and network(x) is shaped like [64, 1], so the buffer contains tensors of very small dimension. However, the memory of the GPU fills up extremely fast. I suspect that by adding the output to the buffer I am also keeping the whole computational graph, so I store the graph hundreds of times and it makes my memory explode. Do you have any solutions? Thank you
st48718
MehdiZouitine: I suspect that by adding the output to the buffer I am also adding all the computational graph. Yes, you are storing the complete computation graphs in the list, if you are not detaching the tensors. MehdiZouitine: Do you have any solutions? It depends on your use case. If you need to store the computation graphs to call backward later, then you could reduce the number of iterations. Alternatively, if you don’t need to compute the gradients, you could store the tensors after calling detach() on them.
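As an illustration of the two cases (with a tiny stand-in model, not the poster's actual network):

import torch
import torch.nn as nn

net = nn.Linear(4, 1)
x = torch.randn(8, 4)

with_graph, without_graph = [], []
for _ in range(3):
    out = net(x)
    with_graph.append(out)               # keeps the autograd graph alive (needed if backward is called later)
    without_graph.append(out.detach())   # drops the graph, so memory stays flat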
st48719
OK thanks, I need to call backward later so I can't detach. I have another question: if I have a batch of 20, will the memory used by the graph be 2 times greater than with a batch of 10? Thank you
st48720
MehdiZouitine: "if I have a batch of 20 then the memory used by the graph will be 2 times greater than a batch of 10?" More or less. You would have to take e.g. memory fragmentation into consideration. Also, if you are using cudnn with benchmark=True, different algorithms might be picked for different batch sizes depending on their speed, so you might end up with a different memory footprint.
st48721
The total loss and the inference result will be the same value every time. gpu cpu is working normally detaset CIFAR100 Please tell me if there is not enough information. I will add it. import torch import torchvision import torch.nn as nn import torch.optim as optim import torch.nn.functional as F from torchvision import datasets, transforms from torch.utils.data import Dataset import time class Firttan(nn.Module): def init(self): super().init() def forward(self, inputs): inputs=inputs.view(inputs.size(0),-1) return inputs class MYoptimizer(): def init(self): self.tortal_losses=0 self.optimizer = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9) self.scheduler = optim.lr_scheduler.StepLR(self.optimizer, step_size=10, gamma=0.9) self.criterion = torch.nn.CrossEntropyLoss() def loss(self,inputs, labels): self.optimizer.zero_grad() outputs=net(inputs) loss=self.criterion(outputs, labels) loss.backward() torch.nn.utils.clip_grad_norm_(net.parameters(),1,2) self.optimizer.step() self.tortal_losses+=loss.to('cpu').item() def Done(self,epoc,timee): print("Epoch",epoc,"total_loss",self.tortal_losses,timee) self.tortal_losses=0 self.scheduler.step() class swish(nn.Module): def init(self, beta = 1.25): super().init() self.beta = beta def forward(self, inputs): return inputs * torch.sigmoid(self.beta * inputs) class DataSet: def init(self,detas,labels): self.X = detas # 入力 self.t = labels # 出力 def __len__(self): return len(self.X) # データ数(10)を返す def __getitem__(self, index): # index番目の入出力ペアを返す return self.X[index], self.t[index] class Net(torch.nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1),swish(), nn.Conv2d(16, 16, 3, padding=1),swish(), nn.MaxPool2d(2, 2),swish(), nn.Conv2d(16, 32, 3, padding=1),swish(), nn.Conv2d(32, 32, 3, padding=1),swish(), nn.MaxPool2d(2, 2),swish(), Firttan(), nn.Linear(32 * 8* 8, 512), swish(), nn.Linear(512, 100), nn.Softmax(dim=1)) def forward(self, x): x = self.conv1(x) return x def eval_(dataloader): correct = 0 total = 0 with torch.no_grad(): for (images, labels) in dataloader: images, labels = images, labels outputs = net(images.to('cuda:0')) _, predicted = torch.max(outputs.to('cpu').data, 1) correct += (predicted == labels).sum().item() total += labels.size(0) print("Val Acc",(correct/total)) cifar100_data = torchvision.datasets.CIFAR100( ‘./cifar-100’, train=True, download=True, transform=torchvision.transforms.ToTensor()) さっき作ったDataSetクラスのインスタンスを作成 datasetをDataLoaderの引数とすることでミニバッチを作成. batch_size=128 n_epochs = 50 dataloader = torch.utils.data.DataLoader(cifar100_data, batch_size=batch_size,shuffle=True,drop_last=True) net = Net().to(‘cuda:0’) net.train() MYoptimizer=MYoptimizer() print(“start”) for _ in range(n_epochs): timee=time.time() net.train() for inputs, labels in dataloader: MYoptimizer.loss(inputs.to(‘cuda:0’),labels.to(‘cuda:0’)) MYoptimizer.Done(,time.time()-timee) #テスト net.eval() eval(dataloader) print(“a”) result Files already downloaded and verified start Epoch 0 total_loss 1796.0163736343384 8.27594780921936 Val Acc 0.009995993589743589 Epoch 1 total_loss 1796.0164184570312 7.108090877532959 Val Acc 0.009935897435897435 Epoch 2 total_loss 1796.0164275169373 7.067081689834595 Val Acc 0.009995993589743589 Epoch 3 total_loss 1796.0164308547974 7.0995776653289795 Val Acc 0.009995993589743589 I can’t display the code well How can my site recognize it as a code?
st48722
Solved by ptrblck in post #2 Remove the nn.Softmax at the end of your model, as nn.CrossEntropyLoss will apply F.log_softmax internally. PS: you can post code snippets by wrapping them into three backticks ```, which makes debugging easier
st48723
Remove the nn.Softmax at the end of your model, as nn.CrossEntropyLoss will apply F.log_softmax internally. PS: you can post code snippets by wrapping them into three backticks ```, which makes debugging easier
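In other words, the tail of the model would end at the last linear layer and the raw logits go straight into the loss. A minimal sketch (not the poster's full network, and using ReLU instead of the custom swish for brevity):

import torch
import torch.nn as nn

head = nn.Sequential(
    nn.Linear(32 * 8 * 8, 512),
    nn.ReLU(),
    nn.Linear(512, 100),   # return raw logits; no nn.Softmax here
)
criterion = nn.CrossEntropyLoss()  # applies log_softmax + NLLLoss internally

logits = head(torch.randn(4, 32 * 8 * 8))
loss = criterion(logits, torch.randint(0, 100, (4,)))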
st48724
Hi all, I have an issue related to the following error. The cause of the error is known, but I cannot find a workaround as I need to run the algorithm that way.

RuntimeError: cuda runtime error (59) : device-side assert triggered at /tmp/pip-req-build-4baxydiv/aten/src/THCUNN/generic/ClassNLLCriterion.cu:110

I have a large set of classes in a dataset and I want to train the model with only a subset at a time. Let's say I have 12 classes [0-11] and the classifier's last layer output shape is 5. With a data subset with labels [0,1,2,3,4] the model works perfectly. However, it throws the above error for the following class label combinations: [1,2,3,4,5] and [2,5,6,8,11]. I am using the cross_entropy criterion and the last layer is a linear layer. I am fairly new to PyTorch and searched for solutions in the forum but couldn't find anything related to my requirement. I appreciate any help that can be provided.
st48725
The error is expected in this case, since nn.CrossEntropyLoss expects targets in the range [0, nb_classes-1] in order to use it as indices to calculate the loss. If you want to train the model with different subsets, your model should most likely still output the logits for all 12 classes, shouldn’t it? I guess your current approach wants to map the new labels to the 5 outputs of your model, which would mix different targets for the same class output of the model so I’m unsure if that’s really what you want.
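If the subsets really should be trained against a 5-way output, the labels would need to be remapped to the range [0, 4] first; here is a rough sketch of that remapping (variable names are just illustrative):

import torch

subset = [2, 5, 6, 8, 11]                       # original class ids used in this round
remap = {c: i for i, c in enumerate(subset)}    # 2 -> 0, 5 -> 1, 6 -> 2, ...
raw_targets = torch.tensor([5, 2, 11, 8])
targets = torch.tensor([remap[int(t)] for t in raw_targets])  # now in [0, nb_classes - 1]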
st48726
Hey guys! I'm quite new to Python and PyTorch but installed it all through tutorials. I use ESRGAN in combination with ImageEnhancingUtility, because I want to upscale my old cartoons! Everything worked fine, but since a few days ago I've been getting a RuntimeError. I'm really a noob in Python, which is why I'm asking you guys for help! [screenshot of the error]
st48727
Your current code is trying to allocate too much memory and is thus raising this error. Did you change anything in the model, such as the spatial size of the input or the batch size? A workaround is to reduce the memory usage so that the workload can fit in your 6GB GPU e.g. by reducing the batch size or by using torch.utils.checkpoint to trade compute for memory.
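For the checkpointing option, the idea is to wrap a memory-heavy part of the forward pass so its activations are recomputed during backward instead of being stored; a minimal sketch (the module and sizes are made up):

import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

block = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())

x = torch.randn(1, 3, 256, 256, requires_grad=True)
out = checkpoint(block, x)   # activations inside `block` are not kept; they are recomputed in backward
out.mean().backward()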
st48728
Hallo. I’m just starting with PyTorch. I have built the following program in python and I want to translate it to PyTorch so I can compare the result but I still haven’t found how to do it. I’m currently attenting a course for PyTorch but it’s really slow and I need some help. Can anyone tell me how to write this code using PyTorch? It’s the code in the paper Deep Learning: An Introduction for Applied Mathematicians - Catherine F. Higham, Desmond J. Higham but I have translated it from MATLAB to Python P.S. Thank you everyone in advance for your time and sorry if it’s against the rules of the Forum. Please delete it if it is. Also if you have any suggestion about Courses or YouTube videos/series that you found helpful while you were getting starting with PyTorch please let me know. import numpy as np from numpy.linalg import norm as norm np.random.seed(5000) def activate(x,W,b): #Evaluates sigmoid function. x is the input vector, y is the output vector W contains the weights, b contains the shifts. The i-th component of y is activate((Wx+b)_i) where activate(z) = 1/(1+exp(-z)) z=W@x+breturn 1/(1+np.exp(-z)) def cost(W2,W3,W4,b2,b3,b4): costvec = np.zeros(10) for i in range(10): x=[x1[i],x2[i]] a2=activate(x,W2,b2) a3=activate(a2,W3,b3) a4=activate(a3,W4,b4) costvec[i] = norm(y[:,i]-a4,2) costvec[i] costval = norm(costvec,2)**2 return costval x1 = np.array([0.1,0.3,0.1,0.6,0.4,0.6,0.5,0.9,0.4,0.7]) x2 = np.array([0.1,0.4,0.5,0.9,0.2,0.3,0.6,0.2,0.4,0.6]) y=np.array([ [1,1,1,1,1,0,0,0,0,0],[0,0,0,0,0,1,1,1,1,1] ]) W2=np.random.random((2,2)) W3=np.random.random((3,2)) W4=np.random.random((2,3)) b2=np.random.random(2) b3=np.random.random(3) b4=np.random.random(2) #forward and back propagate eta = 0.05 #learning rate Niter = 10**6 #number of SG iterations savecost = np.zeros(Niter) #Value of cost at each iteration for counter in range(Niter): k=np.random.randint(0,10) #choose a training point at random x=np.array([x1[k],x2[k]]) #forward pass a2 = activate(x,W2,b2) a3 = activate(a2,W3,b3) a4 = activate(a3,W4,b4) #backward pass delta4 = a4*(1-a4)*(a4-y[:,k]) delta3 = a3*(1-a3)*(W4.T@delta4) delta2 = a2*(1-a2)*(W3.T@delta3) #gradient step W2 = W2 - eta*delta2@x W3 = W3 - eta*delta3.reshape(3,1)@a2.reshape(1,2) W4 = W4 - eta*delta4.reshape(2,1)@a3.reshape(1,3) b2 = b2 - eta*delta2 b3 = b3 - eta*delta3 b4 = b4 - eta*delta4 newcost = cost(W2,W3,W4,b2,b3,b4) savecost[counter]=newcost print(newcost , 'i=',counter) import matplotlib.pyplot as plt iterr=[i for i in range(counter+1)] plt.ylim([10**(-4),10]) plt.xlim([0,Niter]) plt.semilogy(iterr,savecost) plt.show()
st48729
111398: I have built the following program in python and I want to translate it to PyTorch so I can compare the result but I still haven’t found how to do it. I’m currently attenting a course for PyTorch but it’s really slow and I need some help. Can anyone tell me how to write this code using PyTorch? I think the best approach would be if you could explain what you have done so far, what is not working, and where you are stuck. Once we know this information we might be able to guide you through it. Especially if you are taking a course I would recommend to try to debug it yourself first and avoid looking at perfect solutions.
st48730
Hi, I'm using hooks for validation, and with data parallel only the batch from gpu=0 is returned; I saw that there has been no progress on that issue. So I thought of a hack: I will train with data-parallel, but when doing validation I'll remove the data parallel and send the model to 1 GPU only. No matter what I tried (to('cuda:0'), sending it to the CPU), it did not succeed; it always returns half the batch. So now to my question: how can I take a model with a data-parallel wrapper and remove the data-parallel wrapper from it?
st48731
After you’ve wrapped the model into nn.DataParallel you can get the non-parallel model back via: model = model.module Let me know, if you are still seeing any issues.
st48732
while I a am training the Network, Getting TypeError: “‘tuple’ object is not callable” for the ‘for’ loop line of network training code. Attached the concatenated traindataset, its trainloader, and code for training network. The same code worked for non-concatenated train dataset. Not sure what is the issue. Any help would be greatly appreciated. #Concatenate the 5 different Training set into 1 new training set cifar_trainset_new = torch.utils.data.ConcatDataset([cifar_trainset, cifar_trainset2, cifar_trainset3, cifar_trainset4, cifar_trainset5]) cifar_trainloader_new = torch.utils.data.DataLoader(cifar_trainset_new, batch_size=4, shuffle=True) cifar_trainset_new_size = len(cifar_trainset_new) print(cifar_trainset_new_size) Use the augmented training images to train the MLP with the best performance and report the new accuracy performance MLP with best performance is MLP2 which has Accuracy of 41 % mlp3 = MLP2().to(device) # operate on GPU Define a loss function and optimizer criterion3 = nn.CrossEntropyLoss() optimizer3 = optim.SGD(mlp2.parameters(), lr=0.001, momentum=0.9) Training the Network n_epoch3 = 20 for epoch3 in range(n_epoch3): # loop over the dataset multiple times running_loss3 = 0.0 for i, data in enumerate(cifar_trainloader_new, 0): # TODO: write training code # get the inputs inputs = data labels = data inputs = inputs.to(device) labels = labels.to(device) # zero the parameter gradients optimizer3.zero_grad() # forward + backward + optimize output3 = mlp2(inputs) loss3 = criterion3(output3, labels) loss3.backward() optimizer3.step() # print statistics running_loss3 += loss3.item() if i % 2000 == 1999: # print every 2000 mini-batches print('[%d, %5d] loss2: %.3f' %(epoch3 + 1, i + 1, running_loss3 / 2000)) running_loss3 = 0.0 print(‘Finished Training3 with new Augmented training images’) Save the trained model PATH = ‘./mlp2_cifar10.pth’ torch.save(mlp3.state_dict(), PATH) #Reloading the model mlp3 = MLP2().to(device) mlp3.load_state_dict(torch.load(PATH)) Evaluate the classfication performance on the testing set correct3 = 0 total3 = 0 with torch.no_grad(): for data in cifar_testloader: # TODO: write testing code images , labels = data images = images.to(device) labels = labels.to(device) output3 = mlp3(images) _, predicted3 = torch.max(output3.data, 1) total3 += labels.size(0) correct3 += (predicted3 == labels).sum().item() print(‘Accuracy of the network on the 10000 test images: %d %%’ % (100 * correct3 / total3)) ====== Error ======= TypeError Traceback (most recent call last) in () 14 for epoch3 in range(n_epoch3): # loop over the dataset multiple times 15 running_loss3 = 0.0 —> 16 for i, data in enumerate(cifar_trainloader_new, 0): 17 # TODO: write training code 18 # get the inputs 7 frames in call(self, tensor) 26 27 def call(self, tensor): —> 28 return tensor + torch.randn(tensor.size()) * self.std + self.mean 29 30 def repr(self): TypeError: ‘tuple’ object is not callable
st48733
Could you post the used transformation and how it's applied to the datasets? Based on the stack trace I assume you are using the noise addition from this post?
st48734
shairng the Transformation I applied in the dataset here for ref. Data augmentation techniques to the training set: #original training set: transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) # Shifting up/down and left/right by within 10%: transform2 = transforms.Compose([transforms.RandomAffine(degrees=0, translate=(0.1,0.1), shear=0), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) #Rotating: transform3 = transforms.Compose([transforms.RandomAffine(degrees=30, shear=0), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) #Horizontal Flipping: transform4 = transforms.Compose([transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) #Adding small Gaussian noise: class AddGaussianNoise(object): def __init__(self, mean=0., std=1.): self.std = std self.mean = mean def __call__(self, tensor): return tensor + torch.randn(tensor.size()) * self.std + self.mean def __repr__(self): return self.__class__.__name__ + '(mean={0}, std={1})'.format(self.mean, self.std) transform5 = transforms.Compose([AddGaussianNoise(0.,1.), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) #transform6 = transforms.Compose([transforms.RandomAffine(degrees=0, translate=(0.1,0.1), shear=0), transforms.RandomAffine(degrees=30, shear=0), transforms.RandomHorizontalFlip(), AddGaussianNoise(0.,1.), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) # TODO: load the CIFAR-10 dataset and build dataloader cifar_trainset = datasets.CIFAR10(root = './data', train=True, download=True, transform = transform) cifar_trainset2 = datasets.CIFAR10(root = './data', train=True, download=True, transform = transform2) cifar_trainset3 = datasets.CIFAR10(root = './data', train=True, download=True, transform = transform3) cifar_trainset4 = datasets.CIFAR10(root = './data', train=True, download=True, transform = transform4) cifar_trainset5 = datasets.CIFAR10(root = './data', train=True, download=True, transform = transform5) #Concatenate the 5 different Training set into 1 new training set cifar_trainset_new = torch.utils.data.ConcatDataset([cifar_trainset, cifar_trainset2, cifar_trainset3, cifar_trainset4, cifar_trainset5]) cifar_trainloader_new = torch.utils.data.DataLoader(cifar_trainset_new, batch_size=4, shuffle=True) cifar_trainset_new_size = len(cifar_trainset_new) print(cifar_trainset_new_size) #cifar_trainloader = torch.utils.data.DataLoader(cifar_trainset6, batch_size=4, shuffle=True) #cifar_trainset6_size = len(cifar_trainset6) #print(cifar_trainset6_size) cifar_testset = datasets.CIFAR10(root = './data', train=False, download=True, transform = transform) cifar_testloader = torch.utils.data.DataLoader(cifar_testset, batch_size=1, shuffle=False) cifar_testset_size = len(cifar_testset) print(cifar_testset_size) print("CIFAR10 new training dataset:\n ", cifar_trainset_new) print("CIFAR10 testing dataset:\n ", cifar_testset)
st48735
AddGaussianNoise should be applied on a tensor not a PIL.Image, so you would need to add this transformation after the ToTensor() transform: transform5 = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), AddGaussianNoise(0.,1.)]) PS: you can post code snippets by wrapping them into three backticks ```, which makes debugging easier. I’ve formatted the code for you.
st48736
In the torch.jit.trace tutorial, it says that if we don't use jit.script for the if-else logic in the model, the exported traced model won't capture it. However, for ONNX export, does it mean we don't need to use jit.script for those logic statements in the model? Thank you for answering.
st48737
ONNX export leverages Torch IR to convert a PyTorch model to an ONNX graph. The Torch IR is generated by trace or script, so you will still have the same limitation if you use trace to export to ONNX. What is the problem with using script to capture control flow and loops?
st48738
[screenshot of the error] github.com/microsoft/onnxjs: "Error loading onnxjs from webpack" (opened Jun 16, 2019 by csaroff): "I've been trying to add onnxjs support in VoTT, but I've been hitting some issues. I have a model that I…" I guess it is kind of under development.
st48739
I am trying to extract tensor data from the MNIST data set. Essentially I want to remove the data labels from the tuple. The standard ConcatDataset is set up using the following:

def __init__(self, *datasets):
    self.datasets = datasets

def __getitem__(self, i):
    return tuple(d[i] for d in self.datasets)

def __len__(self):
    return min(len(d) for d in self.datasets)

The issue is I'm not sure how to use indexing to extract the tensors from d[i] to remove the labels. d[i] has the structure tuple((tensor(), label), (tensor(), label)), so I should be able to extract the tensors and append them to a new tuple to get rid of the labels, but this has tripped me up a bit. Any help would be appreciated.
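One way to approach this (sketched here as a hypothetical wrapper, not something from the thread) is to keep the ConcatDataset-style combination but strip the label inside __getitem__:

import torch
from torch.utils.data import Dataset

class ImagesOnlyConcat(Dataset):
    """Zips several (image, label) datasets and returns only the image tensors."""
    def __init__(self, *datasets):
        self.datasets = datasets

    def __getitem__(self, i):
        # d[i] is an (image_tensor, label) pair; keep just the tensor
        return tuple(d[i][0] for d in self.datasets)

    def __len__(self):
        return min(len(d) for d in self.datasets)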
st48740
If I have: self.layer1 = torch.nn.Conv1d(in_channels=512, out_channels=512, kernel_size=1) isn’t that equivalent to self.layer1 = torch.nn.Linear(512, 512) ?
st48741
Solved by ptrblck in post #2 Yes, should be the case: # Setup conv = torch.nn.Conv1d(in_channels=512, out_channels=512, kernel_size=1).double() lin = torch.nn.Linear(512, 512).double() # use same param values with torch.no_grad(): lin.weight = nn.Parameter(conv.weight.squeeze(2)) lin.bias = nn.Parameter(conv.bias) # …
st48742
Yes, should be the case: # Setup conv = torch.nn.Conv1d(in_channels=512, out_channels=512, kernel_size=1).double() lin = torch.nn.Linear(512, 512).double() # use same param values with torch.no_grad(): lin.weight = nn.Parameter(conv.weight.squeeze(2)) lin.bias = nn.Parameter(conv.bias) # forward x = torch.randn(2, 512, 20).double() out_conv = conv(x) # permute for linear x_lin = x.permute(0, 2, 1) out_lin = lin(x_lin) # check forward output print(torch.allclose(out_lin.permute(0, 2, 1), out_conv)) > True print((out_lin.permute(0, 2, 1) - out_conv).abs().max()) > tensor(1.2212e-15, dtype=torch.float64, grad_fn=<MaxBackward1>) # check backward out_conv.mean().backward() out_lin.mean().backward() print(torch.allclose(conv.weight.grad.squeeze(2), lin.weight.grad)) > True print(torch.allclose(conv.bias.grad, lin.bias.grad)) > True
st48743
Thanks so much. So there’s literally no difference, not even in terms of computation?
st48744
There is most likely a difference in computation in particular if you are using CUDA operations. E.g. convolutions would be dispatched to cudnn, if you are using an NVIDIA GPU, which could internally call into cublas (same as in the linear layer), but isn’t guaranteed. I don’t know, which methods are exactly called on the CPU. For my code snippet the convolution would use cudnn::cnn::implicit_convolve_dgemm, while the linear layer would call into volta_dgemm_128x64_tn.
st48745
Hi, recently in my project I needed to lift a point cloud of shape (n, 3) from an image of shape (3, H, W), whose RGB color represents the spatial coordinates. I also have a mask with shape (240, 320) to indicate which points I need. I used an indexing operation to lift it:

masked = mask > threshold
point_cloud = img[:, masked].transpose(0, 1)

However, this operation is very slow; it took nearly 0.1 seconds. Is there any way to improve the performance here? Thanks!
st48746
I don’t know why exactly it’s slow, but did you try indexing on different devices? For many of my purposes indexing on GPU is faster than on CPU
st48747
I'm working on a Tesla V100, so I think maybe it's not a problem with the device? And the mask would select around 10k indices.
st48748
Hello there, I was wondering if a network is able to recognize a small difference within a feature. Let's assume this feature is the only feature for a binary problem; it's the numeric representation of the number of occurrences of this feature in the original data. To be labeled as A it needs to be 0; everything else would be B. But B only ranges from 1 to 5, for example, so the difference between A and B could be just 1. Is a differentiation between these two still possible, or would it be better to put a weight on every occurrence of this feature so that there is a more distinct difference between the two?
st48749
I've tried cloning and running this, along with all the other tricks I could find online, with no luck:

#git clone https://github.com/pytorch/pytorch/captum.git
#cd captum
#pip install -e .

git clone https://github.com/UserName/captum
cd captum
git checkout custom-branch
pip install -e .

If I use the above code, I can import captum but can't import any submodules. Any suggestions?
st48750
Are you able to build captum from master and import submodules? If so, I guess your custom branch might be broken.
st48751
I just tested installing the latest release of Captum and it seems to have worked: !git clone https://github.com/pytorch/captum %cd captum !git checkout "v0.2.0" !pip3 install -e . import sys sys.path.append('/content/captum') %cd .. Testing the imports: import captum from captum.attr import ( GradientShap, DeepLift, DeepLiftShap, IntegratedGradients, LayerConductance, NeuronConductance, NoiseTunnel, ) from captum import custom_module # This fails The master branch seems to install correctly as well, but my custom branch still does not. I haven’t changed anything on my custom branch relating to the submodule organization or setup.py, so I don’t understand why I can’t import the one submodule? Edit: The import in the custom module’s __init__.py were changed from relative to absolute, and that looks like it might be the cause.
st48752
Are there optimized PyTorch binaries for hardware using Intel's Knights Landing CPUs?
Related:
https://github.com/pytorch/pytorch/issues/32909
https://www.quora.com/unanswered/How-does-one-install-custom-binaries-for-the-Knights-Landing-CPU-architecture-for-PyTorch
https://www.reddit.com/r/pytorch/comments/jhsy7w/how_does_one_install_custom_binaries_for_the/
st48753
I am trying to invert wav2vec 2.0 and it looks like it takes 400 samples and converts to a 512-dimensional vector. I’m having a hard time figuring out how to invert it. I tried doing a straight mapping from 512 => 400, but it doesn’t give great results, even when overfitting to a handful of samples. I think it’s because I need to include more temporal information. So if 400 samples (25ms) converts to a single 512-dim vector, then 1200 samples (75ms) will give me 3 512-dim vectors. How can I take those 3 vectors and convert back to 1200 samples? What architecture would you recommend?
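Not an expert on the wav2vec 2.0 internals, but if you treat the features as non-overlapping 400-sample frames as described, one possible starting point is a transposed 1D convolution that upsamples the sequence of 512-dim vectors back to the waveform; a real decoder would probably need several upsampling stages and a spectral or adversarial loss rather than a plain regression. A minimal sketch of the shape bookkeeping only:

import torch
import torch.nn as nn

# hypothetical single-layer decoder: 3 feature vectors (75 ms) -> 1200 samples,
# assuming non-overlapping 400-sample frames as in the question
decoder = nn.ConvTranspose1d(in_channels=512, out_channels=1,
                             kernel_size=400, stride=400)

feats = torch.randn(8, 512, 3)   # batch x 512 x 3 frames
wave = decoder(feats)            # (3 - 1) * 400 + 400 = 1200 output samples
print(wave.shape)                # torch.Size([8, 1, 1200])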
st48754
When running a select and assign statement on GPU with the same data twice I do not get the same output. However, this works as expected on CPU. What am I missing here? Is selecting on GPU non-deterministic somehow? import torch n = 100 y = torch.randint(0, n, (n,)) store = [] x = torch.rand(n, 3) for _ in range(2): z = torch.zeros_like(x) z[y] += x store.append(z) print((store[0]==store[1]).all()) store = [] x = torch.rand(n, 3).cuda() for _ in range(2): z = torch.zeros_like(x) z[y] += x store.append(z) print((store[0]==store[1]).all()) # console output -> # tensor(True) # tensor(False, device='cuda:0')
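One thing worth checking (just a guess from the snippet): torch.randint(0, n, (n,)) will almost certainly contain duplicate indices, and z[y] += x with repeated indices is a read-modify-write where the value that "wins" for a repeated index is not fixed on CUDA, while the CPU path happens to give the same result each time. A quick diagnostic, assuming duplicates are the cause, plus an explicit accumulation if summing contributions is actually the intent:

import torch

n = 100
y = torch.randint(0, n, (n,))
print("unique indices:", y.unique().numel(), "out of", n)  # usually < n

x = torch.rand(n, 3, device="cuda")
y = y.cuda()

# if the intent is to sum all contributions per index, index_add_ makes the
# accumulation explicit (the order of float adds may still vary on the GPU)
z = torch.zeros_like(x)
z.index_add_(0, y, x)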
st48755
Dear all, I plan to use PyTorch's C++ front-end for developing physics-informed neural network applications. While the network shall be based on libtorch, the physics model is implemented atop the Eigen library and has full algorithmic differentiation capabilities (forward and backward mode) using the open-source CoDiPack library. From what I read at https://pytorch.org/tutorials/advanced/cpp_autograd.html#using-custom-autograd-function-in-c it should be possible to wrap the external physics model in a custom autograd class that provides statically defined forward and backward methods in which (i) the connection with Eigen is made and (ii) the forward and backward propagation is continued using CoDiPack. Does anyone have experience in connecting libtorch with an 'external model' via custom autograd classes? Thanks in advance, Matthias
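I can't speak to the Eigen/CoDiPack side, but the pattern is the same as torch.autograd.Function in Python, which may be a useful reference while wiring up the C++ version. A minimal Python sketch with NumPy standing in for the external physics code (sin/cos are just placeholders for the model and its adjoint):

import numpy as np
import torch

class ExternalModel(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # hand the data to the external library (placeholder computation)
        y = np.sin(x.detach().cpu().numpy())
        return torch.from_numpy(y).to(x.device, x.dtype)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # the external reverse/adjoint pass would go here; cos is simply the
        # analytic derivative of the placeholder forward above
        dydx = np.cos(x.detach().cpu().numpy())
        return grad_out * torch.from_numpy(dydx).to(x.device, x.dtype)

x = torch.randn(4, dtype=torch.float64, requires_grad=True)
out = ExternalModel.apply(x)
out.sum().backward()
print(torch.allclose(x.grad, torch.cos(x).detach()))  # True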
st48756
Hey PyTorchers! As part of a reinforcement learning problem, I need to save GPU memory. The memory size of the grid is quadratic in the number of pixels, but only the pixels where the agents are located are important, so by using "sparse" representations of these grids we save a lot of memory. To save this memory I created another matrix format.

I have a mask M of shape [Batch, N, 2]:
B: batch size
N: number of agents
2: x and y coordinates of the agents.
[figure: mask of agent coordinates]

I also have an action grid of shape [Batch, C, H, W]:
B: batch size
C: number of actions
H: height of the grid
W: width of the grid
[figure: per-pixel action grid]

The goal is to index this action grid with the mask so that only the actions at the agent coordinates are kept, giving a masked grid of shape [B, N, C].
[figure: resulting masked grid]

Here is an example of what I would like to get:

>>> # Example of dimension
>>> BATCH_SIZE = 2
>>> N_AGENT = 3
>>> NB_ACTION = 2
>>> H_GRID = 3
>>> W_GRID = 3
>>> action_grid_batch.size() # [BATCH_SIZE, NB_ACTION, H_GRID, W_GRID]
torch.Size([2, 2, 3, 3])
>>> action_grid_batch
tensor([[[[0.4000, 0.5000, 0.7000], # Probability of the action 1 on the action_grid 1 in the batch
          [0.3000, 0.2000, 0.1000],
          [0.9000, 0.8000, 0.7000]],
         [[0.6000, 0.5000, 0.3000], # Probability of the action 2 on the action_grid 1 in the batch
          [0.7000, 0.8000, 0.9000],
          [0.1000, 0.2000, 0.3000]]],
        [[[0.3000, 0.2000, 0.1000], # Probability of the action 1 on the action_grid 2 in the batch
          [0.6000, 0.7000, 0.4000],
          [0.9000, 0.8000, 0.1000]],
         [[0.7000, 0.8000, 0.9000], # Probability of the action 2 on element 2 in the batch
          [0.4000, 0.3000, 0.6000],
          [0.1000, 0.2000, 0.9000]]]])
>>> batch_mask_agent_position
tensor([[[0, 1], # Position (H, W) of the agent 1 on element 1 in the batch
         [1, 1], # Position (H, W) of the agent 2 on element 1 in the batch
         [2, 0]], # Position (H, W) of the agent 3 on element 1 in the batch
        [[1, 1], # Position (H, W) of the agent 1 on element 2 in the batch
         [1, 2], # Position (H, W) of the agent 2 on element 2 in the batch
         [2, 2]]]) # Position (H, W) of the agent 3 on element 2 in the batch
>>> output = apply_mask_on_grid(action_grid_batch, batch_mask_agent_position)
>>> output.size()
torch.Size([2, 3, 2]) # [BATCH_SIZE, N_AGENT, NB_ACTION]
>>> output
tensor([[[0.5000, 0.5000], # Probability of the actions 1 and 2 for the agent position 1 on element 1 in the batch
         [0.2000, 0.8000], # Probability of the actions 1 and 2 for the agent position 2 on element 1 in the batch
         [0.9000, 0.1000]], # Probability of the actions 1 and 2 for the agent position 3 on element 1 in the batch
        [[0.7000, 0.3000], # Probability of the actions 1 and 2 for the agent position 1 on element 2 in the batch
         [0.4000, 0.6000], # Probability of the actions 1 and 2 for the agent position 2 on element 2 in the batch
         [0.1000, 0.9000]]]) # Probability of the actions 1 and 2 for the agent position 3 on element 2 in the batch

Thanking you in advance!
st48757
I’m sure there is a better way of indexing, but this could work for now: a, b = batch_mask_agent_position.split(1, dim=2) ret = action_grid_batch[torch.arange(action_grid_batch.size(0)), :, a, b] ret = ret[torch.arange(ret.size(0)), :, torch.arange(ret.size(2))] print(ret) > tensor([[[0.5000, 0.5000], [0.2000, 0.8000], [0.9000, 0.1000]], [[0.7000, 0.3000], [0.4000, 0.6000], [0.1000, 0.9000]]])
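Another option that avoids the double indexing is to flatten the spatial dimensions and use gather; a small sketch assuming the shapes from the example above (positions hold (h, w) as integer indices):

import torch

def apply_mask_on_grid(action_grid_batch, batch_mask_agent_position):
    # action_grid_batch: [B, C, H, W], batch_mask_agent_position: [B, N, 2]
    B, C, H, W = action_grid_batch.shape
    flat = action_grid_batch.view(B, C, H * W)                           # [B, C, H*W]
    idx = batch_mask_agent_position[..., 0] * W \
          + batch_mask_agent_position[..., 1]                            # [B, N] flat indices
    idx = idx.unsqueeze(1).expand(-1, C, -1)                             # [B, C, N]
    return flat.gather(2, idx).permute(0, 2, 1)                          # [B, N, C]

# usage with the tensors from the example:
# out = apply_mask_on_grid(action_grid_batch, batch_mask_agent_position)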
st48758
Hi, GPU memory consumption increases during training… I thought reshape is the reason for this. How can I fix it? Here is my code:

class Net(nn.Module):
    def __init__(self, SR, block_size, phi):
        super(Net, self).__init__()
        self.conv1 = nn.Sequential(
            nn.Conv2d(...),
            nn.BatchNorm2d(...),
            nn.ReLU()
        )
        self.conv2 = nn.Sequential(
            nn.Conv2d(...),
            nn.BatchNorm2d(...),
            nn.ReLU()
        )
        self.conv3 = nn.Sequential(
            nn.Conv2d(...),
            nn.BatchNorm2d(...),
            nn.ReLU()
        )
        self.conv4 = nn.Sequential(
            nn.Conv2d(...),
            nn.BatchNorm2d(...),
            nn.ReLU()
        )
        self.conv5 = nn.Sequential(
            nn.Conv2d(...),
            nn.BatchNorm2d(...),
            nn.ReLU()
        )
        self.conv6 = nn.Sequential(
            nn.Conv2d(...),
            nn.BatchNorm2d(...),
            nn.ReLU()
        )
        self.conv7 = nn.Sequential(
            nn.Conv2d(...),
        )
        self.fc = nn.Linear(6400, 64)

    def forward(self, kr, y, phi):
        out_conv1 = self.conv1(kr)
        out_conv2 = self.conv2(out_conv1)
        out_conv3 = self.conv3(out_conv2)
        out_conv4 = self.conv4(out_conv3)
        out_conv5 = self.conv5(out_conv4)
        out_conv6 = self.conv6(out_conv5)
        out_conv7 = self.conv7(out_conv6)
        # print('out_conv7', out_conv7.shape)
        out_feedback = kr + out_conv7
        out_linear = self.fc(out_feedback.flatten(2))
        out_reshape = out_linear.reshape([out_feedback.shape[0], out_feedback.shape[1], 8, 8])
        outfc = torch.zeros(out_feedback.shape[0], out_feedback.shape[1], 8, 8)
        fxr = Block_Compressed_Sensing(outfc, phi)
        # print('fxr', fxr)
        return fxr, out_reshape, outfc

What should I do?
st48759
Would you please give some advice on this problem? It seems that you have a good knowledge of PyTorch. Thanks very much!!!
st48760
Please don't tag specific users, as this might discourage others from posting an answer and creates some noise.
st48761
Ok, I won't do it again… I really want to know your opinion about the reshape part.
st48762
I am trying to perform matrix multiplication of multiple matrices in PyTorch and was wondering what the equivalent of numpy.linalg.multi_dot() is in PyTorch. If there isn't one, what is the next best way (in terms of speed and memory) to do this in PyTorch? Code:

import numpy as np
import torch

A = np.random.rand(3, 3)
B = np.random.rand(3, 3)
C = np.random.rand(3, 3)
results = np.linalg.multi_dot([A, B, C])  # multi_dot takes a sequence of arrays

A_tsr = torch.tensor(A)
B_tsr = torch.tensor(B)
C_tsr = torch.tensor(C)
# What is the PyTorch equivalent of np.linalg.multi_dot()?

Many thanks!
st48764
Hi, I don't think there is an equivalent function for that in PyTorch (not sure), but I just looked at numpy's source code and it seems pretty simple, as all the functions needed to implement the same method are available in PyTorch. It would not take much time, as you mostly just need to use torch in place of numpy, plus maybe a few tricks. numpy/linalg.py at v1.19.0 · numpy/numpy (github.com) PS. For 3 matrices, it's very simple. Bests
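For reference, a naive version (without numpy's optimal-ordering logic, which picks the multiplication order that minimizes work) is just a left-to-right reduction; a minimal sketch, not an optimized implementation:

from functools import reduce
import torch

def multi_dot(*tensors):
    # naive left-to-right chain; numpy additionally chooses the cheapest
    # parenthesization, which matters for long chains of non-square matrices
    return reduce(torch.matmul, tensors)

A, B, C = (torch.rand(3, 3, dtype=torch.float64) for _ in range(3))
print(torch.allclose(multi_dot(A, B, C), A @ B @ C))  # True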
st48765
Ok, many thanks @Nikronic. When I have time, I will put this on my list of to-dos as a mini project to contribute to PyTorch.
st48766
That's really nice, maybe you can create a PR on the official PyTorch GitHub repo and help other people use it too. AFAIK, PyTorch is trying to replicate all methods in np.linalg, so maybe you can add this to that list and take on the responsibility. Linear Algebra tracking issue · Issue #42666 · pytorch/pytorch (github.com) You can find multi_dot in the list of todos (planned for PyTorch 1.9), and it has not been assigned to anyone apparently.
st48767
Hi everyone, I am currently working on a CNN project and after looking at the PyTorch tutorial (https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html) I have some questions regarding normalization. The tutorial mentions that since the PIL images are in the range [0,1], they use a mean and standard deviation of 0.5 to bring them into a range of [-1,1]. I know that this is common practice (as is calculating the actual mean and standard deviation, or keeping them in the range [0,1]). But: not using the actual mean and standard deviation will not result in an overall mean of 0 and variance 1, as is often desired, right? Also: why bring the images into a range of [-1,1] in the first place? Does anyone have a paper or book explaining this (I need a reference for my project)? All I can find is people saying that it's common practice… Any help is very much appreciated! All the best, snowe
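To make the first point concrete, here is a small sketch with fake image data: normalizing with mean 0.5 and std 0.5 computes (x - 0.5) / 0.5, which maps [0,1] into [-1,1], but the resulting mean and std only become 0 and 1 if the data actually had mean 0.5 and std 0.5; using the per-channel statistics of the data itself does give roughly 0/1 (the values below are assumptions for illustration, not real dataset statistics):

import torch

x = torch.rand(1000, 3, 32, 32) * 0.6 + 0.2      # fake images in [0.2, 0.8]

x_norm = (x - 0.5) / 0.5                          # what Normalize((0.5,)*3, (0.5,)*3) does
print(x_norm.min().item(), x_norm.max().item())   # stays within [-1, 1]
print(x_norm.mean().item(), x_norm.std().item())  # generally not 0 / 1

# per-channel statistics of the data itself
mean = x.mean(dim=(0, 2, 3))
std = x.std(dim=(0, 2, 3))
x_std = (x - mean[None, :, None, None]) / std[None, :, None, None]
print(x_std.mean().item(), x_std.std().item())    # ~0 / ~1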