st97768
Ah I see, I've not been using OpenCV because all of my images are just NxN arrays. I've found that this works:

from PIL import Image
img = Image.fromarray(image.reshape(10, 10), 'L')

in conjunction with your transforms.Grayscale method.
st97769
I have a CNN as below for CIFAR10:

self.layer1 = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3),
    nn.BatchNorm2d(64),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2))
self.layer2 = nn.Sequential(
    nn.Conv2d(64, 128, kernel_size=3),
    nn.BatchNorm2d(128),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2))
self.layer3 = nn.Sequential(
    nn.Conv2d(128, 256, kernel_size=3),
    nn.BatchNorm2d(256),
    nn.ReLU())
self.layer4 = nn.AvgPool2d(8)
self.layer5 = nn.Linear(256, num_classes)
self.layer6 = nn.Softmax(dim=1)

def forward(self, x):
    out = self.layer1(x)
    out = self.layer2(out)
    out = self.layer3(out)
    out = self.layer4(out)
    out = out.reshape(out.size(0), -1)
    out = self.layer5(out)
    out = self.layer6(out)
    return out

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=0.0005)

I am training this network for 20 epochs, and I use the data augmentation methods below:
1. random crop (32, padding=4)
2. random horizontal flip
3. normalization
4. random affine for horizontal and vertical translation
5. mixup (alpha=1.0)
6. cutout (num_holes=1, size=16)

Each time I add a new data augmentation after normalization (4, 5, 6), my validation accuracy decreases from 60% to 50%. I know this is possible if the model's capacity is low. However, when I train this network in Keras for 20 epochs, using the same data augmentation methods, I can reach over 70% validation accuracy. What am I missing?
Note: in the Keras implementation the convolution and dense layers have L2 kernel regularization; in the PyTorch implementation only the optimizer has L2. Could that be the reason?
st97770
I also tried removing weight decay from SGD and adding it to the conv and dense layers manually, as described in "In PyTorch, how to add an L1 regularizer to activations", but the validation accuracy is still 50% with all the listed regularizations. So it can't be the weight decay. I would really appreciate an explanation of why Keras gets a 20% higher val_acc, but PyTorch does not.
st97771
If you use nn.CrossEntropyLoss you should pass the logits into this criterion rather than the probabilities from nn.Softmax. Could you remove self.layer6 and try it again?
st97772
Thank you for your answer! I am trying now, but I don’t understand the reason honestly. Shouldn’t I use softmax for multi class distribution, and cross entropy to undo the exponential with log? Could you please help me to understand? Also that keras model has softmax activation on its dense layer, and categorical cross entropy as its loss function.
st97773
Internally nn.CrossEntropyLoss will call nn.LogSoftmax on the input and then use nn.NLLLoss (negative log likelihood loss). So you can remove the nn.Softmax layer and pass the logits to nn.CrossEntropyLoss or alternatively you could use nn.LogSoftmax() as the last layer and use nn.NLLLoss as your criterion. The reason for this is that calculating log of the softmax might be numerically unstable, thus nn.LogSoftmax is preferred.
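For reference, a minimal sketch of the equivalence described above (shapes and values are arbitrary):

import torch
import torch.nn as nn

logits = torch.randn(4, 10)            # raw model outputs, no softmax applied
targets = torch.randint(0, 10, (4,))   # class indices

# Option 1: pass the logits directly to nn.CrossEntropyLoss
loss1 = nn.CrossEntropyLoss()(logits, targets)

# Option 2: nn.LogSoftmax as the last layer + nn.NLLLoss as the criterion
log_probs = nn.LogSoftmax(dim=1)(logits)
loss2 = nn.NLLLoss()(log_probs, targets)

print(torch.allclose(loss1, loss2))    # True, up to floating point error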
st97774
Okay, I get it now, thank you. Before this change 24 different models' average validation accuracy was 48.4. After the change 8 models' average accuracy is 52.78. On the other hand, the Keras model's average accuracy over 20 models is 64.4. And if I don't use mixup, cutout, or random affine, the PyTorch models can get around 60%. There must still be something that I am missing. These are the data augmentation methods, maybe the problem is here:

class Cutout(object):
    """Randomly mask out one or more patches from an image.

    Args:
        n_holes (int): Number of patches to cut out of each image.
        length (int): The length (in pixels) of each square patch.
    """
    def __init__(self, n_holes, length):
        self.n_holes = n_holes
        self.length = length

    def __call__(self, img):
        """
        Args:
            img (Tensor): Tensor image of size (C, H, W).
        Returns:
            Tensor: Image with n_holes of dimension length x length cut out of it.
        """
        h = img.size(1)
        w = img.size(2)
        mask = np.ones((h, w), np.float32)
        for n in range(self.n_holes):
            y = np.random.randint(h)
            x = np.random.randint(w)
            y1 = int(np.clip(y - self.length / 2, 0, h))
            y2 = int(np.clip(y + self.length / 2, 0, h))
            x1 = int(np.clip(x - self.length / 2, 0, w))
            x2 = int(np.clip(x + self.length / 2, 0, w))
            mask[y1: y2, x1: x2] = 0.
        mask = torch.from_numpy(mask)
        mask = mask.expand_as(img)
        img = img * mask
        return img

def mixup_data(x, y, alpha=1.0, is_cuda=True):
    lam = np.random.beta(alpha, alpha) if alpha > 0. else 1.
    batch_size = x.size()[0]
    index = randperm(batch_size).cuda() if is_cuda else randperm(batch_size)
    mixed_x = lam * x + (1 - lam) * x[index, :]
    y_a, y_b = y, y[index]
    return mixed_x, y_a, y_b, lam

def mixup_criterion(y_a, y_b, lam):
    return lambda criterion, pred: lam * criterion(pred, y_a) + (1 - lam) * criterion(pred, y_b)

And my prepare_data() method:

def prepare_data(batch_size=128, valid_frac=0.1, manual_seed=0):
    n_holes = 1
    length = 16
    mean = [x / 255.0 for x in [125.3, 123.0, 113.9]]
    std = [x / 255.0 for x in [63.0, 62.1, 66.7]]
    train_transform = transforms.Compose([
        transforms.RandomCrop(32, padding=4),
        transforms.RandomHorizontalFlip(),
        transforms.RandomAffine(degrees=0, translate=(0.125, 0.125)),
        transforms.ToTensor(),
        transforms.Normalize(mean, std),
    ])
    test_transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean, std),
    ])
    train_transform.transforms.append(Cutout(n_holes=n_holes, length=length))
    train_dataset = torchvision.datasets.CIFAR10(
        root='./data', train=True, transform=train_transform, download=True)
    valid_dataset = torchvision.datasets.CIFAR10(
        root='./data', train=True, transform=train_transform, download=True)
    test_dataset = torchvision.datasets.CIFAR10(
        root='./data', train=False, transform=test_transform, download=True)

And in the original script, in train() I use mixup as below (alpha=1.0):

def train(self, x_val, y_val):
    x = Variable(x_val, requires_grad=False)
    y = Variable(y_val, requires_grad=False)
    x, y_a, y_b, lam = utils.mixup_data(x, y, self.alpha)
    x = Variable(x, requires_grad=False)
    y_a = Variable(y_a, requires_grad=False)
    y_b = Variable(y_b, requires_grad=False)
    self.optimizer.zero_grad()
    output = self.forward(x)
    loss_loc = lam * self.loss(output, y_a) + (1 - lam) * self.loss(output, y_b)
    loss_loc.backward(retain_graph=True)
    self.optimizer.step()
st97775
Your code looks generally good! Could you try to apply the same weight initializations that are used in Keras so you can compare the models? Here is a small example. Also, could you post the Keras code, as there still might be some small differences?
Some minor issues:
- Variables are deprecated and you can use tensors directly since PyTorch 0.4.0.
- It's generally recommended to call the model directly instead of forward, i.e. change self.forward(x) to self(x).
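If it helps, here is a minimal sketch of applying Glorot/Xavier initialization to the conv and linear layers of the model above (the init_weights helper is just an illustrative name):

import torch.nn as nn

def init_weights(m):
    if isinstance(m, (nn.Conv2d, nn.Linear)):
        nn.init.xavier_uniform_(m.weight)
        if m.bias is not None:
            nn.init.constant_(m.bias, 0.)

model.apply(init_weights)  # model is the nn.Module defined earlier in the thread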
st97776
There is no specific weight initialization for the Keras model. The source code says conv and dense layer kernels are initialized with glorot_uniform, so I will try to implement glorot_uniform now, thanks! (Is it the same as Xavier?) Here is a sketch of the Keras model; I am not allowed to share it as it is:

Conv2D(filters=64, kernel_size=3, activation=None, padding='same', kernel_regularizer=regularizers.l2(weight_decay))
BatchNormalization()
Activation('relu')
MaxPooling2D(pool_size=2)
Conv2D(filters=128, kernel_size=3, activation=None, padding='same', kernel_regularizer=regularizers.l2(weight_decay))
BatchNormalization()
Activation('relu')
MaxPooling2D(pool_size=2)
Conv2D(filters=256, kernel_size=3, activation=None, padding='same', kernel_regularizer=regularizers.l2(weight_decay))
BatchNormalization()
Activation('relu')
GlobalAveragePooling2D()
Dense(units=10, activation='softmax', kernel_regularizer=regularizers.l2(weight_decay))

opt_algo = optimizers.SGD(lr=0.01, momentum=0.9)
keras_model.compile(optimizer=opt_algo, loss='categorical_crossentropy', metrics=['accuracy'])

train_datagen_pre = ImageDataGenerator(
    featurewise_center=False,
    samplewise_center=False,
    featurewise_std_normalization=False,
    samplewise_std_normalization=False,
    zca_whitening=False,
    rotation_range=0,
    width_shift_range=0.125,
    height_shift_range=0.125,
    horizontal_flip=True,
    vertical_flip=False,
    preprocessing_function=utils.cutout)
train_datagen_pre.fit(X_train)
train_datagen = MixupGenerator(X_train, Y_train, batch_size=batch_size, alpha=1.0, datagen=train_datagen_pre)()

batch_size = 128
keras_model.fit_generator(
    generator=train_datagen,
    steps_per_epoch=X_train.shape[0] // batch_size,
    epochs=20,
    validation_data=test_datagen.flow(X_test, Y_test, batch_size=batch_size),
    validation_steps=X_test.shape[0] // batch_size)
st97777
Thanks for the code! Yes, Xavier Glorot introduced the initialization scheme; some frameworks use his first name, while others prefer his last name. Besides the potential weight init difference, you are not using any padding in your PyTorch model. For kernel_size=3 and default values for stride, dilation, etc. you should use padding=1.
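For example, with the default stride=1, padding=1 preserves the spatial size of a 3x3 convolution, while padding=0 shrinks it by 2 pixels per dimension:

import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)
print(nn.Conv2d(3, 64, kernel_size=3, padding=1)(x).shape)  # torch.Size([1, 64, 32, 32])
print(nn.Conv2d(3, 64, kernel_size=3, padding=0)(x).shape)  # torch.Size([1, 64, 30, 30])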
st97778
I use padding=1 normally, I just forgot to add it there. Now I am testing the weight-initialized version. Also, is there a way to disable weight decay for the batch normalization layers' learnable parameters? I implemented it as below, based on https://stackoverflow.com/questions/44641976/in-pytorch-how-to-add-l1-regularizer-to-activations, but I am not sure if it is correct (I didn't use out1, out5, out9, or out13?) and whether there is a cleaner way to do it.

for i, (images, labels) in enumerate(trainloader):
    images = images.cuda()
    labels = labels.cuda()
    lambda2 = 0.0005
    # Forward pass
    out, out1, out5, out9, out13 = model(images)
    loss = criterion(out, labels)
    all_1_params = torch.cat([x.view(-1) for x in model.layer1.parameters()])
    all_5_params = torch.cat([x.view(-1) for x in model.layer5.parameters()])
    all_9_params = torch.cat([x.view(-1) for x in model.layer9.parameters()])
    all_13_params = torch.cat([x.view(-1) for x in model.layer13.parameters()])
    l2_regularization_1 = lambda2 * torch.norm(all_1_params, 2)
    l2_regularization_5 = lambda2 * torch.norm(all_5_params, 2)
    l2_regularization_9 = lambda2 * torch.norm(all_9_params, 2)
    l2_regularization_13 = lambda2 * torch.norm(all_13_params, 2)
    loss_all = loss + l2_regularization_1 + l2_regularization_5 + l2_regularization_9 + l2_regularization_13
    # Backward and optimize
    optimizer.zero_grad()
    loss_all.backward()
    optimizer.step()
st97779
Hey, thank you so much for your support! I could get a maximum of 69.9% validation accuracy with:
- random horizontal flip
- normalization
- random affine for horizontal and vertical translation
- mixup (alpha=1.0)
- cutout (num_holes=1, size=16)

Random crop was decreasing val_acc; I guess I shouldn't use it together with cutout. Weight initialization didn't have a substantial effect. Here is what I did:
- Remove the softmax layer
- Remove weight decay from the optimizer
- Decay only the conv2d and linear layer weights manually

When we just add weight decay to the optimizer, it decays all differentiable parameters, including biases and the learnable parameters of the batch normalization layers. In my case this, plus using a softmax layer, had catastrophic effects. The Keras model had kernel initializers on conv2d and linear layers, hence it didn't have such a problem. It also has a softmax layer.
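For reference, a sketch of how the same selective decay could be done with optimizer parameter groups instead of adding the penalty manually (the 1-D check is a simplification that treats biases and batch norm parameters as the no-decay group):

decay, no_decay = [], []
for name, param in model.named_parameters():
    if param.dim() == 1:      # biases and batch norm weights/biases are 1-dimensional
        no_decay.append(param)
    else:                     # conv and linear weight kernels/matrices
        decay.append(param)

optimizer = torch.optim.SGD([
    {'params': decay, 'weight_decay': 5e-4},
    {'params': no_decay, 'weight_decay': 0.0},
], lr=0.01, momentum=0.9)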
st97780
Awesome, I'm glad it's working now!

kneazle: "Keras model had kernel initializers on conv2d and linear layers, hence it didn't have such a problem."

Just in case I made confusing statements: PyTorch modules also have default initializers, which are different from the ones Keras uses by default.
st97781
During training I intend to compute the loss using MSELoss, but use the max error as the criterion for stopping training and for reporting the error. Note that I set reduce=False, so loss_func returns a loss per input/target element. How can I use an optimizer together with the max error? Error:

/Envs/climaenv/lib/python3.6/site-packages/torch/optim/lbfgs.py in step(self, closure)
    102         # evaluate initial f(x) and df/dx
    103         orig_loss = closure()
--> 104         loss = float(orig_loss)
    105         current_evals = 1
    106         state['func_evals'] += 1

ValueError: only one element tensors can be converted to Python scalars

Here is the code:

def fit(self, x=None, y=None, lr=0.001, epochs=1000):
    '''
    Training function
    Arguments:
        x: features train set
        y: target train set
        lr: learning rate
        epochs: number of epochs
    '''
    optimizer = torch.optim.LBFGS([{'params': [self.hidden.weight, self.predict.weight]}], lr=lr, max_iter=20)
    loss_func = torch.nn.MSELoss(reduce=False)

    def closure():
        optimizer.zero_grad()            # clear gradients for next train
        prediction = self(x)             # input x and predict based on x
        loss = loss_func(prediction, y)
        loss.max().backward()            # backpropagation, compute gradients
        return loss

    for t in range(epochs):
        optimizer.step(closure)          # apply gradient
    return self.loss_epochs
st97782
Solved by ptrblck in post #2.
st97783
If you want to use the max error to stop your training, you could use loss = orig_loss.max().detach().numpy() Currently orig_loss is not a scalar, as you use reduce=False in your loss function. Therefore float cannot cast it to a scalar float.
st97784
Thank you so much! Now I can stop the training. But I don't know if you got my main point: how can I use the max error to reduce my loss function? Currently the only reduction options are sum or mean. Because of that I still get an error in the optimizer…
st97785
For unreduced losses you would need to provide the gradient to backward:

loss_fn = nn.MSELoss(reduce=False)
loss = loss_fn(torch.randn(10, 10, requires_grad=True), torch.randn(10, 10))
loss.backward(torch.ones_like(loss))
st97786
In my program, I have to build two different models for training. However, my CUDA memory overflows immediately, so I want to distribute training across different GPUs for the different models. My final loss consists of two parts: one part is independent and the other is joint. Does anybody have some suggestions?
st97787
Could you explain how your independent and joint losses are created? You could use something like this as a starter:

modelA = nn.Linear(10, 2).to('cuda:0')
modelB = nn.Linear(10, 2).to('cuda:1')
criterion = nn.CrossEntropyLoss()
optimizerA = optim.SGD(modelA.parameters(), lr=1e-3)
optimizerB = optim.SGD(modelB.parameters(), lr=1e-3)

for data, target in loader:
    # Get losses for separate models
    data, target = data.to('cuda:0'), target.to('cuda:0')
    output = modelA(data)
    lossA = criterion(output, target)

    data, target = data.to('cuda:1'), target.to('cuda:1')
    output = modelB(data)
    lossB = criterion(output, target)

I'm not sure how you would like to create the joint loss, i.e. just summing lossA and lossB wouldn't change anything.
st97788
Thanks in advance.

modelA = nn.Linear(10, 2).to('cuda:0')
modelB = nn.Linear(10, 2).to('cuda:1')
criterion = nn.CrossEntropyLoss()
optimizerA = optim.SGD(modelA.parameters(), lr=1e-3)
optimizerB = optim.SGD(modelB.parameters(), lr=1e-3)

for ((dataA, targetA), (dataB, targetB)) in zip(loader_A, loader_B):
    # Get losses for separate models
    dataA, targetA = dataA.to('cuda:0'), targetA.to('cuda:0')
    outputA = modelA(dataA)
    lossA = criterion(outputA, targetA)

    dataB, targetB = dataB.to('cuda:1'), targetB.to('cuda:1')
    outputB = modelB(dataB)
    lossB = criterion(outputB, targetB)

    lossC = torch.nn.CosineSimilarity(outputA, outputB)
    final_loss = lossA + lossB + lossC

In the above snippet, we aim to eliminate the discrepancy between two independent systems. Therefore, we should train them jointly.
st97789
In your code snippet you'll most likely get an error stating some tensors are not on the same device. Since outputA and outputB are on GPU0 and GPU1, respectively, you should push them to the same device. Could you try the following:

...
lossC = torch.nn.CosineSimilarity(outputA, outputB.to('cuda:0'))  # lossC is now on cuda:0
final_loss = lossA + lossB.to('cuda:0') + lossC
st97790
Yes, you are right. But lossB has been moved to GPU 0. I wonder whether that could affect the gradient of modelB in the backward pass. Could modelB's parameters still be updated synchronously? Thanks for your reply again.
st97791
The gradients for lossB should lie on GPU1, so it shouldn't be a problem. You can check it with:

print(modelB.weight.grad.device)
st97792
Hi, I'm wondering whether a resize operation will affect backpropagation. If I have a tensor input with size (n1*n2*n3, 1) and run the following code, where netA and netB are two networks:

inputv = Variable(input, requires_grad=True)
output = netA(inputv)
output = output.view(n1, n2, n3)
output = netB(output)
output.backward()

If I delete the line output = output.view(n1, n2, n3), will the gradient change? (For netB, any input will be resized to (:, n2, n3), so even if we don't resize the input manually before feeding it into the network, it still works without bugs.)
st97793
Solved by albanD in post #2.
st97794
Hi, resizing is seen as any other op and is perfectly differentiable. So you can .view() it any way you want and it will work!
st97795
Suppose one has a list containing two tensors: List = [tensor([[a1, b1], [a2, b2], …, [an, bn]]), tensor([c1, c2, …, cn])]. How does one convert the list into a numpy array (n by 3) where the corresponding tensor elements align by rows? Like the following:

array = (a1, b1, c1
         a2, b2, c2
         …
         an, bn, cn)

Possible? New to this and learning.
st97796
Hi, you should use torch.cat to make them into a single tensor: concatenating an n x 2 and an n x 1 tensor along dimension 1 gives an n x 3 output.
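A small sketch with made-up values; the 1-D tensor needs an extra dimension before the concatenation:

import torch

a = torch.tensor([[1., 2.], [3., 4.], [5., 6.]])  # n x 2
b = torch.tensor([7., 8., 9.])                    # n

combined = torch.cat([a, b.unsqueeze(1)], dim=1)  # n x 3
array = combined.numpy()                          # convert to a numpy array
print(array)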
st97797
We are using PyTorch in an application where the model forward() is being bottlenecked by CPU speed as well as GPU speed. As a solution, we considered using DataParallel to parallelize batch processing. Although we only have 2 GPUs, we hope to use 8 or even 16 threads to cut down the CPU cost (this should be fine since the GPU usage is not at 100% during forward()). We have the following line

model = nn.DataParallel(model, device_ids=[0, 0, 1, 1])

which gives the error

File "/home/kezhang/top_ml/top_ml/engine.py", line 277, in train
    label_outputs=self.model(constituents, transitions, seq_lengths)
File "/home/kezhang/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
File "/home/kezhang/.local/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 122, in forward
    replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
File "/home/kezhang/.local/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 127, in replicate
    return replicate(module, device_ids)
File "/home/kezhang/.local/lib/python3.6/site-packages/torch/nn/parallel/replicate.py", line 12, in replicate
    param_copies = Broadcast.apply(devices, *params)
File "/home/kezhang/.local/lib/python3.6/site-packages/torch/nn/parallel/_functions.py", line 19, in forward
    outputs = comm.broadcast_coalesced(inputs, ctx.target_gpus)
File "/home/kezhang/.local/lib/python3.6/site-packages/torch/cuda/comm.py", line 40, in broadcast_coalesced
    return torch._C._broadcast_coalesced(tensors, devices, buffer_size)
RuntimeError: inputs must be on unique devices

suggesting that GPUs need to be unique for DataParallel to work. Is there any particular reason for this? Are there other methods to achieve what we want to do?
st97798
Hi, Using a GPU more than once does not make sense here. It would mean executing python code twice in two different threads and run twice as many ops on the gpu (which are half the size). And GPUs are really bad at running small ops. If the GPU usage is already 100% all the time, then the GPU is fully used and there is nothing you can do to speed things up more from the code point of view. Of course reducing model size/architecture could.
st97799
Hi, I’m in fact interested in that exact behaviour. I want multiple threads to use the same GPU since the CPU is a huge bottleneck and I want to mitigate that by having the CPU portion of the model be parallelized at the cost of running more ops on the same GPU at the same time. Since the GPU is already running small ops even with a single thread, I want to at least see if the benefit from parallelizing the CPU can beat out the penalty from doing the concurrent ops on the GPU. Any thoughts?
st97800
You said above that GPU usage was already 100%. There is nothing you can do to go faster. Your CPU is already waiting on the GPU to finish computing stuff before continuing.
st97801
The thing is that by setting 2 threads to the same GPU, the original work that this GPU was doing will be split in two and then executed as two different workloads. The total amount of work done on the GPU will be EXACTLY the same, just with twice the number of operations, each of them smaller. So you will execute more CPU code and process the same amount of data on the GPU. It can't speed up the computations.
st97802
I am trying to say that the GPU is not the problem here. A large fraction of time is being spent on CPU processing so I want to use the multithreading capability that the machine has for the CPU. I understand that the GPU is not going to run any faster, but that’s not the goal in the first place.
st97803
Is part of the model you put into DataParallel actually running computation on the CPU?
st97804
Yes, there is significant CPU work in the model forward() as evidenced by profiling, so the DataParallel threads are doing CPU work.
st97805
But the thing is that:
- If they use PyTorch ops mainly, they should already use all the cores available and thus more threads won't help.
- If they do Python stuff, they will be blocked by the GIL in multithreading and so won't run more Python code either.

You are not in these cases?
st97806
I am not in the first case; they are mainly numpy and list manipulation ops. I don’t believe that I am in the second case either since running DataParallel with 2 GPUs has the CPU running twice as fast for each thread (I profiled this as well), since the threads are not really accessing global memory.
st97807
The thing is that even when running multiple threads, only one of them can run Python code at a given time. All the others have to wait. A quick intro about the GIL can be found here. The numpy operations might benefit a bit from it if they are matrix multiplications and they don't already use multiple cores. But replacing these ops with their PyTorch versions will use all the cores without any need for multithreading. To go back to the original question, DataParallel does not support using the same GPU multiple times because it won't give any advantage if you use PyTorch ops. If you use other ops that could benefit from multithreading, I guess you will need to use Python's built-in threading library to parallelize the part of your code that can be (keep in mind this is mostly IO and some library calls that release the GIL and are single-core).
st97808
Hi, I did some extensive research into the GIL and I think I am understanding what you are saying. Thank you for the new insights. It seems that multiprocessing.Pool can bypass this requirement so I will consider using this to speed up my code - but am I correct in assuming that DataParallel does not use multiprocessing.Pool and is therefore still limited by the GIL?
st97809
Yes DataParallel use threads and so is blocked by the GIL for CPU intensive tasks.
st97810
Hello, in the documentation of the Seq2Seq translation tutorial, the link at this point is broken: "Using teacher forcing causes it to converge faster but when the trained network is exploited, it may exhibit instability." The working link now lives here: http://minds.jacobs-university.de/uploads/papers/ESNTutorialRev.pdf
It'd be nice to fix this, for easier reference and to avoid a Google search to find the relevant paper about teacher forcing. Thanks!
st97811
I want to train two different models which are too large to be put on the same GPU. The problem is that I need to optimize their total loss at the same time, and if they're on different GPUs, I have trouble implementing the backpropagation. I have tried different approaches but they didn't work out. Below is a draft of my experiment architecture. Any suggestion is appreciated. Thanks!!
st97812
Thanks for your reply. I’m now moving all losses to GPU 0 and then doing backpropagation. And it seems to work as all parameters in both Model1 and Model2 would update. But I notice that the parameters in Model2 only update slightly compared to Model1. I’m not sure whether the backpropagation is influenced in Model2 due to the location of model losses or it’s all because of the way I’m using the loss. Please let me know if this works for you.
st97813
Hello, I currently have a network that is running, but my final outputs are not matching what they should be, and I think the issue has to do with my loss function. The network takes in a 1x12 binary vector and outputs a 1x30 vector. Here's my code:

# Define a Net class that models the architecture described above
# This will be a subclass of the generic nn.Module class
class Net(nn.Module):
    # constructor method includes definitions of weight matrices for the layers
    # these matrices will be available by calling the parameters() instance function
    def __init__(self):
        super(Net, self).__init__()
        # define the operations y = Wx + b associated with the connection weights
        self.item_to_rep = nn.Linear(8, 8)          # maps 8 inputs to 8 representation nodes
        self.rep_rel_to_hidden = nn.Linear(12, 15)  # maps 12 representation & relation nodes to 15 hidden nodes
        self.hidden_to_out = nn.Linear(15, 30)      # maps 15 hidden nodes to 30 output nodes
        self.loss = nn.BCELoss()                    # uses BCE loss function

    # propagates inputs to outputs using current weights in computation
    # backward method is not defined in this class - it's incorporated into PyTorch
    def forward(self, x):
        # split input into item and relation nodes
        item = x[:8]
        rel = x[8:]
        rep = F.relu(self.item_to_rep(item))
        temp = torch.cat([rep, rel], -1)
        hidden = F.relu(self.rep_rel_to_hidden(temp))
        output = F.sigmoid(self.hidden_to_out(hidden))
        return output

# main computation
def buildAndTrainNetwork():
    net = Net()                                       # initialize network
    optimizer = optim.SGD(net.parameters(), lr=0.01)  # specifies learning algorithm, rate
    targets, x = textToTensor()  # this method isn't done but here's something:
    for n in range(5000):
        for i in range(len(x[0])):
            y = net(x[i])                    # forward-propagates inputs to outputs using current weights
            optimizer.zero_grad()            # clears gradient records to prepare for weights update
            error = net.loss(y, targets[i])  # computes output loss for current training instances
            error.backward()                 # triggers the backpropagation of errors
            optimizer.step()                 # updates weights based on backpropagated errors
        if n % 500 == 0:
            print(error.data[0])
            print(y, targets[i])

I've tried switching to a bunch of different loss functions without having any luck. Here's a screenshot of my activation output vs. target output after 5,000 epochs (screenshot not shown). The activation of the 8th node is highest by a lot, but still isn't close enough to 1. Not really sure what's going on here since I'm pretty new, and some insight would be highly appreciated. Thanks!
st97814
I get an "Illegal instruction (core dumped)" error when trying to copy an object into CUDA memory. I tried with Python 3.6 and 3.7, CUDA 9.0 and 9.2. I have no idea how to debug this. This code works fine with PyTorch 0.4.1 but always fails in 1.0.0.dev:

import torch
torch.tensor([1., 2.]).cuda()

Any idea of how I can solve this? GDB output:

(gdb) run teste.py
Starting program: /home/marco/anaconda3/envs/fastai/bin/python teste.py
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
1.0.0.dev20181004
9.2.148
[New Thread 0x7fffae733700 (LWP 4189)]
True
GeForce GTX 1070

Thread 1 "python" received signal SIGILL, Illegal instruction.
0x00007fffb9057bc3 in at::cuda::detail::initGlobalStreamState() ()
   from /home/marco/anaconda3/envs/fastai/lib/python3.6/site-packages/torch/lib/libcaffe2_gpu.so
(gdb)

Environment:

PyTorch version: 1.0.0.dev20181003
Is debug build: No
CUDA used to build PyTorch: 9.2.148
OS: Ubuntu 18.04.1 LTS
GCC version: (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0
CMake version: Could not collect
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 9.1.85
GPU models and configuration: GPU 0: GeForce GTX 1070
Nvidia driver version: 396.54
cuDNN version: Could not collect

Versions of relevant libraries:
[pip] numpy (1.15.2)
[pip] torch (1.0.0.dev20181003)
[conda] cuda92 1.0 0 pytorch
[conda] pytorch-nightly 1.0.0.dev20181003 py3.6_cuda9.2.148_cudnn7.1.4_0 [cuda92] pytorch
st97815
This is weird. Thanks for reporting it. Can you report the output of the following GDB commands after the "Thread 1 "python" received signal SIGILL, Illegal instruction" message?

bt (backtrace)
disas (disassemble)

Do you know what CPU you have? On Linux, you can usually find out with cat /proc/cpuinfo.
st97816
The CPU is an AMD Phenom II X6. GDB output: https://pastebin.com/9uyXXuZ3
Thanks
st97817
Thanks, I think this PR will fix the problem: "[WIP] Getting ride of SSE-only code and convolve5x5" by cpuhrsch (github.com/pytorch/pytorch). I'll work on merging it. Should be in a nightly build within a few days.
st97818
Thanks for looking at this. I built from source with the PR you linked. I still got the "Illegal instruction (core dumped)" error, but it's different this time:

Thread 1 "python" received signal SIGSEGV, Segmentation fault.
THCPModule_initExtension (self=<optimized out>) at torch/csrc/cuda/Module.cpp:354
354         auto _state_cdata = THPObjectPtr(PyLong_FromVoidPtr(state));

Complete GDB output: https://pastebin.com/f4kSG5ve
Is it still the same problem, or did I mess up while building from source?
st97819
Thanks for trying out the PR. I’m not sure exactly what’s going on, but that’s a different error (“Segmentation fault” vs. “Illegal instruction”). If you’re building from source, make sure you run python setup.py clean before you rebuild. Sometimes, only some files get rebuilt which can cause those sorts of crashes.
st97820
That was the first time I built. I will do that before rebuild. Can I do anything to help?
st97821
Could you try rebuilding with DEBUG=1? That may provide better information:

python setup.py clean
DEBUG=1 python setup.py install

If you run into the error, could you try running the following GDB commands:

bt
disas
info registers

Thanks for helping to debug this.
st97822
@colesbury With the last pytorch-nightly update the errors are gone.

(fastai) marco@phenom:~/MachineLearning$ python
Python 3.7.0 (default, Jun 28 2018, 13:15:42)
[GCC 7.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.__version__
'1.0.0.dev20181010'
>>> torch.cuda.is_available()
True
>>> torch.tensor([1.,2.]).cuda()
tensor([1., 2.], device='cuda:0')
>>>

Thanks for your help.
st97823
FWIW, I had the same problem as @elmarculino, but I could solve it by installing my own MAGMA library. Installing PyTorch with DEBUG=1 and running under gdb revealed that there was a problem with a MAGMA-related function (see https://pastebin.com/tpb28w7V). So I removed the conda package magma-cuda92, installed MAGMA 7.3.0 from source, recompiled, and it worked.
st97824
@colesbury I'm getting the same error. I tried to do a clean install and it's still happening.
Python version: 3.7.0
PyTorch version: '1.0.0a0+4b86a21' (built from source)
GDB output: https://pastebin.com/H4txr59u
st97825
In my research (RL), the model is often quite small; it only utilizes 10-20% of the GPU's power. When doing things like hyperparameter search, where we need to train the network with different configurations, is it okay to open several multiprocessing.Process instances and train them in parallel on a single GPU, given that there is enough GPU memory (it's quite a small network)? Note that this is different from Hogwild training, where each process shares the same model to update parameters asynchronously.
st97826
Hi! I had a similar issue. After some googling I found this thread: "A call to torch.cuda.is_available makes an unrelated multi-processing computation crash?" Looks like set_start_method did not work for me, but mp = mp.get_context('spawn') did. Hope that provides some help. The second link showed the solution:

def some_method():
    mp = torch.multiprocessing.get_context('forkserver')  # <-- This does the magic
    pool = mp.Pool(processes=1)
    ........

Hope it helps!
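For the original hyperparameter-search use case, a rough sketch of launching several independent runs on the same GPU with torch.multiprocessing could look like this (train and the configs are placeholders, not a real API):

import torch.multiprocessing as mp

def train(config):
    # build the model for this config, move it to the (shared) GPU and run the
    # usual training loop; each process has its own CUDA context and memory
    pass

if __name__ == '__main__':
    mp.set_start_method('spawn')  # or use get_context('forkserver') as above
    configs = [{'lr': 1e-3}, {'lr': 1e-4}]
    procs = [mp.Process(target=train, args=(cfg,)) for cfg in configs]
    for p in procs:
        p.start()
    for p in procs:
        p.join()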
st97827
I have a tensor given as:

auto my_tensor = at::zeros({5, 5}, at::kCUDA);

and another function that takes a tensor as an argument:

at::Tensor my_func(at::Tensor t) {
  // some operation changing the data in t
}

I want to call my_func on subsets of the tensor in a loop. Currently I've tried

const int stride = 5;
for (int i = 0; i < 5; i++) {
  my_func(my_tensor.data() + i * stride);
}

But tensor.data() + 5 gives me a pointer; I need the at::Tensor. I can't just dereference the pointer though, because it's a pointer to the raw data, not to the at::Tensor object. My goal is to update my_tensor in place so I don't have to do unnecessary copying. How can I do this? It seems perhaps something along the lines of a TensorAccessor? I went to try it, though, and it told me packed_accessor() didn't exist (PyTorch 0.4.1).
st97828
marcman411:

at::Tensor my_func(at::Tensor t) {
  // some operation changing the data in t
}

Please pass by reference, i.e.

at::Tensor my_func(at::Tensor &t) {
  // some operation changing the data in t
}
st97829
Thanks for the reply. What I'm actually trying to do is modify slices of the tensor in place. I'm not concerned so much with how to pass the tensor itself as an argument, but rather just a slice of it. Also, I don't think passing by reference quite makes a difference in this case, as at::Tensor just wraps a pointer to the data, not the data itself. In passing by reference, you're just passing a reference to a pointer.
st97830
Dear all, I am learning semi-supervised learning for image classification. I notice that in Temporal Ensembling and Mean Teacher, the optimizer is Adam and they change the beta parameter of Adam during training. But I cannot find a way to change this parameter in PyTorch. Does anybody know how to achieve a similar feature? Thanks.
st97831
Hi. I'd like to concatenate two Variables, which are each the output of an nn module. Say I have Variables v1 and v2. I can use torch.cat([v1, v2]) in my Python interactive session, but when I try to write a script and run it, it gives this error:

TypeError: cat received an invalid combination of arguments - got (tuple, int), but expected one of:
 (sequence[torch.cuda.FloatTensor] tensors)
 (sequence[torch.cuda.FloatTensor] tensors, int dim)
      didn't match because some of the arguments have invalid types: (tuple, int)

How should I concatenate two Variables? (I'd like to concat and feed the result to another fully connected layer.)
st97832
I guess v1 and v2 have different types (e.g. torch.cuda.FloatTensor and torch.FloatTensor). That's the problem: they need to have the same type.
st97833
Helped me too. It seems like the error message is incorrect. My problem was that I tried to cat a FloatTensor and a LongTensor but the error was mentioning a tuple type.
st97834
I have got the same problem. However, printing type(v1) and type(v2) results in <class 'torch.autograd.variable.Variable'> for both cases. Is it not possible to concatenate Variables?
st97835
@McLawrence Don't use type(v1), instead use v1.type(). The former calls Python's built-in function and only tells you the class, i.e. torch.autograd.variable.Variable as you posted. The answers here are referring to the data type of the Variable, which is what the latter call returns. You can have the same data types for Variables as for Tensors in PyTorch, and if you try to concatenate (or apply most other operations to) two Variables of different types, you get the error above.
st97836
v1.type() will not work on Variables. One has to use v1.data.type() there. Also, it is recommended to use type(v1). However, my problem was that one tensor was on the GPU whereas the other one was on the CPU.
st97837
In my case, both tensors to be concatenated were of type torch.DoubleTensor. Converting the type to FloatTensor using float() worked. The error message feels a bit misleading.
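As a general illustration of the dtype mismatch discussed in this thread and the .float() fix:

import torch

a = torch.randn(2, 3)           # torch.FloatTensor
b = torch.randn(2, 3).double()  # torch.DoubleTensor

# torch.cat([a, b]) raises a type error because the dtypes differ
c = torch.cat([a, b.float()])   # works: both tensors are now FloatTensors
print(c.type())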
st97838
I also encountered this problem, but the two variables' types are both cuda.FloatTensor. Anyone know why? Thanks!
st97839
Don't know if it's still relevant, but for people running into this: I had the same question and looked into torch.sort: https://github.com/pytorch/pytorch/blob/v0.4.1/aten/src/THC/generic/THCTensorSort.cu#L330. It is undocumented behaviour, but if the tensor contains at least 2049 elements, the function sortViaThrust is used internally, which, as can be seen in the same file, performs a stable sort. You can therefore define a wrapper function that provides a stable sort by padding the tensor if it contains fewer than 2049 elements, but be aware that this may break in the future. I would be happy to see if anyone has a more robust solution for a stable sort.
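A rough sketch of such a wrapper for a 1-D CUDA tensor, based purely on the undocumented 2049-element threshold described above (the padding value is chosen so the padded entries sort last and can be dropped afterwards); treat this as fragile:

import torch

def stable_sort_1d(t, descending=False):
    n = t.numel()
    if n >= 2049:
        return torch.sort(t, descending=descending)
    # pad so the (assumed stable) Thrust code path is taken
    pad_val = (t.min() - 1) if descending else (t.max() + 1)
    padded = torch.cat([t, t.new_full((2049 - n,), pad_val.item())])
    values, indices = torch.sort(padded, descending=descending)
    keep = indices < n  # drop the padding entries, keeping the original order
    return values[keep], indices[keep]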
st97840
Nice way to find the torch.sort implementation, @wouter. Thank you. I wish the code were linked in the docs. I can't tell from the code which sorting algorithm is being used. Where is the code for that?
st97841
Calling .cuda() on an optimizer that uses Adam gives me AttributeError: 'Adam' object has no attribute 'cuda'; calling .cuda() on the criterion is fine. Also, if I have 2 GPUs, do I do something different? When I look at my GPU usage it's very spiky; the usage goes up and down with long stretches of zero usage. I followed the data parallelism tutorial to use 2 GPUs for the model and that's about it. Can someone give me a list of things I should call .cuda() on?
st97842
For my latest project I need to use a matrix as a matrix of "vectorized" submatrices. For example, given the matrix

A = [[1 2 3]
     [4 5 6]
     [7 8 9]]

the matrix of submatrices with dimension 2x2 is

M = [[1 2 4 5]
     [2 3 5 6]
     [4 5 7 8]
     [5 6 8 9]]

where each row is one of the 2x2 matrices composing A. For now I'm initializing M with zeros and using a for-loop slicing A to create M, but it is time-consuming (O(n*m)). I was wondering if there is a more PyTorch-y way to do it. This is my code:

>>> getSubMatrix = lambda m, i, j: m[i:i+2, j:j+2].contiguous().view(-1, 2*2)[0]
>>>
>>> A = torch.randn(3, 3)
>>> M = torch.zeros(4, 2*2)
>>> k = 0
>>> for i in range(0, 2):
...     for j in range(0, 2):
...         M[k] = getSubMatrix(A, i, j)
...         k = k + 1
...
>>> A
tensor([[ 0.4257,  0.7940, -1.2986],
        [ 0.3243, -1.3812, -1.0442],
        [-0.3122,  1.2312,  0.9811]])
>>> M
tensor([[ 0.4257,  0.7940,  0.3243, -1.3812],
        [ 0.7940, -1.2986, -1.3812, -1.0442],
        [ 0.3243, -1.3812, -0.3122,  1.2312],
        [-1.3812, -1.0442,  1.2312,  0.9811]])
>>>

Thanks a lot
st97843
tensor.unfold might be what you're looking for:

A = torch.tensor([[ 0.4257,  0.7940, -1.2986],
                  [ 0.3243, -1.3812, -1.0442],
                  [-0.3122,  1.2312,  0.9811]])

M = A.unfold(dimension=0, size=2, step=1).unfold(dimension=1, size=2, step=1)
M = M.contiguous().view(4, 4)
print(M)
> tensor([[ 0.4257,  0.7940,  0.3243, -1.3812],
          [ 0.7940, -1.2986, -1.3812, -1.0442],
          [ 0.3243, -1.3812, -0.3122,  1.2312],
          [-1.3812, -1.0442,  1.2312,  0.9811]])
st97844
Hi PyTorch users! Is there a way to alter ResNet18 so that training will not cause size mismatch errors when using single channel images as opposed to 3-channel images? I have so far changed my input images so that they are 224x224, altered the number of input channels, and as this is a regression problem I have changed the output to be 1 node but the convolutions are having trouble: ResNet( (conv1): Conv2d(, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace) (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False) (layer1): Sequential( (0): BasicBlock( (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (1): BasicBlock( (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (layer2): Sequential( (0): BasicBlock( (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (downsample): Sequential( (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): BasicBlock( (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (layer3): Sequential( (0): BasicBlock( (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (downsample): Sequential( (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): BasicBlock( (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (layer4): Sequential( (0): BasicBlock( (conv1): Conv2d(256, 512, kernel_size=(3, 3), 
stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (downsample): Sequential( (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): BasicBlock( (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (avgpool): AvgPool2d(kernel_size=7, stride=1, padding=0) (fc): Linear(in_features=512, out_features=1, bias=True) ) The error: RuntimeError: size mismatch, m1: [64 x 802816], m2: [65536 x 256] at /pytorch/aten/src/THC/generic/THCTensorMathBlas.cu:249
st97845
Did you change more than the last layer? I assume your input is now in the shape [batch_size, 3, 224, 224]? The model looks alright and I can't see where this size mismatch occurs. Could you post the whole stack trace or the code where you've manipulated the model?
st97846
I also think it will be helpful to show how you have resized the image, created three channels, and your dataloader. This likely is not a problem with your model but rather with your input.
st97847
My input is now [batch_size, 1, 244, 244], as my images are only single channel. I'm not sure how to output the stack trace, but I can give it a try! My model manipulation is as follows:

resnet18 = models.resnet18()
resnet18.conv1 = torch.nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
resnet18.fc = torch.nn.Linear(512, 1)

As suggested by David below, I'll output the input sizes just to be sure though.
st97848
Thanks for the reply. I resize the images by using:

transform = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5])])

The images are 64x64 numpy arrays originally, before the transformation, but they are only single-channel images. I'll take a look at the input dimensions, as they should be [batch_size, 1, 224, 224] after the dataloader, which looks as follows:

loader = DataLoader(FITSCubeDataset(data_path, cube_length, transforms, img_size),
                    batch_size=batch_size, shuffle=False, sampler=train_sampler)
st97849
Your code works fine using this input:

x = torch.randn(1, 1, 224, 224)
output = resnet18(x)

I think you might have a typo in your shapes. Are you using 224 or 244 as the spatial size? Your transformation code looks fine. However, here you say your input is [batch_size, 1, 244, 244].
st97850
I had a look and printed batch.size():

torch.Size([64, 1, 224, 224])

Yes, so the transform was to resize the images to 224 by 224, as they are originally 64x64. Not really sure why it's producing an error now.
st97851
That's strange. Could you just run your code with my dummy input:

resnet18 = models.resnet18()
resnet18.conv1 = torch.nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
resnet18.fc = torch.nn.Linear(512, 1)

x = torch.randn(64, 1, 224, 224)
output = resnet18(x)
st97852
That works fine… I think it could be due to the fact that I haven't altered the output shape, given that the error contains the following:

output = self.classifier(features.view(int(x.size()[0]), -1))
st97853
Are you sure you are using the ResNet? models.resnet18 doesn’t have a self.classifier member, but self.fc. Maybe you are unintentionally using another model like models.vgg16?
st97854
Ah yep you’re completely right! I’ve wrapped the training into a class and forgot to switch the model out for ResNet sorry about that!
st97855
Thanks! I’ve got another error now but I’ll try to work it out myself before I post on here
st97856
Quick update: so I've got the pretrained model working with some final layers frozen, but I'm only managing to make it work with the following:

resnet = models.resnet50(pretrained=True)
resnet.conv1 = torch.nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
resnet.fc = torch.nn.Linear(2048, 1)

Even though in my mind the fc layer should be (512, 1), if I don't use (2048, 1) I get a size mismatch error.
st97857
Yeah, there is an expansion number in the blocks. If you look at the resnet source (github.com/pytorch/vision/blob/master/torchvision/models/resnet.py#L114), the final layer is built as

self.fc = nn.Linear(512 * block.expansion, num_classes)

so for resnet50 the Bottleneck block's expansion of 4 gives the 2048 input features you saw. You can use the following code:

resnet = models.resnet50(pretrained=True)
resnet.conv1 = torch.nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
resnet.fc = torch.nn.Linear(512 * resnet.layer1[0].expansion, 1)

That will work with any of the resnets. Also, if you want it to work with any input size, you can use torch.nn.AdaptiveAvgPool2d(1) for the average pooling layer, as long as the initial input's size is large enough to go through all the convolutions.
st97858
Is it possible to build a PyTorch C++ extension for Python with CMake? How can I do it? I'm trying to use this example (https://pytorch.org/cppdocs/installing.html) for writing a C++ extension for Python (https://pytorch.org/tutorials/advanced/cpp_extension.html).
The problem is pybind11: if I don't include pybind but use only the PyTorch library, there is an error:

third_party/libtorch/include/pybind11/detail/common.h:112:10: fatal error: 'Python.h' file not found

If I include pybind using add_subdirectory(third_party/pybind11), everything compiles, but when I try to import the library in Python code, there is a strange error:

Key already registered with the same priority: VariableHooks
st97859
Solved, everything works. It turned out I was using different versions of PyTorch when building the extension and when importing it.
st97860
Hi guys, I have read a lot in the PyTorch documentation about how to distribute my training over multiple GPUs, and I am confused. I had to read about multiprocessing in general (because I didn't have any knowledge about it). I don't know if my code is right or not, but every time I run it I see the error

THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1525812548180/work/aten/src/THC/THCTensorRandom.cu line=25 error=46 : all CUDA-capable devices are busy or unavailable

and

RuntimeError: cuda runtime error (46) : all CUDA-capable devices are busy or unavailable at /opt/conda/conda-bld/pytorch_1525812548180/work/aten/src/THC/THCTensorRandom.cu:25

def run(rank, size):
    device = torch.device("cuda")
    dataset = datasets.MNIST('./data', train=True, transform=transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))]))
    # from torch.utils.data.distributed import DistributedSampler
    train_sampler = DistributedSampler(dataset)
    train_loader = torch.utils.data.DataLoader(dataset, batch_size=128, num_workers=4,
                                               pin_memory=True, sampler=train_sampler)
    torch.manual_seed(1234)
    model = Net().cuda()
    model = torch.nn.parallel.DistributedDataParallel(model)
    optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)
    for epoch in range(10):
        epoch_loss = 0.0
        train_sampler.set_epoch(epoch)
        for data, target in train_loader:
            data, target = data.to(device), target.to(device)
            optimizer.zero_grad()
            output = model(data)
            loss = F.nll_loss(output, target)
            epoch_loss += loss.item()
            loss.backward()
            optimizer.step()
        print('Rank ', dist.get_rank(), ', epoch ', epoch, ': ', epoch_loss / num_batches)

def init_processes(rank, size, fn, backend='gloo'):
    """ Initialize the distributed environment. """
    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = '29500'
    dist.init_process_group(backend, rank=rank, world_size=size)
    fn(rank, size)

size = 4
processes = []
for rank in range(size):
    p = Process(target=init_processes, args=(rank, size, run))
    processes.append(p)
    p.start()

I also read the distributed ImageNet example to try to do the same. Is this the correct way of using distributed training?
PS: btw, the first error appears 3 times and the second error appears 4 times (I only took the last line of the second error). Both errors appear at the same time. I also tried to Google the error, but I could not use the results to fix it.
st97861
Hi, it's usually simpler to start several Python processes using the torch.distributed.launch utility of PyTorch. Here is a (very) simple introduction to distributed training in PyTorch (there are several ways you can improve over it, but it will show you an example in action).
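As a rough sketch (not the full tutorial), the launch utility starts one process per GPU and passes --local_rank to your script, so the script itself only needs something along these lines (Net is the model from the post above; details depend on your setup):

import argparse
import torch
import torch.distributed as dist

parser = argparse.ArgumentParser()
parser.add_argument('--local_rank', type=int, default=0)
args = parser.parse_args()

torch.cuda.set_device(args.local_rank)
dist.init_process_group(backend='nccl', init_method='env://')

model = Net().cuda()
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.local_rank])

and would be started with something like: python -m torch.distributed.launch --nproc_per_node=4 train_script.py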
st97862
I used this command in the Anaconda prompt:

conda install -c pytorch pytorch-nightly

and I am getting:

PackagesNotFoundError: The following packages are not available from current channels:
  - pytorch-nightly

Current channels:
  - https://conda.anaconda.org/pytorch/win-64
  - https://conda.anaconda.org/pytorch/noarch
  - https://repo.anaconda.com/pkgs/main/win-64
  - https://repo.anaconda.com/pkgs/main/noarch
  - https://repo.anaconda.com/pkgs/free/win-64
  - https://repo.anaconda.com/pkgs/free/noarch
  - https://repo.anaconda.com/pkgs/r/win-64
  - https://repo.anaconda.com/pkgs/r/noarch
  - https://repo.anaconda.com/pkgs/pro/win-64
  - https://repo.anaconda.com/pkgs/pro/noarch
  - https://repo.anaconda.com/pkgs/msys2/win-64
  - https://repo.anaconda.com/pkgs/msys2/noarch

To search for alternate channels that may provide the conda package you're looking for, navigate to https://anaconda.org and use the search bar at the top of the page.

How do I install the PyTorch nightly build in the Anaconda prompt?
st97863
Could you try to update conda and see if the package can be found afterwards?

conda update -n base conda
st97864
The preview build is unfortunately not yet available on the official channel for Windows. However, it seems there is some progress on the nightly Windows packages here.
st97865
I guess they will be included in the official channel eventually, but meanwhile you could install them using @peterjc123's builds.
st97866
Hi. I want to have one Conv2d layer whose C_in size is determined after some processing in the forward method and is not known in advance. How can I do this, since I need to declare a fixed C_in size in my __init__ method? Is there any way, like in TensorFlow, where the size can be set to None and resolved dynamically at run time?
st97867
Solved by ptrblck in post #2: You could use the functional API to define your parameters in the forward method. Here is a small example using a random number of kernels for the conv layer:

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv_weight = None
        self.conv…
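Since the quoted example is truncated, here is a minimal sketch of the general idea (it may differ from the linked post): create the weight lazily in forward once the input channel count is known, and use the functional conv2d. Note that a parameter created this way must exist before the optimizer is constructed, or be added to the optimizer afterwards.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MyModel(nn.Module):
    def __init__(self, out_channels=8, kernel_size=3):
        super(MyModel, self).__init__()
        self.out_channels = out_channels
        self.kernel_size = kernel_size
        self.conv_weight = None  # created on the first forward pass

    def forward(self, x):
        if self.conv_weight is None:
            in_channels = x.size(1)  # C_in is only known here
            self.conv_weight = nn.Parameter(
                torch.randn(self.out_channels, in_channels,
                            self.kernel_size, self.kernel_size))
        return F.conv2d(x, self.conv_weight, padding=1)

model = MyModel()
out = model(torch.randn(2, 5, 16, 16))  # in_channels is inferred as 5
print(out.shape)                        # torch.Size([2, 8, 16, 16])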