st102000
If you check the source code, the functionals come from torch.xxx. Although I'm not a dev, I guess the functionals have several benefits for autograd etc.… therefore I would recommend you use the nn functionals.
st102001
You use the documented interfaces: they are good for you. You can use the non-documented interfaces: they will break for you, and you get to keep the pieces. The functions are mostly identical in behavior at the moment. So the back story is: ATen native (i.e. non-TH… legacy) doesn't know about namespaces at this point. This currently causes all "public" ATen functions to show up as torch.XXX. Because there also is a C++ interface, you cannot rename everything with "_" as one might think of doing. So things showing up in torch.XXX when they're documented to live in torch.nn.XXX is accidental and a small bug, likely to be fixed at some point. Best regards, Thomas
st102002
The code is below:

import torch
import matplotlib.pyplot as plt
from torch.autograd import Variable

# torch.manual_seed(1)    # reproducible
N_SAMPLES = 20
N_HIDDEN = 300

# training data
x = torch.unsqueeze(torch.linspace(-1, 1, N_SAMPLES), 1)
y = x + 0.3 * torch.normal(torch.zeros(N_SAMPLES, 1), torch.ones(N_SAMPLES, 1))
x = Variable(x)
y = Variable(y)

# test data
test_x = torch.unsqueeze(torch.linspace(-1, 1, N_SAMPLES), 1)
test_y = test_x + 0.3 * torch.normal(torch.zeros(N_SAMPLES, 1), torch.ones(N_SAMPLES, 1))
test_x = Variable(test_x)
test_y = Variable(test_y)

# show data
plt.scatter(x.data.numpy(), y.data.numpy(), c='magenta', s=50, alpha=0.5, label='train')
plt.scatter(test_x.data.numpy(), test_y.data.numpy(), c='cyan', s=50, alpha=0.5, label='test')
plt.legend(loc='upper left')
plt.ylim((-2.5, 2.5))
plt.show()

net_overfitting = torch.nn.Sequential(
    torch.nn.Linear(1, N_HIDDEN),
    torch.nn.ReLU(),
    torch.nn.Linear(N_HIDDEN, N_HIDDEN),
    torch.nn.ReLU(),
    torch.nn.Linear(N_HIDDEN, 1),
)

net_dropped = torch.nn.Sequential(
    torch.nn.Linear(1, N_HIDDEN),
    torch.nn.Dropout(0.5),  # drop 50% of the neurons
    torch.nn.ReLU(),
    torch.nn.Linear(N_HIDDEN, N_HIDDEN),
    torch.nn.Dropout(0.5),  # drop 50% of the neurons
    torch.nn.ReLU(),
    torch.nn.Linear(N_HIDDEN, 1),
)

print(net_overfitting)  # net architecture
print(net_dropped)

optimizer_ofit = torch.optim.Adam(net_overfitting.parameters(), lr=0.01)
optimizer_drop = torch.optim.Adam(net_dropped.parameters(), lr=0.01)
loss_func = torch.nn.MSELoss()

plt.ion()  # interactive plotting

for t in range(500):
    pred_ofit = net_overfitting(x)
    pred_drop = net_dropped(x)
    loss_ofit = loss_func(pred_ofit, y)
    loss_drop = loss_func(pred_drop, y)

    optimizer_ofit.zero_grad()
    optimizer_drop.zero_grad()
    loss_ofit.backward()
    loss_drop.backward()
    optimizer_ofit.step()
    optimizer_drop.step()

    if t % 10 == 0:
        # change to eval mode in order to fix the dropout effect
        net_overfitting.eval()
        net_dropped.eval()  # dropout behaves differently than in train mode

        # plotting
        plt.cla()
        test_pred_ofit = net_overfitting(test_x)
        test_pred_drop = net_dropped(test_x)
        plt.scatter(x.data.numpy(), y.data.numpy(), c='magenta', s=50, alpha=0.3, label='train')
        plt.scatter(test_x.data.numpy(), test_y.data.numpy(), c='cyan', s=50, alpha=0.3, label='test')
        plt.plot(test_x.data.numpy(), test_pred_ofit.data.numpy(), 'r-', lw=3, label='overfitting')
        plt.plot(test_x.data.numpy(), test_pred_drop.data.numpy(), 'b--', lw=3, label='dropout(50%)')
        plt.text(0, -1.2, 'overfitting loss=%.4f' % loss_func(test_pred_ofit, test_y).data.numpy(),
                 fontdict={'size': 20, 'color': 'red'})
        plt.text(0, -1.5, 'dropout loss=%.4f' % loss_func(test_pred_drop, test_y).data.numpy(),
                 fontdict={'size': 20, 'color': 'blue'})
        plt.legend(loc='upper left')
        plt.ylim((-2.5, 2.5))
        plt.pause(0.1)

        # change back to train mode
        net_overfitting.train()
        net_dropped.train()

plt.ioff()

Run this program and you get the following result: [plot: train/test scatter with the overfitting fit curve vs. the dropout(50%) fit curve]

Here are my questions:

1. What is evaluation mode? Why should I use these two lines of code to enter evaluation mode?

net_overfitting.eval()
net_dropped.eval()

2. Are the curves above for the training set or the test set? From the image, it looks like the curves were fitted to the training set, but the code is written like this:

plt.plot(test_x.data.numpy(), test_pred_ofit.data.numpy(), 'r-', lw=3, label='overfitting')
plt.plot(test_x.data.numpy(), test_pred_drop.data.numpy(), 'b--', lw=3, label='dropout(50%)')

So I am very confused. Can someone tell me what is going on here? Thank you very much!
I hope to get a detailed answer, thank you very much!
st102004
eval() switches the module to evaluation mode, i.e. some layers like Dropout and BatchNorm change their behavior. In the case of Dropout, the connections won't be dropped but scaled. The curves show the predictions on the test data. As you can see, the "overfitted" model predicts samples closer to the training data, as it's overfitting. Generally, both of your data sets, train and test, are sampled from the same function, so you will see that your model is overfitting if the predictions get very close to the actual training data but fail to generalize the underlying function.
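To make the train/eval difference concrete, here is a tiny illustration (which entries are zeroed depends on the random mask):

import torch
from torch import nn

drop = nn.Dropout(0.5)
x = torch.ones(1, 10)

drop.train()     # training mode: about half the entries are zeroed,
print(drop(x))   # and the survivors are scaled by 1/(1 - 0.5) = 2
drop.eval()      # evaluation mode: no units are dropped
print(drop(x))   # all ones again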
st102005
Hi! I wrote an LSTM program with PyTorch and ran it on the CPU; it works. Then I ran my program on the GPU, and a RuntimeError occurred, as shown below. [screenshot of the RuntimeError] Here is my computer environment: PyTorch 0.4.1, CUDA 9.2, GeForce GT 740M. What could I do? Please help me, thank you!
st102006
It was because I did not move my RNN module to CUDA, while I did move my data to CUDA. Now it works.
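For completeness, the fix pattern looks like this (model/data names are illustrative):

device = torch.device('cuda')
model = model.to(device)   # the module's parameters must be on the GPU too,
data = data.to(device)     # not just the input data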
st102007
Now I have two tensors: one of size (batch_size, 3), named A; the other of size (batch, 1, 208, 208), named B. I want to compute A[:, 0:3] * B[:, 0, :, :] and get a tensor of size (batch, 3, 208, 208).
st102008
I think the sizes of the tensors you are multiplying must be the same in every dimension but the first; it is matrix multiplication, after all. Maybe I interpreted your question wrong.
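For the shape asked about above, plain elementwise multiplication with broadcasting does the job; a sketch:

import torch

batch = 2
A = torch.randn(batch, 3)
B = torch.randn(batch, 1, 208, 208)

# reshape A to (batch, 3, 1, 1); the singleton dims then broadcast
# against B's (batch, 1, 208, 208) to give (batch, 3, 208, 208)
out = A[:, :, None, None] * B
print(out.shape)  # torch.Size([2, 3, 208, 208])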
st102009
[figure: sketch of the desired network] I'm trying to build a network like the picture shows. You see, the inputs don't connect to the 1st hidden layer with full connections. I have no idea how to build it. Could someone please do me a favor?
st102010
If you want the same weights, you could use strided convolution. If not, you could use unfold/fold to get the input into a format where multiplying with the same weights would be convolution, and then do a batch matrix multiplication with different weights. Quite likely it is useful to make more precise (with equations or so) how your input looks and how you want to apply connection weights to it. Best regards, Thomas
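To make the unfold route concrete, here is a rough sketch of a locally connected layer (untied weights per output location); all sizes here are invented:

import torch
import torch.nn.functional as F

N, C, H, W = 2, 1, 6, 6        # hypothetical input
kh, kw, out_ch = 3, 3, 4
x = torch.randn(N, C, H, W)

patches = F.unfold(x, kernel_size=(kh, kw))    # (N, C*kh*kw, L), here L = 4*4 = 16
L = patches.size(-1)
weight = torch.randn(L, out_ch, C * kh * kw)   # a different weight per location

# for every location l, multiply its patch by its own weight matrix
out = torch.einsum('nkl,lok->nol', patches, weight)   # (N, out_ch, L)
out = out.view(N, out_ch, H - kh + 1, W - kw + 1)     # back to a spatial map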
st102011
It seems that the final value of momentum is (learning_rate * momentum) in SGD, which does not match the standard SGD equations.
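For reference (hedged; this is my reading of the difference, and the SGD docs carry a note about it): a common reference formulation is

$v_{t+1} = \mu v_t - \eta g_{t+1}, \qquad p_{t+1} = p_t + v_{t+1}$

whereas PyTorch uses

$v_{t+1} = \mu v_t + g_{t+1}, \qquad p_{t+1} = p_t - \eta v_{t+1}$

so the learning rate $\eta$ scales the accumulated momentum term as well, which matches the (learning_rate * momentum) factor observed above.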
st102012
I want to do beam search while testing seq2seq. So say, if the top 2 labels are [0, 1], I want to feed them to the next decoder LSTM's input: nn.lstm(input, hidden). But now I want to feed multiple inputs at once, so that I can have a softmax over the vocab for all relevant inputs in one go. But then the input and hidden dimensions mismatch. Originally input = [0], hidden = (1x1xDIM, 1x1xDIM). Now I want input = [1, 2], hidden = ? Once I get this, I will have the softmax over the vocab for both inputs, and I can set the backpointer for each word in the vocab and continue.
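A common way to get the dimensions to match (a hedged sketch; embedding and lstm stand for the modules in the original setup) is to treat the beam as the batch dimension and repeat the hidden state:

import torch

beam = 2
h, c = hidden                          # each (1, 1, DIM) as above
h = h.repeat(1, beam, 1)               # (1, 2, DIM): one copy per beam candidate
c = c.repeat(1, beam, 1)

tokens = torch.tensor([1, 2])          # the top-2 candidate labels
inp = embedding(tokens).unsqueeze(0)   # (1, 2, emb_dim)
out, (h, c) = lstm(inp, (h, c))        # softmax over vocab for both at once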
st102013
Hi, this is my first post, so sorry if it's not suitable. Basically, I am developing a NN that uses images. However, I receive this error:

RuntimeError: Given groups=1, weight[5, 3, 436, 436], so expected input[5, 436, 436, 3] to have 3 channels, but got 436 channels instead

I understand the expected layouts (weight[out_channels, in_channels, h, w], input[batch, channels, h, w]), but my input has its dimensions shifted. I copied the sample code from this; the only change I made was changing self.conv1 = nn.Conv2d(3, 6, 5) to nn.Conv2d(3, 5, 436). I can provide more info if necessary. Besides, can I use h != w input images? Thanks!
st102015
It looks like you provided an image with shape [batch_size, w, h, channels] instead of [batch_size, channels, h, w]. How did you load your images? You can use non-square images without a problem. If you need a non-square kernel, you can provide the kernel_size as a tuple: kernel_size=(kh, kw).
st102016
Yeah, after some research I saw that it is a channel/width/height ordering problem. Currently I am trying to find a way to reverse the order. My code is like this:

datasetTrain = FencerPicLoader('D:/x/x/x', transformations)
train_loader = DataLoader(dataset=datasetTrain, batch_size=5, shuffle=True, num_workers=2)

for epoch in range(2):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(train_loader, 0):
        # get the inputs
        inputs, labels = data
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = net(inputs)  ##### error jumps to Net()

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 5, 436)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

Thanks for the help and the kernel advice!
st102017
You can just permute the dimensions: inputs = inputs.permute(0, 3, 2, 1) or change your Dataset to return the input in the right shape.
st102018
It gave this error: RuntimeError: thnn_conv2d_forward is not implemented for type torch.ByteTensor. But the problem will probably be solved if I just change the array type. Edit: actually, that solved the error. Apparently forwarding is the next problem… I believe I can probably solve it.
st102019
nn.Conv2d is complaining about your input. Cast your input to input.float() and try it again.
st102020
Yeah, as I promised, I solved that issue before the answer! OK, last question (so sorry, I didn't want to ask), but before that: is there a good website where I can learn this kind of stuff easily? I think I will encounter more of this in the future and want to be less dependent on the PyTorch forums, etc. I checked the Stanford ML courses and the fast.ai courses, but they hardly ever mention these inner structures.

RuntimeError: Given input size: (5x1x1). Calculated output size: (5x0x0). Output size is too small at c:\programdata\miniconda3\conda-bld\pytorch_1524546371102\work\aten\src\thnn\generic/SpatialDilatedMaxPooling.c:67

So I got this error with conv2d(3, 5, 436). If I use a kernel of 218 or 109, I get a RAM error, something like 71 GB needed. If I write a random number like 50, it works, but it heavily slows down the computer. THANKS
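For what it's worth, the error follows directly from the usual size arithmetic, out = floor((in + 2*pad - kernel) / stride) + 1: a 436-wide kernel on a 436-pixel input leaves a 1x1 map, and MaxPool2d(2, 2) then has nothing left to pool. A quick sanity-check sketch:

def conv_out(size, kernel, stride=1, pad=0):
    # out = floor((in + 2*pad - kernel) / stride) + 1
    return (size + 2 * pad - kernel) // stride + 1

print(conv_out(436, 436))         # 1 -> conv1 with a 436 kernel leaves a 1x1 map
print(conv_out(1, 2, stride=2))   # 0 -> pooling a 1x1 map: "Output size is too small"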
st102021
Hi, I'm a novice in PyTorch. I really want to train on cat & dog images with a CNN, but it isn't going well… I think my image loading method is not good.

def imlist(fpath):
    flist = os.listdir(fpath)
    return flist

def traindata(impath, flist):
    train = []
    length = len(flist)
    for i in range(length):
        path = impath + flist[i]
        im = pilimg.open(path)
        im = im.resize((32, 32))
        im = np.array(im)
        im = np.reshape(im, (1, 3, 32, 32))
        im = torch.FloatTensor(im)
        if 'dog' in flist[i]:
            train.append((im, torch.LongTensor([1])))
        if 'cat' in flist[i]:
            train.append((im, torch.LongTensor([0])))
    return train

fpath is a folder path containing the training images, and the function imlist returns a list of the training image file names. How can I load images? And how can I resize images for use in a CNN? Does a torch CNN use the image format [batch size, channel size, img size, img size]? Help me! Thank you.
st102022
@Hwanil You can load images using the following code:

traindir = os.path.join(args.data, 'train')
valdir = os.path.join(args.data, 'val')
testdir = os.path.join(args.data, 'test')
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

train_loader = data.DataLoader(
    datasets.ImageFolder(traindir,
                         transforms.Compose([
                             transforms.RandomSizedCrop(224),
                             transforms.RandomHorizontalFlip(),
                             transforms.ToTensor(),
                             normalize,
                         ])),
    batch_size=args.batch_size,
    shuffle=True,
    num_workers=args.workers,
    pin_memory=True)

val_loader = data.DataLoader(
    datasets.ImageFolder(valdir,
                         transforms.Compose([
                             transforms.Scale(256),
                             transforms.CenterCrop(224),
                             transforms.ToTensor(),
                             normalize,
                         ])),
    batch_size=args.batch_size,
    shuffle=True,
    num_workers=args.workers,
    pin_memory=True)

test_loader = data.DataLoader(
    TestImageFolder(testdir,
                    transforms.Compose([
                        transforms.Scale(256),
                        transforms.CenterCrop(224),
                        transforms.ToTensor(),
                        normalize,
                    ])),
    batch_size=1,
    shuffle=False,
    num_workers=1,
    pin_memory=False)
st102023
Thank you for your help, I'll try it. I have a question about normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]): are these parameters (mean and std) always the same values, or do they depend on the data?
st102024
These values were computed for the ImageNet dataset, which consists of RGB images, so you can use them.
st102025
This is a very good example for getting started; have a look: GitHub: desimone/pytorch-cat-vs-dogs.
st102026
Bit late, but here's some code I just wrote for PyTorch 0.4.1:

def load_dataset():
    data_path = 'data/train/'
    train_dataset = torchvision.datasets.ImageFolder(
        root=data_path,
        transform=torchvision.transforms.ToTensor()
    )
    train_loader = torch.utils.data.DataLoader(
        train_dataset,
        batch_size=64,
        num_workers=0,
        shuffle=True
    )
    return train_loader

for batch_idx, (data, target) in enumerate(load_dataset()):
    # train network
    ...
st102027
I want to be able to initialize specific parts of a GRUCell weight and bias in different ways, i.e. for the reset gate vs. the update gate vs. the candidate. How can I find out the layout of the weight_ih, weight_hh, bias_ih and bias_hh tensors?
st102028
I'd think it might be worth trying whether the layout at the end of the GRU documentation works for GRUCell as well (probably that note was added to the RNN/GRU/LSTM modules but not to the cells). Best regards, Thomas
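For reference, a sketch of what slicing by that documented (W_ir | W_iz | W_in) row layout could look like; the sizes here are made up, and the block order is confirmed experimentally a couple of posts below:

import torch
from torch import nn

input_size, hidden_size = 3, 4          # hypothetical sizes
cell = nn.GRUCell(input_size, hidden_size)
H = hidden_size

with torch.no_grad():
    # weight_ih has shape (3*H, input_size); rows are stacked (reset | update | new)
    w_ir, w_iz, w_in = cell.weight_ih.chunk(3, dim=0)
    nn.init.orthogonal_(w_ir)           # e.g. init the reset-gate weights one way
    nn.init.xavier_uniform_(w_in)       # ... and the candidate weights another
    cell.bias_ih[H:2 * H].fill_(1.0)    # e.g. bias the update gate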
st102029
[screenshot of the GRU docs: the weight_ih/weight_hh variable layout] This bit? Sounds plausible. It would be good to get a link to the source code of the GRUCell implementation (I can get as far as the .py wrapper, but then it zooms off into C++-land, and it's not obvious how to find it in the source tree).
st102030
Seems this is indeed the layout, by experimenting a bit:

import torch
from torch import nn, optim


def run_findz():
    """
    I 3 O 4
    gru.weight_ih.size() torch.Size([12, 3])
    gru.bias_hh.size() torch.Size([12])
    output tensor([[0.5, 0.5, 0.5, 0.5], [1., 1., 1., 1.], [1.5, 1.5, 1.5, 1.5], [2., 2., 2., 2.]])
    output tensor([[1., 1., 1., 1.], [2., 2., 2., 2.], [3., 3., 3., 3.], [4., 4., 4., 4.]])
    output tensor([[1., 1., 1., 1.], [1.5, 1.5, 1.5, 1.5], [2., 2., 2., 2.], [2.5, 2.5, 2.5, 2.5]])
    (so z is block 1)
    """
    print('')
    print('find z')
    N = 4
    I = 3
    O = 4
    print('I', I, 'O', O)
    gru = nn.GRUCell(I, O)
    print('gru.weight_ih.size()', gru.weight_ih.size())
    print('gru.bias_hh.size()', gru.bias_hh.size())
    gru.weight_ih.data.fill_(0)
    gru.weight_hh.data.fill_(0)
    gru.bias_ih.data.fill_(0)
    gru.bias_hh.data.fill_(0)
    for i in range(3):
        gru.bias_hh.data.fill_(0)
        gru.bias_hh.data[i * 4:(i + 1) * 4].fill_(20)
        input = torch.zeros(N, I)
        state = torch.zeros(N, O)
        for i in range(N):
            state[i].fill_(i + 1)
        output = gru(input, state)
        print('output', output)


def run_findn():
    """
    find n
    I 3 O 4
    gru.weight_ih.size() torch.Size([12, 3])
    gru.bias_hh.size() torch.Size([12])
    i 0 output tensor([[ 0., 0., 0., 0.]])
    i 1 output tensor([[ 0., 0., 0., 0.]])
    i 2 output tensor([[ 1., 1., 1., 1.]])
    (so n is block 2)
    """
    print('')
    print('find n')
    N = 1
    I = 3
    O = 4
    print('I', I, 'O', O)
    gru = nn.GRUCell(I, O)
    print('gru.weight_ih.size()', gru.weight_ih.size())
    print('gru.bias_hh.size()', gru.bias_hh.size())
    gru.weight_ih.data.fill_(0)
    gru.weight_hh.data.fill_(0)
    gru.bias_ih.data.fill_(0)
    gru.bias_hh.data.fill_(0)
    gru.bias_hh.data[4:8].fill_(-20)
    # gru.bias_hh.data[0:4].fill_(20)
    for i in range(3):
        gru.bias_ih.data.fill_(0)
        gru.bias_ih.data[i * 4:(i + 1) * 4].fill_(20)
        input = torch.zeros(N, I)
        state = torch.zeros(N, O)
        output = gru(input, state)
        print('i', i, 'output', output)


def run_findr():
    """
    find r
    I 3 O 4
    gru.weight_ih.size() torch.Size([12, 3])
    gru.bias_hh.size() torch.Size([12])
    output tensor([[ 0., 0., 0., 0.]])
    output tensor([[ 1., 1., 1., 1.]])
    (so r is block 0)
    """
    print('')
    print('find r')
    N = 1
    I = 3
    O = 4
    print('I', I, 'O', O)
    gru = nn.GRUCell(I, O)
    print('gru.weight_ih.size()', gru.weight_ih.size())
    print('gru.bias_hh.size()', gru.bias_hh.size())
    gru.weight_ih.data.fill_(0)
    gru.weight_hh.data.fill_(0)
    gru.bias_ih.data.fill_(0)
    gru.bias_hh.data.fill_(0)
    gru.bias_hh.data[4:8].fill_(-20)
    gru.bias_hh.data[8:12].fill_(20)
    gru.bias_hh.data[0:4].fill_(-100)
    input = torch.zeros(N, I)
    state = torch.zeros(N, O)
    output = gru(input, state)
    print('output', output)
    gru.bias_hh.data[0:4].fill_(20)
    input = torch.zeros(N, I)
    state = torch.zeros(N, O)
    output = gru(input, state)
    print('output', output)


if __name__ == '__main__':
    # run_findz()
    # run_findn()
    run_findr()
st102031
hughperkins: "This bit? Sounds plausible. It would be good to get a link to the source code of the GRUCell implementation (I can get as far as the .py wrapper, but then it zooms off into C++-land, and it's not obvious how to find it in the source tree)."

Oh, it goes to C++ very late; before that you get a chance to see a pure Python implementation in torch.nn._functions.GRUCell. But yes, the torch.nn._functions module certainly is one of the more obscure corners of PyTorch (and it is bound to vanish, either during the great refactoring/C++ Torch work that's going on, or, for the RNN bits, when the great RNN overhaul comes). Best regards, Thomas
st102032
I'm aware PyTorch has Pyro for Bayesian inference, and I have a bit of experience with Bayesian regression using PyMC3. I've also heard of people using noise injection as a better regularizer than dropout (e.g. adding a small amount of Gaussian noise to each of the outputs of the layers of a neural network). So I tried doing this with a simple linear regression in PyTorch, trying to model it as if I were setting up a Bayesian linear model in PyMC3.

def model(x):
    beta = torch.abs(b_sd) * torch.randn(x.shape) + b_mu
    alpha = torch.abs(a_sd) * torch.randn(x.shape) + a_mu
    mu = beta * x + alpha
    sigma = torch.abs(sig) * torch.abs(torch.randn(x.shape)) + sig_mean
    y = torch.abs(sigma) * torch.randn(x.shape) + mu
    return y

b_mu = Variable(torch.Tensor([2.0]), requires_grad=True)
b_sd = Variable(torch.Tensor([1.0]), requires_grad=True)
a_mu = Variable(torch.Tensor([0.1]), requires_grad=True)
a_sd = Variable(torch.Tensor([1.0]), requires_grad=True)
sig = Variable(torch.Tensor([1.0]), requires_grad=True)
sig_mean = Variable(torch.Tensor([1.0]), requires_grad=True)

loss_fn = torch.nn.MSELoss(size_average=True)
lr = 0.0001
optimizer = torch.optim.SGD(params=[b_mu, b_sd, a_mu, a_sd, sig, sig_mean], lr=lr)

And then I train it like an ordinary linear regression using SGD and mini-batches on some synthetic noisy linear data. After training, the standard-deviation variables actually do reflect the amount of variance in the training data, and in particular the b_sd variable ends up very close to what PyMC3 gives me using "exact" Bayesian inference from MCMC. This noise-injection technique seems much simpler than most probabilistic programming languages/libraries and is just a few extra lines of code compared to a normal regression algorithm. My question is: what exactly am I doing with this method? This noise injection with SGD seems to be a way of doing variational inference on a model with conjugate priors (i.e. the prior and posteriors are all Gaussians), but I'm still learning the mechanics of variational inference, so I'm not sure. In any case, from a practical standpoint, even if it's not giving me exact variance/standard-deviation values, it does seem to quantify uncertainty, which is useful, and it is probably regularizing as well.
st102033
E.g. if I have embedding = nn.Embedding(V, E), is the shortest method something like:

scale = my_hand_written_formula_here
embedding.weight.data.uniform(-scale, scale)

(and ditto for bias too, e.g. in a Linear)?
st102034
I think you are missing the underscore to call uniform_ in place. Besides that, that would be one way. I would recommend avoiding .data and using something like this instead:

with torch.no_grad():
    embedding.weight.uniform_(-scale, scale)
st102035
OK. How do I get scale? Do I need to hunt down an appropriate formula? (I know a couple; just wondering if there is an easy way that avoids having to do this.)
st102036
If none of the implemented init functions provides what you need, you would have to implement it manually. However, there are already a lot of init functions, like xavier_uniform_, etc. Also, maybe it'll help to have a look at the code to get an idea of how the scale is calculated.
st102037
OK. The sqrt(3) / sqrt(num_inputs) * factor initialization is missing; e.g. see https://www.tensorflow.org/api_docs/python/tf/uniform_unit_scaling_initializer
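Rolling it by hand is short, though. A sketch of that initializer under the fan-in reading of num_inputs (the helper name is mine, not a PyTorch API):

import math
import torch
from torch import nn

def uniform_unit_scaling_(weight, factor=1.0):
    num_inputs = weight.size(1)  # fan-in for a 2D weight like Linear/Embedding
    scale = math.sqrt(3.0) / math.sqrt(num_inputs) * factor
    with torch.no_grad():
        weight.uniform_(-scale, scale)
    return weight

embedding = nn.Embedding(1000, 64)   # hypothetical V, E
uniform_unit_scaling_(embedding.weight)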
st102038
(But the list does otherwise seem quite comprehensive. And maybe I just overlooked this one somewhere, e.g. written as a different but equivalent formula.)
st102039
Hello. I need to pad a tensor in the forward of a module. On the doc website I can't find a function that can pad a tensor…
st102041
I’m assuming that the reason you’re padding a tensor and not a variable is that you don’t need its gradients. If so, couldn’t you turn the tensor into a Variable, then pad it, and then turn it back to a tensor with .data?
st102042
You could use F.pad from nn.functional:

a = torch.randn(1, 3, 6, 8)
p2d = (1, 1, 2, 2)  # pad last dim by (1, 1) and 2nd to last by (2, 2)
F.pad(a, p2d, 'constant', 0)
>> [torch.FloatTensor of size 1x3x10x10]
st102043
Which PyTorch version are you using? In the latest stable version (0.4.0), the operation returns a torch.FloatTensor:

a = torch.randn(1, 3, 6, 8)
p2d = (1, 1, 2, 2)  # pad last dim by (1, 1) and 2nd to last by (2, 2)
b = F.pad(a, p2d, 'constant', 0)
print(b.type())
st102044
Sorry to bother you! I edited the ResNet from torchvision (from torchvision.models import ResNet) and have 2 paths for the first 3 blocks, which in my code look like this:

self.img_feature = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
    self._make_layer(block, 64, layers[0]),
    self._make_layer(block, 128, layers[1], stride=2),
    self._make_layer(block, 256, layers[2], stride=2),
)
self.flow_feature = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
    self._make_layer(block, 64, layers[0]),
    self._make_layer(block, 128, layers[1], stride=2),
    self._make_layer(block, 256, layers[2], stride=2),
)

and when I forward this model:

def forward(self, img1, img2, flow):
    img1_fea = self.img_feature(img1)
    img2_fea = self.img_feature(img2)
    flow_fea = self.flow_feature(flow)

an error comes out like this:

RuntimeError: Given groups=1, weight[64, 1024, 1, 1], so expected input[2, 64, 56, 56] to have 1024 channels, but got 64 channels instead

But if I change the forward function like this:

def forward(self, img1, img2, flow):
    img1_fea = self.img_feature(img1)
    img2_fea = self.img_feature(img2)
    flow_fea = self.img_feature(flow)

the code runs well, but the result is not what I want. I'm confused. Thanks for your reply! Below is _make_layer:

def _make_layer(self, block, planes, blocks, stride=1):
    downsample = None
    if stride != 1 or self.inplanes != planes * block.expansion:
        downsample = nn.Sequential(
            nn.Conv2d(self.inplanes, planes * block.expansion,
                      kernel_size=1, stride=stride, bias=False),
            nn.BatchNorm2d(planes * block.expansion),
        )
    layers = []
    layers.append(block(self.inplanes, planes, stride, downsample))
    self.inplanes = planes * block.expansion
    for i in range(1, blocks):
        layers.append(block(self.inplanes, planes))
    return nn.Sequential(*layers)
st102046
Sorry, I just found the problem: in the __init__ function there is a self.inplanes, which _make_layer mutates, so building the second path continued from the updated value instead of starting fresh. It was a silly mistake, sorry.
st102047
Hi. The ACGAN paper (https://arxiv.org/pdf/1610.09585.pdf) suggests that for the generator we need to maximize (Lc - Ls), but every implementation I have come across uses max(Lc + Ls). Why is the minus sign replaced by a plus sign?
st102048
I wrote the following code:

k = topk_probs.shape[1]
positions2 = positions.clone().detach()
positions2.data.fill_(k-2)
positions2.data[target_na.data == 1] = k-1
positions2 = positions2.view(-1, 1)
target = topk_probs.gather(1, positions2)
hinge_loss = hinge_crit(target, topk_probs, 0.05)

positions2, of shape (batch_size, 1), holds the indices from which we want to gather values. I fill them with k-2 initially, but where target_na == 1 I set the index to k-1. Then I gather from topk_probs, which is of shape (batch_size, k). hinge_crit is a hinge loss I implemented myself. This gives me the error:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

I am wondering how I can avoid this error. I tried adding positions2 = positions2.view(-1, 1).detach(), but it still gives me the same error. I thought detach would make the variable detached from the previous computation graph so that there would be no gradient any more. Did I misunderstand it?
st102049
If there are two options here based on a boolean criterion, using torch.where to create positions2 might be a good way. Best regards, Thomas
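A sketch of what that could look like, reusing the tensors from the snippet above (positions, target_na and topk_probs as defined there):

import torch

k = topk_probs.shape[1]
low = torch.full_like(positions, k - 2)   # default index
high = torch.full_like(positions, k - 1)  # index where target_na == 1
positions2 = torch.where(target_na == 1, high, low).view(-1, 1)
target = topk_probs.gather(1, positions2)  # no in-place modification involved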
st102050
I want to train a model for more than 100 epochs and see how the sequence of batches affects the result. As far as I know, using SGD, if the model is initialized with a fixed seed and trained with a fixed batch sequence, training is deterministic. So with the batch sequence fixed, I can check how the initialization affects the variation in the model's performance. And as I want to train the model for 160 epochs, I would have to save 160 different fixed seeds for the batch order to compare the models' differences. I failed to find any PyTorch DataLoader functions or source code that implements this. Is that right?
st102051
Hi. The DataLoader should be deterministic if the torch and torch.cuda seeds are fixed in your script. Be careful as well that the computations of the net are not necessarily deterministic; in particular, a few cudnn operations are not. You will need to set torch.backends.cudnn.deterministic = True, but this will have a noticeable impact on speed.
st102052
Thanks, that was very helpful. I was confused because I had fixed the transforms, the initialization and the batches but still got non-deterministic results; it is solved now.
st102053
I think you need to set the seed for the random library as well. With the following lines you will get deterministic results:

torch.manual_seed(randomseed)
torch.cuda.manual_seed_all(randomseed)
random.seed(randomseed)
np.random.seed(randomseed)
torch.backends.cudnn.deterministic = True
st102054
Hello. I would like to add additional layers to an existing model, like AlexNet. My custom layer only changes the activation values and does not affect any dimension. My layer includes a 2D convolution, but with constant weights (I set requires_grad=False). The thing is that I'm having a hard time loading the pretrained values. I'm looking at model.state_dict() and I can see that my layer affects the state_dict, in contrast to pooling or ReLU, which don't appear in it. How do I force my layer to not change the original state_dict (like pooling or activation functions)? Thanks.
st102055
I don't understand the question. Of course your new layer changes the state_dict; it has to appear there. If the problem is that you are having trouble loading the pretrained model, try using the argument strict=False when loading it. This allows you to load weights without having the exact same set of keys. Just use unique names for your new layers so you don't get an error while loading.
st102056
Unfortunately, because AlexNet is implemented with nn.Sequential, any additional layer changes the layer names, so strict=False doesn't work. I can manually remove and rename keys in the state_dict; I wonder if there's a better way.
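If it helps, the manual route can stay fairly compact. A hedged sketch (the checkpoint path and the index shift are hypothetical; they depend on where the new layer was inserted in the Sequential):

import torch

state_dict = torch.load('alexnet_pretrained.pth')  # hypothetical checkpoint path
remapped = {}
for key, value in state_dict.items():
    prefix, idx, rest = key.split('.', 2)          # e.g. 'features', '3', 'weight'
    if prefix == 'features' and int(idx) >= 3:     # layers after the insertion point
        idx = str(int(idx) + 1)                    # shift by one to skip the new layer
    remapped['.'.join([prefix, idx, rest])] = value
model.load_state_dict(remapped, strict=False)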
st102057
Hello. I installed PyTorch on my Windows 8.1 machine, but I got a CUDA error saying my graphics card is too old. Looking around, people's advice was to build from source, but the only instructions I see are for Linux or OS X. I don't know how to do it. Can anyone help me?
st102058
I am not sure I can help you all the way, but did you check your graphics card (whether it has CUDA capability) and whether you have the most recent drivers for it? I think PyTorch requires NVIDIA CUDA 7.5 or above and NVIDIA cuDNN v6.x or above. Otherwise, maybe this might help you: the github.com/pytorch/pytorch issue "Add windows support please". (I should warn that I am not using Windows myself, so this is the extent of the help I might offer ^^')
st102059
Yes, I can use CUDA with TensorFlow, but I hate it and I wanted to use PyTorch instead. All the required dependencies have been installed, and in fact I can run PyTorch quite well. The problem is that my graphics card is not supported in the published binaries, and I need to build from source to get support for my old GPU, but the only instructions on how to build from source are for Linux and OS X, and they fail on Windows because of syntax.
st102060
My GPU has CUDA compute capability 5.0, and I wasn't able to get it to work correctly with the PyTorch 0.4.x binaries on Anaconda, but I did manage to compile it on Windows. Here's the raw footage of how I did it: How to Compile the Latest Pytorch from Source in Windows with CUDA Support. I used CUDA 9.0, MSVC 2017 14.11 (very important), and the scripts that peterjc123 created. Note that in the video I had to restart several times due to errors, and I purposely left them in there in case you also encounter them. There are no guarantees that these are the only errors you'll run into.
st102061
Hi, I have a single GPU which is not fully used by my model. For this reason, I'd like to parallelize multiple forward calls on the same GPU. I'd prefer, if possible, to avoid multiprocessing. As I understand it, CUDA streams should help me with this, but I've not been able to get performance improvements; it's as if the streams were executed sequentially anyway. Can I obtain parallelism on the same GPU by using CUDA streams without multiprocessing?
st102062
Hello. I have a simple MNIST example set up. A few days ago it was working perfectly. When I ran it today, the DataLoader got stuck every time. It won't load a single batch; it just runs forever. Here is a bit of the code.

Prepare and load the data:

train_samples = datasets.ImageFolder('data/train', transforms.ToTensor())
val_samples = datasets.ImageFolder('data/val', transforms.ToTensor())
train_set = DataLoader(train_samples, batch_size=170, shuffle=True, num_workers=4)
val_set = DataLoader(val_samples, batch_size=170, shuffle=False, num_workers=4)

Train loop:

def train(model, optimizer, criterion):
    model.train()  # training mode
    running_loss = 0
    running_corrects = 0
    for x, y in train_set:
        x = x.to(device)
        y = y.to(device)
        optimizer.zero_grad()                # make the gradients 0
        output = model(x)                    # forward pass
        _, preds = torch.max(output, 1)
        loss = criterion(output, y)          # calculate the loss value
        loss.backward()                      # compute the gradients
        optimizer.step()                     # update network parameters
        # statistics
        running_loss += loss.item() * x.size(0)
        running_corrects += torch.sum(preds == y).item()
    epoch_loss = running_loss / len(train_samples)      # mean epoch loss
    epoch_acc = running_corrects / len(train_samples)   # mean epoch accuracy
    return epoch_loss, epoch_acc

I have tried setting the workers to 0 and got the same result. The program gets stuck in `for x, y in train_set:` every time. Any ideas why this is the case? Thanks in advance.
st102063
You can try setting the batch_size to something really small. And then use img, y = next(iter(train_set)) to see if you are able to extract one batch. It might also be that your installation of PyTorch for some reason has been tinkered with. What version are you on? Have you tried reinstalling/upgrading?
st102064
Thanks for replying. Yesterday I ran the model on the CPU a couple of times and it worked fine. Then I ran it back on the GPU and it worked as expected again. It seems like the problem went away by itself, after failing over several attempts and even after restarting the machine. So… I don't know what caused it. Just FYI, I have a GTX 1080 and I'm running PyTorch 0.4. I can't seem to reproduce the problem.
st102065
I have a simple nn model. Under the normal procedure, if I pass the input tensor T1 to the model, the GPU memory usage is about 500 MB. Now I need to calculate an additional loss, so I pass another input tensor T2 to the model. Importantly, the size of T2 increases over time. I find that the GPU memory usage increases over time too, but at some point it drops and then increases again. So I'm curious about the GPU memory management mechanism. If I call torch.cuda.empty_cache() before each forward, will it affect the efficiency of my model?
st102066
I have an encoder LSTM whose last hidden state feeds the decoder LSTM. If I call backward on the loss of the decoder LSTM, will the gradients propagate all the way back into the encoder as well, so that when I call decoder_optimizer.step() and then encoder_optimizer.step() both sets of parameters are updated? I feel that, as the same hidden state is being used, it should automatically take care of backpropagation into the encoder as well.
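For reference, a minimal sketch of that setup (encoder, decoder, criterion and the optimizers are hypothetical): as long as the hidden state is handed over without detach(), a single backward fills gradients in both modules:

enc_out, hidden = encoder(src)       # hidden carries the graph into the decoder
dec_out, _ = decoder(tgt, hidden)    # do NOT detach() hidden here
loss = criterion(dec_out.view(-1, vocab_size), tgt_labels.view(-1))

encoder_opt.zero_grad()
decoder_opt.zero_grad()
loss.backward()                      # gradients flow through hidden into the encoder
encoder_opt.step()
decoder_opt.step()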
st102067
Hi, PyTorch community! I am working on the SVHN dataset and I would like to know if there is a simple way to get only the samples with label 1. For example, in numpy I would do something like this: dataset[dataset[:, 1] == 1]. Does one of you know how I can manage that in PyTorch? Thank you.
st102069
I found a solution:

import torch.utils.data

class FilteredDataset(torch.utils.data.dataset.Dataset):
    def __init__(self, dataset, wanted_labels):
        self.parent = dataset
        indices = []
        for index, (img, lab) in enumerate(dataset):
            if lab == wanted_labels:
                indices.append(index)
        self.indices = indices

    def __getitem__(self, index):
        return self.parent[self.indices[index]]

    def __len__(self):
        return len(self.indices)

one_label_dataset = FilteredDataset(dataset, 1)
print(len(one_label_dataset))
one_label_dataloader = torch.utils.data.DataLoader(dataset=one_label_dataset, batch_size=128, shuffle=True)
st102070
Consider three tensors: a = 5*torch.rand(5), b = torch.zeros(5, 5) and c = torch.zeros(5). I just realized that b[a.long(), a.long()] = c disconnects a from the computation graph due to the casting operation (a.long()). In my case, a has history and a.requires_grad=True. How can I avoid it getting disconnected during this operation?
st102071
I’m not sure what you mean. I think the issue is the actual casting, i.e., 3.426 -> 3, which is not differentiable.
st102072
Here is the problem broken down into a minimal working example:

s = 100
data = s * torch.ones(s)
data.requires_grad = True
target = torch.rand(s)
target.requires_grad = True
net = nn.Linear(s, 10)
optimizer = torch.optim.Adam(net.parameters(), 1e-5)
for i in range(s):
    optimizer.zero_grad()
    idx = net(data).clamp(0, s-1).long()
    loss = torch.index_select(target, 0, idx).mean()
    loss.backward()
    optimizer.step()
    print(loss)

The issue is that the linear layer doesn't receive gradients, and the reason is that idx.requires_grad=False due to the long conversion, which is required by index_select.
st102073
You are right, I misunderstood the question. The trouble is, I find no way to avoid that. You could use a differentiable step function to push the numbers very close to their long versions and apply the loss function with the lowest error possible.
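One hedged way to make that concrete (my own sketch, not from this thread): replace the hard index_select with a softmax-weighted lookup, so every candidate index contributes with a weight that depends smoothly on the network output:

import torch
import torch.nn.functional as F
from torch import nn

s = 100
data = s * torch.ones(s)
target = torch.rand(s)
net = nn.Linear(s, 10)

pos = net(data).clamp(0, s - 1)               # continuous "indices", shape (10,)
grid = torch.arange(s, dtype=torch.float)     # candidate positions 0..99
# temperature 10.0 sharpens the weights around the predicted position
weights = F.softmax(-(pos[:, None] - grid[None, :]).abs() * 10.0, dim=1)
loss = (weights * target[None, :]).sum(1).mean()  # soft, differentiable lookup
loss.backward()                               # the linear layer now gets gradients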
st102074
Is it allowed/possible to call cuBLAS functions (e.g. a dot product) inside a kernel? As a minimal example (based on the C++ extensions tutorial):

#include <THC/THCGeneral.h>
#include <THC/THCBlas.h>
#include <cuda.h>
#include <cuda_runtime.h>
#include <cublas_v2.h>
#include <vector>

namespace {
__global__ void my_op_forward_kernel(
    cublasHandle_t handle,
    const float* x,
    const float* y,
    float* output,
    size_t n,
    size_t d) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i + d < n) {
    float result;
    cublasSdot(handle, d, x, 1, y, 1, &result);
    output[i] = result;
  }
}
} // namespace

std::vector<at::Tensor> my_op_cuda_forward(THCState *state, at::Tensor x, at::Tensor y, int d) {
  cublasHandle_t handle = THCState_getCurrentBlasHandle(state);
  cublasSetStream(handle, THCState_getCurrentStream(state));
  auto n = x.size(0);
  auto output = at::zeros_like(x);

  const int threads = 1024;
  const dim3 blocks((n + threads - 1) / threads, 1);

  my_op_forward_kernel<<<blocks, threads, 0, THCState_getCurrentStream(state)>>>(
      handle, x.data<float>(), y.data<float>(), output.data<float>(), n, d);

  THCudaCheck(cudaGetLastError());
  return {output};
}

I can get this to compile, but I get an error when I try to call the function ("cuda runtime error (77): an illegal memory access was encountered at torch/csrc/cuda/Module.cpp"). Is there anything special I should be doing?
st102075
Hi, I have a question about how I can use the Dataset class on my own data. The problem is that I have my data in several HDF5 files (a medical imaging dataset). Each of these files has several examples (each possibly a different number). If I want to create my own Dataset, the docs say I should provide the __len__ and __getitem__ methods. However, I don't really have a general index that identifies a sample; I can have an index inside each HDF5 file, but I am not sure how I can use all the data without loading all the HDF5 files into memory. Any suggestion is appreciated. Thanks!
st102076
I think the simplest (but not elegant) way is to calculate the total number of your samples by hand and then map the index to the corresponding HDF5 file. For example, I would use:

class MergedDataset(data.Dataset):
    def __init__(self, hdf5_list):
        self.datasets = []
        self.total_count = 0
        for f in hdf5_list:
            h5_file = h5py.File(f, 'r')
            dataset = h5_file['YOUR DATASET NAME']
            self.datasets.append(dataset)
            self.total_count += len(dataset)

    def __getitem__(self, index):
        '''
        Suppose each hdf5 file has 10000 samples
        '''
        dataset_index = index // 10000      # which file the sample lives in
        in_dataset_index = index % 10000    # position inside that file
        return self.datasets[dataset_index][in_dataset_index]

    def __len__(self):
        return self.total_count

I am not sure whether opening a given HDF5 file loads all its data into memory or not. I hope this helps you.
st102077
Thank you very much @cyyyyc123, this seems like a good solution. I would also like to know if this loads all the data into memory, or just a reference, so that the data is only loaded when we index it. Does anybody know?
st102078
@cyyyyc123, I have implemented your idea, adding a little since I don't have the same number of samples in each HDF5 file. It looks like this:

class CT_dataset(data.Dataset):
    def __init__(self, path_patients):
        hdf5_list = [x for x in glob.glob(os.path.join(path_patients, '*.h5'))]  # only h5 files
        print 'h5 list ', hdf5_list
        self.datasets = []
        self.datasets_gt = []
        self.total_count = 0
        self.limits = []
        for f in hdf5_list:
            h5_file = h5py.File(f, 'r')
            dataset = h5_file['data']
            dataset_gt = h5_file['label']
            self.datasets.append(dataset)
            self.datasets_gt.append(dataset_gt)
            self.limits.append(self.total_count)
            self.total_count += len(dataset)
            # print 'len ', len(dataset)
            # print self.limits

    def __getitem__(self, index):
        dataset_index = -1
        # print 'index ', index
        for i in xrange(len(self.limits) - 1, -1, -1):
            # print 'i ', i
            if index >= self.limits[i]:
                dataset_index = i
                break
        # print 'dataset_index ', dataset_index
        assert dataset_index >= 0, 'negative chunk'
        in_dataset_index = index - self.limits[dataset_index]
        return self.datasets[dataset_index][in_dataset_index], self.datasets_gt[dataset_index][in_dataset_index]

    def __len__(self):
        return self.total_count

It works fine except when I use num_workers >= 2 in the DataLoader. Do you have any idea what could be happening?
st102079
Yes! I also hit a problem when using num_workers >= 2: if I re-index the mini-batch data fetched by a DataLoader with num_workers >= 2, the data will sometimes be in the wrong order. When I set num_workers = 1, everything works well again. I suppose it is because, when using multiple workers (multiple processes), PyTorch does not provide locks to synchronize the data. In most cases it will be fine, because we just use the data without modifying it, but if we want to modify the fetched data, I guess we should use only one process to avoid modifying asynchronously shared data. I hope this is helpful to you.
st102080
Hi Roger, I’m curious about the question you asked. Does this solution load all the data into memory?
st102081
I cannot find an example that uses .h5 files to train a model. Could you help, please?
st102082
The problem can be solved by replacing hdf5 with zarr. It works effectively with num_workers >= 2 .
st102083
Hi all, I am a researcher at LBL interested in implementing distributed model parallelism in PyTorch. This could in fact be useful for our research as well. Currently I am looking at the DistributedDataParallel classes to see how PyTorch decomposes data internally across machines. I wonder whether the PyTorch community would be interested in this, and whether there's already some work on this topic. Thank you, Saliya
st102084
I cannot speak for the community, but I would be interested in and probably make use of any model parallelism in PyTorch, especially as pertains to RNN variants.
st102085
That would be a handy tool. I'm sure you've seen, e.g., the threads "Model parallelism in pytorch for large(r than 1 GPU) models?" and "How to set model parallel".

Then, to do it in a distributed setting, I guess you would "supply" the model on node t with the output of node t-1 (where t=1 gets the "actual" inputs). I put supply in quotes because you would need to do something like torch.distributed.recv(tensor=<whatever>, src=node_idx-1). Then chunk up your forward to only run the parts of the model you've designated for node t. For the "return," do something like a torch.distributed.send(...). The idea in my head requires you to have parts of your forward cordoned off with if section_num % num_nodes == node_idx. I imagine this has already occurred to you; I illustrate it to make the point that I have no idea how you could do the necessary load balancing at that level automatically. Maybe you could define functions forward_<i> and have some superclass/wrapper inheriting from nn.Module stitch them together for forward and backward. However, I do not know how pointwise ops (as opposed to collective ones) work with autograd.

But that's only part of the problem. Usually when you need model parallelism, the actual limitation is the size of the model, not the forward pass. You need to do the same sort of thing with the __init__ function. Maybe the way the user chunks the __init__ could be used to infer the forward_<i>'s from the forward function. Even after all that, the problem in my head is saving the state dict when some of the weights are on some machines and others elsewhere. Sure, you can broadcast the weights if a save is requested, but wouldn't you need to have them all in one location to pickle them? At that point, you've lost your space savings… I'm interested in hearing what you have in mind. Maybe I'm overthinking this.
st102086
Thank you for the response, Dylan. Yes, I've seen the two questions. I've been quiet on this thread mainly because I've been trying to dig into the PyTorch code. If I understood correctly, what you mentioned at the beginning is about splitting up layers across multiple machines. What I was thinking was to split each layer across multiple machines. The reason being: if you split across layers, then the machine handling layer i would be idle until layer i-1 is completed, so there's not much gain from parallelizing the model. On the other hand, if we split a layer across multiple machines, then we can utilize parallelism better. Then, once all the computations of a layer are done, we can sync (allgather) the outputs before starting the next layer. I am new to PyTorch internals, so I would appreciate any help figuring out the code. I was looking at this post (The pytorch blog "A Tour of PyTorch Internals" is out-of-date. How to know more about the pytorch internals). Is there anything else you can recommend?
st102087
Simon Wang's response there is a very good quick, high-level description. Follow Peter Goldsborough's tutorial for a hands-on introduction to interfacing between ATen and Python. The way the PyTorch source interfaces isn't exactly the same, but the tutorial will get you acquainted with ATen. If you want to dig further, this blog post has a good tour of the internals of ATen. Do be aware that once you get down to the TH*/THC* level, you're up against daily changes (see goldsborough's first reply on that thread).
st102088
I have a dataset where each sample, after preprocessing, is a 365 x 3000 pandas/numpy array, and they are sparse. The label/target is binary for each sample. I have around 500K of these samples. Apparently I cannot load the full 500K x 365 x 3000 data into memory, so my plan is to save each numpy array to disk like an image and load them back in batches during training; that way I will be fine on the memory side. However, I think doing the I/O wastes time. Do you think there is a better solution? If I do have to save the files, would torch.sparse help? Thank you.
st102089
I'm not sure how to implement your data loading logic using sparse tensors, but for your first suggestion you could create a custom Dataset and load the samples lazily. Using a DataLoader, the batches can be preloaded while your GPU is busy with the training step. Have a look at the data loading tutorial to create your Dataset.
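A minimal lazy-loading sketch (the one-file-per-sample .npy layout and all names are assumptions):

import os
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class LazyArrayDataset(Dataset):
    def __init__(self, folder, labels):
        # labels: a 1-D tensor/array aligned with the sorted file order
        self.paths = sorted(os.path.join(folder, f) for f in os.listdir(folder))
        self.labels = labels

    def __getitem__(self, index):
        x = np.load(self.paths[index])  # loaded from disk only when requested
        return torch.from_numpy(x).float(), self.labels[index]

    def __len__(self):
        return len(self.paths)

# workers preload the next batches while the GPU trains:
# loader = DataLoader(LazyArrayDataset('samples/', labels), batch_size=8, num_workers=4)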
st102090
I want to resize my image tensor using the function torch.nn.functional.upsample(input, size=None, scale_factor=None, mode='nearest', align_corners=None). My statements are as follows:

image = image.view(1, 3, h, w)
resizedimg = F.upsample(image, size=(nw, nh), mode='bilinear')

When I print(resizedimg.shape), it shows that resizedimg is single-channel, but I input a 3-channel tensor; I do not understand why the output became single-channel. I also read in the documentation that size could be Tuple[int, int, int], but when I set size=(3, nw, nh), it complains that size can only have 2 dimensions. Could anybody help me?
st102091
The issue is strange, as image should have 3 channels if the previous line of code didn't throw an error. Here is a small code snippet working as you would expect:

x = torch.randn(1, 3, 24, 24)
output = F.upsample(x, size=(30, 30), mode='bilinear')
print(output.shape)
> torch.Size([1, 3, 30, 30])

Could you print the image shape right before passing it to F.upsample?
st102092
Hi, can PyTorch do an uneven split during multi-GPU training? Say GPU 0 is much faster than GPU 1; then it would be better to split the samples 7:3 rather than 1:1. Thanks.
st102093
As mentioned above, we used .cuda() in PyTorch 0.3, but we can also use .to(torch.device('cuda')) in 0.4. Are there any differences between them, like speed? Thanks.
st102094
Hi. No, they are the same. .to() is the new version and is more versatile, as it takes any device as input.
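A small illustration of the equivalence (model and x assumed defined):

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.cuda()                   # 0.3-style, CUDA only
model.to(device)               # 0.4-style, works for CPU and CUDA alike
x = x.to(device, torch.float)  # .to() can also change device and dtype in one call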
st102095
I am trying to read the faster r-cnn implementation by ruotianluo. Almost all the important Python files are in the 'lib' folder, which is not a package (there is no __init__.py file under it). I really don't understand why modules can be imported using code like 'from model.config import cfg' (model is a folder in lib, and config is a .py file under model). The detailed program structure can be found at https://github.com/ruotianluo/pytorch-faster-rcnn, and the import example can be seen in https://github.com/ruotianluo/pytorch-faster-rcnn/blob/master/lib/model/train_val.py
st102097
Let me see if I can help. Folders are folders; they exist because of your OS. As long as your Python folders are on your Python path (where Python looks for folders and files), Python can see the folder and make use of its contents. Modules end in .py. They can contain classes with an __init__, but don't have to; I have written modules with purely functional code, meaning no __init__ and no classes. They need to be imported to be used. Importing saves memory, as you don't need to load every module on your computer to run your code, only the ones you need. It also saves time, because you don't have to wait for your computer to load imported files off the hard drive repeatedly; they are read off the hard drive once and kept in memory. You can import modules, classes and even single functions. Then, after you import, you need to create an instance of the imported object to be able to use it, if it's a class; you don't need to create an instance if it's a function. So in your example, you are telling your program to look in the lib folder and then in the model module; you are giving directions on where to go. Once it is looking in model.config, you tell it to load the cfg object into memory because it will be needed later. It would be like me giving you directions to the store and then telling you to grab some bread while you are there, because we will need it later.
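As an aside, the usual trick for making a non-package folder like 'lib' importable is to prepend it to sys.path before the other imports run; a minimal sketch (the path is illustrative):

import os
import sys

lib_path = os.path.join(os.path.dirname(__file__), 'lib')
sys.path.insert(0, lib_path)  # after this, 'from model.config import cfg' resolves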
st102098
I get it. Just now I found that 'lib' is added to the Python path in '_init_paths.py'. Thank you.
st102099
I am a beginner with PyTorch and started testing some code. The environment is Windows 10 and Anaconda 3 with the required libraries installed, and I am working in Spyder. I am running into an AttributeError when testing the sample program "mnist_hogwild" from the "examples-master" folder of PyTorch. The error is as follows:

File "C:\USR\local\Anaconda3\lib\multiprocessing\spawn.py", line 172, in get_preparation_data
    main_mod_name = getattr(main_module.__spec__, "name", None)
AttributeError: module '__main__' has no attribute '__spec__'

The same error also occurs when I run the program from the command prompt. How can I resolve this issue? Thank you in advance.