st116368
|
Thanks! But I modified my question, since the previous example did not reproduce the exact problem I ran into.
|
st116369
|
Hmm, I modified self.save_for_backend to self.save_for_backward because there is no save_for_backend in Function.
class F(torch.autograd.Function):
    def forward(self, i):
        self.save_for_backward(i)  # <- fixed here
        return tr.FloatTensor([1])

    def backward(self, c):
        a = tr.zeros(10, 10)
        i = self.saved_tensors
        a[i] = tr.ones(3, 10)
        return None

f = F()
f(torch.autograd.Variable(tr.LongTensor([2, 3, 4])))
Then it returns
Variable containing:
1
[torch.FloatTensor of size 1]
|
st116370
|
I think the canonical way is to make the right hand side a tuple, i.e.
i, = self.saved_tensors
(note the comma on the left hand side).
Best regards
Thomas
|
st116371
|
I have updated my PyTorch to the master branch, but I am unable to understand why I am getting the following error: AttributeError: 'module' object has no attribute 'affine_grid'
I also did import torch.nn.functional as F, but am still facing the error.
Edit 1 :
I have also checked the functional.py file and did find the affine_grid function there, but I am still unable to resolve this error.
|
st116372
|
I am trying to implement a custom conv layer, so I need to process the input using im2col and col2im utilities to extract input patches according to the kernel size, stride and padding parameters. How do I access these utilities in PyTorch?
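For reference, recent PyTorch versions expose im2col/col2im as torch.nn.functional.unfold and fold; a minimal sketch (the sizes here are made up):
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)  # N, C, H, W
# im2col: extract 3x3 patches -> (N, C*3*3, L), where L is the number of patch locations
cols = F.unfold(x, kernel_size=3, stride=1, padding=1)
# col2im: fold the patches back into an image (overlapping values are summed)
y = F.fold(cols, output_size=(8, 8), kernel_size=3, stride=1, padding=1)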
|
st116373
|
I have trained a model, and now I am going to train another model, but I want to load some layers of the former one when I train the new model. Can anybody tell me how to do this?
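For reference, a minimal sketch of one common approach via state_dict (old_model.pth and new_model are placeholder names):
import torch

pretrained = torch.load('old_model.pth')   # state_dict of the former model
new_state = new_model.state_dict()
# keep only the entries whose names and shapes match the new model
filtered = {k: v for k, v in pretrained.items()
            if k in new_state and v.size() == new_state[k].size()}
new_state.update(filtered)
new_model.load_state_dict(new_state)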
|
st116374
|
I am training a deep network, and because the training data is very big, I can't feed the model with a bigger batch size. I wonder if I can feed a small batch each time and optimize the model only once after a specified number of backward passes?
If so, how can I average the gradient values before updating the parameters?
Thank you!!!
|
st116375
|
num_batches = 0
for sample, target in dataset:
    out = model(sample)
    loss = loss_fn(out, target)
    loss.backward()
    num_batches += 1
    if num_batches == 10:  # optimize every 10 mini-batches
        optimizer.step()
        model.zero_grad()  # or optimizer.zero_grad()
        num_batches = 0
|
st116376
|
Hi, smth
It seems the backward function only accumulates the gradients, but does not average them?
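Right, backward() only accumulates. A minimal sketch of averaging over the accumulated mini-batches is to scale the loss before calling backward (accum_steps is just an illustrative name):
accum_steps = 10
optimizer.zero_grad()
for i, (sample, target) in enumerate(dataset):
    out = model(sample)
    loss = loss_fn(out, target) / accum_steps  # scale so the summed gradients equal the average
    loss.backward()                            # gradients accumulate in .grad
    if (i + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()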
|
st116377
|
I am simply looking to get the word embeddings/model for some type of word embedding pretraining system like GloVe or Word2Vec. I am not quite sure how to go about this (novice in NLP), but I am guessing what the desired output would be is some lookup table where I feed in say “dog” and get the pretrained word embedding for it, but I am not certain as how I would go about setting up GloVe/Word2Vec to generate this… Also, is there literature detailing how relevant the specificity of the text the word embeddings are pretrained on effects the task if the corpuses being used for the actual task are significantly different?
|
st116378
|
The snli example uses GloVe (see the pytorch/examples repository on GitHub: a set of examples around PyTorch in Vision, Text, Reinforcement Learning, etc.).
Note that torchtext is a separate package (torchtext on PyPI, but probably better https://github.com/pytorch/text).
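For reference, a minimal sketch of looking up a pretrained vector with torchtext (assuming a torchtext version that ships the GloVe loader; it downloads the vectors on first use):
from torchtext.vocab import GloVe

glove = GloVe(name='6B', dim=100)        # 6B-token, 100-dimensional GloVe vectors
vec = glove.vectors[glove.stoi['dog']]   # pretrained embedding for "dog", a 100-dim FloatTensor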
Best regards
Thomas
|
st116379
|
I started using PyTorch recently, and I’m very impressed so far; thank you for making it! I installed PyTorch with the pip install, and am using Python 2.7.6
Original Post, Disregard
I noticed steadily increasing memory usage during training a CNN. Here is my training step:
def training_step(self, trainingdata, traininglabels):
    self.optimizer.zero_grad()  # zero the gradient buffers
    prepared_data = self.prepare_data(trainingdata, (len(trainingdata), self.channels_in, self.height, self.width))
    indices = np.nonzero(traininglabels)[1]  # turn one-hot labels into indices (how PyTorch does it)
    prepared_indicies = self.prepare_data(indices, np.shape(indices), dtype=np.int64)  # prepare to go into GPU (int64 gives a LongTensor, which is how PyTorch wants the labels)
    output = self(prepared_data).cuda()
    error = self.loss(output, prepared_indicies).cuda()
    error.backward()
    self.optimizer.step()  # does the update

def prepare_data(self, data, shape, dtype=np.float32):  # prepare train or val/test data for a pass through the network
    reshaped = data.reshape(shape).astype(dtype)  # correct shape, correct dtype
    reshaped = torch.from_numpy(reshaped)  # make torch tensor
    return Variable(reshaped.cuda())  # put that in a torch Variable
If I comment out
output = self(prepared_data).cuda()
error = self.loss(output, prepared_indicies).cuda()
error.backward()
self.optimizer.step() # Does the update
the leak does not occur. Therefore, the leak occurs in those lines.
if I add, as suggested by Tracking down a suspected memory leak
torch.backends.cudnn.enabled = False
performance is degraded by approximately 40%, but the leak is less severe (approximately 50MB/epoch instead of 200MB). If I then add
del output
del error
to the end of my training step, the memory leak seems to go away almost entirely (approx 3MB/epoch). However, if I add the del lines but not the enabled = False line, the leak is still present.
Is this a bug, or am I missing some sort of clean-up step in my code? Am I supposed to use the del lines? Is there some kind of work-around that would allow me to get the better performance (present without the enabled = False line) without the memory leak?
Never mind, disregard the above, I recorded some of my tests incorrectly,
torch.backends.cudnn.enabled = False
does completely solve the leak by itself. However, it is about 40% slower. Is there any way that I can get the speed without the memory leak? Thanks!
|
st116380
|
The bidirectional LSTM leak, which you linked, seemed to be happening to me for cudnn6 but not 5. My understanding was that it was not so much cudnn code that did it, but something in the build environment or so (not that the result is terribly different for us).
Unfortunately, something was up with my post in the NVidia forums back then (it never appeared there publicly even though I could see it when logged in) and have not revisited it.
I don’t know if similar things apply to CNNs.
Best regards
Thomas
|
st116381
|
Hi all, how do you make a spatial classifier in PyTorch Linear layers?
By spatial classifier I mean that it can take 2d inputs.
|
st116382
|
If the two dimensions are small enough (e.g. MNIST pictures), you can flatten your inputs and use a 1D linear layer via
input_1d = input_2d.view(-1)
|
st116383
|
we ended up replacing linear layers with 1x1 convolutions:
nn.Conv2d(4096, 4096, kernel_size=1)
|
st116384
|
Below is my network:
class Siamese(nn.Module):
    def __init__(self):
        super(Siamese, self).__init__()
        self.cnn1 = nn.Sequential(
            nn.Conv2d(1, 20, kernel_size=5),
            nn.MaxPool2d(2, stride=2),
            nn.Conv2d(20, 50, kernel_size=5),
            nn.MaxPool2d(2, stride=2))
        self.fc1 = nn.Sequential(
            nn.Linear(50 * 4 * 4, 500),
            nn.ReLU(inplace=True),
            nn.Linear(500, 10),
            nn.Linear(10, 2))
        # self.cnn2 = self.cnn1
        # self.fc2 = self.fc1
        # print self.cnn1 is self.cnn2

    def forward(self, input1, input2):
        output1 = self.cnn1(input1)
        output1 = output1.view(output1.size()[0], -1)
        output1 = self.fc1(output1)
        output2 = self.cnn1(input2)
        output2 = output2.view(output2.size()[0], -1)
        output2 = self.fc1(output2)
        # print output1 - output2
        output = torch.sqrt(torch.sum((output1 - output2) * (output1 - output2), 1))
        return output1, output2, output
But it crashed when training with the error:
*** Error in `python': free(): invalid next size (fast): 0x00000000042b8d50 ***
Aborted (core dumped)
I have no idea what happens. It seems my network definition is not correct.
|
st116385
|
Input size is (28L, 28L) since I am using MNIST dataset.
I have to fix the seed of random number generator to reproduce.
BTW, I use
criterion = nn.HingeEmbeddingLoss()
as the loss function.
|
st116386
|
Can you please upload your script in a GitHub gist, so I can try to take a look and reproduce the issue? Thanks!
|
st116387
|
Hi.
After upgrading to 0.1.12_1, I got new error.
Segmentation fault (core dumped)
The previous error disappeared.
Please check https://github.com/melody-rain/siamese-network 383 for the code.
|
st116388
|
Hi,
Can you please give an example that use Siamese network ? I googled pytorch Siamese but got no worked examples.
Thanks
|
st116389
|
@melody it is because of NaN values appearing in your inputs to MaxPooling. If your training is stable and you don't get NaNs, then it will work fine. Regardless, I have fixed this in master via: https://github.com/pytorch/pytorch/commit/a6876a4783ce3d1bb3c6ba69f54c31983097ed17
|
st116390
|
@li_bo you can find an example here: https://github.com/hadikazemi/Machine-Learning/tree/master/PyTorch 581
|
st116391
|
Did you fix the problem in the end? BTW, I tried to change your code to use CUDA, but I am not able to reproduce the invalid next size error or the segmentation fault. The code in the repo seems fine to me, apart from the unstable training, which makes sense. I guess @smth has fixed the stability issue. But the current whl on pytorch.org does not have this fix yet, does it, @smth?
|
st116392
|
I’ve met something similar. Could you take a look at this Possible data parallel memory leak for siamese network ?
|
st116393
|
You can find a short tutorial here:
https://github.com/hadikazemi/Machine-Learning/tree/master/PyTorch 764
|
st116394
|
Hi guys!
So I just found out the hard way that if i do something like
self.conv1 = nn.Conv2d(3, 6, 3)
then the weights of the kernels are recognized as a parameter (through net.parameters()) so that the optimizer does go through those weights, while doing the following
self.convNets.append(nn.Conv2d(3,6,3))
Or in other words, having a list of conv2d within your nn.module,
does not register the weights , so the optimizer does not change those weights at all.
My question is, what’s the best way to make sure that those weights do get optimized? Looking at the source code I can see the register_parameter() method within nn.module , but is that the best way?
I was hoping that,since I am using those weights within my forward() pass, the framework will optimize those weights automatically, tbh…
Thanks in advance,
Yoni.
|
st116395
|
Use nn.ModuleList() instead of the usual Python list. (http://pytorch.org/docs/master/nn.html?highlight=modulelist#torch.nn.ModuleList)
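A minimal sketch of the difference, just for illustration:
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # registered: the parameters of every Conv2d show up in net.parameters()
        self.convs = nn.ModuleList([nn.Conv2d(6, 6, 3) for _ in range(4)])
        # NOT registered: a plain Python list hides the sub-modules from the optimizer
        # self.convs = [nn.Conv2d(6, 6, 3) for _ in range(4)]

    def forward(self, x):  # x is expected to have 6 channels
        for conv in self.convs:
            x = conv(x)
        return x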
|
st116396
|
Nice,thanks!
Is there a tutorial where I can learn these things systematically?
I've gone through the basic 60-minute tutorial, the tutorial by Justin, data loading, etc., but didn't see that one.
|
st116397
|
I think that’s normal. I’ve been using PyTorch for over two months and only discovered nn.ModuleList() last week
I don’t think that there is a tutorial that covers everything, because different problems will demand the usage of different functionalities. Anyway, feel free to use this forum. One of the many things that I like about PyTorch is that the community and the developers are very helpful.
|
st116398
|
Hello, I am very new to PyTorch and want to clarify my understanding of PyTorch.
Do I correctly understand each component of PyTorch ??
Computational Graph
Computational graph is composed of Variable and Function.
Function is creator of Variable from other Variable
Thus, a Function has type: Variable* => Variable*,
and can be viewed as an edge of the computational graph, whereas the nodes are the Variables.
If we only use predefined Function of PyTorch, then we can compute gradient directly using autograd.
When to define new Function
2.1) When Function is too complex to keep it in computational graph for backward (Which might slow down performance)
2.2) When you are using not predefined Function (Cannot use autograd)
|
st116399
|
I found in the GRU implementation that the weight parameters are initialized with shape (3*hidden_size, hidden_size) and (3*hidden_size, input_size). I guess GRU internally slices the parameter into three matrices, corresponding to the reset gate, the update gate and the candidate activation h respectively. But the input matrix is usually (batch, embed_size), so it seems GRU performs a W*x^T computation rather than x*W + b?
W*x^T: (hidden_size, input_size) * (batch, embed_size)^T
x*W: (batch, embed_size) * (embed_size, hidden_size)
I need to implement a GRU cell that handles variable-length hidden states. One of my candidate solutions is to reuse the PyTorch GRU, set the weight matrices to a maximum shape, and then apply a {0,1} mask matrix to the output of the GRU. This is possible if GRU behaves like x*W + b internally.
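For what it's worth, the packed weights can be inspected and chunked into the three gates; a small sketch, assuming a recent PyTorch and the usual nn.GRU parameter names:
import torch
import torch.nn as nn

gru = nn.GRU(input_size=32, hidden_size=64)
W_ih = gru.weight_ih_l0               # shape (3*hidden_size, input_size)
W_r, W_z, W_n = W_ih.chunk(3, dim=0)  # reset, update and new-gate blocks
x = torch.randn(8, 32)                # a batch of inputs, (batch, input_size)
# storing W as (out, in) means the input transform is x*W^T, which gives the same result as (W*x^T)^T
pre_r = torch.mm(x, W_r.t())          # (batch, hidden_size)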
|
st116400
|
I think if you set batch_first=True when you create the GRU it should process as you want.
|
st116401
|
Hmm, I realize that it makes no difference whether it is x*W or W*x. Just zero-fill and slice.
Thanks.
|
st116402
|
Hi, I am new to deep learning and PyTorch. I have been trying to implement the CBOW model for creating word embeddings.
Please help me out, I am getting an incompatible-addition error. Thanks in advance.
Code:
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

class CBOW(nn.Module):
    def __init__(self, vocab_size, embedding_size, context_size):
        super(CBOW, self).__init__()
        self.fc1 = nn.Linear(vocab_size, embedding_size)
        self.fc2 = nn.Linear(embedding_size, vocab_size)

    def forward(self, x):
        y = []
        for i in xrange(0, 174, 29):
            y.append(self.fc1(x[:, i:i+29]))
        embedding = Variable(torch.zeros(1, 128))
        for i in xrange(len(y)):
            embedding = embedding + y[i]
        embedding = embedding / len(y)
        x = self.fc2(embedding)
        return [F.softmax(x), embedding]

def make_corpa(data):
    vocab = ""
    for i in data:
        vocab = vocab + " " + i
    vocab.strip(" ")
    corpa = {}
    all_words = list(set(vocab.split(" ")))
    for i in xrange(len(all_words)):
        corpa[all_words[i]] = i
    return [corpa, len(corpa), corpa.keys()]

def conv_vect(word, corpa):
    temp = torch.LongTensor(1, len(corpa)).zero_()
    temp[0][corpa[word]] = 1
    return temp

def train_word2vec(vocab_size, embedding_dim, number_of_epochs, data):
    model = CBOW(vocab_size, embedding_dim, 6)
    loss = nn.CrossEntropyLoss()
    context, word = make_training_data(data, 3)
    corpa = make_corpa(data)[0]
    optimizer = optim.SGD(model.parameters(), lr=0.01)
    for epoch in xrange(number_of_epochs):
        for i in xrange(len(context)):
            context_vec_tmp = [conv_vect(j, corpa) for j in context[i]]
            context_vec = Variable(torch.cat(tuple([context_vec_tmp[j] for j in xrange(len(context_vec_tmp))]), 1))
            word_vec = Variable(conv_vect(word[i], corpa))
            predict = model(context_vec)[0]
            predicted = torch.LongTensor(predict.size()[0], predict.size()[1]).zero_()
            for i in xrange(predict.size()[1]):
                predicted[0][i] = float(int(predict[0][i].data[0] / torch.max(predict.data[0])))
            predicted = Variable(predicted)
            model.zero_grad()
            l = loss(predicted, word_vec)
            l.backward()
            optimizer.step()
    return model

def make_training_data(data, context_size):
    context = []
    word = []
    for i in data:
        temp = i.split(" ")
        for j in xrange(context_size, len(temp) - context_size, 1):
            context.append([temp[j - context_size], temp[j - context_size + 1], temp[j - context_size + 2], temp[j + context_size - 2], temp[j + context_size - 1], temp[j + context_size]])
            word.append(temp[j])
    return context, word

train_word2vec(make_corpa(po)[1], 128, 10000, po)
# po is a list of sentences
error comes out to be this:
TypeError Traceback (most recent call last)
<ipython-input-12-c4d942812d63> in <module>()
----> 1 train_word2vec(make_corpa(po)[1],128,10000,po)
<ipython-input-10-181483152376> in train_word2vec(vocab_size, embedding_dim, number_of_epochs, data)
10 context_vec = Variable(torch.cat(tuple([context_vec_tmp[j] for j in xrange(len(context_vec_tmp))]),1))
11 word_vec = Variable(conv_vect(word[i],corpa))
---> 12 predict = model(context_vec)[0]
13 predicted = torch.LongTensor(predict.size()[0],predict.size()[1]).zero_()
14 for i in xrange(predict.size()[1]):
/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.pyc in __call__(self, *input, **kwargs)
204
205 def __call__(self, *input, **kwargs):
--> 206 result = self.forward(*input, **kwargs)
207 for hook in self._forward_hooks.values():
208 hook_result = hook(self, input, result)
<ipython-input-7-33d7cb28d3b4> in forward(self, x)
8 y = []
9 for i in xrange(0,174,29):
---> 10 y.append(self.fc1(x[:,i:i+29]))
11
12 embedding = Variable(torch.zeros(1,128))
/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.pyc in __call__(self, *input, **kwargs)
204
205 def __call__(self, *input, **kwargs):
--> 206 result = self.forward(*input, **kwargs)
207 for hook in self._forward_hooks.values():
208 hook_result = hook(self, input, result)
/usr/local/lib/python2.7/dist-packages/torch/nn/modules/linear.pyc in forward(self, input)
52 return self._backend.Linear()(input, self.weight)
53 else:
---> 54 return self._backend.Linear()(input, self.weight, self.bias)
55
56 def __repr__(self):
/usr/local/lib/python2.7/dist-packages/torch/nn/_functions/linear.pyc in forward(self, input, weight, bias)
8 self.save_for_backward(input, weight, bias)
9 output = input.new(input.size(0), weight.size(0))
---> 10 output.addmm_(0, 1, input, weight.t())
11 if bias is not None:
12 # cuBLAS doesn't support 0 strides in sger, so we can't use expand
TypeError: addmm_ received an invalid combination of arguments - got (int, int, torch.LongTensor, torch.FloatTensor), but expected one of:
* (torch.LongTensor mat1, torch.LongTensor mat2)
* (torch.SparseLongTensor mat1, torch.LongTensor mat2)
* (int beta, torch.LongTensor mat1, torch.LongTensor mat2)
* (int alpha, torch.LongTensor mat1, torch.LongTensor mat2)
* (int beta, torch.SparseLongTensor mat1, torch.LongTensor mat2)
* (int alpha, torch.SparseLongTensor mat1, torch.LongTensor mat2)
* (int beta, int alpha, torch.LongTensor mat1, torch.LongTensor mat2)
didn't match because some of the arguments have invalid types: (int, int, torch.LongTensor, torch.FloatTensor)
* (int beta, int alpha, torch.SparseLongTensor mat1, torch.LongTensor mat2)
didn't match because some of the arguments have invalid types: (int, int, torch.LongTensor, torch.FloatTensor)
|
st116403
|
Try:
predict = model(context_vec.float())[0]
The weights of your network are FloatTensor, and you are trying to multiply/sum them with your input which is a LongTensor.
|
st116404
|
Hey, thanks for looking into the code. Actually, I first went with FloatTensor, but in that case I got a different error like this:
TypeError: FloatClassNLLCriterion_updateOutput received an invalid combination of arguments - got (int, torch.FloatTensor, torch.FloatTensor, torch.FloatTensor, bool, NoneType, torch.FloatTensor), but expected (int state, torch.FloatTensor input, torch.LongTensor target, torch.FloatTensor output, bool sizeAverage, [torch.FloatTensor weights or None], torch.FloatTensor total_weight)
|
st116405
|
Yes, cross-entropy loss requires longs for the target. I assume your target is word_vec. If you just turn your input to float it should not modify the content of the target.
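A minimal sketch of the dtype combination CrossEntropyLoss expects (sizes are illustrative, and note that it wants raw scores, not softmax output):
import torch
import torch.nn as nn

loss_fn = nn.CrossEntropyLoss()
scores = torch.randn(1, 50)     # model output: FloatTensor of raw scores, shape (batch, vocab_size)
target = torch.LongTensor([7])  # target: LongTensor of class indices (not one-hot), shape (batch,)
l = loss_fn(scores, target)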
|
st116406
|
So the input to the model should be a FloatTensor, whereas the labels should be a LongTensor, am I correct?
|
st116407
|
Could someone break down for me what this Net is? Not sure how this constitutes a logistic regression network:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(3*65536, n_classes, bias=True)

    def forward(self, x):
        x = self.fc1(x.view(x.size()[0], 3*65536))
        return x
Thanks
|
st116408
|
Well, with a square loss, it’s a very brave linear regression:
If Y is your target, you want to minimize Loss( Net(X) - Y ) = ( Net(X) - Y )^2
with Net(X) = X.W + b
Then, if you add a nonlinearity to your model, more particularly a sigmoid:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(3*65536, n_classes, bias=True)

    def forward(self, x):
        x = self.fc1(x.view(x.size()[0], 3*65536))
        return F.sigmoid(x)
In that case you are much closer to a logistic regression, since you force your outputs towards 1 if the example passes and 0 otherwise. Then you minimize the mismatch between the categorical values 0 and 1 of the outputs and the targets. This is a typical logistic regression.
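A minimal training sketch for that logistic-regression setup, just to make the loss pairing explicit (shapes and names are illustrative):
import torch
import torch.nn as nn
import torch.optim as optim

n_classes = 10                             # illustrative
model = Net()                              # the sigmoid version above
criterion = nn.BCELoss()                   # binary cross-entropy on probabilities in [0, 1]
optimizer = optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(8, 3, 256, 256)            # 8 images; 3*256*256 = 3*65536 features once flattened
y = torch.empty(8, n_classes).random_(2)   # 0/1 targets

optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()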
|
st116409
|
Hello,
what’s wrong with my model
m=torch.nn.Softmax()
model.eval()
preds = model(image)
temps=preds.cpu()
prob=torch.max(m(temps)*100)
error with prob variable
assert input.dim() == 2, 'Softmax requires a 2D tensor as input'
AssertionError: Softmax requires a 2D tensor as input
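Older Softmax versions only accept 2D input (batch, classes); a minimal sketch of a fix, assuming preds is the 1D output for a single image:
m = torch.nn.Softmax()
temps = preds.cpu()
prob = torch.max(m(temps.view(1, -1)) * 100)  # reshape to (1, num_classes) before the softmax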
|
st116410
|
GitHub
pytorch/tutorials 29
PyTorch tutorials. Contribute to pytorch/tutorials development by creating an account on GitHub.
In the above tutorial, there is code as below.
Does this code load only the mini-batch data from disk after running dataiter.next()?
Or does this code load the entire dataset from disk into memory?
transform=transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
shuffle=False, num_workers=2)
dataiter = iter(trainloader)
images, labels = dataiter.next()
|
st116411
|
Once you create an iterator, it will start loading the batches and accumulating them into a queue, but it’s bounded, so it will never load the whole dataset. Once the queue fills up the workers will wait until you take out some data. Since you’re using num_workers=2 the data loading happens asynchronously with your script execution, and you have 2 background processes that fill up the queue.
|
st116412
|
Can you please explain what dataiter is and what information is in it?
Based on my understanding, iter(trainloader) iterates over the data, but how does it do that? Can we specify it?
And what does dataiter.next() do exactly?
|
st116413
|
Hello, now I am learning code from https://github.com/pytorch/examples/blob/master/time_sequence_prediction/train.py 2
I want to migrate the cpu version to GPU, to speed up total time.
Here is what I DO:
modify all Variable to cuda()
modify module: seq.float().cuda()
modify loss to : loss.data.cpu().numpy()
However, I hardly see any time improvement; it only went from 44s to 39s.
Can anyone find errors in my code or give me some advice?
Thanks…
Here is my code of GPU.
import torch
import torch.nn as nn
from torch.autograd import Variable
import torch.optim as optim
import numpy as np
import matplotlib.pyplot as plt
import time

class Seq(nn.Module):
    def __init__(self):
        super(Seq, self).__init__()
        self.lstm1 = nn.LSTMCell(1, 51)
        self.lstm2 = nn.LSTMCell(51, 1)

    def forward(self, input, future=0):
        outputs = []
        seq_num, seq_len = input.size()  # 100*1000
        ht1 = Variable(torch.zeros((seq_num, 51)).cuda().float())
        ct1 = Variable(torch.zeros((seq_num, 51)).cuda().float())
        ht2 = Variable(torch.zeros((seq_num, 1)).cuda().float())
        ct2 = Variable(torch.zeros((seq_num, 1)).cuda().float())
        for i, item in enumerate(input.chunk(seq_len, dim=1)):
            ht1, ct1 = self.lstm1(item, (ht1, ct1))
            ht2, ct2 = self.lstm2(ct1, (ht2, ct2))
            outputs += [ct2]
        for i in range(future):
            ht1, ct1 = self.lstm1(ct2, (ht1, ct1))
            ht2, ct2 = self.lstm2(ct1, (ht2, ct2))
            outputs += [ct2]
        outputs = torch.stack(outputs, dim=1).squeeze(dim=2)
        return outputs

if __name__ == '__main__':
    data = torch.load('sin.pt')
    input = Variable(torch.from_numpy(data[3:, :-1]).cuda())
    target = Variable(torch.from_numpy(data[3:, 1:]).cuda())
    test_input = Variable(torch.from_numpy(data[:3, :-1]).cuda())
    test_target = Variable(torch.from_numpy(data[:3, 1:]).cuda())
    seq = Seq()
    seq.float().cuda()
    criterion = nn.MSELoss()
    opt = optim.LBFGS(seq.parameters())
    for i in range(10):
        t0 = time.time()
        print('Step:', i)
        def closure():
            opt.zero_grad()
            out = seq(input)
            loss = criterion(out, target)
            print('loss:', loss.data.cpu().numpy()[0], end=' ')
            loss.backward()
            return loss
        opt.step(closure)
        print('\ntime:', time.time() - t0)
    future = 1000
    pred = seq(test_input, future=future)
    loss = criterion(pred[:, :-future], test_target)
    print('\ntest loss:', loss.data.cpu().numpy()[0])
    y = pred.data.cpu().numpy()
    plt.plot(range(1000), y[0][:1000])
    plt.plot(range(1000, 2000), y[0][999:])
    plt.show()
|
st116414
|
Hi, I’m a bit confused about an error i have.
[torch.FloatTensor of size 3x1x296]
Traceback (most recent call last):
File "pytorch.py", line 168, in <module>
train(epoch)
File "pytorch.py", line 139, in train
output = model(data)
File "/home/david/anaconda3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "pytorch.py", line 118, in forward
x = F.relu(self.fc1(x))
File "/home/david/anaconda3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/home/david/anaconda3/lib/python3.5/site-packages/torch/nn/modules/linear.py", line 54, in forward
return self._backend.Linear()(input, self.weight, self.bias)
File "/home/david/anaconda3/lib/python3.5/site-packages/torch/nn/_functions/linear.py", line 10, in forward
output.addmm_(0, 1, input, weight.t())
RuntimeError: matrices expected, got 3D, 2D tensors at /b/wheel/pytorch-src/torch/lib/TH/generic/THTensorMath.c:1232
My tensor is, as I would have hoped, 3x1x296, which is the expected size, with 3 being the batch size.
I think it's the batch of 3 which is causing the 3D/2D tensor confusion, but I'm not sure how to get around this. Is the input to the NN the whole batch or just one record?
full code below.
from __future__ import print_function
import argparse
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.autograd import Variable
import torch.utils.data as utils_data
import numpy
from random import randrange
# fix random seed for reproducibility
numpy.random.seed(7)
# Training settings
parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
parser.add_argument('--batch-size', type=int, default=64, metavar='N',
help='input batch size for training (default: 64)')
parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
help='input batch size for testing (default: 1000)')
parser.add_argument('--epochs', type=int, default=10, metavar='N',
help='number of epochs to train (default: 10)')
parser.add_argument('--lr', type=float, default=0.01, metavar='LR',
help='learning rate (default: 0.01)')
parser.add_argument('--momentum', type=float, default=0.5, metavar='M',
help='SGD momentum (default: 0.5)')
parser.add_argument('--no-cuda', action='store_true', default=False,
help='disables CUDA training')
parser.add_argument('--seed', type=int, default=1, metavar='S',
help='random seed (default: 1)')
parser.add_argument('--log-interval', type=int, default=10, metavar='N',
help='how many batches to wait before logging training status')
args = parser.parse_args()
args.cuda = not args.no_cuda and torch.cuda.is_available()
torch.manual_seed(args.seed)
if args.cuda:
torch.cuda.manual_seed(args.seed)
kwargs = {'num_workers': 1, 'pin_memory': True} if args.cuda else {}
class MyDataset():
def __init__(self):
dataset = numpy.genfromtxt("vectorFile.csv", delimiter=",")
self.features = dataset[:,0:148]
self.labels = dataset[:,149]
self.size = len(dataset)
def __getitem__(self, index):
# choose random index in features
firstPick = randrange(0,self.size)
firstPlayer = self.features[firstPick][0]#dealer or not ?
while True:
secondPick = randrange(0,self.size)
#print('secondPick')
if firstPlayer==self.features[secondPick][0]:
#print('match')
break
if self.labels[firstPick]>self.labels[secondPick]:
#print('adding to outputAnswer')
target = numpy.array([(1,0)])
else:
#print('adding to outputAnswer')
target = numpy.array([(1,0)])
target = torch.from_numpy(target).float()
data = numpy.array([(self.features[firstPick], self.features[secondPick])])
data = data.reshape((1, 296))
data = torch.from_numpy(data).float()
#print('data', data)
#print('target', target)
return data, target
def __len__(self):
return self.size
train_loader = torch.utils.data.DataLoader(MyDataset(), batch_size=3)
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
#self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
#self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
#self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(296, 50)
self.fc2 = nn.Linear(50, 2)
def forward(self, x):
print('x')
print(x)
#x = F.relu(F.max_pool2d(self.conv1(x), 2))
#x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
#x = x.view(-1, 320)
x = F.relu(self.fc1(x))
#x = F.dropout(x, training=self.training)
x = self.fc2(x)
return F.log_softmax(x)
model = Net()
if args.cuda:
model.cuda()
optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum)
def train(epoch):
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
print(batch_idx)
print(data)
print(target)
if args.cuda:
data, target = data.cuda(), target.cuda()
data, target = Variable(data), Variable(target)
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
if batch_idx % args.log_interval == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.data[0]))
def test():
model.eval()
test_loss = 0
correct = 0
for data, target in test_loader:
if args.cuda:
data, target = data.cuda(), target.cuda()
data, target = Variable(data, volatile=True), Variable(target)
output = model(data)
test_loss += F.nll_loss(output, target, size_average=False).data[0] # sum up batch loss
pred = output.data.max(1)[1] # get the index of the max log-probability
correct += pred.eq(target.data).cpu().sum()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(train_loader.dataset),
100. * correct / len(train_loader.dataset)))
for epoch in range(1, args.epochs + 1):
train(epoch)
#test()
|
st116415
|
It’s the ×1× in the middle, Linear takes batch x features. Try to drop the extra dimension, eg in
data = data.reshape((1, 296))
Best regards
Thomas
|
st116416
|
Thanks, that looks to have fixed that bit. I think where I have a lack of knowledge is that I'm confused about what needs to be passed around in the tensor at which points. I'm thinking of batch size, channels, rows and columns. The '1' here seemed OK to me as either channel or row, but in fact neither was needed!
|
st116417
|
Hello,
Does the BatchNorm layer support double backward in the master branch now? How about InstanceNorm? Where can I find the list of operations that already support double backward?
|
st116418
|
Agree, it would be very helpful to have a list of operations that either support or do not support double backward. It would be great to know especially for batch norm.
|
st116419
|
Hi!
I want to train many models in my GPU cards.
Each model has a big fixed embedding matrix, and each model is trained separately (I am not training one model across multiple GPUs).
In order to place more models on one card, I have to optimize the memory cost on my cards.
Thanks,
looking forward for solutions and suggestions.
|
st116420
|
lx865712528:
I want to train many models in my GPU cards.
Each model has a big fixed embedding matrix and each model is trained separatly (not train a model in multi-GPUs).
How about putting the embedding operation into a preprocessing step to avoid doing it on the fly?
|
st116421
|
To be honest, it's the last thing I want to do…
If there is no way to share tensors, I will preprocess the data.
|
st116422
|
I tried to verify some equations with Torch and PyTorch, and accidentally found that the gradients differ.
See the torch/nn issue "[Linear Layer] Possibly wrong gradient?" (opened Jul 28, 2017, closed Jul 29, 2017, by cdluminate):
I thought the formulations for the Linear layer should be:
column y, Matrix M, column x, column b
y = Wx + b
δ...
|
st116423
|
Because you compute and accumulate gradients twice in Lua
A backward call is updateGradInput(input, gradOutput) + accGradParameters(input,gradOutput,scale)
|
st116424
|
It's a simple question, but I need help. I have a tensor which contains some zero and some nonzero values. I want to replace all nonzero values with zero and all zero values with a specific value. How can I do that?
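For reference, a minimal sketch with a mask (value stands for the specific number you want):
import torch

t = torch.Tensor([0.0, 2.5, 0.0, 7.0])
value = 3.0
mask = (t == 0)                # 1 where the entry is zero, 0 elsewhere
result = mask.float() * value  # zeros become `value`, nonzeros become 0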
|
st116425
|
From the documentation, here is the input and output shape for an nn.Embedding layer: Input: LongTensor (N, W), N = mini-batch, W = number of indices to extract per mini-batch; Output: (N, W, embedding_dim).
And for an RNN:
input (seq_len, batch, input_size): tensor containing the features of the input sequence. The input can also be a packed variable length sequence. See torch.nn.utils.rnn.pack_padded_sequence() for details.
h_0 (num_layers * num_directions, batch, hidden_size): tensor containing the initial hidden state for each element in the batch.
If I'm understanding this correctly, then the output of an embedding layer needs to have its first 2 dimensions transposed in order to make N, the batch size, the second dimension; that is, (W, N, embedding_dim). Am I understanding this correctly?
|
st116426
|
@jsuit
Yes, you understand it correctly.
But you can use batch_first=True for the RNN, or provide your batched data as (N, W), N = mini-batch, W = number of indices to extract per mini-batch; it does not really matter for the embedding, since nn.Embedding just takes the required indices.
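A small sketch of both options (sizes are illustrative):
import torch
import torch.nn as nn

embed = nn.Embedding(1000, 50)
rnn = nn.RNN(input_size=50, hidden_size=64)                       # expects (seq_len, batch, input_size)
rnn_bf = nn.RNN(input_size=50, hidden_size=64, batch_first=True)  # expects (batch, seq_len, input_size)

tokens = torch.LongTensor(4, 7).random_(1000)  # (N, W) = (batch, indices per example)
emb = embed(tokens)                            # (N, W, embedding_dim)

out1, h1 = rnn(emb.transpose(0, 1))            # transpose to (W, N, embedding_dim)
out2, h2 = rnn_bf(emb)                         # or keep (N, W, ...) with batch_first=True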
|
st116427
|
I want to execute my own file to test the "struct"s in TH, like THStorage.
I copied the files from GitHub. I am confused about how to link the storage file in generic; I always get errors like "function not defined: THLongStorage_new". (I can get a single type to work, but I always get errors when trying to build the "templates" in C.)
My CMakeLists.txt is as follows:
cmake_minimum_required(VERSION 3.7)
project(MyProj)
set(CMAKE_C_STANDARD 99)

INCLUDE(CheckCSourceRuns)
CHECK_C_SOURCE_RUNS("
int main()
{
  int a;
  __sync_lock_test_and_set(&a, 1);
  __sync_fetch_and_add(&a, 1);
  if(!__sync_bool_compare_and_swap(&a, 2, 3))
    return -1;
  return 0;
}
" HAS_GCC_ATOMICS)

IF (HAS_MSC_ATOMICS)
  ADD_DEFINITIONS(-DUSE_MSC_ATOMICS=1)
  MESSAGE(STATUS "Atomics: using MSVC intrinsics")
ELSEIF (HAS_GCC_ATOMICS)
  ADD_DEFINITIONS(-DUSE_GCC_ATOMICS=1)
  MESSAGE(STATUS "Atomics: using GCC intrinsics")
ELSE ()
  SET(CMAKE_THREAD_PREFER_PTHREAD TRUE)
  FIND_PACKAGE(Threads)
  IF (THREADS_FOUND)
    ADD_DEFINITIONS(-DUSE_PTHREAD_ATOMICS=1)
    TARGET_LINK_LIBRARIES(TH ${CMAKE_THREAD_LIBS_INIT})
    MESSAGE(STATUS "Atomics: using pthread")
  ENDIF ()
ENDIF ()

set(hdr THGeneral.h THAtomic.h THGenerateAllTypes.h THGenerateByteType.h THGenerateCharType.h
    THGenerateDoubleType.h THGenerateFloatType.h THGenerateFloatTypes.h THGenerateHalfType.h
    THGenerateIntType.h THGenerateLongType.h THGenerateShortType.h THGenerateIntTypes.h THHalf.h
    THAllocator.h THStorage.h TH.h generic/THStorage.c)
set(src main.c THGeneral.c THAtomic.c THHalf.c THAllocator.c generic/THStorage.c)
set(SOURCE_FILES ${src} ${hdr})

add_subdirectory(generic)
add_executable(MyProj ${SOURCE_FILES})
target_link_libraries(MyProj m)
|
st116428
|
In every "type", the following two lines are used:
#line 1 TH_GENERIC_FILE
#include TH_GENERIC_FILE
But it seems it does not get included (there must be something I don't see).
Thanks in advance!
|
st116429
|
A similar question:
temp1.h
#ifndef NEW_TEMP
#define NEW_TEMP "temp1.h"
#else
#define A 1
int add(int a, int b);
#endif
temp1.c
#ifndef NEW_TEMP
#define NEW_TEMP "temp1.c"
#else
int add(int a, int b)
{
    return a+b;
}
#endif
temp2.h
#ifndef NEW_TEMP
#error "You must define TH_GENERIC_FILE before including THGenerateLongType.h"
#endif
#line 1 NEW_TEMP
#include NEW_TEMP
main.c
#include "stdio.h"
#include "temp1.h"
#include "temp2.h"
int main()
{
    printf("%d\n", A);
    int a = 10, b = 2;
    int c = add(a, b);
    printf("wait");
}
There is an error as well. How do I link temp1.h and temp1.c? (If we delete temp1.c and write the code in temp1.h, there is no error.)
|
st116430
|
Oh no, how silly of me! I found it in TH/THStorage.c: the .c file is #included there.
|
st116431
|
Suppose I have a python list containing a number of pytorch variables. I want to compute their mean; something like numpy.mean(python_list) when using numpy. How could I do that?
|
st116432
|
I think I was unclear. What I want is to make a new variable whose data is the average of the data present in a collection of variables.
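For reference, a minimal sketch assuming a recent PyTorch: stack the variables into one tensor and take the mean along the new dimension, which keeps the result in the autograd graph:
import torch

vars_list = [torch.randn(3, 4, requires_grad=True) for _ in range(5)]  # stand-ins for your variables
mean_var = torch.mean(torch.stack(vars_list), dim=0)                   # shape (3, 4)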
|
st116433
|
When I'm training my GAN on the MNIST dataset, I get the following result; it seems like the generator doesn't work at all.
Epoch [0/200], Step[10/600], d_loss: 0.0002, g_loss: 13.0330, D(x): 1.00, D(G(z)): 0.00
Epoch [0/200], Step[20/600], d_loss: 0.0045, g_loss: 8.2261, D(x): 1.00, D(G(z)): 0.00
Epoch [0/200], Step[30/600], d_loss: 0.0036, g_loss: 27.5934, D(x): 1.00, D(G(z)): 0.00
Epoch [0/200], Step[40/600], d_loss: 0.2239, g_loss: 24.8275, D(x): 0.93, D(G(z)): 0.00
Epoch [0/200], Step[50/600], d_loss: 0.0012, g_loss: 19.9418, D(x): 1.00, D(G(z)): 0.00
Epoch [0/200], Step[60/600], d_loss: 0.0073, g_loss: 9.9254, D(x): 1.00, D(G(z)): 0.01
Epoch [0/200], Step[70/600], d_loss: 0.0033, g_loss: 8.8135, D(x): 1.00, D(G(z)): 0.00
Epoch [0/200], Step[80/600], d_loss: 0.0021, g_loss: 9.3930, D(x): 1.00, D(G(z)): 0.00
Epoch [0/200], Step[90/600], d_loss: 1.7211, g_loss: 13.6998, D(x): 1.00, D(G(z)): 0.72
Epoch [0/200], Step[100/600], d_loss: 0.2832, g_loss: 23.5422, D(x): 0.93, D(G(z)): 0.00
Epoch [0/200], Step[110/600], d_loss: 0.0000, g_loss: 26.4731, D(x): 1.00, D(G(z)): 0.00
Epoch [0/200], Step[120/600], d_loss: 0.3778, g_loss: 27.6310, D(x): 0.90, D(G(z)): 0.00
Epoch [0/200], Step[130/600], d_loss: 0.0000, g_loss: 27.6310, D(x): 1.00, D(G(z)): 0.00
Epoch [0/200], Step[140/600], d_loss: 0.0000, g_loss: 27.6310, D(x): 1.00, D(G(z)): 0.00
Epoch [0/200], Step[150/600], d_loss: 0.0000, g_loss: 27.6310, D(x): 1.00, D(G(z)): 0.00
Epoch [0/200], Step[160/600], d_loss: 0.0000, g_loss: 27.6310, D(x): 1.00, D(G(z)): 0.00
My Discriminator and Generator are defined as:
class Discriminator(nn.Module):
    def __init__(self, in_channels, df_dim, im_size):
        super(Discriminator, self).__init__()
        self.conv1 = Conv2d_Relu(in_channels, df_dim, 3, 1)
        self.conv2 = Conv2d_BatchNorm(df_dim, df_dim * 2, 3, 1)
        self.conv3 = Conv2d_BatchNorm(df_dim * 2, df_dim * 4, 3, 1)
        self.conv4 = Conv2d_BatchNorm(df_dim * 4, df_dim * 8, 3, 1)
        self.fc1 = nn.Linear(im_size * im_size * df_dim * 8, 1)

    def forward(self, x):
        out = self.conv1(x)
        out = self.conv2(out)
        out = self.conv3(out)
        out = self.conv4(out)
        out = out.view(out.size(0), -1)
        out = self.fc1(out)
        return F.sigmoid(out)

class Generator(nn.Module):
    def __init__(self, in_channels, f_dim=256):
        super(Generator, self).__init__()
        self.f_dim = f_dim
        self.fc = nn.Linear(in_channels, f_dim * 3 * 3)
        self.bn1 = nn.BatchNorm2d(f_dim, momentum=0.01)
        self.deconv1 = nn.ConvTranspose2d(f_dim, f_dim / 2, 3, 2)
        self.bn2 = nn.BatchNorm2d(f_dim / 2, momentum=0.01)
        self.deconv2 = nn.ConvTranspose2d(f_dim / 2, f_dim / 4, 3, 2)
        self.bn3 = nn.BatchNorm2d(f_dim / 4, momentum=0.01)
        self.deconv3 = nn.ConvTranspose2d(f_dim / 4, 1, 2, 2, 1)

    def forward(self, x):
        out = F.relu(self.fc(x))
        out = out.view(out.size(0), self.f_dim, 3, 3)
        out = F.relu(self.bn1(out))
        out = F.relu(self.bn2(self.deconv1(out)))
        out = F.relu(self.bn3(self.deconv2(out)))
        out = self.deconv3(out)
        return F.tanh(out)
|
st116434
|
It looks like your model is running correctly, but the learning rate is too high.
|
st116435
|
@Soumith_Chintala @apaszke
Graphcore seems poised to overtake GPUs and just got $30M in funding from a bunch of notable deep learning people (such as Ghahramani, Hassabis, and Sutskever).
How to build a processor for machine intelligence
graphcore.ai
Graphcore: Accelerating machine learning for a world of intelligent machines 6
Graphcore has built a new type of processor for machine intelligence to accelerate machine learning and AI applications for a world of intelligent machines
|
st116436
|
Hi, from http://pytorch.org/tutorials/beginner/examples_autograd/two_layer_net_custom_function.html 72 I know I can define forward and backward functions like
def forward(self, input)
def backward(self, grad_output)
My question is can I change the interface of the two functions like
def forward(self, input, others) # need other inputs
def backward(self, others) # no need of grad_output since this is loss layer
Thanks !
|
st116437
|
Hi,
there are two parts to this: It is OK to have several input arguments to forward.
This corresponds to returning a tuple of several input gradients in backward, possibly None if you don’t want to backprop into some of the inputs. (They are optional at the end in old-style autograd, but they become required in new-style autograd (master / pytorch >= 0.2).
However, the output_grad should always be there! To autograd, loss functions are not terribly special and if you add two losses you would still want that to work. (In the end, scalar variables are special in that you can call .backward() without arguments, implicitly using 1 as a starting point.) This will be a scalar variable if your result is a (scalar) loss.
As mentioned above, there has been a change in autograd to make taking derivatives of derivatives work. While I understand that old style currently in the tutorial still works, the new way is to have static methods for forward and backward and storing things in a context (ctx instead of self as first argument). If doing this, the .backward (but not the .forward) will operate on Variables instead of Tensors (use self.saved_variables and it will take care of that) and you need to return None for all non-differentiable arguments. These new-style Functions are then applied with Class.apply(inputs).
If you want to look at small examples, I can offer implicit functions 48 and Cholesky decomposition 27, but they are not really commented in terms of how they use the autograd mechanics. Or you look at the pytorch source, many functions are defined in the files of https://github.com/pytorch/pytorch/tree/master/torch/autograd/_functions 53 .
Best regards
Thomas
|
st116438
|
Thanks a lot, Thomas!
1) So if I use the old way:
def forward(self, input, others):
    return loss
def backward(self, grad_output):
    return grad_input, None
the backward must receive a single parameter grad_output (and no others), even if I do not use it, right? And if I print out the grad_output value, it will be 1 if this is the final scalar loss layer, right?
2) When do we have to write the backward function ourselves? Is your implicit function an example of this? I am just confused about when the automatic backward mechanism can work for a given case. Here is a toy example:
class Model(nn.Module):
    def forward(self, input, x, y):  # input is of size N*D
        tmp = input[x, y]
        loss = tmp - 1
        return loss
The real example is that the loss will be calculated from different elements of input under different conditions. Each time the calculation will be different. So can I just leave it for the automatic backward to handle?
|
st116439
|
Hi,
for 1, yes, that would be the general idea.
Frazer:
And if I print out the grad_output value, it will be 1 if this is the final scalar loss layer, right ?
It is a good idea to multiply the input gradient you return by grad_output[0] to be consistent here, even if you envision that it usually is one. You might have a use case where you want to weight your loss against others, and then you'll be happy if it just works.
A final advice about implementing Functions that I forgot: There is a function autograd.gradcheck that will test your implementation (e.g. code cell 7 in the implicit function example) of the gradient against the numeric derivative. It’s always good to use that, but you might get false positives due to numerical errors (or extreme nonlinearity).
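A minimal gradcheck sketch (double precision helps keep the numeric comparison meaningful; torch.tanh stands in for the function under test, for a new-style Function you would pass MyFn.apply):
import torch
from torch.autograd import gradcheck

inp = torch.randn(4, 3, dtype=torch.double, requires_grad=True)
print(gradcheck(torch.tanh, (inp,), eps=1e-6, atol=1e-4))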
For 2:
Frazer:
When do we have to write the backward function ourselves ?
You only write the backward function if you can’t express what you want to compute with what’s already there. So in your example you did the right thing by just writing a nn.Module and have the backward be calculated step by step automatically.
I rarely find myself missing many things.
In the two examples, implicit functions use an iteration to find the value and that is not good to backpropagate through naïvely. For the Cholesky decomposition, the situation is somewhat similar, although it is mainstream enough to eventually be included in pytorch properly (I guess, other libs also have it) after solving the performance issue in the backward.
Best regards
Thomas
|
st116440
|
Is there a way to do sampling on GPU directly? like
x = torch.cuda.LongTensor(100,100).random_(100)
The other way seems slow:
x = torch.LongTensor(100,100).random_(100).cuda()
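For what it's worth, on recent PyTorch versions the same thing can also be written with a device argument:
x = torch.randint(0, 100, (100, 100), dtype=torch.long, device='cuda')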
|
st116441
|
I didn't find the same "subdivision" mechanism that darknet uses anywhere in PyTorch. Did I miss it?
Is there a subdivision mechanism or implementation in PyTorch?
|
st116442
|
When I was training the network with multiprocessing, I pressed Ctrl+C, and this caused zombie processes.
Can I fix this problem and release the GPU held by the zombie processes without rebooting?
|
st116443
|
I am trying to code faster-rcnn from scratch in pytorch. The diagram and code show my construction.
How can I pass the parameter list from both featureNet and rpnNet to my optimizer?
Is this correct?
params = list(feature_net.parameters()) + list(rpn_net.parameters())
optimizer = optim.SGD(params, lr=0.01, momentum=0.9, weight_decay=0.0005)
[diagram: rpn.png]
def run_train():
    channel = 3
    height, width = 512, 512  # largest size
    num_features = 16
    feature_net = featureNet((channel, height, width), num_features)
    rpn_net = RpnNet(num_features, num_bases)

    # learning hyper-parameters
    num_iter = 100
    params = list(feature_net.parameters()) + list(rpn_net.parameters())
    optimizer = optim.SGD(params, lr=0.01, momentum=0.9, weight_decay=0.0005)

    # training --------------------------------------------
    feature_net.train()
    rpn_net.train()
    for it in range(num_iter):
        x, annotation = get_train_data()

        # forward
        f = feature_net(x)
        rpn_s, rpn_db = rpn_net(f)

        # backward
        rpn_label, rpn_bbox_target = rpn_target_layer(annotation)  # generate ground truth
        rpn_loss_score = F.cross_entropy(rpn_s, rpn_label)
        rpn_loss_dbox = F.smooth_l1_loss(rpn_db, rpn_bbox_target)
        loss = rpn_loss_score + rpn_loss_dbox

        # update
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # print and show results
        # ...
|
st116444
|
Hengck:
params = list
my code seems to work.
also see: Giving multiple parameters in optimizer
|
st116445
|
In your case, the simplest would be to use nn.Sequential():
net = nn.Sequential(feature_net, rpn_net)
optimizer = optim.SGD(net.parameters(), lr=0.01, ...)
Then in your loop, for the forward pass, just do:
# forward
rpn_s,rpn_db = net(x)
|
st116446
|
In a later part of the development there will be multiple branches, so it will not be sequential.
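For a multi-branch setup, one option is to wrap the sub-networks in a parent nn.Module so that a single .parameters() call covers all of them; a sketch (RpnHead is a made-up name):
import torch.nn as nn
import torch.optim as optim

class RpnHead(nn.Module):
    def __init__(self, feature_net, rpn_net):
        super(RpnHead, self).__init__()
        self.feature_net = feature_net  # assigning modules as attributes registers them
        self.rpn_net = rpn_net

    def forward(self, x):
        f = self.feature_net(x)
        return self.rpn_net(f)

net = RpnHead(feature_net, rpn_net)
optimizer = optim.SGD(net.parameters(), lr=0.01, momentum=0.9, weight_decay=0.0005)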
|
st116447
|
I have a bag of words x and I want to look up embeddings for words in x. But I also want to weight the words differently by a weight vector w, which has the same size as x. Can you show me the best way to do this?
|
st116448
|
I personally have never used PyTorch for my word embeddings, so my answer might not be exactly what you are looking for. I use Gensim’s Word2Vec model for learning the embeddings for each word. You can train the Word2Vec model either using CBOW or Skip-gram. After you have trained the Word2Vec model, you can get the feature vectors from the model as numpy arrays. You can obviously convert the numpy arrays to pytorch tensors using torch.from_numpy(), and then feed that to train your model.
Look at this link for reference:
radimrehurek.com
gensim: topic modelling for humans 35
Efficient topic modelling in Python
|
st116449
|
Hi,
Disclaimer: I’m new to pytorch.
I’ve been working on building a parallel data loader section (such that only the batchSize images from the workers are loaded onto memory) that feeds into a siamese network.
This is what I have so far:
https://gist.github.com/arBest/811f7df58c50495873d5eebc2c348552 30
Question:
I wanna go from
dst = SiameseDataset(pos_pairs_csv_path)
to
dst = SiameseDataset(pos_pairs_csv_path, neg_pairs_csv_path)
where if my batchSize = N, then batchSize = N/2 comes from pos_pairs file and N/2 comes from neg_pairs file.
Thanks in advance for the help!
|
st116450
|
Wrote something that iterates through pos and neg datasets alternatively.
Thanks,
|
st116451
|
Say I have a tensor a = [[2,9,0],[4,1,3],[7,6,8]] and I want the largest 4 values and their indices, i.e. [(9, [0, 1]), (8, [2, 2]), (7, [2, 0]), (6, [2, 1])]. Would it be more efficient to do torch.topk(a.view([1, 9]), 4) and then search through the original tensor for those values to get the position of each value, or would it be more efficient to convert the tensor to a list of lists?
|
st116452
|
I’d probably go for your topk call on a flattened tensor and unreaveling the indices as described here:
Finding indices of a global maximum value in a variable
Hi,
one way to solve it is to flatten the tensor, take the maximum index along dimension 0 and then unravel the index, either using numpy or your own logic (probably you will come up with something less clumsy ):
rawmaxidx = mytensor.view(-1).min(0)[1]
idx = []
for adim in list(mytensor.size())[::-1]:
idx.append(rawmaxidx%adim)
rawmaxidx = rawmaxidx / adim
idx = torch.cat(idx)
(Note that pytorch / on LongTensor is similar to python2 / or python3 // for ints.
Best rega…
Best regards
Thomas
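Putting the two together, a sketch for the 4 largest values and their 2D indices (assuming a recent PyTorch for the integer // division):
import torch

a = torch.Tensor([[2, 9, 0], [4, 1, 3], [7, 6, 8]])
values, flat_idx = torch.topk(a.view(-1), 4)  # top-4 over the flattened tensor
rows = flat_idx // a.size(1)                  # unravel the flat indices
cols = flat_idx % a.size(1)
# values = [9, 8, 7, 6], (rows, cols) = ([0, 2, 2, 2], [1, 2, 0, 1])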
|
st116453
|
I've built a very simple RNN. I found an issue where the outputs are nonsense. While debugging I found that my RNN generates exactly the same output for each item in the mini-batch. Can anyone tell me if I'm doing something wrong in my code? I have looked over this code a few times and I can't find where the error is being introduced. Can anyone give me an idea of how to debug this?
class MyRNN(nn.Module):
    def __init__(self, embed_size, hidden_size, vocab_size, num_layers):
        super(MyRNN, self).__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, num_layers, batch_first=True)
        self.linear = nn.Linear(hidden_size, 4800)
        self.init_weights()

    def init_weights(self):
        """Initialize weights."""
        self.embed.weight.data.uniform_(-0.1, 0.1)
        self.linear.weight.data.uniform_(-0.1, 0.1)
        self.linear.bias.data.fill_(0)

    def forward(self, captions, lengths):
        embeddings = self.embed(captions)
        packed = pack_padded_sequence(embeddings, lengths, batch_first=True)
        rnn_features, (hidden, _) = self.lstm(packed)
        outputs = self.classifier(rnn_features[0])
        return outputs
|
st116454
|
I’d probably take a very sharp look at batch_first=True in conjunction with the [0] here:
outputs = self.classifier(rnn_features[0])
Best regards
Thomas
|
st116455
|
deepcode:
rnn_features, (hidden,_) = self.lstm(packed)
rnn_features are all output tensors of your sequence, so rnn_features shape is (batch_size, sequence_length, output_size). If you wanna get last output, you should use last_output = rnn_features[:, -1, :]
|
st116456
|
Hi there,
I've just started to use torch/PyTorch, so I'm just getting used to working with tensors. I understand that for generating a boolean we can use torch.geq(a,b), torch.leq(a,b) and so forth.
However, I want to generate a new tensor which looks at each element and, if the condition is satisfied, sets the entry to 1, and to -1 otherwise.
I originally used torch.geq(alpha, 0.5*torch.ones(alpha.size())), but I wasn't sure how to use this effectively without an if statement. So instead I just did the following on alpha:
alpha - tensor of shape 2 x 2
alpha = p_joint_new / p_joint_prev
a_values = torch.Tensor(alpha.size())
for i in range(alpha.size(0)):
    for j in range(alpha.size(1)):
        if alpha[i][j] > 0.5:
            a_values[i][j] = 2*1 - 1
        else:
            a_values[i][j] = -1
My way is slow, not elegant, and not error-proof. Is there a better way to do this in PyTorch? I will have to implement a number of conditions of this style.
Thank you for taking the time to reply and for the autograd library in pytorch!
|
st116457
|
You have to be a bit careful with the types (byte does not have negative numbers, so I added .float() below, .long() would also work if you wanted integers), but then the expression notation should work fine:
a = torch.rand(2,2)
print(2*(a>0.5).float()-1)
Best regards
Thomas
|
st116458
|
Hi All,
I am trying to train SSD from this public repo on the PASCAL VOC dataset, following the instructions in the repo.
When I try num_workers = 8 and batch_size = 128 it works fine, but when I increase num_workers to 12, it throws the following error.
I read online that running out of memory might be the issue, but I have plenty of RAM available. Is anyone else facing a similar issue? Can anyone help me solve it?
Loading base network...
Initializing weights...
Loading Dataset...
Training SSD on VOC0712
Traceback (most recent call last):
File "train.py", line 232, in <module>
train()
File "train.py", line 171, in train
images, targets = next(batch_iterator)
File "/home/XXXX/.virtualenvs/pytorch/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 206, in __next__
idx, batch = self.data_queue.get()
File "/usr/lib/python3.5/multiprocessing/queues.py", line 345, in get
return ForkingPickler.loads(res)
File "/home/XXXX/.virtualenvs/pytorch/lib/python3.5/site-packages/torch/multiprocessing/reductions.py", line 70, in rebuild_storage_fd
fd = df.detach()
File "/usr/lib/python3.5/multiprocessing/resource_sharer.py", line 58, in detach
return reduction.recv_handle(conn)
File "/usr/lib/python3.5/multiprocessing/reduction.py", line 181, in recv_handle
return recvfds(s, 1)[0]
File "/usr/lib/python3.5/multiprocessing/reduction.py", line 160, in recvfds
len(ancdata))
RuntimeError: received 0 items of ancdata
System configuration : 4 Nvidia 1080 Ti’s, 384 GB RAM, 40 CPU cores.
Environment: PyTorch 0.1.12_2, CUDA 8.0.61 with patch 2, cuDNN 6.0
I have seen this 45, but couldn’t find a solution there.
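One workaround that is often suggested for this error is to raise the open-file limit (ulimit -n) or to switch the tensor sharing strategy; a sketch:
import torch.multiprocessing
# the default 'file_descriptor' strategy can run out of file descriptors with many workers;
# 'file_system' avoids that, at the cost of leaving temp files behind if workers crash
torch.multiprocessing.set_sharing_strategy('file_system')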
TIA
|
st116459
|
I'm getting this error, which doesn't make any sense because normalize() is mentioned in the docs: http://pytorch.org/docs/master/nn.html#torch.nn.functional.normalize
|
st116460
|
Hi, I am new to Pytorch and machine learning as well.
I found this tutorial http://pytorch.org/tutorials/beginner/nlp/advanced_tutorial.html on the LSTM-CRF model very useful.
But I presume the code is not very straightforward to change to make use of GPU computing, given the default batch size of 1 in the example. I'm wondering if there is a good approach to make the forward (and Viterbi) algorithm deal with batched input.
Thanks !
|
st116461
|
Thanks for the notice. Yeah, I mean the code is readable and very useful to understand what’s going on.
I tried to change the forward algorithm like this:
def _forward_score(self, feats):
    def log_sum_exp_2(vecs):
        max_scores, max_ids = torch.max(vecs, 1)
        max_scores_exp = max_scores.expand(vecs.size(0), vecs.size(0))
        return max_scores + torch.log(torch.sum(torch.exp(vecs - max_scores_exp), 1))

    init_alphas = torch.Tensor(1, self.tag_size).fill_(-10000.)
    init_alphas[0][START_TAG] = 0.
    init_variables = Variable(init_alphas)

    def iter_forward(variables, feature_list):
        if feature_list is None:
            end_variables = variables + self.transitions[STOP_TAG].view(1, -1)
            return log_sum_exp(end_variables)
        head_feat = feature_list[0]
        try:
            tail_feats = feature_list[1:]
        except ValueError:
            tail_feats = None
        head_feat_exp = head_feat.view(self.tag_size, 1).expand(self.tag_size, self.tag_size)
        variables_exp = variables.expand(self.tag_size, self.tag_size)
        next_tag_variables_exp = variables_exp + self.transitions + head_feat_exp
        new_forward_variables = log_sum_exp(next_tag_variables_exp).view(1, self.tag_size)
        return iter_forward(new_forward_variables, tail_feats)

    return iter_forward(variables=init_variables, feature_list=feats)
But I still think this is not a good idea for batch training. Any good suggestions?
|
st116462
|
I think one way to do it is to compute the forward variables at each time step once for multiple tokens in a batch. Suppose with batch size 1 we have a sequence of length 3: w_11, w_12, w_13. For a batch size of 2 we then have
w_11, w_12, w_13
w_21, w_22, w_23
The above code assumes a batch size of 1 and already puts the computations of one time step into one iteration. I think we can add one dimension to that, but we would still need to iterate over the time steps.
|
st116463
|
Hi,
I’ve been recently trying to implement UNet using PyTorch. You can find the implementation here 25
I'm encountering such a weird problem here!
The network apparently converges (the loss value decreases), but when I run inference on some query images (from the training set), the output is very poor, most often a blank image. I'm using Sigmoid/BCELoss() and the Adam optimiser. The data and other parts have been checked and are as expected.
On the other hand, I have a successful implementation of UNet using Theano, and I tried my best to keep the training procedure of the two networks exactly the same (e.g. fixed training set, fixed hyper-parameters and so on), but I get very poor performance with PyTorch!
Others have also shared the same issue with their implementations, like this one.
I do not see any other reason for the poor performance except bugs in the PyTorch backends!!
Do you guys have any comment on the implementation?
Thanks
Saeed
|
st116464
|
I wrote some relatively simple semantic segmentation code that may be of use to you: https://github.com/Kaixhin/FCN-semantic-segmentation 24
However, I too haven’t managed to get great results - I do get OK results though, and never blank outputs. Do you have a link to the Theano implementation that matches your PyTorch one? The closer they are, the easier it will be to pick up any discrepancies.
|
st116465
|
Thanks for your reply.
Yes, here is the code for saliency detection, which is in essence the same as binary segmentation problems.
GitHub
imatge-upc/salgan 1
SalGAN: Visual Saliency Prediction with Generative Adversarial Networks - imatge-upc/salgan
Thanks
|
st116466
|
I had a quick look for you, and noticed that in their network 5 they use Upscale2DLayer, which seems to be a fixed upsampling operation, like nearest neighbour upsampling. Your network 5 uses nn.ConvTranspose2d, which is a learned upsampling operation. Some people have reported better success with this. Also, I do not see batch norm in their network, but you have it in your network.
Also, in your description, you wrote that you use Adam, but in your code I see SGD + momentum instead.
|
st116467
|
Thanks for your reply.
Actually, I’ve modified their work to fit my domain, and so the pytorch work is comparable to this one:
GitHub
saeedizadi/GAN_SKIN 10
GAN network for Skin Lesion Segmentation. Contribute to saeedizadi/GAN_SKIN development by creating an account on GitHub.
You can see that here I've switched the network to UNet, which uses ConvTranspose. I've also added BatchNorm to the UNet model.
In addition, using either SGD or Adam does not change the problem.
|