st97668
|
@Swift2046 yeah don’t worry I do the same quite often…
I’d assume since the features are coordinates of pixels, they should be normalised with respect to max coordinates?
Not sure if it makes sense using tensor normalisation later on.
I’ll give it a try until the end of the week and report back.
Currently VGG11 is giving me decent results on the small subset (65% Top-1) when using valence scores.
|
st97669
|
Absolutely – you could either divide them all by the image width and height you’re using, or normalise each face to fill the frame: subtract the minimum x value (and likewise the minimum y), work out the resulting maximum of each axis, then take the larger of the two and divide both x and y by that (to avoid stretching the coordinates).
That’s what I’d do, thinking about it.
And yeah, since you’d be using 1s and 0s, there’d be no further need for normalisation … I’d be interested to try it myself.
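For reference, a minimal sketch of the second, aspect-preserving option described above (the function name and NumPy usage are just illustrative, not code from either of us):
import numpy as np

def normalise_landmarks(points):
    """Map an (N, 2) array of (x, y) pixel coordinates into roughly [0, 1],
    dividing both axes by the larger range so the face isn't stretched."""
    points = np.asarray(points, dtype=np.float32)
    mins = points.min(axis=0)                 # (x_min, y_min)
    ranges = points.max(axis=0) - mins        # (x_range, y_range)
    scale = max(float(ranges.max()), 1e-8)    # larger of the two ranges, avoid /0
    return (points - mins) / scale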
|
st97670
|
Excuse the messiness – not tidied up or optimised yet, but just to see how it’d work.
This gets 55.9% accuracy relatively quickly on the Kaggle Facial Expression dataset (7 possible expressions), after converting the images to landmark feature vectors first. However, it fails to convert about 29% of the original dataset – presumably because the landmark detector wasn’t trained on small, grayscale images – so you’d have to mark it down on the challenge. The best model on Kaggle got about 70%, with the runners-up around 50%, but you might be able to get this higher with more training.
import torchvision.datasets as dsets
import torchvision.transforms as transforms
import csv
import os
from PIL import Image, ImageDraw
import face_recognition
import torch
from torch import nn
import numpy as np
from torch.utils.data import Dataset, DataLoader
import math
def str2act(s):
    # string comparison uses '==' ('is' only checks object identity and is unreliable here)
    if s == 'none':
        return None
    elif s == 'hardtanh':
        return nn.Hardtanh()
    elif s == 'sigmoid':
        return nn.Sigmoid()
    elif s == 'relu6':
        return nn.ReLU6()
    elif s == 'tanh':
        return nn.Tanh()
    elif s == 'tanhshrink':
        return nn.Tanhshrink()
    elif s == 'hardshrink':
        return nn.Hardshrink()
    elif s == 'leakyrelu':
        return nn.LeakyReLU()
    elif s == 'softshrink':
        return nn.Softshrink()
    elif s == 'softsign':
        return nn.Softsign()
    elif s == 'relu':
        return nn.ReLU()
    elif s == 'prelu':
        return nn.PReLU()
    elif s == 'softplus':
        return nn.Softplus()
    elif s == 'elu':
        return nn.ELU()
    elif s == 'selu':
        return nn.SELU()
    else:
        raise ValueError("[!] Invalid activation function.")
class MLP(nn.Module):
    def __init__(self, num_layers, in_dim, hidden_dim, out_dim, activation='relu'):
        super().__init__()
        self.num_layers = num_layers
        self.in_dim = in_dim
        self.hidden_dim = hidden_dim
        self.out_dim = out_dim
        self.activation = str2act(activation)
        nonlin = True
        if self.activation is None:
            nonlin = False
        layers = []
        for i in range(num_layers - 1):
            layers.extend(
                self._layer(
                    hidden_dim if i > 0 else in_dim,
                    hidden_dim,
                    nonlin,
                )
            )
        layers.extend(self._layer(hidden_dim, out_dim, False))
        self.model = nn.Sequential(*layers)

    def _layer(self, in_dim, out_dim, activation=True):
        if activation:
            return [
                nn.Linear(in_dim, out_dim),
                self.activation,
            ]
        else:
            return [
                nn.Linear(in_dim, out_dim),
            ]

    def forward(self, x):
        out = self.model(x.float())
        return out
def _load_data(path='fer2013.csv', expect_labels=True):
    assert path.endswith('.csv')
    # If a previous call to this method has already converted
    # the data to numpy format, load the numpy directly
    X_path = path[:-4] + '.X.npy'
    Y_path = path[:-4] + '.Y.npy'
    if os.path.exists(X_path):
        X = np.load(X_path)
        if expect_labels:
            y = np.load(Y_path)
        else:
            y = None
        return X, y
    csv_file = open(path, 'r')
    reader = csv.reader(csv_file)
    # Discard header
    row = next(reader)
    y_list = []
    X_list = []
    counter = 0
    skip_counter = 0
    for i, row in enumerate(reader):
        counter += 1
        y_str, X_row_str = (row[0], row[1])
        y = int(y_str)
        X_row_strs = X_row_str.split(' ')
        X_row = [float(x) for x in X_row_strs]
        X_row = np.reshape(X_row, (48, 48))
        image = np.zeros((48, 48, 3), dtype=np.uint8)
        image[:, :, 0] = X_row
        image[:, :, 1] = X_row
        image[:, :, 2] = X_row
        print(i)
        face_landmarks_list = face_recognition.face_landmarks(image)
        X_row = image
        landmarks_array = []
        for face_landmarks in face_landmarks_list:
            for facial_feature in face_landmarks.keys():
                for item in face_landmarks[facial_feature]:
                    landmarks_array.append(np.round(item[0] / 48, 5))
                    landmarks_array.append(np.round(item[1] / 48, 5))
        if face_landmarks_list:
            X_list.append(landmarks_array)
            y_list.append(y)
        else:
            skip_counter += 1
    X = np.asarray(X_list)
    y = np.asarray(y_list)
    np.save(X_path, X)
    np.save(Y_path, y)
    print(skip_counter, 'missed, out of', counter, ' - total:', (counter - skip_counter) / counter, '%')
    return X, y
class PrepareData(Dataset):
    def __init__(self, x, y):
        self.x = torch.from_numpy(x) if not torch.is_tensor(x) else x
        self.y = torch.from_numpy(y) if not torch.is_tensor(y) else y

    def __len__(self):
        return len(self.x)

    def __getitem__(self, idx):
        return self.x[idx], self.y[idx]
X, y = _load_data()
NUM_TRAINING_IMAGES = int(len(X) * 0.9)
print(NUM_TRAINING_IMAGES)
trainer_loader = PrepareData(x=X[:NUM_TRAINING_IMAGES], y=y[:NUM_TRAINING_IMAGES])
test_loader = PrepareData(x=X[NUM_TRAINING_IMAGES:], y=y[NUM_TRAINING_IMAGES:])
torch.manual_seed(1)
# Hyper Parameters
EPOCH = 1000
BATCH_SIZE = 16
LR = 0.0001
# Data Loader for easy mini-batch return in training
train_loader = DataLoader(dataset=trainer_loader, batch_size=BATCH_SIZE, shuffle=True)
test_loader = DataLoader(test_loader, batch_size=1, shuffle=True)
model = MLP(num_layers=5, in_dim=144, hidden_dim=256, out_dim=7)
print(model)
optimizer = torch.optim.Adam(model.parameters(), lr=LR)
# optimizer = torch.optim.SGD(model.parameters(), lr=LR, momentum=0.7)
loss_func = nn.CrossEntropyLoss()
average_loss = 2
correct_ratio = 0
for epoch in range(EPOCH):
    for step, (b_x, b_y) in enumerate(train_loader):
        # b_x = b_x.view(-1, 28, 28)   # reshape x to (batch, time_step, input_size)
        output = model(b_x)
        loss = loss_func(output, b_y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if step % 10 == 0:
            average_loss = average_loss * 0.95 + loss.item() * 0.05
            print('Epoch:', epoch, '- step:', step, '- loss:', np.round(loss.item(), 5),
                  '- average loss:', np.round(average_loss, 5), '- correct:', correct_ratio, '%')
    if epoch % 5 == 0:
        correct = 0
        incorrect = 0
        for data, target in test_loader:
            predicted_answer = np.argmax(model.forward(data).detach().numpy())
            right_answer = target[0]
            if int(right_answer) == int(predicted_answer):
                correct += 1
            else:
                incorrect += 1
        print('Test Correct: {}/{}'.format(correct, correct + incorrect))
        correct_ratio = np.round(100 * (correct / (correct + incorrect)), 1)
|
st97671
|
Hi Alex, I went through your code in the model.py file. Here’s what I think can be changed to make it train better.
Typically, when doing transfer learning, we take a network trained on a dataset D, remove the final classification layer used for that dataset D, and then add our own final layer for classification. The objective is to reuse the feature encodings learnt on dataset D. Hence, a better way of utilizing the pre-trained network is shown below for VGG11.
""" Custom VGG11 """
class VGG11(nn.Module):
def __init__(self):
super(VGG11, self).__init__()
self.net = models.vgg11(pretrained=True)
# Replacing final classification layer with custom classification layer
self.net.classifier[-1] = nn.Linear(4096, 2)
self.net.cuda().half()
def forward(self, x):
f = self.net(x)
return f
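If the idea is to reuse the pretrained encodings as-is, a common companion step (not part of the post above, just a hedged sketch) is to freeze the convolutional features and optimise only the replaced head:
# sketch only: freeze the pretrained feature extractor, train just the new final layer
model = VGG11()
for p in model.net.features.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(model.net.classifier[-1].parameters(), lr=1e-4)
Whether freezing helps depends on how close the target faces are to ImageNet images; unfreezing everything later for fine-tuning is also common.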
|
st97672
|
Hi @Mazhar_Shaikh, thanks for the source and the advice.
I’m a bit confused though: what is the purpose of removing the last classification layer and replacing it with another?
I know that in the approach I take, I’m adding an extra layer on top of it, which probably delays learning by a small amount. Is that what you have changed?
@Swift2046 Wow, I am speechless, thanks for the script! I’ll try and test it on Friday.
I’m guessing from a brief look that you’re using 48 features and wrote a custom MLP to test classification?
I’ll let you know if/how it works on the AffectNet dataset.
|
st97673
|
Hi Alex, this may be a good place to read up on Transfer Learning.
The ImageNet-pretrained network’s final outputs are the class log-probabilities of the 1000 classes present in ImageNet. Hence, it is possible that that particular final layer is in a deep local minimum and may be hard to escape from. Effectively, the classification you were performing looks like
P(face expression | face image) ≈ P(imagenet classes | face image) * P(face expression | imagenet classes)
|
st97674
|
This would be a problem only with a pretrained=True network, correct?
AFAIK what you’re suggesting minimises network complexity so I’ll try it with my next iteration.
Many thanks!
|
st97675
|
It was a good challenge! I was intrigued how well an MLP would handle coordinates on a task like this … Better than I thought … With a higher learning rate, it gets to 50% in one epoch.
It’s actually using 72 features (X & Y coordinates, meaning 144 inputs) … I was wondering if you could strip the chin, and just focus on eyes, nose and mouth … The images are 48 x 48 grayscale, so I’m dividing by 48 to normalise (because I was too lazy to write something that would crop them too – but that might be a quick way to improve accuracy).
Quite a bit of the _load_data routine is putting grayscale image data into an RGB format the face_recognition library will accept … So that would need modifying for colour (which I think would perform a lot better) … I’ll try and get hold of the AffectNet dataset too … The MLP is actually just a nice, quick, customisable class I picked up from Andrew Trask and co’s NALU paper … It’s become a go-to vanilla neural net for me.
|
st97676
|
Hello.
Today I was performing some experiments using a variational autoencoder and noticed something quite surprising that I did not expect. I hope someone can help me.
I was sampling from the posterior distribution and wanted to save an image of the latent space. The image of the latent space (which comes from a Gaussian distribution) is normalized to fit in the range 0-1. I use this code:
mean_p,logvar_p=variatonal.forward(x_test)#encode params
z_lat=sampler([mean_p,logvar_p],'gaussian',mean_p.shape)#sample from gaussian distribution
z_=normalize(z_lat)#normalize latent code
save_image(z_)#save latent code
mean,logvar=reconstruction.sample(z_lat)#decoder parameters
x_=sampler([mean,logvar],decoder_type,mean.shape)#sample from decoder
x_=normalize(x_)
save_image(x_)
Normalize is a function that does this (assume batch=100)
def normalize(x_):
    min_val_aux, ind = torch.min(x_.data, 1)
    max_val_aux, ind = torch.max(x_.data, 1)
    min_val = torch.zeros(100, 1).cuda()
    max_val = torch.zeros(100, 1).cuda()
    min_val[:, 0] = min_val_aux
    max_val[:, 0] = max_val_aux
    a, b = x_.shape
    min_val = min_val.expand(100, b)
    max_val = max_val.expand(100, b)
    x_.data = (x_.data - min_val) / (max_val - min_val)
    return x_.data.cpu()
What is incredible is that the normalize function is modifying the content of z_lat, so when decoding z_lat I observed mode collapse. At first I thought it was because a fully connected VAE encoder is not expressive enough, but then I realized that if I take out the call z_=normalize(z_lat) everything works fine. Moreover, if normalize acts directly on z_lat.data instead of z_lat, everything works too, so it seems to be something related to autograd. Shouldn’t a call to a Python function pass the data instead of a pointer to the variable? It seems that autograd passes Variable references to functions instead of actually copying data.
I am impressed by this fact.
|
st97677
|
Solved by albanD in post #19
No problem !
In Python, only numbers, strings and booleans (I might be missing things here, like functions, but I’m not sure) are passed by value; everything else is passed by reference.
|
st97678
|
Hi,
The thing is that you should not use .data anymore. It is the old API and is really bug-prone, as you can see.
If the goal is to get a tensor with the same content but without the autograd graph, you should use .detach(). It will return a tensor with the same content and requires_grad=False.
If the goal is to make some changes to a tensor that are not tracked by autograd, then you should do these ops within a with torch.no_grad(): block. It will allow you to do any op on a tensor as if it did not require any gradient, and these ops will not be recorded by the autograd engine.
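For illustration, a minimal sketch of the two patterns described above (the tensor name is made up):
import torch

x = torch.randn(5, requires_grad=True)

# 1) same content, but cut off from the autograd graph
x_detached = x.detach()      # requires_grad=False, shares storage with x

# 2) untracked ops on the original tensor
with torch.no_grad():
    x.add_(1.0)              # in-place update, not recorded by autograd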
|
st97679
|
Well, this is old code (I wrote it about 9 months ago and dug it out now because I remembered that mode collapse and wanted to compare with the new version). Actually, in that code z_lat is a Variable, and if I pass .data everything works. However, if normalize() is given a torch Variable, that is what happens.
So for the current release I understand that .data is also buggy… Are you planning to remove it in future releases? It is a bit confusing. Can one copy a tensor using detach().data? Should that work?
Well, at least I have seen that a VAE can suffer from mode collapse when linearly transforming the latent space… One can always learn something interesting, even from a bug.
Thanks for your reply
|
st97680
|
Anyway, I think that when this kind of bug appears (this was in version 0.3.0), either a warning should be shown when using .data or it should be removed immediately. These bugs are very difficult to detect for people who have been using PyTorch for a long time.
Even when I search for information related to .data there are no posts, docs, or general information talking about this bug…
thanks
|
st97681
|
I think .data is still used inside the optimizers (and maybe other internals) which would result in tons of warnings.
|
st97682
|
Well, something should be done: at least an initial warning when importing torch, or just remove .data from the API, or something.
Or put it here: https://pytorch.org/blog/pytorch-0_4_0-migration-guide/
The problem is that this bug appears at least as far back as version 0.3.0.
|
st97683
|
As you mentioned the migration guide: it is explicitly mentioned there, as there is a section “what about .data ?”
But in general I agree with you.
@albanD isn’t it possible to replace all .data calls with .detach() in the internals?
|
st97684
|
@jmaronas This is not a bug !
This is the intended behavior of .data. Here you have a given variable x_; when you do x_.data = ..., you modify what x_ contains. It is an in-place change to x_.
@justusschock either .detach() or a with torch.no_grad() block, depending on what the .data was used for. Yes, that should be done if someone has some free time.
|
st97685
|
Well, I agree with you 50%. This is why.
If you have code that modifies .data you expect to modify what x_.data contains, that is true. The problem is, as far as I know, that when you pass an argument to a Python function the input argument is not modified:
a = 1
def function(a):
    return a + 1
print(function(a))
print(a)
a is still 1. In autograd this does not happen: if a is an autograd Variable its content does get modified; if a is not a Variable it does not.
I think this is not the typical behaviour of Python code, and that is why I call it a “bug”. In my example I assumed that the variable inside normalize(), x_, is independent of the argument passed when calling normalize(), and I expected that changes to it inside the function would not change the variable in the main program. I would expect the same with the following function, where the argument is just renamed:
def normalize(y_):
    min_val_aux, ind = torch.min(y_.data, 1)
    max_val_aux, ind = torch.max(y_.data, 1)
    min_val = torch.zeros(100, 1).cuda()
    max_val = torch.zeros(100, 1).cuda()
    min_val[:, 0] = min_val_aux
    max_val[:, 0] = max_val_aux
    a, b = y_.shape
    min_val = min_val.expand(100, b)
    max_val = max_val.expand(100, b)
    y_.data = (y_.data - min_val) / (max_val - min_val)
    return y_.data.cpu()
And yet the “bug” persists.
Hope that makes it clearer.
|
st97686
|
I disagree with you here: in Python it only works that way for numbers and strings. Other objects have the same behavior as Variables here.
In your case you pass an object and modify one of its fields (.data in this case); it’s like working with a list (or dict) and modifying one of its elements:
a = [1]
def function(a):
    a[0] = 2
    return a
print(function(a))
print(a)
Actually, any Python object that contains a subfield like .data will always have this behavior.
|
st97687
|
Yes, totally agree. But one cannot expect this to happen with torch.autograd Variables and not with torch.Tensor, as both have very similar functionality (apart from automatic differentiation). It would be nice to mention this in the documentation or the migration guide; that is what I mean.
|
st97688
|
But one cannot expect this to happen with torch.autograd Variables and not with torch.Tensor.
You mean that if you pass a torch.Tensor to that function it won’t have the same behavior? I would expect it does (or just crashes on old PyTorch versions where .data did not exist on torch.Tensor).
For the migration guide, isn’t what @justusschock linked here about migration to 0.4.0 what you’re looking for?
|
st97689
|
I mean that if I pass .data to normalize, everything works fine; the modifications are not made to the variable in the calling code:
x = Variable(torch.zeros(10, 10).normal_(0, 1))
new_x = normalize(x.data)
print(new_x.sum() == x.data.sum())  # returns False
###############
###############
x = Variable(torch.zeros(10, 10).normal_(0, 1))
new_x = normalize(x)
print(new_x.sum() == x.data.sum())  # returns True
In this example the normalize function differs between the two cases: for the second case it is the one I posted earlier; for the first case it is:
def normalize(y_):
    min_val_aux, ind = torch.min(y_, 1)
    max_val_aux, ind = torch.max(y_, 1)
    min_val = torch.zeros(100, 1).cuda()
    max_val = torch.zeros(100, 1).cuda()
    min_val[:, 0] = min_val_aux
    max_val[:, 0] = max_val_aux
    a, b = y_.shape
    min_val = min_val.expand(100, b)
    max_val = max_val.expand(100, b)
    y_ = (y_ - min_val) / (max_val - min_val)
    return y_.cpu()
i.e. take out .data from y_
|
st97690
|
Which is the same as if, in my list example, I give a[0] to the function instead of a: it won’t change the content of a anymore, because the function never had the object a to begin with.
I’m not sure I understand what you were expecting to happen, other than the following:
If you change input.data (or input[0] in my list example), then you modify your input object. And so this change will be seen every place where you use this object.
If you pass the content of input.data (or input[0] in my list example) and then just use it for some computation (not modifying any object inplace), then you don’t modify any existing object and so no existing objects will be changed.
|
st97691
|
Well, what I mean is that both torch.autograd Variables and torch.Tensor are objects. As far as I understand, a Variable’s .data is a torch.Tensor.
What I expected is that if I modify a torch.autograd Variable attribute and that change is visible to the whole program, i.e. it is implicit, why are torch.Tensor modifications not? I would expect the same behavior for both of them, as both are object attributes.
In your example, if you have a = [[1, 2, 3]] and you pass a[0], i.e. a list object, the same holds, because the inner list is an object that supports in-place modification (as you clearly stated before):
a = [[1, 2, 3]]
def function(a):
    a[0] = 2
    return a
function(a[0])  # a is now [[2, 2, 3]]
I do not see why for torch.autograd the in-place modification is preserved while for a torch.Tensor it is not.
|
st97692
|
Ah, OK,
The confusion is that y_ = xxx in your second example is not an in-place change of the tensor! a[0] = ... is an in-place change of the list, but if you were doing a = 2 it would not change the original list; it just reassigns the name “a” or “y_” to a different Python object. If you were doing y_.copy_(xxx) or y_[0] = xxx then the change would be visible as well.
The code below covers all the cases for lists and tensors.
You will see that in each case, inplace modifications will modify what was given as input.
import torch
def inplace_change(a):
    print("doing inplace change")
    a[0] = 2
    return a

def inplace_tensor_change(a):
    print("doing inplace change on a.data")
    # This is bad, never use .data in proper code !
    a.data = torch.rand(5)
    return a

def out_of_place_change(a):
    print("doing out of place change")
    a = 2
    return a
a=[[1, 2, 3]]
print("working with ", a)
print("Giving a[0]")
print(inplace_change(a[0]))
print(a)
print("")
a=[[1, 2, 3]]
print("working with ", a)
print("Giving a")
print(inplace_change(a))
print(a)
print("")
a=torch.Tensor([1, 2, 3])
print("working with ", a)
print("Giving a")
print(inplace_change(a))
print(a)
print("")
a=torch.Tensor([1, 2, 3])
print("working with ", a)
print("Giving a")
print(inplace_tensor_change(a))
print(a)
print("")
a=torch.Tensor([1, 2, 3])
print("working with ", a)
print("Giving a.data")
print(inplace_change(a.data))
print(a)
print("")
a=[[1, 2, 3]]
print("working with ", a)
print("Giving a[0]")
print(out_of_place_change(a[0]))
print(a)
print("")
a=[[1, 2, 3]]
print("working with ", a)
print("Giving a")
print(out_of_place_change(a))
print(a)
print("")
a=torch.Tensor([1, 2, 3])
print("working with ", a)
print("Giving a")
print(out_of_place_change(a))
print(a)
print("")
a=torch.Tensor([1, 2, 3])
print("working with ", a)
print("Giving a.data")
print(out_of_place_change(a.data))
print(a)
print("")
|
st97693
|
After experimenting a bit, my problem was thinking that arguments are copied instead of passed by reference. The point is that
x = x + 1
creates a new object, while
x.data = x + 1
assigns to an attribute of the existing object. Taking into account that arguments are passed by reference instead of copied in a function call explains my error. In fact, the key mistake was not coding it like this:
def normalize(x_):
    min_val_aux, ind = torch.min(x_.data, 1)
    max_val_aux, ind = torch.max(x_.data, 1)
    min_val = torch.zeros(100, 1).cuda()
    max_val = torch.zeros(100, 1).cuda()
    min_val[:, 0] = min_val_aux
    max_val[:, 0] = max_val_aux
    a, b = x_.shape
    min_val = min_val.expand(100, b)
    max_val = max_val.expand(100, b)
    x_ = (x_.data - min_val) / (max_val - min_val)  # changed x_.data = ... to x_ = ...
    return x_.cpu()
Entirely my mistake. Normally when I code in Python I forget about memory management; I only activate that part of my brain when coding C or C++. That, plus the fact that in previous versions of PyTorch combining computations on Variables and tensors (for example adding Gaussian noise) came down to accessing .data in lots of places, is why I did not notice the problem when writing this function.
Thanks @albanD for your time.
|
st97694
|
No problem !
In Python, only numbers, strings and booleans (I might be missing things here, like functions, but I’m not sure) are passed by value; everything else is passed by reference.
|
st97695
|
Hello !
I’m trying to implement a Fully Convolutional Network based on a VGG16 encoder. I’ve set up the architecture for the decoder. When the input tensor shape is a multiple of 224, the decoder’s output shape is fine. However, when that is not the case, for instance when the image shape is (500, 500) or (1080, 720), I can’t recover the original shape.
Here’s what I have so far:
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models
class FCN(nn.Module):
    def __init__(self, nb_classes):
        super().__init__()
        vgg = models.vgg16()
        self.encoder = vgg.features
        self.relu = nn.ReLU(inplace=True)
        self.deconv1 = nn.ConvTranspose2d(512, 512, kernel_size=3, stride=2, padding=1, dilation=1, output_padding=1)
        self.bn1 = nn.BatchNorm2d(512)
        self.deconv2 = nn.ConvTranspose2d(512, 256, kernel_size=3, stride=2, padding=1, dilation=1, output_padding=1)
        self.bn2 = nn.BatchNorm2d(256)
        self.deconv3 = nn.ConvTranspose2d(256, 128, kernel_size=3, stride=2, padding=1, dilation=1, output_padding=1)
        self.bn3 = nn.BatchNorm2d(128)
        self.deconv4 = nn.ConvTranspose2d(128, 64, kernel_size=3, stride=2, padding=1, dilation=1, output_padding=1)
        self.bn4 = nn.BatchNorm2d(64)
        self.deconv5 = nn.ConvTranspose2d(64, 32, kernel_size=3, stride=2, padding=1, dilation=1, output_padding=1)
        self.bn5 = nn.BatchNorm2d(32)
        self.classifier = nn.Conv2d(32, nb_classes, kernel_size=1)

    def forward(self, x):
        outputs = {}
        for i, l in enumerate(self.encoder):
            x = l(x)
            if isinstance(l, nn.Conv2d):
                print(x.shape)
        print('Starting decoder')
        score = self.relu(self.deconv1(x))
        print(score.shape)
        score = self.bn1(score)
        score = self.relu(self.deconv2(score))
        print(score.shape)
        score = self.bn2(score)
        score = self.relu(self.deconv3(score))
        print(score.shape)
        score = self.bn3(score)
        score = self.relu(self.deconv4(score))
        print(score.shape)
        score = self.bn4(score)
        score = self.bn5(self.relu(self.deconv5(score)))
        print(score.shape)
        return self.classifier(score)
fcn = FCN(2)
img = torch.rand(1,3,500,500)
out = fcn(img)
How should I change the deconv layers to make this work ?
Thanks !
|
st97696
|
I need to compute softmax for a two dimensional matrix w, batch * seq_length. Sequences have different length, and they are denoted by a mask matrix mask_d, also of size batch * seq_length.
I have written the following code, however, it runs into all nan after a couple of iterations. Is there a better way to implement this, or is there an existing SoftMax implementation in PyTorch that can handle a batch of sequences of variable length by mask, and is numerically stable?
w_max = torch.max(w, 1)[0]
w_max = w_max.expand_as(w)
w_max.data[w_max.data < 0] = 0
w = torch.exp(w - w_max)
w_sum = torch.sum(w * mask_d, 1)
w_sum = w_sum.expand_as(w)
w = w / w_sum * mask_d
|
st97697
|
you can use nn.LogSoftmax, it is numerically more stable and is less likely to nan than using Softmax
|
st97698
|
But if I just want to get Softmax instead of LogSoftmax, what should I do? And Softmax does not allow me to do batched operations over variable sequence lengths, so I have to define my own softmax operation.
|
st97699
|
you can do Softmax, but that operation is inherently numerically unstable. That is why I suggested that you do LogSoftmax
|
st97700
|
Thanks. But I think you misunderstand my question. I am working on a batch_size * max_sequence_length matrix, and the sequences are of variable lengths; that’s why I need a mask matrix to mask out the padding elements. It seems neither Softmax nor LogSoftmax supports this masked softmax operation. And if I use LogSoftmax, should I apply exp(w) to convert it back to Softmax, and do you mean that this will work?
|
st97701
|
I guess w_sum * mask_d is zero in the last step; if you print it, you can find out. Also, I’m wondering why you do w_max.data[w_max.data < 0] = 0.
You may try smoothing tricks, like adding eps*seq_length to w_sum and eps to w. But I think it would be better to find the cause of this problem in your model.
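For reference, a minimal sketch of a numerically stable masked softmax along these lines (w and mask as in the original post, written against the modern tensor API without .data; every row is assumed to contain at least one real token):
import torch

def masked_softmax(w, mask, dim=1, eps=1e-12):
    """w: [batch, seq_len] scores; mask: [batch, seq_len], 1 for real tokens, 0 for padding."""
    mask = mask.float()
    # push padded positions to -inf so they contribute ~0 after exp()
    w = w.masked_fill(mask == 0, float('-inf'))
    # subtract the per-row max for numerical stability
    w = w - w.max(dim=dim, keepdim=True)[0]
    e = torch.exp(w) * mask                      # explicitly zero out padding
    return e / (e.sum(dim=dim, keepdim=True) + eps)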
|
st97702
|
Hi, did you find a solution? Is it possible to just add a very small number to the denominator?
|
st97703
|
When I ran the following code on my ‘testing’ folder, it worked fine. However, when I started running it on my ‘training’ folder the Python program crashed. The ‘testing’ folder has about 60K images inside and the ‘training’ folder has about 1 million images inside. Both folders’ structure is the following:
‘testing’
‘1’:
image1.jpg
image2.jpg
…
‘0’:
image1.jpg
image2.jpg
…
When I ran on the test folders, I got what I expected
image_datasets is set up
dataloaders is set up
takes 6.8155
0
torch.Size([32,3,299,299])
takes:20.034827
50
torch.Size([32,3,299,299])
....
However, when I ran it on the training folder, i.e. replacing ‘testing’ with ‘training’ in the following code, I could not even get the ‘image_datasets is set up’ message and Python got stuck. I think Python crashed, as the task manager showed CPU and memory usage at 0.
Here is the code
from __future__ import print_function, division
import torch.utils.data
from torch.utils.data import DataLoader
import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
import numpy as np
import torchvision
from torchvision import datasets, models, transforms
from sklearn.utils.class_weight import compute_class_weight
import matplotlib.pyplot as plt
import torch.nn.functional as F
import time
import os
import pickle as pickle
import copy
from torchvision.datasets import DatasetFolder
plt.ion()
class MyDatasetFolder(DatasetFolder):
    def __getitem__(self, index):
        path, target = self.samples[index]
        # print(path, target)
        try:
            sample = self.loader(path)
            # print('sample is {}'.format(sample))
            try:
                if self.transform is not None:
                    sample = self.transform(sample)
                if self.target_transform is not None:
                    target = self.target_transform(target)
            except Exception as err:
                print('{} can not be transformed'.format(path))
                print('error is {}'.format(err))
                return None
            return sample, target
        except:
            # print('{} can not be loaded'.format(path))
            return None

def myloader(path):
    from PIL import Image, ImageFile
    # ImageFile.LOAD_TRUNCATED_IMAGES = True
    with open(path, 'rb') as f:
        img = Image.open(f)
        return img.convert('RGB')

def my_collate_fn(data):
    data = list(filter(lambda x: x is not None, data))
    # print('data is {}'.format(data[0]))
    return torch.utils.data.dataloader.default_collate(data)

def main():
    model = models.inception_v3(pretrained=True)
    data_transforms = {
        'testing': transforms.Compose([
            transforms.Resize(300),
            transforms.CenterCrop(299),
            transforms.RandomHorizontalFlip(),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ])
    }
    data_dir = os.path.join('.', 'data', 'images')
    batch_size = 32
    img_ext = ['.jpg', '.jpeg', '.JPEG', '.JPG', '.png', '.ppm', '.bmp', '.pgm', '.tif', '.gif', '.eps', '.icns', '.asp', '.svg', '.ico', '.im', '.msp', '.pcx', '.sgi', '.spider', '.tiff', '.webp', '.xbm', '.octet-stream']
    image_datasets = {x: MyDatasetFolder(os.path.join(data_dir, x), myloader, img_ext,
                                         data_transforms[x])
                      for x in ['testing']}
    print('image_datasets is set up')
    dataloaders = {x: DataLoader(image_datasets[x], batch_size=batch_size,
                                 shuffle=True, num_workers=4, collate_fn=my_collate_fn)
                   for x in ['testing']}
    print('dataloaders is set up')
    index = 0
    last_time = time.time()
    for inputs, labels in dataloaders['testing']:
        if index % 50 == 0:
            print('takes:{}'.format(time.time() - last_time))
            last_time = time.time()
            print(index)
            print(inputs.size())
        index += 1

if __name__ == "__main__":
    main()
|
st97704
|
Hi team,
I have two data generator classes: one loads all the data from a file into memory and then feeds it, and the other feeds batches directly from the file. My script tries the first approach, and if the memory is not sufficient it falls back to the second.
try:
loader = DataLoader1()
except RuntimeError as e:
loader = DataLoader2()
But once the RuntimeError,
RuntimeError: cuda runtime error (2) : out of memory at /pytorch/aten/src/THC/generic/THCTensorMath.cu:35
is raised by DataLoader1, DataLoader2 also raises the same error. Is there a way I can solve this?
Thank you.
|
st97705
|
Have a look at this thread. FairSeq uses a different approach in case they run into an OOM issue.
Maybe you could adapt them to your use case.
|
st97706
|
Thanks for the link. In my case this happens before training; the model is not created yet! In the try block I’m trying to load the whole training set onto memory, which sometimes fails. Since that object is not created, shouldn’t the DataLoader2 in the except block be executed?
torch.cuda.empty_cache() does not seem to help here. Moreover, the documentation also states
“the occupied GPU memory by tensors will not be freed so it can not increase the amount of GPU memory available for PyTorch”
|
st97707
|
torch.cuda.empty_cache() is called after the tensors were deleted.
Could you try to delete loader in the exception first, then empty the cache and see if you can recreate the loader using DataLoader2?
How did you create your DataLoader? Do you push all data onto the GPU?
|
st97708
|
In DataLoader1 the CUDA OOM occurs here, when the tensor is created inside __init__():
class DataLoader1():
    def __init__(self, args):
        ....
        self.features_vec = torch.zeros((self.num_samples, self.max_seq_len), dtype=torch.long, device=self.device)
Which is understandable, as I am trying to create a tensor of that size. As this is in DataLoader1’s __init__(), I suppose the loader object is never created. I get an undefined variable error when I try
except RuntimeError as e:
    del loader
Whereas DataLoader2 looks like this:
class DataLoader2():
    def __init__(self, args):
        ...
        self.features_vec = torch.zeros((self._batch_size, self.max_seq_len), dtype=torch.long, device=self.device)
And this too raises OOM while creating self.features_vec. Here self.device = torch.device('cuda') and I can see plenty of memory left in nvidia-smi.
|
st97709
|
Since the exception is thrown in __init__ your loader was actually never instantiated.
This would probably be the reason, why you can’t delete loader in the except block.
Is the memory still in use after the exception?
If so, could you try to create the try block inside of __init__ and empty the cache once you get the OOM error?
Are you getting the OOM error for DataLoader2 from the beginning or just after DataLoader1 could not be created?
|
st97710
|
That’s right, loader is not instantiated. This is DataLoader1
class DataLoader1():
    def __init__(self, trg_file_path, emb_indices, batch_size, max_seq_len, label_signature=None,
                 sample_per_word=False, one_hot_labels=False, update_emb_indices=False,
                 infinite_batches=False, intents_to_use=None, device=None, num_samples_key='num_samples',
                 get_raw_samples_while_iter=False, debug_mode=False):
        """
        Iterator to feed trg data
        args:
            labels = [
                ('intent', [None, 'greet', 'direction']),
                ('action', [None, 'enquire', 'navigate']),
                ('subject', [None, 'thank', 'hello'])
            ]
            or
            labels = [None, 'person', 'location', 'day', 'time']
            batches_to_produce:
                -1 - feeds indefinitely
                0 - Feeds till the file exhausts
                <int value> - feeds <value> batches and exits
            intents_to_use - if a list is provided, excludes samples not included in this list
        """
        if debug_mode:
            torch.set_printoptions(threshold=10000)
        else:
            torch.set_printoptions(threshold=500)
        self.emb_indices = emb_indices
        self.batch_size = batch_size
        self.max_seq_len = max_seq_len
        self.sample_per_word = sample_per_word
        self.one_hot_labels = one_hot_labels
        self.update_emb_indices = update_emb_indices
        self.get_raw_samples_while_iter = get_raw_samples_while_iter
        self.sample_ptr = 0
        self.infinite_batches = infinite_batches
        self.num_samples_key = num_samples_key
        print('Preparing data from file = {}'.format(trg_file_path))
        self.phrase_match, self.num_samples, self.intents_to_train, labels_from_trg_file = getClasses(
            trg_file_path, intents_to_get=intents_to_use,
            num_samples_key=num_samples_key)
        self.labels = labels_from_trg_file if label_signature is None else label_signature
        if device:
            self.device = device
        else:
            self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
        if get_raw_samples_while_iter:
            self.samples = []
        # calc feature vec size
        try:
            self.features_vec = torch.zeros((self.num_samples, self.max_seq_len), dtype=torch.long, device=self.device)
        except RuntimeError as e:
            torch.cuda.empty_cache()
            raise e
As you can see, self.features_vec is the first tensor that is created; all the others are trivial, small Python variables. I tried clearing the cache with a try block, but that didn’t help.
Yes, I’m getting the OOM in DataLoader2 just after DataLoader1 fails to instantiate. If I run just DataLoader2 it works fine (that’s how I am training now).
|
st97711
|
1 Trying to load data onto memory
2 Preparing data from file = trg_data.txt
3 CUDA error: out of memory
4 Not enough memory to load all the data to GPU. Run script without the '-m' flag
5 torch.cuda.max_memory_allocated()=0 ,torch.cuda.max_memory_cached() = 0
6
7 Preparing data from file = trg_data.txt
8 THCudaCheck FAIL file=/pytorch/aten/src/THC/generic/THCTensorMath.cu line=35 error=2 : out of memory
9 Traceback (most recent call last):
10 File "train_ext.py", line 239, in <module>
11 intents_to_train=args.intents, debug_mode=args.debug, test=args.test, load_to_memory=args.load_all_data_to_memory )
12 File "train_ext.py", line 59, in train
13 num_samples_key='num_words', get_raw_samples_while_iter=debug_mode, debug_mode=debug_mode)
14 File "/media/storage/dev/utils.py", line 261, in __init__
15 self.features_vec = torch.zeros((self._batch_size, self.max_seq_len), dtype=torch.long, device=self.device)
16 RuntimeError: cuda runtime error (2) : out of memory at /pytorch/aten/src/THC/generic/THCTensorMath.cu:35
Here’s the error. Line 4 is where the OOM occurs for DataLoader1, and from line 7 DataLoader2 tries to take over.
|
st97712
|
You could try to delete self.features_vec in case it’s holding to some reference before calling torch.cuda.empty_cache(). Let me know, if that works.
|
st97713
|
I think that, just like loader, self.features_vec is also not instantiated. I get
File "/media/storage/dev/utils.py", line 65, in __init__
del self.features_vec
AttributeError: DataLoader1 instance has no attribute 'features_vec'
when I do
try:
    self.features_vec = torch.zeros((self.num_samples, self.max_seq_len), dtype=torch.long, device=self.device)
except RuntimeError as e:
    del self.features_vec
    torch.cuda.empty_cache()
    raise e
|
st97714
|
Thanks for trying this approach. I’m afraid I have no other suggestions.
Let’s see what others might come up with. Maybe I’m just not seeing any obvious issue.
|
st97715
|
I think I know what’s happening here. It takes a little time for the memory to clear so that it can be reused.
This is what worked!
try:
    print("\nTrying to load data onto memory")
    data_loader = DataLoader1(trg_file_path, params)
except RuntimeError as e:
    print("Not enough memory to load all the data to GPU\nTrying generator approach")
    _initialized = 0
    while _initialized != 1:
        _initialized -= 1
        try:
            data_loader = DataLoader2(trg_file_path, params)
            _initialized = 1
        except RuntimeError as e:
            if _initialized > -5:
                print("Failed to initialize, attempt {}. Let's try again".format(abs(_initialized)))
            else:
                raise RuntimeError(e)
Output:
Trying to load data onto memory
Preparing data from file = trg_data.txt
Not enough memory to load all the data to GPU
Trying generator approach
Preparing data from file = trg_data.txt
THCudaCheck FAIL file=/pytorch/aten/src/THC/generic/THCTensorMath.cu line=35 error=2 : out of memory
Failed to initialize, attempt 1. Let's try again
Preparing data from file =trg_data.txt
Success!!
As you can see, attempt 1 failed but attempt 2 succeeded. So now, is there a way I can wait for the memory to sync rather than running a while loop?
|
st97716
|
Oh wow, that’s awesome!
Just a wild guess, but would torch.cuda.synchronize() work?
|
st97717
|
Thanks, tried that but it didn’t work.
try:
    print("\nTrying to load all data onto Memory")
    data_loader = DataLoader1(trg_file_path, params)
except RuntimeError as e:
    if 'out of memory' not in str(e):
        raise RuntimeError(e)
    print("Not enough memory to load all that data\n\nTrying generator approach")
    torch.cuda.synchronize()
    _initialized = 0
    _max_retries = 5
    while _initialized != 1:
        _initialized -= 1
        try:
            data_loader = DataLoader2(trg_file_path, params)
            print("Attempt {}/{}, initialized".format(abs(_initialized), _max_retries))
            _initialized = 1
        except RuntimeError as e:
            if _initialized > -1 * _max_retries:
                print("Attempt {}/{}, failed to initialize, trying again".format(abs(_initialized), _max_retries))
            else:
                raise RuntimeError(e)
stdout
Trying to load data onto memory
Preparing data from file = trg_data.txt
Not enough memory to load all that data
Trying generator approach
Preparing data from file = trg_data.txt
THCudaCheck FAIL file=/pytorch/aten/src/THC/generic/THCTensorMath.cu line=35 error=2 : out of memory
Attempt 1/5, failed to initialize, trying again
Preparing data from file = trg_data.txt
Attempt 2/5, initialized
Still takes 2 attempts
EDIT:Nomenclature
|
st97718
|
Hi all,
I am a beginner in PyTorch. I am trying to perform a text summarization task on the Gigaword dataset.
So far, I have implemented GloVe embeddings as a feature and tested them with an LSTM.
I have also separately extracted a contextual layer for every line in the dataset, consisting of the subject, object and predicate of each line.
I want to use this contextual layer as an additional feature in my model along with the GloVe embeddings.
How can I convert this contextual layer into a form that can be accepted by the model, and is it wise to concatenate it with the existing embeddings, or should I take some other approach?
Any help is appreciated.
Thanks in advance.
|
st97719
|
Solved by ptrblck in post #2
Yes! Have a look at the Anomaly Detection docs.
|
st97720
|
How can I fix the following problem:
Traceback (most recent call last):
File “/home/amalik/Pytorch_virtual_enviornment/lib/python2.7/multiprocessing/queues.py”, line 268, in _feed
File “/home/amalik/Pytorch_virtual_enviornment/lib/python2.7/site-packages/torch/multiprocessing/queue.py”, line 17, in send
File “/home/amalik/Pytorch_virtual_enviornment/lib/python2.7/pickle.py”, line 224, in dump
File “/home/amalik/Pytorch_virtual_enviornment/lib/python2.7/pickle.py”, line 286, in save
File “/home/amalik/Pytorch_virtual_enviornment/lib/python2.7/pickle.py”, line 554, in save_tuple
File “/home/amalik/Pytorch_virtual_enviornment/lib/python2.7/pickle.py”, line 286, in save
File “/home/amalik/Pytorch_virtual_enviornment/lib/python2.7/pickle.py”, line 606, in save_list
File “/home/amalik/Pytorch_virtual_enviornment/lib/python2.7/pickle.py”, line 639, in _batch_appends
File “/home/amalik/Pytorch_virtual_enviornment/lib/python2.7/pickle.py”, line 286, in save
File “/home/amalik/Pytorch_virtual_enviornment/lib/python2.7/multiprocessing/forking.py”, line 67, in dispatcher
File “/home/amalik/Pytorch_virtual_enviornment/lib/python2.7/pickle.py”, line 401, in save_reduce
File “/home/amalik/Pytorch_virtual_enviornment/lib/python2.7/pickle.py”, line 286, in save
File “/home/amalik/Pytorch_virtual_enviornment/lib/python2.7/pickle.py”, line 554, in save_tuple
File “/home/amalik/Pytorch_virtual_enviornment/lib/python2.7/pickle.py”, line 286, in save
File “/home/amalik/Pytorch_virtual_enviornment/lib/python2.7/multiprocessing/forking.py”, line 66, in dispatcher
File “/home/amalik/Pytorch_virtual_enviornment/lib/python2.7/site-packages/torch/multiprocessing/reductions.py”, line 192, in reduce_storage
File “/home/amalik/Pytorch_virtual_enviornment/lib/python2.7/multiprocessing/reduction.py”, line 145, in reduce_handle
OSError: [Errno 24] Too many open files
|
st97721
|
Too many open files when using dataLoader
Hi,
When I use the data loader, I have met the following error: Too many open files.
In my implementation of the Dataset, I use torch.load(‘xxx’) to load the data files (which are all tensors stored on disks), and when call getitem(self, index), it will take the corresponding items from the tensor, and return it.
I construct the dalaloader in the following manner:
dataset = Dataset(xxxxx)
dataLoader = torch.utils.data.DataLoader(dataset=dataset, batch_size=batchSize, shuffle=shuffle,
num…
|
st97722
|
How can I change the spatial transformer module in PyTorch to only include translation shifts and nothing else? I don’t need the full 6-parameter affine transform, only the two components that capture translation shift. Any ideas will be greatly appreciated!
Btw, by spatial transformer module I’m referring to this one:
http://pytorch.org/tutorials/intermediate/spatial_transformer_tutorial.html
|
st97723
|
self.fc_loc[2] should be nn.Linear(32, 2).
Then, make sure the two generated numbers are set as the right-most values of a 2x3 matrix.
I wrote some rough code for you; I didn’t test it.
If you don’t understand, read https://en.wikipedia.org/wiki/Transformation_matrix
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)
        # Spatial transformer localization-network
        self.localization = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=7),
            nn.MaxPool2d(2, stride=2),
            nn.ReLU(True),
            nn.Conv2d(8, 10, kernel_size=5),
            nn.MaxPool2d(2, stride=2),
            nn.ReLU(True)
        )
        # Regressor for the 3 * 2 affine matrix
        self.fc_loc = nn.Sequential(
            nn.Linear(10 * 3 * 3, 32),
            nn.ReLU(True),
            nn.Linear(32, 2)
        )
        # Initialize the weights/bias with identity transformation
        self.fc_loc[2].weight.data.fill_(0)
        self.fc_loc[2].bias.data = torch.FloatTensor([0, 0])

    # Spatial transformer network forward function
    def stn(self, x):
        xs = self.localization(x)
        xs = xs.view(-1, 10 * 3 * 3)
        theta_translation = self.fc_loc(xs)
        theta = theta_translation.data.new(xs.size(0), 2, 3)
        theta[:, 0, 0] = 1
        theta[:, 1, 1] = 1
        theta = Variable(theta, requires_grad=True)
        theta[:, :, 2] = theta_translation
        grid = F.affine_grid(theta, x.size())
        x = F.grid_sample(x, grid)
        return x

    def forward(self, x):
        # transform the input
        x = self.stn(x)
        # Perform the usual forward pass
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x)
|
st97724
|
Thank you very much for your help. However, when I try to run the given code it gives the following error:
Traceback (most recent call last):
File “model_change.py”, line 204, in
output = model(data)
File “/home/sohrab/anaconda3/lib/python3.5/site-packages/torch/nn/modules/module.py”, line 325, in call
result = self.forward(*input, **kwargs)
File “model_change.py”, line 172, in forward
x = self.stn(x)
File “model_change.py”, line 162, in stn
theta[:, :, 2] = theta_translation
File “/home/sohrab/anaconda3/lib/python3.5/site-packages/torch/autograd/variable.py”, line 87, in setitem
return SetItem.apply(self, key, value)
RuntimeError: a leaf Variable that requires grad has been used in an in-place operation.
I’m assuming this is because the Variable has already been created with requires_grad and can’t be modified in place afterwards? Any help on this would be amazing, thank you so much.
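One way around that error (a sketch only, not tested against the code above, and assuming a reasonably recent PyTorch where tensors have new_ones/new_zeros) is to build the 2x3 matrix from the predicted translations with torch.cat/torch.stack instead of writing in place into a leaf Variable that requires grad:
def stn(self, x):
    xs = self.localization(x)
    xs = xs.view(-1, 10 * 3 * 3)
    t = self.fc_loc(xs)                                  # (N, 2) predicted translations
    ones = t.new_ones(t.size(0), 1)
    zeros = t.new_zeros(t.size(0), 1)
    row0 = torch.cat([ones, zeros, t[:, 0:1]], dim=1)    # [1, 0, tx]
    row1 = torch.cat([zeros, ones, t[:, 1:2]], dim=1)    # [0, 1, ty]
    theta = torch.stack([row0, row1], dim=1)             # (N, 2, 3), gradients flow through t
    grid = F.affine_grid(theta, x.size())
    return F.grid_sample(x, grid)
The reply further down achieves the same effect by adding 0 to a zero-initialised matrix, so the indexed assignment happens on a non-leaf tensor.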
|
st97725
|
However, in this STN paper, the authors apply some more complex transformations, such as elastic distortion (a demo can be seen in this link), so I wonder how to use PyTorch to generate any kind of transformation?
|
st97726
|
```python
from torch.autograd import Variable
N_PARAMS = {'affine': 6,
'translation':2,
'rotation':1,
'scale':2,
'shear':2,
'rotation_scale':3,
'translation_scale':4,
'rotation_translation':3,
'rotation_translation_scale':5}
# Spatial transformer network forward function
def stn(x, theta, mode='affine'):
if mode == 'affine':
theta1 = theta.view(-1, 2, 3)
else:
theta1 = Variable( torch.zeros([x.size(0), 2, 3], dtype=torch.float32, device=x.get_device()), requires_grad=True)
theta1 = theta1 + 0
theta1[:,0,0] = 1.0
theta1[:,1,1] = 1.0
if mode == 'translation':
theta1[:,0,2] = theta[:,0]
theta1[:,1,2] = theta[:,1]
elif mode == 'rotation':
angle = theta[:,0]
theta1[:,0,0] = torch.cos(angle)
theta1[:,0,1] = -torch.sin(angle)
theta1[:,1,0] = torch.sin(angle)
theta1[:,1,1] = torch.cos(angle)
elif mode == 'scale':
theta1[:,0,0] = theta[:,0]
theta1[:,1,1] = theta[:,1]
elif mode == 'shear':
theta1[:,0,1] = theta[:,0]
theta1[:,1,0] = theta[:,1]
elif mode == 'rotation_scale':
angle = theta[:,0]
theta1[:,0,0] = torch.cos(angle) * theta[:,1]
theta1[:,0,1] = -torch.sin(angle)
theta1[:,1,0] = torch.sin(angle)
theta1[:,1,1] = torch.cos(angle) * theta[:,2]
elif mode == 'translation_scale':
theta1[:,0,2] = theta[:,0]
theta1[:,1,2] = theta[:,1]
theta1[:,0,0] = theta[:,2]
theta1[:,1,1] = theta[:,3]
elif mode == 'rotation_translation':
angle = theta[:,0]
theta1[:,0,0] = torch.cos(angle)
theta1[:,0,1] = -torch.sin(angle)
theta1[:,1,0] = torch.sin(angle)
theta1[:,1,1] = torch.cos(angle)
theta1[:,0,2] = theta[:,1]
theta1[:,1,2] = theta[:,2]
elif mode == 'rotation_translation_scale':
angle = theta[:,0]
theta1[:,0,0] = torch.cos(angle) * theta[:,3]
theta1[:,0,1] = -torch.sin(angle)
theta1[:,1,0] = torch.sin(angle)
theta1[:,1,1] = torch.cos(angle) * theta[:,4]
theta1[:,0,2] = theta[:,1]
theta1[:,1,2] = theta[:,2]
grid = F.affine_grid(theta1, x.size())
x = F.grid_sample(x, grid)
return x
class Net(nn.Module):
def __init__(self, stn_mode='affine'):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(320, 50)
self.fc2 = nn.Linear(50, 10)
self.stn_mode = stn_mode
self.stn_n_params = N_PARAMS[stn_mode]
# Spatial transformer localization-network
self.localization = nn.Sequential(
nn.Conv2d(1, 8, kernel_size=7),
nn.MaxPool2d(2, stride=2),
nn.ReLU(True),
nn.Conv2d(8, 10, kernel_size=5),
nn.MaxPool2d(2, stride=2),
nn.ReLU(True)
)
# Regressor for the 3 * 2 affine matrix
self.fc_loc = nn.Sequential(
nn.Linear(10 * 3 * 3, 32),
nn.ReLU(True),
nn.Linear(32, self.stn_n_params)
)
# Initialize the weights/bias with identity transformation
self.fc_loc[2].weight.data.fill_(0)
self.fc_loc[2].weight.data.zero_()
if self.stn_mode == 'affine':
self.fc_loc[2].bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))
elif self.stn_mode in ['translation','shear']:
self.fc_loc[2].bias.data.copy_(torch.tensor([0,0], dtype=torch.float))
elif self.stn_mode == 'scale':
self.fc_loc[2].bias.data.copy_(torch.tensor([1,1], dtype=torch.float))
elif self.stn_mode == 'rotation':
self.fc_loc[2].bias.data.copy_(torch.tensor([0], dtype=torch.float))
elif self.stn_mode == 'rotation_scale':
self.fc_loc[2].bias.data.copy_(torch.tensor([0,1,1], dtype=torch.float))
elif self.stn_mode == 'translation_scale':
self.fc_loc[2].bias.data.copy_(torch.tensor([0,0,1,1], dtype=torch.float))
elif self.stn_mode == 'rotation_translation':
self.fc_loc[2].bias.data.copy_(torch.tensor([0,0,0], dtype=torch.float))
elif self.stn_mode == 'rotation_translation_scale':
self.fc_loc[2].bias.data.copy_(torch.tensor([0,0,0,1,1], dtype=torch.float))
def stn(self, x):
x = stn( x, self.theta(x), mode=self.stn_mode)
return x
def theta(self, x):
xs = self.localization(x)
xs = xs.view(-1, 10 * 3 * 3)
theta = self.fc_loc(xs)
return theta
def forward(self, x):
# transform the input
x = self.stn(x)
# Perform the usual forward pass
x = F.relu(F.max_pool2d(self.conv1(x), 2))
x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
x = x.view(-1, 320)
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = self.fc2(x)
return F.log_softmax(x)
```
|
st97727
|
Does anyone know where to find a PyTorch version of the large-margin softmax loss? I searched for it but couldn’t find one.
I only found MultiLabelSoftMarginLoss.
|
st97728
|
Pytorch doesn’t have an implementation of large margin softmax loss, and a quick google search doesn’t seem to result in anything. You can be the first person to write one
|
st97729
|
Here’s the code if you have not found it yet: lsoftmax-pytorch. Truth be told, you should update it to 0.4.0, but it works fine.
|
st97730
|
Here’s an updated implementation: https://github.com/amirhfarzaneh/lsoftmax-pytorch
|
st97731
|
The nn.Sequential module simplifies writing models very nicely. However, when I need to flatten 2D feature maps to 1D tensors before the fully connected layers, I use the view function, which cannot be used within a Sequential. I know I can create my own flatten module and feed it into the Sequential, but I wanted to know if there is a better way to do this. So my question is: what is the best way to change the dimensions of a tensor and still use Sequential to create a single module that does the whole pipeline?
|
st97732
|
You could create a wrapper module like this:
class Flatten(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        return x.view(x.size(0), -1)
And pass an instance of this module to the Sequential model before the fully connected layers.
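For example, a small hypothetical usage sketch (the layer sizes are made up):
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, kernel_size=3, padding=1),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d((4, 4)),
    Flatten(),                          # (batch, 16, 4, 4) -> (batch, 256)
    torch.nn.Linear(16 * 4 * 4, 10),
)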
|
st97733
|
Hi,
Do I understand correctly that, since torchvision.transforms act on PIL images, which, as far as I have tried, can’t be converted to float and still retain all 3 channels (which is pretty weird, so I hope somebody can correct me on this), one cannot apply torchvision.transforms in floating point?
The reason I’m asking is that even though I can imagine doing transforms in int8 format can be faster, if I’m looking to preserve details then doing transforms such as rotation and shear in int8 can accumulate quantization errors over time.
Do I understand correctly that if I want to have augmentations in float I must use OpenCV (or, if I remember correctly, fastai)?
|
st97734
|
Is there a way to tell PyTorch to not use the GPU? I want to do some profiling that would be easier done on the CPU side.
|
st97735
|
Unless you push your tensors and modules to the GPU via .cuda() or .to("cuda"), all computations will be done on the CPU by default.
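A small sketch of how this is typically handled (the device flag is just an illustration):
import torch

use_gpu = False   # e.g. a profiling run that should stay on the CPU
device = torch.device("cuda" if use_gpu and torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(10, 2).to(device)   # stays on the CPU when device is "cpu"
x = torch.randn(4, 10, device=device)
out = model(x)
Hiding the GPUs from the process (e.g. launching with CUDA_VISIBLE_DEVICES="") also forces everything onto the CPU.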
|
st97736
|
I’m getting the error ValueError: Expected input batch_size (1) to match target batch_size (10000). for the following code:
class LogLinearLM(nn.Module):
    def __init__(self, vocab_size):
        super(LogLinearLM, self).__init__()
        self.linear = nn.Linear(2 * vocab_size, vocab_size)
        self.softmax = nn.LogSoftmax(dim=1)

    def forward(self, input):
        out = self.linear(input)
        return self.softmax(out)

losses = []
loss_fn = nn.NLLLoss()
model = LogLinearLM(10000)
optimizer = optim.SGD(model.parameters(), lr=0.001)
for _ in range(30):
    total_loss = 0
    for w1, w2, w3 in nltk.trigrams(train):
        model.zero_grad()
        X = torch.cat([one_hot(w1), one_hot(w2)])
        yhat = model(X.view(1, -1))
        y = torch.tensor(one_hot(w3), dtype=torch.long)
        loss = loss_fn(yhat, y)
        loss.backward()
        optimizer.step()
        total_loss += loss
    losses.append(total_loss)
plt.plot(losses)
The function one_hot is:
def one_hot(word):
    v = torch.zeros(10000)
    v[word2idx[word]] = 1
    return v
Can anyone help me, please? Thanks
|
st97737
|
Solved by ptrblck in post #4
OK, in that case the second approach would be valid.
If you have one valid class for each sample, your target should have the shape [batch_size] storing the class index. E.g. if the current word would be class5, you shouldn’t store it as [[0, 0, 0, 0, 0, 1, 0, ...]], but rather just use the class i…
|
st97738
|
Are you working on a multi-label classification task, i.e. is your target holding more than one valid class?
If so, I think you should try using nn.BCELoss and unsqueeze your target at dim0 using y = y.unsqueeze(0).
However, if that’s not the case and you are dealing with a multi-class classification, i.e. your target only stores one valid target class, you should use the class index instead of the one-hot encoded target.
You can get the class index using y = torch.argmax(y).
|
st97739
|
The task is training a trigram model, so given two one-hot vectors (representing two words), I concatenate them, and the expected label is a one-hot vector representing the target word. I’m only expecting one label, it’s just one-hot encoded, and from what I know, I want to compare the softmax output to the target, which is what (I think) I’m doing…
|
st97740
|
OK, in that case the second approach would be valid.
If you have one valid class for each sample, your target should have the shape [batch_size] storing the class index. E.g. if the current word would be class5, you shouldn’t store it as [[0, 0, 0, 0, 0, 1, 0, ...]], but rather just use the class index torch.tensor([5]).
As described in the other post, you can achieve this using torch.argmax.
The docs have some more examples.
Let me know, if that works.
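Applied to the trigram snippet above, the fix would look roughly like this (a sketch reusing word2idx, w3, yhat and loss_fn from the earlier post):
# target as a class index rather than a one-hot vector
y = torch.tensor([word2idx[w3]], dtype=torch.long)   # shape [1], matching batch_size 1
loss = loss_fn(yhat, y)                              # nn.NLLLoss expects class indices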
|
st97741
|
Hi all. I am sorry to bug you again with this kind of question but the notation is really confusing (if not wrong in my opinion). The question is about torch.optim.SGD.
Question in short: Isn’t torch’s SGD just a regular ‘step into the direction (-gradient)-optimizer’ instead of SGD? Shouldn’t it be named ‘GD’ instead of ‘SGD’?
(lengthy) explanation:
Usually we do gradient descent (ignoring all the overfitting issues for a moment) like this: given a data set x_i (i=0,1,…,N) with true answers y_i (i=0,1,…,N), some function f (that we have maybe implemented in PyTorch) depending on the input x and some model parameters theta, and some loss function l, we abstractly compute the gradient g = g(x, y, theta) of l(y, f(x, theta)) w.r.t. theta, initialize theta to some value theta_0 and then put
theta_(t+1) = theta_t - (1/N)*sum_i g(x_i, y_i, theta_t)
and then we iterate. As we cannot do this with complicated gradients and large training sets, we use SGD with mini batches, which (according to my understanding) is the following:
iterate
select some random subset i_1, …, i_M of fixed size M from the training set
theta_(t+1) = theta_t - (1/M)*sum_k g(x_i_k, y_i_k, theta_t)
so the SGD function should somehow be linked to the current minibatch… Let’s say very concretely that we consider linear regression with current weights=[3,4] and bias=-1, the training batch is [[1,2], [3,3]], the desired outputs are [[15], [-1]] and we use MSE as loss. Then theta = [w0, w1, b] and the gradient is
g(x, y, theta) = 2*(yhat - y) * [x0, x1, 1]
i.e. we get two gradients for the two training samples, namely [-10, -20, -10] and [126, 126, 42]. So I implemented this in torch in the following way:
linearFunction = nn.Linear(2, 1, True)
# initialize weights...
weights = torch.tensor([[3, 4]], dtype=torch.float)
weightsAsParam = torch.nn.Parameter(weights, requires_grad=True)
bias = torch.tensor([-1], dtype=torch.float)
biasAsParam = torch.nn.Parameter(bias, requires_grad=True)
linearFunction.weight = weightsAsParam
linearFunction.bias = biasAsParam
x = torch.tensor([[1, 2], [3,3]], dtype=torch.float)
y = torch.tensor([[15], [-1]], dtype=torch.float)
yhat = linearFunction(x)
lossFunction = torch.nn.MSELoss()
loss = lossFunction(yhat, y)
loss.backward()
now I want to see the gradients… However, linearFunction.weight.grad (which is supposed to give me two gradients (the one for the first example and then the one for the second)) gives me
tensor([[58., 53.]])
which is the mean of the two single gradients… So, what SGD effectively does is step in the direction of the negative mean gradient, but the task of selecting the mini-batch and of forming the mean gradient is done by other parts of torch… so what exactly is the S in SGD?
Is it even possible to let the tensors linearFunction.weight and linearFunction.bias know two different gradients?
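Just as a sanity check on my side, the reported weight gradient is indeed the mean of the two per-sample gradients I computed by hand (weight components only):
import torch

g1 = torch.tensor([-10., -20.])   # per-sample gradient of the first example (weights only)
g2 = torch.tensor([126., 126.])   # per-sample gradient of the second example
print((g1 + g2) / 2)              # tensor([58., 53.]) -- exactly what linearFunction.weight.grad shows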
Regards,
Fabian Werner
|
st97742
|
Solved by albanD in post #2
Yes, the update step for GD and SGD is the same, so from the optimizer’s point of view they are both the same.
If you want the S to be meaningful, you can see it as Sub-Gradient Descent. But here again, if you had real gradients, the update would be the same.
It is not possible to save gradien…
|
st97743
|
Yes, the update step for GD and SGD is the same, so from the optimizer’s point of view they are both the same.
If you want the S to be meaningful, you can see it as Sub-Gradient Descent. But here again, if you had real gradients, the update would be the same.
It is not possible to save gradients for all the samples. Doing so would require a significant amount of memory and potentially prevent some optimizations in the implementation.
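As a small illustration of the observable behaviour (just a sketch, not the internal implementation): with the default settings the SGD step is simply p = p - lr * p.grad, regardless of how that gradient was produced:
import torch

p = torch.nn.Parameter(torch.tensor([3., 4.]))
opt = torch.optim.SGD([p], lr=0.1)

p.grad = torch.tensor([58., 53.])   # e.g. the mean gradient from the example above
opt.step()

print(p.data)                       # tensor([-2.8000, -1.3000]) == [3., 4.] - 0.1 * [58., 53.]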
|
st97744
|
Is there an official callback feature in PyTorch? If not, I’d like to know the relevant files to modify to make it happen. Basically, I want callbacks for functions like on_batch_end(), on_epoch_end() etc.
|
st97745
|
Solved by ptrblck in post #2
Ignite has some nice callbacks.
|
st97746
|
I’m not sure where on_batch_end() is defined, but based on the naming, it should be the same.
|
st97747
|
Callbacks seem like quite a useful thing. I googled around and saw a lot of people implementing similar callbacks for PyTorch; is there a reason why the PyTorch team thinks callbacks are not necessary? What’s the native solution if we do not use callbacks?
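Something like this hand-rolled hook is what I have in mind (all names here are hypothetical, just a sketch of a “native” approach):
import torch
import torch.nn as nn

# a minimal, hand-rolled callback mechanism; the hook names are made up
def train(model, loader, optimizer, loss_fn, epochs, callbacks=None):
    callbacks = callbacks or []
    for epoch in range(epochs):
        for batch_idx, (x, y) in enumerate(loader):
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
            for cb in callbacks:   # fire the per-batch hooks
                cb.get("on_batch_end", lambda **kw: None)(batch=batch_idx, loss=loss.item())
        for cb in callbacks:       # fire the per-epoch hooks
            cb.get("on_epoch_end", lambda **kw: None)(epoch=epoch)

# tiny dummy setup so the sketch actually runs
model = nn.Linear(4, 1)
data = [(torch.randn(8, 4), torch.randn(8, 1)) for _ in range(5)]
logger = {"on_epoch_end": lambda epoch: print("finished epoch", epoch)}
train(model, data, torch.optim.SGD(model.parameters(), lr=0.01),
      nn.MSELoss(), epochs=2, callbacks=[logger])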
Thanks.
|
st97748
|
I am training a network model, but I found it is so big that one TITAN Xp GPU’s 12GB of memory only allows training with a single sample at a time. For now I am using a trick to mimic multi-sample training: I run the forward and backward pass with one sample at a time, and after several iterations I call optimizer.step() once. I am not sure whether, in PyTorch, this works similarly to training with multiple samples at once. In my network I did not use any BatchNorm layer.
Thank you for your advice!
|
st97749
|
Solved by ptrblck in post #2
That should generally work.
Note that you might want to scale the gradients as they are accumulated by default.
Here is a good explanation with some examples.
|
st97750
|
That should generally work.
Note that you might want to scale the gradients as they are accumulated by default.
Here is a good explanation with some examples.
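Roughly, the accumulation pattern could look like this (a sketch with a dummy model and data; accumulation_steps and the shapes are made-up values):
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                      # stand-in for the big network
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

accumulation_steps = 8                        # mimic a batch of 8 with single-sample passes
optimizer.zero_grad()
for i in range(32):                           # stand-in for iterating over a DataLoader
    x = torch.randn(1, 10)                    # one sample per forward pass
    y = torch.randint(0, 2, (1,))
    loss = criterion(model(x), y)
    (loss / accumulation_steps).backward()    # scale so the accumulated grad matches the batch mean
    if (i + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()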
|
st97751
|
Thanks for your reply! Is there any bad influence if I use a BatchNorm layer? If so, what should I do to reduce the hit to performance?
|
st97752
|
Since you can only use a single sample in your forward pass, the running estimates might be off.
You could try to adjust the momentum, but I think nn.InstanceNorm or nn.GroupNorm would be a better alternative.
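For example (a sketch, the channel and group counts are made up), the swap could look like this:
import torch
import torch.nn as nn

x = torch.randn(1, 64, 32, 32)                       # batch size of 1, 64 channels

bn = nn.BatchNorm2d(64)                              # relies on batch statistics -> noisy with N=1
gn = nn.GroupNorm(num_groups=8, num_channels=64)     # statistics per sample, per channel group
inorm = nn.InstanceNorm2d(64)                        # statistics per sample, per channel

print(bn(x).shape, gn(x).shape, inorm(x).shape)      # all keep the input shape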
|
st97753
|
The following is the definition of torch.addmm's parameters:
torch.addmm(beta=1, mat, alpha=1, mat1, mat2, out=None) → Tensor
from https://pytorch.org/docs/stable/torch.html?highlight=addmm#torch.addmm
but under it there is an example:
M = torch.randn(2, 3)
mat1 = torch.randn(2, 3)
mat2 = torch.randn(3, 3)
torch.addmm(M, mat1, mat2)
tensor([[-4.8716, 1.4671, -1.3746],
[ 0.7573, -3.9555, -2.8681]])
The call of torch.addmm in the example only uses 3 arguments, even though addmm's last 2 parameters don't have a default value.
That is the whole source of my confusion.
|
st97754
|
Solved by tom in post #15
What happens here is that the addmm does have “overloads” to implement the behaviour that, as you correctly note, would not be possible using a single plain Python function.
The twist is that the one using keyword only alpha and beta arguments is the preferred one (defined in aten/src/ATen/native/n…
|
st97755
|
What M does is get added to the result of multiplying mat1 and mat2 together; that is, addmm multiplies mat1 by mat2 and then adds M to that product, and M corresponds to the mat parameter of addmm.
|
st97756
|
But what I see from the PyTorch doc is "torch.addmm(beta=1, mat, alpha=1, mat1, mat2, out=None)", in which M, mat1 and mat2 would then have to be matched to beta, mat and alpha. I ran the example on my PC and it works as you say, but I don't understand why it works. Please cure my confusion. Thanks
|
st97757
|
I know what torch.addmm does, but I don't know how it can work like this.
Let me take an example: why do torch.addmm(1, M, 1, mat1, mat2) and torch.addmm(M, mat1, mat2) give the same result? Why is the M in the second call parsed as mat instead of beta, even though it is the first positional argument? Please forgive my English…
|
st97758
|
Because beta and alpha are positional arguments that already have a default value of 1 in the function.
|
st97759
|
To my knowledge, a non-default parameter can't follow a default parameter in Python. And I'm also confused about the definition of addmm in the PyTorch doc, where mat follows beta=1. This question is closely related to what we are talking about.
|
st97760
|
Sure, but in that case alpha already has an input value in the argument list.
Let's take, for instance, a function that computes a neural network output:
def nn(weight, bias=1, features):
    return weight*features + bias
So if you use the function, note that bias is given a default value of 1, which allows you to pass in weight and features without worrying about bias, unless you want to change bias.
|
st97761
|
My Python version is 3.6.6, and it doesn't allow that. Does the nn function you just defined work on your machine?
|
st97762
|
What happens here is that the addmm does have “overloads” to implement the behaviour that, as you correctly note, would not be possible using a single plain Python function.
The twist is that the one using keyword-only alpha and beta arguments is the preferred one (defined in aten/src/ATen/native/native_functions.yaml) while the others are deprecated (defined in tools/autograd/deprecated.yaml).
It is, perhaps, unfortunate that the addmm example uses parameters that are marked as deprecated.
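For illustration, a small sketch comparing the two call forms (the deprecated positional form may print a warning and could disappear in later releases):
import torch

M = torch.randn(2, 3)
mat1 = torch.randn(2, 3)
mat2 = torch.randn(3, 3)

# preferred overload: beta and alpha are keyword arguments defaulting to 1
out1 = torch.addmm(M, mat1, mat2, beta=1, alpha=1)   # beta * M + alpha * (mat1 @ mat2)

# deprecated overload from the example above: beta and alpha passed positionally
out2 = torch.addmm(1, M, 1, mat1, mat2)

print(torch.allclose(out1, out2))                    # True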
Best regards
Thomas
(who looked at this way too much when generating typehints )
|
st97763
|
Others have explained to me that the definition isn't what's shown in the PyTorch doc. Anyway, thanks
|
st97764
|
Though I can’t understand the content in the link, it’s enough for me to know that the definition isn’t what’s shown in the PyTorch doc. Thank you very much!
|
st97765
|
This topic has been posted before on forums but they seem to be outdated now.
Is there a way of using the torchvision transforms.ToPILImage() function on n-dimensional numpy arrays which aren't 3-channel?
I've attempted to do this with single-channel images, but I get errors, due to image shaping I think.
My method for when image.shape == (N, N, 1):
transform = transforms.Compose([transforms.ToPILImage(), transforms.ToTensor(), transforms.Normalize([0.5], [0.5])])
|
st97766
|
Thanks for the reply!
Wouldn’t this only work if you can first transform your ndarray into a PIL image as transforms.Grayscale requires PIL images?
I get this error: TypeError: img should be PIL Image. Got <class 'numpy.ndarray'>
|
st97767
|
Yes, convert it to an array. It's more like when you want to convert an image to gray with OpenCV:
it will first read the image and then convert it to gray by reducing the last (channel) dimension of the array.
cv2.imread() will convert it to an array, e.g. (233, 44, 1), and then you convert it with another function.
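As a rough, untested sketch (array shape and values made up), a single-channel uint8 array could be pushed through the transforms along these lines:
import numpy as np
import torchvision.transforms as transforms

img = np.random.randint(0, 256, size=(48, 48, 1), dtype=np.uint8)   # fake (H, W, 1) image

transform = transforms.Compose([
    transforms.ToPILImage(),               # (H, W, 1) uint8 -> mode 'L' PIL image
    transforms.ToTensor(),                 # note the parentheses
    transforms.Normalize([0.5], [0.5]),    # single-channel mean/std
])

x = transform(img)
print(x.shape)                             # torch.Size([1, 48, 48])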
|