st32168
|
Hello,
I have also been needing to do this exact thing myself (for various reasons) for a little while and struggling to figure it out. Honestly, it’s way harder to find answers for this than it should be. I have read a lot of material on convolution, pytorch unfold, convtranspose2d, and cnn gradients. Finally, I just got it.
The answer is fortunately actually very simple, it’s just that it seems everyone has a different view of this operation (upsampling, cnn gradient, deconvolution, etc) that doesn’t quite explain everything. So this answer isn’t very “googleable”. Note this is the inefficient way of doing things - to do this using unfold, we have to add a bunch of padding to all the sides. There are more efficient implementations, but this is the best vectorized implementation I can come up with.
import torch
import torch.nn.functional as F
img = torch.randn(1 ,50 ,28 ,28)
kernel = torch.randn(30,50 ,3 ,3)
true_convt2d = F.conv_transpose2d(img, kernel.transpose(0,1))
pad0 = 3-1 # to explicitly show calculation of convtranspose2d padding
pad1 = 3-1
inp_unf = torch.nn.functional.unfold(img, (3,3), padding=(pad0,pad1))
w = torch.rot90(kernel, 2, [2,3])
# this is done the same way as forward convolution
out_unf = inp_unf.transpose(1, 2).matmul(w.view(w.size(0), -1).t()).transpose(1, 2)
out = out_unf.view(true_convt2d.shape)
print((true_convt2d-out).abs().max())
print(true_convt2d.abs().max())
Running this code will give a small error value around 1e-5, but you can see the magnitude of the output is around 90 so it’s close enough. I think this is due to optimization in the backend.
I hope this answer becomes “googleable” for others looking for this information. I am a new user so I can apparently only put 2 links in a post. If you want more, please DM me. There is a formula for calculating padding on Data Science Stack Exchange (though it’s not too hard to figure out) if you search “how-to-calculate-the-output-shape-of-conv2d-transpose”.
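For reference (if I remember the docs correctly), the output-size formula for conv_transpose2d is:
out = (in - 1)*stride - 2*padding + dilation*(kernel_size - 1) + output_padding + 1
With the defaults used above (stride=1, padding=0, dilation=1, output_padding=0) this gives out = in + kernel_size - 1, which matches the unfold version: padding each side with kernel_size - 1 yields (in + 2*(kernel_size - 1)) - kernel_size + 1 = in + kernel_size - 1.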
Disclaimer: I am pretty sure this is correct, but it still could be wrong. Also, this doesn’t take into account padding, strides, dilation, or groups.
Sources:
Thorough descriptions of convolution
Visualization that explains rotation
|
st32169
|
more helpful sources provided by @santacml
https://danieltakeshi.github.io/2019/03/09/conv-matmul/
https://towardsdatascience.com/backpropagation-in-a-convolutional-layer-24c8d64d8509
github.com/vdumoulin/conv_arithmetic: A technical report on convolution arithmetic in the context of deep learning
|
st32170
|
Hello, I have a file where I trained a BERT model and saved the state_dict of the model. How can I load the model in another .py file and use it to predict the polarity of sentences entered from the keyboard? I have the below code:
import torch
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-uncased', do_lower_case=True)
model.load_state_dict(torch.load('model.pt'))
model.eval()
inputs = tokenizer("I do not feel good today.", return_tensors="pt")
outputs = model(**inputs)
print(outputs)
Assuming that "I do not feel good today." is the sentence entered from the keyboard, I would like the model to return 0 if it is negative or 1 if it is positive. How can I define the model so it will not be linked to the .py file where I trained the model? I tried the below, but that will start the training again (BertClassifier is the class where I trained the model):
from bert import BertClassifier
model = BertClassifier(freeze_bert=False)
|
st32171
|
I am quite new to PyTorch. I have checked the discussions a lot but I could not find anything about this issue. I have 4 models which each require around 6 GB of memory on the GPU, but we have a limited number of GPUs. So I want to parallelize all 4 models and I wonder if it is possible. I have thought about wrapping all modules within a new module, but I don’t know if the calculations would be parallel or sequential:
class combinedModel(nn.Module):
    def __init__(self):
        super(combinedModel, self).__init__()
        self.model1 = Model1()
        self.model2 = Model2()
        self.model3 = Model3()
    def forward(self, x):
        out1 = self.model1(x)
        out2 = self.model2(x)
        out3 = self.model3(x)
        return out1, out2, out3
The models don’t have any shared parameters, and I am going to optimize each of them separately. How can I utilize a single GPU’s memory instead of placing each model on a separate GPU?
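One thing I came across (a sketch I have not verified; the forwards can only overlap if the GPU has spare compute capacity) is launching each forward on its own CUDA stream instead of the default stream:
streams = [torch.cuda.Stream() for _ in range(3)]
models = [self.model1, self.model2, self.model3]
outs = [None, None, None]
for k, (s, m) in enumerate(zip(streams, models)):
    s.wait_stream(torch.cuda.current_stream())  # make sure x is ready on this stream
    with torch.cuda.stream(s):
        outs[k] = m(x)
torch.cuda.synchronize()  # wait for all three forwards before using the outputs
As written in the forward above, the three calls are queued on the same default stream and therefore run one after another.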
|
st32172
|
Say that in an image folder with 9k images I have 4k images of size (100, 400), 2k images of size (150, 350), and the rest have a size of (200, 500). How can I train with such data? How would I define collate_fn to handle such images and effectively create batches of same-sized images? If I preprocess and save 3 hdf5 files for the three different sizes, how would I use all three in a Dataset and collate_fn? Or is there a way that a single hdf5 file can store all three subsets of data without using padding?
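One idea I had (a rough sketch, assuming the dataset can report each image's size without decoding it, e.g. via a hypothetical precomputed dataset.sizes list) is to bucket indices by size and pass the buckets to the DataLoader as a batch_sampler:
from collections import defaultdict
from torch.utils.data import DataLoader

buckets = defaultdict(list)
for idx in range(len(dataset)):
    buckets[dataset.sizes[idx]].append(idx)  # hypothetical: sizes[idx] is the (H, W) of sample idx
# split every bucket into fixed-size batches; each batch then contains one image size only
batches = [b[i:i + 64] for b in buckets.values() for i in range(0, len(b), 64)]
loader = DataLoader(dataset, batch_sampler=batches)
Since every batch comes from a single bucket, the default collate_fn could stack the tensors without padding; I am not sure if this is the idiomatic way, though.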
|
st32173
|
Hello community,
I recently built a U-Net for predicting Digital Surface Models (DSMs, or heightmaps). My input has a shape of 3x512x512 for the RGB channels of my satellite image, while my ground truth is a DSM with 1x512x512 respectively. Everything works fine and the losses look good. But I am wondering about metrics to measure something like accuracy or dice loss.
The task of course is not really classic semantic segmentation, because the resulting DSM does not contain 1 for segmented and 0 for the rest; it contains the corresponding height of each pixel. Thus it is useless to have an accuracy where you compare the predicted DSM with the target one pixel by pixel, because it will almost never have the EXACT height (being float), isn’t it?
My idea was to have something like a 10 percent range for the element-wise comparison (see the sketch after the example below), but I don’t know if this can be done very efficiently. But maybe this is a known problem…?
Small example:
Target heights:     [[20.5435, 23.6946],
                     [19.59,   22.646 ]]
Prediction heights: [[18.9545, 23.5007],
                     [20.0153, 26.5432]]
This is just a made-up example; I do not know how close the values actually will get… Also, is there a way to write nice matrices in a question?
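Something like this is what I have in mind for the 10 percent idea (a rough, untested sketch; pred and target are the predicted and ground-truth DSM tensors):
# fraction of pixels whose predicted height is within 10% of the target height
rel_err = (pred - target).abs() / target.abs().clamp(min=1e-6)  # clamp guards against zero heights
within_10pct = (rel_err < 0.1).float().mean()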
|
st32174
|
Hello, maybe it’s easy, but it is very confusing to me.
I am doing binary classification with BCEWithLogitsLoss.
this is my Model:
class BreastCancerModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 3, 1)
        self.conv2 = nn.Conv2d(6, 10, 3, 1)
        self.conv3 = nn.Conv2d(10, 13, 3, 1)
        self.fc1 = nn.Linear(13*4*4, 84)
        self.fc2 = nn.Linear(84, 10)
        self.fc3 = nn.Linear(10, 1)
    def forward(self, X):
        X = X.view(-1, 3, 50, 50)
        X = F.relu(self.conv1(X))
        X = F.max_pool2d(X, 2, 2)
        X = F.relu(self.conv2(X))
        X = F.max_pool2d(X, 2, 2)
        X = F.relu(self.conv3(X))
        X = F.max_pool2d(X, 2, 2)
        X = X.view(-1, 13*4*4)
        X = F.relu(self.fc1(X))
        X = F.relu(self.fc2(X))
        X = self.fc3(X)
        return X
This is the training loop:
def train(optim, criterion, num_epochs):
    train_correct = []
    train_losses = []
    test_correct = []
    test_losses = []
    for i in range(num_epochs):
        trn_corr = 0
        tst_corr = 0
        for b, (X_train, y_train) in enumerate(train_loader):
            b += 1
            y_pred = model(X_train)
            print('ypred')
            print(y_pred)
            loss = criterion(y_pred, y_train.unsqueeze(1).float())
            print('ytrain')
            print(y_train.unsqueeze(1))
            # get the number of correct predictions
            predicted = torch.round((y_pred.data)[1])
            print('predicted')
            print(predicted)
            batch_corr = (predicted == y_train).sum()
            trn_corr += batch_corr
            # update parameters
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            # if b%1000 == 0:
            print(f'epoch: {i:2}/{num_epochs} loss: {loss.item():10.8f} \
accuracy: {trn_corr.item()*100/(64*b):7.3f}%')
        train_losses.append(loss)
        train_correct.append(trn_corr)
        with torch.no_grad():
            for b, (X_test, y_test) in enumerate(val_loader):
                y_val = torch.sigmoid(model(X_test))
                loss = criterion(y_val, y_test.unsqueeze(1).float())
                predicted = round(y_val.data)[1]
                tst_corr += (predicted == y_test).sum()
        test_losses.append(loss)
        test_correct.append(tst_corr)
This is the output of the first batch, and it is very confusing to me. Should I be doing the training loop differently?
ypred
tensor([[0.0236],
        [0.0236],
        [0.0236],
        ...
        [0.0236]], grad_fn=<AddmmBackward>)
(all 64 rows are exactly 0.0236)
ytrain
tensor([[1.],
[0.],
[1.],
[1.],
[0.],
[1.],
[1.],
[0.],
[0.],
[1.],
[0.],
[0.],
[1.],
[0.],
[1.],
[0.],
[1.],
[0.],
[1.],
[1.],
[0.],
[0.],
[1.],
[1.],
[0.],
[0.],
[1.],
[1.],
[0.],
[0.],
[1.],
[0.],
[1.],
[0.],
[0.],
[0.],
[1.],
[1.],
[0.],
[0.],
[0.],
[0.],
[1.],
[0.],
[0.],
[0.],
[0.],
[0.],
[1.],
[1.],
[0.],
[1.],
[0.],
[0.],
[1.],
[1.],
[1.],
[1.],
[1.],
[0.],
[1.],
[0.],
[1.],
[1.]])
predicted
tensor([0.])
What should I change in the training loop for the accuracy to be correct?
Right now my accuracy is 50%.
|
st32175
|
I think there may be an issue with the training data here as I cannot reproduce this output with random input:
net = BreastCancerModel().cuda()
x = torch.randn((8, 3, 50, 50), device='cuda')
print(net(x))
tensor([[0.0419],
[0.0497],
[0.0391],
[0.0421],
[0.0439],
[0.0445],
[0.0450],
[0.0524]], device='cuda:0', grad_fn=<AddmmBackward>)
Can you print an example of the input to the model (e.g., the first batch)?
|
st32176
|
Thanks for your reply.
Here is first batch:
tensor([[0.2174],
[0.3320],
[0.2788],
[0.4845],
[0.1838],
[0.2834],
[0.2081],
[0.2673],
[0.1727],
[0.2490],
[0.4976],
[0.2831],
[0.2340],
[0.3234],
[0.2539],
[0.2469],
[0.1592],
[0.3147],
[0.4204],
[0.3544],
[0.3064],
[0.1808],
[0.2562],
[0.3065],
[0.0838],
[0.2785],
[0.3098],
[0.2009],
[0.2919],
[0.2356],
[0.3093],
[0.2793],
[0.2551],
[0.3788],
[0.2043],
[0.3354],
[0.2083],
[0.3548],
[0.3419],
[0.4007],
[0.2365],
[0.3014],
[0.2157],
[0.2639],
[0.4706],
[0.2764],
[0.3063],
[0.3154],
[0.3702],
[0.2395],
[0.2813],
[0.3398],
[0.3706],
[0.3421],
[0.2573],
[0.2464],
[0.3041],
[0.2284],
[0.3122],
[0.2555],
[0.2161],
[0.2907],
[0.1727],
[0.3898]]
All of them are class 0, but I did many things while processing the data. The data is well balanced and every image is readable.
Should I do more preprocessing?
|
st32177
|
That seems like the model output rather than the model input. However, it looks different from what you posted before (the same value repeated). If you can verify that the starting loss is about 0.693 then it suggests the model itself is probably fine.
|
st32178
|
Maybe it is because of the number of model parameters.
def count_parameters(model):
    params = [p.numel() for p in model.parameters() if p.requires_grad]
    for item in params:
        print(f'{item:>6}')
    print(f'______\n{sum(params):>6}')
count_parameters(model)
162
6
540
10
1170
13
17472
84
840
10
20
2
______
20329
20,000 parameters might be very low for this task.
I think I just need to increase the parameter count.
Thanks for your reply.
|
st32179
|
I have broadcast the model to every GPU for multi-GPU inference, and I set something on rank 0, as below:
if rank == 0:
    model.cfg = 'my setting'
    do_something()
(In do_something I read model.cfg, but I found that only rank 0 has 'my setting'. So how do I broadcast this to all GPUs?)
|
st32180
|
You can use dist.broadcast_object_list which allows you to broadcast picklable objects across all of your workers: torch.distributed.distributed_c10d — PyTorch 1.8.1 documentation
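A minimal sketch of how that could look for this use case (assuming model.cfg is picklable and the process group is already initialized):
import torch.distributed as dist

# every rank passes a list of the same length; rank 0 supplies the payload
objects = [model.cfg if rank == 0 else None]
dist.broadcast_object_list(objects, src=0)
model.cfg = objects[0]  # now identical on all ranks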
|
st32181
|
When I tested this code:
def forward(self, x):
    x = self.hidden0(x)
    x = self.hidden1(x)
    x = self.hidden2(x)
    x = self.out(x)
    return x
|
st32182
|
Unfortunately you are not explaining any issues in the post, so please try to describe the error as well as the expected behavior, and, if possible, post a minimal code snippet to reproduce the issue.
|
st32183
|
[screenshot of the error message]
with torch.no_grad():
    output1 = model(((train_x.float(), train_x1.float())))
    softmax1 = torch.exp(output1).cpu()
    prob1 = list(softmax1.numpy())
    predictions1 = np.argmax(prob1, axis=1)
    print('Validation accuracy train: {:.4f}%'.format(float(accuracy_score(train_y, predictions1)) * 100))
help me @ptrblck
|
st32184
|
Based on the error message the model expects (at least) two inputs, while you are passing a single tuple to it, so you would need to remove the () around the tensors.
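In other words (a sketch based on the snippet above):
output1 = model(train_x.float(), train_x1.float())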
|
st32185
|
Hi, I am working with time series data. I have a problem that confuses me. I tuned a neural network with the same implementation in both Keras and PyTorch but got different results.
This is not the only problem. The Keras model always gives the same results (every time I train the model). But the PyTorch model gives results consistent with the Keras model in only about 10% of cases, and most of the time it has the very bad results that I described (and of course nothing like the results of Keras).
Please guide me. Thanks!
keras model:
model_input = keras.Input(shape=(x_train_T.shape[1], 8))
x_1 = layers.GRU(75,return_sequences=True)(model_input)
x_1 = layers.GRU(90)(x_1)
x_1 = layers.Dense(95)(x_1)
x_1 = layers.Dense(15)(x_1)
model = keras.models.Model(model_input, x_1)
model.compile(optimizer= adam_optim, loss= "mse" , metrics='accuracy')
model.fit(x_train_T, y_train, batch_size=1, epochs = 100)
pytorch model:
class GRU(nn.Module):
    def __init__(self, input_size, hidden_size_1, hidden_size_2, hidden_size_3, output_size, num_layers, device):
        super(GRU, self).__init__()
        self.input_size = input_size
        self.hidden_size_1 = hidden_size_1
        self.hidden_size_2 = hidden_size_2
        self.hidden_size_3 = hidden_size_3
        self.num_layers = num_layers
        self.device = device
        self.gru_1 = nn.GRU(input_size, hidden_size_1, num_layers, batch_first=True)
        self.gru_2 = nn.GRU(hidden_size_1, hidden_size_2, num_layers, batch_first=True)
        self.fc_1 = nn.Linear(hidden_size_2, hidden_size_3)
        self.fc_out = nn.Linear(hidden_size_3, output_size)  # was output_dim (a global); use the ctor argument
    def forward(self, x):
        input_X = x
        h_1 = torch.zeros(self.num_layers, input_X.size(0), self.hidden_size_1, device=self.device)
        h_2 = torch.zeros(self.num_layers, input_X.size(0), self.hidden_size_2, device=self.device)
        out_gru_1, h_1 = self.gru_1(input_X, h_1)
        out_gru_2, h_2 = self.gru_2(out_gru_1, h_2)
        out_Dense_1 = self.fc_1(out_gru_2[:, -1, :])
        out_Dense_out = self.fc_out(out_Dense_1)
        return out_Dense_out
##############################
input_dim = 8
hidden_dim_1 = 75
hidden_dim_2 = 90
hidden_dim_3 = 95
num_layers = 1
output_dim = 15
num_epochs = 100
model = GRU(input_size=input_dim, hidden_size_1 = hidden_dim_1, hidden_size_2 = hidden_dim_2, hidden_size_3 = hidden_dim_3,output_size = output_dim, num_layers=num_layers, device = device)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
import time
for t in range(num_epochs):
    start_time = time.time()
    loss_p = []
    for i in range(x_train_T.size(0)):
        inputs, target = x_train_T[i:i+1], y_train[i:i+1]
        inputs = inputs.float().to(device)   # already tensors; avoid torch.tensor(tensor)
        target = target.float().to(device)
        y_train_pred = model(inputs)
        loss_ = criterion(y_train_pred, target)
        optimizer.zero_grad()
        loss_.backward()
        optimizer.step()
        loss_p.append(loss_.item())  # .item() so the list can be converted to a numpy array
    loss_p = np.array(loss_p)
    loss_P = loss_p.sum(0) / loss_p.shape[0]
    end_time = time.time()
    print("Epoch ", t, "MSE: ", loss_P.item(), "///epoch time: {0} seconds".format(round(end_time - start_time, 2)))
##############################
In rare cases, the loss of both starts at approximately 0.09 and ends at approximately 0.015.
In most cases, the loss behaves the same for the Keras model, but for the PyTorch model it stays at 0.08.
I.e., sometimes the PyTorch model trains and sometimes it doesn't.
I think I should initialize the PyTorch layers the same way as the Keras layers,
but how?
LSTM initialization in Keras is as follows:
def __init__(units, activation='tanh', recurrent_activation='sigmoid', use_bias=True, kernel_initializer='glorot_uniform', recurrent_initializer='orthogonal', bias_initializer='zeros', kernel_regularizer=None, recurrent_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, recurrent_constraint=None, bias_constraint=None, dropout=0.0, recurrent_dropout=0.0, return_sequences=False, return_state=False, go_backwards=False, stateful=False, unroll=False, time_major=False, reset_after=True, **kwargs)
and linear layers:
def __init__(units, activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None, **kwargs)
How do I initialize the layers like this in PyTorch?
|
st32187
|
You can use the torch.nn.init methods to initialize the parameters as shown in e.g. this post.
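A minimal sketch of the usual pattern (assuming your model instance is called model, it contains Linear layers, and you want Keras-style glorot/zeros defaults):
import torch.nn as nn

def init_weights(m):
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)  # "glorot_uniform" in Keras terms
        if m.bias is not None:
            nn.init.zeros_(m.bias)

model.apply(init_weights)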
|
st32188
|
Hi @ptrblck, as a small follow-up question on that: I was wondering how we could use an if statement to initialize the kernel and the recurrent base separately for an LSTM in PyTorch, as Keras uses orthogonal initialization for the recurrent layers and glorot for the kernel.
Something like:
def init_weights(model):
    if type(model) == nn.RNNBase:
        # Do orthogonal
    elif type(model) == nn.Linear:
        # Do glorot

rnn = nn.LSTM(10, 20, 2)
rnn.apply(init_weights)
I was trying to understand the right conditions from torch.nn.modules.rnn — PyTorch 1.8.1 documentation but wasn’t able to figure it out.
|
st32189
|
If I understand the use case correctly, you would like to use different init methods for the internal nn.LSTM parameters. In that case you could use a similar approach as seen here:
lstm = nn.LSTM(1, 1)
torch.nn.init.xavier_uniform_(lstm.weight_ih_l0)
Let me know, if this works for you.
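A more general variant of the same idea (a sketch, assuming you want the Keras-like defaults: glorot for the input kernels, orthogonal for the recurrent kernels, zeros for the biases) could iterate over the named parameters:
lstm = nn.LSTM(10, 20, 2)
for name, param in lstm.named_parameters():
    if 'weight_ih' in name:
        torch.nn.init.xavier_uniform_(param)
    elif 'weight_hh' in name:
        torch.nn.init.orthogonal_(param)
    elif 'bias' in name:
        torch.nn.init.zeros_(param)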
|
st32190
|
ptrblck:
lstm = nn.LSTM(1, 1)
torch.nn.init.xavier_uniform_(lstm.weight_ih_l0)
Thanks, this works. I'm not sure about its impact on training yet, but the reference to .weight_ih_l0 is what I was looking for.
|
st32191
|
Initialization did not solve the problem either.
1. What is in the Keras model that is not in the PyTorch model?
2. Did I write the PyTorch model correctly according to the Keras model?
3. What is your advice for solving this problem?
The loss decreases from 0.1 to 0.01 in the Keras model, but stops at 0.08 in the PyTorch model (most of the time).
|
st32192
|
1. I don’t know how exactly the Keras model is working, so I cannot give you a proper answer. E.g. could you explain what the difference in the outputs for the Keras GRU would be if return_sequences is either set to True or False? Also, do the Dense layers automatically apply an activation function without you specifying it? If so, you might want to add this non-linearity to the PyTorch model as well.
2. Same as number 1.
3. To properly debug the difference you could store all parameters of the Keras model, load them into the PyTorch model, and compare the outputs of both. If the difference for the same input is larger than the expected error due to the floating point precision, you could then compare the outputs of each layer and narrow down the difference further.
|
st32193
|
Hello folks, any help would be appreciated. I am stuck at one point.
I am trying to use a siamese neural network to predict whether my images are similar or not. I am getting the following error while loading data into the DataLoader. Why is the DataLoader unable to load all images, and why is it giving me the error "cannot choose from an empty sequence"?
Causing an error at:
for i, data in enumerate(trainloader,0):
[screenshot of the stack trace]
below is my whole code:
BATCH_SIZE = 64
NUMBER_EPOCHS = 100
IMG_SIZE = 100
#F09xx are used for validation.
val_famillies = "XZY"
#An example of data: "../input/train/F00002/MID1/P0001_face1.jpg"
all_images = glob("/home/Phillip/Data/test3/*/*.png")
print(len(all_images))
#train_images = [x for x in all_images if val_famillies not in x]
#val_images = [x for x in all_images if val_famillies in x]
train_images = all_images[0:2000]
print(len(train_images))
val_images = all_images[2000:]
print(len(val_images))
train_person_to_images_map = defaultdict(list)  #Put the link of each picture under the key of a person such as "F0002/MID1"
for x in train_images:
    train_person_to_images_map[x.split("/")[-3] + "/" + x.split("/")[-2]].append(x)
val_person_to_images_map = defaultdict(list)
for x in val_images:
    val_person_to_images_map[x.split("/")[-3] + "/" + x.split("/")[-2]].append(x)
ppl = [x.split("/")[-3] + "/" + x.split("/")[-2] for x in all_images]
relationships = pd.read_csv("/home/phillip/Data/Test.csv")
#print(relationships)
relationships = list(zip(relationships.p1.values, relationships.p2.values))  #For a list like [p1 p2], zip returns a result like [(p1[0],p2[0]), (p1[1],p2[1]), ...]
#print(relationships)
#relationships = [x for x in relationships if x[0] not in ppl and x[1] not in ppl]  #filter unused relationships
print(len(relationships))
#train = [x for x in relationships if val_famillies not in x[0]]
#val = [x for x in relationships if val_famillies in x[0]]
train = relationships[0:9400]
val = relationships[9400:]
print("Total train pairs:", len(train))
print("Total val pairs:", len(val))

class trainingDataset(Dataset):  #Get two images and whether they are related.
    def __init__(self, imageFolderDataset, relationships, transform=None):
        self.imageFolderDataset = imageFolderDataset
        self.relationships = relationships  #choose either the train or the val dataset to use
        self.transform = transform
    def __getitem__(self, index):
        img0_info = self.relationships[index][0]  #for each relationship in train_relationships.csv, the first img comes from the first row, and the second is either a specially chosen related person or a randomly chosen non-related person
        img0_path = glob("/home/phillip/Data/test3/"+img0_info+"/*.png")
        img0_path = random.choice(img0_path)
        cand_relationships = [x for x in self.relationships if x[0]==img0_info or x[1]==img0_info]  #find all candidates related to the person in img0
        if cand_relationships==[]:  #in case no relationship is mentioned. But it is useless here because I choose the first person line by line.
            should_get_same_class = 0
        else:
            should_get_same_class = random.randint(0,1)
        if should_get_same_class==1:  #1 means related, and 0 means non-related.
            img1_info = random.choice(cand_relationships)  #choose the second person from the related relationships
            if img1_info[0]!=img0_info:
                img1_info=img1_info[0]
            else:
                img1_info=img1_info[1]
            img1_path = glob("/home/phillip/Data/test3/"+img1_info+"/*.png")  #randomly choose an image of this person
            img1_path = random.choice(img1_path)
        else:  #0 means non-related
            randChoose = True  #in case the chosen person is related to the first person
            while randChoose:
                img1_path = random.choice(self.imageFolderDataset.imgs)[0]
                img1_info = img1_path.split("/")[-3] + "/" + img1_path.split("/")[-2]
                randChoose = False
                for x in cand_relationships:  #if so, randomly choose another person
                    if x[0]==img1_info or x[1]==img1_info:
                        randChoose = True
                        break
        img0 = Image.open(img0_path)
        img1 = Image.open(img1_path)
        if self.transform is not None:  #I think the transform is essential if you want to use the GPU, because you have to transform the data to tensors first.
            img0 = self.transform(img0)
            img1 = self.transform(img1)
        return img0, img1, should_get_same_class  #the returned data from the dataloader is img=[batch_size,channels,width,length], should_get_same_class=[batch_size,label]
    def __len__(self):
        return len(self.relationships)  #essential to choose the number of data in one epoch

folder_dataset = dset.ImageFolder(root='/home/phillip/Data/test3')
trainset = trainingDataset(imageFolderDataset=folder_dataset,
                           relationships=train,
                           transform=transforms.Compose([transforms.Resize((IMG_SIZE, IMG_SIZE)),
                                                         transforms.ToTensor()
                                                         ]))
trainloader = DataLoader(trainset,
                         shuffle=True,  #randomly shuffle data in each epoch; data within one batch cannot be kept in order.
                         num_workers=8,
                         batch_size=BATCH_SIZE)
valset = trainingDataset(imageFolderDataset=folder_dataset,
                         relationships=val,
                         transform=transforms.Compose([transforms.Resize((IMG_SIZE, IMG_SIZE)),
                                                       transforms.ToTensor()
                                                       ]))
valloader = DataLoader(valset,
                       shuffle=True,
                       num_workers=8,
                       batch_size=BATCH_SIZE)

class SiameseNetwork(nn.Module):  # A simple implementation of a siamese network; three conv blocks, then three fc layers.
    def __init__(self):
        super(SiameseNetwork, self).__init__()
        #self.cnn1 = models.resnet50(pretrained=True)  #resnet50 doesn't work, maybe because the pretrained model recognizes all faces as the same.
        self.cnn1 = nn.Sequential(
            nn.ReflectionPad2d(1),
            nn.Conv2d(3, 64, kernel_size=3),
            nn.ReLU(inplace=True),
            nn.BatchNorm2d(64),
            nn.Dropout2d(p=.2),
            nn.ReflectionPad2d(1),
            nn.Conv2d(64, 64, kernel_size=3),
            nn.ReLU(inplace=True),
            nn.BatchNorm2d(64),
            nn.Dropout2d(p=.2),
            nn.ReflectionPad2d(1),
            nn.Conv2d(64, 32, kernel_size=3),
            nn.ReLU(inplace=True),
            nn.BatchNorm2d(32),
            nn.Dropout2d(p=.2),
        )
        self.fc1 = nn.Linear(2*32*100*100, 500)
        #self.fc1 = nn.Linear(2*1000, 500)
        self.fc2 = nn.Linear(500, 500)
        self.fc3 = nn.Linear(500, 2)
    def forward(self, input1, input2):  #did not know how to let two resnets share the same parameters.
        output1 = self.cnn1(input1)
        output1 = output1.view(output1.size()[0], -1)  #make it suitable for the fc layer.
        output2 = self.cnn1(input2)
        output2 = output2.view(output2.size()[0], -1)
        output = torch.cat((output1, output2), 1)
        output = F.relu(self.fc1(output))
        output = F.relu(self.fc2(output))
        output = self.fc3(output)
        return output

import random
net = SiameseNetwork().cuda()
criterion = nn.CrossEntropyLoss()  # use a classification cross-entropy loss
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
counter = []
loss_history = []
iteration_number = 0
for epoch in range(0, NUMBER_EPOCHS):
    print("Epoch:", epoch, " start.")
    for i, data in enumerate(trainloader, 0):
        img0, img1, labels = data  #img=tensor[batch_size,channels,width,length], label=tensor[batch_size,label]
        img0, img1, labels = img0.cuda(), img1.cuda(), labels.cuda()  #move to GPU
        #print("epoch:", epoch, "No.", i, "th inputs", img0.data.size(), "labels", labels.data.size())
        optimizer.zero_grad()  #clear the gradients calculated in the previous batch
        outputs = net(img0, img1)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        if i % 10 == 0:  #show changes of the loss value after every 10 batches
            #print("Epoch number {}\n Current loss {}\n".format(epoch, loss.item()))
            iteration_number += 10
            counter.append(iteration_number)
            loss_history.append(loss.item())
    #test the network after finishing each epoch, to have a brief training result.
    correct_val = 0
    total_val = 0
    with torch.no_grad():  #essential for testing!!!!
        for data in valloader:
            img0, img1, labels = data
            img0, img1, labels = img0.cuda(), img1.cuda(), labels.cuda()
            outputs = net(img0, img1)
            _, predicted = torch.max(outputs.data, 1)
            total_val += labels.size(0)
            correct_val += (predicted == labels).sum().item()
    print('Accuracy of the network on the', total_val, 'val pairs in', val_famillies, ': %d %%' % (100 * correct_val / total_val))
show_plot(counter, loss_history)
My dataset folder structure is like this:
Dataset/
    Class 1/
        Images
    Class 2/
        Images
Then why am I getting an error that it cannot choose from an empty sequence?
|
st32194
|
Based on the error message this code is raising the issue:
img0_info = self.relationships[index][0]#for each relationship in train_relationships.csv, the first img comes from first row, and the second is either specially choosed related person or randomly choosed non-related person
img0_path = glob("/home/phillip/Data/test3/"+img0_info+"/*.png")
img0_path = random.choice(img0_path)
so you could check what img0_path contains by adding a print statement to the __getitem__ method.
I guess img0_info is invalid and is thus creating an invalid path, such that glob cannot find any files and the choice operation fails in the end.
|
st32195
|
Thanks @ptrblck. Yes, you were right. There was a problem at img0_info = self.relationships[index][0].
I printed the data for img0_info and img0_path; it was as shown in the figure below:
[screenshot of the printed img0_info and img0_path values]
also, I checked the data for relationship
relationships = pd.read_csv("/home/phillip/Data/Test.csv")
It is as below:
....('DE_PH_Bein_2020-02_15', 'DE_PH_Bein_2020-06_10'), ('DE_PH_Bein_2020-02_15', 'DE_PH_Bein_2020-06_00'), ('DE_PH_Bein_2020-02_15', 'DE_PH_Bein_2020-06_15'), ('DE_PH_Bein_2020-02_19', 'DE_PH_Bein_2020-06_06'), ('DE_PH_Bein_2020-02_19', 'DE_PH_Bein_2020-06_14'), ('DE_PH_Bein_2020-02_19', 'DE_PH_Bein_2020-06_01'), ('DE_PH_Bein_2020-02_19', 'DE_PH_Bein_2020-06_07'), ('DE_PH_Bein_2020-02_19', 'DE_PH_Bein_2020-06_19'), ('DE_PH_Bein_2020-02_19', 'DE_PH_Bein_2020-06_05'), ('DE_PH_Bein_2020-02_19', 'DE_PH_Bein_2020-06_12'), ('DE_PH_Bein_2020-02_19', 'DE_PH_Bein_2020-06_08'), ('DE_PH_Bein_2020-02_19', 'DE_PH_Bein_2020-06_03'), ('DE_PH_Bein_2020-02_19', 'DE_PH_Bein_2020-06_09'),....
so the data for the relationship looks good to me.
I am not getting why the data for img0_info is invalid.
def __getitem__(self, index):
    img0_info = self.relationships[index][0]
    img0_path = glob("/home/phillip/Data/test3/"+img0_info+"/*.png")
    img0_path = random.choice(img0_path)
Is there any mistake I am making over here?
Any solution/idea would be appreciated. Thanks in advance.
|
st32196
|
A simple approach to check for valid data would be to print the final img0_path, where the images should be located, copy this path, and check the files in the terminal via:
ls COPIED_PATH | grep png
If no output is shown, this folder does not contain any png files and the choice operation is thus expected to fail.
|
st32197
|
My GPU usage:
# nvidia-smi
Fri May 21 13:31:47 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.87.01 Driver Version: 418.87.01 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla V100-SXM2... On | 00000000:00:10.0 Off | 0 |
| N/A 33C P0 56W / 300W | 16126MiB / 16130MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 Tesla V100-SXM2... On | 00000000:00:11.0 Off | 0 |
| N/A 33C P0 57W / 300W | 1517MiB / 16130MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 2 Tesla V100-SXM2... On | 00000000:00:12.0 Off | 0 |
| N/A 39C P0 56W / 300W | 1519MiB / 16130MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 3 Tesla V100-SXM2... On | 00000000:00:13.0 Off | 0 |
| N/A 55C P0 278W / 300W | 15965MiB / 16130MiB | 97% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
x_data = torch.tensor([[1, 2], [3, 4]])
x_data.to("cuda:0") and x_data.to("cuda:3") caused:
RuntimeError: CUDA error: out of memory
'cuda:1' and 'cuda:2' are fine without a memory error.
Does PyTorch automatically make use of GPUs that currently have free memory? Also, the output above shows that GPU 0 is not utilized at all, so why did it also cause an out-of-memory issue?
|
st32198
|
lingvisa:
Does Pytorch automatically make use of GPUs that currently have free memory?
No, the used GPU is specified via its index as given in your code example.
lingvisa:
Also, the above picture shows that GPU 0 is not used at all, why it also caused out of memory issue?
The output shows that GPU0 is almost completely filled, so I’m unsure what “not used at all” would mean in this context. Since this device has only ~4MB left, I would assume that a new allocation would raise an out of memory error.
|
st32199
|
@ptrblck When I say GPU 0 is "not used at all", I am looking at the GPU utilization column, which shows it is 0% used. So why is the GPU at almost 0% utilization while at the same time its memory is used up?
Another question regarding the 2nd table: why is the 'GPU Memory Usage' table blank? I thought it should show the actual GPU memory usage, which is different from the GPU usage column in the first table?
|
st32200
|
Also, I found this little snippet to print out the current GPU usage, which shows the first 2 GPUs are not used:
In [1]: import nvidia_smi
...:
...: nvidia_smi.nvmlInit()
...:
...: deviceCount = nvidia_smi.nvmlDeviceGetCount()
...: for i in range(deviceCount):
...: handle = nvidia_smi.nvmlDeviceGetHandleByIndex(i)
...: info = nvidia_smi.nvmlDeviceGetMemoryInfo(handle)
...: print("Device {}: {}, Memory : ({:.2f}% free): {}(total), {} (free), {} (used)".format(i, nvidia_smi.nvmlDeviceGetName(handle), 100*info.free/info.total, info.total, info.free, info.used))
...:
...: nvidia_smi.nvmlShutdown()
Device 0: b'Tesla V100-SXM2-16GB', Memory : (100.00% free): 16914055168(total), 16913989632 (free), 65536 (used)
Device 1: b'Tesla V100-SXM2-16GB', Memory : (100.00% free): 16914055168(total), 16913989632 (free), 65536 (used)
Device 2: b'Tesla V100-SXM2-16GB', Memory : (3.82% free): 16914055168(total), 646053888 (free), 16268001280 (used)
Device 3: b'Tesla V100-SXM2-16GB', Memory : (3.82% free): 16914055168(total), 646053888 (free), 16268001280 (used)
|
st32201
|
The GPU utilization gives for a specific time period the percentage of time one or more GPU kernel(s) were running on the device. If your script shows a low utilization, you could profile it and check where the bottlenecks are. Usually this indicates, that the GPU is “starving”, i.e. your script cannot provide the data fast enough, which might happen for e.g. a slow data loading compared to the model execution.
lingvisa:
Another question regrading the 2nd table, why is the ‘GPU Memory Usage’ table is blank? I though it should shows the actual GPU memory usage, which is different from the gpu usage column in the first table?
The second row would show all processes using the device. If this information is empty, it could point towards permission issues on your system, so that nvidia-smi doesn’t get information about the running processes and thus cannot display the memory each one is using.
lingvisa:
Also, I found this little code to print out the currently gpu usage, which shows the first 2 gpu is not used:
The output of the script seems to match the output of nvidia-smi (it seems to be calling it, so this would be expected) but it seems that the device ids changed this time.
|
st32202
|
Looks like torch.jit is interpreting nn.Embedding and nn.LSTMCell as python functions. The code and error are below.
import torch
class Model(torch.jit.ScriptModule):
    def __init__(self):
        super(Model, self).__init__()
        self.embedding = torch.nn.Embedding(128, 512)
    @torch.jit.script_method
    def forward(self, x):
        return self.embedding(x)
m = Model()
m.save('temp.pt')
Traceback (most recent call last):
  File "test.py", line 16, in <module>
    m.save('temp.pt')
RuntimeError: Couldn't export Python operator <python_value>
Defined at:
@torch.jit.script_method
def forward(self, x):
    return self.embedding(x)
           ~~~~~~~~~~~~~~ <--- HERE
|
st32203
|
Torch Script supports a subset of the builtin tensor and neural network functions that PyTorch provides. Most methods on Tensor as well as functions in the torch namespace, all functions in torch.nn.functional and all modules from torch.nn are supported in Torch Script, excluding those in the table below. For unsupported modules, we suggest using torch.jit.trace().
Unsupported torch.nn modules:
torch.nn.modules.adaptive.AdaptiveLogSoftmaxWithLoss
torch.nn.modules.normalization.CrossMapLRN2d
torch.nn.modules.fold.Fold
torch.nn.modules.fold.Unfold
torch.nn.modules.rnn.GRU
torch.nn.modules.rnn.LSTM
torch.nn.modules.rnn.RNN
torch.nn.modules.rnn.GRUCell
torch.nn.modules.rnn.LSTMCell
torch.nn.modules.rnn.RNNCell
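For example, a module wrapping nn.LSTMCell could still be exported via tracing (a sketch; note that tracing specializes the graph to the example inputs):
import torch
import torch.nn as nn

class Cell(nn.Module):
    def __init__(self):
        super(Cell, self).__init__()
        self.cell = nn.LSTMCell(10, 20)
    def forward(self, x, h, c):
        return self.cell(x, (h, c))  # returns the new (h, c)

m = Cell()
example = (torch.randn(3, 10), torch.randn(3, 20), torch.randn(3, 20))
traced = torch.jit.trace(m, example)
traced.save('cell.pt')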
|
st32204
|
@jotline Hello, I face the same problem: it says that LSTMCell is not supported.
Does that mean I can't convert my model to a TorchScript model if it contains an LSTMCell module?
I'm trying to create a variable-length OCR model.
|
st32205
|
Hi! I am training a ConvNet to classify CIFAR10 images on an RTX 3080 GPU. For some reason, when I look at the GPU usage in Task Manager, it shows 3% GPU usage, as shown in the image.
[screenshot of Task Manager showing ~3% GPU usage]
The model is as follows
class ConvNet(nn.Module):
    def __init__(self):
        super(ConvNet, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=3, out_channels=8, stride=1, kernel_size=(3,3), padding=1)
        self.conv2 = nn.Conv2d(in_channels=8, out_channels=32, kernel_size=(3,3), padding=1, stride=1)
        self.conv3 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=(3,3), padding=1, stride=1)
        self.conv4 = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=(3,3), padding=1, stride=1)
        self.conv5 = nn.Conv2d(in_channels=128, out_channels=256, kernel_size=(3,3), stride=1)
        self.fc1 = nn.Linear(in_features=6*6*256, out_features=256)
        self.fc2 = nn.Linear(in_features=256, out_features=128)
        self.fc3 = nn.Linear(in_features=128, out_features=64)
        self.fc4 = nn.Linear(in_features=64, out_features=10)
        self.max_pool = nn.MaxPool2d(kernel_size=(2,2), stride=2)
        self.dropout = nn.Dropout2d(p=0.5)
    def forward(self, x, targets):
        x = self.conv1(x)
        x = F.relu(x)
        x = self.conv2(x)
        x = F.relu(x)
        x = self.max_pool(x)
        x = self.conv3(x)
        x = F.relu(x)
        x = self.dropout(x)
        x = self.conv4(x)
        x = F.relu(x)
        x = self.max_pool(x)
        x = self.conv5(x)
        x = F.relu(x)
        x = self.dropout(x)
        x = x.view(-1, 6*6*256)
        x = self.fc1(x)
        x = F.relu(x)
        x = self.fc2(x)
        x = F.relu(x)
        x = self.fc3(x)
        x = F.relu(x)
        logits = self.fc4(x)
        loss = None
        if targets is not None:
            loss = F.cross_entropy(logits, targets)
        return logits, loss
    def configure_optimizers(self, config):
        optimizer = optim.Adam(self.parameters(), lr=config.lr, betas=config.betas, weight_decay=config.weight_decay)
        return optimizer
Training Configurations are as follows:
Epochs : 300
Batch Size : 64
Weight Decay : 7.34e-4
Learning Rate : 3e-4
Optimizer : Adam
Also I am running several transforms such as Normalization, RandomRotation, RandomHorizontalFlips.
Also, I have another bug. When I change the number of workers in the DataLoader, the training just doesn't begin at all. In the Jupyter notebook it shows that the cell is being executed, but no output is shown. So I am forced to run with num_workers=0. Anything above 0 breaks for some reason.
|
st32206
|
Solved by jmandivarapu1 in post #21
Unfortunately I think I can’t help much, as the file in your GitHub runs successfully for me for different batch sizes and different workers as well. My GPU utilization is around 22%. But I am using a Linux machine and I have 32 CPUs in it.
But I did find one interesting article about num_workers>0…
|
st32207
|
Vishal_R:
all. In jupyter notebook, it shows that cell is being executed but no output is shown. So I am forced to run with num_workers=0. Anything above 0 breaks for some reason.
Try typing
watch nvidia-smi
in your shell at the same time while training is happening; it will show the real memory usage of your model and the utilization.
|
st32208
|
I am on Windows 10. I tried using watch nvidia-smi in the notebook; it gives a syntax error.
Update:
I managed to find a command to get GPU stats. It shows that it is using 14% of the GPU. Isn't that low? I am training a big model, right?
|
st32209
|
In Jupyter Notebook, when you press New and then click Terminal, type the command there.
[screenshot of the Jupyter terminal]
|
st32210
|
Vishal_R:
Batch Size : 64
Yeah, even though you use a bigger model, it depends on the batch size and the total computation done by the GPU. Try increasing the batch size and run watch nvidia-smi to continuously monitor the memory and utilization.
|
st32211
|
I increased batch_size to 256. The GPU usage is now 10%; it was 14% when batch_size was 64.
Also, I have seen in many YouTube videos that if we use a very large batch size, the overall generalization of the model decreases and hence the validation accuracy goes down. Is that true?
|
st32212
|
Low GPU utilization problem - PyTorch Forums
As you can see in the link, you need to increase num_workers, as that might be one of the causes.
|
st32213
|
Yeah.
But if I increase num_workers to, say, 2 or something, for some reason it breaks the training process. It doesn't start training at all. It only trains when num_workers=0. I don't know why that is happening.
|
st32214
|
[screenshot of the notebook hanging]
This is the problem I am getting if I change num_workers to anything above 0. I don't know why it doesn't work for me.
Also, I have seen some YouTube videos suggesting to keep the batch_size to 32 or 64. They say not to use very large batch sizes, as that reduces the generalization of the model. Is that true?
|
st32215
|
Yeah, the generalization of a model depends, in my opinion, on the number of classes you have in your dataset, so it is mostly dependent on the type of dataset you have at hand. But typically 32, 64, 128, or 256 works, depending on the dataset. If someone has very big images, they will also use batch sizes like 4, 8, or 16 because of memory constraints.
|
st32216
|
I am using a custom training loop that I found in Andrej Karpathy's minGPT repo. I thought it was a nice way of doing it. But even when I did not use that trainer and used a simple training loop, the DataLoader with num_workers>0 didn't work.
Link: trainer.py
|
st32217
|
[screenshot of the printed loss]
As you mentioned, I have printed the loss after the loss.mean() line in the trainer.
|
st32218
|
[screenshot of the stuck cell]
No, it does not print anything. It gets stuck like this and I cannot interrupt the kernel either.
|
st32219
|
I am still getting the same result.
I tried running an earlier project that I did for training MNIST digits. There I changed num_workers=2 and ran it in the terminal instead of the notebook.
This is what I got:
[screenshot of the error]
I used PyTorch's own MNIST dataset and trained using the trainer class.
|
st32220
|
Hi, are there any ways in PyTorch to set the range of parameters or values in each layer? For example, is it possible to constrain the range of the linear product Y = WX to [-1, 1]? If not, how about limiting the range of the weights?
I noticed that in Keras, users can define this by setting constraints. Is there an equivalent in PyTorch?
Thanks!
|
st32221
|
You can just clip the weights of the parameters after each optimization update.
class WeightClipper(object):
    def __init__(self, frequency=5):
        self.frequency = frequency
    def __call__(self, module):
        # filter the variables to get the ones you want
        if hasattr(module, 'weight'):
            w = module.weight.data
            w = w.clamp(-1,1)

model = Net()
clipper = WeightClipper()
model.apply(clipper)
created with inspiration from this post
|
st32222
|
I was trying to place explicit weight constraints on my network layers using your suggested approach, and I found that nothing was changing until I added this line (correct me if I'm wrong):
def __call__(self, module):
    # filter the variables to get the ones you want
    if hasattr(module, 'weight'):
        w = module.weight.data
        w = w.clamp(-1,1)
        module.weight.data = w
Just a heads up for anyone trying to implement this in the future!
|
st32223
|
Have you solved it? How do you place the explicit constraints only on the weights of a specified layer?
|
st32224
|
Hi, I have a similar problem. Your code adds constraints to the whole net, but I want to add the same constraint only to the parameters of a specified layer (e.g., the last layer). Can you give me some suggestions?
|
st32225
|
Hi, I tried the method you suggested. However, it seems the module doesn't have the attribute weight (module.weight returns None). Have you encountered this issue? Thx
|
st32226
|
eshgovil:
if hasattr(module, 'weight'):
    w = module.weight.data
    w = w.clamp(-1,1)
    module.weight.data = w
@bf_Lee
I will take an example
class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.l1 = nn.Linear(100, 50)
        self.l2 = nn.Linear(50, 10)
        self.l3 = nn.Linear(10, 1)
        self.sig = nn.Sigmoid()
    def forward(self, x):
        x = self.l1(x)
        x = self.l2(x)
        x = self.l3(x)
        x = self.sig(x)
        return x

class weightConstraint(object):
    def __init__(self):
        pass
    def __call__(self, module):
        if hasattr(module, 'weight'):
            print("Entered")
            w = module.weight.data
            w = w.clamp(0.5, 0.7)
            module.weight.data = w

# Applying the constraints to only the last layer
constraints = weightConstraint()
model = Model()
model._modules['l3'].apply(constraints)
Hope this helps
|
st32227
|
Is there a way to restrict the values of a torch.nn.Parameter?
We cannot use apply(fn) on this kind of object.
|
st32228
|
You can limit your parameter by feeding it as input to a function, e.g., sigmoid.
my_param = nn.Parameter(torch.empty(1).cuda(), requires_grad=True)
my_param_limited = torch.sigmoid(my_param)
Note the difference in the names of the parameters: reusing a single name changes the computation graph and makes backpropagation impossible.
|
st32229
|
The output of torch.sigmoid will create a non-leaf tensor and you will lose the nn.Parameter property, so I would recommend applying the sigmoid on the tensor before wrapping it into the nn.Parameter (unless you want exactly this behavior).
Nit: torch.empty will use uninitialized memory and the tensor might thus contain invalid values such as NaNs/Infs. torch.sigmoid(NaN) would also output a NaN value, so you should initialize it somehow e.g. using rand(n).
|
st32230
|
Thank you @ptrblck for your reply. But when I apply the sigmoid before wrapping my variable in nn.Parameter, after a few epochs my parameter violates its range, i.e., [0, 1]. How can I handle that?
my_param= nn.Parameter(torch.sigmoid(torch.rand(1)).cuda(), requires_grad=True)
|
st32231
|
It is impossible to declare a constrained parameter in pytorch. So, in __init__ an unconstrained parameter is declared, e.g.:
self.my_param = nn.Parameter(torch.zeros(1))
And in forward(), you do the transformation:
my_param_limited = torch.sigmoid(self.my_param)
this is a dynamically created tensor stored in a local variable
|
st32232
|
Thank you for your reply. The transformation that you did in forward(), as @ptrblck mentioned, makes it a non-leaf tensor. I am wondering how we can learn a parameter which should be constrained. For example, we have a custom loss function which is a combination of cross-entropy and MSE losses.
def __init__(self):
    self.my_param = nn.Parameter(torch.empty(1).cuda(), requires_grad=True)

def forward(self):
    gamma = torch.sigmoid(self.my_param)
    total_loss = gamma * cross_entropy_loss + (1 - gamma) * MSE()
    return total_loss
So in this way, my_param cannot be trained meaningfully, right?
|
st32233
|
Built-in optimizers have no idea about constraints, so any other solution would be more cumbersome.
You can improve things a bit:
def __init__(self):
    self.my_param = nn.Parameter(torch.logit(torch.tensor([0.5])))  # inverse of sigmoid

@property
def gamma(self):
    return self.my_param.sigmoid()
though I don’t like the uncached @property there.
PS: .cuda() can be omitted (and done on the module), and requires_grad=True is unnecessary.
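Putting the pieces from this thread together, a minimal sketch of the loss-weighting use case:
import torch
import torch.nn as nn

class WeightedLoss(nn.Module):
    def __init__(self):
        super(WeightedLoss, self).__init__()
        # unconstrained leaf parameter; logit(0.5) = 0, so gamma starts at 0.5
        self.raw_gamma = nn.Parameter(torch.logit(torch.tensor(0.5)))

    def forward(self, ce_loss, mse_loss):
        gamma = torch.sigmoid(self.raw_gamma)  # constrained to (0, 1) by construction
        return gamma * ce_loss + (1 - gamma) * mse_loss
The optimizer updates raw_gamma freely, while gamma can never leave (0, 1).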
|
st32234
|
Hi there,
I am trying to convert a PyTorch model to ONNX and use TensorRT to optimize it. I can convert it to ONNX and I have generated a TRT engine, but the output is completely useless. The ONNX engine creation is what I suspect is the issue here. I do have a lot of tracer warnings, of which most are of this sort:
TracerWarning: There are 2 live references to the data region being modified when tracing in-place operator copy_ (possibly due to an assignment). This might cause the trace to be incorrect, because all other views that also reference this data will not reflect this change in the trace! On the other hand, if all other views use the same memory chunk, but are disjoint (e.g. are outputs of torch.split), this might still be safe.
pred_boxes[:, :, 0::4] = pred_ctr_x - 0.5 * pred_w
From what I understand, slicing and assigning is the issue here?
How would I go about fixing this? (Also, am I right to assume this is the issue?)
The rest of the warnings are of this sort:
TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can’t record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
which I don’t think is the issue as of now, because I am using the same shape inputs.
|
st32236
|
I believe this should work for the copy_ (at-assignment) warning:
Unable to convert PyTorch model to onnx deployment
Hello. I have some troubles in converting my yolo model to onnx format.
I have next errors:
C:\Users\1\PycharmProjects\untitled\my_utils.py:145: TracerWarning: There are 3 live references to the data region being modified when tracing in-place operator copy_ (possibly due to an assignment). This might cause the trace to be incorrect, because all other views that also reference this data will not reflect this change in the trace! On the other hand, if all other views use the same memory chunk, …
There cannot be any in-place assignment, like:
pred[:, :, 0] = torch.sigmoid(pred[:, :, 0])
I think that you need to modify it as:
pred = torch.cat((torch.sigmoid(pred[:, :, 0:1]), pred[:, :, 1:]), dim=2)
Anyway, slicing cannot occur on the left side of “=”
But how do I fix
TracerWarning: There are 2 live references to the data region being modified when tracing in-place operator clamp_. This might cause the trace to be incorrect, because all other views that also reference this data will not reflect this change in the trace! On the other hand, if all other views use the same memory chunk, but are disjoint (e.g. are outputs of torch.split), this might still be safe.
boxes[i,:,3::4].clamp_(0, im_shape[i, 0]-1)
|
st32237
|
Hi,
This is a limitation of onnx I guess: they do not support inplace operations.
You will need to use cat here as well and use the not-inplace version of .clamp().
|
st32238
|
Thanks for the input. By non-inplace clamp, do you mean using the out keyword argument in torch.clamp?
|
st32239
|
I mean the function without the _, which won’t modify the input inplace but will return a new Tensor containing the result. See the doc here.
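For the strided-slice clamp_ above, one trace-friendly rewrite (a sketch, using an assumed scalar bound for brevity; the per-image bound from im_shape would need broadcasting) is to clamp out of place and merge the two tensors with torch.where:
cols = torch.arange(boxes.size(-1), device=boxes.device)
mask = (cols % 4 == 3)                     # the positions 3::4 along the last dim
clamped = boxes.clamp(0, bound)            # out-of-place clamp of everything
boxes = torch.where(mask, clamped, boxes)  # keep the clamped values only at 3::4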
|
st32240
|
Hey, I also met some tracer warnings that are the same as yours, and I know that you solved this warning. Could you tell me how you solved the problem?
|
st32241
|
Let's take the example from the tutorial. Suppose I train the model as follows:
import torch
import math
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)
p = torch.tensor([1, 2, 3])
xx = x.unsqueeze(-1).pow(p)
model = torch.nn.Sequential(
    torch.nn.Linear(3, 1),
    torch.nn.Flatten(0, 1)
)
loss_fn = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3)
for t in range(2000):
    y_pred = model(xx)
    loss = loss_fn(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
....
99 18966.865234375
199 7549.88232421875
299 2734.0283203125
399 1074.5540771484375
499 667.1275024414062
599 529.754638671875
Now if I reset the model parameters as below and retrain the model, the model seems to stop training.
def weight_init(m):
    if isinstance(m, nn.Linear):
        init.xavier_normal_(m.weight.data, gain=1.0)
        if m.bias is not None:
            init.zeros_(m.bias)
model.apply(weight_init)
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3)
for t in range(2000):
    y_pred = model(xx)
    loss = loss_fn(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())
    optimizer.zero_grad()
    loss.backward()
...
99 46380.5
199 46380.5
299 46380.5
399 46380.5
499 46380.5
599 46380.5
699 46380.5
799 46380.5
899 46380.5
999 46380.5
1099 46380.5
...
Can anyone help me explain this?
|
st32242
|
Just to double check, did you simply omit the optimizer.step() from the second code snippet when it is in fact in the actual code?
|
st32243
|
I need to parallelize a model along the samples of a batch on the CPU. The reason is that my model is a recursive network, so I don’t have tensors which I can stack into a batch, but tree structures I have to unfold during training.
Basically, the code I want to achieve looks like this.
for batch in batches:
    loss = 0  # needs to be synchronized among processes
    for sample in batch:  # sample is a recursive structure, not a tensor
        # do this in parallel on batch_size-many CPU cores
        loss += model(sample)
    # sum/average all losses from the subprocesses
    loss /= num_processes
    loss.backward()
    optimizer.step()
Basically, it should work like a multiprocessing.Pool. However, I cannot use that, as gradients cannot be shared among processes. I know that DistributedDataParallel exists, but I don't see how I can use it to achieve my goal.
Thanks!
|
st32244
|
Solved by carloalbertobarbano in post #6
You would probably need to make your own custom data loader to handle data structures other than tensor, but the general scheme could be something like this?
model = DDP(model, ...)
sampler = DistributedSampler(dataset)
dataloader = CustomDataLoader(dataset, shuffle=False, sampler=train_sampler, ba…
|
st32245
|
I’m not sure I understand, but couldn’t you just parallelize at the batch-level? (every process would get a subset of batches).
|
st32246
|
How can I parallelize at the batch level? Maybe I could set the batch size to 1 and do a gradient update every 16 batches, or so…
|
st32247
|
If batch is not a tensor and you are forced to iterate over it anyways, then yes I guess that setting the batch size to 1 with DDP could do the trick
|
st32248
|
You would probably need to make your own custom data loader to handle data structures other than tensor, but the general scheme could be something like this?
model = DDP(model, ...)
sampler = DistributedSampler(dataset)
dataloader = CustomDataLoader(dataset, shuffle=False, sampler=sampler, batch_size=1)
for sample in dataloader:
    loss = model(sample)
    optimizer.zero_grad()
    loss.backward()  # this is synchronized automatically
    optimizer.step()
|
st32249
|
Thank you. I implemented this approach. But when I set batch_size=1, it performs a gradient update after every sample, right? That is not what I want; I want larger batches. Is there a way to achieve this?
When I set batch_size=16, every process gets all samples of this batch. Maybe I misunderstood DDP, but shouldn't it split the batch into sub-batches smaller than 16 and distribute these to the workers? The batches are Python lists in my case.
|
st32250
|
When using DDP you are “simulating” a total batch size equal to batch_size*world_size. If you set the batch size to 1 and spawn 16 workers (while using the DistributedSampler), you are synchronizing gradients from 16 different samples at the same time.
For more detail about DDP refer to Distributed Data Parallel — PyTorch master documentation
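If you additionally want each worker to step only every k samples rather than every sample, plain gradient accumulation composes with this (a sketch; accum_steps is an assumed setting, not something from this thread):
accum_steps = 16  # samples per worker between optimizer steps
optimizer.zero_grad()
for i, sample in enumerate(dataloader):
    loss = model(sample) / accum_steps
    loss.backward()  # DDP synchronizes gradients here; they accumulate locally
    if (i + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()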
|
st32251
|
Hey, I'm looking for an n-choose-k function similar to scipy.special.comb. Any ideas where I can find it? Thanks!
|
st32252
|
While there is torch.special in PyTorch 1.9, it is not yet very complete.
I would suggest using the scipy function or implementing it directly. Note that PyTorch integer tensors are 64 bits by default; things can get large quickly with these factorials.
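If exact integer results on the Python side are enough, the standard library also has this since Python 3.8:
import math
math.comb(10, 3)  # 120, exact n-choose-k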
|
st32253
|
Hi Mark!
mhamilton723:
looking for the n choose k function similar to scipy.special.comb any ideas where I can find it?
Assuming that the scipy function computes the conventional binomial coefficient, you may use:
((n + 1).lgamma() - (k + 1).lgamma() - ((n - k) + 1).lgamma()).exp()
Binomial coefficients may be computed in terms of factorials, the gamma function is the factorial function with its argument shifted by one, and pytorch implements an autograd-aware log-gamma function as torch.lgamma().
So you can use lgamma() to build a binomial-coefficient function (that is differentiable and autograd aware):
>>> import torch
>>> torch.__version__
'1.7.1'
>>> def combo (n, k):
...     return ((n + 1).lgamma() - (k + 1).lgamma() - ((n - k) + 1).lgamma()).exp()
...
>>> tfive = torch.tensor ([5.0], requires_grad = True)
>>> tthree = torch.tensor ([3.0], requires_grad = True)
>>> c = combo (tfive, tthree)
>>> c
tensor([10.], grad_fn=<ExpBackward>)
>>> c.backward()
>>> tfive.grad
tensor([7.8333])
>>> tthree.grad
tensor([-3.3333])
Note that the binomial-coefficient function grows exponentially (as does the factorial function), so in many situations it is better to work with logarithms. If that fits your use case, just drop the trailing exp() in the definition of combo().
(The beta function is more or less the binomial-coefficient function, so it would be nice if pytorch implemented an lbeta() function, but it’s hardly necessary.)
Best.
K. Frank
|
st32254
|
I am trying to build an isolated speech digits recognizer with an LCNN (Light Convolutional Neural Network); right now I am stuck on an error whenever I run this code.
P.S. I am doing this on Jupyter Notebook
optimizer = torch.optim.Adam(model.parameters(), lr=0.07)
# Specify the loss criteria
loss_criteria = nn.CrossEntropyLoss()
# Track metrics in these arrays
epoch_nums = []
training_loss = []
validation_loss = []
# Train over 10 epochs (We restrict to 10 for time issues)
epochs = 10
print('Training on', device)
for epoch in range(1, epochs + 1):
train_loss = train(model, device, train_loader, optimizer, epoch)
test_loss = test(model, device, test_loader)
epoch_nums.append(epoch)
training_loss.append(train_loss)
validation_loss.append(test_loss)
The whole code:
import matplotlib.pyplot as plt
from matplotlib.backend_bases import RendererBase
from scipy import signal
from scipy.io import wavfile
#import soundfile as sf
import os
import numpy as np
from PIL import Image
from scipy.fftpack import fft
from torch.optim import Adam, SGD
from torch.nn import Linear, CrossEntropyLoss, Sequential, Conv2d, MaxPool2d, Module, Softmax
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
%matplotlib inline
audio_path = 'Dataset/train/'
pict_Path = 'Dataset/pics/train/'
test_audio_path = 'Dataset/test/'
test_pict_Path = 'Dataset/pics/test/'
val_audio_path = 'Dataset/validation/'
val_pict_Path = 'Dataset/pics/validation/'
samples = []
subFolderList = []
for x in os.listdir(audio_path):
if os.path.isdir(audio_path + x):
subFolderList.append(x)
subFolderList_test = []
for y in os.listdir(test_audio_path):
    if os.path.isdir(test_audio_path + y):
subFolderList_test.append(y)
subFolderList_val = []
for k in os.listdir(val_audio_path):
if os.path.isdir(val_audio_path + k):
subFolderList_val.append(k)
if not os.path.exists(pict_Path):
os.makedirs(pict_Path)
if not os.path.exists(test_pict_Path):
os.makedirs(test_pict_Path)
if not os.path.exists(val_pict_Path):
os.makedirs(val_pict_Path)
subFolderList = []
for x in os.listdir(audio_path):
if os.path.isdir(audio_path + x):
subFolderList.append(x)
if not os.path.exists(pict_Path + x):
os.makedirs(pict_Path + x)
subFolderList_test = []
for y in os.listdir(test_audio_path):
    if os.path.isdir(test_audio_path + y):
subFolderList_test.append(y)
if not os.path.exists(test_pict_Path + y):
os.makedirs(test_pict_Path + y)
subFolderList_val = []
for k in os.listdir(val_audio_path):
if os.path.isdir(val_audio_path + k):
subFolderList_val.append(k)
if not os.path.exists(val_pict_Path + k):
os.makedirs(val_pict_Path + k)
sample_audio = []
total = 0
for x in subFolderList:
# get all the wave files
all_files = [y for y in os.listdir(audio_path + x) if '.wav' in y]
total += len(all_files)
# collect the first file from each dir
sample_audio.append(audio_path + x + '/'+ all_files[0])
# show file counts
print('count: %d : %s' % (len(all_files), x ))
print(total)
sample_audio = []
total = 0
for x in subFolderList_test:
# get all the wave files
all_files = [y for y in os.listdir(test_audio_path + x) if '.wav' in y]
total += len(all_files)
# collect the first file from each dir
sample_audio.append(test_audio_path + x + '/'+ all_files[0])
# show file counts
print('count: %d : %s' % (len(all_files), x ))
print(total)
sample_audio = []
total = 0
for x in subFolderList_val:
# get all the wave files
all_files = [y for y in os.listdir(val_audio_path + x) if '.wav' in y]
total += len(all_files)
# collect the first file from each dir
sample_audio.append(val_audio_path + x + '/'+ all_files[0])
# show file counts
print('count: %d : %s' % (len(all_files), x ))
print(total)
def log_specgram(audio, sample_rate, window_size=20,
step_size=10, eps=1e-10):
nperseg = int(round(window_size * sample_rate / 1e3))
noverlap = int(round(step_size * sample_rate / 1e3))
freqs, _, spec = signal.spectrogram(audio,
fs=sample_rate,
window='hann',
nperseg=nperseg,
noverlap=noverlap,
detrend=False)
return freqs, np.log(spec.T.astype(np.float32) + eps)
def wav2img(wav_path, targetdir='', figsize=(4,4)):
"""
takes in wave file path
and the fig size. Default 4,4 will make images 288 x 288
"""
fig = plt.figure(figsize=figsize)
# use soundfile library to read in the wave files
samplerate, test_sound = wavfile.read(wav_path)
_, spectrogram = log_specgram(test_sound, samplerate)
## create output path
output_file = wav_path.split('/')[-1].split('.wav')[0]
output_file = targetdir +'/'+ output_file
plt.imshow(spectrogram.T, aspect='auto', origin='lower')
plt.imsave('%s.png' % output_file, spectrogram)
plt.close()
for i, x in enumerate(subFolderList):
print(i, ':', x)
# get all the wave files
all_files = [y for y in os.listdir(audio_path + x) if '.wav' in y]
for file in all_files[:]:
wav2img(audio_path + x + '/' + file, pict_Path + x)
print("Done!")
for i, x in enumerate(subFolderList_test):
print(i, ':', x)
# get all the wave files
all_files = [y for y in os.listdir(test_audio_path + x) if '.wav' in y]
for file in all_files[:]:
wav2img(test_audio_path + x + '/' + file, test_pict_Path + x)
print("Done!")
for i, x in enumerate(subFolderList_val):
print(i, ':', x)
# get all the wave files
all_files = [y for y in os.listdir(val_audio_path + x) if '.wav' in y]
for file in all_files[:]:
wav2img(val_audio_path + x + '/' + file, val_pict_Path + x)
print("Done!")
class mfm(nn.Module):
    # Max-Feature-Map (MFM) activation used in Light CNN: the layer
    # produces 2*out_channels feature maps, splits them into two halves,
    # and keeps the element-wise maximum of the two halves.
    def __init__(self, in_channels, out_channels, kernel_size=3, stride=1, padding=1, type=1):
        super(mfm, self).__init__()
        self.out_channels = out_channels
        if type == 1:
            self.filter = nn.Conv2d(in_channels, 2*out_channels, kernel_size=kernel_size, stride=stride, padding=padding)
        else:
            self.filter = nn.Linear(in_channels, 2*out_channels)
    def forward(self, x):
        x = self.filter(x)
        # split along the channel dimension and take the element-wise max
        out = torch.split(x, self.out_channels, 1)
        return torch.max(out[0], out[1])
class group(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, stride, padding):
super(group, self).__init__()
self.conv_a = mfm(in_channels, in_channels, 1, 1, 0)
self.conv = mfm(in_channels, out_channels, kernel_size, stride, padding)
def forward(self, x):
x = self.conv_a(x)
x = self.conv(x)
return x
class resblock(nn.Module):
def __init__(self, in_channels, out_channels):
super(resblock, self).__init__()
self.conv1 = mfm(in_channels, out_channels, kernel_size=3, stride=1, padding=1)
self.conv2 = mfm(in_channels, out_channels, kernel_size=3, stride=1, padding=1)
def forward(self, x):
res = x
out = self.conv1(x)
out = self.conv2(out)
out = out + res
return out
class network_9layers(nn.Module):
def __init__(self, num_classes=79077):
super(network_9layers, self).__init__()
self.features = nn.Sequential(
mfm(1, 48, 5, 1, 2),
nn.MaxPool2d(kernel_size=2, stride=2, ceil_mode=True),
group(48, 96, 3, 1, 1),
nn.MaxPool2d(kernel_size=2, stride=2, ceil_mode=True),
group(96, 192, 3, 1, 1),
nn.MaxPool2d(kernel_size=2, stride=2, ceil_mode=True),
group(192, 128, 3, 1, 1),
group(128, 128, 3, 1, 1),
nn.MaxPool2d(kernel_size=2, stride=2, ceil_mode=True),
)
self.fc1 = mfm(8*8*128, 256, type=0)  # 128x128 input -> four 2x2 max-pools -> an 8x8 map with 128 channels
self.fc2 = nn.Linear(256, num_classes)
def forward(self, x):
x = self.features(x)
x = x.view(x.size(0), -1)
x = self.fc1(x)
x = F.dropout(x, training=self.training)
out = self.fc2(x)
return out, x
def LightCNN_9Layers(**kwargs):
model = network_9layers(**kwargs)
return model
model = LightCNN_9Layers(num_classes=79077)
def load_dataset(data_path):
import torch
import torchvision
import torchvision.transforms as transforms
# Load all the images
transformation = transforms.Compose([
# Randomly augment the image data
# Random horizontal flip
transforms.RandomHorizontalFlip(0.5),
# Random vertical flip
transforms.RandomVerticalFlip(0.3),
# transform to tensors
transforms.ToTensor(),
# Normalize the pixel values (in R, G, and B channels)
transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
# Resize Images
transforms.Resize((128,128))
])
# Load all of the images, transforming them
full_dataset = torchvision.datasets.ImageFolder(
root='Dataset/pics/train',
transform=transformation
)
full_dataset_test = torchvision.datasets.ImageFolder(
root='Dataset/pics/test',
transform=transformation
)
# Split into training (70%) and testing (30%) datasets
train_dataset = full_dataset
test_dataset = full_dataset_test
# use torch.utils.data.random_split for training/test split
#train_dataset, test_dataset = torch.utils.data.random_split(full_dataset, [train_size, test_size])
# define a loader for the training data we can iterate through in 50-image batches
train_loader = torch.utils.data.DataLoader(
train_dataset,
batch_size=50,
num_workers=0,
shuffle=False
)
# define a loader for the testing data we can iterate through in 50-image batches
test_loader = torch.utils.data.DataLoader(
test_dataset,
batch_size=50,
num_workers=0,
shuffle=False
)
return train_loader, test_loader
# Recall that we have resized the images and saved them into the folder below
train_folder = 'Dataset/pics/train'
# Get the iterative dataloaders for test and training data
train_loader, test_loader = load_dataset(train_folder)
batch_size = train_loader.batch_size
print("Data loaders ready to read", train_folder)
def train(model, device, train_loader, optimizer, epoch):
# Set the model to training mode
model.train()
train_loss = 0
print("Epoch:", epoch)
# Process the images in batches
for batch_idx, (data, target) in enumerate(train_loader):
# Use the CPU or GPU as appropriate
# Recall that GPU is optimized for the operations we are dealing with
data, target = data.to(device), target.to(device)
# Reset the optimizer
optimizer.zero_grad()
# Push the data forward through the model layers
output, _ = model(data)  # network_9layers returns (logits, features)
# Get the loss
loss = loss_criteria(output, target)
# Keep a running total
train_loss += loss.item()
# Backpropagate
loss.backward()
optimizer.step()
# Print metrics so we see some progress
print('\tTraining batch {} Loss: {:.6f}'.format(batch_idx + 1, loss.item()))
# return average loss for the epoch
avg_loss = train_loss / (batch_idx+1)
print('Training set: Average loss: {:.6f}'.format(avg_loss))
return avg_loss
def test(model, device, test_loader):
# Switch the model to evaluation mode (so we don't backpropagate or drop)
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
batch_count = 0
for data, target in test_loader:
batch_count += 1
data, target = data.to(device), target.to(device)
# Get the predicted classes for this batch
output, _ = model(data)  # network_9layers returns (logits, features)
# Calculate the loss for this batch
test_loss += loss_criteria(output, target).item()
# Calculate the accuracy for this batch
_, predicted = torch.max(output.data, 1)
correct += torch.sum(target==predicted).item()
# Calculate the average loss and total accuracy for this epoch
avg_loss = test_loss / batch_count
print('Validation set: Average loss: {:.6f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
avg_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
# return average loss for the epoch
return avg_loss
# The images are in the folder 'Dataset/pics/train'
training_folder_name = 'Dataset/pics/train'
# All images are 128x128 pixels
img_size = (128,128)
# The folder contains a subfolder for each class of shape
classes = sorted(os.listdir(training_folder_name))
print(classes)
device = "cpu"
if (torch.cuda.is_available()):
# if GPU available, use cuda (on a cpu, training will take a considerable length of time!)
device = "cuda"
# Create an instance of the model class and allocate it to the device
model = LightCNN_9Layers(num_classes=len(classes)).to(device)
print(model)
optimizer = torch.optim.Adam(model.parameters(), lr=0.07)
# Specify the loss criteria
loss_criteria = nn.CrossEntropyLoss()
# Track metrics in these arrays
epoch_nums = []
training_loss = []
validation_loss = []
# Train over 10 epochs (We restrict to 10 for time issues)
epochs = 10
print('Training on', device)
for epoch in range(1, epochs + 1):
train_loss = train(model, device, train_loader, optimizer, epoch)
test_loss = test(model, device, test_loader)
epoch_nums.append(epoch)
training_loss.append(train_loss)
validation_loss.append(test_loss)
|
st32255
|
Solved by Khalid in post #4
Fixed it. Just had to add transforms.Grayscale(num_output_channels=1) to my load_dataset
|
st32256
|
@Khalid, I am not sure, but you should check the size of your input again. I think you are passing a 2D image there while your model requires 3D input.
|
st32257
|
@krishna511, I am still new to Python. I have been collecting and debugging code to get what I need working. Could you possibly point me to where I could fix my issue?
|
st32258
|
Fixed it. Just had to add transforms.Grayscale(num_output_channels=1) to my load_dataset.
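For reference, a sketch of how the fixed transform pipeline from load_dataset could look. Note that torchvision's keyword is num_output_channels, and with a single channel the Normalize stats shrink to one value each (this is my adaptation of the code above, not verbatim from the thread):
transformation = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),  # collapse the RGB spectrogram PNGs to 1 channel
    transforms.RandomHorizontalFlip(0.5),
    transforms.RandomVerticalFlip(0.3),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5], std=[0.5]),  # one channel now, so one mean/std each
    transforms.Resize((128, 128)),
])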
|
st32259
|
Hello everyone,
is there a general way to reshape the batch_shape of a torch.distributions.Distribution (Probability distributions - torch.distributions — PyTorch 1.8.1 documentation)?
Similar to expand (Probability distributions - torch.distributions — PyTorch 1.8.1 documentation), I would expect a reshape method, but I cannot find one.
Best,
Tim
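As far as I can tell there is no Distribution.reshape. One workaround, just a sketch rather than an official API, is to reshape the parameters before constructing the distribution, since batch_shape follows their shape:
import torch
from torch.distributions import Normal

loc = torch.randn(6)
scale = torch.ones(6)
d = Normal(loc.reshape(2, 3), scale.reshape(2, 3))
print(d.batch_shape)  # torch.Size([2, 3])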
|
st32260
|
I tried Stack Overflow and other threads in the forum, but my issue still wasn't resolved. I am a beginner; please help me understand what went wrong.
id_2_token = dict(enumerate(set(n for name in names for n in name),1))
token_2_id = {value:key for key,value in id_2_token.items()}
print(len(id_2_token))
print(len(token_2_id))
Output :
56
56
feature_id,target_id = batch_maker(names) #batching function
print(feature_id.shape) #Shape - [124,64,17]
#RNN MODEL
class CharMaker(nn.Module):
def __init__(self, input_size, hidden_size, output_size,n_layers=1):
super(CharMaker,self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.output_size = output_size
self.encoder = nn.Embedding(self.input_size, self.hidden_size)
self.rnn = nn.RNN(self.hidden_size,self.hidden_size, num_layers=1,batch_first=True)
self.linear = nn.Linear(self.hidden_size, self.output_size)
self.softmax = torch.nn.Softmax(dim=output_size)
def forward(self, inputs, hidden):
batch_size = inputs.size(0)
if hidden == None:
hidden = torch.zeros(1,inputs.size(1),self.hidden_size)
print(inputs.shape)
encoded = self.encoder(inputs)
output, hidden = self.rnn(encoded, hidden)
outout = self.linear(hidden,self.output_size)
output = self.softmax(output)
return output,hidden
Initializing my model
cm = CharMaker(input_size=len(token_2_id),hidden_size=20,output_size=len(token_2_id))
Reshaping and Texting The Data
hidden = None
names_id_tensor = torch.from_numpy(features_id[0])
names_id_tensor = names_id_tensor.reshape(names_id_tensor.shape[0],names_id_tensor.shape[1],1)
Shapes
print(names_id_tensor.shape) #torch.Size([64, 17, 1])
output,hidden = cm(names_id_tensor,hidden)
Error:
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-139-d0d9f66f3192> in <module>
----> 1 output,hidden = cm(names_id_tensor,hidden)
~/.local/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
<ipython-input-129-f8a6cdd31a7a> in forward(self, inputs, hidden)
19 hidden = torch.zeros(1,inputs.size(1),self.hidden_size)
20 print(inputs.shape)
---> 21 encoded = self.encoder(inputs)
22 output, hidden = self.rnn(encoded, hidden)
23 outout = self.linear(hidden,self.output_size)
~/.local/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
~/.local/lib/python3.6/site-packages/torch/nn/modules/sparse.py in forward(self, input)
112 return F.embedding(
113 input, self.weight, self.padding_idx, self.max_norm,
--> 114 self.norm_type, self.scale_grad_by_freq, self.sparse)
115
116 def extra_repr(self):
~/.local/lib/python3.6/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1722 # remove once script supports set_grad_enabled
1723 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1724 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1725
1726
IndexError: index out of range in self
|
st32261
|
Solved by ptrblck in post #4
You cannot pass indices higher than num_embeddings-1, since the embedding layer works as a lookup table. The input is used to index the corresponding embedding vector, so you should set num_embeddings to one more than the highest index you expect in your use case.
|
st32262
|
Could you print the min and max values of names_id_tensor?
num_embeddings is currently set to self.input_size, which is the length of token_2_id.
Note that num_embeddings must be larger than the max input index you would like to provide.
|
st32263
|
Hi @ptrblck, I understand what you are trying to say, but I am having an issue with that.
This was just a batch for testing whether the model works or not. The highest value in that batch was 53, while my vocab (token_2_id) size is 56. What if another batch comes up with a highest value other than 53? What will happen then, and how do I resolve that problem?
Can you please guide me through this?
print(torch.max(names_id_tensor))
print(torch.min(names_id_tensor))
Output
tensor(53)
tensor(1)
|
st32264
|
csblacknet:
The highest value in that batch was 53, while my vocab (token_2_id) size is 56. What if another batch comes up with a highest value other than 53? What will happen then, and how do I resolve that problem?
You cannot pass indices higher than num_embeddings-1, since the embedding layer works as a lookup table. The input is used to index the corresponding embedding vector, so you should set num_embeddings to one more than the highest index you expect in your use case.
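A minimal sketch of that lookup-table behavior:
import torch
import torch.nn as nn

emb = nn.Embedding(num_embeddings=56, embedding_dim=20)
out = emb(torch.tensor([0, 53, 55]))  # fine: all indices are within [0, 55]
# emb(torch.tensor([56]))            # would raise IndexError: index out of range in self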
|
st32265
|
Got it. I just had to fix the embedding layer. @ptrblck, thanks, it's solved now.
Last time my vocab was created by enumerating from 1. If I just enumerate from 0, I can keep the same embedding; otherwise, if I had insisted on keeping the enumeration from 1, all I had to do was:
self.encoder = nn.Embedding(self.input_size+1, self.hidden_size)  # [57, 20]: still 56 used rows, since the 0th index stays empty
Solution:
class CharMaker(nn.Module):
def __init__(self, input_size, hidden_size, output_size,n_layers=1):
super(CharMaker,self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.output_size = output_size
print("Input & Output Size", input_size,output_size)
print("Hidden Size ", hidden_size )
self.encoder = nn.Embedding(self.input_size, self.hidden_size) #[56,20]
self.rnn = nn.RNN(self.hidden_size,self.hidden_size, num_layers=1,batch_first=True) #[20,20]
self.linear = nn.Linear(self.hidden_size, self.output_size) #[20,56]
def forward(self, inputs, hidden):
batch_size = inputs.size(0)
if hidden == None:
hidden = torch.zeros(1,batch_size,self.hidden_size)
#print("Original Input : ",inputs.shape)
encoded = self.encoder(inputs)
#print("Encoded Input : ",encoded.shape)
output, hidden = self.rnn(encoded, hidden)
output = self.linear(output)
return output,hidden
|
st32266
|
When using random data, I noticed that nn.Embedding only accepts non-negative indices. The error goes away if you call .abs() on the input or use any natural number.
|
st32267
|
@ptrblck, suppose the distinct values in my sequence are 0, 1, and 2. What should the size of the embedding be here, 3 or 2?
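Applying the rule from the accepted answer above: the highest index is 2, so num_embeddings must be at least 3. A quick check:
import torch
import torch.nn as nn

x = torch.tensor([0, 1, 2])
out = nn.Embedding(3, 4)(x)   # works: indices 0..2 fit in a 3-row table
# nn.Embedding(2, 4)(x)       # would raise IndexError: index out of range in self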
|