id | text
---|---
st46968 | What do you mean by network capacity?
However, the problem is not over-fitting; that would be the next step. In order to do that I’m plotting training and testing accuracy at intermediate epochs. But the test accuracy at the last epoch changes depending on whether I’m testing at intermediate epochs or not, which is strange. |
st46969 | No, I didn’t. Maybe my implementation of DataLoader is incorrect… I’m still confounded by this behavior. Since I have a very small dataset, I simply created a customized way of reading the data - that solved the issue. |
st46970 | My best guess would be that model.eval() might have been forgotten in the evaluation code, which would update the running stats of all batchnorm layers with the test set and would thus leak the data.
The current code snippet calls model.eval() during the last iteration of the training, but I would recommend adding it to the validation function explicitly, in case the model was reset to training mode somewhere else.
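For example, something along these lines (a minimal sketch with assumed names, not your exact code):
def validate(model, test_loader, device):
    model.eval()   # use batchnorm running stats and disable dropout
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            pred = model(data).argmax(dim=1)
            correct += (pred == target).sum().item()
    model.train()  # switch back before the next training iterations
    return correct / len(test_loader.dataset)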
Let me know, if that doesn’t help. |
st46971 | I had model.eval() in my code but the issue still exists.
I tried to reproduce the issue on a very small architecture on MNIST. Here is the link: https://colab.research.google.com/drive/1A6Ey6Z6mhUBWBP5Y-EwNIIFyg_q6JGkQ?usp=sharing
When test() is called at the end of the training only, accuracy 9508/10000
When test() is called during the training as well, accuracy: 9484/10000 |
st46972 | Thanks for the code.
It seems that the additional call of the test method inside the training loop calls into the pseudo-random number generator and thus changes the order of the training data in the next step.
This will result in a bit of noise during the training, which thus yields a different end result.
You can check it by printing the sum of the data samples in your training loop (which can be seen as a “unique value” for the sample).
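For example (a rough sketch inside the train loop, using made-up variable names):
for batch_idx, (data, target) in enumerate(train_loader):
    print(epoch, batch_idx, data.sum())  # same values across runs -> same sample order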
If I add a seeding inside the training loop to make sure the train_loader uses the same order of samples, I get the same results for the additional test call and just the final one:
for epoch in range(1, epochs + 1):
torch.manual_seed(seed+epoch+1)
train(model, device, train_loader, optimizer, epoch,log_interval)
#test(model, device, test_loader) # uncommenting this line produces the same result now
test(model, device, test_loader)
I’m unsure, where the test_loader is using random numbers, but based on the current workaround, this is my best guess. |
st46973 | Thank you @ptrblck for this solution, it works for me! And thanks @Ghada_Sokar for bringing this up again.
I could see that calling test_loader causes this issue, without even evaluating the model on test data. However, I did set torch.manual_seed(1) before doing training and testing - but it seems like it is overwritten when test_loader is called, and unless I manually reset it again before training, it works differently. Also, why is the seed updated in every epoch as above (seed+epoch)? Is it to ensure that data is randomized differently in each epoch?
Thanks! |
st46974 | Pragya:
Also, why is the seed updated in every epoch as above (seed+epoch), Is it to ensure that data is randomized differently in each epoch?
Yes, I didn’t want to reuse the same seed again, as the data wouldn’t be “randomly” shuffled anymore in each epoch.
I’m still unsure, where the PRNG is used in creating the test_loader, as it’s not shuffling the data etc. |
st46975 | I have met a similar problem; I set torch.manual_seed(1) in the train loop, and it works! I want to know why the seed is overwritten when test_loader is called, even though I have set the seed in torch.utils.data.DataLoader(). Do you have any idea? Thanks! |
st46976 | I want to use batchnorm but I want to make the bias not learnable; how can I specify that? The affine parameter will make both gamma and beta learnable or not, but I want to make only beta not learnable |
st46977 | Solved by klory in post #2 |
st46978 | maybe this is sth you want to do?
import torch
from torch import nn
m = nn.BatchNorm2d(4)
x = torch.randn(2,4,16,16)
m._parameters['bias'].requires_grad_(False)
print(m._parameters['weight'].grad)
print(m._parameters['bias'].grad)
m.zero_grad()
y = m(x).mean()
y.backward()
print(m._parameters['weight'].grad)
print(m._parameters['bias'].grad)
output
None
None
tensor([ 1.4578e-10, -2.0856e-09, -5.6265e-11, -1.4465e-09])
None |
st46979 | Hi,
I am observing the following weird behaviour with my model.
class Model(nn.Module):
    def __init__(self, encoder, classifier):
        super(Model, self).__init__()
        self.gru = torch.nn.GRU(input_size=100, hidden_size=300, dropout=0.5, num_layers=3, bidirectional=True)
        self.dp1 = torch.nn.Dropout(0.7)
        self.dense1 = TimeDistributed(torch.nn.Linear(600, 100))
        self.encoder = encoder
        self.classifier = classifier
    def forward(self, x, m):
        #import pdb
        #pdb.set_trace()
        #x, h = self.gru(x)
        #x = self.dp1(x)
        #x = self.dense1(x)
        x = self.encoder(x, m)
        x = self.classifier(x)
        return x
When I compile and run this model, I get a binary classification accuracy of 60.77%.
But when I comment out the components in both forward and __init__, it decreases slightly to 59.6%.
class Model(nn.Module):
    def __init__(self, encoder, classifier):
        super(Model, self).__init__()
        #self.gru = torch.nn.GRU(input_size=100, hidden_size=300, dropout=0.5, num_layers=3, bidirectional=True)
        #self.dp1 = torch.nn.Dropout(0.7)
        #self.dense1 = TimeDistributed(torch.nn.Linear(600, 100))
        self.encoder = encoder
        self.classifier = classifier
    def forward(self, x, m):
        #import pdb
        #pdb.set_trace()
        #x, h = self.gru(x)
        #x = self.dp1(x)
        #x = self.dense1(x)
        x = self.encoder(x, m)
        x = self.classifier(x)
        return x
So, why does the performance differ in the 2 cases?
Note: I have kept the random seed constant and enabled the CUDA deterministic behaviour. So, each run with no model changes provides the same accuracy.
Thanks, |
st46980 | Each layer creation will randomly initialize the parameters (if available) and will thus call into the pseudorandom number generator. If your training is sensitive to different seeds, these additional layer creations can change the training results significantly and you should see the same effect by removing the layer creation and just changing the seed at the beginning of your script. |
st46981 | According to the documentation, transforms.ToTensor() should transform an array in the range 0-255 to [0, 1]. But I just tested the output of my DataLoader, which results in the following:
class ImageDataset(Dataset):
    def __init__(self, images):
        super(ImageDataset, self).__init__()
        self.images = images
        self.transforms = transforms.Compose([transforms.ToTensor()])
    def __len__(self):
        return len(self.images)
    def __getitem__(self, index):
        # Select Image
        image = self.images[index]
        image = self.transforms(image)
        return image
# Load Datasets and DataLoader
data = load_images_from_folder()
train_data = np.array(data[:-1000])
train_dataset = ImageDataset(train_data)
test_data = np.array(data[-1000:])
test_dataset = ImageDataset(test_data)
train_loader = DataLoader(train_dataset, batch_size=bn, shuffle=True, num_workers=4)
test_loader = DataLoader(test_dataset, batch_size=bn, num_workers=4)
# Make Training
for epoch in range(epochs+1):
    # Train on Train Set
    model.train()
    model.mode = 'train'
    for step, original in enumerate(train_loader):
        original = original.to(device)
        if step == 0 and epoch == 0:
            print(f'input information: mean: {torch.mean(original[0])}, max: {torch.max(original[0])}, min: {torch.min(original[0])}')
input information: mean: 0.32807812094688416, max: 1.0, min: -1.0
What’s the reason for that? |
st46982 | Okay got it fixed - there was a problem with my upload deployment configurations and the code didn’t upload. It works now |
st46983 | I get this error:
RuntimeError: stack expects each tensor to be equal size, but got [3, 530, 500] at entry 0 and [3, 500, 530] at entry 3
The image (32 bit) and the label (4 bit) have the same size (500*530); is there anything wrong with my transpose process?
class Resize(object):
    """Resize image and/or masks."""
    def __init__(self, imageresize, maskresize):
        self.imageresize = imageresize
        self.maskresize = maskresize
    def __call__(self, sample):
        image, mask = sample['image'], sample['mask']
        if len(image.shape) == 3:
            image = image.transpose(1, 2, 0)
        if len(mask.shape) == 3:
            mask = mask.transpose(1, 2, 0)
        mask = cv2.resize(mask, self.maskresize, cv2.INTER_AREA)
        image = cv2.resize(image, self.imageresize, cv2.INTER_AREA)
        if len(image.shape) == 3:
            image = image.transpose(2, 0, 1)
        if len(mask.shape) == 3:
            mask = mask.transpose(2, 0, 1)
        return {'image': image,
                'mask': mask} |
st46984 | If the image as well as the mask have the same initial shape and you are applying the same transpose operations on these numpy arrays, the error might be raised by trying to stack different samples.
Could you check the shape of both arrays and make sure the height and width are equal for all samples?
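For example, something like this (a rough sketch, assuming your dataset returns the {'image': ..., 'mask': ...} dict from above):
for i in range(len(dataset)):
    sample = dataset[i]
    print(i, sample['image'].shape, sample['mask'].shape)  # height/width should match across all samples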
PS: you can post code snippets by wrapping them into three backticks ```, which makes debugging easier. |
st46985 | Hi everyone,
I need help, please. I have two images and I take the difference between these two images into one single image. I use the pretrained Mobile Neural Architecture Search (MNAS) network to extract features only. I need to display the feature map of this image from the middle conv layer of MNAS and the feature map of the final conv layer before the classifier.
I tried this snippet from @ptrblck but I have a problem.
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import models
import torchvision.transforms as transforms
import torchvision.datasets as datasets
import matplotlib.pyplot as plt
from PIL import Image
import numpy as np
class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv1 = models.mnasnet1_0(pretrained=True)
        self.convNet = nn.Sequential(self.conv1)
        self.convNet2 = self.conv1.classifier = nn.Identity()
    def forward(self, x):
        x = self.convNet(x)
        return x
img1 = './dataset/11/frame001.jpg'
img2= './dataset/11/frame004.jpg'
img1 = Image.open(str(img1))
img2 = Image.open(str(img2))
img1= transform=transforms.ToTensor()(img1)
img2= transform=transforms.ToTensor()(img2)
model = MyModel()
# Visualize feature maps
activation = {}
def get_activation(name):
    def hook(model, input, output):
        activation[name] = output.detach()
    return hook
model.conv1.register_forward_hook(get_activation('conv1'))
img= torch.abs(img1 - img2)
img.unsqueeze_(0)
output = model(img)
act = activation['conv1'].squeeze()
fig, axarr = plt.subplots(act.size(0))
plt.imshow(act)
I get this error
File "C:\Users\Windows10\anaconda3\envs\Heyam\lib\site-packages\matplotlib\axes\_axes.py", line 5523, in imshow
im.set_data(X)
File "C:\Users\Windows10\anaconda3\envs\Heyam\lib\site-packages\matplotlib\image.py", line 709, in set_data
raise TypeError("Invalid shape {} for image data"
TypeError: Invalid shape (1280,) for image data |
st46986 | Solved by ptrblck in post #6 |
st46987 | Your activation (act) seems to have a wrong size and plt.imshow cannot visualize it.
Make sure you are trying to plot a tensor in the shape [height, width, 3] or [height, width] and pass it as a numpy array to plt.imshow via tensor.numpy(). |
st46988 | thanks a lot.
act shape
torch.Size([1280])
can you help me change the shape of the act tensor? |
st46989 | The activation seems to be the output activation of the mnasnet1_0, which would be the output logits.
If you want to visualize them, you could use e.g. plt.plot instead of plt.imshow, since the activation is not an image but just a flattened tensor containing the class logits.
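E.g. something like this (a minimal sketch based on your earlier snippet):
act = activation['conv1'].squeeze()  # flattened output logits, e.g. shape [1280]
plt.plot(act.cpu().numpy())
plt.show()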
Alternatively, you could reshape the activation to the aforementioned shape. |
st46990 | I’m tired of this error. Can you help me please, @ptrblck?
My code
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.utils.data import DataLoader
import torchvision.transforms as transforms
import torchvision.datasets as datasets
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
import numpy as np
from torchvision import models
import torchvision.transforms as transforms
import cv2
class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.mod = models.mnasnet1_0(pretrained=True)
        self.convNet = nn.Sequential(*list(self.mod.children()))
    def forward(self, x):
        seqLen = x.size(0) - 1
        for t in range(0, seqLen):
            x1 = x[t] - x[t+1]
            x2 = self.convNet(x1)
img1 = './dataset/11/frame001.jpg'
img2= './dataset/11/frame004.jpg'
img1 = Image.open(str(img1))
img2 = Image.open(str(img2))
trans= transforms.Compose([transforms.RandomResizedCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],[0.229, 0.224, 0.225])])
inpSeq = []
# img1=Image.fromarray(img1)
# img2=Image.fromarray(img2)
inpSeq.append(trans(img1.convert('RGB')))
inpSeq.append(trans(img2.convert('RGB')))
inpSeq = torch.stack(inpSeq, 0)
inpSeq=inpSeq.unsqueeze(0)
model = MyModel()
activation = {}
def get_activation(name):
    def hook(model, input, output):
        activation[name] = output.detach()
    return hook
model.convNet.register_forward_hook(get_activation('conv1'))
output = model(inpSeq)
act = activation['conv1'].squeeze()
plt.imshow(act[0])
error is here:
Traceback (most recent call last):
File "C:\Users\Windows10\Desktop\Code1\exp.py", line 67, in <module>
act = activation['conv1'].squeeze()
KeyError: 'conv1' |
st46991 | If I’m using a seqLen of 1 and a batch size of 2, the internal loop will not be executed and thus no hooks are called:
x = torch.randn(1, 2, 3, 224, 224)
model(x)
since your loop would be:
for t in range(0, 0):
and would thus not execute the self.convNet.
Note that your current forward method also doesn’t return anything, so you might want to change that as well.
However, if I’m using a seqLen of 2, I get a shape mismatch error:
x = torch.randn(2, 2, 3, 224, 224)
output = model(x)
> RuntimeError: mat1 and mat2 shapes cannot be multiplied (17920x7 and 1280x1000)
so I guess you might be facing the first issue.
The shape mismatch error is most likely raised, since you are rewrapping the child modules in an nn.Sequential container and are thus dropping the functional calls from the original forward.
I would not recommend to wrap arbitrary models into nn.Sequential unless you are sure that all modules are just executed in a sequential way without any conditions, functional calls, loops etc.
If I avoid the shape mismatch error and use:
def forward(self, x):
    seqLen = x.size(0) - 1
    for t in range(0, seqLen):
        x1 = x[t] - x[t+1]
        x2 = self.mod(x1)
with x = torch.randn(2, 2, 3, 224, 224), the forward hook is properly called and activation will be populated. |
st46992 | Hi. I think Pytorch calculates the cross entropy loss incorrectly while using the ignore_index option.
The problem is that currently when specifying the ignore_index (say, = k), the function just ignores the value of the target y = k (in fact, it calculates the cross entropy at k but returns 0) but it still makes full use of the logit at index k to calculate the normalization term for other indices not equal to k. I think this is not intended use for most users.
For example, in variable length sequences, people pad the sequence and use the ignore_index as the pad target index in order to avoid considering the padded values (both from inputs and targets). If there are n classes, you have to prepare (n+1) classes for the logit dimension (input of the cross entropy loss) to include the pad class and then ignore it by using the ignore_index option.
Here is some illustrative example:
# Test cross entropy loss: first create data
x = torch.log(torch.tensor([[2,3,4]],dtype=torch.float)) # a vector of 3-class logits (one of them could be a padding class)
y1 = torch.tensor([0],dtype=torch.long)
y2 = torch.tensor([1],dtype=torch.long)
y3 = torch.tensor([2],dtype=torch.long)
# calculate the negative logsoftmax for each logit index for comparison
-torch.nn.functional.log_softmax(x,dim=1) # returns tensor([1.5041, 1.0986, 0.8109])
# perform logsoftmax and NLL loss at the same time (not use ignore_index yet)
print(torch.nn.functional.cross_entropy(x,y1)) # 1.5041
print(torch.nn.functional.cross_entropy(x,y2)) # 1.0986
print(torch.nn.functional.cross_entropy(x,y3)) # 0.8109
# Now let's ignore the index 0 and find cross entropy loss for index 1
print(torch.nn.functional.cross_entropy(x,y2,ignore_index=0)) # get 1.0986
# this is the same value as when not excluding the index 0 from the logit;
# It should ignore the index at the level of the logit index, not just the final target index.
# Next let's calculate the correct cross entropy loss when you actually ignore the index 0 completely from both x and y
x_ignore = x[0][1:].view(1,x.shape[-1]-1) # Now we have logits of 2 classes
# the index that is more than the ignore index is decreased by 1
y2_ignore = torch.tensor([0],dtype=torch.long)
y3_ignore = torch.tensor([1],dtype=torch.long)
# cross entropy with ignore_index 0 for the index 1 (which now becomes index 0)
print(torch.nn.functional.cross_entropy(x_ignore,y2_ignore)) # get 0.8473
In conclusion, I raise this issue in case developers may consider to revise this ignore_index option, but if the current one already follows the intended use (ignore only y not x; hence allowing to backprop through the ignored index of the logit in the normalization term of the softmax), it would be my misunderstanding of how it should work (ignore both x and y at the ignore_index). |
st46993 | I think that’s not what the ignore_index parameter means. Its description says:
Specifies a target value that is ignored and does not contribute to the input gradient. When size_average is True , the loss is averaged over non-ignored targets.
So what you specify with ignore_index = k is that the elements of the target that has value k will not contribute to the error. And if you specify size_average = True then the average won’t count those elements either.
Its intended use, or at least the way I use it, is in cases where you add padding to your input in order to have the same length for all the instances. |
st46994 | @adrianjav Yes, the target that has index k will not contribute to the error. But the problem is that the class k at the softmax layer is not ignored when calculating the softmax for other classes (the index k still appears in the denominator of the softmax formula since Pytorch did not drop it).
For example, you have only 2 classes: class 1, and class 2 (your padding class). So when you ignore the padding class, the softmax probability of the class 1 must always be one (because there is only one class to consider) but if you try to use ignore_index option, it will not return 1 in general since it still did not eliminate the padding class from consideration (and also there is a chance that it will give higher probability on the padding class for unseen data.) |
st46995 | @DKSG I think ignore_index should only be used in the extreme cases where the ignored target does not exist in your input. |
st46996 | To add onto this topic, it would be very helpful to know exactly how ignore_index works. For instance, consider:
N = 10
criterion = nn.CrossEntropyLoss(reduction='none', ignore_index=-1)
groundtruth = torch.rand(N, ).ge(0.5).type(torch.LongTensor)
groundtruth[7:] = -1
pred = torch.rand(N, 2)
loss = criterion(pred, groundtruth)
In such a situation, Pytorch returns a value of 0.0 for elements which are to be ignored. Running the same piece of code with N = 5000 returns weird numbers in the loss for elements to be ignored. Values such as .0646e+24 etc.
Next, how do we backprop on the above said loss?
Can I simply find all elements which aren’t -1, say, loc = groundtruth!=-1, and average by ignoring those values?
loss = torch.mean(loss[groundtruth!=-1])
loss.backward()
For some weird reason, the above mentioned situation does not work for me. The code crashes after 10 epochs or so. |
st46997 | Rakshit_Kothari:
Running the same piece of code with N = 5000 returns weird numbers in the loss for elements to be ignored. Values such as .0646e+24 etc.
This might be a bug, as it seems the values are uninitialized.
I cannot reproduce it using your (modified) code for N = 5000.
Also note, that your criterion should get the prediction as the first argument and the target as the second.
reduction should be set as 'none' (lowercase n). |
st46998 | @ptrblck Thanks for your comment, I edited mine accordingly. I can confirm that this bug is occurring in torch 0.4 but cannot reproduce it in 1.0.
Could you comment on the internals of ignore_index?
Is the following code the exact, or near exact, representation of what ignore_index does?
loss = torch.mean(loss[groundtruth!=-1])
loss.backward() |
st46999 | Your code snippet will give the same result, as reduction='mean':
Here is a small example:
N = 10
criterion = nn.CrossEntropyLoss(reduction='mean', ignore_index=-1)
groundtruth = torch.rand(N, ).ge(0.5).type(torch.LongTensor)
groundtruth[7:] = -1
pred = torch.rand(N, 2, requires_grad=True)
loss = criterion(pred, groundtruth)
loss.backward()
print(pred.grad)
# Manual approach
pred.grad.zero_()
target = groundtruth[groundtruth!=-1]
output = pred[groundtruth!=-1]
loss_manual = -1 * F.log_softmax(output, 1).gather(1, target.unsqueeze(1))
loss_manual = loss_manual.mean()
loss_manual.backward()
print(pred.grad) |
st47000 | Hi, I’d like to ask something related to the last answer.
I’m working on a semi-supervised learning project and my dataloader generates batches with labelled (targets with values 0 to N) and unlabelled (-1) samples.
To keep it simple here, I need a CE loss that only computes the loss on the labelled samples within the batch.
Would this manual approach also work?
N = 10
groundtruth = torch.rand(N, ).ge(0.5).type(torch.LongTensor)
groundtruth[7:] = -1
pred = torch.rand(N, 2, requires_grad=True)
# ptrblck's manual approach
pred.grad.zero_()
target = groundtruth[groundtruth!=-1]
output = pred[groundtruth!=-1]
loss_manual = -1 * F.log_softmax(output, 1).gather(1, target.unsqueeze(1))
loss_manual = loss_manual.mean()
loss_manual.backward()
print(pred.grad)
# My manual approach
pred.grad.zero_()
criterion = nn.CrossEntropyLoss(reduction='mean')
target = groundtruth[groundtruth>0]
output = pred[groundtruth>0]
loss_manual = criterion(output, target)
loss_manual.backward()
print(pred.grad)
If it works that would be great, because it’s not very clear to me what that gather is doing there…
Thanks! |
st47001 | Your approach currently also filters out the class0 target indices. If you use groundtruth>=0, you should get the same results as my approach.
The gather operation selects the log probabilities at the target indices in dim1. |
st47002 | For practice purposes, I built an encoder-decoder that receives images of 3 and outputs images of 7 using FastAI
!pip install -Uqq fastbook
import fastbook
fastbook.setup_book()
from fastbook import *
#Loading the images
path = untar_data(URLs.MNIST_SAMPLE)
Path.BASE_PATH = path
#converting them into 784-column tensors
threes = (path/'train'/'3').ls()
sevens = (path/'train'/'7').ls()
threes_valid = (path/'valid'/'3').ls()
sevens_valid = (path/'valid'/'7').ls()
threes_data = [Image.open(i) for i in threes]
sevens_data = [Image.open(i) for i in sevens]
threes_data_v = [Image.open(i) for i in threes_valid]
sevens_data_v = [Image.open(i) for i in sevens_valid]
#To tensors
tensor_3 =[tensor(i).float()/255 for i in threes_data]
tensor_7 = [tensor(i).float()/255 for i in sevens_data]
tensor_3_v =[tensor(i).float()/255 for i in threes_data_v]
tensor_7_v = [tensor(i).float()/255 for i in sevens_data_v]
stack_3 = torch.stack(tensor_3).view(-1,28*28)
stack_7 = torch.stack(tensor_7).view(-1,28*28)
stack_7 = stack_7[:len(stack_3),:] #Making sure 7 and 3 are the same size
stack_3_v =torch.stack(tensor_3_v).view(-1,28*28)
stack_7_v = torch.stack(tensor_7_v).view(-1,28*28)
stack_7_v = stack_7_v[:len(stack_3_v),:] #Making sure 7 and 3 are the same size
#Creating dataloaders
X_train = stack_3.view(-1,28*28)
y_train = stack_7.view(-1,28*28)
X_valid = stack_3_v.view(-1,28*28)
y_valid = stack_7_v.view(-1,28*28)
dset = list(zip(X_train,y_train))
dset_v = list(zip(X_valid,y_valid))
dl = DataLoader(dset,batch_size=256)
dl_v = DataLoader(dset_v,batch_size=256)
dls = DataLoaders(dl,dl_v)
#Modeling
def loss(p,t):
    p = p.sigmoid()
    criterion = nn.MSELoss()
    return criterion(p, t)
simple_net = nn.Sequential(
    nn.Linear(28*28, 30),
    nn.ReLU(),
    nn.Linear(30, 28*28)
)
learn=Learner(dls,simple_net,opt_func=SGD,loss_func = loss)
learn.fit(80,0.1)
#Predicting
get_preds() gives me the predictions for the training set.
preds,targs = learn.get_preds()
show_image(preds[11].view(28,28)), show_image(targs[0].view(28,28))
This seems to work:
However when I try to produce a prediction for a single image I get an error:
newp = X_train[1,:]
newimage = learn.predict(newp)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-118-6693f5f1dc19> in <module>()
----> 1 newimage = learn.predict(newp)
2 frames
/usr/local/lib/python3.6/dist-packages/fastcore/foundation.py in __getattr__(self, k)
151 if self._component_attr_filter(k):
152 attr = getattr(self,self._default,None)
--> 153 if attr is not None: return getattr(attr,k)
154 raise AttributeError(k)
155 def __dir__(self): return custom_dir(self,self._dir())
AttributeError: 'list' object has no attribute 'decode_batch'
Can you please help me figure out where I went wrong here and how I can obtain a prediction for a single image? |
st47003 | Based on this thread 144 it seems predict() expects an image as the input not a tensor or list. |
st47004 | Hi all,
I need to implement below model structure defined in Keras using pytorch.
# MNIST model
layers = [
Conv2D(64, (3, 3), padding='valid', input_shape=(28, 28, 1)),
Activation('relu'),
Conv2D(64, (3, 3)),
Activation('relu'),
MaxPooling2D(pool_size=(2, 2)),
Dropout(0.5),
Flatten(),
Dense(128),
Activation('relu'),
Dropout(0.5),
Dense(10),
Activation('softmax')
]
I tried below model in Pytorch, but I would like to double check if I did it correctly:
class LeNet_dropout(nn.Module):
    def __init__(self):
        super(LeNet_dropout, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=64, kernel_size=3, padding=0, bias=True)
        self.conv2 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, padding=0, bias=True)
        self.dropout = nn.Dropout(0.5)
        self.fc1 = nn.Linear(12*12*64, 128)
        self.fc2 = nn.Linear(128, 10)
    def last_hidden_layer_output(self, X):
        X = F.relu(self.conv1(X))
        X = self.dropout(F.max_pool2d(F.relu(self.conv2(X)), 2))
        X = X.view(-1, 12*12*64)
        X = self.dropout(F.relu(self.fc1(X)))
        return X
    def forward(self, X):
        X = F.relu(self.conv1(X))
        X = self.dropout(F.max_pool2d(F.relu(self.conv2(X)), 2))
        X = X.view(-1, 12*12*64)
        X = self.dropout(F.relu(self.fc1(X)))
        X = torch.softmax(F.relu(self.fc2(X)), dim=1)
        return X
Would you please comment and correct me if I did anything wrong?
And I am not sure which loss function to use to train this model, because there is a softmax at the output. I would appreciate it if you could also explain exactly how to train it. I think I cannot use nn.CrossEntropyLoss as below:
output = Model(data)
loss = nn.CrossEntropyLoss(output,target)
And one last thing: I will also need to use the output of the last hidden layer activation. That’s why I also include a method called last_hidden_layer_output. Is there any other way to handle this? |
st47005 | nn.CrossEntropyLoss can be used for a multi-class classification and expects raw logits as the model output so you should remove the last torch.softmax activation.
Also, remove the last F.relu and return the output of self.fc2(x) directly.
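The criterion is created once and then called with the logits and targets, e.g. (a minimal sketch):
criterion = nn.CrossEntropyLoss()
output = model(data)              # raw logits of shape [batch_size, 10]
loss = criterion(output, target)  # target contains class indices
loss.backward()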
jarvico:
And last thing, I will also need to use the output of to last hidden layer activation. Thats why I also include a method called last_hidden_layer_output. Is there any other way to handle this?
You could use forward hooks as described here 2 or you could keep this method but reuse it in forward to avoid duplicated code:
def forward(self, x):
x = self.last_hidden_layer_output(x)
x = self.fc2(x)
return x |
st47006 | Thank you ptrblck.
First of all, I needed to use softmax at the last layer.
The reason I used F.relu at the last layer is that when I calculate the loss as below, I get nan values after some epochs.
output = model(data)
loss = F.nll_loss(torch.log(output), target)
So, I used ReLU at the last layer and changed the loss calculation as below:
eps = 1e-7
output = model(data)
loss = F.nll_loss(torch.log(output+eps), target)
If I remove ReLU from the last layer, how can I guarantee that the output term in torch.log stays non-negative? |
st47007 | jarvico:
First of all , I needed to use softmax at last layer.
Why would that be the case?
jarvico:
If I remove Relu from last layer, how can I guarantee the term output in torch.log become non negative?
F.nll_loss or nn.NLLLoss expects log probabilities as the model output, so you should use:
output = F.log_softmax(x, dim=1)
loss = F.nll_loss(output, target)
while nn.CrossEntropyLoss expects raw logits and will apply F.log_softmax + F.nll_loss internally. |
st47008 | Hi,
Thank you ptrblck.
I changed my model to:
class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=64, kernel_size=3, padding=0, bias=True)
        self.conv2 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, padding=0, bias=True)
        self.dropout = nn.Dropout(0.5)
        self.fc1 = nn.Linear(12*12*64, 128)
        self.fc2 = nn.Linear(128, 10)
    def last_hidden_layer_output(self, X):
        X = F.relu(self.conv1(X))
        X = self.dropout(F.max_pool2d(F.relu(self.conv2(X)), 2))
        X = X.view(-1, 12*12*64)
        X = self.dropout(F.relu(self.fc1(X)))
        return X
    def forward(self, X):
        X = self.last_hidden_layer_output(X)
        X = self.fc2(X)
        return X
I calculate loss like this:
output = model(data)
loss = F.nll_loss(F.log_softmax(output, dim=1), target)
I got predictions like this:
predictions = F.softmax(model(data), dim=1).data.argmax(1, keepdim=True)
I got last hidden layer output like this:
model.last_hidden_layer(data).data |
st47009 | Your code looks generally good.
Some small suggestions:
Don’t use the .data attribute, as it could yield unwanted side effects
you can get the predictions directly via torch.argmax(output, dim=1) without applying the softmax, since the max. logit will also have the max. probability. However, you can of course apply the softmax, if you want to see the probabilities (just don’t pass it to the criterion to calculate the loss). |
st47010 | Hello,
I am trying an autoencoder with specific connections.
My input matrix is 24,000 x 65,000
my middle layer has 4000 nodes, but the first to the second layer are not fully connected. The connection is specified based on some prior knowledge.
self.layer1.weight.data = self.layer1.weight.data.mul(self.mask)
z = self.relu(self.dropout(self.NEL1(self.EL1(x))))
I am already using a GPU for training, but still, the training is very slow. I was wondering if anyone has any suggestions on how I can speed it up?
Thanks! |
st47011 | I used PyTorch to create a 3D CNN.
I used the grid search function to choose the parameters of the model.
I found this error!
Can you help me?
Thank you in advance!
batch_size = [5, 10]
epochs = [50, 100, 500]
learn_rate = [0.01, 0.001, 0.0001, 0.00001, 0.000001]
param_grid = dict(batch_size=batch_size, epochs=epochs, learn_rate=learn_rate)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1,cv=3)
grid_result = grid.fit(data,targets)
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
print("%f (%f) with: %r" % (mean, stdev, param))
Result
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-1-ccbe836e4806> in <module>
147 param_grid = dict(batch_size=batch_size, epochs=epochs, learn_rate=learn_rate)
148 grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1,cv=3)
--> 149 grid_result = grid.fit(data,targets)
150 print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
151 means = grid_result.cv_results_['mean_test_score']
/opt/tljh/user/envs/fethi_env/lib/python3.6/site-packages/sklearn/utils/validation.py in inner_f(*args, **kwargs)
70 FutureWarning)
71 kwargs.update({k: arg for k, arg in zip(sig.parameters, args)})
---> 72 return f(**kwargs)
73 return inner_f
74
/opt/tljh/user/envs/fethi_env/lib/python3.6/site-packages/sklearn/model_selection/_search.py in fit(self, X, y, groups, **fit_params)
653
654 scorers, self.multimetric_ = _check_multimetric_scoring(
--> 655 self.estimator, scoring=self.scoring)
656
657 if self.multimetric_:
/opt/tljh/user/envs/fethi_env/lib/python3.6/site-packages/sklearn/metrics/_scorer.py in _check_multimetric_scoring(estimator, scoring)
473 if callable(scoring) or scoring is None or isinstance(scoring,
474 str):
--> 475 scorers = {"score": check_scoring(estimator, scoring=scoring)}
476 return scorers, False
477 else:
/opt/tljh/user/envs/fethi_env/lib/python3.6/site-packages/sklearn/utils/validation.py in inner_f(*args, **kwargs)
70 FutureWarning)
71 kwargs.update({k: arg for k, arg in zip(sig.parameters, args)})
---> 72 return f(**kwargs)
73 return inner_f
74
/opt/tljh/user/envs/fethi_env/lib/python3.6/site-packages/sklearn/metrics/_scorer.py in check_scoring(estimator, scoring, allow_none)
401 if not hasattr(estimator, 'fit'):
402 raise TypeError("estimator should be an estimator implementing "
--> 403 "'fit' method, %r was passed" % estimator)
404 if isinstance(scoring, str):
405 return get_scorer(scoring)
TypeError: estimator should be an estimator implementing 'fit' method, CNNModel(
(conv_layer1): Sequential(
(0): Conv3d(3, 32, kernel_size=(3, 3, 3), stride=(1, 1, 1))
(1): LeakyReLU(negative_slope=0.01)
(2): MaxPool3d(kernel_size=(2, 2, 2), stride=(2, 2, 2), padding=0, dilation=1, ceil_mode=False)
)
(conv_layer2): Sequential(
(0): Conv3d(32, 64, kernel_size=(3, 3, 3), stride=(1, 1, 1))
(1): LeakyReLU(negative_slope=0.01)
(2): MaxPool3d(kernel_size=(2, 2, 2), stride=(2, 2, 2), padding=0, dilation=1, ceil_mode=False)
)
(fc1): Linear(in_features=1404928, out_features=2, bias=True)
(fc2): Linear(in_features=1404928, out_features=2, bias=True)
(relu): LeakyReLU(negative_slope=0.01)
(batch): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(drop): Dropout(p=0.15, inplace=True)
) was passed |
st47012 | Instead of using GridSearchCV, give hyperearch 53 a try. You can also try GridSearchCV with skorch 32. |
st47013 | Thank you @Abhilash_Srivastava for your response.
I tried with skorch but I found an error here
net = NeuralNet(
model,
max_epochs=50,
lr=0.1,
criterion = torch.nn.CrossEntropyLoss(),
# Shuffle training data on each epoch
iterator_train__shuffle=True,
verbose=False,
)
net.fit(data, targets)
TypeError Traceback (most recent call last)
<ipython-input-1-b062d1acaff3> in <module>
166 verbose=False,
167 )
--> 168 net.fit(data, targets)
169 y_proba = net.predict_proba(X)
170 #score(X, y[, sample_weight])
/opt/tljh/user/envs/fethi_env/lib/python3.6/site-packages/skorch/net.py in fit(self, X, y, **fit_params)
899 """
900 if not self.warm_start or not self.initialized_:
--> 901 self.initialize()
902
903 self.partial_fit(X, y, **fit_params)
/opt/tljh/user/envs/fethi_env/lib/python3.6/site-packages/skorch/net.py in initialize(self)
583 self.initialize_virtual_params()
584 self.initialize_callbacks()
--> 585 self.initialize_criterion()
586 self.initialize_module()
587 self.initialize_optimizer()
/opt/tljh/user/envs/fethi_env/lib/python3.6/site-packages/skorch/net.py in initialize_criterion(self)
471 """Initializes the criterion."""
472 criterion_params = self.get_params_for('criterion')
--> 473 self.criterion_ = self.criterion(**criterion_params)
474 if isinstance(self.criterion_, torch.nn.Module):
475 self.criterion_ = to_device(self.criterion_, self.device)
/opt/tljh/user/envs/fethi_env/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
530 result = self._slow_forward(*input, **kwargs)
531 else:
--> 532 result = self.forward(*input, **kwargs)
533 for hook in self._forward_hooks.values():
534 hook_result = hook(self, input, result)
TypeError: forward() missing 2 required positional arguments: 'input' and 'target'
Note: 3D CNN:
class CNNModel(nn.Module):
    def __init__(self):
        super(CNNModel, self).__init__()  # inheritance
        self.conv_layer1 = self._conv_layer_set(3, 32)
        self.conv_layer2 = self._conv_layer_set(32, 64)
        self.fc1 = nn.Linear(64*28*28*28, 2)
        self.fc2 = nn.Linear(1404928, num_classes)
        self.relu = nn.LeakyReLU()
        self.batch = nn.BatchNorm1d(2)
        self.drop = nn.Dropout(p=0.15, inplace=True)
    def _conv_layer_set(self, in_c, out_c):
        conv_layer = nn.Sequential(
            nn.Conv3d(in_c, out_c, kernel_size=(3, 3, 3), padding=0),
            nn.LeakyReLU(),
            nn.MaxPool3d((2, 2, 2)),
        )
        return conv_layer
    def forward(self, x):
        # Set 1
        out = self.conv_layer1(x)
        out = self.conv_layer2(out)
        out = out.view(out.size(0), -1)
        out = self.fc1(out)
        out = self.relu(out)
        out = self.batch(out)
        out = self.drop(out)
        out = F.softmax(out, dim=1)
        return out |
st47014 | I think there’s something wrong with your forward pass. If you’re using torch.nn.CrossEntropyLoss(), you wouldn’t need F.softmax.
Try running your model first without GridSearchCV. Just pick any set of hyperparams and make it train correctly. You might encounter more errors, correct those and then try skorch. |
st47015 | I modified the model by:
class CNNModel(nn.Module):
    def __init__(self):
        super(CNNModel, self).__init__()
        self.conv_layer1 = self._conv_layer_set(3, 32)
        self.conv_layer2 = self._conv_layer_set(32, 64)
        self.conv_layer3 = self._conv_layer_set(64, 128)
        self.conv_layer4 = self._conv_layer_set(128, 256)
        self.conv_layer5 = self._conv_layer_set(256, 512)
        #self.conv_layer6 = self._conv_layer_set(512, 1024)
        self.fc1 = nn.Linear(512, 128)
        self.fc2 = nn.Linear(128, num_classes)
        self.relu = nn.LeakyReLU()
        self.batch = nn.BatchNorm1d(128)
        self.drop = nn.Dropout(p=0.15, inplace=True)
    def _conv_layer_set(self, in_c, out_c):
        conv_layer = nn.Sequential(
            nn.Conv3d(in_c, out_c, kernel_size=(3, 3, 3), padding=0),
            nn.LeakyReLU(),
            nn.MaxPool3d((2, 2, 2)),
        )
        return conv_layer
    def forward(self, x):
        # Set 1
        out = self.conv_layer1(x)
        out = self.conv_layer2(out)
        out = self.conv_layer3(out)
        out = self.conv_layer4(out)
        out = self.conv_layer5(out)
        out = out.view(out.size(0), -1)
        #print('conv shape', out.shape)
        out = self.fc1(out)
        out = self.relu(out)
        out = self.batch(out)
        out = self.drop(out)
        out = self.fc2(out)
        return out
It works fine without Grid Search, but with Grid Search it still shows me the same error message (TypeError: forward() missing 2 required positional arguments: 'input' and 'target') |
st47016 | Anna_yah:
criterion = torch.nn.CrossEntropyLoss(),
Change to criterion = torch.nn.CrossEntropyLoss,
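I.e. something like this (a sketch of your earlier call with only that line changed):
net = NeuralNet(
    model,
    max_epochs=50,
    lr=0.1,
    criterion=torch.nn.CrossEntropyLoss,  # pass the class, skorch will instantiate it
    iterator_train__shuffle=True,
    verbose=False,
)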
Similar issue here 18. |
st47017 | Is there any difference between x.to(‘cuda’) vs x.cuda()? Which one should I use? Documentation seems to suggest to use x.to(‘cuda’). |
st47018 | I’m quite new to PyTorch, so there may be more to it than this, but I think that one advantage of using x.to(device) is that you can do something like this:
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
x = x.to(device)
Then if you’re running your code on a different machine that doesn’t have a GPU, you won’t need to make any changes. If you explicitly do x = x.cuda() or even x = x.to('cuda') then you’ll have to make changes for CPU-only machines. |
st47019 | .cuda()/.cpu() is the old, pre-0.4 way. As of 0.4, it is recommended to use .to(device) because it is more flexible, as neighthan showed above. |
st47020 | Hi,
I am seeing a big timing difference in the following piece of code:
import torch
from datetime import datetime
a = torch.rand(20000,20000)
a = a.cuda()
#a = torch.rand(20000,20000)
#a.to('cuda')
i=0
t1 = datetime.now()
while i< 500:
a += 1
a -= 1
i+=1
t2 = datetime.now()
print('cuda', t2-t1)
This code will take 1 min.
Using
a = torch.rand(20000,20000)
a.to('cuda')
instead takes only ~1 second.
Am I getting something wrong?
I am using torch 1.0+ |
st47021 | CUDA operations are asynchronous, so you would have to synchronize all CUDA ops before starting and stopping the timer:
a = torch.rand(20000,20000)
a = a.cuda()
i=0
torch.cuda.synchronize()
t1 = time.time()
while i< 500:
a += 1
a -= 1
i+=1
torch.cuda.synchronize()
t2 = time.time()
print('cuda', t2-t1)
a = torch.rand(20000,20000)
a = a.to('cuda')
i=0
torch.cuda.synchronize()
t1 = time.time()
while i< 500:
a += 1
a -= 1
i+=1
torch.cuda.synchronize()
t2 = time.time()
print('cuda string', t2-t1)
> cuda 5.500105619430542
> cuda string 5.479088306427002
Also, it seems you’ve forgotten to reassign a.to('cuda') to a, so that this code will run on the CPU. |
st47022 | Is this true? What if you are using multiple GPUs? Would pytorch allocate the right GPUs automatically, without having to specify them, if one uses .cuda()? @ptrblck
PS: asked similar question Run Pytorch on Multiple GPUs 15 but didn’t get an answer to that (if .cuda() allocates to the right gpu automatically or if it has to be manually done all the time). |
st47023 | They have the same behavior: to("cuda") and .cuda() when you don’t specify the device, will use device 0.
You can specify the device for both with to("cuda:0") and .cuda(0). |
st47024 | So we have to specify the device? Always, if we use multiple GPUs?
(I am trying to modify my code as little as possible, but I keep getting issues that tensors are not on the right GPUs… I am testing it with pytorch’s resnet18) |
st47025 | No you don’t have to specify the device.
As mentioned in my first sentence above, if you don’t specify the device, device 0 will be used. |
st47026 | Do you try to use a GPU that is not the 0th somewhere?
Note that you can use CUDA_VISIBLE_DEVICES=0 to hide all but the 0th GPU and avoid this issue altogether |
st47027 | Hi all,
my data is stored in a three dimensional tensor (no of samples, length of timeseries, feature dimension).
Concatenating these different samples to one timeseries is in my case for methodological reasons not possible. Hence, I need a custom getitem method that accepts two indices: One to choose the sample and one to choose the index within that sample. What exactly do I have to change in addition? Do I have to write a custom collate_fn class as well? Which adjustments would be necessary there?
I have just realized that this would require to edit the _MapDatasetFetcher(_BaseDatasetFetcher) class since there is a function call:
data = [self.dataset[idx] for idx in possibly_batched_index
I.e., apparently it is not wanted that the dataset getitem method can use two indices. So do I even have a chance to implement my approach with two indices in the getitem method?
https://github.com/pytorch/pytorch/blob/master/torch/utils/data/_utils/fetch.pyhttps://github.com/pytorch/pytorch/blob/master/torch/utils/data/_utils/fetch.py 74 |
st47028 | Hi Raphi!
I don’t know of a way to give the dataset two indicies but I can think of another solution to your problem.
When you initialize your dataset you could build a mapping from a one-dimensional index to your two dimensional index. I wrote some code that hopefully could help you
from torch.utils.data import Dataset
class MyDataset(Dataset):
    def __init__(self, data_file):
        self.data_file = data_file
        self.index_map = {}
        index = 0
        for sample in data_file:  # First dimension
            sample_index = sample['index']
            for timeseries in sample:  # Second dimension
                timeseries_index = timeseries['index']
                self.index_map[index] = (sample_index, timeseries_index)
                index += 1
    def __getitem__(self, idx):
        sample_index, timeseries_index = self.index_map[idx]
        # Use the two indices to get the desired data
        ... |
st47029 | Hi,
The fact is that you will have a fixed number of samples. You can think of a sample as a NN input. So if you need 2 indices as your data is N_samples,length you can just write the dataset as if you have N_sample x length samples and create a mapping between (N_samples*length ) --> (N_samples,length)
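A minimal sketch of that mapping (my own example, assuming all series share the same length and hypothetical attribute names self.data / self.length):
def __getitem__(self, idx):
    sample_idx, time_idx = divmod(idx, self.length)  # flat index -> (sample, time step)
    return self.data[sample_idx, time_idx]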
The Dataset class is flexible: you just return how many elements you have and it will iterate over that amount. If your data is more complex you just need to think about how to code it. I would say that it doesn’t accept several indices as it follows the maxim “1 input - 1 index” |
st47030 | Hi, thank you both for your answers.
@Juan, what I have not mentioned in my question is that my timeseries have different lengths, so your advice wouldn’t work unless I pad them.
@Oli your suggestion is very interesting! However, when testing your code I get an error
IndexError: too many indices for tensor of dimension 2
Which python version do you use? (I use python 3.6)
EDIT: I have adjusted the code like this, such that it works also on my system:
index_map = {}
index = 0
for sample_index, sample in enumerate(data_file):  # First dimension
    for timeseries_index, timeseries in enumerate(sample):  # Second dimension
        index_map[index] = (sample_index, timeseries_index)
        index += 1 |
st47031 | BTW you can return lists of tensors instead of stacked tensors if that helps
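E.g. with a custom collate_fn along these lines (a rough sketch, assuming a dataset that returns variable-length tensors):
def list_collate(batch):
    # keep each (variable-length) sample as-is instead of stacking into one tensor
    return list(batch)

loader = DataLoader(dataset, batch_size=8, collate_fn=list_collate)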
Dataloader can return nested python structures as well. |
st47032 | My code wasn’t meant to be used as actual code, just to show you my idea. You need to adapt it to your situation
Great that you did it |
st47033 | Using the following logic for __len__ gives an extra advantage of considering all possible chunks in the timeseries during training.
def __len__(self):
    return len(self.index_map) |
st47034 | I have a timeseries dataset in a numpy array with shape (number of series, number of entries). I want to create multiple Dataset objects of this dataset for different custom batching methods. Does creating multiple Dataset objects lead to copying of the underlying data in memory, or is only a reference to the underlying data used? Also, how do I test it? |
st47035 | Hi All,
I’ve spent some time reading approaches to this, but am curious for your advice.
I have a model phi(x) that takes inputs of dimension 1, and returns row vectors of dimension L. I’d like to apply it elementwise to a 2D matrix of data, giving me some 3D object.
Essentially, to write new_tensor = map(phi, old_tensor) in a differentiable way.
It looks like the way to do this is with a conv1d layer of some kind, but I can’t get it to work.
Any suggestions would be appreciated. |
st47036 | I want to have PyTorch Mobile available on my mobile device; how do I build PyTorch Mobile from source, without needing to include full-fledged PyTorch? If PyTorch is built that way, how is standard PyTorch distinguished from PyTorch Mobile during inference?
Thank you. |
st47037 | I have a video dataset; it consists of 850 videos with a lot of frames per video (not necessarily the same number in all videos).
As I can’t fit an entire video in GPU memory at once, I have to sample frames from the video (maybe consecutive, maybe random).
When I am building a torch.utils.data.Dataset object, then __len__ of the dataset should be 850 only (the number of videos). Otherwise I could make it something based on the number of frames as well. For example, if I use 32 consecutive frames I can get "length" number of 32-frame clips from consecutive frames in all videos.
Then I will need to make the __getitem__ function based on that! |
st47038 | Hey,
It might be good to have a look at https://github.com/NVIDIA/nvvl 1.4k
They have a PyTorch dataloader that loads videos on the GPU, and might be helpful for you. |
st47039 | Hi, thanks @fmassa. Actually I already knew about the dataloader, but I am under a hard deadline and don’t have time to install and learn a new DataLoader (will probably do so sometime though).
Also, I’m using the former approach (len as the number of videos) and closing this on that note.
Thanks! |
st47040 | Also, I tried their dataloader and after struggling with their code for 3 days I gave up because it was not easy to figure out.
Does PyTorch have any plan to provide something for it? At least an example would be useful |
@Naman-ntc did you finally figure it out? |
st47041 | Did you look at their example code for superresolution in pytorch 576? It should help you somewhat. I looked at it in between but didn’t proceed with it.
Agree that it is somewhat complicated, but you might open an issue in case you don’t understand something!
Would be helpful to have more examples I guess! |
st47042 | Yeah, I could not make that work in my case,
and the sad part is that I don’t know how to use Docker, and don’t wanna get into more confusion.
Thanks anyways |
st47043 | I am in your shoes now; is there any hope PyTorch provides a DataLoader similar to nvvl that doesn’t require all this hassle to use it? |
st47044 | Late, but this repository right here offers exactly what this thread is asking for: off-the-shelf PyTorch Video Dataset Loading https://github.com/RaivoKoot/Video-Dataset-Loading-Pytorch 332
It allows you to choose how many frames from a video you want to load and loads them evenly spaced from start to end of a video. It’s native PyTorch and doesn’t take the hassle that other methods do. |
st47045 | Hi,
I tried to implement BCEWithLogitsLoss by myself.
for example,
def bce_loss(pred, target):
    pred = F.sigmoid(pred)
    loss = torch.mean(-torch.sum(target * torch.log(pred) + (1-target) * torch.log(1-pred)) / target.size(1))
However, the loss is quite a bit larger than with torch.nn.BCEWithLogitsLoss,
for example,
def bce_loss_pytorch(pred, target):
    m = torch.nn.BCEWithLogitsLoss()
    loss = m(pred, target)
I am not sure what is the main difference between my implementation and torch.nn.BCEWithLogitsLoss.
any idea? |
st47046 | Solved by RoySadaka in post #2 |
st47047 | I think the issue is that you have 2 means
The first one is that you have the sum and then divide by the size (but you should use target.numel() instead of target.size(1))
Second is when you use torch.mean
So 2 options:
Here is your code with torch.mean
def bce_loss(pred, target):
    pred = F.sigmoid(pred)
    loss = torch.mean(-(target * torch.log(pred) + (1-target) * torch.log(1-pred)))
    return loss
And here is your code with manual mean (sum and divide)
def bce_loss(pred, target):
    pred = F.sigmoid(pred)
    loss = torch.sum(-(target * torch.log(pred) + (1-target) * torch.log(1-pred))) / target.numel()
    return loss
That said, they say that using the official torch.nn.BCEWithLogitsLoss() is better, because although it’s doing sigmoid and then BCE, it does it in such a way that it is numerically stable (I believe the optimization is on the C++ level).
Roy. |
st47048 | Hi Roy and 杜明軒!
RoySadaka:
That said, they say that using the official torch.nn.BCEWithLogitsLoss() is better, cause although it’s doing sigmoid and then BCE, it does it in such a way that it is numerically stable (i believe the optimization is on the c++ level)
As a minor note, you can implement your own BCEWithLogitsLoss
with the same numerical benefits as pytorch’s by replacing the
separate calls to sigmoid() and log() with a call to logsigmoid().
Best.
K. Frank |
st47049 | Hello,
When I type import torch on the python console, this is what’s going on:
import torch
Microsoft Visual C++ Redistributable is not installed, this may lead to the DLL load failure.
It can be downloaded at https://aka.ms/vs/16/release/vc_redist.x64.exe 22
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\ziywa\anaconda3\lib\site-packages\torch\__init__.py", line 128, in <module>
raise err
OSError: [WinError 126] The specified module could not be found.
Installation was successful.
Best,
Ziyue Wang |
st47050 | Hi,
Have you tried following the instruction from the error and install the missing component? |
st47051 | OK. Even after I installed this successfully, I still cannot run a program with torch on PyCharm (ModuleNotFoundError: No module named ‘torch’). I can do it on the python console. |
st47052 | I guess the issue is that pycharm is not using the same python as the one in your console where you installed pytorch.
You will need to make sure that PyCharm uses the same python as the one you installed torch in. |
st47053 | I’m following the English to German BERT example here: https://pytorch.org/hub/pytorch_fairseq_translation/ 9
I’d like to export this model to ONNX to use for inference on ONNXRuntime. I’ve found a tutorial here: https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html 8
But the step where I do the torch.onnx.export is failing. I’m thinking the issue is that I’m not 100% sure of the shape or number of inputs. Typically I think there are 3 inputs to BERT all of the same shape (e.g. (batch_size, 256) or (batch_size, 1024)). Is there some method to probe the Hub model and find out the input and output names and shapes it expects?
Would anyone be able to help me spot my mistake?
import torch
# Load an En-De Transformer model trained on WMT'19 data:
en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de.single_model', tokenizer='moses', bpe='fastbpe')
bert_model = en2de.models[0]
# Export the model
batch_size = 1
x = torch.ones((batch_size, 1024), dtype=torch.long)
y = torch.ones((batch_size, 1024), dtype=torch.long)
z = torch.ones((batch_size, 1024), dtype=torch.long)
torch.onnx.export(bert_model, # model being run
(x,y,z), # model input (or a tuple for multiple inputs)
"bert_en2de.onnx", # where to save the model (can be a file or file-like object)
export_params=True) # store the trained parameter weights inside the model file
–> 532 result = self.forward(*input, **kwargs)
533 for hook in self._forward_hooks.values():
534 hook_result = hook(self, input, result)
~/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/jit/init.py in forward(self, *args)
358 in_vars + module_state,
359 _create_interpreter_name_lookup_fn(),
–> 360 self._force_outplace,
361 )
362
Error is:
RuntimeError: hasSpecialCase INTERNAL ASSERT FAILED at /opt/conda/conda-bld/pytorch_1579022119164/work/torch/csrc/jit/passes/alias_analysis.cpp:300, please report a bug to PyTorch. We don’t have an op for aten::uniform but it isn’t a special case. (analyzeImpl at /opt/conda/conda-bld/pytorch_1579022119164/work/torch/csrc/jit/passes/alias_analysis.cpp:300)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x47 (0x7fa0fd3ce627 in /home/sdp/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: + 0x304553b (0x7fa10063953b in /home/sdp/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/lib/libtorch.so) |
st47054 | I know this is a little late, but just in case you are still looking into this I wanted to share what I see in the error. It looks like your issue is not the batch size, but rather the “aten::uniform” operator in one of your layers. You can check here 35 for supported operators and/or how to add aten operators. |
st47055 | Hi guys,
I’m loading my image dataset (3 classes) with datasets.ImageFolder(), unfortunately the labels are then integers. So for the first class the label is 0, for the second it is 1 and for the last it is 2. I’d like the labels to be one hot encoded tho. Is there a way I can do this with the target_transform property when loading the data?
I’ve tried nn.functional.one_hot() for example, but I always need an input for this function and I don’t know what to use as an input.
I also tried to use torch.eye(3)[i] where i is the label (0, 1 or 2) but again, i need the target as an input.
Is there a way to do this?
Thank you |
st47056 | Solved by wilhelmberghammer in post #5 |
st47057 | My provisional fix:
example training loop:
for epoch in range(EPOCHS):
    for i, (images, labels) in tqdm(enumerate(train_dl)):
        images = images.to(device)
        labels = torch.eye(3)[labels].to(device)

        preds = model(images)
        loss = loss_func(preds, labels)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
During every iteration, when pushing the labels to the device (‘cuda’ or ‘cpu’ etc.), a one hot vector is created.
labels = torch.eye(3)[labels].to(device)
torch.eye(3)
'''output:
[[1, 0, 0]
[0, 1, 0]
[0, 0, 1]]
'''
torch.eye(3)[0]
'''output:
[1, 0, 0]
'''
So when the label is 0 the one hot vector is [1, 0, 0], for 1 it is [0, 1, 0] and so on.
Still, I’d like a fix using the target_transform attribute tho! |
st47058 | According to the docs target_transform should be callable. Assuming you know the number of classes in advance, something like this should work:
from typing import List

from torchvision.datasets import ImageFolder

def target_to_oh(target: int) -> List[int]:
    NUM_CLASS = 5  # hard code here, can do partial
    one_hot = [0] * NUM_CLASS
    one_hot[target] = 1
    return one_hot

# Use here
ds = ImageFolder('some/path', target_transform=target_to_oh)
I didn’t try it. May have some errors but conceptually should work |
st47059 | Whether you encode them one-hot style (pun intended) or not, it’ll still give you the same or similar results.
Still, let me tell you what I usually do:
I created a small block of code that iterates through my dataset and assigns each sample a one-hot encoded label, np.eye(3)[idx], based on the directory it belongs to.
Then I saved the entire dataset as a .npy file; after loading it I convert the numpy arrays to tensors (you can also make them tensors from the start).
I would have posted the sample code here, but I’m currently out and not at my PC at the moment, so you’ll have to wait till evening (about 5 hrs from now).
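Roughly, the idea looks like this (paths, image size and file layout here are placeholders, not my actual code):
import os
import numpy as np
from PIL import Image

classes = sorted(os.listdir('data/train'))  # one sub-directory per class
images, labels = [], []
for idx, cls in enumerate(classes):
    for fname in os.listdir(os.path.join('data/train', cls)):
        img = Image.open(os.path.join('data/train', cls, fname)).convert('RGB').resize((64, 64))
        images.append(np.asarray(img))
        labels.append(np.eye(len(classes))[idx])  # one-hot label from the directory index

np.save('train_images.npy', np.stack(images))
np.save('train_labels.npy', np.stack(labels))
After loading with np.load, torch.from_numpy turns the arrays back into tensors. |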
st47060 | Thank you, @Alexey_Demyanchuk
I got an error when trying your code, but then I changed it a bit and it worked like that:
def target_to_oh(target):
    NUM_CLASS = 3  # hard code here, can do partial
    one_hot = torch.eye(NUM_CLASS)[target]
    return one_hot
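If you prefer torch.nn.functional.one_hot, the same target_transform can be written like this (a sketch; the .float() cast is only needed if your loss expects float targets):
import torch
import torch.nn.functional as F
from torchvision.datasets import ImageFolder

def target_to_oh(target):
    # the target comes in as a plain int, so wrap it in a tensor first
    return F.one_hot(torch.tensor(target), num_classes=3).float()

ds = ImageFolder('some/path', target_transform=target_to_oh)
Both versions produce the same one-hot targets. |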
st47061 | I’m training an object detection model with YOLO structure:
class YOLO(nn.Module):
    def __init__(self, ...):
        super(YOLO, self).__init__()
        self.backbone = resnet50(pretrained=True)  # pretrained weights, classification layer already removed
        self.neck = nn.Sequential(
            conv3x3(in_planes=2048, out_planes=2048),
            nn.ReLU(),
            conv3x3(in_planes=2048, out_planes=2048),
            nn.ReLU(),
            conv3x3(in_planes=2048, out_planes=2048),
            nn.ReLU(),
        )
        self.heads = nn.ModuleList(
            [YOLOHead(num_classes) for anchor in self.anchors]
        )

    def forward(self, x, ...):
        x = self.backbone(x)
        x = self.neck(x)
        outs = [head(x) for head in self.heads]

class YOLOHead(nn.Module):
    def __init__(self, num_classes):
        super(YOLOHead, self).__init__()
        self.objectness = conv1x1(in_planes=2048, out_planes=1)
        self.cls = conv1x1(in_planes=2048, out_planes=1 + num_classes)
        self.reg = conv1x1(in_planes=2048, out_planes=4)

    def forward(self, x):
        objectness = self.objectness(x)
        cls = self.cls(x)
        reg = self.reg(x)
        return objectness, cls, reg
I optimize the sum of the losses for (1) objectness, (2) classification and (3) bbox regression. As shown above, the heads compute them separately, but with a shared backbone.
The issue is that the regression loss (mse_loss between gt and pred) sometimes rises suddenly and is then optimized back down to a moderate level. But after that, the objectness and classification losses can hardly be optimized any more. My words may be vague, but it looks like this (see the stable obj_loss and cls_loss after batch 26):
[epoch: 1, batch: 1] loss: 7.862009, obj_loss: 1.040800, cls_loss: 4.567673, reg_loss: 2.253536
[epoch: 1, batch: 2] loss: 7.364972, obj_loss: 1.027697, cls_loss: 4.538699, reg_loss: 1.798576
[epoch: 1, batch: 3] loss: 7.077137, obj_loss: 1.006557, cls_loss: 4.509902, reg_loss: 1.560679
[epoch: 1, batch: 4] loss: 6.991296, obj_loss: 0.959222, cls_loss: 4.449108, reg_loss: 1.582966
[epoch: 1, batch: 5] loss: 7.313764, obj_loss: 0.869381, cls_loss: 4.270252, reg_loss: 2.174131
[epoch: 1, batch: 6] loss: 6.717353, obj_loss: 0.832326, cls_loss: 4.162328, reg_loss: 1.722699
[epoch: 1, batch: 7] loss: 7.118721, obj_loss: 0.969238, cls_loss: 3.606729, reg_loss: 2.542755
[epoch: 1, batch: 8] loss: 6.670866, obj_loss: 0.907451, cls_loss: 3.944198, reg_loss: 1.819217
[epoch: 1, batch: 9] loss: 7.152078, obj_loss: 0.937133, cls_loss: 4.139555, reg_loss: 2.075390
[epoch: 1, batch: 10] loss: 6.950064, obj_loss: 0.961618, cls_loss: 4.229391, reg_loss: 1.759055
[epoch: 1, batch: 11] loss: 6.504709, obj_loss: 0.969900, cls_loss: 4.112210, reg_loss: 1.422600
[epoch: 1, batch: 12] loss: 6.989936, obj_loss: 0.993160, cls_loss: 3.952706, reg_loss: 2.044071
[epoch: 1, batch: 13] loss: 6.952768, obj_loss: 0.883456, cls_loss: 3.616553, reg_loss: 2.452760
[epoch: 1, batch: 14] loss: 7.013877, obj_loss: 0.963605, cls_loss: 4.107985, reg_loss: 1.942287
[epoch: 1, batch: 15] loss: 6.275753, obj_loss: 0.981065, cls_loss: 3.834120, reg_loss: 1.460567
[epoch: 1, batch: 16] loss: 6.559465, obj_loss: 0.931920, cls_loss: 3.854708, reg_loss: 1.772837
[epoch: 1, batch: 17] loss: 6.501263, obj_loss: 0.988465, cls_loss: 4.012724, reg_loss: 1.500074
[epoch: 1, batch: 18] loss: 6.336897, obj_loss: 0.972874, cls_loss: 3.923963, reg_loss: 1.440061
[epoch: 1, batch: 19] loss: 7.756456, obj_loss: 0.907921, cls_loss: 4.357026, reg_loss: 2.491509
[epoch: 1, batch: 20] loss: 6.757920, obj_loss: 1.009806, cls_loss: 4.316257, reg_loss: 1.431857
[epoch: 1, batch: 21] loss: 6.218307, obj_loss: 0.975769, cls_loss: 4.141428, reg_loss: 1.101111
[epoch: 1, batch: 22] loss: 11.127349, obj_loss: 1.423922, cls_loss: 6.209508, reg_loss: 3.493918
[epoch: 1, batch: 23] loss: 7.093385, obj_loss: 1.035403, cls_loss: 4.525311, reg_loss: 1.532671
[epoch: 1, batch: 24] loss: 7.644183, obj_loss: 1.034627, cls_loss: 4.520189, reg_loss: 2.089367
[epoch: 1, batch: 25] loss: 6.772291, obj_loss: 1.016327, cls_loss: 4.244712, reg_loss: 1.511253
[epoch: 1, batch: 26] loss: 193.881104, obj_loss: 1.830120, cls_loss: 17.074070, reg_loss: 174.976913
[epoch: 1, batch: 27] loss: 7.431651, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 1.825148
[epoch: 1, batch: 28] loss: 7.192277, obj_loss: 1.039721, cls_loss: 4.566783, reg_loss: 1.585774
[epoch: 1, batch: 29] loss: 6.893618, obj_loss: 1.039721, cls_loss: 4.566783, reg_loss: 1.287114
[epoch: 1, batch: 30] loss: 7.638692, obj_loss: 1.039721, cls_loss: 4.566783, reg_loss: 2.032189
[epoch: 1, batch: 31] loss: 7.449893, obj_loss: 1.039721, cls_loss: 4.566783, reg_loss: 1.843388
[epoch: 1, batch: 32] loss: 7.655496, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 2.048993
[epoch: 1, batch: 33] loss: 7.145531, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 1.539027
[epoch: 1, batch: 34] loss: 7.390171, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 1.783668
[epoch: 1, batch: 35] loss: 7.374976, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 1.768474
[epoch: 1, batch: 36] loss: 7.157423, obj_loss: 1.039721, cls_loss: 4.566783, reg_loss: 1.550920
[epoch: 1, batch: 37] loss: 7.023059, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 1.416556
[epoch: 1, batch: 38] loss: 7.025788, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 1.419284
[epoch: 1, batch: 39] loss: 7.916700, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 2.310196
[epoch: 1, batch: 40] loss: 7.503070, obj_loss: 1.039721, cls_loss: 4.566783, reg_loss: 1.896566
[epoch: 1, batch: 41] loss: 7.642205, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 2.035701
[epoch: 1, batch: 42] loss: 7.002790, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 1.396288
[epoch: 1, batch: 43] loss: 7.220839, obj_loss: 1.039721, cls_loss: 4.566783, reg_loss: 1.614336
[epoch: 1, batch: 44] loss: 6.997747, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 1.391244
[epoch: 1, batch: 45] loss: 7.289024, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 1.682521
[epoch: 1, batch: 46] loss: 8.104959, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 2.498456
[epoch: 1, batch: 47] loss: 7.818621, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 2.212118
[epoch: 1, batch: 48] loss: 7.451015, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 1.844513
[epoch: 1, batch: 49] loss: 7.190177, obj_loss: 1.039721, cls_loss: 4.566783, reg_loss: 1.583674
[epoch: 1, batch: 50] loss: 7.414326, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 1.807823
[epoch: 1, batch: 51] loss: 7.082467, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 1.475964
[epoch: 1, batch: 52] loss: 8.054209, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 2.447706
[epoch: 1, batch: 53] loss: 7.361894, obj_loss: 1.039721, cls_loss: 4.566783, reg_loss: 1.755390
[epoch: 1, batch: 54] loss: 7.404776, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 1.798274
[epoch: 1, batch: 55] loss: 7.673492, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 2.066990
[epoch: 1, batch: 56] loss: 7.255838, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 1.649335
[epoch: 1, batch: 57] loss: 7.364078, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 1.757576
[epoch: 1, batch: 58] loss: 7.672638, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 2.066136
[epoch: 1, batch: 59] loss: 6.943457, obj_loss: 1.039721, cls_loss: 4.566783, reg_loss: 1.336953
[epoch: 1, batch: 60] loss: 7.074402, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 1.467899
[epoch: 1, batch: 61] loss: 7.573625, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 1.967122
[epoch: 1, batch: 62] loss: 6.797518, obj_loss: 1.039721, cls_loss: 4.566783, reg_loss: 1.191013
[epoch: 1, batch: 63] loss: 7.667085, obj_loss: 1.039721, cls_loss: 4.566783, reg_loss: 2.060580
[epoch: 1, batch: 64] loss: 7.461186, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 1.854683
[epoch: 1, batch: 65] loss: 7.218839, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 1.612336
[epoch: 1, batch: 66] loss: 6.966671, obj_loss: 1.039721, cls_loss: 4.566783, reg_loss: 1.360166
[epoch: 1, batch: 67] loss: 7.419224, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 1.812721
[epoch: 1, batch: 68] loss: 7.038870, obj_loss: 1.039721, cls_loss: 4.566783, reg_loss: 1.432365
[epoch: 1, batch: 69] loss: 7.527884, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 1.921382
[epoch: 1, batch: 70] loss: 6.735312, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 1.128809
[epoch: 1, batch: 71] loss: 7.051725, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 1.445222
[epoch: 1, batch: 72] loss: 7.819386, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 2.212884
[epoch: 1, batch: 73] loss: 7.343047, obj_loss: 1.039721, cls_loss: 4.566783, reg_loss: 1.736544
[epoch: 1, batch: 74] loss: 6.986712, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 1.380208
[epoch: 1, batch: 75] loss: 7.757770, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 2.151267
[epoch: 1, batch: 76] loss: 7.547459, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 1.940956
[epoch: 1, batch: 77] loss: 7.939866, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 2.333363
[epoch: 1, batch: 78] loss: 7.063341, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 1.456838
[epoch: 1, batch: 79] loss: 7.327311, obj_loss: 1.039721, cls_loss: 4.566783, reg_loss: 1.720806
[epoch: 1, batch: 80] loss: 7.129947, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 1.523444
[epoch: 1, batch: 81] loss: 7.087235, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 1.480733
[epoch: 1, batch: 82] loss: 7.572030, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 1.965527
[epoch: 1, batch: 83] loss: 7.192766, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 1.586263
[epoch: 1, batch: 84] loss: 7.427009, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 1.820506
[epoch: 1, batch: 85] loss: 7.694969, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 2.088466
[epoch: 1, batch: 86] loss: 8.016901, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 2.410399
[epoch: 1, batch: 87] loss: 7.126433, obj_loss: 1.039721, cls_loss: 4.566782, reg_loss: 1.519930
It’s the first epoch. obj_loss and cls_loss barely move after batch 26, yet reg_loss keeps shifting.
The training script:
device = torch.device("cuda")

dataloader_train = get_dataloader(train=True, batch_size=batch_size)
dataset_val = VOCDetectionDataset(root="/data/sfy_projects/Datasets/VOC2007/VOCtrainval_06-Nov-2007", year="2007",
                                  image_set="val",
                                  transforms=get_transforms(train=False),
                                  show=False)
coco_anno_path = "/data/sfy_projects/Datasets/VOC2007/VOCtrainval_06-Nov-2007/voc2007.val.cocoformat.json"

model = YOLO(anchors=k_means_anchors["2007.train"], num_classes=20).to(device)
optimizer = optim.SGD(model.parameters(), lr=learning_rate, momentum=0.9, weight_decay=1e-5)
lr_scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.1, patience=5, verbose=True)

for epoch in range(epochs):
    print("---------------------- TRAINING ---------------------- ")
    model.train()
    model.freeze_backbone(layers=3)
    model.freeze_bn(model.backbone)

    running_loss, running_objectness_loss, running_classification_loss, running_regression_loss = 0.0, 0.0, 0.0, 0.0
    for i, data in enumerate(dataloader_train):
        imgs, targets = data
        imgs = imgs.to(device)
        for target in targets:
            for obj in target["objects"]:
                obj["class"] = obj["class"].to(device)
                obj["bbox"] = obj["bbox"].to(device)

        res = model(imgs, targets)
        loss = res["loss"]["loss"]
        objectness_loss = res["loss"]["objectness_loss"]
        classification_loss = res["loss"]["classification_loss"]
        regression_loss = res["loss"]["regression_loss"]

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
        running_objectness_loss += objectness_loss.item()
        running_classification_loss += classification_loss.item()
        running_regression_loss += regression_loss.item()
        if (i + 1) % batches_show == 0:
            print(f"[epoch: {epoch + 1}, batch: {i + 1}] loss: {running_loss / batches_show:.6f}, obj_loss: {running_objectness_loss / batches_show: .6f}, cls_loss: {running_classification_loss / batches_show: .6f}, reg_loss: {running_regression_loss / batches_show: .6f}")
            running_loss, running_objectness_loss, running_classification_loss, running_regression_loss = 0.0, 0.0, 0.0, 0.0

    lr_scheduler.step(loss)
I’ve tried omitting the regression loss from the optimization. That way I eventually got a fairly good model which points out the object’s location and class, but is unaware of its bbox size. I’m not sure what that proves, though.
My biggest doubt is: since I already call optimizer.zero_grad() every iteration, why can something that happens in one training batch have a lasting effect on the following batches? I also cannot find any clue as to why this happens.
BTW, the problem is reproducible for me. In fact, it happens in the first epoch every time if I set the learning rate to 0.01. However, the sudden rise of the regression loss has not shown up again since I lowered the learning rate to 0.001 (as far as I’ve seen). Is all of this just about the learning rate?
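One thing I realize might matter: SGD with momentum=0.9 keeps per-parameter momentum buffers, so a single exploded gradient can keep pushing the weights for many later steps even though optimizer.zero_grad() clears the gradients themselves. A common mitigation (not in my script above) would be clipping gradients before the update, roughly:
optimizer.zero_grad()
loss.backward()
# cap the global gradient norm so one bad batch cannot blow up the weights
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0)
optimizer.step()
Whether max_norm=10.0 is a sensible value here is just a guess. |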
st47062 | Below is a simplified version of a model I am developing
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExampleModel(nn.Module):
    def __init__(self, model_1, model_2):
        super(ExampleModel, self).__init__()
        self.model_1_name = 'Model_1'
        self.model_1 = model_1
        self.add_module(self.model_1_name, self.model_1)

        self.model_2_name = 'Model_2'
        self.model_2 = model_2
        self.add_module(self.model_2_name, self.model_2)

        self.fc_name = 'Linear_layer'
        # in_features should equal the concatenated output width of model_1 and model_2
        # (2 here assumes each sub-model emits a single feature)
        self.fc_linear = nn.Linear(2, 1)
        self.add_module(self.fc_name, self.fc_linear)

    def forward(self, input_a, input_b):
        x1 = self.model_1(input_a)
        x2 = self.model_2(input_b)
        x = torch.cat((x1, x2), 1)
        x = self.fc_linear(F.relu(x))
        return x
My end goal is to support cases where data in the input batches might be missing and avoid updating model parameters based on missing data or any “dummy” data that is sent in.
As the data sent to forward is batched, a single batch might have missing data in some places and relevant data in others. Currently the missing data is replaced with a torch.zeros tensor.
Is it possible to create an architecture where the gradients generated from these inputs are excluded from the backwards pass?
Is there a better way to go about it which does not involve creating torch.zeros tensors as dummy data? |
st47063 | It is doable if you have one availability state per sample, or if you can split the columns into a “core” and an “optional” table; then multiplying some entries of either the unreduced loss tensor or the “optional” residual layer’s output by zeros does the trick. With varying combinations of missing columns it is much harder to do without hurting performance.
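A minimal sketch of the per-sample masking idea (pred, target and mask are placeholders; mask is 1 where a sample’s data is real and 0 where it was filled with zeros):
import torch
import torch.nn.functional as F

pred = torch.randn(8, 1, requires_grad=True)  # model output for a batch of 8
target = torch.randn(8, 1)                    # ground truth (dummy rows can hold anything)
mask = torch.tensor([1., 1., 0., 1., 0., 1., 1., 1.]).unsqueeze(1)  # 0 = sample was missing

# unreduced loss, zero out the missing samples, average over the real ones only
loss = F.mse_loss(pred, target, reduction='none')
loss = (loss * mask).sum() / mask.sum().clamp(min=1)
loss.backward()  # no gradient flows from the masked-out rows
The same trick works on the output of an “optional” branch before it is merged back into the network. |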
st47064 | Is there a way we can enforce non-negativity constraints in embeddings?
In Keras, one could do like using the keras.constraints.non_neg
from keras.constraints import non_neg
movie_input = keras.layers.Input(shape=[1],name='Item')
movie_embedding = keras.layers.Embedding(n_movies + 1, n_latent_factors, name='NonNegMovie-Embedding', embeddings_constraint=non_neg())(movie_input)
I’m trying to rewrite in PyTorch a blog post I wrote on non-negative matrix factorization in Keras. |
st47065 | How is the Keras one implemented? Is it just clamping at 0 from below after each gradient update? If so, you can do the same after each optim update in PyTorch. |
st47066 | In https://github.com/keras-team/keras/blob/master/keras/constraints.py 13 we have:
class NonNeg(Constraint):
"""Constrains the weights to be non-negative.
"""
def __call__(self, w):
w *= K.cast(K.greater_equal(w, 0.), K.floatx())
return w
I guess this is just clamping it to a minimum of 0.
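In PyTorch the equivalent would be to clamp the embedding weights right after each optimizer step; a minimal sketch (layer sizes made up):
import torch
import torch.nn as nn

n_movies, n_latent_factors = 1000, 8
movie_embedding = nn.Embedding(n_movies + 1, n_latent_factors)
optimizer = torch.optim.SGD(movie_embedding.parameters(), lr=0.01)

# inside the training loop, after loss.backward():
optimizer.step()
with torch.no_grad():
    movie_embedding.weight.clamp_(min=0)  # enforce non-negativity, like Keras NonNeg
That mirrors what the Keras constraint does after each update. |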
st47067 | @Nipun_Batra
Clamping all embedding vectors on every iteration is time consuming.
How can I clamp only the updated embedding vectors with a CUDA variable?
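One possible approach (a sketch, continuing the example above) is to clamp only the rows that actually appear in the current batch:
# idx: LongTensor of the embedding indices used in this batch (can live on the GPU)
optimizer.step()
with torch.no_grad():
    rows = idx.unique()
    movie_embedding.weight[rows] = movie_embedding.weight[rows].clamp(min=0)
Whether this is actually faster than a full clamp_ will depend on the embedding and batch sizes. |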