st47768 | Do you think there would be a better way than
In [1]: import torch
In [2]: import numpy as np
In [3]: x = torch.FloatTensor([[1,2],[3,4]])
...: xx = x.split(1)
...:
In [4]: xx
Out[4]: (tensor([[1., 2.]]), tensor([[3., 4.]]))
In [5]: out = torch.FloatTensor([])
...:
...: for x_sub, num_repeat in zip(xx, [1,2]):
...:     out = torch.cat([out, x_sub.expand(num_repeat, -1)])
...:
In [6]: out
Out[6]:
tensor([[1., 2.],
[3., 4.],
[3., 4.]])
In [7]: x = np.array([[1,2],[3,4]])
In [8]: np.repeat(x, [1,2], axis=0)
Out[8]:
array([[1, 2],
[3, 4],
[3, 4]])
Does this work for sure with backpropagation? |
st47769 | Hello,
I tried the code below, and it seems that it works with backpropagation.
x = torch.tensor([[1, 2], [3, 4]], requires_grad=True)
a, b = x.split(1)
a.requires_grad
>> True
a.grad_fn
>> <SplitBackward at 0x7f1ed6f25128>
out = torch.FloatTensor([])
torch.cat([out, a.expand(1,-1)])
>> tensor([[1., 2.]], grad_fn=<CatBackward>) |
st47770 | Thank you for your answer!
I’m just thinking in terms of efficiency, whether it makes the computation much slower. Do you have an idea? |
st47771 | Einops recently got support for repeat-like patterns. Examples:
# np.repeat behavior, repeat rows (copies are in succession like aaabbbcccddd)
einops.repeat(x, 'i j -> (i copy) j', copy=3)
# np.repeat behavior, repeat columns (copies are in succession like aaabbbcccddd)
einops.repeat(x, 'i j -> i (j copy)', copy=3)
# np.tile behavior (whole sequence is repeated 3 times like abcdabcdabcd)
einops.repeat(x, 'i j -> (copy i) j', copy=3)
You can repeat/tile multiple axes independently within one operation. |
st47772 | You can do this with repeat_interleave.
It even includes your exact example in its documentation (I am guessing they introduced it after your post and perhaps even because of it):
>>> y = torch.tensor([[1, 2], [3, 4]])
>>> torch.repeat_interleave(y, 2)
tensor([1, 1, 2, 2, 3, 3, 4, 4])
>>> torch.repeat_interleave(y, 3, dim=1)
tensor([[1, 1, 1, 2, 2, 2],
[3, 3, 3, 4, 4, 4]])
>>> torch.repeat_interleave(y, torch.tensor([1, 2]), dim=0)
tensor([[1, 2],
[3, 4],
        [3, 4]])
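As a quick check on the original backpropagation question, repeat_interleave is differentiable; a minimal sketch:
>>> x = torch.tensor([[1., 2.], [3., 4.]], requires_grad=True)
>>> out = torch.repeat_interleave(x, torch.tensor([1, 2]), dim=0)
>>> out.sum().backward()
>>> x.grad  # the second row was repeated twice, so its gradient is doubled
tensor([[1., 1.],
        [2., 2.]]) |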
st47773 | Hello, I am preparing a dataset for a detection task:
def __getitem__(self, idx):
......
return from_numpy(image), from_numpy(bboxes)
As you can see, the second element of the return is bboxes, and different images will have different numbers of objects in them, so the shapes of the bboxes will vary, and this will cause the DataLoader to throw an exception like:
RuntimeError: stack expects each tensor to be equal size, but got [1, 1, 5] at entry 0 and [1, 5, 5] at entry 1
when it tries to stack the bboxes (labels) in a batch. The question is:
what is the general paradigm to handle this issue? |
st47774 | Solved by mariosasko in post #4
The simplest solution to that is to define the collate_fn like this:
def collate_fn(batch):
return tuple(zip(*batch))
And this is the for loop:
for image_batch, bbox_batch in dataloader:
...
Then, inside the loop you can stack the images before passing them to the model, etc.
If you ne… |
st47775 | This discussion may be helpful.
RuntimeError: stack expects each tensor to be equal size, but got [3, 224, 224] at entry 0 and [3, 224, 336] at entry 3
So you mean to say, even if transforms.Resize() is included in transforms.Compose(), there is still a size-mismatch error?
Usually, in such situations some sort of padding has to be introduced for each batch so all elements match in size. This can be achieved by defining your own collate_fn that is then passed to DataLoader as an argument. |
st47776 | Thanks, I may not have expressed my question clearly. The size (width/height) of every transformed image is the same; the problem is caused by the numbers of objects in the images of the same batch being different, not the height or width of the images. |
st47777 | The simplest solution to that is to define the collate_fn like this:
def collate_fn(batch):
return tuple(zip(*batch))
And this is the for loop:
for image_batch, bbox_batch in dataloader:
...
Then, inside the loop you can stack the images before passing them to the model, etc.
If you need some additional functionality, you can even define your own object for handling batches (in the docs there is an example how to do this). |
st47778 | I don’t get it. I have the same problem the OP had. I am loading data like this:
my_loader = DataLoader(my_dataset, batch_size=128, shuffle=True)
How should I use that collate_fn function (what is the “batch” argument?)? What should I do in the for loop? |
st47779 | By default, the DataLoader tries to stack the tensors to form a batch (calls torch.stack on the current batch), but it fails if the tensors are not of equal size. With the collate_fn it is possible to override this behavior and define your own “stacking procedure”. In the example above, the batch arg contains a list of instances (an image-bbox pair; the batch arg is of type List[Tuple[Image, Bbox]]) and with tuple(zip(*batch)) we form a batch, where batch[0] corresponds to the images, and batch[1] to the bboxes in the batch.
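To make this concrete, here is a minimal sketch (img1, bbox1, etc. are hypothetical samples returned by __getitem__):
batch = [(img1, bbox1), (img2, bbox2), (img3, bbox3)]  # what the DataLoader passes to collate_fn
images, bboxes = tuple(zip(*batch))
# images == (img1, img2, img3) and bboxes == (bbox1, bbox2, bbox3),
# so variable-sized bboxes are grouped without calling torch.stack |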
st47780 | Ok, so I have a function:
def collate_fn(data):
img, bbox = data
zipped = zip(img, bbox)
return zipped
Where data is an object of a class based on Dataset, like:
…
def __getitem__(self, idx):
img = self.imgs[idx]
bbox = self.bboxs[idx]
return (
img,
bbox,
)
And how am I supposed to use it?
When I do it like this:
my_loader = DataLoader(data, batch_size=8, shuffle=True, collate_fn=collate_fn(data))
there is an error that the zip object is not callable.
What am I doing wrong? |
st47781 | First consume the iterator returned by zip in the collate function:
def collate_fn(data):
img, bbox = data
zipped = zip(img, bbox)
return list(zipped)
The error is self-explanatory. collate_fn has to be callable or, in layman’s terms, a function.
my_loader = DataLoader(data, batch_size=8, shuffle=True, collate_fn=collate_fn) |
st47782 | Hello everyone,
I am currently doing a deep learning research project and have a question regarding the use of a loss function. Basically, for my loss function I am using weighted cross entropy + soft Dice loss, but recently I came across a mean IoU loss which works; the odd part is that it purposely returns a negative loss. At first it seemed strange to me that it returns -loss, so I changed the function to return 1-loss, but it performed worse, so I believe the negative loss is the correct approach. This means, though, that my final loss will be the sum of positive, positive and negative values, which seems very odd and doesn’t really make sense, but surprisingly it is working reasonably well.
Hence, during training, my loss values go below 0 as the training continues. My current guess is that it is working fine because the optimization of the loss function is about reducing the gradient of the loss to zero, not the loss itself.
My question is: is it okay to use a combination of positive and negative loss functions, since what matters is just the gradient of my final loss function?
I used the following approach for the IoU loss: How to implement soft-IoU loss?
Thank you and I look forward to hearing from someone to answer this question soon! |
st47783 | Solved by KFrank in post #2
Hi Edward!
Yes, it is perfectly fine to use a loss that can become negative.
Your reasoning about this is correct.
To add a few words of explanation:
A smaller loss – algebraically less positive or algebraically more
negative – means (or should mean) better predictions. The
optimization step… |
st47784 | Hi Edward!
edshkim98:
My current guess is that it is working fine because the optimization of the loss function is about reducing the gradient of the loss to zero, not the loss itself.
My question is: is it okay to use a combination of positive and negative loss functions, since what matters is just the gradient of my final loss function?
Yes, it is perfectly fine to use a loss that can become negative.
Your reasoning about this is correct.
To add a few words of explanation:
A smaller loss – algebraically less positive or algebraically more
negative – means (or should mean) better predictions. The
optimization step uses some version of gradient descent to make
your loss smaller. The overall level of the loss doesn’t matter as
far as the optimization goes. The gradient tells the optimizer how
to change the model parameters to reduce the loss, and it doesn’t
care about the overall level of the loss.
When gradient descent drives the loss to a minimum, the gradient
becomes zero (although it can be zero at places other than a
minimum). (Also, when the gradient is zero, plain-vanilla gradient
descent stops changing the parameters.)
It is true that several common loss functions are non-negative, and
become zero precisely when the predictions are “perfect.” Examples
include MSELoss and CrossEntropyLoss. But this is by no means
a requirement.
Consider, for example, optimizing with lossA = MSELoss. Now
imagine optimizing with lossB = lossA - 17.2. The 17.2 doesn’t
really change anything at all. It is true that “perfect” predictions
will yield lossB = -17.2 rather than zero. (lossA will, of course,
be zero for “perfect” predictions.) But who cares?
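To see this concretely, here is a minimal sketch showing that a constant
offset leaves the gradient unchanged:
pred = torch.randn(5, requires_grad=True)
target = torch.randn(5)
torch.nn.functional.mse_loss(pred, target).backward()
gradA = pred.grad.clone()
pred.grad = None
(torch.nn.functional.mse_loss(pred, target) - 17.2).backward()
print(torch.equal(gradA, pred.grad))  # True -- the offset shifts the loss, not its gradient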
Best.
K. Frank |
st47785 | I have a tensor a of shape (1, N, 1). I need to left-shift the tensor along dimension 1 and add a new value as a replacement. I have found a way to make this work, and the following is the code.
a = torch.from_numpy(np.array([1, 2, 3]))
a = a.unsqueeze(0).unsqueeze(2) # (1, 3, 1); my data resembles this shape, hence the two unsqueeze calls
# want to left shift a along dim 1 and insert a new value at the end
# I achieve the required shifts using the following code
b = a.squeeze()
c = b.roll(shifts=-1)
c[-1] = 4
c = c.unsqueeze(0).unsqueeze(2)
# c = [[[2], [3], [4]]]
My question is, is there a simpler way to do this? Thanks. |
st47786 | Solved by learner47 in post #2
I found out that unsqueezing and squeezing are not required and can be combined.
a = torch.from_numpy(np.array([1, 2, 3])[None, :, None]) # (1, 3, 1)
b = torch.roll(a, shifts=-1, dims=1)
b[:, -1, :] = 4 |
st47787 | I found out that unsqueezing and squeezing are not required and can be combined.
a = torch.from_numpy(np.array([1, 2, 3])[None, :, None]) # (1, 3, 1)
b = torch.roll(a, shifts=-1, dims=1)
b[:, -1, :] = 4 |
st47788 | Hi, I am using Colab, PyTorch version 1.6.0+cu101. When I try to allocate a model or a parameter to the GPU I get the error below. I have tried to reproduce several tips/corrections that were listed in the forum, but none has worked so far… any ideas?
Thanks!
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('Using device:', device)
g = nn.Parameter(torch.rand((100,128)), requires_grad=False)
if device.type=='cuda':
g = g.to(device)
print(next(g.parameters()).is_cuda) # returns a boolean
Here is the error message:
Using device: cuda
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-35-f9356a863a45> in <module>()
4 g = nn.Parameter(torch.rand((100,128)), requires_grad=False)
5 if device.type=='cuda':
----> 6 g = g.to(device)
7 print(next(g.parameters()).is_cuda) # returns a boolean
RuntimeError: CUDA error: an illegal memory access was encountered |
st47789 | Solved by ptrblck in post #8
Thanks for the rest of the code.
While running it I get a proper error message:
RuntimeError: Tensor for argument #2 'mat1' is on CPU, but expected it to be on GPU (while checking arguments for addmm)
which points to a device mismatch in Generator.
After checking the code it seems that z is crea… |
st47790 | Have you tried to run the code with CUDA_LAUNCH_BLOCKING=1 python script.py args?
If so, could you post the stack trace here, please? |
st47791 | For some reason, after restarting Colab, I get a new error:
RuntimeError: cuDNN error: CUDNN_STATUS_NOT_INITIALIZED |
st47792 | Do you get this error message with the blocking command?
If so, could you post the complete stack trace? |
st47793 | This is what I am trying to run now. Without .cuda() it runs OK.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Uniform, Normal
import torch.utils.data as tdata
import torch.optim as optim
import numpy as np
class DepthToSpace(nn.Module):
def __init__(self, block_size):
super().__init__()
self.block_size = block_size
self.block_size_sq = block_size * block_size
def forward(self, input):
output = input.permute(0, 2, 3, 1)
(batch_size, d_height, d_width, d_depth) = output.size()
s_depth = int(d_depth / self.block_size_sq)
s_width = int(d_width * self.block_size)
s_height = int(d_height * self.block_size)
t_1 = output.reshape(batch_size, d_height, d_width, self.block_size_sq, s_depth)
spl = t_1.split(self.block_size, 3)
stack = [t_t.reshape(batch_size, d_height, s_width, s_depth) for t_t in spl]
output = torch.stack(stack, 0).transpose(0, 1).permute(0, 2, 1, 3, 4).reshape(batch_size, s_height, s_width,
s_depth)
output = output.permute(0, 3, 1, 2)
return output
# Spatial Upsampling with Nearest Neighbors
class Upsample_Conv2d(nn.Module):
def __init__(self, in_dim, out_dim, kernel_size=(3, 3), stride=1, padding=1):
super(Upsample_Conv2d, self).__init__()
self.depth_to_space = DepthToSpace(block_size=2)
self.conv2d = nn.Conv2d(in_dim, out_dim, kernel_size, stride=stride,
padding=padding)
def forward(self, x):
x_ = torch.cat([x, x, x, x], dim=1)
x_ = self.depth_to_space(x_)
x_ = self.conv2d(x_)
return x_
class ReshapeGan(nn.Module):
def __init__(self, out_shape):
super(ReshapeGan, self).__init__()
self.out_shape = out_shape
def forward(self, x):
b = x.shape[0]
return x.view(b, *self.out_shape)
class Generator(nn.Module):
def __init__(self, n_filters=128):
super(Generator, self).__init__()
self.n_filters = n_filters
self.latent = torch.distributions.Normal(torch.tensor(0.), torch.tensor(1.))
self.network = nn.Sequential(
nn.Linear(n_filters, 4*4*256),
ReshapeGan((256,4,4)),
ResnetBlockUp(in_dim=256, n_filters=n_filters),
ResnetBlockUp(in_dim=n_filters, n_filters=n_filters),
ResnetBlockUp(in_dim=n_filters, n_filters=n_filters),
nn.BatchNorm2d(n_filters),
nn.ReLU(),
nn.Conv2d(n_filters, 3, kernel_size=(3, 3), padding=1),
nn.Tanh()
)
def forward(self, z):
return self.network(z)
def sample(self, n):
z = self.latent.sample([n, self.n_filters])
return self.forward(z)
CUDA_LAUNCH_BLOCKING=1
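# note: as a plain Python assignment, the line above only creates a local variable;
# CUDA_LAUNCH_BLOCKING=1 must be set as an environment variable before CUDA initializes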
g = Generator(128)
g = g.cuda() # HERE IT CRASHES
z = g.sample(10)
print(z.shape) |
st47794 | Oops, sorry:
class ResnetBlockUp(nn.Module):
def __init__(self, in_dim, kernel_size=(3, 3), n_filters=256):
super(ResnetBlockUp, self).__init__()
self.layers = nn.ModuleList(
[nn.BatchNorm2d(in_dim),
nn.ReLU(),
nn.Conv2d(in_dim, n_filters, kernel_size, padding=1),
nn.BatchNorm2d(n_filters),
nn.ReLU(),
Upsample_Conv2d(n_filters, n_filters, kernel_size=kernel_size, padding=1)])
self.shortcut = Upsample_Conv2d(in_dim, n_filters,
kernel_size=(1, 1), padding=0)
def forward(self, x):
x_ = x
for layer in self.layers:
x_ = layer(x_)
x = self.shortcut(x)
return x + x_
The strange thing is that it crashes with error code:
RuntimeError: cuDNN error: CUDNN_STATUS_NOT_INITIALIZED
right after restarting Colab, at the g.sample(10) line. Then at the .cuda() line I get the illegal access error after retrying to run without restarting Colab… |
st47795 | Thanks for the rest of the code.
While running it I get a proper error message:
RuntimeError: Tensor for argument #2 'mat1' is on CPU, but expected it to be on GPU (while checking arguments for addmm)
which points to a device mismatch in Generator.
After checking the code it seems that z is created on the CPU, while the parameters of the model are on the GPU.
Use
z = self.latent.sample([n, self.n_filters]).cuda()
or
z = self.latent.sample([n, self.n_filters]).to(next(self.parameters()).device)
to fix this issue. |
st47796 | Amazing help! Thank you very much, I had just missed moving the data to the GPU.
BTW, what should I have done in order to get the error message?
Again, thanks! |
st47797 | It shouldn’t be the case, but maybe the notebook is somehow raising the “wrong” error message?
In any way, updating to the latest stable release (1.7) or the nightly is also a good idea and might help. |
st47798 | Can you help me?
I’m trying to write code for disease diagnosis from retinal images. I try different versions of the code, but I get a different error from each of them.
first code
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
import torch.optim as optim
import numpy as np
train_transforms = transforms.Compose([transforms.Grayscale(),
transforms.Resize(255),
transforms.RandomRotation(38),
transforms.RandomResizedCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.5],[0.5])])
test_transforms=transforms.Compose([transforms.Grayscale(),
transforms.Resize(255),
transforms.CenterCrop(244),
transforms.ToTensor()])
train_dataset=torchvision.datasets.ImageFolder(root='C:/OCT2017/train', transform=train_transforms)
test_dataset=torchvision.datasets.ImageFolder(root='C:/OCT2017/test', transform=test_transforms)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size = 100, shuffle = False)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size = 100, shuffle = False)
class NN(nn.Module):
def __init__(self, input_size, num_classes):
super(NN, self).__init__()
self.fc1=nn.Linear(input_size,50)
def forward(self,x):
x=F.relu(self.fc1(x))
return x
class OCTModel(nn.Module):
def __init__(self, in_channels=1, num_classes=4):
super(OCTModel, self).__init__()
# Convolution 1
self.conv1= nn.Conv2d(in_channels=1, out_channels=32, kernel_size=(3,3), stride=(1))
self.relu1 = nn.ReLU()
self.pool = nn.MaxPool2d(kernel_size=2)
self.dropout1=nn.Dropout(p=0.2)
self.relu2=nn.ReLU()
self.fc1 = nn.Linear(256,num_classes)
def forward(self, x):
# Convolution 1
x = self.conv1(x)
x = self.relu1(x)
x = self.pool(x)
x = self.dropout1(x)
x = self.relu2(x)
x = x.reshape(x.shape[0], -1)
x = self.fc1(x)
return x
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
in_channels=1
num_classes=4
learning_rate=0.001
batch_size=10
num_epochs=1
model = OCTModel().to(device)
Criterion=nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
for epoch in range (num_epochs):
for batch_idx, (data,targets) in enumerate(train_loader,0):
data=data.to(device=device)
targets=targets.to(device=device)
scores=model(data)
loss = Criterion(scores,targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
def check_accuracy (loader, model):
if loader.datasets.train:
print("Checking accuracy on training data")
else:
print("Checking accuracy on test data")
num_correct=0
num_samples=0
model.eval()
with torch.no_grad():
for x, y in loader:
x=x.to(device=device)
y=y.to(device=device)
scores=model(x)
_, predictions = scores.max(1)
num_correct += (predictions == y).sum()
num_samples += predictions.size(0)
print(f'Got {num_correct} / {num_samples} with accuracy {float(num_correct)/float(num_samples)*100:.2f}')
model.train()
check_accuracy(train_loader,model)
check_accuracy(test_loader,model)
RuntimeError: CUDA out of memory. Tried to allocate 152.00 MiB (GPU 0; 2.00 GiB total capacity; 1.20 GiB already allocated; 121.74 MiB free; 9.86 MiB cached)
second code
class OCTModel(nn.Module):
def __init__(self, in_channels=1, num_classes=4):
super(OCTModel, self).__init__()
# Convolution 1
self.conv1= nn.Conv2d(in_channels=1, out_channels=32, kernel_size=(3,3), stride=(1))
self.relu1 = nn.ReLU()
self.pool = nn.MaxPool2d(kernel_size=2)
self.dropout1=nn.Dropout(p=0.2)
self.relu2=nn.ReLU()
self.fc1 = nn.Linear(256,num_classes)
def forward(self, x):
# Convolution 1
x = self.conv1(x)
x = self.relu1(x)
x = self.pool(x)
x = self.dropout1(x)
x = self.relu2(x)
x = self.fc1(x)
return x
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
input_size=784
num_classes = 4
learning_rate=0.001
batch_size=32
num_epochs=1
train_transforms = transforms.Compose([transforms.Grayscale(),
transforms.Resize(255),
transforms.RandomRotation(38),
transforms.RandomResizedCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.5],[0.5])])
test_transforms=transforms.Compose([transforms.Grayscale(),
transforms.Resize(255),
transforms.CenterCrop(244),
transforms.ToTensor()])
train_dataset=torchvision.datasets.ImageFolder(root='C:/OCT2017/train', transform=train_transforms)
test_dataset=torchvision.datasets.ImageFolder(root='C:/OCT2017/test', transform=test_transforms)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size = 32, shuffle = False)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size = 32, shuffle = False)
model = OCTModel().to(device)
Criterion=nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
for epoch in range (num_epochs):
for batch_idx, (data,targets) in enumerate(train_loader,0):
data=data.to(device=device)
targets=targets.to(device=device)
data = data.reshape(data.shape[0],-1)
scores=model(data)
loss = Criterion(scores,targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
def check_accuracy (loader, model):
if loader.datasets.train:
print("Checking accuracy on training data")
else:
print("Checking accuracy on test data")
num_correct=0
num_samples=0
model.eval()
with torch.no_grad():
for x, y in loader:
x=x.to(device=device)
y=y.to(device=device)
x=x.reshape(x.shape[0],-1)
scores=model(x.unsqueeze(dim=1))
_, predictions = scores.max(1)
num_correct += (predictions == y).sum()
num_samples += predictions.size(0)
print(f'Got {num_correct} / {num_samples} with accuracy {float(num_correct)/float(num_samples)*100:.2f}')
model.train()
check_accuracy(train_loader,model)
check_accuracy(test_loader,model)
RuntimeError: Expected 4-dimensional input for 4-dimensional weight 32 1, but got 2-dimensional input of size [32, 50176] instead |
st47799 | The error message is:
RuntimeError: Expected 4-dimensional input for 4-dimensional weight 32 1, but got 2-dimensional input of size [32, 50176] instead
which doesn’t point towards an illegal memory access, but a shape mismatch.
Most likely you are trying to pass a 2-dimensional input to e.g. an nn.Conv2d layer, which expects an input in the shape [batch_size, channels, height, width].
Since this error is unrelated, feel free to create a new topic, if you get stuck.
PS: you can post code snippets by wrapping them into three backticks ```, which would make debugging easier. |
st47800 | ptrblck:
…e.g. an nn.Conv2d layer, which expects an input in the shape [batch_size, channels, height, width].
Since this error is unrelated, feel free to create a new topic if you get stuck.
PS: You can post code snippets by wrapping them into three backticks ```, which makes debugging easier. :wink:
The database has four categories of gray-level retinal images. I guess I’m making a mistake here. I’m very sorry, but I don’t quite understand what to do.
[image] |
st47801 | Your input images should have the mentioned shape ([batch_size, channels, height, width]), while they are 2-dimensional tensors.
Have a look at this tutorial for an example on how to build a model and how the data tensors should be created. |
st47802 | I was making a transformer of my own, but soon realized a problem: how to make batches of sequences with variable length!
Can anyone please help me figure out how it can be done? |
st47803 | I’m not sure whether this is a bug or I’m not understanding how PyTorch works, hence why I’m posting here first. I’m using PyTorch 1.5.0.
My model is running on the gpu and I convert each batch to the device at the beginning, then forward through the model. When I then move it to CPU however, it doesn’t seem to free the GPU memory. When the loop comes around again, the memory still isn’t freed and ends up with an out of memory issue after a few loops.
mem_a = torch.cuda.memory_summary(device, True)
b = b.to(device)
mem_a2 = torch.cuda.memory_summary(device, True)
forward = model.forward(b)
mem_a21 = torch.cuda.memory_summary(device, True)
forward = forward.to("cpu")
mem_a3 = torch.cuda.memory_summary(device, True)
I’ve checked in the debugger, and the forward after the .to() statement says it is on the cpu, however if you look at the GPU ram below from these positions…
Loop 1:
mem_a = 238MB
mem_a2 = 778MB
mem_a21 = 3471MB
mem_a3 = 3471MB
Loop 2:
mem_a = 3471MB
mem_a2 = 3781MB
mem_a21 = 5382MB
mem_a3 = 5382MB
Loop 3:
mem_a = 5382
mem_a2 = 5693
Out of Memory
To me, it very much looks like pytorch isn’t clearing up the link to the GPU after moving to cpu. I’ve tried this in a number of ways, even separating the gpu/cpu variable and deleting the gpu one still doesn’t work, as if it maintains a link somewhere. If I delete while it’s still on gpu the memory usage disappears, so it definitely seems to be something around this .to() statement that is keeping a link to the GPU.
I need to store all of this output before being able to do what I do next, and I have more than enough RAM for it if I can properly get it out of the GPU; however, I can’t seem to do that.
Am I doing something wrong here? |
st47804 | Solved by thomasjo in post #4
Are you collecting results, e.g. losses, and storing them in a list or something similar? If you are, make sure you store graph-detached copies of the relevant tensors by calling Tensor.detach.
losses = []
# Inside training loop or similar
loss = criterion(...)
losses.append(loss.detach())
If you… |
st47805 | I’ve tried adding it as below, but there has been no difference. I also tried gc.collect just to see if there was a variable being held within memory with no difference.
mem_a21 = torch.cuda.memory_summary(device, True)
forward = forward.to("cpu")
torch.cuda.empty_cache()
mem_a3 = torch.cuda.memory_summary(device, True)
I’m really unsure of where to debug from this point forward… I’ve also tried forward.cpu() and torch.device("cpu") so I don’t think it’s specifically an issue with the .to(). |
st47806 | Are you collecting results, e.g. losses, and storing them in a list or something similar? If you are, make sure you store graph-detached copies of the relevant tensors by calling Tensor.detach.
losses = []
# Inside training loop or similar
loss = criterion(...)
losses.append(loss.detach())
If you store tensors attached to a graph, invoking garbage collection will have no impact since those tensors are still actively referenced via the list. |
st47807 | Hi everybody,
Any updates on this issue?
It seems that I ran into the same situation.
Best,
Jianxiang |
st47808 | In my case, it turns out that the now-marked answer (which I forgot to mark as the solution back then) was the issue/solution. |
st47809 | Thank you for the quick reply!
Since I got into this during inference, in which no gradients are required, I found that other people suggested using with torch.no_grad(): before doing inference on the mini-batch. And it worked fine for me.
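A minimal sketch of that pattern (model, loader, and device are hypothetical names for your own objects):
model.eval()
results = []
with torch.no_grad():  # no graph is recorded, so activations are freed right away
    for x in loader:
        out = model(x.to(device))
        results.append(out.cpu()) |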
st47810 | I am working on an encoder that uses LSTM
def init_hidden(self, batch):
'''
used to initialize the encoder (LSTMs) with number of layers, batch_size and hidden layer dimensions
:param batch: batch size
:return:
'''
return (
torch.zeros(self.num_layers, batch, self.h_dim).cuda(),
torch.zeros(self.num_layers, batch, self.h_dim).cuda()
)
This is the code to initialize the LSTM. What does the batch_size represent for the LSTM? Is it the number of LSTMs used in the encoder? It is from the Social GAN algorithm.
Can someone please explain what it means? |
st47811 | The batch size has the same meaning as in the traditional feed forward models.
It stands for the number of sequences you want to process at the same time.
So your input should be of size (sequence_length, batch_size, number_of_features).
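For example (a minimal sketch with made-up sizes):
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=20, hidden_size=64, num_layers=1)  # batch_first=False by default
x = torch.randn(7, 3, 20)   # (sequence_length, batch_size, number_of_features)
h0 = torch.zeros(1, 3, 64)  # (num_layers, batch, hidden_dim), as in init_hidden above
c0 = torch.zeros(1, 3, 64)
output, (hn, cn) = lstm(x, (h0, c0))
print(output.shape)         # torch.Size([7, 3, 64]) |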
st47812 | Hi, thank you for the reply. Yes, I already know that, but I read somewhere that batch_size produces batch_size number of sequences for the LSTM… so that’s what I wanted to know: is that true or not? |
st47813 | If you are talking about the output of an LSTM with that hidden size: the output is composed of batch_size sequences representing the hidden state at each time step, while the final hidden state is composed of batch_size vectors representing the hidden state at the last time step. |
st47814 | Hi, thank you for your reply. So does it mean that batch_size generates that many LSTMs, as in the image? For example, if the batch size is 3, then there would be 3 LSTMs, each with its own hidden state? |
st47815 | No, there is only 1 LSTM, which produces batch_size sequences as output. It is more or less the same process that occurs in a feedforward model, where you obtain batch_size predictions with just one output layer.
Take a look at the official docs for the LSTM to understand the shape of the input and output of the model, in particular the input and output sections. |
st47816 | Ah great, thank you for the information. I will take a look at the official docs. |
st47817 | Hi,
I am a beginner in DL and PyTorch, and am now working on constructing an LSTM for training on some sequence data.
I am wondering: for example, if we have data with a sequence length of 500 and 20 features, and I want to split it into a batch size of 10, should the input data have a shape of (500, 10, 20), i.e., the same 500-step sequence with 20 features in each element of the batch? Or should the data have a shape of (50, 10, 20), where the 500-step sequence is divided by 10 and each batch element holds different data?
Thank you very much. |
st47818 | The input shape for the LSTM is [batch_size, sequence_length, input_dim]. The batch size and sequence length are two different (independent) parameters. Your total sequence length is 500; you can create more training samples by selecting a smaller sequence (say length 100) and create 400 training samples which would look like: Sample 1 = [s1, s2, s3 … s100], Sample 2 = [s2, s3, s4 … s101] -----> Sample 400 = [s400, s401, … s499]. Now dividing these 400 training samples into batches of 10 would give you an input shape of (10, 100, 20).
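A minimal sketch of the windowing described above (data is a hypothetical name for the raw (500, 20) tensor):
import torch
data = torch.randn(500, 20)   # 500 time steps, 20 features
seq_len = 100
samples = torch.stack([data[i:i + seq_len] for i in range(len(data) - seq_len)])
print(samples.shape)          # torch.Size([400, 100, 20]) -- 400 overlapping training samples |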
st47819 | My network is running slowly and I suspect it’s got to do with the loading scheme, which is based on tutorials and simply loads all data into the dataset object during initialization.
Am I right to think that it might help to load a chunk of data at a time, iterate through this chunk, update weights, and then discard the chunk from memory and start again with another chunk?
I found this thread (How to use dataset larger than memory?) to be close to what I’m looking for, but it ended with no straight-up solution.
My current dataset class, shown below, is initialized by loading the entire dataset and performing some procedures on it,
storing the entire preprocessed data as a variable of the class instance.
The getitem method receives a random idx and returns the sample (input and target) after some more transformations that I apply manually (in the getitem method).
I’m not sure how to continue from here, because if I set read_csv to “chunk mode” it returns an iterator. In that case, how do I perform preprocessing in the initialization?
I could preprocess in the getitem method, but then getitem would get a chunk of data instead of a single sample; how can I pass that?
I’m adding my current dataset class for your consideration. Generally any advice would be appreciated, but what I couldn’t find, and would love to find, is a simple example for my case: a full implementation of loading chunks into a dataset object, configuring it with a DataLoader object, and all the way to a training loop.
'''
class FacialKeypoints(Dataset):
def __init__(self, test=False, cols=None,FTRAIN = 'data/Q3/training.csv', FTEST = 'EX1/Q3/test.csv', transform_vars=None, batch_size = 16):
fname = FTEST if test else FTRAIN
df = read_csv(os.path.expanduser(fname)) # load pandas dataframe
# The Image column has pixel values separated by space; convert
# the values to numpy arrays:
df['Image'] = df['Image'].apply(lambda im: np.fromstring(im, sep=' '))
print('number of values in each column: ', df.count()) # prints the number of values for each column
df = df.dropna() # drop all rows that have missing values in them
X = np.vstack(df['Image'].values) / 255. # scale pixel values to [0, 1]
X = X.astype(np.float32)
image_size = int(np.sqrt(X.shape[1]))
Y = []
if not test: # only FTRAIN has any target columns
y = df[df.columns[:-1]].values
y2 = y.reshape(y.shape[0],15,2)
for coords in y2:
mask = np.zeros((image_size,image_size))
for pair in coords:
pair = pair.round().astype(int)
mask[pair[1] - 1, pair[0] - 1] += 1
if mask[pair[1] - 1, pair[0] - 1]==3:
print('bad')
Y.append(mask)
Y = np.array(Y)
y = (y - 48) / 48 # scale target coordinates to [-1, 1]
X, y, Y = shuffle(X, y, Y, random_state=42) # shuffle train data
y = y.astype(np.float32)
else:
y = None
self.X = torch.tensor(X,dtype=torch.float32)
self.transform_vars = transform_vars
self.y = torch.tensor(y)
self.Y = torch.tensor(Y,dtype=torch.float32)
print('finished loading')
def __len__(self):
return len(self.X)
def transform(self,image, mask):
image = image.reshape(96,96)
flip_prob = self.transform_vars['flip_probability']
rotate_prob = self.transform_vars['rotate_probability']
if torch.rand(1)>flip_prob:
image = TF.hflip(image)
mask_points = torch.nonzero(mask, as_tuple=False)
newmask = torch.zeros((96, 96))
for pair in mask_points:
newpair = flip(pair)
newmask[newpair[0] - 1, newpair[1] - 1] = mask[pair[0],pair[1]]
mask = newmask
#mask = TF.hflip(mask)
if torch.rand(1)<rotate_prob:
avg_pixel = image.mean()
degrees = self.transform_vars['degrees']
deg = int(torch.rand(1).item() * degrees - degrees)
image_r = TF.to_tensor(TF.rotate(TF.to_pil_image(image),deg)).squeeze()
image_r[(image_r==0) * (image!=0)] = avg_pixel
image = image_r
mask_points = torch.nonzero(mask,as_tuple=False)
newmask = torch.zeros((96,96))
for pair in mask_points:
newpair = tilt(pair,deg)
newmask[newpair[0] - 1,newpair[1] - 1] = mask[pair[0],pair[1]]
mask = newmask
#mask = TF.to_pil_image(mask)
#mask = TF.rotate(mask, deg,resample=PIL.Image.NEAREST)
#mask = TF.to_tensor(mask).squeeze()
#mask = TF.to_tensor(TF.rotate(TF.to_pil_image(mask), deg)).squeeze()
return image.unsqueeze(0), mask
def update_target(self,mask):
keypoints = (mask==1).nonzero(as_tuple=False).reshape(-1)
keypoints = torch.hstack([keypoints,(mask==2).nonzero(as_tuple=False).reshape(-1).repeat(2)])
keypoints = torch.from_numpy((keypoints.numpy() - 48) / 48)
if keypoints.shape[0]!=30:
print('bad after transform')
temp = self.__getitem__(torch.randint(0,len(self),[1]).item())
return temp['keypoints']
else:
return keypoints
def __getitem__(self, idx):
self.idx = idx
if torch.is_tensor(idx):
idx = idx.tolist()
image = self.X[idx]
keypoints = self.y[idx]
mask = self.Y[idx]
if self.transform_vars['is']:
image, mask = self.transform(image, mask)
keypoints = self.update_target(mask).to(dtype = torch.float32)
return {'image':image, 'keypoints':keypoints}
else:
return {'image':image,'keypoints':keypoints}
''' |
st47820 | I figured out how to use the chunk-loading feature of pd.read_csv, but ran into difficulties since the iterator object (returned by read_csv with the chunksize argument) can only draw samples in a fixed order (and I want the order to be shuffled after each epoch).
I found a way to bypass that, but I’m afraid it is still very slow. My new approach:
changed the sampler to a custom sampler such that each time getitem is called, it takes in a batch of indices;
the getitem function takes the list of indices generated by the sampler and utilizes the skiprows feature of pd.read_csv.
So, to be clear, I am not using the chunk loader; I simply call read_csv every time I draw a batch of samples and skip all rows except the rows of the batch.
I’m adding the code here; if anyone has a better solution please let me know.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from preprocess import FacialKeypoints
import numpy as np
from torch.utils.data import Subset
from sklearn.model_selection import train_test_split as splitter
from helper_funcs import pixel_distance
transformed_dataset = FacialKeypoints(transform_vars={'is':True,'degrees':10,'flip_probability':0.5,'rotate_probability':0.5})
validation_dataset = FacialKeypoints(transform_vars={'is':False})
n = len(transformed_dataset)
num_train = int(np.ceil(len(transformed_dataset) * 0.85))
num_val = int(len(transformed_dataset) - num_train)
train_idx, val_idx = splitter(np.arange(n),train_size=num_train,shuffle=True)
batch_size = 16
trainset = Subset(transformed_dataset,train_idx)
valset = Subset(validation_dataset,val_idx)
train_sampler = torch.utils.data.sampler.BatchSampler(
torch.utils.data.sampler.RandomSampler(trainset), batch_size=batch_size,drop_last=False)
val_sampler = torch.utils.data.sampler.BatchSampler(
torch.utils.data.sampler.RandomSampler(valset), batch_size=batch_size,drop_last=False)
trainloader = DataLoader(trainset, sampler=train_sampler,num_workers=0)
valoader = DataLoader(valset, sampler=val_sampler, num_workers=0)
if torch.cuda.is_available():
device = torch.device('cuda:0')
model2 = nn.Sequential(
nn.Conv2d(1,32,3),
nn.MaxPool2d(kernel_size=5,stride=2),
nn.ReLU(),
nn.Conv2d(32,64,2),
nn.MaxPool2d(kernel_size=5,stride=2),
nn.ReLU(),
nn.Dropout2d(0.5),
nn.Conv2d(64,128,2),
nn.MaxPool2d(kernel_size=3,stride=2),
nn.ReLU(),
nn.Flatten(),
nn.Linear(10368,500),
nn.ReLU(),
nn.Dropout(0.5),
nn.Linear(500,500),
nn.ReLU(),
nn.Linear(500,30)
)
model2.to(device)
total_loss = {'train':[],'val':[]}
criterion2 = nn.MSELoss()
optimizer2 = torch.optim.Adam(model2.parameters(),lr=0.001)
total_loss = {'train':[],'val':[]}
for epoch in range(10):
pixel_dist = []
print('in epoch {}/100 :'.format(epoch+1))
for sample in trainloader:
losses = []
input = sample['image'].squeeze(0).to(dtype = torch.float,device = device)
batch = input.shape[0]
target = sample['keypoints'].squeeze(0).to(dtype = torch.float,device = device)
optimizer2.zero_grad()
output = model2(input)
pixel_dist.append(pixel_distance(output, target))
loss2 = criterion2(output,target)
loss2.backward()
optimizer2.step()
losses.append(loss2.data.item())
a = np.mean(losses)
total_loss['train'].append(a)
print('train loss = {}'.format(a))
print('avg training pixel distance = {}'.format(np.mean(pixel_dist)))
pixel_dist = []
for sample in valoader:
with torch.no_grad():
losses = []
input = sample['image'].squeeze(0).to(dtype = torch.float,device = device)
batch = input.shape[0]
target = sample['keypoints'].squeeze(0).to(dtype = torch.float,device = device)
output = model2(input)
pixel_dist.append(pixel_distance(output,target))
loss2 = criterion2(output, target)
losses.append(loss2.data.item())
a = np.mean(losses)
print('validation loss = {}'.format(a))
total_loss['val'].append(a)
print('avg validatioin pixel distance = {}'.format(np.mean(pixel_dist)))
'''def check_sample(loader=valoader,model=model2,device=device):
device2 = torch.device('cpu')
plots = 16//3
x = next(iter(loader))
y_true = x['keypoints']
y_true = y_true.reshape(16,15,2)
x = x['image'].to(device)
x = x.view(16,1,96,96)
y = model(x)
y = y.reshape(16,15,2).to(device2)
x = x.to(device2)
fig,ax = plt.subplots(3,plots)
k=0
for i in range(plots):
for j in range(3):
ax[i,j].scatter(y_true[k,:,0].detach().numpy(),y_true[k,:,1].detach().numpy())
ax[i,j].imshow(x[k].squeeze())
ax[i,j].scatter(y[k,:,0].detach().numpy(),y[k,:,1].detach().numpy())
k=k+1
plt.show()
'''
def __getitem__(self, idx):
self.idx = idx
if torch.is_tensor(idx):
idx = idx.tolist()
image = torch.from_numpy(np.array(read_csv('data/Q3/training_images.csv',header=None,
skiprows=np.delete(self.indices,self.idx))))
keypoints = torch.from_numpy(np.array(read_csv('data/Q3/training_labels.csv',header=None,
skiprows=np.delete(self.indices,self.idx))))
if self.transform_vars['is']:
image, keypoints = self.transform(image, keypoints)
return {'image':image, 'keypoints':keypoints}
else:
image = image.reshape((-1,96, 96))
return {'image':image.unsqueeze(1),'keypoints':keypoints} |
st47821 | If I want a network’s parameters, I can easily throw them into a list using
params = list(network.parameters())
Is there an easy way to do this for the gradients of the network parameters? I know each gradient can be accessed using grad = param.grad, but storing all these gradients in an array requires me to iterate over each parameter in the network and access its .grad. Any help is appreciated here! |
st47822 | Solved by albanD in post #2
Hi,
You can use grads = autograd.grad(loss, network.parameters()) and it will return to you a list of all the gradients.
If you already did a .backward() that populated the .grad fields, you can do grads = [p.grad for p in network.parameters()] as well. |
st47823 | Hi,
You can use grads = autograd.grad(loss, network.parameters()) and it will return to you a list of all the gradients.
If you already did a .backward() that populated the .grad fields, you can do grads = [p.grad for p in network.parameters()] as well. |
st47824 | Hi, everyone
I want to freeze BatchNorm while fine-tuning my ResNet (I mean, use the global mean/std and freeze the weight and bias in BN), but the loss is huge and becomes NaN in the end:
iter = 0 of 20000 completed, loss = [ 15156.56640625]
iter = 1 of 20000 completed, loss = [ nan]
iter = 2 of 20000 completed, loss = [ nan]
the code I used to freeze BatchNorm is:
def freeze_bn(model):
for name, module in model.named_children():
if isinstance(module, nn.BatchNorm2d):
module.eval()
print('freeze: ' + name)
else:
freeze_bn(module)
model.train()
freeze_bn(model)
If I delete freeze_bn(model), the loss converges:
iter = 0 of 20000 completed, loss = [ 27.71678734]
iter = 1 of 20000 completed, loss = [ 15.12455177]
iter = 2 of 20000 completed, loss = [ 16.49391365]
iter = 3 of 20000 completed, loss = [ 16.47186661]
iter = 4 of 20000 completed, loss = [ 6.9540534]
iter = 5 of 20000 completed, loss = [ 7.13955498]
iter = 6 of 20000 completed, loss = [ 4.7441926]
iter = 7 of 20000 completed, loss = [ 15.24151039]
iter = 8 of 20000 completed, loss = [ 12.98035049]
iter = 9 of 20000 completed, loss = [ 3.7848444]
iter = 10 of 20000 completed, loss = [ 4.14818573]
iter = 11 of 20000 completed, loss = [ 4.04237747]
iter = 12 of 20000 completed, loss = [ 4.52667046]
iter = 13 of 20000 completed, loss = [ 4.85921001]
iter = 14 of 20000 completed, loss = [ 3.59978628]
Why do the global mean and std make the loss NaN?
Hoping for help, thank you! |
st47825 | def set_bn_eval(m):
classname = m.__class__.__name__
if classname.find('BatchNorm') != -1:
m.eval()
model.apply(set_bn_eval)
you should use apply instead of searching its children.
named_children() doesn’t recursively search submodules.
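Note that m.eval() only switches BatchNorm to its running statistics; since the original question also wants the affine weight and bias frozen, a sketch of that additional step (assuming that is the desired behavior) could look like:
def freeze_bn_params(m):
    if isinstance(m, nn.BatchNorm2d):
        m.eval()                    # use running mean/std instead of batch statistics
        for p in m.parameters():    # additionally stop gradient updates for weight and bias
            p.requires_grad = False

model.apply(freeze_bn_params) |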
st47826 | What is your learning rate?
Your loss is 15156; don’t you think it is too big? |
st47827 | I encountered a similar problem after freezing BN. My loss doesn’t rise to a very high magnitude, but just shifts to NaN abruptly at some epoch. Anyway, the problem seems gone after tuning down the learning rate, but I still don’t know the cause of it. |
st47828 | I want to train my own dataset on a pre-trained model and do some fine-tuning. I have made the labels with labelme, but the dataloader always seems to have some problems. What should I do, or is there any good guidance for Colab? |
st47829 | It seems you might be using a 3rd party DataLoader implementation defined in pytorch-deeplab-xception/dataloaders/__init__.py, which raises the NotImplementedError, so you should check this file and make sure this class is used as expected.
PS: you can post code snippets by wrapping them into three backticks ```, which makes debugging easier. |
st47830 | Thanks a lot. [image] It’s this one, but why can the data not be loaded in? |
st47831 | Could you post an executable code snippet, so that we could take a look, please? |
st47832 | Hi, I have a neural network that is used to classify a patch as eye or not, and to give eye landmark and bounding box predictions.
All three output layers are fully connected linear layers.
The problem with the model is that it gives the same output on the first output layer, which has two nodes.
Here is my model architecture.
class Net(torch.nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 32, 3)
torch.nn.init.xavier_normal_(self.conv1.weight)
self.pool1 = nn.MaxPool2d(3,stride=2,ceil_mode=True)
self.conv2 = nn.Conv2d(32, 64, 3)
torch.nn.init.xavier_normal_(self.conv2.weight)
self.pool2 = nn.MaxPool2d(3,stride=2,ceil_mode=True)
self.conv3 = nn.Conv2d(64, 64, 3)
torch.nn.init.xavier_normal_(self.conv3.weight)
self.pool3 = nn.MaxPool2d(2,stride=2,ceil_mode=True)
self.conv4 = nn.Conv2d(64, 128, 2)
torch.nn.init.xavier_normal_(self.conv4.weight)
self.fc1 = nn.Linear(1*3*128, 256)
self.cls_prob_fc = nn.Linear(256,2)
self.bbox_pred_fc = nn.Linear(256,4)
self.lm_pred_fc = nn.Linear(256,12)
def forward(self, x):
x = self.pool1(F.relu(self.conv1(x)))
x = self.pool2(F.relu(self.conv2(x)))
x = self.pool3(F.relu(self.conv3(x)))
x = F.relu(self.conv4(x))
x = x.view(x.size(0), 1*3*128)
x = F.relu(self.fc1(x))
x_cls_prob = self.cls_prob_fc(x)
x_bbox_pred = self.bbox_pred_fc(x)
x_lm_pred = self.lm_pred_fc(x)
return x_cls_prob,x_bbox_pred,x_lm_pred
Could anyone please help me with this? |
st47833 | If your model is not training properly I would generally recommend trying to overfit a small dataset, e.g. just 10 samples, and making sure your model is able to do so by playing around with some hyperparameters.
If this is not possible, you might have some bugs in the code, e.g. forgetting to zero out the gradients.
PS: you can post code snippets by wrapping them into three backticks ```, which makes debugging easier. |
st47834 | There are some works that make a model robust against attacks by making the model “focus” only on malicious features, so that the addition of benign features will not affect the outcome of the model and only the addition of malicious content can change its prediction.
More on this:
https://arxiv.org/abs/1806.06108
https://towardsdatascience.com/evading-machine-learning-malware-classifiers-ce52dabdb713 (read the Non-Negative MalConv section)
Non-Negative MalConv was constrained during training to have
non-negative weight matrices. The point of doing this is to prevent
trivial attacks like those created against MalConv. When done
properly, the non-negative weights make binary classifiers monotonic;
meaning that the addition of new content can only increase the
malicious score. This would make evading the model very difficult,
because most evasion attacks do require adding content to the file.
Fortunately for me, this implementation of Non-Negative MalConv has a
subtle but critical flaw.
My question is: how can I implement this in an RNN/LSTM using PyTorch?
My model currently takes a sequence of words and predicts whether the sentence is malicious or not, and right now the addition of a lot of benign words (by benign I mean words that appear in a lot of benign sentences) will evade the model.
Basically, I want my model to learn to predict based only on words that are malicious, meaning they mostly appear in malicious samples, so that the addition of benign words that appear in many benign sentences will not change the prediction of the sentence (adding many benign words will not affect the prediction).
How can I implement this? Is this possible?
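One common way to enforce such a constraint (a sketch, not necessarily the method used in the works above; model, optimizer, and loss are hypothetical names) is to project the relevant weights back to non-negative values after every optimizer step:
loss.backward()
optimizer.step()
with torch.no_grad():
    for name, p in model.named_parameters():
        if 'weight' in name:      # pick the weight matrices you want constrained
            p.clamp_(min=0.0)     # project back onto the non-negative orthant |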
st47835 | Hi there,
do you have a tutorial/guidance on how to fine-tune the provided pre-trained semantic segmentation models of torchvision 0.3 (FCN or DeepLabV3 with a ResNet-50 or -101 backbone) on our dataset (transfer learning for semantic segmentation)? |
st47836 | The general logic should be the same for classification and segmentation use cases, so I would just stick to the Finetuning tutorial.
The main difference would be the output shape (pixel-wise classification in the segmentation use case) and the transformations (make sure to apply the same transformations on the input image and mask, e.g. crop). |
st47837 | What about this tutorial:
https://colab.research.google.com/github/pytorch/vision/blob/temp-tutorial/tutorials/torchvision_finetuning_instance_segmentation.ipynb |
st47838 | I’ve written a tutorial on how to fine-tune DeepLabv3 for semantic segmentation in PyTorch. The tutorial link is: https://expoundai.wordpress.com/2019/08/30/transfer-learning-for-segmentation-using-deeplabv3-in-pytorch/
The GitHub repo link is: https://github.com/msminhas93/DeepLabv3FineTuning |
st47839 | Hello,
I am following your link. I have a few queries on this:
I have 3 classes; even though I tried to change the number of out_channels to 3, model.eval() still shows the final layer with 21 classes. Please clarify this.
It has a batch size of 4. I am trying to train it on a single image, so when I change the batch size to 1, it throws an error.
Can you help me decode the result into a mask? I don’t know how to do that.
Please help in this regard. Thank you. |
st47840 | [image] I am following your tutorials; I get the following error when I run the code… Can you tell me where the problem is? |
st47841 | I have a fixed tensor that needs to be moved to GPU according to DataParallel’s whims, but I don’t want to bloat the state_dict with this tensor (since it never changes). I can’t seem to figure out a good solution. [for a little more context, this tensor is the weight in F.conv_transpose2d] The things I’ve considered:
override _apply - for single GPU, this seems to work fine, but for multi GPU, DataParallel seems to rely on replicate… which doesn’t use _apply and instead iterates over all buffers and parameters [the same things that go into the state_dict]
register_buffer (currently using) - code runs, but the state_dict is bloated with unnecessary crap; also, if this tensor is ever modified (accidentally or otherwise), the entire network is silently corrupted
custom code for saving checkpoints by manually removing this tensor from state_dict every single time - ugly
move tensor to appropriate GPU during every forward pass - probably slow
construct tensor during every forward pass - probably even slower
any other ideas? |
st47842 | Just create a flag to check whether the tensor has been allocated or not. Allocate it once.
Another option is just to delete it when you stop training.
Anyway I think it’s not bad it stays as a buffer. It’s somehow a necessary parameter to run inference. What would you do for exporting the model, keep telling everyone those values? Seems a bit annoying to me. |
st47843 | The whole point of allocating every time is that it would allocate on the correct GPU. If it’s only allocated once, then that would be option 4). From my tests, it seems option 4) is about as fast as 2), so maybe I can just do that (assuming it works in multi GPU context…)
A checkpoint is saved every epoch, so deleting it before saving is a variation of 3). I guess that’s an option.
Well, ideally this buffer is instantiated with the rest of the net in *.py, so “keep telling everyone those values” isn’t much of an issue. |
st47844 | With recent pytorch versions, you can use register_buffer(persistent=False) to prevent the buffer from appearing in the state_dict. This is present in 1.7 and not present in 1.4, not 100% sure when it appeared.
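A minimal sketch of that option (the module name and tensor shape are made up):
import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        # moved across devices by .to()/.cuda()/DataParallel like any buffer,
        # but excluded from the state_dict
        self.register_buffer('fixed_weight', torch.randn(16, 8, 3, 3), persistent=False) |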
st47845 | Why does ATen code use int64_t as the convention for indices instead of int32_t? Wouldn’t int32_t be sufficient for most use cases? How much of a performance hit does using int64 instead of int32 lead to? |
st47846 | Hi,
I’m trying to create a multioutput / multihead feedforward neural network with some shared layers and two different heads.
Based on a condition, the forward step should split the features such that one group of samples goes into one head and the other group of samples goes into the other head. I’m using indices to distinguish between these two groups.
Example code (I reduced the code to a minimum. Let me know if you need more information):
def forward(self, x):
indices_t = (x[:, -1] == 1).nonzero()
indices_c = (x[:, -1] == 0).nonzero()
x = x[:, :-1]
pred_t = None
pred_c = None
# Shared layers
z1 = self.fc1(x)
a1 = F.elu(z1)
z2 = self.fc2(a1)
a2 = F.elu(z2)
...
x_t = torch.index_select(a6, 0, indices_t.flatten())
x_c = torch.index_select(a6, 0, indices_c.flatten())
# Head One
if x_t.shape[0] > 0:
z1_t = self.fc_t_1(x_t)
a1_t = F.tanh(z1_t)
...
pred_t = F.softmax(a4_t, dim=1)
# Head Two
if x_c.shape[0] > 0:
z1_c = self.fc_c_1(x_c)
a1_c = F.tanh(z1_c)
z2_c = self.fc_c_2(a1_c)
a2_c = F.tanh(z2_c)
...
pred_c = F.softmax(a4_c, dim=1)
return pred_t, pred_c
So far, the forward step works. The problem is the backward step / updating the weights.
Here, I read that I can simply add my two losses and compute the gradients. Unfortunately, the weights of both heads are updated even if I process only samples from one group. For example, let’s assume we have two different groups: X and Y. Further, let’s assume we process a single sample which belongs to group X. The idea is to use the loss, calculated in the forward step, to update the shared layers + the head belonging to group X, but not the head belonging to group Y. Unfortunately, in my case, both heads are updated.
Here is an excerpt of the backward step:
for x, y in train_dl:
pred_t, pred_c = model(x)
indices_t = (x[:, -1] == 1).nonzero()
indices_c = (x[:, -1] == 0).nonzero()
y_t = torch.index_select(y, 0, indices_t.flatten())
y_c = torch.index_select(y, 0, indices_c.flatten())
if pred_t is not None:
loss1 = F.cross_entropy(pred_t, y_t)
if pred_c is not None:
loss2 = F.cross_entropy(pred_c, y_c)
# Case 1: Both losses could be calculated
loss = loss1 + loss2
else:
# Case 2: Only one loss could be calculated
loss = loss1
else:
# Case 3: Only one loss could be calculated
loss = F.cross_entropy(pred_c, y_c)
loss.backward()
optimizer.step()
optimizer.zero_grad()
I also tried using requires_grad in the for loop on the layers which should not be updated. Unfortunately, the weights in all layers are nevertheless constantly updated.
Thanks in advance. |
st47847 | Solved by ptrblck in post #2
The indexing of the input data and “selectively” calling backward works as expected in this code snippet:
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
self.base = nn.Linear(1, 10)
self.head1 = nn.Linear(10, 10)
self.head2 = nn.Lin… |
st47848 | The indexing of the input data and “selectively” calling backward works as expected in this code snippet:
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
self.base = nn.Linear(1, 10)
self.head1 = nn.Linear(10, 10)
self.head2 = nn.Linear(10, 10)
def forward(self, x):
idx1 = (x == 0).nonzero()
idx2 = (x == 1).nonzero()
# base
x = self.base(x)
# heads
x1 = torch.index_select(x, 0, idx1[:, 0])
x2 = torch.index_select(x, 0, idx2[:, 0])
x1 = self.head1(x1)
x2 = self.head2(x2)
return x1, x2
model = MyModel()
x = torch.randint(0, 2, (100, 1)).float()
out1, out2 = model(x)
print(out1.shape, out2.shape)
for name, param in model.named_parameters():
print(name, param.grad)
out1.mean().backward(retain_graph=True)
for name, param in model.named_parameters():
print(name, param.grad)
out2.mean().backward()
for name, param in model.named_parameters():
print(name, param.grad)
As you can see, the gradients of the parameters of head2 will be None after the first backward call and will only be populated after out2.mean().backward() is called.
I guess the indexing in your code might not work as expected and might select all samples from the batch dimension. Could you verify that x_t and x_c contain separate sets of the input data and if so compare your approach to my example code snippet? |
st47849 | You are right, I can see that the gradients are None after the first backward call. However, in some cases where the batch does only contain samples from one group, both heads are updated. I’m expecting that in such a case only the head gets updated which the group belongs to.
I noticed that in the first rounds of training the behavior is as expected. For example, imagine the following batches:
bs1:= x=1, y=1
bs2:= x=1, y=0
bs3:= x=0, y=1
Here, in the first training loop (using only bs1), only head1 is updated. The same applies to the second training loop. However, in the third training loop, both heads are updated even though the gradients are zero for prediction1.
I modified your example such that you can convince yourself:
x = torch.randint(0, 2, (100, 3)).float()
for t in x:
    # (assumes the model was adjusted so its base layer takes 3 input features)
    out1, out2 = model(t.view(1, 3))
    # Store weights to compare them after updating them
    head1_weight_buffer = model.head1.weight.detach().clone()
    head2_weight_buffer = model.head2.weight.detach().clone()
    for name, param in model.named_parameters():
        print(name, param.grad)
    out1.mean().backward(retain_graph=True)
    for name, param in model.named_parameters():
        print(name, param.grad)
    out2.mean().backward()
    for name, param in model.named_parameters():
        print(name, param.grad)
    optimizer_adam.step()
    optimizer_adam.zero_grad()
    # Compare weights. We are expecting that just one head changed.
    print(torch.all(torch.eq(head1_weight_buffer, model.head1.weight)))
    print(torch.all(torch.eq(head2_weight_buffer, model.head2.weight)))
I’m expecting that one of the print statements is True and the other one is False, but in practice they are always both True or both False at the same time.
Regarding your indexing question: it looks like both your approach and my approach work; I indeed get separate sets of input data. The only difference is idx.flatten() instead of idx[:, 0], right? |
st47850 | I think I found a way to fix the issue. We need to distinguish between an empty prediction and a non-empty prediction. Further, we need to set the gradients to None instead of to zero. Here is what I did:
x = torch.randint(0, 2, (100, 3)).float()
for t in x:
    out1, out2 = model(t.view(1, 3))
    # Store weights to compare them after updating them
    head1_weight_buffer = model.head1.weight.detach().clone()
    head2_weight_buffer = model.head2.weight.detach().clone()
    # Check which predictions are non-empty and only call backward for those
    if out1.shape[0] > 0:
        if out2.shape[0] > 0:
            out1.mean().backward(retain_graph=True)
            out2.mean().backward()
        else:
            out1.mean().backward()
    else:
        out2.mean().backward()
    optimizer_adam.step()
    # set_to_none=True skips parameters with grad == None in the next step
    optimizer_adam.zero_grad(set_to_none=True)
    # Compare weights. We are expecting that just one head changed.
    print(torch.all(torch.eq(head1_weight_buffer, model.head1.weight)))
    print(torch.all(torch.eq(head2_weight_buffer, model.head2.weight)))
It solved my problem! The heads are only updated if we use samples from the corresponding head group. Can you confirm whether the code snippet above is correct? And why do we have to distinguish between empty and non-empty predictions? |
st47851 | I guess the issue you are seeing in the previous example is due to using an optimizer with internal states such as Adam.
Even if the gradients are zero for certain parameters, Adam might still update them if valid running estimates were already created.
By setting the gradients to None you are changing this behavior and Adam will skip the parameter updates for all parameters where .grad==None. |
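A minimal sketch of this effect (my own illustration, not from the original post): once Adam has built up running estimates for a parameter, a zero gradient still moves it, while a None gradient makes Adam skip it entirely.
import torch

p = torch.nn.Parameter(torch.ones(1))
opt = torch.optim.Adam([p], lr=0.1)

# A first step with a real gradient populates Adam's running estimates
p.grad = torch.ones(1)
opt.step()

# A zero gradient still changes the parameter (momentum keeps it moving)
p.grad = torch.zeros(1)
before = p.detach().clone()
opt.step()
print(torch.equal(before, p))  # False: the parameter moved

# A None gradient makes Adam skip the parameter completely
p.grad = None
before = p.detach().clone()
opt.step()
print(torch.equal(before, p))  # True: the parameter is untouched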
st47852 | That sounds reasonable! However, without the if-else statement, both heads are again updated, even though I’m using only an example from one of the groups. I think that is because of the retain_graph=True parameter in backward(). In the case where out1 is empty, I retain the graph, which leads (somehow?) to an update of the weights in head1 and head2. Thank you very much for your help! |
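A possible mechanism for the remaining confusion (my own reading, not confirmed in the thread): calling backward() on the mean of an empty head output still populates that head’s .grad with zeros rather than leaving it None, so an optimizer with internal state keeps updating the head even though no sample was routed to it.
import torch

head = torch.nn.Linear(10, 10)
empty = torch.empty(0, 10)  # no samples routed to this head
out = head(empty)
out.mean().backward()       # mean of an empty tensor is nan, but backward still runs
print(head.weight.grad)     # zeros, not None -> Adam would still step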
st47853 | Hi, my code runs extremely slowly, so I was checking each step for the reason. However, there seems to be something wrong with the timing in PyTorch:
t1 = torch.cuda.Event(enable_timing=True)
t2 = torch.cuda.Event(enable_timing=True)
t3 = torch.cuda.Event(enable_timing=True)
torch.cuda.synchronize()
t1.record()
mask_loss_2 = nn.functional.binary_cross_entropy(out_mask, target_mask)
torch.cuda.synchronize()
t2.record()
# self.mask_loss = nn.BCELoss(reduction='mean')
mask_loss = self.mask_loss(out_mask, target_mask)
torch.cuda.synchronize()
t3.record()
if show_time:
    torch.cuda.synchronize()
    print(" Loss time ", t1.elapsed_time(t2), t2.elapsed_time(t3))
and the results are
Loss time 594.671630859375 4.7564802169799805
But then I switched the order of the two steps:
torch.cuda.synchronize()
t1.record()
mask_loss = self.mask_loss(out_mask, target_mask)
torch.cuda.synchronize()
t2.record()
mask_loss_2 = nn.functional.binary_cross_entropy(out_mask, target_mask)
torch.cuda.synchronize()
t3.record()
and the results are
Loss time 580.7462158203125 4.7769598960876465
So it seems the first BCE loss in each step took a much longer time than the second.
I also feel that this timing isn’t accurate, because I’m pretty sure the major time consumption happens in some complex operations after this step. Do you know why this is happening?
Thanks! |
st47854 | I would recommend to add some warmup workloads to avoid accidentally profiling the CUDA context creation and general startup time as seen here:
# warmup
x = torch.randn(1024, 1024, device='cuda')
out = torch.matmul(x, x)
# create events
t1 = torch.cuda.Event(enable_timing=True)
t2 = torch.cuda.Event(enable_timing=True)
t3 = torch.cuda.Event(enable_timing=True)
# profile1
t1.record()
out = torch.matmul(x, x)
# profile2
t2.record()
for _ in range(10):
    out = torch.matmul(x, x)
# end timer
t3.record()
# print results
torch.cuda.synchronize()
print(" Loss time ", t1.elapsed_time(t2), t2.elapsed_time(t3))
Changing the workload (single matmul and the matmul in the loop) results in the expected swapped timings. |
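For repeated measurements, the warmup + event + synchronize pattern above can be bundled into a small helper; this wrapper is my own sketch, not part of the original answer:
import torch

def cuda_time_ms(fn, warmup=3, iters=10):
    # Run a few warmup iterations so context creation and lazy
    # initialization are not included in the measurement
    for _ in range(warmup):
        fn()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        fn()
    end.record()
    # elapsed_time is only valid after both events have completed
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters  # ms per call

x = torch.randn(1024, 1024, device='cuda')
print(cuda_time_ms(lambda: torch.matmul(x, x)))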
st47855 | I am using a pre-trained network with nn.BCEWithLogitsLoss() for a multi-label problem. I want the output of the network as probabilities, but after applying Softmax, I am getting outputs of 0 or 1, which seems quite confusing: Softmax should not output a perfect 0 or 1 for any class; it should output probabilities for the various classes instead.
Below is the image of my code:
Below is the image of the output: |
st47856 | The softmax operation might output values (close to) these discrete values if a particular logit in the input activation has a relatively large positive value, as seen here:
x = torch.randn(1, 10)
out = F.softmax(x, dim=1)
print(out)
> tensor([[0.1612, 0.1486, 0.1232, 0.0626, 0.0162, 0.3084, 0.0166, 0.0811, 0.0098,
0.0723]])
x[0, 1] = 1000.
out = F.softmax(x, dim=1)
print(out)
> tensor([[0., 1., 0., 0., 0., 0., 0., 0., 0., 0.]])
If you are using nn.BCEWithLogitsLoss, I assume you are working on a multi-label classification use case. If that’s the case, you should remove the softmax and pass the raw logits to this criterion, as internally log_sigmoid will be applied. |
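To make the recommended setup concrete, here is a minimal sketch (my own example, with made-up shapes) of passing raw logits to nn.BCEWithLogitsLoss for a multi-label target:
import torch

logits = torch.randn(4, 3)             # raw model outputs, no softmax/sigmoid
target = torch.empty(4, 3).random_(2)  # multi-hot labels in {0, 1}
loss = torch.nn.BCEWithLogitsLoss()(logits, target)
print(loss)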
st47857 | Hi @ptrblck,
Yes, it is a multi-label classification problem. Is there a way to convert the logits into probabilities, as softmax outputs 0 and 1 for all the observations?
I want to use a cutoff point to choose the labels instead of the top-k classes; what should I do to convert the output into probabilities? I have tried taking the exp of the logits, but their sum is substantially greater than 1.
Shall I use a different loss to get the probabilities? |
st47858 | For a multi-label classification you would apply sigmoid to the outputs to get the probability for each class separately.
Note that nn.BCEWithLogitsLoss still expects raw logits.
You could apply the sigmoid and use nn.BCELoss instead, but this would reduce the numerical stability. |
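A minimal sketch of the inference side (my own illustration): apply sigmoid to the raw logits to get one probability per label, then threshold with a cutoff point instead of taking the top-k classes.
import torch

logits = torch.randn(2, 5)      # batch of 2 samples, 5 labels
probs = torch.sigmoid(logits)   # independent probability per label
preds = (probs > 0.5).float()   # cutoff point instead of topk
print(probs)
print(preds)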
st47859 | Using BCELoss throws an error:
While the same network works with nn.BCEWithLogitsLoss |
st47860 | One of the tensors (model output, target or weight) is a DoubleTensor, while a FloatTensor is expected, so you would have to transform it via tensor = tensor.float(). |
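A small sketch of the dtype mismatch and the fix (my own example; a float64 target is a common cause when labels come from NumPy):
import numpy as np
import torch

output = torch.sigmoid(torch.randn(4, 3))    # float32 probabilities
target = torch.from_numpy(np.zeros((4, 3)))  # float64 -> dtype mismatch in BCELoss
loss = torch.nn.BCELoss()(output, target.float())  # cast target to float32
print(loss)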
st47861 | Suppose I have a 2D tensor. I want to get the index of the first negative element of the tensor if there are any negative elements, else get the index of the smallest element. Basically, I want to implement the following for loop over each row of a 2D tensor efficiently in pytorch. Is there a way to do this with pytorch operations in a single iteration over each row?
# track the smallest value seen so far and its index;
# stop early at the first negative element
best_val = 10.0  # all numbers are guaranteed less than 10
best_idx = -1
for i in range(len(row)):
    if row[i] < 0:
        best_idx = i
        break
    if row[i] < best_val:
        best_val = row[i]
        best_idx = i
st47862 | Solved by sidm in post #2
Figured it out:
clamped = torch.clamp(rays, 5e-11, 10.0)
minIdx = torch.argmin(clamped, dim=1)
-5e-11 is an upper bound on the negative numbers I’m interested in. |
st47863 | Figured it out:
clamped = torch.clamp(rays, 5e-11, 10.0)
minIdx = torch.argmin(clamped, dim=1)
-5e-11 is an upper bound on the negative numbers I’m interested in. |
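A quick check of the clamp trick (my own example, assuming torch.argmin returns the first index among ties, which current PyTorch documents):
import torch

rays = torch.tensor([[0.5, -1.0, 0.2, -3.0],
                     [0.5,  1.0, 0.2,  3.0]])
clamped = torch.clamp(rays, 5e-11, 10.0)  # negatives collapse to the floor value
minIdx = torch.argmin(clamped, dim=1)
print(minIdx)  # tensor([1, 2]): first negative in row 0, smallest element in row 1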
st47864 | I tried to save GPU memory via .cpu() using code like the following, but it failed.
import ...

model = MyModel(...)
loss = Myloss(...)

def train():
    model.to('cuda')
    loss.to('cuda')
    model.train()
    # optimize parameters
    model.cpu()
    loss.cpu()
The memory cost does not decrease. Why does .cpu() fail to move the parameters from GPU memory to CPU memory? |
st47865 | Solved by RaLo4 in post #2
Can we use .cpu() to save memory on GPU?
Since I don’t know what you are trying to do here I can’t really answer that question.
As for this question:
.cpu() is not inplace for a tensor, so assuming loss is a tensor you need to write it this way:
loss = loss.cpu()
Also! One reason why a lot o… |
st47866 | Can we use .cpu() to save memory on GPU?
Since I don’t know what you are trying to do here I can’t really answer that question.
As for this question:
TsingZ0:
Why does .cpu() fail to move the parameters from GPU memory to CPU memory?
.cpu() is not in-place for a tensor, so assuming loss is a tensor you need to write it this way:
loss = loss.cpu()
Also! One reason why a lot of people are running out of VRAM is that they are trying to keep their total loss without detaching the graph or moving the tensor to the CPU. Like so:
running_loss += loss #this is wrong! don't use this!
If keeping your loss is what you are trying to do, but don’t want it to be on your gpu, then you can do this instead:
running_loss += loss.item()
This will extract the loss as a float. Completely detached and moved to cpu. |
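For illustration, a minimal sketch of the accumulation pattern (my own example, assuming a CUDA device is available):
import torch

model = torch.nn.Linear(10, 1).cuda()
x = torch.randn(32, 10, device='cuda')

running_loss = 0.0
for _ in range(5):
    loss = model(x).mean()
    running_loss += loss.item()  # plain Python float; the graph can be freed
print(running_loss)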
st47867 | 5had3z:
torch.cuda.empty_cache()
It works and .cpu() is really able to save memory. However, the maximum memory cost is still high. With RaLo4’s help, I know that using
RaLo4:
running_loss += loss.item()
saves a lot of memory. |
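To tie the two pieces together, a small sketch (my own, assuming a CUDA device): nn.Module.cpu() moves the parameters in-place, and empty_cache() additionally returns cached blocks to the driver so tools like nvidia-smi see the memory freed.
import torch

model = torch.nn.Linear(4096, 4096).cuda()
print(torch.cuda.memory_allocated())  # > 0 while the parameters live on the GPU
model.cpu()                           # nn.Module.cpu() is in-place for modules
print(torch.cuda.memory_allocated())  # drops back toward 0
torch.cuda.empty_cache()              # release cached blocks to the driver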