st45168
|
Hi Log!
logarith:
do I have to normalize them before passing them to the network,
or can I input tensors consisting of ones and zeros?
Passing in the “raw” tensors should be fine. Being ones and zeros,
they are already close to being normalized. Changing them to, say,
-1 and 1 so (if they were fifty-fifty) they would have a mean of 0 and
a standard deviation of 1 wouldn’t affect things much. (Try it both
ways – I doubt you’ll see any difference.)
(In contrast, think about a 16-bit grayscale image as input to a
network. The pixel values run from zero to about 65,000, so they
can be rather large. Normalizing the pixel values so that they are
of order one makes life easier for the network.)
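For illustration, a minimal sketch of that grayscale case (synthetic data; the 65,535 scale factor is just the 16-bit maximum):
import torch
# bring 16-bit pixel values down to order one before feeding the network
img16 = torch.randint(0, 65536, (1, 1, 28, 28), dtype=torch.int32).float()
img = img16 / 65535.0                     # roughly in [0, 1]
img = (img - img.mean()) / img.std()      # optionally standardize further
print(img.mean().item(), img.std().item())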
Best.
K. Frank
|
st45169
|
I’m trying to create a model that is able to learn any lower polynomial function. (Can this even be done?) So far I’ve got code that is able to learn the y=6x^2 + 2x - 4 quadratic equation.
I noticed that by providing [x, x^2] instead of [x] as the input the model performs much better. However, when I use [x, x^2, x^3] as the input the model doesn’t work anymore (huge test set loss). Why does providing x^3 to the model result in such a poor result?
Below is the code that I’m using. I’m quite new to PyTorch, so general tips/critique are also appreciated. Thanks!
import torch
from torch import Tensor
from torch.nn import Linear, MSELoss, functional as F
from torch.autograd import Variable
import numpy as np

def our_function(x):
    # calculate the y value using the function 6x^2 + 2x - 4
    return 6 * x * x + 2 * x - 4

def data_generator(data_size=1000):
    x = np.random.randint(-1000, 1000, size=data_size)
    y = our_function(x)
    # Adding x^2 enables the model to find the quadratic function much better.
    inputs = np.column_stack([x, x ** 2])
    # inputs = np.column_stack([x, x ** 2, x ** 3])
    labels = y.reshape(-1, 1)  # without the reshape we get wrong results
    return inputs, labels

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = Linear(2, 50)
        # self.fc1 = Linear(3, 50)
        self.fc2 = Linear(50, 50)
        self.fc3 = Linear(50, 1)
        self.criterion = MSELoss()
        self.optimizer = torch.optim.Adam(self.parameters(), lr=0.01)

    def forward(self, x):
        # Should I add a dropout?
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def train_model(self, epochs=1000, data_size=1000):
        x_train, y_train = data_generator(data_size)
        for epoch in range(epochs):
            y_pred = model(Variable(Tensor(x_train)))
            y_train = Variable(Tensor(y_train))
            loss = self.criterion(y_pred, y_train)
            self.optimizer.zero_grad()
            loss.backward()
            self.optimizer.step()
            print(f"Epoch: {epoch} Loss: {loss / data_size:,.10f}")

model = Net()
model.train_model()
|
st45170
|
Generally, a neural net fits a piecewise function of x, and there are no x*x terms in a network. Thus, adding x^2 input is beneficial.
Martijn:
Why does providing x^3 to the model result in such a poor result?
Because cubic polynomial root finding is more difficult than quadratic; indeed, it becomes a nonconvex problem, I believe.
If you mean a problem with a quadratic target equation, perhaps it is just a numerical issue, as your x^3 column range is -1e9…1e9, while it should have zero effect after training.
|
st45171
|
Hi Martijn!
Martijn:
I’m trying to create a model that is able to learn any lower polynomial function. (Can this even be done?)
Yes*!
*) For some definition of “lower polynomial.”
So far I’ve got code that is able to learn the y=6x^2 + 2x - 4 quadratic equation.
I noticed that by providing [x, x^2] instead of [x] as the input the model performs much better.
As Alex noted, by providing x**2 as input, you are giving your
model very valuable raw material with which to construct your
quadratic polynomial.
In fact, including x**2 lets you train an almost trivial model that
consists of a single Linear layer (and no activation functions):
class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = Linear(2, 1)

    def forward(self, x):
        x = self.fc1(x)
        return x
The Linear (2, 1) layer (with bias) is, in fact, exactly the general
quadratic polynomial when fed with [x, x**2] as its input, so this
model will train to essentially machine precision, and the weight
and bias of the Linear will train to match the coefficients of your
our_function() quadratic polynomial.
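As a rough sketch of that idea (a toy setup, not the thread's code: x is drawn from [-1, 1] rather than the original ±1000 range so plain Adam converges quickly):
import torch
from torch.nn import Linear, MSELoss

x = torch.linspace(-1, 1, 1000).unsqueeze(1)
inputs = torch.cat([x, x ** 2], dim=1)          # [x, x**2] features
targets = 6 * x ** 2 + 2 * x - 4

fc = Linear(2, 1)
optimizer = torch.optim.Adam(fc.parameters(), lr=0.1)
criterion = MSELoss()
for _ in range(2000):
    optimizer.zero_grad()
    loss = criterion(fc(inputs), targets)
    loss.backward()
    optimizer.step()
print(fc.weight, fc.bias)   # should approach [[2., 6.]] and [-4.]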
However, when I use [x, x^2, x^3] as the input the model doesn’t work anymore (huge test set loss). Why does providing x^3 to the model result in such a poor result?
First, adding x**3 just confuses the issue, making the training less
nimble. But more importantly, as Alex noted, x**3 is, in your case,
very large.
Given the range of your sample data, x is of order one thousand,
and x**2 is of order one million, while x**3 is of order one billion.
As training starts, the network has no idea what the three inputs
“mean” and has no sense of their relative scales. The x**3 term
has the largest effect on the output initially. So the network has to
spend many training iterations learning to ignore x**3 (or to process
it into some approximation to a quadratic), which wastes valuable
time, and potentially sends the training off on some wild goose chase
from which it never recovers.
Below is the code that I’m using. I’m quite new to PyTorch, so general tips/critique are also appreciated.
self.criterion = MSELoss()
...
print(f"Epoch: {epoch} Loss: {loss / data_size:,.10f}")
A minor comment: MSELoss (using its defaults) takes the mean
of the loss over the batch, so loss / data_size is incorrectly
dividing by the batch size a second time.
In terms of understanding your loss, x is of order one thousand,
so your quadratic (that is, y_train) is of order one million. If your
errors (that is, y_pred - y_train) were comparable to y_train
(that is, your relative errors were of order 1), then your errors and
squared errors would be of order one million and one trillion,
respectively. So your loss (MSELoss means “mean-squared-error”)
will be of order one trillion (10**12) as training starts. Just be aware
that even if your training works well and your loss falls a lot, it will
still be numerically large.
You should probably train on MSELoss, but maybe also print out
sqrt (MSELoss) – the so-called root-mean-squared (rms) error.
Even this will be large, starting out on the order of one million. So
I would suggest also tracking the relative rms error, something like
sqrt (MSELoss / (y_train**2).mean()). This gives you a sense of
how well your predictions are doing relative to the scale of the values
you are trying to predict.
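A sketch of that logging, reusing the loss and y_train names from the training loop above (assuming y_train is already a tensor at that point in the loop):
rms = loss.sqrt()
rel_rms = (loss / (y_train ** 2).mean()).sqrt()
print(f"rms: {rms.item():.4f}  relative rms: {rel_rms.item():.6f}")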
googlebot:
Generally, a neural net fits a piecewise function of x, and there are no x*x terms in a network.
I’d like to expand on this a little. It is true that Linear layers introduce
no quadratic terms. Also the ReLU activation function is piecewise
linear. But many non-linear activation functions do have quadratic (and
higher-order) terms in their expansions, so they do introduce quadratic
terms into the overall function computed by the network.
(For example, pytorch’s ELU (“exponential-linear unit”) has a regime
where the quadratic term dominates.)
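A quick illustration of that point (an aside, not from the post): for negative x, ELU with alpha = 1 is exp(x) - 1 = x + x**2/2 + x**3/6 + ..., so a quadratic term really is present in its expansion:
import torch
import torch.nn.functional as F

x = torch.linspace(-0.5, 0.0, 6)
print(F.elu(x))              # exact ELU (alpha = 1)
print(x + 0.5 * x ** 2)      # quadratic Taylor approximation agrees closely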
Thus, adding x^2 input is beneficial.
My quibble about activation functions introducing quadratic terms
notwithstanding, explicitly adding x**2 is, of course, beneficial in a
case like yours because the network doesn’t have to reproduce it by
digging it out of non-linearities in the activation functions.
Best.
K. Frank
|
st45172
|
KFrank:
I’d like to expand on this a little. It is true that Linear layers introduce
no quadratic terms. Also the ReLU activation function is piecewise
linear. But many non-linear activation functions do have quadratic (and
higher-order) terms in their expansions, so they do introduce quadratic
terms into the overall function computed by the network.
(For example, pytorch’s ELU (“exponential-linear unit”) has a regime
where the quadratic term dominates.)
Yeah, but the thing is, usual networks won’t generate quadratic and linear “features” at once, so polynomials are approximated with something like additive models.
|
st45173
|
Thanks for the replies @KFrank @googlebot!
KFrank:
In fact, including x**2 lets you train an almost trivial model that
consists of a single Linear layer (and no activation functions):
Indeed a simple Linear module with one layer does a much better job at finding the polynomial function than my original model. The beauty is in simplicity apparently.
KFrank:
So the network has to spend many training iterations learning to ignore x**3 (or to process it into some approximation to a quadratic), which wastes valuable
time, and potentially sends the training off on some wild goose chase
from which it never recovers.
As both of you mentioned, this seems to be the cause of the problem I ran into in my original post. I’m still a bit surprised the network doesn’t figure out it simply needs to ignore x**3 though.
KFrank:
A minor comment: MSELoss (using its defaults) takes the mean
of the loss over the batch, so loss / data_size is incorrectly
dividing by the batch size a second time.
Oops! Thanks for pointing that out, some leftover code from a previous version.
KFrank:
I’d like to expand on this a little. It is true that Linear layers introduce
no quadratic terms. Also the ReLU activation function is piecewise
linear. But many non-linear activation functions do have quadratic (and
higher-order) terms in their expansions, so they do introduce quadratic
terms into the overall function computed by the network.
So in general, if one suspects that the data contains a certain relation/function, would you add said function to the input instead of letting the Neural Network try to find it?
|
st45174
|
Hi Martijn!
Martijn:
So in general, if one suspects that the data contains a certain relation/function, would you add said function to the input instead of letting the Neural Network try to find it?
I’d go further: If I suspected that a dataset contained a certain
relation, I would try to model that relation directly, and not use
a neural network.
Things like performing linear regressions and fitting polynomials
with neural networks are nice toy problems to explore how neural
networks and gradient-descent optimization work, but if I were
doing it for real, I would avoid the complication of a neural network,
and just perform the regression or fit the polynomial.
Best.
K. Frank
|
st45175
|
version: pytorch==1.3.0
code:
import torch
input = torch.randn((2, 128, 10, 6), dtype=torch.float32)
out = input.sum()
print("%3.10f" % out.data)
<< 0.0181007385
Compute comparison tests using the same data:
sum:0.0179971529
numpy: 0.017990112. (RE=0.04%)
torch cuda:0.0179862976. (RE=0.06%)
torch cpu:0.0181007385 (RE=0.57%)
Round-off error when summing float32 data is expected, and the other results are close to one another. However, the torch CPU result differs from them noticeably. How should I judge the accuracy of the data?
Is it that the other computing libraries compensate for the accumulated error while the torch CPU does not, or does the torch CPU use some special processing?
Where can I see the implementation of the cumulative calculation?
|
st45176
|
Hi! I’m not very familiar and good with the subject, but maybe this issue page can help you:
github.com/pytorch/pytorch
the acc of torch.float32 is not good 8
environment:
python3
pytorch 0.4.1
run the code1:
import torch
import numpy as np
n=4
c=3
h=32
w=32
inputs = torch.FloatTensor(n,c,h,w).fill_(0)
inputs = np.asarray(inputs)
inputs_sum = 0.0
for j in range(c):
for k in range(h):
...
|
st45177
|
Hi Zebulun!
Zebulun:
Compute comparison tests using the same data:
sum:0.0179971529
numpy: 0.017990112. (RE=0.04%)
torch cuda:0.0179862976. (RE=0.06%)
torch cpu:0.0181007385 (RE=0.57%)
As Utku suggested with his link, the imprecision you see is expected
as it is consistent with 32-bit floating-point round-off error.
You are summing over about 15,000 normally distributed values, so
the expected size of your sum is about 124. Let’s treat your numpy
result as the exact result. Then your cpu result is off in the fourth
place after the decimal point, so relative to the expected value, this
is fully consistent with round-off error.
However, there is something fishy going on with your numbers. The
value you get for your sum is improbably small. As I mentioned
above, the expected size of your sum is about 124, so a value of
0.018 is very unlikely – about one chance in ten thousand. I can’t
think of any good reason this should happen, and it would be pushing
your luck to suggest that luck explains it.
Here is a script that goes through the calculations relevant to these
two points:
import numpy as np
tdim = (2, 128, 10, 6)
sumcpu = 0.0181007385
sumnpy = 0.017990112
nSum = np.product (tdim)
expected_size = np.sqrt (nSum)
relerr = (sumcpu - sumnpy) / expected_size
print ('nSum =', nSum, ', expected_size =', expected_size)
print ('relerr =', relerr)
import math
prob_small = math.erf (abs (sumnpy) / (math.sqrt (2.0) * expected_size))
print ('prob_small =', prob_small)
And here is its output:
>>> import numpy as np
>>>
>>> tdim = (2, 128, 10, 6)
>>> sumcpu = 0.0181007385
>>> sumnpy = 0.017990112
>>>
>>> nSum = np.product (tdim)
>>> expected_size = np.sqrt (nSum)
>>>
>>> relerr = (sumcpu - sumnpy) / expected_size
>>>
>>> print ('nSum =', nSum, ', expected_size =', expected_size)
nSum = 15360 , expected_size = 123.93546707863734
>>> print ('relerr =', relerr)
relerr = 8.92613733644216e-07
>>>
>>> import math
>>>
>>> prob_small = math.erf (abs (sumnpy) / (math.sqrt (2.0) * expected_size))
>>> print ('prob_small =', prob_small)
prob_small = 0.00011581860221173588
Two things:
relerr = 8.92613733644216e-07
32-bit round-off error is about 1.e-7. You are summing over about
15,000 values of order one, so the round-off error could easily
accumulate to this level.
prob_small = 0.00011581860221173588
When you sum over 15,000 (independent) normally distributed values
(so each value has standard deviation one), the sum comes from a
gaussian distribution that has a standard deviation of about 124 (and
a mean of zero). You can calculate the probability of such a sum
having a magnitude smaller than 0.018 using the so-called error
function, and that probability is very small – about 1% of 1%.
Best.
K. Frank
|
st45178
|
Hello!
I am new to PyTorch and I am at the moment building my first ever GAN network.
While choosing the proper layer architecture I have noticed some behavior that I cannot understand and I am really interested in the inner-working of this function.
Let’s say, that I have got a batch of data looking like this:
input_data = torch.randn(64, 100, 1, 1)
As I understand, it can be interpreted as a set of 64 pictures, of height=100 pixels, width=1 pixel in a grayscale.
I want to use this random data to generate some images, so I insert it into a network in which the first layer is a ConvTranspose2d. This layer requires the size of the input data to be specified and the size of the output data as well. It looks strange to me, because I thought that the size of an output image is determined by the size of the input image, stride, and padding. However, the layer can fit the output into the given size.
Example 1:
conv = ConvTranspose2d(100, 512, 4, 1, 0)
output = conv(input_data)
print(output.size())
torch.Size([64, 512, 4, 4])
Example 2:
conv = ConvTranspose2d(100, 3157, 4, 1, 0)
output = conv(input_data)
print(output.size())
torch.Size([64, 3157, 4, 4])
Probably, I do not understand some of the Convolutional Transpose layer’s mechanics but I find it very interesting and I wonder whether someone knows a simple answer to the question:
How does the ConvTranspose2d fit the output into an arbitrary number of output channels?
I will be grateful for your help
|
st45179
|
kluk123:
As I understand, it can be interpreted as a set of 64 pictures, of height=100 pixels, width=1 pixel in a grayscale.
No, PyTorch uses the NCHW memory format (channels-first), so your input would have 64 channels and a single pixel.
kluk123:
This layer requires the size of the input data to be specified and the size of the output data as well.
That’s not the case and you would need to define e.g. the in_channels, out_channels, and kernel_size.
kluk123:
How does the ConvTranspose2d fit the output into an arbitrary number of output channels?
Each output channel is created by a filter kernel, so you can define this number arbitrarily.
For a general overview of the conv / transposed conv arithmetic, have a look at this tutorial.
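As a quick sanity check (not from the reply above), the spatial output size of a transposed convolution follows H_out = (H_in - 1)*stride - 2*padding + dilation*(kernel_size - 1) + output_padding + 1, while the number of output channels is simply the number of filters you ask for:
import torch
from torch.nn import ConvTranspose2d

x = torch.randn(64, 100, 1, 1)   # N, C_in, H, W
conv = ConvTranspose2d(100, 512, kernel_size=4, stride=1, padding=0)
print(conv(x).shape)             # torch.Size([64, 512, 4, 4])
# H_out = (1 - 1)*1 - 2*0 + 1*(4 - 1) + 0 + 1 = 4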
|
st45180
|
Thank You very much, this explains a lot. Especially the linked tutorial comes in handy.
Now I understand the problem I had with channels.
Thank You one more time.
|
st45181
|
Suppose I have a batch size of 256. If I have 2 GPUs and use DataParallel, I can split a 512-sample batch into two 256-sample batches, and the final optimization uses the sum of the individual loss gradients, which equals the gradient of the summed loss, e.g. (f(x1) + f(x2))’ = f’(x1) + f’(x2). Does that mean it is the same as a single batch size of 512?
Since our model may not perform well with a large batch size, but we have a lot of training data, we would like to know whether we should consider DP or DDP.
|
st45182
|
Can you elaborate more on why DDP is different? It still uses all-reduce to “average” the gradients?
|
st45183
|
DataParallel vs DistributedDataParallel
Hi,
what is the difference between
model = nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu])
and
model = nn.DataParallel(model, device_ids=[args.gpu])
?
I think this should be helpful.
|
st45184
|
Yes, I checked this thread. I know DDP is better than DP as it does a full all-reduce with multiprocessing, whereas DP needs to calculate the loss on the main GPU, leading to imbalanced GPU utilization.
But here my question is: if using N GPUs in DP is equivalent to multiplying the batch size by N, is the same true in DDP? Since in DDP “at the end of the backwards pass, every node has the averaged gradients”, it seems to me both are doing some “averaging” here.
|
st45185
|
No, they are different. DDP calculates the loss on each device, gets the gradients, and then does the “average”; DP does it all on the “main” device and updates the model there.
|
st45186
|
Thanks, I understand, but why is DP the same as a large batch size while DDP is not? Essentially they both do some “average” or “sum” of the gradients calculated on each GPU, so from the model’s side it should be the same?
|
st45187
|
DP gathers the network outputs onto the main device and all further operations happen there: calculating the loss, getting the gradients, and updating the model parameters.
DDP keeps the results on each device until the gradients are computed, then, as you say, averages them and updates the model on each device.
|
st45188
|
Suppose I have a matrix e.g. A = tensor([[0, 1, 2], [3, 4, 5]]) , and I have another tensor B e.g. B = torch.tensor([1, 5, 2, 4]), how can I multiply each scalar in A with B, to get C of shape [2, 3, 4] in this example?
|
st45189
|
Solved by KFrank in post #2
|
st45190
|
Hi Bob!
Bob_Li:
Suppose I have a matrix e.g. A = tensor([[0, 1, 2], [3, 4, 5]]) , and I have another tensor B e.g. B = torch.tensor([1, 5, 2, 4]), how can I multiply each scalar in A with B, to get C of shape [2, 3, 4] in this example?
I believe you are asking for the generalized outer product. Please
try torch.ger() (torch.outer) with .reshape() or torch.einsum:
ABa = torch.ger (A.reshape([6]), B).reshape ([2, 3, 4])
ABb = torch.einsum ('ij, k -> ijk', A, B)
These two versions should give you the same result. Do they do what
you want?
Best.
K. Frank
|
st45191
|
Hi, everybody!
New pytorcher here. I am trying to implement a binary semantic segmentation approach using UNET.
To get myself started I took some code from this example.
My dataset contains some 250 images and I used rotations to expand that to nearly 1000.
Here is an example of my images and targets:
[image: example image and target mask]
Now when I train the model two odd things happen:
The model achieves high accuracy very quickly, after just a couple of epochs.
All prediction masks are blank. The more epochs the blanker they are.
After a certain amount of reading, I believe the problem is that the targets in my images are fairly small.
The model thus gets a high accuracy for predicting zeroes for everything, thus essentially training itself to paint everything black.
The best solution appears to be:
Way more training epochs
A weighted scoring approach where positive pixels are favoured.
Any thoughts on this analysis and hints on how to achieve the weighted scoring would be appreciated, I’m still very new to all this.
Cheers!
|
st45192
|
Update:
I trained the model for 50 epochs and the resulting predictions were again blanks.
I then assembled a data set in which all the targets made up a large part of the images.
(small targets potentially being the issue)
After training the new set for 10 epochs I got this result. Not perfect but not terrible.
However, retraining the same set with the same settings from scratch delivers this, where everything is turned around:
Also training for more than 10 epochs again creates blank predictions.
Has anyone encountered this before?
|
st45193
|
The blank predictions can indeed be produced by a highly imbalanced target distribution in the mask images and you could try to use a weighted loss function to counter this effect.
I’m unsure about the last effect of flipped predictions and then going to a blank prediction again.
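A minimal sketch of the weighted-loss idea for binary segmentation (the pos_weight value below is made up; in practice it is often set near the ratio of negative to positive pixels in the masks):
import torch
import torch.nn as nn

logits = torch.randn(4, 1, 256, 256)                    # raw model outputs
target = (torch.rand(4, 1, 256, 256) > 0.95).float()    # sparse foreground mask
pos_weight = torch.tensor([20.0])                       # weight positive pixels 20x
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)
print(criterion(logits, target).item())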
|
st45194
|
Many thanks for the reply.
I have experimented with smaller image tiles 256px instead of 512px, and the results appear to be similar.
I will look into putting together a weighted loss function.
|
st45195
|
Based on this, what would be the most appropriate way to define my training model so that I could then record loss and accuracy?
[image: model definition code]
|
st45196
|
The model definition is independent of recording the loss and accuracy, so could you explain the issue a bit more, please?
PS: you can post code snippets by wrapping them into three backticks ```, which would make debugging easier.
|
st45197
|
Could you explain the statement a bit? The optimizer.step() function will iterate over all param groups, not only one.
|
st45198
|
Hi, since I updated torch to 1.7 the transformations on my dataset don’t apply to my labels, even though I did not change anything. The transformations work fine on the normal data. Maybe somebody has a quick fix, here’s the code:
import random
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
import torchvision.transforms.functional as TF
import BatchMaker  # user module

batch_size = 1
path_file = "data.csv"
train_inputs, train_labels, val_inputs, val_labels = BatchMaker.BatchMaker(path_file)

transform = transforms.Compose([
    transforms.ToPILImage(),
    # transforms.Resize((165, 220)),
    transforms.RandomRotation(degrees=random.randint(0, 30)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
])

class CreateDataset(Dataset):
    def __init__(self, inputs, labels, transform=transform):
        self.inputs = torch.FloatTensor(inputs)
        self.labels = torch.FloatTensor(labels)
        self.transform = transform

    def __getitem__(self, index):
        x = self.inputs[index]
        y = self.labels[index]
        if self.transform:
            seed = np.random.randint(2147483647)
            random.seed(seed)
            x = self.transform(x)
            y = self.transform(y)
            if random.random() > 0.5:
                x = TF.adjust_brightness(x, random.uniform(0.4, 0.6))
            if random.random() > 0.5:
                x = TF.adjust_contrast(x, random.uniform(0.4, 0.6))
            x = TF.to_tensor(x)
            random.seed(seed)
            y = TF.to_tensor(y)
            y = y / np.sum(np.array(y))
        return x.view(1, 180, 240), y.view(1, 180, 240)

    def __len__(self):
        return len(self.inputs)

# Get the data, transform it
data = {
    'train':
        CreateDataset(train_inputs, train_labels),
    'val':
        CreateDataset(val_inputs, val_labels, transform=None),
    # 'test':
    #     CreateDataset(test_inputs, test_labels, transform=None)
}

# Load Data in batches, shuffled
dataloaders = {
    'train': DataLoader(data['train'], batch_size=batch_size, shuffle=True, drop_last=True),
    'val': DataLoader(data['val'], batch_size=batch_size, shuffle=False, drop_last=True),
    # 'test': DataLoader(data['test'], batch_size=batch_size, shuffle=False, drop_last=True),
}
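For reference, one common way to keep image and mask augmentations in sync is to draw the random parameters once and apply the same functional ops to both, instead of re-seeding the global RNG. A sketch with a hypothetical paired_transform helper, assuming image and mask are (C, H, W) tensors or PIL images and torchvision >= 0.8:
import random
import torchvision.transforms.functional as TF

def paired_transform(image, mask):
    angle = random.uniform(0, 30)
    image, mask = TF.rotate(image, angle), TF.rotate(mask, angle)
    if random.random() > 0.5:
        image, mask = TF.hflip(image), TF.hflip(mask)
    if random.random() > 0.5:
        image, mask = TF.vflip(image), TF.vflip(mask)
    return image, mask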
|
st45199
|
Could you explain the issue a bit more?
Are these two lines of code not executed:
y = TF.to_tensor(y)
y = y/np.sum(np.array(y))
and the dataset returns unexpected objects for y?
|
st45200
|
Hi guys!
I am basically trying to split an MNIST image into smaller non-overlapping square blocks of size block_size and shuffle the pixel values within each block randomly. However, my naive implementation using nested for loops makes the function terribly slow. I have tried reading the docs for functions in torch that could help me out, but I couldn’t find any. Could someone please help me with doing this using slicing? Any help would be highly appreciated! Thanks in advance! I am attaching the code for reference.
# takes a batch of images (BxCxHxW) and shuffles pixels
# I was executing this on MNIST, hence I didn't consider the channel dimension
def Pixel_Shuffle(batch, block_size=2):
    it = int(batch.shape[2] / block_size)
    for img in batch:
        for i in range(it ** 2):
            row = int(i / it)
            row = row * block_size
            col = i % it
            col = col * block_size
            img[0, row:row+block_size, col:col+block_size] = Block_Shuffle(img[0, row:row+block_size, col:col+block_size])
    return batch

def Block_Shuffle(block):
    ord = torch.randperm(block.numel())
    block = block.reshape(-1)[ord].reshape(block.shape)
    return block
|
st45201
|
You could use tensor.unfold to create the patches and reshape them back as explained in this post.
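For example, a rough vectorized sketch of the block shuffle (not the linked post's code; it assumes a float input with H and W divisible by block_size):
import torch

def pixel_shuffle_blocks(batch, block_size=2):
    B, C, H, W = batch.shape
    # carve out non-overlapping block_size x block_size patches
    patches = batch.unfold(2, block_size, block_size).unfold(3, block_size, block_size)
    patches = patches.contiguous().view(B, C, -1, block_size * block_size)
    # one independent permutation per patch
    perm = torch.rand_like(patches).argsort(dim=-1)
    patches = torch.gather(patches, -1, perm)
    # stitch the patches back into an image
    nH, nW = H // block_size, W // block_size
    patches = patches.view(B, C, nH, nW, block_size, block_size)
    return patches.permute(0, 1, 2, 4, 3, 5).contiguous().view(B, C, H, W)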
|
st45202
|
Hello.
I would like to implement the indicator function of a set with pytorch (pytorch in particular because I need to use it as an activation function for one of my models).
I take the case of the derivative of Parameterised ReLU (parameterised by a real a), which is 1 for positive numbers and a elsewhere. I would like to be able to implement this derivative so that it can support batch sizes greater than 1.
Here is my example for relu.
import torch
activation_function = torch.relu
deriv_activation_function = lambda x : (x > 0).float()
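For the parameterised-ReLU derivative, a sketch that supports arbitrary batch shapes via broadcasting (a here is a hypothetical scalar slope):
import torch

a = 0.25
deriv_prelu = lambda x: torch.where(x > 0, torch.ones_like(x), torch.full_like(x, a))
print(deriv_prelu(torch.randn(4, 3)))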
|
st45203
|
Why not just use the tensor product? For example, when you want to apply a penalty if the sum of the output exceeds 1:
import torch
v = torch.rand(3)
penalty = torch.sum(v) * (torch.sum(v) > 1)
|
st45204
|
I implemented the Hogwild example program from here. My PC has a GPU, but when I execute the program it does not use the GPU but all the CPU cores, why? How can I indicate I want it to be run with the GPU instead?
|
st45205
|
Thanks, but I get an error: RuntimeError: Cannot pickle CUDA storage; try pickling a CUDA tensor instead, do you know how I can fix it?
|
st45206
|
Could you try to move these two lines right after the args = parser.parse_args() call?
I needed to move them as I got an error as:
RuntimeError: context has already been set
I’ll create an issue with a fix if I can reproduce it using the nightly (my current PyTorch build is a source build from a few weeks ago).
|
st45207
|
I moved those lines, and after running the program I got the context error. Were you able to fix it?
|
st45208
|
I got the error before changing anything. After I moved the lines of code it worked.
Are you only seeing the error after moving?
|
st45209
|
Hi, the context error appears when I move the lines as you suggested; the pickle error occurs when running the posted code as is.
|
st45210
|
Hello, while pytorch works fine for me with python3.6, I get this error when trying to import it in python3.7 :
>>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.7/site-packages/torch/__init__.py", line 427, in <module>
_C._initExtension(manager_path())
File "/usr/lib/python3.7/site-packages/torch/__init__.py", line 422, in manager_path
raise RuntimeError("Unable to find torch_shm_manager at " + path)
RuntimeError: Unable to find torch_shm_manager at /usr/lib/python3.7/site-packages/torch/bin/torch_shm_manager
Why is torch looking for torch_shm_manager under /usr/lib/python3.7/site-packages? I find torch_shm_manager under /usr/bin/torch_shm_manager.
Is there anything wrong with my install?
|
st45211
|
How did you install PyTorch? Did you build it from source or did you install the conda/pip binaries?
|
st45212
|
I’m not familiar with Gentoo and this package manager, but are you able to install and run it without these tools?
|
st45213
|
Here is the code that attempts to learn an encoding for paragraphs (sets of sentences that are themselves already encoded).
import torch
from torch import nn

class TransformerEnc(nn.Module):
    def __init__(self):
        super(TransformerEnc, self).__init__()
        d_model = 1024
        self.posenc = PositionalEncoding(d_model)  # defined elsewhere
        encoder_layer = nn.TransformerEncoderLayer(d_model, nhead=8)
        self.model = nn.TransformerEncoder(encoder_layer, num_layers=6)

    def forward(self, x):
        x = self.posenc(x)
        output = self.model(x)
        output = torch.sum(output, 1)
        return output
The forward function gets a tensor x of shape [64, 20, 1024],
where 20 is the max number of sentences in a paragraph and 1024 is the encoded sentence.
Now I need an output of size [64, 1024], where the 20 sentences get embedded into one 1024-dim vector.
When I train a model with this embedding I get random-level accuracy. If I replace this code with an LSTM it trains well, so clearly there is some issue.
Any ideas what might be the problem?
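One thing worth double-checking (an assumption on my part, not confirmed in the thread): without batch_first, nn.TransformerEncoder expects input shaped (seq_len, batch, d_model), so a [64, 20, 1024] tensor would be read as 64 sentences in 20 paragraphs, and summing over dim 1 would then mix paragraphs. A sketch of feeding it in the expected layout:
import torch
from torch import nn

enc_layer = nn.TransformerEncoderLayer(d_model=1024, nhead=8)
encoder = nn.TransformerEncoder(enc_layer, num_layers=6)
x = torch.randn(64, 20, 1024)        # (batch, sentences, features)
out = encoder(x.transpose(0, 1))     # -> (sentences, batch, features)
out = out.sum(dim=0)                 # pool over sentences
print(out.shape)                     # torch.Size([64, 1024])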
|
st45214
|
Hi,
I trained and saved a model that has two embedding layers.
One had a vocab V of ~ 22k items, that were encoded using a latent dimension D of 8.
If this is my trained embedding matrix (V x D)
tensor([[ 0.6163, 0.9769, 0.3950, ..., 1.2966, 0.3279, 1.4990],
[-0.4985, -0.6366, 0.6025, ..., 0.1584, -0.9755, -0.7621],
[-0.7265, 0.0665, -1.8310, ..., -0.4401, 0.8690, -0.7261],
...,
[-0.5039, 0.8430, -0.7346, ..., -0.1686, -0.0024, -0.9600],
[ 0.9195, 0.3476, 0.0367, ..., -0.9595, 0.1659, 1.1200],
[ 1.1634, -0.1817, -1.1437, ..., -1.2055, -0.5795, -1.6404]])
is there a way to map each of the rows to its original input (which is an integer)?
I am thinking that the embedding matrix is one-to-one with the initial vocab, but I don’t know how they are mapped, because the num_embeddings param only asks for the length of the vocab. So if the encoding is done with something like a dictionary, then what is the first key? Is it 0 or 1? And if I plugged the key 0 into that dictionary, would I get the first row of the embedding matrix?
|
st45215
|
Solved by ptrblck in post #2
|
st45216
|
The input index is directly used to index the weight matrix as seen here:
emb = nn.Embedding(4, 2)
x = torch.arange(4)
print(emb.weight)
> Parameter containing:
tensor([[-0.5812, 0.8916],
[ 1.9390, -0.6156],
[ 0.2126, 0.0578],
[-1.1099, 0.0964]], requires_grad=True)
out = emb(x)
print(out)
> tensor([[-0.5812, 0.8916],
[ 1.9390, -0.6156],
[ 0.2126, 0.0578],
[-1.1099, 0.0964]], grad_fn=<EmbeddingBackward>)
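As a follow-up sketch (the tokens and the stoi dict here are made up for illustration): row i of emb.weight belongs to input index i, so keeping the inverse of whatever dictionary built the vocab recovers the original item for each row:
import torch
from torch import nn

stoi = {'apple': 0, 'banana': 1, 'cherry': 2, 'date': 3}   # hypothetical vocab
itos = {i: s for s, i in stoi.items()}
emb = nn.Embedding(len(stoi), 2)
for i, row in enumerate(emb.weight):
    print(itos[i], row.detach())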
|
st45217
|
@ptrblck, thank you, this was my intuition but I wasn’t completely sure that this is right. However, I assumed it is so and ran a test that confirmed it empirically.
|
st45218
|
Hi,
I am facing a problem with DataLoader. I am training a classification model; the code runs normally with num_workers equal to 0, but it raised a CUDA out of memory error when I increased num_workers.
My GPU: RTX 3090
Pytorch version: 1.8.0.dev20201104 - pytorch-nightly
Python version: 3.7.9
Operating system: Windows
CUDA version: 10.2
This case consumes 19.5GB GPU VRAM.
train_dataloader = DataLoader(dataset = train_dataset,
batch_size = 16, \
shuffle = True,
num_workers= 0)
This case return: RuntimeError: CUDA out of memory. Tried to allocate 90.00 MiB (GPU 0; 24.00 GiB total capacity; 13.09 GiB already allocated; 5.75 GiB free; 13.28 GiB reserved in total by PyTorch)
train_dataloader = DataLoader(dataset = train_dataset,
batch_size = 16, \
shuffle = True,
num_workers= 8)
I could understand if it ran out of PC memory, but running out of CUDA memory is weird. Is that because of the nightly version?
Update:
I just installed CUDA 11.1.
The GPU memory usage is the same with num_workers = 0, 2, or 4, but CUDA runs out of memory with 8.
|
st45219
|
Are you pushing the data to the GPU inside the Dataset (in the __init__ or __getitem__)?
If so, increasing the number of workers would also increase the GPU memory usage, since each worker would push the data to the device.
If that’s not the case, could you post an executable code snippet so that we could reproduce this issue, as the device memory shouldn’t increase while loading data using the CPU.
|
st45220
|
@ptrblck thank you for the quick reply, I didn’t push data into GPU in Dataset.
This is my Dataset:
import os
import cv2
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
from torch.utils.data import Dataset
from matplotlib import pyplot as plt
import albumentations.pytorch as AT
from albumentations import (
    RandomRotate90, Flip, Transpose, GaussNoise, Blur, VerticalFlip, HorizontalFlip, \
    HueSaturationValue, RGBShift, RandomBrightness, Resize, Normalize, Compose, CenterCrop)
from PIL import Image

def transforms(size_image):
    return Compose([
        Resize(height=size_image[0], width=size_image[1]),
        # Red - Green - Blue right now
        Normalize(mean=(0.406, 0.515, 0.323), std=(0.195, 0.181, 0.178)),
        AT.ToTensor()
    ])

def augmentation(size_image, p=0.5):
    return Compose([
        RandomRotate90(),
        Flip(),
        Transpose(),
        GaussNoise(),
        Blur(),
        VerticalFlip(),
        HorizontalFlip(),
        HueSaturationValue(hue_shift_limit=5, sat_shift_limit=15, val_shift_limit=10),
        RGBShift(r_shift_limit=10, g_shift_limit=10, b_shift_limit=10),
        RandomBrightness(limit=0.05),
        CenterCrop(height=150, width=150, p=0.5)
    ], p=p)

class PyTorchImageDataset(Dataset):
    def __init__(self, image_list, train, labels, size_image, **kwags):
        self.image_list = image_list
        self.transforms = transforms(size_image)
        self.labels = labels
        self.train = train
        self.augment = augmentation(size_image)
        import json
        with open('C:/Users/user/name2path.json', 'r') as f:
            self.image_path_dir = json.load(f)

    def __len__(self):
        return len(self.image_list)

    def __getitem__(self, i):
        image_path = self.image_path_dir[self.image_list[i]]
        image = np.array(Image.open(image_path))
        label = self.labels[i]
        if self.train:
            image = self.augment(image=image)['image']
        image = self.transforms(image=image)['image']
        return image, label

    def isImage(self, path):
        all_image_ext = ["jpg", "gif", "png", "tga", "jpeg"]
        return True if (path.split('.')[-1].lower()) in all_image_ext else False
And my train code:
import os
os.environ['CUDA_LAUNCH_BLOCKING'] = "1"
import sys
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.optim as optim
from torch.utils.data import DataLoader
from models import se_resnext50_32x4d
from tqdm import tqdm
from sklearn.model_selection import train_test_split
from utilities import *

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
params = {'workers': 4,
          'batch_size': 16,
          'num_epochs': 100,
          'lr': 0.001,
          'size_image': [480, 768],
          'checkpoint': True,
          'save_path': 'C:/Users/user/save',
          'dataFrame_path': 'C:/Users/user/train.csv',
          'max_patience': 5}
checkDir(params['save_path'])

df = pd.read_csv(params['dataFrame_path'])
IDs = df.id.to_list()
labels = df.landmark_id.to_list()
train, val, y_train, y_val = train_test_split(IDs, labels, test_size=0.2, random_state=42, shuffle=True)

train_dataset = PyTorchImageDataset(image_list=train,
                                    labels=y_train,
                                    train=True,
                                    size_image=params['size_image'])
train_dataloader = DataLoader(dataset=train_dataset,
                              batch_size=params['batch_size'],
                              shuffle=True,
                              num_workers=params['workers'])
val_dataset = PyTorchImageDataset(image_list=val,
                                  labels=y_val,
                                  train=False,
                                  size_image=params['size_image'])
val_dataloader = DataLoader(dataset=val_dataset,
                            batch_size=params['batch_size'],
                            shuffle=True,
                            num_workers=params['workers'])
dataloader = {'train': train_dataloader, 'valid': val_dataloader}

model = se_resnext50_32x4d()
model.to(device)
model.train()
criterion = nn.CrossEntropyLoss()
softmax = nn.Softmax()
optimizer = optim.Adam(model.parameters(), lr=params['lr'], betas=(0.9, 0.999))

patience = 0
last_acc = -1
best_acc = 0.3
for epoch in range(params['num_epochs']):
    for phase in ['train', 'valid']:
        if phase == 'train':
            running_loss = 0
            for i, data in enumerate(tqdm(dataloader[phase])):
                optimizer.zero_grad()
                data_batch = data[0].to(device)
                b_size = data_batch.size(0)
                label = data[1].type(torch.long)
                label = label.to(device)
                output = model(data_batch)
                prob = softmax(output)
                loss = criterion(output, label)
                running_loss += loss.item()
                loss.backward()
                optimizer.step()
            print('epoch %d train loss: %.3f' % (epoch + 1, float(running_loss) / (1 + i)))
        else:
            running_loss = 0
            with torch.no_grad():
                for i, data in enumerate(tqdm(dataloader[phase])):
                    data_batch = data[0].to(device)
                    b_size = data_batch.size(0)
                    label = data[1].type(torch.long)
                    label = label.to(device)
                    output = model(data_batch)
                    prob = softmax(output)
                    loss = criterion(output, label)
                    if phase == 'valid' and i == 0:
                        valid_label = label.cpu().detach().numpy()
                        valid_prob = prob.cpu().detach().numpy()
                        running_loss += loss.item()
                    else:
                        torch.cuda.synchronize()
                        temp_label = label.cpu().detach().numpy()
                        valid_label = np.concatenate((valid_label, temp_label))
                        torch.cuda.synchronize()
                        temp_prob = prob.cpu().detach().numpy()
                        valid_prob = np.concatenate((valid_prob, temp_prob), axis=0)
                        running_loss += loss.item()
        if phase == 'valid':
            last = 0
            torch.cuda.synchronize()
            predict_labels = np.argmax(valid_prob, axis=1)
            acc = accuracy(valid_label, predict_labels)
            loss = round(float(running_loss) / (i + 1), 4)
            print('epoch %d valid acc: %.3f' % (epoch + 1, acc))
            print('epoch %d valid loss: %.3f' % (epoch + 1, loss))
            path_name = params['save_path'] + 'se_resnext_' + str(last + epoch + 1) + '_loss_' + str(loss) + '_acc_' + str(acc) + '.pth'
            if acc > best_acc:
                torch.save(model.state_dict(), path_name)
                best_acc = acc
            if acc < last_acc:
                patience += 1
            elif acc >= last_acc:
                patience = 0
            if patience == params['max_patience']:
                torch.save(model.state_dict(), path_name)
                sys.exit()
            last_acc = acc
|
st45221
|
Thanks for the code.
I’ve removed the training part, as the DataLoader's workers should increase the memory usage as described in your initial post.
Using this code snippet:
import os
os.environ['CUDA_LAUNCH_BLOCKING'] = "1"
import sys
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.optim as optim
from torch.utils.data import DataLoader
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import Dataset
from matplotlib import pyplot as plt
import albumentations.pytorch as AT
from albumentations import (
    RandomRotate90, Flip, Transpose, GaussNoise, Blur, VerticalFlip, HorizontalFlip, \
    HueSaturationValue, RGBShift, RandomBrightness, Resize, Normalize, Compose, CenterCrop)
from PIL import Image

def transforms(size_image):
    return Compose([
        Resize(height=size_image[0], width=size_image[1]),
        # Red - Green - Blue right now
        Normalize(mean=(0.406, 0.515, 0.323), std=(0.195, 0.181, 0.178)),
        AT.ToTensor()
    ])

def augmentation(size_image, p=0.5):
    return Compose([
        RandomRotate90(),
        Flip(),
        Transpose(),
        GaussNoise(),
        Blur(),
        VerticalFlip(),
        HorizontalFlip(),
        HueSaturationValue(hue_shift_limit=5, sat_shift_limit=15, val_shift_limit=10),
        RGBShift(r_shift_limit=10, g_shift_limit=10, b_shift_limit=10),
        RandomBrightness(limit=0.05),
        CenterCrop(height=150, width=150, p=0.5)
    ], p=p)

class PyTorchImageDataset(Dataset):
    def __init__(self, data, labels, size_image, **kwags):
        self.transforms = transforms(size_image)
        self.data = data
        self.labels = labels
        self.augment = augmentation(size_image)

    def __len__(self):
        return len(data)

    def __getitem__(self, i):
        image = self.data[i]
        label = self.labels[i]
        image = self.augment(image=image.permute(1, 2, 0).numpy())['image']
        image = self.transforms(image=image)['image']
        return image, label

data = torch.randn(10, 3, 255, 255)
labels = torch.randint(0, 1000, (10,))
train_dataset = PyTorchImageDataset(data=data,
                                    labels=labels,
                                    size_image=(224, 224))
num_workers = 20
train_dataloader = DataLoader(dataset=train_dataset,
                              batch_size=5,
                              shuffle=True,
                              num_workers=num_workers)
print('num_workers={}'.format(num_workers))

device = 'cuda:0'
for epoch in range(10):
    for i, data in enumerate(train_dataloader):
        data_batch = data[0].to(device)
        label = data[1].to(device)
        print('{}MB allocated'.format(torch.cuda.memory_allocated()/1024**2))
yields the same memory usage for different number of workers:
num_workers=0
2.87158203125MB allocated
2.87158203125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
num_workers=2
2.87158203125MB allocated
2.87158203125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
num_workers=20
2.87158203125MB allocated
2.87158203125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
Could you check, if this code snippet reproduces the issue on your system?
|
st45222
|
Thank you for your answer, these are the results:
At num_workers = 0 to 7, it works well:
num_workers=0
2.87158203125MB allocated
2.87158203125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
From 8 to 11, it sometimes raises an error while running, for example:
num_workers=11
2.87158203125MB allocated
2.87158203125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
1.14892578125MB allocated
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\giang\anaconda3\envs\working\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\Users\giang\anaconda3\envs\working\lib\multiprocessing\spawn.py", line 114, in _main
prepare(preparation_data)
File "C:\Users\giang\anaconda3\envs\working\lib\multiprocessing\spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "C:\Users\giang\anaconda3\envs\working\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
run_name="__mp_main__")
File "C:\Users\giang\anaconda3\envs\working\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "C:\Users\giang\anaconda3\envs\working\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\Users\giang\anaconda3\envs\working\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\giang\Desktop\DACON_landmark\test.py", line 5, in <module>
import torch
File "C:\Users\giang\anaconda3\envs\working\lib\site-packages\torch\__init__.py", line 117, in <module>
raise err
OSError: [WinError 1455] The paging file is too small for this operation to complete. Error loading "C:\Users\giang\anaconda3\envs\working\lib\site-packages\torch\lib\cudnn_adv_infer64_8.dll" or one of its dependencies.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\giang\anaconda3\envs\working\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\Users\giang\anaconda3\envs\working\lib\multiprocessing\spawn.py", line 114, in _main
prepare(preparation_data)
File "C:\Users\giang\anaconda3\envs\working\lib\multiprocessing\spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "C:\Users\giang\anaconda3\envs\working\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
run_name="__mp_main__")
File "C:\Users\giang\anaconda3\envs\working\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "C:\Users\giang\anaconda3\envs\working\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\Users\giang\anaconda3\envs\working\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\giang\Desktop\DACON_landmark\test.py", line 5, in <module>
import torch
File "C:\Users\giang\anaconda3\envs\working\lib\site-packages\torch\__init__.py", line 117, in <module>
raise err
OSError: [WinError 1455] The paging file is too small for this operation to complete. Error loading "C:\Users\giang\anaconda3\envs\working\lib\site-packages\torch\lib\caffe2_detectron_ops_gpu.dll" or one of its dependencies.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\giang\anaconda3\envs\working\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\Users\giang\anaconda3\envs\working\lib\multiprocessing\spawn.py", line 114, in _main
prepare(preparation_data)
File "C:\Users\giang\anaconda3\envs\working\lib\multiprocessing\spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "C:\Users\giang\anaconda3\envs\working\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
run_name="__mp_main__")
File "C:\Users\giang\anaconda3\envs\working\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "C:\Users\giang\anaconda3\envs\working\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\Users\giang\anaconda3\envs\working\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\giang\Desktop\DACON_landmark\test.py", line 5, in <module>
import torch
File "C:\Users\giang\anaconda3\envs\working\lib\site-packages\torch\__init__.py", line 117, in <module>
raise err
OSError: [WinError 1455] The paging file is too small for this operation to complete. Error loading "C:\Users\giang\anaconda3\envs\working\lib\site-packages\torch\lib\cudnn_adv_infer64_8.dll" or one of its dependencies.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\giang\anaconda3\envs\working\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\Users\giang\anaconda3\envs\working\lib\multiprocessing\spawn.py", line 114, in _main
prepare(preparation_data)
File "C:\Users\giang\anaconda3\envs\working\lib\multiprocessing\spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "C:\Users\giang\anaconda3\envs\working\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
run_name="__mp_main__")
File "C:\Users\giang\anaconda3\envs\working\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "C:\Users\giang\anaconda3\envs\working\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\Users\giang\anaconda3\envs\working\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\giang\Desktop\DACON_landmark\test.py", line 5, in <module>
import torch
File "C:\Users\giang\anaconda3\envs\working\lib\site-packages\torch\__init__.py", line 117, in <module>
raise err
OSError: [WinError 1455] The paging file is too small for this operation to complete. Error loading "C:\Users\giang\anaconda3\envs\working\lib\site-packages\torch\lib\caffe2_detectron_ops_gpu.dll" or one of its dependencies.
Traceback (most recent call last):
File "test.py", line 82, in <module>
for i, data in enumerate(train_dataloader):
File "C:\Users\giang\anaconda3\envs\working\lib\site-packages\torch\utils\data\dataloader.py", line 359, in __iter__
return self._get_iterator()
File "C:\Users\giang\anaconda3\envs\working\lib\site-packages\torch\utils\data\dataloader.py", line 301, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "C:\Users\giang\anaconda3\envs\working\lib\site-packages\torch\utils\data\dataloader.py", line 885, in __init__
w.start()
File "C:\Users\giang\anaconda3\envs\working\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "C:\Users\giang\anaconda3\envs\working\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\giang\anaconda3\envs\working\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Users\giang\anaconda3\envs\working\lib\multiprocessing\popen_spawn_win32.py", line 72, in __init__
None, None, False, 0, env, None, None)
OSError: [WinError 1455] The paging file is too small for this operation to complete
From 12 to 14, sometimes the memory shows as allocated for 2 or 3 epochs and then the error is raised.
From 14 to 16, it fails from the beginning.
It shows CUDA out-of-memory several times, but when I check nvidia-smi the memory is still mostly empty.
My CPU: Intel core i7-10700K
|
st45223
|
This Windows error seems to indicate you might be running out of CPU RAM. Could this be the case?
I’m not a Windows expert, but a quick search for this error gave some results pointing in this direction.
|
st45224
|
My PC RAM is 32 GB and it reaches 20 GB at maximum when I run that code. I will install Ubuntu and try again to make sure it is not a hardware problem.
I only hit this problem after switching from a 2080 Ti to a 3090.
|
st45225
|
I’m not particularly skilled, just running projects from GitHub and such, but one thing I keep encountering is errors related to multiple workers with the DataLoader, and most of the time just putting the executing code inside an if __name__ == '__main__' block can help. I can only assume that something behaves differently on non-Windows systems for those errors not to appear there.
I have encountered similar errors to the above, but I think those were issues with lmdb environments set to take too much space for the drive, so perhaps my suggestion is unhelpful for this problem. But I can at least say that this seems to be related to RAM and/or storage. Perhaps the drive is full enough that memory cannot be paged, or something is artificially taking up storage during the program execution. In my case, I had 2 lmdb environments each set at 200 GB, which was more than what was available on the drive, leading to my programs crashing during startup.
|
st45226
|
Thank you for your explanation.
In my experience, the problem without if __name__ == "__main__" is that the code cannot execute at all, and I did use it while running my code. However, the code runs normally in this case; the problem is the increase in RAM.
Now we don’t have to worry about this anymore. At the time I raised this question I used the nightly version, but they have published the stable version of torch 1.7 for CUDA 11 and it solved the problem.
About your explanation, if I understand correctly, you are talking about the PC’s RAM, but we are discussing GPU RAM; I think they are different.
|
st45227
|
Good that it’s resolved!
Given the error in the last message, it seemed more like a PC RAM error and not a GPU RAM issue, since it talks about the paging file. GPU memory errors tend to mention GPU RAM explicitly (usually some CUDA error and an allocation failure due to no available memory).
|
st45228
|
I’m trying to accomplish something that I don’t even have the correct wording for.
(If you have a better Idea, I’d be happy to rename this thread so it could maybe help someone in the future)
I have two 1d matrices, binary, either 0 or 1 (yeah, that’s pretty much what binary means :D)
The way I framed the problem I’m trying to solve (the details of which would be of little interest here), I want the resulting matrix to be the resulting state where the two input matrices act as “triggers”. Which I feel is totally unclear, so I’ll illustrate with an example:
a = [1, 0, 0, 0, 1, 0] # Obviously not Python lists, I just want to illustrate what I'd functionally want
b = [0, 0, 1, 0, 0, 0]
# What I want:
result = [1, 1, 0, 0, 1, 1]
# So where a==1 and b ==0, result should be 1
# where a==0 and b ==1, result should be 0
# where a==0 and b==0, result should be the last state triggered
# [1, 0] -> 1
# [0, 1] -> 0
# [0, 0] -> the last value
# It's assumed either a or b will not be 0 at the first position
# It's also assumed a==1 & b==1 will never occur
To try and be even clearer, here’s what it would look like with Python lists and iterations:
(Which, again, is obviously a mere illustration of what I want; if you landed here from Google, that’s a scrappy and only illustrative implementation of what I functionally want)
a = [1, 0, 0, 0, 1, 0]
b = [0, 0, 1, 0, 0, 0]
result = []
for a_value, b_value in zip(a, b):
    if a_value == 1 and b_value == 0:
        result.append(1)
    if a_value == 0 and b_value == 1:
        result.append(0)
    if a_value == 0 and b_value == 0:
        result.append(result[-1])
I hope it’s clear.
Thanks a lot in advance,
|
st45229
|
What would be the output value, if the first position contains a zero for a and b?
This code should work:
a = torch.tensor([1, 0, 0, 0, 1, 0])
b = torch.tensor([0, 0, 1, 0, 0, 0])
res = ((a==1) & (b==0)).long()
idx = ((a==0) & (b==0)).nonzero()
res[idx] = res[idx-1]
print(res)
> tensor([1, 1, 0, 0, 1, 1])
Although, it would use the value from the last position, if the first entry is a double 0.
|
st45230
|
ptrblck:
a = torch.tensor([1, 0, 0, 0, 1, 0])
b = torch.tensor([0, 0, 1, 0, 0, 0])
res = ((a==1) & (b==0)).long()
idx = ((a==0) & (b==0)).nonzero()
res[idx] = res[idx-1]
print(res)
> tensor([1, 1, 0, 0, 1, 1])
Thanks a lot for your answer. I knew I had to be somehow unclear on my intentions :). My problem is that I want the “equal to previous value” mechanism to be recursive.
Hence
a = torch.tensor([1, 0, 0, 0, 0, 0, 0, 1, 0])
b = torch.tensor([0, 0, 0, 0, 0, 1, 0, 0, 0])
would result in
> tensor(1, 1, 1, 1, 1, 0, 0, 1, 1)
To answer your first question, it’s assumed either a or b will equal 1 in first pos.
|
st45231
|
I realize I chose to go that way because it fits how I want to frame my problem, yet I could have broadened the explanation a bit when asking for help.
So, I’m kind of dealing with reinforcement learning (“kind of” because my challenge is to use as many shortcuts as possible, so no fancy Markovian formalism).
My agent (for the part I’m currently struggling to solve anyway) can decide to turn a state ON or OFF. Matrix a represents the neuron “turn it on”, and b “turn it off”.
The resulting matrix should be the resulting state.
As the following operations are fairly straightforward (matrix multiplication, error function, gradient descent), doing this one in a few GPU operations would be a tremendous win.
Edit:
Hence, another way to express illustratively what I’d like to achieve:
#I want to turn:
start_walking = torch.tensor([1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0])
stop_walking = torch.tensor([0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0])
#Into
> is_walking tensor([1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1])
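Not from the thread, but one vectorized candidate for this particular latch (assuming, as stated, that the first position always contains a trigger and that a and b are never both 1) is a forward fill of the last trigger index via torch.cummax (available since PyTorch 1.5):
import torch

start_walking = torch.tensor([1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0])
stop_walking = torch.tensor([0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0])

event = (start_walking == 1) | (stop_walking == 1)
# index of the most recent trigger at every position (forward fill)
last_trigger = torch.cummax(torch.arange(len(event)) * event, dim=0)[0]
is_walking = start_walking[last_trigger]
print(is_walking)   # tensor([1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1])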
|
st45232
|
I’m getting back to the project where I needed that.
I’m quite certain there is a way to be found.
This post is a mere bump, though, only so as not to waste time on what I’m going to explain next, in case someone has an answer.
My intelligence has limits (obviously), and the answer to that problem either doesn’t exist, or lies beyond those limits.
What I’m going to try, is genetic programming. The problem seems simple enough to formulate for that approach to be somehow valid.
If I succeed I’ll both post the answer and the code used to evolve algos.
|
st45233
|
I had a vaguely similar problem (a last-known-value repeater); my conclusion was that it is not possible to do without a loop: basically you need to shift values by varying distances (still possible with gather()), and you may need to consult an arbitrary number of past values to find these distances - so not O(1) without some forward accumulator. Unfortunately, cumsum/cumprod seem inappropriate; the simplest step I found was:
nmissed += missing[t]
nmissed *= missing[t]
|
st45234
|
Pretty sure you’re totally right. That was my intuition, though I haven’t figured out any argument about what the minimal complexity might be here.
Well genetic programming is fun anyway…
|
st45235
|
I have a simple custom operator that inherits from torch.autograd.Function.
import torch.onnx
import torchvision
from torch import nn
from torch.autograd import Function
class MyReLUFunction(Function):
    @staticmethod
    def symbolic(g, input):
        return g.op('MyReLU', input)

    @staticmethod
    def forward(ctx, input):
        ctx.input = input  # keep the input around for the backward pass
        return input.clamp(0)

    @staticmethod
    def backward(ctx, grad_output):
        grad_input = grad_output.clone()
        grad_input.masked_fill_(ctx.input < 0, 0)
        return grad_input

class MyReLU(nn.Module):
    def forward(self, input):
        return MyReLUFunction.apply(input)

model = nn.Sequential(
    nn.Conv2d(1, 1, 3),
    MyReLU(),
)

dummy_input = torch.randn(10, 1, 3, 3)
torch.onnx.export(model, dummy_input, "model.onnx", verbose=True)
Following the instructions from the documentation, I added symbolic(g, input) within the class, but when I run the ONNX export, it still gives me
RuntimeError: No Op registered for MyReLU with domain_version of 9
So I was wondering what is the correct way to register a customized function when exporting to onnx.
|
st45236
|
I had the same issue. Then I found out that the given example works in PyTorch 1.1 and 1.2, but it throws errors with more recent versions such as PyTorch 1.5 and 1.7. When I dug into the PyTorch ONNX documentation, I found that we have to pass an additional argument, operator_export_type, to the torch.onnx.export function (ref: https://pytorch.org/docs/stable/onnx.html#functions).
In my understanding of the documentation, in order to support custom layers with symbolic functions, the additional argument operator_export_type should be assigned the value OperatorExportTypes.ONNX_ATEN_FALLBACK.
So your line
torch.onnx.export(model, dummy_input, "model.onnx", verbose=True)
Should be replaced with
torch.onnx.export(model, dummy_input, "model.onnx", verbose=True, operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK)
Then this error should be solved.
|
st45237
|
When I was using Insightface, I found that sometimes the function sample took up a lot of time, resulting in low GPU utilization. I used line_profiler to check the time of each line, and found that this line of code runs very slowly
github.com/deepinsight/insightface/blob/master/recognition/partial_fc/pytorch/partial_classifier.py#L56
            self.sub_weight = Parameter(self.weight)
            self.sub_weight_mom = self.weight_mom
        else:
            self.sub_weight = Parameter(torch.empty((0, 0)).cuda(local_rank))

    @torch.no_grad()
    def sample(self, total_label):
        P = (self.class_start <=
             total_label) & (total_label < self.class_start + self.num_local)
        total_label[~P] = -1
        total_label[P] -= self.class_start
        if int(self.sample_rate) != 1:
            positive = torch.unique(total_label[P], sorted=True)
            if self.num_sample - positive.size(0) >= 0:
                perm = torch.rand(self.num_local, device=cfg.local_rank)
                perm[positive] = 2.0
                index = torch.topk(perm, k=self.num_sample)[1]
                index = index.sort()[0]
            else:
                index = positive
unreasonable result: [line_profiler screenshot]
reasonable result (I split this line into two lines): [line_profiler screenshot]
How can I solve this problem? Thanks.
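For reference, this is how I also tried timing that single line in isolation (the shapes below are made up for illustration); synchronizing before and after matters because the kernels run asynchronously, so the time can otherwise be attributed to the wrong line:
import time
import torch

total_label = torch.randint(0, 85742, (512,), device="cuda")  # made-up stand-in for the real labels
class_start, num_local = 0, 10000                              # made-up values

torch.cuda.synchronize()
t0 = time.time()
P = (class_start <= total_label) & (total_label < class_start + num_local)
torch.cuda.synchronize()
print(f"elapsed: {(time.time() - t0) * 1000:.3f} ms")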
|
st45238
|
total_label is distributed across multiple GPUs, and its shape is (batches * ngpus, embedding_size)
|
st45239
|
Hi,
I am running PyTorch with a simple CUDA allocation.
import torch
a=torch.zeros((1,3,352,640), dtype=torch.float32).cuda()
The amount of memory should be (1 * 3 * 352 * 640 * 4 / 1000000) = 2.7 MB.
However, nvidia-smi shows about 591 MB being used.
The CPU memory also increases by about 1.2 GB.
This is unacceptable when running on an edge device.
any idea?
|
st45240
|
Solved by ptrblck in post #2
The majority of the memory allocated on the GPU is used by the CUDA context, which is created during the first CUDA operation. You could lower it by building from source for your GPU architecture and removing unnecessary libs (such as NCCL, in case you don't need it).
|
st45241
|
The majority of the memory allocated on the GPU is used by the CUDA context, which is created during the first CUDA operation. You could lower it by building from source for your GPU architecture and removing unnecessary libs (such as NCCL, in case you don't need it).
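As a rough way to see this split (a minimal sketch; the exact numbers depend on the GPU, driver, and PyTorch build):
import torch

a = torch.zeros((1, 3, 352, 640), dtype=torch.float32, device="cuda")
print(torch.cuda.memory_allocated() / 1024**2)  # memory actually used by tensors, ~2.6 MiB here
print(torch.cuda.memory_reserved() / 1024**2)   # memory held by PyTorch's caching allocator
# the large remainder reported by nvidia-smi is the CUDA context itself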
|
st45242
|
Hi, I'm new to programming.
I'm facing some problems with the CUDA installation. I want to know: is the CUDA toolkit installation
a. on Windows - using the downloaded installer from (https://developer.nvidia.com/cuda-downloads)
b. in Python - using conda install pytorch torchvision torchaudio cudatoolkit=8.0 -c pytorch
the same thing?
Do I need both installed, or only the Python one?
Hope someone can help me with this. Thanks.
|
st45243
|
Solved by ptrblck in post #4
PyTorch supports CUDA>=9.2 and the binaries ship with a min. compute capability of 3.7, so your device won’t be supported anymore.
|
st45244
|
If you install the binaries (via conda or pip) they will ship with their own CUDA runtime and you would only need to install the corresponding NVIDIA driver on your system.
Have a look here (Table 1) to find the appropriate driver versions for different CUDA releases.
|
st45245
|
I'm using Windows 10. I have an NVIDIA Quadro 6000 GPU with driver version 377.83 (latest). If I refer to this link (Check GPU-CUDA Compatibility), the Quadro 6000's compute capability is 2.0, which matches CUDA 8.0.
However, torch.cuda.is_available() still returns False. Is this a common problem?
|
st45246
|
PyTorch supports CUDA>=9.2 and the binaries ship with a min. compute capability of 3.7, so your device won’t be supported anymore.
|
st45247
|
I see. That explains the problem. No wonder the "False" result keeps coming back.
Thanks a lot!
|
st45248
|
Hi,
For some reason I can only use cmake to build the PyTorch source:
mkdir build
cd build
cmake .. -DCMAKE_INSTALL_PREFIX="$HOME/pytorch_build"
make -j4 && make install
However, I didn't see any wheel or any PyTorch Python interface in the build directory; only libtorch_python.so and python3.7/caffe2 seem related to Python.
Any idea?
|
st45249
|
Hi, I have a tensor x of shape [8, 8, 89] and a second tensor s of shape [8, 8] containing only the values 1 and -1.
Now I want to expand s to the same shape of x:
s = s.unsqueeze(2).expand(x.shape)
and multiple them element-wise:
x = x*s
Two questions:
Why do I get a RuntimeError: CUDA error: invalid configuration argument when running this code with CUDA?
Is there a better operator which does this in-place?
Thank You!
|
st45250
|
Solved by matthias.l in post #6
Ok the actual error was in my custom operator:
The mistake was related to CUDA not directly to PyTorch.
When launching the CUDA kernel with (1,1,N) with N > 64 threads there is a failure. But when you put the number of threads in a scalar instead of a 3d vector, everything works well.
|
st45251
|
Hi,
I can't reproduce this error when running on Colab.
Does the following code work for you?
import torch
s = torch.rand(8, 8, device="cuda")
x = torch.rand(8, 8, 89, device="cuda")
s = s.unsqueeze(2).expand(x.shape)
s * x
You can do this inplace by doing x *= s.
|
st45252
|
Thank you!
Ok, you are right. The issue must be related to something deeper. I just copied the part of the model which raised the issue, but now all operations on x seem to have the same issue.
I'll have to investigate this in more detail…
Is it possible to dump all relevant information about a tensor to better reproduce issues like that?
x: torch.Size([8, 8, 89]) torch.float32 cuda:0
s: torch.Size([8, 8, 89]) torch.float32 cuda:0
|
st45253
|
I found the issue. The problem came from the preceding operation which calculates x. It was caused by a custom CUDA kernel which spawned 89 threads and 8x8 blocks.
It seems that PyTorch evaluates the graph lazily, and the error popped up at the following operation.
|
st45254
|
It is the CUDA API that is asynchronous, so errors will indeed surface later on.
You can force it to be synchronous by setting the CUDA_LAUNCH_BLOCKING=1 env variable to get the error to point at the right place.
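A minimal way to do that; the variable has to be set before any CUDA work happens, e.g. at the very top of the script, or on the command line as CUDA_LAUNCH_BLOCKING=1 python your_script.py:
import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # must be set before the first CUDA call

import torch  # the rest of the script follows as usual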
|
st45255
|
Ok, the actual error was in my custom operator:
The mistake was related to CUDA, not directly to PyTorch.
When launching the CUDA kernel with a block size of (1, 1, N) and N > 64 threads, it fails - which matches the CUDA limit of at most 64 threads in the z-dimension of a block. But when you pass the number of threads as a scalar instead of a 3d vector, everything works well.
|
st45256
|
I am using Google Colab to run my deep learning model. I don't know why it is taking more time on Colab.
I am using this snippet for device selection when training the model. Please suggest something, it's taking 30 min for 1 epoch.
if args.use_gpu:
    use_cuda = torch.cuda.is_available()
    device = torch.device("cuda" if use_cuda else "cpu")
else:
    device = 'cpu'
@ptrblck please help
|
st45257
|
Is the notebook set up to use a GPU? Because device will still hold "cpu" if the settings aren't changed.
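A quick sanity check you could run in a Colab cell (after enabling a GPU runtime via Runtime -> Change runtime type):
import torch

print(torch.cuda.is_available())  # should print True with a GPU runtime
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. a Tesla T4 or K80, depending on what Colab assigns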
|
st45258
|
The difference between Colab and your local setup would come from all of the components involved, i.e.:
used CPU
data storage and bandwidth to access it
used GPU
different libraries
etc.
You could profile the code to check, which part of the training takes most of the time.
|
st45259
|
Thanks for the reply.
But I don't understand the last line - how do I profile the code?
It's actually taking more time than on my local machine.
|
st45260
|
You could add timers to the code and check e.g. how long the data loading takes, as done in the ImageNet example. Also, torch.autograd.profiler might be useful.
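A rough sketch combining both suggestions (model, criterion, train_loader, and device are assumed to exist already): time the data loading separately and print an operator-level breakdown for a single iteration.
import time
import torch

end = time.time()
for images, target in train_loader:
    data_time = time.time() - end  # time spent waiting for the next batch
    images, target = images.to(device), target.to(device)

    with torch.autograd.profiler.profile(use_cuda=True) as prof:
        loss = criterion(model(images), target)
        loss.backward()
    print(f"data loading took {data_time:.3f}s")
    print(prof.key_averages().table(sort_by="cuda_time_total"))
    break  # profiling one iteration is enough to spot the bottleneck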
|
st45261
|
I want to multiply each image by a different scalar value.
For example, the images have shape [4, 3, 10, 10] (batch size, channels, w, h).
0.1 * image works fine, but
[0.1, 0.2, 0.3, 0.4] * image doesn't work because the dimensions are different.
How can I do this multiplication?
|
st45262
|
Solved by ptrblck in post #2
You can unsqueeze the additional dimensions to multiply the list entries with each sample in the batch:
x = torch.randn(4,3,10,10)
f = torch.tensor([0.1, 0.2, 0.3, 0.4])
res = f[:, None, None, None] * x
|
st45263
|
You can unsqueeze the additional dimensions to multiply the list entries with each sample in the batch:
x = torch.randn(4,3,10,10)
f = torch.tensor([0.1, 0.2, 0.3, 0.4])
res = f[:, None, None, None] * x
|
st45264
|
Hi
I'm trying to get the data for the next iteration inside the DataLoader loop.
The point is I need both the data for the current iteration and the data for the next iteration, or even for the iteration n steps ahead.
So I did the following.
for num_iter, (img, label) in enumerate(train_loader):
    train_loader_list = list(train_loader)
    tuple = train_loader_list[num_iter]
I was going to get the data for the next iteration by doing train_loader_list[num_iter+1],
but tuple[0] and img were different!
I think it's because of shuffle=True.
In this case, what's the most efficient way to get both the data for the current iteration and the data n iterations ahead?
|
st45265
|
If you think the data for the current iteration and the next iteration should be the same, why not just use the same data?
data_current_iteration = next(train_loader)
data_next_iteration = data_current_iteration
Or do I misunderstand your question?
|
st45266
|
I am not sure if I understand your question correctly, but I'd like to mention a few things that I think might be helpful.
Basically, when you set shuffle to True, your dataset indices will change every epoch and you won't get the same instances in two consecutive epochs, since the sampler reshuffles at every epoch. Also, your dataloader should give you the correct (image, label) pairs, and if something doesn't seem right, please check the output of your dataset one more time.
If you need more than one instance for your calculation at each iteration, I would suggest changing the __getitem__ function of your dataset. (It is not a good idea if you are just trying to update the weights less frequently, train a GAN, …)
Here is another possible way, but I cannot say if it's the most efficient way for your case or not.
loader_iter = iter(train_loader)
for curr_iter in range(num_iter):
    i = 0
    while i < number_of_iterations_you_need:
        data = next(loader_iter)  # consecutive batches from the same pass over the data
        # do anything you want
        i += 1
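Another possible pattern for the original "current batch plus next batch" need (just a sketch; do_something is a placeholder for whatever uses both batches):
loader_iter = iter(train_loader)
current = next(loader_iter)
for nxt in loader_iter:
    # current and nxt are consecutive batches from the same pass,
    # so the shuffling stays consistent between them
    do_something(current, nxt)  # placeholder for your actual computation
    current = nxt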
|
st45267
|
My training sometimes gets stuck in an odd way. It stops printing.
[screenshot of the training log]
And the GPU utilization is 0%.
[screenshot of GPU utilization]
When I press Ctrl + C to terminate it, it prints a log like this.
[screenshot of the traceback]
It seems that the error occurs when loading data with multiprocessing.
Does anyone know why?
The versions of some modules in my setup:
python version: 3.7.9
torch version: 1.4.0
cuda version: 10.0
I use Docker interactively, and use screen to run in the background.
|