st48268 | @ptrblck
Hi, thanks!
I use this code:
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.functional as F
train = pd.read_csv(r'train.csv')
x_test = pd.read_csv(r'test.csv')
x_train = train.iloc[:, :-1]
y_train = train.iloc[:, -1]
class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.fc1 = nn.Linear(800000, 153)
        self.fc2 = nn.Linear(153, 2)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x
model = MyModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()
x_train = torch.Tensor(x_train.values)
y_train = torch.Tensor(y_train.values)
x_test = torch.Tensor(x_test.values)
# data = torch.randn(64, 128)
# target = torch.randint(0, 2, (64, 40)).float()
for epoch in range(5):
    optimizer.zero_grad()
    output = model(x_train)
    loss = criterion(output, y_train)
    loss.backward()
    optimizer.step()
    print('epoch {}, loss {}'.format(epoch, loss.item()))
# check predictions
output = model(x_test)
probs = torch.sigmoid(output)
print(probs)
But the error message is:
RuntimeError:
size mismatch, m1: [800000 x 153], m2: [800000 x 153] at …\aten\src\TH/generic/THTensorMath.cpp:41
How should I set these 2 lines?
self.fc1 = nn.Linear(800000, 153)
self.fc2 = nn.Linear(153, 2) |
st48269 | It seems the x_train tensor has the shape [153, 800000] and should be permuted before passing it to the model via x_train = x_train.permute(1, 0). |
st48270 | @ptrblck
Hi, when I do it as you said:
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.functional as F
train = pd.read_csv(r'train.csv')
x_test = pd.read_csv(r'test.csv')
x_train = train.iloc[:, :-1]
y_train = train.iloc[:, -1]
class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.fc1 = nn.Linear(800000, 153)
        self.fc2 = nn.Linear(153, 2)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x
model = MyModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()
x_train = torch.Tensor(x_train.values)
y_train = torch.Tensor(y_train.values)
x_train = x_train.permute(1, 0)
x_test = torch.Tensor(x_test.values)
# data = torch.randn(64, 128)
# target = torch.randint(0, 2, (64, 40)).float()
for epoch in range(5):
    optimizer.zero_grad()
    output = model(x_train)
    loss = criterion(output, y_train)
    loss.backward()
    optimizer.step()
    print('epoch {}, loss {}'.format(epoch, loss.item()))
# check predictions
output = model(x_test)
probs = torch.sigmoid(output)
print(probs)
The error message is:
ValueError:
Target size (torch.Size([800000])) must be the same as input size (torch.Size([153, 2]))
So how should I set these 2 lines?
self.fc1 = nn.Linear(800000, 153)
self.fc2 = nn.Linear(153, 2) |
st48271 | Ok, I might be wrong with the permute suggestion.
Could you remove the permute operation again and post the shape of x_train before passing it to the model? |
st48272 | Hi,my training set is 800,000 rows and 153 columns. The data I want to predict is 200,000 rows and 153 columns. How should I write these 2 lines?
self.fc1 = nn.Linear(800000, 153)
self.fc2 = nn.Linear(153, 2)
If I use this code, the error message is:
RuntimeError:
size mismatch, m1: [800000 x 153], m2: [800000 x 153] at …\aten\src\TH/generic/THTensorMath.cpp:41 |
st48273 | MadFrog:
my training set is 800,000 rows and 153 columns.
800,000 would then be the batch size (or the total number of samples) while 153 would be the feature dimension.
In that case use
self.fc1 = nn.Linear(153, any_value)
self.fc2 = nn.Linear(any_value, 2)
The batch size is not part of the layer definition, as all PyTorch layers accept a variable batch size.
Your target should have the same batch size so you can calculate the loss using the model output. |
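For completeness, a minimal runnable sketch of the suggested fix. The hidden size (64) and the random data are placeholders, and the target is given the same shape as the model output because nn.BCEWithLogitsLoss requires matching shapes (the ValueError earlier in this thread comes from exactly that mismatch):
import torch
import torch.nn as nn
import torch.nn.functional as F

class MyModel(nn.Module):
    def __init__(self, num_features=153, hidden=64, num_classes=2):
        super(MyModel, self).__init__()
        # in_features must match the feature dimension (153), not the number of samples
        self.fc1 = nn.Linear(num_features, hidden)
        self.fc2 = nn.Linear(hidden, num_classes)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        return self.fc2(x)

model = MyModel()
x = torch.randn(800, 153)  # any batch size works; 800 rows stand in for the 800,000
# BCEWithLogitsLoss needs a float target with the same shape as the output
target = torch.randint(0, 2, (800, 2)).float()
criterion = nn.BCEWithLogitsLoss()
loss = criterion(model(x), target)
loss.backward()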
st48274 | Hi,
assume I have a tensor, and I want to apply a certain function to buckets of its elements. For example, assume I have the tensor
>>> A
0
2
0
2
[torch.FloatTensor of size 4x1]
and I want to compute the mean for every “bucket” of two elements, and replace it, like so:
>>> for idx in range(0, 4, 2):
...     A[idx:idx+2] = torch.mean(A[idx:idx+2])
>>> A
1
1
1
1
[torch.FloatTensor of size 4x1]
The issue is that for loop may be very slow, since it has to execute A.numel()/2 times. Is there any way to make it parallelizable, so that it runs in parallel over multiple buckets at the same time?
Note: The example with the mean is just to clarify what I meant, the actual function is slower and more complicated.
Thank you! |
st48275 | you can use torch.unfold to compute your bucket tensor, and then you can compute the mean for each bucket.
http://pytorch.org/docs/master/tensors.html?highlight=unfold#torch.Tensor.unfold 305 |
st48276 | Actually, in my case the buckets don’t overlap, so I think it would be better to just use
tensor.view(-1, bucket_size)
which avoids creating a new tensor like unfold would. |
st48277 | @smth @antspy I’m sorry I don’t understand the answer. I could bucket the tensor using unfold or view, but how do I apply the function (mean in this case) in parallel? |
st48278 | After creating the “buckets” or windows using unfold, you could flatten the elements in this bucket and apply e.g. torch.mean on this dimension. |
st48279 | I have reimplemented BatchNorm1D based on the implementation provided by @ptrblck (greatly appreciated!), here: https://github.com/ptrblck/pytorch_misc/blob/master/batch_norm_manual.py 37
In order to verify identical behaviour with the nn.BatchNorm equivalent, I initiate 2 models (as well as 2 optimizers), one using MyBatchNorm and one using nn.BatchNorm. Although I can verify that in the beginning, “compareBN” returns identical results (Max diff: tensor(8.5831e-06, device='cuda:0', grad_fn=<MaxBackward1>)), as the training progresses, very tiny differences begin to emerge.
A few iterations in:
[screenshot: compareBN output after a few iterations]
More iterations later:
[screenshot: compareBN output after more iterations]
By the end of the 25 epochs, both the models and the results differ significantly. Note that I have set all random seeds to 0 (including deterministic behaviour for cudnn and cuda).
This reminded me of fp16 vs fp32 training. Can it be the case that the underlying implementation of nn.BatchNorm uses different precision? Or is it something else? The MyBatchNorm method is also slightly slower (+2 seconds on a 10 second epoch), but I’d assume that this is due to C++ being used for nn.BatchNorm. I used a 2 layer FCNN with 2 BatchNorm layers on MNIST. Interestingly, the 2nd BatchNorm layer is slower to show discrepancies. |
st48280 | Solved by ptrblck in post #9
Ah OK.
The backward pass of repeat_interleave is not deterministic as explained in the linked docs:
Additionally, the backward path for repeat_interleave() operates nondeterministically on the CUDA backend because repeat_interleave() is implemented using index_select() , the backward path fo… |
st48281 | I was able to get the models (predictions as well as the batchnorm params) closer to each other by using double precision (model.double()). In fact, their differences have now stabilised around 10^-10 - 10^-11 (based on the output of print('Max diff: ', (outputs - outputs_).abs().max()) and compareBN), whereas before they would at some point escalate to 0.5-1.5 and therefore end up with 2 completely different models.
The downside is that it now takes longer to train due to the double precision. If anyone has any recommendation as to how to ameliorate the problem without the overhead, I am all ears |
st48282 | My manual implementation was originally intended to be used as a reference implementation to see, how the running estimates are updated and how the batchnorm layer works in general.
The performance difference is expected, as I used native methods without specific (and faster) kernels.
The difference after a couple of iterations is most likely created by the accumulation of rounding errors due to the limited floating point precision.
It’s also an indication for this, since you see a reduction in these errors using float64.
What’s your use case, that you would like to use this implementation in a real model? |
st48283 | Could you be more specific about what you mean by faster kernels :)? I’d be interested to look into that. I am trying to implement ghost batch normalization (in essence, estimating batch statistics based on smaller sample sizes than the original mini-batch) for research purposes. |
st48284 | I’m not calling into any batchnorm functions defined in Normalization.cpp 6, such as the cudnn implementation 2, but use just native operators. |
st48285 | Other than the speedup one would gain from adopting those functions, do these functions also facilitate determinism? When I don’t use cuda(), I can reproduce the experiments; however, when I use cuda() and my own implementation of BatchNorm, it is not reproducible. |
st48286 | Have a look at the Reproducibility docs 5, if not already done.
What exactly did you change in the implementation, as I’m able to get deterministic outputs just by seeding the code with my code snippet? |
st48287 | Hey, thanks for following-up. I appreciate it.
Have a look at the Reproducibility docs 1, if not already done.
The following is included with every run:
torch.manual_seed(0)
torch.cuda.manual_seed(0)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
np.random.seed(0)
random.seed(0)
os.environ['PYTHONHASHSEED'] = str(0)
What exactly did you change in the implementation, as I’m able to get deterministic outputs just by seeding the code with my code snippet?
You are right, I modified the implementation back to yours, and it’s reproducible. It’s very strange because the nondeterminism seems to arise from rounding errors that do not happen in the same way, i.e. on 2 different runs I get:
Epoch 1/2, training loss: 0.2351639289064084, validation loss: 0.09888318485801263
Epoch 1/2, training loss: 0.23525172622356727, validation loss: 0.10040923198367216
I’ve added the following: reshape input tensor, take the mean & var across the newly introduced dimension, and then use repeat_interleave on mean & var. Finally, do the (input - mean) / sqrt(var) calculation. |
st48288 | Ah OK.
The backward pass of repeat_interleave is not deterministic as explained in the linked docs:
Additionally, the backward path for repeat_interleave() operates nondeterministically on the CUDA backend because repeat_interleave() is implemented using index_select() , the backward path for which is implemented using index_add_() , which is known to operate nondeterministically (in the forward direction) on the CUDA backend (see above). |
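One way to sidestep this, sketched below for the normalization step only (no affine transform or running stats), is to broadcast the per-chunk statistics over the reshaped input instead of calling repeat_interleave, so its nondeterministic backward is never used. The chunk count and input sizes are made-up values:
import torch

x = torch.randn(32, 10)  # toy input: 32 samples, 10 features
chunks = 4               # number of "ghost" sub-batches (assumed value)

xr = x.view(chunks, -1, x.size(1))              # [chunks, 8, 10]
mean = xr.mean(dim=1, keepdim=True)             # [chunks, 1, 10]
var = xr.var(dim=1, unbiased=False, keepdim=True)

# Broadcasting replaces repeat_interleave(mean, 8, dim=0),
# so the nondeterministic index_add_ backward should not be hit.
x_hat = ((xr - mean) / torch.sqrt(var + 1e-5)).view_as(x)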
st48289 | I’m also trying to reimplement BatchNorm2d based off your provided approach and getting similar rounding errors. Do you have any recommendations for a way to completely remove these differences?
My use case is I’m trying to separate the Batch Norm layer into two separate parts - mean/var (“head”) and weight/bias (“tail”). Ideally, I would like to have each part as a separate class that extends nn.Module and self.tail(self.head(x)) would be the exact same as doing self.batch_norm(x). You could also think of the “head” as doing nn.BatchNorm2d(num_features, affine=False), but I just need a way to do the affine transformation without any precision errors from the normal method. |
st48290 | loganfrank:
but I just need a way to do the affine transformation without any precision errors from the normal method.
If you are seeing absolute errors in the range 1e-5 they would be created by the limited floating point precision and the order of operations.
I don’t think you can easily guarantee the order of floating point operations in the manual approach to match the rounding errors in the internal PyTorch version.
If you need more precision, you could use .double() types for the data, buffers, and parameters. |
st48291 | Hi,
When I try to create two threads and one dataloader per thread, the following warning will come out from time to time:
OMP: Warning #190: Forking a process while a parallel region is active is potentially unsafe.
Moreover, the program can sometimes get stuck during training with two threads loading data at the same time. torch.multiprocessing can work perfectly with one dataloader per process, but I have to use threading for some reason.
I am using PyTorch 1.4.0 and Python 3.6. How can I make dataloader + threading work without getting stuck sometimes?
Thanks |
st48292 | Working with a facial keypoint dataset, I want to augment the data by rotating and random flipping. For that I also need to infer the new locations of the keypoints.
I'm trying to build a transform myself but got confused, since most transforms that I know and use do not expect to also modify the labels.
What is the right way to approach this issue? And is there any easy implementation running around somewhere?
update
There is a lot of confusion online about this subject, and I think it would be useful if someone could post a tutorial on it.
Anyway, I saw some people just pass a mask and convert it. I tried to implement it myself by first taking the coordinate vector and turning it into a sparse matrix (same size as the image, all zeros except at the keypoint locations), then passing it through the same transformations, then applying another transformation to update the targets with the ones acquired from the mask.
The question remains: how do I know what the input and output of each transform are?
Let's say I modified my dataset class __getitem__ function to return a dictionary with image, labels and mask (labels being the coordinates of the original picture and mask being a sparse matrix with those coordinates as ones and the rest zeros). How do I now pass this dataset together with a Compose object and make sure that the transforms handle this dictionary correctly? |
st48293 | This tutorial 46 is quite old by now and was written for Theano + Lasagne; however, you could reuse the approach for the augmentation of the keypoints, as it walks you through it step by step.
enterthevoidf22:
how do i now pass this dataset together with a compose object and make sure that transforms are handling this dictionary correctly?
If you want to apply the same “random” transformation of two different image tensors, you could use the functional API of torchvision.transforms as described here 34. |
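A minimal sketch of that functional-API suggestion, assuming an image tensor of shape [C, H, W] and a keypoint mask of the same spatial size; the flip probability and angle range are arbitrary choices, and note that TF.rotate accepts tensors only in recent torchvision releases (older versions need PIL images):
import random
import torchvision.transforms.functional as TF

def joint_transform(image, mask):
    # Draw the random parameters once, then apply the exact same
    # transformation to both the image and the keypoint mask.
    if random.random() > 0.5:
        image = TF.hflip(image)
        mask = TF.hflip(mask)
    angle = random.uniform(-20, 20)
    image = TF.rotate(image, angle)
    mask = TF.rotate(mask, angle)
    return image, mask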
st48294 | I followed your advice and performed the augmentation manually inside the __getitem__ function of the dataset class.
Still, I was confused about how to perform the validation split, because I supply the random_split function with the custom dataset class that, as I said, already performs augmentation.
Further, just to check for other errors, I tried to run the program with both the validation and the train data being augmented. There is a strange error occurring after TF.rotate(mask) that adds non-zeros to the mask. I can't find the exact location, but it's somewhere in the transform to PIL or in rotate.
import os
import numpy as np
from pandas.io.parsers import read_csv
from sklearn.utils import shuffle
import torch
from torch.utils.data import Dataset
import torchvision.transforms.functional as TF
class FacialKeypoints(Dataset):
    def __init__(self, test=False, cols=None, FTRAIN='data/Q3/training.csv', FTEST='EX1/Q3/test.csv', transform_vars=None):
        fname = FTEST if test else FTRAIN
        df = read_csv(os.path.expanduser(fname))  # load pandas dataframe
        # The Image column has pixel values separated by space; convert
        # the values to numpy arrays:
        df['Image'] = df['Image'].apply(lambda im: np.fromstring(im, sep=' '))
        if cols:  # get a subset of columns
            df = df[list(cols) + ['Image']]
        print('number of values in each column: ', df.count())  # prints the number of values for each column
        df = df.dropna()  # drop all rows that have missing values in them
        X = np.vstack(df['Image'].values) / 255.  # scale pixel values to [0, 1]
        X = X.astype(np.float32)
        image_size = int(np.sqrt(X.shape[1]))
        Y = []
        if not test:  # only FTRAIN has any target columns
            y = df[df.columns[:-1]].values
            y2 = y.reshape(y.shape[0], 15, 2)
            for coords in y2:
                mask = np.zeros((image_size, image_size))
                for pair in coords:
                    pair = pair.round().astype(int)
                    mask[pair[1] - 1, pair[0] - 1] = 1
                Y.append(mask)
            Y = np.array(Y)
            y = (y - 48) / 48  # scale target coordinates to [-1, 1]
            X, y, Y = shuffle(X, y, Y, random_state=42)  # shuffle train data
            y = y.astype(np.float32)
        else:
            y = None
        self.X = torch.tensor(X, dtype=torch.float32)
        self.transform_vars = transform_vars
        self.y = torch.tensor(y)
        self.Y = torch.tensor(Y, dtype=torch.float32)
        print('finished loading')

    def __len__(self):
        return len(self.X)

    def transform(self, image, mask):
        image = image.reshape(96, 96)
        flip_prob = self.transform_vars['flip_probability']
        rotate_prob = self.transform_vars['rotate_probability']
        print('before', torch.nonzero(mask, as_tuple=False).reshape(-1).shape[0])
        if torch.rand(1) > flip_prob:
            image = TF.hflip(image)
            mask = TF.hflip(mask)
        if torch.rand(1) < rotate_prob:
            avg_pixel = image.mean()
            degrees = self.transform_vars['degrees']
            deg = int(torch.rand(1).item() * degrees - degrees)
            image_r = TF.to_tensor(TF.rotate(TF.to_pil_image(image), deg)).squeeze()
            image_r[(image_r == 0) * (image != 0)] = avg_pixel
            image = image_r
            mask = TF.to_pil_image(mask)
            print('after pil', mask.ImageStat.sum)
            mask = TF.rotate(mask, deg)
            print('after rotate', mask.ImageStat.sum)
            mask = TF.to_tensor(mask).squeeze()
            # mask = TF.to_tensor(TF.rotate(TF.to_pil_image(mask), deg)).squeeze()
            print('after tensor', torch.nonzero(mask, as_tuple=False).reshape(-1).shape[0])
        return image, mask

    def update_target(self, mask):
        keypoints = torch.nonzero(mask, as_tuple=False).reshape(-1)
        keypoints = torch.from_numpy((keypoints.numpy() - 48) / 48)
        return keypoints

    def __getitem__(self, idx):
        if torch.is_tensor(idx):
            idx = idx.tolist()
        image = self.X[idx]
        keypoints = self.y[idx]
        mask = self.Y[idx]
        if self.transform_vars['is']:
            image, mask = self.transform(image, mask)
            keypoints = self.update_target(mask)
            print(keypoints.shape)
            return {'image': image, 'keypoints': keypoints}
        else:
            return {'image': image, 'keypoints': keypoints}
This is the dataset class that loads the data, turns the image string into values, removes NaNs, and forms a "mask", which is a matrix of zeros and ones where there should be a facial keypoint. In the augmentation part, both the image and the mask go through the same transformations, and then the mask goes through one more transformation (update_target) to become a vector of size 30, which is the target.
The main script:
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from preprocess import FacialKeypoints
import numpy as np
from torch.utils.data.dataset import random_split

transformed_dataset = FacialKeypoints(transform_vars={'is': True, 'degrees': 20, 'flip_probability': 0.5, 'rotate_probability': 0.8})
num_train = int(np.ceil(len(transformed_dataset) * 0.85))
num_val = int(len(transformed_dataset) - num_train)
batch_size = 16
trainset, valset = random_split(transformed_dataset, [num_train, num_val])
trainloader = DataLoader(trainset, batch_size=batch_size,
                         shuffle=True, num_workers=0)
valoader = DataLoader(valset, batch_size=batch_size,
                      shuffle=True, num_workers=0)
device = torch.device('cuda')
model2 = nn.Sequential(
    nn.Conv2d(1, 32, 3),
    nn.MaxPool2d(kernel_size=2, stride=2),
    nn.ReLU(),
    nn.Conv2d(32, 64, 2),
    nn.MaxPool2d(kernel_size=2, stride=2),
    nn.ReLU(),
    nn.Conv2d(64, 128, 2),
    nn.MaxPool2d(kernel_size=2, stride=2),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(15488, 500),
    nn.ReLU(),
    nn.Linear(500, 500),
    nn.ReLU(),
    nn.Linear(500, 30)
)
model2.to(device)
total_loss = {'train': [], 'val': []}
criterion1 = nn.MSELoss()
criterion2 = nn.MSELoss()
optimizer2 = torch.optim.Adam(model2.parameters(), lr=0.001)
total_loss = {'train': [], 'val': []}
for epoch in range(100):
    print('in epoch {}/100 :'.format(epoch + 1))
    for sample in trainloader:
        losses = []
        input = sample['image'].to(device)
        batch = input.shape[0]
        target = sample['keypoints'].to(device)
        optimizer2.zero_grad()
        output = model2(input)
        loss2 = criterion2(output, target)
        loss2.backward()
        optimizer2.step()
        losses.append(loss2.data)
    a = np.sum(losses)
    total_loss['train'].append(a)
    print('train loss = {}'.format(a))
    for sample in valoader:
        with torch.no_grad():
            losses = []
            input = sample['image'].to(device)
            batch = input.shape[0]
            input = input.view([batch, 1, 96, 96])
            target = sample['keypoints'].to(device)
            output = model2(input)
            loss2 = criterion2(output, target)
            losses.append(loss2.data)
    a = np.sum(losses)
    total_loss['val'].append(a)

'''def check_sample(loader=valoader, model=model2, device=device):
    device2 = torch.device('cpu')
    plots = 16 // 3
    x = next(iter(loader))
    y_true = x['keypoints']
    y_true = y_true.reshape(16, 15, 2)
    x = x['image'].to(device)
    x = x.view(16, 1, 96, 96)
    y = model(x)
    y = y.reshape(16, 15, 2).to(device2)
    x = x.to(device2)
    fig, ax = plt.subplots(3, plots)
    k = 0
    for i in range(plots):
        for j in range(3):
            ax[i, j].scatter(y_true[k, :, 0].detach().numpy(), y_true[k, :, 1].detach().numpy())
            ax[i, j].imshow(x[k].squeeze())
            ax[i, j].scatter(y[k, :, 0].detach().numpy(), y[k, :, 1].detach().numpy())
            k = k + 1
    plt.show()
'''
Finally, I get a weird error about the batch size, which I suspect is because of the target changing size:
line 55, in default_collate
return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [30] at entry 0 and [26] at entry 3
I would really appreciate any hint about what could cause this error, and also how to augment the data only for training and not for validation. |
st48295 | I pinpointed the problem but am still not sure how to solve it.
I'm also not sure if it's possible to reproduce without the data.
Still, I have a 2D tensor of size (96, 96) with all elements 0 except for 15 containing 1.
Let's call this tensor mask. So:
>>> import torch
import torchvision.transforms.functional as TF
deg = 20
mask.nonzero(as_tuple=False)
Out[44]:
tensor([[27, 38],
[27, 52],
[28, 14],
[29, 80],
[36, 28],
[36, 65],
[37, 20],
[37, 36],
[37, 58],
[37, 73],
[59, 47],
[71, 29],
[71, 46],
[71, 64],
[82, 46]])
>>>mask = TF.to_pil_image(mask)
mask = TF.rotate(mask, deg)
mask = TF.to_tensor(mask).squeeze()
mask.nonzero(as_tuple=False)
Out[46]:
tensor([[19, 72],
[27, 45],
[29, 68],
[31, 60],
[34, 54],
[41, 9],
[42, 33],
[43, 25],
[47, 18],
[58, 51],
[64, 71],
[70, 54],
[76, 38],
[80, 58]])
If you count, notice that there were 15 non-zero elements prior to the transform and 14 after it.
This is random and sometimes leads to fewer elements, sometimes to more, and sometimes to exactly the desired 15 elements. Of course, the randomness might be caused by the random generation of degrees, which I didn't mention in this comment. |
st48296 | I think you might lose values due to the applied interpolation in TF.rotate.
Could you set resample to PIL.Image.NEAREST for this method and check if your output would contain all points?
enterthevoidf22:
still i was confused how to perform validation split because i supply the random_split function with the custom dataset class that as i said already performs augmentation.
If you are lazily loading the data, just create two datasets. One with the training transformation, the other one with the validation transformation, and wrap both datasets in Subsets using the randomly split data indices. |
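A short sketch of the two-dataset split suggested above, assuming the FacialKeypoints class posted earlier in this thread (it shuffles internally with a fixed seed, so both instances see the same ordering); the 85/15 split and batch size mirror the earlier code:
import numpy as np
import torch
from torch.utils.data import Subset, DataLoader

# One dataset instance with training augmentation, one without.
train_dataset = FacialKeypoints(transform_vars={'is': True, 'degrees': 20,
                                                'flip_probability': 0.5,
                                                'rotate_probability': 0.8})
val_dataset = FacialKeypoints(transform_vars={'is': False})

num_samples = len(train_dataset)
indices = torch.randperm(num_samples).tolist()
split = int(np.ceil(num_samples * 0.85))

trainset = Subset(train_dataset, indices[:split])
valset = Subset(val_dataset, indices[split:])

trainloader = DataLoader(trainset, batch_size=16, shuffle=True)
valoader = DataLoader(valset, batch_size=16, shuffle=False)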
st48297 | Hey,
PIL.Image.NEAREST doesn't seem to solve the problem.
I tried
mask = TF.rotate(mask, deg, resample=PIL.Image.NEAREST)
and also:
mask = TF.rotate(mask, deg, resample=PIL.Image.NEAREST, fill=0)
and am still getting weird interpolations. What is stranger is that for some runs I'm getting more pixels than I should.
What I mean is that I count the non-zero elements before and after the transform and get more non-zero elements after the transform for some pictures, fewer for others, and sometimes the right amount.
What is more surprising is the fact that I don't see anyone else having trouble with this. Is there an easy workaround? |
st48298 | enterthevoidf22:
What I mean is that I count the non-zero elements before and after the transform and get more non-zero elements after the transform for some pictures, fewer for others, and sometimes the right amount.
This might indeed be expected if, e.g., a single input pixel with a non-zero value “lands” on multiple output pixel locations after the rotation. I assumed that the nearest interpolation would make sure to select a single output location, but this doesn’t seem to be the case.
E.g. if you have 4 single pixels with a non-zero value in the input image creating the edges of a rectangle, the rotated image would create a rotated rectangle, but each edge might be bigger or smaller.
For your use case you could thus use the keypoint coordinates directly instead of rotating an image. |
st48299 | yes, in despair i turned to implementing it myself. still i run into the same error, maybe you can check out my code and see if something pops to your eye?
it’s basically a function that takes the pair from iterable of torch.nonzero(as_tuple=False) and returns a tilted pair:
from math import atan2, cos, sin, radians
import numpy as np

def tilt(pair, deg):
    deg = radians(deg)
    x = (pair[1] - 48) / 48
    y = (pair[0] - 48) / 48
    angle = atan2(y, x) - deg
    size = np.sqrt(x ** 2 + y ** 2)
    temp = size * cos(angle)
    x = temp if ((temp > -1) & (temp < 1)) else x
    temp = size * sin(angle)
    y = temp if ((temp > -1) & (temp < 1)) else y
    x = int((x * 48 + 48).round())
    y = int((y * 48 + 48).round())
    return (y, x)
Generally, I'm sure this issue will repeat itself and should be addressed. |
st48300 | I haven’t verified your code, but this general approach should work.
This example 26 shows how to use a rotation matrix and might be useful in case you want to extend your use case.
enterthevoidf22:
Generally, I'm sure this issue will repeat itself and should be addressed.
I’m not sure which issue you are referring to, but if you think it’s an issue that single pixels might be “blurred” in the rotated image, i.e. they would spread to more than a single pixel location, I think this is rather a known image processing “issue”. |
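For reference, a vectorized sketch of rotating the keypoint coordinates directly with a 2x2 rotation matrix instead of rotating a mask image; the 96x96 image size and the 20-degree angle come from the thread, everything else is illustrative, and the sign convention may need flipping depending on how TF.rotate defines positive angles:
import math
import torch

def rotate_keypoints(keypoints, deg, image_size=96):
    # keypoints: tensor of shape [N, 2] in (row, col) pixel coordinates
    theta = math.radians(deg)
    rot = torch.tensor([[math.cos(theta), -math.sin(theta)],
                        [math.sin(theta),  math.cos(theta)]])
    center = (image_size - 1) / 2.0
    xy = keypoints[:, [1, 0]].float() - center   # (col, row) relative to the image center
    xy = xy @ rot.t()                            # apply the rotation to all points at once
    return (xy + center)[:, [1, 0]].round().long()  # back to (row, col) pixel coordinates

coords = torch.tensor([[27, 38], [71, 64]])
print(rotate_keypoints(coords, 20))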
st48301 | Let’s say I have a tensor of the size
torch.Size([512, 5000, 14, 14])
and I want to sum each 10 entries in second dim, e.g
sum [512, 0:10, :, :] over dim 1
sum [512, 10:20, :, :] over dim 1
…
So the resulting tensor is of size
[512, 500, 14, 14]
Is there any way to do it efficiently without loops? |
st48302 | Solved by albanD in post #2
Hi,
You can first view the Tensor before doing the reduction:
t.view(512, 500, 10, 14, 14).sum(2) |
st48303 | Hi,
You can first view the Tensor before doing the reduction:
t.view(512, 500, 10, 14, 14).sum(2) |
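A quick sanity check of that one-liner on a smaller tensor (sizes scaled down arbitrarily so the loop comparison stays cheap):
import torch

t = torch.randn(4, 50, 14, 14)             # smaller stand-in for [512, 5000, 14, 14]
grouped = t.view(4, 5, 10, 14, 14).sum(2)  # sum every 10 entries along dim 1 -> [4, 5, 14, 14]

# reference computed with a loop
ref = torch.stack([t[:, i * 10:(i + 1) * 10].sum(1) for i in range(5)], dim=1)
print(torch.allclose(grouped, ref))  # True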
st48304 | Hi,
I need to solve many sparse linear systems using LU factorization (ideally on GPU). Is it possible within the current implementation of PyTorch? Thanks |
st48305 | Thanks. It seems that it does not work with sparse tensors (trying without batching):
import torch
i = torch.LongTensor([[0, 2], [2, 0], [1, 1]])
v = torch.FloatTensor([3, 4, 5 ])
A = torch.sparse.FloatTensor(i.t(), v, torch.Size([3,3])).cuda()
A_LU = torch.lu(A)
RuntimeError: Didn’t find kernel to dispatch to for operator ‘aten::_lu_with_info’. Tried to look up kernel for dispatch key ‘SparseCUDATensorId’. Registered dispatch keys are: [CUDATensorId, CPUTensorId, VariableTensorId] |
st48306 | I’m getting the same, even without cuda.
i = torch.LongTensor([[0, 1, 2], [0, 1, 2]])
v = torch.FloatTensor([1,1,1])
t = torch.sparse.FloatTensor(i, v, torch.Size([3,3]))
t.to_dense()
tensor([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]])
t.lu()
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-25-9f1f2db7f67d> in <module>
----> 1 t.lu()
~/anaconda3/envs/audio-built/lib/python3.7/site-packages/torch/tensor.py in lu(self, pivot, get_infos)
294 r"""See :func:`torch.lu`"""
295 # If get_infos is True, then we don't need to check for errors and vice versa
--> 296 LU, pivots, infos = torch._lu_with_info(self, pivot=pivot, check_errors=(not get_infos))
297 if get_infos:
298 return LU, pivots, infos
RuntimeError: Didn't find kernel to dispatch to for operator 'aten::_lu_with_info'. Tried to look up kernel for dispatch key 'SparseCPUTensorId'. Registered dispatch keys are: [CPUTensorId, VariableTensorId] |
st48307 | In your case, you will need the following code:
i = torch.LongTensor([[0, 1, 2], [0, 1, 2]])
v = torch.FloatTensor([1,1,1])
t = torch.sparse.FloatTensor(i, v, torch.Size([3,3]))
a = t.to_dense() #this is necessary
a.lu() |
st48308 | I think the answer is that no, we don’t have an LU factorization for sparse Tensors available. You will have to convert to a dense Tensor.
If you are interested in contributing an implementation of LU factorization for sparse Tensors, we would be happy to help and accept it |
st48309 | Hi everyone,
Are there any other developments regarding linear algebra algorithms for sparse matrices?
I think most of the methods already implemented, for instance, in SciPy or in extensions such as scikit-sparse are not yet available in Torch, for instance:
Sparse Cholesky decomposition, see e.g. sksparse.cholmod
Solving linear systems Ax=b with A a sparse matrix
Product of two sparse matrices
…
It could be very useful to allow for automatic differentiation of such operations in Torch.
Thanks |
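As a side note, while these factorizations are not available for sparse tensors in PyTorch, a common CPU-only workaround is to hand the matrix to SciPy; a sketch with made-up values and no autograd support:
import numpy as np
import torch
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import splu

i = torch.tensor([[0, 1, 2, 0], [0, 1, 2, 2]])
v = torch.tensor([3.0, 5.0, 4.0, 1.0])
A = torch.sparse_coo_tensor(i, v, (3, 3)).coalesce()

# Convert to a SciPy sparse matrix, LU-factorize once, then solve many right-hand sides.
rows, cols = A.indices().numpy()
A_sp = coo_matrix((A.values().numpy(), (rows, cols)), shape=tuple(A.shape)).tocsc()
lu = splu(A_sp)

b = np.array([1.0, 2.0, 3.0])
x = lu.solve(b)
print(torch.from_numpy(x))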
st48310 | Hi all,
I have trained three separate pre-trained models (SqueezeNet, ResNet, AlexNet) and I want to create an ensemble. This is how my code looks, but when I do model.eval() I get the error: RuntimeError: mat1 dim 1 must match mat2 dim 0
Can you please help me out? Where am I going wrong?
Thank you so much!
class MyEnsemble(nn.Module):
    def __init__(self, modelA, modelB, modelC, nb_classes=11):
        super(MyEnsemble, self).__init__()
        self.modelA = modelA
        self.modelB = modelB
        self.modelC = modelC
        # Remove last linear layer of each backbone
        self.modelA.classifier = nn.Identity()
        self.modelB.fc = nn.Identity()
        self.modelC.classifier = nn.Identity()
        # Create new classifier
        self.classifier = nn.Linear(512 + 2048 + 4096, nb_classes)

    def forward(self, x):
        x1 = self.modelA(x.clone())  # clone to make sure x is not changed by inplace methods
        x1 = x1.view(x1.size(0), -1)
        x2 = self.modelB(x)
        x2 = x2.view(x2.size(0), -1)
        x3 = self.modelC(x)
        x3 = x3.view(x3.size(0), -1)
        x = torch.cat((x1, x2, x3), dim=1)
        x = self.classifier(nn.functional.relu(x))
        return x |
st48311 | Solved by ptrblck in post #4
Based on the error message I guess that the number of input features in self.classifier doesn’t match the feature dimension for x.
Print the shape of x after using torch.cat and before feeding it to self.classifier and adapt the in_features if necessary. |
st48312 | Based on the error message I guess that the number of input features in self.classifier doesn’t match the feature dimension for x.
Print the shape of x after using torch.cat and before feeding it to self.classifier and adapt the in_features if necessary. |
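A tiny sketch of that debugging step; the three feature sizes below are made-up stand-ins for whatever the three backbones actually return:
import torch
import torch.nn as nn

# Stand-ins for the flattened backbone outputs.
x1 = torch.randn(4, 512)
x2 = torch.randn(4, 1000)
x3 = torch.randn(4, 4096)

x = torch.cat((x1, x2, x3), dim=1)
print(x.shape)  # torch.Size([4, 5608]) -> use this as in_features

classifier = nn.Linear(x.size(1), 11)
out = classifier(torch.relu(x))
print(out.shape)  # torch.Size([4, 11])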
st48313 | Thank you so much! I checked that out, and the in_features I was passing to self.classifier was wrong! Thank you so much for your reply; now it's working! |
st48314 | This seems to be a PIL issue, which was tracked here 1 and should be already solved. Could you update torchvision, if you are using an older version? |
st48315 | Assuming two independent models model0 and model1, when I train these two models individually with losses loss0 and loss1, respectively, I get different results (accuracy) compared to when I train the models together with the loss loss = loss0 + loss1.
I appreciate any thought on this problem.
Cheers |
st48316 | I think, that’s how it is supposed to work. No?
For instance, softmax and triplet losses work this way too. |
st48317 | Could you please elaborate? grad(loss) = grad(loss0) + grad(loss1), and these two models have no parameters in common, so they should get the same gradients as when I trained them individually.
I again emphasize that the only difference is that I sum the losses and backprop on the summed loss. |
st48318 | I see. I missed that they are two independent models.
How do you initialize optimizers?
Do you use the same optimizer for both? |
st48319 | The parameters for two models are put into list and an optimizer is setup based on the parameters in the list. So I assume the optimizer should work similar for both of the models. |
st48320 | silvester:
grad(loss) = grad(loss0) + grad(loss1)
As you said here, gradients are independent I suppose.
So, I am not sure how to debug this. I do not know if the advanced optimizers (Like Adam) consider the parameters as independent entities or if they perform any operations to normalize the gradients somehow.
Did you try with simple SGD? |
st48321 | What I did was with SGD with a decay rate and momentum, though the idea of normalization seems interesting to me, too. But it is not clear when this happens. I suspect it may also be related to the number of parameters, as in the summed case the number of parameters is twice the number in the individual training. |
st48322 | I am not sure about the case of the number of parameters. Do you have a concrete reason behind this thought? Is it related to overfitting?
As the gradients are independent, I would expect them to behave more or less similarly in both cases.
I hope you do not have implementation-related issues. |
st48323 | @ptrblck I was hoping to get some leads from the PyTorch community about this issue. |
st48324 | Does applying np.float32 rather than np.int64 for the labels lead to degradation in accuracy? |
st48325 | If the losses are independent and don’t share any parameters, which were used to compute them, you should get the same results.
Are you able to reproduce this issue deterministically by seeding the code?
If so, could you share a code snippet so that we could have a look?
Here is a small example, which yields exactly the same gradients:
import torch
import torch.nn as nn
from torchvision import models

# separate losses
torch.manual_seed(2809)
model1 = models.resnet18()
model2 = models.resnet34()
x = torch.randn(2, 3, 224, 224)
target = torch.tensor([0, 999])
criterion = nn.CrossEntropyLoss()
output1 = model1(x)
output2 = model2(x)
loss1 = criterion(output1, target)
loss2 = criterion(output2, target)
loss1.backward()
loss2.backward()
grad1_ref = {name: p.grad.clone() for name, p in model1.named_parameters()}
grad2_ref = {name: p.grad.clone() for name, p in model2.named_parameters()}
model1.zero_grad()
model2.zero_grad()
# loss sum
torch.manual_seed(2809)
model1 = models.resnet18()
model2 = models.resnet34()
x = torch.randn(2, 3, 224, 224)
target = torch.tensor([0, 999])
criterion = nn.CrossEntropyLoss()
output1 = model1(x)
output2 = model2(x)
loss1 = criterion(output1, target)
loss2 = criterion(output2, target)
loss = loss1 + loss2
loss.backward()
grad1 = {name: p.grad.clone() for name, p in model1.named_parameters()}
grad2 = {name: p.grad.clone() for name, p in model2.named_parameters()}
# compare
for name in grad1_ref:
    print('model1 grad diff for {}: {}'.format(
        name, (grad1_ref[name] - grad1[name]).abs().max()))
    print('model2 grad diff for {}: {}'.format(
        name, (grad2_ref[name] - grad2[name]).abs().max())) |
st48326 | When I move a model to a gpu using to(device), am I still able to call methods within the model itself (like within forward) that aren’t gpu compatible? For example, I’d like to process a graph Data object in the forward method but doing so would rely on packages not meant for the gpu. |
st48327 | Yes, this would be possible, but depending on the code logic your code might synchronize at this point, e.g. if the result of this CPU operation is needed in the forward pass for some operations. |
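A minimal sketch of what such a mixed forward pass could look like; the NumPy post-processing step is a made-up stand-in for any CPU-only library, and the .detach() call means autograd does not flow through that step:
import torch
import torch.nn as nn
import numpy as np

class MixedModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(16, 8)

    def forward(self, x):
        x = self.fc(x)                       # runs on the GPU if the model/input live there
        x_cpu = x.detach().cpu().numpy()     # forces a sync: the GPU result is needed on the CPU
        processed = np.clip(x_cpu, 0, None)  # placeholder for a CPU-only library call
        return torch.from_numpy(processed).to(x.device)

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = MixedModel().to(device)
out = model(torch.randn(4, 16, device=device))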
st48328 | I'm making a simple model:
class Linear(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(Linear, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size, bias=True)
        self.fc2 = nn.Linear(hidden_size, num_classes, bias=True)

    def forward(self, text, text_lengths):
        text = text.float()  # dense layer deals with float datatype
        x = self.fc1(text)
        preds = self.fc2(x)
        return preds
and training it with the following code:
def train(model, iterator, optimizer, criterion):
    epoch_loss = 0
    epoch_acc = 0
    for batch in iterator:
        optimizer.zero_grad()
        text, text_lengths = batch.text
        print(np.shape(text))
        print(np.shape(text_lengths))
        predictions = model(text, text_lengths)
        loss = criterion(predictions, batch.labels.squeeze())
        acc = accuracy(predictions, batch.labels)
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item()
        epoch_acc += acc.item()
    return epoch_loss / len(iterator), epoch_acc / len(iterator)
When I run it, I am getting the following error:
RuntimeError Traceback (most recent call last)
in
----> 1 run_train(num_epochs, linear_model, train_iterator, valid_iterator, optimizer, loss_func, ‘linear’)
in run_train(epochs, model, train_iterator, valid_iterator, optimizer, criterion, model_type)
5
6 # train the model
----> 7 train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
8
9 # evaluate the model
in train(model, iterator, optimizer, criterion)
8 print(np.shape(text))
9 print(np.shape(text_lengths))
---> 10 predictions = model(text, text_lengths)
11 loss = criterion(predictions, batch.labels.squeeze())
12 acc = accuracy(predictions, batch.labels)
c:\users\mynam\appdata\local\programs\python\python38\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
in forward(self, text, text_lengths)
8 def forward(self, text, text_lengths):
9 text = text.float() # dense layer deals with float datatype
---> 10 x = self.fc1(text)
11 preds = self.fc2(x)
12 return preds
c:\users\mynam\appdata\local\programs\python\python38\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
c:\users\mynam\appdata\local\programs\python\python38\lib\site-packages\torch\nn\modules\linear.py in forward(self, input)
89
90 def forward(self, input: Tensor) -> Tensor:
---> 91 return F.linear(input, self.weight, self.bias)
92
93 def extra_repr(self) -> str:
c:\users\mynam\appdata\local\programs\python\python38\lib\site-packages\torch\nn\functional.py in linear(input, weight, bias)
1672 if input.dim() == 2 and bias is not None:
1673 # fused op is marginally faster
-> 1674 ret = torch.addmm(bias, input, weight.t())
1675 else:
1676 output = input.matmul(weight.t())
RuntimeError: mat1 dim 1 must match mat2 dim 0
I printed out the dimensions of my tensors and they are torch.Size([250, 10]) and torch.Size([250]) if that helps. |
st48329 | The error is raised in:
x = self.fc1(text)
Print the shape of text before passing it to self.fc1 and make sure the in_features of the linear layer and the feature size of text are equal.
PS: you can post code snippets by wrapping them into three backticks ```, which makes debugging easier. |
st48330 | I was trying to create new copies (so that python was pointing at different objects and not just updating the same copy across multiple namespaces). I tried it by getting the actual numpy data from a tensor and using that to create a brand new tensor but that didn’t work. Why did it not work? How does one do this?
>>> y
3 3
3 3
3 3
[torch.FloatTensor of size 3x2]
>>> yv.data = torch.FloatTensor( y.numpy() )
>>> yv
Variable containing:
3 3
3 3
3 3
[torch.FloatTensor of size 3x2]
>>> y
3 3
3 3
3 3
[torch.FloatTensor of size 3x2]
>>> yv.data.fill_(5)
5 5
5 5
5 5
[torch.FloatTensor of size 3x2]
>>> yv
Variable containing:
5 5
5 5
5 5
[torch.FloatTensor of size 3x2]
>>> y
5 5
5 5
5 5
[torch.FloatTensor of size 3x2] |
st48331 | clone maybe?
How to copy a Variable in a network graph
Yes this is how you want to do it.
Need to check it out…
This seems to be for Variables, but I wanted just a Tensor… |
st48332 | ok that works:
>>> yv.data = y.clone()
>>> y
5 5
5 5
5 5
[torch.FloatTensor of size 3x2]
>>> yv
Variable containing:
5 5
5 5
5 5
[torch.FloatTensor of size 3x2]
>>> yv.data.fill_(10)
10 10
10 10
10 10
[torch.FloatTensor of size 3x2]
>>> y
5 5
5 5
5 5
[torch.FloatTensor of size 3x2]
>>> yv
Variable containing:
10 10
10 10
10 10
[torch.FloatTensor of size 3x2] |
st48333 | Note that indexing with a (non-zero dimensional) tensor also results in a copy.
(for zero-dim tensors see this merged commit 156) |
st48334 | You should also detach the tensor from the computational graph if it requires grad else gradients will be calculated for both that may lead to OOM issues.
x = torch.tensor([1.,2.,3.], requires_grad=True)
y = x.detach().clone()
y.requires_grad = True |
st48335 | I tested the inference of the VGG16 network with the PyTorch framework and found that the first layer of the network (a convolutional layer) took a very long time during the first inference, probably dozens of times the total time of the second inference. I know this may be because it is the first run on the GPU, but I want to know what is actually happening during this time.
Machine configuration: GTX 2080Ti
Key code:
cnt = 0
for i in gpu_input_data:
    y = net(i)
    cnt += 1
    time.sleep(0.02)
    if cnt > 20:
        break
The timeline measured with PyTorch's profiler is as follows:
first inference: 300 ms; average time for the remaining 19 inferences: 13 ms |
st48336 | The first CUDA operation creates the CUDA context on the device, which adds some overhead.
Also, since CUDA operations are asynchronously called, you would need to synchronize the code before starting and stopping a timer via torch.cuda.synchronize().
Additionally, if you are using torch.backends.cudnn.benchmark = True the first iteration using a new input shape will profile different cudnn kernels and select the fastest one, which also adds overhead. |
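A sketch of the measurement procedure described above, with warmup iterations and explicit synchronization; the model and input here are placeholders:
import time
import torch
import torchvision.models as models

device = 'cuda' if torch.cuda.is_available() else 'cpu'
net = models.vgg16().to(device).eval()
x = torch.randn(1, 3, 224, 224, device=device)

with torch.no_grad():
    # Warmup: triggers CUDA context creation and (with cudnn.benchmark) kernel selection.
    for _ in range(3):
        net(x)
    if device == 'cuda':
        torch.cuda.synchronize()

    t0 = time.perf_counter()
    for _ in range(20):
        net(x)
    if device == 'cuda':
        torch.cuda.synchronize()
    print('avg time per inference: {:.2f} ms'.format((time.perf_counter() - t0) / 20 * 1000))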
st48337 | Thanks for your reply!
I printed the time after synchronization and got the same result. In addition, I used torch.autograd.profiler.profile; I don't know if this will introduce additional overhead in the first inference. |
st48338 | Thanks for the update. I’m not completely sure, but it looks as if all calls take approx. the same amount of time?
If the difference in in the colors, note that I’m partly colorblind and cannot see these differences that clearly so you would have to explain what you expect and what you are seeing now. |
st48339 | This image shows a run of 20 inferences, the first of which takes longer than the others. As can be seen from the figure, the first inference takes a very long time after going through the first layer of the network. The following shows the operations done by PyTorch for each step when the picture is enlarged.
[profiler timeline screenshot] |
st48340 | The previous picture covers 20 inferences, and the first one takes more time than the others. |
st48341 | Is this the very first execution of this layer in this script or are you seeing this increased time in another epoch of the model?
I would not recommend to profile the first execution for the aforementioned reasons (CUDA context creation, cudnnFind in benchmark mode, GPU startup/warmup time). |
st48342 | This graph is not from the first execution of the network; it is one of the many inferences that follow. Thank you for your answer; I think I now understand the extra cost of the first execution of the model. There is one more thing I want to understand: according to your explanation, the first execution of the model after it is loaded onto the GPU will inevitably spend time on some initialization operations. If that is unavoidable, can this time be shortened in some way? |
st48343 | It depends what is causing the slow down, i.e. the context creation cannot be avoided.
Could you post a code snippet to reproduce this issue you are seeing so that we can profile it? |
st48344 | You need to run the vgg.py You’ll get vgg.json , vgg.json In Chrome chrome://tracing/ The website can be parsed into visual graphics, and the core code of the test is in the test function.
github project:
GitHub
xinhaixiangyunpiao/vgg 2
Contribute to xinhaixiangyunpiao/vgg development by creating an account on GitHub. |
st48345 | I have a list of tensors t_list and I need to get the element-wise sum. I am concerned that, because the tensors are in a batch, my method is incorrect.
Code example
t_list = [t1, t2, t3, t4] #where ti is a tensor 32 x 1 x 128
t_list = torch.stack(t_list) # giving 4 x 32 x 1 x 128
sum_list = sum(t_list) # result is 1 x 32 x 1 x 128
Is this the correct way to sum each list within the batch or would I need to specify dimensions for sum? |
st48346 | Solved by ptrblck in post #2
Your approach might work, but if you want to set a dim argument you could use torch.sum(t_list, dim=0). |
st48347 | Your approach might work, but if you want to set a dim argument you could use torch.sum(t_list, dim=0). |
st48348 | Hello,
I am using this in my training function:
for epoch in range(num_epochs):
    train_mean_loss = 0
    train_mean_acc = 0
    rand_var = 0
    for i, (train_input, train_label) in enumerate(train_dataloader):
        if(train_input.device != device_available):
            print("Train Data wasn't on cuda but now is.")
        train_input = train_input.to(device_available)
        train_label = train_label.to(device_available)
        .
        .
        .
When I run the training loop a second time (in case of any error), I get the illegal memory access error.
RuntimeError: CUDA error: an illegal memory access was encountered
I am confused about these things:
Does CUDA throw an error if you try to push a tensor to CUDA when it's already on CUDA?
I faced the same issue with my model (I had to factory reset the runtime to get it running): I first created an object for my model class and then pushed it to CUDA, but then I changed something in my model, tried to push it again, and got the same illegal memory access error.
I am using Google Colab and a big dataset, so it's very difficult to debug when I have to factory reset my runtime every time I get an error during training. |
st48349 | Solved by ptrblck in post #4
No, this won’t work and you would have to use:
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'
at the beginning of your script. Make sure to restart the runtime and set this env var before PyTorch or any other library was imported otherwise this variable might not have any effect. |
st48350 | CUDA operations are asynchronous so the illegal memory access might be created in the model and the next CUDA operation would run into this error and raise it.
Could you rerun your script with CUDA_LAUNCH_BLOCKING=1 python script.py args and post the stack trace here, please? |
st48351 | I am facing this error on colab, so I just used this in the cell:
CUDA_LAUNCH_BLOCKING = 1
model = MLP_network()
model = model.to(device_available)
Will this work? What does this command do? |
st48352 | No, this won’t work and you would have to use:
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'
at the beginning of your script. Make sure to restart the runtime and set this env var before PyTorch or any other library was imported otherwise this variable might not have any effect. |
st48353 | Alright. Thank you so much!
So, I am not facing the error (the one in my question) anymore because I fixed the problems with my training function; the error was showing up when I tried to run the training function a second time (after getting an error for some other reason). I am going to close the topic. Thanks again @ptrblck for your time, I appreciate it. |
st48354 | Do you have any plans to implement functionality for loading big data files?
Suppose I have 300 GB of data files for training, and I can't load them all into memory.
For now, I am using TensorFlow, which provides producer/consumer style data loading:
https://www.tensorflow.org/how_tos/reading_data/
With it, I don't need to read the whole dataset into memory at once, and I can load data in parallel.
Any plans for similar functionality?
Thanks. |
st48355 | You can already do that with Pytorchnet.
Concretely, you pass a list of data files into tnt.ListDataset, then wrap it with torch.utils.data.DataLoader.
Example code:
def load_func(line):
    # a line in 'list.txt'
    # Implement how you load a single piece of data here
    # assuming you already load data into src and target respectively
    return {'src': src, 'target': target}  # you can return a tuple or whatever you want

def batchify(batch):
    # batch will contain a list of {'src', 'target'}, or however you return it in load_func.
    # Implement the method to collate the list above into batched Tensors here
    # assuming you already have two tensors containing the batched src and target
    return {'src': batch_src, 'target': batch_target}  # you can return a tuple or whatever you want

dataset = ListDataset('list.txt', load_func)  # list.txt contains the list of data files, one per line
dataset = DataLoader(dataset=dataset, batch_size=50, num_workers=8, collate_fn=batchify)  # This will load data when needed, in parallel, using up to <num_workers> workers.
for x in dataset:  # iterate over the dataset
    print(x)
There are surely other ways to do it. Hope this helps. |
st48356 | Just define a Dataset object that only loads a list of files in __init__, and loads them every time __getitem__ is called. Then, wrap it in a torch.utils.data.DataLoader with multiple workers, and you'll have your files loaded lazily in parallel.
class MyDataset(torch.utils.data.Dataset):
    def __init__(self):
        self.data_files = os.listdir('data_dir')
        self.data_files.sort()

    def __getitem__(self, idx):
        return load_file(self.data_files[idx])

    def __len__(self):
        return len(self.data_files)

dset = MyDataset()
loader = torch.utils.data.DataLoader(dset, num_workers=8) |
st48357 | Sorry, new to PyTorch. How might one adapt the above method to pytorch/examples/word_language_model/? Currently, it seems to load the entire data into cuda, which is causing OOM errors. |
st48358 | It would require rewriting the whole data loading part. It would need to go over all of it to gather all the tokens, and then lazily load only the batches you request. You’d need to add a proper torch.utils.Dataset subclass that does it. |
st48359 | Thanks. I did something that may sound stupid.
I removed data.cuda() from batchify() and added
data = data.cuda()
target = target.cuda()
in get_batch().
It seems to be running now, but I wonder if this is a quick (and correct) fix. |
st48360 | That is a relief! However, I am noticing a huge and growing amount of memory usage now, after running for 40+ hours. Is there anything that I am missing? Perhaps every time get_batch() is run, it creates Variables that are kept in memory after the batch is finished? |
st48361 | Is that CPU memory or GPU memory? Everything should get freed from time to time. Are you just running the example with that single modification? |
st48362 | I wrote something following your instructions, but it doesn't work for me.
Here is what I do:
def _load_hdf5_file(hdf5_file):
    f = h5py.File(hdf5_file, "r")
    data = []
    for key in f.keys():
        data.append(f[key])
    return tuple(data)

class HDF5Dataset(Dataset):
    def __init__(self, data_files):
        self.data_files = sorted(data_files)

    def __getitem__(self, index):
        return _load_hdf5_file(self.data_files[index])

    def __len__(self):
        return len(self.data_files)

train_set = HDF5Dataset(train_files)  # there is only one file in train_files, i.e. train_files = ["foo_1"]
train_loader = DataLoader(dataset=train_set,
                          batch_size=train_batch_size,
                          shuffle=True,
                          num_workers=2)
And during iteration, I got this error:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/util.py", line 274, in _run_finalizers
File "/usr/lib/python2.7/multiprocessing/util.py", line 207, in __call__
File "/usr/lib/python2.7/shutil.py", line 239, in rmtree
File "/usr/lib/python2.7/shutil.py", line 237, in rmtree
OSError: [Errno 24] Too many open files: '/tmp/pymp-Y6oJsO'
Process Process-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
File "/home/ts-yandixia01/.local/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 36, in _worker_loop
File "/usr/lib/python2.7/multiprocessing/queues.py", line 392, in put
File "/home/ts-yandixia01/.local/lib/python2.7/site-packages/torch/multiprocessing/queue.py", line 17, in send
File "/usr/lib/python2.7/pickle.py", line 224, in dump
File "/usr/lib/python2.7/pickle.py", line 286, in save
File "/usr/lib/python2.7/pickle.py", line 554, in save_tuple
File "/usr/lib/python2.7/pickle.py", line 286, in save
File "/usr/lib/python2.7/pickle.py", line 606, in save_list
File "/usr/lib/python2.7/pickle.py", line 639, in _batch_appends
File "/usr/lib/python2.7/pickle.py", line 286, in save
File "/usr/lib/python2.7/pickle.py", line 606, in save_list
File "/usr/lib/python2.7/pickle.py", line 639, in _batch_appends
File "/usr/lib/python2.7/pickle.py", line 286, in save
File "/usr/lib/python2.7/pickle.py", line 606, in save_list
File "/usr/lib/python2.7/pickle.py", line 639, in _batch_appends
File "/usr/lib/python2.7/pickle.py", line 286, in save
File "/usr/lib/python2.7/multiprocessing/forking.py", line 67, in dispatcher
File "/usr/lib/python2.7/pickle.py", line 401, in save_reduce
File "/usr/lib/python2.7/pickle.py", line 286, in save
File "/usr/lib/python2.7/pickle.py", line 554, in save_tuple
File "/usr/lib/python2.7/pickle.py", line 286, in save
File "/usr/lib/python2.7/multiprocessing/forking.py", line 66, in dispatcher
File "/home/ts-yandixia01/.local/lib/python2.7/site-packages/torch/multiprocessing/reductions.py", line 116, in reduce_storage
File "/usr/lib/python2.7/multiprocessing/reduction.py", line 145, in reduce_handle
OSError: [Errno 24] Too many open files
And the program never terminates.
Did I do anything wrong? Thanks
By the way, I don't know if it is appropriate to ask here, but how can I post Python-style code here like you did? Thanks. |
st48363 | It seems that your system allows only for a small number of open files. Can you try adding torch.multiprocessing.set_sharing_strategy('file_system') at the top of your script and try again?
Just append python after the three backticks 133 to add syntax highlighting. |
st48364 | I added the line, and I got this error:
Process Process-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "/home/ts-yandixia01/.local/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 36, in _worker_loop
data_queue.put((idx, samples))
File "/usr/lib/python2.7/multiprocessing/queues.py", line 392, in put
return send(obj)
File "/home/ts-yandixia01/.local/lib/python2.7/site-packages/torch/multiprocessing/queue.py", line 17, in send
ForkingPickler(buf, pickle.HIGHEST_PROTOCOL).dump(obj)
File "/usr/lib/python2.7/pickle.py", line 224, in dump
self.save(obj)
File "/usr/lib/python2.7/pickle.py", line 286, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python2.7/pickle.py", line 554, in save_tuple
save(element)
File "/usr/lib/python2.7/pickle.py", line 286, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python2.7/pickle.py", line 606, in save_list
self._batch_appends(iter(obj))
File "/usr/lib/python2.7/pickle.py", line 639, in _batch_appends
save(x)
File "/usr/lib/python2.7/pickle.py", line 286, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python2.7/pickle.py", line 606, in save_list
self._batch_appends(iter(obj))
File "/usr/lib/python2.7/pickle.py", line 639, in _batch_appends
save(x)
File "/usr/lib/python2.7/pickle.py", line 286, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python2.7/pickle.py", line 606, in save_list
self._batch_appends(iter(obj))
File "/usr/lib/python2.7/pickle.py", line 639, in _batch_appends
save(x)
File "/usr/lib/python2.7/pickle.py", line 286, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python2.7/multiprocessing/forking.py", line 67, in dispatcher
self.save_reduce(obj=obj, *rv)
File "/usr/lib/python2.7/pickle.py", line 401, in save_reduce
save(args)
File "/usr/lib/python2.7/pickle.py", line 286, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python2.7/pickle.py", line 554, in save_tuple
save(element)
File "/usr/lib/python2.7/pickle.py", line 286, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python2.7/multiprocessing/forking.py", line 66, in dispatcher
rv = reduce(obj)
File "/home/ts-yandixia01/.local/lib/python2.7/site-packages/torch/multiprocessing/reductions.py", line 109, in reduce_storage
metadata = storage._share_filename_()
RuntimeError: $ Torch: unable to mmap memory: you tried to mmap 0GB. at /home/soumith/local/builder/wheel/pytorch-src/torch/lib/TH/THAllocator.c:317
Thanks |
st48365 | That was CPU memory that I noticed using htop. Besides the above modification, I used another corpus which is about 9 GB. Anyway, it finished without OOM, so I guess it's fine. |
st48366 | That was probably an out of memory error. If the data or the code is public (or if you could just isolate the data loading into a separate script), I could run it myself and make sure it doesn’t leak. But we have some tests for that. |
st48367 | Hi,
I just wrote a simple demo code to reproduce the error. The code is here:
GitHub: xiayandi/Pytorch_text_classification (a simple implementation of CNN-based text classification in PyTorch)
The data is randomly generated, but everything is the same as mine except the actual values.
You could run main.py to reproduce the error.
I don’t know if this will be useful, my system is:
Ubuntu 16.04, TitanX, cuda 8.0, pip installed pytorch
Thanks! |