id | text
---|---
st46768 | After I updated to libtorch 1.5 (C++, CUDA 10.1), torch::cuda::is_available() keeps returning false, and I couldn’t find the reason. Before that, I was using libtorch 1.3 (C++, CUDA 10.1), and everything was OK. |
st46769 | Solved by peterjc123 in post #4
You’ll need to pass an additional argument to the linker.
-INCLUDE:?warp_size@cuda@at@@YAHXZ |
st46770 | You’ll need to pass an additional argument to the linker.
-INCLUDE:?warp_size@cuda@at@@YAHXZ |
st46771 | I see the same issue with Linux. Is there an equivalent linker magic flag to pass ? |
st46772 | Hello,
I have the same problem with Linux on my NVIDIA Jetson AGX Xavier, JetPack 4.4 and libtorch 1.6.
Did you solve this issue on your Linux?
Did you find the “magic flag”?
Thanks |
st46773 | For example, I have a batch of shape [batch_size, lens, features] for x and [batch_size,] for x_lens.
So the masking should be like this:
x = torch.rand(32, 1000, 512)
lens = (torch.rand(32,)*1000).long()
power=3
masks=10
x = x.clone()
batch_size, length, features = x.size()
T = lens // power // masks
mean = (x.detach().sum(1) / lens.unsqueeze(-1)).unsqueeze(1).expand(-1, length, -1)
mask_start = (torch.rand((batch_size, masks), device=lens.device)*(lens-T).unsqueeze(1)).long()
mask_end = mask_start + (torch.rand((batch_size, masks), device=lens.device)*T.unsqueeze(1)).long()
mask = torch.arange(0, length, device=lens.device).unsqueeze(0).expand(batch_size, -1)
# here im stuck
mask = (mask >= mask_start) & (mask < mask_end)
x[mask] = mean[mask]
I’m getting
RuntimeError: The size of tensor a (1000) must match the size of tensor b (10) at non-singleton dimension 1
I can’t figure out how to apply all these masks. |
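Not an answer from the thread, but a hedged sketch of one way to broadcast the per-mask comparisons (assuming the goal is to replace any position covered by at least one of the masks), continuing the variables defined in the code above:

mask = torch.arange(0, length, device=lens.device).view(1, 1, length)            # (1, 1, length)
mask = (mask >= mask_start.unsqueeze(-1)) & (mask < mask_end.unsqueeze(-1))      # (batch, masks, length)
mask = mask.any(dim=1)                                                           # (batch, length)
x[mask] = mean[mask]                                                             # fill masked positions with the mean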
st46774 | Hello! I keep getting an error when attempting to save my CNN model. I do not have this problem for smaller images like the 32x32 CIFAR dataset; however, my images are 448x672 (note: a multiple of 224). I am using the model for a regression task. Any help would be much appreciated!
python 3.7.5
pytorch 1.6.0
Anaconda
Here is my model:
class Network_CNN_batchNorm(nn.Module):
    def __init__(self):
        super(Network_CNN_batchNorm, self).__init__()
        # 3x448x672 input image (RGB)
        self.layer1 = nn.Sequential(
            # input is 3 channels (RGB) - first parameter
            # 64 filters of kernel size 3x3; padding = (kernel_size - 1)/2 keeps the spatial size
            nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1),
            # max pooling with stride=2 makes output image 224x336
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.BatchNorm2d(64))
        self.layer2 = nn.Sequential(
            # 2nd layer uses 128 channels (filters) of 3x3
            nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.BatchNorm2d(128))
        # 3rd layer uses 128 channels (filters) of 3x3
        # output feature map is still 224x336
        self.layer3 = nn.Sequential(
            nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.BatchNorm2d(128))
        # Average Pooling Layer, 112x168 output
        self.avgP1 = nn.AvgPool2d(kernel_size=3, stride=2, padding=1)
        # Fully connected layers
        self.fc1 = nn.Linear(112 * 168 * 128, 1000)
        self.fc2 = nn.Linear(1000, 10)  # 10 outputs

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = self.layer3(out)
        out = self.avgP1(out)
        out = out.reshape(out.size(0), -1)  # flatten
        out = self.fc1(out)
        out = self.fc2(out)
        return out
Note that I can see my training and validation loss decrease over multiple epochs, so training the model appears to be fine. However, I do notice that when training on my CPU the memory usage is around 30-40 GB, which seems excessive.
The code for saving the model is shown, and I can confirm that the path is OK since it works with smaller image sizes.
torch.save(model.state_dict(), os.path.join(Model_Path, 'epoch-{}.pth'.format(epoch)))
The error I am getting is as follows:
File "C:\my.py", line 526, in <module>
model_trained, t_loss, v_loss = train_model(model, criterion, optimizer, trainloader, testloader, num_epochs)
File "C:\my.py", line 356, in train_model
torch.save(model.state_dict(), os.path.join(Model_Path, 'epoch-{}.pth'.format(epoch)))
File "C:\Users\...\anaconda3\envs\TF2.0\lib\site-packages\torch\serialization.py", line 364, in save
_save(obj, opened_zipfile, pickle_module, pickle_protocol)
File "C:\Users\...\anaconda3\envs\TF2.0\lib\site-packages\torch\serialization.py", line 477, in _save
zip_file.write_record(name, storage.data_ptr(), num_bytes)
TypeError: write_record(): incompatible function arguments. The following argument types are supported:
1. (self: torch._C.PyTorchFileWriter, arg0: str, arg1: str, arg2: int) -> None
2. (self: torch._C.PyTorchFileWriter, arg0: str, arg1: int, arg2: int) -> None
Invoked with: <torch._C.PyTorchFileWriter object at 0x0000026AD2154D30>, 'data/2657683100064', 2657910136960, -7546077184 |
st46775 | Hi,
We made some fixes for this recently. Does it still happen if you use the nightly build? |
st46776 | Thank you for responding. I installed pytorch-nightly (1.8.0.dev20201113) and still have the same error when trying to save the model. |
st46777 | Ok, thanks!
Can you check the size of all the tensors that are in your model state dict and report them here? I guess one of them is going to be huge?
Note that if you have a case where you can reproduce this with just
t = torch.rand(your_tensor_size)
torch.save(t, "my_path.pth")
it would be super helpful, and we should open an issue on GitHub.
From afar, it looks like the issue is an integer overflow because some of your objects are too big. But if you have a simple repro, we can verify that! |
st46778 | Thank you. Is this what you are asking for?
for param_tensor in model.state_dict():
    print(param_tensor, "\t", model.state_dict()[param_tensor].size())
Model's state_dict:
layer1.0.weight torch.Size([64, 3, 3, 3])
layer1.0.bias torch.Size([64])
layer1.2.weight torch.Size([64])
layer1.2.bias torch.Size([64])
layer1.2.running_mean torch.Size([64])
layer1.2.running_var torch.Size([64])
layer1.2.num_batches_tracked torch.Size([])
layer2.0.weight torch.Size([128, 64, 3, 3])
layer2.0.bias torch.Size([128])
layer2.2.weight torch.Size([128])
layer2.2.bias torch.Size([128])
layer2.2.running_mean torch.Size([128])
layer2.2.running_var torch.Size([128])
layer2.2.num_batches_tracked torch.Size([])
layer3.0.weight torch.Size([128, 128, 3, 3])
layer3.0.bias torch.Size([128])
layer3.2.weight torch.Size([128])
layer3.2.bias torch.Size([128])
layer3.2.running_mean torch.Size([128])
layer3.2.running_var torch.Size([128])
layer3.2.num_batches_tracked torch.Size([])
fc1.weight torch.Size([1000, 2408448])
fc1.bias torch.Size([1000])
fc2.weight torch.Size([10, 1000])
fc2.bias torch.Size([10]) |
st46779 | Note: I reduced my image size by half in both dimensions:
transforms.Resize((224,336), interpolation=Image.NEAREST)
and was able to save the model. The saved model is 2.3 GB just at this image size! |
st46780 | I’m also able to increase the image size to 400x600 and save the model, at 7.5 GB in size. |
st46781 | So, running on my CPU gives me the flexibility to use 64 GB of RAM, but all four of my GPUs (2070S) have only 8 GB of memory each. I have a fairly simple CNN model with 3 layers, but the memory requirements are so large with 400x600 images that I cannot train it on my GPUs. I’m working on a regression problem and would prefer to maintain the resolution of my images. What is done in practice for larger image sizes and GPU memory limitations? With my current CPU, I would have to wait several days to train a model. |
st46782 | I think your first fully connected layer is a bit big, no? The weight size is torch.Size([1000, 2408448]), meaning the input feature size is more than 2 million!
I think you should reduce the size of that layer and it will help drastically with memory usage.
You can add extra pooling or striding in the last convs to reduce this size, e.g. as in the sketch below. |
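A hedged illustration of that suggestion (hypothetical layer names, not the poster's actual code): adaptive average pooling can shrink the 128x112x168 feature map to a fixed small size before the first linear layer, cutting its input features from ~2.4 million to a few thousand.

import torch
import torch.nn as nn

head = nn.Sequential(
    nn.AdaptiveAvgPool2d((7, 7)),   # 128x112x168 -> 128x7x7
    nn.Flatten(),
    nn.Linear(128 * 7 * 7, 1000),   # 6,272 inputs instead of 2,408,448
    nn.Linear(1000, 10),
)

x = torch.randn(2, 128, 112, 168)   # feature map after avgP1 in the model above
print(head(x).shape)                # torch.Size([2, 10])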
st46783 | I want to use amp in https://github.com/open-mmlab/mmcv/blob/master/mmcv/ops/roi_align.py.
I’m referring to https://pytorch.org/docs/stable/notes/amp_examples.html#functions-with-multiple-inputs-or-autocastable-ops and applying custom_fwd and custom_bwd (with no arguments) to forward and backward, respectively. I modified the code:
class RoIAlignFunction(Function):
    ... ...
    @staticmethod
    @custom_fwd
    def forward(ctx,
                input,
                rois,
                output_size,
                spatial_scale=1.0,
                sampling_ratio=0,
                pool_mode='avg',
                aligned=True):
        ctx.output_size = _pair(output_size)
        ctx.spatial_scale = spatial_scale
        ... ...

    @staticmethod
    @once_differentiable
    @custom_bwd
    def backward(ctx, grad_output):
        rois, argmax_y, argmax_x = ctx.saved_tensors
        ... ...
The code still reports an error:
File "/usr/local/python16/lib/python3.7/site-packages/mmcv/ops/roi_align.py", line 71, in forward
aligned=ctx.aligned)
RuntimeError: expected scalar type Half but found Float
What do custom_fwd and custom_bwd do, specifically?
This refers to issue https://github.com/pytorch/pytorch/issues/47906 |
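The thread does not answer this, but per the linked amp docs: custom_fwd with no arguments only makes forward run under the ambient autocast state, while custom_fwd(cast_inputs=torch.float32) casts incoming floating-point CUDA tensors to fp32 and disables autocast inside forward, so a kernel that supports a single dtype sees consistent inputs. A minimal sketch (toy op standing in for the RoIAlign kernel; assumes a CUDA device):

import torch
from torch.autograd import Function
from torch.cuda.amp import custom_fwd, custom_bwd

class Fp32OnlyOp(Function):
    # toy stand-in for a CUDA extension that only accepts float32
    @staticmethod
    @custom_fwd(cast_inputs=torch.float32)  # inputs arrive as fp32, autocast off inside
    def forward(ctx, x):
        assert x.dtype == torch.float32
        ctx.save_for_backward(x)
        return x * 2

    @staticmethod
    @custom_bwd  # backward runs under the same autocast state as forward
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * 2

x = torch.randn(4, device='cuda', requires_grad=True)
with torch.cuda.amp.autocast():
    y = Fp32OnlyOp.apply(x)  # elsewhere under autocast, x would be cast to fp16
y.sum().backward()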
st46784 | Hi there,
I’m trying to use a NN for classification into two classes. As this did not work with my dataset (constant prediction for each batch), I wrote a simpler version of the code, but still can’t find the problem.
Here’s a minimal version of the code:
class Model(nn.Module):
    def __init__(self, input_size, hidden_sizes_fc=[100, 2]):
        super().__init__()
        self.fc_list = nn.ModuleList([nn.Linear(input_size, hidden_sizes_fc[0])])
        for hidden_size_fc_ind in range(0, len(hidden_sizes_fc)-1):
            self.fc_list.append(nn.Linear(hidden_sizes_fc[hidden_size_fc_ind],
                                          hidden_sizes_fc[hidden_size_fc_ind+1]))

    def forward(self, x):
        relu = nn.ReLU()
        for i, FC in enumerate(self.fc_list):
            x = FC(x)
            x = relu(x)
        return x

def train_std_nn(net, train, val, epochs, loss_fn):
    optimiser = torch.optim.Adam(net.parameters(), lr=0.0001)
    train_losses_epochs = []
    val_score_epochs = []
    net.train()
    for epoch in trange(epochs):
        train_loss = 0.0
        total_computations = 0
        for X, Y in train:
            output = net(X)
            loss = loss_fn(output, Y)
            loss.backward()
            optimiser.step()
            train_loss += loss.item()
            total_computations += Y.shape[0]
        train_losses_epochs.append(train_loss / total_computations)
        for X_val, Y_val in val:
            output = net(X_val)
            top_p, top_class = torch.topk(output, 1, dim=1)
            pred = torch.flatten(top_class).detach().numpy()
            val_score_epochs.append(roc_auc_score(Y_val.numpy(), pred))
    return net, train_losses_epochs, val_score_epochs

epochs = 10
batch_size = 128
hidden_layers_size = [16, 2]
net = Model(input_size=11, hidden_sizes_fc=hidden_layers_size).double()
loss_fn = nn.CrossEntropyLoss()
aaa = torch.Tensor(np.random.rand(15, 11)).double()  # .type(torch.LongTensor)
bbb = torch.Tensor(np.random.randint(0, 2, (15))).type(torch.LongTensor)
net, train_losses_epochs, val_score_epochs = train_std_nn(net, [[aaa, bbb]], [[aaa, bbb]], epochs, loss_fn)
I’ve plotted some graphs of the training loss and validation score (area under the curve), but the model doesn’t seem to learn anything… The training loss does random stuff (mainly decreasing, but it depends on the run) and the AUC is always 0.5.
Thanks for the help! |
st46785 | I also tried
def forward(self, x):
    relu = nn.ReLU()
    sm = nn.Softmax(dim=1)
    for i, FC in enumerate(self.fc_list):
        x = FC(x)
        x = relu(x)
    x = sm(x)
    return x |
but that did not work either |
st46786 | You are still using relu for the output layer, try
def forward(self, x):
    relu = nn.ReLU()
    sm = nn.Softmax(dim=1)
    x = self.fc_list[0](x)
    x = relu(x)
    x = self.fc_list[1](x)
    x = sm(x)
    return x |
st46787 | Yeah sorry, what I meant is: I tried both
def forward(self, x):
    relu = nn.ReLU()
    sm = nn.Softmax(dim=1)
    for i, FC in enumerate(self.fc_list):
        x = FC(x)
        x = relu(x)
    x = sm(x)
    return x
and
def forward(self, x):
    relu = nn.ReLU()
    sm = nn.Softmax(dim=1)
    x = self.fc_list[0](x)
    x = relu(x)
    x = self.fc_list[1](x)
    x = sm(x)
    return x
neither is working |
st46788 | klory:
def forward(self, x):
    relu = nn.ReLU()
    sm = nn.Softmax(dim=1)
    x = self.fc_list[0](x)
    x = relu(x)
    x = self.fc_list[1](x)
    x = sm(x)
    return x
sorry my bad, but I think you forgot to call optimiser.zero_grad() before loss.backward() |
st46789 | nn.CrossEntropyLoss expects raw logits as the model output, so remove the softmax and relu and pass the output of the last linear layer to the loss function.
Also, as explained before, you are not zeroing out the gradients. |
st46790 | Thanks! The code now looks like this:
class Model(nn.Module):
    def __init__(self, input_size, hidden_sizes_fc=[100, 2]):
        super().__init__()
        self.fc_list = nn.ModuleList([nn.Linear(input_size, hidden_sizes_fc[0])])
        for hidden_size_fc_ind in range(0, len(hidden_sizes_fc)-1):
            self.fc_list.append(nn.Linear(hidden_sizes_fc[hidden_size_fc_ind],
                                          hidden_sizes_fc[hidden_size_fc_ind+1]))

    def forward(self, x):
        relu = nn.ReLU()
        x = self.fc_list[0](x)
        x = relu(x)
        x = self.fc_list[1](x)
        return x

def train_std_nn(net, train, val, epochs, loss_fn):
    optimiser = torch.optim.Adam(net.parameters(), lr=0.0001)
    train_losses_epochs = []
    val_score_epochs = []
    net.train()
    for epoch in trange(epochs):
        train_loss = 0.0
        total_computations = 0
        for X, Y in train:
            output = net(X)
            loss = loss_fn(output, Y)
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()
            train_loss += loss.item()
            total_computations += Y.shape[0]
        train_losses_epochs.append(train_loss / total_computations)
        for X_val, Y_val in val:
            output = net(X_val)
            top_p, top_class = torch.topk(output, 1, dim=1)
            pred = torch.flatten(top_class).detach().numpy()
            val_score_epochs.append(roc_auc_score(Y_val.numpy(), pred))
    return net, train_losses_epochs, val_score_epochs

epochs = 200
batch_size = 4
hidden_layers_size = [16, 2]
net = Model(input_size=11, hidden_sizes_fc=hidden_layers_size).double()
loss_fn = nn.CrossEntropyLoss()
aaa = torch.Tensor(np.random.rand(15, 11)).double()  # .type(torch.LongTensor)
bbb = torch.Tensor(np.random.randint(0, 2, (15))).type(torch.LongTensor)
net, train_losses_epochs, val_score_epochs = train_std_nn(net, [[aaa, bbb]], [[aaa, bbb]], epochs, loss_fn)
With e.g. the label vector [1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0], after 200 epochs (which I would consider enough to overfit the data), the prediction is still [1 0 1 1 1 1 1 1 1 1 1 1 1 1 1]. I ran the code a few times and always get very bad predictions. |
st46791 | I can perfectly overfit random samples using your code, so you might want to increase the learning rate to let it converge faster (it still converges with your lr of 1e-4, but takes more epochs):
class Model(nn.Module):
    def __init__(self, input_size, hidden_sizes_fc=[100, 2]):
        super().__init__()
        self.fc_list = nn.ModuleList([nn.Linear(input_size, hidden_sizes_fc[0])])
        for hidden_size_fc_ind in range(0, len(hidden_sizes_fc)-1):
            self.fc_list.append(nn.Linear(hidden_sizes_fc[hidden_size_fc_ind],
                                          hidden_sizes_fc[hidden_size_fc_ind+1]))

    def forward(self, x):
        relu = nn.ReLU()
        x = self.fc_list[0](x)
        x = relu(x)
        x = self.fc_list[1](x)
        return x

hidden_layers_size = [16, 2]
net = Model(input_size=11, hidden_sizes_fc=hidden_layers_size)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

data = torch.rand(15, 11)
target = torch.randint(0, 2, (15,))

for epoch in range(1000):
    optimizer.zero_grad()
    output = net(data)
    loss = loss_fn(output, target)
    loss.backward()
    optimizer.step()
    preds = torch.argmax(output, dim=1)
    print('epoch {}, loss {:.3f}, acc {}'.format(
        epoch, loss.item(), (preds==target).float().mean())) |
st46792 | Hello, everyone.
Newbie here, trying to learn PyTorch. Recently I’m using an LSTM or GRU to do failure prediction. This is a binary classification problem, so I use BCEWithLogitsLoss as the loss function, but it fails with an error.
%matplotlib notebook
import pandas as pd
import os
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
import torch
from torch import nn
data = pd.read_csv('D:/A_PHM_Data/data/train/002/00a22713-68d5-372a-a009-b948ce453442.csv', header=0)
data = data.iloc[:, :-3]
value = data.values.astype(float)
train_x = value[:300].reshape(1, -1, 72)
train_y = np.zeros((train_x.shape[1],1))
train_x = torch.from_numpy(train_x)
train_y = torch.from_numpy(train_y)
class LSTM(nn.Module):
    def __init__(self):
        super(LSTM, self).__init__()
        self.lstm = nn.LSTM(input_size=72, hidden_size=100, batch_first=True)
        self.out = nn.Linear(100, 1)

    def forward(self, x, h_state, c_state):
        r_out, (h_state, c_state) = self.lstm(x, (h_state, c_state))
        output = self.out(r_out)
        # output = torch.sigmoid(output)
        return output

    def InitHidden(self):
        h_state = torch.zeros(1, 1, 100)
        c_state = torch.zeros(1, 1, 100)
        return h_state, c_state
device = torch.device('cuda')
model = LSTM().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
loss = torch.nn.BCEWithLogitsLoss()
h_state, c_state = model.InitHidden()
h_state, c_state = h_state.to(device), c_state.to(device)
train_x = train_x.float().to(device)
train_y = train_y.float().to(device)
test_x = test_x.to(device)
test_y = test_y.to(device)
model.train()
for epoch in range(1000):
    output = model(train_x, h_state, c_state).squeeze(0)
    loss = loss(output, train_y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
TypeError Traceback (most recent call last)
in
      2 for epoch in range(1000):
      3     output = model(train_x, h_state, c_state).squeeze(0)
----> 4     loss = loss(output, train_y)
      5     optimizer.zero_grad()
      6     loss.backward()
TypeError: 'Tensor' object is not callable |
st46793 | You are overwriting the loss function loss with the loss value in:
loss = loss(output, train_y)
Use criterion = nn.BCEWithLogitsLoss() or use another name for the loss value and it should work. |
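For clarity, a minimal sketch of that rename using the thread's own variables (model, train_x, train_y, h_state, c_state, optimizer):

criterion = torch.nn.BCEWithLogitsLoss()
for epoch in range(1000):
    output = model(train_x, h_state, c_state).squeeze(0)
    loss = criterion(output, train_y)  # 'loss' no longer shadows the loss function
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()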
st46794 | File "d:\anaconda3\lib\site-packages\fire\core.py", line 138, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "d:\anaconda3\lib\site-packages\fire\core.py", line 468, in _Fire
target=component.__name__)
File "d:\anaconda3\lib\site-packages\fire\core.py", line 672, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "main.py", line 103, in train
optimizer.step()
File "d:\anaconda3\lib\site-packages\torch\autograd\grad_mode.py", line 15, in decorate_context
return func(*args, **kwargs)
File "d:\anaconda3\lib\site-packages\torch\optim\adam.py", line 107, in step
denom = (exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(group['eps'])
I set the batch size to 1, but CUDA still runs out of memory… I don’t know why.
This is my training loop:
for epoch in range(opt.max_epoch):
    loss_mean = 0.0
    loss_val = 0.0
    loss_meter.reset()
    for ii, (data, label) in tqdm(enumerate(train_dataloader), total=len(train_data)):
        # train model
        input = Variable(data)
        input = input.float()
        target = Variable(label)
        if opt.use_gpu:
            input = input.cuda()
            target = target.cuda()
        optimizer.zero_grad()
        score = model(input)
        loss = criterion(score, target) / (400 * 190)
        loss.backward()
        optimizer.step()
The error occurs in optimizer.step() during the first epoch… |
st46795 | Solved by ptrblck in post #16
Yes, most likely. If neither a batch size of 1 can run nor you are able to use checkpointing, you could try to use model sharding, i.e. executing separate parts of the model on different GPUs.
If also this needs too much memory, you would have to change the model architecture and make sure the GPU … |
st46796 | I had the same problem, and I solved it by using with torch.no_grad():
For example,
# train model
input = Variable(data)
input = input.float()
target = Variable(label)
if opt.use_gpu:
    input = input.cuda()
    target = target.cuda()
optimizer.zero_grad()
with torch.no_grad():
    score = model(input)
    loss = criterion(score, target) / (400 * 190)
loss = Variable(loss, requires_grad=True)
loss.backward()
optimizer.step() |
st46797 | Wrapping the forward pass in a torch.no_grad() block will not store any intermediate activations, which would be needed to compute the gradients during the backward pass.
You would get an error in loss.backward(), but you are avoiding it by detaching the loss and setting requires_grad=True in:
loss = Variable(loss, requires_grad = True)
However, your training is still broken and the model will not be updated.
torch.no_grad() should only be used during evaluation and testing, if no gradients should be computed and no parameter updates are needed.
CC @tianle-BigRice |
st46798 | Yeah, I didn’t use torch.no_grad in training before, and I used torch.no_grad with model.eval, where this error does not occur. But this time I don’t know why the error occurs in training. Even when I set the batch size to 1, this error still occurs; I don’t know why. |
st46799 | If you are not storing the loss directly in e.g. a list or any other tensor, which is attached to the computation graph, your model might just use too much memory.
Are you seeing the OOM issue in the first iteration(s) or later in training?
In the former case, you could try to trade compute for memory via torch.utils.checkpoint, while the latter case points towards storing tensors without detaching them. |
st46800 | Yeah, I’m seeing the OOM issue in the first iteration(s); the error occurred at optimizer.step(). And I am storing the loss (but the issue remains):
score = model(input)
loss = criterion(score, target)
running_loss = loss.item()
loss.backward()
optimizer.step()  # optimizer = t.optim.Adam(model.parameters(), lr=lr)
I found that this error comes from score = model(input). So should I add torch.utils.checkpoint to the model? |
st46801 | If I use this code in training:
with t.no_grad():
    score = model(input)
will the parameters not be updated?
And I used this code:
loss = checkpoint(criterion, score, target)
but it didn’t work.
Thank you very much for your continued help |
st46802 | This is my error:
File "D:\python\RDANET\main.py", line 197, in <module>
    fire.Fire(train)
File "d:\anaconda3\lib\site-packages\fire\core.py", line 138, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "d:\anaconda3\lib\site-packages\fire\core.py", line 463, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
File "d:\anaconda3\lib\site-packages\fire\core.py", line 672, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
File "D:\python\RDANET\main.py", line 112, in train
    optimizer.step()
File "d:\anaconda3\lib\site-packages\torch\autograd\grad_mode.py", line 26, in decorate_context
    return func(*args, **kwargs)
File "d:\anaconda3\lib\site-packages\torch\optim\adam.py", line 91, in step
    state['exp_avg_sq'] = torch.zeros_like(p, memory_format=torch.preserve_format)
RuntimeError: CUDA out of memory. Tried to allocate 1.35 GiB (GPU 0; 15.92 GiB total capacity; 12.89 GiB already allocated; 1.22 GiB free; 13.67 GiB reserved in total by PyTorch)
Is this the "storing tensors without detaching them" you mentioned?
Thank you very much.
And this code is my model’s forward (image_size = [400, 190], block_nums = 8, num_classes = 1000):
x = self.maxpool(F.leaky_relu(self.conv1_1(x)))
x = self.maxpool(F.leaky_relu(self.conv2_1(x)))
x = self.maxpool(F.leaky_relu(self.conv3_2(F.leaky_relu(self.conv3_1(x)))))
x = self.maxpool(F.leaky_relu(self.conv4_2(F.leaky_relu(self.conv4_1(x)))))
x = self.maxpool(F.leaky_relu(self.conv5_2(F.leaky_relu(self.conv5_1(x)))))
x = self.dropout(x)
x = x.view(x.size(0), 512 * 12 * 5)
x = F.leaky_relu(self.fc1(x))
x = F.leaky_relu(self.fc2(x))
x = x.reshape(x.size(0), -1, int(self.image_size[0]/2), int(self.image_size[1]/2))
# print('x:', x.size())
x = self.res1(x)
out = self.res2(x)
out = self.res3(out)
out = out + x
out = self.res4(out)
# print('res4:', out.size())
# shuffle
# print(out.shape)
out = self.shuffle(out)
# print(out.shape)
# print('shuffle:', out.size())
out = self.res5(out)
# print(out.shape)
out = np.squeeze(out)
# print(out.shape)
return out |
st46803 | tianle-BigRice:
If I use this code in training:
with t.no_grad():
    score = model(input)
will the parameters not be updated?
Yes, the parameters will not get any gradients and thus the optimizer will not update them.
tianle-BigRice:
And I used this code:
loss = checkpoint(criterion, score, target)
but it didn’t work.
You should wrap blocks of the model into a checkpoint. You can find an older tutorial here. |
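A minimal self-contained sketch of what wrapping a block in a checkpoint looks like (not the poster's model): the block's intermediate activations are recomputed during backward instead of being stored, trading compute for memory.

import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

block = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))
x = torch.randn(8, 64, requires_grad=True)  # input must require grad for checkpointing
y = checkpoint(block, x)                    # activations inside 'block' are not stored
y.sum().backward()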
st46804 | I wrapped blocks of the model into checkpoints according to the tutorial, but it reports errors like crazy.
My code:
def conv_lrelu(in_ch, out_ch, ker_sz, pad):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, ker_sz, padding=pad, bias=False),
                         nn.LeakyReLU())

def seg1(self, x):
    x = self.layer1(x)
    x = self.maxpool(x)
    return x

def fc(self, x):
    x = self.dropout(x)
    x = x.view(x.size(0), 512 * 12 * 5)
    x = F.leaky_relu(self.fc1(x))
    x = F.leaky_relu(self.fc2(x))
    x = x.view(x.size(0), -1, int(self.image_size[0]/2), int(self.image_size[1]/2))
    return x

def EDSR(self, x):
    x = self.res1(x)
    out = self.res2(x)
    out = self.res3(out)
    out = out + x
    out = self.res4(out)
    out = self.shuffle(out)
    out = self.res5(out)
    out = np.squeeze(out)
    return out

x = checkpoint(self.seg1, x)
x = checkpoint(self.seg2, x)
x = checkpoint(self.seg3, x)
x = checkpoint(self.seg4, x)
x = checkpoint(self.seg5, x)
x = checkpoint(self.fc, x)
out = checkpoint(self.EDSR, x)
In addition, no matter how I adjust the batch size, there is always a CUDA out of memory error. But I have 32 GB of memory, and the program ran normally on this computer before without any CUDA out of memory problem. I really don’t know what went wrong, or why it worked well before and can’t run now. |
st46805 | tianle-BigRice:
But I have memory 32G.
The error message claims your device has 16GB, so you might be using the wrong device? |
st46806 | I have two GPUs; GPU 0 has 16 GB and GPU 1 has 16 GB.
What I am confused about is that the program used to run normally, but now there is always a CUDA out of memory problem. |
st46807 | I just tested with another dataset, and the program runs normally. This is the model code for the other dataset:
input = 128*128, image_size = [160,160], block_nums = 10, num_classes = 1000
x = self.maxpool(F.leaky_relu(self.conv1_1(x)))
x = self.maxpool(F.leaky_relu(self.conv2_1(x)))
x = self.maxpool(F.leaky_relu(self.conv3_2(F.leaky_relu(self.conv3_1(x)))))
x = self.maxpool(F.leaky_relu(self.conv4_2(F.leaky_relu(self.conv4_1(x)))))
x = self.maxpool(F.leaky_relu(self.conv5_2(F.leaky_relu(self.conv5_1(x)))))
x = self.dropout(x)
x = x.view(x.size(0), 512 * 4 * 4)
x = F.leaky_relu(self.fc1(x))#self.fc1 = nn.Linear(512 * 4 * 4, 6400)
x = F.leaky_relu(self.fc2(x))#self.fc2 = nn.Linear(6400, 6400)
x = x.reshape(x.size(0),-1, int(self.image_size[0]/2),int(self.image_size[1]/2))
x = self.res1(x)
out = self.res2(x)
out = self.res3(out)
out = out + x
out = self.res4(out)
out = self.shuffle(out)
out = self.res5(out)
out = np.squeeze(out)
This is the code for the new dataset:
input = 160*384, image_size = [400,190], block_nums = 8, num_classes = 1000
x = self.maxpool(F.leaky_relu(self.conv1_1(x)))
x = self.maxpool(F.leaky_relu(self.conv2_1(x)))
x = self.maxpool(F.leaky_relu(self.conv3_2(F.leaky_relu(self.conv3_1(x)))))
x = self.maxpool(F.leaky_relu(self.conv4_2(F.leaky_relu(self.conv4_1(x)))))
x = self.maxpool(F.leaky_relu(self.conv5_2(F.leaky_relu(self.conv5_1(x)))))
x = self.dropout(x)
x = x.view(x.size(0), 512 * 12 * 5)
x = F.leaky_relu(self.fc1(x))#self.fc1 = nn.Linear(512 * 12 * 5, 19000)
x = F.leaky_relu(self.fc2(x))#self.fc2 = nn.Linear(19000, 19000)
x = x.reshape(x.size(0),-1, int(self.image_size[0]/2), int(self.image_size[1]/2))
x = self.res1(x)
out = self.res2(x)
out = self.res3(out)
out = out + x
out = self.res4(out)
out = self.shuffle(out)
out = self.res5(out)
out = np.squeeze(out)
I don’t understand why the new dataset can’t work properly. Is it because the input and output are too large? But I also reduced the batch size and still can’t run it. |
st46808 | tianle-BigRice:
I don’t understand why the new dataset can’t work properly. Is it because the input and output are too large? But I also reduced the batch size and still can’t run it.
Yes, most likely. If neither a batch size of 1 can run nor you are able to use checkpointing, you could try model sharding, i.e. executing separate parts of the model on different GPUs.
If this also needs too much memory, you would have to change the model architecture and make sure the GPU requirement meets the available device memory. |
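A hedged sketch of model sharding (hypothetical module, assumes two visible GPUs; not the poster's model): run the convolutional part on one device and the large fully connected head on another, moving activations between them in forward.

import torch
import torch.nn as nn

class ShardedNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.LeakyReLU(), nn.MaxPool2d(2),
        ).to('cuda:0')
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 80 * 95, 1000),
        ).to('cuda:1')

    def forward(self, x):
        x = self.features(x.to('cuda:0'))  # conv stack on GPU 0
        return self.head(x.to('cuda:1'))   # big linear layer on GPU 1

net = ShardedNet()
out = net(torch.randn(1, 3, 160, 190))     # output lives on cuda:1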
st46809 | See latest post for CUDA error: all CUDA-capable devices are busy or unavailable
Resolved issue
I was running Pytorch without issues using GTX 1080 Ti. I recently obtained a RTX3090, and had to make appropriate updates on nvidia drivers for Ampere architecture support. However, I started getting errors when trying to put variables into GPU with .cuda(), and torch.cuda.is_available() returns False. See below.
The same error also occurs on a separate (new) machine with a Quadro RTX 5000, leading me to speculate this could be a setup error. However, I do not know the commonalities between the two machines.
Machines experiencing the same errors
RTX3090
Debian Testing
nvidia-driver: 455.38, from Debian experimental
nvidia-cuda-toolkit: 11.0.3-2, from Debian testing
Quadro RTX5000
Debian Testing (VM, vfio passthrough)
nvidia-driver: 450.80, from Debian testing
nvidia-cuda-toolkit: 11.0.3-2, from Debian testing
Please let me know if you have any suggestions on troubleshooting this issue.
Thanks
Following results are from the RTX3090 machine
Miniconda env
$ python3 -c 'import torch; print(torch.cuda.is_available())'
/home/user/dev/miniconda3/lib/python3.8/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at /opt/conda/conda-bld/pytorch_1603729096996/work/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
False
$ python3 -c 'import torch; torch.rand(3).cuda()'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/user/dev/miniconda3/lib/python3.8/site-packages/torch/cuda/__init__.py", line 172, in _lazy_init
torch._C._cuda_init()
RuntimeError: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero.
Miniconda installation
$ conda install pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch
Pip env
$ python3 -c 'import torch; print(torch.cuda.is_available())'
/home/user/.local/lib/python3.8/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
False
$ python3 -c 'import torch; torch.rand(3).cuda()'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/user/.local/lib/python3.8/site-packages/torch/cuda/__init__.py", line 172, in _lazy_init
torch._C._cuda_init()
RuntimeError: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero.
Pip installation
Attempted to use torch nightly 1.8, with same error
$ python3 -m pip install torch==1.7.0+cu110 torchvision==0.8.1+cu110 torchaudio===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
System specs
$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux bullseye/sid
Release: testing
Codename: bullseye
$ nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 455.38 Driver Version: 455.38 CUDA Version: 11.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce RTX 3090 On | 00000000:01:00.0 On | N/A |
| 0% 37C P8 33W / 350W | 282MiB / 24245MiB | 16% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Wed_Jul_22_19:09:09_PDT_2020
Cuda compilation tools, release 11.0, V11.0.221
Build cuda_11.0_bu.TC445_37.28845127_0
$ apt list --installed | grep nvidia-*
glx-alternative-nvidia/testing,unstable,now 1.2.0 amd64 [installed,automatic]
libegl-nvidia0/experimental,now 455.38-1 amd64 [installed,automatic]
libegl-nvidia0/experimental,now 455.38-1 i386 [installed,automatic]
libgl1-nvidia-glvnd-glx/experimental,now 455.38-1 amd64 [installed,automatic]
libgl1-nvidia-glvnd-glx/experimental,now 455.38-1 i386 [installed,automatic]
libgles-nvidia1/experimental,now 455.38-1 amd64 [installed,automatic]
libgles-nvidia1/experimental,now 455.38-1 i386 [installed,automatic]
libgles-nvidia2/experimental,now 455.38-1 amd64 [installed,automatic]
libgles-nvidia2/experimental,now 455.38-1 i386 [installed,automatic]
libglx-nvidia0/experimental,now 455.38-1 amd64 [installed,automatic]
libglx-nvidia0/experimental,now 455.38-1 i386 [installed,automatic]
libnvidia-cfg1/experimental,now 455.38-1 amd64 [installed,automatic]
libnvidia-compiler/experimental,now 455.38-1 amd64 [installed,automatic]
libnvidia-eglcore/experimental,now 455.38-1 amd64 [installed,automatic]
libnvidia-eglcore/experimental,now 455.38-1 i386 [installed,automatic]
libnvidia-glcore/experimental,now 455.38-1 amd64 [installed,automatic]
libnvidia-glcore/experimental,now 455.38-1 i386 [installed,automatic]
libnvidia-glvkspirv/experimental,now 455.38-1 amd64 [installed,automatic]
libnvidia-glvkspirv/experimental,now 455.38-1 i386 [installed,automatic]
libnvidia-ml-dev/testing,unstable,now 11.0.3-2 amd64 [installed,automatic]
libnvidia-ml1/experimental,now 455.38-1 amd64 [installed,automatic]
libnvidia-ptxjitcompiler1/experimental,now 455.38-1 amd64 [installed,automatic]
libnvidia-ptxjitcompiler1/experimental,now 455.38-1 i386 [installed,automatic]
nvidia-alternative/experimental,now 455.38-1 amd64 [installed,automatic]
nvidia-cuda-dev/testing,unstable,now 11.0.3-2 amd64 [installed]
nvidia-cuda-gdb/testing,unstable,now 11.0.3-2 amd64 [installed]
nvidia-cuda-toolkit-doc/testing,testing,unstable,unstable,now 11.0.3-2 all [installed,automatic]
nvidia-cuda-toolkit/testing,unstable,now 11.0.3-2 amd64 [installed]
nvidia-driver-bin/experimental,now 455.38-1 amd64 [installed,automatic]
nvidia-driver-libs/experimental,now 455.38-1 amd64 [installed,automatic]
nvidia-driver-libs/experimental,now 455.38-1 i386 [installed,automatic]
nvidia-driver/experimental,now 455.38-1 amd64 [installed]
nvidia-egl-common/now 455.23.04-1 amd64 [installed,local]
nvidia-egl-icd/experimental,now 455.38-1 amd64 [installed,automatic]
nvidia-egl-icd/experimental,now 455.38-1 i386 [installed,automatic]
nvidia-installer-cleanup/testing,unstable,now 20151021+12 amd64 [installed]
nvidia-kernel-common/testing,unstable,now 20151021+12 amd64 [installed]
nvidia-kernel-dkms/experimental,now 455.38-1 amd64 [installed,automatic]
nvidia-kernel-support/experimental,now 455.38-1 amd64 [installed,automatic]
nvidia-legacy-check/now 455.23.04-1 amd64 [installed,local]
nvidia-modprobe/experimental,now 455.23.04-1 amd64 [installed,automatic]
nvidia-opencl-common/now 455.23.04-1 amd64 [installed,local]
nvidia-opencl-dev/testing,unstable,now 11.0.3-2 amd64 [installed]
nvidia-opencl-icd/experimental,now 455.38-1 amd64 [installed,automatic]
nvidia-openjdk-8-jre/testing,unstable,now 9.+8u252-b09-1~deb9u1~11.0.3-2 amd64 [installed,automatic]
nvidia-persistenced/testing,unstable,now 450.57-1 amd64 [installed]
nvidia-profiler/testing,unstable,now 11.0.3-2 amd64 [installed,automatic]
nvidia-settings/testing,unstable,now 450.80.02-1 amd64 [installed]
nvidia-smi/experimental,now 455.38-1 amd64 [installed,automatic]
nvidia-support/testing,unstable,now 20151021+12 amd64 [installed]
nvidia-vdpau-driver/experimental,now 455.38-1 amd64 [installed,automatic]
nvidia-visual-profiler/testing,unstable,now 11.0.3-2 amd64 [installed,automatic]
nvidia-vulkan-common/now 455.23.04-1 amd64 [installed,local]
nvidia-vulkan-icd/experimental,now 455.38-1 amd64 [installed,automatic]
nvidia-vulkan-icd/experimental,now 455.38-1 i386 [installed,automatic]
nvidia-xconfig/testing,unstable,now 450.66-1 amd64 [installed]
xserver-xorg-video-nvidia/experimental,now 455.38-1 amd64 [installed]
Testing nvcc
I’m not an expert in CUDA, but I copied a hello-world program and it ran without errors:
//hello.cu
// This is the REAL "hello world" for CUDA!
// It takes the string "Hello ", prints it, then passes it to CUDA with an array
// of offsets. Then the offsets are added in parallel to produce the string "World!"
// By Ingemar Ragnemalm 2010
#include <stdio.h>
const int N = 16;
const int blocksize = 16;
__global__
void hello(char *a, int *b)
{
    a[threadIdx.x] += b[threadIdx.x];
}

int main()
{
    char a[N] = "Hello \0\0\0\0\0\0";
    int b[N] = {15, 10, 6, 0, -11, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};
    char *ad;
    int *bd;
    const int csize = N*sizeof(char);
    const int isize = N*sizeof(int);

    printf("%s", a);

    cudaMalloc( (void**)&ad, csize );
    cudaMalloc( (void**)&bd, isize );
    cudaMemcpy( ad, a, csize, cudaMemcpyHostToDevice );
    cudaMemcpy( bd, b, isize, cudaMemcpyHostToDevice );

    dim3 dimBlock( blocksize, 1 );
    dim3 dimGrid( 1, 1 );
    hello<<<dimGrid, dimBlock>>>(ad, bd);
    cudaMemcpy( a, ad, csize, cudaMemcpyDeviceToHost );
    cudaFree( ad );
    cudaFree( bd );

    printf("%s\n", a);
    return EXIT_SUCCESS;
}
# nvcc hello.cu -o hello
# ./hello
Hello Hello
Setting envs
Executing the following prior to importing torch does not resolve the errors:
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0" |
st46810 | The error
CUDA initialization: CUDA unknown error
is unfortunately not very helpful.
Could you check dmesg for any XID error codes and post them here?
Also, could you check, if docker containers with CUDA11 and PyTorch work fine on your machine? |
st46811 | Hi,
I had an issue on RTX2060 where cuda was not available.
I reinstalled it and it worked fine. Mine was a dependency issue with tensorflow, as tensorflow-gpu runs on cuda11.
Please make sure that you have installed cuda correctly, and also check with tensorflow-gpu whether cuda is running fine.
Hope it helps.
Thanks |
st46812 | Thanks ptrblck, granth_jain.
When I investigated dmesg:
# dmesg | grep "NVRM"
[ 9.976755] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 455.38 Thu Oct 22 06:06:59 UTC 2020
I noticed that if I run the torch methods which error out, I get the following:
[43613.854296] nvidia_uvm: module uses symbols from proprietary module nvidia, inheriting taint.
[43613.854575] nvidia_uvm: Unknown symbol radix_tree_preloads (err -2)
This was caused by NVIDIA incompatibility with kernel 5.9. I downgraded from 5.9 to 5.8, and the errors are resolved.
I applied the fixes to both computers and the errors are resolved.
However, my Quardo RTX 5000 machine is encountering another error, where
$ python3 -c 'import torch; torch.randn(1).to(0)'
Traceback (most recent call last):
File "<string>", line 1, in <module>
RuntimeError: CUDA error: all CUDA-capable devices are busy or unavailable
I verified that no process is using the GPU,
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 455.38 Driver Version: 455.38 CUDA Version: 11.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Quadro RTX 5000 On | 00000000:04:00.0 Off | Off |
| 33% 26C P8 6W / 230W | 1MiB / 16125MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
and the compute mode is in default, not exclusive,
$ nvidia-smi -a | grep Compute
Compute Mode : Default
This is running inside a kvm hypervisor with vfio passthrough, and I verified that nvidia driver is attached to the GPU
04:00.0 VGA compatible controller: NVIDIA Corporation TU104GL [Quadro RTX 5000] (rev a1)
Subsystem: Dell TU104GL [Quadro RTX 5000]
Kernel driver in use: nvidia
Kernel modules: nvidia
05:00.0 Audio device: NVIDIA Corporation TU104 HD Audio Controller (rev a1)
Subsystem: Dell TU104 HD Audio Controller
Kernel driver in use: snd_hda_intel
Kernel modules: snd_hda_intel
06:00.0 USB controller: NVIDIA Corporation TU104 USB 3.1 Host Controller (rev a1)
Subsystem: Dell TU104 USB 3.1 Host Controller
Kernel driver in use: xhci_hcd
Kernel modules: xhci_pci
07:00.0 Serial bus controller [0c80]: NVIDIA Corporation TU104 USB Type-C UCSI Controller (rev a1)
Subsystem: Dell TU104 USB Type-C UCSI Controller
I attempted to
Remove all nvidia-* packages and reinstall nvidia-driver (tried both 450.80 and 455.38) and nvidia-cuda-toolkit (11.0)
Reinstall Pytorch for Cuda 11.0 using miniconda and pip.
The server is headless, and no desktop environment was installed. Thus, there should be no graphics-based processes using the gpu.
Do you know what is causing this issue? Can this be caused by VFIO, although everything seems to be in order? Thanks!
More tests
I downloaded and compiled the script to test CUDA functionality. The output shows error code 201 for cMemGetInfo.
$ ./cuda_check
Found 1 device(s).
Device: 0
Name: Quadro RTX 5000
Compute Capability: 7.5
Multiprocessors: 48
CUDA Cores: 3072
Concurrent threads: 49152
GPU clock: 1815 MHz
Memory clock: 7001 MHz
cMemGetInfo failed with error code 201: invalid device context |
st46813 | nkla:
Do you know what is causing this issue? Can this be caused by VFIO, although everything seems to be in order? Thanks!
I don’t know if VFIO could cause this issue. Could you try to run a CUDA sample on this node without VFIO, e.g. in a docker container or on bare metal? |
st46814 | I gave up on VFIO. The original error was me not passing through the other 3 components of the GPU (audio, USB, SCSI). Now PyTorch works sometimes, but only if the GPU was originally attached to nouveau and then bound to vfio. E.g., if the GPU was originally only used by vfio-pci, PyTorch will not work in the guest; instead, the python3 binary freezes and is unkillable, requiring a reset of the guest.
Seems like VFIO is not ready for deep learning. I wonder how Colab runs its services? I will be running bare metal now, thanks. |
st46815 | Hi,
I have the same problem as you; my machine has an RTX 2060 (notebook series), and I installed cuda 11.0.3 and cudnn for 11.0:
UserWarning: CUDA initialization: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.) return torch._C._cuda_getDeviceCount() > 0
Now I am trying to uninstall cuda11 and cudnn.
I was wondering what CUDA version you reinstalled, and before you reinstalled, did you uninstall both cuda11 and cudnn? |
st46816 | Hi, are you running kernel>=5.9? You’ll need to downgrade to <=5.8 since nvidia does not support 5.9 yet. |
st46817 | Hi,
I don’t exactly remember my previous CUDA version; I installed CUDA version 11.
Installing tensorflow-gpu and then installing PyTorch with the same CUDA version as tensorflow-gpu did the trick for me.
Thanks |
st46818 | Hi nkla,
Thanks for your reply; my kernel is 5.4.0-52-generic.
Following the UserWarning:
this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero
I added export CUDA_VISIBLE_DEVICES=0 to ~/.bashrc with gedit and sourced it.
Now it works fine:
import torch
torch.cuda.is_available()
True
Thanks again |
st46819 | Hello dears,
I’m trying to load a pre-trained model with the code below:
url = 'https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.ar.vec'
SRC.build_vocab(train_data, vectors=Vectors('wiki.ar.vec', url=url), unk_init=torch.Tensor.normal_, min_freq=2)
TRG.build_vocab(train_data, min_freq=2)
I get an unusual error:
Fatal Python error: Segmentation fault
Current thread 0x00007f054de01740 (most recent call first):
File "/home/aiman/anaconda3/lib/python3.7/site-packages/torchtext/vocab.py", line 387 in cache
File "/home/aiman/anaconda3/lib/python3.7/site-packages/torchtext/vocab.py", line 323 in __init__
File "test.py", line 101 in <module>
I have updated torchtext and conda but it’s still not fixed.
The environment is as below:
pytorch-ignite==0.4.2
pytorch-nlp==0.5.0
torch==1.4.0
torchaudio==0.4.0a0+719bcc7
torchtext==0.6.0
torchvision==0.5.0
Any suggestions to fix this issue? |
st46820 | Solved by ptrblck in post #3
Could you update all libs to the latest stable release and retry the code?
If you are still seeing the seg fault, please create an issue in the torchtext GitHub repository. |
st46821 | Could you update all libs to the latest stable release and retry the code?
If you are still seeing the seg fault, please create an issue in the torchtext GitHub repository. |
st46822 | The problem was with torch version 1.4.0. I updated it to version 1.5.0 and the issue is fixed. |
st46823 | Can we read data from a dataloader with an offset index, like a list? For example, if I have data_list, I can read data like
for i in range(offset, something):
    x, y = data_list[i]
I’m wondering if this can be done with a dataloader, thanks. |
st46824 | Yes, you could implement a custom sampler, which is responsible to create the indices and pass them to the Dataset.__getitem__.
Alternatively, you could also add an offset in the __getitem__ method directly, if this would fit your use case. |
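A minimal self-contained sketch of the sampler approach (hypothetical names, not from the thread):

import torch
from torch.utils.data import DataLoader, Sampler, TensorDataset

class OffsetSampler(Sampler):
    # yields indices starting at a fixed offset
    def __init__(self, data_source, offset):
        self.data_source = data_source
        self.offset = offset

    def __iter__(self):
        return iter(range(self.offset, len(self.data_source)))

    def __len__(self):
        return len(self.data_source) - self.offset

dataset = TensorDataset(torch.arange(10.0), torch.arange(10))
loader = DataLoader(dataset, batch_size=2, sampler=OffsetSampler(dataset, offset=4))
for x, y in loader:
    print(x, y)  # batches start at index 4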
st46825 | Hi there, I got this runtime error when I was running my code on CUDA:
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 4.00 GiB total capacity; 33.22 MiB already allocated; 2.65 MiB free; 40.00 MiB reserved in total by PyTorch)
I notice that the memory reserved by PyTorch is extremely small. I’m using a GTX 1050 Ti with torch version 1.4.0, driver version 457.09 and CUDA version 11.1.
Is this an issue with my CUDA settings? Does anyone know how I can fix this?
Cheers |
st46826 | Solved by timtaotao in post #7
Hi @ptrblck, thanks for your help, I executed nvidia-smi on windows but I only got N/A for each process’ gpu usage, however, I do find the cause to my problem.
Since I load data from tfrecord file, I import tensorflow to do data preprocessing, and tf takes up all the gpu memory by default. I flush … |
st46827 | Based on the error message it seems that your GPU might be used by other processes, so you should check its free memory via nvidia-smi. |
st46828 | Hi, thanks for your reply. Since I am using Windows I can’t monitor GPU memory via nvidia-smi; I googled but couldn’t find a replacement.
I do notice that the dedicated GPU memory usage was empty at the beginning, but increased to 3.7 GB when I ran my code, though according to the error message, only 268 MiB of memory was reserved by PyTorch. |
st46829 | On Windows you should be able to find an nvidia-smi.exe, which would give you the memory usage.
The PyTorch memory stats won’t show other processes. |
st46830 | Hi @ptrblck, thanks for your help, I executed nvidia-smi on windows but I only got N/A for each process’ gpu usage, however, I do find the cause to my problem.
Since I load data from tfrecord file, I import tensorflow to do data preprocessing, and tf takes up all the gpu memory by default. I flush CUDA after the preprocessing and everything works fine now! |
st46831 | Should I change MSELoss to cross entropy?
criterion = torch.nn.MSELoss(reduction='mean')
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
metrics = {'f1_score': f1_score, 'auroc': roc_auc_score}
I have RGB images and 32-bit masks, divided into two classes (background and the object). Why am I getting this error? How can I change MSELoss to cross entropy?
File "/content/DeepLabv3FineTuning/trainer.py", line 58, in train_model
    metric(y_true.astype('uint8'), y_pred))
File "/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_ranking.py", line 390, in roc_auc_score
    sample_weight=sample_weight)
File "/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_base.py", line 77, in _average_binary_score
    return binary_metric(y_true, y_score, sample_weight=sample_weight)
File "/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_ranking.py", line 221, in _binary_roc_auc_score
    raise ValueError("Only one class present in y_true. ROC AUC score "
ValueError: Only one class present in y_true. ROC AUC score is not defined in that case. |
st46832 | Solved by ptrblck in post #2
If you are working on a multi-class segmentation use case, nn.CrossEntropyLoss would be the preferred loss function.
Your error is raised by a sklearn metric method and is unrelated to the criterion in PyTorch. |
st46833 | If you are working on a multi-class segmentation use case, nn.CrossEntropyLoss would be the preferred loss function.
Your error is raised by a sklearn metric method and is unrelated to the criterion in PyTorch. |
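For segmentation, a minimal sketch of the expected shapes (an assumed 2-class setup, not the poster's model): nn.CrossEntropyLoss takes raw logits of shape [N, C, H, W] and integer class-index targets of shape [N, H, W].

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.randn(4, 2, 64, 64)         # model output, no softmax applied
target = torch.randint(0, 2, (4, 64, 64))  # per-pixel class indices (0 = background)
loss = criterion(logits, target)
print(loss.item())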
st46834 | Hello,
I trained a CNN model to categorize 20 breeds of dogs from the Stanford Dogs dataset.
However, I cannot reduce the loss value.
Here are some parts of my code. Can anyone help?
train_dataloader, test_dataloader = split_Train_Val_Data(data_dir)
C = models.resnet50(num_classes=20).to(device)
optimizer_C = optim.Adam(C.parameters(), lr=1e-4)
criteron = nn.CrossEntropyLoss()

if __name__ == '__main__':
    for epoch in range(epochs):
        iter = 0
        correct_train, total_train = 0, 0
        correct_test, total_test = 0, 0
        train_loss_C = 0.0
        print('epoch: ' + str(epoch + 1) + ' / ' + str(epochs))
        C.train()
        for i, (x, label) in enumerate(train_dataloader):
            x, label = x.to(device), label.to(device)
            optimizer_C.zero_grad()
            with torch.no_grad():
                output = C(x)
                loss = criteron(output, label.long())
            loss = Variable(loss, requires_grad=True)
            loss.backward()
            optimizer_C.step()
            _, predicted = torch.max(output.data, 1)
            total_train += len(x)
            correct_train += (predicted == label).sum().item()
            # train_loss_C += loss.item()*len(label)
            train_loss_C += loss.item()
            iter += 1
        print('Training epoch: %d / loss_C: %.3f | acc: %.3f' % \
              (epoch + 1, train_loss_C / iter, correct_train / total_train))
        C.eval()
        for i, (x, label) in enumerate(test_dataloader):
            with torch.no_grad():
                x, label = x.to(device), label.to(device)
                output = C(x)
                loss = criteron(output, label.long())
                _, predicted = torch.max(output.data, 1)
                total_test += len(x)
                correct_test += (predicted == label).sum().item()
        print('Testing acc: %.3f' % (correct_test / total_test))
        train_acc.append(100 * correct_train / total_train)
        test_acc.append(100 * correct_test / total_test)
        loss_epoch_C.append(train_loss_C) |
st46835 | You are disabling the gradient calculation and are rewrapping the loss tensor, thus detaching it from the graph, in these lines of code:
with torch.no_grad():
    output = C(x)
    loss = criteron(output, label.long())
loss = Variable(loss, requires_grad=True)
Remove the with torch.no_grad() guard and don’t recreate the loss tensor.
Also, Variables are deprecated since PyTorch 0.4, so you can use tensors now. |
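Concretely, the corrected training step would look something like this (a sketch using the thread's own names, not verified against the full script):

optimizer_C.zero_grad()
output = C(x)                          # forward pass with gradients enabled
loss = criteron(output, label.long())  # keep the original loss tensor
loss.backward()
optimizer_C.step()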
st46836 | Hey everyone,
I am trying to copy a Sequential object consisting of the Conv2d, BatchNorm2d, LeakyReLU modules from the CPU to the GPU by doing object->to(…) using Pytorch C++ 1.6 and I get the message that’s in the title of this thread.
The exact same code works fine using Pytorch 1.4.
Could anyone help me figure out what the problem is ?
Thanks
P.S. I am using Visual Studio 2017 and the operating system is Win 10.
P.S.1 I tried the following code:
auto test = torch::nn::Conv2d(torch::nn::Conv2dOptions(1, 1, 1));
test->to(torch::Device(torch::kCUDA));
and I got a different message. It is “PyTorch is not linked with support for cuda devices”. I did link both torch_cuda.lib and c10_cuda.lib.
P.S.2. I’ve found a solution to this problem. I find it a bit strange that I have to force the linker to link against a library by directly adding a symbol to the symbol table. Microsoft describes this linker option as a useful feature for including a library object that would otherwise not be linked into the program. I guess it’s just that I’ve never had to do this until now. |
st46837 | I solved this issue by adding ‘?warp_size@cuda@at@@YAHXZ’ to the linker options of Visual Studio (2017):
Add /INCLUDE:?warp_size@cuda@at@@YAHXZ
Add ‘torch_cuda.lib’
ref. https://github.com/pytorch/pytorch/issues/33435#issuecomment-685241862
Thanks. |
st46838 | Seunghan-go,
I solved the issue soon after I had made the post but thanks for the reply. |
st46839 | Hello,
Thanks; with this solution I successfully fixed the same error on my Windows system.
But I also have a Linux system:
Ubuntu 18.04 - NVIDIA Jetson Xavier AGX with JetPack 4.4.
PyTorch 1.6 for ARM.
When I tried to use the same solution described above and added
/INCLUDE:?warp_size@cuda@at@@YAHXZ to the linker options, a build error was raised:
g++: error: /INCLUDE:?warp_size@cuda@at@@YAHXZ: No such file or directory
g++: error: /INCLUDE : error : No such file or directory
When I tried to change /INCLUDE to -INCLUDE, the build completed successfully, but the original problem came back, reporting the aten::empty_strided problem as described above.
Can you please help me understand how to use this flag on a Linux system?
Thanks |
st46840 | Here is my code:
import torch

class Dataset(torch.utils.data.Dataset):
    def __init__(self, labels):
        'Initialization'
        self.labels = labels

    def __len__(self):
        'Denotes the total number of samples'
        return len(self.labels)

    def __getitem__(self, index):
        X = torch.load('data/' + str(index) + '.pt')
        y = self.labels[index]
        return X, y

training_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=1)
But when I loop over training_dataloader, it returns only the label y, not the features X, even though if I print X, I can see it being printed. What am I doing wrong here? |
st46841 | Hi,
I am trying to train the model in mixed precision, so I am using the command:
model.half()
But I am getting the following error:
(screenshot of the error)
So I converted my input and labels to half as well, but it seems the error is caused by the line:
loss.backward()
I have tried to convert the loss back into a floating point value and run the same, but I am still getting the following error:
(screenshot of the error)
Any suggestions? |
st46842 | Solved by ptrblck in post #16
This code is working fine for me:
temp = nn.Sequential(
    nn.Dropout(p=0.5),
    nn.Linear(in_features=512, out_features=128),
    nn.ReLU(),
    nn.Linear(in_features=128, out_features=17, bias=True),
)
classifier = models.resnet34(pretrained=True)
classifier.fc … |
st46843 | Calling model.half() would not train the model in mixed precision, but in half precision.
Automatic mixed precision can be used via torch.cuda.amp.
Could you post your model definition as well as the input shapes so that we can reproduce this error, please? |
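For reference, a self-contained sketch of the torch.cuda.amp recipe from the docs (toy model, assumes a CUDA device; not the poster's code):

import torch
import torch.nn as nn

model = nn.Linear(10, 2).cuda()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()

for step in range(3):
    data = torch.randn(8, 10, device='cuda')
    target = torch.randint(0, 2, (8,), device='cuda')
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():  # forward + loss in mixed precision
        loss = criterion(model(data), target)
    scaler.scale(loss).backward()    # scale loss to avoid fp16 gradient underflow
    scaler.step(optimizer)           # unscales gradients, then steps
    scaler.update()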
st46844 | Hi,
I am trying to run the model in half precision only; sorry for the confusion.
My model seems to run for a few batches, or at times even for an epoch, but then it throws an error. The code of my model looks similar to the one below:
images = Variable(images).cuda()
labels = Variable(labels).cuda()
net.half() ##
images = images.half() ##
#forward
logits = net(images)
logits = logits.float() ##
loss = criterion(logits, labels)
loss.backward()
net.float() ##
optimizer.step() |
st46845 | Variables are deprecated since PyTorch 0.4 so you can use tensors now.
Could you post an executable code snippet using random tensors, so that we could reproduce the issue and debug further? |
st46846 | My actual training loop looks like the code below; the code above was just an example, apologies for the confusion:
def train_classifier(classifier, train_loader, optimizer, criterion):
    classifier.half()
    classifier.train()
    loss = 0.0
    losses = []
    for i, (images, labels) in enumerate(train_loader):
        classifier.half()
        images, labels = images.to(device), labels.float().to(device)
        images = images.half()
        optimizer.zero_grad()
        logits = classifier(images)
        logits = logits.float()
        loss = criterion(logits, labels)
        loss = loss.float()
        loss.backward()
        classifier.float()
        optimizer.step()
        losses.append(loss)
    return torch.stack(losses).mean().item()
Any idea what could be wrong? |
st46847 | I guess the second error might be raised since you are converting the model to half and back to float() again during training, which could cause dtype mismatches.
Could you explain your use case of converting the model back and forth and, if possible, post an executable code snippet, as simple models (e.g. resnet18) seem to work? |
st46848 | Since this the first time I am trying to convert the model to half precision, so I just followed the post below. And it was converting the model to float and half, back and forth, so I thought this is the correct way.
kaggle.com
Carvana Image Masking Challenge
Automatically identify the boundaries of the car in an image
But I am getting the error even on the first epoch if I don't convert the model back to float. The modified code looks like:
def train_classifier(classifier, train_loader, optimizer, criterion):
    classifier.half()
    classifier.train()
    loss = 0.0
    losses = []
    for i, (images, labels) in enumerate(train_loader):
        images, labels = images.to(device), labels.float().to(device)
        images = images.half()
        optimizer.zero_grad()
        logits = classifier(images)
        logits = logits.float()
        loss = criterion(logits, labels)
        loss = loss.float()
        loss.backward()
        optimizer.step()
        losses.append(loss)
    return torch.stack(losses).mean().item()
The error I am getting is:
[screenshot of the error message]
st46849 | I would still recommend to use the automatic mixed-precision in case you want a stable FP16 training, where numerical sensitive operations are automatically performed in FP32.
Karan_Chhabra:
The modified code looks like:
Could you still post the model definition and an executable code snippet to reproduce the issue? I’m unable to reproduce this error using standard torchvision models.
st46850 | I am using resnet34 as my base model, with last few layers as linear layer followed by sigmoid. My code looks like:
temp = nn.Sequential(
    nn.Dropout(p=0.5),
    nn.Linear(in_features=512, out_features=128),
    nn.ReLU(),
    nn.Linear(in_features=128, out_features=17, bias=True),
    nn.Sigmoid()
)
classifier = torchvision.models.resnet34(pretrained=True)
classifier.fc = temp
I am using Adam optimizer with BCELoss.
And for every epoch I am just calling the above function
train_classifier(classifier, ) |
st46851 | Thanks for the update.
If I run your code snippet, I get invalid outputs after two iterations since the model is overflowing, which is creating an error in the criterion and thus a CUDA assert failure:
/opt/conda/conda-bld/pytorch_1603729047590/work/aten/src/ATen/native/cuda/Loss.cu:102: operator(): block: [0,0,0], thread: [33,0,0] Assertion `input_val >= zero && input_val <= one` failed.
Again, I would advise against using FP16 directly, as over/underflows can easily happen.
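A quick illustration of how easily FP16 overflows (an illustrative snippet, not from the original posts):
import torch
print(torch.tensor(70000.0).half())  # tensor(inf, dtype=torch.float16), since float16 maxes out around 65504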
st46852 | Hi,
I am trying to run autocast but I am facing the following issue:
AttributeError: module 'torch.cuda.amp' has no attribute 'autocast'
Any idea? |
st46853 | Could you update to the latest stable version (.1.7.0) and retry importing it?
torch.cuda.amp.autocast was introduced in 1.6.0, but I would recommend to use the latest version, since it ships with the latest bug fixes and additional features. |
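E.g., a quick version check (minimal sketch):
import torch
print(torch.__version__)  # torch.cuda.amp.autocast requires >= 1.6.0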
st46854 | I am able to run the auto-cast by the loss is causing an issue, my code looks like:
with torch.cuda.amp.autocast():
    logits = classifier(images)
    loss = criterion(logits, labels)
And I am getting an error on the loss calculation. The error says I should use a loss function other than BCELoss, but I need a sigmoid layer just before the output, so what kind of loss should I use, given that PyTorch recommends a loss that works with logits?
Or is there some way to make BCELoss work?
[screenshot of the error message]
st46855 | Your model should return the raw logits and you should use nn.BCEWithLogitsLoss as the criterion.
If you want to see the probabilities, you could still apply torch.sigmoid to them, but don’t pass them to the loss function. |
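A minimal sketch of the suggested setup (variable names such as classifier, images and labels are assumed from the posts above):
import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()  # expects raw logits and applies the sigmoid internally

logits = classifier(images)       # model without the final nn.Sigmoid
loss = criterion(logits, labels)  # labels assumed to be float targets in [0, 1]
probs = torch.sigmoid(logits)     # for inspection only; never feed these to the loss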
st46856 | I am using the code below to convert the model into mixed precision. And I have also commented the sigmoid line but still I am facing the issue.
for i, (images, labels) in enumerate(train_loader):
    images, labels = images.float().to(device), labels.float().to(device)
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        logits = classifier(images)
        loss = criterion(logits, labels)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    # Updates the scale for next iteration.
    scaler.update()
But I am getting an error on the line scaler.scale(loss).backward():
[screenshot of the error message]
Sorry for bugging you. |
st46857 | This code is working fine for me:
import torch
import torch.nn as nn
from torchvision import models

temp = nn.Sequential(
    nn.Dropout(p=0.5),
    nn.Linear(in_features=512, out_features=128),
    nn.ReLU(),
    nn.Linear(in_features=128, out_features=17, bias=True),
)
classifier = models.resnet34(pretrained=True)
classifier.fc = temp

device = 'cuda'
classifier.to(device)
optimizer = torch.optim.SGD(classifier.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()

data = torch.randn(2, 3, 224, 224, device=device)
target = torch.randint(0, 2, (2, 17), device=device).float()
criterion = nn.BCEWithLogitsLoss()

for epoch in range(10):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        logits = classifier(data)
        loss = criterion(logits, target)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    # Updates the scale for next iteration.
    scaler.update()
    print('epoch {}, loss {:.3f}'.format(epoch, loss.item()))
st46858 | Hi,
Seems like I was also manually calling half(), which was causing this error.
Thank you |
st46859 | When using the function torch.save(model.state_dict(), PATH) and subsequently loading the model using model.load_state_dict(torch.load(PATH)), what happens to the running mean and variance of a batch normalization layer? Are they saved and loaded with the same values, or are they set to default when a model is initialized using the saved state_dict? |
st46860 | Solved by ptrblck in post #2
They are saved and loaded as seen here:
bn = nn.BatchNorm2d(3)
print(bn.running_mean)
> tensor([0., 0., 0.])
print(bn.running_var)
> tensor([1., 1., 1.])
out = bn(torch.randn(10, 3, 24, 24))
print(bn.running_mean)
> tensor([0.0009, 0.0018, 0.0004])
print(bn.running_var)
> tensor([1.0009, 1.0015, 1… |
st46861 | They are saved and loaded as seen here:
bn = nn.BatchNorm2d(3)
print(bn.running_mean)
> tensor([0., 0., 0.])
print(bn.running_var)
> tensor([1., 1., 1.])
out = bn(torch.randn(10, 3, 24, 24))
print(bn.running_mean)
> tensor([0.0009, 0.0018, 0.0004])
print(bn.running_var)
> tensor([1.0009, 1.0015, 1.0022])
torch.save(bn.state_dict(), 'tmp.pt')
bn = nn.BatchNorm2d(3)
print(bn.running_mean)
> tensor([0., 0., 0.])
print(bn.running_var)
> tensor([1., 1., 1.])
bn.load_state_dict(torch.load('tmp.pt'))
print(bn.running_mean)
> tensor([0.0009, 0.0018, 0.0004])
print(bn.running_var)
> tensor([1.0009, 1.0015, 1.0022]) |
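The same holds at the model level, since the running stats live in the state_dict alongside the weights (a small sketch with assumed layer choices):
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))
model(torch.randn(4, 3, 24, 24))  # updates the running stats in training mode
torch.save(model.state_dict(), 'model.pt')

restored = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))
restored.load_state_dict(torch.load('model.pt'))
print(torch.equal(model[1].running_mean, restored[1].running_mean))  # True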
st46862 | For example, I have a 4x4x4 torch tensor:
x = torch.Tensor([
[[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1]],
[[2, 2, 2, 2],
[2, 2, 2, 2],
[2, 2, 2, 2],
[2, 2, 2, 2]],
[[3, 3, 3, 3],
[3, 3, 3, 3],
[3, 3, 3, 3],
[3, 3, 3, 3]],
[[4, 4, 4, 4],
[4, 4, 4, 4],
[4, 4, 4, 4],
[4, 4, 4, 4]]
])
I want to convert it to a 1x8x8 tensor as:
([[[1, 2, 1, 2, 1, 2, 1, 2],
[3, 4, 3, 4, 3, 4, 3, 4],
[1, 2, 1, 2, 1, 2, 1, 2],
[3, 4, 3, 4, 3, 4, 3, 4],
[1, 2, 1, 2, 1, 2, 1, 2],
[3, 4, 3, 4, 3, 4, 3, 4],
[1, 2, 1, 2, 1, 2, 1, 2],
[3, 4, 3, 4, 3, 4, 3, 4]]])
How can I do this kind of reshape with a vectorized method, without a for loop, in PyTorch?
st46863 | Solved by ptrblck in post #2
This should work:
y = x.view(2, 2, 4, 4).permute(3, 0, 2, 1).reshape(1, 8, 8) |
st46864 | Thank you so much.
And there is a further question:
I have a 8x4x4 torch tensor:
x = torch.Tensor([
[[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1]],
[[2, 2, 2, 2],
[2, 2, 2, 2],
[2, 2, 2, 2],
[2, 2, 2, 2]],
[[3, 3, 3, 3],
[3, 3, 3, 3],
[3, 3, 3, 3],
[3, 3, 3, 3]],
[[4, 4, 4, 4],
[4, 4, 4, 4],
[4, 4, 4, 4],
[4, 4, 4, 4]],
[[5, 5, 5, 5],
[5, 5, 5, 5],
[5, 5, 5, 5],
[5, 5, 5, 5]],
[[6, 6, 6, 6],
[6, 6, 6, 6],
[6, 6, 6, 6],
[6, 6, 6, 6]],
[[7, 7, 7, 7],
[7, 7, 7, 7],
[7, 7, 7, 7],
[7, 7, 7, 7]],
[[8, 8, 8, 8],
[8, 8, 8, 8],
[8, 8, 8, 8],
[8, 8, 8, 8]]
])
Using your answer, it is converted to a 2x8x8 tensor as:
tensor([[[1., 2., 1., 2., 1., 2., 1., 2.],
[3., 4., 3., 4., 3., 4., 3., 4.],
[5., 6., 5., 6., 5., 6., 5., 6.],
[7., 8., 7., 8., 7., 8., 7., 8.],
[1., 2., 1., 2., 1., 2., 1., 2.],
[3., 4., 3., 4., 3., 4., 3., 4.],
[5., 6., 5., 6., 5., 6., 5., 6.],
[7., 8., 7., 8., 7., 8., 7., 8.]],
[[1., 2., 1., 2., 1., 2., 1., 2.],
[3., 4., 3., 4., 3., 4., 3., 4.],
[5., 6., 5., 6., 5., 6., 5., 6.],
[7., 8., 7., 8., 7., 8., 7., 8.],
[1., 2., 1., 2., 1., 2., 1., 2.],
[3., 4., 3., 4., 3., 4., 3., 4.],
[5., 6., 5., 6., 5., 6., 5., 6.],
[7., 8., 7., 8., 7., 8., 7., 8.]]])
but I want to get a 2x8x8 tensor as:
tensor([[[1., 2., 1., 2., 1., 2., 1., 2.],
[3., 4., 3., 4., 3., 4., 3., 4.],
[1., 2., 1., 2., 1., 2., 1., 2.],
[3., 4., 3., 4., 3., 4., 3., 4.],
[1., 2., 1., 2., 1., 2., 1., 2.],
[3., 4., 3., 4., 3., 4., 3., 4.],
[1., 2., 1., 2., 1., 2., 1., 2.],
[3., 4., 3., 4., 3., 4., 3., 4.]],
[[5., 6., 5., 6., 5., 6., 5., 6.],
[7., 8., 7., 8., 7., 8., 7., 8.],
[5., 6., 5., 6., 5., 6., 5., 6.],
[7., 8., 7., 8., 7., 8., 7., 8.],
[5., 6., 5., 6., 5., 6., 5., 6.],
[7., 8., 7., 8., 7., 8., 7., 8.],
[5., 6., 5., 6., 5., 6., 5., 6.],
[7., 8., 7., 8., 7., 8., 7., 8.]]])
Could you help me? Thanks a lot! |
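(For reference, a sketch following the same view/permute/reshape pattern; this is an assumption, not from the original thread, so double-check the output against the layout you want:)
# split the 8 channels into 2 groups of 4, then interleave each group as before
y = x.view(2, 2, 2, 4, 4).permute(0, 4, 1, 3, 2).reshape(2, 8, 8)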
st46865 | Just curious about this…
Can’t we use only tensor.reshape, avoiding .view and .permute? What difference does it make?
st46866 | tensor.reshape copies the data under the hood, if needed for the view operation (which would otherwise yield an error explaining your data is not contiguous in memory and you thus cannot change its strides and shapes since it would overlap). It’s not a replacement for permute, but for .contiguous().view(). |
st46867 | Hi, I’m currently using a pretrained MobileNetV2 model for my audio classification.
As a beginner with PyTorch, and from reading other similar topics, I understand how to modify the first and last layers:
from torchvision.models import mobilenet_v2
import torch
import torch.nn as nn
mobilenet_model = mobilenet_v2(pretrained=True)
mobilenet_model.features[0][0] = nn.Conv2d(1, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) # Modify the first layer
mobilenet_model.classifier[1] = nn.Linear(1280, NUM_CLASSES, bias=True) # Modify the last layer
It works great, but now I want to experiment with this model using different approaches by applying different classifiers (like KNN), and I'm not sure how to change the last layer to use, let's say, KNN instead of a linear transformation.
Does anyone know how to do this? Any resource is also fine! |
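(For reference, one common approach, sketched under the assumption that scikit-learn is available and with hypothetical variable names like train_images: use the network as a fixed feature extractor and fit the KNN on the extracted embeddings.)
import torch
import torch.nn as nn
from sklearn.neighbors import KNeighborsClassifier

mobilenet_model.classifier = nn.Identity()  # the model now outputs the 1280-d pooled features
mobilenet_model.eval()

with torch.no_grad():
    train_feats = mobilenet_model(train_images).numpy()  # train_images/train_labels/test_images are assumed names
    test_feats = mobilenet_model(test_images).numpy()

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(train_feats, train_labels)
preds = knn.predict(test_feats)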