st30668
|
I just created a package called torcheck that helps you carry out sanity checks on your model. It has received several stars, so I wanted to share it with the community.
For a general introduction, please check this out: Testing Your PyTorch Models with Torcheck
The GitHub link is: GitHub - pengyan510/torcheck
The major benefit is that you no longer need to write additional testing code for your model training. Just add a few lines of code specifying the checks before training; torcheck will then take over and perform the checks simultaneously while the training happens.
Another benefit is that torcheck allows you to check your model on different levels. Instead of checking the whole model, you can specify checks for a submodule, a linear layer, or even the weight tensor! This enables more customization around the sanity checks.
|
st30669
|
I would like to use a python set to check if I have seen a given tensor before, as a termination condition. Of course, this doesn’t work as tensors are only equal at that level if they are the same object. Will I have to write my own implementation to cast tensors into something I can put in a set? I get the feeling that moving everything to cpu, for example as a tuple, is not the nicest way to do this.
|
st30670
|
Solved by eqy in post #7
You could consider opening an issue on the github for this. I am curious if there is a workaround that works like computing a cruder hash function using functions that are native on GPU so that only a very small amount of data has to be copied to CPU for hashing:
import time
import torch
class Has…
|
st30671
|
In general this is tricky for a few different reasons such as what you mean by equality. Do you mean perfect bitwise equality in the underlying data, or some kind of fuzzy floating point “equality” like what allclose is used for?
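For illustration, a minimal sketch of how the two notions differ:

import torch

a = torch.tensor([1.0, 2.0, 3.0])
b = a + 1e-7                 # tiny floating point perturbation

print(torch.equal(a, b))     # False: exact element-wise equality
print(torch.allclose(a, b))  # True: "fuzzy" equality within rtol/atol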
|
st30672
|
Thank you for such a quick reply. I am using integers (the problem is checking if an RL policy has been seen before, in case you are familiar with this). So exact bitwise equality is indeed what I am looking for.
|
st30673
|
In that case, does something like a simple wrapper class work for your use case?
import torch

class HashTensorWrapper():
    def __init__(self, tensor):
        self.tensor = tensor

    def __hash__(self):
        return hash(self.tensor.numpy().tobytes())

    def __eq__(self, other):
        return torch.all(self.tensor == other.tensor)

a = torch.randn(1000)
b = a.clone()
print(hash(a) == hash(b))

a_wrap = HashTensorWrapper(a)
b_wrap = HashTensorWrapper(b)
print(hash(a_wrap) == hash(b_wrap))

unwrapped_set = set()
unwrapped_set.add(a)
unwrapped_set.add(b)

wrapped_set = set()
wrapped_set.add(a_wrap)
wrapped_set.add(b_wrap)
print(len(unwrapped_set), len(wrapped_set))

$ python3 hash.py
False
True
2 1
|
st30674
|
Yes, this would work. I will time this against the naive solution (just move onto cpu: pytorch tensor → numpy array → tuple). But I think we still have the same problem that the bytes will go onto the cpu. To get around this, we would somehow need to implement a hashset on the gpu. I doubt that has been done, haha. But this also poses an interesting idea: maybe searching a hashtable is (much?) faster on gpu.
|
st30675
|
Actually, I found that tensorflow has an implementation of a hashtable: tf.lookup.experimental.DenseHashTable | TensorFlow Core v2.5.0. Is there any sort of feature request system, so that someday this could exist for pytorch as well?
|
st30676
|
You could consider opening an issue on the github for this. I am curious if there is a workaround that works like computing a cruder hash function using functions that are native on GPU so that only a very small amount of data has to be copied to CPU for hashing:
import time
import torch

class HashTensorWrapper():
    def __init__(self, tensor):
        self.tensor = tensor

    def __hash__(self):
        return hash(self.tensor.cpu().numpy().tobytes())

    def __eq__(self, other):
        return torch.all(self.tensor == other.tensor)

class HashTensorWrapper2():
    def __init__(self, tensor):
        self.tensor = tensor
        self.hashcrap = torch.arange(self.tensor.numel(), device=self.tensor.device).reshape(self.tensor.size())

    def __hash__(self):
        if self.hashcrap.size() != self.tensor.size():
            self.hashcrap = torch.arange(self.tensor.numel(), device=self.tensor.device).reshape(self.tensor.size())
        return hash(torch.sum(self.tensor*self.hashcrap))

    def __eq__(self, other):
        return torch.all(self.tensor == other.tensor)

a = torch.randn(1000,1000).cuda()
b = a.clone()
print(hash(a) == hash(b))

a_wrap = HashTensorWrapper(a)
b_wrap = HashTensorWrapper(b)
a_wrap2 = HashTensorWrapper2(a)
b_wrap2 = HashTensorWrapper2(b)
print(hash(a_wrap2) == hash(b_wrap2))

unwrapped_set = set()
unwrapped_set.add(a)
unwrapped_set.add(b)

wrapped_set = set()
wrapped_set.add(a_wrap2)
wrapped_set.add(b_wrap2)
print(len(unwrapped_set), len(wrapped_set))

torch.cuda.synchronize()
t1 = time.time()
for i in range(10):
    hash(a_wrap)
torch.cuda.synchronize()
t2 = time.time()

torch.cuda.synchronize()
t3 = time.time()
for i in range(10):
    hash(a_wrap2)
torch.cuda.synchronize()
t4 = time.time()

print(t2-t1, t4-t3)

# python hash.py
False
True
2 1
0.027219772338867188 0.0004017353057861328
|
st30677
|
Neat trick! I will probably use that, and also open an issue to request the feature. Thank you very much for your help and implementations.
|
st30678
|
Hello,
I am running a docker container based on official pytorch/pytorch:1.7.1-cuda11.0-cudnn8-runtime,
I am also using onnxruntime-gpu package to serve the models from the container. However onnxruntime fails with
File "/home/mrc/.local/lib/python3.8/site-packages/onnxruntime/__init__.py", line 24, in <module>
from onnxruntime.capi._pybind_state import get_all_providers, get_available_providers, get_device, set_seed, \
File "/home/mrc/.local/lib/python3.8/site-packages/onnxruntime/capi/_pybind_state.py", line 9, in <module>
import onnxruntime.capi._ld_preload # noqa: F401
File "/home/mrc/.local/lib/python3.8/site-packages/onnxruntime/capi/_ld_preload.py", line 13, in <module>
_libcudnn = CDLL("libcudnn.so.8", mode=RTLD_GLOBAL)
File "/opt/conda/lib/python3.8/ctypes/__init__.py", line 373, in __init__
self._handle = _dlopen(self._name, mode)
OSError: libcudnn.so.8: cannot open shared object file: No such file or directory
Inside the container I see the
root@fc13d70325fe:/# echo $LD_LIBRARY_PATH
/usr/local/nvidia/lib:/usr/local/nvidia/lib64
but there are no cudnn binaries in there.
Does anyone know what is causing the issue? Are the containers not coming pre-installed with cudnn, etc.?
Thank you,
S
|
st30679
|
Solved by ptrblck in post #4
No, PyTorch uses the official cudnn release and either links it dynamically or statically.
Note that you are using the runtime container, so nvcc isn’t installed either:
root@f79b17da2a55:/workspace# nvcc --version
bash: nvcc: command not found
Also, the lib path is also empty:
root@f79b17da2a5…
|
st30680
|
Based on the naming of the container it seems cudnn is installed and you could check the used version via print(torch.backends.cudnn.version()).
The error seems to be raised by onnxruntime and I don’t know how you’ve built/installed it and what might be the issue.
|
st30681
|
Thanks @ptrblck !
Yeah, I see that
>>> print(torch.backends.cudnn.version())
8003
But I can’t find the libcudnn binary anywhere in the container!
root@fc13d70325fe:/# find / -iname 'libcudnn*'
root@fc13d70325fe:/#
The other dependencies of ONNX runtime are there though
root@fc13d70325fe:/# find / -iname 'libcublas*'
/opt/conda/pkgs/cudatoolkit-11.0.221-h6bb024c_0/lib/libcublasLt.so.11.2.0.252
/opt/conda/pkgs/cudatoolkit-11.0.221-h6bb024c_0/lib/libcublasLt.so
/opt/conda/pkgs/cudatoolkit-11.0.221-h6bb024c_0/lib/libcublas.so.11.2.0.252
/opt/conda/pkgs/cudatoolkit-11.0.221-h6bb024c_0/lib/libcublas.so
/opt/conda/pkgs/cudatoolkit-11.0.221-h6bb024c_0/lib/libcublas.so.11
/opt/conda/pkgs/cudatoolkit-11.0.221-h6bb024c_0/lib/libcublasLt.so.11
/opt/conda/lib/libcublasLt.so.11.2.0.252
/opt/conda/lib/libcublasLt.so
/opt/conda/lib/libcublas.so.11.2.0.252
/opt/conda/lib/libcublas.so
/opt/conda/lib/libcublas.so.11
/opt/conda/lib/libcublasLt.so.11
So, where is the libcudnn binary that pytorch is using?
Edit: So I dug into the source code a bit, and it looks like pytorch has a completely separate implementation of cuDNN inside its own codebase. Is this true?
|
st30682
|
mkserge:
Edit: So I dug into the source code a bit, and it looks like pytorch has a completely separate implementation of cuDNN inside its own codebase. Is this true?
No, PyTorch uses the official cudnn release and either links it dynamically or statically.
Note that you are using the runtime container, so nvcc isn’t installed either:
root@f79b17da2a55:/workspace# nvcc --version
bash: nvcc: command not found
Also, the lib path is also empty:
root@f79b17da2a55:/workspace# ls /usr/local/nvidia
ls: cannot access '/usr/local/nvidia': No such file or directory
If you want to build applications inside the container, use the devel container:
root@389363a6c5ec:/workspace# find /usr/ -name libcudnn.so
/usr/lib/x86_64-linux-gnu/libcudnn.so
root@389363a6c5ec:/workspace# nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Wed_Jul_22_19:09:09_PDT_2020
Cuda compilation tools, release 11.0, V11.0.221
Build cuda_11.0_bu.TC445_37.28845127_0
|
st30683
|
Thanks @ptrblck , always helpful
But isn’t it odd? The *-runtime package claims to have cudnn installed (and we see it through torch.backends) but it’s not actually there?
I don’t think I am actually compiling anything inside the container (but I could be wrong, maybe onnx does something special), I install onnxruntime-gpu through pip, and it fails during import when it tries to load libcudnn and cannot find it. I myself cannot find cudnn anywhere in the system, so pytorch must be doing something else here, no?
|
st30684
|
mkserge:
But isn’t it odd? The *-runtime package claims to have cudnn installed (and we see it through torch.backends) but it’s not actually there?
It’s installed in the PyTorch binaries and is most likely linked statically.
mkserge:
I install onnxruntime-gpu through pip, and it fails during import when it tries to load libcudnn and cannot find it.
This would mean that onnxruntime-gpu doesn’t ship with its own statically linked cudnn, but is trying to dynamically link it from the system installation.
mkserge:
I myself cannot find cudnn anywhere in the system, so pytorch must be doing something else here, no?
Yes, statically linking it and probably removing it afterwards to lower the size. If you need the local libs, you would have to use the devel container (or reinstall it into the runtime container).
|
st30685
|
Just to help anyone else ending up here searching for solutions, running:
sudo apt install libcudnn8
or whatever version you need, could help you.
|
st30686
|
(screenshot of the error, 1920×1080)
Hello, I am trying to build a CNN with different conv and pool layers, but I got the error shown in the screenshot. I have to feed the result of the first conv/pool/activation block into the second conv layer, which helps me build something ResNet-like.
here is my code
import torch
from torchtext.legacy import data
from torchtext.legacy import datasets
import random
import numpy as np

SEED = 1234
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True

TEXT = data.Field(tokenize = 'spacy',
                  tokenizer_language = 'en_core_web_sm',
                  batch_first = True)
LABEL = data.LabelField(dtype = torch.float)

train_data, test_data = datasets.IMDB.splits(TEXT, LABEL)
train_data, valid_data = train_data.split(random_state = random.seed(SEED))

MAX_VOCAB_SIZE = 25_000
TEXT.build_vocab(train_data,
                 max_size = MAX_VOCAB_SIZE,
                 vectors = "glove.6B.100d",
                 unk_init = torch.Tensor.normal_)
LABEL.build_vocab(train_data)

BATCH_SIZE = 64
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
    (train_data, valid_data, test_data),
    batch_size = BATCH_SIZE,
    device = device)

import torch.nn as nn
import torch.nn.functional as F

class CNN1d(nn.Module):
    def __init__(self, vocab_size, embedding_dim, n_filters, filter_sizes, output_dim,
                 dropout, pad_idx):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx = pad_idx)
        self.convs = nn.ModuleList([
            nn.Conv1d(in_channels = embedding_dim,
                      out_channels = n_filters,
                      kernel_size = fs)
            for fs in filter_sizes
        ])
        self.fc = nn.Linear(len(filter_sizes) * n_filters, output_dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, text):
        #text = [batch size, sent len]
        embedded = self.embedding(text)
        #embedded = [batch size, sent len, emb dim]
        embedded = embedded.permute(0, 2, 1)
        print("nahla")
        #embedded = [batch size, emb dim, sent len]
        conved1 = [F.relu(conv(embedded)) for conv in self.convs]
        print("hla")
        #conved_n = [batch size, n_filters, sent len - filter_sizes[n] + 1]
        pooled1 = [F.max_pool1d(conv, conv.shape[2]).squeeze(2) for conv in conved1]
        print("ahla")
        #pooled_n = [batch size, n_filters]
        conved2 = [F.relu(conv(pooled1)) for conv in self.convs]
        print("nnnnnnahla")
        #conved_n = [batch size, n_filters, sent len - filter_sizes[n] + 1]
        pooled2 = [F.max_pool1d(conv, conv.shape[2]).squeeze(2) for conv in conved2]
        print("nnnahla")
        #pooled_n = [batch size, n_filters]
        cat = self.dropout(torch.cat((pooled1, pooled2)), dim = 1)
        print("nnahla")
        #cat = [batch size, n_filters * len(filter_sizes)]
        return self.fc(cat)

INPUT_DIM = len(TEXT.vocab)
EMBEDDING_DIM = 100
N_FILTERS = 100
FILTER_SIZES = [3,4,5]
OUTPUT_DIM = 1
DROPOUT = 0.5
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]

model = CNN1d(INPUT_DIM, EMBEDDING_DIM, N_FILTERS, FILTER_SIZES, OUTPUT_DIM, DROPOUT, PAD_IDX)

def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

print(f'The model has {count_parameters(model):,} trainable parameters')

pretrained_embeddings = TEXT.vocab.vectors
model.embedding.weight.data.copy_(pretrained_embeddings)
UNK_IDX = TEXT.vocab.stoi[TEXT.unk_token]
model.embedding.weight.data[UNK_IDX] = torch.zeros(EMBEDDING_DIM)
model.embedding.weight.data[PAD_IDX] = torch.zeros(EMBEDDING_DIM)

import torch.optim as optim
optimizer = optim.Adam(model.parameters())
criterion = nn.BCEWithLogitsLoss()
model = model.to(device)
criterion = criterion.to(device)

def binary_accuracy(preds, y):
    """
    Returns accuracy per batch, i.e. if you get 8/10 right, this returns 0.8, NOT 8
    """
    #round predictions to the closest integer
    rounded_preds = torch.round(torch.sigmoid(preds))
    correct = (rounded_preds == y).float() #convert into float for division
    acc = correct.sum() / len(correct)
    return acc

def train(model, iterator, optimizer, criterion):
    epoch_loss = 0
    epoch_acc = 0
    model.train()
    for batch in iterator:
        optimizer.zero_grad()
        predictions = model(batch.text).squeeze(1)
        loss = criterion(predictions, batch.label)
        acc = binary_accuracy(predictions, batch.label)
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item()
        epoch_acc += acc.item()
    return epoch_loss / len(iterator), epoch_acc / len(iterator)

import time

def epoch_time(start_time, end_time):
    elapsed_time = end_time - start_time
    elapsed_mins = int(elapsed_time / 60)
    elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
    return elapsed_mins, elapsed_secs

def evaluate(model, iterator, criterion):
    epoch_loss = 0
    epoch_acc = 0
    model.eval()
    with torch.no_grad():
        for batch in iterator:
            predictions = model(batch.text).squeeze(1)
            loss = criterion(predictions, batch.label)
            acc = binary_accuracy(predictions, batch.label)
            epoch_loss += loss.item()
            epoch_acc += acc.item()
    return epoch_loss / len(iterator), epoch_acc / len(iterator)

N_EPOCHS = 5
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
    start_time = time.time()
    train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
    valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
    end_time = time.time()
    epoch_mins, epoch_secs = epoch_time(start_time, end_time)
    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        torch.save(model.state_dict(), 'tut4-model.pt')
    print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
    print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
    print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}%')
|
st30687
|
Is batch.text a list or a tensor? It looks like the issue is that some input somewhere in the model is a Python list when it is expected to be a tensor.
|
st30688
|
I’m currently trying to train a 3D U-Net, in which the convolutional layers are replaced with my own layers. In my own implementation, I unfold the (5D: batch size x channels x h x d x w) input so I can contract it with my kernel tensor, which means I use torch.Tensor.unfold three times per layer, like so:
patches = input.unfold(2, kernel_dim, stride).unfold(3, kernel_dim, stride).unfold(4, kernel_dim, stride)
Unfortunately, this is also extremely taxing on the GPU’s RAM.
I’m currently training on an HPC, and have access to multiple GPUs, each with 32GB RAM. To save memory, I’m currently trying to checkpointing with cpu_offload (on each of my double convolution blocks), using the fairscale library.
I haven’t found a way to make it work just yet. I’m wondering: is there a memory-friendly alternative to unfolding 5D input this way? I found several posts of people having similar problems (which is how I turned to fairscale), but I haven’t found anything just yet.
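For context, a rough sketch of why the chained unfolds are so memory-hungry once the patches are materialized (the shapes here are illustrative, not from the original model):

import torch

x = torch.randn(1, 16, 64, 64, 64)          # [batch, channels, d, h, w]
kernel_dim, stride = 3, 1
patches = x.unfold(2, kernel_dim, stride).unfold(3, kernel_dim, stride).unfold(4, kernel_dim, stride)

# The unfold itself is only a view, but any contiguous copy (e.g. for a contraction)
# holds roughly kernel_dim**3 times as many elements as the input.
print(x.numel(), patches.numel())            # ~4.2 million vs ~103 million elements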
|
st30689
|
I want to update the classifier’s weights twice with the two outputs of the classifier.
To update, I wrote a code.
But, the code gives me the error that ’ enable anomaly detection to find the operation that failed to compute its gradient,’
I saw the answer that this code works with the previous version of pytorch. But it seems weird.
Can you tell me where I should fix it?
I don’t want to backward with (foreProb + backProb).backward()
foreData = data * mask
backData = data * (1 - mask)
foreOutput = myClassifier(foreData)
backOutput = myClassifier(backData)
foreProb = nn.CrossEntropyLoss()(foreOutput, target)
backProb = nn.CrossEntropyLoss()(backOutput, target)

self.optimizer['classifier'].zero_grad()
foreProb.backward()
self.optimizer['classifier'].step()

self.optimizer['classifier'].zero_grad()
backProb.backward()
self.optimizer['classifier'].step()
|
st30690
|
Solved by ptrblck in post #4
That wouldn’t be a fix, as it’s still using the wrong behavior. Previous PyTorch versions allowed this wrong gradient calculations, which is why no errors were raised.
|
st30691
|
It seems your code tries to calculate the gradients in the second backward pass using “stale” intermediate forward activations, since the parameters were already updated, which is wrong. This post 1 explains it in more detail.
|
st30692
|
Yes, I saw that post.
But it still gives me the error although I fixed the code to
self.optimizer['classifier'].zero_grad()
foreProb.backward(retain_graph=True)
self.optimizer['classifier'].step()
self.optimizer['classifier'].zero_grad()
backProb.backward()
self.optimizer['classifier'].step()
The error is
one of the variables needed for gradient computation has been modified by an inplace operation
This error doesn’t appear with the old pytorch version…
|
st30693
|
That wouldn’t be a fix, as it’s still using the wrong behavior. Previous PyTorch versions allowed these wrong gradient calculations, which is why no errors were raised.
|
st30694
|
Hi, I’m currently converting a tensor to a numpy array just so I can use sklearn.preprocessing.scale.
Is there a way to achieve this in PyTorch? I have seen there is torchvision.transforms.Normalize, but I can’t work out how to use it outside of the context of a dataloader. (I’m trying to use this on a tensor during training.)
Thanks in advance
|
st30695
|
Solved by yvanscher in post #3
Would this do it?
import torch
from torchvision import transforms
mu = 2
std = 0.5
t = torch.Tensor([1,2,3])
(t - 2)/0.5
# or if t is an image
transforms.Normalize(2, 0.5)(t)
see:
https://pytorch.org/docs/master/torchvision/transforms.html#torchvision.transforms.Normalize
|
st30696
|
You could add the normalization in the __getitem__ function of your Dataset:
class MyDataset(Dataset):
    def __init__(self, X, y, transform=None):
        self.data = X
        self.target = y
        self.transform = transform

    def __getitem__(self, index):
        x = self.data[index]
        y = self.target[index]

        # Normalize your data here
        if self.transform:
            x = self.transform(x)

        return x, y

    def __len__(self):
        return len(self.data)

In this use case, you could set transform to something like this:

transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])
])
|
st30697
|
Would this do it?
import torch
from torchvision import transforms
mu = 2
std = 0.5
t = torch.Tensor([1,2,3])
(t - 2)/0.5
# or if t is an image
transforms.Normalize(2, 0.5)(t)
see:
https://pytorch.org/docs/master/torchvision/transforms.html#torchvision.transforms.Normalize
|
st30698
|
Thanks but this one won’t work for my use case , as I am not trying to do this when I load the data, but as part of another calculation that I am performing during training.
|
st30699
|
yvanscher:
transforms.Normalize(2, 0.5)(t)
Yeah I tried this and I always get an error:
" for t, m, s in zip(tensor, self.mean, self.std):
TypeError: zip argument #2 must support iteration"
|
st30700
|
So you can’t zip self.mean and self.std if they are single values. zip takes multiple iterables and returns packaged tuples.

means = [self.mean] * tensor.size()[0]
stds = [self.std] * tensor.size()[0]
for t, m, s in zip(tensor, means, stds):
    # do stuff

Turn the means and stds into length-n lists, where n is the length of ‘tensor’.
|
st30701
|
ptrblck:
transforms.Normalize
I haven’t figured out how to use transforms.Normalize on input data that is not an image. I get TypeError: tensor is not a torch image. Is there any way to use this method on non-images?
|
st30702
|
Normalize works on tensors, so the error message might come from another transformation:
norm = transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
x = torch.randn(3, 224, 224)
out = norm(x)
|
st30703
|
That only works because your tensor has the dimensions of an image. If you look at the documentation, it says torchvision.transforms.Normalize is used to normalize a tensor image with mean and standard deviation. The argument is described as a
tensor (Tensor) – Tensor image of size (C, H, W) to be normalized.
My data is sequence data of dimension torch.Size([4, 589, 4])
|
st30704
|
Actually, you’re right the error does go away if I get the dimensions right:
norm = transforms.Normalize((30, 30, 30, 30), (20, 25, 30, 35))
x = torch.randn(4, 589, 4)
out = norm(x)
But I don’t think this is applying the normalization correctly. The data from my data loader is shaped [batch_size, seq_length, x_dim] so the scaling should be applied to the last dimension, whereas I think normalize is applying the scaling across the first dimension (the set of image colour maps).
|
st30705
|
Is it possible to extend/apply transforms.Normalize to normalize a multidimensional tensor in a custom PyTorch dataset class? I have a tensor with shape (S x C x W x H) and I want to normalize over the C dimension.
|
st30706
|
Thanks. I also have a question on how to set the mean and std for each channel: are they calculated from the dataset?
|
st30707
|
Yes, you can calculate the mean and std from your training dataset or use some “default” values, e.g. from ImageNet.
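As a rough sketch (assuming a DataLoader that yields image batches of shape [N, 3, H, W]), the per-channel statistics could be accumulated like this:

import torch

def channel_stats(loader):
    # Accumulate per-channel sums of pixels and squared pixels, then derive mean/std.
    count = 0
    s = torch.zeros(3)
    s2 = torch.zeros(3)
    for images, _ in loader:
        count += images.size(0) * images.size(2) * images.size(3)
        s += images.sum(dim=[0, 2, 3])
        s2 += images.pow(2).sum(dim=[0, 2, 3])
    mean = s / count
    std = (s2 / count - mean.pow(2)).sqrt()
    return mean, std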
|
st30708
|
Is there a way to apply different transforms to the mask vs input? For example, I want to apply all deformation transforms to both, but i only want to normalize and totensor the predicted masks (not target)
|
st30709
|
I would recommend to use the functional API for these use cases, as it allows you to apply the same “random” transformation on the data and target, and can also be used to call some transformations on one of these tensors separately.
Have a look at this example.
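For illustration, a rough sketch of that idea (crop size, flip probability and normalization values are just example choices, not taken from your use case):

import random
import torchvision.transforms as T
import torchvision.transforms.functional as TF

def joint_transform(image, mask):
    # Sample the random parameters once and apply them to both image and mask.
    i, j, h, w = T.RandomCrop.get_params(image, output_size=(256, 256))
    image = TF.crop(image, i, j, h, w)
    mask = TF.crop(mask, i, j, h, w)
    if random.random() > 0.5:
        image = TF.hflip(image)
        mask = TF.hflip(mask)
    # Only the input image gets converted and normalized; the mask is left untouched.
    image = TF.normalize(TF.to_tensor(image),
                         mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    return image, mask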
|
st30710
|
Me and @FilipAndersson245 found out that the correct way to unnormalize is:
x * std + mean
We also had to clamp a few values outside of [0,1].
For a single image the code would look something like this:
def inv_normalize(img):
    mean = torch.Tensor([0.485, 0.456, 0.406]).unsqueeze(-1)
    std = torch.Tensor([0.229, 0.224, 0.225]).unsqueeze(-1)
    img = (img.view(3, -1) * std + mean).view(img.shape)
    img = img.clamp(0, 1)
    return img
Feel free to help if the code can be written in a simpler way!
|
st30711
|
Hi @ptrblck, I am also trying to do transform.Normalize(mean, std) outside data-loader but somewhere in the training process. I am not sure how would I do this for a batch of images.
Also, I am using F.normalize(tensor, p=1, dim=1) inside my model. Now, If I am loading the data with transforms.Normalize(mean, std) does it mean I am applying the same Normalization twice?
I saw the source for transforms.Normalize and it appears to be using F.normalize(tensor, self.mean, self.std, self.inplace) which I am not sure is the same thing or different.
|
st30712
|
To apply transforms.Normalize on a batch you could either run this transformation in a loop on each input or normalize the data tensor manually via:
x = (x - mean) / std
Inside transforms.Normalize the torchvision.transforms.functional API will be used as F.normalize.
This is not the same method as torch.nn.functional.normalize and accepts different input arguments.
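For example, a small sketch of the manual version for a batch of shape [N, 3, H, W] (the statistics below are the usual ImageNet values, used here only as an example):

import torch

x = torch.rand(8, 3, 224, 224)  # a batch of images
mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)
x = (x - mean) / std            # broadcasts over the batch and spatial dimensions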
|
st30713
|
Hi all,
I am using publicly available code for my data, but it keeps running out of memory everywhere: on my machine, on a Kaggle GPU and on Google Colab. The author suggested to
" Process the data and save it on the hard disk and create [pytorch dataloader]"
I have the processed data as
Shape of X_train: (3441, 7, 1, 128, 128)
Shape of X_val: (143, 7, 1, 128, 128)
Shape of X_test: (150, 7, 1, 128, 128)
Now, can someone please guide me on how to use a pytorch dataloader here, or did the author mean that I should first save these variables from python to my hard disk and only then use a dataloader?
Please guide.
Regards
|
st30714
|
For reference, you can take a look at Datasets & Dataloaders — PyTorch Tutorials 1.8.1+cu102 documentation or other repos like vision/torchvision/datasets at master · pytorch/vision · GitHub
Also, can you post a code snippet to better understand the problem?
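If the arrays are already in memory (or saved with np.save and reloaded), a minimal sketch could be to wrap them in a TensorDataset; the label tensor below is a made-up placeholder, since the original post doesn’t show the targets:

import torch
from torch.utils.data import TensorDataset, DataLoader

# X_train could come from np.load('X_train.npy'); the labels here are hypothetical.
X_train = torch.randn(3441, 7, 1, 128, 128)
y_train = torch.randint(0, 2, (3441,))

train_ds = TensorDataset(X_train, y_train)
train_loader = DataLoader(train_ds, batch_size=16, shuffle=True)

for xb, yb in train_loader:
    print(xb.shape, yb.shape)  # torch.Size([16, 7, 1, 128, 128]) torch.Size([16])
    break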
|
st30715
|
I have modified a few things, like the Dataset for the pytorch dataloader, so I am reframing the question in another post.
|
st30716
|
@ejguan Actually I don’t know the correct way, so I just copied the link.
Any guidance will be appreciated.
|
st30717
|
Hi there, I get an error when I call torch.save to save my model as a pickle file. Please let me know what this means and how to fix it. Thanks!

TypeError                                 Traceback (most recent call last)
in <module>
----> 1 ssd.export('123.pkl')

in export(self, name_or_path)
    191
    192     def export(self, name_or_path):
--> 193         torch.save(self.learn, name_or_path)
    194
    195     def load(self, name_or_path):

D:\Software\envs\data245\lib\site-packages\torch\serialization.py in save(obj, f, pickle_module, pickle_protocol)
    222     >>> torch.save(x, buffer)
    223     """
--> 224     return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
    225
    226

D:\Software\envs\data245\lib\site-packages\torch\serialization.py in _with_file_like(f, mode, body)
    147         f = open(f, mode)
    148     try:
--> 149         return body(f)
    150     finally:
    151         if new_fd:

D:\Software\envs\data245\lib\site-packages\torch\serialization.py in <lambda>(f)
    222     >>> torch.save(x, buffer)
    223     """
--> 224     return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
    225
    226

D:\Software\envs\data245\lib\site-packages\torch\serialization.py in _save(obj, f, pickle_module, pickle_protocol)
    295     pickler = pickle_module.Pickler(f, protocol=pickle_protocol)
    296     pickler.persistent_id = persistent_id
--> 297     pickler.dump(obj)
    298
    299     serialized_storage_keys = sorted(serialized_storages.keys())

TypeError: can't pickle weakref objects
|
st30718
|
Hi,
The problem is that the model you try to save contains weakref python objects. Such objects are not supported by python’s pickle module.
You might want to check your model and see why you have weakrefs in it.
|
st30719
|
You can find it in the python documentation: https://docs.python.org/3/library/weakref.html
|
st30720
|
Then how can I change the weakrefs in the model into strong references?
Is it possible to change them?
|
st30721
|
Hi, when I try to change the Cityscapes dataset to a binary segmentation, the cross_entropy loss does not seem to work with the binary mask:

File "source_only.py", line 126, in main
    lr_scheduler, epoch, visualize if args.debug else None, args)
File "source_only.py", line 188, in train
    loss_cls_s = criterion(pred_s, label_s)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py", line 1048, in forward
    ignore_index=self.ignore_index, reduction=self.reduction)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py", line 2693, in cross_entropy
    return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py", line 2390, in nll_loss
    ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: only batches of spatial targets supported (3D tensors) but got targets of dimension: 4

and if I change it to BCELoss:

File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py", line 613, in forward
    return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py", line 2755, in binary_cross_entropy
    "Please ensure they have the same size.".format(target.size(), input.size())
ValueError: Using a target size (torch.Size([2, 256, 512, 3])) that is different to the input size (torch.Size([2, 19, 256, 512])) is deprecated. Please ensure they have the same size.

It goes wrong with the channels, even if I set num_class from 19 to 2.
I have no idea, any guidance will be very helpful!
|
st30722
|
nn.CrossEntropyLoss used for a multi-class segmentation use case expects a model output in the shape [batch_size, nb_classes, height, width] while the target should have the shape [batch_size, height, width] and contain class indices in the range [0, nb_classes-1] ([0, 1] in your case for a “binary multi-class segmentation”).
The first error is thus raised, since the target seems to have 4 dimensions.
On the other hand, nn.BCEWithLogitsLoss expects both the model output and target to have the same shape as [batch_size, nb_classes, height, width] and the target should contain floating point values in the range [0, 1] for each class (nb_classes would be 1 for a binary segmentation use case).
Based on the second error message it seems you are trying to pass color images in the channels-last memory format as the target tensors, which won’t work. In case my assumption is correct you would have to map the colors to class values first.
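For illustration, a minimal sketch of the expected shapes (the color-to-index mapping at the end is just an assumed example, not taken from your code):

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
output = torch.randn(2, 2, 256, 512)             # [batch, nb_classes, H, W] logits
target = torch.randint(0, 2, (2, 256, 512))      # [batch, H, W] with class indices 0/1
loss = criterion(output, target)

# If the target is an RGB color mask of shape [batch, H, W, 3], it has to be mapped
# to class indices first, e.g. assuming background is black and the class is any non-black color:
color_mask = torch.zeros(2, 256, 512, 3)
target_from_colors = (color_mask.sum(dim=-1) > 0).long()  # [batch, H, W]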
|
st30723
|
Hi, I changed the map to

self.id_to_trainid = {
    0: 0,
    1: 1
}
self.trainid2name = {
    0: "back",
    1: "file",
}

but it seems to be the same problem…

File "train_src.py", line 314, in <module>
    main()
File "train_src.py", line 307, in main
    model = train(cfg, args.local_rank, args.distributed)
File "train_src.py", line 139, in train
    loss = criterion(pred, src_label)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py", line 1048, in forward
    ignore_index=self.ignore_index, reduction=self.reduction)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py", line 2693, in cross_entropy
    return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py", line 2390, in nll_loss
    ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: only batches of spatial targets supported (3D tensors) but got targets of size: [2, 3, 720, 1280]
|
st30724
|
I don’t know where this mapping is used, but based on the error message your target still contains 4 dimensions (now the channel dimension seems to be in dim1).
Could you show how you’ve tried to map the color values to the class indices in the target tensor?
|
st30725
|
Thanks for the reply, the map is used in this way:

class GTA5DataSet(data.Dataset):
    def __init__(self,
                 max_iters=None,
                 num_classes=2,
                 ignore_label=255,
                 debug=False,):
        self.split = split
        self.NUM_CLASS = num_classes
        self.data_root = data_root
        self.data_list = []
        if max_iters is not None:
            self.label_to_file, self.file_to_label = pickle.load(open(osp.join(data_root, "gtav_label_info.p"), "rb"))
            self.img_ids = []
            SUB_EPOCH_SIZE = 3000
            tmp_list = []
            ind = dict()
            for i in range(self.NUM_CLASS):
                ind[i] = 0
            for e in range(int(max_iters/SUB_EPOCH_SIZE)+1):
                cur_class_dist = np.zeros(self.NUM_CLASS)
                for i in range(SUB_EPOCH_SIZE):
                    if cur_class_dist.sum() == 0:
                        dist1 = cur_class_dist.copy()
                    else:
                        dist1 = cur_class_dist/cur_class_dist.sum()
                    w = 1/np.log(1+1e-2 + dist1)
                    w = w/w.sum()
                    c = np.random.choice(self.NUM_CLASS, p=w)
                    if ind[c] > (len(self.label_to_file[c])-1):
                        np.random.shuffle(self.label_to_file[c])
                        ind[c] = ind[c]%(len(self.label_to_file[c])-1)
                    c_file = self.label_to_file[c][ind[c]]
                    tmp_list.append(c_file)
                    ind[c] = ind[c]+1
                    cur_class_dist[self.file_to_label[c_file]] += 1
            self.img_ids = tmp_list
        if max_iters is not None:
            self.data_list = self.data_list * int(np.ceil(float(max_iters) / len(self.data_list)))
        print('length of gta5', len(self.data_list))
        self.id_to_trainid = {0: 0, 1: 1}
        self.trainid2name = {
            0: "back",
            1: "file",
        }
        self.transform = transform
        self.ignore_label = ignore_label
        self.debug = debug

    def __len__(self):
        return len(self.data_list)

    def __getitem__(self, index):
        if self.debug:
            index = 0
        datafiles = self.data_list[index]
        # re-assign labels to match the format of Cityscapes
        label_copy = self.ignore_label * np.ones(label.shape, dtype=np.float32)
        for k, v in self.id_to_trainid.items():
            label_copy[label == k] = v
        label = Image.fromarray(label_copy)
        if self.transform is not None:
            image, label = self.transform(image, label)
        return image, label, name
|
st30726
|
Hello,
I am trying to implement a ‘one step gradient descent’ approach wherein I accumulate the loss for the whole dataset, sum it, and then do a backpropagation. I have set my batch size to 8. The issue I am facing is that after a few forward passes I get an OOM error. I think it is because pytorch is saving the forward computation graph for each instance. Is there any workaround where I can save the forward computation graph somewhere and access it when performing the backward pass? Or is there any other workaround?
I have also tried deleting en_input, en_masks, de_output, de_masks after accumulating the loss, but to no avail.
#reproduce error
from transformers import BertModel, BertForMaskedLM, BertConfig, EncoderDecoderModel
import torch
import torch.nn.functional as F

model1 = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased') # initialize Bert2Bert from pre-trained checkpoints
model1.cuda()
optimizer1 = torch.optim.Adam(model1.parameters(), lr=0.001)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print("Using device:", device)

def train1(batch_size):
    acc = torch.zeros(1)
    acc = acc.to('cuda')
    for i in range(20):
        optimizer1.zero_grad()

        # dummy inputs similar to my dataset
        en_input = torch.tensor([[i for i in range(50)] for i in range(batch_size)])
        en_masks = torch.tensor([[0 for i in range(50)] for i in range(batch_size)])
        de_output = torch.tensor([[i for i in range(50)] for i in range(batch_size)])
        de_masks = torch.tensor([[0 for i in range(50)] for i in range(batch_size)])
        lm_labels = torch.tensor([[i for i in range(50)] for i in range(batch_size)])

        en_input = en_input.to('cuda')
        de_output = de_output.to('cuda')
        en_masks = en_masks.to('cuda')
        de_masks = de_masks.to('cuda')
        lm_labels = de_output.clone().to('cuda')

        out = model1(input_ids=en_input, attention_mask=en_masks, decoder_input_ids=de_output,
                     decoder_attention_mask=de_masks, labels=lm_labels)
        prediction_scores = out[1]
        predictions = F.log_softmax(prediction_scores, dim=2)
        p = ((predictions.sum() - de_output.sum())).sum()  # some loss
        p = torch.unsqueeze(p, dim=0)
        acc = torch.cat((p, acc))  # accumulating the loss

    loss = acc.sum()
    loss.backward(retain_graph=True)
    optimizer1.step()

train1(batch_size=8)
|
st30727
|
Solved by ptrblck in post #2
Yes, Autograd will save the computation graphs, if you sum the losses (or store the references to those graphs in any other way) until a backward operation is performed.
To accumulate gradients you could take a look at this post, which explains different approaches and their computation as well as …
|
st30728
|
Yes, Autograd will save the computation graphs, if you sum the losses (or store the references to those graphs in any other way) until a backward operation is performed.
To accumulate gradients you could take a look at this post, which explains different approaches and their computation as well as memory usage.
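As a rough sketch of one such approach (loader and compute_loss are hypothetical placeholders, not from the code above): calling backward() inside the loop lets Autograd free each iteration's graph immediately, while the gradients keep accumulating in the .grad attributes until step() is called.

accumulation_steps = 4
optimizer1.zero_grad()
for i, batch in enumerate(loader):
    loss = compute_loss(model1, batch) / accumulation_steps  # scale to keep the effective loss comparable
    loss.backward()            # frees this iteration's graph right away
    if (i + 1) % accumulation_steps == 0:
        optimizer1.step()      # apply the accumulated gradients
        optimizer1.zero_grad()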
|
st30729
|
I have a question regarding MultiheadAttention and multi_head_attention_forward: it seems like the padding masking is applied only along one axis instead of being applied along two axes.
When attaching a hook to the attention module, I receive the attention weights shown below, whereas I would expect only a non-zero square attention map with the rest being zero.
(screenshot of the attention map, 800×800)
Looking at the source code:
if key_padding_mask is not None:
    attn_output_weights = attn_output_weights.view(bsz, num_heads, tgt_len, src_len)
    attn_output_weights = attn_output_weights.masked_fill(
        key_padding_mask.unsqueeze(1).unsqueeze(2),
        float("-inf"),
    )
    # **Expect Extra** =============>
    attn_output_weights = attn_output_weights.masked_fill(
        key_padding_mask.unsqueeze(1).unsqueeze(2).transpose(-1, -2),
        float("-inf"),
    )
    # <==================== End + Handle the lower zero-row softmax.
    attn_output_weights = attn_output_weights.view(bsz * num_heads, tgt_len, src_len)
Is there something I am missing, or is this intended? It seems weird, as attention is shared between padding and audio features.
|
st30730
|
lstm = torch.nn.LSTM(10, 20,1)
lstm.state_dict().keys()
Output result:
Out[47]: odict_keys(['weight_ih_l0', 'weight_hh_l0', 'bias_ih_l0', 'bias_hh_l0'])
According to the calculation process of an LSTM, there should be only one bias. Why are two bias variables output, i.e. 'bias_ih_l0' and 'bias_hh_l0'?
|
st30731
|
https://pytorch.org/docs/stable/_modules/torch/nn/modules/rnn.html
It says that “Second bias vector is included for CuDNN compatibility. Only one bias vector is needed in standard definition.”
https://docs.nvidia.com/deeplearning/sdk/cudnn-developer-guide/index.html#cudnnRNNMode_t
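In the standard definition the two vectors only ever appear as a sum inside each gate, e.g. i_t = sigmoid(W_ii x_t + b_ii + W_hi h_(t-1) + b_hi), so as a small illustration they can be folded into a single effective bias:

import torch

lstm = torch.nn.LSTM(10, 20, 1)
state = lstm.state_dict()
merged_bias = state['bias_ih_l0'] + state['bias_hh_l0']
print(merged_bias.shape)  # torch.Size([80]) -> 4 gates * hidden_size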
|
st30732
|
I think the two bias terms act differently.
The main point is that bias_ih is applied once during the computation along the time axis, while bias_hh is applied cumulatively along the time axis.
I wanted to clarify this with an illustrative example, but the process is quite complicated.
|
st30733
|
Is there any function similar to keras.layers.Masking which masks the rows of timeseries data that are filled with a specific value (e.g. padded with zero)?
|
st30734
|
Thanks for your answer. In this way I need to have the start and end index of the non-padded values; isn’t there any more straightforward way, like the Masking layer in keras?
One thing more: by slicing we feed matrices with different shapes into the RNN, is this acceptable in pytorch? (e.g. data[0].size() could be (timesteps=100, features=20) and data[10].size() could be (timesteps=34, features=20))
|
st30735
|
I don’t know how you are going to set values to zero if you don’t know where the values are. You can check torch.masked_select, but I’m quite sure you need to know what to mask. Just curious, are you doing ASR?
You have to pad them into a bigger tensor; you can’t even feed them into an RNN if you don’t pad.
|
st30736
|
Use this:
https://gist.github.com/Tushar-N/dfca335e370a2bc3bc79876e6270099e
pad_packed_demo.py (excerpt):

import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

seqs = ['gigantic_string', 'tiny_str', 'medium_str']

# make <pad> idx 0
vocab = ['<pad>'] + sorted(set(''.join(seqs)))

# make model
|
st30737
|
Hi,
I am working on writing my own optimizer. While going through the code for default optimizers such as RMSprop, I found a torch.no_grad() decorator just before the step function. Is it necessary to use this decorator on the step function when I am writing my own optimizer? Can someone please explain why it is used on the step function.
|
st30738
|
If you didn’t call no_grad(), the update operation would require gradients, and you probably don’t want to add your weight update to the computational graph.
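For illustration, a minimal sketch of a custom optimizer using the decorator the same way (MySGD is just an example name, not an existing PyTorch class):

import torch

class MySGD(torch.optim.Optimizer):
    def __init__(self, params, lr=0.01):
        super().__init__(params, dict(lr=lr))

    @torch.no_grad()  # the parameter update itself should not be recorded by autograd
    def step(self):
        for group in self.param_groups:
            for p in group['params']:
                if p.grad is not None:
                    p.add_(p.grad, alpha=-group['lr'])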
|
st30739
|
Hello,
I am new to PyTorch; I just want to ensure that I correctly understand how model.to(device=device) works before I start creating more complex models.
So far, almost all the tutorials that I’ve followed create, train and evaluate their models in the same notebook/script.
However, what if I am creating my model, training it, and evaluating it using functions/operations in different scripts?
Suppose I have the following code under these different scripts.
main.py
model = EfficientNet.from_pretrained("efficientnet-b3")
model = model.to(device=device)
for epoch in range(EPOCHS):
    train(model, train_loader, ...)
    check_accuracy(model, val_loader, ...)
train_model.py
def train_model(model, loader, ...):
    losses = []
    for batch_idx, (data, targets) in enumerate(tqdm(loader)):
        ...
        scores = model(data)
        ...
    print(f"Loss: {sum(losses)/len(losses)}")
evaluation.py
def check_accuracy(model, loader, ...):
    model.eval()
    with torch.no_grad():
        ...
        ...
    model.train()
    return accuracy
Since my model is moved to the device by running .to(device=device), I don’t need to return the model from each function in the different modules back to main.py, is that right?
Simply passing the model to train_model() and check_accuracy() is enough to run operations on the model, and once they finish running, the remaining operations in main.py will resume working on the same model, is that right?
Thank you all for your help.
|
st30740
|
Solved by ptrblck in post #2
Yes, your assumption should be correct as also seen in this post, since the model reference would be passed and its parameters (and buffers) updated inplace.
You could use the code snippet in the linked post to verify it using your setup.
|
st30741
|
Yes, your assumption should be correct as also seen in this post, since the model reference would be passed and its parameters (and buffers) updated inplace.
You could use the code snippet in the linked post to verify it using your setup.
|
st30742
|
This is my code:
optimizer = torch.optim.AdamW(model.parameters(), lr=0.005, weight_decay=0.01)
scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=0.0005, total_steps=total_steps, epochs=30)
When I use scheduler.step(), this error appears:
(screenshot of the error)
How can I solve this error?
|
st30743
|
Could you post a code snippet to reproduce this issue, please?
This dummy example runs fine:
model = models.resnet18()
optimizer=torch.optim.AdamW(model.parameters(),lr=0.005,weight_decay=0.01)
scheduler=torch.optim.lr_scheduler.OneCycleLR(optimizer,max_lr=0.0005,total_steps=10,epochs=30)
output = model(torch.randn(1, 3, 224, 224))
loss = F.cross_entropy(output, torch.randint(0, 1000, (1,)))
loss.backward()
optimizer.step()
scheduler.step()
|
st30744
|
criterion = smp.utils.losses.DiceLoss()
for epoch in range(epochs):
    model.train()
    epoch_loss = 0
    for i, (image, mask) in enumerate(train_dl):
        optimizer.zero_grad()
        image = image.to(device).float()
        output = model(image)
        output = output.to('cpu')
        output = torch.sigmoid(output)
        loss = criterion(output, mask)
        epoch_loss += loss
        loss.backward()
        optimizer.step()
        scheduler.step()
|
st30745
|
A minimal and executable code snippet would be great.
Could you try to remove unnecessary functions and use some random inputs, so that we can reproduce this issue locally?
|
st30746
|
criterion = smp.utils.losses.DiceLoss()
for epoch in range(epochs):
    model.train()
    epoch_loss = 0
    for i, (image, mask) in enumerate(train_dl):
        data = torch.randn(8, 3, 128, 128)
        target = torch.randn(8, 4, 128, 128)
        optimizer.zero_grad()
        data = data.to(device).float()
        output = model(data)
        output = torch.sigmoid(output)
        output = output.to('cpu')
        loss = criterion(output, target)
        epoch_loss += loss
        loss.backward()
        optimizer.step()
        scheduler.step()
    epoch_loss = epoch_loss / total_steps
    print('epoch:{},epoch_loss:{}'.format(epoch, epoch_loss))
I’m sorry for my slow response. This code should suit your needs.
This is the error:
(screenshot of the error)
You can see that the first call to scheduler.step() must have run without a fault, because the following print statement was executed, as well as the eval pass of the model. The error occurred on the second call.
|
st30747
|
I forgot to include the code that comes after; here it is:
model.eval()
metric, metric2, valid_loss = evalue(model, valid_dl)
if metric2 > best_score:
    state = {'state': model.state_dict(), 'best_score': metric2}
    torch.save(state, checkpoint_path)
    best_score = metric2
logging.basicConfig(filename='cloud4.log', level=logging.DEBUG, format='%(asctime)s-%(message)s')
logging.warning('epoch_loss:{},metric1:{},metric2:{}'.format(epoch_loss, metric, metric2))
|
st30748
|
Now it turns out that the commented-out code produces the error, while the uncommented code works fine:
# for image, mask in train_dl:
for i in range(5):
    data = torch.randn(8, 3, 128, 128)
    target = torch.randn(8, 4, 128, 128)
    optimizer.zero_grad()
    data = data.to(device).float()
    output = model(data)
    output = torch.sigmoid(output)
    output = output.to('cpu')
    loss = criterion(output, target)
    epoch_loss += loss
    loss.backward()
    optimizer.step()
    scheduler.step()
    # image = image.to(device).float()
    # optimizer.zero_grad()
    # pre_mask = model(image)
    # pre_mask = pre_mask.to('cpu')
    # pre_mask = torch.sigmoid(pre_mask)
    # loss = criterion(pre_mask, mask)
    # epoch_loss += loss
    # loss.backward()
    # optimizer.step()
    # scheduler.step()
I really don’t know why; is it a data problem that caused the error?
|
st30749
|
Thanks for the code so far.
Could you also post the code you are using to initialize the model, optimizer, and scheduler?
Also, could you try to run the code on the CPU only and check, if you see the same error?
If not, could you rerun the GPU code using CUDA_LAUNCH_BLOCKING=1 python script.py args and post the stack trace again?
|
st30750
|
This is my code:
model = smp.Unet('efficientnet-b3', encoder_weights=None, classes=4)
for m in model.modules():
    weights_init_kaiming(m)
# checkpoint = torch.load(checkpoint_path)
# model.load_state_dict(checkpoint['state'])
device = torch.device('cuda:0')
model.to(device)
total_steps = len(train_dl)
optimizer = torch.optim.AdamW(model.parameters(), lr=0.05, weight_decay=0.01)
scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=0.0005, total_steps=total_steps, epochs=30)
Embarrassingly, I could not debug on the CPU because of my machine, but when I debugged on the GPU the error result did not change.
|
st30751
|
Could you call scheudler.get_lr() before the error is thrown and check the return value, please?
|
st30752
|
I am very sorry for replying a few days later; I am a sophomore in university and have had a lot to do recently, so I didn’t deal with this problem for a few days. As you requested, I added this line of code. The thing is that it returned the value successfully without any problems during the first epoch, but in the second epoch it reported an error.
|
st30753
|
Are you recreating or manipulating the scheduler or optimizer in each epoch somehow?
|
st30754
|
Thank you for answering my question so patiently. My code should not have this problem.
If I use the following code, the error does not appear (it appears when the commented line is used instead):
for epoch in epochs:
    for batch in train_loader:
        # scheduler.step()
    scheduler.step()
Here is my complete training code (when you put scheduler.step() into each batch iteration, the error appears):
for epoch in range(epochs):
    model.train()
    epoch_loss = 0
    epoch_mask_loss = 0
    epoch_label_loss = 0
    for image, mask, label in train_dl:
        optimizer.zero_grad()
        r = np.random.rand(1)
        # cutmix transform
        if r > threshold:
            lam = np.random.beta(50, 50)
            image, mask, cutmix_label = make_cutmix(image, mask, lam)
            image = image.to(device).float()
            mask_prediction, label_prediction = model(image)
            label_prediction = label_prediction.to('cpu')
            label_loss = lam*label_criterion(label_prediction, label) + (1-lam)*label_criterion(label_prediction, cutmix_label)
        else:
            image = image.to(device).float()
            mask_prediction, label_prediction = model(image)
            label_prediction = label_prediction.to('cpu')
            label_loss = label_criterion(label_prediction, label)
        mask_prediction = torch.sigmoid(mask_prediction)
        mask_prediction = mask_prediction.to('cpu')
        mask_loss = mask_criterion(mask_prediction, mask)
        epoch_mask_loss += mask_loss
        epoch_label_loss += label_loss
        loss = label_loss + mask_loss
        epoch_loss += loss
        loss.backward()
        optimizer.step()
    epoch_loss = epoch_loss / total_steps
    epoch_label_loss = epoch_label_loss / total_steps
    epoch_mask_loss = epoch_mask_loss / total_steps
    print('epoch:{},epoch_loss:{},epoch_label_loss:{},epoch_mask_loss:{}'.format(epoch, epoch_loss, epoch_label_loss, epoch_mask_loss))
    model.eval()
    metric, metric2, valid_loss = evalue(model, valid_dl)
    if metric2 > best_score:
        state = {'state': model.state_dict(), 'best_score': metric2}
        torch.save(state, checkpoint_path)
        best_score = metric2
    logging.warning('epoch_loss:{},metric1:{},metric2:{}'.format(epoch_loss, metric, metric2))
    scheduler.step()
|
st30755
|
Hi,
I had the same error that I think has been fixed.
In the end it seems like the number of epochs you had mentioned in your scheduler was less than the number of epochs you tried training for.
I went into %debug in the notebook and tried calling self.get_lr() as suggested.
I got this message:
*** ValueError: Tried to step 3752 times. The specified number of total steps is 3750
Then with some basic math and a lot of code search I realised that I had specified 5 epochs in my scheduler but called for 10 epochs in my fit function.
Hope this helps.
|
st30756
|
There is an error with total_steps. I was also getting the same error, but I rectified it.
|
st30757
|
Related to this issue:
github.com/pytorch/pytorch: "Unbound local variable in LR scheduler" (opened Jan 27, 2020, closed Feb 11, 2020, by vadimkantorov)

This is about a mysterious error. I have the following code that runs ok:
import torch
class MultiStepLR(torch.optim.lr_scheduler._LRScheduler):
    def __init__(self, optimizer, gamma, milestones, ...

If get_lr() throws an error, pytorch suppresses it but will later encounter this "values" unbound bug. Fix the get_lr() error and this bug will go away.
|
st30758
|
Python has lexical scoping by default, which means that although an enclosed scope can access values in its enclosing scope, it cannot modify them (unless they’re declared global with the global keyword). A closure binds values in the enclosing environment to names in the local environment. The local environment can then use the bound value, and even reassign that name to something else, but it can’t modify the binding in the enclosing environment. The UnboundLocalError happened because when python sees an assignment inside a function, it considers that variable a local variable and will not fetch its value from the enclosing or global scope when the function is executed. To modify a global variable inside a function, you must use the global keyword.
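A minimal illustration of the rule described above:

counter = 0

def bump_wrong():
    counter += 1          # UnboundLocalError: the assignment makes 'counter' local

def bump_right():
    global counter
    counter += 1          # fine: 'counter' now refers to the module-level name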
|
st30759
|
Hello, I had the same error and I can’t solve it.
I want to ask you some questions about it. Thank you.
What does ‘The specified number of total steps is 3750’ mean?
How do I change the number of steps?
Thank you.
|
st30760
|
Hi, make sure that your dataloader and the scheduler have the same number of iterations. If I remember correctly, I got this error when using the OneCycle LR scheduler, which needs you to specify the maximum number of steps as an init parameter. Hope this helps! If this isn’t the error you have, then please provide code and check what your scheduler.get_lr() method returns.
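As a sketch, using the values from the snippets earlier in this thread, sizing OneCycleLR for the whole run when step() is called once per batch could look like:

# One scheduler step per batch, so size the schedule for all epochs, not just one.
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=0.0005, steps_per_epoch=len(train_dl), epochs=30)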
|
st30761
|
Hi all,
I was wondering if there is a way of grouping multiple Parameter class instances in a collection (i.e. ParameterList) such that they are registered correctly as Module parameters and can be indexed with a boolean mask or an array/list of indices (like a numpy array)? Nesting parameters in ParameterList registers them correctly, but only list-like indexing is supported.
If you need more specific use case info, I would be happy to provide it.
Thank you!
|
st30762
|
trancelestial:
If you need more specific use case info, I would be happy to provide it.
I would be interested in hearing more about your use case.
If I understand it correctly, you would like to be able to use something like:
params = nn.ParamArray([[param1, param2], [param3, param4]])
p = params[0, 1]
|
st30763
|
I am sorry for this delayed reply.
My use case might be best described as multitask learning, where I have a network hard-shared between tasks and a per-task 1-d tensor which linearly combines the shared network outputs. Let’s say I have n tasks in total, and in each batch I have samples from k (k << n) tasks, so I want:
1) to efficiently index the ‘collection’ of all per-task weights such that for an input of shape (b, d), where b is the batch size, I end up with a tensor of shape (b, t), where t is the size of the per-task tensor.
This is where the implementation using Parameter/ModuleList, wrapping the individual Parameter(per_task_tensor), fails, as I can’t index the ParameterList with a boolean mask or a list of indices (as with torch or numpy arrays).
2) that the loss.backward() time doesn’t scale with the total number of tasks (backward computation time in a batch should only depend on the number of different tasks in the current batch, i.e. gradients should not be calculated for the per-task tensors which are not used in the current batch’s forward computation).
This is what prevents me from having a large Parameter wrapping a matrix with per-task weights as rows. That way I can index efficiently, but backward computation scales with the number of tasks.
(This seems similar to embedding layers - I tried implementing it both with torch.nn.Embedding and torch.nn.functional.embedding, but both implementations seem to scale with the number of tasks.)
I would be very thankful for any helpful insights for implementing this while achieving both 1) and 2).
|
st30764
|
Hi all,
I was wondering if there exists an operation that given a list of (1-d) tensors, joins them in a single (2-d) tensor, s.t. the resulting tensor is a ‘view’ of the original tensors i.e. changing the values in either the original tensors or the joined one, would reflect in changes in the other?
torch.cat and torch.stack seem to create a new tensor. The resulting vectors of torch.chunk are views of the original but the operation is inverse to what I need.
Thank you!
|
st30765
|
No, there isn’t and it’s not supported. The JIT fusers can fuse a final torch.cat by allocating the entire tensor and then filling the parts.
Your best option is to keep around the larger tensor / allocate a larger tensor in advance and then work on the parts.
Best regards
Thomas
|
st30766
|
Hi Thomas,
I see, thank you!
Working with a preallocated tensor wouldn’t be feasible in my case, as I would like the gradient to be calculated only for certain rows of that large tensor, which, if I understand correctly, is also not possible in PyTorch? It is possible to set the gradient to 0 for the rows not used, but I need them not to be calculated at all, so that backward time doesn’t depend on the total number of rows in that large matrix, but only on the number of rows used in that operation (and their size, of course). Do you have an idea how to achieve this?
I found this to be similar to torch.nn.functional.embedding, but there backward time (and also optimizer step time) depend on the total size of the values matrix.
Best regards,
Bozidar
|
st30767
|
I think modelling it similar to the “sparse” option for embedding is pretty much how you would manage it.
I would probably try to use custom autograd.Functions with a “custom sparse representation” (i.e. you keep track of the indices of non-zero rows or somesuch and then only instantiate a tensor with these parts).
|