instruction | input | output | source
---|---|---|---|
Saving a trained Detectron2 model and making predictions on a single image | I am new to detectron2 and this is my first project. After reading the docs and using the tutorials as a guide, I trained my model on the custom dataset and performed the evaluation.
I would now like to make predictions on images I receive via an API by loading this saved model. I could not find any reading materials that could help me with this task.
To save my model, I have used this link as a reference - https://detectron2.readthedocs.io/en/latest/tutorials/models.html
I am able to save my trained model using the following code-
from detectron2.modeling import build_model
model = build_model(cfg) # returns a torch.nn.Module
from detectron2.checkpoint import DetectionCheckpointer
checkpointer = DetectionCheckpointer(model, save_dir="output")
checkpointer.save("model_final") # save to output/model_final.pth
But I am still confused as to how I can go about implementing what I want. I could use some guidance on what my next steps should be. Would be extremely grateful to anyone who can help.
| For a single image, create a list of data. Put the image path in file_name as below:
test_data = [{'file_name': '.../image_1.jpg',
'image_id': 10}]
Then run the following:
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor
from detectron2.data import MetadataCatalog
from detectron2.utils.visualizer import Visualizer, ColorMode
import matplotlib.pyplot as plt
import cv2
test_data = [{'file_name': '.../image_1.jpg',
'image_id': 10}]
cfg = get_cfg()
cfg.merge_from_file("model config")
cfg.MODEL.WEIGHTS = "model_final.pth" # path for final model
predictor = DefaultPredictor(cfg)
im = cv2.imread(test_data[0]["file_name"])
outputs = predictor(im)
v = Visualizer(im[:, :, ::-1],
metadata=MetadataCatalog.get(cfg.DATASETS.TRAIN[0]),
scale=0.5,
instance_mode=ColorMode.IMAGE_BW)
out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
img = cv2.cvtColor(out.get_image()[:, :, ::-1], cv2.COLOR_RGBA2RGB)
plt.imshow(img)
This will show the prediction for the single image
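If the goal is to serve predictions for images arriving through an API, a minimal sketch is to build the predictor once at startup and reuse it per request. The byte-decoding and the returned fields below are assumptions on my part (they apply to an instance-detection model), not part of the original answer:
import cv2
import numpy as np

predictor = DefaultPredictor(cfg)  # built once, with cfg configured as above

def predict_from_bytes(image_bytes):
    # decode raw bytes received from the API into a BGR image
    img = cv2.imdecode(np.frombuffer(image_bytes, dtype=np.uint8), cv2.IMREAD_COLOR)
    outputs = predictor(img)
    instances = outputs["instances"].to("cpu")
    # return something JSON-serializable
    return {
        "classes": instances.pred_classes.tolist(),
        "boxes": instances.pred_boxes.tensor.tolist(),
    }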
| https://stackoverflow.com/questions/68343961/ |
module 'torch' has no attribute 'nan_to_num' | I am using PyTorch version 1.7.1 on Ubuntu, and I try to do the following:
x = torch.tensor([float('nan'), float('inf'), -float('inf'), 3.14])
torch.nan_to_num(x)
but I am getting this error :
AttributeError: module 'torch' has no attribute 'nan_to_num'
But it does exist in the documentation since I just copied those 2 lines from it. Can someone help me ?
| nan_to_num was introduced in PyTorch 1.8. You will need to update your torch package to access it:
pip install --upgrade torch
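If upgrading is not an option, a rough stand-in for nan_to_num's default behaviour can be written with torch.where; this is an approximation I am sketching, not the official implementation:
import torch

def nan_to_num(x, nan=0.0, posinf=None, neginf=None):
    # mimic the PyTorch 1.8 defaults: NaN -> 0, +/-inf -> largest/smallest finite value of the dtype
    posinf = torch.finfo(x.dtype).max if posinf is None else posinf
    neginf = torch.finfo(x.dtype).min if neginf is None else neginf
    x = torch.where(torch.isnan(x), torch.full_like(x, nan), x)
    x = torch.where(x == float('inf'), torch.full_like(x, posinf), x)
    x = torch.where(x == float('-inf'), torch.full_like(x, neginf), x)
    return x

x = torch.tensor([float('nan'), float('inf'), -float('inf'), 3.14])
print(nan_to_num(x))  # tensor([ 0.0000e+00,  3.4028e+38, -3.4028e+38,  3.1400e+00])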
| https://stackoverflow.com/questions/68359151/ |
pytorch loss function for regression model with a vector of values | I'm training a CNN architecture to solve a regression problem using PyTorch where my output is a tensor of 25 values. The input/target tensor could be either all zeros or a gaussian distribution with a sigma value of 2. An example of a 4-sample batch is as this one:
[[0.13534, 0.32465, 0.60653, 0.8825, 1.0000, 0.88250,0.60653, 0.32465, 0.13534, 0.043937, 0.011109, 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0.13534, 0.32465, 0.60653, 0.8825, 1.0000, 0.88250,0.60653, 0.32465, 0.13534, 0.043937, 0.011109, 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.13534, 0.32465, 0.60653, 0.8825, 1.0000, 0.88250,0.60653, 0.32465, 0.13534 ],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]
My question is how to design a loss function so that the model can effectively learn the regression output of 25 values.
I have tried 2 types of loss, torch.nn.MSELoss() and torch.nn.MSELoss()-torch.nn.CosineSimilarity(). They sort of work. However, sometimes the network has difficulty converging, especially when there are a lot of samples with all "zeros", which leads the network to output a vector with all 25 small values.
My question is, is there any other loss which we could try?
| Your values do not seem widely different in scale so an MSELoss seems like it would work fine. Your model could be collapsing because of the many zeros in your target.
You can always try torch.nn.L1Loss() (but I do not expect it to be much better than torch.nn.MSELoss())
I suggest that you instead try to predict the gaussian mean/mu, and later try to re-create the gaussian for each sample if you really need it.
So you have two alternatives if you choose to try this method.
Alt 1
A good alternative is to encode your target to look like a classification target. Each 25-element vector becomes a single value: the index where the original target == 1 (possible classes will be 0, 1, 2, ..., 24). We can then assign a sample that contains "only zeroes" to our last class, "25". So your target:
[[0.13534, 0.32465, 0.60653, 0.8825, 1.0000, 0.88250,0.60653, 0.32465, 0.13534, 0.043937, 0.011109, 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0.13534, 0.32465, 0.60653, 0.8825, 1.0000, 0.88250,0.60653, 0.32465, 0.13534, 0.043937, 0.011109, 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.13534, 0.32465, 0.60653, 0.8825, 1.0000, 0.88250,0.60653, 0.32465, 0.13534 ],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]
becomes
[4,
10,
20,
25]
If you do this, then you can try the common torch.nn.CrossEntropyLoss().
I do not know what your dataloader looks like but given a single sample in your original format, you can convert it to my proposed format with:
def encode(tensor):
if tensor.sum() == 0:
return len(tensor)
return torch.argmax(tensor)
and back to a gaussian with:
def decode(value):
n_values = 25
zero = torch.zeros(n_values)
if value == n_values:
return zero
# Create gaussian around value
std = 2
n = torch.arange(n_values) - value
sig = 2*std**2
gauss = torch.exp(-n**2 / sig)
# Only keep the 13 values around the peak of the gaussian
start_ix = max(value-6, 0)
end_ix = min(value+7,n_values)
zero[start_ix:end_ix] = gauss[start_ix:end_ix]
return zero
(Note I have not tried them with batches, only samples)
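A rough sketch of how the encoded targets would plug into torch.nn.CrossEntropyLoss() (model, inputs and batch_of_vectors below are placeholders, not names from the original code); note the model's head must now emit 26 logits per sample, one per class including the "all zeros" class:
criterion = torch.nn.CrossEntropyLoss()

targets = torch.stack([torch.as_tensor(encode(t)) for t in batch_of_vectors]).long()  # shape (N,)
logits = model(inputs)  # expected shape (N, 26)
loss = criterion(logits, targets)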
Alt 2
The second option is to change your regression targets (still only the argmax positions (mu)) to a nicer regression value in the range 0-1 and have a separate neuron that outputs a "mask value" (also 0-1). Then your batch of:
[[0.13534, 0.32465, 0.60653, 0.8825, 1.0000, 0.88250,0.60653, 0.32465, 0.13534, 0.043937, 0.011109, 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0.13534, 0.32465, 0.60653, 0.8825, 1.0000, 0.88250,0.60653, 0.32465, 0.13534, 0.043937, 0.011109, 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.13534, 0.32465, 0.60653, 0.8825, 1.0000, 0.88250,0.60653, 0.32465, 0.13534 ],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]
becomes
# [Mask, mu]
[
[1, 0.1666], # True, 4/24
[1, 0.4166], # True, 10/24
[1, 0.8333], # True, 20/24
[0, 0] # False, undefined
]
If you are using this setup, then you should be able to use an MSELoss with modification:
def custom_loss(input, target):
# Assume target and input is of shape [Batch, 2]
mask = target[...,0]
mask_loss = torch.nn.functional.mse_loss(input[...,0], target[...,0])
mu_loss = torch.nn.functional.mse_loss(mask*input[...,1], mask*target[...,1])
return (mask_loss + mu_loss) / 2
This loss would only look at the 2nd value (mu) if the mask of the target is 1. Otherwise it only tries to optimize for the correct mask.
To encode to this format you would use:
def encode(tensor):
n_values = 25
if tensor.sum() == 0:
return torch.tensor([0., 0.])
return torch.tensor([1., torch.argmax(tensor).item() / (n_values - 1)])
and to decode:
def decode(tensor):
n_values = 25
# Parse values
mask, value = tensor
mask = torch.round(mask)
value = int(torch.round((n_values-1)*value))
zero = torch.zeros(n_values)
if mask == 0:
return zero
# Create gaussian around value
std = 2
n = torch.arange(n_values) - value
sig = 2*std**2
gauss = torch.exp(-n**2 / sig)
# Only keep the 13 values around the peak of the gaussian
start_ix = max(value-6, 0)
end_ix = min(value+7,n_values)
zero[start_ix:end_ix] = gauss[start_ix:end_ix]
return zero
| https://stackoverflow.com/questions/68370248/ |
First argument error in pytorch.loads() function when working with Emotic demo | Basically, I'm creating an emotion recognition application, and I'm using Emotic's image dataset. They have their own premade program and trained model for a demo (The colab link below) but for some reason the third cell under I. Prepare places pretrained model is encountering the error:
the first argument must be callable on line 19 in the 4th Google Colab cell.
Code:
# Converting model weights to python3.6 format
import torch
from PIL import Image
from torch.autograd import Variable as V
import torchvision.models as models
from torchvision import transforms as trn
from torch.nn import functional as F
import os
model_path = './places'
archs = ['resnet18']
for arch in archs:
model_file = os.path.join(model_path,'%s_places365.pth.tar' % arch)
save_file = os.path.join(model_path,'%s_places365_py36.pth.tar' % arch)
from functools import partial
import pickle
pickle.load = partial(pickle.load, encoding="latin1")
pickle.Unpickler = partial(pickle.Unpickler, encoding="latin1")
model = torch.load(model_file, map_location=lambda storage, loc: storage, pickle_module=pickle)
torch.save(model, save_file)
print('converting %s -> %s'%(model_file, save_file))
print ('completed cell')
# Saving the model weights to use ahead in the notebook
# the architecture to use
arch = 'resnet18'
model_weight = os.path.join(model_path, 'resnet18_places365_py36.pth.tar')
# create the network architecture
model = models.__dict__[arch](num_classes=365)
#model_weight = '%s_places365.pth.tar' % arch
checkpoint = torch.load(model_weight) # model trained in GPU could be deployed in CPU machine like this!
state_dict = {str.replace(k,'module.',''): v for k,v in checkpoint['state_dict'].items()} # the data parallel layer will add 'module' before each layer name
model.load_state_dict(state_dict)
model.eval()
model.cpu()
torch.save(model, os.path.join(model_path, 'res_context' + '.pth'))
print ('completed cell')
Does anyone have any idea why this is happening? (I haven't changed any code; this is the demo offered by Emotic.)
Error:
TypeError Traceback (most recent call last)
<ipython-input-8-1a9e3bc55eae> in <module>()
17 #model_weight = '%s_places365.pth.tar' % arch
18
---> 19 checkpoint = torch.load(model_weight) # model trained in GPU could be deployed in CPU machine like this!
20 state_dict = {str.replace(k,'module.',''): v for k,v in checkpoint['state_dict'].items()} # the data parallel layer will add 'module' before each layer name
21 model.load_state_dict(state_dict)
1 frames
/usr/local/lib/python3.7/dist-packages/torch/serialization.py in _load(zip_file, map_location, pickle_module, pickle_file, **pickle_load_args)
867 # because it's marked readonly in pickle.
868 # The type: ignore is because mypy can't statically determine the type of this class.
--> 869 class UnpicklerWrapper(pickle_module.Unpickler): # type: ignore[name-defined]
870 # from https://stackoverflow.com/questions/13398462/unpickling-python-objects-with-a-changed-module-path/13405732
871 # Lets us override the imports that pickle uses when unpickling an object.
TypeError: the first argument must be callable
Public Google Colab link:
https://colab.research.google.com/github/Tandon-A/emotic/blob/master/Colab_train_emotic.ipynb
| I am the author of this repository. I fixed this issue back in August 2021. The issue is caused by some changes to the Python pickle module in Python 3.7; the code used to work properly with Python 3.6.
You can try the Colab_train_emotic.ipynb file.
| https://stackoverflow.com/questions/68370899/ |
Facing issue: UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 918: ordinal not in range(128) while trying to load a Pytorch model | I am trying to load a pre-trained Pytorch model but getting an error as shown below:
model = torch.load('a.pth')
File "/home/ubuntu/env/lib/python3.6/site-packages/torch/serialization.py", line 267, in load
return _load(f, map_location, pickle_module)
File "/home/ubuntu/env/lib/python3.6/site-packages/torch/serialization.py", line 420, in _load
result = unpickler.load()
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 918: ordinal not in range(128)
I am using Python 3.6 in the virtual environment and PyTorch version = 0.3.1, any leads towards resolving the issue will be helpful.
| From documentation:
By default, we decode byte strings as utf-8. This is to avoid a common error case UnicodeDecodeError: 'ascii' codec can't decode byte 0x... when loading files saved by Python 2 in Python 3. If this default is incorrect, you may use an extra encoding keyword argument to specify how these objects should be loaded, e.g., encoding='latin1' decodes them to strings using latin1 encoding, and encoding='bytes' keeps them as byte arrays which can be decoded later with byte_array.decode(...).
Try to change encoding, for instance:
model = torch.load('a.pth', encoding='latin') # or 'ascii'
| https://stackoverflow.com/questions/68372576/ |
Are there any differences between Y and *Y, where Y is a list used as an input argument? | I was using torch.Tensor.repeat():
x = torch.tensor([[1, 2, 3], [4, 5, 6]])
period = x.size(1)
repeats = [1,2]
result = x.repeat(*repeats)
the result is
tensor([[1, 2, 3, 1, 2, 3],
[4, 5, 6, 4, 5, 6]])
If I get the result as follows:
result = x.repeat(repeats)
the result is the same
tensor([[1, 2, 3, 1, 2, 3],
[4, 5, 6, 4, 5, 6]])
It seems that x.repeat(repeats) and x.repeat(*repeats) work the same.
Does that mean that, for an input parameter, e.g. Y, I can use either Y or *Y?
Kinda. If repeats is a list or tuple of ints, then it is equivalent. But in general the rule appears to be:
If the first argument is a list or tuple, take that as repeats. Ignore all other arguments.
Otherwise, take the full *args as repeats
So if your repeats is something weird like repeats=((1,2),(3,4)), then a.repeat(*repeats) succeeds and is the same as a.repeat(1, 2)==a.repeat((1, 2)) and a.repeat(repeats) fails.
Note: This is observational based on my own tests. The only official documentation is the defined type of the function, e.g. repeat(torch.Size or int...) which isn't perfectly clear with regards to semantics.
You can also get error messages like this:
TypeError: repeat(): argument 'repeats' (position 1) must be tuple of ints, not tuple
when you pass floats. In general error reporting could be better.
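A minimal check of the equivalent case (just the well-defined list-of-ints behaviour; the nested-tuple case above is left to your own experiments):
import torch

a = torch.tensor([[1, 2, 3], [4, 5, 6]])
repeats = [1, 2]

print(torch.equal(a.repeat(*repeats), a.repeat(repeats)))  # True: a list/tuple of ints works either way
print(a.repeat(*repeats).shape)                            # torch.Size([2, 6])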
| https://stackoverflow.com/questions/68387274/ |
Plot loss and accuracy over each epoch for both training and test datasets | I am training the model below to classify 3 classes (0, 1, 2). I am using 2-fold cross validation in PyTorch, and I would like to plot the accuracy and loss for the training and test datasets over the number of epochs, on the same plot. I do not know how to do that, especially since I only evaluate the test set once I finish training. Is there a way that I can have that plot for both the training data and the test data?
# Configuration options
k_folds = 2
loss_function = nn.CrossEntropyLoss()
# For fold results
results = {}
# Set fixed random number seed
torch.manual_seed(42)
# Prepare dataset by concatenating Train/Test part; we split later.
training_set = CustomDataset('one_hot_train_data.txt','train_3states_target.txt') #training_set = CustomDataset_3('one_hot_train_data.txt','train_5_target.txt')
training_generator = torch.utils.data.DataLoader(training_set, **params)
val_set = CustomDataset('one_hot_val_data.txt','val_3states_target.txt')
test_set = CustomDataset('one_hot_test_data.txt','test_3states_target.txt')
#testloader = torch.utils.data.DataLoader(test_set, **params)
#dataset1 = ConcatDataset([training_set, val_set])
dataset = ConcatDataset([training_set,test_set])
kfold = KFold(n_splits=k_folds, shuffle=True)
# Start print
print('--------------------------------')
# K-fold Cross Validation model evaluation
for fold, (train_ids, test_ids) in enumerate(kfold.split(dataset)):
# Print
print(f'FOLD {fold}')
print('--------------------------------')
# Sample elements randomly from a given list of ids, no replacement.
train_subsampler = torch.utils.data.SubsetRandomSampler(train_ids)
test_subsampler = torch.utils.data.SubsetRandomSampler(test_ids)
# Define data loaders for training and testing data in this fold
trainloader = torch.utils.data.DataLoader(
dataset,**params, sampler=train_subsampler)
testloader = torch.utils.data.DataLoader(
dataset,
**params, sampler=test_subsampler)
# Init the neural network
model = PPS()
model.to(device)
# Initialize optimizer
optimizer = optim.SGD(model.parameters(), lr=LEARNING_RATE)
# Run the training loop for defined number of epochs
train_acc = []
for epoch in range(0, N_EPOCHES):
# Print epoch
print(f'Starting epoch {epoch + 1}')
# Set current loss value
running_loss = 0.0
epoch_loss = 0.0
a = []
# Iterate over the DataLoader for training data
for i, data in enumerate(trainloader, 0):
inputs, targets = data
inputs = inputs.unsqueeze(-1)
#inputs = inputs.to(device)
targets = targets.to(device)
inputs = inputs.to(device)
# print(inputs.shape,targets.shape)
# Zero the gradients
optimizer.zero_grad()
# Perform forward pass
loss,outputs = model(inputs,targets)
outputs = outputs.to(device)
# Perform backward pass
loss.backward()
# Perform optimization
optimizer.step()
# print statistics
running_loss += loss.item()
epoch_loss += loss
a.append(torch.sum(outputs == targets))
# print(outputs.shape,outputs.shape[0])
if i % 2000 == 1999: # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000), "acc",
torch.sum(outputs == targets) / float(outputs.shape[0]))
running_loss = 0.0
# sum_acc += (outputs == stat_batch.argmax(1)).float().sum()
print("epoch", epoch + 1, "acc", sum(a) / len(train_subsampler), "loss", epoch_loss / len(trainloader))
train_acc.append(sum(a) / len(train_subsampler))
state = {'epoch': epoch + 1, 'state_dict': model.state_dict(),
'optimizer': optimizer.state_dict() }
torch.save(state, path + name_file + "model_epoch_i_" + str(epoch) + str(fold)+".cnn")
#torch.save(model.state_dict(), path + name_file + "model_epoch_i_" + str(epoch) + ".cnn")
# Print about testing
print('Starting testing')
# Evaluation for this fold
correct, total = 0, 0
with torch.no_grad():
# Iterate over the test data and generate predictions
for i, data in enumerate(testloader, 0):
# Get inputs
inputs, targets = data
#targets = targets.to(device)
inputs = inputs.unsqueeze(-1)
inputs = inputs.to(device)
# Generate outputs
loss,outputs = model(inputs,targets)
outputs.to(device)
print("out",outputs.shape)
print("target",targets.shape)
print("targetsize",targets.size(0))
print("sum",(outputs == targets).sum().item())
#print("sum",torch.sum(outputs == targets))
# Set total and correct
# _, predicted = torch.max(outputs.data, 1)
total += targets.size(0)
correct += (outputs == targets).sum().item()
#correct += torch.sum(outputs == targets)
# Print accuracy
print('Accuracy for fold %d: %d %%' % (fold,float( 100.0 * float(correct / total))))
print('--------------------------------')
results[fold] = 100.0 * float(correct / total)
# Print fold results
print(f'K-FOLD CROSS VALIDATION RESULTS FOR {k_folds} FOLDS')
print('--------------------------------')
sum = 0.0
for key, value in results.items():
print(f'Fold {key}: {value} %')
sum += value
print(f'Average: {float(sum / len(results.items()))} %')
| You could use TensorBoard, which is built especially for that; here is the doc for PyTorch: https://pytorch.org/docs/stable/tensorboard.html
So in your case when you are printing the result, you can just do a
writer.add_scalar('accuracy/train', torch.sum(outputs == targets) / float(outputs.shape[0]), n_iter)
EDIT : adding small example that you can follow
Let's say that you are training a model :
from time import strftime
from torch.utils.tensorboard import SummaryWriter

model_name = 'network'
log_name = '{}_{}'.format(model_name, strftime('%Y%m%d_%H%M%S'))
writer = SummaryWriter('logs/{}'.format(log_name))
net = Model()
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.1)
for epoch in range(num_epochs):
losses = []
correct_values = 0
for i, (inputs,labels) in enumerate (trainloader):
inputs = Variable(inputs.float())
labels = Variable(labels.float())
outputs = net(inputs)
optimizer.zero_grad()
loss = criterion(outputs, labels)
losses.append(loss)
loss.backward()
optimizer.step()
correct_values += (outputs == labels).float().sum()
accuracy = 100 * correct_values / len(training_set)
avg_loss = sum(losses) / len(training_set)
writer.add_scalar('loss/train', avg_loss.item(), epoch)
writer.add_scalar('acc/train', accuracy, epoch)
| https://stackoverflow.com/questions/68389962/ |
FileNotFoundError: [Errno 2] No such file or directory: '.data/multi30k/train.fr' | I'm trying to load the Multi30k torchtext dataset using Google Colab. When I load the .de files it works fine, but as soon as I change from .de I get this error:
FileNotFoundError: [Errno 2] No such file or directory: '.data/multi30k/train.fr'
This is how I loaded the .de and it worked:
train_data, valid_data, test_data = datasets.Multi30k.splits(
root=".data",
exts=('.de', '.en'),
fields = (SRC, TRG),
)
As soon as I change .de to .fr in this code, the error arises:
train_data, valid_data, test_data = datasets.Multi30k.splits(
root=".data",
exts=('.fr', '.en'),
fields = (SRC, TRG),
)
Imports
import torch
from torch import nn
from torch.nn import functional as F
import spacy, math, random
import numpy as np
from torchtext.legacy import datasets, data
import time
from prettytable import PrettyTable
from matplotlib import pyplot as plt
Seeds
SEED = 42
np.random.seed(SEED)
torch.manual_seed(SEED)
random.seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.deteministic = True
Tokenizers spacy
import spacy
spacy.cli.download('fr_core_news_sm')
spacy_fr = spacy.load('fr_core_news_sm')
spacy_en = spacy.load('en_core_web_sm')
def tokenize_fr(sent):
return [tok.text for tok in spacy_fr.tokenizer(sent)]
def tokenize_en(sent):
return [tok.text for tok in spacy_en.tokenizer(sent)]
Fields
SRC = data.Field(
tokenize= tokenize_fr,
lower= True,
init_token = "<sos>",
eos_token = "<eos>",
include_lengths =True
)
TRG = data.Field(
tokenize = tokenize_en,
lower= True,
init_token = "<sos>",
eos_token = "<eos>"
)
The cell that throws an error
train_data, valid_data, test_data = datasets.Multi30k.splits(
root=".data",
exts=('.fr', '.en'),
fields = (SRC, TRG),
)
| It's because there is no train.fr file in the dataset itself.
If you list what torchtext downloaded:
$ !ls -al .data/multi30k
total 5.4M
drwxr-xr-x 2 root root 4.0K Jul 15 14:26 .
drwxr-xr-x 3 root root 4.0K Jul 15 14:26 ..
-rw-r--r-- 1 root root 65K Jul 15 14:26 mmt_task1_test2016.tar.gz
-rw-rw-r-- 1 1000 1000 69K Oct 17 2016 test2016.de
-rw-rw-r-- 1 1000 1000 61K Oct 17 2016 test2016.en
-rw-rw-r-- 1 1000 1000 71K Feb 11 2017 test2016.fr
-rw-rw-r-- 1 1000 1000 2.1M Feb 2 2016 train.de
-rw-rw-r-- 1 1000 1000 1.8M Feb 2 2016 train.en
-rw-r--r-- 1 root root 1.2M Jul 15 14:26 training.tar.gz
-rw-rw-r-- 1 1000 1000 75K Feb 2 2016 val.de
-rw-rw-r-- 1 1000 1000 62K Feb 2 2016 val.en
-rw-r--r-- 1 root root 46K Jul 15 14:26 validation.tar.gz
| https://stackoverflow.com/questions/68391900/ |
"ValueError: Incompatible Language version 13. Must not be between 9 and 12" with Google Colab | I am trying to build a deep learning model with transformer model architecture. In that case when I am trying to cleaning the dataset following error occurred.
I am using Pytorch and google colab for that case & trying to clean Java methods and comment dataset.
Tested Code
import re
from typing import List
from fast_trees.core import FastParser
parser = FastParser('java')
def get_cmt_params(cmt: str) -> List[str]:
'''
Grabs the parameter identifier names from a JavaDoc comment
:param cmt: the comment to extract the parameter identifier names from
:returns: an array of the parameter identifier names found in the given comment
'''
params = re.findall('@param+\s+\w+', cmt)
param_names = []
for param in params:
param_names.append(param.split()[1])
return param_names
Occured Error
Downloading repo https://github.com/tree-sitter/tree-sitter-java to /usr/local/lib/python3.7/dist-packages/fast_trees/tree-sitter-java.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-31-64f6fa6ed39b> in <module>()
3 from fast_trees.core import FastParser
4
----> 5 parser.set_language = FastParser('java')
6
7 def get_cmt_params(cmt: str) -> List[str]:
3 frames
/usr/local/lib/python3.7/dist-packages/fast_trees/core.py in FastParser(lang)
96 }
97
---> 98 return PARSERS[lang]()
/usr/local/lib/python3.7/dist-packages/fast_trees/core.py in __init__(self)
46
47 def __init__(self):
---> 48 super().__init__()
49
50 def get_method_parameters(self, mthd: str) -> List[str]:
/usr/local/lib/python3.7/dist-packages/fast_trees/core.py in __init__(self)
15 class BaseParser:
16 def __init__(self):
---> 17 self.build_parser()
18
19 def build_parser(self):
/usr/local/lib/python3.7/dist-packages/fast_trees/core.py in build_parser(self)
35 self.language = Language(build_dir, self.LANG)
36 self.parser = Parser()
---> 37 self.parser.set_language(self.language)
38
39 # Cell
ValueError: Incompatible Language version 13. Must not be between 9 and 12
an anybody help me to solve this issue?
| The fast-trees library uses the tree-sitter library, and they recommend using tree-sitter version 0.2.0 in order to use fast-trees. However, downgrading tree-sitter to version 0.2.0 will not resolve your problem; I also tried that.
So, without investing time to figure out the bug in tree-sitter, it is better to move to another stable library that satisfies your requirements. Since your requirement is to extract features from given Java code, you can use the javalang library for that.
javalang is a pure Python library for working with Java source code.
javalang provides a lexer and parser targeting Java 8. The
implementation is based on the Java language spec available at
http://docs.oracle.com/javase/specs/jls/se8/html/.
you can refer it from - https://pypi.org/project/javalang/0.13.0/
Since javalang is a pure Python library, it will help you move forward with your research without these bugs.
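For illustration, a small sketch of extracting method parameter names with javalang (the Java snippet and variable names here are made up for the example):
import javalang

java_src = """
public class Demo {
    public int add(int a, int b) { return a + b; }
}
"""

tree = javalang.parse.parse(java_src)
for _, method in tree.filter(javalang.tree.MethodDeclaration):
    print(method.name, [p.name for p in method.parameters])  # add ['a', 'b']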
| https://stackoverflow.com/questions/68393698/ |
RuntimeError: mat1 and mat2 shapes cannot be multiplied (5376x28 and 784x512) | Basic Network
class Baseline(nn.Module):
def __init__(self):
super().__init__()
# 5 Hidden Layer Network
self.fc1 = nn.Linear(28 * 28, 512)
self.fc2 = nn.Linear(512, 256)
self.fc3 = nn.Linear(256, 128)
self.fc4 = nn.Linear(128, 64)
self.fc5 = nn.Linear(64, 3)
# Dropout module with 0.2 probbability
self.dropout = nn.Dropout(p=0.2)
# Add softmax on output layer
self.log_softmax = F.log_softmax
def forward(self, x):
x = self.dropout(F.relu(self.fc1(x)))
x = self.dropout(F.relu(self.fc2(x)))
x = self.dropout(F.relu(self.fc3(x)))
x = self.dropout(F.relu(self.fc4(x)))
x = self.log_softmax(self.fc5(x), dim=1)
return x
Error:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-48-0030d9c3852c> in <module>
18 optimizer.zero_grad()
19 # Make predictions
---> 20 log_ps = model(images)
21 loss = criterion(log_ps, labels)
22 #backprop
~/anaconda3/envs/torch/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
<ipython-input-46-09dd06cd0a72> in forward(self, x)
15
16 def forward(self, x):
---> 17 x = self.dropout(F.relu(self.fc1(x)))
18 x = self.dropout(F.relu(self.fc2(x)))
19 x = self.dropout(F.relu(self.fc3(x)))
~/anaconda3/envs/torch/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~/anaconda3/envs/torch/lib/python3.8/site-packages/torch/nn/modules/linear.py in forward(self, input)
91
92 def forward(self, input: Tensor) -> Tensor:
---> 93 return F.linear(input, self.weight, self.bias)
94
95 def extra_repr(self) -> str:
~/anaconda3/envs/torch/lib/python3.8/site-packages/torch/nn/functional.py in linear(input, weight, bias)
1690 ret = torch.addmm(bias, input, weight.t())
1691 else:
-> 1692 output = input.matmul(weight.t())
1693 if bias is not None:
1694 output += bias
RuntimeError: mat1 and mat2 shapes cannot be multiplied (5376x28 and 784x512)
I tried changing fc1 to self.fc1 = nn.Linear(5376, 512) but I still get RuntimeError: mat1 and mat2 shapes cannot be multiplied (5376x28 and 5376x512).
I then adjusted the whole architecture as follows:
class Baseline(nn.Module):
def __init__(self):
super().__init__()
# 5 Hidden Layer Network
self.fc1 = nn.Linear(5376, 28)
self.fc2 = nn.Linear(28, 256)
self.fc3 = nn.Linear(256, 128)
self.fc4 = nn.Linear(128, 64)
self.fc5 = nn.Linear(64, 3)
# Dropout module with 0.2 probbability
self.dropout = nn.Dropout(p=0.2)
# Add softmax on output layer
self.log_softmax = F.log_softmax
def forward(self, x):
x = self.dropout(F.relu(self.fc1(x)))
x = self.dropout(F.relu(self.fc2(x)))
x = self.dropout(F.relu(self.fc3(x)))
x = self.dropout(F.relu(self.fc4(x)))
x = self.log_softmax(self.fc5(x), dim=1)
return x
and I still get the following error:
RuntimeError Traceback (most recent call last)
<ipython-input-54-0030d9c3852c> in <module>
18 optimizer.zero_grad()
19 # Make predictions
---> 20 log_ps = model(images)
21 loss = criterion(log_ps, labels)
22 #backprop
~/anaconda3/envs/torch/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
<ipython-input-52-f98f89e15885> in forward(self, x)
15
16 def forward(self, x):
---> 17 x = self.dropout(F.relu(self.fc1(x)))
18 x = self.dropout(F.relu(self.fc2(x)))
19 x = self.dropout(F.relu(self.fc3(x)))
~/anaconda3/envs/torch/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~/anaconda3/envs/torch/lib/python3.8/site-packages/torch/nn/modules/linear.py in forward(self, input)
91
92 def forward(self, input: Tensor) -> Tensor:
---> 93 return F.linear(input, self.weight, self.bias)
94
95 def extra_repr(self) -> str:
~/anaconda3/envs/torch/lib/python3.8/site-packages/torch/nn/functional.py in linear(input, weight, bias)
1690 ret = torch.addmm(bias, input, weight.t())
1691 else:
-> 1692 output = input.matmul(weight.t())
1693 if bias is not None:
1694 output += bias
RuntimeError: mat1 and mat2 shapes cannot be multiplied (5376x28 and 5376x28)
NB: The input data is of shape torch.Size([64, 3, 28, 28]).
For the purposes of replication:
model = Baseline()
X = torch.rand(64, 3, 28, 28)
model(X)
| I see one issue in the code:
Linear layers do not accept tensors with the 4d shape that you passed into the model.
In order to pass data of shape torch.Size([64, 3, 28, 28]) through nn.Linear() layers like the ones in your model, you need to flatten the tensor in your forward function:
# New code
x = x.view(x.size(0), -1)
#Your code
x = self.dropout(F.relu(self.fc1(x)))
...
This will probably help solve the weight matrix error you are getting.
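For reference, a sketch of the whole fix under the assumption that you keep the 5-layer architecture: after flattening, each 3x28x28 image has 3*28*28 = 2352 features, so the first layer's input size has to change as well:
import torch.nn as nn
import torch.nn.functional as F

class Baseline(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(3 * 28 * 28, 512)  # 2352 inputs after flattening
        self.fc2 = nn.Linear(512, 256)
        self.fc3 = nn.Linear(256, 128)
        self.fc4 = nn.Linear(128, 64)
        self.fc5 = nn.Linear(64, 3)
        self.dropout = nn.Dropout(p=0.2)

    def forward(self, x):
        x = x.view(x.size(0), -1)  # flatten (N, 3, 28, 28) -> (N, 2352)
        x = self.dropout(F.relu(self.fc1(x)))
        x = self.dropout(F.relu(self.fc2(x)))
        x = self.dropout(F.relu(self.fc3(x)))
        x = self.dropout(F.relu(self.fc4(x)))
        return F.log_softmax(self.fc5(x), dim=1)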
Sarthak Jain
| https://stackoverflow.com/questions/68398721/ |
Convert keras model to pytorch | Is there an easy way to convert a model like this from keras to pytorch?
I have the code in keras as following:
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.regularizers import l2
state_dim = 10
architecture = (256, 256) # units per layer
learning_rate = 0.0001 # learning rate
l2_reg = 0.00000001 # L2 regularization
trainable = True
num_actions = 3
layers = []
n = len(architecture) # n = 2
for i, units in enumerate(architecture, 1):
layers.append(Dense(units=units,
input_dim=state_dim if i == 1 else None,
activation='relu',
kernel_regularizer=l2(l2_reg),
name=f'Dense_{i}',
trainable=trainable))
layers.append(Dropout(.1))
layers.append(Dense(units=num_actions,
trainable=trainable,
name='Output'))
model = Sequential(layers)
model.compile(loss='mean_squared_error',
optimizer=Adam(lr=learning_rate))
Which outputs as follow:
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
Dense_1 (Dense) (None, 256) 2816
_________________________________________________________________
Dense_2 (Dense) (None, 256) 65792
_________________________________________________________________
dropout_3 (Dropout) (None, 256) 0
_________________________________________________________________
Output (Dense) (None, 3) 771
=================================================================
Total params: 69,379
Trainable params: 69,379
Non-trainable params: 0
_________________________________________________________________
None
I must admit, I'm a little out of my depth so any advice is appreciated. I'm trying to read through the pytorch docs and will update my question with a possible answer if I manage.
| Here is my best attempt:
state_dim = 10
architecture = (256, 256) # units per layer
learning_rate = 0.0001 # learning rate
l2_reg = 0.00000001 # L2 regularization
trainable = True
num_actions = 3
import torch
from torch import nn
class CustomModel(nn.Module):
def __init__(self):
super().__init__()
self.layers = nn.Sequential(
nn.Linear(state_dim, architecture[0]),
nn.ReLU(),
nn.Linear(architecture[0], architecture[1]),
nn.ReLU(),
nn.Dropout(0.25),
nn.Linear(architecture[1], num_actions),
)
def forward(self, x):
return self.layers(x)
model = CustomModel()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
It outputs a promising looking output:
CustomModel(
(layers): Sequential(
(0): Linear(in_features=10, out_features=256, bias=True)
(1): ReLU()
(2): Linear(in_features=256, out_features=256, bias=True)
(3): ReLU()
(4): Dropout(p=0.25, inplace=False)
(5): Linear(in_features=256, out_features=3, bias=True)
)
)
However, a few items are still left unanswered:
are the activations in the right place?
how do we add a kernel_regularizer = l2(l2_reg) to the first two Linear/Dense layers?
and how do we make the layers trainable?
Any input appreciated.
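A couple of hedged pointers on those open questions (sketches, not exact equivalents of the Keras semantics): the ReLUs placed between Linear layers as above mirror activation='relu'; L2 regularization is most commonly approximated through the optimizer's weight_decay (note this also regularizes biases, unlike kernel_regularizer); and layers are trainable by default, with freezing done through requires_grad:
# approximate kernel_regularizer=l2(l2_reg) with weight decay on all parameters
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate, weight_decay=l2_reg)

# freeze or unfreeze a block of layers (trainable=False corresponds to freezing)
for p in model.layers[0].parameters():  # e.g. the first Linear layer
    p.requires_grad_(trainable)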
| https://stackoverflow.com/questions/68413480/ |
How to make a custom torchvision transform? | I have a function that changes image pixels with a 20% chance, but I am not sure how to make it work in transforms.Compose([]). Please help!
def random_t(img):
im = Image.open(img)
pixelMap = im.load()
pixelMap_list = []
for i in range(im.size[0]):
for j in range(im.size[1]):
randNum = random.uniform(0, 1)
if randNum < 0.2: # 20% chance of pixel change
pixelMap[i, j] = (random.randint(0, 255), random.randint(0, 255), random.randint(0, 255))
pixelMap_list.append(pixelMap[i, j])
else:
pixelMap[i, j] = pixelMap[i, j]
return im
I think it should have a format like this; this pattern is from the torchvision library.
class custom_augmentation(object):
def __init__(self, p):
self.p = p # it should be the probability of random pixel
def __call__(self, img):
return None # Not sure how to make random_t in here
def __repr__(self):
return "custom augmentation"
fixed code:
class custom_augmentation(object):
def __init__(self, p=0.5):
self.p = p
def __call__(self, img):
pixelMap = img.load()
for i in range(img.size[0]):
for j in range(img.size[1]):
if torch.rand(1) < self.p:
pixelMap[i, j] = (random.randint(0, 255), random.randint(0, 255), random.randint(0, 255))
else:
pixelMap[i, j] = pixelMap[i, j]
return img # Not sure how to make random_t in here
def __repr__(self):
return "custom augmentation"
| You need to do your operations on img and then return it. For a good example of how to create custom transforms, just check out how the built-in torchvision transforms are implemented, over here:
This is the GitHub file where torchvision.transforms like transforms.Resize(), transforms.ToTensor() and transforms.RandomHorizontalFlip() have their code. Look at transforms like RandomHorizontalFlip() to see how to introduce a probability that a transform will happen, etc.
https://github.com/pytorch/vision/blob/master/torchvision/transforms/transforms.py
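As a side note, a vectorized sketch of the same augmentation (assuming a PIL RGB image as input) avoids the slow per-pixel Python loop of the fixed code above:
import numpy as np
from PIL import Image

class CustomAugmentationVectorized:
    """Same effect as the pixel loop, using a random mask over all pixels at once."""
    def __init__(self, p=0.2):
        self.p = p

    def __call__(self, img):
        arr = np.array(img)                                          # (H, W, 3) uint8
        mask = np.random.rand(arr.shape[0], arr.shape[1]) < self.p   # which pixels to replace
        noise = np.random.randint(0, 256, arr.shape, dtype=np.uint8)
        arr[mask] = noise[mask]
        return Image.fromarray(arr)

    def __repr__(self):
        return "custom augmentation (vectorized)"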
Sarthak Jain
| https://stackoverflow.com/questions/68415926/ |
Importing zero_gradients from torch.autograd.gradcheck | I want to replicate the code here, and I get the following error while running in Google Colab:
ImportError: cannot import name 'zero_gradients' from
'torch.autograd.gradcheck'
(/usr/local/lib/python3.7/dist-packages/torch/autograd/gradcheck.py)
Can someone help me with how to solve this?
| This seems like it's using a very old version of PyTorch, the function itself is not available anymore. However, if you look at this commit, you will see the implementation of zero_gradients. What it does is simply zero out the gradient of the input:
def zero_gradients(i):
for t in iter_gradients(i):
t.zero_()
Then zero_gradients(x) should be the same as x.zero_grad(), which is the current API, assuming x is an nn.Module!
Or it's just:
if x.grad is not None:
x.grad.zero_()
| https://stackoverflow.com/questions/68419612/ |
PyTorch: Image not displaying properly | I have the following code portion:
dataset = trainDataset()
train_loader = DataLoader(dataset,batch_size=1,shuffle=True)
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
images = []
image_labels = []
for i, data in enumerate(train_loader,0):
inputs, labels = data
inputs, labels = inputs.to(device), labels.to(device)
inputs, labels = inputs.float(), labels.float()
images.append(inputs)
image_labels.append(labels)
image = images[7]
image = image.numpy()
image = image.reshape(416,416,3)
img = Image.fromarray(image,'RGB')
img.show()
The issue is that the image doesn't display properly. For instance, the dataset I have contains images of cats and dogs. But, the image displayed looks as shown below. Why is that?
EDIT 1
So, after @flawr's nice explanation, I have the following:
image = images[7]
image = image[0,...].permute([1,2,0])
image = image.numpy()
img = Image.fromarray(image,'RGB')
img.show()
And, the image looks as shown below. Not sure if it is a Numpy thing or the way the image is represented and displayed? I would like to also kindly note that I get a different display of the image at every run, but it is pretty much something close to the image displayed below.
EDIT 2
I think the issue now is with how to represent the image. By referring to this solution, I now get the following:
image = images[7]
image = image[0,...].permute([1,2,0])
image = image.numpy()
image = (image * 255).astype(np.uint8)
img = Image.fromarray(image,'RGB')
img.show()
Which produces the following image as expected :-)
| In pytorch you usually represent pictures with tensors of shape
(channels, height, width)
You then seem to reshape it to what you expect would be
(height, width, channels)
Note that these tensors or arrays are actually stored as 1d "array", and the multiple dimensions just come from defining strides (check out How to understand numpy strides for layman?).
In your particular case this means that consecutive values (that were basically values of the same color channel and the same row) are now interpreted as different colour channels.
So let's say you have a 2x2 image with 3 color channels. Let's say it is a chessboard pattern. In pytorch that would look something like the following array of shape (3, 2, 2):
[[[1,0],[0,1]],[[1,0],[0,1]],[[1,0],[0,1]]]
The underlying internal array is just
[ 1,0 , 0,1 , 1,0 , 0,1 , 1,0 , 0,1 ]
So reshaping to (2, 2, 3) would look like so:
[[[1,0,0],[1,1,0]],[[0,1,1],[0,0,1]]]
which immediately shows how the image will be completely jumbled. Reshaping really just means setting the brackets in different places!
So what you probably want instead of reshape is permute([1, 2, 0]), (or in numpy called transpose) which will actually rearrange the data.
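The same toy example in code, as a quick sketch to see the difference directly:
import torch

x = torch.tensor([[[1, 0], [0, 1]]] * 3)  # shape (3, 2, 2): three channels of a 2x2 "chessboard"
print(x.reshape(2, 2, 3))
# tensor([[[1, 0, 0], [1, 1, 0]],
#         [[0, 1, 1], [0, 0, 1]]])   <- jumbled: only the brackets moved
print(x.permute(1, 2, 0))
# tensor([[[1, 1, 1], [0, 0, 0]],
#         [[0, 0, 0], [1, 1, 1]]])   <- the channel axis really moved to the end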
| https://stackoverflow.com/questions/68424784/ |
How does pytorch perform the reverse-differentiation given an indexed version of a tensor in the feedforward step? | Some of this code was adapted from the book Deep learning with Pytorch
Script: Linear regression (trying to predict t_c given t_u)
t_c = torch.tensor([0.5, 14.0, 15.0, 28.0, 11.0, 8.0,
3.0, -4.0, 6.0, 13.0, 21.0])
t_u = torch.tensor([35.7, 55.9, 58.2, 81.9, 56.3, 48.9,
33.9, 21.8, 48.4, 60.4, 68.4])
def model(t_u, w, b):
return w * t_u + b
def loss_fn(t_p, t_c):
squared_diffs = (t_p - t_c)**2
return squared_diffs.mean()
params = torch.tensor([1.0, 0.0], requires_grad=True)
loss = loss_fn(model(t_u, params[0], params[1]), t_c)
loss.backward()
print(params.grad)
Here I am passing in the 0th and 1st index of params as an input to the model function, which performs scalar-to-vector multiplication and addition.
My question is, what is PyTorch exactly doing to compute the gradients of the params tensor? The "feedforward" step uses two subtensors of the params tensor, rather than separate tensors for bias and weight, which is what I am familiar with.
My guess is: params[0] and params[1] are both references to elements in params, and they both have their own distinct gradients stored somewhere in the params.grad. So the .backward() call is treating params[0] and params[1] as new individual tensors (as if we temporarily had two separate tensors -weight and bias) and updates their gradients (params[0].grad, params[1].grad), hence updating the params.grad since they are references to it.
| The main idea here is that the indexing operation returns a new view of the tensor. If you are not using in-place operations (+=, -=, etc.), the "view" thing does not really matter and you can consider it as just another tensor.
In that case, the indexing operation is no different from other operations like addition or matrix-multiplication -- input (original tensor), output (selected tensor), and gradient (1 if selected, zero otherwise*). Then back-propagation happens as usual.
* More specifically, the gradient of an input entry with respect to an output entry is 1 if the output entry is selected from the input entry; 0 otherwise.
EDIT:
Maybe it's easier to see it this way:
a = d_params[0]
c = W*a+b
--------------------------
dc/d_params
= dc/{d_params[0], d_params[1], d_params[2], ...}
--------------------------
dc/d_params[0]
= dc/da * da/d_params[0]
= dc/da * 1
= W
--------------------------
dc/d_params[1], dc/d_params[2], ... = 0
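A tiny numerical check of this, matching the notation above with W and b as plain constants and a = params[0]:
import torch

params = torch.tensor([4.0, 7.0], requires_grad=True)
W, b = 3.0, 1.0

a = params[0]        # indexing selects a sub-tensor of params
c = W * a + b
c.backward()
print(params.grad)   # tensor([3., 0.]) -> dc/d_params[0] = W, dc/d_params[1] = 0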
| https://stackoverflow.com/questions/68425540/ |
How to disable tqdm's progress bar and keep only the text info in PyTorch Lightning (or in tqdm in general) | I am working with PyTorch Lightning, and tqdm's progress bar is very buggy: it keeps resizing back and forth from short to long, which makes reading the logging text unpleasant. I realized that the progress bar is not really necessary, and I would like to keep only the info about the current epoch, current batch, accuracy, loss, etc.
From my searching it seems like you can disable the whole tqdm display (progress bar and text), but how can I selectively disable only the progress bar and not the text?
| The tqdm way to disable the "meter" (while retaining display of stats) is to set ncols=0 and dynamic_ncols=False (see tqdm documentation).
The way to customize the default progress bar behavior in pytorch_lightning is to pass a custom ProgressBar in as a callback when building the Trainer.
Putting the two together, if you wanted to modify the progress bar during training you could do something like the following:
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ProgressBar
class MeterlessProgressBar(ProgressBar):
def init_train_tqdm(self):
bar = super().init_train_tqdm()
bar.dynamic_ncols = False
bar.ncols = 0
return bar
bar = MeterlessProgressBar()
trainer = pl.Trainer(callbacks=[bar])
You can separately customize for the sanity check, prediction, validation, and test by overriding: init_sanity_tqdm, init_predict_tqdm, init_validation_tqdm, and init_test_tqdm respectively. (If you want a quick and dirty way to do something to all progress bars, you could consider overriding the _update_bar method instead.)
| https://stackoverflow.com/questions/68427465/ |
Pytorch Custom Loss Function with If Statement | I am trying to create a custom loss function in Pytorch that evaluates each element of a tensor with an if statement and acts accordingly.
def my_loss(outputs,targets,fin_val):
if (outputs.detach()-fin_val.detach())*(targets.detach()-fin_val.detach())<0:
loss=3*(outputs-targets)**2
else:
loss=0.3*(outputs-targets)**2
return loss
I have also tried:
def my_loss(outputs,targets,fin_val):
if torch.gt((outputs.detach()-fin_val.detach())*(targets.detach()-fin_val.detach()),0):
loss=0.3*(outputs-targets)**2
else:
loss=3*(outputs-targets)**2
return loss
In both cases, I get the following error:
RuntimeError: Boolean value of Tensor with more than one value is ambiguous
TIA
| You are getting this error because the condition you are passing to the if statement is not a boolean but a tensor of booleans. Just check what (outputs.detach()-fin_val.detach())*(targets.detach()-fin_val.detach())<0 actually is: it is a tensor!
What you should be looking to do instead is handling this in vectorized form. You can use torch.where which is built for this use:
torch.where(condition=(outputs - fin_val)*(targets - fin_val) < 0,
x=3*(outputs-targets)**2,
y=0.3*(outputs-targets)**2)
This will return a tensor of "xs" and "ys" based on the point-wise condition tensor condition. Then, you could average it depending on your needs to get an actual loss value.
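Putting that together as a drop-in version of the original function (reducing with .mean() is an assumption on my part; pick whatever reduction your training loop expects):
def my_loss(outputs, targets, fin_val):
    sign = (outputs - fin_val) * (targets - fin_val)
    sq_err = (outputs - targets) ** 2
    loss = torch.where(sign < 0, 3 * sq_err, 0.3 * sq_err)
    return loss.mean()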
| https://stackoverflow.com/questions/68430520/ |
How to print the adjusted learning rate in PyTorch? | I use torch.optim.Adam and an exponential learning-rate decay in my PPO algorithm:
self.optimizer = torch.optim.Adam([
{'params': self.policy.actor.parameters(), 'lr': lr_actor},
{'params': self.policy.critic.parameters(), 'lr': lr_critic}
])
self.scheduler = torch.optim.lr_scheduler.ExponentialLR(self.optimizer, self.GAMMA)
The initial lr=0.1, and GAMMA=0.9.
Then I print the lr dynamically in my epoch with:
if time_step % update_timestep == 0:
ppo_agent.update()
print(f'__________start update_______________')
print(ppo_agent.optimizer.state_dict()['param_groups'][0]['lr'])
But something goes wrong with this, and the error is:
File "D:\Anaconda\lib\site-packages\torch\distributions\beta.py", line 36, in __init__
self._dirichlet = Dirichlet(concentration1_concentration0, validate_args=validate_args)
File "D:\Anaconda\lib\site-packages\torch\distributions\dirichlet.py", line 52, in __init__
super(Dirichlet, self).__init__(batch_shape, event_shape, validate_args=validate_args)
File "D:\Anaconda\lib\site-packages\torch\distributions\distribution.py", line 53, in __init__
raise ValueError("The parameter {} has invalid values".format(param))
ValueError: The parameter concentration has invalid values
Then, if I delete the print() statement, it works well!
So this is bothering me very much.
| You can get the learning rate like this:
self.optimizer.param_groups[0]["lr"]
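Since the optimizer in the question has two parameter groups (actor and critic), a small loop prints the learning rate of each group; the group order here is just assumed from how the optimizer was built:
for i, group in enumerate(ppo_agent.optimizer.param_groups):
    print(f"param group {i} lr = {group['lr']:.6f}")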
| https://stackoverflow.com/questions/68442914/ |
Problem with variable when working with python | Hello everyone, I'm trying to work with the Digit tactile sensor via the PyTouch library, and when I try to run the contact area code example I get this error:
DigitSensor with SensorDataSources.RAW data source
Traceback (most recent call last):
File "contactarea.py", line 31, in <module>
extract_surface_contact()
File "contactarea.py", line 17, in extract_surface_contact
major, minor = pt.ContactArea(sample_img, base=base_img)
File "/home/ayoub.hichri/.local/lib/python3.8/site-
packages/pytouch/tasks/contact_area.py", line 32, in __call__
) = self._compute_contact_area(contours, self.contour_threshold)
File "/home/ayoub.hichri/.local/lib/python3.8/site-
packages/pytouch/tasks/contact_area.py", line 107, in _compute_contact_area
return poly, major_axis, major_axis_end, minor_axis, minor_axis_end
UnboundLocalError: local variable 'poly' referenced before assignment
this is the code in file contact_area.py
def _compute_contact_area(self, contours, contour_threshold):
for contour in contours:
if len(contour) > contour_threshold:
ellipse = cv2.fitEllipse(contour)
poly = cv2.ellipse2Poly(
(int(ellipse[0][0]), int(ellipse[0][1])),
(int(ellipse[1][0] / 2), int(ellipse[1][1] / 2)),
int(ellipse[2]),
0,
360,
5,
)
center = np.array([ellipse[0][0], ellipse[0][1]])
a, b = (ellipse[1][0] / 2), (ellipse[1][1] / 2)
theta = (ellipse[2] / 180.0) * np.pi
major_axis = np.array(
[center[0] - b * np.sin(theta), center[1] + b * np.cos(theta)]
)
minor_axis = np.array(
[center[0] + a * np.cos(theta), center[1] + a * np.sin(theta)]
)
major_axis_end = 2 * center - major_axis
minor_axis_end = 2 * center - minor_axis
return poly, major_axis, major_axis_end, minor_axis, minor_axis_end
and this is the python code for my main file which im trying to run
import pytouch
from pytouch.handlers import ImageHandler
from pytouch.sensors import DigitSensor
from pytouch.tasks import ContactArea
def extract_surface_contact():
base_img_path = "/home/../Documents/digit.png"
sample_img_path = "/home/../Documents/Digit2.png"
base_img = ImageHandler(base_img_path).nparray
sample_img = ImageHandler(sample_img_path).nparray
sample_img_2 = sample_img.copy()
# initialize with default configuration of ContactArea task
pt = pytouch.PyTouch(DigitSensor, tasks=[ContactArea])
major, minor = pt.ContactArea(sample_img, base=base_img)
print("Major Axis: {0}, minor axis: {1}".format(*major, *minor))
ImageHandler.save("surface_contact_1.png", sample_img)
# initialize with custom configuration of ContactArea task
contact_area = ContactArea(base=base_img, contour_threshold=10)
major, minor = contact_area(sample_img_2)
print("Major Axis: {0}, minor axis: {1}".format(*major, *minor))
ImageHandler.save("surface_contact_2.png", sample_img_2)
if __name__ == "__main__":
extract_surface_contact()
Thanks in advance to anyone who helps me.
| Seems like the condition len(contour) > contour_threshold inside _compute_contact_area is never matched so the variable poly is never defined.
I recommend trying to print the length of contour before the if statement to check the values. If you want it to work even if the condition isn't matched just declare the variable at the start of your function with an empty value (poly = None)
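A minimal sketch of that second option, keeping the rest of _compute_contact_area unchanged and adding the suggested debug print:
def _compute_contact_area(self, contours, contour_threshold):
    poly = major_axis = major_axis_end = minor_axis = minor_axis_end = None  # defaults if nothing matches
    for contour in contours:
        print(len(contour), contour_threshold)  # debug: is any contour long enough?
        if len(contour) > contour_threshold:
            ...  # unchanged ellipse-fitting code from above
    return poly, major_axis, major_axis_end, minor_axis, minor_axis_end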
| https://stackoverflow.com/questions/68453045/ |
Key already registered with the same priority: GroupSpatialSoftmax | I get this error:
"Key already registered with the same priority: GroupSpatialSoftmax"
when i run:
import torch
Though I've installed the pytorch package through the pycharm settings > python interpreter.
Does anyone know how can I solve it?
Thanks
| Solved it myself!
I uninstalled the pytorch package and re-installed it so now it works
| https://stackoverflow.com/questions/68468122/ |
Defining Metrics on SageMaker to CloudWatch | From AWS Sagemaker Documentation, In order to track metrics in cloudwatch for custom ml algorithms (non-builtin), I read that I have to define my estimaotr as below.
But I am not sure how to alter my training script so that the metric definitions declared inside my estimators can pick up these values.
estimator =
Estimator(image_name=ImageName,
role='SageMakerRole',
instance_count=1,
instance_type='ml.c4.xlarge',
k=10,
sagemaker_session=sagemaker_session,
metric_definitions=[
{'Name': 'train:error', 'Regex': 'Train_error=(.*?);'},
{'Name': 'validation:error', 'Regex': 'Valid_error=(.*?);'}
]
)
In my training code, I have
for epoch in range(1, args.epochs + 1):
total_loss = 0
model.train()
for step, batch in enumerate(train_loader):
b_input_ids = batch[0].to(device)
b_input_mask = batch[1].to(device)
b_labels = batch[2].to(device)
model.zero_grad()
outputs = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels)
loss = outputs[0]
total_loss += loss.item()
loss.backward() # Computes the gradients
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) # Clip for error prevention
# modified based on their gradients, the learning rate, etc.
optimizer.step() # Back Prop
logger.info("Average training loss: %f\n", total_loss / len(train_loader))
Here, I want the train:error to pick up total_loss / len(train_loader) but I am not sure how to assign this.
| You have to define a regex to capture that pattern, try with this:
{'Name': 'Average training loss', 'Regex': 'Average training loss: ([0-9\.]+)'}
You can try the regex in a tool like this and see what happens.
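Concretely, the training script and the estimator just have to agree on the pattern; for example (the metric name here is an arbitrary choice of mine):
# in the training script
logger.info("Average training loss: %f", total_loss / len(train_loader))

# in the estimator
metric_definitions = [
    {'Name': 'train:avg_loss', 'Regex': 'Average training loss: ([0-9\.]+)'},
]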
| https://stackoverflow.com/questions/68470626/ |
PyTorch BatchNorm2d Calculation | I am trying to understand the mechanics of PyTorch BatchNorm2d through calculation. My example code:
import torch
from torch import nn
torch.manual_seed(123)
a = torch.rand(3,2,3,3)
print(a)
print(nn.BatchNorm2d(2)(a))
#print(a[:,0,:,:])
mean_by_plane_feature = torch.mean(a,dim=0)
std_by_plane_feature = torch.std(a,dim=0)
print(mean_by_plane_feature)
print(std_by_plane_feature)
Output:
tensor([[[[0.2961, 0.5166, 0.2517],
[0.6886, 0.0740, 0.8665],
[0.1366, 0.1025, 0.1841]],
[[0.7264, 0.3153, 0.6871],
[0.0756, 0.1966, 0.3164],
[0.4017, 0.1186, 0.8274]]],
[[[0.3821, 0.6605, 0.8536],
[0.5932, 0.6367, 0.9826],
[0.2745, 0.6584, 0.2775]],
[[0.8573, 0.8993, 0.0390],
[0.9268, 0.7388, 0.7179],
[0.7058, 0.9156, 0.4340]]],
[[[0.0772, 0.3565, 0.1479],
[0.5331, 0.4066, 0.2318],
[0.4545, 0.9737, 0.4606]],
[[0.5159, 0.4220, 0.5786],
[0.9455, 0.8057, 0.6775],
[0.6087, 0.6179, 0.6932]]]])
tensor([[[[-0.5621, 0.2574, -0.7273],
[ 0.8968, -1.3879, 1.5584],
[-1.1552, -1.2819, -0.9787]],
[[ 0.5369, -1.0117, 0.3888],
[-1.9141, -1.4584, -1.0073],
[-0.6859, -1.7524, 0.9171]]],
[[[-0.2425, 0.7925, 1.5103],
[ 0.5422, 0.7042, 1.9901],
[-0.6425, 0.7846, -0.6311]],
[[ 1.0298, 1.1880, -2.0520],
[ 1.2915, 0.5833, 0.5047],
[ 0.4593, 1.2495, -0.5645]]],
[[[-1.3761, -0.3375, -1.1132],
[ 0.3187, -0.1512, -0.8011],
[ 0.0269, 1.9569, 0.0493]],
[[-0.2561, -0.6096, -0.0199],
[ 1.3619, 0.8356, 0.3525],
[ 0.0933, 0.1281, 0.4116]]]], grad_fn=<NativeBatchNormBackward>)
tensor([[[0.2518, 0.5112, 0.4177],
[0.6049, 0.3724, 0.6937],
[0.2885, 0.5782, 0.3074]],
[[0.6999, 0.5455, 0.4349],
[0.6493, 0.5804, 0.5706],
[0.5721, 0.5507, 0.6515]]])
tensor([[[0.1572, 0.1521, 0.3810],
[0.0784, 0.2829, 0.4042],
[0.1594, 0.4411, 0.1406]],
[[0.1723, 0.3110, 0.3471],
[0.4969, 0.3340, 0.2211],
[0.1553, 0.4028, 0.2000]]])
I found that the output of BatchNorm is not what I expected to be. For example, the mean across batch for first plane, first feature = 0.2518 and the std is 0.1572. The normalized value for the first value = (0.2961-0.2518)/0.1572 = 0.2818 != -0.5621.
My questions:
Am I correct to calculate the means in this way (across batch, per plane and feature)? as I understand batchnorm is used to treat the issue of having different scales for different feature, so it should at least be per feature dimension, however I am not sure whether to sum across the "plane dimension" as well.
Any other modifications I need to do to get the same output from BatchNorm2d?
| This is the implementation of BatchNorm2d in pytorch (source1, source2). Using this, you can verify the operations you performed.
class MyBatchNorm2d(nn.BatchNorm2d):
def __init__(self, num_features, eps=1e-5, momentum=0.1,
affine=True, track_running_stats=True):
super(MyBatchNorm2d, self).__init__(
num_features, eps, momentum, affine, track_running_stats)
def forward(self, input):
self._check_input_dim(input)
exponential_average_factor = 0.0
if self.training and self.track_running_stats:
if self.num_batches_tracked is not None:
self.num_batches_tracked += 1
if self.momentum is None: # use cumulative moving average
exponential_average_factor = 1.0 / float(self.num_batches_tracked)
else: # use exponential moving average
exponential_average_factor = self.momentum
# calculate running estimates
if self.training:
mean = input.mean([0, 2, 3])
# use biased var in train
var = input.var([0, 2, 3], unbiased=False)
n = input.numel() / input.size(1)
with torch.no_grad():
self.running_mean = exponential_average_factor * mean\
+ (1 - exponential_average_factor) * self.running_mean
# update running_var with unbiased var
self.running_var = exponential_average_factor * var * n / (n - 1)\
+ (1 - exponential_average_factor) * self.running_var
else:
mean = self.running_mean
var = self.running_var
input = (input - mean[None, :, None, None]) / (torch.sqrt(var[None, :, None, None] + self.eps))
if self.affine:
input = input * self.weight[None, :, None, None] + self.bias[None, :, None, None]
return input
The outputs of nn.BatchNorm2d(2)(a) and MyBatchNorm2d(2)(a) are the same.
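For a quick sanity check without the custom class, you can reproduce the statistics BatchNorm2d uses directly on the tensor a from the question — the mean and (biased) variance are taken per channel over the batch and spatial dimensions [0, 2, 3]:
mean = a.mean(dim=[0, 2, 3], keepdim=True)
var = a.var(dim=[0, 2, 3], unbiased=False, keepdim=True)
normalized = (a - mean) / torch.sqrt(var + 1e-5)
print(torch.allclose(normalized, nn.BatchNorm2d(2)(a), atol=1e-6))  # True (default affine weight=1, bias=0)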
| https://stackoverflow.com/questions/68478856/ |
CUDA out of memory error, cannot reduce batch size | I want to run some experiments on my GPU device, but I get this error:
RuntimeError: CUDA out of memory. Tried to allocate 3.63 GiB (GPU 0;
15.90 GiB total capacity; 13.65 GiB already allocated; 1.57 GiB free; 13.68 GiB reserved in total by PyTorch)
I read about possible solutions here, and the common solution is this:
It is because of mini-batch of data does not fit onto GPU memory.
Just decrease the batch size. When I set batch size = 256 for cifar10
dataset I got the same error; Then I set the batch size = 128, it is
solved.
But in my case, it is a research project, and I want to have specific hyper-parameters and I can not reduce anything such as batch size.
Does anyone have a solution for this?
| As long as a single sample can fit into GPU memory, you do not have to reduce the effective batch size: you can do gradient accumulation.
Instead of updating the weights after every iteration (based on gradients computed from a too-small mini-batch), you accumulate the gradients over several mini-batches and only update the weights once enough examples have been seen.
This is nicely explained in this video.
Effectively, your training code would look something like this.
Suppose your desired batch size is large_batch, but only small_batch fits into GPU memory, with large_batch = small_batch * k.
Then you want to update the weights every k iterations:
train_data = DataLoader(train_set, batch_size=small_batch, ...)
opt.zero_grad() # this signifies the start of a large_batch
for i, (x, y) in enumerate(train_data):
pred = model(x)
loss = criterion(pred, y)
    loss.backward()  # gradients computed for small_batch
if (i+1) % k == 0 or (i+1) == len(train_data):
opt.step() # update the weights only after accumulating k small batches
opt.zero_grad() # reset gradients for accumulation for the next large_batch
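One detail worth adding (my own note, not part of the answer above): most loss criteria average over the mini-batch, so to keep the accumulated gradient on the same scale as a genuine large_batch update it is common to divide the loss by k before the backward call:
    loss = criterion(pred, y) / k   # so that k accumulated small batches ≈ one large batch
    loss.backward()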
| https://stackoverflow.com/questions/68479235/ |
How can I separate the last layer of deep network in pytorch? | My deep network is:
self.actor = nn.Sequential(
nn.Linear(state_dim, 256),
nn.Softplus(),
nn.Linear(256, 256),
nn.Softplus(),
nn.Linear(256, action_dim),
nn.Softplus())
Now, I would like the network to give two separate outputs, like this:
That is to say, only the last layer of the network differs between the two outputs, and the two last layers may have different activation functions.
How should I change my code above?
| The model you want to build is not sequential anymore, since there are two parallel branches at the end. You can keep the common trunk and separate with two additional separate layers. Something like:
class Model(nn.Module):
def __init__(self):
        super().__init__()
self.actor = nn.Sequential(
nn.Linear(state_dim, 256),
nn.Softplus(),
nn.Linear(256, 256),
nn.Softplus())
self.outA = nn.Sequential(
nn.Linear(256, action_dim),
nn.Softplus())
self.outB = nn.Sequential(
nn.Linear(256, action_dim),
nn.Softplus())
def forward(self, x):
features = self.actor(x)
return self.outA(features), self.outB(features)
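A quick usage sketch (state_dim and action_dim are assumed to be defined as in the question; the variable names below are placeholders):
model = Model()
states = torch.randn(4, state_dim)   # hypothetical batch of 4 states
out_a, out_b = model(states)         # two parallel outputs computed from the shared trunk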
| https://stackoverflow.com/questions/68480744/ |
PyTorch: Dogs vs Cat dataset with datasets.ImageFolder | I am new to PyTorch and trying to create a transfer-learning model.
I am using dogs vs cats dataset from Kaggle.
I am using ImageFolder to load the data, and it requires a folder for each class. But the photos in the test folder are mixed, so I'm not able to separate the images in the test folder. What can I do to solve this, apart from labeling all the test data by hand?
| You can create a custom Dataset class and wrap it inside a dataloader in Pytorch.
This link has great information on this topic
An overall structure to follow is
class Dog_and_Cat(Dataset):
    def __init__(self, img_dir, transform=None):
        # Build a zipped list of image paths and labels (Cat or Dog).
        # You can use glob.glob to collect the paths.
        # Ask yourself: how do I know the label of each image? Encode that reasoning here.
        # The result is a zipped list like [("img1.jpg", 0), ("img2.jpg", 1)], where 0 and 1 represent Cat and Dog.
        ...
    def __getitem__(self, idx):
        # What happens when you index the dataset: open the image, apply your transforms, and return it (with its label).
        ...
    def __len__(self):
        # Length of the dataset; ideally the length of the zipped list built above.
        ...
You can instantiate this class instead of ImageFolder and then pass an object of it to the DataLoader. This will solve your task.
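As a minimal concrete sketch of that structure — assuming (my assumption, not stated in the question) that a label can be read from the filename, e.g. cat.123.jpg / dog.456.jpg as in the Kaggle training images, and with a placeholder folder path; if your test files are truly unlabeled, return the file path instead of a label:
import glob, os
from PIL import Image
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms

class DogsVsCats(Dataset):
    def __init__(self, img_dir, transform=None):
        paths = glob.glob(os.path.join(img_dir, "*.jpg"))
        # label 0 for cats, 1 for dogs, based on the filename prefix
        self.samples = [(p, 0 if os.path.basename(p).startswith("cat") else 1) for p in paths]
        self.transform = transform

    def __getitem__(self, idx):
        path, label = self.samples[idx]
        img = Image.open(path).convert("RGB")
        if self.transform is not None:
            img = self.transform(img)
        return img, label

    def __len__(self):
        return len(self.samples)

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
test_loader = DataLoader(DogsVsCats("/content/dogs-vs-cats/test", transform=tfm),
                         batch_size=32, shuffle=False)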
Sarthak
| https://stackoverflow.com/questions/68486511/ |
How to adjust the learning rate after N number of epochs? | I am using Hugginface's Trainer.
How to adjust the learning rate after N number of epochs?
For example, I have an initial learning rate set to lr=2e-6, and I would like to change the learning rate to lr=1e-6 after the first epoch and stay on it the rest of the training.
I tried this so far:
optimizer = AdamW(model.parameters(),
lr = 2e-5,
eps = 1e-8
)
epochs = 5
batch_number = len(small_train_dataset) / 8
total_steps = batch_number * epochs
scheduler = get_linear_schedule_with_warmup(optimizer,
num_warmup_steps = 0,
num_training_steps = total_steps,
last_epoch=-1
)
I know that there is https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.LambdaLR.html#torch.optim.lr_scheduler.LambdaLR but here it drops learning rate every epoch but that is not what i want to do. I want it to drop after 1 epoch and then stay on it rest of the training process.
| You could train in two stages:
first, train with the desired initial learning rate, then create a second optimizer with the final learning rate for the remaining epochs. It is equivalent to the schedule you describe.
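A minimal sketch of that idea in a plain training loop (this assumes you drive the epochs yourself rather than letting Trainer manage them; rather than literally building a second optimizer, the same effect can be had by overwriting the learning rate in the existing optimizer's param_groups — train_one_epoch is a hypothetical helper of mine):
optimizer = AdamW(model.parameters(), lr=2e-6, eps=1e-8)
for epoch in range(epochs):
    train_one_epoch(model, optimizer)        # one pass over the training data
    if epoch == 0:                           # after the first epoch, drop the LR and keep it there
        for group in optimizer.param_groups:
            group["lr"] = 1e-6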
| https://stackoverflow.com/questions/68492369/ |
Pytorch cuda is unavailable even installed CUDA and pytorch with cuda. How to fix? | I'm trying to use pytorch with my GPU (RTX 3070) on my Windows machine using WSL2, but I couldn't get it work even though I followed the Nvidia guide (https://docs.nvidia.com/cuda/wsl-user-guide/index.html#abstract).
nvidia-smi.exe output:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 471.41 Driver Version: 471.41 CUDA Version: 11.4 |
|-------------------------------+----------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... WDDM | 00000000:0A:00.0 On | N/A |
| 0% 40C P5 12W / 220W | 1815MiB / 8192MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 1384 C+G Insufficient Permissions N/A |
| 0 N/A N/A 1628 C+G ...dows\System32\WWAHost.exe N/A |
| 0 N/A N/A 3172 C+G ...y\ShellExperienceHost.exe N/A |
| 0 N/A N/A 5940 C+G ...lPanel\SystemSettings.exe N/A |
| 0 N/A N/A 6360 C+G C:\Windows\explorer.exe N/A |
| 0 N/A N/A 7280 C+G ...artMenuExperienceHost.exe N/A |
| 0 N/A N/A 7732 C+G ...5n1h2txyewy\SearchApp.exe N/A |
| 0 N/A N/A 8256 C+G ...ekyb3d8bbwe\YourPhone.exe N/A |
| 0 N/A N/A 8780 C+G ...nputApp\TextInputHost.exe N/A |
| 0 N/A N/A 10032 C+G ...perience\NVIDIA Share.exe N/A |
| 0 N/A N/A 10732 C+G ...hyper\app-3.0.2\Hyper.exe N/A |
| 0 N/A N/A 10804 C+G ...kyb3d8bbwe\Calculator.exe N/A |
| 0 N/A N/A 10852 C+G ...in7x64\steamwebhelper.exe N/A |
| 0 N/A N/A 12180 C+G ...ge\Application\msedge.exe N/A |
| 0 N/A N/A 13880 C+G ...icrosoft VS Code\Code.exe N/A |
| 0 N/A N/A 14724 C+G ...b3d8bbwe\WinStore.App.exe N/A |
+-----------------------------------------------------------------------------+
nvcc -v output:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Wed_Jun__2_19:15:15_PDT_2021
Cuda compilation tools, release 11.4, V11.4.48
Build cuda_11.4.r11.4/compiler.30033411_0
dpkg -l | grep nvidia output:
ii libnvidia-cfg1-470:amd64 470.57.02-0ubuntu0.20.04.1 amd64 NVIDIA binary OpenGL/GLX configuration library
ii libnvidia-common-470 470.57.02-0ubuntu0.20.04.1 all Shared files used by the NVIDIA libraries
rc libnvidia-compute-450:amd64 450.51.05-0ubuntu1 amd64 NVIDIA libcompute package
rc libnvidia-compute-465:amd64 465.27-0ubuntu0.20.04.2 amd64 NVIDIA libcompute package
ii libnvidia-compute-470:amd64 470.57.02-0ubuntu0.20.04.1 amd64 NVIDIA libcompute package
ii libnvidia-decode-470:amd64 470.57.02-0ubuntu0.20.04.1 amd64 NVIDIA Video Decoding runtime libraries
ii libnvidia-encode-470:amd64 470.57.02-0ubuntu0.20.04.1 amd64 NVENC Video Encoding runtime library
ii libnvidia-extra-470:amd64 470.57.02-0ubuntu0.20.04.1 amd64 Extra libraries for the NVIDIA driver
ii libnvidia-fbc1-470:amd64 470.57.02-0ubuntu0.20.04.1 amd64 NVIDIA OpenGL-based Framebuffer Capture runtime library
ii libnvidia-gl-470:amd64 470.57.02-0ubuntu0.20.04.1 amd64 NVIDIA OpenGL/GLX/EGL/GLES GLVND libraries
and Vulkan ICD
ii libnvidia-ifr1-470:amd64 470.57.02-0ubuntu0.20.04.1 amd64 NVIDIA OpenGL-based Inband Frame Readback runtime library
ii libnvidia-ml-dev 10.1.243-3 amd64 NVIDIA Management Library (NVML) development files
rc nvidia-compute-utils-450 450.51.05-0ubuntu1 amd64 NVIDIA compute utilities
ii nvidia-compute-utils-470 470.57.02-0ubuntu0.20.04.1 amd64 NVIDIA compute utilities
ii nvidia-cuda-dev 10.1.243-3 amd64 NVIDIA CUDA development files
ii nvidia-cuda-doc 10.1.243-3 all NVIDIA CUDA and OpenCL documentation
ii nvidia-cuda-gdb 10.1.243-3 amd64 NVIDIA CUDA Debugger (GDB)
ii nvidia-cuda-toolkit 10.1.243-3 amd64 NVIDIA CUDA development toolkit
rc nvidia-dkms-450 450.51.05-0ubuntu1 amd64 NVIDIA DKMS package
ii nvidia-dkms-470 470.57.02-0ubuntu0.20.04.1 amd64 NVIDIA DKMS package
ii nvidia-driver-470 470.57.02-0ubuntu0.20.04.1 amd64 NVIDIA driver metapackage
rc nvidia-kernel-common-450 450.51.05-0ubuntu1 amd64 Shared files used with the kernel module
ii nvidia-kernel-common-470 470.57.02-0ubuntu0.20.04.1 amd64 Shared files used with the kernel module
ii nvidia-kernel-source-470 470.57.02-0ubuntu0.20.04.1 amd64 NVIDIA kernel source package
ii nvidia-opencl-dev:amd64 10.1.243-3 amd64 NVIDIA OpenCL development files
ii nvidia-prime 0.8.16~0.20.04.1 all Tools to enable NVIDIA's Prime
ii nvidia-profiler 10.1.243-3 amd64 NVIDIA Profiler for CUDA and OpenCL
ii nvidia-settings 450.51.05-0ubuntu1 amd64 Tool for configuring the NVIDIA graphics driver
ii nvidia-utils-470 470.57.02-0ubuntu0.20.04.1 amd64 NVIDIA driver support binaries
ii nvidia-visual-profiler 10.1.243-3 amd64 NVIDIA Visual Profiler for CUDA and OpenCL
ii screen-resolution-extra 0.18build1 all Extension for the nvidia-settings control panel
ii xserver-xorg-video-nvidia-470 470.57.02-0ubuntu0.20.04.1 amd64 NVIDIA binary Xorg driver
dpkg -l | grep cuda output:
ii cuda 11.4.0-1 amd64 CUDA meta-package
ii cuda-11-4 11.4.0-1 amd64 CUDA 11.4 meta-package
ii cuda-cccl-11-4 11.4.43-1 amd64 CUDA CCCL
ii cuda-command-line-tools-11-4 11.4.0-1 amd64 CUDA command-line tools
ii cuda-compiler-11-4 11.4.0-1 amd64 CUDA compiler
rc cuda-cudart-11-0 11.0.194-1 amd64 CUDA Runtime native Libraries
ii cuda-cudart-11-4 11.4.43-1 amd64 CUDA Runtime native Libraries
ii cuda-cudart-dev-11-4 11.4.43-1 amd64 CUDA Runtime native dev links, headers
ii cuda-cuobjdump-11-4 11.4.43-1 amd64 CUDA cuobjdump
ii cuda-cupti-11-4 11.4.65-1 amd64 CUDA profiling tools runtime libs.
ii cuda-cupti-dev-11-4 11.4.65-1 amd64 CUDA profiling tools interface.
ii cuda-cuxxfilt-11-4 11.4.43-1 amd64 CUDA cuxxfilt
ii cuda-demo-suite-11-4 11.4.43-1 amd64 Demo suite for CUDA
ii cuda-documentation-11-4 11.4.43-1 amd64 CUDA documentation
ii cuda-driver-dev-11-4 11.4.43-1 amd64 CUDA Driver native dev stub library
ii cuda-gdb-11-4 11.4.55-1 amd64 CUDA-GDB
ii cuda-libraries-11-4 11.4.0-1 amd64 CUDA Libraries 11.4 meta-package
ii cuda-libraries-dev-11-4 11.4.0-1 amd64 CUDA Libraries 11.4 development meta-package
ii cuda-memcheck-11-4 11.4.43-1 amd64 CUDA-MEMCHECK
ii cuda-nsight-11-4 11.4.43-1 amd64 CUDA nsight
ii cuda-nsight-compute-11-4 11.4.0-1 amd64 NVIDIA Nsight Compute
ii cuda-nsight-systems-11-4 11.4.0-1 amd64 NVIDIA Nsight Systems
ii cuda-nvcc-11-4 11.4.48-1 amd64 CUDA nvcc
ii cuda-nvdisasm-11-4 11.4.43-1 amd64 CUDA disassembler
ii cuda-nvml-dev-11-4 11.4.43-1 amd64 NVML native dev links, headers
ii cuda-nvprof-11-4 11.4.43-1 amd64 CUDA Profiler tools
ii cuda-nvprune-11-4 11.4.43-1 amd64 CUDA nvprune
ii cuda-nvrtc-11-4 11.4.50-1 amd64 NVRTC native runtime libraries
ii cuda-nvrtc-dev-11-4 11.4.50-1 amd64 NVRTC native dev links, headers
ii cuda-nvtx-11-4 11.4.43-1 amd64 NVIDIA Tools Extension
ii cuda-nvvp-11-4 11.4.43-1 amd64 CUDA Profiler tools
ii cuda-repo-ubuntu2004-11-0-local 11.0.2-450.51.05-1 amd64 cuda repository configuration files
ii cuda-repo-wsl-ubuntu-11-4-local 11.4.0-1 amd64 cuda repository configuration files
ii cuda-runtime-11-4 11.4.0-1 amd64 CUDA Runtime 11.4 meta-package
ii cuda-samples-11-4 11.4.43-1 amd64 CUDA example applications
ii cuda-sanitizer-11-4 11.4.54-1 amd64 CUDA Sanitizer
rc cuda-toolkit-11-0 11.0.2-1 amd64 CUDA Toolkit 11.0 meta-package
ii cuda-toolkit-11-4 11.4.0-1 amd64 CUDA Toolkit 11.4 meta-package
ii cuda-toolkit-11-4-config-common 11.4.43-1 all Common config package for CUDA Toolkit 11.4.
ii cuda-toolkit-11-config-common 11.4.43-1 all Common config package for CUDA Toolkit 11.
ii cuda-toolkit-config-common 11.4.43-1 all Common config package for CUDA Toolkit.
ii cuda-tools-11-4 11.4.0-1 amd64 CUDA Tools meta-package
rc cuda-visual-tools-11-0 11.0.2-1 amd64 CUDA visual tools
ii cuda-visual-tools-11-4 11.4.0-1 amd64 CUDA visual tools
ii libcudart10.1:amd64 10.1.243-3 amd64 NVIDIA CUDA Runtime Library
ii nvidia-cuda-dev 10.1.243-3 amd64 NVIDIA CUDA development files
ii nvidia-cuda-doc 10.1.243-3 all NVIDIA CUDA and OpenCL documentation
ii nvidia-cuda-gdb 10.1.243-3 amd64 NVIDIA CUDA Debugger (GDB)
ii nvidia-cuda-toolkit 10.1.243-3 amd64 NVIDIA CUDA development toolkit
The pytorch version I installed:
torch 1.9.0+cu111
torchaudio 0.9.0
torchvision 0.10.0+cu111
But pytorch tells me that cuda is not available:
Python 3.8.10 (default, Jun 2 2021, 10:49:15)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
False
Does anyone know if I installed all the packages I need? I have tried many solutions posted on the internet, but it still doesn't work.
| My environment is (Ubuntu 20.04 with NVIDIA GTX 1080Ti):
$ nvidia-smi | grep CUDA
| NVIDIA-SMI 470.74 Driver Version: 470.74 CUDA Version: 11.4 |
$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Sun_Aug_15_21:14:11_PDT_2021
Cuda compilation tools, release 11.4, V11.4.120
Build cuda_11.4.r11.4/compiler.30300941_0
After the installation of CUDA and cuDNN, I ran the following command, taken from https://pytorch.org/get-started/locally/ with the options (PyTorch Stable 1.9.1, Linux, Pip, Python, CUDA 11.1).
pip3 install torch==1.9.1+cu111 torchvision==0.10.1+cu111 torchaudio==0.9.1 -f https://download.pytorch.org/whl/torch_stable.html
Find packages:
(test-py38) ➜ ~ conda list | grep torch
_pytorch_select 0.1 cpu_0 defaults
pytorch 1.8.1 cpu_py38h60491be_0 defaults
torch 1.9.1+cu111 <pip>
torchaudio 0.9.1 <pip>
torchvision 0.10.1+cu111 <pip>
Then:
$ python
Python 3.8.11 (default, Aug 3 2021, 15:09:35)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
True
| https://stackoverflow.com/questions/68493965/ |
AttributeError: 'LSTMClassifier' object has no attribute 'log_softmax' | While making predictions with my LSTM model, I am getting the error AttributeError: 'LSTMClassifier' object has no attribute 'log_softmax'. Can anyone explain what I am doing wrong?
class LSTMClassifier(nn.Module):
def __init__(self, input_dim, hidden_dim, layer_dim, output_dim):
super().__init__()
self.hidden_dim = hidden_dim
self.layer_dim = layer_dim
self.lstm = nn.LSTM(input_dim, hidden_dim, layer_dim, batch_first=True)
self.fc = nn.Linear(hidden_dim, output_dim)
self.batch_size = None
self.hidden = None
def forward(self, x):
h0, c0 = self.init_hidden(x)
out, (hn, cn) = self.lstm(x, (h0, c0))
out = self.fc(out[:, -1, :])
return out
def init_hidden(self, x):
h0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim)
c0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim)
print(x.size(0))
print(layer_dim)
return [t.to(device) for t in (h0, c0)]
test_dl = DataLoader(tst_data, batch_size=64, shuffle=False)
test = []
print('Predicting on test dataset')
for batch, _ in tst_data:
batch=batch.to(device)
print(batch.shape)
out = model.to(device)
y_hat = F.log_softmax(out, dim=1).argmax(dim=1) ### at this line I am getting error
test += y_hat.tolist()
Thank you in advance!
Error :: AttributeError: 'LSTMClassifier' object has no attribute 'log_softmax'
Traceback:::
AttributeError Traceback (most recent call last)
<ipython-input-74-df6f970f9b87> in <module>()
8 print(batch.shape)
9 out = model.to(device)
---> 10 y_hat = F.log_softmax(out, dim=1).argmax(dim=1)
11
12 test += y_hat.tolist()
1 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in __getattr__(self, name)
1129 return modules[name]
1130 raise AttributeError("'{}' object has no attribute '{}'".format(
-> 1131 type(self).__name__, name))
1132
1133 def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:
AttributeError: 'LSTMClassifier' object has no attribute 'log_softmax'
| Your prediction loop does not work: you never pass the input batch to the model, so out is not an output tensor but the model object itself, which of course can't be passed into an activation function.
You have to do this:
model = model.to(device)
for batch, _ in tst_data:
batch = batch.to(device)
# pass your input batch to the model like this
out = model.train()(batch)
# now you can calculate the log-softmax for out
y_hat = F.log_softmax(out, dim=1).argmax(dim=1)
test += y_hat.tolist()
| https://stackoverflow.com/questions/68505119/ |
How can we use Pytorch Autograd for sequence optimization (in a for loop)? | I want to optimize a sequence in a for loop using Pytorch Autograd. I am using LBFGS.
loss = 0.0
for i in range(10):
x = f(x,z[i])
loss = loss + mse_loss(x,x_GT)
Say the sequence length is 10. I want to optimize x as well as z(z is a tensor array), these are learnable parameters. Note the x will be updated in the loop.
x_GT is ground truth data.
To make this run, I have to call:
loss.backward(retain_graph=True)
Is there a better way to do so (To make it run faster)?
| The code you provided is actually perfectly fine:
loss = torch.zeros(1)
for i in range(10):
x = f(x, z[i])
loss += mse_loss(x, x_GT)
It will accumulate the loss over the loop steps. The backward pass only needs to be called once, though, so you are not required to retain the graph on it:
>>> loss.backward()
I don't believe that dropping retain_graph will make your code run noticeably faster; retaining the graph mainly adds to the memory load, since all activations have to be kept around in expectation of a second backward pass.
| https://stackoverflow.com/questions/68507559/ |
Use MS-COCO format as input to PyTorch MASKRCNN | I am trying to train a MaskRCNN Image Segmentation model with my custom dataset in MS-COCO format.
I am trying to use the polygon masks as the input but cannot get it to fit the format for my model.
My data looks like this:
{"id": 145010,
"image_id": 101953,
"category_id": 1040,
"segmentation": [[140.0, 352.5, 131.0, 351.5, 118.0, 344.5, 101.50000000000001, 323.0, 94.5, 303.0, 86.5, 292.0, 52.0, 263.5, 35.0, 255.5, 20.5, 240.0, 11.5, 214.0, 14.5, 190.0, 22.0, 179.5, 53.99999999999999, 170.5, 76.0, 158.5, 88.5, 129.0, 100.5, 111.0, 152.0, 70.5, 175.0, 65.5, 217.0, 64.5, 272.0, 48.5, 296.0, 56.49999999999999, 320.5, 82.0, 350.5, 135.0, 374.5, 163.0, 382.5, 190.0, 381.5, 205.99999999999997, 376.5, 217.0, 371.0, 221.5, 330.0, 229.50000000000003, 312.5, 240.0, 310.5, 291.0, 302.5, 310.0, 288.0, 326.5, 259.0, 337.5, 208.0, 339.5, 171.0, 349.5]],
"area": 73578.0,
"bbox": [11.5, 11.5, 341.0, 371.0],
"iscrowd": 0}
I have one object in this image, hence one item for segmentation and bbox. Segmentation values are the pixels of the polygon, hence have different sizes for different objects.
Could anyone help me with this?
| To manage COCO-formatted datasets you can use this repo. It gives you classes that you can instantiate from your annotation file, making it really easy to use and access the data.
I don't know which implementation you are using, but if it's something like this tutorial, this piece of code might give you at least some ideas on how to solve your problem:
class CocoDataset(torch.utils.data.Dataset):
def __init__(self, dataset_dir, subset, transforms):
dataset_path = os.path.join(dataset_dir, subset)
ann_file = os.path.join(dataset_path, "annotation.json")
self.imgs_dir = os.path.join(dataset_path, "images")
self.coco = COCO(ann_file)
self.img_ids = self.coco.getImgIds()
self.transforms = transforms
def __getitem__(self, idx):
'''
Args:
idx: index of sample to be fed
return:
dict containing:
- PIL Image of shape (H, W)
- target (dict) containing:
- boxes: FloatTensor[N, 4], N being the n° of instances and it's bounding
boxe coordinates in [x0, y0, x1, y1] format, ranging from 0 to W and 0 to H;
- labels: Int64Tensor[N], class label (0 is background);
- image_id: Int64Tensor[1], unique id for each image;
- area: Tensor[N], area of bbox;
- iscrowd: UInt8Tensor[N], True or False;
- masks: UInt8Tensor[N, H, W], segmantation maps;
'''
img_id = self.img_ids[idx]
img_obj = self.coco.loadImgs(img_id)[0]
anns_obj = self.coco.loadAnns(self.coco.getAnnIds(img_id))
img = Image.open(os.path.join(self.imgs_dir, img_obj['file_name']))
        # list comprehension is a bit slow here, might be better changing it later
        bboxes = [ann['bbox'] for ann in anns_obj]
        # convert COCO's [x, y, w, h] boxes to the [x0, y0, x1, y1] format expected by torchvision
        bboxes = [[x, y, x + w, y + h] for x, y, w, h in bboxes]
masks = [self.coco.annToMask(ann) for ann in anns_obj]
areas = [ann['area'] for ann in anns_obj]
boxes = torch.as_tensor(bboxes, dtype=torch.float32)
labels = torch.ones(len(anns_obj), dtype=torch.int64)
masks = torch.as_tensor(masks, dtype=torch.uint8)
image_id = torch.tensor([idx])
area = torch.as_tensor(areas)
iscrowd = torch.zeros(len(anns_obj), dtype=torch.int64)
target = {}
target["boxes"] = boxes
target["labels"] = labels
target["masks"] = masks
target["image_id"] = image_id
target["area"] = area
target["iscrowd"] = iscrowd
if self.transforms is not None:
img, target = self.transforms(img, target)
return img, target
def __len__(self):
return len(self.img_ids)
Once again, this is just a draft and meant to give tips.
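As a small addition of mine (not part of the original answer): to feed such a dataset to the torchvision detection models, it is commonly wrapped in a DataLoader with a collate function that keeps the variable-sized targets as tuples — the paths below are placeholders:
dataset = CocoDataset(dataset_dir="data", subset="train", transforms=None)
data_loader = torch.utils.data.DataLoader(
    dataset, batch_size=2, shuffle=True,
    collate_fn=lambda batch: tuple(zip(*batch)))
images, targets = next(iter(data_loader))   # a tuple of images and a tuple of target dicts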
| https://stackoverflow.com/questions/68513782/ |
Parameters for LSTM with CNN on sequential data | I am doing a classification problem with ECG data. I built an LSTM model, but its accuracy is not very good, so I am thinking of combining it with a CNN: I plan to pass the data through the CNN first and then feed the CNN output to the LSTM. However, I have noticed that CNNs are mostly used for image classification. I have sequential data with 4000 time steps. Could you please help me define the parameters of the CNN model?
Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)
Can someone explain what the in_channels, out_channels, kernel_size and stride would be for sequence data with 4000 time steps?
| Well, that's yours to define; it's the actual architectural decision you need to take to construct your model. The following is not a solution to your question, but it might give you some ideas.
You could pass each timestep through the CNN and retrieve a sequence of feature vectors corresponding to the CNN's outputs at consecutive time steps. Your CNN input would be shaped as (batch_size, channel, height, width) and output something like (batch_size, feature_length). Stacking the timesteps results would give you (batch_size, sequence_length, feature_length).
You could use a 3D convolutional layer, in that case, you can work straight away with shape (batch_size, sequence_length, channel, height, width). This is much more computation-intensive, since you are already planning on using an LSTM, it might be a little over-complex.
The number of channels, the kernel sizes and the number of filters in each convolutional layer are not obvious choices. You need to decide on them based on your setup: how large your dataset is, how many classes you have, and how complex the task is (if it is not a plain classification problem).
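As a concrete illustration of what those Conv arguments mean for sequence data — this is my own sketch, and it uses Conv1d rather than Conv2d, since an ECG trace is one-dimensional (in_channels = number of ECG leads, and the 4000 time steps play the role of the spatial dimension):
import torch
import torch.nn as nn

x = torch.randn(8, 1, 4000)          # (batch, in_channels = 1 lead, 4000 time steps)
cnn = nn.Sequential(
    nn.Conv1d(in_channels=1, out_channels=32, kernel_size=7, stride=1, padding=3),
    nn.ReLU(),
    nn.MaxPool1d(4),                 # shorten the 4000-step sequence before the LSTM
)
lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)

feats = cnn(x)                        # (8, 32, 1000)
out, _ = lstm(feats.permute(0, 2, 1)) # LSTM expects (batch, seq_len, features)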
My best advice is to start from a well-known CNN architecture such as VGG or ResNet and work from there. Better yet, look at the literature and see if someone else has faced this problem before; you will most likely find interesting ideas that will help shape your project.
| https://stackoverflow.com/questions/68514274/ |
Error with pytorch compilation: LAPACK library not found in compilation. How to solve? | I am already desperate about this problem that I am having.
RuntimeError: inverse: LAPACK library not found in compilation
The easiest way to reproduce it is:
import torch
A = torch.rand(5,5)
torch.inverse(A)
I run this inside a docker container. The part of the dockerfile that compiles pytorch is:
#PyTorch
RUN pip3 install astunparse numpy ninja pyyaml mkl mkl-include setuptools cmake cffi typing_extensions future six requests dataclasses
ENV PYTORCH_INST_VERSION="v1.8.1"
RUN git clone --recursive --branch ${PYTORCH_INST_VERSION} https://github.com/pytorch/pytorch pytorch-src && \
cd pytorch-src && \
export MAX_JOBS=$((`nproc` - 2)) && \
export TORCH_CUDA_ARCH_LIST=${CUDA_ARCH} && \
python3 setup.py install --prefix=/opt/pytorch && \
cp -r /opt/pytorch/lib/python3.8/site-packages/* /usr/lib/python3/dist-packages/ && \
cd /opt && \
rm -rf /opt/pytorch-src
I am not super experienced so I don't know if I need to provide additional details. Please tell me if so.
| I solved my own problem.
I added apt-get install liblapack-dev to the Dockerfile before the PyTorch compilation step. Then I rebuilt the image, ran the container again, and it worked.
| https://stackoverflow.com/questions/68517600/ |
How to fix input and parameter tensors are not at the same device? | I have seen other people have this error, and I have tried to follow the steps to resolve it, but I continue to receive it: "RuntimeError: Input and parameter tensors are not at the same device, found input tensor at cpu and parameter tensor at cuda:0"
I call both model.to(device) and input_seq.to(device). The error says it found an input tensor on the CPU, but all input data should be on the GPU after input_seq.to(device). Below is the full code.
text = ['hey how are you','good i am fine','have a nice day']
# Join all the sentences together and extract the unique characters from the combined sentences
chars = set(''.join(text))
# Creating a dictionary that maps integers to the characters
int2char = dict(enumerate(chars))
# Creating another dictionary that maps characters to integers
char2int = {char: ind for ind, char in int2char.items()}
# Finding the length of the longest string in our data
maxlen = len(max(text, key=len))
# Padding
# A simple loop that loops through the list of sentences and adds a ' ' whitespace until the length of
# the sentence matches the length of the longest sentence
for i in range(len(text)):
while len(text[i])<maxlen:
text[i] += ' '
# Creating lists that will hold our input and target sequences
input_seq = []
target_seq = []
for i in range(len(text)):
# Remove last character for input sequence
input_seq.append(text[i][:-1])
# Remove first character for target sequence
target_seq.append(text[i][1:])
print("Input Sequence: {}\nTarget Sequence: {}".format(input_seq[i], target_seq[i]))
for i in range(len(text)):
input_seq[i] = [char2int[character] for character in input_seq[i]]
target_seq[i] = [char2int[character] for character in target_seq[i]]
dict_size = len(char2int)
seq_len = maxlen - 1
batch_size = len(text)
def one_hot_encode(sequence, dict_size, seq_len, batch_size):
# Creating a multi-dimensional array of zeros with the desired output shape
features = np.zeros((batch_size, seq_len, dict_size), dtype=np.float32)
# Replacing the 0 at the relevant character index with a 1 to represent that character
for i in range(batch_size):
for u in range(seq_len):
features[i, u, sequence[i][u]] = 1
return features
# Input shape --> (Batch Size, Sequence Length, One-Hot Encoding Size)
input_seq = one_hot_encode(input_seq, dict_size, seq_len, batch_size)
input_seq = torch.from_numpy(input_seq)
target_seq = torch.Tensor(target_seq)
# torch.cuda.is_available() checks and returns a Boolean True if a GPU is available, else it'll return False
is_cuda = torch.cuda.is_available()
# If we have a GPU available, we'll set our device to GPU. We'll use this device variable later in our code.
if is_cuda:
device = torch.device("cuda")
print("GPU is available")
else:
device = torch.device("cpu")
print("GPU not available, CPU used")
class Model(nn.Module):
def __init__(self, input_size, output_size, hidden_dim, n_layers):
super(Model, self).__init__()
# Defining some parameters
self.hidden_dim = hidden_dim
self.n_layers = n_layers
#Defining the layers
# RNN Layer
self.rnn = nn.RNN(input_size, hidden_dim, n_layers, batch_first=True)
# Fully connected layer
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, x):
batch_size = x.size(0)
# Initializing hidden state for first input using method defined below
hidden = self.init_hidden(batch_size)
# Passing in the input and hidden state into the model and obtaining outputs
out, hidden = self.rnn(x, hidden)
# Reshaping the outputs such that it can be fit into the fully connected layer
out = out.contiguous().view(-1, self.hidden_dim)
out = self.fc(out)
return out, hidden
def init_hidden(self, batch_size):
# This method generates the first hidden state of zeros which we'll use in the forward pass
# We'll send the tensor holding the hidden state to the device we specified earlier as well
hidden = torch.zeros(self.n_layers, batch_size, self.hidden_dim)
return hidden
# Instantiate the model with hyperparameters
model = Model(input_size=dict_size, output_size=dict_size, hidden_dim=12, n_layers=1)
# We'll also set the model to the device that we defined earlier (default is CPU)
model.to(device)
# Define hyperparameters
n_epochs = 100
lr=0.01
# Define Loss, Optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
# Training Run
for epoch in range(1, n_epochs + 1):
optimizer.zero_grad() # Clears existing gradients from previous epoch
input_seq.to(device)
target_seq.to(device)
output, hidden = model(input_seq)
loss = criterion(output, target_seq.view(-1).long())
loss.backward() # Does backpropagation and calculates gradients
optimizer.step() # Updates the weights accordingly
if epoch%10 == 0:
print('Epoch: {}/{}.............'.format(epoch, n_epochs), end=' ')
print("Loss: {:.4f}".format(loss.item()))
| Unlike the to method available on nn.Module (such as your model), the to method on Tensors is not an in-place operation! As stated on the documentation page:
This method [nn.Module.to] modifies the module in-place.
vs for Tensor.to:
[...] the returned tensor is a copy of self with the desired [...] torch.device.
In other words, you need to reassign the tensors in order to effectively send them to the device.
input_seq = input_seq.to(device)
target_seq = target_seq.to(device)
While an nn.Module won't need this treatment:
model.to(device)
To clearly understand what happens here, take this example:
>>> x = torch.zeros(1) # on cpu
>>> y = x.cuda() # y is a copy of x
>>> y.device # placed on cuda device
'cuda:0'
>>> x.device # but x remains on the original device
'cpu'
| https://stackoverflow.com/questions/68521735/ |
Difference between transformers schedulers and Pytorch schedulers | Transformers also provide their own schedulers for learning rates like get_constant_schedule, get_constant_schedule_with_warmup, etc. They are again returning torch.optim.lr_scheduler.LambdaLR (torch scheduler). Is the warmup_steps the only difference between the two?
How can we create a custom transformer-based scheduler similar to other torch schedulers like lr_scheduler.MultiplicativeLR, lr_scheduler.StepLR, lr_scheduler.ExponentialLR?
| You can create a custom scheduler by just creating a function in a class that takes in an optimizer and its state dicts and edits the values in its param_groups.
To understand how to structure this in a class, just take a look at how PyTorch implements its own schedulers and use the same structure, changing the functionality to your liking.
The permalink I found to be a good reference is over here
EDIT After comments:
This is like a template you can use
from torch.optim import lr_scheduler
class MyScheduler(lr_scheduler._LRScheduler):  # inheriting from _LRScheduler is optional
    def __init__(self, optimizer, step_size, gamma=0.1, last_epoch=-1, verbose=False):
        # Put the variables you will need for updating the schedule here (gamma, step size,
        # warmup steps, ...) *before* calling the parent constructor, because it already
        # calls self.step(), which in turn calls self.get_lr().
        self.step_size = step_size
        self.gamma = gamma
        super(MyScheduler, self).__init__(optimizer, last_epoch, verbose)

    def get_lr(self):
        # How you use the variables above to compute the new learning rates is up to you.
        # As an example, decay every `step_size` epochs:
        if self.last_epoch == 0 or self.last_epoch % self.step_size != 0:
            return [group["lr"] for group in self.optimizer.param_groups]
        return [group["lr"] * self.gamma for group in self.optimizer.param_groups]
You can add more functions for increased functionality. Or you can skip subclassing entirely and just use a plain function to update your learning rate: it takes in the optimizer, changes optimizer.param_groups[0]["lr"], and returns the updated optimizer.
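A quick usage sketch for the class above (model and train_one_epoch are placeholders of mine):
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = MyScheduler(optimizer, step_size=5)
for epoch in range(20):
    train_one_epoch(model, optimizer)   # hypothetical training helper
    scheduler.step()                    # steps the schedule once per epoch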
| https://stackoverflow.com/questions/68523070/ |
Summing vector pairs efficiently in pytorch | I'm trying to calculate the summation of each pair of rows in a matrix. Suppose I have an m x n matrix, say one like
[[1,2,3],
[4,5,6],
[7,8,9]]
and I want to create a matrix of the summations of all pairs of rows. So, for the above matrix, we would want
[[5,7,9],
[8,10,12],
[11,13,15]]
In general, I think the new matrix will be (m choose 2) x n. For the above example in pytorch, I ran
import torch
x = torch.tensor([[1,2,3], [4,5,6], [7,8,9]])
y = x[None] + x[:, None]
torch.cat((y[0, 1:3, :], y[1, 2:3, :]))
which manually creates the matrix I am looking for. However, I am struggling to think of a way to create the output without manually specifying indices and without using a for-loop. Is there even a way to create such a matrix for an arbitrary matrix without the use of a for-loop?
| You can try using this function:
def sum_rows(x):
y = x[None] + x[:, None]
ind = torch.tril_indices(x.shape[0], x.shape[0], offset=-1)
return y[ind[0], ind[1]]
Because you want the pair sums sum_matrix[i, j] with i < j (i > j would work just as well, since addition is symmetric), you can simply take the lower/upper-triangle indices of your 3D tensor. This still loops internally, AFAIK, but it does the job for variable-sized inputs.
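Applied to the example matrix from the question:
x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
sum_rows(x)
# tensor([[ 5,  7,  9],
#         [ 8, 10, 12],
#         [11, 13, 15]])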
| https://stackoverflow.com/questions/68524558/ |
RTX 3070 compatibility with Pytorch |
NVIDIA GeForce RTX 3070 with CUDA capability sm_86 is not compatible
with the current PyTorch installation. The current PyTorch install
supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
So I'm currently trying to train a neural network but I'm getting this issue. It seems that the GPU model I have is not compatible with the version of PyTorch that I have.
The output of my nvcc -V is:
Cuda compilation tools, release 11.4, V11.4.48
Build cuda_11.4.r11.4/compiler.30033411_0
and my PyTorch version is 1.9.0.
I've tried changing the CUDA version from what was initially 10 to 11.4 and there was no change. Any help would be hugely appreciated.
| It might be because you have installed a torch package built against CUDA 10.* (e.g. torch==1.9.0+cu102). I'd suggest trying:
pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
| https://stackoverflow.com/questions/68529258/ |
Forcing Ratio on Batches in PyTorch DataLoader | I came across a binary classification task where the data is heavily imbalanced (I'm looking at 80:1).
Through undersampling, the data ratio is now at 20:1.
Now, the undersampled/treated data is loaded into the dataloader as below (this is an NLP task).
train_inputs = torch.tensor(input_ids)
train_labels = torch.tensor(labels)
train_masks = torch.tensor(attention_masks)
train_data = TensorDataset(train_inputs, train_masks, train_labels)
if is_distributed:
train_sampler = torch.utils.data.distributed.DistributedSampler(dataset)
else:
train_sampler = RandomSampler(train_data)
train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=batch_size)
For each batch, I want to make sure/enforce that it has a 20:1 ratio between the two classes. Is there a PyTorch built-in method that allows me to enforce such a condition?
| You can use WeightedRandomSampler with replacement set to true.
Multiplying the weight of the positive examples by 20 biases the expected sampling ratio towards 20:1 (assuming here that the positives are the class you want drawn 20 times more often). Strictly speaking this is exact only when the two classes have equal counts; in general, set each sample's weight to desired_class_share / class_count.
# labels is a numpy array of shape n,1 containing 1 and 0 for each datapoint
weights = np.ones(labels.shape)
weights[labels==1] *= 20
# samples will now be drawn with an expected ~20:1 ratio (exact per-batch ratios are not guaranteed)
sampler = WeightedRandomSampler(weights=weights, num_samples=len(labels), replacement=True)
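To actually use it, the sampler replaces the RandomSampler in the question's DataLoader (note that sampler and shuffle are mutually exclusive):
train_dataloader = DataLoader(train_data, sampler=sampler, batch_size=batch_size)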
Note that setting replacement to True is necessary for the sampler to reproduce the ratio, but the same example may then be drawn more than once during training.
However if the ratio is naturally occurring in your dataset you can leave it to False.
IMHO:
Random sampling does not ensure a 20:1 ratio for each individual batch, only in expectation (on average the ratio of a batch/sample will tend to the ratio of the dataset/population), so it should not affect the average gradient during training (in theory). But I see how in practice you might want more control.
| https://stackoverflow.com/questions/68542721/ |
"SyntaxError: Can't assign to operator"; "ipykernel_launcher.py: error: unrecognized arguments" | I want to execute a train.py script inside a colab or jupyter notebook.
Before running I have to set some variables, e.g. dataset-type.
I did it as I would type it into a terminal, but I get a SyntaxError.
--dataset-type=voc
--dataset-type='voc'
^
SyntaxError: can't assign to operator
Executing from a terminal works fine but how do I declare the variables correctly?
Here is some code of train.py:
parser = argparse.ArgumentParser(description='Single Shot MultiBox Detector Training With PyTorch')
# Params for datasets
parser.add_argument('--dataset-type', default="voc", type=str,
help='Specify dataset type. Currently supports voc and open_images.')
parser.add_argument('--datasets', '--data', nargs='+', default=["data"], help='Dataset directory path')
parser.add_argument('--balance-data', action='store_true',
help="Balance training data by down-sampling more frequent labels.")
I tried:
from sys import argv
argv.append('--dataset-type=voc')
This solved the SyntaxError.
But I get the following error in the end. There are more variables to set, but --dataset-type is still listed.
usage: ipykernel_launcher.py [-h] [--dataset-type DATASET_TYPE]
[--datasets DATASETS [DATASETS ...]]
[--balance-data] [--net NET] [--freeze-base-net]
[--freeze-net] [--mb2-width-mult MB2_WIDTH_MULT]
[--base-net BASE_NET]
[--pretrained-ssd PRETRAINED_SSD]
[--resume RESUME] [--lr LR] [--momentum MOMENTUM]
[--weight-decay WEIGHT_DECAY] [--gamma GAMMA]
[--base-net-lr BASE_NET_LR]
[--extra-layers-lr EXTRA_LAYERS_LR]
[--scheduler SCHEDULER] [--milestones MILESTONES]
[--t-max T_MAX] [--batch-size BATCH_SIZE]
[--num-epochs NUM_EPOCHS]
[--num-workers NUM_WORKERS]
[--validation-epochs VALIDATION_EPOCHS]
[--debug-steps DEBUG_STEPS] [--use-cuda USE_CUDA]
[--checkpoint-folder CHECKPOINT_FOLDER]
ipykernel_launcher.py: error: unrecognized arguments: -f /root/.local/share/jupyter/runtime/kernel-f89b4ea1-0c84-4617-af88-0191a91639c0.json
An exception has occurred, use %tb to see the full traceback.
SystemExit: 2
/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py:2890: UserWarning: To exit: use 'exit', 'quit', or Ctrl-D.
warn("To exit: use 'exit', 'quit', or Ctrl-D.", stacklevel=1)
| It seems the training script reads its arguments from sys.argv; to assign them in a notebook:
from sys import argv
argv.append('--dataset-type=voc')
That should work the same as adding --dataset-type=voc in a terminal.
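If the parser then still trips over Jupyter's own -f <kernel.json> argument (the unrecognized arguments error above), two common workarounds — an addition of mine, not part of the original answer — are to ignore unknown arguments or to pass the argument list explicitly:
args, unknown = parser.parse_known_args()          # ignores Jupyter's -f argument
# or bypass sys.argv entirely:
args = parser.parse_args(['--dataset-type=voc'])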
| https://stackoverflow.com/questions/68556605/ |
How can I add multiple Metadata in Torch Tensorboard Embedding? | I am using torch==1.9.0 and tensorboard==2.5.0. I would like to track data with TensorBoard as an embedding, so I am doing something like this:
data = np.random.poisson(lam=10.0, size=(4,4))
labels = ["A","A","B","B"]
ids = [1,2,3,4]
writer = SummaryWriter("/runs/")
writer.add_embedding(data,
metadata=labels)
writer.close()
But I can only add labels or ids as metadata, not a combined dictionary {'ids': ids, 'labels': labels}. Any idea how to solve this? Thanks!
FYI: the TensorBoard docs just describe metadata as a list:
https://pytorch.org/docs/stable/tensorboard.html
| Found the answer. You can get multiple fields by adding a metadata header and giving metadata as a list of lists:
metadata = list(zip(ids, labels))
writer.add_embedding(data,
                     metadata=metadata,
                     metadata_header=["ids", "labels"])
Reference: https://github.com/tensorflow/tensorboard/issues/61
| https://stackoverflow.com/questions/68556767/ |
Simple Adjacency matrix creation with Pytorch Tensors | I was trying to write a simple function to create a random adjacency matrix in the following way:
def create_adj(a):
a[a>0.5] = 1
a[a<=0.5] = 0
return a
given that a is assumed to be a torch.Tensor() as input, but I get the following error:
TypeError: 'int' object does not support item assignment
If I do things separately (i.e. not inside a function), I simply do:
>> a = torch.rand(3,3)
>> a[a>0.5] = 1
>> a[a<=0.5] = 0
>> a
tensor([[1., 1., 1.],
[0., 0., 0.],
[1., 0., 0.]])
But I don't understand what I'm doing wrong in the function.
| I would assume you are not passing the correct variable to your create_adj function. As long as a is a torch.Tensor, it should work.
Alternatively, you can directly use the mask as result:
def create_adj(x):
    return (x > .5).float()
| https://stackoverflow.com/questions/68562472/ |
How to Fix "AssertionError: CUDA unavailable, invalid device 0 requested" | I'm trying to use my GPU to run the YOLOR model, and I keep getting the error:
Traceback (most recent call last):
File "D:\yolor\detect.py", line 198, in <module>
detect()
File "D:\yolor\detect.py", line 41, in detect
device = select_device(opt.device)
File "D:\yolor\utils\torch_utils.py", line 47, in select_device
assert torch.cuda.is_available(), 'CUDA unavailable, invalid device %s requested' % device # check availablity
AssertionError: CUDA unavailable, invalid device 0 requested
When I try to check if CUDA is available with the following:
python3
>>import torch
>>print(torch.cuda.is_available())
I get False, which explains the problem. I tried running the command
py -m pip install torch1.9.0+cu111 torchvision0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
I get the error: ERROR: Invalid requirement: 'torch1.9.0+cu111'
Running nvcc --version, I get:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Mon_May__3_19:41:42_Pacific_Daylight_Time_2021
Cuda compilation tools, release 11.3, V11.3.109
Build cuda_11.3.r11.3/compiler.29920130_0
Thus, I'm not really sure what the issue is, or how to fix it.
EDIT: As @Ivan pointed out, I added the == sign, but still get False when checking if CUDA is available.
| You forgot to put the == signs between the packages and the version number. According to the PyTorch installation page:
py -m pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio===0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
| https://stackoverflow.com/questions/68562730/ |
Pytorch unable to export trained model as ONNX | I have been training a model in the Pytorch framework using multiple convolutional layers (3x3, stride 1, padding same). The model performs well and I want to use it in Matlab for inference. For that, the ONNX format for NN exchange between frameworks seems to be the (only?) solution. The model can be exported using the following command:
torch.onnx.export(net.to('cpu'), test_input,'onnxfile.onnx')
Here is my CNN architecture definition:
class Encoder_decoder(nn.Module):
def __init__(self):
super().__init__()
self.model = nn.Sequential(
nn.Conv2d(2,8, (3, 3),stride = 1, padding='same'),
nn.ReLU(),
nn.Conv2d(8,8, (3, 3),stride = 1, padding='same'),
nn.ReLU(),
nn.Conv2d(8,16, (3, 3),stride = 1, padding='same'),
nn.ReLU(),
nn.Conv2d(16,16, (3, 3),stride = 1, padding='same'),
nn.ReLU(),
nn.Conv2d(16,32, (3, 3),stride = 1, padding='same'),
nn.ReLU(),
nn.Conv2d(32,32, (3, 3),stride = 1, padding='same'),
nn.ReLU(),
nn.Conv2d(32,64, (3, 3),stride = 1, padding='same'),
nn.ReLU(),
nn.Conv2d(64,64, (3, 3),stride = 1, padding='same'),
nn.ReLU(),
nn.Conv2d(64,128, (3, 3),stride = 1, padding='same'),
nn.ReLU(),
nn.Conv2d(128,128, (3, 3),stride = 1, padding='same'),
nn.ReLU(),
nn.Conv2d(128,1, (1, 1))
)
def forward(self, x):
x = self.model(x)
return x
However, when I run the torch.onnx.export command I get the following error:
RuntimeError: Exporting the operator _convolution_mode to ONNX opset version 9 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub.
I have tried changing the opset, but that doesn't solve the problem. ONNX has full support for convolutional neural networks. Also, I am training the network in google colab.
Do you know other methods to transfer the model to matlab?
| Currently, the _convolution_mode operator isn't supported by PyTorch's ONNX exporter. It gets emitted because of the use of padding='same'.
You need to change padding to an integer value or change it to its equivalent. Consult Same padding equivalent in Pytorch.
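For the 3x3, stride-1 convolutions in the question, padding='same' has an exact integer equivalent, so the layers can be rewritten like this (my own sketch) and the export should no longer hit _convolution_mode:
# padding='same' with kernel_size=3, stride=1, dilation=1 is equivalent to padding=1
nn.Conv2d(2, 8, (3, 3), stride=1, padding=1)
# the final 1x1 convolution needs no padding at all
nn.Conv2d(128, 1, (1, 1))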
| https://stackoverflow.com/questions/68565147/ |
How to fix RuntimeError CUDA error CUBLAS_STATUS_INVALID_VALUE when calling `cublasSgemm`? | When training some models on a working cuda environment, you can get the error RuntimeError: CUDA error: CUBLAS_STATUS_INVALID_VALUE when calling cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)
What does it means and how to fix it?
| It may be an incomplete error reporting of a shape error:
A mismatch in dimension of a nn.Linear module and its inpput, for example x.shape == [a, b] going into a nn.Linear(c, c, bias=False) with c not matching the shape of x, will result in this error message.
See the Pytorch forum conversation.
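A minimal sketch of the kind of mismatch described above (my own illustration — on GPU this shape error can surface as the cryptic cuBLAS message instead of a clear size-mismatch error):
import torch
from torch import nn

x = torch.randn(4, 10, device="cuda")              # input features have size 10
layer = nn.Linear(20, 20, bias=False).to("cuda")   # but the layer expects size 20
layer(x)                                            # fails because 10 != 20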
| https://stackoverflow.com/questions/68571902/ |
Converting From .pt model to .h5 model | I am using Google colab. I want to convert .pt model from my google drive to .h5 model. I follow link https://github.com/gmalivenko/pytorch2keras and https://www.programmersought.com/article/57938937172/ and install libraries and also write code as below:
%pip install pytorch2keras
%pip install onnx==1.8.1
import numpy as np
from numpy import random
from random import uniform
import torch
from torch.autograd import Variable
input_np = np.random.uniform(0, 1, (1, 10, 32, 32))
input_var = Variable(torch.FloatTensor(input_np))
model='/content/gdrive/MyDrive/model.pt'
pytorch_to_keras(model,input_var,input_shapes= [(10, 32, 32,)], verbose=True)
but it gives me error like:
WARNING:pytorch2keras:Custom shapes isn't supported now.
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-53-eef217a11c8a> in <module>()
8 input_var = Variable(torch.FloatTensor(input_np))
9 model='/content/gdrive/MyDrive/model.pt'
---> 10 pytorch_to_keras(model,input_np,input_shapes= [(10, 32, 32,)], verbose=True)
11
12
/usr/local/lib/python3.7/dist-packages/pytorch2keras/converter.py in pytorch_to_keras(model, args, input_shapes, change_ordering, verbose, name_policy, use_optimizer, do_constant_folding)
51 args = tuple(args)
52
---> 53 dummy_output = model(*args)
54
55 if isinstance(dummy_output, torch.autograd.Variable):
TypeError: 'str' object is not callable
| Ah, the classic problem of PyTorch to TensorFlow. Many libraries have come and gone over the years, but I've found ONNX to work the most consistently. You could try something like this.
Specific to PyTorch is a Dynamic Computational Graph. A dynamic computational graph means that PyTorch models can dynamically adapt to different input sizes. You can specify which axes need dynamic sizing as such.
Here is some minimal code to convert a CNN from PyTorch to ONNX.
import onnx
import torch
model = get_model()
model.eval()
# Test model on sample image size
example_input = torch.randn((1, 3, img_size, img_size), requires_grad=True)
model(example_input)
# Set input and output names, include more names in the list if your model has more than 1 input/output
input_names = ["input0"]
output_names = ["output0"]
# Set dynamic axes (in this case, make the batch a dynamic dimension)
dynamic_axes = {'input0': {0: 'batch'}, 'output0': {0: 'batch'}}
# Export model with the above parameters
torch_out = torch.onnx.export(
model, example_input, 'model.onnx', export_params=True, input_names=input_names, output_names=output_names,
dynamic_axes=dynamic_axes, operator_export_type=torch.onnx.OperatorExportTypes.ONNX
)
# Use ONNX checker to verify integrity of model
onnx_model = onnx.load("model.onnx")
onnx.checker.check_model(onnx_model)
One could also set the height and width of the model as a dynamic input size with
dynamic_axes['input0'][2] = 'height'
dynamic_axes['input0'][3] = 'width'
Next, we convert our ONNX model to a Tensorflow SavedModel.
from onnx_tf.backend import prepare
import onnx
onnx_model = onnx.load('model.onnx')
tf_model = prepare(onnx_model)
tf_model.export_graph('tf_model')
The directory 'tf_model' now contains a TensorFlow SavedModel.
| https://stackoverflow.com/questions/68577156/ |
Loading PyTorch Lightning Trained checkpoint | I am using PyTorch Lightning version 1.4.0 and have defined the following class for the dataset:
class CustomTrainDataset(Dataset):
'''
Custom PyTorch Dataset for training
Args:
data (pd.DataFrame) - DF containing product info (and maybe also ratings)
all_itemIds (list) - Python3 list containing all Item IDs
'''
def __init__(self, data, all_orderIds):
self.users, self.items, self.labels = self.get_dataset(data, all_orderIds)
def __len__(self):
return len(self.users)
def __getitem__(self, idx):
return self.users[idx], self.items[idx], self.labels[idx]
def get_dataset(self, data, all_orderIds):
users, items, labels = [], [], []
user_item_set = set(zip(train_ratings['CustomerID'], train_ratings['ItemCode']))
num_negatives = 7
for u, i in user_item_set:
users.append(u)
items.append(i)
labels.append(1)
for _ in range(num_negatives):
negative_item = np.random.choice(all_itemIds)
while (u, negative_item) in user_item_set:
negative_item = np.random.choice(all_itemIds)
users.append(u)
items.append(negative_item)
labels.append(0)
return torch.tensor(users), torch.tensor(items), torch.tensor(labels)
followed by the PL class:
class NCF(pl.LightningModule):
'''
Neural Collaborative Filtering (NCF)
Args:
num_users (int): Number of unique users
num_items (int): Number of unique items
data (pd.DataFrame): Dataframe containing the food ratings for training
all_orderIds (list): List containing all orderIds (train + test)
'''
def __init__(self, num_users, num_items, data, all_itemIds):
# def __init__(self, num_users, num_items, ratings, all_movieIds):
super().__init__()
self.user_embedding = nn.Embedding(num_embeddings = num_users, embedding_dim = 8)
# self.user_embedding = nn.Embedding(num_embeddings = num_users, embedding_dim = 10)
self.item_embedding = nn.Embedding(num_embeddings = num_items, embedding_dim = 8)
# self.item_embedding = nn.Embedding(num_embeddings = num_items, embedding_dim = 10)
self.fc1 = nn.Linear(in_features = 16, out_features = 64)
# self.fc1 = nn.Linear(in_features = 20, out_features = 64)
self.fc2 = nn.Linear(in_features = 64, out_features = 64)
self.fc3 = nn.Linear(in_features = 64, out_features = 32)
self.output = nn.Linear(in_features = 32, out_features = 1)
self.data = data
# self.ratings = ratings
# self.all_movieIds = all_movieIds
self.all_orderIds = all_orderIds
def forward(self, user_input, item_input):
# Pass through embedding layers
user_embedded = self.user_embedding(user_input)
item_embedded = self.item_embedding(item_input)
# Concat the two embedding layers
vector = torch.cat([user_embedded, item_embedded], dim = -1)
# Pass through dense layer
vector = nn.ReLU()(self.fc1(vector))
vector = nn.ReLU()(self.fc2(vector))
vector = nn.ReLU()(self.fc3(vector))
# Output layer
pred = nn.Sigmoid()(self.output(vector))
return pred
def training_step(self, batch, batch_idx):
user_input, item_input, labels = batch
predicted_labels = self(user_input, item_input)
loss = nn.BCELoss()(predicted_labels, labels.view(-1, 1).float())
return loss
def configure_optimizers(self):
return torch.optim.Adam(self.parameters())
def train_dataloader(self):
return DataLoader(
            CustomTrainDataset(
self.data, self.all_orderIds
),
batch_size = 32, num_workers = 2
# Google Colab's suggested max number of worker in current
# system is 2 and not 4.
)
print(f"num_users = {num_users}, num_items = {num_items} & all_itemIds = {len(all_itemIds)}")
# num_users = 12958, num_items = 511238 & all_itemIds = 9114
# Initialize NCF model-
model = NCF(num_users, num_items, train_ratings, all_itemIds)
trainer = pl.Trainer(
max_epochs = 75, gpus = 1,
# max_epochs = 5,
reload_dataloaders_every_n_epochs = True,
# reload_dataloaders_every_epoch = True, # deprecated!
progress_bar_refresh_rate = 50,
logger = False, checkpoint_callback = False)
trainer.fit(model)
# Save trained model as a checkpoint-
trainer.save_checkpoint("NCF_Trained.ckpt")
To load the saved checkpoint, I have tried:
trained_model = NCF.load_from_checkpoint(
"NCF_Trained.ckpt", num_users = num_users,
num_items = train_ratings, data = train_ratings,
all_itemIds = all_itemIds)
trained_model = NCF(num_users, num_items, train_ratings, all_orderIds).load_from_checkpoint(checkpoint_path = "NCF_Trained.ckpt")
But these don't seem to work. How do I load this saved checkpoint?
Thanks!
| As shown here, load_from_checkpoint is the primary way to load weights in pytorch-lightning, and it automatically loads the hyperparameters used in training, so you do not need to pass them again except to overwrite existing ones. My suggestion is to try trained_model = NCF.load_from_checkpoint("NCF_Trained.ckpt")
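One caveat worth adding (my own note, not part of the original answer): load_from_checkpoint can only restore the __init__ arguments automatically if the LightningModule saved them, e.g. by calling self.save_hyperparameters() inside __init__; arguments that were not saved (or are ignored) still have to be passed explicitly. A sketch:
class NCF(pl.LightningModule):
    def __init__(self, num_users, num_items, data, all_itemIds):
        super().__init__()
        self.save_hyperparameters(ignore=["data"])   # store init args in the checkpoint, skip the DataFrame
        # ... rest of __init__ as in the question

trained_model = NCF.load_from_checkpoint("NCF_Trained.ckpt", data=train_ratings)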
| https://stackoverflow.com/questions/68578213/ |
pytorch make_grid (from torchvision.utils import make_grid) behaves differently than I expect | Trying to run the visualization utils tutorial from PyTorch, I tried it with some images of dogs found on the internet (the images used in the tutorial are not distributed for use). Making the grid and showing the result behaves funny - it shows each channel as a separate image (I guess this is what I see).
so - from the tutorial
but here is what I get from the images I got:
I was expecting to see the two images in their original colors in a grid.
Another step I tried following Ivan's comment:
tutorial: https://pytorch.org/vision/master/auto_examples/plot_visualization_utils.html
I would like to know how to fix this (and use make_grid correctly)
| For the output you got, I would assume the correct shape is (height, width, channels) instead of (channels, height, width). You can correct this with torch.permute. The following should provide the desired result:
>>> grid = make_grid(torch.stack([transformed_dog1, transformed_dog2]).permute(0,3,1,2))
>>> show(grid)
| https://stackoverflow.com/questions/68579467/ |
How can I solve this issue? input must have 3 dimensions, got 4 | The Below is data which I passed to the Data Loader,
train_path='/content/drive/MyDrive/Dataset_manual_pytorch/train'
test_path='/content/drive/MyDrive/Dataset_manual_pytorch/test'
train = torchvision.datasets.ImageFolder(train_path,transform=transformations)
test = torchvision.datasets.ImageFolder(test_path,transform=transformations)
train_loader = torch.utils.data.DataLoader(train, batch_size=64, shuffle=True)
test_loader = torch.utils.data.DataLoader(test, batch_size =32, shuffle=True)
This is my Recurrent Neural Network Model,
hidden_size = 256
sequence_length = 28
num_classes = 2
num_layers = 2
input_size = 32
learning_rate = 0.001
num_epochs = 3
class RNN(nn.Module):
def __init__(self, input_size, hidden_size, num_layers, num_classes):
super(RNN, self).__init__()
self.hidden_size = hidden_size
self.num_layers = num_layers
self.rnn = nn.RNN(input_size, hidden_size, num_layers, batch_first = True)
self.fc = nn.Linear(hidden_size*sequence_length, num_classes)
def forward(self, x):
h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device)
#Forward Prop
out,_ = self.rnn(x, h0)
out = out.reshape(out.shape[0], -1)
out = self.fc(out)
return out
model_rnn = RNN(input_size, hidden_size, num_layers, num_classes).to(device)
When I train this model for the particular epochs and for the training data it gives me the following error;
RuntimeError: input must have 3 dimensions, got 4
The shape of data is: torch.Size([64, 3, 32, 32])
I think the error is because I am feeding 4-dimensional data, in which I am passing the three channels (RGB) as well. To solve this issue I need to reshape torch.Size([64, 3, 32, 32]) --> torch.Size([64, 32, 32]), but I am unable to do this.
The Training code is;
@torch.no_grad()
def Validation_phase(model, val_loader):
model.eval()
for data, labels in val_loader:
out = model(data)
val_loss = F.cross_entropy(out, labels)
val_acc = accuracy(out, labels)
return val_loss.detach(), val_acc
def fit(epochs, lr, model, train_loader, val_loader, opt_func=torch.optim.SGD):
history = []
optimizer = opt_func(model.parameters(), lr)
for epoch in range(epochs):
# Training Phase
model.train()
train_losses = []
train_accuracy = []
for data, labels in train_loader:
#forward
print(data.shape)
out = model(data)
#loss calculate
train_loss = F.cross_entropy(out, labels)
#Accuracy
train_acc = accuracy(out, labels)
train_accuracy.append(train_acc)
train_losses.append(train_loss.item())
#back_propagate
train_loss.backward()
optimizer.step()
optimizer.zero_grad()
train_accuracy = np.mean(torch.stack(train_accuracy).numpy())
train_losses = np.mean(train_losses)
#Validation phase
val_losses, val_accuracy = Validation_phase(model, val_loader)
print("Epoch [{}], train_loss: {:.4f}, train_accuracy: {:.4f}, val_loss: {:.4f}, val_acc: {:.4f}".format(
epoch, train_losses*100 , train_accuracy*100 , val_losses.item()*100, val_accuracy.item()*100))
# history.append(result)
# return history
fit(5, 0.001, model_rnn, train_loader, test_loader, torch.optim.Adam)
 | You can do the size conversion of torch.Size([64, 3, 32, 32]) to torch.Size([64, 32, 32]) with the code below:
x = torch.ones((64, 3, 32, 32))
x = x[:, 0, :, :]  # keep only the first of the three channels
# Check code:
print(x.size())  # torch.Size([64, 32, 32])
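If you would rather not throw away two of the channels, an alternative (my suggestion, not part of the original answer) is to average over the channel dimension instead:
x = torch.ones((64, 3, 32, 32))
x = x.mean(dim=1)  # average the RGB channels -> torch.Size([64, 32, 32])
print(x.size())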
| https://stackoverflow.com/questions/68580717/ |
Pytorch fasterrcnn resnet50 fpn loss functions | I am using a pretrained model from this tutorial. https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html#defining-your-model
The model is pytorch's Faster RCNN ResNet 50 FPN model. Does anyone know what the classification loss, loss, and objectness loss functions are (i.e. Cross Entropy or?). Thanks in advance,
Sriram A.
 | Objectness is a binary cross entropy loss term over 2 classes (object/not object) associated with each anchor box in the first stage (RPN), and classification loss is a normal cross-entropy term over C classes. Both first stage region proposals and second stage bounding boxes are also penalized with a smooth L1 loss term.
It should also be noted that the authors train the first and second stage alternately since both rely on the same features computed with convolutional layers + FPN to aid in training convergence.
Not a very clear description? I'd recommend reading the original Faster-RCNN paper as it is pretty foundational and will probably do a better job describing the loss terms than me.
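If you want to inspect the individual terms yourself, the torchvision implementation returns them as a dictionary of losses when the model is in training mode (a small illustrative sketch; the image and target below are dummy placeholders):
import torch, torchvision
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.train()
images = [torch.rand(3, 300, 400)]  # dummy image
targets = [{"boxes": torch.tensor([[10., 20., 100., 150.]]),
            "labels": torch.tensor([1])}]  # dummy target
loss_dict = model(images, targets)
print(loss_dict.keys())
# dict_keys(['loss_classifier', 'loss_box_reg', 'loss_objectness', 'loss_rpn_box_reg'])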
| https://stackoverflow.com/questions/68584185/ |
ImportError: cannot import name 'load_mnist' from 'pytorchcv' | ---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-1-2cacdf187bba> in <module>
6 import numpy as np
7
----> 8 from pytorchcv import load_mnist, train, plot_results, plot_convolution, display_dataset
9 load_mnist(batch_size=128)
ImportError: cannot import name 'load_mnist' from 'pytorchcv' (/anaconda/envs/py37_pytorch/lib/python3.7/site-packages/pytorchcv/__init__.py)
How can I fix this bug?
I use Python 3.7 and a Jupyter notebook. The code is from Microsoft's PyTorch Fundamentals module: https://learn.microsoft.com/en-gb/learn/modules/intro-computer-vision-pytorch/5-convolutional-networks
import torch
import torch.nn as nn
import torchvision
import matplotlib.pyplot as plt
from torchinfo import summary
import numpy as np
from pytorchcv import load_mnist, train, plot_results, plot_convolution, display_dataset
load_mnist(batch_size=128)
I installed PyTorch by command: pip install pytorchcv
 | I assume you might have the wrong pytorchcv package. The one on PyPI does not contain load_mnist.
Starting from scratch you could download MNIST as such:
from torchvision.transforms import ToTensor
data_train = torchvision.datasets.MNIST('./data',
    download=True, train=True, transform=ToTensor())
data_test = torchvision.datasets.MNIST('./data',
    download=True, train=False, transform=ToTensor())
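If you also want the batching that load_mnist(batch_size=128) presumably provided, you can wrap the datasets in DataLoaders (a sketch, since I don't know exactly what the tutorial's helper returned):
from torch.utils.data import DataLoader
train_loader = DataLoader(data_train, batch_size=128, shuffle=True)
test_loader = DataLoader(data_test, batch_size=128, shuffle=False)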
| https://stackoverflow.com/questions/68588949/ |
The best method for normalizing dataset of images | I have a dataset of images consisting of three splits - the training, validation and test splits, and want to normalize the dataset to make training easier. Hence I want to find the mean and standard deviation of RGB values from the available data.
The doubt I have is - should I consider all the splits for normalizing?
My personal thought is that only the training split should be used since it is assumed to be the only data that we have to train the model. Hence the model is provided inputs from the distribution of the training data, leaving errors that can be picked by evaluation on the validation split. If I provide the distribution to a network from data outside what is provided for training, would it not be feeding the network extra information than what it is supposed to learn from?
Any other way to do this would also be of help. For example, is it just better to use standard values for RGB?
transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))
(Source: PyTorch Torchvision Transforms)
|
The doubt I have is - should I consider all the splits for normalizing?
As you said, in theory you should only make use of training data for anything, even for normalization.
Any other way to do this would also be of help. For example, is it just better to use standard values for RGB?
In practice, probably yes. In fact, it shouldn't really matter how you normalize your data, you could even go for mean=0.5, std=0.5 for each channel. Or even adopt a -127/+127 range, the network should adapt to whatever input you provide during training.
What you should probably bear in mind is practical use and application: if you're dealing with pretrained networks, they are usually provided with ImageNet normalization (the one you suggested). This is common practice since:
They are widely used
They are indeed a good approximation of the "real" RGB means and stds for natural images.
TLDR: the choice on custom or "standard" normalization depends on the task itself. In practice, normalization shouldn't matter very much, you should be fine in both cases. You have a decently sized set and time to compute some statistics? Go for custom values. Not so much time for statistics or the dataset is quite small? Probably better to go for the safe ImageNet approach.
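If you do go for custom values, a minimal sketch for computing per-channel statistics from the training split only (assuming a train_loader built without normalization, yielding images in [0, 1]; averaging per-batch statistics is an approximation, but usually close enough):
mean = torch.zeros(3)
std = torch.zeros(3)
n_batches = 0
for images, _ in train_loader:  # images: (B, 3, H, W)
    mean += images.mean(dim=(0, 2, 3))
    std += images.std(dim=(0, 2, 3))
    n_batches += 1
mean /= n_batches
std /= n_batches
normalize = transforms.Normalize(mean.tolist(), std.tolist())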
| https://stackoverflow.com/questions/68599182/ |
How to maintain state in a DataLoader's Dataset | I have something like: (see self.cache for the bit that's interesting).
class DescriptorDataset(torch.utils.data.Dataset):
def __init__(self, descriptor_dir):
super().__init__()
self.file_paths = glob(osp.join(descriptor_dir, '*'))
self.image_ids = [Path(fp).stem for fp in self.file_paths]
self.cache = {}
def __len__(self):
return len(self.file_paths)
def __getitem__(self, ix):
file_path = self.file_paths[ix]
descriptor = self.get_descriptor(file_path)
return descriptor, Path(file_path).stem
def get_descriptor(self, file_path):
descriptor = self.cache.get(file_path, torch.load(file_path))
self.cache[file_path] = descriptor
return descriptor
query_loader = torch.utils.data.DataLoader(
DescriptorDataset(query_dir), batch_size=1, num_workers=0)
I noticed that the caching mechanism works when num_workers == 0 but not for num_workers > 0. Does PyTorch have an inbuilt way to handle this?
| When I have come across this situation, I have filled the cache during initialisation. In that case it remains fixed during training/inference and can be reloaded the next time you instantiate:
class DescriptorDataset(torch.utils.data.Dataset):
def __init__(self, descriptor_dir, cache_loc=None):
super().__init__()
self.file_paths = glob(osp.join(descriptor_dir, '*'))
self.image_ids = [Path(fp).stem for fp in self.file_paths]
self.cache = self.make_cache(cache_loc)
def __len__(self):
return len(self.file_paths)
def __getitem__(self, ix):
file_path = self.file_paths[ix]
descriptor = self.get_descriptor(file_path)
return descriptor, Path(file_path).stem
def get_descriptor(self, file_path):
descriptor = self.cache.get(file_path, torch.load(file_path))
self.cache[file_path] = descriptor
return descriptor
def make_cache(self, cache_loc):
if cache_loc is not None and os.path.exists(cache_loc):
return joblib.load(cache_loc)
cache = {}
for p in self.file_paths:
descriptor = torch.load(p)
cache[p] = descriptor
if cache_loc is not None:
joblib.dump(cache, cache_loc)  # persist the cache so it can be reloaded next time
return cache
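Usage is then simply (the cache file name is just an illustrative choice):
query_dataset = DescriptorDataset(query_dir, cache_loc='descriptor_cache.joblib')
query_loader = torch.utils.data.DataLoader(query_dataset, batch_size=1, num_workers=4)
Because the cache is filled in __init__, every worker process spawned when num_workers > 0 receives a copy of the already-populated dictionary, so the lazy caching problem from the question no longer applies.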
| https://stackoverflow.com/questions/68602072/ |
RuntimeError: "reflection_pad2d" not implemented for 'Byte' | padding = (2, 2, 2, 2)
img = torch.nn.functional.pad(img, padding, mode='reflect')
out = torch.nn.functional.conv2d(img, kernel, groups=img.shape[1])
Here is the complete Error:
File "/home/amir/PycharmProjects/LPTN/loadPretrainedModel.py", line 57, in conv_gauss
img = torch.nn.functional.pad(img, padding, mode='reflect')
File "/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py", line 4017, in _pad
return torch._C._nn.reflection_pad2d(input, pad)
RuntimeError: "reflection_pad2d" not implemented for 'Byte'
What do you think is the problem? I can't figure it out. Thanks in advance.
| You need to change data type of your img to float e.g. img.float(). Many operations such as reflection_pad2d are implemented only for float tensors.
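For example, on the snippet from the question (assuming img was loaded as a uint8 image tensor, and that kernel is already a float tensor):
img = img.float() / 255.0  # convert from Byte to float in [0, 1]
img = torch.nn.functional.pad(img, padding, mode='reflect')
out = torch.nn.functional.conv2d(img, kernel, groups=img.shape[1])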
| https://stackoverflow.com/questions/68602342/ |
RuntimeError: 1only batches of spatial targets supported (3D tensors) but got targets of size: : [4] | How to get around the following error with nn.CrossEntrophyLoss() ?
Note: I tried using nn.BCELoss(), but it resulted in different error: ValueError: Using a target size (torch.Size([4])) that is different to the input size (torch.Size([4, 3, 32, 32])) is deprecated. Please ensure they have the same size.
| The error message is pretty clear: you are using a one-dimensional target tensor while your output prediction has spatial dimensions (a three-channel map).
When using nn.CrossEntropyLoss, your target must be dense (each element is a label id): something with a shape of (batch_size,), where each element in the target belongs to [0, num_classes[. While the output consists of logits: (batch_size, num_classes,), i.e. each class from each batch is assigned a value (this is not yet a probability distribution). In the spatial setting, you will have two additional dimensions (height and width), this is the case for dense predictions such as semantic segmentation. This will make your target have a shape of (batch_size, height, width), and your output prediction (batch_size, num_classes, height, width).
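A small sketch of the expected shapes in both settings (the sizes are arbitrary; assumes import torch and import torch.nn as nn):
# plain classification
logits = torch.randn(4, 3)  # (batch_size, num_classes)
target = torch.tensor([0, 2, 1, 2])  # (batch_size,), values in [0, num_classes)
loss = nn.CrossEntropyLoss()(logits, target)
# dense / spatial prediction (e.g. segmentation)
logits = torch.randn(4, 3, 32, 32)  # (batch_size, num_classes, H, W)
target = torch.randint(0, 3, (4, 32, 32))  # (batch_size, H, W)
loss = nn.CrossEntropyLoss()(logits, target)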
| https://stackoverflow.com/questions/68607307/ |
Methods for increasing accuracy of a CNN for image classification | I'm currently working on an image classification task, involving a large dataset of grayscale images of cartoons, and my CNN needs to classify them. Atm my model has a test accuracy of about 88% but I know a higher accuracy is possible.
I've tried:
improving / changing the actual model / architecture
using different meta parameters
different loss functions from the pytorch libraries
a bunch of different transforms
different optimizers from torch.optim
I've also tried a bunch of the standard models included in torchvision.models and am still getting sub 90% accuracy on the test set.
Do I just need to keep trying the above things to squeeze out better accuracy or are there any other avenues I can try? Would really appreciate any suggestions, the only other thing I can think of would be making my own custom loss function specific for the data set but I'm not exactly sure how much that would help?
| From what you've described, it sounds like it might be worth spending some time on the data preparation. Here is a good article on how to do that for images. Some ideas you could try are:
Resizing all your images to a fixed size
Subtracting mean pixel values, i.e. normalizing the dataset
I don't really know the context of what you're doing but I would also consider adding additional features that may be relevant and seeing if that helps.
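As a concrete starting point for the data-preparation ideas above, a possible preprocessing/augmentation pipeline could look like this (the size and statistics are placeholders you would adapt to your grayscale cartoon set):
import torchvision.transforms as T
train_tfm = T.Compose([
    T.Resize((128, 128)),  # fixed input size
    T.RandomHorizontalFlip(),
    T.ToTensor(),
    T.Normalize(mean=[0.5], std=[0.5]),  # subtract mean / scale, single channel
])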
| https://stackoverflow.com/questions/68607955/ |
Why network with linear layers can't learn anything | I am trying to build a neural network for binary classification, unfortunately it always predicts the value 0, even though one fifth of the training set data is 1. I have no idea why it is so. My dataset looks as this, so there are a couple of categorical variables and a couple of continuous, (target is the one we predict):
Here one can download the data:
https://drive.google.com/drive/folders/1PsG2rRdbxyocyqvLSa7zSy_aVDMRJ2Ug?usp=sharing
You can read it with
df = pd.read_csv("train.csv", index_col=0)
Now I am preparing the data for neural network:
x_train=df.drop(labels=['target'], axis=1).values
y_train=df['target'].values
X_train, X_val, y_train, y_val = train_test_split(
x_train,
y_train,
test_size=0.2)
LR = 0.001
EPOCH = 50
BATCH_SIZE = 64
torch_X_train = torch.tensor(X_train)
torch_y_train = torch.tensor(y_train)
torch_X_val = torch.tensor(X_val)
torch_y_val = torch.tensor(y_val)
train = torch.utils.data.TensorDataset(torch_X_train,torch_y_train)
validate = torch.utils.data.TensorDataset(torch_X_val,torch_y_val)
train_loader = torch.utils.data.DataLoader(train, batch_size =
BATCH_SIZE, shuffle = True)
val_loader = torch.utils.data.DataLoader(validate, batch_size =
BATCH_SIZE, shuffle = False)
And define a simple network with just linear layers:
class NN(nn.Module):
def __init__(self, input_size):
super().__init__()
self.to_class=nn.Sequential(
nn.Linear(input_size,512),
nn.ReLU(),
nn.Linear(512,256),
nn.ReLU(),
nn.Linear(256,32),
nn.ReLU(),
nn.Linear(32,2)
)
def forward(self,inputs):
pred= self.to_class(inputs)
return F.softmax(pred, dim=1)
And lastly I train it
net=NN(7)
loss_func = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=LR)
train_loss = np.zeros(EPOCH)
val_loss = np.zeros(EPOCH)
acc_train = []
acc_val=[]
for epoch in range(EPOCH):
correct = 0
total = 0
for data in train_loader:
X, y = data
optimizer.zero_grad()
output=net(X.float())
total += output.size(0)
pred = output.argmax(dim=1, keepdim=True)
correct += pred.eq(y.view_as(pred)).sum().item()
loss = loss_func(output.squeeze(), y)
loss.backward()
optimizer.step()
train_loss[epoch] += loss
train_loss[epoch] /= len(train_loader)
acc_train.append(correct/total*100)
print('epoch %d:\t train_accuracy %.5f\ttrain loss: %.5f'%(epoch,acc_train[epoch], train_loss[epoch]))
But the train loss is not going anywhere and the predictions are always one class! Can someone explain this phenomenon and hint how I can improve it?
| Please see the doc
This is the standard training loop
for epoch in range(2): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 2000 == 1999: # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
print('Finished Training')
Double-check the placement of optimizer.zero_grad(): the gradients should be cleared before the forward pass and loss computation, as in the loop above.
Please try that ordering and see how it does.
If this doesn't work, please use the same net on a different dataset, for example MNIST, directly downloaded from pytorch. This net should be able to solve MNIST, so this way we can debug if the problem is the net or something related to the data or the labels.
Another thing is balancing the labels: if 80% are 0 and 20% are 1 (or vice versa) you may want to use weighted cross entropy
Another thing you may want is leaky relu instead of relu. This prevents vanishing gradients for some problems.
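For the class-imbalance point, a small example of weighted cross entropy (the weights are illustrative, giving more weight to the 20% minority class):
class_weights = torch.tensor([0.2, 0.8])  # class 0 is the 80% majority, class 1 the minority
loss_func = nn.CrossEntropyLoss(weight=class_weights)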
| https://stackoverflow.com/questions/68609125/ |
Action-selection for dqn with pytorch | I’m a newbie in DQN and try to understand its coding. I am trying the code below as epsilon greedy action selection but I am not sure how it works
if sample > eps_threshold:
with torch.no_grad():
# t.max(1) will return largest column value of each row.
# second column on max result is index of where max element was
# found, so we pick action with the larger expected reward.
return policy_net(state).max(1)[1].view(1, 1)
else:
return torch.tensor([[random.randrange(n_actions)]], device=device, dtype=torch.long)
Could you please let me know what are indices in max(1)[1] and what is view(1, 1) and it’s indices. Also why “with torch.no_grad():” has been used
| When you train a model, torch has to store all the tensors involved in computing the output into a graph, to then be able to make a backward pass during training; this is computationally expensive, and considering that after selecting the action you don't have to train the network, because your only goal here it to pick one using the current weights, then it's just better to use torch.no_grad(). Note that without that part the code would still work the same way, maybe just a bit slower.
About the max(1)[1] part, I'm not really sure how the inputs and outputs are taken considering that there's only a small portion of code here, but I guess that the model takes as input batches of data and outputs a Q-value for each action; then, for each of this outputs you have to take the action that gives you the highest value, so you basically need a max at each row, and that's done by specifying as axis (or dim as torch calls it) the first one, which represents the columns (at every row you take the max of the corresponding columns, which are the actions in this case).
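To make the max(1)[1] and view(1, 1) parts concrete, a small example (the Q-values are made up):
q_values = policy_net(state)  # e.g. tensor([[0.1, 0.7, 0.2]]) for 3 actions
values, indices = q_values.max(1)  # max over dim 1 returns a (values, indices) pair
# indices is tensor([1]): the index of the best action
action = q_values.max(1)[1].view(1, 1)  # same index reshaped to a 1x1 tensor: tensor([[1]])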
| https://stackoverflow.com/questions/68615100/ |
Normalizing pixel Values in PyTorch | I am currently working with the CORnet-Z neural network and I am training it on an alternative version of the ImageNet image dataset.
I looked through the code and noticed this image value normalization method:
normalize = torchvision.transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
...
And then implemented later in the train, val, and test class:
torchvision.transforms.Compose([
torchvision.transforms.RandomResizedCrop(256),
torchvision.transforms.RandomHorizontalFlip(),
torchvision.transforms.ToTensor(),
normalize]))
I was wondering if I could rewrite this to just take the RGB pixel values and divide them by 255 to have a scale of 0-1 to work with.
According to the torchvision.transforms documentation this is not an implemented approach.
| What you found in the code is statistics standardization, you're looking to normalize the input. These are two different operations but can be carried out with the same operator: under torchvision.transforms by the name of Normalize. It applies a shift-scale on the input:
Normalize a tensor image with mean and standard deviation. This transform does not support PIL Image. Given mean: (mean[1],...,mean[n]) and std: (std[1],..,std[n]) for n channels, this transform will normalize each channel of the input torch.*Tensor i.e., output[channel] = (input[channel] - mean[channel]) / std[channel].
>>> normalize = T.Normalize(mean=0, std=255)
Your transformation pipeline is then:
>>> T.Compose([T.RandomResizedCrop(256),
T.RandomHorizontalFlip(),
T.ToTensor(),
normalize]))
where torchvision.transforms is imported as T.
| https://stackoverflow.com/questions/68620946/ |
How to convert the below Tensorflow code to Pytorch (transfer learning)? | I want to know how to convert the below codes(Tensorflow) to Pytorch.
I've wanted to use DataLoader but I couldn't. Is it possible to use DataLoader for converting? or Can you tell me any other ways to convert?
Thanks a lot :)
from tensorflow.keras.preprocessing import image as image_utils
from tensorflow.keras.applications.vgg16 import preprocess_input
def load_and_process_image(image_path):
# Print image's original shape, for reference
print('Original image shape: ', mpimg.imread(image_path).shape)
# Load in the image with a target size of 224, 224
image = image_utils.load_img(image_path, target_size=(224, 224))
# Convert the image from a PIL format to a numpy array
image = image_utils.img_to_array(image)
# Add a dimension for number of images, in our case 1
image = image.reshape(1,224,224,3)
# Preprocess image to align with original ImageNet dataset
image = preprocess_input(image)
# Print image's shape after processing
print('Processed image shape: ', image.shape)
return image
| import os
from PIL import Image
import torch
from torch.utils.data import DataLoader, Dataset
from torchvision import transforms
class MyData(Dataset):
def __init__(self, data_path):
#path of the folder where your images are located
self.data_path = data_path
#transforms to perform on image. In general, these are the default normalization used. you can change std, mean values about three channels according to your requirement
#when ToTensor() is used it automatically permutes the dimensions according to the torch layers
self.transforms = transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
self.image_path_list = sorted(os.listdir(self.data_path))
def __len__(self):
#returns the length of your dataset
return len(self.image_path_list)
def __getitem__(self, idx):
#pytorch accepts PIL images, use PIL.Image to load images
image = Image.open(os.path.join(self.data_path, self.image_path_list[idx]))
image = self.transforms(image)
return image
Above is a small snippet based on my assumptions from your post. I assumed you would need to resize, permute and normalize with the given mean values. The Dataset returns a single image at a time; the DataLoader is iterable and batches them.
For example,
#instantiate your loader, with the desired parameters. checkout the pytorch documentation for other arguments
myloader = DataLoader(MyData(data_path), batch_size = 32, num_workers = 10)  # data_path is your image folder
myloader = iter(myloader)
for i in range(0, 10):
#this yields first 10 batches of your dataset
img = next(myloader)
Hopefully, this is what you are looking for. Please feel free to comment your queries for any further clarification.
| https://stackoverflow.com/questions/68632679/ |
Best way to detect Vanishing/Exploding gradient in Pytorch via Tensorboard | I suspect my Pytorch model has vanishing gradients. I know I can track the gradients of each layer and record them with writer.add_scalar or writer.add_histogram. However, with a model with a relatively large number of layers, having all these histograms and graphs on the TensorBoard log becomes a bit of a nuisance. I'm not saying it doesn't work, it's just a bit inconvenient to have different graphs and histograms for each layer and scroll through them.
I'm looking for a graph where the y axis (vertical) represents the gradient value (mean of gradient of a specific layer), the x axis (horizontal) shows the layer number (e.g. the value at x=1 is the gradient value for 1st layer), and the z axis (depth) is the epoch number.
This would look like a histogram, but of course, it would be essentially different from a histogram since the x axis does not represent beans. One can write a dirty code that would create a histogram where instead of beans there would be layer numbers, something like (this is a pseudo-code, obviously):
fake_distribution = []
for i, layer in enumerate(model.layers):
fake_distribution += [i for j in range(int(layer.grad.mean()))]
writer.add_histogram('gradients', fake_distribution)
I was wondering if there is a better way for this.
| This is a minimal example of how you could go about evaluating the norm of a particular layer in your model. Taking a simple model for illustration purposes:
class ConvNet(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(1, 10, 5)
self.conv2 = nn.Conv2d(10, 20, 5)
self.fc1 = nn.Linear(8000, 50)
self.fc2 = nn.Linear(50, 10)
def forward(self, input):
x = F.relu(self.conv1(input))
x = F.relu(self.conv2(x))
x = x.view(x.size(0), -1)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
return x
net = ConvNet()
net(torch.rand(5,1,28,28)).mean().backward()
Looking at clip_grad_norm_ as a reference: to measure the magnitude of the gradient on layer conv1 you could compute the L2-norm of the vector comprised of the L2-gradient-norms of the parameters belonging to that layer. This is done with the following code:
parameters = net.conv1.parameters()
norm_type = 2
total_norm = torch.norm(
torch.stack([torch.norm(p.grad.detach(), norm_type) for p in parameters]), norm_type)
Alternatively, you can take the maximum of maximum gradient component on that layer i.e. the inf-norm:
total_norm = torch.max(
torch.stack([p.grad.detach().abs().max() for p in parameters]))
To log them onto your TensorBoard, you can use add_scalar on your SummaryWriter:
for name, module in net.named_children():
    norm = torch.norm(
        torch.stack([torch.norm(p.grad.detach(), 2) for p in module.parameters()]), 2)
    writer.add_scalar(f'check_info/{name}', norm, iter)
| https://stackoverflow.com/questions/68634707/ |
Channel first and channel last in convolution | I saw there are two types of data: channel first and last in the world of convolutional networks.
According to many websites, "channel-first" refers to NCHW format, while "channel-last" is equivalent to NHWC format. This is clear because in channel first format, C is positioned before H and W.
However, ARM seems to have defined "channel-first" as NHWC, as you can see in this paper.
P6: The two most common image data formats are Channel-Width-Height (CHW), i.e. channel last, and Height-Width-Channel (HWC), i.e. channel first. The dimension ordering is the same as that of the data stride. In
an HWC format, the data along the channel is stored with a stride of 1, data along the width is stored
with a stride of the channel count, and data along the height is stored with a stride of (channel count × image width).
This is also reasonable since "Channel first" sounds like MAC operation goes channel-wise like below:
for (N){
for (H){
for (W){
for (C){
}
}
}
}
So there is no fixed definition of channel-first or channel-last, is there?
Also, I'm not sure when you say NHWC or NCHW, what do you specifically mean? I guess the important thing is the combination of algorithms and the data arrangement in memory. If the data comes in in NHWC format, you need to design the algorithm like so.
And, since there are no fixed definitions of NHWC and NCHW, I don't think it makes any sense if you just say PyTorch is NCHW, channel-first or something without mentioning how the data arranges in memory.
Or when you hear NCHW, you can realize that the data arrangement in memory is like ch0[0,0], ch1[0, 0], ch2[0, 0], ch0[1, 0], ch1[1, 0], ch2[1, 0], ch0[2, 0], ...?
Can anyone help clarify my understanding of the data format?
 | I had originally overlooked the paper you linked, where they define the terms in the opposite way to how they are usually employed in the documentation and elsewhere. There are indeed two different ways to look at CHW and HWC...
TLDR; For end-users CHW is channel-first while HWC is channel-last. In this case, we refer to the position of the channel dimension with regards to the other dimensions (H and W). Whether it comes before (CHW) or after (HWC) is a matter of convention defined by the library used (e.g. PyTorch vs. Tensorflow). In terms of memory allocation, it makes sense to call CHW channel-last: it means that the channel axis' stride will be last, i.e. it will be unfolded last with regards to the other axes of the tensor.
I don't think it makes any sense if you just say PyTorch is NCHW, channel first or something without mentioning how the data arranges in memory.
For the end-user (as in end-developer), it does not matter how memory is allocated or arranged. The important part is to know how to use the API provided by PyTorch to manipulate torch.Tensors. When we say NCHW, we mean 'channel-first', i.e. tensors of shape (batch_size, channel, height, width). In all PyTorch documentation pages, you will find the exact shapes, inputs, and outputs tensors are required to have. It just happens they have chosen to stick with the NCHW convention for 2-dimensional channel tensors.
It makes sense to stick with one format let it be for the underlying implementation - where the memory arrangement does matter - or the end-user itself - who is used to working with a single format.
In TensorFlow for instance, channel is last, so the format used is NHWC.
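In practice, converting a tensor between the two conventions is just a matter of permuting axes, e.g.:
>>> x_nchw = torch.rand(8, 3, 32, 32)  # PyTorch convention
>>> x_nhwc = x_nchw.permute(0, 2, 3, 1)  # TensorFlow-style layout, shape (8, 32, 32, 3)
>>> x_back = x_nhwc.permute(0, 3, 1, 2)  # and back to NCHW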
To come back to how HWC (resp. CHW) was named channel-first (resp. channel-last) in the paper you linked. This has to do with the tensor stride: i.e. the layout of data in memory. Intuitively you can think that format HWC is channel-first because the channel dimension is the first axis to get unfolded.
If you look at this example:
>>> x = torch.rand(2,3,4) # last dimension is the channel axis
tensor([[[0.5567, 0.0276, 0.6491, 0.7933],
[0.2876, 0.0361, 0.3883, 0.3201],
[0.6742, 0.0305, 0.5719, 0.4683]],
[[0.3385, 0.2082, 0.1675, 0.3429],
[0.6146, 0.0533, 0.6147, 0.2216],
[0.1855, 0.6107, 0.1716, 0.0071]]])
The underlying memory arrangement is actually revealed when flattening the data (assuming the initial tensor's data is contiguous in memory):
>>> x.flatten()
tensor([0.5567, 0.0276, 0.6491, 0.7933, 0.2876, 0.0361, 0.3883, 0.3201, 0.6742,
0.0305, 0.5719, 0.4683, 0.3385, 0.2082, 0.1675, 0.3429, 0.6146, 0.0533,
0.6147, 0.2216, 0.1855, 0.6107, 0.1716, 0.0071])
Notice above how the data is laid out: going channel by channel 0.5567, 0.0276, 0.6491, 0.7933, then 0.2876, 0.0361, 0.3883, 0.3201, etc...
In the other format (i.e. CHW) it would have been laid out as 0.5567, 0.2876, 0.6742, then 0.3385, 0.6146, 0.1855, etc...
So it does make sense to call CHW channel-last (HWC as channel-first) when referring to how data is allocated in memory.
| https://stackoverflow.com/questions/68634724/ |
Python code to convert 1D tensor to 2D tensor | I am trying to use Binary Cross Entropy Loss (BCE loss) for Simese network.
I have two inputs for BCE loss function:
output (input_dy) → tensor of size [4] , output of neural network
true_labels (y_true) → tensor of size [4], target (true value)
For BCE loss, the input parameters must be of the dimension:
output (input_dy) → [Batch_size, no. of classes]
true_labels (y_true) → [Batch_size]
The following diagram explains the query:
I need a function in python using pytorch to convert the dy matrix to a 2D matrix with the output probabilities that sum to 1. [To note: dy should be iterated through length of it, as it is the output of the network for every input ]
Further a 2D array must be represented into one hot encoding, which will be true_labels (that will represent Binary classes with 0 & 1)
I need both output matrix and true_labels matrix for BCE Loss with following dimensions:
output dimension → [4, 2]
true_labels → [4]
Any help is most appreciated!
Thank you in advance.
| It looks to me that your output is binary so you don't really need a 2D matrix for that task.
Also, I'm not quite sure that the BCE Loss (nor BCEWithLogits) requires tensors of different dimensions, they should both have shape (N, *) as far as I know.
Apart from that, for the sake of the question: if you have p(x), you can obtain the other column by simply computing 1 - p(x).
There are many ways to obtain that, a method could be:
# suppose we have a tensor/batch of probabilities
p = torch.tensor([0.4691, 0.9589, 0.7529, 0.9564])
# this gives a 2D matrix with two columns, (1 - p), p
b = torch.stack((1 - p, p), dim=-1)
And that's it!
| https://stackoverflow.com/questions/68642120/ |
pytorch nn.Module inference | I am planning on learning Pytorch. However at this stage I would like to ask a question so that I can understand some code I am reading
When you have a class whose base class is nn.Module say
class My_model(nn.Module)
how are inferences supposed to be run there?
In the code I am reading it says
tasks_output, other = my_model(data)
Wouldn't that just be creating an object? (like calling the class constructor)
How, in pytorch, are inference supposed to be made?
(for reference I am talking when my_model is set to my_model.eval())
EDIT: My apologies. I made the mistake of declaring the class and object as one.. I corrected the code
 | You are confusing __init__ and __call__.
In your example my_model is a class, therefore calling
my_model_instance = my_model(arguments)
invokes my_model.__init__ with arguments. The result of this call is a new instance of my_model in the variable my_model_instance.
Once you instantiated the class my_model as the variable my_model_instance, you can evaluate the model on the training data:
tasks_output, other = my_model_instance(data)
"Calling" (i.e., putting parenthesis after the variable name) the instance of the model causes python to invoke the method __call__ of the class.
In the case of classes derived from nn.Module this will invoke __call__ of nn.Module, which does some pytorch bookkeeping and eventually calls your implementation of the forward method of my_model.
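To make this concrete, a minimal sketch (the layer sizes and dummy input are illustrative, not from the code you are reading):
class My_model(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(10, 2)
    def forward(self, data):
        return self.linear(data)
my_model_instance = My_model()  # __init__ runs here
my_model_instance.eval()  # inference mode
with torch.no_grad():
    output = my_model_instance(torch.rand(1, 10))  # __call__ -> forward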
Please see this detailed thread on the difference between __init__ and __call__ in python in general.
It is often convenient to follow the PEP8 Style Guide for Python Code:
Class names should normally use the CapWords convention.
Function names should be lowercase, with words separated by underscores as necessary to improve readability.
Variable names follow the same convention as function names.
| https://stackoverflow.com/questions/68645889/ |
Pytorch transformations on GPU, is it worth on big input data? | I am running a UNet with PyTorch on medical imaging data with a bunch of transformations and augmentations in my preprocessing. However, after digging into the different preprocessing packages like Torchio and MONAI, I noticed that most of the functions, even when they take Tensors as IO, are running things on CPU.
The functions either straight up take numpy arrays as input or call .numpy() on the tensors.
The problem is that my data is composed of 3D images of dimension 91x109x91 that I resize in 96x128x96 so they are pretty big. Hence, running transformations and augmentations on CPU is pretty inefficient I think.
First, it makes my program CPU bound because it takes more time to transform my images than running them through the model (I timed it many times ). Secondly, I checked the GPU usage and it's oscillating between pretty much 0% and 100% at each batch so, it's clearly limited by the CPU. I would like to speed it up if it's possible.
My question is: Why are these packages not using the GPUs? They could at least have hybrid functions taking either a numpy array or a Tensor as input as a lot of numpy functions are available in Torch as well. Is there a good reason to stick to the CPU rather than speeding up the preprocessing by loading the images on GPU at the beginning of the preprocessing?
I translated a simple normalization function to work on GPU and compare the running time between the GPU and CPU version and even on a laptop (NVidia M2000M) the function was 3 to 4 times faster on GPU.
On an ML discord, someone mentioned that GPU-based functions might not give deterministic results and that's why it might not be a good idea but I don't know if that's actually the case.
My preprocessing includes resizing, intensity clamping, z-scoring, intensity rescaling, and then I have some augmentations like random histogram shift/elastic transform/affine transform/bias field.
| A transformation will typically only be faster on the GPU than on the CPU if the implementation can make use of the parallelism offered by the GPU. Typically anything that operates element-wise, or row/column-wise can be made faster on GPU. This therefore concerns most image transformations.
The reason why some libraries don't implement things on GPU is that it requires additional work for each Tensor manipulation library you want to support (Pytorch, Tensorflow, MXNet, ...), and you still have to maintain another CPU implementation anyway. Since you're using PyTorch, checkout the torchvision package that implements many transformations for both GPU and CPU tensors.
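For instance, many torchvision transforms now accept (batched) tensors directly, so they run on whatever device the tensor lives on (a rough sketch, not specific to your 3D medical data):
import torchvision.transforms as T
batch = batch.to('cuda')  # (B, C, H, W) float tensor on the GPU
tfm = T.Compose([
    T.RandomHorizontalFlip(),  # note: applied to the whole batch at once
    T.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])
batch = tfm(batch)  # executed on the GPU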
For more complex transformations, like elastic deformation, I'm not sure if you can find a GPU version. If not, you might have to write one yourself, or drop this transformation, or pay the cost of copying back-and-forth between CPU and GPU during your data augmentation.
Another solution that some people prefer is to precompute a large set of transformation on CPU as a separate step and to save the result in a file. The HDF file format is commonly used to save large datasets that can then be read very fast from disk. Since you will be saving a finite set of augmentation, be careful to generate several augmentations for each sample of your dataset to conserve a somewhat random behavior. This is not perfect, but it's a very pragmatic that will likely speed things up quite a bit if your CPU is holding your GPU back.
Regarding the determinism of the GPU, it's true that floating point operations are not by default guaranteed to be deterministic when run on GPU. This is because reordering some floating point operations can make them faster, but the reordering cannot guarantee that the result will be exactly the same (it will be close of course!). This can matter for reproducibility, if you use a seed in your code and get slightly different results. See the Pytorch Documentation to understand other sources of non-determinism.
| https://stackoverflow.com/questions/68649820/ |
What is the "data.max" of a torch.Tensor? | I have been browsing the documentation of torch.Tensor, but I have not been able to find this (just similar things).
If a_tensor is a torch.Tensor, what is a_tensor.data.max? What type, etc.?
In particular, I am reading a_tensor.data.max(1)[1] and a_tensor.data.max(1)[1][i].cpu().numpy().
| When accessing .data you are accessing the underlying data of the tensor. The returned object is a Torch.*Tensor as well, however, it won't be linked to any computational graph.
Take this example:
>>> x = torch.rand(4, requires_grad=True)
>>> y = x**2
>>> y
tensor([0.5272, 0.3162, 0.1374, 0.3004], grad_fn=<PowBackward0>)
While y.data is somewhat detached from graph (no grad_fn function), yet it is not a copy of y as y.detach() would return:
>>> y.data
tensor([0.5272, 0.3162, 0.1374, 0.3004]
Therefore, if you modify y.data's components you end modifying y itself:
>>> y.data[0] = 1
>>> y
tensor([1.0000, 0.3162, 0.1374, 0.3004], grad_fn=<PowBackward0>)
Notice how the grad_fn didn't change there. If you had done y[0] = 1, grad_fn would have been updated to <CopySlices>. This shows that modifying your tensor's data through .data is not accounted for in terms of gradient, i.e., you won't be able to backpropagate these operations. It is required to work with y - not y.data - when planning to use Autograd.
So, to give an answer to your question: a_tensor.data is a torch.*Tensor, same type as a_tensor, and a_tensor.data.max is a function bound to that tensor.
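As for the a_tensor.data.max(1)[1] pattern you quoted: max(1) takes the maximum along dimension 1 and returns a (values, indices) pair, so [1] selects the argmax indices (typically the predicted classes):
>>> a_tensor = torch.tensor([[0.1, 0.7, 0.2],
...                          [0.9, 0.05, 0.05]])
>>> values, indices = a_tensor.data.max(1)
>>> indices
tensor([1, 0])
>>> a_tensor.data.max(1)[1]  # same thing: the argmax indices
tensor([1, 0])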
| https://stackoverflow.com/questions/68650265/ |
how to save in pytorch an ONNX model with training (autograd) operations? | In pytorch, is it possible to save an ONNX model to file including the backward operations?
If not, is there any other way in pytorch to save the forward and backward graph as text (json, pbtxt ...)?
Any help will be appreciated.
| it's possible if you wrap the model with ORTModule -
https://github.com/microsoft/onnxruntime-training-examples
There's a flag to enable ONNX model saving, for example:
model._save_onnx = True
model._save_onnx_prefix = 'MNIST'
However, the onnx graph from fw will be further optimized before generating bw graph. Thus it's specific to ORT, but the training results should be mathematically the same. If you are looking for just fw+bw graph, the output onnx is still a good reference. The onnx could be opened using Netron util - https://github.com/lutzroeder/Netron
| https://stackoverflow.com/questions/68672250/ |
torch.masked_scatter result did not meet expectations | my pytorch code:
import torch
x = torch.tensor([[0.3992, 0.2908, 0.9004, 0.4850, 0.6004],
[0.5735, 0.9006, 0.6797, 0.4152, 0.1732]])
print(x.shape)
mask = torch.tensor([[False, False, True, False, True],
[ True, True, True, False, False]])
print(mask.shape)
y = torch.tensor([[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.]])
print(y.shape)
y.masked_scatter_(mask, x)
print(y)
result is:
torch.Size([2, 5])
torch.Size([2, 5])
torch.Size([2, 5])
tensor([[0.0000, 0.0000, 0.3992, 0.0000, 0.2908],
[0.9004, 0.4850, 0.6004, 0.0000, 0.0000]])
i think the result answer is:
tensor([[0.0000, 0.0000, 0.9004, 0.0000, 0.6004],
[0.5375, 0.9006, 0.6797, 0.0000, 0.0000]])
my pytorch version is pytorch1.4
| You are right, this is confusing and there is virtually no documentation.
However, the way scatter works (as you have discovered) is that the ith True in a row is given the ith value from the source. So not the value corresponding to the position of the True.
Luckily what you are trying to do can easily be achieved using the normal indexing notation:
>>> y[mask] = x[mask]
>>> y
tensor([[0.0000, 0.0000, 0.9004, 0.0000, 0.6004],
[0.5735, 0.9006, 0.6797, 0.0000, 0.0000]])
| https://stackoverflow.com/questions/68675160/ |
AzureML experiment pipeline not using CUDA with PyTorch | I am running an experiment pipeline to train my model with PyTorch and CUDA.
I created the environment as follow:
env = Environment.from_conda_specification(model, join(model, 'conda_dependencies.yml'))
env.docker.enabled = True
env.environment_variables = {'MODEL_NAME': model, 'BRANCH': branch, 'COMMIT': commit}
env.docker.base_image = DEFAULT_GPU_IMAGE
run_config = RunConfiguration()
run_config.environment = env
run_config.docker = DockerConfiguration(use_docker=True)
And here is the training step:
train_step = PythonScriptStep(
name='Model Train',
source_directory=training_dir,
compute_target=cluster,
runconfig=run_config,
script_name='train_aml.py',
arguments=[
'--model', model,
'--model_output_dir', model_output_dir,
],
inputs=[train_dataset.as_mount()],
outputs=[model, model_output_dir]
)
Even though I am using a Standard_NC12_Promo machine when I run my training script, the GPU is not picked up by PyTorch device = torch.device("cuda" if torch.cuda.is_available() else "cpu").
If I try running my script on the same machine but not in an experiment then the GPU is used.
Do you know any potential solutions to this?
| Depending on pytorch version you might need a specific version of cuda. Try cuda 11.0.3 or cuda 11.1 from here
https://github.com/Azure/AzureML-Containers/tree/master/base/gpu
Regarding your code snippet, please move the environment variables out of the Environment object and into the RunConfiguration
| https://stackoverflow.com/questions/68678587/ |
AttributeError: Can't pickle local object 'pre_datasets..' when implementing Pytorch framework | I was trying to implement a pytorch framework on CNN.
I'm sure the code is right because it's from a tutorial and it works when I ran it on Jupyter Notebook on GoogleDrive.
But when I tried to run it locally as a .py file, it raised an error:
AttributeError: Can't pickle local object 'pre_datasets.<locals>.<lambda>'
I know it's about inferencing objects outside a function, but what was the exact matter about this error?
And how should I fix it?
Here's the major part of the code.
def pre_datasets():
TRAIN_TFM = transforms.Compose(
[
transforms.Resize(size=(128, 128)),
# TODO
transforms.ToTensor(),
]
)
train_set = DatasetFolder(
root=CONFIG["train_set_path"],
loader=lambda x: Image.open(x),
extensions="jpg",
transform=TRAIN_TFM,
)
train_loader = DataLoader(
dataset=train_set,
batch_size=CONFIG["batch_size"],
shuffle=True,
num_workers=CONFIG["num_workers"],
pin_memory=True,
)
return train_loader
def train(train_loader):
...
for epoch in range(CONFIG["num_epochs"]):
...
for batch in train_loader: # error happened here
...
if __name__ == "__main__":
train_loader = pre_datasets()
train(train_loader)
Here's the error message:
Traceback (most recent call last):
File "HW03_byCRZ.py", line 197, in <module>
train(train_loader, valid_loader)
File "HW03_byCRZ.py", line 157, in train
for batch in train_loader:
File "/Users/ceezous/opt/anaconda3/envs/pytorch_env/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 355, in __iter__
return self._get_iterator()
File "/Users/ceezous/opt/anaconda3/envs/pytorch_env/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 301, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "/Users/ceezous/opt/anaconda3/envs/pytorch_env/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 914, in __init__
w.start()
File "/Users/ceezous/opt/anaconda3/envs/pytorch_env/lib/python3.8/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/Users/ceezous/opt/anaconda3/envs/pytorch_env/lib/python3.8/multiprocessing/context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/Users/ceezous/opt/anaconda3/envs/pytorch_env/lib/python3.8/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/Users/ceezous/opt/anaconda3/envs/pytorch_env/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/Users/ceezous/opt/anaconda3/envs/pytorch_env/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/Users/ceezous/opt/anaconda3/envs/pytorch_env/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/Users/ceezous/opt/anaconda3/envs/pytorch_env/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'pre_datasets.<locals>.<lambda>'
| I had a similar issue and I used dill like this:
import dill as pickle
and it worked out of the box!
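If dill does not help in your setup, note that the error itself points at loader=lambda x: Image.open(x): the default pickler used by DataLoader workers cannot serialize a lambda defined inside a function. A common workaround (my suggestion, not part of the original answer) is to use a module-level function instead:
def pil_loader(path):
    return Image.open(path)
train_set = DatasetFolder(
    root=CONFIG["train_set_path"],
    loader=pil_loader,  # picklable, unlike the inline lambda
    extensions="jpg",
    transform=TRAIN_TFM,
)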
| https://stackoverflow.com/questions/68679806/ |
Convert Flatten layer from PyTorch to Tensorflow - Equivalent for start_dim and end_dim | What is the equivalent of the options start_dim and end_dim of the flatten PyTorch layers, in Tensorflow?
With Tensorlfow we only have data_format and it is not customizable.
| I don't think there is an identical implementation in tf. However, you can always use tf.reshape and add the shape yourself with a simple function which takes as arguments input, start_dim and end_dim and outputs the corresponding output shape that torch.flatten would give you.
| https://stackoverflow.com/questions/68691960/ |
Split neural network, load only needed part onto the GPU | I have a very, very big neural network and a Google Colab Pro subscription giving me 16GB of GPU RAM. Unfortunately, this is not enough. My idea now is to split the model (a UNet) into its encoder and decoder parts and proceed like the following:
Load encoder to the GPU
Process the data through the encoder
Load encoder to the cpu, decoder to the GPU
Process the encoder output through the decoder
Load the decoder to the cpu aaaand repeat.
Is this in general possible? I coded an example but it wont work:
def train(epoch, loader, loss_fn, optimizer, scaler, model1, model2):
model1.train()
model2.train()
loop = prog(loader)
running_loss = []
for batch_index, (data, target) in enumerate(loop):
optimizer.zero_grad(set_to_none=True)
model1 = model1.to(DEVICE)
data, skip_connections = model1(data.to(DEVICE))
model1 = model1.cpu()
model2 = model2.to(DEVICE)
data = model2(data, skip_connections)
model2 = model2.cpu()
target = target.to(DEVICE)
with torch.cuda.amp.autocast():
loss = loss_fn(data, target)
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
model1 = model1.to(DEVICE)
loss_value = loss.item()
loop.set_postfix(info="Epoch {}, train, loss={:.5f}".format(epoch, loss_value))
running_loss.append(loss_value)
return s.mean(running_loss)
For the setup / initialization I got the following:
DEVICE = "cuda"
model1 = UNET_FIRST_HALVE(in_channels=4).to(DEVICE)
model2 = UNET_SECOND_HALVE(out_channels=NUM_CLASSES).cpu()
for epoch in range(epochs_done + 1, num_epochs + 1):
training_loss = train(..., model1, model2)
.
.
.
I get the following error:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
Surely I understand the error, but I am sure that I push and pull everything at the right time onto and from the GPU... Or maybe there is a better way of splitting a model?
| There are a couple of wrong things with this:
Your autocast block should include the forward on your model
You don't need to go back and forth from CPU to GPU and back, it's already a bottleneck with tensors, imagine with models.
Optional: what the heck is your UNet made of if you can't make it fit on a 16GB device?
I'd first try with standard solutions: reduce the batch size, include AMP (that I see you already included) or even DeepSpeed, which already does some CPU allocation, depending on the memory optimization level. Have a look at it, just in case, this may already solve your problems.
Answering to your question, the only feasible approach I see is to keep a model on cuda, the other on CPU and eventually move inputs/outputs.
def train(epoch, loader, loss_fn, optimizer, scaler, model1, model2):
model1.to(device1)
model2.to(device2)
model1.train()
model2.train()
loop = prog(loader)
running_loss = []
for batch_index, (data, target) in enumerate(loop):
optimizer.zero_grad(set_to_none=True)
# now data and skip are on device1
data, skip_connections = model1(data.to(device1))
# we need to move to device2
data = data.to(device2)
skip = skip_connections.to(device2)
data = model2(data, skip)
# everything should be on 2
# move the target to the same device
target = target.to(device2)
loss = loss_fn(data, target)
# backprop
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
loss_value = loss.item()
loop.set_postfix(info="Epoch {}, train, loss={:.5f}".format(epoch, loss_value))
running_loss.append(loss_value)
return s.mean(running_loss)
The main issue I see with this, is that your first model (residing on CPU) should not require the backpropagation part, otherwise it will probably break again. I doubt that PyTorch can build the backward graph between devices.
| https://stackoverflow.com/questions/68693585/ |
How can I reshape (A,) and (B, C, D) shapes to the single (A, B, C, D)? | This is my code;
for img_loc in list(self.train_data)[idx]:
images_set.append(self.load_ucf_image(img_loc))
print(images_set)
And, this is its output
[tensor([[[ 1.7865, 1.8893, 1.9578, ..., -1.3815, -0.4054, 0.2967],
[ 1.7694, 1.8722, 1.9578, ..., -0.6452, -0.4054, 0.1254],
[ 1.7523, 1.8722, 1.9749, ..., -0.5082, -0.6623, -0.3541],
...,
[-1.9809, -1.6384, -1.2617, ..., -1.7754, -1.0562, -0.9020],
[-2.0494, -1.8268, -1.5014, ..., -1.1075, -1.4672, -1.7069],
[-1.9980, -1.8953, -1.4672, ..., -1.7412, -1.7069, -1.2445]],
[[ 1.0805, 1.2556, 1.3606, ..., -1.1954, -0.1975, 0.4678],
[ 1.1155, 1.2731, 1.3957, ..., -0.4426, -0.1975, 0.3102],
[ 1.1155, 1.2731, 1.3957, ..., -0.2850, -0.4426, -0.1800],
...,
[-1.9657, -1.7556, -1.4580, ..., -1.8957, -1.2304, -1.1253],
[-2.0007, -1.9132, -1.6856, ..., -1.1954, -1.6331, -1.9482],
[-1.9482, -1.9482, -1.5980, ..., -1.8431, -1.8782, -1.4230]],
 ... (the printout continues like this; the list contains 16 such tensors, each of shape (3, 112, 112)) ...]
I have appended 16 images of shape 3x112x112. When I check the shape of images_set using this code:
print(np.array(images_set, dtype='object').shape)
I get:
(16,)
Then I check the shape of the first element of images_set (it is a list of 16 images) using this code:
print(np.array(images_set[0]).shape)
and I get that each of the 16 images has this shape:
(3, 112, 112)
How can I turn this into shape (16, 3, 112, 112)?
| A more general format:
import tensorflow as tf
import numpy as np
#Let's make a prototype of one image using ones (just to reproduce the problem without original data...)
one_liketensor=np.ones((3,112,112))
#Then let's notice the same can be seen in tensor-format as follows:
one_liketensor_as_tensor=tf.constant(one_liketensor)
#Let's define length of the list...where the tensors are...
length_of_list=16
#And then let's make an array of the target shape...
multi_array=np.ones((length_of_list,one_liketensor.shape[0],one_liketensor.shape[1],one_liketensor.shape[2]))
for i in range(length_of_list):
#For clarification, let's multiply each distinct "image" by the number i so the structure of the result is easy to understand...
multi_array[i,:]=i*one_liketensor
#...but naturally "one_liketensor" stands in for your actual data, so in practice take this information directly from your source tensors
#And next let's print the result
print(multi_array)
#And let's transform that to tensor-format
multi_array_as_tensor=tf.constant(multi_array)
#And ... tadaa ... you have the material in the preferred format:
print("Shape of the result is: ",multi_array_as_tensor.shape)
...where the "input information" is the length of the list and the shape (and source) of the tensors.
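As a side note, since the entries in the question are already torch tensors, the same target shape can also be obtained directly in PyTorch. A minimal sketch, assuming images_set is the 16-element list of (3, 112, 112) tensors:
import torch

# Stack the 16 tensors of shape (3, 112, 112) along a new first dimension.
batch = torch.stack(images_set)   # -> shape (16, 3, 112, 112)
print(batch.shape)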
| https://stackoverflow.com/questions/68703236/ |
HuggingFace text summarization input data format issue | I’m trying to fine-tune a model to perform text summarization. I’m using AutoModelForSeq2SeqLM.from_pretrained(), so the following applies to several models (e.g. T5, ProphetNet, BART).
I’ve created a class called CustomDataset, which is a subclass of torch.utils.Dataset. That class contains one field: samples - a list of dictionaries that have encodings and labels keys. Each of the values in each of those dictionaries is a torch.Tensor. Here’s what an entry in samples looks like:
{'encoding': tensor([[21603, 10, 188, 563, 1]]), 'label': tensor([[ 1919, 22003, 22, 7, 1]])}
Here’s how I’m attempting to fine-tune the model using Trainer:
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
training_args = TrainingArguments("test_trainer")
trainer = Trainer(
model=model,
args=training_args,
train_dataset=data,
)
trainer.train()
The error I’m getting gets thrown on line 63 in transformers\data\data_collator.py. Here’s that line of code:
label = first["label"].item() if isinstance(first["label"], torch.Tensor) else first["label"]
Here’s the error message:
ValueError: only one element tensors can be converted to Python scalars
I understand why the error message specifically is being thrown - the first["label"] tensor isn’t a one-element tensor, and hence item() can’t be called on it. That’s not why I’m asking this question, though.
I’m assuming that I’m not passing the data correctly, but it seems to me that Trainer should take care of input_ids and decoder_input_ids on its own. I’ve tried to set those manually (passing the encodings as input_ids and the labels as decoder_input_ids) and the model can successfully perform inference, but I haven’t managed to fine-tune it. Where am I making a mistake and how do I fix it?
| Using the name label_ids instead of label fixes the specific problem. label should be used if the label is either an int, a float or a one-element torch.Tensor. For tensors with multiple elements, use label_ids. See data_collator.py, lines 62-71 for details:
if "label" in first and first["label"] is not None:
label = first["label"].item() if isinstance(first["label"], torch.Tensor) else first["label"]
dtype = torch.long if isinstance(label, int) else torch.float
batch["labels"] = torch.tensor([f["label"] for f in features], dtype=dtype)
elif "label_ids" in first and first["label_ids"] is not None:
if isinstance(first["label_ids"], torch.Tensor):
batch["labels"] = torch.stack([f["label_ids"] for f in features])
else:
dtype = torch.long if type(first["label_ids"][0]) is int else torch.float
batch["labels"] = torch.tensor([f["label_ids"] for f in features], dtype=dtype)
Also, the name input_ids should be used instead of encoding. Otherwise, an unknown kwarg error gets thrown.
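Concretely, the example entry from the question would then look like this (same tensors, only the keys renamed):
{'input_ids': tensor([[21603, 10, 188, 563, 1]]), 'label_ids': tensor([[ 1919, 22003, 22, 7, 1]])}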
| https://stackoverflow.com/questions/68703608/ |
Pytorch Lightning Tensorboard Logger Across Multiple Models | I'm relatively new to Lightning and Loggers vs manually tracking metrics. I am trying to train two distinct models and have their accuracy and loss plotted on the same charts in tensorboard (or any other logger) within Colab.
What I have right now is basically:
trainer1 = pl.Trainer(gpus=n_gpus, max_epochs=n_epochs, progress_bar_refresh_rate=20, num_sanity_val_steps=0)
trainer2 = pl.Trainer(gpus=n_gpus, max_epochs=n_epochs, progress_bar_refresh_rate=20, num_sanity_val_steps=0)
trainer1.fit(Model1, train_loader, val_loader)
trainer2.fit(Model2, train_loader, val_loader)
#Then later:
%load_ext tensorboard
%tensorboard --logdir lightning_logs/
What I'd like to see at this point are those logged metrics charted together on the same chart, any help would be appreciated. I've spent some time trying to toy with this but I'm a bit out of my depth on this, thank you!
| The exact chart used for logging a specific metric depends on the key name you provide in the .log() call (its a feature that Lightning inherits from TensorBoard itself)
def validation_step(self, batch, _):
# This string decides which chart to use in the TB web interface
# vvvvvvvvv
self.log('valid_acc', acc)
Just use the same string for both .log() calls and have both runs saved in the same directory.
logger = TensorBoardLogger(save_dir='lightning_logs/', name='model1')
logger = TensorBoardLogger(save_dir='lightning_logs/', name='model2')
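A sketch of wiring those two loggers into the two trainers from the question (gpus and n_epochs are the values you already use; only the logger argument is new):
from pytorch_lightning.loggers import TensorBoardLogger

trainer1 = pl.Trainer(gpus=n_gpus, max_epochs=n_epochs,
                      logger=TensorBoardLogger(save_dir='lightning_logs/', name='model1'))
trainer2 = pl.Trainer(gpus=n_gpus, max_epochs=n_epochs,
                      logger=TensorBoardLogger(save_dir='lightning_logs/', name='model2'))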
If you run tensorboard --logdir ./lightning_logs pointing at the parent directory, you should be able to see both metrics in the same chart under the key named valid_acc.
| https://stackoverflow.com/questions/68707849/ |
GNN with Stable baselines | I am looking to use DGL or pytorch geometric for building my policy and value networks in stable baselines, however I am struggling to figure out how to send over observations. The observations must be one of the gym spaces class but I am not sure how to send a graph object that can be used by DGL or Pytorch geometric in this way.
The fundamental question I have is how to send graph observations and where to do the preprocessing necessary to use DGL or PyTorch Geometric for a custom Stable Baselines network. Can I pack the graph into a Stable Baselines observation space in such a way that DGL or PyTorch Geometric could consume it?
Note: if anyone has a GitHub link to code that has done this, please let me know; I have looked everywhere.
| You can serialize your DGL graph object using pickle and convert the resultant byte string into a vector of integers (with each char in the string corresponding to one int).
import dgl
import numpy as np
import pickle
def serialize_graph(graph: dgl.DGLGraph):
as_byte_string = pickle.dumps(graph)
as_int_list = [_ for _ in as_byte_string] # we get ints for free without explicitly casting
as_float_array = np.array(as_int_list, dtype=np.float32)
return as_float_array
You can then apply the same operations in reverse to deserialize vector representation of the graph within your custom feature extractor.
import dgl
import pickle
import torch as th
def deserialize_graph(observation: th.Tensor):
as_int_tensor = observation.to(dtype=th.int32)
as_char_list = [chr(int(_)) for _ in as_int_tensor]  # iterate over the int-cast tensor, not the raw float observation
as_byte_string = bytearray(''.join(as_char_list), encoding='latin')
as_dgl_graph = pickle.loads(as_byte_string)
return as_dgl_graph
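A quick round-trip sketch using the two helpers above (graph construction via dgl.graph, available in DGL >= 0.5; the resulting flat float array would then live in a fixed-length Box observation space, which you need to size yourself):
import dgl
import torch as th

g = dgl.graph((th.tensor([0, 1]), th.tensor([1, 2])))  # tiny 3-node example graph
obs = serialize_graph(g)                               # 1-D float32 array usable as an observation
g_back = deserialize_graph(th.from_numpy(obs))         # recovered DGLGraph inside the feature extractor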
| https://stackoverflow.com/questions/68731718/ |
Evaluate the model during training affects its performance PyTorch | In PyTorch, I want to evaluate my model on the validation set every eval_step during training, and I wrote code like this:
def tune(model, loader_train, loader_dev, optimizer, epochs, eval_step):
for epoch in range(epochs):
for step,x in enumerate(loader_train):
optimizer.zero_grad()
loss = model(x)
loss.backward()
optimizer.step()
if step % eval_step == 0:
model.eval()
test(model, loader_dev)
model.train()
When eval_step = int(len(loader_train)/2) and eval_step = int(len(loader_train)/8), they lead to quite different metric results after training through one whole epoch (which means the second output for the former differs from the eighth output for the latter).
Could anyone explain why?
The length of loader_train is 20000 (it depends on batch size), and here is my testing script:
def test(model, loader_dev):
preds = []
labels = []
for step,x in enumerate(loader_dev):
preds.append(model(x).view(-1))
labels.append(x['label'].view(-1))
metric = cal_metric(preds, labels)
logger.info(metric)
| I think you probably set shuffle=True in your dataloader. Even if you fix the random seed, a torch dataloader can produce different results when another dataloader is iterated in between uses of the current one. In the scenario you describe, this may cause your model to receive the training data in a different order, which then results in different metric results.
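If that is indeed the cause, one way to make the two settings comparable (a sketch, assuming the shuffling order is what differs; dataset_train and the batch size are placeholders for your own values) is to give the training DataLoader its own random generator, so that whatever happens during evaluation does not consume its random state:
import torch

g = torch.Generator()
g.manual_seed(42)
loader_train = torch.utils.data.DataLoader(dataset_train, batch_size=64, shuffle=True, generator=g)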
| https://stackoverflow.com/questions/68736827/ |
Detect if image is blurry using pytorch android API | I am working on an app in which, if an officer takes a picture of a car accident and the picture is blurry, the app will not accept it. It will only accept images of sufficient quality and resolution. After some research, I found that I can implement this feature using PyTorch: I will save a model in my assets and then write Kotlin code around it based on my logic. I found this link: https://pytorch.org/mobile/android/ However, this link shows an image classifier example in which the user can detect whether an image is a cat or a dog, and this is not what I want.
As an example,
If a police officer takes a picture of a car accident via the app, the app needs to verify that the image resolution and quality are good enough that whoever sees the picture can tell it shows a car accident; if instead the image is blurry and unclear, the app will ask the user to take the picture again.
Any tips, please?
Thank you
| For a start you can try to build this using conventional computer vision algorithms.
You will find a few examples trying to implement a similar idea using OpenCV (e.g. here).
If you want to solve such a task through machine learning, you either can try to find a model that is trained to solve such a task online (maybe you can find one if you are lucky), or you have to train such a model yourself.
When using machine learning you need to have a dataset, which in your case you can easily create from a database of images and blurring the examples yourself (see self-supervised learning). You need to decide which machine learning method to use. For a simple detection if an image is blurry, using deep learning and CNNs seems unnecessary. Instead look into extracting some features relating to an images bluriness and use a simple machine learning algorithms such as k-NN.
However I think using machine learning to solve this kind of task is not really necessary. Every digital camera is basically implementing this feature for their autofocus. So I think looking for conventional computer vision solutions with OpenCV, (e.g. edge contrast estimation, image frequency transforms, ...) will get you to your goal quicker.
OpenCV is also available for Android SDK.
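As a concrete starting point, a common conventional check is the variance of the Laplacian. A minimal Python/OpenCV sketch (the threshold is an assumption you would tune on your own accident photos):
import cv2

def is_blurry(image_path: str, threshold: float = 100.0) -> bool:
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    focus_measure = cv2.Laplacian(gray, cv2.CV_64F).var()  # low variance of edge response = blurry
    return focus_measure < threshold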
| https://stackoverflow.com/questions/68737162/ |
AWS Sagemaker InvokeEndpoint: operation: Endpoint of account not found | I've been following this guide here: https://aws.amazon.com/blogs/machine-learning/building-an-nlu-powered-search-application-with-amazon-sagemaker-and-the-amazon-es-knn-feature/
I have successfully deployed the model from my notebook instance. I am also able to generate predictions by calling predict() method from sagemaker.predictor.
This is how I created and deployed the model
class StringPredictor(Predictor):
def __init__(self, endpoint_name, sagemaker_session):
super(StringPredictor, self).__init__(endpoint_name, sagemaker_session, content_type='text/plain')
pytorch_model = PyTorchModel(model_data = inputs,
role=role,
entry_point ='inference.py',
source_dir = './code',
framework_version = '1.3.1',
py_version='py3',
predictor_cls=StringPredictor)
predictor = pytorch_model.deploy(instance_type='ml.m5.large', initial_instance_count=4)
From the SageMaker dashboard, I can even see that my endpoint and the status is "in-service"
If I run aws sagemaker list-endpoints I can see my desired endpoint showing up correctly as well.
My issue is when I run this code (outside of sagemaker), I'm getting an error:
import boto3
sm_runtime_client = boto3.client('sagemaker-runtime')
payload = "somestring that is used here"
response = sm_runtime_client.invoke_endpoint(EndpointName='pytorch-inference-xxxx',ContentType='text/plain',Body=payload)
This is the error thrown
botocore.errorfactory.ValidationError: An error occurred (ValidationError) when calling the InvokeEndpoint operation: Endpoint pytorch-inference-xxxx of account xxxxxx not found.
This is quite strange as I'm able to see and run the endpoint just fine from sagemaker notebook and I am able to run the predict() method too.
I have verified the region, endpoint name and the account number.
| I was having the exact same error, I've just fixed mine by setting the correct region.
I have verified the region, endpoint name and the account number.
I know that you have indicated that you have verified the region, but in my case, the remote computer had another region configured. So I just ran the following command on my remote computer
aws configure
And once I set the key ID and secret key again, I set the correct region and the error was gone.
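Alternatively, you can pin the region in code when creating the client, so it cannot silently fall back to a different default (the region below is an assumption; use the one your endpoint was deployed in):
import boto3

sm_runtime_client = boto3.client(
    'sagemaker-runtime',
    region_name='us-east-1',  # must match the region shown for the endpoint in the SageMaker console
)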
| https://stackoverflow.com/questions/68777186/ |
Keyerror:None ,I don't understand this problem | class KITTIRAWDataset(KITTIDataset):
def __init__(self, *args, **kwargs):
super(KITTIRAWDataset, self).__init__(*args, **kwargs)
def get_image_path(self, folder, frame_index, side):
self.img_ext='.png'
f_str = "{:010d}{}".format(frame_index, self.img_ext)
image_path = os.path.join(
self.data_path, folder, "image_0{}/data".format(self.side_map[side]), f_str)
return image_path
def get_depth(self, folder, frame_index, side, do_flip):
calib_path = os.path.join(self.data_path, folder.split("/")[0])
# calib_path: D:/SomeExperiments/KITTRawData/2011_09_26
velo_filename = os.path.join(
self.data_path,
folder,
# Only change is using colon (:) instead of %. For example, instead of %s use {:s} and instead of %d use (:d}
# 010d: the integer length 10
"velodyne_points/data/{:010d}.bin".format(int(frame_index)))
depth_gt = generate_depth_map(calib_path, velo_filename, self.side_map[side])
depth_gt = skimage.transform.resize(
depth_gt, self.full_res_shape[::-1], order=0, preserve_range=True, mode='constant')
if do_flip:
depth_gt = np.fliplr(depth_gt)
# print(type(depth_gt)) 'numpy.ndarray'
return
When I run my code, it raises KeyError: None on the self.side_map[side] lookup.
I have changed my torch version to the one the author's markdown refers to, but that does not solve the problem.
| KeyError means that you are trying to get a value from a dict with a key that does not exist. The failing line contains self.side_map[side], and KeyError: None means that the key is None, so your side variable has the value None.
That is all we can know from the code and the error without more context.
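To illustrate with a minimal sketch (the mapping below is just a typical KITTI-style example, not necessarily the one in your code):
side_map = {"l": 2, "r": 3}
side = None
# side_map[side] would raise KeyError: None here.
# Guard the lookup, or make sure `side` is set before calling get_image_path/get_depth:
cam = side_map.get(side)
if cam is None:
    raise ValueError(f"unexpected side value: {side!r}")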
| https://stackoverflow.com/questions/68779859/ |
Single GPU Pytorch training with SLURM - how to set "ntasks-per-node"? | I would like to do some simple fine-tuning on a transformers model using a single GPU on a server via SLURM. I haven't used SLURM before and I am not a computer scientist so my understanding of the field is a bit limited. I have done some research and created the script below.
Could you please confirm if it is fit for purpose?
As far as I have understood, a node corresponds to a single computer and "--gres=gpu:1" will use a single gpu. The only thing I haven't understood clearly is "ntasks-per-node". The way I have understood it, because I will run a single python script, this can be equal to 1. Is that correct?
#! /bin/bash
#SBATCH --job-name 'SQuAD'
#SBATCH --output squad_job%J.out
#SBATCH --error squad_error%J.err
#SBATCH --nodes=1
#SBATCH --gres=gpu:1
#SBATCH --ntasks-per-node=1
#SBATCH --partition=normal
#SBATCH --time=72:00:00
python3 fine_tune_squad.py
| Yes, it will request 1 GPU for running the task. As described in the documentation:
The default is one task per node [...]
Therefore, the default value for --ntasks-per-node is already 1, which means you don't even need to define it. In fact, even --nodes has a default value of 1. Nonetheless, some consider it good practice to define them explicitly to avoid problems, so I'd leave them as you did.
| https://stackoverflow.com/questions/68787145/ |
min-max normalization of a tensor in PyTorch | I want to perform min-max normalization on a tensor in PyTorch.
The formula to obtain min-max normalization is x' = (x - x_min) / (x_max - x_min) * (new_max - new_min) + new_min.
I want to perform min-max normalization on a tensor using some new_min and new_max without iterating through all elements of the tensor.
>>>import torch
>>>x = torch.randn(5, 4)
>>>print(x)
tensor([[-0.8785, -1.6898, 2.2129, -0.8375],
[ 1.2927, -1.3187, -0.7087, -2.1143],
[-0.6162, 0.6836, -1.3342, -0.7889],
[-0.2934, -1.2526, -0.3265, 1.1933],
[ 1.2494, -1.2130, 1.5959, 1.4232]])
Is there any way to min-max normalize the given tensor between two values new_min, new_max?
Suppose I want to scale the tensor from new_min = -0.25 to new_max = 0.25
| Having defined v_min, v_max, new_min, and new_max as follows (v denotes your input tensor, x in the question):
>>> v_min, v_max = v.min(), v.max()
>>> new_min, new_max = -.25, .25
You can apply your formula element-wise:
>>> v_p = (v - v_min)/(v_max - v_min)*(new_max - new_min) + new_min
tensor([[-0.1072, -0.2009, 0.2500, -0.1025],
[ 0.1437, -0.1581, -0.0876, -0.2500],
[-0.0769, 0.0733, -0.1599, -0.0969],
[-0.0396, -0.1504, -0.0434, 0.1322],
[ 0.1387, -0.1459, 0.1787, 0.1588]])
Then check v_p statistics:
>>> v_p.min(), v_p.max()
(tensor(-0.2500), tensor(0.2500))
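Wrapped as a small reusable helper (same element-wise formula as above, reusing the tensor v):
def minmax_norm(t: torch.Tensor, new_min: float, new_max: float) -> torch.Tensor:
    t_min, t_max = t.min(), t.max()
    return (t - t_min) / (t_max - t_min) * (new_max - new_min) + new_min

v_p = minmax_norm(v, -0.25, 0.25)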
| https://stackoverflow.com/questions/68791508/ |
How to get values return by Tuple Object in Maskcrnn libtorch | I’m new in C++ and libtorch, I try load model by torchscript and execute inference, the codes like below:
torch::jit::script::Module module;
try {
module = torch::jit::load("../../weights/card_extraction/pytorch/2104131340/best_model_27_mAP=0.9981_torchscript.pt");
}
catch (const c10::Error& e) {
std::cerr << "Error to load model\n";
return -1;
}
std::cout << "Load model successful!\n";
torch::DeviceType device_type;
device_type = torch::kCPU;
torch::Device device(device_type, 0);
module.to(device);
torch::Tensor sample = torch::zeros({3, 800, 800});
std::vector<torch::jit::IValue> inputs;
std::vector<torch::Tensor> images;
images.push_back(sample);
/* images.push_back(torch::ones({3, 224, 224})); */
inputs.push_back(images);
auto t1 = std::chrono::high_resolution_clock::now();
auto output = module.forward(inputs);
auto t2 = std::chrono::high_resolution_clock::now();
int duration = std::chrono::duration_cast<std::chrono::milliseconds> (t2 - t1).count();
std::cout << "Inference time: " << duration << " ms" << std::endl;
std::cout << output << std::endl;
And the result like this:
Load model successful!
[W mask_rcnn.py:86] Warning: RCNN always returns a (Losses, Detections) tuple in scripting (function )
Inference time: 2321 ms
({}, [{boxes: [ CPUFloatType{0,4} ], labels: [ CPULongType{0} ], scores: [ CPUFloatType{0} ], masks: [ CPUFloatType{0,1,800,800} ]}])
How do I get value boxes, labels, scores and masks from return output object using c++ ?
I tried many ways, but compilation always fails with a “c10::IValue” error.
One more question: why is inference slower when the model is converted to TorchScript and executed from C++ than when it is run from Python?
Many thanks
| You can access the tuple elements like this (from the linked forum thread, where the tuple is parsed and its elements accessed as tensors); it may help you:
auto output1_t = output.toTuple()->elements()[0].toTensor();
auto output2_t = output.toTuple()->elements()[1].toTensor();
https://discuss.pytorch.org/t/how-can-i-get-access-to-first-and-second-tensor-from-tuple-returned-from-forward-method-in-libtorch-c-c/139741
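For the (Losses, Detections) tuple shown above, the second element is a list with one dict per image, so a few more conversions are needed. A minimal sketch (accessor names per libtorch's c10::IValue API; this is an assumption to verify against your libtorch version, not code from the linked thread):
auto detections = output.toTuple()->elements()[1]  // List[Dict[str, Tensor]] part of the tuple
                        .toList().get(0)           // dict for the first (and only) image
                        .toGenericDict();
auto boxes  = detections.at("boxes").toTensor();
auto labels = detections.at("labels").toTensor();
auto scores = detections.at("scores").toTensor();
auto masks  = detections.at("masks").toTensor();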
| https://stackoverflow.com/questions/68796689/ |
HuggingFace Trainer logging train data | I'm following this tutorial to train some models:
https://huggingface.co/transformers/training.html
I'd like to track not only the evaluation loss and accuracy but also the train loss and accuracy, to monitor overfitting. While running the code in Jupyter, I do see all of this:
Epoch Training Loss Validation Loss Accuracy Glue
1 0.096500 0.928782 {'accuracy': 0.625} {'accuracy': 0.625, 'f1': 0.0}
2 0.096500 1.203832 {'accuracy': 0.625} {'accuracy': 0.625, 'f1': 0.0}
3 0.096500 1.643788 {'accuracy': 0.625} {'accuracy': 0.625, 'f1': 0.0}
but when I go into trainer.state.log_history, that stuff is not there. This really doesn't make sense to me.
for obj in trainer.state.log_history:
print(obj)
{'loss': 0.0965, 'learning_rate': 4.5833333333333334e-05, 'epoch': 0.25, 'step': 1}
{'eval_loss': 0.9287818074226379, 'eval_accuracy': {'accuracy': 0.625}, 'eval_glue': {'accuracy': 0.625, 'f1': 0.0}, 'eval_runtime': 1.3266, 'eval_samples_per_second': 6.03, 'eval_steps_per_second': 0.754, 'epoch': 1.0, 'step': 4}
{'eval_loss': 1.2038320302963257, 'eval_accuracy': {'accuracy': 0.625}, 'eval_glue': {'accuracy': 0.625, 'f1': 0.0}, 'eval_runtime': 1.3187, 'eval_samples_per_second': 6.067, 'eval_steps_per_second': 0.758, 'epoch': 2.0, 'step': 8}
{'eval_loss': 1.6437877416610718, 'eval_accuracy': {'accuracy': 0.625}, 'eval_glue': {'accuracy': 0.625, 'f1': 0.0}, 'eval_runtime': 1.3931, 'eval_samples_per_second': 5.742, 'eval_steps_per_second': 0.718, 'epoch': 3.0, 'step': 12}
{'train_runtime': 20.9407, 'train_samples_per_second': 1.146, 'train_steps_per_second': 0.573, 'total_flos': 6314665328640.0, 'train_loss': 0.07855576276779175, 'epoch': 3.0, 'step': 12}
How do I get these back in an object, and not a printout?
Thanks
Edit: Reproducable code below:
import numpy as np
from datasets import load_metric, load_dataset
from transformers import TrainingArguments, AutoModelForSequenceClassification, Trainer, AutoTokenizer
from datasets import list_metrics
raw_datasets = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
def tokenize_function(examples):
return tokenizer(examples["text"], padding="max_length", truncation=True)
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(8))
small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(8))
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
training_args = TrainingArguments("IntroToBERT", evaluation_strategy="epoch")
training_args.logging_strategy = 'step'
training_args.logging_first_step = True
training_args.logging_steps = 1
training_args.num_train_epochs = 3
training_args.per_device_train_batch_size = 2
training_args.eval_steps = 1
metrics = {}
for metric in ['accuracy','glue']:
metrics[metric] = load_metric(metric,'mrpc')
def compute_metrics(eval_pred):
logits, labels = eval_pred
predictions = np.argmax(logits, axis=-1)
out = {}
for metric in metrics.keys():
out[metric] = metrics[metric].compute(predictions=predictions, references=labels)
return out
trainer = Trainer(
model=model,
args=training_args,
train_dataset=small_train_dataset,
eval_dataset=small_eval_dataset,
compute_metrics=compute_metrics,
)
trainer.train()
# here the printout is as shown
for obj in trainer.state.log_history:
print(obj)
# here the logging data is displayed
| You can use the methods log_metrics to format your logs and save_metrics to save them. Here is the code:
# rest of the training args
# ...
training_args.logging_dir = 'logs' # or any dir you want to save logs
# training
train_result = trainer.train()
# compute train results
metrics = train_result.metrics
max_train_samples = len(small_train_dataset)
metrics["train_samples"] = min(max_train_samples, len(small_train_dataset))
# save train results
trainer.log_metrics("train", metrics)
trainer.save_metrics("train", metrics)
# compute evaluation results
metrics = trainer.evaluate()
max_val_samples = len(small_eval_dataset)
metrics["eval_samples"] = min(max_val_samples, len(small_eval_dataset))
# save evaluation results
trainer.log_metrics("eval", metrics)
trainer.save_metrics("eval", metrics)
You can also save all logs at once by setting the split parameter in log_metrics and save_metrics to "all" i.e. trainer.save_metrics("all", metrics); but I prefer this way as you can customize the results based on your need.
Here is the complete source provided by transformers from which you can read more.
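If you then want the metrics back as a Python object rather than a printout, you can read the JSON files that save_metrics writes into the output directory (file names follow transformers' "{split}_results.json" convention; verify against your version):
import json, os

with open(os.path.join(training_args.output_dir, "train_results.json")) as f:
    train_metrics = json.load(f)
with open(os.path.join(training_args.output_dir, "eval_results.json")) as f:
    eval_metrics = json.load(f)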
| https://stackoverflow.com/questions/68806265/ |
Negative loss when trying to implement aleatoric uncertainty estimation according to Kendall et al | I'm trying to implement a neural network with aleatoric uncertainty estimation for regression with pytorch according to
Kendall et al.: "What Uncertainties Do We Need in Bayesian Deep
Learning for Computer Vision?" (Link).
However, while the predicted regression values fit the desired ground truth values quite well, the predicted variance looks weird and the loss gets negative during training.
The paper suggests to have two outputs mean and variance instead of only predicting the regression value. To be more precise, it is suggested to predict mean and log(variance) due to stability reasons. Therefore, my network looks as follows:
class ReferenceResNet(nn.Module):
def __init__(self):
super().__init__()
self.fcl1 = nn.Linear(1, 32)
self.fcl2 = nn.Linear(32, 64)
self.fcl3 = nn.Linear(64, 128)
self.fcl_mean = nn.Linear(128,1)
self.fcl_var = nn.Linear(128,1)
def forward(self, x):
x = torch.tanh(self.fcl1(x))
x = torch.tanh(self.fcl2(x))
x = torch.tanh(self.fcl3(x))
mean = self.fcl_mean(x)
log_var = self.fcl_var(x)
return mean, log_var
According to the paper, given these outputs, the corresponding loss function consists of a residual regression part and a regularization term, L = (1/N) * sum_i [ 0.5 * exp(-s_i) * ||y_i - f(x_i)||^2 + 0.5 * s_i ],
where s_i is the log(variance) predicted by the network.
I implemented this loss-function accordingly:
def loss_function(pred_mean, pred_log_var, y):
return 1/len(pred_mean)*(0.5 * torch.exp(-pred_log_var)*torch.sqrt(torch.pow(y-pred_mean, 2))+0.5*pred_log_var).sum()
I tried this code on a self-generated toy dataset (see the image with results); however, the loss becomes negative during training, and the variance I get when plotting it over the dataset after training does not really make sense to me, while the corresponding mean values fit the ground truth quite well.
I already figured out that the negative loss comes from the regularization term as logarithms are negative for values between 0 and 1, however, I don't believe that the absolute value of the regularization term is supposed to grow bigger than the regression part. Does anyone know what is the reason for this and how I can prevent this from happening? And why does my variance look so weird?
For reproduction, my full code looks as follows:
import torch.nn as nn
import torch
import torch.nn.functional as F
import torch.optim as optim
from torch.optim import lr_scheduler
from torch.utils.data.dataset import TensorDataset
from torchvision import datasets, transforms
import math
import numpy as np
import torch.nn.functional as F
import matplotlib.pyplot as plt
from tqdm import tqdm
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
class ReferenceRegNet(nn.Module):
def __init__(self):
super().__init__()
self.fcl1 = nn.Linear(1, 32)
self.fcl2 = nn.Linear(32, 64)
self.fcl3 = nn.Linear(64, 128)
self.fcl_mean = nn.Linear(128,1)
self.fcl_var = nn.Linear(128,1)
def forward(self, x):
x = torch.tanh(self.fcl1(x))
x = torch.tanh(self.fcl2(x))
x = torch.tanh(self.fcl3(x))
mean = self.fcl_mean(x)
log_var = self.fcl_var(x)
return mean, log_var
def toy_function(x):
return math.sin(x/15-4)+2 + math.sin(x/10-5)
def loss_function(x_mean, x_log_var, y):
return 1/len(x_mean)*(0.5 * torch.exp(-x_log_var)*torch.sqrt(torch.pow(y-x_mean, 2))+0.5*x_log_var).sum()
BATCH_SIZE = 10
EVAL_BATCH_SIZE = 10
CLASSES = 1
TRAIN_EPOCHS = 50
# generate toy dataset: A train-set in form of a complex sin-curve
x_train_data = np.array([])
y_train_data = np.array([])
for repeat in range(2):
for i in range(50, 150):
for j in range(100):
sampled_x = i+np.random.randint(101)/100
sampled_y = toy_function(sampled_x)+np.random.normal(0,0.2)
x_train_data = np.append(x_train_data, sampled_x)
y_train_data = np.append(y_train_data, sampled_y)
x_eval_data = list(np.arange(50.0, 150.0, 0.1))
y_eval_data = [toy_function(x) for x in x_eval_data]
LOADER_KWARGS = {'num_workers': 0, 'pin_memory': False} if torch.cuda.is_available() else {}
train_set = TensorDataset(torch.Tensor(x_train_data),torch.Tensor(y_train_data))
eval_set = TensorDataset(torch.Tensor(x_eval_data), torch.Tensor(y_eval_data))
train_loader = torch.utils.data.DataLoader(train_set, batch_size=BATCH_SIZE, shuffle=True, **LOADER_KWARGS)
eval_loader = torch.utils.data.DataLoader(eval_set, batch_size=EVAL_BATCH_SIZE, shuffle=False, **LOADER_KWARGS)
TRAIN_SIZE = len(train_loader.dataset)
EVAL_SIZE = len(eval_loader.dataset)
assert (TRAIN_SIZE % BATCH_SIZE) == 0
assert (EVAL_SIZE % EVAL_BATCH_SIZE) == 0
net = ReferenceRegNet().to(DEVICE)
optimizer = optim.Adam(net.parameters(), lr=1e-3)
losses = {}
# train network
for epoch in range(1,TRAIN_EPOCHS+1):
net.train()
mean_epoch_loss = 0
mean_epoch_mse = 0
# train batches
for batch_idx, (data, target) in enumerate(tqdm(train_loader), start=1):
data, target = (data.to(DEVICE)).unsqueeze(dim=1), (target.to(DEVICE)).unsqueeze(dim=1)
optimizer.zero_grad()
output_means, output_log_var = net(data)
target_np = target.detach().cpu().numpy()
output_means_np = output_means.detach().cpu().numpy()
loss = loss_function(output_means, output_log_var, target)
loss_value = loss.item() # get raw float-value out of loss-tensor
mean_epoch_loss += loss_value
# optimize network
loss.backward()
optimizer.step()
mean_epoch_loss = mean_epoch_loss / len(train_loader)
losses.update({epoch:mean_epoch_loss})
print("Epoch " + str(epoch) + ": Train-Loss = " + str(mean_epoch_loss))
net.eval()
with torch.no_grad():
mean_loss = 0
mean_mse = 0
for data, target in eval_loader:
data, target = (data.to(DEVICE)).unsqueeze(dim=1), (target.to(DEVICE)).unsqueeze(dim=1)
output_means, output_log_var = net(data) # perform prediction
target_np = target.detach().cpu().numpy()
output_means_np = output_means.detach().cpu().numpy()
mean_loss += loss_function(output_means, output_log_var, target).item()
mean_loss = mean_loss/len(eval_loader)
#print("Epoch " + str(epoch) + ": Eval-loss = " + str(mean_loss))
fig = plt.figure(figsize=(40,12)) # create a 30x30 inch figure
ax = fig.add_subplot(1,3,1)
ax.set_title("regression value")
ax.set_xlabel("x")
ax.set_ylabel("regression mean")
ax.plot(x_train_data, y_train_data, 'x', color='black')
ax.plot(x_eval_data, y_eval_data, color='red')
pred_means_list = []
output_vars_list_train = []
output_vars_list_test = []
for x_test in sorted(x_train_data):
x_test = (torch.Tensor([x_test]).to(DEVICE))
pred_means, output_log_vars = net.forward(x_test)
pred_means_list.append(pred_means.detach().cpu())
output_vars_list_train.append(torch.exp(output_log_vars).detach().cpu())
ax.plot(sorted(x_train_data), pred_means_list, color='blue', label = 'training_perform')
pred_means_list = []
for x_test in x_eval_data:
x_test = (torch.Tensor([x_test]).to(DEVICE))
pred_means, output_log_vars = net.forward(x_test)
pred_means_list.append(pred_means.detach().cpu())
output_vars_list_test.append(torch.exp(output_log_vars).detach().cpu())
ax.plot(sorted(x_eval_data), pred_means_list, color='green', label = 'eval_perform')
plt.tight_layout()
plt.legend()
ax = fig.add_subplot(1,3,2)
ax.set_title("variance")
ax.set_xlabel("x")
ax.set_ylabel("regression var")
ax.plot(sorted(x_train_data), output_vars_list_train, label = 'training data')
ax.plot(x_eval_data, output_vars_list_test, label = 'test data')
plt.tight_layout()
plt.legend()
ax = fig.add_subplot(1,3,3)
ax.set_title("training loss")
ax.set_xlabel("Epoch")
ax.set_ylabel("Loss")
lists = sorted(losses.items())
epoch, loss = zip(*lists)
ax.plot(epoch, loss, label = 'loss')
plt.tight_layout()
plt.legend()
plt.savefig('ref_test.png')
| TLDR: The optimization drives the loss to a minimum where the gradient
becomes zero, regardless of what the nominal loss value is.
A comprehensive explanation by K.Frank:
A smaller loss – algebraically less positive or algebraically more
negative – means (or should mean) better predictions. The
optimization step uses some version of gradient descent to make
your loss smaller. The overall level of the loss doesn’t matter as
far as the optimization goes. The gradient tells the optimizer how
to change the model parameters to reduce the loss, and it doesn’t
care about the overall level of the loss.
An example from the same source:
Consider, for example, optimizing with lossA = MSELoss. Now
imagine optimizing with lossB = lossA - 17.2. The 17.2 doesn’t
really change anything at all. It is true that “perfect” predictions
will yield lossB = -17.2 rather than zero. (lossA will, of course,
be zero for “perfect” predictions.) But who cares?
In your example: you are right, the negative loss value comes from the logarithmic term. This is completely OK and it means that your training is dominated by contributions of high-confidence loss terms. Regarding the high values of variance - can't comment much on it but it should be fine since the loss curve drops as expected.
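As a quick numerical check of that point, shifting a loss by a constant (even into negative territory) leaves its gradient, and therefore the optimization, unchanged:
import torch

def grad_at_one(loss_fn):
    x = torch.tensor([1.0], requires_grad=True)
    loss_fn(x).backward()
    return x.grad.item()

print(grad_at_one(lambda x: ((x - 3) ** 2).sum()))         # -4.0
print(grad_at_one(lambda x: ((x - 3) ** 2).sum() - 17.2))  # -4.0, identical gradient despite the negative loss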
| https://stackoverflow.com/questions/68806330/ |
Splitting tensor into sub-tensors in overlapping fashion | I'm in pytorch and I have a tensor x of size batch_size x d x S. It has to be intended as a batch of sequences of length S, where every sequence element is d dimensional. Every sequence is actually the overlap of multiple sub-sequences, in the following sense:
every sub-sequence is of size past_size + present_size, i.e we have past_size d-dimensional elements followed by other present_size elements
the overlap works as follows: the beginnings of the present_size sections are equispaced by present_size elements, and they are placed in the right-most positions
To make an example, with batch_size=1, d=1, consider x = [1,2,3,4,5,6,7,8,9], where present_size = 2, past_size = 3. The resulting subsequences would be:
[1,2,3,4,5]
[3,4,5,6,7]
[5,6,7,8,9]
The end goal is to produce the splitting of every sequence into the, say, N sub-sequences, to get a tensor of shape batch_size*N x d x past_size+present_size.
My second try is the following:
def seq(x, present_size, total_size, N):  # total_size = present_size + past_size
z = x.unfold(-1, total_size, present_size)
v = torch.flatten(z, start_dim=2)
s = torch.cat(torch.chunk(v, N, -1), 0)
return s
Is there a more efficient way? Is it possible to backpropagate through such a function?
Edit
In the above example, N = 3.
Moreover, we have the following relation: N*present_size + past_size = S
Input-output
Here is an example with N=4, present_size = 1, past_size = 2.
x = torch.rand(4,8,6) # d=8, batch_size = 4, 6 = N*present_size + past_size
>>> tensor([[[0.5667, 0.5300, 0.2460, 0.4327, 0.4727, 0.5649],
[0.0360, 0.6687, 0.0167, 0.5359, 0.9804, 0.8778],
[0.3703, 0.4884, 0.1505, 0.5463, 0.8114, 0.3270],
[0.2932, 0.4928, 0.3933, 0.2433, 0.7053, 0.5222],
[0.6667, 0.2014, 0.7107, 0.7535, 0.2816, 0.6515],
[0.5285, 0.4150, 0.2557, 0.2144, 0.8317, 0.5448],
[0.7971, 0.6609, 0.1811, 0.7788, 0.6649, 0.1848],
[0.6902, 0.3999, 0.8719, 0.7624, 0.5216, 0.3494]],
[[0.0196, 0.7850, 0.2796, 0.4173, 0.8076, 0.5709],
[0.4566, 0.4814, 0.0568, 0.8568, 0.9119, 0.4030],
[0.4031, 0.8887, 0.3782, 0.8015, 0.9835, 0.6043],
[0.3557, 0.5960, 0.2102, 0.8165, 0.1938, 0.4948],
[0.8163, 0.7907, 0.3711, 0.6835, 0.8021, 0.1897],
[0.7790, 0.2621, 0.3769, 0.3830, 0.7140, 0.2309],
[0.5831, 0.0246, 0.6548, 0.8694, 0.1988, 0.5470],
[0.1192, 0.2928, 0.4240, 0.2624, 0.7959, 0.4091]],
[[0.7959, 0.7144, 0.4523, 0.5090, 0.6053, 0.4071],
[0.4742, 0.0224, 0.9939, 0.9757, 0.0732, 0.6213],
[0.5211, 0.1149, 0.8218, 0.7061, 0.1807, 0.2822],
[0.1456, 0.7331, 0.9107, 0.9533, 0.2438, 0.4031],
[0.0958, 0.2623, 0.0828, 0.2861, 0.0474, 0.8349],
[0.1740, 0.3658, 0.2416, 0.6735, 0.4013, 0.8896],
[0.6934, 0.8709, 0.4017, 0.6121, 0.5824, 0.5803],
[0.4811, 0.1036, 0.4356, 0.6441, 0.5859, 0.4683]],
[[0.2479, 0.9247, 0.3216, 0.6844, 0.1701, 0.4609],
[0.3320, 0.4908, 0.0458, 0.9887, 0.4725, 0.7511],
[0.0594, 0.1978, 0.8830, 0.9126, 0.4821, 0.7731],
[0.3729, 0.4921, 0.9266, 0.7827, 0.8101, 0.6258],
[0.4998, 0.7596, 0.1160, 0.3928, 0.4773, 0.7892],
[0.0215, 0.1325, 0.5940, 0.2094, 0.3109, 0.9281],
[0.7960, 0.1707, 0.1793, 0.7335, 0.2065, 0.6204],
[0.6350, 0.9696, 0.5099, 0.7375, 0.7601, 0.1405]]])
r = seq(x, 1, 2+1, 4)
>>> tensor([[[0.5667, 0.5300, 0.2460],
[0.0360, 0.6687, 0.0167],
[0.3703, 0.4884, 0.1505],
[0.2932, 0.4928, 0.3933],
[0.6667, 0.2014, 0.7107],
[0.5285, 0.4150, 0.2557],
[0.7971, 0.6609, 0.1811],
[0.6902, 0.3999, 0.8719]],
[[0.0196, 0.7850, 0.2796],
[0.4566, 0.4814, 0.0568],
[0.4031, 0.8887, 0.3782],
[0.3557, 0.5960, 0.2102],
[0.8163, 0.7907, 0.3711],
[0.7790, 0.2621, 0.3769],
[0.5831, 0.0246, 0.6548],
[0.1192, 0.2928, 0.4240]],
[[0.7959, 0.7144, 0.4523],
[0.4742, 0.0224, 0.9939],
[0.5211, 0.1149, 0.8218],
[0.1456, 0.7331, 0.9107],
[0.0958, 0.2623, 0.0828],
[0.1740, 0.3658, 0.2416],
[0.6934, 0.8709, 0.4017],
[0.4811, 0.1036, 0.4356]],
[[0.2479, 0.9247, 0.3216],
[0.3320, 0.4908, 0.0458],
[0.0594, 0.1978, 0.8830],
[0.3729, 0.4921, 0.9266],
[0.4998, 0.7596, 0.1160],
[0.0215, 0.1325, 0.5940],
[0.7960, 0.1707, 0.1793],
[0.6350, 0.9696, 0.5099]],
[[0.5300, 0.2460, 0.4327],
[0.6687, 0.0167, 0.5359],
[0.4884, 0.1505, 0.5463],
[0.4928, 0.3933, 0.2433],
[0.2014, 0.7107, 0.7535],
[0.4150, 0.2557, 0.2144],
[0.6609, 0.1811, 0.7788],
[0.3999, 0.8719, 0.7624]],
[[0.7850, 0.2796, 0.4173],
[0.4814, 0.0568, 0.8568],
[0.8887, 0.3782, 0.8015],
[0.5960, 0.2102, 0.8165],
[0.7907, 0.3711, 0.6835],
[0.2621, 0.3769, 0.3830],
[0.0246, 0.6548, 0.8694],
[0.2928, 0.4240, 0.2624]],
[[0.7144, 0.4523, 0.5090],
[0.0224, 0.9939, 0.9757],
[0.1149, 0.8218, 0.7061],
[0.7331, 0.9107, 0.9533],
[0.2623, 0.0828, 0.2861],
[0.3658, 0.2416, 0.6735],
[0.8709, 0.4017, 0.6121],
[0.1036, 0.4356, 0.6441]],
[[0.9247, 0.3216, 0.6844],
[0.4908, 0.0458, 0.9887],
[0.1978, 0.8830, 0.9126],
[0.4921, 0.9266, 0.7827],
[0.7596, 0.1160, 0.3928],
[0.1325, 0.5940, 0.2094],
[0.1707, 0.1793, 0.7335],
[0.9696, 0.5099, 0.7375]],
[[0.2460, 0.4327, 0.4727],
[0.0167, 0.5359, 0.9804],
[0.1505, 0.5463, 0.8114],
[0.3933, 0.2433, 0.7053],
[0.7107, 0.7535, 0.2816],
[0.2557, 0.2144, 0.8317],
[0.1811, 0.7788, 0.6649],
[0.8719, 0.7624, 0.5216]],
[[0.2796, 0.4173, 0.8076],
[0.0568, 0.8568, 0.9119],
[0.3782, 0.8015, 0.9835],
[0.2102, 0.8165, 0.1938],
[0.3711, 0.6835, 0.8021],
[0.3769, 0.3830, 0.7140],
[0.6548, 0.8694, 0.1988],
[0.4240, 0.2624, 0.7959]],
[[0.4523, 0.5090, 0.6053],
[0.9939, 0.9757, 0.0732],
[0.8218, 0.7061, 0.1807],
[0.9107, 0.9533, 0.2438],
[0.0828, 0.2861, 0.0474],
[0.2416, 0.6735, 0.4013],
[0.4017, 0.6121, 0.5824],
[0.4356, 0.6441, 0.5859]],
[[0.3216, 0.6844, 0.1701],
[0.0458, 0.9887, 0.4725],
[0.8830, 0.9126, 0.4821],
[0.9266, 0.7827, 0.8101],
[0.1160, 0.3928, 0.4773],
[0.5940, 0.2094, 0.3109],
[0.1793, 0.7335, 0.2065],
[0.5099, 0.7375, 0.7601]],
[[0.4327, 0.4727, 0.5649],
[0.5359, 0.9804, 0.8778],
[0.5463, 0.8114, 0.3270],
[0.2433, 0.7053, 0.5222],
[0.7535, 0.2816, 0.6515],
[0.2144, 0.8317, 0.5448],
[0.7788, 0.6649, 0.1848],
[0.7624, 0.5216, 0.3494]],
[[0.4173, 0.8076, 0.5709],
[0.8568, 0.9119, 0.4030],
[0.8015, 0.9835, 0.6043],
[0.8165, 0.1938, 0.4948],
[0.6835, 0.8021, 0.1897],
[0.3830, 0.7140, 0.2309],
[0.8694, 0.1988, 0.5470],
[0.2624, 0.7959, 0.4091]],
[[0.5090, 0.6053, 0.4071],
[0.9757, 0.0732, 0.6213],
[0.7061, 0.1807, 0.2822],
[0.9533, 0.2438, 0.4031],
[0.2861, 0.0474, 0.8349],
[0.6735, 0.4013, 0.8896],
[0.6121, 0.5824, 0.5803],
[0.6441, 0.5859, 0.4683]],
[[0.6844, 0.1701, 0.4609],
[0.9887, 0.4725, 0.7511],
[0.9126, 0.4821, 0.7731],
[0.7827, 0.8101, 0.6258],
[0.3928, 0.4773, 0.7892],
[0.2094, 0.3109, 0.9281],
[0.7335, 0.2065, 0.6204],
[0.7375, 0.7601, 0.1405]]])
| Possible method using torch.gather
You can see this problem as reassigning each element to a new position. This has to be done using a tensor containing the indices of the permutation you which to see happening.
If you look at the indices of the last dimension for input x (we will take your example with x.shape = (4, 8, 6)), you have them ordered this way:
tensor([[[0, 1, 2, 3, 4, 5],
[0, 1, 2, 3, 4, 5],
... 4 more
[0, 1, 2, 3, 4, 5],
[0, 1, 2, 3, 4, 5]],
... 2 more
[[0, 1, 2, 3, 4, 5],
[0, 1, 2, 3, 4, 5],
... 4 more
[0, 1, 2, 3, 4, 5],
[0, 1, 2, 3, 4, 5]]])
Now the permutation of indices should be looking like (considering N=4, present_size=1, and past_size=2). Keep in mind I'm only representing two dimensions among the four x in total:
tensor([[0, 1, 2],
[1, 2, 3],
[2, 3, 4],
[3, 4, 5]])
From there it will be easy to construct the new tensor using torch.gather. The operation will effectively create a tensor out defined in the following way:
out[i][j][k][l] = x[i][j][k][indices[i, j, k, l]]
1. Constructing the tensor of indices
In order to construct such tensor of indices, we will use arrangements. The following are the base indices:
>>> arr = torch.arange(total_size)[None].repeat(N, 1)
tensor([[0, 1, 2],
[0, 1, 2],
[0, 1, 2],
[0, 1, 2]])
to which we add a displacement of present_size accumulated over the rows:
>>> disp = torch.arange(0, total_size + 1, step=present_size)[None].T
tensor([[0],
[1],
[2],
[3]])
The resulting minimal tensor of indices is:
>>> indices = arr + disp
tensor([[0, 1, 2],
[1, 2, 3],
[2, 3, 4],
[3, 4, 5]])
2. Applying torch.gather
First, we need to expand the rows of x to N: the number of rows in the resulting tensor.
>>> x_r = x[None].expand(N, *(-1,)*x.ndim)
>>> x.shape, x_r.shape
(torch.Size([4, 8, 6]), torch.Size([4, 4, 8, 6]))
In order to use torch.gather, we need the input and tensor of indices to have the same shape. To do so we can make views of our tensors using Tensor.expand.
So here we will insert two additional dimensions on indices and expand them to match the sizes of x's first and second axis.
>>> i_r = indices[:, None, None, :].expand(-1, x.size(0), x.size(1), -1)
indices.shape, i_r.shape
(torch.Size([4, 3]), torch.Size([4, 4, 8, 3]))
Then apply the gather function on the last axis of indices:
>>> torch.gather(x_r, dim=-1, index=i_r)
tensor([[[[0.5667, 0.5300, 0.2460],
[0.0360, 0.6687, 0.0167],
[0.3703, 0.4884, 0.1505],
[0.2932, 0.4928, 0.3933],
[0.6667, 0.2014, 0.7107],
[0.5285, 0.4150, 0.2557],
[0.7971, 0.6609, 0.1811],
[0.6902, 0.3999, 0.8719]],
...
[[0.6844, 0.1701, 0.4609],
[0.9887, 0.4725, 0.7511],
[0.9126, 0.4821, 0.7731],
[0.7827, 0.8101, 0.6258],
[0.3928, 0.4773, 0.7892],
[0.2094, 0.3109, 0.9281],
[0.7335, 0.2065, 0.6204],
[0.7375, 0.7601, 0.1405]]]])
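For convenience, the steps above can be wrapped into one function. This is a sketch: the displacement is built from N rows so it generalizes beyond the example above, and the final flatten merges the N copies into the batch dimension (check that the ordering matches what you need). Since torch.gather is differentiable with respect to its input, backpropagation through this works.
import torch

def make_subsequences(x, present_size, past_size, N):
    # x: (batch_size, d, S) with S = N * present_size + past_size
    total_size = present_size + past_size
    arr = torch.arange(total_size)[None].repeat(N, 1)                     # (N, total_size)
    disp = (torch.arange(N) * present_size)[None].T                       # (N, 1)
    indices = arr + disp                                                  # (N, total_size)
    x_r = x[None].expand(N, *(-1,) * x.ndim)                              # (N, batch, d, S)
    i_r = indices[:, None, None, :].expand(-1, x.size(0), x.size(1), -1)  # (N, batch, d, total_size)
    out = torch.gather(x_r, dim=-1, index=i_r)                            # (N, batch, d, total_size)
    return out.flatten(0, 1)                                              # (N * batch, d, total_size)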
If you have any questions, please don't hesitate to ask!
| https://stackoverflow.com/questions/68860290/ |
libtorch and pytorch cannot be installed simultaneously? | I am learning to develop with PyTorch as well as LibTorch. I have the following line in my ~/.bashrc for dynamic linking of libtorch libraries:
# libtorch linking path
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/user/.dev_libraries/libtorch/lib/
However, when this path is in LD_LIBRARY_PATH, importing torch in Python reports segmentation fault:
user@host:~$ $LD_LIBRARY_PATH
bash: /home/user/packages/embree-2.16.0.x86_64.linux/lib:/home/user/packages/embree-2.16.0.x86_64.linux/lib::/usr/local/lib/:/usr/local/cuda-11.1/lib64:/usr/local/lib/:/usr/local/cuda-11.1/lib64:/home/user/.dev_libraries/libtorch-cpu/libtorch/lib/: No such file or directory
user@host:~$ python
Python 3.8.10 (default, Jun 2 2021, 10:49:15)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Segmentation fault (core dumped)
user@host:~$
As soon as I remove that path from the environment variable LD_LIBRARY_PATH, torch can be correctly imported in Python.
I am guessing the cause is that some shared libraries of PyTorch have the same names as the ones in LibTorch. Does this mean PyTorch and LibTorch cannot be installed simultaneously, or is my environment setting incorrect? I'd prefer not to reset LD_LIBRARY_PATH every time I switch between the two.
System specs:
Ubuntu 20.04 + CUDA 11.1 + python 3.8.10 + GCC 9.3.0
pytorch 1.9.0+cu111
libtorch is downloaded from here: https://download.pytorch.org/libtorch/nightly/cpu/libtorch-shared-with-deps-latest.zip
| I faced the same problem too.
You can type
import torch
print(torch.__version__)
to see the version of torch, and then use the matching version of libtorch; that will most likely solve the problem.
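If your PyTorch build is CUDA-enabled, the CUDA version has to match between the two as well. A small hedged check (both attributes exist in standard PyTorch builds):
import torch
print(torch.__version__)    # e.g. 1.9.0+cu111
print(torch.version.cuda)   # e.g. 11.1, or None for a CPU-only build
Then download the libtorch archive built for that exact PyTorch/CUDA combination instead of the nightly CPU build.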
| https://stackoverflow.com/questions/68878821/ |
Token indices sequence length is longer than the specified maximum sequence length for this model (28627 > 512) | I am using Hugging Face's DistilBERT model as a backend for a question-and-answer application. The text I am using to train the model is one very large single text field. Even though the text field is a single string, the punctuation was left in place as a clue for BERT. When I execute the application I am getting the "Token indices sequence length" error. I am using the tokenizer.encode_plus() method to pass the text into the model. I have tried various mechanisms to truncate the input ids to a length <= 512.
I am currently using Windows 10 but I will also be porting the code to a Raspberry Pi 4 platform.
The code is failing at this line:
start_scores, end_scores = model(torch.tensor([input_ids]), attention_mask=torch.tensor([attention_mask]))
I am attempting to perform the truncation at this line:
encoding = tokenizer.encode_plus(question, tokenizer(context, truncation=True).input_ids)
The entire code is here:
from transformers import AutoTokenizer, DistilBertTokenizer, DistilBertForQuestionAnswering
import torch
# globals - set once used everywhere
tokenizer = None
model = None
context = ''
def establishSettings():
global tokenizer, model, context
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased', return_token_type_ids=True, model_max_length=512)
model = DistilBertForQuestionAnswering.from_pretrained('distilbert-base-uncased-distilled-squad', return_dict=False)
# context = "Some 1,500 volcanoes are still considered potentially active around the world today 161 of those over 10 percent sit within the boundaries of the United States."
# get the volcano corpus
with open('volcanic.corpus', encoding="utf8") as file:
context = file.read().replace('\n', '')
print(len(tokenizer(context, truncation=True).input_ids))
def askQuestion(question):
global tokenizer, model, context
print("\nQuestion ", question)
encoding = tokenizer.encode_plus(question, tokenizer(context, truncation=True).input_ids)
input_ids, attention_mask = encoding["input_ids"], encoding["attention_mask"]
start_scores, end_scores = model(torch.tensor([input_ids]), attention_mask=torch.tensor([attention_mask]))
ans_tokens = input_ids[torch.argmax(start_scores): torch.argmax(end_scores) + 1]
answer_tokens = tokenizer.convert_ids_to_tokens(ans_tokens, skip_special_tokens=True)
#all_tokens = tokenizer.convert_ids_to_tokens(input_ids)
return answer_tokens
def main():
# set the global itmes once
establishSettings()
# ask a question
question = "How many potentially active volcanoes are there in the world today?"
answer_tokens = askQuestion(question)
print("answer_tokens: ", answer_tokens)
if len(answer_tokens) == 0:
answer = "Sorry, I don't have an answer for that one. Ask me another question about New Mexico volcanoes."
print(answer)
else:
answer_tokens_to_string = tokenizer.convert_tokens_to_string(answer_tokens)
print("\nFinal Answer : ")
print(answer_tokens_to_string)
if __name__ == '__main__':
main()
What is the best way to truncate the input.ids to <= 512 in length.
| Edit this line:
encoding = tokenizer.encode_plus(question, tokenizer(context, truncation=True).input_ids)
to
encoding = tokenizer.encode_plus(question, tokenizer(context, truncation=True, max_length=512).input_ids)
| https://stackoverflow.com/questions/68885352/ |
Store a multi-dim tensor on disk and read from offset | I have a multi-dimensional tensor like this
tensor([[ 0.5599, 0.4593, 0.0580, ..., -0.2404, 0.1144, -0.5047],
[ 0.1545, 0.3332, 0.3836, ..., 0.2483, -0.0849, -0.2216],
[ 0.4513, 0.0115, 0.0801, ..., -0.8038, 0.2350, -0.3261],
...,
[-0.4387, 0.3028, -0.0510, ..., -0.4966, -0.1606, 0.2933],
[ 0.0312, 0.2351, -0.0397, ..., -0.5401, -0.0554, -0.1552],
[-0.3732, -0.0460, 0.0698, ..., -0.2963, -0.3514, -0.3815]],
device='cuda:0', requires_grad=True)
Size of entity embeddings: torch.Size([14951, 400])
What is the best way to store this tensor on disk, and only index a row of this multi-dimensional tensor (i.e. say the last row, [-0.3732, -0.0460, 0.0698, ..., -0.2963, -0.3514, -0.3815] and bring it to memory in python?
| The answer depends on what you are trying to maximize/minimize.
You could define "Best way to store" as the fastest write, the fastest read, the smallest file, ...
But given your constraints I think HDF5 should be a good candidate.
Pandas allows you to save in HDF5 format with the df.to_hdf() function.
You can also look at h5py to deal with HDF5 files.
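As a minimal, hedged sketch with h5py (the file and dataset names here are arbitrary): slicing an h5py dataset reads only the requested rows from disk, so you can pull a single embedding row without loading the whole matrix into memory.
import h5py
import torch

emb = torch.randn(14951, 400)   # stand-in for your entity embeddings

# write once
with h5py.File("embeddings.h5", "w") as f:
    f.create_dataset("emb", data=emb.detach().cpu().numpy())

# later, read only one row (here the last one) into memory
row_idx = 14950
with h5py.File("embeddings.h5", "r") as f:
    row = torch.from_numpy(f["emb"][row_idx])   # only this row is read from disk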
| https://stackoverflow.com/questions/68886280/ |
why the output of the model is different in pytorch | I have a simple model, with just one linear layer.
model = torch.nn.Linear(1,1).to(device)
x_train1 = torch.FloatTensor([[1], [2], [3]])
out = model(x_train1)
print(out)
But whenever I run this code, the printed output is different.
Also I set these random seeds.
import random
import torch
import numpy as np
random_seed=76
torch.manual_seed(random_seed)
torch.cuda.manual_seed(random_seed)
torch.cuda.manual_seed_all(random_seed) # if use multi-GPU
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
np.random.seed(random_seed)
random.seed(random_seed)
I want to know why the output keeps changing when the code is run.
| You must set the seed every time you run the code if you want to get the same result.
import torch
def my_func(device: str, seed: int):
torch.manual_seed(seed)
model = torch.nn.Linear(1,1).to(device)
x_train1 = torch.FloatTensor([[1], [2], [3]])
out = model(x_train1)
print(out)
# Whenever you run the function you'll get the same result!
my_func(device="cpu", seed=76)
# tensor([[0.3573],
# [0.5021],
# [0.6470]], grad_fn=<AddmmBackward>)
my_func(device="cpu", seed=76)
# tensor([[0.3573],
# [0.5021],
# [0.6470]], grad_fn=<AddmmBackward>)
| https://stackoverflow.com/questions/68886676/ |
MisconfigurationException: You requested GPUs: [0] But your machine only has: [] | I'm running JupyterLab via AWS SageMaker.
I've been taking AWS certifications, but this is my first time actively using AWS.
Update:
I have changed the Notebook instance type to ml.g4dn.xlarge, a GPU.
Will run and see what happens.
How do I change the instance type of EC2 to GPU?
In Google Colab, e.g., you can select which hardware accelerator to use, one of which is GPU.
Error:
MisconfigurationException: You requested GPUs: [0]
But your machine only has: []
SageMaker environment:
List of Kernels:
I'm on conda_python3.
| The first thing is to determine whether you are using SageMaker Studio or a SageMaker notebook instance.
Since you are using SageMaker notebooks, you first need to go back to SageMaker console, select the correct notebook, and stop it.
Once the notebook is stopped, you can edit the configuration and select an instance that has a GPU. You can find the list of all instances here.
In SageMaker Studio, you can select a PyTorch GPU-optimized kernel
and then select an instance that has GPU.
| https://stackoverflow.com/questions/68894940/ |
urllib.error.HTTPError: HTTP Error 403: rate limit exceeded when loading resnet18 from pytorch hub | I am not sure why I get a rate limit error.
(fashcomp) [jalal@goku fashion-compatibility]$ python main.py --test --l2_embed --resume runs/nondisjoint_l2norm/model_best.pth.tar --datadir ../../../data/fashion
/scratch3/venv/fashcomp/lib/python3.8/site-packages/torchvision/transforms/transforms.py:310: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
warnings.warn("The use of the transforms.Scale transform is deprecated, " +
Traceback (most recent call last):
File "main.py", line 313, in <module>
main()
File "main.py", line 105, in main
model = torch.hub.load('pytorch/vision:v1.9.0', 'resnet18', pretrained=True)
File "/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/hub.py", line 362, in load
repo_or_dir = _get_cache_or_reload(repo_or_dir, force_reload, verbose)
File "/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/hub.py", line 162, in _get_cache_or_reload
_validate_not_a_forked_repo(repo_owner, repo_name, branch)
File "/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/hub.py", line 124, in _validate_not_a_forked_repo
with urlopen(url) as r:
File "/usr/local/lib/python3.8/urllib/request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "/usr/local/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/local/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/local/lib/python3.8/urllib/request.py", line 569, in error
return self._call_chain(*args)
File "/usr/local/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/local/lib/python3.8/urllib/request.py", line 649, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: rate limit exceeded
I have:
$ pip freeze
h5py==3.3.0
joblib==1.0.1
numpy==1.21.2
Pillow==8.3.1
scikit-learn==0.24.2
scipy==1.7.1
sklearn==0.0
threadpoolctl==2.2.0
torch==1.9.0
torchaudio==0.9.0
torchvision==0.10.0
typing-extensions==3.10.0.0
and I followed the example from https://pytorch.org/hub/pytorch_vision_resnet/.
| This is a bug in Pytorch 1.9. As a workaround, try adding:
torch.hub._validate_not_a_forked_repo=lambda a,b,c: True
to your script before any torch.hub call. i.e.:
torch.hub._validate_not_a_forked_repo=lambda a,b,c: True
model = torch.hub.load('pytorch/vision:v0.9.0', 'resnet18', pretrained=True)
According to Philip Meier's (one of the developers) post, this is a bug introduced in PyTorch 1.9 (#56138), and you shouldn't be facing it in older versions.
This is already fixed on master, but whether a 1.9.1 release will ship the fix is not known at the moment. So until 1.10 ships, you either have to use an older version, use torchvision instead, or simply use the workaround proposed above.
Update:
Pytorch released a bugfix for 1.9 (i.e. 1.9.1) so updating to 1.9.1 should rectify this issue for good.
| https://stackoverflow.com/questions/68901236/ |
Feeding an image to stacked resnet blocks to create an embedding | Do you have any code example or paper that refers to something like the following diagram?
I want to know why we want to stack multiple resnet blocks, as opposed to multiple convolutional blocks as in more traditional architectures. Any code sample, or a reference to one, would be really helpful.
Also, how can I transfer that to something like the following, which can contain a self-attention module for each resnet block?
| Applying self-attention to the outputs of Resnet blocks at the very high resolution of the input image may lead to memory issues: The memory requirements of self-attention blocks grow quadratically with the input size (=resolution). This is why in, e.g., Xiaolong Wang, Ross Girshick, Abhinav Gupta, Kaiming He Non-Local Neural Networks (CVPR 2018) they introduced self-attention only at a very deep layer of the architecture, once the feature map was substantially sub-sampled.
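Since you asked for a code sample, here is a minimal, untested sketch of that idea in PyTorch: stack a few residual blocks, downsample the feature map, and only then apply a self-attention module before pooling into an embedding. All layer sizes and the overall layout are my own assumptions for illustration, not taken from any specific paper.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    # A plain residual block: two 3x3 convs plus a skip connection.
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))

class SelfAttention2d(nn.Module):
    # Non-local-style self-attention over the spatial positions of a feature map.
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // 8, 1)
        self.k = nn.Conv2d(channels, channels // 8, 1)
        self.v = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)                    # (b, h*w, c//8)
        k = self.k(x).flatten(2)                                    # (b, c//8, h*w)
        v = self.v(x).flatten(2).transpose(1, 2)                    # (b, h*w, c)
        attn = torch.softmax(q @ k / (q.size(-1) ** 0.5), dim=-1)   # (b, h*w, h*w)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + out                                              # residual connection

# Downsample first so the (h*w x h*w) attention matrix stays small,
# then pool the final feature map into a fixed-size embedding vector.
encoder = nn.Sequential(
    nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(inplace=True),
    nn.MaxPool2d(2),
    ResBlock(64), ResBlock(64),
    nn.Conv2d(64, 128, 3, stride=2, padding=1),
    ResBlock(128), SelfAttention2d(128),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),                          # -> (batch, 128) embedding
)

embedding = encoder(torch.randn(2, 3, 224, 224))                    # shape (2, 128)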
| https://stackoverflow.com/questions/68901687/ |
How to change the pytorch version in Google Colab | I need to change the PyTorch version in Google Colab, so I installed Anaconda:
%%bash
MINICONDA_INSTALLER_SCRIPT=Miniconda3-4.5.4-Linux-x86_64.sh
MINICONDA_PREFIX=/usr/local
wget https://repo.continuum.io/miniconda/$MINICONDA_INSTALLER_SCRIPT
chmod +x $MINICONDA_INSTALLER_SCRIPT
./$MINICONDA_INSTALLER_SCRIPT -b -f -p $MINICONDA_PREFIX
import sys
_ = (sys.path
.append("/usr/local/lib/python3.6/site-packages"))
and then
!conda install pytorch==1.0.0 torchvision==0.2.1 cuda100 -c pytorch --yes
but when I run
import torch
torch.__version__
it's 1.9+cuda120
What's more, when I try to run
pip uninstall torch
Colab asked me whether I want to uninstall pytorch-1.0.0.
How does this happen?
| First, you have to run
!pip uninstall torch
Then, when you are prompted with Proceed (y/n)?, click on the background where the output is being printed, press y and then press Enter. This will uninstall torch; it will take more or less 5 minutes.
Then, you have to
!pip install torch==1.0.0
And finally
import torch
torch.__version__
# '1.0.0'
| https://stackoverflow.com/questions/68903158/ |
After some number of epochs, fake image creation becomes worse in GAN | I'm trying to create a GAN model.
This is my discriminator.py
import torch.nn as nn
class D(nn.Module):
feature_maps = 64
kernel_size = 4
stride = 2
padding = 1
bias = False
inplace = True
def __init__(self):
super(D, self).__init__()
self.main = nn.Sequential(
nn.Conv2d(4, self.feature_maps, self.kernel_size, self.stride, self.padding, bias=self.bias),
nn.LeakyReLU(0.2, inplace=self.inplace),
nn.Conv2d(self.feature_maps, self.feature_maps * 2, self.kernel_size, self.stride, self.padding,
bias=self.bias),
nn.BatchNorm2d(self.feature_maps * 2), nn.LeakyReLU(0.2, inplace=self.inplace),
nn.Conv2d(self.feature_maps * 2, self.feature_maps * (2 * 2), self.kernel_size, self.stride, self.padding,
bias=self.bias),
nn.BatchNorm2d(self.feature_maps * (2 * 2)), nn.LeakyReLU(0.2, inplace=self.inplace),
nn.Conv2d(self.feature_maps * (2 * 2), self.feature_maps * (2 * 2 * 2), self.kernel_size, self.stride,
self.padding, bias=self.bias),
nn.BatchNorm2d(self.feature_maps * (2 * 2 * 2)), nn.LeakyReLU(0.2, inplace=self.inplace),
nn.Conv2d(self.feature_maps * (2 * 2 * 2), 1, self.kernel_size, 1, 0, bias=self.bias),
nn.Sigmoid()
)
def forward(self, input):
output = self.main(input)
return output.view(-1)
this is my generator.py
import torch.nn as nn
class G(nn.Module):
feature_maps = 512
kernel_size = 4
stride = 2
padding = 1
bias = False
def __init__(self, input_vector):
super(G, self).__init__()
self.main = nn.Sequential(
nn.ConvTranspose2d(input_vector, self.feature_maps, self.kernel_size, 1, 0, bias=self.bias),
nn.BatchNorm2d(self.feature_maps), nn.ReLU(True),
nn.ConvTranspose2d(self.feature_maps, int(self.feature_maps // 2), self.kernel_size, self.stride, self.padding,
bias=self.bias),
nn.BatchNorm2d(int(self.feature_maps // 2)), nn.ReLU(True),
nn.ConvTranspose2d(int(self.feature_maps // 2), int((self.feature_maps // 2) // 2), self.kernel_size, self.stride,
self.padding,
bias=self.bias),
nn.BatchNorm2d(int((self.feature_maps // 2) // 2)), nn.ReLU(True),
nn.ConvTranspose2d((int((self.feature_maps // 2) // 2)), int(((self.feature_maps // 2) // 2) // 2), self.kernel_size,
self.stride, self.padding,
bias=self.bias),
nn.BatchNorm2d(int((self.feature_maps // 2) // 2) // 2), nn.ReLU(True),
nn.ConvTranspose2d(int(((self.feature_maps // 2) // 2) // 2), 4, self.kernel_size, self.stride, self.padding,
bias=self.bias),
nn.Tanh()
)
def forward(self, input):
output = self.main(input)
return output
This is my gans.py
# Importing the libraries
from __future__ import print_function
import torch.nn as nn
import torch.optim as optim
import torch.utils.data
import torchvision.datasets as dset
import torchvision.transforms as transforms
import torchvision.utils as vutils
from torch.autograd import Variable
from generator import G
from discriminator import D
import os
from PIL import Image
batchSize = 64 # We set the size of the batch.
imageSize = 64 # We set the size of the generated images (64x64).
input_vector = 100
nb_epochs = 500
# Creating the transformations
transform = transforms.Compose([transforms.Resize((imageSize, imageSize)), transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5, 0.5), (0.5, 0.5, 0.5,
0.5)), ]) # We create a list of transformations (scaling, tensor conversion, normalization) to apply to the input images.
def pil_loader_rgba(path: str) -> Image.Image:
with open(path, 'rb') as f:
img = Image.open(f)
return img.convert('RGBA')
# Loading the dataset
dataset = dset.ImageFolder(root='./data', transform=transform, loader=pil_loader_rgba)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=batchSize, shuffle=True,
num_workers=2) # We use dataLoader to get the images of the training set batch by batch.
# Defining the weights_init function that takes as input a neural network m and that will initialize all its weights.
def weights_init(m):
classname = m.__class__.__name__
if classname.find('Conv') != -1:
m.weight.data.normal_(0.0, 0.02)
elif classname.find('BatchNorm') != -1:
m.weight.data.normal_(1.0, 0.02)
m.bias.data.fill_(0)
def is_cuda_available():
return torch.cuda.is_available()
def is_gpu_available():
if is_cuda_available():
if int(torch.cuda.device_count()) > 0:
return True
return False
return False
# Create results directory
def create_dir(name):
if not os.path.exists(name):
os.makedirs(name)
# Creating the generator
netG = G(input_vector)
netG.apply(weights_init)
# Creating the discriminator
netD = D()
netD.apply(weights_init)
if is_gpu_available():
netG.cuda()
netD.cuda()
# Training the DCGANs
criterion = nn.BCELoss()
optimizerD = optim.Adam(netD.parameters(), lr=0.0002, betas=(0.5, 0.999))
optimizerG = optim.Adam(netG.parameters(), lr=0.0002, betas=(0.5, 0.999))
generator_model = 'generator_model'
discriminator_model = 'discriminator_model'
def save_model(epoch, model, optimizer, error, filepath, noise=None):
if os.path.exists(filepath):
os.remove(filepath)
torch.save({
'epoch': epoch,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'loss': error,
'noise': noise
}, filepath)
def load_checkpoint(filepath):
if os.path.exists(filepath):
return torch.load(filepath)
return None
def main():
print("Device name : " + torch.cuda.get_device_name(0))
for epoch in range(nb_epochs):
for i, data in enumerate(dataloader, 0):
checkpointG = load_checkpoint(generator_model)
checkpointD = load_checkpoint(discriminator_model)
if checkpointG:
netG.load_state_dict(checkpointG['model_state_dict'])
optimizerG.load_state_dict(checkpointG['optimizer_state_dict'])
if checkpointD:
netD.load_state_dict(checkpointD['model_state_dict'])
optimizerD.load_state_dict(checkpointD['optimizer_state_dict'])
# 1st Step: Updating the weights of the neural network of the discriminator
netD.zero_grad()
# Training the discriminator with a real image of the dataset
real, _ = data
if is_gpu_available():
input = Variable(real.cuda()).cuda()
target = Variable(torch.ones(input.size()[0]).cuda()).cuda()
else:
input = Variable(real)
target = Variable(torch.ones(input.size()[0]))
output = netD(input)
errD_real = criterion(output, target)
# Training the discriminator with a fake image generated by the generator
if is_gpu_available():
noise = Variable(torch.randn(input.size()[0], input_vector, 1, 1)).cuda()
target = Variable(torch.zeros(input.size()[0])).cuda()
else:
noise = Variable(torch.randn(input.size()[0], input_vector, 1, 1))
target = Variable(torch.zeros(input.size()[0]))
fake = netG(noise)
output = netD(fake.detach())
errD_fake = criterion(output, target)
# Backpropagating the total error
errD = errD_real + errD_fake
errD.backward()
optimizerD.step()
# 2nd Step: Updating the weights of the neural network of the generator
netG.zero_grad()
if is_gpu_available():
target = Variable(torch.ones(input.size()[0])).cuda()
else:
target = Variable(torch.ones(input.size()[0]))
output = netD(fake)
errG = criterion(output, target)
errG.backward()
optimizerG.step()
# 3rd Step: Printing the losses and saving the real images and the generated images of the minibatch every 100 steps
print('[%d/%d][%d/%d] Loss_D: %.4f Loss_G: %.4f' % (
epoch, nb_epochs, i, len(dataloader), errD.data, errG.data))
save_model(epoch, netG, optimizerG, errG, generator_model, noise)
save_model(epoch, netD, optimizerD, errD, discriminator_model, noise)
if i % 100 == 0:
create_dir('results')
vutils.save_image(real, '%s/real_samples.png' % "./results", normalize=True)
fake = netG(noise)
vutils.save_image(fake.data, '%s/fake_samples_epoch_%03d.png' % ("./results", epoch), normalize=True)
if __name__ == "__main__":
main()
So after a few hours I decided to look at my results folder. I saw a weird thing after the 39th epoch.
The generator started generating worse images. Until the 39th epoch, the generator improved.
Please look at the screenshot below.
Why did the generator suddenly become worse?
I'm trying to run 500 epochs. I thought more epochs would mean more success.
So I had a look at the logs and I'm seeing the following:
[40/500][0/157] Loss_D: 0.0141 Loss_G: 5.7559
[40/500][1/157] Loss_D: 0.0438 Loss_G: 5.5805
[40/500][2/157] Loss_D: 0.0161 Loss_G: 6.4947
[40/500][3/157] Loss_D: 0.0138 Loss_G: 7.1711
[40/500][4/157] Loss_D: 0.0547 Loss_G: 4.6262
[40/500][5/157] Loss_D: 0.0295 Loss_G: 4.7831
[40/500][6/157] Loss_D: 0.0103 Loss_G: 6.3700
[40/500][7/157] Loss_D: 0.0276 Loss_G: 5.9162
[40/500][8/157] Loss_D: 0.0205 Loss_G: 6.3571
[40/500][9/157] Loss_D: 0.0139 Loss_G: 6.4961
[40/500][10/157] Loss_D: 0.0117 Loss_G: 6.4371
[40/500][11/157] Loss_D: 0.0057 Loss_G: 6.6858
[40/500][12/157] Loss_D: 0.0203 Loss_G: 5.4308
[40/500][13/157] Loss_D: 0.0078 Loss_G: 6.5749
[40/500][14/157] Loss_D: 0.0115 Loss_G: 6.3202
[40/500][15/157] Loss_D: 0.0187 Loss_G: 6.2258
[40/500][16/157] Loss_D: 0.0052 Loss_G: 6.5253
[40/500][17/157] Loss_D: 0.0158 Loss_G: 5.5672
[40/500][18/157] Loss_D: 0.0156 Loss_G: 5.5416
[40/500][19/157] Loss_D: 0.0306 Loss_G: 5.4550
[40/500][20/157] Loss_D: 0.0077 Loss_G: 6.1985
[40/500][21/157] Loss_D: 0.0158 Loss_G: 5.3092
[40/500][22/157] Loss_D: 0.0167 Loss_G: 5.8395
[40/500][23/157] Loss_D: 0.0119 Loss_G: 6.0849
[40/500][24/157] Loss_D: 0.0104 Loss_G: 6.5493
[40/500][25/157] Loss_D: 0.0182 Loss_G: 5.6758
[40/500][26/157] Loss_D: 0.0145 Loss_G: 5.8336
[40/500][27/157] Loss_D: 0.0050 Loss_G: 6.8472
[40/500][28/157] Loss_D: 0.0080 Loss_G: 6.4894
[40/500][29/157] Loss_D: 0.0186 Loss_G: 5.5563
[40/500][30/157] Loss_D: 0.0143 Loss_G: 6.4144
[40/500][31/157] Loss_D: 0.0377 Loss_G: 5.4557
[40/500][32/157] Loss_D: 0.0540 Loss_G: 4.6034
[40/500][33/157] Loss_D: 0.0200 Loss_G: 5.6417
[40/500][34/157] Loss_D: 0.0189 Loss_G: 5.7760
[40/500][35/157] Loss_D: 0.0197 Loss_G: 6.1732
[40/500][36/157] Loss_D: 0.0093 Loss_G: 6.4046
[40/500][37/157] Loss_D: 0.0281 Loss_G: 5.5217
[40/500][38/157] Loss_D: 0.0410 Loss_G: 5.9157
[40/500][39/157] Loss_D: 0.0667 Loss_G: 5.2522
[40/500][40/157] Loss_D: 0.0530 Loss_G: 5.6412
[40/500][41/157] Loss_D: 0.0315 Loss_G: 5.9325
[40/500][42/157] Loss_D: 0.0097 Loss_G: 6.7819
[40/500][43/157] Loss_D: 0.0157 Loss_G: 5.8630
[40/500][44/157] Loss_D: 0.0382 Loss_G: 5.1942
[40/500][45/157] Loss_D: 0.0331 Loss_G: 5.1490
[40/500][46/157] Loss_D: 0.0362 Loss_G: 5.7026
[40/500][47/157] Loss_D: 0.0237 Loss_G: 5.7493
[40/500][48/157] Loss_D: 0.0227 Loss_G: 5.7636
[40/500][49/157] Loss_D: 0.0230 Loss_G: 5.6500
[40/500][50/157] Loss_D: 0.0329 Loss_G: 5.4542
[40/500][51/157] Loss_D: 0.0306 Loss_G: 5.6473
[40/500][52/157] Loss_D: 0.0254 Loss_G: 5.8464
[40/500][53/157] Loss_D: 0.0402 Loss_G: 5.8609
[40/500][54/157] Loss_D: 0.0242 Loss_G: 5.9952
[40/500][55/157] Loss_D: 0.0400 Loss_G: 5.8378
[40/500][56/157] Loss_D: 0.0302 Loss_G: 5.8990
[40/500][57/157] Loss_D: 0.0239 Loss_G: 5.8134
[40/500][58/157] Loss_D: 0.0348 Loss_G: 5.8109
[40/500][59/157] Loss_D: 0.0361 Loss_G: 5.9011
[40/500][60/157] Loss_D: 0.0418 Loss_G: 5.8825
[40/500][61/157] Loss_D: 0.0501 Loss_G: 6.2302
[40/500][62/157] Loss_D: 0.0184 Loss_G: 6.2755
[40/500][63/157] Loss_D: 0.0273 Loss_G: 5.9655
[40/500][64/157] Loss_D: 0.0250 Loss_G: 5.7513
[40/500][65/157] Loss_D: 0.0298 Loss_G: 6.0434
[40/500][66/157] Loss_D: 0.0299 Loss_G: 6.4280
[40/500][67/157] Loss_D: 0.0205 Loss_G: 6.3743
[40/500][68/157] Loss_D: 0.0173 Loss_G: 6.2749
[40/500][69/157] Loss_D: 0.0199 Loss_G: 6.0541
[40/500][70/157] Loss_D: 0.0309 Loss_G: 6.5044
[40/500][71/157] Loss_D: 0.0177 Loss_G: 6.6093
[40/500][72/157] Loss_D: 0.0363 Loss_G: 7.2993
[40/500][73/157] Loss_D: 0.0093 Loss_G: 7.6995
[40/500][74/157] Loss_D: 0.0087 Loss_G: 7.3493
[40/500][75/157] Loss_D: 0.0540 Loss_G: 8.2688
[40/500][76/157] Loss_D: 0.0172 Loss_G: 8.3312
[40/500][77/157] Loss_D: 0.0086 Loss_G: 7.6863
[40/500][78/157] Loss_D: 0.0232 Loss_G: 7.4930
[40/500][79/157] Loss_D: 0.0175 Loss_G: 7.8834
[40/500][80/157] Loss_D: 0.0109 Loss_G: 9.5329
[40/500][81/157] Loss_D: 0.0093 Loss_G: 7.3253
[40/500][82/157] Loss_D: 0.0674 Loss_G: 10.6709
[40/500][83/157] Loss_D: 0.0010 Loss_G: 10.8321
[40/500][84/157] Loss_D: 0.0083 Loss_G: 8.5728
[40/500][85/157] Loss_D: 0.0124 Loss_G: 6.9085
[40/500][86/157] Loss_D: 0.0181 Loss_G: 7.0867
[40/500][87/157] Loss_D: 0.0130 Loss_G: 7.3527
[40/500][88/157] Loss_D: 0.0189 Loss_G: 7.2494
[40/500][89/157] Loss_D: 0.0302 Loss_G: 8.7555
[40/500][90/157] Loss_D: 0.0147 Loss_G: 7.7668
[40/500][91/157] Loss_D: 0.0325 Loss_G: 7.7779
[40/500][92/157] Loss_D: 0.0257 Loss_G: 8.3955
[40/500][93/157] Loss_D: 0.0113 Loss_G: 8.3687
[40/500][94/157] Loss_D: 0.0124 Loss_G: 7.6081
[40/500][95/157] Loss_D: 0.0088 Loss_G: 7.6012
[40/500][96/157] Loss_D: 0.0241 Loss_G: 7.6573
[40/500][97/157] Loss_D: 0.0522 Loss_G: 10.8114
[40/500][98/157] Loss_D: 0.0071 Loss_G: 11.0529
[40/500][99/157] Loss_D: 0.0043 Loss_G: 8.0707
[40/500][100/157] Loss_D: 0.0141 Loss_G: 7.2864
[40/500][101/157] Loss_D: 0.0234 Loss_G: 7.3585
[40/500][102/157] Loss_D: 0.0148 Loss_G: 7.4577
[40/500][103/157] Loss_D: 0.0190 Loss_G: 8.1904
[40/500][104/157] Loss_D: 0.0201 Loss_G: 8.1518
[40/500][105/157] Loss_D: 0.0220 Loss_G: 9.1069
[40/500][106/157] Loss_D: 0.0108 Loss_G: 9.0069
[40/500][107/157] Loss_D: 0.0044 Loss_G: 8.0970
[40/500][108/157] Loss_D: 0.0076 Loss_G: 7.2699
[40/500][109/157] Loss_D: 0.0052 Loss_G: 7.4036
[40/500][110/157] Loss_D: 0.0167 Loss_G: 7.2742
[40/500][111/157] Loss_D: 0.0032 Loss_G: 7.9825
[40/500][112/157] Loss_D: 0.3462 Loss_G: 32.6314
[40/500][113/157] Loss_D: 0.1704 Loss_G: 40.6010
[40/500][114/157] Loss_D: 0.0065 Loss_G: 44.4607
[40/500][115/157] Loss_D: 0.0142 Loss_G: 43.9761
[40/500][116/157] Loss_D: 0.0160 Loss_G: 45.0376
[40/500][117/157] Loss_D: 0.0042 Loss_G: 45.9534
[40/500][118/157] Loss_D: 0.0061 Loss_G: 45.2998
[40/500][119/157] Loss_D: 0.0023 Loss_G: 45.4654
[40/500][120/157] Loss_D: 0.0033 Loss_G: 44.6643
[40/500][121/157] Loss_D: 0.0042 Loss_G: 44.6020
[40/500][122/157] Loss_D: 0.0002 Loss_G: 44.4807
[40/500][123/157] Loss_D: 0.0004 Loss_G: 44.0402
[40/500][124/157] Loss_D: 0.0055 Loss_G: 43.9188
[40/500][125/157] Loss_D: 0.0021 Loss_G: 43.1988
[40/500][126/157] Loss_D: 0.0008 Loss_G: 41.6770
[40/500][127/157] Loss_D: 0.0001 Loss_G: 40.8719
[40/500][128/157] Loss_D: 0.0009 Loss_G: 40.3803
[40/500][129/157] Loss_D: 0.0023 Loss_G: 39.0143
[40/500][130/157] Loss_D: 0.0254 Loss_G: 39.0317
[40/500][131/157] Loss_D: 0.0008 Loss_G: 37.9451
[40/500][132/157] Loss_D: 0.0253 Loss_G: 37.1046
[40/500][133/157] Loss_D: 0.0046 Loss_G: 36.2807
[40/500][134/157] Loss_D: 0.0025 Loss_G: 35.5878
[40/500][135/157] Loss_D: 0.0011 Loss_G: 33.6500
[40/500][136/157] Loss_D: 0.0061 Loss_G: 33.5011
[40/500][137/157] Loss_D: 0.0015 Loss_G: 30.0363
[40/500][138/157] Loss_D: 0.0019 Loss_G: 31.0197
[40/500][139/157] Loss_D: 0.0027 Loss_G: 28.4693
[40/500][140/157] Loss_D: 0.0189 Loss_G: 27.3072
[40/500][141/157] Loss_D: 0.0051 Loss_G: 26.6637
[40/500][142/157] Loss_D: 0.0077 Loss_G: 24.8390
[40/500][143/157] Loss_D: 0.0123 Loss_G: 23.8334
[40/500][144/157] Loss_D: 0.0014 Loss_G: 23.3755
[40/500][145/157] Loss_D: 0.0036 Loss_G: 19.6341
[40/500][146/157] Loss_D: 0.0025 Loss_G: 18.1076
[40/500][147/157] Loss_D: 0.0029 Loss_G: 16.9415
[40/500][148/157] Loss_D: 0.0028 Loss_G: 16.4647
[40/500][149/157] Loss_D: 0.0048 Loss_G: 14.6184
[40/500][150/157] Loss_D: 0.0074 Loss_G: 13.2544
[40/500][151/157] Loss_D: 0.0053 Loss_G: 13.0052
[40/500][152/157] Loss_D: 0.0070 Loss_G: 11.8815
[40/500][153/157] Loss_D: 0.0078 Loss_G: 12.1657
[40/500][154/157] Loss_D: 0.0094 Loss_G: 10.4259
[40/500][155/157] Loss_D: 0.0073 Loss_G: 9.9345
[40/500][156/157] Loss_D: 0.0082 Loss_G: 9.7609
[41/500][0/157] Loss_D: 0.0079 Loss_G: 9.2920
[41/500][1/157] Loss_D: 0.0134 Loss_G: 8.5241
[41/500][2/157] Loss_D: 0.0156 Loss_G: 8.6983
[41/500][3/157] Loss_D: 0.0250 Loss_G: 8.1148
[41/500][4/157] Loss_D: 0.0160 Loss_G: 8.3324
[41/500][5/157] Loss_D: 0.0187 Loss_G: 7.6281
[41/500][6/157] Loss_D: 0.0191 Loss_G: 7.4707
[41/500][7/157] Loss_D: 0.0092 Loss_G: 8.3976
[41/500][8/157] Loss_D: 0.0118 Loss_G: 7.9800
[41/500][9/157] Loss_D: 0.0126 Loss_G: 7.3999
[41/500][10/157] Loss_D: 0.0165 Loss_G: 7.0854
[41/500][11/157] Loss_D: 0.0095 Loss_G: 7.6392
[41/500][12/157] Loss_D: 0.0079 Loss_G: 7.3862
[41/500][13/157] Loss_D: 0.0181 Loss_G: 7.3812
[41/500][14/157] Loss_D: 0.0168 Loss_G: 6.9518
[41/500][15/157] Loss_D: 0.0094 Loss_G: 7.8525
[41/500][16/157] Loss_D: 0.0165 Loss_G: 7.3024
[41/500][17/157] Loss_D: 0.0029 Loss_G: 8.4487
[41/500][18/157] Loss_D: 0.0169 Loss_G: 7.0449
[41/500][19/157] Loss_D: 0.0167 Loss_G: 7.1307
[41/500][20/157] Loss_D: 0.0255 Loss_G: 6.7970
[41/500][21/157] Loss_D: 0.0154 Loss_G: 6.9745
[41/500][22/157] Loss_D: 0.0110 Loss_G: 6.9925
As you can see, there is a huge change in the generator loss (Loss_G).
Any idea why that happened?
Any idea how to overcome such a problem?
| GAN training is inherently unstable because of the simultaneous dynamic training of two competing models. I tried plotting the loss values from your question, and the discriminator and generator losses look like below:
Looking at the loss and the generated images, we can say that the training fails to converge. This failure is due to not finding an equilibrium between the discriminator and the generator: we see that the loss for the discriminator is close to zero, while the loss of the generator rises and is unstable, resulting in garbage images that the discriminator can easily identify as fake.
The discriminator classifies both the real data and the fake data from the generator. The discriminator loss penalizes it for misclassifying a real instance as fake, or a fake instance (created by the generator) as real.
The generator loss is based on the discriminator's classification: it gets rewarded if it successfully fools the discriminator, and gets penalized otherwise. A GAN is a zero-sum non-cooperative game: the win is either the discriminator's or the generator's, and if one wins, the other loses. Convergence happens at a Nash equilibrium, which is when one player's action doesn't affect the other. Read more about it at https://jonathan-hui.medium.com/gan-why-it-is-so-hard-to-train-generative-advisory-networks-819a86b3750b, and https://jonathan-hui.medium.com/gan-what-is-wrong-with-the-gan-cost-function-6f594162ce01 provides a deeper insight into the GAN challenges.
The convergence failure could also happen due to mode collapse and diminishing gradients. Also, in addition to the exploding-gradients solution suggested by Nihal:
Try implementing early stopping in the model based on metrics such as the Inception Score, Modified Inception Score, Frechet Inception Distance, or Wasserstein distance (taken from this paper: https://arxiv.org/pdf/1802.03446.pdf). These measures help identify model convergence so that training can stop automatically once the model converges.
It is also shown that Spectral Normalization, a particular kind of normalization applied on the convolutional kernels, can greatly help the stability of the training. https://arxiv.org/pdf/1802.05957.pdf
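In PyTorch this is a one-line wrapper around each convolution of the discriminator. A hedged sketch (arbitrary layer sizes, just to show the call):
import torch.nn as nn
from torch.nn.utils import spectral_norm

# in D.__init__, wrap each conv, e.g.
# nn.Conv2d(4, 64, 4, 2, 1, bias=False)                  before
# spectral_norm(nn.Conv2d(4, 64, 4, 2, 1, bias=False))   after
layer = spectral_norm(nn.Conv2d(4, 64, 4, 2, 1, bias=False))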
Making the training of the discriminator more difficult could help. Adding noise to both real images and images from generator helps increase the complexity of the discriminator training.
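One common way to do this (often called instance noise) is to add a small Gaussian perturbation right before the discriminator sees the images. A hedged sketch; the 0.05 standard deviation is an arbitrary assumption and is usually annealed towards 0 during training:
noise_std = 0.05  # hypothetical value
real_noisy = input + noise_std * torch.randn_like(input)
fake_noisy = fake.detach() + noise_std * torch.randn_like(fake)
output_real = netD(real_noisy)
output_fake = netD(fake_noisy)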
Increasing the number of iterations doesn't always improve the model. Beyond some point of training stability, more training iterations may or may not result in higher-quality images due to the high-variance loss. And since GANs are relatively new, the research directions on the challenges they face are still open and debated.
| https://stackoverflow.com/questions/68904476/ |
soft cross entropy in pytorch | I have a bit of a problem implementing a soft cross entropy loss in pytorch.
I need to implement a weighted soft cross entropy loss for my model, meaning the target value is a vector of probabilities as well, not a one-hot vector.
I tried using the kldivloss as suggested in a few forums, but it does not expect a weight vector so I can not use it.
In general I'm a bit confused about how to create a custom loss function with PyTorch and how autograd follows a custom loss function, especially if after the model we apply some function which is not purely mathematical, like mapping the output of the model to some vector and calculating the loss on the mapped vector, etc.
| According to your comment, you are looking to implement a weighted cross-entropy loss with soft labels. Indeed nn.CrossEntropyLoss only works with hard labels (one-hot encodings) since the target is provided as a dense representation (with a single class label per instance).
You can implement the function yourself though. I originally implemented this kind of function in this answer but there wasn't any weighting on the classes. Here instead we take the following three arguments:
logits: your unscaled predictions,
weights: the weights per-logit, and
labels: your target tensor.
We have the following loss term:
>>> p = F.log_softmax(logits, 1)
>>> w_labels = weights*labels
>>> loss = -(w_labels*p).sum() / (w_labels).sum()
As long as you operate with differentiable PyTorch builtins, you should be able to backward pass from your custom loss' output. In any case, you can always verify if a backward pass can be called from a given tensor by checking if it has a grad_fn attribute.
You can wrap the logic inside a nn.Module
class SoftCrossEntropyLoss(nn.Module):
def __init__(self, weights):
super().__init__()
self.weights = weights
def forward(self, y_hat, y):
p = F.log_softmax(y_hat, 1)
w_labels = self.weights*y
loss = -(w_labels*p).sum() / (w_labels).sum()
return loss
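Usage is then the same as with any other criterion. A quick hedged example (the shapes are assumptions: per-class weights of size C, logits and soft targets of size (N, C), with the usual torch / torch.nn.functional imports in place):
>>> weights = torch.tensor([1., 2., 3.])            # one weight per class (C=3)
>>> criterion = SoftCrossEntropyLoss(weights)
>>> y_hat = torch.randn(4, 3, requires_grad=True)   # unscaled predictions
>>> y = torch.softmax(torch.randn(4, 3), dim=1)     # soft targets
>>> loss = criterion(y_hat, y)
>>> loss.backward()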
| https://stackoverflow.com/questions/68907809/ |
imbalanced classification using undersampling and oversampling using pytorch python | I want to use oversampling and undersampling techniques together.
I have 6 classes with number of samples as following:
class 0 250000
class 1 48000
class 2 40000
class 3 38000
class 4 35000
class 5 7000
I want to use SMOTE to make all classes balanced with the same size:
class 0 40000
class 1 40000
class 2 40000
class 3 40000
class 4 40000
class 5 40000
I know how to apply oversampling or undersampling to all the data, but how do I use them together for multi-class classification?
| I tried this:
from imblearn.under_sampling import RandomUnderSampler
from imblearn.over_sampling import SMOTE

ros = RandomUnderSampler()
X, y=ros.fit_resample(mydata, labels)
strategy = {0:40000, 1:40000, 2:40000, 3:40000, 4:40000, 5:40000}
over = SMOTE(sampling_strategy=strategy)
X, y=over.fit_resample(X, y)
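To double-check the result, you can print the class counts after each step (assuming y is a 1-D array of labels):
from collections import Counter
print(Counter(y))   # expect 40000 samples per class after both steps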
| https://stackoverflow.com/questions/68913032/ |
Converting a fully connected neural network with variable number of hidden layers from tensorflow to pytorch | I recently started learning PyTorch and I am trying to convert part of a large script, including an MLP with a variable number of hidden layers, from TensorFlow to PyTorch.
import tensorflow as tf
### Base neural network
def init_mlp(layer_sizes, std=.01, bias_init=0.):
params = {'w':[], 'b':[]}
for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
params['w'].append(tf.Variable(tf.random_normal([n_in, n_out], stddev=std)))
params['b'].append(tf.Variable(tf.mul(bias_init, tf.ones([n_out,]))))
return params
def mlp(X, params):
h = [X]
for w,b in zip(params['w'][:-1], params['b'][:-1]):
h.append( tf.nn.relu( tf.matmul(h[-1], w) + b ) )
#h.append( tf.nn.tanh( tf.matmul(h[-1], w) + b ) )
return tf.matmul(h[-1], params['w'][-1]) + params['b'][-1]
def compute_nll(x, x_recon_linear):
return tf.reduce_sum(tf.nn.sigmoid_cross_entropy_with_logits(x_recon_linear, x), reduction_indices=1, keep_dims=True)
def gauss_cross_entropy(mean_post, std_post, mean_prior, std_prior):
d = (mean_post - mean_prior)
d = tf.mul(d,d)
return tf.reduce_sum(-tf.div(d + tf.mul(std_post,std_post),(2.*std_prior*std_prior)) - tf.log(std_prior*2.506628), reduction_indices=1, keep_dims=True)
How could I write down similar weights and bias variables and attach them in each hidden layer in PyTorch?
how could I convert gauss_cross_entropy and compute_nll
functions as well (finding equivalent syntax)?
Are these two codes compatible?
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as func
from torch.distributions import Normal, Categorical, Independent
from copy import
device = "cpu"
if torch.cuda.is_available():
device = "cuda:0"
if torch.cuda.device_count() > 1:
net = nn.DataParallel(net)
net.to(device)
def init_mlp(layer_sizes, std=.01, bias_init=0.):
params = {'w':[], 'b':[]}
for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
params['w'].append(torch.tensor(Normal([n_in, n_out], torch.tensor([std])) ,requires_grad=True))
params['b'].append(torch.tensor(torch.mul(bias_init, torch.ones([n_out,])),requires_grad=True))
return params
def mlp(X, params):
h = [X]
for w,b in zip(params['w'][:-1], params['b'][:-1]):
h.append( torch.nn.ReLU( tf.matmul(h[-1], w) + b ) )
return torch.matmul(h[-1], params['w'][-1]) + params['b'][-1]
def compute_nll(x, x_recon_linear):
return torch.sum(func.binary_cross_entropy_with_logits(x_recon_linear, x), reduction_indices=1, keep_dims=True)
def gauss_cross_entropy(mu_post, sigma_post, mu_prior, sigma_prior):
d = (mu_post - mu_prior)
d = torch.mul(d,d)
return torch.sum(-torch.div(d + torch.mul(sigma_post,sigma_post),(2.*sigma_prior*sigma_prior)) - torch.log(sigma_prior*2.506628), reduction_indices=1, keep_dims=True)
What is the substitute function for tf.placeholder in pytorch? For instance here:
class VAE(object):
def __init__(self, hyperParams):
self.X = tf.placeholder("float", [None, hyperParams['input_d']])
self.prior = hyperParams['prior']
self.K = hyperParams['K']
self.encoder_params = self.init_encoder(hyperParams)
self.decoder_params = self.init_decoder(hyperParams)
and also how should I change tf.shape in this line: tf.random_normal(tf.shape(self.sigma[-1]))
|
How could I write down similar weights and bias variables and attach them in each hidden layer in PyTorch?
An easier way to define those is to create a list containing the params as (weight, bias) tuples:
def init_mlp(layer_sizes, std=.01, bias_init=0.):
params = []
for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
params.append([
nn.init.normal_(torch.empty(n_in, n_out)).requires_grad_(True),
torch.empty(n_out).fill_(bias_init).requires_grad_(True)])
return params
Above I define my parameters as 'empty' (created with uninitialized data) tensors with torch.empty. I have used in-place functions such as nn.init.normal_ (there are many others available) and torch.Tensor.fill_ to fill the tensor with an arbitrary value (maybe it is .mul_(bias_init) you are looking for, based on your TensorFlow sample?).
For the inference code, you don't actually need to store the intermediate layer results:
def mlp(x, params):
for i, (W, b) in enumerate(params):
x = x@W + b
if i < len(params) - 1:
x = torch.relu(x)
return x
How could I convert gauss_cross_entropy and compute_nll functions as well (finding equivalent syntax)?
You can use PyTorch functions and mathematical operators to define your logic. For compute_loss you were using the built-in, which actually does not require summation after it; by default the losses of the batch elements are averaged.
def compute_loss(y_pred, y_true):
return F.binary_cross_entropy_with_logits(y_pred, y_true)
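For gauss_cross_entropy, a direct (untested) translation of your TensorFlow version is sketched below; tf.reduce_sum(..., reduction_indices=1, keep_dims=True) maps to torch.sum(..., dim=1, keepdim=True), and it assumes all four arguments are tensors:
def gauss_cross_entropy(mean_post, std_post, mean_prior, std_prior):
    d = (mean_post - mean_prior) ** 2
    return torch.sum(-(d + std_post ** 2) / (2. * std_prior ** 2)
                     - torch.log(std_prior * 2.506628),
                     dim=1, keepdim=True)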
What is the substitute function for tf.placeholder in Pytorch?
You don't have placeholders in PyTorch, you compute your outputs explicitly using PyTorch operators, then you should be able to backpropagate through those operators and get the gradients for each parameter.
How should I change tf.shape in this line: tf.random_normal(tf.shape(self.sigma[-1]))
Function tf.shape returns the shape of the tensor, in PyTorch you call torch.Tensor.shape or by calling torch.Tensor.size: i.e. self.sigma[-1].shape or self.sigma[-1].size().
| https://stackoverflow.com/questions/68924907/ |
Pytorch Text AttributeError: ‘BucketIterator’ object has no attribute | I’m doing seq2seq machine translation on my own dataset. I have preprocessed my dataset using this code.
The problem comes when i tried to split train_data using BucketIterator.split()
def tokenize_word(text):
return nltk.word_tokenize(text)
id = Field(sequential=True, tokenize = tokenize_word, lower=True, init_token="<sos>", eos_token="<eos>")
ti = Field(sequential=True, tokenize = tokenize_word, lower=True, init_token="<sos>", eos_token="<eos>")
fields = {'id': ('i', id), 'ti': ('t', ti)}
train_data = TabularDataset.splits(
path='/content/drive/MyDrive/Colab Notebooks/Tidore/',
train = 'id_ti.tsv',
format='tsv',
fields=fields
)[0]
id.build_vocab(train_data)
ti.build_vocab(train_data)
print(f"Unique tokens in source (id) vocabulary: {len(id.vocab)}")
print(f"Unique tokens in target (ti) vocabulary: {len(ti.vocab)}")
train_iterator = BucketIterator.splits(
(train_data),
batch_size = batch_size,
sort_within_batch = True,
sort_key = lambda x: len(x.id),
device = device
)
print(len(train_iterator))
for data in train_iterator:
print(data.i)
This is the result of the code above
Unique tokens in source (id) vocabulary: 1425
Unique tokens in target (ti) vocabulary: 1297
2004
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-72-e73a211df4bd> in <module>()
31
32 for data in train_iterator:
---> 33 print(data.i)
AttributeError: 'BucketIterator' object has no attribute 'i'
This is the result when I tried to print the train_iterator.
I am very confused, because I don't know what key I should use for the train iterator. Thank you for your help.
| train_iterator = BucketIterator.splits(
(train_data),
batch_size = batch_size,
sort_within_batch = True,
sort_key = lambda x: len(x.id),
device = device
)
here
Use BucketIterator instead of BucketIterator.splits when there is only one iterator that needs to be generated.
I have met this problem and the method mentioned above works.
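For this question that would look something like the following (keeping the original settings). Note that with fields = {'id': ('i', id), 'ti': ('t', ti)} the batch attributes are exposed under the tuple names, i.e. x.i and x.t (which is why data.i works in the print loop), so the sort_key should probably reference x.i as well:
train_iterator = BucketIterator(
    train_data,
    batch_size = batch_size,
    sort_within_batch = True,
    sort_key = lambda x: len(x.i),
    device = device
)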
| https://stackoverflow.com/questions/68931409/ |
DataLoader worker exited unexpectedly (pid(s) 48817, 48818) | When running my code I received this error message:
"RuntimeError: DataLoader worker (pid(s) 48817, 48818) exited unexpectedly" I am completely unsure where to begin to solve this issue. Any guidance at all would be greatly appreciated. Code and traceback posted below
batch_size = 128
image_size = (64,64)
stats = (0.5, 0.5, 0.5), (0.5, 0.5, 0.5)
transform_ds = transforms.Compose([transforms.Resize(image_size),
# transforms.RandomCrop(32, padding=2),
# transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize(*stats)
])
train_ds = facesDataset(csv_file = 'imagesdataset.csv', root_dir = 'images',
transform = transform_ds)
train_dl = DataLoader(train_ds, batch_size, shuffle=True, num_workers=3, pin_memory=True)
print(len(train_ds))
def denorm(img_tensors):
return img_tensors * stats[1][0] + stats[0][0]
def show_images(img, nmax=64):
fig, ax = plt.subplots(figsize=(8, 8))
ax.set_xticks([]); ax.set_yticks([])
ax.imshow(make_grid(denorm(img.detach()[:nmax]), nrow=8).permute(1, 2, 0))
def show_batch(dl, nmax=64):
for img, _ in dl:
show_images(img, nmax)
break
show_batch(train_dl)
traceback
Traceback (most recent call last):
File "/Users/___/Desktop/stylegan/stylegan.py", line 52, in <module>
show_batch(train_dl)
File "/Users/___/Desktop/stylegan/stylegan.py", line 48, in show_batch
for img, _ in dl:
File "/Users/___/opt/anaconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
data = self._next_data()
File "/Users/___/opt/anaconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1186, in _next_data
idx, data = self._get_data()
File "/Users/___/opt/anaconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1152, in _get_data
success, data = self._try_get_data()
File "/Users/___/opt/anaconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1003, in _try_get_data
raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
RuntimeError: DataLoader worker (pid(s) 48817, 48818) exited unexpectedly
| One of the reasons might be data loading with multiprocessing. As far as I know, on Windows, if you don't set num_workers to 0 there can be errors. So I recommend you try without num_workers (because by default it is 0) or just set num_workers=0.
train_dl = DataLoader(train_ds, batch_size, shuffle=True, num_workers=0, pin_memory=True)
| https://stackoverflow.com/questions/68931909/ |
How to prune the k% lowest weights with pytorch? | Here I am learning from the paper called Deep Compression [Han et al.], using resnet18.
I also wrote the following code: the weight is multiplied by the mask so that the resulting after_weight has the k% lowest weights set to zero. But that code doesn't work for me.
Any efficient solution?
prune = float(0.1)
def prune_weights(torchweights):
weights=np.abs(torchweights.cpu().numpy());
weightshape=weights.shape
rankedweights=weights.reshape(weights.size).argsort()#.reshape(weightshape)
num = weights.size
prune_num = int(np.round(num*prune))
count=0
masks = np.zeros_like(rankedweights)
for n, rankedweight in enumerate(rankedweights):
if rankedweight > prune_num:
masks[n]=1
else: count+=1
print("total weights:", num)
print("weights pruned:",count)
masks=masks.reshape(weightshape)
weights=masks*weights
return torch.from_numpy(weights).cuda(), masks
# prune weights
# The pruned weight location is saved in the addressbook and maskbook.
# These will be used during training to keep the weights zero.
addressbook=[]
maskbook=[]
for k, v in net.state_dict().items():
if "conv2" in k:
addressbook.append(k)
print("pruning layer:",k)
weights=v
weights, masks = prune_weights(weights)
maskbook.append(masks)
checkpoint['net'][k] = weights
checkpoint['address'] = addressbook
checkpoint['mask'] = maskbook
net.load_state_dict(checkpoint['net'])
| You can use torch.nn.utils.prune.
It seems you want to remove 10% of every Conv2D layer. If that is the case, you can do it this way:
import torch
import torch.nn.utils.prune as prune
# load your model
net = ?
# in your example, you want to remove 10%
prune_perc = 0.1
for name, module in net.named_modules():
if isinstance(module, torch.nn.Conv2d):
prune.l1_unstructured(module, name='weight', amount=prune_perc)
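Note that l1_unstructured keeps the pruning as a reparametrization (a weight_orig tensor plus a weight_mask buffer). If you later want the zeros permanently baked into weight itself, prune.remove does that. A short hedged follow-up:
for name, module in net.named_modules():
    if isinstance(module, torch.nn.Conv2d):
        prune.remove(module, 'weight')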
| https://stackoverflow.com/questions/68936169/ |
Is there a mean-variance normalization layer in PyTorch? | I am new to PyTorch and I would like to add a mean-variance normalization layer to my network that will normalize features to zero mean and unit standard deviation. I got a bit confused reading the documentation, could anyone give me some leads?
As @Ivan commented, the normalization can be done on many levels. However, as you say
normalize features to zero mean and unit standard deviation
I suppose you just want to input unbiased data to the network. If that's the case, you should treat it as a data preprocessing step rather than a layer of your model and basically do:
X = (X - torch.mean(X, dim=0))/torch.std(X, dim=0)
As an alternative, you can use torchvision.transforms:
preprocess = torchvision.transforms.Normalize(mean=torch.mean(X, dim=0), std=torch.std(X, dim=0))
X = preprocess(X)
as in this ResNet native example. Note how it is reasonably assumed that the future data would always have roughly the same mean and std_dev as the set that is used for their initial calculation (supposedly the training set). For this reason, we should preserve the initially calculated values and use them for preprocessing in any future inference scenario.
| https://stackoverflow.com/questions/68938545/ |