instruction | input | output | source
---|---|---|---
How to use a Huggingface BERT model to feed a binary classifier CNN? | I am a bit confused about how to consume Huggingface transformers outputs to train a simple binary language classifier model that predicts whether Albert Einstein said a sentence or not.
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
inputs = ["Hello World", "Hello There", "Bye Bye", "Two things are infinite: the universe and human stupidity; and I'm not sure about the universe."]
for input in inputs:
    inputs = tokenizer(input, return_tensors="pt")
    outputs = model(**inputs)
    print(outputs[0].shape, input, len(input))
Output:
Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertModel: ['cls.predictions.transform.LayerNorm.weight', 'cls.predictions.bias', 'cls.seq_relationship.bias', 'cls.seq_relationship.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.decoder.weight', 'cls.predictions.transform.dense.bias']
- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
torch.Size([1, 4, 768]) Hello World 11
torch.Size([1, 4, 768]) Hello There 11
torch.Size([1, 4, 768]) Bye Bye 7
torch.Size([1, 23, 768]) Two things are infinite: the universe and human stupidity; and I'm not sure about the universe. 95
As you can see, the dimensions of the output vary with the length of the input. Now assume I would like to train a binary classifier that predicts whether Einstein said the input sentence or not, where the input of the network will be the prediction of the BERT transformer.
How could I write a CNN model that takes a tensor [1, None, 768] in pytorch? It seems that the second dimension changes with the length of the input.
| In PyTorch you don't need a fixed input dim for a CNN. The only requirement is that your kernel_size must not be larger than the input size.
Generally, the best way of putting a classifier (sequence classifier) on top of a Transformer model is to add a pooling layer + FC layer. You can use global pooling, an average or max pooling, or an adaptive pooling, and then a fully connected layer.
Note that you can also use AutoModelForSequenceClassification to get everything done for you.
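For reference, a minimal sketch of that route (my own sketch, not from the original answer; num_labels=2 here is an assumption for a two-class setup):
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
clf = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
enc = tokenizer(["Hello World", "Bye Bye"], padding=True, truncation=True, return_tensors="pt")
logits = clf(**enc).logits  # shape (2, 2): one row of class scores per sentence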
#An example with a simple average pooling
from transformers import AutoTokenizer, AutoModel
import torch
NUM_CLASSES = 1
MAX_LEN = 30
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.classifier = torch.nn.Linear(model.pooler.dense.in_features, NUM_CLASSES)
inputs_str = ["Hello World", "Hello There", "Bye Bye", "Two things are infinite: the universe and human stupidity; and I'm not sure about the universe."]
inputs = tokenizer(inputs_str, padding="max_length", return_tensors="pt", max_length=MAX_LEN)
outputs, _ = model(**inputs)  # on transformers v4+, use instead: outputs = model(**inputs).last_hidden_state
outputs = torch.mean(outputs, dim=1)
outputs = model.classifier(outputs)
print(outputs.shape) #=> (4,1)
| https://stackoverflow.com/questions/68945422/ |
Changing pre-trained model's parameters | I need to change parameters of a pre-trained model, not in any particular way. I'm trying this:
ids = [int(p.sum().item()) for p in model.parameters()]
print(ids[0])

for i in model.parameters():
    x = i.data
    x = x/100
    break

ids = [int(p.sum().item()) for p in model.parameters()]
print(ids[0])
but it outputs exactly the same number twice.
| This is simple: you need to perform an in-place operation, otherwise you'll operate on a new object:
for i in model.parameters():
    x = i.data
    x /= 100
    break
Here's a minimal reproducible example:
import torch
torch.manual_seed(2021)
m = torch.nn.Linear(1, 1)
# show current value of the weight
print(next(m.parameters()))
# > tensor([[-0.7391]], requires_grad=True)
for i in m.parameters():
    x = i.data
    x = x/100
    break
# same value :(
print(next(m.parameters()))
# > tensor([[-0.7391]], requires_grad=True)
for i in m.parameters():
    x = i.data
    x /= 100
    break
# now, we changed it
print(next(m.parameters()))
# > tensor([[-0.0074]], requires_grad=True)
P.S.: break is unnecessary in my example, but I kept it just because you used it in your example.
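As a further side note (my own addition, not part of the original answer), a more idiomatic way to do in-place parameter edits is under torch.no_grad(), which avoids touching .data at all:
with torch.no_grad():
    for p in m.parameters():
        p.div_(100)  # in-place division on every parameter, invisible to autograd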
| https://stackoverflow.com/questions/68946757/ |
Why doesn't O3 use less GPU memory than O1? | I'm training EfficientDet-D7 (head_only=True) on a single 2080 Ti.
And i'm using NVIDIA/APEX:amp.
When I use opt_level=O1, the memory is definitely reduced compared to when apex is not used.
But when I use opt_level=O2 or O3, more memory is consumed.
I am experimenting with the same 2080 Ti, each run on a separate GPU, by creating two containers from the same docker image. The training code was copied as-is from the O1 run and changed to O3, and the args required for training are all the same. (The batch size and D7 are also the same.)
Why does this happen... TT
Additionally, can you recommend a book about this? (e.g. deep learning memory, GPUs, etc.)
Thanks!
| You're optimizing for speed. Some speed optimizations will reduce memory usage. Others will increase it.
An example of when speed optimization reduces memory usage is when unnecessary variables and function calls are removed.
An example of the opposite is loop unrolling.
There's no reason to expect optimization to either reduce or increase memory usage. That's not the goal when optimizing for speed. Any increase or decrease is just a byproduct.
If you really want to find out why it happens in your particular case, you can study the documentation for your compiler and inspect the assembly code.
| https://stackoverflow.com/questions/68949428/ |
How can I convert onnx format to ptl format? | Here I already have my .onnx file: retinaface.onnx. I want to convert it to a PyTorch-mobile-supported format (.ptl or .pt) so that I can do inference on the Android platform, but I failed to convert it or to find relevant issues.
// I want to load directly but failed
mRetinafaceDector = LiteModuleLoader.load(mModelPath + "/retinaface.onnx");
It seems LiteModuleLoader.load() can only load .ptl or .pt, but all I have is the retinaface.onnx file.
| You can't load an ONNX model directly into PyTorch, as it's not supported yet.
Fortunately, you can use the onnx2pytorch tool to convert ONNX to PyTorch.
First, install it:
https://github.com/ToriML/onnx2pytorch
Then you can easily convert it using this code:
import onnx
from onnx2pytorch import ConvertModel

onnx_model = onnx.load(path_to_onnx_model)
pytorch_model = ConvertModel(onnx_model)
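To then get the .ptl file for Android, you can trace the converted model and save it for the lite interpreter. A sketch (the example input shape below is an assumption; match it to what retinaface expects):
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

pytorch_model.eval()
example = torch.rand(1, 3, 640, 640)  # assumed input shape, adjust for your model
traced = torch.jit.trace(pytorch_model, example)
traced = optimize_for_mobile(traced)
traced._save_for_lite_interpreter("retinaface.ptl")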
Nevertheless, check this issue; it's about importing ONNX directly in PyTorch:
https://github.com/pytorch/pytorch/issues/21683
| https://stackoverflow.com/questions/68962511/ |
Getting Error too many indices for tensor of dimension 3 | I am trying to read an image using GeneralizedRCNN; the input shape is given as a comment in the code. The problem is that I am getting an error while tracing the model with this input. The error is:
> The line trace = torch.jit.trace(model, input_batch) produces the error:
> "/usr/local/lib/python3.7/dist-packages/torch/tensor.py:467:
> RuntimeWarning: Iterating over a tensor might cause the trace to be
> incorrect. Passing a tensor of different shape won't change the number
> of iterations executed (and might lead to errors or silently give
> incorrect results). 'incorrect results).', category=RuntimeWarning)
> --------------------------------------------------------------------------- IndexError Traceback (most recent call
> last) <ipython-input-25-52ff7ef794de> in <module>()
> 1 #First attempt at tracing
> ----> 2 trace = torch.jit.trace(model, input_batch)
>
> 7 frames
> /usr/local/lib/python3.7/dist-packages/detectron2/modeling/meta_arch/rcnn.py
> in <listcomp>(.0)
> 182 Normalize, pad and batch the input images.
> 183 """
> --> 184 images = [x["image"].to(self.device) for x in batched_inputs]
> 185 images = [(x - self.pixel_mean) / self.pixel_std for x in images]
> 186 images = ImageList.from_tensors(images, self.backbone.size_divisibility)
>
> IndexError: too many indices for tensor of dimension 3
model = build_model(cfg)
model.eval()
# print(model)
input_image = Image.open("model/xxx.jpg")
display(input_image)
to_tensor = transforms.ToTensor()
input_tensor = to_tensor(input_image)
# input_tensor.size = torch.Size([3, 519, 1038])
input_batch = input_tensor.unsqueeze(0)
# input_batch.size = torch.Size([1, 3, 519, 1038])
trace = torch.jit.trace(model, input_batch)
| This error occurred because input_batch.size = torch.Size([1, 3, 519, 1038]) has 4 dimensions, while trace = torch.jit.trace(model, input_batch) expected a 3-dimensional input.
You don't need input_batch = input_tensor.unsqueeze(0); delete or comment out this line.
| https://stackoverflow.com/questions/68967277/ |
Poorer performance when change optimizer from Adam to Nesterov | I am running an image segmentation code on Pytorch, based on the architecture of Linknet.
The optimizer is initially set as:
self.optimizer = torch.optim.Adam(params=self.net.parameters(), lr=lr)
Then I change it to Nesterov to improve the performance, like:
self.optimizer = torch.optim.SGD(params=self.net.parameters(), lr=lr, momentum=0.9, nesterov=True)
However, the performance is poorer using Nesterov. With Adam the loss function converges to 0.19, but with Nesterov it only converges to 0.34.
By the way, the learning rate is divided by 5 if there is no decrease in loss for 3 consecutive epochs, and the lr can be adjusted 3 times. After that, the training process finishes.
I am wondering why this happens and what I should do to optimize. Thanks a lot for the replies :)
| Seems like your question relies on the assumption that SGD with Nesterov would definitely perform better than Adam. However, there is no learning algorithm that is better than another no matter what. You always have to check it given your model (layers, activation functions, loss, etc.) and dataset.
Are you increasing the number of epochs for SGD? Usually, SGD takes much longer to converge than Adam. Note that recent studies show that despite training faster, Adam generalizes worse to the validation and test datasets (https://arxiv.org/abs/1712.07628). An alternative to that is to start the optimization with Adam, and then after some epochs, change the optimizer to SGD.
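A minimal sketch of that Adam-then-SGD schedule (the switch epoch, the hyperparameters and the train_one_epoch helper are illustrative, not from your code):
import torch

optimizer = torch.optim.Adam(self.net.parameters(), lr=lr)
switch_epoch = 10  # illustrative
for epoch in range(num_epochs):
    if epoch == switch_epoch:
        # hand the same parameters over to SGD with Nesterov momentum
        optimizer = torch.optim.SGD(self.net.parameters(), lr=lr, momentum=0.9, nesterov=True)
    train_one_epoch(self.net, optimizer)  # hypothetical per-epoch training step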
| https://stackoverflow.com/questions/68978614/ |
Loading Data without a separate TRAIN/TEST Directory : Pytorch (ImageFolder) | My data is not distributed across train and test directories but only across classes. I mean:
image-folders/
├── class_0/
|   ├── 001.jpg
|   └── 002.jpg
├── class_1/
|   ├── 001.jpg
|   └── 002.jpg
└── class_2/
    ├── 001.jpg
    └── 002.jpg
Is this the right way to approach the problem? (What this does is take the data folder and then divide it into train, valid and test sets. However, I am worried whether this is the same thing as a valid/dev set, even though the "test set" will not go through the training and validation loop.)
data = datasets.ImageFolder('PATH', transform)
# creating a train / valid split
# valid set will be further divided into valid and test sets
indices = list(range(len(data)))
np.random.shuffle(indices)
split = int(np.floor(valid_size * len(data)))
train_idx, valid_idx = indices[split:], indices[:split]
# Creating a valid and test set (take the test split from valid_idx first, so the two sets don't overlap)
split_vt = int(np.floor(0.2 * len(valid_idx)))
test_idx = valid_idx[0:split_vt]
valid_idx = valid_idx[split_vt:]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
test_sampler = SubsetRandomSampler(test_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(data, batch_size=batch_size, sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(data, batch_size=batch_size, sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(data, batch_size=batch_size, sampler=test_sampler, num_workers=num_workers)
Thanks in advance.
| This seems ok to me. Alternatively you can create three separate datasets from data, using torch.utils.data.random_split. This has the benefit of not having to worry about implementing the samplers for your dataloaders:
from torch.utils.data import DataLoader, random_split
dl_args = dict(batch_size=batch_size, num_workers=num_workers)
With a train/validation split:
>>> data = datasets.ImageFolder('PATH', transform)
>>> n_val = int(np.floor(valid_size * len(data)))
>>> n_train = len(data) - n_val
Datasets and dataloaders initialization:
>>> train_ds, val_ds = random_split(data, [n_train, n_val])
>>> train_dl = DataLoader(train_ds, **dl_args)
>>> valid_dl = DataLoader(val_ds , **dl_args)
With a train/validation/test split:
>>> data = datasets.ImageFolder('PATH', transform)
>>> n_val = int(np.floor(valid_size * len(data)))
>>> n_test = int(np.floor(test_size * len(data)))
>>> n_train = len(data) - n_val - n_test
Datasets and dataloaders initialization:
>>> train_ds, val_ds, test_ds = random_split(data, [n_train, n_val, n_test])
>>> train_dl = DataLoader(train_ds, **dl_args)
>>> valid_dl = DataLoader(val_ds, **dl_args)
>>> test_dl = DataLoader(test_ds, **dl_args)
| https://stackoverflow.com/questions/68982798/ |
data loader is not defined but I have imported it | This is a baseline model from GitHub; I am trying to reproduce its results.
dataloader.py and models.py have been put in the same directory as this script.
from __future__ import print_function
import sys
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torch.backends.cudnn as cudnn
import torchvision
import torchvision.transforms as transforms
import os
import sys
import time
import argparse
import datetime
from torch.autograd import Variable
if __name__ == '__main__':
    import dataloader as dataloader
    import models as models
parser = argparse.ArgumentParser(description='PyTorch Clothing-1M Training')
parser.add_argument('--lr', default=0.0008, type=float, help='learning_rate')
parser.add_argument('--start_epoch', default=2, type=int)
parser.add_argument('--num_epochs', default=3, type=int)
parser.add_argument('--batch_size', default=32, type=int)
parser.add_argument('--optim_type', default='SGD')
parser.add_argument('--seed', default=7)
parser.add_argument('--gpuid', default=1, type=int)
parser.add_argument('--id', default='cross_entropy')
args = parser.parse_args()
# Training
def train(epoch):
    net.train()
    train_loss = 0
    correct = 0
    total = 0
    learning_rate = args.lr
    if epoch > args.start_epoch:
        learning_rate = learning_rate / 10
    for param_group in optimizer.param_groups:
        param_group['lr'] = learning_rate
    print('\n=> %s Training Epoch #%d, LR=%.4f' % (args.id, epoch, learning_rate))
    for batch_idx, (inputs, targets) in enumerate(train_loader):
        optimizer.zero_grad()
        inputs, targets = Variable(inputs), Variable(targets)
        outputs = net(inputs)               # Forward Propagation
        loss = criterion(outputs, targets)  # Loss
        loss.backward()                     # Backward Propagation
        optimizer.step()                    # Optimizer update
        train_loss += loss.data[0]
        _, predicted = torch.max(outputs.data, 1)
        total += targets.size(0)
        correct += predicted.eq(targets.data).cpu().sum()
loader = dataloader.clothing_dataloader(batch_size=args.batch_size,num_workers=5,shuffle=True)
train_loader,val_loader = loader.run()
best_acc = 0
# Model
net = models.resnet50(pretrained=True)
net.fc = nn.Linear(2048,14)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=args.lr, momentum=0.9, weight_decay=1e-3)
for epoch in range(1, 1+args.num_epochs):
    train(epoch)  # raises the error !! (this is found by debugging)
This is the traceback
Traceback (most recent call last):
.... File "d:\MLNT\MLNT\baseline.py", line 143, in <module>
loader = dataloader.clothing_dataloader(batch_size=args.batch_size,num_workers=5,shuffle=True)
NameError: name 'dataloader' is not defined
During debugging, "dataloader" and "models" exist as variables:
[1]: https://i.stack.imgur.com/QAXvi.png
this is debugging traceback:
Exception has occurred: NameError (note: full exception trace is shown but execution is paused at: <module>)
name 'dataloader' is not defined
File "D:\MLNT\MLNT\baseline.py", line 143, in <module>
loader = dataloader.clothing_dataloader(batch_size=args.batch_size,num_workers=5,shuffle=True)
File "<string>", line 1, in <module> (Current frame)
| If you are importing this file from another one, the condition __name__ == '__main__' won't be True, and as such neither dataloader nor models will be imported into your file.
if __name__ == '__main__':
    import dataloader as dataloader
    import models as models
Instead, you could import both straight away as:
import dataloader as dataloader
import models as models
| https://stackoverflow.com/questions/68984251/ |
Why does order of function calls impact runtime | I'm using pyTorch to run calculations on my GPU (RTX 3000, CUDA 11.1). One step involves calculating the distance between one point and an array of points. For kicks I tested 2 functions to determine which is faster as follows:
import datetime as dt
import functools
import timeit
import torch
import numpy as np
device = torch.device("cuda:0")
# define functions for calculating distance
def dist_geom(a, b):
    dist = (a - b)**2
    dist = dist.sum(axis=1)**0.5
    return dist

def dist_linalg(a, b):
    dist = torch.linalg.norm(a - b, axis=1)
    return dist
# create dummy data
a = np.random.randint(0, 100000, (100000, 10, 10)).astype(np.float64)
b = np.random.randint(0, 100000, (1, 10)).astype(np.float64)
# send data to GPU
a = torch.from_numpy(a).to(device)
b = torch.from_numpy(b).to(device)
# test runtime of each
iterations = 1000
t = timeit.Timer(functools.partial(dist_linalg, a, b))
linalg_delta = t.timeit(number=iterations) / iterations
print("Linear algebra time: ", linalg_delta, " seconds per iteration")
t = timeit.Timer(functools.partial(dist_geom, a, b))
geom_delta = t.timeit(number=iterations) / iterations
print("Geometry time: ", geom_delta, " seconds per iteration")
print("linear algebra:geometry ratio: ", linalg_delta / geom_delta)
This gives the following output:
Linear algebra time: 0.000743145 seconds per iteration
Geometry time: 0.001446731 seconds per iteration
linear algebra:geometry ratio: 0.5136718574496572
So the linear algebra function is ~2x faster. But if I call the geometry function first:
t = timeit.Timer(functools.partial(dist_geom, a, b))
geom_delta = t.timeit(number=iterations) / iterations
print("Geometry time: ", geom_delta, " seconds per iteration")
t = timeit.Timer(functools.partial(dist_linalg, a, b))
linalg_delta = t.timeit(number=iterations) / iterations
print("Linear algebra time: ", linalg_delta, " seconds per iteration")
print("linear algebra:geometry ratio: ", linalg_delta / geom_delta)
I get this output:
Geometry time: 0.001213497 seconds per iteration
Linear algebra time: 0.001136769 seconds per iteration
linear algebra:geometry ratio: 0.9367711663069623
The dist_geom time is nearly identical to the initial run, but the dist_linalg time is now 1.46x longer!
I've tested this multiple ways and the result is always the same: the call order seems to matter...a lot. I think I'm missing a fundamental point here, so any help in understanding what is going on will be appreciated (and I suspect it will be so simple I'll feel foolish).
I created two sets of tensors. The following yields the same runtime regardless of order.
# create 2 tensors for geometry test
a1 = np.random.randint(0, 100000, (100000, 10, 10)).astype(np.float64)
b1 = np.random.randint(0, 100000, (1, 10)).astype(np.float64)
a1 = torch.from_numpy(a1).to(device)
b1 = torch.from_numpy(b1).to(device)
t = timeit.Timer(functools.partial(dist_geom, a1, b1))
geom_delta = t.timeit(number=iterations) / iterations
print("Geometry time: ", geom_delta, " seconds per iteration")
# create 2 different tensors for the linalg function
a2 = np.random.randint(0, 100000, (100000, 10, 10)).astype(np.float64)
b2 = np.random.randint(0, 100000, (1, 10)).astype(np.float64)
a2 = torch.from_numpy(a2).to(device)
b2 = torch.from_numpy(b2).to(device)
t = timeit.Timer(functools.partial(dist_linalg, a2, b2))
linalg_delta = t.timeit(number=iterations) / iterations
print("Linear algebra time: ", linalg_delta, " seconds per iteration")
print("linear algebra:geometry ratio: ", linalg_delta / geom_delta)
Geometry time: 0.0012010019999999998 seconds per iteration
Linear algebra time: 0.0007349769999999999 seconds per iteration
linear algebra:geometry ratio: 0.6119698385181707
That said, if I define both a1/b1 and a2/b2 before the function calls, I see the difference in times again. Initially I thought this was caused by memory load times, but that does not really fit, right?
| You can just add
torch.cuda.empty_cache()
All code:
import datetime as dt
import functools
import timeit
import torch
import numpy as np
device = torch.device("cuda:0")
# define functions for calculating distance
def dist_geom(a, b):
    dist = (a - b)**2
    dist = dist.sum(axis=1)**0.5
    return dist

def dist_linalg(a, b):
    dist = torch.linalg.norm(a - b, axis=1)
    return dist
# create dummy data
a = np.random.randint(0, 100000, (100000, 10, 10)).astype(np.float64)
b = np.random.randint(0, 100000, (1, 10)).astype(np.float64)
# send data to GPU
a = torch.from_numpy(a).to(device)
b = torch.from_numpy(b).to(device)
# test runtime of each
iterations = 1000
t = timeit.Timer(functools.partial(dist_linalg, a, b))
linalg_delta = t.timeit(number=iterations) / iterations
print("Linear algebra time: ", linalg_delta, " seconds per iteration")
torch.cuda.empty_cache()
t = timeit.Timer(functools.partial(dist_geom, a, b))
geom_delta = t.timeit(number=iterations) / iterations
print("Geometry time: ", geom_delta, " seconds per iteration")
print("linear algebra:geometry ratio: ", linalg_delta / geom_delta)
| https://stackoverflow.com/questions/68985444/ |
Pytorch: Custom thresholding activation function - gradient | I created an activation function class Threshold that should operate on one-hot-encoded image tensors.
The function performs min-max feature scaling on each channel followed by thresholding.
class Threshold(nn.Module):
    def __init__(self, threshold=.5):
        super().__init__()
        if threshold < 0.0 or threshold > 1.0:
            raise ValueError("Threshold value must be in [0,1]")
        else:
            self.threshold = threshold

    def min_max_fscale(self, input):
        r"""
        applies min max feature scaling to input. Each channel is treated individually.
        input is assumed to be N x C x H x W (one-hot-encoded prediction)
        """
        for i in range(input.shape[0]):  # N
            for j in range(input.shape[1]):  # C
                min = torch.min(input[i][j])
                max = torch.max(input[i][j])
                input[i][j] = (input[i][j] - min) / (max - min)
        return input

    def forward(self, input):
        assert (len(input.shape) == 4), f"input has wrong number of dims. Must have dim = 4 but has dim {input.shape}"
        input = self.min_max_fscale(input)
        return (input >= self.threshold) * 1.0
When I use the function I get the following error, since the gradients are not calculated automatically, I assume.
Variable._execution_engine.run_backward(...)
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
I already had a look at How to properly update the weights in PyTorch? but could not get a clue how to apply it to my case.
How is it possible to calculate the gradients for this function?
Thanks for your help.
| The issue is you are manipulating and overwriting elements; this type of operation can't be tracked by autograd. Instead, you should stick with built-in functions. Your example is not that tricky to tackle: you are looking to retrieve the minimum and maximum values along input.shape[0] x input.shape[1]. Then you will scale your whole tensor in one go, i.e. in vectorized form. No for loops involved!
One way to compute min/max along multiple axes is to flatten those:
>>> x_f = x.flatten(2)
Then, find the min-max on the flattened axis while retaining all shapes:
>>> x_min = x_f.min(axis=-1, keepdim=True).values
>>> x_max = x_f.max(axis=-1, keepdim=True).values
The resulting min_max_fscale function would look something like:
class Threshold(nn.Module):
    def min_max_fscale(self, x):
        r"""
        Applies min max feature scaling to input. Each channel is treated individually.
        Input is assumed to be N x C x H x W (one-hot-encoded prediction)
        """
        x_f = x.flatten(2)
        x_min, x_max = x_f.min(-1, True).values, x_f.max(-1, True).values
        x_f = (x_f - x_min) / (x_max - x_min)
        return x_f.reshape_as(x)
Important note:
You would notice that you can now backpropagate on min_max_fscale... but not on forward. This is because you are applying a boolean condition which is not a differentiable operation.
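If you do need gradients to flow through the threshold itself, one common workaround (my own addition, not part of the original answer) is a straight-through estimator: use the hard threshold in the forward pass but let the backward pass see the identity:
def forward(self, input):
    assert len(input.shape) == 4
    input = self.min_max_fscale(input)
    hard = (input >= self.threshold) * 1.0
    # forward value is `hard`; gradients flow as if the output were `input`
    return input + (hard - input).detach()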
| https://stackoverflow.com/questions/68985501/ |
PyTorch vs Tensorflow gives different results | I am implementing the "perceptual loss" function, but PyTorch and TensorFlow give different results. I used the same images. Please let me know why.
TensorFlow
class FeatureExtractor(tf.keras.Model):
    def __init__(self, n_layers):
        super(FeatureExtractor, self).__init__()
        extractor = tf.keras.applications.VGG16(weights="imagenet",
                                                include_top=False, input_shape=(256, 256, 3))
        extractor.trainable = True
        #features = [extractor.layers[i].output for i in n_layers]
        features = [extractor.get_layer(i).output for i in n_layers]
        self.extractor = tf.keras.models.Model(extractor.inputs, features)

    def call(self, x):
        return self.extractor(x)
def loss_function(generated_image, target_image, feature_extractor):
    MSE = tf.keras.losses.MeanSquaredError()
    mse_loss = MSE(generated_image, target_image)
    real_features = feature_extractor(target_image)
    generated_features = feature_extractor(generated_image)
    perceptual_loss = 0
    for i in range(len(real_features)):
        loss = MSE(real_features[i], generated_features[i])
        print(loss)
        perceptual_loss += loss
    return mse_loss, perceptual_loss
Run:
feature_extractor = FeatureExtractor(n_layers=["block1_conv1", "block1_conv2",
                                               "block3_conv2", "block4_conv2"])
mse_loss, perceptual_loss = loss_function(image1, image2, feature_extractor)
print(f"{mse_loss} {perceptual_loss} {mse_loss+perceptual_loss}")
It gives:
output:
tf.Tensor(0.0014001362, shape=(), dtype=float32)
tf.Tensor(0.030578917, shape=(), dtype=float32)
tf.Tensor(2.6163354, shape=(), dtype=float32)
tf.Tensor(0.842701, shape=(), dtype=float32)
0.002584027126431465 3.4910154342651367 3.4935994148254395
Pytorch
class FeatureExtractor(torch.nn.Module):
    def __init__(self, n_layers):
        super(FeatureExtractor, self).__init__()
        extractor = models.vgg16(pretrained=True).features
        index = 0
        self.layers = nn.ModuleList([])
        for i in range(len(n_layers)):
            self.layers.append(torch.nn.Sequential())
            for j in range(index, n_layers[i] + 1):
                self.layers[i].add_module(str(j), extractor[j])
            index = n_layers[i] + 1
        for param in self.parameters():
            param.requires_grad = False

    def forward(self, x):
        result = []
        for i in range(len(self.layers)):
            x = self.layers[i](x)
            result.append(x)
        return result
def loss_function(generated_image, target_image, feature_extractor):
    MSE = nn.MSELoss(reduction='mean')
    mse_loss = MSE(generated_image, target_image)
    real_features = feature_extractor(target_image)
    generated_features = feature_extractor(generated_image)
    perceptual_loss = 0
    for i in range(len(real_features)):
        loss = MSE(real_features[i], generated_features[i])
        perceptual_loss += loss
        print(loss)
    return mse_loss, perceptual_loss
Run:
feature_extractor = FeatureExtractor(n_layers=[1, 3, 13, 20]).to(device)
mse_loss, perceptual_loss = loss_function(image1, image2, feature_extractor)
print(f"{mse_loss} {perceptual_loss} {mse_loss+perceptual_loss}")
It gives:
output:
tensor(0.0003)
tensor(0.0029)
tensor(0.2467)
tensor(0.2311)
0.002584027359262109 0.4810013473033905 0.483585387468338
| Although they are the same models, the parameters of the final model may differ because of different initialization parameters. Moreover, frameworks like Keras and PyTorch preprocess input images differently before training, so the tensor values differ after preprocessing even for the same images. The following code is an example that could help understand this.
from abc import ABC
import torch
import numpy as np
import tensorflow as tf
from torch import nn
from PIL import Image
from torch.autograd import Variable
import torchvision.models as models
import torchvision.transforms as transforms
from keras.preprocessing.image import load_img
from keras.applications.vgg16 import preprocess_input
from keras.preprocessing.image import img_to_array
# 'https://upload.wikimedia.org/wikipedia/commons/thumb/3/3a/Cat03.jpg/1200px-Cat03.jpg'
IMG_URL1 = ' the local path of 1200px-Cat03.jpeg'
# 'https://upload.wikimedia.org/wikipedia/commons/b/bb/Kittyply_edit1.jpg'
IMG_URL2 = 'the local path of Kittyply_edit1.jpg'
# preprocess in keras
image1_tf = load_img(IMG_URL1, target_size=(224, 224))
image1_tf = img_to_array(image1_tf)
image1_tf = image1_tf.reshape((1, image1_tf.shape[0], image1_tf.shape[1], image1_tf.shape[2]))
image1_tf = preprocess_input(image1_tf)
image2_tf = load_img(IMG_URL2, target_size=(224, 224))
image2_tf = img_to_array(image2_tf)
image2_tf = image2_tf.reshape((1, image2_tf.shape[0], image2_tf.shape[1], image2_tf.shape[2]))
image2_tf = preprocess_input(image2_tf)
# preprocess in pytorch
image1_torch = Image.open(IMG_URL1)
image2_torch = Image.open(IMG_URL2)
image1_torch = image1_torch.resize((224, 224))
image2_torch = image2_torch.resize((224, 224))
min_img_size = 224
transform_pipeline = transforms.Compose([transforms.Resize(min_img_size),
                                         transforms.ToTensor(),
                                         transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                                              std=[0.229, 0.224, 0.225])])
image1_torch = transform_pipeline(image1_torch)
image2_torch = transform_pipeline(image2_torch)
image1_torch = image1_torch.unsqueeze(0)
image2_torch = image2_torch.unsqueeze(0)
image1_torch = Variable(image1_torch)
image2_torch = Variable(image2_torch)
class FeatureExtractor(tf.keras.Model, ABC):
    def __init__(self, n_layers):
        super(FeatureExtractor, self).__init__()
        extractor = tf.keras.applications.VGG16(weights="imagenet", input_shape=(224, 224, 3))
        extractor.trainable = True
        features = [extractor.get_layer(i).output for i in n_layers]
        self.extractor = tf.keras.models.Model(extractor.inputs, features)

    def call(self, x):
        return self.extractor(x)

def loss_function(generated_image, target_image, feature_extractor):
    MSE = tf.keras.losses.MeanSquaredError()
    mse_loss = MSE(generated_image, target_image)
    real_features = feature_extractor(target_image)
    generated_features = feature_extractor(generated_image)
    print("tf prediction:", np.argmax(generated_features[-1].numpy()[0]))
    print("tf prediction:", np.argmax(real_features[-1].numpy()[0]))
    perceptual_loss = 0
    for i in range(len(real_features[:-1])):
        loss = MSE(real_features[i], generated_features[i])
        print(loss)
        perceptual_loss += loss
    return mse_loss, perceptual_loss
feature_extractor = FeatureExtractor(n_layers=["block1_conv1", "block1_conv2", "block3_conv2",
                                               "block4_conv2", "predictions"])
print("tensorflow: ")
mse_loss, perceptual_loss = loss_function(image1_tf, image2_tf, feature_extractor)
print(f"{mse_loss} {perceptual_loss} {mse_loss + perceptual_loss}")
class FeatureExtractor1(torch.nn.Module):
    def __init__(self, n_layers):
        super(FeatureExtractor1, self).__init__()
        self.vgg = models.vgg16(pretrained=True)
        extractor = self.vgg.features
        index = 0
        self.layers = nn.ModuleList([])
        for i in range(len(n_layers)):
            self.layers.append(torch.nn.Sequential())
            for j in range(index, n_layers[i] + 1):
                self.layers[i].add_module(str(j), extractor[j])
            index = n_layers[i] + 1
        for param in self.parameters():
            param.requires_grad = False

    def forward(self, x):
        result = []
        predict = self.vgg(x)
        for i in range(len(self.layers)):
            x = self.layers[i](x)
            result.append(x)
        result.append(predict)
        return result
def loss_function1(generated_image, target_image, feature_extractor):
    MSE = nn.MSELoss(reduction='mean')
    mse_loss = MSE(generated_image, target_image)
    real_features = feature_extractor(target_image)
    generated_features = feature_extractor(generated_image)
    print("torch prediction:", np.argmax(generated_features[-1].numpy()[0]))
    print("torch prediction:", np.argmax(real_features[-1].numpy()[0]))
    perceptual_loss = 0
    for i in range(len(real_features[:-1])):
        loss = MSE(real_features[i], generated_features[i])
        perceptual_loss += loss
        print(loss)
    return mse_loss, perceptual_loss
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
feature_extractor = FeatureExtractor1(n_layers=[1, 3, 13, 20]).to(device)
print("pytorch: ")
mse_loss, perceptual_loss = loss_function1(image1_torch, image2_torch, feature_extractor)
print(f"{mse_loss} {perceptual_loss} {mse_loss + perceptual_loss}")
| In addition, the training goal of the model is classification accuracy, so differing results between the feature maps in the middle of the network would make sense.
| https://stackoverflow.com/questions/69007027/ |
TypeError: unsupported format string passed to Tensor.__format__ | When trying to convert a code written for old PyTorch to 1.9 I get this error:
(fashcomp) [jalal@goku fashion-compatibility]$ python main.py --name test_baseline --learned --l2_embed --datadir ../../../data/fashion/
/scratch3/venv/fashcomp/lib/python3.8/site-packages/torchvision/transforms/transforms.py:310: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
warnings.warn("The use of the transforms.Scale transform is deprecated, " +
+ Number of params: 3191808
<class 'torch.utils.data.dataloader.DataLoader'>
/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
Traceback (most recent call last):
File "main.py", line 329, in <module>
main()
File "main.py", line 167, in main
train(train_loader, tnet, criterion, optimizer, epoch)
File "main.py", line 240, in train
print('Train Epoch: {} [{}/{}]\t'
File "/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/_tensor.py", line 561, in __format__
return object.__format__(self, format_spec)
TypeError: unsupported format string passed to Tensor.__format__
Here's the problematic part of the code:
if batch_idx % args.log_interval == 0:
    print('Train Epoch: {} [{}/{}]\t'
          'Loss: {:.4f} ({:.4f}) \t'
          'Acc: {:.2f}% ({:.2f}%) \t'
          'Emb_Norm: {:.2f} ({:.2f})'.format(
        epoch, batch_idx * num_items, len(train_loader.dataset),
        losses.val, losses.avg,
        100. * accs.val, 100. * accs.avg, emb_norms.val, emb_norms.avg))
I see from this bug report that, as of two years ago, there was no solution provided to this problem. Do you have any suggestions on how to fix this, or an alternative of any sort?
Code is from here.
| This error is reproducible if you try to format a torch.Tensor in a specific way:
>>> print('{:.2f}'.format(torch.rand(1)))
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-3-ece663be5b5c> in <module>()
----> 1 print('{:.2f}'.format(torch.tensor([1])))
/usr/local/lib/python3.7/dist-packages/torch/_tensor.py in __format__(self, format_spec)
559 if self.dim() == 0:
560 return self.item().__format__(format_spec)
--> 561 return object.__format__(self, format_spec)
562
563 def __ipow__(self, other): # type: ignore[misc]
TypeError: unsupported format string passed to Tensor.__format__
Doing '{}'.format(torch.tensor(1)) - i.e. without any formatting rule - will work.
This is because torch.Tensor doesn't implement those specific format operations.
An easy fix would be to convert the torch.Tensor to the appropriate - corresponding - type using item:
>>> print('{:.2f}'.format(torch.rand(1).item()))
0.02
You should apply this modification to all torch.Tensors involved in your print string expression: losses.val, losses.avg, accs.val, accs.avg, emb_norms.val, and emb_norms.avg.
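Applied to the snippet above, the fixed call would look like this (a sketch; call .item() only on the values that are actually tensors):
print('Train Epoch: {} [{}/{}]\t'
      'Loss: {:.4f} ({:.4f}) \t'
      'Acc: {:.2f}% ({:.2f}%) \t'
      'Emb_Norm: {:.2f} ({:.2f})'.format(
    epoch, batch_idx * num_items, len(train_loader.dataset),
    losses.val.item(), losses.avg.item(),
    100. * accs.val.item(), 100. * accs.avg.item(),
    emb_norms.val.item(), emb_norms.avg.item()))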
| https://stackoverflow.com/questions/69021335/ |
Python execution hangs when attempting to add a pytorch module to a sequential module in loop | I am designing a neural network that has a sequential module composed of a variable number of linear layers, depending on the initial size of the feature space. The problem is that on the first module I append to my sequential module, the code stops executing and my runtime crashes. Here is my code:
self.sequential = nn.Sequential()
input_dim = feature_size
output_dim = int(feature_size // scaling_factor)
while (output_dim > 1000):
    print("%s_%s" % (input_dim, output_dim))
    # Hangs on next line
    self.sequential.add_module("%s_%s" % (input_dim, output_dim), nn.Linear(input_dim, output_dim))
    input_dim = output_dim
    output_dim = int(input_dim // get_scaling_factor(input_dim))
What is the appropriate way to create a sequential module with a variable number of layers?
EDIT:
Turns out the code above is OK. The reason I was getting the hang was that I was attempting to create a layer with way too many inputs and outputs for the interpreter to handle. (Both input_dim and output_dim were on the order of 100,000s.)
| As mentioned in this PyTorch Forums thread, you can create a list of nn.Modules and feed it to an nn.Sequential constructor.
For example:
import torch.nn as nn
modules = []
modules.append(nn.Linear(10, 10))
modules.append(nn.Linear(10, 10))
sequential = nn.Sequential(*modules)
Also, as mentioned in the PyTorch documentation, you may create a sequential with names for each layer using an ordered dictionary, created from a list of tuples, each containing the name of a layer and the layer itself, i.e.
model = nn.Sequential(OrderedDict([
    ('conv1', nn.Conv2d(1, 20, 5)),
    ('relu1', nn.ReLU()),
    ('conv2', nn.Conv2d(20, 64, 5)),
    ('relu2', nn.ReLU())
]))
More specifically, adapting your example to use this method:
from collections import OrderedDict
...
sequential_list = []
input_dim = feature_size
output_dim = int(feature_size // scaling_factor)
while (output_dim > 1000):
    print("%s_%s" % (input_dim, output_dim))
    name = "%s_%s" % (input_dim, output_dim)
    layer = nn.Linear(input_dim, output_dim)
    sequential_list.append((name, layer))
    input_dim = output_dim
    output_dim = int(input_dim // get_scaling_factor(input_dim))

self.sequential = nn.Sequential(OrderedDict(sequential_list))
| https://stackoverflow.com/questions/69024252/ |
Multiply 2D tensor with 3D tensor in pytorch | Suppose I have a matrix such as P = [[0,1],[1,0]] and a vector v = [a,b]. If I multiply them I have:
Pv = [b,a]
The matrix P is simply a permutation matrix, which changes the order of each element.
Now suppose that I have the same P, but I have the matrices M1 = [[1,2],[3,4]] and M2=[[5,6],[7,8]]. Now let me combine them as the 3D Tensor T= [[[1,2],[3,4]], [[5,6],[7,8]]] with dimensions (2,2,2) - (C,W,H). Suppose I multiply P by T such that:
PT = [[[5,6],[7,8]], [[1,2],[3,4]]]
Note that now M1 now equals [[5,6],[7,8]] and M2 equals [[1,2],[3,4]] as the values have been permuted across the C dimension in T (C,W,H).
How can I multiply PT (P=2D tensor,T=3D tensor) in pytorch using matmul? The following does not work:
torch.matmul(P, T)
| An alternative solution to @mlucy's answer, is to use torch.einsum. This has the benefit of defining the operation yourself, without worrying about torch.matmul's requirements:
>>> torch.einsum('ij,jkl->ikl', P, T)
tensor([[[5, 6],
[7, 8]],
[[1, 2],
[3, 4]]])
Or with torch.matmul:
>>> (P @ T.flatten(1)).reshape_as(T)
tensor([[[5, 6],
[7, 8]],
[[1, 2],
[3, 4]]])
| https://stackoverflow.com/questions/69033199/ |
Creating 1D vectors over 3D tensors in pytorch | I have the following tensor with dimensions (2, 3, 2, 2) where the dimensions represent (batch_size, channels, height, width):
tensor([[[[ 1., 2.],
[ 3., 4.]],
[[ 5., 6.],
[ 7., 8.]],
[[ 9., 10.],
[11., 12.]]],
[[[13., 14.],
[15., 16.]],
[[17., 18.],
[19., 20.]],
[[21., 22.],
[23., 24.]]]])
I would like to convert this into the following tensor with dimensions (8, 3):
tensor([[ 1, 5, 9],
[ 2, 6, 10],
[ 3, 7, 11],
[ 4, 8, 12],
[13, 17, 21],
[14, 18, 22],
[15, 19, 23],
[16, 20, 24]])
Essentially I would like to create 1D vector over the elements of the matrices. I have tried many operations such as flatten and reshape, but I cannot figure out how to achieve this reshaping.
| You can do it this way:
import torch
x = torch.Tensor(
[
[
[[1,2],[3,4]],
[[5,6],[7,8]],
[[9,10],[11,12]]],
[
[[13,14],[15,16]],
[[17,18],[19,20]],
[[21,22],[23,24]]]
]
)
result = x.swapaxes(0, 1).reshape(3, -1).T
print(result)
# > tensor([[ 1., 5., 9.],
# > [ 2., 6., 10.],
# > [ 3., 7., 11.],
# > [ 4., 8., 12.],
# > [13., 17., 21.],
# > [14., 18., 22.],
# > [15., 19., 23.],
# > [16., 20., 24.]])
| https://stackoverflow.com/questions/69035771/ |
Concatenate two tensors of different shape from two different input modalities | I have two tensors:
a = torch.randn((1, 30, 1220)) # represents text embedding vector (30 spans, each with embedding size of 1220)
b = torch.randn((1, 128, 256)) # represents image features obtained from a pretrained CNN (object detection)
How do I concatenate everything in b to each one of the 30 spans of a?
How to concatenate the whole b to the whole a?
This is what I'm trying to do:
The authors have only provided this text:
I'm extracting features (outlined in red) from a 3d point cloud (similar to CNN but for 3d) as shown below:
| Since these representations are from two different modalities (i.e., text and image) and they contain valuable features that are of great importance to the final goal, I would suggest to fuse them in a "learnable" manner instead of a mere concatenation or addition. Furthermore, such a learnable weighting (between features) would be optimal since in some cases one representation would be far more useful than the other whereas at other instances the vice versa applies.
Please note that a mere concatenation can also happen in this fusion module that you would implement. For the actual implementation, there are several types of fusion techniques. E.g. Simple Fusion, Cold Fusion etc. (cf. Fusion Models for Improved Visual Captioning, 2021)
For instance, one straightforward idea would be to use a simple linear layer to project one of the features to the same dimensionality as the other and then do a simple concatenation, with some optional non-linearity if needed.
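A minimal sketch of that idea, assuming mean pooling over the 128 image regions (the module name and the ReLU non-linearity are illustrative choices):
import torch
import torch.nn as nn

class SimpleFusion(nn.Module):
    def __init__(self, img_dim=256, txt_dim=1220):
        super().__init__()
        self.proj = nn.Linear(img_dim, txt_dim)  # learnable projection of image features

    def forward(self, a, b):
        # a: (1, 30, 1220) text spans, b: (1, 128, 256) image features
        b = torch.relu(self.proj(b))      # (1, 128, 1220)
        b = b.mean(dim=1, keepdim=True)   # (1, 1, 1220) pooled image vector
        b = b.expand(-1, a.size(1), -1)   # (1, 30, 1220) one copy per span
        return torch.cat([a, b], dim=-1)  # (1, 30, 2440)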
| https://stackoverflow.com/questions/69036172/ |
How to update models / vectors on runtime, on daily basis? | I have a simple web app which uses sklearn-transformed vectors (tfidf / count / normalizer) and other PyTorch (transformer) models. I usually dump these models via joblib. This app calls these models via FastAPI-based APIs. Until now everything is fine.
But the vectors and models listed above are updated on a daily basis, at runtime, by the part of the same application which uses them. So, whenever new files are ready we start using them.
Whenever we get a call from the API, we search for today's model and do:
joblib.load, then respond to the API call. In this process, whenever we get too many calls, we call joblib.load many times and eventually start getting a "Too many open files" OSError.
If we weren't updating these models daily, I could have loaded them once into global variables. But now I don't have a clear and good idea of how to design it in such a way that we can update the models on a daily basis and start using them whenever today's models are available.
Also, one more constraint: until the models for today are available, we use yesterday's models to serve requests.
| It sounds like what you want to do is load the model once, use the model in memory, and then every so often check whether a new model is available on disk and reload it if an updated version is available.
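A minimal sketch of that pattern (the per-day file layout and names here are assumptions; adapt them to how your daily files are written), including the fallback to yesterday's model:
import datetime
import os
import joblib

_cache = {"date": None, "model": None}

def get_model(model_dir="models"):
    # assumed layout: one file per day, e.g. models/2021-09-07.joblib
    today = datetime.date.today()
    path = os.path.join(model_dir, f"{today.isoformat()}.joblib")
    if _cache["date"] != today and os.path.exists(path):
        _cache["model"] = joblib.load(path)  # load today's model exactly once
        _cache["date"] = today
    elif _cache["model"] is None:  # first call, today's file not ready yet
        yesterday = today - datetime.timedelta(days=1)
        _cache["model"] = joblib.load(os.path.join(model_dir, f"{yesterday.isoformat()}.joblib"))
        _cache["date"] = yesterday
    return _cache["model"]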
| https://stackoverflow.com/questions/69042371/ |
Why does changing a tensor using the detach method not always break backpropagation in PyTorch? | After constructing a graph, when using the detach method to change some tensor value, it is expected that an error pops up when computing the back propagation. However, this is not always the case. In the following two blocks of code, the first one raises an error, while the second one does not. Why does this happen?
x = torch.tensor(3.0, requires_grad=True)
y = x + 1
z = y**2
c = y.detach()
c.zero_()
z.backward(retain_graph=True)
print(x.grad) # errors pop up
x = torch.tensor(3.0, requires_grad=True)
y1 = x+1
y2 = x**2
z = 3*y1 + 4*y2
c = y2.detach()
c.zero_()
z.backward(retain_graph=True)
print(x.grad) # no errors. The printed value is 27
| TLDR; In the former example z = y**2, so dz/dy = 2*y, i.e. it's a function of y and requires its values to be unchanged to properly compute the backpropagation, hence the error message when applying the in-place operation. In the latter z = 3*y1 + 4*y2, so dz/dy2 = 4, i.e. y2 values are not needed to compute the gradient, as such its values can be modified freely.
In the former example you have the following computation graph:
x ---> y = x + 1 ---> z = y**2
\
\ ---> c = y.detach().zero_()
Corresponding code:
x = torch.tensor(3.0, requires_grad=True)
y = x + 1
z = y**2
c = y.detach()
c.zero_()
z.backward() # errors pop up
When calling c = y.detach() you effectively detach c from the computation graph, while y remains attached. However, c shares the same data as y. This means when you call the in-place operation c.zero_, you end up affecting y. This is not allowed, because the y is part of a computation graph, and its values will be needed for a potential backpropagation from variable z.
The second scenario corresponds to this layout:
/--> y1 = x + 1 \
x ---> z = 3*y1 + 4*y2
\--> y2 = x**2 /
\
\ ---> c = y2.detach().zero_()
Corresponding code:
x = torch.tensor(3.0, requires_grad=True)
y1 = x + 1
y2 = x**2
z = 3*y1 + 4*y2
c = y2.detach()
c.zero_()
z.backward()
print(x.grad) # no errors. The printed value is 27
Here again, we have the same setup: you detach, then in-place modify c (and thereby y2) with zero_.
The only difference is the operation performed on y and y2 (in the 1st and 2nd example respectively).
In the former, you have z = y**2, so the derivative is 2*y, hence the value of y is needed to compute the gradient of that operation.
In the latter example though z(y2) = constant + 4*y2 so the derivative with respect to y2 is just a constant: 4, i.e. it doesn't require the value of y2 to compute its derivative. You can check this by, for instance, defining in 2nd example z with z = 3*y1 + 4*y2**2: it will raise an error.
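For reference, that check looks like this:
x = torch.tensor(3.0, requires_grad=True)
y1 = x + 1
y2 = x**2
z = 3*y1 + 4*y2**2  # now dz/dy2 = 8*y2, so y2's values are needed
c = y2.detach()
c.zero_()
z.backward()  # raises: one of the variables needed for gradient computation has been modified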
| https://stackoverflow.com/questions/69042447/ |
Input image size of Faster-RCNN model in Pytorch | I'm trying to implement the Faster R-CNN model with PyTorch.
In the structure, the first element of the model is the transform.
from torchvision.models.detection import fasterrcnn_resnet50_fpn
model = fasterrcnn_resnet50_fpn(pretrained=True)
print(model.transform)
GeneralizedRCNNTransform(
Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
Resize(min_size=(800,), max_size=1333, mode='bilinear')
)
When images pass through Resize(), they come out as (800, h) or (w, 1333) according to the ratio of width and height.
for i in range(2):
    _, image, target = testset.__getitem__(i)
    img = image.unsqueeze(0)
    output, _ = model.transform(img)
Before Transform : torch.Size([512, 640])
After Transform : [(800, 1000)]
Before Transform : torch.Size([315, 640])
After Transform : [(656, 1333)]
My question is how to get those resized outputs, and why is this method used? I can't find the information in the paper, and I can't understand the source code about the transform in fasterrcnn_resnet50_fpn.
Sorry for my English
| GeneralizedRCNN data transform:
https://github.com/pytorch/vision/blob/922db3086e654871c35cd80c2c01eabb65d78475/torchvision/models/detection/generalized_rcnn.py#L15
performs the data transformation on the inputs to feed into the model
min_size: minimum size of the image to be rescaled before feeding it to the backbone.
max_size: maximum size of the image to be rescaled before feeding it to the backbone
https://github.com/pytorch/vision/blob/main/torchvision/models/detection/faster_rcnn.py#L256
I couldn't find out either why it was generalized to min 800 and max 1333; I didn't find anything in the research paper either.
But as the 1st layer is a Conv layer and the input to the network is of a fixed size, I apply many other augmentations, such as mirroring and random cropping, inspired by SSD-based networks. Hence I would prefer to do all augmentation in a separate place, once, instead of twice.
I would assume the model works best during validation using images with shapes and other properties as close as possible to the training data.
Though you can experiment with custom min_size and max_size:
from torchvision.models.detection.transform import GeneralizedRCNNTransform

min_size = 900   # changed from default
max_size = 1433  # changed from default
image_mean = [0.485, 0.456, 0.406]
image_std = [0.229, 0.224, 0.225]
model = fasterrcnn_resnet50_fpn(pretrained=True, min_size=min_size, max_size=max_size,
                                image_mean=image_mean, image_std=image_std)

# batch of 4 images, 4 bboxes
images, boxes = torch.rand(4, 3, 600, 1200), torch.rand(4, 11, 4)
labels = torch.randint(1, 91, (4, 11))
images = list(image for image in images)
targets = []
for i in range(len(images)):
    d = {}
    d['boxes'] = boxes[i]
    d['labels'] = labels[i]
    targets.append(d)
output = model(images, targets)
Or you can completely write your own transforms:
https://pytorch.org/vision/stable/transforms.html
from torchvision.transforms import transforms as T

model = fasterrcnn_resnet50_fpn()
model.transform = T.Compose([...])  # check torchvision.transforms for more
Hope this helps.
| https://stackoverflow.com/questions/69053420/ |
How do I fix Pytorch-Forecasting model fit ValueError about sequence element | I am new to Pytorch-Forecasting. I followed exactly what is described in 'Demand forecasting with the Temporal Fusion Transformer' (https://pytorch-forecasting.readthedocs.io/en/latest/tutorials/stallion.html). Everything went fine until the last model-fitting step (trainer.fit(...)). I keep getting the error "ValueError: dictionary update sequence element #0 has length 1; 2 is required", and I cannot figure out where this is happening... I will appreciate any help, any clue, anything to fix my problem.
Thanks!!
| I faced the same issue. pytorch_lightning recently launched a new version, and pytorch-forecasting is built on top of it. I changed the version of torchmetrics to 0.5.0: pip install torchmetrics==0.5.0
| https://stackoverflow.com/questions/69056300/ |
Have Euclidean distance between two inputs populate a matrix within one loop | I'm taking two tensors of dimensions (batch size, D1, D2, D3) and flattening them to (batch size, D1*D2*D3). I'm then trying to take the Euclidean distance between every row of one (train) tensor and every row of the second (test) tensor. I'm having trouble understanding how to populate the distance combinations between tensors into each row element of the ith column.
# Flatten
train = x_train.view(num_train, x_train[1].view(1, -1).shape[1])
test = x_test.view(num_test, x_test[1].view(1, -1).shape[1])
# 1 Loop
for i in range(num_test):
    dists[:, i] = torch.sqrt(torch.sum(torch.square(train - test[i])))
Doing it in one loop, every ith column is being populated with the same scalar value. I'm trying to make it such that
[1,i] = Euclidean distance between 1st image of train and 1st image of test ...
[2,i] = Euclidean distance between 1st image of train and 2nd image of test ...
[3,i] = Euclidean distance between 1st image of train and 3rd image of test
...
[(last element),i] = Euclidean distance between 1st image of train and (last element) image of test ...
[1,i+1] = Euclidean distance between 2nd image of train and 1st image of test
| Okay, I was missing a very important argument to torch.sum() that I did not know about, and it solves this issue: adding a dim of 1, so that it looks like torch.sqrt(torch.sum(torch.square(train - test[i]), 1)), outputs what I want.
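As a side note (not part of the original answer), the loop can be avoided entirely with torch.cdist, which computes all pairwise Euclidean distances in one call:
dists = torch.cdist(train, test)  # shape (num_train, num_test)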
| https://stackoverflow.com/questions/69059279/ |
How to Load Fastai model and predict it on single image | I have trained a fastai model using a Kaggle notebook. It has saved the model, but how to load the model is the problem; I have tried different methods, like the one given below. Even when it does load the model, it doesn't have any predict function; the only thing I can see is model.eval().
The second problem is that when the model was trained on Google Colab, it didn't even accept a single image. I did try to convert the image the NumPy way and another way, but both didn't work out.
I am attaching the Kaggle link of the model training, the saved model, and the test images after this code.
#Code for Loading model
from fastai import *
from fastai.vision import *
import torch
loc = torch.load('/content/gdrive/MyDrive/Data Exports/35k data/stage-1.pth')
body = create_body(models.resnet18, True, None)
data_classes = 4
nf = callbacks.hooks.num_features_model(body) * 2
head = create_head(nf, data_classes, None, ps=0.5, bn_final=False)
model = nn.Sequential(body, head)
Kaggle Model
Test Images From Kaggle Dataset
Saved Model
| How to load pytorch models:
loc = torch.load('/content/gdrive/MyDrive/Data Exports/35k data/stage-1.pth')
model = ... # build your model
model.load_state_dict(loc)
model.eval()
Now you should be able to simply use the forward pass to generate your predictions:
input = ... # your input image
pred = model(input) # your class predictions
Don't forget to convert your inputs to torch tensors first, you might want to use a DataLoader for ease of use.
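A sketch of that preprocessing for a single image (the resize and normalization values are assumptions; match whatever was used during training):
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),  # assumed training size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
img = Image.open('test.jpg').convert('RGB')
batch = preprocess(img).unsqueeze(0)  # add batch dimension -> (1, 3, 224, 224)
with torch.no_grad():
    pred_class = model(batch).argmax(dim=1)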
| https://stackoverflow.com/questions/69066970/ |
How to get max values by the indices returned by torch.max? | The interface torch.max returns values and indices; how can I use the indices to get the corresponding elements from another tensor?
for example:
a = torch.rand(2,3,4)
b = torch.rand(2,3,4)
# indices shape is [2, 4]
indices = torch.max(a, 1)[1]
# how to get elements by indices ?
b_max = ????
| keepdim=True when calling torch.max() and torch.take_along_dim() should do the trick.
>>> import torch
>>> a=torch.rand(2,3,4)
>>> b=torch.rand(2,3,4)
>>> indices=torch.max(a,1,keepdim=True)[1]
>>> b_max = torch.take_along_dim(b,indices,dim=1)
2D example:
>>> a=torch.rand(2,3)
>>> a
tensor([[0.0163, 0.0711, 0.5564],
[0.4507, 0.8675, 0.5974]])
>>> b=torch.rand(2,3)
>>> b
tensor([[0.7542, 0.1793, 0.5399],
[0.2292, 0.5329, 0.2084]])
>>> indices=torch.max(a,1,keepdim=True)[1]
>>> torch.take_along_dim(b,indices,dim=1)
tensor([[0.5399],
[0.5329]])
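Equivalently (a side note on my part), torch.gather does the same lookup when the indices keep the reduced dimension:
>>> torch.gather(b, 1, indices)
tensor([[0.5399],
        [0.5329]])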
| https://stackoverflow.com/questions/69068583/ |
LibTorch and OpenCV Libs not working in same cmakelist file | cmake_minimum_required(VERSION 3.1 FATAL_ERROR)
project(Detect)
#set(Torch "/home/somnath/libtorch/share/cmake/Torch")
find_package(Torch REQUIRED)
find_package(OpenCV REQUIRED)
message(STATUS "CVINCLUDE: ${OpenCV_INCLUDE_DIRS}")
include_directories(${OpenCV_INCLUDE_DIRS})
add_executable(Detect main.cpp)
target_link_libraries(Detect ${TORCH_LIBRARIES}; ${OpenCV_LIBS})
${TORCH_LIBRARIES} and ${OpenCV_LIBS} do not both work at the same time when building the code.
| Try changing the libtorch distribution from the Pre-cxx11 ABI to the cxx11 ABI:
https://pytorch.org/get-started/locally/
All thanks to Jacob HM from
https://stackoverflow.com/a/61459156/13045595
| https://stackoverflow.com/questions/69088354/ |
Pytorch Geometric sparse adjacency matrix to edge index tensor | My data object has the data.adj_t parameter, giving me the sparse adjacency matrix. How can I get the edge_index tensor of size [2, num_edges] from this?
| As you can see in the docs:
Since this feature is still experimental, some operations, e.g., graph pooling methods, may still require you to input the edge_index format. You can convert adj_t back to (edge_index, edge_attr) via:
row, col, edge_attr = adj_t.t().coo()
edge_index = torch.stack([row, col], dim=0)
| https://stackoverflow.com/questions/69091074/ |
Having Issues Loading the CelebA dataset on Google Colab Using Pytorch | I need to load the CelebA dataset for a Python (Pytorch) implementation of the following paper: https://arxiv.org/pdf/1908.10578.pdf
The original code for loading the CelebA dataset was written in MATLAB using MatConvNet with autonn (source 15 paper). I have the source code but I'm not sure if I can share it.
It's my first time using Pytorch(version 1.9.0+cu102) and doing a paper implementation in Computer Vision.
I looked at the following relevant question: How do I load the CelebA dataset on Google Colab, using torch vision, without running out of memory?
and tested out the solution suggested by user anurag: https://stackoverflow.com/a/65528710/15087536
Unfortunately, I'm still getting a syntax error.
Here's the code below:
import torchvision
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader
from torchvision import transforms
# Root directory for the dataset
data_root = 'data/celeba'
# Spatial size of training images, images are resized to this size.
image_size = 64
# batch size
batch_size = 50000
transform=transforms.Compose([transforms.Resize(image_size),
transforms.CenterCrop(image_size),transforms.ToTensor(),transforms.Normalize(mean=
[0.5, 0.5, 0.5],std=[0.5, 0.5, 0.5])
dataset = ImageFolder(data_root,transform) **syntax error**
| Since we do not know the exact syntax error in your case, I cannot comment on it directly (though note that the snippet you posted is missing the closing ]) of the transforms.Compose call).
Below I will share one possible way to do it.
You can download the CelebA dataset from Kaggle using this link. Alternatively, you can also create a Kaggle kernel using this data (no need to download the data then).
If you are using Google Colab, upload this data so that it is accessible from your notebook.
Next you can write a PyTorch dataset which will load the images based on the partition (train, valid, test).
I am pasting an example below. You can always customize this to suit your needs.
import os
from torch.utils.data import Dataset, DataLoader
import pandas as pd
from skimage import io
class CelebDataset(Dataset):
def __init__(self,data_dir,partition_file_path,split,transform):
self.partition_file = pd.read_csv(partition_file_path)
self.data_dir = data_dir
self.split = split
self.transform = transform
def __len__(self):
self.partition_file_sub = self.partition_file[self.partition_file["partition"].isin(self.split)]
return len(self.partition_file_sub)
def __getitem__(self,idx):
img_name = os.path.join(self.data_dir,
self.partition_file_sub.iloc[idx, 0])
image = io.imread(img_name)
if self.transform:
image = self.transform(image)
return image
Next, you can create your train and test loaders. Change the IMAGE_PATH to your directory which contains images.
from torchvision import transforms

batch_size = 64  # celeba_config['batch_size'] in the original; set this to your own config value
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
IMAGE_PATH = '../input/celeba-dataset/img_align_celeba/img_align_celeba'
trainset = CelebDataset(data_dir=IMAGE_PATH,
partition_file_path='../input/celeba-dataset/list_eval_partition.csv',
split=[0,1],
transform=transform)
trainloader = DataLoader(trainset, batch_size=batch_size,
shuffle=True, num_workers=2)
testset = CelebDataset(data_dir=IMAGE_PATH,
partition_file_path='../input/celeba-dataset/list_eval_partition.csv',
split=[2],
transform=transform)
testloader = DataLoader(testset, batch_size=batch_size,
shuffle=True, num_workers=2)
| https://stackoverflow.com/questions/69096548/ |
Which loss function to use for training a sparse multi-label text classification problem with class skewness/imbalance | I am training a sparse multi-label text classifier using Hugging Face models, as one part of a smart-reply system. The task I am doing is described below:
I classify customer utterances given as input to the model into the agent-response clusters they belong to. I have 60 clusters, and a customer utterance can map to one or more clusters.
Input to the model:
Input: My account is blocked
Output: [0,0,0,1,1,0,...,0,0,0,0,0]
The output is an encoding vector over the cluster labels. In the above example, the customer query maps to clusters 4 and 5 of the agent responses.
Problem:
The model always predicts the clusters that are very frequent; it never picks the rare clusters.
Only a few 1s are present at a time in the output labels; the rest are 0.
Code:
#Dividing the params into those which needs to be updated and rest
param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'gamma', 'beta']
optimizer_grouped_parameters = [
{
'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.01
},
{
'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.0
}
]
optimizer = BertAdam(optimizer_grouped_parameters, lr =0.05, warmup = .1)
Model Training
#Empty the GPU memory as it might be memory and CPU intensive while training
torch.cuda.empty_cache()
#Number of times the whole dataset will run through the network and model is fine-tuned
epochs = 10
epoch_count = 1
#Iterate over number of epochs
for _ in trange(epochs, desc = "Epoch"):
#Switch model to train phase where it will update gradients
model.train()
#Initaite train and validation loss, number of rows passed and number of batches passed
tr_loss = 0
nb_tr_examples, nb_tr_steps = 0, 0
val_loss = 0
nb_val_examples, nb_val_steps = 0, 0
#Iterate over batches within the same epoch
for batch in tqdm(train_dataloader):
#Shift the batch to GPU for computation
#pdb.set_trace()
batch = tuple(t.to(device) for t in batch)
#Load the input ids and masks from the batch
b_input_ids, b_input_mask, b_labels = batch
#Initiate gradients to 0 as they tend to add up
optimizer.zero_grad()
#Forward pass the input data
logits = model(b_input_ids, token_type_ids = None, attention_mask = b_input_mask)
#We will be using the Binary Cross entropy loss with added sigmoid function after that in BCEWithLogitsLoss
loss_func = BCEWithLogitsLoss()
#Calculate the loss between multilabel predicted outputs and actuals
loss = loss_func(logits, b_labels.type_as(logits))
#Backpropogate the loss and calculate the gradients
loss.backward()
#Update the weights with the calculated gradients
optimizer.step()
#Add the loss of the batch to the final loss, number of rows and batches
tr_loss += loss.item()
nb_tr_examples += b_input_ids.size(0)
nb_tr_steps += 1
#Print the current training loss
print("Train Loss: {}".format(tr_loss/nb_tr_examples))
# Save the trained model after each epoch.
# pickle.dump(model, open("conv_bert_model_"+str(epoch_count)+".pkl", "wb"))
epoch_count=epoch_count+1
I am using this loss function currently:
loss_func = BCEWithLogitsLoss()
#Calculate the loss between multilabel predicted outputs and actuals
loss = loss_func(logits, b_labels.type_as(logits))
Is there any way to improve the model's output (recall and precision) by using a different loss function?
How do we tackle the cluster-imbalance problem in Hugging Face models in the case of multi-label classification?
| You can use a weighted cross entropy applied to each index of the output. You'd have to go through the training set to calculate the weights of each cluster.
criterion = nn.BCEWithLogitsLoss(reduction='none')
loss = criterion(output, target)
loss = (loss * weights).mean()
loss.backward()
By doing so the losses for different indexes are not combined immediately, but kept separate. They are first multiplied with the weights and then combined.
To calculate the weights, assuming outputs is a Tensor:
weights = torch.sum(outputs, 0)/torch.sum(outputs)
And assuming numpy arrays:
weights = np.sum(outputs, 0)/np.sum(outputs)
| https://stackoverflow.com/questions/69098628/ |
Not able to save model to gs bucket using torch.save() | I am trying to save a PyTorch model to my Google Cloud Storage bucket, but it always raises a FileNotFoundError.
I already have a gs bucket and the file path I am providing is also correct. My code is running on a GCP notebook instance.
path = "gs://bucket_name/model/model.pt"
torch.save(model,path)
It would really help if someone tried uploading a model to a GCS bucket and let me know whether it worked. It would also help if you shared the right way to put models into a GCS bucket using torch.save().
| try this:
import gcsfs
fs = gcsfs.GCSFileSystem(project = '<enter your gc project>')
with fs.open('gs://bucket_name/model/model.pt', 'wb') as f:
torch.save(model, f)
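Loading the model back works the same way, just in read mode (assuming the same bucket path as above):
import gcsfs
import torch
fs = gcsfs.GCSFileSystem(project = '<enter your gc project>')
with fs.open('gs://bucket_name/model/model.pt', 'rb') as f:
    model = torch.load(f)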
| https://stackoverflow.com/questions/69100496/ |
How to solve issue: grad_fn = None | Could anyone advise why grad_fn is None for this PyTorch code?
| Parameter weights are leaf nodes, i.e. those tensor are not the result of an operation, in other words, there is no other tensor node preceding them. You can check using the is_leaf attribute:
>>> nn.Linear(1,1).weight.is_leaf
True
The grad_fn attribute essentially holds the callback function to backpropagate from a given tensor node. By definition, leaf tensors do not have such function because there is nothing to backpropagate on.
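A quick check of both cases (a leaf parameter versus the result of an operation):
>>> w = nn.Linear(1, 1).weight
>>> w.is_leaf, w.grad_fn
(True, None)
>>> y = w * 2
>>> y.is_leaf, y.grad_fn
(False, <MulBackward0 object at 0x7f...>)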
| https://stackoverflow.com/questions/69101872/ |
tflite quantized mobilenet v2 classifier not working | My goal is to convert a PyTorch Model into a quantized tflite model that can be used for inference on the Edge TPU.
I was able to convert a fairly complex depth estimation model from PyTorch to tflite and I successfully ran it on the Edge TPU. But because not all operations were supported, inference was pretty slow (>800ms).
Number of operations that will run on Edge TPU: 87
Number of operations that will run on CPU: 47
(Image: depth estimation result)
Because I want a model that runs fully on the TPU, I tried converting the simplest model I could think of, a MobilenetV2 classification model. But when running the quantized model, I get strangely inaccurate results.
PyTorch: Samoyed 0.8303, Pomeranian 0.06989, keeshond 0.01296, collie 0.0108, Great Pyrenees 0.00989
TFLite: missile 0.184565, kuvasz 0.184565, stupa 0.184565, Samoyed 0.184565, Arctic fox 0.184565
Is this caused by quantizing the model from float32 to uint8, or am I doing something wrong? And if it is caused by quantization, how can I mitigate it? The classification example from Coral works fine and, as far as I know, it uses the same model.
Conversion Process
PyTorch -> ONNX -> OpenVINO -> TensorFlow -> TensorFlowLite
I wrote my own code to convert the model from PyTorch to ONNX and from TensorFlow (pb) into TFLite. For the other conversion steps, I used the OpenVINO mo.py script and the openvino2tensorflow tool because of the NCHW/NHWC mismatch between PyTorch and TensorFlow.
Downloads
Depth Estimation Model: https://github.com/AaronZettler/miscellaneous/blob/master/mobilenet_v2_depth_est.pth?raw=true
Classification Model: https://github.com/AaronZettler/miscellaneous/blob/master/mobilenetv2.tflite?raw=true
Labels: https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt
Image: https://raw.githubusercontent.com/pytorch/hub/master/images/dog.jpg
Code
This code does not require the Edge TPU to be run, but it does require the google coral libraries.
If I use different parameters for mean and std, like (2.0, 76.0), I get a solid result for the dog.jpg image, but if I try to classify something else I have the same problem.
import numpy as np
from PIL import Image
from pycoral.adapters import classify
from pycoral.adapters import common
from pycoral.utils.dataset import read_label_file
from torchvision import transforms
from tensorflow.lite.python.interpreter import Interpreter
def cropPIL(image, new_width, new_height):
width, height = image.size
left = (width - new_width)/2
top = (height - new_height)/2
right = (width + new_width)/2
bottom = (height + new_height)/2
return image.crop((left, top, right, bottom))
def softmax(x):
e_x = np.exp(x - np.max(x))
return e_x / e_x.sum()
def classify_img(image_dir, lables_dir, model_dir, mean, std):
#loading lables and model
labels = read_label_file(lables_dir)
interpreter = Interpreter(model_path=model_dir)
interpreter.allocate_tensors()
#load an resize image
size = (256, 256)
image = Image.open(image_dir).convert('RGB')
image = image.resize(((int)(size[0]*image.width/image.height), size[1]), Image.ANTIALIAS)
image = cropPIL(image, 224, 224)
image = np.asarray(image)
#normalizing the input image
params = common.input_details(interpreter, 'quantization_parameters')
scale = params['scales']
zero_point = params['zero_points']
normalized_input = (image - mean) / (std * scale) + zero_point
np.clip(normalized_input, 0, 255, out=normalized_input)
#setting the image as input
common.set_input(interpreter, normalized_input.astype(np.uint8))
#run inference
interpreter.invoke()
#get output tensor and run softmax
output_details = interpreter.get_output_details()[0]
output_data = interpreter.tensor(output_details['index'])().flatten()
scores = softmax(output_data.astype(float))
#get the top 10 classes
classes = classify.get_classes_from_scores(scores, 5, 0.0)
print('-------RESULTS--------')
for c in classes:
print('%s: %f' % (labels.get(c.id, c.id), c.score))
image_dir = 'data/dog.jpg'
lables_dir = 'data/imagenet_classes.txt'
model_dir = 'models/mobilenetv2.tflite'
classify_img(image_dir, lables_dir, model_dir, 114.0, 57.0)
To run the PyTorch model on google colab I had to replace
model = torch.hub.load('pytorch/vision:v0.9.0', 'mobilenet_v2', pretrained=True)
with
model = torchvision.models.mobilenet_v2(pretrained=True)
to make it work.
This is the code I used to test The PyTorch model on my machine.
import torch
from PIL import Image
from torchvision import transforms
import torchvision
import numpy as np
import matplotlib.pyplot as plt
def inference(model, input_image, lables_dir):
preprocess = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
input_tensor = preprocess(input_image)
input_batch = input_tensor.unsqueeze(0)
# move the input and model to GPU for speed if available
if torch.cuda.is_available():
input_batch = input_batch.to('cuda')
model.to('cuda')
with torch.no_grad():
output = model(input_batch)
probabilities = torch.nn.functional.softmax(output[0], dim=0)
# Read the categories
with open(lables_dir, "r") as f:
categories = [s.strip() for s in f.readlines()]
# Show top categories per image
top5_prob, top5_catid = torch.topk(probabilities, 5)
result = {}
for i in range(top5_prob.size(0)):
result[categories[top5_catid[i]]] = top5_prob[i].item()
return result
def classify(image_dir, lables_dir):
model = torchvision.models.mobilenet_v2(pretrained=True)
model.eval()
im = Image.open(image_dir)
results = inference(model, im, lables_dir)
for result in results:
print(f'{result}: {round(results[result], 5)}')
classify('data/dog.jpg', 'data/imagenet_classes.txt')
| EdgeTPU mapping of PReLU (LeakyReLU) is now supported in openvino2tensorflow v1.20.4.
However, due to the large size of the model, it is not possible to map all operations to the EdgeTPU. Therefore, the part of the model that does not fit in the EdgeTPU's RAM is offloaded to the CPU for inference, which is very slow. In this case, inference by the CPU alone is 4 to 5 times faster. The EdgeTPU does not support PReLU (LeakyReLU), so those operations must be replaced. However, openvino2tensorflow v1.20.4 automatically replaces the operations in the conversion process.
Converted model
https://github.com/PINTO0309/PINTO_model_zoo/tree/main/149_depth_estimation
Convert sample
docker run --gpus all -it --rm \
-v `pwd`:/home/user/workdir \
pinto0309/openvino2tensorflow:latest
cd workdir
MODEL=depth_estimation_mbnv2
H=180
W=320
$INTEL_OPENVINO_DIR/deployment_tools/model_optimizer/mo.py \
--input_model ${MODEL}_${H}x${W}.onnx \
--data_type FP32 \
--output_dir ${H}x${W}/openvino/FP32
$INTEL_OPENVINO_DIR/deployment_tools/model_optimizer/mo.py \
--input_model ${MODEL}_${H}x${W}.onnx \
--data_type FP16 \
--output_dir ${H}x${W}/openvino/FP16
mkdir -p ${H}x${W}/openvino/myriad
${INTEL_OPENVINO_DIR}/deployment_tools/inference_engine/lib/intel64/myriad_compile \
-m ${H}x${W}/openvino/FP16/${MODEL}_${H}x${W}.xml \
-ip U8 \
-VPU_NUMBER_OF_SHAVES 4 \
-VPU_NUMBER_OF_CMX_SLICES 4 \
-o ${H}x${W}/openvino/myriad/${MODEL}_${H}x${W}.blob
openvino2tensorflow \
--model_path ${H}x${W}/openvino/FP32/${MODEL}_${H}x${W}.xml \
--output_saved_model \
--output_pb \
--output_no_quant_float32_tflite \
--output_weight_quant_tflite \
--output_float16_quant_tflite \
--output_integer_quant_tflite \
--string_formulas_for_normalization 'data / 255' \
--output_integer_quant_type 'uint8' \
--output_tfjs \
--output_coreml \
--output_tftrt
mv saved_model saved_model_${H}x${W}
openvino2tensorflow \
--model_path ${H}x${W}/openvino/FP32/${MODEL}_${H}x${W}.xml \
--output_saved_model \
--output_pb \
--output_edgetpu \
--string_formulas_for_normalization 'data / 255' \
--output_integer_quant_type 'uint8'
mv saved_model/model_full_integer_quant.tflite saved_model_${H}x${W}/model_full_integer_quant.tflite
mv saved_model/model_full_integer_quant_edgetpu.tflite saved_model_${H}x${W}/model_full_integer_quant_edgetpu.tflite
mv ${H}x${W}/openvino saved_model_${H}x${W}/openvino
mv ${MODEL}_${H}x${W}.onnx saved_model_${H}x${W}/${MODEL}_${H}x${W}.onnx
H=240
W=320
$INTEL_OPENVINO_DIR/deployment_tools/model_optimizer/mo.py \
--input_model ${MODEL}_${H}x${W}.onnx \
--data_type FP32 \
--output_dir ${H}x${W}/openvino/FP32
$INTEL_OPENVINO_DIR/deployment_tools/model_optimizer/mo.py \
--input_model ${MODEL}_${H}x${W}.onnx \
--data_type FP16 \
--output_dir ${H}x${W}/openvino/FP16
mkdir -p ${H}x${W}/openvino/myriad
${INTEL_OPENVINO_DIR}/deployment_tools/inference_engine/lib/intel64/myriad_compile \
-m ${H}x${W}/openvino/FP16/${MODEL}_${H}x${W}.xml \
-ip U8 \
-VPU_NUMBER_OF_SHAVES 4 \
-VPU_NUMBER_OF_CMX_SLICES 4 \
-o ${H}x${W}/openvino/myriad/${MODEL}_${H}x${W}.blob
openvino2tensorflow \
--model_path ${H}x${W}/openvino/FP32/${MODEL}_${H}x${W}.xml \
--output_saved_model \
--output_pb \
--output_no_quant_float32_tflite \
--output_weight_quant_tflite \
--output_float16_quant_tflite \
--output_integer_quant_tflite \
--string_formulas_for_normalization 'data / 255' \
--output_integer_quant_type 'uint8' \
--output_tfjs \
--output_coreml \
--output_tftrt
mv saved_model saved_model_${H}x${W}
openvino2tensorflow \
--model_path ${H}x${W}/openvino/FP32/${MODEL}_${H}x${W}.xml \
--output_saved_model \
--output_pb \
--output_edgetpu \
--string_formulas_for_normalization 'data / 255' \
--output_integer_quant_type 'uint8'
mv saved_model/model_full_integer_quant.tflite saved_model_${H}x${W}/model_full_integer_quant.tflite
mv saved_model/model_full_integer_quant_edgetpu.tflite saved_model_${H}x${W}/model_full_integer_quant_edgetpu.tflite
mv ${H}x${W}/openvino saved_model_${H}x${W}/openvino
mv ${MODEL}_${H}x${W}.onnx saved_model_${H}x${W}/${MODEL}_${H}x${W}.onnx
PReLU (LeakyReLU) to Maximum (ReLU), Minimum, Mul, Add
(Images in the original answer show the graph before and after the PReLU replacement, and the compiled EdgeTPU model.)
| https://stackoverflow.com/questions/69105473/ |
How do I use separate types of gpus (e.g. 1080Ti vs 2080Ti) on the same docker image without needing to re-run `python setup.py develop`? | I'm using a pytorch-based repository where the installation step specifies to run python setup.py develop with this setup.py file. I have been running the repository fine with 1080Ti and 1080 GPUs using a docker image which clones the repo and runs the setup.py script in the build process. The following are files copied from my Dockerfile.
RUN git clone https://github.com/CVMI-Lab/ST3D.git
WORKDIR /ST3D
RUN nvidia-smi
RUN pip install -r requirements.txt
RUN python setup.py develop
Upon entering the container, I only mount specific folders within the repo as follows:
GPU_ID = 0
ENVS=" --env=NVIDIA_VISIBLE_DEVICES=$GPU_ID
--env=CUDA_VISIBLE_DEVICES=$GPU_ID
--env=NVIDIA_DRIVER_CAPABILITIES=all"
VOLUMES=" --volume=$DATA_PATH:/ST3D/data
--volume=$CODE_PATH/pcdet:/ST3D/pcdet
--volume=$CODE_PATH/tools:/ST3D/tools
--volume=$CODE_PATH/output:/ST3D/output"
docker run -d -it --rm \
$VOLUMES \
$ENVS \
--runtime=nvidia \
--gpus $GPU_ID \
--privileged \
--net=host \
--workdir=/ST3D \
darrenjkt/st3d:v0.3.0
Recently we installed a 2080Ti in the same computer. When I enter the same docker container with solely the 2080Ti gpu, using the same python script, I get the following error:
RuntimeError: CUDA error: no kernel image is available for execution on the device
This error pertains to one of the cpp modules installed in the setup.py.
I can solve this by running python setup.py develop again, which then enables it to work with the 2080Ti. I then tried committing the docker container to a 2080Ti-specific docker image, and the 1080 container to a 1080-specific image. However, I noticed that once I run python setup.py develop in the 2080Ti container, I get the CUDA error on the 1080 images. And if I run setup.py on the 1080 GPU again, the CUDA error comes back on the 2080Ti image. This baffles me, as I have not mounted the build files but rather kept them solely in the container and committed it to a new image.
So my question is, how can I set up my environment/docker image such that it doesn't require a rebuild of setup.py each time?
| The problem was solved by building the docker image with the following:
RUN git clone https://github.com/CVMI-Lab/ST3D.git
WORKDIR /ST3D
RUN nvidia-smi
RUN pip install -r requirements.txt
RUN TORCH_CUDA_ARCH_LIST="6.1 7.5" python setup.py develop
Here TORCH_CUDA_ARCH_LIST="6.1 7.5" lists the compute capabilities to build for: 6.1 for the 1080/1080Ti and 7.5 for the 2080Ti. This overrides the default cpp_extension behaviour and explicitly specifies which compute capabilities to support.
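If you are unsure which compute capability a given GPU has, you can query it from PyTorch (device index 0 here is just an example):
import torch
print(torch.cuda.get_device_capability(0))  # e.g. (6, 1) for a 1080Ti, (7, 5) for a 2080Ti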
| https://stackoverflow.com/questions/69111302/ |
understanding pytorch conv2d internally | I'm trying to understand what does the nn.conv2d do internally.
So let's assume we are applying Conv2d to a 32*32 RGB image.
torch.nn.Conv2d(3, 49, 4, bias=True)
so:
When we initialize the conv layer, how many weights would it have and in which shapes? Please explain this for the biases separately.
Before applying the conv the image has shape 3 * 32 * 32, and after applying it the output has shape 49 * 29 * 29, so what happens in between?
I define the "slide" operation (I don't know the real name) as multiplying each element of the kernel with the corresponding element of a kernel-sized window of the image and summing, which produces one of the 29 * 29 output values.
And "slide all" as doing this horizontally and vertically until all 29 * 29 values are calculated.
So I understand how a kernel would act, but I don't understand how many kernels would be created by torch.nn.Conv2d(3, 49, 4, bias=True) and which of them would be applied to the R, G, and B channels.
|
I understand how a kernel would act but I don't understand how many
kernels would be created by the nn.Conv2d(3, 49, 4, bias=True) and
which of them would be applying on R, G, and B channels.
Calling nn.Conv2d(3, 49, 4, bias=True) will initialize 49 4x4-kernels, each having a total of three channels and a single bias parameter. That's a total of 49*(4*4*3 + 1) parameters, i.e. 2,401 parameters.
You can check that it is indeed correct with:
>>> conv2d = nn.Conv2d(3, 49, 4, bias=True)
The parameters list will contain the weight tensor shaped (n_filters=49, n_channels=3, kernel_height=4, kernel_width=4) and a bias tensor shaped (49,):
>>> [p.shape for p in conv2d.parameters()]
[torch.Size([49, 3, 4, 4]), torch.Size([49])]
If we get a look at the total number of parameters, we indeed find:
>>> nn.utils.parameters_to_vector(conv2d.parameters()).numel()
2401
Concerning how they are applied: each one of the 49 kernels will be applied 'independently' to the input map. For each filter operation, you are convolving the three-channel input tensor with a three-channel kernel. Each one of those 49 convolutions gets its respective bias added. In the end, you are left with 49 single-channel maps, which are concatenated to make up a single 49-channel map. In practice, everything is done in one go using a windowed view of the input.
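You can check the resulting shape directly, reusing conv2d from above: with a 4x4 kernel and stride 1 on a 32x32 input, each spatial dimension shrinks to 32 - 4 + 1 = 29:
>>> x = torch.rand(1, 3, 32, 32)  # (batch, channels, height, width)
>>> conv2d(x).shape
torch.Size([1, 49, 29, 29])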
I am certainly biased towards my own posts: here you will find another explanation of shapes in convolutional neural networks.
| https://stackoverflow.com/questions/69118656/ |
estimator.fit hangs on SageMaker in local mode | I am trying to train a PyTorch model using SageMaker in local mode, but whenever I call estimator.fit the code hangs indefinitely and I have to interrupt the notebook kernel. This happens both on my local machine and in SageMaker Studio. But when I use EC2, the training runs normally.
Here is the call to the estimator, and the stack trace once I interrupt the kernel:
import sagemaker
from sagemaker.pytorch import PyTorch
bucket = "bucket-name"
role = sagemaker.get_execution_role()
training_input_path = f"s3://{bucket}/dataset/path"
sagemaker_session = sagemaker.LocalSession()
sagemaker_session.config = {"local": {"local_code": True}}
output_path = "file://."
estimator = PyTorch(
entry_point="train.py",
source_dir="src",
hyperparameters={"max-epochs": 1},
framework_version="1.8",
py_version="py3",
instance_count=1,
instance_type="local",
role=role,
output_path=output_path,
sagemaker_session=sagemaker_session,
)
estimator.fit({"training": training_input_path})
Stack trace:
---------------------------------------------------------------------------
KeyboardInterrupt Traceback (most recent call last)
<ipython-input-9-35cdd6021288> in <module>
----> 1 estimator.fit({"training": training_input_path})
/opt/conda/lib/python3.7/site-packages/sagemaker/estimator.py in fit(self, inputs, wait, logs, job_name, experiment_config)
678 self._prepare_for_training(job_name=job_name)
679
--> 680 self.latest_training_job = _TrainingJob.start_new(self, inputs, experiment_config)
681 self.jobs.append(self.latest_training_job)
682 if wait:
/opt/conda/lib/python3.7/site-packages/sagemaker/estimator.py in start_new(cls, estimator, inputs, experiment_config)
1450 """
1451 train_args = cls._get_train_args(estimator, inputs, experiment_config)
-> 1452 estimator.sagemaker_session.train(**train_args)
1453
1454 return cls(estimator.sagemaker_session, estimator._current_job_name)
/opt/conda/lib/python3.7/site-packages/sagemaker/session.py in train(self, input_mode, input_config, role, job_name, output_config, resource_config, vpc_config, hyperparameters, stop_condition, tags, metric_definitions, enable_network_isolation, image_uri, algorithm_arn, encrypt_inter_container_traffic, use_spot_instances, checkpoint_s3_uri, checkpoint_local_path, experiment_config, debugger_rule_configs, debugger_hook_config, tensorboard_output_config, enable_sagemaker_metrics, profiler_rule_configs, profiler_config, environment, retry_strategy)
572 LOGGER.info("Creating training-job with name: %s", job_name)
573 LOGGER.debug("train request: %s", json.dumps(train_request, indent=4))
--> 574 self.sagemaker_client.create_training_job(**train_request)
575
576 def _get_train_request( # noqa: C901
/opt/conda/lib/python3.7/site-packages/sagemaker/local/local_session.py in create_training_job(self, TrainingJobName, AlgorithmSpecification, OutputDataConfig, ResourceConfig, InputDataConfig, **kwargs)
184 hyperparameters = kwargs["HyperParameters"] if "HyperParameters" in kwargs else {}
185 logger.info("Starting training job")
--> 186 training_job.start(InputDataConfig, OutputDataConfig, hyperparameters, TrainingJobName)
187
188 LocalSagemakerClient._training_jobs[TrainingJobName] = training_job
/opt/conda/lib/python3.7/site-packages/sagemaker/local/entities.py in start(self, input_data_config, output_data_config, hyperparameters, job_name)
219
220 self.model_artifacts = self.container.train(
--> 221 input_data_config, output_data_config, hyperparameters, job_name
222 )
223 self.end_time = datetime.datetime.now()
/opt/conda/lib/python3.7/site-packages/sagemaker/local/image.py in train(self, input_data_config, output_data_config, hyperparameters, job_name)
200 data_dir = self._create_tmp_folder()
201 volumes = self._prepare_training_volumes(
--> 202 data_dir, input_data_config, output_data_config, hyperparameters
203 )
204 # If local, source directory needs to be updated to mounted /opt/ml/code path
/opt/conda/lib/python3.7/site-packages/sagemaker/local/image.py in _prepare_training_volumes(self, data_dir, input_data_config, output_data_config, hyperparameters)
487 os.mkdir(channel_dir)
488
--> 489 data_source = sagemaker.local.data.get_data_source_instance(uri, self.sagemaker_session)
490 volumes.append(_Volume(data_source.get_root_dir(), channel=channel_name))
491
/opt/conda/lib/python3.7/site-packages/sagemaker/local/data.py in get_data_source_instance(data_source, sagemaker_session)
52 return LocalFileDataSource(parsed_uri.netloc + parsed_uri.path)
53 if parsed_uri.scheme == "s3":
---> 54 return S3DataSource(parsed_uri.netloc, parsed_uri.path, sagemaker_session)
55 raise ValueError(
56 "data_source must be either file or s3. parsed_uri.scheme: {}".format(parsed_uri.scheme)
/opt/conda/lib/python3.7/site-packages/sagemaker/local/data.py in __init__(self, bucket, prefix, sagemaker_session)
183 working_dir = "/private{}".format(working_dir)
184
--> 185 sagemaker.utils.download_folder(bucket, prefix, working_dir, sagemaker_session)
186 self.files = LocalFileDataSource(working_dir)
187
/opt/conda/lib/python3.7/site-packages/sagemaker/utils.py in download_folder(bucket_name, prefix, target, sagemaker_session)
286 raise
287
--> 288 _download_files_under_prefix(bucket_name, prefix, target, s3)
289
290
/opt/conda/lib/python3.7/site-packages/sagemaker/utils.py in _download_files_under_prefix(bucket_name, prefix, target, s3)
314 if exc.errno != errno.EEXIST:
315 raise
--> 316 obj.download_file(file_path)
317
318
/opt/conda/lib/python3.7/site-packages/boto3/s3/inject.py in object_download_file(self, Filename, ExtraArgs, Callback, Config)
313 return self.meta.client.download_file(
314 Bucket=self.bucket_name, Key=self.key, Filename=Filename,
--> 315 ExtraArgs=ExtraArgs, Callback=Callback, Config=Config)
316
317
/opt/conda/lib/python3.7/site-packages/boto3/s3/inject.py in download_file(self, Bucket, Key, Filename, ExtraArgs, Callback, Config)
171 return transfer.download_file(
172 bucket=Bucket, key=Key, filename=Filename,
--> 173 extra_args=ExtraArgs, callback=Callback)
174
175
/opt/conda/lib/python3.7/site-packages/boto3/s3/transfer.py in download_file(self, bucket, key, filename, extra_args, callback)
305 bucket, key, filename, extra_args, subscribers)
306 try:
--> 307 future.result()
308 # This is for backwards compatibility where when retries are
309 # exceeded we need to throw the same error from boto3 instead of
/opt/conda/lib/python3.7/site-packages/s3transfer/futures.py in result(self)
107 except KeyboardInterrupt as e:
108 self.cancel()
--> 109 raise e
110
111 def cancel(self):
/opt/conda/lib/python3.7/site-packages/s3transfer/futures.py in result(self)
104 # however if a KeyboardInterrupt is raised we want want to exit
105 # out of this and propogate the exception.
--> 106 return self._coordinator.result()
107 except KeyboardInterrupt as e:
108 self.cancel()
/opt/conda/lib/python3.7/site-packages/s3transfer/futures.py in result(self)
258 # possible value integer value, which is on the scale of billions of
259 # years...
--> 260 self._done_event.wait(MAXINT)
261
262 # Once done waiting, raise an exception if present or return the
/opt/conda/lib/python3.7/threading.py in wait(self, timeout)
550 signaled = self._flag
551 if not signaled:
--> 552 signaled = self._cond.wait(timeout)
553 return signaled
554
/opt/conda/lib/python3.7/threading.py in wait(self, timeout)
294 try: # restore state no matter what (e.g., KeyboardInterrupt)
295 if timeout is None:
--> 296 waiter.acquire()
297 gotit = True
298 else:
KeyboardInterrupt:
| SageMaker Studio does not natively support local mode. Studio Apps are themselves docker containers, and they would therefore require privileged access to be able to build and run docker containers.
As an alternative solution, you can create a remote docker host on an EC2 instance and set up docker on your Studio App. There is quite a bit of networking and package installation involved, but the solution will enable you to use full docker functionality. Additionally, as of version 2.80.0 of the SageMaker Python SDK, it now supports local mode when you are using a remote docker host.
sdocker, the SageMaker Studio Docker CLI extension (see this repo), can simplify deploying the above solution in two simple steps (it only works for a Studio Domain in VPCOnly mode), and it has an easy-to-follow example here.
UPDATE:
There is now a UI extension (see repo) which can make the experience much smoother and easier to manage.
| https://stackoverflow.com/questions/69119171/ |
PyTorch | getting "RuntimeError: Found dtype Long but expected Float" with dataset Omniglot | I'm a real newbie with PyTorch and neural networks. I started working on these subjects this week, and my mentor gave me some code with tasks to work on.
But the code he gave me is not working. I have tried to fix it all day with no result. Because I do not know the background of neural networks and PyTorch, it is harder to understand the problem.
I need your help with this.
Thank you!
import torch
import numpy as np
import torchvision.datasets as datasets
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
from torchsummary import summary
#DEFINE YOUR DEVICE
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print(device) #if cpu, go Runtime-> Change runtime type-> Hardware accelerator GPU -> Save -> Redo previous steps
#DOWNLOAD DATASET
train_data = datasets.Omniglot('./data', background=True, download = True, transform = transforms.ToTensor())
test_data = datasets.Omniglot('./data',background = False, download = True, transform = transforms.ToTensor())
#DEFINE DATA GENERATOR
batch_size = 50
train_generator = torch.utils.data.DataLoader(train_data, batch_size = batch_size, shuffle = True)
test_generator = torch.utils.data.DataLoader(test_data, batch_size = batch_size, shuffle = False)
#DEFINE NEURAL NETWORK MODEL
class CNN(torch.nn.Module):
def __init__(self):
super(CNN, self).__init__()
self.conv1 = torch.nn.Conv2d(1, 8, kernel_size = 4, stride = 1)
self.conv2 = torch.nn.Conv2d(8, 16, kernel_size = 4, stride = 1)
self.mpool = torch.nn.MaxPool2d(2)
self.fc1 = torch.nn.Linear(18432, 256)
self.fc2 = torch.nn.Linear(256, 64)
self.fc3 = torch.nn.Linear(64, 50)
self.relu = torch.nn.ReLU()
self.sigmoid = torch.nn.Sigmoid()
def forward(self, x):
hidden = self.mpool(self.relu(self.conv1(x)))
hidden = self.mpool(self.relu(self.conv2(hidden)))
hidden = hidden.view(-1,18432)
hidden = self.relu(self.fc1(hidden))
hidden = self.relu(self.fc2(hidden))
output = self.fc3(hidden)
return output
# CREATE MODEL
model = CNN()
model.to(device)
summary(model, (1, 105, 105))
# DEFINE LOSS FUNCTION AND OPTIMIZER
learning_rate = 0.001
loss_fun = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
# TRAIN THE MODEL
model.train()
epoch = 10
num_of_batch = np.int(len(train_generator.dataset) / batch_size)
loss_values = np.zeros(epoch * num_of_batch)
for i in range(epoch):
for batch_idx, (x_train, y_train) in enumerate(train_generator):
x_train, y_train = x_train.to(device), y_train.to(device)
optimizer.zero_grad()
y_pred = model(x_train)
loss = loss_fun(y_pred, y_train)
loss_values[num_of_batch * i + batch_idx] = loss.item()
loss.backward()
optimizer.step()
if (batch_idx + 1) % batch_size == 0:
print('Epoch: {}/{} [Batch: {}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
i + 1, epoch, (batch_idx + 1) * len(x_train), len(train_generator.dataset),
100. * (batch_idx + 1) / len(train_generator), loss.item()))
#PLOT THE LEARNING CURVE
iterations = np.linspace(0,epoch,num_of_batch*epoch)
plt.plot(iterations, loss_values)
plt.title('Learning Curve')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.grid('on')
#TEST THE MODEL
model.eval()
correct=0
total=0
for x_val, y_val in test_generator:
x_val = x_val.to(device)
y_val = y_val.to(device)
output = model(x_val)
y_pred = output.argmax(dim=1)
for i in range(y_pred.shape[0]):
if y_val[i]==y_pred[i]:
correct += 1
total +=1
print('Validation accuracy: %.2f%%' %((100*correct)//(total)))
Here is the error code that i receive.
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py:528: UserWarning: Using a target size (torch.Size([50])) that is different to the input size (torch.Size([25, 50])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
return F.mse_loss(input, target, reduction=self.reduction)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-11-bffd863688df> in <module>()
13 loss = loss_fun(y_pred, y_train)
14 loss_values[num_of_batch*i+batch_idx] = loss.item()
---> 15 loss.backward()
16 optimizer.step()
17 if (batch_idx+1) % batch_size == 0:
1 frames
/usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
147 Variable._execution_engine.run_backward(
148 tensors, grad_tensors_, retain_graph, create_graph, inputs,
--> 149 allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag
150
151
RuntimeError: Found dtype Long but expected Float
| Your dataset is returning integers for your labels; you should cast them to floating point. One way of solving it is:
loss = loss_fun(y_pred, y_train.float())
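As an aside (this swaps the loss function rather than just fixing the dtype): for a single-label, 50-class setup like this, nn.CrossEntropyLoss is the usual choice, and it expects integer class indices directly, so no cast would be needed:
loss_fun = torch.nn.CrossEntropyLoss()
loss = loss_fun(y_pred, y_train)  # y_train stays a LongTensor of class indices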
| https://stackoverflow.com/questions/69124057/ |
How to optimize cudaHostAlloc and cudaLaunchKernel times in PyTorch training | I am trying to profile my model with the PyTorch profiler. I used the code below to profile:
with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA], record_shapes=True) as prof:
with record_function("model_inference"):
output_batch = self.model(input_batch)
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
The profiler output is as follows
------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg Self CUDA Self CUDA % CUDA total CUDA time avg # of Calls
------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
model_inference 3.17% 83.011ms 63.97% 1.675s 1.675s 0.000us 0.00% 373.844ms 373.844ms 1
aten::copy_ 0.24% 6.333ms 39.93% 1.046s 1.504ms 28.758ms 7.69% 29.035ms 41.777us 695
cudaHostAlloc 36.02% 943.053ms 36.02% 943.053ms 30.421ms 0.000us 0.00% 0.000us 0.000us 31
cudaLaunchKernel 35.93% 940.773ms 35.93% 940.773ms 86.619us 0.000us 0.00% 0.000us 0.000us 10861
aten::repeat 0.04% 979.000us 33.77% 884.170ms 30.489ms 0.000us 0.00% 204.000us 7.034us 29
aten::conv2d 0.06% 1.481ms 8.71% 228.183ms 695.680us 0.000us 0.00% 145.688ms 444.171us 328
aten::convolution 0.05% 1.391ms 8.66% 226.702ms 691.165us 0.000us 0.00% 145.688ms 444.171us 328
aten::_convolution 0.10% 2.742ms 8.61% 225.311ms 686.924us 0.000us 0.00% 145.688ms 444.171us 328
aten::cudnn_convolution 0.53% 13.803ms 8.33% 218.051ms 664.790us 137.822ms 36.87% 137.822ms 420.189us 328
cudaFree 7.46% 195.373ms 7.46% 195.373ms 48.843ms 0.000us 0.00% 0.000us 0.000us 4
------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
Self CPU time total: 2.618s
Self CUDA time total: 373.844ms
I notice that a large part of the time (self CPU) is taken by cudaHostAlloc and cudaLaunchKernel. What are these cudaHostAlloc and cudaLaunchKernel? Is it possible to reduce this time? If yes how? Are there any standard operations that I'm missing which is leading to this high time consumption?
PS: I'm new to profiling as such. Kindly let me know if any other information is needed.
| I am not an expert, but I think cudaLaunchKernel is called for every operation executed with CUDA, so I don't think you can optimize it away.
If you plot a detailed tracing (https://pytorch.org/tutorials/recipes/recipes/profiler_recipe.html#using-tracing-functionality), you see that it is called every time you do a CUDA operation, as here for a linear layer.
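For reference, such a trace can be exported from the prof object in the question and opened in chrome://tracing:
prof.export_chrome_trace("trace.json")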
One note on your profiler output: aten::copy_, cudaHostAlloc, cudaLaunchKernel, and aten::repeat all take roughly 40% of the CPU total time. I think this may be related to ProfilerActivity.CUDA, which records CUDA operations but also adds a lot of CPU time to the first CUDA operation that is profiled. In my case, a simple torch.ones(1000, device="cuda") took a full second of CPU time because it was the first CUDA operation.
This may be the problem in your case: try removing ProfilerActivity.CUDA, and aten::copy_, cudaHostAlloc, cudaLaunchKernel, and aten::repeat may show a much smaller CPU time or disappear from the table.
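That is, profile with the CPU activity only, for example (reusing the question's code):
with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    with record_function("model_inference"):
        output_batch = self.model(input_batch)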
| https://stackoverflow.com/questions/69127080/ |
Weights & Biases with Transformers and PyTorch? | I'm training an NLP model at work (e-commerce SEO), applying a BERT variant for the Portuguese language (BERTimbau) through Transformers by Hugging Face.
I didn't use the Trainer from the Transformers API. I used PyTorch to set everything up via torch.utils.data's DataLoader and AdamW. I trained my model using run_glue.py.
I'm training on a VM on GCP using JupyterLab. I know that I can use Weights & Biases both for PyTorch and for Transformers, but I don't know exactly how to set it up when using run_glue.py. It's my first time using Weights & Biases.
After preprocessing and splitting train and test sets with scikit-learn, my code is as follows:
from transformers import BertTokenizer
import torch
#import torchvision
from torch.utils.data import Dataset, TensorDataset
import collections.abc as container_abcs
# To feed our text to BERT, it must be split into tokens, and then these tokens must be mapped to their index in the tokenizer vocabulary.
# Constructs a BERT tokenizer. Based on WordPiece.
# The tokenization must be performed by the tokenizer included with BERT
tokenizer = BertTokenizer.from_pretrained('neuralmind/bert-large-portuguese-cased',
do_lower_case=True)
# Tokenize all of the sentences and map the tokens to thier word IDs. To convert all the titles from text into encoded form.
# We will use padding and truncation because the training routine expects all tensors within a batch to have the same dimensions.
encoded_data_train = tokenizer.batch_encode_plus(
df[df.data_type=='train'].text.values,
add_special_tokens=True, # Add '[CLS]' and '[SEP]'. Sequences encoded with special tokens relative to their model
return_attention_mask=True, # Return mask according to specific tokenizer defined by max_length
pad_to_max_length=True, # Pad & truncate all sentences. Pad all titles to certain maximum length
max_length=128, # Do not need to set max_length=256
return_tensors='pt' # Set to use PyTorch tensors
)
encoded_data_val = tokenizer.batch_encode_plus(
df[df.data_type=='val'].text.values,
add_special_tokens=True,
return_attention_mask=True,
pad_to_max_length=True,
max_length=128,
return_tensors='pt'
)
# Split the data into input_ids, attention_masks and labels.
# Converting the input data to the tensor , which can be feeded to the model
input_ids_train = encoded_data_train['input_ids'] # Add the encoded sentence to the list.
attention_masks_train = encoded_data_train['attention_mask'] # And its attention mask (simply differentiates padding from non-padding).
labels_train = torch.tensor(df[df.data_type=='train'].label.values)
input_ids_val = encoded_data_val['input_ids']
attention_masks_val = encoded_data_val['attention_mask']
labels_val = torch.tensor(df[df.data_type=='val'].label.values)
# Create training data and validation data
dataset_train = TensorDataset(input_ids_train, attention_masks_train, labels_train)
dataset_val = TensorDataset(input_ids_val, attention_masks_val, labels_val)
from transformers import BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained("neuralmind/bert-large-portuguese-cased", # Select your pretrained Model
num_labels=len(label_dict), # Labels to predict
output_attentions=False, # Whether the model returns attention weights. We don't really care about output_attentions.
output_hidden_states=False) # Whether the model returns all hidden states. We also don't need output_hidden_states.
from torch.utils.data import DataLoader, RandomSampler, SequentialSampler
batch_size = 32 # Set your batch size according to your GPU memory
dataloader_train = DataLoader(dataset_train # Use DataLoader to Optimize your model
,sampler=RandomSampler(dataset_train) # Random Sampler from your dataset
,batch_size=batch_size) # If your batch_size is too high you will get a warning when you run the model
#,num_workers=4 # Number of cores
#,pin_memory=True) # Use GPU to send your batch
dataloader_validation = DataLoader(dataset_val
,sampler=SequentialSampler(dataset_val) # For validation the order doesn't matter. Sequential Sampler consumes less GPU.
,batch_size=batch_size)
#,num_workers=4
#,pin_memory=True)
from transformers import AdamW, get_linear_schedule_with_warmup
# hyperparameters
# To construct an optimizer, we have to give it an iterable containing the parameters to optimize.
# Then, we can specify optimizer-specific options such as the learning rate, epsilon, etc.
optimizer = AdamW(model.parameters(), # AdamW is a class from the huggingface library (as opposed to pytorch)
lr=2e-5, # args.learning_rate - default is 5e-5
eps=1e-8) # args.adam_epsilon - default is 1e-8
# Number of training epochs. The BERT authors recommend between 2 and 4.
epochs = 2
# Create the learning rate scheduler that decreases linearly from the initial learning rate set in the optimizer to 0,
# after a warmup period during which it increases linearly from 0 to the initial learning rate set in the optimizer.
scheduler = get_linear_schedule_with_warmup(optimizer,
num_warmup_steps=0, # Default value in run_glue.py
num_training_steps=len(dataloader_train)*epochs) # Total number of training steps is [number of batches] x [number of epochs].
# Note that this is not the same as the number of training samples).
from sklearn.metrics import f1_score
def f1_score_func(preds, labels):
preds_flat = np.argmax(preds, axis=1).flatten()
labels_flat = labels.flatten()
return f1_score(labels_flat, preds_flat, average='weighted')
def accuracy_per_class(preds, labels):
label_dict_inverse = {v: k for k, v in label_dict.items()}
preds_flat = np.argmax(preds, axis=1).flatten()
labels_flat = labels.flatten()
for label in np.unique(labels_flat):
y_preds = preds_flat[labels_flat==label]
y_true = labels_flat[labels_flat==label]
print(f'Class: {label_dict_inverse[label]}')
print(f'Accuracy: {len(y_preds[y_preds==label])}/{len(y_true)}\n')
And here follows run_glue.py:
import random
from tqdm import tqdm
import torch
import numpy as np
# from tqdm.notebook import trange, tqdm
'''
This training code is based on the 'run_glue.py' script here:
https://github.com/huggingface/transformers/blob/5bfcd0485ece086ebcbed2d008813037968a9e58/examples/run_glue.py#L128
'''
# Just right before the actual usage select your hardware
device = torch.device('cuda') # or cpu
model = model.to(device) # send your model to your hardware
# Set the seed value all over the place to make this reproducible.
seed_val = 17
random.seed(seed_val)
np.random.seed(seed_val)
torch.manual_seed(seed_val)
torch.cuda.manual_seed_all(seed_val)
# We'll store a number of quantities such as training and validation loss, validation and timings.
def evaluate(dataloader_val):
'''
Put the model in evaluation mode--the dropout layers behave differently
during evaluation.
'''
model.eval()
loss_val_total = 0 # Tracking variables
predictions, true_vals = [], []
for batch in dataloader_val:
'''
Unpack this training batch from our dataloader.
As we unpack the batch, we'll also copy each tensor to the GPU using the
`to` method.
`batch` contains three pytorch tensors:
[0]: input ids
[1]: attention masks
[2]: labels
'''
batch = tuple(b.to(device) for b in batch)
inputs = {'input_ids': batch[0],
'attention_mask': batch[1],
'labels': batch[2],
}
'''
Tell pytorch not to bother with constructing the compute graph during
the forward pass, since this is only needed for backprop (training).
'''
with torch.no_grad():
outputs = model(**inputs)
'''
Perform a forward pass (evaluate the model on this training batch).
The documentation for this `model` function is here:
https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#transformers.BertForSequenceClassification
This will return the loss (rather than the model output)
because we have provided the `labels`.
It returns different numbers of parameters depending on what arguments
arge given and what flags are set. For our useage here, it returns
the loss (because we provided labels) and the "logits"--the model
outputs prior to activation.
'''
loss = outputs[0]
logits = outputs[1]
loss_val_total += loss.item() # Accumulate the validation loss.
# Move logits and labels to CPU
logits = logits.detach().cpu().numpy()
label_ids = inputs['labels'].cpu().numpy()
predictions.append(logits)
true_vals.append(label_ids)
loss_val_avg = loss_val_total/len(dataloader_val) # Calculate the average loss over all of the batches.
predictions = np.concatenate(predictions, axis=0)
true_vals = np.concatenate(true_vals, axis=0)
return loss_val_avg, predictions, true_vals
# ========================================
# Training
# ========================================
# For each epoch...
for epoch in tqdm(range(1, epochs+1)):
'''
Put the model into training mode. Don't be mislead--the call to
`train` just changes the *mode*, it doesn't *perform* the training.
`dropout` and `batchnorm` layers behave differently during training
vs. test (source: https://stackoverflow.com/questions/51433378/what-does-model-train-do-in-pytorch)
'''
model.train() # Put the model into training mode.
loss_train_total = 0 # Reset the total loss for this epoch.
progress_bar = tqdm(dataloader_train, desc='Epoch {:1d}'.format(epoch), leave=False, disable=False)
for batch in progress_bar:
model.zero_grad()
'''
Always clear any previously calculated gradients before performing a
backward pass. PyTorch doesn't do this automatically because
accumulating the gradients is "convenient while training RNNs".
(source: https://stackoverflow.com/questions/48001598/why-do-we-need-to-call-zero-grad-in-pytorch)
'''
batch = tuple(b.to(device) for b in batch)
'''
Unpack this training batch from our dataloader.
As we unpack the batch, we'll also copy each tensor to the GPU using the
`to` method.
`batch` contains three pytorch tensors:
[0]: input ids
[1]: attention masks
[2]: labels
'''
inputs = {'input_ids': batch[0], #.to(device)
'attention_mask': batch[1], #.to(device)
'labels': batch[2], #.to(device)
}
outputs = model(**inputs)
loss = outputs[0] # The call to `model` always returns a tuple, so we need to pull the loss value out of the tuple.
loss_train_total += loss.item() # Accumulate the training loss over all of the batches so that we can
# calculate the average loss at the end. `loss` is a Tensor containing a
# single value; the `.item()` function just returns the Python value
# from the tensor.
loss.backward() # Perform a backward pass to calculate the gradients.
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) # Clip the norm of the gradients to 1.0.
# This is to help prevent the "exploding gradients" problem.
# modified based on their gradients, the learning rate, etc.
optimizer.step() # Update parameters and take a step using the computed gradient.
# The optimizer dictates the "update rule"--how the parameters are
# modified based on their gradients, the learning rate, etc.
scheduler.step() # Update the learning rate.
progress_bar.set_postfix({'training_loss': '{:.3f}'.format(loss.item()/len(batch))})
torch.save(model.state_dict(), f'finetuned_BERT_epoch_{epoch}.model') # Save Model
tqdm.write(f'\nEpoch {epoch}') # Show running epoch
loss_train_avg = loss_train_total/len(dataloader_train) # Calculate the average loss over all of the batches.
tqdm.write(f'Training loss: {loss_train_avg}') # Show loss average
# ========================================
# Validation
# ========================================
# After the completion of each training epoch, measure our performance on
# our validation set.
# Record all statistics from this epoch.
val_loss, predictions, true_vals = evaluate(dataloader_validation)
val_f1 = f1_score_func(predictions, true_vals)
tqdm.write(f'Validation loss: {val_loss}')
tqdm.write(f'F1 Score (Weighted): {val_f1}')
| Scott from W&B here. Although you're not using the HuggingFace WandbCallback, you can still take advantage of wandb easily using our Python API.
All you need to do is call wandb.log({'val_loss': val_loss, 'train_loss': loss_train_avg}) with whatever you want to log, after you have called wandb.init before training.
Here's an example training loop:
wandb.init(
# Set entity to specify your username or team name
# ex: entity="carey",
# Set the project where this run will be logged
project="huggingface",
# Track hyperparameters and run metadata
config={
"learning_rate": 0.02,
"architecture": "BERT",
"dataset": "my-dataset",})
# This simple block simulates a training loop logging metrics
for x in range(50):
    loss, acc = ... # calculate these yourself
    # Log metrics from your script to W&B
    wandb.log({"acc": acc, "loss": loss})
# Mark the run as finished
wandb.finish()
If you want to log Datasets or Models, you can do that using wandb Artifacts.
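For example, logging a model checkpoint as an Artifact could look like this (the artifact name and file name below are placeholders):
artifact = wandb.Artifact('bert-classifier', type='model')
artifact.add_file('finetuned_BERT_epoch_1.model')
wandb.log_artifact(artifact)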
This Quickstart guide is a good place to start for more information:
W&B Quickstart
| https://stackoverflow.com/questions/69147788/ |
Sequential batch processing vs parallel batch processing? | In deep-learning model training, a batch of inputs is generally passed. For example, for training a model with a [512]-dimensional input feature vector and a batch size of 4, we pass a [4, 512]-dimensional input. I am curious what the logical significance is of passing the same input after flattening it across the batch and channel dimensions into [2048]. Logically the locality structure will be destroyed, but will it significantly speed up my implementation? And can it affect performance?
| In supervised learning, you would usually be working with data points (e.g. a feature vector or a multi-dimensional input such as an image) paired with some kind of ground-truth (a label for classifications tasks, or another multi-dimensional object altogether). Feeding to your model a flattened tensor containing multiple data points would not make sense in terms of supervision. Assuming you do an inference this way, what would be the supervision signal at the output level of your model? Would you combine the labels as well? All of this seem to depend heavily on the use case: is there some kind of temporal coherence between the elements of the batch?
Performance-wise, this has no implications whatsoever. Tensors are already 'flattened' by design since their memory is laid out in contiguous memory buffers. The idea of multi-dimensionality is an abstraction layer provided by those libraries (namely NumPy's arrays and Torch's tensors) to allow for easier and more flexible control over data.
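You can verify that flattening is essentially free: a flattened view shares the same underlying buffer as the original tensor:
>>> import torch
>>> t = torch.randn(4, 512)
>>> f = t.view(-1)                # shape (2048,), no data copied
>>> f.data_ptr() == t.data_ptr()  # same memory buffer
True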
| https://stackoverflow.com/questions/69162442/ |
How to make Python libraries accessible in PYTHONPATH? | I am trying to implement StyleGAN2-ADA PyTorch: https://github.com/NVlabs/stylegan2-ada-pytorch.
On the GitHub repo, it states the following:
The above code requires torch_utils and dnnlib to be accessible via
PYTHONPATH. It does not need source code for the networks themselves β
their class definitions are loaded from the pickle via
torch_utils.persistence.
What does this mean, and how can I do this?
| Let's say you have the source code of this repo cloned to /somepath/stylegan2-ada-pytorch which means that the directories you quoted are at /somepath/stylegan2-ada-pytorch/torch_utils and /somepath/stylegan2-ada-pytorch/dnnlib, respectively.
Now let's say you have a Python script that needs to access this code. It can be anywhere on your machine, as long as you add this to the top of the script:
import os
import sys
#save the literal filepath to the repo root as a string
repo_path = os.path.join('/somepath', 'stylegan2-ada-pytorch')
#add that string to the python path; with the repo root on sys.path,
#both `import torch_utils` and `import dnnlib` resolve (and so does
#torch_utils.persistence, which the pickles need)
sys.path.append(repo_path)
Note that this only adds that location to the path for the duration of the running Python process, so you need it at the top of any Python script that intends to use those libraries.
| https://stackoverflow.com/questions/69169145/ |
Pytorch customized dataloader | I am trying to train a classifier with MNIST dataset using pytorch-lightening.
import pytorch_lightning as pl
from torchvision import transforms
from torchvision.datasets import MNIST, SVHN
from torch.utils.data import DataLoader, random_split
class MNISTData(pl.LightningDataModule):
def __init__(self, data_dir='./', batch_size=256):
super().__init__()
self.data_dir = data_dir
self.batch_size = batch_size
self.transform = transforms.ToTensor()
def download(self):
MNIST(self.data_dir, train=True, download=True)
MNIST(self.data_dir, train=False, download=True)
def setup(self, stage=None):
if stage == 'fit' or stage is None:
mnist_train = MNIST(self.data_dir, train=True, transform=self.transform)
self.mnist_train, self.mnist_val = random_split(mnist_train, [55000, 5000])
if stage == 'test' or stage is None:
self.mnist_test = MNIST(self.data_dir, train=False, transform=self.transform)
def train_dataloader(self):
mnist_train = DataLoader(self.mnist_train, batch_size=self.batch_size)
return mnist_train
def val_dataloader(self):
mnist_val = DataLoader(self.mnist_val, batch_size=self.batch_size)
return mnist_val
def test_dataloader(self):
mnist_test = DataLoader(self.mnist_test, batch_size=self.batch_size)
After calling MNISTData().setup(), I obtained MNISTData().mnist_train, MNISTData().mnist_val and MNISTData().mnist_test, whose lengths are 55000, 5000 and 10000, each of type torch.utils.data.dataset.Subset.
But when I call the dataloaders via MNISTData().train_dataloader, MNISTData().val_dataloader and MNISTData().test_dataloader, I only get DataLoaders with 215, 20 and None items in them.
Does someone know the reason, or how to fix the problem?
| As I told in the comments, and Ivan posted in his answer, there was missing return statement:
def test_dataloader(self):
mnist_test = DataLoader(self.mnist_test, batch_size=self.batch_size)
return mnist_test # <<< missing return
As per your comment, if we try:
a = MNISTData()
# skip download, assuming you already have it
a.setup()
b, c, d = a.train_dataloader(), a.val_dataloader(), a.test_dataloader()
# len(b)=215, len(c)=20, len(d)=40
I think your question is why the length of b, c, d are different from the length of the datasets. The answer is that the len() of a DataLoader is equal to the number of batches, not the number of samples, therefore:
import math
batch_size = 256
len(b) = math.ceil(55000 / batch_size) = 215
len(c) = math.ceil(5000 / batch_size) = 20
len(d) = math.ceil(10000 / batch_size) = 40
BTW, we're using math.ceil because DataLoader has drop_last=False by default, otherwise it would be math.floor.
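For example, a rough sketch of the drop_last effect on the test set (reusing a from above):
d_drop = DataLoader(a.mnist_test, batch_size=256, drop_last=True)
# len(d_drop) == math.floor(10000 / 256) == 39, since the incomplete final batch is discarded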
| https://stackoverflow.com/questions/69170854/ |
Is it possible to retrain a trained model on fewer classes? | I am working on image detection, where I am detecting and classifying an image into one of 14 different thoracic diseases (a multi-label classification problem).
The model is trained on the NIH dataset, with which I get 80% AUC. Now I want to improve the model by training it on a second dataset. But the main problem is that the two datasets' classes do not match.
The second dataset contains 10 classes that overlap with the first dataset with which I trained the model.
Questions:
Is it possible to retrain a model on fewer classes?
Will retraining my model on a new dataset impact the AUC of other non-similar classes?
How big is the chance that this will improve the model?
The model and code are based on fast.ai and PyTorch.
| Based on discussion in the comments:
Yes, if the classes overlap (with different data points from a different dataset) you can train the same classifier layer with two datasets. This would mean in one of the datasets, 4 out of 14 classes are simply not trained. What this means is that you are basically making your existing 14-class dataset more imbalanced by adding more samples for only 10 out of 14 classes.
Training on 10 out of 14 classes will introduce a forgetting effect on the 4 classes that are not trained additionally. You can counteract this somewhat by using the suggested alternate training, or by combining all the data into one big dataset, but this does not solve the fact that the new combined dataset is then probably more imbalanced than the original 14-class dataset. Unless the 4 classes not in the 10-class dataset are for some reason over-represented in the 14-class dataset, but I assume you're not going to get that lucky.
Because both your dataset and your model will focus heavier on 10 out of the 14 classes, your accuracy may go up. However, this means that the 4 classes that do not overlap are simply being ignored in favor of higher accuracy on the remaining 10 classes. On paper, the numbers may look better, but in practice you're making your model less useful for a 14-class classification task.
| https://stackoverflow.com/questions/69175045/ |
DropPath in TIMM seems like a Dropout? | The code below (taken from here) seems to implement only a simple Dropout, not DropPath or DropConnect. Is that true?
def drop_path(x, drop_prob: float = 0., training: bool = False):
"""Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
This is the same as the DropConnect impl I created for EfficientNet, etc networks, however,
the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper...
See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for
changing the layer and argument names to 'drop path' rather than mix DropConnect as a layer name and use
'survival rate' as the argument.
"""
if drop_prob == 0. or not training:
return x
keep_prob = 1 - drop_prob
shape = (x.shape[0],) + (1,) * (x.ndim - 1) # work with diff dim tensors, not just 2D ConvNets
random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device)
random_tensor.floor_() # binarize
output = x.div(keep_prob) * random_tensor
return output
| No, it is different from Dropout:
import torch
from torch.nn.functional import dropout
torch.manual_seed(2021)
def drop_path(x, drop_prob: float = 0., training: bool = False):
if drop_prob == 0. or not training:
return x
keep_prob = 1 - drop_prob
shape = (x.shape[0],) + (1,) * (x.ndim - 1)
random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device)
random_tensor.floor_() # binarize
output = x.div(keep_prob) * random_tensor
return output
x = torch.rand(3, 2, 2, 2)
# DropPath
d1_out = drop_path(x, drop_prob=0.33, training=True)
# Dropout
d2_out = dropout(x, p=0.33, training=True)
Let's compare the outputs (I removed the line break between channel dimension for readability):
# DropPath
print(d1_out)
# tensor([[[[0.1947, 0.7662],
# [1.1083, 1.0685]],
# [[0.8515, 0.2467],
# [0.0661, 1.4370]]],
#
# [[[0.0000, 0.0000],
# [0.0000, 0.0000]],
# [[0.0000, 0.0000],
# [0.0000, 0.0000]]],
#
# [[[0.7658, 0.4417],
# [1.1692, 1.1052]],
# [[1.2014, 0.4532],
# [1.4840, 0.7499]]]])
# Dropout
print(d2_out)
# tensor([[[[0.1947, 0.7662],
# [1.1083, 1.0685]],
# [[0.8515, 0.2467],
# [0.0661, 1.4370]]],
#
# [[[0.0000, 0.1480],
# [1.2083, 0.0000]],
# [[1.2272, 0.1853],
# [0.0000, 0.5385]]],
#
# [[[0.7658, 0.0000],
# [1.1692, 1.1052]],
# [[1.2014, 0.4532],
# [0.0000, 0.7499]]]])
As you can see, they are different. DropPath is dropping an entire sample from the batch, which effectively results in stochastic depth when used as in Eq. 2 of their paper. On the other hand, Dropout is dropping random values, as expected (from the docs):
During training, randomly zeroes some of the elements of the input tensor with probability p using samples from a Bernoulli distribution. Each channel will be zeroed out independently on every forward call.
Also note that both scale the output values based on the probability, i.e., the non-zeroed out elements are identical for the same p.
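For completeness, here is a minimal sketch of how drop_path is typically wired into a residual block to obtain stochastic depth (the block itself is illustrative, not from timm):
class ResidualBlock(torch.nn.Module):
    def __init__(self, dim, drop_prob=0.1):
        super().__init__()
        self.fn = torch.nn.Linear(dim, dim)  # stand-in for the real sub-layer
        self.drop_prob = drop_prob

    def forward(self, x):
        # the residual branch is dropped per sample while the skip path
        # always survives, which is what makes the depth "stochastic"
        return x + drop_path(self.fn(x), self.drop_prob, self.training)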
| https://stackoverflow.com/questions/69175642/ |
Can't install GPU-enabled Pytorch in Conda environment from environment.yml | I'm on Ubuntu 20.04 LTS with CUDA 11.1 installed (and working, with PATH and LD_LIBRARY_PATH configured correctly), and I'm trying to define a reusable conda environment (i.e., in an environment.yml file) that successfully installs PyTorch with CUDA support.
However, when I use the environment file, I get a message that Torch wasn't compiled with CUDA support:
Python 3.8.10 | packaged by conda-forge | (default, May 11 2021, 07:01:05)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> device = torch.device("cuda:0")
>>> t = torch.tensor(device=device, data=[0,1,2,3])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jdr2160/anaconda3/envs/foo/lib/python3.8/site-packages/torch/cuda/__init__.py", line 166, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
My environment.yml is pretty bare-bones:
name: foo
channels:
- conda-forge
- nvidia
- pytorch
dependencies:
- cudatoolkit=11.1
- python=3.8
- pytorch
When I create an 'empty' python 3.8 environment and install the Conda packages from the command line instead of from an environment file, everything works fine:
$ conda env create --name bar python=3.8
...
$ conda activate bar
$ conda install pytorch cudatoolkit=11.1 -c pytorch -c nvidia
...
$ python
Python 3.8.10 | packaged by conda-forge | (default, May 11 2021, 07:01:05)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> device = torch.device("cuda:0")
>>> t = torch.tensor(device=device, data=[0,1,2,3])
>>>
Can anyone tell what's going on here? It seems that Conda doesn't see the cudatoolkit=11.1 dependency while installing PyTorch from the environment file, but I have no idea how to fix it.
| Just a few minutes after posting this question, I was able to figure out the solution. It turns out that it has to do with prioritizing Conda channels. The solution (which isn't well-documented by Anaconda) is to specify the correct channel for cudatoolkit and pytorch in environment.yml:
name: foo
channels:
- conda-forge
- nvidia
- pytorch
dependencies:
- nvidia::cudatoolkit=11.1
- python=3.8
- pytorch::pytorch
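After creating the environment, a quick check confirms the CUDA build was picked up:
import torch
print(torch.cuda.is_available())  # True if the CUDA-enabled build was installed
print(torch.version.cuda)         # should report 11.1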
| https://stackoverflow.com/questions/69180740/ |
Pytorch DataLoader Is Very Slow | I have a problem with the DataLoader from PyTorch, because it is very slow.
I did a test to show this, here is the code:
data = np.load('slices.npy')
data = np.reshape(data, (-1, 1225))
data = torch.FloatTensor(data).to('cuda')
print(data.shape)
# ==> torch.Size([273468, 1225])
class UnlabeledTensorDataset(TensorDataset):
def __init__(self, data_tensor):
self.data_tensor = data_tensor
self.samples = data_tensor.shape[0]
def __getitem__(self, index):
return self.data_tensor[index]
def __len__(self):
return self.samples
test_set = UnlabeledTensorDataset(data)
test_loader = DataLoader(test_set, batch_size=data.shape[0])
start = datetime.datetime.now()
with torch.no_grad():
for batch in test_loader:
print(batch.shape) # ==> torch.Size([273468, 1225])
y_pred = model(batch)
loss = torch.sqrt(criterion(y_pred, batch))
avg_loss = loss
print(round((datetime.datetime.now() - start).total_seconds() * 1000, 2))
# ==> 1527.57 (milliseconds) !!!!!!!!!!!!!!!!!!!!!!!!
start = datetime.datetime.now()
with torch.no_grad():
print(data.shape) # ==> torch.Size([273468, 1225])
y_pred = model(data)
loss = torch.sqrt(criterion(y_pred, data))
avg_loss = loss
print(round((datetime.datetime.now() - start).total_seconds() * 1000, 2))
# ==> 2.0 (milliseconds) !!!!!!!!!!!!!!!!!!!!!!!!
I would like to use the DataLoader, but I need a way to fix the slowness. Does anyone know why this is happening?
| The time difference comes from how the DataLoader assembles its batch, not from the model itself:
Even though batch_size=data.shape[0] produces a single batch, the DataLoader still fetches all 273,468 samples one at a time through __getitem__ and then collates them into a new tensor, which is expensive.
The direct call skips that overhead entirely and runs a single inference on the already-materialized tensor.
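If you want to keep a DataLoader but avoid the per-sample overhead, one option (a sketch, reusing the test_set above) is to hand whole index lists to __getitem__:
from torch.utils.data import BatchSampler, SequentialSampler

# each element yielded by the sampler is a list of indices, so __getitem__
# slices the tensor once per batch instead of once per sample
sampler = BatchSampler(SequentialSampler(test_set), batch_size=len(test_set), drop_last=False)
fast_loader = DataLoader(test_set, sampler=sampler, batch_size=None)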
| https://stackoverflow.com/questions/69185093/ |
How to convert a tensor into a list of tensors | How can I convert a tensor into a list of tensors? For instance: P1 is a torch.Tensor with 60 values in it, and I want a list containing 60 tensors.
| You can coerce the torch.Tensor to a list with list:
>>> P1 = torch.rand(60)
>>> list(P1)
[tensor(0.5987),
tensor(0.5321),
tensor(0.6590),
...
tensor(0.1381)]
This works with multi-dimensional tensors too:
>>> P1 = torch.rand(60, 2)
>>> list(P1)
[tensor([0.4675, 0.0430]),
tensor([0.2175, 0.6271]),
tensor([0.3378, 0.8516]),
...,
tensor([0.5099, 0.3411])]
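If you want plain Python numbers rather than zero-dimensional tensors, Tensor.tolist converts the values themselves:
>>> P1.tolist()
[[0.4675..., 0.0430...], [0.2175..., 0.6271...], ...]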
| https://stackoverflow.com/questions/69186799/ |
PyTorch's nn.Conv2d with half-precision (fp16) is slower than fp32 | I have found that a single 2D convolution operation with float16 is slower than with float32.
I am working with a GTX 1660 Ti, torch 1.8.0+cu111 and CUDA 11.1 (I also tried torch 1.9.0).
Dtype   in=1,out=64   in=1,out=128   in=64,out=128
fp16    3532 it/s     632 it/s       599 it/s
fp32    2160 it/s     1311 it/s      925 it/s
I am measuring the convolution speed with the following code.
inputfp16 = torch.arange(0,ch_in*64*64).reshape(1, ch_in, 64, 64).type(torch.float16).to('cuda:0')
inputfp32 = torch.arange(0,ch_in*64*64).reshape(1, ch_in, 64, 64).type(torch.float32).to('cuda:0')
conv2d_16 = nn.Conv2d(ch_in,ch_out, 3, 1, 1).eval().to('cuda:0').half()
conv2d_32 = nn.Conv2d(ch_in,ch_out, 3, 1, 1).eval().to('cuda:0')
for i in tqdm(range(0, 50)):
out = conv2d_16(inputfp16)
out.cpu()
for i in tqdm(range(0, 50)):
out = conv2d_32(inputfp32)
out.cpu()
It would be great if you could let me know whether you have had the same problem, and even better if you can provide a solution.
| Well, the problem lies in the fact that mixed/half-precision tensor calculations are accelerated via Tensor Cores.
Theoretically (and practically), Tensor Cores are designed to handle lower-precision matrix calculations where, for instance, the product of two fp16 matrices is accumulated in fp32.
Since the GTX 1660 Ti does not come with Tensor Cores, we can conclude that CUDA won't be able to accelerate mixed/half precision on that GPU.
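As a side note, CUDA kernels launch asynchronously, so it/s numbers measured as in the question can be misleading; a fairer timing sketch synchronizes before reading the clock:
import time
import torch

def bench(fn, x, iters=50):
    torch.cuda.synchronize()   # make sure nothing is still queued
    start = time.time()
    for _ in range(iters):
        fn(x)
    torch.cuda.synchronize()   # wait for all launched kernels to finish
    return iters / (time.time() - start)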
| https://stackoverflow.com/questions/69188051/ |
LSTM.weight_ih_l[k] dimensions with proj_size | According to the PyTorch LSTM documentation:
~LSTM.weight_ih_l[k] – the learnable input-hidden weights of the kth layer (W_ii|W_if|W_ig|W_io), of shape (4*hidden_size, input_size) for k = 0. Otherwise, the shape is (4 * hidden_size, num_directions * hidden_size)
My doubt is: why, for k > 0, is the shape of each weight (4*hidden_size, num_directions * hidden_size)? According to me, shouldn't it be (4*hidden_size, num_directions * proj_size), since each layer above the lowest receives as input the output of the layer below, which has shape (L, N, num_directions*proj_size)?
UPDATE
Indeed, the documentation mentioned the wrong shapes. It will be fixed soon.
Github Issue
| This is now fixed. The OP opened the Issue #65053 that was fixed by PR #65102 (commit 83878e1).
It just happens that the documentation isn't providing all the details in this case, and you're correct. In fact, you can check in the source code that the shape of W_ih is (4*hidden_size, num_directions * proj_size) when proj_size > 0 for k > 0:
# [...]
if mode == 'LSTM':
gate_size = 4 * hidden_size
elif mode == 'GRU':
gate_size = 3 * hidden_size
elif mode == 'RNN_TANH':
gate_size = hidden_size
elif mode == 'RNN_RELU':
gate_size = hidden_size
else:
raise ValueError("Unrecognized RNN mode: " + mode)
self._flat_weights_names = []
self._all_weights = []
for layer in range(num_layers):
for direction in range(num_directions):
real_hidden_size = proj_size if proj_size > 0 else hidden_size
layer_input_size = input_size if layer == 0 else real_hidden_size * num_directions
w_ih = Parameter(torch.empty((gate_size, layer_input_size), **factory_kwargs))
# [...]
As you can see, w_ih has the shape (gate_size, layer_input_size), where:
gate_size is 4 * hidden_size for LSTM, and
layer_input_size is
input_size if layer == 0 (layer is equivalent to k in the docs), else
real_hidden_size * num_directions for k > 0, and
real_hidden_size = proj_size if proj_size > 0, else it is hidden_size.
That is: if proj_size > 0 and layer > 0, layer_input_size = proj_size * num_directions, and the shape of w_ih will be equal to (4 * hidden_size, proj_size * num_directions).
It is worth noting that they do have the following in the documentation:
If proj_size > 0 is specified, LSTM with projections will be used. This changes the LSTM cell in the following way. First, the dimension of h_t will be changed from hidden_size to proj_size (dimensions of W_hi will be changed accordingly). Second, the output hidden state of each layer will be multiplied by a learnable projection matrix: h_t = W_hr * h_t. Note that as a consequence of this, the output of LSTM network will be of different shape as well.
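As a quick empirical check of the shape derived above (the sizes are illustrative):
import torch

# with proj_size=5, layer 1 receives num_directions * proj_size = 5 inputs,
# so weight_ih_l1 should be (4 * hidden_size, 5)
lstm = torch.nn.LSTM(input_size=10, hidden_size=20, num_layers=2, proj_size=5)
print(lstm.weight_ih_l1.shape)  # torch.Size([80, 5])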
| https://stackoverflow.com/questions/69190587/ |
Mismatched size on BertForSequenceClassification from Transformers and multiclass problem | I just trained a BERT model on a dataset composed of products and labels (departments) for an e-commerce website. It's a multiclass problem. I used BertForSequenceClassification to predict the department for each product. I split it into train and evaluation sets, I used a DataLoader from PyTorch, and I got a good score with no overfitting.
Now I want to try it on a new dataset to check how it works on unseen data. But I can't manage to load the model and apply it to the new dataset. I get the following error:
RuntimeError: Error(s) in loading state_dict for BertForSequenceClassification:
size mismatch for classifier.weight: copying a param with shape torch.Size([59, 1024]) from checkpoint, the shape in current model is torch.Size([105, 1024]).
size mismatch for classifier.bias: copying a param with shape torch.Size([59]) from checkpoint, the shape in current model is torch.Size([105]).
I see that the problem is probably a mismatch between the label sizes of the two datasets. I've searched a bit and found a recommendation to use ignore_mismatched_sizes=True as an argument for from_pretrained. But I keep receiving the same error.
Here is part of my code when trying to predict on unseen data:
from transformers import BertForSequenceClassification
# Just right before the actual usage select your hardware
device = torch.device('cuda') # or cpu
model = model.to(device) # send your model to your hardware
model = BertForSequenceClassification.from_pretrained("neuralmind/bert-large-portuguese-cased",
num_labels=len(label_dict),
output_attentions=False,
output_hidden_states=False,
ignore_mismatched_sizes=True)
model.to(device)
model.load_state_dict(torch.load('finetuned_BERT_epoch_2_full-Copy1.model', map_location=torch.device('cuda')))
_, predictions, true_vals = evaluate(dataloader_validation)
accuracy_per_class(predictions, true_vals)
Could someone help me figure out how to deal with this? I don't know what more I can do!
I'm very grateful for any help!
| Your new dataset has 105 classes while your model was trained for 59 classes. As you have already mentioned, you can use ignore_mismatched_sizes to load your model. This parameter will load the the embedding and encoding layers of your model, but will randomly initialize the classification head:
model = BertForSequenceClassification.from_pretrained("finetuned_BERT_epoch_2_full-Copy1.model",
num_labels=105,
output_attentions=False,
output_hidden_states=False,
ignore_mismatched_sizes=True)
In case you want to keep the classification layer of the 59 labels and add 46 labels, you can refer to this answer. Please also note the comments of this answer, because this approach does not provide any meaningful results due to the random initialization for the new labels.
| https://stackoverflow.com/questions/69194640/ |
Unable to access batch items in iterator - Torchtext Attribute Error: 'Field' object has no attribute 'vocab' | I am unable to access batch items from the Iterator object in Torchtext.
Following is the error
AttributeError: 'Field' object has no attribute 'vocab'
Code to Recreate Problem
#Access to Drive
from google.colab import drive
drive.mount ('/content/gdrive')
import numpy as np
import spacy
spacy_en = spacy.load("en")
def tokenize(text):
return [tok.text for tok in spacy_en.tokenizer(text)]
import torch
from torchtext.legacy.data import Field, LabelField, Iterator
TEXT = Field(sequential=True, use_vocab=True, tokenize=tokenize, lower=True)
LABEL = LabelField(dtype = torch.long, use_vocab=False)
fields = {"text": ("txt", TEXT), "label": ("lbl", LABEL)}
from torchtext.legacy.data import TabularDataset
train_data, test_data = TabularDataset.splits(path="/content/gdrive/MyDrive/Colab Notebooks/",
train="Strong_Train.csv",
test="Strong_Test.csv", format="csv", fields=fields)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
TEXT = Field(sequential=True, use_vocab=True, tokenize=tokenize, lower=True)
LABEL = LabelField(dtype = torch.long, use_vocab=False)
TEXT.build_vocab(train_data, max_size=10000 )
LABEL.build_vocab(train_data)
train_iterator = Iterator(train_data, batch_size=1, device=device)
for batch in train_iterator:
print('hello')
My Analysis of Problem
As per the error description, 'Field' object is causing the problem; TEXT is the Field object in this code.
type(TEXT)
Output: torchtext.legacy.data.field.Field
Since the error says,
Field object has no attribute 'vocab'
accessing 'vocab' should give an error, but it doesn't give an error
TEXT.vocab
Output: <torchtext.legacy.vocab.Vocab at 0x7fce37b91950>
I am also able to get the length of vocab
len(TEXT.vocab)
Output: 2
So the question remains: if the 'vocab' attribute exists on the Field object, why am I getting this error, and how do I resolve it?
Environment Specifics
Running the code on Google Colab
Torchtext version is 0.10.0
| After you set your device, you are redefining TEXT and LABEL. That isn't necessary, and it is likely the source of the error: the dataset was built with the original Field objects, but build_vocab is then called on the new ones, so the fields attached to the dataset never get a vocab. Also, you are setting use_vocab=False for your LabelField, and then building a vocab for it.
From the torchtext 0.8 docs:
use_vocab: Whether to use a Vocab object. If False, the data in this field should already be numerical. Default: True.
I would start by clearing up those issues and see if that resolves your error.
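A sketch of the corrected setup (keeping a single pair of field objects, so that build_vocab runs on the same Fields the dataset references):
TEXT = Field(sequential=True, use_vocab=True, tokenize=tokenize, lower=True)
LABEL = LabelField(dtype=torch.long)  # leave use_vocab=True so labels get a vocab
fields = {"text": ("txt", TEXT), "label": ("lbl", LABEL)}

train_data, test_data = TabularDataset.splits(path="...", train="Strong_Train.csv",
                                              test="Strong_Test.csv", format="csv", fields=fields)

# do NOT redefine TEXT/LABEL here; build the vocabs on the same objects
TEXT.build_vocab(train_data, max_size=10000)
LABEL.build_vocab(train_data)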
| https://stackoverflow.com/questions/69205178/ |
How do I split an iterable dataset into training and test datasets? | I have an iterable dataset object with all of my data files. How can I split it into train and validation sets? I have seen a few solutions for custom datasets, but an iterable dataset does not support the len() operator.
torch.utils.data.random_split() and torch.utils.data.SubsetRandomSampler() don't work.
def __init__(self):
bla bla
def __iter__(self):
bla bla
yield batch
| Technically you can just set a goal ratio, and start collecting items into two lists randomly using that ratio. The result won't be perfect, but asymptotically it will keep the ratio.
The example is JavaScript, as it can be run here:
{
let a = [],
b = [];
function addsample(x) {
if (Math.random() < 0.2) // aims for 20%-80% split
a.push(x);
else
b.push(x);
return {a, b};
}
}
for(let i=0;i<20;i++)
console.log(JSON.stringify(addsample(i)));
If you run the snippet a couple of times, you will see that the output varies a lot, but even with such a small sample size it's quite visible that usually there is a suitable split where a has around 1/4 the size of b. Sometimes it even manages to end up exactly 4:16, but many times it will be something else. And there can be "unlucky" runs too, when a has more elements than b at the end.
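For completeness, a minimal Python sketch of the same idea applied to an iterable stream (the ratio and seed are assumptions):
import random

def split_stream(iterable, val_ratio=0.2, seed=0):
    rng = random.Random(seed)
    train, val = [], []
    for sample in iterable:
        # route each sample independently; the split ratio holds asymptotically
        (val if rng.random() < val_ratio else train).append(sample)
    return train, val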
| https://stackoverflow.com/questions/69206153/ |
RuntimeError Pytorch Unable to find a valid cuDNN algorithm to run convolution | I want to test a GitHub repo for my work:
https://github.com/tufts-ml/GAN-Ensemble-for-Anomaly-Detection
so I did a
git clone https://github.com/tufts-ml/GAN-Ensemble-for-Anomaly-Detection
Unfortunately, I have an error when I do the command
sh experiments/run_mnist_en_fanogan.sh
(from the github README)
sh experiments/run_mnist_en_fanogan.sh
/home/svetlana/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:106: UserWarning:
NVIDIA GeForce RTX 3080 Laptop GPU with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
If you want to use the NVIDIA GeForce RTX 3080 Laptop GPU GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
/home/svetlana/.local/lib/python3.9/site-packages/torchvision/datasets/mnist.py:498: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:180.)
return torch.from_numpy(parsed.astype(m[2], copy=False)).view(*s)
Traceback (most recent call last):
File "/home/svetlana/Documents/git/GAN-Ensemble-for-Anomaly-Detection/train.py", line 30, in <module>
main()
File "/home/svetlana/Documents/git/GAN-Ensemble-for-Anomaly-Detection/train.py", line 24, in main
model.train()
File "/home/svetlana/Documents/git/GAN-Ensemble-for-Anomaly-Detection/models/f_anogan.py", line 155, in train
self.gan_training(epoch)
File "/home/svetlana/Documents/git/GAN-Ensemble-for-Anomaly-Detection/models/f_anogan.py", line 93, in gan_training
fake_imgs = self.net_Gds[i_G](z)
File "/home/svetlana/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/svetlana/Documents/git/GAN-Ensemble-for-Anomaly-Detection/models/networks.py", line 175, in forward
output = self.main(input)
File "/home/svetlana/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/svetlana/.local/lib/python3.9/site-packages/torch/nn/modules/container.py", line 139, in forward
input = module(input)
File "/home/svetlana/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/svetlana/.local/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 916, in forward
return F.conv_transpose2d(
RuntimeError: Unable to find a valid cuDNN algorithm to run convolution
I thought my installation was OK, but now I have doubts. This is my installation:
Python 3.9.6 (default, Jun 30 2021, 10:22:16)
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Wed_Jul_14_19:41:19_PDT_2021
Cuda compilation tools, release 11.4, V11.4.100
Build cuda_11.4.r11.4/compiler.30188945_0
import torch
print(torch.__version__)
1.9.0+cu102
I installed cudnn-11.4 from the NVIDIA website (https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html). I don't know the command to check the version; I tried this one:
cat /opt/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2
but it returns nothing
I tried solutions found here: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize,
without success (to show VRAM usage, I used nvtop)
| @Berriel
You're right, I was focused on the error.
To solve my problem, I did
pip uninstall torch torchvision torchaudio
Then
pip3 install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
According to
https://pytorch.org/get-started/locally/
(this link is from the warning message)
| https://stackoverflow.com/questions/69209100/ |
What is the difference between torch.nn.Softmax, torch.nn.functional.softmax, torch.softmax and torch.nn.functional.log_softmax | I tried to find documentation but cannot find anything about torch.softmax.
What is the difference among torch.nn.Softmax, torch.nn.functional.softmax, torch.softmax and torch.nn.functional.log_softmax?
Examples are appreciated.
| import torch
x = torch.rand(5)
x1 = torch.nn.Softmax(dim=0)(x)
x2 = torch.nn.functional.softmax(x, dim=0)
x3 = torch.nn.functional.log_softmax(x, dim=0)
print(x1)
print(x2)
print(torch.log(x1))
print(x3)
tensor([0.2740, 0.1955, 0.1519, 0.1758, 0.2029])
tensor([0.2740, 0.1955, 0.1519, 0.1758, 0.2029])
tensor([-1.2946, -1.6323, -1.8847, -1.7386, -1.5952])
tensor([-1.2946, -1.6323, -1.8847, -1.7386, -1.5952])
torch.nn.Softmax and torch.nn.functional.softmax give identical outputs; one is a class (a PyTorch module), the other is a function.
log_softmax applies log after applying softmax.
NLLLoss takes log-probabilities (log(softmax(x))) as input, so you need log_softmax for NLLLoss; log_softmax is also numerically more stable, which usually yields better results.
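A small illustration of the stability point (the values are chosen to force the overflow):
x = torch.tensor([1000.0, 0.0])
print(torch.log(torch.softmax(x, dim=0)))         # tensor([0., -inf])
print(torch.nn.functional.log_softmax(x, dim=0))  # tensor([0., -1000.])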
| https://stackoverflow.com/questions/69217305/ |
Use of torch.stack() | t1 = torch.tensor([1,2,3])
t2 = torch.tensor([4,5,6])
t3 = torch.tensor([7,8,9])
torch.stack((t1,t2,t3),dim=1)
When using torch.stack(), I can't understand how the stacking is done for different values of dim.
Here the stacking is done along columns, but I can't understand the details of how it is done. It becomes more complicated when dealing with 2-D or 3-D tensors.
tensor([[1, 4, 7],
[2, 5, 8],
[3, 6, 9]])
| Imagine have n tensors. If we stay in 3D, those correspond to volumes, namely rectangular cuboids. Stacking corresponds to combining those n volumes on an additional dimension: here a 4th dimension is added to host the n 3D volumes. This operation is in clear contrast with concatenation, where the volumes would be combined on one of the existing dimensions. So concatenation of three-dimensional tensors would result in a 3D tensor.
Where you choose to perform the stacking defines along which new dimension the stack will take place. In the examples below, the newly created dimension is last, hence the idea of an "added dimension" makes more sense. Stacking on different axes in turn affects the resulting tensor shape.
For the 1D case, for instance, it can also happen on the first axis:
>>> x_1d = list(torch.empty(3, 2)) # 3 lines
>>> torch.stack(x_1d, 0).shape # axis=0 stacking
torch.Size([3, 2])
>>> torch.stack(x_1d, 1).shape # axis=1 stacking
torch.Size([2, 3])
Similarly for two-dimensional inputs:
>>> x_2d = list(torch.empty(3, 2, 2)) # 3 2x2-squares
>>> torch.stack(x_2d, 0).shape # axis=0 stacking
torch.Size([3, 2, 2])
>>> torch.stack(x_2d, 1).shape # axis=1 stacking
torch.Size([2, 3, 2])
>>> torch.stack(x_2d, 2).shape # axis=2 stacking
torch.Size([2, 2, 3])
With this state of mind, you can intuitively extend the operation to n-dimensional tensors.
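For contrast with concatenation, combining the same three squares along an existing dimension produces no new axis:
>>> torch.cat(x_2d, 0).shape  # axis=0 concatenation of three 2x2 squares
torch.Size([6, 2])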
| https://stackoverflow.com/questions/69220221/ |
Pytorch Tensor - How to get the index of a tensor given a multidimensional tensor | I have the following tensor, let's call it lookup_table:
tensor([266, 103, 84, 12, 32, 34, 1, 523, 22, 136, 268, 432, 53, 63,
201, 51, 164, 69, 31, 42, 122, 131, 119, 36, 245, 60, 28, 81,
9, 114, 105, 3, 41, 86, 150, 79, 104, 120, 74, 420, 39, 427,
40, 59, 24, 126, 202, 222, 145, 429, 43, 30, 38, 55, 10, 141,
85, 121, 203, 240, 96, 7, 64, 89, 127, 236, 117, 99, 54, 90,
57, 11, 21, 62, 82, 25, 267, 75, 111, 518, 76, 56, 20, 2,
61, 516, 80, 78, 555, 246, 133, 497, 33, 421, 58, 107, 92, 68,
13, 113, 235, 875, 35, 98, 102, 27, 14, 15, 72, 37, 16, 50,
517, 134, 223, 163, 91, 44, 17, 412, 18, 48, 23, 4, 29, 77,
6, 110, 67, 45, 161, 254, 112, 8, 106, 19, 498, 101, 5, 157,
83, 350, 154, 238, 115, 26, 142, 143])
And I have another tensor, let's call it data, which looks like this:
tensor([[517, 235, 236, 76, 81, 25, 110, 59, 245, 39],
[523, 114, 350, 246, 30, 222, 39, 517, 106, 2],
[ 35, 235, 120, 99, 266, 63, 236, 133, 412, 38],
[134, 2, 497, 21, 78, 60, 142, 498, 24, 89],
[ 60, 111, 120, 145, 91, 141, 164, 81, 350, 55]])
Now I want something which looks similar to this:
tensor([112, 100, ..., 40],
[7, 29, ..., 2],
..., ])
I want to use my data tensor to get the index of the lookup table.
Basically I want to vectorize this:
(lookup_table == data).nonzero()
So that this works for multidimensional arrays.
I have read these, but they do not work for my case:
How Pytorch Tensor get the index of specific value
How Pytorch Tensor get the index of elements?
Pytorch tensor - How to get the indexes by a specific tensor
EDIT:
I am basically searching for an optimized/vectorized version of this:
x_data = torch.stack([(lookup_table == data[0][i]).nonzero(as_tuple=False) for i in range(len(data[0]))]).flatten().unsqueeze(0)
print(x_data.size())
for o in range(1, len(data)):
    x_data = torch.cat((x_data, torch.stack([(lookup_table == data[o][i]).nonzero(as_tuple=False) for i in range(len(data[o]))]).flatten().unsqueeze(0)), dim=0)
EDIT 2 Minimal example:
We have the data tensor:
data = torch.Tensor([
[523, 114, 350, 246, 30, 222, 39, 517, 106, 2],
[ 35, 235, 120, 99, 266, 63, 236, 133, 412, 38],
[555, 104, 14, 81, 55, 497, 222, 64, 57, 131]
])
And we have the lookup_table tensor, see above.
If we apply this code to the 2 tensors:
# convert champion keys into index notation
x_data = torch.stack([(lookup_table == data[0][i]).nonzero(as_tuple=False) for i in range(len(data[0]))]).flatten().unsqueeze(0)
for o in range(1, len(data)):
    x_data = torch.cat((x_data, torch.stack([(lookup_table == data[o][i]).nonzero(as_tuple=False) for i in range(len(data[o]))]).flatten().unsqueeze(0)), dim=0)
We get an output of this:
tensor([[ 7, 29, 141, 89, 51, 47, 40, 112, 134, 83],
[102, 100, 37, 67, 0, 13, 65, 90, 119, 52],
[ 88, 36, 106, 27, 53, 91, 47, 62, 70, 21]
])
This output is what I want, and like I said above, it is the index of where each value of the tensor data lies in the tensor lookup_table.
The problem is that this is not vectorized, and I have no idea how to vectorize it.
| Using searchsorted:
Scanning the whole lookup_table array for each input element is quite inefficient. How about sorting the lookup table first (this only needs to be done once)
sorted_lookup_table, indexes = torch.sort(lookup_table)
and then using searchsorted
index_into_sorted = torch.searchsorted(sorted_lookup_table, data)
If you need an index into the original lookup_table, you can get it with
index_into_lookup_table = indexes[index_into_sorted]
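As a quick check (this assumes every value in data actually occurs in lookup_table and that the dtypes match):
assert torch.equal(lookup_table[index_into_lookup_table], data)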
| https://stackoverflow.com/questions/69225949/ |
Why can't torchtext find a symbol _ZN2at6detail10noopDeleteEPv? | Why can't torchtext find this symbol?
(synthesis) miranda9~/ultimate-utils $ python ~/type-parametric-synthesis/src/main.py --reproduce_10K --serial --debug --num_workers 0
Traceback (most recent call last):
File "/home/miranda9/type-parametric-synthesis/src/main.py", line 32, in <module>
from data_pkg.data_preparation import get_dataloaders, get_simply_type_lambda_calc_dataloader_from_folder
File "/home/miranda9/type-parametric-synthesis/src/data_pkg/data_preparation.py", line 10, in <module>
from torchtext.vocab import Vocab, vocab
File "/home/miranda9/miniconda3/envs/synthesis/lib/python3.9/site-packages/torchtext/__init__.py", line 5, in <module>
from . import vocab
File "/home/miranda9/miniconda3/envs/synthesis/lib/python3.9/site-packages/torchtext/vocab.py", line 13, in <module>
from torchtext._torchtext import (
ImportError: /home/miranda9/miniconda3/envs/synthesis/lib/python3.9/site-packages/torchtext/_torchtext.so: undefined symbol: _ZN2at6detail10noopDeleteEPv
| Reinstall torchtext with the current version of pytorch:
e.g.
conda install -y torchtext -c pytorch
or for older versions of pytorch torchtext ImportError in colab
conda install -y torchtext==0.8.0 -c pytorch
Though in general, in my experience, it seems that torchtext installs the right version of pytorch on its own...
Ref: https://github.com/PyTorchLightning/pytorch-lightning/issues/4533
| https://stackoverflow.com/questions/69229397/ |
Installing pyTorch / Torch | I am trying to install PyTorch / torch on pyCharm Community edition. It gave the following error:
ERROR: Command errored out with exit status 1:
command: 'c:\users\joshu\uiuc\research\ai sound-20210814t005717z-001\ai sound\venv\scripts\python.exe' -u -c
'import io, os, sys, setuptools, tokenize; sys.argv[0] =
'"'"'C:\Users\Public\Documents\Wondershare\CreatorTemp\pip-install-ygcl9r4a\pytorch_62f432ab0d344f46a572a5a74f2b015b\setup.py'"'"';
file='"'"'C:\Users\Public\Documents\Wondershare\CreatorTemp\pip-install-ygcl9r4a\pytorch_62f432ab0d344f46a572a5a74f2b015b\setup.py'"'"';f
= getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import
setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"',
'"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))'
install --record
'C:\Users\Public\Documents\Wondershare\CreatorTemp\pip-record-5cueumzp\install-record.txt'
--single-version-externally-managed --compile --install-headers 'c:\users\joshu\uiuc\research\ai sound-20210814t005717z-001\ai
sound\venv\include\site\python3.8\pytorch'
cwd: C:\Users\Public\Documents\Wondershare\CreatorTemp\pip-install-ygcl9r4a\pytorch_62f432ab0d344f46a572a5a74f2b015b
Complete output (5 lines):
Traceback (most recent call last):
File "", line 1, in
File "C:\Users\Public\Documents\Wondershare\CreatorTemp\pip-install-ygcl9r4a\pytorch_62f432ab0d344f46a572a5a74f2b015b\setup.py",
line 11, in
raise Exception(message)
Exception: You tried to install "pytorch". The package named for PyTorch is "torch"
sorry that this is a noob question, but help is appreciated :)
| torch is the name of the package, not pytorch.
Type the following in your terminal.
pip install torch
| https://stackoverflow.com/questions/69230786/ |
How to sample from a distribution with constraints in PyTorch? | I have a simple situation like this:
Generate samples and log_prob from a uniform distribution for 2-dim variables (m1, m2) satisfying m1~U(5, 80), m2~U(5, 80) with the constraint m1+m2 < 100.
from torch import distributions
prior = distributions.Uniform(torch.tensor([5.0, 5.0]),
torch.tensor([80.0, 80.0]))
I tried coding it in PyTorch as above, but I don't know how to construct a torch.distributions object with a constraint condition. By the way, I see some implementations in torch.distributions.constraints, but I can't figure out how to use them.
| This can be achieved using rejection sampling:
d = torch.distributions.Uniform(5, 80)
m1 = 80
m2 = 80
while m1 + m2 >= 100:
m1 = d.sample()
m2 = d.sample()
print(m1, m2)
Example output:
tensor(52.3322) tensor(67.8245)
tensor(68.3232) tensor(40.0983)
tensor(44.7374) tensor(9.9690)
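A batched sketch of the same idea (the helper below is illustrative, not a library function); note that rejection keeps the accepted samples uniform over the constrained region, so the log-density of an accepted point is simply -log(area of that region):
def sample_constrained(n, low=5.0, high=80.0, limit=100.0):
    d = torch.distributions.Uniform(low, high)
    out = torch.empty(0, 2)
    while out.shape[0] < n:
        cand = d.sample((n, 2))                          # draw many candidates at once
        out = torch.cat([out, cand[cand.sum(dim=1) < limit]])
    return out[:n]

samples = sample_constrained(1000)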
| https://stackoverflow.com/questions/69249740/ |
How do I match samples with their predictions when doing inference with PyTorch's DistributedSampler? | I have trained a torch model for NLP tasks and would like to perform some inference using a multi GPU machine (in this case with two GPUs).
Inside the processing code, I use this
dataset = TensorDataset(encoded_dict['input_ids'], encoded_dict['attention_mask'])
sampler = DistributedSampler(
dataset, num_replicas=args.nodes * args.gpus, rank=args.node_rank * args.gpus + gpu_number, shuffle=False
)
dataloader = DataLoader(dataset, batch_size=batch_size, sampler=sampler)
For those familiar with NLP, encoded_dict is the output from the tokenizer.batch_encode_plus function where the tokenizer is an instance of transformers.BertTokenizer.
The issue I'm having is that when I call the code through the torch.multiprocessing.spawn function, each GPU is doing predictions (i.e. inference) on a subset of the full dataset, and saving the predictions separately; for example, if I have a dataset with 1000 samples to predict, each GPU is predicting 500 of them. As a result, I have no way of knowing which samples out of the 1000 were predicted by which GPU, as their order is not preserved, therefore the model predictions are meaningless as I cannot trace each of them back to their input sample.
I have tried to save the dataloader instance (as a pickle) together with the predictions and then extracting the input_ids by using dataloader.dataset.tensors, however this requires a tokeniser decoding step which I rather avoid, as the tokenizer will have slightly changed the text (for example double whitespaces would be removed, words with dashes will have been split and so on).
What is the cleanest way to save the input text samples together with their predictions when doing inference in distributed mode, or alternatively to keep track of which prediction refers to which sample?
| As I understand it, basically your dataset returns for an index idx [data,label] during training and [data] during inference. The issue with this is that the idx is not preserved by the dataloader object, so there is no way to obtain the idx values for the minibatch after the fact.
One way to handle this issue is to define a very simple custom dataset object that also returns [data,id] instead of only data during inference. Probably the easiest way to do this is to make the dataset return a dictionary object with keys id and data. The dictionary return type is convenient because Pytorch collates (converts data structures to batches) this type automatically, otherwise you'd have to define a custom collate_fn and pass it to the dataloader object, which is itself not very hard but is an extra step.
In any case, here's how I would define a new dataset object; it should be almost a one-to-one substitute for your current dataset (I believe):
class TensorDictDataset(torch.utils.data.Dataset):
    def __init__(self, ids, attention_mask):
        self.ids = ids
        self.mask = attention_mask

    def __len__(self):
        return len(self.ids)

    def __getitem__(self, idx):
        # return the sample together with its id so each prediction can be
        # traced back to its input after distributed inference
        datum = {
            "mask": self.mask[idx],
            "id": self.ids[idx],
        }
        return datum
| https://stackoverflow.com/questions/69253183/ |
Importing torchsparse (PyTorch) on Windows 10 not working | On Windows 10. I am testing someone's code which has the following imports:
import torch.nn as nn
import torchsparse.nn as spnn
from torchsparse.point_tensor import PointTensor
So on my machine I successfully installed via
pip install torch-scatter torch-sparse -f https://data.pyg.org/whl/torch-1.9.0+cu111.html
since I have CUDA 11.1. However, there seems to be a naming difference, as the above imports give:
import torchsparse.nn as spnn
ModuleNotFoundError: No module named 'torchsparse'
I found that when I am in Python I can do the following:
>>> import torchsparse
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'torchsparse'
>>> import torchsparse.nn
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'torchsparse'
>>> import torch_sparse
>>> import torch_sparse.nn
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'torch_sparse.nn'
>>>
So I can only import torch_sparse. Does anyone know how I can get the equivalent imports to test my buddy's code? Much appreciated.
---- EDIT ----
Trying Ivan's answer, I got the following:
pip install --upgrade git+https://github.com/mit-han-lab/[email protected]
WARNING: Ignoring invalid distribution -ip (c:\python39\lib\site-packages)
WARNING: Ignoring invalid distribution -ip (c:\python39\lib\site-packages)
Collecting git+https://github.com/mit-han-lab/[email protected]
Cloning https://github.com/mit-han-lab/torchsparse.git (to revision v1.4.0) to c:\users\iijds\appdata\local\temp\pip-req-build-gvmcjx1m
Running command git clone -q https://github.com/mit-han-lab/torchsparse.git 'C:\Users\iiJDS\AppData\Local\Temp\pip-req-build-gvmcjx1m'
Running command git checkout -q 74099d10a51c71c14318bce63d6421f698b24f24
Resolved https://github.com/mit-han-lab/torchsparse.git to commit 74099d10a51c71c14318bce63d6421f698b24f24
Building wheels for collected packages: torchsparse
Building wheel for torchsparse (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: 'c:\python39\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\iiJDS\\AppData\\Local\\Temp\\pip-req-build-gvmcjx1m\\setup.py'"'"'; __file__='"'"'C:\\Users\\iiJDS\\AppData\\Local\\Temp\\pip-req-build-gvmcjx1m\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\iiJDS\AppData\Local\Temp\pip-wheel-rz9wo9lc'
cwd: C:\Users\iiJDS\AppData\Local\Temp\pip-req-build-gvmcjx1m\
Complete output (53 lines):
running bdist_wheel
c:\python39\lib\site-packages\torch\utils\cpp_extension.py:370: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
warnings.warn(msg.format('we could not find ninja.'))
running build
running build_py
creating build
creating build\lib.win-amd64-3.9
creating build\lib.win-amd64-3.9\torchsparse
copying torchsparse\operators.py -> build\lib.win-amd64-3.9\torchsparse
copying torchsparse\tensor.py -> build\lib.win-amd64-3.9\torchsparse
copying torchsparse\version.py -> build\lib.win-amd64-3.9\torchsparse
copying torchsparse\__init__.py -> build\lib.win-amd64-3.9\torchsparse
creating build\lib.win-amd64-3.9\torchsparse\nn
copying torchsparse\nn\__init__.py -> build\lib.win-amd64-3.9\torchsparse\nn
creating build\lib.win-amd64-3.9\torchsparse\utils
copying torchsparse\utils\collate.py -> build\lib.win-amd64-3.9\torchsparse\utils
copying torchsparse\utils\quantize.py -> build\lib.win-amd64-3.9\torchsparse\utils
copying torchsparse\utils\utils.py -> build\lib.win-amd64-3.9\torchsparse\utils
copying torchsparse\utils\__init__.py -> build\lib.win-amd64-3.9\torchsparse\utils
creating build\lib.win-amd64-3.9\torchsparse\nn\functional
copying torchsparse\nn\functional\activation.py -> build\lib.win-amd64-3.9\torchsparse\nn\functional
copying torchsparse\nn\functional\conv.py -> build\lib.win-amd64-3.9\torchsparse\nn\functional
copying torchsparse\nn\functional\count.py -> build\lib.win-amd64-3.9\torchsparse\nn\functional
copying torchsparse\nn\functional\crop.py -> build\lib.win-amd64-3.9\torchsparse\nn\functional
copying torchsparse\nn\functional\devoxelize.py -> build\lib.win-amd64-3.9\torchsparse\nn\functional
copying torchsparse\nn\functional\downsample.py -> build\lib.win-amd64-3.9\torchsparse\nn\functional
copying torchsparse\nn\functional\hash.py -> build\lib.win-amd64-3.9\torchsparse\nn\functional
copying torchsparse\nn\functional\pooling.py -> build\lib.win-amd64-3.9\torchsparse\nn\functional
copying torchsparse\nn\functional\query.py -> build\lib.win-amd64-3.9\torchsparse\nn\functional
copying torchsparse\nn\functional\voxelize.py -> build\lib.win-amd64-3.9\torchsparse\nn\functional
copying torchsparse\nn\functional\__init__.py -> build\lib.win-amd64-3.9\torchsparse\nn\functional
creating build\lib.win-amd64-3.9\torchsparse\nn\modules
copying torchsparse\nn\modules\activation.py -> build\lib.win-amd64-3.9\torchsparse\nn\modules
copying torchsparse\nn\modules\bev.py -> build\lib.win-amd64-3.9\torchsparse\nn\modules
copying torchsparse\nn\modules\conv.py -> build\lib.win-amd64-3.9\torchsparse\nn\modules
copying torchsparse\nn\modules\crop.py -> build\lib.win-amd64-3.9\torchsparse\nn\modules
copying torchsparse\nn\modules\norm.py -> build\lib.win-amd64-3.9\torchsparse\nn\modules
copying torchsparse\nn\modules\pooling.py -> build\lib.win-amd64-3.9\torchsparse\nn\modules
copying torchsparse\nn\modules\__init__.py -> build\lib.win-amd64-3.9\torchsparse\nn\modules
creating build\lib.win-amd64-3.9\torchsparse\nn\utils
copying torchsparse\nn\utils\apply.py -> build\lib.win-amd64-3.9\torchsparse\nn\utils
copying torchsparse\nn\utils\kernel.py -> build\lib.win-amd64-3.9\torchsparse\nn\utils
copying torchsparse\nn\utils\__init__.py -> build\lib.win-amd64-3.9\torchsparse\nn\utils
running build_ext
c:\python39\lib\site-packages\torch\utils\cpp_extension.py:305: UserWarning: Error checking compiler version for cl: [WinError 2] The system cannot find the file specified
warnings.warn(f'Error checking compiler version for {compiler}: {error}')
building 'torchsparse.backend' extension
error: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio": https://visualstudio.microsoft.com/downloads/
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "c:\python39\lib\site-packages\colorama\ansitowin32.py", line 59, in closed
return stream.closed
ValueError: underlying buffer has been detached
----------------------------------------
ERROR: Failed building wheel for torchsparse
Running setup.py clean for torchsparse
Failed to build torchsparse
WARNING: Ignoring invalid distribution -ip (c:\python39\lib\site-packages)
Installing collected packages: torchsparse
Running setup.py install for torchsparse ... error
ERROR: Command errored out with exit status 1:
command: 'c:\python39\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\iiJDS\\AppData\\Local\\Temp\\pip-req-build-gvmcjx1m\\setup.py'"'"'; __file__='"'"'C:\\Users\\iiJDS\\AppData\\Local\\Temp\\pip-req-build-gvmcjx1m\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\iiJDS\AppData\Local\Temp\pip-record-p7zn1m6h\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\python39\Include\torchsparse'
cwd: C:\Users\iiJDS\AppData\Local\Temp\pip-req-build-gvmcjx1m\
Complete output (53 lines):
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.9
creating build\lib.win-amd64-3.9\torchsparse
copying torchsparse\operators.py -> build\lib.win-amd64-3.9\torchsparse
copying torchsparse\tensor.py -> build\lib.win-amd64-3.9\torchsparse
copying torchsparse\version.py -> build\lib.win-amd64-3.9\torchsparse
copying torchsparse\__init__.py -> build\lib.win-amd64-3.9\torchsparse
creating build\lib.win-amd64-3.9\torchsparse\nn
copying torchsparse\nn\__init__.py -> build\lib.win-amd64-3.9\torchsparse\nn
creating build\lib.win-amd64-3.9\torchsparse\utils
copying torchsparse\utils\collate.py -> build\lib.win-amd64-3.9\torchsparse\utils
copying torchsparse\utils\quantize.py -> build\lib.win-amd64-3.9\torchsparse\utils
copying torchsparse\utils\utils.py -> build\lib.win-amd64-3.9\torchsparse\utils
copying torchsparse\utils\__init__.py -> build\lib.win-amd64-3.9\torchsparse\utils
creating build\lib.win-amd64-3.9\torchsparse\nn\functional
copying torchsparse\nn\functional\activation.py -> build\lib.win-amd64-3.9\torchsparse\nn\functional
copying torchsparse\nn\functional\conv.py -> build\lib.win-amd64-3.9\torchsparse\nn\functional
copying torchsparse\nn\functional\count.py -> build\lib.win-amd64-3.9\torchsparse\nn\functional
copying torchsparse\nn\functional\crop.py -> build\lib.win-amd64-3.9\torchsparse\nn\functional
copying torchsparse\nn\functional\devoxelize.py -> build\lib.win-amd64-3.9\torchsparse\nn\functional
copying torchsparse\nn\functional\downsample.py -> build\lib.win-amd64-3.9\torchsparse\nn\functional
copying torchsparse\nn\functional\hash.py -> build\lib.win-amd64-3.9\torchsparse\nn\functional
copying torchsparse\nn\functional\pooling.py -> build\lib.win-amd64-3.9\torchsparse\nn\functional
copying torchsparse\nn\functional\query.py -> build\lib.win-amd64-3.9\torchsparse\nn\functional
copying torchsparse\nn\functional\voxelize.py -> build\lib.win-amd64-3.9\torchsparse\nn\functional
copying torchsparse\nn\functional\__init__.py -> build\lib.win-amd64-3.9\torchsparse\nn\functional
creating build\lib.win-amd64-3.9\torchsparse\nn\modules
copying torchsparse\nn\modules\activation.py -> build\lib.win-amd64-3.9\torchsparse\nn\modules
copying torchsparse\nn\modules\bev.py -> build\lib.win-amd64-3.9\torchsparse\nn\modules
copying torchsparse\nn\modules\conv.py -> build\lib.win-amd64-3.9\torchsparse\nn\modules
copying torchsparse\nn\modules\crop.py -> build\lib.win-amd64-3.9\torchsparse\nn\modules
copying torchsparse\nn\modules\norm.py -> build\lib.win-amd64-3.9\torchsparse\nn\modules
copying torchsparse\nn\modules\pooling.py -> build\lib.win-amd64-3.9\torchsparse\nn\modules
copying torchsparse\nn\modules\__init__.py -> build\lib.win-amd64-3.9\torchsparse\nn\modules
creating build\lib.win-amd64-3.9\torchsparse\nn\utils
copying torchsparse\nn\utils\apply.py -> build\lib.win-amd64-3.9\torchsparse\nn\utils
copying torchsparse\nn\utils\kernel.py -> build\lib.win-amd64-3.9\torchsparse\nn\utils
copying torchsparse\nn\utils\__init__.py -> build\lib.win-amd64-3.9\torchsparse\nn\utils
running build_ext
c:\python39\lib\site-packages\torch\utils\cpp_extension.py:370: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
warnings.warn(msg.format('we could not find ninja.'))
c:\python39\lib\site-packages\torch\utils\cpp_extension.py:305: UserWarning: Error checking compiler version for cl: [WinError 2] The system cannot find the file specified
warnings.warn(f'Error checking compiler version for {compiler}: {error}')
building 'torchsparse.backend' extension
error: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio": https://visualstudio.microsoft.com/downloads/
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "c:\python39\lib\site-packages\colorama\ansitowin32.py", line 59, in closed
return stream.closed
ValueError: underlying buffer has been detached
----------------------------------------
ERROR: Command errored out with exit status 1: 'c:\python39\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\iiJDS\\AppData\\Local\\Temp\\pip-req-build-gvmcjx1m\\setup.py'"'"'; __file__='"'"'C:\\Users\\iiJDS\\AppData\\Local\\Temp\\pip-req-build-gvmcjx1m\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\iiJDS\AppData\Local\Temp\pip-record-p7zn1m6h\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\python39\Include\torchsparse' Check the logs for full command output.
| It appears the package you are trying to import comes from this GitHub repo, which is different from the torch-sparse package you installed. You should try this instead:
pip install --upgrade git+https://github.com/mit-han-lab/[email protected]
Also, you can uninstall the other one with:
pip uninstall torch-sparse
| https://stackoverflow.com/questions/69258565/ |
What is the difference between the MaxPool1D API in tensorflow 2.X and MaxPool1d in pytorch | I'm trying to re-implement code written in tensorflow in pytorch, but I came across max pooling, looked into the documentation of the two frameworks, and found that their behavior is not the same. Can someone please explain to me why they are different, and which one is more efficient (I ask this because they give different results)?
import tensorflow
from tensorflow.keras.layers import GlobalMaxPool1D
tf_tensor = tensorflow.random.normal([8, 6, 5])
tf_maxpool = GlobalMaxPool1D()
print("output shape : ", tf_maxpool(tf_tensor).shape)
output shape : (8, 5)
import torch
import torch.nn as nn
torch_tensor = torch.tensor(tf_tensor.numpy())
maxpool = nn.MaxPool1d(kernel_size=2)
print("output shape : ", maxpool(torch_tensor).shape)
output shape : torch.Size([8, 6, 2])
| MaxPool vs GlobalMaxPool
torch.nn.MaxPool1d pools every N adjacent values by performing max operation.
For these values:
[1, 2, 3, 4, 5, 6, 7, 8]
with kernel_size=2 as you've specified, you would get the following values:
[2, 4, 6, 8]
which means a sliding window of size 2 gets the maximum value and moves on to the next pair.
Global Pooling is a similar operation, but gets the maximum value from the whole list, as pointed out in Ivan's answer. In our case, we would simply get one 8 value.
This operation, in PyTorch, is called torch.nn.AdaptiveMaxPool1d (optionally followed by torch.nn.Flatten):
import torch
tensor = torch.randn(8, 6, 5)
global_max_pooling = torch.nn.Sequential(
torch.nn.AdaptiveMaxPool1d(1), # (8, 6, 1) shape
torch.nn.Flatten(), # (8, 6) after removing unnecessary 1 dimension
)
global_max_pooling(tensor) # (8, 6)
The above explanation is simplified, as this operation is carried out across a specific dimension.
Tensorflow vs PyTorch shape difference
As one could notice, in the case of Tensorflow the output is of shape (8, 5), while in the case of PyTorch it is (8, 6).
This difference stems from different channel-dimension conventions (see here for channels last in PyTorch), namely:
PyTorch assumes data layout of (batch, channels, sequence)
Tensorflow assumes data layout of (batch, sequence, channels) (a.k.a. channels last)
One has to permute the data in case of PyTorch to get exactly the same results:
tensor = tensor.permute(0, 2, 1) # (8, 5, 6)
global_max_pooling(tensor) # (8, 5)
Efficiency
Use torch.nn.AdaptiveMaxPool1d when you want to perform pooling with a specified output size (different than 1), as it skips some unnecessary operations that torch.nn.MaxPool1d performs (going over the same elements more than once, which is out of the scope of this question).
In the general case, when we perform global pooling, both are roughly equivalent and perform the same number of operations.
| https://stackoverflow.com/questions/69275133/ |
Unfreeze model Layer by Layer in PyTorch | I'm working with a PyTorch model from here (T2T_ViT_7).
I'm trying to freeze all layers except the last (head) layer and train it on my dataset. I want to evaluate its performance, and then unfreeze layers one by one and train it each time and see how it performs.
To initially freeze all the layers and just unfreeze the head layer, I used:
for param in model.parameters():
param.requires_grad_(False)
model.head.requires_grad_(True)
Now I want to start from the bottom, and start unfreezing layers one by one. How can I do this? Do I use model.modules() or maybe model.children()?
Thank you!
| If by layers you mean each block inside of model.blocks, then you can use nn.Module.children (or nn.Module.named_children if you also want the names). This will return all direct submodules, while nn.Module.modules returns all submodules recursively.
Since model.blocks is an nn.ModuleList, you can slice it to select only the last n blocks and unfreeze them. Something like this:
model.blocks[-n:].requires_grad_(True)  # unfreeze the last n blocks
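From there, a minimal sketch of the stage-wise unfreezing loop (assuming model.blocks is the nn.ModuleList from the repo; slicing an nn.ModuleList returns another nn.ModuleList, so requires_grad_ can be applied to the slice directly):
n_blocks = len(model.blocks)
for n in range(1, n_blocks + 1):
    model.blocks[-n:].requires_grad_(True)  # unfreeze the last n blocks
    # ... train for a few epochs, evaluate, then move on to the next stage ...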
| https://stackoverflow.com/questions/69278507/ |
Pytorch: Building 3D Dense Network runs into error adaptive_avg_pool3d: output_size must be 3 | I ran into this error when trying to train my 3D DenseNet. There is an average pooling layer at the end of the convolution blocks. As can be seen in the message below, it complains about the output_size passed in my code: out = F.adaptive_avg_pool3d(input=out, output_size=[1,1,1]).
I have tried with output_size=1 and output_size=(1,1,1), and tried to use the layer F.avg_pool instead, but all ran into the same error. It's quite strange, since my output does have the right size.
P.S. I have imported torch.nn.functional as F
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
/tmp/ipykernel_37419/694291867.py in <module>
11 trainer = Trainer(model, device, optimizer, loss_fn)
12
---> 13 trainer.fit(N_EPOCHS, train_dataloader, val_dataloader, "4C", N_EPOCHS / 3, )
14
15 trainer.display_plots("4C")
/tmp/ipykernel_37419/579275755.py in fit(self, epochs, train_dataloader, valid_dataloader, mrimodule, patience)
26 self.info_message("EPOCH: {}", n_epoch)
27
---> 28 train_loss, train_auc, train_time = self.train_epoch(train_dataloader)
29 valid_loss, valid_auc, valid_time = self.valid_epoch(valid_dataloader)
30
/tmp/ipykernel_37419/579275755.py in train_epoch(self, train_dataloader)
71 Y = Y.to(self.device)
72 self.optimizer.zero_grad()
---> 73 pred = self.model(X).squeeze(1)
74 loss = self.loss_fn(pred, Y)
75
~/miniconda3/envs/MGMT_Classify/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~/MGMT_Classify_Scripts/DenseNet3D.py in forward(self, x)
216 features = self.features(x)
217 out = F.relu(features, inplace=True)
--> 218 out = F.adaptive_avg_pool3d(input=out, output_size=[1,1,1])
219 out = torch.flatten(out, 1)
220 out = self.classifier(out)
~/miniconda3/envs/MGMT_Classify/lib/python3.7/site-packages/torch/nn/functional.py in adaptive_avg_pool3d(input, output_size)
954 adaptive_avg_pool3d, (input,), input, output_size)
955 _output_size = _list_with_default(output_size, input.size())
--> 956 return torch._C._nn.adaptive_avg_pool3d(input, _output_size)
957
958
RuntimeError: adaptive_avg_pool3d: output_size must be 3
| I have at least found one temporary solution to this by comparing my script with other 3D densenet models on GitHub.
I used the code:
out = F.adaptive_avg_pool3d(input=out, output_size=(1, 1, 1)).view(features.size(0), -1)
In place of the following, from the original script:
out = F.adaptive_avg_pool3d(input=out, output_size=[1,1,1])
out = torch.flatten(out, 1)
After that, it works. I have not yet tried it on my full task, but I have confirmed that it can overfit a very small test dataset.
I am still very curious what on earth led to the error and why it can be fixed in this way. If you have insight, please offer it in the comments. Otherwise, this offers a workaround for anyone else that needs it.
| https://stackoverflow.com/questions/69279485/ |
How to Change the torchtext LabelField value | I'm using PyTorch to create several models, each of which is run in a separate notebook.
When using the torchtext Field to build the vocab, it assigns a number to each class, which is correct, and my original class labels are also numbers. But the number assigned to each class is not the same as the original class label. I was wondering whether there is a way to assign an exact number to each class in my label vocab.
My code that creates the torchtext Field:
LABEL = data.LabelField()
LABEL.build_vocab(train_data)
My results look like this:
print(LABEL.vocab.stoi)
defaultdict(None, {'1': 0, '2': 1, '0': 2})
The results I want:
defaultdict(None, {'0': 0, '1': 1, '2': 2})
I wrote this code as a solution. Is it correct to create the vocab like this?
LABEL.build_vocab({'0': 0, '1': 1, '2': 2})
P.S.: I know this assignment is only used internally by the model and everything works fine, but I was worried about the time when I compare model results on test data, and even more worried about getting confused each time I look at the confusion matrix.
| I don't think this is going to give you what you want. build_vocab iterates over a dataset and maps an item to an index if it appears in the dataset above some min_freq (default of min_freq=1). I think what you are giving it in your last example will tell build_vocab that the item '0' appears 0 times, so it won't be included in your vocabulary.
If you are concerned about mixing things up in your review process, you can always write a script to get the index of a certain label, then get whatever is at that index, and map it to a new dict with the index you want. This will probably be much easier than messing with the way torchtext is building your vocabulary.
EDIT: A better solution for you might be setting use_vocab=False when defining your Label field:
LABEL = data.LabelField(use_vocab=False)
This will work in your case, when your data is already numerical. From the torchtext 0.8 docs:
use_vocab: Whether to use a Vocab object. If False, the data in this
field should already be numerical. Default: True.
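For completeness, a minimal sketch assuming the legacy torchtext Field API (where preprocessing accepts a callable); the callable converts the string labels to ints, since with use_vocab=False the field expects numeric data:
LABEL = data.LabelField(use_vocab=False,
                        preprocessing=lambda x: int(x))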
| https://stackoverflow.com/questions/69285712/ |
how is stacked rnn (num layers > 1) implemented in pytorch? | The GRU layer in PyTorch takes a parameter called num_layers, with which you can stack RNNs. However, it is unclear how exactly the subsequent RNNs use the outputs of the previous layer.
According to the documentation:
Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two GRUs together to form a stacked GRU, with the second GRU taking in outputs of the first GRU and computing the final results.
Does this mean that the output of the final cell of the first layer of the GRU is fed as input to the next layer? Or does it mean the outputs of each cell (at each timestep) is fed as an input to the cell at the same timestep of the next layer?
|
Does this mean that the output of the final cell of the first layer of the GRU is fed as input to the next layer? Or does it mean the outputs of each cell (at each timestep) is fed as an input to the cell at the same timestep of the next layer?
The latter. Each time step's output from the first layer is used as input for the same time step of the second layer.
This figure from a Keras tutorial shows how multilayer RNNs are structured:
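You can also verify this equivalence in code. Below is a minimal sketch (assuming no dropout between layers, where the equivalence is exact): a 2-layer GRU is compared against two single-layer GRUs that share its weights, with the second layer consuming every time step of the first layer's output. It should print True.
import torch
import torch.nn as nn

stacked = nn.GRU(input_size=8, hidden_size=16, num_layers=2, batch_first=True)
layer1 = nn.GRU(input_size=8, hidden_size=16, batch_first=True)
layer2 = nn.GRU(input_size=16, hidden_size=16, batch_first=True)

# Copy the stacked model's per-layer weights into the two manual layers.
with torch.no_grad():
    for name in ["weight_ih", "weight_hh", "bias_ih", "bias_hh"]:
        getattr(layer1, name + "_l0").copy_(getattr(stacked, name + "_l0"))
        getattr(layer2, name + "_l0").copy_(getattr(stacked, name + "_l1"))

x = torch.randn(4, 10, 8)      # (batch, seq, features)
out_stacked, _ = stacked(x)
out1, _ = layer1(x)            # per-timestep outputs of layer 1...
out_manual, _ = layer2(out1)   # ...fed as the input sequence of layer 2

print(torch.allclose(out_stacked, out_manual, atol=1e-6))  # True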
| https://stackoverflow.com/questions/69294045/ |
Could not find the pytorch 1.9.1 in conda's current channels | I created a new conda virtual environment and tried to install PyTorch 1.9.1 using conda install pytorch=1.9.1, but conda reports PackagesNotFoundError as follows.
PackagesNotFoundError: The following packages are not available from current channels:
pytorch==1.9.1
Current channels:
https://conda.anaconda.org/python/win-64
https://conda.anaconda.org/python/noarch
https://repo.anaconda.com/pkgs/main/win-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/win-64
https://repo.anaconda.com/pkgs/r/noarch
https://repo.anaconda.com/pkgs/msys2/win-64
https://repo.anaconda.com/pkgs/msys2/noarch
Help!!! Thanks
| The right commands are listed on the PyTorch website. They should use the pytorch channel, e.g. with CUDA:
conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia
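If you don't need CUDA, the same selector page also lists a CPU-only variant:
conda install pytorch torchvision torchaudio cpuonly -c pytorch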
| https://stackoverflow.com/questions/69296000/ |
How does one pip install torch 1.9.x with cuda 11.1 when errors related with memory issue arise? | I was trying to install torch 1.9.x with pip3 but I get this error:
(metalearning_gpu) miranda9~/automl-meta-learning $ pip3 install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
Looking in links: https://download.pytorch.org/whl/torch_stable.html
Collecting torch==1.9.0+cu111
ERROR: Exception:
Traceback (most recent call last):
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/cli/base_command.py", line 173, in _main
status = self.run(options, args)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/cli/req_command.py", line 203, in wrapper
return func(self, options, args)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/commands/install.py", line 315, in run
requirement_set = resolver.resolve(
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 94, in resolve
result = self._result = resolver.resolve(
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_vendor/resolvelib/resolvers.py", line 472, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_vendor/resolvelib/resolvers.py", line 341, in resolve
self._add_to_criteria(self.state.criteria, r, parent=None)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_vendor/resolvelib/resolvers.py", line 172, in _add_to_criteria
if not criterion.candidates:
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_vendor/resolvelib/structs.py", line 151, in __bool__
return bool(self._sequence)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 140, in __bool__
return any(self)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 128, in <genexpr>
return (c for c in iterator if id(c) not in self._incompatible_ids)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 32, in _iter_built
candidate = func()
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/factory.py", line 204, in _make_candidate_from_link
self._link_candidate_cache[link] = LinkCandidate(
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 295, in __init__
super().__init__(
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 156, in __init__
self.dist = self._prepare()
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 227, in _prepare
dist = self._prepare_distribution()
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 305, in _prepare_distribution
return self._factory.preparer.prepare_linked_requirement(
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/operations/prepare.py", line 508, in prepare_linked_requirement
return self._prepare_linked_requirement(req, parallel_builds)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/operations/prepare.py", line 550, in _prepare_linked_requirement
local_file = unpack_url(
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/operations/prepare.py", line 239, in unpack_url
file = get_http_url(
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/operations/prepare.py", line 102, in get_http_url
from_path, content_type = download(link, temp_dir.path)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/network/download.py", line 132, in __call__
resp = _http_get_download(self._session, link)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/network/download.py", line 115, in _http_get_download
resp = session.get(target_url, headers=HEADERS, stream=True)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_vendor/requests/sessions.py", line 555, in get
return self.request('GET', url, **kwargs)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/network/session.py", line 454, in request
return super().request(method, url, *args, **kwargs)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_vendor/requests/sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_vendor/requests/sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_vendor/cachecontrol/adapter.py", line 44, in send
cached_response = self.controller.cached_request(request)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_vendor/cachecontrol/controller.py", line 139, in cached_request
cache_data = self.cache.get(cache_url)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/network/cache.py", line 54, in get
return f.read()
MemoryError
It says there is a memory error, but I don't understand where and how. How do I start debugging this?
Nothing in my home seems like a problem:
(metalearning_gpu) miranda9~/automl-meta-learning $ du -hs ~
150G /home/miranda9
| Not sure why this is the case, but it seems that once I got a node (with a GPU) to run the installs, the install worked... is that normal?!
(synthesis) miranda9~/automl-meta-learning $ condor_submit -i interactive.sub
Submitting job(s).
1 job(s) submitted to cluster 17192.
Could not find conda environment: metalearning_cpu
You can list all discoverable environments with `conda info --envs`.
Welcome to [email protected]!
Could not find conda environment: metalearning_cpu
You can list all discoverable environments with `conda info --envs`.
(synthesis) miranda9~/automl-meta-learning $
(synthesis) miranda9~/automl-meta-learning $
(synthesis) miranda9~/automl-meta-learning $
(synthesis) miranda9~/automl-meta-learning $
(synthesis) miranda9~/automl-meta-learning $
(synthesis) miranda9~/automl-meta-learning $ conda activate metalearning_gpu
(metalearning_gpu) miranda9~/automl-meta-learning $ pip3 install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
Looking in links: https://download.pytorch.org/whl/torch_stable.html
Collecting torch==1.9.0+cu111
Using cached https://download.pytorch.org/whl/cu111/torch-1.9.0%2Bcu111-cp39-cp39-linux_x86_64.whl (2041.4 MB)
Collecting torchvision==0.10.0+cu111
Using cached https://download.pytorch.org/whl/cu111/torchvision-0.10.0%2Bcu111-cp39-cp39-linux_x86_64.whl (23.1 MB)
Collecting torchaudio==0.9.0
Using cached torchaudio-0.9.0-cp39-cp39-manylinux1_x86_64.whl (1.9 MB)
Collecting typing-extensions
Using cached typing_extensions-3.10.0.2-py3-none-any.whl (26 kB)
Collecting numpy
Using cached numpy-1.21.2-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.8 MB)
Collecting pillow>=5.3.0
Using cached Pillow-8.3.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.0 MB)
Installing collected packages: typing-extensions, torch, pillow, numpy, torchvision, torchaudio
Successfully installed numpy-1.21.2 pillow-8.3.2 torch-1.9.0+cu111 torchaudio-0.9.0 torchvision-0.10.0+cu111 typing-extensions-3.10.0.2
(metalearning_gpu) miranda9~/automl-meta-learning $
you might want to try this command instead:
pip3 install torch==1.9.1+cu111 torchvision==0.10.1+cu111 torchaudio==0.9.1 -f https://download.pytorch.org/whl/torch_stable.html
| https://stackoverflow.com/questions/69302569/ |
How to visualize a graph from DGL's datasets? | The following snippet comes from the tutorial https://cnvrg.io/graph-neural-networks/. How can I visualize a graph from the dataset? Using something like matplotlib if possible.
import dgl
import torch
import torch.nn as nn
import torch.nn.functional as F
import dgl.data
dataset = dgl.data.CoraGraphDataset()
g = dataset[0]
| import dgl.data
import matplotlib.pyplot as plt
import networkx as nx
dataset = dgl.data.CoraGraphDataset()
g = dataset[0]
options = {
'node_color': 'black',
'node_size': 20,
'width': 1,
}
G = dgl.to_networkx(g)
plt.figure(figsize=[15,7])
nx.draw(G, **options)
It is a huge graph so you might have to play with the sizes (node, width, figure, etc.).
Here are some useful links:
https://networkx.org/documentation/stable/tutorial.html
https://docs.dgl.ai/en/0.5.x/generated/dgl.to_networkx.html#dgl.to_networkx
https://docs.dgl.ai/en/0.5.x/api/python/dgl.data.html
| https://stackoverflow.com/questions/69308451/ |
Unable to install requirements for summarization | I am following this Hugging Face GitHub repo (https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization) for summarization, but I am not able to install the packages from requirements.
Command used:
pip install -r requirements.txt
For your information, I am trying this on the Intel oneAPI DevCloud. Please find the error below.
env: 'pkg-config': No such file or directory
Failed to find sentencepiece pkg-config
----------------------------------------
ERROR: Command "/home/uxxxxx/.conda/envs/env/bin/python -u -c 'import setuptools, tokenize;__file__='"'"'/home/uxxxx/tmp/pip-install-ld2o4xe1/sentencepiece/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /home/uxxxxx/tmp/pip-record-tl0qz8f6/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /home/uxxxxx/tmp/pip-install-ld2o4xe1/sentencepiece/
Please use "pip install --user <package>" to install user packages.
Please visit the forums at: https://software.intel.com/en-us/forums/intel-devcloud
Thanks in Advance!
| Try installing pkg-config and sentencepiece separately.
sudo apt-get install pkg-config
pip3 install sentencepiece==0.1.92
Related source: Link
Also, follow the steps that @Abhijeet mentioned below; that part is fairly obvious, so I'm not reiterating it.
| https://stackoverflow.com/questions/69315011/ |
How to find input layers names for intermediate layer in PyTorch model? | I have a somewhat complicated model in PyTorch. How can I print the names (or IDs) of the layers connected to a given layer's input? For a start, I want to find them for the Concat layer. See the example code below:
class Concat(nn.Module):
def __init__(self, dimension=1):
super().__init__()
self.d = dimension
def forward(self, x):
return torch.cat(x, self.d)
class SomeModel(nn.Module):
def __init__(self):
super(SomeModel, self).__init__()
self.conv1 = nn.Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
self.bn1 = nn.BatchNorm2d(64)
self.conv2 = nn.Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
self.conc = Concat(1)
self.linear = nn.Linear(8192, 1)
def forward(self, x):
out1 = F.relu(self.bn1(self.conv1(x)))
out2 = F.relu(self.conv2(x))
out = self.conc([out1, out2])
out = F.avg_pool2d(out, 4)
out = out.view(out.size(0), -1)
out = self.linear(out)
return out
if __name__ == '__main__':
model = SomeModel()
print(model)
y = model(torch.randn(1, 3, 32, 32))
print(y.size())
for name, m in model.named_modules():
if 'Concat' in m.__class__.__name__:
print(name, m, m.__class__.__name__)
# Here print names of all input layers for Concat
| You can use type(module).__name__ to get the nn.Module class name:
>>> model = SomeModel()
>>> y = model(torch.randn(1, 3, 32, 32))
>>> for name, m in model.named_modules():
... if 'Concat' == type(m).__name__:
... print(name, m)
conc Concat()
Edit: You can actually manage to get the list of operators used to compute the inputs of Concat. However, I don't think you can actually get the attribute names of the nn.Module associated with these operators. This kind of information is not available - nor needed - at model inference.
This solution requires you to register a forward hook on the layer with nn.Module.register_forward_hook. Then perform one inference to trigger it, then you can remove the hook. In the forward hook, you have access to the list of inputs and extract the name of the operator from the grad_fn attribute callback. Using nn.Module.register_forward_pre_hook here would be more appropriate since we are only looking at the inputs, and do not need the output.
>>> def op_name(x):
... return type(x.grad_fn).__name__.replace('Backward0', '')
>>> def forward_hook(module, ins):
... print([op_name(x) for x in ins[0]])
Attach the hook on model.conc, trigger it and then clean up:
>>> handle = model.conc.register_forward_pre_hook(forward_hook)
>>> model(torch.empty(2, 3, 10, 10, requires_grad=True))
['Relu', 'Relu']
>>> handle.remove()
| https://stackoverflow.com/questions/69318398/ |
Large, exploding loss in Pytorch transformer model | I am trying to solve a sequence to sequence problem with a transformer model. The data is derived from a set of crossword puzzles.
The positional encoding and transformer classes are as follows:
class PositionalEncoding(nn.Module):
def __init__(self, d_model: int, dropout: float = 0.1, max_len: int = 3000):
super().__init__()
self.dropout = nn.Dropout(p=dropout)
position = torch.arange(max_len).unsqueeze(1)
div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
pe = torch.zeros(1, max_len, d_model)
pe[0, :, 0::2] = torch.sin(position * div_term)
pe[0, :, 1::2] = torch.cos(position * div_term)
self.register_buffer('pe', pe)
def debug(self, x):
return x.shape, x.size()
def forward(self, x: Tensor) -> Tensor:
x = x + self.pe[:, :x.size(1), :]
return self.dropout(x)
class Transformer(nn.Module):
def __init__(
self,
num_tokens,
dim_model,
num_heads,
num_encoder_layers,
num_decoder_layers,
batch_first,
dropout_p,
):
super().__init__()
self.model_type = "Transformer"
self.dim_model = dim_model
self.positional_encoder = PositionalEncoding(
d_model=dim_model, dropout=dropout_p, max_len=3000
)
self.embedding = nn.Embedding.from_pretrained(vec_weights, freeze=False)#nn.Embedding(num_tokens, dim_model)
self.transformer = nn.Transformer(
d_model=dim_model,
nhead=num_heads,
num_encoder_layers=num_encoder_layers,
num_decoder_layers=num_decoder_layers,
dropout=dropout_p,
batch_first = batch_first
)
self.out = nn.Linear(dim_model, num_tokens)
def forward(self, src, tgt, tgt_mask=None, src_pad_mask=None, tgt_pad_mask=None):
src = self.embedding(src)*math.sqrt(self.dim_model)
tgt = self.embedding(tgt)*math.sqrt(self.dim_model)
src = self.positional_encoder(src)
tgt = self.positional_encoder(tgt)
transformer_out = self.transformer(src, tgt, tgt_mask=tgt_mask, src_key_padding_mask=src_pad_mask, tgt_key_padding_mask=tgt_pad_mask)
out = self.out(transformer_out)
return out
def get_tgt_mask(self, size) -> torch.tensor:
mask = torch.tril(torch.ones(size, size) == 1)
mask = mask.float()
mask = mask.masked_fill(mask == 0, float('-inf'))
mask = mask.masked_fill(mask == 1, float(0.0))
return mask
def create_pad_mask(self, matrix: torch.tensor, pad_token: int) -> torch.tensor:
return (matrix == pad_token)
The input tensors are a source tensor of size N by S, where N is the batch size and S is the source sequence length, and a target tensor of size N by T, where T is the target sequence length. S is about 10 and T is about 5, while the total number of items is about 160,000-200,000, divided into batch sizes of 512. They are torch.IntTensors, with elements in the range from 0 to V, where V is the vocabulary length.
The first layer is an embedding layer that takes the input from N by S to N by S by E, where E is the embedding dimension (300), or to N by T by E in the case of the target. The second layer adds position encoding without changing the shape. Then both tensors are passed through the transformer layer, which outputs an N by T by E tensor. Finally, we pass this output through a linear layer, which produces an N by T by V output, where V is the size of the vocabulary used in the problem. Here V is about 56,697. The most frequent tokens (words) appear about 50-60 times in the target tensor.
The transformer class also contains the functions for implementing the masking matrices.
Then we create the model and run it (this process is wrapped in a function).
device = "cuda"
src_train, src_test = torch.utils.data.random_split(src_t, [int(0.9*len(src_t)), len(src_t)-int(0.9*len(src_t))])
src_train, src_test = src_train[:512], src_test[:512]
tgt_train, tgt_test = torch.utils.data.random_split(tgt_t, [int(0.9*len(tgt_t)), len(tgt_t)-int(0.9*len(tgt_t))])
tgt_train, tgt_test = tgt_train[:512], tgt_test[:512]
train_data, test_data = list(zip(src_train, tgt_train)), list(zip(src_test, tgt_test))
train, test = torch.utils.data.DataLoader(dataset=train_data), torch.utils.data.DataLoader(dataset=test_data)
model = Transformer(num_tokens=ntokens, dim_model=300, num_heads=2, num_encoder_layers=3, num_decoder_layers=3, batch_first = True, dropout_p=0.1).to(device)
loss_function = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.0000001)
n_epochs = 50
def train_model(model, optimizer, loss_function, n_epochs):
loss_value=0
for epoch in range(n_epochs):
print(f"Starting epoch {epoch}")
for batch, data in enumerate(train):
x, y = data
if batch%100 == 0:
print(f"Batch is {batch}")
batch += 1
optimizer.zero_grad()
x, y = torch.tensor(x).to(device), torch.tensor(y).to(device)
y_input, y_base = y[:, :-1], y[:, 1:]
y_input, y_base = y_input.to(device), y_base.to(device)
tgt_mask = model.get_tgt_mask(y_input.shape[1]).to(device)
pad_token = vocabulary_table[embeddings.key_to_index["/"]]
src_pad_mask = model.create_pad_mask(x, pad_token).to(device)
tgt_pad_mask = model.create_pad_mask(y_input, pad_token).to(device)
z = model(x, y_input, tgt_mask, src_pad_mask, tgt_pad_mask)
z = z.permute(0, 2, 1).to(device)
y_base = y_base.long().to(device)
loss = loss_function(z, y_base).to(device)
loss.backward()
nn.utils.clip_grad_norm_(model.parameters(), max_norm=2.0, norm_type=2)
optimizer.step()
loss_value += float(loss)
if batch%100 == 0:
print(f"For epoch {epoch}, batch {batch} the cross-entropy loss is {loss_value}")
#Free GPU memory.
del z
del x
del y
del y_input
del y_base
del loss
torch.cuda.empty_cache()
return model.parameters(), loss_value
Basically, we split the data into test and training sets and use an SGD optimizer and cross-entropy loss. We create a masking matrix for the padding for both the target and source tensors, and a masking matrix for unseen elements for the target tensor. We then do the usual gradient update steps. Right now, there is no validation loop, because I cannot even get the training loss to decrease.
The loss is very high, reaching more than 1000 after 100 batches. More concerningly, the loss also increases rapidly during training, rather than decreasing. In the code that I included, I tried clipping the gradients, lowering the learning rate, and using a much smaller sample to debug the code.
What could be causing this behavior?
| You are only adding things to your loss, so naturally it can only increase.
loss_value += float(loss)
You're supposed to set it to zero after every epoch. Now you set it to zero once, at the beginning of the training process. There is a training loop template here if you're interested (https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html). This explains the increasing loss. To further troubleshoot (if needed) I'd throw in a validation loop.
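A minimal sketch of the fix (only the bookkeeping changes; the forward pass, loss.backward(), and optimizer.step() stay as in your code):
for epoch in range(n_epochs):
    loss_value = 0.0                          # reset the running loss every epoch
    for batch, data in enumerate(train):
        # ... forward pass, loss computation, backward and optimizer step as before ...
        loss_value += float(loss)
    print(f"Epoch {epoch}: total loss {loss_value}")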
| https://stackoverflow.com/questions/69323932/ |
Are PyTorch-trained models transferable between GPUs and TPUs? | After using a GPU for some time to train a PyTorch model, can I use the saved weights to continue training my model on a TPU?
|
After using GPU for some time can I use the saved weights to train my model using TPU?
Yes, if you saved your GPU-trained model with, say
torch.save(model.state_dict(), 'model.pt')
you can load it again for use on a TPU (using https://github.com/pytorch/xla) in a separate program run with
import torch_xla.utils.serialization as xser
model.load_state_dict(xser.load('model.pt'))
| https://stackoverflow.com/questions/69328983/ |
Getting pytorch backward's RuntimeError: Trying to backward through the graph a second time... when slicing a tensor | Upon running the code snippet (PyTorch 1.7.1; Python 3.8),
import numpy as np
import torch
def batch_matrix(vector_pairs, factor=2):
baselen = len(vector_pairs[0]) // factor
split_batch = []
for j in range(factor):
for i in range(factor):
start_j = j * baselen
end_j = (j+1) * baselen if j != factor - 1 else None
start_i = i * baselen
end_i = (i+1) * baselen if i != factor - 1 else None
mini_pairs = vector_pairs[start_j:end_j, start_i:end_i, :]
split_batch.append(mini_pairs)
return split_batch
def concat_matrix(vectors_):
vectors = vectors_.clone()
seq_len, dim_vec = vectors.shape
project_x = vectors.repeat((1, 1, seq_len)).reshape(seq_len, seq_len, dim_vec)
project_y = project_x.permute(1, 0, 2)
matrix = torch.cat((project_x, project_y), dim=-1)
matrix_ = matrix.clone()
return matrix_
if __name__ == "__main__":
vector_list = []
for i in range(10):
vector_list.append(torch.randn((5,), requires_grad=True))
vectors = torch.stack(vector_list, dim=0)
pmatrix = concat_matrix(vectors)
factor = np.ceil(vectors.shape[0]/6).astype(int)
batched_feats = batch_matrix(pmatrix, factor=factor)
for i in batched_feats:
i = i + 5
print(i.shape)
summed = torch.sum(i)
summed.backward()
I get the output and error as below:
torch.Size([5, 5, 10])
torch.Size([5, 5, 10])
Traceback (most recent call last):
File "/home/user/PycharmProjects/project/run.py", line 43, in <module>
summed.backward()
File "/home/user/anaconda3/envs/diff/lib/python3.8/site-packages/torch/tensor.py", line 221, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/user/anaconda3/envs/diff/lib/python3.8/site-packages/torch/autograd/__init__.py", line 130, in backward
Variable._execution_engine.run_backward(
RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling backward the first time.
I have read all the existing posts on the issue and could not resolve it myself. Passing retain_graph=True in backward() fixes the issue in the provided snippet, however, the snippet is only an oversimplified version of a large network where retain_graph=True changes the error to the following:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [3000, 512]], which is output 0 of TBackward, is at version 3; expected version 2 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
I tried setting torch.autograd.set_detect_anomaly(True) and determining the point of failure, but all that I tried failed and the error persisted.
I suspect that if I can understand the cause of the error in this situation, it will help me resolve the error in the actual codebase.
Therefore, I want to understand why backward() works fine for the first two tensors in batched_feats but fails for the third one. I would really appreciate it if someone could help me see the reuse of an intermediate result that has been freed.
Thanks a lot!
| After backpropagation, the leaf nodes' gradients are stored in their Tensor.grad attributes. The gradients of non-leaf nodes (i.e. the intermediate results to which the error refers) are freed by default, as PyTorch assumes you won't need them. In your example, your leaf nodes are those in vector_list created from torch.randn().
Calling backward() multiple times consecutively accumulates gradients via summation by default (this is useful for recurrent neural networks). This is problematic when existing intermediate results have been freed; the leaf nodes' gradients have not; and the call to backward() involves some of the same leaf nodes and intermediate results as a previous call to backward(). This is the problem you're facing; some of your tensor slices reference the same underlying tensors, and you're not zeroing all the relevant gradients between calls to backward(), but you are implicitly zeroing intermediate gradients.
If you wish to accumulate gradients in the leaf nodes via summation, simply call backward like so: summed.backward(retain_graph = True).
However, if you wish to compute gradients with respect to your batches independently (rather than w.r.t. the leaf nodes in vector_list), then you can just detach your batches at the beginning of each iteration. This will prevent gradients from propagating through them all the way to their common leaf nodes in vector_list (i.e. they become leaf nodes themselves in their own graphs). Detaching a tensor disables gradients for it, so you'll have to re-enable them manually:
for i in batched_feats:
i = i.detach()
i.requires_grad = True
j = i + 5
print(j.shape)
summed = torch.sum(j)
summed.backward()
print(i.grad) # Prints the gradients stored in i
This is how some data loaders work; they load the data from disk, convert them to tensors, perform augmentation / other preprocessing, and then detach them so that they can serve as leaf nodes in a fresh computational graph. If the application developer wants to compute gradients w.r.t. the data tensors, they do not have to save intermediate results since the data tensors have been detached and thus serve as leaf nodes.
| https://stackoverflow.com/questions/69339143/ |
How can I fit a curve to a 3d point cloud? | My goal is to fit a line through a point cloud. The point cloud is approximately cylindrical but can be curved, which is why the fitted line should not be straight.
I've tried several things, but this is my current approach:
I use PyTorch to optimise on a surface equation and compute the loss for each point.
However, this does not seem to lead to "good" results, since the plane/surface does not cut the point cloud vertically, which I would expect to lead to the least error. Since I'm new to PyTorch, I don't know whether it's a mistake in my code or whether there's a mathematical problem with the idea. This approach is in the code below.
Additionally, once I've fitted this surface, I would like to obtain a line on it, but I'm unsure of the best way to do this.
My questions are:
Why is the fit of the plane/surface not more centred?
How could I then obtain a curved line of best fit on the resulting surface?
Other approaches I've tried:
SVD, but like I said, I'm looking for a line that is not straight.
Computing the mean of each z-slice (i.e. each horizontal slice) in the y and x directions and using the resulting points to fit a spline (using Python's splrep). The problem with this approach is that a couple of top or bottom slices can consist of very few points that are not necessarily in the "center" of the point cloud (e.g. they lie at the left extreme of the top slice), which forces the spline in that direction. It's really important for the project that the line is "central".
The data in this code example is just toy data; the real data would have a more complex shape. To check the idea I'm fitting a plane, but in the end I'd like a more complex surface. The code results in this figure.
import numpy as np
import torch
import matplotlib.pyplot as plt
from torch import nn
from torch.functional import F
ix = np.random.uniform(0,2, 1000)
iy = np.random.uniform(0,2, 1000)
iz = np.random.uniform(0,100, 1000)
x = torch.tensor(np.array([ ix, iy ]).T).float()
y = torch.tensor(iz).float()
degree =1
class Model(nn.Module):
def __init__(self):
super().__init__()
if degree == 1 :
weights = torch.distributions.Uniform(-1, 1).sample((4,))
if degree == 3 :
weights = torch.distributions.Uniform(-1, 1).sample((8,))
self.weights = nn.Parameter(weights)
def forward(self, X):
if degree ==1 :
a_1, a_2, a_3, a_4 = self.weights
return (a_1 *X[:,0] + a_2*X[:,1] +a_3)
if degree == 3 :
a_1, a_2, a_3, a_4, a_5, a_6, a_7, a_8 = self.weights
return (a_1 * X[:,0]**3 + a_2*X[:,1]**3 + a_3*X[:,0]**2 + a_4 *X[:,1] **2 + a_5 *X[:,0] + a_6 *X[:,1] + a_7)
def training_loop(model, optimizer):
losses = []
loss = 10000
it = 0
if degree == 3 :
lim = 0.1
if degree == 1 :
lim = 0.1
while loss > lim:
it +=1
if it > 5000:
break
preds1= model(x).float()
l1 = torch.nn.L1Loss()
loss = l1(preds1, y)
loss.backward()
optimizer.step()
optimizer.zero_grad()
losses.append(loss.detach().numpy())
print(loss)
return losses
m = Model()
if degree == 1 :
opt = torch.optim.Adam(m.parameters(), lr=0.01)
losses = np.array(training_loop(m, opt))
if degree == 3 :
opt= torch.optim.Adam(m.parameters(), lr=0.001)
losses = np.array(training_loop(m, opt))
params=list(m.parameters())[0].detach().numpy()
X = np.arange(0, 2, 0.1)
Y = np.arange(0, 2, 0.1)
X, Y = np.meshgrid(X, Y)
if degree == 1 :
Z = (params[0] * X + params[1]*Y + params[2])
if degree == 3:
Z = (params[0] * X**3 + params[1]*Y**3 + params[2]*X**2 + params[3]*Y**2 + params[4]*X + params[5]*Y + params[6])
fig = plt.figure()
ax = plt.axes(projection='3d')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
surf = ax.plot_surface(X, Y, Z, color='tab:orange', alpha = 0.5,linewidth=0, antialiased=False)
ax.scatter3D(ix,iy,iz, alpha = 0.3, s=2)
plt.show()
Edit: I tried the approach in this post: Fit Curve-Spline to 3D Point Cloud
However, this forces me to specify a source and a target for the shortest path. Simply choosing the centers of the lowest and highest slices results in this:
| You can use Delaunay/Voronoi methods to get an approximation of the medial axis of the point cloud and pass a spline curve through it. See my previous answer here, which does exactly that for points sampled on a cylindrical surface. The figure below was taken from that answer. If your point cloud also has points from inside the bounding surface (and not just samples from the outer surface), you can compute the 3D alpha-shape using the code from this answer and then just take the points on the outer surface and approximate the medial axis as I describe in the answer (or use a different method to extract the medial curve from the triangulated boundary surface).
| https://stackoverflow.com/questions/69351109/ |
Exponential moving covariance matrix | I have time series data of a specific dimensionality (e.g. [T, 32]). I filter the data in an online fashion using an exponential moving average and variance (according to Wikipedia and this paper):
mean_n = alpha * mean_{n-1} + (1-alpha) * sample
var_n = alpha * (var_{n-1} + (1-alpha) * (sample - mean_{n-1}) * (sample - mean_{n-1}))
I wanted to replace the moving variance with a moving covariance matrix in order to capture the correlation between the data dimensions (e.g. 32). So I have simply replaced the element-wise variance calculation with an outer product:
covmat_n = alpha * (covmat_{n-1} + (1-alpha) * np.outer((sample - mean_{n-1}), (sample - mean_{n-1})))
However this does not seem to give correct results. For example, when I try to initialize a pytorch multivariate gaussian distribution with such a covariance matrix, it sometimes throws an error saying that the covariance matrix contains invalid values. Since it is always symmetric I assume it breaks the positive-definiteness constraint. Other downstream tasks suggested the same (e.g. computing the KL-Divergence between two gaussians with such covariance matrices sometimes gave negative results).
Does anyone know what I am doing wrong here? Where is the error in my calculations?
And as a bonus question: are my calculations for the simple moving variance correct? It seems strange to multiply the new sample variance by alpha again, but the sources suggest that it is the correct way.
| I have found the answer myself. It seemed to be a numerical problem. Since the eigenvalues of a positive definite matrix must be positive, I could solve it by applying an eigenvalue decomposition to every sample's covariance matrix and ensure that its eigenvalues are larger than zero:
diff = sample - last_mean
sample_covmat = np.outer(diff, diff)
w, v = np.linalg.eigh(sample_covmat)
w += 1e-3  # make all eigenvalues strictly positive (np.maximum(w, 0) alone would only make them non-negative)
sample_covmat = v @ np.diag(w) @ np.linalg.inv(v)  # v from eigh is orthogonal, so np.linalg.inv(v) is just v.T
| https://stackoverflow.com/questions/69351441/ |
Pytorch nn.parallel.DistributedDataParallel model load | Model saved with:
torch.distributed.init_process_group(backend="nccl")
local_rank = torch.distributed.get_rank()
torch.cuda.set_device(local_rank)
device = torch.device("cuda", local_rank)
save_model = f'./model'
Path(save_model).mkdir(parents=True, exist_ok=True)
net = Net(args) # .to(device)
model_name = f"{save_model}/net.pt"
torch.save(net.state_dict(), model_name) #
model = Model(net, args).to(device)
model_name = f"{save_model}/model.pt"
if torch.cuda.device_count() > 1:
model = nn.parallel.DistributedDataParallel(model, device_ids=[local_rank],
output_device=local_rank)
model.module.fit(tr_data, val_data, args)
torch.save(model, model_name)
I tried to load the model with:
save_model = f'./model'
net = Net(args) # .to(device)
model_name = f"{save_model}/net.pt"
net.load_state_dict(
torch.load(model_name, map_location=torch.device("cpu")))
maml = Model(net, args).to(device)
model_name = f"{save_model}/model.pt"
maml = torch.load(model_name, map_location=torch.device(
"cuda" if torch.cuda.is_available() else "cpu")) # .load_state_dict
The "net" can be loaded successfully, but I got the error when loading "model":
File "D:\Research\Traffic Prediction\maml\venv\lib\site-packages\torch\serialization.py", line 607, in load return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "D:\Research\Traffic Prediction\maml\venv\lib\site-packages\torch\serialization.py", line 882, in _load result = unpickler.load()
TypeError: <lambda>() missing 1 required positional argument: 'ddp_join_throw_on_early_termination'
Any input would be really appreciated.
| In general, with PyTorch's DistributedDataParallel, the same model is kept across all nodes (as it is "synchronised" during backpropagation).
The best way to save it is to save the model itself instead of the whole DistributedDataParallel wrapper (usually on the main node, or on multiple nodes if node failure is a concern):
# or not only local_rank 0
if local_rank == 0:
torch.save(model.module.cpu(), path)
Please note that if your model is wrapped within DistributedDataParallel, the model you are after is kept within its module attribute.
Another thing: cast your model to CPU before saving, so no device mapping will be necessary when loading (you might use multiple GPUs and would otherwise have to map the weights appropriately on other machines, which might not have a GPU).
| https://stackoverflow.com/questions/69356717/ |
How to remove the model of transformers in GPU memory | from transformers import CTRLTokenizer, TFCTRLLMHeadModel
tokenizer_ctrl = CTRLTokenizer.from_pretrained('ctrl', cache_dir='./cache', local_files_only=True)
model_ctrl = TFCTRLLMHeadModel.from_pretrained('ctrl', cache_dir='./cache', local_files_only=True)
print(tokenizer_ctrl)
gen_nlp = pipeline("text-generation", model=model_ctrl, tokenizer=tokenizer_ctrl, device=1, return_full_text=False)
Hello, my code can load the transformer model, for example CTRL here, into GPU memory.
How do I remove it from the GPU after usage, to free more GPU memory?
Should I use torch.cuda.empty_cache()?
Thanks.
| You can simply del the objects that hold the model on the GPU (model_ctrl, and the gen_nlp pipeline that references it) and then use torch.cuda.empty_cache().
See this thread from pytorch forum discussing it.
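A minimal sketch of the full cleanup, using the variable names from the question (note that empty_cache() only releases memory that is no longer referenced, so the del calls must come first):
import gc
import torch

del gen_nlp       # drop the pipeline, which holds a reference to the model
del model_ctrl    # drop the model itself
gc.collect()      # make sure Python actually frees the objects
torch.cuda.empty_cache()  # return the cached blocks to the GPU driver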
| https://stackoverflow.com/questions/69357881/ |
The output of my neural network is negative even though I am using ReLU on every layer | I am a beginner at deep learning. I am using PyTorch to implement a neural network trained on some chemical data. The inputs range between 0 and 1, with no negative values, and I am using the ReLU activation function on every layer, so I didn't expect to see negative values in the output.
my input size: 9 features
the output size: 7 features
number of layers: 5
I can predict 6 of the 7 features correctly without a problem; only one of them always has negative values, and I don't know why. As far as I know, ReLU can't generate negative values.
This is my neural network model:
class NNModel(nn.Module):
def __init__(self, in_size, hidden_size, out_size):
super().__init__()
# hidden layers
self.linear1 = nn.Linear(in_size, hidden_size)
self.relu1 = nn.ReLU()
self.linear2 = nn.Linear(hidden_size, hidden_size)
self.relu2 = nn.ReLU()
self.linear3 = nn.Linear(hidden_size, hidden_size)
self.relu3 = nn.ReLU()
self.linear4 = nn.Linear(hidden_size, hidden_size)
self.relu4 = nn.ReLU()
# output layer
self.linear5 = nn.Linear(hidden_size, out_size)
def forward(self, xb):
# Get intermediate outputs using hidden layer
out = self.linear1(xb)
# Apply activation function
out = self.relu1(out)
out=self.linear2(out)
out = self.relu2(out)
out=self.linear3(out)
out = self.relu3(out)
out=self.linear4(out)
out = self.relu4(out)
# Get predictions using output layer
out = self.linear5(out)
return out
def training_step(self, batch):
inputs, targets = batch
out = self(inputs) # Generate predictions
L=nn.MSELoss()
loss = L(out, targets) # Calculate loss
return loss
def validation_step(self, batch):
inputs, targets = batch
out = self(inputs) # Generate predictions
L=nn.MSELoss()
loss = L(out, targets) # Calculate loss
return {'val_loss': loss.detach() }
def validation_epoch_end(self, outputs):
batch_losses = [x['val_loss'] for x in outputs]
epoch_loss = torch.stack(batch_losses).mean() # Combine losses
return {'val_loss': epoch_loss.item()}
def epoch_end(self, epoch, result):
print("Epoch [{}], val_loss: {}".format(epoch+1, result['val_loss']))
Any help, tips, or advice will be appreciated
| Your last layer is a plain linear layer with no activation after it, so it can output negative values; you can rewrite your last layer as:
out = self.relu(self.linear5(out))
and your model definition from:
self.relu1 = nn.ReLU()
self.relu2 = nn.ReLU()
self.relu3 = nn.ReLU()
self.relu4 = nn.ReLU()
to a single definition:
self.relu = nn.ReLU()
and reuse this self.relu, as it is just a function without any learnable parameters.
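Putting both changes together, a minimal sketch of the rewritten model (only the definition and forward change; the training/validation methods from the question stay as they are):
class NNModel(nn.Module):
    def __init__(self, in_size, hidden_size, out_size):
        super().__init__()
        self.linear1 = nn.Linear(in_size, hidden_size)
        self.linear2 = nn.Linear(hidden_size, hidden_size)
        self.linear3 = nn.Linear(hidden_size, hidden_size)
        self.linear4 = nn.Linear(hidden_size, hidden_size)
        self.linear5 = nn.Linear(hidden_size, out_size)
        self.relu = nn.ReLU()  # a single instance, reused for every layer

    def forward(self, xb):
        out = self.relu(self.linear1(xb))
        out = self.relu(self.linear2(out))
        out = self.relu(self.linear3(out))
        out = self.relu(self.linear4(out))
        out = self.relu(self.linear5(out))  # ReLU on the output => non-negative
        return out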
Based on your features, I would recommend some things:
use a pyramid-like number of neurons per layer, e.g.:
layer 1: 512
layer 2: 256
layer 3: 125
last layer activation function:
use a linear layer followed by a sigmoid (don't put a ReLU between them, otherwise you will clip your outputs to the range 0.5 to 1).
Normalize your output data, and store the mean and variance so you can reverse the transformation afterwards. Also, take a closer look: it is very likely that the values you are failing to predict lie in a different range/distribution compared to the ones you are predicting well.
Try the Adam optimizer with a 0.001 learning rate.
Try different batch sizes, ranging over 1, 2, 8, 16, 32, 64, 128.
Try adding dropout.
Finally, you may simply need more data.
| https://stackoverflow.com/questions/69361678/ |
How to downgrade from CUDA 11.4 to 10.2 & add sm_35 - CUDA error: no kernel image is available for execution on the device | I'm trying to run a piece of code on Pytorch, but I get the error:
RuntimeError: CUDA error: no kernel image is available for execution on the device
I've narrowed down the issue to a mismatch of CUDA versions. My machine has 2 GPUs:
a GeForce GTX 650 (compute capability 3.0) and a Tesla K40c (compute capability 3.5). I've checked the compute capabilities here: https://developer.nvidia.com/cuda-gpus.
My nvidia-smi command gives the following:
nvidia-smi output (Driver Version: 470.57.02 & CUDA Version: 11.4)
While my nvcc -V command gives the following:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
The 10.1 version exists because I tried to install that CUDA version, specifically, following the instructions elsewhere (for example: https://medium.com/@anarmammadli/how-to-install-cuda-10-2-cudnn-7-6-5-and-samples-on-ubuntu-18-04-2493124478ca)
Also, I've installed cudatoolkit with conda, and so on my conda list I have the following entry:
...
cudatoolkit 10.1.243 h6bb024c_0
...
In accordance with https://github.com/moi90/pytorch_compute_capabilities/blob/main/table.md I've also installed the 1.8.0 PyTorch version.
However, in Python 3.7.11:
Python 3.7.11 (default, Jul 27 2021, 14:32:16)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.__version__
'1.8.1'
>>> torch.version.cuda
'10.1'
>>> torch.cuda.get_arch_list()
['sm_37', 'sm_50', 'sm_60', 'sm_61', 'sm_70', 'sm_75', 'compute_37']
>>> torch.cuda.is_available()
True
I don't have sm_35 available, which I need in order to use the Tesla K40. I believe this is the reason I keep getting the "CUDA error: no kernel image is available for execution on the device" error. I've also tried all of the above for version 10.2 of CUDA, with the same result.
| I've solved my problem. As stated in the comments, I required a version of PyTorch that supports sm_35 compute capability. It had little to do with the current CUDA version. In the end, I found these binaries:
https://blog.nelsonliu.me/2020/10/13/newer-pytorch-binaries-for-older-gpus/
I finally fixed the issue by creating a new environment and running:
pip install torch==1.3.1+cu92 -f https://nelsonliu.me/files/pytorch/whl/torch_stable.html
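A quick sanity check after installing, run in a fresh Python session; any CUDA kernel launch will raise the "no kernel image" error again if sm_35 is still unsupported:
import torch

x = torch.randn(3, 3).cuda()
print((x @ x).sum())  # completes without error once the K40c (sm_35) is supported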
| https://stackoverflow.com/questions/69364529/ |
Correct use of Cross-entropy as a loss function for sequence of elements | I have a sequence labeling task.
So as input I have a sequence of elements with shape [batch_size, sequence_length], where each element of this sequence should be assigned to some class.
As a loss function during training of the neural net, I use cross-entropy.
How should I use it correctly?
My variable target_predictions has shape [batch_size, sequence_length, number_of_classes] and target has shape [batch_size, sequence_length].
Documentation says:
I know that if I use CrossEntropyLoss(target_predictions.permute(0, 2, 1), target), everything will work fine. But I have concerns that torch is interpreting my sequence_length as the d_1 variable shown in the screenshot, and will think that it is a multidimensional loss, which is not the case.
How should I correctly do it?
| Using the CE loss this way will give you the loss value, not labels. By default the mean is taken, which is probably what you are after, and the snippet with permute is fine (with this loss you can train your network via backward).
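For instance, a minimal self-contained sketch of the permuted call with dummy shapes matching yours:
import torch
import torch.nn as nn

batch, seq_len, n_classes = 4, 10, 5
target_predictions = torch.randn(batch, seq_len, n_classes, requires_grad=True)
target = torch.randint(n_classes, (batch, seq_len))

criterion = nn.CrossEntropyLoss()
# CrossEntropyLoss expects (batch, classes, d1), so move the class dim to position 1.
loss = criterion(target_predictions.permute(0, 2, 1), target)
loss.backward()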
To get the predicted classes, just take the argmax across the class dimension; in the case without the permutation it would be:
labels = torch.argmax(target_predictions, dim=-1)
This will give you (batch, sequence_length) output containing classes.
| https://stackoverflow.com/questions/69367671/ |
Understanding backpropagation in PyTorch | I am exploring PyTorch, and I do not understand the output of the following example:
# Initialize x, y and z to values 4, -3 and 5
x = torch.tensor(4., requires_grad = True)
y = torch.tensor(-3., requires_grad = True)
z = torch.tensor(5., requires_grad = True)
# Set q to sum of x and y, set f to product of q with z
q = x + y
f = q * z
# Compute the derivatives
f.backward()
# Print the gradients
print("Gradient of x is: " + str(x.grad))
print("Gradient of y is: " + str(y.grad))
print("Gradient of z is: " + str(z.grad))
Output
Gradient of x is: tensor(5.)
Gradient of y is: tensor(5.)
Gradient of z is: tensor(1.)
I have little doubt that my confusion originates with a minor misunderstanding. Can someone explain in a stepwise manner?
| I hope you understand that when you do f.backward(), what you get in x.grad is $\frac{\partial f}{\partial x}$.
In your case,
$$ f = q \cdot z = (x + y) \cdot z $$
So, simply (with preliminary calculus),
$$ \frac{\partial f}{\partial x} = z, \qquad \frac{\partial f}{\partial y} = z, \qquad \frac{\partial f}{\partial z} = x + y $$
If you put in your values for x, y and z (4, -3 and 5), that explains the outputs: 5, 5 and 1 respectively.
But this isn't really the "backpropagation" algorithm. These are just partial derivatives (which is all you asked about in the question).
Edit:
If you want to know about the Backpropagation machinery behind it, please see @Ivan's answer.
| https://stackoverflow.com/questions/69367939/ |
How to get outputs in the same order as inputs with multiple spawned processes running on multiple GPUs and batches of data processed by each? | I am using the PyTorch DistributedDataParallel approach and spawning multiple processes, each running on a separate GPU. I am using the PyTorch DistributedSampler along with a DataLoader to load batches of input data into each process.
My questions:
Under the hood, how do the PyTorch DistributedSampler and DataLoader slice the input data? For simplicity, say we have 4 GPUs, 400 input samples, and a batch size of 50. Will the sampler (together with the DataLoader) send the first 50 samples to GPU-0, the next 50 to GPU-1, the next 50 to GPU-2, then GPU-3, and then the next 50 back to GPU-0, i.e. in order of GPU device number? Or is the GPU that receives the next batch chosen at random, based on which GPU finished its previous batch first? Or are the 400 samples first divided into 4 parts, so that GPU-0 gets the first 100 samples (50 at a time), GPU-1 the next 100 (50 at a time), and so on? In that case, even if GPU-3 starts its second batch before GPU-0, with respect to the input data GPU-0 would still hold the first 100 samples and GPU-3 the last 100?
2) My second question is how to retrieve the output data in the same order as the input data, so that the final consolidated output (with the outputs from all processes combined into one data structure) is in the same order as the original inputs, and each output corresponds to the right input.
|
The PyTorch documentation on DistributedSampler doesn't provide any guarantees regarding how data is distributed across processes and devices, other than the fact that it is, in fact, distributed across processes and devices. You shouldn't design your application to be dependent on an implementation detail of an external package; otherwise, your application could suddenly fail one day after updating PyTorch, and you'd have no idea why (or potentially that it's even failing to begin with). If, for some reason, you absolutely need the data to be distributed in a very specific way, you should roll your own solution. The documentation for DistributedDataParallel suggests that, if you're using a single host with N GPUs, you should spin up N processes, each designated a single GPU. A simple solution would be to set the process's rank equal to the designated GPU device ID; this could in-turn be used in a custom sampler class to select the appropriate sample indices.
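For example, a minimal sketch of such a sampler (ContiguousShardSampler is a hypothetical name): with 400 samples and 4 processes, rank 0 gets indices 0-99, rank 1 gets 100-199, and so on, each then batched 50 at a time by the DataLoader.
from torch.utils.data import DataLoader, Sampler

class ContiguousShardSampler(Sampler):
    def __init__(self, dataset_len, num_replicas, rank):
        shard = dataset_len // num_replicas       # samples per process
        self.indices = range(rank * shard, (rank + 1) * shard)

    def __iter__(self):
        return iter(self.indices)

    def __len__(self):
        return len(self.indices)

# e.g. inside the process designated GPU `rank`:
# loader = DataLoader(dataset, batch_size=50,
#                     sampler=ContiguousShardSampler(len(dataset), 4, rank))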
You could try to control the order in which outputs are returned by the various distributed processes, but this introduces unnecessary synchronization which would defeat much of the purpose of parallelization. A better solution is to simply return outputs in an arbitrary order, and then sort them after-the-fact. If you'd like the outputs to be sorted in the same order as the inputs, you can just associate each input with, say, an integer index (input 0 gets index 0, input 1 gets index 1, and so on). When returning the output, also return the index of the associated input (e.g. as a tuple). Afterwards, you can just sort the outputs by their corresponding indices.
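For instance, a minimal sketch of the index-tagging approach (the dataset and function names here are illustrative assumptions, not from your code):
# the dataset yields (index, sample) pairs so each output can be traced back
def evaluate(model, dataloader):
    results = []
    for indices, inputs in dataloader:
        outputs = model(inputs)
        results.extend(zip(indices.tolist(), outputs))
    return results
# after gathering the per-process result lists into one list:
def restore_order(all_results):
    # sort by the stored input index to recover the original input order
    all_results.sort(key=lambda pair: pair[0])
    return [out for _, out in all_results]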
| https://stackoverflow.com/questions/69368042/ |
Is there any option at Pytorch autograd function for these problem? | Sorry for the vague title because I don't know exactly how to ask the question.
I'm using pytorch's autograd function right now, and I'm struggling with results I don't understand.
In my understanding, the grad calculated from the loss is how far each parameter should move in the direction where the loss is minimized, so its direction shouldn't change just because the scale of the loss has changed.
It means $$ \text{grad}(\text{loss}) = 5\,\text{grad}\left(\frac{1}{5}\,\text{loss}\right) $$
But my actual result doesn't. So
formulation explanation
And this is my actual code:)
from torch.autograd import grad
train_loss = loss(models(adaptation_data), adaptation_labels)
grads = grad(train_loss , models.parameters(),create_graph=True)
grads_02 = grad(train_loss*0.2 , models.parameters(),create_graph=True)
grads[-1] == grads_02[-1] * 5
#result : False
the whole code
Maybe I'm doing something wrong, or there is an option for this in the grad function; can anyone tell me, please?
| Your code screenshot shows that the two tensors differ due to floating-point rounding error. Do not compare them with the == sign; use the isclose() function instead:
torch.isclose(grads[-1], grads_02[-1] * 5)
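If you want a single boolean instead of an element-wise mask, torch.allclose works the same way:
torch.allclose(grads[-1], grads_02[-1] * 5)  # True, up to floating-point tolerance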
| https://stackoverflow.com/questions/69368150/ |
what is model.training in pytorch? | hi i'm going through pytorch tutorial about transfer learning.
(https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html)
what is model.training for??
def visualize_model(model, num_images=6):
was_training=model.training
model.eval()
images_so_far=0
fig=plt.figure()
with torch.no_grad():
for i, (inputs,labels) in enumerate(dataloaders['val']):
inputs=inputs.to(device)
labels=labels.to(device)
outputs=model(inputs)
_, preds = torch.max(outputs, 1)
for j in range(inputs.size()[0]):
images_so_far+=1
ax=plt.subplot(num_images//2,2,images_so_far)
ax.axis('off')
ax.set_title('predicted: {}'.format(class_names[preds[j]]))
imshow(inputs.cpu().data[j])
if images_so_far==num_images:
model.train(mode=was_training)
return
model.train(mode=was_training)
I cannot understand model.train(mode=was_training). Any help? Thank you so much.
| I think this will help (link)
All nn.Modules have an internal training attribute, which is changed by calling model.train() and model.eval() to switch the behavior of the model.
The was_training variable stores the current training state of the model; the function then calls model.eval() and restores the state at the end using model.train(mode=was_training).
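A minimal illustration of the attribute being toggled:
import torch.nn as nn

model = nn.Linear(4, 2)
print(model.training)   # True: modules start in training mode
model.eval()
print(model.training)   # False
model.train(mode=True)  # equivalent to model.train()
print(model.training)   # True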
You can find great answers in pytorch discuss forum ;)
| https://stackoverflow.com/questions/69371652/ |
I cannot draw bounding box correctly | I can't draw the bounding box correctly.
img = Image.open("/content/drive/MyDrive/58125_000893_Sideline_frame298.jpg").convert('RGB')
convert_tensor = torchvision.transforms.ToTensor()
img=convert_tensor(img)
width=15
top=456
height=16
left=1099
img=img*255
boxs=[left,top,top+width,left+height]
print(boxs)
boxs=torch.tensor(boxs,dtype=torch.int)
a=torch.tensor(img,dtype=torch.uint8)
a=torchvision.utils.draw_bounding_boxes(image=a,boxes=boxs.unsqueeze(0),width=2,colors=(0,0,255))
a=a.permute(1,2,0)
plt.imshow(a)
plt.show()
I need to draw a bounding box around the player's helmet, but the box was drawn in a different location. I get the bounding box attributes from CSV files. Can someone help me fix this issue?
| The boxes argument of torchvision.utils.draw_bounding_boxes expects (xmin, ymin, xmax, ymax) in image coordinates (origin at the top-left corner, y increasing downward), with xmin < xmax and ymin < ymax. Your list mixes the x and y terms together.
Your mapping should therefore be:
xmin = left
ymin = top
xmax = left + width
ymax = top + height
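Applied to your snippet, the corrected box would be:
boxs = [left, top, left + width, top + height]  # (xmin, ymin, xmax, ymax)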
| https://stackoverflow.com/questions/69403235/ |
How to enable Intel Extension for Pytorch(IPEX) in my python code? | I would like to use Intel Extension for Pytorch in my code to increase overall performance. Referred this GitHub(https://github.com/intel/intel-extension-for-pytorch) for installation.
Currently, I am trying out a hugging face summarization PyTorch sample(https://github.com/huggingface/transformers/blob/master/examples/pytorch/summarization/run_summarization.py). Below is the trainer API used for training.
# Initialize our Trainer
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
train_dataset=train_dataset if training_args.do_train else None,
eval_dataset=eval_dataset if training_args.do_eval else None,
tokenizer=tokenizer,
data_collator=data_collator,
compute_metrics=compute_metrics if training_args.predict_with_generate else None,
)
I am not aware of enabling Ipex in this code. Can anyone help me with this?
Thanks in Advance!
| The key changes that are required to enable IPEX are:
#Import the library:
import intel_extension_for_pytorch as ipex
#Apply the optimizations to the model for its datatype:
model = ipex.optimize(model)
# torch.channels_last should be applied to both the model object and the data to improve CPU resource usage efficiency.
model = model.to(memory_format=torch.channels_last)
data = data.to(memory_format=torch.channels_last)
Also, please check out https://intel.github.io/intel-extension-for-pytorch/latest/tutorials/examples.html for IPEX examples, and the official IPEX page https://www.intel.com/content/www/us/en/developer/tools/oneapi/extension-for-pytorch.html to learn more.
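For the Seq2SeqTrainer script in your question, a minimal sketch (an untested assumption on my side, not an official recipe) would be to optimize the model before handing it to the trainer:
import intel_extension_for_pytorch as ipex

model = ipex.optimize(model)  # apply IPEX optimizations (FP32 here)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset if training_args.do_train else None,
    eval_dataset=eval_dataset if training_args.do_eval else None,
    tokenizer=tokenizer,
    data_collator=data_collator,
)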
| https://stackoverflow.com/questions/69404795/ |
How to enable mixed precision training while using Intel Extension for PyTorch (IPEX)? | I am working on Dog-Cat classifier using Intel extension for Pytorch (Ref - https://github.com/amitrajitbose/cat-v-dog-classifier-pytorch). I want to reduce the training time for my model. How do I enable mixed precision in my code? Referred this github(https://github.com/intel/intel-extension-for-pytorch) for training my model.
| Mixed precision for Intel Extension for PyTorch can be enabled using below commands,
# For Float32
model, optimizer = ipex.optimize(model, optimizer=optimizer)
# For BFloat16
model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)
Please check out the link, https://intel.github.io/intel-extension-for-pytorch/cpu/latest/index.html and https://www.intel.com/content/www/us/en/developer/tools/oneapi/extension-for-pytorch.html to learn more about Intel Extension for PyTorch.
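A minimal BFloat16 training-step sketch (assuming a CPU with BF16 support; model, optimizer, criterion and train_loader are placeholders):
import torch
import intel_extension_for_pytorch as ipex

model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)

for images, labels in train_loader:
    optimizer.zero_grad()
    with torch.cpu.amp.autocast():  # run the forward pass in bfloat16
        outputs = model(images)
        loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()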
| https://stackoverflow.com/questions/69405997/ |
Summing over product of tensor elements and vector | I'm trying to write a python code for a higher order (d=4) factorization machine that returns the scalar result y of
Where x is a vector of some length n, v is a vector of length n, w is an upper triangular matrix of size n by n, and t is a rank 4 tensor of size n by n by n by n. The easiest implementation is just for loops over each index:
for i in range(0,len(x)):
for j in range(0,len(x)):
for k in range(0,len(x)):
for l in range(0,len(x)):
y += t[i,j,k,l] * x[i] * x[j] * x[k] * x[l]
The first two terms are easily calculated:
y = u @ x + x @ v @ x.T
My question- is there a better way of calculating the sum over the tensor than a nested for-loop? (currently looking at possible solutions in pytorch)
| This seems like a perfect fit for torch.einsum:
>>> torch.einsum('ijkl,i,j,k,l->', t, *(x,)*4)
In expanded form, this looks like torch.einsum('ijkl,i,j,k,l->', t, x, x, x, x) and computes the value defined by your four for loops:
for i, j, k, l in cartesian_prod:
y += t[i,j,k,l] * x[i] * x[j] * x[k] * x[l]
Where cartesian_prod is the cartesian product: range(len(x))^4
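A quick self-contained check that the einsum expression matches the nested loops (with a small arbitrary size):
import torch
from itertools import product

n = 4
t = torch.randn(n, n, n, n)
x = torch.randn(n)

y_einsum = torch.einsum('ijkl,i,j,k,l->', t, x, x, x, x)

y_loop = torch.tensor(0.)
for i, j, k, l in product(range(n), repeat=4):
    y_loop += t[i, j, k, l] * x[i] * x[j] * x[k] * x[l]

print(torch.allclose(y_einsum, y_loop))  # True, within float tolerance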
| https://stackoverflow.com/questions/69410396/ |
How to iterate over Dataloader until a number of samples is seen? | I'm learning pytorch, and I'm trying to implement a paper about the progressive growing of GANs. The authors train the networks on the given number of images, instead of for a given number of epochs.
My question is: is there a way to do this in pytorch, using default DataLoaders? I'd like to do something like:
loader = Dataloader(..., total=800000)
for batch in iter(loader):
... #do training
And the loader loops itself automatically until 800000 samples are seen.
I think that'd be a better way than calculating the number of times you have to loop through the dataset yourself.
| You can use torch.utils.data.RandomSampler and sample from your dataset. Here is a minimal setup example:
import torch
from torch.utils.data import Dataset, DataLoader, RandomSampler

class DS(Dataset):
def __len__(self):
return 5
def __getitem__(self, index):
return torch.empty(1).fill_(index)
>>> ds = DS()
Initialize a random sampler providing num_samples and setting replacement to True i.e. the sampler is forced to draw instances multiple times if len(ds) < num_samples:
>>> sampler = RandomSampler(ds, replacement=True, num_samples=10)
Then plug this sampler to a new torch.utils.data.DataLoader:
>>> dl = DataLoader(ds, sampler=sampler, batch_size=2)
>>> for batch in dl:
... print(batch)
tensor([[6.],
[4.]])
tensor([[9.],
[2.]])
tensor([[9.],
[2.]])
tensor([[6.],
[2.]])
tensor([[0.],
[9.]])
| https://stackoverflow.com/questions/69427073/ |
How to solve this Issue "ModuleNotFoundError: No module named 'torch.tensor'" | How to solve this issue
Traceback (most recent call last):
File "C:/Users/arulsuju/Desktop/OfflineSignatureVerification-master/OfflineSignatureVerification-master/main.py", line 4, in
from Preprocessing import convert_to_image_tensor, invert_image
File "C:\Users\arulsuju\Desktop\OfflineSignatureVerification-master\OfflineSignatureVerification-master\Preprocessing.py", line 4, in
from torch.tensor import Tensor
ModuleNotFoundError: No module named 'torch.tensor'
in Python I am using from torch.tensor import Tensor
| In line 4, change this
from torch.tensor import Tensor
TO
from torch import Tensor
| https://stackoverflow.com/questions/69429076/ |
torch.cuda.is_available() is False only in Jupyter Lab/Notebook | I tried to install CUDA on my computer. After doing so I checked in my Anaconda Prompt and is appeared to work out fine.
However, when I started Jupyter Lab from the same environment, torch.cuda.is_available() returns False. I managed to find and follow this solution, but the problem still persisted for me.
Does anybody have any idea why? Thank you so much!
| Seems to be a problem with similar causes:
Not able to import Tensorflow in Jupyter Notebook
You are probably using a different environment than the one you use outside Jupyter.
Try opening Anaconda Navigator, navigate to Environments and activate your env, navigate to Home and install Jupyter Notebook, then launch Jupyter Notebook from the Navigator. This should solve the issue.
In Linux, you can check using !which python inside Jupyter.
In windows you can use:
import os
import sys
os.path.dirname(sys.executable)
This will show where the Python you are using is located.
See if the path matches.
| https://stackoverflow.com/questions/69438417/ |
Indexing a multi-dimensional tensor using only one dimension | I have a PyTorch tensor b with the shape: torch.Size([10, 10, 51]). I want to select one element between the 10 possible elements in the dimension d=1 (middle one) using a numpy array: a = np.array([0,1,2,3,4,5,6,7,8,9]). this is just a random example.
I wanted to do:
b[:,a,:] but that isn't working
| Your solution is likely torch.index_select (docs)
You'll have to turn a into a tensor first, though.
a_torch = torch.from_numpy(a)
answer = torch.index_select(b, 1, a_torch)
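For the shapes in the question:
import torch
import numpy as np

b = torch.randn(10, 10, 51)
a = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])

answer = torch.index_select(b, 1, torch.from_numpy(a))
print(answer.shape)  # torch.Size([10, 10, 51])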
| https://stackoverflow.com/questions/69439465/ |
Retrieve only the last hidden state from lstm layer in pytorch sequential | I have a pytorch model:
model = torch.nn.Sequential(
torch.nn.LSTM(40, 256, 3, batch_first=True),
torch.nn.Linear(256, 256),
torch.nn.ReLU()
)
And for the LSTM layer, I want to retrieve only the last hidden state from the batch to pass through the rest of the layers. Ex:
_, (hidden, _) = lstm(data)
hidden = hidden[-1]
Though, that example only works for a subclassed model. I need to somehow do this on an nn.Sequential() model so that when I save it, it can properly be converted to a tensorflow.js model. The reason I can't make and train this model in tensorflow.js is that I'm trying to implement this repo, Resemblyzer, in tensorflow.js while still using the same weights as the pretrained Resemblyzer model, which was made in pytorch as a subclassed model. I thought of using the torchvision.transforms.Lambda() transformation, but I would assume that would make it incompatible with tensorflow.js. Is there any way to make this possible while still allowing the model to convert properly?
| You can split up your Sequential, but only do so at inference time, in your model's forward logic. Once defined:
model = nn.Sequential(nn.LSTM(40, 256, 3, batch_first=True),
nn.Linear(256, 256),
nn.ReLU())
You can split it:
>>> lstm, fc = model[0], model[1:]
Then infer in two steps:
>>> out, (hidden, _) = lstm(data)
>>> hidden = hidden[-1]
>>> out = fc(out) # <- or fc(out[-1]) depending on what you want
| https://stackoverflow.com/questions/69443940/ |
How should I replace the sentence torch.Assert from pytorch-1.7.1 to some other sentence in pytorch-1.5.0 | The thing is I need to reimplement a GAN model using torch1.5.0 but the previous torch1.7.1 version codes contains a torch.Assert sentence to do symbolic assert. What sentence should I use to do the same thing?
| Just use python's native assert. That's what it does under the hood.
torch._assert(x == y, 'assertion message')
to be replaced with
assert x == y, 'assertion message'
| https://stackoverflow.com/questions/69447704/ |
the usage of @ operator in an implementation of extending nn.Module | I saw the following code segment for extending nn.Mudule. What I do not understand is the input_ @ self.weight in forward function. I can understand that it is try to use the weight information of input_. But @ is always used as decorator, why it can be used this way?
class Linear(nn.Module):
def __init__(self, in_size, out_size):
super().__init__()
self.weight = nn.Parameter(torch.randn(in_size, out_size))
self.bias = nn.Parameter(torch.randn(out_size))
def forward(self, input_):
return self.bias + input_ @ self.weight
linear = Linear(5, 2)
assert isinstance(linear, nn.Module)
assert not isinstance(linear, PyroModule)
example_input = torch.randn(100, 5)
example_output = linear(example_input)
assert example_output.shape == (100, 2)
| The @ is a shorthand for the __matmul__ function: the matrix multiplication operator.
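A quick demonstration of the equivalence:
import torch

a = torch.randn(100, 5)
w = torch.randn(5, 2)

out1 = a @ w
out2 = torch.matmul(a, w)  # same operation
out3 = a.__matmul__(w)     # what the @ operator actually calls

print(torch.equal(out1, out2) and torch.equal(out1, out3))  # True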
| https://stackoverflow.com/questions/69451996/ |
Inverse operation to padding in Jax | I'm trying to learn how to use Jax and I stumbled upon the problem of converting the torch.nn.functionnal.pad function into Jax. There is a function to perform padding but I would like in the same way as in PyTorch use negative numbers in the padding (e.g F.pad(array, [-1,-1])).
Does anyone have an idea or had the same problem ?
| The jax.lax.pad function accepts negative padding indices, although the API is a bit different than that of torch.nn.functional.pad. For example:
from jax import lax
import jax.numpy as jnp
x = jnp.ones((2, 3))
y = lax.pad(x, padding_config=[(0, 0, 0), (1, 1, 0)], padding_value=0.0)
print(y)
# [[0. 1. 1. 1. 0.]
# [0. 1. 1. 1. 0.]]
x = lax.pad(y, padding_config=[(0, 0, 0), (-1, -1, 0)], padding_value=0.0)
print(x)
# [[1. 1. 1.]
# [1. 1. 1.]]
If you wish, you could wrap this with a function that has similar semantics to the torch version. Here's a quick attempt:
def jax_pad(input, pad, mode='constant', value=0):
"""JAX implementation of torch.nn.functional.pad
Warning: this has not been thoroughly tested!
"""
if mode != 'constant':
raise NotImplementedError("Only mode='constant' is implemented")
assert len(pad) % 2 == 0
assert len(pad) // 2 <= input.ndim
pad = list(zip(*[iter(pad)]*2))
pad += [(0, 0)] * (input.ndim - len(pad))
return lax.pad(
input,
padding_config=[(i, j, 0) for i, j in pad[::-1]],
padding_value=jnp.array(value, input.dtype))
x = jnp.ones((2, 3))
y = jax_pad(x, (1, 1))
print(y)
# [[0. 1. 1. 1. 0.]
# [0. 1. 1. 1. 0.]]
x = jax_pad(y, (-1, -1))
print(x)
# [[1. 1. 1.]
# [1. 1. 1.]]
| https://stackoverflow.com/questions/69453600/ |
Is there an actual minimum input image size for popular computer vision models? (E.g., vgg, resnet, etc.) | According to the documentation on pre-trained computer vision models for transfer learning (e.g., here), input images should come in "mini-batches of 3-channel RGB images of shape (3 x H x W), where H and W are expected to be at least 224".
However, when running transfer learning experiments on 3-channel images with height and width smaller than expected (e.g., smaller than 224), the networks generally run smoothly and often get decent performances.
Hence, it seems to me that the "minimum height and width" is somehow a convention and not a critical parameter. Am I missing something here?
| There is a limitation on your input size which corresponds to the receptive field of the last convolution layer of your network. Intuitively, you can observe the spatial dimensionality decreasing as you progress through the network. At least this is the case for feature-extractor CNNs, which aim at extracting feature embeddings from the input image. That is, most pre-trained models, such as vanilla VGG and ResNet networks, do not retain spatial dimensionality. If the input of a convolutional layer is smaller than the kernel size (even if/when padded), then you simply won't be able to perform the operation.
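A minimal sketch illustrating how spatial dimensions shrink through a feature extractor (here resnet18 with the classifier head removed; the sizes chosen are arbitrary):
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=False)
features = torch.nn.Sequential(*list(model.children())[:-2])  # drop avgpool + fc

for size in (224, 64, 32):
    x = torch.randn(1, 3, size, size)
    print(size, '->', features(x).shape)  # spatial dims shrink ~32x overall
Below some minimum size, an intermediate feature map becomes smaller than a following kernel and the forward pass fails.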
| https://stackoverflow.com/questions/69471729/ |
How to multiply a set of masks over an array of n matrices or tensors in python without using loops? | I have N binary masks, along with N x M matrices. I wish to apply the ith mask onto the M matrices at the ith index of the matrix array. Their data types can be either torch tensors or numpy arrays. To illustrate,
If I have an array arr:
arr = torch.rand((2, 3, 3, 3))
or
arr = tensor([[[[0.2336, 0.4841, 0.4121],
[0.9342, 0.8496, 0.8332],
[0.4670, 0.8158, 0.7891]],
[[0.5791, 0.2391, 0.8501],
[0.9811, 0.0087, 0.0655],
[0.6587, 0.3105, 0.0931]],
[[0.8892, 0.8104, 0.9181],
[0.1605, 0.5280, 0.0905],
[0.2149, 0.8851, 0.7125]]],
[[[0.9969, 0.8589, 0.7479],
[0.4013, 0.5922, 0.0252],
[0.9267, 0.8123, 0.0711]],
[[0.7931, 0.6477, 0.0947],
[0.5969, 0.7751, 0.5662],
[0.1785, 0.0310, 0.9135]],
[[0.1490, 0.3623, 0.3670],
[0.3710, 0.7887, 0.1310],
[0.2052, 0.0244, 0.6891]]]])
and I generate a mask using:
mask = arr[:, 0, 0:, :] > 0.5
For example:
mask = tensor([[[False, False, False],
[ True, True, True],
[False, True, True]],
[[ True, True, True],
[False, True, False],
[ True, True, False]]])
i.e. of shape (2, 3, 3). For each set of 3 x 3 x 3 matrices, I want to multiply the corresponding single mask, i.e.
result = mask[i] * arr[i, :, :, :]
or result should ideally be:
result = tensor([[[[0, 0, 0],
[0.9342, 0.8496, 0.8332],
[0, 0.8158, 0.7891]],
[[0, 0, 0],
[0.9811, 0.0087, 0.0655],
[0, 0.3105, 0.0931]],
[[0, 0, 0],
[0.1605, 0.5280, 0.0905],
[0, 0.8851, 0.7125]]],
[[[0.9969, 0.8589, 0.7479],
[0, 0.5922, 0],
[0.9267, 0.8123, 0]],
[[0.7931, 0.6477, 0.0947],
[0, 0.7751, 0],
[0.1785, 0.0310, 0]],
[[0.1490, 0.3623, 0.3670],
[0, 0.7887, 0],
[0.2052, 0.0244, 0]]]])
I am using this in a relatively long running piece of code, hence I want to avoid using loops which might blow up the code. I have tried using np.multiply, np.dot and np.matmul, but I keep facing dimensionality issues. Can anyone help me with this?
The following code shows the operation I want to perform, removing the for loop:
for i, m in enumerate(arr):
result[i] = mask[i] * arr[i]
| By using integer indexing, you removed one of the dimensions from arr (the 2nd axis), and you can't broadcast an array of shape (2, 3, 3) with one of shape (2, 3, 3, 3). To make them compatible again, you can add the dimension back by reshaping:
mask.reshape((2, 1, 3, 3)) * arr
Or keep the dimension from the first place by using slice:
mask = arr[:, :1, :, :] > 0.5
and then you can just do mask * arr.
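Equivalently with unsqueeze, plus a check against the loop version:
result = mask.unsqueeze(1) * arr  # (2, 1, 3, 3) broadcasts against (2, 3, 3, 3)

expected = torch.empty_like(arr)
for i in range(arr.shape[0]):
    expected[i] = mask[i] * arr[i]

print(torch.equal(result, expected))  # True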
| https://stackoverflow.com/questions/69474841/ |
Problem with Graph Neural Network in PyTorch Geometric | I'm trying to understand what is wrong with the following GNN model implemented in PyTorch
class Net(torch.nn.Module):
def __init__(self):
super().__init__()
self.conv = SAGEConv(dataset.num_features,
dataset.num_classes,
aggr="max") # max, mean, add ...)
def forward():
x = self.conv(data.x, data.edge_index)
return F.log_softmax(x, dim=1)
but I get the following error when trying to run a training loop:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-34-f3ee8050af6a> in <module>
1 best_val_acc = test_acc = 0
2 for epoch in range(1,100):
----> 3 train()
4 _, val_acc, tmp_test_acc = test()
5 if val_acc > best_val_acc:
<ipython-input-14-64df4e2a24f9> in train()
2 model.train()
3 optimizer.zero_grad()
----> 4 F.nll_loss(model()[data.train_mask], data.y[data.train_mask]).backward()
5 optimizer.step()
6
~\anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
TypeError: forward() takes 0 positional arguments but 1 was given
I'm adding more details as requested on how I call the model :
def train():
model.train()
optimizer.zero_grad()
F.nll_loss(model()[data.train_mask], data.y[data.train_mask]).backward()
optimizer.step()
def test():
model.eval()
logits, accs = model(), []
for _, mask in data('train_mask', 'val_mask', 'test_mask'):
pred = logits[mask].max(1)[1]
acc = pred.eq(data.y[mask]).sum().item() / mask.sum().item()
accs.append(acc)
return accs
| Function torch.nn.Module.forward should have at minimum one argument: self. In your case, you have two: self and your input data.
def forward(self, data): # <-
x = self.conv(data.x, data.edge_index)
return F.log_softmax(x, dim=1)
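With this signature, the training and evaluation code must pass the data object explicitly, e.g.:
F.nll_loss(model(data)[data.train_mask], data.y[data.train_mask]).backward()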
| https://stackoverflow.com/questions/69478259/ |
NLLLoss is just a normal negative function? | I'm having trouble understanding nn.NLLLoss().
Since the code below always prints True then what's the difference between the nn.NLLLoss() and using the negative sign (-)?
import torch
while 1:
b = torch.randn(1)
print(torch.nn.NLLLoss()(b, torch.tensor([0])) == -b[0])
| In your case you only have a single output value per batch element and the target is 0. The nn.NLLLoss loss will pick the value of the predicted tensor corresponding to the index contained in the target tensor. Here is a more general example where you have a total of five batch elements each having three logit values:
>>> logits = torch.randn(5, 3, requires_grad=True)
>>> y = torch.tensor([1, 0, 2, 0, 1])
>>> y_hat = torch.softmax(logits, -1)
Tensors y and y_hat correspond to the target tensor and estimated distributions respectively. You can implement nn.NLLLoss with the following:
>>> -y_hat[torch.arange(len(y_hat)), y]
tensor([-0.2195, -0.1015, -0.3699, -0.5203, -0.1171], grad_fn=<NegBackward>)
Compared to the built-in function:
>>> F.nll_loss(y_hat, y, reduction='none')
tensor([-0.2195, -0.1015, -0.3699, -0.5203, -0.1171], grad_fn=<NllLossBackward>)
Which is quite different to -y_hat alone.
| https://stackoverflow.com/questions/69495926/ |
Is there a way to overide the backward operation on nn.Module | I am looking for a nice way of overriding the backward operation in nn.Module for example:
class LayerWithCustomGrad(nn.Module):
def __init__(self):
super(LayerWithCustomGrad, self).__init__()
self.weights = nn.Parameter(torch.randn(200))
def forward(self,x):
return x * self.weights
def backward(self,grad_of_c): # This gets called during loss.backward()
# grad_of_c comes from the gradient of b*23
grad_of_a = some_operation(grad_of_c)
# perform extra computation
# and more computation
self.weights.grad = another_operation(grad_of_a,grad_of_c)
return grad_of_a # and the grad of parameter "a" will receive this
layer = LayerWithCustomGrad()
a = nn.Parameter(torch.randn(200),requires_grad=True)
b = layer(a)
c = b*23
Some of the projects I work on contain layers with non-differentiable functions. I would love it if there were somehow a way to connect two broken graphs and/or modify gradients of graphs that already exist.
It would also be great if there were a possible method of doing it in TensorFlow.
| The way PyTorch is built, you should first implement a custom torch.autograd.Function which will contain the forward and backward passes for your layer. Then you can create an nn.Module to wrap this function with the necessary parameters.
In this tutorial page you can see the ReLU being implemented. I will show here how to build a torch.autograd.Function and its nn.Module wrapper.
class F(torch.autograd.Function):
"""Both forward and backward are static methods."""
@staticmethod
def forward(ctx, input, weights):
"""
In the forward pass we receive a Tensor containing the input and return
a Tensor containing the output. ctx is a context object that can be used
to stash information for backward computation. You can cache arbitrary
objects for use in the backward pass using the ctx.save_for_backward method.
"""
ctx.save_for_backward(input, weights)
return input*weights
@staticmethod
def backward(ctx, grad_output):
"""
In the backward pass we receive a Tensor containing the gradient of the loss
with respect to the output, and we need to compute the gradient of the loss
with respect to the inputs: here input and weights
"""
input, weights = ctx.saved_tensors
grad_input = weights.clone()*grad_output
grad_weights = input.clone()*grad_output
return grad_input, grad_weights
The nn.Module will initialize the parameters and call F to handle the actual operation computation for forward/backward pass.
class LayerWithCustomGrad(nn.Module):
def __init__(self):
super().__init__()
self.weights = nn.Parameter(torch.rand(10))
self.fn = F.apply
def forward(self, x):
return self.fn(x, self.weights)
Now we can try to infer and backpropagate:
>>> layer = LayerWithCustomGrad()
>>> x = torch.randn(10, requires_grad=True)
>>> y = layer(x)
tensor([ 0.2023, 0.7176, 0.3577, -1.3573, 1.5185, 0.0632, 0.1210, 0.1566,
0.0709, -0.4324], grad_fn=<FBackward>)
Notice the <FBackward> as grad_fn: this is the backward function of F bound to the previous inference we made with x.
>>> y.mean().backward()
>>> x.grad # i.e. grad_input in F.backward
tensor([0.0141, 0.0852, 0.0450, 0.0922, 0.0400, 0.0988, 0.0762, 0.0227, 0.0569,
0.0309])
>>> layer.weights.grad # i.e. grad_weights in F.backward
tensor([-1.4584, -2.1187, 1.5991, 0.9764, 1.8956, -1.0993, -3.7835, -0.4926,
0.9477, -1.2219])
| https://stackoverflow.com/questions/69500995/ |
Continue training with torch.save and torch.load - key error messages | I am new to Torch and using a code template for a masked-cnn model. In order to be prepared if the training is interrupted, I have used torch.save and torch.load in my code, but I think I cannot use this alone for continuing training sessions? I start training by:
model = train_mask_net(64)
This calls the function train_mask_net, where I have included torch.save in the epoch loop. I wanted to load one of the saved models and continue training with torch.load in front of the loop, but I got "key error" messages for the optimizer, loss and epoch calls. Should I have made a specific checkpoint function, as I have seen in some tutorials, or is there a possibility that I can continue training with the files saved by the torch.save command?
def train_mask_net(num_epochs=1):
data = MaskDataset(list(data_mask.keys()))
data_loader = torch.utils.data.DataLoader(data, batch_size=8, shuffle=True, num_workers=4)
model = XceptionHourglass(max_clz+2)
model.cuda()
dp = torch.nn.DataParallel(model)
loss = nn.CrossEntropyLoss()
params = [p for p in dp.parameters() if p.requires_grad]
optimizer = torch.optim.RMSprop(params, lr=2.5e-4, momentum=0.9)
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
step_size=6,
gamma=0.9)
checkpoint = torch.load('imaterialist2020-pretrain-models/maskmodel_160.model_ep17')
#print(checkpoint)
model.load_state_dict(checkpoint)
#optimizer.load_state_dict(checkpoint)
#epoch = checkpoint['epoch']
#loss = checkpoint['loss']
for epoch in range(num_epochs):
print(epoch)
total_loss = []
prog = tqdm(data_loader, total=len(data_loader))
for i, (imag, mask) in enumerate(prog):
X = imag.cuda()
y = mask.cuda()
xx = dp(X)
# to 1D-array
y = y.reshape((y.size(0),-1)) # batch, flatten-img
y = y.reshape((y.size(0) * y.size(1),)) # flatten-all
xx = xx.reshape((xx.size(0), xx.size(1), -1)) # batch, channel, flatten-img
xx = torch.transpose(xx, 2, 1) # batch, flatten-img, channel
xx = xx.reshape((xx.size(0) * xx.size(1),-1)) # flatten-all, channel
losses = loss(xx, y)
prog.set_description("loss:%05f"%losses)
optimizer.zero_grad()
losses.backward()
optimizer.step()
total_loss.append(losses.detach().cpu().numpy())
torch.save(model.state_dict(), MODEL_FILE_DIR+"maskmodel_%d.model"%attr_image_size[0]+'_ep'+str(epoch)+'_tsave')
prog, X, xx, y, losses = None, None, None, None, None,
torch.cuda.empty_cache()
gc.collect()
return model
I don't think it's necessary, but the XceptionHourglass class looks like this:
class XceptionHourglass(nn.Module):
def __init__(self, num_classes):
super(XceptionHourglass, self).__init__()
self.num_classes = num_classes
self.conv1 = nn.Conv2d(3, 128, 3, 2, 1, bias=True)
self.bn1 = nn.BatchNorm2d(128)
self.mish = Mish()
self.conv2 = nn.Conv2d(128, 256, 3, 1, 1, bias=True)
self.bn2 = nn.BatchNorm2d(256)
self.block1 = HourglassNet(4, 256)
self.bn3 = nn.BatchNorm2d(256)
self.block2 = HourglassNet(4, 256)
...
| torch.save(model.state_dict(), PATH) only saves the model weights.
To also save optimizer, loss, epoch, etc., change it to:
torch.save({'model': model.state_dict(),
'optimizer': optimizer.state_dict(),
'loss': loss,
'epoch': epoch,
# ...
}, PATH)
To load them:
checkpoint = torch.load(PATH)
model.load_state_dict(checkpoint['model'])
optimizer.load_state_dict(checkpoint['optimizer'])
epoch = checkpoint['epoch']
loss = checkpoint['loss']
More on it here.
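To actually resume, start the epoch loop from the stored value (a minimal sketch):
start_epoch = checkpoint['epoch'] + 1
for epoch in range(start_epoch, num_epochs):
    ...  # training loop as before, saving a new checkpoint each epoch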
| https://stackoverflow.com/questions/69508602/ |