instruction | input | output | source
---|---|---|---|
Using model.eval() on a neural network results in the same output every time for very different inputs | I have a simple network implemented in PyTorch, say,
class network:
def __init__(self):
self.device = device
#these are the 3 convolutional synapses; Same convolution;
self.layer = sequential(
conv2d(3, 3, (23), padding=11),
batch_norm_2d(3),
Swish(),
conv2d(3, 3, (11), padding=5),
batch_norm_2d(3),
Swish(),
conv2d(3, 3, (5), padding=2),
batch_norm_2d(3),
Swish(),
conv2d(3, 4, (3), padding=15, stride=2),
batch_norm_2d(4),
Swish(),
conv2d(4, 8, (3), padding=15, stride=2),
batch_norm_2d(8),
Swish(),
conv2d(8, 4, (1)),
batch_norm_2d(4),
Swish(),
conv2d(4, 8, (3), padding=15, stride=2),
batch_norm_2d(8),
Swish(),
conv2d(8, 16, (3), padding=15, stride=2),
batch_norm_2d(16),
Swish(),
conv2d(16, 8, (1)),
batch_norm_2d(8),
Swish(),
conv2d(8, 16, (3), padding=15, stride=2),
batch_norm_2d(16),
Swish(),
conv2d(16, 32, (3), padding=15, stride=2),
batch_norm_2d(32),
Swish(),
conv2d(32, 16, (1)),
batch_norm_2d(16),
Swish(),
conv2d(16, 32, (3), padding=15, stride=2),
batch_norm_2d(32),
Swish(),
conv2d(32, 64, (3), padding=15, stride=2),
batch_norm_2d(64),
Swish(),
conv2d(64, 32, (1)),
batch_norm_2d(32),
Swish(),
conv2d(32, 64, (3), padding=15, stride=2),
batch_norm_2d(64),
Swish(),
conv2d(64, 128, (3), padding=15, stride=2),
batch_norm_2d(128),
Swish(),
conv2d(128, 64, (1)),
batch_norm_2d(64),
Swish(),
conv2d(64, 128, (3), padding=15, stride=2),
batch_norm_2d(128),
Swish(),
conv2d(128, 256, (3), padding=15, stride=2),
batch_norm_2d(256),
Swish(),
conv2d(256, 128, (1)),
batch_norm_2d(128),
Swish(),
flatten(1, -1),
linear(128*29*29, 8*8*2*5),
batch_norm_1d(8*8*2*5),
Swish()
)
#loss and optimizer functions for ethirun
self.Loss_1 = IoU_Loss() #the loss function for bounding box.
self.Loss_2 = tor.nn.SmoothL1Loss(reduction='mean')
#the optimizer
self.Optimizer = tor.optim.AdamW(self.parameters())#tor.optim.SGD(self.parameters(), lr=1e-2, momentum=0.9, weight_decay=1e-5, nesterov=True)
self.Scheduler = tor.optim.lr_scheduler.StepLR(self.Optimizer, 288, gamma=0.5)
self.sizes = tor.tensor(range(0, 5), dtype=tor.int64, device=self.device)
def forward(self, input):
return self.layer(input)
def backprop(self, preds, lbls, val_or_trn):
#takes predictions and labels and calculates error and backpropagates
mask = tor.index_select(lbls, -1, self.sizes[0])
preds.register_hook(lambda grad: grad * mask.float())
error = self.Loss_2(preds, lbls)
if val_or_trn == 1:
#backpropagation
error.backward()
self.Optimizer.step()
self.Scheduler.step()
#zeroing the gradients.
self.Optimizer.zero_grad()
return error.detach()
model = network()
Where the inputs, outputs and channels are arbitrary. Then say I create some random input tensor like this,
input_data = torch.randn(1, 3, 256, 256)
Then I predict some result in this data like this,
model(input_data)
And say I also change the input_data variable by re-running the torch.randn command a bunch of different times while keeping the model the same, i.e., without re-running the model = network() line.
I get this error,
Expected more than 1 value per channel when training, got input size torch.Size([1, some_value])
So, I tried running it in evaluation mode by using the model.eval() function like this,
model.eval()
with tor.no_grad():
pred = model(input_data)
model.train()
This works without errors. However, no matter how I change the input_data variable, I always get the same value in pred. If I re-initialize the model's parameters I get a new pred, which once again does not change with different inputs unless I re-initialize the model again using model = network(). What am I doing wrong?
Edit: To give more info on my problem: I'm trying to create a YOLO-like network from scratch, and this is the dataset I'm using: https://www.kaggle.com/devdgohil/the-oxfordiiit-pet-dataset
| Basically, that's what BatchNorm is doing. You use BatchNorm to make training less prone to overfitting, but you don't use BatchNorm in eval so that you can get the correct result. The same goes for Dropout.
Every CNN model with batch normalization and/or dropout does the same: the output for the same input will be different during train and eval.
Which is exactly why PyTorch has model.eval(): to turn these layers off during inference to get the correct output.
Edit
The problem is the activation and batch normalization at the output.
Only use something that keeps the result in the same range as the ground truth: for example, use sigmoid when you want the output in the range 0-1, tanh for -1 to 1, or softmax for a probability across an axis.
Imagine the ReLU function (which is basically a simpler version of Swish and Softplus): it turns everything below 0 into 0. Chances are you need some outputs to be below 0, so your model won't converge at all.
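As a minimal sketch of that suggestion (my illustration, not code from the question or answer; the layer sizes are copied from the question's own head), the final batch_norm_1d + Swish pair could be swapped for a sigmoid so the outputs stay in a bounded range:
import torch.nn as nn
# hypothetical replacement for the tail of self.layer; the convolutional blocks before it are unchanged
head = nn.Sequential(
    nn.Flatten(1, -1),
    nn.Linear(128 * 29 * 29, 8 * 8 * 2 * 5),
    nn.Sigmoid(),  # keeps every output in [0, 1], matching normalized box/confidence targets
)
This only addresses the output head; the earlier BatchNorm layers still behave differently between train and eval, as described above.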
| https://stackoverflow.com/questions/68224016/ |
Indexing elements from a batch tensor in PyTorch | Say, I have a batch of images in PyTorch. For each image, I also have a pixel location say (x, y). The pixel value can be read using img[x, y] for one image. I am trying to read pixel value for each image in the batch. Please see below the code snippet:
import torch
# create tensors to represent random images in torch format
img_1 = torch.rand(1, 200, 300)
img_2 = torch.rand(1, 200, 300)
img_3 = torch.rand(1, 200, 300)
img_4 = torch.rand(1, 200, 300)
# for each image, the (x, y) values are known, so creating a tuple
img1_xy = (0, 10, 70)
img2_xy = (0, 40, 20)
img3_xy = (0, 30, 50)
img4_xy = (0, 80, 60)
# this is what I am doing right now
imgs = [img_1, img_2, img_3, img_4]
imgs_xy = [img1_xy, img2_xy, img3_xy, img4_xy]
x = [img[xy] for img, xy in zip(imgs, imgs_xy)]
x = torch.as_tensor(x)
My Concerns and Questions
In each image, the pixel location, i.e., (x, y), is known. However, I have to create a tuple with one more element, i.e., 0, to make sure the tuple matches the shape of the image. Is there any elegant way?
Instead of using tuple, can't we use tensor and then get the pixel values?
All images can be concatenated to make a batch as img_batch = torch.cat((img_1, img_2, img_3, img_4)). But what about tuple?
| You can concatenate the images to form a (4, 200, 300) shaped stacked tensor. Then we can index into this with the known (x, y) pairs for each image as follows: we need [0, x1, y1] for the first image, [1, x2, y2] for the second image, [2, x3, y3] for the third image, and so on. This can be achieved with "fancy indexing":
# stacking as you did
>>> stacked_imgs = torch.cat(imgs)
>>> stacked_imgs.shape
(4, 200, 300)
# no need for 0s in front
>>> imgs_xy = [(10, 70), (40, 20), (30, 50), (80, 60)]
# need xs together and ys together: take transpose of `imgs_xy`
>>> inds_x, inds_y = torch.tensor(imgs_xy).T
>>> inds_x
tensor([10, 40, 30, 80])
>>> inds_y
tensor([70, 20, 50, 60])
# now we index into the batch
>>> num_imgs = len(imgs)
>>> result = stacked_imgs[range(num_imgs), inds_x, inds_y]
>>> result
tensor([0.5359, 0.4863, 0.6942, 0.6071])
We can check the result:
>>> torch.tensor([img[0, x, y] for img, (x, y) in zip(imgs, imgs_xy)])
tensor([0.5359, 0.4863, 0.6942, 0.6071])
To answer your questions:
1: Since we stacked the images, that issue is mitigated and we use range(4) to index into each individual image instead.
2: Yes, we indeed turn x, y positions into tensors.
3: We directly index with them after they are separated into tensors.
| https://stackoverflow.com/questions/68226077/ |
Is there a resnet50_v2 pretrained on imageNet for Pytorch? | I'm new to ML, and as the title states, I'm wondering if there's a pretrained ResNet50_v2 with ImageNet on PyTorch?
I looked at https://pytorch.org/vision/stable/models.html
and found torchvision.models.resnet50(..) but I don't think that's the same as ResNet50_v2?
Thank you in advance!
| The official position is that it will not be added, as you can see here:
fmassa: "We've added ResNeXt to torchvision, but ResNetv2 didn't really caught up with the community so we won't be adding it"
| https://stackoverflow.com/questions/68227758/ |
How to change parts of parameters' device type of a module in pytorch? | I defined a net and two of its parameters are on the CPU. I try to move those two parameters to the GPU; however, when I print the device, I find that they have not been moved to the GPU.
How to change the model parameter device?
for p in net.parameters():
if p.device == torch.device('cpu'):
p = p.to('cuda')
for p in net.parameters():
if p.device == torch.device('cpu'):
print(p.device)
Output:
cpu
cpu
| You're dealing with parameters. Unlike a Module, you have to assign them back to the original variable if you want to replace them. Additionally, you'll want to change the .data of a given parameter; otherwise it won't work, because .to(...) actually generates a copy.
for p in net.parameters():
if p.device == torch.device('cpu'):
p.data = p.to('cuda')
Note that if any of these parameters have .grad, they will not be moved to the GPU. Take a look here at how parameters are usually moved. As you'll see, you'll have to do the same for the gradients:
for p in net.parameters():
if p.device == torch.device('cpu'):
p.data = p.to('cuda')
if p.grad is not None:
p.grad.data = p.grad.to('cuda')
| https://stackoverflow.com/questions/68227826/ |
pytorch torchvision.datasets.ImageFolder FileNotFoundError: Found no valid file for the classes .ipynb_checkpoints | I tried to load training data with PyTorch's torchvision.datasets.ImageFolder in Colab.
transform = transforms.Compose([transforms.Resize(400),
transforms.ToTensor()])
dataset_path = 'ss/'
dataset = datasets.ImageFolder(root=dataset_path, transform=transform)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=20)
I encountered the following error :
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
<ipython-input-27-7abcc1f434b1> in <module>()
2 transforms.ToTensor()])
3 dataset_path = 'ss/'
----> 4 dataset = datasets.ImageFolder(root=dataset_path, transform=transform)
5 dataloader = torch.utils.data.DataLoader(dataset, batch_size=20)
3 frames
/usr/local/lib/python3.7/dist-packages/torchvision/datasets/folder.py in make_dataset(directory, class_to_idx, extensions, is_valid_file)
100 if extensions is not None:
101 msg += f"Supported extensions are: {', '.join(extensions)}"
--> 102 raise FileNotFoundError(msg)
103
104 return instances
FileNotFoundError: Found no valid file for the classes .ipynb_checkpoints. Supported extensions are: .jpg, .jpeg, .png, .ppm, .bmp, .pgm, .tif, .tiff, .webp
My dataset folder contains a subfolder with many training images in PNG format, but ImageFolder still can't access them.
| I encountered the same problem when I was using IPython-notebook-like tools.
First, please check whether there are any hidden files under your dataset_path. Use ls -a if you are in a Linux environment.
What happened in my case is that I found a hidden folder called .ipynb_checkpoints located parallel to the image class subfolders. I think that folder confuses the PyTorch dataset. I made sure it was not useful, so I simply deleted it (a sketch of this cleanup is shown below), and then the dataset worked fine.
Or, if you would like to simply ignore that file, you may also try this.
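A small sketch of that cleanup step (my addition, not from the original answer; the path is the one from the question, adjust as needed):
import os
import shutil

checkpoint_dir = os.path.join('ss/', '.ipynb_checkpoints')
if os.path.isdir(checkpoint_dir):
    # remove the hidden notebook-checkpoint folder so ImageFolder no longer
    # treats it as a class directory containing no valid image files
    shutil.rmtree(checkpoint_dir)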
| https://stackoverflow.com/questions/68229246/ |
Can't install tensorflow for huggingface transformers library | I'm trying to use the Hugging Face transformers library in my Python project. I am a first-time Python programmer, and I am stuck on this error message, even though TensorFlow has been installed on my machine:
>>> from transformers import pipeline
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
I have discovered that tensorflow does not exist, even though I have installed it via pip. I have tried uninstalling and reinstalling it, but when I try to import the package, it just comes back as a ModuleNotFoundError:
>>> import tensorflow
Traceback (most recent call last):
File "<pyshell#2>", line 1, in <module>
import tensorflow
File "C:\Users\######\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\tensorflow\__init__.py", line 41, in <module>
from tensorflow.python.tools import module_util as _module_util
ModuleNotFoundError: No module named 'tensorflow.python
I have tried uninstalling and re-installing using pip and conda. I even tried installing pytorch using the same methods. It always says that the package was successfully installed, and yet the error persists.
I am using Python 3.9 and my OS is Windows 10. I don't know what I am doing wrong, but I know that the solution will definitely not be to uninstall and reinstall a package.
Pip version (pip -V):
pip 21.1.3 from c:\users\######\appdata\local\programs\python\python39\lib\site-packages\pip (python 3.9)
Python version (python -V):
Python 3.9.5
Python path list
I tried comparing the output of sys.path with the output of pip -V.
The closest path I saw to the pip -V path is down at the bottom; however, I did not find the exact directory.
>>> import sys
>>> sys.path
['', 'C:\\windows\\system32', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.9_3.9.1520.0_x64__qbz5n2kfra8p0', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.9_3.9.1520.0_x64__qbz5n2kfra8p0\\python39.zip', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.9_3.9.1520.0_x64__qbz5n2kfra8p0\\DLLs', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.9_3.9.1520.0_x64__qbz5n2kfra8p0\\lib', 'C:\\Users\\######\\AppData\\Local\\Microsoft\\WindowsApps\\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0', 'C:\\Users\\######\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python39\\site-packages', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.9_3.9.1520.0_x64__qbz5n2kfra8p0', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.9_3.9.1520.0_x64__qbz5n2kfra8p0\\lib\\site-packages']
Closest path:
C:\Users\######\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages
| From the comments:
You have multiple Python interpreters installed; that is why installed packages do not show up in your Python interpreter. Use pip -V and compare it to the Python version that appears in the interpreter. Remove one and use only one, and your issue will be resolved (paraphrased from Dr. Snoopy).
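One hedged way to act on that advice (my sketch, not part of the original comments) is to install packages through the exact interpreter you are running, so pip and python cannot point at different installations:
import subprocess
import sys

print(sys.executable)  # the interpreter that is actually running
print(sys.version)

# install into *this* interpreter, bypassing whichever "pip" happens to be first on PATH
subprocess.check_call([sys.executable, "-m", "pip", "install", "tensorflow", "transformers"])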
| https://stackoverflow.com/questions/68239361/ |
RuntimeError: Given groups=1, weight of size [64, 32, 3, 3], expected input[128, 64, 32, 32] to have 32 channels, but got 64 channels instead | I am trying to experiment with why we have vanishing & exploding gradients, and why ResNet is so helpful in avoiding those two problems. So I decided to train a plain convolutional network with many layers just to see why the model loss increases as I train with many layers, e.g., 20 layers. But I am getting this error at some point; I can't figure out what the issue might be, though I know it comes from my model architecture.
images.shape: torch.Size([128, 3, 32, 32])
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-80-0ad7109b33c1> in <module>
1 for images, labels in train_dl:
2 print('images.shape:', images.shape)
----> 3 out = model(images)
4 print('out.shape:', out.shape)
5 print('out[0]:', out[0])
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
<ipython-input-78-81b21c16ed79> in forward(self, xb)
31
32 def forward(self, xb):
---> 33 return self.network(xb)
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/container.py in forward(self, input)
115 def forward(self, input):
116 for module in self:
--> 117 input = module(input)
118 return input
119
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/conv.py in forward(self, input)
421
422 def forward(self, input: Tensor) -> Tensor:
--> 423 return self._conv_forward(input, self.weight)
424
425 class Conv3d(_ConvNd):
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight)
418 _pair(0), self.dilation, self.groups)
419 return F.conv2d(input, weight, self.bias, self.stride,
--> 420 self.padding, self.dilation, self.groups)
421
422 def forward(self, input: Tensor) -> Tensor:
RuntimeError: Given groups=1, weight of size [64, 32, 3, 3], expected input[128, 64, 32, 32] to have 32 channels, but got 64 channels instead
My model Architecture is
class Cifar10CnnModel(ImageClassificationBase):
def __init__(self):
super().__init__()
self.network = nn.Sequential(
nn.Conv2d(3, 32, kernel_size=3, padding=1),
nn.Conv2d(32, 32, kernel_size=3, padding=1),
nn.ReLU(),
nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1),
nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1),
nn.ReLU(),
nn.MaxPool2d(2, 2), # output: 64 x 16 x 16
nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),
nn.ReLU(),
nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1),
nn.ReLU(),
nn.MaxPool2d(2, 2), # output: 128 x 8 x 8
nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1),
nn.ReLU(),
nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1),
nn.ReLU(),
nn.MaxPool2d(2, 2), # output: 256 x 4 x 4
nn.Flatten(),
nn.Linear(256*4*4, 1024),
nn.ReLU(),
nn.Linear(1024, 512),
nn.ReLU(),
nn.Linear(512, 10))
def forward(self, xb):
return self.network(xb)
for images, labels in train_dl:
print('images.shape:', images.shape)
out = model(images)
print('out.shape:', out.shape)
print('out[0]:', out[0])
break
| Looking at the model, it looks like you made a typo in the 4th conv block in your Sequential.
You have
nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1),
nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1),
However, the first of these layers already converts the image to 64 channels, and you then pass that to the next conv layer, which is declared to expect only 32 channels, resulting in the error you have above.
Fix this to:
self.network = nn.Sequential(
nn.Conv2d(3, 32, kernel_size=3, padding=1),
nn.Conv2d(32, 32, kernel_size=3, padding=1),
nn.ReLU(),
nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1),
# Change this from 32 to now 64 like I did here.
nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1),
nn.ReLU(),
nn.MaxPool2d(2, 2), # output: 64 x 16 x 16
Sarthak Jain
| https://stackoverflow.com/questions/68243122/ |
Undoing tensor dstack and restack column-wise | I have two tensors, a and b:
import torch
a = torch.tensor(([1,2],
[3,4],
[5,6],
[7,8]))
b = torch.tensor(([0,0],
[1,1],
[2,2],
[3,3]))
Which I can stack both horizontally or depth-wise.
d = torch.dstack([a, b])
h = torch.hstack([a, b])
Now, is there any PyTorch function, preferably in one line, that I can apply to d to get h? It sounds like I want to undo the depth-wise stacking, and re-stack them column-wise. I've tried reshaping, and flattening, but neither work, as they both disrupt the ordering of the values.
| In your case, use torch.unbind:
import torch
a = torch.tensor(([1,2],
[3,4],
[5,6],
[7,8]))
b = torch.tensor(([0,0],
[1,1],
[2,2],
[3,3]))
d = torch.dstack([a, b])
h = torch.hstack(torch.unbind(d,2)) # get h from d
| https://stackoverflow.com/questions/68243246/ |
Error calculating area "IndexError: too many indices for tensor of dimension 1" | I'm repurposing some code from here to perform object detection:
# Create boxes list
boxes = [
[annotation['xmin'], annotation['ymin'], annotation['xmax'], annotation['ymax']]
for annotation in image_annotations
]
...
area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0])
During training, I was hitting this error:
File "c:\2021-mcm-master\src\PyTorch-RCNN\ui-prediction\src\screenshot_dataset.py", line 94, in __getitem__
area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0])
IndexError: too many indices for tensor of dimension 1
| Following a suggestion here, I discovered that some of my images did not contain any bounding boxes, which was causing this error:
# Create boxes list
boxes = [
[annotation['xmin'], annotation['ymin'], annotation['xmax'], annotation['ymax']]
for annotation in image_annotations
]
if len(boxes) == 0:
print ("Help!")
| https://stackoverflow.com/questions/68244452/ |
How can I concatenate the 4 corners of the image quickly when loading image in deep learning? | What is the most effective way to concatenate the 4 corners of an image, as shown in this photo?
(This is done inside __getitem__().)
left_img = Image.open('image.jpg')
...
output = right_img
| This is how I would do it.
Firstly I would convert the image to a Tensor Image temporarily
from torchvision import transforms
tensor_image = transforms.ToTensor()(image)
Now assuming you have a 3-channel image (although similar principles apply to matrices with any number of channels, including 1-channel grayscale images).
You can find the red channel with tensor_image[0], the green channel with tensor_image[1], and the blue channel with tensor_image[2].
You can make a for loop iterating through each channel like
for i in range(tensor_image.size(0)):
curr_channel = tensor_image[i]
Now, inside that for loop, for each channel you can extract the
top-left corner pixel with float(curr_channel[0][0])
top-right corner pixel with float(curr_channel[0][-1])
bottom-left corner pixel with float(curr_channel[-1][0])
bottom-right corner pixel with float(curr_channel[-1][-1])
Make sure to convert all the pixel values to float or double values before this next appending step
Now you have four values that correspond to the corner pixels of each channel
Then you can make a list called new_image = []
You can then append the above mentioned pixel values using
new_image.append([[curr_channel[0][0], curr_channel[0][-1]], [curr_channel[-1][0], curr_channel[-1][-1]]])
Now, after iterating through every channel, you should have a big list that contains three (or, in general, tensor_image.size(0)) lists of lists.
Next step is to convert this list of lists of lists to a torch.tensor by running
new_image = torch.tensor(new_image)
To make sure everything is right new_image.size() should return torch.Size([3, 2, 2])
If that is the case you now have your wanted image but it is tensor format.
The way to convert it back to PIL is to run
final_pil_image = transforms.ToPILImage()(new_image)
If everything went well, you should have a PIL image that fulfills your task. The only code it uses is clever indexing and one for loop.
It may also be possible, if you look into it more than I have, to avoid the for loop entirely and perform the operations on all the channels at once (see the sketch below).
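Putting the steps above together, a minimal sketch might look like this (the file name is taken from the question; the loop-free indexing line is my assumption of what the vectorized version could look like):
import torch
from torchvision import transforms
from PIL import Image

image = Image.open('image.jpg')
tensor_image = transforms.ToTensor()(image)       # shape: (C, H, W)

# loop-free variant: fancy indexing picks the four corner pixels of every channel,
# yielding a (C, 2, 2) tensor directly
corners = tensor_image[:, [0, -1], :][:, :, [0, -1]]
print(corners.shape)                               # torch.Size([3, 2, 2]) for an RGB image

final_pil_image = transforms.ToPILImage()(corners)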
Sarthak Jain
| https://stackoverflow.com/questions/68244774/ |
Input dimension of Pytorch CNN model | I have input data for my 2D CNN model, say X_train, with shape torch.Size([716, 50, 50]).
my model is:
class CNN(nn.Module):
def __init__(self):
super(CNN, self).__init__()
self.conv1 = nn.Conv2d(1, 32, kernel_size=4,stride=1,padding = 1)
self.mp1 = nn.MaxPool2d(kernel_size=4,stride=2)
self.conv2 = nn.Conv2d(32,64, kernel_size=4,stride =1)
self.mp2 = nn.MaxPool2d(kernel_size=4,stride=2)
self.fc1= nn.Linear(2304,256)
self.dp1 = nn.Dropout(p=0.2)
self.fc2 = nn.Linear(256,10)
def forward(self, x):
in_size = x.size(0)
x = F.relu(self.mp1(self.conv1(x)))
x = F.relu(self.mp2(self.conv2(x)))
x = x.view(in_size,-1)
x = F.relu(self.fc1(x))
x = self.dp1(x)
x = self.fc2(x)
return F.log_softmax(x, dim=1)
but when I run the model, I always get this error:
---> x = F.relu(self.mp1(self.conv1(x)))
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [32, 1, 4, 4], but got 3-dimensional input of size [64, 50, 50] instead
I understand that my input to the model has batch size 64 and that each input is 50*50 (the size of each signal picture).
But I don't understand why it still requires a 4-dimensional input when I set in_channels for nn.Conv2d to 1.
How can I solve this input dimension problem, or change the dimension requirement of the model input?
| Whether in_channels is 1 or 42 does not matter: it is still an added dimension. It is useful to read the documentation in this respect.
In- and output are of the form N, C, H, W
N: batch size
C: channels
H: height in pixels
W: width in pixels
So you need to add the dimension in your case:
# Add a dimension at index 1
x = x.unsqueeze(1)
| https://stackoverflow.com/questions/68253414/ |
PEGASUS From pytorch to tensorflow | I have fine-tuned a PEGASUS model for abstractive summarization using this script, which uses Hugging Face.
The output model is in PyTorch.
Is there a way to transform it into a TensorFlow model so I can use it in a JavaScript backend?
| There are several ways in which you can potentially achieve a conversion, some of which might not even need Tensorflow at all.
Firstly, the way that does what you intend to do: PEGASUS seems to be completely based on the BartForConditionalGeneration model, according to the transformer implementation notes. This is important, because there exists a script to convert PyTorch checkpoints to TF2 checkpoints. While this script does not explicitly allow you to convert a PEGASUS model, it does have options available for BART. Running it with the respective parameters should give you the desired output.
Alternatively, you can potentially achieve the same by exporting the model into the ONNX format, which also has JS deployment options. Specific details for how to convert a Huggingface model to ONNX can be found here.
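A third route that may also work (my sketch, not part of the original answer; paths are placeholders, and it assumes your installed transformers version ships the TensorFlow PEGASUS classes) is to load the PyTorch weights directly into the TF2 model class with from_pt=True:
from transformers import TFPegasusForConditionalGeneration

# load the fine-tuned PyTorch checkpoint into the TF2 class
tf_model = TFPegasusForConditionalGeneration.from_pretrained(
    "path/to/finetuned-pegasus",  # directory containing the PyTorch checkpoint
    from_pt=True,
)
tf_model.save_pretrained("path/to/tf-pegasus")  # writes a TensorFlow checkpoint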
| https://stackoverflow.com/questions/68260614/ |
Pytorch - extracting predicted values to an array | I'm a new PyTorch user with moderate experience in TensorFlow/Keras. The PyTorch examples are fantastic. I've worked through the demand forecasting lab using the Temporal Fusion Transformer (https://pytorch-forecasting.readthedocs.io/en/latest/tutorials/stallion.html).
It all makes sense, but I haven't figured out how to save the predicted values in notebook section #20 to a NumPy array.
Section #20,
new_raw_predictions, new_x = best_tft.predict(new_prediction_data, mode="raw", return_x=True)
I see the values in the tensors, print(new_raw_predictions) ,
like this --
{'prediction': tensor([[[3.4951e+00, 1.7341e+01, 2.7446e+01, ..., 6.3175e+01,
9.0240e+01, 1.2589e+02],
[1.1698e+01, 2.3643e+01, 3.3291e+01, ..., 6.6374e+01,
9.1148e+01, 1.3173e+02],
I've seen some similar questions asked here but none seem to work. All attempts result in a similar error so I'm missing something fundamental about pytorch and the output tensor; I always get 'AttributeError: 'dict' object has no attribute new_raw_predictions'
A few examples of what's been tried:
new_raw_predictions.cpu().numpy()
new_raw_predictions.detach().cpu().numpy()
new_raw_predictions.numpy()
Goal is to save the predicted output so I can compare changes to the model. Thanks in advance!
| It all depends on how you've created your model, because pytorch can return values however you specify. In your case, it looks like it returns a dictionary, of which 'prediction' is a key. You can convert to numpy using the command you supplied above, but with one change:
preds = new_raw_predictions['prediction'].detach().cpu().numpy()
of course if it's not on the GPU you don't need to use .detach().cpu(), just .numpy()
| https://stackoverflow.com/questions/68262401/ |
Avoid Loop for Selecting Dimensions in a Batch Tensor in PyTorch | I have a batch tensor and another tensor having indices of the dimensions to select from batch tensor. At present, I am looping around batch tensor as shown below in the code snippet:
import torch
# create tensors to represent our data in torch format
batch_size = 8
batch_data = torch.rand(batch_size, 3, 240, 320)
# notice that channels_id has 8 elements, i.e., = batch_size
channels_id = torch.tensor([2, 0, 2, 1, 0, 2, 1, 0])
This is how I am selecting dimensions inside a for loop and then stacking to convert a single tensor:
batch_out = torch.stack([batch_i[channel_i] for batch_i, channel_i in zip(batch_data, channels_id)])
batch_out.size() # prints torch.Size([8, 240, 320])
It works fine. However, is there a better PyTorch way to achieve the same?
| As per the hint from @Shai, I could make it work using the torch.gather function. Below is the complete code:
import torch
# create tensors to represent our data in torch format
batch_size = 8
batch_data = torch.rand(batch_size, 3, 240, 320)
# notice that channels_id has 8 elements, i.e., batch_size
channels_id = torch.tensor([2, 0, 2, 1, 0, 2, 1, 0])
# resizing channels_id to (8 , 1, 240, 320)
channels_id = channels_id.view(-1, 1, 1, 1).repeat((1, 1) + batch_data.size()[-2:])
batch_out = torch.gather(batch_data, 1, channels_id).squeeze()
batch_out.size() # prints torch.Size([8, 240, 320])
| https://stackoverflow.com/questions/68264035/ |
How to test my own image on a MNIST trained network? | This is my first time trying to train a network and use PyTorch, so please forgive me if this is considered simple.
I have a pretrained AlexNet network that was modified to classify 3 classes, which I've already trained on MNIST that I mapped to 3 different labels.
class Net( nn.Module ) :
def __init__( self ) :
super( Net, self ).__init__()
self.model = models.alexnet( pretrained = True )
# changed in_channels from 3 to 1 bc images are black and white
self.model.features[ 0 ] = nn.Conv2d( 1, 64, kernel_size = 11, stride = 4, padding = 2 )
# binary classifier -> 3 out_features
self.model.classifier[ 4 ] = nn.Linear( 4096, 1024 )
self.model.classifier[ 6 ] = nn.Linear( 1024, 3 )
def forward( self, x ):
return self.model( x )
model = Net().to( device )
I want to test this on a single .png image that I drew, which is already 255x255, and in black and white. I would like the predicted label. This is the code I have so far for preprocessing the image:
from PIL import Image
import matplotlib.pyplot as plt
import cv2
image_8 = Image.open( "eight.png" ).convert('L')
image_8 = list( image_8.getdata())
normalized_8 = [(255 - x) * 1.0 / 255.0 for x in image_8 ]
tensor_8 = torch.FloatTensor( normalized_8 )
pred = model( tensor_8 )
from which I got the following error: Expected 4-dimensional input for 4-dimensional weight [64, 1, 11, 11], but got 1-dimensional input of size [50176] instead. So this is clearly the wrong way to do things, but I'm not sure how to proceed.
| Change your inference code to the following. Images are not intended to be flattened into 1d.
import matplotlib.pyplot as plt
import cv2
import torch
image_8 = cv2.imread("eight.png")
# following line may or may not be necessary
image_8 = cv2.cvtColor(image_8, cv2.COLOR_BGR2GRAY)
# cast to float before normalizing; an in-place division of the uint8 array would fail
image_8 = image_8.astype("float32") / 255.
# this adds a batch dimension, giving shape [1, height, width]
image_8 = torch.Tensor(image_8).unsqueeze(0)
pred = model(image_8)
If the image is still 3d (shape of [1, width, height]), add a second .unsqueeze(0).
| https://stackoverflow.com/questions/68264132/ |
using DQN to solve shortest path | I'm trying to find out if DQN can solve the shortest path problem.
I have this DataFrame, which contains a source column (node IDs), an end column representing the destination (also node IDs), and a weight column representing the distance of the edge. I then converted the DataFrame into a graph as follows.
DataFrame
source end weight
0 688615041 208456626 15.653688122127072
1 688615041 1799221665 10.092266065922756
2 1799221657 1799221660 8.673942902872051
3 1799221660 1799221665 15.282152665774992
4 1799221660 2003461246 25.85307821157314
5 1799221660 299832604 75.99884525624508
6 299832606 2003461227 4.510148061854331
7 299832606 2003461246 10.954119220974723
8 299832606 2364408910 4.903114362426424
9 1731824802 2003461235 6.812335798968233
10 1799221677 208456626 8.308567154008992
11 208456626 2003461246 14.56512909988425
12 208456626 1250468692 16.416527267975034
13 1011881546 1250468696 12.209773608913697
14 1011881546 2003461246 7.477102764665149
15 2364408910 1130166767 9.780352545373274
16 2364408910 2003461246 6.660771089602594
17 2364408910 2003461237 3.125301826317477
18 2364408911 2003461240 3.836966849565568
19 2364408911 2003461246 6.137847950353395
20 2364408911 2003461247 7.399469477211698
21 2364408911 2003461237 3.90876793066916
22 1250468692 1250468696 8.474825189804282
23 1250468701 2003461247 4.539111170687284
24 2003461235 2003461246 12.400601105777394
25 2003461246 2003461247 12.437602668573737
and the graph looks like this
pos = nx.spring_layout(g)
edge_labels = nx.get_edge_attributes(g, 'weight')
nx.draw(g, pos, node_size=100)
nx.draw_networkx_edge_labels(g, pos, edge_labels, font_size=8)
nx.draw_networkx_labels(g, pos, font_size=10)
plt.title("Syntethic representation of the City")
plt.show()
print('Total number of Nodes: '+str(len(g.nodes)))
graph
Now I used DQN with a fixed state, from node 1130166767 as the start to node 1731824802 as the goal.
This is my whole code:
class Network(nn.Module):
def __init__(self,input_dim,n_action):
super(Network,self).__init__()
self.f1=nn.Linear(input_dim,128)
self.f2=nn.Linear(128,64)
self.f3=nn.Linear(64,32)
self.f4=nn.Linear(32,n_action)
#self.optimizer=optim.Adam(self.parameters(),lr=lr)
#self.loss=nn.MSELoss()
self.device=T.device('cuda' if T.cuda.is_available() else 'cpu')
self.to(self.device)
def forward(self,x):
x=F.relu(self.f1(x))
x=F.relu(self.f2(x))
x=F.relu(self.f3(x))
x=self.f4(x)
return x
def act(self,obs):
#state=T.tensor(obs).to(device)
state=obs.to(self.device)
actions=self.forward(state)
action=T.argmax(actions).item()
return action
device=T.device('cuda' if T.cuda.is_available() else 'cpu')
print(device)
num_states = len(g.nodes)*1
### if we need to train a specific set of nodes for ex 10 we *10
num_actions = len(g.nodes)
print("Expected number of States are: "+str(num_states))
print("Expected number of action are: "+str(num_actions))
#num_action*2=when we would like to convert the state into onehotvector we need to concatinate the two vector 22+22
online=Network(num_actions*2,num_actions)
target=Network(num_actions*2,num_actions)
target.load_state_dict(online.state_dict())
optimizer=T.optim.Adam(online.parameters(),lr=5e-4)
#create a dictionary that have encoded index for each node
#to solve this isssu
#reset()=476562122273
#number of state < 476562122273
enc_node={}
dec_node={}
for index,nd in enumerate(g.nodes):
enc_node[nd]=index
dec_node[index]=nd
def wayenc(current_node,new_node,type=1):
#encoded
if type==1: #distance
if new_node in g[current_node]:
rw=g[current_node][new_node]['weight']*-1
return rw,True
rw=-5000
return rw,False
def rw_function(current,action):
#current_node
#new_node
beta=1 #between 1 and 0
current=dec_node[current]
new_node=dec_node[action]
rw0,link=wayenc(current,new_node)
rw1=0
frw=rw0*beta+(1-beta)*rw1
return frw,link
def state_enc(dst, end,n=len(g.nodes)):
return dst+n*end
def state_dec(state,n=len(g.nodes)):
dst = state%n
end = (state-dst)/n
return dst, int(end)
def step(state,action):
done=False
current_node , end = state_dec(state)
new_state = state_enc(action,end)
rw,link=rw_function(current_node,action)
if not link:
new_state = state
return new_state,rw,False
elif action == end:
rw = 10000 #500*12
done=True
return new_state,rw,done
def reset():
state=state_enc(enc_node[1130166767],enc_node[1731824802])
return state
def state_to_vector(current_node,end_node):
n=len(g.nodes)
source_state_zeros=[0.]*n
source_state_zeros[current_node]=1
end_state_zeros=[0.]*n
end_state_zeros[end_node]=1.
vector=source_state_zeros+end_state_zeros
return vector
#return a list of list converted from state to vectors
def list_of_vecotrs(new_obses_t):
list_new_obss_t=new_obses_t.tolist()
#convert to integer
list_new_obss_t=[int(v) for v in list_new_obss_t]
vector_list=[]
for state in list_new_obss_t:
s,f=state_dec(state)
vector=state_to_vector(s,f)
vector_list.append(vector)
return vector_list
#fill the replay buffer
#replay_buffer=[]
rew_buffer=[0]
penalties=[]
episode_reward=0.0
batch_size=num_actions*2
buffer_size=100000
min_replay_size=int(buffer_size*0.20)
target_update_freq=1000
flag=0
action_list=np.arange(0,len(g.nodes)).tolist()
replay_buffer=deque(maxlen=buffer_size)
#populate the experience network
obs=reset()
#obs,end=state_dec(start,len(g.nodes))
for _ in tqdm(range(min_replay_size)):
action=np.random.choice(action_list)
new_obs,rew,done=step(obs,action)
transition=(obs,action,rew,done,new_obs)
replay_buffer.append(transition)
obs=new_obs
if done:
obs=reset()
#main training loop
obs=reset()
episodes=100000
start=1
end=0.1
decay=episodes
gamma=0.99
epsilon=0.5
gamma_list=[]
mean_reward=[]
done_location=[]
loss_list=[]
number_of_episodes=[]
stat_dict={'episodes':[],'epsilon':[],'explore_exploit':[],'time':[]}
for i in tqdm(range(episodes)):
itr=0
#epsilon=np.interp(i,[0,decay],[start,end])
#gamma=np.interp(i,[0,decay],[start,end])
epsilon=np.exp(-i/(episodes/3))
rnd_sample=random.random()
stat_dict['episodes'].append(i)
stat_dict['epsilon'].append(epsilon)
#choose an action
if rnd_sample <=epsilon:
action=np.random.choice(action_list)
stat_dict['explore_exploit'].append('explore')
else:
source,end=state_dec(obs)
v_obs=state_to_vector(source,end)
t_obs=T.tensor(v_obs)
action=online.act(t_obs)
stat_dict['explore_exploit'].append('exploit')
#fill transition and append to replay buffer
new_obs,rew,done=step(obs,action)
transition=(obs,action,rew,done,new_obs)
replay_buffer.append(transition)
obs=new_obs
episode_reward+=rew
if done:
obs=reset()
rew_buffer.append(episode_reward)
episode_reward=0.0
done_location.append(i)
#start gradient step
transitions=random.sample(replay_buffer,batch_size)
obses=np.asarray([t[0] for t in transitions])
actions=np.asarray([t[1] for t in transitions])
rews=np.asarray([t[2] for t in transitions])
dones=np.asarray([t[3] for t in transitions])
new_obses=np.asarray([t[4] for t in transitions])
obses_t=T.as_tensor(obses,dtype=T.float32).to(device)
actions_t=T.as_tensor(actions,dtype=T.int64).to(device).unsqueeze(-1)
rews_t=T.as_tensor(rews,dtype=T.float32).to(device)
dones_t=T.as_tensor(dones,dtype=T.float32).to(device)
new_obses_t=T.as_tensor(new_obses,dtype=T.float32).to(device)
list_new_obses_t=T.tensor(list_of_vecotrs(new_obses_t)).to(device)
target_q_values=target(list_new_obses_t)##
max_target_q_values=target_q_values.max(dim=1,keepdim=False)[0]
targets=rews_t+gamma*(1-dones_t)*max_target_q_values
list_obses_t=T.tensor(list_of_vecotrs(obses_t)).to(device)
q_values=online(list_obses_t)
action_q_values=T.gather(input=q_values,dim=1,index=actions_t)
#warning UserWarning: Using a target size (torch.Size([24, 24])) that is different to the input size (torch.Size([24, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
targets=targets.unsqueeze(-1)
loss=nn.functional.mse_loss(action_q_values,targets)
#loss=rmsle(action_q_values,targets)
loss_list.append(loss.item())
optimizer.zero_grad()
loss.backward()
optimizer.step()
#plot
mean_reward.append(np.mean(rew_buffer))
number_of_episodes.append(i)
gamma_list.append(gamma)
dec = {'number_of_episodes':number_of_episodes,'mean_reward':mean_reward,'gamma':gamma_list}
#clear_output(wait=True)
#sns.lineplot(data=dec, x="number_of_episodes", y="mean_reward")
#plt.show()
if i % target_update_freq==0:
target.load_state_dict(online.state_dict())
if i % 1000 ==0:
print('step',i,'avg rew',round(np.mean(rew_buffer),2))
pass
Now, as you can see from the photos, neither the rewards are increasing nor the loss is decreasing. I tried the following:
1. increasing and decreasing the learning rate
2. changing target_update_freq (100, 1000, 1000)
3. changing the state representation from a one-hot vector to [state, end], sent as a pair
4. changing the loss function (mse_loss, smooth_l1, etc.)
5. increasing the number of episodes
6. adding another layer to the NN network
7. changing how the decay of epsilon works (linear, exponential)
Most of these solutions are from questions on Stack Overflow, but nothing works for me.
How can I improve the performance? Or, in other words, how can I increase the rewards?
| It seems your problem just needs parameter tuning:
I changed your learning rate to 0.02.
I changed the dimension of the states that are sent to the NN.
class Network(nn.Module):
def __init__(self,input_dim,n_action):
super(Network,self).__init__()
self.f1=nn.Linear(input_dim,128)
self.f2=nn.Linear(128,64)
self.f3=nn.Linear(64,32)
self.f4=nn.Linear(32,n_action)
#self.optimizer=optim.Adam(self.parameters(),lr=lr)
#self.loss=nn.MSELoss()
self.device=T.device('cuda' if T.cuda.is_available() else 'cpu')
self.to(self.device)
def forward(self,x):
x=F.relu(self.f1(x))
x=F.relu(self.f2(x))
x=F.relu(self.f3(x))
x=self.f4(x)
return x
def act(self,obs):
#state=T.tensor(obs).to(device)
state=obs.to(self.device)
actions=self.forward(state)
action=T.argmax(actions).item()
return action
device=T.device('cuda' if T.cuda.is_available() else 'cpu')
print(device)
num_states = len(g.nodes)**2
num_actions = len(g.nodes)
online=Network(num_actions*2,num_actions)
target=Network(num_actions*2,num_actions)
target.load_state_dict(online.state_dict())
optimizer=T.optim.Adam(online.parameters(),lr=1e-2)
enc_node={}
dec_node={}
for index,nd in enumerate(g.nodes):
enc_node[nd]=index
dec_node[index]=nd
def wayenc(current_node,new_node,type=1):
#encoded
if type==1: #distance
if new_node in g[current_node]:
rw=g[current_node][new_node]['weight']*-1
return rw,True
rw=-1000
return rw,False
def rw_function(current,action):
beta=1
current=dec_node[current]
new_node=dec_node[action]
rw0,link=wayenc(current,new_node)
rw1=0
frw=rw0*beta+(1-beta)*rw1
return frw,link
def state_enc(dst, end,n=len(g.nodes)):
return dst+n*end
def state_dec(state,n=len(g.nodes)):
dst = state%n
end = (state-dst)/n
return dst, int(end)
def step(state,action):
done=False
current_node , end = state_dec(state)
new_state = state_enc(action,end)
rw,link=rw_function(current_node,action)
if not link:
new_state = state
return new_state,rw,False
elif action == end:
rw = 10000
done=True
return new_state,rw,done
def reset():
state=state_enc(enc_node[1130166767],enc_node[1731824802])
return state
def state_to_vector(current_node,end_node):
n=len(g.nodes)
source_state_zeros=[0.]*n
source_state_zeros[current_node]=1
end_state_zeros=[0.]*n
end_state_zeros[end_node]=1.
vector=source_state_zeros+end_state_zeros
return vector
#return a list of list converted from state to vectors
def list_of_vecotrs(new_obses_t):
list_new_obss_t=new_obses_t.tolist()
#convert to integer
list_new_obss_t=[int(v) for v in list_new_obss_t]
vector_list=[]
for state in list_new_obss_t:
s,f=state_dec(state)
vector=state_to_vector(s,f)
vector_list.append(vector)
return vector_list
#replay_buffer=[]
rew_buffer=[0]
penalties=[]
episode_reward=0.0
#batch_size=num_actions*2
batch_size=32
buffer_size=50000
min_replay_size=int(buffer_size*0.25)
target_update_freq=1000
flag=0
action_list=np.arange(0,len(g.nodes)).tolist()
replay_buffer=deque(maxlen=min_replay_size)
#populate the experience network
obs=reset()
#obs,end=state_dec(start,len(g.nodes))
for _ in tqdm(range(min_replay_size)):
action=np.random.choice(action_list)
new_obs,rew,done=step(obs,action)
transition=(obs,action,rew,done,new_obs)
replay_buffer.append(transition)
obs=new_obs
if done:
obs=reset()
#main training loop
obs=reset()
episodes=70000
start=1
end=0.1
decay=episodes
gamma=0.99
epsilon=0.5
gamma_list=[]
mean_reward=[]
done_location=[]
loss_list=[]
number_of_episodes=[]
stat_dict={'episodes':[],'epsilon':[],'explore_exploit':[],'time':[]}
for i in tqdm(range(episodes)):
itr=0
epsilon=np.exp(-i/(episodes/2))
rnd_sample=random.random()
stat_dict['episodes'].append(i)
stat_dict['epsilon'].append(epsilon)
if rnd_sample <=epsilon:
action=np.random.choice(action_list)
stat_dict['explore_exploit'].append('explore')
else:
source,end=state_dec(obs)
v_obs=state_to_vector(source,end)
t_obs=T.tensor([v_obs])
action=online.act(t_obs)
stat_dict['explore_exploit'].append('exploit')
new_obs,rew,done=step(obs,action)
transition=(obs,action,rew,done,new_obs)
replay_buffer.append(transition)
obs=new_obs
episode_reward+=rew
if done:
obs=reset()
rew_buffer.append(episode_reward)
episode_reward=0.0
done_location.append(i)
batch_size=32
transitions=random.sample(replay_buffer,batch_size)
obses=np.asarray([t[0] for t in transitions])
actions=np.asarray([t[1] for t in transitions])
rews=np.asarray([t[2] for t in transitions])
dones=np.asarray([t[3] for t in transitions])
new_obses=np.asarray([t[4] for t in transitions])
obses_t=T.as_tensor(obses,dtype=T.float32).to(device)
actions_t=T.as_tensor(actions,dtype=T.int64).to(device).unsqueeze(-1)
rews_t=T.as_tensor(rews,dtype=T.float32).to(device)
dones_t=T.as_tensor(dones,dtype=T.float32).to(device)
new_obses_t=T.as_tensor(new_obses,dtype=T.float32).to(device)
list_new_obses_t=T.tensor(list_of_vecotrs(new_obses_t)).to(device)
target_q_values=target(list_new_obses_t)##
#target_q_values=target(obses_t)
max_target_q_values=target_q_values.max(dim=1,keepdim=False)[0]
targets=rews_t+gamma*(1-dones_t)*max_target_q_values
targets=targets.unsqueeze(-1)
list_obses_t=T.tensor(list_of_vecotrs(obses_t)).to(device)
q_values=online(list_obses_t)
#q_values=online(obses_t)
action_q_values=T.gather(input=q_values,dim=1,index=actions_t)
loss=nn.functional.mse_loss(action_q_values,targets)
loss_list.append(loss.item())
optimizer.zero_grad()
loss.backward()
optimizer.step()
mean_reward.append(np.mean(rew_buffer))
number_of_episodes.append(i)
gamma_list.append(gamma)
dec = {'number_of_episodes':number_of_episodes,'mean_reward':mean_reward,'gamma':gamma_list}
if i % target_update_freq==0:
target.load_state_dict(online.state_dict())
if i % 1000 ==0:
print('step',i,'avg rew',round(np.mean(rew_buffer),2))
pass
if i==5000:
pass
I ran this script and it gave me good performance; changing the learning rate helped a lot.
| https://stackoverflow.com/questions/68269032/ |
How to fix PyTorch RuntimeError: CUDA error: out of memory? | I'm trying to train my Pytorch model on a remote server using a GPU. However, the training phase doesn't start, and I have the following error instead: RuntimeError: CUDA error: out of memory
I reinstalled Pytorch with Cuda 11 in case my version of Cuda is not compatible with the GPU I use (NVidia GeForce RTX 3080). It still doesn't work.
I also ran this command torch.cuda.empty_cache(). And it still doesn't work.
When I run the code below in my interpreter it still displays RuntimeError: CUDA error: out of memory
import torch
print(torch.rand(1, device="cuda"))
However, it works on cpu.
import torch
print(torch.rand(1, device="cpu"))
When I run the command nvidia-smi I have the following output:
How can I fix it?
| The problem here is that the GPU that you are trying to use is already occupied by another process. The steps for checking this are:
Use nvidia-smi in the terminal. This will check whether your GPU drivers are installed and show the load on the GPUs. If it fails, or doesn't show your GPU, check your driver installation.
If the GPU shows >0% GPU memory usage, that means it is already being used by another process. You can close that process (don't do that in a shared environment!) or launch your job on the other GPU, if you have another one free (see the sketch below).
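A small sketch of the second option (my addition, not part of the original answer; the GPU index 1 is an assumption, pick whichever device nvidia-smi shows as free):
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # must be set before CUDA is initialized

import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(torch.rand(1, device=device))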
| https://stackoverflow.com/questions/68271605/ |
TF depth_to_space not same as Torch's PixelShuffle when output channels > 1? | I noticed something interesting when trying to port a torch-trained model to tensorflow (TF). When the output channels of a PixelShuffle operation are greater than one, the corresponding depth_to_space function in TF is not equivalent (note: I convert the input to TF to NHWC and the output back to NCHW). I was wondering whether this is expected behavior or whether there is a misunderstanding on my part.
Specifically,
# Torch
torch_out = nn.PixelShuffle(2)(input)
and
# TF/Keras
input = np.transpose(input, (0, 2, 3, 1))  # Convert to NHWC
keras_input = keras.layers.Input(shape=input.shape[1:])
keras_d2s = keras.layers.Lambda(lambda x: tf.nn.depth_to_space(x, 2))(input)
...
keras_out = np.transpose(keras_d2s, (0, 3, 1, 2))  # Convert back to NCHW
and
keras_out != torch_out
Here is a testbench:
import numpy as np
import torch
import tensorflow as tf
from torch import nn
from tensorflow import keras
class Shuffle(nn.Module):
def __init__(self, s, k, ic):
super(Shuffle, self).__init__()
self.shuffle = nn.PixelShuffle(s)
def forward(self, inputs):
return self.shuffle(inputs)
def main():
sz = 4
h = 3
w = 3
k = 3
ic = 8
s = 2
input = np.arange(0, ic * h * w, dtype=np.float32).reshape(ic, h, w)
input = input[np.newaxis]
torch_input = torch.from_numpy(input)
shuffle_model = Shuffle(s, k, ic)
shuffle_out = shuffle_model(torch_input).detach().numpy()
print('Shuffle out:', shuffle_out.shape)
print(shuffle_out)
input = np.transpose(input, (0, 2, 3, 1))
keras_input = keras.layers.Input(shape=input.shape[1:])
keras_d2s = keras.layers.Lambda(lambda x: tf.nn.depth_to_space(x, s))(keras_input)
keras_model = keras.Model(keras_input, keras_d2s)
keras_out = keras_model.predict(input)
keras_out = np.transpose(keras_out, (0, 3, 1, 2))
print('Keras out:', keras_out.shape)
print(keras_out)
equal = np.allclose(shuffle_out, keras_out)
print('Equal?', equal)
if __name__ == '__main__':
main()
| They are indeed different. If you want them to match you need to shuffle the channels of one of the inputs. Or if the pixelshuffle/depth_to_space layer follows a convolution layer you can shuffle the channels of the weights of the convolution. Specifically, if oc is the number of output channels and s is the block_size then you need to permute the channels of the convolution's weights in TF using [i + oc * j for i in range(oc) for j in range(s ** 2)] (yields something like [0, 2, 4, 1, 3, 5]).
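Here is a hedged sketch of one such permutation (my own illustration, not code from the original answer; the exact index formula may need adjusting for other layouts, so treat it as an assumption to check against your shapes):
import numpy as np
import torch
import tensorflow as tf

oc, s, h, w = 2, 2, 3, 3                             # output channels, block size, spatial dims
x = np.arange(oc * s * s * h * w, dtype=np.float32).reshape(1, oc * s * s, h, w)

torch_out = torch.nn.PixelShuffle(s)(torch.from_numpy(x)).numpy()

# PixelShuffle orders input channels as c*s*s + offset, while depth_to_space expects
# offset*oc + c, so gather the NHWC channels into the order TF expects first
perm = [c * s * s + o for o in range(s * s) for c in range(oc)]
x_nhwc = np.transpose(x, (0, 2, 3, 1))               # NCHW -> NHWC
tf_out = tf.nn.depth_to_space(tf.gather(x_nhwc, perm, axis=-1), s).numpy()
tf_out = np.transpose(tf_out, (0, 3, 1, 2))          # NHWC -> NCHW

print(np.allclose(torch_out, tf_out))                # True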
| https://stackoverflow.com/questions/68272502/ |
How to find contents of an NGC Docker image? | The NVIDIA NGC container catalog has a broad range of GPU-optimised containers for common activities such as deep learning. How does one find what is inside the Docker images?
For example, I require an image with Pytorch 1.4 and Python 3.6 or 3.7, but the Pytorch tags go from pytorch:17.10 to pytorch:21.06-py3 (where xx.xx is the container version). Is there somewhere a list of what is installed in each container, or even better the container Dockerfile that was used to build the images?
| The details of the PyTorch NGC containers are listed in the PyTorch Release Notes, linked at the bottom of the PyTorch NGC page.
Documentation for all the other deep learning frameworks is likewise at NVIDIA Deep Learning Frameworks.
| https://stackoverflow.com/questions/68273802/ |
RandomAdjustSharpness gives IndexError: tuple index out of range | While using RandomAdjustSharpness, my code throws the following error: IndexError: tuple index out of range. I followed the instructions given here - https://pytorch.org/vision/stable/transforms.html - and am therefore confused by this error.
Here is my code -
import math, random
from sklearn.datasets import load_sample_images
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torch.autograd as autograd
import torch.nn.functional as F
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
def random_crop(imgs):
imgs = torch.tensor(imgs)
change = torch.nn.Sequential(
transforms.RandomCrop(427),
transforms.RandomAdjustSharpness(1, p=1)
)
imgs = change(imgs).numpy()
return imgs
###Obtaining a random image and preprocessing it!##
dataset = load_sample_images()
first_img_data = dataset.images[0]
first_img_data = first_img_data.reshape(-1, 427, 640)
first_img_data = first_img_data[1, :, :]
#first_img_data = first_img_data[0:84, 0:84].reshape(-1, 84,84)
# first_img_data = torch.tensor(first_img_data)
plt.figure()
plt.imshow(np.squeeze(first_img_data))
foo = random_crop(first_img_data)
plt.figure()
plt.imshow(np.squeeze(foo))
plt.show()
| You need to add a dimension to your tensor, like this:
torch.tensor([imgs])
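Applied to the question's random_crop, a hedged sketch might look like this (unsqueeze(0) is an equivalent way to add the missing channel dimension):
import torch
import torchvision.transforms as transforms

def random_crop(imgs):
    imgs = torch.tensor(imgs).unsqueeze(0)  # (H, W) -> (1, H, W), so the transforms see a channel dim
    change = torch.nn.Sequential(
        transforms.RandomCrop(427),
        transforms.RandomAdjustSharpness(1, p=1)
    )
    return change(imgs).numpy()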
| https://stackoverflow.com/questions/68273819/ |
AttributeError: module 'torchvision.models' has no attribute 'mobilenet_v3_small' | I am unable to import mobilenet_v3_small from torchvision. I have the below code:
import torch
import torch.nn as nn
import torch.utils.data
from torch.autograd import Variable
import torch.nn.functional as F
import math
import numpy as np
import torchvision.models as models
class feature_extraction(nn.Module):
def __init__(self):
super().__init__()
self.mobilenet = models.mobilenet_v3_small(pretrained=True)
I am getting the error:
self.mobilenet = models.mobilenet_v3_small(pretrained=True)
AttributeError: module 'torchvision.models' has no attribute 'mobilenet_v3_small'
I have the below versions:
cudatoolkit 11.0.221 h6bb024c_0
torch 1.7.0 pypi_0 pypi
torchaudio 0.7.2 py37 pytorch
torchvision 0.8.1 pypi_0 pypi
Python version is 3.7
| mobilenet_v3_small is not available in torchvision 0.8.1. If you want to use it you need to upgrade to 0.10.0 (stable version) or at least 0.9.0.
| https://stackoverflow.com/questions/68275411/ |
How to cluster the nodes of the Cora dataset based on their in-degree values? | I want to cluster the nodes of the Cora dataset so that each cluster only contains nodes with the same in-degree value. I can code something as follows:
import torch
from torch_geometric.datasets import Planetoid
from torch_geometric.utils import degree
dataset = Planetoid('./data','CORA')
data = dataset[0]
n = data.num_nodes
indegree = degree(data.edge_index[1], n, dtype=torch.long)
counts = torch.bincount(indegree)
But since I don't have access to the index values of the nodes, I don't know which cluster each node should be placed in.
| You can use return_inverse in torch.unique to recover the indices.
The nodes with the same value i in indices belong to the same cluster because they all have a degree equal to indegree_class[i].
indegree_class, indices = torch.unique(indegree, return_inverse=True)
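A small follow-up sketch (my addition, building directly on the line above) that groups the node indices by cluster:
# clusters[i] holds the indices of all nodes whose in-degree equals indegree_class[i]
clusters = [(indices == i).nonzero(as_tuple=True)[0] for i in range(len(indegree_class))]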
| https://stackoverflow.com/questions/68275680/ |
Extracting Meaningful Error Message from 'RuntimeError: CUDA error: device-side assert triggered' on Google Colab in Pytorch | I am experiencing the following error while training a generative network via Pytorch 1.9.0+cu102:
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
While using a Google Colaboratory GPU session. This segment was triggered on either one of these two lines:
running_loss += loss.item()
or
target = target.to(device)
It produces the error on the first line when I am first running the notebook, and the second line each subsequent time I try to run the block. The first error occurs after training for 3 batches. The second error happens on the first batch. I can confirm that the device is cuda0, that device is available, and target is a pytorch tensor. Naturally, I tried to take the advice of the error and run:
!CUDA_LAUNCH_BLOCKING=1
and
os.system('CUDA_LAUNCH_BLOCKING=1')
However, neither of these lines changes the error message. According to a different post, this is because colab is running these lines in a subshell. The error does not occur when running on CPU, and I do not have access to a GPU device besides the GPU on Colab. While this question has been asked in many different forms, no answers are particularly helpful to me because they either recommend passing the aforementioned line, are about a situation fundamentally different from my own (such as training a classifier with an inappropriate number of classes), or recommend a solution which I have already tried, such as resetting the runtime or switching to CPU.
I am hoping to gain insight into the following questions:
Is there a way for me to get a more specific error message? Efforts to set the launch blocking variable have been unsuccessful.
How could it be that I am getting this error on two seemingly very different lines? How could it be that my network trains for 3 batches (it is always 3), but fails on the fourth?
Does this situation remind anyone of an error that they have encountered previously, and have a possible route for ameliorating it given the limited information I can extract?
| I was successfully able to get more information about the error by executing:
os.environ['CUDA_LAUNCH_BLOCKING'] = "1"
BEFORE importing torch. This allowed me to get a more detailed traceback and ultimately diagnose the problem as an inappropriate loss function.
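In a Colab notebook that might look like the sketch below; the key point is that the variable is set before torch is imported anywhere in the session:
import os
os.environ['CUDA_LAUNCH_BLOCKING'] = "1"  # must be set before importing torch

import torch  # CUDA kernels now run synchronously, so the traceback points at the failing op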
| https://stackoverflow.com/questions/68277801/ |
Is it possible to expand nodes of a frozen neural network model by width? | I want to expand the number of nodes of a frozen neural network model by width in PyTorch. I want to do something like what is shown in the image below, where grey are frozen weights and green are newly added trainable weights.
I have an initial model which takes 3 inputs and gives one output back; this model also has two hidden layers with h1=5 and h2=3 nodes respectively. I created the model in PyTorch and froze the weights.
import torch
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net,self).__init__()
self.fc1 = nn.Linear(3, 5)
self.fc2 = nn.Linear(5, 3)
self.fc3 = nn.Linear(3, 1)
def forward(self,x):
x = self.fc1(x)
x = self.fc2(x)
x = self.fc3(x)
return x
print(Net())
model = Net()
X = torch.rand(5,3)
y = model(X)
print(y)
# Freeze layers
for param in model.parameters():
param.requires_grad = False
Now I want to expand this model by adding trainable nodes to h1=5+2, h2=3+1 and output= 1+1. Only the newly added nodes should be trainable and all other weights should be frozen, and those frozen weights should have the same weight as the parent model. Can this be done in pytorch or in tensorflow?
| There are 2 things need to be done
1. Expand the layers
You really should use ModuleList or ModuleDict to create the layers, because that lets you loop over them. I know eval or setattr also work, but they tend to break something else, so I don't want to use them.
There are 2 ways I can think of. One is to directly replace the weight with something bigger, and the other is to create a bigger layer and replace the whole layer.
# Replace the weight with randomly generated tensor
fc1_newweight = torch.rand(7, 3)
fc1_newbias = torch.rand(7)
fc1_shape = model.fc1.weight.shape
fc1_newweight[:fc1_shape[0], :fc1_shape[1]] = model.fc1.weight.clone()
fc1_newbias[:fc1_shape[0]] = model.fc1.bias.clone()
model.fc1.weight = torch.nn.Parameter(fc1_newweight)
model.fc1.bias = torch.nn.Parameter(fc1_newbias)
# Replace the weight with the random generated weights from the new layer
fc2_shape = model.fc2.weight.shape
fc2 = nn.Linear(7, 4)
fc2_weight = fc2.state_dict()
fc2_weight['weight'][:fc2_shape[0], :fc2_shape[1]] = model.fc2.weight.clone()
fc2_weight['bias'][:fc2_shape[0]] = model.fc2.bias.clone()
fc2.load_state_dict(fc2_weight)
model.fc2.weight = torch.nn.Parameter(fc2_weight['weight'])
model.fc2.bias = torch.nn.Parameter(fc2_weight['bias'])
# Replace the whole layer
fc3_shape = model.fc3.weight.shape
fc3 = nn.Linear(4, 2)
fc3_weight = fc3.state_dict()
fc3_weight['weight'][:fc3_shape[0], :fc3_shape[1]] = model.fc3.weight.clone()
fc3_weight['bias'][:fc3_shape[0]] = model.fc3.bias.clone()
fc3.load_state_dict(fc3_weight)
model.fc3 = fc3
I'd prefer option 2 or 3 over 1, because then the new weights are generated with nn.init.kaiming_uniform_ (the nn.Linear default) instead of a plain uniform distribution.
2. Select what to be trainable
This is tricky because you can't just set requires_grad on only some elements of a weight tensor; if you try, you'll get RuntimeError: you can only change requires_grad flags of leaf variables.
But something like this should be a good enough substitute. Again, using ModuleList will make the code here look a lot better too.
y = model(x)
loss = criterion(y, target)
loss.backward()
model.fc1.weight.grad[:fc1_shape[0], :fc1_shape[1]] = 0
model.fc1.bias.grad[:fc1_shape[0]] = 0
model.fc2.weight.grad[:fc2_shape[0], :fc2_shape[1]] = 0
model.fc2.bias.grad[:fc2_shape[0]] = 0
model.fc3.weight.grad[:fc3_shape[0], :fc3_shape[1]] = 0
model.fc3.bias.grad[:fc3_shape[0]] = 0
optimizer.step()
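As a possible alternative to zeroing the gradients by hand after every backward(), you could register a gradient hook once per parameter so the frozen sub-blocks never receive a gradient. This is only a sketch that reuses the fc*_shape variables saved above; note that optimizers with weight decay or momentum may still nudge the frozen entries slightly.
def freeze_subblock(param, rows, cols=None):
    # zero the gradient of the old (frozen) part of the weight/bias on every backward pass
    mask = torch.ones_like(param)
    if cols is None:
        mask[:rows] = 0
    else:
        mask[:rows, :cols] = 0
    param.register_hook(lambda grad: grad * mask)

freeze_subblock(model.fc1.weight, fc1_shape[0], fc1_shape[1])
freeze_subblock(model.fc1.bias, fc1_shape[0])
freeze_subblock(model.fc2.weight, fc2_shape[0], fc2_shape[1])
freeze_subblock(model.fc2.bias, fc2_shape[0])
freeze_subblock(model.fc3.weight, fc3_shape[0], fc3_shape[1])
freeze_subblock(model.fc3.bias, fc3_shape[0])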
| https://stackoverflow.com/questions/68279292/ |
In my PyTorch train iterator, how do I resolve the ValueError: only one element tensors can be converted to Python scalars? | How do I solve the ValueError: only one element tensors can be converted to Python scalars?
I am closely following a tutorial on building a Question Answering Bot in PyTorch. However, at training, my code is unable to save the checkpoints, giving me aforementioned ValueError. The error happens at torch.save(torch.tensor(train_loss_set), os.path.join(output_dir, 'training_loss.pt'))
Below is my code corresponding to the train iterator:
num_train_epochs = 1
print("***** Running training *****")
print(" Num examples = %d" % len(dataset))
print(" Num Epochs = %d" % num_train_epochs)
print(" Batch size = %d" % batch_size)
print(" Total optimization steps = %d" % (len(train_dataloader) // num_train_epochs))
model.zero_grad()
train_iterator = trange(num_train_epochs, desc="Epoch")
set_seed()
for _ in train_iterator:
epoch_iterator = tqdm(train_dataloader, desc="Iteration")
for step, batch in enumerate(epoch_iterator):
if step < global_step + 1:
continue
model.train()
batch = tuple(t.to(device) for t in batch)
inputs = {'input_ids': batch[0],
'attention_mask': batch[1],
'token_type_ids': batch[2],
'start_positions': batch[3],
'end_positions': batch[4]}
outputs = model(**inputs)
loss = outputs[0]
train_loss_set.append(loss)
loss.sum().backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
tr_loss += loss.sum().item()
optimizer.step()
model.zero_grad()
global_step += 1
if global_step % 1000 == 0:
print("Train loss: {}".format(tr_loss/global_step))
output_dir = 'checkpoints/checkpoint-{}'.format(global_step)
if not os.path.exists(output_dir):
os.makedirs(output_dir)
model_to_save = model.module if hasattr(model, 'module') else model # Take care of distributed/parallel training
model_to_save.save_pretrained(output_dir)
torch.save(torch.tensor(train_loss_set), os.path.join(output_dir, 'training_loss.pt'))
print("Saving model checkpoint to %s" % output_dir)
Edit
print(train_loss_set[:10]) returns the following:
[tensor([5.7099, 5.7395], device='cuda:0', grad_fn=<GatherBackward>), tensor([5.2470, 5.4016], device='cuda:0', grad_fn=<GatherBackward>), tensor([5.1311, 5.0390], device='cuda:0', grad_fn=<GatherBackward>), tensor([4.4326, 4.8475], device='cuda:0', grad_fn=<GatherBackward>), tensor([3.4740, 3.9955], device='cuda:0', grad_fn=<GatherBackward>), tensor([4.8710, 4.5907], device='cuda:0', grad_fn=<GatherBackward>), tensor([4.4294, 4.3013], device='cuda:0', grad_fn=<GatherBackward>), tensor([2.7536, 2.9540], device='cuda:0', grad_fn=<GatherBackward>), tensor([3.8989, 3.3436], device='cuda:0', grad_fn=<GatherBackward>), tensor([3.3534, 3.2532], device='cuda:0', grad_fn=<GatherBackward>)]
Could this have to do with the fact that I'm using DataParallel?
| It's a weird behavior of PyTorch: basically, you can't create a tensor from a list of tensors with torch.tensor.
But there are 3 things you can do.
First, you don't need torch.tensor when saving a list of tensors, so this should work:
torch.save(train_loss_set, os.path.join(output_dir, 'training_loss.pt'))
Second, use torch.stack instead:
torch.save(torch.stack(train_loss_set), os.path.join(output_dir, 'training_loss.pt'))
Third, this is a bit counter-intuitive, but you can convert the tensors inside the list to NumPy arrays first; then torch.tensor works:
train_loss_set.append(loss.cpu().detach().numpy())
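A tiny sketch of the behaviour, in case you want to verify it yourself:
import torch

losses = [torch.rand(2) for _ in range(3)]  # a list of tensors, like train_loss_set
# torch.tensor(losses)                      # raises "only one element tensors can be converted to Python scalars"
stacked = torch.stack(losses)               # works, shape (3, 2)
torch.save(losses, 'training_loss.pt')      # saving the plain list also works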
| https://stackoverflow.com/questions/68283806/ |
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first (Segmentation using yolact edge) | I am running segmentation on yolact edge. I am trying to find the minimum and maximum x and y pixel coordinates of the mask using my own algorithm.
I am trying to convert the values of a tuple to numpy. However, I am getting the following error:
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
Code
xmin = []
xmax = []
y = []
print(np.shape(t[3]))
print(type(t[3][:][:][:]))
#row = (t[3][1][360][:]==1).nonzero(as_tuple=True)
for i in range (0, 2):
t_cpu = t[3].clone().detach().cpu()
horizontal_translation = torch.where(t[3][i][:][:]==1)
print(horizontal_translation)
horizontal_translation_numpy = np.asarray(horizontal_translation[1])
x_min = np.amin(horizontal_translation_numpy)
x_max = np.amax(horizontal_translation_numpy)
np.append(xmin,x_min)
np.append(xmax, x_max)
print(xmin)
print(xmax)
Note:
t is a pytorch tensor that is output by the default program that contains mask data in t[3]. How do I fix this?
Output:
torch.Size([2, 720, 1280])
<class 'torch.Tensor'>
(tensor([105, 105, 105, ..., 503, 503, 503]), tensor([427, 428, 429, ..., 468, 469, 470]))
Traceback (most recent call last):
File "eval.py", line 1303, in <module>
evaluate(net, dataset)
File "eval.py", line 928, in evaluate
evalimage(net, inp, out, detections=detections, image_id="0")
File "eval.py", line 621, in evalimage
img_numpy = prep_display(preds, frame, None, None, undo_transform=False)
File "eval.py", line 198, in prep_display
horizontal_translation_numpy = np.asarray(horizontal_translation[1])
File "/home/nvidia/.local/lib/python3.6/site-packages/numpy/core/_asarray.py", line 83, in asarray
return array(a, dtype, copy=False, order=order)
File "/home/nvidia/.local/lib/python3.6/site-packages/torch/tensor.py", line 480, in __array__
return self.numpy()
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
| This should work
xmin = []
xmax = []
y = []
print(np.shape(t[3]))
print(type(t[3][:][:][:]))
#row = (t[3][1][360][:]==1).nonzero(as_tuple=True)
for i in range (0, 2):
t_cpu = t[3].clone().detach().cpu()
horizontal_translation = torch.where(t[3][i][:][:]==1)
print(horizontal_translation)
horizontal_translation_numpy = horizontal_translation[1].cpu().numpy()  # torch.where returns a tuple; move the index tensor to CPU before converting
x_min = np.amin(horizontal_translation_numpy)
x_max = np.amax(horizontal_translation_numpy)
np.append(xmin,x_min)
np.append(xmax, x_max)
print(xmin)
print(xmax)
| https://stackoverflow.com/questions/68285810/ |
How can I apply group normalization after a full-connection layer? | How can I apply Group Normalization after a full-connection layer? Say the output of the full-connection layer is 1024. And the group normalization layer is using 16 groups.
self.gn1 = nn.GroupNorm(16, hidden_size)
h1 = F.relu(self.gn1(self.fc1(x))))
Am I right? How should we understand the group normalization if it is applied to the output of a full-connection layer?
| Your code is correct, but let's see what happens in a small example.
The output of a fully-connected layer is usually a 2D tensor with shape (batch_size, hidden_size), so I will focus on this kind of input, but remember that GroupNorm supports tensors with an arbitrary number of dimensions. GroupNorm always splits the channel dimension (dim 1) into groups, and for a 2D input that is also the last dimension.
GroupNorm treats all the samples in the batch as independent and splits the channels of each sample into n_groups, as you can see from the image.
When the input tensor is 2D, the cube in the image becomes a square because there is no third vertical dimension, so in practice the normalization is performed on fixed-size consecutive pieces of the rows of the input matrix.
Let's see an example with some code.
import torch
import torch.nn as nn
batch_size = 2
hidden_size = 32
n_groups = 8
group_size = hidden_size // n_groups # = 4
# Input tensor that can be the result of a fully-connected layer
x = torch.rand(batch_size, hidden_size)
# GroupNorm with affine disabled to simplify the inspection of results
gn1 = nn.GroupNorm(n_groups, hidden_size, affine=False)
r = gn1(x)
# The rows are split into n_groups (8) groups of size group_size (4)
# and the normalization is applied to these pieces of rows.
# We can check it for the first group x[0, :group_size] with the following code
first_group = x[0, :group_size]
normalized_first_group = (first_group - first_group.mean())/torch.sqrt(first_group.var(unbiased=False) + gn1.eps)
print(r[0, :4])
print(normalized_first_group)
if(torch.allclose(r[0, :4], normalized_first_group)):
print('The result on the first group is the expected one')
| https://stackoverflow.com/questions/68286001/ |
How to prevent appending to a list to fill gpu memory | I run a model with pytorch with batch inference. I want to save the output from that model to a list, because I need it later for other calculations. The problem is, that if I append the output from my model to my list "output", it fills the gpu memory.
The output from the model is a list.
Is there a way to move the list to RAM?
if __name__ == "__main__":
output = []
with torch.no_grad():
for i in input_split:
try:
preds = model(i)
output.append(preds)
del preds
gc.collect()
torch.cuda.empty_cache()
except RuntimeError as e:
print("Failed")
| It is because the tensors you get from preds = model(i) are still on the GPU.
You can just take them out of the GPU before appending them to the list
output = []
with torch.no_grad():
for i in input_split:
preds = model(i)
output.append(preds.cpu())
And when you want to use them on the GPU again, just move them back one by one:
for i, data in enumerate(output):
    output[i] = data.cuda()
Edit 1. The Detectron2
This is similar to the above answer, except a little bit more complicated.
The output of Detectron2 is a list[dict], which means that if you have more than one image in the batch you'll have to create an empty list as a buffer, like
output = []
with torch.no_grad():
for i in input_split:
preds = model(i)
buffer = []
for pred in preds:
pred['instances'] = pred['instances'].to('cpu')  # Instances.to() moves every field (boxes, scores, classes, masks, keypoints) to CPU
buffer.append(pred)
output.append(buffer)
You may also be able to rely on the dicts being modified in place and get rid of the buffer, like this:
output = []
with torch.no_grad():
for i in input_split:
preds = model(i)
for pred in preds:
pred['instances'] = pred['instances'].to('cpu')  # Instances.to() moves every field to CPU
output.append(preds)
| https://stackoverflow.com/questions/68287336/ |
Torch.nn has a specific activation function? | My network has an output layer with a ReLU activation function, but I want the output to be something like "ReLU+1", that is, I want all outputs to be bigger than 1 while keeping the same shape as the ReLU function.
How should I change my torch.nn network?
My code is like:
self.actor = nn.Sequential(
nn.Linear(state_dim, 256),
nn.ReLU(),
nn.Linear(256, 256),
nn.ReLU(),
nn.Linear(256, action_dim),
nn.ReLU()
)
| There are 2 ways I can think of.
Make self.actor an nn.Module object
class Actor(nn.Module):
def __init__(self, state_dim, action_dim):
super().__init__()
self.linear1 = nn.Linear(state_dim, 256)
self.relu = nn.ReLU()
self.linear2 = nn.Linear(256, 256)
self.linear3 = nn.Linear(256, action_dim)
def forward(self, x):
x = self.linear1(x)
x = self.relu(x)
x = self.linear2(x)
x = self.relu(x)
x = self.linear3(x)
x = self.relu(x) + 1
return x
class ......
self.actor = Actor(state_dim, action_dim)
Make a Module class to do that and add it to the self.actor
class Add1(nn.Module):
def forward(self, x):
return x + 1
class ......
self.actor = nn.Sequential(
nn.Linear(state_dim, 256),
nn.ReLU(),
nn.Linear(256, 256),
nn.ReLU(),
nn.Linear(256, action_dim),
nn.ReLU(),
Add1()
)
| https://stackoverflow.com/questions/68287943/ |
What causes the Pytorch nn Module to return a 1 or 0 for the argmax of this array? | I have an array containing four random numbers that an argmax function should be returning a 0, 1, 2, or 3, but when the argmax is called from inside the nn.Module model it is always 0 or 1.
I would just like to know how and why it is always getting a 1 or 0 from the four numbers in the array.
Below I have the nn.Module and a comparison of a random array of len 3 computed inside and outside of the model using the function act (Net.act).
from torch import nn
import torch
import random
class Network(nn.Module):
def __init__(self):
super().__init__()
self.net = nn.Sequential(
nn.Linear(4, 64),
nn.Tanh(),
nn.Linear(64, 2))
def forward(self, x):
return self.net(x)
def act(self, obs):
obs_t = torch.as_tensor(obs, dtype=torch.float32, device=device)
q_values = self(obs_t.unsqueeze(0))
max_q_index = torch.argmax(q_values, dim=1)[0]
action = max_q_index.detach().item()
return action
Net = Network()
This is a side by side comparison of a random array of len 3 inside and outside of the act function in the nn.Module (Net).
for _ in range(20):
z = np.array([random.uniform(-1, 1) for _ in range(4)])
obs_t = torch.as_tensor(z, dtype=torch.float32, device=device)
q_values = (obs_t.unsqueeze(0))
max_q_index = torch.argmax(q_values, dim=1)[0]
action = max_q_index.detach().item()
print(action, Net.act(z))
Output
3 0
0 0
2 0
0 0
2 0
0 0
1 0
2 0
3 0
0 1
0 0
1 1
3 1
3 0
1 0
0 1
3 0
3 0
0 0
1 0
| self(obs_t.unsqueeze(0)) returns a 2 column matrix because the last layer of your model (nn.Linear(64, 2)) is defined to output two columns. max_q_index contains the column index of the largest value in each row of the model output (column index because dim=1). Since there are only 2 columns, max_q_index can only have values of 0 or 1.
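A minimal sketch that illustrates this; if you want actions in the range 0-3, the last layer has to produce 4 outputs:
q_values = torch.randn(1, 2)           # shape of what the current net produces
print(torch.argmax(q_values, dim=1))   # can only ever be 0 or 1

net4 = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 4))  # 4 output nodes
obs = torch.randn(1, 4)
print(torch.argmax(net4(obs), dim=1))  # now 0, 1, 2 or 3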
| https://stackoverflow.com/questions/68290924/ |
How to compare training and test performance in a Faster RCNN object detection model | I'm following a tutorial here for implementing a Faster RCNN against a custom dataset using PyTorch.
This is my training loop:
for images, targets in metric_logger.log_every(data_loader, print_freq, header):
# FOR GPU
images = list(image.to(device) for image in images)
targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
# Train the model
loss_dict = model(images, targets)
# reduce losses over all GPUs for logging purposes
losses = sum(loss for loss in loss_dict.values())
loss_dict_reduced = reduce_dict(loss_dict)
losses_reduced = sum(loss for loss in loss_dict_reduced.values())
loss_value = losses_reduced.item()
The metric logger (defined here) outputs the following to the console during training:
Epoch: [0] [ 0/226] eta: 0:07:57 lr: 0.000027 loss: 6.5019 (6.5019) loss_classifier: 0.8038 (0.8038) loss_box_reg: 0.1398 (0.1398) loss_objectness: 5.2717 (5.2717) loss_rpn_box_reg: 0.2866 (0.2866) time: 2.1142 data: 0.1003 max mem: 3827
Epoch: [0] [ 30/226] eta: 0:02:28 lr: 0.000693 loss: 1.3016 (2.4401) loss_classifier: 0.2914 (0.4067) loss_box_reg: 0.2294 (0.2191) loss_objectness: 0.3558 (1.2913) loss_rpn_box_reg: 0.3749 (0.5230) time: 0.7128 data: 0.0923 max mem: 4341
After an epoch has finished, I call an evaluate method which outputs the following:
Test: [ 0/100] eta: 0:00:25 model_time: 0.0880 (0.0880) evaluator_time: 0.1400 (0.1400) time: 0.2510 data: 0.0200 max mem: 4703
Test: [ 99/100] eta: 0:00:00 model_time: 0.0790 (0.0786) evaluator_time: 0.0110 (0.0382) time: 0.1528 data: 0.0221 max mem: 4703
Test: Total time: 0:00:14 (0.1401 s / it)
Averaged stats: model_time: 0.0790 (0.0786) evaluator_time: 0.0110 (0.0382)
Accumulating evaluation results...
DONE (t=0.11s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.263
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.346
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.304
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.208
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.308
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.013
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.027
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.175
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.311
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.264
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.351
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.086
I'm a bit confused by the differing metrics used during training and testing - I had wanted to plot training + validation loss (or the equivalent IoU values) so I can visualise training and testing performance, as well as checking if any overfitting is occurring.
My question is, how can I compare the model's training and testing performance?
| The evaluate() function here doesn't calculate any loss. And if you look at how the loss is calculated in train_one_epoch() here, you'll see that the model actually needs to be in train mode to return the loss dict. So write something like train_one_epoch(), except without updating the weights, like
@torch.no_grad()
def evaluate_loss(model, data_loader, device):
model.train()
metric_logger = utils.MetricLogger(delimiter=" ")
header = 'Test:'
for images, targets in metric_logger.log_every(data_loader, 100, header):
images = list(image.to(device) for image in images)
targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
loss_dict = model(images, targets)
losses = sum(loss for loss in loss_dict.values())
# reduce losses over all GPUs for logging purposes
loss_dict_reduced = utils.reduce_dict(loss_dict)
losses_reduced = sum(loss for loss in loss_dict_reduced.values())
metric_logger.update(loss=losses_reduced, **loss_dict_reduced)
But since you need the model to be in eval mode to get bounding boxes, you'll still need a loop like the original evaluate() if you also want mAP.
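A sketch of how this could fit into the training loop, assuming the helper names (train_one_epoch, evaluate) from the tutorial's engine.py and the evaluate_loss function above:
for epoch in range(num_epochs):
    train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
    lr_scheduler.step()
    evaluate_loss(model, data_loader_test, device)  # logs validation losses (train mode, no weight update)
    evaluate(model, data_loader_test, device)       # COCO-style AP/AR (eval mode)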
| https://stackoverflow.com/questions/68293253/ |
What is smart way to get batched gather? | I have two matrices, A and B, with shapes (n, m, k) and (n, m) respectively. n is the batch size, m is the amount of data in a batch, and k is the feature size.
Each element of B is an index less than m (specifically B = torch.randint(high=m, shape=(n,m))).
I want to implement [A[i][B[i]] for i in range(n)] in a smarter way.
Is there a better way in pytorch to implement this without doing for loop?
| You can use
a[torch.arange(n)[:, None], b]
An example:
>>> n, m, k = 3, 2, 5
>>> a = torch.arange(30).view(n, m, k)
>>> b = torch.randint(high=m, size=(n,m))
# first indexer (of shape (n, 1))
>>> torch.arange(n)[:, None]
tensor([[0],
[1],
[2]])
# second indexer
>>> b
tensor([[1, 0],
[0, 1],
[1, 1]])
The indexers have the shape (3, 1) and (3, 2) respectively so they'll be broadcasted to (3, 2) to effectively have
tensor([[0, 0],
[1, 1],
[2, 2]])
and
tensor([[1, 0],
[0, 1],
[1, 1]])
which says: for the first batch element, take its 1st (k,) row and then its 0th (k,) row; this fills an (m, k) block of the output, and the same is done for each of the n batch elements,
to get
>>> a[torch.arange(n)[:, None], b]
tensor([[[ 5, 6, 7, 8, 9],
[ 0, 1, 2, 3, 4]],
[[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19]],
[[25, 26, 27, 28, 29],
[25, 26, 27, 28, 29]]])
comparing with list comprehension:
>>> [a[i][b[i]] for i in range(n)]
[tensor([[5, 6, 7, 8, 9],
[0, 1, 2, 3, 4]]),
tensor([[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19]]),
tensor([[25, 26, 27, 28, 29],
[25, 26, 27, 28, 29]])]
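Since the title mentions gather: the same result can also be obtained with torch.gather by expanding b over the feature dimension (a sketch using the same a, b, n and k as above):
gathered = a.gather(1, b.unsqueeze(-1).expand(-1, -1, k))
# gathered[i, j, :] == a[i, b[i, j], :]
assert torch.equal(gathered, a[torch.arange(n)[:, None], b])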
| https://stackoverflow.com/questions/68299454/ |
Getting this while using pytorch transforms--->TypeError: integer argument expected, got float | I cloned transfer-learning-library repo and working on maximum classifier discrepancy. I am trying to change the augmentation but getting the following error
Traceback (most recent call last):
File "mcd.py", line 378, in <module>
main(args)
File "mcd.py", line 145, in main
results = validate(val_loader, G, F1, F2, args)
File "mcd.py", line 290, in validate
for i, (images, target) in enumerate(val_loader):
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 521, in __next__
data = self._next_data()
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
data.reraise()
File "/usr/local/lib/python3.7/dist-packages/torch/_utils.py", line 425, in reraise
raise self.exc_type(msg)
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "../../../common/vision/datasets/imagelist.py", line 48, in __getitem__
img = self.transform(img)
File "/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py", line 60, in __call__
img = t(img)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py", line 750, in forward
return F.perspective(img, startpoints, endpoints, self.interpolation, fill)
File "/usr/local/lib/python3.7/dist-packages/torchvision/transforms/functional.py", line 647, in perspective
return F_pil.perspective(img, coeffs, interpolation=pil_interpolation, fill=fill)
File "/usr/local/lib/python3.7/dist-packages/torchvision/transforms/functional_pil.py", line 289, in perspective
return img.transform(img.size, Image.PERSPECTIVE, perspective_coeffs, interpolation, **opts)
File "/usr/local/lib/python3.7/dist-packages/PIL/Image.py", line 2371, in transform
im = new(self.mode, size, fillcolor)
File "/usr/local/lib/python3.7/dist-packages/PIL/Image.py", line 2578, in new
return im._new(core.fill(mode, size, color))
TypeError: integer argument expected, got float
The previous code was
# Data loading code
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
if args.center_crop:
train_transform = T.Compose([
ResizeImage(256),
T.CenterCrop(224),
T.RandomHorizontalFlip(),
T.ToTensor(),
normalize
])
else:
train_transform = T.Compose([
ResizeImage(256),
T.RandomResizedCrop(224),
T.RandomHorizontalFlip(),
T.ToTensor(),
normalize
])
val_transform = T.Compose([
ResizeImage(256),
T.CenterCrop(224),
T.ToTensor(),
normalize
])
I just added T.RandomPerspective(distortion_scale = 0.8, p=0.5, fill=0.6) for val_transform.
Before this I also added few other transforms for train_transform but still got the same error.
What could be the problem?
| The fill argument needs to be an integer: fill=0.6 is a float, and PIL's Image.new (used internally) expects an integer color value, hence the error.
This transform does not support the fill parameter for Tensor inputs; therefore, if you wish to use the fill parameter, you must apply this transform before the ToTensor transform, while the data is still an integer-valued PIL image.
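A sketch of the adjusted val_transform under that constraint, assuming you meant a grey fill of roughly 0.6 in [0, 1], which is about 153 in [0, 255]:
val_transform = T.Compose([
    ResizeImage(256),
    T.CenterCrop(224),
    T.RandomPerspective(distortion_scale=0.8, p=0.5, fill=153),  # integer fill, applied on the PIL image
    T.ToTensor(),
    normalize
])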
| https://stackoverflow.com/questions/68305315/ |
How is get predict accuracy score in Bert Classification | I am using Bert Classifier for my Chatbot project. I perform the necessary tokenizer operations for the incoming text message. Then I insert it into the model and make a prediction. How can I get the accuracy of this estimate?
for text in test_texts:
encoded_dict = tokenizer.encode_plus(
text,
add_special_tokens=True,
max_length=max_len,
pad_to_max_length=True,
return_attention_mask=True,
return_tensors='pt',
)
input_ids.append(encoded_dict['input_ids'])
attention_masks.append(encoded_dict['attention_mask'])
input_ids = torch.cat(input_ids, dim=0)
attention_masks = torch.cat(attention_masks, dim=0)
print("input_ids ",input_ids)
print("attention_masks ",attention_masks)
batch_size = 32
prediction_data = TensorDataset(input_ids, attention_masks)
prediction_sampler = SequentialSampler(prediction_data)
prediction_dataloader = DataLoader(prediction_data, sampler=prediction_sampler, batch_size=batch_size)
print("prediction_data ",prediction_data)
print("prediction_sampler ",prediction_sampler)
print("prediction_dataloader ",prediction_dataloader)
model.eval()
predictions, true_labels = [], []
for batch in prediction_dataloader:
batch = tuple(t.to(device) for t in batch)
b_input_ids, b_input_mask = batch
print("b input ids",b_input_ids)
with torch.no_grad():
outputs = model(b_input_ids, token_type_ids=None,
attention_mask=b_input_mask.to(device))
logits = outputs[0]
logits = logits.detach().cpu().numpy()
label_ids = b_input_mask.to('cpu').numpy()
predictions.append(logits)
true_labels.append(label_ids)
print("logits ",logits)
print("label_ids ",label_ids)
print("true_labels ",true_labels)
print('Prediction completed')
prediction_set = []
for i in range(len(true_labels)):
pred_labels_i = np.argmax(predictions[i], axis=1).flatten()
prediction_set.append(pred_labels_i)
prediction= [item for sublist in prediction_set for item in sublist]
print("prediction:", prediction[0])
I am looking for a percentage value; the bot will respond or pass depending on the result of this percentage value.
| Accuracy can be directly computed using some libraries.
For example, you can use sklearn:
from sklearn.metrics import accuracy_score
print("Accuracy:", accuracy_score(true_labels, predictions)) # Value between 0 and 1
print("Accuracy Percentage {} %:".format(100*accuracy_score(true_labels, predictions))) # Value between 0 and 100
| https://stackoverflow.com/questions/68312423/ |
How can I avoid getting overlapping keypoints during inference? | I have been using Detectron2 for recognizing 4 keypoints on each image,
My dummy dataset consists of 1000 images, and I applied augmentations.
def build_train_loader(cls, cfg):
augs = [
T.RandomFlip(prob=0.5,horizontal=True),
T.RandomFlip(prob=0.5,horizontal=False,vertical=True),
T.RandomRotation(angle=[0, 180]),
T.RandomSaturation(0.9, 1.9)
]
return build_detection_train_loader(cfg,
mapper=DatasetMapper(cfg,
is_train=True,
augmentations=augs)
)
I have checked the images after those transforms which I have applied (each type of transform was tested separately), and it seems it has done well, the keypoints are positioned correctly.
Now after the training phase (keypoint_rcnn_R_50_FPN_3x.yaml),
I get some identical keypoints, which means in many images the keypoints overlap,
Here are few samples from my results:
[[[180.4211, 332.8872, 0.7105],
[276.3517, 369.3892, 0.7390],
[276.3517, 366.9956, 0.4788],
[220.5920, 296.9836, 0.9515]]]
And from another image:
[[[611.8049, 268.8926, 0.7576],
[611.8049, 268.8926, 1.2022],
[699.7122, 261.2566, 1.7348],
[724.5556, 198.2591, 1.4403]]]
I have compared the inference's results with augmentations and without,
and it seems that with augmentation the keypoints are barely getting recognized. Gosh, how can it be?
Can someone please suggest any idea how to overcome those kind of mistakes?
what am I doing wrong?
Thank you!
I have added a link to my google colab notebook:
https://colab.research.google.com/drive/1uIzvB8vCWdGrT7qnz2d2npEYCqOxET5S?usp=sharing
| The problem is that there's nothing unique about the different corners of the rectangle. However, in your annotation and in your loss function there is an implicit assumption that the order of the corners is significant:
The corners are labeled in a specific order and the network is trained to output the corners in that specific order.
However, when you augment the dataset, by flipping and rotating the images, you change the implicit order of the corners and now the net does not know which of the four corners to predict at each time.
As far as I can see you have two ways of addressing this issue:
Explicitly force order on the corners:
Make sure that no matter what augmentation the image underwent, for each rectangle the ground truth points are ordered "top left", "top right", "bottom left", "bottom right". This means you'll have to transform the coordinates of the corners (as you are doing now), but also reorder them.
Adding this consistency should help your model overcome the ambiguity in identifying the different corners.
Make the loss invariant to the order of the predicted corners:
Suppose your ground truth rectangle spans the domain [0, 1]x[0, 1]: the four corners you should predict are [[0, 0], [1, 1], [1, 0], [0, 1]]. Note that if you predict [[1, 1], [0, 0], [0, 1], [1, 0]] your loss is very high, although you predicted the right corners, just in a different order than the annotated ones.
Therefore, you should make your loss invariant to the order of the predicted points, e.g. loss = min over permutations pi of sum_i ||pred_pi(i) - gt_i||^2, where pi is a permutation of the four corners.
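A minimal sketch of such a permutation-invariant loss for 4 corners, assuming pred and gt are tensors of shape (N, 4, 2); brute-forcing the 4! = 24 permutations is cheap:
import itertools
import torch

def order_invariant_corner_loss(pred, gt):
    # pred, gt: (N, 4, 2); per sample, keep the permutation of predicted corners with the smallest squared error
    losses = []
    for perm in itertools.permutations(range(4)):
        p = pred[:, list(perm), :]
        losses.append(((p - gt) ** 2).sum(dim=(1, 2)))
    return torch.stack(losses, dim=1).min(dim=1).values.mean()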
| https://stackoverflow.com/questions/68315517/ |
Wrong tensor type when trying to do the HuggingFace tutorial (pytorch) | I've recently been trying to get hands on experience with the transformer library from Hugging Face. Since I'm an absolute noob when it comes to using Pytorch (and Deep Learning in general), I started with the introduction that can be found here.
Here is the code to install dependencies :
#!pip install transformers
!pip install transformers[sentencepiece] # includes transformers dependencies
!pip install datasets # datasets from huggingface hub
!pip install tqdm
Here's the code they propose to use to fine-tune BERT the MNPR dataset (used in the GLUE benchmark). This dataset includes two sentences per "sample", so in the tokenizer we have to use sentence1 and sentence2.
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorWithPadding
from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification
from transformers import AdamW
from transformers import get_scheduler
import torch
from tqdm.auto import tqdm
raw_datasets = load_dataset("glue", "mrpc")
checkpoint = "bert-base-uncased"
# functions defining how the tokenizer works
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
def tokenize_function(example):
return tokenizer(example["sentence1"], example["sentence2"], truncation=True)
# tokenizer will use dynamic padding (https://huggingface.co/course/chapter3/2?fw=pt)
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
# remove unecessary columns from data and format in torch tensors
tokenized_datasets = tokenized_datasets.remove_columns(
["sentence1", "sentence2", "idx"]
)
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
tokenized_datasets.set_format("torch")
train_dataloader = DataLoader(
tokenized_datasets["train"], shuffle=True, batch_size=8, collate_fn=data_collator
)
eval_dataloader = DataLoader(
tokenized_datasets["validation"], batch_size=8, collate_fn=data_collator
)
# loading model and training requirements
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
optimizer = AdamW(model.parameters(), lr=5e-5)
num_epochs = 3
num_training_steps = num_epochs * len(train_dataloader)
lr_scheduler = get_scheduler(
"linear",
optimizer=optimizer,
num_warmup_steps=0,
num_training_steps=num_training_steps
)
print(num_training_steps)
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model.to(device)
progress_bar = tqdm(range(num_training_steps))
# training loop:
model.train()
for epoch in range(num_epochs):
for batch in train_dataloader:
batch = {k: v.to(device) for k, v in batch.items()}
outputs = model(**batch)
loss = outputs.loss
loss.backward()
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
progress_bar.update(1)
# assert 1==0
This works perfectly fine for me in Google Colab. I wanted to do the same thing with another dataset sst2. The code I use is very similar to the one above. The only few lines of code that change are the lines to import the data and the tokenizer (we have one sentence per feature instead of two). I have double-checked and the tokenizer works fine. Here is my code :
# imports
import torch
from datasets import load_dataset # datasets from huggingface
# tokenization
from transformers import AutoTokenizer, DataCollatorWithPadding
from torch.utils.data import DataLoader
# training
from transformers import AdamW, AutoModelForSequenceClassification, get_scheduler
from tqdm.auto import tqdm
# Hyperparameters
batch_size = 8
learning_rate = 5e-5
num_epochs = 3
num_warmup_steps = 0
# load dataset and choosing checkpoint
raw_datasets = load_dataset("glue", "sst2")
checkpoint = "bert-base-uncased"
# load tokenizer
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# tokenization of dataset
def tokenize_function(example):
return tokenizer(example["sentence"], truncation=True)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
tokenized_datasets = tokenized_datasets.remove_columns(["sentence", "idx"])
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
tokenized_datasets.set_format("torch")
# setting DataLoader
train_dataloader = DataLoader(
tokenized_datasets["train"], shuffle=True, batch_size=batch_size, collate_fn=data_collator
)
eval_dataloader = DataLoader(
tokenized_datasets["validation"], batch_size=batch_size, collate_fn=data_collator
)
# import model
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=1)
# setup training loop
optimizer = AdamW(model.parameters(), lr=learning_rate)
num_training_steps = num_epochs * len(train_dataloader)
print(num_training_steps)
lr_scheduler = get_scheduler(
"linear",
optimizer=optimizer,
num_warmup_steps=num_warmup_steps,
num_training_steps=num_training_steps
)
# chose device (GPU or CPU)
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model.to(device)
progress_bar = tqdm(range(num_training_steps))
model.train()
for epoch in range(num_epochs):
for batch in train_dataloader:
batch = {k: v.to(device) for k, v in batch.items()}
for k,v in batch.items():
print(f"key={k},v.dtype={v.dtype}, type(v)={type(v)}")
outputs = model(**batch)
loss = outputs.loss
loss.backward()
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
progress_bar.update(1)
And here's the error I get :
RuntimeError Traceback (most recent call last)
<ipython-input-11-7893d7715ac2> in <module>()
69 outputs = model(**batch)
70 loss = outputs.loss
---> 71 loss.backward()
72
73 optimizer.step()
1 frames
/usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
147 Variable._execution_engine.run_backward(
148 tensors, grad_tensors_, retain_graph, create_graph, inputs,
--> 149 allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag
150
151
RuntimeError: Found dtype Long but expected Float
This seems like a very silly mistake, but like I said I'm an absolute pytorch noob and it's difficult for me to know where to start solving this issue. I have checked the type of the values in batch.items() and in both cases, they are all torch.int64 (or torch.long). I tried to change the attention_mask and input_ids values to torch.float32, but I got the same error message.
Thanks in advance.
Python version and packages :
python 3.7.20
Pytorch 1.9.0+cu102
transformers 4.8.2
GPU : Tesla T4 (also tried with tesla P4)
| I found the source of the problem. The problem comes from the line
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=1)
Since the dataset has 2 classes, the correct way of calling the model should be
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
With this modification, my code now works. (The reason: with num_labels=1 the model treats the task as regression and uses MSELoss, which expects float labels, while the SST-2 labels are integers, hence "Found dtype Long but expected Float"; with num_labels=2 it uses CrossEntropyLoss, which expects long labels.)
| https://stackoverflow.com/questions/68315780/ |
AttributeError: 'numpy.ndarray' object has no attribute 'unsqueeze' | I'm running training code using PyTorch and NumPy.
This is the plot_example function:
def plot_example(low_res_folder, gen):
files=os.listdir(low_res_folder)
gen.eval()
for file in files:
image=Image.open("test_images/" + file)
with torch.no_grad():
upscaled_img=gen(
config1.both_transform(image=np.asarray(image))["image"]
.unsqueeze(0)
.to(config1.DEVICE)
)
save_image(upscaled_img * 0.5 + 0.5, f"saved/{file}")
gen.train()
The problem I have is that the unsqueeze attribute raises the error:
File "E:\Downloads\esrgan-tf2-masteren\modules\train1.py", line 58, in train_fn
plot_example("test_images/", gen)
File "E:\Downloads\esrgan-tf2-masteren\modules\utils1.py", line 46, in plot_example
config1.both_transform(image=np.asarray(image))["image"]
AttributeError: 'numpy.ndarray' object has no attribute 'unsqueeze'
The network is GAN network and gen() represents the Generator.
| Make sure image is a tensor in the shape of [batch size, channels, height, width] before entering any Pytorch layers.
Here you have
image=np.asarray(image)
I would remove this numpy conversion and keep it a torch.tensor.
Or if you really want it to be a numpy array, then right before it enters your generator make sure to use torch.from_numpy() as shown in this documentation on your numpy image before it gets unsqueezed: https://pytorch.org/docs/stable/generated/torch.from_numpy.html
This function is of course an alternative if you don't want to get rid of that original conversion.
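A sketch of that second option inside plot_example, assuming config1.both_transform returns a NumPy array under the "image" key:
arr = config1.both_transform(image=np.asarray(image))["image"]          # still a numpy array here
tensor = torch.from_numpy(arr).float().unsqueeze(0).to(config1.DEVICE)  # back to a torch tensor, so unsqueeze works
with torch.no_grad():
    upscaled_img = gen(tensor)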
Sarthak Jain
| https://stackoverflow.com/questions/68319216/ |
How to compute the parameter importance in pytorch? | I want to develop a lifelong learning system, so I need to prevent important parameters from changing. I read the related paper 'Memory Aware Synapses: Learning what (not) to forget', where a method was mentioned: I need to calculate the gradient of each parameter corresponding to each input image. So how should I write my code in PyTorch?
'Memory Aware Synapses: Learning what (not) to forget'
| You can do it using standard optimization procedure and .backward() method on your loss function.
First, scaling as defined in your link:
class Scaler:
def __init__(self, parameters, delta):
self.parameters = parameters
self.delta = delta
def step(self):
"""Multiplies gradients in place."""
for param in self.parameters:
if param.grad is None:
raise ValueError("backward() has to be called before running scaler")
param.grad *= self.delta
One can use it just like optimizer.step(), see below (see comments):
model = torch.nn.Sequential(
torch.nn.Linear(10, 100), torch.nn.ReLU(), torch.nn.Linear(100, 1)
)
scaler = Scaler(model.parameters(), delta=0.001)
optimizer = torch.optim.Adam(model.parameters())
criterion = torch.nn.MSELoss()
X, y = torch.randn(64, 10), torch.randn(64)
# Optimization loop
EPOCHS = 10
for _ in range(EPOCHS):
output = model(X)
loss = criterion(output, y)
loss.backward() # Now model has the gradients
optimizer.step() # Optimize model's parameters
print(next(model.parameters()).grad)
scaler.step() # Scaler gradients
optimizer.zero_grad() # Zero gradient before next step
After scaler.step() you will have gradient scaled available inside param.grad for each parameter (just like those are accessed within Scaler's step method) so you can do whatever you want with them.
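If what you ultimately want are the MAS importance weights themselves, here is a rough sketch of my reading of the paper (not taken from the authors' code): accumulate, over the data, the absolute gradient of the squared L2 norm of the output with respect to each parameter. It assumes the data loader yields plain input batches:
def compute_mas_importance(model, data_loader, device="cpu"):
    importance = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_batches = 0
    for x in data_loader:
        x = x.to(device)
        model.zero_grad()
        out = model(x)
        out.pow(2).sum().backward()  # squared L2 norm of the output, as in the MAS objective
        for n, p in model.named_parameters():
            if p.grad is not None:
                importance[n] += p.grad.abs()
        n_batches += 1
    return {n: imp / max(n_batches, 1) for n, imp in importance.items()}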
| https://stackoverflow.com/questions/68325288/ |
Getting the autograd counter of a tensor in PyTorch | I am using PyTorch for training a network. I was going through the autograd documentation and here it is mentioned that for each tensor there is a counter that the autograd implements to track the "version" of any tensor. How can I get this counter for any tensor in the graph?
Reason why I need it.
I have encountered the autograd error
[torch.cuda.FloatTensor [x, y, z]], which is output 0 of torch::autograd::CopySlices, is at version 7; expected version 6 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
This is not new to me and I have been successful in handling it before. This time around I am not able to see why the tensor would be at version 7 instead of being at 6. To answer this, I would want to know the version at any given point in the run.
Thanks.
| It can be obtained through the command tensor_name._version.
As an example of how to use it, the following minimal example is provided.
import torch
a = torch.zeros(10, 5)
print(a._version) # prints 0
a[:, 1] = 1
print(a._version) # prints 1
| https://stackoverflow.com/questions/68326500/ |
What is the correct way of encoding a large batch of documents with sentence transformers/pytorch? | I am having issues encoding a large number of documents (more than a million) with the sentence_transformers library.
Given a very similar corpus list of strings. When I do:
from sentence_transformers import SentenceTransformer
embedder = SentenceTransformer('msmarco-distilbert-base-v2')
corpus_embeddings = embedder.encode(corpus, convert_to_tensor=False)
After some hours, the process seems to be stuck, as it never finishes and when checking the process viewer nothing is running.
As I am suspicious that this is a ram issue (the GPU board doesn't have enough memory to fit everything in a single step) I tried to split the corpus into batches, transform them into NumPy arrays, and concat them into a single matrix as follows:
from itertools import zip_longest
from sentence_transformers import SentenceTransformer, util
import torch
from loguru import logger
import glob
from natsort import natsorted
def grouper(iterable, n, fillvalue=np.nan):
args = [iter(iterable)] * n
return zip_longest(*args, fillvalue=fillvalue)
embedder = SentenceTransformer('msmarco-distilbert-base-v2')
for j, e in enumerate(list(grouper(corpus, 3))):
try:
# print('------------------')
for i in filter(lambda v: v==v, e):
corpus_embeddings=embedder.encode(i, convert_to_tensor=False)
torch.save(corpus_embeddings, f'/Users/user/Downloads/embeddings_part_{j}.npy')
except TypeError:
print(j, e)
logger.debug("TypeError in batch {batch_num}", batch_num=j)
l = []
for e in natsorted(glob.glob("/Users/user/Downloads/*.npy")):
l.append(torch.load(e))
corpus_embeddings = np.vstack(l)
corpus_embeddings
Nevertheless, the above procedure doesn't seem to work. The reason is that when I try with a small sample of the corpus with and without the batch approach the matrices I get are different for example:
Without batch approach:
array([[-0.6828216 , -0.26541945, 0.31026787, ..., 0.19941986,
0.02366139, 0.4489861 ],
[-0.45781 , -0.02955275, 1.0897563 , ..., -0.20077021,
-0.37821707, 0.2248317 ],
[ 0.8532193 , -0.13642257, -0.8872398 , ..., -0.57482916,
0.12760726, -0.66986346],
...,
[-0.04036704, 0.06745373, -0.6010259 , ..., -0.08174597,
-0.18513843, -0.64744204],
[-0.30782765, -0.04935509, -0.11624689, ..., 0.10423593,
-0.14073376, -0.09206307],
[-0.77139395, -0.08119706, 0.43753916, ..., 0.1653319 ,
0.06861683, -0.16276269]], dtype=float32)
With batch approach:
array([[ 0.8532191 , -0.13642241, -0.8872397 , ..., -0.5748289 ,
0.12760736, -0.6698637 ],
[ 0.3679317 , -0.21968201, 0.9932826 , ..., -0.86282325,
-0.04683857, 0.18995859],
[ 0.23026675, 0.69587034, -0.8116473 , ..., 0.23903558,
0.413471 , -0.23438476],
...,
[ 0.923319 , 0.4152724 , -0.3153545 , ..., -0.6863369 ,
0.01149149, -0.51300013],
[-0.30782777, -0.04935484, -0.11624689, ..., 0.10423636,
-0.1407339 , -0.09206269],
[-0.77139413, -0.08119693, 0.43753892, ..., 0.16533189,
0.06861652, -0.16276267]], dtype=float32)
What is the correct way of doing the above batch procedure?
UPDATE
After inspecting the above batch procedure, I found that I was able to get the same matrix output with and without the batching when I set to 1 the batch size of the above code (enumerate(list(grouper(corpus, 1)))). Therefore, my question is, what is the correct way of applying the encoder to a large set of documents?
| This line here sorts the input by text length before doing the encode. I have no idea why.
So, either comment those lines out or copy them to your code like
length_sorted_idx = np.argsort([-embedder._text_length(sen) for sen in corpus])
corpus_sorted = [corpus[idx] for idx in length_sorted_idx]
Then use the corpus_sorted to encode and map the output back using length_sorted_idx.
Or just encode it one by one and you won't need to care about which output is from which text.
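If the goal is simply to encode a corpus that does not fit in one call, a simpler sketch is to encode it chunk by chunk in the original corpus order and stack the results; each encode() call handles its own internal batching via batch_size:
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer('msmarco-distilbert-base-v2')
chunk_size = 10000
parts = []
for start in range(0, len(corpus), chunk_size):
    chunk = corpus[start:start + chunk_size]
    parts.append(embedder.encode(chunk, batch_size=64, show_progress_bar=True, convert_to_numpy=True))
corpus_embeddings = np.vstack(parts)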
| https://stackoverflow.com/questions/68337487/ |
IndexError: list index out of range in prediction of images | I am doing predictions on images; I wrote out all the class names, and in the test folder I have 20 images. Please give me a hint as to why I am getting this error. How can we check the indices of the model?
Code
import numpy as np
import sys, random
import torch
from torchvision import models, transforms
from PIL import Image
from pathlib import Path
import matplotlib.pyplot as plt
import glob
# Paths for image directory and model
IMDIR = './test'
MODEL = 'checkpoint/resnet18/Monday_31_May_2021_21h_25m_05s/resnet18-1000-regular.pth'
# Load the model for testing
model = models.resnet18()
model.named_children()
torch.save(model.state_dict, MODEL)
model.eval()
# Class labels for prediction
class_names = ['BC', 'BK', 'CC', 'CL', 'CM', 'DF', 'DG', 'DS', 'HL', 'IF', 'JD', 'JS', 'LD', 'LP', 'LS', 'PO', 'RI',
'SD', 'SG', 'TO']
# Retreive 9 random images from directory
files = Path(IMDIR).resolve().glob('*.*')
print(files)
images = random.sample(list(files), 1)
print(images)
# Configure plots
fig = plt.figure(figsize=(9, 9))
rows, cols = 3, 3
# Preprocessing transformations
preprocess = transforms.Compose([
transforms.Resize((256, 256)),
# transforms.CenterCrop(size=224),
transforms.ToTensor(),
transforms.Normalize(0.5306, 0.1348)
])
# Enable gpu mode, if cuda available
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Perform prediction and plot results
with torch.no_grad():
for num, img in enumerate(images):
img = Image.open(img).convert('RGB')
inputs = preprocess(img).unsqueeze(0).cpu()
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
print(preds)
label = class_names[preds]
plt.subplot(rows, cols, num + 1)
plt.title("Pred: " + label)
plt.axis('off')
plt.imshow(img)
'''
Sample run: python test.py test
'''
Traceback
Traceback (most recent call last):
File "/media/khawar/HDD_Khawar/CVPR/pytorch-cifar100/test_box.py", line 57, in <module>
label = class_names[preds]
IndexError: list index out of range
| Your error stems from the fact that you don't modify the final linear layer of your resnet model.
I suggest adding this code:
# What you have
model = models.resnet18()
# What you need
model.fc = nn.Sequential(
nn.Linear(model.fc.in_features, len(class_names)))
This changes the last linear layer so that it outputs the correct number of classes; the default resnet18 head has 1000 outputs, so torch.max can return an index up to 999, which is out of range for your 20-entry class_names list.
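A sketch of the full prediction-time setup, assuming the checkpoint at MODEL was trained with a matching 20-class head (note that the posted script calls torch.save on an untrained model instead of loading the trained weights):
import torch.nn as nn

model = models.resnet18()
model.fc = nn.Sequential(nn.Linear(model.fc.in_features, len(class_names)))  # 20 outputs
state_dict = torch.load(MODEL, map_location='cpu')
model.load_state_dict(state_dict)
model.eval()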
Sarthak
| https://stackoverflow.com/questions/68340882/ |
RuntimeError: all elements of input should be between 0 and 1 | I want to use an RNN with BiLSTM layers in PyTorch on protein embeddings. It worked with a Linear layer, but when I use a BiLSTM I get a runtime error. Sorry if it's not clear; this is my first post, and I will be grateful if someone can help me.
from collections import Counter, OrderedDict
from typing import Optional
import numpy as np
import pytorch_lightning as pl
import torch
import torch.nn.functional as F # noqa
from deepchain import log
from sklearn.model_selection import train_test_split
from sklearn.utils.class_weight import compute_class_weight
from torch import Tensor, nn
num_layers=2
hidden_size=256
from torch.utils.data import DataLoader, TensorDataset
def classification_dataloader_from_numpy(
x: np.ndarray, y: np.array, batch_size: int = 32
) -> DataLoader:
"""Build a dataloader from numpy for classification problem
This dataloader is use only for classification. It detects automatically the class of
the problem (binary or multiclass classification)
Args:
x (np.ndarray): [description]
y (np.array): [description]
batch_size (int, optional): [description]. Defaults to None.
Returns:
DataLoader: [description]
"""
n_class: int = len(np.unique(y))
if n_class > 2:
log.info("This is a classification problem with %s classes", n_class)
else:
log.info("This is a binary classification problem")
# y is float for binary classification, int for multiclass
y_tensor = torch.tensor(y).long() if len(np.unique(y)) > 2 else torch.tensor(y).float()
tensor_set = TensorDataset(torch.tensor(x).float(), y_tensor)
loader = DataLoader(tensor_set, batch_size=batch_size)
return loader
class RNN(pl.LightningModule):
"""A `pytorch` based deep learning model"""
def __init__(self, input_shape: int, n_class: int, num_layers, n_neurons: int = 128, lr: float = 1e-3):
super(RNN,self).__init__()
self.lr = lr
self.n_neurons=n_neurons
self.num_layers=num_layers
self.input_shape = input_shape
self.output_shape = 1 if n_class <= 2 else n_class
self.activation = nn.Sigmoid() if n_class <= 2 else nn.Softmax(dim=-1)
self.lstm = nn.LSTM(self.input_shape, self.n_neurons, num_layers, batch_first=True, bidirectional=True)
self.fc= nn.Linear(self.n_neurons, self.output_shape)
def forward(self, x):
h0=torch.zeros(self.num_layers, x_size(0), self.n_neurons).to(device)
c0=torch.zeros(self.num_layers, x_size(0), self.n_neurons).to(device)
out, _=self.lstm(x,(h0, c0))
out=self.fc(out[:, -1, :])
return self.fc(x)
def training_step(self, batch, batch_idx):
"""training_step defined the train loop. It is independent of forward"""
x, y = batch
y_hat = self.fc(x).squeeze()
y = y.squeeze()
if self.output_shape > 1:
y_hat = torch.log(y_hat)
loss = self.loss(y_hat, y)
self.log("train_loss", loss, on_epoch=True, on_step=False)
return {"loss": loss}
def validation_step(self, batch, batch_idx):
"""training_step defined the train loop. It is independent of forward"""
x, y = batch
y_hat = self.fc(x).squeeze()
y = y.squeeze()
if self.output_shape > 1:
y_hat = torch.log(y_hat)
loss = self.loss(y_hat, y)
self.log("val_loss", loss, on_epoch=True, on_step=False)
return {"val_loss": loss}
def configure_optimizers(self):
"""(Optional) Configure training optimizers."""
return torch.optim.Adam(self.parameters(),lr=self.lr)
def compute_class_weight(self, y: np.array, n_class: int):
"""Compute class weight for binary/multiple classification
If n_class=2, only compute weights for the positve class.
If n>2, compute for all classes.
Args:
y ([np.array]):vector of int represented the class
n_class (int) : number fo class to use
"""
if n_class == 2:
class_count: typing.Counter = Counter(y)
cond_binary = (0 in class_count) and (1 in class_count)
assert cond_binary, "Must have O and 1 class for binary classification"
weight = class_count[0] / class_count[1]
else:
weight = compute_class_weight(class_weight="balanced", classes=np.unique(y), y=y)
return torch.tensor(weight).float()
def fit(
self,
x: np.ndarray,
y: np.array,
epochs: int = 10,
batch_size: int = 32,
class_weight: Optional[str] = None,
validation_data: bool = True,
**kwargs
):
assert isinstance(x, np.ndarray), "X should be a numpy array"
assert isinstance(y, np.ndarray), "y should be a numpy array"
assert class_weight in (
None,
"balanced",
), "the only choice available for class_weight is 'balanced'"
n_class = len(np.unique(y))
weight = None
self.input_shape = x.shape[1]
self.output_shape = 1 if n_class <= 2 else n_class
self.activation = nn.Sigmoid() if n_class <= 2 else nn.Softmax(dim=-1)
if class_weight == "balanced":
weight = self.compute_class_weight(y, n_class)
self.loss = nn.NLLLoss(weight) if self.output_shape > 1 else nn.BCELoss(weight)
if validation_data:
x_train, x_val, y_train, y_val = train_test_split(x, y, test_size=0.2)
train_loader = classification_dataloader_from_numpy(
x_train, y_train, batch_size=batch_size
)
val_loader = classification_dataloader_from_numpy(x_val, y_val, batch_size=batch_size)
else:
train_loader = classification_dataloader_from_numpy(x, y, batch_size=batch_size)
val_loader = None
self.trainer = pl.Trainer(max_epochs=epochs, **kwargs)
self.trainer.fit(self, train_loader, val_loader)
def predict(self, x):
"""Run inference on data."""
if self.output_shape is None:
log.warning("Model is not fitted. Can't do predict")
return
return self.forward(x).detach().numpy()
def save(self, path: str):
"""Save the state dict model with torch"""
torch.save(self.fc.state_dict(), path)
log.info("Save state_dict parameters in model.pt")
def load_state_dict(self, state_dict: "OrderedDict[str, Tensor]", strict: bool = False):
"""Load state_dict saved parameters
Args:
state_dict (OrderedDict[str, Tensor]): state_dict tensor
strict (bool, optional): [description]. Defaults to False.
"""
self.fc.load_state_dict(state_dict, strict=strict)
self.fc.eval()
mlp = RNN(input_shape=1024, n_neurons=1024, num_layers=2, n_class=2)
mlp.fit(embeddings_train, np.array(y_train),validation_data=(embeddings_test, np.array(y_test)), epochs=30)
mlp.save("model.pt")
These are the errors that occurred. I really need help and I remain at your disposal for further information.
Error 1
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-154-e5fde11a675c> in <module>
1 # init MLP model, train it on the data, then save model
2 mlp = RNN(input_shape=1024, n_neurons=1024, num_layers=2, n_class=2)
----> 3 mlp.fit(embeddings_train, np.array(y_train),validation_data=(embeddings_test, np.array(y_test)), epochs=30)
4 mlp.save("model.pt")
<ipython-input-153-a8d51af53bb5> in fit(self, x, y, epochs, batch_size, class_weight, validation_data, **kwargs)
134 val_loader = None
135 self.trainer = pl.Trainer(max_epochs=epochs, **kwargs)
--> 136 self.trainer.fit(self, train_loader, val_loader)
137 def predict(self, x):
138 """Run inference on data."""
/opt/conda/envs/bio-transformers/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in fit(self, model, train_dataloader, val_dataloaders, datamodule)
456 )
457
--> 458 self._run(model)
459
460 assert self.state.stopped
/opt/conda/envs/bio-transformers/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in _run(self, model)
754
755 # dispatch `start_training` or `start_evaluating` or `start_predicting`
--> 756 self.dispatch()
757
758 # plugin will finalized fitting (e.g. ddp_spawn will load trained model)
/opt/conda/envs/bio-transformers/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in dispatch(self)
795 self.accelerator.start_predicting(self)
796 else:
--> 797 self.accelerator.start_training(self)
798
799 def run_stage(self):
/opt/conda/envs/bio-transformers/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py in start_training(self, trainer)
94
95 def start_training(self, trainer: 'pl.Trainer') -> None:
---> 96 self.training_type_plugin.start_training(trainer)
97
98 def start_evaluating(self, trainer: 'pl.Trainer') -> None:
/opt/conda/envs/bio-transformers/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py in start_training(self, trainer)
142 def start_training(self, trainer: 'pl.Trainer') -> None:
143 # double dispatch to initiate the training loop
--> 144 self._results = trainer.run_stage()
145
146 def start_evaluating(self, trainer: 'pl.Trainer') -> None:
/opt/conda/envs/bio-transformers/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in run_stage(self)
805 if self.predicting:
806 return self.run_predict()
--> 807 return self.run_train()
808
809 def _pre_training_routine(self):
/opt/conda/envs/bio-transformers/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in run_train(self)
840 self.progress_bar_callback.disable()
841
--> 842 self.run_sanity_check(self.lightning_module)
843
844 self.checkpoint_connector.has_trained = False
/opt/conda/envs/bio-transformers/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in run_sanity_check(self, ref_model)
1105
1106 # run eval step
-> 1107 self.run_evaluation()
1108
1109 self.on_sanity_check_end()
/opt/conda/envs/bio-transformers/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in run_evaluation(self, on_epoch)
960 # lightning module methods
961 with self.profiler.profile("evaluation_step_and_end"):
--> 962 output = self.evaluation_loop.evaluation_step(batch, batch_idx, dataloader_idx)
963 output = self.evaluation_loop.evaluation_step_end(output)
964
/opt/conda/envs/bio-transformers/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py in evaluation_step(self, batch, batch_idx, dataloader_idx)
172 model_ref._current_fx_name = "validation_step"
173 with self.trainer.profiler.profile("validation_step"):
--> 174 output = self.trainer.accelerator.validation_step(args)
175
176 # capture any logged information
/opt/conda/envs/bio-transformers/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py in validation_step(self, args)
224
225 with self.precision_plugin.val_step_context(), self.training_type_plugin.val_step_context():
--> 226 return self.training_type_plugin.validation_step(*args)
227
228 def test_step(self, args: List[Union[Any, int]]) -> Optional[STEP_OUTPUT]:
/opt/conda/envs/bio-transformers/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py in validation_step(self, *args, **kwargs)
159
160 def validation_step(self, *args, **kwargs):
--> 161 return self.lightning_module.validation_step(*args, **kwargs)
162
163 def test_step(self, *args, **kwargs):
<ipython-input-153-a8d51af53bb5> in validation_step(self, batch, batch_idx)
78 if self.output_shape > 1:
79 y_hat = torch.log(y_hat)
---> 80 loss = self.loss(y_hat, y)
81 self.log("val_loss", loss, on_epoch=True, on_step=False)
82 return {"val_loss": loss}
/opt/conda/envs/bio-transformers/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
/opt/conda/envs/bio-transformers/lib/python3.7/site-packages/torch/nn/modules/loss.py in forward(self, input, target)
611 def forward(self, input: Tensor, target: Tensor) -> Tensor:
612 assert self.weight is None or isinstance(self.weight, Tensor)
--> 613 return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction)
614
615
/opt/conda/envs/bio-transformers/lib/python3.7/site-packages/torch/nn/functional.py in binary_cross_entropy(input, target, weight, size_average, reduce, reduction)
2760 weight = weight.expand(new_size)
2761
-> 2762 return torch._C._nn.binary_cross_entropy(input, target, weight, reduction_enum)
2763
2764
RuntimeError: all elements of input should be between 0 and 1
Error 2
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-139-b7e8b13763ef> in <module>
1 # Model evaluation
----> 2 y_pred = mlp(embeddings_val).squeeze().detach().numpy()
3 model_evaluation_accuracy(np.array(y_val), y_pred)
/opt/conda/envs/bio-transformers/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
<ipython-input-136-e2fc535640ab> in forward(self, x)
55 self.fc= nn.Linear(self.hidden_size, self.output_shape)
56 def forward(self, x):
---> 57 h0=torch.zeros(self.num_layers, x_size(0), self.hidden_size).to(device)
58 c0=torch.zeros(self.num_layers, x_size(0), self.hidden_size).to(device)
59 out, _=self.lstm(x,(h0, c0))
NameError: name 'x_size' is not defined
| I am adding this as an answer because it would be too hard to put in a comment.
The main problem that you have is with the BCE loss. IIRC BCE loss expects p(y=1), so your output should be between 0 and 1. If you want to work with logits instead (which is also more numerically stable), you should use BCEWithLogitsLoss (torch.nn.BCEWithLogitsLoss).
As you mention in one of the comments, you are using the sigmoid activation but something about your forward function looks off to me. Mainly the last line of your forward function is
return self.fc(x)
This does not apply the sigmoid activation. Moreover, you are only using the raw input x to produce the output, so the LSTM outputs are just being discarded. I think it would be a good idea to add some print statements or breakpoints to make sure that the intermediate outputs are as you expect them to be.
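For illustration, here is a hedged sketch of a forward that actually uses the LSTM output and applies the activation (attribute names follow your snippet; self.activation is the Sigmoid/Softmax that fit() sets, so this assumes the model is used after fit(); it also fixes the x_size typo from Error 2):
def forward(self, x):
    h0 = torch.zeros(self.num_layers, x.size(0), self.n_neurons, device=x.device)
    c0 = torch.zeros(self.num_layers, x.size(0), self.n_neurons, device=x.device)
    out, _ = self.lstm(x, (h0, c0))   # actually use the LSTM output
    out = self.fc(out[:, -1, :])      # last time step -> classification head
    return self.activation(out)       # sigmoid/softmax keeps the BCE input in [0, 1]
The training/validation steps would then call self(x) (i.e. this forward) instead of self.fc(x).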
| https://stackoverflow.com/questions/68351091/ |
How to drop running stats to default value for Norm layer in PyTorch? | I trained a model on some images. Now, to fit a similar dataset but with different colors, I want to load this model but also drop all running stats from the BatchNorm layers (i.e. set them to their default values, as if totally untrained). What parameters should I reset? A simple model looks like this:
import torch
import torch.nn as nn
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv0 = nn.Conv2d(3, 3, 3, padding = 1)
self.norm = nn.BatchNorm2d(3)
self.conv = nn.Conv2d(3, 3, 3, padding = 1)
def forward(self, x):
x = self.conv0(x)
x = self.norm(x)
return self.conv(x)
net = Net()
##or for pretrained it will be
##net = torch.load('net.pth')
def drop_to_default():
for m in net.modules():
if type(m) == nn.BatchNorm2d:
####???####
drop_to_default()
| The simplest way to do that is to run the reset_running_stats() method on the BatchNorm objects:
def drop_to_default():
for m in net.modules():
if type(m) == nn.BatchNorm2d:
m.reset_running_stats()
Below is this method's source code:
def reset_running_stats(self) -> None:
if self.track_running_stats:
# running_mean/running_var/num_batches... are registered at runtime depending
# if self.track_running_stats is on
self.running_mean.zero_() # Zero (neutral) mean
self.running_var.fill_(1) # One (neutral) variance
self.num_batches_tracked.zero_() # Number of batches tracked
You can see the source code here, _NormBase class.
| https://stackoverflow.com/questions/68351686/ |
Optimizing my Cuda kernel to sum varying index ranges inside a torch tensor | I'd like to write a Cuda kernel to sum given (contiguous) index ranges in an array. For example, the input array is arr=[1]*10 and I want 3 sums - sum(arr[0:2]), sum(arr[2:3]), sum(arr[3:10]), so the output should be [2, 1, 7].
My arrays are large 2-dimensional arrays (so I want to do this summation for each row, with the same indices), dimensions are typically around 1,000 by 100,000 with the index sub-ranges to be summed varying a lot (between 1 and >1,000). The arrays are already on the GPU as Pytorch tensors so moving them back and forth to/from the CPU for this purpose is costly.
I wrote the following Numba Kernel (here with a minimal working example). Basically, each thread is responsible for a single source column. It finds the relevant target column (w.r.t. index ranges) and adds the column to the target.
from numba import cuda
import numpy as np
from math import ceil
@cuda.jit
def sum_idxs(arr, idxs, sum_arr):
pos = cuda.grid(1)
if pos>=arr.shape[1]: return
for i in range(len(idxs)):
if idxs[i]<=pos<idxs[i+1]:
thread_idx = i
break
for i in range(arr.shape[0]):
cuda.atomic.add(sum_arr, (i, thread_idx), arr[i, pos])
arr = np.ones(shape=(3, 10))
idxs = np.array([0, 2, 3, 10])
sum_arr = np.zeros(shape=(arr.shape[0], len(idxs)-1))
threads_per_block = 32
blocks_per_grid = ceil(arr.shape[1] / threads_per_block)
sum_idxs[threads_per_block, blocks_per_grid](arr, idxs, sum_arr)
print(sum_arr)
which gives the correct result
[[2. 1. 7.]
[2. 1. 7.]
[2. 1. 7.]]
and allows me to do keep my tensors on the GPU as desired.
(I've used numpy arrays here for simplicity. In my code I use cuda.as_cuda_array(tensor) for my pytorch tensor)
However, this is still a major performance bottleneck in my code. Is there any way to further optimize it?
| Here is one possible approach. Segmented reductions can often be implemented fairly efficiently by using one block per segment (or in this case, we will use one block per row). If the number of segments/rows is large enough, this will tend to saturate the GPU.
The code design I will suggest will use one block per row, and each block will process the 3 segments of that row in order. To process a segment, the block will use a canonical CUDA reduction implemented using a block-stride loop to do the initial data gather.
Here is an example, fixing some things in your code that have been mentioned in the comments (correct grid dimensioning, conversion to float32):
$ cat t73.py
from numba import cuda,float32,int32
import numpy as np
import math
#TPB = threads per block, max of 1024
#TPB must be the power-of-2 expressed in TPBP2, i.e. TPB = 2**TPBP2
TPB = 1024
H = TPB//2
TPBP2 = 10
@cuda.jit
def sum_idxs(arr, idxs, sum_arr):
pos = cuda.grid(1)
if pos>=arr.shape[1]: return
for i in range(len(idxs)):
if idxs[i]<=pos<idxs[i+1]:
thread_idx = i
break
for i in range(arr.shape[0]):
cuda.atomic.add(sum_arr, (i, thread_idx), arr[i, pos])
@cuda.jit
def sum_idxs_i(arr, idxs, sum_arr):
s = cuda.shared.array(shape=(TPB), dtype=float32)
tx = cuda.threadIdx.x
row = cuda.blockIdx.x
#process each of the 3 segments in a row
for j in range(3):
lower = idxs[j]
upper = idxs[j+1]
val = float32(0)
#block-stride loop to gather data from the segment
for i in range(tx+lower, upper, TPB):
val += arr[row, i]
#canonical shared-memory parallel reduction
s[tx] = val
mid = H
for i in range(TPBP2):
cuda.syncthreads()
if tx < mid:
s[tx] += s[tx+mid]
mid >>= 1
if tx == 0:
sum_arr[row, j] = s[0]
rows = 1000
cols = 100000
arr = np.ones(shape=(rows, cols),dtype=np.float32)
idxs = np.array([0, 2, 3, cols],dtype=np.int32)
sum_arr = np.zeros(shape=(arr.shape[0], len(idxs)-1),dtype=np.float32)
blocks_per_grid = math.ceil(arr.shape[1] / TPB)
sum_idxs[blocks_per_grid, TPB](arr, idxs, sum_arr)
print(sum_arr)
sum_arr = np.zeros(shape=(arr.shape[0], len(idxs)-1),dtype=np.float32)
blocks_per_grid = (arr.shape[0])
sum_idxs_i[blocks_per_grid, TPB](arr, idxs, sum_arr)
print(sum_arr)
$ nvprof python t73.py
==4383== NVPROF is profiling process 4383, command: python t73.py
[[2.0000e+00 1.0000e+00 9.9997e+04]
[2.0000e+00 1.0000e+00 9.9997e+04]
[2.0000e+00 1.0000e+00 9.9997e+04]
...
[2.0000e+00 1.0000e+00 9.9997e+04]
[2.0000e+00 1.0000e+00 9.9997e+04]
[2.0000e+00 1.0000e+00 9.9997e+04]]
[[2.0000e+00 1.0000e+00 9.9997e+04]
[2.0000e+00 1.0000e+00 9.9997e+04]
[2.0000e+00 1.0000e+00 9.9997e+04]
...
[2.0000e+00 1.0000e+00 9.9997e+04]
[2.0000e+00 1.0000e+00 9.9997e+04]
[2.0000e+00 1.0000e+00 9.9997e+04]]
==4383== Profiling application: python t73.py
==4383== Profiling result:
Type Time(%) Time Calls Avg Min Max Name
GPU activities: 45.92% 287.93ms 6 47.988ms 1.1520us 144.09ms [CUDA memcpy HtoD]
44.88% 281.42ms 6 46.903ms 1.4720us 140.74ms [CUDA memcpy DtoH]
8.46% 53.052ms 1 53.052ms 53.052ms 53.052ms cudapy::__main__::sum_idxs$241(Array<float, int=2, C, mutable, aligned>, Array<int, int=1, C, mutable, aligned>, Array<float, int=2, C, mutable, aligned>)
0.75% 4.6729ms 1 4.6729ms 4.6729ms 4.6729ms cudapy::__main__::sum_idxs_i$242(Array<float, int=2, C, mutable, aligned>, Array<int, int=1, C, mutable, aligned>, Array<double, int=2, C, mutable, aligned>)
API calls: 43.26% 339.61ms 6 56.602ms 20.831us 193.89ms cuMemcpyDtoH
36.75% 288.52ms 6 48.087ms 15.434us 144.35ms cuMemcpyHtoD
18.66% 146.51ms 1 146.51ms 146.51ms 146.51ms cuDevicePrimaryCtxRetain
0.93% 7.3083ms 5 1.4617ms 4.8120us 6.7314ms cuMemFree
0.23% 1.8049ms 6 300.81us 9.4520us 778.85us cuMemAlloc
0.04% 327.52us 2 163.76us 156.34us 171.19us cuLinkAddData
0.04% 299.72us 2 149.86us 148.92us 150.80us cuModuleLoadDataEx
0.04% 276.32us 2 138.16us 131.16us 145.16us cuLinkComplete
0.02% 123.96us 2 61.978us 61.252us 62.704us cuLinkCreate
0.01% 64.406us 2 32.203us 29.439us 34.967us cuLaunchKernel
0.01% 63.184us 2 31.592us 30.251us 32.933us cuDeviceGetName
0.00% 29.454us 1 29.454us 29.454us 29.454us cuMemGetInfo
0.00% 20.732us 26 797ns 477ns 2.0320us cuCtxGetCurrent
0.00% 12.852us 25 514ns 363ns 1.0920us cuCtxGetDevice
0.00% 12.429us 2 6.2140us 1.7830us 10.646us cuDeviceGetPCIBusId
0.00% 5.0950us 10 509ns 302ns 1.0770us cuFuncGetAttribute
0.00% 3.9600us 2 1.9800us 1.8000us 2.1600us cuModuleGetFunction
0.00% 3.5630us 2 1.7810us 1.7510us 1.8120us cuLinkDestroy
0.00% 1.8970us 1 1.8970us 1.8970us 1.8970us cuCtxPushCurrent
0.00% 1.8370us 4 459ns 226ns 697ns cuDeviceGet
0.00% 1.6080us 6 268ns 181ns 481ns cuDeviceGetAttribute
0.00% 1.5060us 3 502ns 230ns 795ns cuDeviceGetCount
0.00% 1.2390us 2 619ns 428ns 811ns cuDeviceComputeCapability
$
This code was run on a GTX960, which happens to report ~84GB/s of device memory bandwidth via bandwidthTest CUDA sample code. In the above example, we see that the improved kernel runs in ~4.7ms (about 10x faster than the original atomic kernel) and this translates to (1000*100000*4)bytes/4.7ms ~= 85GB/s, so we can conclude for this specific test case, this kernel is approximately "optimal".
| https://stackoverflow.com/questions/68352422/ |
Why does Python multiprocessing use more CPU and GPU than the specified number of parallel processes? | I use Python multiprocessing to run 8 PyTorch processes in parallel (for 8 CPU cores and 8 GPU threads), but it consumed 48 CPUs and 24+ GPU threads. Does anybody have clues on how to reduce the 48 CPUs and 24+ GPU threads to 8 CPU cores and 8 GPU threads?
htop screenshot
(py38) [ec2-user@ip current]$ nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.119.03 Driver Version: 450.119.03 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla T4 On | 00000000:00:1B.0 Off | 0 |
| N/A 47C P0 28W / 70W | 8050MiB / 15109MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 Tesla T4 On | 00000000:00:1C.0 Off | 0 |
| N/A 50C P0 29W / 70W | 8962MiB / 15109MiB | 11% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 2 Tesla T4 On | 00000000:00:1D.0 Off | 0 |
| N/A 49C P0 28W / 70W | 9339MiB / 15109MiB | 9% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 3 Tesla T4 On | 00000000:00:1E.0 Off | 0 |
| N/A 49C P0 28W / 70W | 9761MiB / 15109MiB | 3% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 30971 C python 1167MiB |
| 0 N/A N/A 30973 C python 1135MiB |
| 0 N/A N/A 30974 C python 1135MiB |
| 0 N/A N/A 30975 C python 1135MiB |
| 0 N/A N/A 30976 C python 1195MiB |
| 0 N/A N/A 30977 C python 1115MiB |
| 0 N/A N/A 30978 C python 1163MiB |
| 1 N/A N/A 30971 C python 1259MiB |
| 1 N/A N/A 30972 C python 1241MiB |
| 1 N/A N/A 30973 C python 1295MiB |
| 1 N/A N/A 30975 C python 1273MiB |
| 1 N/A N/A 30976 C python 1287MiB |
| 1 N/A N/A 30977 C python 1269MiB |
| 1 N/A N/A 30978 C python 1333MiB |
| 2 N/A N/A 30971 C python 1263MiB |
| 2 N/A N/A 30972 C python 1163MiB |
| 2 N/A N/A 30973 C python 1167MiB |
| 2 N/A N/A 30974 C python 1135MiB |
| 2 N/A N/A 30975 C python 1135MiB |
| 2 N/A N/A 30976 C python 1167MiB |
| 2 N/A N/A 30977 C python 1137MiB |
| 2 N/A N/A 30978 C python 1167MiB |
| 3 N/A N/A 30971 C python 1195MiB |
| 3 N/A N/A 30972 C python 1291MiB |
| 3 N/A N/A 30973 C python 1175MiB |
| 3 N/A N/A 30974 C python 1235MiB |
| 3 N/A N/A 30975 C python 1181MiB |
| 3 N/A N/A 30976 C python 1153MiB |
| 3 N/A N/A 30977 C python 1263MiB |
| 3 N/A N/A 30978 C python 1263MiB |
+-----------------------------------------------------------------------------+
Here is the related code snippet:
p = multiprocessing.Pool(processes=8)
for id in id_list:
p.apply_async(
evaluate,
[id],
)
def evaluate(id):
# PyTorch code ...
| Instead of digging into how multiprocessing dispatch and memory releasing work, I changed the code to loop over a fixed number of processes (each handling a slice of the list) instead of looping over the whole video list, which resolved the problem in a simpler, more controllable way. Now I can run 64 parallel processes with exactly the specified GPU memory.
multiprocess_num = 8
batch_size = int(len(the_list) / multiprocess_num)
for i in range(multiprocess_num):
sub_list = the_list[i * batch_size:(i + 1) * batch_size]
p.apply_async(
evaluate,
[sub_list],
)
def evaluate(sub_list):
for id in sub_list:
# PyTorch code ...
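If each worker process still spins up many intra-op (OpenMP) threads, a complementary knob worth trying (my assumption, not part of the fix above) is to cap PyTorch's thread count at the top of each worker:
import torch

def evaluate(sub_list):
    torch.set_num_threads(1)  # limit this worker process to a single intra-op thread
    for id in sub_list:
        ...  # PyTorch code as before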
| https://stackoverflow.com/questions/68353446/ |
torch tensor reshape from torch.Size([1, 16384, 3]) to torch.Size([1, 128, 128, 3]) | I have a torch tensor shaped, torch.Size([1, 16384, 3]) and I want it to be torch.Size([1, 128, 128, 3]).
How can I do that?
| You can use the .view method, but make sure you don't truncate or extrapolate the dimensions, as you will get a RuntimeError.
I suggest:
# Your tensor
a = torch.ones((1, 16384, 3))
a = a.view((1, 128, 128, 3))
# Test if it works
print(a.size())
>> torch.Size([1, 128, 128, 3])
Optional: you can also use up to one inferred size value (-1), like this
a = torch.ones((1, 16384, 3))
# Place -1 wherever you want an inferred value but make sure only one -1.
a = a.view(1, -1, 128, 3))
# or
a = a.view(1, 128, 128, -1))
Sarthak Jain
| https://stackoverflow.com/questions/68355706/ |
"Command errored out with exit status 1" error while installing pytorch | While installing any package using pip the same error occurs again and again. Earlier the error was Environment error so I used the command :pip install --user pytorch but again an error started occurring as :
ERROR: Command errored out with exit status 1: 'c:\users\rachi\appdata\local\programs\python\python39\python.exe' -u -c 'import io,
os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Public\\Documents\\Wondershare\\CreatorTemp\\pip-install-sl2341tq\\pytorch_e91f041838de48259fd31f183263d4ca\\setup.py'"'"'; __file__='"'"'C:\\Users\\Public\\Documents\\Wondershare\\CreatorTemp\\pip-install-sl2341tq\\pytorch_e91f041838de48259fd31f183263d4ca\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\Public\Documents\Wondershare\CreatorTemp\pip-record-o8hob45r\install-record.txt' --single-version-externally-managed --user --prefix= --compile --install-headers 'C:\Users\rachi\AppData\Roaming\Python\Python39\Include\pytorch'
I don't know what to do; I am new to Python. I am also attaching an image of the terminal log from the installation.
| The error is explicit. The package is not pytorch but torch
> pip install torch
Exception: You tried to install "pytorch". The package named for PyTorch is "torch"
----------------------------------------
| https://stackoverflow.com/questions/68357313/ |
Mix pytorch lightning with vanilla pytorch | I am doing meta-learning research and am using the MAML optimization provided by learn2learn. However, as one of the baselines, I would like to test a non-meta-learning approach, i.e. traditional training + testing.
Due to Lightning's internal usage of the optimizer, it seems difficult to make MAML work with learn2learn in Lightning, so I couldn't use Lightning in my meta-learning setup. However, for my baseline I would really like to use Lightning, since it provides many handy functionalities like DeepSpeed or DDP out of the box.
Here is my question: other than setting up two separate folders/repos, how could I mix vanilla PyTorch (learn2learn) with PyTorch Lightning (baseline)? What is the best practice?
Thanks!
| I decided to answer my own question. I ended up using PyTorch Lightning's manual optimization so that I can customize the optimization step. This makes both approaches use the same framework, which I think is better than maintaining 2 separate repos.
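For anyone looking for a concrete starting point, here is a minimal sketch of manual optimization in Lightning (the model and loss are placeholders, not code from my project):
import pytorch_lightning as pl
import torch

class BaselineModule(pl.LightningModule):
    def __init__(self, model):
        super().__init__()
        self.model = model
        self.automatic_optimization = False  # take control of the optimization step

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        x, y = batch
        loss = torch.nn.functional.cross_entropy(self.model(x), y)
        opt.zero_grad()
        self.manual_backward(loss)  # instead of loss.backward(), so Lightning's hooks still run
        opt.step()
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.model.parameters())
You still get Trainer features like DDP/DeepSpeed, while the optimization step itself stays fully under your control.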
| https://stackoverflow.com/questions/68359563/ |
Error when I modify number of input channels of inception-v3 | I would like to customize inception_v3 to make it work for 4-channel input.
I tried to modify the first layer of Inception v3 as below.
x=torch.randn((5,4,299,299))
model_ft=models.inception_v3(pretrained=True)
model_ft.Conv2d_1a_3x3.conv=nn.Conv2d(4, 32, kernel_size=(3, 3), stride=(2, 2), bias=False)
print(x.shape)
print(model_ft.Conv2d_1a_3x3.conv)
out=model_ft(x)
but it produces the following error.
I think the input shape and network are correctly modified, so I can't understand why it raises an error. Does anyone have any advice?
torch.Size([5, 4, 299, 299])
Conv2d(4, 32, kernel_size=(3, 3), stride=(2, 2), bias=False)
RuntimeErrorTraceback (most recent call last)
<ipython-input-118-41c045338348> in <module>
29 print(model_ft.Conv2d_1a_3x3.conv)
30
---> 31 out=model_ft(x)
32 print(out)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
/usr/local/lib/python3.6/dist-packages/torchvision/models/inception.py in forward(self, x)
202 def forward(self, x: Tensor) -> InceptionOutputs:
203 x = self._transform_input(x)
--> 204 x, aux = self._forward(x)
205 aux_defined = self.training and self.aux_logits
206 if torch.jit.is_scripting():
/usr/local/lib/python3.6/dist-packages/torchvision/models/inception.py in _forward(self, x)
141 def _forward(self, x: Tensor) -> Tuple[Tensor, Optional[Tensor]]:
142 # N x 3 x 299 x 299
--> 143 x = self.Conv2d_1a_3x3(x)
144 # N x 32 x 149 x 149
145 x = self.Conv2d_2a_3x3(x)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
/usr/local/lib/python3.6/dist-packages/torchvision/models/inception.py in forward(self, x)
474
475 def forward(self, x: Tensor) -> Tensor:
--> 476 x = self.conv(x)
477 x = self.bn(x)
478 return F.relu(x, inplace=True)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in forward(self, input)
441
442 def forward(self, input: Tensor) -> Tensor:
--> 443 return self._conv_forward(input, self.weight, self.bias)
444
445 class Conv3d(_ConvNd):
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)
438 _pair(0), self.dilation, self.groups)
439 return F.conv2d(input, weight, bias, self.stride,
--> 440 self.padding, self.dilation, self.groups)
441
442 def forward(self, input: Tensor) -> Tensor:
RuntimeError: Given groups=1, weight of size [32, 4, 3, 3], expected input[5, 3, 299, 299] to have 4 channels, but got 3 channels instead
| The error is due to the param pretrained=True.
Since you are using pretrained weights, you cannot edit the shape of the pretrained weights to adjust them for 4 channels; hence the error pops up.
Please use it this way (which will only load the architecture):
x=torch.randn((5,4,299,299))
model_ft=models.inception_v3(pretrained=False)
model_ft.Conv2d_1a_3x3.conv=nn.Conv2d(4, 32, kernel_size=(3, 3), stride=(2, 2), bias=False)
print(x.shape)
print(model_ft.Conv2d_1a_3x3.conv)
out=model_ft(x)
and it will work
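If you would rather keep the pretrained weights, one possible workaround (treat this as an untested sketch) is to copy the pretrained 3-channel filters into a new 4-channel conv and also set transform_input to False, since the pretrained model otherwise rebuilds a 3-channel input internally:
import torch
import torch.nn as nn
from torchvision import models

x = torch.randn(5, 4, 299, 299)
model_ft = models.inception_v3(pretrained=True)
model_ft.transform_input = False  # otherwise the input is re-normalized back to 3 channels

old_conv = model_ft.Conv2d_1a_3x3.conv
new_conv = nn.Conv2d(4, 32, kernel_size=3, stride=2, bias=False)
with torch.no_grad():
    new_conv.weight[:, :3] = old_conv.weight                             # reuse the RGB filters
    new_conv.weight[:, 3:] = old_conv.weight.mean(dim=1, keepdim=True)   # init the extra channel from their mean
model_ft.Conv2d_1a_3x3.conv = new_conv

model_ft.eval()
with torch.no_grad():
    out = model_ft(x)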
| https://stackoverflow.com/questions/68360494/ |
Pytorch C++ (Libtroch), using inter-op parallelism | I am working on a machine learning system using the C++ API of PyTorch (libtorch).
One thing that I have recently been working on is researching the performance, CPU utilization and GPU usage of libtorch. Through my research I understand that Torch utilizes two ways of parallelization on CPUs:
inter-op parallelization
intra-op parallelization
My main questions are:
difference between these two
how can I utilize inter-op parallelism
I know that I can specify the number of threads used for intra-op parallelism (which, from my understanding, is performed using the OpenMP backend) using the torch::set_num_threads() function. As I monitor the performance of my models, I can clearly see that it uses the number of threads I specify with this function, and I can see a clear performance difference when changing the number of intra-op threads.
There is also another function torch::set_num_interop_threads(), but it seems that no matter how many interop threads I specify, I never see any difference in performance.
Now I have read this PyTorch documentation article, but it is still unclear to me how to utilize the inter-op thread pool.
The docs say:
PyTorch uses a single thread pool for the inter-op parallelism, this thread pool is shared by all inference tasks that are forked within the application process.
I have two questions to this part:
do I need to create new threads myself to utilize the interop threads, or does torch do it somehow for me internally?
If I need to create new threads myself, how do I do it in C++, so that I create a new thread from the inter-op thread pool?
In the Python example they use the fork function from the torch.jit module, but I can't find anything similar in the C++ API.
| Questions
difference between these two
As one can see on this picture:
intra-op - parallelization done for single operation (like matmul or any other "per-tensor")
inter-op - you have multiple operations and their calculations can be intertwined
inter-op "example":
op1 starts and returns "Future" object (which is an object we can query for result once this operation finishes)
op2 starts immediately after (as op1 is non-blocking right now)
op2 finishes
we can query op1 for result (hopefully finished already or at least closer to finishing)
we add op1 and op2 results together (or whatever we'd like to do with them)
Due to above:
intra-op works without any additions (as it's PyTorch handled) and should improve the performance
inter-op is user driven (model's architecture, forward especially), hence architecture must be created with inter-op in mind!
how can I utilize inter-op parallelism
Unless you architectured your models with inter-op in mind (using for example Futures, see first code snippet in the link you posted) you won't see any performance improvements.
Most probably:
Your models are written in Python, converted to torchscript and only inference is done in C++
You should write (or refactor existing) inter-op code in Python, e.g. using torch.jit.fork and torch.jit.wait
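A minimal Python-side sketch of that pattern (a toy example with a plain matmul; in a real model you would fork submodule calls instead):
import torch

def heavy_op(x):
    return torch.mm(x, x)

@torch.jit.script
def parallel_forward(x):
    fut = torch.jit.fork(heavy_op, x)   # schedules the call on the inter-op pool and returns a Future
    y = heavy_op(x + 1)                 # can run concurrently with the forked task
    return torch.jit.wait(fut) + y      # blocks until the Future completes

out = parallel_forward(torch.randn(512, 512))
Note that the parallelism only kicks in when the code is compiled with TorchScript; in plain eager Python, fork does not run in parallel.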
do I need to create new threads myself to utilize the interop threads, or does torch do it somehow for me internally?
Not sure if it's possible in C++ currently, can't find any torch::jit::fork or related functionality.
If I need to create new threads myself, how do I do it in C++, so that
I create a new thread from the inter-op thread pool?
Unlikely, as the C++ API's goal is to mimic Python's API as closely as possible. You might have to dig a little deeper into the source code related to it and/or post a feature request on their GitHub repo if needed.
| https://stackoverflow.com/questions/68361267/ |
How can I create a torch tensor from a numpy.array | I have created code that generates a matrix with shape (12,12) for color image analysis using a convolutional neural network.
The input of my script is a torch tensor of shape (5,3,12,12). I stripped off the first two dimensions (5 and 3) by indexing and calling detach().numpy().
The script :
for k in range(5):
for l in range(3):
y=x[k][l].detach().numpy()
m,n= y.shape
im=np.pad(y,((1,1),(1,1)),'constant')
Enhanced = np.zeros((m,n))
for i in range(1,m+1):
for j in range(1,n+1):
...
z.append(Enhanced)
... represents a simple function which I don't want to bother you with.
z is a list of the Enhanced arrays, which are numpy arrays.
So my goal is to create a torch tensor of shape (5,3,12,12) from the Enhanced numpy arrays.
I added these lines to my code inside the for loop:
r=torch.Tensor(z)
print(r.shape)
and when I print r.shape I get:
torch.Size([3, 12, 12])
torch.Size([6, 12, 12])
torch.Size([9, 12, 12])
torch.Size([12, 12, 12])
torch.Size([15, 12, 12])
So what I understand is that I need to stack those r tensors.
I used the function t=np.stack(r)
but what I get is the shape of the last iteration, which is torch.Size([15, 12, 12]),
so how can I modify this to get a shape of (5, 3, 12, 12)?
| You have to:
stack the list of np.arrays together (the Enhanced ones)
convert the result to a PyTorch tensor via the torch.from_numpy function
For example:
import numpy as np
some_data = [np.random.randn(3, 12, 12) for _ in range(5)]
stacked = np.stack(some_data)
tensor = torch.from_numpy(stacked)
Please note that each np.array in the list has to be of the same shape.
For different shapes, one could do this:
import numpy as np
import torch
some_data = [np.random.randn(3, 12, 12) for _ in range(5)] + [
np.random.randn(6, 12, 12)
]
stacked = np.concatenate(some_data).reshape(-1, 3, 12, 12)
tensor = torch.from_numpy(stacked)
print(tensor.shape)
In your case:
r = torch.from_numpy(np.concatenate(z).reshape(-1, 3, 12, 12))
| https://stackoverflow.com/questions/68362245/ |
require_grad = True in pytorch model despite changing require_grad = false for all parameters | I am trying to extract features from a pretrained model in PyTorch and then use the features for further training.
I have imported the model and set requires_grad to False for all parameters as follows:
import torchvision.models as models
vgg_model = models.vgg19_bn(pretrained=True)
for param in vgg_model.parameters():
param.requires_grad = False
Now, I defined my model, that extracts the features and then train on other layers as follows:
class VGGModel(nn.Module):
def __init__(self):
'''Input Image Size: (227, 227)'''
super(VGGModel, self).__init__()
self.inception = list(model.children())[0]
# self.inception = incept_model
self.conv1 = nn.Conv2d(in_channels = 512, out_channels = 128, kernel_size = 5)
self.dropout = nn.Dropout(0.4)
self.fc1 = nn.Linear(128, 5)
def forward(self, x):
x = self.inception(x)
x = F.relu(x)
x = self.conv1(x)
x = F.relu(x)
x = F.max_pool2d(x, kernel_size=3)
x = torch.flatten(x, 1)
x = self.dropout(x)
x = self.fc1(x)
x = F.log_softmax(x, dim=1)
return x
But when I check requires_grad for the model, the VGG layers are listed as if they require gradients as well.
model = VGGModel().to(device)
model.requires_grad_
output:
<bound method Module.requires_grad_ of VGGModel(
(inception): Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
(6): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(7): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(8): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(9): ReLU(inplace=True)
(10): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(11): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(12): ReLU(inplace=True)
(13): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(14): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(15): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(16): ReLU(inplace=True)
(17): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(18): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(19): ReLU(inplace=True)
(20): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(21): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(22): ReLU(inplace=True)
(23): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(24): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(25): ReLU(inplace=True)
(26): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(27): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(28): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(29): ReLU(inplace=True)
(30): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(31): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(32): ReLU(inplace=True)
(33): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(34): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(35): ReLU(inplace=True)
(36): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(37): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(38): ReLU(inplace=True)
(39): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(40): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(41): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(42): ReLU(inplace=True)
(43): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(44): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(45): ReLU(inplace=True)
(46): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(47): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(48): ReLU(inplace=True)
(49): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(50): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(51): ReLU(inplace=True)
(52): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(conv1): Conv2d(512, 128, kernel_size=(5, 5), stride=(1, 1))
(dropout): Dropout(p=0.4, inplace=False)
(fc1): Linear(in_features=128, out_features=5, bias=True)
)>
How do I prevent the pretrained model from training again?
| You should run the method:
model.requires_grad_(False)
You probably want to freeze only part of the network though, in your case you should change the fc1 attribute:
model.fc1 = torch.nn.Linear(128, num_classes)
Where num_classes is the number of classes you have (you should at least unfreeze the last linear layer).
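To double-check which parameters will actually be updated after doing that, a quick sketch:
trainable = [name for name, param in model.named_parameters() if param.requires_grad]
print(trainable)  # should list only the new head, e.g. ['fc1.weight', 'fc1.bias']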
| https://stackoverflow.com/questions/68369696/ |
PyTorch - CosineSimilarity in Sequential | I have a model which consists of 3 parts:
A submodule computing some vector x.
A similar submodule computing some vector y.
Final layer, which computes the similarity between these vectors and linearly scales it. Namely:
self.final = nn.Sequential(
nn.CosineSimilarity(),
nn.Linear(1, 1),
)
Now, I have problems with calling self.final. The following doesn't work:
self.final(x, y): forward() takes 2 positional arguments but 3 were given.
self.final([x, y]): forward() missing 1 required positional argument: 'x2'.
How should I use self.final? Of course, I can apply nn.CosineSimilarity and nn.Linear separately, but I wonder how to make this module work.
Edit: after using the modules separately, it turns out that I also need to reshape the output of CosineSimilarity, with .reshape(-1, 1), so the module as it is won't work. This also raises the question: how to fix the module (without defining a custom Reshape module like this one).
| It is not possible to pass multiple objects through a torch.nn.Sequential module. If you really want to understand what's going on under the hood, consider looking at the source. The forward() method of Sequential is implemented as
def forward(self, input):
for module in self:
input = module(input)
return input
(Also consider this thread.)
So there is no way to do what you're asking with this module. But you can easily write your own, for instance following can accept multiple inputs to the first entry:
class MySeq(nn.Sequential):
def __init__(self, *args):
super().__init__(*args)
def forward(self, *input):
flag = True
for module in self:
if flag:
input = module(*input)
flag = False
else:
input = module(input)
return input
To your second question: I don't think there are currently any modules in torch.nn for just reshaping. You need to do that manually, so I'd suggest writing your own module like:
class Final(nn.Module):
def __init__(self):
super().__init__()
self.cs = nn.CosineSimilarity()
self.lin = nn.Linear(1, 1)
def forward(self, x, y):
t = self.cs(x, y)
return self.lin(t.reshape(-1, 1))
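A quick usage sketch (the batch size 8 and feature size 64 are made-up shapes):
final = Final()
x = torch.randn(8, 64)
y = torch.randn(8, 64)
out = final(x, y)  # per-row cosine similarity followed by the learned affine map -> shape (8, 1)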
| https://stackoverflow.com/questions/68374109/ |
Dimension mismatch between output mask and original mask in dice loss in Semantic segmentation | I am doing multi-class semantic segmentation(4 classes + background). My mask dimension is (256, 256, 3) and the output mask dimension is (256, 256, 5). I took 5 because it is the number of classes.
Dice Loss Function
inputs = inputs.view(-1)
targets = targets.view(-1)
intersection = (inputs * targets).sum() ---> error
dice = (2.*intersection + smooth)/(inputs.sum() + targets.sum() + smooth)
return 1 - dice
What should I do to make the two dimensions the same? The mask was extracted from a TIF file.
I have attached my mask image below.
| I'm assuming the target segmentation you are showing is an RGB-encoded map. You are looking to convert this 3-channel image into a 1-channel label map.
Assuming seg is your ground-truth segmentation map shaped as (b, 3, h, w). The label to color mapping can be arbitrarily set as:
colors = torch.FloatTensor([[0, 0, 0],
[1, 1, 0],
[1, 0, 0],
[0, 1, 0],
[0, 0, 1]])
For each color construct a mask of matching pixel and assign the corresponding label in a new tensor at those pixel positions:
b, _, h, w = seg.shape
gt = torch.zeros(b,1,h,w)
seg_perm = seg.permute(0,2,3,1)
for label, color in enumerate(colors):
mask = torch.all(seg_perm == color, dim=-1).unsqueeze(1)
gt[mask] = label
For example taking the following segmentation map:
>>> seg = tensor([[[[1., 1., 0., 0.],
[1., 0., 0., 0.]],
[[0., 1., 0., 0.],
[0., 1., 0., 1.]],
[[0., 0., 0., 0.],
[0., 0., 1., 0.]]]])
For visualization purposes:
>>> T.ToPILImage()(seg[0].repeat_interleave(100,2).repeat_interleave(100,1))
And the resulting label map will be:
>>> gt
tensor([[[[2., 1., 0., 0.],
[2., 3., 4., 3.]]]])
| https://stackoverflow.com/questions/68377244/ |
Why matplotlib can't plot a histogram from a list of tensors? | I have a list of tensors:
data = [tensor(0.1647),tensor(0.1662),tensor(0.1650),tensor(0.1645),tensor(0.1683),tensor(0.1683),tensor(0.1648),tensor(0.1694),tensor(0.5016),tensor(0.5059),tensor(0.5031),tensor(0.5069),tensor(1.0047),tensor(0.9966),tensor(0.4958),tensor(0.9984),tensor(0.1664),tensor(0.1725),tensor(0.5011),tensor(0.1679),tensor(0.1694),tensor(0.5003),tensor(0.1672),tensor(0.4957),tensor(0.5080),tensor(0.5047),tensor(0.4956),tensor(0.5012),tensor(0.4978),tensor(0.4975),tensor(0.9926)]
I am trying to plot a histogram using
import matplotlib.pyplot as plt
plt.figure(figsize=(12,12))
plt.hist(data, bins=5)
plt.show();
The results are really weird:
However, if I'll change the list of tensors as follows:
new_data = [i.item() for i in flag_syn_values]
>>> new_data = [0.16471165418624878,0.16618900001049042,0.16499125957489014,0.16447265446186066,0.1683468520641327,0.16827784478664398,0.16477319598197937,0.16940128803253174,0.5015971064567566,0.5058760046958923,0.5030592679977417,0.5068832039833069,1.0046963691711426,0.9966360330581665,0.4957870543003082,0.9984496831893921,0.16643814742565155,0.17246568202972412,0.5011343955993652,0.16787128150463104,0.16941896080970764,0.5003153085708618,0.16719254851341248,0.4957270622253418,0.5079832673072815,0.5047211647033691,0.4956021308898926,0.5012259483337402,0.4977755844593048,0.49753040075302124,0.9925762414932251]
I can plot it normally:
plt.figure(figsize=(12,12))
plt.hist(new_data, bins=5)
plt.show();
| Your first example corresponds to the "sequence of (n,) arrays" case for the x parameter in the docs: In this case you obtain "... or ([n0, n1, ...], bins, [patches0, patches1, ...]) if the input contains multiple data". In other words: You are getting many overlaid histograms, each with a single data point. This is exactly what your first plot is showing. The solution to this will always be some flattening of the data, as you do in your second case.
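In PyTorch terms, one way to do that flattening without a Python loop is to stack the scalar tensors into a single 1-D tensor first (a small sketch reusing the data list from the question):
import torch
import matplotlib.pyplot as plt

values = torch.stack(data)        # list of 0-d tensors -> one 1-D tensor
plt.hist(values.numpy(), bins=5)  # a single histogram over all values
plt.show()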
| https://stackoverflow.com/questions/68378507/ |
Expected 4-dimensional input for 4-dimensional weight [6, 1, 5, 5], but got 3-dimensional input of size [1, 28, 28] instead | I am trying to make a neural network that is complex enough to fit the data (I am using the MNIST dataset). I had a small network; I tried to make a new one now and have stumbled upon this problem. The code is:
class NN1(nn.Module):
def __init__(self):
super(NN1, self).__init__()
self.conv1 = nn.Conv2d(1, 6, 5)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120) # 5*5 from image dimension
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
x = torch.flatten(x, 1) # flatten all dimensions except the batch dimension
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
transform_list = transforms.Compose([ transforms.ToTensor(), transforms.Normalize(mean=[0.0], std=[1.0,]) ] )
mnist_trainset = datasets.MNIST(root='./data', train=True, download=True, transform=transform_list)
mnist_trainset_small = [ mnist_trainset[i] for i in range(0,4000) ]
mnist_testset = datasets.MNIST(root='./data', train=False, download=True, transform=transform_list)
nn1 = NN1()
tmp = nn1.forward( mnist_trainset[0][0])
tmp
How can I fix this while building a good network?
| You should use a DataLoader on top of your Dataset:
mnist_train_dl = torch.utils.data.DataLoader(mnist_trainset, batch_size=16)
Predefined Pytorch modules work with batch-first tensors. In your case your model expects a tensor of shape (batch_size, channels=1, height, width).
You shouldn't call forward, instead call your module directly nn1(x).
Usually, you would loop through your data loader and infer/back-propagate/update for each batch. Something like:
for x, y in mnist_train_dl:
out = nn1(x)
# ...
However, you can debug your model by inferring one element, by accessing the first element of the first batch:
x, y = next(iter(mnist_train_dl))
out = nn1(x[:1]) # target is y[:1]
indexing by [:1] instead of [0] makes it so you don't squeeze the first axis.
| https://stackoverflow.com/questions/68379443/ |
Does knowledge distillation have an ensemble effect? | I don't know much about knowledge distillation.
I have one question.
There is a model showing 99% performance (10-class image classification), but I can't use a bigger model because I have to keep the inference time.
Does it have an ensemble effect if I train with knowledge distillation using another, bigger model?
-------option-------
Or let me know if there's any other way to improve performance.
| The technical answer is no. KD is a different technique from ensembling.
But they are related in the sense that KD was originally proposed to distill larger models, and the authors specifically cite ensemble models as the type of larger model they experimented on.
Net net, give KD a try on your big model to see if you can keep a lot of the performance of the bigger model but with the size of the smaller model. I have empirically found that you can retain 75%-80% of the power of the a 5x larger model after distilling it down to the smaller model.
From the abstract of the KD paper:
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
https://arxiv.org/abs/1503.02531
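For reference, a minimal sketch of the distillation objective described above (the temperature T and mixing weight alpha are typical hyperparameters you would tune, not values taken from the paper):
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.9):
    # soft part: KL divergence between temperature-softened teacher and student distributions
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    # hard part: ordinary cross-entropy against the ground-truth labels
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard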
| https://stackoverflow.com/questions/68380183/ |
Reduce dimensions of a tensor (to a scalar) | In:
a = torch.tensor([[2.4]])
torch.squeeze(a, 1)
a.size(), a
Out:
(torch.Size([1, 1]), tensor([[2.4000]]))
During computations using nn.MSELoss, I got a mismatch of dimensions.
Input had size ([1,1]) and target ([]).
The functions reshape and squeeze haven't worked.
I would be grateful for a solution to this embarrassingly simple problem. :]
Edit: there was a simple mistake of not assigning the squeezed value back to a. Thank you for your answer.
| The function torch.squeeze will not modify the input a in place. Either reassign it:
a = a.squeeze(1)
or use the in-place version of the function torch.squeeze_
a.squeeze_(1)
| https://stackoverflow.com/questions/68382874/ |
CUDA error: CUBLAS_STATUS_INVALID_VALUE error when training BERT model using HuggingFace | I am working on sentiment analysis on the Steam reviews dataset using a BERT model, where I have 2 labels: positive and negative. I have fine-tuned the model with 2 linear layers, and the code for that is below.
bert = BertForSequenceClassification.from_pretrained("bert-base-uncased",
num_labels = len(label_dict),
output_attentions = False,
output_hidden_states = False)
class bertModel(nn.Module):
def __init__(self, bert):
super(bertModel, self).__init__()
self.bert = bert
self.dropout1 = nn.Dropout(0.1)
self.relu = nn.ReLU()
self.fc1 = nn.Linear(768, 512)
self.fc2 = nn.Linear(512, 2)
self.softmax = nn.LogSoftmax(dim = 1)
def forward(self, **inputs):
_, x = self.bert(**inputs)
x = self.fc1(x)
x = self.relu(x)
x = self.dropout1(x)
x = self.fc2(x)
x = self.softmax(x)
return x
This is my train function:
def model_train(model, device, criterion, scheduler, optimizer, n_epochs):
train_loss = []
model.train()
for epoch in range(1, epochs+1):
total_train_loss, training_loss = 0,0
for idx, batch in enumerate(dataloader_train):
model.zero_grad()
data = tuple(b.to(device) for b in batch)
inputs = {'input_ids': data[0],'attention_mask': data[1],'labels':data[2]}
outputs = model(**inputs)
loss = criterion(outputs, labels)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
#update the weights
optimizer.step()
scheduler.step()
training_loss += loss.item()
total_train_loss += training_loss
if idx % 25 == 0:
print('Epoch: {}, Batch: {}, Training Loss: {}'.format(epoch, idx, training_loss/10))
training_loss = 0
#avg training loss
avg_train_loss = total_train_loss/len(dataloader_train)
#validation data loss
avg_pred_loss = model_evaluate(dataloader_val)
#print for every end of epoch
print('End of Epoch {}, Avg. Training Loss: {}, Avg. validation Loss: {} \n'.format(epoch, avg_train_loss, avg_pred_loss))
I am running this code on Google Colab. When I run the train function, I get the following error. I have tried batch sizes of 32, 256, and 512.
RuntimeError: CUDA error: CUBLAS_STATUS_INVALID_VALUE when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
Can anyone please help me on this? Thank you.
Update on the code: I tried running the code on the CPU and the error is a matrix shape mismatch. The input shape and the shape after self.bert are printed in the image. Since the first linear layer (fc1) is not getting executed, the shape after that is not printed.
| I suggest trying out a couple of things that can possibly solve the error.
As shown in this forum, one possible solution is to lower the batch size with which you load data, since it might be a memory error.
If that does not work, then I suggest, as shown in this GitHub issue, updating to a newer version of PyTorch CUDA that fixes a matrix multiplication bug that raises this same error. Hence, as shown in this forum, you can update PyTorch to the nightly pip wheel, or use the CUDA 10.2 or conda binaries. You can find information on such installations on the PyTorch home page, where it explains how to install PyTorch.
If none of that works, then the best thing to do is to run a smaller version of the process on the CPU and recreate the error. When running it on the CPU instead of CUDA, you will get a more useful traceback that can pinpoint your error.
EDIT (Based on Comments):
You have a matrix error in your model.
The problem stems from your forward function.
The BERT model outputs a tensor of torch.Size([64, 2]), which means that if you feed it into the linear layer you have, it will error, since that linear layer requires an input of shape (?, 768) because you initialized it as nn.Linear(768, 512). In order to make the error disappear, you need to either do some transformation on the tensor or initialize another linear layer, as shown below:
somewhere defined in __init__: self.fc0 = nn.Linear(2, 768)
def forward(self, **inputs):
_, x = self.bert(**inputs)
x = self.fc0(x)
x = self.fc1(x)
x = self.relu(x)
x = self.dropout1(x)
x = self.fc2(x)
x = self.softmax(x)
Sarthak Jain
| https://stackoverflow.com/questions/68383634/ |
Calculate angles in pytorch | If we have a set of points Rs, we can use torch.cdist to get all pairwise distances.
dists_ij = torch.cdist(Rs, Rs)
Is there a function to get the angles between two set of vectors Vs like this:
angs_ij = torch.angs(Vs, Vs)
| You can do this manually using the relation between the dot product of two vectors and the angle between them:
# normalize the vectors
nVs = Vs / torch.norm(Vs, p=2, dim=-1, keepdim=True)
# compute cosine of the angles using dot product
cos_ij = torch.einsum('bni,bmi->bnm', nVs, nVs)
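If you need the angles themselves rather than their cosines, you can follow up with torch.acos (clamping first, since rounding can push values slightly outside [-1, 1]):
# angles in radians
angs_ij = torch.acos(cos_ij.clamp(-1.0, 1.0))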
| https://stackoverflow.com/questions/68385757/ |
Error loading model saved with distributed data parallel | When loading a model which was saved from a model in distributed mode, the model names are different, resulting in this error. How can I resolve this?
File "/code/src/bert_structure_prediction/model.py", line 36, in __init__
self.load_state_dict(state_dict)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1223, in load_state
_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for BertCoordinatePredictor:
Missing key(s) in state_dict: "bert.embeddings.position_ids", "bert.embeddings.word_embeddings.weight", ...etc.
| The reason why the model names don't match is because DDP wraps the model object, resulting in different layer names when saving the model in distributed data parallel mode (specifically, layer names will have module. prepended to the model name). To resolve this, use
torch.save(model.module.state_dict(), PATH)
instead of
torch.save(model.state_dict(), PATH)
when saving from data parallel.
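If you already have checkpoints that were saved from the wrapped model (i.e. with the module. prefix in the keys), a hedged sketch for loading them is to strip the prefix at load time (PATH and model stand in for your own objects):
from collections import OrderedDict
import torch

state_dict = torch.load(PATH, map_location="cpu")
# drop the leading "module." that DistributedDataParallel prepends to every key
cleaned = OrderedDict(
    (k[len("module."):] if k.startswith("module.") else k, v)
    for k, v in state_dict.items()
)
model.load_state_dict(cleaned)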
| https://stackoverflow.com/questions/68386282/ |
RuntimeError: mat1 and mat2 shapes cannot be multiplied | I'm trying to input a 5D tensor with shape ( 1, 8, 32, 32, 32 ) to a VAE I wrote:
self.encoder = nn.Sequential(
nn.Conv3d( 8, 16, 4, 2, 1 ), # 32 -> 16
nn.BatchNorm3d( 16 ),
nn.LeakyReLU( 0.2 ),
nn.Conv3d( 16, 32, 4, 2, 1 ), # 16 -> 8
nn.BatchNorm3d( 32 ),
nn.LeakyReLU( 0.2 ),
nn.Conv3d( 32, 48, 4, 2, 1 ), # 16 -> 4
nn.BatchNorm3d( 48 ),
nn.LeakyReLU( 0.2 ),
)
self.fc_mu = nn.Linear( 3072, 100 ) # 48*4*4*4 = 3072
self.fc_logvar = nn.Linear( 3072, 100 )
self.decoder = nn.Sequential(
nn.Linear( 100, 3072 ),
nn.Unflatten( 1, ( 48, 4, 4 )),
nn.ConvTranspose3d( 48, 32, 4, 2, 1 ), # 4 -> 8
nn.BatchNorm3d( 32 ),
nn.Tanh(),
nn.ConvTranspose3d( 32, 16, 4, 2, 1 ), # 8 -> 16
nn.BatchNorm3d( 16 ),
nn.Tanh(),
nn.ConvTranspose3d( 16, 8, 4, 2, 1 ), # 16 -> 32
nn.BatchNorm3d( 8 ),
nn.Tanh(),
)
def reparametrize( self, mu, logvar ):
std = torch.exp( 0.5 * logvar )
eps = torch.randn_like( std )
return mu + eps * std
def encode( self, x ) :
x = self.encoder( x )
x = x.view( -1, x.size( 1 ))
mu = self.fc_mu( x )
logvar = self.fc_logvar( x )
return self.reparametrize( mu, logvar ), mu, logvar
def decode( self, x ):
return self.decoder( x )
def forward( self, data ):
z, mu, logvar = self.encode( data )
return self.decode( z ), mu, logvar
The error I'm getting is: RuntimeError: mat1 and mat2 shapes cannot be multiplied (64x48 and 3072x100). I thought I had calculated the output dimensions from each layer correctly, but I must have made a mistake, but I'm not sure where.
| This line
x = x.view( -1, x.size( 1 ))
This means you keep the second dimension (channels) as-is and collapse everything else into the first dimension (batch).
Since the output of self.encoder is (1, 48, 4, 4, 4), doing that gives you a (64, 48) tensor, but from the look of it you want (1, 3072) instead.
So this should solve this particular problem.
x = x.view(x.size(0), -1)
Then you'll run into RuntimeError: unflatten: Provided sizes [48, 4, 4] don't multiply up to the size of dim 1 (3072) in the input tensor.
The cause is the unflatten here
nn.Linear(100, 3072),
nn.Unflatten(1, (48, 4, 4)),
nn.ConvTranspose3d(48, 32, 4, 2, 1)
It has to be nn.Unflatten(1, (48, 4, 4, 4)) instead, so that the 3072 features are reshaped back into a 5D volume for the ConvTranspose3d layers.
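Putting both fixes together, a minimal sketch of only the changed lines (everything else stays as in your code):
# in encode(): keep the batch dimension and flatten the rest
x = x.view(x.size(0), -1)              # (1, 3072) for your (1, 8, 32, 32, 32) input
# in the decoder: unflatten back into a 5D volume for ConvTranspose3d
nn.Linear(100, 3072),
nn.Unflatten(1, (48, 4, 4, 4)),        # was (48, 4, 4)
nn.ConvTranspose3d(48, 32, 4, 2, 1),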
| https://stackoverflow.com/questions/68387618/ |
Extract intermediate representation of MiDaS neural network in pytorch? | Pytorch documentation provides a concise way to apply MiDaS monocular depth estimation network for depth extraction. But how should I modify their code to extract network representation at some intermediate layer? I know that I could download the model from github and modify forward function to return what I want, but I am interested in the simplest solution, leaving outer code as is.
I'm aware of subclassing the model class and writing my own forward function, like here, but I don't know how to access the class in the code. The model instance is created straight away with midas = torch.hub.load("intel-isl/MiDaS", model_type). Maybe an example of using a forward hook will be easier.
| As you said, using a forward hook on a nn.Module is the easiest way to go about it. Consider the documentation: https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.register_forward_hook
Basically you just have to define a function that takes three inputs (module, input, output) and then does whatever you want with that data. To find at what Module you want to place that hook you obviously need to be familiar with the structure of the model. You can just print(midas) to get a pretty-printed representation of all the modules available. I just chose some random one, and used the print() function as a hook:
midas.pretrained.model.blocks[3].mlp.fc2.register_forward_hook(print)
This means that whenever we call midas(some_input), the hook (print in this case) will be called with the corresponding arguments. Of course, instead of print you can write a function that saves those outputs to e.g. a list that you can access from the outside, writes them to a file, etc.
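For instance, here is a minimal sketch of a hook that stores the intermediate outputs in a list instead of printing them; the module path is just the arbitrary one picked above, and input_batch stands for your preprocessed input tensor:
activations = []
def save_activation(module, input, output):
    # detach so the stored tensors don't keep the autograd graph alive
    activations.append(output.detach().cpu())
handle = midas.pretrained.model.blocks[3].mlp.fc2.register_forward_hook(save_activation)
with torch.no_grad():
    midas(input_batch)
handle.remove()
print(activations[0].shape)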
| https://stackoverflow.com/questions/68392911/ |
Problem in lr_find() in Pytorch fastai course | While following the Jupyter notebooks for the course
I hit upon an error when these lines are run.
I know that the cnn_learner line has no errors whatsoever; the problem lies in the lr_find() part.
It seems that learn.lr_find() does not want to return two values! Although its documentation says that it returns a tuple. That is my problem.
These are the lines of code:
learn = cnn_learner(dls, resnet34, metrics=error_rate)
lr_min,lr_steep = learn.lr_find()
The error says:
not enough values to unpack (expected 2, got 1)
for the second line.
Also, I get this graph with one 'marker' which I suppose is either one of the values of lr_min or lr_steep
This is the graph
When I run learn.lr_find() only, i.e. do not capture the output in lr_min, lr_steep; it runs well but then I do not get the min and steep learning rates (which is really important for me)
I read through what lr_find does and it is clear that it returns a tuple. Its docstring says
Launch a mock training to find a good learning rate and return suggestions based on suggest_funcs as a named tuple
I had duplicated the original notebook, and when I hit this error, I ran the original notebook, with the same results. I update the notebooks as well, but no change!
Wherever I have searched for this online, any sort of error hasn't popped up. The only relevant thing I found is that lr_find() returns different results of the learning rates after every run, which is perfectly fine.
| I was having the same problem and found that lr_find()'s output has been updated. You can replace the second line with lrs = learn.lr_find(suggest_funcs=(minimum, steep, valley, slide)), and then use lrs.minimum and lrs.steep wherever you were using lr_min and lr_steep respectively; this should work fine and solve your problem.
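In full, that looks something like this (assuming a recent fastai where the suggestion functions live in fastai.callback.schedule and are pulled in by the usual star-imports in the course notebooks):
learn = cnn_learner(dls, resnet34, metrics=error_rate)
lrs = learn.lr_find(suggest_funcs=(minimum, steep, valley, slide))
print(lrs.minimum, lrs.steep)   # the two values you were unpacking before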
If you want to read more about it, you can see this post on the fastai forum.
| https://stackoverflow.com/questions/68396513/ |
How to calculate dimensions of first linear layer of a CNN | Currently, I am working with a CNN where there is a fully connected layer attached to it and I am working with a 3 channel image of size 32x32. I am wondering on if there is a consistent formula I can use to calculate the input dimensions of the first linear layer with the input from the last conv/maxpooling layer. I want to be able to calculate the dimensions of the first linear layer given only information of the last conv2d layer and maxpool later. In other words, I would like to be able to calculate that value without having to use information of the previous layers before (so I don't have to manually calculate weight dimensions of a very deep network)
I also want to understand the calculation of acceptable dimensions, like what would be the reasoning of those calculations?
For some reason these calculations work and Pytorch accepted these dimensions:
val = int((32*32)/4)
self.fc1 = nn.Linear(val, 200)
and this also worked
self.fc1 = nn.Linear(64*4*4, 200)
Why do those values work, and is there a limitation in the calculation of those methods? I feel like this would break if I were to change stride distance or kernel size, for example.
Here is the general model architecture I was working with:
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer
self.conv1 = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(in_channels=16, out_channels=32,kernel_size=3)
self.pool2 = nn.MaxPool2d(2,2)
self.conv3 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3)
self.pool3 = nn.MaxPool2d(2,2)
self.dropout = nn.Dropout(0.25)
# H*W/4
val = int((32*32)/4)
#self.fc1 = nn.Linear(64*4*4, 200)
################################################
self.fc1 = nn.Linear(val, 200) # dimensions of the layer I wish to calculate
###############################################
self.fc2 = nn.Linear(200,100)
self.fc3 = nn.Linear(100,10)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool2(F.relu(self.conv2(x)))
x = self.pool3(F.relu(self.conv3(x)))
#print(x.shape)
x = torch.flatten(x, 1) # flatten all dimensions except batch
x = F.relu(self.fc1(x))
x = self.dropout(x)
x = F.relu(self.fc2(x))
x = self.dropout(x)
x = self.fc3(x)
return x
# create a complete CNN
model = Net()
print(model)
Can anyone tell me how to calculate the dimensions of the first linear layer and explain the reasoning?
| Given the input spatial dimension w, a 2d convolution layer will output a tensor with the following size on this dimension:
int((w + 2*p - d*(k - 1) - 1)/s + 1)
The exact same is true for nn.MaxPool2d. For reference, you can look it up here, on the PyTorch documentation.
The convolution part of your model is made up of three (Conv2d + MaxPool2d) blocks. You can easily infer the spatial dimension size of the output with this helper function:
def conv_shape(x, k=1, p=0, s=1, d=1):
return int((x + 2*p - d*(k - 1) - 1)/s + 1)
Calling it recursively, you get the resulting spatial dimension:
>>> w = conv_shape(conv_shape(32, k=3, p=1), k=2, s=2)
>>> w = conv_shape(conv_shape(w, k=3), k=2, s=2)
>>> w = conv_shape(conv_shape(w, k=3), k=2, s=2)
>>> w
2
Since your convolutions have square kernels and identical strides and paddings (horizontal equals vertical), the above calculation holds for both the width and the height of the tensor. Lastly, looking at the last convolution layer conv3, which has 64 filters, the resulting number of elements per batch element before your fully connected layer is w*w*64, i.e. 2*2*64 = 256.
However, nothing stops you from calling your layers to find out the output shape!
class Net(nn.Module):
def __init__(self):
super().__init__()
self.feature_extractor = nn.Sequential(
nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1),
nn.ReLU(),
nn.MaxPool2d(2, 2),
nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3),
nn.ReLU(),
nn.MaxPool2d(2,2),
nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3),
nn.ReLU(),
nn.MaxPool2d(2,2),
nn.Flatten())
n_channels = self.feature_extractor(torch.empty(1, 3, 32, 32)).size(-1)
self.classifier = nn.Sequential(
nn.Linear(n_channels, 200),
nn.ReLU(),
nn.Dropout(0.25),
nn.Linear(200, 100),
nn.ReLU(),
nn.Dropout(0.25),
nn.Linear(100, 10))
def forward(self, x):
features = self.feature_extractor(x)
out = self.classifier(features)
return out
model = Net()
| https://stackoverflow.com/questions/68398528/ |
How can I make a PyTorch extension with cmake | This tutorial demonstrates how to make a C++/CUDA-based Python extension for PyTorch. But for ... reasons ... my use-case is more complicated than this and doesn't fit neatly within the Python setuptools framework described by the tutorial.
Is there a way to use cmake to compile a Python library that extends PyTorch?
| Yes.
The trick is to use cmake to combine together all the C++ and CUDA files we'll need and to use PyBind11 to build the interface we want; fortunately, PyBind11 is included with PyTorch.
The code below is collected and kept up-to-date in this Github repo.
Our project consists of several files:
CMakeLists.txt
cmake_minimum_required (VERSION 3.9)
project(pytorch_cmake_example LANGUAGES CXX CUDA)
find_package(Python REQUIRED COMPONENTS Development)
find_package(Torch REQUIRED)
# Modify if you need a different default value
if(NOT DEFINED CMAKE_CUDA_ARCHITECTURES)
set(CMAKE_CUDA_ARCHITECTURES 61)
endif()
# List all your code files here
add_library(pytorch_cmake_example SHARED
main.cu
)
target_compile_features(pytorch_cmake_example PRIVATE cxx_std_11)
target_link_libraries(pytorch_cmake_example PRIVATE ${TORCH_LIBRARIES} Python::Python)
# Use if the default GCC version gives issues.
# Similar syntax is used if we need better compilation flags.
target_compile_options(pytorch_cmake_example PRIVATE $<$<COMPILE_LANGUAGE:CUDA>:-ccbin g++-9>)
# Use a variant of this if you're on an earlier cmake than 3.18
# target_compile_options(pytorch_cmake_example PRIVATE $<$<COMPILE_LANGUAGE:CUDA>:-gencode arch=compute_61,code=sm_61>)
main.cu
#include <c10/cuda/CUDAException.h>
#include <torch/extension.h>
#include <torch/library.h>
using namespace at;
int64_t integer_round(int64_t num, int64_t denom){
return (num + denom - 1) / denom;
}
template<class T>
__global__ void add_one_kernel(const T *const input, T *const output, const int64_t N){
// Grid-strided loop
for(int i=blockDim.x*blockIdx.x+threadIdx.x;i<N;i+=blockDim.x*gridDim.x){
output[i] = input[i] + 1;
}
}
///Adds one to each element of a tensor
Tensor add_one(const Tensor &input){
auto output = torch::zeros_like(input);
// Common values:
// AT_DISPATCH_INDEX_TYPES
// AT_DISPATCH_FLOATING_TYPES
// AT_DISPATCH_INTEGRAL_TYPES
AT_DISPATCH_ALL_TYPES(
input.scalar_type(), "add_one_cuda", [&](){
const auto block_size = 128;
const auto num_blocks = std::min(65535L, integer_round(input.numel(), block_size));
add_one_kernel<<<num_blocks, block_size>>>(
input.data_ptr<scalar_t>(),
output.data_ptr<scalar_t>(),
input.numel()
);
// Always test your kernel launches
C10_CUDA_KERNEL_LAUNCH_CHECK();
}
);
return output;
}
///Note that we can have multiple implementations spread across multiple files, though there should only be one `def`
TORCH_LIBRARY(pytorch_cmake_example, m) {
m.def("add_one(Tensor input) -> Tensor");
m.impl("add_one", c10::DispatchKey::CUDA, TORCH_FN(add_one));
//c10::DispatchKey::CPU is also an option
}
Compilation
Compile it all using this command:
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo -DCMAKE_PREFIX_PATH=`python -c 'import torch;print(torch.utils.cmake_prefix_path)'` -GNinja ..
test.py
You can then run the following test script.
import torch
torch.ops.load_library("build/libpytorch_cmake_example.so")
shape = (3,3,3)
a = torch.randint(0, 10, shape, dtype=torch.float).cuda()
a_plus_one = torch.ops.pytorch_cmake_example.add_one(a)
| https://stackoverflow.com/questions/68401650/ |
How should I fix the code in order to make the linear regression model using train data properly work? | What I should do is to collect height and weight information from 5 people and use it as train data to learn the linear regression model in Colab.
There is example code, which I modified, but it doesn't work in the first place.
How should I fix the code in order to make the linear regression model using train data properly work? The code I have fixed so far is the following. The height and weight data below are the values I randomly chose.
import torch
import torch.optim as optim
# Defining data
x_train=torch.Float Tensor([48],[52],[60],[65],[73])
y_train=torch.Float Tensor([158],[162],[170],[175],[183])
# Hypothesis initiaization
W=torch.zeros(1,requires_grad=True)
b=torch.zeros(1,requires_grad=True)
# Defining Optimizer
optimizer=torch.optim.SGD([W,b],Ir=0.01)
nb_epochs=1000
for epoch in range(nb_epochs+1):
# Calculating H(x)
hypothesis=x_train*W+b
# Calculating cost
cost=torch.mean((hypothesis-y_train)**2)
# Learning with Optimizer
optimizer.zero_grad()
cost.backward()
optimizer.step()
# Log output every 100 times
if epoch % 100 ==0:
print('Epoch{:4d}/{}W:{:.3f},b:{:.3f}Coast:{:.6f}'.format(
epoch,nb_epochs,W.item(),b.item(),cost.item()
))
| There are several minor mistakes in the code snippet, and you can actually read them off the error messages:
use torch.Tensor() rather than torch.Float Tensor with a space ("torch.Tensor is an alias for the default tensor type (torch.FloatTensor)")
the data you feed in should be a single Python list or sequence (wrap the values in one outer list)
Ir -> lr (a lowercase letter L, not a capital I)
try a lower learning rate (lr) for the nan problem
train for enough epochs
With these minimal modifications, it works:
import torch
import torch.optim as optim
# Defining data
x_train=torch.Tensor([[48],[52],[60],[65],[73]])
y_train=torch.Tensor([[158],[162],[170],[175],[183]])
# Hypothesis initiaization
W=torch.zeros(1,requires_grad=True)
b=torch.zeros(1,requires_grad=True)
# Defining Optimizer
optimizer=torch.optim.SGD([W,b],lr=0.0001)
nb_epochs=1000000
for epoch in range(nb_epochs+1):
# Calculating H(x)
hypothesis=x_train*W+b
# Calculating cost
cost=torch.mean((hypothesis-y_train)**2)
# Learning with Optimizer
optimizer.zero_grad()
cost.backward()
optimizer.step()
# Log output every 200 times
if epoch % 200 ==0:
print('Epoch{:4d}/{}W:{:.3f},b:{:.3f}Coast:{:.6f}'.format(
epoch,nb_epochs,W.item(),b.item(),cost.item()
))
reference:
https://pytorch.org/docs/stable/tensors.html
| https://stackoverflow.com/questions/68402878/ |
Getting nans for gradient | I am trying to create a search relevance model where I take the dot product between query vector and resulting documents. I add a positional bias term on top to take into account the fact that position 1 is more likely to be clicked on. The final (unnormalised) log likelihood calculation is as follows:
query = self.query_model(query_input_ids, query_attention_mask)
docs = self.doc_model(doc_input_ids, doc_attention_mask)
positional_bias = self.position_model()
if optimizer_idx is not None:
if optimizer_idx == 0:
docs = docs.detach()
positional_bias = positional_bias.clone().detach()
elif optimizer_idx == 1:
query = query.detach()
positional_bias = positional_bias.clone().detach()
else:
query = query.detach()
docs = docs.detach()
similarity = (docs @ query.unsqueeze(-1)).squeeze()
click_log_lik = (similarity + positional_bias)\
.reshape(doc_mask.shape)\
.masked_fill_((1 - doc_mask).bool(), float("-inf"))
The query and doc model is simply a distilbert model with a projection layer on top of CLS token. The models can be seen here: https://pastebin.com/g21g9MG3
When inspecting the first gradient descent step, the gradients contain nans, but only for the query model and not the doc model. My hypothesis is that normalizing the return values of the doc and query models (return F.normalize(out, dim=-1)) is somehow messing with the gradients.
Does anyone know (1) whether my hypothesis is true and, more importantly, (2) how I can rectify the nan gradients?
Additional Info:
None of the losses are inf or nan.
query is BS x 768
docs is BS x DOC_RESULTS x 768
positional_bias is DOC_RESULTS
DOC_RESULTS is 10 in my case.
The masked_fill in the last line is because occasionally I have less than 10 data points for a query.
Update 1
The following changes made no difference to nans:
Changing masked_fill from -inf to 1e5.
Changing the projection from F.normalize(out, dim=-1) to out / 100.
Removed positional bias altogether with again no luck.
| In case it helps anyone who comes across this while using Transformers, this is what I did:
In the end the bug was due to the fact that I was masking away nans. Since I had some documents with zero length, the output of the transformer was nan. I was hoping that masked_fill would fix this problem, but it doesn't. The solution in my case was to only put non-zero-length sequences through the transformer, and then pad with zeros to fill the batch.
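A rough sketch of that idea, using the tensor names from the question and assuming the documents are already flattened to (batch * doc_results, seq_len) with a 768-dimensional doc model output:
lengths = doc_attention_mask.sum(dim=-1)          # number of real tokens per document
keep = lengths > 0
docs = torch.zeros(doc_input_ids.size(0), 768, device=doc_input_ids.device)
docs[keep] = self.doc_model(doc_input_ids[keep], doc_attention_mask[keep])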
| https://stackoverflow.com/questions/68403128/ |
output of conv layer in AlexNet | I am looking at an implementation of AlexNet with PyTorch. According to the formula, output_height = (input_height + padding_top + padding_bottom - kernel_height) / stride_height + 1. So using the formula with an input of size 224, stride = 4, padding = 1, and kernel size = 11, the output should be of size 54.75. But if you run a summary of the model, you will see that the output of this first layer is 54. Does PyTorch clip down the output size? If so, does it do so consistently (it seems like it)? I would like to understand what is going on behind the scenes, please.
Here is the code that I refer to:
net = nn.Sequential(
nn.Conv2d(1, 96, kernel_size=11, stride=4, padding=1), nn.ReLU(),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
nn.MaxPool2d(kernel_size=3, stride=2), nn.Flatten(),
nn.Linear(6400, 4096), nn.ReLU(), nn.Dropout(p=0.5),
nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(p=0.5),
nn.Linear(4096, 10))
| The output size is a whole number, of course! It's just that your formula is not quite right: the actual expression is output_height = floor((input_height + padding_top + padding_bottom - kernel_height) / stride_height + 1). It wouldn't make any sense otherwise.
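For your first layer that gives:
import math
# input 224, kernel 11, stride 4, padding 1 on each side
out = math.floor((224 + 1 + 1 - 11) / 4 + 1)   # floor(54.75)
print(out)   # 54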
| https://stackoverflow.com/questions/68407037/ |
How do I load my dataset into Pytorch or Keras? | I'm learning to build a neural network using either PyTorch or Keras. I have my images in two separate folders for training and testing, with their corresponding labels in two csv files, and I'm having the basic problem of just loading them in with PyTorch or Keras so I can start building an NN. I've tried tutorials from
https://towardsdatascience.com/training-neural-network-from-scratch-using-pytorch-in-just-7-cells-e6e904070a1d
and
https://www.tensorflow.org/tutorials/keras/classification
and a few others but they all seem to use pre-existing datasets like MNIST where it can be imported in or downloaded from a link. I've tried something like this:
import numpy as np
import matplotlib.pyplot as plt
import os
import cv2
from tqdm import tqdm
DATADIR = r"Path to my image folder"
CATEGORIES = ["High", "Low"]
for category in CATEGORIES:
path = os.path.join(DATADIR,category)
for img in os.listdir(path):
img_array = cv2.imread(os.path.join(path,img) ,cv2.IMREAD_GRAYSCALE)
plt.imshow(img_array, cmap='gray')
plt.show()
break
break
but was after something more like:
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
Does anyone have ideas?
| If you have your data in a csv file and the images stored in a separate folder, one of the best ways is to use the flow_from_dataframe generator from the Keras library. Here is an example, and a more detailed one on the Keras site here. There is also the documentation.
Here is some sample code:
import pandas as pd #import pandas library
from tensorflow import keras
df = pd.read_csv(r".\train.csv") #read csv file
datagen = keras.preprocessing.image.ImageDataGenerator(
    rescale=1./255) # scale pixel values from [0, 255] down to [0, 1]
train_generator = datagen.flow_from_dataframe(
dataframe=df, #dataframe object you have defined above
directory=".\train_imgs", #the dir where your images are stored
x_col="id", #column of image names
y_col="label", #column of class name
class_mode="categorical", #type of the problem
target_size=(32,32), #resizing image target according to your model input
batch_size=32) #batch size of data it should create
Then, you can pass it to the model.fit():
model.fit(train_generator, epochs=10)
| https://stackoverflow.com/questions/68408941/ |
batched tensor slice, slice B x N x M with B x 1 | I have a B x M x N tensor, X, and a B x 1 tensor, Y, which holds the index of tensor X along dimension=1 that I want to keep. What is the shorthand for this slice so that I can avoid a loop?
Essentially I want to do this:
Z = torch.zeros(B,N)
for i in range(B):
Z[i] = X[i][Y[i]]
| The answer provided by @Hammad is short and perfect for the job. Here's an alternative solution if you're interested in using some lesser-known PyTorch built-ins. We will use torch.gather (similarly, you can achieve this in NumPy with numpy.take_along_axis).
The idea behind torch.gather is to construct a new tensor based on two tensors of the same rank: one containing the indices (here ~ Y) and one containing the values (here ~ X).
The operation performed is Z[i][j][k] = X[i][Y[i][j][k]][k].
Since X's shape is (B, M, N) and Y shape is (B, 1) we are looking to fill in the blanks inside Y such that Y's shape becomes (B, 1, N).
This can be achieved with some axis manipulation:
>>> Y.expand(-1, N)[:, None] # expand to dim=1 to N and unsqueeze dim=1
The actual call to torch.gather will be:
>>> X.gather(dim=1, index=Y.expand(-1, N)[:, None])
Which you can reshape to (B, N) by adding in [:, 0].
This function can be very effective in tricky scenarios...
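For completeness, a small end-to-end sketch of the above with arbitrary shapes:
import torch
B, M, N = 4, 5, 3
X = torch.randn(B, M, N)
Y = torch.randint(0, M, (B, 1))
Z = X.gather(dim=1, index=Y.expand(-1, N)[:, None])[:, 0]   # (B, N)
# sanity check against the loop version from the question
Z_loop = torch.stack([X[i, Y[i, 0]] for i in range(B)])
assert torch.equal(Z, Z_loop)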
| https://stackoverflow.com/questions/68413687/ |
problem accessing the pytorch versions in conda environment | I am a user of a server with no root access.
I have installed PyTorch version 1.9.0 in conda environment. But when I access it via Jupyter notebook, it still shows conda base version (PyTorch 1.3.0).
Although I am running the notebook from the environment. See screenshots for clarification.
My requirement is to run PyTorch 1.9.0 version to run in the notebook rather than the current 1.3.0
Update: I am using Jupyter via SSH (it is installed on the server, I am accessing via SSH client)
| After activating your target conda env, check the location of jupyter:
conda activate paper
which jupyter
you should see something like
anaconda3/envs/paper/bin/jupyter
If not (for example, you see this instead):
anaconda3/bin/jupyter
that means you are using jupyter from the base conda env; to fix this you have to install jupyter inside your paper conda env.
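For example (replace paper with the name of your env if it differs):
conda install -n paper jupyter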
| https://stackoverflow.com/questions/68417477/ |
input.size(-1) must be equal to input_size. Expected 763, got 1 | I am trying to train my model with a batch size of 50. However, I am getting this error:
input.size(-1) must be equal to input_size. Expected 763, got 1
My code is:
for epoch in range(1, n_epochs + 1):
for i, (x_batch, y_batch) in enumerate(trn_dl):
#model.to(device)
#model.train()
x_batch = x_batch.to(device)
y_batch = y_batch.to(device)
#sched.step()
print('shape of the input batch')
print(x_batch.shape)
opt.zero_grad()
x_batch=torch.unsqueeze(x_batch,2)
print(x_batch.shape)
print(x_batch)
out = model(x_batch) # here I am getting error
y_batch=torch.unsqueeze(y_batch,0)
print('NOW')
print(y_batch.dtype)
y_batch = y_batch.to(torch.float32)
out = out.to(torch.float32)
out=torch.transpose(out,1,0)
loss = loss_function(out, torch.max(y_batch, 1)[1])
#(out, y_batch)
#targets = targets.to(torch.float32)
loss.backward()
opt.step()
My model is:
class LSTM(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super().__init__()
self.hidden_size = hidden_size
self.lstm = nn.LSTM(input_size, hidden_size)
self.linear =nn.Linear(hidden_size, output_size)
self.hidden_cell = (torch.zeros(1,1,self.hidden_size),
torch.zeros(1,1,self.hidden_size))
def forward(self, input_seq):
h0 = torch.zeros(1, input_seq.size(0), self.hidden_size).to(device)
c0 = torch.zeros(1, input_seq.size(0), self.hidden_size).to(device)
lstm_out, _ = self.lstm(input_seq, (h0,c0))
lstm_out = self.fc(lstm_out[:, -1, :])
predictions = self.Linear(lstm_out.view(len(input_seq), -1))
print("predictions",predictions)
return predictions[-1]
Could anyone please look into it and help me.
| By the looks of it, you are trying to pick the last step of the LSTM's output: lstm_out[:, -1, :]. However, by default with nn.RNNs the batch axis is second, not first: (sequence_length, batch_size, features). So you end up picking the last batch element, not the last sequence step. You might want to use batch_first=True when initializing your nn.LSTM:
Something like:
self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
| https://stackoverflow.com/questions/68428558/ |
Why do I get a different image at the same index? | I have the following code portion:
images = []
image_labels = []
for i, data in enumerate(train_loader,0):
inputs, labels = data
inputs, labels = inputs.to(device), labels.to(device)
inputs, labels = inputs.float(), labels.float()
images.append(inputs)
image_labels.append(labels)
image = images[7]
image = image[0,...].permute([1,2,0])
image = image.numpy()
image = (image * 255).astype(np.uint8)
img = Image.fromarray(image,'RGB')
img.show()
As you can see, I'm trying to display the image at index 7. However, every time I run the code I get a different image displayed, even though I'm using the same index. Why is that?
The displayed image also looks degraded and has lower quality than the original one.
Any thoughts on that?
Thanks.
| My best bet is that you have your DataLoader's shuffle option set to True, in which case you would get different images appearing at index 7: every time you go through the iterator, the sequence of indices used to access the underlying dataset will be different.
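If you want index 7 to always map to the same image, a sketch of the loader without shuffling (the other arguments are just placeholders):
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=False)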
| https://stackoverflow.com/questions/68431570/ |
memmap arrays to pytorch and gradient accumulation | I have a large dataset (> 62 GiB) that, after processing, is saved as two numpy.memmap arrays, one for the data and one for the labels. The dataset has shapes (7390, 60, 224, 224, 3) and (7390), and it is NOT shuffled, so I need to shuffle it first.
Currently I use TensorFlow 2, and I have been using this code with my generator to manage the memmap arrays:
def my_generator():
for i in range(len(numpy_array)):
yield numpy_array[i,:,:,:,:],np.array(labels[i]).reshape(1)
full_dataset = tf.data.Dataset.from_generator(
generator=my_generator,
output_types=(np.uint8,np.int32),
output_shapes=((60,224,224,3),(1))
)
full_dataset = full_dataset.shuffle(SHUFFLE_BUFFER_SIZE, reshuffle_each_iteration=False)
train_dataset = full_dataset.take(train_size)
test_dataset = full_dataset.skip(train_size)
val_dataset = test_dataset.skip(test_size)
test_dataset = test_dataset.take(test_size)
That way I can train, with shuffling and batching, without loading the entire dataset into memory.
Now, with the current model and dataset, the VRAM is not enough for more than 2 batches to be loaded as tensors,
and I can't train with a batch size of 2.
I thought of gradient accumulation, but I couldn't do it with TF2; I found it easy with PyTorch, but I can't figure out how to deal with the memmap arrays with shuffling and splitting the way I do in TensorFlow with generators.
So I need to know how to load the dataset in PyTorch with the same shuffling and batching.
Or, if someone has ready-made code for gradient accumulation (GA) on TF2, that would also work.
| I will just address the shuffle question.
Instead of shuffling with tf.data.Dataset, do it at the generator level. This should work:
class Generator(object):
def __init__(self, images, labels, batch_size):
self.images = images
self.labels = labels
self.batch_size = batch_size
self.idxs = np.arange(len(self.images))
self.on_epoch_end()
def on_epoch_end(self):
# Shuffle the indices
np.random.shuffle(self.idxs)
def generator(self):
i = 0
while i < len(self.idxs):
idx = self.idxs[i]
yield (self.images[idx], self.labels[idx])  # use the shuffled index for both the image and its label
i += 1
self.on_epoch_end()
def batch_generator(self):
it = self.generator()  # calling the generator method returns an iterator
while True:
vals = [next(it) for i in range(self.batch_size)]
images, labels = zip(*vals)
yield images, labels
Then you can use it by
gen = Generator(...)
it = gen.batch_generator()  # the class has no __iter__, so grab the batch generator directly
batch = next(it) # Call this every time you want a new batch
I'm sure PyTorch has built-in methods for this kind of stuff, though.
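For reference, a rough PyTorch sketch of that built-in route, wrapping the memmaps in a Dataset so that DataLoader handles shuffling and batching (variable names assumed from the question):
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader
class MemmapDataset(Dataset):
    def __init__(self, images, labels):
        self.images = images      # np.memmap of shape (7390, 60, 224, 224, 3)
        self.labels = labels      # np.memmap of shape (7390,)
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        x = torch.from_numpy(np.array(self.images[idx]))   # copy the slice into memory
        y = torch.tensor(int(self.labels[idx]))
        return x, y
loader = DataLoader(MemmapDataset(numpy_array, labels), batch_size=2, shuffle=True)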
| https://stackoverflow.com/questions/68433271/ |
Pytorch Set value for Tensor with index tensor | Suppose I have a 2D index array of shape [B, 1, N, 2], i.e. N points per batch holding indices into a target tensor of size [B, 1, H, W].
What is the best way to assign a value to all of these indices in the tensor?
for example:
for b in range(batchsize):
    for i in range(N):
        target[b, 0, ind[b, 0, i, 0], ind[b, 0, i, 1]] = 1
but not in loop form
thanks
| If we look at this setup, you have a tensor target shaped (b, 1, h, w) and a tensor containing indices ind, shaped (b, 1, N, 2). You want to assign 1 to the N points per batch given by the two coordinates in ind.
The way I see it you could use torch.scatter_. We will stick with a 3D tensor since axis=1 is unused. Given a 3D tensor and argument value=1 and dim=1, .scatter_ operates on the input tensor as so:
input[i][index[i][j]] = 1
This does not exactly fit your setting since what would wish for is rather
input[i][index1[i][j]][index2[i][j]] = 1
In order to use scatter you could flatten target and unfold the values in ind accordingly.
Let's take an example:
>>> target = torch.zeros(2, 1, 5, 3)
>>> ind = torch.tensor([[[[0, 0], [1, 1], [2, 2]]],
                        [[[1, 2], [3, 0], [4, 2]]]])
tensor([[[[0, 0],
[1, 1],
[2, 2]]],
[[[1, 2],
[3, 0],
[4, 2]]]])
We will start by splitting ind into xs and ys coordinates:
>>> x, y = ind[..., 0], ind[..., 1]
Unfold and reshape them:
>>> z = x*target.size(-1) + y
tensor([[[ 0, 4, 8]],
[[ 5, 9, 14]]])
Flatten target:
>>> t = target.flatten(2)
tensor([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
Then scatter the 1s:
>>> t.scatter_(dim=2, index=z, value=1)
tensor([[[1., 0., 0., 0., 1., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 1., 0., 0., 0., 1., 0., 0., 0., 0., 1.]]])
And finally reshape back to the original shape:
>>> t.reshape_as(target)
tensor([[[[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.],
[0., 0., 0.],
[0., 0., 0.]]],
[[[0., 0., 0.],
[0., 0., 1.],
[0., 0., 0.],
[1., 0., 0.],
[0., 0., 1.]]]])
In summary:
>>> x, y = ind[..., 0], ind[..., 1]
>>> z = x*target.size(-1) + y
>>> target.flatten(2).scatter_(dim=2, index=z, value=1).reshape_as(target)
This last line will mutate target.
| https://stackoverflow.com/questions/68441496/ |
Pytorch: mat1 and mat2 shapes cannot be multiplied | I have set up a toy example for my first pytorch model:
x = torch.from_numpy(np.linspace(1,100,num=100))
y = torch.from_numpy(np.dot(2,x))
I have built the model as follows:
class Net(nn.Module):
def __init__(self):
super(Net,self).__init__()
self.fc1 = nn.Linear(1,10)
self.fc2 = nn.Linear(10,20)
self.fc3 = nn.Linear(16,1)
def forward(self,inputs):
x = F.relu(self.fc1(inputs))
x = F.relu(self.fc2(x))
x = F.linear(self.fc3(x))
return x
However, I have run into this error when I try to train:
RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x20 and 1x10)
Here is the full code for reference:
import numpy as np # linear algebra
import torch
from torch.utils.data import Dataset
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
x = torch.from_numpy(np.linspace(1,100,num=100))
y = torch.from_numpy(np.dot(2,x))
class MyDataset(Dataset):
def __init__(self):
self.sequences = x
self.target = y
def __getitem__(self,i):
return self.sequences[i], self.target[i]
def __len__(self):
return len(self.sequences)
class Net(nn.Module):
def __init__(self):
super(Net,self).__init__()
self.fc1 = nn.Linear(1,10)
self.fc2 = nn.Linear(10,20)
self.fc3 = nn.Linear(16,1)
def forward(self,inputs):
x = F.relu(self.fc1(inputs))
x = F.relu(self.fc2(x))
x = F.linear(self.fc3(x))
return x
model = Net().to('cpu')
# Generators
training_set = MyDataset()
loader = torch.utils.data.DataLoader(training_set, batch_size=20)
#criterion and optimizer
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=0.0001)
### Training
n_epochs = 12
for epoch in range(n_epochs):
for inputs,target in loader:
print(target)
optimizer.zero_grad()
output = model(inputs)
loss = criterion(output,target)
loss.backward()
optimizer.step()
And the full error message:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-107-d32fd01d3b41> in <module>
9 optimizer.zero_grad()
10
---> 11 output = model(inputs)
12
13 loss = criterion(output,target)
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
<ipython-input-103-aefe4823d2e8> in forward(self, inputs)
7
8 def forward(self,inputs):
----> 9 x = F.relu(self.fc1(inputs))
10 x = F.relu(self.fc2(x))
11 x = F.linear(self.fc3(x))
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/linear.py in forward(self, input)
91
92 def forward(self, input: Tensor) -> Tensor:
---> 93 return F.linear(input, self.weight, self.bias)
94
95 def extra_repr(self) -> str:
/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py in linear(input, weight, bias)
1690 ret = torch.addmm(bias, input, weight.t())
1691 else:
-> 1692 output = input.matmul(weight.t())
1693 if bias is not None:
1694 output += bias
RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x20 and 1x10)
Any advice would be very much appreciated.
| There are four issues here:
Looking at the model's first layer, I assume your batch size is 100. In that case, the correct input shape should be (100, 1), not (100,). To fix this you could use unsqueeze(-1).
The input should be dtype float: x.float().
Layer self.fc3 has incorrect sizing. To be consistent with self.fc2, it should be nn.Linear(20, 1).
Lastly, F.linear is not a "linear" activation (i.e. the identity function); it is an actual linear transformation (i.e. x @ A.T + b). Take a look at the documentation for further details. I don't believe this is what you were looking to do in your case.
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(1, 10)
self.fc2 = nn.Linear(10, 20)
self.fc3 = nn.Linear(20, 1)
def forward(self,inputs):
x = F.relu(self.fc1(inputs))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
Inference:
>>> x = torch.linspace(1, 100, 100).float().unsqueeze(-1)
>>> y_hat = Net()(x)
>>> y_hat.shape
torch.Size([100, 1])
| https://stackoverflow.com/questions/68447264/ |
Memory Efficient Nearest Neighbour Algorithm | I have 1,000,000 agents, each associated with (x, y) coordinates. I am trying to find agents close to each other (radius = 1.5). I tried to implement this using PyTorch:
X = torch.DoubleTensor(1000000,2).uniform_(0,10000)
torch.cdist(X,X,p=2)
However, with this the session crashes. I am running this on google colab. The same happened when I tried constructing the graph using, radius_neighbors_graph of scikit-learn package. It would be of great help if someone suggested a memory efficient way to implement the same.
| I found three solutions,
Solution 1
import torch
x = torch.randn(3000000, 2).cuda()
y = x
# Turn our Tensors into KeOps symbolic variables:
from pykeops.torch import LazyTensor
x_i = LazyTensor( x[:,None,:] )
y_j = LazyTensor( y[None,:,:] )
# We can now perform large-scale computations, without memory overflows:
D_ij = ((x_i - y_j)**2).sum(dim=2)
D_ij.argKmin(20,dim=1)
Solution 2
M = 3000000
import numpy as np
from pykeops.numpy import LazyTensor as LazyTensor_np
x = np.random.rand(M, 2)
y = x
x_i = LazyTensor_np(
x[:, None, :]
) # (M, 1, 2) KeOps LazyTensor, wrapped around the numpy array x
y_j = LazyTensor_np(
y[None, :, :]
) # (1, N, 2) KeOps LazyTensor, wrapped around the numpy array y
D_ij = ((x_i - y_j) ** 2).sum(-1) # **Symbolic** (M, N) matrix of squared distances
s_i = D_ij.argKmin(20,dim=1).ravel() # genuine (M,) array of integer indices
Solution 3
from sklearn.neighbors import NearestNeighbors
import numpy as np
M = 3000000
x = np.random.rand(M, 2)
nbrs = NearestNeighbors(n_neighbors=20, algorithm='ball_tree').fit(x)
distances, indices = nbrs.kneighbors(x)
Although the execution time of all the three solutions is the same, a minute, the memory requirements are approximately 2GB, 1GB and 1.3GB, respectively. It would be great to hear ideas to lower the execution time.
| https://stackoverflow.com/questions/68449560/ |
UNet loss is NaN + UserWarning: Warning: converting a masked element to nan | I'm training a UNet, whose class looks like this:
class UNet(nn.Module):
def __init__(self):
super().__init__()
# encoder (downsampling)
# Each enc_conv/dec_conv block should look like this:
# nn.Sequential(
# nn.Conv2d(...),
# ... (2 or 3 conv layers with relu and batchnorm),
# )
self.enc_conv0 = nn.Sequential(
nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(64),
nn.ReLU(),
nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, padding=1, stride=1),
nn.BatchNorm2d(64),
nn.ReLU()
)
self.pool0 = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=False) # 256 -> 128
self.enc_conv1 = nn.Sequential(
nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, padding=1),
nn.BatchNorm2d(128),
nn.ReLU(),
nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, padding=1),
nn.BatchNorm2d(128),
nn.ReLU()
)
self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=False) # 128 -> 64
self.enc_conv2 = nn.Sequential(
nn.Conv2d(in_channels=128, out_channels=256, kernel_size=3, padding=1),
nn.BatchNorm2d(256),
nn.ReLU(),
nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, padding=1),
nn.BatchNorm2d(256),
nn.ReLU(),
nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, padding=1),
nn.BatchNorm2d(256),
nn.ReLU()
)
self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2) # 64 -> 32
self.enc_conv3 = nn.Sequential(
nn.Conv2d(in_channels=256, out_channels=512, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(512),
nn.ReLU(),
nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(512),
nn.ReLU(),
nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(512),
nn.ReLU()
)
self.pool3 = nn.MaxPool2d(kernel_size=2, stride=2) # 32 -> 16
# bottleneck
self.bottleneck_conv = nn.Sequential(
nn.Conv2d(in_channels=512, out_channels=1024, kernel_size=1, stride=1, padding=0),
nn.BatchNorm2d(1024),
nn.ReLU(),
nn.Conv2d(in_channels=1024, out_channels=512, kernel_size=1, stride=1, padding=0),
nn.BatchNorm2d(512),
nn.ReLU()
)
# decoder (upsampling)
self.upsample0 = nn.UpsamplingBilinear2d(scale_factor=2) # 16 -> 32
self.dec_conv0 = nn.Sequential(
nn.Conv2d(in_channels=512*2, out_channels=256, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(256),
nn.ReLU(),
nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(256),
nn.ReLU(),
nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(256),
nn.ReLU()
)
self.upsample1 = nn.UpsamplingBilinear2d(scale_factor=2) # 32 -> 64
self.dec_conv1 = nn.Sequential(
nn.Conv2d(in_channels=256*2, out_channels=128, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(128),
nn.ReLU(),
nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(128),
nn.ReLU(),
nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(128),
nn.ReLU()
)
self.upsample2 = nn.UpsamplingBilinear2d(scale_factor=2) # 64 -> 128
self.dec_conv2 = nn.Sequential(
nn.Conv2d(in_channels=128*2, out_channels=64, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(64),
nn.ReLU(),
nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(64),
nn.ReLU()
)
self.upsample3 = nn.UpsamplingBilinear2d(scale_factor=2) # 128 -> 256
self.dec_conv3 = nn.Sequential(
nn.Conv2d(in_channels=64*2, out_channels=1, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(1),
nn.ReLU(),
nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(1),
nn.ReLU(),
nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(1)
)
def forward(self, x):
# encoder
e0 = self.enc_conv0(x)
pool0 = self.pool0(e0)
e1 = self.enc_conv1(pool0)
pool1 = self.pool1(e1)
e2 = self.enc_conv2(pool1)
pool2 = self.pool2(e2)
e3 = self.enc_conv3(pool2)
pool3 = self.pool3(e3)
# bottleneck
b = self.bottleneck_conv(pool3)
# decoder
d0 = self.dec_conv0(torch.cat([self.upsample0(b), e3], 1))
d1 = self.dec_conv1(torch.cat([self.upsample1(d0), e2], 1))
d2 = self.dec_conv2(torch.cat([self.upsample2(d1), e1], 1))
d3 = self.dec_conv3(torch.cat([self.upsample3(d2), e0], 1)) # no activation
return d3
Train method:
def train(model, opt, loss_fn, score_fn, epochs, data_tr, data_val):
torch.cuda.empty_cache()
losses_train = []
losses_val = []
scores_train = []
scores_val = []
for epoch in range(epochs):
tic = time()
print('* Epoch %d/%d' % (epoch+1, epochs))
avg_loss = 0
model.train() # train mode
for X_batch, Y_batch in data_tr:
# data to device
X_batch = X_batch.to(device)
Y_batch = Y_batch.to(device)
# set parameter gradients to zero
opt.zero_grad()
# forward
Y_pred = model(X_batch)
loss = loss_fn(Y_pred, Y_batch) # forward-pass
loss.backward() # backward-pass
opt.step() # update weights
# calculate loss to show the user
avg_loss += loss / len(data_tr)
toc = time()
print('loss: %f' % avg_loss)
losses_train.append(avg_loss)
avg_score_train = score_fn(model, iou_pytorch, data_tr)
scores_train.append(avg_score_train)
# show intermediate results
model.eval() # testing mode
avg_loss_val = 0
#Y_hat = # detach and put into cpu
for X_val, Y_val in data_val:
with torch.no_grad():
Y_hat = model(X_val.to(device)).detach().cpu()
loss = loss_fn(Y_hat, Y_val)
avg_loss_val += loss / len(data_val)
toc = time()
print('loss_val: %f' % avg_loss_val)
losses_val.append(avg_loss_val)
avg_score_val = score_fn(model, iou_pytorch, data_val)
scores_val.append(avg_score_val)
torch.cuda.empty_cache()
# Visualize tools
clear_output(wait=True)
for k in range(5):
plt.subplot(2, 6, k+1)
plt.imshow(np.rollaxis(X_val[k].numpy(), 0, 3), cmap='gray')
plt.title('Real')
plt.axis('off')
plt.subplot(2, 6, k+7)
plt.imshow(Y_hat[k, 0], cmap='gray')
plt.title('Output')
plt.axis('off')
plt.suptitle('%d / %d - loss: %f' % (epoch+1, epochs, avg_loss))
plt.show()
return (losses_train, losses_val, scores_train, scores_val)
However, when executing, I get train_loss and val_loss both equal to nan, along with a warning. In addition, when plotting the segmented picture next to the target one, the output picture is not shown. I tried different loss functions, but still get the same result. There is probably something wrong with my class.
Could you please help me? Thanks in advance.
| I am not sure if this is your error, but your last convolution block (self.dec_conv3) looks odd. I would only reduce to 1 channel at the very last convolution and not perform 2 convolutions with 1 input and 1 output channel. Also, ending with a batchnorm can only produce normalized outputs, which could be far from what you really want:
self.dec_conv3 = nn.Sequential(
nn.Conv2d(in_channels=64*2, out_channels=32, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.Conv2d(in_channels=32, out_channels=32, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.Conv2d(in_channels=32, out_channels=1, kernel_size=3, stride=1, padding=1)
)
It would be interesting to know whether your loss is NaN already at the first iteration or only after a few iterations. Maybe you use a loss function that divides by zero?
| https://stackoverflow.com/questions/68450437/ |
How can I make torch tensor? | I want to make a torch tensor composed of only 1 or -1.
I have tried using torch.empty() and torch.randn().
tmp = torch.randint(low=0, high=2, size=(4, 4))
tmp[tmp==0] = -1
tmp
>> tensor([[ 1, -1, -1, 1],
[-1, 1, 1, -1],
[-1, -1, 1, 1],
[-1, 1, -1, 1]])
However, I don't know which method is the most time-efficient.
And I want to make this code as close to one line as possible, because it is located in __init__().
| Why not:
tmp = -1 + 2 * torch.randint(low=0, high=2, size=(4, 4))
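If you need a floating-point tensor instead (e.g. to initialize a parameter in __init__), one alternative sketch is to sample a Bernoulli directly:
tmp = 2 * torch.bernoulli(torch.full((4, 4), 0.5)) - 1   # float tensor of +/- 1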
| https://stackoverflow.com/questions/68450847/ |
How to convert Bounding Box coordinates to COCO format? | I need to convert the coordinates. I have this format: horizontal and
vertical coordinates of the top-left and lower-right corners of the element ((x1, y1) and (x2, y2)). And I need the x_center, y_center, width, height format. How can I do it?
| The center and the size are simply
x_center = 0.5 * (x1 + x2)
y_center = 0.5 * (y1 + y2)
width = np.abs(x2 - x1)
height = np.abs(y2 - y1)
Note that by using np.abs when computing the width and height, we do not need to assume anything about the "order" of the first and second corners.
If you further want to normalize the center and size by the image size (img_w, img_h):
n_x_center = x_center / img_w
n_y_center = y_center / img_h
n_width = width / img_w
n_height = height / img_h
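Wrapped up as a small helper (the function name is mine, not a library API):
import numpy as np

def corners_to_center(x1, y1, x2, y2, img_w=None, img_h=None):
    x_center = 0.5 * (x1 + x2)
    y_center = 0.5 * (y1 + y2)
    width = np.abs(x2 - x1)
    height = np.abs(y2 - y1)
    if img_w is not None and img_h is not None:
        # normalized (YOLO-style) output
        return x_center / img_w, y_center / img_h, width / img_w, height / img_h
    return x_center, y_center, width, height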
| https://stackoverflow.com/questions/68451369/ |
RuntimeError: Error(s) in loading state_dict for DataParallel: Unexpected key(s) in state_dict: “module.scibert_layer.embeddings.position_ids” | I am getting the following error while trying to load a saved model checkpoint (.pth file).
RuntimeError: Error(s) in loading state_dict for DataParallel: Unexpected key(s) in state_dict: "module.scibert_layer.embeddings.position_ids"
I trained my sequence labeling model in nn.DataParallel (torch version 1.7.0) but am trying to load it without nn.DataParallel (torch version 1.9.0). Currently, I understand that not using nn.DataParallel caused the RuntimeError: Error(s) in loading state_dict for DataParallel:, but could it also be because I am using different versions of torch for training and loading the model checkpoint?
The model is wrapped in nn.DataParallel using the following chunk of code.
if exp_args.parallel == 'true':
if torch.cuda.device_count() > 1:
model = nn.DataParallel(model, device_ids = [0, 1, 2, 3])
print("Using", len(model.device_ids), " GPUs!")
print("Using", str(model.device_ids), " GPUs!")
model.to(f'cuda:{model.device_ids[0]}')
elif exp_args.parallel == 'false':
model = nn.DataParallel(model, device_ids = [0])
This is my model.
DataParallel(
(module): SCIBERTPOSAttenCRF(
(scibert_layer): BertModel(
(embeddings): BertEmbeddings(
(word_embeddings): Embedding(31090, 768, padding_idx=0)
(position_embeddings): Embedding(512, 768)
(token_type_embeddings): Embedding(2, 768)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): BertEncoder(
(layer): ModuleList(
(0): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(1): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(2): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(3): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(4): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(5): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(6): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(7): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(8): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(9): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(10): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(11): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(pooler): BertPooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
(activation): Tanh()
)
)
(lstmpos_layer): LSTM(44, 20, batch_first=True, bidirectional=True)
(self_attention): MultiheadAttention(
(out_proj): NonDynamicallyQuantizableLinear(in_features=40, out_features=40, bias=True)
)
(lstm_layer): LSTM(808, 512, batch_first=True, bidirectional=True)
(hidden2tag): Linear(in_features=1024, out_features=2, bias=True)
(crf_layer): CRF(num_tags=2)
)
)
How should I go about correctly wrapping the checkpoint in nn.DataParallel, or should I use a different version of torch that could fix this problem?
I will be grateful for any help or hint.
| The problem was not caused by the DataParallel library, but rather by the transformers library (notice that I used the SciBERT model from the Hugging Face transformers library).
The error was caused by using older versions of transformers and tokenizers. Upgrading transformers from 3.0.2 to 4.8.2 and tokenizers to 0.10.3 fixed the problem.
| https://stackoverflow.com/questions/68453123/ |
| How can I measure the activation level of each node when predicting with a trained model? | I'm interested in which nodes are activated, and how strongly, when some input is fed to a trained model.
The picture below shows what I want to get from the model.
(I want to know the degree of activation of every node.)
As far as I know, there are some techniques to visualize what the nodes (or filters) are paying attention to (especially in CNNs).
Is there any good way to measure how active each node is?
I usually use Keras, but PyTorch is OK too.
| You are looking for activation maps / Grad-CAM; you can take a look at https://keras.io/examples/vision/grad_cam/ and https://keras.io/examples/vision/visualizing_what_convnets_learn/
You can also try keract : https://github.com/philipperemy/keract
There are other GitHub repositories on the subject of explainability in the AI field, such as https://github.com/XAI-ANITI/ethik
Alternatively, you can do it yourself in both Keras and PyTorch, for example by registering a hook in PyTorch:
import torch

def get_feat_vector(img, model):
    # temporarily hook the first layer, run a forward pass, capture that layer's output, then remove the hook
    with torch.no_grad():
        my_output = None

        def my_hook(module_, input_, output_):
            nonlocal my_output
            my_output = output_

        a_hook = model.layers[0].register_forward_hook(my_hook)
        model(img)
        a_hook.remove()
        return my_output
...
for element in val_dataloader:
model.eval()
feature_vect = get_feat_vector(element[0].float().cuda(), model)
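A comparable approach in Keras (a sketch, assuming a built Keras model named model and an input batch x; adapt the names to your own code) is to build a sub-model that outputs the activations of every layer:

import tensorflow as tf

# one output per layer -> the activation values of every node for the given input
activation_model = tf.keras.Model(
    inputs=model.input,
    outputs=[layer.output for layer in model.layers]
)
activations = activation_model.predict(x)  # list of arrays, one per layer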
| https://stackoverflow.com/questions/68455051/ |
torch Tensor add dimension | I have a tensor with this size
torch.Size([128, 64])
how do I add one "dummy" dimension such as
torch.Size([1, 128, 64])
| There are several ways:
torch.unsqueeze:
torch.unsqueeze(x, 0)
Using None (or np.newaxis):
x[None, ...]
# or
x[np.newaxis, ...]
reshape or view:
x.reshape(1, *x.shape)
# or
x.view(1, *x.shape)
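All three approaches produce the same shape; a quick sanity check with a random tensor of the question's size:

import torch
x = torch.rand(128, 64)
print(torch.unsqueeze(x, 0).shape)  # torch.Size([1, 128, 64])
print(x[None, ...].shape)           # torch.Size([1, 128, 64])
print(x.view(1, *x.shape).shape)    # torch.Size([1, 128, 64])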
| https://stackoverflow.com/questions/68455417/ |
| Pytorch giving cuda error on mac when creating a neural net | I have Python 3.8 and an installation of torch obtained using
pip3 install torch
When I try to create a model with 3 input dimensions, I get an error relating to cuda support.
I'm on a mac which doesn't have a GPU with cuda support. How can I prevent this error?
to reproduce:
import torch as T
import torch.nn as nn
def create(input_dims):
return nn.Sequential(
nn.Linear(*input_dims, 256),
nn.ReLU(),
nn.Linear(256, 256),
nn.ReLU(),
nn.Linear(256, 50),
nn.Softmax(dim=-1)
)
create(input_dims=(100,100)) #this works fine
create(input_dims=(100,100,3)) #this produces the error:
"""
Traceback (most recent call last):
File "example.py", line 15, in <module>
create(input_dims=(100,100,3))
File "example.py", line 6, in create
nn.Linear(*input_dims, 256),
File "/Users/darrinwiley/Library/Python/3.8/lib/python/site-packages/torch/nn/modules/linear.py", line 81, in __init__
self.weight = Parameter(torch.empty((out_features, in_features), **factory_kwargs))
File "/Users/darrinwiley/Library/Python/3.8/lib/python/site-packages/torch/cuda/__init__.py", line 166, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
"""
| As shown in the PyTorch docs, if you are on a Mac, PyTorch does not offer any way to install the CUDA toolkit (the install selector only lists CUDA builds for Linux and Windows).
A way to work around this is to either use a shared server running Linux or Windows, or to run a virtual machine.
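If the immediate goal is just to avoid the crash on a CPU-only machine, note that nn.Linear takes exactly two size arguments (in_features, out_features); a hedged sketch, assuming the intent of input_dims=(100, 100, 3) is to flatten all input dimensions into the layer's input size:

import math
import torch.nn as nn

def create(input_dims):
    in_features = math.prod(input_dims)  # e.g. 100 * 100 * 3 = 30000
    return nn.Sequential(
        nn.Linear(in_features, 256),
        nn.ReLU(),
        nn.Linear(256, 256),
        nn.ReLU(),
        nn.Linear(256, 50),
        nn.Softmax(dim=-1)
    )

create(input_dims=(100, 100, 3))  # no CUDA machinery is touched on a CPU-only build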
| https://stackoverflow.com/questions/68458917/ |
| arguments and function call of LSTM in pytorch | Could anyone please explain the code below:
import torch
import torch.nn as nn
input = torch.randn(5, 3, 10)
h0 = torch.randn(2, 3, 20)
c0 = torch.randn(2, 3, 20)
rnn = nn.LSTM(10,20,2)
output, (hn, cn) = rnn(input, (h0, c0))
print(input)
When calling rnn(input, (h0, c0)) we pass the arguments h0 and c0 inside parentheses. What is that supposed to mean? If (h0, c0) represents a single value, then what is that value, and what is the third argument passed here?
However, in the line rnn = nn.LSTM(10,20,2) we pass the arguments to the LSTM function without extra parentheses.
Can anyone explain how this function call works?
| The assignment rnn = nn.LSTM(10, 20, 2) instantiates a new nn.Module using the nn.LSTM class. Its first three arguments are input_size (here 10), hidden_size (here 20) and num_layers (here 2).
On the other hand, rnn(input, (h0, c0)) corresponds to actually calling the class instance, i.e. running __call__, which is roughly equivalent to the forward function of that module. The __call__ method of nn.LSTM takes two parameters: input (shaped (sequence_length, batch_size, input_size)), and a tuple of two tensors (h_0, c_0) (both shaped (num_layers, batch_size, hidden_size) in the basic use case of nn.LSTM).
Please refer to the PyTorch documentation whenever using built-in modules; there you will find the exact definition of the parameter list (the arguments used to initialize the class instance) as well as the input/output specifications (for when you run inference with that module).
You might be confused by the notation; here's a small example that could help:
tuple as input:
def fn1(x, p):
a, b = p # unpack input
return a*x + b
>>> fn1(2, (3, 1))
>>> 7
tuple as output
def fn2(x):
return x, (3*x, x**2) # actually output is a tuple of int and tuple
>>> fn2(2)
(2, (6, 4))
>>> x, (a, b) = fn2(2) # unpacking
>>> x, a, b
(2, 6, 4)
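A quick shape check for the LSTM example above (a sketch that reuses the tensors defined in the question):

output, (hn, cn) = rnn(input, (h0, c0))
print(output.shape)  # torch.Size([5, 3, 20]) -> (sequence_length, batch_size, hidden_size)
print(hn.shape)      # torch.Size([2, 3, 20]) -> (num_layers, batch_size, hidden_size)
print(cn.shape)      # torch.Size([2, 3, 20])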
| https://stackoverflow.com/questions/68461215/ |
| PyTorch new model function leaking gpu memory | I had perfectly working code until I had to make some modifications so it would be more flexible with some hyperparameters.
In the snippet below you can see my forward function, which previously worked using the 2 commented lines. As I needed more flexibility in the nn.Linear from self.classifier, I made the function classifierAdaptive(), which does the same thing but allows some parameters of the nn.Linear to change.
Previous GPU RAM usage was around 7GB/11GB; now I am running out of GPU RAM and I can't figure out why.
Edit:
The dataset is of images of size 2200x2200, from which I make small cuts to train on. These cuts are currently hardcoded as in 6 * 6.
The desired behaviour is to have only 1 nn.Linear which would work for all img.shapes. However, as @jsho said, maybe the model is creating too many layers instead of just 1 and overwriting?
I cannot include the entire code for debugging, as it would be too much, but if needed it can be seen here without the code modification below.
self.classifier = nn.Sequential(
nn.Linear(512 * 6 * 6, 1024),
nn.LeakyReLU(0.2, True),
nn.Linear(1024, 1),
nn.Sigmoid()
)
def classifierAdaptive(self,img):
out = torch.flatten(img, 1)
outLinear = nn.Linear(512 * img.shape[2] * img.shape[3], 1024).cuda()
out = outLinear(out).cuda()
out = self.lrelu(out) # self.lrelu = nn.LeakyReLU(0.2, True)
out = self.dens2(out) # self.dens2 = nn.Linear(1024, 1)
out = self.sig(out) # self.sig = nn.Sigmoid()
return out
def forward(self, x):
out = self.features(x)
# out = torch.flatten(out, 1)
# out = self.classifier(out)
out = self.classifierAdaptive(out)
return out
| You are creating a new layer for every image with a different size. Not only will this not work as intended, but every new layer has its own set of parameters stored in GPU memory, so you will quickly run out of memory. This is a common problem in image processing due to the fixed size of linear layers as opposed to the dynamic H, W dimensions of convolution layers. The only real way to fix it is to resize your images to the same size before putting them in the network (look at torch.nn.functional.interpolate or cv2.resize), or to use a convolution with a kernel size of 1 (effectively a linear layer) and some global average/max pooling to shrink the data to the same size.
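A hedged sketch of the pooling approach applied to the question's classifierAdaptive (the 512-channel count is copied from the question's code and is an assumption; adjust it to the real output of self.features):

# in __init__: create these once, so no new parameters are allocated per image
self.pool  = nn.AdaptiveAvgPool2d((6, 6))
self.dens1 = nn.Linear(512 * 6 * 6, 1024)

def classifierAdaptive(self, img):
    out = self.pool(img)              # spatial size is now always 6 x 6, whatever the input H, W
    out = torch.flatten(out, 1)
    out = self.dens1(out)
    out = self.lrelu(out)             # self.lrelu = nn.LeakyReLU(0.2, True)
    out = self.dens2(out)             # self.dens2 = nn.Linear(1024, 1)
    return self.sig(out)              # self.sig = nn.Sigmoid()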
| https://stackoverflow.com/questions/68462003/ |
| What do we mean by 'register' in PyTorch? | I am not asking about registers, the memory locations that store content.
I am asking about the usage of the word 'register' in the PyTorch documentation.
While reading the documentation regarding MODULE in PyTorch, I encountered the words register and registered several times.
The context of usage is as follows
1. tensor (Tensor) – buffer to be registered.
2. Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.
3. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
4. Registers a backward hook on the module.
5. Registers a forward hook on the module.
.....
And the word 'register' has been used in the names of several methods
1. register_backward_hook(hook)
2. register_buffer(name, tensor, persistent=True)
3. register_forward_hook(hook)
4. register_forward_pre_hook(hook)
5. register_parameter(name, param)
......
What does the usage of the word register mean programmatically?
Does it just mean the act of recording a name or information on an official list, as in plain English, or does it have some programmatic significance?
| This "register" in pytorch doc and methods names means "act of recording a name or information on an official list".
For instance, register_backward_hook(hook) adds the function hook to a list of other functions that nn.Module executes during the execution of the forward pass.
Similarly, register_parameter(name, param) adds an nn.Parameter param with name to the list of trainable parameters of the nn.Module.
It is crucial to register trainable parameters so pytorch will know what tensors to pass to the optimizer and what tensors to store as part of the nn.Module's state_dict.
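A small illustration of what registering changes in practice (a sketch; the module and tensor names are invented for the example):

import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        # registered buffer: saved in state_dict() but not trained
        self.register_buffer("running_stat", torch.zeros(3))
        # registered parameter: saved in state_dict() and returned by parameters() -> seen by the optimizer
        self.register_parameter("scale", nn.Parameter(torch.ones(3)))
        # NOT registered: a plain attribute, invisible to state_dict() and the optimizer
        self.plain_tensor = torch.zeros(3)

m = MyModule()
print(list(m.state_dict().keys()))           # ['scale', 'running_stat']
print([n for n, _ in m.named_parameters()])  # ['scale']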
| https://stackoverflow.com/questions/68463009/ |
does pytorch support complex numbers? | Minimum (not) working example
kernel = Conv2d(in_channels=1, out_channels=1, kernel_size=(3, 2))
data = torch.rand(1, 1, 100, 100).type(torch.complex64)
kernel(data)
yields RuntimeError: "unfolded2d_copy" not implemented for 'ComplexDouble' for 64- and 128-bit complex numbers, while for 32-bit I get RuntimeError: "copy_" not implemented for 'ComplexHalf'.
Am I missing something, or is pytorch missing support for complex numbers??
note: I'm on macbook, using cpu only.
| Currently (as of the latest stable version, 1.9.0) PyTorch is missing support for such operations on complex tensors (which are a beta feature). See the feature request Native implementation of convolution for complex numbers.
Splitting into convolutions on the real & imaginary parts separately, though not ideal, is the way to go for now.
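A minimal sketch of that split, assuming the shapes from the question: two real Conv2d layers combined with the complex product rule (a + ib)(Wr + iWi) = (Wr*a - Wi*b) + i(Wi*a + Wr*b):

import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size):
        super().__init__()
        self.conv_r = nn.Conv2d(in_channels, out_channels, kernel_size)  # real part of the kernel
        self.conv_i = nn.Conv2d(in_channels, out_channels, kernel_size)  # imaginary part of the kernel

    def forward(self, z):
        a, b = z.real, z.imag
        real = self.conv_r(a) - self.conv_i(b)
        imag = self.conv_i(a) + self.conv_r(b)
        return torch.complex(real, imag)

kernel = ComplexConv2d(in_channels=1, out_channels=1, kernel_size=(3, 2))
data = torch.rand(1, 1, 100, 100).type(torch.complex64)
out = kernel(data)  # complex64 output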
| https://stackoverflow.com/questions/68473604/ |
Positional Encoding for time series based data for Transformer DNN models | In several academic papers, researchers use the following positional encoding to denote the positioning of elements in a sequence, whether it be a time series-based sequence or words in a sentence for NLP purposes.
My question is how the positioning is actually applied to the data before it is fed to the deep neural network (in my case a transformer network):
Are the positional values added directly to the actual values of the elements in the sequence (or to the word representation values)? Or are they concatenated? Is the positional embedding part of the data preprocessing stage?
Does the Tensorflow/Keras MultiHeadAttention layer actually already contain an Embedding layer that takes care of the positional encoding? Or not?
What about the normalization of data? Are only the actual element values normalized and then the positional encoding is added to that normalized value? Or is the positional encoding value added to the raw value of the element and the resulting values are normalized?
I am interested in actual implementation details not the conceptual part of positional encoding as I read most of the academic papers on positional encoding already. Unfortunately, most academic papers fall short of describing in detail at what stage and how precisely the positional encoding is applied to the data structure.
Thanks!!!
| Positional encoding is just a way to let the model differentiate between two elements (words) that are the same but appear in different positions in a sequence.
After applying embeddings in an LM (language model), for example, we add PE to encode information about the position of each word.
Are the positional values added directly to the actual values of the elements in the sequence (or to the word representation values)? Or are they concatenated? Is the positional embedding part of the data preprocessing stage?
Yes, PE values are just added directly to the actual values (the embeddings in an LM). As a result, the embedding vector of a word that appears at the beginning of the sequence will be different from the embedding vector of the same word appearing in the middle of the sequence. And no, PE is not part of the data preprocessing stage.
Here's an example of code:
class PositionalEncodingLayer(nn.Module):
def __init__(self, d_model, max_len=100):
super(PositionalEncodingLayer, self).__init__()
self.d_model = d_model
self.max_len = max_len
def get_angles(self, positions, indexes):
d_model_tensor = torch.FloatTensor([[self.d_model]]).to(positions.device)
angle_rates = torch.pow(10000, (2 * (indexes // 2)) / d_model_tensor)
return positions / angle_rates
def forward(self, input_sequences):
"""
:param Tensor[batch_size, seq_len] input_sequences
:return Tensor[batch_size, seq_len, d_model] position_encoding
"""
positions = torch.arange(input_sequences.size(1)).unsqueeze(1).to(input_sequences.device) # [seq_len, 1]
indexes = torch.arange(self.d_model).unsqueeze(0).to(input_sequences.device) # [1, d_model]
angles = self.get_angles(positions, indexes) # [seq_len, d_model]
angles[:, 0::2] = torch.sin(angles[:, 0::2]) # apply sin to even indices in the tensor; 2i
angles[:, 1::2] = torch.cos(angles[:, 1::2]) # apply cos to odd indices in the tensor; 2i
position_encoding = angles.unsqueeze(0).repeat(input_sequences.size(0), 1, 1) # [batch_size, seq_len, d_model]
return position_encoding
class InputEmbeddingAndPositionalEncodingLayer(nn.Module):
def __init__(self, vocab_size, max_len, d_model, dropout):
super(InputEmbeddingAndPositionalEncodingLayer, self).__init__()
self.vocab_size = vocab_size
self.max_len = max_len
self.d_model = d_model
self.dropout = nn.Dropout(p=dropout)
self.token_embedding = nn.Embedding(vocab_size, d_model)
self.position_encoding = PositionalEncodingLayer(d_model=d_model, max_len=max_len)
def forward(self, sequences):
"""
:param Tensor[batch_size, seq_len] sequences
:return Tensor[batch_size, seq_len, d_model]
"""
token_embedded = self.token_embedding(sequences) # [batch_size, seq_len, d_model]
position_encoded = self.position_encoding(sequences) # [batch_size, seq_len, d_model]
return self.dropout(token_embedded) + position_encoded # [batch_size, seq_len, d_model]
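A hypothetical usage of the two layers above (the vocabulary size and dimensions are made-up values; torch is assumed to be imported):

layer = InputEmbeddingAndPositionalEncodingLayer(vocab_size=1000, max_len=100, d_model=64, dropout=0.1)
tokens = torch.randint(0, 1000, (8, 20))  # [batch_size=8, seq_len=20]
out = layer(tokens)                       # [8, 20, 64] -> token embeddings + positional encoding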
Does the Tensorflow/Keras MultiHeadAttention layer actually already contain an Embedding layer that takes care of the positional encoding? Or not?
Simply put, no. You have to build the PE yourself.
What about the normalization of data? Are only the actual element values normalized and then the positional encoding is added to that normalized value? Or is the positional encoding value added to the raw value of the element and the resulting values are normalized?
The normalization part is at your discretion; you do what you want, but you should apply normalization. Also, PE is added to the normalized values, not the raw ones.
| https://stackoverflow.com/questions/68477306/ |
CPU only pytorch is crashing with error AssertionError: Torch not compiled with CUDA enabled | I'm trying to run the code from this repository and I need to use Pytorch 1.4.0. I've installed the CPU only version of pytorch with pip install torch==1.4.0+cpu torchvision==0.5.0+cpu -f https://download.pytorch.org/whl/torch_stable.html.
I ran the program by doing py -m train_Kfold_CV --device 0 --fold_id 10 --np_data_dir "C:\Users\username\OneDrive\Desktop\emadeldeen\AttnSleep\prepare_datasets\edf_20_npz" but I'm getting this error:
File "C:\Users\username\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\username\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\username\OneDrive\Desktop\emadeldeen\AttnSleep\train_Kfold_CV.py", line 94, in <module>
main(config, fold_id)
File "C:\Users\username\OneDrive\Desktop\emadeldeen\AttnSleep\train_Kfold_CV.py", line 65, in main
trainer.train()
File "C:\Users\username\OneDrive\Desktop\emadeldeen\AttnSleep\base\base_trainer.py", line 66, in train
result, epoch_outs, epoch_trgs = self._train_epoch(epoch, self.epochs)
File "C:\Users\username\OneDrive\Desktop\emadeldeen\AttnSleep\trainer\trainer.py", line 49, in _train_epoch
loss = self.criterion(output, target, self.class_weights)
File "C:\Users\username\OneDrive\Desktop\emadeldeen\AttnSleep\model\loss.py", line 6, in weighted_CrossEntropyLoss
cr = nn.CrossEntropyLoss(weight=torch.tensor(classes_weights).cuda())
File "C:\Users\username\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\cuda\__init__.py", line 196, in _lazy_init
_check_driver()
File "C:\Users\username\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\cuda\__init__.py", line 94, in _check_driver
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
I've changed the number of GPUs in the config to 0 and tried adding device = torch.device('cpu') at the beginning of the program, but it's not doing anything. How can I fix this error? I'm using Windows 10 with Python 3.7.9, if that helps.
Thanks
| You are using CPU-only PyTorch, but your code has a statement like cr = nn.CrossEntropyLoss(weight=torch.tensor(classes_weights).cuda()), which tries to move the tensor to the GPU.
To fix it,
remove all the .cuda() operations.
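A hedged, device-agnostic rewrite of that line (a sketch; classes_weights below uses placeholder values, in the real code it comes from the project itself):

import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
classes_weights = [1.0, 2.0, 1.5]  # placeholder values for illustration
cr = nn.CrossEntropyLoss(weight=torch.tensor(classes_weights).to(device))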
| https://stackoverflow.com/questions/68477345/ |
Normalizing CNN network output to get a distance output between 0 and 1 | I'm using PyTorch based CNNs to do feature extraction on images of humans in order to use it to re-identify that same person given a different picture. After the whole process I am left with a 1D vector, about 2048x1 which I then compare using L2 distance as a metric. I am currently trying to normalize that L2 distance output so I can represent the model's prediction as a confidence from 0-1.
I noticed that PyTorch recommends a preprocessing where images are loaded into a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225]. When I do this, it seems to alter the range of the L2 distance output, so it is no longer 0-1. I am wondering if there is a way to normalize the output vector, or the L2 distance itself, back to the 0-1 range so I can represent it as a value from 0-1.
EDIT:
The following is the code that I am using to calculate the L2 distance
distmat = torch.pow(qf, 2).sum(dim=1, keepdim=True).expand(m, n) + \
torch.pow(gf, 2).sum(dim=1, keepdim=True).expand(n, m).t()
distmat.addmm_(qf, gf.t(), beta=1, alpha=-2, )
distmat = distmat.cpu().numpy()
As was suggested by a commenter, I am attempting to change the L2 distance calculation to instead use cosine distance. This is my current code:
distmat = torch.mm(qf, gf.t())
However when I ran this, I was getting outputs that looked like this:
tensor([[ 0.3244, 0.2478, 0.1808, -0.0249, 0.2137, 0.2113]])
Wondering if this is the right way to do the cosine distance calculation?
EDIT 2:
Here's how cosine similarity's final implementation looked like for me:
qf_norm = qf / qf.norm(dim=1)[:, None]
gf_norm = gf / gf.norm(dim=1)[:, None]
distmat = torch.mm(qf_norm, gf_norm.transpose(0, 1)).cpu().numpy()
| Using normalization vs not using normalization
Are you using a pretrained network?
If yes, and the pretrained model was trained with normalized input (the mean/std transformation), then you should use those operations. If you skip them, the embeddings will be less useful or not useful at all. Simply put, you should feed in normalized images, because the model was trained on normalized input.
Normalized distances
L2 distance is unbounded. Even if you compute the range over a sample of typical input, it is not guaranteed that a new input image would fall in that range. For this reason, people use cosine similarity, which is akin to a normalized dot product and thus will always be between -1 and 1.
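If the goal is a 0-1 confidence score, a minimal sketch is to rescale the cosine similarity linearly, reusing the normalized matrices from the question's second edit:

cos_sim = torch.mm(qf_norm, gf_norm.t())  # values in [-1, 1]
confidence = (cos_sim + 1.0) / 2.0        # rescaled to [0, 1]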
| https://stackoverflow.com/questions/68478848/ |
PyTorch 1.9 command equivalent for `zero_gradients` | This error comes after the upgrade of my PyTorch version from 1.8 to 1.9.0.
When using this line:
from torch.autograd.gradcheck import zero_gradients, I get this error message: ImportError: cannot import name 'zero_gradients' from 'torch.autograd.gradcheck'
The command:
zero_gradients(im)
is used.
What is the new command equivalent in PyTorch 1.9.0?
| There is no function in PyTorch named zero_gradients(); the nearest similarly named function is zero_grad(). zero_gradients() is instead a function defined in the repo you shared, in auto-attack/autoattack/other_utils.py.
# imports to make the snippet self-contained; the original repo may import container_abcs via
# torch._six, which was removed in 1.9, so collections.abc is used here as the standard replacement
import torch
import collections.abc as container_abcs

def zero_gradients(x):
if isinstance(x, torch.Tensor):
if x.grad is not None:
x.grad.detach_()
x.grad.zero_()
elif isinstance(x, container_abcs.Iterable):
for elem in x:
zero_gradients(elem)
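A hypothetical usage, matching the zero_gradients(im) call from the question (im here is an invented input tensor whose gradient you want to clear between iterations):

im = torch.rand(1, 3, 224, 224, requires_grad=True)
# ... a forward pass and backward() would populate im.grad ...
zero_gradients(im)  # im.grad is now detached and zeroed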
| https://stackoverflow.com/questions/68485574/ |
What is the correct way to calculate the norm, 1-norm, and 2-norm of vectors in PyTorch? | I have a matrix:
t = torch.rand(2,3)
print(t)
>>>tensor([[0.5164, 0.3651, 0.0882],
[0.4488, 0.9824, 0.4067]])
I'm following this introduction to norms and want to try it in PyTorch.
It seems like the:
norm of a vector is "the size or length of a vector is a nonnegative number that describes the extent of the vector in space, and is sometimes referred to as the vector’s magnitude or the norm"
1-Norm is "the sum of the absolute vector values, where the absolute value of a scalar uses the notation |a1|. In effect, the norm is a calculation of the Manhattan distance from the origin of the vector space."
2-Norm is "the distance of the vector coordinate from the origin of the vector space. The L2 norm is calculated as the square root of the sum of the squared vector values."
I currently only know of this:
print(torch.linalg.norm(t, dim=1))
>>>tensor([0.6385, 1.1541])
But I can't figure out which one of the three (norm, 1-norm, 2-norm) from here it is calculating, and how to calculate the rest
| To compute the 0-, 1-, and 2-norm you can either use torch.linalg.norm, providing the ord argument (0, 1, and 2 respectively), or call Tensor.norm directly on the tensor with the p argument. Below, each norm is shown in three variants: with Tensor.norm, with torch.linalg.norm, and computed manually.
0-norm
>>> x.norm(dim=1, p=0)
>>> torch.linalg.norm(x, dim=1, ord=0)
>>> x.ne(0).sum(dim=1)
1-norm
>>> x.norm(dim=1, p=1)
>>> torch.linalg.norm(x, dim=1, ord=1)
>>> x.abs().sum(dim=1)
2-norm
>>> x.norm(dim=1, p=2)
>>> torch.linalg.norm(x, dim=1, ord=2)
>>> x.pow(2).sum(dim=1).sqrt()
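As a sanity check against the tensor from the question (with torch imported), the default call with no ord/p reproduces the printed values, so torch.linalg.norm(t, dim=1) computes the 2-norm by default:

t = torch.tensor([[0.5164, 0.3651, 0.0882],
                  [0.4488, 0.9824, 0.4067]])
print(torch.linalg.norm(t, dim=1))  # tensor([0.6385, 1.1541]) -> the 2-norm
print(t.abs().sum(dim=1))           # 1-norm: tensor([0.9697, 1.8379])
print(t.ne(0).sum(dim=1))           # 0-norm: tensor([3, 3])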
| https://stackoverflow.com/questions/68489765/ |
How to create a DataSet of 1000 graphs in python | I need to create a dataset of 1000 graphs. I used the following code:
data_list = []
ngraphs = 1000
for i in range(ngraphs):
num_nodes = randint(10,500)
num_edges = randint(10,num_nodes*(num_nodes - 1))
f1 = np.random.randint(10, size=(num_nodes))
f2 = np.random.randint(10,20, size=(num_nodes))
f3 = np.random.randint(20,30, size=(num_nodes))
f_final = np.stack((f1,f2,f3), axis=1)
capital = 2*f1 + f2 - f3
f1_t = torch.from_numpy(f1)
f2_t = torch.from_numpy(f2)
f3_t = torch.from_numpy(f3)
capital_t = torch.from_numpy(capital)
capital_t = capital_t.type(torch.LongTensor)
x = torch.from_numpy(f_final)
x = x.type(torch.LongTensor)
edge_index = torch.randint(low=0, high=num_nodes, size=(num_edges,2), dtype=torch.long)
edge_attr = torch.randint(low=0, high=50, size=(num_edges,1), dtype=torch.long)
data = Data(x = x, edge_index = edge_index.t().contiguous(), y = capital_t, edge_attr=edge_attr )
data_list.append(data)
This works. But when I run my training function as follows:
for epoch in range(1, 500):
loss = train()
print(f'Loss: {loss:.4f}')
I keep getting the following error:
RuntimeError                              Traceback (most recent call last)
in ()
      1 for epoch in range(1, 500):
----> 2     loss = train()
      3     print(f'Loss: {loss:.4f}')

5 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in linear(input, weight, bias)
   1845     if has_torch_function_variadic(input, weight):
   1846         return handle_torch_function(linear, (input, weight), input, weight, bias=bias)
-> 1847     return torch._C._nn.linear(input, weight, bias)
   1848
   1849

RuntimeError: expected scalar type Float but found Long
Can someone help me troubleshoot this, or show how to make a 1000-graph dataset that doesn't throw this error?
| Change your x and y tensors into FloatTensors, since the Linear layer in PyTorch only accepts FloatTensor inputs.
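A minimal sketch of the conversion inside the dataset-building loop from the question (whether the targets must also be float depends on the loss: regression losses expect float targets, while CrossEntropyLoss expects Long class indices):

x = torch.from_numpy(f_final).float()  # FloatTensor instead of .type(torch.LongTensor)
edge_attr = edge_attr.float()          # if the edge attributes are also fed through Linear layers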
| https://stackoverflow.com/questions/68490678/ |
| Train a torch model using one GPU and the shared memory | I am new to training PyTorch models
and to training on a GPU.
I have tried training on Windows, but it always uses only the dedicated memory (10GB) and does not utilise the shared memory.
I have tried enhancing its performance using multiprocessing, but I kept getting the error :
TypeError: cannot pickle 'module' object
The usual solution is to use num_workers=0 while loading the data.
I actually use multiprocessing after loading the data
and only need to utilise the shared memory.
I am retraing meta-sr speaker verification code, specifically the training file:
https://github.com/seongmin-kye/meta-SR/blob/b4c1ea1728e33f7bbf7015c38f508f24594f3f88/train.py
I have edited line 92 to use the shared GPU memory as follows,
instead of:
train(train_generator, model, objective, optimizer, n_episode, log_dir, scheduler)
To:
model.share_memory()
p = mp.Process(target=train, args=(train_generator,model,objective, optimizer, n_episode, log_dir, scheduler))
p.num_workers=0
p.start()
p.join()
Please let me know if more information shall be added
Thanks in advance
| The shared memory (part of system RAM) can only be used if there are two GPUs in the PC,
and in this case there was only one GPU.
| https://stackoverflow.com/questions/68492691/ |
Trouble with minimal hvp on pytorch model | While autograd's hvp tool seems to work very well for functions, once a model becomes involved, Hessian-vector products seem to go to 0. Some code.
First, I define the world's simplest model:
class SimpleMLP(nn.Module):
def __init__(self, in_dim, out_dim):
super().__init__()
self.layers = nn.Sequential(
nn.Linear(in_dim, out_dim),
)
def forward(self, x):
'''Forward pass'''
return self.layers(x)
Then, a loss function:
def objective(x):
return torch.sum(0.25 * torch.sum(x)**4)
We instantiate it:
Arows = 2
Acols = 2
mlp = SimpleMLP(Arows, Acols)
Finally, I'm going to define a "forward" function (distinct from the model's forward function) that will serve as the full model+loss that we want to analyze:
def forward(*params_list):
for param_val, model_param in zip(params_list, mlp.parameters()):
model_param.data = param_val
x = torch.ones((Arows,))
return objective(mlp(x))
This passes a ones vector into the single-layer "mlp," and passes it into our quadratic loss.
Now, I attempt to compute:
v = torch.ones((6,))
v_tensors = []
idx = 0
#this code "reshapes" the v vector as needed
for i, param in enumerate(mlp.parameters()):
numel = param.numel()
v_tensors.append(torch.reshape(torch.tensor(v[idx:idx+numel]), param.shape))
idx += numel
And finally:
param_tensors = tuple(mlp.parameters())
reshaped_v = tuple(v_tensors)
soln = torch.autograd.functional.hvp(forward, param_tensors, v=reshaped_v)
But, alas, the Hessian-Vector Product in soln is all 0's. What is happening?
| What's happening is that strict is False by default in the hvp() function and a tensor of 0's is returned as the Hessian Vector Product instead of an error (source).
If you try with strict=True, an error RuntimeError: The output of the user-provided function is independent of input 0. This is not allowed in strict mode. is returned instead. And when I looked at the full error, I suspect that this error comes from _check_requires_grad(jac, "jacobian", strict=strict) which indicates that the jacobian jac is None.
Update:
Following is a full working example:
import torch
from torch import nn
# your loss function
def objective(x):
return torch.sum(0.25 * torch.sum(x)**4)
# Following are utilities to make nn.Module functional
# borrowed from the link I posted in comment
def del_attr(obj, names):
if len(names) == 1:
delattr(obj, names[0])
else:
del_attr(getattr(obj, names[0]), names[1:])
def set_attr(obj, names, val):
if len(names) == 1:
setattr(obj, names[0], val)
else:
set_attr(getattr(obj, names[0]), names[1:], val)
def make_functional(mod):
orig_params = tuple(mod.parameters())
# Remove all the parameters in the model
names = []
for name, p in list(mod.named_parameters()):
del_attr(mod, name.split("."))
names.append(name)
return orig_params, names
def load_weights(mod, names, params):
for name, p in zip(names, params):
set_attr(mod, name.split("."), p)
# your forward function with update
def forward(*new_params):
# this line replace your for loop
load_weights(mlp, names, new_params)
x = torch.ones((Arows,))
out = mlp(x)
loss = objective(out)
return loss
# your simple MLP model
class SimpleMLP(nn.Module):
def __init__(self, in_dim, out_dim):
super().__init__()
self.layers = nn.Sequential(
nn.Linear(in_dim, out_dim),
)
def forward(self, x):
'''Forward pass'''
return self.layers(x)
if __name__ == '__main__':
# your model instantiation
Arows = 2
Acols = 2
mlp = SimpleMLP(Arows, Acols)
# your vector computation
v = torch.ones((6,))
v_tensors = []
idx = 0
#this code "reshapes" the v vector as needed
for i, param in enumerate(mlp.parameters()):
numel = param.numel()
v_tensors.append(torch.reshape(torch.tensor(v[idx:idx+numel]), param.shape))
idx += numel
reshaped_v = tuple(v_tensors)
#make model's parameters functional
params, names = make_functional(mlp)
params = tuple(p.detach().requires_grad_() for p in params)
#compute hvp
soln = torch.autograd.functional.vhp(forward, params, reshaped_v, strict=True)
print(soln)
| https://stackoverflow.com/questions/68492748/ |
Direction of rotation of torch.rot90 | In the documentation for torch.rot90 it is stated that
Rotation direction is from the first towards the second axis if k > 0, and from the second towards the first for k < 0.
However say that we are rotating from axis 0 to axis 1, is axis 0 rotating to axis 1 in the clockwise or anti-clockwise direction? (since they are both 90 degree rotations as per the image below)
| axis=0 is the dimension that points downwards, while axis=1 points to the right. Visualize the axes like this:
---------> axis=1
|
|
|
\/
axis=0
Now, k>0 means counter-clockwise direction, k<0 is clockwise.
Thus,
>>> x = torch.arange(6).view(3, 2)
>>> x
tensor([[0, 1],
[2, 3],
[4, 5]])
>>> torch.rot90(x, 1, [0,1])
tensor([[1, 3, 5],
[0, 2, 4]])
>>> torch.rot90(x, 1, [1,0])
tensor([[4, 2, 0],
[5, 3, 1]])
The torch.rot90() is similar to numpy.rot90()
e.g.
numpy.rot90(m, k=1, axes=(0, 1))
mean
| https://stackoverflow.com/questions/68493579/ |
Randomly get index of one of the maximum values in a PyTorch tensor | I need to perform something similar to the built-in torch.argmax() function on a one-dimensional tensor, but instead of picking the index of the first of the maximum values, I want to be able to pick a random index of one of the maximum values. For example:
my_tensor = torch.tensor([0.1, 0.2, 0.2, 0.1, 0.1, 0.2, 0.1])
index_1 = random_max_val_index_fn(my_tensor)
index_2 = random_max_val_index_fn(my_tensor)
print(f"{index_1}, {index_2}")
> 5, 1
| You can get the indexes of all the maximums first and then choose randomly from them:
import numpy as np
import torch

def rand_argmax(tens):
    max_inds, = torch.where(tens == tens.max())  # indices of all maximum values
    return np.random.choice(max_inds)            # pick one of them at random
sample runs:
>>> my_tensor = torch.tensor([0.1, 0.2, 0.2, 0.1, 0.1, 0.2, 0.1])
>>> rand_argmax(my_tensor)
2
>>> rand_argmax(my_tensor)
5
>>> rand_argmax(my_tensor)
2
>>> rand_argmax(my_tensor)
1
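A pure-torch alternative (a sketch, in case the numpy dependency is unwanted):

def rand_argmax_torch(tens):
    max_inds = torch.nonzero(tens == tens.max(), as_tuple=False).flatten()
    return max_inds[torch.randint(len(max_inds), (1,))].item()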
| https://stackoverflow.com/questions/68499637/ |
PyTorch crashes when training: probable image decoding error, tensor value, corrupt image. (Runtime Error) | Premise
I am fairly new to using PyTorch, and more often than not I am getting a segfault when training my neural network on a small custom dataset (10 images, 90 classes).
The output below is from the following print statements, run twice (once with the MNIST dataset at idx 0 and once with my custom dataset at idx 0). Both datasets were compiled using a CSV file formatted the same way (img_name, class) plus an image directory; the MNIST subset is of size 30, and my custom dataset is of size 10:
example, label = dataset[0]
print(dataset[0])
print(example.shape)
print(label)
The first tensor is an MNIST 28X28 png converted to a tensor using:
image = torchvision.io.read_image().type(torch.FloatTensor)
This was so I had a working dataset to compare to. It uses the same custom dataset class as the custom data I have.
The Neural Net class is the exact same as my custom data NN except it has 10 outputs as opposed to the 90 from my custom data.
The custom data is of varied sizes, that have all been resized to 28 X 28 using the transforms.Compose() listed below. In this 10 image subset of the data, there are images that are dimensions 800X170, 96X66, 64X34, 208X66, etc...
The second tensor output is from a png that was of size 800 X 170.
The transforms performed on both datasets are the exact same:
tf=transforms.Compose([
transforms.Resize(size = (28,28)),
transforms.Normalize(mean=[-0.5/0.5],std=[1/0.5])
])
There is no target transforms performed.
Output of tensor, tensor size, class, and train/test performed at end
(tensor([[[ 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[ 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[ 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[ 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[ 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 19.5000,
119.0000, 54.0000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[ 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 32.5000,
127.0000, 93.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[ 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 32.5000,
127.0000, 106.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[ 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 32.5000,
127.0000, 106.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[ 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 32.5000,
127.0000, 106.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[ 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 85.5000,
127.5000, 107.0000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[ 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 63.5000,
127.0000, 106.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[ 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 59.0000,
127.0000, 58.0000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[ 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 32.5000,
127.0000, 66.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[ 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 32.5000,
127.0000, 106.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[ 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 33.0000,
128.0000, 107.0000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[ 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 32.5000,
127.0000, 88.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[ 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 59.5000,
127.0000, 54.0000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[ 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 85.0000,
127.0000, 54.0000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[ 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 85.0000,
127.0000, 54.0000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[ 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 85.5000,
128.0000, 54.0000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[ 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 85.0000,
127.0000, 54.0000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[ 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 85.0000,
127.0000, 60.0000, 8.0000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[ 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 85.0000,
127.0000, 127.5000, 84.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[ 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 28.0000,
118.5000, 65.5000, 14.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[ 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[ 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[ 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[ 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000,
0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000]]]), 1)
torch.Size([1, 28, 28])
1
Train Epoch: 1 [0/25 (0%)] Loss: -1.234500
Test set: Average loss: -1.6776, Accuracy: 1/5 (20%)
(tensor([[[68.1301, 67.3571, 68.4286, 67.9375, 69.5536, 69.2143, 69.0026,
69.2283, 70.4464, 70.2857, 68.8839, 68.6071, 71.3214, 70.5102,
71.0753, 71.9107, 71.5179, 71.5625, 73.6071, 71.9464, 73.2513,
72.5804, 73.5000, 74.1429, 72.7768, 72.9107, 73.1786, 74.9069],
[68.2028, 70.0714, 68.4821, 69.3661, 70.8750, 69.6607, 70.6569,
70.2551, 70.9464, 70.3393, 70.3929, 71.3571, 71.1250, 72.1901,
70.6850, 71.9464, 72.1071, 72.8304, 72.3036, 72.3214, 73.4528,
73.4898, 72.4286, 73.0179, 73.1071, 73.5179, 73.0357, 74.0280],
[71.3457, 70.4643, 70.4464, 70.7857, 70.6071, 71.9821, 71.6786,
72.7564, 72.4107, 72.2321, 72.8571, 72.7321, 70.0357, 72.2640,
73.8214, 72.8750, 73.0000, 73.0089, 74.8393, 74.1964, 74.9872,
73.4248, 72.0179, 74.5357, 74.9018, 74.9821, 75.0357, 72.9286],
[70.1429, 70.3750, 69.8750, 70.6250, 69.8750, 72.8750, 71.4107,
71.5089, 73.3750, 73.2500, 74.4375, 73.8750, 73.0000, 74.4375,
72.2768, 72.7500, 72.6250, 72.6250, 73.1250, 73.2500, 72.3571,
73.0625, 72.5000, 74.8750, 73.6875, 74.2500, 75.2500, 73.7411],
[53.1428, 56.1607, 57.4286, 58.3393, 60.6607, 59.3393, 62.2589,
62.8380, 64.1250, 66.6429, 66.9821, 67.8750, 74.7679, 70.5192,
68.7411, 69.3036, 66.0001, 67.9733, 67.4822, 68.3393, 68.3534,
69.5740, 69.4465, 70.9465, 69.0983, 72.2679, 70.4286, 70.1493],
[61.2143, 63.0000, 69.0357, 65.3393, 62.3214, 59.8036, 56.2730,
54.5829, 52.8393, 52.8929, 50.8304, 52.9107, 66.4643, 69.6875,
71.1849, 72.2678, 73.9821, 74.4643, 73.0357, 74.1250, 75.6492,
76.2360, 75.7679, 75.6071, 75.2857, 74.9286, 74.8929, 75.1850],
[54.9439, 62.5357, 69.7143, 72.0000, 71.2500, 74.1607, 75.9987,
79.6416, 79.5179, 81.4822, 77.3214, 75.2143, 49.6071, 59.7513,
71.4350, 74.4822, 73.5000, 73.8214, 72.2322, 73.7143, 73.9822,
74.5893, 74.7322, 74.8572, 76.2947, 71.5714, 73.4822, 74.8533],
[63.4298, 61.0357, 61.6072, 59.6697, 57.8036, 59.2322, 56.5982,
57.2079, 55.3393, 56.3572, 56.5804, 58.7322, 79.7499, 73.1900,
65.2423, 75.5357, 74.5356, 75.6250, 72.5893, 74.7321, 74.6135,
75.8852, 75.6964, 75.7678, 76.4286, 74.2500, 74.7857, 76.1671],
[63.7870, 60.3750, 67.5179, 67.5446, 66.7857, 66.2857, 66.4515,
68.5089, 68.5714, 67.0714, 68.5982, 66.7678, 57.3929, 67.2806,
68.9503, 72.9286, 74.0893, 73.4911, 74.2143, 73.3393, 72.4873,
73.3916, 71.7500, 75.4821, 73.8393, 74.8750, 74.6429, 75.0906],
[72.9260, 69.0178, 67.9643, 69.2321, 67.5178, 67.3750, 66.3814,
64.8890, 63.8572, 64.9464, 66.9821, 66.3928, 63.0000, 64.7449,
74.8800, 63.5178, 72.2143, 73.2321, 74.9286, 74.5893, 71.6938,
74.8635, 73.9107, 75.5536, 75.8036, 76.2857, 76.3750, 75.2564],
[72.1160, 69.5000, 72.0000, 69.4375, 71.2500, 70.5000, 72.3392,
73.5982, 71.5000, 72.3750, 68.8750, 67.1249, 65.3750, 60.2856,
61.6427, 65.3749, 67.4999, 65.0624, 70.4999, 69.4999, 65.3124,
71.9107, 69.7499, 72.8750, 72.5625, 72.7500, 74.8750, 73.7053],
[64.3763, 64.8571, 70.4642, 66.7857, 64.3214, 65.3928, 67.4859,
68.7385, 67.8750, 67.8750, 71.0267, 72.8749, 67.5356, 59.4106,
58.7625, 70.2319, 62.5534, 65.7141, 68.1249, 69.0713, 65.2013,
72.8392, 67.1427, 71.7500, 72.8482, 72.6071, 74.4285, 74.0051],
[69.7219, 71.8214, 67.4464, 68.6518, 66.0178, 66.1071, 65.5089,
65.6964, 65.6964, 61.0714, 61.4375, 61.8214, 67.8214, 61.8762,
57.3354, 66.8749, 63.8571, 60.3302, 62.9999, 67.8214, 68.9043,
71.6365, 67.5357, 75.6250, 74.6518, 73.6071, 74.5178, 75.3877],
[72.2857, 66.2857, 63.1964, 69.2232, 68.8214, 70.2857, 68.7895,
70.2436, 70.1250, 66.8750, 69.9643, 66.0893, 52.8393, 60.3201,
52.9273, 66.8571, 58.0535, 57.3035, 63.2321, 60.1785, 59.6058,
69.9936, 69.4286, 73.4821, 72.7143, 72.8750, 72.7500, 74.0791],
[65.7334, 56.6430, 60.7143, 67.8035, 66.5178, 65.8214, 67.6760,
67.3061, 65.6964, 64.5893, 53.1430, 68.4820, 52.7676, 48.1604,
48.1311, 65.3034, 51.9640, 61.8213, 59.6605, 57.3927, 54.6974,
75.5752, 73.1250, 74.3928, 74.0446, 72.2142, 72.2857, 77.7806],
[55.4095, 60.0893, 69.7142, 66.0892, 66.8750, 65.6607, 67.1926,
66.3712, 63.0000, 56.9465, 41.6073, 48.6609, 61.8035, 39.7281,
44.9195, 61.5892, 47.5891, 62.7678, 56.9641, 55.9820, 58.1236,
70.0548, 70.3750, 69.8392, 68.1517, 72.0535, 76.5893, 65.4489],
[60.6237, 66.5714, 67.8571, 65.7232, 66.2500, 67.6250, 66.9311,
67.3303, 64.8214, 48.9644, 45.9019, 49.4108, 51.6608, 43.9259,
47.5012, 38.9642, 37.5356, 66.0000, 65.5178, 49.3392, 57.3571,
67.8252, 69.7678, 70.2143, 51.7410, 76.1607, 69.7143, 54.4056],
[61.9643, 67.2500, 66.5000, 65.6875, 66.2500, 65.0000, 65.0625,
65.5268, 63.7500, 49.8750, 50.4375, 53.1250, 38.7500, 25.3750,
43.4286, 31.1250, 35.3750, 59.7500, 63.3750, 39.5000, 51.8125,
58.6249, 69.5000, 70.1250, 48.0000, 75.8750, 48.7500, 61.4018],
[67.8915, 65.7500, 66.3035, 66.5982, 66.0357, 64.9464, 65.4643,
65.8074, 63.4643, 56.2325, 48.3306, 54.9467, 22.0715, 23.6990,
29.0955, 27.3211, 29.4997, 57.8660, 68.2321, 36.9819, 50.7715,
52.6707, 69.7143, 71.3392, 55.5534, 45.7855, 62.9463, 64.1556],
[63.8431, 66.0893, 65.3571, 65.6161, 65.0893, 64.6964, 64.3444,
65.1225, 62.9107, 57.4287, 57.3216, 54.9287, 26.4465, 30.5689,
23.2499, 23.5534, 25.1605, 55.1071, 69.4643, 41.9642, 52.6619,
59.8954, 72.0893, 79.7322, 47.2856, 64.5000, 52.9463, 81.6888],
[64.2589, 69.9643, 71.5000, 75.2857, 77.6786, 78.6429, 76.2513,
71.0089, 67.5536, 60.8929, 57.2501, 48.1072, 22.4821, 44.3316,
17.5369, 24.3928, 22.8214, 45.4821, 67.8036, 35.4821, 43.7028,
52.7806, 81.8929, 56.7321, 60.5357, 44.2321, 82.6964, 72.7500],
[63.6748, 61.8929, 58.0001, 41.7859, 47.3037, 35.2502, 40.0525,
63.9669, 76.1962, 74.6603, 67.2228, 43.3748, 19.9821, 37.0776,
15.6544, 30.9823, 22.0182, 51.0984, 65.8215, 32.5717, 49.4747,
39.5946, 49.5359, 55.7859, 40.7681, 81.7857, 76.0357, 73.2832],
[60.0192, 53.6429, 43.5359, 44.8037, 39.9287, 48.8037, 48.3241,
35.5882, 22.6071, 20.7142, 33.8838, 45.3570, 25.0714, 32.6657,
26.8559, 22.9644, 27.7324, 69.4375, 62.5001, 33.9823, 48.6047,
33.4811, 38.3930, 58.5358, 74.2857, 73.2679, 68.8572, 71.0817],
[63.2500, 63.3393, 43.1608, 50.3751, 68.6786, 69.6429, 63.9324,
65.5510, 59.6249, 54.3035, 40.5267, 20.6071, 32.1785, 31.9834,
30.0791, 20.3036, 34.1073, 71.0000, 56.2322, 48.2501, 42.9695,
37.1225, 53.7322, 68.3750, 76.2232, 72.4822, 70.6072, 72.9324],
[63.1071, 64.1250, 65.7500, 41.7500, 26.2500, 25.6250, 25.1071,
24.1339, 18.8750, 23.5000, 35.5625, 44.5000, 31.1250, 37.3393,
28.3125, 23.6250, 39.3750, 67.1875, 60.7500, 53.2500, 41.6250,
39.1339, 61.2500, 81.0000, 71.3125, 70.8750, 71.5000, 72.1339],
[67.4796, 68.1429, 68.9821, 76.4286, 75.0893, 74.6250, 73.8419,
72.7398, 58.4108, 44.3572, 33.2322, 19.8036, 32.6965, 29.7296,
28.5957, 19.8750, 42.7499, 69.9196, 66.3214, 51.9285, 43.6848,
44.9017, 64.2857, 73.2857, 71.7321, 71.4286, 73.9286, 73.5893],
[67.7080, 67.9465, 68.0358, 69.1786, 69.1071, 69.7857, 69.0650,
70.3635, 60.1247, 52.3744, 52.1690, 44.3031, 30.2678, 29.7014,
20.1314, 25.4645, 45.8042, 74.2947, 63.4110, 56.0183, 49.2722,
50.1485, 73.1251, 74.6608, 74.3036, 73.8572, 72.2322, 74.1570],
[67.5868, 68.5179, 68.1786, 66.9018, 67.3215, 67.9822, 67.2628,
65.4694, 49.2318, 43.7318, 39.5888, 47.7318, 29.2499, 28.3277,
15.6326, 30.8215, 34.2502, 64.6428, 63.3572, 63.0001, 50.1688,
51.6037, 77.5000, 75.8215, 73.7501, 74.9286, 74.3572, 74.6097]]]), 20)
torch.Size([1, 28, 28])
20
Train Epoch: 1 [0/8 (0%)] Loss: -1.982941
Test set: Average loss: 0.0000, Accuracy: 0/2 (0%)
Error information
This output is from a run that completed with no segfault; the segfault usually occurs 4 times out of 5. When a segfault does occur, it never happens while processing the MNIST subset, only while attempting to access the custom dataset, whether at dataset[0], dataset[1], or literally any index. However, if I run the simple print statements enough times on any of the indices, I can get it to output at least once without crashing.
Here is an occasion when it crashed more gracefully (it output the tensor info and size/class, but crashed upon training):
torch.Size([1, 28, 28])
65
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py in _try_get_data(self, timeout)
989 try:
--> 990 data = self._data_queue.get(timeout=timeout)
991 return (True, data)
9 frames
/usr/lib/python3.7/queue.py in get(self, block, timeout)
178 raise Empty
--> 179 self.not_empty.wait(remaining)
180 item = self._get()
/usr/lib/python3.7/threading.py in wait(self, timeout)
299 if timeout > 0:
--> 300 gotit = waiter.acquire(True, timeout)
301 else:
/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/signal_handling.py in handler(signum, frame)
65 # Python can still get and update the process status successfully.
---> 66 _error_if_any_worker_fails()
67 if previous_handler is not None:
RuntimeError: DataLoader worker (pid 1132) is killed by signal: Segmentation fault.
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
<ipython-input-9-02c9a53ca811> in <module>()
68
69 if __name__ == '__main__':
---> 70 main()
<ipython-input-9-02c9a53ca811> in main()
60
61 for epoch in range(1, args.epochs + 1):
---> 62 train(args, model, device, train_loader, optimizerAdadelta, epoch)
63 test(model, device, test_loader)
64 scheduler.step()
<ipython-input-6-93be0b7e297c> in train(args, model, device, train_loader, optimizer, epoch)
2 def train(args, model, device, train_loader, optimizer, epoch):
3 model.train()
----> 4 for batch_idx, (data, target) in enumerate(train_loader):
5 data, target = data.to(device), target.to(device)
6 optimizer.zero_grad()
/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py in __next__(self)
519 if self._sampler_iter is None:
520 self._reset()
--> 521 data = self._next_data()
522 self._num_yielded += 1
523 if self._dataset_kind == _DatasetKind.Iterable and \
/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py in _next_data(self)
1184
1185 assert not self._shutdown and self._tasks_outstanding > 0
-> 1186 idx, data = self._get_data()
1187 self._tasks_outstanding -= 1
1188 if self._dataset_kind == _DatasetKind.Iterable:
/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py in _get_data(self)
1140 elif self._pin_memory:
1141 while self._pin_memory_thread.is_alive():
-> 1142 success, data = self._try_get_data()
1143 if success:
1144 return data
/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py in _try_get_data(self, timeout)
1001 if len(failed_workers) > 0:
1002 pids_str = ', '.join(str(w.pid) for w in failed_workers)
-> 1003 raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
1004 if isinstance(e, queue.Empty):
1005 return (False, None)
RuntimeError: DataLoader worker (pid(s) 1132) exited unexpectedly
Generally speaking, however, this issue appears to 'crash for an unknown reason', and here is what my logs look like when that occurs:
logs
What I think is going on/what I have tried
I think there is something wrong with the tensor values and how the image is being read. I am only working with a maximum of 40 images at a time, so there is no reason the disk resources or RAM on Google Colab should be failing. I might be normalizing the data improperly; I have tried different values but nothing has fixed it yet. Perhaps the images are corrupt?
I don't really have a solid grasp of what could be going on; otherwise, I would have already solved it. I think I have provided enough information for the issue to be apparent to someone with expertise in the area. I put a lot of time into this post, and I hope someone is able to help me get to the bottom of the problem.
If there are any other obvious issues with my code and my use of the network and custom dataset please let me know, as this is my first time working with PyTorch.
Thank you!
Additional information that I am not sure if it is relevant:
Custom dataset class:
# ------------ Custom Dataset Class ------------
class PhytoplanktonImageDataset(Dataset):
def __init__(self, annotations_file, img_dir, transform, target_transform):
self.img_labels = pd.read_csv(annotations_file) # Image name and label file loaded into img_labels
self.img_dir = img_dir # directory to find all image names
self.transform = transform # tranforms to apply to images
self.target_transform = target_transform
def __len__(self):
return len(self.img_labels) # get length of csv file
def __getitem__(self, idx):
img_path = os.path.join(self.img_dir, self.img_labels.iloc[idx, 0])
image = torchvision.io.read_image(path=img_path)
image = image.type(torch.FloatTensor)
label = self.img_labels.iloc[idx,1]
if self.transform:
image = self.transform(image)
if self.target_transform:
label = self.target_transform(label)
return image, label
NN class (the only thing changed is that nn.Linear() has 10 outputs for MNIST):
class NeuralNetwork(nn.Module):
def __init__(self):
super(NeuralNetwork, self).__init__()
self.flatten = nn.Flatten()
self.linear_relu_stack = nn.Sequential(
nn.Linear(28*28, 512),
nn.ReLU(),
nn.Linear(512, 512),
nn.ReLU(),
nn.Linear(512, 90),
nn.ReLU()
)
def forward(self, x):
x = self.flatten(x)
logits = self.linear_relu_stack(x)
return logits
Args used:
args = parser.parse_args(['--batch-size', '64', '--test-batch-size', '64',
'--epochs', '1', '--lr', '0.01', '--gamma', '0.7', '--seed','4',
'--log-interval', '10'])
Edit: On one of the runs I was able to get the following graceful exit (this traceback is from partway into the __getitem__ call):
<ipython-input-3-ae5ff8635158> in __getitem__(self, idx)
13 img_path = os.path.join(self.img_dir, self.img_labels.iloc[idx, 0]) # image path
14 print(img_path)
---> 15 image = torchvision.io.read_image(path=img_path) # Reading image to 1 dimensional GRAY Tensor uint between 0-255
16 image = image.type(torch.FloatTensor) # Now a FloatTensor (not a ByteTensor)
17 label = self.img_labels.iloc[idx,1] # getting label from csv
/usr/local/lib/python3.7/dist-packages/torchvision/io/image.py in read_image(path, mode)
258 """
259 data = read_file(path)
--> 260 return decode_image(data, mode)
/usr/local/lib/python3.7/dist-packages/torchvision/io/image.py in decode_image(input, mode)
237 output (Tensor[image_channels, image_height, image_width])
238 """
--> 239 output = torch.ops.image.decode_image(input, mode.value)
240 return output
241
RuntimeError: Internal error.
Here is the image path being printed just before the decoding fails: /content/gdrive/My Drive/Colab Notebooks/all_images/sample_10/D20190926T145532_IFCB122_00013.png
and here is what that image looks like:
image
Information about this image:
Color Model: Gray
Depth: 16
Pixel Height: 50
Pixel Width: 80
Image DPI: 72 pixels per inch
file size: 3,557 bytes
| I suggest taking a look at the num_workers param of your DataLoader.
If num_workers is set too high, it may be causing this error. Therefore, I suggest lowering it, down to zero if necessary, until you no longer get this error.
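A minimal sketch of that change (the dataset variable and batch size here stand in for the question's own setup):

train_loader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True, num_workers=0)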
Sarthak
| https://stackoverflow.com/questions/68504538/ |