instruction | input | output | source
---|---|---|---|
How are PyTorch model layer weights initialized implicitly? | I defined a model with a Conv2d and a linear layer in PyTorch and trained it on a sample dataset. The model runs and converges. But I noticed that I did not explicitly initialize the model layer weights (normal or Xavier). Does that mean the layer weights are initialized randomly by default when I call model.train() before each training epoch? If so, how can I explicitly change the type of initialization?
| The type of initialization depends on the layer. You can check it from the reset_parameters method or from the docs as well.
For both linear and conv layers, it's He initialization (torch.nn.init.kaiming_uniform_).
It's mentioned in the documentation as
The values are initialized from U(−sqrt(k),sqrt(k)).
For embedding layer, it's Normal initialization. (mentioned in docs as N(0,1)).
You can change the type of initialization as mentioned in How to initialize weights in PyTorch?.
conv1 = torch.nn.Conv2d(...)
torch.nn.init.xavier_uniform_(conv1.weight)
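For instance, here is a hedged sketch (the small Sequential model is only a placeholder, not the asker's network) of applying a custom initialization to every conv and linear layer at once with Module.apply:
import torch.nn as nn

def init_weights(m):
    # apply Xavier initialization to conv/linear weights and zero their biases
    if isinstance(m, (nn.Conv2d, nn.Linear)):
        nn.init.xavier_uniform_(m.weight)
        if m.bias is not None:
            nn.init.zeros_(m.bias)

model = nn.Sequential(nn.Conv2d(1, 16, 3), nn.ReLU(), nn.Flatten(), nn.Linear(16 * 26 * 26, 10))
model.apply(init_weights)  # recursively visits every submodule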
| https://stackoverflow.com/questions/65606553/ |
RuntimeError: Given input size: (40x256x1). Calculated output size: (40x253x-2). Output size is too small | import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
from torch.autograd import Variable
from sklearn import preprocessing
batch_size = 32
num_classes = 8
epochs = 10
img_rows, img_cols = 256, 256
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
print('x_train shape:', x_train.shape)
print("Training samples: {}".format(x_train.shape[0]))
print("Test samples: {}".format
(x_test.shape[0]))
x_train = torch.Tensor(x_train).float()
x_test = torch.Tensor(x_test).float()
y_train = torch.LongTensor(y_train)
y_test = torch.LongTensor(y_test)
#Define model
model = nn.Sequential(
nn.Conv2d(256,80,1, stride=1),
nn.ReLU(),
nn.Conv2d(80,40,1, stride=1),
nn.ReLU(),
nn.MaxPool2d(4,stride=1),
nn.Conv2d(40,30,1, stride=1),
nn.ReLU(),
nn.Conv2d(30,15,1, stride=1),
nn.ReLU(),
nn.Dropout(0.2),
nn.Flatten(),
nn.Linear(32, 32),
nn.Dropout(0.1),
nn.Linear(num_classes, 256),
nn.ReLU()
)
criterion = nn.CrossEntropyLoss()# cross entropy loss
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
for epoch in range(100):
optimizer.zero_grad()
out = model(x_train)
loss = criterion(out, y_train)
loss.backward()
optimizer.step()
print(f"Epoch {epoch+1}, loss: {loss.item()}")
However, when I run this code it produces the error at this specific line:
RuntimeError Traceback (most recent call last)
<ipython-input-10-96fa8b09f1ec> in <module>
63 for epoch in range(100):
64 optimizer.zero_grad()
---> 65 out = model(x_train)
66 loss = criterion(out, y_train)
67 loss.backward()
with the following error message
RuntimeError: Given input size: (40x256x1). Calculated output size: (40x253x-2). Output size is too small
I am unsure how to fix this as I am new to PyTorch, and the original model did work in TensorFlow.
Any help would be much appreciated.
| I'm assuming you are working with images. In that case, there are several issues with your code. Also reading from the comments, there are a couple of things I need to clarify.
I think the most important one is the fact that you've switched up the axes on the input shape. Unlike in TensorFlow, PyTorch multi-channel maps are shaped as (b, c, h, w) (b: batch size; c: number of channels; h x w: height and width of a feature map).
Also, you have defined the first layer as nn.Conv2d(256, 80, 1, stride=1), which means it has 80 filters and expects a 256-channel input! That's not what you want, assuming you've followed my first point (which is to feed (1, 256, 256) images to your model). The number of filters can stay at 80; that's up to you and your neural network design.
A little later down your model, you define a max pool with nn.MaxPool2d(4, stride=1). Just to point out that you are using a kernel size of 4 pixels here. Which means that, at this point, the resulting tensor will have a shape of (b, 40, 253, 253). The change from 256x256 to 253x253 is due to the kernel size being 4.
You flatten after your final convolution into (b, 960135). That's 253*253*15 (feature map dimensions times number of feature maps). Ultimately, the next dense layer needs to have that number as input size. Instead you have put nn.Linear(32, 32) followed by nn.Linear(num_classes, 256). It should rather be something like nn.Linear(253*253*15, n) followed by nn.Linear(n, num_classes). Where n is arbitrarily set and I assumed you wanted an output with num_classes logits at the end.
Also, I read in the comments
If you organize all correctly, I think you have to change the first layer in Sequential() from nn.Conv2d(256,80,1, stride=1) to nn.Conv2d(32,80,1, stride=1)
The batch size doesn't have anything to do with the layer's size parameters. Don't think about the batch axis, it's the first axis. Your model doesn't depend on your batch size!
I would recommend increasing your kernel sizes: this will let the feature maps shrink in their spatial dimensions as they go through the network while, at the same time, you increase the number of channels, until you hit the flatten layer and the fully connected layers.
Here is your model with some important corrections, just enough to make it run:
batch_size = 32
num_classes = 8
img_rows, img_cols = 256, 256
input_shape = (img_rows, img_cols, 1)
# dummy data, would broadcast your data with these shapes
x_train = torch.rand(batch_size, 1, img_rows, img_cols)
y_train = torch.rand(batch_size, 256)
model = nn.Sequential(
nn.Conv2d(1, 80, 1, stride=1),
nn.ReLU(),
nn.Conv2d(80, 40, 1, stride=1),
nn.ReLU(),
nn.MaxPool2d(4, stride=1),
nn.Conv2d(40, 30, 1, stride=1),
nn.ReLU(),
nn.Conv2d(30, 15, 1, stride=1),
nn.ReLU(),
nn.Dropout(0.2),
nn.Flatten(),
nn.Linear(253*253*15, 256),
nn.Dropout(0.1),
nn.Linear(256, num_classes),
nn.ReLU()
)
| https://stackoverflow.com/questions/65616189/ |
How to replicate PyTorch normalization in OpenCV or NumPy? | I need to replicate PyTorch image normalization in OpenCV or NumPy.
Quick backstory: I'm doing a project where I'm training in PyTorch but will have to inference in OpenCV due to deploying to an embedded device where I won't have the storage space to install PyTorch. After training in PyTorch and saving a PyTorch graph I'm then converting to an ONNX graph. For inferencing in OpenCV I'm opening the image as an OpenCV image (i.e. NumPy array), then resizing, then successively calling cv2.normalize, cv2.dnn.blobFromImage, net.setInput, and net.forward.
I'm getting slightly different accuracy results when test inferencing in PyTorch vs inferencing in OpenCV, and I suspect the difference is due to the normalization process producing a slightly different result between the two.
Here is a quick script I put together to show the difference on a single image. Note that I'm using grayscale (single-channel) and I'm normalizing into the -1.0 to +1.0 range:
# scratchpad.py
import torch
import torchvision
import cv2
import numpy as np
import PIL
from PIL import Image
TRANSFORM = torchvision.transforms.Compose([
torchvision.transforms.Resize((224, 224)),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize([0.5], [0.5])
])
def main():
# 1st show PyTorch normalization
# open the image as an OpenCV image
openCvImage = cv2.imread('image.jpg', cv2.IMREAD_GRAYSCALE)
# convert OpenCV image to PIL image
pilImage = PIL.Image.fromarray(openCvImage)
# convert PIL image to a PyTorch tensor
ptImage = TRANSFORM(pilImage).unsqueeze(0)
# show the PyTorch tensor info
print('\nptImage.shape = ' + str(ptImage.shape))
print('ptImage max = ' + str(torch.max(ptImage)))
print('ptImage min = ' + str(torch.min(ptImage)))
print('ptImage avg = ' + str(torch.mean(ptImage)))
print('ptImage: ')
print(str(ptImage))
# 2nd show OpenCV normalization
# resize the image
openCvImage = cv2.resize(openCvImage, (224, 224))
# convert to float32 (necessary for passing into cv2.dnn.blobFromImage, which is not shown here)
openCvImage = openCvImage.astype('float32')
# use OpenCV version of normalization, could also do this with numpy
cv2.normalize(openCvImage, openCvImage, 1.0, -1.0, cv2.NORM_MINMAX)
# show results
print('\nopenCvImage.shape = ' + str(openCvImage.shape))
print('openCvImage max = ' + str(np.max(openCvImage)))
print('openCvImage min = ' + str(np.min(openCvImage)))
print('openCvImage avg = ' + str(np.mean(openCvImage)))
print('openCvImage: ')
print(str(openCvImage))
print('\ndone !!\n')
# end function
if __name__ == '__main__':
main()
Here is the test image that I'm using:
and here are the results I'm getting currently:
$ python3 scratchpad.py
ptImage.shape = torch.Size([1, 1, 224, 224])
ptImage max = tensor(0.9608)
ptImage min = tensor(-0.9686)
ptImage avg = tensor(0.1096)
ptImage:
tensor([[[[ 0.0431, -0.0431, 0.1294, ..., 0.8510, 0.8588, 0.8588],
[ 0.0510, -0.0510, 0.0980, ..., 0.8353, 0.8510, 0.8431],
[ 0.0588, -0.0431, 0.0745, ..., 0.8510, 0.8588, 0.8588],
...,
[ 0.6157, 0.6471, 0.5608, ..., 0.6941, 0.6627, 0.6392],
[ 0.4902, 0.3961, 0.3882, ..., 0.6627, 0.6471, 0.6706],
[ 0.3725, 0.4039, 0.5451, ..., 0.6549, 0.6863, 0.6549]]]])
openCvImage.shape = (224, 224)
openCvImage max = 1.0000001
openCvImage min = -1.0
openCvImage avg = 0.108263366
openCvImage:
[[ 0.13725497 -0.06666661 0.20000008 ... 0.8509805 0.8666668
0.8509805 ]
[ 0.15294124 -0.06666661 0.09019614 ... 0.8274511 0.8431374
0.8274511 ]
[ 0.12156869 -0.06666661 0.0196079 ... 0.8509805 0.85882366
0.85882366]
...
[ 0.5843138 0.74117655 0.5450981 ... 0.83529425 0.59215695
0.5764707 ]
[ 0.6862746 0.34117654 0.39607853 ... 0.67843145 0.6705883
0.6470589 ]
[ 0.34117654 0.4117648 0.5215687 ... 0.5607844 0.74117655
0.59215695]]
done !!
As you can see the results are similar but definitely not exactly the same.
How can I do the normalization in OpenCV and have it come out exactly or almost exactly the same as the PyTorch normalization? I've tried various options in both OpenCV and with NumPy but could not get it any closer than the above results, which are substantially different.
-- Edit ---------------------------
In response to Ivan, I also tried this:
# resize the image
openCvImage = cv2.resize(openCvImage, (224, 224))
# convert to float32 (necessary for passing into cv2.dnn.blobFromImage, which is not shown here)
openCvImage = openCvImage.astype('float32')
mean = np.mean(openCvImage)
stdDev = np.std(openCvImage)
openCvImage = (openCvImage - mean) / stdDev
# show results
print('\nopenCvImage.shape = ' + str(openCvImage.shape))
print('openCvImage max = ' + str(np.max(openCvImage)))
print('openCvImage min = ' + str(np.min(openCvImage)))
print('openCvImage avg = ' + str(np.mean(openCvImage)))
print('openCvImage: ')
print(str(openCvImage))
Which results in:
openCvImage.shape = (224, 224)
openCvImage max = 2.1724665
openCvImage min = -2.6999729
openCvImage avg = 7.298528e-09
openCvImage:
[[ 0.07062991 -0.42616782 0.22349077 ... 1.809422 1.8476373
1.809422 ]
[ 0.10884511 -0.42616782 -0.04401573 ... 1.7520993 1.7903144
1.7520993 ]
[ 0.0324147 -0.42616782 -0.21598418 ... 1.809422 1.8285296
1.8285296 ]
...
[ 1.1597633 1.5419154 1.0642253 ... 1.7712069 1.178871
1.1406558 ]
[ 1.4081622 0.56742764 0.70118093 ... 1.3890547 1.3699471
1.3126242 ]
[ 0.56742764 0.7393961 1.0069026 ... 1.1024406 1.5419154
1.178871 ]]
Which is similar to the PyTorch normalization but clearly not the same.
I'm attempting to achieve normalization in OpenCV that produces the same result as the PyTorch normalization.
I realize that due to slight differences in the resizing operation (and possibly very small rounding differences) I'll probably never get exactly the same normalized result but I'd like to get as close as possible to the PyTorch result.
| This probably would be helpful
If you look at actual implementation of
torchvision.transforms.Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225],
)
The block below is what it actually does:
import numpy as np
from PIL import Image
MEAN = 255 * np.array([0.485, 0.456, 0.406])
STD = 255 * np.array([0.229, 0.224, 0.225])
img_pil = Image.open("ty.jpg")
x = np.array(img_pil)
x = x.transpose(-1, 0, 1)
x = (x - MEAN[:, None, None]) / STD[:, None, None]
Here I have done it directly on the image as a NumPy array.
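For the grayscale Normalize([0.5], [0.5]) setup in the question, a minimal NumPy/OpenCV sketch of the same recipe would be the following (this resizes straight to 224x224 like the question's OpenCV branch, so small residual differences from PIL's Resize/CenterCrop interpolation can still remain):
import cv2
import numpy as np

img = cv2.imread('image.jpg', cv2.IMREAD_GRAYSCALE)
img = cv2.resize(img, (224, 224))
x = img.astype(np.float32) / 255.0   # ToTensor() scales pixels to [0, 1]
x = (x - 0.5) / 0.5                  # Normalize([0.5], [0.5]) maps [0, 1] to [-1, 1]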
| https://stackoverflow.com/questions/65617755/ |
what is Pytorch's add_module()? | I stumbled upon the method add_module() in a Pytorch model.
The doc only states
Adds a child module to the current module.
The module can be accessed as an attribute using the given name.
I don't understand what "adding a child module" means.
How is it different from just setting a pointer to the other module using self._other_module = other_module?
What are the nuances?
| As mentioned here: https://discuss.pytorch.org/t/when-to-use-add-module-function/10534
In general, you won’t need to call add_module. One potential use case is the following:
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
modules = [...] # some list of modules
for module in modules:
self.add_module(...)
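A concrete, hypothetical sketch of that pattern - layers built in a loop get registered under generated names, so their parameters show up in net.parameters():
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        modules = [nn.Linear(10, 10) for _ in range(3)]  # some list of modules
        for i, module in enumerate(modules):
            self.add_module(f"layer_{i}", module)  # registers each one under a string name

net = Net()
print([name for name, _ in net.named_children()])  # ['layer_0', 'layer_1', 'layer_2']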
| https://stackoverflow.com/questions/65619076/ |
How to merge two torch.utils.data dataloaders with a single operation | I have two dataloaders and I would like to merge them without redefining the datasets, in my case train_dataset and val_dataset.
train_loader = DataLoader(train_dataset, batch_size = 512, drop_last=True,shuffle=True)
val_loader = DataLoader(val_dataset, batch_size = 512, drop_last=False)
Wanted result:
train_loader = train_loader + val_loader
| Data loaders are iterators; you can implement a function that returns an iterator which yields the dataloaders' content, one dataloader after the other.
Given a number of iterators itrs, it would iterate over each iterator and in turn iterate over each iterator yielding one batch at a time. A possible implementation would be as simple as:
def itr_merge(*itrs):
for itr in itrs:
for v in itr:
yield v
Here is a usage example:
>>> dl1 = DataLoader(TensorDataset(torch.zeros(5, 1)), batch_size=2, drop_last=True)
>>> dl2 = DataLoader(TensorDataset(torch.ones(10, 1)), batch_size=2)
>>> for x in itr_merge(dl1, dl2):
>>> print(x)
[tensor([[0.], [0.]])]
[tensor([[0.], [0.]])]
[tensor([[1.], [1.]])]
[tensor([[1.], [1.]])]
[tensor([[1.], [1.]])]
[tensor([[1.], [1.]])]
[tensor([[1.], [1.]])]
| https://stackoverflow.com/questions/65621414/ |
Input batch size doesn't match target batch size in CrossEntropyLoss function | I've been trying to build a model from scratch to recognize handwritten digits from the MNIST dataset with the help of PyTorch and the DataLoader class from FastAI. So far, I've been using a linear model that has 784 inputs (a flattened grayscale 28 by 28 handwritten digit image tensor) and 10 outputs.
simple_linear = torch.nn.Linear(784, 10)
My training data is organized as such:
train_x = torch.cat([stacked_zeros, stacked_ones, stacked_twos, stacked_threes,
stacked_fours, stacked_fives, stacked_sixes, stacked_sevens,
stacked_eights, stacked_nines]).view(-1, 28*28)
train_y = torch.nn.functional.one_hot(tensor([0] * len(zeros) + [1] * len(ones) + [2] * len(twos) +
[3] * len(threes) + [4] * len(fours) + [5] * len(fives) +
[6] * len(sixes) + [7] * len(sevens) + [8] * len(eights) +
[9] * len(nines)).unsqueeze(1))
My x variables have shape [784] while y variables are labeled using one-hot encoded vectors with [1, 10] shape.
The loss function I chose based on research is torch.nn.CrossEntropyLoss and the following code gives me an error:
mnist_loss = torch.nn.CrossEntropyLoss()
mnist_loss(simple_linear(train_x[0]), train_y[0])
ValueError Traceback (most recent call last)
<ipython-input-245-03f54a6a43fb> in <module>()
----> 1 tst(simple_linear(x), y)
8 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
2260 if input.size(0) != target.size(0):
2261 raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).'
-> 2262 .format(input.size(0), target.size(0)))
2263 if dim == 2:
2264 ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
ValueError: Expected input batch_size (1) to match target batch_size (10).
I've tried reshaping my x variables and y variables but I always get a similar error. How must my data be structured in order for the loss function to work?
| The torch.nn.CrossEntropyLoss function doesn't take targets as one-hot-encodings!
Just pass the label index, so basically:
train_y = torch.tensor([0] * len(zeros) + [1] * len(ones) + [2] * len(twos) +
[3] * len(threes) + [4] * len(fours) + [5] * len(fives) +
[6] * len(sixes) + [7] * len(sevens) + [8] * len(eights) +
[9] * len(nines)).unsqueeze(1)
Here's a suggestion, you could write everything like:
dataset = [stacked_zeros, stacked_ones, stacked_twos, stacked_threes,
stacked_fours, stacked_fives, stacked_sixes, stacked_sevens,
stacked_eights, stacked_nines]
train_x = torch.cat(dataset)
train_y = torch.cat([torch.full((d.size(0),), i, dtype=torch.long) for i, d in enumerate(dataset)])
| https://stackoverflow.com/questions/65631215/ |
Loading the pre-trained model of torch and sentence_transformers when running in a docker container failing | I am getting below error while loading the pre-trained model of torch and sentence_transformers("distilbert-base-nli-stsb-mean-tokens") when trying to run in a docker container.
Error: Invalid value for '-A' / '--app':
Unable to load celery application.
While trying to load the module app.celery the following error occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/celery/bin/celery.py", line 53, in convert
return find_app(value)
File "/usr/local/lib/python3.8/site-packages/celery/app/utils.py", line 384, in find_app
sym = symbol_by_name(app, imp=imp)
File "/usr/local/lib/python3.8/site-packages/kombu/utils/imports.py", line 56, in symbol_by_name
module = imp(module_name, package=package, **kwargs)
File "/usr/local/lib/python3.8/site-packages/celery/utils/imports.py", line 100, in import_from_cwd
return imp(module, package=package)
File "/usr/local/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/code/app.py", line 997, in <module>
load_model()
File "/code/app.py", line 255, in load_model
embedder = SentenceTransformer('distilbert-base-nli-stsb-mean-tokens')
File "/usr/local/lib/python3.8/site-packages/sentence_transformers/SentenceTransformer.py", line 48, in __init__
os.makedirs(model_path, exist_ok=True)
File "/usr/local/lib/python3.8/os.py", line 213, in makedirs
makedirs(head, exist_ok=exist_ok)
File "/usr/local/lib/python3.8/os.py", line 213, in makedirs
makedirs(head, exist_ok=exist_ok)
File "/usr/local/lib/python3.8/os.py", line 213, in makedirs
makedirs(head, exist_ok=exist_ok)
[Previous line repeated 1 more time]
File "/usr/local/lib/python3.8/os.py", line 223, in makedirs
mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/nonexistent'
Here it is reporting a permission denied error while creating the folder. I have already tried adding USER root in the Dockerfile. I have been stuck with this issue for a long time. Can anyone please help me here?
Updated:
My Dockerfile:
FROM python:3.8.5-slim
WORKDIR /code
ENV ENVIRONMENT='LOCAL'
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update && apt-get install -y sudo netcat apt-utils
RUN apt-get install -y python3-dev build-essential python3-pip
COPY ./requirements_local.txt /code/requirements_local.txt
RUN pip install -r /code/requirements_local.txt
EXPOSE 8000
COPY . /code/
CMD [ "gunicorn", "app:app", "-b", "0.0.0.0:8000","--timeout","7200"]
Docker-compose:
services:
web:
build:
context: .
dockerfile: ./Dockerfile.prod
hostname: flaskapp
env_file:
- ./.env.prod
links:
- redis
- celery
depends_on:
- redis
volumes:
- data:/code
- type: bind
source: /home/ubuntu/models
target: /mnt/models
| sentence-transformers downloads and stores the model in the ~/.cache directory (or whatever cache_folder evaluates to - https://github.com/UKPLab/sentence-transformers/blob/a13a4ec98b8fdda83855aca7992ea793444a207f/sentence_transformers/SentenceTransformer.py#L63). For you this resolves to the /nonexistent directory. The permission denied error suggests you do not have permission to access that directory (to create the cache folder).
You can modify the Dockerfile to create this directory and make it accessible to any user that needs to access it:
RUN mkdir ~/.cache
RUN chmod -R 777 ~/.cache # don't do this in production - modify command to give permission to users who require it.
Or you could try downloading the model in the Dockerfile itself -
RUN python -c 'from sentence_transformers import SentenceTransformer; embedder = SentenceTransformer("distilbert-base-nli-stsb-mean-tokens")'
| https://stackoverflow.com/questions/65633918/ |
Problems identifying images in pytorch | I am really new at deep learning and I am studying how to properly run neural networks using pytorch. Currently I am trying to read a dataset of images using the following code:
from torch.utils.data import Dataset, DataLoader
from torchvision import datasets, transforms
from torchvision import transforms, utils
transformations = transforms.Compose([
transforms.Resize(255),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])
train_set = datasets.ImageFolder('faces/train', transform = transformations)
test_set = datasets.ImageFolder('faces/test', transform = transformations)
train_loader = DataLoader(train_set, batch_size=60, shuffle = True)
test_loader = DataLoader(test_set, batch_size=60, shuffle = True)
Once that is done, I try to read the images to start running the neural network on them, first by separating the images from the labels using:
img, labels = next(iter(train_loader))
And then I received the following message:
UnidentifiedImageError: cannot identify image file <_io.BufferedReader name='faces/train/karyadi/karyadi_straight_angry_open.pgm'>
The structure of the folders on my working directory where I have stored the images is as follows:
faces:
Train: 10 folders, each one with around 90 images in pgm format.
Test: 10 folders, each one with around 90 images in pgm format.
Does anyone know what the problem might be?
Thanks in advance.
| I finally found the problem. This post was very useful. The gist of it is that PIL has problems importing images of certain sizes (I do not have all the details about this). In the end I used cv2 to import the pgm images one by one, convert them into 32-bit ndarrays, and export them in JPEG format. Here is my code for one image:
import cv2
import numpy as np
img = cv2.imread('faces/train/karyadi/karyadi_straight_angry_open.pgm')
img = np.full(img.shape, img, dtype=np.float32)
cv2.imwrite('img.jpeg', img)
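A hedged sketch of extending this to every .pgm file under the faces/ folders from the question (the JPEG copies are simply written next to the originals):
import os
import cv2
import numpy as np

for root, _, files in os.walk('faces'):
    for name in files:
        if name.endswith('.pgm'):
            path = os.path.join(root, name)
            img = cv2.imread(path)
            img = np.full(img.shape, img, dtype=np.float32)
            cv2.imwrite(os.path.splitext(path)[0] + '.jpeg', img)  # JPEG copy next to the .pgm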
| https://stackoverflow.com/questions/65636906/ |
Batch normalization makes training worse | I am trying to implement the batch normalization with Pytorch and use a simple fully connected neural network to approximate a given function.
The code is as follows. The result shows that the neural network without the batch normalization performs better than that with the batch normalization technique. This means that the batch normalization makes the training even worse. Could someone explain this result? Thanks!
import matplotlib.pyplot as plt
import numpy as np
import torch
class Net(torch.nn.Module):
def __init__(self, num_inputs, num_outputs, hidden_size=256, is_bn=True):
super(Net, self).__init__()
self.num_inputs = num_inputs
self.num_outputs = num_outputs
self.is_bn = is_bn
# no bias is needed if batch normalization
if self.is_bn:
self.linear1 = torch.nn.Linear(num_inputs, hidden_size, bias=False)
self.linear2 = torch.nn.Linear(hidden_size, hidden_size, bias=False)
else:
self.linear1 = torch.nn.Linear(num_inputs, hidden_size)
self.linear2 = torch.nn.Linear(hidden_size, hidden_size)
self.linear3 = torch.nn.Linear(hidden_size, num_outputs)
if self.is_bn:
self.bn1 = torch.nn.BatchNorm1d(hidden_size)
self.bn2 = torch.nn.BatchNorm1d(hidden_size)
self.activation = torch.nn.ReLU()
def forward(self, inputs):
x = inputs
if self.is_bn:
x = self.activation(self.bn1(self.linear1(x)))
x = self.activation(self.bn2(self.linear2(x)))
else:
x = self.activation(self.linear1(x))
x = self.activation(self.linear2(x))
out = self.linear3(x)
return out
torch.manual_seed(0) # reproducible
Nx = 100
x = torch.linspace(-1., 1., Nx)
x = torch.reshape(x, (Nx, 1))
y = torch.sin(3*x)
fcn_bn, fcn_no_bn = Net(num_inputs=1, num_outputs=1, is_bn=True), Net(num_inputs=1, num_outputs=1, is_bn=False)
criterion = torch.nn.MSELoss()
optimizer_bn = torch.optim.Adam(fcn_bn.parameters(), lr=0.001)
optimizer_no_bn = torch.optim.Adam(fcn_no_bn.parameters(), lr=0.001)
total_epoch = 5000
# record loss history
loss_history_bn = np.zeros(total_epoch)
loss_history_no_bn = np.zeros(total_epoch)
fcn_bn.train()
fcn_no_bn.train()
for epoch in range(total_epoch):
optimizer_bn.zero_grad()
loss = criterion(fcn_bn(x), y)
loss_history_bn[epoch] = loss.item()
loss.backward()
optimizer_bn.step()
optimizer_no_bn.zero_grad()
loss = criterion(fcn_no_bn(x), y)
loss_history_no_bn[epoch] = loss.item()
loss.backward()
optimizer_no_bn.step()
if epoch%1000 == 0:
print("epoch: %d; MSE (with bn): %.2e; MSE (without bn): %.2e"%(epoch, loss_history_bn[epoch], loss_history_no_bn[epoch]))
fcn_bn.eval()
fcn_no_bn.eval()
plt.figure()
plt.semilogy(np.arange(total_epoch), loss_history_bn, label='neural network (with bn)')
plt.semilogy(np.arange(total_epoch), loss_history_no_bn, label='neural network (without bn)')
plt.legend()
plt.figure()
plt.plot(x, y, '-', label='exact')
plt.plot(x, fcn_bn(x).detach(), 'o', markersize=2, label='neural network (with bn)')
plt.plot(x, fcn_no_bn(x).detach(), 'o', markersize=2, label='neural network (without bn)')
plt.legend()
plt.figure()
plt.plot(x, np.abs(fcn_bn(x).detach() - y), 'o', markersize=2, label='neural network (with bn)')
plt.plot(x, np.abs(fcn_no_bn(x).detach() - y), 'o', markersize=2, label='neural network (without bn)')
plt.legend()
plt.show()
The result is as follows:
epoch: 0; MSE (with bn): 3.99e-01; MSE (without bn): 4.84e-01
epoch: 1000; MSE (with bn): 4.70e-05; MSE (without bn): 1.27e-06
epoch: 2000; MSE (with bn): 1.81e-04; MSE (without bn): 7.93e-07
epoch: 3000; MSE (with bn): 2.73e-04; MSE (without bn): 7.45e-07
epoch: 4000; MSE (with bn): 4.04e-04; MSE (without bn): 5.68e-07
| To provide an alternate view to the answer that Khalid linked in the comments, which puts a stronger focus on generalization performance rather than training loss, consider this:
Batch Normalization has been postulated to have a regularizing effect. Luo et al. look at BN as a decomposition into population normalization and gamma decay, and they observe training loss curves similar to yours (comparing BN to no BN - note, however, that they use vanilla SGD and not Adam). There are a couple of things that affect BN (as also outlined in Khalid's link). The batch size, for example, should on the one hand be large enough for a robust estimate of the population parameters; on the other hand, with increasing batch size the generalization performance can also drop (see Luo et al.'s paper: the gist is that smaller batch sizes result in noisy population parameter estimates, essentially perturbing the input).
In your case I would not intuitively have expected a big difference (given how your data is set up), but maybe someone deeper into the theoretical analysis of BN can still provide insights.
| https://stackoverflow.com/questions/65637165/ |
What's the best way of handling the GAN training output? | Supervising the training of GANs usually involves outputting not only metrics, but also images at a certain interval of epochs. My application also involves printing tables. I use jupyter notebooks, but just printing it all on the notebooks makes each notebook for each experiment way too large (+100 MB), and the internet browser gets slow and crashes often because of that.
I suppose the usual practice would be to save the image outputs somewhere else (either with tensorboard or just plain image files), but that would not be ideal for me because I like to observe each image together with the text/table output relative to it's epoch. It would be even better if I could save the entire training output into a single file, so I could just scroll down through it, observing each epoch output with text/table/image, just like the jupyter notebook output. Is there a way of implementing this? Or perhaps a better way that I'm not considering?
Thank you and sorry if this question is somehow inadequate. If so, let me know and I'll delete it.
| Well, it's a matter of preference, how someone likes to have it. When I trained a GAN, I handled it in the following way: for the loss and other such values, I simply printed them in the notebook once per epoch, as we would do with any other model. Along with that, I would generate an image with the generator and save it to a folder to check whether the generator is getting better. To make it easier to observe those images, I used meaningful labels for them - I included the epoch number in the filename, which helps to go through the images pretty quickly. Going through images one by one is still a bit tedious, so to make it more comfortable I merged all the pictures produced across epochs (let's say 500) into a video, which is pretty cool to watch. I found this way of training my GAN and handling its output much easier, but as I said, it is a matter of preference.
Here's the code-
https://github.com/khalidsaifullaah/Classic-Deep-Learning-Models/tree/master/GAN
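For illustration, a minimal self-contained sketch of the epoch-labelled image saving described above (the dummy generator, noise size, and folder name are all made-up placeholders, not code from the linked repository):
import os
import torch
import torch.nn as nn
from torchvision.utils import save_image

generator = nn.Sequential(nn.Linear(100, 28 * 28), nn.Tanh())  # stand-in for a real generator
fixed_noise = torch.randn(16, 100)

os.makedirs("samples", exist_ok=True)
for epoch in range(5):
    # ... discriminator/generator update steps would go here ...
    with torch.no_grad():
        fake = generator(fixed_noise).view(-1, 1, 28, 28)
    # the epoch number in the filename makes it easy to scan progress or build a video later
    save_image(fake, f"samples/epoch_{epoch:04d}.png", normalize=True)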
| https://stackoverflow.com/questions/65637608/ |
Validation and training loss per batch and epoch | I am using Pytorch to run some deep learning models. I am currently keeping track of training and validation loss per epoch, which is pretty standard. However, what is the best way of going about keeping track of training and validation loss per batch/iteration?
For training loss, I could just keep a list of the loss after each training loop. But validation loss is calculated after a whole epoch, so I'm not sure how to go about the validation loss per batch. The only thing I can think of is to run the whole validation step after each training batch and keep track of those, but that seems like overkill and a lot of computation.
For example, the training is like this:
for epoch in range(2): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
And for validation loss:
with torch.no_grad():
for data in testloader:
images, labels = data
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
# validation loss
batch_loss = error(outputs.float(), labels.long()).item()
loss_test += batch_loss
loss_test /= len(testloader)
The validation loss/test part is done per epoch. I’m looking for a way to get the validation loss per batch, which is my point above.
Any tips?
| Well, you're right, that is the way to do it - "run the whole validation step after each training batch and keep track of those" - and, as you suspected, it is pretty time-consuming and would be overkill. However, if that's something you really need, there is a way you can do it. Let's say you have 1000 batches in your data. To calculate a per-batch val_loss, you can choose not to run the validation step over every batch (then you'd have to do it 1000 times!) but over a small subset of those batches, say 50 or 100 (choose whatever you find feasible). You can then rely on a bit of statistics so that your estimate over 50/100 batches comes very close to the value over all 1000 batches; to achieve that, you introduce some randomness into your batch selection.
This means you randomly select 100 batches from your 1000 batches for which you'll run the validation step.
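A rough sketch of that idea (net, criterion, and a shuffling val_loader as in the question are assumed; the 50-batch subset size is arbitrary):
import itertools
import torch

def quick_val_loss(net, val_loader, criterion, num_batches=50):
    # estimate the validation loss from a random subset of batches;
    # a DataLoader created with shuffle=True yields a different subset on every pass
    net.eval()
    total, n = 0.0, 0
    with torch.no_grad():
        for images, labels in itertools.islice(val_loader, num_batches):
            total += criterion(net(images), labels).item()
            n += 1
    net.train()
    return total / max(n, 1)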
| https://stackoverflow.com/questions/65638101/ |
pytorch runs slow when data are pre-transported to GPU | I have a model written in pytorch. Since my dataset is small, I can directly load all of the data to GPU. However, I found the forward speed becomes slow if I do so. The following is a runnable example. Specifically, I have the model:
import numpy as np
from time import time
import random
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
def knn(x, k):
inner = -2*torch.matmul(x.transpose(2, 1), x)
xx = torch.sum(x**2, dim=1, keepdim=True)
pairwise_distance = -xx - inner - xx.transpose(2, 1)
idx = pairwise_distance.topk(k=k, dim=-1)[1] # (batch_size, num_points, k)
return idx
def get_graph_feature(x, k=20, idx=None):
batch_size = x.size(0)
num_points = x.size(2)
x = x.view(batch_size, -1, num_points)
if idx is None:
idx = knn(x, k=k) # (batch_size, num_points, k)
idx_base = torch.arange(0, batch_size, device=x.device).view(-1, 1, 1)*num_points
idx = idx + idx_base
idx = idx.view(-1)
_, num_dims, _ = x.size()
x = x.transpose(2, 1).contiguous() # (batch_size, num_points, num_dims) -> (batch_size*num_points, num_dims) # batch_size * num_points * k + range(0, batch_size*num_points)
feature = x.view(batch_size*num_points, -1)[idx, :]
feature = feature.view(batch_size, num_points, k, num_dims)
x = x.view(batch_size, num_points, 1, num_dims).repeat(1, 1, k, 1)
feature = torch.cat((feature-x, x), dim=3).permute(0, 3, 1, 2).contiguous()
return feature
class DGCNN(nn.Module):
def __init__(self, k=25, output_channels=10):
super(DGCNN, self).__init__()
self.k = k
self.bn1 = nn.BatchNorm2d(64)
self.bn2 = nn.BatchNorm2d(64)
self.bn3 = nn.BatchNorm2d(128)
self.bn4 = nn.BatchNorm2d(256)
self.bn5 = nn.BatchNorm1d(1024)
self.conv1 = nn.Sequential(nn.Conv2d(6, 64, kernel_size=1, bias=False),
self.bn1,
nn.LeakyReLU(negative_slope=0.2))
self.conv2 = nn.Sequential(nn.Conv2d(64*2, 64, kernel_size=1, bias=False),
self.bn2,
nn.LeakyReLU(negative_slope=0.2))
self.conv3 = nn.Sequential(nn.Conv2d(64*2, 128, kernel_size=1, bias=False),
self.bn3,
nn.LeakyReLU(negative_slope=0.2))
self.conv4 = nn.Sequential(nn.Conv2d(128*2, 256, kernel_size=1, bias=False),
self.bn4,
nn.LeakyReLU(negative_slope=0.2))
self.conv5 = nn.Sequential(nn.Conv1d(512, 1024, kernel_size=1, bias=False),
self.bn5,
nn.LeakyReLU(negative_slope=0.2))
self.linear1 = nn.Linear(1024*2, 512, bias=False)
self.bn6 = nn.BatchNorm1d(512)
self.dp1 = nn.Dropout()
self.linear2 = nn.Linear(512, 256)
self.bn7 = nn.BatchNorm1d(256)
self.dp2 = nn.Dropout()
self.linear3 = nn.Linear(256, output_channels)
def forward(self, x):
x = x.transpose(2, 1)
batch_size = x.size(0)
x = get_graph_feature(x, k=self.k)
x = self.conv1(x)
x1 = x.max(dim=-1, keepdim=False)[0]
x = get_graph_feature(x1, k=self.k)
x = self.conv2(x)
x2 = x.max(dim=-1, keepdim=False)[0]
x = get_graph_feature(x2, k=self.k)
x = self.conv3(x)
x3 = x.max(dim=-1, keepdim=False)[0]
x = get_graph_feature(x3, k=self.k)
x = self.conv4(x)
x4 = x.max(dim=-1, keepdim=False)[0]
x = torch.cat((x1, x2, x3, x4), dim=1)
x = self.conv5(x)
x1 = F.adaptive_max_pool1d(x, 1).view(batch_size, -1)
x2 = F.adaptive_avg_pool1d(x, 1).view(batch_size, -1)
x = torch.cat((x1, x2), 1)
x = F.leaky_relu(self.bn6(self.linear1(x)), negative_slope=0.2)
x = self.dp1(x)
x = F.leaky_relu(self.bn7(self.linear2(x)), negative_slope=0.2)
x = self.dp2(x)
x = self.linear3(x)
return x
Here is what the dataloader and test function looks like:
class my_loader(Dataset):
def __init__(self, device):
self.data = torch.rand(256, 2048, 3).to(device).float()
self.labels = torch.rand(256).to(device).long()
def __getitem__(self, ind):
return self.data[ind], self.labels[ind]
def __len__(self):
return len(self.data)
def test():
device = torch.device('cuda:2')
test_set = my_loader(device)
test_loader = DataLoader(test_set, batch_size=16, shuffle=True, num_workers=0)
model = DGCNN().to(device)
model.eval()
#---------- this one is 0.12s --------------#
for inputs, labels in test_loader:
tic = time()
pred = model(inputs)
print('time1 {}'.format(time() - tic))
print('------------------')
#---------- this one is 0.004s --------------#
for inputs, labels in test_loader:
inputs = inputs.detach().cpu().to(device)
tic = time()
pred = model(inputs)
print('time2 {}'.format(time() - tic))
print('------------------')
#---------- this one is 0.12s --------------#
for inputs, labels in test_loader:
tic = time()
inputs = inputs.detach().cpu().to(device)
pred = model(inputs)
print('time3 {}'.format(time() - tic))
print('------------------')
Basically, it seems that if there is no explicit GPU-to-CPU transfer either before or after the forward pass, the forward pass costs more time. It looks as if the forward pass is implicitly doing a GPU->CPU transfer.
| I played around with the code a little bit, and I think the problem is that you are measuring times for both cases in the same run. Here is my boiled-down version of your code, since your model exhausted my GPU memory:
class DGCNN(nn.Module):
def __init__(self, num_layers=1200):
super(DGCNN, self).__init__()
self.layers = nn.ModuleList([nn.Linear(256, 256) for _ in range(num_layers)])
def forward(self, x):
x = x.view(-1, 256)
for layer in self.layers:
x = layer(x)
return x
class my_loader(Dataset):
def __init__(self, device):
self.data = torch.rand(256, 2048, 3).to(device).float()
self.labels = torch.rand(256).to(device).long()
def __getitem__(self, ind):
return self.data[ind], self.labels[ind]
def __len__(self):
return len(self.data)
Now, here I demonstrate different versions of test().
Version #1:
def test():
device = torch.device('cuda:0')
test_set = my_loader(device)
test_loader = DataLoader(test_set, batch_size=16, shuffle=True, num_workers=0)
model = DGCNN().to(device)
model.eval()
#---------- this one is 0.12s --------------#
tic = time()
for inputs, labels in test_loader:
pred = model(inputs)
tac = time()
print(f'# First case -> Full forward pass: {tac - tic:.6f}')
#---------- this one is 0.004s --------------#
tic = time()
for inputs, labels in test_loader:
pred = model(inputs.detach().cpu().to(device))
tac = time()
print(f'# Second case -> Full forward pass: {tac - tic:.6f}')
>>> # First case -> Full forward pass: 3.105103, # Second case -> Full forward pass: 2.831652
Now I switched the order of timing calculations for the cases. Version #2:
def test():
device = torch.device('cuda:0')
test_set = my_loader(device)
test_loader = DataLoader(test_set, batch_size=16, shuffle=True, num_workers=0)
model = DGCNN().to(device)
model.eval()
#---------- this one is 0.004s --------------#
tic = time()
for inputs, labels in test_loader:
pred = model(inputs.detach().cpu().to(device))
tac = time()
print(f'# Second case -> Full forward pass: {tac - tic:.6f}')
#---------- this one is 0.12s --------------#
tic = time()
for inputs, labels in test_loader:
pred = model(inputs)
tac = time()
print(f'# First case -> Full forward pass: {tac - tic:.6f}')
>>> # Second case -> Full forward pass: 3.288522, # First case -> Full forward pass: 2.583231
Apparently, the first timing you calculate seems to end up slower. So, I calculated these timings separately in different runs with fresh kernels. Version #3:
def test():
device = torch.device('cuda:0')
test_set = my_loader(device)
test_loader = DataLoader(test_set, batch_size=16, shuffle=True, num_workers=0)
model = DGCNN().to(device)
model.eval()
#---------- this one is 0.12s --------------#
tic = time()
for inputs, labels in test_loader:
pred = model(inputs)
tac = time()
print(f'# First case -> Full forward pass: {tac - tic:.6f}')
>>> # First case -> Full forward pass: 3.091592
Version #4:
def test():
device = torch.device('cuda:0')
test_set = my_loader(device)
test_loader = DataLoader(test_set, batch_size=16, shuffle=True, num_workers=0)
model = DGCNN().to(device)
model.eval()
#---------- this one is 0.004s --------------#
tic = time()
for inputs, labels in test_loader:
pred = model(inputs.detach().cpu().to(device))
tac = time()
print(f'# Second case -> Full forward pass: {tac - tic:.6f}')
>>> # Second case -> Full forward pass: 3.190248
So, by testing one at a time, it seems like pred = model(inputs) runs slightly faster than pred = model(inputs.detach().cpu().to(device)), which is the obvious expected result.
| https://stackoverflow.com/questions/65642697/ |
Getting an error while training Resnet50 on Imagenet at 14th Epoch | I am training Resnet50 on imagenet using the script provided from PyTorch (with a slight trivial tweak for my purpose). However, I am getting the following error after 14 epochs of training. I have allocated 4 gpus in the server I'm using to run this. Any pointers as to what this error is about would be appreciated. Thanks a lot!
Epoch: [14][5000/5005] Time 1.910 (2.018) Data 0.000 (0.191) Loss 2.6954 (2.7783) Total 2.6954 (2.7783) Reg 0.0000 Prec@1 42.969 (40.556) Prec@5 64.844 (65.368)
Test: [0/196] Time 86.722 (86.722) Loss 1.9551 (1.9551) Prec@1 51.562 (51.562) Prec@5 81.641 (81.641)
Traceback (most recent call last):
File "main_group.py", line 549, in <module>
File "main_group.py", line 256, in main
File "main_group.py", line 466, in validate
if args.gpu is not None:
File "/home/users/oiler/anaconda3/envs/ml/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 801, in __next__
return self._process_data(data)
File "/home/users/oiler/anaconda3/envs/ml/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 846, in _process_data
data.reraise()
File "/home/users/oiler/anaconda3/envs/ml/lib/python3.7/site-packages/torch/_utils.py", line 385, in reraise
raise self.exc_type(msg)
OSError: Caught OSError in DataLoader worker process 11.
Original Traceback (most recent call last):
File "/home/users/oiler/anaconda3/envs/ml/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
data = fetcher.fetch(index)
File "/home/users/oiler/anaconda3/envs/ml/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/users/oiler/anaconda3/envs/ml/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/users/oiler/anaconda3/envs/ml/lib/python3.7/site-packages/torchvision/datasets/folder.py", line 138, in __getitem__
sample = self.loader(path)
File "/home/users/oiler/anaconda3/envs/ml/lib/python3.7/site-packages/torchvision/datasets/folder.py", line 174, in default_loader
return pil_loader(path)
File "/home/users/oiler/anaconda3/envs/ml/lib/python3.7/site-packages/torchvision/datasets/folder.py", line 155, in pil_loader
with open(path, 'rb') as f:
OSError: [Errno 5] Input/output error: '/data/users2/oiler/github/imagenet-data/val/n02102973/ILSVRC2012_val_00009130.JPEG'
| It is difficult to tell what the problem is just by looking at the error you have posted.
All we know is that there was an issue reading the file at '/data/users2/oiler/github/imagenet-data/val/n02102973/ILSVRC2012_val_00009130.JPEG'.
Try the following:
Confirm the file actually exists.
Confirm that it is in fact a valid JPEG and not corrupted (by viewing it).
Confirm that you can open it with Python and load it manually with PIL (see the snippet after this list).
If none of that works, try deleting the file. Do you get the same error on another file in the folder?
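For the PIL check, a small snippet along these lines could be used (the path is copied from the error message):
from PIL import Image

path = '/data/users2/oiler/github/imagenet-data/val/n02102973/ILSVRC2012_val_00009130.JPEG'
with open(path, 'rb') as f:   # the same open() call that failed inside torchvision
    img = Image.open(f)
    img.verify()              # raises if the file is truncated or corrupt
print('file opened and verified fine')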
| https://stackoverflow.com/questions/65668608/ |
Difference between CrossEntropyLoss and NLLLoss with log_softmax in PyTorch? | When I am building a classifier in PyTorch, I have two options:
Using nn.CrossEntropyLoss without any modification to the model
Using nn.NLLLoss with F.log_softmax added as the last layer in the model
So there are two approaches.
Now, what approach should anyone use, and why?
| They're the same.
If you check the implementation, you will find that it calls nll_loss after applying log_softmax on the incoming arguments.
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
Edit: seems like the links are now broken, here's the C++ implementation which shows the same information.
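A quick sketch to verify the equivalence numerically:
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(4, 10)           # raw model outputs
target = torch.randint(0, 10, (4,))   # class indices

loss_ce = nn.CrossEntropyLoss()(logits, target)
loss_nll = nn.NLLLoss()(F.log_softmax(logits, dim=1), target)
print(torch.allclose(loss_ce, loss_nll))  # True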
| https://stackoverflow.com/questions/65669511/ |
Loading a single graph into a pytorch geometric data object for node classification | I have one graph, defined by 4 matrices: x (node features), y (node labels), edge_index (edges list) and edge_attr (edge features). I want to create a dataset in Pytorch Geometric with this single graph and perform node-level classification. It seems that just wrapping these 4 matrices into a data object fails, for some reason.
I have created a dataset containing the attributes:
Data(edge_attr=[3339730, 1], edge_index=[2, 3339730], x=[6911, 50000], y=[6911, 1])
representing a graph. If I try to slice this graph, like:
train_dataset, test_dataset = dataset[:5000], dataset[5000:]
I get the error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-11-feb278180c99> in <module>
3 # train_dataset, test_dataset = torch.utils.data.random_split(dataset, [train_size, test_size])
4
----> 5 train_dataset, test_dataset = dataset[:5000], dataset[5000:]
6
7 # Create dataloader for training and test dataset.
~/anaconda3/envs/py38/lib/python3.8/site-packages/torch_geometric/data/data.py in __getitem__(self, key)
92 def __getitem__(self, key):
93 r"""Gets the data of the attribute :obj:`key`."""
---> 94 return getattr(self, key, None)
95
96 def __setitem__(self, key, value):
TypeError: getattr(): attribute name must be string
What am I doing wrong in the data construction?
| For node classification:
Create a custom dataset:
import random
from math import floor
import pandas as pd
import torch
from torch_geometric.data import Data, InMemoryDataset
class CustomDataset(InMemoryDataset):
def __init__(self, root, transform=None, pre_transform=None):
super(CustomDataset, self).__init__(root, transform, pre_transform)
self.data, self.slices = torch.load(self.processed_paths[0])
@property
def raw_file_names(self):
return ['edge_list.csv', 'x.pt', 'y.pt', 'edge_attributes.csv']
@property
def processed_file_names(self):
return ['graph.pt']
def process(self):
data_list = []
edge_list = pd.read_csv(self.raw_paths[0], dtype=int)
target_nodes = edge_list.iloc[:,0].values
source_nodes = edge_list.iloc[:,1].values
edge_index = torch.tensor([source_nodes, target_nodes], dtype=torch.int64)
x = torch.load(self.raw_paths[1], map_location=torch.device('cpu'))
y = torch.load(self.raw_paths[2], map_location=torch.device('cpu'))
# make masks
n = x.shape[0]
randomassort = list(range(n))
random.shuffle(randomassort)
max_train = floor(len(randomassort) * .1)
train_mask_idx = torch.tensor(randomassort[:max_train])
test_mask_idx = torch.tensor(randomassort[max_train:])
train_mask = torch.zeros(n); test_mask = torch.zeros(n)
train_mask.scatter_(0, train_mask_idx, 1)
test_mask.scatter_(0, test_mask_idx, 1)
train_mask = train_mask.type(torch.bool)
test_mask = test_mask.type(torch.bool)
edge_attributes = pd.read_csv(self.raw_paths[3])
data = Data(edge_index=edge_index, x=x, y=y, train_mask=train_mask, test_mask=test_mask)
print(data.__dict__)
data, slices = self.collate([data])
torch.save((data, slices), self.processed_paths[0])
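A hypothetical usage sketch, assuming the four raw files sit under <root>/raw/:
dataset = CustomDataset(root='data/')
data = dataset[0]  # the single graph, with masks attached
print(data)
print(data.train_mask.sum().item(), data.test_mask.sum().item())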
Then in the train loop use the masks when updating the model.
def train():
...
model.train()
optimizer.zero_grad()
F.nll_loss(model()[data.train_mask], data.y[data.train_mask]).backward()
optimizer.step()
| https://stackoverflow.com/questions/65670777/ |
Pytorch BatchNorm3d / InstanceNorm3d not working when data size (1,C,1,1,1) | I'm training a neural network in PyTorch which at some point has a BatchNorm3d(C).
Normally, I'm training it with a batch size of 1, and the input of this specific level will then be of shape (1, C, 1, 1, 1).
Unfortunately, the BatchNorm then fails with the error message:
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 32, 1, 1, 1])
The same happens when I use InstanceNorm3d. It works perfectly fine when I use a batch size of two or more (i.e. the input will then be of shape (2, C, 1, 1, 1)).
Does anyone know a solution to this problem? What am I missing?
The problem can be reproduced with the following snippet:
import torch
x_working = torch.ones([2,32,1,1,1])
x_not_working = torch.ones([1,32,1,1,1])
norm = torch.nn.InstanceNorm3d(32)
out_working = norm(x_working)
out_not_working=norm(x_not_working)
| All these normalization layers keep running estimates of the mean and variance of your data over the batch dimension (see the docs). Since the variance is computed with the unbiased estimator (notice the n-1 in the denominator), the computation cannot work with fewer than 2 data points. Therefore, you need a batch size of at least 2 to use these layers.
Note that the variance of 1 data point - if PyTorch agreed to compute it - would always be 0, so not a very interesting result. Actually, BatchNorm is known to require significantly larger batch sizes (you often find 32 or 64 in the scientific literature) to work properly. This strong requirement led to the development of LayerNorm and GroupNorm (of which InstanceNorm is a particular case, I believe).
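You can see the restriction directly from the unbiased variance estimator:
import torch
print(torch.ones(1).var())          # tensor(nan): undefined for a single value
print(torch.ones(2, 1).var(dim=0))  # tensor([0.]): fine with two samples per channel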
| https://stackoverflow.com/questions/65682794/ |
Is it possible to save a TensorBoard session? | I'm using TensorBoard and would like to save and send my report (email outside my organization) without losing its interactive abilities. I've tried to save it as a complete HTML page, but that didn't work.
Anyone encountered the same issue and found a solution?
| Have you seen tensorboard.dev?
This page allows you to host your tensorboard experiment & share it with others using a link (it's still interactive) for free.
Also you can use it from the command line; try this from your CLI for more information:
$ tensorboard dev --help
| https://stackoverflow.com/questions/65683052/ |
TypeError: 'torch.dtype' object is not callable. How to call this function? | How can I call this torch.dtype? The error shows that it is not callable. Before, I used FloatTensor and it showed an error like "can't convert np.ndarray of type numpy.object_", and now that I am using float64 it shows the error 'torch.dtype' object is not callable. Please help with this issue.
import torch
a = torch.cuda.is_available()
b = torch.cuda.current_device()
c = torch.cuda.get_device_name()
d = torch.cuda.memory_reserved()
e = torch.cuda.memory_allocated()
var1 = torch.FloatTensor([1.0,2.0,3.0]).cuda()
var1
a1 = var1.device
import pandas as pd
df = pd.read_csv('diabetes.csv')
df.head()
b1 = df.isnull().sum()
import seaborn as sns
import numpy as np
df['Outcome']=np.where(df['Outcome']==1,"Diabetic","No Diabetic")
b2 = df.head()
b3 = sns.pairplot(df,hue="Outcome")
X=df.drop('Outcome',axis=1).values
y=df['Outcome'].values
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.2,random_state=0)
y_train
import torch
import torch.nn as nn
import torch.nn.functional as F
X_train=torch.FloatTensor(X_train).cuda()
X_test=torch.FloatTensor(X_test).cuda()
y_train=torch.float64(y_train).cuda()
this is the error:
C:\Users\vinot\.conda\envs\python21\python.exe D:/python/python_work/pythonProject/diabetes.py
Traceback (most recent call last):
File "D:/python/python_work/pythonProject/diabetes.py", line 35, in <module>
y_train=torch.float64(y_train).cuda()
TypeError: 'torch.dtype' object is not callable
Process finished with exit code 1
| torch.float64 is a dtype object and not a function so it cannot be called.
To make it into a double float (or at least to make sure it is), I would instead call:
y_train = torch.from_numpy(y_train).double().cuda()
| https://stackoverflow.com/questions/65696312/ |
How to get logits as neural network output | Simple and short question. I have a network (Unet) which performs image segmentation. I want the logits as the output to feed into the cross entropy loss (using pytorch). Currently my final layer looks as so:
class Logits(nn.Sequential):
def __init__(self,
in_channels,
n_class
):
super(Logits, self).__init__()
# fully connected layer outputting the prediction layers for each of my classes
self.conv = self.add_module('conv_out',
nn.Conv2d(in_channels,
n_class,
kernel_size = 1
)
)
self.activ = self.add_module('sigmoid_out',
nn.Sigmoid()
)
Is it correct to use the sigmoid activation function here? Does this give me logits?
| When people talk about "logits" they usually refer to the "raw" n_class-dimensional output vector. For multi-class classification (n_class > 2) you want to convert the n_class-dimensional vector of raw "logits" into a n_class-dim probability vector.
That is, you want prob = f(logits) with prob_i >= 0 for all n_class entries, and that sum(prob)=1.
The most straightforward way of doing that differentiably is to use the Softmax function:
prob_i = softmax(logits) = exp(logits_i) / sum_j exp(logits_j)
It is easy to see that the output of softmax is indeed a n_class-dim probability vector (I leave it to you as a short exercise).
BTW, this is why the raw predictions are called "logits" because they are kind of "log" of the output predicted probabilities.
Now, it is customary not to explicitly compute the softmax on top of a classification network and defer its computation to the loss function, e.g. nn.CrossEntropyLoss that internally computes the softmax and requires the raw logits as inputs, rather than the normalized probabilities. This is done mainly for numerical stability.
Therefore, if you are training a multi-class classification network with nn.CrossEntropyLoss you do not need to worry at all about the final activation and simply output the raw logits from your final conv/linear layer.
Most importantly, do not use nn.Sigmoid() activation as it tends to have saturated gradients and will mess up your training.
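A minimal sketch of that recommendation for the segmentation case (the channel counts and spatial size here are made up): a 1x1 conv produces raw per-pixel logits that go straight into nn.CrossEntropyLoss:
import torch
import torch.nn as nn

n_class, in_channels = 5, 16
head = nn.Conv2d(in_channels, n_class, kernel_size=1)  # final layer, no activation after it

features = torch.randn(2, in_channels, 8, 8)   # pretend output of the Unet body
logits = head(features)                        # shape (2, n_class, 8, 8), raw logits
target = torch.randint(0, n_class, (2, 8, 8))  # per-pixel class indices
loss = nn.CrossEntropyLoss()(logits, target)   # softmax is handled internally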
| https://stackoverflow.com/questions/65703071/ |
Using tensordot with torch.sparse tensors | Is it possible to use a similar method as "tensordot" with torch.sparse tensors?
I am trying to apply a 4 dimensional tensor onto a 2 dimensional tensor. This is possible using torch or numpy. However, I did not find the way to do it using torch.sparse without making the sparse tensor dense using ".to_dense()".
More precisely, here is what I want to do without using ".to_dense()":
import torch
import torch.sparse
nb_x = 4
nb_y = 3
coordinates = torch.LongTensor([[0,1,2],[0,1,2],[0,1,2],[0,1,2]])
values = torch.FloatTensor([1,2,3])
tensor4D = torch.sparse.FloatTensor(coordinates,values,torch.Size([nb_x,nb_y,nb_x,nb_y]))
inp = torch.rand((nb_x,nb_y))
#what I want to do
out = torch.tensordot(tensor4D.to_dense(),inp,dims=([2,3],[0,1]))
print(inp)
print(out)
(here is the output: torch_code)
Alternatively, here is a similar code using numpy:
import numpy as np
tensor4D = np.zeros((4,3,4,3))
tensor4D[0,0,0,0] = 1
tensor4D[1,1,1,1] = 2
tensor4D[2,2,2,2] = 3
inp = np.random.rand(4,3)
out = np.tensordot(tensor4D,inp)
print(inp)
print(out)
(here is the output: numpy_code)
Thanks for helping!
| Your specific tensordot can be cast to a simple matrix multiplication by "squeezing" the first two and last two dimensions of tensor4D.
In short, what you want to do is
raw = tensor4D.view(nb_x*nb_y, nb_x*nb_y) @ inp.flatten()
out = raw.view(nb_x, nb_y)
However, since view and reshape are not implemented for sparse tensors, you'll have to do it manually:
sz = tensor4D.shape
idx = tensor4D._indices()
coeff = torch.tensor([[1, sz[1], 0, 0], [0, 0, 1, sz[3]]])
reshaped = torch.sparse.FloatTensor(coeff @ idx, tensor4D._values(), torch.Size([nb_x*nb_y, nb_x*nb_y]))
# once we reshaped tensor4D it's all downhill from here
raw = torch.sparse.mm(reshaped, inp.flatten()[:, None])
out = raw.reshape(nb_x, nb_y)
print(out)
And the output is
tensor([[0.4180, 0.0000, 0.0000],
[0.0000, 0.6025, 0.0000],
[0.0000, 0.0000, 0.5897],
[0.0000, 0.0000, 0.0000]])
| https://stackoverflow.com/questions/65703930/ |
Pytorch's Autograd does not support complex matrix inversion, does anyone have a workaround? | Somewhere in my loss function, I invert a complex matrix of size 64*64. Although complex matrix inversion is supported for torch.tensor, the gradient cannot be computed in the training loop as I get this error:
RuntimeError: inverse does not support automatic differentiation for outputs with complex type.
Does anyone have a workaround for this issue? a custom function instead of torch.inverse maybe?
| You can do the inverse yourself using the real-valued components of your complex matrix.
Some linear algebra first:
a complex matrix C can be written as a sum of two real matrices A and B (j is the sqrt of -1):
C = A + jB
Finding the inverse of C is basically finding two real valued matrices x and y such that
(A + jB)(x + jy) = I + j0
This boils down to solving the real valued system of equations:
Ax - By = I and Bx + Ay = 0, i.e. the block system [[A, -B], [B, A]] [x; y] = [I; 0].
Now that we know how to reduce a complex matrix inversion to a real-valued matrix inversion, we can use pytorch's solve to do the inverse for us.
def complex_inverse(C):
A = torch.real(C)
B = torch.imag(C)
# construct the left hand side of the system of equations
# side note: from pytorch 1.7.1 you can use vstack and hstack instead of cat
lhs = torch.cat([torch.cat([A, -B], dim=1), torch.cat([B, A], dim=1)], dim=0)
# construct the rhs of the system of equations
rhs = torch.cat([torch.eye(A.shape[0]).to(A), torch.zeros_like(A)],dim=0)
# solve the system of equations
raw, _ = torch.solve(rhs, lhs)
# write the solution as a single complex matrix
iC = raw[:C.shape[0], :] + 1j * raw[C.shape[0]:, :]
return iC
You can verify the solution using numpy:
# C is a complex torch tensor
iC = complex_inverse(C)
with torch.no_grad():
print(np.isclose(iC.cpu().numpy() @ C.cpu().numpy(), np.eye(C.shape[0])).all())
Note that by using inverse of block-matrices tricks you may reduce the computational cost of the solve operation.
| https://stackoverflow.com/questions/65712154/ |
Zero diagonal of a PyTorch tensor? | Is there a simple way to zero the diagonal of a PyTorch tensor?
For example I have:
tensor([[2.7183, 0.4005, 2.7183, 0.5236],
[0.4005, 2.7183, 0.4004, 1.3469],
[2.7183, 0.4004, 2.7183, 0.5239],
[0.5236, 1.3469, 0.5239, 2.7183]])
And I want to get:
tensor([[0.0000, 0.4005, 2.7183, 0.5236],
[0.4005, 0.0000, 0.4004, 1.3469],
[2.7183, 0.4004, 0.0000, 0.5239],
[0.5236, 1.3469, 0.5239, 0.0000]])
| I believe the simplest would be to use torch.diagonal:
z = torch.randn(4,4)
torch.diagonal(z, 0).zero_()
print(z)
>>> tensor([[ 0.0000, -0.6211, 0.1120, 0.8362],
[-0.1043, 0.0000, 0.1770, 0.4197],
[ 0.7211, 0.1138, 0.0000, -0.7486],
[-0.5434, -0.8265, -0.2436, 0.0000]])
This way, the code is perfectly explicit, and you delegate the performance to pytorch's built in functions.
| https://stackoverflow.com/questions/65712349/ |
RuntimeError: Given groups=1, weight of size [16, 1, 3, 3], expected input[16, 3, 1, 28] to have 1 channels, but got 3 channels instead | I know my images have only 1 channel so the first conv layer is (1,16,3,1) , but I have no idea why I got such an error.
Here is my code (I post only the related part).
org_x = train_csv.drop(['id', 'digit', 'letter'], axis=1).values
org_x = org_x.reshape(-1, 28, 28, 1)
org_x = org_x/255
org_x = np.array(org_x)
org_x = org_x.reshape(-1, 1, 28, 28)
org_x = torch.Tensor(org_x).float()
x_test = test_csv.drop(['id','letter'], axis=1).values
x_test = x_test.reshape(-1, 28, 28, 1)
x_test = x_test/255
x_test = np.array(x_test)
x_test = x_test.reshape(-1, 1, 28, 28)
x_test = torch.Tensor(x_test).float()
y = train_csv['digit']
y = list(y)
print(len(y))
org_y = np.zeros([len(y), 1])
for i in range(len(y)):
org_y[i] = y[i]
org_y = np.array(org_y)
org_y = torch.Tensor(org_y).float()
from sklearn.model_selection import train_test_split
x_train, x_valid, y_train, y_valid = train_test_split(
org_x, org_y, test_size=0.2, random_state=42)
I checked the x_train shape is [1638, 1, 28, 28] and the x_valid shape is [410, 1, 28, 28].
transform = transforms.Compose([transforms.ToPILImage(),
transforms.ToTensor(),
transforms.Normalize((0.5, ), (0.5, )) ])
class kmnistDataset(data.Dataset):
def __init__(self, images, labels, transforms=None):
self.x = images
self.y = labels
self.transforms = transforms
def __len__(self):
return (len(self.x))
def __getitem__(self, idx):
data = np.asarray(self.x[idx][0:]).astype(np.uint8)
if self.transforms:
data = self.transforms(data)
if self.y is not None:
return (data, self.y[idx])
else:
return data
train_data = kmnistDataset(x_train, y_train, transforms=transform)
valid_data = kmnistDataset(x_valid, y_valid, transforms=transform)
# dataloaders
train_loader = DataLoader(train_data, batch_size=16, shuffle=True)
valid_loader = DataLoader(valid_data, batch_size=16, shuffle = False)
And here is my model
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 16, 3, padding=1)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
self.bn1 = nn.BatchNorm2d(16)
self.pool = nn.MaxPool2d(2, 2)
unit = 64 * 14 * 14
self.fc1 = nn.Linear(unit, 500)
self.fc2 = nn.Linear(500, 10)
def forward(self, x):
x = self.pool(F.relu(self.bn1(self.conv1(x))))
x = F.relu(self.conv2(x))
x = F.relu(self.conv3(x))
x = x.view(-1, 128 * 28 * 28)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return x
model = Net()
print(model)
Lastly,
n_epochs = 30
valid_loss_min = np.Inf
for epoch in range(1, n_epochs+1):
train_loss = 0
valid_loss = 0
###################
# train the model #
###################
model.train()
for data in train_loader:
inputs, labels = data[0], data[1]
optimizer.zero_grad()
output = model(inputs)
loss = criterion(output, labels)
loss.backward()
optimizer.step()
train_loss += loss.item()*data.size(0)
#####################
# validate the model#
#####################
model.eval()
for data in valid_loader:
inputs, labels = data[0], data[1]
output = model(inputs)
loss = criterion(output, labels)
valid_loss += loss.item()*data.size(0)
train_loss = train_loss/ len(train_loader.dataset)
valid_loss = valid_loss / len(valid_loader.dataset)
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
When I run it, I got this error message
RuntimeError: Given groups=1, weight of size [16, 1, 3, 3], expected input[16, 3, 1, 28] to have 1 channels, but got 3 channels instead
To be specific,
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-14-b8783819421f> in <module>
14 inputs, labels = data[0], data[1]
15 optimizer.zero_grad()
---> 16 output = model(inputs)
17 loss = criterion(output, labels)
18 loss.backward()
/opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
<ipython-input-12-500e34c49306> in forward(self, x)
26
27 def forward(self, x):
---> 28 x = self.pool(F.relu(self.bn1(self.conv1(x))))
29 x = F.relu(self.conv2(x))
30 x = F.relu(self.conv3(x))
/opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/conv.py in forward(self, input)
421
422 def forward(self, input: Tensor) -> Tensor:
--> 423 return self._conv_forward(input, self.weight)
424
425 class Conv3d(_ConvNd):
/opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight)
418 _pair(0), self.dilation, self.groups)
419 return F.conv2d(input, weight, self.bias, self.stride,
--> 420 self.padding, self.dilation, self.groups)
421
422 def forward(self, input: Tensor) -> Tensor:
RuntimeError: Given groups=1, weight of size [16, 1, 3, 3], expected input[16, 3, 1, 28] to have 1 channels, but got 3 channels instead
| I tried a small demo with your code, and it works fine once the flattening is changed to x = x.view(-1, 64*14*14) and the input has shape torch.Size([1, 1, 28, 28])
import torch
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 16, 3, padding=1)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
self.bn1 = nn.BatchNorm2d(16)
self.pool = nn.MaxPool2d(2, 2)
unit = 64 * 14 * 14
self.fc1 = nn.Linear(unit, 500)
self.fc2 = nn.Linear(500, 10)
def forward(self, x):
x = self.pool(F.relu(self.bn1(self.conv1(x))))
x = F.relu(self.conv2(x))
x = F.relu(self.conv3(x))
#print(x.shape)
x = x.view(-1, 64*14*14)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return x
model = Net()
print(model)
data = torch.rand((1,1,28,28))
pred = model(data)
And if I give my data tensor as data = torch.rand((1,3,28,28)) I get your error, i.e. RuntimeError: Given groups=1, weight of size [16, 1, 3, 3], expected input[16, 3, 1, 28] to have 1 channels, but got 3 channels instead
So please check the channel dimension of your data just before passing it to your model, i.e. here (highlighted by ** **):
for data in train_loader:
inputs, labels = data[0], data[1]
optimizer.zero_grad()
**print(inputs.shape)**
output = model(inputs)
loss = criterion(output, labels)
loss.backward()
optimizer.step()
train_loss += loss.item()*data.size(0)
| https://stackoverflow.com/questions/65719005/ |
Pytorch CUDA error: no kernel image is available for execution on the device on RTX 3090 with cuda 11.1 | If I run the following:
import torch
import sys
print('A', sys.version)
print('B', torch.__version__)
print('C', torch.cuda.is_available())
print('D', torch.backends.cudnn.enabled)
device = torch.device('cuda')
print('E', torch.cuda.get_device_properties(device))
print('F', torch.tensor([1.0, 2.0]).cuda())
I get this:
A 3.7.5 (default, Nov 7 2019, 10:50:52)
[GCC 8.3.0]
B 1.8.0.dev20210115+cu110
C True
D True
E _CudaDeviceProperties(name='GeForce RTX 3090', major=8, minor=6, total_memory=24267MB, multi_processor_count=82)
F
<stacktrace>
CUDA error: no kernel image is available for execution on the device
More info about my system:
Nvidia version: NVIDIA-SMI 455.38 Driver Version: 455.38 CUDA Version: 11.1
python 3.7, Ubuntu 18.04
| Found a fix for my problem here: https://github.com/pytorch/pytorch/issues/31285#issuecomment-739139454
pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html -U
Then my code snippet gives:
A 3.7.5 (default, Nov 7 2019, 10:50:52)
[GCC 8.3.0]
B 1.8.0.dev20210115+cu110
C True
D True
E _CudaDeviceProperties(name='GeForce RTX 3090', major=8, minor=6, total_memory=24267MB, multi_processor_count=82)
F tensor([1., 2.], device='cuda:0')
| https://stackoverflow.com/questions/65739700/ |
Reshaping function in numpy | I am trying to reshape data for image classification purpose. I want to convert shape (32,32,3) to (1,3,32,32). I have used two ways for the reshaping purpose and got different results. The first one is numpy reshape method. The other code is written by me.
def res(t):
n = np.zeros((3,32,32))
for j in range(3):
for k in range(32):
for l in range(32):
n[j][k][l]=t[k][l][j]
n=n.reshape(1,3,32,32)
return n
I am not able to understand what is the difference between both approaches.
| This is what you want to do with np.reshape after transpose -
new = original.transpose(2,0,1).reshape(1,3,32,32)
#(32,32,3)->(3,32,32)->(1,3,32,32)
##OR##
new = original.transpose(2,0,1)[None,...]
#(32,32,3)->(3,32,32)->(1,3,32,32)
Full code with a comparison of results between your function and the transpose method.
t = np.random.random((32,32,3))
def res(t):
n = np.zeros((3,32,32))
for j in range(3):
for k in range(32):
for l in range(32):
n[j,k,l]=t[k,l,j] #<--- fixed indexing
n=n.reshape(1,3,32,32)
return n
## METHOD Transpose and Reshape
np.allclose(t.transpose(2,0,1).reshape(1,3,32,32), res(t))
#True
## METHOD Transpose and new axis
np.allclose(t.transpose(2,0,1)[None,...], res(t))
#True
| https://stackoverflow.com/questions/65745440/ |
How can I create a tensor of batch batch_size of uniformly distributed values between -1 and 1? | The title pretty much sums it, I'm trying to implement a GAN:
How can I create a tensor of batch batch_size of uniformly distributed values between -1 and 1 with pytorch?
def create_latent_batch_vectors(batch_size, latent_vector_size, device):
'''
The function creates a random batch of latent vectors with random values
distributed uniformly between -1 and 1.
Finally, it moves the tensor to the given ```device``` (cpu or gpu).
The output should have a shape of [batch_size, latent_vector_size].
'''
# maybe torch.distributions.uniform.Uniform() somehow?
return z.to(device)
Thanks!
| Let us first define an uniform distribution with a low-range as -1 and high-range as +1
dist = torch.distributions.uniform.Uniform(-1,1)
sample_shape = torch.Size([2])
dist.sample(sample_shape)
>tensor([0.7628, 0.3497])
This is a tensor of shape 2 (sample_shape). It doesn't have batch_shape. Let's check:
dist.batch_shape
>torch.Size([])
Now let's use expand. It essentially creates a new distribution instance by expanding the batch_shape.
new_batch_shape = torch.Size([5]) # batch_shape of [5]
expanded_dist = dist.expand(new_batch_shape)
Check:
expanded_dist.batch_shape
>torch.Size([5])
Creating a tensor of shape [batch_size, sample_shape]
expanded_dist.sample(sample_shape)
>tensor([[0.1592, 0.3404, 0.3520, 0.3038, 0.0393],
[0.9368, 0.0108, 0.5836, 0.6156, 0.6704]])
The three types of shapes are defined as follows:
Sample shape describes independent, identically distributed draws from the distribution.
Batch shape describes independent, not identically distributed draws. Namely, we may have a set of (different)
parameterizations to the same distribution. This enables the common
use case in machine learning of a batch of examples, each modeled by
its own distribution.
Event shape describes the shape of a single draw (event space) from the distribution; it may be dependent across dimensions.
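For completeness, a more direct sketch of the requested helper using torch.rand, which samples U[0, 1) and is then rescaled to [-1, 1):
import torch

def create_latent_batch_vectors(batch_size, latent_vector_size, device):
    # torch.rand gives U[0, 1); an affine rescale gives U[-1, 1)
    z = 2 * torch.rand(batch_size, latent_vector_size) - 1
    return z.to(device)

z = create_latent_batch_vectors(8, 100, torch.device('cpu'))
print(z.shape)  # torch.Size([8, 100])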
| https://stackoverflow.com/questions/65745441/ |
Zeroing the diagonal of a matrix by multiplying by (1-I) | I have a tensor, lets say like this:
tensor([[2.7183, 0.4005, 2.7183, 0.5236],
[0.4005, 2.7183, 0.4004, 1.3469],
[2.7183, 0.4004, 2.7183, 0.5239],
[0.5236, 1.3469, 0.5239, 2.7183]])
And I want to zero its main diagonal by multiplying it by (1-I), meaning by 1 minus the
identity matrix.
How can I do this in pytorch?
Result of the example should be:
tensor([[0.0000, 0.4005, 2.7183, 0.5236],
[0.4005, 0.0000, 0.4004, 1.3469],
[2.7183, 0.4004, 0.0000, 0.5239],
[0.5236, 1.3469, 0.5239, 0.0000]])
I'm looking for a general case solution and not specific to the example I gave.
Thanks!
| torch.eye will be helpful for generating identity matrix
import torch
x = torch.tensor([[2.7183, 0.4005, 2.7183, 0.5236],
[0.4005, 2.7183, 0.4004, 1.3469],
[2.7183, 0.4004, 2.7183, 0.5239],
[0.5236, 1.3469, 0.5239, 2.7183]],dtype=torch.float32)
y = 1-torch.eye(x.size()[0],dtype=torch.float32) #only if x is square matrix
output = x*y
| https://stackoverflow.com/questions/65746836/ |
ModuleNotFoundError: No module named 'numpy.core._multiarray_umath' on matplotlib import | I'm trying to run a simple testfile on a remote Server. But it throws a numpy error for matplotlib.pyplot. Here is the code
import matplotlib.pyplot as plt
import numpy as np
# Fixing random state for reproducibility
np.random.seed(19680801)
x, y = np.random.randn(2, 100)
print('x')
print(x)
print('y')
print(y)
fig, [ax1, ax2] = plt.subplots(2, 1, sharex=True)
ax1.xcorr(x, y, usevlines=True, maxlags=50, normed=True, lw=2)
ax1.grid(True)
ax2.acorr(x, usevlines=True, normed=True, maxlags=50, lw=2)
ax2.grid(True)
plt.show()
Here is the error message.
PyTorch/1.7-py36-cuda11/numpy/core/overrides.py", line 7, in
from numpy.core._multiarray_umath import (
ModuleNotFoundError: No module named 'numpy.core._multiarray_umath'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "graph_test.py", line 1, in
import matplotlib.pyplot as plt
/PyTorch/1.7-py36-cuda11/numpy/core/init.py", line 48, in
raise ImportError(msg)
ImportError:
IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
Importing the numpy C-extensions failed. This error can happen for
many reasons, often due to issues with your setup or how NumPy was
installed.
We have compiled some common reasons and troubleshooting tips at:
https://numpy.org/devdocs/user/troubleshooting-importerror.html
Please note and check the following:
The Python version is: Python3.7 from "/projects/smiles/Model/venv/bin/python"
The NumPy version is: "1.19.4"
and make sure that they are the versions you expect.
Please carefully study the documentation linked above for further help.
Original error was: No module named 'numpy.core._multiarray_umath'
Python Version: 3.7.5
Numpy Version: 1.19.4
Matplotlib version: 3.3.3
| According to the troubleshooting page linked by numpy:
to fix the error you need to check a couple of things that commonly cause it:
1- Check that you are using the right version of Python and the right version of numpy; check the documentation linked above for further information (you might also have multiple Python versions installed, which can affect how your libraries run). Also try the newer numpy version 1.19.5.
2- check if your python path is in the environment variables of your system.
you can check your path by running this command in your terminal:
import os
print("PYTHONPATH:", os.environ.get('PYTHONPATH'))
print("PATH:", os.environ.get('PATH'))
3- Try to uninstall and reinstall the numpy library (make sure to install the right version to satisfy whatever program you want to run).
4- If none of the above solutions work, I highly recommend installing the Anaconda installer; here is a link to it: https://www.anaconda.com/products/individual
That is because Anaconda will most likely solve the problems of downloading libraries and the errors you get from them.
Hopefully those solutions helped you, and feel free to ask any question regarding it.
| https://stackoverflow.com/questions/65749606/ |
PyTorch out of GPU memory in test loop | For the following training program, training and validation are all ok.
Once it reaches the test method, I get CUDA out of memory. What should I change so that I have enough memory to test as well?
import torch
from torchvision import datasets, transforms
import torch.nn.functional as f
class CnnLstm(nn.Module):
def __init__(self):
super(CnnLstm, self).__init__()
self.cnn = CNN()
self.rnn = nn.LSTM(input_size=180000, hidden_size=256, num_layers=2, batch_first=True)#stacked LSTM with 2 layers
#print(num_classes)
self.linear = nn.Linear(256, num_classes)
#print('after num_classes')
def forward(self, x):
#print(x.shape)
batch_size, time_steps, channels, height, width = x.size()
c_in = x.view(batch_size * time_steps, channels, height, width)
_, c_out = self.cnn(c_in)
r_in = c_out.view(batch_size, time_steps, -1)
r_out, (_, _) = self.rnn(r_in)
r_out2 = self.linear(r_out[:, -1, :])
return f.log_softmax(r_out2, dim=1)
class TrainCNNLSTM:
def __init__(self):
self.seed = 1
self.batch_size = 8
self.validate_batch_size = 8
self.test_batch_size = 1
self.epoch = 20
self.learning_rate = 0.01
self.step = 100
self.train_loader = None
self.validate_loader = None
self.test_loader = None
#print('before')
self.model = CnnLstm().to(device)
#print('after')
self.criterion = nn.CrossEntropyLoss()
def load_data(self):
data_loader = DataLoader()
self.train_loader = data_loader.get_train_data(self.batch_size)
self.validate_loader = data_loader.get_validate_data(self.validate_batch_size)
self.test_loader = data_loader.get_test_data(self.test_batch_size)
def train(self):
optimizer = torch.optim.SGD(self.model.parameters(), lr=self.learning_rate, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr=self.learning_rate/100.0, max_lr=self.learning_rate, step_size_up=13)
#optimizer = torch.optim.SGD(self.model.parameters(), lr=self.learning_rate)
for epoch in range(self.epoch):
t_losses=[]
for iteration, (data, target) in enumerate(self.train_loader):
data = np.expand_dims(data, axis=1)
data = torch.FloatTensor(data)
data, target = data.cuda(), target.cuda()
data, target = Variable(data), Variable(target)
optimizer.zero_grad()
output = self.model(data)
loss = self.criterion(output, target)
#loss = f.nll_loss(output, target)
t_losses.append(loss)
loss.backward()
optimizer.step()
scheduler.step()
if iteration % self.step == 0:
print('Epoch: {} | train loss: {:.4f}'.format(epoch, loss.item()))
avgd_trainloss = sum(t_losses)/len(t_losses)
self.validate(epoch, avgd_trainloss)
def validate(self, epoch, avg_tloss):
v_losses=[]
with torch.no_grad():
for iteration, (data, target) in enumerate(self.validate_loader):
data = np.expand_dims(data, axis=1)
data = torch.FloatTensor(data)
data, target = data.cuda(), target.cuda()
data, target = Variable(data), Variable(target)
output = self.model(data)
loss = self.criterion(output, target)
#loss = f.nll_loss(output, target)
v_losses.append(loss)
avgd_validloss = sum(v_losses)/len(v_losses)
print('Epoch: {} | train loss: {:.4f} | validate loss: {:.4f}'.format(epoch, avg_tloss, avgd_validloss))
def test(self):
test_loss = []
correct = 0
for data, target in self.test_loader:
data = np.expand_dims(data, axis=1)
data = torch.FloatTensor(data)
data, target = data.cuda(), target.cuda()
data, target = Variable(data, volatile=True), Variable(target)
output = self.model(data)
loss = self.criterion(output, target)
#f.nll_loss(output, target, size_average=False).item() # sum up batch loss
test_loss.append(loss)
pred = torch.max(output, 1)[1].data.squeeze()
correct += pred.eq(target.data.view_as(pred)).long().cpu().sum()
test_loss = sum(test_loss)/len(test_loss)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(test_loss, correct, len(self.test_loader.dataset),
100. * correct / len(self.test_loader.dataset)))
train = TrainCNNLSTM()
train.load_data()
train.train()
train.test()
| You should call .item() on your loss when appending it to the list of losses:
loss = self.criterion(output, target)
test_loss.append(loss.item())
This avoids accumulating tensors in a list which are still attached to the computational graph. I would say the same for your accuracy.
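A minimal sketch of the adjusted accumulation inside test() (same variables as in the question), additionally wrapped in torch.no_grad() so no graph is kept around during evaluation:
with torch.no_grad():
    for data, target in self.test_loader:
        data = torch.FloatTensor(np.expand_dims(data, axis=1)).cuda()
        target = target.cuda()
        output = self.model(data)
        loss = self.criterion(output, target)
        test_loss.append(loss.item())   # plain float, detached from the graph
        pred = torch.max(output, 1)[1].data.squeeze()
        correct += pred.eq(target.data.view_as(pred)).long().cpu().sum()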
| https://stackoverflow.com/questions/65757115/ |
PyTorch GPU memory management | In my code, I want to replace values in the tensor given values of some indices are zero, for example
target_mac_out[avail_actions[:, 1:] == 0] = -9999999
But, it returns OOM
RuntimeError: CUDA out of memory. Tried to allocate 166.00 MiB (GPU 0; 10.76 GiB total capacity; 9.45 GiB already allocated; 4.75 MiB free; 9.71 GiB reserved in total by PyTorch)
I think there is no memory allocation because it just visits the tensor of target_mac_out and check the value and replace a new value for some indices.
Am I understanding right?
| It's hard to guess since we do not even know the sizes of the involved tensors, but your indexing avail_actions[:, 1:] == 0 creates a temporary tensor that does require memory allocation.
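If the goal is just to overwrite those entries, one option (a sketch, not a guaranteed fix for the OOM) is the in-place Tensor.masked_fill_, which avoids the extra copies made by indexed assignment, although the boolean mask itself still has to be allocated:
mask = (avail_actions[:, 1:] == 0)           # the boolean mask still needs memory
target_mac_out.masked_fill_(mask, -9999999)  # fills in place, no indexed copy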
| https://stackoverflow.com/questions/65757291/ |
Learning rate finder for CNNLstm model | I have CNNLstm model as follows.
class CNN(nn.Module):
def __init__(self):
super(CNN, self).__init__()
self.conv1 = nn.Sequential(
nn.Conv2d(
in_channels=3,
out_channels=16,
kernel_size=5,
stride=1,
padding=2,
),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2),
)
self.conv2 = nn.Sequential(
nn.Conv2d(16, 32, 5, 1, 2),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2),
)
#print(num_classes)
self.out = nn.Linear(32 * 75 * 75, num_classes)#32 * 75 * 75/64 * 37 * 37/128 * 18 * 18
def forward(self, x):
x = self.conv1(x)
x = self.conv2(x)
x = x.view(x.size(0), -1)
output = self.out(x)
return output, x
import torch
from torchvision import datasets, transforms
import torch.nn.functional as f
from torch_lr_finder import LRFinder
class CnnLstm(nn.Module):
def __init__(self):
super(CnnLstm, self).__init__()
self.cnn = CNN()
self.rnn = nn.LSTM(input_size=180000, hidden_size=256, num_layers=3, batch_first=True)#stacked LSTM with 2 layers
self.linear = nn.Linear(256, num_classes)
def forward(self, x):
batch_size, time_steps, channels, height, width = x.size()
c_in = x.view(batch_size * time_steps, channels, height, width)
_, c_out = self.cnn(c_in)
r_in = c_out.view(batch_size, time_steps, -1)
r_out, (_, _) = self.rnn(r_in)
r_out2 = self.linear(r_out[:, -1, :])
return f.log_softmax(r_out2, dim=1)
class TrainCNNLSTM:
def __init__(self):
self.seed = 1
self.batch_size = 8
self.validate_batch_size = 8
self.test_batch_size = 1
self.epoch = 50
self.learning_rate = 0.005
self.step = 100
self.train_loader = None
self.validate_loader = None
self.test_loader = None
self.modelloaded = False
self.model = CnnLstm().to(device)
self.criterion = nn.CrossEntropyLoss()
#self.optimizer = torch.optim.SGD(self.model.parameters(), lr=self.learning_rate)#self.learning_rate = 0.001
self.optimizer = torch.optim.AdamW(self.model.parameters())
#self.scheduler = optim.lr_scheduler.OneCycleLR(self.optimizer, 2e-3, epochs=self.epoch , steps_per_epoch=len(train_loader))
def load_data(self):
data_loader = DataLoader()
self.train_loader = data_loader.get_train_data(self.batch_size)
self.validate_loader = data_loader.get_validate_data(self.validate_batch_size)
self.test_loader = data_loader.get_test_data(self.test_batch_size)
def do_lrfinder(self):
lr_finder = LRFinder(self.model, self.optimizer, self.criterion, device)
lr_finder.range_test(self.train_loader, end_lr=1, num_iter=1000)
lr_finder.plot()
plt.savefig("LRvsLoss.png")
plt.close()
def train(self):
for epoch in range(0, self.epoch):
t_losses=[]
for iteration, (data, target) in enumerate(self.train_loader):
print(data.shape)
data = np.expand_dims(data, axis=1)
print(data.shape)
data = torch.FloatTensor(data)
data, target = data.cuda(), target.cuda()
data, target = Variable(data), Variable(target)
self.optimizer.zero_grad()
Since it is a CNNLstm model, the input shape to the model is (batch_size, time_steps, channels, height, width).
(8, 1, 3, 300, 300)
To use torch_lr_finder, we need to run the following code.
lr_finder = LRFinder(self.model, self.optimizer, self.criterion, device)
lr_finder.range_test(self.train_loader, end_lr=1, num_iter=1000)
self.train_loader output shape is (8, 3, 300, 300). So during finding learning rate, self.model can't be used.
How can I use torch_lr_finder for such model?
| One possibility: instead of expanding the dims in the training loop, pass the 4-D tensor straight into the model and add the missing time-step dimension with .unsqueeze(1) inside forward. In the loop, simply omit the np.expand_dims call:
print(data.shape)
data = torch.FloatTensor(data)
Then, at the top of your forward function, add the dimension yourself:
x = x.unsqueeze(1)
| https://stackoverflow.com/questions/65761728/ |
Pandas Dataframe to tensor | I have a dataframe with 3 columns (a date index, a price and a string symbol).
It looks like that:
Date        Price       Symbol
2019-01-02  39.480000   AAPL
2019-01-02  101.120003  MSFT
2019-01-02  62.023998   TSLA
2019-01-03  35.547501   AAPL
2019-01-03  97.400002   MSFT
2019-01-03  60.071999   TSLA
I'm looking for some panda/pytorch/python syntactic sugar to turn that into a tensor/matrix that will be:
[ [ 39.480000, 101.120003, 62.023998], [35.547501, 97.400002, 60.071999]]
The length of the first dimension will be the number of unique dates, and the length of the second will be the number of unique symbols.
I'm guaranteed to have exactly 3 symbols per date and I want that each row of my matrix follow the same order for its columns (e.g always AAPL, MSFT, TSLA).
Now, that is very easy with some for loops, but I'm looking for something more "pythonic"
| You can groupby the date column, convert the groups of Price to numpy arrays, and then convert this series to a tensor:
import numpy as np
import torch
import pandas as pd
prices = df.groupby(['Date'])['Price'].apply(np.array)
my_tensor = torch.tensor(prices)
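If torch.tensor refuses the object-dtype Series directly, stacking the per-date arrays first is a safe fallback (same prices variable as above):
my_tensor = torch.tensor(np.stack(prices.values))  # shape: (n_dates, n_symbols)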
| https://stackoverflow.com/questions/65767833/ |
2D times 2D equals a 3d pytorch tensor | Given two 2-D pytorch tensors:
A = torch.FloatTensor([[1,2],[3,4]])
B = torch.FloatTensor([[0,0],[1,1],[2,2]])
Is there an efficient way to calculate a tensor of shape (6, 2, 2) where each entry is a column of A times each row of B?
For example, with A and B above, the 3D tensor should have the following matrices:
[[[0, 0],
[0, 0]],
[[1, 1],
[3, 3]],
[[2, 2],
[6, 6]],
[[0, 0],
[0, 0]],
[[2, 2],
[4, 4]],
[[4, 4],
[8, 8]]]
I know how to do it via for-loop but I am wondering if could have an efficient way to save it.
| Pytorch tensors implement numpy style broadcast semantics which will work for this problem.
It's not clear from the question if you want to perform matrix multiplication or element-wise multiplication. In the length 2 case that you showed the two are equivalent, but this is certainly not true for higher dimensionality! Thankfully the code is almost the same so I'll just give both options.
A = torch.FloatTensor([[1, 2], [3, 4]])
B = torch.FloatTensor([[0, 0], [1, 1], [2, 2]])
# matrix multiplication
C_mm = (A.T[:, None, :, None] @ B[None, :, None, :]).flatten(0, 1)
# element-wise multiplication
C_ew = (A.T[:, None, :, None] * B[None, :, None, :]).flatten(0, 1)
Code description. A.T transposes A and the indexing with None inserts unitary dimensions so A.T[:, None, :, None] will be shape (2, 1, 2, 1) and B[None, :, None, :] is shape (1, 3, 1, 2). Since @ (matrix multiplication) operates on the last two dimensions of tensors, and broadcasts the other dimensions, then the result is matrix multiplication for each column of A times each row of B. In the element-wise case the broadcasting is performed on every dimension. The result is a (2, 3, 2, 2) tensor. To turn it into a (6, 2, 2) tensor we just flatten the first two dimensions using Tensor.flatten.
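A quick loop-based sanity check of the broadcasted result, using the same A and B as above:
expected = torch.stack([A[:, i:i+1] @ B[j:j+1, :]
                        for i in range(A.shape[1])
                        for j in range(B.shape[0])])
print(torch.allclose(C_mm, expected))  # True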
| https://stackoverflow.com/questions/65784125/ |
Where to implement pre-processing in PyTorch Lightning (e.g. tokenizing input text) | Is there a convention to implement some kind of predict() method in PyTorch Lightning that does pre-processing before performing the actual prediction using forward()?
In my case, I have a text classifier consisting of an embedding layer and a few fully connected layers. The text needs to be tokenized before being passed to the embedding layer. During training and evaluation the LightningDataModule's setup() method does the job.
Now, I'm wondering what the best practice for inference during production is. I could add a predict() method to my LightningModule where I could write the same pre-processing code as in LightningDataModule.setup(). But, of course, I do not want to duplicate the code.
In this community example project linked in the official PyTorch Lightning docs, the authors define a prepare_sample() function in the LightningModule that is used by their predict() function, and is also passed to the LightningDataModule.
Is this the right way to handle pre-processing? Also, why is there no prepare_sample() or predict() in LightningModule? To me, this seems like a common use case, for example:
model = load_model('data/model.ckpt') # load pre-trained model, analyzes user reviews
user_input = input('Your movie review > ')
predicted_rating = model.predict(user_input) # e.g. "I liked the movie pretty much." -> 4 stars
print('Predicted rating: %s/5 stars' % predicted_rating)
Now that I think about it, predict() should also process the result from forward() the same way the evaluation code does, like selecting the class with the highest output or selecting all classes with outputs larger than some threshold - some more code that should not be duplicated.
| Why do you use a LightningModule if the code should be for production? If the model is finished you only need to load the model from memory and define the preprocess steps.
The repository you refer to has implemented predict and prepare_sample on top of the LightningModule.
In my opinion pytorch-lightning is for training and evaluation of the model and not for production. We would not want to keep the analytics and debugging when sending a model to production, so instead we create a slimmed version which only handles loading the model, preprocessing and prediction.
Towards Data Science has a small code example: https://towardsdatascience.com/how-to-deploy-pytorch-lightning-models-to-production-7e887d69109f
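As a rough illustration of that split (every name below, the tiny network, the vocab and the weights path, is hypothetical and not taken from the linked example): the production wrapper only loads weights, preprocesses and predicts.
import torch
import torch.nn as nn

class ReviewRatingPredictor:
    """Slim inference-only wrapper: load weights, preprocess, predict."""
    def __init__(self, weights_path, vocab, n_classes=5):
        self.vocab = vocab  # token -> id mapping, the same one used during training
        self.model = nn.Sequential(nn.EmbeddingBag(len(vocab), 64), nn.Linear(64, n_classes))
        self.model.load_state_dict(torch.load(weights_path, map_location='cpu'))
        self.model.eval()

    @torch.no_grad()
    def predict(self, text):
        ids = torch.tensor([[self.vocab.get(tok, 0) for tok in text.lower().split()]])
        logits = self.model(ids)
        return int(logits.argmax(dim=-1)) + 1  # class index -> 1-5 star rating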
| https://stackoverflow.com/questions/65785142/ |
tensorflow autodiff slower than pytorch's counterpart | I am using tensorflow 2.0 and trying to evaluate gradients for backpropagating to a simple feedforward neural network. Here's how my model looks like:
def __init__(self, input_size, output_size):
inputs = tf.keras.Input(shape=(input_size,))
hidden_layer1 = tf.keras.layers.Dense(30, activation='relu')(inputs)
outputs = tf.keras.layers.Dense(output_size)(hidden_layer1)
self.model = tf.keras.Model(inputs=inputs, outputs=outputs)
self.optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
self.loss_function = tf.keras.losses.Huber()
The forward pass to this network is fine but when I use gradient tape to train the model, it is at least 10x slower than PyTorch.
Training function:
def learn_modified_x(self, inputs, targets, actions):
with tf.GradientTape() as tape:
predictions = self.model(inputs)
predictions_for_action = gather_single_along_axis(predictions, actions)
loss = self.loss_function(targets, predictions_for_action)
grads = tape.gradient(loss, self.model.trainable_weights)
self.optimizer.apply_gradients(zip(grads, self.model.trainable_weights))
I tried commenting lines to find what is actually causing the problem. I discovered that tape.gradient is a significant contributor to this situation.
Any idea?
PyTorch implementation
def __init__(self, input_size, nb_action):
super(Network, self).__init__()
self.input_size = input_size
self.nb_action = nb_action
self.fc1 = nn.Linear(input_size, 30)
self.fc2 = nn.Linear(30, nb_action)
def forward(self, state):
x = F.relu(self.fc1(state))
q_values = self.fc2(x)
return q_values
def learn(self, batch_state, batch_next_state, batch_reward, batch_action):
outputs = self.model(batch_state).gather(1, batch_action.unsqueeze(1)).squeeze(1)
next_outputs = self.model(batch_next_state).detach().max(1)[0]
target = self.gamma*next_outputs + batch_reward
td_loss = F.smooth_l1_loss(outputs, target)
self.optimizer.zero_grad()
td_loss.backward(retain_variables = True)
self.optimizer.step()
| def __init__(self,...):
...
self.model.call = tf.function(self.model.call)
...
you need to use tf.function to wrap your model's call function.
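Equivalently, the whole training step that contains the GradientTape can be compiled with tf.function. A sketch using the same method as in the question:
@tf.function
def learn_modified_x(self, inputs, targets, actions):
    with tf.GradientTape() as tape:
        predictions = self.model(inputs)
        predictions_for_action = gather_single_along_axis(predictions, actions)
        loss = self.loss_function(targets, predictions_for_action)
    grads = tape.gradient(loss, self.model.trainable_weights)
    self.optimizer.apply_gradients(zip(grads, self.model.trainable_weights))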
| https://stackoverflow.com/questions/65785966/ |
Open3D-ML and pytorch | I’m currently trying to work with open3d ML and Pytorch. I followed the installation guide given in the Open3D-ML github. However when I try to import open3d.ml.torch it sends me the following error : Exception: Open3D was not built with PyTorch support!
I’m working with
python 3.8
open3d 0.12.0
pytorch 1.6.0
cuda 10.1
Windows 10
Do you have any idea of where that error comes from ?
| Open3D-ML does not support Windows at the moment. You can install Ubuntu via WSL (Windows Subsystem for Linux) on your Windows machine and install open3d-ml on Ubuntu there.
| https://stackoverflow.com/questions/65794655/ |
Pytorch model 2D regression given an scalar input | I want to create a model to perform this regression:
My dataset looks like:
t,x,y
0.0,-,0.5759052335487023
0.01,-,-
0.02,1.1159124144549086,-
0.03,-,-
0.04,1.0054825084650338,0.4775267298487888
0.05,-,-
I'm having some troubles with loss, dataset load, batch_size, and Net structure (I add one single layer to simplify the problem)
Thats my code:
Net:
class Net(nn.Module):
'''Model to regress 2d time series values given scalar input.'''
def __init__(self):
super(Net, self).__init__()
#Layers
self.predict = nn.Linear(1, 2)
def forward(self, x):
x = self.predict(x)
return x
Dataset load
class TimeSeriesDataset(torch.utils.data.Dataset):
def __init__(self, csv_file):
#Load the dataset
#Load the csv file as a dataframe
df = pd.read_csv(csv_file, header=0, na_values='-')
#Store the inputs and outputs
self.x = df.values[:,:-2].astype('float32')
self.y = df.values[:,1:].astype('float32')
#Ensure target has the right shape
self.y = self.y.reshape((len(self.y),2))
def __len__(self):
#Return the number of rows in the dataset
return len(self.x)
def __getitem__(self, idx):
#Return a row at an index
return [self.x[idx], self.y[idx]]
Trainloader, loss, optimizer
dataset = TimeSeriesDataset('data.csv')
trainloader = torch.utils.data.DataLoader(
dataset, batch_size=32, shuffle=True, num_workers=2)
def lossFunc(outputs, labels):
# nn.MSELoss() #Mean Squared Error, works fine with regression problems and with small numbers (x-y)^2
return torch.mean((outputs-labels)**2)
net = Net()
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)
print(net)
Trainning:
for epoch in range(300):
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# TODO get the data
# inputs, labels
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
#print("Inputs", inputs)
#print("labels", labels)
#print("outputs", outputs)
loss = lossFunc(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 20 == 19: # print every 20 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 20))
running_loss = 0.0
print('Finished Training')
Outputs looks this way:
tensor([[nan, nan],
[nan, nan],
[nan, nan],
...
And when I execute the 300 epochs error value doesn't change and prints nan
| After the line loss = loss(outputs, labels), loss is now a tensor, not a function anymore. Python does not allow you to have distinct objects with identical names.
So after the first call, loss has become a tensor, and as the error says "tensors are not callable", so the second call fails
| https://stackoverflow.com/questions/65812727/ |
How to initialize columns in hybrid sparse tensor | How initialize in pytorch hybrid tensor torch.sparse_coo_tensor (one dimension is sparse and other is not), which have the following dense representation?
array([[1, 0, 5, 0],
[2, 0, 6, 0],
[3, 0, 7, 0],
[4, 0, 8, 0]])
What should I put into the indices argument?
| How to initialize
Something like this:
import torch
indices = torch.tensor([[0, 0, 1, 1, 2, 2, 3, 3], [0, 2, 0, 2, 0, 2, 0, 2]])
tensor = torch.sparse_coo_tensor(
indices, torch.tensor([1, 2, 3, 4, 5, 6, 7, 8]), size=(4, 4)
)
Given above:
indices - first dimension specifies row, second column, where non-zero value(s) will be located. Those become pairs, in this case: (0, 0), (0, 2), (1, 0), (1, 2)... and so on
values - values located at those pairs, so 1 will be under (0, 0) coordinate, 2 under (0, 2) and so it goes.
size - total size of the matrix, optional, might be inferred in this case from your input
8 pairs, 8 values, there are also other ways to specify it, but the idea holds.
And a quick check:
print(tensor)
print(tensor.to_dense())
Gives us:
tensor(indices=tensor([[0, 0, 1, 1, 2, 2, 3, 3],
[0, 2, 0, 2, 0, 2, 0, 2]]),
values=tensor([1, 2, 3, 4, 5, 6, 7, 8]),
size=(4, 4), nnz=8, layout=torch.sparse_coo)
tensor([[1, 0, 2, 0],
[3, 0, 4, 0],
[5, 0, 6, 0],
[7, 0, 8, 0]])
Why to initialize
If your actual data is 50% sparse, you shouldn't use COO tensor.
It will save some memory, but operations will be way slower, so keep that in mind.
| https://stackoverflow.com/questions/65813122/ |
How to select indices according to another tensor in pytorch | The task seems to be simple, but I cannot figure out how to do it.
So what I have are two tensors:
an indices tensor indices with shape (2, 5, 2), where the last dimensions corresponds to indices in x and y dimension
a "value tensor" value with shape (2, 5, 2, 16, 16), where I want the last two dimensions to be selected with x and y indices
To be more concrete, the indices are between 0 and 15 and I want to get an output:
out = value[:, :, :, x_indices, y_indices]
The shape of the output should therefore be of (2, 5, 2). Can anybody help me here? Thanks a lot!
Edit:
I tried the suggestion with gather, but unfortunately it does not seem to work (I changed the dimensions, but it doesn't matter):
First I generate a coordinate grid:
y_t = torch.linspace(-1., 1., 16, device='cpu').reshape(16, 1).repeat(1, 16).unsqueeze(-1)
x_t = torch.linspace(-1., 1., 16, device='cpu').reshape(1, 16).repeat(16, 1).unsqueeze(-1)
grid = torch.cat((y_t, x_t), dim=-1).permute(2, 0, 1).unsqueeze(0)
grid = grid.unsqueeze(1).repeat(1, 3, 1, 1, 1)
In the next step, I am creating some indices. In this case, I always take index 1:
indices = torch.ones([1, 3, 2], dtype=torch.int64)
Next, I am using your method:
indices = indices.unsqueeze(-1).unsqueeze(-1)
new_coords = torch.gather(grid, -1, indices).squeeze(-1).squeeze(-1)
Finally, I manually select index 1 for x and y coordinate:
new_coords_manual = grid[:, :, :, 1, 1]
This outputs the following new coordinates:
new_coords
tensor([[[-1.0000, -0.8667],
[-1.0000, -0.8667],
[-1.0000, -0.8667]]])
new_coords_manual
tensor([[[-0.8667, -0.8667],
[-0.8667, -0.8667],
[-0.8667, -0.8667]]])
As you can see, it only works for one dimension. Do you have an idea how to fix that?
| What you could do is flatten the first three axes together and apply torch.gather:
>>> grid.flatten(start_dim=0, end_dim=2).shape
torch.Size([6, 16, 16])
>>> torch.gather(grid.flatten(0, 2), dim=1, index=indices)
tensor([[[-0.8667, -0.8667],
[-0.8667, -0.8667],
[-0.8667, -0.8667]]])
As explained on the documentation page, this will perform:
out[i][j][k] = input[i][index[i][j][k]][k] # if dim == 1
| https://stackoverflow.com/questions/65815668/ |
RuntimeError: Input tensor at index 3 has invalid shape [2, 2, 16, 128, 64] but expected [2, 4, 16, 128, 64] | Runtime error while finetuning a pretrained GPT2-medium model using Huggingface library in SageMaker - ml.p3.8xlarge instance.
The finetuning_gpt2_script.py contains the below,
Libraries:
from transformers import Trainer, TrainingArguments
from transformers import EarlyStoppingCallback
from transformers import GPT2LMHeadModel, GPT2Tokenizer
from transformers import TextDataset,DataCollatorForLanguageModeling
Pretrained Models:
gpt2_model = GPT2LMHeadModel.from_pretrained("gpt2-medium")
gpt2_tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
Train and Test Data Construction:
train_dataset = TextDataset(
tokenizer=gpt2_tokenizer,
file_path=train_path,
block_size=128)
test_dataset = TextDataset(
tokenizer=gpt2_tokenizer,
file_path=test_path,
block_size=128)
data_collator = DataCollatorForLanguageModeling(
tokenizer=gpt2_tokenizer, mlm=False,
)
train_path & test_path are unstructured text data files with 1.45 million and 200K lines of data respectively
Training arguments:
training_args = TrainingArguments(
output_dir="./gpt2-finetuned-models", #The output directory
overwrite_output_dir=True, #overwrite the content of the output directory
num_train_epochs=1, # number of training epochs
per_device_train_batch_size=8, # batch size for training #32
per_device_eval_batch_size=8, # batch size for evaluation #64
save_steps=100, # after # steps model is saved
warmup_steps=500,# number of warmup steps for learning rate scheduler
prediction_loss_only=True,
metric_for_best_model = "eval_loss",
load_best_model_at_end = True,
evaluation_strategy="epoch",
)
training_args are the training arguments constructed to train the model.
Trainer:
trainer = Trainer(
model=gpt2_model,
args=training_args,
data_collator=data_collator,
train_dataset=train_dataset,
eval_dataset=test_dataset,
callbacks = [early_stop_callback],
)
early_stop_callback = EarlyStoppingCallback(early_stopping_patience = 3)
Training:
trainer.train()
trainer.save_model(model_path)
Here, the training is done for only 1 epoch in 4 GPUS using ml.p3.8xlarge instance.
The training is done by torch-distribution like below,
python -m torch.distributed.launch finetuning_gpt2_script.py
While training at the end of the epoch, observed the below error,
RuntimeError: Input tensor at index 3 has invalid shape [2, 2, 16, 128, 64] but expected [2, 4, 16, 128, 64]
Is the RuntimeError because of the way the train_dataset and test_dataset are constructed using TextDataset?
Am I doing something wrong in the torch distributed launch?
| It could be related to a mismatch in the batch size (expecting a batch size of 4 but receiving a batch size of 2), as suggested here? The solution provided is to set the parameter drop_last in your DataLoader like this:
train_text = DataLoader(train_dataset, batch_size=args.batch_size, shuffle=True, drop_last=True)
| https://stackoverflow.com/questions/65822014/ |
Pytorch Global Pruning is not reducing the size of the model | I am trying to Prune my Deep Learning model via Global Pruning. The original UnPruned model is about 77.5 MB. However after pruning, when I am saving the model, the size of the model is the same as the original. Can anyone help me with this issue?
Below is the Pruning code:-
import torch.nn.utils.prune as prune
parameters_to_prune = (
(model.encoder[0], 'weight'),
(model.up_conv1[0], 'weight'),
(model.up_conv2[0], 'weight'),
(model.up_conv3[0], 'weight'),
)
print(parameters_to_prune)
prune.global_unstructured(
parameters_to_prune,
pruning_method=prune.L1Unstructured,
amount=0.2,
)
print(
"Sparsity in Encoder.weight: {:.2f}%".format(
100. * float(torch.sum(model.encoder[0].weight == 0))
/ float(model.encoder[0].weight.nelement())
)
)
print(
"Sparsity in up_conv1.weight: {:.2f}%".format(
100. * float(torch.sum(model.up_conv1[0].weight == 0))
/ float(model.up_conv1[0].weight.nelement())
)
)
print(
"Sparsity in up_conv2.weight: {:.2f}%".format(
100. * float(torch.sum(model.up_conv2[0].weight == 0))
/ float(model.up_conv2[0].weight.nelement())
)
)
print(
"Sparsity in up_conv3.weight: {:.2f}%".format(
100. * float(torch.sum(model.up_conv3[0].weight == 0))
/ float(model.up_conv3[0].weight.nelement())
)
)
print(
"Global sparsity: {:.2f}%".format(
100. * float(
torch.sum(model.encoder[0].weight == 0)
+ torch.sum(model.up_conv1[0].weight == 0)
+ torch.sum(model.up_conv2[0].weight == 0)
+ torch.sum(model.up_conv3[0].weight == 0)
)
/ float(
model.encoder[0].weight.nelement()
+ model.up_conv1[0].weight.nelement()
+ model.up_conv2[0].weight.nelement()
+ model.up_conv3[0].weight.nelement()
)
)
)
**Setting Pruning to Permanent**
prune.remove(model.encoder[0], "weight")
prune.remove(model.up_conv1[0], "weight")
prune.remove(model.up_conv2[0], "weight")
prune.remove(model.up_conv3[0], "weight")
**Saving the model**
PATH = "C:\PrunedNet.pt"
torch.save(model.state_dict(), PATH)
| Pruning won't change the model size if applied like this.
If you have a tensor, say something like:
[1., 2., 3., 4., 5., 6., 7., 8.]
And you prune 50% of data, so for example this:
[1., 2., 0., 4., 0., 6., 0., 0.]
You will still have 8 float values and their size will be the same.
When does pruning reduce model size?
When we save weights in a sparse format, and the tensor has high sparsity (e.g. only ~10% non-zero elements)
When we actually remove something (like a kernel from Conv2d, which could be removed if its weights are zero or negligible)
Otherwise it's not going to work. Check out some related projects that would allow you to do it without coding it in on your own, for example Torch-Pruning.
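A small sketch of the first point: the saved file only shrinks when the pruned weights are stored in a sparse format and the sparsity is high (the sizes below are illustrative):
import io
import torch

def serialized_size(obj):
    buf = io.BytesIO()
    torch.save(obj, buf)
    return buf.tell()

w = torch.randn(512, 512)
w[torch.rand_like(w) < 0.9] = 0.0            # ~90% zeros, as after aggressive pruning

print(serialized_size(w))                    # dense storage: unaffected by the zeros
print(serialized_size(w.to_sparse()))        # COO storage: smaller because ~90% is zero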
| https://stackoverflow.com/questions/65827031/ |
Pytorch/cuda : CPU error and map_location | I wrote this code to load my model:
args = parser.parse_args()
use_cuda = torch.cuda.is_available()
state_dict = torch.load(args.model)
model = Net()
model.load_state_dict(state_dict)
model.eval()
if use_cuda:
print('Using GPU')
model.cuda()
else:
print('Using CPU')
But my terminal returns the following error RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
So then I tried the following, without really understanding it too well:
args = parser.parse_args()
map_location=torch.device('cpu')
state_dict = torch.load(args.model)
model = Net()
model.load_state_dict(state_dict)
model.eval()
But I still get the same error. Could you please tell me how to correct it? (I actually want to load my model on my CPU.)
| I'm assuming you saved the model on a computer with a GPU and are now loading it on a computer without one, or maybe for some reason the GPU is not available. Also, which line is causing the error?
The parameter map_location needs to be set inside torch.load. Like this:
state_dict = torch.load(args.model, map_location='cpu')
or
map_location=torch.device('cpu')
state_dict = torch.load(args.model, map_location=map_location)
Notice that you need to send the map_location variable to the torch.load function.
| https://stackoverflow.com/questions/65842425/ |
Implementations and strategies for fast 2D interpolation from irregularly spaced points | Given a large (~10 million) number of irregularly spaced points in two dimensions, where each point has some intensity ("weight") associated with it, what existing python implementations are there for interpolating the value at:
a specific point at some random position (i.e. point = (0.5, 0.8))
a large number of points at random positions (i.e. points = np.random.random((1_000_000, 2)))
a regular grid at integer positions (i.e. np.indices((1000, 1000)).T)
I am aware that Delaunay triangulation is often used for this purpose. Are there alternatives to doing it this way?
Do any solutions take advantage of multiple CPU cores or GPUs?
As an example, here is an approach using scipy's LinearNDInterpolator. It does not appear to use more than one CPU core.
There are also other options in scipy, but with this question I am especially interested in hearing about other solutions than the ones in scipy.
# The %time tags are IPython magic functions that time that specific line
dimension_shape = (1000, 1000) # we spread the random [0-1] over [0-1000] to avoid floating point errors
N_points = dimension_shape[0] * dimension_shape[1]
known_points = np.random.random((N_points, 2)) * dimension_shape
known_weights = np.random.random((N_points,))
unknown_point = (0.5, 0.8)
unknown_points = np.random.random((N_points, 2)) * dimension_shape
unknown_grid = np.indices(dimension_shape, dtype=float).T.reshape((-1, 2)) # reshape to a list of 2D points
%time tesselation = Delaunay(known_points) # create grid to know neighbours # 6 sec
%time interp_func = LinearNDInterpolator(tesselation, known_weights) # 1 ms
%time interp_func(unknown_point) # 2 sec # run it once because the scipy function needs to compile
%time interp_func(unknown_point) # ~ns
%time interp_func(unknown_grid) # 400 ms
%time interp_func(unknown_points) # 1 min 13 sec
# Below I sort the above `unknown_points` array, and try again
%time ind = np.lexsort(np.transpose(unknown_points)[::-1]) # 306 ms
unknown_points_sorted = unknown_points[ind].copy()
%time interp_func(unknown_points_sorted) # 19 sec <- much less than 1 min!
In the above code, things that take an appreciable amount of time are the construction of the Delaunay grid, and interpolation on a non-regular grid of points. Note that sorting the non-regular points first results in a significant speed improvement!
Do not feel the need to give a complete answer from the start. Tackling any aspect of the above is welcome.
| Scipy is pretty good and I don't think that there are better solutions in Python, but I can add a couple things that might be helpful to you. First off, your idea of sorting the points is a really good one. The so-called "incremental algorithms" build the Delaunay by inserting vertices one at a time. The first step in inserting a vertex in an existing mesh is to figure out which triangle in the mesh to insert it into. To speed things up, some algorithms start the search right at the point where the most recent insertion occurred. So if your points are ordered so that each point inserted is relatively close to the previous one, the search is much faster. If you want more details, you can look up the "Lawson's Walk" algorithm. In my own implementation of the Delaunay (which is in Java, so I'm afraid it won't help you), I have a sort based on the Hilbert space-filling curve. the Hilbert sort works great. But even just sorting by x/y coordinates is a help.
In terms of whether there are other ways to interpolate without using the Delaunay... You could try something using Inverse-Distance-Weighting (IDW). IDW techniques don't require the Delaunay, but they do require some way to figure out which vertices are close to the point for which you wish to interpolate. I've played with dividing my coordinate space into uniformly spaced bins, storing the vertices in the appropriate bins, and then just pulling up the points I need for an interpolation by looking at the neighboring bins. It may be a lot of coding, but it will be reasonably fast and use less memory than the Delaunay
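A rough sketch of that binned inverse-distance-weighting idea (not a drop-in replacement for the Delaunay interpolator; the cell size and power below are arbitrary choices):
import numpy as np

def idw_binned(known_xy, known_w, query_xy, cell=1.0, power=2.0, eps=1e-12):
    # hash every known point into a uniform grid cell
    bins = {}
    for p, w in zip(known_xy, known_w):
        bins.setdefault((int(p[0] // cell), int(p[1] // cell)), []).append((p, w))

    out = np.empty(len(query_xy))
    for k, q in enumerate(query_xy):
        ci, cj = int(q[0] // cell), int(q[1] // cell)
        num = den = 0.0
        for di in (-1, 0, 1):              # look only at the 3x3 neighbouring cells
            for dj in (-1, 0, 1):
                for p, w in bins.get((ci + di, cj + dj), ()):
                    d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
                    wt = 1.0 / (d2 ** (power / 2) + eps)
                    num += wt * w
                    den += wt
        out[k] = num / den if den > 0 else np.nan
    return out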
| https://stackoverflow.com/questions/65847051/ |
RuntimeError: stack expects each tensor to be equal size | I apologize in advance if this was asked before. I genuinely did not understand the solution.
MAX_LEN = 160
BATCH_SIZE = 16
EPOCHS = 10
class GPReviewDataset(data.Dataset):
def __init__(self, review, target, tokenizer, max_len):
self.review = review
self.target = target
self.tokenizer = tokenizer
self.max_len = max_len
def __len__(self):
return len(self.review)
def __getitem__(self, item):
review = str(self.review[item])
encoding = tokenizer.encode_plus(text=review,
max_length=self.max_len,
add_special_tokens=True, padding='max_length',
return_attention_mask=True,
return_token_type_ids=False, return_tensors='pt')
return {'review': review,
'input_ids': encoding['input_ids'].flatten(),
'attention_mask': encoding['attention_mask'].flatten(),
'targets': torch.tensor(self.target[item], dtype=torch.long)}
free_df_train, free_df_test = train_test_split(free_df, test_size=0.2)
free_df_val, free_df_test = train_test_split(free_df_test, test_size=0.5)
def create_data_loader(df, tokenizer, max_len, batch_size):
ds = GPReviewDataset(review=df.content.to_numpy(),
target=df['score'].to_numpy(),
tokenizer=tokenizer,
max_len=max_len)
return data.DataLoader(ds, batch_size=batch_size,
num_workers=0)
train_data_loader = create_data_loader(free_df_train, tokenizer, MAX_LEN, BATCH_SIZE)
val_data_loader = create_data_loader(free_df_val, tokenizer, MAX_LEN, BATCH_SIZE)
test_data_loader = create_data_loader(free_df_test, tokenizer, MAX_LEN, BATCH_SIZE)
data = next(iter(train_data_loader))
I wrote a function later on to take in train_data_loader when training the data, but it was giving me the Runtime error. It seems like the proper solution is to use some sort of collate_fn; however I am confused on how exactly to apply that function.
My error below:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<timed exec> in <module>
<ipython-input-26-8ba1e19dd195> in train_epoch(model, data_loader, loss_fn, optimizer, device, scheduler, n_examples)
4 correct_predictions = 0
5
----> 6 for i in data_loader:
7 input_ids = i['input_ids'].to(device)
8 attention_mask = i['attention_mask'].to(device)
~\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py in __next__(self)
361
362 def __next__(self):
--> 363 data = self._next_data()
364 self._num_yielded += 1
365 if self._dataset_kind == _DatasetKind.Iterable and \
~\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py in _next_data(self)
401 def _next_data(self):
402 index = self._next_index() # may raise StopIteration
--> 403 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
404 if self._pin_memory:
405 data = _utils.pin_memory.pin_memory(data)
~\Anaconda3\lib\site-packages\torch\utils\data\_utils\fetch.py in fetch(self, possibly_batched_index)
45 else:
46 data = self.dataset[possibly_batched_index]
---> 47 return self.collate_fn(data)
~\Anaconda3\lib\site-packages\torch\utils\data\_utils\collate.py in default_collate(batch)
72 return batch
73 elif isinstance(elem, container_abcs.Mapping):
---> 74 return {key: default_collate([d[key] for d in batch]) for key in elem}
75 elif isinstance(elem, tuple) and hasattr(elem, '_fields'): # namedtuple
76 return elem_type(*(default_collate(samples) for samples in zip(*batch)))
~\Anaconda3\lib\site-packages\torch\utils\data\_utils\collate.py in <dictcomp>(.0)
72 return batch
73 elif isinstance(elem, container_abcs.Mapping):
---> 74 return {key: default_collate([d[key] for d in batch]) for key in elem}
75 elif isinstance(elem, tuple) and hasattr(elem, '_fields'): # namedtuple
76 return elem_type(*(default_collate(samples) for samples in zip(*batch)))
~\Anaconda3\lib\site-packages\torch\utils\data\_utils\collate.py in default_collate(batch)
53 storage = elem.storage()._new_shared(numel)
54 out = elem.new(storage)
---> 55 return torch.stack(batch, 0, out=out)
56 elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \
57 and elem_type.__name__ != 'string_':
RuntimeError: stack expects each tensor to be equal size, but got [160] at entry 0 and [376] at entry 5
| Try adding padding='max_length' to your encode_plus call instead, so that every sequence comes back padded to the same length.
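As a rough sketch of what that call could look like inside __getitem__ (the truncation=True flag is my own addition to also cap reviews longer than max_len):
encoding = tokenizer.encode_plus(
    text=review,
    max_length=self.max_len,
    add_special_tokens=True,
    padding='max_length',    # pad every sample up to max_len
    truncation=True,         # cut longer reviews down to max_len
    return_attention_mask=True,
    return_token_type_ids=False,
    return_tensors='pt')
With this, every 'input_ids' tensor has length max_len, so the default collate function can stack them into a batch.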
| https://stackoverflow.com/questions/65851195/ |
PyTorch how to do gathers over multiple dimensions | I'm trying to find a way to do this without for loops.
Say I have a multi-dimensional tensor t0:
bs = 4
seq = 10
v = 16
t0 = torch.rand((bs, seq, v))
This has shape: torch.Size([4, 10, 16])
I have another tensor labels that is a batch of 5 random indices in the seq dimension:
labels = torch.randint(0, seq, size=[bs, sample])
So this has shape torch.Size([4, 5]). This is used to index the seq dimension of t0.
What I want to do is loop over the batch dimension doing gathers using labels tensor. My brute force solution is this:
t1 = torch.empty((bs, sample, v))
for b in range(bs):
for idx0, idx1 in enumerate(labels[b]):
t1[b, idx0, :] = t0[b, idx1, :]
Resulting in tensor t1 which has shape: torch.Size([4, 5, 16])
Is there a more idiomatic way of doing this in pytorch?
| You can use fancy indexing here to select the desired portion of the tensor.
Essentially, if you generate the index arrays conveying your access pattern beforehand, you can directly use them to extract some slice of the tensor. The shape of the index arrays for each dimension should be same as that of the output tensor or slice you want to extract.
i = torch.arange(bs).reshape(bs, 1, 1) # shape = [bs, 1, 1]
j = labels.reshape(bs, sample, 1) # shape = [bs, sample, 1]
k = torch.arange(v) # shape = [v, ]
# Get result as
t1 = t0[i, j, k]
Note the shapes of the above 3 tensors. Broadcasting appends extra dimensions at the front of a tensor, thus essentially reshaping k to the [1, 1, v] shape, which makes all 3 of them compatible for elementwise operations.
After broadcasting (i, j, k) together will produce 3 [bs, sample, v] shaped arrays and those will (elementwise) index your original tensor to produce the output tensor t1 of shape [bs, sample, v].
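An alternative sketch that avoids building the i and k index tensors by hand is torch.gather, which should give the same result here:
index = labels.unsqueeze(-1).expand(bs, sample, v)  # shape = [bs, sample, v]
t1 = torch.gather(t0, 1, index)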
| https://stackoverflow.com/questions/65894166/ |
Is there a way to classify a set of data as a whole via Pytorch? | I'm currently dealing with a classification task on a CT dataset. In CT datasets, multiple slices belong to one single patient, while setting up my dataset, I arrange my data as follows:
dataset/0/patient_1/1.png,2.png...
dataset/0/patient_2/1.png,2.png...
I wonder is there a way to let my network to classify by patient instead of by slices?
thank you
| Each slice is a 2D image, while for each patient you have a 3D volume of CT voxels.
If you want to work per-patient, rather than per-slice, you'll need to organize your data to output batches of 3D information (of shape (batch, channel, depth, height, width)) and make your model process 3D information (e.g., using Conv3D instead of Conv2D).
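A minimal sketch of that per-patient setup (the layer sizes and the number of slices are arbitrary assumptions, just to show the 5-D input shape):
import torch
import torch.nn as nn

# a batch of 2 patients, 1 channel, 64 slices of 256x256 each
volumes = torch.rand(2, 1, 64, 256, 256)

model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool3d(1),
    nn.Flatten(),
    nn.Linear(8, 2),  # e.g. 2 diagnostic classes
)
print(model(volumes).shape)  # torch.Size([2, 2]) -> one prediction per patient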
| https://stackoverflow.com/questions/65895418/ |
PyTorch GPU memory leak during inference | I am trying to encode documents sentence-wise with a huggingface transformer module. I'm using the very small google/bert_uncased_L-2_H-128_A-2 pretrained model with the following code:
def pre_encode_wikipedia(model, tokenizer, device, save_path):
document_data_list = []
for iteration, document in enumerate(wikipedia_small['text']):
torch.cuda.empty_cache()
sentence_embeds_per_doc = [torch.randn(128)]
attention_mask_per_doc = [1]
special_tokens_per_doc = [1]
doc_split = nltk.sent_tokenize(document)
doc_tokenized = tokenizer.batch_encode_plus(doc_split, padding='longest', truncation=True, max_length=512, return_tensors='pt')
for key, value in doc_tokenized.items():
doc_tokenized[key] = doc_tokenized[key].to(device)
with torch.no_grad():
doc_encoded = model(**doc_tokenized)
for sentence in doc_encoded['last_hidden_state']:
sentence[0].to('cpu')
sentence_embeds_per_doc.append(sentence[0])
attention_mask_per_doc.append(1)
special_tokens_per_doc.append(0)
sentence_embeds = torch.stack(sentence_embeds_per_doc)
attention_mask = torch.FloatTensor(attention_mask_per_doc)
special_tokens_mask = torch.FloatTensor(special_tokens_per_doc)
document_data = torch.utils.data.TensorDataset(*[sentence_embeds, attention_mask, special_tokens_mask])
torch.save(document_data, f'{save_path}{time.strftime("%Y%m%d-%H%M%S")}{iteration}.pt')
print(f"Document at {iteration} encoded and saved.")
After about 200-300 iterations on my local GTX 1060 3GB I get an error saying that my CUDA memory is full. Running this code on Colab with more GPU RAM gives me a few thousand iterations.
Things I've tried:
Adding torch.cuda.empty_cache() to the start of every iteration to clear out previously held tensors
Wrapping the model in torch.no_grad() to disable the computation graph
Setting model.eval() to disable any stochastic properties that might take up memory
Sending the output straight to CPU in hopes to free up memory
I'm baffled as to why my memory keeps overflowing. I've trained several models of bigger sizes, applying all the standard practices of a training loop (optimizer.zero_grad(), etc.) I've never had this problem. Why does it appear during this seemingly trivial task?
Edit #1
Changing sentence[0].to('cpu') to cpu_sentence = sentence[0].to('cpu') gave me a few thousand iterations before VRAM usage suddenly spiked, causing the run to crash:
| Can you try replacing
sentence[0].to('cpu')
with
cpu_sentence = sentence[0].to('cpu')
See more info here https://pytorch.org/docs/stable/tensors.html#torch.Tensor.to
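The reason this helps is that .to('cpu') is not an in-place operation; it returns a new tensor, so the original line threw the CPU copy away and kept appending the CUDA tensors, keeping them alive on the GPU. A sketch of the loop body with the fix applied:
for sentence in doc_encoded['last_hidden_state']:
    cpu_sentence = sentence[0].to('cpu')  # keep the CPU copy, not the CUDA original
    sentence_embeds_per_doc.append(cpu_sentence)
    attention_mask_per_doc.append(1)
    special_tokens_per_doc.append(0)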
| https://stackoverflow.com/questions/65906965/ |
ModuleNotFoundError: No module named 'torch.nn'; 'torch' is not a package on Mac OS | I am trying to get pytorch to work but I keep getting this error.
ModuleNotFoundError: No module named 'torch.nn'; 'torch' is not a package
I am using a Macbook, i've tried looking at the other answers on here but nothing is working.
import torch
import torchvision
import torchvision.transforms as transforms
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
| Maybe you can check conda list to see if there is PyTorch installed. You should be able to run torch if you had installed PyTorch.
Download link: https://pytorch.org/get-started/locally/
Just remember to install CUDA additionally if you want to use GPU instead of CPU.
| https://stackoverflow.com/questions/65910782/ |
PyTorch RuntimeError: Tensor for argument #1 'self' is on CPU, but expected them to be on GPU | I'm using PyTorch for my Logistic Regression model but whenever I run the model summary I get an error
RuntimeError: Tensor for 'out' is on CPU, Tensor for argument #1 'self' is on CPU, but expected them to be on GPU (while checking arguments for addmm)
Code
# Convert data to tensors
X_train = torch.Tensor(X_train)
y_train = torch.LongTensor(y_train)
X_test = torch.Tensor(X_test)
y_test = torch.LongTensor(y_test)
class LogisticRegression(nn.Module):
def __init__(self, input_features, num_classes):
super(LogisticRegression, self).__init__()
self.fc1 = nn.Linear(input_dim, num_classes)
def forward(self, x_in, apply_softmax = False):
y_pred = self.fc1(x_in)
if apply_softmax:
y_pred = F.softmax(y_pred, dim = 1)
return y_pred
INPUT_DIM = X_train.shape[1]
NUM_CLASSES = len(y_train.unique())
model = LogisticRegression(input_features = INPUT_DIM, num_classes = NUM_CLASSES)
print(model.named_parameters)
summary(model, input_size=(INPUT_DIM,))
My way does not work as expected, how do I go about fixing the problem?
| I had the same error.
RuntimeError: Tensor for 'out' is on CPU, Tensor for argument #1 'self' is on CPU, but expected them to be on GPU (while checking arguments for addmm)
Ensuring the model and its weights were on the GPU helped:
model.to(device)
where device is defined:
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using {device} device")
| https://stackoverflow.com/questions/65914706/ |
Pytorch already installed using Conda but fails when called | I am trying to install pytorch for using BERT but when following the installation instructions found here: https://pytorch.org/get-started/locally/ I am getting an error.
When I try to initialise the BERT model I get the following error:
ImportError:
BertForSequenceClassification requires the PyTorch library but it was not found in your environment.
Checkout the instructions on theinstallation page: https://pytorch.org/get-started/locally/
and follow the ones that match your environment.
I have followed the instructions and run the following command line in my Conda prompt terminal AND in my current working directory:
conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
It looks like it completes, but when I try to call the following line I get the same error as at the start, as if it hasn't installed at all.
Can anyone help me out please.
EDIT:
The code I am using to execute bert is:
model = BertForSequenceClassification.from_pretrained(r'C:\Users\441\bert\pytorch_model.bin', config = r'C:\Users\441\bert\config.json')
| I had the same issue (same error msg), and after using conda list | grep torch I also found it is there. What worked for me is that I restarted the jupyter notebook kernel and the error is gone.
| https://stackoverflow.com/questions/65921244/ |
Torch model forward with a different image size | I am testing some well known models for computer vision: UNet, FC-DenseNet103, this implementation
I train them with 224x224 randomly cropped patches and do the same on the validation set.
Now when I run inference on some videos, I pass it the frames directly (1280x640) and it works. It runs the same operations on different image sizes and never gives an error. It actually gives a nice output, but the quality of the output depends on the image size...
Now it's been a long time since I've worked with neural nets but when I was using tensorflow I remember I had to crop the input images to the train crop size.
Why don't I need to do this anymore? What's happening under the hood?
| It seems that the models that you are using have no linear layers. Because of this the output of the convolutional layers go straight into the softmax function. The softmax function doesn't take a specific shape for its input so it can take any shape as input. Because of this your model will work with any shape of image but the accuracy of your model will probably be far worse given different image shapes than the one you trained on.
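A tiny sketch of why a purely convolutional model accepts any spatial size; the same layer simply produces a differently sized feature map:
import torch

conv = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)
print(conv(torch.rand(1, 3, 224, 224)).shape)   # torch.Size([1, 8, 224, 224])
print(conv(torch.rand(1, 3, 640, 1280)).shape)  # torch.Size([1, 8, 640, 1280])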
| https://stackoverflow.com/questions/65933454/ |
all pairwise dot product pytorch | Is there a built-in function to efficiently calculate all pairwise dot products of two tensors in PyTorch?
e.g.
input - tensor A (shape NxD)
tensor B (shape NxD)
output - tensor C (shape NxN) such that C_i,j = torch.dot(A_i, B_j) ?
| Isn't it simply
C = torch.mm(A, B.T) # same as C = A @ B.T
BTW,
A very flexible tool for matrix/vector/tensor dot products is torch.einsum:
C = torch.einsum('id,jd->ij', A, B)
| https://stackoverflow.com/questions/65935952/ |
Is there a way to figure out whether PyTorch model is on cpu or on the device? | I would like to figure out, whether the PyTorch model is on cpu or cuda in order to
initialize some other variable as Torch.Tensor or Torch.cuda.Tensor depending on the model.
However, looking at the output of the dir() function I see only the .cpu(), .cuda(), and .to() methods, which move the model to a device (CPU, GPU, or whatever device is specified in to()). For a PyTorch tensor there is an is_cuda attribute, but there is no analogue for the whole model.
Is there some way to deduce this for a model, or does one need to refer to a particular weight?
| No, there is no such function for nn.Module, I believe this is because parameters could be on multiple devices at the same time.
If you're working with a single device, a workaround is to check the first parameter:
next(model.parameters()).is_cuda
As described here.
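If you also need to know which device exactly (rather than just whether it is CUDA), each parameter carries a .device attribute, e.g.:
device = next(model.parameters()).device
print(device)  # e.g. cuda:0 or cpu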
| https://stackoverflow.com/questions/65941179/ |
difference between a[:,:,0] and a[:][:][0] | Hi, I was studying slicing in Python and I found something strange that I don't understand.
import torch
a = torch.tensor([
[
[1, 2, 3],
[4, 5, 6]
],
[
[7, 2, 3],
[8, 5, 6]
]
])
>>> a[:][:][0]
tensor([[1, 2, 3],
[4, 5, 6]])
>>> a[:,:,0]
tensor([[1, 4],
[7, 8]])
I tried to pull out [[1, 4], [7, 8]] from the corresponding torch tensor, so I entered a[:][:][0], and the result was [[1, 2, 3], [4, 5, 6]]. Then, when I input a[:,:,0], [[1, 4], [7, 8]] appeared.
I thought there was no difference between them, but they produce different results.
In the torch and numpy operators there are operations like a[:,0]. How exactly is it different from a[:][0]?
| You can see the first, a[:][:][0], as several, chained calls to __getitem__. That means a[:][:][0] is roughly equivalent to this:
b = a[:]
c = b[:]
d = c[0]
Where d is the result. In your case, it returns the same thing as a[0], because a[:] == a.
In contrast, a[:,:,0] will only call __getitem__ once with parameters slice(None), slice(None), 0.
In your case, that's the first slice of your tensor on the third axis.
| https://stackoverflow.com/questions/65945708/ |
How can I solve "torch.utils.ffi is deprecated. Please use cpp extensions instead" without downgrade pytorch version? | When I run the code below it shows me the error.
ImportError: torch.utils.ffi is deprecated. Please use cpp extensions instead.
I have been searching for a solution online. The problem is that the code below works in an old version of torch (0.4.1). I want to know whether it is possible to modify or replace this code so that it works in the new version of PyTorch.
from torch.utils.ffi import _wrap_function
from ._nms import lib as _lib, ffi as _ffi
__all__ = []
def _import_symbols(locals):
for symbol in dir(_lib):
fn = getattr(_lib, symbol)
if callable(fn):
locals[symbol] = _wrap_function(fn, _ffi)
else:
locals[symbol] = fn
__all__.append(symbol)
_import_symbols(locals())
| I am facing the same problem and have just seen some useful information in:
https://pytorch.org/tutorials/advanced/cpp_extension.html
https://pytorch.org/docs/stable/cpp_extension.html
To avoid downgrading the version of PyTorch, you should consider using the following libraries; you can find more details in the links above:
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CppExtension
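As a hedged sketch (the extension and source file names below are made up for illustration), a minimal setup.py built on the C++ extension API looks roughly like this:
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CppExtension

setup(
    name='nms_cpp',                                       # hypothetical package name
    ext_modules=[CppExtension('nms_cpp', ['nms.cpp'])],   # hypothetical source file
    cmdclass={'build_ext': BuildExtension},
)
After building, you import the compiled module directly instead of going through torch.utils.ffi.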
| https://stackoverflow.com/questions/65955378/ |
FastAi Learner changes the dataloader Why do they do it? How is this the right thing to do? | The code below shows that passing a dataloader to a learner changes it. This seems like a very odd behavior. Why is it done this way, what is the logic for the change and how can I turn it off?
More importantly, the dataloader also has the val and test data in it. If the learner goes around changing it then it should be very well documented on what its doing. Nothing is mentioned about changing the data loader in cnn_learner and Learner.
dls = ImageDataLoaders.from_name_func(path, get_image_files(path), valid_pct=0.2,
label_func=is_tb,item_tfms=Resize(224))
x,y=next(iter(dls[0]))
print(x.min(),x.max())
This gives 0 and 1 respectively. However, initiating a learner
learn = cnn_learner(dls, resnet34, metrics=[accuracy],n_out=2,loss_func=CrossEntropyLossFlat())
x,y=next(iter(dls[0]))
print(x.min(),x.max())
I get -2.11 and 2.64 respectively.
| The ImageDataLoaders.from_name_func dataloader shuffles the dataset by default.
You can pass it shuffle_train=False if you don't want to.
| https://stackoverflow.com/questions/65958584/ |
Get hash value of a pytorch architecture? | I would like to automatically check whether a certain architecture has already been trained on a task. My thought is: If I can get a hash value of the architecture and store this value in a .json file, then I can check whether it has already been trained by checking whether the architecture's hash value is in the .json file.
However, I'm not exactly sure what to hash: If I hash the module object, then it will be different each time I run the program since it'll have a different id, because it's in a different memory location. Also, different random initializations will probably cause a different hash value.
Is there a way I can get a hash value that will be the same so long as the module consists of the same layers with the same dimensions?
| I think hashing the string representation of the model might be a solution to your problem.
hash(str(model))
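One caveat: Python's built-in hash() of a string is randomized between interpreter runs (unless PYTHONHASHSEED is fixed), so if the value is going to be stored in a .json file and compared across runs, a deterministic digest is safer, for example:
import hashlib

arch_hash = hashlib.sha256(str(model).encode('utf-8')).hexdigest()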
| https://stackoverflow.com/questions/65964784/ |
PyTorch - Creating Federated CIFAR-10 Dataset | I'm training a neural network (doesn't matter which one) on CIFAR-10 dataset. I'm using Federated Learning:
I have 10 models, each model having access to its own part of the dataset. At every time step, each model makes a step using its own data, and then the global model is an average of the models (this version is based on this, but I tried a lot of options):
def server_aggregate(server_model, client_models):
global_dict = server_model.state_dict()
for k in global_dict.keys():
global_dict[k] = torch.stack([client_models[i].state_dict()[k].float() for i in range(len(client_models))], 0).mean(0)
server_model.load_state_dict(global_dict)
for model in client_models:
model.load_state_dict(server_model.state_dict())
To be specific, each machine only has access to a data corresponding to a single class. I.e. machine 0 has only samples corresponding to class 0, etc. I'm doing it the following way:
def split_into_classes(full_ds, batch_size, num_classes=10):
class2indices = [[] for _ in range(num_classes)]
for i, y in enumerate(full_ds.targets):
class2indices[y].append(i)
datasets = [torch.utils.data.Subset(full_ds, indices) for indices in class2indices]
return [DataLoader(ds, batch_size=batch_size, shuffle=True) for ds in datasets]
Problem. During training, I can see that my federated training loss decreases. However, I never see my test loss/accuracy improve (acc is always around 10%).
Moreover, when I check accuracy on train/test datasets:
For the federated dataset, the accuracy improves.
For the testing dataset, the accuracy doesn't improve.
(Most surprising) for the training dataset, the accuracy doesn't improve. Note that this dataset is essentially the same as federated dataset, but not split into classes. The checking code is the following:
def epoch_summary(model, fed_loaders, true_train_loader, test_loader, frac):
with torch.no_grad():
train_len = 0
train_loss, train_acc = 0, 0
for train_loader in fed_loaders:
cur_loss, cur_acc, cur_len = true_results(model, train_loader, frac)
train_loss += cur_len * cur_loss
train_acc += cur_len * cur_acc
train_len += cur_len
train_loss /= train_len
train_acc /= train_len
true_train_loss, true_train_acc, true_train_len = true_results(model, true_train_loader, frac)
test_loss, test_acc, test_len = true_results(model, test_loader, frac)
print("TrainLoss: {:.4f} TrainAcc: {:.2f} TrueLoss: {:.4f} TrueAcc: {:.2f} TestLoss: {:.4f} TestAcc: {:.2f}".format(
train_loss, train_acc, true_train_loss, true_train_acc, test_loss, test_acc
), flush=True)
The full code can be found here. Things which don't seem to matter:
Model. I got the same problem for Resnet models and for some other models.
How I aggregate the models. I tried using state_dict or directly manipulate model.parameters(), no effect.
How I learn the models. I tried using optim.SGD or directly update param.data -= learning_rate * param.grad, no effect.
Computational graph. I've tried adding .detach().clone() and with torch.no_grad() into all possible places, no effect.
So I'm suspecting that the problem is somehow with the federated data itself (especially given strange accuracy results). What can be a problem?
| 10% on CIFAR-10 is basically random - your model outputs labels at random and gets 10%.
I think the problem lies in your "federated training" strategy: you cannot expect your sub-models to learn anything meaningful when all they see is a single label. This is why training data is shuffled.
Think of it: if each of your sub models learns all weights to be zero apart from the bias vector of the last classification layer that has 1 in the entry corresponding to the class this sub-model sees - the training of each sub model is perfect (it gets it right for all training samples it sees), but the averaged model is meaningless.
| https://stackoverflow.com/questions/65976605/ |
Can I access the inner layer outputs of DeepLab in pytorch? | Using PyTorch, I am trying to implement a network that is using the pre-trained DeepLab ResNet-101.
I found two possible methods for using this network:
this one
or
torchvision.models.segmentation.deeplabv3_resnet101(
pretrained=False, progress=True, num_classes=21, aux_loss=None, **kwargs)
However, I might not only need this network's output, but also several inside layers' outputs.
Is there a way to access the inner layer outputs using one of these methods?
If not - Is it possible to manually copy the trained resnet's parameters so I can manually recreate it and add those outputs myself? (Hopefully the first option is possible so I won't need to do this)
Thanks!
| You can achieve this without too much trouble using forward hooks.
The idea is to loop over the modules of your model, find the layers you're interested in, hook a callback function onto them. When called, those layers will trigger the hook. We will take advantage of this to save the intermediate outputs.
For example, let's say you want to get the outputs of layer classifier.0.convs.3.1:
layers = ['classifier.0.convs.3.1']
activations = {}
def forward_hook(name):
def hook(module, x, y):
activations[name] = y
return hook
for name, module in model.named_modules():
if name in layers:
module.register_forward_hook(forward_hook(name))
*The closure around hook() made by forward_hook's scope is used to enclose the module's name which you wouldn't otherwise have access to at this point.
Everything is ready, we can call the model
>>> model = torchvision.models.segmentation.deeplabv3_resnet101(
pretrained=True, progress=True, num_classes=21, aux_loss=None)
>>> model(torch.rand(16, 3, 100, 100))
And as expected, after inference, activations will have a new entry 'classifier.0.convs.3.1' which - in this case - will contain a tensor of shape (16, 256, 13, 13).
Not so long ago, I wrote an answer about a similar question which goes a little bit more in detail on how hooks can be used to inspect the intermediate output shapes.
| https://stackoverflow.com/questions/65984686/ |
How to strip a pretrained network and add some layers to it using pytorch lightning? | I am trying to use transfer learning for an image segmentation task, and my plan is to use the first few layers of a pretrained model (VGG16 for example) as an encoder and then add my own decoder.
So, I can load the model and see the structure by printing it:
model = torch.hub.load('pytorch/vision:v0.6.0', 'resnet18', pretrained=True)
print(model)
I get like this:
ResNet(
(conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
(layer1): Sequential(
(0): BasicBlock(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): BasicBlock(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
.....
.....
.....
I can also access the specific layers with model.layer3 for example. Now, I am struggling with certain things.
How to cut the model and take every module from the beginning to the end of any layer (model.layer3 for example)?
How to freeze only this stripped part, and keep the newly added modules available for training?
| For 1): Initialize the ResNet in your LightningModule and slice it until the part that you need. Then add your own head after that, and define forward in the order that you need. See this example, based on the transfer learning docs:
import torchvision.models as models
class ImagenetTransferLearning(LightningModule):
def __init__(self):
super().__init__()
# init a pretrained resnet
backbone_tmp = models.resnet50(pretrained=True)
num_filters = backbone_tmp.fc.in_features
layers = list(backbone_tmp.children())[:-1]
self.backbone = nn.Sequential(*layers)
# use the pretrained model to classify cifar-10 (10 image classes)
num_target_classes = 10
self.classifier = nn.Linear(num_filters, num_target_classes)
For 2): Pass a BackboneFinetuning callback to your trainer. This requires that your LightningModule has a self.backbone attribute containing the modules that you want to be frozen, as shown on the snippet above. You can also use the BaseFinetuning callback if you need different freeze-unfreeze behavior.
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import BackboneFinetuning
multiplicative = lambda epoch: 1.5
backbone_finetuning = BackboneFinetuning(200, multiplicative)
trainer = Trainer(callbacks=[backbone_finetuning])
| https://stackoverflow.com/questions/66000358/ |
Unable to load weights from pytorch checkpoint after splitting pytorch_model.bin into chunks | I need to transfer a pytorch_model.bin of a pretrained deeppavlov ruBERT model but I have a file size limit. So I split it into chunks using python, transferred and reassembled in the correct order. However, the size of the file increased, and when I tried to load the resulting file using BertModel.from_pretrained(pytorch_model.bin) I received an error:
During handling of the above exception, another exception occurred:
OSError: Unable to load weights from pytorch checkpoint <...>
So my question is: is it actually possible to split the file like that? I could possibly have a mistake in the way I split and reassemble the file. However, this could also be some version mismatch.
My python code to get chunks:
chunk_size = 40000000
file_num = 1
with open("pytorch_model.bin", "rb") as f:
chunk = f.read(chunk_size)
while chunk:
with open("chunk_" + str(file_num), "wb") as chunk_file:
chunk_file.write(chunk)
file_num += 1
chunk = f.read(chunk_size)
Code to reassemble one file:
chunks = !ls | grep chunk_
chunks = sorted(chunks, key=lambda x: int(x.split("_")[-1]))
for chunk in chunks:
with open(chunk, "rb") as f:
contents = f.read()
if chunk == chunks[0]:
write_mode = "wb"
else:
write_mode = "ab"
with open("pytorch_model.bin", write_mode) as f:
f.write(contents)
python 3.7.0, torch 1.5.1, transformers 4.2.2. I have no way to move files bigger than 40 MB.
TIA for your help!
| For those who are new to this issue, I just figured it out, so this should save you some time.
What is this error about?
When you run the model for the first time it downloads some files (pytorch_model.bin). If your internet connection breaks in the middle of that download, the pipeline keeps running with an incompletely downloaded pytorch_model.bin file, which is what raises this issue.
Steps:
1] Go to C:// Users / UserName / .cache
2] Delete the .cache folder
3] Done. Just run the model once again.
| https://stackoverflow.com/questions/66005027/ |
BERT embeddings in batches | I am following this post to extract embeddings for sentences and for a single sentence the steps are described as follows:
text = "After stealing money from the bank vault, the bank robber was seen " \
"fishing on the Mississippi river bank."
# Add the special tokens.
marked_text = "[CLS] " + text + " [SEP]"
# Split the sentence into tokens.
tokenized_text = tokenizer.tokenize(marked_text)
# Mark each of the 22 tokens as belonging to sentence "1".
segments_ids = [1] * len(tokenized_text)
# Convert inputs to PyTorch tensors
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
# Load pre-trained model (weights)
model = BertModel.from_pretrained('bert-base-uncased',
output_hidden_states = True,
)
# Put the model in "evaluation" mode, meaning feed-forward operation.
model.eval()
with torch.no_grad():
outputs = model(tokens_tensor, segments_tensors)
hidden_states = outputs[2]
And I want to do this for a batch of sequences. Here is my example code:
seql = ['this is an example', 'today was sunny and', 'today was']
encoded = [tokenizer.encode(seq, max_length=5, pad_to_max_length=True) for seq in seql]
encoded
[[2, 2511, 1840, 3251, 3],
[2, 1663, 2541, 1957, 3],
[2, 1663, 2541, 3, 0]]
But since I'm working with batches, sequences need to have same length. So I introduce a padding token (3rd sentence) which confuses me about several points:
What should the segment id for pad_token (0) will be?
Should I use attention masking when feeding the tensors to the model so that padding is ignored? In the example only token and segment tensors are used.
outputs = model(tokens_tensor, segments_tensors)
If I don't work with batches but with individual sentences, then I might not need a padding token. Would it be better to do that compared to batches?
| You can do all the work you need (padding, truncation) using one function:
encode_plus
check the parameters: the docs
You can do the same with a list of sequences using:
batch_encode_plus
docs
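A hedged sketch of encoding the whole batch at once and passing the attention mask to the model so padded positions are ignored (assuming the model was loaded with output_hidden_states=True as in the question):
seql = ['this is an example', 'today was sunny and', 'today was']
encoded = tokenizer.batch_encode_plus(
    seql,
    padding=True,        # pad to the longest sequence in this batch
    truncation=True,
    max_length=160,
    return_tensors='pt')

with torch.no_grad():
    outputs = model(encoded['input_ids'],
                    attention_mask=encoded['attention_mask'])
hidden_states = outputs[2]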
| https://stackoverflow.com/questions/66013380/ |
torch matmul two matrix row by row | I want to find a decent way to write the function below in torch. A clean solution to complete this would be appreciated.
import torch
a=torch.randn(3,100)
b=torch.randn(3,100)
row_num = a.size()[0] # 3
# Given two matrix with shape (n1,n2)
# I want to have the row-wise `matmul` results which will result in a tensor with size (n1, )
scores = []
for i in range(row_num):
score_i = a[i,:].matmul(b[i,:])
scores.append(score_i)
expected_result = torch.tensor(scores)
| The operation you are trying to do is essentially the values of a dot product (matmul, a @ b.T) which lie on its diagonal.
You can get the same using torch.matmul or @ operator between a and b.T and then get the torch.diagonal -
torch.diagonal(a @ b.T)
You can also use torch.einsum directly to get the same result -
torch.einsum('ij,ij->i',a,b)
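An equivalent, more memory-friendly variant avoids forming the full N x N product and just multiplies element-wise and sums each row:
expected_result = (a * b).sum(dim=1)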
| https://stackoverflow.com/questions/66015132/ |
How pytorch implements back propagation from the output layer to the input layer | I am having difficulty implementing the following functions.
Assuming that we have trained a network model, I want to backpropagate from the output layer to the input layer (not just the first layer) to obtain new input data. I want to know if there is a function in PyTorch, or another existing function, that can achieve this; I did not find a relevant function in the PyTorch tutorials.
| If you want the gradient w.r.t. the input, you can simply get it from .grad:
x.requires_grad_(True) # explicitly ask pytorch to estimate the gradient w.r.t x
# forward pass:
pred = model(x) # make a prediction
loss = criterion(pred, y) # compute the loss
# backward pass - compute gradients:
loss.backward()
# now you have access to the gradient of loss w.r.t the input:
x_grad = x.grad
If you are interested in inspecting gradients of specific layers, you'll need to use hooks, for example:
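A minimal sketch of such a hook (fc1 here is a placeholder for whatever layer of your model you care about):
grads = {}

def save_grad(name):
    def hook(module, grad_input, grad_output):
        grads[name] = grad_output[0].detach()
    return hook

model.fc1.register_full_backward_hook(save_grad('fc1'))  # hypothetical layer name

pred = model(x)
loss = criterion(pred, y)
loss.backward()
print(grads['fc1'].shape)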
| https://stackoverflow.com/questions/66020252/ |
How to convert PyTorch tensor to C++ torch::Tensor vice versa? | I want to receive a dictionary that includes a PyTorch tensor in a C++ module using pybind11, and return a result dictionary, with some modifications, that includes a C++ torch::Tensor. As far as I could find, there seems to be no clear way to convert a PyTorch tensor to a C++ tensor, or a C++ tensor back to a PyTorch tensor. As a last trial, I tried to convert a PyObject to torch::Tensor but that does not seem to work either. (https://discuss.pytorch.org/t/is-it-possible-to-get-pyobject-from-a-torch-tensor/85980/2) I want to know if this is correct and whether there are any workarounds. I share my code snippet below.
py::dict quantize(py::dict target) {
...
for (auto item: target) {
py::str key(item.first);
torch::Tensor test = item.second.ptr(); // it fails to compile
}
...
return py::dict("name"_a="test", "tensor"_a=torch::rand({3, 3, 3})); // it fails on runtime
}
| PyObject * THPVariable_Wrap(at::Tensor t);
at::Tensor& THPVariable_Unpack(PyObject* obj);
Those two are what you are looking for, I guess.
| https://stackoverflow.com/questions/66024389/ |
NameError: name 'utils' is not defined in Pytorch | I have pytorch 1.7. The following code is the same as on PyTorch's tutorial page for object detection and finetuning.
But I have error for the following line
data_loader = torch.utils.data.DataLoader(dataset, batch_size=2, shuffle=True, num_workers=4, collate_fn=utils.collate_fn)
as NameError: name 'utils' is not defined
What could be wrong?
The whole code is as follows.
import os
import numpy as np
import torch
from PIL import Image
class PrepareDataset(object):
def __init__(self, root, transforms):
self.root = root
self.transforms = transforms
# load all image files, sorting them to
# ensure that they are aligned
self.imgs = list(sorted(os.listdir(os.path.join(root, "images"))))
self.masks = list(sorted(os.listdir(os.path.join(root, "masks"))))
self.annotations = list(sorted(os.listdir(os.path.join(root, "annotations"))))
def __getitem__(self, idx):
# load images ad masks
img_path = os.path.join(self.root, "images", self.imgs[idx])
mask_path = os.path.join(self.root, "masks", self.masks[idx])
annotation_path = os.path.join(self.root, "annotations", self.annotations[idx])
img = Image.open(img_path).convert("RGB")
# note that we haven't converted the mask to RGB,
# because each color corresponds to a different instance
# with 0 being background
mask = Image.open(mask_path)
# convert the PIL Image into a numpy array
mask = np.array(mask)
# instances are encoded as different colors
obj_ids = np.unique(mask)
# first id is the background, so remove it
obj_ids = obj_ids[1:]
# split the color-encoded mask into a set
# of binary masks
masks = mask == obj_ids[:, None, None]
# get bounding box coordinates for each mask
num_objs = len(obj_ids)
boxes = []
for i in range(num_objs):
pos = np.where(masks[i])
xmin = np.min(pos[1])
xmax = np.max(pos[1])
ymin = np.min(pos[0])
ymax = np.max(pos[0])
boxes.append([xmin, ymin, xmax, ymax])
# convert everything into a torch.Tensor
boxes = torch.as_tensor(boxes, dtype=torch.float32)
# there is only one class
labels = torch.ones((num_objs,), dtype=torch.int64)
masks = torch.as_tensor(masks, dtype=torch.uint8)
image_id = torch.tensor([idx])
area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0])
# suppose all instances are not crowd
iscrowd = torch.zeros((num_objs,), dtype=torch.int64)
target = {}
target["boxes"] = boxes
target["labels"] = labels
target["masks"] = masks
target["image_id"] = image_id
target["area"] = area
target["iscrowd"] = iscrowd
if self.transforms is not None:
img, target = self.transforms(img, target)
return img, target
def __len__(self):
return len(self.imgs)
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
# load a model pre-trained on COCO
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
# replace the classifier with a new one, that has
# num_classes which is user-defined
num_classes = 2 # 1 class (person) + background
# get number of input features for the classifier
in_features = model.roi_heads.box_predictor.cls_score.in_features
# replace the pre-trained head with a new one
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator
# load a pre-trained model for classification and return
# only the features
backbone = torchvision.models.mobilenet_v2(pretrained=True).features
# FasterRCNN needs to know the number of
# output channels in a backbone. For mobilenet_v2, it's 1280
# so we need to add it here
backbone.out_channels = 1280
# let's make the RPN generate 5 x 3 anchors per spatial
# location, with 5 different sizes and 3 different aspect
# ratios. We have a Tuple[Tuple[int]] because each feature
# map could potentially have different sizes and
# aspect ratios
anchor_generator = AnchorGenerator(sizes=((32, 64, 128, 256, 512),), aspect_ratios=((0.5, 1.0, 2.0),))
# let's define what are the feature maps that we will
# use to perform the region of interest cropping, as well as
# the size of the crop after rescaling.
# if your backbone returns a Tensor, featmap_names is expected to
# be [0]. More generally, the backbone should return an
# OrderedDict[Tensor], and in featmap_names you can choose which
# feature maps to use.
roi_pooler = torchvision.ops.MultiScaleRoIAlign(featmap_names=[0], output_size=7, sampling_ratio=2)
# put the pieces together inside a FasterRCNN model
model = FasterRCNN(backbone, num_classes=5, rpn_anchor_generator=anchor_generator, box_roi_pool=roi_pooler)
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor
def get_model_instance_segmentation(num_classes):
# load an instance segmentation model pre-trained on COCO
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
# get number of input features for the classifier
in_features = model.roi_heads.box_predictor.cls_score.in_features
# replace the pre-trained head with a new one
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
# now get the number of input features for the mask classifier
in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
hidden_layer = 256
# and replace the mask predictor with a new one
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask,
hidden_layer,
num_classes)
return model
from torchvision import transforms as T
def get_transform(train):
transforms = []
transforms.append(T.ToTensor())
if train:
transforms.append(T.RandomHorizontalFlip(0.5))
return T.Compose(transforms)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
dataset = PrepareDataset('/home/centos/atic-nyan/Traffic', get_transform(train=True))
data_loader = torch.utils.data.DataLoader(dataset, batch_size=2, shuffle=True, num_workers=4, collate_fn=utils.collate_fn)
# For Training
images,targets = next(iter(data_loader))
images = list(image for image in images)
targets = [{k: v for k, v in t.items()} for t in targets]
output = model(images,targets) # Returns losses and detections
# For inference
model.eval()
x = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]
predictions = model(x) # Returns predictions
from engine import train_one_epoch, evaluate
import utils
def main():
# train on the GPU or on the CPU, if a GPU is not available
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
# our dataset has two classes only - background and person
num_classes = 2
# use our dataset and defined transformations
dataset = PrepareDataset('/home/centos/atic-nyan/Traffic', get_transform(train=True))
dataset_test = PrepareDataset('/home/centos/atic-nyan/Traffic', get_transform(train=False))
# split the dataset in train and test set
indices = torch.randperm(len(dataset)).tolist()
dataset = torch.utils.data.Subset(dataset, indices[:-50])
dataset_test = torch.utils.data.Subset(dataset_test, indices[-50:])
# define training and validation data loaders
data_loader = torch.utils.data.DataLoader(
dataset, batch_size=2, shuffle=True, num_workers=4,
collate_fn=utils.collate_fn)
data_loader_test = torch.utils.data.DataLoader(
dataset_test, batch_size=1, shuffle=False, num_workers=4,
collate_fn=utils.collate_fn)
# get the model using our helper function
model = get_model_instance_segmentation(num_classes)
# move model to the right device
model.to(device)
# construct an optimizer
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.005,
momentum=0.9, weight_decay=0.0005)
# and a learning rate scheduler
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
step_size=3,
gamma=0.1)
# let's train it for 10 epochs
num_epochs = 10
for epoch in range(num_epochs):
# train for one epoch, printing every 10 iterations
train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
# update the learning rate
lr_scheduler.step()
# evaluate on the test dataset
evaluate(model, data_loader_test, device=device)
print("That's it!")
| I just put
def collate_fn(batch):
data_list, label_list = [], []
for _data, _label in batch:
data_list.append(_data)
label_list.append(_label)
return torch.Tensor(data_list), torch.LongTensor(label_list)
in my code and it works.
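For the torchvision detection tutorial specifically, the utils.collate_fn that the tutorial imports is, as far as I remember, just a one-liner that keeps the images and target dicts as tuples instead of trying to stack them:
def collate_fn(batch):
    return tuple(zip(*batch))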
| https://stackoverflow.com/questions/66028727/ |
take the output from a specific layer in pytorch | I have implemented an autoencoder in Pytorch and wish to extract the representations (output) from a specified encoding layer. This setup is similar to making predictions using sub-models that we used to have in Keras.
However, implementing something similar in Pytorch looks a bit challenging. I tried forward hooks as explained in How to get the output from a specific layer from a PyTorch model? and https://pytorch.org/tutorials/beginner/former_torchies/nnft_tutorial.html but to no avail.
Could you help me getting outputs from a specific layer?
I have attached my code below:
class Autoencoder(torch.nn.Module):
# Now defining the encoding and decoding layers.
def __init__(self):
super().__init__()
self.enc1 = torch.nn.Linear(in_features = 784, out_features = 256)
self.enc2 = torch.nn.Linear(in_features = 256, out_features = 128)
self.enc3 = torch.nn.Linear(in_features = 128, out_features = 64)
self.enc4 = torch.nn.Linear(in_features = 64, out_features = 32)
self.enc5 = torch.nn.Linear(in_features = 32, out_features = 16)
self.dec1 = torch.nn.Linear(in_features = 16, out_features = 32)
self.dec2 = torch.nn.Linear(in_features = 32, out_features = 64)
self.dec3 = torch.nn.Linear(in_features = 64, out_features = 128)
self.dec4 = torch.nn.Linear(in_features = 128, out_features = 256)
self.dec5 = torch.nn.Linear(in_features = 256, out_features = 784)
# Now defining the forward propagation step
def forward(self,x):
x = F.relu(self.enc1(x))
x = F.relu(self.enc2(x))
x = F.relu(self.enc3(x))
x = F.relu(self.enc4(x))
x = F.relu(self.enc5(x))
x = F.relu(self.dec1(x))
x = F.relu(self.dec2(x))
x = F.relu(self.dec3(x))
x = F.relu(self.dec4(x))
x = F.relu(self.dec5(x))
return x
autoencoder_network = Autoencoder()
I have to take the output from encoder layers marked enc1, enc2 .., enc5.
| The simplest way is to explicitly return the activations you need:
def forward(self,x):
e1 = F.relu(self.enc1(x))
e2 = F.relu(self.enc2(e1))
e3 = F.relu(self.enc3(e2))
e4 = F.relu(self.enc4(e3))
e5 = F.relu(self.enc5(e4))
x = F.relu(self.dec1(e5))
x = F.relu(self.dec2(x))
x = F.relu(self.dec3(x))
x = F.relu(self.dec4(x))
x = F.relu(self.dec5(x))
return x, e1, e2, e3, e4, e5
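Usage then looks like this; the reconstruction and each encoder activation come back together:
model = Autoencoder()
x = torch.rand(8, 784)
reconstruction, e1, e2, e3, e4, e5 = model(x)
print(e5.shape)  # torch.Size([8, 16])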
| https://stackoverflow.com/questions/66039520/ |
How to create a submodel from a pretrained model in pytorch without having to rewrite the whole architecture? | So, I have been working on neural style transfer in PyTorch, but I'm stuck at the point where we have to run the input image through a limited number of layers and minimize the style loss. Long story short, I want to find a way in PyTorch to evaluate the input at different layers of the architecture (I'm using VGG16). I have seen this problem solved very simply in Keras, but I wanted to see if there is a similar way in PyTorch as well.
from keras.applications.vgg16 import VGG16
model = VGG16()
model = Model(inputs=model.inputs, outputs=model.layers[1].output)
| Of course you can do that:
import torch
import torchvision
pretrained = torchvision.models.vgg16(pretrained=True)
features = pretrained.features
# First 4 layers
model = torch.nn.Sequential(*[features[i] for i in range(4)])
You can always print your model and see how it's structured. If it is torch.nn.Sequential (or part of it is, as above), you can always use this approach.
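Since pretrained.features is itself an nn.Sequential, slicing it directly should work as well (to my knowledge) and gives the same sub-model:
model = pretrained.features[:4]
out = model(torch.rand(1, 3, 224, 224))  # shape (1, 64, 224, 224) for VGG16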
| https://stackoverflow.com/questions/66051641/ |
ValueError: Target size (torch.Size([10, 1])) must be the same as input size (torch.Size([10, 2])) | A binary classification problem with Batch Size = 10. Trying to use torch.nn.BCEWithLogitsLoss().
~\Anaconda3\envs\notebook\lib\site-packages\torch\nn\functional.py in binary_cross_entropy_with_logits(input, target, weight, size_average, reduce, reduction, pos_weight)
2578
2579 if not (target.size() == input.size()):
-> 2580 raise ValueError("Target size ({}) must be the same as input size ({})".format(target.size(), input.size()))
2581
2582 return torch.binary_cross_entropy_with_logits(input, target, weight, pos_weight, reduction_enum)
ValueError: Target size (torch.Size([1, 10])) must be the same as input size (torch.Size([10, 2]))
Here is my training code:
def train(epochs):
print('Starting training..')
for e in range(0, epochs):
exp_lr_scheduler.step()
print('='*20)
print(f'Starting epoch {e + 1}/{epochs}')
print('='*20)
train_loss = 0.
val_loss = 0.
resnet18.train() # set model to training phase
for train_step, (images, labels) in enumerate(dl_train):
optimizer.zero_grad()
outputs = resnet18(images)
outputs = outputs.float()
loss = loss_fn(outputs, labels.unsqueeze(0))
loss.backward()
optimizer.step()
train_loss += loss.item()
if train_step % 20 == 0:
print('Evaluating at step', train_step)
accuracy = 0
resnet18.eval() # set model to eval phase
for val_step, (images, labels) in enumerate(dl_val):
outputs = resnet18(images)
outputs = outputs.float()
loss = loss_fn(outputs, labels.unsqueeze(0))
val_loss += loss.item()
_, preds = torch.max(outputs, 1)
accuracy += sum((preds == labels).numpy())
val_loss /= (val_step + 1)
accuracy = accuracy/len(val_dataset)
print(f'Validation Loss: {val_loss:.4f}, Accuracy: {accuracy:.4f}')
show_preds()
resnet18.train() #set model to training phase
if accuracy >= 0.95:
print('Performance condition satisfied, stopping..')
return
train_loss /= (train_step + 1)
print(f'Training Loss: {train_loss:.4f}')
print('Training complete..')
train(epochs=30)
|
Target size (torch.Size([1, 10])) must be the same as input size (torch.Size([10, 2]))
Seems to me you have two issues:
target size (a.k.a. the ground truth tensor) should have the batch on the first axis: (10, 1) rather than (1, 10).
From what you've described you are dealing with a binary classification task not a multi-label (2-class) classification task. Therefore input size (a.k.a. model's output) should have a shape of (10, 1).
In a binary classification task you should only have a single logit coming out of your model, i.e. your last nn.Linear layer should have a single neuron. The output will define which class has been predicted. Since you are using nn.BCEWithLogitsLoss, the loss input should be the raw output (since it includes a Sigmoid layer, cf. documentation) and should have a shape matching (batch_size=10, 1). Similarly, the target tensor should have the same shape. Its content would be 0s and 1s in shape (batch_size=10, 1).
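A hedged sketch of the shapes that line up for nn.BCEWithLogitsLoss here (assuming the final layer is changed to emit a single logit per sample):
# e.g. resnet18.fc = nn.Linear(resnet18.fc.in_features, 1)
loss_fn = torch.nn.BCEWithLogitsLoss()

outputs = resnet18(images)             # shape (10, 1), raw logits
targets = labels.float().unsqueeze(1)  # shape (10, 1), values 0. or 1.
loss = loss_fn(outputs, targets)

preds = (torch.sigmoid(outputs) > 0.5).long().squeeze(1)  # predicted class per sample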
| https://stackoverflow.com/questions/66053295/ |
torch: minimally pad tensor such that num elements divisible by x | Suppose I have a tensor t of arbitrary ndim
I want to pad (with zeroes) it such that
a) I introduce the fewest possible # elements
b) after padding, (t.numel() % x) == 0
Is there a better algorithm for doing this than
find the largest dimension and increase it by 1 until condition (b) is satisfied?
Maybe working code:
def pad_minimally(t, x):
largest_dim = np.argmax(t.shape)
buffer_shape = list(t.shape)
new_t = t.clone()
print(t.shape)
for n_to_add in range(x):
if new_t.numel() % x == 0:
break
buffer_shape[largest_dim] = n_to_add
new_buffer = torch.zeros(*buffer_shape)
new_t = torch.cat([t, new_buffer], axis=largest_dim)
assert new_t.numel() % x == 0
return new_t
assert pad_minimally(torch.rand(3,1), 7).shape == (7,1)
assert pad_minimally(torch.rand(3,2), 7).shape == (7,2)
assert pad_minimally(torch.rand(3,2, 6), 7).shape == (3,2,7)
| First off, simply adding one to the largest dimension until numel is divisible by x doesn't work in all cases. For example if the shape of t is (3, 2) and x = 9 then we would want to pad t to be (3, 3), not (9, 2).
Even more concerning is that there's no guarantee that only one dimension needs to be padded. For example if t has shape (13, 17, 25) and x = 8 then the optimally padded t would be either (14, 18, 26) or (13, 18, 28).
Distilling this into the mathematics, the problem becomes
Given positive integers s[1], ..., s[D] find positive integers q[1], ..., q[D] that minimize prod(q[i], i=1 to D) subject to the constraints that prod(q[i], i=1 to D) is divisible by x and q[i] >= s[i] for all i=1 to D.
I wasn't able to develop an efficient solution (see update for more efficient solution), though I'm not particularly well versed in non-linear integer programming. Perhaps an efficient solution to this problem exists. If it does I imagine it would involve the prime factors of x and q and/or better memoization. That said, it is possible to solve the problem using an exhaustive search, provided that x and D (i.e. len(t.shape)) are sufficiently small (otherwise the algorithm may run for a really really long time).
The brute force search algorithm I came up with iterates over each multiple of x greater-than or equal to t.numel() and uses depth-first search to see if a padding exists for that multiple. As soon as a valid padding is found the algorithm finishes. The python code for this algorithm is:
import numpy as np
def search(shape, target_numel, memory):
numel = np.prod(shape)
if numel == target_numel:
return True
elif numel < target_numel:
for idx in range(len(shape)):
shape[idx] += 1
if tuple(shape) not in memory:
if search(shape, target_numel, memory):
return True
memory.add(tuple(s for s in shape))
shape[idx] -= 1
return False
def minimal_shape(shape, target_multiple):
shape = [s for s in shape]
target_numel = target_multiple * int(np.ceil(max(1, np.prod(shape)) / target_multiple))
while not search(shape, target_numel, set()):
target_numel += target_multiple
return shape
Once you have the minimal shape, the pad_minimal function can be implemented pretty succinctly as
def pad_minimally(t, x):
new_shape = minimal_shape(t.shape, x)
new_t = t.new_zeros(new_shape)
new_t[[slice(0, s) for s in t.shape]] = t
return new_t
I'm not sure if this will be fast enough for your needs. Hopefully someone else can come along with a more efficient version.
Some test cases for minimal_shape
assert minimal_shape([2, 2], 9) == [3, 3]
assert minimal_shape([2, 8], 6) == [2, 9]
assert minimal_shape([13, 17, 25], 8) in [[14, 18, 26], [13, 18, 28]]
assert minimal_shape([5, 13, 19], 6) == [5, 14, 21]
Update
I asked about this algorithm on CS.SE. Based on the answer I received there and the subsequent update to the question the following is a much more efficient implementation of minimal_shape.
from functools import reduce
from operator import mul
from copy import deepcopy
def prod(x):
return reduce(mul, x, 1)
def argsort(x, reverse=False):
return sorted(range(len(x)), key=lambda idx: x[idx], reverse=reverse)
def divisors(v):
""" does not include 1 """
d = {v} if v > 1 else set()
for n in range(2, int(v**0.5) + 1):
if v % n == 0:
d.add(n)
d.add(v // n)
return d
def update_memory(b, c_rem, memory):
tuple_m = tuple(b + [c_rem])
if tuple_m in memory:
return False
memory.add(tuple_m)
return True
def dfs(a, b, c, c_rem, memory, p_best=float('inf'), b_best=None):
ab = [ai + bi for ai, bi in zip(a, b)]
p = prod(ab)
if p >= p_best:
return p_best, b_best
elif p % c == 0:
return p, deepcopy(b)
dc = divisors(c_rem)
for i in argsort(ab):
for d in dc:
db = (d - ab[i]) % d
b[i] += db
if update_memory(b, c_rem // d, memory):
p_best, b_best = dfs(a, b, c, c_rem // d, memory, p_best, b_best)
b[i] -= db
return p_best, b_best
def minimal_shape(shape, target_multiple):
a = list(shape)
b = [0 for _ in range(len(a))]
c = target_multiple
_, b = dfs(a, b, c, c, set())
return [ai + bi for ai, bi in zip(a, b)]
| https://stackoverflow.com/questions/66055262/ |
compute accuracy of Band RNN | So I am trying to figure out how to compute the accuracy of a BandRNN.
BandRnn is a diagonalRNN model with a different number of connections per neuron. For example:
here C is the number of connections per neuron.
My current model training is as follows:
model = ModelLSTM(m, k).to(device)
model.train()
opt = torch.optim.Adam(model.parameters(), lr=args.lr)
best_test = 1e7
best_validation = 1e7
for ep in range(1, args.epochs + 1):
init_time = datetime.now()
processed = 0
step = 1
for batch_idx, (batch_x, batch_y, len_batch) in enumerate(train_loader):
batch_x, batch_y, len_batch = batch_x.to(device), batch_y.to(device), len_batch.to(device)
opt.zero_grad()
logits = model(batch_x)
loss = model.loss(logits, batch_y, len_batch)
acc = sum(logits == batch_y) * 1.0 / len(logits)
print(acc)
loss.backward()
if args.clip > 0:
nn.utils.clip_grad_norm_(model.parameters(), args.clip)
opt.step()
processed += len(batch_x)
step += 1
print(" batch_idx {}\tLoss: {:.2f} ".format(batch_idx, loss))
print("Epoch {}, LR {:.5f} \tLoss: {:.2f} ".format(ep, opt.param_groups[0]['lr'], loss))
And my model test is as follows:
model.eval()
with torch.no_grad():
for batch_x, batch_y, len_batch in test_loader:
batch_x, batch_y, len_batch = batch_x.to(device), batch_y.to(device), len_batch.to(device)
logits = model(batch_x)
loss_test = model.loss(logits, batch_y, len_batch)
acc = sum(logits == batch_y) * 1.0 / len(logits)
for batch_x, batch_y, len_batch in val_loader:
batch_x, batch_y, len_batch = batch_x.to(device), batch_y.to(device), len_batch.to(device)
logits = model(batch_x)
loss_val = model.loss(logits, batch_y, len_batch)
if loss_val < best_validation:
best_validation = loss_val.item()
best_test = loss_test.item()
print()
print("Val: Loss: {:.2f}\tBest: {:.2f}".format(loss_val, best_validation))
print("Test: Loss: {:.2f}\tBest: {:.2f}".format(loss_test, best_test))
print()
model.train()
I am struggling with thinking about a way to compute the accuracy of this model and I would like to receive some suggestions about a way to do so.
Thank you.
| I believe this line in your code is already attempting to calculate accuracy:
acc = sum(logits == batch_y) * 1.0 / len(logits)
Though you probably want to argmax the logits before comparing with the labels:
preds = logits.argmax(dim=-1)
acc = sum(preds == batch_y) * 1.0 / len(logits)
| https://stackoverflow.com/questions/66065431/ |
ValueError: All bounding boxes should have positive height and width | Any help solving this will be highly appreciated.
I have an idea why the error is happening: it is because xmin == xmax and ymin == ymax, which should not be the case. However, I do not know how this is happening. Here is how I load my custom dataset with the pytorch Dataset class.
class CustomDataset(torch.utils.data.Dataset):
def __init__(self, root, transforms=None):
self.root = root
self.transforms = transforms
# load all image files, sorting them to
# ensure that they are aligned
self.imgs = list(sorted(os.listdir(os.path.join(root, "seg_image_use"))))
self.masks = list(sorted(os.listdir(os.path.join(root, "seg_mask_use"))))
def __getitem__(self, idx):
# load one image and mask using idx
img_path = os.path.join(self.root, "seg_image_use", self.imgs[idx])
mask_path = os.path.join(self.root, "seg_mask_use", self.masks[idx])
img = Image.open(img_path).convert("RGB")
# note that we haven't converted the mask to RGB,
# because each color corresponds to a different instance
# with 0 being background
mask = Image.open(mask_path)
mask = np.asarray(mask)
# instances are encoded as different colors
obj_ids = np.unique(mask)[1:] # first id is the background, so remove it
masks = mask == obj_ids[:, None, None] # split the color-encoded mask into a set of binary masks
# get bounding box coordinates for each mask
num_objs = len(obj_ids)
boxes = []
for i in range(num_objs):
pos = np.where(masks[i])
xmin = np.min(pos[1])
xmax = np.max(pos[1])
ymin = np.min(pos[0])
ymax = np.max(pos[0])
boxes.append([xmin, ymin, xmax, ymax])
# convert everything into torch.Tensor
boxes = torch.as_tensor(boxes, dtype=torch.float32)
area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0])
target = {}
target["boxes"] = boxes
target["labels"] = torch.as_tensor(obj_ids, dtype=torch.int64) - 1 # corrected by Rawi
target["masks"] = torch.as_tensor(masks, dtype=torch.uint8) #uint8
target["image_id"] = torch.tensor([idx])
target["area"] = area
target["iscrowd"] = torch.zeros((num_objs,), dtype=torch.int64) # suppose all instances are not crowd
if self.transforms is not None:
img, target = self.transforms(img, target)
return img, target
def __len__(self):
return len(self.imgs)
`
And then when I call it to see the dataset, I get the first index of each dataset showing this; take note of the first index in the 'boxes' tensor. (italic)
`dataset_sample = CustomDataset('C:/Users/LENOVO/Desktop/clothme/Train')
img, target = dataset_sample[2]
print(target)
result: {'boxes': tensor(*[[ 0., 0., 286., 403.]*,
[ 30., 240., 52., 241.],
[ 25., 183., 31., 204.],
[ 26., 224., 34., 240.],
[ 30., 169., 88., 181.],
[ 32., 239., 85., 251.],
`
Here is the error I get when I try to train the model.
`---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-81-c798930961c1> in <module>
4 for epoch in range(num_epochs):
5 # train for one epoch, printing every 10 iterations
----> 6 train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
7 # update the learning rate
8 lr_scheduler.step()
~\measurement_model_dev\engine.py in train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq)
28 targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
29
---> 30 loss_dict = model(images, targets)
31
32 losses = sum(loss for loss in loss_dict.values())
~\anaconda3\envs\measurement_py37\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~\anaconda3\envs\measurement_py37\lib\site-packages\torchvision\models\detection\generalized_rcnn.py in forward(self, images, targets)
92 raise ValueError("All bounding boxes should have positive height and width."
93 " Found invalid box {} for target at index {}."
---> 94 .format(degen_bb, target_idx))
95
96 features = self.backbone(images.tensors)
ValueError: All bounding boxes should have positive height and width. Found invalid box [790.0323486328125, 359.0328369140625, 790.0323486328125, 359.0328369140625] for target at index 0.
`
| TL;DR: you have to check your ground truth first and make sure that any zero-area boxes are discarded.
Imagine that what you provide as a bounding box is a zero-area box.
Considering that the data format is [x1,y1,x2,y2], which indicates the [left,top,..] and [..,right,bottom] corners of the ground-truth box, in your case these two points actually coincide.
This is a good example of a valid input, since x1<x2 and y1<y2 in all boxes:
result: {'boxes': tensor(*[[ 0., 0., 286., 403.]*,
[ 30., 240., 52., 241.],
[ 25., 183., 31., 204.],
[ 26., 224., 34., 240.],
[ 30., 169., 88., 181.],
[ 32., 239., 85., 251.],
but the error message says that:
[790.0323486328125, 359.0328369140625, 790.0323486328125, 359.0328369140625]
which indicates that x1=x2 and y1=y2.
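As a rough sketch of what that check could look like (the function below is hypothetical and just mirrors the loop in the question's __getitem__; the key point is skipping any box where xmax <= xmin or ymax <= ymin):
import numpy as np

def boxes_from_masks(masks):
    # masks: boolean array of shape (num_objs, H, W), one binary mask per instance
    boxes, keep = [], []
    for i, m in enumerate(masks):
        pos = np.where(m)
        xmin, xmax = np.min(pos[1]), np.max(pos[1])
        ymin, ymax = np.min(pos[0]), np.max(pos[0])
        if xmax > xmin and ymax > ymin:   # discard degenerate (zero-area) boxes
            boxes.append([xmin, ymin, xmax, ymax])
            keep.append(i)
    return boxes, keep
If you filter boxes this way, remember to filter the corresponding labels and masks with the same keep indices so that all the target fields stay aligned.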
| https://stackoverflow.com/questions/66068158/ |
Supplying weights to nn.functional.conv2d in PyTorch | I am trying to learn the weights of a 3x3 conv2d layer accepting 3 channels and outputting 3 channels. For this discussion consider bias=0 in each case. However, the weights of the conv layer are learned indirectly. I have a 2 layered Multi layer perception having 9 nodes in first layer and 9 in the second. The weights of the 2d conv layer are then precisely the weights learned using this MLP i.e. nn.Linear(9,9). I understand in this case I will have to use nn.functional.conv2d(input,weight). But how exactly to extract the weights from MLP and use it for convolution is not clear and can think of the following.
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
m=nn.Linear(9,9)
def forward(self, x):
# some operations involving MLP `m`
return nn.Functional.conv2d(x,m.weight)
Can some one provide a short, dummy code in PyTorch to achieve this training configuration allowing backpropagation?
| A convolution from 3 input channels to 3 output channels with kernel_size=3 has 81 weights (and not 9). You can reduce this number to 27 if you use groups=3.
you can do the following:
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.hyper = nn.Linear(9, 9) # output the required number of parameters
def forward(self, x):
# do stuff with self.hyper(x)
y = torch.nn.functional.conv2d(x, self.hyper.weight.reshape((3, 3, 3, 3))) # add padding and other parameters as needed
return y
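A quick, self-contained shape check of this idea (just a sketch; padding=1 is an arbitrary choice here to keep the spatial size, and the 9x9 weight matrix is reshaped into an (out_channels, in_channels, kH, kW) kernel):
import torch
import torch.nn as nn
import torch.nn.functional as F

hyper = nn.Linear(9, 9)                # 81 learnable values
x = torch.randn(2, 3, 8, 8)            # batch of 2 inputs with 3 channels
w = hyper.weight.reshape(3, 3, 3, 3)   # (out_ch, in_ch, kH, kW)
y = F.conv2d(x, w, padding=1)
print(y.shape)                         # torch.Size([2, 3, 8, 8])
Since w is just a reshape of hyper.weight, gradients computed from y propagate back into the linear layer during backward().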
| https://stackoverflow.com/questions/66088545/ |
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! when resuming training | I saved a checkpoint while training on gpu. After reloading the checkpoint and continue training I get the following error:
Traceback (most recent call last):
File "main.py", line 140, in <module>
train(model,optimizer,train_loader,val_loader,criteria=args.criterion,epoch=epoch,batch=batch)
File "main.py", line 71, in train
optimizer.step()
File "/opt/conda/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
return func(*args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/torch/optim/sgd.py", line 106, in step
buf.mul_(momentum).add_(d_p, alpha=1 - dampening)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
My training code is as follows:
def train(model,optimizer,train_loader,val_loader,criteria,epoch=0,batch=0):
batch_count = batch
if criteria == 'l1':
criterion = L1_imp_Loss()
elif criteria == 'l2':
criterion = L2_imp_Loss()
if args.gpu and torch.cuda.is_available():
model.cuda()
criterion = criterion.cuda()
print(f'{datetime.datetime.now().time().replace(microsecond=0)} Starting to train..')
while epoch <= args.epochs-1:
print(f'********{datetime.datetime.now().time().replace(microsecond=0)} Epoch#: {epoch+1} / {args.epochs}')
model.train()
interval_loss, total_loss= 0,0
for i , (input,target) in enumerate(train_loader):
batch_count += 1
if args.gpu and torch.cuda.is_available():
input, target = input.cuda(), target.cuda()
input, target = input.float(), target.float()
pred = model(input)
loss = criterion(pred,target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
....
The saving process happened after finishing each epoch.
torch.save({'epoch': epoch,'batch':batch_count,'model_state_dict': model.state_dict(),'optimizer_state_dict':
optimizer.state_dict(),'loss': total_loss/len(train_loader),'train_set':args.train_set,'val_set':args.val_set,'args':args}, f'{args.weights_dir}/FastDepth_Final.pth')
I can't figure why I get this error.
args.gpu == True, and I'm passing the model, all data, and loss function to cuda, somehow there is still a tensor on cpu, could anyone figure out what's wrong?
Thanks.
| There might be an issue with the device parameters are on:
If you need to move a model to GPU via .cuda() , please do so before constructing optimizers for it. Parameters of a model after .cuda() will be different objects with those before the call.
In general, you should make sure that optimized parameters live in consistent locations when optimizers are constructed and used.
| https://stackoverflow.com/questions/66091226/ |
looking for an equivalent of Tensorflow normalization layer in Pytorch | I was using 'tf.keras.layers.experimental.preprocessing.Normalization'. This layer is cool since you can save weights in this layer to normalize any input data to this layer.
However, I couldn't find any normalization layer in Pytorch.
Is there a layer that functions the same role?
| There is no built-in that achieves this in PyTorch. However, you can measure the mean and standard deviation yourself (keeping only the relevant axes), then use torchvision.transforms.Normalize with those statistics.
For instance in order to measure mean and std over the channels:
>>> x = torch.rand(16, 3, 10, 10)
>>> mean, std = x.mean((0, 2, 3)), x.std((0, 2, 3))
(tensor(0.4941), tensor(0.2899))
Then initialize a transform:
>>> t = torchvision.transforms.Normalize(mean, std)
You can use this function on a new dataset to normalize it based on the initial dataset's statistics:
>>> z_normalized = t(z)
| https://stackoverflow.com/questions/66092092/ |
AllenNLP DatasetReader: only loads a single instance, instead of iterating over all instances in the training dataset | I am using AllenNLP to train a hierarchical attention network model. My training dataset consists of a list of JSON objects (eg, each object in the list is a JSON object with keys := ["text", "label"]. The value associated with the text key is a list of lists, eg:
[{"text":[["i", "feel", "sad"], ["not", "sure", "i", "guess", "the", "weather"]], "label":0} ... {"text":[[str]], "label":int}]
My DatasetReader class looks like:
@DatasetReader.register("my_reader")
class TranscriptDataReader(DatasetReader):
def __init__(self,
token_indexers: Optional[Dict[str, TokenIndexer]] = None,
lazy: bool = True) -> None:
super().__init__(lazy)
self._token_indexers = token_indexers or {'tokens': SingleIdTokenIndexer()}
def _read(self, file_path: str) -> Iterator[Instance]:
with open(file_path, 'r') as f:
data = json.loads(f.read())
for _,data_json in enumerate(data):
sent_list = []
for segment in data_json["text"]:
sent_list.append(self.get_text_field(segment))
yield self.create_instance(sent_list, str(data_json["label"]))
def get_text_field(self, segment):
return TextField([Token(token.lower()) for token in segment],self._token_indexers)
def create_instance(self, sent_list, label):
label_field = LabelField(label, skip_indexing=False)
fields = {'tokens': ListField(sent_list), 'label': label_field}
return Instance(fields)
and in my config file, I have:
{
dataset_reader: {
type: 'my_reader',
},
train_data_path: 'data/train.json',
validation_data_path: 'data/dev.json',
data_loader: {
batch_sampler: {
type: 'bucket',
batch_size: 10
}
},
I have tried (alternatively) setting the lazy param for the dataset reader to True and False.
When set to True, the model is able to train, however, I observe that only one train and one dev instance actually get loaded, when my dataset contains ~100.
When set to False, I've modified the yield line in _read to be return; however, this causes a type error in the base vocabulary class. I've also tried keeping the yield as is when set to False; in this case, no instances get loaded at all, and since the set of instances is empty, the vocabulary does not get instantiated, and the embedding class throws an error.
Would appreciate pointers, and/or tips for debugging.
| If you are using allennlp>=v2.0.0, the lazy parameter in the DatasetReader constructor is deprecated. Therefore, your super().__init__(lazy) would instead be interpreted as the new constructor parameter max_instances, i.e. max_instances=True, which is equivalent to max_instances=1.
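A minimal sketch of a constructor that fits the 2.x API (the import paths below are the usual allennlp 2.x ones as far as I recall; forward any remaining keyword arguments instead of passing a positional lazy flag):
from allennlp.data import DatasetReader
from allennlp.data.token_indexers import SingleIdTokenIndexer

@DatasetReader.register("my_reader")
class TranscriptDataReader(DatasetReader):
    def __init__(self, token_indexers=None, **kwargs):
        super().__init__(**kwargs)  # no positional lazy argument any more
        self._token_indexers = token_indexers or {"tokens": SingleIdTokenIndexer()}
If you do want to cap the number of instances, pass max_instances explicitly by keyword rather than relying on positional arguments.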
| https://stackoverflow.com/questions/66092443/ |
Storing a dictionary with random indices as keys and simulated values as values in hdf5 possibly using pytorch? | UPDATED QUESTION:
Each entry in an nd-array (say Sim_nDArray) corresponds to a combination of parameters chosen from an 8D search space. I have used Sim_nDArray.ravel() to convert it to its 1D equivalent. Since I cannot search over ~100 million entries, I decided to choose ~1 million random entries. I have the corresponding ~1 million simulated values.
I have been able to simulate and save the data. However, it seems that I have not been able to load it properly. I am getting an error while overloading "len" during the declaration of the "dataset" object.
I am planning to use hdf5 to store and read data. Can someone please guide me how to achieve this?
def add_trace(arrInd, arr):
""" Add one trace to the dataset, keeping count of the # of traces written """
global ntraces
dset1[ntraces, :] = arrInd
dset2[ntraces, :] = arr
ntraces += 1
def done():
""" After all calls to add_trace_2, trim the dataset to size """
dset1.resize((ntraces, 1000))
dset2.resize((ntraces, 1000))
import torch
from torch.utils.data import Dataset, DataLoader
class Dataset(torch.utils.data.Dataset):
# Characterizes a dataset for PyTorch
def __init__(self, dset1, dset2):
'Initialization'
self.dset1 = dset1
self.dset2 = dset2
self._data_len = len(dset1)
def __len__(self):
# Denotes the total number of samples
return len(self._data_len)
def __getitem__(self, index):
# Generates one sample of data
# Select sample
ID = self.dset1[index]
SimData = self.dset2[index]
return ID, SimData
# Running the main.
if __name__ == '__main__':
import h5py
import numpy as np
import timeit
""" Re-initialize both datasets for the tests """
global data, N, dset1, dset2, ntraces
N = 1000
################ WRITE #############################################################################################
## Creating two datasets
f = h5py.File("randomDataset2.hdf5", 'w')
dset1 = f.create_dataset('dataset1', (5000, 1000), maxshape=(None, 1000), dtype="float32", chunks=(1, 1000))
dset2 = f.create_dataset('dataset2', (5000, 1000), maxshape=(None, 1000),
dtype="float32") # DK: why faster if I do not define chunk
dset1.resize((10001, 1000)) # Allocating extra space
dset2.resize((10001, 1000)) # Allocating extra space
## TEST 1: Less efficient way of writing to hdf5
ntraces = 0
start1 = timeit.default_timer()
for idx in range(N):
IndxVec1 = np.random.randint(low=0, high=1000, size=1000);
DataVec1 = np.random.random(1000)
add_trace(IndxVec1, DataVec1)
done()
# All the program statements
stop1 = timeit.default_timer()
execution_time = stop1 - start1
print("Program Executed in " + str(execution_time)) # It returns time in seconds
f.close()
##################################
## READING HDF files
fr = h5py.File("randomDataset2.hdf5", 'r')
dset10 = fr['dataset1']
dset20 = fr['dataset2']
fr.close()
# Parameters
params = {'batch_size': 64, 'shuffle': True, 'num_workers': 6}
max_epochs = 100
# Generators
training_set = Dataset(dset10, dset20)
training_generator = torch.utils.data.DataLoader(training_set, batch_size= 64, shuffle=True, num_workers= 6)
Error:
(PipInConda_DKU) dushyant20@DESKTOP-U96RKFC:/mnt/c/PyImageSearch/Sim_Write_n_Read$ python3 main.py
Program Executed in 2.265893899995717
Traceback (most recent call last):
File "main.py", line 89, in <module>
training_set = Dataset(dset10, dset20)
File "main.py", line 30, in __init__
self._data_len = len(dset1)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "/home/dushyant20/miniconda3/envs/PipInConda_DKU/lib/python3.8/site-packages/h5py/_hl/dat
aset.py", line 447, in __len__
size = self.len()
File "/home/dushyant20/miniconda3/envs/PipInConda_DKU/lib/python3.8/site-packages/h5py/_hl/dat
aset.py", line 459, in len
shape = self.shape
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "/home/dushyant20/miniconda3/envs/PipInConda_DKU/lib/python3.8/site-packages/h5py/_hl/dataset.py", line 286, in shape
return self.id.shape
File "h5py/h5d.pyx", line 132, in h5py.h5d.DatasetID.shape.__get__
File "h5py/h5d.pyx", line 133, in h5py.h5d.DatasetID.shape.__get__
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5d.pyx", line 289, in h5py.h5d.DatasetID.get_space
ValueError: Not a dataset (not a dataset)
| I don't use PyTorch, so can't comment on that (or run the entire code). Observations:
I noticed 2 methods for Class Dataset are not indented properly: def __len__(self): and def __getitem__(self, index):. I assume that's an error from cut-n-paste to your SO post...but you should double check.
I ran your code (after commenting out the PyTorch stuff), and it runs to completion and creates randomDataset2.hdf5 with 2 1000x1000 datasets. So, the problem is not in HDF5 file creation.
You are passing h5py datasets to your PyTorch generators. However, you close the HDF5 file BEFORE you call them. So, the data isn't available at that point in time. That could be the problem. Also, do your generators expect NumPy arrays or dataset objects? That could also cause a problem (once you fix the file close issue).
Other observations:
Use a with / as context manager when working with files to avoid open/close issues.
Recommended practice is to put all imports at the top of the file.
Use this call if you want a NumPy array instead of a h5py dataset: arr10 = fr['dataset1'][:]
Modified code to reflect above is shown below.
I don't know if this will solve your problem...but it might get you pointed in the right direction.
import h5py
import numpy as np
import timeit
import torch
from torch.utils.data import Dataset, DataLoader
def add_trace(arrInd, arr):
""" Add one trace to the dataset, keeping count of the # of traces written """
global ntraces
dset1[ntraces, :] = arrInd
dset2[ntraces, :] = arr
ntraces += 1
def done():
""" After all calls to add_trace_2, trim the dataset to size """
dset1.resize((ntraces, 1000))
dset2.resize((ntraces, 1000))
class Dataset(torch.utils.data.Dataset):
# Characterizes a dataset for PyTorch
def __init__(self, dset1, dset2):
'Initialization'
self.dset1 = dset1
self.dset2 = dset2
self._data_len = len(dset1)
def __len__(self):
# Denotes the total number of samples
return self._data_len  # _data_len is already an int, so no len() call is needed
def __getitem__(self, index):
# Generates one sample of data
# Select sample
ID = self.dset1[index]
SimData = self.dset2[index]
return ID, SimData
# Running the main.
if __name__ == '__main__':
""" Re-initialize both datasets for the tests """
global data, N, dset1, dset2, ntraces
N = 1000
################ WRITE #############################################################################################
## Creating two datasets
with h5py.File("randomDataset2.hdf5", 'w') as f:
dset1 = f.create_dataset('dataset1', (5000, 1000), maxshape=(None, 1000), dtype="float32", chunks=(1, 1000))
dset2 = f.create_dataset('dataset2', (5000, 1000), maxshape=(None, 1000),
dtype="float32") # DK: why faster if I do not define chunk
dset1.resize((10001, 1000)) # Allocating extra space
dset2.resize((10001, 1000)) # Allocating extra space
## TEST 1: Less efficient way of writing to hdf5
ntraces = 0
start1 = timeit.default_timer()
for idx in range(N):
IndxVec1 = np.random.randint(low=0, high=1000, size=1000);
DataVec1 = np.random.random(1000)
add_trace(IndxVec1, DataVec1)
done()
# All the program statements
stop1 = timeit.default_timer()
execution_time = stop1 - start1
print("Program Executed in " + str(execution_time)) # It returns time in seconds
##################################
## READING HDF files
with h5py.File("randomDataset2.hdf5", 'r') as fr:
dset10 = fr['dataset1']
arr10 = fr['dataset1'][:]
dset20 = fr['dataset2']
arr20 = fr['dataset2'][:]
# Parameters
params = {'batch_size': 64, 'shuffle': True, 'num_workers': 6}
max_epochs = 100
# Generators
training_set = Dataset(dset10, dset20)
training_generator = torch.utils.data.DataLoader(training_set, batch_size= 64, shuffle=True, num_workers= 6)
| https://stackoverflow.com/questions/66096415/ |
How to allow complex inputs, and complex weights to a Pytorch model? | Assume even the simplest model (taken from here)
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 32, 3, 1)
self.conv2 = nn.Conv2d(32, 64, 3, 1)
self.fc1 = nn.Linear(9216, 128)
self.fc2 = nn.Linear(128, 10)
def forward(self, x):
x = self.conv1(x)
x = F.relu(x)
x = self.conv2(x)
x = F.relu(x)
x = F.max_pool2d(x, 2)
x = torch.flatten(x, 1)
x = self.fc1(x)
x = F.relu(x)
x = self.fc2(x)
output = F.log_softmax(x, dim=1)
return output
When feeding complex data to the model,
output = model(data.complex())
it gives
ret = torch.addmm(bias, input, weight.t())
RuntimeError: expected scalar type Float but found ComplexDouble
(I didn't copy the entire stack trace, nor the entire training code, for question simplicity)
doing self.complex() after the model's __init__, as I normally would do self.double(), doesn't work, with
torch.nn.modules.module.ModuleAttributeError: 'Net' object has no attribute 'complex'
How to allow model's weights to be complex?
How to allow complex input to a model?
Which built-in activation functions support this?
Is anything also supported for 1d operations?
EDIT:
In the meantime, I found
this paper. Still reading it.
| Just as you would normally call self.double(), you can use self.type(dst_type), documented at https://pytorch.org/docs/stable/generated/torch.nn.Module.html
In my case, self.type(torch.complex64) works.
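A tiny sketch of that (hedged: complex support varies by PyTorch version, and not every layer or activation accepts complex tensors, so only parts of a typical model will run as-is):
import torch
import torch.nn as nn

lin = nn.Linear(4, 2).type(torch.complex64)   # weights and bias become complex64
x = torch.randn(3, 4, dtype=torch.complex64)  # complex-valued input batch
print(lin(x).dtype)                           # torch.complex64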
| https://stackoverflow.com/questions/66099139/ |
Reducing size of pytorch library | I've made a conversational Telegram bot with PyTorch and I'm trying to host it on GitHub. The large PyTorch file prevents me from doing so, as it's too large, and I get this error:
remote: error: File env/lib/python3.8/site-packages/torch/lib/libtorch_cpu.dylib is 233.61 MB; this exceeds GitHub's file size limit of 100.00 MB
Is there any way to reduce the size of the torch file? Or is it possible to find and delete unused dependencies?
| Do not host that file on GitHub.
Make a requirements.txt file and add the required packages there; you can even pin the exact versions required to run your code.
Whoever downloads it can create a virtual environment (venv) or a Docker image and install the dependencies with
pip install -r requirements.txt
For example:
requirements.txt
https://download.pytorch.org/whl/cpu/torch-1.5.1%2Bcpu-cp38-cp38-linux_x86_64.whl
transformers==3.5
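Since the error path starts with env/lib/..., the virtual environment itself is being committed; you would also want to exclude it from the repository, for example with a .gitignore entry like:
env/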
| https://stackoverflow.com/questions/66103345/ |
How to use a neural network (Pytorch- or Tensorflow-based) in Fortran? | Python is popular and optimal for neural network development and training. However, many scientific codes are written in the Fortran language. How I can call a trained network in my Fortran program?
| It would not make sense. You are not training the network in Fortran, you are just trying to run the C++ or Python code from Fortran.
You should abstract the training/inference from your Fortran code. You could do the orchestration in Fortran.
Create your model in Python
Expose your model thru an API that you can access from Fortran via an httpRequest.
By doing that, you could expose anything you want to your Fortran app.
| https://stackoverflow.com/questions/66107018/ |
GPU showing no speed up over CPU | I'm training a neural network with 100*100 hidden nodes, four inputs/one output, and batch size of 32, and I am seeing no speed improvement in using the GPU vs. CPU. I only have a limited data set (1067 samples, copied all to the GPU at the beginning), but I would have thought the 33 batches could have run in parallel, more than making up for the time in copying to the GPU. Is my data set too small, or is there potentially some other issue? Here is my code snippet:
def train_for_regression(X, T):
BATCH_SIZE = 32
n_epochs = 1000
learning_rate = 0.01
device = torch.device("cuda:0")
Xt = torch.from_numpy(X).float().to(device) #Training inputs are 4 * 1067 samples
Tt = torch.from_numpy(T).float().to(device) #Training outputs are 1 * 1067 samples
nnet = torch.nn.Sequential(torch.nn.Linear(4, 100),
torch.nn.Tanh(),
torch.nn.Linear(100, 100),
torch.nn.Tanh(),
torch.nn.Linear(100, 1))
nnet.to(device)
mse_f = torch.nn.MSELoss()
optimizer = torch.optim.Adam(nnet.parameters(), lr=learning_rate)
for epoch in range(n_epochs):
for i in range(0, len(Xt), BATCH_SIZE):
batch_Xt = Xt[i:i+BATCH_SIZE,:]
batch_Tt = Tt[i:i+BATCH_SIZE,:]
optimizer.zero_grad()
Y = nnet(batch_Xt)
mse = mse_f(Y, batch_Tt)
mse.backward()
optimizer.step()
return nnet
| Chances are the time required for the data to get to the GPU negates the benefit of the GPU. In this case the size of the network seems so small that the CPU should be efficient enough and the speedup from the GPU shouldn't be that big.
Also, GPUs are usually used to parallelize matrix computations, in this case a single batch's data multiplied by the weights of the network. So different batches aren't processed in parallel unless you take extra steps, like using additional libraries and/or GPUs.
| https://stackoverflow.com/questions/66112977/ |
Neural network graph visualization | I would like to generate visualization of my neural network (PyTorch or ONNX model) similar to this using Graphcore Poplar.
I have looked in the documentation but I cannot find where this visualization feature is.
How can I achieve such a task ? Is there any other existing library ?
| that visualization is not part of the Graphcore Poplar software. It is "data art" generated by the team at GraphCore.
It is a tough work and requires many hours to get to that fine quality, but if you are decided, I would suggest to start looking at graph visualization tools looking for "graph network visualization" (and get inspiration from galleries like https://cytoscape.org/screenshots.html).
The NN architecture can be converted into a common graph format (neurons as nodes, connections as edges) and then you may start trying.
Some ideas:
Start with a simple NN with three layers. Place the input layer at the outer circle, there is a inner circle for the hidden layer and the output layer is placed in the center. Each neuron is a dot, with radius relative to the weight and color with the bias, and you can displace it towards/away the neurons in the previous layers based on the weight. Check this image for inspiration if you are looking for a "biological" style: https://cytoscape.org/images/screenshots/edge_bundling3_1400px.png
| https://stackoverflow.com/questions/66117949/ |
TypeError: 'NoneType' object cannot be interpreted as an integer | I want to classify cats and dogs with PyTorch, so I downloaded the dataset from Kaggle and separated it into train/validation sets. I changed the file names from 00001.jpg to cat.00001.jpg, and so on.
But when I try to use enumerate(dataset), this error occurs:
My dataset code is:
class TrainImageFolder(Dataset):
def __init__(self, path, transform=None):
self.transform = transform
self.path = path
self.image = []
self.label = []
for i in os.listdir(self.path):
self.image.append(i)
if i.startswith("cat"):
self.label.append(0)
elif i.startswith("dog"):
self.label.append(1)
assert len(self.label) == len(self.image)
def __len__(self):
len(self.image)
def __getitem__(self, index):
label = self.label[index]
img = Image.open(self.image[index]).convert("RGB")
if self.transform:
img = self.transform(img)
return img, label
train_transform = transforms.Compose([transforms.Resize((224, 224)),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
train_dataset = TrainImageFolder('train', transform=train_transform)
train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=32, shuffle=False)
for i, (imgs, labels) in tqdm(enumerate(train_dataloader)):
print(labels)
and error is:
Traceback (most recent call last):
File "C:/Users/ge971/PycharmProjects/myVGG16/dataset.py", line 148, in <module>
for i, (imgs, labels) in tqdm(enumerate(train_dataloader)):
File "C:\Users\ge971\miniconda3\envs\torch17\lib\site-packages\tqdm\std.py", line 1166, in __iter__
for obj in iterable:
File "C:\Users\ge971\miniconda3\envs\torch17\lib\site-packages\torch\utils\data\dataloader.py", line 435, in __next__
data = self._next_data()
File "C:\Users\ge971\miniconda3\envs\torch17\lib\site-packages\torch\utils\data\dataloader.py", line 474, in _next_data
index = self._next_index() # may raise StopIteration
File "C:\Users\ge971\miniconda3\envs\torch17\lib\site-packages\torch\utils\data\dataloader.py", line 427, in _next_index
return next(self._sampler_iter) # may raise StopIteration
File "C:\Users\ge971\miniconda3\envs\torch17\lib\site-packages\torch\utils\data\sampler.py", line 227, in __iter__
for idx in self.sampler:
File "C:\Users\ge971\miniconda3\envs\torch17\lib\site-packages\torch\utils\data\sampler.py", line 67, in __iter__
return iter(range(len(self.data_source)))
TypeError: 'NoneType' object cannot be interpreted as an integer
Process finished with exit code 1
Could you let me know how to fix the error?
| I just had the same error: you forgot to return the length of the data in the __len__ function.
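In the question's code that means:
def __len__(self):
    # Denotes the total number of samples
    return len(self.image)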
| https://stackoverflow.com/questions/66122889/ |
How to detect source of under fitting and vanishing gradients in pytorch? | How to detect source of vanishing gradients in pytorch?
By vanishing gradients, I mean that the training loss doesn't go down below some value, even on limited sets of data.
I am trying to train some network, and I have the above problem, in which I can't even get the network to over fit, but can't understand the source of the problem.
I've spent a long time googling this, and only found ways to prevent over fitting, but nothing about under fitting, or specifically, vanishing gradients.
What I did find:
Pytorch forum discussion about "bad gradients". It only refers to exploding gradients, and nan gradients, and leads to here and here which is more of the same.
I know that "making the network larger or more complex" is a general suggested way of causing over fitting (which is desired right now).
I also know that very deep networks can have their gradients vanish.
It is not clear to me that a larger network would solve the problem because it could create its own problem, as I just stated, and again I would not know how to debug this, while still seeing roughly the same behavior.
Changing the architecture to some res-net could help, but also could not, because the problem was not pinpointed to be caused by network depth.
Dead Relu can cause underfitting, and indeed moving to LeakyRelu helps, but still not enough.
How would one debug sources of under fitting in Pytorch, specifically, caused by vanishing gradients?
Instead of shooting blindly, trying things, I would like to be able to properly visualize the gradients in my network to know what I am actually trying to solve instead of guessing.
Surely, I am not the first one to have this requirement, and tools and methodologies were created for this purpose.
I would like to read about them, but don't know what to look for.
The specific net I have right now is irrelevant, as this is a general question about methodology.
| You can use tensorboard with Pytorch to visualize the training gradients. Add the gradients to a tensorboard histogram during training.
For example...
Let:
model be your pytorch model
model_input be an example input to your model
run_name be a string identifier for your training session
from torch.utils.tensorboard import SummaryWriter
summary_writer = SummaryWriter(comment=run_name)
summary_writer.add_graph(model, model_input, verbose=True)
# Training loop
for step_index in ...:
# Calculate loss etc
for name, param in model.named_parameters():
summary_writer.add_histogram(f'{name}.grad', param.grad, step_index)
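You can then inspect the histograms by launching TensorBoard on the log directory (SummaryWriter writes under runs/ by default unless you pass log_dir):
tensorboard --logdir runs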
References:
https://pytorch.org/tutorials/intermediate/tensorboard_tutorial.html
https://discuss.pytorch.org/t/is-there-a-way-to-visualize-the-gradient-path-of-the-back-propagation-of-the-entire-network/44322/4
https://debuggercafe.com/track-your-pytorch-deep-learning-project-with-tensorboard/
| https://stackoverflow.com/questions/66137298/ |
Changing config and loading Hugging Face model fine-tuned on a downstream task | I am using HuggingFace models for TokenClassification task. I have the following label2id mapping. I am using version 3.3.0 of the library
label2id = {
"B-ADD": 4,
"B-ARRESTED": 7,
"B-CRIME": 2,
"B-INCIDENT_DATE": 3,
"B-SUSPECT": 9,
"B-VICTIMS": 1,
"B-WPN": 5,
"I-ADD": 8,
"I-ARRESTED": 13,
"I-CRIME": 11,
"I-INCIDENT_DATE": 10,
"I-SUSPECT": 14,
"I-VICTIMS": 12,
"I-WPN": 6,
"O": 0
}
The following scenario works well and the model gets loaded correctly.
from transformers import AutoModelForTokenClassification, AutoTokenizer, AutoConfig
pretrained_model_name = "bert-base-cased"
config = AutoConfig.from_pretrained(pretrained_model_name)
id2label = {y:x for x,y in label2id.items()}
config.label2id = label2id
config.id2label = id2label
config._num_labels = len(label2id)
model = AutoModelForTokenClassification.from_pretrained(pretrained_model_name, config=config)
model
I get the following output. The last layer has been correctly initialized with 15 neurons (the number of token categories to predict).
.....................
(dropout): Dropout(p=0.1, inplace=False)
(classifier): Linear(in_features=768, out_features=15, bias=True)
)
but if I change the pretrained_model_name to "dbmdz/bert-large-cased-finetuned-conll03-english", I get the following error
loading weights file https://cdn.huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english/pytorch_model.bin from cache at C:\Users\anu10961/.cache\torch\transformers\4b02c1fe04cf7f7e6972536150e9fb329c7b3d5720b82afdac509bd750c705d2.6dcb154688bb97608a563afbf68ba07ae6f7beafd9bd98b5a043cd269fcc02b4
All model checkpoint weights were used when initializing BertForTokenClassification.
All the weights of BertForTokenClassification were initialized from the model checkpoint at dbmdz/bert-large-cased-finetuned-conll03-english.
If your task is similar to the task the model of the checkpoint was trained on, you can already use BertForTokenClassification for predictions without further training.
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-15-2969a8092bf4> in <module>
----> 1 model = AutoModelForTokenClassification.from_pretrained(pretrained_model_name, config=config)
C:\ProgramData\Anaconda3\envs\arcgis183\lib\site-packages\transformers\modeling_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1372 if type(config) in MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING.keys():
1373 return MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING[type(config)].from_pretrained(
-> 1374 pretrained_model_name_or_path, *model_args, config=config, **kwargs
1375 )
1376
C:\ProgramData\Anaconda3\envs\arcgis183\lib\site-packages\transformers\modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1047 raise RuntimeError(
1048 "Error(s) in loading state_dict for {}:\n\t{}".format(
-> 1049 model.__class__.__name__, "\n\t".join(error_msgs)
1050 )
1051 )
RuntimeError: Error(s) in loading state_dict for BertForTokenClassification:
size mismatch for classifier.weight: copying a param with shape torch.Size([9, 1024]) from checkpoint, the shape in current model is torch.Size([15, 1024]).
size mismatch for classifier.bias: copying a param with shape torch.Size([9]) from checkpoint, the shape in current model is torch.Size([15]).
The only difference I can see is that the model dbmdz/bert-large-cased-finetuned-conll03-english is already fine-tuned on a token classification task and its model config has these label2id mappings
label2id = {
"B-LOC": 7,
"B-MISC": 1,
"B-ORG": 5,
"B-PER": 3,
"I-LOC": 8,
"I-MISC": 2,
"I-ORG": 6,
"I-PER": 4,
"O": 0
}
But I still feel that we can change the last layer of this model and use it for my specific task (although I need to train the model first before using it for inference).
| Once a part of the model is in the saved pre-trained model, you cannot change its hyperparameters. By setting both the pre-trained model and the config, you are saying that you want a model that classifies into 15 classes and that you want to initialize it with a model that uses 9 classes, and that does not work.
If I understand correctly you want to initialize the underlying BERT from a different classifier. A workaround that can do it is:
Load only the underlying BERT without the classification layer;
Initialize a classification model from scratch;
Replace the randomly initialized BERT in the new classifier with the pre-trained one.
from transformers import AutoModel, AutoModelForTokenClassification
bert = AutoModel.from_pretrained('dbmdz/bert-large-cased-finetuned-conll03-english')
classifier = AutoModelForTokenClassification.from_config(config)
classifier.bert = bert
| https://stackoverflow.com/questions/66148641/ |
PyTorch one of the variables needed for gradient computation has been modified by an inplace operation | I'm doing a policy gradient method in PyTorch. I wanted to move the network update into the loop and it stopped working. I'm still a PyTorch newbie so sorry if the explanation is obvious.
Here is the original code that works:
self.policy.optimizer.zero_grad()
G = T.tensor(G, dtype=T.float).to(self.policy.device)
loss = 0
for g, logprob in zip(G, self.action_memory):
loss += -g * logprob
loss.backward()
self.policy.optimizer.step()
And after the change:
G = T.tensor(G, dtype=T.float).to(self.policy.device)
loss = 0
for g, logprob in zip(G, self.action_memory):
loss = -g * logprob
self.policy.optimizer.zero_grad()
loss.backward()
self.policy.optimizer.step()
I get the error:
File "g:\VScode_projects\pytorch_shenanigans\policy_gradient.py", line 86, in learn
loss.backward()
File "G:\Anaconda3\envs\pytorch_env\lib\site-packages\torch\tensor.py", line 185, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "G:\Anaconda3\envs\pytorch_env\lib\site-packages\torch\autograd\__init__.py", line 127, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [128, 4]], which is output 0 of TBackward, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
I read that this RuntimeError often has to do with having to clone something, because we're using the same tensor to compute itself but I can't make heads of tails of what is wrong in my case.
| This line, loss += -g * logprob, is what is wrong in your case.
Change it to this:
loss = loss + (-g * logprob)
And Yes, they are different. They perform the same operations but in different ways.
| https://stackoverflow.com/questions/66177532/ |
What distribution is used when you make a tensor with the torch.Tensor constructor in PyTorch? | I typed and ran torch.Tensor(2, 3) in Google Colab. It did work, but it returned a weird-valued 2x3 tensor which even includes nan.
tensor([[3.8202e-36, 0.0000e+00, 3.9236e-44],
[0.0000e+00, nan, 1.8750e+00]])
I searched PyTorch (1.7.1)'s torch.Tensor docs to find out what distribution the default constructor uses, but the case where you create a tensor with the Tensor class constructor was not documented.
What happens when you use the Tensor class constructor, and what are the parameters for it?
| I believe that torch.Tensor is identical to the torch.empty creation operator. It doesn't use a distribution to draw from, it's just a tensor filled with uninitialized values. Essentially used to allocate memory.
>>> torch.empty(2, 3)
tensor([[5.5699e-35, 0.0000e+00, 1.5975e-43],
[1.3873e-43, 1.4574e-43, 6.4460e-44]])
| https://stackoverflow.com/questions/66194534/ |
PyTorch installation issues on MacOS through Anaconda | I am trying to install PyTorch on my Macbook Pro. I had no issues installing NumPy or Matplotlib using the following commands:
conda install numpy
conda install matplotlib
When I then import those into Python console, they work correctly. However, when I try to import PyTorch I get the following error:
(myenv) $ % python
Python 3.9.1 (default, Dec 11 2020, 06:28:49)
[Clang 10.0.0 ] :: Anaconda, Inc. on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/jeasl/opt/anaconda3/envs/myenv/lib/python3.9/site-packages/torch/__init__.py", line 189, in <module>
_load_global_deps()
File "/Users/jeasl/opt/anaconda3/envs/myenv/lib/python3.9/site-packages/torch/__init__.py", line 142, in _load_global_deps
ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
File "/Users/jeasl/opt/anaconda3/envs/myenv/lib/python3.9/ctypes/__init__.py", line 382, in __init__
self._handle = _dlopen(self._name, mode)
OSError: dlopen(/Users/jeasl/opt/anaconda3/envs/myenv/lib/python3.9/site-packages/torch/lib/libtorch_global_deps.dylib, 10): Library not loaded: @rpath/libomp.dylib
Referenced from: /Users/jeasl/opt/anaconda3/envs/myenv/lib/python3.9/site-packages/torch/lib/libtorch_global_deps.dylib
Reason: image not found
I have absolutely no idea what is causing this, even after looking through several forums for answers. When I got to try and reinstall PyTorch, I get this:
(myenv) $ % conda install pytorch
Collecting package metadata (current_repodata.json): done
Solving environment: done
# All requested packages already installed.
So it seems like it is all downloaded correctly - I just can't import it when in the Python console.
Any idea how to get this working correctly?
| OP indicates use of Python 3.9 from Anaconda, but the PyTorch installer tool explicitly notes that one must use Python from the Conda Forge channel:
I have no issue with the following environment YAML:
File: pytorch.yaml
channels:
- pytorch
- conda-forge
- defaults
dependencies:
- python=3.9
- pytorch
- torchvision
- torchaudio
- numpy
- matplotlib
created with
conda env create -f pytorch.yaml -n foo
| https://stackoverflow.com/questions/66211541/ |
PyTorch - Where are kernels launched? | I need to get information about kernels that PyTorch launches. For example, call-stack information such as "main.py:24 -> ... -> callkernel.py:53" would be beneficial. Is there any way I can gather this information out of a PyTorch application execution? I am also currently searching through the source code of PyTorch, but I still could not find a line where a CUDA kernel is launched. My questions are twofold:
Can I get callstack at the time of kernel launch?
Can someone show me an example of kernel launch in the source of PyTorch?
| To get a helpful stack trace, you would most likely need to build pytorch with debug symbols (build instructions are here). I'm not sure if there are any debug builds available to download. But a stack trace might not make very much sense without some background, so here's a general outline of where things are defined in the codebase:
Most operators in PyTorch are implemented in the codebase as a C++ at::native namespace function within pytorch/aten/src/ATen/native. When PyTorch is built, codegen scripts automatically generate the Python functions and the Python-to-C++ bindings for the operators defined in native_functions.yaml, and the generated code is not checked into the repo (so you would have to either read the scripts or build PyTorch yourself if you want to see what's going on in codegen).
An at::native operator will usually call a device dispatch function for that operator, which is often suffixed with _stub. The dispatch function checks what device (cpu, cuda, etc) the arguments are on, and then runs a device-specific implementation. From there, another dispatch happens, which calls a datatype-specific implementation.
To go through an example, the add.out operator (which is called when you do torch.add(..., out=...) in Python) is declared here. Codegen generates everything needed to bind the Python function to at::native::add_out, which is defined here. Notice that that function calls add_stub, which is the device dispatch function.
A CPU implementation for add_stub is registered here and implemented here as add_kernel. A CUDA implementation is registered here and implemented here as add_kernel_cuda. Notice that both of these use a TensorIteratorBase object. Long story short, this object will iterate through each pair of elements in the tensor inputs that should be added together.
There is another dispatch within add_kernel and add_kernel_cuda which chooses a separate implementation based on the data type of the arguments. The separate data type implementations are generated from a shared template function. You can see that the CPU function also has a different implementation for a vectorized and a non-vectorized operation, while the CUDA implementation just has the one here.
If you want to see a full stack trace, you could run a script with gdb --args python <script name>, and create a break point for the specific kernel you want. Again, debug symbols would be needed to make sense of it.
| https://stackoverflow.com/questions/66214106/ |
regarding the trick of using 1*1 convolution | I once read the following statement on using 1*1 convolution, which can help connect the input and output with different dimensions:
For example, to reduce the activation dimensions (HxW) by a factor of 2, you can use a 1x1 convolution with a stride of 2.
How to understand this example?
| You can use a stride of 2. However, I wouldn't say this is a trick, not like a magic solution to retain information. You will lose half of the information. I wouldn't qualify this method as a pooling method either.
The kernel size is one pixel high and one pixel wide, and will move (stride) two pixels at a time. As a consequence, for every pixel there is on a row, the kernel will output a single value every two pixels, i.e. will output half the number of pixels on that row. Equivalently for the height, the kernel will completely discard half of the rows.
Here is the example of a 2D convolution of size 1x1 and stride 2 over a 6x6 input. On the left, the 1x1 patches in dark yellow are the successive positions of the kernel. On the right is the resulting image shaped 3x3.
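A short sketch reproducing that 6x6 example in code (the channel count of 3 is an arbitrary choice for illustration):
import torch
import torch.nn as nn

x = torch.randn(1, 3, 6, 6)                      # one 3-channel 6x6 input
conv = nn.Conv2d(3, 3, kernel_size=1, stride=2)  # 1x1 kernel, stride 2
print(conv(x).shape)                             # torch.Size([1, 3, 3, 3]): H and W halved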
| https://stackoverflow.com/questions/66218825/ |
How to parallelize a training loop over samples of a batch when only CPU is available in PyTorch? | I want to parallelize over single examples or a batch of examples (in my situation I only have CPUs, up to 112 of them).
I tried it, but I get a bug that the losses cannot carry gradients out of separate processes (which entirely ruins my attempt). I still want to do it, and it is essential that after the multiprocessing happens I can do an optimizer step. How do I get around it? I made a totally self-contained example:
import torch.nn as nn
from torch.optim.lr_scheduler import StepLR
from torch.utils.data import Dataset, DataLoader
from torch.multiprocessing import Pool
class SimpleDataSet(Dataset):
def __init__(self, Din, num_examples=23):
self.x_dataset = [torch.randn(Din) for _ in range(num_examples)]
# target function is x*x
self.y_dataset = [x**2 for x in self.x_dataset]
def __len__(self):
return len(self.x_dataset)
def __getitem__(self, idx):
return self.x_dataset[idx], self.y_dataset[idx]
def get_loss(args):
x, y, model = args
y_pred = model(x)
criterion = nn.MSELoss()
loss = criterion(y_pred, y)
return loss
def get_dataloader(D, num_workers, batch_size):
ds = SimpleDataSet(D)
dl = DataLoader(ds, batch_size=batch_size, num_workers=num_workers)
return dl
def train_fake_data():
num_workers = 2
Din, Dout = 3, 1
model = nn.Linear(Din, Dout).share_memory()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
batch_size = 2
num_epochs = 10
# num_batches = 5
num_procs = 5
dataloader = get_dataloader(Din, num_workers, batch_size)
scheduler = StepLR(optimizer, step_size=1, gamma=0.7)
for epoch in range(num_epochs):
for _, batch in enumerate(dataloader):
batch = [(torch.randn(Din), torch.randn(Dout), model) for _ in batch]
with Pool(num_procs) as pool:
optimizer.zero_grad()
losses = pool.map(get_loss, batch)
loss = torch.mean(losses)
loss.backward()
optimizer.step()
# scheduler
scheduler.step()
if __name__ == '__main__':
# start = time.time()
# train()
train_fake_data()
# print(f'execution time: {time.time() - start}')
Error:
Traceback (most recent call last):
File "/Users/brando/anaconda3/envs/coq_gym/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3427, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-ea57e03ba088>", line 1, in <module>
runfile('/Users/brando/ML4Coq/playground/multiprocessing_playground/multiprocessing_cpu_pytorch.py', wdir='/Users/brando/ML4Coq/playground/multiprocessing_playground')
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/Users/brando/ML4Coq/playground/multiprocessing_playground/multiprocessing_cpu_pytorch.py", line 95, in <module>
train_fake_data()
File "/Users/brando/ML4Coq/playground/multiprocessing_playground/multiprocessing_cpu_pytorch.py", line 83, in train_fake_data
losses = pool.map(get_loss, batch)
File "/Users/brando/anaconda3/envs/coq_gym/lib/python3.7/multiprocessing/pool.py", line 290, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "/Users/brando/anaconda3/envs/coq_gym/lib/python3.7/multiprocessing/pool.py", line 683, in get
raise self._value
multiprocessing.pool.MaybeEncodingError: Error sending result: '[tensor(0.5237, grad_fn=<MseLossBackward>)]'. Reason: 'RuntimeError('Cowardly refusing to serialize non-leaf tensor which requires_grad, since autograd does not support crossing process boundaries. If you just want to transfer the data, call detach() on the tensor before serializing (e.g., putting it on the queue).')'
I am sure I want to do this. How should I be doing this?
New attempt using DDP
"""
Based on: https://pytorch.org/tutorials/intermediate/ddp_tutorial.html
Note: as opposed to the multiprocessing (torch.multiprocessing) package, processes can use
different communication backends and are not restricted to being executed on the same machine.
"""
import torch
from torch import nn, optim
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP
import os
num_epochs = 5
batch_size = 8
Din, Dout = 10, 5
data_x = torch.randn(batch_size, Din)
data_y = torch.randn(batch_size, Dout)
data = [(i*data_x, i*data_y) for i in range(num_epochs)]
class OneDeviceModel(nn.Module):
"""
Toy example for a model ran in parallel but not distributed accross gpus
(only processes with their own gpu or hardware)
"""
def __init__(self):
super().__init__()
self.net1 = nn.Linear(Din, Din)
self.relu = nn.ReLU()
self.net2 = nn.Linear(Din, Dout)
def forward(self, x):
return self.net2(self.relu(self.net1(x)))
def setup_process(rank, world_size, backend='gloo'):
"""
Initialize the distributed environment (for each process).
gloo: is a collective communications library (https://github.com/facebookincubator/gloo). My understanding is that
it's a library/API for process to communicate/coordinate with each other/master. It's a backend library.
"""
# set up the master's ip address so this child process can coordinate
# os.environ['MASTER_ADDR'] = '127.0.0.1'
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12355'
# - use NCCL if you are using gpus: https://pytorch.org/tutorials/intermediate/dist_tuto.html#communication-backends
if torch.cuda.is_available():
backend = 'nccl'
# Initializes the default distributed process group, and this will also initialize the distributed package.
dist.init_process_group(backend, rank=rank, world_size=world_size)
def cleanup():
""" Destroy a given process group, and deinitialize the distributed package """
dist.destroy_process_group()
def run_parallel_training_loop(rank, world_size):
"""
Distributed function to be implemented later.
This is the function that is actually ran in each distributed process.
Note: as DDP broadcasts model states from rank 0 process to all other processes in the DDP constructor,
you don’t need to worry about different DDP processes start from different model parameter initial values.
"""
print()
print(f"Start running DDP with model parallel example on rank: {rank}.")
print(f'current process: {mp.current_process()}')
print(f'pid: {os.getpid()}')
setup_process(rank, world_size)
# create model and move it to GPU with id rank
model = OneDeviceModel().to(rank) if torch.cuda.is_available() else OneDeviceModel().share_memory()
# ddp_model = DDP(model, device_ids=[rank])
ddp_model = DDP(model)
for batch_idx, batch in enumerate(data):
x, y = batch
loss_fn = nn.MSELoss()
optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)
optimizer.zero_grad()
outputs = ddp_model(x)
labels = y.to(rank) if torch.cuda.is_available() else y
# Gradient synchronization communications take place during the backward pass and overlap with the backward computation.
loss_fn(outputs, labels).backward() # When the backward() returns, param.grad already contains the synchronized gradient tensor.
optimizer.step() # TODO how does the optimizer know to do the gradient step only once?
print()
print(f"Start running DDP with model parallel example on rank: {rank}.")
print(f'current process: {mp.current_process()}')
print(f'pid: {os.getpid()}')
# Destroy a given process group, and deinitialize the distributed package
cleanup()
def main():
print()
print('running main()')
print(f'current process: {mp.current_process()}')
print(f'pid: {os.getpid()}')
# args
world_size = mp.cpu_count()
mp.spawn(run_parallel_training_loop, args=(world_size,), nprocs=world_size)
if __name__ == "__main__":
print('starting __main__')
main()
print('Done!\a\n')
it seems it works but my question is in line 74 do I need to do this
model = OneDeviceModel().to(rank) if torch.cuda.is_available() else OneDeviceModel().share_memory()
or
model = OneDeviceModel().to(rank) if torch.cuda.is_available() else OneDeviceModel()
for it to work properly in multiple CPUs?
Serial is faster than parallel even if I have 112 cpu cores?
My current issue is that the cpu parallel job is slower than the serially running one when only cpus are available.
I want to know how to set up python and parallel cpus. e.g. if I have X cpus how many processes should I be running...X? or what? How do I choose this number, even if its heursitics rough.
related links from research:
https://discuss.pytorch.org/t/multiprocessing-for-loop-on-cpu/59836
How to use multiprocessing in PyTorch?
https://discuss.pytorch.org/t/how-to-parallelize-a-loop-over-the-samples-of-a-batch/32698/7
https://www.reddit.com/r/pytorch/comments/sm073v/how_to_parallelize_a_training_loop_ever_samples/
| Torch will use multiple CPUs to parallelize operations, so your serial version may already be using multi-core vectorization.
Take this simple example
import torch
c = 0;
for i in range(10000):
A = torch.randn(1000, 1000, device='cpu');
B = torch.randn(1000, 1000, device='cpu');
c += torch.sum(A @ B)
No code is written to parallelize anything, yet roughly 80% of the 12 CPUs are used with the default configuration.
You can use torch.set_num_threads to set intraop parallelism on CPU. In particular if you are running multiple process and you want each process to use a single CPU you may want to set in each process the intraop parallelism to 1.
However, parallelizing the operations has a cost, I am unable go into the implementation details but we can run a quick experiment that shows the overhead of using multiple threads.
import matplotlib.pyplot as plt
import numpy as np
import torch;
import time;
A = torch.randn(1000, 1000, device='cpu');
B = torch.randn(1000, 1000, device='cpu');
funcs = {
'sin': lambda a,b: torch.sin(A),
'tanh': lambda a,b: torch.tanh(A),
'log': lambda a,b: torch.log(A),
'matmul': lambda a,b: A @ B.T
}
t = np.zeros(20)
for k,f in funcs.items():
for i in range(1, len(t) + 1):
torch.set_num_threads(i)
c = 0;
t0 = time.time();
for _ in range(100):
f(A,B)
tf = time.time()
t[i-1] = (tf - t0)*i;
plt.plot(np.arange(1, len(t)+1), t, '-o', label=k)
plt.xlabel('Number of threads')
plt.legend()
plt.ylabel('Core x time')
The operations tend to run faster with more threads.
But if we take the total CPU time, by multiplying by the number of threads, we see that the single thread version is more efficient.
If you are able to parallelize your experiment at a higher level, by running independent processes, you should try that with a single core for each process, otherwise each process will try to use all the CPUs and all of them will run very slowly because your system is overloaded.
Tweaking DDP example
I intentionally modified the hyperparameters of your example script in a way that weighs in favor of multiprocessing:
comparably less initialization overhead
comparably less communication between processes
"""
Based on: https://pytorch.org/tutorials/intermediate/ddp_tutorial.html
Note: as opposed to the multiprocessing (torch.multiprocessing) package, processes can use
different communication backends and are not restricted to being executed on the same machine.
"""
import torch
from torch import nn, optim
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP
import argparse
import os
# More than one epoch so that the initialization is less significant
# than compared to the model processing time
num_epochs = 10
# for the experiment select a number that has a lot of divisors
# as I want to test with equal number of batches
num_batches = 16*9*5
# Uses a larger batch so that more work is done in each process
# between two gradient synchronizations
# apparently the intraop optimization is not helping
# (at least not too much) in the batch dimension
batch_size = 10000
# Use smaller dimensions, so that the intraop parallelization becomes less
# helpful
Din, Dout = 3, 5
data_x = torch.randn(batch_size, Din)
data_y = torch.randn(batch_size, Dout)
data = [(i*data_x, i*data_y) for i in range(num_batches)]
class OneDeviceModel(nn.Module):
"""
Toy example for a model run in parallel but not distributed across gpus
(only processes with their own gpu or hardware)
"""
def __init__(self):
super().__init__()
# -- Use more layers
self.net = [nn.Linear(Din, Din) for _ in range(10)]
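# note: a plain Python list does not register these Linear layers as submodules; nn.ModuleList would make them visible to .parameters() and DDP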
# -- Bob: use more complex activation
self.tanh = nn.Tanh()
self.sigmoid = nn.Sigmoid()
self.relu = nn.ReLU()
self.net2 = nn.Linear(Din, Dout)
def forward(self, x):
# apply the 10 layers sequentially
for i in range(10):
x = self.net[i](x)
x = self.sigmoid(x)
x = self.tanh(x)
x = self.relu(x)
return self.net2(x)
def setup_process(rank, world_size, backend='gloo'):
"""
Initialize the distributed environment (for each process).
gloo: is a collective communications library (https://github.com/facebookincubator/gloo). My understanding is that
it's a library/API for process to communicate/coordinate with each other/master. It's a backend library.
"""
# set up the master's ip address so this child process can coordinate
# os.environ['MASTER_ADDR'] = '127.0.0.1'
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12355'
# - use NCCL if you are using gpus: https://pytorch.org/tutorials/intermediate/dist_tuto.html#communication-backends
if torch.cuda.is_available():
backend = 'nccl'
# Initializes the default distributed process group, and this will also initialize the distributed package.
dist.init_process_group(backend, rank=rank, world_size=world_size)
def cleanup():
""" Destroy a given process group, and deinitialize the distributed package """
dist.destroy_process_group()
def run_parallel_training_loop(rank, world_size):
"""
Distributed function to be implemented later.
This is the function that is actually ran in each distributed process.
Note: as DDP broadcasts model states from rank 0 process to all other processes in the DDP constructor,
you don’t need to worry about different DDP processes start from different model parameter initial values.
"""
print()
print(f"Start running DDP with model parallel example on rank: {rank}.")
print(f'current process: {mp.current_process()}')
print(f'pid: {os.getpid()}')
setup_process(rank, world_size)
torch.set_num_threads(mp.cpu_count() // world_size)
# create model and move it to GPU with id rank
model = OneDeviceModel().to(rank) if torch.cuda.is_available() else OneDeviceModel().share_memory()
# ddp_model = DDP(model, device_ids=[rank])
ddp_model = DDP(model)
for _ in range(num_epochs):
for batch_idx, batch in enumerate(data[rank::world_size]):
x, y = batch
loss_fn = nn.MSELoss()
optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)
optimizer.zero_grad()
outputs = ddp_model(x)
labels = y.to(rank) if torch.cuda.is_available() else y
# Gradient synchronization communications take place during the backward pass and overlap with the backward computation.
loss_fn(outputs, labels).backward() # When the backward() returns, param.grad already contains the synchronized gradient tensor.
optimizer.step() # TODO how does the optimizer know to do the gradient step only once?
print()
print(f"Start running DDP with model parallel example on rank: {rank}.")
print(f'current process: {mp.current_process()}')
print(f'pid: {os.getpid()}')
# Destroy a given process group, and deinitialize the distributed package
cleanup()
def main():
print()
print('running main()')
print(f'current process: {mp.current_process()}')
print(f'pid: {os.getpid()}')
parser = argparse.ArgumentParser()
parser.add_argument('--world-size', default=1, type=int)
args = parser.parse_args()
assert num_batches % args.world_size == 0
mp.spawn(run_parallel_training_loop, args=(args.world_size,), nprocs=args.world_size)
if __name__ == "__main__":
print('starting __main__')
main()
print('Done!\a\n')
$ time python3 ddp.py --world-size 1 > /dev/null
real 0m59.092s
user 8m46.589s
sys 0m7.320s
$ time python3 ddp.py --world-size 1 > /dev/null
real 1m11.124s
user 10m54.209s
sys 0m9.595s
$ time python3 ddp.py --world-size 6 > /dev/null
real 0m18.348s
user 2m28.799s
sys 0m18.068s
$ time python3 ddp.py --world-size 12 > /dev/null
real 0m26.352s
user 4m3.074s
sys 0m39.179s
$ time python3 ddp.py --world-size 3 > /dev/null
real 0m23.047s
user 3m51.172s
sys 0m11.483s
$ time python3 ddp.py --world-size 4 > /dev/null
real 0m18.195s
user 2m55.241s
sys 0m12.841s
$ time python3 ddp.py --world-size 2 > /dev/null
real 0m26.955s
user 4m15.837s
sys 0m7.127s
If I remove the line
torch.set_num_threads(mp.cpu_count() // world_size)
$ time python3 ddp.py --world-size 4 > /dev/null
real 0m40.574s
user 6m39.176s
sys 0m19.025s
$ time python3 ddp.py --world-size 2 > /dev/null
real 0m28.066s
user 3m17.775s
sys 0m8.410s
$ time python3 ddp.py --world-size 1 > /dev/null
real 0m37.114s
user 2m19.743s
sys 0m4.866s
Using
torch.set_num_threads(mp.cpu_count() // world_size // 2)
$ time python3 ddp.py --world-size 6 > /dev/null
real 0m16.399s
user 1m38.915s
sys 0m20.780s
$ time python3 ddp.py --world-size 4 > /dev/null
real 0m15.649s
user 1m1.821s
sys 0m13.589s
$ time python3 ddp.py --world-size 3 > /dev/null
real 0m16.947s
user 1m29.696s
sys 0m10.069s
$ time python3 ddp.py --world-size 2 > /dev/null
real 0m21.851s
user 2m4.564s
sys 0m7.486s
My Opinion
DDP on a single node does not seem particularly advantageous, unless you have a model that does a lot of work that is not handled well by PyTorch's intraop parallelism, have large batches, and preferably models with fewer parameters and more operations, meaning fewer gradients to synchronize, e.g. a convolutional model on a very large input.
Another scenario where DDP might be helpful is if you are using too much Python in your model instead of vectorized operations.
| https://stackoverflow.com/questions/66226135/ |
How to squeeze all but one torch dims? | torch.squeeze can convert the shape of a tensor to not have dimensions of size 1.
I want to squeeze my tensor in all dimensions but one (in this example, not squeeze dim=0).
All I can see in the doc is
dim (int, optional) – if given, the input will be squeezed only in
this dimension
I want the opposite:
t = torch.zeros(5, 1, 6, 1, 7, 1)
squeezed = torch.magic_squeeze(keep_dim=3)
assert squeezed == (5, 6, 1, 7)
Can this be done?
| Reshape will let you accomplish what you want to do:
import torch
t = torch.zeros(5, 1, 6, 1, 7, 1)
t = t.reshape((5, 6, 1, 7))
>>> torch.Size([5, 6, 1, 7])
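torch has no magic_squeeze, but a small helper (just a sketch, the name is made up) can compute the kept shape from keep_dim for any input:
def squeeze_all_but(t, keep_dim):
    new_shape = [s for i, s in enumerate(t.shape) if s != 1 or i == keep_dim]
    return t.reshape(new_shape)

squeeze_all_but(torch.zeros(5, 1, 6, 1, 7, 1), keep_dim=3).shape  # torch.Size([5, 6, 1, 7])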
| https://stackoverflow.com/questions/66226505/ |
PyTorch, select batches according to label in data column | I have a dataset like such:
| index | tag  | feature1 | feature2 | target |
|-------|------|----------|----------|--------|
| 1     | tag1 | 1.4342   | 88.4554  | 0.5365 |
| 2     | tag1 | 2.5656   | 54.5466  | 0.1263 |
| 3     | tag2 | 5.4561   | 845.556  | 0.8613 |
| 4     | tag3 | 6.5546   | 8.52545  | 0.7864 |
| 5     | tag3 | 8.4566   | 945.456  | 0.4646 |
The number of entries in each tag is not always the same.
And my objective is to load only the data with a specific tag or tags, so that I get only the entries in tag1 for one mini-batch and then tag2 for another mini-batch if I set batch_size=1. Or for instance tag1 and tag2 if I set batch_size=2
The code I have so far disregards completely the tag label and just chooses the batches randomly.
I built the datasets like such:
# features is a matrix with all the features columns through all rows
# target is a vector with the target column through all rows
featuresTrain, targetTrain = projutils.get_data(train=True, config=config)
train = torch.utils.data.TensorDataset(featuresTrain, targetTrain)
train_loader = make_loader(train, batch_size=config.batch_size)
And my loader (generically) looks like this:
def make_loader(dataset, batch_size):
loader = torch.utils.data.DataLoader(dataset=dataset,
batch_size=batch_size,
shuffle=True,
pin_memory=True,
num_workers=8)
return loader
Which I then train like this:
for epoch in range(config.epochs):
for _, (features, target) in enumerate(loader):
loss = train_batch(features, target, model, optimizer, criterion)
And the train_batch:
def train_batch(features, target, model, optimizer, criterion):
features, target = features.to(device), target.to(device)
# Forward pass ➡
outputs = model(features)
loss = criterion(outputs, target)
return loss
| A simple dataset that implements roughly the characteristics you're looking for as best as I can tell.
class CustomDataset(data.Dataset):
def __init__(self,featuresTrain,targetsTrain,tagsTrain,sample_equally = False):
# self.tags should be a tensor in k-hot encoding form so a 2D tensor,
self.tags = tagsTrain
self.x = featuresTrain
self.y = targetsTrain
self.unique_tagsets = None
self.sample_equally = sample_equally
# self.active_tags is a 1D k-hot encoding vector
self.active_tags = self.get_random_tag_set()
def get_random_tag_set(self):
# gets all unique sets of tags and returns one randomly
if self.unique_tagsets is None:
self.unique_tagsets = self.tags.unique(dim = 0)
if self.sample_equally:
rand_idx = torch.randint(len(self.unique_tagsets), [1]).item()
return self.unique_tagsets[rand_idx]
else:
rand_idx = torch.randint(len(self.tags), [1]).item()
return self.tags[rand_idx]
def set_tags(self,tags):
# specifies the set of tags that must be present for a datum to be selected
self.active_tags = tags
def __getitem__(self,index):
# get all indices of elements with self.active_tags
indices = torch.where((self.tags == self.active_tags).all(dim=1))[0]
# we select an index based on the indices of the elements that have the tag set
idx = indices[index % len(indices)]
item = self.x[idx], self.y[idx]
return item
def __len__(self):
return len(self.y)
This dataset randomly selects a set of tags. Then, every time __getitem__() is called, it uses the index specified to select from amongst the data elements that have the set of tags. You can call set_tags() or get_random_tag_set() then set_tags() after each minibatch or however often you want to change up the tagset, or you can manually specify the tagset yourself. The dataset inherits from torch.utils.data.Dataset, so you should be able to use it with a torch.utils.data.DataLoader without modification.
You can specify whether you'd like to sample each set of tags according to its prevalence, or whether you'd like to sample all tagsets equally regardless of how many elements have that set, using sample_equally.
In short, this dataset is a tiny bit rough around the edges but should allow you to sample batches all with the same tag set. The main shortcoming is that each element will likely be sampled more than once per batch.
For the initial encoding, let's say that to start, each data example has a list of tags, so tags is a list of lists, each sublist containing tags. The following code converts this to k-hot encoding:
def to_k_hot(tags):
all_tags = []
for ex in tags:
for tag in ex:
all_tags.append(tag)
unique_tags = list(set(all_tags)) # remove duplicates
tagsTrain = torch.zeros([len(tags), len(unique_tags)])
for i in range(len(tags)): # index through all examples
for j in range(len(unique_tags)): # index through all unique_tags
if unique_tags[j] in tags[i]:
tagsTrain[i,j] = 1
return tagsTrain
As an example, say you had the following tags for a dataset:
tags = [ ['tag1'],
['tag1','tag2'],
['tag3'],
['tag2'],
[],
['tag1','tag2','tag3'] ]
Calling to_k_hot(tags) would return (assuming the unique tags come out ordered as [tag1, tag2, tag3]):
tensor([[1,0,0],
[1,1,0],
[0,0,1],
[0,1,0],
[0,0,0],
[1,1,1]])
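Putting it together, a rough usage sketch that reuses the names from your question (featuresTrain, targetTrain) and the tags list above; the batch size and epoch count are placeholders:
dataset = CustomDataset(featuresTrain, targetTrain, to_k_hot(tags))
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

for epoch in range(10):
    dataset.set_tags(dataset.get_random_tag_set())  # pick the tag set for this pass
    for features, target in loader:
        pass  # e.g. train_batch(features, target, model, optimizer, criterion)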
| https://stackoverflow.com/questions/66228697/ |
Vanishing seq_len in attention-based BiLSTM | I'm studying several implementations of self attention-based BiLSTM and I don't understand why in each of them the input and output size are different. In particular I refer to the following codes taken from different implementations:
Implementations 1 and 2
def attnetwork(self, encoder_out, final_hidden):
# encoder_out shape = (batch_size, seq_len, n_hidden)
# final_hidden shape = (1, batch_size, n_hidden)
hidden = final_hidden.squeeze(0)
attn_weights = torch.bmm(encoder_out, hidden.unsqueeze(2)).squeeze(2)
soft_attn_weights = F.softmax(attn_weights, 1)
new_hidden = torch.bmm(encoder_out.transpose(1,2), soft_attn_weights.unsqueeze(2)).squeeze(2)
return new_hidden # shape = (batch_size, n_hidden)
As you can see, this implementation takes as input two tensors of shape (batch_size, seq_len, n_hidden) and (1, batch_size, n_hidden), respectively, and returns a tensor of shape (batch_size, n_hidden). But where has the seq_len dimension gone? I need the output to have the same shape as the input (i.e. (batch_size, seq_len, n_hidden)).
Another implementation where the input size does not match the output size:
def attention(self,H):
M = torch.tanh(H) # Non-linear transformation size:(batch_size, hidden_dim, seq_len)
a = F.softmax(torch.bmm(self.att_weight,M),dim=2) # a.Size : (batch_size,1,seq_len)
a = torch.transpose(a,1,2) # (batch_size,seq_len,1)
return torch.bmm(H,a) # (batch_size,hidden_dim,1)
Another implementation with the same problem:
def attention(self, rnn_out, state):
merged_state = torch.cat([s for s in state],1)
merged_state = merged_state.squeeze(0).unsqueeze(2)
# (batch, seq_len, cell_size) * (batch, cell_size, 1) = (batch, seq_len, 1)
weights = torch.bmm(rnn_out, merged_state)
weights = torch.nn.functional.softmax(weights.squeeze(2)).unsqueeze(2)
# (batch, cell_size, seq_len) * (batch, seq_len, 1) = (batch, cell_size, 1)
return torch.bmm(torch.transpose(rnn_out, 1, 2), weights).squeeze(2)
How could one output a tensor of the same size as the input without "breaking" the self-attention mechanism?
Thank you!
EDIT:
The forward function I have to use is this:
def forward(self, x, x_len):
x = nn.utils.rnn.pack_padded_sequence(x, x_len, batch_first=True)
out1, (h_n, c_n) = self.lstm1(x)
# out1 = (seq_len, batch, num_directions * hidden_size)
# h_n = (num_layers * num_directions, batch, hidden_size)
x, lengths = nn.utils.rnn.pad_packed_sequence(out1, batch_first=True)
x, att1 = self.atten1(x, lengths) # skip connect
return x
I absolutely need the final x in return x to have the shape (batch_size, seq_len, hidden_state) (possibly in another order, so that a transpose is enough to fix it).
| From my experience, what an attention-based model does is:
1. Calculate some relationship between the decoder hidden states and the encoder outputs.
2. Take a softmax to get the attention distribution.
3. Take a weighted sum of the encoder outputs to get the attention output.
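In code, one such step looks roughly like this (a sketch using the shapes from the first snippet in the question; dec_hidden is the (batch_size, n_hidden) decoder state):
dec_hidden = final_hidden.squeeze(0)                                  # (batch, n_hidden)
scores = torch.bmm(encoder_out, dec_hidden.unsqueeze(2)).squeeze(2)   # step 1: (batch, seq_len)
attn_dist = torch.softmax(scores, dim=1)                              # step 2: attention distribution
attn_out = torch.bmm(attn_dist.unsqueeze(1), encoder_out).squeeze(1)  # step 3: (batch, n_hidden)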
And a Seq2seq model acts like this:
1. Pass your sequence (e.g. a sentence) through an encoder to get the encoder outputs and a hidden state, and use this hidden state to initialize your decoder.
2. Calculate your attention output from the initial decoder hidden state and the encoder outputs.
3. Feed your decoder a start character (e.g. a start-of-sequence token), get a single character as output and a new hidden state, then use your attention model to calculate a new attention output.
4. Feed your decoder a single-character input (defined on your own; it may depend on a teacher-forcing ratio) and the attention output (you may concatenate the attention output with the decoder hidden state, or something else), and you get a new character.
5. Repeat step 4 to get your predicted sequence character by character.
You may have a look at this (from page 59):
http://web.stanford.edu/class/cs224n/slides/cs224n-2021-lecture07-nmt.pdf
Every time a character is sent to your LSTM decoder, it outputs a new character and its hidden state updates to a new value. The key point is that the hidden states keep changing.
So a (batch_size, hidden_dim, 1) attention output is reasonable: you give one character to your model, use the corresponding attention output to help predict the next character, then get a new hidden state and a new attention output, and so on. You shouldn't give the whole sequence to your LSTM decoder at once, because then you couldn't make use of the per-step hidden states.
| https://stackoverflow.com/questions/66233078/ |
How perform unsupervised clustering on numbers in an Array using PyTorch | I got this array and I want to cluster/group the numbers into similar values.
An example of input array:
array([ 57, 58, 59, 60, 61, 78, 79, 80, 81, 82, 83, 101, 102, 103, 104, 105, 106]
expected result :
array([57,58,59,60,61]), ([78,79,80,81,82,83]), ([101,102,103,104,105,106])
I tried to use clustering but I don't think it's gonna work if I don't know how many I'm going to split up.
true = np.where(array>=1)
-> (array([ 57, 58, 59, 60, 61, 78, 79, 80, 81, 82, 83, 101, 102,
103, 104, 105, 106], dtype=int64),)
| You can compute a kind of discrete derivative of this array so that you can track changes better; assume your array is:
A = np.array([ 57, 58, 59, 60, 61, 78, 79, 80, 81, 82, 83, 101, 102, 103, 104, 105, 106])
so you can make a difference vector by simply convolving your vector with [-1, 1]:
A_ = abs(np.convolve(A, np.array([-1, 1])))
then A_ is:
array([57, 1, 1, 1, 1, 17, 1, 1, 1, 1, 1, 18, 1, 1, 1, 1, 1, 106])
now you can define a threshold like 5 and find the cluster boundaries.
THRESHOLD = 5
cluster_bounds = np.argwhere(A_ > THRESHOLD)
now cluster_bounds is:
array([[0], [5], [11], [17]], dtype=int32)
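If you then want the actual groups (the expected result in the question), one possible follow-up is to split A at the inner boundaries, dropping the leading and trailing markers:
clusters = np.split(A, cluster_bounds.flatten()[1:-1])
# [array([57, 58, 59, 60, 61]), array([78, 79, 80, 81, 82, 83]), array([101, 102, 103, 104, 105, 106])]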
| https://stackoverflow.com/questions/66238110/ |
Pytorch tensor shape | I have a simple question regarding the shapes of 2 different tensors - tensor_1 and tensor_2.
tensor_1.shape outputs torch.Size([784, 1]);
tensor_2.shape outputs torch.Size([784]).
I understand that the first one is rank-2 tensor, whereas the second is rank-1. What's hard for me is to conceptualize the difference between shape [784, 1] and [784].
Is it correct to think that tensor_1 has 784 rows and 1 column with a scalar inside each place? If so, why can't we call it simply a column vector (which is, in fact, rank-1 tensor), which also has values displayed vertically?
Similarly, can the shape of the second tensor ([784]) be imagined as 784 values inside a horizontal vector?
| You can't call tensor_1 a column vector because of its number of dimensions; indexing that particular tensor is done in 2D,
e.g. tensor_1[1,1]
Coming to tensor_2, it's a rank-1 tensor with only one dimension (of size 784).
And of course you can make it have a shape of tensor_1, just do
tensor_2 = tensor_2.unsqueeze(1) #This method will make tensor_2 have a shape of tensor_1
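A quick illustration of going back and forth between the two shapes (random data, sizes from the question):
import torch
t2 = torch.randn(784)    # torch.Size([784])
t1 = t2.unsqueeze(1)     # torch.Size([784, 1])
assert torch.equal(t1.squeeze(1), t2)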
| https://stackoverflow.com/questions/66247473/ |
Testing my CNN on a small set of image but training has no effect | I constructed a CNN to recognize 9 classes of gestures in images of 224x224x3. I try to test its functionality by training it on 16 images and see if it overfits to 100 accuracy. Here is my network
import torch.nn as nn
class learn_gesture(nn.Module):
def __init__(self):
super(learn_gesture, self).__init__()
self.name = "gesture_learner"
self.conv1 = nn.Conv2d(in_channels=3, out_channels=20, kernel_size=5, stride=1, padding=2)
self.conv2 = nn.Conv2d(in_channels=20, out_channels=50, kernel_size=5, stride=1, padding=2)
self.conv3 = nn.Conv2d(in_channels=50, out_channels=100, kernel_size=5, stride=1, padding=2)
self.conv4 = nn.Conv2d(in_channels=100, out_channels=200, kernel_size=5, stride=1, padding=2)
self.conv5 = nn.Conv2d(in_channels=200, out_channels=400, kernel_size=5, stride=1, padding=2)
self.pool1 = nn.MaxPool2d(2,2)
self.pool2 = nn.MaxPool2d(2,2)
self.pool3 = nn.MaxPool2d(2,2)
self.pool4 = nn.MaxPool2d(2,2)
self.pool5 = nn.MaxPool2d(2,2)
self.fc1 = nn.Linear(7*7*400, 10000)
self.fc2 = nn.Linear(10000, 3000)
self.fc3 = nn.Linear(3000, 9)
def forward(self, x):
x = self.pool1(F.relu(self.conv1(x))) # gives 112*20
x = self.pool2(F.relu(self.conv2(x))) # gives 56*50
x = self.pool3(F.relu(self.conv3(x))) # gives 28*100
x = self.pool4(F.relu(self.conv4(x))) # gives 14*200
x = self.pool5(F.relu(self.conv5(x))) # gives 7*400
x = x.view(-1, 7*7*400)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
return F.softmax(self.fc3(x), dim=1)
And here is the training code:
overfit_model = learn_gesture()
num_epochs = 200 #set it high so that it will converge
## loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(over_model.parameters(), lr=0.001, momentum=0.9) #optimizer is SGD with momentum
## set up some empty np arrays to store our result for plotting later
train_err = np.zeros(num_epochs)
train_loss = np.zeros(num_epochs)
################################################ train the network
for epoch in range(num_epochs):
total_train_loss = 0
total_train_err = 0
total_epoch = 0
for i, data in enumerate(smallLoader, 0):
inputs, labels = data
outputs = over_model(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
optimizer.zero_grad()
corr = (determine_corr(outputs, labels)) # get a list of bool representing right or wrong predictions in the batch
total_train_err += corr.count(False)
total_train_loss += loss.item()
total_epoch += len(labels)
train_err[epoch] = float(total_train_err) / total_epoch
train_loss[epoch] = float(total_train_loss) / (i+1)
print(("Epoch {}: Train err: {}, Train loss: {}").format(
epoch + 1,
train_err[epoch],
train_loss[epoch]))
The training has no effect; neither the accuracy nor the loss improves. I just absolutely can't figure out where the error is. Any help is greatly appreciated!
############### Update ##############
I got rid of the softmax in the forward function. Surprisingly, the performance of the model hasn't changed much. And I notice that some elements in the output now are negative and the elements across all classes do not add to 1. Is this supposed to happen?
output:
tensor([[ 0.0165, -0.0041, 0.0043, 0.0017, 0.0238, 0.0329, -0.0265, -0.0224,
-0.0187],
[ 0.0163, -0.0044, 0.0036, 0.0028, 0.0248, 0.0334, -0.0268, -0.0218,
-0.0194],
[ 0.0161, -0.0046, 0.0041, 0.0019, 0.0240, 0.0333, -0.0266, -0.0223,
-0.0192],
[ 0.0190, -0.0044, 0.0035, 0.0015, 0.0244, 0.0322, -0.0267, -0.0223,
-0.0187],
[ 0.0174, -0.0048, 0.0033, 0.0021, 0.0251, 0.0328, -0.0257, -0.0225,
-0.0190],
[ 0.0175, -0.0041, 0.0033, 0.0031, 0.0241, 0.0329, -0.0264, -0.0222,
-0.0192],
[ 0.0168, -0.0042, 0.0033, 0.0022, 0.0251, 0.0335, -0.0269, -0.0225,
-0.0195],
[ 0.0163, -0.0047, 0.0037, 0.0030, 0.0243, 0.0336, -0.0265, -0.0227,
-0.0192],
[ 0.0165, -0.0043, 0.0038, 0.0026, 0.0242, 0.0337, -0.0264, -0.0222,
-0.0191],
[ 0.0163, -0.0051, 0.0038, 0.0016, 0.0236, 0.0338, -0.0258, -0.0223,
-0.0195],
[ 0.0173, -0.0037, 0.0038, 0.0018, 0.0236, 0.0322, -0.0269, -0.0225,
-0.0191],
[ 0.0174, -0.0044, 0.0031, 0.0019, 0.0241, 0.0334, -0.0266, -0.0224,
-0.0200],
[ 0.0164, -0.0038, 0.0034, 0.0029, 0.0245, 0.0342, -0.0269, -0.0225,
-0.0200],
[ 0.0173, -0.0046, 0.0036, 0.0021, 0.0245, 0.0328, -0.0264, -0.0221,
-0.0192],
[ 0.0168, -0.0046, 0.0034, 0.0025, 0.0248, 0.0336, -0.0262, -0.0222,
-0.0194],
[ 0.0166, -0.0051, 0.0033, 0.0015, 0.0234, 0.0331, -0.0270, -0.0218,
-0.0186]], grad_fn=<AddmmBackward>)
Epoch 199: Train err: 0.8125, Train loss: 2.1874701976776123
| It seems that you are using a model named overfit_model where you pass over_model.parameters() to the optimizer:
optimizer = optim.SGD(over_model.parameters(), lr=0.001, momentum=0.9)
Should be replaced with overfit_model.parameters().
You are setting your gradients to zero right after you backpropagate, whereas it should be done beforehand. So, the following lines:
loss.backward()
optimizer.step()
optimizer.zero_grad()
Should be replaced with:
optimizer.zero_grad()
loss.backward()
optimizer.step()
There is no need to call F.softmax in
return F.softmax(self.fc3(x), dim=1)
since you are using nn.CrossEntropyLoss that calls F.cross_entropy which natively bundles log_softmax before calling nll_loss
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
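Putting the three points together, the training step from your snippet would look roughly like this (a sketch, keeping your own names and helpers):
optimizer = optim.SGD(overfit_model.parameters(), lr=0.001, momentum=0.9)
for epoch in range(num_epochs):
    for i, (inputs, labels) in enumerate(smallLoader):
        optimizer.zero_grad()
        outputs = overfit_model(inputs)  # forward should now return raw logits (no softmax)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()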
| https://stackoverflow.com/questions/66286991/ |
Pytorch how use a linear activation function | In Keras, I can create any network layer with a linear activation function as follows (for example, a fully-connected layer is taken):
model.add(keras.layers.Dense(outs, input_shape=(160,), activation='linear'))
But I can't find the linear activation function in the PyTorch documentation. ReLU is not suitable, because there are negative values in my sample. How do I create a layer with a linear activation function in PyTorch?
| If you take a look at the Keras documentation, you will see that tf.keras.layers.Dense's activation='linear' corresponds to the identity function a(x) = x, which means no non-linearity.
So in PyTorch, you just define the linear function without adding any activation layer:
torch.nn.Linear(160, outs)
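For instance, the Keras model.add line from the question could be mirrored with (a sketch; outs is whatever output size you need):
model = torch.nn.Sequential(
    torch.nn.Linear(160, outs)  # no activation layer = "linear" activation
)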
| https://stackoverflow.com/questions/66294119/ |
Append a tensor vector to tensor matrix | I have a tensor matrix that i simply want to append a tensor vector as another column to it.
For example:
X = torch.randint(100, (100,5))
x1 = torch.from_numpy(np.array(range(0, 100)))
I've tried torch.cat([x1, X]) with various values for both axis and dim, but it always says that the dimensions don't match.
| You can also use torch.hstack to combine them and unsqueeze to reshape x1:
torch.hstack([X, x1.unsqueeze(1)])
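Equivalently, torch.cat (what you were attempting) works once both tensors are 2D:
torch.cat([X, x1.unsqueeze(1)], dim=1)  # torch.Size([100, 6])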
| https://stackoverflow.com/questions/66299739/ |
How to build a dataset from a large text file without getting a memory error? | I have a text file with size > 7.02 GB. I have already built a tokenizer based on this text file. I want to build a dataset like so:
from transformers import LineByLineTextDataset
dataset = LineByLineTextDataset(
tokenizer=tokenizer,
file_path="data.txt", block_size=128,)
Since the size of my data is very large, a memory error occurs. This is the source code:
with open(file_path, encoding="utf-8") as f:
lines = [line for line in f.read().splitlines() if (len(line) > 0 and not line.isspace())]
batch_encoding = tokenizer(lines, add_special_tokens=True, truncation=True, max_length=block_size)
print(batch_encoding)
self.examples = batch_encoding["input_ids"]
self.examples = [{"input_ids": torch.tensor(e, dtype=torch.long)} for e in self.examples]
Supposing that my text file has only 4 lines, the following will be printed:
{'input_ids': [[49, 93, 1136, 1685, 973, 363, 72, 3130, 16502, 18], [44, 73, 1685, 279, 7982, 18, 225], [56, 13005, 1685, 4511, 3450, 18], [56, 19030, 1685, 7544, 18]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1]]}
I have changed the source code as follows so that the memory error doesn't appear:
for line in open(file_path, encoding="utf-8"):
if (len(line) > 0 and not line.isspace()):
new_line = line.split()
batch_encoding = tokenizer(new_line, add_special_tokens=True, truncation=True, max_length=block_size)
print(batch_encoding)
print(type(batch_encoding))
self.examples = batch_encoding["input_ids"]
self.examples = [{"input_ids": torch.tensor(e, dtype=torch.long)} for e in self.examples]
print(batch_encoding)
However, the following will be printed:
{'input_ids': [[49, 93], [3074], [329], [2451, 363, 72, 3130, 16502, 18]], 'token_type_ids': [[0, 0], [0], [0], [0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1], [1], [1], [1, 1, 1, 1, 1, 1]]}
<class 'transformers.tokenization_utils_base.BatchEncoding'>
{'input_ids': [[44, 73], [329], [69], [23788, 18]], 'token_type_ids': [[0, 0], [0], [0], [0, 0]], 'attention_mask': [[1, 1], [1], [1], [1, 1]]}
<class 'transformers.tokenization_utils_base.BatchEncoding'>
{'input_ids': [[56, 13005], [329], [7522], [7958, 18]], 'token_type_ids': [[0, 0], [0], [0], [0, 0]], 'attention_mask': [[1, 1], [1], [1], [1, 1]]}
<class 'transformers.tokenization_utils_base.BatchEncoding'>
{'input_ids': [[56, 19030], [329], [11639, 18]], 'token_type_ids': [[0, 0], [0], [0, 0]], 'attention_mask': [[1, 1], [1], [1, 1]]}
{'input_ids': [[56, 19030], [329], [11639, 18]], 'token_type_ids': [[0, 0], [0], [0, 0]], 'attention_mask': [[1, 1], [1], [1, 1]]}
How can I change the source code in order to be able to read the large text file line by line but get the same output as desired without a memory error?
| You can create a dictionary storing the byte offsets for each line of the .txt file:
offset_dict = {}
with open(large_file_path, 'rb') as f:
f.readline() # move over header
for line in range(number_of_lines):
offset = f.tell()
offset_dict[line] = offset
f.readline()  # advance to the next line so the next f.tell() points at it
and then implement your own hashed __getitem__ method in a PyTorch Dataset (which can then be accessed by a DataLoader):
from torch.utils.data import Dataset

class ExampleDataset(Dataset):
def __init__(self, large_file_path, offset_dict, ):
self.large_file_path = large_file_path
self.offset_dict = offset_dict
def __len__(self):
return len(self.offset_dict)
def __getitem__(self, line):
offset = self.offset_dict[line]
with open(self.large_file_path, 'r', encoding='utf-8') as f:
f.seek(offset)
line = f.readline()
return line
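A rough usage sketch (file path, line count and batch size are placeholders; each batch comes back as a list of raw lines that you can tokenize on the fly with the question's tokenizer):
dataset = ExampleDataset("data.txt", offset_dict)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
for batch_lines in loader:
    batch_encoding = tokenizer(list(batch_lines), add_special_tokens=True,
                               truncation=True, max_length=128)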
| https://stackoverflow.com/questions/66301608/ |