instruction | input | output | source
---|---|---|---
PyTorch tensors topk for every tensor across a dimension | I have the following tensor
inp = tensor([[[ 0.0000e+00,  5.7100e+02, -6.9846e+00],
         [ 0.0000e+00,  4.4070e+03, -7.1008e+00],
         [ 0.0000e+00,  3.0300e+02, -7.2226e+00],
         [ 0.0000e+00,  6.8000e+01, -7.2777e+00],
         [ 1.0000e+00,  5.7100e+02, -6.9846e+00],
         [ 1.0000e+00,  4.4070e+03, -7.1008e+00],
         [ 1.0000e+00,  3.0300e+02, -7.2226e+00],
         [ 1.0000e+00,  6.8000e+01, -7.2777e+00]],

        [[ 0.0000e+00,  2.1610e+03, -7.0754e+00],
         [ 0.0000e+00,  6.8000e+01, -7.2259e+00],
         [ 0.0000e+00,  1.0620e+03, -7.2920e+00],
         [ 0.0000e+00,  2.9330e+03, -7.3009e+00],
         [ 1.0000e+00,  2.1610e+03, -7.0754e+00],
         [ 1.0000e+00,  6.8000e+01, -7.2259e+00],
         [ 1.0000e+00,  1.0620e+03, -7.2920e+00],
         [ 1.0000e+00,  2.9330e+03, -7.3009e+00]],

        [[ 0.0000e+00,  4.4070e+03, -7.1947e+00],
         [ 0.0000e+00,  3.5600e+02, -7.2958e+00],
         [ 0.0000e+00,  3.0300e+02, -7.3232e+00],
         [ 0.0000e+00,  1.2910e+03, -7.3615e+00],
         [ 1.0000e+00,  4.4070e+03, -7.1947e+00],
         [ 1.0000e+00,  3.5600e+02, -7.2958e+00],
         [ 1.0000e+00,  3.0300e+02, -7.3232e+00],
         [ 1.0000e+00,  1.2910e+03, -7.3615e+00]]])
of shape
torch.Size([3, 8, 3])
and I would like to find the topk (k=4) elements across dim 1, ranked by the value at index 2 of the last dimension (the negative values). The resulting tensor shape should then be:
torch.Size([3, 4, 3])
I know how to do topk for a single tensor, but how do I do this for several batches at once?
| I did it like this:
val, ind = inp[:, :, 2].squeeze().topk(k=4, dim=1, sorted=True)  # rank rows by their last column
new_ind = ind.unsqueeze(-1).repeat(1, 1, 3)                      # (3, 4) -> (3, 4, 3), so indices cover full rows
result = inp.gather(1, new_ind)                                  # select the top-4 rows along dim 1
I don't know if this is the best way to do this but it worked.
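For reference, a minor variant (a sketch, not necessarily faster) avoids materializing the repeated index tensor by using expand, and a shape check confirms the result:
val, ind = inp[:, :, 2].topk(k=4, dim=1, sorted=True)  # .squeeze() is a no-op here: inp[:, :, 2] is already (3, 8)
result = inp.gather(1, ind.unsqueeze(-1).expand(-1, -1, inp.size(-1)))
print(result.shape)  # torch.Size([3, 4, 3])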
| https://stackoverflow.com/questions/66906505/ |
How is PyTorch's Class BCEWithLogitsLoss exactly implemented? | According to the PyTorch documentation, the advantage of the class BCEWithLogitsLoss() is that one can use the
log-sum-exp trick for numerical stability.
If we use the class BCEWithLogitsLoss() with the parameter reduction set to 'none', they have a formula for that:
I now simplified the terms, and obtain after some lines of calculation:
I was curious to see whether this is the way the source code does it, but I couldn't find it. The only code they have is this:
Code for BCEWithLogitsLoss
| nn.BCEWithLogitsLoss is binary cross-entropy loss with a sigmoid applied to the raw input first. It may be used in case your model's output layer is not wrapped with a sigmoid. It is typically used with the raw output of a single output layer neuron.
Simply put, your model's output say pred will be a raw value. In order to get probability, you will have to use torch.sigmoid(pred). (To get actual class labels, you need torch.round(torch.sigmoid(pred)).) However, you don't need to do anything like that (i.e take sigmoid) when you use nn.BCEWithLogitsLoss. Here you just have to do the following-
criterion = nn.BCEWithLogitsLoss()
loss = criterion(pred, target) # pred is just raw nn output
Hence, coming to the implementation part, criterion accepts two torch tensors - one being the raw nn outputs, the other being the true class labels. It applies a sigmoid to each element of the first tensor, then calculates the binary cross entropy, -(target*log(sigmoid(pred)) + (1-target)*log(1-sigmoid(pred))), for each pair, and reduces the result to the mean.
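To make this concrete, here is a small numerical check (a sketch) showing that BCEWithLogitsLoss matches a sigmoid followed by BCELoss, up to floating-point error:
import torch
import torch.nn as nn

pred = torch.randn(4)               # raw logits
target = torch.empty(4).random_(2)  # 0/1 labels as floats

loss_a = nn.BCEWithLogitsLoss()(pred, target)
loss_b = nn.BCELoss()(torch.sigmoid(pred), target)
print(torch.allclose(loss_a, loss_b))  # True; BCEWithLogitsLoss is additionally more numerically stable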
| https://stackoverflow.com/questions/66906884/ |
Confused on using dropout in batch gradient descent with Q-learning | I am using PyTorch and adding dropout layers to my inner layers.
class MLP(nn.Module):
    #def __init__(self, n_inputs, n_action, n_hidden_layers=2, hidden_dim=8, drop=0.25):
    def __init__(self, console_file, n_inputs, n_action, layers_list, drop=0.25):
        super(MLP, self).__init__()
        print("Layers structure:")
        console_file.write("Layers structure:\n")
        print(f"inputs: {n_inputs}")
        console_file.write(f"inputs: {n_inputs}\n")
        self.layers = []
        for i, layer_size in enumerate(layers_list):
            if i == 0:
                layer = nn.Linear(n_inputs, layer_size)
            else:
                layer = nn.Linear(layers_list[i-1], layer_size)
            self.layers.append(layer)
            print(f"layer {i}: {layer_size}")
            console_file.write(f"layer {i}: {layer_size}\n")
            self.layers.append(nn.LeakyReLU(0.1))
            if drop > 0.01:
                #self.layers.append(nn.Dropout(p = drop**(len(layers_list)-i)))
                self.layers.append(nn.Dropout(p=drop))
                #print(f"drop {i}: {drop**(len(layers_list)-i)}")
                print(f"drop {i}: {drop}")
                #console_file.write(f"drop {i}: {drop**(len(layers_list)-i)}\n")
                console_file.write(f"drop {i}: {drop}\n")
        # final layer
        self.layers.append(nn.Linear(layers_list[-1], n_action))
        self.layers = nn.Sequential(*self.layers)
        print(f"outputs: {n_action}")
        console_file.write(f"outputs: {n_action}\n")
        print("========= NN structure =========\n")
        console_file.write("========= NN structure =========\n\n")

    def forward(self, X):
        return self.layers(X)

    def save_weights(self, path):
        torch.save(self.state_dict(), path)

    def load_weights(self, path):
        self.load_state_dict(torch.load(path))
I am making sure to turn training mode on during training and eval mode outside training (which disables the dropout layers).
self.model = he.MLP(console_file, state_size, self.action_size, DIMENSION, DROPOUT)
if DROPOUT > 0.01:
    self.model.train()
...
if DROPOUT > 0.01:  # before testing
    agent.model.eval()
...
if DROPOUT > 0.01:  # after testing
    self.model.train()
My confusion is as to HOW if at all PyTorch keeps track of which neurons it disables between the forward propagation and the later time where a random batch of output-and-reward is selected and backpropagation is performed on a specific case originating in a forward propagation with specific neurons disabled.
def train_one_step(model, criterion, optimizer, inputs, targets):
    # convert to tensors
    inputs = torch.from_numpy(inputs.astype(np.float32))
    targets = torch.from_numpy(targets.astype(np.float32))
    # zero the parameter gradients
    optimizer.zero_grad()
    # Forward pass
    outputs = model(inputs)
    loss = criterion(outputs, targets)
    # Backward and optimize
    loss.backward()
    optimizer.step()
inputs and targets are from random selection (batch over buffered history)
For me it makes sense to have the SAME neurons disabled between the forward and back propagation, but since I can find nothing on the subject, everything would suggest that the dropout is applied randomly, and therefore, to me, nonsensically.
Either the forward and back propagations need to happen identically (and somehow PyTorch manages to remember this by some markers, because I don't seem to pass any markers during batching), or I need to understand why they can be randomly different.
| I'm not sure what is the problem, but let me try to explain how things work.
The .train() and .eval() calls only change the .training flag to True or False.
The Dropout layer samples the noise during the forward pass. Here's an example of forward implementation (I removed the ifs for the alpha and feature dropouts for readability):
template<bool feature_dropout, bool alpha_dropout, bool inplace, typename T>
Ctype<inplace> _dropout_impl(T& input, double p, bool train) {
  TORCH_CHECK(p >= 0 && p <= 1, "dropout probability has to be between 0 and 1, but got ", p);
  if (p == 0 || !train || input.numel() == 0) {
    return input;
  }
  if (p == 1) {
    return multiply<inplace>(input, at::zeros({}, input.options()));
  }
  auto noise = at::empty_like(input, LEGACY_CONTIGUOUS_MEMORY_FORMAT);
  noise.bernoulli_(1 - p);
  noise.div_(1 - p);
  return multiply<inplace>(input, noise);
}
As you can see, if !train (i.e., .eval()), it will return the input as it is. Moreover, you could say that it "remembers" which neurons were disabled the same way it "remembers" the value of each active neuron. Notice that the dropout layer actually works as a mask of 0s and (scaled) 1s on the output of the previous layer. It does not actually mask the neurons, although in practice the effect is equivalent, since the neurons that generated the outputs multiplied by 0 will get no gradient, and the rest will get properly scaled gradients (because of the .div_(1-p)).
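If you want to convince yourself that the same mask is used in the backward pass, a small experiment (a sketch) works: the gradient is zero exactly where the forward output was zeroed, because autograd saved the multiplication by the noise mask.
import torch

x = torch.randn(10, requires_grad=True)
drop = torch.nn.Dropout(p=0.5)
drop.train()

y = drop(x)
y.sum().backward()
# zeroed positions get zero gradient; kept positions get 1/(1-p) = 2.0
print(torch.equal(y == 0, x.grad == 0))  # True (up to the negligible chance that x contains an exact 0)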
| https://stackoverflow.com/questions/66917708/ |
Saving a PyTorch tensor as a 32-bit grayscale image | I have manipulated a 32-bit grayscale .tif image which I converted to a tensor using PIL. After this I saved it with:
torchvision.utils.save_image(train_img_poac,fp=str(j)+".tif")
This method automatically converts the tensor to an RGB format image. I want my output image to be a 32-bit grayscale image.
I tried to use the arguments in the save_image function but could not find anything. Is converting it to numpy ndarray and then converting it to a 32-bit Image an option?
| Unfortunately save_image doesn't have an option for preserving one-channel images. You can use a different library like OpenCV:
import cv2

image = train_img_poac.squeeze().numpy()  # drop a leading channel dim if present, leaving (H, W)
cv2.imwrite('image_name.tif', image)      # the file extension determines the output format
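You can verify that nothing was converted by reading the file back with the unchanged flag (a sketch; assumes a float32 tensor, which OpenCV writes as a 32-bit TIFF):
check = cv2.imread('image_name.tif', cv2.IMREAD_UNCHANGED)
print(check.dtype, check.shape)  # float32 (H, W): still single-channel 32-bit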
| https://stackoverflow.com/questions/66929030/ |
AttributeError:module 'torchtext.data' has no attribute 'TabularDataset' | I want to create a dataset from a tsv file with pytorch.
I was thinking of using
torchtext.data.TabularDataset.splits
but I'm getting an error message.
AttributeError: module 'torchtext.data' has no attribute 'TabularDataset'
| Try torchtext.legacy.data.TabularDataset.splits
| https://stackoverflow.com/questions/66931604/ |
"module 'torchtext.data' has no attribute 'Field'" | import torchtext
ENGLISH = torchtext.data.Field(tokenize=tokenizer_english, lower=True, init_token="<sos>", eos_token="<eos>")
Error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-12-2a3d11c77e7d> in <module>
----> 1 ENGLISH = torchtext.data.Field(tokenize=tokenizer_english, lower=True, init_token="<sos>", eos_token="<eos>")
AttributeError: module 'torchtext.data' has no attribute 'Field'
It won't import torchtext.data.Field for some reason even though that's what's in the docs.
|
[BC Breaking] Legacy
In v0.9.0 release, we move the following legacy code to torchtext.legacy. This is part of the work to revamp the torchtext library and the motivation has been discussed in Issue #664:
torchtext.legacy.data.field
torchtext.legacy.data.batch
torchtext.legacy.data.example
torchtext.legacy.data.iterator
torchtext.legacy.data.pipeline
torchtext.legacy.datasets
We have a migration tutorial to help users switch to the torchtext datasets in v0.9.0 release. For the users who still want the legacy components, they can add legacy to the import path.
Try it with ENGLISH = torchtext.legacy.data.Field(tokenize=tokenizer_english, lower=True, init_token="<sos>", eos_token="<eos>") (note the capital F: Field is the class, while field is the module).
| https://stackoverflow.com/questions/66945577/ |
In Pytorch, is there a difference between (x<0) and x.lt(0)? | Suppose x is a tensor in Pytorch. One can either write:
x_lowerthanzero = x.lt(0)
or:
x_lowerthanzero = (x<0)
with seemingly the exact same results. Many other operations have Pytorch built-in equivalents: x.gt(0) for (x>0), x.neg() for -x, x.mul() etc.
Is there a good reason to use one form over the other?
| They are equivalent. < is simply a more readable alias.
Python operators have canonical function mappings e.g:
Algebraic operations

Operation | Syntax | Function
---|---|---
Addition | a + b | add(a, b)
Subtraction | a - b | sub(a, b)
Multiplication | a * b | mul(a, b)
Division | a / b | truediv(a, b)
Exponentiation | a ** b | pow(a, b)
Matrix Multiplication | a @ b | matmul(a, b)

Comparisons

Operation | Syntax | Function
---|---|---
Ordering | a < b | lt(a, b)
Ordering | a <= b | le(a, b)
Equality | a == b | eq(a, b)
Difference | a != b | ne(a, b)
Ordering | a >= b | ge(a, b)
Ordering | a > b | gt(a, b)
You can check that these are indeed mapped to the respectively named torch functions here e.g:
def __lt__(self, other):
    return self.lt(other)
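A quick check (a sketch) that the two spellings really produce identical tensors:
import torch

x = torch.randn(5)
print(torch.equal(x < 0, x.lt(0)))  # True: same bool dtype, same values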
| https://stackoverflow.com/questions/66965389/ |
ValueError: Unknown CUDA arch (8.6) or GPU not supported | When I build DCNv2 in my conda environment, I get this message.
I have checked CUDA with nvidia-smi:
Tue Apr 6 20:03:13 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.67 Driver Version: 460.67 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce RTX 3070 Off | 00000000:01:00.0 On | N/A |
| 0% 45C P8 17W / 220W | 448MiB / 7979MiB | 23% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 955 G /usr/lib/xorg/Xorg 53MiB |
| 0 N/A N/A 1555 G /usr/lib/xorg/Xorg 210MiB |
| 0 N/A N/A 1690 G /usr/bin/gnome-shell 61MiB |
| 0 N/A N/A 3564 G ...AAAAAAAAA= --shared-files 108MiB |
and nvcc -V:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
Any help?
| Your GPU is "too new" for CUDA 10. Support for cards with compute capability 8.6 like yours was added in CUDA 11.1 (thank you @RobertCrovella for the correction). This means you'll need to use CUDA 11.1 or newer.
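To see which toolkit your extension build will pick up versus what PyTorch itself was built with, something like this helps (a sketch; the exact values depend on your install):
import torch
from torch.utils.cpp_extension import CUDA_HOME

print(torch.version.cuda)                   # CUDA version PyTorch was built with
print(CUDA_HOME)                            # toolkit that will compile extensions like DCNv2
print(torch.cuda.get_device_capability(0))  # (8, 6) for an RTX 3070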
| https://stackoverflow.com/questions/66968382/ |
how to change the labels in a datafolder of pytorch? | I first load an unlabeled dataset as following:
unlabeled_set = DatasetFolder("food-11/training/unlabeled", loader=lambda x: Image.open(x), extensions="jpg", transform=train_tfm)
and now since I'm trying to conduct semi-supervised learning: I'm trying to define the following function. The input "dataset" is the unlabeled_set I just loaded.
As I want to change the labels of the dataset to the ones I predicted, not the original labels (all of the original labels were 1's), how can I do that?
I have tried using dataset.targets to change the labels, but it doesn't work at all.
the following is my function:
import torch

def get_pseudo_labels(dataset, model, threshold=0.07):
    # This function generates pseudo-labels of a dataset using the given model.
    # It returns an instance of DatasetFolder containing images whose prediction confidences exceed a given threshold.
    # You are NOT allowed to use any models trained on external data for pseudo-labeling.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = []
    y = []
    # print(dataset.targets[0])

    # Construct a data loader.
    data_loader = DataLoader(dataset, batch_size=batch_size, shuffle=False)

    # Make sure the model is in eval mode.
    model.eval()
    # Define softmax function.
    softmax = nn.Softmax()

    counter = 0
    # Iterate over the dataset by batches.
    for batch in tqdm(data_loader):
        img, _ = batch

        # Forward the data
        # Using torch.no_grad() accelerates the forward process.
        with torch.no_grad():
            logits = model(img.to(device))

        # Obtain the probability distributions by applying softmax on logits.
        probs = softmax(logits)
        count = 0

        # ---------- TODO ----------
        # Filter the data and construct a new dataset.
        dataset.targets = torch.tensor(dataset.targets)
        for p in probs:
            if torch.max(p) >= threshold:
                if not (counter in x):
                    x.append(counter)
                dataset.targets[counter] = torch.argmax(p)
            counter += 1

    # Turn off the eval mode.
    model.train()
    # dat = DataLoader(ImgDataset(x, y), batch_size=batch_size, shuffle=False)
    print(dataset.targets[10])
    new = torch.utils.data.Subset(dataset, x)
    return new
| PyTorch DataSets can return tuples of values, but they have no inherent "features"/"target" distinction. You can create your modified DataSet like so:
labeled_dataset = [*zip(dataset, labels)]
data_loader = DataLoader(labeled_dataset, batch_size=batch_size, shuffle=False)

for imgs, labels in data_loader:  # per batch
    ...
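If you want to avoid carrying the old (dummy) labels along, a minimal sketch, assuming preds holds one pseudo-label per kept image:
images = [img for img, _ in dataset]  # discard the original labels
labeled_dataset = [*zip(images, preds)]
data_loader = DataLoader(labeled_dataset, batch_size=batch_size, shuffle=False)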
| https://stackoverflow.com/questions/66971274/ |
PyTorch warning about using a non-full backward hook when the forward contains multiple autograd Nodes | After a recent upgrade, when running my PyTorch loop, I now get the warning
Using a non-full backward hook when the forward contains multiple autograd Nodes.
The training still runs and completes, but I am unsure where I am supposed to place the register_full_backward_hook function.
I have tried adding it to each of the layers in my neural network but this gives further errors about using different hooks.
Can anyone please advise?
| PyTorch version 1.8.0 deprecated register_backward_hook (source code) in favor of register_full_backward_hook (source code).
You can find it in the patch notes here: Deprecated old style nn.Module backward hooks (PR #46163)
The warning you're getting:
Using a non-full backward hook when the forward contains multiple autograd Nodes is deprecated and will be removed in future versions. This hook will be missing some grad_input. Please use register_full_backward_hook to get the documented behavior.
Simply indicates that you should replace all register_backward_hook calls with register_full_backward_hook in your code to get the behavior described in the documentation page.
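For reference, the call site looks the same as before; only the method name changes (a minimal sketch, assuming model is your nn.Module):
def hook(module, grad_input, grad_output):
    # inspect (or return replacements for) the gradients here
    print(module.__class__.__name__)

for layer in model.children():
    layer.register_full_backward_hook(hook)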
| https://stackoverflow.com/questions/66994662/ |
Installing PyTorch on Jetson Nano Ubuntu 18 | I am trying to install PyTorch on a Jetson Nano running Ubuntu 18.04. My reference is https://dev.to/evanilukhin/guide-to-install-pytorch-with-cuda-on-ubuntu-18-04-5217
When I try the following command this is what I get:
(my_env) crigano@crigano-desktop:~$ python3.8 -m pip install numpy ninja pyyaml mkl mkl-include setuptools cmake cffi typing
Collecting numpy
Using cached numpy-1.20.2-cp38-cp38-manylinux2014_aarch64.whl (12.7 MB)
Collecting ninja
Using cached ninja-1.10.0.post2.tar.gz (25 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
Collecting pyyaml
Using cached PyYAML-5.4.1-cp38-cp38-manylinux2014_aarch64.whl (818 kB)
ERROR: Could not find a version that satisfies the requirement mkl
ERROR: No matching distribution found for mkl
| If you just want to use PyTorch on the bare-metal Jetson Nano, simply install it with NVIDIA's pre-compiled binary wheel. Other packages can be found in the Jetson Zoo.
MKL is developed by Intel "to optimize code for current and future generations of Intel® CPUs and GPUs." [PyPI]. Apparently it does run on other x86-based chips like AMD's (although Intel has historically intentionally crippled the library for non-Intel chips [Wikipedia]), but unsurprisingly Intel is not interested in supporting ARM devices and has not ported MKL to ARM architectures.
If your goal is to use MKL for math optimization in numpy, openblas is a working alternative for ARM. libopenblas-base:arm64 and libopenblas-dev:arm64 come pre-installed on NVIDIA's "L4T PyTorch" Docker images. You can confirm that numpy detects them with numpy.__config__.show(). This is what I get using numpy 1.12 in Python 3.6.9 on the l4t-pytorch:r32.5.0-pth1.6-py3 image:
blas_mkl_info:
  NOT AVAILABLE
blis_info:
  NOT AVAILABLE
openblas_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/lib/aarch64-linux-gnu']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
blas_opt_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/lib/aarch64-linux-gnu']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
lapack_mkl_info:
  NOT AVAILABLE
openblas_lapack_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/lib/aarch64-linux-gnu']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
lapack_opt_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/lib/aarch64-linux-gnu']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
So presumably it will use openblas in place of MKL for math optimization. If your use case is also for numpy optimization, you can likewise use openblas and shouldn't need MKL... which is fortunate, since it isn't available anyway.
| https://stackoverflow.com/questions/66995722/ |
Get a list of tensor from masked indices | I'm trying to get a list of tensors based on different group,
e.g.,
x = tensor([ 0.3018, -0.0079, 1.4995, -1.4422, 1.6007])
indices = torch.tensor([0,0,1,1,2])
res = func(x,indices)
I want my result to be
res= [[0.3018, -0.0079], [1.4995, -1.4422], [1.6007]]
I'm wondering how can I achieve this result, I checked gather and index_select,
but I can't get the result like above.
Thank you!
| How about
res = [x[indices == i_] for i_ in indices.unique()]
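On the example above this should produce (a quick check; boolean masking keeps each group's elements in order):
res = [tensor([ 0.3018, -0.0079]), tensor([ 1.4995, -1.4422]), tensor([1.6007])]
Note that the result has to be a Python list rather than a single tensor, since the groups can have different lengths.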
| https://stackoverflow.com/questions/66997166/ |
Loading model failed in TorchServe | I am learning to serve a model using TorchServe and I am new to serving.
This is the handler file I created for serving the VGG16 model.
I am using the model from Kaggle.
Myhandler.py file
import io
import os
import logging
import torch
import numpy as np
import torch.nn.functional as F
from PIL import Image
from torchvision import transforms, datasets, models
from ts.torch_handler.image_classifier import ImageClassifier
from ts.torch_handler.base_handler import BaseHandler
from ts.utils.util import list_classes_from_module
import importlib
from torch.autograd import Variable
import seaborn as sns
import torchvision
from torch import optim, cuda
from torch.utils.data import DataLoader, sampler
import torch.nn as nn
import warnings
warnings.filterwarnings('ignore', category=FutureWarning)
# Data science tools
import pandas as pd

#path = 'C:\\Users\\fazil\\OneDrive\\Desktop\\pytorch\\vgg11\\vgg16.pt'
path = r'C:\Users\fazil\OneDrive\Desktop\pytorch\vgg11\vgg16.pt'
#image = r'C:\Users\fazil\OneDrive\Desktop\pytorch\vgg11\normal.jpeg'

class VGGImageClassifier(ImageClassifier):
    """
    Overriding the model loading code as a workaround for issue :
    https://github.com/pytorch/serve/issues/535
    https://github.com/pytorch/vision/issues/2473
    """

    def __init__(self):
        self.model = None
        self.mapping = None
        self.device = None
        self.initialized = False

    def initialize(self, context):
        """load eager mode state_dict based model"""
        properties = context.system_properties
        #self.device = torch.device(
        #    "cuda:" + str(properties.get("gpu_id"))
        #    if torch.cuda.is_available()
        #    else "cpu"
        #)
        model_dir = properties.get("model_dir")
        model_pt_path = os.path.join(model_dir, "model.pt")

        # Read model definition file
        model_def_path = os.path.join(model_dir, "model.py")
        if not os.path.isfile(model_def_path):
            raise RuntimeError("Missing the model definition file")

        checkpoint = torch.load(path, map_location='cpu')
        logging.error('%s ', checkpoint)
        self.model = models.vgg16(pretrained=True)
        logging.error('%s ', self.model)
        self.model.classifier = checkpoint['classifier']
        logging.error('%s ', self.model.classifier)
        self.model.load_state_dict(checkpoint['state_dict'], strict=False)
        self.model.class_to_idx = checkpoint['class_to_idx']
        self.model.idx_to_class = checkpoint['idx_to_class']
        self.model.epochs = checkpoint['epochs']
        optimizer = checkpoint['optimizer']
        optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
        for param in model.parameters():
            param.requires_grad = False
        logger.debug('Model file {0} loaded successfully'.format(model_pt_path))
        self.initialized = True

    def preprocess(self, data):
        image = data.get("data")
        if image is None:
            image = data.get("body")
        image_transform = transforms.Compose([
            transforms.Resize(size=256),
            transforms.CenterCrop(size=224),
            transforms.ToTensor(),
            transforms.Normalize((0.5), (0.5))
        ])
        image = Image.open(io.BytesIO(image)).convert('RGB')
        image = image_transform(image)
        image = image.unsqueeze(0)
        return image

    def inference(self, image):
        outs = self.model.forward(image)
        probs = F.softmax(outs, dim=1)
        preds = torch.argmax(probs, dim=1)
        logging.error('%s ', preds)
        return preds

    def postprocess(self, preds):
        res = []
        preds = preds.cpu().tolist()
        for pred in preds:
            label = self.mapping[str(pred)][1]
            res.append({'label': label, 'index': pred})
        return res

_service = VGGImageClassifier()

def handle(data, context):
    if not _service.initialized:
        _service.initialize(context)
    if data is None:
        return None

    data = _service.preprocess(data)
    data = _service.inference(data)
    data = _service.postprocess(data)
    return data
This is the error I got:
Torchserve version: 0.3.1
TS Home: C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages
Current directory: C:\Users\fazil\OneDrive\Desktop\pytorch\vgg11
Temp directory: C:\Users\fazil\AppData\Local\Temp
Number of GPUs: 0
Number of CPUs: 4
Max heap size: 3038 M
Python executable: c:\users\fazil\anaconda3\envs\serve\python.exe
Config file: ./config.properties
Inference address: http://0.0.0.0:8080
Management address: http://0.0.0.0:8081
Metrics address: http://0.0.0.0:8082
Model Store: C:\Users\fazil\OneDrive\Desktop\pytorch\vgg11\model_store
Initial Models: vgg16.mar
Log dir: C:\Users\fazil\OneDrive\Desktop\pytorch\vgg11\logs
Metrics dir: C:\Users\fazil\OneDrive\Desktop\pytorch\vgg11\logs
Netty threads: 32
Netty client threads: 0
Default workers per model: 4
Blacklist Regex: N/A
Maximum Response Size: 6553500
Maximum Request Size: 6553500
Prefer direct buffer: false
Allowed Urls: [file://.*|http(s)?://.*]
Custom python dependency for model allowed: false
Metrics report format: prometheus
Enable metrics API: true
2021-04-08 12:33:22,517 [INFO ] main org.pytorch.serve.ModelServer - Loading initial models: vgg16.mar
2021-04-08 12:33:40,392 [INFO ] main org.pytorch.serve.archive.ModelArchive - eTag 85b61fc819804aea9db0ca8786c2e427
2021-04-08 12:33:40,423 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model vgg16
2021-04-08 12:33:40,424 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model vgg16
2021-04-08 12:33:40,424 [INFO ] main org.pytorch.serve.wlm.ModelManager - Model vgg16 loaded.
2021-04-08 12:33:40,426 [DEBUG] main org.pytorch.serve.wlm.ModelManager - updateModel: vgg16, count: 4
2021-04-08 12:33:40,481 [INFO ] main org.pytorch.serve.ModelServer - Initialize Inference server with: NioServerSocketChannel.
2021-04-08 12:33:41,173 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: None
2021-04-08 12:33:41,177 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: None
2021-04-08 12:33:41,180 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]12328
2021-04-08 12:33:41,180 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]14588
2021-04-08 12:33:41,180 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2021-04-08 12:33:41,181 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2021-04-08 12:33:41,181 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.8
2021-04-08 12:33:41,181 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.8
2021-04-08 12:33:41,186 [DEBUG] W-9001-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - W-9001-vgg16_1.0 State change null -> WORKER_STARTED
2021-04-08 12:33:41,186 [DEBUG] W-9002-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - W-9002-vgg16_1.0 State change null -> WORKER_STARTED
2021-04-08 12:33:41,199 [INFO ] W-9001-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /127.0.0.1:9001
2021-04-08 12:33:41,199 [INFO ] W-9002-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /127.0.0.1:9002
2021-04-08 12:33:41,240 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: None
2021-04-08 12:33:41,244 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]12008
2021-04-08 12:33:41,244 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2021-04-08 12:33:41,245 [DEBUG] W-9000-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-vgg16_1.0 State change null -> WORKER_STARTED
2021-04-08 12:33:41,245 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.8
2021-04-08 12:33:41,245 [INFO ] W-9000-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /127.0.0.1:9000
2021-04-08 12:33:41,255 [INFO ] W-9003-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: None
2021-04-08 12:33:41,260 [INFO ] W-9003-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]15216
2021-04-08 12:33:41,260 [INFO ] W-9003-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2021-04-08 12:33:41,261 [DEBUG] W-9003-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - W-9003-vgg16_1.0 State change null -> WORKER_STARTED
2021-04-08 12:33:41,261 [INFO ] W-9003-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.8
2021-04-08 12:33:41,262 [INFO ] W-9003-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /127.0.0.1:9003
2021-04-08 12:33:41,768 [INFO ] main org.pytorch.serve.ModelServer - Inference API bind to: http://0.0.0.0:8080
2021-04-08 12:33:41,768 [INFO ] main org.pytorch.serve.ModelServer - Initialize Management server with: NioServerSocketChannel.
2021-04-08 12:33:41,774 [INFO ] main org.pytorch.serve.ModelServer - Management API bind to: http://0.0.0.0:8081
2021-04-08 12:33:41,775 [INFO ] main org.pytorch.serve.ModelServer - Initialize Metrics server with: NioServerSocketChannel.
2021-04-08 12:33:41,777 [INFO ] main org.pytorch.serve.ModelServer - Metrics API bind to: http://0.0.0.0:8082
2021-04-08 12:33:41,784 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: ('127.0.0.1', 9001).
2021-04-08 12:33:41,784 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: ('127.0.0.1', 9002).
2021-04-08 12:33:41,784 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: ('127.0.0.1', 9000).
2021-04-08 12:33:41,784 [INFO ] W-9003-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: ('127.0.0.1', 9003).
Model server started.
2021-04-08 12:33:48,486 [INFO ] pool-2-thread-1 TS_METRICS - CPUUtilization.Percent:100.0|#Level:Host|#hostname:fazil,timestamp:1617865428
2021-04-08 12:33:48,487 [INFO ] pool-2-thread-1 TS_METRICS - DiskAvailable.Gigabytes:74.49674987792969|#Level:Host|#hostname:fazil,timestamp:1617865428
2021-04-08 12:33:48,491 [INFO ] pool-2-thread-1 TS_METRICS - DiskUsage.Gigabytes:147.9403419494629|#Level:Host|#hostname:fazil,timestamp:1617865428
2021-04-08 12:33:48,496 [INFO ] pool-2-thread-1 TS_METRICS - DiskUtilization.Percent:66.5|#Level:Host|#hostname:fazil,timestamp:1617865428
2021-04-08 12:33:48,499 [INFO ] pool-2-thread-1 TS_METRICS - MemoryAvailable.Megabytes:4488.515625|#Level:Host|#hostname:fazil,timestamp:1617865428
2021-04-08 12:33:48,504 [INFO ] pool-2-thread-1 TS_METRICS - MemoryUsed.Megabytes:7658.80859375|#Level:Host|#hostname:fazil,timestamp:1617865428
2021-04-08 12:33:48,513 [INFO ] pool-2-thread-1 TS_METRICS - MemoryUtilization.Percent:63.0|#Level:Host|#hostname:fazil,timestamp:1617865428
2021-04-08 12:34:24,385 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process died.
2021-04-08 12:34:24,439 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2021-04-08 12:34:24,440 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 182, in <module>
2021-04-08 12:34:24,443 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2021-04-08 12:34:24,444 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 154, in run_server
2021-04-08 12:34:24,446 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2021-04-08 12:34:24,446 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 116, in handle_connection
2021-04-08 12:34:24,447 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg)
2021-04-08 12:34:24,448 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 89, in load_model
2021-04-08 12:34:24,523 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size, envelope)
2021-04-08 12:34:24,582 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "c:\users\fazil\anaconda3\envs\serve\lib\site-packages\ts\model_loader.py", line 104, in load
2021-04-08 12:34:24,597 [INFO ] nioEventLoopGroup-5-2 org.pytorch.serve.wlm.WorkerThread - 9000 Worker disconnected. WORKER_STARTED
2021-04-08 12:34:24,583 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process died.
2021-04-08 12:34:24,646 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - initialize_fn(service.context)
2021-04-08 12:34:24,646 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2021-04-08 12:34:24,649 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 182, in <module>
2021-04-08 12:34:24,649 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2021-04-08 12:34:24,650 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 154, in run_server
2021-04-08 12:34:24,648 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "c:\users\fazil\anaconda3\envs\serve\lib\site-packages\ts\model_loader.py", line 131, in <lambda>
2021-04-08 12:34:24,652 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2021-04-08 12:34:24,649 [DEBUG] W-9000-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - System state is : WORKER_STARTED
2021-04-08 12:34:24,734 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 116, in handle_connection
2021-04-08 12:34:24,653 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - initialize_fn = lambda ctx: entry_point(None, ctx)
2021-04-08 12:34:24,734 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg)
2021-04-08 12:34:24,735 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\AppData\Local\Temp\models\85b61fc819804aea9db0ca8786c2e427\hanndler.py", line 268, in handle
2021-04-08 12:34:24,735 [DEBUG] W-9000-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2056)
at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2133)
at java.base/java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:432)
at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:188)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
2021-04-08 12:34:24,736 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 89, in load_model
2021-04-08 12:34:24,753 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size, envelope)
2021-04-08 12:34:24,736 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - _service.initialize(context)
2021-04-08 12:34:24,754 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\AppData\Local\Temp\models\85b61fc819804aea9db0ca8786c2e427\hanndler.py", line 111, in initialize
2021-04-08 12:34:24,754 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "c:\users\fazil\anaconda3\envs\serve\lib\site-packages\ts\model_loader.py", line 104, in load
2021-04-08 12:34:24,754 [WARN ] W-9000-vgg16_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: vgg16, error: Worker died.
2021-04-08 12:34:24,756 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - initialize_fn(service.context)
2021-04-08 12:34:24,755 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.model.classifier = checkpoint['classifier']
2021-04-08 12:34:24,758 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "c:\users\fazil\anaconda3\envs\serve\lib\site-packages\ts\model_loader.py", line 131, in <lambda>
2021-04-08 12:34:24,810 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - initialize_fn = lambda ctx: entry_point(None, ctx)
2021-04-08 12:34:24,811 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\AppData\Local\Temp\models\85b61fc819804aea9db0ca8786c2e427\hanndler.py", line 268, in handle
2021-04-08 12:34:24,757 [DEBUG] W-9000-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-vgg16_1.0 State change WORKER_STARTED -> WORKER_STOPPED
2021-04-08 12:34:24,871 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - _service.initialize(context)
2021-04-08 12:34:24,872 [WARN ] W-9000-vgg16_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9000-vgg16_1.0-stderr
2021-04-08 12:34:24,812 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - KeyError: 'classifier'
2021-04-08 12:34:24,872 [WARN ] W-9000-vgg16_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9000-vgg16_1.0-stdout
2021-04-08 12:34:24,872 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\AppData\Local\Temp\models\85b61fc819804aea9db0ca8786c2e427\hanndler.py", line 111, in initialize
2021-04-08 12:34:24,874 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.model.classifier = checkpoint['classifier']
2021-04-08 12:34:24,903 [INFO ] nioEventLoopGroup-5-1 org.pytorch.serve.wlm.WorkerThread - 9001 Worker disconnected. WORKER_STARTED
2021-04-08 12:34:24,876 [INFO ] W-9000-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9000 in 1 seconds.
2021-04-08 12:34:24,931 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - KeyError: 'classifier'
2021-04-08 12:34:24,932 [DEBUG] W-9001-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - System state is : WORKER_STARTED
2021-04-08 12:34:24,974 [DEBUG] W-9001-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2056)
at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2133)
at java.base/java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:432)
at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:188)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
2021-04-08 12:34:25,015 [WARN ] W-9001-vgg16_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: vgg16, error: Worker died.
2021-04-08 12:34:25,015 [DEBUG] W-9001-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - W-9001-vgg16_1.0 State change WORKER_STARTED -> WORKER_STOPPED
2021-04-08 12:34:25,016 [WARN ] W-9001-vgg16_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9001-vgg16_1.0-stderr
2021-04-08 12:34:25,017 [WARN ] W-9001-vgg16_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9001-vgg16_1.0-stdout
2021-04-08 12:34:25,017 [INFO ] W-9001-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9001 in 1 seconds.
2021-04-08 12:34:25,038 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9000-vgg16_1.0-stdout
2021-04-08 12:34:25,038 [INFO ] W-9000-vgg16_1.0-stderr org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9000-vgg16_1.0-stderr
2021-04-08 12:34:25,085 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9001-vgg16_1.0-stdout
2021-04-08 12:34:25,085 [INFO ] W-9001-vgg16_1.0-stderr org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9001-vgg16_1.0-stderr
2021-04-08 12:34:25,247 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process died.
2021-04-08 12:34:25,247 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2021-04-08 12:34:25,248 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 182, in <module>
2021-04-08 12:34:25,248 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2021-04-08 12:34:25,248 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 154, in run_server
2021-04-08 12:34:25,249 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2021-04-08 12:34:25,250 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 116, in handle_connection
2021-04-08 12:34:25,250 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg)
2021-04-08 12:34:25,251 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 89, in load_model
2021-04-08 12:34:25,251 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size, envelope)
2021-04-08 12:34:25,253 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "c:\users\fazil\anaconda3\envs\serve\lib\site-packages\ts\model_loader.py", line 104, in load
2021-04-08 12:34:25,253 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - initialize_fn(service.context)
2021-04-08 12:34:25,254 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "c:\users\fazil\anaconda3\envs\serve\lib\site-packages\ts\model_loader.py", line 131, in <lambda>
2021-04-08 12:34:25,254 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - initialize_fn = lambda ctx: entry_point(None, ctx)
2021-04-08 12:34:25,254 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\AppData\Local\Temp\models\85b61fc819804aea9db0ca8786c2e427\hanndler.py", line 268, in handle
2021-04-08 12:34:25,255 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - _service.initialize(context)
2021-04-08 12:34:25,256 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\AppData\Local\Temp\models\85b61fc819804aea9db0ca8786c2e427\hanndler.py", line 111, in initialize
2021-04-08 12:34:25,257 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.model.classifier = checkpoint['classifier']
2021-04-08 12:34:25,257 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - KeyError: 'classifier'
2021-04-08 12:34:25,454 [INFO ] nioEventLoopGroup-5-4 org.pytorch.serve.wlm.WorkerThread - 9002 Worker disconnected. WORKER_STARTED
2021-04-08 12:34:25,456 [DEBUG] W-9002-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - System state is : WORKER_STARTED
2021-04-08 12:34:25,457 [DEBUG] W-9002-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2056)
at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2133)
at java.base/java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:432)
at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:188)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
2021-04-08 12:34:25,482 [WARN ] W-9002-vgg16_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: vgg16, error: Worker died.
Also, I load the model from path because I got an error if I used model_pt_path.
Can someone help me with this?
|
i am using the model from kaggle
I presume you got the model from https://www.kaggle.com/pytorch/vgg16
I think you are loading the model incorrectly.
You are loading a checkpoint, which would work if your model was saved like this:
torch.save({
    'epoch': epoch,
    'model_state_dict': model.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
    'loss': loss,
    ...
}, PATH)
But it was probably saved like this:
torch.save(model.state_dict(), PATH)
Which would explain the KeyError.
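For reference, the loading code for the two cases looks like this (a sketch; PATH and the key names follow the first snippet above):
# case 1: a checkpoint dict - the nested keys exist
checkpoint = torch.load(PATH, map_location='cpu')
model.load_state_dict(checkpoint['model_state_dict'])

# case 2: a bare state_dict - indexing it like checkpoint['classifier'] raises KeyError
state_dict = torch.load(PATH, map_location='cpu')
model.load_state_dict(state_dict)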
I modified the initialize method according to the second case:
def initialize(self, context):
    """load eager mode state_dict based model"""
    properties = context.system_properties
    model_dir = properties.get("model_dir")
    model_pt_path = os.path.join(model_dir, "model.pt")

    # Read model definition file
    model_def_path = os.path.join(model_dir, "model.py")
    if not os.path.isfile(model_def_path):
        raise RuntimeError("Missing the model definition file")

    state_dict = torch.load(path, map_location='cpu')
    # logging.error('%s ', checkpoint)
    self.model = models.vgg16(pretrained=True)
    logging.error('%s ', self.model)
    # self.model.classifier = checkpoint['classifier']
    # logging.error('%s ', self.model.classifier)
    self.model.load_state_dict(state_dict, strict=False)
    # self.model.class_to_idx = checkpoint['class_to_idx']
    # self.model.idx_to_class = checkpoint['idx_to_class']
    # self.model.epochs = checkpoint['epochs']
    # optimizer = checkpoint['optimizer']
    # optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
    # for param in model.parameters():
    #     param.requires_grad = False
    # logger.debug('Model file {0} loaded successfully'.format(model_pt_path))
    self.initialized = True
Using the model linked above, I managed to start torchserve without error.
| https://stackoverflow.com/questions/67000060/ |
PyTorch input/output data sizes | I'm trying out PyTorch for the first time, and running into a couple problems. I've shared some of my code below and have two questions.
Q1: What should my output size be? Each input should lead to one output, equal to one of 6 possible output labels (1-6). Should output size be 1 or 6?
Q2: Something is wrong with my accuracy calculation, which I think is tied into Q1. predicted ends up being of size 4 x 4411, where 4 is my batch size (and so I think that's correct) but 4411 is my feature/input size, which is almost certainly wrong. I would expect it to be 6 (number of possible output labels). labels is 4x6, which I think is correct. If I change the dim I'm taking the max over from 1 to 2, then it gets the correct size of 4x6, but logically it makes no sense as it's returning the max index across all feature values for an input.
I think I'm missing something crucial about what pytorch is doing with my data. I feel so close.... Any ideas on how I can fix this? Thanks!
class Net(torch.nn.Module):
    def __init__(self, input_size, output_size, hidden_dim, n_layers):
        super(Net, self).__init__()
        self.input_size = input_size
        self.output_size = output_size
        self.hidden_dim = hidden_dim
        self.n_layers = n_layers
        self.rnn = torch.nn.RNN(input_size, hidden_dim, n_layers, batch_first=True)
        # Q1: What should my output_size be?
        self.fc = torch.nn.Linear(hidden_dim, output_size)

    def forward(self, x):
        batch_size = x.size(0)
        hidden = self.init_hidden(batch_size)
        out, hidden = self.rnn(x, hidden)
        out = self.fc(out)
        return out, hidden

    def init_hidden(self, batch_size):
        return torch.zeros(self.n_layers, batch_size, self.hidden_dim)
if __name__ == '__main__':
    # ... code removed that just creates the Dataloaders, and initialises some size variables
    net = Net(input_size=dataset.input_size, output_size=6, hidden_dim=24, n_layers=1)
    net.to(device)
    criterion = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(net.parameters(), lr=lr)

    for epoch in range(1, epochs+1):
        total = 0
        correct = 0
        for inputs, labels in dataloaders['train']:
            optimizer.zero_grad()
            inputs, labels = inputs.to(device), labels.to(device)
            output, hidden = net(inputs)
            loss = criterion(output, labels)  # error
            loss.backward()
            optimizer.step()

            # Q2: Something is wrong here
            _, predicted = torch.max(output.data, 1)
            total += inputs.size(0)
            correct += (predicted == labels).sum().item()
            print(predicted == labels)
        accuracy = 100 * correct / total
|
PyTorch's CrossEntropyLoss expects logits of shape (batch, n) - one score for each of the n classes - and labels as integers giving the index of the correct class, i.e. shape (batch,).
Your rnn based model is spitting out tensors of shape [batch, input_size, 6], since it is an rnn and producing a sequence of the same length as the input (with 6 scores per element of the sequence). If you wish to have one class label prediction per sequence (not per element of sequence) you will need to collapse this to a tensor of shape [batch, 6].
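A common way to collapse it (a sketch, assuming you want one label for the whole sequence) is to keep only the last timestep before the final linear layer:
def forward(self, x):
    batch_size = x.size(0)
    hidden = self.init_hidden(batch_size)
    out, hidden = self.rnn(x, hidden)
    out = out[:, -1, :]   # last timestep only -> shape [batch, hidden_dim]
    out = self.fc(out)    # -> shape [batch, 6], as CrossEntropyLoss expects
    return out, hidden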
| https://stackoverflow.com/questions/67005759/ |
Pytorch: how to change requires_grad to be true in an OrderedDict | Suppose I have a neural network object from torch.nn, by default the requires_grad is False for its parameters. I want to change it to be True. But the following naive approach fails:
from torch import nn
a = nn.Linear(1, 1)
a.state_dict()['weight'].requires_grad = True
print(a.state_dict()['weight'].requires_grad)
The result is False. Could anyone explain what the problem is and how to fix it? Thank you! My torch version is 1.7.1.
| By default, the parameters of trainable nn modules have requires_grad=True. Your check is misleading because state_dict() returns detached copies of the parameters, so reading or setting requires_grad on them says nothing about the module's own parameters.
You can verify that by doing:
import torch.nn as nn

layer = nn.Linear(1, 1)
for param in layer.parameters():
    print(param.requires_grad)

# or use
print(layer.weight.requires_grad)
print(layer.bias.requires_grad)
To change requires_grad state:
for param in layer.parameters():
    param.requires_grad = False  # or True
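And to see why the state_dict route in the question has no effect (a sketch):
layer.weight.requires_grad = False  # turn it off the proper way
sd = layer.state_dict()             # with the default keep_vars=False this returns detached copies
sd['weight'].requires_grad = True   # only flips the flag on the copy
print(layer.weight.requires_grad)   # still False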
| https://stackoverflow.com/questions/67010964/ |
PyTorch - Change weights of Conv2d | For some reason, I cannot seem to assign all the weights of a Conv2d layer in PyTorch - I have to do it in two steps. Can anyone help me with what I am doing wrong?
layer = torch.nn.Conv2d(in_channels=1, out_channels=2, kernel_size=(2,2), stride=(2,2))
layer.state_dict()['weight']
gives me a tensor of size (2,1,2,2)
tensor([[[[ 0.4738, -0.2197],
          [-0.3436, -0.0754]]],

        [[[ 0.1662,  0.4098],
          [-0.4306, -0.4828]]]])
When I try to assign weights like so
layer.state_dict()['weight'] = torch.tensor([
    [[[ 1,  2],
      [ 3,  4]]],
    [[[-1, -2],
      [-3, -4]]]
])
the weights don't change. However, if I do something like this
layer.state_dict()['weight'][0] = torch.tensor([
    [[[1, 2],
      [3, 4]]],
])
layer.state_dict()['weight'][1] = torch.tensor([
    [[[-1, -2],
      [-3, -4]]],
])
The weights change. Why is this so?
| I'm not sure of the exact internals either, but note that each call to state_dict() builds a fresh dict, so assigning a new tensor to a key of that temporary dict changes nothing in the module, whereas indexed assignment like ['weight'][0] = ... writes in-place into storage shared with the actual parameter. The more proper way to achieve what you're trying to do would be
layer.load_state_dict({'weight': torch.tensor([[[[ 0.4738, -0.2197],
                                                 [-0.3436, -0.0754]]],
                                               [[[ 0.1662,  0.4098],
                                                 [-0.4306, -0.4828]]]])}, strict=False)
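An alternative (a sketch) is to write into the existing parameter in-place under no_grad, which sidesteps the state_dict copies entirely:
import torch

with torch.no_grad():
    layer.weight.copy_(torch.tensor([[[[ 1.,  2.], [ 3.,  4.]]],
                                     [[[-1., -2.], [-3., -4.]]]]))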
| https://stackoverflow.com/questions/67014613/ |
Different test results with PyTorch Lightning | I use PyTorch Lightning to train a small NN (transfer learning) with the hymenoptera photos (inspired from here).
In the test_step method, it prints the real classes (classes) and the predictions (preds).
After the training, I do the same (verification step) but I get different results.
import torch
from torch import nn
from torch.optim import Adam, SGD
import pytorch_lightning as pl
from torchvision import models
from torch.optim import lr_scheduler
from pytorch_lightning.metrics.functional import accuracy
from pytorch_lightning.loggers import TensorBoardLogger
from hymenoptereDataModule import HymenopteraDataModule

class LitHymenoptera(pl.LightningModule):
    def __init__(self, batch_size=4):
        super().__init__()
        torch.manual_seed(42)
        self.batch_size = batch_size
        self.dataModule = HymenopteraDataModule()
        self.dataModule.setup()
        self.criterion = nn.CrossEntropyLoss()
        self.logger = TensorBoardLogger('tb_logs', name=f'Model')

        # Define the model
        self.model = models.resnet18(pretrained=True)
        num_ftrs = self.model.fc.in_features
        self.model.fc = nn.Linear(num_ftrs, 2)

    def forward(self, x):
        return self.model(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        logits = self.model(x)
        # Compute loss
        loss = self.criterion(logits, y)
        # training metrics
        preds = torch.argmax(logits, dim=1)
        acc = accuracy(preds, y)
        num_correct = torch.eq(preds.view(-1), y.view(-1)).sum()
        return {'loss': loss,
                'acc': acc,
                'num_correct': num_correct}

    def training_epoch_end(self, outputs):
        self.exp_lr_scheduler.step()

    def validation_step(self, batch, batch_idx):
        x, y = batch
        logits = self.model(x)
        loss = self.criterion(logits, y)
        # validation metrics
        preds = torch.argmax(logits, dim=1)
        acc = accuracy(preds, y)
        num_correct = torch.eq(preds.view(-1), y.view(-1)).sum()
        return {'loss': loss,
                'acc': acc,
                'num_correct': num_correct}

    def test_step(self, batch, batch_idx):
        inputs, classes = batch
        logits = self(inputs)
        preds = torch.argmax(logits, dim=1)
        print('###############################')
        print('classes1 = ', classes)
        print('preds1 = ', preds)
        print(logits)

    def configure_optimizers(self):
        optimizer = SGD(self.parameters(), lr=0.001, momentum=0.9)
        # Decay LR by a factor of 0.1 every 7 epochs
        self.exp_lr_scheduler = lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)
        return optimizer

model = LitHymenoptera()
trainer = pl.Trainer(gpus=1, max_epochs=5, progress_bar_refresh_rate=100)
trainer.fit(model, model.dataModule)
trainer.test(model)

# Now, another test
for inputs, classes in model.dataModule.val_dataloader():
    print('###############################')
    logits = model(inputs.cuda())
    preds = torch.argmax(logits, dim=1)
    print('classes2 = ', classes)
    print('preds2 = ', preds)
    print(logits)
Here's the first output from test_step method :
classes1 = tensor([0, 0, 0, 0], device='cuda:0')
preds1 = tensor([1, 0, 0, 0], device='cuda:0')
tensor([[0.1626, 0.2195],
        [1.1437, 0.5745],
        [0.9351, 0.4271],
        [0.7365, 0.5342]], device='cuda:0')
and now the first output from the verification step :
classes2 = tensor([0, 0, 0, 0])
preds2 = tensor([1, 0, 1, 0], device='cuda:0')
tensor([[-0.0168,  0.0800],
        [ 0.6817,  0.2949],
        [-0.2205,  0.1009],
        [ 0.6126,  0.4924]], device='cuda:0', grad_fn=<AddmmBackward>)
Both classes are identical (and I checked the images, they are the same) but the preds are different.
Where does it come from?
| I realized that I forgot to add:
model.freeze()
before using the model the second time. freeze() puts the model in eval mode and disables gradients, so layers like batch norm (present in ResNet-18) behave the same as they did inside test_step; without it, the manual forward pass ran in training mode, which explains the different logits (note the grad_fn in the second output).
So, now, both results are the same.
| https://stackoverflow.com/questions/67028391/ |
Detectron2 Speed up inference instance segmentation | I have a working instance segmentation setup using the "mask_rcnn_R_101_FPN_3x" model. When I run inference on an image it takes about 3 seconds per image on GPU. How can I speed it up?
I code in Google Colab.
This is my setup config:
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml"))
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1
cfg.OUTPUT_DIR = "/content/drive/MyDrive/TEAM/save/"
cfg.DATASETS.TRAIN = (train_name,)
cfg.DATASETS.TEST = (test_name, )
cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")
This is inference:
torch.backends.cudnn.benchmark = True
start = time.time()
predictor = DefaultPredictor(cfg)
im = cv2.imread("/content/drive/MyDrive/TEAM/mcocr_val_145114ixmyt.jpg")
outputs = predictor(im)
print(f"Inference time per image is : {(time.time() - start)} s")
Return time:
Inference time per image is : 2.7835421562194824 s
The images I run inference on are 1024 x 1024 pixels. I have tried different sizes but it still takes about 3 seconds per image. Am I missing anything about Detectron2?
More information about the GPU was attached as a screenshot in the original post.
| There is a third way. You could use a faster toolkit for the inference e.g. OpenVINO. OpenVINO is optimized specifically for Intel hardware but it should work with any CPU. It optimizes your model by converting to Intermediate Representation (IR), performing graph pruning and fusing some operations into others while preserving accuracy. Then it uses vectorization at runtime.
If you are able to export Detectron2 to ONNX model you can utilize OpenVINO. You can find a full tutorial on how to convert the ONNX model and performance comparison here. Some snippets below.
Install OpenVINO
The easiest way to do it is using PIP, especially when you use Google Colab.
pip install openvino-dev[onnx]
Use Model Optimizer to convert ONNX model
The Model Optimizer is a command line tool which comes from OpenVINO Development Package. It converts the ONNX model to IR, which is a default format for OpenVINO. You can also try the precision of FP16, which should give you better performance (just change data_type). Run in command line:
mo --input_model "model.onnx" --input_shape "[1,3, 224, 224]" --mean_values="[123.675, 116.28 , 103.53]" --scale_values="[58.395, 57.12 , 57.375]" --data_type FP32 --output_dir "model_ir"
Run the inference
The converted model can be loaded by the runtime and compiled for a specific device e.g. CPU or GPU (integrated into your CPU like Intel HD Graphics). If you don't know what is the best choice for you, just use AUTO.
# Load the network
ie = Core()
model_ir = ie.read_model(model="model_ir/model.xml")
compiled_model_ir = ie.compile_model(model=model_ir, device_name="CPU")
# Get output layer
output_layer_ir = compiled_model_ir.output(0)
# Run inference on the input image
result = compiled_model_ir([input_image])[output_layer_ir]
Disclaimer: I work on OpenVINO.
| https://stackoverflow.com/questions/67035685/ |
Understanding custom policies in stable-baselines3 | I was trying to understand the policy networks in stable-baselines3 from this doc page.
As explained in this example, to specify a custom CNN feature extractor, we extend the BaseFeaturesExtractor class and specify it in policy_kwargs.features_extractor_class, with CnnPolicy as the first param:
model = PPO("CnnPolicy", "BreakoutNoFrameskip-v4", policy_kwargs=policy_kwargs)
Q1. Can we follow the same approach for a custom MLP feature extractor?
As explained in this example, to specify a custom MLP feature extractor, we extend the ActorCriticPolicy class, override _build_mlp_extractor(), and pass it as the first param:
class CustomActorCriticPolicy(ActorCriticPolicy): ...
model = PPO(CustomActorCriticPolicy, "CartPole-v1", verbose=1)
Q2. Can we follow the same approach for a custom CNN feature extractor?
I feel we can have either a CNN extractor or an MLP extractor, so it makes no sense to pass MlpPolicy as the first param to the model and then specify a CNN feature extractor in policy_kwargs.features_extractor_class as in this example. This results in the following policy (containing both features_extractor and mlp_extractor), which I feel is incorrect:
ActorCriticPolicy(
(features_extractor): Net(
(conv1): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1))
(conv2): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1))
(fc3): Linear(in_features=384, out_features=512, bias=True)
)
(mlp_extractor): MlpExtractor(
(shared_net): Sequential(
(0): Linear(in_features=512, out_features=64, bias=True)
(1): ReLU()
)
(policy_net): Sequential(
(0): Linear(in_features=64, out_features=32, bias=True)
(1): ReLU()
(2): Linear(in_features=32, out_features=16, bias=True)
(3): ReLU()
)
(value_net): Sequential(
(0): Linear(in_features=64, out_features=32, bias=True)
(1): ReLU()
(2): Linear(in_features=32, out_features=16, bias=True)
(3): ReLU()
)
)
(action_net): Linear(in_features=16, out_features=7, bias=True)
(value_net): Linear(in_features=16, out_features=1, bias=True)
)
Q3. Am I correct with this understanding? If yes, then is one of the MLP or CNN feature extractors ignored?
| Here is what I can say after going through all the library code. CnnPolicy differs from MlpPolicy only in its default BaseFeaturesExtractor class. This matters only when you are not creating your own custom BaseFeaturesExtractor class.
Let me try to explain, we can see two types of policies:
MlpPolicy = ActorCriticPolicy
CnnPolicy = ActorCriticCnnPolicy
And we can see that class ActorCriticCnnPolicy(ActorCriticPolicy)is just based on ActorCriticPolicy and we can see following in parameters:
features_extractor_class: Type[BaseFeaturesExtractor] = NatureCNN
where NatureCNN is a simple implementation of BaseFeaturesExtractor with CNN layers.
Now let's move to your questions!
One key thing: we always have the following model structure:
FeatureExtractor -> MlpExtractor -> Policy\Value nets
You can specify feature_exctractor_class and net_arch in policy_kwargs.
The default feature_extractor_class is based on your selection of CnnPolicy or MlpPolicy, but if you specify your own class then there is no difference, so you can just use MlpPolicy.
If you specify net_arch, for example 'net_arch': [64, dict(pi=[32, 16], vf=[32, 16])], you will get one dense layer in the mlp_extractor that connects to the output of the feature_extractor.
To summarize, my suggestion is to follow the example that you found in your Q3 (see the sketch after this list):
Specify your own feature_extractor with any network structure you want, whether CNN, dense, or anything else. The output of the feature_extractor should be an FC layer.
Specify the net_arch parameter in the following structure: [x1, x2, ..., dict(pi=[px1, px2, ..], vf=[vx1, vx2, ...])], where the first part specifies the sizes of the FC layers of the mlp_extractor and the second part (inside the dict) corresponds to the sizes of the policy and value networks.
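A minimal sketch combining both pieces, assuming a flat observation space; the class name CustomExtractor and all layer sizes are illustrative, not from the original answer:
import torch.nn as nn
from stable_baselines3 import PPO
from stable_baselines3.common.torch_layers import BaseFeaturesExtractor

class CustomExtractor(BaseFeaturesExtractor):
    def __init__(self, observation_space, features_dim=128):
        super().__init__(observation_space, features_dim)
        n_input = observation_space.shape[0]
        self.net = nn.Sequential(
            nn.Linear(n_input, 128), nn.ReLU(),
            nn.Linear(128, features_dim), nn.ReLU(),
        )

    def forward(self, observations):
        return self.net(observations)

policy_kwargs = dict(
    features_extractor_class=CustomExtractor,
    features_extractor_kwargs=dict(features_dim=128),
    net_arch=[64, dict(pi=[32, 16], vf=[32, 16])],
)
model = PPO("MlpPolicy", "CartPole-v1", policy_kwargs=policy_kwargs, verbose=1)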
| https://stackoverflow.com/questions/67036250/ |
How to reshape multichannel image with a PyTorch encoder? | I have a tensor with dimensions [18, 512, 512], representing grayscale heatmaps for up to 18 specific objects on a 512x512 image. In order to generate a suitable representation of this image for my conditional GAN, I need to reshape this tensor into a [512, 4, 4] shape using an encoder. However, I can't understand how this transformation can be achieved, since the given dimensions appear too mismatched for direct linear or convolutional transformations.
class HeatmapEncoder(torch.nn.Module):
def __init__(self):
# source = 18x512x512
# target = 512x4x4
self.encoder = torch.nn.Sequential(
nn.Linear(),
nn.ReLU(),
nn.Linear()
)
def forward(self, x):
pass
It is possible to use nn.Flatten() with start_dim=0 here, but the result will be a flattened tensor that can't be used as an input by the linear layer.
The decoder part is not especially important right now, since I only need a low-dimensional representation of the heatmaps to condition my GAN and not to recreate those images.
| You can try a couple of different approaches for your problem, like viewing it as a one big vector then slowly reducing it to your size, or permuting dimensions and applying different operations etc. Since there is no full code for testing it on the actual problem, I can't really say which will work better, but my first instinct says convolution based dimension reductions is quite suitable for this problem. First code, then talk:
import torch
import torch.nn as nn

class ReduceConv(torch.nn.Module):
def __init__(self, nin, nout, activ=nn.ReLU):
super(ReduceConv, self).__init__()
# source = Batch x nin x H x W
# target = Batch x nout x (H/2) x (W/2)
self.conv = nn.Sequential(
nn.Conv2d(
nin, nout,
kernel_size=3,
stride=1,
padding=1),
nn.Conv2d(
nout, nout,
kernel_size=3,
stride = 2,
padding = 1),
nn.BatchNorm2d(nout),
activ()
)
def forward(self, x):
return self.conv(x)
class HeatmapEncoder(torch.nn.Module):
def __init__(self):
super(HeatmapEncoder, self).__init__()
# source = 18x512x512
# target = 512x4x4
self.encoder = torch.nn.Sequential(
ReduceConv(18, 32), # out-> 32 256 256
ReduceConv(32, 64), # out-> 64 128 128
ReduceConv(64, 64), # out-> 64 64 64
ReduceConv(64, 64), # out-> 64 32 32
ReduceConv(64, 128), # out-> 128 16 16
ReduceConv(128, 256), # out-> 256 8 8
ReduceConv(256, 512) # out-> 512 4 4
)
def forward(self, x):
return self.encoder(x)
# 10 is batch size
inp = torch.rand(10, 18, 512, 512)
enc = HeatmapEncoder()
out = enc(inp)
print(inp.shape) # torch.Size([10, 18, 512, 512])
print(out.shape) # torch.Size([10, 512, 4, 4])
It is essentially just a stack of convolution layers. Note that in each ReduceConv layer the spatial dimensions are halved by using stride=2 convolutions. You don't technically need the first convolution in the ReduceConv layer, but it is the deep learning era, the more the merrier :) I've also added BatchNorm after each reduction along with an activation function. The 18 is treated as the channel dimension when fed to the first convolution. This way the channels build up to 512 while the width and height are halved after each operation. This encoder model is probably not the best or most efficient one, but it should be good enough for your problem.
| https://stackoverflow.com/questions/67049161/ |
AttributeError in torch_geometric.transforms | I have a problem that I cannot understand: even though a module ‘torch_geometric.transforms’ has an attribute ‘AddTrainValTestMask’ according to documentation , I cannot import it. I keep receiving an error AttributeError: module 'torch_geometric.transforms' has no attribute 'AddTrainValTestMask
My Pytorch version is 1.7.1
I took the code from here
Minimum reproducible example:
import os.path as osp
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
import torch_geometric.transforms as T
from torch_geometric.nn import SplineConv
dataset = 'Cora'
transform = T.Compose([
T.AddTrainValTestMask('train_rest', num_val=500, num_test=500),
T.TargetIndegree(),
])
path = osp.join(osp.dirname(osp.realpath(__file__)), '..', 'data', dataset)
dataset = Planetoid(path, dataset, transform=transform)
data = dataset[0]
class Net(torch.nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = SplineConv(dataset.num_features, 16, dim=1, kernel_size=2)
self.conv2 = SplineConv(16, dataset.num_classes, dim=1, kernel_size=2)
def forward(self):
x, edge_index, edge_attr = data.x, data.edge_index, data.edge_attr
x = F.dropout(x, training=self.training)
x = F.elu(self.conv1(x, edge_index, edge_attr))
x = F.dropout(x, training=self.training)
x = self.conv2(x, edge_index, edge_attr)
return F.log_softmax(x, dim=1)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model, data = Net().to(device), data.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-3)
def train():
model.train()
optimizer.zero_grad()
F.nll_loss(model()[data.train_mask], data.y[data.train_mask]).backward()
optimizer.step()
def test():
model.eval()
log_probs, accs = model(), []
for _, mask in data('train_mask', 'test_mask'):
pred = log_probs[mask].max(1)[1]
acc = pred.eq(data.y[mask]).sum().item() / mask.sum().item()
accs.append(acc)
return accs
for epoch in range(1, 201):
train()
log = 'Epoch: {:03d}, Train: {:.4f}, Test: {:.4f}'
print(log.format(epoch, *test()))
Can anybody explain to me the problem?
| It has been renamed to RandomNodeSplit in the latest version of torch_geometric. You can directly use RandomNodeSplit to replace it.
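For the transform from the question, the drop-in replacement would look like this (assuming a torch_geometric version where RandomNodeSplit accepts the same split arguments):
import torch_geometric.transforms as T

transform = T.Compose([
    T.RandomNodeSplit('train_rest', num_val=500, num_test=500),
    T.TargetIndegree(),
])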
| https://stackoverflow.com/questions/67064190/ |
Worker timeout when preloading Pytorch model in Flask app on Render.com | In my app.py I have a function that uses a pretrained Pytorch model to generate keywords
@app.route('/get_keywords')
def get_keywords():
generated_keywords = ml_controller.generate_keywords()
return jsonify(keywords=generated_keywords)
and in ml_controller.py I have
def generate_keywords():
model = load_keywords_model()
output = model.generate()
return output
This is working fine. Calls to /get_keywords correctly return the generated keywords. However this solution is quite slow since the model gets loaded on each call. Hence I tried to load the model just once by moving it outside my function:
model = load_keywords_model()
def generate_keywords():
output = model.generate()
return output
But now all calls to /get_keywords time out when I deploy my app to Render.com. (Locally it's working.) Strangely the problem is not that the model does not get loaded. When I write
model = load_keywords_model()
testOutput = model.generate()
print(testOutput)
def generate_keywords():
output = model.generate()
return output
a bunch of keywords are generated when I boot gunicorn. Also, all other endpoints that don't call ml_controller.generate_keywords() work without problems.
For testing purposes I also added a dummy function to ml_controller.py that I can call without problems
def dummy_string():
return "dummy string"
Based on answers to similar problems I found, I'm starting Gunicorn with
gunicorn app:app --timeout 740 --preload --log-level debug
and in app.py I'm using
if __name__ == '__main__':
app.run(debug=False, threaded=False)
However, the problem still persists.
| The problem is that there's some bug that occurs for Pytorch models when Gunicorn is started with the --preload flag.
Render.com secretly adds this flag and doesn't show it in the settings which is why it took me days to figure this out. You can see all settings Render.com adds by calling printenv in the console.
To resolve the issue add a new environment variable
GUNICORN_CMD_ARGS: '--access-logfile - --bind=0.0.0.0:10000'
which overwrites Render.com's standard settings
GUNICORN_CMD_ARGS: '--preload --access-logfile - --bind=0.0.0.0:10000'
| https://stackoverflow.com/questions/67069183/ |
Pytorch: load checkpoint from batch without iterating over dataset again | Instead of loading from an epoch wise checkpoint I need to be able to load from a batch. I am aware that this is not optimal but since I only have limited training time before my training gets interrupted (google colab free version) I need to be able to load from the batch it stopped or around that batch.
I also do not want to iterate over all data again but continue with the data the model has not seen yet.
My current approach which does not work:
def save_checkpoint(state, file=checkpoint_file):
torch.save(state, file)
def load_checkpoint(checkpoint):
model.load_state_dict(checkpoint['state_dict'])
optimizer.load_state_dict(checkpoint['optimizer'])
train_loss = checkpoint['train_loss']
val_loss = checkpoint['val_loss']
epoch = checkpoint['epoch']
step = checkpoint['step']
batch = checkpoint['batch']
return model, optimizer, train_loss, val_loss, epoch, step, batch
While it does load the weights from where it stopped, it iterates over all data again.
Also, do I even need to capture train_loss and val_loss? I cannot see a difference in the loss being output when I include them or not. Thus, I assume it is already included in model.load_state_dict (?)
I assume capturing step and batch won't work this way and I actually need to include some sort of index tracker within my class DataSet ? I do already have this within the DataSet class
def __getitem__(self, idx):
question = self.data_qs[idx]
answer1 = self.data_a1s[idx]
answer2 = self.data_a2s[idx]
target = self.targets[idx]
So, could this be useful?
| You can achieve your goal by creating a custom Dataset class with an attribute self.start_index = step * batch; in your __getitem__ method, the new index should be (self.start_index + idx) % len(self.data_qs). A minimal sketch is shown below.
If you create your DataLoader with shuffle=False then this trick will work.
Additionally, with shuffle=True you would have to maintain an index mapper yourself and verify that it behaves as expected.
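A minimal sketch of that idea, reusing the field names from the question (the class name ResumableDataset is just illustrative):
from torch.utils.data import Dataset

class ResumableDataset(Dataset):
    def __init__(self, data_qs, data_a1s, data_a2s, targets, start_index=0):
        self.data_qs = data_qs
        self.data_a1s = data_a1s
        self.data_a2s = data_a2s
        self.targets = targets
        self.start_index = start_index  # e.g. step * batch from the checkpoint

    def __len__(self):
        return len(self.data_qs)

    def __getitem__(self, idx):
        idx = (self.start_index + idx) % len(self.data_qs)
        question = self.data_qs[idx]
        answer1 = self.data_a1s[idx]
        answer2 = self.data_a2s[idx]
        target = self.targets[idx]
        return question, answer1, answer2, target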
| https://stackoverflow.com/questions/67072628/ |
Finding function/class definitions in PyTorch | I want to find out where certain classes and functions are defined within PyTorch (and other libraries).
Unfortunately, the following doesn't work:
import inspect
import torch
inspect.getsource(torch.tensor)
It throws the following error:
TypeError: module, class, method, function, traceback, frame, or code object was expected, got builtin_function_or_method
What's more, within PyCharm, I usually do 'gd' (in vim mode) to find a function/class definition, but this doesn't work either for PyTorch.
Please help me understand what is the problem here, and more importantly, how I can find these definitions in general.
| This is actually complicated. PyTorch/libtorch is a huge project, and it relies on a lot of built-in low-level functions which have been implemented in C/CUDA. Most low-level kernels (math operations, for example) even have several implementations, in order to optimize differently for the CPU, the GPU, etc.
So there is a lot in this library that is not python code, and inspect is going to have a hard time.
If you want to find the source files, you are probably going to need to dive in the github repository yourself, and make good use of tools like grep and find.
However, the torch.nn module is almost entirely python, so I think inspect will work correctly on its features (like datasets, dataloaders, modules, optimizers etc)
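For example, this works because nn.Linear is implemented in pure Python:
import inspect
import torch.nn as nn

print(inspect.getsource(nn.Linear))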
And finally, if you need it, here is the file for torch.tensor, in which you will find python code mixed with calls to the C api : torch tensor source code
About the question in the comment below:
I cannot provide a full answer because this is reaching beyond my understanding of how exactly are python and C++ code interfaced in torch. But I'll do my best (If anyone has any correction or improvement to make to this, please do it).
There is a fundamental difference between C source code and python code : C/C++ are compiled and thus the features implemented in these languages are shipped as compiled assembly code. In other words, when your python code calls functions/objects from the underlying C code, it make calls to assembly functions that are not human-readable anymore. So the computer can make the calls, but the inspect feature that looks up for source code (for you to read) cannot work because this code does not exist anymore (at least not where inspect is looking). You would need other tools like a disassembler, a debugger etc, which are specialized in analyzing assembly instructions (you can also learn x86-64 assembly language programming if you are brave enough :D)
| https://stackoverflow.com/questions/67077747/ |
Decode a json file properly, when get it as an input via message request | I would like to read a request, sent via curl, as given below:
curl -X POST http://127.0.0.1:8080/eval/res -v path/request.json. With the json having the following format:
{
"imgX": [{
"key": "x",
"url": "http://127.0.0.1:8080/imgs/x.png"
}],
"imgY": [{
"key": "y",
"url": "http://127.0.0.1:8080/imgs/y.png"
}]
}
After service initialization, my code is:
def preprocess(self, data):
_json = {}
for row in data:
json_obj = row.get("data") or row.get("body")
self.parameter_dict = dict(json_obj)
image_url = ""
json_obj = dict(json_obj)
for img_inputs, msk_inputs in zip(json_obj['imgX'], json_obj['imgY']):
key = img_inputs['key']
However, that gives:
KeyError: 'img'
I debug and find that when reading json file, each row is a bytearray:
{'body': bytearray(b'')}.
How can I decode to get the .json format back?
| To avoid problems with json responses from APIs, you can use the json lib. In particular json.loads() returns a dictionary so you don't have to convert things manually.
For example:
import json
with open('<path_to_file>/request.json','r') as f:
data = f.read()
_json = json.loads(data)
print(_json)
Would output:
{'imgX': [{'key': 'x', 'url': 'http://127.0.0.1:8080/imgs/x.png'}],
'imgY': [{'key': 'y', 'url': 'http://127.0.0.1:8080/imgs/y.png'}]}
On the other hand, when using zip() if you are getting two lists img_inputs and msk_inputs, each with more than one json object/dictionary inside, in order to make img_inputs['key'] work, you would need to loop over the values of those lists to get to the keys.
#...
for img_inputs, msk_inputs in zip(json_obj['imgX'], json_obj['imgY']):
key = img_inputs[0]['key']
#...
In this case (I revised my code), zip() will return the first element in the list for img_inputs and msk_inputs so that step is not necessary and your for loop is fine.
Make sure you are getting a valid json response from the API.
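Note also that since Python 3.6, json.loads accepts bytes and bytearray directly, so inside the handler from the question the body can be parsed without manual decoding (a sketch, assuming row is the dict shown in the debug output):
import json

for row in data:
    body = row.get("data") or row.get("body")  # a bytearray, per the debug output
    json_obj = json.loads(body)  # json.loads handles bytes/bytearray since Python 3.6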
| https://stackoverflow.com/questions/67078988/ |
How to load/fetch the next data batches for the next epoch, during the current epoch? | I know that since PyTorch 1.7.0 it is possible to prefetch some batches before an epoch begins. However, this does not make it possible to fetch batches while the operations within an epoch are being performed and before the next epoch begins. Based on this thread, it seems that it should be possible to use a Sampler to load batches during an epoch, and before the next epoch begins. However, I cannot wrap my head around how I can use a Sampler to achieve this.
Can anyone provide a code sample for a Sampler that allows fetching samples during an epoch?
| You can prefetch the next batches from iterator in a background thread.
class _ThreadedIterator(threading.Thread):
"""
Prefetch the next queue_length items from iterator in a background thread.
Example:
>> for i in bg_iterator(range(10)):
>> print(i)
"""
class _End:
pass
def __init__(self, generator: Iterable, maxsize: int) -> None:
threading.Thread.__init__(self)
self.queue: Queue = Queue(maxsize)
self.generator = generator
self.daemon = True
self.start()
def run(self) -> None:
for item in self.generator:
self.queue.put(item)
self.queue.put(self._End)
def __iter__(self) -> Any:
return self
def __next__(self) -> Any:
next_item = self.queue.get()
if next_item == self._End:
raise StopIteration
return next_item
# Required for Python 2.7 compatibility
def next(self) -> Any:
return self.__next__()
def bg_iterator(iterable: Iterable, maxsize: int) -> Any:
return _ThreadedIterator(iterable, maxsize=maxsize)
UPD.
Usage:
model = model.to(device, non_blocking=True)
for inputs, targets in bg_iterator(data_loader, maxsize=2):
inputs = inputs.to(device, non_blocking=True)
targets = targets.to(device, non_blocking=True)
example
| https://stackoverflow.com/questions/67085517/ |
nn.DataParallel - Training doesn't seem to start | I am having a lot of problems using nn.DistributedDataParallel, because I cannot find a good working example of how to specify GPU id's within a single node. For this reason, I want to start off by using nn.DataParallel, since it should be easier to implement. According to the documentation [https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html] the following should work:
device = torch.device('cuda:1' if torch.cuda.is_available() else 'cpu')
model = Model(arg).to(device)
model = torch.nn.DataParallel(model, device_ids=[1, 8, 9])
for step, (original, keypoints) in enumerate(train_loader):
original, keypoints = original.to(device), keypoints.to(device)
loss = model(original)
optimizer.zero_grad()
total_loss.backward()
optimizer.step()
However, when I start the process the model is distributed to all three GPUs, but the training doesn't start. The RAM of the GPUs remains almost empty (except for the memory used for loading the model), as a screenshot of GPUs 1, 8 and 9 in the original post showed.
Can someone explain to me why that's not working?
Thanks a lot!!
| I am making a guess here and I haven't tested it since I don't have multiple GPUs.
You're supposed to wrap the model in DataParallel first, then move it to the GPU:
model = Model(arg)
model = torch.nn.DataParallel(model, device_ids=[1, 8, 9])
model.to(device)
You can check out here the tutorial I referenced here: https://pytorch.org/tutorials/beginner/blitz/data_parallel_tutorial.html
| https://stackoverflow.com/questions/67096073/ |
PyTorch: Zero all elements of vector except top k? | I am trying to create a new activation layer, let’s call it topk, that would work as follows. It will take a vector x of size n as input (result of multiplying previous layer output by weight matrix and adding bias) and a positive integer k and would output a vector topk(x) of size n whose elements are:
x_i (if x_i is one of the top k elements of x)
topk(x)_i =
0 (otherwise)
While calculating gradient of topk(x), top k elements of x should have gradient 1, everything else 0.
How should I implement this? Any help will be appreciated.
| You can use torch.topk for this:
k = 2
output = torch.randn(5)
vals, idx = output.topk(k)
topk = torch.zeros_like(output)
topk[idx] = vals
>>> topk
tensor([1.0557, 0.0000, 0.0000, 1.4562, 0.0000])
Note that while the 'values' of topk() are differentiable, the 'indices' are not (similar to how argmax is not a differentiable function).
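If you want this as a reusable activation layer, a minimal sketch could look like the following; the out-of-place scatter keeps the operation differentiable with respect to the kept values, so the top-k elements get gradient 1 and everything else gets 0:
import torch
import torch.nn as nn

class TopK(nn.Module):
    def __init__(self, k):
        super().__init__()
        self.k = k

    def forward(self, x):
        vals, idx = x.topk(self.k, dim=-1)
        # zeros everywhere, with the top-k values scattered back to their positions
        return torch.zeros_like(x).scatter(-1, idx, vals)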
| https://stackoverflow.com/questions/67099961/ |
expected np.ndarray (got DataFrame) | I wanted to find out the output_size of a convolution operation, but I am struggling with converting my dataframe into a tensor.
output_size = torch.nn.Conv2d(3, 5, 5,stride=1, padding=3,
dilation=1, groups=1, bias=True, padding_mode='zeros')
fashion = torch.from_numpy(load_fashion)
input_ = torch.Tensor((fashion.values), dtype=torch.float)
output = output_size(input_)
| I can't completely understand your problem as your code is not formatted properly in your question but the error is just an expected datatype error.
You need to convert your dataframe to a np array. Just add .values at the end of your dataframe. So if your input was a sample dataframe like so:
df = pd.DataFrame({"Col1":[1,2,3,4], "Col2":[2,2,3,4]})
Convert the whole thing to a numpy array like so:
sample_array = df.values
or convert one column to a np array like so:
sample_array_2 = df["Col1"].values
Update:
As mentioned in the comments, pandas recommends .to_numpy() instead, so use something like:
sample_array = df.to_numpy()
| https://stackoverflow.com/questions/67110207/ |
_C.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZN6caffe28TypeMeta21_typeMetaDataInstanceIdEEPKNS_6detail12TypeMetaDataEv | What is the reason for this error and how can I fix it? I am running the code from this repo: https://github.com/facebookresearch/frankmocap
(frank) mona@goku:~/research/code/frankmocap$ python -m demo.demo_frankmocap --input_path ./sample_data/han_short.mp4 --out_dir ./mocap_output
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/mona/research/code/frankmocap/demo/demo_frankmocap.py", line 25, in <module>
from handmocap.hand_bbox_detector import HandBboxDetector
File "/home/mona/research/code/frankmocap/handmocap/hand_bbox_detector.py", line 33, in <module>
from detectors.hand_object_detector.lib.model.roi_layers import nms # might raise segmentation fault at the end of program
File "/home/mona/research/code/frankmocap/detectors/hand_object_detector/lib/model/roi_layers/__init__.py", line 3, in <module>
from .nms import nms
File "/home/mona/research/code/frankmocap/detectors/hand_object_detector/lib/model/roi_layers/nms.py", line 3, in <module>
from model import _C
ImportError: /home/mona/research/code/frankmocap/detectors/hand_object_detector/lib/model/_C.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZN6caffe28TypeMeta21_typeMetaDataInstanceIdEEPKNS_6detail12TypeMetaDataEv
I have:
$ lsb_release -a
LSB Version: core-11.1.0ubuntu2-noarch:security-11.1.0ubuntu2-noarch
Distributor ID: Ubuntu
Description: Ubuntu 20.04.2 LTS
Release: 20.04
Codename: focal
and
$ python
Python 3.8.5 (default, Jan 27 2021, 15:41:15)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.__version__
'1.8.1+cu111'
>>> import detectron2
>>> detectron2.__version__
'0.4'
>>> from detectron2 import _C
and:
$ python -m detectron2.utils.collect_env
/home/mona/venv/frank/lib/python3.8/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 804: forward compatibility was attempted on non supported HW (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:109.)
return torch._C._cuda_getDeviceCount() > 0
No CUDA runtime is found, using CUDA_HOME='/usr'
--------------------- --------------------------------------------------------------------------
sys.platform linux
Python 3.8.5 (default, Jan 27 2021, 15:41:15) [GCC 9.3.0]
numpy 1.19.5
detectron2 0.4 @/home/mona/venv/frank/lib/python3.8/site-packages/detectron2
Compiler GCC 7.3
CUDA compiler CUDA 11.1
DETECTRON2_ENV_MODULE <not set>
PyTorch 1.8.1+cu111 @/home/mona/venv/frank/lib/python3.8/site-packages/torch
PyTorch debug build False
GPU available False
Pillow 8.1.0
torchvision 0.9.1+cu111 @/home/mona/venv/frank/lib/python3.8/site-packages/torchvision
fvcore 0.1.3.post20210311
cv2 4.5.1
--------------------- --------------------------------------------------------------------------
PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v1.7.0 (Git Hash 7aed236906b1f7a05c0917e5257a1af05e9ff683)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CPU capability usage: AVX2
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.1, CUDNN_VERSION=8.0.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.8.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,
| I tried the solutions mentioned here, but that didn't fully solve the problem. However, when I tried solving a different error using this solution, it also solved this error for me. Use the following command:
pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
| https://stackoverflow.com/questions/67117097/ |
extracting subtensor from a tensor according to an index tensor | I have this tensor:
tensor([[[1, 2],
[3, 4]],
[[5, 6],
[7, 8]]])
and I have this index tensor:
tensor([0, 1])
and what I want to get is the subtensors according to dim 1 and the corresponding indices in the index tensor, that is:
tensor([[1, 2],
[7, 8]])
I tried to use the torch.gather() function and advanced indexing with no success. Can anyone help?
| You are implicitly using the index of each value of your index tensor. They just happen to be the same as the values. If you want to walk through the first level, elements of the tensor, you can use torch.arange to construct the first level indices.
import torch
from torch import tensor
t = tensor([[[1, 2],
[3, 4]],
[[5, 6],
[7, 8]]])
ix = tensor([0, 1])
ix0 = torch.arange(0, ix.shape.numel())
t[ix0, ix]
# returns:
tensor([[1, 2],
[7, 8]])
| https://stackoverflow.com/questions/67123934/ |
Torch doesnt see gpu on gcloud with deep learning containers | I am trying to run some python code on kubernetes in GCloud, I am using pytorch and for the base image i am using gcr.io/deeplearning-platform-release/pytorch-gpu/.
Everything spins up fine, and my model trains but only using CPU. When i run the following
>>> import torch
>>> torch.cuda.is_available()
/opt/conda/lib/python3.7/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: CUDA driver initialization failed, you might not have a CUDA gpu. (Triggered internally at /opt/conda/conda-bld/pytorch_1614378098133/work/c10/cuda/CUDAFunctions.cpp:109.)
return torch._C._cuda_getDeviceCount() > 0
False
But this is the output of nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.67 Driver Version: 418.67 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla K80 Off | 00000000:00:04.0 Off | 0 |
| N/A 62C P8 32W / 149W | 11MiB / 11441MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
The version of pytorch is the one that is given by the base image, so I am not exactly sure how to fix this. Any help would be appreciated.
EDIT:
>>> torch.__version__
'1.8.0'
EDIT TWO:
>>> torch.version.cuda
'11.1'
| The problem is that you have a NVIDIA driver that supports up to CUDA 10.1 and you installed a PyTorch built on CUDA 11.1. To solve that issue you can:
Update your NVIDIA driver to one that supports CUDA 11.1, or
Install a PyTorch compatible with CUDA 10.1 (which is compatible with your NVIDIA driver)
For the option 2, you can simply run the following:
pip install torch==1.8.0+cu101 torchvision==0.9.0+cu101 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
You can check the versions available for CUDA 10.1 in https://download.pytorch.org/whl/torch_stable.html (those in cu101/)
| https://stackoverflow.com/questions/67125766/ |
"TextInputSequence must be str” error on Hugging Face Transformers | I’m very new to HuggingFace, I’ve come around this error “TextInputSequence must be str” on a notebook which is helping me a lot to do some practice on various hugging face models. The boilerplate code on the notebook is throwing this error (I guess) due to some changes in huggingface’s API or something. So I was wondering if someone could suggest some changes that I can make to the code to resolve the error.
The error can easily be reproduced by just running all the cells of the notebook.
Link: Colab Notebook
(The failing line and the full error message were shown as screenshots in the original post.)
| This is an issue with the data: it contains None values or other non-string types where a string is expected.
| https://stackoverflow.com/questions/67138037/ |
PyTorch indexing: select complement of indices | Say I have a tensor and index:
x = torch.tensor([1,2,3,4,5])
idx = torch.tensor([0,2,4])
If I want to select all elements not in the index, I can manually define a Boolean mask like so:
mask = torch.ones_like(x, dtype=torch.bool)
mask[idx] = False
x[mask]
is there a more elegant way of doing this?
i.e. a syntax where I can directly pass the indices as opposed to creating a mask e.g. something like:
x[~idx]
| I couldn't find a satisfactory solution to finding the complement of a multi-dimensional tensor of indices and finally implemented my own. It can work on cuda and enjoys fast parallel computation.
def complement_idx(idx, dim):
"""
Compute the complement: set(range(dim)) - set(idx).
idx is a multi-dimensional tensor, find the complement for its trailing dimension,
all other dimension is considered batched.
Args:
idx: input index, shape: [N, *, K]
dim: the max index for complement
"""
a = torch.arange(dim, device=idx.device)
ndim = idx.ndim
dims = idx.shape
n_idx = dims[-1]
dims = dims[:-1] + (-1, )
for i in range(1, ndim):
a = a.unsqueeze(0)
a = a.expand(*dims)
masked = torch.scatter(a, -1, idx, 0)
compl, _ = torch.sort(masked, dim=-1, descending=False)
compl = compl.permute(-1, *tuple(range(ndim - 1)))
compl = compl[n_idx:].permute(*(tuple(range(1, ndim)) + (0,)))
return compl
Example:
>>> import torch
>>> a = torch.rand(3, 4, 5)
>>> a
tensor([[[0.7849, 0.7404, 0.4112, 0.9873, 0.2937],
[0.2113, 0.9923, 0.6895, 0.1360, 0.2952],
[0.9644, 0.9577, 0.2021, 0.6050, 0.7143],
[0.0239, 0.7297, 0.3731, 0.8403, 0.5984]],
[[0.9089, 0.0945, 0.9573, 0.9475, 0.6485],
[0.7132, 0.4858, 0.0155, 0.3899, 0.8407],
[0.2327, 0.8023, 0.6278, 0.0653, 0.2215],
[0.9597, 0.5524, 0.2327, 0.1864, 0.1028]],
[[0.2334, 0.9821, 0.4420, 0.1389, 0.2663],
[0.6905, 0.2956, 0.8669, 0.6926, 0.9757],
[0.8897, 0.4707, 0.5909, 0.6522, 0.9137],
[0.6240, 0.1081, 0.6404, 0.1050, 0.6413]]])
>>> b, c = torch.topk(a, 2, dim=-1)
>>> b
tensor([[[0.9873, 0.7849],
[0.9923, 0.6895],
[0.9644, 0.9577],
[0.8403, 0.7297]],
[[0.9573, 0.9475],
[0.8407, 0.7132],
[0.8023, 0.6278],
[0.9597, 0.5524]],
[[0.9821, 0.4420],
[0.9757, 0.8669],
[0.9137, 0.8897],
[0.6413, 0.6404]]])
>>> c
tensor([[[3, 0],
[1, 2],
[0, 1],
[3, 1]],
[[2, 3],
[4, 0],
[1, 2],
[0, 1]],
[[1, 2],
[4, 2],
[4, 0],
[4, 2]]])
>>> compl = complement_idx(c, 5)
>>> compl
tensor([[[1, 2, 4],
[0, 3, 4],
[2, 3, 4],
[0, 2, 4]],
[[0, 1, 4],
[1, 2, 3],
[0, 3, 4],
[2, 3, 4]],
[[0, 3, 4],
[0, 1, 3],
[1, 2, 3],
[0, 1, 3]]])
>>> al = torch.cat([c, compl], dim=-1)
>>> al
tensor([[[3, 0, 1, 2, 4],
[1, 2, 0, 3, 4],
[0, 1, 2, 3, 4],
[3, 1, 0, 2, 4]],
[[2, 3, 0, 1, 4],
[4, 0, 1, 2, 3],
[1, 2, 0, 3, 4],
[0, 1, 2, 3, 4]],
[[1, 2, 0, 3, 4],
[4, 2, 0, 1, 3],
[4, 0, 1, 2, 3],
[4, 2, 0, 1, 3]]])
>>> al, _ = al.sort(dim=-1)
>>> al
tensor([[[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4]],
[[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4]],
[[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4]]])
| https://stackoverflow.com/questions/67157893/ |
YoloV5 killed at first epoch | I'm using a virtual machine on Windows 10 with this config:
Memory 7.8 GiB
Processor Intel® Core™ i5-6600K CPU @ 3.50GHz × 3
Graphics llvmpipe (LLVM 11.0.0, 256 bits)
Disk Capcity 80.5 GB
OS Ubuntu 20.10 64 Bit
Virtualization Oracle
I installed docker for Ubuntu as described in the official documentation.
I pulled the docker image as described on the yolo github section for docker.
Since I have no NVIDIA GPU I could not install a driver or CUDA.
I pulled the aquarium from roboflow and installed it on a folde aquarium.
I ran this command to start the image and have my aquarium folder mounted
sudo docker run --ipc=host -it -v "$(pwd)"/Desktop/yolo/aquarium:/usr/src/app/aquarium ultralytics/yolov5:latest
And was greeted with this banner
=============
== PyTorch ==
NVIDIA Release 21.03 (build 21060478) PyTorch Version 1.9.0a0+df837d0
Container image Copyright (c) 2021, NVIDIA CORPORATION. All rights
reserved.
Copyright (c) 2014-2021 Facebook Inc. Copyright (c) 2011-2014 Idiap
Research Institute (Ronan Collobert) Copyright (c) 2012-2014 Deepmind
Technologies (Koray Kavukcuoglu) Copyright (c) 2011-2012 NEC
Laboratories America (Koray Kavukcuoglu) Copyright (c) 2011-2013 NYU
(Clement Farabet) Copyright (c) 2006-2010 NEC Laboratories America
(Ronan Collobert, Leon Bottou, Iain Melvin, Jason Weston) Copyright
(c) 2006 Idiap Research Institute (Samy Bengio) Copyright (c)
2001-2004 Idiap Research Institute (Ronan Collobert, Samy Bengio,
Johnny Mariethoz) Copyright (c) 2015 Google Inc. Copyright (c)
2015 Yangqing Jia Copyright (c) 2013-2016 The Caffe contributors
All rights reserved.
NVIDIA Deep Learning Profiler (dlprof) Copyright (c) 2021, NVIDIA
CORPORATION. All rights reserved.
Various files include modifications (c) NVIDIA CORPORATION. All
rights reserved.
This container image and its contents are governed by the NVIDIA Deep
Learning Container License. By pulling and using the container, you
accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
WARNING: The NVIDIA Driver was not detected. GPU functionality will
not be available. Use 'nvidia-docker run' to start this container;
see https://github.com/NVIDIA/nvidia-docker/wiki/nvidia-docker .
NOTE: MOFED driver for multi-node communication was not detected.
Multi-node communication performance may be reduced.
So no error there.
I installed pip and with pip wandb I added wandb. I used wandb login and set my API key.
I ran following command:
# python train.py --img 640 --batch 16 --epochs 10 --data ./aquarium/data.yaml --weights yolov5s.pt --project ip5 --name aquarium5 --nosave --cache
And received this output:
github: skipping check (Docker image)
YOLOv5 v5.0-14-g238583b torch 1.9.0a0+df837d0 CPU
Namespace(adam=False, artifact_alias='latest', batch_size=16, bbox_interval=-1, bucket='', cache_images=True, cfg='', data='./aquarium/data.yaml', device='', entity=None, epochs=10, evolve=False, exist_ok=False, global_rank=-1, hyp='data/hyp.scratch.yaml', image_weights=False, img_size=[640, 640], label_smoothing=0.0, linear_lr=False, local_rank=-1, multi_scale=False, name='aquarium5', noautoanchor=False, nosave=True, notest=False, project='ip5', quad=False, rect=False, resume=False, save_dir='ip5/aquarium5', save_period=-1, single_cls=False, sync_bn=False, total_batch_size=16, upload_dataset=False, weights='yolov5s.pt', workers=8, world_size=1)
tensorboard: Start with 'tensorboard --logdir ip5', view at http://localhost:6006/
hyperparameters: lr0=0.01, lrf=0.2, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0
wandb: Currently logged in as: pebs (use `wandb login --relogin` to force relogin)
wandb: Tracking run with wandb version 0.10.26
wandb: Syncing run aquarium5
wandb: ⭐️ View project at https://wandb.ai/pebs/ip5
wandb: View run at https://wandb.ai/pebs/ip5/runs/1c2j80ii
wandb: Run data is saved locally in /usr/src/app/wandb/run-20210419_102642-1c2j80ii
wandb: Run `wandb offline` to turn off syncing.
Overriding model.yaml nc=80 with nc=7
from n params module arguments
0 -1 1 3520 models.common.Focus [3, 32, 3]
1 -1 1 18560 models.common.Conv [32, 64, 3, 2]
2 -1 1 18816 models.common.C3 [64, 64, 1]
3 -1 1 73984 models.common.Conv [64, 128, 3, 2]
4 -1 1 156928 models.common.C3 [128, 128, 3]
5 -1 1 295424 models.common.Conv [128, 256, 3, 2]
6 -1 1 625152 models.common.C3 [256, 256, 3]
7 -1 1 1180672 models.common.Conv [256, 512, 3, 2]
8 -1 1 656896 models.common.SPP [512, 512, [5, 9, 13]]
9 -1 1 1182720 models.common.C3 [512, 512, 1, False]
10 -1 1 131584 models.common.Conv [512, 256, 1, 1]
11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
12 [-1, 6] 1 0 models.common.Concat [1]
13 -1 1 361984 models.common.C3 [512, 256, 1, False]
14 -1 1 33024 models.common.Conv [256, 128, 1, 1]
15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
16 [-1, 4] 1 0 models.common.Concat [1]
17 -1 1 90880 models.common.C3 [256, 128, 1, False]
18 -1 1 147712 models.common.Conv [128, 128, 3, 2]
19 [-1, 14] 1 0 models.common.Concat [1]
20 -1 1 296448 models.common.C3 [256, 256, 1, False]
21 -1 1 590336 models.common.Conv [256, 256, 3, 2]
22 [-1, 10] 1 0 models.common.Concat [1]
23 -1 1 1182720 models.common.C3 [512, 512, 1, False]
24 [17, 20, 23] 1 32364 models.yolo.Detect [7, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]
[W NNPACK.cpp:80] Could not initialize NNPACK! Reason: Unsupported hardware.
Model Summary: 283 layers, 7079724 parameters, 7079724 gradients, 16.4 GFLOPS
Transferred 356/362 items from yolov5s.pt
Scaled weight_decay = 0.0005
Optimizer groups: 62 .bias, 62 conv.weight, 59 other
train: Scanning '/usr/src/app/aquarium/train/labels.cache' images and labels... 448 found, 0 missing, 1 empty, 0 corrupted: 100%|█| 448/448 [00:00<?, ?
train: Caching images (0.4GB): 100%|████████████████████████████████████████████████████████████████████████████████| 448/448 [00:01<00:00, 313.77it/s]
val: Scanning '/usr/src/app/aquarium/valid/labels.cache' images and labels... 127 found, 0 missing, 0 empty, 0 corrupted: 100%|█| 127/127 [00:00<?, ?it
val: Caching images (0.1GB): 100%|██████████████████████████████████████████████████████████████████████████████████| 127/127 [00:00<00:00, 141.31it/s]
Plotting labels...
autoanchor: Analyzing anchors... anchors/target = 5.17, Best Possible Recall (BPR) = 0.9997
Image sizes 640 train, 640 test
Using 3 dataloader workers
Logging results to ip5/aquarium5
Starting training for 10 epochs...
Epoch gpu_mem box obj cls total labels img_size
0%| | 0/28 [00:00<?, ?it/s]Killed
root@cf40a6498016:~# /opt/conda/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 6 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
From this output I would think that there were 0 epochs completed.
My data.yaml contains this code:
train: /usr/src/app/aquarium/train/images
val: /usr/src/app/aquarium/valid/images
nc: 7
names: ['fish', 'jellyfish', 'penguin', 'puffin', 'shark', 'starfish', 'stingray']
wandb.ai does not display any metrics, but I have the files config.yaml, requirements.txt, wandb-metadata.json and wandb-summary.json.
Why am I not getting any output?
Has there in fact be no training at all?
If there was a training, how can I use my model?
| The problem was that the VM ran out of RAM. The solution was to create 16 GB of swap memory, so the machine can use the virtual hard drive as RAM.
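On Ubuntu, swap can be created with the usual sequence below (run inside the VM; the file path and size here are one common choice, not something from the original answer):
sudo fallocate -l 16G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile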
| https://stackoverflow.com/questions/67160576/ |
Are the random transforms applied at each epoch in my Pytorch convolutional neural net? (data augmentation) | I'm new to Pytorch and I'm trying to make a convolutional neural net to classify a set of images (a personal iris recognition problem). My issue is that I have a small number of images (10 classes and 20 images per class). I tried to do data augmentation (random transforms for every epoch) but I'm not sure that these are applied at each epoch as I intended. Here's my code. If anyone can confirm that I'm doing it right, or if it's not ok, is there a way to apply the transforms inside the loop?
from torch import utils, nn, optim, no_grad
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from ConvNet import ConvNet
from ImagesDataset import ImagesDataset, AddGaussianNoise
DATABASE_PATH = "C://Users//Maria//Downloads//ees//CASIA-IrisV2"
MODEL_PATH = "entire_model.pt"
dataArray = []
# Device configuration
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# data augmentation by applying some transforms randomly for every batch
transform = transforms.Compose([transforms.RandomCrop(5), transforms.RandomHorizontalFlip(p=0.1),
transforms.ColorJitter(brightness=0.1, contrast=0.2, saturation=0, hue=0),
AddGaussianNoise(0.1, 0.05), transforms.ToTensor()])
dataset = ImagesDataset(csv_file="generate_csv//generate_csv_correctly_detected.csv", root_dir=DATABASE_PATH, transform=transforms.ToTensor())
num_epochs = 300
num_classes = 10
batch_size = 100
learning_rate = 0.01
# the dataset is partitioned in 5 subsets to perform cross validation
sum_percents = 0
data_set = utils.data.random_split(dataset, [40, 40, 40, 40, 40])
for i in range(5):
test_set = data_set[i]
train_set = []
for j in range(5):
if j != i:
train_set += data_set[j]
train_loader = DataLoader(dataset=train_set, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(dataset=test_set, batch_size=batch_size, shuffle=True)
model = ConvNet(0).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=learning_rate)
# Train the model
total_step = len(train_loader)
loss_list = []
acc_list = []
# delete contents of loss1 file
file = open("loss1.txt", "r+")
file.truncate(0)
file.close()
for epoch in range(num_epochs):
print("Epoch: " + str(epoch))
for i, (images, labels) in enumerate(train_loader):
# Run the forward pass
images = images.to(device)
labels = labels.to(device)
outputs = model(images)
loss = criterion(outputs, labels)
# set the gradients to zero
optimizer.zero_grad()
# compute gradients
loss.backward()
# update the parameters
optimizer.step()
# Track the accuracy
total = labels.size(0)
_, predicted = torch.max(outputs.data, 1)
correct = (predicted == labels).sum().item()
acc_list.append(correct / total)
# Save
torch.save(model, MODEL_PATH)
# Test the model
model.eval()
with no_grad():
correct = 0
total = 0
for images, labels in test_loader:
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Test Accuracy of the model on the 132 test images: {} %'.format((correct / total) * 100))
sum_percents += (correct / total) * 100
print('Average accuracy is {}%'.format((sum_percents/5)))
| Hi, what I meant wasn't like that but the following (I cannot completely reproduce it since I don't have your AddGaussianNoise function):
import torchvision.transforms as T
import numpy as np
transforms = T.Compose([
T.ToPILImage(), # You need to add this to pil image
T.RandomCrop(5), T.RandomHorizontalFlip(p=0.1),
T.ColorJitter(brightness=0.1, contrast=0.2, saturation=0, hue=0),
T.ToTensor()
])
transforms(np.random.randn(224, 224, 3).astype(np.uint8))
>>>tensor([[[0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0039],
[0.0000, 0.0000, 0.9882, 0.0000, 0.9882],
[0.0039, 0.9882, 0.9882, 0.0000, 0.9882],
[0.0000, 0.0039, 0.0000, 0.0000, 0.0000]],
[[0.0039, 0.0000, 0.0000, 0.0039, 0.9882],
[0.9882, 0.0000, 0.0000, 0.0000, 0.9882],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.9882, 0.0000, 0.0000, 0.0000],
[0.0000, 0.9882, 0.0000, 0.0000, 0.0039]],
[[0.0000, 0.9882, 0.0000, 0.9882, 0.0000],
[0.0000, 0.0039, 0.0000, 0.0000, 0.0000],
[0.0039, 0.0000, 0.0000, 0.0000, 0.0039],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0039, 0.0039, 0.0000, 0.0000, 0.0000]]])
So this is another assumption, but the transform should be wired in like this; the key change is that the composed transforms object, not a bare ToTensor(), is passed to the dataset. Since I don't have the rest of your code, here it is:
import torchvision.transforms as T
transforms = T.Compose([
T.ToPILImage(), # You need to add this to pil image
T.RandomCrop(5), T.RandomHorizontalFlip(p=0.1),
T.ColorJitter(brightness=0.1, contrast=0.2, saturation=0, hue=0),
T.ToTensor(), # Maybe you can add you gaussian noise augment here
])
dataset = ImagesDataset(csv_file="generate_csv//generate_csv_correctly_detected.csv", root_dir=DATABASE_PATH, transform=transforms)
| https://stackoverflow.com/questions/67184229/ |
LSTM-CNN to classify sequences of images | I got an assignment and got stuck with it while going down the rabbit hole of learning PyTorch, LSTM and CNN.
Given the well-known MNIST dataset, I take combinations of 4 numbers, and each combination falls into one of 7 labels.
eg:
1111 label 1 (follow a constant trend)
1234 label 2 increasing trend
4321 label 3 decreasing trend
...
7382 label 7 decreasing trend - increasing trend - decreasing trend
The shape of my tensor after loading of the tensor become (3,4,28,28) where the 28 comes from the MNIST image's width and height. 3 is the batch size and 4 is the channels (4 images).
I'm somewhat stuck with how to pass this into a PyTorch backed LSTM and CNN as basically all Google searches lead to articles where simply one image is passed in.
I was thinking of reshaping it into one long array of pixel values, where I put all the values of the first image row by row (28 per row) after each other, then append the second, third and fourth image in the same way. That would make 4 * 28 * 28 = 3136.
Is my way of thinking on how to tackle this a correct one or should I rethink? I'm rather new to this all and looking for some guidance on how to go forward. I've been reading loads of articles, YT videos, ... but all seem to touch the basic stuff or alternatives of the same subject.
I have written some code but running it gives errors.
import numpy as np
import torch
import torch.nn as nn
from torch import optim, softmax
from sklearn.model_selection import train_test_split
#dataset = sequences of 4 MNIST images each
#datalabels =7
#Data
x_train, x_test, y_train, y_test = train_test_split(dataset.data, dataset.data_label, test_size=0.15,
random_state=42)
#model
class Mylstm(nn.Module):
def __init__(self, input_size, hidden_size, n_layers, n_classes):
super(Mylstm, self).__init__()
self.input_size = input_size
self.n_layers = n_layers
self.hidden_size = hidden_size
self.lstm = nn.LSTM(input_size, hidden_size, n_layers, batch_first=True)
# readout layer
self.fc = nn.Linear(hidden_size, n_classes)
def forward(self, x):
# Initialize hidden state with zeros
h0 = torch.zeros(self.n_layers, x.size(0), self.hidden_size).requires_grad_()
# initialize the cell state:
c0 = torch.zeros(self.n_layers, x.size(0), self.hidden_size).requires_grad_()
out, (h_n, h_c) = self.lstm(x, (h0.detach(), c0.detach()))
x = h_n[-1, :, 1]
x = self.fc(x)
x = softmax(x, dim=1)
return x
#Hyperparameters
input_size = 28
hidden_size = 256
sequence_length = 28
n_layers = 2
n_classes = 7
learning_rate = 0.001
model = Mylstm(input_size, hidden_size, n_layers, n_classes)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
#training
bs = 0
num_epochs = 5
batch_size=3
if np.mod(x_train.shape[0], batch_size) == 0.0:
iter = int(x_train.shape[0] / batch_size)
else:
iter = int(x_train.shape[0] / batch_size) + 1
bs = 0
for i in range(iter):
sequences = x_test[bs:bs + batch_size, :]
labels = y_test[bs:bs + batch_size]
test_images = dataset.load_images(sequences)
bs += batch_size
for epoch in range(num_epochs):
for i in range(iter):
sequences = x_train[bs:bs + batch_size, :]
labels = y_train[bs:bs + batch_size]
input_images = dataset.load_images(sequences)
bs += batch_size
images=(torch.from_numpy(input_images)).view(batch_size,4,-1)
labels=torch.from_numpy(labels)
optimizer.zero_grad()
output = model(images)
# calculate Loss
loss = criterion(output, labels)
loss.backward()
optimizer.step()
The error I'm currently getting is:
RuntimeError: input.size(-1) must be equal to input_size. Expected 28, got 784
| Change your input size from 28 to 784 (784 = 28*28).
The input_size argument is the number of features in one element of the sequence, i.e. the number of features of an MNIST image, which is the number of pixels: width * height of the image.
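Concretely, in the hyperparameter block from the question this is the only change needed:
input_size = 28 * 28  # 784: one flattened MNIST image per timestep
model = Mylstm(input_size, hidden_size, n_layers, n_classes)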
| https://stackoverflow.com/questions/67195464/ |
save model output in pytorch | dic = []
for step, batch in tqdm(enumerate(train_dataloader)):
inpt = batch[0].to(device)
msks = batch[1].to(device)
#Run the sentences through the model
outputs = model_obj(inpt, msks)
dic.append( {
'hidden_states': outputs[2],
'pooled_output': outputs[1]})
I want to save the model output in each iteration but I got the below error for a small set of datasets.
RuntimeError: CUDA out of memory.
notice that without the below code my model works correctly.
dic.append( { 'hidden_states': outputs[2], 'pooled_output': outputs[1]})
How can I save these outputs in each iteration?
| First of all, you should always post the full error stacktrace. Secondly, you should move the outputs from your GPU when you want to store them to free up memory:
dic.append( {
'hidden_states': outputs[2].detach().cpu().tolist(),
'pooled_output': outputs[1].detach().cpu().tolist()
})
| https://stackoverflow.com/questions/67195895/ |
PyTorch DataLoader uses identical random transformation across each epoch | There is a bug in PyTorch/Numpy where when loading batches in parallel with a DataLoader (i.e. setting num_workers > 1), the same NumPy random seed is used for each worker, resulting in any random functions applied being identical across parallelized batches. This can be resolved by passing a seed generator to the worker_init_fn argument like so.
However the issue persists across multiple epochs.
Minimal example:
import numpy as np
from torch.utils.data import Dataset, DataLoader
class RandomDataset(Dataset):
def __getitem__(self, index):
return np.random.randint(0, 1000, 2)
def __len__(self):
return 4
dataset = RandomDataset()
dataloader = DataLoader(dataset, batch_size=1,
num_workers=2,
worker_init_fn = lambda x: np.random.seed(x))
for epoch in range(3):
print(f'\nEpoch {epoch}')
for batch in dataloader:
print(batch)
As you can see, while parallelized batches within an epoch now produce different results, the results are identical across epochs:
Epoch 0
tensor([[684, 559]])
tensor([[ 37, 235]])
tensor([[629, 192]])
tensor([[908, 72]])
Epoch 1
tensor([[684, 559]])
tensor([[ 37, 235]])
tensor([[629, 192]])
tensor([[908, 72]])
Epoch 2
tensor([[684, 559]])
tensor([[ 37, 235]])
tensor([[629, 192]])
tensor([[908, 72]])
How can this be behaviour be fixed?
Using an empty argument e.g. worker_init_fn = lambda _: np.random.seed() appears to fix this - are there any issues with this workaround?
| The best way I can think of is to use the seed set by pytorch for numpy and random:
import random
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader
def worker_init_fn(worker_id):
torch_seed = torch.initial_seed()
random.seed(torch_seed + worker_id)
    if torch_seed >= 2**30:  # make sure torch_seed + worker_id < 2**32
torch_seed = torch_seed % 2**30
np.random.seed(torch_seed + worker_id)
class RandomDataset(Dataset):
def __getitem__(self, index):
return np.random.randint(0, 1000, 2)
def __len__(self):
return 4
dataset = RandomDataset()
dataloader = DataLoader(dataset, batch_size=1,
num_workers=2,
worker_init_fn = worker_init_fn)
for epoch in range(3):
print(f'\nEpoch {epoch}')
for batch in dataloader:
print(batch)
Output:
Epoch 0
tensor([[593, 191]])
tensor([[207, 469]])
tensor([[976, 714]])
tensor([[ 13, 119]])
Epoch 1
tensor([[836, 664]])
tensor([[138, 836]])
tensor([[409, 313]])
tensor([[ 2, 221]])
Epoch 2
tensor([[269, 888]])
tensor([[315, 619]])
tensor([[892, 774]])
tensor([[ 70, 771]])
Alternatively, you can use int(time.time()) to seed numpy and random, assuming each epoch takes more than 1 second to run.
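A minimal sketch of that time-based alternative (the modulo keeps the seed inside NumPy's allowed range):
import time

def worker_init_fn(worker_id):
    seed = (int(time.time()) + worker_id) % 2**32  # np.random.seed requires a seed < 2**32
    np.random.seed(seed)
    random.seed(seed)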
| https://stackoverflow.com/questions/67196075/ |
Pytorch cosine similarity NxN elements | I have 128 vectors of embeddings
image.shape = torch.Size([128, 512])
text.shape = torch.Size([128, 512])
And I want to calculate the tensor containing the cosine similarity between all elements (i.e:
cosine.shape = torch.Size([128, 128])
Where the first row is the cosine similarity between the 1st image and all text (128), etc.
At the moment I'm doing the following, but the result is a one-dimensional array containing only N cosine similarities.
cosine_similarity = torch.nn.CosineSimilarity()
cosine = cosine_similarity(image, text)
How can I do it? I tried to transpose text, but it didn't work.
| The way pytorch computes cosine similarity internally is like this:
def cos_sim(A, B, dim, eps=1e-08):
numerator = torch.mul(A, B).sum(axis=dim, keepdims=True)
A_l2 = torch.mul(A, A).sum(axis=dim, keepdims=True)
B_l2 = torch.mul(B, B).sum(axis=dim, keepdims=True)
    denominator = torch.max(torch.sqrt(torch.mul(A_l2, B_l2)), torch.tensor(eps))
return torch.div(numerator, denominator).squeeze()
In order to get NxN cosine similarity, you can instead use this function:
def nxn_cos_sim(A, B, dim=1, eps=1e-8):
numerator = A @ B.T
A_l2 = torch.mul(A, A).sum(axis=dim)
B_l2 = torch.mul(B, B).sum(axis=dim)
denominator = torch.max(torch.sqrt(torch.outer(A_l2, B_l2)), torch.tensor(eps))
return torch.div(numerator, denominator)
Ahmed's answer also works, but if the value for N is large you will face memory issues using it.
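For the shapes in the question, a quick usage check (variable names taken from the question):
image = torch.randn(128, 512)
text = torch.randn(128, 512)
cosine = nxn_cos_sim(image, text)
print(cosine.shape)  # torch.Size([128, 128]); row i holds similarities of image i to all texts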
| https://stackoverflow.com/questions/67199317/ |
CUDA driver version is higher than the CUDA runtime version? | The terminal shows the error:
RuntimeError: cuda runtime error (35) : CUDA driver version is insufficient for CUDA runtime version at torch/csrc/cuda/Module.cpp:51
But my driver version (440.118.02) should be sufficient for CUDA 9.0.
Some info about my machine: cat /proc/driver/nvidia/version NVRM version: NVIDIA UNIX x86_64 Kernel Module 440.118.02 Thu Sep 3 09:54:46 UTC 2020
cat /usr/local/cuda/version.txt CUDA Version 9.0.176
| You can upgrade the CUDA version to 9.2 or higher. After installing the new CUDA, check whether driver version 440 is still compatible; if not, upgrade the driver too.
Then run your code again and check.
NOTE: If you have installed or upgraded the Nvidia driver or CUDA recently then reboot the system once and then try.
| https://stackoverflow.com/questions/67206495/ |
pytorch Dataloader - if input data returns multiple training instances | Problem
I have the following problem:
I want to use pytorchs DataLoader (in a similar way like here) but my setup varies a bit:
In my data folder I have images (let's call them image_total) of different street situations, and I want to use cropped images (called image_crop_[idx]) around persons that are close enough to the camera. So it can happen that some images give me one or more cropped images, while others give me zero images because they do not show any person or the persons are too far away.
As I have a lot of images I want to make the implementation as efficient as possible.
My hope is that it is possible to use something like this:
I want to load the image_total and check if useful crops are in it. If so I extract the cropped images and get a list like [image_crop_0, image_crop_1, image_crop_2,...]
Now my question: is this compatible with PyTorch's DataLoader? The problem I see is that the __getitem__ method of my class would return anywhere from zero to arbitrarily many instances, while I want to use a constant batch size for training.
Considerations
maybe DataLoader supports this (and I did not find it)
I have to work with a buffer or something similar
the fallback would be to pre process the data, but this would not be the most efficient solution
|
the fallback would be to pre process the data, but this would not be the most efficient solution
Indeed, this could be the simplest and most efficient solution. Your dataset currently has a dynamic size, which is incompatible with DataLoader, which should output batches of fixed size for training.
An alternative solution may be to pre-process the data in your pytorch Dataset __init__ to create a list of all persons as well as their corresponding image:
[("img1", p1), ("img1", p2), ..., ("imgn", pk)]
Where pi is the person bounding box in the image. Then, in your __getitem__ method you can read the image and crop the corresponding person:
class PersonDataset(Dataset):
def __init__(self):
self.images = ["img1", "img2", ..., "image"]
self.persons = [("img1", p1), ("img1", p2), ..., ("imgn", pk)]
def __getitem__(self, index):
img, box = self.persons[index]
        img = read_image(img)
return crop(img, box)
def __len__(self):
return len(self.persons)
This is not the most efficient method as it may lead to an image being read multiple times, but this should not be a bottleneck when using a DataLoader with multiple workers.
You must implement how to create self.persons. Basically you have to read all your annotation files and extract the list of people bounding box of the image.
| https://stackoverflow.com/questions/67209968/ |
Import TextLMDataBunch from Fastai | I am following this tutorial to build a NLP sentiment analysis model.
from fastai.text import *
This is the only import specified that includes fastai.
Unfortunately the TextLMDataBunch is undefined.
What import should I use to have this class available?
I have already tried:
from fastai.text.data import TextLMDataBunch
But apparently fastai.text.data is not even a package.
| I think you are using a tutorial of fast.ai v1 with version 2 of the fastai library, so it won't work. The link you've included in your question has the documentation for the class TextLMDataBunch, but if you look at the URL you will see that it is for fastai v1.
https://fastai1.fast.ai/text.data.html
So you have two options: either you explicitly install fastai v1, or you find an alternative tutorial. This one may not be exactly what you're looking for, but it could be a good starting point.
https://docs.fast.ai/tutorial.text.html
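If you choose the first option, pinning the library to v1 could be done with, for example:
pip install "fastai<2"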
| https://stackoverflow.com/questions/67211962/ |
TypeError: string indices must be integers - PyTorch | I'm trying to loop through my pre-trained CNN using the following code, it's slightly modified from PyTorch's example:
def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
since = time.time()
best_model_wts = copy.deepcopy(model.state_dict())
best_acc = 0.0
for epoch in range(num_epochs):
print('Epoch {}/{}'.format(epoch, num_epochs - 1))
print('-' * 10)
# Each epoch has a training and validation phase
for phase in ['train', 'val']:
if phase == 'train':
model.train() # Set model to training mode
else:
model.eval() # Set model to evaluate mode
running_loss = 0.0
running_corrects = 0
# Iterate over data.
for i, batch in loaders[phase]:
inputs = batch["image"].float().to(device) # <---- error happens here
labels = batch["label"].float().to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward
# track history if only in train
with torch.set_grad_enabled(phase == 'train'):
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels)
# backward + optimize only if in training phase
if phase == 'train':
loss.backward()
optimizer.step()
# statistics
running_loss += loss.item() * inputs.size(0)
running_corrects += torch.sum(preds == labels.data)
if phase == 'train':
scheduler.step()
epoch_loss = running_loss / dataset_sizes[phase]
epoch_acc = running_corrects.double() / dataset_sizes[phase]
print('{} Loss: {:.4f} Acc: {:.4f}'.format(
phase, epoch_loss, epoch_acc))
# deep copy the model
if phase == 'val' and epoch_acc > best_acc:
best_acc = epoch_acc
best_model_wts = copy.deepcopy(model.state_dict())
print()
time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format(
time_elapsed // 60, time_elapsed % 60))
print('Best val Acc: {:4f}'.format(best_acc))
# load best model weights
model.load_state_dict(best_model_wts)
return model
However I get the error:
Epoch 0/24
----------
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-53-79684c739f29> in <module>()
----> 1 model_ft = train_model(resnet_cnn, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=25)
<ipython-input-49-55bb790e99a0> in train_model(model, criterion, optimizer, scheduler, num_epochs)
21 # Iterate over data.
22 for i, batch in loaders[phase]:
---> 23 inputs = batch["image"].float().to(device)
24 labels = batch["label"].float().to(device)
25
TypeError: string indices must be integers
The loaders variable is:
loaders = {"train":train_loader, "val":valid_loader}
The Dataset class I'm using for my train_loader and valid_loader is below; it explains why I'm indexing with strings in my training function:
class GetDataLabel(Dataset):
def __init__(self, df, root, transform = None):
self.df = df
self.root = root
self.transform = transform
def __len__(self):
return len(self.df)
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
img_path = os.path.join(self.root, self.df.iloc[idx, 0])
img = Image.open(img_path)
label = self.df.iloc[idx, 1]
if self.transform:
img = self.transform(img)
img_lab = {"image": img,
"label": label}
return (img_lab)
Thank you in advance.
| There is a missing enumerate. Without it, for i, batch in loaders[phase] unpacks each batch (a dict with keys "image" and "label") into two variables; iterating over a dict yields its keys, so batch ends up being the string "label", and indexing that string with batch["image"] raises the TypeError. The fix:
for i, batch in enumerate(loaders[phase]): # <--- here
inputs = batch["image"].float().to(device)
labels = batch["label"].float().to(device)
| https://stackoverflow.com/questions/67220647/ |
Mismatched tensor size error when generating text with beam_search (huggingface library) | I'm using the huggingface library to generate text using the pre-trained distilgpt2 model. In particular, I am making use of the beam_search function, as I would like to include a LogitsProcessorList (which you can't use with the generate function).
The relevant portion of my code looks like this:
beam_scorer = BeamSearchScorer(
batch_size=btchsze,
max_length=15, # not sure why lengths under 20 fail
num_beams=num_seq,
device=model.device,
)
j = input_ids.tile((num_seq*btchsze,1))
next_output = model.beam_search(
j,
beam_scorer,
eos_token_id=tokenizer.encode('.')[0],
logits_processor=logits_processor
)
However, the beam_search function throws this error when I try to generate using a max_length of less than 20:
~/anaconda3/envs/techtweets37/lib/python3.7/site-packages/transformers-4.4.2-py3.8.egg/transformers/generation_beam_search.py in finalize(self, input_ids, final_beam_scores, final_beam_tokens, final_beam_indices, pad_token_id, eos_token_id)
326 # fill with hypotheses and eos_token_id if the latter fits in
327 for i, hypo in enumerate(best):
--> 328 decoded[i, : sent_lengths[i]] = hypo
329 if sent_lengths[i] < self.max_length:
330 decoded[i, sent_lengths[i]] = eos_token_id
RuntimeError: The expanded size of the tensor (15) must match the existing size (20) at non-singleton dimension 0. Target sizes: [15]. Tensor sizes: [20]
I can't seem to figure out where 20 is coming from: it's the same even if the input length is longer or shorter, even if I use a different batch size or number of beams. There's nothing I've defined as length 20, nor can I find any default. The max length of the sequence does affect the results of the beam search, so I'd like to figure this out and be able to set a shorter max length.
| This is a known issue in the hugging face library:
https://github.com/huggingface/transformers/issues/11040
Basically, the beam scorer isn't using the max_length passed to it, but the max_length of the model.
For now, the fix is to set model.config.max_length to the desired max length.
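With the snippet from the question, the workaround is a one-liner before calling beam_search:
model.config.max_length = 15  # same value passed to BeamSearchScorer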
| https://stackoverflow.com/questions/67221901/ |
Interleaving a set of channels during concatenation? | I am trying to perform a quaternion space concatenation, which requires the four components r, i, j, k to be concatenated separately. According to quaternion theory, we cannot apply the torch.cat function directly, as it would mess up the components: r channels have to be concatenated with r channels, and so on. I managed to perform this action using the quaternion_concat function adapted from here. However, when I use this for networks like DenseNet, it takes a long time due to the multiple for-loop concatenations.
Code: Let me give an example. tensor_1 and tensor_2 are two tensors that need to be concatenated, each with 16 channels, i.e. 4 channels for each of r, i, j, k stacked together. I used the torch.chunk function to separate these components, concatenate them separately, and finally combine them back. Is there a more efficient way to do this?
import torch
def quarternion_concat(x, dim=2):
output = [[] for i in range(4)]
for _x in x:
sp = torch.chunk(_x, 4, dim=dim)
for i in range(4):
output[i].append(sp[i])
final = []
for o in output:
o = torch.cat(o, dim)
final.append(o)
return torch.cat(final, dim)
tensor_1 = torch.randn((1, 16, 64, 64), requires_grad=False)
tensor_2 = torch.randn((1, 16, 64, 64), requires_grad=False)
tensor3 = quarternion_concat([tensor_1, tensor_2], dim=1)
| Based on the previous answer, I came up with a solution that uses a class to precompute the indices and index_select to rearrange the concatenation. The only drawback is that the number of inputs and channels must be known before execution.
import torch
from time import time
def quarternion_concat(x, dim=2):
output = [[] for i in range(4)]
for _x in x:
sp = torch.chunk(_x, 4, dim=dim)
for i in range(4):
output[i].append(sp[i])
final = []
for o in output:
o = torch.cat(o, dim)
final.append(o)
return torch.cat(final, dim)
tensor_1 = torch.randn((1, 16, 64, 64), requires_grad=False)
tensor_2 = torch.randn((1, 16, 64, 64), requires_grad=False)
tensor_3 = torch.randn((1, 16, 64, 64), requires_grad=False)
tensor_4 = torch.randn((1, 16, 256, 256), requires_grad=False)
tensor_5 = torch.randn((1, 16, 256, 256), requires_grad=False)
tensor_6 = torch.randn((1, 16, 256, 256), requires_grad=False)
tensor_7 = torch.randn((1, 16, 1024, 1024), requires_grad=False)
tensor_8 = torch.randn((1, 16, 1024, 1024), requires_grad=False)
tensor_9 = torch.randn((1, 16, 1024, 1024), requires_grad=False)
def quarternion_concat2(x, dim=1):
output = torch.empty(tuple([i if I != dim else i * len(x) for I, i in enumerate(x[0].shape)])).cuda()
# output = torch.cat(x, dim=dim)
inds = torch.arange(output.shape[dim]).view([-1, 4 * len(x)]).cuda()
for i in range(len(x)):
output[:, inds[:, i * 4:i * 4 + 4].flatten()] = x[i]
return output
class concat(torch.nn.Module):
def __init__(self, no_of_inputs, total_number_of_channel):
super().__init__()
temp = torch.chunk(torch.arange(total_number_of_channel).view([-1, 4, 4]), no_of_inputs, dim=0)
self.register_buffer('indx', torch.cat(temp, dim=2).flatten())
def forward(self, x):
output = torch.cat(x, dim=1).index_select(dim=1, index=self.indx)
return output
def time_funcs(x, str, N=100):
x = [i.cuda() for i in x]
print(str)
s = time()
for _ in range(N):
tensor4 = quarternion_concat(x, dim=1)
print('First method: {:1.7f}'.format((time() - s) / 1000))
s = time()
for _ in range(N):
tensor5 = quarternion_concat2(x, dim=1)
print('Second method: {:1.7f}'.format((time() - s) / 1000))
m = concat(len(x), 16 * len(x)).cuda()
s = time()
for _ in range(N):
tensor6 = m(x)
print('Third method: {:1.7f}'.format((time() - s) / 1000))
time_funcs([tensor_1, tensor_2], 'Two small')
time_funcs([tensor_1, tensor_2, tensor_3], 'Three small')
time_funcs([tensor_4, tensor_5], 'Two big')
time_funcs([tensor_4, tensor_5, tensor_6], 'Three big')
time_funcs([tensor_7, tensor_8], 'Two huge')
time_funcs([tensor_7, tensor_8, tensor_9], 'Three huge')
Below are the results when run on a GPU.
Two small
First method: 0.0000054
Second method: 0.0000203
Third method: 0.0000031
Three small
First method: 0.0000081
Second method: 0.0000295
Third method: 0.0000022
Two big
First method: 0.0000050
Second method: 0.0001637
Third method: 0.0000028
Three big
First method: 0.0000056
Second method: 0.0002477
Third method: 0.0000022
Two huge
First method: 0.0000051
Second method: 0.0025740
Third method: 0.0000028
Three huge
First method: 0.0000062
Second method: 0.0038824
Third method: 0.0000028
| https://stackoverflow.com/questions/67221988/ |
RuntimeError: Input type (torch.cuda.LongTensor) and weight type (torch.cuda.FloatTensor) should be the same | I'm trying to train a CNN using PyTorch's example with my own data. I have the following training loop which is identical to PyTorch:
def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
since = time.time()
best_model_wts = copy.deepcopy(model.state_dict())
best_acc = 0.0
for epoch in range(num_epochs):
print('Epoch {}/{}'.format(epoch, num_epochs - 1))
print('-' * 10)
# Each epoch has a training and validation phase
for phase in ['train', 'val']:
if phase == 'train':
model.train() # Set model to training mode
else:
model.eval() # Set model to evaluate mode
running_loss = 0.0
running_corrects = 0
# Iterate over data.
for i, batch in enumerate(loaders[phase]):
inputs = batch["image"].type(torch.cuda.LongTensor).to(device)
labels = batch["label"].type(torch.cuda.LongTensor).to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward
# track history if only in train
with torch.set_grad_enabled(phase == 'train'):
outputs = model(inputs.type(torch.cuda.LongTensor).to(device))
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels)
# backward + optimize only if in training phase
if phase == 'train':
loss.backward()
optimizer.step()
# statistics
running_loss += loss.item() * inputs.size(0)
running_corrects += torch.sum(preds == labels.data)
if phase == 'train':
scheduler.step()
epoch_loss = running_loss / dataset_sizes[phase]
epoch_acc = running_corrects.double() / dataset_sizes[phase]
print('{} Loss: {:.4f} Acc: {:.4f}'.format(
phase, epoch_loss, epoch_acc))
# deep copy the model
if phase == 'val' and epoch_acc > best_acc:
best_acc = epoch_acc
best_model_wts = copy.deepcopy(model.state_dict())
print()
time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format(
time_elapsed // 60, time_elapsed % 60))
print('Best val Acc: {:4f}'.format(best_acc))
# load best model weights
model.load_state_dict(best_model_wts)
return model
However, I get the error:
Epoch 0/24
----------
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-24-79684c739f29> in <module>()
----> 1 model_ft = train_model(resnet_cnn, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=25)
6 frames
<ipython-input-21-393aa43e7b06> in train_model(model, criterion, optimizer, scheduler, num_epochs)
30 # track history if only in train
31 with torch.set_grad_enabled(phase == 'train'):
---> 32 outputs = model(inputs.type(torch.cuda.LongTensor).to(device))
33 _, preds = torch.max(outputs, 1)
34 loss = criterion(outputs, labels)
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
/usr/local/lib/python3.7/dist-packages/torchvision/models/resnet.py in forward(self, x)
247
248 def forward(self, x: Tensor) -> Tensor:
--> 249 return self._forward_impl(x)
250
251
/usr/local/lib/python3.7/dist-packages/torchvision/models/resnet.py in _forward_impl(self, x)
230 def _forward_impl(self, x: Tensor) -> Tensor:
231 # See note [TorchScript super()]
--> 232 x = self.conv1(x)
233 x = self.bn1(x)
234 x = self.relu(x)
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py in forward(self, input)
397
398 def forward(self, input: Tensor) -> Tensor:
--> 399 return self._conv_forward(input, self.weight, self.bias)
400
401 class Conv3d(_ConvNd):
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)
394 _pair(0), self.dilation, self.groups)
395 return F.conv2d(input, weight, bias, self.stride,
--> 396 self.padding, self.dilation, self.groups)
397
398 def forward(self, input: Tensor) -> Tensor:
RuntimeError: Input type (torch.cuda.LongTensor) and weight type (torch.cuda.FloatTensor) should be the same
I've tried to convert my data using torch.cuda.LongTensor as seen from above however it doesn't work for some reason. Does anybody have any ideas? Thank you greatly in advance!
Edit 1:
def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
since = time.time()
best_model_wts = copy.deepcopy(model.state_dict())
best_acc = 0.0
for epoch in range(num_epochs):
print('Epoch {}/{}'.format(epoch, num_epochs - 1))
print('-' * 10)
# Each epoch has a training and validation phase
for phase in ['train', 'val']:
if phase == 'train':
model.train() # Set model to training mode
else:
model.eval() # Set model to evaluate mode
running_loss = 0.0
running_corrects = 0
# Iterate over data.
for i, batch in enumerate(loaders[phase]):
inputs = batch["image"].type(torch.cuda.FloatTensor).to(device)
labels = batch["label"].type(torch.cuda.FloatTensor).to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward
# track history if only in train
with torch.set_grad_enabled(phase == 'train'):
outputs = model(inputs.to(device))
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels)
# backward + optimize only if in training phase
if phase == 'train':
loss.backward()
optimizer.step()
# statistics
running_loss += loss.item() * inputs.size(0)
running_corrects += torch.sum(preds == labels.data)
if phase == 'train':
scheduler.step()
epoch_loss = running_loss / dataset_sizes[phase]
epoch_acc = running_corrects.double() / dataset_sizes[phase]
print('{} Loss: {:.4f} Acc: {:.4f}'.format(
phase, epoch_loss, epoch_acc))
# deep copy the model
if phase == 'val' and epoch_acc > best_acc:
best_acc = epoch_acc
best_model_wts = copy.deepcopy(model.state_dict())
print()
time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format(
time_elapsed // 60, time_elapsed % 60))
print('Best val Acc: {:4f}'.format(best_acc))
# load best model weights
model.load_state_dict(best_model_wts)
return model
This returns the new error:
Epoch 0/24
----------
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-38-79684c739f29> in <module>()
----> 1 model_ft = train_model(resnet_cnn, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=25)
4 frames
<ipython-input-36-9b4381de034f> in train_model(model, criterion, optimizer, scheduler, num_epochs)
32 outputs = model(inputs.to(device))
33 _, preds = torch.max(outputs, 1)
---> 34 loss = criterion(outputs, labels)
35
36 # backward + optimize only if in training phase
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py in forward(self, input, target)
1046 assert self.weight is None or isinstance(self.weight, Tensor)
1047 return F.cross_entropy(input, target, weight=self.weight,
-> 1048 ignore_index=self.ignore_index, reduction=self.reduction)
1049
1050
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
2691 if size_average is not None or reduce is not None:
2692 reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2693 return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
2694
2695
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
2386 )
2387 if dim == 2:
-> 2388 ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
2389 elif dim == 4:
2390 ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #2 'target' in call to _thnn_nll_loss_forward
| By default the parameters of the model are in FloatTensor datatype, so the inputs must be cast to float:
inputs = batch["image"].type(torch.cuda.FloatTensor).to(device)
labels = batch["label"].type(torch.cuda.LongTensor).to(device)
Note that the labels must stay LongTensor: nn.CrossEntropyLoss expects integer class indices as targets, which is exactly the second error shown in Edit 1. Alternatively, you can do the casting inside your dataloader class itself.
| https://stackoverflow.com/questions/67240639/ |
ImportError: cannot import name 'PY3' from 'torch._six' | I am testing ZED Camera with the code on https://github.com/stereolabs/zed-pytorch. While running the final command: python zed_object_detection.py --config-file configs/caffe2/e2e_mask_rcnn_R_50_C4_1x_caffe2.yaml --min-image-size 256
I get the following error:
Traceback (most recent call last):
File "zed_object_detection.py", line 6, in
from predictor import COCODemo
File "/home/fypadmin/Desktop/23Apr_ZED/zed-pytorch/predictor.py", line 4, in
from torchvision import transforms as T
File "/home/fypadmin/anaconda3/envs/pytorch1/lib/python3.8/site-packages/torchvision/init.py", line 4, in
from torchvision import datasets
File "/home/fypadmin/anaconda3/envs/pytorch1/lib/python3.8/site-packages/torchvision/datasets/init.py", line 1, in
from .lsun import LSUN, LSUNClass
File "/home/fypadmin/anaconda3/envs/pytorch1/lib/python3.8/site-packages/torchvision/datasets/lsun.py", line 19, in
from .utils import verify_str_arg, iterable_to_str
File "/home/fypadmin/anaconda3/envs/pytorch1/lib/python3.8/site-packages/torchvision/datasets/utils.py", line 11, in
from torch._six import PY3
ImportError: cannot import name 'PY3' from 'torch._six' (/home/fypadmin/anaconda3/envs/pytorch1/lib/python3.8/site-packages/torch/_six.py)
I am new to ML and I am running pytorch 1.8.1. Looking forward to any help. Thanks
| The reason is that your torchvision and PyTorch versions don't match, so you need to upgrade them together:
pip install --upgrade torch torchvision
| https://stackoverflow.com/questions/67241289/ |
PyTorch - Efficient way to apply different functions to different 'row/column' of a tensor | Let's say I have a 2-d tensor:
x = torch.Tensor([[1, 2], [3, 4]])
Is there an efficient way to apply one function to the first 'row' [1, 2] and apply a second different function to the second row [3, 4]? (Doesn't have to be a row, could be across any dimension)
At the moment, I use the following code: Say I have my two functions, f and g, for example,
def f(z):
return 2 * z
def g(z):
return 0.5 * z
Then, to apply them to seperate rows I would do:
torch.cat([f(x[0]).unsqueeze(0), g(x[1]).unsqueeze(0)], dim = 0)
which gives the desired tensor [[2, 4], [1.5, 2]].
Obviously, in this 2-d example this solution is fine, but it seems a bit clunky. Is there a better way of doing this? Particularly in higher dimensions or when there are a large number of elements in the chosen dimension
| A handy tip is to slice instead of selecting to avoid the unsqueeze step. Indeed, notice how x[:1] keeps the indexed dimension compared to x[0].
This way you can perform the desired operation in a slightly shorter form:
>>> torch.vstack((f(x[:1]), g(x[1:])))
Optionally you can use vstack to not have to provide dim=0 to torch.stack.
Alternatively, you can use a helper function that will apply both f and g:
>>> fn = lambda a,b: (f(a), g(b))
And split the tensor inline with torch.Tensor.split:
>>> torch.vstack(fn(*x.split(1)))
| https://stackoverflow.com/questions/67244919/ |
How to display a video in colab, using a PyTorch tensor of RGB image arrays? | I have a tensor of shape (125, 3, 128, 128):
125 frames
3 channels (RGB)
each frame 128 x 128 size.
values in the tensor are in the range [0,1].
I want to display the video of these 125 frames, using Pytorch in Google Colab. How can I do that?
| One way to enable inline animations in Colab is using jshtml:
from matplotlib import rc
rc('animation', html='jshtml')
With this enabled, you can then plot your animation like so (note you will need to permute your image tensors to get them in PIL/matplotlib format):
import matplotlib.pyplot as plt
import matplotlib.animation as animation
fig, ax = plt.subplots()
imgs = torch.rand(10,3,128,128)
imgs = imgs.permute(0,2,3,1) # Permuting to (Bx)HxWxC format
frames = [[ax.imshow(imgs[i])] for i in range(len(imgs))]
ani = animation.ArtistAnimation(fig, frames)
ani
| https://stackoverflow.com/questions/67261108/ |
AutoModelForSequenceClassification requires the PyTorch library but it was not found in your environment | I am trying to use the roberta transformer and a pre-trained model but I keep getting this error:
ImportError:
AutoModelForSequenceClassification requires the PyTorch library but it was not found in your environment. Checkout the instructions on the
installation page: https://pytorch.org/get-started/locally/ and follow the ones that match your environment.
Here's my code:
# Tasks:
# emoji, emotion, hate, irony, offensive, sentiment
# stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary
task='sentiment'
MODEL = f"cardiffnlp/twitter-roberta-base-{task}"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# download label mapping
labels=[]
mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt"
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
labels=[]
mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt"
with urllib.request.urlopen(mapping_link) as f:
html = f.read().decode('utf-8').split("\n")
csvreader = csv.reader(html, delimiter='\t')
labels = [row[1] for row in csvreader if len(row) > 1]
I made sure that PyTorch is installed and working.
| I was having the same issue. It was solved for me by restarting the kernel.
| https://stackoverflow.com/questions/67263288/ |
AttributeError: module 'torch.utils' has no attribute 'data' | I am trying to run my PyTorch code on a Ubuntu server, it works well on my own computer, but it failed to run on the server.
Is this because of something related to the PyTorch version?
This problem seems typical, yet none of the solutions I found work.
Traceback (most recent call last):
File "train.py", line 12, in <module>
from data_manager import *
File "/data1/lijun/cross_modal_reid_bigma/transformer/data_manager.py", line 7, in <module>
from util.data_loader import DataLoader
File "/data1/lijun/cross_modal_reid_bigma/transformer/util/data_loader.py", line 6, in <module>
from torchtext.legacy.data import Field, BucketIterator
File "/usr/local/anaconda3/lib/python3.6/site-packages/torchtext/__init__.py", line 3, in <module>
from . import datasets
File "/usr/local/anaconda3/lib/python3.6/site-packages/torchtext/datasets/__init__.py", line 2, in <module>
from .ag_news import AG_NEWS
File "/usr/local/anaconda3/lib/python3.6/site-packages/torchtext/datasets/ag_news.py", line 2, in <module>
from torchtext.data.datasets_utils import _RawTextIterableDataset
File "/usr/local/anaconda3/lib/python3.6/site-packages/torchtext/data/datasets_utils.py", line 205, in <module>
class _RawTextIterableDataset(torch.utils.data.IterableDataset):
AttributeError: module 'torch.utils' has no attribute 'data'
|
It worked for me. Please make sure that you are using PyTorch 1.7+ that matches your installed torchtext version; the two packages have to be upgraded together.
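One way to upgrade the pair together (assuming a pip environment):
pip install --upgrade torch torchtext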
| https://stackoverflow.com/questions/67266152/ |
How to set and get confidence threshold from custom YOLOv5 model? | I am trying to perform inference on my custom YOLOv5 model. The official documentation uses the default detect.py script for inference.
Example: python detect.py --source data/images --weights yolov5s.pt --conf 0.25
I have written my own python script but I can neither set the confidence threshold during initialisation nor retrieve it from the predictions of the model. I am only able to get the labels and bounding box coordinates. Here is my code:
import torch
model = torch.hub.load('ultralytics/yolov5', 'custom', path_or_model='best.pt')
results = model("my_image.png")
labels, cord_thres = results.xyxyn[0][:, -1].numpy(), results.xyxyn[0][:, :-1].numpy()
| It works for me:
model.conf = 0.25 # confidence threshold (0-1)
model.iou = 0.45 # NMS IoU threshold (0-1)
More information:
https://github.com/ultralytics/yolov5/issues/36
| https://stackoverflow.com/questions/67280248/ |
Pytorch Tensor storages have the same id when calling the storage() method | I'm learning about tensor storage through a blog (in my native language - Viet), and after experimenting with the examples, I found something that was difficult to understand. Given 3 tensors x, zzz, and x_t as below:
import torch
x = torch.tensor([[3, 1, 2],
[4, 1, 7]])
zzz = torch.tensor([1,2,3])
# Transpose of the tensor x
x_t = x.t()
When I set the storage of each tensor to the corresponding variable, then their ids are different from each other:
x_storage = x.storage()
x_t_storage = x_t.storage()
zzz_storage = zzz.storage()
print(id(x_storage), id(x_t_storage), id(zzz_storage))
print(x_storage.data_ptr())
print(x_t_storage.data_ptr())
Output:
140372837772176 140372837682304 140372837768560
94914110126336
94914110126336
But when I called the storage() method on each original tensor in the same print statement, the same outputs are observed from all tensors, no matter how many times I tried:
print(id(x.storage()), id(x_t.storage()), id(zzz.storage()))
# 140372837967904 140372837967904 140372837967904
The situation gets even weirder when I print them separately on different lines; sometimes the results are different and sometimes they are the same:
print(id(x.storage()))
print(id(x_t.storage()))
# Output:
# 140372837771776
# 140372837709856
So my question is, why are there differences between the id of the storages in the first case, and the same id is observed in the second? (and where did that id come from?). And what is happening in the third case?
Also, I want to ask about the method data_ptr(), as it was suggested to be used instead of id in one question I saw on Pytorch discuss, but the Docs in Pytorch just show no more detail. I would be glad if anyone can give me detailed answers to any/all of the questions.
| After searching on the PyTorch discuss forum and Stack Overflow, I see that the method data_ptr() should be used to compare the memory locations of tensors (according to the PyTorch discuss thread mentioned in the question and this link), although even that is not a complete check; see that first PyTorch discuss thread for a more robust comparison method.
About the id part, there have been many questions on this topic on Stack Overflow. I saw one question here whose answers clear up most of the question above: id() is only guaranteed to be unique among objects that are alive at the same time, and each .storage() call creates a new temporary wrapper object, so ids can be reused between calls. My misunderstanding about id and the memory allocation of objects has also been answered in the comment section of my recent question.
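A small demonstration of the difference (a sketch; the result of the id comparison may vary from run to run, which is exactly the point):
import torch

x = torch.tensor([[3, 1, 2], [4, 1, 7]])
x_t = x.t()

# Reliable: compares the address of the underlying data buffer
print(x.storage().data_ptr() == x_t.storage().data_ptr())  # True: shared storage

# Unreliable: each .storage() call builds a temporary wrapper object,
# whose id may be reused once it is garbage collected
print(id(x.storage()) == id(x_t.storage()))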
| https://stackoverflow.com/questions/67289617/ |
Word-embedding does not provide expected relations between words | I am trying to train a word embedding on a list of repeated sentences where only the subject changes. I expected that the generated vectors corresponding to the subjects would show a strong correlation after training, as is expected from a word embedding. However, the cosine similarity between subject vectors is not always larger than that between a subject and a random word.
Man is going to write a very long novel that no one can read.
Woman is going to write a very long novel that no one can read.
Boy is going to write a very long novel that no one can read.
The code is based on pytorch tutorial:
import torch
from torch import nn
import torch.nn.functional as F
import numpy as np
class EmbedTrainer(nn.Module):
def __init__(self, d_vocab, d_embed, d_context):
super(EmbedTrainer, self).__init__()
self.embed = nn.Embedding(d_vocab, d_embed)
self.fc_1 = nn.Linear(d_embed * d_context, 128)
self.fc_2 = nn.Linear(128, d_vocab)
def forward(self, x):
x = self.embed(x).view((1, -1)) # flatten after embedding
x = self.fc_2(F.relu(self.fc_1(x)))
x = F.log_softmax(x, dim=1)
return x
text = " ".join(["{} is going to write a very long novel that no one can read.".format(x) for x in ["Man", "Woman", "Boy"]])
text_split = text.split()
trigrams = [([text_split[i], text_split[i+1]], text_split[i+2]) for i in range(len(text_split)-2)]
dic = list(set(text.split()))
tok_to_ids = {w:i for i, w in enumerate(dic)}
tokens_text = text.split(" ")
d_vocab, d_embed, d_context = len(dic), 10, 2
""" Train """
loss_func = nn.NLLLoss()
model = EmbedTrainer(d_vocab, d_embed, d_context)
print(model)
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
losses = []
epochs = 10
for epoch in range(epochs):
total_loss = 0
for input, target in trigrams:
tok_ids = torch.tensor([tok_to_ids[tok] for tok in input], dtype=torch.long)
target_id = torch.tensor([tok_to_ids[target]], dtype=torch.long)
model.zero_grad()
log_prob = model(tok_ids)
#if total_loss == 0: print("train ", log_prob, target_id)
loss = loss_func(log_prob, target_id)
total_loss += loss.item()
loss.backward()
optimizer.step()
print(total_loss)
losses.append(total_loss)
embed_map = {}
for word in ["Man", "Woman", "Boy", "novel"]:
embed_map[word] = model.embed.weight[tok_to_ids[word]]
print(word, embed_map[word])
def angle(a, b):
from numpy.linalg import norm
a, b = a.detach().numpy(), b.detach().numpy()
return np.dot(a, b) / norm(a) / norm(b)
print("man.woman", angle(embed_map["Man"], embed_map["Woman"]))
print("man.novel", angle(embed_map["Man"], embed_map["novel"]))
| It's most probably the training size. Training a 128d embedding is definitely overkill. Rule of thumb from the Google developers blog:
Why is the embedding vector size 3 in our example? Well, the following "formula" provides a general rule of thumb about the number of embedding dimensions:
embedding_dimensions = number_of_categories**0.25
That is, the embedding vector dimension should be the 4th root of the number of categories. Since our vocabulary size in this example is 81, the recommended number of dimensions is 3:
3 = 81**0.25
| https://stackoverflow.com/questions/67291644/ |
Correct Validation Loss in Pytorch? | I am a bit confused about how to calculate validation loss. Should the validation loss be computed at the end of an epoch, or should the loss also be monitored while iterating through the batches?
Below I compute it using running_loss, which is accumulated over batches, but I want to check whether this is the correct approach.
def validate(loader, model, criterion):
correct = 0
total = 0
running_loss = 0.0
model.eval()
with torch.no_grad():
for i, data in enumerate(loader):
inputs, labels = data
inputs = inputs.to(device)
labels = labels.to(device)
outputs = model(inputs)
loss = criterion(outputs, labels)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
running_loss = running_loss + loss.item()
mean_val_accuracy = (100 * correct / total)
mean_val_loss = ( running_loss )
#mean_val_accuracy = accuracy(outputs,labels)
print('Validation Accuracy: %d %%' % (mean_val_accuracy))
print('Validation Loss:' ,mean_val_loss )
Below is the training block I am using
def train(loader, model, criterion, optimizer, epoch):
correct = 0
running_loss = 0.0
i_max = 0
for i, data in enumerate(loader):
total_loss = 0.0
#print('batch=',i)
inputs, labels = data
inputs = inputs.to(device)
labels = labels.to(device)
optimizer.zero_grad()
outputs = model(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
if i % 2000 == 1999:
print('[%d , %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
print('finished training')
return mean_val_loss, mean_val_accuracy
| You can evaluate your network on the validation set whenever you want. It can be every epoch, or, if that is too costly because the dataset is huge, every N epochs.
What you did seems correct: you accumulate the loss over the whole validation set. You can optionally divide by the number of batches to normalize the loss, so the scale stays the same if you grow the validation set one day.
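For example, in the snippet from the question:
mean_val_loss = running_loss / len(loader)  # average loss per batch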
| https://stackoverflow.com/questions/67295494/ |
Convert pytorch geometric data sample to its corresponding line graph | I'm trying to convert a torch geometric dataset so that its content is represented as line graphs of the original samples. My code looks like the following:
G = to_networkx(data,
node_attrs=['x'],
edge_attrs=['edge_attr'],
to_undirected=not directed)
line_graph = nx.line_graph(G, create_using=nx.Graph)
result = from_networkx(line_graph)
However, the resulting samples don't have any attributes, neither edge_attr nor x. At the same time, the label y is gone too. Is there a better way to convert them?
| As noted in the previous answer, the attributes are not propagated by line_graph. Since I'm interested in preserving only the edge attributes, i.e. converting edges to nodes, my solution looks like this:
original_edge_attrs = data.edge_attr
original_edge_names = [(from_.item(), to_.item()) for from_, to_ in zip(data.edge_index[0, :], data.edge_index[1, :])]
original_edge_to_attr = {e: attr for e, attr in zip(original_edge_names, original_edge_attrs)}
G = to_networkx(data,
node_attrs=['x'],
edge_attrs=['edge_attr'],
to_undirected=not directed)
line_graph = nx.line_graph(G, create_using=nx.DiGraph)
res_data = from_networkx(line_graph)
# Copy original attributes
res_data.x = torch.stack([original_edge_to_attr[e] for e in line_graph.nodes])
res_data.y = data.y
I hope this helps someone in the future.
| https://stackoverflow.com/questions/67296269/ |
How to continue training serialized AllenNLP model using `allennlp train`? | Currently training models using AllenNLP 1.2:
allennlp train -f --include-package custom-exp /usr/training_config/mock_model_config.jsonnet -s test-mock-out
The config is very standard:
"dataset_reader" : {
"reader": "params"
},
"data_loader": {
"batch_size": 3,
"num_workers": 1,
},
"trainer": {
"trainer_params": "various"
},
"vocabulary": {
"type": "from_files",
"directory": vocab_folder,
"oov_token": "[UNK]",
"padding_token": "[PAD]",
},
"model": {
"various params": ...
}
and serializing them to the test-mock-out directory (also have model.tar.gz).
Using the allennlp train command, is it possible to continue training? The documentation states Model.from_archive should be used, but it's unclear how the config should be adapted to use it.
http://docs.allennlp.org/v1.2.0/api/commands/train/
| OK, so to continue the training, one solution is to load the model from_archive. Assuming you have the serialization directory, make a model.tar.gz archive of the folder. Then, you can make a new config that is identical, except for the model key which uses from_archive:
retrain_config.json:
{
### Existing params ###
"data_loader": {
"batch_size": 3,
"num_workers": 1,
},
"trainer": {
"trainer_params": "various"
},
### Existing params ###
...
"model": {
"type": "from_archive",
"archive_file": "path/to/my_model.tar.gz"
}
}
Then, use your original train command, pointing to this new config:
allennlp train -f --include-package custom-exp /usr/training_config/retrain_config.json -s test-mock-out
Note that your vocab and output dimensions/label space should remain consistent. Also, it seems like the global training epochs are not preserved.
Alternatively, if you just want to keep training on the exact same training data, and you just have the serialization directory, then you can avoid having to compress the directory, and just keep adding results to the same directory.
allennlp train -f --include-package custom-exp /usr/training_config/mock_model_config.jsonnet -s test-mock-out --recover
| https://stackoverflow.com/questions/67306126/ |
How to set GPU count to 0 using os.environ['CUDA_VISIBLE_DEVICES'] =""? | So I have the following GPUs configured in my system:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 461.33 Driver Version: 461.33 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla V100S-PCI... TCC | 00000000:3B:00.0 Off | 0 |
| N/A 30C P0 25W / 250W | 1MiB / 32642MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 Tesla V100S-PCI... TCC | 00000000:D8:00.0 Off | 0 |
| N/A 31C P0 25W / 250W | 1MiB / 32642MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
Now, via Python, I have to set the environment such that the GPU count is 0.
I have tried the following, after learning from various sources:
import os
os.environ["CUDA_VISIBLE_DEVICES"]=""
import torch
torch.cuda.device_count()
But, it still gives me the output as "2" as in for 2 GPUs in the system.
How do I set the environment such that it outputs "0"?
Any other way to set the count to "0" is also appreciated, but it should be ML-library agnostic. (For example, I can't use device = torch.device("cpu"), as this works only for PyTorch and not for other libraries.)
| To prevent your GPU from being used, set os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
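Adapted to the snippet from the question; note the variable must be set before CUDA is first initialized:
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

import torch
print(torch.cuda.device_count())  # 0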
| https://stackoverflow.com/questions/67311527/ |
How to use PyTorch's autograd efficiently with tensors? | In my previous question I found how to use PyTorch's autograd to differentiate. And it worked:
#autograd
import torch
from torch.autograd import grad
import torch.nn as nn
import torch.optim as optim
class net_x(nn.Module):
def __init__(self):
super(net_x, self).__init__()
self.fc1=nn.Linear(1, 20)
self.fc2=nn.Linear(20, 20)
self.out=nn.Linear(20, 4)
def forward(self, x):
x=torch.tanh(self.fc1(x))
x=torch.tanh(self.fc2(x))
x=self.out(x)
return x
nx = net_x()
r = torch.tensor([1.0], requires_grad=True)
print('r', r)
y = nx(r)
print('y', y)
print('')
for i in range(y.shape[0]):
# prints the vector (dy_i/dr_0, dy_i/dr_1, ... dy_i/dr_n)
print(grad(y[i], r, retain_graph=True))
>>>
r tensor([1.], requires_grad=True)
y tensor([ 0.1698, -0.1871, -0.1313, -0.2747], grad_fn=<AddBackward0>)
(tensor([-0.0124]),)
(tensor([-0.0952]),)
(tensor([-0.0433]),)
(tensor([-0.0099]),)
The problem that I currently have is that I have to differentiate a very large tensor and iterating through it like I'm currently doing (for i in range(y.shape[0])) is taking forever.
The reason I'm iterating is that, from my understanding, grad only knows how to propagate gradients from a scalar tensor, which y is not. So I need to compute the gradients with respect to each coordinate of y.
I know that TensorFlow is capable of differentiating tensors, from here:
tf.gradients(
ys, xs, grad_ys=None, name='gradients', gate_gradients=False,
aggregation_method=None, stop_gradients=None,
unconnected_gradients=tf.UnconnectedGradients.NONE
)
"ys and xs are each a Tensor or a list of tensors. grad_ys is a list of Tensor, holding the gradients received by the ys. The list must be the same length as ys.
gradients() adds ops to the graph to output the derivatives of ys with respect to xs. It returns a list of Tensor of length len(xs) where each tensor is the sum(dy/dx) for y in ys and for x in xs."
And was hoping that there's a more efficient way to differentiate tensors in PyTorch.
For example:
a = range(100)
b = range(100)
c = range(100)
d = range(100)
my_tensor = torch.tensor([a,b,c,d])
t = range(100)
#derivative = grad(my_tensor, t) --> not working
#Instead what I'm currently doing:
for i in range(len(t)):
a_grad = grad(a[i],t[i], retain_graph=True)
b_grad = grad(b[i],t[i], retain_graph=True)
#etc.
I was told that it might work if I could run autograd on the forward pass rather than the backwards pass, but from here it seems like it's not currently a feature PyTorch has.
Update 1:
@jodag mentioned that what I'm looking for might be just the diagonal of the Jacobian. I'm following the link he attached and trying out the faster method. However, this doesn't seem to work and gives me an error:
RuntimeError: grad can be implicitly created only for scalar outputs.
Code:
nx = net_x()
x = torch.rand(10, requires_grad=True)
x = torch.reshape(x, (10,1))
x = x.unsqueeze(1).repeat(1, 4, 1)
y = nx(x)
dx = torch.diagonal(torch.autograd.grad(torch.diagonal(y, 0, -2, -1), x), 0, -2, -1)
| I believe I solved it using @jodag's advice: simply calculate the Jacobian and take the diagonal.
Consider the following network:
import torch
from torch.autograd import grad
import torch.nn as nn
import torch.optim as optim
class net_x(nn.Module):
def __init__(self):
super(net_x, self).__init__()
self.fc1=nn.Linear(1, 20)
self.fc2=nn.Linear(20, 20)
self.out=nn.Linear(20, 4) #a,b,c,d
def forward(self, x):
x=torch.tanh(self.fc1(x))
x=torch.tanh(self.fc2(x))
x=self.out(x)
return x
nx = net_x()
#input
t = torch.tensor([1.0, 2.0, 3.2], requires_grad = True) #input vector
t = torch.reshape(t, (3,1)) #reshape for batch
My approach so far was to iterate through the input since grad wants a scalar value as mentioned above:
#method 1
for timestep in t:
y = nx(timestep)
print(grad(y[0],timestep, retain_graph=True)) #0 for the first vector (i.e "a"), 1 for the 2nd vector (i.e "b")
>>>
(tensor([-0.0142]),)
(tensor([-0.0517]),)
(tensor([-0.0634]),)
Using the diagonal of the Jacobian seems more efficient and gives the same results:
#method 2
dx = torch.autograd.functional.jacobian(lambda t_: nx(t_), t)
dx = torch.diagonal(torch.diagonal(dx, 0, -1), 0)[0] #first vector
#dx = torch.diagonal(torch.diagonal(dx, 1, -1), 0)[0] #2nd vector
#dx = torch.diagonal(torch.diagonal(dx, 2, -1), 0)[0] #3rd vector
#dx = torch.diagonal(torch.diagonal(dx, 3, -1), 0)[0] #4th vector
dx
>>>
tensor([-0.0142, -0.0517, -0.0634])
| https://stackoverflow.com/questions/67320792/ |
UNET with CrossEntropy Loss Function | I was trying to train a UNet with input size [3,128,128] and corresponding masks of size [1,128,128] that contain the classes directly (instead of pixel intensities they contain the class numbers 1 and 2). I am working on a two-class problem, hence my masks contain 1 and 2 as labels. When I send an image to the model, the dimension of the predicted mask is [2,128,128]. For training I choose a batch size of 16, so the input is [16,3,128,128] and the predicted dimension is [16,2,128,128], but my ground-truth masks are [16,1,128,128]. How can I apply cross entropy loss in PyTorch here? I have tried as follows and am getting the following error. Could you please help? Thanks in advance.
lr = 0.1 # 0.1
criterion = nn.CrossEntropyLoss() #nn.L1Loss()
optimizer = optim.SGD(model.parameters(), lr=lr, momentum=0.9, nesterov=True, weight_decay=0.0001)
is_train = True
is_pretrain = False
acc_best = 0
total_epoch = 30
if is_train is True:
# Training
for epoch in range(total_epoch):
model.train()
tims = time.time()
for i, (images, labels) in enumerate(train_loader):
images = Variable(images.permute(0,3,1,2).float().cuda())
labels = Variable(labels.type(torch.LongTensor).cuda())
# Forward + Backward + Optimize
optimizer.zero_grad()
outputs = model(images)
outputs = outputs.type(torch.float)
print('predictedLabelsType:',outputs[0].type())
print('ActualLabelsType:',labels[0].type())
print('shape of predicted outputs:',outputs.shape)
print('shape of groundtruth masks:',labels.shape)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
My output is as follows:
predictedLabelsType: torch.cuda.FloatTensor
ActualLabelsType: torch.cuda.LongTensor
shape of predicted outputs: torch.Size([16, 2, 128, 128])
shape of groundtruth masks: torch.Size([16, 1, 128, 128])
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-8-b692a8d536a9> in <module>()
52 print('shape of predicted outputs:',outputs.shape)
53 print('shape of groundtruth masks:',labels.shape)
---> 54 loss = criterion(outputs, labels)
55 loss.backward()
56 optimizer.step()
3 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
2385 ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
2386 elif dim == 4:
-> 2387 ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
2388 else:
2389 # dim == 3 or dim > 4
RuntimeError: 1only batches of spatial targets supported (3D tensors) but got targets of size: : [16, 1, 128, 128]
Could you please suggest where the mistake is and how CrossEntropyLoss() works in PyTorch for image segmentation? What am I missing here? I have tried reshaping the target to [16,128,128], which leads to another error. Thanks a lot!
| The documentation specifies that if the input is shape (N, C, d1, d2) then the target must be shape (N, d1, d2). Instead, your targets are shape (N, 1, d1, d2) so you need to remove the unnecessary unitary dimension.
loss = criterion(outputs, labels.squeeze(1))
If you're getting another error from this change then there's another issue with your code, but this is the correct tensor shape for CrossEntropyLoss. Note also that CrossEntropyLoss expects class indices in the range [0, C-1]; since your masks contain the labels 1 and 2 with only two output channels, you will need to shift them down to 0 and 1.
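Putting both fixes together (a sketch, assuming the masks hold the raw labels 1 and 2):
labels = labels.squeeze(1) - 1  # shape (16, 128, 128), classes {0, 1}
loss = criterion(outputs, labels)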
| https://stackoverflow.com/questions/67322848/ |
AttributeError: 'collections.OrderedDict' object has no attribute 'predict' | Being a new guy and a beginner to deep learning and PyTorch, I am not sure what inputs I should give you to answer my question, but I will try my best to make you understand my problem. I have loaded a model in PyTorch using model = torch.load('model/resnet18-5c106cde.pth'). But it shows AttributeError: 'collections.OrderedDict' object has no attribute 'predict' when I use the command prediction = model.predict(test_image). Hope you understood my problem; thanks in advance.
| I'd guess that the checkpoint you are loading stores a model state dict (the model's parameters) rather than a model (the structure of the model plus its parameters). Try:
model = resnet18(*args, **kwargs)
model.load_state_dict(torch.load(PATH))
model.eval()
where PATH is the path to the model checkpoint. You need to declare model as an instance of the object class (declare the model structure) so that you can load the checkpoint (parameters only, no structure). So you'll need to find the appropriate class to import for the resnet18, probably something along the lines of:
from torchvision.models import resnet18
| https://stackoverflow.com/questions/67337357/ |
how to convert the output of a neural network to long type while maintaining the trainability | The output of my PyTorch neural network is of float64 type. This variable has to be used as a pixel offset and as such I need to convert it to long type.
However I have just discovered that a conversion out=out.long() switches the variable attribute ".requires_grad" to False.
How can I convert it to long maintaining ".requires_grad" to true?
| In general, you cannot convert a tensor to an integer-based type while maintaining its gradient properties, since converting to an integer is a non-differentiable operation. Thus, you essentially have two options:
If the data is only required as type long for inference operations that need not maintain their gradient, you can back-propagate the loss first and only then convert to long type. You could also make a copy or use torch.detach() (a minimal sketch follows these two options).
Change the input-output structure of your model such that integer outputs are not needed. One way to do this might be to output a pixel-map with one value for each value in the original tensor which you are trying to index. This would be similar to NNs that output masks for segmentation.
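A minimal sketch of the first option (model, x, criterion and target are placeholders for your own objects):
out = model(x)                          # float output, part of the autograd graph
loss = criterion(out, target)
loss.backward()                         # gradients flow through the float path
offsets = out.detach().round().long()   # gradient-free long copy, usable as pixel offsets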
Without more detail on what you're trying to accomplish, it's difficult to say what your best path forward is. Please add more code so the context of this operation is visible.
| https://stackoverflow.com/questions/67338689/ |
Why is my loss not decreasing over training 10 epochs? | My hardware is a Ryzen 5000 series cpu with an nvidia rtx 3060 gpu. I'm currently working on a school assignment involving using a deep learning model (implemented in PyTorch) to predict COVID diagnosis from CT slice images. The dataset can be found at this url on GitHub: https://github.com/UCSD-AI4H/COVID-CT
I've written a custom dataset that takes the images from the dataset and resizes them to 224x224. I've also converted all rgba or grayscale images to rgb using skimage.color. Other transforms include random horizontal and vertical flipping, as well as ToTensor(). To evaluate the model I've used sklearn.metrics to compute the AUC, F1 score, and accuracy of the model.
My trouble is that I can't get the model to train. After 10 epochs the loss has not decreased. I've tried adjusting the learning rate of my optimizer but it hasn't helped. Any recommendations/thoughts would be greatly appreciated. Thanks!
class RONANet(nn.Module):
def __init__(self, classifier_type=None):
super(RONANet, self).__init__()
self.classifier_type = classifier_type
self.relu = nn.ReLU()
self.maxpool = nn.MaxPool2d(kernel_size=2, stride=2)
self.classifier = self.compose_classifier()
self.conv_layers = nn.Sequential(
nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(32),
self.relu,
self.maxpool,
nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(64),
self.relu,
self.maxpool,
nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(128),
self.relu,
self.maxpool,
nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1),
self.relu,
self.maxpool,
nn.AdaptiveAvgPool2d(output_size=(1,1)),
)
def compose_classifier(self):
if 'fc' in self.classifier_type:
classifier = nn.Sequential(
nn.Flatten(),
nn.Linear(14**2*256, 256),
self.relu,
nn.Linear(256, 128),
self.relu,
nn.Linear(128, 2))
elif 'conv'in self.classifier_type:
classifier = nn.Sequential(
nn.Conv2d(256, 1, kernel_size=1, stride=1))
return classifier
def forward(self, x):
features = self.conv_layers(x)
out = self.classifier(features)
if 'conv' in self.classifier_type:
out = out.reshape([-1,])
return out
RONANetv1 = RONANet(classifier_type='conv')
RONANetv1 = RONANetv1.cuda()
RONANetv2 = RONANet(classifier_type='fc')
RONANetv2 = RONANetv2.cuda()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(RONANetv1.parameters(), lr=0.1)
num_epochs = 100
best_auc = 0.5 # set threshold to random model performance
scores = {}
for epoch in range(num_epochs):
RONANetv1.train()
print(f'Current Epoch: {epoch+1}')
epoch_loss = 0
for images, labels in train_dataloader:
batch_loss = 0
optimizer.zero_grad()
with torch.set_grad_enabled(True):
images = images.cuda()
labels = labels.cuda()
out = RONANetv1(images)
loss = criterion(out, labels)
batch_loss += loss.item()
loss.backward()
optimizer.step()
epoch_loss += batch_loss
print(f'Loss this epoch: {epoch_loss}\n')
current_val_auc, current_val_f1, current_val_acc = get_scores(RONANetv1, val_dataloader)
if current_val_auc > best_auc:
best_auc = current_val_auc
torch.save(RONANetv1.state_dict(), 'RONANetv1.pth')
scores['AUC'] = current_val_auc
scores['f1'] = current_val_f1
scores['Accuracy'] = current_val_acc
print(scores)
Output:
Current Epoch: 1
Loss this epoch: 38.038745045661926
{'AUC': 0.6632183908045978, 'f1': 0.0, 'Accuracy': 0.4915254237288136}
Current Epoch: 2
Loss this epoch: 37.96312761306763
Current Epoch: 3
Loss this epoch: 37.93656861782074
Current Epoch: 4
Loss this epoch: 38.045261442661285
Current Epoch: 5
Loss this epoch: 38.01626980304718
Current Epoch: 6
Loss this epoch: 37.93017905950546
Current Epoch: 7
Loss this epoch: 37.913547694683075
Current Epoch: 8
Loss this epoch: 38.049841582775116
Current Epoch: 9
Loss this epoch: 37.95650988817215
| So the issue is you're only training the first part of the classifier and not the second
# this
optimizer = torch.optim.Adam(RONANetv1.parameters(), lr=0.1)
# needs to become this
from itertools import chain
optimizer = torch.optim.Adam(chain(RONANetv1.parameters(), RONANetv2.parameters()))
and you need to incorporate the other CNN in training too
intermediate_out = RONANetv1(images)
out = RONANetv2(intermediate_out)
loss = criterion(out, labels)
batch_loss += loss.item()
loss.backward()
optimizer.step()
Hope that helps, best of luck!
| https://stackoverflow.com/questions/67340129/ |
How to add training end callback to AllenNLP config file? | Currently training models using AllenNLP 1.2 and the commands api:
allennlp train -f --include-package custom-exp /usr/training_config/mock_model_config.jsonnet -s test-mock-out
I'm trying to execute a forward pass on a test dataset after training is completed. I know how to add an epoch_callback, but am not sure about the syntax for the end_callback.
In my config.json, I have the following:
{
...
"trainer": {
...
"epoch_callbacks": [{"type": 'log_metrics_to_wandb',},]
}
...
}
I've tried:
"end_callback": [{"type": 'my_custom_function',},]
but got an illegal argument error. Also, I am not sure how I would accurately specify the exact custom function and communicate it to the trainer.
| I think you can create a new callback function/object that inherits from TrainerCallback and overrides the on_end method, and then it should work as expected if you register it the same way as you did log_metrics_to_wandb above.
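A rough sketch of what that could look like. The import path, the on_end signature, and the "callbacks" trainer key below follow newer AllenNLP releases and should be treated as assumptions to verify against your installed version:
from allennlp.training.trainer import TrainerCallback   # location varies by version
@TrainerCallback.register("run_test_forward")            # illustrative registration name
class RunTestForward(TrainerCallback):
    def on_end(self, trainer, metrics=None, epoch=None, is_primary=True, **kwargs):
        # trainer.model is the trained model; run the test-set forward pass here
        ...
Then reference it from the trainer block of the config, e.g. "callbacks": [{"type": "run_test_forward"}] (older versions use keys like "epoch_callbacks" instead).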
| https://stackoverflow.com/questions/67342447/ |
PyTorch can't use a float type but only long | I am trying to run this very basic neural network:
import os; os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
import torch
import torchvision
import torch.nn as nn
import torch.nn.functional as F
import matplotlib.pyplot as plt
import time
#####################################################
# Create the neural network #
#####################################################
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(1, 10)
self.fc2 = nn.Linear(10, 10)
self.fc3 = nn.Linear(10, 1)
def forward(self, x):
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
#####################################################
# Create the datasets #
#####################################################
trainset = [torch.tensor([1., 1.**2]), torch.tensor([2., 2.**2]), torch.tensor([3., 3.**2]), torch.tensor([4., 4.**2]), torch.tensor([5., 5.**2]), torch.tensor([6., 6.**2]), torch.tensor([7., 7.**2]), torch.tensor([8., 8.**2])]
testset = [torch.tensor([1.1, 1.1**2]), torch.tensor([2.3, 2.3**2]), torch.tensor([3.1, 3.1**2]), torch.tensor([4.5, 4.5**2]), torch.tensor([5.9, 5.9**2]), torch.tensor([6.1, 6.1**2]), torch.tensor([7.3, 7.3**2]), torch.tensor([8.01, 8.01**2])]
#####################################################
# Optimize the parameters #
#####################################################
optimizer = torch.optim.Adam(net.parameters(), lr=0.001)
EPOCHS = 3
for epoch in range(EPOCHS):
for data in trainset:
x, y = data
net.zero_grad()
output = net(x.view(-1,1))
loss = F.nll_loss(output, y.view(-1,1)[0])
loss.backward()
optimizer.step()
print(loss)
#####################################################
# Calculate the accuracy rate #
#####################################################
correct = 0
total = 0
with torch.no_grad():
for data in trainset:
x, y = data
output = net(x)
if y - 0.01 < output < y + 0.01:
correct += 1
total += 1
print("Accuracy: %.2f" % (correct / total))
but I get the following error:
Traceback (most recent call last): File
"C:\Users\Andrea\Desktop\pythonProject\main.py", line 52, in
loss = F.nll_loss(output, y.view(-1,1)[0]) File "C:\WinPython\python-3.9.1.amd64\lib\site-packages\torch\nn\functional.py",
line 2235, in nll_loss
ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) RuntimeError: expected scalar type Long but found Float
Why can't I use a float type?
| The negative log likelihood loss (NLLLoss) is suitable for classification problems, where the output is one out of C classes. Since the classes are discrete, your labels need to be of the long type.
In your case, in a comment, you say:
I want to create a network that simulates a quadratic function with x as input and sth similar to x**2 as output.
This is a regression problem, where the output can have a real, continuous value. For this, you should use a suitable loss function such as the mean squared error loss (MSELoss). So, one way to fix would be changing F.nll_loss in your code to F.mse_loss.
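The minimal change in the question's training loop, keeping the functional style (targets stay float for regression, so no long conversion is needed):
loss = F.mse_loss(output, y.view(-1, 1))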
| https://stackoverflow.com/questions/67345554/ |
Tensorflow and Torch on the same environment | I am working on a goat detection problem. The main model identifies "goat" (single class) in an image, and each goat is then cropped from the original image. The cropped image then passes through a tensorflow model (a trained tensorflow.keras.applications.InceptionV3) to find the current posture of the goat (sitting and standing - two classes).
The torch version should be 1.7+ and I can use any version of tensorflow (1.15.1/1.13.0 preferred). If I install tensorflow while torch is already installed, both tensorflow and torch disappear from the environment (pip freeze). Are there any compatible versions out there? Is there any specific pair of TensorFlow and torch versions that can live in the same environment?
Here are the commands -
conda create -n tst2 python=3.7
conda activate tst2
# install torch 1.7 gpu with cuda 10.1
conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=10.1 -c pytorch
if I import torch, it works fine.
Then I Installed tensorflow 2.1.0 gpu
conda install -c anaconda tensorflow-gpu
Now, I can import tensorflow but no torch
>>> import tensorflow
2021-05-02 17:47:23.158416: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll
>>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'torch'
| Install tensorflow-gpu=2.6.0 and PyTorch with cudatoolkit=11.3; it's working fine with Anaconda.
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
conda install tensorflow-gpu
| https://stackoverflow.com/questions/67353874/ |
Do we have lower performance and accuracy than when not using `torch.nn.Sequential` and if yes, why? | I was checking out this video where Phil points out that using torch.nn.Sequential is faster than not using it. I did a quick google search and came across this post, which is not answered satisfactorily, so I am replicating it here.
Here is the code from the post with Sequential:
class net2(nn.Module):
def __init__(self):
super(net2, self).__init__()
self.conv1 = nn.Sequential(
nn.Conv2d(
in_channels=1,
out_channels=16,
kernel_size=5,
stride=1,
padding=2,
),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2),
)
self.conv2 = nn.Sequential(
nn.Conv2d(16, 32, 5, 1, 2),
nn.ReLU(),
nn.MaxPool2d(2),
)
self.out = nn.Linear(32 * 7 * 7, 10)
def forward(self, x):
x = self.conv1(x)
x = self.conv2(x)
x = x.view(x.size(0), -1)
output = self.out(x)
return output
And here is the net without Sequential:
class net1(nn.Module):
def __init__(self):
super(net1, self).__init__()
self.conv1 = nn.Conv2d(1,16,5,1,2)
self.conv2 = nn.Conv2d(16,32,5,1,2)
self.pool1 = nn.MaxPool2d(2)
self.pool2 = nn.MaxPool2d(2)
self.fc1 = nn.Linear(32*7*7,10)
def forward(self, x):
out = F.relu(self.conv1(x))
out = self.pool1(out)
out = F.relu(self.conv2(out))
out = self.pool2(out)
out = out.view(-1,32*7*7)
out = F.relu(self.fc1(out))
return out
In the comment, the author also states that the accuracy is better with Sequential than without it. One of the comments states that the speed is higher with Sequential because loop unrolling results in faster execution. So here are my related questions:
Is loop unrolling the only reason why the Sequential implementation is faster than the other one?
Can someone explain how non-Sequential does not result in loop unrolling while Sequential does, perhaps by pointing me to the PyTorch source lines on GitHub?
Does Sequential really result in better accuracy? If yes, why?
| I'm not familiar with what kind of optimizations the python interpreter does, but I'd guess it is very limited.
But the claim that one method is more accurate than another is complete nonsense. If you look at the implementation of nn.Sequential you'll see that it does exactly the same thing you'd do anyway: you just iterate over the modules and pass the output of one to the input of the next:
def forward(self, input):
for module in self:
input = module(input)
return input
There might be a tiny difference in speed due to the overhead of all the additional things that nn.Sequential might do, but this is negligible.
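If you want to verify this yourself, a quick micro-benchmark over the question's two networks (assuming net1 and net2 are defined as above and use the default float dtype):
import time
import torch
x = torch.randn(64, 1, 28, 28)
for model in (net1().eval(), net2().eval()):
    with torch.no_grad():
        start = time.perf_counter()
        for _ in range(100):
            model(x)
    print(type(model).__name__, time.perf_counter() - start)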
| https://stackoverflow.com/questions/67357795/ |
only first gpu is allocated (eventhough I make other gpus visible, in pytorch cuda framework) | I am using cuda in pytorch framwework in linux server with multiple cuda devices.
The problem is that even though I specified certain GPUs to be visible,
the program keeps using only the first GPU.
(Other programs work fine and their specified GPUs are allocated correctly,
which is why I think it is not an NVIDIA or system problem.
nvidia-smi shows all GPUs correctly and reports no problem.
I didn't have problems allocating GPUs with the code below before, except when the system was not working.)
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
os.environ["CUDA_VISIBILE_DEVICES"] = str(args.gpu)
I wrote that before running the main function,
and it works fine for other programs on the same system.
I printed the args.gpu variable and could see that its value is not "0".
| Have you tried something like this?
device = torch.device("cuda:0,1" if torch.cuda.is_available() else "cpu") ## specify the GPU id's, GPU id's start from 0.
model = CreateModel()
model= nn.DataParallel(model,device_ids = [0, 1])
model.to(device)
let me know about this
| https://stackoverflow.com/questions/67364827/ |
CUDA version of package not importing? | Firstly, I installed torch 1.1.0, and then I installed its dependencies. I can import torch_scatter 1.2.0; however, I get this error when importing torch_scatter.scatter_cuda:
import torch_scatter.scatter_cuda
ModuleNotFoundError: No module named 'torch_scatter.scatter_cuda'
I have Cuda v10 installed and I have a GPU. All of the requirements for this code were installed together through pip in one go on my virtual environment.
| As pointed out by phd - it looks like the setup.py file of pytorch_scatter checks for and uses an available cuda installation automatically.
Also in the version you are using as seen here:
...
if CUDA_HOME is not None:
ext_modules += [
CUDAExtension('torch_scatter.scatter_cuda',
['cuda/scatter.cpp', 'cuda/scatter_kernel.cu'])
]
...
Might be a question of whether CUDA_HOME is available.
Installing from source might give you more information as suggested here.
| https://stackoverflow.com/questions/67365218/ |
ImageNet pretrained ResNet50 backbones are different between Pytorch and TensorFlow | "Obviously!", you might say... But there's one significant difference that I have trouble explaining by the difference in random initialization.
Take the two pre-trained basenets (before the average pooling layer) and feed them the same image; you will notice that the output features don't follow the same distribution. Specifically, TensorFlow's backbone has more features inhibited by the ReLU compared to PyTorch's backbone. Additionally, as shown in the third figure, the dynamic range is different between the two frameworks.
Of course, this difference is absorbed by the dense layer addressing the classification task, but: Can that difference be explained by randomness in the training process? Or training time? Or is there something else that would explain the difference?
Code to reproduce:
import imageio
import numpy as np
image = imageio.imread("/tmp/image.png").astype(np.float32)/255
import tensorflow as tf
inputs = image[np.newaxis]
model = tf.keras.applications.ResNet50(include_top=False, input_shape=(None, None, 3))
output = model(inputs).numpy()
print(f"TensorFlow features range: [{np.min(output):.02f};{np.max(output):.02f}]")
import torchvision
import torch
model = torch.nn.Sequential(*list(torchvision.models.resnet50(pretrained=True).children())[0:8])
inputs = torch.tensor(image).permute(2,0,1).unsqueeze(0)
output = model(inputs).detach().permute(0,2,3,1).numpy()
print(f"Pytorch features range: [{np.min(output):.02f};{np.max(output):.02f}]")
Outputting
TensorFlow features range: [0.00;25.98]
Pytorch features range: [0.00;12.00]
Note: it's similar to any image.
| There are 2 things that differ in the implementations of ResNet50 in TensorFlow and PyTorch that I could notice and might explain your observation.
The batch normalization does not have the same momentum in both: it's 0.1 in PyTorch and 0.01 in TensorFlow (TF reports it as 0.99, but I am writing it in PyTorch's convention for comparison; see the sketch after these two points). This might affect training and therefore the weights.
TensorFlow's implementation uses biases in convolutions while PyTorch's one doesn't (as can be seen in the conv3x3 and conv1x1 definitions). Because the batch normalization layers are affine, the biases are not needed, and are spurious. I think this is truly what explains the difference in your case since they can be compensated by the batch norm, and therefore be arbitrarily large, which would be why you observe a bigger range for TF.
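To make the batch-norm momentum point concrete, here are the two defaults side by side (the conventions are complementary, so TF's 0.99 corresponds to 0.01 in PyTorch's convention, not to PyTorch's default of 0.1):
import torch
import tensorflow as tf
# PyTorch: running = (1 - momentum) * running + momentum * batch_stat
bn_pt = torch.nn.BatchNorm2d(64, momentum=0.1)
# Keras/TF: moving = momentum * moving + (1 - momentum) * batch_stat
bn_tf = tf.keras.layers.BatchNormalization(momentum=0.99)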
Another way to see this is to compare the summaries as I did in this colab.
I currently have a PR that should fix the bias part (at least provide the possibility to train a resnet without conv bias in TF), and plan on submitting one for BN soon.
EDIT
I have actually found more differences, which I listed in a paper I recently wrote. You can check them in Table 3 of Appendix F.
For completeness of the answer, I list here those that might have an impact on the output feature statistics:
the variance estimation in the batch norm is different
the convolution weights and classification head weights and bias initialization are not the same
| https://stackoverflow.com/questions/67365237/ |
Why some weights of GPT2Model are not initialized? | I am using the GPT2 pre-trained model for a research project and when I load the pre-trained model with the following code,
from transformers.models.gpt2.modeling_gpt2 import GPT2Model
gpt2 = GPT2Model.from_pretrained('gpt2')
I get the following warning message:
Some weights of GPT2Model were not initialized from the model checkpoint at gpt2 and are newly initialized: ['h.0.attn.masked_bias', 'h.1.attn.masked_bias', 'h.2.attn.masked_bias', 'h.3.attn.masked_bias', 'h.4.attn.masked_bias', 'h.5.attn.masked_bias', 'h.6.attn.masked_bias', 'h.7.attn.masked_bias', 'h.8.attn.masked_bias', 'h.9.attn.masked_bias', 'h.10.attn.masked_bias', 'h.11.attn.masked_bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
From my understanding, it says that the weights of the above layers are not initialized from the pre-trained model. But we all know that attention layers ('attn') are so important in GPT2, and if we cannot have their actual weights from the pre-trained model, then what is the point of using a pre-trained model?
I would really appreciate it if someone could explain this to me and tell me how I can fix this.
| The masked_bias was added by the Hugging Face community as a speed improvement compared to the original implementation. It should not negatively impact the performance as the original weights are loaded properly. Check this PR for further information.
| https://stackoverflow.com/questions/67379533/ |
What is the TensorFlow/Keras equivalent of PyTorch's `no_grad` function? | When writing machine learning models, I find myself needing to compute metrics, or run additional forward-passes in callbacks for visualization purposes. In PyTorch, I do this with torch.no_grad(), and this prevents gradients from being computed and these operations, therefore, do not influence the optimization.
How does this mechanism work in TensorFlow/Keras?
Keras models are callable. So, something like model(x) is possible. But, it is also possible to say model.predict(x), which also seems to invoke the call. Is there a difference between the two?
| The tensorflow equivalent would be tf.stop_gradient
Also don't forget that Keras does not compute gradients when using predict (or when just calling the model via __call__).
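A minimal sketch (model, backbone and head are placeholders). In TF2, gradients are only recorded inside a tf.GradientTape, so a plain forward pass already behaves like no_grad; tf.stop_gradient additionally blocks flow through a specific tensor:
import tensorflow as tf
# outside a GradientTape nothing is recorded, similar to torch.no_grad()
y = model(x, training=False)
# inside a tape, stop_gradient cuts the flow through one tensor
with tf.GradientTape() as tape:
    features = tf.stop_gradient(backbone(x))   # treated as a constant
    out = head(features)                       # only head's weights receive gradients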
| https://stackoverflow.com/questions/67385963/ |
How to merge sequential integer values into intervals in PyTorch? | I have a 1D array:
[3, 4, 5, 6, 7, 20, 31, 32, 33, 34]
which I want to turn into a 2D interval array by merging each run of consecutive values into an interval:
[
[ 3, 7],
[20, 20],
[31, 34]
]
What would be a decent, possibly GPU-friendly, way to do it?
| Not sure if this is ideal, but you can try cumsum() on the differences compared to 1. Then use that to slice the original data:
import torch
t = torch.tensor([3, 4, 5, 6, 7, 20, 31, 32, 33, 34])
# mark the consecutive blocks (the block id increments at every gap > 1)
blocks = torch.cat([torch.tensor([0]), (t[1:] - t[:-1]) != 1]).cumsum(dim=0)
# where the blocks shift
mask = blocks[1:] != blocks[:-1]
# pair each block's first element with its last
out = torch.cat([t[:1], t[:-1][mask], t[1:][mask], t[-1:]]).reshape(-1, 2)
Output:
tensor([[ 3, 7],
[20, 20],
[31, 34]])
| https://stackoverflow.com/questions/67389915/ |
Preprocessing a video in android for pytorch | What is the best way to preprocess video data in Android Kotlin, in preparation to feed into a PyTorch Android model? Specifically, I have a ready-made model in PyTorch, and I've converted it to be ready for PyTorch Mobile.
During training, the model takes in raw footage from phones, which is preprocessed to (1) be greyscale, (2) be compressed to a specific smaller resolution that I have specified, and (3) be converted to a Tensor to be fed into a neural net (or the compressed video is potentially sent to a remote server). I use OpenCV for this, but I'm wondering what the easiest way to do this would be in Android Kotlin.
Python code for reference:
def save_video(filename):
frames = []
cap = cv2.VideoCapture(filename)
frameCount = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
frameWidth = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
frameHeight = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
buf_c = np.empty((frameCount, frameHeight, frameWidth, 3), np.dtype('uint8'))
buf = np.empty((frameCount, frameHeight, frameWidth), np.dtype('uint8'))
fc = 0
ret = True
# 9:16 ratio
width = 121
height = 216
dim = (width, height)
# Loop until the end of the video
while fc < frameCount and ret:
ret, buf_c[fc] = cap.read()
# convert to greyscale
buf[fc] = cv2.cvtColor(buf_c[fc], cv2.COLOR_BGR2GRAY)
# reduce resolution
resized = cv2.resize(buf[fc], dim, interpolation = cv2.INTER_AREA)
frames.append(resized)
fc += 1
# release the video capture object
cap.release()
# Closes all the windows currently opened.
cv2.destroyAllWindows()
return frames
| You said that your model was converted to be ready for PyTorch Mobile so I will assume that you scripted your model with TorchScript.
With TorchScript, you can write preprocessing logic using Torch operation and keep it inside the scripted model like this:
import torch
import torch.nn.functional as F
@torch.jit.script_method
def preprocess(self,
image: torch.Tensor, # This should have format HxWx3
height: int,
width: int) -> torch.Tensor:
img = image.to(self.device)
# (1) Convert to Grayscale
img = ((img[:, :, 0] + img[:, :, 1] + img[:, :, 2]) / 3).unsqueeze(-1)
# (2) Resize to specified resolution
# Mimic torchvision.transforms.ToTensor to use interpolate
img = img.float()
img = img.permute(2, 0, 1).unsqueeze(0)
img = F.interpolate(img, size=(
height, width), mode="bicubic", align_corners=False)
img = img.squeeze(0).permute(1, 2, 0)
# Then turn it back to normal image tensor
# (3) Other normalization like mean substraction and convert to BxCxHxW format
img -= self.mean_tensor # mean substraction
img = img.permute(2, 0, 1).unsqueeze(0)
return img
So all the preprocessing will be done by libtorch, not OpenCV.
| https://stackoverflow.com/questions/67392409/ |
Use Adam optimizer for LSTM network vs LBFGS | I have modified the PyTorch tutorial on LSTM (sine-wave prediction: given [0:N] sine-values -> [N:2N] values) to use the Adam optimizer instead of the LBFGS optimizer. However, the model does not train well and cannot predict the sine-wave correctly. Since in most cases we use Adam for RNN training, I wonder how this issue can be resolved. I also wonder if the code segment for sequence-in-sequence-out (done with a loop: for input_t in input.split(1, dim=1)) can be done by a PyTorch module or function.
from __future__ import print_function
import argparse
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
import matplotlib
#matplotlib.use('Agg')
import matplotlib.pyplot as plt
class Sequence(nn.Module):
def __init__(self):
super(Sequence, self).__init__()
self.lstm1 = nn.LSTMCell(1, 51)
self.lstm2 = nn.LSTMCell(51, 51)
self.linear = nn.Linear(51, 1)
def forward(self, input, future = 0):
outputs = []
h_t = torch.zeros(input.size(0), 51, dtype=torch.double)
c_t = torch.zeros(input.size(0), 51, dtype=torch.double)
h_t2 = torch.zeros(input.size(0), 51, dtype=torch.double)
c_t2 = torch.zeros(input.size(0), 51, dtype=torch.double)
for input_t in input.split(1, dim=1):
h_t, c_t = self.lstm1(input_t, (h_t, c_t))
h_t2, c_t2 = self.lstm2(h_t, (h_t2, c_t2))
output = self.linear(h_t2)
outputs += [output]
for i in range(future):# if we should predict the future
h_t, c_t = self.lstm1(output, (h_t, c_t))
h_t2, c_t2 = self.lstm2(h_t, (h_t2, c_t2))
output = self.linear(h_t2)
outputs += [output]
outputs = torch.cat(outputs, dim=1)
return outputs
if __name__ == '__main__':
# set random seed to 0
np.random.seed(0)
torch.manual_seed(0)
# load data and make training set
data = torch.load('traindata.pt')
input = torch.from_numpy(data[3:, :-1])
target = torch.from_numpy(data[3:, 1:])
test_input = torch.from_numpy(data[:3, :-1])
test_target = torch.from_numpy(data[:3, 1:])
print("input.size", input.size())
print("target.size", target.size())
# build the model
seq = Sequence()
seq.double()
criterion = nn.MSELoss()
# use LBFGS as optimizer since we can load the whole data to train
optimizer = optim.Adam(seq.parameters(), lr=0.005)
#begin to train
for i in range(15):
print('STEP: ', i)
seq.train()
def run1step():
optimizer.zero_grad()
out = seq(input)
loss = criterion(out, target)
print('train loss:', loss.item())
loss.backward()
return loss
run1step()
optimizer.step()
# begin to predict, no need to track gradient here
seq.eval()
with torch.no_grad():
future = 1000
pred = seq(test_input, future=future)
loss = criterion(pred[:, :-future], test_target)
print('test loss:', loss.item())
y = pred.detach().numpy()
# draw the result
def draw(yi, color):
plt.figure(figsize=(30,10))
plt.title('Predict future values for time sequences\n(Dashlines are predicted values)', fontsize=30)
plt.xlabel('x', fontsize=20)
plt.ylabel('y', fontsize=20)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.plot(np.arange(input.size(1)), yi[:input.size(1)], color, linewidth = 2.0)
plt.plot(np.arange(input.size(1), input.size(1) + future), yi[input.size(1):], color + ':', linewidth = 2.0)
plt.show()
if i == 14:
draw(y[0], 'r')
draw(y[1], 'g')
draw(y[2], 'b')
plt.savefig('predict_LSTM%d.pdf'%i)
#plt.close()
| I've just executed your code and the original code. I think the problem is that you didn't train your model with Adam long enough. You can see your training loss is still decreasing at step 15. So I changed the number of steps from 15 to 45, and this is the figure generated after step 40:
The original code reached a loss of 4e-05 after step 4, but after that the loss somehow exploded. Your code with Adam can reduce the loss across all 45 steps, but the final loss is around 0.001. I hope I ran both programs correctly.
Oh, regarding your second question.
also wonder if the code segment regarding sequence-in-sequence-out
Yes, you can write a function or define a module with two LSTMs to do that. But it doesn't make much sense since your network contains only two LSTMs; after all, you have to do this "wiring" work at some point.
If your network contains several such blocks, you can write a module with two LSTMs and use it as a primitive module, e.g. self.BigLSTM = BigLSTM(...), just like you define self.lstm1 = nn.LSTMCell(...).
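A rough sketch of such a primitive module (hypothetical name, mirroring the question's two stacked LSTMCells):
import torch.nn as nn
class StackedLSTMCell(nn.Module):
    def __init__(self, in_dim, hidden):
        super().__init__()
        self.lstm1 = nn.LSTMCell(in_dim, hidden)
        self.lstm2 = nn.LSTMCell(hidden, hidden)
    def forward(self, x, state):
        (h1, c1), (h2, c2) = state
        h1, c1 = self.lstm1(x, (h1, c1))
        h2, c2 = self.lstm2(h1, (h2, c2))
        return h2, ((h1, c1), (h2, c2))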
| https://stackoverflow.com/questions/67409042/ |
Does pytorch Dataset.__getitem__ have to return a dict? | EDIT: This is not about the general __getitem__ method but the usage of __getitem__ in the Pytorch Dataset-subclass, as @dataista correctly states.
I'm trying to implement the usage of Pytorchs Dataset-class.
The guide e.g here is really good, but I struggle to figure out Pytorch requirements for the return value of __getitem__. In the Pytorch documentation I cannot find anything about what it should return; is it any object which is iterable with size 2 e.g [sample,target], (sample,target)? In some guides they return a dict, but they do not specify if it has to be a dict which is returned.
| PyTorch has no requirements on the return value of a DataSet's __getitem__ method. It can be anything, but you will commonly encounter a tensor, a tuple of tensors, a dictionary (e.g. {'features':..., 'label':...}) etc.
It is usual in 2d data to return a single tensor whose final column are the target values, but equally you may see tuples/dicts of the features and targets explicitly separated.
Note there is no requirement that you return two values - in many unsupervised contexts (e.g. autoencoders) there is only a set of features, with no distinct target.
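A minimal sketch with toy data (the default collate_fn batches dicts of tensors automatically, so no custom collation is needed):
import torch
from torch.utils.data import Dataset, DataLoader
class ToyDataset(Dataset):
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __len__(self):
        return len(self.x)
    def __getitem__(self, idx):
        return {'features': self.x[idx], 'label': self.y[idx]}
loader = DataLoader(ToyDataset(torch.randn(8, 3), torch.zeros(8)), batch_size=4)
batch = next(iter(loader))   # batch['features']: (4, 3), batch['label']: (4,)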
| https://stackoverflow.com/questions/67416496/ |
"Numpy not Available" After installing Pytorch XLA | I am just getting started with using TPUs on kaggle with Pytorch and install it as follows -
!pip3 install mkl
!curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py
!python3 pytorch-xla-env-setup.py --version nightly --apt-packages libomp5 libopenblas-dev
However, after installing PyTorch XLA, I am not able to use NumPy at all. Whenever I call functions like np.uint8, or even NumPy-based torch functions like torch.from_numpy, I get an error whose bottom line says NumPy not Available. Please note that I am able to import NumPy.
The whole stack trace is as follows -
RuntimeError Traceback (most recent call last)
<ipython-input-1-abfcbbc939b0> in <module>
1026 segmentation_Maps='/kaggle/input/pascal-voc/VOC2012/SegmentationClass/')
1027 dataloader = DataLoader(dataset, batch_size=5)
-> 1028 for _, data in enumerate(dataloader):
1029 i = data['image']
1030 gt = data['ground_truth']
/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py in __next__(self)
519 if self._sampler_iter is None:
520 self._reset()
--> 521 data = self._next_data()
522 self._num_yielded += 1
523 if self._dataset_kind == _DatasetKind.Iterable and \
/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py in _next_data(self)
559 def _next_data(self):
560 index = self._next_index() # may raise StopIteration
--> 561 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
562 if self._pin_memory:
563 data = _utils.pin_memory.pin_memory(data)
/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index)
42 def fetch(self, possibly_batched_index):
43 if self.auto_collation:
---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]
45 else:
46 data = self.dataset[possibly_batched_index]
/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in <listcomp>(.0)
42 def fetch(self, possibly_batched_index):
43 if self.auto_collation:
---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]
45 else:
46 data = self.dataset[possibly_batched_index]
<ipython-input-1-abfcbbc939b0> in __getitem__(self, item)
939 print(mask.shape)
940 image = Image.fromarray(np.uint8(image)).convert('RGB')
--> 941 mask = torch.from_numpy(np.uint8(mask))
942
943 image = self.transforms(image)
RuntimeError: Numpy is not available
I have no clue what is going on. Could someone please Help.
PS - please note that pytorch xla updates pytorch to a nightly 1.9 version.
TIA
| At least in Google Colab I was able to solve this issue by running (after installing xla):
!pip install -U numpy
Not completely sure it will help in any context
| https://stackoverflow.com/questions/67417532/ |
Use PyTorch to speed up linear least squares optimization with bounds? | I'm using scipy.optimize.lsq_linear to run some linear least squares optimizations and all is well, but a little slow. My A matrix is typically about 100 x 10,000 in size and sparse (sparsity usually ~50%). The bounds on the solution are critical. Given my tolerance lsq_linear typically solves the problems in about 10 seconds and speeding this up would be very helpful for running many optimizations.
I've read about speeding up linear algebra operations using GPU acceleration in PyTorch. It looks like PyTorch handles sparse arrays (torch calls them tensors), which is good. However, I've been digging through the PyTorch documentation, particularly the torch.optim and torch.linalg packages, and I haven't found anything that appears to be able to do a linear least squares optimization with bounds.
Is there a torch method that can do linear least squares optimization with bounds like scipy.optimize.lsq_linear?
Is there another way to speed up lsq_linear or to perform the optimization in a faster way?
For what it's worth, I think I've pushed lsq_linear pretty far. I don't think I can decrease the number of matrix elements, increase sparsity, or decrease optimization tolerances much further without sacrificing the results.
| Not easily, no.
I'd try to profile lsq_linear on your problem to see if it's pure python overhead (which can probably be trimmed some) or linear algebra. In the latter case, I'd start with vendoring the lsq_linear code and swapping relevant linear algebra routines. YMMV though.
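A minimal way to do that profiling (A, b, lb and ub stand in for your own problem data):
import cProfile
from scipy.optimize import lsq_linear
cProfile.run('lsq_linear(A, b, bounds=(lb, ub))', sort='cumtime')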
| https://stackoverflow.com/questions/67421904/ |
How do I proceed to load a ga_instance as ".pkl" format in PyGad? | I have been trying to load the PyGad trained instance in another file, in order to make some prediction. But I have been having some problems in the loading process.
After the training phase, I saved the instance like this:
The saving function:
filename = 'GNN_CPTNet' #GNN_CPTNet.pkl
ga_instance.save(filename=filename)
The loading function:
loaded_ga_instance = pygad.load(filename=filename)
loaded_ga_instance.plot_result()
But, when I tried to load the instance in a new notebook or script, I could not load it, especially the "GNN_CPTNet.pkl" file.
| In the new script, you should define the fitness function and all the callback functions you used in the original script.
For example, if you used only the on_generation (callback_generation) parameter, then the following functions should be defined:
def fitness_func(solution, solution_idx):
...
def callback_generation(ga_instance):
...
This way, the saved instance will be loaded correctly.
Anyway, it is better to post the sample code you used, so a more accurate answer can be given.
Thanks for using PyGAD :)
| https://stackoverflow.com/questions/67424181/ |
Is it advisable to use the same torch Dataset class for training and predicting? | I have recently started using PyTorch and I liked it for its object-oriented style. However, I wonder what’s the best and advised workflow when predicting the model. I wanted to use a custom Dataset class I wrote and which I use for training and validating my model. This class is a map-style dataset, therefore I implement __getitem__ method to return image and target:
class CustomDataset:
def __init__(self, ...):
...
def __getitem__(self, image_id):
....
return (
torch.tensor(image, dtype=torch.float),
torch.tensor(target, dtype=torch.long),
)
However, when I’m using this class for predicting I don’t have any targets to return. My current workaround is something like
def __getitem__(self, image_id):
....
if predict:
return (
torch.tensor(image, dtype=torch.float),
np.nan,
)
else:
return (
torch.tensor(image, dtype=torch.float),
torch.tensor(target, dtype=torch.long),
)
However, I wonder if there’s a better way to do it. And at the same time, as it feels a bit unnatural, I started wondering if it is even advisable to use the same class for training and predicting (it should be, but the clunkiness of my solutions makes me wonder). Of course, I could not return a tuple at all, but only a first element, but this still needs if-else.
| PyTorch's DataSet class is really simple. So, do not overthink it. It's not much more than a wrapper for accessing your data.
You don't have to return a tuple, not even Tensors. You can return whatever data you want. Commonly, it will be in one of those styles:
For unsupervised data: Sample or (Sample, None)
For supervised data: (Sample, Label)
For supervised data with multiple targets, e.g. object detection: (Sample, [Label1, Label2, ...]) or (Sample, Label1, Label2, ...)
It is also common to use the same DataSet class for train / test.
So, in your case, simply return the sample or a tuple (sample, None) as done in torchvision and adjust your pipeline accordingly. I'd not suggest using np.nan as it would fail a simple None check (np.nan == None evaluates to False). Also, I'd encourage you to inherit from torch.utils.data.Dataset.
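A minimal sketch of that pattern (hypothetical names; pass targets only when you have them):
import torch
from torch.utils.data import Dataset
class CustomDataset(Dataset):
    def __init__(self, images, targets=None):
        self.images, self.targets = images, targets
    def __len__(self):
        return len(self.images)
    def __getitem__(self, idx):
        image = torch.tensor(self.images[idx], dtype=torch.float)
        if self.targets is None:   # prediction: sample only
            return image
        return image, torch.tensor(self.targets[idx], dtype=torch.long)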
If however, your pipeline forces you to use a tuple or has other constraints I'd suggest to rephrase your question.
| https://stackoverflow.com/questions/67445508/ |
TypeError when using torch.autograd.profiler.profile | I am trying to analyze memory consumption for my model as described here:
https://pytorch.org/tutorials/recipes/recipes/profiler_recipe.html
using these lines:
with profiler.profile(profile_memory=True, record_shapes=True) as prof:
tubes, _, _ = zip(*model(imgs, img_metas, return_loss=False))
print(prof.key_averages().table(sort_by="self_cpu_memory_usage", row_limit=10))
but I get this error
TypeError: __init__() got an unexpected keyword argument 'profile_memory'
So, what would be the reason and/or the solution for this error?
| You need to upgrade your version of PyTorch. Looking at the code, one can see that the profile_memory argument was added to the function signature first in PyTorch v1.6.0.
You can also see this through the documentation of torch.autograd. You can see that the argument is not present in PyTorch v1.1.0.
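For example, upgrading and checking the installed version (profile_memory requires PyTorch 1.6.0 or newer):
pip install --upgrade torch
python -c "import torch; print(torch.__version__)"   # should print >= 1.6.0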
| https://stackoverflow.com/questions/67458728/ |
what does pytorch do for creating tensor from numpy | I am interested in what torch does when I call torch.from_numpy. As the name indicates, it seems that PyTorch creates a Tensor instance and allocates memory to copy the content from the numpy ndarray into it. But how does PyTorch do the memcpy work, and what else does PyTorch do in the background? It seems the implementation of tensor is in autograd, but I have no idea which part I should look for.
I ask because I found that constructing a tensor from numpy is really fast, even faster than creating a tensor directly:
a = np.random.randn(100,100)
%timeit torch.from_numpy(a)
759 ns ± 7.53 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
%timeit torch.randn(100,100)
61 µs ± 2.46 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit torch.zeros(100,100)
3.1 µs ± 136 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
| The documentation explains that
The returned tensor and ndarray share the same memory. Modifications to the tensor will be reflected in the ndarray and vice versa. The returned tensor is not resizable.
These sentences imply that there is no memcopy involved (otherwise modifications would not be reflected in one another). That is why the operation is so fast: PyTorch merely creates a pointer to the numpy array's underlying data and "assigns" this pointer to a tensor. This function does not allocate or copy any memory at all. Therefore, from_numpy is just duplicating a pointer (which is an integer number) and probably performing a few checks.
What is important to remember is that the underlying memory is shared, so the tensor and the numpy array modify one another; you should use clone or copy to perform a clean deep copy and get rid of this behavior (if you need to), like
b = torch.from_numpy(a).clone()
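A quick toy demonstration of the shared buffer, and of how clone breaks the link:
import numpy as np
import torch
a = np.zeros(3)
t = torch.from_numpy(a)
t[0] = 1.0
print(a)                          # [1. 0. 0.] -- same underlying memory
c = torch.from_numpy(a).clone()
c[1] = 5.0
print(a)                          # unchanged -- the clone owns its own memory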
| https://stackoverflow.com/questions/67465094/ |
Custom Pytorch layer to apply LSTM on each group | I have an N × F tensor with features and an N × 1 tensor with group indices. I want to design a custom PyTorch layer which will apply an LSTM to each group, with the features sorted. I mention an LSTM over sorted group features as an example; hypothetically it can be anything which supports variable-length input or sequences. Please refer to the image below for a visual interpretation of the problem.
The obvious approach would be calling an LSTM layer for each unique group, but that would be inefficient. Is there any better way to do it?
| You can certainly parallelize the LSTM application -- the problem is indexing the feature tensor efficiently.
The best thing I could come up with (I use something similar for my own stuff) would be to list comprehend over the unique group ids to make a list of variable-length tensors, then pad them and run the LSTM on top.
In code:
import torch
from torch import Tensor
from torch.nn.utils.rnn import pad_sequence
n = 13
f = 77
n_groups = 3
xs = torch.rand(n, f)
ids = torch.randint(low=0, high=n_groups, size=(n,))
def groupbyid(xs: Tensor, ids: Tensor, batch_first: bool,
padding_value: int = 0) -> Tensor:
return pad_sequence([xs[ids==idx] for idx in ids.unique()],
batch_first=batch_first,
padding_value=padding_value)
grouped = groupbyid(xs, ids, batch_first=True)
print(grouped.shape)
# torch.Size([3, 5, 77])
You can then apply your LSTM in parallel over the n_groups dimension on the grouped Tensor.
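For instance (the hidden size is arbitrary):
lstm = torch.nn.LSTM(input_size=f, hidden_size=32, batch_first=True)
out, _ = lstm(grouped)   # out: (n_groups, max_group_len, 32)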
Note that you will also need to inspect the content of ids.unique() to assign each LSTM output to its corresponding group id, but this is easy to write and depends on your application.
| https://stackoverflow.com/questions/67471635/ |
Is a .pth file a security risk, and how can we sanitise it? | It's well-established that pickled files are [unsafe][1] to simply load directly. However, the advice on that SE post concludes that basically one should not use a pickled file if they are not sure of its provenance.
What about PyTorch machine-learning models that are stored as .pth files on, say, public repos on Github, which we want to use for inference? For example, if I have a model.pth which I plan to load with torch.load(model.pth), is it necessary to check it's safe to do so? Assuming I have no choice but to use the model, how should one go about checking it?
Given these models are ultimately just weights, could we do something like make a minimal Docker container with PyTorch, load the model inside there, and then resave the weights? Is this necessary, and what sort of checking should we do (i.e. assuming it is safe to load within the container, what sort of code treatment should be applied to sanitise the model for shipping?).
EDIT (in response to a request for clarification): say I have a model.pth. (1) Do I need to be careful with it, like I would with a .pkl, given a .pth is meant to contain only model weights? Or can I just go ahead and throw it into torch.load(model.pth)? (2) If I can't, what can I do before torch.load() to provide some peace of mind?
EDIT 2: An answer should show some focus on ML pretrained models in particular. An example: see the model https://download.pytorch.org/models/resnet34-333f7ec4.pth (warning: 80 MB download from TorchHub - feel free to use another Resnet .pth model, but any cleaning does need to be reasonably fast). Currently I would download this and then load it in PyTorch using load_state_dict (explained in detail [here][2] for example). If I didn't know this was a safe model, how could I try to sanitise it first before loading it in load_state_dict?
[1]: https://stackoverflow.com/questions/25353753/python-can-i-safely-unpickle-untrusted-data
[2]: https://www.programmersought.com/article/95324315915/
| As pointed out by @MobeusZoom, this answer is about Pickle and not the PyTorch format. Anyway, as the PyTorch load mechanism relies on Pickle behind the scenes, the observations drawn in this answer still apply.
TL;DR;
Don't try to sanitize pickle. Trust or reject.
Quoted from Marco Slaviero in his presentation Sour Pickle at the Black Hat USA 2011.
Real solution is:
Don't setup exchange with unequally trusted parties;
Setup a secure transport layer for exchange;
Sign exchanged files;
Also be aware that there are new kinds of AI-based attacks; even if the pickle is shellcode free, you may still have other issues to address when loading pre-trained networks from untrusted sources.
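For the "sign exchanged files" point above, a minimal integrity check before loading (EXPECTED_DIGEST is a placeholder for a SHA-256 digest you received out of band, e.g. from the model's publisher):
import hashlib
def sha256_of(path, chunk=8192):
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for block in iter(lambda: fh.read(chunk), b""):
            h.update(block)
    return h.hexdigest()
assert sha256_of("model.pth") == EXPECTED_DIGEST   # reject the file on mismatch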
Important notes
From the presentation linked above we can draw several important notes:
Pickle uses a Virtual Machine to reconstruct live data (the PVM runs alongside the Python process). This virtual machine is not Turing complete but has an instruction set (opcodes), a stack for execution and a memo to host object data. This is enough for an attacker to create exploits.
The pickle mechanism is backward compatible; this means the latest Python can unpickle the very first version of its protocol.
Pickle can (re)construct any object as long as the PVM does not crash; there is no consistency check in this mechanism to enforce object integrity.
In broad outline, Pickle allows an attacker to execute shellcode in any language (including Python), and that code can even persist after the victim program exits.
Attackers will generally forge their own pickles because it offers more flexibility than naively using the pickle mechanism. Of course, they can use pickle as a helper to write opcode sequences. An attacker can craft the malicious pickle payload in two significant ways:
to prepend the shellcode to be executed first and leave the PVM stack clean. Then you probably get a normal object after unpickling;
to insert the shellcode into the payload, so it gets executed while unpickling and may interact with the memo. Then unpickled object may have extra capabilities.
Attackers are aware of "safe unpickler" and know how to circumvent them.
MCVE
Find below a very naive MCVE to evaluate your suggestion to encapsulate the cleaning of suspect pickled files in a Docker container. We will use it to assess the main associated risks. Be aware that a real exploit will be more advanced and more complex.
Consider the two classes below, Normal is what you expect to unpickle:
# normal.py
class Normal:
def __init__(self, config):
self.__dict__.update(config)
def __str__(self):
return "<Normal %s>" % self.__dict__
And Exploit is the attacker vessel for its shellcode:
# exploit.py
class Exploit(object):
def __reduce__(self):
return (eval, ("print('P@wn%d!')",))
Then, the attacker can use pickle as an helper to produce intermediate payloads in order to forge the final exploit payload:
import pickle
from normal import Normal
from exploit import Exploit
host = Normal({"hello": "world"})
evil = Exploit()
host_payload = pickle.dumps(host, protocol=0) # b'c__builtin__\neval\np0\n(S"print(\'P@wn%d!\')"\np1\ntp2\nRp3\n.'
evil_payload = pickle.dumps(evil, protocol=0) # b'(i__main__\nNormal\np0\n(dp1\nS"hello"\np2\nS"world"\np3\nsb.'
At this point the attacker can craft a specific payload to both inject its shellcode and returns the data.
with open("inject.pickle", "wb") as handler:
handler.write(b'c__builtin__\neval\np0\n(S"print(\'P@wn%d!\')"\np1\ntp2\nRp3\n(i__main__\nNormal\np0\n(dp1\nS"hello"\np2\nS"world"\np3\nsb.')
Now, when the victim deserializes the malicious pickle file, the exploit is executed and a valid object is returned as expected:
from normal import Normal
with open("inject.pickle", "rb") as handler:
data = pickle.load(handler)
print(data)
Execution returns:
P@wn%d!
<Normal {'hello': 'world'}>
Off course, shellcode is not intended to be so obvious, you may not notice it has been executed.
Containerized cleaner
Now, lets try to clean this pickle as you suggested. We will encapsulate the following cleaning code:
# cleaner.py
import pickle
from normal import Normal
with open("inject.pickle", "rb") as handler:
data = pickle.load(handler)
print(data)
cleaned = Normal(data.__dict__)
with open("cleaned.pickle", "wb") as handler:
pickle.dump(cleaned, handler)
with open("cleaned.pickle", "rb") as handler:
recovered = pickle.load(handler)
print(recovered)
Into a Docker image to try to contain its execution. As a baseline, we could do something like this:
FROM python:3.9
ADD ./exploit ./
RUN chown 1001:1001 inject.pickle
USER 1001:1001
CMD ["python3", "./cleaner.py"]
Then we build the image and execute it:
docker build -t jlandercy/doclean:1.0 .
docker run -v /home/jlandercy/exploit:/exploit jlandercy/doclean:1.0
Also ensure the mounted folder containing the exploit has restrictive ad hoc permissions.
P@wn%d!
<Normal {'hello': 'world'}> # <-- Shellcode has been executed
<Normal {'hello': 'world'}> # <-- Shellcode has been removed
Now the cleaned.pickle is shellcode free. Of course, you need to carefully check this assumption before releasing the cleaned pickle.
Observations
As you can see, the Docker image does not prevent the exploit from being executed when unpickling, but it may help to contain the exploit to some extent.
Points of attention are (not exhaustive):
Having a recent pickle file with the original protocol is a hint but not evidence of something suspicious.
Be aware that even when containerized, you are still running attacker code on your host;
Additionally, the attacker may have designed the exploit to break out of a Docker container; use an unprivileged user to reduce the risk;
Don't bind any network to this container, as the attacker can start a terminal and expose it over a network interface (and potentially to the web);
Depending on how the attacker designed the exploit, the data may not be available at all: for instance, if the __reduce__ method actually returns the exploit instead of a recipe to recreate the desired instance. After all, the main purpose of this is to make you unpickle it, nothing more;
If you intend to dump raw data after loading the suspicious pickle archive you need a strict procedure to detach data from the exploit;
The cleaning step can be a limitation. It relies on your ability to recreate the intended object from the malicious payload. It will depend on what is really reconstructed from the pickle file and on how the desired object's constructor needs to be parametrized;
Finally, if you are confident in your cleaning procedure, you can mount a volume to access the result after the container exits.
| https://stackoverflow.com/questions/67493095/ |
from keras.preprocessing.text import one_hot equivalent in pytorch? | I just started using pytorch for NLP. I found a tutorial that uses from keras.preprocessing.text import one_hot and converts text to one_hot representation given a vocabulary size.
For example:
The input is
vocab_size = 10000
sentence = ['the glass of milk',
'the cup of tea',
'I am a good boy']
onehot_repr = [one_hot(words, vocab_size) for words in sentence]
The output is"
[[6654, 998, 8896, 1609], [6654, 998, 1345, 879], [123, 7653, 1, 5678,7890]]
How can I perform the same procedure in PyTorch and get output like the above?
| PyTorch fundamentally works with Tensors, and is not designed to work with strings. You can use SK Learn's LabelEncoder to encode your words however:
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
le.fit([w for s in sentence for w in s.split()])
onehot_repr = [le.transform(s.split()) for s in sentence]
>>> [array([10, 5, 8, 7]), array([10, 4, 8, 9]), array([0, 2, 1, 6, 3])]
| https://stackoverflow.com/questions/67503960/ |
How to make VScode launch.json for a Python module | I'm researching self-supervised machine learning code.
I want to debug the code with the Python debugger, not pdb.set_trace().
This is the Python command for the Ubuntu terminal.
python -m torch.distributed.launch --nproc_per_node=1 main_swav.py \
--data_path /dataset/imagenet/train \
--epochs 400 \
--base_lr 0.6 \
--final_lr 0.0006 \
--warmup_epochs 0 \
--batch_size 8 \
--size_crops 224 96 \
--nmb_crops 2 6 \
--min_scale_crops 0.14 0.05 \
--max_scale_crops 1. 0.14 \
--use_fp16 true \
--freeze_prototypes_niters 5005 \
--queue_length 380 \
--epoch_queue_starts 15\
--workers 10
In order to debug the code with VScode, I tried to revise launch.json like below as referring stackoverflow -question
{
"version": "0.2.0",
"configurations": [
{
"name": "Python: Current File",
"type": "python",
"module": "torch.distributed.launch --nproc_per_node=1 main_swav.py",
"request": "launch",
"console": "integratedTerminal",
"args": ["--data_path", "/dataset/imagenet/train"]
}
]
}
I knew this would not work... TT
Could you give me some advice?
Thank you for your time.
| Specify the module you want to run with "module": "torch.distributed.launch"
You can ignore the -m flag. Put everything else under the args key.
Note: Make sure to include --nproc_per_node and the name of file (main_swav.py) in the list of arguments
{
"version": "0.2.0",
"configurations": [
{
"name": "Python: Current File",
"type": "python",
"module": "torch.distributed.launch",
"request": "launch",
"console": "integratedTerminal",
"args": [
"--nproc_per_node", "1",
"main_swav.py",
"--data_path", "/dataset/imagenet/train",
]
}
]
}
Read more here: https://code.visualstudio.com/docs/python/debugging#_module
| https://stackoverflow.com/questions/67518928/ |
PyTorch DataLoader Error: object of type 'type' has no len() | I'm quite new to programming and have no clue where my error comes from.
I got the following code to set up my dataset for training my classifier:
class cows_train(Dataset):
def __init__(self, folder_path):
self.image_list = glob.glob(folder_path+'/content/cows/train')
self.data_len = len(self.image_list)
def __getitem__(self, index):
single_image_path = self.image_list[index]
im_as_im = Image.open(single_image_path)
im_as_np = np.asarray(im_as_im)/255
im_as_np = np.expand_dims(im_as_np, 0)
im_as_ten = torch.from_numpy(im_as_np).float()
class_indicator_location = single_image_path.rfind('/content/cows/train/_annotations.csv')
label = int(single_image_path[class_indicator_location+2:class_indicator_location+3])
return (im_as_ten, label)
def __len__(self):
return self.data_len
And this for the DataLoader:
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
batch_size = 4
trainset = cows_train
trainloader = torch.utils.data.DataLoader(dataset = trainset, batch_size=10,
shuffle=True, num_workers=2)
classes = ('cow_left', 'cow_other')
As Output I receive:
TypeError Traceback (most recent call last)
<ipython-input-6-54702f98a725> in <module>()
6
7 trainset = cows_train
----> 8 trainloader = torch.utils.data.DataLoader(dataset = trainset, batch_size=10, shuffle=True, num_workers=2)
9
10 testset = cows_test
2 frames
/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py in __init__(self, dataset, batch_size, shuffle, sampler, batch_sampler, num_workers, collate_fn, pin_memory, drop_last, timeout, worker_init_fn, multiprocessing_context, generator, prefetch_factor, persistent_workers)
264 # Cannot statically verify that dataset is Sized
265 # Somewhat related: see NOTE [ Lack of Default `__len__` in Python Abstract Base Classes ]
--> 266 sampler = RandomSampler(dataset, generator=generator) # type: ignore
267 else:
268 sampler = SequentialSampler(dataset)
/usr/local/lib/python3.7/dist-packages/torch/utils/data/sampler.py in __init__(self, data_source, replacement, num_samples, generator)
100 "since a random permute will be performed.")
101
--> 102 if not isinstance(self.num_samples, int) or self.num_samples <= 0:
103 raise ValueError("num_samples should be a positive integer "
104 "value, but got num_samples={}".format(self.num_samples))
/usr/local/lib/python3.7/dist-packages/torch/utils/data/sampler.py in num_samples(self)
108 # dataset size might change at runtime
109 if self._num_samples is None:
--> 110 return len(self.data_source)
111 return self._num_samples
112
TypeError: object of type 'type' has no len()
Problem is: I don't understand why the type has no length; in my eyes it's defined... Can someone please help?
Addendum: this is where "return len(self.data_source)" shows up in the code:
def num_samples(self) -> int:
if self._num_samples is None:
return len(self.data_source)
return self._num_samples
| You are not creating your dataset object correctly. Currently, you do:
trainset = cows_train
This only assigns the class type to trainset. To create an object of the class, you need to use:
folder_path = '/path/to/dataset/'
trainset = cows_train(folder_path)
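With the dataset instantiated, the DataLoader call from the question then works, since len(trainset) is defined (a sketch reusing the question's settings):
trainloader = torch.utils.data.DataLoader(dataset=trainset, batch_size=10,
                                          shuffle=True, num_workers=2)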
| https://stackoverflow.com/questions/67520509/ |
How to make code run on GPU on Windows 10? | I want to run my code on the GPU in Windows 10. In Google Colab, we can just change the runtime option, which is pretty easy to do, to shift to the GPU. Is there a possibility to do the same for a Jupyter notebook in Windows?
| You will actually need to use tensorflow-gpu to run your Jupyter notebook on a GPU.
The best way to achieve this would be
Install Anaconda on your system
Download cuDNN & Cuda Toolkit 11.3 .
Add cuDNN and Cuda Toolkit to your PATH.
Create an environment in Anaconda
pip install tensorflow-gpu
pip install [jupyter-notebook/jupyterlab]
Import tensorflow-gpu in your notebook
Enjoy. You can now run your notebook on your GPU
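Once everything is installed, a quick sanity check that TensorFlow can see the GPU (a minimal sketch, assuming TensorFlow 2.x):
import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))  # a non-empty list means the GPU is visible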
| https://stackoverflow.com/questions/67521143/ |
how to use collate_fn properly in the code below? | My code is:
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
dataset = PennFudanDataset('PennFudanPed', get_transform(train=True))
data_loader = torch.utils.data.DataLoader(
dataset, batch_size=2, shuffle=True, num_workers=4,
collate_fn=utils.collate_fn)
# For Training
images,targets = next(iter(data_loader))
images = list(image for image in images)
targets = [{k: v for k, v in t.items()} for t in targets]
output = model(images,targets) # Returns losses and detections
# For inference
model.eval()
x = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]
predictions = model(x) # Returns predictions
I get the error:
"collate_fn=utils.collate_fn" shows error"name 'utils' is not defined". "module 'torch.utils' has no attribute 'collate_fn' error after adding torch.
| Ok so I read the tutorial and it seems that it wants you to use the helper files in this repository: https://github.com/pytorch/vision/tree/master/references/detection .
In there is the utils.py which contains the collate_fn function.
So it seems that you haven't downloaded/copied this repository into your project yet, right?
To solve just that error, you could copy the collate_fn from utils.py
def collate_fn(batch):
return tuple(zip(*batch))
and paste it into your project. But since this tutorial probably wants you to use other util functions of utils.py too, you might want to download this directory and put it into your project directory so you can access it.
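For reference, a sketch of how the DataLoader call from the question looks once the helper is defined locally (the utils. prefix is dropped):
def collate_fn(batch):
    return tuple(zip(*batch))

data_loader = torch.utils.data.DataLoader(
    dataset, batch_size=2, shuffle=True, num_workers=4,
    collate_fn=collate_fn)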
| https://stackoverflow.com/questions/67530442/ |
Python: BERT Error - Some weights of the model checkpoint at were not used when initializing BertModel | I am creating an entity extraction model in PyTorch using bert-base-uncased but when I try to run the model I get this error:
Error:
Some weights of the model checkpoint at D:\Transformers\bert-entity-extraction\input\bert-base-uncased_L-12_H-768_A-12 were not used when initializing BertModel:
['cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.seq_relationship.bias',
'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.dense.weight',
'cls.predictions.bias']
- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
I have downloaded the bert model from here and the additional files from here
Code
Following is the code for my model:
import config
import torch
import transformers
import torch.nn as nn
def loss_fn(output, target, mask, num_labels):
lfn = nn.CrossEntropyLoss()
active_loss = mask.view(-1) == 1
active_logits = output.view(-1, num_labels)
active_labels = torch.where(
active_loss,
target.view(-1),
torch.tensor(lfn.ignore_index).type_as(target)
)
loss = lfn(active_logits, active_labels)
return loss
class EntityModel(nn.Module):
def __init__(self, num_tag, num_pos):
super(EntityModel, self).__init__()
self.num_tag = num_tag
self.num_pos = num_pos
self.bert = transformers.BertModel.from_pretrained(config.BASE_MODEL_PATH)
self.bert_drop_1 = nn.Dropout(p = 0.3)
self.bert_drop_2 = nn.Dropout(p = 0.3)
self.out_tag = nn.Linear(768, self.num_tag)
self.out_pos = nn.Linear(768, self.num_pos)
def forward(self, ids, mask, token_type_ids, target_pos, target_tag):
o1, _ = self.bert(ids,
attention_mask = mask,
token_type_ids = token_type_ids)
bo_tag = self.bert_drop_1(o1)
bo_pos = self.bert_drop_2(o1)
tag = self.out_tag(bo_tag)
pos = self.out_pos(bo_pos)
loss_tag = loss_fn(tag, target_tag, mask, self.num_tag)
loss_pos = loss_fn(pos, target_pos, mask, self.num_pos)
loss = (loss_tag + loss_pos) / 2
return tag, pos, loss
print("model.py run success!")
| As R. Marolahy suggests, if you don't want to see this every time (I know I don't), add the following:
from transformers import logging
logging.set_verbosity_error()
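Placed before the model is built, this silences the checkpoint messages (a sketch using the model class from the question; the label counts are hypothetical):
from transformers import logging
logging.set_verbosity_error()  # call before from_pretrained runs

model = EntityModel(num_tag=17, num_pos=42)  # hypothetical label counts; BERT now loads quietly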
| https://stackoverflow.com/questions/67546911/ |
TypeError: new(): argument 'size' must be tuple of ints, but found element of type NoneType at pos 2 when using pytorch, using nn.linear | File "C:\Users\J2\Desktop\Pytorchseries\thenn.py", line 50, in <module>
net = Net()
TypeError: new(): argument 'size' must be tuple of ints, but found element of type NoneType at pos 2
If it helps, I was following the sentdex PyTorch tutorial. Any help would be appreciated. I am new to machine learning, and I was hoping that this would work. Please help me out!
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import matplotlib.pyplot as plt
import tqdm
training_data = np.load('training_data.npy', allow_pickle=True)
print(len(training_data))
X = torch.Tensor([i[0] for i in training_data]).view(-1,50,50)
X = X/255.0
y = torch.Tensor([i[1] for i in training_data])
plt.imshow(X[0], cmap='gray')
print(y[0])
class Net(nn.Module):
def __init__(self):
super().__init__() # just run the init of parent class (nn.Module)
self.conv1 = nn.Conv2d(1, 32, 5) # input is 1 image, 32 output channels, 5x5 kernel / window
self.conv2 = nn.Conv2d(32, 64, 5) # input is 32, bc the first layer output 32. Then we say the output will be 64 channels, 5x5 kernel / window
self.conv3 = nn.Conv2d(64, 128, 5)
x = torch.randn(50,50).view(-1,1,50,50)
self._to_linear = None
self.convs(x)
self.fc1 = nn.Linear(self._to_linear, 512) #flattening.
self.fc2 = nn.Linear(512, 2) # 512 in, 2 out bc we're doing 2 classes (dog vs cat).
def convs(self, x):
# max pooling over 2x2
x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
x = F.max_pool2d(F.relu(self.conv2(x)), (2, 2))
x = F.max_pool2d(F.relu(self.conv3(x)), (2, 2))
def forward(self, x):
x = self.convs(x)
x = x.view(-1, self._to_linear) # .view is reshape ... this flattens X before
x = F.relu(self.fc1(x))
x = self.fc2(x) # bc this is our output layer. No activation here.
return F.softmax(x, dim=1)
if self._to_linear is None:
self._to_linear = x[0].shape[0]*x[0].shape[1]*x[0].shape[2]
return x
net = Net()
print(net)
import torch.optim as optim
optimizer = optim.Adam(net.parameters(), lr=0.001)
loss_function = nn.MSELoss()
X = torch.Tensor([i[0] for i in training_data]).view(-1,50,50)
X = X/255.0
y = torch.Tensor([i[1] for i in training_data])
VAL_PCT = 0.1 # lets reserve 10% of our data for validation
val_size = int(len(X)*VAL_PCT)
print(val_size)
train_X = X[:-val_size]
train_y = y[:-val_size]
test_X = X[-val_size:]
test_y = y[-val_size:]
print(len(train_X), len(test_X))
BATCH_SIZE = 100
EPOCHS = 1
for epoch in range(EPOCHS):
for i in tqdm(range(0, len(train_X), BATCH_SIZE)): # from 0, to the len of x, stepping BATCH_SIZE at a time. [:50] ..for now just to dev
#print(f"{i}:{i+BATCH_SIZE}")
batch_X = train_X[i:i+BATCH_SIZE].view(-1, 1, 50, 50)
batch_y = train_y[i:i+BATCH_SIZE]
net.zero_grad()
outputs = net(batch_X)
loss = loss_function(outputs, batch_y)
loss.backward()
optimizer.step() # Does the update
print(f"Epoch: {epoch}. Loss: {loss}")
correct = 0
total = 0
with torch.no_grad():
for i in tqdm(range(len(test_X))):
real_class = torch.argmax(test_y[i])
net_out = net(test_X[i].view(-1, 1, 50, 50))[0] # returns a list,
predicted_class = torch.argmax(net_out)
if predicted_class == real_class:
correct += 1
total += 1
print("Accuracy: ", round(correct/total, 3))
| The issue is with self._to_linear. You use it in __init__ as:
self._to_linear = None
self.convs(x)
self.fc1 = nn.Linear(self._to_linear, 512) #flattening.
The call to nn.Linear has it as a parameter. This parameter should equal the number of input features in the linear layer, and cannot be None, since the value will determine the shape of the layer (number of weights and biases). How to fix this depends on what you're trying to achieve.
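One hedged sketch of a fix: in the question's code, the block that computes self._to_linear (and the return x) sits after the return in forward, so it never executes and convs returns None. Moving it into convs, as in the original sentdex tutorial, makes the shape inference run during __init__:
def convs(self, x):
    # max pooling over 2x2
    x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
    x = F.max_pool2d(F.relu(self.conv2(x)), (2, 2))
    x = F.max_pool2d(F.relu(self.conv3(x)), (2, 2))
    if self._to_linear is None:
        self._to_linear = x[0].shape[0]*x[0].shape[1]*x[0].shape[2]
    return x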
| https://stackoverflow.com/questions/67547859/ |
A problem about using multi-GPU with a two-stage CNN model | I designed a CNN model which has two stages. The first stage generates proposals, like the RPN in Faster R-CNN, and the second feeds these proposals into the following part.
It causes an error in the second stage.
According to the error information below, it seems like the second input is not correctly assigned to the multiple GPUs.
However, the model works fine with a single GPU.
File "/home/f523/guazai/sdb/rsy/cornerPoject/myCornerNet6/exp/train.py", line 212, in run_epoch
cls, rgr = self.model([proposal, fm], stage='two')
File "/home/f523/anaconda3/envs/rsy/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/f523/anaconda3/envs/rsy/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 156, in forward
return self.gather(outputs, self.output_device)
File "/home/f523/anaconda3/envs/rsy/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 168, in gather
return gather(outputs, output_device, dim=self.dim)
File "/home/f523/anaconda3/envs/rsy/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 68, in gather
res = gather_map(outputs)
File "/home/f523/anaconda3/envs/rsy/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 63, in gather_map
return type(out)(map(gather_map, zip(*outputs)))
File "/home/f523/anaconda3/envs/rsy/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 55, in gather_map
return Gather.apply(target_device, dim, *outputs)
File "/home/f523/anaconda3/envs/rsy/lib/python3.6/site-packages/torch/nn/parallel/_functions.py", line 68, in forward
return comm.gather(inputs, ctx.dim, ctx.target_device)
File "/home/f523/anaconda3/envs/rsy/lib/python3.6/site-packages/torch/cuda/comm.py", line 166, in gather
return torch._C._gather(tensors, dim, destination)
RuntimeError: CUDA error: an illegal memory access was encountered
PS
My model script is shown below. I want my two-stage model to support multiple batches, e.g. the batch size is 4 and every image outputs 128 proposals, so the proposal size here is (4*128, 5).
def _stage2(self, xs):
proposal, fm = xs
if proposal.dim()==2 and proposal.size(1) == 5:
# train mode
roi = roi_align(fm, proposal, output_size=[15, 15])
elif proposal.dim()==3 and proposal.size(2) == 4:
# eval mode
roi = roi_align(fm, [proposal[0]], output_size=[15, 15])
else:
assert AssertionError(" The boxes tensor shape should be Tensor[K, 5] in train or Tensor[N, 4] in eval")
x = self.big_kernel(roi)
cls = self.cls_fm(x)
rgr = self.rgr_fm(x)
return cls, rgr
| I know where I am wrong. Here’s my second stage to feed input
cls, offset = self.model([proposal, fm], stage='two')
proposal is the ROI whose shape is [N, 5]; the first column is the batch index. E.g., with a batch size of 4, the range of indices is [0, 1, 2, 3]. And fm is the feature map.
When I use multiple GPUs, like 2 GPUs, the proposal and fm tensors are split into two branches and fed into the two GPUs. However, the batch index range is still [0, 1, 2, 3], which causes an index error and raises the GPU error.
What I did was add a line before roi_align, like below:
from torchvision.ops import roi_align
proposal[:, 0] = proposal[:, 0] % fm.size(0)  # this makes multi-GPU work
roi = roi_align(fm, proposal, output_size=[15, 15])
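This works because nn.DataParallel splits both proposal and fm along dim 0, so each replica only sees fm.size(0) images; the modulo remaps the global batch indices (e.g. [2, 3] on the second of two GPUs) back into the local range that roi_align expects.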
| https://stackoverflow.com/questions/67557828/ |
Python - How to output a numpy array of probabilities with certain precision and retain its sum | I have a NumPy array of 6 probabilities which comes from a PyTorch softmax function.
[0.055709425,0.04365404,0.008613999,0.0022386343,0.0037478858,0.88603604]
I want to convert all 6 float numbers to string to represent a score output,
and all of them need to be rounded to a certain precision, say 4.
I used the following code to get the output text:
','.join(f'{x:.4f}' for x in scores) # scores is the array above
and the output is
0.0557,0.0437,0.0086,0.0022,0.0037,0.8860
which sums up to 0.9999 instead of 1.0. I have a bunch of arrays like this one that sum to either 0.9999 or 1.0001.
So my question is, how do I get the output that sums up exactly to 1.0?
I know it's a floating point computation problem. What am I missing, some rounding operation or some adjustment?
Thank you very much.
| You can round off to 2 decimal places, to reduce that error:
For example:
import numpy as np
a = np.array([0.055709425,0.04365404,0.008613999,0.0022386343,0.0037478858,0.88603604])
print(sum(a))
Output:
1.0000000241
Now:
new_array = [round(x,2) for x in a]
print(sum(new_array))
Output:
1.0
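If you need to keep 4 decimal places and still have the printed values sum to exactly 1.0, one hedged alternative is the residual trick: round all entries but one, then assign the remainder to the last entry (a sketch, reusing the array a from above):
vals = [round(x, 4) for x in a[:-1]]
vals.append(round(1.0 - sum(vals), 4))  # last entry absorbs the rounding residue
print(','.join(f'{v:.4f}' for v in vals))  # the decimal strings now sum to 1.0000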
| https://stackoverflow.com/questions/67564889/ |
FileNotFoundError: [Errno 2] No such file or directory: 'C:/Users/My_computer/Desktop/Compare/MHAN-master/AID_train/AID_train_LR/x4\\9.png' | I'm very new to the Python environment. I have tried to run super-resolution code for an upscaling factor of 4 using my own dataset. The low-resolution RGB images are kept in "C:/Users/My_computer/Desktop/Compare/MHAN-master/AID_train/AID_train_LR/x4". The code used for image loading is shown below:
def load_img(filepath):
img = Image.open(filepath).convert('RGB')
#img = Image.open(filepath, 'rb')
#y, _, _ = img.split()
return img
class DatasetFromFolder(data.Dataset):
def __init__(self, image_dir, lr_dir, patch_size, upscale_factor, data_augmentation, transform=None):
super(DatasetFromFolder, self).__init__()
self.image_filenames = [join(image_dir, x) for x in listdir(image_dir) if is_image_file(x)]
self.patch_size = patch_size
self.upscale_factor = upscale_factor
self.transform = transform
self.data_augmentation = data_augmentation
self.HR ='C://Users//My_computer//Desktop//Compare//MHAN-master//AID_train//AID_train_HR'
self.LR ='C://Users//My_computer//Desktop//Compare//MHAN-master//AID_train//AID_train_LR//x4'
def __getitem__(self, index):
target = load_img(self.image_filenames[index])
input = load_img(os.path.join(self.LR, file))
input, target, _ = get_patch(input,target,self.patch_size, self.upscale_factor)
return input, target
But I am getting the following error while the training code is compiled:
File "main_x4.py", line 185, in <module>
train(model, epoch)
File "main_x4.py", line 60, in train
for iteration, batch in enumerate(training_data_loader, 1):
File "C:\Users\My_computer\anaconda3\envs\MHAN\lib\site-packages\torch\utils\data\dataloader.py", line 346, in __next__
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "C:\Users\My_computer\anaconda3\envs\MHAN\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "C:\Users\My_computer\anaconda3\envs\MHAN\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "C:\Users\My_computer\Desktop\Compare\MHAN-master\dataset_x4.py", line 91, in __getitem__
input = load_img(os.path.join(self.LR, file))
File "C:\Users\My_computer\Desktop\Compare\MHAN-master\dataset_x4.py",
line 16, in load_img
img = Image.open(filepath).convert('RGB')
File "C:\Users\My_computer\anaconda3\envs\MHAN\lib\site-packages\PIL\Image.py",
line 2912, in open
fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: 'C://Users//My_computer//Desktop//Compare//MHAN-master//AID_train//AID_train_LR//x4\\9.png'
As the LR images are already in RGB format, is it necessary to convert them to RGB again?
Please help me to fix this error.
| 'C:/Users/My_computer/Desktop/Compare/MHAN-master/AID_train/AID_train_LR/x4\\9.png'
Your string contains a double backslash at the end of the path; that's why you can't access the directory.
use a raw string like
r'yourString'
or review your os.path.join
EDIT:
Try to convert every string into a raw-String, like mentioned above. You are still getting double backslashes, because certain \character combinations are escaped.
These are the escaped characters:
Edit your code to:
self.HR =r'C:/Users/My_computer/Desktop/Compare/MHAN-
master/AID_train/AID_train_HR'
self.LR =r'C:/Users/My_computer/Desktop/Compare/MHAN-
master/AID_train/AID_train_LR/x4'
Please notice the "r" in front of the string to convert them into raw-Strings.
| https://stackoverflow.com/questions/67570563/ |
After installing PyTorch CUDA, torch.cuda.is_available() shows False. What to do? | I have installed PyTorch with CUDA by running this command:
conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia
My CUDA version is 11.2. I am using Windows 10.
PyTorch for CUDA 11.2 is not available right now (pytorch.org).
So I installed the 11.1 version.
(CUDA version checked with nvidia-smi; screenshot omitted)
But it shows False:
torch.cuda.is_available() >>> False
I have tried both the 10.2 and 11.1 versions.
As far as I know, I do not need to install the CUDA toolkit separately for PyTorch.
| You should not install package to your base environment. Create a separate environment with necessary tools.
Example: create env called dlearn with Python v3.7 and torch packages
conda create -n dlearn python=3.7 pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia
Activate and use your dlearn environment
conda activate dlearn
python -c "import torch;print(torch.cuda.is_available())"
# this should echo True if all is well
At the moment the supported cudatoolkit is 11.1, which works fine with the 11.2 driver. They will update it sooner or later. You can build PyTorch from the GitHub source to get the latest if you want; the steps are more complex than the above.
| https://stackoverflow.com/questions/67570573/ |
How to save a training weight checkpoint of a model and continue training from the last point in PyTorch? | I'm trying to save checkpoint weights of the trained model after a certain number of epochs and continue training from that last checkpoint for another number of epochs using PyTorch.
To achieve this, I've written a script like the one below.
To train the model:
def create_model():
# load model from package
model = smp.Unet(
encoder_name="resnet152", # choose encoder, e.g. mobilenet_v2 or efficientnet-b7
encoder_weights='imagenet', # use `imagenet` pre-trained weights for encoder initialization
in_channels=3, # model input channels (1 for gray-scale images, 3 for RGB, etc.)
classes=2, # model output channels (number of classes in your dataset)
)
return model
model = create_model()
model.to(device)
learning_rate = 1e-3
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
epochs = 5
for epoch in range(epochs):
print('Epoch: [{}/{}]'.format(epoch+1, epochs))
# train set
pbar = tqdm(train_loader)
model.train()
iou_logger = iouTracker()
for batch in pbar:
# load image and mask into device memory
image = batch['image'].to(device)
mask = batch['mask'].to(device)
# pass images into model
pred = model(image)
# pred = checkpoint['model_state_dict']
# get loss
loss = criteria(pred, mask)
# update the model
optimizer.zero_grad()
loss.backward()
optimizer.step()
# compute and display progress
iou_logger.update(pred, mask)
mIoU = iou_logger.get_mean()
pbar.set_description('Loss: {0:1.4f} | mIoU {1:1.4f}'.format(loss.item(), mIoU))
# development set
pbar = tqdm(development_loader)
model.eval()
iou_logger = iouTracker()
with torch.no_grad():
for batch in pbar:
# load image and mask into device memory
image = batch['image'].to(device)
mask = batch['mask'].to(device)
# pass images into model
pred = model(image)
# get loss
loss = criteria(pred, mask)
# compute and display progress
iou_logger.update(pred, mask)
mIoU = iou_logger.get_mean()
pbar.set_description('Loss: {0:1.4f} | mIoU {1:1.4f}'.format(loss.item(), mIoU))
# save model
torch.save({
'epoch': epoch,
'model_state_dict': model.state_dict(),'optimizer_state_dict': optimizer.state_dict(),
'loss': loss,}, '/content/drive/MyDrive/checkpoint.pt')
From this, I can save the model checkpoint file as checkpoint.pt after 5 epochs.
To continue training from the saved checkpoint weight file for another 5 epochs, I wrote the script below:
epochs = 5
for epoch in range(epochs):
print('Epoch: [{}/{}]'.format(epoch+1, epochs))
# train set
pbar = tqdm(train_loader)
checkpoint = torch.load( '/content/drive/MyDrive/checkpoint.pt')
print(checkpoint)
model.load_state_dict(checkpoint['model_state_dict'])
model.to(device)
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
epoch = checkpoint['epoch']
loss = checkpoint['loss']
model.train()
iou_logger = iouTracker()
for batch in pbar:
# load image and mask into device memory
image = batch['image'].to(device)
mask = batch['mask'].to(device)
# pass images into model
pred = model(image)
# pred = checkpoint['model_state_dict']
# get loss
loss = criteria(pred, mask)
# update the model
optimizer.zero_grad()
loss.backward()
optimizer.step()
# compute and display progress
iou_logger.update(pred, mask)
mIoU = iou_logger.get_mean()
pbar.set_description('Loss: {0:1.4f} | mIoU {1:1.4f}'.format(loss.item(), mIoU))
# development set
pbar = tqdm(development_loader)
model.eval()
iou_logger = iouTracker()
with torch.no_grad():
for batch in pbar:
# load image and mask into device memory
image = batch['image'].to(device)
mask = batch['mask'].to(device)
# pass images into model
pred = model(image)
# get loss
loss = criteria(pred, mask)
# compute and display progress
iou_logger.update(pred, mask)
mIoU = iou_logger.get_mean()
pbar.set_description('Loss: {0:1.4f} | mIoU {1:1.4f}'.format(loss.item(), mIoU))
# save model
torch.save({
'epoch': epoch,
'model_state_dict': model.state_dict(),'optimizer_state_dict': optimizer.state_dict(),
'loss': loss,}, 'checkpoint.pt')
This throws error:
RuntimeError Traceback (most recent call last)
<ipython-input-31-54f48c10531a> in <module>()
---> 14 model.load_state_dict(checkpoint['model_state_dict'])
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict)
1222 if len(error_msgs) > 0:
1223 raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
-> 1224 self.__class__.__name__, "\n\t".join(error_msgs)))
1225 return _IncompatibleKeys(missing_keys, unexpected_keys)
1226
RuntimeError: Error(s) in loading state_dict for DataParallel:
Missing key(s) in state_dict: "module.encoder.conv1.weight", "module.encoder.bn1.weight", "module.encoder.bn1.bias", "module.encoder.bn1.running_mean", "module.encoder.bn1.running_var", "module.encoder.layer1.0.conv1.weight", "module.encoder.layer1.0.bn1.weight", "module.encoder.layer1.0.bn1.bias", "module.encoder.layer1.0.bn1.running_mean", "module.encoder.layer1.0.bn1.running_var", "module.encoder.layer1.0.conv2.weight", "module.encoder.layer1.0.bn2.weight", "module.encoder.layer1.0.bn2.bias", "module.encoder.layer1.0.bn2.running_mean", "module.encoder.layer1.0.bn2.running_var", "module.encoder.layer1.0.conv3.weight", "module.encoder.layer1.0.bn3.weight", "module.encoder.layer1.0.bn3.bias", "module.encoder.layer1.0.bn3.running_mean", "module.encoder.layer1.0.bn3.running_var", "module.encoder.layer1.0.downsample.0.weight", "module.encoder.layer1.0.downsample.1.weight", "module.encoder.layer1.0.downsample.1.bias", "module.encoder.layer1.0.downsample.1.running_mean", "module.encoder.layer1.0.downsample.1.running_var", "module.encoder.layer1.1.conv1.weight", "module.encoder.layer1.1.bn1.weight", "module.encoder.layer1.1.bn1.bias", "module.encoder.layer1.1.bn1.running_mean", "module.encoder.layer1.1.bn1.running_var", "module.encoder.layer1.1.conv2.weight", "module.encoder.layer1.1.bn2.weight", "module.encoder.layer1.1.bn2.bias", "module.encoder.layer1.1.bn2.running_mean", "module.encoder.layer1.1.bn2.running_var", "module.encoder.layer1.1.conv3.weight", "module.encoder.layer...
Unexpected key(s) in state_dict: "encoder.conv1.weight", "encoder.bn1.weight", "encoder.bn1.bias", "encoder.bn1.running_mean", "encoder.bn1.running_var", "encoder.bn1.num_batches_tracked", "encoder.layer1.0.conv1.weight", "encoder.layer1.0.bn1.weight", "encoder.layer1.0.bn1.bias", "encoder.layer1.0.bn1.running_mean", "encoder.layer1.0.bn1.running_var", "encoder.layer1.0.bn1.num_batches_tracked", "encoder.layer1.0.conv2.weight", "encoder.layer1.0.bn2.weight", "encoder.layer1.0.bn2.bias", "encoder.layer1.0.bn2.running_mean", "encoder.layer1.0.bn2.running_var", "encoder.layer1.0.bn2.num_batches_tracked", "encoder.layer1.1.conv1.weight", "encoder.layer1.1.bn1.weight", "encoder.layer1.1.bn1.bias", "encoder.layer1.1.bn1.running_mean", "encoder.layer1.1.bn1.running_var", "encoder.layer1.1.bn1.num_batches_tracked", "encoder.layer1.1.conv2.weight", "encoder.layer1.1.bn2.weight", "encoder.layer1.1.bn2.bias", "encoder.layer1.1.bn2.running_mean", "encoder.layer1.1.bn2.running_var", "encoder.layer1.1.bn2.num_batches_tracked", "encoder.layer1.2.conv1.weight", "encoder.layer1.2.bn1.weight", "encoder.layer1.2.bn1.bias", "encoder.layer1.2.bn1.running_mean", "encoder.layer1.2.bn1.running_var", "encoder.layer1.2.bn1.num_batches_tracked", "encoder.layer1.2.conv2.weight", "encoder.layer1.2.bn2.weight", "encoder.layer1.2.bn2.bias", "encoder.layer1.2.bn2.running_mean", "encoder.layer1.2.bn2.running_var", "encoder.layer1.2.bn2.num_batches_tracked", "encoder.layer2.0.conv1.weight", "encoder.layer...
What am I doing wrong? How can I fix this? Any help on this will be helpful.
| this line:
model.load_state_dict(checkpoint['model_state_dict'])
should be like this:
model.load_state_dict(checkpoint)
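A hedged note, grounded in the traceback: every missing key carries a module. prefix, which nn.DataParallel adds to parameter names, while the saved keys do not. If the error persists, a common sketch is to load the checkpoint's state dict into the wrapped model's underlying module:
# assumption: model is wrapped in nn.DataParallel when loading
model.module.load_state_dict(checkpoint['model_state_dict'])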
| https://stackoverflow.com/questions/67571329/ |
How to update tensors matching dimension-wise vectors | Let there be two 2D tensors, A (m × c) and B (n × c). Each row vector that belongs to B also belongs to A, i.e., the rows of B are a subset of the rows of A. Additionally, row vectors in A are not unique, i.e., A may have duplicate rows. However, row vectors in B are unique.
There is another pair of tensors, P (m × f) and Q (n × f). I am trying to do the following:
for i in range(B.shape[0]):
rv = B[i, :]
fv = Q[i, :]
# P[<row indexes of A matching rv>, :] = fv
How to do this correctly?
Is it possible to get rid of the for loop?
| You can use the following mask:
for i in range(B.shape[0]):
rv = B[i]
fv = Q[i]
mask = torch.where((A == rv).all(dim=1))[0]
P[mask] = fv
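To address the second question, a possible vectorized sketch (it builds an m × n comparison mask, so memory grows with m*n):
mask = (A.unsqueeze(1) == B.unsqueeze(0)).all(dim=2)  # mask[i, j] is True when A[i] == B[j]
has_match = mask.any(dim=1)                           # rows of A that appear in B
match_idx = mask.float().argmax(dim=1)                # index in B of the (unique) match
P[has_match] = Q[match_idx[has_match]]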
| https://stackoverflow.com/questions/67582406/ |
ValueError('need at least one array to stack') | When I processed a video and its audio, I encountered an error:
Original Traceback (most recent call last):
File "/home/yzx/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
data = fetcher.fetch(index)
File "/home/yzx/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/yzx/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/yzx/lunwen/PseudoBinaural_CVPR2021-master/data/Augment_dataset.py", line 66, in __getitem__
data_ret, data_ret_sep = self._get_pseudo_item(index)
File "/home/yzx/lunwen/PseudoBinaural_CVPR2021-master/data/Pseudo_dataset.py", line 263, in _get_pseudo_item
stereo = self.construct_stereo_ambi(pst_sources)
File "/home/yzx/lunwen/PseudoBinaural_CVPR2021-master/data/Pseudo_dataset.py", line 120, in construct_stereo_ambi
signals = np.stack([src.signal for src in pst_sources], axis=1) # signals shape: [Len, n_signals]
File "", line 5, in stack
File "/home/yzx/anaconda3/envs/pytorch/lib/python3.8/site-packages/numpy/core/shape_base.py", line 423, in stack
raise ValueError('need at least one array to stack')
ValueError: need at least one array to stack
The list_sample_file below is the content of train.txt. Every line has the same form as the first line: "/home/yzx/lunwen/datasets/FAIR-PLAY/binaural_audios/000383.wav,/home/yzx/lunwen/datasets/FAIR-PLAY/frames/000383". The former path is the path of each audio file, and the latter is the folder of frames for each video. I extracted 10 frames per second from each 10-second video. They are split into 'audio_file' and 'img_folder' in Pseudo_dataset.py. I checked that these paths are valid, but I don't know why the error occurs. I need your help.
The content of Augment_dataset.py is:
class AugmentDataset(StereoDataset, PseudoDataset):
def __init__(self, opt, list_sample_file):
self.opt = opt
PseudoDataset.__init__(self, opt, list_sample_file)
StereoDataset.__init__(self, opt)
if "MUSIC" in self.opt.datalist:
dup_times = 1
print("dup_times:", dup_times)
else:
dup_times = 2
print("dup_times:", dup_times)
self.total_samples *= dup_times # in order to align with the length of FAIR-Play dataset
random.shuffle(self.audios)
random.shuffle(self.total_samples)
def __getitem__(self, index):
if random.uniform(0,1) < self.opt.pseudo_ratio:
data_ret, data_ret_sep = self._get_pseudo_item(index)
else:
data_ret = self._get_stereo_item(index)
_, data_ret_sep = self._get_pseudo_item(index)
data_ret.update(data_ret_sep)
return data_ret
def __len__(self):
return min(len(self.audios), len(self.total_samples))
def name(self):
return 'AugmentDataset'
The content of Pseudo_dataset.py is:
def audio_normalize(samples, desired_rms = 0.1, eps = 1e-4):
rms = np.maximum(eps, np.sqrt(np.mean(samples**2)))
samples = samples * (desired_rms / rms)
return rms / desired_rms, samples
def generate_spectrogram(audio):
spectro = librosa.core.stft(audio, n_fft=512, hop_length=160, win_length=400, center=True)
real = np.expand_dims(np.real(spectro), axis=0)
imag = np.expand_dims(np.imag(spectro), axis=0)
spectro_two_channel = np.concatenate((real, imag), axis=0)
return spectro_two_channel
def process_image(image, augment, square=False):
if square:
iH, iW = 240, 240
H, W = 224, 224
else:
iH, iW = 240, 480
H, W = 224, 448
image = mmcv.imresize(image, (iW,iH))
h,w,_ = image.shape
w_offset = w - W
h_offset = h - H
left = random.randrange(0, w_offset + 1)
upper = random.randrange(0, h_offset + 1)
image = mmcv.imcrop(image, np.array([left, upper, left+W-1, upper+H-1]))
if augment:
enhancer = ImageEnhance.Brightness(Image.fromarray(image))
image = enhancer.enhance(random.random()*0.6 + 0.7)
enhancer = ImageEnhance.Color(image)
image = enhancer.enhance(random.random()*0.6 + 0.7)
return image
class PseudoDataset(data.Dataset):
def __init__(self, opt, list_sample_file):
super().__init__()
self.opt = opt
self.total_samples = mmcv.list_from_file(list_sample_file)
normalize = transforms.Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]
)
vision_transform_list = [transforms.ToTensor(), normalize]
self.vision_transform = transforms.Compose(vision_transform_list)
random.seed(self.opt.seed)
# load background, just one large-sizeimage
self.bkg_img = mmcv.imread('./data/bkg.png')
# build binauralizer
hrtf_dir = "./data/subject03"
# binauralizer = SourceBinauralizer(use_hrtfs=True, cipic_dir=hrtf_dir)
self.hrir_db = CIPIC_HRIR(hrtf_dir)
# encode to ambisonics, and then stereo
speakers_phi = (2. * np.arange(2*4) / float(2*4) - 1.) * np.pi
self.speakers_pos = [Position(phi, 0, 1, 'polar') for phi in speakers_phi]
self.sph_mat = spherical_harmonics_matrix(self.speakers_pos, max_order=1) # shape: [N_array_speakers, 4]
# parameter of loading audio
self.exp_audio_len = int(self.opt.audio_length * self.opt.audio_sampling_rate)
#print("Now load {} box info".format(self.opt.mode))
#self.box_info = mmcv.load('./dataset/ASMR/scripts/results/{}_box_info.pkl'.format(self.opt.mode))
if opt.fov == '1/3':
self.fov = 1/3.
elif opt.fov == '1/2':
self.fov = 1/2.
elif opt.fov == '5/6':
self.fov = 5/6.
elif opt.fov == '1':
self.fov = 1.
else:
self.fov = 2/3.
#self.categories = ['acoustic_guitar', 'banjo', 'bass', 'cello', 'drum', 'harp', 'piano', 'trumpet', 'ukelele']
def construct_stereo_direct(self, pst_sources):
stereo = np.zeros((2, self.exp_audio_len))
for src in pst_sources:
left_hrir, right_hrir = self.hrir_db.get_closest(src.position)[1:]
left_signal = np.convolve(src.signal, np.flip(left_hrir, axis=0), 'valid')
right_signal = np.convolve(src.signal, np.flip(right_hrir, axis=0), 'valid')
n_valid, i_start = left_signal.shape[0], left_hrir.shape[0] - 1
stereo[0, i_start:(i_start + n_valid)] += left_signal
stereo[1, i_start:(i_start + n_valid)] += right_signal
return stereo
def construct_stereo_ambi(self, pst_sources):
# encode to ambisonics
Y = spherical_harmonics_matrix([src.position for src in pst_sources], max_order=1) # Y shape: [n_signals, 4]
signals = np.stack([src.signal for src in pst_sources], axis=1) # signals shape: [Len, n_signals]
ambisonic = np.dot(signals, Y) # shape: [Len, 4]
array_speakers_sound = np.dot(ambisonic, self.sph_mat.T)
#array_speakers_sound = np.dot(ambisonic, np.linalg.pinv(self.sph_mat))
array_sources = [PositionalSource(array_speakers_sound[:, i], speaker_pos, \
self.opt.audio_sampling_rate) for i, speaker_pos in enumerate(self.speakers_pos)]
return self.construct_stereo_direct(array_sources)
def construct_stereo_ambi_direct(self, pst_sources):
# encode to ambisonics
Y = spherical_harmonics_matrix([src.position for src in pst_sources], max_order=1)
signals = np.stack([src.signal for src in pst_sources], axis=1)
ambisonic = np.dot(signals, Y) # shape: [Len, 4]
stereo = np.stack((
ambisonic[:, 0] / 2 + ambisonic[:, 1] / 2,
ambisonic[:, 0] / 2 - ambisonic[:, 1] / 2
))
return stereo
def _get_pseudo_item(self, index):
# ensure the number of audios in a scene
N = np.random.choice([1,2,3], p=[0.4, 0.5, 0.1])
chosen_samples = [self.total_samples[index]]
# avoid repeat sample
for _ in range(1, N):
while True:
new_sample = random.choice(self.total_samples)
if new_sample not in chosen_samples:
chosen_samples.append(new_sample)
break
audio_margin = 0
init_H = 360
init_W = 640
pst_sources = []
if self.opt.not_use_background:
cur_bkg_img = np.zeros((init_H, init_W, 3)).astype(np.uint8)
else:
# crop background img, exp_shape: [init_H, init_W, 3]
bkg_start_x = np.random.randint(low=0, high=self.bkg_img.shape[1] - init_W)
bkg_start_y = np.random.randint(low=0, high=self.bkg_img.shape[0] - init_H)
cur_bkg_img = mmcv.imcrop(self.bkg_img.copy(),
np.array([bkg_start_x, bkg_start_y, bkg_start_x+init_W-1, bkg_start_y+init_H-1]))
#H_bkg, W_bkg, _ = cur_bkg_img.shape
corner_record = []
patch_size_record = []
center_x_record = []
audio_list = []
patch_list = []
actual_N = 0
#load audio
for idx, chosen_sample in enumerate(chosen_samples):
audio_file, img_folder = chosen_sample.split(',')
# audio part
audio, audio_rate = librosa.load(audio_file, sr=self.opt.audio_sampling_rate, mono=True)
#randomly get a start time for the audio segment from the original clip
audio_len = len(audio) / audio_rate
assert audio_len - self.opt.audio_length - audio_margin > audio_margin
audio_start_time = random.uniform(audio_margin, audio_len - self.opt.audio_length - audio_margin)
audio_end_time = audio_start_time + self.opt.audio_length
audio_start = int(audio_start_time * self.opt.audio_sampling_rate)
audio_end = audio_start + self.exp_audio_len
audio = audio[audio_start:audio_end]
if self.opt.audio_normal:
normalizer0, audio = audio_normalize(audio)
# video part
# load img **patches**, copy bkg_img and construct a new image
# load accurate frame
cur_img_list = natsort.natsorted(glob(osp.join(img_folder, '*.jpg')))
# get the closest frame to the audio segment
frame_idx = (audio_start_time + audio_end_time) / 2 * 10
frame_idx = int(np.clip(frame_idx, 0, len(cur_img_list) - 1))
img_file = cur_img_list[frame_idx]
img_patch = mmcv.imread(img_file)
if self.opt.patch_resize:
h_patch, w_patch, _= img_patch.shape
resize_ratio = min(1/normalizer0, init_H / h_patch, init_W / 2 / w_patch)
img_patch = mmcv.imrescale(img_patch, resize_ratio * random.uniform(0.8, 1))
H_new, W_new, _ = img_patch.shape
# just consider the overlap in the horizontal axis
occupy_matrix = np.ones((init_W))
# avoid cross border in x dim
occupy_matrix[:(-W_new + 1)] = 0
# avoid overlap
for last_corner_x, W_last in zip(corner_record, patch_size_record):
occupy_x = max(0, last_corner_x - W_new)
occupy_matrix[occupy_x : last_corner_x + W_last] = 1
# random sample position for this mono audio
free_x_positions = np.where(occupy_matrix == 0)[0]
if len(free_x_positions) < 2:
break
actual_N += 1
corner_x = random.choice(free_x_positions)
corner_record.append(corner_x)
patch_size_record.append(W_new)
corner_y = random.randint(0, init_H - H_new)
center_y = corner_y + H_new // 2
center_x = corner_x + W_new // 2
center_x_record.append(center_x)
azimuth = (init_W // 2 - center_x) / init_W * pi * self.fov
elevation = (init_H // 2 - center_y) / init_H * pi / 2
if self.opt.visualize_data:
output_dir = 'others/dataset_visual/{:d}_{:d}'.format(N, index)
if not osp.exists(output_dir):
os.mkdir(output_dir)
if librosa.__version__ >= '0.8.0':
import soundfile as sf
sf.write(osp.join(output_dir, '{:d}.wav'.format(idx)), audio.transpose(), audio_rate)
else:
librosa.output.write_wav(osp.join(output_dir, '{:d}.wav'.format(idx)), audio, sr=audio_rate)
audio_list.append(audio)
patch_list.append(img_file)
pst_sources.append(PositionalSource(audio, Position(azimuth, elevation, 3, 'polar'), audio_rate))
pdb.set_trace()
if self.opt.blending:
center = (center_x, center_y)
mask = 255 * np.ones(img_patch.shape, img_patch.dtype)
cur_bkg_img = cv2.seamlessClone(img_patch, cur_bkg_img, mask, center, cv2.NORMAL_CLONE)
else:
patch_in_start_x = corner_x
patch_in_start_y = corner_y
assert patch_in_start_x >= 0
assert patch_in_start_y >= 0
cur_bkg_img[patch_in_start_y : patch_in_start_y + H_new, patch_in_start_x : patch_in_start_x + W_new] = img_patch
if self.opt.stereo_mode == 'direct':
#print("use direct")
stereo = self.construct_stereo_direct(pst_sources)
elif self.opt.stereo_mode == 'ambisonic':
#print("use ambisonic")
stereo = self.construct_stereo_ambi(pst_sources)
elif self.opt.stereo_mode == 'ambidirect':
#print("use ambidirect")
stereo = self.construct_stereo_ambi_direct(pst_sources)
else:
raise ValueError("please choose right stereo mode")
normalizer, _ = audio_normalize(stereo[0] + stereo[1])
stereo = stereo / normalizer
audio_channel1, audio_channel2 = stereo
frame = cur_bkg_img
if self.opt.visualize_data:
output_dir = 'others/dataset_visual/{:d}_{:d}'.format(N, index)
if librosa.__version__ >= '0.8.0':
import soundfile as sf
sf.write(osp.join(output_dir, 'input_binaural.wav'), stereo.transpose(), audio_rate)
else:
librosa.output.write_wav(osp.join(output_dir, 'input_binaural.wav'), stereo, sr=audio_rate)
mmcv.imwrite(frame, osp.join(output_dir, 'reference.jpg'))
frame = process_image(frame, self.opt.enable_data_augmentation)
frame = self.vision_transform(frame)
#passing the spectrogram of the difference
audio_diff_spec = torch.FloatTensor(generate_spectrogram(audio_channel1 - audio_channel2))
audio_mix_spec = torch.FloatTensor(generate_spectrogram(audio_channel1 + audio_channel2))
data_ret = {'frame': frame, 'audio_diff_spec':audio_diff_spec, 'audio_mix_spec':audio_mix_spec}
# incorporate separation part
assert len(patch_list) == len(audio_list)
left_channel = np.zeros(self.exp_audio_len).astype(np.float32)
right_channel = np.zeros(self.exp_audio_len).astype(np.float32)
if len(audio_list) >= 2:
for cur_audio, center_x in zip(audio_list, center_x_record):
if center_x < init_W // 3:
left_channel += cur_audio
elif center_x > init_W // 3:
right_channel += cur_audio
else:
left_channel += cur_audio
right_channel += cur_audio
_, left_channel = audio_normalize(left_channel)
_, right_channel = audio_normalize(right_channel)
else:
left_channel = audio_channel1
right_channel = audio_channel2
if self.opt.visualize_data:
output_dir = 'others/dataset_visual/{:d}_{:d}'.format(N, index)
if librosa.__version__ >= '0.8.0':
import soundfile as sf
sf.write(osp.join(output_dir, 'gt_left.wav'), left_channel, audio_rate)
sf.write(osp.join(output_dir, 'gt_right.wav'), right_channel, audio_rate)
else:
librosa.output.write_wav(osp.join(output_dir, 'gt_left.wav'), left_channel, audio_rate)
librosa.output.write_wav(osp.join(output_dir, 'gt_right.wav'), right_channel, audio_rate)
sep_mix_spec = audio_mix_spec
sep_diff_spec = torch.FloatTensor(generate_spectrogram(left_channel - right_channel))
frame_sep_list = frame
if self.opt.mode == 'train':
data_ret_sep = {'frame_sep': frame_sep_list, 'sep_diff_spec': sep_diff_spec, 'sep_mix_spec': sep_mix_spec}
else:
data_ret_sep = {'frame_sep': frame_sep_list, 'sep_diff_spec': sep_diff_spec, 'sep_mix_spec': sep_mix_spec, 'left_audio': left_channel, 'right_audio': right_channel}
return data_ret, data_ret_sep
def __getitem__(self, index):
data_ret, data_ret_sep = self._get_pseudo_item(index)
if self.opt.dataset_mode == 'Pseudo_stereo':
return data_ret
elif self.opt.dataset_mode == 'Pseudo_sep':
return data_ret_sep
else:
data_ret.update(data_ret_sep)
return data_ret
def __len__(self):
return len(self.total_samples)
def name(self):
return 'PseudoDataset'
def initialize(self, opt):
pass
| Your issue is here:
signals = np.stack([src.signal for src in pst_sources], axis=1) # signals shape: [Len, n_signals]
It looks like pst_sources is empty, and so you are trying to stack an empty list.
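A hedged debugging sketch: guarding before the stack call turns the crash into a clearer message and makes it easier to find which sample produces no sources:
# inside construct_stereo_ambi, before the np.stack call
if not pst_sources:
    raise RuntimeError('pst_sources is empty for this sample; no positional sources were placed')
signals = np.stack([src.signal for src in pst_sources], axis=1)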
| https://stackoverflow.com/questions/67584404/ |
Train on top of a TorchScript model | I currently have a TorchScript model I load via torch.jit.load. I would like to take some data I have and train on top of these weights; however, I cannot find out how to train a serialized TorchScript model.
| Turns out that the returned ScriptModule does actually support training: https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html#torch.jit.ScriptModule.train
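For a concrete starting point, a minimal sketch of fine-tuning a loaded ScriptModule (the file name, loss, and loader are placeholders, not part of the question):
import torch

model = torch.jit.load('model.pt')  # placeholder path
model.train()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for x, y in loader:  # loader is a hypothetical DataLoader of (input, target) pairs
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()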
| https://stackoverflow.com/questions/67584702/ |