instruction | input | output | source
---|---|---|---
What is the difference between `.zero_grad()` and `.zero_grad`? | I am working on a neural network and I find that with *.zero_grad() I get proper loss function values that also converge to zero, whereas *.zero_grad (without brackets) gives loss function values in 5 digits (13,564.23). So, what is the difference between them? Why are the "()" important? Thank you.
| optimizer.zero_grad is a function, so you need to call it with parentheses. If you don't use the parentheses, you are just referencing the function object but never calling it.
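A minimal sketch of the difference (the parameter and optimizer here are stand-ins, not from the question):
import torch
params = [torch.nn.Parameter(torch.randn(2, 2))]
optimizer = torch.optim.SGD(params, lr=0.1)
optimizer.zero_grad    # just a reference to the bound method; gradients are untouched
optimizer.zero_grad()  # actually calls the method and resets the gradients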
| https://stackoverflow.com/questions/63455304/ |
Proper Usage of PyTorch's non_blocking=True for Data Prefetching | I am looking into prefetching data into the GPU from the CPU when the model is being trained on the GPU. Overlapping CPU-to-GPU data transfer with GPU model training appears to require both
Transferring data to GPU using data = data.cuda(non_blocking=True)
Pin data to CPU memory using train_loader = DataLoader(..., pin_memory=True)
However, I cannot understand how non-blocking transfer is being performed in this official PyTorch example, specifically this code block:
for i, (images, target) in enumerate(train_loader):
# measure data loading time
data_time.update(time.time() - end)
if args.gpu is not None:
images = images.cuda(args.gpu, non_blocking=True)
if torch.cuda.is_available():
target = target.cuda(args.gpu, non_blocking=True)
# compute output
output = model(images)
loss = criterion(output, target)
Won't images.cuda(non_blocking=True) and target.cuda(non_blocking=True) have to be completed before output = model(images) is executed? Since this is a synchronization point, images must first be fully transferred to the CUDA device, so the data transfer steps are effectively no longer non-blocking.
Since output = model(images) is blocking, images.cuda() and target.cuda() in the next i iteration of the for loop will not occur until the model output is computed, meaning no prefetching in the next loop iteration.
If this is correct, what is the correct way to perform data prefetching to the GPU?
| I think where you are off is assuming that output = model(images) is a synchronization point. It is not: CUDA kernel launches are asynchronous, so the host thread queues the forward pass and moves on while the GPU works. Quote from the official PyTorch docs:
Also, once you pin a tensor or storage, you can use asynchronous GPU
copies. Just pass an additional non_blocking=True argument to a
to() or a cuda() call. This can be used to overlap data
transfers with computation.
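A minimal sketch of the pattern the docs describe (dataset and model are assumed to exist already):
loader = torch.utils.data.DataLoader(dataset, batch_size=32, pin_memory=True)
for images, target in loader:
    images = images.cuda(non_blocking=True)  # async copy from pinned host memory
    target = target.cuda(non_blocking=True)
    output = model(images)  # kernels are queued; CUDA orders them after the copies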
| https://stackoverflow.com/questions/63460538/ |
What is the right calculation of epoch loss in training? | I am reading the PyTorch official tutorial for fine-tuning and I am facing one problem: the calculation of the loss in each epoch.
Before this, I calculated the loss for each batch of data, accumulated these batch losses, and took the mean of these values as the loss of the epoch. But in that example, the calculation is as follows:
for inputs, labels in dataloaders[phase]:
inputs = inputs.to(device)
labels = labels.to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward
# track history if only in train
with torch.set_grad_enabled(phase == 'train'):
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels)
# backward + optimize only if in training phase
if phase == 'train':
loss.backward()
optimizer.step()
# statistics
running_loss += loss.item() * inputs.size(0)
running_corrects += torch.sum(preds == labels.data)
My question is about this line: running_loss += loss.item() * inputs.size(0). It multiplies the batch's loss value by the batch size. What is the true way to calculate the loss of an epoch?
And what is the unit of the loss? What is the range of the loss value?
| Yes, the code snippet accumulates the batch mean error multiplied by the batch size. If you want to calculate the true summation, you can use
torch.nn.CrossEntropyLoss(reduction = "sum")
which will give you the sum of errors for the batch. Then you can directly sum for each batch as follows:
running_loss += loss.item()
The range of the loss value depends on your number of classes and feature vector. The code in your question will produce the same running_loss as reduction="sum", because it effectively computes
(loss/batch_size) * batch_size
which is the same as the summed loss value. However, backpropagation changes: with reduction="sum" you backprop through the sum of the losses, whereas with the default mean reduction you backprop through the mean loss.
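Putting it together, a sketch of accumulating the epoch loss (dataloader and model are assumed to exist):
criterion_sum = torch.nn.CrossEntropyLoss(reduction="sum")
running_loss, n_seen = 0.0, 0
for inputs, labels in dataloader:
    outputs = model(inputs)
    running_loss += criterion_sum(outputs, labels).item()  # per-batch sum of errors
    n_seen += inputs.size(0)
epoch_loss = running_loss / n_seen  # mean loss per sample over the epoch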
| https://stackoverflow.com/questions/63463952/ |
AttributeError: module 'torch.utils' has no attribute 'tensorboard' | I tried to use tensorboard in torch.utils, but it says "module 'torch.utils' has no attribute 'tensorboard'".
My torch version is "1.6.0+cu101"
PS C:\Users\kelekelekle> python
Python 3.7.6 (tags/v3.7.6:43364a7ae0, Dec 19 2019, 01:54:44) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.__version__)
1.6.0+cu101
>>> writer = torch.utils.tensorboard.SummaryWriter()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'torch.utils' has no attribute 'tensorboard'
>>>
| You have to install tensorboard via:
pip install tensorboard
(or similar). Given that is done, you should import the tensorboard module from the torch.utils package:
from torch.utils import tensorboard
tensorboard.SummaryWriter("foo")
Or you can import SummaryWriter directly:
from torch.utils.tensorboard import SummaryWriter
SummaryWriter("bar")
| https://stackoverflow.com/questions/63466204/ |
Vectorized way to apply a 3-dimension mask to RGB in pytorch | I have a HxWx3 tensor representing an RGB image and a HxWx3 mask (boolean) tensor as input.
It is assumed that for each (i,j) in the mask tensor there's exactly one true value (that is exactly one of R\G\B is on).
I want to apply the mask to the image to result in a HxW (or HxWx1) tensor V where V[i,j]='the matching R\G\B value according to the mask'.
Using Problem applying binary mask to an RGB image with numpy I was able to achieve the following:
>>> X*mask
tensor([[[ 9., 10.],
[ 0., 0.]],
[[ 0., 0.],
[ 0., 20.]],
[[ 0., 0.],
[30., 0.]]])
But as stated, I want a single dim HxW and not HxWx3 as result.
| Assuming that for each i,j only a single R/G/B value is retained, you can simply do:
(X*mask).sum(axis=2)
This should give you your desired (HxW) output.
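A runnable sketch with the question's values, in the channels-first layout shown above (use axis=2 instead if your tensor really is HxWx3):
import torch
X = torch.tensor([[[ 9., 10.], [ 0.,  0.]],
                  [[ 0.,  0.], [ 0., 20.]],
                  [[ 0.,  0.], [30.,  0.]]])
mask = X != 0                # stand-in mask: exactly one True per (i, j)
V = (X * mask).sum(axis=0)   # channel axis is 0 in this layout
print(V)                     # tensor([[ 9., 10.], [30., 20.]])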
| https://stackoverflow.com/questions/63467616/ |
TypeError: reshape(): argument 'input' (position 1) must be Tensor, not numpy.ndarray | I am a high school student who doesn't have much experience using PyTorch and LIME. I'm having a lot of trouble with my image shape. Initially my image shape was (3, 224, 224); however, the LIME algorithm only works with images in the shape (..., ..., 3). As a result, I tried transposing the image earlier. It seemed that I made some more progress by doing that; however, now I am getting a different error. Here is some of my code to understand what I have been doing before the error came up.
def get_preprocess_transform():
transf = transforms.Compose([
# transforms.ToPILImage(), #had to convert image to PIL as error was showing up two cells below about needing it in pil
transforms.Resize(input_size),
transforms.CenterCrop(input_size),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
return transf
preprocess_transform = get_preprocess_transform() ## use your data_transform but in a method version
def batch_predict(image):
model_ft.eval()
batch = torch.reshape(image,(1,3,224,224))
print(type(batch))
logits = model_ft(batch)
probs = F.softmax(logits, dim=1)
return probs.detach().cpu().numpy()
print(img_t.shape)
img_t = torch.reshape(img_t,(1,3,224,224))
test_pred = batch_predict(img_t)
test_pred.squeeze().argmax()
img_t = np.ones((3, 224, 224))
np.transpose(img_t, (2,1,0)).shape
img_x = np.transpose(img_t, (2, 1, 0))
print(img_x.shape)
from lime import lime_image
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(img_x, ## pass your image, do not transform
batch_predict, # classification function
top_labels=5,
hide_color=0,
num_samples=1000)
Here is the error message that comes from the "explainer cell"
| Use this command to convert the numpy.ndarray to a tensor:
img = torch.from_numpy(img).float() #use appropriate name of variable
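A sketch of where that conversion could go in the question's batch_predict (an assumption about the intended placement; LIME hands the prediction function a numpy array):
import numpy as np
import torch
def batch_predict(image):
    if isinstance(image, np.ndarray):
        image = torch.from_numpy(image).float()  # ndarray -> float tensor
    batch = torch.reshape(image, (1, 3, 224, 224))
    return batch  # continue with model_ft(batch) as in the question
Note that since the array was transposed to (224, 224, 3), you may also need to permute the axes back to channels-first rather than plain reshape.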
| https://stackoverflow.com/questions/63473971/ |
How can torchaudio.transforms.Resample be called without a __call__ function inside? | if sample_rate != sr:
waveform = torchaudio.transforms.Resample(sample_rate, sr)(waveform)
sample_rate = sr
I was wondering how this Resample works there, so I took a look at the torchaudio docs. I thought there would be a __call__ function, because Resample is used as a function, i.e. Resample()(waveform). But inside, there are only __init__ and forward functions. I think the forward function is the working function, but I don't know why it is named 'forward' and not __call__. What am I missing?
class Resample(torch.nn.Module):
r"""Resample a signal from one frequency to another. A resampling method can be given.
Args:
orig_freq (float, optional): The original frequency of the signal. (Default: ``16000``)
new_freq (float, optional): The desired frequency. (Default: ``16000``)
resampling_method (str, optional): The resampling method. (Default: ``'sinc_interpolation'``)
"""
def __init__(self,
orig_freq: int = 16000,
new_freq: int = 16000,
resampling_method: str = 'sinc_interpolation') -> None:
super(Resample, self).__init__()
self.orig_freq = orig_freq
self.new_freq = new_freq
self.resampling_method = resampling_method
def forward(self, waveform: Tensor) -> Tensor:
r"""
Args:
waveform (Tensor): Tensor of audio of dimension (..., time).
Returns:
Tensor: Output signal of dimension (..., time).
"""
if self.resampling_method == 'sinc_interpolation':
# pack batch
shape = waveform.size()
waveform = waveform.view(-1, shape[-1])
waveform = kaldi.resample_waveform(waveform, self.orig_freq, self.new_freq)
# unpack batch
waveform = waveform.view(shape[:-1] + waveform.shape[-1:])
return waveform
raise ValueError('Invalid resampling method: %s' % (self.resampling_method))
--edit--
I looked around torch.nn.Module. There is no def __call__, but only
__call__ : Callable[..., Any] = _call_impl
Would that be the way?
| Here's a simple demonstration of how the forward function mechanism works in PyTorch.
Check this:
from typing import Callable, Any
class parent:
def _unimplemented_forward(self, *input):
raise NotImplementedError
def _call_impl(self, *args):
# original nn.Module _call_impl function contains lot more code
# to handle exceptions, to handle hooks and for other purposes
self.forward(*args)
forward : Callable[..., Any] = _unimplemented_forward
__call__ : Callable[..., Any] = _call_impl
class child_1(parent):
def forward(self, *args):
print('forward function')
class child_2(parent):
pass
Runtime:
>>> c1 = child_1()
>>> c1()
forward function
>>> c2 = child_2()
>>> c2()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".\callable.py", line 8, in _call_impl
self.forward(*args)
File ".\callable.py", line 5, in _unimplemented_forward
raise NotImplementedError
NotImplementedError
| https://stackoverflow.com/questions/63480624/ |
Export pytorch model parameters into separate files according to layer hierarchy | Is it possible to export the trained parameters of a Pytorch model into separate binary files (float32/64, not text) under a folder hierarchy reflecting the layers defined by the model's architecture?
I wish to examine a sizeable pretrained model without the framework overhead and also split the checkpoint into manageable chunks.
| There is no direct way to do this, but it should take only a few lines of code. For example, consider I have a model of the following structure:
import os
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
def __init__(self, C_in, C_out, kernel, pool):
super().__init__()
self.conv = nn.Conv2d(C_in, C_out, kernel)
self.relu = nn.ReLU(inplace = True)
self.pool = nn.MaxPool2d(2,2) if pool else nn.Identity()
def forward(self, input):
out = self.conv(input)
out = self.relu(out)
out = self.pool(out)
return out
class LeNet5(nn.Module):
def __init__(self):
super().__init__()
self.block1 = ConvBlock(1, 6, 5, pool = True)
self.block2 = ConvBlock(6, 16, 5, pool = True)
self.block3 = ConvBlock(16, 120, 5, pool = False)
self.fc = nn.Sequential(
nn.Linear(120, 84),
nn.ReLU(inplace = True),
nn.Linear(84, 10)
)
def forward(self, input):
out = self.block1(input)
out = self.block2(out)
out = self.block3(out)
out = out.view(-1,120)
out = self.fc(out)
return out
To binarize individual parameters, all you have to do is iterate through them.
net = LeNet5()
basedir = 'lenet_params'
for name, param in net.named_parameters():
name = name.split('.')
out_dir, filename = os.path.join(basedir, *name[:-1]), name[-1]+'.pth'
out_path = os.path.join(out_dir, filename)
os.makedirs(out_dir, exist_ok=True)
torch.save(param, out_path)
This will produce the directory structure below:
lenet_params
|---block1
| |---conv
| | |---weight.pth
| | |---bias.pth
|---block2
| |---conv
| | |---weight.pth
| | |---bias.pth
|---block3
| |---conv
| | |---weight.pth
| | |---bias.pth
|---fc
| |---0
| | |---weight.pth
| | |---bias.pth
| |---2
| | |---weight.pth
| | |---bias.pth
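To load one exported parameter back (a usage sketch):
w = torch.load(os.path.join('lenet_params', 'block1', 'conv', 'weight.pth'))
print(w.shape)  # torch.Size([6, 1, 5, 5]) for LeNet5's first conv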
| https://stackoverflow.com/questions/63490419/ |
Index tensor must have the same number of dimensions as input tensor error encountered when using torch.gather() | I'm very new to PyTorch, and I have encountered the "Index tensor must have the same number of dimensions as input tensor" error when running my neural network. It happens when I call an instance of torch.gather(). Could someone help me understand torch.gather() and explain the cause of this error?
Here is the code where the error occurs:
def learn(batch, optim, net, target_net, gamma, global_step, target_update):
my_loss = []
optim.zero_grad()
state, action, next_state, reward, done, next_action = batch
qval = net(state.float())
loss_a = torch.gather(qval, 3, action.view(-1,1,1,1)).squeeze() #Error happens here!
loss_b = reward + gamma * torch.max(target_net(next_state.float()).cuda(), dim=3).values * (1 - done.int())
loss_val = torch.sum(( torch.abs(loss_a-loss_b) ))
loss_val /= 128
my_loss.append(loss_val.item())
loss_val.backward()
optim.step()
if global_step % target_update == 0:
target_network.load_state_dict(q_network.state_dict())
In case it is helpful, here is the batch function that creates the batch that the action comes from:
def sample_batch(memory,batch_size):
indices = np.random.randint(0,len(memory), (batch_size,))
state = torch.stack([memory[i][0] for i in indices])
action = torch.tensor([memory[i][1] for i in indices], dtype = torch.long)
next_state = torch.stack([memory[i][2] for i in indices])
reward = torch.tensor([memory[i][3] for i in indices], dtype = torch.float)
done = torch.tensor([memory[i][4] for i in indices], dtype = torch.float)
next_action = torch.tensor([memory[i][5] for i in indices], dtype = torch.long)
return state,action,next_state,reward,done,next_action
When I print out the different shapes of 'qvals', 'action', and 'action.view(-1,1,1,1)' this is the output:
qval torch.Size([10, 225])
act view torch.Size([10, 1, 1, 1])
action shape torch.Size([10])
Any explanation as to what is causing this error is appreciated! I want to understand more what is going on in the code as well as how to fix the problem. Thanks!
| Torch.gather is described here. If we take your code, this line
torch.gather(qval, 3, action.view(-1,1,1,1))
is equivalent to
act_view = action.view(10,1,1,1)
out = torch.zeros_like(act_view)
for i in range(10):
for j in range(1):
for k in range(1):
for p in range(1):
out[i,j,k,p] = qval[i,j,k, act_view[i,j,k,p]]
return out
which obviously makes very little sense. In particular, qval is not 4-D and thus cannot be indexed like this. The number of for loops is determined by the shape of your input tensors, and they should all have the same number of dimensions for this to work (this is what your error tells you by the way). Here, qval is 2D and act_view is 4D.
I'm not sure what you wanted to do with this, but if you can explain your goal and remove all the useless stuff in your example (mostly the training and backprop related code) to get a minimal reproducible example, I could help you further in finding the correct way to do it :)
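That said, if the goal is to pick, for each sample, the Q-value of the action that was taken (a common DQN pattern, and my guess at the intent here), a 2-D gather would look like this:
# qval: (batch, n_actions), action: (batch,) of long indices
loss_a = torch.gather(qval, 1, action.view(-1, 1)).squeeze(1)  # shape (batch,)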
| https://stackoverflow.com/questions/63493193/ |
GPU support for TensorFlow & PyTorch | Okay, so I've worked on a bunch of Deep Learning projects and internships now and I've never had to do heavy training. But lately I've been thinking of doing some Transfer Learning for which I'll need to run my code on a GPU. Now I have a system with Windows 10 and a dedicated NVIDIA GeForce 940M GPU. I've been doing a lot of research online, but I'm still a bit confused. I haven't installed the NVIDIA Cuda Toolkit or cuDNN or tensorflow-gpu on my system yet. I currently use tensorflow and pytorch to train my DL models. Here are my queries -
When I define a tensor in tf or pytorch, it is a cpu tensor by default. So, all the training I've been doing so far has been on the CPU. So, if I make sure to install the correct versions of Cuda and cuDNN and tensorflow-gpu (specifically for tensorflow), I can run my models on my GPU using tf-gpu and pytorch and that's it? (I'm aware of the torch.cuda.is_available() in pytorch to ensure pytorch can access my GPU and the device_lib module in tf to check if my gpu is visible to tensorflow)(I'm also aware of the fact that tf doesnt support all Nvidia GPUs)
Why does tf have a separate module for GPU support? PyTorch doesnt seem to have that and all you need to do is cast your tensor from cpu() to cuda() to switch between them.
Why install cuDNN? I know it is a high-level API CUDA built for support to train Deep Neural Nets on the GPU. But do tf-gpu and torch use these in the backend while training on the gpu?
After tf == 1.15, did they combine CPU and GPU support all into one package?
| First of all, unfortunately the 940M is a rather weak GPU for training. I suggest you use Google Colab for faster training, though of course even the 940M would be faster than the CPU. So here are my answers to your four questions.
1-) Yes, if you install the requirements correctly, then you can run on GPU. You can manually place your data on your GPU as well. You can check implementations in TensorFlow. In PyTorch, you should specify the device that you want to use: as you said, do device = torch.device("cuda" if args.cuda else "cpu"), then for models and data always call .to(device). It will then automatically use the GPU if available (see the sketch after this list).
2-) PyTorch also needs an extra installation (module) for GPU support. However, with recent updates both TF and PyTorch are easy to set up for GPU-compatible code.
3-) Both TensorFlow and PyTorch are based on cuDNN. You can use them without cuDNN, but as far as I know it hurts performance, though I'm not sure about this topic.
4-) No, they are still different packages: tensorflow-gpu==1.15 and tensorflow==1.15. What they did with TF2 was make TensorFlow more like Keras, so it is more simplified than 1.15 and before.
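A minimal sketch for point 1 on the PyTorch side (model and loader are placeholders):
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)  # move the model once
for inputs, labels in loader:  # move each batch inside the loop
    inputs, labels = inputs.to(device), labels.to(device)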
| https://stackoverflow.com/questions/63499994/ |
Getting the weight of a Layer | I've been working on the MNIST data set using PyTorch and I am having trouble accessing the weights and biases that are generated in my code.
This is my code
from torch import nn
import torch.nn.functional as F
class Neural(nn.Module):
def __init__(self):
super().__init__()
self.hidden1 = nn.Linear(784,128)
self.hidden2 = nn.Linear(128,64)
self.output = nn.Linear(64,10)
def forward(self,x):
x=F.relu(self.hidden1(x))
x=F.relu(self.hidden2(x))
x=F.softmax(self.output(x))
return x
model= Neural()
and when I try to access the weights using
print(model.fc1.weight)
print(model.fc1.bias)
this is the error i get
AttributeError Traceback (most recent call last)
<ipython-input-58-e92de631c798> in <module>()
----> 1 model.fc1.weight
2 print(model.fc1.bias)
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py in __getattr__(self, name)
530 return modules[name]
531 raise AttributeError("'{}' object has no attribute '{}'".format(
--> 532 type(self).__name__, name))
533
534 def __setattr__(self, name, value):
AttributeError: 'Neural' object has no attribute 'fc1'
| You should access a layer's weights via the attribute name you gave it (hidden1, not fc1), so it will be
print (model.hidden1.weight, model.hidden1.bias)
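If you don't know the attribute names, a sketch that lists every parameter:
for name, param in model.named_parameters():
    print(name, tuple(param.shape))
# hidden1.weight (128, 784)
# hidden1.bias (128,)
# ...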
| https://stackoverflow.com/questions/63510021/ |
The result is not fixed after setting random seed in pytorch | def setup_seed(seed):
np.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed) # cpu
torch.cuda.manual_seed_all(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = True
I set the random seed when running the code, but I cannot get a fixed result with PyTorch. Besides, I use batchnorm in my code, and when evaluating and testing I have set model.eval(). I cannot figure out the reason for this.
| I think the line torch.backends.cudnn.benchmark = True is causing the problem. It enables the cudnn auto-tuner to find the best algorithm to use. For example, convolution can be implemented using one of these algorithms:
CUDNN_CONVOLUTION_FWD_ALGO_GEMM,
CUDNN_CONVOLUTION_FWD_ALGO_FFT,
CUDNN_CONVOLUTION_FWD_ALGO_FFT_TILING,
CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_GEMM,
CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_PRECOMP_GEMM,
CUDNN_CONVOLUTION_FWD_ALGO_DIRECT,
CUDNN_CONVOLUTION_FWD_ALGO_WINOGRAD,
CUDNN_CONVOLUTION_FWD_ALGO_WINOGRAD_NONFUSED,
Several of these algorithms come without reproducibility guarantees.
So use torch.backends.cudnn.benchmark = False for deterministic outputs (this may slow execution).
Also, there are some PyTorch functions which cannot be made deterministic; refer to this doc.
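For reference, a sketch of the question's function with the fix applied:
def setup_seed(seed):
    np.random.seed(seed)
    random.seed(seed)
    torch.manual_seed(seed)            # cpu
    torch.cuda.manual_seed_all(seed)   # gpu
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False  # was True; disable the auto-tuner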
| https://stackoverflow.com/questions/63515991/ |
torch transform.resize() vs cv2.resize() | The CNN model takes an image tensor of size (112x112) as input and gives (1x512) size tensor as output.
Using Opencv function cv2.resize() or using Transform.resize in pytorch to resize the input to (112x112) gives different outputs.
What's the reason for this? (I understand that the difference in the underlying implementation of opencv resizing vs torch resizing might be a cause for this, But I'd like to have a detailed understanding of it)
import cv2
import numpy as np
from PIL import Image
import torch
import torchvision
from torchvision import transforms as trans
# device for pytorch
device = torch.device('cuda:0')
torch.set_default_tensor_type('torch.cuda.FloatTensor')
model = torch.jit.load("traced_facelearner_model_new.pt")
model.eval()
# read the example image used for tracing
image=cv2.imread("videos/example.jpg")
test_transform = trans.Compose([
trans.ToTensor(),
trans.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])
test_transform2 = trans.Compose([
trans.Resize([int(112), int(112)]),
trans.ToTensor(),
trans.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])
resized_image = cv2.resize(image, (112, 112))
tensor1 = test_transform(resized_image).to(device).unsqueeze(0)
tensor2 = test_transform2(Image.fromarray(image)).to(device).unsqueeze(0)
output1 = model(tensor1)
output2 = model(tensor2)
The output1 and output2 tensors have different values.
| Basically torchvision.transforms.Resize() uses PIL.Image.BILINEAR interpolation by default.
While in your code you simply use cv2.resize which doesn't use any interpolation.
For example
import cv2
from PIL import Image
import numpy as np
a = cv2.imread('videos/example.jpg')
b = cv2.resize(a, (112, 112))
c = np.array(Image.fromarray(a).resize((112, 112), Image.BILINEAR))
You will see that b and c are slightly different.
Edit:
Actually the opencv docs says
INTER_LINEAR - a bilinear interpolation (used by default)
But yeah, it doesn't give the same result as PIL.
Edit 2:
This also in the docs
To shrink an image, it will generally look best with INTER_AREA interpolation
And apparently
d = cv2.resize(a, (112, 112), interpolation=cv2.INTER_AREA)
Gives almost the same result as c. But these don't answer the question unfortunately.
| https://stackoverflow.com/questions/63519965/ |
Is this a right way to decrease the size of my docker images? | I am running a deep learning model in a Docker container which needs PyTorch and the Azure ML service.
The AML requirement is Ubuntu 18.04 (which by default has only python3.6, and the only way to install python3.7+ is from source, from what I was able to find).
transformers in PyTorch has a requirement of python 3.7+,
and I need PyTorch with CUDA, so I chose anibali/pytorch https://github.com/anibali/docker-pytorch/blob/master/dockerfiles/1.5.0-cuda10.2-ubuntu18.04/Dockerfile
The issue is that the size of the image is around 4GB+, so I wanted to move the PyTorch installation out of the image and run the install when Docker runs the container (which increases container startup time).
Error when running basic torch commands
File "test.py", line 1, in <module>
import torch
File "/home/user/miniconda/lib/python3.8/site-packages/torch/__init__.py", line 135, in <module>
_load_global_deps()
File "/home/user/miniconda/lib/python3.8/site-packages/torch/__init__.py", line 93, in _load_global_deps
ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
File "/home/user/miniconda/lib/python3.8/ctypes/__init__.py", line 373, in __init__
self._handle = _dlopen(self._name, mode)
OSError: /home/user/miniconda/lib/python3.8/site-packages/torch/lib/../../../../libnvToolsExt.so.1: invalid ELF header
so my current docker file is :
FROM nvidia/cuda:10.2-base-ubuntu18.04
RUN apt-get update && apt-get install -y \
curl \
ca-certificates \
sudo \
git \
bzip2 \
libx11-6 \
&& rm -rf /var/lib/apt/lists/*
# Create a working directory
# COPY . /app
# WORKDIR /app/
RUN mkdir /app
COPY . /app
WORKDIR /app
# Create a non-root user and switch to it
RUN adduser --disabled-password --gecos '' --shell /bin/bash user \
&& chown -R user:user /app
RUN echo "user ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/90-user
USER user
# All users can use /home/user as their home directory
ENV HOME=/home/user
RUN chmod 777 /home/user
# Install Miniconda and Python 3.8
ENV CONDA_AUTO_UPDATE_CONDA=false
ENV PATH=/home/user/miniconda/bin:$PATH
RUN curl -sLo ~/miniconda.sh https://repo.continuum.io/miniconda/Miniconda3-py38_4.8.2-Linux-x86_64.sh \
&& chmod +x ~/miniconda.sh \
&& ~/miniconda.sh -b -p ~/miniconda \
&& rm ~/miniconda.sh \
&& conda install -y python==3.8.1 \
&& conda clean -ya
CMD ["sh", "-c", "conda install -y -c pytorch cudatoolkit=10.2 \"pytorch=1.5.0=py3.8_cuda10.2.89_cudnn7.6.5_0\" \"torchvision=0.6.0=py38_cu102\" && conda clean -ya && python test.py"]
| Installing runtime dependencies via CMD is not a typical way to reduce Docker image size. As the OP noted, this imposes a cost in container startup.
There are a few changes that can be made to the Dockerfile to reduce image size.
Use a base image that has miniconda installed but not cuda/cudnn. The OP installs pytorch, cuda, and cudnn via conda, so then there will be duplicate installations of cudnn and cuda in the image (one from the nvidia/cuda base image and the other from conda). See the Azure ML documentation for a list of publicly available base images provided by Azure ML.
Use --no-install-recommends in the apt-get install command. This will prevent apt-get from installing recommended dependencies, which are not required for the packages' use.
Use the --chown option in COPY to change ownership of the copied files during the COPY. Using RUN chown ... after a COPY will cause the size of /app to count twice towards the Docker image's total size.
Here is a Dockerfile that implements my suggestions. This creates a Docker image that is 3.43 GB.
FROM continuumio/miniconda3
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
curl \
ca-certificates \
sudo \
git \
bzip2 \
libx11-6 \
&& rm -rf /var/lib/apt/lists/*
# Create a non-root user
RUN adduser --disabled-password --gecos '' --shell /bin/bash user \
&& echo "user ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/90-user
# All users can use /home/user as their home directory
ENV HOME=/home/user
RUN chmod 777 /home/user
# Install pytorch.
ENV CONDA_AUTO_UPDATE_CONDA="false"
RUN conda install --yes python=3.8 \
&& conda install --yes --channel pytorch \
cudatoolkit=10.2 \
pytorch=1.5.0=py3.8_cuda10.2.89_cudnn7.6.5_0 \
torchvision=0.6.0=py38_cu102 \
&& conda clean --all --yes
USER user
WORKDIR /app
COPY --chown=user:user . .
CMD ["python", "test.py"]
| https://stackoverflow.com/questions/63521958/ |
Understanding Memory Usage by PyTorch DataLoader Workers | When running a PyTorch training program with num_workers=32 for DataLoader, htop shows 33 python process each with 32 GB of VIRT and 15 GB of RES.
Does this mean that the PyTorch training is using 33 processes X 15 GB = 495 GB of memory? htop shows only about 50 GB of RAM and 20 GB of swap is being used on the entire machine with 128 GB of RAM. So, how do we explain the discrepancy?
Is there a more accurate way of calculating the total amount of RAM being used by the main PyTorch program and all its child DataLoader worker processes?
Thank you
|
Does this mean that the PyTorch training is using 33 processes X 15 GB = 495 GB of memory?
Not necessarily. You have a main process with several worker subprocesses, and the CPU has several cores. One worker usually loads one batch; the next batch can already be loaded and ready to go by the time the main process is ready for another batch. This is the secret of the speed-up.
I guess you should use far fewer num_workers.
It would be interesting to know your batch size too, which you can adapt for the training process as well.
Is there a more accurate way of calculating the total amount of RAM being used by the main PyTorch program and all its child DataLoader worker processes?
I was googling but could not find a concrete formula. I think it can only be estimated roughly from how many cores your CPU has, how much memory you have, and your batch size.
The choice of num_workers depends on what kind of computer you are using, what kind of dataset you are working with, and how much on-the-fly pre-processing your data requires.
HTH
| https://stackoverflow.com/questions/63522955/ |
How to get the file name of image that I put into Dataloader in Pytorch | I use pytorch to load images like this:
inf_data = InfDataloader(img_folder=args.imgs_folder, target_size=args.img_size)
inf_dataloader = DataLoader(inf_data, batch_size=1, shuffle=True, num_workers=2)
And then:
with torch.no_grad():
for batch_idx, (img_np, img_tor) in enumerate(inf_dataloader, start=1):
img_tor = img_tor.to(device)
pred_masks, _ = model(img_tor)
But I want to get the file name of the image. Can anyone help me with this?
Thanks a lot!
| The DataLoader itself basically cannot give you the name of the file. But in the Dataset, which is the InfDataloader in the question mentioned above, you can return the file name alongside the tensor.
class InfDataloader(Dataset):
"""
Dataloader for Inference.
"""
def __init__(self, img_folder, target_size=256):
self.imgs_folder = img_folder
self.img_paths = []
img_path = self.imgs_folder + '/'
img_list = os.listdir(img_path)
img_list.sort()
img_list.sort(key=lambda x: int(x[:-4]))  # sort file names numerically
img_nums = len(img_list)
for i in range(img_nums):
img_name = img_path + img_list[i]
self.img_paths.append(img_name)
# self.img_paths = sorted(glob.glob(self.imgs_folder + '/*'))
print(self.img_paths)
self.target_size = target_size
self.normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
def __getitem__(self, idx):
"""
__getitem__ for inference
:param idx: Index of the image
:return: img_np is a numpy RGB-image of shape H x W x C with pixel values in range 0-255.
And img_tor is a torch tensor, RGB, C x H x W in shape and normalized.
"""
img = cv2.imread(self.img_paths[idx])
name = self.img_paths[idx]
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# Pad images to target size
img_np = pad_resize_image(img, None, self.target_size)
img_tor = img_np.astype(np.float32)
img_tor = img_tor / 255.0
img_tor = np.transpose(img_tor, axes=(2, 0, 1))
img_tor = torch.from_numpy(img_tor).float()
img_tor = self.normalize(img_tor)
return img_np, img_tor, name
Here I add the line
name = self.img_paths[idx]
and return it.
So,
with torch.no_grad():
for batch_idx, (img_np, img_tor, name) in enumerate(inf_dataloader, start=1):
img_tor = img_tor.to(device)
pred_masks, _ = model(img_tor)
I could get the name.
| https://stackoverflow.com/questions/63529916/ |
Pytorch: Lower the parameters in U-net model | Can anyone give me some tips on how I could lower the number of parameters in the following U-net implementation? I'm having trouble with over-fitting on my training data and I would like to lower the parameter count in order to see if it improves the validation accuracy.
Layers:
First2D
layers = [
nn.Conv2d(in_channels, middle_channels, kernel_size=3, padding=1),
nn.BatchNorm2d(middle_channels),
nn.ReLU(inplace=True),
nn.Conv2d(middle_channels, out_channels, kernel_size=3, padding=1),
nn.BatchNorm2d(out_channels),
nn.ReLU(inplace=True)
]
Encoder2D
layers = [
nn.MaxPool2d(kernel_size=downsample_kernel),
nn.Conv2d(in_channels, middle_channels, kernel_size=3, padding=1),
nn.BatchNorm2d(middle_channels),
nn.ReLU(inplace=True),
nn.Conv2d(middle_channels, out_channels, kernel_size=3, padding=1),
nn.BatchNorm2d(out_channels),
nn.ReLU(inplace=True)
]
Center2D
layers = [
nn.MaxPool2d(kernel_size=2),
nn.Conv2d(in_channels, middle_channels, kernel_size=3, padding=1),
nn.BatchNorm2d(middle_channels),
nn.ReLU(inplace=True),
nn.Conv2d(middle_channels, out_channels, kernel_size=3, padding=1),
nn.BatchNorm2d(out_channels),
nn.ReLU(inplace=True),
nn.ConvTranspose2d(out_channels, deconv_channels, kernel_size=2, stride=2)
]
Decoder2D
layers = [
nn.Conv2d(in_channels, middle_channels, kernel_size=3, padding=1),
nn.BatchNorm2d(middle_channels),
nn.ReLU(inplace=True),
nn.Conv2d(middle_channels, out_channels, kernel_size=3, padding=1),
nn.BatchNorm2d(out_channels),
nn.ReLU(inplace=True),
nn.ConvTranspose2d(out_channels, deconv_channels, kernel_size=2, stride=2)
]
Last2D
layers = [
nn.Conv2d(in_channels, middle_channels, kernel_size=3, padding=1),
nn.BatchNorm2d(middle_channels),
nn.ReLU(inplace=True),
nn.Conv2d(middle_channels, middle_channels, kernel_size=3, padding=1),
nn.BatchNorm2d(middle_channels),
nn.ReLU(inplace=True),
nn.Conv2d(middle_channels, out_channels, kernel_size=1),
nn.Softmax(dim=1)
]
| One way to decrease the number of parameters is to decrease the number of channels in the convolution. You wouldn't be able to change the number of model input and output channels, because they depend on the data, but you can change the number of intermediate channels.
Remember that the output of one layer is the input to the next layer, so keep the number of output channels in the first layer the same as the number of input channels in the second layer, for every pair of layers. An example would be
layers = [
nn.Conv2d(in_channels, middle_channels//2, kernel_size=3, padding=1),
nn.BatchNorm2d(middle_channels//2),
nn.ReLU(inplace=True),
nn.Conv2d(middle_channels//2, out_channels, kernel_size=3, padding=1),
nn.BatchNorm2d(out_channels),
nn.ReLU(inplace=True)
]
Now, coming to the original question of overfitting: you might want to try other things before reducing the model size, such as data augmentation and dropout (see the sketch below).
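A sketch of adding dropout to one of the question's blocks (the placement and rate are tunable choices, not part of the original code):
layers = [
    nn.Conv2d(in_channels, middle_channels, kernel_size=3, padding=1),
    nn.BatchNorm2d(middle_channels),
    nn.ReLU(inplace=True),
    nn.Dropout2d(p=0.2),  # spatial dropout between the two convolutions
    nn.Conv2d(middle_channels, out_channels, kernel_size=3, padding=1),
    nn.BatchNorm2d(out_channels),
    nn.ReLU(inplace=True)
]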
| https://stackoverflow.com/questions/63531538/ |
Unexpected Standard exception from MEX file (pytorch model forward) | When I call a MEX API from Matlab, I get an unexpected standard exception.
I exported 2 pytorch DNN models to 'A.pt' and 'B.pt' files.
And I implemented c++ functions that load models from the '.pt 'files and run models (forward).
The c++ implementation works fine, I can get proper results from the models.
I built the load & run-forward functions into a '.dll' library,
and I implemented a MEX API function that can call them.
When I call the MEX API in the Matlab environment,
the 2 models are loaded normally, and the first model runs forward properly.
However, when running forward on the 2nd model, I get the following exception.
Unexpected Standard exception from MEX file
What():The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: error in LoadLibraryA
I have no clue why the C++ implementation works fine but the exception occurs when calling it through the MEX API from Matlab.
Because the load & run-forward functions are unchanged, I expected exactly the same results.
It is more difficult to debug because there is no call-stack print.
Is there any way to get a call stack?
Please give me any advice.
Thanks in advance.
-environment-------------------------------
c++ compiler : visual studio 2017 community
matlab : R2020a
libtorch : 1.6
pytorch : 1.5
python : 3.6
cuda : 10.2
| From Mr. Cris Luengo's comments, I solved this problem by copying all libtorch DLLs into Matlab's own bin folder. There are several duplicated files, but I overwrote them. I'm not sure whether it is safe or not, so backing up the previous DLLs may be a good choice. Thank you, Mr. Cris Luengo.
| https://stackoverflow.com/questions/63533029/ |
KeyError when enumerating over dataloader - why? | I am writing a binary classification model that consists of audio files of 40 participants and classifies them according to whether they have a speech disorder or not. The audio files have been divided into 5-second segments, and to avoid subject bias I have split the training/testing/validation sets such that a subject only appears in one set (i.e. participant ID02 does not appear in both the training and testing sets). The following error appears when I attempt to enumerate over the DataLoader valid_loader in the code below, and I'm not entirely sure why this error is occurring. Does anyone have any advice?
KeyError Traceback (most recent call last)
<ipython-input-69-55be99283cf7> in <module>()
----> 1 for i, data in enumerate(valid_loader, 0):
2 images, labels = data
3 print("Batch", i, "size:", len(images))
3 frames
/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in __next__(self)
361
362 def __next__(self):
--> 363 data = self._next_data()
364 self._num_yielded += 1
365 if self._dataset_kind == _DatasetKind.Iterable and \
/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in _next_data(self)
987 else:
988 del self._task_info[idx]
--> 989 return self._process_data(data)
990
991 def _try_put_index(self):
/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in _process_data(self, data)
1012 self._try_put_index()
1013 if isinstance(data, ExceptionWrapper):
-> 1014 data.reraise()
1015 return data
1016
/usr/local/lib/python3.6/dist-packages/torch/_utils.py in reraise(self)
393 # (https://bugs.python.org/issue2651), so we work around it.
394 msg = KeyErrorMessage(msg)
--> 395 raise self.exc_type(msg)
KeyError: Caught KeyError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/worker.py", line 185, in _worker_loop
data = fetcher.fetch(index)
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "<ipython-input-44-245be0a1e978>", line 19, in __getitem__
x = Image.open(self.df['path'][index])
File "/usr/local/lib/python3.6/dist-packages/pandas/core/series.py", line 871, in __getitem__
result = self.index.get_value(self, key)
File "/usr/local/lib/python3.6/dist-packages/pandas/core/indexes/base.py", line 4405, in get_value
return self._engine.get_value(s, k, tz=getattr(series.dtype, "tz", None))
File "pandas/_libs/index.pyx", line 80, in pandas._libs.index.IndexEngine.get_value
File "pandas/_libs/index.pyx", line 90, in pandas._libs.index.IndexEngine.get_value
File "pandas/_libs/index.pyx", line 138, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/hashtable_class_helper.pxi", line 998, in pandas._libs.hashtable.Int64HashTable.get_item
File "pandas/_libs/hashtable_class_helper.pxi", line 1005, in pandas._libs.hashtable.Int64HashTable.get_item
KeyError: 36
Can anyone advise why this is happening?
from google.colab import drive
drive.mount('/content/drive')
import torch
import torchvision
import torch.optim as optim
import torch.nn as nn
import torchvision.transforms as transforms
from torchvision import utils
from torch.utils.data import Dataset
from sklearn.metrics import confusion_matrix
from skimage import io, transform, data
from skimage.color import rgb2gray
import matplotlib.pyplot as plt
from tqdm import tqdm
from PIL import Image
import pandas as pd
import numpy as np
import csv
import os
import math
import cv2
root_dir = "/content/drive/My Drive/Read_Text/5_Second_Segments/"
class_names = [
"Parkinsons_Disease",
"Healthy_Control"
]
def get_meta(root_dir, dirs):
""" Fetches the meta data for all the images and assigns labels.
"""
paths, classes = [], []
for i, dir_ in enumerate(dirs):
for entry in os.scandir(root_dir + dir_):
if (entry.is_file()):
paths.append(entry.path)
classes.append(i)
return paths, classes
paths, classes = get_meta(root_dir, class_names)
data = {
'path': paths,
'class': classes
}
data_df = pd.DataFrame(data, columns=['path', 'class'])
data_df = data_df.sample(frac=1).reset_index(drop=True) # Shuffles the data
from pandas import option_context
print("Found", len(data_df), "images.")
with option_context('display.max_colwidth', 400):
display(data_df.head(100))
class Audio(Dataset):
def __init__(self, df, transform=None):
"""
Args:
image_dir (string): Directory with all the images
df (DataFrame object): Dataframe containing the images, paths and classes
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.df = df
self.transform = transform
def __len__(self):
return len(self.df)
def __getitem__(self, index):
# Load image from path and get label
x = Image.open(self.df['path'][index])
try:
x = x.convert('RGB') # To deal with some grayscale images in the data
except:
pass
y = torch.tensor(int(self.df['class'][index]))
if self.transform:
x = self.transform(x)
return x, y
def compute_img_mean_std(image_paths):
"""
Author: @xinruizhuang. Computing the mean and std of three channel on the whole dataset,
first we should normalize the image from 0-255 to 0-1
"""
img_h, img_w = 224, 224
imgs = []
means, stdevs = [], []
for i in tqdm(range(len(image_paths))):
img = cv2.imread(image_paths[i])
img = cv2.resize(img, (img_h, img_w))
imgs.append(img)
imgs = np.stack(imgs, axis=3)
print(imgs.shape)
imgs = imgs.astype(np.float32) / 255.
for i in range(3):
pixels = imgs[:, :, i, :].ravel() # resize to one row
means.append(np.mean(pixels))
stdevs.append(np.std(pixels))
means.reverse() # BGR --> RGB
stdevs.reverse()
print("normMean = {}".format(means))
print("normStd = {}".format(stdevs))
return means, stdevs
norm_mean, norm_std = compute_img_mean_std(paths)
data_transform = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(256),
transforms.ToTensor(),
transforms.Normalize(norm_mean, norm_std),
])
unique_users = data_df['path'].str[-20:-16].unique()
train_users, test_users = np.split(np.random.permutation(unique_users), [int(0.8*len(unique_users))])
df_train = data_df[data_df['path'].str[-20:-16].isin(train_users)]
test_data_df = data_df[data_df['path'].str[-20:-16].isin(test_users)]
train_unique_users = df_train['path'].str[-20:-16].unique()
train_users, validate_users = np.split(np.random.permutation(train_unique_users), [int(0.875*len(train_unique_users))])
train_data_df = df_train[df_train['path'].str[-20:-16].isin(train_users)]
valid_data_df = df_train[df_train['path'].str[-20:-16].isin(validate_users)]
ins_dataset_train = Audio(
df=train_data_df,
transform=data_transform,
)
ins_dataset_valid = Audio(
df=valid_data_df,
transform=data_transform,
)
ins_dataset_test = Audio(
df=test_data_df,
transform=data_transform,
)
train_loader = torch.utils.data.DataLoader(
ins_dataset_train,
batch_size=8,
shuffle=True,
num_workers=2
)
test_loader = torch.utils.data.DataLoader(
ins_dataset_test,
batch_size=16,
shuffle=True,
num_workers=2
)
valid_loader = torch.utils.data.DataLoader(
ins_dataset_valid,
batch_size=16,
shuffle=True,
num_workers=2
)
# (This is where the error is occurring.)
for i, data in enumerate(valid_loader, 0):
images, labels = data
print("Batch", i, "size:", len(images))
| As @Abhik-Banerjee commented nicely, resetting the index of the dataframes before using them in the data loader did the trick for me:
train, val = train.reset_index(drop=True), val.reset_index(drop=True)
This works because the boolean-mask splits keep the original dataframe indices, while the Dataset's __getitem__ receives positional indices 0..len-1; any index missing from the subset then raises a KeyError (like the KeyError: 36 above).
See https://discuss.pytorch.org/t/keyerror-when-enumerating-over-dataloader/54210/20 for a very helpful discussion and https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reset_index.html for more insights on the parameters of the function.
| https://stackoverflow.com/questions/63545434/ |
rllib - obtain TensorFlow or PyTorch model output from checkpoint | I'd like to use the rllib trained policy model in a different code where I need to track which action is generated for specific input states. Using a standard TensorFlow or PyTorch (preferred) network model would provide that flexibility but I can't find clear documentation on how to produce a usable dat or H5 file from a trained rllib agent that I can then load into a torch or tf/Keras model.
| The easiest way to get the weights from a checkpoint is to load it again with rllib and then save it with the Tensorflow/Pytorch commands.
If you have a keras TF model you can simply call:
model.save('my_model.h5') # creates a HDF5 file
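The PyTorch counterpart (a sketch, assuming model is the restored torch policy model):
torch.save(model.state_dict(), "my_model.pt")    # weights only
model.load_state_dict(torch.load("my_model.pt")) # reload elsewhere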
| https://stackoverflow.com/questions/63548115/ |
Pytorch inference CUDA out of memory when multiprocessing | To fully utilize CPU/GPU I run several processes that do DNN inference (feed forward) on separate datasets. Since the processes allocate CUDA memory during the feed forward I'm getting a CUDA out of memory error. To mitigate this I added torch.cuda.empty_cache() call which made things better. However, there are still occasional out of memory errors. Probably due to bad allocation/release timing.
I managed to solve the problem by adding a multiprocessing.BoundedSemaphore around the feed forward call but this introduces difficulties in initializing and sharing the semaphore between the processes.
Is there a better way to avoid this kind of errors while running multiple GPU inference processes?
| From my experience of parallel training and inference, it is almost impossible to squeeze the last bit of the GPU memory. Probably the best you can do is to estimate the maximum number of processes that can run in parallel, then restrict your code to run up to that many processes at the same time. Using semaphore is the typical way to restrict the number of parallel processes and automatically start a new process when there is an open slot.
To make it easier to initialize and share semaphore between processes, you can use a multiprocessing.Pool and the pool initializer as follows.
import multiprocessing as mp

def pool_init(semaphore):
    # make the shared semaphore visible inside each worker process
    global pool_semaphore
    pool_semaphore = semaphore

semaphore = mp.BoundedSemaphore(n_process)
with mp.Pool(n_process, initializer=pool_init, initargs=(semaphore,)) as pool:
    # here, each worker can access the shared variable pool_semaphore
    ...
On the other hand, the greedy approach is to run with a try ... except block in a while loop and keep trying to use GPU. However, this may come with significant performance overhead, so maybe not a good idea.
| https://stackoverflow.com/questions/63549736/ |
How to compute the uncertainty of a Monte Carlo Dropout neural network with PyTorch? | I am trying to implement a Bayesian CNN using MC Dropout in PyTorch. The main idea is that by applying dropout at test time and running over many forward passes, you get predictions from a variety of different models. I need to obtain the uncertainty; does anyone have an idea of how I can do it, please?
This is how I defined my CNN
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
self.dropout = nn.Dropout(p=0.3)
nn.init.xavier_uniform_(self.conv1.weight)
nn.init.constant_(self.conv1.bias, 0.0)
nn.init.xavier_uniform_(self.conv2.weight)
nn.init.constant_(self.conv2.bias, 0.0)
nn.init.xavier_uniform_(self.fc1.weight)
nn.init.constant_(self.fc1.bias, 0.0)
nn.init.xavier_uniform_(self.fc2.weight)
nn.init.constant_(self.fc2.bias, 0.0)
nn.init.xavier_uniform_(self.fc3.weight)
nn.init.constant_(self.fc3.bias, 0.0)
def forward(self, x):
x = self.pool(F.relu(self.dropout(self.conv1(x)))) # recommended to add the relu
x = self.pool(F.relu(self.dropout(self.conv2(x)))) # recommended to add the relu
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(self.dropout(x)))
x = self.fc3(self.dropout(x)) # no activation function needed for the last layer
return x
model = Net().to(device)
train_accuracies=np.zeros(num_epochs)
test_accuracies=np.zeros(num_epochs)
dataiter = iter(trainloader)
images, labels = dataiter.next()
#initializing variables
loss_acc = []
class_acc_mcdo = []
start_train = True
#Defining the Loss Function and Optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
def train():
loss_vals = []
acc_vals = []
for epoch in range(num_epochs): # loop over the dataset multiple times
n_correct = 0 # initialize number of correct predictions
acc = 0 # initialize accuracy of each epoch
somme = 0 # initialize somme of losses of each epoch
epoch_loss = []
for i, (images, labels) in enumerate(trainloader):
# origin shape: [4, 3, 32, 32] = 4, 3, 1024
# input_layer: 3 input channels, 6 output channels, 5 kernel size
images = images.to(device)
labels = labels.to(device)
# Forward pass
outputs = model.train()(images)
loss = criterion(outputs, labels)
# Backward and optimize
optimizer.zero_grad() # zero the parameter gradients
loss.backward()
epoch_loss.append(loss.item()) # add the loss to epoch_loss list
optimizer.step()
# max returns (value ,index)
_, predicted = torch.max(outputs, 1)
n_correct += (predicted == labels).sum().item()
# print statistics
if (i + 1) % 2000 == 0:
print(f'Epoch [{epoch + 1}/{num_epochs}], Step [{i + 1}/{n_total_steps}], Loss: {loss.item():.4f}')
somme = (sum(epoch_loss)) / len(epoch_loss)
loss_vals.append(somme) # add the epoch's loss to loss_vals
print("Loss = {}".format(somme))
acc = 100 * n_correct / len(trainset)
acc_vals.append(acc) # add the epoch's Accuracy to acc_vals
print("Accuracy = {}".format(acc))
# SAVE
PATH = './cnn.pth'
torch.save(model.state_dict(), PATH)
loss_acc.append(loss_vals)
loss_acc.append(acc_vals)
return loss_acc
And here is the code of the mc dropout
def enable_dropout(model):
""" Function to enable the dropout layers during test-time """
for m in model.modules():
if m.__class__.__name__.startswith('Dropout'):
m.train()
def test():
# set non-dropout layers to eval mode
model.eval()
# set dropout layers to train mode
enable_dropout(model)
test_loss = 0
correct = 0
n_samples = 0
n_class_correct = [0 for i in range(10)]
n_class_samples = [0 for i in range(10)]
T = 100
for images, labels in testloader:
images = images.to(device)
labels = labels.to(device)
with torch.no_grad():
output_list = []
# getting outputs for T forward passes
for i in range(T):
output_list.append(torch.unsqueeze(model(images), 0))
# calculating mean
output_mean = torch.cat(output_list, 0).mean(0)
test_loss += F.nll_loss(F.log_softmax(output_mean, dim=1), labels,
reduction='sum').data # sum up batch loss
_, predicted = torch.max(output_mean, 1) # get the index of the max log-probability
correct += (predicted == labels).sum().item() # sum up correct predictions
n_samples += labels.size(0)
for i in range(batch_size):
label = labels[i]
predi = predicted[i]
if (label == predi):
n_class_correct[label] += 1
n_class_samples[label] += 1
test_loss /= len(testloader.dataset)
# PRINT TO HTML PAGE
print('\n Average loss: {:.4f}, Accuracy: ({:.3f}%)\n'.format(
test_loss,
100. * correct / n_samples))
# Accuracy for each class
acc_classes = []
for i in range(10):
acc = 100.0 * n_class_correct[i] / n_class_samples[i]
print(f'Accuracy of {classes[i]}: {acc} %')
acc_classes.append(acc)
class_acc_mcdo.extend(acc_classes)
print('Finished Testing')
| You can compute the statistics, such as the sample mean or the sample variance, of different stochastic forward passes at test time (i.e. with the test or validation data), when the dropout is enabled. These statistics can be used to represent uncertainty. For example, you can compute the entropy, which is a measure of uncertainty, from the sample mean.
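A sketch of turning the T stochastic passes into numbers, reusing the question's output_list (after torch.cat the shape is T x batch x classes; F is torch.nn.functional as in the question):
probs = F.softmax(torch.cat(output_list, 0), dim=2)  # per-pass class probabilities
mean_probs = probs.mean(dim=0)                       # predictive mean, batch x classes
var_probs = probs.var(dim=0)                         # per-class predictive variance
entropy = -(mean_probs * torch.log(mean_probs + 1e-12)).sum(dim=1)  # predictive entropy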
| https://stackoverflow.com/questions/63551362/ |
How do I limit using one CPU per python processes launched via gnu parallel? | If I run this script
$ seq 1 4 | taskset -c 0-3 parallel -j4 -u <my_bash_script.sh>
Then each python process contained in <my_bash_script.sh> runs on multiple CPUs instead of one. The python function uses both numpy and pytorch. So the option taskset -c 0-3 imposes the maximum set of CPUs, but it doesn't guarantee that each process will be limited to one CPU.
I've tried
$ export OPENBLAS_NUM_THREADS=1
$ export MKL_NUM_THREADS=1
but it didn't work
I've also added to the python script
import mkl
mkl.set_num_threads(1)
but it didn't help
| Use jobslot:
$ seq 1 4 | parallel -j4 -u taskset -c {%} <my_bash_script.sh>
Jobslot is built for this: Imagine you have a lot more than 4 jobs. If you then give every 4th job to cpu 4, then you risk that every 4th job is shorter than the others. In which case cpu 4 will be idling even if there are more jobs to be run.
Jobslot does not pass every 4th job to cpu 4. Instead it looks at which cpu (or rather jobslot) finished a job, and then starts a new job on that cpu.
(Also: Since you are using -u you should learn the difference between --group (default) and --linebuffer (which is often what you really want when using -u)).
| https://stackoverflow.com/questions/63551993/ |
How do I install **Pytorch** with conda? Is anaconda.org down temporarily? | I just installed Anaconda and now I'm trying to install pytorch via conda install pytorch torchvision cudatoolkit=10.2 -c pytorch. But I'm getting the error message
Collecting package metadata (current_repodata.json): failed
CondaHTTPError: HTTP 503 SERVICE UNAVAILABLE: BACK-END SERVER IS AT CAPACITY for url <https://conda.anaconda.org/pytorch/win-64/current_repodata.json>
Elapsed: 00:00.635763
CF-RAY: 5c7c36945d34c4a4-DUS
A remote server error occurred when trying to retrieve this URL.
A 500-type error (e.g. 500, 501, 502, 503, etc.) indicates the server failed to
fulfill a valid request. The problem may be spurious, and will resolve itself if you
try your request again. If the problem persists, consider notifying the maintainer
of the remote server.
How can I resolve this and install pytorch? Is anaconda.org just down temporarily?
| The answer to your problem is in your question itself. The last paragraph says that:
A 500-type error (e.g. 500, 501, 502, 503, etc.) indicates the server failed to
fulfill a valid request. The problem may be spurious, and will resolve itself if you
try your request again. If the problem persists, consider notifying the maintainer
of the remote server.
This means that you have to try again later. This generally happens when the official servers are down, and it has happened before as well. So you just have to wait it out.
| https://stackoverflow.com/questions/63559006/ |
Does the loss of a model reflect its accuracy? | So this is my loss every 75 epochs:
Epoch: 75, loss: 47382825795584.000000
Epoch: 150, loss: 47382825795584.000000
Epoch: 225, loss: 47382825795584.000000
Epoch: 300, loss: 47382825795584.000000
Epoch: 375, loss: 47382825795584.000000
Epoch: 450, loss: 47382825795584.000000
Epoch: 525, loss: 47382825795584.000000
Epoch: 600, loss: 47382825795584.000000
Epoch: 675, loss: 47382825795584.000000
Epoch: 750, loss: 47382825795584.000000
And these are the values from predictions and targets respectively
Predictions: tensor([[ 8109436.0000, 7734814.0000, 8737677.0000, 11230861.0000,
3795826.7500, 3125072.7500, 1699706.1250, 5337285.0000,
3474238.5000]], grad_fn=<TBackward>)
----------------------------------------
Targets: tensor([[ 8111607., 7580798., 8749436., 11183578., 3822811., 3148031.,
2343278., 5360924., 3536146.]])
And this is the accuracy of the first, and second elements inside predictions against the first, and second elements of targets
8109436.0000/8111607*100 #First element
Output: 99.9732358828528
print(7734814.0000/7580798*100) #Second element
Output: 102.03165946381898
So I'm really not sure what is going on. Because I have a large loss there is a 99% accuracy for the first element and 98% accuracy on the second element? I'm not the best at math, so I'm not sure about the last percentage.
Could someone explain if the loss reflects the accuracy?
| Loss is only meaningful relatively (i.e. for comparison). Multiply your loss function by 10 and your loss is 10 times bigger on the same model. This doesn't tell you anything.
But using the same loss function, if model_1 gives a loss 10x smaller than model_2, then chances are model_1 will have better accuracy (although not 100% guaranteed).
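To see this concretely, here is a minimal sketch using the first two values from the question, contrasting the absolute MSE loss with the relative error:

import torch
import torch.nn.functional as F

pred = torch.tensor([8109436.0, 7734814.0])
target = torch.tensor([8111607.0, 7580798.0])

mse = F.mse_loss(pred, target)                     # huge, because the targets are huge
rel_err = ((pred - target).abs() / target).mean()  # roughly 1% despite the huge loss
print(mse.item(), rel_err.item())

The absolute loss is dominated by the scale of the targets, which is why a gigantic loss value and a ~99% "accuracy" can coexist.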
| https://stackoverflow.com/questions/63559314/ |
AzureML SDK not working with PyTorch 1.5? | Has anyone got PyTorch 1.5 to work with the AzureML SDK (versions 1.11 and 1.12)? torch.cuda.is_available() returns False even on GPU-enabled machines. Exactly the same setup works fine (is_available() is True) with PyTorch 1.3, 1.4 and 1.6. Any pointers welcome. These are the (possibly) relevant parts of my Conda environment file, with the values of pytorch and azureml-sdk varied as required.
channels:
- defaults
- pytorch
dependencies:
- python=3.7.3
- pytorch=1.5.0
- pip:
- azureml-sdk==1.12.0
Thanks
| This is a known issue with PyTorch 1.5 and CUDA and is acknowledged by PyTorch in this GitHub issue.
They haven't provided an official solution to the issue, but they recommend either updating old GPU drivers or making sure you have a CUDA-enabled version of PyTorch installed. Since you're not experiencing this problem with other PyTorch versions on AzureML GPUs, GPU drivers don't seem to be the issue, so it's probably the PyTorch installation.
Try installing "torchvision==0.6.0" along with your pytorch=1.5.0. PyTorch's site encourages pairing 1.5.0 with torchvision 0.6.0: https://pytorch.org/get-started/previous-versions/
| https://stackoverflow.com/questions/63564551/ |
CNN + RNN architecture for video recognition | I am trying to replicate the ConvNet + LSTM approach presented in this paper using pytorch, but I am struggling to find the correct way to combine the CNN and the LSTM in my model. Here is my attempt:
class VideoRNN(nn.Module):
def __init__(self, hidden_size, n_classes):
super(VideoRNN, self).__init__()
self.hidden_size = hidden_size
vgg = models.vgg16(pretrained=True)
embed = nn.Sequential(*list(vgg.classifier.children())[:-1])
vgg.classifier = embed
for param in vgg.parameters():
param.requires_grad = False
self.embedding = vgg
        self.gru = nn.GRU(4096, hidden_size)
def forward(self, input, hidden=None):
embedded = self.embedding(input)
        output, hidden = self.gru(embedded, hidden)
output = self.classifier(output.view(-1, 4096))
return output, hidden
As my videos have variable length, I provide a PackedSequence as an input. It is created from a Tensor with shape (M,B,C,H,W) where M is the maximum sequence length and B the batch size. The C,H,W are the channels, height and width of each frame.
I want the pre-trained CNN to be part of the model as I may later unfreeze some layer to finetune the CNN for my task. That's why I didn't compute the embedding of the images separately.
My questions are then the following :
Is the shape of my input data correct in order to handle batches of videos in my context or should I use something else than a PackedSequence?
In my forward function, how can I handle the batch of sequences of images with my VGG and my GRU unit? I cannot feed the PackedSequence directly as an input to my VGG, so how can I proceed?
Does this approach seem to respect the "pytorch way of doing things", or is my approach flawed?
| I finally found the solution to make it work. Here is a simplified yet complete example of how I managed to create a VideoRNN able to use a PackedSequence as input:
class VideoRNN(nn.Module):
def __init__(self, n_classes, batch_size, device):
super(VideoRNN, self).__init__()
self.batch = batch_size
self.device = device
# Loading a VGG16
vgg = models.vgg16(pretrained=True)
# Removing last layer of vgg 16
embed = nn.Sequential(*list(vgg.classifier.children())[:-1])
vgg.classifier = embed
        # Freezing the VGG parameters
for param in vgg.parameters():
param.requires_grad = False
self.embedding = vgg
        self.num_layer = 1
        self.gru = nn.LSTM(4096, 2048, num_layers=self.num_layer, bidirectional=True)
# Classification layer (*2 because bidirectionnal)
self.classifier = nn.Sequential(
nn.Linear(2048 * 2, 256),
nn.ReLU(),
nn.Linear(256, n_classes),
)
def forward(self, input):
        hidden = torch.zeros(self.num_layer * 2, self.batch, 2048).to(
self.device
)
c_0 = torch.zeros(self.num_layer * 2, self.batch, 2048).to(
self.device
)
embedded = self.simple_elementwise_apply(self.embedding, input)
output, hidden = self.gru(embedded, (hidden, c_0))
hidden = hidden[0].view(-1, 2048 * 2)
output = self.classifier(hidden)
return output
def simple_elementwise_apply(self, fn, packed_sequence):
return torch.nn.utils.rnn.PackedSequence(
fn(packed_sequence.data), packed_sequence.batch_sizes
)
The key is the simple_elementwise_apply method, which allows feeding the PackedSequence through the CNN network and retrieving a new PackedSequence made of embeddings as the output.
I hope you'll find it useful.
| https://stackoverflow.com/questions/63567352/ |
I am getting a ValueError: All bounding boxes should have positive height and width | Hey I am getting the error
ValueError: All bounding boxes should have positive height and width. Found invaid box [264.0, 632.0, 264.0, 633.3333740234375] for target at index 2.
Epoch 1/1
Mini-batch: 1/1220 Loss: 0.1509
Mini-batch: 101/1220 Loss: 0.1201
Mini-batch: 201/1220 Loss: 0.1103
Mini-batch: 301/1220 Loss: 0.1098
Mini-batch: 401/1220 Loss: 0.1076
Mini-batch: 501/1220 Loss: 0.1056
Mini-batch: 601/1220 Loss: 0.1044
Mini-batch: 701/1220 Loss: 0.1035
ValueError Traceback (most recent call last)
in ()
13
14 # Calculate losses
---> 15 loss_dict = model(images, targets)
16 batch_loss = sum(loss for loss in loss_dict.values()) / len(loss_dict)
17
1 frames
/usr/local/lib/python3.6/dist-packages/torchvision/models/detection/generalized_rcnn.py in forward(self, images, targets)
91 raise ValueError("All bounding boxes should have positive height and width."
92 " Found invaid box {} for target at index {}."
---> 93 .format(degen_bb, target_idx))
94
95 features = self.backbone(images.tensors)
ValueError: All bounding boxes should have positive height and width. Found invaid box [264.0, 632.0, 264.0, 633.3333740234375] for target at index 2.
I can't find a bounding box with these coordinates in the label csv file. Can anybody please help me out with this?
Here is my dataset class:
from torch.utils.data import Dataset, DataLoader
# Inherit from pytorch Dataset for convenience
class DamageDataset(Dataset):
def __init__(self, dataframe):
super().__init__()
self.filename = dataframe['filename'].unique()
self.df = dataframe
def __len__(self) -> int:
return len(self.filename)
def __getitem__(self, index: int):
filename = self.filename[index]
image = read_image_from_train_folder(filename).astype(np.float32)
# Scale to [0,1] range expected by the pre-trained model
image /= 255.0
# Convert the shape from [h,w,c] to [c,h,w] as expected by pytorch
image = torch.from_numpy(image).permute(2,0,1)
records = self.df[self.df['filename'] == filename]
boxes = records[['xmin', 'ymin', 'xmax', 'ymax']].values
classes= records['class'].values
damage_labels=[]
damage_dict={
'D00': 1,
'D10': 2,
'D20': 3,
'D40': 4,
}
for label in classes:
damage_labels.append(damage_dict[label])
boxes = torch.as_tensor(boxes, dtype=torch.float32)
n_boxes = boxes.shape[0]
        # detection models expect integer class labels
        labels = torch.as_tensor(damage_labels, dtype=torch.int64)
target = {}
target['boxes'] = boxes
target['labels'] = labels
return image, target
and here is my train code:
num_epochs = 1
# Prepare the model for training
model = model.to(device)
model.train()
for epoch in range(num_epochs):
print("Epoch %i/%i " % (epoch + 1, num_epochs) )
average_loss = 0
for batch_id, (images, targets) in enumerate(train_data_loader):
# Prepare the batch data
images, targets = move_batch_to_device(images, targets)
# Calculate losses
loss_dict = model(images, targets)
batch_loss = sum(loss for loss in loss_dict.values()) / len(loss_dict)
# Refresh accumulated optimiser state and minimise losses
optimizer.zero_grad()
batch_loss.backward()
Can someone help me find the index of this bounding box so that I can delete it? I have iterated through my dataframe using this code:
for idx, row in merge_labels.iterrows():
if(row['xmin']==264 and row['ymin']== 632 and row['xmax']== 264 and row['ymax']== 633.3333740234375 ):
print(idx)
but it doesn't print any index.
thank you
| This is happening because of the resize transform applied by Faster R-CNN in the detection module. If you explicitly apply a resize operation, the generated bounding box coordinates change according to the resize definition; but even if you haven't applied a resize transform, when your image's min and max sizes fall outside (800, 1333) a default resize transform is applied.
check the below snippets from the pytorch git repo.
Module: /torchvision/models/detection; Exception generated in generalized_rcnn.py
degenerate_boxes = boxes[:, 2:] <= boxes[:, :2]
if degenerate_boxes.any():
# print the first degenerate box
bb_idx = torch.where(degenerate_boxes.any(dim=1))[0][0]
degen_bb: List[float] = boxes[bb_idx].tolist()
raise ValueError("All bounding boxes should have positive height and width."
" Found invalid box {} for target at index {}."
.format(degen_bb, target_idx))
Default min max image size; Module: detection/ faster_rcnn
def __init__(self, backbone, num_classes=None,
# transform parameters
min_size=800, max_size=1333,
image_mean=None, image_std=None,
............................
transform = GeneralizedRCNNTransform(min_size, max_size, image_mean, image_std)
Target resizing; Module: detection / transform.py
def resize_boxes(boxes, original_size, new_size):
# type: (Tensor, List[int], List[int]) -> Tensor
ratios = [
torch.tensor(s, dtype=torch.float32, device=boxes.device) /
torch.tensor(s_orig, dtype=torch.float32, device=boxes.device)
for s, s_orig in zip(new_size, original_size)
]
ratio_height, ratio_width = ratios
xmin, ymin, xmax, ymax = boxes.unbind(1)
xmin = xmin * ratio_width
xmax = xmax * ratio_width
ymin = ymin * ratio_height
ymax = ymax * ratio_height
return torch.stack((xmin, ymin, xmax, ymax), dim=1)
You can fix this by removing/correcting bounding boxes where xmin and xmax or ymin and ymax are equal in the original dataset.
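Note that the box in the error message is reported in the coordinates after this internal resize, which is why searching the original CSV for those exact values (as in the question) finds nothing. Here is a minimal sketch for locating the degenerate boxes in the original dataframe, assuming the merge_labels dataframe from the question:

# Boxes with zero (or negative) width/height in the original coordinates:
bad = merge_labels[
    (merge_labels['xmin'] >= merge_labels['xmax']) |
    (merge_labels['ymin'] >= merge_labels['ymax'])
]
print(bad.index.tolist())
merge_labels = merge_labels.drop(bad.index)  # or clamp/correct the coordinates instead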
| https://stackoverflow.com/questions/63572304/ |
path problem : NameError: name '__file__' is not defined | import os.path as osp
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.datasets import MNISTSuperpixels
import torch_geometric.transforms as T
from torch_geometric.data import DataLoader
from torch_geometric.utils import normalized_cut
from torch_geometric.nn import (NNConv, graclus, max_pool, max_pool_x, global_mean_pool)
path = osp.join(osp.dirname(osp.realpath(__file__)), '..', 'data', 'MNIST')
transform = T.Cartesian(cat=False)
train_dataset = MNISTSuperpixels(path, True, transform=transform)
test_dataset = MNISTSuperpixels(path, False, transform=transform)
train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=64, shuffle=False)
d = train_dataset
I'm trying to use the MNISTSuperpixels data for graph convolution, but I have some trouble using the example code.
Most of the scripts use
path = osp.join(osp.dirname(osp.realpath(__file__)), '..', 'data', 'MNIST')
However, they gave me an error
NameError: name '__file__' is not defined and I don't understand what osp.realpath(__file__) really means.
I'm using Jupyter notebook on Ubuntu, and my working directory is
print(os.getcwd())
/home/hkimlx/GDL/pytorch_geometric/examples
which is the same directory where the sample code mnist_nn_conv.py is located.
Please help me. Thanks!
| In a notebook, you need to use the double-quoted "__file__", as in osp.realpath("__file__"), instead of osp.realpath(__file__).
Sourced from: https://queirozf.com/entries/python-working-with-paths-the-filesystem#-nameerror-name-'file'-is-not-defined
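Alternatively, since the trick above just resolves the path against the working directory anyway, a minimal equivalent sketch is to anchor the path at the current working directory explicitly:

import os
import os.path as osp

path = osp.join(os.getcwd(), '..', 'data', 'MNIST')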
| https://stackoverflow.com/questions/63583062/ |
how to implement ResNet50 in PyTorch? | I'm learning about NNs in a Coursera course by deeplearning.ai, and one of my homework assignments was a ResNet50 implementation using Keras. Keras felt too high-level, so I decided to implement it in a more sophisticated library - PyTorch. I wrote it, but something went wrong. Could someone please tell me what's going on, and why an error about parameters appears when I call ResNet.parameters() while passing it to the Adam optimizer?
class implementation:
class ResNet50(torch.nn.Module):
def __init__(self, input_shape = (3, 96, 96), classes = 10):
super(ResNet50, self).__init__()
"""
Implementation of the popular ResNet50 the following architecture:
Conv2d -> BatchNorm -> ReLU -> MaxPool -> ConvBlock - > IdBlock*2 - > convBlock -> IdBlock*3 -> ConvBlock -> IdBlock*5 -> ConvBlock -> IdBlock*2 -> AvgPool -> FCLayer
Arguments:
input_shape -- shape of the image of the dataset
classes -- integer, number of classes
"""
self.input_shape = input_shape
self.classes = classes
self.relu = torch.nn.ReLU()
def identity_block(self, X, f, filters):
# Notice that there is no any kind of Pooling.
"""
Implementation of the identity block.
Arguments:
X -- input tensor of shape(m , n_H_prev, n_W_prev, n_C_prev)
f -- integer, specifying the shape of the middle CONV's window for the main path
filters -- python list of integers, defining the number of filters in the CONV layers of the main path
Returns:
X -- output of the identity block, tensor of shape (n_H, n_W, n_C)
"""
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value. It will be needed later to be added back to the main path.
X_shortcut = X
# First component of the main path
        X = torch.nn.Conv2d(in_channels=X.shape[1], out_channels=F1, kernel_size=1, stride=1, padding=0)(X)
X = torch.nn.BatchNorm2d(num_features=F1)(X)
X = self.relu(X)
# Second component of the main path
X = torch.nn.Conv2d(in_channels=F1, out_channels=F2, kernel_size=f, stride=1, padding=f//2)(X)
X = torch.nn.BatchNorm2d(num_features=F2)(X)
X = self.relu(X)
# Third component of the main path
X = torch.nn.Conv2d(in_channels=F2, out_channels=F3, kernel_size=1, stride=1, padding=0)(X)
X = torch.nn.BatchNorm2d(num_features=F3)(X)
# X = self.relu(X) - NO RELU, notice this!
# Final step: Add shortcut value to main path, and pass it through a ReLU
X = X_shortcut + X
X = self.relu(X)
return X
def convolution_block(self, X, f, filters, s = 2):
# Notice that here is no any kind of Pooling.
"""
Implementation of the convolutional block.
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
f -- integer, specifying the shape of middle CONV's window for main path
filters -- python list of integers, defining the number of filters in the CONV layers of the main path
s -- integer, specifying the stride to be used
Returns:
X -- output of the convolution block, tensor of shape (n_H, n_W, n_C)
"""
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value
X_shortcut = X
# First component of the main path
        X = torch.nn.Conv2d(in_channels=X.shape[1], out_channels=F1, kernel_size=1, stride=s, padding=0)(X)
X = torch.nn.BatchNorm2d(num_features=F1)(X)
X = self.relu(X)
# Second component of the main path
X = torch.nn.Conv2d(in_channels=F1, out_channels=F2, kernel_size=f, stride=1, padding=f//2)(X)
X = torch.nn.BatchNorm2d(num_features=F2)(X)
X = self.relu(X)
# Third component of the main path
X = torch.nn.Conv2d(in_channels=F2, out_channels=F3, kernel_size=1, stride=1, padding=0)(X)
        X = torch.nn.BatchNorm2d(num_features=F3)(X)
# X = self.relu(X) - NO RELU, notice this!
# Shortcut path
        X_shortcut = torch.nn.Conv2d(in_channels=X_shortcut.shape[1], out_channels=F3, kernel_size=1, stride=s, padding=0)(X_shortcut)
        X_shortcut = torch.nn.BatchNorm2d(num_features=F3)(X_shortcut)
# X = self.relu(X) - NO RELU, notice this!
# Final step: Add shortcut value to main path, and pass it through a ReLU
X = X_shortcut + X
X = self.relu(X)
return X
def forward(self, X):
"""
        Forward propagation by the following architecture:
Conv2d -> BatchNorm -> ReLU -> MaxPool -> ConvBlock - > IdBlock*2 - > convBlock -> IdBlock*3 -> ConvBlock -> IdBlock*5 -> ConvBlock -> IdBlock*2 -> AvgPool -> FCLayer
Arguments:
X -- input data for Network that needed to be propagated
Returns:
X -- output of the ResNet50, that propagated through it
"""
# # Define the input as a tensor with shape self.input_shape
# X = torch.zeros_like(self.input_shape)
# Stage 1
X = torch.nn.Conv2d(in_channels=3, out_channels=64, kernel_size=7, stride=2, padding=3)(X) # 96x96x3 -> 48x48x64
X = torch.nn.BatchNorm2d(num_features=64)(X)
X = self.relu(X)
X = torch.nn.MaxPool2d(kernel_size=3, stride=2, padding=0)(X) # 48x48x64 -> 23x23x64
# Stage 2
X = self.convolution_block(X, f=3, filters=[64, 64, 256], s=1) # 23x23x64 -> 23x23x256
X = self.identity_block(X, 3, [64, 64, 256]) # same
X = self.identity_block(X, 3, [64, 64, 256]) # same
# Stage 3
X = self.convolution_block(X, f=3, filters=[128, 128, 512], s=2) # 23x23x256 -> 12x12x512
X = self.identity_block(X, 3, [128, 128, 512]) # same
X = self.identity_block(X, 3, [128, 128, 512]) # same
X = self.identity_block(X, 3, [128, 128, 512]) # same
# Stage 4
X = self.convolution_block(X, f=3, filters=[256, 256, 1024], s=2) # 12x12x512 -> 6x6x1024
X = self.identity_block(X, 3, [256, 256, 1024]) # same
X = self.identity_block(X, 3, [256, 256, 1024]) # same
X = self.identity_block(X, 3, [256, 256, 1024]) # same
X = self.identity_block(X, 3, [256, 256, 1024]) # same
X = self.identity_block(X, 3, [256, 256, 1024]) # same
# Stage 5
X = self.convolution_block(X, f=3, filters=[512, 512, 2048], s=2) # 6x6x1024 -> 3x3x2048
X = self.identity_block(X, 3, [512, 512, 2048]) # same
X = self.identity_block(X, 3, [512, 512, 2048]) # same
# AvgPool
X = torch.nn.AvgPool2d(kernel_size=2)(X) # 3x3x2048 -> 2x2x2048
# Output layer
X = X.reshape(X.shape[0], -1)
        X = torch.nn.Linear(in_features=X.shape[1], out_features=self.classes)(X)
        X = torch.nn.Softmax(dim=1)(X)
return X
next script:
NNet = ResNet50()
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
NNet = NNet.to(device)
loss = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(NNet.parameters(), lr = 0.001)
random.seed(k)
np.random.seed(k)
torch.manual_seed(k)
torch.cuda.manual_seed(k)
torch.backends.cudnn.deterministic = True
Returned error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-18-71ee8a51c6b2> in <module>()
5
6 loss = torch.nn.CrossEntropyLoss()
----> 7 optimizer = torch.optim.Adam(NNet.parameters(), lr = 0.001)
8
9 random.seed(k)
1 frames
/usr/local/lib/python3.6/dist-packages/torch/optim/optimizer.py in __init__(self, params, defaults)
44 param_groups = list(params)
45 if len(param_groups) == 0:
---> 46 raise ValueError("optimizer got an empty parameter list")
47 if not isinstance(param_groups[0], dict):
48 param_groups = [{'params': param_groups}]
ValueError: optimizer got an empty parameter list
| Your class does not have any parameters, so .parameters() will give you an empty list.
You have to actually create the individual layers and store them in variables.
Right now all you do is call
X = torch.nn.Conv2d(in_channels=X.shape[1], out_channels=F1, kernel_size=1, stride=1, padding=0)(X)
Which creates a temporary Conv2d object, calls the forward function of that object, and then the object is lost, since only the output of the forward pass is saved in X.
The correct thing to do is to define your layers either in __init__() or in a function that you call from __init__().
So the correct thing to do is:
def __init__(self, input_shape = (3, 96, 96), classes = 10):
super(ResNet50, self).__init__()
"""
Implementation of the popular ResNet50 the following architecture:
Conv2d -> BatchNorm -> ReLU -> MaxPool -> ConvBlock - > IdBlock*2 - > convBlock -> IdBlock*3 -> ConvBlock -> IdBlock*5 -> ConvBlock -> IdBlock*2 -> AvgPool -> FCLayer
Arguments:
input_shape -- shape of the image of the dataset
classes -- integer, number of classes
"""
        self._conv_1 = torch.nn.Conv2d(in_channels=3, out_channels=64, kernel_size=7, stride=2, padding=3)
        self._bn_1 = torch.nn.BatchNorm2d(num_features=64)
...
self.input_shape = input_shape
self.classes = classes
self.relu = torch.nn.ReLU()
and later in your forward or a function called by forward you can do
def forward(self, X):
"""
        Forward propagation by the following architecture:
Conv2d -> BatchNorm -> ReLU -> MaxPool -> ConvBlock - > IdBlock*2 - > convBlock -> IdBlock*3 -> ConvBlock -> IdBlock*5 -> ConvBlock -> IdBlock*2 -> AvgPool -> FCLayer
Arguments:
        X -- input data for the network that needs to be propagated
        Returns:
        X -- output of the ResNet50 that the input was propagated through
"""
# # Define the input as a tensor with shape self.input_shape
# X = torch.zeros_like(self.input_shape)
        X = self.relu(self._bn_1(self._conv_1(X)))
return X
So you have to do it along these lines. Create your layers and save them in variables and later use the variables in the forward.
For further reference and help refer to the official tutorial https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html
| https://stackoverflow.com/questions/63591609/ |
Artifacts in StyleGAN generated images | I've written my own implementation of StyleGAN (paper here https://arxiv.org/abs/1812.04948), using PyTorch instead of Tensorflow, which is what the official implementation uses. I'm doing this partly as an exercise in implementing a scientific paper from scratch.
I have done my best to reproduce all the features mentioned in the paper and in the ProgressiveGAN paper which it is based on, and the network trains, but I consistently get blurry images and blob-shaped artifacts:
I would very much like to know if anyone with experience of GANs in general or StyleGAN in particular has seen this phenomenon and can give me any insight into possible reasons for it.
(Some detail: I'm training on downsampled CelebA images, 600k images burn-in, 600k images fade-in, but I see very similar phenomena with a tiny toy dataset and a lot fewer iterations.)
| I've been working with StyleGAN for a while, and I couldn't guess the reason with so little information.
One possible reason is the effect of the truncation trick: it makes the results represent an average face with higher quality, or deviates from it to obtain more variability in the results, but with the possibility of added artifacts like yours. Check how you implemented this trick in PyTorch.
I recommend you check this repository (https://github.com/rosinality/style-based-gan-pytorch), where they implemented StyleGAN in PyTorch. You could find out whether you are missing something from the model there.
Finally, I would also suggest you read the StyleGAN2 paper (https://arxiv.org/abs/1912.04958) from the same authors, where they explain how they solved the droplet artifacts and improved on StyleGAN's quality.
| https://stackoverflow.com/questions/63594267/ |
Cannot import torch module | I cannot seem to properly install pytorch on my computer, so here is the background of what I have done:
I had already installed python on my computer and it worked. I used it in Eclipse, using pyDev, so I don't know if that could be the problem. Now I want to install pytorch, so I installed anaconda and entered the command for installing pytorch. To get the right command, I use https://pytorch.org/get-started/locally/, where I tried the options both with and without cuda. In both cases I get an error when I type "import torch".
I have also installed miniconda and tried the same with that without success. I also tried to work in IDLE instead of Eclipse, but I keep getting the "no module named 'torch'" error. Each time I run a command in anaconda it appears that the installation is successful, but I still can't import 'torch'.
Any idea what the problem could be or what I could try?
| Open command prompt or terminal and type:
pip3 install torch
If it says pip isn't installed then type: python -m pip install -U pip
Then retry importing the torch module.
| https://stackoverflow.com/questions/63600423/ |
What is the goal of Variable in pytorch? | I have this code:
from torch.autograd import Variable
d_real_data = Variable(d_sampler(d_input_size))
But I wonder: what is the difference between Variable(d_sampler(d_input_size)) and d_sampler(d_input_size)?
I think both are tensors, but the values are different. So I was wondering, what is the purpose of this Variable function?
| Variable() was a way to use autograd with tensors. This is now deprecated and should not be used anymore. Tensors now work fine with autograd if the requires_grad flag is set to true.
From the official docs
The Variable API has been deprecated: Variables are no longer
necessary to use autograd with tensors. Autograd automatically
supports Tensors with requires_grad set to True.
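As a minimal sketch of the modern equivalent: a plain tensor with requires_grad=True gives you everything Variable used to provide:

import torch

x = torch.randn(3, requires_grad=True)  # no Variable wrapper needed
y = (x ** 2).sum()
y.backward()
print(x.grad)  # gradients are tracked automatically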
| https://stackoverflow.com/questions/63612498/ |
How to calculate the median of a masked tensor along an axis? | I have tensor X of floats of dimensions n x m and a tensor Y of booleans of dimensions n x m. I want to calculate values such as the mean, median and max of X, along one of the axes, but only considering the values in X which are true in Y. Something like X[Y].mean(dim=1). This is not possible because X[Y] is always a 1D tensor.
Edit:
For the mean, I was able to do it with:
masked_X = X * Y
masked_X_mean = masked_X.sum(dim=1) / Y.sum(dim=1)
For the max:
masked_X = X.clone()
masked_X[~Y] = float('-inf')
masked_X_max = masked_X.max(dim=1).values
But for the median, I was not able to be as creative. Any suggestions??
e.g.
X = torch.tensor([[1, 1, 1],
[2, 2, 4]]).type(torch.float32)
Y = torch.tensor([[0, 1, 0],
[1, 0, 1]]).type(torch.bool)
Expected Output
mean = [1., 3.]
median = [1., 2.]
var = [0., 1.]
| This is the best I have so far on this:
outs = []
for x, y in zip(X, Y): # X, Y could be permuted to loop over desired axis
out = torch.median(torch.masked_select(x, y))
outs.append(out)
torch.tensor(outs)
I would really appreciate it if someone has a better solution.
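One vectorized alternative - a sketch that assumes a PyTorch version with torch.nanmedian (1.8+) - is to hide the unwanted entries behind NaNs:

import torch

X = torch.tensor([[1., 1., 1.], [2., 2., 4.]])
Y = torch.tensor([[0, 1, 0], [1, 0, 1]], dtype=torch.bool)

masked = X.clone()
masked[~Y] = float('nan')                # entries to ignore become NaN
median = masked.nanmedian(dim=1).values  # per-row median, NaNs ignored
print(median)  # tensor([1., 2.])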
| https://stackoverflow.com/questions/63621694/ |
pytorch: How do I properly initialize Tensor without any entries? | What I am doing right now is this:
In [1]: torch.Tensor([[[] for _ in range(3)] for _ in range(5)])
Out[1]: tensor([], size=(5, 3, 0))
This works fine for me, but is there maybe a torch function that does this that I am missing?
Thanks in advance!
Edit:
My use case is this:
I use this to aggregate Tensors whose dimensions are all the same and that don't have the empty dimension. I am using torch.cat:
# results start with shape (a,b,0)
results = torch.Tensor([[[] for _ in range(b)] for _ in range(a)])
for t in range(time):
# r has shape (a,b)
r = model(...)
# results now has shape (a,b,t)
results = torch.cat([results,r.unsqueeze(2)],dim=-1)
Simply appending to a list is impractical for me, as I have to do reshaping operations on results at every step (I'm doing beam search).
One solution would also be to not initialize results until I have the first returned Tensor, but this feels unpythonic/wrong.
| This can be another way depending on your usecase.
alpha = torch.tensor([])
In[5]: alpha[:,None,None,None]
Out[5]: tensor([], size=(0, 1, 1, 1))
Otherways:
torch.tensor([[[[]]]]) #tensor([], size=(1, 1, 1, 0))
torch.tensor([[[[],[]]]]) #tensor([], size=(1, 1, 2, 0))
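For the use case above, a simpler sketch is to create the empty tensor directly, without the nested list comprehensions:

import torch

a, b = 5, 3
results = torch.empty(a, b, 0)  # shape (5, 3, 0), ready for torch.cat along dim=-1
print(results.shape)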
| https://stackoverflow.com/questions/63622972/ |
Error in training opennmt - caffe2_detectron_ops.dll not found | I have torch 1.6 and python 3.8. When training OpenNMT, it throws the following error -
OSError: [WinError 126] The specified module could not be found. Error loading "C:\Users\Girish\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\lib\caffe2_detectron_ops.dll" or one of its dependencies.
I checked the folder, the file is present there. I have tried uninstalling torch and reinstalling it, but no help.
Any help will be appreciated. thanks
| https://github.com/pytorch/pytorch/issues/35803#issuecomment-725285085
This answer worked for me.
Just delete "caffe2_detectron_ops.dll" from the path ("C:\Users\Girish\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\lib\caffe2_detectron_ops.dll").
| https://stackoverflow.com/questions/63629075/ |
Loading a converted pytorch model in huggingface transformers properly | I converted a pre-trained tf model to pytorch using the following function.
import torch
from transformers import AlbertConfig, AlbertForPreTraining, load_tf_weights_in_albert

def convert_tf_checkpoint_to_pytorch(*, tf_checkpoint_path, albert_config_file, pytorch_dump_path):
# Initialise PyTorch model
config = AlbertConfig.from_json_file(albert_config_file)
print("Building PyTorch model from configuration: {}".format(str(config)))
model = AlbertForPreTraining(config)
# Load weights from tf checkpoint
load_tf_weights_in_albert(model, config, tf_checkpoint_path)
# Save pytorch-model
print("Save PyTorch model to {}".format(pytorch_dump_path))
torch.save(model.state_dict(), pytorch_dump_path)
I am loading the converted model and encoding sentences in the following way:
import tensorflow as tf
from transformers import AlbertTokenizer, AlbertConfig, TFAlbertModel

def vectorize_sentence(text):
albert_tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
config = AlbertConfig.from_pretrained(config_path, output_hidden_states=True)
model = TFAlbertModel.from_pretrained(pytorch_dir, config=config, from_pt=True)
e = albert_tokenizer.encode(text, max_length=512)
model_input = tf.constant(e)[None, :] # Batch size 1
output = model(model_input)
v = [0] * 768
# generate sentence vectors by averaging the word vectors
for i in range(1, len(model_input[0]) - 1):
v = v + output[0][0][i].numpy()
vector = v/len(model_input[0])
return vector
However while loading the model, a warning comes up:
Some weights or buffers of the PyTorch model TFAlbertModel were not
initialized from the TF 2.0 model and are newly initialized:
['predictions.LayerNorm.bias', 'predictions.dense.weight',
'predictions.LayerNorm.weight', 'sop_classifier.classifier.bias',
'predictions.dense.bias', 'sop_classifier.classifier.weight',
'predictions.decoder.bias', 'predictions.bias',
'predictions.decoder.weight'] You should probably TRAIN this model on
a down-stream task to be able to use it for predictions and inference.
Can anyone tell me if I am doing anything wrong? What does the warning mean? I saw issue #5588 on the github repo of Transformers. Don't know if my issue is the same as this.
| I think you could try using
model = AlbertModel.from_pretrained
instead of
model = TFAlbertModel.from_pretrained
in the VectorizeSentence definition.
AlbertModel is the name of the class for the pytorch format model, and TFAlbertModel is the name of the class for the tensorflow format model.
I'm not sure exactly what load_tf_weights_in_albert() does, but I think that once you have done that your model is in pytorch format.
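A minimal sketch of that change, assuming pytorch_dir points at the converted PyTorch weights and config_path at the matching config (note that from_pt=True should then be dropped):

from transformers import AlbertModel, AlbertConfig

config = AlbertConfig.from_pretrained(config_path, output_hidden_states=True)
model = AlbertModel.from_pretrained(pytorch_dir, config=config)  # no from_pt=True needed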
| https://stackoverflow.com/questions/63648380/ |
Why is the implementation of cross entropy different in Pytorch and Tensorflow? | I am going through the documentation of Cross Entropy in Pytorch and Tensorflow. I understand that they modify the naive implementation of Cross Entropy to solve for potential numeric over/underflows. However, I am unable to understand how these modifications help at all.
The implementation of Cross Entropy in Pytorch follows the following logic -
$\text{loss}(x, c) = -\log\left(\frac{e^{x_c}}{\sum_j e^{x_j}}\right) = -x_c + \log\sum_j e^{x_j}$
where $\frac{e^{x_c}}{\sum_j e^{x_j}}$ is the softmax score and $x_c$ is the raw score.
This doesn't seem to solve the problem, because $e^{x_j}$ also leads to numeric overflow.
Now, we contrast it with Tensorflow's implementation (I got it from a discussion in Github. This might be completely wrong) -
Let $x$ be the vector of all $k$ raw logit scores. Then TensorFlow effectively computes
$\text{loss}(x, c) = -\left(x_c - \max(x)\right) + \log\sum_{j=1}^{k} e^{x_j - \max(x)}$
While this solves the problem of overflow, it runs into problems of underflow, because it is possible that $x_j - \max(x) \ll 0$, which would lead to an even smaller $e^{x_j - \max(x)}$.
Can someone please help me in understanding what's going on here?
| Answering here by combining answers from the comment section for the benefit of the community.
Since you have addressed the issue of numeric overflow in PyTorch: it is handled by subtracting the max value, like below (from the PyTorch source).
scalar_t z = std::exp(input_data[d * dim_stride] - max_input);
Coming to TensorFlow's implementation of cross entropy, the issue of underflow is not that major, since the underflowed terms are numerically dominated by the largest value.
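For illustration, here is a small numpy sketch of the log-sum-exp trick that both frameworks rely on:

import numpy as np

def stable_log_softmax(x):
    # Subtracting the max prevents overflow in exp(); any terms that then
    # underflow to 0 are dominated by the max term (whose exp is exactly 1).
    shifted = x - np.max(x)
    return shifted - np.log(np.sum(np.exp(shifted)))

logits = np.array([1000.0, 990.0, 980.0])  # naive exp(1000) would overflow
print(stable_log_softmax(logits))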
| https://stackoverflow.com/questions/63657247/ |
How To Use The First Layers Of Model In PyTorch | I have loaded a certain model
from efficientnet_pytorch import EfficientNet
model = EfficientNet.from_pretrained(model_name)  # e.g. 'efficientnet-b0'
And I can see the model:
print(model.state_dict())
The model contains quite a few layers, and I want to take only the first 50. Please tell me how I can do this.
| I think this should do the trick:
model = nn.Sequential(*list(model.children())[:50])
| https://stackoverflow.com/questions/63676050/ |
Pytorch: Mask dilation / extension | I wonder how to extend / dilate a binary mask in pytorch? i.e. it should be something like cv2.dilate from opencv.
| For rectangular neighborhoods, dilation is the same as max pooling.
See nn.MaxPool2d for implementation details.
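A minimal sketch, assuming a binary mask with batch and channel dimensions and a 3x3 rectangular structuring element:

import torch
import torch.nn.functional as F

mask = torch.zeros(1, 1, 5, 5)
mask[0, 0, 2, 2] = 1.0

# Dilation with a 3x3 rectangular structuring element == 3x3 max pooling
dilated = F.max_pool2d(mask, kernel_size=3, stride=1, padding=1)
print(dilated[0, 0])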
| https://stackoverflow.com/questions/63687067/ |
calculating accuracy for Monte Carlo Dropout on pytorch | I have found an implementation of Monte Carlo Dropout in PyTorch. The main idea of this method is to set the dropout layers of the model to train mode at test time. This allows different dropout masks to be used during the various forward passes.
The implementation illustrates how the multiple predictions from the various forward passes are stacked together and used for computing different uncertainty metrics.
import sys
import numpy as np
import torch
import torch.nn as nn
def enable_dropout(model):
""" Function to enable the dropout layers during test-time """
for m in model.modules():
if m.__class__.__name__.startswith('Dropout'):
m.train()
def get_monte_carlo_predictions(data_loader,
forward_passes,
model,
n_classes,
n_samples):
""" Function to get the monte-carlo samples and uncertainty estimates
through multiple forward passes
Parameters
----------
data_loader : object
data loader object from the data loader module
forward_passes : int
number of monte-carlo samples/forward passes
model : object
        pytorch model
n_classes : int
number of classes in the dataset
n_samples : int
number of samples in the test set
"""
dropout_predictions = np.empty((0, n_samples, n_classes))
softmax = nn.Softmax(dim=1)
for i in range(forward_passes):
predictions = np.empty((0, n_classes))
model.eval()
enable_dropout(model)
        for image, label in data_loader:
image = image.to(torch.device('cuda'))
with torch.no_grad():
output = model(image)
output = softmax(output) # shape (n_samples, n_classes)
predictions = np.vstack((predictions, output.cpu().numpy()))
dropout_predictions = np.vstack((dropout_predictions,
predictions[np.newaxis, :, :]))
# dropout predictions - shape (forward_passes, n_samples, n_classes)
# Calculating mean across multiple MCD forward passes
mean = np.mean(dropout_predictions, axis=0) # shape (n_samples, n_classes)
# Calculating variance across multiple MCD forward passes
variance = np.var(dropout_predictions, axis=0) # shape (n_samples, n_classes)
epsilon = sys.float_info.min
# Calculating entropy across multiple MCD forward passes
entropy = -np.sum(mean*np.log(mean + epsilon), axis=-1) # shape (n_samples,)
# Calculating mutual information across multiple MCD forward passes
mutual_info = entropy - np.mean(np.sum(-dropout_predictions*np.log(dropout_predictions + epsilon),
axis=-1), axis=0) # shape (n_samples,)
What I'm trying to do is calculate the accuracy across the different forward passes. Can anyone please help me get the accuracy, and suggest any changes to the dimensions used in this implementation?
I am using the CIFAR10 dataset and would like to use dropout at test time. Here is the code for the data_loader:
testset = torchvision.datasets.CIFAR10(root='./data', train=False,download=True, transform=test_transform)
#loading the test set
data_loader = torch.utils.data.DataLoader(testset, batch_size=n_samples, shuffle=False, num_workers=4)
| Accuracy is the percentage of correctly classified samples. You can create a boolean array that indicates whether a certain prediction is equal to its corresponding reference value, and you can get the mean of these values to calculate accuracy. I have provided a code example of this below.
import numpy as np
# 2 forward passes, 4 samples, 3 classes
# shape is (2, 4, 3)
dropout_predictions = np.asarray([
[[0.2, 0.1, 0.7], [0.1, 0.5, 0.4], [0.9, 0.05, 0.05], [0.25, 0.74, 0.01]],
[[0.1, 0.5, 0.4], [0.2, 0.6, 0.2], [0.8, 0.10, 0.10], [0.25, 0.01, 0.74]]
])
# Get the predicted value for each sample in each forward pass.
# Shape of output is (2, 4).
classes = dropout_predictions.argmax(-1)
# array([[2, 1, 0, 1],
# [1, 1, 0, 2]])
# Test equality among the reference values and predicted classes.
# Shape is unchanged.
y_true = np.asarray([2, 1, 0, 1])
elementwise_equal = np.equal(y_true, classes)
# array([[ True, True, True, True],
# [False, True, True, False]])
# Calculate the accuracy for each forward pass.
# Shape is (2,).
elementwise_equal.mean(axis=1)
# array([1. , 0.5])
In the example above, you can see that the accuracy for the first forward pass was 100%, and the accuracy for the second forward pass was 50%.
| https://stackoverflow.com/questions/63691865/ |
Is there a many to many convolution in Pytorch? is this a thing? | I have been thinking about convolutions recently. There are common 3by3 convs, where (3,3) kernel's information is weighted and aggregated to supply information to a single spatial point on the output. There are also 3 by 3 upconvs, where a single spatial point on the input supplies weighted information to a 3 by 3 output space.
The conv is a many to one relationship and the upconv is a one to many relationship.
I have, however, never heard of a many-to-many conv - is there such a thing? For example, a 3x3 kernel supplying information to another 3x3 output patch. I would like to experiment with it in PyTorch. My internet searching has not revealed anything.
| You can combine pixel shuffle and averaging to get what you want.
For example, if you want a 3x3 -> 3x3 mapping with in_channels to out_channels:
from torch import nn
import torch.nn.functional as nnf
class ManyToManyConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, in_kernel, out_kernel):
        super().__init__()
        self.out_kernel = out_kernel
        self.conv = nn.Conv2d(in_channels, out_channels * out_kernel * out_kernel, in_kernel)

    def forward(self, x):
        y = self.conv(x)  # all the output kernels are "folded" into the channel dim
        y = nnf.pixel_shuffle(y, self.out_kernel)  # "unfold" the out_kernel - image size gets out_kernel times bigger
        y = nnf.avg_pool2d(y, self.out_kernel)
        return y
| https://stackoverflow.com/questions/63692522/ |
Mesh-R-CNN Data with Colab and Pytorch3D | While using the Mesh-R-CNN demo on Google Colab:
https://github.com/facebookresearch/meshrcnn
on the demo.py file I get this message
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-24-202aa0a6d1de> in <module>()
17
18 # required so that .register() calls are executed in module scope
---> 19 import meshrcnn.data # noqa
20 import meshrcnn.modeling # noqa
21 import meshrcnn.utils # noqa
ModuleNotFoundError: No module named 'meshrcnn.data'
What should I do to import meshrcnn.data successfully?
I also don't know how to work with the config setting present in the repo. Any suggestions?
| Since it was Colab, it was missing the "!cd" into the repo before the import of meshrcnn.
| https://stackoverflow.com/questions/63697929/ |
How does torch.distributed.launch assign data to each GPU? | When our batch size is 1 or 2 and we have 8 GPUs, how does torch.distributed.launch assign data to each GPU? I converted my model to torch.nn.parallel.DistributedDataParallel,
model = DistributedDataParallel(model,
device_ids=[args.local_rank],
output_device=args.local_rank,
find_unused_parameters=True,
)
but it stated in the documentation that DistributedDataParallel:
parallelizes the application of the given module by
splitting the input across the specified devices by chunking in the
batch dimension.
My question is: when the batch size is smaller than the number of GPUs, how does it deal with that?
| They don't. Unlike Dataparallel, the batch size you set is per-GPU. When you have 8 GPUs with batch size 1, you have an effective batch size of 8.
| https://stackoverflow.com/questions/63720392/ |
PyTorch GPU out of memory | I am running an evaluation script in PyTorch. I have a number of trained models (*.pt files), which I load and move to the GPU, taking in total 270MB of GPU memory. I am using a batch size of 1. For every sample, I load a single image and also move it to the GPU. Then, depending on the sample, I need to run a sequence of these trained models. Some models have a tensor as input and as output. Other models have a tensor as input, but a string as output. The final model in a sequence always has a string as output. The intermediary tensors are temporarily stored in a dictionary. When a model has consumed a tensor input, it is deleted using del. Still, I notice that after every sample, the GPU memory keeps increasing until the entire memory is full.
Below is some pseudocode to give you a better idea of what is going on:
with torch.no_grad():
trained_models = load_models_from_pt() # Loaded and moved to GPU, taking 270MB
model = Model(trained_models) # Keeps the trained_models in a dictionary by name
for sample in data_loader:
# A sample contains a single image and is moved to the GPU
# A sample also has some other information, but no other tensors
model.forward(sample)
class Model(nn.Module):
    def __init__(self, trained_models):
        self.trained_models = trained_models
        self.intermediary = {}

    def forward(self, sample):
        for i, elem in enumerate(sample['sequence']):
            name = elem['name']
            inp = elem['input']
            if name == 'a':
                model = self.trained_models['a']
                out = model(self.intermediary[inp])
                del self.intermediary[inp]
                self.intermediary[i] = out
            elif name == 'b':
                model = self.trained_models['b']
                out = model(self.intermediary[inp])
                del self.intermediary[inp]
                self.intermediary[i] = out
            elif ...
I have no idea why the GPU is out of memory. Any ideas?
| Try adding torch.cuda.empty_cache() after the del
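For example, a minimal sketch of where the call goes, assuming a CUDA device is available:

import torch

x = torch.randn(1024, 1024, device='cuda')
del x                      # drop the last Python reference to the tensor
torch.cuda.empty_cache()   # release the cached, now-unreferenced blocks back to the driver

Note that empty_cache() can only release memory that is no longer referenced; if something (for example the self.intermediary dict) still holds on to a tensor, that memory cannot be freed.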
| https://stackoverflow.com/questions/63725858/ |
Fluctuating loss during training for text binary classification | I'm doing a finetuning of a Longformer on a document text binary classification task using Huggingface Trainer class and I'm monitoring the measures of some checkpoints with Tensorboard.
Even though the F1 score and accuracy are quite high, I am perplexed by the fluctuations of the training loss.
I read online that the reason for this can be:
the too high learning rate, but I tried with 3 values (1e-4, 1e-5 and 1e-6) and all of them made the same effect
a small batch size. I'm using a Sagemaker notebook p2.8xlarge which has 8xK80 GPUs. The batch size per GPU I can use to avoid the CUDA out of memory error is 1. So the total batch size is 8. My intuition is that a bs of 8 is too small for a dataset containing 57K examples (7K steps per epoch). Unfortunately it's the highest value I can use.
Here I have reported the trend of F1, accuracy, loss and smoothed loss. The grey line is with 1e-6 of learning rate while the pink one is 1e-5.
To summarize, here is all the info of my training:
batch size: 1 x 8GPU = 8
learning rate: 1e-4, 1e-5, 1e-6 (all of them tested without improvement on loss)
model: Longformer
dataset:
training set: 57K examples
dev set: 12K examples
test set: 12K examples
What could be the reason? Can this be considered a problem, despite the quite good F1 and accuracy results?
| I will first tell you the reason for the fluctuations and then a possible way to solve it.
REASON
When you train a network, you calculate a gradient that would reduce the loss. In order to do that, you need to backpropagate the loss. Now, ideally, you compute the loss based on all of the samples in your data because then you consider basically every sample and you come up with a gradient that would capture all of your samples. In practice, this is not possible due to the computational complexity of calculating gradient on all samples.
Therefore, we use small batch_size as an approximation! The idea is instead of considering all the samples, we say I compute the gradient-based on some small set of samples but as a trade-off I lose information regarding the gradient.
Rule of thumb: Smaller batch sizes give noisy gradients, but they converge faster because you have more updates per epoch. If your batch size is 1, you will have N updates per epoch. If it is N, you will only have 1 update per epoch. On the other hand, larger batch sizes give a more informative gradient, but they converge more slowly and increase the computational complexity.
That is the reason why for smaller batch sizes, you observe varying losses/fluctuations because the gradient is noisy.
SOLUTION: Accumulated Gradients
In case of memory issues, you can use the concept of accumulated gradients to combat the fluctuating loss. It calculates the loss and gradients after each mini-batch, but instead of updating the weights on every batch, it waits and accumulates the gradients over consecutive batches. And then ultimately updates the parameters based on the cumulative gradient after a specified number of batches.
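A minimal PyTorch sketch of gradient accumulation; model, criterion, optimizer and train_loader are assumed to be defined as in your setup, and accumulation_steps is a hypothetical setting:

accumulation_steps = 4  # e.g. effective batch size = 8 GPUs * 1 * 4 = 32

optimizer.zero_grad()
for step, (inputs, labels) in enumerate(train_loader):
    outputs = model(inputs)
    loss = criterion(outputs, labels) / accumulation_steps  # average over the window
    loss.backward()                      # gradients accumulate in .grad
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()                 # one update per accumulation window
        optimizer.zero_grad()

This keeps the per-step memory footprint of a small batch while making each update's gradient as informative as a larger batch.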
On this page from the documentation you can find how to apply it: https://huggingface.co/transformers/v1.2.0/examples.html
| https://stackoverflow.com/questions/63743557/ |
Correct way of normalizing and scaling the MNIST dataset | I've looked everywhere but couldn't quite find what I want. Basically the MNIST dataset has images with pixel values in the range [0, 255]. People say that in general, it is good to do the following:
Scale the data to the [0,1] range.
Normalize the data to have zero mean and unit standard deviation (data - mean) / std.
Unfortunately, no one ever shows how to do both of these things. They all subtract a mean of 0.1307 and divide by a standard deviation of 0.3081. These values are basically the mean and the standard deviation of the dataset divided by 255:
from torchvision.datasets import MNIST
import torchvision.transforms as transforms
trainset = MNIST(root='./data', train=True, download=True)
print('Min Pixel Value: {} \nMax Pixel Value: {}'.format(trainset.data.min(), trainset.data.max()))
print('Mean Pixel Value {} \nPixel Values Std: {}'.format(trainset.data.float().mean(), trainset.data.float().std()))
print('Scaled Mean Pixel Value {} \nScaled Pixel Values Std: {}'.format(trainset.data.float().mean() / 255, trainset.data.float().std() / 255))
This outputs the following
Min Pixel Value: 0
Max Pixel Value: 255
Mean Pixel Value 33.31002426147461
Pixel Values Std: 78.56748962402344
Scaled Mean: 0.13062754273414612
Scaled Std: 0.30810779333114624
However, clearly this does none of the above! The resulting data (1) will not be between [0, 1], and (2) will not have mean 0 or std 1. In fact, this is what we are doing:
[data - (mean / 255)] / (std / 255)
which is very different from this
[(scaled_data) - (mean/255)] / (std/255)
where scaled_data is just data / 255.
| Euler_Salter
I may have stumbled upon this a little too late, but hopefully I can help a little bit.
Assuming that you are using torchvision.Transform, the following code can be used to normalize the MNIST dataset.
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('./data', train=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
Usually, 'transforms.ToTensor()' is used to turn the input data in the range of [0,255] to a 3-dimensional Tensor. This function automatically scales the input data to the range of [0,1]. (This is equivalent to scaling the data down to 0,1)
Therefore, it makes sense that the mean and std used in the 'transforms.Normalize(...)' will be 0.1307 and 0.3081, respectively. (This is equivalent to normalizing zero mean and unit standard deviation.)
Please refer to the link below for a better explanation.
https://pytorch.org/vision/stable/transforms.html
| https://stackoverflow.com/questions/63746182/ |
About cosine similarity, how to choose the loss function and the network (I have two plans) | Sorry, I have no clue - I don't know where to find a solution.
I'm using two networks to construct two embeddings; I have a binary target to indicate whether embeddingA and embeddingB "match" or not (1 or -1).
The dataset looks like this:
embA0 embB0 1.0
embA1 embB1 -1.0
embA2 embB2 1.0
...
I hope to use cosine similarity to get classification results.
But I feel confused about choosing the loss function; the two networks that generate the embeddings are trained separately. I can think of two options, as follows:
Plan 1:
Construct a 3rd network: use embeddingA and embeddingB as the input of nn.CosineSimilarity() to calculate the final result (a score in [-1, 1]), and then select a two-class loss function.
(Sorry, I don't know which loss function to choose.)
class cos_Similarity(nn.Module):
    def __init__(self):
        super(cos_Similarity, self).__init__()
        self.cos = nn.CosineSimilarity(dim=2)
        self.embA = generator_A()
        self.embB = generator_B()

    def forward(self, a, b):
        output_a = self.embA(a)
        output_b = self.embB(b)
        return self.cos(output_a, output_b)
loss_func=nn.CrossEntropyLoss()
y=cos_Similarity(a,b)
loss=loss_func(y,target)
acc=np.int64(y>0)
Plan 2:
Use the two embeddings as the output, with nn.CosineEmbeddingLoss() as the loss function; when I calculate the accuracy, I use nn.CosineSimilarity() to output the result (a score in [-1, 1]).
output_a=embA(a)
output_b=embB(b)
cos=nn.CosineSimilarity(dim=2)
loss_function = torch.nn.CosineEmbeddingLoss()
loss=loss_function(output_a,output_b,target)
acc=cos(output_a,output_b)
I really need help. How do I make a choice? Why? Or can I only choose based on experimental results?
Thank you very much!
############################### addition
def train_func(train_loss_list):
train_data=load_data('train')
trainloader = DataLoader(train_data, batch_size=BATCH_SIZE)
cos_smi=nn.CosineSimilarity(dim=2)
train_loss = 0
for step,(a,b,target) in enumerate(trainloader):
try:
optimizer.zero_grad()
output_a = model_A(a) #generate embA
output_b = model_B(b) #generate embB
acc=cos_smi(output_a,output_b)
loss = loss_fn(output_a,output_b, target.unsqueeze(dim=1))
train_loss += loss.item()
loss.backward()
optimizer.step()
train_loss_list.append(loss)
if step%10==0:
print('train:',step,'step','loss:',loss,'acc',acc)
except Exception as e:
print('train:',step,'step')
print(repr(e))
return train_loss_list,train_loss/len(trainloader)
| In response to the comment thread.
The objective or pipeline seems to be:
Receive two embedding vectors (say, A and B).
Check whether these two vectors are "similar" or not (using cosine similarity).
Label is 1 if they're similar, and -1 otherwise (I recommend changing this to 0 or 1 rather than -1 and 1).
What I can think of is the following. Correct me if I misunderstood something. A disclaimer: I'm pretty much coding this off of my intuition without knowing any details, so it's probably going to be riddled with errors if you try to run it. Let's still try to get a high-level understanding.
Model
import torch
import torch.nn as nn
class Model(nn.Module):
def __init__(self, num_emb, emb_dim): # I'm assuming the embedding matrices are same sizes.
self.embedding1 = nn.Embedding(num_embeddings=num_emb, embedding_dim=emb_dim)
self.embedding2 = nn.Embedding(num_embeddings=num_emb, embedding_dim=emb_dim)
self.cosine = nn.CosineSimilarity()
self.sigmoid = nn.Sigmoid()
def forward(self, a, b):
output1 = self.embedding1(a)
output2 = self.embedding2(b)
similarity = self.cosine(output1, output2)
output = self.sigmoid(similarity)
return output
Training/Evaluation
model = Model(num_emb, emb_dim)
if torch.cuda.is_available():
model = model.to('cuda')
model.train()
criterion = loss_function()
optimizer = some_optimizer()
for epoch in range(num_epochs):
epoch_loss = 0
for batch in train_loader:
optimizer.zero_grad()
a, b, label = batch
if torch.cuda.is_available():
a = a.to('cuda')
b = b.to('cuda')
label = label.to('cuda')
output = model(a, b)
loss = criterion(output, label)
loss.backward()
optimizer.step()
epoch_loss += loss.cpu().item()
print("Epoch %d \t Loss %.6f" % epoch, epoch_loss)
I omitted some details (e.g., hyperparameter values, the loss function and optimizer, etc.). Is this overall procedure similar to what you're looking for, OP?
| https://stackoverflow.com/questions/63750215/ |
Partial derivatives of Gaussian Process wrt features | Given a Gaussian Process Model with multidimensional features and scalar observations, how do I compute derivatives of the output wrt each input, in GPyTorch or GPflow (or scikit-learn)?
| If I understand your question correctly, the following should give you what you want in GPflow with TensorFlow:
import numpy as np
import tensorflow as tf
import gpflow
### Set up toy data & model -- change as appropriate:
X = np.linspace(0, 10, 5)[:, None]
Y = np.random.randn(5, 1)
data = (X, Y)
kernel = gpflow.kernels.SquaredExponential()
model = gpflow.models.GPR(data, kernel)
Xtest = np.linspace(-1, 11, 7)[:, None] # where you want to predict
### Compute gradient of prediction with respect to input:
# TensorFlow can only compute gradients with respect to tensor objects,
# so let's convert the inputs to a tensor:
Xtest_tensor = tf.convert_to_tensor(Xtest)
with tf.GradientTape(
persistent=True # this allows us to compute different gradients below
) as tape:
# By default, only Variables are watched. For gradients with respect to tensors,
# we need to explicitly watch them:
tape.watch(Xtest_tensor)
mean, var = model.predict_f(Xtest_tensor) # or any other predict function
grad_mean = tape.gradient(mean, Xtest_tensor)
grad_var = tape.gradient(var, Xtest_tensor)
| https://stackoverflow.com/questions/63753078/ |
How can I get argmaxed torch tensor excluding certain index? | I wonder if I can get torch.argmax of my input excluding certain index.
For example,
target = torch.tensor([1,2])
input = torch.tensor([[0.1,0.5,0.2,0.2], [0.1,0.5,0.1,0.3]])
I want to get the maximum value in input excluding the index on the target, so that the result would be
output = torch.tensor([[0.2],[0.5]])
| You can try this
Set negative infinity at the target indices in a temporary tensor
Then use torch.max or torch.argmax
tmp_input = input.clone()
tmp_input[range(len(input)), target] = float("-Inf")
torch.max(tmp_input, dim=1).values
# tensor([0.2000, 0.5000])
torch.max(tmp_input, dim=1).indices
# tensor([3, 1])
torch.argmax(tmp_input, dim=1)
# tensor([3, 1])
| https://stackoverflow.com/questions/63772663/ |
TypeError: If no scoring is specified, the estimator passed should have a 'score' method | I have been working with a PyTorch neural network for a while now. I decided I wanted to add a permutation feature importance scorer, and this started to cause some issues.
I get" TypeError: If no scoring is specified, the estimator passed should have a 'score' method. The estimator <class 'skorch.net.NeuralNet'>[uninitialized](
module=<class 'main.run..MultiLayerPredictor'>,
) does not. " - error message. Here's my code:
class MultiLayerPredictor(torch.nn.Module):
def __init__(self, input_shape=9152, output_shape=1, hidden_dim=1024, **kwargs):
super().__init__()
self.fc1 = torch.nn.Linear(in_features=input_shape, out_features=hidden_dim)
self.fc2 = torch.nn.Linear(in_features=hidden_dim, out_features=hidden_dim)
self.fc3 = torch.nn.Linear(in_features=hidden_dim, out_features=output_shape)
def forward(self, x):
l1 = torch.relu(self.fc1(x))
l2 = torch.relu(self.fc2(l1))
return torch.sigmoid(self.fc3(l2)).reshape(-1)
print("Moving to wrapping the neural net")
net = NeuralNet(
MultiLayerPredictor,
criterion=nn.MSELoss,
max_epochs=10,
optimizer=optim.Adam,
lr=0.1,
iterator_train__shuffle=True
)
print("Moving to finding optimal hyperparameters")
lr = (10**np.random.uniform(-5,-2.5,1000)).tolist()
params = {
'optimizer__lr': lr,
'max_epochs':[300,400,500],
'module__num_units': [14,20,28,36,42],
'module__drop' : [0,.1,.2,.3,.4]
}
gs = RandomizedSearchCV(net,params,refit=True,cv=3,scoring='neg_mean_squared_error',n_iter=100)
gs.fit(X_train_scaled,y_train);
def report(results, n_top=3):
for i in range(1, n_top + 1):
candidates = np.flatnonzero(results['rank_test_score'] == i)
for candidate in candidates:
print("Model with rank: {0}".format(i))
print("Mean validation score: {0:.3f} (std: {1:.3f})".format(
results['mean_test_score'][candidate],
results['std_test_score'][candidate]))
print("Parameters: {0}".format(results['params'][candidate]))
print("")
print(report(gs.cv_results_,10))
epochs = [i for i in range(len(gs.best_estimator_.history))]
train_loss = gs.best_estimator_.history[:,'train_loss']
valid_loss = gs.best_estimator_.history[:,'valid_loss']
plt.plot(epochs,train_loss,'g-');
plt.plot(epochs,valid_loss,'r-');
plt.title('Training Loss Curves');
plt.xlabel('Epochs');
plt.ylabel('Mean Squared Error');
plt.legend(['Train','Validation']);
plt.show()
r = permutation_importance(net, X_test, y_test, n_repeats=30,random_state=0)
for i in r.importances_mean.argsort()[::-1]:
if r.importances_mean[i] - 2 * r.importances_std[i] > 0:
print(f"{metabolites.feature_names[i]:<8}"
f"{r.importances_mean[i]:.3f}"
f" +/- {r.importances_std[i]:.3f}")
y_pred_acc = gs.predict(X_test)
print('Accuracy : ' + str(accuracy_score(y_test,y_pred_acc)))
The stack trace indicates that the error stems from the line where I compute the permutation importance. How can I fix this?
Full stacktrace:
*Traceback (most recent call last):
File "//ad..fi/home/h//Desktop/neuralnet/neuralnet_wrapped.py", line 141, in <module>
run()
File "//ad..fi/home/h//Desktop/neuralnet/neuralnet_wrapped.py", line 119, in run
r = permutation_importance(net, X_test, y_test,
File "C:\Users\\AppData\Roaming\Python\Python38\site-packages\sklearn\utils\validation.py", line 73, in inner_f
return f(**kwargs)
File "C:\Users\\AppData\Roaming\Python\Python38\site-packages\sklearn\inspection\_permutation_importance.py", line 132, in permutation_importance
scorer = check_scoring(estimator, scoring=scoring)
File "C:\Users\\AppData\Roaming\Python\Python38\site-packages\sklearn\utils\validation.py", line 73, in inner_f
return f(**kwargs)
File "C:\Users\\AppData\Roaming\Python\Python38\site-packages\sklearn\metrics\_scorer.py", line 425, in check_scoring
raise TypeError(
TypeError: If no scoring is specified, the estimator passed should have a 'score' method. The estimator <class 'skorch.net.NeuralNet'>[uninitialized](
module=<class '__main__.run.<locals>.MultiLayerPredictor'>,
) does not.*
| From the docs:
NeuralNet still has no score method. If you need it, you have to implement it yourself.
This is the problem: NeuralNet has no score method, as the error says, and the documentation states that "you have to implement it yourself". You can check that by looking at the source code too.
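Two possible ways around it, sketched under the assumption that the rest of your setup is as posted (ScoredNeuralNet is a made-up name):
# Option 1: give permutation_importance an explicit scorer, so no score method is required
r = permutation_importance(gs.best_estimator_, X_test, y_test,
                           scoring='neg_mean_squared_error',
                           n_repeats=30, random_state=0)
# Option 2: subclass NeuralNet and implement score yourself
from sklearn.metrics import mean_squared_error
class ScoredNeuralNet(NeuralNet):
    def score(self, X, y):
        # sklearn convention: higher is better, so negate the MSE
        return -mean_squared_error(y, self.predict(X))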
| https://stackoverflow.com/questions/63788527/ |
Questions about Batch Normalization in Pytorch | I have recently been using BN in PyTorch, and I have several questions.
Based on the BN2d documentation in PyTorch, when inferencing (evaluation), it will automatically use the running mean and variance (the running estimates from training) for the BN layer. However, my first question is: when we save the model after training, does it contain the running mean and variance? I originally thought the model would only save the learnable parameters. But the running mean and variance are not really learnable, are they?
By default, when we use eval() in PyTorch, the BN layer will use the running mean and variance stored from training, right? It will not calculate the mean and variance of the mini-batch? (I ask because I see some answers mention that the bad performance of BN when inferencing is caused by a batch size of 1. But if BN uses the running mean and variance from training, why would the test batch size have any influence?)
The third question is related to the second one. Will BN perform the same way every time when inferencing? For example, if I iterate over the test set twice, will the results differ? (The more direct question is: will the mean and variance change during inference?)
I also want to know if it is possible to use the mini-batch mean & variance, or the running mean & variance, for inferencing. If I simply set the BN layers to train mode, won't their learnable parameters also update? Maybe I just want them to get a new mean and variance. Is there a way to do that?
The last question is related to the fourth one. Is it fair to use the mean & variance of all test data, or of a batch of test data, to calculate the mean & variance? By fair I mean: is it improper (tricky?) to use statistics of the test set?
Looking forward to your answers. I'm new to this and willing to learn and discuss with you!!
Thanks in advance!!!
| I got the answers from my senior classmates, and I think they're useful for others. (If you have different points, feel free to comment.)
When we save out the whole model, it will contain the running mean and variance for the BN layers. These two parameters are not learnable (they are not updated in the backward pass, but they are updated in the forward pass).
If you use .eval(), the BN layer will automatically use the running mean and variance stored in that layer, and will not update them in the forward pass again. This means that when inferencing, BN layers use the running mean and variance calculated during training.
Yes, if you simply use .eval(), the BN layer will use the same mean and variance at all times.
It's possible to calculate the running mean and variance based on the test set. Just put the BN layers in train() mode (a minimal sketch follows below). This will not influence the learnable parameters of the BN layers, because during inference we only have a forward pass (which updates the mean and variance) without a backward pass. We may also have to reset the running mean and variance at the beginning of the evaluation or inference period to make them completely unrelated to the training process.
This is something of a trick. I hear that some GAN papers adopt such strategies for BN layers.
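A minimal sketch of point 4, putting only the BN layers in train mode (bn_train_mode is a made-up helper name):
import torch.nn as nn
def bn_train_mode(model, reset_stats=False):
    model.eval()  # everything else stays in eval mode
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            if reset_stats:
                m.reset_running_stats()  # forget the statistics from training
            m.train()  # running stats now update on every forward pass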
| https://stackoverflow.com/questions/63799763/ |
What is the correct way to implement gradient accumulation in pytorch? | Broadly there are two ways:
Call loss.backward() on every batch, but only call optimizer.step() and optimizer.zero_grad() every N batches. Is it the case that the gradients of the N batches are summed up? Hence to maintain the same learning rate per effective batch, we have to divide the learning rate by N?
Accumulate loss instead of gradient, and call (loss / N).backward() every N batches. This is easy to understand, but does it defeat the purpose of saving memory (because the gradients of the N batches are computed at once)? The learning rate doesn't need adjusting to maintain the same learning rate per effective batch, but should be multiplied by N if you want to maintain the same learning rate per example.
Which one is better, or more commonly used in packages such as pytorch-lightning? It seems that optimizer.zero_grad() is a perfect fit for gradient accumulation, therefore (1) should be recommended.
| You can use PyTorch Lightning and you get this feature out of the box; see the Trainer argument accumulate_grad_batches, which you can also pair with gradient_clip_val, more in the docs.
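If you want to do it manually in plain PyTorch, here is a minimal sketch of approach (1), with the loss scaled by N so the accumulated gradients match an averaged big batch (model, criterion, optimizer and train_loader are assumed to exist):
accum_steps = 4  # the N from the question
optimizer.zero_grad()
for i, (x, y) in enumerate(train_loader):
    loss = criterion(model(x), y) / accum_steps  # scale so the summed grads average out
    loss.backward()  # gradients are accumulated (summed) into .grad
    if (i + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()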
| https://stackoverflow.com/questions/63815311/ |
What is the machine precision in pytorch and when should one use doubles? | I am running experiments on synthetic data (e.g. fitting a sine curve) and I get errors in PyTorch that are really small. One is about 2.00e-7. I was reading about machine precision and it seems really close to the machine precision. How do I know if this is going to cause problems (or if perhaps it already has, e.g. I can't differentiate between the different errors since they are "machine zero")?
errors:
p = np.array([2.3078539778125768e-07,
1.9997889411762922e-07,
2.729681222011256e-07,
3.2532371115080884e-07])
m = np.array([3.309504692539563e-07,
4.1058904888091606e-06,
6.8326703386053605e-06,
7.4616147721799645e-06])
What confuses me is that I tried adding what I thought was too small a number, so that it would make no difference, but it did make a difference (i.e. I tried to get a+eps == a using an eps smaller than machine precision):
import torch
x1 = torch.tensor(1e-6)
x2 = torch.tensor(1e-7)
x3 = torch.tensor(1e-8)
x4 = torch.tensor(1e-9)
eps = torch.tensor(1e-11)
print(x1.dtype)
print(x1)
print(x1+eps)
print(x2)
print(x2+eps)
print(x3)
print(x3+eps)
print(x4)
print(x4+eps)
output:
torch.float32
tensor(1.0000e-06)
tensor(1.0000e-06)
tensor(1.0000e-07)
tensor(1.0001e-07)
tensor(1.0000e-08)
tensor(1.0010e-08)
tensor(1.0000e-09)
tensor(1.0100e-09)
I expected everything to be zero but it wasn't. Can someone explain to me what is going on? If I am getting losses close to 1e-7, should I use double rather than float? Googling, it seems that single is the precision for float, afaik.
If I want to use doubles, what are the pros/cons, and what is the least error-prone way to change my code? Is a single change to the double type enough, or is there a global flag?
Useful reminder:
recall machine precision:
Machine precision is the smallest number ε such that the difference between 1 and 1 + ε is nonzero, i.e., it is the smallest difference between these two numbers that the computer recognizes. For IEEE-754 single precision this is 2^-23 (approximately 10^-7), while for IEEE-754 double precision it is 2^-52 (approximately 10^-16).
Potential solution:
OK, let's see if this is a good summary of what I think is correct (modulo ignoring some float details that I don't fully understand right now, like the bias).
I've concluded that the best thing for me is to make sure my errors/numbers have two properties:
they are within 7 decimal digits of each other (due to the mantissa being 24 bits; as you pointed out, log_10(2^24) ≈ 7.225)
they are far enough from the edges. For this I take the mantissa to be 23 bits away from the lower edge (point position about -128+23) and the same for the largest edge but 127-23.
As long as we satisfy that, more or less, we avoid adding two numbers that are too close for the machine to distinguish (condition 1) and avoid overflows/underflows (condition 2).
Perhaps there is a small detail I might be missing with the bias or some other float detail (like representing infinity, NaN). But I believe that is correct.
If anyone can correct the details, that would be fantastic.
useful links:
https://www.cfd-online.com/Wiki/Machine_precision
https://discuss.pytorch.org/t/what-is-the-machine-precision-of-pytorch-with-cpus-or-gpus/9384/3
Should I use double or float?
Double precision floating values in Python?
Difference between Python float and numpy float32
| I think you misunderstood how floating points work. There are many good resources (e.g.) about what floating points are, so I am not going into details here.
The key is that floating points are dynamic. They can represent the addition of very large values up to a certain accuracy, or the addition of very small values up to a certain accuracy, but not the addition of a very large value with a very small value. They adjust their ranges on-the-go.
So this is why your testing result is different than the explanation in "machine precision" -- you are adding two very small values, but that paragraph explicitly said "1+eps". 1 is a much larger value than 1e-6. The following thus will work as expected:
import torch
x1 = torch.tensor(1).float()
eps = torch.tensor(1e-11)
print(x1.dtype)
print(x1)
print(x1+eps)
Output:
torch.float32
tensor(1.)
tensor(1.)
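You can also query the machine epsilon of each dtype directly:
print(torch.finfo(torch.float32).eps)  # ~1.19e-07
print(torch.finfo(torch.float64).eps)  # ~2.22e-16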
The second question -- when should you use double?
Pros - higher accuracy.
Cons - much slower (hardware is usually optimized for float), doubled memory usage.
That really depends on your application. Most of the time I would just say no. As I said, you need double when you have very large and very small values coexisting in the network. That should not happen anyway with proper normalization of the data.
(Another reason is exponent overflow, i.e., when you need to represent very, very large or small values, beyond roughly 1e38 or below 1e-38.)
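If you do decide to switch, a sketch of the two usual options (model and x are assumed to already exist):
torch.set_default_dtype(torch.float64)  # new floating-point tensors default to double
# or convert an existing model and its inputs explicitly:
model = model.double()
x = x.double()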
| https://stackoverflow.com/questions/63818676/ |
pytorch dataloader default_collate argument use with to(device) | I've been trying to integrate to(device) inside my dataloader, as seen in https://github.com/pytorch/pytorch/issues/11372
I defined it on FashionMNIST in the following way:
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
batch_size = 32
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/',
download=True,
train=True,
transform=transforms.ToTensor())
train_loader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=False, collate_fn=lambda x: default_collate(x).to(device))
But i get the following error:
AttributeError: 'list' object has no attribute 'to'
It seems that the output of default collate is a list of length 2 with the first element being the image tensor and the second the labels tensor (since its the output of next(iter(train_loader)) with collate_fn=None), so I tried with the following defined function:
def to_device_list(l, device):
return [l[0].to(device), l[1].to(device)]
train_loader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=False, collate_fn=lambda x: to_device_list(x, device))
And I got the following error:
AttributeError: 'tuple' object has no attribute 'to'
Any help please on how to do it?
| The Fashion-MNIST dataset returns a tuple of img and target, where img is a tensor and target is an int class label.
Now, your dataloader takes batch_size samples from the dataset class to get a list of samples. Note that this list of samples is now List[Tuple[Tensor, int]] (using typing annotations here). Then it calls the collate function to convert List[Tuple[Tensor, int]] into List[Tensor], where this list has 2 tensors. The first tensor is a stacked array of images of size [32, 1, 28, 28] (where 32 is the batch size), and the second is a tensor of int values (the class labels).
The default_collate function just converts an array of structures into a structure of arrays.
Now, when you use collate_fn=lambda x: default_collate(x).to(device), notice that default_collate returns a list of tensors. So calling .to on the list won't work; it should be called on all elements of the list.
Solution
Use
collate_fn=lambda x: list(map(lambda x: x.to(device), default_collate(x))))
The map function transfers each element of the list (from default_collate) to cuda; finally, call list, since map is evaluated lazily in Python 3.
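As an aside, the more common pattern avoids a custom collate_fn entirely and moves each batch inside the training loop (a sketch reusing the names from the question):
train_loader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,
                                           shuffle=False, pin_memory=True)
for images, labels in train_loader:
    images = images.to(device, non_blocking=True)  # async copy thanks to pinned memory
    labels = labels.to(device, non_blocking=True)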
| https://stackoverflow.com/questions/63827178/ |
HParams in Tensorboard, Run IDs and naming | I'm using SummaryWriter.add_hparams(params, values) to log hyperparameters during training of my Seq2Seq model. My runs are named with a timestamp like 2020-09-10 14-50-27. In the HParams tab in Tensorboard everything looks fine, but the HParam Trial IDs are different; they have another string of numbers attached, like this: 2020-09-10 14-50-27/1599742915.9712806. These also appear in the Scalars tab as different runs, which is quite inconvenient. Is there a way to turn off this extra naming or to stop them from appearing in the Scalars tab? I use pytorch and its summarywriter like this:
params = {
'max_epochs' : max_epochs,
'learning_rate': learning_rate,
'batch_size': batch_size,
'optimizer_name': optimizer_name,
'dropout_fc': dropout_fc
}
values = {
'hparam/hp_total_time': t1_stop - t0_start,
'hparam/score' : best_score
}
tb.add_hparams(params, values)
| As Aniket mentioned, there is not enough in your issue description to be entirely sure what the issue is.
However, if you are using Pytorch, I suspect you may be referring to the behaviour also reported in this issue. The add_hparams method creates a new subfolder with current timestamp when called, which is 1599742915.9712806 in your case.
TensorBoard uses the hierarchical folder structure to organise (group) runs, which is why 2020-09-10 14-50-27/1599742915.9712806 and 2020-09-10 14-50-27 appear as different runs.
As per the issue I mentioned above, there does not seem to be an "official" way to modify this behaviour, but if you read the comments you will find a few custom classes that have been proposed to help; one of them is sketched below.
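For reference, one of those proposed workarounds looks roughly like this (an untested sketch adapted from the issue comments; the subclass name is made up). It writes the hparams summaries into the writer's own log dir instead of a timestamped subfolder:
from torch.utils.tensorboard import SummaryWriter
from torch.utils.tensorboard.summary import hparams
class SummaryWriterSameDir(SummaryWriter):
    def add_hparams(self, hparam_dict, metric_dict):
        exp, ssi, sei = hparams(hparam_dict, metric_dict)
        self.file_writer.add_summary(exp)
        self.file_writer.add_summary(ssi)
        self.file_writer.add_summary(sei)
        for k, v in metric_dict.items():
            self.add_scalar(k, v)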
| https://stackoverflow.com/questions/63830848/ |
Trying to access subset of mnist dataset in pytorch [equal samples from each class] | Trying to access a subset of the MNIST dataset in PyTorch (equal samples from each class), but getting this error:
prng = RandomState(42)
random_permute = prng.permutation(np.arange(0, 6000))[0:3000]
indx = np.concatenate([np.where(np.array(mnist_data.targets) == classe)[0][random_permute] for classe in range(0,10)])
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-178-038015f76b77> in <module>
----> 1 indx = np.concatenate([np.where(np.array(mnist_data.targets) == classe)[0][random_permute] for classe in range(0,10)])
<ipython-input-178-038015f76b77> in <listcomp>(.0)
----> 1 indx = np.concatenate([np.where(np.array(mnist_data.targets) == classe)[0][random_permute] for classe in range(0,10)])
IndexError: index 5992 is out of bounds for axis 0 with size 5923
| The MNIST dataset does not have a uniform distribution of targets. You are getting this error because class 0 in MNIST contains only 5923 samples, while your permutation draws indices up to 5999.
nums = [0]*10
for i in range(60000):
nums[(int(mnist_data.targets[i]))] += 1
print(nums)
This will print [5923, 6742, 5958, 6131, 5842, 5421, 5918, 6265, 5851, 5949].
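A sketch of one way to fix the sampling: permute each class's own indices instead of assuming 6000 samples per class:
prng = RandomState(42)
targets = np.array(mnist_data.targets)
indx = np.concatenate([
    prng.permutation(np.where(targets == classe)[0])[:3000]
    for classe in range(10)
])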
| https://stackoverflow.com/questions/63851063/ |
no CUDA-capable device is detected at /pytorch/aten/src/THC/THCGeneral.cpp:47 in Google Colab | I have been using their algorithm for days and I tried several, but none of them gave me this error until now.
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-5-dbd18151b569> in <module>()
1 from demo import load_checkpoints
2 generator, kp_detector = load_checkpoints(config_path='config/vox-256.yaml',
----> 3 checkpoint_path='/content/gdrive/My Drive/first-order-motion-model/vox-cpk.pth.tar')
10 frames
/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py in _lazy_init()
188 raise AssertionError(
189 "libcudart functions unavailable. It looks like you have a broken build?")
--> 190 torch._C._cuda_init()
191 # Some of the queued calls may reentrantly call _lazy_init();
192 # we need to just return without initializing in that case.
RuntimeError: cuda runtime error (100) : no CUDA-capable device is detected at /pytorch/aten/src/THC/THCGeneral.cpp:47
| You have not enabled GPU on your notebook; enable it in Runtime > Change runtime type by selecting GPU as the hardware accelerator.
| https://stackoverflow.com/questions/63855269/ |
Saving Model State and Load in Google Colab | I have 500 epochs in total to train, but it is taking 8 minutes per epoch to complete in Google Colab. Can anyone help me with how I can save my model state after a particular number of epochs and start the training again from where I left off in Google Colab?
| If you want to save the model to Google Drive after a certain number of epochs in PyTorch, you can do so as follows.
First, mount Google Drive:
from google.colab import drive
drive.mount('/content/gdrive')
Then run the cell in Colab and authenticate. Now Google Drive should be mounted.
Now set the path (model_name and model_save_name are strings you define):
PATH = f"/content/gdrive/My Drive/{model_name}/{model_save_name}"
Then you can save the model:
if epoch % number_epoch_to_save == 0:
    torch.save(model.state_dict(), PATH)
Example documentation can be found at https://pytorch.org/tutorials/beginner/saving_loading_models.html
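If you also want to resume training from where you left off, a common pattern is to checkpoint the epoch and optimizer state as well (a sketch; model and optimizer are assumed to exist):
torch.save({'epoch': epoch,
            'model_state_dict': model.state_dict(),
            'optimizer_state_dict': optimizer.state_dict()}, PATH)
# later, to resume:
checkpoint = torch.load(PATH)
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
start_epoch = checkpoint['epoch'] + 1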
| https://stackoverflow.com/questions/63879856/ |
how initialize a torch of matrices | Hello, I'm trying to create a tensor that holds N matrices of size n by n. I tried to initialize it with
Q=torch.zeros(N, (n,n))
but i get the following error
zeros(): argument 'size' must be tuple of ints, but found element of type tuple at pos 2
Also, I want to fill it later with random matrices with integer values, and I will turn them semidefinite, so I thought of the following:
for i in range(0,N):
Q[i]=torch.randint(0,10,(n,n))
Q = Q*Q.t()
Is it correct? Is there any other, faster way with a built-in command?
| N matrices of n x n size are equivalent to a three-dimensional tensor of shape [N, n, n]. You can create it like so:
import torch
N = 32
n = 10
tensor = torch.randint(0, 10, size=(N, n, n))
No need to fill it with zeros to begin with, you can create it directly.
You can also iterate over 0 dimension similar to what you did:
for i in range(0, N):
tensor[i] = tensor[i] * tensor[i].T
See @Dishin H Goyani's answer for a faster approach with permutation.
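For reference, the batched equivalent of the loop above, multiplying each matrix elementwise by its own transpose in one shot:
tensor = tensor * tensor.transpose(1, 2)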
| https://stackoverflow.com/questions/63884811/ |
RuntimeError: Expected 3-dimensional input for 3-dimensional weight [64, 512, 1], but got 2-dimensional input of size [4, 512] instead | Hello, below is the PyTorch model I am trying to run, but I am getting an error; I have posted the error trace as well. It was running very well until I added the convolution layers. I am still new to deep learning and PyTorch, so I apologize if this is a silly question. I am using conv1d, so why does conv1d expect 3-dimensional input, and it is also odd that it is getting a 2D input.
class Net(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(CROP_SIZE*CROP_SIZE*3, 512)
self.conv1d1 = nn.Conv1d(in_channels=512, out_channels=64, kernel_size=1, stride=2)
self.fc2 = nn.Linear(64, 128)
self.conv1d2 = nn.Conv1d(in_channels=128, out_channels=64, kernel_size=1, stride=2)
self.fc3 = nn.Linear(64, 256)
self.conv1d3 = nn.Conv1d(in_channels=256, out_channels=64, kernel_size=1, stride=2)
self.fc4 = nn.Linear(64, 256)
self.fc4 = nn.Linear(256, 128)
self.fc5 = nn.Linear(128, 64)
self.fc6 = nn.Linear(64, 32)
self.fc7 = nn.Linear(32, 64)
self.fc8 = nn.Linear(64, frame['landmark_id'].nunique())
def forward(self, x):
x = F.relu(self.conv1d1(self.fc1(x)))
x = F.relu(self.conv1d2(self.fc2(x)))
x = F.relu(self.conv1d3(self.fc3(x)))
x = F.relu(self.fc4(x))
x = F.relu(self.fc5(x))
x = F.relu(self.fc6(x))
x = F.relu(self.fc7(x))
x = self.fc8(x)
return F.log_softmax(x, dim=1)
net = Net()
import torch.optim as optim
loss_function = nn.CrossEntropyLoss()
net.to(torch.device('cuda:0'))
for epoch in range(3): # 3 full passes over the data
optimizer = optim.Adam(net.parameters(), lr=0.001)
for data in tqdm(train_loader): # `data` is a batch of data
X = data['image'].to(device) # X is the batch of features
y = data['landmarks'].to(device) # y is the batch of targets.
optimizer.zero_grad() # sets gradients to 0 before loss calc. You will do this likely every step.
output = net(X.view(-1,CROP_SIZE*CROP_SIZE*3)) # pass in the reshaped batch
# print(np.argmax(output))
# print(y)
loss = F.nll_loss(output, y) # calc and grab the loss value
loss.backward() # apply this loss backwards thru the network's parameters
optimizer.step() # attempt to optimize weights to account for loss/gradients
print(loss) # print loss. We hope loss (a measure of wrong-ness) declines!
Error trace
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-42-f5ed7999ce57> in <module>
5 y = data['landmarks'].to(device) # y is the batch of targets.
6 optimizer.zero_grad() # sets gradients to 0 before loss calc. You will do this likely every step.
----> 7 output = net(X.view(-1,CROP_SIZE*CROP_SIZE*3)) # pass in the reshaped batch
8 # print(np.argmax(output))
9 # print(y)
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
<ipython-input-37-6d3e34d425a0> in forward(self, x)
16
17 def forward(self, x):
---> 18 x = F.relu(self.conv1d1(self.fc1(x)))
19 x = F.relu(self.conv1d2(self.fc2(x)))
20 x = F.relu(self.conv1d3(self.fc3(x)))
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/conv.py in forward(self, input)
210 _single(0), self.dilation, self.groups)
211 return F.conv1d(input, self.weight, self.bias, self.stride,
--> 212 self.padding, self.dilation, self.groups)
213
214
RuntimeError: Expected 3-dimensional input for 3-dimensional weight [64, 512, 1], but got 2-dimensional input of size [4, 512] instead
| You should learn how convolutions work (e.g. see this answer) and some neural network basics (this tutorial from PyTorch).
Basically, Conv1d expects inputs of shape [batch, channels, features] (where features can be some timesteps and can vary, see example).
nn.Linear expects shape [batch, features] as it is fully connected and each input feature is connected to each output feature.
You can verify those shapes by yourself, for torch.nn.Linear:
import torch
layer = torch.nn.Linear(20, 10)
data = torch.randn(64, 20) # [batch, in_features]
layer(data).shape # [64, 10], [batch, out_features]
For Conv1d:
layer = torch.nn.Conv1d(in_channels=20, out_channels=10, kernel_size=3, padding=1)
data = torch.randn(64, 20, 15) # [batch, channels, timesteps]
layer(data).shape # [64, 10, 15], [batch, out_features]
layer(torch.randn(32, 20, 25)).shape # [32, 10, 25]
BTW. As you are working with images, you should use torch.nn.Conv2d instead.
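A sketch of that suggestion, keeping the image shape instead of flattening (CROP_SIZE is taken from the question):
x = X.view(-1, 3, CROP_SIZE, CROP_SIZE)  # [batch, channels, H, W]
conv = torch.nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
out = conv(x)  # [batch, 16, CROP_SIZE, CROP_SIZE]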
| https://stackoverflow.com/questions/63885053/ |
How to train deeplabv3 on custom dataset on pytorch? | I am trying to do image segmentation and I got to know the Google work of DeepLabv3.
This is the reference to the paper:
https://arxiv.org/abs/1706.05587
Chen, L.C., Papandreou, G., Schroff, F. and Adam, H., 2017. Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587.
This architecture is trained to do segmentation of the 20+1 classes of the Pascal VOC 2012 Dataset (20 foreground and 1 background class).
Pytorch provides pre-trained deeplabv3 on Pascal dataset, I would like to train the same architecture on cityscapes. Therefore, there are different classes with respect to the Pascal VOC dataset. I would like to know what is the efficient way to do it?
For now this is the only code I wrote:
import torch
model = torch.hub.load('pytorch/vision:v0.6.0', 'deeplabv3_resnet101', pretrained=True)
model.eval()
|
Write a custom Dataset class, which should inherit the Dataset class and implement at least the 2 methods __len__ and __getitem__ (a minimal skeleton is sketched after this list).
Modify the pretrained DeeplabV3 head with your custom number of output channels.
from torchvision.models.segmentation.deeplabv3 import DeepLabHead
from torchvision.models.segmentation import deeplabv3_resnet101
def custom_DeepLabv3(out_channel):
model = deeplabv3_resnet101(pretrained=True, progress=True)
model.classifier = DeepLabHead(2048, out_channel)
#Set the model in training mode
model.train()
return model
Train and evaluate the model.
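For step 1, a minimal skeleton might look like this (SegmentationDataset and its fields are hypothetical names; adapt the loading and transforms to Cityscapes):
from torch.utils.data import Dataset
class SegmentationDataset(Dataset):
    def __init__(self, images, masks, transform=None):
        self.images, self.masks, self.transform = images, masks, transform
    def __len__(self):
        return len(self.images)
    def __getitem__(self, idx):
        image, mask = self.images[idx], self.masks[idx]
        if self.transform is not None:
            image = self.transform(image)
        return image, mask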
| https://stackoverflow.com/questions/63892031/ |
Pytorch: How to concatenate lists within a tensor? | I have a tensor of size (2, b, h) and I want to change it to the following size: (b, 2*h), where the corresponding lists are concatenated, for example:
a = torch.tensor([[[1, 2, 3], [4, 5, 6], [4, 4, 4]],
[[4, 5, 6], [7, 8, 9], [5, 5, 5]]])
I want:
b = tensor([[1, 2, 3, 4, 5, 6],
[4, 5, 6, 7, 8, 9],
[4, 4, 4, 5, 5, 5]])
| Use permute first to change the order of dimensions, then contiguous to lay the permuted tensor out contiguously in memory, and finally view to reshape the tensor.
b = a.permute(1,0,2).contiguous().view(a.shape[1],-1)
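An equivalent alternative with torch.cat, since the first dimension has size 2:
b = torch.cat([a[0], a[1]], dim=1)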
| https://stackoverflow.com/questions/63909009/ |
How to create a copy of nn.Sequential in torch? | I am trying to create a copy of a nn.Sequential network. For example, the following is the easiest way to do the same-
net = nn.Sequential(
nn.Conv2d(16, 32, 3, stride=2),
nn.ReLU(),
nn.Conv2d(32, 64, 3, stride=2),
nn.ReLU(),
)
net_copy = nn.Sequential(
nn.Conv2d(16, 32, 3, stride=2),
nn.ReLU(),
nn.Conv2d(32, 64, 3, stride=2),
nn.ReLU(),
)
However, it is not so great to define the network again. I tried the following ways, but they didn't work-
net_copy = nn.Sequential(net): In this approach, it seems that net_copy is just a shared pointer of net
net_copy = nn.Sequential(*net.modules()): In this approach, net_copy contains many more layers.
Finally, I tried deepcopy in the following way, which worked fine-
net_copy = deepcopy(net)
However, I am wondering if it is the proper way. I assume it is fine because it works.
| Well, I just use torch.load and torch.save with io.BytesIO
import io, torch
# write to a buffer
buffer = io.BytesIO()
torch.save(model, buffer) #<--- model is some nn.module
print(buffer.tell()) #<---- no of bytes written
del model
# read from buffer
buffer.seek(0) #<--- must seek to origin every time before reading
model = torch.load(buffer)
del buffer
| https://stackoverflow.com/questions/63913170/ |
LayerNorm inside nn.Sequential in torch | I am trying to use LayerNorm inside nn.Sequential in torch. This is what I am looking for-
import torch.nn as nn
class LayerNormCnn(nn.Module):
def __init__(self):
super(LayerNormCnn, self).__init__()
self.net = nn.Sequential(
nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
nn.LayerNorm(),
nn.ReLU(),
nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
nn.LayerNorm(),
nn.ReLU(),
)
def forward(self, x):
x = self.net(x)
return x
Unfortunately, it doesn't work because LayerNorm requires normalized_shape as input. The code above throws following exception-
nn.LayerNorm(),
TypeError: __init__() missing 1 required positional argument: 'normalized_shape'
Right now, this is how I have implemented it-
import torch.nn as nn
import torch.nn.functional as F
class LayerNormCnn(nn.Module):
def __init__(self, state_shape):
super(LayerNormCnn, self).__init__()
self.conv1 = nn.Conv2d(state_shape[0], 32, kernel_size=3, stride=2, padding=1)
self.conv2 = nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1)
# compute shape by doing a forward pass
with torch.no_grad():
fake_input = torch.randn(1, *state_shape)
out = self.conv1(fake_input)
bn1_size = out.size()[1:]
out = self.conv2(out)
bn2_size = out.size()[1:]
self.bn1 = nn.LayerNorm(bn1_size)
self.bn2 = nn.LayerNorm(bn2_size)
def forward(self, x):
x = F.relu(self.bn1(self.conv1(x)))
x = F.relu(self.bn2(self.conv2(x)))
return x
if __name__ == '__main__':
in_shape = (3, 128, 128)
batch_size = 32
model = LayerNormCnn(in_shape)
x = torch.randn((batch_size,) + in_shape)
out = model(x)
print(out.shape)
Is it possible to use LayerNorm inside nn.Sequential?
| The original layer normalisation paper advised against using layer normalisation in CNNs, as receptive fields around the boundary of images will have different values as opposed to the receptive fields in the actual image content. This issue does not arise with RNNs, which is what layer norm was originally tested for. Are you sure you want to be using LayerNorm? If you're looking to compare a different normalisation technique against BatchNorm, consider GroupNorm. This gets rid of the LayerNorm assumption that all channels in a layer contribute equally to a prediction, which is problematic particularly if the layer is convolutional. Instead, each channel is divided further into groups, that still allows a GN layer to learn different statistics across channels.
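As an illustration (not the OP's exact architecture), GroupNorm drops straight into nn.Sequential because it only needs the channel count, not the full input shape:
net = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
    nn.GroupNorm(num_groups=8, num_channels=32),  # 8 groups of 4 channels
    nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
    nn.GroupNorm(num_groups=8, num_channels=64),  # 8 groups of 8 channels
    nn.ReLU(),
)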
Please refer here for related discussion.
| https://stackoverflow.com/questions/63914843/ |
Using sigmoid output for cross entropy loss on Pytorch | I'm trying to modify YOLO v1 to work with my task, where each object has only 1 class. (e.g.: an obj cannot be both cat and dog)
Due to the architecture (other outputs, like the localization prediction, must use regression), sigmoid was applied to the last output of the model (f.sigmoid(nearly_last_output)). And for classification, YOLO v1 also uses MSE as the loss. But as far as I know, MSE sometimes doesn't work as well as cross entropy for one-hot targets like what I want.
To be specific, the GT is like this: 0 0 0 0 1 (let's say we have only 5 classes in total; each sample has only 1 class, so there is only one 1 among them; of course this is the 5th class in this example)
and output model at classification part: 0.1 0.1 0.9 0.2 0.1
I found some suggestions to use nn.BCE / nn.BCEWithLogitsLoss, but I think I should ask here to be sure, since I'm not good at math and maybe I'm wrong somewhere, so I'm just asking to learn more and to know for sure what I should use correctly.
|
MSE loss is usually used for regression problem.
For binary classification, you can use either BCE or BCEWithLogitsLoss. BCEWithLogitsLoss combines sigmoid with BCE loss, so if sigmoid is applied on the last layer, you can directly use BCE (see the sketch after this list).
The GT mentioned in your case refers to a 'multi-class' classification problem, and the output shown doesn't really correspond to multi-class classification. So, in this case, you can apply CrossEntropyLoss, which combines softmax and log loss and is suitable for 'multi-class' classification problems.
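A minimal sketch of point 2 (nearly_last_output is the name from the question; gt is an assumed float tensor of 0s/1s with the same shape):
logits = nearly_last_output  # raw scores, before sigmoid
loss = nn.BCEWithLogitsLoss()(logits, gt)  # sigmoid is applied internally
# equivalent but less numerically stable, if sigmoid is already applied:
loss_alt = nn.BCELoss()(torch.sigmoid(logits), gt)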
| https://stackoverflow.com/questions/63914849/ |
Get Predictions from Trained Pytorch Model | I am using transfer learning to fine tune an inception_v3 model. After I train the model and save off the best version, I am attempting to use it to generate predictions for my test set. Below is an example of my attempt on one image.
img_test=Image.open("img.png")
#Perform same transformations to image that the model used
transform_pipeline = transforms.Compose([
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
img_test = transform_pipeline(img_test)
# I believe this is adding in the batch size of 1, but in looking around online it looked like I needed it
img = img_test.unsqueeze(0)
img = Variable(img)
model_ft(img)
When I do the above I get
RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same
This seems to imply my model weights are on my GPU and the variable is on the CPU. How do I move one or the other over so I can use it, or reference one that is on the opposite processor?
| As the error said, it seems that the input of the model (your img_test) is on the CPU.
Try to move the image to cuda before sending it through your pre-trained model:
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
img_test = img_test.to(device)
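Beyond the device fix, it is also worth switching to inference mode for predictions (a sketch, where img is the batched tensor from the question):
model_ft.eval()  # disable dropout / use BN running stats
with torch.no_grad():  # no autograd bookkeeping needed for inference
    output = model_ft(img.to(device))
pred = output.argmax(dim=1)  # predicted class index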
| https://stackoverflow.com/questions/63921487/ |
How to retain 2D (or more) shape when using pytrorch masked_select | Suppose I have the following two matching shape tensors:
a = tensor([[ 0.0113, -0.1666, 0.5960, -0.0667], [-0.0977, -0.1984, 0.5153, 0.0420]])
selectors = tensor([[ True, True, False, False], [ True, False, True, False]])
When using torch.masked_select to find the values in a that match True indices in selectors like this:
torch.masked_select(a, selectors)
The output will be in 1D shape instead of the original 2D shape:
tensor([ 0.0113, -0.1666, -0.0977, 0.5153])
This is consistent with masked_select behavior as it is given in the documentation (torch.masked_select). However, my goal is to get a result that matches the shape of the two original tensors. I.e.:
tensor([[0.0113, -0.1666], [-0.0977, 0.5153]])
Is there a way to get this without having to loop over all the elements in the tensors and find the mask for each one? Please note that I have also looked into using torch.where, but it doesn't fit the case I have as I see it.
| As @jodag pointed out, for general inputs, each row on the desired masked result might have a different number of elements, depending on how many True values there are on the same row in selectors. However, you could overcome this by allowing trailing zero padding in the result.
Basic solution:
indices = torch.masked_fill(torch.cumsum(selectors.int(), dim=1), ~selectors, 0)
masked = torch.scatter(input=torch.zeros_like(a), dim=1, index=indices, src=a)[:,1:]
Explanation:
By applying cumsum() row-wise over selectors, we compute for each unmasked element in a the target column number it should be copied to in the output tensor. Then, scatter() performs a row-wise scattering of a's elements to these computed target locations. We leave all masked elements with index 0, so that the first element in each row of the result contains one of the masked elements (arbitrarily; we don't care which). We then ignore these unwanted first values by taking the slice [:,1:]. The resulting masked output tensor has exactly the same size as the input a (this is the maximum needed size, for the case where selectors has a row of all True values).
Usage example:
>>> a = torch.tensor([[ 1, 2, 3, 4, 5, 6], [10, 20, 30, 40, 50, 60]])
>>> selectors = torch.tensor([[ True, False, False, True, False, True], [False, False, True, True, False, False]])
>>> torch.cumsum(selectors.int(), dim=1)
tensor([[1, 1, 1, 2, 2, 3],
[0, 0, 1, 2, 2, 2]])
>>> indices = torch.masked_fill(torch.cumsum(selectors.int(), dim=1), ~selectors, 0)
>>> indices
tensor([[1, 0, 0, 2, 0, 3],
[0, 0, 1, 2, 0, 0]])
>>> torch.scatter(input=torch.zeros_like(a), dim=1, index=indices, src=a)
tensor([[ 5, 1, 4, 6, 0, 0],
[60, 30, 40, 0, 0, 0]])
>>> torch.scatter(input=torch.zeros_like(a), dim=1, index=indices, src=a)[:,1:]
tensor([[ 1, 4, 6, 0, 0],
[30, 40, 0, 0, 0]])
Adapting the output size: Here, the length of dim=1 of the resulting masked output tensor is the maximum number of unmasked items in any row. For your original show-case, the output shape would be (2,2) as you desired. Note that if this number is not previously known and a is on CUDA, computing it causes an additional host-device synchronization that might affect performance.
To do so, instead of allocating input=torch.zeros_like(a) for scatter(), allocate it by a.new_zeros(size=(a.size(0), torch.max(indices).item() + 1)). The +1 is for the 1st place which is later sliced-out. The host-device synchronization would occur by accessing the result of max() to calculate the allocated output size.
Example:
>>> torch.scatter(input=a.new_zeros(size=(a.size(0), torch.max(indices).item() + 1)), dim=1, index=indices, src=a)[:,1:]
tensor([[ 1, 4, 6],
[30, 40, 0]])
Changing the padding value: If another custom default value is wanted as padding, one could use torch.full_like(a, my_custom_value) rather than torch.zeros_like(a) when allocating the output for scatter().
| https://stackoverflow.com/questions/63928630/ |
Non-deterministic behavior for training a neural network on GPU implemented in PyTorch and with a fixed random seed | I observed a strange behavior of the final accuracy when I run exactly the same experiment (the same code for training a neural net for image classification) with the same random seed on different GPUs (machines). I use only one GPU. Precisely, when I run the experiment on machine_1 the accuracy is 86.37; when I run it on machine_2 the accuracy is 88.0.
There is no variability when I run the experiment multiple times on the same machine. PyTorch and CUDA versions are the same. Could you help me to figure out the reason and fix it?
Machine_1:
NVIDIA-SMI 440.82 Driver Version: 440.82 CUDA Version: 10.2
Machine_2:
NVIDIA-SMI 440.100 Driver Version: 440.100 CUDA Version: 10.2
To fix random seed I use the following code:
random.seed(args.seed)
os.environ['PYTHONHASHSEED'] = str(args.seed)
np.random.seed(args.seed)
torch.manual_seed(args.seed)
torch.cuda.manual_seed(args.seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
| This is what I use:
import torch
import os
import numpy as np
import random
def set_seed(seed):
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
np.random.seed(seed)
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
set_seed(13)
Make sure you have a single function that sets the seeds, and call it once. If you are using Jupyter notebooks, cell execution timing may cause this. Also, the order of functions inside may be important. I never had problems with this code. You may call set_seed() often in code.
| https://stackoverflow.com/questions/63939096/ |
requires_grad = False seems not working in my case | I received a "Cannot insert a Tensor that requires grad as a constant. Consider making it a parameter or input, or detaching the gradient" error with tensor W.
W has the size of (10,10) and grad_fn=<DivBackward0>. The error happens at the second line
def muy(self, x):
V = torch.tensor(self.W - self.lambda_ * torch.eye(self.ENCODING_DIM), requires_grad=False)
return -0.5 * V.inverse().mm(self.b + self.lambda_ * x[:, None])
Other vars, values taken at the time of the error
self.lambda_: 1.0
self.ENCODING_DIM: 10
self.b: torch.Size([10, 1]), requires_grad=True
x: torch.Size([3, 1, 10]), grad_fn=<MulBackward0>
How could I set the result of muy as just an ingredient of the leaf node, so grad through V is required?
I tried this monstrosity, to no avail
def muy(self, x):
V_inv = np.linalg.inv(self.V.detach().numpy())
x_numpy = x[:, None].detach().numpy()
temp= -0.5 * np.matmul(V_inv, self.b.detach().numpy() + self.lambda_ * x_numpy)
return temp
Why I cared about this JIT:
I wanted to use tensorboard to visualize my model; if I understand the error messages right, visualizing models uses the tracer.
EDIT
This still gives the same error, W or W.detach()
with torch.no_grad():
V = self.W - self.lambda_ * torch.eye(self.ENCODING_DIM)
return -0.5 * V.inverse().mm(self.b + self.lambda_ * x[:, None])
| V = torch.tensor(self.W - self.lambda_ * torch.eye(self.ENCODING_DIM), requires_grad=False)
What you are trying to do here doesn't make much sense. torch.tensor(value) is meant for creating tensors from plain Python data (e.g. Python's 5), not for wrapping an existing torch.Tensor.
What you should do is simply this:
V = self.W - self.lambda_ * torch.eye(self.ENCODING_DIM)
If you want to detach self.W for some reason you can do this:
V = self.W.detach() - self.lambda_ * torch.eye(self.ENCODING_DIM)
(this will make a copy of self.W with requires_grad set to False).
You could also use the torch.no_grad() context manager so this operation will not be recorded on the graph, which has the same effect on the graph (but only in this case, not in general), and you won't make a copy of self.W, so this is the advised approach:
with torch.no_grad():
V = self.W - self.lambda_ * torch.eye(self.ENCODING_DIM)
Code to reproduce
Can't reproduce this exact issue based on your code description, see below:
import torch
lambda_ = 1.0
W = torch.randn(10, 10, requires_grad=True)
ENCODING_DIM = 10
b = torch.randn(10, 1, requires_grad=True)
x = torch.randn(3, 1, 10, requires_grad=True)
with torch.no_grad():
V = W - lambda_ * torch.eye(ENCODING_DIM)
result = -0.5 * V.inverse().mm(b + lambda_ * x[:, None])
print(result)
This code gives the following (different!) error:
Traceback (most recent call last): File "foo.py", line 13, in
result = -0.5 * V.inverse().mm(b + lambda_ * x[:, None]) RuntimeError: matrices expected, got 2D, 4D tensors at
/pytorch/aten/src/TH/generic/THTensorMath.cpp:36
| https://stackoverflow.com/questions/63944967/ |
Retrieve elements from a 3D tensor with a 2D index tensor | I am playing around with GPT2 and I have 2 tensors:
O: An output tensor of shaped (B, S-1, V) where B is the batch size S is the the number of timestep and V is the vocabulary size. This is the output of a generative model and is softmaxed along the 2nd dimension.
L: A 2D tensor shaped (B, S-1) where each element is the index of the correct token for each timestep for each sample. This is basically the labels.
I want to extract the predicted probability of the corresponding correct token from tensor O based on tensor L such that I will end up with a 2D tensor shaped (B, S). Is there an efficient way of doing this apart from using loops?
| For reference, I based my answer on this Medium article.
Essentially, your answer lies in torch.gather, assuming that both of your tensors are just regular torch.Tensors (or can be converted to one).
import torch
# Specify some arbitrary dimensions for now
B = 3
V = 6
S = 4
# Make example reproducible
torch.manual_seed(42)
# L necessarily has to be a torch.LongTensor, otherwise indexing will fail.
L = torch.randint(0, V, size=[B, S])
O = torch.rand([B, S, V])
# Now collect the results. L needs to have similar dimension,
# except in the axis you want to collect along.
X = torch.gather(O, dim=2, index=L.unsqueeze(dim=2))
# Make sure X has no "unnecessary" dimension
X = X.squeeze(dim=2)
It is a bit difficult to see whether this produces the exact correct results, which is why I included a random seed that makes the example deterministic, and you can easily verify that it gets you the desired results. However, for clarification, one could also use a lower-dimensional tensor, for which it becomes clearer what exactly torch.gather does.
Note that torch.gather also theoretically allows you to index multiple indexes in the same row. Meaning if you instead had a multiclass example for which multiple values are correct, you could similarly use a tensor L of shape [B, S, number_of_correct_samples].
| https://stackoverflow.com/questions/63950303/ |
Moving a tensor to cuda device cause illegal memory access in Pytorch | I am trying the following snippet in Colab, but it causes the following error.
Is it wrong to move a tensor object to a CUDA device?
import torch
a = torch.Tensor(torch.randn(5,5,5))
# a.device("cuda")
device = torch.device("cuda")
class abc(torch.nn.Module):
def __init__(self):
super().__init__()
self.w1 = torch.nn.Linear(5,5)
def forward(self,x):
return self.w1(x)
mod = abc()
a.cuda()
mod.to(device)
mod(a)
Output:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-10-5372fb4d5512> in <module>()
11 return self.w1(x)
12 mod = abc()
---> 13 a.cuda()
14 mod.to(device)
15 mod(a)
RuntimeError: CUDA error: an illegal memory access was encountered
| This works for me on Google colab:
import torch
a = torch.randn(5,5,5)
a = a.to("cuda") # or just a = torch.randn((5,5,5), device='cuda')
class abc(torch.nn.Module):
def __init__(self):
super().__init__()
self.w1 = torch.nn.Linear(5,5)
def forward(self,x):
return self.w1(x)
mod = abc()
mod.to("cuda")
mod(a)
Output:
tensor([[[ 1.5691e+00, 8.0326e-01, 1.4352e+00, 7.3295e-01, 3.2156e-01],
[ 5.1630e-01, -2.2816e-03, 7.1052e-01, 1.9250e-01, 8.3110e-01],
[ 7.6572e-01, -8.9701e-01, 2.7974e-01, 7.4309e-04, 9.5218e-01],
[ 2.0723e-01, -1.0049e+00, 1.6938e+00, 1.0019e+00, 7.9305e-01],
[-1.0973e-02, -1.1260e-01, 1.0521e+00, -1.3839e-01, -4.2380e-01]],
[[ 1.3870e+00, 1.1620e+00, -3.6523e-01, -5.6704e-01, 4.2481e-01],
[ 1.6204e-01, 8.3231e-02, -5.9607e-01, -1.0912e+00, -6.1651e-01],
[ 2.3584e-01, -5.9825e-01, 1.1670e+00, 9.3185e-01, 4.0269e-01],
[ 1.3120e+00, 1.3967e-01, -5.5048e-01, -9.8143e-01, 3.5059e-01],
[ 8.0019e-01, -1.8983e-02, 2.3792e-01, -5.9157e-01, 3.5816e-01]],
[[ 3.9709e-01, -8.7349e-01, -2.9742e-01, -3.8732e-01, -1.7191e-03],
[-8.7056e-01, -8.8214e-01, 1.0647e+00, 7.7785e-01, 6.3816e-01],
[ 7.4920e-01, -4.0143e-01, 5.9780e-01, 2.7842e-01, 8.1991e-01],
[-5.9389e-02, -4.9465e-01, -3.7322e-03, -7.0475e-01, -2.5955e-01],
[ 1.5722e+00, 6.4410e-01, -5.1310e-02, -1.2716e+00, -1.4607e-01]],
[[ 6.5152e-02, -6.8772e-01, 1.0366e+00, -2.4278e-01, -2.7106e-01],
[ 7.0832e-01, 1.4581e-01, 1.9924e-01, -4.1930e-01, 4.0567e-01],
[ 3.9120e-01, -1.0099e+00, 1.6907e+00, 7.2674e-01, 6.5285e-01],
[-1.3191e-01, -8.6324e-01, -1.2734e-01, -5.6135e-01, -4.1949e-01],
[ 5.4183e-02, -5.6837e-01, 5.1347e-02, -5.3199e-01, 2.2167e-01]],
[[ 9.9237e-02, -5.8725e-01, -3.3042e-01, -8.7371e-01, -2.3261e-01],
[ 5.5485e-01, -3.5022e-01, 1.1516e-01, 3.8139e-02, 4.6032e-01],
[-7.5111e-01, -9.7203e-01, 1.7809e-01, 2.2506e-01, 3.6540e-02],
[ 2.5590e-01, 3.0592e-01, 6.8972e-01, 1.8452e-01, 6.7794e-01],
[-7.6091e-01, -1.3956e+00, 7.8801e-01, -1.7489e-01, -1.0143e+00]]],
device='cuda:0', grad_fn=<AddBackward0>)
| https://stackoverflow.com/questions/63951247/ |
How to avoid overfitting in deep learning when features are binary in nature | I am constructing a deep learning model using 2048-bit binary fingerprints (0's and 1's) for some 2000 samples to predict their outputs (positive (1) or negative (0)). The feature data is quite sparse, i.e. lots of zeros and rare 1's.
I have used 'binary cross entropy', but my validation accuracy doesn't increase beyond 70%. I have balanced data. The model seems to be overfitting. I can't normalize my data since the features are binary. How can I avoid overfitting?
earlystop = EarlyStopping(monitor='val_acc', patience=20, mode='max')
final_model = Sequential()
final_model.add(Dense(1012, input_dim=2048, activation = 'relu'))
final_model.add(BatchNormalization())
final_model.add(Dropout(0.9))
final_model.add(Dense(512, activation='relu'))
final_model.add(BatchNormalization())
final_model.add(Dropout(0.9))
final_model.add(Dense(128, activation='relu'))
final_model.add(BatchNormalization())
final_model.add(Dropout(0.5))
final_model.add(Dense(32, activation='relu'))
final_model.add(Dense(1, activation='sigmoid'))
adam_opt = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)
final_model.compile(loss='binary_crossentropy', optimizer=adam_opt, metrics=['accuracy'])
| If you want to do a binary classification, binary crossentropy is the loss function you are looking for.
Achieving a model that generalizes well involves more than just the right loss function choice (preprocessing the data, finding a proper network architecture, finding the right hyperparameters, ...).
You can find a discussion about generalization of Deep Learning Models here:
https://stats.stackexchange.com/questions/365778/what-should-i-do-when-my-neural-network-doesnt-generalize-well
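As one concrete lever among those, L2 weight regularization can be added to the dense layers (the 1e-4 factor is an arbitrary starting point, not from the original):
from keras import regularizers
final_model.add(Dense(512, activation='relu',
                      kernel_regularizer=regularizers.l2(1e-4)))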
| https://stackoverflow.com/questions/63956262/ |
PyTorch: How to check if some weights are not changed during training? | How can I check if some weights are not changed during training in PyTorch?
As I understand it, one option would be to dump model weights at certain epochs and check whether they changed by iterating over the weights, but maybe there is a simpler way?
| There can be two ways around this:
First
for name, param in model.named_parameters():
if 'weight' in name:
temp = torch.zeros(param.grad.shape)
temp[param.grad != 0] += 1
count_dict[name] += temp
This step comes in after your loss.backward() step in the training module. The count_dict[name] dictionary keeps track of the gradient updates. You can initialize it this way before the start of training:
for name, param in model.named_parameters():
if 'weight' in name:
count_dict[name] = torch.zeros(param.grad.shape)
Now, another way would be to register a hook function, within which you can even update or modify the gradients if you want to. This is not necessary for keeping track of the weight updates, but it comes in handy if you want to do something with the gradients.
Suppose, here I am randomly sparsifying the gradients.
def hook_fn(grad):
'''
Randomly sparsify the gradients
:param grad: Input gradient of the layer
:return: grad_clone - the sparsified FC layer gradients
'''
grad_clone = grad.clone()
temp = torch.cuda.FloatTensor(grad_clone.shape).uniform_()
grad_clone[temp < 0.8] = 0
return grad_clone
And here I give the model the hook.
for name, param in model.named_parameters():
if 'weight' in name:
param.register_hook(hook_fn)
So, this might just sparsify the gradients for you, and you can keep track of gradients in the hook function itself in this way:
def hook_func(module, input, output):
temp = torch.zeros(output.shape)
temp[output != 0] += 1
count_dict[module] += temp
Although, I won't recommend doing this. This is generally useful for visualizing forward-pass features/activations. Also, input and output can be confusing here, because the gradient and parameter inputs and outputs are reversed.
| https://stackoverflow.com/questions/63962561/ |
Pytorch Multi-GPU Issue | I want to train my model with 2 GPUs (ids 5 and 6), so I run my code with CUDA_VISIBLE_DEVICES=5,6 train.py. However, when I printed torch.cuda.current_device I still got id 0 rather than 5 or 6. But torch.cuda.device_count is 2, which seems right. How can I use GPUs 5 and 6 correctly?
| It is most likely correct. PyTorch only sees two GPUs (therefore indexed 0 and 1), which are actually your GPUs 5 and 6.
Check the actual usage with nvidia-smi. If it is still inconsistent, you might need to set an environment variable:
export CUDA_DEVICE_ORDER=PCI_BUS_ID
(See Inconsistency of IDs between 'nvidia-smi -L' and cuDeviceGetName())
| https://stackoverflow.com/questions/63967302/ |
Process stuck when training on multiple nodes using PyTorch DistributedDataParallel | I am trying to run the script mnist-distributed.py from Distributed data parallel training in Pytorch. I have also pasted the same code here. (I have replaced my actual MASTER_ADDR with a.b.c.d for posting here).
import os
import argparse
import torch.multiprocessing as mp
import torchvision
import torchvision.transforms as transforms
import torch
import torch.nn as nn
import torch.distributed as dist
class ConvNet(nn.Module):
def __init__(self, num_classes=10):
super(ConvNet, self).__init__()
self.layer1 = nn.Sequential(
nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(16),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.layer2 = nn.Sequential(
nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.fc = nn.Linear(7*7*32, num_classes)
def forward(self, x):
out = self.layer1(x)
out = self.layer2(out)
out = out.reshape(out.size(0), -1)
out = self.fc(out)
return out
def main():
parser = argparse.ArgumentParser()
parser.add_argument('-n', '--nodes', default=1, type=int, metavar='N')
parser.add_argument('-g', '--gpus', default=1, type=int,
help='number of gpus per node')
parser.add_argument('-nr', '--nr', default=0, type=int,
help='ranking within the nodes')
parser.add_argument('--epochs', default=2, type=int, metavar='N',
help='number of total epochs to run')
args = parser.parse_args()
args.world_size = args.gpus * args.nodes
os.environ['MASTER_ADDR'] = 'a.b.c.d'
os.environ['MASTER_PORT'] = '8890'
mp.spawn(train, nprocs=args.gpus, args=(args,))
def train(gpu, args):
rank = args.nr * args.gpus + gpu
dist.init_process_group(
backend='nccl',
init_method='env://',
world_size=args.world_size,
rank=rank
)
torch.manual_seed(0)
model = ConvNet()
torch.cuda.set_device(gpu)
model.cuda(gpu)
batch_size = 100
# define loss function (criterion) and optimizer
criterion = nn.CrossEntropyLoss().cuda(gpu)
optimizer = torch.optim.SGD(model.parameters(), 1e-4)
# Wrap the model
model = nn.parallel.DistributedDataParallel(model,
device_ids=[gpu])
# Data loading code
train_dataset = torchvision.datasets.MNIST(
root='./data',
train=True,
transform=transforms.ToTensor(),
download=True
)
train_sampler = torch.utils.data.distributed.DistributedSampler(
train_dataset,
num_replicas=args.world_size,
rank=rank
)
train_loader = torch.utils.data.DataLoader(
dataset=train_dataset,
batch_size=batch_size,
shuffle=False,
num_workers=0,
pin_memory=True,
sampler=train_sampler)
total_step = len(train_loader)
for epoch in range(args.epochs):
for i, (images, labels) in enumerate(train_loader):
images = images.cuda(non_blocking=True)
labels = labels.cuda(non_blocking=True)
# Forward pass
outputs = model(images)
loss = criterion(outputs, labels)
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (i + 1) % 100 == 0 and gpu == 0:
print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format(
epoch + 1,
args.epochs,
i + 1,
total_step,
loss.item())
)
if __name__ == '__main__':
main()
There are 2 nodes with 2 GPUs each. I run this command from the terminal of the master node-
python mnist-distributed.py -n 2 -g 2 -nr 0
, and then this from the terminal of the other node-
python mnist-distributed.py -n 2 -g 2 -nr 1
But then my process gets stuck with no output on either terminal.
Running the same code on a single node using the following command works perfectly fine-
python mnist-distributed.py -n 1 -g 2 -nr 0
| I met a similar problem, and it was solved by:
sudo vi /etc/default/grub
Edit it:
#GRUB_CMDLINE_LINUX="" <----- Original commented
GRUB_CMDLINE_LINUX="iommu=soft" <------ Change
sudo update-grub
Reboot to see the change.
Ref: https://github.com/pytorch/pytorch/issues/1637#issuecomment-338268158
| https://stackoverflow.com/questions/63968082/ |
Why am I getting calculated padding input size per channel smaller than kernel size? | I have the following model, but it's returning an error and I'm not sure why. I have tried googling but haven't found anything so far. My input is a numpy array of 6 by 6.
class Net(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(1, 16, kernel_size=(3,3), stride=1, padding=0)
self.conv2 = nn.Conv2d(16, 32, kernel_size=(3,3), stride=1, padding=0)
self.conv3 = nn.Conv2d(32, 64, kernel_size=(3,3), stride=1, padding=0)
self.fc1 = nn.Linear(64*4*4, 320)
self.fc2 = nn.Linear(320, 160)
self.out = nn.Linear(160, 2)
def forward(self, x):
x = self.conv1(x)
x = F.relu(x)
x = F.max_pool2d(x, kernel_size=2, stride=2)
x = self.conv2(x)
x = F.relu(x)
x = F.max_pool2d(x, kernel_size=2, stride=2)
x = self.conv3(x)
x = F.relu(x)
x = F.max_pool2d(x, kernel_size=2, stride=2)
x = x.reshape(-1, 64*4*4)
#x = torch.flatten(x)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.out(x)
return F.softmax(x, dim=1)
My input is a 6x6 numpy array and I get the following error, any idea why?
RuntimeError: Calculated padded input size per channel: (2 x 2). Kernel size: (3 x 3). Kernel size can't be greater than actual input size
| Here is what you can do; I used padding=1 as proposed by Szymon Maszke. The padding is added both to the convolutions and to the max-pooling layers.
import numpy
import torch
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(1, 16, kernel_size=(3,3), stride=1, padding=1)
self.conv2 = nn.Conv2d(16, 32, kernel_size=(3,3), stride=1, padding=1)
self.conv3 = nn.Conv2d(32, 64, kernel_size=(3,3), stride=1, padding=1)
self.fc1 = nn.Linear(64*4*4, 320)
self.fc2 = nn.Linear(320, 160)
self.out = nn.Linear(160, 2)
def forward(self, x):
x = self.conv1(x)
x = F.relu(x)
x = F.max_pool2d(x, kernel_size=2, stride=2)
x = self.conv2(x)
x = F.relu(x)
x = F.max_pool2d(x, kernel_size=3, stride=2, padding=1)
x = self.conv3(x)
x = F.relu(x)
x = F.max_pool2d(x, kernel_size=3, stride=2, padding=1)
x = x.reshape(-1, 64*4*4)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.out(x)
return F.softmax(x, dim=1)
a = numpy.random.rand(6,6)
print(a)
data = torch.tensor(a).float()
print(data.shape)
# data.unsqueeze_(0).unsqueeze_(0)
data= data.expand(16, 1 ,-1,-1)
print(data.shape)
n=Net()
print("Start")
o = n(data)
print(o)
Out:
[[0.89695967 0.09447725 0.0905144 0.52694105 0.66000333 0.10537102]
[0.32854697 0.86046884 0.29804184 0.62988374 0.5965067 0.54139821]
[0.41561266 0.95484358 0.82919364 0.75556819 0.77373267 0.52209278]
[0.46406436 0.6553954 0.60010151 0.86314529 0.70020608 0.16471554]
[0.72863547 0.83846636 0.95122373 0.84322402 0.32264676 0.1233866 ]
[0.75767067 0.56546123 0.7765021 0.35303595 0.3254407 0.84033049]]
torch.Size([6, 6])
torch.Size([16, 1, 6, 6])
Start
tensor([[0.5134, 0.4866]], grad_fn=<SoftmaxBackward>)
By default in PyTorch padding=0, so you need to explicitly set padding=1 when needed.
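To see why the original 6x6 input fails without padding, you can track the spatial size with the standard convolution/pooling formula; a minimal sketch:
def out_size(n, k, s=1, p=0):
    # floor((n + 2p - k) / s) + 1
    return (n + 2 * p - k) // s + 1

n = out_size(6, 3)       # conv1, padding=0 -> 4
n = out_size(n, 2, s=2)  # max pool -> 2
n = out_size(n, 3)       # conv2 needs a 3x3 window on a 2x2 input -> error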
| https://stackoverflow.com/questions/63971920/ |
How to install PyTorch with pipenv and save it to Pipfile and Pipfile.lock? | I'm currently using Pipenv to maintain the Python packages used in a specific project. Most of the downloads I've tried so far have worked as intended; that is, I enter pipenv install [package] and it installs the package into the virtual environment, then records the package information into both the Pipfile and Pipfile.lock.
However, I'm running into some problems installing PyTorch.
I've tried running pipenv install torch, but every time the locking step fails. Instead, I've tried forcing a download directly from the PyTorch website using
pipenv run pip install torch===1.6.0 torchvision===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
And it actually installs! If I run pipenv graph it displays both torch and torchvision with their dependencies. But one problem remains: neither torch nor torchvision are being saved into Pipfile and Pipfile.lock.
Any idea on how I can make this happen?
| When you use pipenv run pip install <package>, that skips the custom pipenv operations of updating the Pipfile and the Pipfile.lock. It is basically equivalent to doing a plain pip install <package> as if you did not have/use pipenv.
The only way to also update the Pipfile's is to use pipenv install.
Unfortunately, as I'm posting this, pipenv does not have an equivalent for pip's -f/--find-links option. The best solution is to specify pytorch's "https://download.pytorch.org/whl/" URLs as an alternative package index, by adding it as a [[source]] in your Pipfile. See this answer from Mohamad and this answer from Mitch McMabers that describe how to do it. I recommend trying out those answers instead.
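Roughly, that recommended approach looks like the sketch below; treat the source name, URL variant, and version pin as assumptions here, and check the linked answers for the exact details:
[[source]]
name = "pytorch"
url = "https://download.pytorch.org/whl/"
verify_ssl = true

[packages]
torch = {version = "==1.6.0", index = "pytorch"}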
A less elegant and quite bad alternative is to manually find the correct torch wheel (.whl) links you need, which usually means looking for the correct link from https://download.pytorch.org/whl/torch_stable.html. Then, create/modify the Pipfile with the specific package versions and URLs to the wheels:
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true
[requires]
python_version = "3.8"
[packages]
torch = {version = "==1.6.0", file = "https://download.pytorch.org/whl/cpu/torch-1.6.0-cp38-none-macosx_10_9_x86_64.whl"}
torchvision = {version = "==0.7.0", file = "https://download.pytorch.org/whl/torchvision-0.7.0-cp38-cp38-macosx_10_9_x86_64.whl"}
Then just do normal pipenv install.
You can confirm the installation with pipenv install --verbose:
Collecting torch==1.6.0
...
Looking up "https://download.pytorch.org/whl/cpu/torch-1.6.0-cp38-none-macosx_10_9_x86_64.whl" in the cache
Current age based on date: 8
Starting new HTTPS connection (1): download.pytorch.org:443
https://download.pytorch.org:443 "GET /whl/cpu/torch-1.6.0-cp38-none-macosx_10_9_x86_64.whl HTTP/1.1" 304 0
...
Added torch==1.6.0 from https://download.pytorch.org/whl/cpu/torch-1.6.0-cp38-none-macosx_10_9_x86_64.whl#egg=torch
...
Successfully installed torch-1.6.0
Collecting torchvision==0.7.0
...
Looking up "https://download.pytorch.org/whl/torchvision-0.7.0-cp38-cp38-macosx_10_9_x86_64.whl" in the cache
Current age based on date: 8
Starting new HTTPS connection (1): download.pytorch.org:443
https://download.pytorch.org:443 "GET /whl/torchvision-0.7.0-cp38-cp38-macosx_10_9_x86_64.whl HTTP/1.1" 304 0
...
Added torchvision==0.7.0 from https://download.pytorch.org/whl/torchvision-0.7.0-cp38-cp38-macosx_10_9_x86_64.whl#egg=torchvision
...
Successfully installed torchvision-0.7.0
This also adds entries to Pipfile.lock:
"torch": {
"file": "https://download.pytorch.org/whl/cpu/torch-1.6.0-cp38-none-macosx_10_9_x86_64.whl",
"hashes": [
...
],
"index": "pypi",
"version": "==1.6.0"
},
"torchvision": {
"file": "https://download.pytorch.org/whl/torchvision-0.7.0-cp38-cp38-macosx_10_9_x86_64.whl",
"hashes": [
...
],
"index": "pypi",
"version": "==0.7.0"
}
With that, you now have a Pipfile and Pipfile.lock that you can check-in/commit to version control and track/manage as you develop your application.
Instead of manually editing the Pipfile, you can also do it from the command line:
(temp) $ pipenv install --verbose "https://download.pytorch.org/whl/torchvision-0.7.0-cp38-cp38-macosx_10_9_x86_64.whl"
Installing https://download.pytorch.org/whl/torchvision-0.7.0-cp38-cp38-macosx_10_9_x86_64.whl...
...
Adding torchvision to Pipfile's [packages]...
✔ Installation Succeeded
That should also add an entry to the Pipfile:
[packages]
...
torchvision = {file = "https://download.pytorch.org/whl/torchvision-0.7.0-cp38-cp38-macosx_10_9_x86_64.whl"}
Of course, this all depends on finding out which wheel you actually need. This can be done by first using a plain pip install <package> with the -f/--find-links option targeting the https://download.pytorch.org/whl/torch_stable.html URL, then checking which wheel it used.
Step 1: get the correct .whl file with pip install:
$ pipenv run pip install --verbose torchvision==0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
Looking in links: https://download.pytorch.org/whl/torch_stable.html
...
Collecting torchvision==0.7.0
Downloading torchvision-0.7.0-cp38-cp38-macosx_10_9_x86_64.whl (387 kB)
...
Step 2: remove the pip install-ed packages from the virtual environment:
$ pipenv clean
Step 3: repeat the installation, but using pipenv install (just combine "https://download.pytorch.org/whl/" + the .whl filename from step 1):
$ pipenv install --verbose "https://download.pytorch.org/whl/torchvision-0.7.0-cp38-cp38-macosx_10_9_x86_64.whl"
It might seem a bit backwards using pip install first then copying it over to pipenv, but the objective here is to let pipenv update the Pipfile and Pipfile.lock (to support deterministic builds) and to "document" your env for version control.
| https://stackoverflow.com/questions/63974588/ |
Using trained BERT Model and Data Preprocessing | When using pre-trained BERT embeddings from pytorch (which are then fine-tuned), should the text data fed into the model be pre-processed as in any standard NLP task?
For instance, should stemming, removing low-frequency words, and de-capitalisation be performed, or should the raw text simply be passed to transformers.BertTokenizer?
| I think preprocessing will not change your output predictions. I will try to explain each case you mentioned:
Stemming or lemmatization:
BERT uses WordPiece, a subword tokenization scheme similar to Byte-Pair Encoding, to shrink its vocab size, so words like running can ultimately be decoded to run + ##ing.
So it's better not to convert running into run because, in some NLP problems, you need that information.
De-capitalization - BERT provides two variants (cased and uncased). The uncased one converts your sentence into lowercase; the cased one preserves capitalization. So you don't have to change anything here, just select the model that fits your use case.
Removing high-frequency words -
BERT uses the Transformer architecture, which works on the attention principle.
So when you fine-tune it on a problem, it will attend mainly to the words that impact the output, not to the words that are common across all the data.
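A quick way to see the subword behavior (the exact splits depend on the vocabulary of the checkpoint you load, so the comment below is only illustrative):
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
print(tokenizer.tokenize('running'))
# e.g. ['running'] if the word is in the vocab, or pieces like ['run', '##ning'] otherwise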
| https://stackoverflow.com/questions/63979544/ |
CNN model is overfitting to data after reaching 50% accuracy | I am trying to identify 3 mental states (classes) based on EEG connectome data. The shape of the data is 99x1x34x34x50x130 (originally graph data, but now represented as a matrix), which respectively represents [subjects, channel, height, width, freq, time series]. For the sake of this study, I can only input a 1x34x34 image of the connectome data. From previous studies, it was found that the alpha band (8-12 Hz) gave the most information, so the dataset was narrowed down to 99x1x34x34x4x130. The testing set accuracy of previous machine learning techniques such as SVMs reached ~75%. Hence, my goal is to achieve a greater accuracy given the same data (1x34x34). Since my data is very limited, 1-66 for training and 66-99 for testing (fixed ratios with a 1/3 class distribution), I thought of splitting the data along the time series axis (6th axis) and then averaging the data to a shape of 1x34x34 (from e.g. 1x34x34x4x10, where 10 is a random sample of the time series). This gave me ~1500 samples for training, and 33 for testing (testing is fixed, the class distributions are 1/3).
Model:
SimpleCNN(
(conv1): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(drop1): Dropout(p=0.25, inplace=False)
(fc1): Linear(in_features=9248, out_features=128, bias=True)
(drop2): Dropout(p=0.5, inplace=False)
(fc2): Linear(in_features=128, out_features=3, bias=True)
)
CrossEntropyLoss()
Adam (
Parameter Group 0
amsgrad: False
betas: (0.9, 0.999)
eps: 1e-08
lr: 5e-06
weight_decay: 0.0001
)
Results:
The training set can achieve an accuracy of 100% with enough iteration, but at the cost of the testing set accuracy. After around 20-50 epochs of testing, the model starts to overfit to the training set and the test set accuracy starts to decrease (same with loss).
What I have tried:
I have tried tuning the hyperparameters: lr=0.001-0.000001, weight decay=0.0001-0.00001. Training to 1000 epochs (useless because of overfitting in less than 100 epochs). I have also tried increasing/decreasing the model complexity by adding additional fc layers and varying the number of channels in the CNN layers from 8-64. I have also tried adding more CNN layers, and the model did a bit worse, averaging around an accuracy of ~45% on the test set. I tried manually scheduling the learning rate every 10 epochs; the results were the same. Weight decay didn't seem to affect the results much; I changed it from 0.1-0.000001.
From previous testing, I have a model that achieves 60% acc on both the testing and the training set. However, when I try to retrain it, the acc instantly goes down to ~40% on both sets (training and testing), which makes no sense. I have tried altering the learning rate from 0.01 to 0.00000001, and also tried weight decay for this.
From training the model and the graphs, it seems like the model doesn't know what it's doing for the first 5-10 epochs and then starts to learn rapidly, to around ~50%-60% acc on both sets. This is where the model starts to overfit; from there the model's acc increases to 100% on the training set, and the acc for the testing set goes down to 33%, which is equivalent to guessing.
Any tips?
Edit:
The model's outputs for the test set are very, very close to each other.
0.33960407972335815, 0.311821848154068, 0.34857410192489624
The average standard deviation across the whole test set, between the predictions for each image, is (softmax):
0.017695341517654846
However, the average std for the training set is .22 so...
F1 Scores:
Micro Average: 0.6060606060606061
Macro Average: 0.5810185185185186
Weighted Average: 0.5810185185185186
Scores for each class: 0.6875 0.5 0.55555556
Here is a confusion matrix:
| I have some suggestions for what I would try; maybe you've already done some of them:
increase the dropout probability, which could decrease overfitting,
I did not see it (or I missed it), but if you aren't already, shuffle all the samples,
there is not much data; have you thought about using another network to generate more data for the classes with the lowest scores? I am not sure it applies here, but even randomly rotating and scaling the images can produce more training examples (see the sketch after this list),
another approach, if you haven't tried it already: use transfer learning with another popular CNN and see how it does; then you have a comparison point for whether the problem is your architecture or a lack of examples :)
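For the augmentation idea, a minimal torchvision sketch (assuming your 34x34 maps are single-channel tensors; note that older torchvision releases may require a PIL round-trip for these transforms):
import torch
import torchvision.transforms as T

augment = T.Compose([
    T.RandomRotation(10),                       # small random rotations
    T.RandomResizedCrop(34, scale=(0.9, 1.0)),  # slight random rescale back to 34x34
])
x = torch.rand(1, 34, 34)  # placeholder for one connectome map
x_aug = augment(x)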
I know these are just suggestions but maybe, if you haven't try some of them, they will bring you closer to the solution.
Good luck!
| https://stackoverflow.com/questions/63983710/ |
How to find the mean and the covariance of a 2d activation map (pytorch) | I have a tensor of shape [h, w], which consists of a normalized, 2-dimensional activation map. Considering this to be some distribution, I want to find the mean and the covariance within this activation map in pytorch. Is there an efficient way to do that?
| You can use the following code, where activation_map is a tensor of shape (h,w), with non-negative elements, and is normalised (activation_map.sum() is 1):
activation_map = torch.tensor(
[[0.2, 0.1, 0.0],
[0.1, 0.2, 0.4]])
h, w = activation_map.shape
range_h = torch.arange(h)
range_w = torch.arange(w)
# Build a (2, h, w) grid of (x, y) pixel coordinates
idxs = torch.stack([
    range_w[None].repeat(h, 1),
    range_h[:, None].repeat(1, w)
])
# Flatten the map and the coordinates so each pixel is one weighted sample
map_flat = activation_map.view(-1)
idxs_flat = idxs.reshape(2, -1).T
# First moment: probability-weighted mean position
mean = (map_flat[:, None] * idxs_flat).sum(0)
# Second moments E[xx^T]; covariance = E[xx^T] - E[x] E[x]^T
mats = idxs_flat[:, :, None] @ idxs_flat[:, None, :]
second_moments = (map_flat[:, None, None] * mats).sum(0)
covariance = second_moments - mean[:, None] @ mean[None]
# mean:
# tensor([1.1000, 0.7000])
# covariance:
# tensor([[0.6900, 0.2300],
# [0.2300, 0.2100]])
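If your activation map is not already normalized (or can contain negative values), normalize it first, for example:
activation_map = activation_map.clamp(min=0)
activation_map = activation_map / activation_map.sum()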
| https://stackoverflow.com/questions/63991646/ |
How to specify different layer sizes in Pytorch LSTM/GRU/RNN | So I know how to work with LSTMs in general in PyTorch, but it bugs me that you can only specify ONE hidden_size for all your layers in the LSTM. Like this:
lstm = nn.LSTM(input_size=26, hidden_size=128, num_layers=3, dropout=dropout_chance, batch_first=True)
So for all three layers, the size will be 128. But is there really no way to say, for example, that the first layer should be 128, the second 32 and the third 128?
If I missed something in the documentation or you know a work-around, please let me know, thank ya!
| Actually, it depends on the shape of your input; see How to decide input and hidden layer dimension to torch.nn.RNN?. You also have to understand what the input and the output are, because there are different ways to deal with them. In A Beginner's Guide on Recurrent Neural Networks with PyTorch, you can see how the input data is taken in by the model.
Your model can be
lstm = nn.LSTM(input_size=26, hidden_size=128, num_layers=3, dropout=dropout_chance, batch_first=True)
lstm2 = nn.LSTM(input_size=128, hidden_size=32, num_layers=3, dropout=dropout_chance, batch_first=True)  # input_size must match the previous hidden_size
lstm3 = nn.LSTM(input_size=32, hidden_size=128, num_layers=3, dropout=dropout_chance, batch_first=True)
For a multi-layer stack with different sizes, see this model (note that the input size of each LSTM matches the hidden size of the previous one).
# sequence classification model
import torch
import torch.nn as nn

class M1(nn.Module):
def __init__(self):
super(M1, self).__init__()
# batch_first=True so inputs are [batch_size, seq_len, features], matching the comment below
self.recurrent_layer = nn.LSTM(hidden_size = 100, input_size = 75, num_layers = 5, batch_first = True)
self.recurrent_layer1 = nn.LSTM(hidden_size = 200, input_size = 100, num_layers = 5, batch_first = True)
self.recurrent_layer2 = nn.LSTM(hidden_size = 300, input_size = 200, num_layers = 5, batch_first = True)
self.project_layer = nn.Linear(300, 200)
self.project_layer1 = nn.Linear(200, 100)
self.project_layer2 = nn.Linear(100, 10)
# the size of input is [batch_size, seq_len(15), input_dim(75)]
# the size of logits is [batch_size, num_class]
def forward(self, input, h_t_1=None, c_t_1=None):
# the size of rnn_outputs is [batch_size, seq_len, rnn_size]
# self.recurrent_layer.flatten_parameters()
rnn_outputs, (hn, cn) = self.recurrent_layer(input)
rnn_outputs, (hn, cn) = self.recurrent_layer1(rnn_outputs)
rnn_outputs, (hn, cn) = self.recurrent_layer2(rnn_outputs)
# classify the last step of rnn_outpus
# the size of logits is [batch_size, num_class]
logits = self.project_layer(rnn_outputs[:,-1])
logits = self.project_layer1(logits)
logits = self.project_layer2(logits)
return logits
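A quick shape check for the model above (with batch_first=True, inputs are [batch_size, seq_len, input_dim]):
model = M1()
x = torch.randn(4, 15, 75)  # batch of 4 sequences, 15 steps, 75 features
print(model(x).shape)       # torch.Size([4, 10])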
| https://stackoverflow.com/questions/63996218/ |
Is there any way to get torch.mode over multidimensional tensor | is there any way torch.mode can be applied over multiple dimensions
for example
import numpy as np
import torch
x = np.random.randint(10, size=(3, 5))
y = torch.tensor(x)
let's say y is
[[6 3 7 3 0]
[2 5 7 9 7]
[6 1 4 6 3]]
torch.mode should return a size 3 tensor [3,7,6]
without using a loop
| Use the dim argument to select which dimension should be reduced by the mode operator.
torch.mode(y, dim = 1)[0]
Will give you the desired answer.
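torch.mode returns a (values, indices) named tuple, so you can also unpack both:
values, indices = torch.mode(y, dim=1)
print(values)  # tensor([3, 7, 6]) for the example above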
| https://stackoverflow.com/questions/64001903/ |
How can I assign a list to a torch.tensor? | Assume that I have a list [(0,0),(1,0),(1,1)], another list [4,5,6], and a matrix X whose size is (3,2). I am trying to assign the list to the matrix like X[0,0] = 4, X[1,0] = 5 and X[1,1] = 6. But it seems like I have a problem assigning a list to a tensor
x = torch.zeros(3,2)
indices = [(0,0),(1,0),(1,1)]
values = [4,5,6]
x[indices] = values
Error:
TypeError Traceback (most recent call last)
<ipython-input-7-dec4e6a479a5> in <module>
4 indices = [(0, 0), (1, 0), (1, 1)]
5 values = [4, 5, 6]
----> 6 x[indices] = values
TypeError: can't assign a list to a torch.FloatTensor
| In general, the answer to "how do I change a list to a Tensor" is to use torch.Tensor(list). But that will not solve your actual problem here.
One way would be to associate the index and value and then iterate over them:
for (i,v) in zip(indices,values) :
x[i] = v
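For longer index lists, a vectorized sketch avoids the Python loop (plain advanced indexing with row and column lists):
rows, cols = zip(*indices)
x[list(rows), list(cols)] = torch.tensor(values, dtype=x.dtype)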
| https://stackoverflow.com/questions/64015076/ |
Why is the loss of SGD for a dataset not matching between the PyTorch code and the from-scratch Python code for linear regression? | I'm trying to implement multiple linear regression on the wine dataset, but when I compare the results of PyTorch with my from-scratch Python code, the losses do not match.
My Scratch Code:
Functions:
def yinfer(X, beta):
return beta[0] + np.dot(X,beta[1:])
def cost(X, Y, beta):
sum = 0
m = len(Y)
for i in range(m):
sum = sum + ( yinfer(X[i],beta) - Y[i])*(yinfer(X[i],beta) - Y[i])
return sum/(1.0*m)
Main Code:
alpha = 0.005
b=[0,0.04086357 ,-0.02831656 ,0.09622949 ,-0.15162516 ,0.60188454 ,0.47528714,
-0.6066466 ,-0.22995654 ,-0.58388734 ,0.20954669 ,-0.67851365]
beta = np.array(b)
print(beta)
iterations = 1000
arr_cost = np.zeros((iterations,2))
m = len(Y)
temp_beta = np.zeros(12)
for i in range(iterations):
for k in range(m):
temp_beta[0] = yinfer(X[k,:], beta) - Y[k]
temp_beta[1:] = (yinfer(X[k,:], beta) - Y[k])*X[k,:]
beta = beta - alpha*temp_beta/(1.0*m) #(m*np.linalg.norm(temp_beta))
arr_cost[i] = [i,cost(X,Y,beta)]
#print(cost(X,Y,beta))
plt.scatter(arr_cost[0:iterations,0], arr_cost[0:iterations,1])
I have used the same weights that were used in the PyTorch code
My Pytorch code:
class LinearRegression(nn.Module):
def __init__(self,n_input_features):
super(LinearRegression,self).__init__()
self.linear=nn.Linear(n_input_features,1)
# self.linear.weight.data=b.view(1,-1)
self.linear.bias.data.fill_(0.0)
nn.init.xavier_uniform_(self.linear.weight)
# nn.init.xavier_normal_(self.linear.bias)
def forward(self,x):
y_predicted=self.linear(x)
return y_predicted
model=LinearRegression(11)
criterion = nn.MSELoss()
num_epochs=1000
for epoch in range(num_epochs):
for x,y in train_data:
y_pred=model(x)
loss=criterion(y,y_pred)
# print(loss)
loss.backward()
optimizer.step()
optimizer.zero_grad()
My DataLoader:
class Data(Dataset):
def __init__(self):
self.x=x_train
self.y=y_train
self.len=self.x.shape[0]
def __getitem__(self,index):
return self.x[index],self.y[index]
def __len__(self):
return self.len
dataset=Data()
train_data=DataLoader(dataset=dataset,batch_size=1,shuffle=False)
Can someone please tell me why this is happening, or whether there are any faults in my code?
| There were a couple of tweaks necessary to the code. I also had to create data and an optimizer, which you hadn't provided. With the changes below, both methods produce a learning function.
Of course optimal hyperparameters such as alpha or iterations might be different between the two approaches, and you might need to find them separately.
# Create data:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from torch.utils.data import Dataset, DataLoader

X, Y = load_diabetes(return_X_y=True)
# Adding a random column to match your data shape:
X = np.hstack((X, np.random.randn(X.shape[0], 1)))
iterations = 500
################
# Python version
def yinfer(X, beta):
return beta[0] + np.dot(X,beta[1:])
def cost(X, Y, beta):
sum = 0
m = len(Y)
for i in range(m):
sum = sum + ( yinfer(X[i], beta) - Y[i])*(yinfer(X[i], beta) - Y[i])
return sum/(1.0*m)
beta = np.array([0,0.04086357 ,-0.02831656 ,0.09622949 ,-0.15162516 ,0.60188454 ,0.47528714,
-0.6066466 ,-0.22995654 ,-0.58388734 ,0.20954669 ,-0.67851365])
arr_cost = []
m = len(Y)
alpha = 0.1
temp_beta = np.zeros(12)
for i in range(iterations):
for k in range(m):
temp_beta[0] = yinfer(X[k,:], beta) - Y[k]
temp_beta[1:] = (yinfer(X[k,:], beta) - Y[k])*X[k,:]
beta = beta - alpha*temp_beta/(1.0*m)
arr_cost.append(cost(X,Y,beta))
#################
# Pytorch version
from torch import nn
from torch import optim
class LinearRegression(nn.Module):
def __init__(self,n_input_features):
super(LinearRegression,self).__init__()
self.linear=nn.Linear(n_input_features,1)
self.linear.bias.data.fill_(0.0)
nn.init.xavier_uniform_(self.linear.weight)
def forward(self,x):
y_predicted=self.linear(x)
return y_predicted
class Data(Dataset):
def __init__(self, x_train, y_train):
self.x=x_train
self.y=y_train
self.len=self.x.shape[0]
def __getitem__(self,index):
return self.x[index],self.y[index]
def __len__(self):
return self.len
train_data=DataLoader(dataset=Data(X, Y),batch_size=1,shuffle=False)
criterion = nn.MSELoss()
model=LinearRegression(11)
optimizer = optim.SGD(model.parameters(), lr=0.01)
loss_vals = [] # store results
for epoch in range(iterations):
for x, y in train_data:
x, y = x.float(), y.float()
y_pred=model.forward(x)
loss=criterion(y, y_pred)
loss.backward()
optimizer.step()
optimizer.zero_grad()
loss_vals.append(float(loss))
##############
# Plot results
f, ax = plt.subplots(1,1, figsize=(20,5))
ax.plot(range(1, iterations+1), arr_cost, label='python')
ax.plot(range(1, iterations+1), loss_vals, label='torch')
ax.legend(); ax.set_xlabel('epochs'); ax.set_ylabel('loss');
| https://stackoverflow.com/questions/64016054/ |
Getting an error (cannot import name 'BertPreTrainedModel') while importing classification model from simpletransformers | Getting the following error while trying to import the ClassificationModel from simpletransformers.
ImportError Traceback (most recent call last)
<ipython-input-1-29f08e6c2d87> in <module>()
----> 1 from simpletransformers.classification import ClassificationModel, ClassificationArgs
3 frames
/usr/local/lib/python3.6/dist-packages/simpletransformers/classification/transformer_models/roberta_model.py in <module>()
2 import torch.nn as nn
3 from torch.nn import CrossEntropyLoss, MSELoss
----> 4 from transformers.modeling_roberta import (
5 ROBERTA_PRETRAINED_MODEL_ARCHIVE_LIST,
6 BertPreTrainedModel,
ImportError: cannot import name 'BertPreTrainedModel'
---------------------------------------------------------------------------
| In this github issue the problem was an old version of simpletransformers. To get the latest version do pip install --upgrade simpletransformers. Maybe even do this for the transformers package as well.
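For example:
pip install --upgrade simpletransformers transformers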
| https://stackoverflow.com/questions/64020998/ |
implement dropout layer using nn.Sequential() | I am trying to implement a Dropout layer using pytorch as follows:
class DropoutLayer(nn.Module):
def __init__(self, p):
super().__init__()
self.p = p
def forward(self, input):
if self.training:
u1 = (np.random.rand(*input.shape)<self.p) / self.p
u1 *= u1
return u1
else:
input *= self.p
And then calling a simple nn.Sequential:
model = nn.Sequential(nn.Linear(input_size,num_classes), DropoutLayer(.7), nn.Flatten())
opt = torch.optim.Adam(model.parameters(), lr=0.005)
train(model, opt, 5) #train(model, optimizer, epochs #)
But I'm getting the following error:
TypeError: flatten() takes at most 1 argument (2 given)
Not sure what I'm doing wrong. Still new to pytorch. Thanks.
| In the forward function of your DropoutLayer, when you enter the else branch, there is no return. Therefore the following layer (flatten) will have no input. However, as emphasized in the comments, that's not the actual problem.
The actual problem is that you are passing a numpy array to your Flatten layer. A Minimal code to reproduce the problem would be :
nn.Flatten()(np.random.randn(5,5))
>>> TypeError: flatten() takes at most 1 argument (2 given)
However, I cannot explain why this layer behaves like that on a numpy array; the error from the plain flatten function below is much more understandable. I don't know what additional operations the layer performs.
torch.flatten(np.random.randn(5,5))
>>> TypeError: flatten(): argument 'input' (position 1) must be Tensor, not numpy.ndarray
The reason this error is raised by your code is that, in the forward pass, you create a numpy array, perform some operations on it, and return it instead of a torch tensor. Note that you don't even touch the actual input tensor (in the first branch).
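A minimal corrected sketch using torch ops throughout (assuming the usual import torch and import torch.nn as nn, and keeping the original convention that p is the keep probability):
class DropoutLayer(nn.Module):
    def __init__(self, p):
        super().__init__()
        self.p = p  # keep probability, as in the original code

    def forward(self, input):
        if self.training:
            # Inverted dropout: random keep mask, rescaled so the expectation is unchanged
            mask = (torch.rand_like(input) < self.p).float() / self.p
            return input * mask
        # At eval time inverted dropout is the identity; no rescaling needed
        return input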
| https://stackoverflow.com/questions/64032525/ |
Force installing torchvision 0.4.2 when I am forced to use pytorch 1.3.1 due to hardware constraints (ppc64le IBM) | I am in a weird scenario where I am forced to use torch 1.3.1 (due to hardware, see: https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda/#/). I read from the pytorch docs that its corresponding version of torchvision is 0.4.2 (https://pypi.org/project/torchvision/):
Installation
We recommend Anaconda as Python package management system. Please refer to pytorch.org for the detail of PyTorch (torch) installation. The following is the corresponding torchvision versions and supported Python versions.
torch torchvision python
master / nightly master / nightly >=3.6
1.5.0 0.6.0 >=3.5
1.4.0 0.5.0 ==2.7, >=3.5, <=3.8
1.3.1 0.4.2 ==2.7, >=3.5, <=3.7
1.3.0 0.4.1 ==2.7, >=3.5, <=3.7
1.2.0 0.4.0 ==2.7, >=3.5, <=3.7
1.1.0 0.3.0 ==2.7, >=3.5, <=3.7
<=1.0.1 0.2.2 ==2.7, >=3.5, <=3.7
but for some reason I have the wrong version of it:
torchvision 0.2.2 pypi_0 pypi
is there a way to install the right version of torchvision?
What I've tried:
First I tried force installing the right version with conda. Conda couldn't find the version of torchvision that I need:
$ conda install torchvision==0.4.2
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
PackagesNotFoundError: The following packages are not available from current channels:
- torchvision==0.4.2
Current channels:
- https://repo.anaconda.com/pkgs/main/linux-ppc64le
- https://repo.anaconda.com/pkgs/main/noarch
- https://repo.anaconda.com/pkgs/r/linux-ppc64le
- https://repo.anaconda.com/pkgs/r/noarch
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
Then I proceeded to try to install it regardless with pip
$ pip install torchvision==0.4.2
Defaulting to user installation because normal site-packages is not writeable
ERROR: Could not find a version that satisfies the requirement torchvision==0.4.2 (from versions: 0.1.6, 0.1.7, 0.1.8, 0.1.9, 0.2.0, 0.2.1, 0.2.2, 0.2.2.post2, 0.2.2.post3)
ERROR: No matching distribution found for torchvision==0.4.2
got an error too.
Is there anything else to try?
I tried but it failed:
conda install torchvision==0.4.2 -c pytorch
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
PackagesNotFoundError: The following packages are not available from current channels:
- torchvision==0.4.2
Current channels:
- https://conda.anaconda.org/pytorch/linux-ppc64le
- https://conda.anaconda.org/pytorch/noarch
- https://repo.anaconda.com/pkgs/main/linux-ppc64le
- https://repo.anaconda.com/pkgs/main/noarch
- https://repo.anaconda.com/pkgs/r/linux-ppc64le
- https://repo.anaconda.com/pkgs/r/noarch
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
related:
crossposted SO: Force installing torchvision 0.4.2 when I am forced to use pytorch 1.3.1 due to hardware constraints (ppc64le IBM)
crossposted pytorch forum: https://discuss.pytorch.org/t/force-installing-torchvision/97279
crossposted reddit pytorch: https://www.reddit.com/r/pytorch/comments/iyf2qn/force_installing_torchvision/
crossposted reddit ibm: https://www.reddit.com/r/IBM/comments/iyhzex/force_installing_torchvision_042_when_i_am_forced/
real problem is installing torchmeta: https://github.com/tristandeleu/pytorch-meta/issues/95
https://www.ibm.com/mysupport/s/forumsquestion?id=0D50z00006gaxV9CAI
quora: https://www.quora.com/unanswered/How-does-one-install-specific-Python-packages-in-Conda-from-IBM-architectures
reddit ibm2: https://www.reddit.com/r/newIBM/comments/iyij10/force_installing_torchvision_042_when_i_am_forced/
gitissue for ibm: https://github.com/IBM/powerai/issues/268
gitissue in pytorch
| For all details check (https://github.com/IBM/powerai/issues/268).
Make sure you have the right conda channel prepended:
conda config --prepend channels https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda/
then install the powerai wmlce that you want e.g. 1.7.0 (most recent as of this writing):
conda create -n my_new_env python=3.7 powerai=1.7.0
conda activate my_new_env
| https://stackoverflow.com/questions/64033543/ |
Why does requires_grad turn from true to false when doing a torch.nn.conv2d operation? | I have a UNet network which takes in MRI images of the brain, where the goal is to segment white matter in the brain. The images have the shape 256x256x183 (reshaped to 183x256x256) (FLAIR and T1 images). The problem I am having is that before sending the input to the UNet network, requires_grad=True on my pytorch tensor, but after one torch.nn.conv2d operation, requires_grad=False. This is a huge problem since the gradient will not update and learn.
from collections import OrderedDict
import torch
import torch.nn as nn
class UNet(nn.Module):
def __init__(self, in_channels=3, out_channels=1, init_features=32):
super(UNet, self).__init__()
features = init_features
self.encoder1 = UNet._block(in_channels, features, name="enc1")
self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2)
self.encoder2 = UNet._block(features, features * 2, name="enc2")
self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2)
self.encoder3 = UNet._block(features * 2, features * 4, name="enc3")
self.pool3 = nn.MaxPool2d(kernel_size=2, stride=2)
self.encoder4 = UNet._block(features * 4, features * 8, name="enc4")
self.pool4 = nn.MaxPool2d(kernel_size=2, stride=2)
self.bottleneck = UNet._block(features * 8, features * 16, name="bottleneck")
self.upconv4 = nn.ConvTranspose2d(
features * 16, features * 8, kernel_size=2, stride=2
)
self.decoder4 = UNet._block((features * 8) * 2, features * 8, name="dec4")
self.upconv3 = nn.ConvTranspose2d(
features * 8, features * 4, kernel_size=2, stride=2
)
self.decoder3 = UNet._block((features * 4) * 2, features * 4, name="dec3")
self.upconv2 = nn.ConvTranspose2d(
features * 4, features * 2, kernel_size=2, stride=2
)
self.decoder2 = UNet._block((features * 2) * 2, features * 2, name="dec2")
self.upconv1 = nn.ConvTranspose2d(
features * 2, features, kernel_size=2, stride=2
)
self.decoder1 = UNet._block(features * 2, features, name="dec1")
self.conv = nn.Conv2d(
in_channels=features, out_channels=out_channels, kernel_size=1
)
def forward(self, x):
print(x.requires_grad) #<---- here it is true
enc1 = self.encoder1(x)#<---- where the problem happens
print(enc1.requires_grad) #<---- here it is false
enc2 = self.encoder2(self.pool1(enc1))
print(enc2.requires_grad)
enc3 = self.encoder3(self.pool2(enc2))
print(enc3.requires_grad)
enc4 = self.encoder4(self.pool3(enc3))
print(enc4.requires_grad)
bottleneck = self.bottleneck(self.pool4(enc4))
print(bottleneck.requires_grad)
dec4 = self.upconv4(bottleneck)
print(dec4.requires_grad)
dec4 = torch.cat((dec4, enc4), dim=1)
print(dec4.requires_grad)
dec4 = self.decoder4(dec4)
print(dec4.requires_grad)
dec3 = self.upconv3(dec4)
print(dec3.requires_grad)
dec3 = torch.cat((dec3, enc3), dim=1)
print(dec3.requires_grad)
dec3 = self.decoder3(dec3)
print(dec3.requires_grad)
dec2 = self.upconv2(dec3)
print(dec2.requires_grad)
dec2 = torch.cat((dec2, enc2), dim=1)
print(dec2.requires_grad)
dec2 = self.decoder2(dec2)
print(dec2.requires_grad)
dec1 = self.upconv1(dec2)
print(dec1.requires_grad)
dec1 = torch.cat((dec1, enc1), dim=1)
print(dec1.requires_grad)
dec1 = self.decoder1(dec1)
print(dec1.requires_grad)
print("going out")
return torch.sigmoid(self.conv(dec1))
@staticmethod
def _block(in_channels, features, name):
return nn.Sequential(
OrderedDict(
[
(
name + "conv1",
nn.Conv2d(
in_channels=in_channels,
out_channels=features,
kernel_size=3,
padding=1,
bias=False,
),
),
(name + "norm1", nn.BatchNorm2d(num_features=features)),
(name + "relu1", nn.ReLU(inplace=True)),
(
name + "conv2",
nn.Conv2d(
in_channels=features,
out_channels=features,
kernel_size=3,
padding=1,
bias=False,
),
),
(name + "norm2", nn.BatchNorm2d(num_features=features)),
(name + "relu2", nn.ReLU(inplace=True)),
]
)
)
Edit:
This is the training code
class run_network:
def __init__(self, eta, epoch, batch_size, train_file_path, validation_file_path, shuffle_after_epoch = True):
self.eta = eta
self.epoch = epoch
self.batch_size = batch_size
self.train_file_path = train_file_path
self.validation_file_path = validation_file_path
self.shuffle_after_epoch = shuffle_after_epoch
def __call__(self, is_train = False):
device = torch.device("cpu" if not torch.cuda.is_available() else "cuda")
unet = torch.hub.load('mateuszbuda/brain-segmentation-pytorch', 'unet',
in_channels=3, out_channels=1, init_features=32, pretrained=True)
unet.to(device)
unet = unet.double()
optimizer = optim.Adam(unet.parameters(), lr=self.eta)
dsc_loss = DiceLoss()
Load_training = NiftiLoader(self.train_file_path)
Load_validation = NiftiLoader(self.validation_file_path)
mean_flair, mean_t1, std_flair, std_t1 = Load_training.average_mean_and_std(20, 79,99)
total_mean = [mean_flair, mean_t1]
total_std = [std_flair, std_t1]
loss_train = []
loss_validation = []
for current_epoch in tqdm(range(self.epoch)):
for phase in ["train", "validation"]:
if phase == "train":
mini_batch = Load_training.create_batch(self.batch_size, self.shuffle_after_epoch)
unet.train()
print("her22")
if phase == "validation":
print("her")
mini_batch = Load_validation.create_batch(self.batch_size, self.shuffle_after_epoch)
unet.eval()
dim1, dim2, dim3 = mini_batch.shape
for iteration in range(1):
if phase == "train":
current_batch = Load_training.Load_Image_batch(mini_batch, iteration)
image_batch = Load_training.image_zero_mean_normalizer(current_batch)
if phase == "validation":
current_batch = Load_validation.Load_Image_batch(mini_batch, iteration)
image_batch = Load_training.image_zero_mean_normalizer(current_batch, False, mean_list, std_list)
image_dim0, image_dim1, image_dim2, image_dim3, image_dim4 = image_batch.shape
image_batch = image_batch.reshape((
image_dim0,
image_dim1*image_dim2,
image_dim3,
image_dim4
))
image_batch = np.swapaxes(image_batch, 0,1)
image_batch = torch.as_tensor(image_batch)#.requires_grad_(True) #, requires_grad=True)
image_batch = image_batch.to(device)
print(image_batch.requires_grad)
optimizer.zero_grad()
with torch.set_grad_enabled(is_train == "train"):
for j in range(0, 10, 1):
# [183*5, 3, 256, 256] -> [12, 3, 256, 256]
# ANTALL ITERASJONER: (183*5/12) -> en chunk
input_image = image_batch[j:j+2,0:3,:,:]
print(input_image.requires_grad)
print("gΓ₯r inn")
y_predicted = unet(input_image)
print(y_predicted.requires_grad)
print(image_batch[j:j+2,3,:,:].requires_grad)
loss = dsc_loss(y_predicted.squeeze(1), image_batch[j:j+2,3,:,:])
print(loss.requires_grad)
if phase == "train":
loss_train.append(loss.item())
loss.backward()
print(loss.item())
exit()
optimizer.step()
print(loss.item())
exit()
if phase == "validation":
loss_validation.append(loss.item())
The number of iterations and the print statements are there to experiment with what the cause could be.
| The training code is fine and the input doesn't need a gradient at all, if you just want to train and update the weights.
The real problem is this line here
with torch.set_grad_enabled(is_train == "train"):
So you want to disable the gradients if you are not training. The thing is, is_train is a bool (judging from this: def __call__(self, is_train=False):), so the comparison will always be false and no gradients will be set. Just change it to
with torch.set_grad_enabled(is_train):
and you will be fine.
| https://stackoverflow.com/questions/64034471/ |
undefined symbol: THPVariableClass (load_textures.cpython-37m-x86_64-linux-gnu.so) | Do you know how I could fix this? I am trying to use the https://github.com/benjiebob/SMALViewer/issues/3 repo, however I get an error on the neural_renderer import:
$ python smal_viewer.py
Traceback (most recent call last):
File "smal_viewer.py", line 2, in <module>
import pyqt_viewer
File "/home/mona/research/3danimals/SMALViewer/pyqt_viewer.py", line 13, in <module>
from smal.smal3d_renderer import SMAL3DRenderer
File "/home/mona/research/3danimals/SMALViewer/smal/smal3d_renderer.py", line 6, in <module>
import neural_renderer as nr
File "/home/mona/anaconda3/lib/python3.7/site-packages/neural_renderer/__init__.py", line 3, in <module>
from .load_obj import load_obj
File "/home/mona/anaconda3/lib/python3.7/site-packages/neural_renderer/load_obj.py", line 8, in <module>
import neural_renderer.cuda.load_textures as load_textures_cuda
ImportError: /home/mona/anaconda3/lib/python3.7/site-packages/neural_renderer/cuda/load_textures.cpython-37m-x86_64-linux-gnu.so: undefined symbol: THPVariableClass
Here is some details:
$ python
Python 3.7.6 (default, Jan 8 2020, 19:59:22)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.__version__
'1.6.0'
>>> torch.version.cuda
'10.1'
>>> torch.cuda.is_available()
True
$ gcc --version
gcc (Ubuntu 9.3.0-10ubuntu2) 9.3.0
Copyright (C) 2019 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
$ lsb_release -a
LSB Version: core-11.1.0ubuntu2-noarch:security-11.1.0ubuntu2-noarch
Distributor ID: Ubuntu
Description: Ubuntu 20.04.1 LTS
Release: 20.04
Codename: focal
Here is the neural renderer git repo: https://github.com/daniilidis-group/neural_renderer
I installed the neural renderer using pip install neural_renderer_pytorch
| It looks like you didn't build the neural_renderer_pytorch yourself, but used a wheel. However, this wheel was built with an older pytorch version and doesn't work with the current pytorch version on your machine.
Build neural_renderer from the source (after deinstalling neural_renderer you have now) using your current pytorch-version i.e.
$ pip uninstall neural-renderer-pytorch
$ pip install https://github.com/daniilidis-group/neural_renderer/zipball/master
and it should work.
Until pytorch 1.5, it used a somewhat brittle way of building extensions on Linux: despite depending on torch, extensions didn't link explicitly against libtorch.so. The missing symbols were provided only because import torch loaded libtorch.so with RTLD_GLOBAL, thus making its symbols globally visible/accessible - this is the reason why, prior to loading those extensions (e.g. neural_renderer_pytorch like here), one had to import torch.
One could enforce the old behavior by setting RTLD_GLOBAL prior to importing torch for the very first time:
import sys; import ctypes;
sys.setdlopenflags(sys.getdlopenflags() | ctypes.RTLD_GLOBAL)
import torch # now all symbols of torch
# have global visibility and can be used in
# other extensions
However, using RTLD_GLOBAL is quite dangerous as it could possibly interpose symbols that are unrelated and lead to subtle bugs or even crashes.
Thus, since 1.5 pytorch no longer uses RTLD_GLOBAL, but links explicitly against libtorch.so (see this commit), and extensions built with older pytorch versions will not work.
| https://stackoverflow.com/questions/64037618/ |
How to make a Trainer pad inputs in a batch with huggingface-transformers? | I'm trying to train a model using a Trainer, according to the documentation (https://huggingface.co/transformers/master/main_classes/trainer.html#transformers.Trainer) I can specify a tokenizer:
tokenizer (PreTrainedTokenizerBase, optional) β The tokenizer used to
preprocess the data. If provided, will be used to automatically pad
the inputs the maximum length when batching inputs, and it will be
saved along the model to make it easier to rerun an interrupted
training or reuse the fine-tuned model.
So padding should be handled automatically, but when trying to run it I get this error:
ValueError: Unable to create tensor, you should probably activate
truncation and/or padding with 'padding=True' 'truncation=True' to
have batched tensors with the same length.
The tokenizer is created this way:
tokenizer = BertTokenizerFast.from_pretrained(pretrained_model)
And the Trainer like that:
trainer = Trainer(
tokenizer=tokenizer,
model=model,
args=training_args,
train_dataset=train,
eval_dataset=dev,
compute_metrics=compute_metrics
)
I've tried putting the padding and truncation parameters in the tokenizer, in the Trainer, and in the training_args. Nothing works. Any idea?
| Look at the columns your tokenized dataset is returning. You might want to limit it to only the required columns.
For Example
def preprocess_function(examples):
# Function to tokenize the dataset (sentence1_key / sentence2_key are the names of your text columns).
if sentence2_key is None:
return tokenizer(examples[sentence1_key], truncation=True, padding=True)
return tokenizer(examples[sentence1_key], examples[sentence2_key], truncation=True, padding=True)
encoded_dataset = dataset.map(preprocess_function, batched=True, load_from_cache_file=False)
# The key step: keep only the columns the model expects
columns_to_return = ['input_ids', 'label', 'attention_mask']
encoded_dataset.set_format(type='torch', columns=columns_to_return)
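Alternatively, recent transformers versions ship a collator that pads each batch dynamically; a sketch (check that DataCollatorWithPadding exists in your installed version):
from transformers import DataCollatorWithPadding

data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train,
    eval_dataset=dev,
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)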
| https://stackoverflow.com/questions/64047261/ |
How does one install torchmeta for a ppc64le architecture in pytorch? | I was trying to use torchmeta in a ppc64le architecture. Unfortunately it's not been easy to install since ppc64le requires special binaries to work.
I eventually managed to get the right binaries for pytorch and torchvision by following these instructions (that prepend the right ibm channel with the conda binaries, plus installs all the required files too):
conda config --prepend channels https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda/
conda create -n my_new_env python=3.7 powerai=1.7.0
conda activate my_new_env
after that I proceeded to install the right version of torchmeta, which was 1.3.1 since ppc64le only has pytorch 1.3.1 and torchvision 0.4.2. So I did:
pip install torchmeta==1.3.1
but now I have a new error that it cannot find the right version of h5py compatible with what I want to do. The error message is to large to paste but I will paste what I hope are useful part of it:
(my_new_env) [miranda9@hal-login ~]$ pip install torchmeta==1.3.1
Collecting torchmeta==1.3.1
Using cached torchmeta-1.3.1-py3-none-any.whl (144 kB)
Requirement already satisfied: requests in ./.conda/envs/my_new_env/lib/python3.7/site-packages (from torchmeta==1.3.1) (2.22.0)
Requirement already satisfied: torchvision<0.6.0,>=0.4.0 in ./.conda/envs/my_new_env/lib/python3.7/site-packages (from torchmeta==1.3.1) (0.4.2)
Requirement already satisfied: torch<1.5.0,>=1.3.0 in ./.conda/envs/my_new_env/lib/python3.7/site-packages (from torchmeta==1.3.1) (1.3.1)
Processing ./.cache/pip/wheels/87/f5/ad/9f04a48453875e8054c19f9fe3f50cbbe0c09b956835555019/Pillow-6.2.2-cp37-cp37m-linux_ppc64le.whl
Requirement already satisfied: numpy>=1.14.0 in ./.conda/envs/my_new_env/lib/python3.7/site-packages (from torchmeta==1.3.1) (1.17.4)
Requirement already satisfied: tqdm>=4.0.0 in ./.conda/envs/my_new_env/lib/python3.7/site-packages (from torchmeta==1.3.1) (4.36.1)
Collecting h5py~=2.9.0
Using cached h5py-2.9.0.tar.gz (287 kB)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in ./.conda/envs/my_new_env/lib/python3.7/site-packages (from requests->torchmeta==1.3.1) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in ./.conda/envs/my_new_env/lib/python3.7/site-packages (from requests->torchmeta==1.3.1) (2020.6.20)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in ./.conda/envs/my_new_env/lib/python3.7/site-packages (from requests->torchmeta==1.3.1) (1.25.10)
Requirement already satisfied: idna<2.9,>=2.5 in ./.conda/envs/my_new_env/lib/python3.7/site-packages (from requests->torchmeta==1.3.1) (2.8)
Requirement already satisfied: six in ./.conda/envs/my_new_env/lib/python3.7/site-packages (from torchvision<0.6.0,>=0.4.0->torchmeta==1.3.1) (1.13.0)
Building wheels for collected packages: h5py
Building wheel for h5py (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: /home/miranda9/.conda/envs/my_new_env/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-bpmeop26/h5py/setup.py'"'"'; __file__='"'"'/tmp/pip-install-bpmeop26/h5py/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-ccg1oj0n
cwd: /tmp/pip-install-bpmeop26/h5py/
Complete output (1321 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-ppc64le-3.7
creating build/lib.linux-ppc64le-3.7/h5py
copying h5py/__init__.py -> build/lib.linux-ppc64le-3.7/h5py
copying h5py/h5py_warnings.py -> build/lib.linux-ppc64le-3.7/h5py
copying h5py/highlevel.py -> build/lib.linux-ppc64le-3.7/h5py
copying h5py/ipy_completer.py -> build/lib.linux-ppc64le-3.7/h5py
copying h5py/version.py -> build/lib.linux-ppc64le-3.7/h5py
creating build/lib.linux-ppc64le-3.7/h5py/_hl
copying h5py/_hl/__init__.py -> build/lib.linux-ppc64le-3.7/h5py/_hl
copying h5py/_hl/attrs.py -> build/lib.linux-ppc64le-3.7/h5py/_hl
copying h5py/_hl/base.py -> build/lib.linux-ppc64le-3.7/h5py/_hl
copying h5py/_hl/compat.py -> build/lib.linux-ppc64le-3.7/h5py/_hl
copying h5py/_hl/dataset.py -> build/lib.linux-ppc64le-3.7/h5py/_hl
copying h5py/_hl/datatype.py -> build/lib.linux-ppc64le-3.7/h5py/_hl
copying h5py/_hl/dims.py -> build/lib.linux-ppc64le-3.7/h5py/_hl
copying h5py/_hl/files.py -> build/lib.linux-ppc64le-3.7/h5py/_hl
copying h5py/_hl/filters.py -> build/lib.linux-ppc64le-3.7/h5py/_hl
copying h5py/_hl/group.py -> build/lib.linux-ppc64le-3.7/h5py/_hl
copying h5py/_hl/selections.py -> build/lib.linux-ppc64le-3.7/h5py/_hl
copying h5py/_hl/selections2.py -> build/lib.linux-ppc64le-3.7/h5py/_hl
copying h5py/_hl/vds.py -> build/lib.linux-ppc64le-3.7/h5py/_hl
creating build/lib.linux-ppc64le-3.7/h5py/tests
copying h5py/tests/__init__.py -> build/lib.linux-ppc64le-3.7/h5py/tests
copying h5py/tests/common.py -> build/lib.linux-ppc64le-3.7/h5py/tests
creating build/lib.linux-ppc64le-3.7/h5py/tests/old
copying h5py/tests/old/__init__.py -> build/lib.linux-ppc64le-3.7/h5py/tests/old
copying h5py/tests/old/test_attrs.py -> build/lib.linux-ppc64le-3.7/h5py/tests/old
copying h5py/tests/old/test_attrs_data.py -> build/lib.linux-ppc64le-3.7/h5py/tests/old
copying h5py/tests/old/test_base.py -> build/lib.linux-ppc64le-3.7/h5py/tests/old
copying h5py/tests/old/test_dataset.py -> build/lib.linux-ppc64le-3.7/h5py/tests/old
copying h5py/tests/old/test_datatype.py -> build/lib.linux-ppc64le-3.7/h5py/tests/old
copying h5py/tests/old/test_dimension_scales.py -> build/lib.linux-ppc64le-3.7/h5py/tests/old
copying h5py/tests/old/test_file.py -> build/lib.linux-ppc64le-3.7/h5py/tests/old
copying h5py/tests/old/test_file_image.py -> build/lib.linux-ppc64le-3.7/h5py/tests/old
copying h5py/tests/old/test_group.py -> build/lib.linux-ppc64le-3.7/h5py/tests/old
copying h5py/tests/old/test_h5.py -> build/lib.linux-ppc64le-3.7/h5py/tests/old
copying h5py/tests/old/test_h5d_direct_chunk_write.py -> build/lib.linux-ppc64le-3.7/h5py/tests/old
copying h5py/tests/old/test_h5f.py -> build/lib.linux-ppc64le-3.7/h5py/tests/old
copying h5py/tests/old/test_h5p.py -> build/lib.linux-ppc64le-3.7/h5py/tests/old
copying h5py/tests/old/test_h5t.py -> build/lib.linux-ppc64le-3.7/h5py/tests/old
copying h5py/tests/old/test_objects.py -> build/lib.linux-ppc64le-3.7/h5py/tests/old
copying h5py/tests/old/test_selections.py -> build/lib.linux-ppc64le-3.7/h5py/tests/old
copying h5py/tests/old/test_slicing.py -> build/lib.linux-ppc64le-3.7/h5py/tests/old
creating build/lib.linux-ppc64le-3.7/h5py/tests/hl
copying h5py/tests/hl/__init__.py -> build/lib.linux-ppc64le-3.7/h5py/tests/hl
copying h5py/tests/hl/test_attribute_create.py -> build/lib.linux-ppc64le-3.7/h5py/tests/hl
copying h5py/tests/hl/test_dataset_getitem.py -> build/lib.linux-ppc64le-3.7/h5py/tests/hl
copying h5py/tests/hl/test_dataset_swmr.py -> build/lib.linux-ppc64le-3.7/h5py/tests/hl
copying h5py/tests/hl/test_datatype.py -> build/lib.linux-ppc64le-3.7/h5py/tests/hl
copying h5py/tests/hl/test_deprecation.py -> build/lib.linux-ppc64le-3.7/h5py/tests/hl
copying h5py/tests/hl/test_dims_dimensionproxy.py -> build/lib.linux-ppc64le-3.7/h5py/tests/hl
copying h5py/tests/hl/test_file.py -> build/lib.linux-ppc64le-3.7/h5py/tests/hl
copying h5py/tests/hl/test_filters.py -> build/lib.linux-ppc64le-3.7/h5py/tests/hl
copying h5py/tests/hl/test_threads.py -> build/lib.linux-ppc64le-3.7/h5py/tests/hl
creating build/lib.linux-ppc64le-3.7/h5py/tests/hl/test_vds
copying h5py/tests/hl/test_vds/__init__.py -> build/lib.linux-ppc64le-3.7/h5py/tests/hl/test_vds
copying h5py/tests/hl/test_vds/test_highlevel_vds.py -> build/lib.linux-ppc64le-3.7/h5py/tests/hl/test_vds
copying h5py/tests/hl/test_vds/test_lowlevel_vds.py -> build/lib.linux-ppc64le-3.7/h5py/tests/hl/test_vds
copying h5py/tests/hl/test_vds/test_virtual_source.py -> build/lib.linux-ppc64le-3.7/h5py/tests/hl/test_vds
running build_ext
Autodetected HDF5 1.10.2
********************************************************************************
Summary of the h5py configuration
Path to HDF5: None
HDF5 Version: '1.10.2'
MPI Enabled: False
Rebuild Required: True
********************************************************************************
Executing api_gen rebuild of defs
Executing cythonize()
[ 1/22] Cythonizing /tmp/pip-install-bpmeop26/h5py/h5py/_conv.pyx
/tmp/pip-install-bpmeop26/h5py/.eggs/Cython-0.29.21-py3.7.egg/Cython/Compiler/Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: /tmp/pip-install-bpmeop26/h5py/h5py/_conv.pxd
tree = Parsing.p_module(s, pxd, full_module_name)
...
/home/miranda9/.conda/envs/my_new_env/lib/python3.7/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:17:2: warning: #warning "Using deprecated NumPy API, disable it with " "#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
#warning "Using deprecated NumPy API, disable it with " \
^
In file included from /tmp/pip-install-bpmeop26/h5py/h5py/defs.c:654:0:
/tmp/pip-install-bpmeop26/h5py/h5py/api_compat.h:27:18: fatal error: hdf5.h: No such file or directory
#include "hdf5.h"
^
compilation terminated.
error: command 'gcc' failed with exit status 1
----------------------------------------
ERROR: Failed building wheel for h5py
Running setup.py clean for h5py
Failed to build h5py
DEPRECATION: Could not build wheels for h5py which do not use PEP 517. pip will fall back to legacy 'setup.py install' for these. pip 21.0 will remove support for this functionality. A possible replacement is to fix the wheel build issue reported above. You can find discussion regarding this at https://github.com/pypa/pip/issues/8368.
Installing collected packages: Pillow, h5py, torchmeta
Attempting uninstall: Pillow
Found existing installation: Pillow 7.1.2
Uninstalling Pillow-7.1.2:
Successfully uninstalled Pillow-7.1.2
Attempting uninstall: h5py
Found existing installation: h5py 2.8.0
Uninstalling h5py-2.8.0:
Successfully uninstalled h5py-2.8.0
Running setup.py install for h5py ... error
ERROR: Command errored out with exit status 1:
command: /home/miranda9/.conda/envs/my_new_env/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-bpmeop26/h5py/setup.py'"'"'; __file__='"'"'/tmp/pip-install-bpmeop26/h5py/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-hlwpfooj/install-record.txt --single-version-externally-managed --compile --install-headers /home/miranda9/.conda/envs/my_new_env/include/python3.7m/h5py
...
copying h5py/tests/hl/test_vds/test_lowlevel_vds.py -> build/lib.linux-ppc64le-3.7/h5py/tests/hl/test_vds
copying h5py/tests/hl/test_vds/test_virtual_source.py -> build/lib.linux-ppc64le-3.7/h5py/tests/hl/test_vds
running build_ext
Autodetected HDF5 1.10.2
********************************************************************************
Summary of the h5py configuration
Path to HDF5: None
HDF5 Version: '1.10.2'
MPI Enabled: False
Rebuild Required: True
********************************************************************************
Executing cythonize()
[ 1/22] Cythonizing /tmp/pip-install-bpmeop26/h5py/h5py/_conv.pyx
/tmp/pip-install-bpmeop26/h5py/.eggs/Cython-0.29.21-py3.7.egg/Cython/Compiler/Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: /tmp/pip-install-bpmeop26/h5py/h5py/_conv.pxd
...
warning: h5py/api_types_hdf5.pxd:730:6: 'H5Z_ERROR_EDC' redeclared
warning: h5py/api_types_hdf5.pxd:731:6: 'H5Z_DISABLE_EDC' redeclared
warning: h5py/api_types_hdf5.pxd:732:6: 'H5Z_ENABLE_EDC' redeclared
warning: h5py/api_types_hdf5.pxd:733:6: 'H5Z_NO_EDC' redeclared
building 'h5py.defs' extension
creating build/temp.linux-ppc64le-3.7
creating build/temp.linux-ppc64le-3.7/tmp
creating build/temp.linux-ppc64le-3.7/tmp/pip-install-bpmeop26
creating build/temp.linux-ppc64le-3.7/tmp/pip-install-bpmeop26/h5py
creating build/temp.linux-ppc64le-3.7/tmp/pip-install-bpmeop26/h5py/h5py
gcc -pthread -B /home/miranda9/.conda/envs/my_new_env/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DH5_USE_16_API -I./h5py -I/tmp/pip-install-bpmeop26/h5py/lzf -I/opt/local/include -I/usr/local/include -I/home/miranda9/.conda/envs/my_new_env/lib/python3.7/site-packages/numpy/core/include -I/home/miranda9/.conda/envs/my_new_env/include/python3.7m -c /tmp/pip-install-bpmeop26/h5py/h5py/defs.c -o build/temp.linux-ppc64le-3.7/tmp/pip-install-bpmeop26/h5py/h5py/defs.o
In file included from /home/miranda9/.conda/envs/my_new_env/lib/python3.7/site-packages/numpy/core/include/numpy/ndarraytypes.h:1830:0,
from /home/miranda9/.conda/envs/my_new_env/lib/python3.7/site-packages/numpy/core/include/numpy/ndarrayobject.h:12,
from /home/miranda9/.conda/envs/my_new_env/lib/python3.7/site-packages/numpy/core/include/numpy/arrayobject.h:4,
from /tmp/pip-install-bpmeop26/h5py/h5py/api_compat.h:26,
from /tmp/pip-install-bpmeop26/h5py/h5py/defs.c:654:
/home/miranda9/.conda/envs/my_new_env/lib/python3.7/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:17:2: warning: #warning "Using deprecated NumPy API, disable it with " "#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
#warning "Using deprecated NumPy API, disable it with " \
^
In file included from /tmp/pip-install-bpmeop26/h5py/h5py/defs.c:654:0:
/tmp/pip-install-bpmeop26/h5py/h5py/api_compat.h:27:18: fatal error: hdf5.h: No such file or directory
#include "hdf5.h"
^
compilation terminated.
error: command 'gcc' failed with exit status 1
----------------------------------------
Rolling back uninstall of h5py
Moving to /home/miranda9/.conda/envs/my_new_env/lib/python3.7/site-packages/h5py
from /home/miranda9/.conda/envs/my_new_env/lib/python3.7/site-packages/~5py
Moving to /home/miranda9/.conda/envs/my_new_env/lib/python3.7/site-packages/h5py-2.8.0-py3.7.egg-info
from /home/miranda9/.conda/envs/my_new_env/lib/python3.7/site-packages/~5py-2.8.0-py3.7.egg-info
ERROR: Command errored out with exit status 1: /home/miranda9/.conda/envs/my_new_env/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-bpmeop26/h5py/setup.py'"'"'; __file__='"'"'/tmp/pip-install-bpmeop26/h5py/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-hlwpfooj/install-record.txt --single-version-externally-managed --compile --install-headers /home/miranda9/.conda/envs/my_new_env/include/python3.7m/h5py Check the logs for full command output.
Does anyone know how I can successfully install a working torchmeta version on ppc64le (using WML CE 1.7.0)?
related:
gitissue for torchmeta: https://github.com/tristandeleu/pytorch-meta/issues/95
IBM gitissue for torchmeta support: https://github.com/IBM/powerai/issues/269
h5py gitissue for torchmeta: https://github.com/h5py/h5py/issues/1678
IBM h5py support for torchmeta: https://github.com/IBM/powerai/issues/270
| Because there are no h5py wheels for PowerPC, pip is building h5py from source (from the tarball). A source build requires a C compiler plus the Python and HDF5 development headers to be available (hence the missing hdf5.h in your log); see https://docs.h5py.org/en/stable/build.html#source-installation.
Either install h5py from conda or install the required build dependencies.
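A couple of concrete ways to do that (a hedged sketch; the conda-forge channel and the HDF5_DIR build variable are my assumptions, not part of the original answer):
conda install -c conda-forge h5py          # prebuilt package, pulls in HDF5 itself
# or install the HDF5 library and headers first, then point the source build at them:
conda install -c conda-forge hdf5
HDF5_DIR=$CONDA_PREFIX pip install --no-binary=h5py h5py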
| https://stackoverflow.com/questions/64049603/ |
how "data" and "target" are choosen in a federated learning? (PySyft) | i can't understand how in function train() below, the variable (data, target) are choosen.
def train(args, model, device, federated_train_loader, optimizer, epoch):
model.train()
for batch_idx, (data, target) in enumerate(federated_train_loader): # <-- now it is a distributed dataset
model.send(data.location) # <-- NEW: send the model to the right location
I guess they are two tensors representing two random images of the training dataset, but then is the loss function
loss = F.nll_loss(output, target)
calculated at every iteration with a different target?
Also, I have a different question: I trained the network with images of cats, then I tested it with images of cars, and the accuracy reached is 97%. How is this possible? Is it a proper value, or am I doing something wrong?
here is the entire code:
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
import syft as sy # <-- NEW: import the Pysyft library
hook = sy.TorchHook(torch) # <-- NEW: hook PyTorch ie add extra functionalities to support Federated Learning
bob = sy.VirtualWorker(hook, id="bob") # <-- NEW: define remote worker bob
alice = sy.VirtualWorker(hook, id="alice") # <-- NEW: and alice
class Arguments():
def __init__(self):
self.batch_size = 64
self.test_batch_size = 1000
self.epochs = 2
self.lr = 0.01
self.momentum = 0.5
self.no_cuda = False
self.seed = 1
self.log_interval = 30
self.save_model = False
args = Arguments()
use_cuda = not args.no_cuda and torch.cuda.is_available()
torch.manual_seed(args.seed)
device = torch.device("cuda" if use_cuda else "cpu")
kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {}
federated_train_loader = sy.FederatedDataLoader( # <-- this is now a FederatedDataLoader
datasets.MNIST("C:\\users...\\train", train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
]))
.federate((bob, alice)), # <-- NEW: we distribute the dataset across all the workers, it's now a FederatedDataset
batch_size=args.batch_size, shuffle=True, **kwargs)
test_loader = torch.utils.data.DataLoader(
datasets.MNIST("C:\\Users...\\test", train=False, download=True, transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=args.test_batch_size, shuffle=True, **kwargs)
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 20, 5, 1)
self.conv2 = nn.Conv2d(20, 50, 5, 1)
self.fc1 = nn.Linear(4*4*50, 500)
self.fc2 = nn.Linear(500, 10)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.max_pool2d(x, 2, 2)
x = F.relu(self.conv2(x))
x = F.max_pool2d(x, 2, 2)
x = x.view(-1, 4*4*50)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return F.log_softmax(x, dim=1)
def train(args, model, device, federated_train_loader, optimizer, epoch):
model.train()
for batch_idx, (data, target) in enumerate(federated_train_loader): # <-- now it is a distributed dataset
model.send(data.location) # <-- NEW: send the model to the right location
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
model.get() # <-- NEW: get the model back
if batch_idx % args.log_interval == 0:
loss = loss.get() # <-- NEW: get the loss back
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * args.batch_size, len(federated_train_loader) * args.batch_size,
100. * batch_idx / len(federated_train_loader), loss.item()))
def test(args, model, device, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data)
test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss
pred = output.argmax(1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=args.lr) # TODO momentum is not supported at the moment
for epoch in range(1, args.epochs + 1):
train(args, model, device, federated_train_loader, optimizer, epoch)
test(args, model, device, test_loader)
if (args.save_model):
torch.save(model.state_dict(), "mnist_cnn.pt")
| Consider it like this: when you hook torch, all your torch tensors get additional functionality - methods like .send() and .federate(), and attributes like .location and ._objects. Your data and target, which were once torch tensors, become pointers to tensors residing in different VirtualWorker objects due to .federate((bob, alice)).
Now data and target have additional attributes, including .location, which returns the location of the tensor pointed to by the pointer called data/target.
Federated learning sends the global model to this location, as seen in model.send(data.location).
Now, model is a pointer residing at the same location and data is also a pointer residing there. Hence when you take the output as output = model(data), output will also reside there and all we (the central server or in other words, the VirtualWorker called 'me') will get is a pointer to that output.
Now, regarding your doubt on loss calculation, since output and target are both residing in that same location, calculation of loss will also happen there. Same goes for backprop and step.
Finally, you can see model.get(), here is where the central server pulls the remote model using the pointer called model. (I'm not sure if it should be model = model.get() though).
So anything with .get() will be pulled from that worker and returned in our Python statement. Also note that .get() removes the object from its location when called. Hence use .copy().get() if you are going to need it further.
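A tiny illustration of those pointer semantics (a sketch assuming the hooked torch and the bob worker from the question; the tensor values are arbitrary):
x = torch.tensor([1., 2., 3.]).send(bob)  # x is now a pointer; the data lives on bob
print(x.location)                         # <VirtualWorker id:bob ...>
y = x.copy().get()                        # pull a copy back; bob keeps his tensor
z = x.get()                               # pull the original back; it is removed from bob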
| https://stackoverflow.com/questions/64050391/ |
Using both vCPUs with Google Cloud computing. Python code. PyTorch | I am new to cloud computing. I made a virtual machine in Google Cloud, machine type:
e2-highcpu-2 (2 vCPUs, 2 GB memory)
I run a script with the command
python3 simulation1.py
When I look at the monitoring screen, I note that only 50% of the CPU power is used, so I am only using one of my 2 vCPUs. Is there a way to make full use of the computing power?
| It looks like your question boils down to "is Python capable of running on multiple cores?"
And you can find the answer to that question perfectly explained in this post.
Basically:
Python threads cannot take advantage of many cores. This is due to an internal implementation detail called the GIL (global interpreter lock) in the C implementation of python.
You can either use something like multiprocessing, celery or mpi4py to split the parallel work into another process;
Or you can use something like Jython or IronPython to use an alternative interpreter that doesn't have a GIL.
If you are already using any of the tools mentioned above, you could also add more details about your code.
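As a minimal sketch of the multiprocessing route (run_one is a hypothetical stand-in for your simulation's per-task work, not a function from your script):
from multiprocessing import Pool

def run_one(seed):
    ...  # CPU-bound simulation work goes here

if __name__ == '__main__':
    with Pool(processes=2) as pool:   # one worker process per vCPU
        results = pool.map(run_one, range(10))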
| https://stackoverflow.com/questions/64051603/ |
PyTorch linear regression model | I have a multivariate linear regression problem in which each data point looks like this:
y_i = 3 # Some integer between 0 and 20
X_i = [0.5, 80, 0.004, 0.5, 0.789] # A 5 dimensional vector
I can train a simple linear model by using sklearn, something like:
from sklearn import linear_model
ols = linear_model.LinearRegression()
model = ols.fit(X, y)
This gets me an accuracy of ~55% (a linear model is not suitable for the problem, but this is a baseline to demonstrate the feasibility of modelling the problem, and a way for me to learn PyTorch, having used TensorFlow previously).
When I try to train a linear model using PyTorch I am defining the model as:
class TwoLayerNet(torch.nn.Module):
def __init__(self, D_in, D_out):
super(TwoLayerNet, self).__init__()
self.linear1 = torch.nn.Linear(D_in, D_out)
def forward(self, x):
y_pred = self.linear1(x)
return y_pred
D_in, D_out = 5, 1
model = TwoLayerNet(D_in, D_out)
And training as:
epochs = 10
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
for epoch in range(epochs):
for n, batch in enumerate(batches):
X = []
y = []
for values in batch:
X.append(values[0])
y.append(values[1])
X = torch.from_numpy(np.asarray(X))
y = torch.from_numpy(np.asarray(y))
# Forward pass: Compute predicted y by passing x to the model
optimizer.zero_grad()
y_pred = model(X)
# Compute and print loss
loss = criterion(y_pred, y)
if n % 100 == 99:
print(n, loss.item())
# Zero gradients, perform a backward pass, and update the weights.
loss.backward()
optimizer.step()
This is just some code from the PyTorch documentation which I have adjusted. The current setup only achieves ~25%, nowhere near the accuracy that I would expect from the linear model. Am I doing something incorrect in the model training in PyTorch?
| tam63,
you are missing an activation function in the model definition. Replace
y_pred = self.linear1(x)
with
y_pred = F.relu(self.linear1(x))
(this assumes import torch.nn.functional as F at the top of the file)
There are a few more things that may go wrong.
For instance: (1) too low a learning rate, or (2) too few layers (add one more). If you are familiar with TF as you say, try the same problem in TF, and once you have good results, translate it into PyTorch with the same network structure and the same hyperparameters.
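Putting those two suggestions together, a sketch of a genuinely two-layer network (the hidden size H=32 is an arbitrary choice of mine, not from the original answer):
import torch
import torch.nn.functional as F

class TwoLayerNet(torch.nn.Module):
    def __init__(self, D_in, H, D_out):
        super(TwoLayerNet, self).__init__()
        self.linear1 = torch.nn.Linear(D_in, H)
        self.linear2 = torch.nn.Linear(H, D_out)

    def forward(self, x):
        h = F.relu(self.linear1(x))  # non-linearity between the two layers
        return self.linear2(h)

model = TwoLayerNet(5, 32, 1)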
| https://stackoverflow.com/questions/64052643/ |
PyTorch: difference between type(a), a.type, a.type() | suppose a is a tensor, then what's the difference between:
type(a)
a.type
a.type()
I couldn't find a document differentiating these.
| type is the Python built-in.
It returns the type (class) of the object, e.g. <class 'torch.Tensor'>.
torch.Tensor.type (called as x.type()) is PyTorch's built-in method.
It returns the type of the data stored inside the tensor, e.g. 'torch.DoubleTensor'.
Edit:
And about x.type() vs x.type -
When you write a function name with parentheses, x.type(), it actually executes the function and returns its value, whereas without parentheses, x.type is simply a reference to the function.
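A quick demonstration of all three (the dtype chosen here is just an example):
import torch
a = torch.ones(2, 2, dtype=torch.float64)
print(type(a))    # <class 'torch.Tensor'>  - the Python class of the object
print(a.type())   # torch.DoubleTensor      - the data type stored in the tensor
print(a.type)     # <bound method Tensor.type of ...> - a reference, not a call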
| https://stackoverflow.com/questions/64056979/ |
Jupyter doesn't recognize torchaudio | I am trying to install torchaudio to use in a Jupyter notebook, but when I import it I get the error:
ModuleNotFoundError: No module named 'torchaudio'
I tried to import it in a .py file that the notebook uses, but with no success. I thought maybe it wasn't installed properly, but when I try to install it using pip install torchaudio I get "requirement already satisfied".
I'm lost; how can I import it successfully?
| pip install torchaudio
should return:
Collecting torchaudio
Downloading https://files.pythonhosted.org/packages/96/34/c651430dea231e382ddf2eb5773239bf4885d9528f640a4ef39b12894cb8/torchaudio-0.6.0-cp36-cp36m-manylinux1_x86_64.whl (6.7MB)
|████████████████████████████████| 6.7MB 2.4MB/s
Requirement already satisfied: torch==1.6.0 in /usr/local/lib/python3.6/dist-packages (from torchaudio) (1.6.0+cu101)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from torch==1.6.0->torchaudio) (0.16.0)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from torch==1.6.0->torchaudio) (1.18.5)
Installing collected packages: torchaudio
Successfully installed torchaudio-0.6.0
And everything should work as expected.
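If pip reports success but the notebook still cannot import the package, the kernel may be running a different Python than the pip on your PATH; a hedged way to rule that out is to install against the kernel's own interpreter from inside a notebook cell:
import sys
!{sys.executable} -m pip install torchaudio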
| https://stackoverflow.com/questions/64058958/ |
matching PyTorch tensor dimensions | I am having some issues with regard to the dimensionality of my tensors in my training function at present. I am using the MNIST dataset, so 10 possible targets, and originally wrote the prototype code using a training batch size of 10, which was in retrospect not the wisest choice. It gave poor results during some earlier tests, and increasing the number of training iterations saw no benefit. Upon trying to then increase the batch size, I realised that what I had written was not that general, and I was likely never training it on the proper data. Below is my training function:
def Train(tLoops, Lrate):
for _ in range(tLoops):
tempData = train_data.view(batch_size_train, 1, 1, -1)
output = net(tempData)
trainTarget = train_targets
criterion = nn.MSELoss()
print("target:", trainTarget.size())
print("Output:", output.size())
loss = criterion(output, trainTarget.float())
# print("error is:", loss)
net.zero_grad() # zeroes the gradient buffers of all parameters
loss.backward()
for j in net.parameters():
j.data.sub_(j.grad.data * Lrate)
of which the print functions return
target: torch.Size([100])
Output: torch.Size([100, 1, 1, 10])
before the error message on the line where loss is calculated;
RuntimeError: The size of tensor a (10) must match the size of tensor b (100) at non-singleton dimension 3
The first print, target, is a 1-dimensional list of the respective ground-truth values for each image. Output contains the output of the neural net for each of those 100 samples, so effectively a 100 x 10 list; however, from flattening and reshaping the data from 28 x 28 to 1 x 784 earlier, I seem to have unnecessary extra dimensions. Does PyTorch provide a way to remove these? I couldn't find anything in the documentation. Or is there something else that could be my issue?
| There are several problems in your training script. I will address each of them below.
First, you should NOT do data batching by hand. PyTorch/torchvision have functions for that; use a dataset and a data loader: https://pytorch.org/tutorials/recipes/recipes/loading_data_recipe.html.
You should also NEVER update the parameters of your network by hand. Use an Optimizer: https://pytorch.org/docs/stable/optim.html. In your case, SGD without momentum will have the same effect.
The dimensionality of your input seems to be wrong: for MNIST, an input tensor should be (batch_size, 1, 28, 28), or (batch_size, 784) if you're training an MLP. Furthermore, the output of your network should be (batch_size, 10).
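For the specific question of removing the extra singleton dimensions, torch.squeeze does exactly that, and view handles the flattening (a short illustration, separate from the asker's code):
import torch
x = torch.ones(100, 1, 1, 10)
print(x.squeeze().shape)                      # torch.Size([100, 10]) - size-1 dims removed
images = torch.ones(100, 1, 28, 28)
print(images.view(images.size(0), -1).shape)  # torch.Size([100, 784]) - flattened for an MLP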
| https://stackoverflow.com/questions/64067203/ |
PyTorch: What happens to memory when moving a tensor to the GPU? | I'm trying to understand what happens to both RAM and GPU memory when a tensor is sent to the GPU.
In the following code sample, I create two tensors: a large tensor arr = torch.ones((10000, 10000)) and a small tensor c = torch.ones(1). Tensor c is sent to the GPU inside the target function step, which is called by multiprocessing.Pool. In doing so, each child process uses 487 MB on the GPU and RAM usage goes to 5 GB. Note that the large tensor arr is created just once before calling Pool and is not passed as an argument to the target function. RAM usage does not explode when everything is on the CPU.
I have the following questions on this example:
I'm sending torch.ones(1) to the GPU and yet it consumes 487 MB of GPU memory. Does CUDA allocate a minimum amount of memory on the GPU even if the underlying tensor is very small? GPU memory is not a problem for me, and this is just for me to understand how the allocation is done.
The problem lies in RAM usage. Even though I am sending a small tensor to the GPU, it appears as if everything in memory (the large tensor arr) is copied for every child process (possibly to pinned memory). So when a tensor is sent to the GPU, what objects are copied to pinned memory? I'm missing something here, as it does not make sense to prepare everything to be sent to the GPU when I'm only sending one particular object.
Thanks!
from multiprocessing import get_context
import time
import torch
dim = 10000
sleep_time = 2
npe = 4 # number of parallel executions
# cuda
if torch.cuda.is_available():
dev = 'cuda:0'
else:
dev = "cpu"
device = torch.device(dev)
def step(i):
c = torch.ones(1)
# comment the line below to see no memory increase
c = c.to(device)
time.sleep(sleep_time)
if __name__ == '__main__':
arr = torch.ones((dim, dim))
# create list of inputs to be executed in parallel
inp = list(range(npe))
# sleep added before and after launching multiprocessing to monitor the memory consumption
print('before pool') # to check memory with top or htop
time.sleep(sleep_time)
context = get_context('spawn')
with context.Pool(npe) as pool:
print('after pool') # to check memory with top or htop
time.sleep(sleep_time)
pool.map(step, inp)
time.sleep(sleep_time)
|
I'm sending torch.ones(1) to the GPU and yet it consumes 487 MB of GPU memory. Does CUDA allocate a minimum amount of memory on the GPU even if the underlying tensor is very small?
The CUDA device runtime reserves memory for all sorts of things at context establishment, some of which are fixed in size and some of which are variable and can be controlled by API calls (see here for some more information). It is completely normal that the first API call which explicitly or lazily establishes a context on the device will produce a jump in GPU memory consumption. In this case I imagine the first tensor creation is triggering this memory overhead allocation. This is a property of the CUDA runtime and not PyTorch or the tensor.
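A quick way to confirm that the jump comes from the context rather than the tensor itself (a hedged sketch; the exact numbers vary by GPU, driver and PyTorch version):
import torch
torch.cuda.init()                      # eagerly establish the CUDA context
print(torch.cuda.memory_allocated())   # bytes held by tensors: 0
c = torch.ones(1, device='cuda')
print(torch.cuda.memory_allocated())   # ~512 bytes - the tensor itself is tiny;
                                       # the hundreds of MB in nvidia-smi are context overhead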
| https://stackoverflow.com/questions/64068771/ |
What does this strange C++ syntax do? | From what I understand, 3 arguments are being passed to m.def(). I don't understand the syntax of the second argument passed, i.e. [](Observable...){...}. What does this mean?
m.def(
"DumpHistogramFile",
[](Observable<NetBase>::Observer* ob) {
HistogramNetObserver* hist_ob =
dynamic_cast_if_rtti<HistogramNetObserver*>(ob);
hist_ob->DumpHistogramFile();
},
pybind11::arg("ob"));
I found this syntax in the PyTorch source code.
| The second argument is a lambda expression: an anonymous function defined in place. The leading [] is the (empty) capture list, the parentheses hold the parameter list, and the braces hold the body. Here it takes a pointer to an Observer and returns nothing; when executed, it calls DumpHistogramFile.
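A minimal standalone illustration of the same syntax (plain C++11, independent of the PyTorch code):
#include <iostream>

int main() {
    // [] = empty capture list, (int x) = parameter list, { ... } = body
    auto f = [](int x) { std::cout << x * 2 << '\n'; };
    f(21);  // prints 42
    return 0;
}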
| https://stackoverflow.com/questions/64073352/ |