instruction | input | output | source
---|---|---|---|
PyTorch normalize two sets of gradients during training | In this GAN tutorial, if you scroll down to the training loop you can see they combine the gradients like this:
errD = errD_real + errD_fake, where errD_real = criterion(output, label), errD_fake = criterion(output, label), and criterion = nn.BCELoss(). I want to do the same thing, but before doing a backward pass I want to normalize both gradients to the lower Euclidean norm of the two. How would I do that?
I know I can access the gradients of each weight individually on netD by printing out netD.weight.grad, but is there some way to normalize them to the lower Euclidean norm of the two?
Here's the part of the training loop I'm talking about:
for epoch in range(num_epochs):
    # For each batch in the dataloader
    for i, data in enumerate(dataloader, 0):
        ############################
        # (1) Update D network: maximize log(D(x)) + log(1 - D(G(z)))
        ###########################
        ## Train with all-real batch
        netD.zero_grad()
        # Format batch
        real_cpu = data[0].to(device)
        b_size = real_cpu.size(0)
        label = torch.full((b_size,), real_label, device=device)
        # Forward pass real batch through D
        output = netD(real_cpu).view(-1)
        # Calculate loss on all-real batch
        errD_real = criterion(output, label)
        # Calculate gradients for D in backward pass
        errD_real.backward()
        D_x = output.mean().item()
        ## Train with all-fake batch
        # Generate batch of latent vectors
        noise = torch.randn(b_size, nz, 1, 1, device=device)
        # Generate fake image batch with G
        fake = netG(noise)
        label.fill_(fake_label)
        # Classify all fake batch with D
        output = netD(fake.detach()).view(-1)
        # Calculate D's loss on the all-fake batch
        errD_fake = criterion(output, label)
        # Calculate the gradients for this batch
        errD_fake.backward()
        D_G_z1 = output.mean().item()
        # Add the gradients from the all-real and all-fake batches
        errD = errD_real + errD_fake
        # Update D
        optimizerD.step()
        ...
| You mentioned two things which are incorrect.
errD = errD_real + errD_fake combines two objective functions, not gradients.
You need to change the gradients after the backward pass, not before. During the backward pass the gradients are computed, and you can then modify them before calling optimizer.step(), which updates the model parameters.
Answer to your question:
I believe there is no direct API support from PyTorch for this. However, you can easily modify the gradients yourself after the backward pass. To compute the Euclidean norm, you can use the norm function, and then simply divide the gradients in place.
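A minimal sketch of how the loop's two backward passes could be adapted (the gradient cloning and the single global scaling factor are my assumptions, not something the answer specifies):
# after computing errD_real (all-real forward pass):
errD_real.backward()
real_grads = [p.grad.clone() for p in netD.parameters()]
netD.zero_grad()

# after computing errD_fake (all-fake forward pass):
errD_fake.backward()
fake_grads = [p.grad.clone() for p in netD.parameters()]

# rescale both gradient sets to the lower of the two Euclidean norms
real_norm = torch.norm(torch.cat([g.flatten() for g in real_grads]))
fake_norm = torch.norm(torch.cat([g.flatten() for g in fake_grads]))
target = torch.min(real_norm, fake_norm)
for p, g_r, g_f in zip(netD.parameters(), real_grads, fake_grads):
    p.grad = g_r * (target / real_norm) + g_f * (target / fake_norm)
optimizerD.step()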
| https://stackoverflow.com/questions/57931967/ |
How Pytorch Tensor get the index of elements? | I have 2 Tensors named x and list and their definitions are below:
x = torch.tensor(3)
list = torch.tensor([1,2,3,4,5])
Now I want to get the index of element x from list. The expected output is an Integer:
2
How can I do this in an easy way?
| import torch
x = torch.tensor(3)
list = torch.tensor([1,2,3,4,5])
idx = (list == x).nonzero().flatten()
print (idx.tolist()) # [2]
list = torch.tensor([1,2,3,3,5])
idx = (list == x).nonzero().flatten()
print (idx.tolist()) # [2, 3]
| https://stackoverflow.com/questions/57933781/ |
How to save GPU memory usage in PyTorch | In PyTorch I wrote a very simple CNN discriminator and trained it. Now I need to deploy it to make predictions. But the target machine has little GPU memory, and I got an out-of-memory error. So I thought I could set requires_grad = False to prevent PyTorch from storing the gradient values, but it didn't make any difference.
There are about 5 million parameters in my model, yet predicting a single batch of input consumes about 1.2 GB of memory. I don't see why it should need that much.
The question is: how do I reduce GPU memory usage when I just want to use my model to make predictions?
Here is a demo. I use discriminator.requires_grad_ to disable/enable autograd for all parameters, but it seems to make no difference.
import torch
import numpy as np
import torch.nn as nn
import torch.nn.functional as functional
from pynvml.smi import nvidia_smi

nvsmi = nvidia_smi.getInstance()

def getMemoryUsage():
    usage = nvsmi.DeviceQuery("memory.used")["gpu"][0]["fb_memory_usage"]
    return "%d %s" % (usage["used"], usage["unit"])

print("Before GPU Memory: %s" % getMemoryUsage())

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        # trainable layers
        # input: 2x256x256
        self.conv1 = nn.Conv2d(2, 8, 5, padding=2)    # 8x256x256
        self.pool1 = nn.MaxPool2d(2)                  # 8x128x128
        self.conv2 = nn.Conv2d(8, 32, 5, padding=2)   # 32x128x128
        self.pool2 = nn.MaxPool2d(2)                  # 32x64x64
        self.conv3 = nn.Conv2d(32, 96, 5, padding=2)  # 96x64x64
        self.pool3 = nn.MaxPool2d(4)                  # 96x16x16
        self.conv4 = nn.Conv2d(96, 256, 5, padding=2) # 256x16x16
        self.pool4 = nn.MaxPool2d(4)                  # 256x4x4
        self.num_flat_features = 4096
        self.fc1 = nn.Linear(4096, 1024)
        self.fc2 = nn.Linear(1024, 256)
        self.fc3 = nn.Linear(256, 1)
        # loss function
        self.loss = nn.MSELoss()
        # other properties
        self.requires_grad = True

    def forward(self, x):
        y = x
        y = self.conv1(y)
        y = self.pool1(y)
        y = functional.relu(y)
        y = self.conv2(y)
        y = self.pool2(y)
        y = functional.relu(y)
        y = self.conv3(y)
        y = self.pool3(y)
        y = functional.relu(y)
        y = self.conv4(y)
        y = self.pool4(y)
        y = functional.relu(y)
        y = y.view((-1, self.num_flat_features))
        y = self.fc1(y)
        y = functional.relu(y)
        y = self.fc2(y)
        y = functional.relu(y)
        y = self.fc3(y)
        y = torch.sigmoid(y)
        return y

    def predict(self, x, score_th=0.5):
        if len(x.shape) == 3:
            singlebatch = True
            x = x.view([1] + list(x.shape))
        else:
            singlebatch = False
        y = self.forward(x)
        label = (y > float(score_th))
        if singlebatch:
            y = y.view(list(y.shape)[1:])
        return label, y

    def requires_grad_(self, requires_grad=True):
        for parameter in self.parameters():
            parameter.requires_grad_(requires_grad)
        self.requires_grad = requires_grad

x = torch.cuda.FloatTensor(np.zeros([2, 256, 256]))
discriminator = Discriminator()
discriminator.to("cuda:0")
# comment/uncomment this line to make difference
discriminator.requires_grad_(False)
discriminator.predict(x)
print("Requires grad", discriminator.requires_grad)
print("After GPU Memory: %s" % getMemoryUsage())
With the line discriminator.requires_grad_(False) commented out, I got this output:
Before GPU Memory: 6350MiB
Requires grad True
After GPU Memory: 7547MiB
While with the line uncommented, I got:
Before GPU Memory: 6350MiB
Requires grad False
After GPU Memory: 7543MiB
| You can use pynvml.
This Python tool is made by Nvidia, so you can query the GPU from Python like this:
from pynvml.smi import nvidia_smi
nvsmi = nvidia_smi.getInstance()
nvsmi.DeviceQuery('memory.free, memory.total')
You can also always execute:
torch.cuda.empty_cache()
to empty the cache; you will find even more free memory that way.
Before calling torch.cuda.empty_cache(), if you have objects you don't use anymore, you can release them first:
obj = None
and after that call:
gc.collect()
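Putting these pieces together, a minimal sketch (note that gc must be imported first):
import gc
import torch

obj = None                # drop the reference to an object you no longer need
gc.collect()              # let Python reclaim the unreferenced objects
torch.cuda.empty_cache()  # return cached, unused GPU memory to the driver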
| https://stackoverflow.com/questions/57942507/ |
plt.imshow() gives TypeError: Image data of dtype object cannot be converted to float using PyTorch | I am trying to create a custom Dataset Processor for a set of images. However, when I try to view the images in my dataset, I get hit with the TypeError: Image data of dtype object cannot be converted to float.
I checked whether I am passing a PIL Image into the plt.imshow() function, and I am.
class DatasetProcessing(Dataset):
    def __init__(self, input_data, output_data, transform=None):
        self.transform = transform
        self.input_data = input_data.reshape((-1, 64, 64)).astype(np.float32)[:, :, :, None]
        self.output_data = output_data

    def __getitem__(self, index):
        return self.transform(self.input_data[index]), self.output_data[index]

    def __len__(self):
        return len(list(self.input_data))

transform = transforms.Compose([transforms.ToPILImage()])
dset_train = DatasetProcessing(X_slices_train, Y_train, transform)
train_loader = torch.utils.data.DataLoader(dset_train, batch_size=4,
                                           shuffle=True, num_workers=4)

plt.figure(figsize = (16, 4))
for num, x in enumerate(dset_train):
    plt.subplot(1, 6, num + 1)
    plt.axis('off')
    print(x)
    plt.imshow(np.asarray(x))
    plt.title(y_train[num])
I expected to get pictures of my dataset, but instead I get the following error message:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-10-8b8caac49d97> in <module>
4 plt.axis('off')
5 print(x)
----> 6 plt.imshow(np.asarray(x))
7 plt.title(y_train[num])
~/anaconda3/lib/python3.7/site-packages/matplotlib/pyplot.py in imshow(X, cmap, norm, aspect, interpolation, alpha, vmin, vmax, origin, extent, shape, filternorm, filterrad, imlim, resample, url, data, **kwargs)
2675 filternorm=filternorm, filterrad=filterrad, imlim=imlim,
2676 resample=resample, url=url, **({"data": data} if data is not
-> 2677 None else {}), **kwargs)
2678 sci(__ret)
2679 return __ret
~/anaconda3/lib/python3.7/site-packages/matplotlib/__init__.py in inner(ax, data, *args, **kwargs)
1587 def inner(ax, *args, data=None, **kwargs):
1588 if data is None:
-> 1589 return func(ax, *map(sanitize_sequence, args), **kwargs)
1590
1591 bound = new_sig.bind(ax, *args, **kwargs)
~/anaconda3/lib/python3.7/site-packages/matplotlib/cbook/deprecation.py in wrapper(*args, **kwargs)
367 f"%(removal)s. If any parameter follows {name!r}, they "
368 f"should be pass as keyword, not positionally.")
--> 369 return func(*args, **kwargs)
370
371 return wrapper
~/anaconda3/lib/python3.7/site-packages/matplotlib/cbook/deprecation.py in wrapper(*args, **kwargs)
367 f"%(removal)s. If any parameter follows {name!r}, they "
368 f"should be pass as keyword, not positionally.")
--> 369 return func(*args, **kwargs)
370
371 return wrapper
~/anaconda3/lib/python3.7/site-packages/matplotlib/axes/_axes.py in imshow(self, X, cmap, norm, aspect, interpolation, alpha, vmin, vmax, origin, extent, shape, filternorm, filterrad, imlim, resample, url, **kwargs)
5658 resample=resample, **kwargs)
5659
-> 5660 im.set_data(X)
5661 im.set_alpha(alpha)
5662 if im.get_clip_path() is None:
~/anaconda3/lib/python3.7/site-packages/matplotlib/image.py in set_data(self, A)
676 not np.can_cast(self._A.dtype, float, "same_kind")):
677 raise TypeError("Image data of dtype {} cannot be converted to "
--> 678 "float".format(self._A.dtype))
679
680 if not (self._A.ndim == 2
TypeError: Image data of dtype object cannot be converted to float
| Your dset_train yields self.transform(self.input_data[index]), self.output_data[index]; if I understood correctly, self.transform(self.input_data[index]) is an image tensor (data) and self.output_data[index] is a label, but here:
plt.imshow(np.asarray(x))
you are passing x as a whole, which is actually the (data, label) tuple.
So, you need to unpack it first:
plt.figure(figsize = (16, 4))
for num, x in enumerate(dset_train):
    data, label = x
    plt.subplot(1, 6, num + 1)
    plt.axis('off')
    print(x)
    plt.imshow(np.asarray(data))
    plt.title(y_train[num])
EDIT:
Why do I have to unpack x?
You're inheriting from PyTorch's Dataset, and according to the docs:
All datasets that represent a map from keys to data samples should subclass it. All subclasses should override __getitem__(), supporting fetching a data sample for a given key.
In your DatasetProcessing class, __getitem__() returns a tuple of 2 items: self.transform(self.input_data[index]) and self.output_data[index]; the first one is the data, the second one is the corresponding label. That's why you need to unpack it like data, label = x: your DatasetProcessing dataset yields a data sample and a label.
Is there any documentation/tutorials you can link me to?
I can recommend you this links:
Data Loading and Processing Tutorial
Dataset docs
torch.utils.data docs
| https://stackoverflow.com/questions/57943032/ |
How is a robust background removal implemented? | I found that a deep-learning-based method (e.g., 1) is much more robust than a non-deep-learning-based method (e.g., 2, using OpenCV).
https://www.remove.bg
How do I remove the background from this kind of image?
In the OpenCV example, Canny is used to detect the edges. But this step can be very sensitive to the image. The contour detection may end up with wrong contours. It is also difficult to determine which contours should be kept.
How is a robust deep-learning method implemented? Is there any good example code? Thanks.
| For that to work you need to use a U-Net. You can search for it on GitHub.
The U-Net transform is I -> I:
it maps image space to image space (an image of the same or similar size).
You need to have, say, 10,000 images with the background removed: people (including long-haired people), cats, cars, shoes, T-shirts, etc.
You then composite different backgrounds onto all these images as the source, and the prediction targets are the images with the background removed.
You can also train a segmentation model, and once you have found the foreground you can remove the background.
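As a rough illustration of that segmentation route, a sketch using torchvision's pretrained DeepLabV3 (the model choice, the input file name, and the Pascal VOC "person" class index 15 are assumptions, not part of the answer):
import torch
import torchvision
from torchvision import transforms
from PIL import Image

# pretrained semantic segmentation model (Pascal VOC classes)
model = torchvision.models.segmentation.deeplabv3_resnet101(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open('input.jpg').convert('RGB')  # hypothetical input file
x = preprocess(img).unsqueeze(0)

with torch.no_grad():
    out = model(x)['out'][0]   # (21, H, W) per-class scores
mask = out.argmax(0) == 15     # 15 = "person" in Pascal VOC (assumed target class)

foreground = transforms.ToTensor()(img) * mask.unsqueeze(0)  # zero out the background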
| https://stackoverflow.com/questions/57943845/ |
How can I disable gradient updates for some modules in autograd backpropagation? | I'm building a multi-model neural network for reinforcement learning to include an action network, a world model network, and a critic. The idea is train the world model to emulate whatever simulation you are trying to master based on input from the action network and the previous state, to train the critic to maximize the Bellman equation (total reinforcement over time) based on the world model output, and then backpropagate the critic value through the world model to provide gradient targets for training the actions. So - from some state, the action network outputs an action which is fed into the model to generate the next state, and that state feeds into the critic network for evaluation against some goal state.
For all this to work, I must use 3 separate loss functions, one for each network, and they all add something to the gradients in one or more networks, but they can be in conflict. For example, to train the world model I use a target from an environmental simulation, and for the critic I use a target of the current state reward + discount * next state forecast value. However, to train the actor I just use the negative critic value as a loss and backpropagate all the way through all three models to calibrate the best action.
I can make this work without any batching by zeroing out gradients incrementally, but that is inefficient and doesn't let me accumulate gradients for any kind of "time-series batching" optimizer update step. Each model has its own trainable parameters, but the execution graph flows through all three networks. So inside the calibration loop, after firing the networks in sequence:
...
if self.actor.calibrating:
    self.actor.optimizer.zero_grad()
    #Pick loss For maximizing the value of all actions
    loss = -self.critic.value
    #Backpropagate through all three networks to train actor output
    #How do I stop the critic and model networks from incrementing their gradient values?
    loss.backward(retain_graph=True)
    self.actor.optimizer.step()
if self.model.calibrating:
    self.model.optimizer.zero_grad()
    #Reduce loss for ambiguous actions
    loss = self.model.get_loss() * self.actor.get_confidence()**2
    #How can I block this from backpropagating through action network?
    loss.backward(retain_graph=True)
    self.model.optimizer.step()
if self.critic.calibrating:
    self.critic.optimizer.zero_grad()
    #Reduce loss for ambiguous actions
    loss = self.critic.get_loss(self.goal) * self.actor.get_confidence()**2
    #How do I stop this from backpropagating through the model and action networks?
    loss.backward(retain_graph=True)
    self.critic.optimizer.step()
...
Finally - my question is in two parts:
How can I temporarily stop loss.backward() at a given layer without detaching it forever?
How can I block loss.backward() from updating some gradients where I'm just flowing through a model to get gradients for another model?
| Got this figured out thanks to a suggestion from a colleague to try the requires_grad setting. (I had assumed that would break the execution graph, but it doesn't)
So - to answer my own two questions:
If you calibrate the chained models in the correct order, you can detach them one at a time so that loss.backward() doesn't run over models that aren't needed. I was thinking that this would break the graph but... this is Pytorch, not Tensorflow 1.x and the graph is regenerated on every forward pass anyway. Silly me for missing this yesterday.
If you set requires_grad to False for a model (or a layer or an individual weight) then loss.backward() will STILL traverse the entire connected graph but it will leave those individual gradients as they were while still setting any gradients earlier in the graph. Exactly what I wanted.
This code works to minimize the execution of unnecessary graph traversals and gradient updates. I still need to refactor it for staggered updates over time so that it can accumulate gradients for several cycles before stepping the optimizers, but this definitely works as intended.
#Step through all models in a chain to create gradient paths from critic back through the world model, to the actor.
def step(self):
    #Get the current state from the simulation
    state = self.world.state
    #Fire the actor to select a softmax action.
    self.actor(state)
    #run the world simulation on that action.
    self.world.step(self.actor.action)
    #Combine the action and starting state as input to the world model.
    if self.actor.calibrating:
        action_state = torch.cat([self.actor.value, state], dim=0)
    else:
        #Push softmax action closer to 1.0
        action_state = torch.cat([self.actor.hard_value, state], dim=0)
    #Run the model and then the critic on the action_state
    self.critic(self.model(action_state))
    if self.actor.calibrating:
        self.actor.optimizer.zero_grad()
        self.model.requires_grad = False
        self.critic.requires_grad = False
        #Pick loss For maximizing the value of the action choice
        loss = -self.critic.value * self.actor.get_confidence()
        loss.backward(retain_graph=True)
        self.actor.optimizer.step()
    if self.model.calibrating:
        #Don't need to backpropagate through actor again
        self.actor.value.detach_()
        self.model.optimizer.zero_grad()
        self.model.requires_grad = True
        #Reduce loss for ambiguous actions
        loss = self.model.get_loss() * self.actor.get_confidence()**2
        loss.backward(retain_graph=True)
        self.model.optimizer.step()
    if self.critic.calibrating:
        #Don't need to backpropagate through the model or actor again
        self.model.value.detach_()
        self.critic.optimizer.zero_grad()
        self.critic.requires_grad = True
        #Reduce loss for ambiguous actions
        loss = self.critic.get_loss(self.goal) * self.actor.get_confidence()**2
        loss.backward(retain_graph=True)
        self.critic.optimizer.step()
| https://stackoverflow.com/questions/57945356/ |
any script to test the installation of Pytorch | I have installed the pytorch, and would like to check are there any script to test whether the installation is correct, e.g., whether it can enable CUDA or not, etc?
| Coming to your 1st question: in your Python script, just add
import torch
If this gives "ModuleNotFoundError: No module named 'torch'", then your PyTorch installation is not complete.
For your 2nd question, to check whether your PyTorch is using CUDA, use:
torch.cuda.is_available()
This will return True if your PyTorch is using CUDA.
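Putting both checks into one minimal script (a sketch):
import torch  # fails with ModuleNotFoundError if the installation is broken

print(torch.__version__)          # the installed PyTorch version
print(torch.cuda.is_available())  # True if PyTorch can use CUDA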
| https://stackoverflow.com/questions/57977880/ |
Pytorch tensor multiplication with Float tensor giving wrong answer | I am seeing some strange behavior when i multiply two pytorch tensors.
x = torch.tensor([99397544.0])
y = torch.tensor([0.1])
x * y
This outputs
tensor([9939755.])
However, the answer should be 9939754.4
| By default, the tensor dtype in PyTorch is torch.float32. Changing it to torch.float64 will give the right result.
x = torch.tensor([99397544.0], dtype=torch.float64)
y = torch.tensor([0.1], dtype=torch.float64)
x * y
# tensor([9939754.4000])
The mismatched result for torch.float32 is caused by rounding error: that dtype does not have enough precision to represent the result exactly.
What Every Computer Scientist Should Know About Floating-Point Arithmetic
| https://stackoverflow.com/questions/57982376/ |
embedding layer outputs nan | I am trying to learn a seq2seq model.
An embedding layer is located in the encoder, and it sometimes outputs nan values after some iterations.
I cannot identify the reason.
How can I solve this?
The problem is the first emb_layer in the forward function in the code below.
class TransformerEncoder(nn.Module):
    def __init__(self, vocab_size, hidden_size=1024, num_layers=6, dropout=0.2, input_pad=1, batch_first=False, embedder=None, init_weight=0.1):
        super(TransformerEncoder, self).__init__()
        self.input_pad = input_pad
        self.vocab_size = vocab_size
        self.num_layers = num_layers
        self.embedder = embedder
        if embedder is not None:
            self.emb_layer = embedder
        else:
            self.emb_layer = nn.Embedding(vocab_size, hidden_size, padding_idx=1)
        self.positional_encoder = PositionalEncoder()
        self.transformer_layers = nn.ModuleList()
        for _ in range(num_layers):
            self.transformer_layers.append(
                TransformerEncoderBlock(num_heads=8, embedding_dim=1024, dropout=dropout))

    def set_mask(self, inputs):
        self.input_mask = (inputs == self.input_pad).unsqueeze(1)

    def forward(self, inputs):
        x = self.emb_layer(inputs)
        x = self.positional_encoder(x)
| It is usually the inputs, more than the weights, which tend to become nan (going either too high or too low). Maybe they are incorrect to start with and worsen after some gradient steps. You can identify these inputs by running the tensor or np.array through a simple condition check like:
print("Inp value too high") if len(bert_embeddings[bert_embeddings>1000]) > 1 else None
A common beginner mistake is to use torch.empty instead of torch.zeros. This invariably leads to nan over time.
If all your inputs are good, then it is the vanishing or exploding gradients issue. See if the problem worsens after a few iterations. Explore different activations or clip the gradients, which usually fixes these types of issues. If you are using the latest optimizers, you usually need not worry about adjusting the learning rate.
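A minimal sketch of the gradient clipping mentioned above (the max_norm value of 1.0 is an arbitrary assumption):
loss.backward()
# rescale all gradients in place so their combined L2 norm is at most max_norm
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()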
| https://stackoverflow.com/questions/57986783/ |
Loading .npy files as dataset for pytorch | I have preprocessed data in .npy files, let's call it X.npy for raw data and Y.npy for labels. They're organized to match every element from both files (first element from X has first label from Y etc.). How can I load it as dataset using torch.utils.data.DataLoader? I'm very new to pytorch, and any help will be useful.
| You could also use DatasetFolder, which basically is the underlying class of ImageFolder. Using this class you can provide your own file extensions and loader to load the samples.
def npy_loader(path):
    return torch.from_numpy(np.load(path))
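A sketch of how that loader could be plugged in (DatasetFolder expects one file per sample under a data/&lt;class_name&gt;/ layout, which is an assumption about how your X.npy/Y.npy data would be reorganized):
import numpy as np
import torch
import torchvision

dataset = torchvision.datasets.DatasetFolder(
    root='data/',             # hypothetical layout: data/<class_name>/<sample>.npy
    loader=npy_loader,
    extensions=('.npy',),
)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)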
| https://stackoverflow.com/questions/57989716/ |
GroupNorm is considerably slower and consumes higher GPU memory than BatchNorm in Pytorch | I use GroupNorm in pytorch instead of BatchNorm and keep all the others (network architecture) unchanged. It shows that in Imagenet dataset, using resnet50 architecture, GroupNorm is 40% slower than BatchNorm, and consumes 33% more GPU memory than BatchNorm. I am really confused because GroupNorm shouldn’t need more calculation than BatchNorm. The details are listed below.
For details of Group Normalization, one can see this paper: https://arxiv.org/pdf/1803.08494.pdf
For BatchNorm, one minibatch consumes 12.8 seconds with GPU memory 7.51GB;
For GroupNorm, one minibatch consumes 17.9 seconds with GPU memory 10.02GB.
I use the following code to convert all the BatchNorm layers to GroupNorm layers.
def convert_bn_model_to_gn(module, num_groups=16):
    """
    Recursively traverse module and its children to replace all instances of
    ``torch.nn.modules.batchnorm._BatchNorm`` with :class:`torch.nn.GroupNorm`.

    Args:
        module: your network module
        num_groups: num_groups of GN
    """
    mod = module
    if isinstance(module, nn.modules.batchnorm._BatchNorm):
        mod = nn.GroupNorm(num_groups, module.num_features,
                           eps=module.eps, affine=module.affine)
        # mod = nn.modules.linear.Identity()
        if module.affine:
            mod.weight.data = module.weight.data.clone().detach()
            mod.bias.data = module.bias.data.clone().detach()
    for name, child in module.named_children():
        mod.add_module(name, convert_bn_model_to_gn(
            child, num_groups=num_groups))
    del module
    return mod
| Yes, you are right: GN does use more resources than BN. I'm guessing this is because it has to calculate the mean and variance for every group of channels, whereas BN only has to calculate them once over the whole batch.
But the advantage of GN is that you can lower your batch size down to 2 without losing any performance, as stated in the paper, so you can make up for the computational overhead.
| https://stackoverflow.com/questions/58002524/ |
PyTorch having trouble detecting CUDA | I am running CNN on PyTorch. The torch.cuda.is_available() function returned false and no GPU is detected. However, I can run Keras model with GPU. Here is my system information:
OS: Ubuntu 18.04.3
Python 3.7.3 (Conda)
GPU: GTX1080Ti
Nvidia driver: 430.50
When I check nvidia-smi, the output said that the CUDA version is 10.1. However, the nvcc -V command tells me that it is CUDA 9.1.
I downloaded NVIDIA-Linux-x86_64-430.50.run from the official site and install it with command line. I installed CUDA 10.1 using these following command line recommended by the official site:
wget http://developer.download.nvidia.com/compute/cuda/10.1/Prod/local_installers/cuda_10.1.243_418.87.00_linux.run
sudo sh cuda_10.1.243_418.87.00_linux.run
I installed PyTorch through pip install. What is wrong? Thanks in advance!
| The default PyTorch 1.2 package depends on CUDA 10.0, but you have CUDA 9.1. The output of nvidia-smi just tells you the maximum CUDA version your driver supports; nvcc gives the CUDA version installed on your system. It seems that your installation of CUDA 10.1 was unsuccessful.
In addition to CUDA 10.0, PyTorch also supports CUDA 9.2, and I've found that the PyTorch package compiled for CUDA 10.0 also works with CUDA 10.1. So you can either upgrade your CUDA installation to 9.2 and install the PyTorch CUDA 9.2 package with
pip3 install torch==1.2.0+cu92 torchvision==0.4.0+cu92 -f https://download.pytorch.org/whl/torch_stable.html
Or get a working installation of CUDA 10.1. There are detailed Linux instructions here. (Note that you may have to remove previous installations of CUDA before installing a new one.)
| https://stackoverflow.com/questions/58005297/ |
Module Not Found Error when importing Pytorch_Transformers | After downloading pytorch_transformers through Anaconda and executing the import command through the Jupyter Notebook, I am facing several errors related to missing modules.
I tried searching sacremoses to import the package via Anaconda, but it is only available for Linux machines. Has anyone else faced similar issues? Thanks in advance!
from pytorch_transformers import BertTokenizer, BertModel, BertForMaskedLM
This is the error:
<ipython-input-5-218d0858d00f> in <module>
----> 1 from pytorch_transformers import BertTokenizer, BertModel, BertForMaskedLM
~\Anaconda3\lib\site-packages\pytorch_transformers\__init__.py in <module>
1 __version__ = "1.2.0"
----> 2 from .tokenization_auto import AutoTokenizer
3 from .tokenization_bert import BertTokenizer, BasicTokenizer, WordpieceTokenizer
4 from .tokenization_openai import OpenAIGPTTokenizer
5 from .tokenization_transfo_xl import (TransfoXLTokenizer, TransfoXLCorpus)
~\Anaconda3\lib\site-packages\pytorch_transformers\tokenization_auto.py in <module>
24 from .tokenization_transfo_xl import TransfoXLTokenizer
25 from .tokenization_xlnet import XLNetTokenizer
---> 26 from .tokenization_xlm import XLMTokenizer
27 from .tokenization_roberta import RobertaTokenizer
28 from .tokenization_distilbert import DistilBertTokenizer
~\Anaconda3\lib\site-packages\pytorch_transformers\tokenization_xlm.py in <module>
25 from io import open
26
---> 27 import sacremoses as sm
28
29 from .tokenization_utils import PreTrainedTokenizer
ModuleNotFoundError: No module named 'sacremoses'
| Please try to create a conda environment and install the packages in the created environment using the below steps:
conda create -n env_pytorch -c intel python=3.5
source activate env_pytorch
pip install pytorch-transformers
| https://stackoverflow.com/questions/58011563/ |
how to identify wrong classification with batches in pytorch | I have a script like this where batches of images were used
correct = 0
total = 0
incorrect_classification = []
for (i, [images, labels]) in enumerate(test_loader):
    images = Variable(images.view(-1, n_pixel*n_pixel))
    outputs = net(images)
    _, predicted = torch.min(outputs.data, 1)
    total += labels.size(0)
    correct += (predicted == labels).sum()
print('Accuracy: %d %%' % (100 * correct / total))
With a batch size of 10, each iteration returns a 10 x image-size tensor. How can I save all the wrong classifications into the array incorrect_classification, or the wrong images and their probabilities into a dictionary, so that I can use plt.imshow to inspect them later?
If the batch size is 1 I could use this:
if (predicted == labels).item() == 0:
    incorrect_examples.append(images.numpy())
But with a batch size specified (like 100 images per batch) how should I save the wrong classifications?
Thanks in advance for any answers.
| As already said in the comment of @zihaozhihao, masking the batch, here with images[predicted != labels] to select the misclassified samples, should do the work.
In other words, you will get a mask of indexes and then access the images you want with this mask:
correct = 0
total = 0
incorrect_examples = []
for (i, [images, labels]) in enumerate(test_loader):
    images = Variable(images.view(-1, n_pixel*n_pixel))
    outputs = net(images)
    _, predicted = torch.min(outputs.data, 1)
    total += labels.size(0)
    correct += (predicted == labels).sum()
    print('Accuracy: %d %%' % (100 * correct / total))
    # if (predicted == labels).item() == 0:
    #     incorrect_examples.append(images.numpy())
    idxs_mask = (predicted != labels).view(-1)  # mask of the misclassified samples
    incorrect_examples.append(images[idxs_mask].numpy())
The view(-1) will flatten the mask, which is then used to index the batch dimension of the images tensor.
At the end of the loop (outside of it), the items in the list incorrect_examples will have shape [n_misclassified, n_pixel*n_pixel] (the images were flattened earlier with view), and for convenience you can group all of them into one array by concatenating them:
incorrect_images = np.concatenate(incorrect_examples)
# incorrect_images.shape -> (n_incorrect_images, n_pixel*n_pixel)
| https://stackoverflow.com/questions/58019435/ |
numpy argmax in array with multiple brackets | I have an issue in apply argmax to an array which has multiple brackets.
In real life I am getting this as a result of a pytorch tensor.
Here I can put an example:
a = np.array([[1.0, 1.1],[2.1,2.0]])
np.argmax(a,axis=1)
array([1, 0])
It is correct. But:
a = np.array([[[1.0, 1.1]],[[2.1,2.0]]])
np.argmax(a,axis=1)
array([[0, 0],
[0, 0]])
It does not give me what I expect.
Consider that in reality I have this level of inner brackets:
a = np.array([[[[1.0, 1.1]]],[[[2.1,2.0]]]])
| Use .squeeze() and a negative index.
a = np.array([[[[1.0, 1.1]]], [[[2.1, 2.0]]]])
np.argmax(a, axis = -1).squeeze()
array([1, 0], dtype=int32)
| https://stackoverflow.com/questions/58022846/ |
Difference of torch.matmul and python built-in @ operator to do matrix multiplication | Can I always replace torch.matmul with python's built-in @ operator to do the matrix multiplication? Please assume that I know the difference between torch.matmul, torch.mm and many others. I just want to make sure how many of them can be safely replaced by @ operator without sacrificing speed or some native support from torch.
If it does no harm, I would like to extensively use them in the future.
| Yes, you can always use @ in place of torch.matmul; the same works in NumPy as well. Also see PyTorch mapping operators to functions.
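A quick sanity check of that equivalence (a sketch):
import torch

a = torch.randn(2, 3)
b = torch.randn(3, 4)
# the @ operator dispatches to matmul, so the results are identical
assert torch.equal(a @ b, torch.matmul(a, b))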
| https://stackoverflow.com/questions/58040324/ |
Is there a handy way to dump the running_stats for a pytorch model? | I'm writing a C version of the pytorch model to run it on my special hardware.
Everything looks ok so far, except the running_mean and running_var in every batchnorm layer.
We have Python code to dump all named_parameters, but nothing for the running stats, although we need them in the forward computation.
So is there a way to dump them with some sort of built-in feature?
I searched the PyTorch docs, with no help for my task.
Otherwise I might need to write regexp code to recognize and dump them.
Thanks a lot.
/Patrick
for name, param in model.named_parameters():
    # here can dump weight and bias, but not running_stats
    names.append(name)
    shapes.append(list(param.data.numpy().shape))
    values.append(param.data.numpy().flatten().tolist())
| running_mean and the other running stats are registered buffers in PyTorch. You can save (as you say, dump) them with torch.nn.Module's state_dict:
torch.save(model.state_dict(), PATH)
You can iterate over named buffers and save each of them however you like similarly to parameters:
for name, buffer in model.named_buffers():
    # do your thing with them
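For instance, a sketch mirroring the question's dump loop (assuming the same names/shapes/values lists):
for name, buf in model.named_buffers():
    # running_mean, running_var and num_batches_tracked all show up here
    names.append(name)
    shapes.append(list(buf.numpy().shape))
    values.append(buf.numpy().flatten().tolist())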
| https://stackoverflow.com/questions/58050020/ |
what does nn.Linear() do in pytorch's LSTM pipeline, and why is it necessary? | I am working with some code that trains an LSTM to generate sequences. After training the model, the lstm() method is called:
x = some_input
lstm_output, (h_n, c_n) = lstm(x, hc)
func = nn.Linear(in_features=lstm_num_hidden,
                 out_features=vocab_size,
                 bias=True)
func_output = func(lstm_output)
I've looked at the documentation for nn.Linear() but I still don't understand what this transformation is doing and why it is necessary. If the lstm has already been trained, then the output it gives should already have a pre-established dimensionality. This output (lstm_output) would be the generated sequence, or in my case an array of vectors. Am I missing something here?
| Here, the Linear layer transforms the hidden-state representations (lstm_output) produced by the LSTM into a vector of size vocab_size. Your understanding is perhaps wrong: the Linear layer should be trained along with the LSTM.
And I guess you are trying to generate a sequence of tokens (words), so the Linear layer should be followed by a Softmax operation to predict a probability distribution over the vocabulary.
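A minimal sketch of that projection step (the tensor shapes follow the question's variables; the dim choices are assumptions):
logits = func(lstm_output)             # (seq_len, batch, vocab_size)
probs = torch.softmax(logits, dim=-1)  # probability distribution over the vocabulary
next_tokens = probs.argmax(dim=-1)     # greedy pick per position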
| https://stackoverflow.com/questions/58066194/ |
Pytorch:Apply cross entropy loss with custom weight map | I am solving multi-class segmentation problem using u-net architecture in pytorch.
As specified in the U-Net paper, I am trying to implement custom weight maps to counter class imbalances.
Below is the operation which I want to apply -
Also, I reduced the batch size to 1 so that I can remove that dimension while passing it to the precompute_for_image function.
I tried the below approach-
def precompute_for_image(masks):
    masks = masks.cpu()
    cls = masks.unique()
    res = torch.stack([torch.where(masks == cls_val, torch.tensor(1), torch.tensor(0)) for cls_val in cls])
    return res

def train(n_epochs, loaders, model, optimizer, criterion, use_cuda, save_path):
    ###################
    # train the model #
    ###################
    model.train()
    for batch_idx, (data, target) in enumerate(final_train_loader):
        # move to GPU
        if use_cuda:
            data, target = data.cuda(), target.cuda()
        optimizer.zero_grad()
        output = model(data)
        temp_target = precompute_for_image(target)
        w = weight_map(temp_target)
        loss = criterion(output, target)
        loss = w * loss
        loss.backward()
        optimizer.step()
        train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data - train_loss))
    return model
where weight_map is the function to calculate weight mask which I got from here
The issue I am facing is that I get a memory error when I apply this method. I am using 61 GB of RAM and a Tesla V100 GPU.
I really think I am applying it in an incorrect way.
How should I do it?
I am omitting the non-essential details from the training loop.
Below is my weight_map function:
from skimage.segmentation import find_boundaries

w0 = 10
sigma = 5

def make_weight_map(masks):
    """
    Generate the weight maps as specified in the UNet paper
    for a set of binary masks.

    Parameters
    ----------
    masks: array-like
        A 3D array of shape (n_masks, image_height, image_width),
        where each slice of the matrix along the 0th axis represents one binary mask.

    Returns
    -------
    array-like
        A 2D array of shape (image_height, image_width)
    """
    nrows, ncols = masks.shape[1:]
    masks = (masks > 0).astype(int)
    distMap = np.zeros((nrows * ncols, masks.shape[0]))
    X1, Y1 = np.meshgrid(np.arange(nrows), np.arange(ncols))
    X1, Y1 = np.c_[X1.ravel(), Y1.ravel()].T
    for i, mask in enumerate(masks):
        # find the boundary of each mask,
        # compute the distance of each pixel from this boundary
        bounds = find_boundaries(mask, mode='inner')
        X2, Y2 = np.nonzero(bounds)
        xSum = (X2.reshape(-1, 1) - X1.reshape(1, -1)) ** 2
        ySum = (Y2.reshape(-1, 1) - Y1.reshape(1, -1)) ** 2
        distMap[:, i] = np.sqrt(xSum + ySum).min(axis=0)
    ix = np.arange(distMap.shape[0])
    if distMap.shape[1] == 1:
        d1 = distMap.ravel()
        border_loss_map = w0 * np.exp((-1 * (d1) ** 2) / (2 * (sigma ** 2)))
    else:
        if distMap.shape[1] == 2:
            d1_ix, d2_ix = np.argpartition(distMap, 1, axis=1)[:, :2].T
        else:
            d1_ix, d2_ix = np.argpartition(distMap, 2, axis=1)[:, :2].T
        d1 = distMap[ix, d1_ix]
        d2 = distMap[ix, d2_ix]
        border_loss_map = w0 * np.exp((-1 * (d1 + d2) ** 2) / (2 * (sigma ** 2)))
    xBLoss = np.zeros((nrows, ncols))
    xBLoss[X1, Y1] = border_loss_map
    # class weight map
    loss = np.zeros((nrows, ncols))
    w_1 = 1 - masks.sum() / loss.size
    w_0 = 1 - w_1
    loss[masks.sum(0) == 1] = w_1
    loss[masks.sum(0) == 0] = w_0
    ZZ = xBLoss + loss
    return ZZ
Traceback of the error-
MemoryError Traceback (most recent call last)
<ipython-input-30-f0a595b8de7e> in <module>
1 # train the model
2 model_scratch = train(20, final_train_loader, unet, optimizer,
----> 3 criterion, train_on_gpu, 'model_scratch.pt')
<ipython-input-29-b481b4f3120e> in train(n_epochs, loaders, model, optimizer, criterion, use_cuda, save_path)
24 loss = criterion(output,target)
25 target.requires_grad = False
---> 26 w = make_weight_map(target)
27 loss = W*loss
28 loss.backward()
<ipython-input-5-e75a6281476f> in make_weight_map(masks)
33 X2, Y2 = np.nonzero(bounds)
34 xSum = (X2.reshape(-1, 1) - X1.reshape(1, -1)) ** 2
---> 35 ySum = (Y2.reshape(-1, 1) - Y1.reshape(1, -1)) ** 2
36 distMap[:, i] = np.sqrt(xSum + ySum).min(axis=0)
37 ix = np.arange(distMap.shape[0])
MemoryError:
| Your final_train_loader provides you with an input image data and the expected pixel-wise labeling target. I assume (following pytorch's conventions) that data is of shape B-3-H-W and of dtype=torch.float.
More importantly, target is of shape B-H-W and of dtype=torch.long.
On the other hand make_weight_map expects its input to be C-H-W (with C = number of classes, NOT batch size), of type numpy array.
Try providing make_weight_map the input mask as it expects it and see if you get similar errors.
I also recommend that you visualize the resulting weight map - to make sure your function does what you expect it to do.
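For example, a sketch of that conversion (the single-sample batch and the per-class one-hot layout are assumptions), turning the B-H-W long target into the C-H-W numpy mask that make_weight_map expects:
target_cpu = target[0].cpu()                         # H-W, since batch_size == 1
masks = torch.stack([(target_cpu == c).long()        # one binary H-W mask per class
                     for c in target_cpu.unique()])  # -> C-H-W
w = make_weight_map(masks.numpy())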
| https://stackoverflow.com/questions/58072185/ |
How to do sequence classification with pytorch nn.Transformer? | I am doing a sequence classification task using nn.TransformerEncoder(). Whose pipeline is similar to nn.LSTM().
I have tried several temporal feature fusion methods:
Selecting the final outputs as the representation of the whole sequence.
Using an affine transformation to fuse these features.
Classifying the sequence frame by frame, and then select the max values to be the category of the whole sequence.
But all three methods got terrible accuracy, only 25% for the 4-category classification, while using nn.LSTM with the last hidden state I can easily achieve 83% accuracy. I tried plenty of hyperparameters for nn.TransformerEncoder(), but without any improvement in accuracy.
I have no idea about how to adjust this model now. Could you give me some practical advice? Thanks.
For LSTM: the forward() is:
def forward(self, x_in, x_lengths, apply_softmax=False):
    # Embed
    x_in = self.embeddings(x_in)
    # Feed into RNN
    out, h_n = self.LSTM(x_in)  # shape of out: T*N*D
    # Gather the last relevant hidden state
    out = out[-1, :, :]  # N*D
    # FC layers
    z = self.dropout(out)
    z = self.fc1(z)
    z = self.dropout(z)
    y_pred = self.fc2(z)
    if apply_softmax:
        y_pred = F.softmax(y_pred, dim=1)
    return y_pred
For transformer:
def forward(self, x_in, x_lengths, apply_softmax=False):
    # Embed
    x_in = self.embeddings(x_in)
    # Feed into the Transformer
    out = self.transformer(x_in)  # shape of out: T*N*D
    # Gather the last relevant hidden state
    out = out[-1, :, :]  # N*D
    # FC layers
    z = self.dropout(out)
    z = self.fc1(z)
    z = self.dropout(z)
    y_pred = self.fc2(z)
    if apply_softmax:
        y_pred = F.softmax(y_pred, dim=1)
    return y_pred
| The accuracy you mentioned indicates that something is wrong. Since you are comparing LSTM with TransformerEncoder, I want to point to some crucial differences.
Positional embeddings: This is very important since the Transformer does not have recurrence concept and so it doesn't capture sequence information. So, make sure you add positional information along with the input embeddings.
Model architecture: d_model, n_head, num_encoder_layers are important. Go with the default size as used in Vaswani et al., 2017. (d_model=512, n_head=8, num_encoder_layers=6)
Optimization: In many scenarios, it has been found that the Transformer needs to be trained with smaller learning rate, large batch size, WarmUpScheduling.
Last but not least, as a sanity check, just make sure the parameters of the model are updating. You can also check the training accuracy to make sure it keeps increasing as the training proceeds.
Although it is difficult to say what is exactly wrong in your code but I hope that the above points will help!
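For the first point, a minimal sketch of the sinusoidal positional encoding from Vaswani et al., 2017 (the T*N*D layout follows your forward; an even d_model is assumed):
import math
import torch

def positional_encoding(seq_len, d_model):
    pe = torch.zeros(seq_len, d_model)
    pos = torch.arange(0, seq_len, dtype=torch.float).unsqueeze(1)
    div = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    return pe.unsqueeze(1)  # (seq_len, 1, d_model), broadcasts over the batch dim

# inside forward, right after embedding (x_in is T*N*D):
# x_in = self.embeddings(x_in) + positional_encoding(x_in.size(0), x_in.size(2))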
| https://stackoverflow.com/questions/58092004/ |
Splitting a directory with images into sub folders using Pytorch or Python | I have a directory with two sub directories in it. Each sub-dir has bunch of images. These sub-dir also specify the two classes of images.
I want to have 3 directories (train, validation, test) and within each of these 3 sub dir, I want to 2 sub directory of each class respectively with images.
I want to split the number of images into train, val and test directories by random sampling, so that about 60% of the images go to train, 20% to val and 20% to test.
Initial Structure:
Main_folder
- Good
- Bad
What I want:
Main_folder:
- Train
- Good
- Bad
- Val
- Good
- Bad
- Test
- Good
- Bad
I want to split each sub-directory into the two class directories within it, with random assignment of images.
| There is a PyTorch way to do that (and I would advise using a single library for such an easy task).
Creating dataset
There is a pre-made one inside torchvision, namely ImageFolder.
Simply use it like this:
import torchvision
dataset = torchvision.datasets.ImageFolder("./my_data")
This will create a dataset where each image in the Good folder has label 0 and each image in the Bad folder has label 1.
Dividing into train, validation, test
No need for sklearn for such an easy job; torch has most of the numpy functionality as well, and I would rather stick with one library instead of three (though train_test_split could be used twice consecutively, similar to what @ESZ proposed).
IMO simpler would be:
from torch.utils.data import SubsetRandomSampler  # used below

def get_subset(indices, start, end):
    return indices[start : start + end]

TRAIN_PCT, VALIDATION_PCT = 0.6, 0.2  # rest will go for test

train_count = int(len(dataset) * TRAIN_PCT)
validation_count = int(len(dataset) * VALIDATION_PCT)

indices = torch.randperm(len(dataset))
train_indices = get_subset(indices, 0, train_count)
validation_indices = get_subset(indices, train_count, validation_count)
test_indices = get_subset(indices, train_count + validation_count, len(dataset))
This will create indices for SubsetRandomSampler and torch.utils.data.DataLoader.
So, similarly to @ESZ once again:
dataloaders = {
    "train": torch.utils.data.DataLoader(
        dataset, sampler=SubsetRandomSampler(train_indices)
    ),
    "validation": torch.utils.data.DataLoader(
        dataset, sampler=SubsetRandomSampler(validation_indices)
    ),
    "test": torch.utils.data.DataLoader(
        dataset, sampler=SubsetRandomSampler(test_indices)
    ),
}
You can specify batch_size and other argument to DataLoader, see documentation if you need more info.
| https://stackoverflow.com/questions/58105073/ |
Increasing the size of images displayed in Pytorch | I want to display few images and their respective labels using Pytorch dataloader.
However, the images are displayed in a very tiny grid.
How do I increase the width of each image so it's bigger?
Here's the code I used:
mean_nums = [0.485, 0.456, 0.406]
std_nums = [0.229, 0.224, 0.225]

def imshow(inp, title=None):
    """Imshow for Tensor."""
    inp = inp.numpy().transpose((1, 2, 0))
    mean = np.array(mean_nums)
    std = np.array(std_nums)
    inp = std * inp + mean
    inp = np.clip(inp, 0, 1)
    plt.imshow(inp)
    if title is not None:
        plt.title(title)
    plt.pause(0.001)  # pause a bit so that plots are updated

# Get a batch of training data
inputs, classes = next(iter(dataloaders['trainLoader']))
# Make a grid from batch
out = torchvision.utils.make_grid(inputs, nrow=2)
imshow(out, title=[image_datasets['train'].classes[x] for x in classes])
| Try inserting plt.figure(figsize=[width, height]) before plt.imshow, and choose a width and height that satisfy you.
So, for example, the imshow function may be:
def imshow(inp, title=None):
    """Imshow for Tensor."""
    inp = inp.numpy().transpose((1, 2, 0))
    mean = np.array(mean_nums)
    std = np.array(std_nums)
    inp = std * inp + mean
    inp = np.clip(inp, 0, 1)
    plt.figure(figsize=[20, 20])
    plt.imshow(inp)
    if title is not None:
        plt.title(title)
    plt.pause(0.001)  # pause a bit so that plots are updated
| https://stackoverflow.com/questions/58113098/ |
Pytorch :Why my dataset variance do not get the correct result? | Here is the function I wrote:
def channel_var(image_dataset):
    res = image_dataset[0]
    for image in image_dataset[1:]:
        res += image
    return tuple(map(lambda x: x / len(image_dataset),
                     (torch.var(res[0]),
                      torch.var(res[1]),
                      torch.var(res[2]))))
Then I tested it with a normal distribution:
m = normal.Normal(0, 3)
m.sample((1, 3, 32, 32))
And I get this wrong result:
channel_var(list_test)
>>(tensor(0.0338), tensor(0.0352), tensor(0.0365))
Thank you
| Your function is wrong. And that's because you are computing the average image and then computing the channel variance of that average image. I don't think you want that. You can instead find the variance in each channel by using
torch.var(img, dim=[0,2,3])
assuming dim=1 is the channel dimension and img is a torch tensor. If img is not a torch tensor, you can concatenate a list of images to make a tensor.
You can do this as torch.var(torch.cat(img, dim=0), dim=[0,2,3]); the cat operation concatenates the list into a tensor.
| https://stackoverflow.com/questions/58120448/ |
Conda install of pytorch fails | I created an environment with conda and I want to install pytorch in it, but it doesn't work. After I get inside my environment with source activate env_name I tried this: conda install pytorch torchvision -c pytorch (I also tried it like this: conda install -c pytorch pytorch torchvision) but I am getting this error:
Using Anaconda Cloud api site https://api.anaconda.org
Fetching package metadata: ......
Solving package specifications: ......
Error: Could not find some dependencies for pytorch: mkl >=2018, cudatoolkit >=9.0,<9.1, blas * mkl, cudatoolkit >=10.0,<10.1, cudatoolkit >=9.2,<9.3, blas * openblas, cudnn 7.0.*, cudatoolkit 9.*
Did you mean one of these?
pytorch, pytorch-gpu, pytorch-cpu
Did you mean one of these?
cudatoolkit
You can search for this package on anaconda.org with
anaconda search -t conda cudatoolkit 9.*
(and similarly for the other packages)
Here are my installed packages:
backports 1.0 py34_0
backports.shutil-get-terminal-size 1.0.0 <pip>
decorator 4.0.11 py34_0
get_terminal_size 1.0.0 py34_0
ipython 4.2.0 py34_0
ipython-genutils 0.1.0 <pip>
ipython_genutils 0.1.0 py34_0
libgfortran 1.0 0
numpy 1.9.2 py34_0
openssl 1.0.2l 0
path.py 10.0 py34_0
pexpect 4.2.1 py34_0
pickleshare 0.7.4 py34_0
pip 9.0.1 py34_1
ptyprocess 0.5.1 py34_0
python 3.4.5 0
readline 6.2 2
scipy 0.16.0 np19py34_0
setuptools 27.2.0 py34_0
simplegeneric 0.8.1 py34_1
six 1.10.0 py34_0
sqlite 3.13.0 0
tk 8.5.18 0
traitlets 4.3.1 py34_0
wheel 0.29.0 py34_0
xz 5.2.3 0
zlib 1.2.11 0
What should I do? Thank you!
| PyTorch's vision package (aka torchvision) was developed post-Python 3.4, and so only has versions supporting Python 2.7 and 3.5-3.7. Please create a new environment with a later Python version. Note it is always better to include the packages you care about in the creation of the environment, e.g.,
conda create -n env_name -c pytorch torchvision
and Conda will figure the rest out. If you need to have a specific version of Python, you can include that as well (e.g., python=3.6).
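For example, pinning the interpreter version in the same command (a sketch):
conda create -n env_name -c pytorch python=3.6 pytorch torchvision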
| https://stackoverflow.com/questions/58128970/ |
Vectorizing assignment of a tensor to a slice in PyTorch | I'm trying to vectorize a slice assignment of the form
for i in range(a.shape[1]):
    for j in range(a.shape[2]):
        a[:, i, j, :, i:i+b.shape[2], j:j+b.shape[3]] = b
where b itself is an array. This is because the nested Python loop is too inefficient and is taking up most of the runtime. Is there a way to do this?
For a simpler case, consider the following:
for i in range(a.shape[1]):
    a[:, i, :, i:i+b.shape[2]] = b
This is what b and a might look like:
You can see the diagonal, "sliding" structure of the resulting matrix.
| We can leverage scikit-image's view_as_windows, which is based on np.lib.stride_tricks.as_strided, to get sliding windowed views into a zero-padded version of the input; being views, these are efficient in both memory and performance. More info on the use of as_strided-based view_as_windows.
Hence, for the simpler case, it would be -
from skimage.util.shape import view_as_windows

def sliding_2D_windows(b, outshp_axis1):
    # outshp_axis1 is the desired output's shape along axis=1
    n = outshp_axis1 - 1
    b1 = np.pad(b, ((0,0), (0,0), (n,n)), 'constant')
    w_shp = (1, b1.shape[1], b.shape[2] + n)
    return view_as_windows(b1, w_shp)[..., 0, ::-1, 0, :, :]
Sample run -
In [192]: b
Out[192]:
array([[[54, 57, 74, 77],
[77, 19, 93, 31],
[46, 97, 80, 98]],
[[98, 22, 68, 75],
[49, 97, 56, 98],
[91, 47, 35, 87]]])
In [193]: sliding_2D_windows(b, outshp_axis1=3)
Out[193]:
array([[[[54, 57, 74, 77, 0, 0],
[77, 19, 93, 31, 0, 0],
[46, 97, 80, 98, 0, 0]],
[[ 0, 54, 57, 74, 77, 0],
[ 0, 77, 19, 93, 31, 0],
[ 0, 46, 97, 80, 98, 0]],
[[ 0, 0, 54, 57, 74, 77],
[ 0, 0, 77, 19, 93, 31],
[ 0, 0, 46, 97, 80, 98]]],
[[[98, 22, 68, 75, 0, 0],
[49, 97, 56, 98, 0, 0],
[91, 47, 35, 87, 0, 0]],
....
[[ 0, 0, 98, 22, 68, 75],
[ 0, 0, 49, 97, 56, 98],
[ 0, 0, 91, 47, 35, 87]]]])
| https://stackoverflow.com/questions/58140866/ |
Run inference on CPU using pytorch and multiprocessing | I have trained a CNN model on GPU using FastAI (PyTorch backend). I am now trying to use that model for inference on the same machine, but using CPU instead of GPU. Along with that, I am also trying to make use of multiple CPU cores using the multiprocessing module. Now here is the issue,
Running the code on a single CPU (without multiprocessing) takes only 40 seconds to process nearly 50 images.
Running the code on multiple CPUs using torch multiprocessing takes more than 6 minutes to process the same 50 images.
import os
from torch.multiprocessing import Pool, set_start_method
os.environ['CUDA_VISIBLE_DEVICES'] = ""
from fastai.vision import *
from fastai.text import *
defaults.device = torch.device('cpu')

def process_image_batch(batch):
    learn_cnn = load_learner(scripts_folder, 'cnn_model.pkl')
    learn_cnn.model.training = False
    learn_cnn.model = learn_cnn.model.eval()
    # for image in batch:
    #     prediction = ...  # predicting the image here
    # return prediction

if __name__ == '__main__':
    # image_batches = .....  # retrieving the image batches (it is a list of 5 lists)
    # n_processes = 5
    set_start_method('spawn', force=True)
    try:
        pool = Pool(n_processes)
        pool.map(process_image_batch, image_batches)
    except Exception as e:
        print('Main Pool Error: ', e)
    except KeyboardInterrupt:
        exit()
    finally:
        pool.terminate()
        pool.join()
I am not sure what's causing this slowdown in multiprocessing mode. I've read a lot of posts discussing similar issues but couldn't find a proper solution anywhere.
| I think you have made a very basic mistake here: you are loading the model object inside the function which you are parallelizing.
That means that for every single batch, you are reloading the model from disk.
Depending on your model object size, the IO is going to be more time-consuming than running a forward step.
Please consider reading the model once in the main thread and then making the object available for inference in the parallel function.
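One way to do that is a Pool initializer, so each worker process loads the model once and reuses it for every batch it handles (a sketch; the names follow the question's code, and learn_cnn.predict is an assumed per-image call):
learn_cnn = None  # worker-global model handle

def init_worker():
    global learn_cnn
    learn_cnn = load_learner(scripts_folder, 'cnn_model.pkl')
    learn_cnn.model = learn_cnn.model.eval()

def process_image_batch(batch):
    # reuse the already-loaded model instead of reloading it per batch
    return [learn_cnn.predict(image) for image in batch]

pool = Pool(n_processes, initializer=init_worker)
results = pool.map(process_image_batch, image_batches)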
| https://stackoverflow.com/questions/58150186/ |
Assign a tensor to multiple slices | Let
a = tensor([[0, 0, 0, 0],
            [0, 0, 0, 0],
            [0, 0, 0, 0]])
b = torch.tensor([1, 2])
c = tensor([[1, 2, 0, 0],
            [0, 1, 2, 0],
            [0, 0, 1, 2]])
Is there a way to obtain c by assigning b to slices of a without any loops? That is, a[indices] = b for some indices or something similar?
| You can use the scatter_ method in PyTorch.
a = torch.tensor([[0, 0, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
b = torch.tensor([1, 2])
index = torch.tensor([[0, 1], [1, 2], [2, 3]])
a.scatter_(1, index, b.view(-1, 2).repeat(3, 1))
# tensor([[1, 2, 0, 0],
#         [0, 1, 2, 0],
#         [0, 0, 1, 2]])
| https://stackoverflow.com/questions/58155999/ |
Custom c++ extension error: pasting “pybind11_init_” and “‘sigmoid’” does not give a valid preprocessing token | I’m trying to add custom c++ extension with pytorch.
I’m following the tutorial at https://pytorch.org/tutorials/advanced/cpp_extension.html
I have created two files.
project/
    main.py
    sigmoid.cpp
main.py
from torch.utils.cpp_extension import load
lltm_cpp = load(name='sigmoid', sources=['sigmoid.cpp'], verbose=True)
sigmoid.cpp
#include <torch/extension.h>
#include <iostream>

torch::Tensor d_sigmoid(torch::Tensor z) {
    auto s = torch::sigmoid(z);
    return (1 - s) * s;
}

PYBIND11_MODULE('sigmoid', m) {
    m.def('d_sigmoid', &d_sigmoid, 'sigmoid');
}
when I run main.py I get the following error
error: pasting “pybind11_init_” and “‘sigmoid’” does not give a valid
preprocessing token
How can I fix the error?
Full error report
Using /tmp/torch_extensions as PyTorch extensions root…
Emitting ninja build file /tmp/torch_extensions/sigmoid/build.ninja…
Building extension module sigmoid…
[1/2] c++ -MMD -MF sigmoid.o.d -DTORCH_EXTENSION_NAME=sigmoid -DTORCH_API_INCLUDE_EXTENSION_H -isystem /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include -isystem /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -isystem /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/TH -isystem /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/THC -isystem /home/wickrama/anaconda3/envs/pytorch/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++11 -c /home/wickrama/projects/torch_cpp_ext/sigmoid.cpp -o sigmoid.o
FAILED: sigmoid.o
c++ -MMD -MF sigmoid.o.d -DTORCH_EXTENSION_NAME=sigmoid -DTORCH_API_INCLUDE_EXTENSION_H -isystem /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include -isystem /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -isystem /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/TH -isystem /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/THC -isystem /home/wickrama/anaconda3/envs/pytorch/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++11 -c /home/wickrama/projects/torch_cpp_ext/sigmoid.cpp -o sigmoid.o
In file included from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/pybind11/pytypes.h:12:0,
from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/pybind11/cast.h:13,
from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/pybind11/attr.h:13,
from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/pybind11/pybind11.h:44,
from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/torch/csrc/utils/pybind.h:6,
from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/python.h:12,
from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/torch/extension.h:6,
from /home/wickrama/projects/torch_cpp_ext/sigmoid.cpp:1:
/home/wickrama/projects/torch_cpp_ext/sigmoid.cpp:10:1: error: pasting “pybind11_init_” and “‘sigmoid’” does not give a valid preprocessing token
PYBIND11_MODULE(‘sigmoid’, m) {
^
/home/wickrama/projects/torch_cpp_ext/sigmoid.cpp:10:1: warning: character constant too long for its type
PYBIND11_MODULE(‘sigmoid’, m) {
^~~~~~~~~~~~~~
In file included from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/pybind11/pytypes.h:12:0,
from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/pybind11/cast.h:13,
from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/pybind11/attr.h:13,
from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/pybind11/pybind11.h:44,
from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/torch/csrc/utils/pybind.h:6,
from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/python.h:12,
from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/torch/extension.h:6,
from /home/wickrama/projects/torch_cpp_ext/sigmoid.cpp:1:
/home/wickrama/projects/torch_cpp_ext/sigmoid.cpp:10:1: error: pasting “PyInit_” and “‘sigmoid’” does not give a valid preprocessing token
PYBIND11_MODULE(‘sigmoid’, m) {
^
/home/wickrama/projects/torch_cpp_ext/sigmoid.cpp:10:1: warning: character constant too long for its type
PYBIND11_MODULE(‘sigmoid’, m) {
^~~~~~~
In file included from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/pybind11/pytypes.h:12:0,
from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/pybind11/cast.h:13,
from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/pybind11/attr.h:13,
from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/pybind11/pybind11.h:44,
from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/torch/csrc/utils/pybind.h:6,
from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/python.h:12,
from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/torch/extension.h:6,
from /home/wickrama/projects/torch_cpp_ext/sigmoid.cpp:1:
/home/wickrama/projects/torch_cpp_ext/sigmoid.cpp:10:1: error: pasting “pybind11_init_” and “‘sigmoid’” does not give a valid preprocessing token
PYBIND11_MODULE(‘sigmoid’, m) {
^
/home/wickrama/projects/torch_cpp_ext/sigmoid.cpp:10:1: warning: character constant too long for its type
PYBIND11_MODULE(‘sigmoid’, m) {
^~~~~~~~~~~~~~
In file included from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/pybind11/pytypes.h:12:0,
from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/pybind11/cast.h:13,
from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/pybind11/attr.h:13,
from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/pybind11/pybind11.h:44,
from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/torch/csrc/utils/pybind.h:6,
from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/python.h:12,
from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/torch/extension.h:6,
from /home/wickrama/projects/torch_cpp_ext/sigmoid.cpp:1:
/home/wickrama/projects/torch_cpp_ext/sigmoid.cpp:10:1: error: pasting “pybind11_init_” and “‘sigmoid’” does not give a valid preprocessing token
PYBIND11_MODULE(‘sigmoid’, m) {
^
/home/wickrama/projects/torch_cpp_ext/sigmoid.cpp:10:1: warning: character constant too long for its type
PYBIND11_MODULE(‘sigmoid’, m) {
^~~~~~~~~~~~~~
/home/wickrama/projects/torch_cpp_ext/sigmoid.cpp:11:9: warning: character constant too long for its type
m.def(‘d_sigmoid’, &d_sigmoid, ‘sigmoid’);
^~~~~~~~~~~
/home/wickrama/projects/torch_cpp_ext/sigmoid.cpp:11:34: warning: character constant too long for its type
m.def(‘d_sigmoid’, &d_sigmoid, ‘sigmoid’);
^~~~~~~~~
In file included from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/pybind11/pytypes.h:12:0,
from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/pybind11/cast.h:13,
from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/pybind11/attr.h:13,
from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/pybind11/pybind11.h:44,
from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/torch/csrc/utils/pybind.h:6,
from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/python.h:12,
from /home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/include/torch/extension.h:6,
from /home/wickrama/projects/torch_cpp_ext/sigmoid.cpp:1:
/home/wickrama/projects/torch_cpp_ext/sigmoid.cpp:10:17: error: expected initializer before ‘\x6d6f6964’
PYBIND11_MODULE(‘sigmoid’, m) {
^
/home/wickrama/projects/torch_cpp_ext/sigmoid.cpp:10:17: error: expected initializer before ‘\x6d6f6964’
PYBIND11_MODULE(‘sigmoid’, m) {
^
/home/wickrama/projects/torch_cpp_ext/sigmoid.cpp:10:17: error: expected initializer before ‘\x6d6f6964’
PYBIND11_MODULE(‘sigmoid’, m) {
^
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "/home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 949, in _build_extension_module
check=True)
File "/home/wickrama/anaconda3/envs/pytorch/lib/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/wickrama/projects/torch_cpp_ext/main.py", line 3, in
lltm_cpp = load(name='sigmoid', sources=['sigmoid.cpp'], verbose=True)
File "/home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 644, in load
is_python_module)
File "/home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 813, in _jit_compile
with_cuda=with_cuda)
File "/home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 866, in _write_ninja_file_and_build
_build_extension_module(name, build_directory, verbose)
File "/home/wickrama/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 962, in _build_extension_module
raise RuntimeError(message)
RuntimeError: Error building extension 'sigmoid'
| My guess: unlike in Python, single quotes (') are not equivalent to double quotes (") in C++. The former are used to construct char literals, so they can only wrap a single character, while the latter are used to construct string literals (strictly speaking, const char * literals). So try replacing
PYBIND11_MODULE('sigmoid', m) {
m.def('d_sigmoid', &d_sigmoid, 'sigmoid');
}
with
PYBIND11_MODULE("sigmoid", m) {
m.def("d_sigmoid", &d_sigmoid, "sigmoid");
}
and see if that helps.
| https://stackoverflow.com/questions/58186907/ |
Pytorch Question from 'Deep Reinforcement Learning: Hands-On' | I'm reading Maxim Lapan's Deep Learning Hands On. I came across this code in chapter 2 and I don't understand a few things. Could anybody explain why the output of print(out) gives three parameters instead of the single float tensor we put in. Also, why is the super function necessary here? Finally, what is the x parameter that forward is accepting? Thank you.
class OurModule(nn.Module):
def __init__(self, num_inputs, num_classes, dropout_prob=0.3): #init
super(OurModule, self).__init__() #Call OurModule and pass the net instance (Why is this necessary?)
self.pipe = nn.Sequential( #net.pipe is the nn object now
nn.Linear(num_inputs, 5),
nn.ReLU(),
nn.Linear(5, 20),
nn.ReLU(),
nn.Linear(20, num_classes),
nn.Dropout(p=dropout_prob),
nn.Softmax(dim=1)
)
def forward(self, x): #override the default forward method by passing it our net instance and (return the nn object?). x is the tensor? This is called when 'net' receives a param?
return self.pipe(x)
if __name__ == "__main__":
net = OurModule(num_inputs=2, num_classes=3)
print(net)
v = torch.FloatTensor([[2, 3]])
out = net(v)
print(out) #[2,3] put through the forward method of the nn? Why did we get a third param for the output?
print("Cuda's availability is %s" % torch.cuda.is_available()) #find if gpu is available
if torch.cuda.is_available():
print("Data from cuda: %s" % out.to('cuda'))
OurModule.__mro__
| OurModule defines a PyTorch nn.Module that accepts 2 inputs (num_inputs) and produces 3 outputs (num_classes).
It consists of:
A Linear layer that accepts 2 inputs and produces 5 outputs
A ReLU
A Linear layer that accepts 5 inputs and produces 20 outputs
A ReLU
A Linear layer that accepts 20 inputs and produces 3 (num_classes) outputs
A Dropout layer
A Softmax layer
You create v which consists of 2 inputs and pass it through this network's forward() method when you call net(v). The result of running this network (3 outputs) is then stored in out.
In your example, x takes on the value of v, torch.FloatTensor([[2, 3]]).
As for super(OurModule, self).__init__(): it calls nn.Module's own constructor, which sets up the internal bookkeeping that registers submodules like self.pipe and their parameters. Without it, assigning an nn.Module to an attribute would raise an error, and the network's parameters would not be tracked.
| https://stackoverflow.com/questions/58193626/ |
Pytorch - how to extract features of an MLP network (weights, biases, number of nodes, hidden layers)? | I would be interested to extract the weights, biases, number of nodes and number of hidden layers from an MLP/neural network built in pytorch. I wonder if anyone may be able to point me in the right direction?
Many thanks,
Max
| Yes, we can do what you want by first creating a simple network:
input_dim = 400
hidden_dim = 512
net = nn.Sequential(nn.Linear(input_dim, hidden_dim),
nn.Sigmoid())
print(net)
When we print the net, we get to know the number of layers, number of nodes (out_features), and many other details:
Sequential(
(0): Linear(in_features=400, out_features=512, bias=True)
(1): Sigmoid()
)
Then if you want to inspect the specific values of each parameter, you can print those too:
bias = net[0].bias  # index 0 is the Linear layer inside the Sequential defined above
print(bias)
the output is:
tensor([ 3.4078e-02, 3.1537e-02, 3.0819e-02, 2.6163e-03, 2.1002e-03,
4.6842e-05, -1.6454e-02, -2.9456e-02, 2.0646e-02, -3.7626e-02,
3.5531e-02, 4.7748e-02, -4.6566e-02, -1.3317e-02, -4.6593e-02,
-8.9996e-03, -2.6568e-02, -2.8191e-02, -1.9806e-02, 4.9720e-02,
---------------------------------------------------------------
-4.6214e-02, -3.2799e-02, -3.3605e-02, -4.9720e-02, -1.0293e-02,
3.2559e-03, -6.6590e-03, -1.2456e-02, -4.4547e-02, 4.2101e-02,
-2.4981e-02, -3.6840e-03], requires_grad=True)
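If you want all the weights and biases at once, named_parameters() iterates over every learnable tensor; a short sketch using the net defined above:
for name, param in net.named_parameters():
    print(name, tuple(param.shape))
# 0.weight (512, 400)   <- weight matrix of the Linear layer
# 0.bias (512,)         <- its bias vector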
Hope it helps
| https://stackoverflow.com/questions/58223002/ |
Right place to do class imbalance regularisation (data level or batch level) | I have a binary imbalanced dataset where the labels are either 0 or 1 and the prediction output is between 0 and 1. The positive class has 10000 samples, while the negative class has 90000 samples. I'm using a batch size of 100 when training.
When calculating the binary cross-entropy loss (nn.BCELoss in PyTorch) it's possible to supply a per-batch-element regularisation weight.
My question is:
To calculate the general class weight, does it make more sense to calculate it once at the start (so 1/(10000/100000) for the positive case) and scale the loss of each sample by this value,
or:
Calculate the weight at the batch level, by first finding the batch class imbalance (e.g. in the batch it might be 25 positives and 75 negatives, hence 1/(25/(25+75)) for the positive case)
I'm asking this because the loss is averaged across the batch
| If you wish to do it this way, you should calculate the per-batch class imbalance.
On the other hand, you should probably make sure that each batch preserves the label statistics (e.g. for batch size 64 and your case, you should have about 6 positive samples and the rest negative). This way, it would be enough to calculate the class imbalance once and add it to torch.nn.BCELoss on a per-batch basis.
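If you do compute it per batch, a minimal sketch (assuming preds are sigmoid outputs and labels are 0/1 floats of the same shape; the names are mine, not from the question):
import torch

criterion = torch.nn.BCELoss(reduction='none')

def weighted_bce(preds, labels, eps=1e-6):
    pos_frac = labels.mean().clamp(min=eps, max=1 - eps)  # positive fraction in this batch
    weights = torch.where(labels > 0.5, 1.0 / pos_frac, 1.0 / (1.0 - pos_frac))
    return (criterion(preds, labels) * weights).mean()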
I would suggest the other approach though, e.g. oversampling or undersampling using PyTorch's Sampler class (don't do it by copying examples, it wastes space totally unnecessarily). You can implement it manually or use third party library which did it for you for example torchdata (disclosure: I'm the author) and torchdata.samplers.RandomOverSampler.
| https://stackoverflow.com/questions/58233328/ |
Where does Ray.Tune create the model vs implementing the perturbed hyperparameters | I am new to using ray.tune. I already have my network written in a modular format and now I am trying to incorporate ray.tune, but I do not know where to initialize the model (vs updating the perturbed hyperparameters) so that the model and the weights are not re-initialized when a worker is truncated and replaced by a better performing worker.
Background
I am using the PBT scheduler of ray.tune which creates num_samples number of models (workers) each of which are initialized with a different set of sampled hyperparameters. When a model is evaluated, if it is performing poorly, it will be stopped and load the checkpoint of one of the top performing workers. Once it is loaded (this is a deep copy of the network), the hyperparameters are perturbed and then it will train until the next evaluation.
The MyTrainable class should have a _setup, _train, _save, and _restore function. The setup calls for a config variable and this is where the newly sampled hyperparameters are implemented.
My question is where should be the original model be defined? I can easily implement the updated HPs in this section. But I have not seen anywhere in the documentation where I can pass a pre-defined model into the ray.tune.run function. If I keep the create_model() function in the _setup() though, it will eliminate the previously trained weights which is part of the benefit of this method.
Code
Here are the 3 functions I have:
self._hyperparameters(config) # redefines the self.opt options accoring to the new perturbations
self.model.update_optimizer(self.opt) # redefines the optimizers using the new learning rates and the beta values for Adam
self.model = create_model(self.opt) # Original function that defines the initial model and initializes the weights
| The create_model should be called in _setup. _restore will be called after _setup, and in restore, the model should be updated to the weights stored in the checkpoint.
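A minimal sketch of that flow (assuming the class-based Trainable API with _setup/_train/_save/_restore that the question describes; create_model and run_one_epoch stand in for the asker's own helpers):
import os
import torch
from ray.tune import Trainable

class MyTrainable(Trainable):
    def _setup(self, config):
        self.opt = config                  # carries the sampled/perturbed hyperparameters
        self.model = create_model(self.opt)

    def _train(self):
        loss = run_one_epoch(self.model)   # user-defined training step
        return {"loss": loss}

    def _save(self, checkpoint_dir):
        path = os.path.join(checkpoint_dir, "model.pth")
        torch.save(self.model.state_dict(), path)
        return path

    def _restore(self, checkpoint_path):
        # runs after _setup when PBT clones a better worker: the architecture
        # built in _setup is kept, only the weights are overwritten
        self.model.load_state_dict(torch.load(checkpoint_path))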
| https://stackoverflow.com/questions/58236245/ |
Understanding TypeError: '<' not supported between instances of 'Example' and 'Example' | I am working on a project of text simplification using a multi-head attention transformer model. For the same, I am using torchtext for tokenisation and numericalization. The dataset contains two aligned files for training and two aligned files for testing. In the training files, one file contains the complex sentences while the other contains the corresponding simplified sentences.
I read the files as such:
training_sentences = open(path + "train.en" , encoding = "utf-8").read().split("\n")
target_sentences = open(path + "train.sen" , encoding = "utf-8").read().split("\n")
Next, I tokenised them as such:
complicated = spacy.load('en')
simple = spacy.load('en')
def tokenize_complicated(sentence):
return [tok.text for tok in complicated.tokenizer(sentence)]
def tokenize_simple(sentence):
return [tok.text for tok in simple.tokenizer(sentence)]
C_TEXT = Field(tokenize=tokenize_complicated, fix_length = 100)
S_TEXT = Field(tokenize=tokenize_simple, fix_length = 100, init_token = "<sos>", eos_token = "<eos>")
I then converted into TabularDataset object of torchtext.
import pandas as pd
raw_data = {'Complicated' : [line for line in training_sentences],
'Simple': [line for line in target_sentences]}
df = pd.DataFrame(raw_data, columns=["Complicated", "Simple"])
df.to_csv("df.csv", index=False)
data_fields = [('Complicated', C_TEXT), ('Simple', S_TEXT)]
train = torchtext.data.TabularDataset.splits(path='./', train = "df.csv", format='csv', fields=data_fields, skip_header = True)
And then created vocabulary
C_TEXT.build_vocab(train)
S_TEXT.build_vocab(train)
However, on doing so I got this error:
TypeError: '<' not supported between instances of 'Example' and
'Example'
On searching, I came across this solution here and the error disappeared. However, I do not understand whether this makes the model take only one instance or all of the dataset.
I would like to know the significance of the index [0] so that I can manipulate it effectively for my model.
| In my case I solved the issue by passing a sort_key and setting sort_within_batch to True, as follows:
BATCH_SIZE = 64
train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
(train_data, valid_data, test_data),
device = device,
batch_size = BATCH_SIZE,
sort_key = lambda x: len(x.src),
sort_within_batch=True
)
As for the index [0]: TabularDataset.splits always returns a tuple of datasets (one per split), so with only a train file it returns a 1-tuple, and train[0] is the whole training Dataset rather than a single example.
Good luck
| https://stackoverflow.com/questions/58241313/ |
How to Fix "RuntimeError: CUDA error: device-side assert triggered" in Pytorch | I am trying to train the yolo-v3 model from this repo https://github.com/eriklindernoren/PyTorch-YOLOv3
on my custom dataset of shapes, but I keep getting the error "RuntimeError: CUDA error: device-side assert triggered"
I have tried to look up the solution and tried several things suggested in different answers (like fixing the indexing of the classes in the annotations), but the error persists.
I am following the description in the readme of the repo to train on a custom dataset, and have adjusted custom.data and the data/custom/ accordingly.
I keep receiving this output.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [32,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [33,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [34,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [35,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [36,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [37,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [38,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [39,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [40,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [41,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [42,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [43,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [44,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [45,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [0,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [1,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [2,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [3,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [4,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [5,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [6,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [7,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [12,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [13,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [14,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [15,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [20,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [21,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [22,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [23,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [24,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [25,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [26,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [27,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [28,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [29,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: block: [0,0,0], thread: [31,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
Traceback (most recent call last):
File "train.py", line 105, in <module>
loss, outputs = model(imgs, targets)
File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "D:\Documents\GP\Code\TorchYolo\PyTorch-YOLOv3\models.py", line 259, in forward
x, layer_loss = module[0](x, targets, img_dim)
File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "D:\Documents\GP\Code\TorchYolo\PyTorch-YOLOv3\models.py", line 188, in forward
ignore_thres=self.ignore_thres,
File "D:\Documents\GP\Code\TorchYolo\PyTorch-YOLOv3\utils\utils.py", line 318, in build_targets
iou_scores[b, best_n, gj, gi] = bbox_iou(pred_boxes[b, best_n, gj, gi], target_boxes, x1y1x2y2=False)
File "D:\Documents\GP\Code\TorchYolo\PyTorch-YOLOv3\utils\utils.py", line 199, in bbox_iou
b1_x1, b1_x2 = box1[:, 0] - box1[:, 2] / 2, box1[:, 0] + box1[:, 2] / 2
RuntimeError: CUDA error: device-side assert triggered
with the only thing changing being the "2" in the array index when messing around with the train.jpg class label index
b1_x1, b1_x2 = box1[:, 0] - box1[:, 2] / 2, box1[:, 0] + box1[:, 2] / 2
| I also had this issue while training a ResNet model on Google Colab. In my case I was training the model on 7 classes, but the last layer of my network was set to output only 3.
Hence I changed
self.classifier = torch.nn.Sequential(torch.nn.BatchNorm1d(512), torch.nn.Linear(512, 3))
to this
self.classifier = torch.nn.Sequential(torch.nn.BatchNorm1d(512), torch.nn.Linear(512, 7))
Even after doing that, I was still getting the error, because I hadn't restarted the Colab runtime.
Remember whenever you get this error, check two things:
class_labels should start from 0 i.e. in my case [0,1,2,3,4,5,6] for 7 classes.
Check whether the final output layer outputs the exact number of classes.
And after that, restart the notebook to flush all the CUDA asserts.
After any CUDA error, restart the runtime; otherwise you will keep hitting the same error, because the earlier assertion hasn't been flushed out and every subsequent CUDA call fails.
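A further debugging aid: device-side asserts surface asynchronously, so the Python stack trace often points at an unrelated line. Running the script with the environment variable CUDA_LAUNCH_BLOCKING=1 set (or temporarily moving the model to the CPU) makes the failure report at the operation that actually triggered it.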
| https://stackoverflow.com/questions/58242415/ |
How do I train an LSTM in Pytorch? | I am having a hard time understanding the inner workings of LSTM in Pytorch.
Let me show you a toy example. Maybe the architecture does not make much sense, but I am trying to understand how LSTM works in this context.
The data can be obtained from here. Each row i (total = 1152) is a slice, starting from t = i until t = i + 91, of a longer time series. I will extract the last column of each row to use as labels.
import torch
import numpy as np
import pandas as pd
from torch import nn, optim
from sklearn.metrics import mean_absolute_error
data = pd.read_csv('data.csv', header = None).values
X = torch.tensor(data[:, :90], dtype = torch.float).view(1152, 1, 90)
y = torch.tensor(data[:, 90], dtype = torch.float).view(1152, 1, 1)
dataset = torch.utils.data.TensorDataset(X, y)
loader = torch.utils.data.DataLoader(dataset, batch_size = 50)
Then I am defining an LSTM regressor containing three LSTM layers with different structures.
class regressor_LSTM(nn.Module):
def __init__(self):
super().__init__()
self.lstm1 = nn.LSTM(input_size = 49, hidden_size = 100)
self.lstm2 = nn.LSTM(100, 50)
self.lstm3 = nn.LSTM(50, 50, dropout = 0.3, num_layers = 2)
self.dropout = nn.Dropout(p = 0.3)
self.linear = nn.Linear(in_features = 50, out_features = 1)
def forward(self, X):
X, _ = self.lstm1(X)
X = self.dropout(X)
X, _ = self.lstm2(X)
X = self.dropout(X)
X, _ = self.lstm3(X)
X = self.dropout(X)
X = self.linear(X)
return X
Initializing what needs to be initialized:
regressor = regressor_LSTM()
criterion = nn.MSELoss()
optimizer = optim.RMSprop(regressor.parameters())
Then training:
for epoch in range(25):
acc_loss = 0.
acc_mae = 0.
for i, data in enumerate(loader):
inputs, labels = data
optimizer.zero_grad()
outputs = regressor(inputs)
loss = criterion(outputs, labels)
loss.backward(retain_graph = True)
optimizer.step()
acc_loss += loss.item()
mae = mean_absolute_error(labels.detach().cpu().numpy().flatten(), outputs.detach().cpu().numpy().flatten())
acc_mae += mae
# print('\rEPOCH {:3d} - Loop {:3d} of {:3d}: loss {:03.2f} - MAE {:03.2f}'.format(epoch+1, i+1, len(loader), loss, mae), end = '\r')
print('\nEPOCH %3d FINISHED: loss %.5f - MAE %.5f' % (epoch+1, acc_loss/len(loader), acc_mae/len(loader)))
The thing is, after some initial decrease in both loss and MAE (expected behavior), both seem to get stuck (showing only first 10 epochs below):
EPOCH 1 FINISHED: loss 0.38506 - MAE 0.27322
EPOCH 2 FINISHED: loss 0.02825 - MAE 0.13601
EPOCH 3 FINISHED: loss 0.02593 - MAE 0.13117
EPOCH 4 FINISHED: loss 0.02568 - MAE 0.12705
EPOCH 5 FINISHED: loss 0.02546 - MAE 0.12920
EPOCH 6 FINISHED: loss 0.02502 - MAE 0.12763
EPOCH 7 FINISHED: loss 0.02445 - MAE 0.12659
EPOCH 8 FINISHED: loss 0.02310 - MAE 0.12328
EPOCH 9 FINISHED: loss 0.02277 - MAE 0.12237
EPOCH 10 FINISHED: loss 0.02352 - MAE 0.12476
When run with Keras, both metrics decrease consistently throughout the process. (I also noticed Keras takes much longer.)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, LSTM
import pandas as pd
data = pd.read_csv('data.csv', header = None).values
X = data[:, :90].reshape(1152, 90, 1)
y = data[:, 90]
regressor = Sequential()
regressor.add(LSTM(units = 100, return_sequences = True, input_shape = (90, 1)))
regressor.add(Dropout(0.3))
regressor.add(LSTM(units = 50, return_sequences = True))
regressor.add(Dropout(0.3))
regressor.add(LSTM(units = 50, return_sequences = True))
regressor.add(Dropout(0.3))
regressor.add(LSTM(units = 50))
regressor.add(Dropout(0.3))
regressor.add(Dense(units = 1, activation = 'linear'))
regressor.compile(optimizer = 'rmsprop', loss = 'mean_squared_error', metrics = ['mean_absolute_error'])
regressor.fit(X, y, epochs = 25, batch_size = 32)
[OUTPUT]
Epoch 1/25
1152/1152 - 35s 30ms/sample - loss: 0.0307 - mean_absolute_error: 0.1225
Epoch 2/25
1152/1152 - 32s 28ms/sample - loss: 0.0156 - mean_absolute_error: 0.0978
Epoch 3/25
1152/1152 - 32s 28ms/sample - loss: 0.0126 - mean_absolute_error: 0.0871
Epoch 4/25
1152/1152 - 34s 30ms/sample - loss: 0.0111 - mean_absolute_error: 0.0806
Epoch 5/25
1152/1152 - 29s 25ms/sample - loss: 0.0103 - mean_absolute_error: 0.0785
Epoch 6/25
1152/1152 - 29s 25ms/sample - loss: 0.0088 - mean_absolute_error: 0.0718
Epoch 7/25
1152/1152 - 32s 27ms/sample - loss: 0.0085 - mean_absolute_error: 0.0699
Epoch 8/25
1152/1152 - 30s 26ms/sample - loss: 0.0069 - mean_absolute_error: 0.0640
Epoch 9/25
1152/1152 - 30s 26ms/sample - loss: 0.0077 - mean_absolute_error: 0.0660
Epoch 10/25
1152/1152 - 30s 26ms/sample - loss: 0.0070 - mean_absolute_error: 0.0644
I've been reading about hidden state initialization, I tried to set them to 0 in the beginning of the forward method (which, though, I understood to be the standard behavior), but nothing helped. I must confess that I do not understand what the parameters of an LSTM are, nor which should be reinitialized (if any) after each batch or epoch.
I appreciate any return!
| I am coming back after a few days because I have come to a conclusion. After reading some material on hidden/cell states (this one was quite useful), it seems that reusing them is a matter of network design choice; whether to do so, and when, can count as a hyperparameter. I tried many options with my toy dataset, mainly resetting the states after each batch, resetting after each epoch, and not resetting at all, and the results were quite similar. Also, my results were so poor because (I believe) I did not set shuffle = True in the loader; doing so made them considerably better (loss around 0.003, MAE around 0.047).
In the original code for the LSTM class, line 510, it also seems that hidden/cell states are initialized to zero if no values are explicitly passed.
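For reference, explicit per-batch resetting looks like this (a sketch; lstm is an nn.LSTM and the state shapes are (num_layers, batch_size, hidden_size)):
h0 = torch.zeros(num_layers, batch_size, hidden_size)
c0 = torch.zeros(num_layers, batch_size, hidden_size)
out, (hn, cn) = lstm(X, (h0, c0))  # pass (hn, cn) to the next call instead to carry state over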
| https://stackoverflow.com/questions/58251677/ |
Deriving the structure of a pytorch network | For my use case, I require to be able to take a pytorch module and interpret the sequence of layers in the module so that I can create a “connection” between the layers in some file format. Now let’s say I have a simple module as below
class mymodel(nn.Module):
def __init__(self, input_channels):
super(mymodel, self).__init__()
self.fc = nn.Linear(input_channels, input_channels)
def forward(self, x):
out = self.fc(x)
out += x
return out
if __name__ == "__main__":
net = mymodel(5)
for mod in net.modules():
print(mod)
Here the output yields:
mymodel(
(fc): Linear(in_features=5, out_features=5, bias=True)
)
Linear(in_features=5, out_features=5, bias=True)
as you can see the information about the plus equals operation or plus operation is not captured as it is not a nnmodule in the forward function. My goal is to be able to create a graph connection from the pytorch module object to say something like this in json :
layers {
"fc": {
"inputTensor" : "t0",
"outputTensor": "t1"
}
"addOp" : {
"inputTensor" : "t1",
"outputTensor" : "t2"
}
}
The input tensor names are arbitrary but it captures the essence of the graph and the connections between layers.
My question is, is there a way to extract the information from a pytorch object? I was thinking to use the .modules() but then realized that hand written operations are not captured this way as a module. I guess if everything is an nn.module then the .modules() might give me the network layer arrangement. Looking for some help here. I want to be able to know the connections between tensors to create a format as above.
| The information you are looking for is not stored in the nn.Module, but rather in the grad_fn attribute of the output tensor:
channels = 5
model = mymodel(channels)
pred = model(torch.rand((1, channels)))
pred.grad_fn # all the information is in the computation graph of the output tensor
It is not trivial to extract this information. You might want to look at torchviz package that draws a nice graph from the grad_fn information.
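If you prefer to walk the graph yourself, each grad_fn node exposes its inputs through its next_functions attribute; a minimal sketch (the printed node names vary across PyTorch versions):
def walk(fn, depth=0):
    print('  ' * depth + type(fn).__name__)  # e.g. AddBackward0, AccumulateGrad
    for next_fn, _ in fn.next_functions:
        if next_fn is not None:
            walk(next_fn, depth + 1)

walk(pred.grad_fn)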
| https://stackoverflow.com/questions/58253003/ |
tf.function property in pytorch | I'm a beginner in pytorch, and I have some functions that need to be implemented in the network.
My question is: is there anything like tf.function, or should I use a class(nn.Module) with Variables?
For example, let X be a 10x2 matrix . In pseudo-code:
a = Variable(1.0)
b = Variable(1.0)
Y = a*X[:,0]**2 + b*X[:,1]
| In PyTorch you don't need things like tf.function, you just use normal Python code (because of the dynamic graph).
Please give more detailed example (with code) of what you're trying to do if the above doesn't answer your question.
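For the pseudo-code in the question, the direct translation is just ordinary PyTorch (a sketch; X's shape taken from the question):
import torch

a = torch.tensor(1.0, requires_grad=True)  # plays the role of Variable(1.0)
b = torch.tensor(1.0, requires_grad=True)
X = torch.rand(10, 2)
Y = a * X[:, 0] ** 2 + b * X[:, 1]  # the graph is built on the fly as this line runs
Y.sum().backward()                  # gradients end up in a.grad and b.grad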
| https://stackoverflow.com/questions/58261029/ |
Is this how nn.Transformer works? | If I want to transform an image to another image,
then
transformer_model = nn.Transformer(img_size, n_heads)
transformer_model(source_image, target_image)
is this the correct way to use nn.Transformer?
| No, this is not what the Transformer module does. The Transformer is primarily used for pre-training general use models for NLP on large bodies of text. If you're curious to learn more, I strongly recommend you read the article which introduced the architecture, "Attention is All You Need". If you've heard of models like BERT or GPT-2, these are examples of transformers.
It's not entirely clear what you are trying to accomplish when you ask how to "transform an image into another image." I'm thinking maybe you are looking for something like this? https://junyanz.github.io/CycleGAN/
In any event, to re-answer your question: no, that's not how you use nn.Transformer. You should try to clarify what you are trying to accomplish with "transforming one picture into another," and post that description as a separate question.
| https://stackoverflow.com/questions/58267531/ |
EnsembleVoteClassifier with neural network | I have several trained neural networks whose predictions I am trying to average using EnsembleVoteClassifier from mlxtend.classifier. The problem is that my neural networks don't share the same input (I applied feature reduction and feature selection algorithms randomly and stored the results in different variables, so I have something like X_test_algo1, X_test_algo2, X_test_algo3 and Y_test).
I am trying to average the weights, but as I said, I don't have the same X, and I didn't find any example in the documentation. How can I average the predictions of my three models model1, model2 and model3?
eclf = EnsembleVoteClassifier(clfs=[model1, model2, model3], weights=[1,1,1], refit=False)
names = ['NN1', 'NN2', 'NN2', 'Ensemble']
eclf.fit(X_train_algo1, Ytrain) #????
If it's not possible, that is okay. I am only interested in how to calculate the formulas of Hard Voting, Soft Voting and Weighted Voting, or whether there is another library that is more flexible; the explicit expressions of the formulas would be helpful too.
| Why would you need a library to do that?
Simply pass the same examples through all your neural networks and get the predictions (either logits or probabilities or labels).
Hard voting: choose the label predicted most often by the classifiers.
Soft voting: average the probabilities predicted by the classifiers and choose the label with the highest average.
Weighted voting: either of the above can be weighted. Just assign a weight to each classifier and multiply its predictions by it. Weights are usually normalized to the (0, 1] range.
In principle you could also sum logits and choose the label with the highest sum.
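A minimal NumPy sketch of these schemes (prob_list holds each network's (n_samples, n_classes) probabilities, label_list holds integer label predictions; both names are mine):
import numpy as np

def soft_vote(prob_list, weights=None):  # weights=None gives plain soft voting
    w = np.ones(len(prob_list)) if weights is None else np.asarray(weights, dtype=float)
    w = w / w.sum()
    avg = sum(wi * p for wi, p in zip(w, prob_list))
    return avg.argmax(axis=1)

def hard_vote(label_list):
    stacked = np.stack(label_list)       # (n_models, n_samples)
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, stacked)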
Oh, and weight averaging is different technique and requires you to have the same model and usually is done for the same initialization but at different training timesteps. You can read about it in this blog post.
| https://stackoverflow.com/questions/58276400/ |
Segmentation fault when using the custom activation function | I am trying to implement a custom activation function (the codes attached below). Before using the custom activation function, everything works well. However, as long as it is used, the server would throw the error:
Segmentation fault
The error always appears at the first epoch.
I am using
Pytorch 1.1.0
Cuda compilation tools, release 9.2, V9.2.148
The code:
def mg(x):
c = 1.33
b = 0.4
p = 6.88
input_size = x.shape
num = torch.numel(x) # the element number of the input tensor
x = x.view(num)
out = torch.zeros(len(x))
for i in range(len(x)):
if x[i] < 0:
out[i] = 0
else:
out[i] = (c * x[i]) / (1 + torch.mul(b * p, torch.pow(x[i], p)))
out = out.view(input_size[0], input_size[1], input_size[2], input_size[3])
return out
| You are breaking the gradient with the newly created out tensor.
You should modify your code to act on the input x directly. Additionally, you shouldn't use any loops (there is almost always a way to do it without them). Given that, this function should be equivalent to yours but works:
def mg(x, c=1.33, b=0.4, p=6.88):
    input_size = x.shape
    x = x.flatten()
    x[x < 0] = 0
    mask = x != 0
    x[mask] = (c * x[mask]) / (1 + b * p * x[mask] ** p)  # the denominator uses the unscaled values
    return x.reshape(*input_size)
If you are still getting an error it's probably related to some other part of your program.
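A quick sanity check that gradients flow through it (the clone() matters: the in-place assignments above are not allowed directly on a leaf tensor that requires grad):
x = torch.randn(2, 3, 4, 5, requires_grad=True)
y = mg(x.clone())
y.sum().backward()
print(x.grad.shape)  # torch.Size([2, 3, 4, 5])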
| https://stackoverflow.com/questions/58286286/ |
Restarting with adam | I am training my network with early stopping strategy. I start with a higher learning rate, and based on validation loss, I need to restart training from an earlier snapshot.
I am able to save/load snapshot with model and optimizer state_dicts. No problem with that.
My question is, once I restart training, how do I set the learning rate of Adam again? Should I restart Adam fresh instead of using a state_dict, or should I use
optimizer.param_groups[0]['lr'] = lr to adjust the learning rate with the loaded optimizer state_dict?
For example,
I train my network with lr = 1e-6 for 5 epochs, saved model and optimizer state_dict.
I am now restarting from epoch 6, but I need lr = 1e-7 instead. What is the best approach for this?
Thanks!
| Looking further into the scheduler code, I found the correct way to do it as:
def get_lr(gamma, optimizer):
return [group['lr'] * gamma
for group in optimizer.param_groups]
for param_group, lr in zip(optimizer.param_groups, get_lr(gamma, optimizer)):
param_group['lr'] = lr
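If instead you want to jump straight to a specific value (like the 1e-7 in the question) after loading the optimizer state_dict, you can assign it directly:
for param_group in optimizer.param_groups:
    param_group['lr'] = 1e-7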
| https://stackoverflow.com/questions/58298711/ |
What is the logic behind this assignment: understanding in-place assignment operations in numpy | I have two fairly simple snippets that give different answers. I understand it is due to a shared reference, but I am not clear on what exactly happens in the second case.
a = np.ones(5)
b = torch.from_numpy(a)
a=np.add(a, 1, out=a)
print(a)
print(b)
[out]:
[2. 2. 2. 2. 2.]
tensor([2., 2., 2., 2., 2.], dtype=torch.float64)
a = np.ones(5)
b = torch.from_numpy(a)
a=a+1
print(a)
print(b)
[out]:
[2. 2. 2. 2. 2.]
tensor([1., 1., 1., 1., 1.], dtype=torch.float64)
Why isn't b changed in the second case ?
| In the first case both a and b share the same memory (i.e. b is a view of a; in other words, b points to the same (array) value that a points to), and the out argument guarantees that the same memory of a is updated in place once the np.add() operation completes. In the second case, a = a+1 creates a new copy, and b still points to the old value of a.
Try the second case with:
a += 1
and observe that both a and b are indeed updated.
In [7]: a = np.ones(5)
...: b = torch.from_numpy(a)
...: a += 1
In [8]: a
Out[8]: array([2., 2., 2., 2., 2.])
In [9]: b
Out[9]: tensor([2., 2., 2., 2., 2.], dtype=torch.float64)
As @hpaulj aptly pointed out in his comment, when we do a = a+1, a new object is created and a would now point to this new (array) object instead of the old one, which is still pointed to by b. And this is the reason the (array) value of b is not updated.
To understand this behavior a bit better, you might want to refer to the excellent article by Ned Batchelder about how names are bound to values in Python.
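A quick way to see the aliasing is np.shares_memory, which checks whether two arrays use the same buffer:
import numpy as np
import torch

a = np.ones(5)
b = torch.from_numpy(a)
print(np.shares_memory(a, b.numpy()))  # True: one buffer, two views
a = a + 1                              # rebinds the name a to a brand new array
print(np.shares_memory(a, b.numpy()))  # False: b still sees the old buffer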
| https://stackoverflow.com/questions/58310080/ |
Can't see graph using torch.utils.tensorboard | I'm trying to get used to tensorboard, and I code my models using pytorch.
However, when I try to view my model using the add_graph() function, the graph does not show up (screenshot omitted). This is the test code:
import numpy as np
import torch
import torchvision.transforms as transforms
import torch.nn as nn
import torch.optim as optim
from torch.utils.tensorboard import SummaryWriter
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.linear = nn.Linear(2, 1)
def forward(self, x):
x = self.linear(x)
return x
writer = SummaryWriter('runs_pytorch/test')
net = Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
writer.add_graph(net, torch.zeros([4, 2], dtype=torch.float))
writer.close()
On the other hand, if I try to see a graph using TensorFlow, everything seems fine (screenshot omitted). This is the test code this time:
import tensorflow as tf
tf.Variable(42, name='foo')
w = tf.summary.FileWriter('runs_tensorflow/test')
w.add_graph(tf.get_default_graph())
w.flush()
w.close()
In case you are wondering, I'm using this command to start tensorboard:
tensorboard --logdir runs_pytorch
Something I noticed is that when I use it on the directory allocated for my tensorflow test, I've got the usual message with the address, but if I do the same thing with --logdir runs_pytorch I've got something more:
W1010 15:19:24.225109 15308 plugin_event_accumulator.py:294] Found more than one graph event per run, or there was a metagraph containing a graph_def, as well as one or more graph events. Overwriting the graph with the newest event.
W1010 15:19:24.226075 15308 plugin_event_accumulator.py:322] Found more than one "run metadata" event with tag step1. Overwriting it with the newest event.
I'm on windows, I tried on different browsers (chrome, firefox...).
I have tensorflow 1.14.0, torch 1.2.0, Python 3.7.3
Thank you very much for your help, it's driving me crazy!
| There are two ways to solve it:
1. update PyTorch to 1.3.0 and above:
conda way:
conda install pytorch torchvision cudatoolkit=9.2 -c pytorch
pip way:
pip3 install torch==1.3.0+cu92 torchvision==0.4.1+cu92 -f https://download.pytorch.org/whl/torch_stable.html
2. install tensorboardX instead:
uninstall tensorboard:
if your tensorboard is installed by pip:
pip uninstall tensorboard
if your tensorboard is installed by anaconda:
conda uninstall tensorboard
install tensorboardX
pip install tensorboardX
when writing script,
change
from torch.utils.tensorboard import SummaryWriter
to
from tensorboardX import SummaryWriter
| https://stackoverflow.com/questions/58324471/ |
Pytorch copying inexact value of numpy floating point number | I'm converting a floating point number (or numpy array) to Pytorch tensor and it seems to be copying the inexact value to the tensor. The error comes in the 8th significant digit and afterwards. This is significant (no-pun intended) for my work as I deal with chaotic dynamics which is very sensitive towards the slight change in the initial conditions.
I'm already using torch.set_printoptions(precision=16) to print 16 significant digits.
np_x = state
print(np_x)
x = torch.tensor(np_x,requires_grad=True,dtype=torch.float32)
print(x.data[0])
and the output is :
0.7575408585008059
tensor(0.7575408816337585)
It would be helpful to know what is going wrong or how it could be resolved ?
| Because you're using float32 dtype. If you convert these two numbers to binary, you will find they are actually the same. Strictly speaking, the most accurate representations of those two numbers in float32 format are the same.
0.7575408585008059
Most accurate representation = 7.57540881633758544921875E-1
0.7575408816337585
Most accurate representation = 7.57540881633758544921875E-1
Binary: 00111111 01000001 11101110 00110011
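If those digits matter (as they do for chaotic dynamics), switching to float64 keeps them; a quick check:
import torch

torch.set_printoptions(precision=16)
x = torch.tensor(0.7575408585008059, dtype=torch.float64)
print(x)  # tensor(0.7575408585008059, dtype=torch.float64)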
| https://stackoverflow.com/questions/58329742/ |
How to fix the error `Process finished with exit code -1073741819 (0xC0000005)` | My problem is that when I run an FC network, the code works well on both CPU and GPU. But when it comes to a CNN, I can only train it on CPU. It raises an error when I try to train it on GPU.
Like this:
Process finished with exit code -1073741819 (0xC0000005)
I find the error is raised when the code reaches loss.backward(). The error happens when I use the first line below instead of the second.
device = torch.device("cuda:0")
device = torch.device("cuda:0" if opt.cuda else "cpu")
My environment is Python 3.6.9, Windows 10, Torch 1.2.0, Cuda 9.2.
| Finally, I figured this out.
This error occurs because one of my variables was not loaded onto the GPU.
When I add output = Variable(netD(real_cpu), requires_grad=True), the problem is solved.
| https://stackoverflow.com/questions/58334740/ |
How to initialize mean and variance of Pytorch BatchNorm2d? | I’m transforming a TensorFlow model to Pytorch. And I’d like to initialize the mean and variance of BatchNorm2d using TensorFlow model.
I’m doing it in this way:
bn.running_mean = torch.nn.Parameter(torch.Tensor(TF_param))
And I get this error:
RuntimeError: the derivative for 'running_mean' is not implemented
But is works for bn.weight and bn.bias. Is there any way to initialize the mean and variance using my pre-trained Tensorflow model? Is there anything like moving_mean_initializer and moving_variance_initializer in Pytorch?
Thanks!
| The running mean and variance of a batch norm layer are not nn.Parameters, but rather buffers of the layer.
I think you can simply assign a torch.tensor; there's no need to wrap an nn.Parameter around it.
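For example (a sketch; TF_mean and TF_var stand for the arrays exported from the TensorFlow model):
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(num_features=64)
bn.running_mean.copy_(torch.as_tensor(TF_mean, dtype=torch.float32))
bn.running_var.copy_(torch.as_tensor(TF_var, dtype=torch.float32))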
| https://stackoverflow.com/questions/58334955/ |
How to avoid create a network structure properly | I tried to define a feedforward function in my neural network model:
class FeedForward(nn.Module):
def __init__(self):
super(FeedForward,self).__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 64)
self.fc2 = nn.Linear(64, 10)
def feedforward(self, x):
x = x.view(x.shape[0], -1) # make sure inputs are flattened
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x= F.log_softmax(x, dim=1) # preserve batch dim
return x
The message says:
NotImplementedError
I am not sure what I am missing.
| The method name must be forward, not feedforward: nn.Module.__call__ dispatches to self.forward, so with any other name the base class's forward (which just raises NotImplementedError) is what runs. Note also that the original code assigns to self.fc2 twice while forward references self.fc3; that is fixed below as well:
class FeedForward(nn.Module):
def __init__(self):
super(FeedForward,self).__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 64)
self.fc3 = nn.Linear(64, 10)  # fc3, not fc2: forward references self.fc3
def forward(self, x): # this is what pytorch expects
x = x.view(x.shape[0], -1) # make sure inputs are flattened
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x= F.log_softmax(x, dim=1) # preserve batch dim
return x
| https://stackoverflow.com/questions/58334999/ |
Translating LSTM model from Keras to Pytorch | I am having a hard time translating a quite simple LSTM model from Keras to Pytorch. X (get it here) corresponds to 1152 samples of 90 timesteps, each timestep has only 1 dimension. y (here) is a single prediction at t = 91 for all 1152 samples.
In Keras:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, LSTM
import numpy as np
import pandas as pd
X = pd.read_csv('X.csv', header = None).values
X.shape
y = pd.read_csv('y.csv', header = None).values
y.shape
# From Keras documentation [https://keras.io/layers/recurrent/]:
# Input shape 3D tensor with shape (batch_size, timesteps, input_dim).
X = np.reshape(X, (1152, 90, 1))
regressor = Sequential()
regressor.add(LSTM(units = 100, return_sequences = True, input_shape = (90, 1)))
regressor.add(Dropout(0.3))
regressor.add(LSTM(units = 50, return_sequences = True))
regressor.add(Dropout(0.3))
regressor.add(LSTM(units = 50, return_sequences = True))
regressor.add(Dropout(0.3))
regressor.add(LSTM(units = 50))
regressor.add(Dropout(0.3))
regressor.add(Dense(units = 1, activation = 'linear'))
regressor.compile(optimizer = 'rmsprop', loss = 'mean_squared_error', metrics = ['mean_absolute_error'])
regressor.fit(X, y, epochs = 10, batch_size = 32)
... leads me to:
# Epoch 10/10
# 1152/1152 [==============================] - 33s 29ms/sample - loss: 0.0068 - mean_absolute_error: 0.0628
Then in Pytorch:
import torch
from torch import nn, optim
from sklearn.metrics import mean_absolute_error
X = pd.read_csv('X.csv', header = None).values
y = pd.read_csv('y.csv', header = None).values
X = torch.tensor(X, dtype = torch.float32)
y = torch.tensor(y, dtype = torch.float32)
dataset = torch.utils.data.TensorDataset(X, y)
loader = torch.utils.data.DataLoader(dataset, batch_size = 32, shuffle = True)
class regressor_LSTM(nn.Module):
def __init__(self):
super().__init__()
self.lstm1 = nn.LSTM(input_size = 1, hidden_size = 100)
self.lstm2 = nn.LSTM(100, 50)
self.lstm3 = nn.LSTM(50, 50, dropout = 0.3, num_layers = 2)
self.dropout = nn.Dropout(p = 0.3)
self.linear = nn.Linear(in_features = 50, out_features = 1)
def forward(self, X):
# From the Pytorch documentation [https://pytorch.org/docs/stable/_modules/torch/nn/modules/rnn.html]:
# **input** of shape `(seq_len, batch, input_size)`
X = X.view(90, 32, 1)
# I am discarding hidden/cell states since in Keras I am using a stateless approach
# [https://keras.io/examples/lstm_stateful/]
X, _ = self.lstm1(X)
X = self.dropout(X)
X, _ = self.lstm2(X)
X = self.dropout(X)
X, _ = self.lstm3(X)
X = self.dropout(X)
X = self.linear(X)
return X
regressor = regressor_LSTM()
criterion = nn.MSELoss()
optimizer = optim.RMSprop(regressor.parameters())
for epoch in range(10):
running_loss = 0.
running_mae = 0.
for i, data in enumerate(loader):
inputs, labels = data
optimizer.zero_grad()
outputs = regressor(inputs)
outputs = outputs[-1].view(*labels.shape)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
mae = mean_absolute_error(labels.detach().cpu().numpy().flatten(), outputs.detach().cpu().numpy().flatten())
running_mae += mae
print('EPOCH %3d: loss %.5f - MAE %.5f' % (epoch+1, running_loss/len(loader), running_mae/len(loader)))
... leads me to:
# EPOCH 10: loss 0.04220 - MAE 0.16762
You can notice that both loss and MAE are quite different (Pytorch's are much higher). If I use Pytorch's model to predict the values, they all return as a constant.
What am I doing wrong?
| Oh I believe I made considerable progress. It seems that the way to represent y is different between Keras and Pytorch. In Keras, we should pass it as a single value representing one timestep in the future (or, at least, for the problem I am trying to solve). But in Pytorch, y must be X shifted one timestep to the future. It is like this:
time_series = [0, 1, 2, 3, 4, 5]
X = [0, 1, 2, 3, 4]
# Keras:
y = [5]
# Pytorch:
y = [1, 2, 3, 4, 5]
This way, Pytorch compares all values in the time slice when calculating loss. I believe Keras rearranges the data under the hood to conform to this approach, as the code works when fed the variables just like that. But in Pytorch, I was estimating loss based only on one value (the one I was trying to predict), not the whole series, therefore I believe it could not correctly capture the time dependency.
Taking this into consideration, I got to:
EPOCH 100: loss 0.00551 - MAE 0.058435
And, most importantly, comparing true and predicted values in a separate dataset showed that the patterns were clearly captured by the model (the plot of true vs. predicted values is omitted here).
Hooray!
| https://stackoverflow.com/questions/58354951/ |
Masking and Instance Normalization in PyTorch | Assume I have a PyTorch tensor, arranged as shape [N, C, L] where N is the batch size, C is the number of channels or features, and L is the length. In this case, if one wishes to perform instance normalization, one does something like:
N = 20
C = 100
L = 40
m = nn.InstanceNorm1d(C, affine=True)
input = torch.randn(N, C, L)
output = m(input)
This will perform a normalization in the L-wise dimension for each N*C = 2000 slices of data, subtracting 2000 means, scaling by 2000 standard deviations, and re-scaling by 100 learnable weight and bias parameters (one per channel). The unspoken assumption here is that all of these values exist and are meaningful.
But I have a situation where, for the slice N=1, I would like to exclude all data after (say) L=35. For the slice N=2 (say) all the data are valid. For the slice N=3, exclude all data after L=30, etc. This mimics data which are one dimensional time sequences, having multiple features, but which are not the same length.
How can I perform an instance norm on such data, get correct statistics, and maintain differentiability/AutoGrad information in PyTorch?
Update: While maintaining GPU performance, or at least not killing it dead.
I cannot...
...Mask with zero values, as this destroys the computer means and variances giving erroneous results
...Mask with np.nan or np.inf, as PyTorch tensors do not ignore such values, but treat them as errors. They are sticky, and lead to garbage results. PyTorch currently lacks the equivalent of np.nanmean and np.nanvar.
...Permute or transpose to an amenable arrangement of data; no such approach gives me what I need
...Use a pack_padded_sequence; instance normalization does not operate on that data structure, and one cannot import data into that structure as far as I know. Also, data re-arrangement would still be necessary, see 3 above.
Am I missing an approach which would give me what I need? Or perhaps am I missing a method of data re-arrangement which would allow 3 or 4 above to work?
This is an issue faced by recurrent neural networks all the time, hence the pack_padded_sequence functionality, but it isn't quite applicable here.
| I don't think this is directly possible to implement using the existing InstanceNorm1d; the easiest way is probably to implement it yourself from scratch. I did a quick implementation that should work. To make it a little more general, this module requires a boolean mask (a boolean tensor of the same size as the input) that specifies which elements should be considered when passing through the instance norm.
import torch
class MaskedInstanceNorm1d(torch.nn.Module):
def __init__(self, num_features, eps=1e-6, momentum=0.1, affine=True, track_running_stats=False):
super().__init__()
self.num_features = num_features
self.eps = eps
self.momentum = momentum
self.affine = affine
self.track_running_stats = track_running_stats
self.gamma = None
self.beta = None
if self.affine:
self.gamma = torch.nn.Parameter(torch.ones((1, self.num_features, 1), requires_grad=True))
self.beta = torch.nn.Parameter(torch.zeros((1, self.num_features, 1), requires_grad=True))
self.running_mean = None
self.running_variance = None
        if self.track_running_stats:
            self.running_mean = torch.zeros((1, self.num_features, 1), requires_grad=False)
            self.running_variance = torch.ones((1, self.num_features, 1), requires_grad=False)
def forward(self, x, mask):
        mean = torch.zeros((1, self.num_features, 1), requires_grad=False, device=x.device)
        variance = torch.ones((1, self.num_features, 1), requires_grad=False, device=x.device)
# compute masked mean and variance of batch
for c in range(self.num_features):
if mask[:, c, :].any():
mean[0, c, 0] = x[:, c, :][mask[:, c, :]].mean()
variance[0, c, 0] = (x[:, c, :][mask[:, c, :]] - mean[0, c, 0]).pow(2).mean()
# update running mean and variance
if self.training and self.track_running_stats:
for c in range(self.num_features):
if mask[:, c, :].any():
self.running_mean[0, c, 0] = (1-self.momentum) * self.running_mean[0, c, 0] \
+ self.momentum * mean[0, c, 0]
self.running_variance[0, c, 0] = (1-self.momentum) * self.running_variance[0, c, 0] \
+ self.momentum * variance[0, c, 0]
# compute output
x = (x - mean)/(self.eps + variance).sqrt()
if self.affine:
x = x * self.gamma + self.beta
return x
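A minimal usage sketch (the lengths here are hypothetical; the mask marks the valid timesteps per example and is replicated across channels):
import torch

N, C, L = 4, 100, 40
lengths = torch.tensor([35, 40, 30, 40])               # valid length per example
x = torch.randn(N, C, L)

# True where the timestep is valid, replicated over the channel dimension
mask = torch.arange(L)[None, :] < lengths[:, None]     # N x L
mask = mask[:, None, :].expand(N, C, L)                # N x C x L

norm = MaskedInstanceNorm1d(C)
y = norm(x, mask)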
| https://stackoverflow.com/questions/58361068/ |
output:\ntorch-1.1.0-cp27-cp27mu-linux_x86_64.whl is not a supported wheel on this platform- Pytorch /cloud functions | I'm trying to deploy a function on Cloud functions and I'm having trouble getting pytorch to work. I need either version 1.1, 1.2 or 1.3 (any version that has torch.hub functionality)
Here is what I have been trying in the requirments.txt for my function:
numpy==1.17.2
https://download.pytorch.org/whl/cpu/torch-1.1.0-cp27-cp27mu-linux_x86_64.whl
Which results in the error:
Build failed: {"error": {"canonicalCode": "INVALID_ARGUMENT", "errorMessage": "`pip_download_wheels` had stderr output:\ntorch-1.1.0-cp27-cp27mu-linux_x86_64.whl is not a supported wheel on this platform.\n\nerror: `pip_download_wheels` returned code: 1", "errorType": "InternalError", "errorId": XXX}}
And of course trying the various other url for 1.2 and 1.3 and the same thing happens.
What can I do to fix this/ what am I doing wrong??
Appreciate the help.
-if this affects anything, I'm using Python 3.7 for my function and have been using the linux packages.
__________________________________________________________________________
Edit: I've tried this in my requirments.txt:
numpy==1.17.0
torch==1.3.0
torchvision===0.4.1
And I now get the error:
Build failed: {"cacheStats": [{"status": "MISS", "hash": "1f6ebb5b3667b3d677184dbf04b82666XXX", "type": "docker_layer_cache", "level": "global"}, {"status": "MISS", "hash": "1f6ebb5b3667b3d677184dbf04b826660b67c784608d4e4XXXXX", "type": "docker_layer_cache", "level": "project"}]}
I've never had this error with any other library on cloud functions. If anyone has other suggestions it would be greatly appreciated.
That cp27 means it's for Python 2.7. It's generally not a good idea to install from a URL; use the package name instead (like numpy==1.17.2).
Try something like pip3 install torch torchvision; this will give the latest stable (1.3) version with CUDA 10.
Look at the pytorch's home page - https://pytorch.org/ - for reference
| https://stackoverflow.com/questions/58369705/ |
What is the gradient of pytorch floor() gradient method? | I am looking to use floor() method in one of my models. I would like to understand what pytorch does with its gradient propagation since as such floor is a discontinuous method.
If there is no gradient defined, I could override the backward method to define my own gradient as necessary but I would like to understand what the default behavior is and the corresponding source code if possible.
import torch
x = torch.rand(20, requires_grad=True)
y = 20*x
z = y.floor().sum()
z.backward()
x.grad returns zeros.
z has a grad_fn=<FloorBackward>
So FloorBackward is the gradient method. But there is no reference to the source code of FloorBackward in pytorch repository.
As the floor function is piecewise constant, its gradient must be zero almost everywhere.
While the code doesn't say anything about it, I expect that the gradient is set to a constant zero everywhere.
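If you want a useful gradient anyway, one common workaround is a custom autograd Function with a straight-through estimator. This is a standard trick, not PyTorch's built-in behavior; a minimal sketch:
import torch

class FloorSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # exact floor in the forward pass
        return torch.floor(x)

    @staticmethod
    def backward(ctx, grad_output):
        # pretend floor is the identity when backpropagating
        return grad_output

x = torch.rand(20, requires_grad=True)
z = FloorSTE.apply(20 * x).sum()
z.backward()
print(x.grad)  # all 20s now, instead of the zeros above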
| https://stackoverflow.com/questions/58374374/ |
Trouble with nn.embedding in pytorch, expected scalar type Long, but got torch.cuda.FloatTensor (how to fix)? | so I have an RNN encoder that is part of a larger language model, where the process is encode -> rnn -> decode.
As part of my __init__ for my rnn class I have the following:
self.encode_this = nn.Embedding(self.vocab_size, self.embedded_vocab_dim)
now I am trying to implement a forward class, which takes in batches and performs encoding then decoding,
def f_calc(self, batch):
#Here, batch.shape[0] is the size of batch while batch.shape[1] is the sequence length
hidden_states = (torch.zeros(self.num_layers, batch.shape[0], self.hidden_vocab_dim).to(device))
embedded_states = (torch.zeros(batch.shape[0],batch.shape[1], self.embedded_vocab_dim).to(device))
o1, h = self.encode_this(embedded_states)
however, my problem is always with the encoder which gives me the following error:
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1465 # remove once script supports set_grad_enabled
1466 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1467 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1468
1469
RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.cuda.FloatTensor instead (while checking arguments for embedding)
Anyone have any idea how to fix? I am completely new to pytorch so please excuse me if this is a stupid question. I know there is some form of type casting involved but I am not sure how to go about doing it...
much appreciated!
| Embedding layer expects integers at the input.
import torch as t
emb = t.nn.Embedding(embedding_dim=3, num_embeddings=26)
emb(t.LongTensor([0,1,2]))
Add long() in your code:
embedded_states = (torch.zeros(batch.shape[0],batch.shape[1], self.embedded_vocab_dim).to(device)).long()
| https://stackoverflow.com/questions/58374866/ |
Run multiple models of an ensemble in parallel with PyTorch | My neural network has the following architecture:
input -> 128x (separate fully connected layers) -> output averaging
I am using a ModuleList to hold the list of fully connected layers. Here's how it looks at this point:
class MultiHead(nn.Module):
def __init__(self, dim_state, dim_action, hidden_size=32, nb_heads=1):
super(MultiHead, self).__init__()
self.networks = nn.ModuleList()
for _ in range(nb_heads):
network = nn.Sequential(
nn.Linear(dim_state, hidden_size),
nn.Tanh(),
nn.Linear(hidden_size, dim_action)
)
self.networks.append(network)
self.cuda()
self.optimizer = optim.Adam(self.parameters())
Then, when I need to calculate the output, I use a for ... in construct to perform the forward and backward pass through all the layers:
q_values = torch.cat([net(observations) for net in self.networks])
# skipped code which ultimately computes the loss I need
self.optimizer.zero_grad()
loss.backward()
self.optimizer.step()
This works! But I am wondering if I couldn't do this more efficiently. I feel like by doing a for...in, I am actually going through each separate FC layer one by one, while I'd expect this operation could be done in parallel.
| In the case of Convnd in place of Linear you could use the groups argument for "grouped convolutions" (a.k.a. "depthwise convolutions"). This let's you handle all parallel networks simultaneously.
If you use a convolution kernel of size 1, then the convolution does nothing else than applying a Linear layer, where each channel is considered an input dimension. So the rough structure of your network would look like this:
Modify the input tensor of shape B x dim_state as follows: replicate it nb_heads times and add a trailing length dimension, turning B x dim_state into B x (dim_state * nb_heads) x 1
replace the two Linear with
nn.Conv1d(in_channels=dim_state * nb_heads, out_channels=hidden_size * nb_heads, kernel_size=1, groups=nb_heads)
and
nn.Conv1d(in_channels=hidden_size * nb_heads, out_channels=dim_action * nb_heads, kernel_size=1, groups=nb_heads)
we now have a tensor of size B x (dim_action x nb_heads) x 1 you can now modify it to whatever shape you want (e.g. B x nb_heads x dim_action)
While CUDA natively supports grouped convolutions, there were some issues in pytorch with the speed of grouped convolutions (see e.g. here) but I think that was solved now.
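Putting the pieces together, a minimal sketch of the grouped-convolution variant (class and variable names are illustrative, not from the question):
import torch
import torch.nn as nn

class MultiHeadGrouped(nn.Module):
    def __init__(self, dim_state, dim_action, hidden_size=32, nb_heads=1):
        super().__init__()
        self.nb_heads = nb_heads
        self.dim_action = dim_action
        # groups=nb_heads keeps the nb_heads sub-networks independent
        self.net = nn.Sequential(
            nn.Conv1d(dim_state * nb_heads, hidden_size * nb_heads,
                      kernel_size=1, groups=nb_heads),
            nn.Tanh(),
            nn.Conv1d(hidden_size * nb_heads, dim_action * nb_heads,
                      kernel_size=1, groups=nb_heads),
        )

    def forward(self, x):                  # x: B x dim_state
        x = x.repeat(1, self.nb_heads)     # B x (dim_state * nb_heads)
        x = x.unsqueeze(-1)                # add the length dimension
        out = self.net(x)                  # B x (dim_action * nb_heads) x 1
        return out.view(-1, self.nb_heads, self.dim_action)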
| https://stackoverflow.com/questions/58374980/ |
What is the alternative of CUDA GPU for model training with CPU support? | I dont have CUDA enabled GPU but I have i7 processor and 16GB Ram 1 GB amd graphics card
i want to disable that option and need to train a model with CPU support itself
mycodes are
parser = argparse.ArgumentParser()
parser.add_argument("--gpu", dest='gpu', type=str, default='0', help='Set CUDA_VISIBLE_DEVICES environment variable, optional')
os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu
params = vars(args)
How can I change this into the CPU version?
| So, the above is just the argparser, which tells Python which values to accept at the command line. It just sets variable values within the code. Even if we change this, it wouldn't change how the code runs.
It depends on how your code is written (that actually calls the ML) but running on CPU is the default. Your code specifically has to tell it to run on the GPU.
With the line os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu you're setting the environment variable CUDA_VISIBLE_DEVICES to the command-line-passed-in argument gpu ... which your code that calls the GPU will use.
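As an illustration (hedged, since the behavior depends on the framework): frameworks that check for GPU availability will fall back to the CPU when no GPU is visible, which you can force with that same variable; code that calls .cuda() unconditionally will instead raise an error.
import os
# must be set before the framework initializes CUDA
os.environ['CUDA_VISIBLE_DEVICES'] = ''  # hide all GPUs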
But you need to change the code regarding how the ML processes are called.
Maybe you can post more code?
| https://stackoverflow.com/questions/58408143/ |
how to specify python==3.6.8 for PyTorch Estimator (conda_packages not sufficient) | I need to run my python script under Azure Machine Learning, using python=3.6.8 (not the default 3.6.2). I am using the AML "PyTorch()" Estimator, setting the "conda_packages" arg to ["python==3.6.8"].
I am relying on this doc page for the PyTorch Estimator:
https://learn.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.dnn.pytorch?view=azure-ml-py
When my script runs, I print out "sys.version" and see that it is still set to python 3.6.2:
python: 3.6.2 | packaged by conda-forge | (default, Jul 23 2017, 22:59:30)
[GCC 4.8.2 20140120 (Red Hat 4.8.2-15)]
I expected to see python 3.6.8, since I specified that in the PyTorch Estimator's conda_packages arg.
I also tried moving the "python==3.6.8" from conda_packages to pip_packages, but received an error saying pip could not locate that package.
FYI, I have another package specified in pip_packages, and that does get installed correctly during this process. It seems like the value of the "conda_packages" arg is not being used (I can find no mention of a conda or python install error in the AML logs for my job).
| one other option is specifying a conda dependency file conda_dependencies_file_path with the right python version. the below docs outlines detailed documentation on how to do that. once you specify conda_depencies_file_path, it overrides pip_packages, and conda_packages so I recommend putting all your packages in the conda dependency file
https://learn.microsoft.com/en-us/python/api/azureml-core/azureml.core.conda_dependencies.condadependencies?view=azure-ml-py
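For illustration, a minimal conda dependencies file along those lines might look like the sketch below; the file name, environment name and package list are assumptions, not Azure ML requirements:
name: project_environment
dependencies:
  - python=3.6.8
  - pip:
      - azureml-defaults
      - torch==1.1.0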
| https://stackoverflow.com/questions/58412359/ |
Feed multi resolution images to Neural Network Pytorch | I am new in this area so pardon if my question seems stupid.
I have created a multiresolution image pyramid using
skimage.transform.pyramid_gaussian
The images are 2D. Now I want to feed these images to a neural network. The structure of the neural network is not fixed. But I can't do that since the images are not of the same size. Can anyone guide me to any resource regarding if this can be done?
Thank you
There are two types of neural networks: those that can process variable input sizes and those that require a fixed input size.
A good example of the first kind is the Fully Convolutional Network (FCN). They are widely used for object detection and semantic segmentation. The next code snippet is a minimal example of testing the pre-trained keypointrcnn from PyTorch, an improvement over the previous state-of-the-art Mask R-CNN.
import torch
import torchvision
from PIL import Image
model_rcnn = torchvision.models.detection.keypointrcnn_resnet50_fpn(pretrained=True)
model_rcnn.eval()
image1 = Image.open('image122 × 430.jpg')
image2 = Image.open('image448 × 465.jpg')
image_tensor1 = torchvision.transforms.functional.to_tensor(image1)
image_tensor2 = torchvision.transforms.functional.to_tensor(image2)
output1 = model_rcnn([image_tensor1])
output2 = model_rcnn([image_tensor2])
print(output1, output2)
Second kind of Neural Networks require fixed size input, for example ResNet. Standard solution is using Resize transform before feeding images to the network. Minimal example:
import torch
import torchvision
from torchvision import transforms
from PIL import Image
model_imagnet = torchvision.models.resnet50(pretrained=True)
model_imagnet.eval()
# don't forget to use the same normalization as in training,
# if you are using pre-trained model
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
my_transforms = transforms.Compose([transforms.Resize(224),
transforms.ToTensor(),
normalize])
image1 = Image.open('image122 × 430.jpg')
image2 = Image.open('image448 × 465.jpg')
image_tensor1 = my_transforms(image1)
image_tensor2 = my_transforms(image2)
output1 = model_imagnet(torch.unsqueeze(image_tensor1, 0))
output2 = model_imagnet(torch.unsqueeze(image_tensor2, 0))
For more details about the models and there usage you may refer to PyTorch documentation
| https://stackoverflow.com/questions/58422569/ |
How to extract features from a pytorch pretrained fine-tuned model | I need to extract features from a pretrained (fine-tuned) BERT model.
I fine-tuned a pretrained BERT model in Pytorch using huggingface transformer. All the training/validation is done on a GPU in cloud.
At the end of the training, I save the model and tokenizer like below:
best_model.save_pretrained('./saved_model/')
tokenizer.save_pretrained('./saved_model/')
This creates below files in the saved_model directory:
config.json
added_token.json
special_tokens_map.json
tokenizer_config.json
vocab.txt
pytorch_model.bin
I save the saved_model directory in my computer and load the model and tokenizer like below
model = torch.load('./saved_model/pytorch_model.bin',map_location=torch.device('cpu'))
tokenizer = BertTokenizer.from_pretrained('./saved_model/')
Now to extract features, I do below
input_ids = torch.tensor([tokenizer.encode("Here is some text to encode", add_special_tokens=True)])
last_hidden_states = model(input_ids)[0][0]
But for the last line, it throws me error TypeError: 'collections.OrderedDict' object is not callable
It seems like I am not loading the model properly. Instead of loading the entire model itself, I think my model=torch.load(....) line is loading an ordered dictionary.
What am I missing here? Am I even saving the model in the right way? Please suggest.
| torch.load() returns a collections.OrderedDict object. Checkout the recommended way of saving and loading a model's state dict.
Save:
torch.save(model.state_dict(), PATH)
Load:
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH))
model.eval()
So, in your case, it should be:
model = BertModel(config)
model.load_state_dict(torch.load('./saved_model/pytorch_model.bin',
                                 map_location=torch.device('cpu')))
model.eval()  # to disable dropouts
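Alternatively, since the directory was written with save_pretrained, letting the library rebuild the model should also work (a sketch, assuming the saved architecture matches BertModel):
from transformers import BertModel, BertTokenizer

model = BertModel.from_pretrained('./saved_model/')
tokenizer = BertTokenizer.from_pretrained('./saved_model/')
model.eval()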
| https://stackoverflow.com/questions/58424594/ |
Compute gradient between a scalar and vector in PyTorch | I am trying to replicate code which was written using Theano, to PyTorch. In the code, the author computes the gradient using
import theano.tensor as T
gparams = T.grad(cost, params)
and the shape of gparams is (256, 240)
I have tried using backward() but it doesn't seem to return anything. Is there an equivalent to grad within PyTorch?
Assume this is my input,
import torch
from torch.autograd import Variable
cost = torch.tensor(1.6019)
params = Variable(torch.rand(1, 73, 240))
| cost needs to be a result of an operation involving params. You can't compute a gradient just knowing the values of two tensors. You need to know the relationship as well. This is why pytorch builds a computation graph when you perform tensor operations. For example, say the relationship is
cost = torch.sum(params)
then we would expect the gradient of cost with respect to params to be a vector of ones regardless of the value of params.
That could be computed as follows. Notice that you need to add the requires_grad flag to indicate to pytorch that you want backward to update the gradient when called.
# Initialize independent variable. Make sure to set requires_grad=True.
params = torch.tensor((1., 73., 240.), requires_grad=True)  # floats: integer tensors can't require grad
# Compute cost, this implicitly builds a computation graph which records
# how cost was computed with respect to params.
cost = torch.sum(params)
# Zero the gradient of params in case it already has something in it.
# This step is optional in this example but good to do in practice to
# ensure you're not adding gradients to existing gradients.
if params.grad is not None:
params.grad.zero_()
# Perform back propagation. This is where the gradient is actually
# computed. It also resets the computation graph.
cost.backward()
# The gradient of params w.r.t to cost is now stored in params.grad.
print(params.grad)
Result:
tensor([1., 1., 1.])
| https://stackoverflow.com/questions/58431156/ |
RuntimeError: Error(s) in loading state_dict for BertModel | I finetune a BERT model using hugging face transformer library and train it in GPU in the cloud. Then I save the model and tokenizer like below:
model.save_pretrained('/saved_model/')
torch.save(best_model.state_dict(), '/saved_model/model')
tokenizer.save_pretrained('/saved_model/')
I download the saved_model directory in my computer. Then I load the model/tokenizer like below in my computer
import torch
from transformers import *
tokenizer = BertTokenizer.from_pretrained('./saved_model/')
config = BertConfig('./saved_model/config.json')
model = BertModel(config)
model.load_state_dict(torch.load('./saved_model/pytorch_model.bin', map_location=torch.device('cpu')))
model.eval()
But it throws below error for the model.load_state_dict line
RuntimeError: Error(s) in loading state_dict for BertModel:
Missing key(s) in state_dict:
It lists a bunch of keys that are apparently missing from the state_dict.
I am new to pytorch and not sure what is going on. Most likely I am not saving the model the right way.
Please suggest.
| As you may know, the state_dict of a PyTorch module is an OrderedDict. When you tried to load the weights of a module from a state_dict, it complains about missing keys which means the state_dict does not contain those keys. In this situation, I would suggest taking the following actions.
Check which keys are present in the state_dict. It seems unlikely that you saved only a subset of the keys.
Also, make sure you have the correct configuration loaded. Otherwise, if your trained BertModel and the new BertModel for which you want to load the weights are different, then you will receive this error.
Finally, if your code passes both the above cases, then saving the model, make sure you save all the layers' parameters in the file. The statement, torch.save(best_model.state_dict(), '/saved_model/model') looks okay to me but make sure the best_model.state_dict() contains all the expected keys.
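A quick sketch for the first two checks above. Note that in recent transformers versions, BertConfig's first positional argument is vocab_size rather than a file path (an assumption worth verifying against your installed version), so from_json_file is the safe way to read the config:
import torch
from transformers import BertConfig, BertModel

config = BertConfig.from_json_file('./saved_model/config.json')
model = BertModel(config)

# compare the checkpoint's keys with the keys the model expects
state_dict = torch.load('./saved_model/pytorch_model.bin', map_location='cpu')
missing = set(model.state_dict().keys()) - set(state_dict.keys())
unexpected = set(state_dict.keys()) - set(model.state_dict().keys())
print('missing:', missing)
print('unexpected:', unexpected)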
| https://stackoverflow.com/questions/58444517/ |
How to view the weight_decay loss in PyTorch during the training? | I train a model with Adam optimizer in PyTorch and set the weight_decay parameter to 1.0.
optimizer = optim.Adam(model.parameters(), lr=args.lr, weight_decay=1.0)
optimizer.zero_grad()
loss.backward()
optimizer.step()
If I want to compare the number of the weight_decay loss and the model loss, how do I view the value of the loss caused by the weight_decay?
| Are you familiar with L2 regularization? If not, you can study it. I find this tutorial very helpful.
There is a subtle difference between L2 regularization and weight decay and that is:
Weight decay is usually defined as a term that’s added directly to the update rule. On the other hand, the L2 regularization term is added to the loss function.
You may find this tutorial helpful to study the differences between weight decay and L2 regularization.
[Update] I find the lecture by Prof. Andrew Ng very helpful.
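If you just want to monitor the magnitude of the implicit penalty, a rough sketch using the question's variables. This assumes the classical correspondence where adding weight_decay * p to the gradient equals differentiating 0.5 * weight_decay * ||p||^2; with Adam's adaptive learning rates the effect on the updates is only approximate:
wd = 1.0  # same value passed to the optimizer
l2_term = 0.5 * wd * sum(p.pow(2).sum() for p in model.parameters())
print('weight-decay penalty:', l2_term.item(), ' model loss:', loss.item())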
| https://stackoverflow.com/questions/58444848/ |
pytorch model returns NANs after first round | This is my first time writing a Pytorch-based CNN. I've finally gotten the code to run to the point of producing output for the first data batch, but on the second batch produces nans. I greatly simplified the model for debugging purposes, but it's still not working right. The model shown here is just a few fully connected layers with a linear output.
I am guessing that the problem is the the back-propagation step, but it's unclear to me where and why.
Here is a very simplified version of the model that still produces the error:
Data loader:
batch_size = 36
device = 'cuda'
# note "rollaxis" to move channel from last to first dimension
# X_train is n input images x 70 width x 70 height x 3 channels
# Y_train is n doubles
torch_train = utils.TensorDataset(torch.from_numpy(np.rollaxis(X_train, 3, 1)).float(), torch.from_numpy(Y_train).float())
train_loader = utils.DataLoader(torch_train, batch_size=batch_size, shuffle=True)
Define and create the model:
def MyCNN(**kwargs):
return MyCNN_model_simple(**kwargs)
# switched from Sequential() style to assist debugging
class MyCNN_model_simple(nn.Module):
def __init__(self, **kwargs):
super(MyCNN_model_simple, self).__init__()
self.fc1 = FullyConnected( 3 * 70 * 70, 100)
self.fc2 = FullyConnected( 100, 100)
self.last = nn.Linear(100, 1)
# self.net = nn.Sequential(
# self.fc1,
# self.fc2,
# self.last,
# nn.Flatten()
# )
def forward(self, x):
print(f"x shape A: {x.shape}")
x = torch.flatten(x, 1)
print(f"x shape B: {x.shape}")
x = self.fc1(x)
print(f"x shape C: {x.shape}")
x = self.fc2(x)
print(f"x shape D: {x.shape}")
x = self.last(x)
print(f"x shape E: {x.shape}")
x = torch.flatten(x)
print(f"x shape F: {x.shape}")
return x
# return self.net(x)
class FullyConnected(nn.Module):
def __init__(self, in_channels, out_channels, dropout=None):
super(FullyConnected, self).__init__()
layers = []
layers.append(nn.Linear(in_channels, out_channels, bias=True))
layers.append(nn.ReLU())
if dropout != None:
layers.append(nn.Dropout(p=dropout))
self.net = nn.Sequential(*layers)
def forward(self, x):
return self.net(x)
model = MyCNN()
# convert to 16-bit half-precision to save memory
model.half()
model.to(torch.device('cuda'))
Run the model:
loss_fn = nn.MSELoss()
dev = torch.device('cuda')
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
losses = []
max_batches = 2
def process_batch():
inputs = images.half().to(dev)
values = scores.half().to(dev)
# clear accumulated gradients
optimizer.zero_grad()
# make predictions
outputs = model(inputs)
# calculate and save the loss
model_out = torch.flatten(outputs)
print(f"Outputs: {model_out}")
loss = loss_fn(model_out.half(), torch.flatten(values))
losses.append( loss.item() )
# backpropogate the loss
loss.backward()
# adjust parameters to computed gradients
optimizer.step()
model.train()
i = 0
for images, scores in train_loader:
process_batch()
i += 1
if i > max_batches: break
Stdout:
x shape A: torch.Size([36, 3, 70, 70])
x shape B: torch.Size([36, 9800])
x shape C: torch.Size([36, 100])
x shape D: torch.Size([36, 100])
x shape E: torch.Size([36, 1])
x shape F: torch.Size([36])
Outputs: tensor([0.0406, 0.0367, 0.0446, 0.0529, 0.0406, 0.0391, 0.0397, 0.0391, 0.0415,
0.0443, 0.0410, 0.0406, 0.0349, 0.0396, 0.0368, 0.0401, 0.0343, 0.0419,
0.0428, 0.0385, 0.0345, 0.0431, 0.0287, 0.0328, 0.0309, 0.0416, 0.0473,
0.0352, 0.0422, 0.0375, 0.0428, 0.0345, 0.0368, 0.0319, 0.0365, 0.0382],
device='cuda:0', dtype=torch.float16, grad_fn=<AsStridedBackward>)
x shape A: torch.Size([36, 3, 70, 70])
x shape B: torch.Size([36, 9800])
x shape C: torch.Size([36, 100])
x shape D: torch.Size([36, 100])
x shape E: torch.Size([36, 1])
x shape F: torch.Size([36])
Outputs: tensor([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan],
device='cuda:0', dtype=torch.float16, grad_fn=<AsStridedBackward>)
x shape A: torch.Size([36, 3, 70, 70])
x shape B: torch.Size([36, 9800])
x shape C: torch.Size([36, 100])
x shape D: torch.Size([36, 100])
x shape E: torch.Size([36, 1])
x shape F: torch.Size([36])
Outputs: tensor([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan],
device='cuda:0', dtype=torch.float16, grad_fn=<AsStridedBackward>)
You can see the nans that are coming out of the model starting with the second batch. Is there anything obviously wrong that I'm doing? If anyone has tips on best practices for debugging pytorch module runs that I can use to track down the problem, that would be very helpful.
Thanks.
You should switch to full precision when applying the weight updates, and back to half precision for the forward/backward passes:
loss.backward()
model.float() # add this here
optimizer.step()
Switch back to half precision:
for images, scores in train_loader:
model.half() # add this here
process_batch()
| https://stackoverflow.com/questions/58457901/ |
PyTorch - a functional equivalent of nn.Module | As we know we can wrap arbitrary number of stateful building blocks into a class which inherits from nn.Module. But how is it supposed to be done when you want to wrap a bunch of stateless functions (from nn.Functional), in order to fully utilize things which nn.Module allows you to, like automatic moving of tensors between CPU and GPU with just model.to(device)?
I already found the solution: if you have an operation inside a module which creates a new tensor, then you have to use self.register_buffer in order to fully utilize the automatic moving between devices.
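A minimal sketch of the pattern (module and buffer names are illustrative):
import torch
import torch.nn as nn

class ShiftedReLU(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # a constant tensor: not a Parameter, but it follows .to(device)
        self.register_buffer('shift', torch.linspace(0, 1, dim))

    def forward(self, x):
        return torch.relu(x + self.shift)

m = ShiftedReLU(4).to('cpu')   # buffers move together with the module
print(m(torch.zeros(2, 4)))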
| https://stackoverflow.com/questions/58465570/ |
Creating custom dataset in PyTorch | Problem
In PyTorch, I am trying to write a class that could return the entire data and label separately using syntax like dataset.data and dataset.label. The code skeleton looks like:
class MyDataset(object):
data = _get_data()
label = _get_label()
def __init__(self, dir, transforms):
self.img_list = ... # all image paths loaded from dir
# do something
def __getitem__(self):
# do something
return data, label
def __len__(self):
return len(self.img_list)
def _get_data():
# do something
def _get_label():
# do something
However, when I use dataset.data and dataset.label to access the corresponding variables, nothing is returned.
I am wondering why this is the case and how I can fix this.
Edit
Thank you for all of your attention.
I have solved this problem by myself. The solution is pretty straightforward, which just utilizes the property of class variables.
class FaceDataset(object):
# class variable
data = None
label = None
def __init__(self, root, transforms=None):
# read img_list from root
img_list = ...
self.transforms = ...
FaceDataset.data = FaceDataset._get_data(self.img_list, self.transforms)
FaceDataset.label = FaceDataset._get_label(self.img_list)
@classmethod
def _get_data(cls, img_list, transforms):
data_list = []
for img_path in img_list:
data_list.append(transforms(Image.open(img_path)).unsqueeze(0))
return torch.stack(data_list, dim=0)
@classmethod
def _get_label(cls, img_list):
label = torch.zeros(len(img_list))
for i, img_path in enumerate(img_list):
label[i] = ...
return label
def __getitem__(self, index):
img_path = self.img_list[index]
label = ...
# read image from file
data = Image.open(img_path)
# apply transform defined in __init__
data = self.transforms(data)
return data, label
def __len__(self):
return len(self.img_list)
| The "normal" way to create custom datasets in Python has already been answered here on SO. There happens to be an official PyTorch tutorial for this.
For a simple example, you can read the PyTorch MNIST dataset code here (this dataset is used in this PyTorch example code for further illustration). Finally, you can find other dataset implementations in this torchvision datasets list (click on the dataset name, then on the "source" button in the dataset documentation, to access the dataset's PyTorch implementation).
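For reference, a minimal sketch of that standard pattern, which also makes dataset.data and dataset.label directly accessible as attributes (the tensors here are placeholders):
import torch
from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self, data, label):
        self.data = data      # e.g. a tensor of images
        self.label = label    # e.g. a tensor of targets

    def __getitem__(self, index):
        return self.data[index], self.label[index]

    def __len__(self):
        return len(self.data)

ds = MyDataset(torch.randn(10, 3, 32, 32), torch.zeros(10))
print(ds.data.shape, ds.label.shape)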
| https://stackoverflow.com/questions/58496535/ |
In PyTorch, what is the difference between forward() and an ordinary method? | How is implementing the forward() method of a custom nn.Module class different from adding an ordinary method to that class?
I heard that the forward() method should only accept and return tensors, because PyTorch has implemented special processing on the input and output of the forward() method. But I tried inputting/outputting non-tensor objects on a forward() method, and implementing a module that doesn't have a forward() method (instead, there are multiple custom-named methods which act like forward() methods). Both ways worked well.
forward() does accept any type of parameters. However, the goal of the forward() method is to encapsulate the forward computational steps. forward() is called inside the module's __call__ function, which is what runs when you call the model itself to perform the forward pass.
It is encouraged to:
NOT call the forward(x) method. You should call the whole model itself, as in model(x) to perform a forward pass and output predictions.
What happens if you do not do that?
If you call the .forward() method, and have hooks in your model, the hooks won’t have any effect.
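A small sketch demonstrating that difference (the hook only fires when going through __call__):
import torch
import torch.nn as nn

class Doubler(nn.Module):
    def forward(self, x):
        return 2 * x

m = Doubler()
m.register_forward_hook(lambda mod, inp, out: print('hook fired'))
m(torch.ones(1))          # prints 'hook fired'
m.forward(torch.ones(1))  # bypasses __call__, so the hook stays silent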
| https://stackoverflow.com/questions/58508190/ |
Can both the GPU and CPU versions of PyTorch be installed in the same Conda environment? | The PyTorch installation web page shows how to install the GPU and CPU versions of PyTorch:
conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
and
conda install pytorch torchvision cpuonly -c pytorch
Can both version be installed in the same Conda environment?
In case you might ask why would this be needed, it's because I would like a single Conda environment which I can use on computers which have a GPU and those which don't.
| The GPU version of PyTorch is actually a superset of the CPU PyTorch. You can use the GPU PyTorch on a CPU, but you cannot use the CPU PyTorch on a GPU. So in your case, installing just the GPU version of PyTorch would be sufficient.
| https://stackoverflow.com/questions/58511598/ |
Difference between nn.MaxPool2d vs.nn.functional.max_pool2d? | Whats the difference between: nn.MaxPool2d(kernel_size, stride) and nn.functional.max_pool2d(t, kernel_size, stride)?
The first one I define in the module and the second in the forward function?
Thanks
They are essentially the same. The difference is that torch.nn.MaxPool2d is an explicit nn.Module that calls through to torch.nn.functional.max_pool2d() in its own forward() method.
You can look at the source for torch.nn.MaxPool2d here and see the call for yourself: https://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html#MaxPool2d
Reproduced below:
def forward(self, input):
return F.max_pool2d(input, self.kernel_size, self.stride,
self.padding, self.dilation, self.ceil_mode,
self.return_indices)
Why have two approaches for the same task? I suppose it's to suit the coding style of the many people who might use PyTorch. Some prefer a stateful approach while others prefer a more functional approach.
For example having torch.nn.MaxPool2d means that we could very easily drop it into a nn.Sequential block.
model = nn.Sequential(
nn.Conv2d(1,3,3),
nn.ReLU(),
nn.MaxPool2d((2, 2))
)
| https://stackoverflow.com/questions/58514197/ |
Taking a derivative through torch.ge, or how to explicitly define a derivative in pytorch | I am trying to set up a network in which one layer maps from real numbers to {0, 1} (i.e. makes output binary).
What I tried
While I was able to find that torch.ge provides such functionality, whenever I want to train any parameter occurring before that layer in a network PyTorch breaks.
I have been also trying to find if there is any way in PyTorch/autograd, to override the derivative of a module by hand. More specifically in this cause, I would just like to pass derivative through the torch.ge, without changing it.
Minimal Example
Here is a minimal example I produced, which uses a typical neural network training structure in PyTorch.
import torch
import torch.nn as nn
import torch.optim as optim
class LinearGE(nn.Module):
def __init__(self, features_in, features_out):
super().__init__()
self.fc = nn.Linear(features_in, features_out)
def forward(self, x):
return torch.ge(self.fc(x), 0)
x = torch.randn(size=(10, 30))
y = torch.randint(2, size=(10, 10))
# Define Model
m1 = LinearGE(30, 10)
opt = optim.SGD(m1.parameters(), lr=0.01)
crit = nn.MSELoss()
# Train Model
for x_batch, y_batch in zip(x, y):
# zero the parameter gradients
opt.zero_grad()
# forward + backward + optimize
pred = m1(x_batch)
loss = crit(pred.float(), y_batch.float())
loss.backward()
opt.step()
What I encountered
When I run the above code the following error occurs:
File "__minimal.py", line 33, in <module>
loss.backward()
...
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
This error makes sense since torch.ge function is not differentiable. However, since MaxPool2D is also not differentiable, I believe that there are ways of mitigating non-differentiability in PyTorch.
It would be great if someone could point me to any source which can help me either implement my own backprop for a custom module, or any way of avoiding this error message.
Thanks!
| Two things I noticed
If your input x is 10x30 (10 examples, 30 features) and the number of output nodes is 10, then the parameter matrix is 30x10 and the expected output matrix is 10x10 (10 examples, 10 output nodes).
ge = greater than or equal to. As the code indicates, it computes x >= 0 element-wise. We can use ReLU instead.
class LinearGE(nn.Module):
def __init__(self, features_in, features_out):
super().__init__()
self.fc = nn.Linear(features_in, features_out)
self.relu = nn.ReLU(inplace=True)
def forward(self, x):
return self.relu(self.fc(x))
or torch.clamp, which is the element-wise max with a scalar (torch.max(t, 0) would instead reduce over dimension 0):
torch.clamp(self.fc(x), min=0)
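If you really need a hard {0, 1} output rather than a ReLU, the usual workaround is a custom autograd Function with a straight-through gradient; a hedged sketch:
import torch

class GreaterEqualSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return torch.ge(x, 0).float()  # hard 0/1 output

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output             # pass the gradient straight through

x = torch.randn(5, requires_grad=True)
GreaterEqualSTE.apply(x).sum().backward()
print(x.grad)  # ones, instead of the RuntimeError above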
| https://stackoverflow.com/questions/58529988/ |
How can I change the following code from pytorch to tensorflow? | I want to change the follow pytorch network (v1.2) to tensorflow. I am confusing between tf.nn.conv2d and tf.keras.layers.Conv2D what should I choose?
import torch.nn as nn
nn.Sequential(nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride, padding=padding, dilation=dilation, bias=True),
nn.BatchNorm2d(out_planes),
nn.ReLU(inplace=True))
tf.nn.conv2d is the functional API and tf.keras.layers.Conv2D is the layer-class API. You should use the latter one. It's the same relationship as between torch.nn.functional.conv2d and torch.nn.Conv2d.
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, ReLU, BatchNormalization
model = Sequential()
model.add(Conv2D(filters=10, kernel_size=3, strides=1))
model.add(BatchNormalization())
model.add(ReLU())
| https://stackoverflow.com/questions/58532977/ |
torch.softmax and torch.sigmoid are not equivalent in the binary case | Given:
x_batch = torch.tensor([[-0.3, -0.7], [0.3, 0.7], [1.1, -0.7], [-1.1, 0.7]])
and then applying torch.sigmoid(x_batch):
tensor([[0.4256, 0.3318],
[0.5744, 0.6682],
[0.7503, 0.3318],
[0.2497, 0.6682]])
gives a completely different result to torch.softmax(x_batch,dim=1):
tensor([[0.5987, 0.4013],
[0.4013, 0.5987],
[0.8581, 0.1419],
[0.1419, 0.8581]])
As per my understanding, isn't the softmax exactly the same as the sigmoid in the binary case?
| You are misinformed. Sigmoid and softmax are not equal, even for the 2 element case.
Consider x = [x1, x2].
sigmoid(x1) = 1 / (1 + exp(-x1))
but
softmax(x1) = exp(x1) / (exp(x1) + exp(x2))
= 1 / (1 + exp(-x1)/exp(-x2))
= 1 / (1 + exp(-(x1 - x2)))
= sigmoid(x1 - x2)
From the algebra we can see an equivalent relationship is
softmax(x, dim=1) = sigmoid(x - fliplr(x))
or in pytorch
x_softmax = torch.sigmoid(x_batch - torch.flip(x_batch, dims=(1,)))
| https://stackoverflow.com/questions/58539767/ |
Error: symeig_cpu: the algorithm failed to converge: 6 off-diagonal elements of an intermediate tridiagonal form did not converge to zero | I am trying to use https://github.com/Michaelvll/DeepCCA
After 20-40 iterations, it gives the following error:
RuntimeError: symeig_cpu: the algorithm failed to converge; 6 off-diagonal elements of an intermediate tridiagonal form did not converge to zero.
Error is generated from https://github.com/Michaelvll/DeepCCA/blob/master/objectives.py#L46
[D1, V1] = torch.symeig(SigmaHat11, eigenvectors=True)
System Configuration:
Windows 10.
Python 3.7
Pytorch 1.2.0
How can I debug this?
| I have come across a similar error. Its root cause was a failure in the Cholesky decomposition at some point down the road, because a tensor was singular.
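If the failure is numerical, one common mitigation (a general technique, not specific to this repo) is to add a small multiple of the identity to the covariance matrix before the eigendecomposition; a sketch with an illustrative epsilon:
eps = 1e-4  # illustrative jitter value; tune for your data
SigmaHat11 = SigmaHat11 + eps * torch.eye(SigmaHat11.shape[0], device=SigmaHat11.device)
D1, V1 = torch.symeig(SigmaHat11, eigenvectors=True)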
| https://stackoverflow.com/questions/58547160/ |
Non LSTM: Trying to backward through the graph a second time, but the buffers have already been freed | Note that unlike other questions, this is not about any RNN structure. I wish to create a model that has changing gradients, and will look like below. The breakpoints are manually supplied.
The model that I have created is as follows:
class Trend(nn.Module):
"""
Broken Trend model, with breakpoints as defined by user.
"""
def __init__(self, breakpoints):
super().__init__()
self.bpoints = breakpoints[None, :]
self.init_layer = nn.Linear(1,1) # first linear bit
# extract gradient and bias
w = self.init_layer.weight
b = self.init_layer.bias
self.params = [[w,b]] # save it to buffer
if len(breakpoints>0):
# create deltas which is how the gradient will change
deltas = torch.randn(len(breakpoints)) / len(breakpoints) # initialisation
self.deltas = nn.Parameter(deltas) # make it a parameter
for d, x1 in zip(self.deltas, breakpoints):
y1 = w *x1 + b # find the endpoint of line segment (x1, y1)
w = w + d # add on the delta to gradient
b = y1 - w * x1 # find new bias of line segment
self.params.append([w,b]) # add to buffer
# create buffer
self.wb = torch.zeros(len(self.params), len(self.params[0]))
def __copy2array(self):
"""
Saves parameters into wb
"""
for i in range(self.wb.shape[0]):
for j in range(self.wb.shape[1]):
self.wb[i,j] = self.params[i][j]
def forward(self, x):
# get the line segment area (x_sec) for each x
x_sec = x >= self.bpoints
x_sec = x_sec.sum(1)
self.__copy2array() # copy across parameters into matrix
# get final prediction y = mx +b for relevant section
return x*self.wb[x_sec][:,:1] + self.wb[x_sec][:,1:]
However, once I attempt to train it I get the error RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.
I obtained the above plot by doing:
time = torch.arange(700).float()[:,None]
y_pred = model(time)
plt.plot(time, y_pred.detach().numpy())
plt.show()
So we know the forward pass is working as expected. However the backward pass is not quite working. Was wondering what I need to change to get it working.
If you're wondering why __copy2array is being used: when I tried to use torch.Tensor(self.params), it destroyed the gradients in those parameters. Thanks in advance.
Since your question does not contain your complete code it is hard to judge, but I recommend trying what the error message says: replace .backward() with .backward(retain_graph=True). This means that the computation graph is not freed after the backward pass, so it can be traversed again.
| https://stackoverflow.com/questions/58551186/ |
Pytorch simulation fails to converge on convex loss function when not initialized with 0 | My code works when the weights initialized with 0. When I initialize them according to some seed, they fail to converge. This should be a bug since the loss function is convex.
I filtered two labels from MNIST (0 and 1), and then I trained a logistic regression model using pytorch. Since I use only 200 training samples (and 784 parameters), the model should quickly converge to 100% accuracy on the training set. This is not the case when the weights initialize by some seed.
I had some problem to share my code on stackoverflow, so here is a link to the code: https://drive.google.com/file/d/1ELe8TIWrXMiXgsB63B0Ss43GPr719rGc/view?usp=sharing
Your data are not rescaled and normalized. If you look at the images variable in your training loop, its values are between 0 and 255, which is in all likelihood hurting your training process.
There are cleaner ways to subsample the dataset as you want, but without modifying too much of your code, you can use this data loading definition:
import torchvision.transforms as transforms
#Load Dataset
preprocessing = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
train_dataset = dsets.MNIST(root='./data', train=True, transform=preprocessing, download=True)
#Filter samples by label (to get binary classification) and by number of training samples
Binary_filter=torch.add(train_dataset.targets==1, train_dataset.targets==0)
train_dataset.data, train_dataset.targets = train_dataset.data[Binary_filter],train_dataset.targets[Binary_filter]
TrainSet_filter=torch.cat((torch.ones(num_of_training_samples)
,torch.zeros(len(train_dataset.targets)-num_of_training_samples)),0).bool()
train_dataset.data, train_dataset.targets = train_dataset.data[TrainSet_filter], train_dataset.targets[TrainSet_filter]
#Make Dataset Iterable
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
I have ~100% accuracy in about 5-10 epochs.
| https://stackoverflow.com/questions/58558914/ |
Is there another step to importing Pytorch and using it? | I'm trying to test Pytorch but the first step is to get it running and usable on my computer. I have it to an extent but I need it to function in VS code which it does, it's just that it reads as an error despite working.
I've set up pytorch locally on my computer. I can run the test scripts fine in VS Code's integrated terminal.
The issue comes up when I try to do some work normally in VS code.
from __future__ import print_function
import torch
x = torch.rand(5, 3)  # This line shows an error: torch.rand(5,3) is not callable
print(x)
It says that module 'torch' has no 'rand' member. But it still outputs correctly.
Running this in the terminal as a python file or in debug mode results in.
(base) c:\Users\Sean\Desktop\Test\hello>D:\Anaconda\python.exe c:\Users\Sean\.vscode\extensions\ms-python.python-2019.10.44104\pythonFiles\ptvsd_launcher.py --default --client --host localhost --port 63625 c:\Users\Sean\Desktop\Test\hello\something.py
tensor([[0.5449, 0.1669, 0.4740],
[0.3079, 0.0447, 0.9543],
[0.9137, 0.3987, 0.5736],
[0.1788, 0.4932, 0.5584],
[0.1632, 0.6285, 0.4483]])
(base) c:\Users\Sean\Desktop\Test\hello>D:/Anaconda/Scripts/activate
(base) c:\Users\Sean\Desktop\Test\hello>conda activate Anaconda
Could not find conda environment: Anaconda
You can list all discoverable environments with `conda info --envs`.
So it's working, what I want is to not have it coming up as an error. It'll be hard to find the actual bugs later on if it continues and I feel it might cause problems if not addressed. Any help with telling me the root issue and how to go about fixing it would be appreciated.
| Found the answer. The issue was pylint not recognizing Pytorch or Numpty methods. The functions still worked but the error messages make it hard to see actual error messages.
Fixed by adding the following to user settings.
"python.linting.pylintArgs": [
"--errors-only",
"--generated-members=numpy.* ,torch.* ,cv2.* , cv.*"
]
Recorded Error fix:
https://github.com/pytorch/pytorch/issues/701
| https://stackoverflow.com/questions/58558951/ |
Weighted summation of embeddings in pytorch | I have a sequence of 12 words which I represent using a 12x256 matrix (using word embeddings). Let us refer to these as . I wish to take this as input and output a 1x256 vector. However I don't want to use a (12x256) x 256 dense layer. Instead I want to create the output embedding using a weighted summation of the 12 embeddings
where the wi s are scalars (thus there is weight sharing).
How can I create trainable wi s in pytorch? I am new and only familiar with the standard modules like nn.Linear.
| You can implement this via 1D convolution with kernel_size = 1
import torch
batch_size=2
inputs = torch.randn(batch_size, 12, 256)
aggregation_layer = torch.nn.Conv1d(in_channels=12, out_channels=1, kernel_size=1)
weighted_sum = aggregation_layer(inputs)
Such a convolution will have 12 weight parameters (plus one bias by default). Each weight will be equal to one of the w_i in the formula you provided.
In other words, this convolution runs over the dimension of size 256 and sums the 12 embeddings with learnable weights (see the shape note below).
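Note that the output above has shape (batch_size, 1, 256); a squeeze gives the desired 256-dim vector per example, and bias=False keeps exactly the 12 learnable weights:
aggregation_layer = torch.nn.Conv1d(in_channels=12, out_channels=1,
                                    kernel_size=1, bias=False)
weighted_sum = aggregation_layer(inputs).squeeze(1)  # batch_size x 256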
| https://stackoverflow.com/questions/58568400/ |
Pytorch linear/affine layer parameters confusing | I'm on the Pytorch documentation (https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html) and I'm not really understanding why they are making the the affine layer (16 * 6 * 6, 120). I understand that the last outputs from the convolution layer were 16 and the output here is 120, but even with their annotation, I'm not understanding where the 6 * 6 comes from. Can someone please explain?
import torch
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# 1 input image channel, 6 output channels, 3x3 square convolution
# kernel
self.conv1 = nn.Conv2d(1, 6, 3)
self.conv2 = nn.Conv2d(6, 16, 3)
# an affine operation: y = Wx + b
self.fc1 = nn.Linear(16 * 6 * 6, 120) # 6*6 from image dimension
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
# Max pooling over a (2, 2) window
x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
# If the size is a square you can only specify a single number
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
x = x.view(-1, self.num_flat_features(x))
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
def num_flat_features(self, x):
size = x.size()[1:] # all dimensions except the batch dimension
num_features = 1
for s in size:
num_features *= s
return num_features
net = Net()
print(net)
| The 6x6 comes from the height and width of x after it has been passed through your convolutions and maxpools.
Here is a simplified version where you can see how the shape changes at each point. It may help to print out the shapes in their example so you can see exactly how everything changes.
import torch
import torch.nn as nn
import torch.nn.functional as F
conv1 = nn.Conv2d(1, 6, 3)
conv2 = nn.Conv2d(6, 16, 3)
# Making a pretend input similar to theirs.
# We define an input with 1 batch, 1 channel, height 32, width 32
x = torch.ones((1,1,32,32))
# Simulating forward()
x = F.max_pool2d(F.relu(conv1(x)), (2, 2))
print(x.shape) # torch.Size([1, 6, 15, 15]) 1 batch, 6 channels, height 15, width 15
x = F.max_pool2d(F.relu(conv2(x)), 2)
print(x.shape) # torch.Size([1, 16, 6, 6]) 1 batch, 16 channels, height 6, width 6
Next they flatten x and pass it through fc1 which accepts 16*6*6 and produces 120 outputs.
| https://stackoverflow.com/questions/58571341/ |
Recreating char level RNN for generating text | I tied to follow a book on deep learning, where there is an chapter about generating text in the style of an example. They used an char level RNN with two LSTM layers in it to generate text in the style of shakespare. But the code in the book (also online: https://github.com/DOsinga/deep_learning_cookbook/blob/master/05.1%20Generating%20Text%20in%20the%20Style%20of%20an%20Example%20Text.ipynb) is written in keras and I only use pytorch. So i tied to recreate it exactly in pytorch, with same network structure and hyperparameters.
So after recreating it and making it work without errors it trained it and it only learned to write the most common character, a space. Then i tried to overfit it on one realy simple sentence, so I had to decrease the sequence lenght to 8. This also did not work, but when decreasing the hidden size of the LSTMs to only 32 it learned it nearly perfectly.
So then I continued working on the original text and started to play arround with the hidden size, learning rate, optimizer (also tried adam) and trained it even longer. The best I could achieve were some random letters, still with a lot of spaces and somtimes something like "her", but far from readable, with still an quite high loss. I used RMSprop with lr=0.01 and a hidden size of 128 over 20000 epochs. I also tried to initialize the hidden state and cell state to zero.
The problem is, that my results are far worse than those in the book, but I did exactly the same just in pytorch. Can someone please tell me, what I should try or what I have done wrong. Any help is appreciated!
PS: Sorry for my bad english.
Here is my code with the original hyperparameters:
#hyperparameters
batch_size = 256
seq_len = 160
hidden_size = 640
layers = 2
#network structure
class RNN(nn.Module):
def __init__(self):
super().__init__()
self.lstm = nn.LSTM(len(chars),hidden_size,layers)
self.linear = nn.Linear(hidden_size,len(chars))
self.softmax = nn.Softmax(dim=2)
def forward(self,x,h,c):
x,(h,c) = self.lstm(x,(h,c))
x = self.softmax(self.linear(x))
return x,h,c
#create network, optimizer and criterion
rnn = RNN().cuda()
optimizer = torch.optim.RMSprop(rnn.parameters(),lr=0.01)
criterion = nn.CrossEntropyLoss()
#training loop
plt.ion()
losses = []
loss_sum = 0
for epoch in range(10000):
#generate input and target filled with zeros
input = numpy.zeros((seq_len,batch_size,len(chars)))
target = numpy.zeros((seq_len,batch_size))
for batch in range(batch_size):
#choose random starting index in text
start = random.randrange(len(text)-seq_len-1)
#generate sequences for that batch filled with zeros
input_seq = numpy.zeros((seq_len+1,len(chars)))
target_seq = numpy.zeros((seq_len+1))
for i,char in enumerate(text[start:start+seq_len+1]):
#convert character to index
idx = char_to_idx[char]
#set value of index to one (one-hot-encoding)
input_seq[i,idx] = 1
#set value to index (only label)
target_seq[i] = idx
#insert sequences into input and target
input[:,batch,:] = input_seq[:-1]
target[:,batch] = target_seq[1:]
#convert input and target from numpy array to pytorch tensor on gpu
input = torch.from_numpy(input).float().cuda()
target = torch.from_numpy(target).long().cuda()
#initialize hidden state and cell state to zero
h0 = torch.zeros(layers,batch_size,hidden_size).cuda()
c0 = torch.zeros(layers,batch_size,hidden_size).cuda()
#run the network on the input
output,h,c = rnn(input,h0,c0)
#calculate loss and perform gradient descent
optimizer.zero_grad()
loss = criterion(output.view(-1,len(chars)),target.view(-1))
loss.backward()
optimizer.step()
Plot of the loss with original hyperparameters:
Example of target and output after training:
Target: can bring this instrument of honour
again into his native quarter, be magnanimous in the enterprise,
and go on; I will grace the attempt for a worthy e
Output:
Plot of the loss with hidden size of 128 over 20000 epochs (best results):
| I later finally found a way to achive something close to real sentences, maybe it will help someone. Here is an example result:
-I have not seen him and the prince was a signt of the streme of the sumpering of the property of th
In my case the important change was to not initialize the hidden and cell state to zero every batch, but only every epoch. For this to work I had to rewrite the batch generator so that it produces batches that follow on from each other.
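A rough sketch of carrying the state across batches (variable names follow the question's code; consecutive_batches is a hypothetical generator yielding consecutive slices of the text; the detach is needed so backward() doesn't try to traverse previous batches' graphs):
h = torch.zeros(layers, batch_size, hidden_size).cuda()
c = torch.zeros(layers, batch_size, hidden_size).cuda()
for input, target in consecutive_batches:   # batches must continue the text
    h, c = h.detach(), c.detach()           # truncate backprop at the batch boundary
    output, h, c = rnn(input, h, c)
    optimizer.zero_grad()
    loss = criterion(output.view(-1, len(chars)), target.view(-1))
    loss.backward()
    optimizer.step()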
| https://stackoverflow.com/questions/58580553/ |
Creating a Feed Forward NN Model in Pytorch with a dynamic number of hidden layers | Why are these two code segments not equivalent:
Segment 1: Creating a model with 2 layers.
class FNNModule(nn.Module):
def __init__(self, input_dim, output_dim, hidden_dim1, hidden_dim2, non_linear_function):
super().__init__()
self.hidden1 = nn.Linear(input_dim, hidden_dim1)
self.hidden2 = nn.Linear(hidden_dim1, hidden_dim2)
self.non_linear_function = non_linear_function()
self.final_linear = nn.Linear(hidden_dim2, output_dim)
def forward(self, x):
out = self.hidden1(x)
out = self.non_linear_function(out)
out = self.hidden2(x)
out = self.non_linear_function(out)
out = self.final_linear(out)
return out
Segment Two: Creating the same model but changing code where hidden_layers is a variable:
class FNNModuleVar(nn.Module):
def __init__(self, input_dim, output_dim, hidden_dim_array = [], non_linear_function_array=[]):
super().__init__()
self.linear_functions = []
self.non_linear_functions = [x() for x in non_linear_function_array]
self.hidden_layers = len(hidden_dim_array)
for l in range(self.hidden_layers):
self.linear_functions.append(nn.Linear(input_dim, hidden_dim_array[l]))
input_dim = hidden_dim_array[l]
self.final_linear = nn.Linear(input_dim, output_dim)
def forward(self, x):
out = x
for i in range(self.hidden_layers):
out = self.linear_functions[i](out)
out = self.non_linear_functions[i](out)
out = self.final_linear(x)
return out
modelVar = FNNModuleVar(input_dim, output_dim, [100, 50], [nn.Tanh, nn.Tanh])
model = FNNModule(input_dim, output_dim, 100, 50, nn.Tanh)
When I try to iterate through modelVar.parameters() and model.parameters() I see that I have very different models.
What am I doing wrong in modelVar?
Those modules are called as you would expect them to be called; they are just not visible to the parent Module (so their parameters aren't registered). In order to make them visible you can wrap them in an nn.ModuleList like this:
class FNNModuleVar(nn.Module):
def __init__(self, input_dim, output_dim, hidden_dim_array = [], non_linear_function_array=[]):
super().__init__()
self.linear_functions = []
self.non_linear_functions = [x() for x in non_linear_function_array]
self.hidden_layers = len(hidden_dim_array)
for l in range(self.hidden_layers):
self.linear_functions.append(nn.Linear(input_dim, hidden_dim_array[l]))
input_dim = hidden_dim_array[l]
self.linear_functions = nn.ModuleList(self.linear_functions)
self.final_linear = nn.Linear(input_dim, output_dim)
def forward(self, x):
out = x
for i in range(self.hidden_layers):
out = self.linear_functions[i](out)
out = self.non_linear_functions[i](out)
out = self.final_linear(out)
return out
printing the models now would yield:
FNNModule(
(hidden1): Linear(in_features=50, out_features=100, bias=True)
(hidden2): Linear(in_features=100, out_features=50, bias=True)
(non_linear_function): Tanh()
(final_linear): Linear(in_features=50, out_features=100, bias=True)
)
FNNModuleVar(
(linear_functions): ModuleList(
(0): Linear(in_features=50, out_features=100, bias=True)
(1): Linear(in_features=100, out_features=50, bias=True)
)
(final_linear): Linear(in_features=50, out_features=100, bias=True)
)
More details: https://pytorch.org/docs/stable/nn.html#torch.nn.ModuleList
| https://stackoverflow.com/questions/58585892/ |
Runtime error 999 when trying to use cuda with pytorch | I installed Cuda 10.1 and the latest Nvidia Driver for my Geforce 2080 ti. I try to run a basic script to test if pytorch is working and I get the following error:
RuntimeError: cuda runtime error (999) : unknown error at ..\aten\src\THC\THCGeneral.cpp:50
Below is the code im trying to run:
import torch
torch.cuda.current_device()
torch.cuda.is_available()
torch.cuda.get_device_name(0)
| Restarting my computer fixed this for me.
But for a less invasive fix, you can also try this solution (from a tensorflow issue thread):
sudo rmmod nvidia_uvm
sudo rmmod nvidia
sudo modprobe nvidia
sudo modprobe nvidia_uvm
| https://stackoverflow.com/questions/58595291/ |
Can pytorch optimize sequential operations (like a tensorflow graph or JAX's jit)? | Originally, tensorflow and pytorch had a fundamental difference:
tensorflow is based on a computional graph. Building this graph and evaluating it in a session are two separate steps. While it is being used, the graph doesn't change, which allows for optimizations.
torch eagerly evaluates operations on a tensor. This makes the API more convenient (no sessions) but also looses the potential to recognize and optimize operations that always occur in sequence.
Now this difference is becoming less clear. Tensorflow has answered to the popularity of torch with tf eager. There is also the JAX project, which builds on the same underlying framework as tensorflow (XLA). JAX has no concept of a session. But it allows you to compile multiple operations together by simply calling jit.
Since Tensorflow has moved to cover PyTorch functionality, is PyTorch also working on integrating Tensorflow advantages? Is there something like a session or jit functionality in PyTorch (or on its roadmap) ?
The API docs have a jit section, but as far as I can see, that is more about exporting your models.
| As you mentioned, there is a torch.jit and its purpose is also to introduce optimization in the exported graph (e.g. kernel fusion, optimization of constants etc.). IIRC you can find some source code regarding those in their github repo here, though I'm not sure whether those are explicitly mentioned somewhere in the docs (or explicitly enough to be remembered).
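For illustration, a minimal sketch of scripting a function with torch.jit (a toy example of my own, not taken from the repo):
import torch

@torch.jit.script
def fused_op(x):
    # TorchScript can fuse simple pointwise chains like this into fewer kernels
    return torch.relu(x) * 2.0 + 1.0

x = torch.randn(4, 4)
print(fused_op(x))
print(fused_op.graph)  # inspect the (optimized) TorchScript graph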
Since 1.3 there is also quantization introduced (see here for some introduction). In tutorials section, namely here you can see explicit fusion of Conv2d, BatchNorm and ReLU in order to improve performance. Ofc there also exists specific stuff like using int instead of float for weights (quantization), mixed arithmetic (using half float precision whenever possible, see NVidia's Apex) and others.
Last but not least, I don't think that for a well-written model using vectorized operations and exported with torchscript you are going to see really substantial runtime differences because of some generic graph optimization. Still, it differs whether you are going to use GPU, CPU, TPU, which versions of them, whether you are after inference only or training as well, etc. It's pretty hard to pinpoint how fast tensorflow is in comparison to pytorch (besides some well-known issues in both frameworks). All in all it depends, and measurements vary a lot AFAIK.
BTW. When it comes to advantages of each framework their core indeed starts to cover similar things (PyTorch got mobile support lately, see here). Real difference is still different underlying approach and what each framework has to do to circumvent those limitations.
| https://stackoverflow.com/questions/58596343/ |
Is there a method in Pytorch to count the number of unique values in a way that can be back propagated? | Given the following tensor (which is the result of a network [note the grad_fn]):
tensor([121., 241., 125., 1., 108., 238., 125., 121., 13., 117., 121., 229.,
161., 13., 0., 202., 161., 121., 121., 0., 121., 121., 242., 125.],
grad_fn=<MvBackward>)
Which we will define as:
xx = torch.tensor([121., 241., 125., 1., 108., 238., 125., 121., 13., 117., 121., 229.,
161., 13., 0., 202., 161., 121., 121., 0., 121., 121., 242., 125.]).requires_grad_(True)
I would like to define an operation which counts the number of occurrences of each value in such a way that the operation will output the following tensor:
tensor([2, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0,
0, 7, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0,
0, 1, 1])
i.e. there are 2 zeros, 1 one, 2 thirteens, etc. The total number of possible values is set upstream, but in this example it is 243
So far I have tried the following approaches, which successfully produce the desired tensor, but do not do so in a way that allows computing gradients back through the network:
Attempt 1
tt = []
for i in range(243):
tt.append((xx == i).unsqueeze(0))
torch.cat(tt,dim=0).sum(dim=1)
Attempt 2
tvtensor = torch.tensor([i for i in range(243)]).unsqueeze(1).repeat(1,xx.shape[0]).float().requires_grad_(True)
(xx==tvtensor).sum(dim=1)
EDIT: Added Attempt
Attempt 3
-- Didn't really expect this to back prop, but figured I would give it a try anyway
ll = torch.zeros((1,243))
for x in xx:
ll[0,x.long()] += 1
Any help is appreciated
EDIT: As requested the end goal of this is the following:
I am using a technique for calculating structural similarity between two time sequences. One is real and the other is generated. The technique is outlined in this paper (https://link.springer.com/chapter/10.1007/978-3-642-02279-1_33) where a time series is converted to a sequence of code words and the distribution of code words (similar to the way that Bag of Words is used in NLP) is used to represent the time series. Two series are considered similar when the two signal distributions are similar. This is what the counting statistics tensor is for.
What is desired is to be able to construct a loss function which consumes this tensor and measures the distance between the real and generated signal (euclidian norm on the time domain data directly does not work well and this approach claimed better results), so that it can update the generator appropriately.
| I would do it with the unique method (note that this only counts occurrences): if you want the counts, you have to add the parameter return_counts=True. I tested it in version 1.3.1.
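For reference, a minimal sketch of this (the padding to 243 bins follows the question's setup):
import torch

xx = torch.tensor([121., 241., 125., 1., 108., 238., 125., 121., 13.])
values, counts = torch.unique(xx, return_counts=True)
print(values)  # tensor([  1.,  13., 108., 121., 125., 238., 241.])
print(counts)  # tensor([1, 1, 1, 2, 2, 1, 1])

# Expand into the full 243-bin count tensor from the question
hist = torch.zeros(243, dtype=torch.long)
hist[values.long()] = counts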
This is the fast way to count occurrences; however, it is a non-differentiable operation, so for your use case it is not recommended (I have described it anyway as the way to count occurrences). To achieve what you want, I think you should turn the input into a distribution by means of a differentiable function (softmax is the most common) and then use some way to measure the distance between distributions (output and target), like cross-entropy, KL (Kullback-Leibler) divergence, JS divergence or the Wasserstein distance.
| https://stackoverflow.com/questions/58598773/ |
Updating a BERT model through Huggingface transformers | I am attempting to update the pre-trained BERT model using an in house corpus. I have looked at the Huggingface transformer docs and I am a little stuck as you will see below.My goal is to compute simple similarities between sentences using the cosine distance but I need to update the pre-trained model for my specific use case.
If you look at the code below, which is precisely from the Huggingface docs. I am attempting to "retrain" or update the model and I assumed that special_token_1 and special_token_2 represent "new sentences" from my "in house" data or corpus. Is this correct? In summary, I like the already pre-trained BERT model but I would like to update it or retrain it using another in house dataset. Any leads will be appreciated.
import tensorflow as tf
import tensorflow_datasets
from transformers import *
model = BertModel.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
SPECIAL_TOKEN_1="dogs are very cute"
SPECIAL_TOKEN_2="dogs are cute but i like cats better and my brother thinks they are more cute"
tokenizer.add_tokens([SPECIAL_TOKEN_1, SPECIAL_TOKEN_2])
model.resize_token_embeddings(len(tokenizer))
#Train our model
model.train()
model.eval()
| BERT is pre-trained on 2 tasks: masked language modeling (MLM) and next sentence prediction (NSP). The most important of those two is MLM (it turns out that the next sentence prediction task is not really that helpful for the model's language understanding capabilities - RoBERTa for example is only pre-trained on MLM).
If you want to further train the model on your own dataset, you can do so by using BertForMaskedLM in the Transformers repository. This is BERT with a language modeling head on top, which allows you to perform masked language modeling (i.e. predicting masked tokens) on your own dataset. Here's how to use it:
from transformers import BertTokenizer, BertForMaskedLM
import torch
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased', return_dict=True)
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
outputs = model(**inputs, labels=labels)
loss = outputs.loss
logits = outputs.logits
You can update the weights of BertForMaskedLM using loss.backward(), which is the main way of training PyTorch models. If you don't want to do this yourself, the Transformers library also provides a Python script which allows you to perform MLM really quickly on your own dataset. See here (section "RoBERTa/BERT/DistilBERT and masked language modeling"). You just need to provide a training and test file.
You don't need to add any special tokens. Examples of special tokens are [CLS] and [SEP], which are used for sequence classification and question answering tasks (among others). These are added by the tokenizer automatically. How do I know this? Because BertTokenizer inherits from PretrainedTokenizer, and if you take a look at the documentation of its __call__ method here, you can see that the add_special_tokens parameter defaults to True.
| https://stackoverflow.com/questions/58620282/ |
Create a torch::Tensor from C/C++ array without using "from_blob(...)..." | Using the C++ libtorch frontend for Pytorch
I want to create a torch::Tensor from a C++ double[] array, coming from a legacy C/C++ API.
I could not find simple documentation on the subject, neither in the docs nor in the forums.
Something like:
double array[5] = {1, 2, 3, 4, 5}; // or double *array;
auto tharray = torch::Tensor(array, 5, torch::Device(torch::kCUDA));
The only thing I found is to use torch::from_blob but then I would have to clone() and use to(device) if I wanted to use it with CUDA.
double array[] = {1, 2, 3, 4, 5}; // or double *array;
auto options = torch::TensorOptions().dtype(torch::kFloat64);
torch::Tensor tharray = torch::from_blob(array, {5}, options);
Is there any cleaner way of doing so?
| You can read more about tensor creation here: https://pytorch.org/cppdocs/notes/tensor_creation.html
I don't know of any way to create a tensor from an array without using from_blob but you can use TensorOptions to control various things about the tensor including its device.
Based on your example you could create your tensor on the GPU as follows:
double array[] = { 1, 2, 3, 4, 5};
auto options = torch::TensorOptions().dtype(torch::kFloat64).device(torch::kCUDA, 1);
torch::Tensor tharray = torch::from_blob(array, {5}, options);
| https://stackoverflow.com/questions/58631466/ |
I have a pytorch image classifier training, and I want to pause training and save the weights at time of program pause. Can I do this? | I'm in the middle of training a classifier that's been training for a few days now, but my problem is that I didn't code in to save .pt checkpoints throughout training, and so I'll only end up with a weights file when the program is done with all of its epochs. Is there a way to pause training (PAUSE BREAK) and save the model's weights right now?
| Unfortunately, PyTorch does not have a native API for this at the moment.
For the current job, you could use an IDE like PyDev or Pycharm to attach a debugger to the running process and set a break point somewhere in your code and extract the weights and biases.
For future jobs, you could always create checkpoints inside the epochs loop and save the learned model there. This link will help.
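For reference, a sketch of what such a checkpoint inside the epoch loop could look like (model, optimizer and num_epochs are placeholders for your own objects):
import torch

for epoch in range(num_epochs):
    # ... forward / backward / optimizer.step() ...
    if epoch % 5 == 0:  # save every 5 epochs
        torch.save({
            'epoch': epoch,
            'model_state_dict': model.state_dict(),
            'optimizer_state_dict': optimizer.state_dict(),
        }, 'checkpoint_epoch_{}.pt'.format(epoch))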
| https://stackoverflow.com/questions/58665534/ |
In torch.distributed, how to average gradients on different GPUs correctly? | In torch.distributed, how to average gradients on different GPUs correctly?
Modified from https://github.com/seba-1511/dist_tuto.pth/blob/gh-pages/train_dist.py, the codes below can successfully make use of both GPUs (can be checked with nvidia-smi).
But one thing difficult to understand is whether the 'average_gradients' below is indeed the correct way of averaging gradients across the two models on the two GPUs. Like the code below, the two 'model = Net()' run with two processes are two models on two different GPUs, and the line 'average_gradients(model)' just 'averages' gradients of the model on one GPU, not of the two models on the two GPUs.
The question is: is the code below indeed a correct way of averaging gradients across the two GPUs? If so, how should one read and understand the code? If not, what is the correct way of averaging gradients for the two models below?
import os
import torch
import torch.distributed as dist
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from math import ceil
from random import Random
from torch.multiprocessing import Process
from torchvision import datasets, transforms
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"
class Partition(object):
""" Dataset-like object, but only access a subset of it. """
def __init__(self, data, index):
self.data = data
self.index = index
def __len__(self):
return len(self.index)
def __getitem__(self, index):
data_idx = self.index[index]
return self.data[data_idx]
class DataPartitioner(object):
""" Partitions a dataset into different chuncks. """
def __init__(self, data, sizes=[0.7, 0.2, 0.1], seed=1234):
self.data = data
self.partitions = []
rng = Random()
rng.seed(seed)
data_len = len(data)
indexes = [x for x in range(0, data_len)]
rng.shuffle(indexes)
for frac in sizes:
part_len = int(frac * data_len)
self.partitions.append(indexes[0:part_len])
indexes = indexes[part_len:]
def use(self, partition):
return Partition(self.data, self.partitions[partition])
class Net(nn.Module):
""" Network architecture. """
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(320, 50)
self.fc2 = nn.Linear(50, 10)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 2))
x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
x = x.view(-1, 320)
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = self.fc2(x)
return F.log_softmax(x)
def partition_dataset():
""" Partitioning MNIST """
dataset = datasets.MNIST(
'./data',
train=True,
download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307, ), (0.3081, ))
]))
size = dist.get_world_size()
bsz = int(256 / float(size))
partition_sizes = [1.0 / size for _ in range(size)]
partition = DataPartitioner(dataset, partition_sizes)
partition = partition.use(dist.get_rank())
train_set = torch.utils.data.DataLoader(
partition, batch_size=bsz, shuffle=True)
return train_set, bsz
def average_gradients(model):
""" Gradient averaging. """
size = float(dist.get_world_size())
for param in model.parameters():
dist.all_reduce(param.grad.data, op=dist.reduce_op.SUM)
param.grad.data /= size
def run(rank, size):
""" Distributed Synchronous SGD Example """
# print("107 size = ", size)
# print("dist.get_world_size() = ", dist.get_world_size()) ## 2
torch.manual_seed(1234)
train_set, bsz = partition_dataset()
device = torch.device("cuda:{}".format(rank))
model = Net()
model = model.to(device)
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)
num_batches = ceil(len(train_set.dataset) / float(bsz))
for epoch in range(10):
epoch_loss = 0.0
for data, target in train_set:
# data, target = Variable(data), Variable(target)
# data, target = Variable(data.cuda(rank)), Variable(target.cuda(rank))
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, target)
epoch_loss += loss.item()
loss.backward()
average_gradients(model)
optimizer.step()
print('Rank ',
dist.get_rank(), ', epoch ', epoch, ': ',
epoch_loss / num_batches)
# if epoch == 4:
# from utils import module_utils
# module_utils.save_model()
def init_processes(rank, size, fn, backend='gloo'):
""" Initialize the distributed environment. """
os.environ['MASTER_ADDR'] = '127.0.0.1'
os.environ['MASTER_PORT'] = '29500'
dist.init_process_group(backend, rank=rank, world_size=size)
fn(rank, size)
if __name__ == "__main__":
size = 2
processes = []
for rank in range(size):
p = Process(target=init_processes, args=(rank, size, run))
p.start()
processes.append(p)
for p in processes:
p.join()
| My solution is to use DistributedDataParallel instead of DataParallel like below.
The code
for param in self.model.parameters():
torch.distributed.all_reduce(param.grad.data)
can work successfully.
class DDPOptimizer:
def __init__(self, model, torch_optim=None, learning_rate=None):
"""
:param parameters:
:param torch_optim: like torch.optim.Adam(parameters, lr=learning_rate, eps=1e-9)
or optim.SGD(model.parameters(), lr=0.01, momentum=0.5)
:param is_ddp:
"""
if torch_optim is None:
torch_optim = torch.optim.Adam(model.parameters(), lr=3e-4, eps=1e-9)
if learning_rate is not None:
torch_optim.defaults["lr"] = learning_rate
self.model = model
self.optimizer = torch_optim
def optimize(self, loss):
self.optimizer.zero_grad()
loss.backward()
for param in self.model.parameters():
torch.distributed.all_reduce(param.grad.data)
self.optimizer.step()
pass
def run():
""" Distributed Synchronous SGD Example """
module_utils.initialize_torch_distributed()
start = time.time()
train_set, bsz = partition_dataset()
model = Net()
local_rank = torch.distributed.get_rank()
device = torch.device("cuda", local_rank)
model = model.to(device)
sgd = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)
optimizer = DDPOptimizer(model, torch_optim=sgd)
# optimizer = NoamOptimizerDistributed(100, 1, 10, model)
num_batches = math.ceil(len(train_set.dataset) / float(bsz))
epoch, end_epoch = 1, 10
while epoch <= end_epoch:
epoch_loss = 0.0
for data, target in train_set:
data, target = data.to(device), target.to(device)
output = model(data)
loss = F.nll_loss(output, target)
epoch_loss += loss.item()
optimizer.optimize(loss)
print('Rank ', dist.get_rank(), ', epoch ', epoch, ': ', epoch_loss / num_batches)
# if epoch % 6 == 0:
# if local_rank == 0:
# module_utils.save_model(model, "a.pt")
epoch += 1
print("Time take to train: ", time.time() - start)
| https://stackoverflow.com/questions/58671916/ |
Usage of data.to(device) with cuda GPUs | I'm trying to implement a neural network to run across 8 GPUs and i just want clarification on what exactly these commands do
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
data.to(device)
Will this automatically spread the training across the 8 GPUs ?
Thanks!
| No. The code snippet will move the model and data to GPU if CUDA is available, otherwise, it will put them in CPU.
torch.device('cuda') refers to the current cuda device
torch.device('cuda:0') refer to the cuda device with index=0
To use all the 8 GPUs, you can do something like:
if torch.cuda.device_count() > 1:
model = torch.nn.DataParallel(model)
Note:
torch.cuda.device_count() returns the number of GPUs available.
You do not need to call: data = torch.nn.DataParallel(data)
Why? Because torch.nn.DataParallel
is a container that parallelizes the application of the given module by splitting the input across the specified devices by chunking in the batch dimension (other objects will be copied once per device). In the forward pass, the module is replicated on each device, and each replica handles a portion of the input. During the backward pass, gradients from each replica are summed into the original module.
The batch size should be larger than the number of GPUs used.
| https://stackoverflow.com/questions/58675157/ |
How to resize a PyTorch tensor? | I have a PyTorch tensor of size (5, 1, 44, 44) (batch, channel, height, width), and I want to 'resize' it to (5, 1, 224, 224)
How can I do that? What functions should I use?
| It seems like you are looking for interpolate (a function in nn.functional):
import torch.nn.functional as nnf
x = torch.rand(5, 1, 44, 44)
out = nnf.interpolate(x, size=(224, 224), mode='bicubic', align_corners=False)
If you really care about the accuracy of the interpolation, you should have a look at ResizeRight: a pytorch/numpy package that accurately deals with all sorts of "edge cases" when resizing images. This can have an effect when directly merging features of different scales: inaccurate interpolation may result in misalignments.
| https://stackoverflow.com/questions/58676688/ |
How to optimize a simulation metric with deep learning without target values? | I am trying to use an RNN model that outputs bus routes and its input is the demand matrix. The bus routes are then used in a simulation which spits out a metric of how the routes performed. The question is, since there is no target value of bus routes, how do I back propagate the simulation result?
To explain the question with simple python code:
"""
The model is an RNN that takes 400,24,24 matrix as input
dimension 0 represents time, dimension 1 represents departure bus stop and dimension 2 represents the arrival bus stop. Each value is a count of the number of passengers who departed at a bus stop with an arrival bus stop in mind in a specific time
output is 64,24 matrix which will be reshaped to 8,8,24
dimension 0 is the sequence index, dimension 1 is the index of bus (there are 8 buses), dimension 2 is the softmaxed classifier dimension of 24 different bus stops. From the output, 8 bus stops are picked per bus with a sequence
These sequences are then used for path generations of buses and they are evaluated from a simulation
"""
model.train()
optimizer.zero_grad()
out = model(demand)#out is 64,24 demand is 400,24,24
demand, performance = simulation(out)#assume performance as float
#here the out has grad_fn but the performance does not
loss = SOME_NUMBER - performance
loss = torch.FloatTensor(loss)
#here I need to back propagate and it is the confusing part
#simply doing loss.backward() does nothing because no grad_fn
#out.backward() requires 64,24 gradients computed somehow from 1 #metric, causes complete divergence within few steps
optimizer.step()
| How does the model output represent the bus routes? Maybe you could try a reinforcement learning approach. Take a look at Deep-Q Learning. It basically takes an input vector (the state of the system) and outputs an action (usually represented by an index in your output layer), then it computes the reward of that action and uses it to train the model (without the need for target values).
Here are some resources that might help you get started:
https://towardsdatascience.com/double-deep-q-networks-905dd8325412
https://arxiv.org/pdf/1802.09477.pdf
https://arxiv.org/pdf/1509.06461.pdf
Hope this was useful.
UPDATE
There is a second option: you could define a custom loss function. Generally these functions only take two arguments, the predicted_y and the target_y. In your case, there is no target_y, so you could pass a dummy target_y and not use it inside the function (I assume that you could call your simulation process inside that function, and return the metric as the "loss"). Here are examples in PyTorch and Keras.
Keras: Make a custom loss function in keras
PyTorch:PyTorch custom loss function
| https://stackoverflow.com/questions/58688255/ |
What is Adaptive average pooling and How does it work? | I recently came across a method in Pytorch when I try to implement AlexNet.
I don't understand how it works. Please explain the idea behind it with some examples. And how it is different from Maxpooling or Average poling in terms of Neural Network functionality
nn.AdaptiveAvgPool2d((6, 6))
| In average-pooling or max-pooling, you essentially set the stride and kernel size yourself, choosing them as hyper-parameters. You will have to re-configure them if you happen to change your input size.
In adaptive pooling, on the other hand, we specify the output size instead, and the stride and kernel size are automatically selected to adapt to the need. The following equations are used to calculate the values in the source code.
Stride = (input_size//output_size)
Kernel size = input_size - (output_size-1)*stride
Padding = 0
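A small sketch to verify this (my own example; for a 9x9 input and a 3x3 output the formulas give stride 3 and kernel size 3):
import torch
import torch.nn as nn

x = torch.randn(1, 1, 9, 9)

adaptive = nn.AdaptiveAvgPool2d((3, 3))        # only the output size is given
plain = nn.AvgPool2d(kernel_size=3, stride=3)  # parameters derived by hand

print(torch.allclose(adaptive(x), plain(x)))   # True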
| https://stackoverflow.com/questions/58692476/ |
Pinning memory is actually slower in PyTorch? | I'm wondering why would pinning memory in PyTorch make things even slower. By reading the code of torch.utils.data.dataloader, I found the pin_memory=True option of DataLoader simply calls .pin_memory() on each batch before returning them. The returned tensor is still on CPU, and I have to call .cuda(non_blocking=True) manually after this. Therefore, the whole process would be
for x in some_iter:
yield x.pin_memory().cuda(non_blocking=True)
I compared the performance of this with
for x in some_iter:
yield x.cuda()
Here is the actual code
a = torch.rand(1024, 655360)
%%time
for i in a:
i.pin_memory().cuda(non_blocking=True)
# CPU times: user 1.35 s, sys: 55.8 ms, total: 1.41 s
# Wall time: 396 ms
%%time
for i in a:
i.pin_memory().cuda()
# CPU times: user 1.6 s, sys: 12.2 ms, total: 1.62 s
# Wall time: 404 ms
%%time
for i in a:
i.cuda(non_blocking=True)
# CPU times: user 855 ms, sys: 3.87 ms, total: 859 ms
# Wall time: 274 ms
%%time
for i in a:
i.cuda()
# CPU times: user 314 ms, sys: 12 µs, total: 314 ms
# Wall time: 313 ms
As a result, not pinning memory both uses less CPU time, and is faster in terms of actual time. Shouldn't pinning memory make data transfer asynchronous and therefore be faster? If that's not the case, why would we do pin memory?
PS. I thought about the possibility of pinning a whole TensorDataset in advance (rather than pinning batches each time). But this cannot pin a tensor that is bigger than GPU memory:
a = np.memmap('../dat/R/train.3,31,31B', '3,31,31B', 'r')
a.nbytes // 2**30
## 68
torch.from_numpy(a).pin_memory()
## ---------------------------------------------------------------------------
## RuntimeError Traceback (most recent call last)
## <ipython-input-36-d6f2d74da8e7> in <module>
## ----> 1 torch.from_numpy(a).pin_memory()
##
## RuntimeError: cuda runtime error (2) : out of memory at /tmp/pip-req-build-58y_cjjl/aten/src/THC/THCCachingHostAllocator.cpp:296
And if I do want to pin a small tensor, why don't I directly move the whole tensor into GPU memory in advance?
TL;DR
Your code is slower because you allocate a new block of pinned memory each time you call the generator. Allocating new memory requires synchronization each time, making it much slower than non-pinned memory. Likely, you are measuring this overhead.
Your code example in the edit fails in the THCCachingHostAllocator.cpp. It's not the GPU running out of memory, but your host refusing to let you allocate 68GB of pinned physical memory.
Pinning memory is actually slower in PyTorch?
Creating or releasing pinned memory (cudaHostAlloc()/cudaFreeHost() via the CUDA Runtime) is much slower than malloc/free because it involves synchronization between the devices (GPU and host). Likely, what you are measuring is - to a large extent - this overhead, as you are incrementally allocating pinned memory.
Shouldn't pinning memory make data transfer asynchronous and therefore be faster? If that's not the case, why would we do pin memory?
It can, but not if you halt/join to synchronize before each transfer in order to allocate the memory.
What pinning memory ultimately does is that it prevents the memory block from being swapped out by the OS; it is guaranteed to remain in RAM. This guarantee enables the GPU's DMA to operate on that block without going through the CPU (which has to check, among other things, if the data needs to be swapped back in). Thus, the CPU is free to do other stuff in the meantime.
It is not a perfect analogy, but you could think about pinned memory as shared memory between the GPU and the host. Both parties can operate on it without informing the other party; a bit like multiple threads in a process. This can be much faster if you implement non-blocking code. However, it can also be much slower if parties end up joining all the time.
Contrast this to the non-pinned approach, where the CPU loads the data from RAM (swapped in if necessary) and then sends it to the GPU. Not only is it slower (needs to go through the northbridge twice), but it also keeps the thread (and hence one CPU core) busy. Python also has the infamous GIL, so it could be that your entire application is waiting for that synchronous I/O.
If you want to use pinned memory to shuffle batches of data into the GPU, then one way to do it is to use pinned memory as a (circular) buffer. The CPU can load the data from disk, apply preprocessing, and place the batch into the buffer. The GPU can then fetch batches from the buffer in its own time and do the inference. If the implementation is done well, then the GPU will not idle more than necessary, and there is no more need for synchronization between the host and the GPU.
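To sketch that idea on the example from this question (the reuse pattern is my own; real code would double-buffer or synchronize so a block is not overwritten while a transfer from it is still in flight):
import torch

a = torch.rand(1024, 655360)
buf = torch.empty(655360, pin_memory=True)  # one pinned staging buffer, allocated once

for i in a:
    buf.copy_(i)                             # pageable CPU memory -> pinned buffer
    gpu = buf.to('cuda', non_blocking=True)  # asynchronous host-to-device copy
    # ... launch GPU work on `gpu` here ...
torch.cuda.synchronize()                     # wait for outstanding transfers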
And if I do want to pin a small tensor, why don't I directly move the whole tensor into GPU memory in advance?
If you don't need to access the tensor from the CPU and it fits onto the GPU, then there is indeed no need to put it into pinned memory.
In your example, you are opening a memory-mapped numpy array memmap, and then ask to transfer it to pinned memory. A memory-mapped file works very similarly to paged memory in that data that doesn't fit into RAM anymore is flushed to disk, and loaded back in when it is accessed again.
This "swapping" cannot happen for pinned memory, because we need to guarantee that the entire block resides in RAM at all times. Hence, we first need to load the entire array into host memory (a contiguous block of 68 GB), likely creating a copy of the array in the process so as not to destroy the memmap object, and then we need to pin that memory block, telling the host to forfeit 68GB of managed physical memory to our application. Either of these two steps can be denied by the OS and raise an OutOfMemory error.
This is pretty much what you are seeing, as you fail in the THCCachingHostAllocator.cpp.
| https://stackoverflow.com/questions/58741872/ |
pytorch math with exponents less than 1 return nan 's | torch.pow() returns nan when it's given the exponent argument between ranges -1 and 1
a = torch.randn(1,3)
a
>> tensor([[-1.7871, -0.5375, -0.1164]])
torch.pow(a, 2) #or a**2
>> tensor([[3.1938, 0.2889, 0.0136]])
torch.pow(a,0.5) #or a**0.5
>> tensor([[nan, nan, nan]])
expect result:
tensor([[-1.3368, -0.7331, -0.3412]])
Edit: turns out that ** works the same way as well. a**2 does the same thing with the tensor as torch.pow(a,2). a**0.5 returns nan's like torch.pow does.
| The issue is that the square root of a negative number is a non-real complex number.
If you want to keep the sign and take the square root of its absolute value, the following code does the trick
torch.sign(a) * torch.pow(torch.abs(a), 0.5)
| https://stackoverflow.com/questions/58786745/ |
Face alignment in pytorch | I am trying to do face alignment on 300W dataset. I am using ResNet50 and L1 loss for training. My code looks like this.
batch_size = 10
image_size = 128
net = torchvision.models.resnet50(pretrained=True)
num_ftrs = net.fc.in_features
net.fc = nn.Linear(num_ftrs, 136) # 136 because 68 points with 2 dim. so 136= 68*2
def train():
device = torch.device("cuda:0" if torch.cuda.is_available() else
"cpu")
optimiser = optim.Adam(net.parameters(), lr=0.001,
weight_decay=0.0005)
criterion = L1Loss(reduction='sum')
for epoch in range(int(0), 200000):
for batch, data in enumerate(trainloader, 0):
inputs, labels = data
inputs, labels = inputs.to(device), labels.to(device)
optimiser.zero_grad()
outputs = net(inputs).reshape(-1, 68, 2)
loss = criterion(outputs, labels)
loss.backward()
optimiser.step()
running_loss += loss.item()
sys.stdout.write('\rTrain Epoch: {} Batch {} avg_Loss_per_batch: {:.2f}'.format(epoch, batch, running_loss/(batch+1)))
sys.stdout.flush()
The trainloader is with images and points. The ground-truths are shaped as (batch, 68, 2). We have 68 points on the face on 2 dimensional space.
The paper suggests that the ResNet50 should get an error of 10 (metric: pixel) for a 256*256 image with L1 loss. I am getting an error around 500-800 on the validation set even after 5000 epochs.
I am training images with 256*256 resolution with ground truth of 68 points such as ((x1,y1),(x2,y2)....(x68,y68)) and I have trained over 5000 epochs with many learning rates. My validation code looks like this,
def validater(load_weights=False):
device = torch.device("cuda:0" if torch.cuda.is_available() else
"cpu")
net.eval()
net.to(device)
with torch.no_grad():
for batch, data in enumerate(testloader, 0):
inputs, labels = data
inputs, labels = inputs.to(device), labels.to(device)
outputs = net(inputs).reshape(-1, 68, 2)
loss = criterion(outputs, labels)
loss2 = np.linalg.norm(labels.to('cpu') - outputs.to('cpu'))
sys.stdout.write('\rTest Epoch: {} Batch {} total_L1_Loss: {:.2f} avg_L1_Loss_per_img: {:.2f} total_norm_loss: {:.2f}'.format(0, batch, avg_loss, avg_loss/batch/batch_size, avg_loss2))
sys.stdout.flush()
print()
What is wrong with my code ?
PS: I normalise the imgs with the following code
img = cv2.normalize(img, None, alpha=0, beta=1, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_32F)
After 4000 epochs I get outputs like this, where yellow dots are ground truth and blue ones are predicted
| From your output image you can tell the error is smaller on the top left landmarks and grows larger towards the lower right part of the face.
The landmarks you are trying to predict are (x, y) coordinates relative to the top left corner of the image. As you can see, your model's prediction error grows proportionally to the norm of each coordinate. This is not an uncommon phenomenon: when you model predicts a landmark close to the origin (e.g. left eye) it makes "small" predictions as the norm of this landmark is also small, the learned weights are small and therefore the errors are also small. On the other side, when predicting landmarks far from the origin (right part of mouth) the model need to make "large" predictions as the norm of these landmarks is large. Consequently, the trained weights are larger resulting with cruder errors.
To mitigate this, you should pre-process your data (train and test) and normalize the coordinates of the landmarks to be 1:
1. relative to the center of the image
2. relative to image size
That is, instead of (x, y) coordinates in the range of [0, width]x[0, height] you should have the landmarks in the range [-1, 1]x[-1, 1].
After prediction, to get the original coordinates you simply need to shift them and scale them by image size.
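A sketch of this normalization, assuming 256x256 images and label tensors shaped (batch, 68, 2) as in your code:
width, height = 256, 256

# before training: pixel coordinates -> [-1, 1] x [-1, 1]
norm_labels = labels.clone().float()
norm_labels[..., 0] = (labels[..., 0] - width / 2) / (width / 2)
norm_labels[..., 1] = (labels[..., 1] - height / 2) / (height / 2)

# after prediction: [-1, 1] x [-1, 1] -> pixel coordinates
pred_pixels = outputs.clone()
pred_pixels[..., 0] = outputs[..., 0] * (width / 2) + width / 2
pred_pixels[..., 1] = outputs[..., 1] * (height / 2) + height / 2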
1 I am assuming here all faces in the training set are roughly the same size and located roughly in the center of the images. If your settings are "in the wild" where faces can be of any size at any place in the image I'm afraid this will not work.
| https://stackoverflow.com/questions/58794019/ |
Pytorch - Batch Normalization simple question | I implemented a model with batch normalization:
class FFNet(torch.nn.Module):
def __init__(self, D_in, H_1, H_2, D_out):
super(FFNet, self).__init__()
self.linear1 = torch.nn.Linear(D_in, H_1)
self.linear2 = torch.nn.Linear(H_1, H_2)
self.bn2 = torch.nn.BatchNorm1d(H_2)
self.linear4 = torch.nn.Linear(H_2, D_out)
def forward(self, x):
h_relu_1=F.relu(self.linear1(x))
h_relu_2=F.relu(self.bn2(self.linear2(h_relu_1)))
y_pred=self.linear4(h_relu_2)
return y_pred
Also, I wrote the training loop:
for epoch in range(epoches):
running_loss = 0.0
cnt = 0
for i, data in enumerate(train_data, 0):
local_X, local_y = data
y_pred = model.forward(local_X)
loss = criterion(y_pred, local_y)
optimizer.zero_grad()
#loss = criterion(y_pred, Y_local_output)
loss.backward() # back props
optimizer.step()
running_loss = running_loss + loss.item()
cnt+=1
Validation_loss = 0.0
cnt2 = 0
# Validation
for i, data in enumerate(validation_data, 0):
Val_X, Val_Y = data
y_pred = model.forward(Val_X)
loss=criterion(y_pred, Val_Y)
Validation_loss = Validation_loss + loss.item()
cnt2+=1
I have two questions:
1. Is there no need to use model.train() in this code?
2. How to evaluate this model using eval? I have one data sample whose size is (1xD_in), and batch size is more than 1. When I use the below code, there is an error:
test_single = torch.tensor([aa, ab, ac, ad, ae, af, ag])
test_single = test_single.unsqueeze(0)
model.eval()
[bb,cc] = model.forward(test_single)
The error is 'not enough values to unpack (expected 2, got 1)'
| If you have batch normalization, then you do need to use model.train() and model.eval() while training and evaluating respectively.
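For example, using the names from your code:
model.train()  # batch norm uses per-batch statistics, dropout (if any) is active
for i, data in enumerate(train_data, 0):
    local_X, local_y = data
    y_pred = model(local_X)
    loss = criterion(y_pred, local_y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

model.eval()   # batch norm switches to its running statistics
with torch.no_grad():
    y_pred = model(test_single)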
The 2nd part (the unsqueezing code) is not wrong. However, there is only one output of your model (see the return statement of your model's forward function), which causes the error i.e. you try to unpack 2 values whereas there is only one. So, you can't do
[bb,cc] = model.forward(test_single)
You have to do
out = model.forward(test_single)
I tried this and it works.
| https://stackoverflow.com/questions/58797309/ |
How to parallelize model prediction from a pytorch model? | I have a PyTorch instance segmentation model that predicts the masks of images one by one. Is there any way to parallelize this task so that it can be completed faster. I have tried to use multiprocessing pool.apply_async() to call the method that performs prediction passing the required arguments, but it throws me segmentation fault. Any help is much appreciated. Thanks.
| In general, you should be able to use torch.stack to stack multiple images together into a batch and then feed that to your model. I can't say for certain without seeing your model, though. (ie. if your model was built to explicitly handle one image at a time, this won't work)
model = ... # Load your model
input1 = torch.randn(3, 32, 32) # An RGB input image
input2 = torch.randn(3, 32, 32) # An RGB input image
model_input = torch.stack((input1, input2))
print(model_input.shape) # (2, 3, 32, 32)
result = model(model_input)
print(result.shape) # (2, 1, 32, 32) 2 output masks, one for each image
If you've trained the model yourself this will look familiar as it's essentially how we feed batches of images to the network during training.
You can stack more than two together, it will typically be limited by the amount of GPU memory you have available to you.
| https://stackoverflow.com/questions/58823918/ |
How can you specify docker flags when a step is ran on Google Cloud Build? | I need to be able to specify --ipc=host to the docker command that runs a step. How do I do that?
A little more context.... When running the tests in one of my Google Cloud Build steps I am running into this error:
ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
This error is common when using pytorch, like I am, in docker but it is easily addressed with the flag --ipc=host. I don't know where to put this flag in my step's settings in the cloudbuild.yml.
Right now my step looks like this:
- id: 'run-tests'
name: 'gcr.io/project/app:test-$BRANCH_NAME'
entrypoint: 'bash'
args:
- '-c'
- './bin/run-tests.sh'
I have tried adding the "ipc=host" to the args but those go to bash and not the docker command running the image.
| I am going to try this option out myself, but I think you can build a container with ipc=host set. There is no option to do so during a docker build, but you can run a container with ipc=host, then create a new image using docker commit.
The basic premise is:
docker build
docker run --ipc=host
docker commit
docker push
More details and discussion here: https://github.com/moby/moby/issues/24743
5/4 Edit
While this works, the host may have the default shm setting, so you won't get a larger size. Instead, you should probably use docker run --shm-size=256m or similar to give your docker container more shared memory. This requires you to use the docker cloud builder image and configure your own docker run command.
| https://stackoverflow.com/questions/58830600/ |
How does loss.backward() relate to the appropriate parameters of the model? | I'm new in PyTorch and I'm having trouble understanding how loss knows to compute the gradients through loss.backward()?
Sure, I understand that the parameters need to have requires_grad=True and I understand that it sets x.grad to the appropriate gradient only for the optimizer later to perform the gradient update.
The optimizer is linked to the model parameters when it's instantiated, but the loss is never linked to the model.
I've been going through this thread, but I don't think anyone answered it clearly and the person that started the thread seems to have the same issue as I do.
What happens when I have two different networks with two different loss functions and two different optimizers. I will easily link the optimizers to each of the networks, but how will the loss functions know how to compute the gradients for each of their appropriate network if I never link them together?
| Loss is itself a tensor which is derived from the parameters of the network. A graph is implicitly constructed where each new tensor, including loss, points back to the tensors which were involved in its construction. When you apply loss.backward() pytorch follows the graph backwards and populates the .grad member of each tensor with the partial derivative of loss with respect to that tensor, using the chain rule (i.e. backpropagation).
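A minimal sketch of the two-network case you describe (toy layers, losses and data of my own choosing):
import torch
import torch.nn as nn
import torch.nn.functional as F

net1, net2 = nn.Linear(10, 1), nn.Linear(10, 1)
opt1 = torch.optim.SGD(net1.parameters(), lr=0.1)
opt2 = torch.optim.SGD(net2.parameters(), lr=0.1)

x, y = torch.randn(4, 10), torch.randn(4, 1)

loss1 = F.mse_loss(net1(x), y)  # this tensor's graph points back to net1's parameters
loss2 = F.l1_loss(net2(x), y)   # this tensor's graph points back to net2's parameters

loss1.backward()  # populates .grad only on net1's parameters
loss2.backward()  # populates .grad only on net2's parameters
opt1.step()
opt2.step()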
| https://stackoverflow.com/questions/58844168/ |
Is there a similar function in tensorflow like load_state_dict() in Pytorch? | Like it has been described, I am wondering is there a similar function in tensorflow for load_state_dict() like the one does in Pytorch. To demonstrate a scenario, please refer to the code following:
# Suppose we have two correctly initialized neural networks: net2 and net1
# Using Pytorch
net2.load_state_dict(net1.state_dict())
Does anyone have any idea?
| The code below may help in achieving the same in tensorflow:
Save the model
w1 = tf.Variable(tf.truncated_normal(shape=[10]), name='w1')
w2 = tf.Variable(tf.truncated_normal(shape=[20]), name='w2')
tf.add_to_collection('vars', w1)
tf.add_to_collection('vars', w2)
saver = tf.train.Saver()
sess = tf.Session()
sess.run(tf.global_variables_initializer())
saver.save(sess, 'my-model')
# `save` method will call `export_meta_graph` implicitly.
# you will get saved graph files:my-model.meta
To Restore the model
sess = tf.Session()
new_saver = tf.train.import_meta_graph('my-model.meta')
new_saver.restore(sess, tf.train.latest_checkpoint('./'))
all_vars = tf.get_collection('vars')
for v in all_vars:
v_ = sess.run(v)
print(v_)
| https://stackoverflow.com/questions/58848100/ |
Calculate binary entropy loss using a function in pytorch | I have a problem about calculating binary cross entropy. The way I know that works out in pytorch is:
import torch
import torch.nn as nn
import torch.nn.functional as F
def lossfunc():
return F.binary_cross_entropy
criterion = lossfunc()
input = torch.randn((3, 2), requires_grad=True)
target = torch.rand((3, 2), requires_grad=False)
loss = criterion(torch.sigmoid(input),target)
But how to complete the lossfunc() in such way, because I don't know how to pass the arguments to the function:
#the function that adds sigmoid to the input and calculates the binary cross entropy loss
def lossfunc():
return
criterion = lossfunc()
input = torch.randn((3, 2), requires_grad=True)
target = torch.rand((3, 2), requires_grad=False)
loss = criterion(input,target)
| I think you're confusing the nn api with the functional F api. In the functional api, the loss function F.binary_cross_entropy can be used as a function directly.
In the nn api, you need to create an object of the loss class, such as criterion = nn.BCELoss()
Thus, you can simply do:
def lossFunc(input, target):
return F.binary_cross_entropy(torch.sigmoid(input),target)
input = torch.randn((3, 2), requires_grad=True)
target = torch.rand((3, 2), requires_grad=False)
loss = lossFunc(input,target)
Also, PyTorch provides nn.BCEWithLogitsLoss() and F.binary_cross_entropy_with_logits(), which combine both sigmoid and binary cross-entropy.
| https://stackoverflow.com/questions/58848111/ |
Explanation of build_vocab in torch and it's association with pre-trained embeddings | Can some explain me what is build_vocab in torch, it is not clear from online documentation? Why do we need it and it's relation to pre-trained embeddings?
| I think you are confusing pytorch and torchtext here. In torchtext (a package that provides processing utilities and popular datasets for natural language processing) you can run build_vocab on a Field to iterate over your dataset in order to build up the vocabulary.
Also take a look here:
https://torchtext.readthedocs.io/en/latest/data.html#torchtext.data.Field.build_vocab
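For concreteness, a sketch using that (legacy) torchtext API; the file name, field names and the choice of GloVe vectors are placeholders:
from torchtext.data import Field, TabularDataset

TEXT = Field(tokenize='spacy', lower=True)
LABEL = Field(sequential=False)

train = TabularDataset(path='train.csv', format='csv',
                       fields=[('text', TEXT), ('label', LABEL)])

# Iterate over the training data to build the vocabulary; the optional
# `vectors` argument attaches a pre-trained embedding to each word.
TEXT.build_vocab(train, vectors='glove.6B.100d')
print(len(TEXT.vocab))           # vocabulary size
print(TEXT.vocab.vectors.shape)  # (vocab_size, 100) pre-trained embedding matrix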
| https://stackoverflow.com/questions/58854702/ |
How can I fix this pytorch error on Windows? (ModuleNotFoundError: No module named 'torch') | Edit: You might want to skip to the end of the question first, I've followed some advice in comments / answers and the current error is different from the original (appears to be related to numpy possibly).
This error ModuleNotFoundError: No module named 'torch' shows up in tons of threads, I've been trying solutions all day. I'll go through my troubleshooting steps one by one, using the solutions suggested in threads.
System info:
Windows 10
First thing I did was follow the instructions on Pytorch, installed Anaconda and did this using the correct settings for my machine (Note: I tried Python v3.7 before trying v3.8 in these screenshots, none of the solutions worked with that either):
As you can see, that should be good to go, according to the instructions.
So I go into the python terminal and try to import pytorch, like so:
ModuleNotFoundError: No module named 'torch' Great, so what now? Well I paste the error into Google and begin my 4 hour wild goose chase.
First result, stack overflow answer: No module named "Torch"
Let's try the selected answer, it requires some version-related syntax so lets check my python version:
Alright so as directed by the answer:
Try to install PyTorch using pip:
First create a conda environment using:
conda create -n env_pytorch python=3.6
Ok:
Activate the environment using:
source activate env_pytorch
That doesnt work, but if we activate using the instructions given by the prompt, we can do so:
Now install PyTorch using pip:
pip install torchvision --user ( this will install both torch and torchvision)
Hmmm.. well that went up in flames, so the following...
Now go to python shell and import using the command:
import torch
import torchvision
...doesn't do anything new, same error as before.
Well, to the next thread, on PyTorch GitHub: https://github.com/pytorch/pytorch/issues/4827
They're trying to use Jupyter, so I tried this, is was another long process like the above that went up in flames, and I really dont want to need to use Jupyter anyway, so we'll skip this one.
Another Pytorch GitHub thread: https://github.com/pytorch/pytorch/issues/12004
@edtky Could you please give me the output of the following commands
in CMD?
where conda.exe
where pip.exe
where python.exe
Sure I'll give it a shot:
@edtky Looks like you have two Python environments. Please try
importing torch in Anaconda Prompt.
Oh well, I already did that. No bueno.
Another thread: https://discuss.pytorch.org/t/modulenotfounderror-no-module-named-torch/7309 suggests:
In that case you’ve probably forgotten to activate the environment
where pytorch is installed. It can also be the library missing in your
PYTHONPATH variable.
Well I did activate the environment as shown above, but I don't know anything about a PYTHONPATH variable; seems like the PyTorch setup guide would've mentioned if I needed to manually do that. I have no clue how to do it and you aren't explaining, so let's look for other answers.
Someone made a whole article to give us this little gem of advice: https://medium.com/@valeryyakovlev/anaconda-no-module-named-torch-ead10946de66
Another beginner error I encountered when started to use pytorch in
anaconda environment
import torch ModuleNotFoundError: No module named ‘torch’ the proper way to install pytorch to anaconda is following
conda install -c pytorch pytorch It’s not enough to simply run “conda install pytorch” — the package won’t be found. So first
activate your conda profile with “source activate {your_profile}” and
then run the command conda install -c...
Ok thats new info, let's try that command again now that our env is activated:
Ok that's a lot of green, let's try now...
Well we can't win 'em all, so lets go onto the next thread: https://forums.fast.ai/t/modulenotfounderror-no-module-named-torch-windows-10/12438/2
I had also faced the similar problem , I just installed torch and torchvision using pip and it worked …
Ok! Let's try:
Oh well, another solution up in flames..
I ran into a similar issue with Windows 10. In the end I could only get torch installed with Miniconda.
Alrighty, lets try it!
Alright, cool, moment of truth:
Awesome! You just read through 25 minutes of me reproducing all my attempts to solve this problem, and it doesn't even include the hour I spent down a rabbit hole trying to use Jupyter, which failed just as miserably. I think it's time to post the question to StackOverflow!
Edit 1:
An answer points out that one of my logs was an error python 3.8 isn't compatible with pytorch, good point I'll fix that. After unintalling 3.8 and installing 3.7:
And no luck! Remember I actually mentioned in my first paragraph that while I was trying 3.8 in these screenshots, the first time around I did all of this with 3.7
Edit 2:
I forgot to install after activating the environment in the previous edit. Once I fixed that, there's a new error:
| Pytorch requires 3.5 <= python < 3.8. Setup an environment with:
conda create -n pytorch python=3.7
conda activate pytorch
conda install pytorch
You should also make sure that you launch the installed python interpreter from this environment (YourAnacondaInstallDirectory\envs\pytorch\python.exe) from the activated conda environment! The latter is important because conda will export certain environment variables (have a look at this for a related issue caused by missing environment variables).
| https://stackoverflow.com/questions/58864070/ |
How to set python environment variables in VS Code? | I know how to add arguments for the Python script I want to run. For example, if test.py is my script file and it has one argument like '--batch_size' then I can edit launch.json in VS Code and set "args": ["--batch_size", "32"]
But I don't know how to add environmental arguments for Python itself. For example, Python has -m environmental variable, which runs library module as a script. If I want to run python -m torch.distributed.launch test.py --batch_size 32, what should I edit in VS Code to run the debugger?
UPDATE:
Here is my launch.json
| -m is not an environment variable; it's just a regular argument to the interpreter.
To run python -m torch.distributed.launch test.py --batch_size 32, use "args": ["-m", "torch.distributed.launch", "test.py", "--batch_size", "32"]. Also, you need to run python itself instead of running the script, so that these args are passed to it ("program": "python3").
To set actual environment variables use "env": {"ENV_VAR_NAME1": "value of ENV_VAR_NAME1", "ENVVAR2": "Value for ENVVAR2"}
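Putting both together, a sketch of such a configuration in launch.json (the interpreter path and the environment variable name are placeholders):
{
    "name": "Debug with torch.distributed.launch",
    "type": "python",
    "request": "launch",
    "program": "/usr/bin/python3",
    "args": ["-m", "torch.distributed.launch", "test.py", "--batch_size", "32"],
    "env": {"CUDA_VISIBLE_DEVICES": "0"}
}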
Here you can find more information about launch.json configuration
| https://stackoverflow.com/questions/58871113/ |