instruction | input | output | source
---|---|---|---|
DistributedSampler — Expected a 'cuda' device type for generator when generating indices | While performing distributed training, I have the following code:
training_sampler = DistributedSampler(training_set, num_replicas=2, rank=0)
training_generator = data.DataLoader(training_set, **params, sampler=training_sampler)
for x, y, z in training_generator: # Error occurs here.
...
Overall, I get the following message:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/home/ubuntu/VC/ppg_training_extraction/ppg_training_scripts/train_ASR_trim_scp.py", line 336, in train
for local_batch_src, local_batch_tgt, lengths in dataloaders[phase]:
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 352, in __iter__
return self._get_iterator()
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 294, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 827, in __init__
self._reset(loader, first_iter=True)
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 857, in _reset
self._try_put_index()
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1091, in _try_put_index
index = self._next_index()
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 427, in _next_index
return next(self._sampler_iter) # may raise StopIteration
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/utils/data/sampler.py", line 227, in __iter__
for idx in self.sampler:
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/utils/data/distributed.py", line 97, in __iter__
indices = torch.randperm(len(self.dataset), generator=g).tolist() # type: ignore
RuntimeError: Expected a 'cuda' device type for generator but found 'cpu'
Now at that line, I ran the following instructions in pdb:
(Pdb) g = torch.Generator()
(Pdb) g.manual_seed(0)
<torch._C.Generator object at 0x7ff7f8143110>
(Pdb) indices = torch.randperm(4556, generator=g).tolist()
(Pdb) indices = torch.randperm(455604, generator=g).tolist()
*** RuntimeError: Expected a 'cuda' device type for generator but found 'cpu'
Why am I getting the runtime error when the upper-bound integer is high, but not when it's low enough?
Note: I ran the following in a clean Python session,
>>> import torch
>>> g = torch.Generator()
>>> g.manual_seed(0)
<torch._C.Generator object at 0x7f9d2dfb39f0>
>>> indices = torch.randperm(455604, generator=g).tolist()
and it worked fine. Is it some configuration issue in how I'm handling distributed training across multiple GPUs? Any insights would be appreciated!
| I ran into the same problem when using a DataLoader, and I found that the following helps, without removing torch.set_default_tensor_type('torch.cuda.FloatTensor'):
data.DataLoader(..., generator=torch.Generator(device='cuda'))
since I didn't want to manually add .to('cuda') to tons of tensors in my code.
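As a fuller sketch of that workaround applied to the question's setup (variable names are taken from the question; params is assumed to hold the usual DataLoader kwargs):
import torch
from torch.utils import data
from torch.utils.data.distributed import DistributedSampler

training_sampler = DistributedSampler(training_set, num_replicas=2, rank=0)
training_generator = data.DataLoader(training_set, **params,
    sampler=training_sampler,
    # index generation now uses a CUDA generator, matching the
    # 'torch.cuda.FloatTensor' default tensor type
    generator=torch.Generator(device='cuda'))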
| https://stackoverflow.com/questions/64940953/ |
Installing Python dependencies in Heroku | I wanted to make an AI chatbot hosted on Heroku, but there are some problems with installing the requirements.
The chatbot needs the following packages:
discord
pytorch
spaCy
I already figured out putting discord into the requirements.txt, but I got no clue on how to do the other two dependencies.
The problem with them is that they aren't just installed with
pip install NAME_OF_THE_PACKAGE
Pytorch for example needs:
pip install torch==1.7.0+cpu torchvision==0.8.1+cpu torchaudio==0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
and spaCy needs
pip install -U spacy
pip install -U spacy-lookups-data
python -m spacy download en_core_web_sm
I tried different variations of the commands above, but none worked.
Now I have even reached the build limit on my account.
Please help me if you know how to solve the problem.
| You can use the following to create a requirements.txt file listing all the dependencies installed in your current environment, which Heroku will then install:
pip freeze > requirements.txt
To download the spaCy models, you can add a direct download link to the requirements.txt, e.g.:
https://github.com/explosion/spacy-models/releases/download/en_core_web_trf-3.0.0a0/en_core_web_trf-3.0.0a0.tar.gz
For more info, go through this issue on GitHub:
https://github.com/explosion/spaCy/issues/1129
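As an illustrative sketch, a Heroku requirements.txt for all three dependencies might look like the following (the version pins and the model URL are assumptions; adjust them to the versions you need — the -f line tells pip where to find the CPU wheels):
discord.py==1.5.1
-f https://download.pytorch.org/whl/torch_stable.html
torch==1.7.0+cpu
spacy==2.3.2
spacy-lookups-data==0.3.2
https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.3.1/en_core_web_sm-2.3.1.tar.gz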
| https://stackoverflow.com/questions/64943891/ |
Pytorch - AttributeError: 'tuple' object has no attribute 'dim' | I am trying to use this architecture:
class Net(BaseFeaturesExtractor):
def __init__(self, observation_space: gym.spaces.Box, features_dim: int = 512):
super(Net, self).__init__(observation_space, features_dim)
self.conv1 = nn.Conv2d(1, 64, 3, stride=1, padding=1)
self.conv2 = nn.Conv2d(64, 64, 3, stride=1, padding=1)
self.conv3 = nn.Conv2d(64, 64, 3, stride=1)
self.conv4 = nn.Conv2d(64, 64, 3, stride=1)
self.bn1 = nn.BatchNorm2d(64)
self.bn2 = nn.BatchNorm2d(64)
self.bn3 = nn.BatchNorm2d(64)
self.bn4 = nn.BatchNorm2d(64)
self.fc1 = nn.Linear(64 * (7 - 4) * (6 - 4), 128)
self.fc_bn1 = nn.BatchNorm1d(128)
self.fc2 = nn.Linear(128, 64)
self.fc_bn2 = nn.BatchNorm1d(64)
self.fc3 = nn.Linear(64, 7)
self.fc4 = nn.Linear(64, 1)
def forward(self, s):
# s: batch_size x board_x x board_y
s = s.view(-1, 1, 7, 6) # batch_size x 1 x board_x x board_y
s = F.relu(self.bn1(self.conv1(s))) # batch_size x num_channels x board_x x board_y
s = F.relu(self.bn2(self.conv2(s))) # batch_size x num_channels x board_x x board_y
s = F.relu(self.bn3(self.conv3(s))) # batch_size x num_channels x (board_x-2) x (board_y-2)
s = F.relu(self.bn4(self.conv4(s))) # batch_size x num_channels x (board_x-4) x (board_y-4)
s = s.view(-1,64 * (7 - 4) * (6 - 4))
s = F.dropout(
F.relu(self.fc_bn1(self.fc1(s))),
p=0.3,
training=self.training) # batch_size x 128
s = F.dropout(
F.relu(self.fc_bn2(self.fc2(s))),
p=0.3,
training=self.training) # batch_size x 64
pi = self.fc3(s) # batch_size x action_size
v = self.fc4(s) # batch_size x 1
return F.log_softmax(pi, dim=1), th.tanh(v)
When I am trying to use this architecture, I am getting following error:
Traceback (most recent call last):
File "/Users/joe/Documents/JUPYTER/ConnectX/training3.py", line 130, in <module>
learner.learn(total_timesteps=iterations, callback=eval_callback)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/stable_baselines3/ppo/ppo.py", line 264, in learn
reset_num_timesteps=reset_num_timesteps,
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 222, in learn
continue_training = self.collect_rollouts(self.env, callback, self.rollout_buffer, n_rollout_steps=self.n_steps)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 154, in collect_rollouts
actions, values, log_probs = self.policy.forward(obs_tensor)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/stable_baselines3/common/policies.py", line 545, in forward
latent_pi, latent_vf, latent_sde = self._get_latent(obs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/stable_baselines3/common/policies.py", line 564, in _get_latent
latent_pi, latent_vf = self.mlp_extractor(features)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/stable_baselines3/common/torch_layers.py", line 220, in forward
shared_latent = self.shared_net(features)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/container.py", line 117, in forward
input = module(input)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 93, in forward
return F.linear(input, self.weight, self.bias)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/functional.py", line 1688, in linear
if input.dim() == 2 and bias is not None:
AttributeError: 'tuple' object has no attribute 'dim'
How can this problem be fixed?
| I tried to reproduce a small working example based on the class definitions you gave, and I was able to get outputs from the model. Here is the code:
# BaseFeaturesExtractor class
import gym
import torch as th
from torch import nn
class BaseFeaturesExtractor(nn.Module):
"""
Base class that represents a features extractor.
:param observation_space:
:param features_dim: Number of features extracted.
"""
def __init__(self, observation_space: gym.Space, features_dim: int = 0):
super(BaseFeaturesExtractor, self).__init__()
assert features_dim > 0
self._observation_space = observation_space
self._features_dim = features_dim
@property
def features_dim(self) -> int:
return self._features_dim
def forward(self, observations: th.Tensor) -> th.Tensor:
raise NotImplementedError()
# Net class
class Net(BaseFeaturesExtractor):
def __init__(self, observation_space: gym.spaces.Box, features_dim: int = 512):
super(Net, self).__init__(observation_space, features_dim)
self.conv1 = nn.Conv2d(1, 64, 3, stride=1, padding=1)
self.conv2 = nn.Conv2d(64, 64, 3, stride=1, padding=1)
self.conv3 = nn.Conv2d(64, 64, 3, stride=1)
self.conv4 = nn.Conv2d(64, 64, 3, stride=1)
self.bn1 = nn.BatchNorm2d(64)
self.bn2 = nn.BatchNorm2d(64)
self.bn3 = nn.BatchNorm2d(64)
self.bn4 = nn.BatchNorm2d(64)
self.fc1 = nn.Linear(64 * (7 - 4) * (6 - 4), 128)
self.fc_bn1 = nn.BatchNorm1d(128)
self.fc2 = nn.Linear(128, 64)
self.fc_bn2 = nn.BatchNorm1d(64)
self.fc3 = nn.Linear(64, 7)
self.fc4 = nn.Linear(64, 1)
def forward(self, s):
# s: batch_size x board_x x board_y
s = s.view(-1, 1, 7, 6) # batch_size x 1 x board_x x board_y
s = F.relu(self.bn1(self.conv1(s))) # batch_size x num_channels x board_x x board_y
s = F.relu(self.bn2(self.conv2(s))) # batch_size x num_channels x board_x x board_y
s = F.relu(self.bn3(self.conv3(s))) # batch_size x num_channels x (board_x-2) x (board_y-2)
s = F.relu(self.bn4(self.conv4(s))) # batch_size x num_channels x (board_x-4) x (board_y-4)
s = s.view(-1,64 * (7 - 4) * (6 - 4))
s = F.dropout(
F.relu(self.fc_bn1(self.fc1(s))),
p=0.3,
training=self.training) # batch_size x 128
s = F.dropout(
F.relu(self.fc_bn2(self.fc2(s))),
p=0.3,
training=self.training) # batch_size x 64
pi = self.fc3(s) # batch_size x action_size
v = self.fc4(s) # batch_size x 1
return F.log_softmax(pi, dim=1), th.tanh(v)
# Minimal code to reproduce a forward pass
import numpy as np
import torch
import torch.nn.functional as F
params = gym.spaces.Box(np.array([-1,0,0]), np.array([+1,+1,+1]))
model = Net(params)
inputs = torch.randn(2, 1, 7, 6)
outputs = model(inputs)
print(outputs[0].shape, outputs[1].shape) # prints (torch.Size([2, 7]), torch.Size([2, 1]))
Note that your forward() returns a tuple of two tensors; stable-baselines3 expects a features extractor to return a single tensor, and that tuple is exactly what later reaches F.linear and raises the AttributeError.
| https://stackoverflow.com/questions/64950464/ |
Image augmentation in Pytorch | I'd like to augment images alternately.
I have pytorch transform code as follows.
import torchvision.transforms as tt
from torchvision.datasets import ImageFolder
from PIL import Image # needed for Image.BICUBIC below
#Data transform (normalization & data augmentation)
stats = ((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))
train_tfms = tt.Compose([tt.RandomCrop(32, padding = 4, padding_mode = 'reflect'),
tt.RandomHorizontalFlip(),
tt.RandomAffine(degrees=(10, 30),
translate=(0.1, 0.3),
scale=(0.7, 1.3),
shear=0.1,
resample=Image.BICUBIC),
tt.ToTensor(),
tt.Normalize(*stats)])
When I create dataset as follow and do training, all images will be augmented.
train_ds = ImageFolder('content/train', train_tfms)
But I want it to alternate: the first image is trained on as the original, the next one is augmented, and so on.
How can I do that?
| From a single dataset you can create two datasets, one with augmentation and one without, and then concatenate them. The relative order of the samples is preserved, since the Subset class keeps the indices we give it.
train_ds_no_aug = ImageFolder('content/train', tt.Compose([tt.ToTensor(), tt.Normalize(*stats)])) # still needs ToTensor/Normalize so both halves batch the same way
train_ds_aug = ImageFolder('content/train', train_tfms)
# Check that aug_idx and no_aug_idx are not overlapping
aug_idx = torch.arange(1, len(train_ds_no_aug), 2)
no_aug_idx = torch.arange(0, len(train_ds_no_aug), 2)
train_ds_no_aug = torch.utils.data.Subset(train_ds_no_aug, no_aug_idx)
train_ds_aug = torch.utils.data.Subset(train_ds_aug, aug_idx)
train_ds = torch.utils.data.ConcatDataset([train_ds_no_aug, train_ds_aug]) # ConcatDataset, not ChainDataset (which is only for IterableDatasets)
# Done :=
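Note that ConcatDataset places all non-augmented samples before all augmented ones; if you want them strictly alternating in the original order, you can reindex the concatenated dataset. A sketch (the batch size is illustrative):
from torch.utils.data import DataLoader, Subset

n0, n1 = len(train_ds_no_aug), len(train_ds_aug) # n0 >= n1 (even vs odd originals)
order = []
for i in range(n1):
    order += [i, n0 + i] # plain sample, then the augmented sample that follows it
if n0 > n1:
    order.append(n1) # leftover plain sample when the dataset size is odd
train_dl = DataLoader(Subset(train_ds, order), batch_size=32)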
| https://stackoverflow.com/questions/64952519/ |
Multiplying two 3D Pytorch tensors iteratively | I have two 3-dimensional PyTorch tensors, one of shape (8, 1, 1024) and the other of shape (8, 59, 77). I wish to multiply these two tensors.
I know they cannot be multiplied in their current state, so I want to multiply them iteratively and append the results into a single tensor.
The second tensor can be viewed as 77 slices of shape (8, 59, 1) when we iterate over its last dimension. In that form, multiplying each slice with the first tensor of shape (8, 1, 1024) results in a tensor of shape (8, 59, 1024); finally, stacking all 77 outputs gives the final shape (8, 59, 1024, 77).
However, I am having issues with the implementation. Can someone help me here?
| If I didn't mess up the computation, it would be equivalent to:
import torch
x = torch.rand(8, 1, 1024)
y = torch.rand(8, 59, 77)
torch.matmul(
y.unsqueeze(-1), # shape = (8, 59, 77, 1)
x.unsqueeze(1) # shape = (8, 1, 1, 1024)
).permute(0, 1, 3, 2) # output shape = (8, 59, 1024, 77)
Note that, in this case, matmul performs a batched matrix multiply.
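An equivalent one-liner with einsum (the size-1 dimension i is summed out, which is a no-op here):
out = torch.einsum('bik,bjl->bjkl', x, y) # shape (8, 59, 1024, 77)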
| https://stackoverflow.com/questions/64952700/ |
LSTM to Predict Pattern 010101... Understanding Hidden State | I did a quick experiment to see if I could understand what the hidden state in an LSTM does...
I tried to make an LSTM predict a sequence of [1,0,1,0,1...] based off an input sequence of X with X[0] = 1 and the remainder as random noise.
X = [1, randFloat, randFloat, randFloat...]
label = [1, 0, 1, 0...]
In my head, the model would understand:
The inputs X mean nothing, or at least very little (as it's noise) - so it'd discard these values for the most part
Solely the hidden state from the previous sequence/timestep n would be used to predict the next timestep n+1... [1, 0, 1, 0...]
I also set X[0] = 1 as the first input, in an attempt to guide the net to predict 1 on the first item (which it does)
So, this didn't work. In theory, should it not? Can someone explain?
It essentially never converges, and is on the cusp of guessing between 0 or 1
## Code
import os
import numpy as np
import torch
from torchvision import transforms
from torch import nn
import torch.nn.functional as F # needed for F.log_softmax below
from sklearn import preprocessing
from util import create_sequences
import torch.optim as optim
Create some fake data
sequence_1 = torch.tensor(np.random.uniform(size=50)).float().detach()
sequence_1[0] = 1
sequence_2 = torch.tensor(np.random.uniform(size=50)).float().detach()
sequence_2[0] = 1
labels_1 = np.zeros(50)
labels_1[::2] = 1
labels_1 = torch.tensor(labels_1, dtype=torch.long)
labels_2 = labels_1.clone()
training_data = [sequence_1, sequence_2]
label_data = [labels_1, labels_2]
Create simple LSTM Model
class LSTM(nn.Module):
def __init__(self, input_dim, hidden_dim, output_dim):
super(LSTM, self).__init__()
self.lstm = nn.LSTM(input_dim, hidden_dim)
self.fc = nn.Linear(hidden_dim, output_dim)
def forward(self, seq):
lstm_out, _ = self.lstm(seq.view(len(seq), 1, -1))
out = self.fc(lstm_out.view(len(seq), -1))
out = F.log_softmax(out, dim=1)
return out
We try to overfit on the dataset
INPUT_DIM = 1
HIDDEN_DIM = 6
model = LSTM(INPUT_DIM, HIDDEN_DIM, 2)
loss_function = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)
for epoch in range(500):
for i, seq in enumerate(training_data):
labels = label_data[i]
model.zero_grad()
scores = model(seq)
loss = loss_function(scores, labels)
loss.backward()
print(loss)
optimizer.step()
with torch.no_grad():
seq_d = training_data[0]
tag_scores = model(seq_d)
for score in tag_scores:
print(np.argmax(score))
| I would say it's not meant to work.
The model always tries to make sense of and find patterns in the data it's trained on, i.e. sequence_1, and it uses labels_1 to "verify" that it has "found" them. Since the data is random, the model fails to find any pattern.
The pattern the model tries to find is not in the label but in the data, so it doesn't matter how the label is arranged; the label never actually passes through the model, so no.
If you trained it on a single example then, sure: the model would overfit, give you your ones and zeros, and fail miserably on other examples. Otherwise it just won't be able to make sense of the random data, no matter the dataset size.
Hidden State
Solely the hidden state from the previous sequence/timestep n would be used to predict the next timestep n+1... [1, 0, 1, 0...]
Concerning the hidden state, note that it is not a trainable parameter; it is the result of performing operations on the data and the parameters, meaning that the input data determines the hidden state.
What the hidden state does is hold the information the model has extracted from the previous timesteps and pass it to the next timestep or to the output. In the case of an LSTM, it does some forgetting and updating before passing it on.
| https://stackoverflow.com/questions/64960794/ |
How to create and use PyTorch learnable scalar variables outside of nn.Module? | I am working on a multi-objective problem where I have multiple losses that I need to compute, and the total loss is just the sum of the losses. I want to have PyTorch learnable floating-point parameters alpha, and beta that act as coefficients to the individual losses. Note that the summation of losses occurs outside my NN model in the training loop:
optimizer = AdamW(model.parameters(), lr=2e-5, eps=1e-8)
for batch in dl:
optimizer.zero_grad()
result = model(batch)
loss1 = loss_fn_1(result)
loss2 = loss_fn_2(result)
loss3 = loss_fn_3(result)
loss = alpha*loss1 + beta*loss2 + (1-beta)*loss3 # How to optimize alpha, beta?
loss.backward()
optimizer.step()
How would I declare and use the learnable parameters alpha and beta?
| You can put them into a list and add them to an optimizer, for example,
optimizer_for_my_params = torch.optim.Adam([alpha, beta], lr=1e-3)
or separately,
optimizer_alpha = torch.optim.Adam([alpha], lr=1e-3)
optimizer_beta = torch.optim.Adam([beta], lr=1e-3)
and at each step, call zero_grad and step on all optimizers.
Or you can put them in an nn.Module and declare them as parameters:
class MyParams(nn.Module):
def __init__(self):
super(MyParams, self).__init__()
self.alpha = nn.Parameter(torch.tensor(0.))
self.beta = nn.Parameter(torch.tensor(0.))
def forward(self, loss1, loss2, loss3):
loss = self.alpha*loss1 + self.beta*loss2 + (1 - self.beta)*loss3
return loss
When using it, define a separate optimizer for this module's parameters and that should do the job.
Update:
Here is a more comprehensive example for the first method.
import torch
import torch.optim as optim
alpha = torch.tensor(0.)
alpha.requires_grad = True
optimizer_alpha = optim.Adam([alpha], lr=1e-3)
print(optimizer_alpha)
# Adam (
# Parameter Group 0
# amsgrad: False
# betas: (0.9, 0.999)
# eps: 1e-08
# lr: 0.001
# weight_decay: 0
# )
out = alpha + 1
# test backward()
optimizer_alpha.zero_grad()
out.backward()
print(alpha.grad)
# tensor(1.)
# test step()
optimizer_alpha.step()
print(alpha)
# tensor(-0.0010, requires_grad=True)
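Alternatively, as a sketch building on the question's training loop (the learning rate for the scalars is illustrative), you can hand the scalars to the existing AdamW optimizer as a second parameter group, so a single optimizer.step() updates the model and the loss weights together:
import torch
from torch.optim import AdamW

alpha = torch.nn.Parameter(torch.tensor(1.0))
beta = torch.nn.Parameter(torch.tensor(0.5))
optimizer = AdamW([{'params': model.parameters(), 'lr': 2e-5},
                   {'params': [alpha, beta], 'lr': 1e-3}], # separate lr for the loss weights
                  eps=1e-8)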
| https://stackoverflow.com/questions/64963125/ |
Pytorch CNN:RuntimeError: Given groups=1, weight of size [16, 16, 3], expected input[500, 1, 19357] to have 16 channels, but got 1 channels instead | class ConvolutionalNetwork(nn.Module):
def __init__(self, in_features, trial):
super().__init__()
self.in_features = in_features
self.trial = trial
# this computes num features outputted from the two conv layers
c1 = int(((self.in_features - 2)) / 64) # this is to account for the loss due to conversion to int type
c2 = int((c1 - 2) / 64)
self.n_conv = int(c2 * 16)
# self.n_conv = int((( ( (self.in_features - 2)/4 ) - 2 )/4 ) * 16)
self.conv1 = nn.Conv1d(16, 16, 3, 1)
self.conv1_bn = nn.BatchNorm1d(16)
self.conv2 = nn.Conv1d(16, 16, 3, 1)
self.conv2_bn = nn.BatchNorm1d(16)
# self.dp = nn.Dropout(trial.suggest_uniform('dropout_rate',0,1.0))
self.dp = nn.Dropout(0.5)
self.fc3 = nn.Linear(self.n_conv, 2)
def forward(self, x):
# shape x for conv 1d op
x = x.view(-1, 1, self.in_features)
x = self.conv1(x)
x = F.tanh(x)
x = F.max_pool1d(x, 64, 64)
x = self.conv2(x)
x = F.tanh(x)
x = F.max_pool1d(x, 64, 64)
x = x.view(-1, self.n_conv)
x = self.dp(x)
x = self.fc3(x)
x = F.log_softmax(x, dim=1)
return x
Ran the code above and this error popped up :
RuntimeError: Given groups=1, weight of size [16, 16, 3], expected input[500, 1, 19357] to have 16 channels, but got 1 channels instead.
Can anyone advise on this? It says there is a discrepancy in the input, but the code above worked well earlier; I'm unsure what happened after I rearranged the code.
| Well, right at the start of the forward method you are reshaping your input so that it has only a single channel:
x = x.view(-1, 1, self.in_features)
while at the same time, in the model constructor, you are specifying that conv1 takes 16 channels as input:
self.conv1 = nn.Conv1d(16, 16, 3, 1)
Thus the error of expecting 16 channels but receiving 1.
There are two things to note here:
If you are used to TensorFlow, you may be thinking that channels are the last dimension, but in PyTorch channels come right after the batch dimension (N, C, L for Conv1d). Take a look at the Conv1d torch documentation, and take this into account when reshaping the data.
Conv1d is agnostic to the length of your input (I am telling you this just in case in_features represents the length).
I cannot provide you with a concrete solution since I am not sure what you are trying to do.
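That said, here is a minimal sketch of one possible fix, assuming the input really is a single channel of length in_features: make conv1 accept one input channel so it matches the reshape.
# in __init__: the first conv takes 1 input channel and still outputs 16
self.conv1 = nn.Conv1d(1, 16, 3, 1)

# forward stays as it was:
x = x.view(-1, 1, self.in_features) # (batch, channels=1, length)
x = self.conv1(x) # channel counts now agree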
| https://stackoverflow.com/questions/64963473/ |
How to get number of classes from .pth file without any model information in Python? | I have a .pth file with no model information available. How can I find the number of final output classes in the .pth file?
For visualization, I can use Netron to see the number of classes. But how can I get the same number in Python?
| You can load the checkpoint and inspect the shape of the weights of the last layer. If the last layer has two tensors (kernel and bias), you may have to inspect both.
Here is an example of how to inspect the last weight:
import torch
checkpoint = torch.load('_.pt')
last_key = list(checkpoint)[-1]
print(checkpoint[last_key].size())
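If the last entry is a bias vector, its length is the number of classes directly; for a Linear weight of shape (num_classes, in_features), take size(0). Some checkpoints also wrap the weights in an outer dict; a hedged sketch (the 'state_dict' key name is an assumption that depends on how the file was saved):
import torch

checkpoint = torch.load('_.pt', map_location='cpu')
state_dict = checkpoint.get('state_dict', checkpoint) # unwrap if wrapped
last_key = list(state_dict)[-1]
print(last_key, state_dict[last_key].size()) # e.g. fc.bias -> torch.Size([num_classes])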
| https://stackoverflow.com/questions/64964239/ |
EasyOCR used under Python / Torch Multiprocessing is defaulting to CPU | I am using EasyOCR for text extraction from images. It uses PyTorch. There are multiple images in different folders and the sequence in which these folders are read isn't consequential.
When run sequentially, EasyOCR uses the GPU by default and is faster than on the CPU. But when Python / Torch multiprocessing is invoked, so that the multiple folders are read in parallel, EasyOCR defaults to the CPU.
torch.cuda.is_available() returns False.
How can I solve this?
| If torch.cuda.is_available() returns False:
Verify your device has a GPU.
Verify that the installed version of CUDA is supported on your GPU.
Verify that you have installed torch with CUDA support.
Check this question for additional details:
Why `torch.cuda.is_available()` returns False even after installing pytorch with cuda?
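One more thing worth checking in this particular setup: CUDA cannot be re-initialized in a forked subprocess, so when combining CUDA with multiprocessing you generally need the 'spawn' start method; otherwise CUDA can silently become unavailable in the workers. A sketch:
import torch.multiprocessing as mp

if __name__ == '__main__':
    mp.set_start_method('spawn', force=True) # 'fork' breaks CUDA in child processes
    # ... launch the per-folder OCR workers from here ...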
| https://stackoverflow.com/questions/64967770/ |
"Didn't find engine for operation quantized" error while using dynamic quantization with Huggingface transformer | I am trying to do dynamic quantization(quantizes the weights and the activations) on a pytorch pre-trained model from huggingface library. I have referred this link and found dynamic quantization the most suitable. I will be using the quantized model on a CPU.
Link to hugginface model here.
torch version: 1.6.0 (installed via pip)
Pre-trained models
tokenizer = AutoTokenizer.from_pretrained("microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext")
model = AutoModel.from_pretrained("microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext")
Dynamic quantization
quantized_model = torch.quantization.quantize_dynamic(
model, qconfig_spec={torch.nn.Linear}, dtype=torch.qint8
)
print(quantized_model)
Error
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-7-df2355c17e0b> in <module>
1 quantized_model = torch.quantization.quantize_dynamic(
----> 2 model, qconfig_spec={torch.nn.Linear}, dtype=torch.qint8
3 )
4
5 print(quantized_model)
~/.virtualenvs/python3/lib64/python3.6/site-packages/torch/quantization/quantize.py in quantize_dynamic(model, qconfig_spec, dtype, mapping, inplace)
283 model.eval()
284 propagate_qconfig_(model, qconfig_spec)
--> 285 convert(model, mapping, inplace=True)
286 _remove_qconfig(model)
287 return model
~/.virtualenvs/python3/lib64/python3.6/site-packages/torch/quantization/quantize.py in convert(module, mapping, inplace)
363 for name, mod in module.named_children():
364 if type(mod) not in SWAPPABLE_MODULES:
--> 365 convert(mod, mapping, inplace=True)
366 reassign[name] = swap_module(mod, mapping)
367
~/.virtualenvs/python3/lib64/python3.6/site-packages/torch/quantization/quantize.py in convert(module, mapping, inplace)
363 for name, mod in module.named_children():
364 if type(mod) not in SWAPPABLE_MODULES:
--> 365 convert(mod, mapping, inplace=True)
366 reassign[name] = swap_module(mod, mapping)
367
~/.virtualenvs/python3/lib64/python3.6/site-packages/torch/quantization/quantize.py in convert(module, mapping, inplace)
363 for name, mod in module.named_children():
364 if type(mod) not in SWAPPABLE_MODULES:
--> 365 convert(mod, mapping, inplace=True)
366 reassign[name] = swap_module(mod, mapping)
367
~/.virtualenvs/python3/lib64/python3.6/site-packages/torch/quantization/quantize.py in convert(module, mapping, inplace)
363 for name, mod in module.named_children():
364 if type(mod) not in SWAPPABLE_MODULES:
--> 365 convert(mod, mapping, inplace=True)
366 reassign[name] = swap_module(mod, mapping)
367
~/.virtualenvs/python3/lib64/python3.6/site-packages/torch/quantization/quantize.py in convert(module, mapping, inplace)
363 for name, mod in module.named_children():
364 if type(mod) not in SWAPPABLE_MODULES:
--> 365 convert(mod, mapping, inplace=True)
366 reassign[name] = swap_module(mod, mapping)
367
~/.virtualenvs/python3/lib64/python3.6/site-packages/torch/quantization/quantize.py in convert(module, mapping, inplace)
364 if type(mod) not in SWAPPABLE_MODULES:
365 convert(mod, mapping, inplace=True)
--> 366 reassign[name] = swap_module(mod, mapping)
367
368 for key, value in reassign.items():
~/.virtualenvs/python3/lib64/python3.6/site-packages/torch/quantization/quantize.py in swap_module(mod, mapping)
393 )
394 device = next(iter(devices)) if len(devices) > 0 else None
--> 395 new_mod = mapping[type(mod)].from_float(mod)
396 if device:
397 new_mod.to(device)
~/.virtualenvs/python3/lib64/python3.6/site-packages/torch/nn/quantized/dynamic/modules/linear.py in from_float(cls, mod)
101 else:
102 raise RuntimeError('Unsupported dtype specified for dynamic quantized Linear!')
--> 103 qlinear = Linear(mod.in_features, mod.out_features, dtype=dtype)
104 qlinear.set_weight_bias(qweight, mod.bias)
105 return qlinear
~/.virtualenvs/python3/lib64/python3.6/site-packages/torch/nn/quantized/dynamic/modules/linear.py in __init__(self, in_features, out_features, bias_, dtype)
33
34 def __init__(self, in_features, out_features, bias_=True, dtype=torch.qint8):
---> 35 super(Linear, self).__init__(in_features, out_features, bias_, dtype=dtype)
36 # We don't muck around with buffers or attributes or anything here
37 # to keep the module simple. *everything* is simply a Python attribute.
~/.virtualenvs/python3/lib64/python3.6/site-packages/torch/nn/quantized/modules/linear.py in __init__(self, in_features, out_features, bias_, dtype)
150 raise RuntimeError('Unsupported dtype specified for quantized Linear!')
151
--> 152 self._packed_params = LinearPackedParams(dtype)
153 self._packed_params.set_weight_bias(qweight, bias)
154 self.scale = 1.0
~/.virtualenvs/python3/lib64/python3.6/site-packages/torch/nn/quantized/modules/linear.py in __init__(self, dtype)
18 elif self.dtype == torch.float16:
19 wq = torch.zeros([1, 1], dtype=torch.float)
---> 20 self.set_weight_bias(wq, None)
21
22 @torch.jit.export
~/.virtualenvs/python3/lib64/python3.6/site-packages/torch/nn/quantized/modules/linear.py in set_weight_bias(self, weight, bias)
24 # type: (torch.Tensor, Optional[torch.Tensor]) -> None
25 if self.dtype == torch.qint8:
---> 26 self._packed_params = torch.ops.quantized.linear_prepack(weight, bias)
27 elif self.dtype == torch.float16:
28 self._packed_params = torch.ops.quantized.linear_prepack_fp16(weight, bias)
RuntimeError: Didn't find engine for operation quantized::linear_prepack NoQEngine
| Is qnnpack in the list when you run print(torch.backends.quantized.supported_engines)?
Does torch.backends.quantized.engine = 'qnnpack' work for you?
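Putting those two checks together (a sketch; which engines appear depends on how your torch build was compiled: qnnpack for ARM, fbgemm for x86):
import torch

print(torch.backends.quantized.supported_engines) # e.g. ['qnnpack', 'none']
torch.backends.quantized.engine = 'qnnpack' # pick an engine from that list
quantized_model = torch.quantization.quantize_dynamic(
    model, qconfig_spec={torch.nn.Linear}, dtype=torch.qint8
)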
| https://stackoverflow.com/questions/64968060/ |
pipenv install pytorch cpu + specific version | I need to install a specific version of PyTorch in CPU mode.
With pip I would do it like this:
pip install torch==1.2.0+cpu torchvision==0.4.0+cpu -f https://download.pytorch.org/whl/torch_stable.html --trusted-host download.pytorch.org
How can I achieve the same using Pipenv?
I tried having the following Pipfile:
[[source]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cpu"
verify_ssl = false
[packages]
torchvision = {index = "pytorch", version = "==0.4.0"}
torch = {index = "pytorch", version = "==1.2.0"}
but it didn't work.
| You can do:
$ PIP_FIND_LINKS="https://download.pytorch.org/whl/torch_stable.html" pipenv install torch==1.2.0+cpu torchvision==0.4.0+cpu
But you'll have to ensure that you add PIP_FIND_LINKS for any subsequent pipenv sync, pipenv lock, etc.
UPD:
You may also add PIP_FIND_LINKS="https://download.pytorch.org/whl/torch_stable.html" to the .env file, but it's only being loaded on pipenv run and pipenv shell.
| https://stackoverflow.com/questions/64974877/ |
PyTorch installation fails Could not find a version that satisfies the requirement | I'm trying to install PyTorch with PyCharm Community Edition 2020.2.3 x64 and Python 3.9.0 on Windows 10 pro 64-bit OS PC machine
I've tried:
pip install torch==1.7.0+cpu torchvision==0.8.1+cpu torchaudio===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
and:
python -m pip install torch==1.7.0 -f https://download.pytorch.org/whl/torch_stable.html
Should I downgrade the Python version, say to Python 3.8.6, or the PyTorch version, to make it work? Or am I doing something else incorrectly; maybe I missed something to install (for example, I did not select CUDA)? But it seems the reason is different:
ERROR: Could not find a version that satisfies the requirement
torch==1.7.0+cpu (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
ERROR: No matching distribution found for torch==1.7.0+cpu
with pip3 install https://download.pytorch.org/whl/cpu//torch-1.7.0%2Bcpu-cp38-cp38-win_amd64.whl:
ERROR: torch-1.7.0+cpu-cp38-cp38-win_amd64.whl is not a supported
wheel on this platform.
and
pip install torch==1.4.0+cpu torchvision==0.5.0+cpu -f https://download.pytorch.org/whl/torch_stable.html -vvv
ERROR: Could not find a version that satisfies the requirement
torch==1.4.0+cpu (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
ERROR: No matching distribution found for torch==1.4.0+cpu
Any advice, guide or example would be helpful
Solution:
Installed successfully with Python 3.8.6
| I tried with Python 3.8.0, Python 3.8.5, Python 3.8.6 and Python 3.9.0. It seems to work only with the 3.8.6 version. Note that at the time of writing the torch==1.7.0 wheels are built for Python 3.8 (cp38) at most, which is why pip under Python 3.9 only finds the ancient 0.1.2 releases.
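If you want to verify which wheel tags your interpreter accepts (pip debug is officially marked as unsupported output, but it is handy here):
pip debug --verbose # look for cp38 vs cp39 in the 'Compatible tags' list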
| https://stackoverflow.com/questions/64975755/ |
PyTorch how to compute second order Jacobian? | I have a neural network that's computing a vector quantity u. I'd like to compute first and second-order jacobians with respect to the input x, a single element.
Would anybody know how to do that in PyTorch? Below, the code snippet from my project:
import torch
import torch.nn as nn
class PINN(torch.nn.Module):
def __init__(self, layers:list):
super(PINN, self).__init__()
self.linears = nn.ModuleList([])
for i, dim in enumerate(layers[:-2]):
self.linears.append(nn.Linear(dim, layers[i+1]))
self.linears.append(nn.ReLU())
self.linears.append(nn.Linear(layers[-2], layers[-1]))
def forward(self, x):
for layer in self.linears:
x = layer(x)
return x
I then instantiate my network:
n_in = 1
units = 50
q = 500
pinn = PINN([n_in, units, units, units, q+1])
pinn
Which returns
PINN(
(linears): ModuleList(
(0): Linear(in_features=1, out_features=50, bias=True)
(1): ReLU()
(2): Linear(in_features=50, out_features=50, bias=True)
(3): ReLU()
(4): Linear(in_features=50, out_features=50, bias=True)
(5): ReLU()
(6): Linear(in_features=50, out_features=501, bias=True)
)
)
Then I compute both FO and SO jacobians
x = torch.randn(1, requires_grad=False)
u_x = torch.autograd.functional.jacobian(pinn, x, create_graph=True)
print("First Order Jacobian du/dx of shape {}, and features\n{}".format(u_x.shape, u_x)
u_xx = torch.autograd.functional.jacobian(lambda _: u_x, x)
print("Second Order Jacobian du_x/dx of shape {}, and features\n{}".format(u_xx.shape, u_xx)
Returns
First Order Jacobian du/dx of shape torch.Size([501, 1]), and features
tensor([[-0.0310],
[ 0.0139],
[-0.0081],
[-0.0248],
[-0.0033],
[ 0.0013],
[ 0.0040],
[ 0.0273],
...
[-0.0197]], grad_fn=<ViewBackward>)
Second Order Jacobian du/dx of shape torch.Size([501, 1, 1]), and features
tensor([[[0.]],
[[0.]],
[[0.]],
[[0.]],
...
[[0.]]])
Should not u_xx be a None vector if it didn't depend on x?
Thanks in advance
| So, as @jodag mentioned in his comment, ReLU is piecewise linear (zero or the identity), so its gradient is piecewise constant (except at 0, a measure-zero event) and its second-order derivative is zero. I changed the activation function to Tanh, which finally allows me to compute the Jacobian twice.
Final code is
import torch
import torch.nn as nn
class PINN(torch.nn.Module):
def __init__(self, layers:list):
super(PINN, self).__init__()
self.linears = nn.ModuleList([])
for i, dim in enumerate(layers[:-2]):
self.linears.append(nn.Linear(dim, layers[i+1]))
self.linears.append(nn.Tanh())
self.linears.append(nn.Linear(layers[-2], layers[-1]))
def forward(self, x):
for layer in self.linears:
x = layer(x)
return x
def compute_u_x(self, x):
self.u_x = torch.autograd.functional.jacobian(self, x, create_graph=True)
self.u_x = torch.squeeze(self.u_x)
return self.u_x
def compute_u_xx(self, x):
self.u_xx = torch.autograd.functional.jacobian(self.compute_u_x, x)
self.u_xx = torch.squeeze(self.u_xx)
return self.u_xx
Then calling compute_u_xx(x) on an instance of PINN, with x.requires_grad set to True, gets me there. How to get rid of the useless dimensions introduced by torch.autograd.functional.jacobian remains to be understood, though...
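For reference, a minimal usage sketch of the class above (layer sizes taken from the question):
pinn = PINN([1, 50, 50, 50, 501])
x = torch.randn(1, requires_grad=True)
u_x = pinn.compute_u_x(x) # first-order: shape (501,) after the squeeze
u_xx = pinn.compute_u_xx(x) # second-order: nonzero now that Tanh is used
print(u_x.shape, u_xx.shape)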
| https://stackoverflow.com/questions/64978232/ |
DenseNet, Sizes of tensors must match | Would you know how I can adapt this code so that the tensor sizes match? I am getting this error: x = torch.cat([x1,x2],1) RuntimeError: Sizes of tensors must match except in dimension 0. Got 32 and 1 (The offending index is 0).
My images are size 416x416.
Thank you in advance for your help,
num_classes = 20
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.inc = models.inception_v3(pretrained=True)
self.inc.aux_logits = False
for child in list(self.inc.children())[:-5]:
for param in child.parameters():
param.requires_grad = False
self.inc.fc = nn.Sequential()
self.dens121 = models.densenet121(pretrained=True)
for child in list(self.dens121.children())[:-6]:
for param in child.parameters():
param.requires_grad = False
self.dens121 = nn.Sequential(*list(self.dens121.children())[:-1])
self.SiLU = nn.SiLU()
self.linear = nn.Linear(4096, num_classes)
self.dropout = nn.Dropout(0.2)
def forward(self, x):
x1 = self.SiLU(self.dens121(x))
x1 = x1.view(-1, 2048)
x2 = self.inc(x).view(-1, 2048)
x = torch.cat([x1,x2],1)
return self.linear(self.dropout(x))
| The shapes of the two tensors are very different, and that's why torch.cat() fails. I tried to run your code with the following example:
def forward(self, x):
x1 = self.SiLU(self.dens121(x))
x1 = x1.view(-1, 2048)
x2 = self.inc(x).view(-1, 2048)
print(x1.shape, x2.shape)
x = torch.cat([x1,x2], dim=1)
return self.linear(self.dropout(x))
Here's the driver code
inputs = torch.randn(2, 3, 416, 416)
model = Net()
outputs = model(inputs)
The shapes of x1 of x2 are as follows:
torch.Size([169, 2048]) torch.Size([2, 2048])
Either your DenseNet should output the same shape as the output of Inceptionv3 or vice-versa. The output from DenseNet is of shape torch.Size([2, 1024, 13, 13]) and the output from Inceptionv3 is of shape torch.Size([2, 2048]).
EDIT
Add this line to the init method:
self.conv_reshape= nn.Conv2d(1024, 2048, kernel_size=13, stride=1)
Add these lines to your forward():
x1 = self.SiLU(self.dens121(x))
out = self.conv_reshape(x1)
x1 = out.view(-1, out.size(1))
x2 = self.inc(x).view(-1, 2048)
| https://stackoverflow.com/questions/64984301/ |
The definition of "heads" in MultiheadAttention in Pytorch Transformer module | I am a bit confused about the definition of Multihead.
Are [1] and [2] below the same?
[1]
My understanding of multi-head is multiple attention patterns, as below.
"multiple sets of Query/Key/Value weight matrices (the Transformer uses eight attention heads, so we end up with eight sets for each encoder/decoder)."
http://jalammar.github.io/illustrated-transformer/
But
[2] in class MultiheadAttention(Module): in Pytorch Transformer module,
it seems like embed_dim is DIVIDED by the number of heads. Why?
Or... is embed_dim meant to be the feature dimension times the number of heads in the first place?
self.head_dim = embed_dim // num_heads
assert self.head_dim * num_heads == self.embed_dim, "embed_dim must be divisible by num_heads"
https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/activation.py
| As per your understanding, multi-head attention is attention applied multiple times over the same data.
But in contrast to the illustration, it isn't implemented by multiplying the set of weights by the number of attention heads. Instead, you rearrange the single weight matrix according to the number of heads, that is, you reshape it. So, in essence, it is still multiple attentions, but each head attends to a different part of the weights.
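A sketch of what that reshape looks like inside the attention computation (the variable names are illustrative):
import torch

batch, seq_len, embed_dim, num_heads = 2, 10, 512, 8
head_dim = embed_dim // num_heads # 64: each head gets a slice of embed_dim

q = torch.randn(batch, seq_len, embed_dim) # output of one big Q projection
q = q.reshape(batch, seq_len, num_heads, head_dim) # split into 8 heads of 64 dims
q = q.transpose(1, 2) # (batch, heads, seq_len, head_dim)
# ...attention runs per head, then the heads are concatenated back to embed_dim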
| https://stackoverflow.com/questions/64984627/ |
"RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn " error BertFoeSequenceClassification | I am trying to build Bert model for Arabic Text classification task using pretrained model from https://github.com/alisafaya/Arabic-BERT
i want to know the exact difference between the two statement:
model_name = 'kuisailab/albert-large-arabic'
model = AutoModel.from_pretrained(model_name)
model = BertForSequenceClassification .from_pretrained(model_name)
I fine-tuned the model by adding the following layers on top of the model:
for param in model.parameters():
param.requires_grad = False
model.classifier = nn.Sequential(
nn.Dropout(0.5),
nn.ReLU(),
nn.Linear(768,512),
nn.Linear(512,2),
nn.LogSoftmax(dim=1),
nn.Softmax(dim=1)
)
model = model.to(device)
and used the optimizer:
optimizer = AdamW(model.parameters(),
lr = 2e-5)
finally this is my training loop:
model.train()
for idx, row in train_data.iterrows():
text_parts = preprocess_text(str(row['sentence']))
label = torch.tensor([row['label']]).long().to(device)
optimizer.zero_grad()
overall_output = torch.zeros((1, 2)).float().to(device)
for part in text_parts:
if len(part) > 0:
try:
input = part.reshape(-1)[:512].reshape(1, -1)
# print(input.shape)
overall_output += model(input, labels=label)[1].float().to(device)
except Exception as e:
print(str(e))
# overall_output /= len(text_parts)
overall_output = F.softmax(overall_output[0], dim=-1)
if label == 0:
label = torch.tensor([1.0, 0.0]).float().to(device)
elif label == 1:
label = torch.tensor([0.0, 1.0]).float().to(device)
# print(overall_output, label)
loss = criterion(overall_output, label)
total_loss += loss.item()
loss.backward()
optimizer.step()
and i get the error:
mat1 dim 1 must match mat2 dim 0
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-33-5c2f0fea6c1f> in <module>()
39 total_loss += loss.item()
40
---> 41 loss.backward()
42 optimizer.step()
43
1 frames
/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
130 Variable._execution_engine.run_backward(
131 tensors, grad_tensors_, retain_graph, create_graph,
--> 132 allow_unreachable=True) # allow_unreachable flag
133
134
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
any idea how to solve this error
| BertForSequenceClassification is the class which extends BertModel: it defines a logistic-regression (linear) layer with a cross-entropy loss on top, for the classification task, to be jointly fine-tuned or trained with the existing Bert model.
AutoModel is a class provided by the library that automatically identifies the model class based on its name or the model file contents.
Since you already know that you need a model for classification, you can directly use BertForSequenceClassification. As for the backward() error: note the "mat1 dim 1 must match mat2 dim 0" message printed just before it; your try/except swallows that exception from the model call, so overall_output remains the plain zeros tensor, which has no grad_fn, and backward() then fails. Fix whatever causes that shape error so the model output actually contributes to overall_output.
| https://stackoverflow.com/questions/64985740/ |
How to compute Hessian of the loss w.r.t. the parameters in PyTorch using autograd.grad | I know there is quite a bit of content out there about "computing the Hessian" in pytorch, but as far as I've seen I haven't found anything working for me. So to try to be most precise, the Hessian that I want is the Jacobian of the gradient of the loss with respect to the network parameters. Also called the matrix of second-order derivatives with respect to the parameters.
I found some code that works in an intuitive way, although shouldn't be fast. It clearly just computes the gradient of the gradient of the loss w.r.t. the params w.r.t the params, and it does it one element (of the gradient) at a time. I think the logic is definitely right but I am getting an error, having to do with requires_grad. I'm a pytorch beginner so maybe its a simple thing, but the error seems to be saying that it can't take the gradient of the env_grads variable, which is the output from the previous grad function call.
Any help with this would be greatly appreciated. Here is the code followed by the error message. I also printed out the env_grads[0] variable so we can see that it is in fact a tensor, which is the correct output from the previous grad call.
env_loss = loss_fn(env_outputs, env_targets)
total_loss += env_loss
env_grads = torch.autograd.grad(env_loss, params,retain_graph=True)
print( env_grads[0] )
hess_params = torch.zeros_like(env_grads[0])
for i in range(env_grads[0].size(0)):
for j in range(env_grads[0].size(1)):
hess_params[i, j] = torch.autograd.grad(env_grads[0][i][j], params, retain_graph=True)[0][i, j] # <--- error here
print( hess_params )
exit()
Output:
tensor([[-6.4064e-03, -3.1738e-03, 1.7128e-02, 8.0391e-03],
[ 7.1698e-03, -2.4640e-03, -2.2769e-03, -1.0687e-03],
[-3.0390e-04, -2.4273e-03, -4.0799e-02, -1.9149e-02],
...,
[ 1.1258e-02, -2.5911e-05, -9.8133e-02, -4.6059e-02],
[ 8.1502e-04, -2.5814e-03, 4.1772e-02, 1.9606e-02],
[-1.0075e-02, 6.6072e-03, 8.3118e-04, 3.9011e-04]], device='cuda:0')
Error:
Traceback (most recent call last):
File "/home/jefferythewind/anaconda3/envs/rapids3/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/jefferythewind/anaconda3/envs/rapids3/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/jefferythewind/Projects/Irina/learning-explanations-hard-to-vary/and_mask/run_synthetic.py", line 258, in <module>
main(args)
File "/home/jefferythewind/Projects/Irina/learning-explanations-hard-to-vary/and_mask/run_synthetic.py", line 245, in main
deep_mask=args.deep_mask
File "/home/jefferythewind/Projects/Irina/learning-explanations-hard-to-vary/and_mask/run_synthetic.py", line 103, in train
scale_grad_inverse_sparsity=scale_grad_inverse_sparsity
File "/home/jefferythewind/Projects/Irina/learning-explanations-hard-to-vary/and_mask/and_mask_utils.py", line 154, in get_grads_deep
hess_params[i, j] = torch.autograd.grad(env_grads[0][i][j], params, retain_graph=True)[0][i, j]
File "/home/jefferythewind/anaconda3/envs/rapids3/lib/python3.7/site-packages/torch/autograd/__init__.py", line 157, in grad
inputs, allow_unused)
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
| PyTorch recently-ish added a functional higher level API to torch.autograd which provides torch.autograd.functional.hessian(func, inputs,...) to directly evaluate the hessian of the scalar function func with respect to its arguments at a location specified by inputs, a tuple of tensors corresponding to the arguments of func. hessian itself does not support automatic differentiation, I believe.
Note, however, that as of March 2021 it is still in beta.
Full example using torch.autograd.functional.hessian to create a score-test for non-zero mean (As a (bad) alternative to the one sample t-test):
import numpy as np
import torch, torchvision
from torch.autograd import Variable, grad
import torch.distributions as td
import math
from torch.optim import Adam
import scipy.stats
x_data = torch.randn(100)+0.0 # observed data (here sampled under H0)
N = x_data.shape[0] # number of observations
mu_null = torch.zeros(1)
sigma_null_hat = Variable(torch.ones(1), requires_grad=True)
def log_lik(mu, sigma):
return td.Normal(loc=mu, scale=sigma).log_prob(x_data).sum()
# Find theta_null_hat by some gradient descent algorithm (in this case an closed-form expression would be trivial to obtain (see below)):
opt = Adam([sigma_null_hat], lr=0.01)
for epoch in range(2000):
opt.zero_grad() # reset gradient accumulator or optimizer
loss = - log_lik(mu_null, sigma_null_hat) # compute log likelihood with current value of sigma_null_hat (= Forward pass)
loss.backward() # compute gradients (= Backward pass)
opt.step() # update sigma_null_hat
print(f'parameter fitted under null: sigma: {sigma_null_hat}, expected: {torch.sqrt((x_data**2).mean())}')
#> parameter fitted under null: sigma: tensor([0.9260], requires_grad=True), expected: 0.9259940385818481
theta_null_hat = (mu_null, sigma_null_hat)
U = torch.tensor(torch.autograd.functional.jacobian(log_lik, theta_null_hat)) # Jacobian (= vector of partial derivatives of log likelihood w.r.t. the parameters (of the full/alternative model)) = score
I = -torch.tensor(torch.autograd.functional.hessian(log_lik, theta_null_hat)) / N # estimate of the Fisher information matrix
S = torch.t(U) @ torch.inverse(I) @ U / N # test statistic, often named "LM" (as in Lagrange multiplier), would be zero at the maximum likelihood estimate
pval_score_test = 1 - scipy.stats.chi2(df = 1).cdf(S) # S asymptocially follows a chi^2 distribution with degrees of freedom equal to the number of parameters fixed under H0
print(f'p-value Chi^2-based score test: {pval_score_test}')
#> p-value Chi^2-based score test: 0.9203232752568568
# comparison with Student's t-test:
pval_t_test = scipy.stats.ttest_1samp(x_data, popmean = 0).pvalue
print(f'p-value Student\'s t-test: {pval_t_test}')
#> p-value Student's t-test: 0.9209265268946605
| https://stackoverflow.com/questions/64997817/ |
Triton inference server serving TorchScript model | I am trying to serve a TorchScript model with the Triton (TensorRT) inference server, but every time I start the server it throws the following error:
PytorchStreamReader failed reading zip archive: failed finding central directory
My folder structure is :
<model_repository>
<model_name>
config.pbtxt
<1>
<model.pt>
My config.pbtxt file is :
name: "model"
platform: "pytorch_libtorch"
max_batch_size: 1
input[
{
name: "INPUT__0"
data_type: TYPE_FP32
dims: [-1,3,-1,-1]
}
]
output:[
{
name: "OUTPUT__0"
data_type: TYPE_FP32
dims: [-1,1,-1,-1]
}
]
| I found the solution. It was a silly mistake on my part. The .pt torchscript file was not loaded properly.
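For reference, a sketch of how a TorchScript file for Triton is typically produced (the example input shape is an assumption; use your model's real input):
import torch

model.eval()
traced = torch.jit.trace(model, torch.randn(1, 3, 224, 224))
torch.jit.save(traced, 'model_repository/model/1/model.pt') # Triton's libtorch backend expects a torch.jit-saved module named model.pt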
| https://stackoverflow.com/questions/65010792/ |
Run validation on 1 GPU while Train on multi-GPU Pytorch Lightning | Is there any way I can execute validation_step method on single GPU while training_step with multiple GPU using DDP.
The reason I want to do is because there are several metrics which I want to implement which requires complete access to the data, and running on single GPU will ensure that. I have tried validation_step_end method but somehow I am only getting part of the data. That post is here: Stack Overflow Post
| I am afraid that this is not possible. But there is the TorchMetrics package, which was developed with multi-GPU support in mind, so if your custom metric derives from its Metric base class it should compute correctly even in your multi-GPU setting; a sketch follows.
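A minimal sketch of a DDP-safe custom metric with TorchMetrics (an accuracy-style example; the state names are illustrative); the dist_reduce_fx reductions are what synchronize the states across processes:
import torch
import torchmetrics

class MyAccuracy(torchmetrics.Metric):
    def __init__(self):
        super().__init__()
        self.add_state('correct', default=torch.tensor(0), dist_reduce_fx='sum')
        self.add_state('total', default=torch.tensor(0), dist_reduce_fx='sum')

    def update(self, preds, target):
        self.correct += (preds == target).sum()
        self.total += target.numel()

    def compute(self):
        return self.correct.float() / self.total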
| https://stackoverflow.com/questions/65013992/ |
How to use a learnable parameter in pytorch, constrained between 0 and 1? | I want to use a learnable parameter that only takes values between 0 and 1. How can I do this in pytorch?
Currently I am using:
self.beta = Parameter(torch.Tensor(1))
#initialize
zeros(self.beta)
But I am getting zeros and NaN for this parameter, as I train.
| You can have a "raw" parameter taking any values, and then pass it through a sigmoid function to get a values in range (0, 1) to be used by your function.
For example:
import torch
import torch.nn as nn

class MyZeroOneLayer(nn.Module):
    def __init__(self):
        super().__init__() # required before registering parameters
        self.raw_beta = nn.Parameter(torch.zeros(1)) # unconstrained raw value; torch.Tensor(1) would start uninitialized, possibly NaN

    def forward(self): # no inputs
        beta = torch.sigmoid(self.raw_beta) # get a (0,1) value
        return beta
Now you have a module whose trainable parameter is effectively constrained to the range (0, 1).
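Usage sketch (initializing the raw parameter at zero starts beta at sigmoid(0) = 0.5):
layer = MyZeroOneLayer()
beta = layer() # tensor in (0, 1), differentiable w.r.t. raw_beta
loss = (beta - 0.9) ** 2 # toy objective: gradient descent pushes beta toward 0.9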
| https://stackoverflow.com/questions/65022269/ |
How to randomly mix two PyTorch tensors | I have two same-shaped PyTorch tensors A and B, and I'd like to create a same-shape "randomly mixed" tensor C where C[i,...] = A[i,...] with probability alpha or B[i,...] with probability 1-alpha. Is there some Pythonic way to do this compactly?
| consider using torch.bernoulli to create a mask tensor:
import torch
prob = 0.8
x = torch.full((2, 6, 3), 10.2, dtype=torch.float)
y = torch.full((2, 6, 3), -1.6, dtype=torch.float)
mask = torch.bernoulli(torch.full(x.shape, prob)).int()
reverse_mask = torch.ones(x.shape).int() - mask
result = x * mask + y * reverse_mask
result is now:
[[[10.2000, 10.2000, 10.2000],
[10.2000, -1.6000, 10.2000],
[10.2000, 10.2000, -1.6000],
[-1.6000, 10.2000, -1.6000],
[10.2000, 10.2000, 10.2000],
[10.2000, 10.2000, 10.2000]],
[[10.2000, 10.2000, -1.6000],
[10.2000, 10.2000, 10.2000],
[10.2000, 10.2000, -1.6000],
[10.2000, -1.6000, 10.2000],
[-1.6000, 10.2000, 10.2000],
[10.2000, 10.2000, 10.2000]]]
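Note that this draws an independent coin per element; for the question's exact semantics, where whole slices C[i,...] come from A or B, draw the mask over the first dimension only and let it broadcast:
alpha = 0.8
row_mask = torch.bernoulli(torch.full((x.shape[0],) + (1,) * (x.dim() - 1), alpha))
result = x * row_mask + y * (1 - row_mask) # each slice comes wholly from x or y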
Good Luck!
| https://stackoverflow.com/questions/65027847/ |
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same | I've just started learning the YOLO v5 PyTorch version and was able to build a model, so I then tried to implement a Flask application for real-time prediction using this trained model.
Class for loading the model and predicting:
class Model(object):
def __init__(self, model):
self.device = torch_utils.select_device()
print(self.device)
model = torch.load(model, map_location=self.device)['model']
self.half = False and self.device.type != 'cpu'
print('half = ' + str(self.half))
if self.half:
model.half()
# model = model.to(self.device).eval()
model.cuda()
self.loaded_model = model
def predict(self, img):
global session
# img1 = torch.from_numpy(img).to(self.device)
# img = img1.reshape(1, 3, 640, 640)
img = img.half() if self.half else img.float() # uint8 to fp16/32
img /= 255.0 # 0 - 255 to 0.0 - 1.0
print(img.ndimension())
if img.ndimension() == 3:
img = img.unsqueeze(0)
print(self.loaded_model)
img = img.to(self.device)
# img = img.half()
self.preds = self.loaded_model(img, augment=False)[0]
print(self.preds)
return self.preds
Camera class for reading frames from camera or video
model = Model("weights/best.pt")
class Camera(object):
def __init__(self):
# self.video = cv2.VideoCapture('facial_exp.mkv')
self.video = cv2.VideoCapture(0)
def __del__(self):
self.video.release()
def get_frame(self):
_, fr = self.video.read()
loader = transforms.Compose([transforms.ToTensor()])
image = cv2.resize(fr, (640, 640), interpolation=cv2.INTER_AREA)
input_im = image.reshape(1, 640, 640, 3)
pil_im = Image.fromarray(fr)
image = loader(pil_im).float()
# image = Variable(image, requires_grad=True)
image = image.unsqueeze(0)
pred = model.predict(input_im)
pred = model.predict(image)
print(pred)
_, jpeg = cv2.imencode('.jpg', fr)
return jpeg.tobytes()
Some of the commented lines are approaches I tried, but every time the line below
self.preds = self.loaded_model(img, augment=False)[0] throws the error below:
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same
any idea or guidance for solving this error thank you.
| This error means the input type is float32 while the weight type (of your model) is float16.
For example, this line ran (so the weights became float16):
model.half() # so the weight type is float16
but this line did not run (so the input stayed float32):
img = img.half() # so the input type is float32
Please check your code.
For more information about half precision, you can refer to torch.Tensor.to() and torch.nn.Module.to()
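A sketch of how to make the two sides agree in the code above (model_path stands for the question's weights path): either bring the weights back to float32 after loading, or send half-precision inputs (pick one, not both):
# option 1: convert the loaded weights to float32
model = torch.load(model_path, map_location=self.device)['model'].float()
# option 2: keep the model in half precision and match the input instead
img = img.to(self.device).half()
preds = self.loaded_model(img, augment=False)[0]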
| https://stackoverflow.com/questions/65029217/ |
How to "cut" a tensor into half in Pytorch? | I have a tensor with shape [1, 2, 96, 96] and would like two tensors with the shape [1, 1, 96, 96], is there a quick way of doing this? Thanks in advance
| a, b = tensor.split(1, dim=1) should do the job. By specifying 1 you specify how many elements should be in each split e.g. [1,2,3,4,5,6].split(2) -> [1,2] [3,4] [5,6]. Then dim just specifies which dimension to split over which in your case would be one.
EDIT:
if you want to cut it in half more generally, use tensor.split(n) where n is half the size of the tensor along that dimension. So in your specific case, if you had shape [1,10,96,96] you would use tensor.split(5, dim=1).
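An equivalent alternative when you just want two equal pieces is torch.chunk, which computes the split size for you:
a, b = tensor.chunk(2, dim=1) # two halves along dim 1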
| https://stackoverflow.com/questions/65033693/ |
What is the function of FrozenBatchNorm2d in “maskrcnn_benchmark”? | Here is "maskrcnn_benchmark"'s GitHub.
Here is the source code for "FrozenBatchNorm2d"
import torch
from torch import nn
class FrozenBatchNorm2d(nn.Module):
def __init__(self, n):
super(FrozenBatchNorm2d, self).__init__()
self.register_buffer("weight", torch.ones(n))
self.register_buffer("bias", torch.zeros(n))
self.register_buffer("running_mean", torch.zeros(n))
self.register_buffer("running_var", torch.ones(n))
def forward(self, x):
scale = self.weight * self.running_var.rsqrt()
bias = self.bias - self.running_mean * scale
scale = scale.reshape(1, -1, 1, 1)
bias = bias.reshape(1, -1, 1, 1)
return x * scale + bias
When I put this function in my script, I found that it had almost no effect.
Here is my usage:
import torch.nn as nn
import torch
class FrozenBatchNorm2d(nn.Module):
"""
BatchNorm2d where the batch statistics and the affine parameters
are fixed
"""
def __init__(self, n):
super(FrozenBatchNorm2d, self).__init__()
self.register_buffer("weight", torch.ones(n))
self.register_buffer("bias", torch.zeros(n))
self.register_buffer("running_mean", torch.zeros(n))
self.register_buffer("running_var", torch.ones(n))
def forward(self, x):
scale = self.weight * self.running_var.rsqrt()
bias = self.bias - self.running_mean * scale
scale = scale.reshape(1, -1, 1, 1)
bias = bias.reshape(1, -1, 1, 1)
print(scale.shape,bias.shape)
return x * scale + bias
a=FrozenBatchNorm2d((1,2))
a(torch.tensor([1,2,3]))
The running result is different from what I thought.
So can someone tell me what this function exactly does?
I will appreciate it if someone could help me.
| "register_buffer" means open an RAM for some parameters which couldn't be optimized or changed during the tranning process, in another word, the "weight","bias","running_mean","running_var" are consistent values. Hence, that is the reason why this rebuild batchnorm method could be called FrozenBatchnorm2d. It's my explan, hope it can help you.
| https://stackoverflow.com/questions/65034269/ |
Remove downloaded tensorflow and pytorch(Hugging face) models | I would like to remove tensorflow and hugging face models from my laptop.
I did find one link https://github.com/huggingface/transformers/issues/861
but is there a command that can remove them? As mentioned in the link, manually deleting can cause problems, because we don't know which other files are linked to those models, are expecting some model to be present in that location, or may simply raise an error.
| The transformers library will store the downloaded files in your cache. As far as I know, there is no built-in method to remove certain models from the cache. But you can code something by yourself. The files are stored with a cryptical name alongside two additional files that have .json (.h5.json in case of Tensorflow models) and .lock appended to the cryptical name. The json file contains some metadata that can be used to identify the file. The following is an example of such a file:
{"url": "https://cdn.huggingface.co/roberta-base-pytorch_model.bin", "etag": "\"8a60a65d5096de71f572516af7f5a0c4-30\""}
We can now use this information to create a list of your cached files as shown below:
import glob
import json
import re
from collections import OrderedDict
from transformers import TRANSFORMERS_CACHE
metaFiles = glob.glob(TRANSFORMERS_CACHE + '/*.json')
modelRegex = "huggingface\.co\/(.*)(pytorch_model\.bin$|resolve\/main\/tf_model\.h5$)"
cachedModels = {}
cachedTokenizers = {}
for file in metaFiles:
with open(file) as j:
data = json.load(j)
isM = re.search(modelRegex, data['url'])
if isM:
cachedModels[isM.group(1)[:-1]] = file
else:
cachedTokenizers[data['url'].partition('huggingface.co/')[2]] = file
cachedTokenizers = OrderedDict(sorted(cachedTokenizers.items(), key=lambda k: k[0]))
Now all you have to do is to check the keys of cachedModels and cachedTokenizers and decide if you want to keep them or not. In case you want to delete them, just check for the value of the dictionary and delete the file from the cache. Don't forget to also delete the corresponding *.json and *.lock files.
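If you decide to delete one, here is a small cleanup sketch built on the dictionaries above (the key passed in is hypothetical; inspect cachedModels.keys() first):
import os

def delete_cached_model(name, cache=cachedModels):
    meta_file = cache[name]          # the .json metadata file
    blob_file = meta_file[:-5]       # strip '.json' to get the actual data file
    for f in (blob_file, meta_file, blob_file + '.lock'):
        if os.path.exists(f):
            os.remove(f)

# delete_cached_model('roberta-base')  # hypothetical key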
| https://stackoverflow.com/questions/65037368/ |
How to get indices of top-K values from a numpy array | Let suppose I have probabilities from a Pytorch or Keras predictions and result is with the softmax function
from scipy.special import softmax
probs = softmax(np.random.randn(20,10),1) # 20 instances and 10 class probabilities
probs
I want to find top-5 indices from this numpy array. All I want to do is to run a loop on the results something like:
for index in top_5_indices:
if index in result:
print('Found')
I'll see whether my results are among the top-5.
Pytorch has top-k function and I have seen numpy.argpartition but I have no idea how to get this done?
| A little more expensive, but argsort would do:
idx = np.argsort(probs, axis=1)[:,-5:]
If we are talking about pytorch:
probs = torch.from_numpy(softmax(np.random.randn(20,10),1))
values, idx = torch.topk(probs, k=5, dim=-1)
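Either way, you can then run the membership check from your question, e.g. with the numpy result (the ground-truth labels here are placeholders):
true_labels = np.random.randint(0, 10, size=20)   # placeholder ground truth
top5 = np.argsort(probs, axis=1)[:, -5:]
for i, label in enumerate(true_labels):
    if label in top5[i]:
        print('Found')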
| https://stackoverflow.com/questions/65038206/ |
Incompletable PyTorch with any CUDA version (module 'torch' has no attribute 'cuda') | I have NVidia 1080TI, Ubuntu x64, and Python 3.6.9 installed.
I was trying to launch PyTorch with command
import torch
print(torch.cuda.is_available())
and expected to see 'True' but met the error:
AttributeError: module 'torch' has no attribute 'cuda'
I tried to update PyTorch and install the last version 1.7.0 with CUDA 11.0 support. After that, I noticed some version discrepancies. nvidia-smi shows CUDA version 11.0 but nvcc -V shows 9.1. Also, I used cat /usr/local/cuda/version.txt to check CUDA version but got the error: cat: /usr/local/cuda/version.txt: No such file or directory
I installed CUDA driver 450.33 after fully nvidia purging but the error remains and nvcc -V still shows 9.1 version (after reboot also).
One more option I addressed to is conda installation but it didn't help.
What I can do to resolve the problem?
| It turned out I had a file named torch.py in my home directory, which shadowed the real package; after renaming it, the problem was solved.
Thanks. Maybe my answer will be helpful to someone.
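A quick sanity check whenever you suspect this kind of shadowing:
import torch
print(torch.__file__)  # should point into site-packages, not into your working directory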
| https://stackoverflow.com/questions/65045558/ |
Debugging pytorch code in pycharm (Feasibility) | I am trying to run a code in written in python (pytorch code) which when passed as an arguments options trains the Neural network.
if __name__ == "__main__":
args = docopt(__doc__)
myparams = args["options"]
....
/* do work */
Now if we have to run this code, I need to call it from console. python3 train.py --option1 123 etc. But in that case the debug points won't work in pycharm. Can anybody clarify how to debug in this scenario? (If you know the way it would be great if you let me know).
| Look at the menu bar: Run -> Edit Configurations -> (choose a configuration) -> Parameters. Put your script arguments (e.g. --option1 123) there, then start the script with the PyCharm debugger so your breakpoints are hit.
| https://stackoverflow.com/questions/65049919/ |
PyTorch error in trying to backward through the graph a second time | I'm trying to run this code: https://github.com/aitorzip/PyTorch-CycleGAN
I modified only the dataloader and transforms to be compatible with my data.
When trying to run it I get this error:
Traceback (most recent call last):
File "models/CycleGANs/train",
line 150, in
loss_D_A.backward()
File "/opt/conda/lib/python3.8/site-packages/torch/tensor.py", line 221, in
backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File
"/opt/conda/lib/python3.8/site-packages/torch/autograd/init.py",
line 130, in backward
Variable._execution_engine.run_backward(
RuntimeError: Trying to backward through the graph a second time, but the saved intermediate
results have already been freed. Specify retain_graph=True when
calling backward the first time.
This is the train loop up to the point of error:
for epoch in range(opt.epoch, opt.n_epochs):
for i, batch in enumerate(dataloader):
# Set model input
real_A = Variable(input_A.copy_(batch['A']))
real_B = Variable(input_B.copy_(batch['B']))
##### Generators A2B and B2A #####
optimizer_G.zero_grad()
# Identity loss
# G_A2B(B) should equal B if real B is fed
same_B = netG_A2B(real_B)
loss_identity_B = criterion_identity(same_B, real_B)*5.0
# G_B2A(A) should equal A if real A is fed
same_A = netG_B2A(real_A)
loss_identity_A = criterion_identity(same_A, real_A)*5.0
# GAN loss
fake_B = netG_A2B(real_A)
pred_fake = netD_B(fake_B)
loss_GAN_A2B = criterion_GAN(pred_fake, target_real)
fake_A = netG_B2A(real_B)
pred_fake = netD_A(fake_A)
loss_GAN_B2A = criterion_GAN(pred_fake, target_real)
# Cycle loss
# TODO: cycle loss doesn't allow for multimodality. I leave it for now but needs to be thrown out later
recovered_A = netG_B2A(fake_B)
loss_cycle_ABA = criterion_cycle(recovered_A, real_A)*10.0
recovered_B = netG_A2B(fake_A)
loss_cycle_BAB = criterion_cycle(recovered_B, real_B)*10.0
# Total loss
loss_G = loss_identity_A + loss_identity_B + loss_GAN_A2B + loss_GAN_B2A + loss_cycle_ABA + loss_cycle_BAB
loss_G.backward()
optimizer_G.step()
##### Discriminator A #####
optimizer_D_A.zero_grad()
# Real loss
pred_real = netD_A(real_A)
loss_D_real = criterion_GAN(pred_real, target_real)
# Fake loss
fake_A = fake_A_buffer.push_and_pop(fake_A)
pred_fale = netD_A(fake_A.detach())
loss_D_fake = criterion_GAN(pred_fake, target_fake)
# Total loss
loss_D_A = (loss_D_real + loss_D_fake)*0.5
loss_D_A.backward()
I am not familiar at all what it means. My guess is it's something to do with fake_A_buffer. It's just a fake_A_buffer = ReplayBuffer()
class ReplayBuffer():
def __init__(self, max_size=50):
assert (max_size > 0), 'Empty buffer or trying to create a black hole. Be careful.'
self.max_size = max_size
self.data = []
def push_and_pop(self, data):
to_return = []
for element in data.data:
element = torch.unsqueeze(element, 0)
if len(self.data) < self.max_size:
self.data.append(element)
to_return.append(element)
else:
if random.uniform(0,1) > 0.5:
i = random.randint(0, self.max_size-1)
to_return.append(self.data[i].clone())
self.data[i] = element
else:
to_return.append(element)
return Variable(torch.cat(to_return))
Error after setting loss_G.backward(retain_graph=True):
Traceback (most recent call last): File "models/CycleGANs/train",
line 150, in
loss_D_A.backward() File "/opt/conda/lib/python3.8/site-packages/torch/tensor.py", line 221, in
backward
torch.autograd.backward(self, gradient, retain_graph, create_graph) File
"/opt/conda/lib/python3.8/site-packages/torch/autograd/init.py",
line 130, in backward
Variable._execution_engine.run_backward( RuntimeError: one of the variables needed for gradient computation has been modified by an
inplace operation: [torch.FloatTensor [3, 64, 7, 7]] is at version 2;
expected version 1 instead. Hint: enable anomaly detection to find the
operation that failed to compute its gradient, with
torch.autograd.set_detect_anomaly(True).
And after setting torch.autograd.set_detect_anomaly(True)
/opt/conda/lib/python3.8/site-packages/torch/autograd/init.py:130:
UserWarning: Error detected in MkldnnConvolutionBackward. Traceback of
forward call that caused the error:
File "models/CycleGANs/train",
line 115, in
fake_B = netG_A2B(real_A)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py",
line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/Histology-Style-Transfer-Research/models/CycleGANs/models.py",
line 67, in forward
return self.model(x)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py",
line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/container.py",
line 117, in forward
input = module(input)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py",
line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/Histology-Style-Transfer-Research/models/CycleGANs/models.py",
line 19, in forward
return x + self.conv_block(x)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py",
line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/container.py",
line 117, in forward
input = module(input)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py",
line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/conv.py",
line 423, in forward
return self._conv_forward(input, self.weight)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/conv.py",
line 419, in _conv_forward
return F.conv2d(input, weight, self.bias, self.stride, (Triggered internally at
/opt/conda/conda-bld/pytorch_1603729096996/work/torch/csrc/autograd/python_anomaly_mode.cpp:104.)
Variable._execution_engine.run_backward(
Traceback (most recent call
last): File "models/CycleGANs/train", line 133, in
loss_G.backward(retain_graph=True)
File "/opt/conda/lib/python3.8/site-packages/torch/tensor.py", line 221, in
backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File
"/opt/conda/lib/python3.8/site-packages/torch/autograd/init.py",
line 130, in backward
Variable._execution_engine.run_backward( RuntimeError: Function 'MkldnnConvolutionBackward' returned nan values in its 2th output.
| loss_G.backward() should be loss_G.backward(retain_graph=True). By default, backward() frees the intermediate buffers saved during the forward pass once it has used them, so a second backward pass through the same graph fails; retain_graph=True tells autograd to keep them. Note also the typo in your discriminator block: pred_fale = netD_A(fake_A.detach()) stores the detached prediction under the wrong name, so the next line loss_D_fake = criterion_GAN(pred_fake, target_fake) silently reuses the pred_fake from the generator pass, which is still attached to the graph that loss_G.backward() already freed; fixing that typo is the cleaner fix.
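A minimal toy repro of the error and the retain_graph fix (a sketch):
import torch
x = torch.ones(3, requires_grad=True)
y = (x ** 2).sum()
y.backward()                    # frees the saved intermediate buffers
# y.backward()                  # would raise: Trying to backward through the graph a second time
y = (x ** 2).sum()
y.backward(retain_graph=True)   # buffers are kept...
y.backward()                    # ...so a second backward now works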
| https://stackoverflow.com/questions/65050791/ |
pytorch: how to apply function over all cells of 4-d tensor | I'm trying to apply a function over a 4-D tensor (I think about it as a 2-D matrix with a 2-D matrix in each cell) with the following dimensions: [N x N x N x N].
The apply function returns [1 x N] tensor, so after the apply function I'm expecting a tensor of the following dimensions: [N x N x 1 x N].
Example: let's define [4 x 4 x 4 x 4] tensor:
tensor_4d = torch.tensor([[[[1., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]],
[[1., 0., 0., 0.],
[1., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]],
[[1., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]],
[[1., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]]],
[[[0., 0., 0., 0.],
[0., 1., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]],
[[0., 0., 0., 0.],
[0., 1., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]],
[[0., 0., 0., 0.],
[0., 1., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]],
[[0., 0., 0., 0.],
[0., 1., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]]],
[[[0., 0., 1., 0.],
[0., 0., 0., 0.],
[0., 0., 1., 0.],
[0., 0., 0., 0.]],
[[0., 0., 0., 0.],
[0., 0., 1., 0.],
[0., 0., 1., 0.],
[0., 0., 0., 0.]],
[[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 1., 0.],
[0., 0., 0., 0.]],
[[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 1., 0.],
[0., 0., 0., 0.]]],
[[[0., 0., 1., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 1.],
[0., 0., 0., 1.]],
[[0., 0., 0., 0.],
[0., 0., 1., 0.],
[0., 0., 0., 1.],
[0., 0., 0., 1.]],
[[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 1.],
[0., 0., 0., 1.]],
[[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 1.]]]], dtype=torch.float64)
lets look on tensor_4d at [3][0]:
tensor_4d[3][0]
tensor([[0., 0., 1., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 1.],
[0., 0., 0., 1.]], dtype=torch.float64)
this my apply function :
def apply_function(tensor_2d):
eigenvalues, eigenvectors = torch.eig(input=tensor_2d, eigenvectors=True)
return eigenvectors[:, 2]
and this is the result of the apply function:
apply_function(tensor_4d[3][0])
tensor([-1.0000e+00, 0.0000e+00, 4.0083e-292, 0.0000e+00],
dtype=torch.float64)
so the apply_function works for each cell.
Next, I'm trying to use apply_function on the whole tensor, expecting each cell to contain the result of applying 'apply_function' to that cell. But when using the apply function I'm getting the following error:
apply_function(tensor_4d)
Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.2.3\helpers\pydev\_pydevd_bundle\pydevd_exec2.py", line 3, in Exec
exec(exp, global_vars, local_vars)
File "<input>", line 1, in <module>
File "C:/Users/LiavB/PycharmProjects/RBC/Components/RBC_torch.py", line 41, in apply_function
eigenvalues, eigenvectors = torch.eig(input=tensor_2d, eigenvectors=True)
RuntimeError: invalid argument 1: A should be 2 dimensional at ..\aten\src\TH/generic/THTensorLapack.cpp:206
| Let's try:
new_shape = (-1,) + tensor_4d.shape[2:]
flat = tensor_4d.reshape(new_shape)                                # (16, 4, 4)
out = torch.stack([apply_function(t) for t in flat])               # (16, 4)
out = out.reshape(tensor_4d.shape[0], tensor_4d.shape[1], 1, -1)   # (4, 4, 1, 4), i.e. [N, N, 1, N]
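On newer PyTorch versions, torch.linalg.eig accepts batched input, so the Python loop can be avoided entirely (a sketch; note that it returns complex tensors, unlike the deprecated torch.eig):
vals, vecs = torch.linalg.eig(tensor_4d.reshape(-1, 4, 4))   # vals: (16, 4), vecs: (16, 4, 4)
out = vecs[:, :, 2].reshape(4, 4, 1, 4)                      # column 2 of each eigenvector matrix, as in apply_function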
| https://stackoverflow.com/questions/65051014/ |
What could be reason of "ValueError: axes don't match array error" for Pytorch U-net segmentation model? | I'm trying to implement a segmentation model (which i used for another dataset succesfully before) for kaggle dataset called "Carvana Image Masking Challange".
I searched a lot but still could not figure out the reason I am getting this error. There were some suggestions to check the image dimensions (the images could be in grayscale format), but it seems I have 3 channels for both the original and mask images. I am grateful for all your support.
My code is following:
Libraries
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
from PIL import Image
import numpy as np
import cv2
import matplotlib.pyplot as plt
from torch.utils.data import DataLoader
from torch.utils.data import Dataset as BaseDataset
import albumentations as albu
import torch
import numpy as np
import segmentation_models_pytorch as smp
Data path
DATA_DIR = 'D:/Users/eugur/Belgeler/Jupyter/Segmentation_Kaggle'
x_train_dir = os.path.join(DATA_DIR, 'train')
y_train_dir = os.path.join(DATA_DIR, 'train_masks')
x_valid_dir = os.path.join(DATA_DIR, 'valid')
y_valid_dir = os.path.join(DATA_DIR, 'valid_masks')
x_test_dir = os.path.join(DATA_DIR, 'test')
helper function for data visualization
def visualize(**images):
"""PLot images in one row."""
n = len(images)
plt.figure(figsize=(16, 5))
for i, (name, image) in enumerate(images.items()):
plt.subplot(1, n, i + 1)
plt.xticks([])
plt.yticks([])
plt.title(' '.join(name.split('_')).title())
plt.imshow(image)
plt.show()
Dataset Class
class Dataset(BaseDataset):
"""
Args:
images_dir (str): path to images folder
masks_dir (str): path to segmentation masks folder
class_values (list): values of classes to extract from segmentation mask
augmentation (albumentations.Compose): data transfromation pipeline
(e.g. flip, scale, etc.)
preprocessing (albumentations.Compose): data preprocessing
(e.g. noralization, shape manipulation, etc.)
"""
CLASSES = ['car']
def __init__(
self,
images_dir,
masks_dir,
classes=None,
augmentation=None,
preprocessing=None,
):
self.ids = os.listdir(images_dir)
self.images_fps = [os.path.join(images_dir, image_id) for image_id in self.ids]
self.masks_fps = [os.path.join(masks_dir, image_id.split('.')[0]+'_mask.gif') for image_id in self.ids]
# convert str names to class values on masks
self.class_values = [self.CLASSES.index(cls.lower()) for cls in classes]
self.augmentation = augmentation
self.preprocessing = preprocessing
def __getitem__(self, i):
# read data
image = cv2.imread(self.images_fps[i])
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# mask = cv2.imread(self.masks_fps[i], 0)
mask = cv2.VideoCapture(self.masks_fps[i],0)
ret,mask = mask.read()
mask = mask/255
# extract certain classes from mask (e.g. cars)
masks = [(mask == v) for v in self.class_values]
mask = np.stack(masks, axis=-1).astype('float')
# apply augmentations
if self.augmentation:
sample = self.augmentation(image=image, mask=mask)
image, mask = sample['image'], sample['mask']
# apply preprocessing
if self.preprocessing:
sample = self.preprocessing(image=image, mask=mask)
image, mask = sample['image'], sample['mask']
return image, np.squeeze(mask,axis=3)
def __len__(self):
return len(self.ids)
Preprocessing and Augmentation
def get_training_augmentation():
train_transform = [
albu.HorizontalFlip(p=0.5),
albu.ShiftScaleRotate(scale_limit=0.5, rotate_limit=0, shift_limit=0.1, p=1, border_mode=0),
albu.PadIfNeeded(min_height=320, min_width=320, always_apply=True, border_mode=0),
albu.RandomCrop(height=320, width=320, always_apply=True),
albu.IAAAdditiveGaussianNoise(p=0.2),
albu.IAAPerspective(p=0.5),
albu.OneOf(
[
albu.CLAHE(p=1),
albu.RandomBrightness(p=1),
albu.RandomGamma(p=1),
],
p=0.9,
),
albu.OneOf(
[
albu.IAASharpen(p=1),
albu.Blur(blur_limit=3, p=1),
albu.MotionBlur(blur_limit=3, p=1),
],
p=0.9,
),
albu.OneOf(
[
albu.RandomContrast(p=1),
albu.HueSaturationValue(p=1),
],
p=0.9,
),
]
return albu.Compose(train_transform)
def get_validation_augmentation():
"""Add paddings to make image shape divisible by 32"""
test_transform = [
albu.PadIfNeeded(384, 480)
]
return albu.Compose(test_transform)
def to_tensor(x, **kwargs):
return x.transpose(0,2,1).astype('float32')
def get_preprocessing(preprocessing_fn):
"""Construct preprocessing transform
Args:
preprocessing_fn (callbale): data normalization function
(can be specific for each pretrained neural network)
Return:
transform: albumentations.Compose
"""
_transform = [
albu.Lambda(image=preprocessing_fn),
albu.Lambda(image=to_tensor, mask=to_tensor),
]
return albu.Compose(_transform)
Model Definition
ENCODER = 'se_resnext50_32x4d'
ENCODER_WEIGHTS = 'imagenet'
CLASSES = ['car']
ACTIVATION = 'sigmoid' # could be None for logits or 'softmax2d' for multicalss segmentation
DEVICE = 'cuda'
# create segmentation model with pretrained encoder
model = smp.FPN(
encoder_name=ENCODER,
encoder_weights=ENCODER_WEIGHTS,
classes=len(CLASSES),
in_channels=3,
activation=ACTIVATION,
)
preprocessing_fn = smp.encoders.get_preprocessing_fn(ENCODER, ENCODER_WEIGHTS)
Data Loader
train_dataset = Dataset(
x_train_dir,
y_train_dir,
preprocessing=get_preprocessing(preprocessing_fn),
classes=CLASSES,
)
valid_dataset = Dataset(
x_valid_dir,
y_valid_dir,
preprocessing=get_preprocessing(preprocessing_fn),
classes=CLASSES,
)
train_loader = DataLoader(train_dataset, batch_size=8, shuffle=True, num_workers=0)
valid_loader = DataLoader(valid_dataset, batch_size=1, shuffle=False, num_workers=0)
Optimer Definition
loss = smp.utils.losses.DiceLoss()
metrics = [
smp.utils.metrics.IoU(threshold=0.5),
]
optimizer = torch.optim.Adam([
dict(params=model.parameters(), lr=0.0001),
])
Training
train_epoch = smp.utils.train.TrainEpoch(
model,
loss=loss,
metrics=metrics,
optimizer=optimizer,
device=DEVICE,
verbose=True,
)
valid_epoch = smp.utils.train.ValidEpoch(
model,
loss=loss,
metrics=metrics,
device=DEVICE,
verbose=True,
)
max_score = 0
for i in range(0, 20):
print('\nEpoch: {}'.format(i))
train_logs = train_epoch.run(train_loader)
valid_logs = valid_epoch.run(valid_loader)
# do something (save model, change lr, etc.)
if max_score < valid_logs['iou_score']:
max_score = valid_logs['iou_score']
torch.save(model, './best_model.pth')
print('Model saved!')
if i == 25:
optimizer.param_groups[0]['lr'] = 1e-5
print('Decrease decoder learning rate to 1e-5!')
Error
> Epoch: 0 train: 0%| | 0/510 [00:00<?, ?it/s]
>
> --------------------------------------------------------------------------- ValueError Traceback (most recent call
> last) <ipython-input-208-d2306c5ca0ea> in <module>
> 6
> 7 print('\nEpoch: {}'.format(i))
> ----> 8 train_logs = train_epoch.run(train_loader)
> 9 valid_logs = valid_epoch.run(valid_loader)
> 10
>
> C:\ProgramData\Anaconda3\envs\segmentation\lib\site-packages\segmentation_models_pytorch\utils\train.py
> in run(self, dataloader)
> 43
> 44 with tqdm(dataloader, desc=self.stage_name, file=sys.stdout, disable=not (self.verbose)) as iterator:
> ---> 45 for x, y in iterator:
> 46 x, y = x.to(self.device), y.to(self.device)
> 47 loss, y_pred = self.batch_update(x, y)
>
> C:\ProgramData\Anaconda3\envs\segmentation\lib\site-packages\tqdm\std.py
> in __iter__(self) 1169 1170 try:
> -> 1171 for obj in iterable: 1172 yield obj 1173 # Update and possibly print the
> progressbar.
>
> C:\ProgramData\Anaconda3\envs\segmentation\lib\site-packages\torch\utils\data\dataloader.py
> in __next__(self)
> 433 if self._sampler_iter is None:
> 434 self._reset()
> --> 435 data = self._next_data()
> 436 self._num_yielded += 1
> 437 if self._dataset_kind == _DatasetKind.Iterable and \
>
> C:\ProgramData\Anaconda3\envs\segmentation\lib\site-packages\torch\utils\data\dataloader.py
> in _next_data(self)
> 473 def _next_data(self):
> 474 index = self._next_index() # may raise StopIteration
> --> 475 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
> 476 if self._pin_memory:
> 477 data = _utils.pin_memory.pin_memory(data)
>
> C:\ProgramData\Anaconda3\envs\segmentation\lib\site-packages\torch\utils\data\_utils\fetch.py
> in fetch(self, possibly_batched_index)
> 42 def fetch(self, possibly_batched_index):
> 43 if self.auto_collation:
> ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]
> 45 else:
> 46 data = self.dataset[possibly_batched_index]
>
> C:\ProgramData\Anaconda3\envs\segmentation\lib\site-packages\torch\utils\data\_utils\fetch.py
> in <listcomp>(.0)
> 42 def fetch(self, possibly_batched_index):
> 43 if self.auto_collation:
> ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]
> 45 else:
> 46 data = self.dataset[possibly_batched_index]
>
> <ipython-input-146-65256f8f536d> in __getitem__(self, i)
> 54 # apply preprocessing
> 55 if self.preprocessing:
> ---> 56 sample = self.preprocessing(image=image, mask=mask)
> 57 image, mask = sample['image'], sample['mask']
> 58
>
> C:\ProgramData\Anaconda3\envs\segmentation\lib\site-packages\albumentations\core\composition.py
> in __call__(self, force_apply, *args, **data)
> 180 p.preprocess(data)
> 181
> --> 182 data = t(force_apply=force_apply, **data)
> 183
> 184 if dual_start_end is not None and idx == dual_start_end[1]:
>
> C:\ProgramData\Anaconda3\envs\segmentation\lib\site-packages\albumentations\core\transforms_interface.py
> in __call__(self, force_apply, *args, **kwargs)
> 87 )
> 88 kwargs[self.save_key][id(self)] = deepcopy(params)
> ---> 89 return self.apply_with_params(params, **kwargs)
> 90
> 91 return kwargs
>
> C:\ProgramData\Anaconda3\envs\segmentation\lib\site-packages\albumentations\core\transforms_interface.py
> in apply_with_params(self, params, force_apply, **kwargs)
> 100 target_function = self._get_target_function(key)
> 101 target_dependencies = {k: kwargs[k] for k in self.target_dependence.get(key, [])}
> --> 102 res[key] = target_function(arg, **dict(params, **target_dependencies))
> 103 else:
> 104 res[key] = None
>
> C:\ProgramData\Anaconda3\envs\segmentation\lib\site-packages\albumentations\augmentations\transforms.py
> in apply_to_mask(self, mask, **params) 3068 def
> apply_to_mask(self, mask, **params): 3069 fn =
> self.custom_apply_fns["mask"]
> -> 3070 return fn(mask, **params) 3071 3072 def apply_to_bbox(self, bbox, **params):
>
> <ipython-input-186-4f194a842931> in to_tensor(x, **kwargs)
> 52
> 53
> ---> 54 return x.transpose(0,2,1).astype('float32')
> 55
> 56
>
> ValueError: axes don't match array
| There were two problems in the above code:
The mask was loaded with 3 channels, so after np.stack(..., axis=-1) it became a 4-D array and the 3-axis transpose in to_tensor raised "axes don't match array"; the mask should be single-channel, ending up as (H, W, 1) after stacking.
The model expects an equal number of rows and columns.
After the above changes the code works properly.
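A sketch of the kind of change that fixes the first point: load the mask as a single channel, so that after np.stack it stays 3-D and the 3-axis transpose in to_tensor works (replacing the mask handling inside __getitem__):
ret, mask = cv2.VideoCapture(self.masks_fps[i], 0).read()
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY) / 255       # (H, W) instead of (H, W, 3)
masks = [(mask == v) for v in self.class_values]
mask = np.stack(masks, axis=-1).astype('float')           # (H, W, 1), i.e. 3-D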
| https://stackoverflow.com/questions/65055133/ |
Validation loss is neither increasing nor decreasing | Usually when a model overfits, validation loss goes up and training loss goes down from the point of overfitting. But in my case, training loss still goes down while validation loss stays at the same level; hence validation accuracy also stays at the same level while training accuracy goes up. I am trying to reconstruct a 2D image from a 3D volume using UNet. The behavior is the same when I try to reconstruct a 3D volume from a 2D image, but at higher loss and lower accuracy. Can someone explain why the validation loss neither decreases nor increases past the point of overfitting?
| The trends show that your model is overfitting. Ways to overcome overfitting include:
Use data augmentation
Use more data
Use Dropout
Use regularization
Try slowing down your learning rate! (See the sketch below for what the last three points look like in code.)
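A minimal sketch of the last three points in PyTorch (layer sizes and hyperparameter values are placeholders):
import torch.nn as nn
import torch.optim as optim

head = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),                     # dropout
)
optimizer = optim.Adam(head.parameters(),
                       lr=1e-4,            # a lower learning rate
                       weight_decay=1e-5)  # L2 regularization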
| https://stackoverflow.com/questions/65057340/ |
Pytorch multiprocessing with shared memory causes matmul to be 30x slower (with only two processes) | I am trying to improve the speed of my reinforcement learning algorithm by using multiprocessing to have multiple workers generating experience at the same time. Each process just runs the forward pass of my neural net, no gradient computation is needed.
As I understand it, when passing Tensors and nn.Modules across process boundaries (using torch.multiprocessing.Queue or torch.multiprocessing.Pool), the tensor data is moved to shared memory, which shouldn't be any slower than non-shared memory.
However, when I run my multiprocess code with 2 processes (on an 8 core machine), I find that my pytorch operations become more than 30x slower, more than counteracting the speedup from running two processes simultaneously.
I profiled my application to find which operations specifically are slowing down. I found that much of my time was spend in nn.functional.linear(), specifically on this line inside a Tensor.matmul call:
output = input.matmul(weight.t())
I added a timer just to this specific matmul call, and I found that when one process is running, this operation takes less than 0.3 milliseconds, but when two processes are running, it takes more than 10 milliseconds. Note that in both cases the weight matrix has been put in shared memory and passed across process boundaries to a worker process, the only difference is that in the second case there are two worker processes instead of one.
For reference, the shapes of input and weight tensors are torch.Size([1, 24, 180]) and torch.Size([31, 180]), respectively.
What could be causing this drastic slowdown? is there some subtlety to using torch multiprocessing or shared memory that is not mentioned in any of the documentation? I feel like there must be some hidden lock that is causing contention here, because this drastic slowdown makes no sense to me.
| It seems like this was caused by a bad interaction of OpenMP (used by pytorch by default) and multiprocessing. This is a known issue in pytorch (https://github.com/pytorch/pytorch/issues/17199) and I was even hitting deadlocks in certain configurations I used to debug. Turning off OpenMP using torch.set_num_threads(1) fixed the issue for me, and immediately sped up my tensor operations in the multiple processes case, presumably, by bypassing internal locking OpenMP was doing.
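In practice that means calling it at the start of each worker process, e.g. (a sketch; the worker function and its arguments are placeholders):
import torch
import torch.multiprocessing as mp

def worker(model, inputs):
    torch.set_num_threads(1)   # disable intra-op (OpenMP) threading in this process
    with torch.no_grad():
        out = model(inputs)    # forward pass only, as in the question
    return out

# each subprocess started via mp.Process / mp.spawn runs `worker`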
| https://stackoverflow.com/questions/65057388/ |
Pytorch - (Categorical) Cross Entropy Loss using one hot encoding and softmax | I'm looking for a cross entropy loss function in Pytorch that is like the CategoricalCrossEntropyLoss in Tensorflow.
My labels are one hot encoded and the predictions are the outputs of a softmax layer. For example (every sample belongs to one class):
targets = [0, 0, 1]
predictions = [0.1, 0.2, 0.7]
I want to compute the (categorical) cross entropy on the softmax values and do not take the max values of the predictions as a label and then calculate the cross entropy. Unfortunately, I did not find an appropriate solution since Pytorch's CrossEntropyLoss is not what I want and its BCELoss is also not exactly what I need (isn't it?).
Does anyone know which loss function to use in Pytorch or how to deal with it?
Many thanks in advance!
| I thought Tensorflow's CategoricalCrossentropy was equivalent to PyTorch's CrossEntropyLoss, but it seems not: the former takes one-hot-encoded targets while the latter takes class-index labels. The difference is:
torch.nn.CrossEntropyLoss is a combination of torch.nn.LogSoftmax and torch.nn.NLLLoss(): loss(x, class) = -log(exp(x[class]) / sum_j exp(x[j]))
tf.keras.losses.CategoricalCrossentropy is something like: loss(y, y_hat) = -sum_i y_i * log(y_hat_i)
Your predictions have already been through a softmax. So only the negative log-likelihood needs to be applied. Based on what was discussed here, you could try this:
class CategoricalCrossEntropyLoss(nn.Module):
def __init__(self):
super().__init__()
def forward(self, y_hat, y):
return F.nll_loss(y_hat.log(), y.argmax(dim=1))
Above the prediction vector is converted from one-hot-encoding to label with torch.Tensor.argmax.
If that's correct, why not just use torch.nn.CrossEntropyLoss in the first place? You would just have to remove the softmax on your model's last layer and convert your one-hot targets to label indices.
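For example (a sketch; model, x and y_onehot are assumed names):
criterion = nn.CrossEntropyLoss()
logits = model(x)                                 # last layer without softmax
loss = criterion(logits, y_onehot.argmax(dim=1))  # one-hot -> class indices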
| https://stackoverflow.com/questions/65059829/ |
How to call a network in tf.keras.Sequential()? | If I use pytorch, I could use [index] to loop the layers:
layers = nn.ModuleList()
q = nn.ModuleList()
for _ in range(10):
layers.append(attn)
q.append(nn.Linear(dim1, dim2))
list = []
for index, layer in enumerate(self.layers):
Q = q[index](inputTensor)
list.append(layer(attn))
so when we use tensorflow, can we still use index like pytorch?
layers = tf.keras.Sequential()
q = tf.keras.Sequential()
for _ in range(10):
layers.add(attn)
q.add(Dense(dim2))
list = []
for index, layer in enumerate(layers):
Q = layer[index](inputTensor)
list.add(layer(att))
| Yes, it is possible; model.layers is a plain Python list, so you can both iterate over it and index into it:
model = tf.keras.Sequential([
tf.keras.layers.Dense(128),
tf.keras.layers.Dense(1) ])
for index, layer in enumerate(model.layers):
Q = model.layers[index] # the same object as `layer`
| https://stackoverflow.com/questions/65061871/ |
How to convert yolov4 weights.wt to pytorch weights .pt? | I was trying out the yolov4 from https://github.com/theAIGuysCode/YOLOv4-Cloud-Tutorial and I wanted to convert the weights from .wt files to .pt files for pytorch
Is there a way I can do that?
| Pytorch YOLOv4 (I am biased as I am a maintainer) has the ability to do this with darknet2pytorch. The following is an example snippet
from tool.darknet2pytorch import Darknet
WEIGHTS = Darknet(cfgfile)
WEIGHTS.load_weights(weightfile)
Where cfgfile is your darknet config.cfg file, and weightfile is your darknet .wt weights.
WEIGHTS is now an ordinary PyTorch model, which you can save however you'd like.
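For example (a sketch):
torch.save(WEIGHTS.state_dict(), 'yolov4.pt')  # save just the weights
# or: torch.save(WEIGHTS, 'yolov4.pt')         # pickle the whole module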
| https://stackoverflow.com/questions/65067023/ |
Implementing a simple optimization algorithm in PyTorch | I'm currently learning PyTorch in order to utilize its open source autograd feature, and as an exercise for myself, I want to implement a simple optimization algorithm that I've already implemented in MATLAB. As a simple example, say I'm trying to solve the problem min_x 1/2 x'Ax - b'x, i.e. find the vector x which minimizes the quantity x'Ax - b'x, given both A and b. A simple gradient descent algorithm with exact line search in MATLAB may look something like this:
% initialize x = zeros(n, 1) where n is the length of b
while residual > tolerance
grad = A*x - b; % compute the gradient of the objective
alpha = norm(grad)^2/(grad'*A*grad); % compute step-size alpha by exact line search
x = x - alpha*grad; % do a gradient step
residual = norm(grad); % compute residual
objective = x'*A*x - b'*x; % compute objective value at current iteration
How would I implement this optimization algorithm in PyTorch? Specifically, I would want to do an identical optimization loop, where I replace my own computation of the gradient with Torch's autograd feature. In other words, I want to perform the exact same algorithm as above in PyTorch, except instead of computing the gradient myself, I simply use PyTorch's autograd feature to compute the gradient. In this way, I don't want to call any given optimizer (like SGD or Adam)--just write the algorithm myself with the only difference being that the gradient is computed by PyTorch. I plan to compare the results of the above numpy/MATLAB implementation with the explicit gradient computation vs the PyTorch implementation with what I assume is a numerical approximation of the gradient.
Thank you very much!
| Basically you need to use autograd in PyTorch. This is not a complete program, but it will look something along these lines:
In each iteration do the following:
Specify x.requires_grad=True because you need its gradient. Then compute your objective function:
x.requires_grad = True
obj_function = torch.matmul(x.t(),torch.matmul(A,x)) * .5 - torch.matmul(b,x)
Clear old gradient results of x:
x.grad = None
Call backward() on obj_function. This will automatically compute the gradients of the tensors involved in the operation:
obj_function.backward()
Set x.requires_grad to false because you need to modify x, but pytorch does not allow inplace modification of a tensor that requires grad:
x.requires_grad = False
x = x - x.grad * step_size
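Putting the steps together with the exact line search from your MATLAB version, the whole loop might look like this (a sketch; n, A, b and tolerance are assumed to be given):
x = torch.zeros(n)
residual = float('inf')
while residual > tolerance:
    x.requires_grad = True
    obj_function = torch.matmul(x, torch.matmul(A, x)) * .5 - torch.matmul(b, x)
    x.grad = None
    obj_function.backward()
    grad = x.grad
    residual = grad.norm().item()
    alpha = residual ** 2 / torch.matmul(grad, torch.matmul(A, grad))  # exact line search
    x = (x - alpha * grad).detach()  # detach so x is a fresh leaf for the next iteration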
| https://stackoverflow.com/questions/65081845/ |
dropout(): argument 'input' (position 1) must be Tensor, not str when using Bert with Huggingface | My code was working fine and when I tried to run it today without changing anything I got the following error:
dropout(): argument 'input' (position 1) must be Tensor, not str
Would appreciate if help could be provided.
Could be an issue with the data loader?
| If you use HuggingFace, this information could be useful. I had the same error and fixed it by adding the parameter return_dict=False to the model call before the dropout:
outputs = model(**inputs, return_dict=False)
| https://stackoverflow.com/questions/65082243/ |
PyTorch's DataParallel is only using one GPU | I'm trying to use two GPU's for training a model in PyTorch. I'm using torch.nn.DataParallel but for some reason nvidia-smi is saying that I'm only using one GPU.
The code is something along the lines of:
>>> import torch.nn as nn
>>> model = SomeModel()
>>> model = nn.DataParallel(model)
>>> model.to('cuda')
When I run the program and observe the output of nvidia-smi, I only see GPU 0 running. Would anybody know what the problem is?
| You should be using nn.DataParallel(model, device_ids=[0, 1]) in order to use GPU #0 and GPU #1. Note that the model.to('cuda') call afterwards is still needed: DataParallel requires the module's parameters and buffers to live on device_ids[0] before the forward pass. Also keep in mind that DataParallel splits each batch across the GPUs, so with a batch size of 1 only one GPU will show activity in nvidia-smi.
| https://stackoverflow.com/questions/65084728/ |
Run Pytorch stacked model on Colab TPU | I am trying to run my model on a Colab multi-core TPU, but I really don't know how to do it. I tried this tutorial notebook but I got some errors I can't fix, and I think there may be a simpler way to do it.
About my model:
class BERTModel(nn.Module):
def __init__(self,...):
super().__init__()
if ...:
self.bert_model = XLMRobertaModel.from_pretrained(...) # huggingface XLM-R
elif ...:
self.bert_model = others_model.from_pretrained(...) # huggingface XLM-R
... # some other model's parameters
def forward(self,...):
bert_input = ...
output = self.bert_model(bert_input)
... # some function that process on output
def other_function(self,...):
# just doing some process on output. like concat layers's embedding and return ...
class MAINModel(nn.Module):
def __init__(self,...):
super().__init__()
print('Using model 1')
self.bert_model_1 = BERTModel(...)
print('Using model 2')
self.bert_model_2 = BERTModel(...)
self.linear = nn.Linear(...)
def forward(self,...):
bert_input = ...
bert_output = self.bert_model(bert_input)
linear_output = self.linear(bert_output)
return linear_output
Can you please tell me how to run a model like mine on a Colab TPU? I used Colab Pro to make sure RAM is not a big problem. Thank you very much.
| I would work off the examples here: https://github.com/pytorch/xla/tree/master/contrib/colab
Maybe start with a simpler model like this: https://github.com/pytorch/xla/blob/master/contrib/colab/mnist-training.ipynb
In the pseudocode you shared, there is no reference to the torch_xla library, which is required to use PyTorch on TPUs. I'd recommend starting with on of the working Colab notebooks in that directory I shared and then swapping out parts of the model with your own model. There are a few (usually like 3-4) places in the overall training code you need to modify for a model that runs on GPUs using native PyTorch if you want to run that model on TPUs. See here for a description of some of the changes. The other big change is to wrap the default dataloader with a ParallelLoader as shown in the example MNIST colab I shared
If you have any specific error you see in one of the Colabs, feel free to open an issue : https://github.com/pytorch/xla/issues
| https://stackoverflow.com/questions/65117817/ |
Does PyTorch have a function to calculate a correlation coefficient matrix, like numpy.corrcoef()? | Does PyTorch have a function to calculate a correlation coefficient matrix, like numpy.corrcoef()?
| fastai (implemented heavily in pytorch) provides a suite of correlation coefficients including Pearson, Spearman, and Matthews (which probably is not what you want). fastai's documentation lists all of the stored commands here. You can use them via hooks during Pytorch training.
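If you'd rather stay in pure PyTorch, the Pearson correlation matrix is also straightforward to write by hand (a sketch mirroring numpy.corrcoef's rows-as-variables convention):
import torch

def corrcoef(x):
    # x: (variables, observations), like numpy.corrcoef
    xm = x - x.mean(dim=1, keepdim=True)
    c = xm @ xm.t()                      # unnormalized covariance
    d = torch.diagonal(c).sqrt()
    return c / (d.unsqueeze(0) * d.unsqueeze(1))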
| https://stackoverflow.com/questions/65120345/ |
The difference of loading model parameters between load_state_dict and nn.Parameter in pytorch | When I wanna assign part of pre-trained model parameters to another module defined in a new model of PyTorch, I got two different outputs using two different methods.
The Network is defined as follows:
class Net:
def __init__(self):
super(Net, self).__init__()
self.resnet = torch.hub.load('pytorch/vision', 'resnet18', pretrained=True)
self.resnet = nn.Sequential(*list(self.resnet.children())[:-1])
self.freeze_model(self.resnet)
self.classifier = nn.Sequential(
nn.Dropout(),
nn.Linear(512, 256),
nn.ReLU(),
nn.Linear(256, 3),
)
def forward(self, x):
out = self.resnet(x)
out = out.flatten(start_dim=1)
out = self.classifier(out)
return out
What I want is to assign pre-trained parameters to classifier in the net module. Two different ways were used for this task.
# First way
net.load_state_dict(torch.load('model_CNN_pretrained.ptl'))
# Second way
params = torch.load('model_CNN_pretrained.ptl')
net.classifier[1].weight = nn.Parameter(params['classifier.1.weight'], requires_grad =False)
net.classifier[1].bias = nn.Parameter(params['classifier.1.bias'], requires_grad =False)
net.classifier[3].weight = nn.Parameter(params['classifier.3.weight'], requires_grad =False)
net.classifier[3].bias = nn.Parameter(params['classifier.3.bias'], requires_grad =False)
The parameters were assigned correctly but got two different outputs from the same input data. The first method works correctly, but the second doesn't work well. Could some guys point what the difference of these two methods?
| Finally, I found out where the problem was.
During the pre-training process, the buffer parameters in the BatchNorm2d layers of the ResNet18 model were changed even though we set requires_grad of the parameters to False. These running statistics are updated from the input data while the model is in train() mode, and stay unchanged after model.eval().
Here is a link about how to freeze the BN layers:
How to freeze BN layers while training the rest of network (mean and var wont freeze)
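A sketch of the usual workaround from that link: after calling net.train(), put the BatchNorm modules back into eval mode so their running statistics stay frozen:
net.train()
for m in net.resnet.modules():
    if isinstance(m, nn.BatchNorm2d):
        m.eval()   # running_mean / running_var are no longer updated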
| https://stackoverflow.com/questions/65127800/ |
PyTorch multiprocessing: Do I need to use Lock() when accessing a shared model? | I have some questions about using the torch.multiprocessing module. Let’s say I have a torch.nn.Module called model and I call model.share_memory() on it.
What happens if two threads call the forward(), i.e. model(input) at the same time? Is it safe? Or should I use Lock mechanisms to be sure that model is not accessed at the same time by multiple threads?
Similarly, what happens if two or more threads have an optimizer working on model.parameters() and they call optimizer.step() at the same time?
I ask these questions because I often see the optimizer.step() being called on shared models without lock mechanisms (i.e. in RL implementations of A3C or ACER) and I wonder if it is a safe thing to do.
| It doesn't have to be protected by locks: these implementations deliberately run Hogwild-style, i.e. the workers update the shared parameters asynchronously and lock-free, and occasional races are accepted. Quoting from the docs,
Using torch.multiprocessing, it is possible to train a model
asynchronously, with parameters either shared all the time, or being
periodically synchronized. In the first case, we recommend sending
over the whole model object, while in the latter, we advise to only
send the state_dict().
| https://stackoverflow.com/questions/65132527/ |
PyTorch's torch.nn.functional.interpolate behaving strangely | I'm having issues with PyTorch's tensor-resizing options. In the following code, x is a dataset of 888 64x64 RGB images of pokemon. xs is to be a dictionary of the same dataset at different resolutions.
def load_data():
pokemon = []
for png in os.listdir("pokemon"):
pokemon.append(imageio.imread("pokemon/" + png))
pokemon = np.array(pokemon)
s = pokemon.shape
pokemon = pokemon.reshape((s[0], s[-1], s[1], s[2]))
pokemon = (pokemon.astype(np.float32) - 127.5)/127.5
return(pokemon)
x = load_data()
ss = [2, 1, 1/2, 1/4, 1/8, 1/16]
xs = {s : nn.functional.interpolate(torch.from_numpy(x), scale_factor = s, mode='nearest') for s in ss}
As expected, the output shapes include all 888 RGB images with new heights and widths.
print("Data shape:")
for x in xs.values():
print(" ", x.shape)
> Data shape:
> torch.Size([888, 3, 128, 128])
> torch.Size([888, 3, 64, 64])
> torch.Size([888, 3, 32, 32])
> torch.Size([888, 3, 16, 16])
> torch.Size([888, 3, 8, 8])
> torch.Size([888, 3, 4, 4])
However, the images weren't sensibly resized.
When s = 1, the pokemon are in their ordinary 64x64 format, as expected:
When s = 2, the pokemon become Pollocks, overlapping in different colors:
When s = 1/2, the pokemon are cut into pieces and rearranged:
Is there an alternative PyTorch tool for resizing? I encounter roughly the same problem using AvgPool2d and Upsample instead. I will be resizing non-image tensors, too, so PIL isn't always an option. In any case, using the GPU instead of CPU would be helpful.
| I have solved my problem. I was not used to converting images from (Batch, Height, Width, Channels) to (Batch, Channels, Height, Width) for PyTorch: reshape changes the shape without moving any data, which scrambles the pixels. Replacing the reshape lines with np.moveaxis fixed the issue. Thanks, everyone, for your help.
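For reference, the fix replaces the reshape in load_data with a real axis move (a one-line sketch):
pokemon = np.moveaxis(pokemon, -1, 1)  # (B, H, W, C) -> (B, C, H, W) without scrambling the pixels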
| https://stackoverflow.com/questions/65136845/ |
Should a data batch be moved to CPU and converted (from torch Tensor) to a numpy array when doing evaluation w.r.t. a metric during the training? | I am going through Andrew Ng’s tutorial from the CS230 Stanford course, and in every epoch of the training, evaluation is performed by calculating the metrics.
But before calculating the metrics, they are sending the batches to CPU and converting them to numpy arrays (code here).
# extract data from torch Variable, move to cpu, convert to numpy arrays
output_batch = output_batch.data.cpu().numpy()
labels_batch = labels_batch.data.cpu().numpy()
# compute all metrics on this batch
summary_batch = {metric: metrics[metric](output_batch, labels_batch) for metric in metrics}
My question is: why do they do that? Why don’t they just calculate the metrics (which is done here) on GPU using torch methods (e.g. torch.sum as opposed to np.sum)?
I would think that GPU to CPU transfers would slow things down, so there should be a very good reason for doing them?
I am new to PyTorch so I might be missing something.
| Correct me if I'm wrong. Sending the data back to the CPU helps reduce the GPU load, even though that memory would be reused anyway on the next loop iteration. Furthermore, I believe converting to numpy has the advantage of freeing memory, since you are detaching your data from the computation graph. You end up manipulating labels_batch.cpu().numpy(), a plain fixed array, vs labels_batch, a tensor attached to the entire network through linked backward_fn callbacks.
| https://stackoverflow.com/questions/65179954/ |
How to Create Class Label for Mosaic Augmentation in Image Classification? | To create a class label in CutMix or MixUp type augmentation, we can use beta such as np.random.beta or scipy.stats.beta and do as follows for two labels:
label = label_one*beta + (1-beta)*label_two
But what if we've more than two images? In YoLo4, they've tried an interesting augmentation called Mosaic Augmentation for object detection problems. Unlike CutMix or MixUp, this augmentation creates augmented samples with 4 images. In object detection cases, we can compute the shift of each instance co-ords and thus possible to get the proper ground truth, here. But for only image classification cases, how can we do that?
Here is a starter.
import tensorflow as tf
import matplotlib.pyplot as plt
import random
(train_images, train_labels), (test_images, test_labels) = \
tf.keras.datasets.cifar10.load_data()
train_images = train_images[:10,:,:]
train_labels = train_labels[:10]
train_images.shape, train_labels.shape
((10, 32, 32, 3), (10, 1))
Here is a function we've written for this augmentation (it's ugly, with an inner-outer loop; please suggest how to do it more efficiently):
def mosaicmix(image, label, DIM, minfrac=0.25, maxfrac=0.75):
'''image, label: batches of samples
'''
xc, yc = np.random.randint(DIM * minfrac, DIM * maxfrac, (2,))
indices = np.random.permutation(int(image.shape[0]))
mosaic_image = np.zeros((DIM, DIM, 3), dtype=np.float32)
final_imgs, final_lbs = [], []
# Iterate over the full indices
for j in range(len(indices)):
# Take 4 sample for to create a mosaic sample randomly
rand4indices = [j] + random.sample(list(indices), 3)
# Make mosaic with 4 samples
for i in range(len(rand4indices)):
if i == 0: # top left
x1a, y1a, x2a, y2a = 0, 0, xc, yc
x1b, y1b, x2b, y2b = DIM - xc, DIM - yc, DIM, DIM # from bottom right
elif i == 1: # top right
x1a, y1a, x2a, y2a = xc, 0, DIM , yc
x1b, y1b, x2b, y2b = 0, DIM - yc, DIM - xc, DIM # from bottom left
elif i == 2: # bottom left
x1a, y1a, x2a, y2a = 0, yc, xc, DIM
x1b, y1b, x2b, y2b = DIM - xc, 0, DIM, DIM-yc # from top right
elif i == 3: # bottom right
x1a, y1a, x2a, y2a = xc, yc, DIM, DIM
x1b, y1b, x2b, y2b = 0, 0, DIM-xc, DIM-yc # from top left
# Copy-Paste
mosaic_image[y1a:y2a, x1a:x2a] = image[i,][y1b:y2b, x1b:x2b]
# Append the Mosiac samples
final_imgs.append(mosaic_image)
return final_imgs, label
The augmented samples, currently with the wrong labels.
data, label = mosaicmix(train_images, train_labels, 32)
plt.imshow(data[5]/255)
However, here are some more examples to motivate you. Data is from the Cassava Leaf competition.
| We already know that in CutMix, λ is a float drawn from the beta distribution Beta(α, α), and that it performs best when α = 1. If we fix α = 1, λ is simply sampled from the uniform distribution.
Simply put, λ is just a floating-point number between 0 and 1.
So, for 2 images only,
if we use λ for the 1st image, we can calculate the remaining unknown portion simply as 1-λ.
But for 3 images, if we use λ for the 1st image, we cannot calculate the other 2 unknowns from that single λ. If we really want to do so, we need 2 random numbers for 3 images. In the same way, for n images we need n-1 free random variables, and in all cases the weights must sum to 1 (for example, λ + (1-λ) == 1). If the sum is not 1, the label will be wrong!
For this purpose Dirichlet distribution might be helpful because it helps to generate quantities that sum to 1. A Dirichlet-distributed random variable can be seen as a multivariate generalization of a Beta distribution.
>>> np.random.dirichlet((1, 1), 1) # for 2 images. Equivalent to λ and (1-λ)
array([[0.92870347, 0.07129653]])
>>> np.random.dirichlet((1, 1, 1), 1) # for 3 images.
array([[0.38712673, 0.46132787, 0.1515454 ]])
>>> np.random.dirichlet((1, 1, 1, 1), 1) # for 4 images.
array([[0.59482542, 0.0185333 , 0.33322484, 0.05341645]])
In CutMix, the size of the cropped part of an image has a relation with λ which weighting the corresponding labels.
So, for multiple λ, you also need to calculate them accordingly.
# let's say for 4 images
# I am not sure of the proper way; this is only a sketch.
image_list = [...]   # 4 images, each of shape (H, W, 3)
label_list = [...]   # 4 one-hot labels
new_img = np.zeros((H, W, 3))
beta_list = np.random.dirichlet((1, 1, 1, 1), 1)[0]
for idx, beta in enumerate(beta_list):
    x0, y0, w, h = get_cropping_params(beta, full_img)  # something like this
    new_img[y0:y0 + h, x0:x0 + w] = image_list[idx][y0:y0 + h, x0:x0 + w]
    label_list[idx] = label_list[idx] * beta
new_label = sum(label_list)  # weighted label; the weights sum to 1
| https://stackoverflow.com/questions/65181294/ |
How to process TransformerEncoderLayer output in pytorch | I am trying to use bio-bert sentence embeddings for text classification of longer pieces of text.
As it currently stands I standardize the number of sentences in each piece of text (some sentences are only comprised of ("[PAD]") and run each sentence through biobert to get sentence vectors as they do here:
https://pypi.org/project/biobert-embedding/
I then run those embeddings through a TrasnformerEncoder with 8 layers and 16 attention heads.
The TrasnformerEncoder outputs something of shape (batch_size, num_sentences, embedding_size).
I then try to decode this with a Linear layer and map it to my classes (of which there are 7) and softmax the output to get probabilities.
My loss function is simply nn.CrossEntropyLoss().
At first, I summed over dimensions 1 of the TransformerEncoder output to get something of size (batch_size, embedding_size). This invariable led to my network converging on always predicting one of the labels with absolute certainty. Usually the most common label in the dataset.
I then tried only taking the output for the last sentence of the TransformerEncoder output. i.e. TransformerEncoderOutput[:, -1, :].
This resulted in something similar.
I then tried running my Linear layer on each of the outputs of the TransformerEncoder to produce a tensor of size (batch_size, num_sentences, 7). I then sum over dim 1 (making a tensor of size (batch_size, 7)) and softmax as usual. The idea here is that every sentence gets to vote for the label after being informed about its place in the sequence.
This converged even more quickly to just predicting 1 for one of the labels and vanishingly small values for the others.
I feel like I am misunderstanding how to use the output of a pytorch Transformer somehow.
My learning rate is very low, 0.00001, and that helped delay the convergence but it converged eventually anyway.
What this suggests to me is that my network is incapable of figuring anything out about the text and is just learning to find the most common labels. I would guess that this is either a problem with my loss function or a problem with how I am using the Transformer.
Is there a glaring flaw in the architecture that I have laid out?
| So the input and output shape of the transformer-encoder is (batch-size, sequence-length, embedding-size).
There are three possibilities to process the output of the transformer encoder (when not using the decoder).
you take the mean of the sequence-length dimension:
x = self.transformer_encoder(x)
x = x.reshape(batch_size, seq_size, embedding_size)
x = x.mean(1)
sum it up as you said:
x = self.transformer_encoder(x)
x = x.reshape(batch_size, seq_size, embedding_size)
x = x.sum(1)
using a recurrent neural network to combine the information along the sequence-length dimension:
x = self.transformer_encoder(x)
x = x.reshape(batch_size, seq_len, embedding_size)
# init hidden state
hidden = torch.zeros(layers, batch_size, embedding_size).to(device=device)
x, hidden = self.rnn(x, hidden)
x = x.reshape(batch_size, seq_size, embedding_size)
# take last output
x = x[:, -1]
Taking the last element of the Transformer output isn't a good idea, I think, because then you only use 1/seq-len of the information; with an RNN, by contrast, the last output still contains information from every other position.
I'd say that taking the mean is the best idea.
As for the learning rate: for me it always worked much better with warmup training. If you don't know what that is: you start at a low learning rate, for example 0.00001, and increase it until you reach some target lr, for example 0.002. From then on you just decay the lr as usual.
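A small warmup sketch (the step count and schedule here are placeholders; optimizer is assumed to be created with the target lr):
warmup_steps = 4000
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer,
    lambda step: min((step + 1) / warmup_steps,               # linear warmup...
                     (warmup_steps / (step + 1)) ** 0.5))     # ...then inverse-sqrt decay
# call scheduler.step() once per training step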
| https://stackoverflow.com/questions/65190217/ |
Diagonal embedding of a batch of matrices in pytorch? | If you are given a collection of n x n matrices say m of them, is there a predefined function in pytorch that performs a diagonal embedding on all of these into a larger matrix of dimension nm x nm?
To be concrete, what I am looking for is say you have two 2 x 2 identity matrices, then their diagonal embedding into a 4 x 4 matrix would be the identity 4 x 4 matrix.
Something like:
torch.block_diag
but this expects you to feed each matrix as a separate argument.
| Your question doesn't specify how you get your m tensors. Let's say you have
# channel first tensors
a = torch.ones(4,2,2)
or
# a list of tensors
a = [torch.ones(2,2) for _ in range(4)]
then you can unpack that in block_diag:
>>> torch.block_diag(*a)
tensor([[1., 1., 0., 0., 0., 0., 0., 0.],
[1., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 1., 1., 0., 0., 0., 0.],
[0., 0., 1., 1., 0., 0., 0., 0.],
[0., 0., 0., 0., 1., 1., 0., 0.],
[0., 0., 0., 0., 1., 1., 0., 0.],
[0., 0., 0., 0., 0., 0., 1., 1.],
[0., 0., 0., 0., 0., 0., 1., 1.]])
| https://stackoverflow.com/questions/65191270/ |
Convert a simple cnn from keras to pytorch | Can anyone please help me to convert this model to PyTorch? I already tried to convert from Keras to PyTorch like this How can I convert this keras cnn model to pytorch version but training results were different. Thank you.
input_3d = (1, 64, 96, 96)
pool_3d = (2, 2, 2)
model = Sequential()
model.add(Convolution3D(8, 3, 3, 3, name='conv1', input_shape=input_3d,
data_format='channels_first'))
model.add(MaxPooling3D(pool_size=pool_3d, name='pool1'))
model.add(Convolution3D(8, 3, 3, 3, name='conv2',data_format='channels_first'))
model.add(MaxPooling3D(pool_size=pool_3d, name='pool2'))
model.add(Convolution3D(8, 3, 3, 3, name='conv3',data_format='channels_first'))
model.add(MaxPooling3D(pool_size=pool_3d, name='pool3'))
model.add(Flatten())
model.add(Dense(2000, activation='relu', name='dense1'))
model.add(Dropout(0.5, name='dropout1'))
model.add(Dense(500, activation='relu', name='dense2'))
model.add(Dropout(0.5, name='dropout2'))
model.add(Dense(3, activation='softmax', name='softmax'))
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv1 (Conv3D) (None, 8, 60, 94, 94) 224
_________________________________________________________________
pool1 (MaxPooling3D) (None, 8, 30, 47, 47) 0
_________________________________________________________________
conv2 (Conv3D) (None, 8, 28, 45, 45) 1736
_________________________________________________________________
pool2 (MaxPooling3D) (None, 8, 14, 22, 22) 0
_________________________________________________________________
conv3 (Conv3D) (None, 8, 12, 20, 20) 1736
_________________________________________________________________
pool3 (MaxPooling3D) (None, 8, 6, 10, 10) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 4800) 0
_________________________________________________________________
dense1 (Dense) (None, 2000) 9602000
_________________________________________________________________
dropout1 (Dropout) (None, 2000) 0
_________________________________________________________________
dense2 (Dense) (None, 500) 1000500
_________________________________________________________________
dropout2 (Dropout) (None, 500) 0
_________________________________________________________________
softmax (Dense) (None, 3) 1503
=================================================================
| Your PyTorch equivalent of the Keras model would look like this:
import torch
import torch.nn as nn
class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.maxpool = nn.MaxPool3d((2, 2, 2))
        self.conv1 = nn.Conv3d(in_channels=1, out_channels=8, kernel_size=3)
        self.conv2 = nn.Conv3d(in_channels=8, out_channels=8, kernel_size=3)
        self.conv3 = nn.Conv3d(in_channels=8, out_channels=8, kernel_size=3)
        self.linear1 = nn.Linear(4800, 2000)
        self.dropout1 = nn.Dropout(0.5)  # plain Dropout: the inputs here are 2D (batch, features)
        self.linear2 = nn.Linear(2000, 500)
        self.dropout2 = nn.Dropout(0.5)
        self.linear3 = nn.Linear(500, 3)
    def forward(self, x):
        out = self.maxpool(self.conv1(x))
        out = self.maxpool(self.conv2(out))
        out = self.maxpool(self.conv3(out))
        # Flattening process
        b, c, d, h, w = out.size()  # batch_size, channels, depth, height, width
        out = out.view(-1, c * d * h * w)
        # the Keras dense1/dense2 layers use ReLU activations, so apply them here as well
        out = self.dropout1(torch.relu(self.linear1(out)))
        out = self.dropout2(torch.relu(self.linear2(out)))
        out = self.linear3(out)
        # note: if you train with nn.CrossEntropyLoss, return the raw logits instead,
        # since that loss applies log-softmax internally
        out = torch.softmax(out, 1)
        return out
A driver program to test the model:
inputs = torch.randn(8, 1, 64, 96, 96)
model = CNN()
outputs = model(inputs)
print(outputs.shape) # torch.Size([8, 3])
| https://stackoverflow.com/questions/65192453/ |
RuntimeError: Found dtype Long but expected Float: When using criterion | I realized this question has been asked many times but I cannot find any solution which fixes my issue. I found this code online and tried to run it to understand how it actually works. I found the error is RuntimeError: Found dtype Long but expected Float. It happens at errD_real = criterion(output, label). I tried to add .float() to change the type, but it does not work. I use Pytorch 1.7.0, Python 3.8.3, Anaconda 1.9.12, Cudnn 7.6.0. The error and code are below.
runfile('C:/Summer2020/computer_vision/final_project/copy_of_chest_x_ray_shenanigans_(dcgan).py', wdir='C:/Summer2020/computer_vision/final_project')
Generator(
(main): Sequential(
(0): ConvTranspose2d(100, 512, kernel_size=(4, 4), stride=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): ConvTranspose2d(512, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
(6): ConvTranspose2d(256, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(7): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(8): ReLU(inplace=True)
(9): ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(10): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(11): ReLU(inplace=True)
(12): ConvTranspose2d(64, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(13): Tanh()
)
)
Discriminator(
(main): Sequential(
(0): Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(1): LeakyReLU(negative_slope=0.2, inplace=True)
(2): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(3): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(4): LeakyReLU(negative_slope=0.2, inplace=True)
(5): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(6): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(7): LeakyReLU(negative_slope=0.2, inplace=True)
(8): Conv2d(256, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(9): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(10): LeakyReLU(negative_slope=0.2, inplace=True)
(11): Conv2d(512, 1, kernel_size=(4, 4), stride=(1, 1), bias=False)
(12): Sigmoid()
)
)
Starting Training Loop...
Type of output is <class 'torch.Tensor'>
Type of label is <class 'torch.Tensor'>
Traceback (most recent call last):
File "C:\Summer2020\computer_vision\final_project\copy_of_chest_x_ray_shenanigans_(dcgan).py", line 337, in <module>
main()
File "C:\Summer2020\computer_vision\final_project\copy_of_chest_x_ray_shenanigans_(dcgan).py", line 243, in main
errD_real = criterion(output, label)
File "C:\Users\binhd\anaconda3\envs\t2\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\binhd\anaconda3\envs\t2\lib\site-packages\torch\nn\modules\loss.py", line 530, in forward
return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction)
File "C:\Users\binhd\anaconda3\envs\t2\lib\site-packages\torch\nn\functional.py", line 2525, in binary_cross_entropy
return torch._C._nn.binary_cross_entropy(
RuntimeError: Found dtype Long but expected Float
from __future__ import print_function
import argparse
import os
import random
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.optim as optim
import torch.utils.data
import torchvision.datasets as dset
import torchvision.transforms as transforms
import torchvision.utils as vutils
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from IPython.display import HTML
def main():
# Set random seed for reproducibility
manualSeed = 999
#manualSeed = random.randint(1, 10000) # use if you want new results
random.seed(manualSeed)
torch.manual_seed(manualSeed)
# Load dataset
dataroot = 'C:\Spring 2020/Machine Learning and Computer Vision/Project Database/chest xray/chest_xray/chest_xray/train/'
# Number of workers for dataloader
workers = 4
# Batch size during training
batch_size = 128
# Spatial size of training images. All images will be resized to this
# size using a transformer.
image_size = 64
# Number of channels in the training images. For color images this is 3
nc = 3
# Size of z latent vector (i.e. size of generator input)
nz = 100
# Size of feature maps in generator
ngf = 64
# Size of feature maps in discriminator
ndf = 64
# Number of training epochs
num_epochs = 5
# Learning rate for optimizers
lr = 0.0002
# Beta1 hyperparam for Adam optimizers
beta1 = 0.5
# Number of GPUs available. Use 0 for CPU mode.
ngpu = 1
# Create the dataset
dataset = dset.ImageFolder(root='C:\Spring 2020/Machine Learning and Computer Vision/Project Database/chest xray/chest_xray/chest_xray/train/',
transform=transforms.Compose([
transforms.Resize(image_size),
# transforms.Grayscale(1),
transforms.CenterCrop(image_size),
transforms.ToTensor(),
transforms.Normalize((0.5), (0.5)),
]))
# Create the dataloader
dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,
shuffle=True, num_workers=workers)
# Decide which device we want to run on
device = torch.device("cuda:0" if (torch.cuda.is_available() and ngpu > 0) else "cpu")
# Plot some training images
real_batch = next(iter(dataloader))
plt.figure(figsize=(8,8))
plt.axis("off")
plt.title("Training Images")
plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=2, normalize=True).cpu(),(1,2,0)))
# Custom weights initialization called on netG and netD
def weights_init(m):
classname = m.__class__.__name__
if classname.find('Conv') != -1:
nn.init.normal_(m.weight.data, 0.0, 0.02)
elif classname.find('BatchNorm') != -1:
nn.init.normal_(m.weight.data, 1.0, 0.02)
nn.init.constant_(m.bias.data, 0)
"""Generator Code"""
class Generator(nn.Module):
def __init__(self, ngpu):
super(Generator, self).__init__()
self.ngpu = ngpu
self.main = nn.Sequential(
# input is Z, going into a convolution
nn.ConvTranspose2d( nz, ngf * 8, 4, 1, 0, bias=False),
nn.BatchNorm2d(ngf * 8),
nn.ReLU(True),
# state size. (ngf*8) x 4 x 4
nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf * 4),
nn.ReLU(True),
# state size. (ngf*4) x 8 x 8
nn.ConvTranspose2d( ngf * 4, ngf * 2, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf * 2),
nn.ReLU(True),
# state size. (ngf*2) x 16 x 16
nn.ConvTranspose2d( ngf * 2, ngf, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf),
nn.ReLU(True),
# state size. (ngf) x 32 x 32
nn.ConvTranspose2d( ngf, nc, 4, 2, 1, bias=False),
nn.Tanh()
# state size. (nc) x 64 x 64
)
def forward(self, input):
return self.main(input)
# Create the generator
netG = Generator(ngpu).to(device)
# Handle multi-gpu if desired
if (device.type == 'cuda') and (ngpu > 1):
netG = nn.DataParallel(netG, list(range(ngpu)))
# Apply the weights_init function to randomly initialize all weights
# to mean=0, stdev=0.2.
netG.apply(weights_init)
# Print the model
print(netG)
"""Discriminator Code
"""
class Discriminator(nn.Module):
def __init__(self, ngpu):
super(Discriminator, self).__init__()
self.ngpu = ngpu
self.main = nn.Sequential(
# input is (nc) x 64 x 64
nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf) x 32 x 32
nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 2),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*2) x 16 x 16
nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 4),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*4) x 8 x 8
nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 8),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*8) x 4 x 4
nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),
nn.Sigmoid()
)
def forward(self, input):
return self.main(input)
# Create the Discriminator
netD = Discriminator(ngpu).to(device)
# Handle multi-gpu if desired
if (device.type == 'cuda') and (ngpu > 1):
netD = nn.DataParallel(netD, list(range(ngpu)))
# Apply the weights_init function to randomly initialize all weights
# to mean=0, stdev=0.2.
netD.apply(weights_init)
# Print the model
print(netD)
"""Train the GAN!
"""
# Initialize BCELoss function
criterion = nn.BCELoss()
# Create batch of latent vectors that we will use to visualize
# the progression of the generator
fixed_noise = torch.randn(64, nz, 1, 1, device=device)
# Establish convention for real and fake labels during training
real_label = 1
fake_label = 0
# Setup Adam optimizers for both G and D
optimizerD = optim.Adam(netD.parameters(), lr=0.0002, betas=(beta1, 0.999))
optimizerG = optim.Adam(netG.parameters(), lr=0.001, betas=(beta1, 0.999))
# Commented out IPython magic to ensure Python compatibility.
# Training Loop
# Lists to keep track of progress
img_list = []
G_losses = []
D_losses = []
iters = 0
num_epochs = 35
print("Starting Training Loop...")
# For each epoch
for epoch in range(num_epochs):
# For each batch in the dataloader
for i, data in enumerate(dataloader, 0):
############################
# (1) Update D network: maximize log(D(x)) + log(1 - D(G(z)))
###########################
## Train with all-real batch
netD.zero_grad()
# Format batch
real_cpu = data[0].to(device)
b_size = real_cpu.size(0)
label = torch.full((b_size,), real_label, device=device) # 64 = batchSize
# print("label " + str(label.size()))
# Forward pass real batch through D
output = netD(real_cpu).view(-1) # 256??
print('Type of output is ', type(output))
print('Type of label is ', type(label))
# print("output " + str(output.size()))
# Calculate loss on all-real batch
errD_real = criterion(output, label)
# Calculate gradients for D in backward pass
errD_real.backward()
D_x = output.mean().item()
## Train with all-fake batch
# Generate batch of latent vectors
noise = torch.randn(b_size, nz, 1, 1, device=device)
# Generate fake image batch with G
fake = netG(noise)
# print("fake " + str(fake.size()))
label.fill_(fake_label)
# Classify all fake batch with D
output = netD(fake.detach()).view(-1)
# Calculate D's loss on the all-fake batch
errD_fake = criterion(output, label)
# Calculate the gradients for this batch
errD_fake.backward()
D_G_z1 = output.mean().item()
# Add the gradients from the all-real and all-fake batches
errD = errD_real + errD_fake
# Update D
optimizerD.step()
############################
# (2) Update G network: maximize log(D(G(z)))
###########################
netG.zero_grad()
label.fill_(real_label) # fake labels are real for generator cost
# Since we just updated D, perform another forward pass of all-fake batch through D
output = netD(fake).view(-1)
# Calculate G's loss based on this output
errG = criterion(output, label)
# Calculate gradients for G
errG.backward()
D_G_z2 = output.mean().item()
# Update G
optimizerG.step()
# Output training stats
if i % 50 == 0:
print('[%d/%d][%d/%d]\tLoss_D: %.4f\tLoss_G: %.4f\tD(x): %.4f\tD(G(z)): %.4f / %.4f'
% (epoch, num_epochs, i, len(dataloader),
errD.item(), errG.item(), D_x, D_G_z1, D_G_z2))
# Save Losses for plotting later
G_losses.append(errG.item())
D_losses.append(errD.item())
# Check how the generator is doing by saving G's output on fixed_noise
if (iters % 500 == 0) or ((epoch == num_epochs-1) and (i == len(dataloader)-1)):
with torch.no_grad():
fake = netG(fixed_noise).detach().cpu()
img_list.append(vutils.make_grid(fake, padding=2, normalize=True))
iters += 1
plt.figure(figsize=(10,5))
plt.title("Generator and Discriminator Loss During Training")
plt.plot(G_losses,label="G")
plt.plot(D_losses,label="D")
plt.xlabel("iterations")
plt.ylabel("Loss")
plt.legend()
plt.show()
fig = plt.figure(figsize=(8,8))
plt.axis("off")
ims = [[plt.imshow(np.transpose(i,(1,2,0)), animated=True)] for i in img_list]
ani = animation.ArtistAnimation(fig, ims, interval=1000, repeat_delay=1000, blit=True)
HTML(ani.to_jshtml())
# Grab a batch of real images from the dataloader
real_batch = next(iter(dataloader))
# Plot the real images
plt.figure(figsize=(80,80))
plt.subplot(1,2,1)
plt.axis("off")
plt.title("Real Images")
plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=5, normalize=True).cpu(),(1,2,0)))
# Plot the fake images from the last epoch
plt.subplot(1,2,2)
plt.axis("off")
plt.title("Fake Images")
plt.imshow(np.transpose(img_list[-1],(1,2,0)))
plt.show()
if __name__ == '__main__':
main()
| The fix is to create the label tensor with a float dtype: BCELoss expects float targets, but torch.full with an integer fill value produces a Long tensor. Change
from
label = torch.full((b_size,), real_label, device=device)
to
label = torch.full((b_size,), real_label, device=device, dtype=torch.float)
| https://stackoverflow.com/questions/65192811/ |
Decaying the learning rate from the 100th epoch | Knowing that
learning_rate = 0.0004
optimizer = torch.optim.Adam(
model.parameters(),
lr=learning_rate, betas=(0.5, 0.999)
)
is there a way of decaying the learning rate from the 100th epoch?
Is this a good practice:
decayRate = 0.96
my_lr_scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer=my_optimizer, gamma=decayRate)
| from torch.optim.lr_scheduler import MultiStepLR
# multiply the learning rate by 0.1 (i.e. divide it by 10) after epoch 100
scheduler = MultiStepLR(optimizer, milestones=[100,], gamma=0.1)
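A minimal usage sketch (train_one_epoch is a placeholder for your own training code); step the scheduler once per epoch:
for epoch in range(num_epochs):
    train_one_epoch(model, optimizer)  # your training loop
    scheduler.step()                   # after epoch 100 the lr becomes lr * 0.1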
Please refer: MultiStepLR for more information.
| https://stackoverflow.com/questions/65200452/ |
How can i add a Bi-LSTM layer on top of bert model? | I'm using pytorch and I'm using the base pretrained bert to classify sentences for hate speech.
I want to implement a Bi-LSTM layer that takes as input all outputs of the last
transformer encoder from the BERT model, as a new model (a class that implements nn.Module), and I got confused with the nn.LSTM parameters.
I tokenized the data using
bert = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=int(data['class'].nunique()),output_attentions=False,output_hidden_states=False)
My data-set has 2 columns: class(label), sentence.
Can someone help me with this?
Thank you in advance.
Edit:
Also, after processing the input in the Bi-LSTM, the network sends the final hidden state to a fully connected network that performs classification using the softmax activation function. How can I do that?
| You can do it as follows:
from transformers import BertModel
class CustomBERTModel(nn.Module):
def __init__(self):
super(CustomBERTModel, self).__init__()
self.bert = BertModel.from_pretrained("bert-base-uncased")
### New layers:
self.lstm = nn.LSTM(768, 256, batch_first=True,bidirectional=True)
self.linear = nn.Linear(256*2, <number_of_classes>)
def forward(self, ids, mask):
sequence_output, pooled_output = self.bert(
ids,
attention_mask=mask)
# sequence_output has the following shape: (batch_size, sequence_length, 768)
        lstm_output, (h, c) = self.lstm(sequence_output)  # run the BERT token embeddings through the Bi-LSTM
hidden = torch.cat((lstm_output[:,-1, :256],lstm_output[:,0, 256:]),dim=-1)
linear_output = self.linear(hidden.view(-1,256*2)) ### assuming that you are only using the output of the last LSTM cell to perform classification
return linear_output
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = CustomBERTModel()
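Regarding the softmax question from the edit: a common pattern is to return the raw logits from forward (as linear_output above is) and let nn.CrossEntropyLoss apply log-softmax internally during training; an explicit softmax is then only needed at inference. A minimal sketch, assuming integer class labels:
criterion = nn.CrossEntropyLoss()
logits = model(ids, mask)
loss = criterion(logits, labels)        # training: the loss is computed on raw logits
probs = torch.softmax(logits, dim=-1)   # inference: turn logits into class probabilities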
| https://stackoverflow.com/questions/65205582/ |
How can i get all outputs of the last transformer encoder in bert pretrained model and not just the cls token output? | I'm using pytorch and this is the model from huggingface transformers link:
from transformers import BertTokenizerFast, BertForSequenceClassification
bert = BertForSequenceClassification.from_pretrained("bert-base-uncased",
num_labels=int(data['class'].nunique()),
output_attentions=False,
output_hidden_states=False)
and in the forward function I'm building, I'm calling x1, x2 = self.bert(sent_id, attention_mask=mask)
Now, as far as I know, x2 is the cls output (which is the output of the first transformer encoder), but yet again, I don't think I understand the output of the model.
but I want the output of all the 12 last transformer encoders.
How can I do that in pytorch ?
| Ideally, if you want to look into the outputs of all the layers, you should use BertModel and not BertForSequenceClassification, because BertForSequenceClassification inherits from BertModel and adds a linear layer on top of the BERT model.
from transformers import BertModel
my_bert_model = BertModel.from_pretrained("bert-base-uncased")
### Add your code to map the model to device, data to device, and obtain input_ids and mask
sequence_output, pooled_output = my_bert_model(ids, attention_mask=mask)
# sequence_output has the following shape: (batch_size, sequence_length, 768), which contains output for all tokens in the last layer of the BERT model.
sequence_output contains output for all tokens in the last layer of the BERT model.
In order to obtain the outputs of all the transformer encoder layers, you can use the following:
my_bert_model = BertModel.from_pretrained("bert-base-uncased")
sequence_output, pooled_output, all_layer_output = my_bert_model(ids, attention_mask=mask, output_hidden_states=True)
all_layer_output is a tuple containing the embedding-layer output plus the outputs of all the layers. Each element in the tuple has shape (batch_size, sequence_length, 768).
Hence, to get the sequence of outputs at layer 5, you can use all_layer_output[5]. Note that all_layer_output[0] contains the outputs of the embeddings.
| https://stackoverflow.com/questions/65217033/ |
What does this code in PyTorch do? How can I express it with tensorflow | I found a code that would solve my problem that looks like this:
(self.conv_diag(input_tensor.diagonal(dim1=2, dim2=3))).diag_embed(dim1=2, dim2=3)
While self.conv_diag is a layer I have defined before.
As far as I understood, it extracts the diagonal of a subtensor along the second and third dimensions, feeds it into the layer, and then constructs a new tensor filled with zeros whose diagonal along those dimensions is filled with the values calculated by my layer.
What I have found to extract the diagonal is
tf.math.reduce_diag(input_tensor)
but I cannot choose the axis and I have not yet found an equivalent function to replace torch.diag_embed()
How can I express it in Tensorflow?
| This is maybe what you are looking for:
tf.linalg.diag(
diagonal, name='diag', k=0, num_rows=-1, num_cols=-1, padding_value=0,
align='RIGHT_LEFT'
)
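A minimal sketch of the PyTorch one-liner from the question, assuming dimensions 2 and 3 are the two innermost dimensions of input_tensor (which both TF ops operate on by default), and with conv_diag standing for your own layer:
diagonal = tf.linalg.diag_part(input_tensor)  # extract the diagonal of the last two dims
result = tf.linalg.diag(conv_diag(diagonal))  # embed the layer output back on the diagonal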
| https://stackoverflow.com/questions/65224911/ |
Load csv and Image dataset in pytorch | I am doing image classification with PyTorch. I have a separate Images folder and train and test csv file with images ids and labels . I don’t have any an idea about how to combine those images and ID and converting into tensors.
train.csv : contains all ID of Image like 4325.jpg, 2345.jpg,…so on and contains Labels like cat,dog.
Image_data : contains all the images of with ID name.
| You can create a custom dataset class by inheriting from PyTorch's torch.utils.data.Dataset.
The assumption for the following custom dataset class is
csv file format is
filename
label
4325.jpg
cat
2345.jpg
dog
All images are inside images folder.
import os
import pandas as pd
import torch
from PIL import Image
class CustomDataset(torch.utils.data.Dataset):
    def __init__(self, csv_path, images_folder, transform=None):
        self.df = pd.read_csv(csv_path)
        self.images_folder = images_folder
        self.transform = transform
        self.class2index = {"cat": 0, "dog": 1}
    def __len__(self):
        return len(self.df)
    def __getitem__(self, index):
        filename = self.df.loc[index, "filename"]
        label = self.class2index[self.df.loc[index, "label"]]
        image = Image.open(os.path.join(self.images_folder, filename))
        if self.transform is not None:
            image = self.transform(image)
        return image, label
Now you can use this class to load the training and test datasets using both the CSV files and the images folder.
train_dataset = CustomDataset("path - to - train.csv", "path - to - images - folder" )
test_dataset = CustomDataset("path - to - test.csv", "path - to - images - folder" )
image, label = train_dataset[0]
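To batch these samples, wrap the dataset in a DataLoader. Note that for the PIL images to be collated into batches, you should pass a transform such as torchvision.transforms.ToTensor() when constructing the dataset (batch size and shuffling below are just examples):
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=32, shuffle=True)
for images, labels in train_loader:
    ...  # training step goes here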
| https://stackoverflow.com/questions/65231299/ |
How do I specify and install the latest version of PyTorch via Conda in AWS Sagemaker? | I'm attempting to use a recent version of PyTorch (1.7.0) in a Conda environment on Sagemaker by specifying the package version in an environment.yml file. However, I'm getting a ResolvePackageNotFound error. Note that I'm just working in a Jupyter notebook with a kernel corresponding to this Conda environment. I'm not using a deep learning image.
Steps to reproduce:
Save the code below as a .yml file, navigate to where this .yml file is stored, and run conda env create -f environment.yml.
name: test_env
dependencies:
- numpy
- pandas
- pytorch>=1.7.0
- torchvision
- scipy
- ipykernel
- torchvision
I've tried this on instances of various types (ml.p2.xlarge, ml.p3.2xlarge, and ml.p3.8xlarge) and have gotten the same error each time. I tried it with conda version 4.8.3 and 4.9.2 as well. If I specify pytorch>=1.5.0, I'm able to create the environment successfully.
Does anyone have any idea why I can't create an environment successfully with more recent versions of PyTorch? Based on this documentation, I wonder if Sagemaker has preinstalled certain versions of PyTorch, and something is going awry when I attempt to use a more recent version.
| SageMaker instances do not always have support for the latest packages.
Check this link for the list of supported images in Sagemaker instances.
| https://stackoverflow.com/questions/65243886/ |
Load trained model on another machine - fastai, torch, huggingface | I am using fastai with pytorch to fine tune XLMRoberta from huggingface.
I've trained the model and everything is fine on the machine where I trained it.
But when I try to load the model on another machine I get OSError - Not Found - No such file or directory pointing to .cache/torch/transformers/. The issue is the path of a vocab_file.
I've used fastai's Learner.export to export the model in .pkl file, but I don't believe that issue is related to fastai since I found the same issue appearing in flairNLP.
It appears that the path to the cache folder, where the vocab_file is stored during the training, is embedded in the .pkl file:
The error comes from transformer's XLMRobertaTokenizer __setstate__:
def __setstate__(self, d):
self.__dict__ = d
self.sp_model = spm.SentencePieceProcessor()
self.sp_model.Load(self.vocab_file)
which tries to load the vocab_file using the path from the file.
I've tried patching this method using:
pretrained_model_name = "xlm-roberta-base"
vocab_file = XLMRobertaTokenizer.from_pretrained(pretrained_model_name).vocab_file
def _setstate(self, d):
self.__dict__ = d
self.sp_model = spm.SentencePieceProcessor()
self.sp_model.Load(vocab_file)
XLMRobertaTokenizer.__setstate__ = MethodType(_setstate, XLMRobertaTokenizer(vocab_file))
And that successfully loaded the model but caused other problems like missing model attributes and other unwanted issues.
Can someone please explain why is the path embedded inside the file, is there a way to configure it without reexporting the model or if it has to be reexported how to configure it dynamically using fastai, torch and huggingface.
| I faced the same error. I had fine-tuned XLMRoberta on a downstream classification task with fastai version 1.0.61, and I'm loading the model inside Docker.
I'm not sure about why the path is embedded, but I found a workaround. Posting for future readers who might be looking for workaround as retraining is usually not possible.
I created /home/<username>/.cache/torch/transformers/ inside the Docker image:
RUN mkdir -p /home/<username>/.cache/torch/transformers
Copied the files (which were not found in Docker) from my local /home/<username>/.cache/torch/transformers/ into the image (Dockerfile COPY takes source and destination separated by a space):
COPY filename /home/<username>/.cache/torch/transformers/filename
| https://stackoverflow.com/questions/65249790/ |
I got this error when installing PyTorch in Pycharm. I installed torchvision and torch successfully but the problem is in PyTorch | I try to install PyTorch in cmd to import it in pycharm project. it gives me numerous errors after Running setup.py install for PyTorch ... error.
ERROR: Command errored out with exit status 1:
command: 'c:\users\sarah\appdata\local\programs\python\python38\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\sarah\\AppData\\Local\\Temp\\pip-install-uejad0d4\\pytorch_87cb825d32c24f6ca6f350ea09c367a4\\setup.py'"'"'; __file__='"'"'C:\\Users\\sarah\\AppData\\Local\\Temp\\pip-install-uejad0d4\\pytorch_87cb825d32c24f6ca6f350ea09c367a4\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\sarah\AppData\Local\Temp\pip-record-k6l1ba85\install-record.txt' --single-version-externally-managed --user --prefix= --compile --install-headers 'C:\Users\sarah\AppData\Roaming\Python\Python38\Include\PyTorch'
cwd: C:\Users\sarah\AppData\Local\Temp\pip-install-uejad0d4\pytorch_87cb825d32c24f6ca6f350ea09c367a4\
Complete output (5 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\sarah\AppData\Local\Temp\pip-install-uejad0d4\pytorch_87cb825d32c24f6ca6f350ea09c367a4\setup.py", line 11, in <module>
raise Exception(message)
Exception: You tried to install "pytorch". The package named for PyTorch is "torch"
----------------------------------------
ERROR: Command errored out with exit status 1: 'c:\users\sarah\appdata\local\programs\python\python38\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\sarah\\AppData\\Local\\Temp\\pip-install-uejad0d4\\pytorch_87cb825d32c24f6ca6f350ea09c367a4\\setup.py'"'"'; __file__='"'"'C:\\Users\\sarah\\AppData\\Local\\Temp\\pip-install-uejad0d4\\pytorch_87cb825d32c24f6ca6f350ea09c367a4\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\sarah\AppData\Local\Temp\pip-record-k6l1ba85\install-record.txt' --single-version-externally-managed --user --prefix= --compile --install-headers 'C:\Users\sarah\AppData\Roaming\Python\Python38\Include\PyTorch' Check the logs for full command output.
| The best way to install PyTorch is using pip or conda, with the commands provided on their website: https://pytorch.org/
This way you can choose which OS you are using, which version of CUDA (or no CUDA), and whether you are using conda or pip.
Please notice that as of today (12/14/20), I tried installing PyTorch in a new environment and it failed because of a new Numpy version. I solved this by installing Numpy first:
pip install numpy==1.18
And then PyTorch as usual from their website. Notice that their pip install command specifies torch as the package name and not pytorch, which you seem to have used.
| https://stackoverflow.com/questions/65282529/ |
How to solve the failure of getting a python file from cpp extension of pytorch using setuptools? | I wanted to try a github project named deformable kernels, and followed the steps described in the README.md file:
conda env create -f environment.yml
cd deformable_kernels/ops/deform_kernel;
pip install -e .;
The structure of deformable_kernels/ops/deform_kernel is shown here:
.
csrc
filter_sample_depthwise_cuda.cpp
filter_sample_depthwise_cuda.h
filter_sample_depthwise_cuda_kernel.cu
nd_linear_sample_cuda.cpp
nd_linear_sample_cuda.h
nd_linear_sample_cuda_kernel.cu
functions
filter_sample_depthwise.py
__init__.py
nd_linear_sample.py
__init__.py
modules
filter_sample_depthwise.py
__init__.py
setup.py
And the content of the file setup.py is shown here:
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension
setup(
name='filter_sample_depthwise',
ext_modules=[
CUDAExtension(
'filter_sample_depthwise_cuda',
[
'csrc/filter_sample_depthwise_cuda.cpp',
'csrc/filter_sample_depthwise_cuda_kernel.cu',
]
),
],
cmdclass={'build_ext': BuildExtension}
)
setup(
name="nd_linear_sample",
ext_modules=[
CUDAExtension(
"nd_linear_sample_cuda",
[
"csrc/nd_linear_sample_cuda.cpp",
"csrc/nd_linear_sample_cuda_kernel.cu",
],
)
],
cmdclass={"build_ext": BuildExtension},
)
When I install this directory using command pip install -e ., it failed and the result is:
Obtaining file:///home/xxx/Downloads/deformable_kernels/deformable_kernels/ops/deform_kernel
ERROR: More than one .egg-info directory found in /tmp/pip-pip-egg-info-pta6z__q
So I tried to separate the two setup() calls into different setup.py files. That worked, but I didn't get a Python file; instead a .so file was generated.
Does anyone know how to solve a problem like this?
| Check your pip version. I've had the same error (when installing other things in dev mode with pip) and downgrading to pip version 20.0.2 worked. Unsure why, but I've seen other folks on the internet solve the problem similarly.
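If you want to try that, the downgrade itself is a one-liner:
pip install pip==20.0.2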
| https://stackoverflow.com/questions/65283131/ |
Efficient numpy broadcasting not found | It may be an easy problem but I could not find any practical solution. My code has following code segment involving 3 nested for loops. The target is to create a specialized intensity matrix for my algorithms for both prediction and ground_truth image matrix as follows:
for i in range (batch):
for j in range (img_width):
for k in range (img_height):
tensor=prediction[i][j][:]-prediction[i][k][:]
extracted_intensity_pred[i][j][k]=torch.norm(tensor, 2)
tensor=ground_truth[i][j][:]-ground_truth[i][k][:]
extracted_intensity_ground_truth[i][j][k]=torch.norm(tensor, 2)
This nested for loop structure is slowing down execution significantly. Is there any broadcasting implementation (NumPy or PyTorch tensor based) that may be used?
| First, let's clean up some notation: a trailing [:] does nothing, and prediction[i][j][:] is just prediction[i, j, :]. The loop is then:
for i in range(batch):
    for j in range(img_width):
        for k in range(img_height):
            tensor = prediction[i, j, :] - prediction[i, k, :]
            extracted_intensity_pred[i, j, k] = torch.norm(tensor, 2)
That pairwise difference pattern is exactly a broadcast: prediction[:, :, None, :] - prediction[:, None, :, :] makes a 4D tensor, and torch.norm can reduce over the last (feature) dimension. So the whole triple loop collapses to:
tensor = prediction[:, :, None, :] - prediction[:, None, :, :]
extracted_intensity_pred = torch.norm(tensor, p=2, dim=-1)
CNN forward function , AutoTuning the number of layers | class ConvolutionalNetwork(nn.Module):
def __init__(self, in_features, trial):
# we optimize the number of layers, hidden units and dropout ratio in each layer.
n_layers = self.trial.suggest_int("n_layers", 1, 5)
p = self.trial.suggest_uniform("dropout_1{}".format(i), 0, 1.0)
layers = []
for i in range(n_layers):
self.out_features = self.trial.suggest_int("n_units_1{}".format(i), 16, 160,step =2)
kernel_size = trial.suggest_int('kernel_size', 2, 7)
layers.append(nn.Conv1d(1, self.out_features,kernel_size,1))
layers.append(nn.RReLU())
layers.append(nn.BatchNorm1d(self.out_features))
layers.append(nn.Dropout(p))
self.in_features = self.out_features
layers.append(nn.Conv1d(self.in_features, 16,kernel_size,1))
layers.append(nn.RReLU())
return nn.Sequential(*layers)
As you can see above, I did some Optuna tuning of the parameters, including the number of layers.
def forward(self,x):
# shape x for conv 1d op
x = x.view(-1, 1, self.in_features)
x = self.conv1(x)
x = F.rrelu(x)
x = F.max_pool1d(x, 64, 64)
x = self.conv2(x)
x = F.rrelu(x)
x = F.max_pool1d(x, 64, 64)
x = x.view(-1, self.n_conv)
x = self.dp(x)
x = self.fc3(x)
x = F.log_softmax(x, dim=1)
return x
I now need to do the same for the forward function above. I wrote the pseudocode below, but it won't run; kindly advise how to proceed. The main issue is incorporating the for loop into the forward function.
def forward(self,x):
# shape x for conv 1d op
x = x.view(-1, 1, self.in_features)
for i in range(n_layers):
layers.append(self.conv1(x))
layers.append(F.rrelu(x))
layers.append(F.max_pool1d(x, 64, 64))
x = x.view(-1, self.n_conv)
x = self.dp(x)
x = self.fc3(x)
#x = F.sigmoid(x)
x = F.log_softmax(x, dim=1)
return x
| There are a bunch of errors that make it hard to understand what you intended to do:
Why would you build a nn.Sequential model in the __init__ and not use it?
What is this return instruction in __init__??
The successive convolution layers you create do not have matching channel sizes (in_channels is always 1). The out_channels of one iteration should be the in_channels of the next iteration (see the sketch after the code below).
Your pseudocode for the forward function appends tensors to a layers list (which you did not declare, by the way) and then never uses that list.
At the beginning of the forward, you reshape your input with x = x.view(-1, 1, self.in_features), but at that point in_features does not match at all the number of input channels of the first convolution layer.
Long story short: correct all the above errors, and then something like:
class ConvolutionalNetwork(nn.Module):
def __init__(self, in_features, trial):
# do your stuff here ...
self._model = nn.Sequential(*layers)
def forward(self, x):
return self._model(x)
should work
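To make the channel-size point concrete, here is a minimal sketch of the layer-building loop. Variable names (trial, n_layers, p) follow the question, and the in_channels = 1 start matches the x.view(-1, 1, ...) reshape; this is not a drop-in replacement for the whole class:
in_channels = 1  # matches the x.view(-1, 1, self.in_features) reshape in forward
layers = []
for i in range(n_layers):
    out_channels = trial.suggest_int("n_units_{}".format(i), 16, 160, step=2)
    kernel_size = trial.suggest_int("kernel_size_{}".format(i), 2, 7)
    layers.append(nn.Conv1d(in_channels, out_channels, kernel_size))
    layers.append(nn.RReLU())
    layers.append(nn.BatchNorm1d(out_channels))
    layers.append(nn.Dropout(p))
    in_channels = out_channels  # the next layer consumes this layer's output
self._model = nn.Sequential(*layers)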
| https://stackoverflow.com/questions/65300793/ |
how to calculate mahalanobis distance in pytorch? | What is the most efficient way to calculate the mahalanobis distance: in pytorch?
| Based on SciPy's implementation of the mahalanobis distance, you would do this in PyTorch. Assuming u and v are 1D and cov is the 2D covariance matrix.
def mahalanobis(u, v, cov):
delta = u - v
m = torch.dot(delta, torch.matmul(torch.inverse(cov), delta))
return torch.sqrt(m)
Note: scipy.spatial.distance.mahalanobis takes in the inverse of the covariance matrix.
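A small usage sketch, estimating the covariance from sample data via NumPy (names are illustrative):
import numpy as np
x = torch.randn(100, 5)                              # 100 samples, 5 features
cov = torch.from_numpy(np.cov(x.numpy().T)).float()  # (5, 5) covariance estimate
d = mahalanobis(x[0], x[1], cov)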
| https://stackoverflow.com/questions/65328887/ |
Loss is “nan” when fine-tuning HuggingFace NLI model (both RoBERTa/BART) | I'm using HuggingFace's Transformer's library and I’m trying to fine-tune a pre-trained NLI model (ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli) on a dataset of around 276.000 hypothesis-premise pairs. I’m following the instructions from the docs here and here. I have the impression that the fine-tuning works (it does the training and saves the checkpoints), but trainer.train() and trainer.evaluate() return "nan" for the loss.
What I've tried:
I tried using both ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli and facebook/bart-large-mnli to make sure that it's not linked to specific model, but I get the issue for both models
I tried following the advice in this related github issue, but adding num_labels=3 to the config file does not solve the issue. (I think my issue is different because the models are already fine-tuned on NLI in my case)
I tried many different ways of changing my input data because I suspected that there could be an issue with my input data, but I also couldn't solve it that way.
The probable source of the issue: I inspected the prediction output from the model during training and the weird thing is the prediction value always seems to be "0" (entailment) in 100% of cases (see printed output in the code below). This is clearly an error. I think the source for this is that the logits the model seems to return during training are torch.tensor([[np.nan, np.nan, np.nan]]) and when you apply .argmax(-1) to this, you get torch.tensor(0). The big mystery for me is why the logits would become "nan", because the model does not do that when I use the same input data only outside of the trainer.
=> Does anyone know where this issues comes from? See my code below.
Thanks a lot in advance for any suggestion!
Here is my code:
### load model & tokenize
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
max_length = 256
hg_model_hub_name = "ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli"
# also tried: hg_model_hub_name = "facebook/bart-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(hg_model_hub_name)
model = AutoModelForSequenceClassification.from_pretrained(hg_model_hub_name)
model.config
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Device: {device}")
if device == "cuda":
model = model.half()
model.to(device)
model.train();
#... some data preprocessing
encodings_train = tokenizer(premise_train, hypothesis_train, return_tensors="pt", max_length=max_length,
return_token_type_ids=True, truncation=False, padding=True)
encodings_val = tokenizer(premise_val, hypothesis_val, return_tensors="pt", max_length=max_length,
return_token_type_ids=True, truncation=False, padding=True)
encodings_test = tokenizer(premise_test, hypothesis_test, return_tensors="pt", max_length=max_length,
return_token_type_ids=True, truncation=False, padding=True)
### create pytorch dataset object
class XDataset(torch.utils.data.Dataset):
def __init__(self, encodings, labels):
self.encodings = encodings
self.labels = labels
def __getitem__(self, idx):
item = {key: torch.as_tensor(val[idx]) for key, val in self.encodings.items()}
#item = {key: torch.as_tensor(val[idx]).to(device) for key, val in self.encodings.items()}
item['labels'] = torch.as_tensor(self.labels[idx])
#item['labels'] = self.labels[idx]
return item
def __len__(self):
return len(self.labels)
dataset_train = XDataset(encodings_train, label_train)
dataset_val = XDataset(encodings_val, label_val)
dataset_test = XDataset(encodings_test, label_test)
# compute metrics with trainer
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
def compute_metrics(pred):
labels = pred.label_ids
print(labels)
preds = pred.predictions.argmax(-1)
print(preds)
precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='binary', pos_label=0)
acc = accuracy_score(labels, preds)
return {
'accuracy': acc,
'f1': f1,
'precision': precision,
'recall': recall
}
## training
from transformers import Trainer, TrainingArguments
# https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1, # total number of training epochs
per_device_train_batch_size=8, # batch size per device during training
per_device_eval_batch_size=8, # batch size for evaluation
warmup_steps=500, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='./logs', # directory for storing logs
logging_steps=100,
)
trainer = Trainer(
model=model, # the instantiated Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=dataset_train, # training dataset
eval_dataset=dataset_val # evaluation dataset
)
trainer.train()
# output: TrainOutput(global_step=181, training_loss=nan)
trainer.evaluate()
# output:
[2 2 2 0 0 2 2 2 0 2 0 0 2 2 2 2 0 2 0 2 2 2 2 0 2 0 2 0 0 2 0 0 2 0 0 0 2
0 2 0 0 0 0 0 2 0 0 2 2 2 0 2 2 2 2 2 0 0 0 0 2 0 0 0 2 2 0 0 0 2 0 0 0 2
2 0 2 0 0 2 2 2 0 2 2 0 0 0 0 0 0 0 2 0 0 0 0 2 0 2 2 0 2 0 0 2 2 2 2 2 2
2 0 0 0 0 2 0 0 2 0 0 0 0 2 2 2 0 0 0 0 0 2 0 0 2 0 2 0 2 0 2 0 0 2 2 0 0
2 2 2 2 2 2 0 0 2 2 2 2 0 2 0 0 2 2 2 0 0 2 0 2 0 2 0 0 0 0 0 0 2 0 0 2 2
0 2 2 2 0 2 2 0 2 2 2 2 2 2 0 0 2 0 0 2 2 0 0 0 2 0 2 2 2 0 0 0 0 0 0 0 0
2 0 2 2 2 0 2 0 0 2 0 2 2 0 0 0 0 2 2 2 0 0 0 2 2 2 2 0 2 0 2 2 2]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
{'epoch': 1.0,
'eval_accuracy': 0.5137254901960784,
'eval_f1': 0.6787564766839378,
'eval_loss': nan,
'eval_precision': 0.5137254901960784,
'eval_recall': 1.0}
Edit: I've also opened a GitHub issue with a bit more detailed description of the issue here: https://github.com/huggingface/transformers/issues/9160
| I received a good answer from the HuggingFace team on GitHub. The issue was the model.half() call, which has the advantage of increasing speed and reducing memory usage, but also changes the model in a way that produces the nan loss: in pure fp16, loss and gradient values can overflow unless loss scaling is used. Removing the model.half() call solved the issue for me. For details, see https://github.com/huggingface/transformers/issues/9160
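If you still want half precision for the speed/memory benefit, the Trainer can apply mixed precision with automatic loss scaling itself via the fp16 training argument (instead of calling model.half() manually). A sketch, keeping only output_dir from the arguments above:
training_args = TrainingArguments(output_dir='./results', fp16=True)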
| https://stackoverflow.com/questions/65332165/ |
What memory does Transformer Decoder Only use? | I've been reading a lot about transformers and self attention and have seen both BERT and GPT-2 are a newer version that only use an encoder transformer (BERT) and decoder transformer (GPT-2). I've been trying to build a decoder only model for myself for next sequence prediction but am confused by one thing. I'm using PyTorch and have looked at thereSeq2Seq tutorial and then looked into the Transformer Decoder Block which is made up of Transformer Decoder Layers. My confusion comes from the memory these need to be passed as well. In the documentation they say memory is the last layer of the encoder block which makes sense for a Seq2Seq model but I'm wanting to make a decoder only model. So my question is what do you pass a decoder only model like GPT-2 for memory if you do not have an encoder?
| After further investigation I believe I can now answer this myself. A decoder-only transformer doesn't actually use any memory, because there is no encoder-decoder cross-attention in it like there is in an encoder-decoder transformer. A decoder-only transformer looks a lot like an encoder transformer, except that it uses a masked self-attention layer instead of a plain self-attention layer. To get this, you pass a square subsequent mask (upper triangle) so that the model cannot attend to future positions, which gives you a decoder-only model like GPT-2/GPT-3.
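A minimal sketch of that masking with PyTorch's encoder modules (sizes here are illustrative); the additive float mask puts -inf above the diagonal so attention cannot look ahead:
import torch
import torch.nn as nn
seq_len, d_model = 10, 512
layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8)
encoder = nn.TransformerEncoder(layer, num_layers=6)
# -inf strictly above the main diagonal blocks attention to future positions
causal_mask = torch.triu(torch.full((seq_len, seq_len), float('-inf')), diagonal=1)
x = torch.randn(seq_len, 1, d_model)  # (seq_len, batch, d_model)
out = encoder(x, mask=causal_mask)    # masked self-attention = decoder-only block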
| https://stackoverflow.com/questions/65341363/ |
Adam optimizer with warmup on PyTorch | In the paper Attention is all you need, under section 5.3, the authors suggested to increase the learning rate linearly and then decrease proportionally to the inverse square root of steps.
How do we implement this in PyTorch with Adam optimizer? Preferably without additional packages.
| PyTorch provides learning-rate-schedulers for implementing various methods of adjusting the learning rate during the training process.
Some simple LR-schedulers are already implemented and can be found here: https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
In your special case you can - just like the other LR-schedulers do - subclass _LRScheduler for implementing a variable schedule based on the number of epochs. For a bare-bones method you only need to implement __init__() and get_lr() methods.
Just note that many of these schedulers expect you to call .step() once per epoch. But you can also update it more frequently or even pass a custom argument just like in the cosine-annealing LR-scheduler: https://pytorch.org/docs/stable/_modules/torch/optim/lr_scheduler.html#CosineAnnealingLR
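For the specific schedule from the paper, a subclass isn't strictly needed: torch.optim.lr_scheduler.LambdaLR can express it directly. A minimal sketch (set the optimizer's base lr to 1 so the lambda fully determines the rate; the formula is the paper's lrate = d_model^-0.5 * min(step^-0.5, step * warmup_steps^-1.5)):
from torch.optim.lr_scheduler import LambdaLR
d_model, warmup_steps = 512, 4000
optimizer = torch.optim.Adam(model.parameters(), lr=1.0, betas=(0.9, 0.98), eps=1e-9)
def noam(step):
    step = max(step, 1)  # avoid 0 ** -0.5 on the very first call
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)
scheduler = LambdaLR(optimizer, lr_lambda=noam)
# call scheduler.step() after every optimizer step (this schedule is per step, not per epoch)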
| https://stackoverflow.com/questions/65343377/ |
How to store wrong predictions during evaluation on the CNN | During evaluation, I want to store unique ids that are wrongly predicted to do some more processing.
It is a multiclass prediction problem
Here is the code during the evaluation:
outputs = model(imgs)
loss = criterion(outputs, targets) # Prediction error
val_loss += loss.item()
predicted = torch.argmax(outputs, dim=1)
t_predicted +=predicted.cpu().tolist()
total += targets.size(0)
good_answers = (predicted == targets)
correct += good_answers.sum().item()
Knowing that ids is a list of the ids of the images, when I try to get the ids that are wrong:
wrong_ids += ids[~(good_answers.to('cpu'))]
I get this error:
add(): argument 'other' (position 1) must be Tensor
| The question contained a tensorflow tag, so I was preparing an answer. After completing my write up, I've found that this tag is removed. However, I believe my answer can give insight into this general question of whether they're using tf or pytorch.
Data
import numpy as np
import tensorflow as tf
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
# train set / data
x_train = x_train.astype('float32') / 255
# validation set / data
x_test = x_test.astype('float32') / 255
# train set / target
y_train = tf.keras.utils.to_categorical(y_train, num_classes=10)
# validation set / target
y_test = tf.keras.utils.to_categorical(y_test, num_classes=10)
print(x_train.shape, y_train.shape)
print(x_test.shape, y_test.shape)
Train
import tensorflow as tf
# declare input shape
input = tf.keras.Input(shape=(32,32,3))
# Block 1
x = tf.keras.layers.Conv2D(32, 3, strides=2, activation="relu")(input)
x = tf.keras.layers.MaxPooling2D(3)(x)
# Now that we apply global max pooling.
gap = tf.keras.layers.GlobalMaxPooling2D()(x)
# Finally, we add a classification layer.
output = tf.keras.layers.Dense(10, activation='softmax')(gap)
# bind all
func_model = tf.keras.Model(input, output)
print('\nFunctional API')
func_model.compile(
metrics=['accuracy'],
loss = 'categorical_crossentropy',
optimizer = tf.keras.optimizers.Adam()
)
func_model.fit(x_train, y_train)
Error Prediction
# Predict the values from the validation dataset
y_pred = func_model.predict(x_test)
# Convert predictions classes to one hot vectors
y_pred_classes = np.argmax(y_pred, axis = 1)
y_test = np.argmax(y_test, axis=1)
# Errors are difference between predicted labels and true labels
errors = (y_pred_classes - y_test != 0)
y_pred_classes_errors = y_pred_classes[errors]
y_pred_errors = y_pred[errors]
y_true_errors = y_test[errors]
x_test_errors = x_test[errors]
# Probabilities of the wrong predicted numbers
y_pred_errors_prob = np.max(y_pred_errors, axis = 1)
# Predicted probabilities of the true values in the error set
true_prob_errors = np.diagonal(np.take(y_pred_errors, y_true_errors, axis=1))
# Difference between the probability of the predicted label and the true label
delta_pred_true_errors = y_pred_errors_prob - true_prob_errors
# Sorted list of the delta prob errors
sorted_dela_errors = np.argsort(delta_pred_true_errors)
# Top 6 errors
most_important_errors = sorted_dela_errors[-6:]
Display
import matplotlib.pyplot as plt
def display_errors(errors_index,img_errors,pred_errors, obs_errors):
""" This function shows 6 images with their predicted and real labels"""
n = 0
nrows = 2
ncols = 3
fig, ax = plt.subplots(nrows,ncols,sharex=True,sharey=True)
for row in range(nrows):
for col in range(ncols):
error = errors_index[n]
ax[row,col].imshow((img_errors[error]).reshape((32,32,3)))
ax[row,col].set_title("Predicted label :{}\nTrue label :{}".format(pred_errors[error],obs_errors[error]))
n += 1
# Show the top 6 errors
display_errors(most_important_errors, x_test_errors, y_pred_classes_errors, y_true_errors)
| https://stackoverflow.com/questions/65345897/ |
How to set hydra's parameter HYDRA_FULL_ERROR? | When I use hydra in a python pytorch project, the operation result prompt “Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.”
But i don't konw how to set it.
| You set an environment variable in the shell.
For a specific run:
$ HYDRA_FULL_ERROR=1 python foo.py
Or for all runs in this shell session:
$ export HYDRA_FULL_ERROR=1
$ python foo.py
However, you shouldn't normally need to set it. This is more of a debugging backdoor in case of issues with Hydra itself.
If you hit a case where you can only understand your issue after setting HYDRA_FULL_ERROR, please file an issue.
| https://stackoverflow.com/questions/65376556/ |
Torch gather middle dimension | Let a be a (n, d, l) tensor. Let indices be a (n, 1) tensor, containing indices. I want to gather from a in the middle dimension tensors from indices given by indices. The resulting tensor would therefore be of shape (n, l).
n = 3
d = 2
l = 3
a = tensor([[[ 0, 1, 2],
[ 3, 4, 5]],
[[ 6, 7, 8],
[ 9, 10, 11]],
[[12, 13, 14],
[15, 16, 17]]])
indices = tensor([[0],
[1],
[0]])
# Shape of result is (n, l)
result = tensor([[ 0, 1, 2], # a[0, 0, :] since indices[0] == 0
[ 9, 10, 11], # a[1, 1, :] since indices[1] == 1
[12, 13, 14]]) # a[2, 0, :] since indices[2] == 0
This is indeed similar to a.gather(1, indices), but gather won't work since indices does not have the same shape as a. How can I use gather in this setting? Or what should I use?
| You can create the indices manually. The indices tensor has to be flattened if it has the shape of your example data.
a[torch.arange(len(a)),indices.view(-1)]
# equal to a[[0,1,2],[0,1,0]]
Out:
tensor([[ 0, 1, 2],
[ 9, 10, 11],
[12, 13, 14]])
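If you specifically want gather, you can expand indices to match a's trailing dimension first (a sketch using the shapes from the question):
idx = indices.unsqueeze(-1).expand(-1, -1, a.size(2))  # (n, 1, l)
result = a.gather(1, idx).squeeze(1)                   # (n, l)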
| https://stackoverflow.com/questions/65378968/ |
Why is my dynamic pytorch model definition not complete? | I tried to create a custom pytorch model class in a way that would allow variable number of hidden layers. Everything seems to "work" in the code, however even if I set 10 hidden layers, none of them show up in the print out of the model definition. I am wondering why this is happening? I can see obviously it has something to do with the way I use a loop to apply the hidden layers.
Code:
class FFModel(nn.Module):
def __init__(self, input_dim, hidden_dim, layer_dim, output_dim):
super(FFModel, self).__init__()
self.input_layer = nn.Linear(input_dim, hidden_dim).to(device)
self.hidden_layers = [ nn.Linear(hidden_dim, hidden_dim).to(device) for _ in range(layer_dim) ]
self.activation = nn.LeakyReLU().to(device)
self.output_layer = nn.Linear(hidden_dim, output_dim).to(device)
def forward(self, x):
y_hat = self.activation( self.input_layer(x) )
for hidden_layer in self.hidden_layers:
y_hat = self.activation( hidden_layer(y_hat) )
out = self.output_layer(y_hat)
return out
model = FFModel(2, 5, 10, 1)
model
Output:
FFModel(
(input_layer): Linear(in_features=2, out_features=5, bias=True)
(activation): LeakyReLU(negative_slope=0.01)
(output_layer): Linear(in_features=5, out_features=1, bias=True)
)
Indeed, if I inspect deeper and parse through the parameters, even though this model should have 10 layers, the parameters are not correct for a model of 10 hidden layers. Clearly this model doesn't have 10 layers as I thought it would.
Why are the hidden layers all gone?
for param in model.parameters():
print (param)
out:
Parameter containing:
tensor([[-0.1064, -0.0171],
[-0.5892, -0.5391],
[-0.6420, -0.5128],
[ 0.0128, 0.2155],
[ 0.3406, 0.1825]], device='cuda:0', requires_grad=True)
Parameter containing:
tensor([-0.5432, -0.0650, 0.6447, 0.4615, 0.4772], device='cuda:0',
requires_grad=True)
Parameter containing:
tensor([[ 0.0615, 0.0901, 0.3399, -0.3341, 0.2029]], device='cuda:0',
requires_grad=True)
Parameter containing:
tensor([0.1742], device='cuda:0', requires_grad=True)
| PyTorch cannot see your list object. You need to use nn.ModuleList:
self.hidden_layers = nn.ModuleList([nn.Linear(hidden_dim, hidden_dim).to(device) for _ in range(layer_dim)])
| https://stackoverflow.com/questions/65380984/ |
How to convert a yolo darknet format into .csv file | I have a few annotations that is originally in YOLO format. I need to convert it into yolo csv format in order to train with my transformers model.
Sample .csv file I need:
Sample annotation file in CSV format
The csv attributes include: image_id, width, height and coordinates of the image's bounding box.
Any help would be appreciated!
| First of all, I should say there is no straight way to convert those formats into CSV: you have to read the files and parse their data.
Step 1: import libraries
We need to read txt files (YOLO labels) from a directory and save them into a CSV, so we need these libraries:
import os
import glob
import pandas as pd
import numpy as np
Step 2: get list of your yolo labels
Open the directory where you have your YOLO labels with the os library and read the txt files with glob:
os.chdir(r'D:\karami\Labeled\train1\labels')
myFiles = glob.glob('*.txt')
My labels are in the labels folder, so I set the directory to that.
Step 3: read lines and split data
You have your labels list in the myFiles variable.
You should iterate over it, read the first line of each file and split its data.
In the image you shared we have absolute coordinates of the bounding boxes, while according to the YOLO darknet documentation we have:
object-class x_center y_center width height
x_center, y_center, width, height - float values relative to the width and height of the image; each can be in (0.0 to 1.0], for example: <x> = <absolute_x> / <image_width> or <height> = <absolute_height> / <image_height>
You can see the full description here.
So we need to multiply our coordinates by the width and height of the images.
A YOLO label file doesn't contain the image size, so you have to define width and height yourself.
After that, use the split values, then save your data frame. Code:
width = 1024
height = 1024
image_id = 0
final_df = []
for item in myFiles:
    row = []
    bbox_temp = []
    with open(item, 'rt') as fd:
        first_line = fd.readline()
        splited = first_line.split()
    row.append(image_id)
    row.append(width)
    row.append(height)
    try:
        bbox_temp.append(float(splited[1]) * width)
        bbox_temp.append(float(splited[2]) * height)
        bbox_temp.append(float(splited[3]) * width)
        bbox_temp.append(float(splited[4]) * height)
        row.append(bbox_temp)
        final_df.append(row)
    except:
        print("file is not in YOLO format!")
    image_id += 1  # advance the id so each image gets its own row id
df = pd.DataFrame(final_df, columns=['image_id', 'width', 'height', 'bbox'])
df.to_csv("saved.csv", index=False)
The resulting saved.csv contains the image_id, width, height and bbox columns, one row per label file.
| https://stackoverflow.com/questions/65381312/ |
"MisconfigurationError: No TPU devices were found" even when TPU is connected in PyTorch Lightning | Have been frustrated over the past few hours over a problem, though It's likely its a problem I started myself hah.
I'm trying to connect to the TPU in Colab. I'm pretty sure I've gotten all the import stuff down. My code is here. I'm not completely set on everything, so the entire document isn't functional, but you should be able to see my attempts at TPU connection.
I'm running Pytorch in version 1.5.0 and torchvision in 0.6.0 because I was finding I couldn't install XLA with anything later than 1.5.0. I'm running XLA in version 20200325.
The confusing part: the environment reports a connection with xla:1, yet when I try to flag the TPU in the trainer I get an error saying no TPU devices can be found.
If anyone could help me, that would be amazing.
Thanks,
A
| I suffered the same problem, and these steps solved the problem.
Follow the PyTorch-Lightning docs: TPU SUPPORT
Add another notebook cell:
%%capture
!curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py > /dev/null
!python pytorch-xla-env-setup.py --version nightly --apt-packages libomp5 libopenblas-dev > /dev/null
!pip install pytorch-lightning > /dev/null
| https://stackoverflow.com/questions/65387967/ |
PyTorch LinearLayer+BatchNorm1d with a 3D input | I would like to apply a BatchNorm1d after a Linear. My input is a 3D multivariate time series of shape [batch_size, n_variables, timesteps]. The Linear performs the linear transformation on the third dimension so that the new shape is [batch_size, n_variables, LinearLayer_out_features]. My problem occurs with the BatchNorm1d, I would like to apply it on the third dimension but, for a 3D input, BatchNorm1d operation is done over the second dimension (even for a 3D tensor). Do you have any suggestion on how to do that?
| Why not transpose the input to BatchNorm1d and then transpose it back?
linear = nn.Linear(timesteps, out_features)
bn = nn.BatchNorm1d(out_features)

x = linear(x)                              # (batch, n_variables, out_features)
x = bn(x.transpose(1, 2)).transpose(1, 2)  # move features to dim 1, normalize, swap back
This doesn't create a copy of your tensor.
https://pytorch.org/docs/stable/generated/torch.transpose.html
| https://stackoverflow.com/questions/65398540/ |
Running two different independent PyTorch programs on a single GPU | I have a single NVIDIA GPU which has a memory of 16GB. I have to run two different (and independent; meaning, two different problems: one is a vision type task, another is NLP task) Python programs. The codes are written using PyTorch and both the codes can use GPU.
I have tested that program 1 takes roughly 5GB of GPU memory, and the rest is free. If I run the two programs, will it hamper the model performance or will it cause any process conflicts?
Linked question; but it does not necessarily mean PyTorch codes
| I do not know the details of how this works, but I can tell from experience that both programs will run well (as long as they do not need more than 16 GB of GPU memory combined), and execution times should stay roughly the same.
However, computer vision usually requires a lot of IO (mostly reading images); if the other task needs to read files too, this part may become slower than when running each program individually.
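If you want to verify the headroom before launching the second program, nvidia-smi shows per-process GPU memory usage; from inside Python, a quick (hedged) way to see what PyTorch itself has claimed:
import torch

print(torch.cuda.memory_allocated() / 1024**3, "GiB currently allocated by tensors")
print(torch.cuda.memory_reserved() / 1024**3, "GiB reserved by PyTorch's caching allocator")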
| https://stackoverflow.com/questions/65399566/ |
Tweak positional encodings shape (DETR model) to support batchsize > 1 | Referencing this notebook, I would like to scale it to support batch size > 1, as it states in the comments Only batch size 1 supported.. I'm having trouble tweaking the pos statement inside forward(). How do I go about doing this? Any tips would be very helpful too.
This is the original code taken from the notebook (https://colab.research.google.com/github/facebookresearch/detr/blob/colab/notebooks/detr_demo.ipynb#scrollTo=h91rsIPl7tVl):
import torch
from torch import nn
from torchvision.models import resnet50
import torchvision.transforms as T
torch.set_grad_enabled(False);
class DETRdemo(nn.Module):
"""
Demo DETR implementation.
Demo implementation of DETR in minimal number of lines, with the
following differences wrt DETR in the paper:
* learned positional encoding (instead of sine)
* positional encoding is passed at input (instead of attention)
* fc bbox predictor (instead of MLP)
The model achieves ~40 AP on COCO val5k and runs at ~28 FPS on Tesla V100.
Only batch size 1 supported.
"""
def __init__(self, num_classes, hidden_dim=256, nheads=8,
num_encoder_layers=6, num_decoder_layers=6):
super().__init__()
# create ResNet-50 backbone
self.backbone = resnet50()
del self.backbone.fc
# create conversion layer
self.conv = nn.Conv2d(2048, hidden_dim, 1)
# create a default PyTorch transformer
self.transformer = nn.Transformer(
hidden_dim, nheads, num_encoder_layers, num_decoder_layers)
# prediction heads, one extra class for predicting non-empty slots
# note that in baseline DETR linear_bbox layer is 3-layer MLP
self.linear_class = nn.Linear(hidden_dim, num_classes + 1)
self.linear_bbox = nn.Linear(hidden_dim, 4)
# output positional encodings (object queries)
self.query_pos = nn.Parameter(torch.rand(100, hidden_dim))
# spatial positional encodings
# note that in baseline DETR we use sine positional encodings
self.row_embed = nn.Parameter(torch.rand(50, hidden_dim // 2))
self.col_embed = nn.Parameter(torch.rand(50, hidden_dim // 2))
def forward(self, inputs):
# propagate inputs through ResNet-50 up to avg-pool layer
x = self.backbone.conv1(inputs)
x = self.backbone.bn1(x)
x = self.backbone.relu(x)
x = self.backbone.maxpool(x)
x = self.backbone.layer1(x)
x = self.backbone.layer2(x)
x = self.backbone.layer3(x)
x = self.backbone.layer4(x)
# convert from 2048 to 256 feature planes for the transformer
h = self.conv(x)
# construct positional encodings
H, W = h.shape[-2:]
pos = torch.cat([ ## <--- trouble scaling `pos` to support batch size > 1
self.col_embed[:W].unsqueeze(0).repeat(H, 1, 1),
self.row_embed[:H].unsqueeze(1).repeat(1, W, 1),
], dim=-1).flatten(0, 1).unsqueeze(1)
# propagate through the transformer
h = self.transformer(pos + 0.1 * h.flatten(2).permute(2, 0, 1),
self.query_pos.unsqueeze(1)).transpose(0, 1)
# finally project transformer outputs to class labels and bounding boxes
return {'pred_logits': self.linear_class(h),
'pred_boxes': self.linear_bbox(h).sigmoid()}
For development:
model = DETRdemo(num_classes=10)
x = torch.ones((1,3,128,128)) # <-- this is ok
y = model(x)
model = DETRdemo(num_classes=10)
x = torch.ones((2,3,128,128)) # <-- this is NOT ok
y = model(x)
| Have a look at their code on github: https://github.com/facebookresearch/detr
The code there allows for arbitrary batch sizes, see e.g. the Evaluation section in their Readme file.
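If you want to patch the demo itself, a minimal sketch of the change (not the official implementation; names follow the snippet above): pos has shape (H*W, 1, hidden_dim) and already broadcasts against the flattened features, so the remaining batch-size-1 assumption is the object-query tensor passed as the transformer target, which has to be repeated across the batch:
def forward(self, inputs):
    # ... backbone and self.conv exactly as before, producing h ...
    B = inputs.shape[0]
    H, W = h.shape[-2:]
    pos = torch.cat([
        self.col_embed[:W].unsqueeze(0).repeat(H, 1, 1),
        self.row_embed[:H].unsqueeze(1).repeat(1, W, 1),
    ], dim=-1).flatten(0, 1).unsqueeze(1)              # (H*W, 1, hidden_dim)

    src = pos + 0.1 * h.flatten(2).permute(2, 0, 1)    # broadcasts to (H*W, B, hidden_dim)
    tgt = self.query_pos.unsqueeze(1).repeat(1, B, 1)  # (100, B, hidden_dim)
    h = self.transformer(src, tgt).transpose(0, 1)     # (B, 100, hidden_dim)
    return {'pred_logits': self.linear_class(h),
            'pred_boxes': self.linear_bbox(h).sigmoid()}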
| https://stackoverflow.com/questions/65423938/ |
AssertionError: No inf checks were recorded for this optimizer in Pytorch's AutomaticMixedPrecision | I'm using AutomaticMixedPrecision feature of PyTorch to train a network with smaller footprint and precision.
At a certain point some embeddings from the network have NaNs in their tensors, so I'd like to replace those with 0s in order to perform online hard negative samples mining.
However, after replacing the NaNs in the tensor like this:
tensor[torch.isnan(tensor)] = 0
I get the following error during the next scaler step (scaler.step(optimizer)):
assert len(optimizer_state["found_inf_per_device"]) > 0, "No inf checks were recorded for this optimizer."
AssertionError: No inf checks were recorded for this optimizer.
What's the correct way to zero out NaNs while getting rid of this error?
| Could you show us your full code? Generally it is advisable to just skip the optimizer step for a batch that produces NaNs.
Also take a look at torch.nan_to_num, which replaces NaNs (and optionally infs) with values of your choice.
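A minimal sketch of both suggestions, assuming the usual GradScaler training loop (scaler, criterion, model, optimizer are illustrative names, not code from the question):
# replace NaNs out-of-place instead of indexing in place:
embeddings = torch.nan_to_num(embeddings, nan=0.0)

# or skip the optimizer step entirely when the loss is not finite:
with torch.cuda.amp.autocast():
    loss = criterion(model(inputs), targets)
if torch.isfinite(loss):
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
optimizer.zero_grad()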
| https://stackoverflow.com/questions/65428216/ |
Replacing a max pooling layer with an average pooling layer on a VGG model | I'm following this article and trying to implement this function:
def replace_max_pooling(model):
'''
The function replaces max pooling layers with average pooling layers with
the following properties: kernel_size=2, stride=2, padding=0.
'''
for layer in model.layers:
if layer is max pooling:
replace
But I get an error on the iteration that says:
ModuleAttributeError: 'VGG' object has no attribute 'layers'...
How can I do this correctly?
| The VGG model provided by torchvision contains three components: the features sub-module, avgpool (the adaptive average pool), and the classifier. The max pooling layers you want to replace live alongside the convolutions in features.
You can loop over the layers of a nn.Module with named_children(). However there are other ways of going about this. You can use isinstance to determine if the layer is of a particular type.
In this particular model, layers are named by their index. So in order to locate the appropriate layers in the nn.Module and overwrite them, we can convert the names to ints.
for i, layer in m.features.named_children():
    if isinstance(layer, torch.nn.MaxPool2d):
        m.features[int(i)] = nn.AvgPool2d(kernel_size=2, stride=2, padding=0)
Having set up beforehand:
import torch
import torch.nn as nn
from torchvision import models

m = models.vgg16()
| https://stackoverflow.com/questions/65429057/ |
What is hp_metric in TensorBoard and how to get rid of it? | I am new to Tensorboard.
I am using fairly simple code running an experiment, and this is the output:
I don't remember asking for a hp_metric graph, yet here it is.
What is it and how do I get rid of it?
Full code to reproduce, using Pytorch Lightning (not that I think anyone should have to reproduce this to answer):
Please notice the ONLY line dereferencing TensorBoard is
self.logger.experiment.add_scalars("losses", {"train_loss": loss}, global_step=self.current_epoch)
import torch
from torch import nn
import torch.nn.functional as F
from typing import List, Optional
from pytorch_lightning.core.lightning import LightningModule
from Testing.Research.toy_datasets.ClustersDataset import ClustersDataset
from torch.utils.data import DataLoader
from Testing.Research.config.ConfigProvider import ConfigProvider
from pytorch_lightning import Trainer, seed_everything
from torch import optim
import os
from pytorch_lightning.loggers import TensorBoardLogger
class VAEFC(LightningModule):
# see https://towardsdatascience.com/understanding-variational-autoencoders-vaes-f70510919f73
# for possible upgrades, see https://arxiv.org/pdf/1602.02282.pdf
# https://stats.stackexchange.com/questions/332179/how-to-weight-kld-loss-vs-reconstruction-loss-in-variational-auto-encoder
def __init__(self, encoder_layer_sizes: List, decoder_layer_sizes: List, config):
super(VAEFC, self).__init__()
self._config = config
self.logger: Optional[TensorBoardLogger] = None
assert len(encoder_layer_sizes) >= 3, "must have at least 3 layers (2 hidden)"
# encoder layers
self._encoder_layers = nn.ModuleList()
for i in range(1, len(encoder_layer_sizes) - 1):
enc_layer = nn.Linear(encoder_layer_sizes[i - 1], encoder_layer_sizes[i])
self._encoder_layers.append(enc_layer)
# predict mean and covariance vectors
self._mean_layer = nn.Linear(encoder_layer_sizes[
len(encoder_layer_sizes) - 2],
encoder_layer_sizes[len(encoder_layer_sizes) - 1])
self._logvar_layer = nn.Linear(encoder_layer_sizes[
len(encoder_layer_sizes) - 2],
encoder_layer_sizes[len(encoder_layer_sizes) - 1])
# decoder layers
self._decoder_layers = nn.ModuleList()
for i in range(1, len(decoder_layer_sizes)):
dec_layer = nn.Linear(decoder_layer_sizes[i - 1], decoder_layer_sizes[i])
self._decoder_layers.append(dec_layer)
self._recon_function = nn.MSELoss(reduction='mean')
def _encode(self, x):
for i in range(len(self._encoder_layers)):
layer = self._encoder_layers[i]
x = F.relu(layer(x))
mean_output = self._mean_layer(x)
logvar_output = self._logvar_layer(x)
return mean_output, logvar_output
def _reparametrize(self, mu, logvar):
if not self.training:
return mu
std = logvar.mul(0.5).exp_()
if std.is_cuda:
eps = torch.cuda.FloatTensor(std.size()).normal_()
else:
eps = torch.FloatTensor(std.size()).normal_()
reparameterized = eps.mul(std).add_(mu)
return reparameterized
def _decode(self, z):
for i in range(len(self._decoder_layers) - 1):
layer = self._decoder_layers[i]
z = F.relu((layer(z)))
decoded = self._decoder_layers[len(self._decoder_layers) - 1](z)
# decoded = F.sigmoid(self._decoder_layers[len(self._decoder_layers)-1](z))
return decoded
def _loss_function(self, recon_x, x, mu, logvar, reconstruction_function):
"""
recon_x: generating images
x: origin images
mu: latent mean
logvar: latent log variance
"""
binary_cross_entropy = reconstruction_function(recon_x, x) # mse loss TODO see if mse or cross entropy
# loss = 0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2)
kld_element = mu.pow(2).add_(logvar.exp()).mul_(-1).add_(1).add_(logvar)
kld = torch.sum(kld_element).mul_(-0.5)
# KL divergence Kullback–Leibler divergence, regularization term for VAE
# It is a measure of how different two probability distributions are different from each other.
# We are trying to force the distributions closer while keeping the reconstruction loss low.
# see https://towardsdatascience.com/understanding-variational-autoencoders-vaes-f70510919f73
# read on weighting the regularization term here:
# https://stats.stackexchange.com/questions/332179/how-to-weight-kld-loss-vs-reconstruction-loss-in-variational
# -auto-encoder
return binary_cross_entropy + kld * self._config.regularization_factor
def training_step(self, batch, batch_index):
orig_batch, noisy_batch, _ = batch
noisy_batch = noisy_batch.view(noisy_batch.size(0), -1)
recon_batch, mu, logvar = self.forward(noisy_batch)
loss = self._loss_function(
recon_batch,
orig_batch, mu, logvar,
reconstruction_function=self._recon_function
)
# self.logger.experiment.add_scalars("losses", {"train_loss": loss})
self.logger.experiment.add_scalars("losses", {"train_loss": loss}, global_step=self.current_epoch)
# self.logger.experiment.add_scalar("train_loss", loss, self.current_epoch)
self.logger.experiment.flush()
return loss
def train_dataloader(self):
default_dataset, train_dataset, test_dataset = ClustersDataset.clusters_dataset_by_config()
train_dataloader = DataLoader(train_dataset, batch_size=self._config.batch_size, shuffle=True)
return train_dataloader
def test_dataloader(self):
default_dataset, train_dataset, test_dataset = ClustersDataset.clusters_dataset_by_config()
test_dataloader = DataLoader(test_dataset, batch_size=self._config.batch_size, shuffle=True)
return test_dataloader
def configure_optimizers(self):
optimizer = optim.Adam(model.parameters(), lr=self._config.learning_rate)
return optimizer
def forward(self, x):
mu, logvar = self._encode(x)
z = self._reparametrize(mu, logvar)
decoded = self._decode(z)
return decoded, mu, logvar
if __name__ == "__main__":
config = ConfigProvider.get_config()
seed_everything(config.random_seed)
latent_dim = config.latent_dim
enc_layer_sizes = config.enc_layer_sizes + [latent_dim]
dec_layer_sizes = [latent_dim] + config.dec_layer_sizes
model = VAEFC(config=config, encoder_layer_sizes=enc_layer_sizes, decoder_layer_sizes=dec_layer_sizes)
logger = TensorBoardLogger(save_dir='tb_logs', name='VAEFC')
logger.hparams = config # TODO only put here relevant stuff
# trainer = Trainer(gpus=1)
trainer = Trainer(deterministic=config.is_deterministic,
#auto_lr_find=config.auto_lr_find,
#log_gpu_memory='all',
# min_epochs=99999,
max_epochs=config.num_epochs,
default_root_dir=os.getcwd(),
logger=logger
)
# trainer.tune(model)
trainer.fit(model)
print("done training vae with lightning")
ClustersDataset.py
from torch.utils.data import Dataset
import matplotlib.pyplot as plt
import torch
import numpy as np
from Testing.Research.config.ConfigProvider import ConfigProvider
class ClustersDataset(Dataset):
__default_dataset = None
__default_dataset_train = None
__default_dataset_test = None
def __init__(self, cluster_size: int, noise_factor: float = 0, transform=None, n_clusters=2, centers_radius=4.0):
super(ClustersDataset, self).__init__()
self._cluster_size = cluster_size
self._noise_factor = noise_factor
self._n_clusters = n_clusters
self._centers_radius = centers_radius
# self._transform = transform
self._size = self._cluster_size * self._n_clusters
self._create_data_clusters()
self._combine_clusters_to_array()
self._normalize_data()
self._add_noise()
# self._plot()
pass
@staticmethod
def clusters_dataset_by_config():
if ClustersDataset.__default_dataset is not None:
return \
ClustersDataset.__default_dataset, \
ClustersDataset.__default_dataset_train, \
ClustersDataset.__default_dataset_test
config = ConfigProvider.get_config()
default_dataset = ClustersDataset(
cluster_size=config.cluster_size,
noise_factor=config.noise_factor,
transform=None,
n_clusters=config.n_clusters,
centers_radius=config.centers_radius
)
train_size = int(config.train_size * len(default_dataset))
test_size = len(default_dataset) - train_size
train_dataset, test_dataset = torch.utils.data.random_split(default_dataset, [train_size, test_size])
ClustersDataset.__default_dataset = default_dataset
ClustersDataset.__default_dataset_train = train_dataset
ClustersDataset.__default_dataset_test = test_dataset
return default_dataset, train_dataset, test_dataset
def _create_data_clusters(self):
self._clusters = [torch.zeros((self._cluster_size, 2)) for _ in range(self._n_clusters)]
centers_radius = self._centers_radius
for i, c in enumerate(self._clusters):
r, x, y = 3.0, centers_radius * np.cos(i * np.pi * 2 / self._n_clusters), centers_radius * np.sin(
i * np.pi * 2 / self._n_clusters)
cluster_length = 1.1
cluster_start = i * 2 * np.pi / self._n_clusters
cluster_end = cluster_length * (i + 1) * 2 * np.pi / self._n_clusters
cluster_inds = torch.linspace(start=cluster_start, end=cluster_end, steps=self._cluster_size,
dtype=torch.float)
c[:, 0] = r * torch.sin(cluster_inds) + y
c[:, 1] = r * torch.cos(cluster_inds) + x
def _plot(self):
plt.figure()
plt.scatter(self._noisy_values[:, 0], self._noisy_values[:, 1], s=1, color='b', label="noisy_values")
plt.scatter(self._values[:, 0], self._values[:, 1], s=1, color='r', label="values")
plt.legend(loc="upper left")
plt.show()
def _combine_clusters_to_array(self):
size = self._size
self._values = torch.zeros(size, 2)
self._labels = torch.zeros(size, dtype=torch.long)
for i, c in enumerate(self._clusters):
self._values[i * self._cluster_size: (i + 1) * self._cluster_size, :] = self._clusters[i]
self._labels[i * self._cluster_size: (i + 1) * self._cluster_size] = i
def _add_noise(self):
size = self._size
mean = torch.zeros(size, 2)
std = torch.ones(size, 2)
noise = torch.normal(mean, std)
self._noisy_values = torch.zeros(size, 2)
self._noisy_values[:] = self._values
self._noisy_values = self._noisy_values + noise * self._noise_factor
def _normalize_data(self):
values_min, values_max = torch.min(self._values), torch.max(self._values)
self._values = (self._values - values_min) / (values_max - values_min)
self._values = self._values * 2 - 1
def __len__(self):
return self._size # number of samples in the dataset
def __getitem__(self, index):
item = self._values[index, :]
noisy_item = self._noisy_values[index, :]
# if self._transform is not None:
# noisy_item = self._transform(item)
return item, noisy_item, self._labels[index]
@property
def values(self):
return self._values
@property
def noisy_values(self):
return self._noisy_values
Config values (ConfigProvider just returns those as an object)
num_epochs: 15
batch_size: 128
learning_rate: 0.0001
auto_lr_find: False
noise_factor: 0.1
regularization_factor: 0.0
cluster_size: 5000
n_clusters: 5
centers_radius: 4.0
train_size: 0.8
latent_dim: 8
enc_layer_sizes: [2, 200, 200, 200]
dec_layer_sizes: [200, 200, 200, 2]
retrain_vae: False
random_seed: 11
is_deterministic: True
| It's the default setting of the TensorBoard logger in PyTorch Lightning. You can set default_hp_metric to False to get rid of this metric.
TensorBoardLogger(save_dir='tb_logs', name='VAEFC', default_hp_metric=False)
The hp_metric helps you track model performance across different hyperparameter sets. You can check it under the HPARAMS tab in your TensorBoard.
| https://stackoverflow.com/questions/65450707/ |
How to draw a scatter plot in Tensorboard Pytorch? | Assuming I want a generic scatter plot drawn in TensorBoard that draws the 1st batch[:, 0], batch[:, 1] of every epoch.
How can that be done in TensorBoard?
An old similar question (2017 january) has a workaround, but I hope we now (2020 december) have the technology for a real solution.
My attempt, which is not enough:
if self._current_epoch == 0:
self.logger.experiment.add_scalars("epoch", {"batch": batch[:, 1]}, batch[:, 0])
Gives me the wonderful error
assert(scalar.squeeze().ndim == 0), 'scalar should be 0D'
| If I understand your question right, you can use add_images or add_figure to add an image or a figure to TensorBoard (docs).
Sample code:
from torch.utils.tensorboard import SummaryWriter
import numpy as np
import matplotlib.pyplot as plt
# create summary writer
writer = SummaryWriter('lightning_logs')
# write dummy image to tensorboard
img_batch = np.zeros((16, 3, 100, 100))
writer.add_images('my_image_batch', img_batch, 0)
# write dummy figure to tensorboard
plt.imshow(np.transpose(img_batch[0], [1, 2, 0]))
plt.title('example title')
writer.add_figure('my_figure_batch', plt.gcf(), 0)
writer.close()
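For the scatter plot specifically, the same add_figure route works from inside Lightning; a sketch logging the first batch of every epoch (to be placed in training_step):
import matplotlib.pyplot as plt

if batch_idx == 0:
    fig = plt.figure()
    plt.scatter(batch[:, 0].cpu(), batch[:, 1].cpu(), s=2)
    self.logger.experiment.add_figure("first_batch_scatter", fig, global_step=self.current_epoch)
    plt.close(fig)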
| https://stackoverflow.com/questions/65451949/ |
How to cache big data in memory (efficiently) in complex variables across executions of Python scripts? | I am trying to call (from Java Spring beans) Python PyTorch scripts that contain trained neural networks. My PyTorch neural networks are functions that accept a state, encode it, and return an action, all according to a learned/trained policy.
So each time I invoke a Python script I have to construct the torch.nn model, load its weights and biases from some external store (DB, file) and then execute the network to get a single answer, which is a pretty expensive operation.
How can I keep the torch.nn instance (with loaded weights and biases) in memory and make it immediately available for each execution of the Python script?
Memcached is not the solution, because it can keep only string or binary values, and it is quite expensive to serialize and deserialize a torch.nn instance. One suggestion was TensorFlow Serving; I am currently researching it, so I don't yet know whether this is the answer.
Is it possible that TensorFlow has some caching technology which I could use for PyTorch as well?
| If I understand your use case correctly, what you need is a model server that keeps the model loaded and ideally also handles any exceptions from incorrect data.
One rather straightforward way to transform your inference script into a TensorFlow-Serving-like callable service is the Python library Flask. Another option is the dedicated tool TorchServe.
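A minimal Flask sketch of the idea: the network is built and its weights are loaded once at startup, and each request (e.g. from your Java Spring beans over HTTP) only runs a forward pass. All names here (MyPolicyNet, weights.pt, the /predict route) are illustrative:
from flask import Flask, request, jsonify
import torch

app = Flask(__name__)

model = MyPolicyNet()                            # hypothetical torch.nn.Module
model.load_state_dict(torch.load("weights.pt"))  # loaded once, stays in memory
model.eval()

@app.route("/predict", methods=["POST"])
def predict():
    state = torch.tensor(request.json["state"], dtype=torch.float32)
    with torch.no_grad():
        action = model(state.unsqueeze(0)).squeeze(0)
    return jsonify(action=action.tolist())

if __name__ == "__main__":
    app.run(port=5000)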
| https://stackoverflow.com/questions/65458445/ |
tensor type attributes in bert model returned as string | I am new to NLP and I want to build a BERT model for sentiment analysis, so I am following this tutorial:
https://curiousily.com/posts/sentiment-analysis-with-bert-and-hugging-face-using-pytorch-and-python/
but i am getting the error bellow
bert_model = BertModel.from_pretrained(PRE_TRAINED_MODEL_NAME)
last_hidden_state, pooled_output = bert_model(
input_ids=encoding['input_ids'],
attention_mask=encoding['attention_mask']
)
last_hidden_state.shape
pooled_output.shape
When I want to execute last_hidden_state.shape I get an error:
'str' object has no attribute 'shape'
Why does it return last_hidden_state and pooled_output as str and not as tensors?
Thank you.
| It seems a couple of breaking changes were introduced in the switch from version 3 to version 4 of Hugging Face Transformers: models now return output objects instead of plain tuples by default, and unpacking such an object iterates over its keys, so your variables end up holding strings. You can restore the old tuple behavior like below:
bert_model = BertModel.from_pretrained(PRE_TRAINED_MODEL_NAME, return_dict=False)
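Alternatively, keep the new default and read the fields by name from the returned output object, which is the API going forward:
outputs = bert_model(
    input_ids=encoding['input_ids'],
    attention_mask=encoding['attention_mask']
)
last_hidden_state = outputs.last_hidden_state  # (batch, seq_len, hidden_size)
pooled_output = outputs.pooler_output          # (batch, hidden_size)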
| https://stackoverflow.com/questions/65461593/ |
What are the numbers in torch.transforms.normalize and how to select them? | I am following some tutorials and I keep seeing different numbers that seem quite arbitrary to me in the transforms section
namely,
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])
or
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
or
transform = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
or others.
I wonder where these numbers arise, and how to know to select the correct ones?
I am about to use MNIST for a sanity check, but will very soon use my own unique dataset and will probably need my own normalization.
| Normalize in the PyTorch context subtracts from each instance (an MNIST image in your case) the mean (the first number) and divides by the standard deviation (the second number). This takes place for each channel separately, meaning that for MNIST you only need 2 numbers because the images are grayscale, but on, say, CIFAR-10, which has colored images, you would use something along the lines of your last transform (3 numbers for the mean and 3 for the std).
So basically each input image in MNIST gets transformed from [0,255] to [0,1] because you transform an image to Tensor (source: https://pytorch.org/docs/stable/torchvision/transforms.html --
Converts a PIL Image or numpy.ndarray (H x W x C) in the range [0, 255] to a torch.FloatTensor of shape (C x H x W) in the range [0.0, 1.0] if the PIL Image belongs to one of the modes (L, LA, P, I, F, RGB, YCbCr, RGBA, CMYK, 1) or if the numpy.ndarray has dtype = np.uint8)
After that you want your input image to have values in a range like [0,1] or [-1,1] to help your model converge in the right direction (there are many reasons why scaling takes place, e.g. NNs prefer inputs around that range to avoid gradient saturation). Now, as you probably noticed, passing 0.5 and 0.5 to Normalize yields values in the range:
Min of input image = 0 -> ToTensor -> 0 -> (0 - 0.5) / 0.5 -> -1
Max of input image = 255 -> ToTensor -> 1 -> (1 - 0.5) / 0.5 -> 1
so it transforms your data into the range [-1, 1].
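As for where the other numbers come from: (0.1307,), (0.3081,) are the mean and std of the MNIST training set itself, and [0.485, 0.456, 0.406] / [0.229, 0.224, 0.225] are the ImageNet statistics used with torchvision's pretrained models. For your own dataset you can estimate them the same way; a minimal sketch, assuming dataset yields (image_tensor, label) pairs already scaled to [0, 1] by ToTensor:
import torch
from torch.utils.data import DataLoader

loader = DataLoader(dataset, batch_size=256)
n_pixels = 0
channel_sum = torch.zeros(3)
channel_sq_sum = torch.zeros(3)
for x, _ in loader:
    n_pixels += x.numel() / x.size(1)  # pixels per channel in this batch
    channel_sum += x.sum(dim=(0, 2, 3))
    channel_sq_sum += (x ** 2).sum(dim=(0, 2, 3))
mean = channel_sum / n_pixels
std = (channel_sq_sum / n_pixels - mean ** 2).sqrt()
# then: transforms.Normalize(mean.tolist(), std.tolist())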
| https://stackoverflow.com/questions/65467621/ |
Convert image to tensor with range [0,255] instead of [0,1]? | I'm trying to use transforms.compose to convert my images into normalized images with a range of [0,255] instead of normalizing it as [0,1] for training my model. How do I make my code do this. Currently it normalizes the images from [0,1]. How would i just multiply this up to 255 to make it 0-255 or is it not that simple?
def build_model(self):
""" DataLoader """
train_transform = transforms.Compose([
transforms.RandomHorizontalFlip(),
transforms.Resize((self.img_size + 30, self.img_size+30)),
transforms.RandomCrop(self.img_size),
transforms.ToTensor(),
transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))
])
test_transform = transforms.Compose([
transforms.Resize((self.img_size, self.img_size)),
transforms.ToTensor(),
transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))
])
| Ideally you would normalize values between [0, 1] then standardize by calculating the mean and std of your whole training set and apply it to all datasets (training, validation and test set).
The following is essentially a x in [x_min, x_max] -> x' in [0, 1] mapping:
x_min, x_max = x.min(), x.max()
x = (x - x_min) / (x_max-x_min)
Then standardize, for instance with the z-score, which makes mean(x')=0 and std(x')=1:
mean, std = x.mean(), x.std()
x = (x - mean) / std
Back to your question, torchvision.transforms.Normalize is described as:
output[channel] = (input[channel] - mean[channel]) / std[channel]
If you divide your std argument by 255 then you will end up multiplying the output by 255.
Here's an example with shape (b=1, c=3, h=1, w=3):
> x = torch.tensor([[[[0.2, 0.3, 0.6]], [[0.1, 0.4, 0.2]], [[0.1, 0.8, 0.6]]]])
> mean, std = x.mean(), x.std()
tensor(0.3667), tensor(0.2500)
> t = T.Normalize(mean=mean, std=std/255)
> t(x).mean(), t(x).std()
tensor(~0), tensor(255.)
However, if you're just looking to multiply your data by 255 inside the torchvision.transforms.Compose pipeline you can just do:
T.Compose([
# other transforms
T.ToTensor(),
T.Normalize(mean=(0,)*3, std=(255,)*3)
])
Or with just a lambda:
T.Compose([
# other transforms
T.ToTensor(),
lambda x: x*255
])
Having imported torchvision.transforms as T.
| https://stackoverflow.com/questions/65469814/ |
How do I create a DataLoaders using rows of a DataFrame? | I am trying to create a model that will predict the next row of values. There are 7 columns, but I am only using the first 6. I figure that if I pass in the datetimes in column 7 to the model, that will guarantee overfitting. Here is a screenshot of the DataFrame:
I am using an arbitrary number of rows, 100 in this case, to make this prediction. All I need to know is some way to create a DataLoader where the y value is the row that I want to predict, and the x value is the 100 preceding rows.
If there is a way to do this with a DataBlock, that would be preferred. I have thought about using .loc and .iloc, but I do not know how I would use those to create a DataLoader.
| Create your custom dataset like this:
import torch
from torch.utils.data import Dataset, DataLoader


class TimeSeriesDataset(Dataset):
    def __init__(self, df, input_features: list,
                 output_features: list, lookback=100, lookahead=1):
        self.df = df
        self.input_features = input_features    # the original forgot to store these two
        self.output_features = output_features
        self.lookback = lookback                # 100 preceding rows, per the question
        self.lookahead = lookahead

    def __len__(self):
        return len(self.df) - self.lookback

    def __getitem__(self, idx):
        idx += self.lookback
        lookback = self.df.iloc[idx - self.lookback:idx]
        lookahead = self.df.iloc[idx]

        X = torch.tensor(lookback[self.input_features].values)
        y = torch.tensor(lookahead[self.output_features].values)
        return X, y
Then make your dataloader like this.
dataset = TimeSeriesDataset(df, input_features, output_features)
dataloader = DataLoader(dataset, batch_size=batch_size)
| https://stackoverflow.com/questions/65470428/ |
Calculate alpha values with torch mean? | I'm trying to calculate the alpha values as explained here.
I have as an argument a tensor with shape (1, 512, 14, 14). To calculate the alpha values I need to average over all dimensions except the channel dimension, so the output will have the shape (1, k, 1, 1), which is essentially (k,).
How can I do this in PyTorch?
Thanks!
| You could permute the first and second axis to keep the channel dimension on dim=0, then flatten all other dimensions, and lastly, take the mean on that new axis:
x.permute(1, 0, 2, 3).flatten(start_dim=1).mean(dim=1)
Here are the shapes, step by step:
>>> x.permute(1, 0, 2, 3).shape
(512, 1, 14, 14)
>>> x.permute(1, 0, 2, 3).flatten(start_dim=1).shape
(512, 1, 196)
>>> x.permute(1, 0, 2, 3).flatten(start_dim=1).mean(dim=1).shape
(512,)
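Equivalently, torch.mean accepts a tuple of dimensions, so you can skip the permute/flatten and average over every axis except the channel one directly:
>>> x.mean(dim=(0, 2, 3)).shape
(512,)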
| https://stackoverflow.com/questions/65471913/ |
Pytorch tensor dimension multiplication | I'm trying to implement the grad-camm algorithm:
https://arxiv.org/pdf/1610.02391.pdf
My arguments are:
activations: Tensor with shape torch.Size([1, 512, 14, 14])
alpha values : Tensor with shape torch.Size([512])
I want to multiply each activation (along dimension index 1, sized 512) by its corresponding alpha value: for example, if the i'th activation out of the 512 is 4 and the i'th alpha value is 5, then my new i'th activation would be 20.
The shape of the output should be torch.Size([1, 512, 14, 14])
| Assuming the desired output is of shape (1, 512, 14, 14).
You can achieve this with torch.einsum:
torch.einsum('nchw,c->nchw', x, y)
Or with a simple dot product, but you will first need to add a couple of additional dimensions on y:
x*y[None, :, None, None]
Here's an example with x.shape = (1, 4, 2, 2) and y = (4,):
>>> x = torch.arange(16).reshape(1, 4, 2, 2)
tensor([[[[ 0, 1],
[ 2, 3]],
[[ 4, 5],
[ 6, 7]],
[[ 8, 9],
[10, 11]],
[[12, 13],
[14, 15]]]])
>>> y = torch.arange(1, 5)
tensor([1, 2, 3, 4])
>>> x*y[None, :, None, None]
tensor([[[[ 0, 1],
[ 2, 3]],
[[ 8, 10],
[12, 14]],
[[24, 27],
[30, 33]],
[[48, 52],
[56, 60]]]])
| https://stackoverflow.com/questions/65480530/ |
ImportError: cannot import name 'AdultDataset' from 'dataset' | Please, I am working on the AdultDataset for a classification task.
I found out: from dataset import AdultDataset
is giving the error below:
ImportError: cannot import name 'AdultDataset' from 'dataset'
Import Relevant Libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
from torch.utils.data import DataLoader
from dataset import AdultDataset
So when I try to create a 3-layer feed-forward neural network
in PyTorch that takes the dataset entries as input and classifies
whether an individual earns more or less than 50K (i.e., the fnlwgt label),
starting with the code below, I get an error:
train_dataset = AdultDataset(X_train, y_train)
test_dataset = AdultDataset(X_test, y_test)
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-49-4ab31bdb6258> in <module>
1 # Using norm_D01
2
----> 3 train_dataset = AdultDataset(X_train, y_train)
4 test_dataset = AdultDataset(X_test, y_test)
5
NameError: name 'AdultDataset' is not defined
| I believe your problem can be fixed like this:
from aif360.datasets import AdultDataset
Maybe you are using an old guide?
| https://stackoverflow.com/questions/65527107/ |
Pytorch torch.load ModuleNotFoundError: No module named 'utils' | I'm trying to load a pretrained model with torch.load.
I get the following error:
ModuleNotFoundError: No module named 'utils'
I've checked that the path I am using is correct by opening it from the command line. What could be causing this?
Here's my code:
import torch
import sys
PATH = './gan.pth'
model = torch.load(PATH)
model.eval()
EDIT:
Entire error stack:
Traceback (most recent call last):
File "load.py", line 6, in <module>
model = torch.load(PATH)
File "C:\Users\user\anaconda3\envs\pytorch-flask\lib\site-packages\torch\serialization.py", line 595, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "C:\Users\user\anaconda3\envs\pytorch-flask\lib\site-packages\torch\serialization.py", line 774, in _legacy_load
result = unpickler.load()
ModuleNotFoundError: No module named 'utils'
| EDIT: this answer doesn't resolve the question itself, but it addresses another issue in the given code.
A .pth file produced with torch.save(model.state_dict(), ...) just stores the parameters of a model, not the model itself. To load such a model you need the .pt/.pth file and the Python code of your model class. Then you can load it like this:
import torch
import torch.nn as nn

# your model
class YourModel(nn.Module):
    def __init__(self):
        super(YourModel, self).__init__()
        . . .

    def forward(self, x):
        . . .

# the pytorch save-file in which you stored your trained parameters
model_file = "<your path>"
model = YourModel()
model.load_state_dict(torch.load(model_file))  # note: modifies the model in place
model.eval()
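As for the original ModuleNotFoundError itself: it means gan.pth was saved with torch.save(model) (the whole pickled object), so unpickling it needs to import the modules the model's classes were defined in, including one named utils. A sketch of the workaround, assuming you still have the original training code (the path is illustrative):
import sys
import torch

# make the original project (the folder containing utils.py) importable again
sys.path.append("/path/to/original/gan/project")

model = torch.load("./gan.pth")
model.eval()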
| https://stackoverflow.com/questions/65538179/ |
How do I use a .pickle file to predict an image? | I have trained a CNN model in PyTorch to detect skin diseases in 6 different classes. My model came out with an accuracy of 92% and I saved it in a .pickle file. I wish to use this model for predictions but I don't know how to do so. If anyone can aid me in the necessary steps, I will be grateful. I have tried using Streamlit but apparently, Streamlit does not work anymore so I am opting for an offline solution where I can just upload an image and the model will give me a prediction of such.
Here is the code for my model. I used a pre-trained ResNet18 model and trained it on the Skin Cancer MNIST: HAM10000 dataset from Kaggle.
def set_parameter_requires_grad(model, feature_extracting):
if feature_extracting:
for param in model.parameters():
param.requires_grad = False
def initialize_model(model_name, num_classes, feature_extract, use_pretrained=True):
# Initialize these variables which will be set in this if statement. Each of these
# variables is model specific.
model_ft = None
input_size = 0
if model_name == "resnet":
""" Resnet18, resnet34, resnet50, resnet101
"""
model_ft = models.resnet18(pretrained=use_pretrained)
set_parameter_requires_grad(model_ft, feature_extract)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, num_classes)
input_size = 224
elif model_name == "vgg":
""" VGG11_bn
"""
model_ft = models.vgg11_bn(pretrained=use_pretrained)
set_parameter_requires_grad(model_ft, feature_extract)
num_ftrs = model_ft.classifier[6].in_features
model_ft.classifier[6] = nn.Linear(num_ftrs,num_classes)
input_size = 224
elif model_name == "densenet":
""" Densenet121
"""
model_ft = models.densenet121(pretrained=use_pretrained)
set_parameter_requires_grad(model_ft, feature_extract)
num_ftrs = model_ft.classifier.in_features
model_ft.classifier = nn.Linear(num_ftrs, num_classes)
input_size = 224
elif model_name == "inception":
""" Inception v3
"""
model_ft = models.inception_v3(pretrained=use_pretrained)
set_parameter_requires_grad(model_ft, feature_extract)
# Handle the auxilary net
num_ftrs = model_ft.AuxLogits.fc.in_features
model_ft.AuxLogits.fc = nn.Linear(num_ftrs, num_classes)
# Handle the primary net
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs,num_classes)
input_size = 299
else:
print("Invalid model name, exiting...")
exit()
return model_ft, input_size
# resnet,vgg,densenet,inception
model_name = 'resnet'
num_classes = 7
feature_extract = False
# Initialize the model for this run
model_ft, input_size = initialize_model(model_name, num_classes, feature_extract, use_pretrained=True)
# Define the device:
device = torch.device('cuda:0')
# Put the model on the device:
model = model_ft.to(device)
# norm_mean = (0.49139968, 0.48215827, 0.44653124)
# norm_std = (0.24703233, 0.24348505, 0.26158768)
# define the transformation of the train images.
train_transform = transforms.Compose([transforms.Resize((input_size,input_size)),transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),transforms.RandomRotation(20),
transforms.ColorJitter(brightness=0.1, contrast=0.1, hue=0.1),
transforms.ToTensor(), transforms.Normalize(norm_mean, norm_std)])
# define the transformation of the val images.
val_transform = transforms.Compose([transforms.Resize((input_size,input_size)), transforms.ToTensor(),
transforms.Normalize(norm_mean, norm_std)])
# Define a pytorch dataloader for this dataset
class HAM10000(Dataset):
def __init__(self, df, transform=None):
self.df = df
self.transform = transform
def __len__(self):
return len(self.df)
def __getitem__(self, index):
# Load data and get label
X = Image.open(self.df['path'][index])
y = torch.tensor(int(self.df['cell_type_idx'][index]))
if self.transform:
X = self.transform(X)
return X, y
# Define the training set using the table train_df and using our defined transitions (train_transform)
training_set = HAM10000(df_train, transform=train_transform)
train_loader = DataLoader(training_set, batch_size=64, shuffle=True, num_workers=4)
# Same for the validation set:
validation_set = HAM10000(df_val, transform=train_transform)
val_loader = DataLoader(validation_set, batch_size=64, shuffle=False, num_workers=4)
# we use Adam optimizer, use cross entropy loss as our loss function
optimizer = optim.Adam(model.parameters(), lr=1e-5)
criterion = nn.CrossEntropyLoss().to(device)
Here are the training process and the save file.
class AverageMeter(object):
def __init__(self):
self.reset()
def reset(self):
self.val = 0
self.avg = 0
self.sum = 0
self.count = 0
def update(self, val, n=1):
self.val = val
self.sum += val * n
self.count += n
self.avg = self.sum / self.count
total_loss_train, total_acc_train = [],[]
def train(train_loader, model, criterion, optimizer, epoch):
model.train()
train_loss = AverageMeter()
train_acc = AverageMeter()
curr_iter = (epoch - 1) * len(train_loader)
for i, data in enumerate(train_loader):
images, labels = data
N = images.size(0)
# print('image shape:',images.size(0), 'label shape',labels.size(0))
images = Variable(images).to(device)
labels = Variable(labels).to(device)
optimizer.zero_grad()
outputs = model(images)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
prediction = outputs.max(1, keepdim=True)[1]
train_acc.update(prediction.eq(labels.view_as(prediction)).sum().item()/N)
train_loss.update(loss.item())
curr_iter += 1
if (i + 1) % 100 == 0:
print('[epoch %d], [iter %d / %d], [train loss %.5f], [train acc %.5f]' % (
epoch, i + 1, len(train_loader), train_loss.avg, train_acc.avg))
total_loss_train.append(train_loss.avg)
total_acc_train.append(train_acc.avg)
return train_loss.avg, train_acc.avg
def validate(val_loader, model, criterion, optimizer, epoch):
model.eval()
val_loss = AverageMeter()
val_acc = AverageMeter()
with torch.no_grad():
for i, data in enumerate(val_loader):
images, labels = data
N = images.size(0)
images = Variable(images).to(device)
labels = Variable(labels).to(device)
outputs = model(images)
prediction = outputs.max(1, keepdim=True)[1]
val_acc.update(prediction.eq(labels.view_as(prediction)).sum().item()/N)
val_loss.update(criterion(outputs, labels).item())
print('------------------------------------------------------------')
print('[epoch %d], [val loss %.5f], [val acc %.5f]' % (epoch, val_loss.avg, val_acc.avg))
print('------------------------------------------------------------')
return val_loss.avg, val_acc.avg
if os.path.exists("Tested_model2.pickle"):
print("Loading Trained Model")
model = pickle.load(open("Tested_model2.pickle", "rb"))
print(model)
else:
print("Training New Model.")
print("Training begins.")
print("********************************************************")
epoch_num = 25
load_model = True
best_val_acc = 0
total_loss_val, total_acc_val = [],[]
for epoch in range(1, epoch_num+1):
loss_train, acc_train = train(train_loader, model, criterion, optimizer, epoch)
loss_val, acc_val = validate(val_loader, model, criterion, optimizer, epoch)
total_loss_val.append(loss_val)
total_acc_val.append(acc_val)
if acc_val > best_val_acc:
best_val_acc = acc_val
print('*****************************************************')
print('best record: [epoch %d], [val loss %.5f], [val acc %.5f]' % (epoch, loss_val, acc_val))
print('*****************************************************')
with open ("Tested_model2.pickle", "wb") as file:
pickle.dump(model, file)
To put it simply, I want to know how to use the pickle file for predictions.
Edit: I added the evaluation part in the following, please help me to understand how can I proceed further with this code.
model.eval()
y_label = []
y_predict = []
with torch.no_grad():
for i, data in enumerate(val_loader):
images, labels = data
N = images.size(0)
images = Variable(images).to(device)
outputs = model(images)
prediction = outputs.max(1, keepdim=True)[1]
y_label.extend(labels.cpu().numpy())
y_predict.extend(np.squeeze(prediction.cpu().numpy().T))
Also, this is the code that I previously used to load the model and make predictions; however, I never found out whether the code or the method is correct.
%%writefile app.py
import streamlit as st
import torch
st.set_option('deprecation.showfileUploaderEncoding', False)
@st.cache(allow_output_mutation=True)
def load_model():
model = pickle.load(open("Trained_Model_part2.pickle", "rb"))
return model
model = load_model()
st.write("""
#Classification of skin disease
""")
file = st.file_uploader("Please upload the image of the affected area.", type = ["jpg", "png"])
import cv2
from PIL import Image, ImageOps
import numpy as np
def import_and_predict(image_data, model):
size = (224, 224)
image = ImageOps.fit(image_data, size, Image.ANTIALIAS)
img = np.asarray(image)
image_reshape = img[np.newaxis,...]
prediction = model.predict(img_reshape)
return prediction
if file is None:
st.text("Please upload an image file.")
else:
image = Image.open(file)
st.image(image, use_column_width = True)
predictions = import_and_predict(image, model)
class_names = ["Melanocytic nevi", "Melanoma", "Benign keratosis-like lesions", "Basal cell carcinoma", "Actinic keratoses", "Vascular lesions", "Dermatofibroma"]
string = "It is: " + class_names[np.argmax(predictions)]
st.success(string)
This uses Streamlit, and the loader is the previous pickle-file loader, which will be replaced by a .pth loader. I want to know what changes I have to make so the code will ask for an image input (or look for an image in a specific folder) and deliver a prediction. Thank you.
| I'll show you how to save and load pytorch model parameters properly (you should use the .pt extension):
To save the model do this (once every epoch or after training):
torch.save(model.state_dict(), "your/path/model_file.pt")
All the model parameters are now saved to "your/path/model_file.pt".
To load the model now you will need the model class (class YourModel(nn.Module): ...) and the parameters:
model = YourModel()
model.load_state_dict(torch.load("your/path/model_file.pt"))
The model is now initialized with the trained parameters and ready to use. For example like this:
model = YourModel()
model.load_state_dict(torch.load("your/path/model_file.pt"))
# set to evaluation mode
model.eval()
# load an image
sample = get_sample()
# add a batch dimension of 1, because you probably want to predict just one image at a time in real-life usage
sample = sample.reshape(1, sample.size(0), sample.size(1))  # or sample.unsqueeze(0)
prediction = model(sample)
Edit to answer the question in the comments:
To load a trained PyTorch model you need the file in which the model's parameters are saved and the model structure itself. The model structure is just the Python code of a PyTorch module class. You haven't built the model yourself, therefore you don't have the direct model code, but in your case it should be model_ft. It's just the Python class holding all the layers. So the model class is like the skeleton and the parameters are like the flesh, so to speak.
When you would create a model class completly yourself and load the trained weights into it it would look like this for example:
import torch
import torch.nn as nn
import torch.nn.functional as F

# the model (skeleton class)
class YourModel(nn.Module):
    def __init__(self):
        super(YourModel, self).__init__()
        self.dense1 = nn.Linear(128, 64)
        self.dense2 = nn.Linear(64, 2)

    def forward(self, x):
        x = F.relu(self.dense1(x))
        x = torch.sigmoid(self.dense2(x))
        return x

# . . .
# train model and save it to model.pt
# . . .

# load "empty" model
model = YourModel()
# load trained parameters/weights into the model
model.load_state_dict(torch.load("/path/model.pt"))
So as I said, in your case the model class should be model_ft.
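Putting it together for your case, a hedged end-to-end sketch (assumes you re-save the weights with torch.save(model.state_dict(), "skin_model.pt") instead of pickling the whole model, and reuse the norm_mean/norm_std from training; file names are illustrative):
from PIL import Image
import torch
from torchvision import transforms

model_ft, input_size = initialize_model("resnet", num_classes=7, feature_extract=False)
model_ft.load_state_dict(torch.load("skin_model.pt", map_location="cpu"))
model_ft.eval()

preprocess = transforms.Compose([
    transforms.Resize((input_size, input_size)),
    transforms.ToTensor(),
    transforms.Normalize(norm_mean, norm_std),
])

img = preprocess(Image.open("lesion.jpg").convert("RGB")).unsqueeze(0)  # add batch dim
with torch.no_grad():
    probs = torch.softmax(model_ft(img), dim=1)
print(probs.argmax(dim=1).item())  # index into your class_names list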
| https://stackoverflow.com/questions/65540507/ |
Pytorch : RuntimeError: mat1 dim 1 must match mat2 dim 0 | I am using a ResNet-50 model and customized the last layer, and it is showing a runtime error. I'm new to PyTorch and I keep getting the error mat1 dim 1 must match mat2 dim 0.
this is my code for the network
from torchvision import models
model = models.resnet50(pretrained=True)
for param in model.parameters():
param.requires_grad = False
class Identity(nn.Module):
def __init__(self):
super(Identity, self).__init__()
def forward(self, x):
return x
model.avgpool = Identity()
model.fc = nn.Linear(2048, 2, bias=True)
for param in model.fc.parameters():
param.requires_grad = True
model = model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.01)
def train(num_epoch, model):
for epoch in range(0, 3):
losses = []
model.train()
loop = tqdm(enumerate(train_loader), total=len(train_loader))
for batch_idx, (data, targets) in loop:
data = data.to(device=device)
targets = targets.to(device=device)
scores = model.forward(data)
loss = criterion(scores, targets)
optimizer.zero_grad()
losses.append(loss)
loss.backward()
optimizer.step()
loop.set_description(f"Epoch {epoch+1}/{num_epoch} process: {int((batch_idx / len(train_loader)) * 100)}")
loop.set_postfix(loss=loss.data.item())
train(1, model)
RuntimeError: mat1 dim 1 must match mat2 dim 0
| This error comes from the nn.Linear you changed.
As you recall, nn.Linear computes a simple matrix dot product, so the input dimension coming from the previous layer must equal the layer's in_features (you set it to 2048).
My guess is that since you removed the model.avgpool layer, you now have far more than 2048 input dimensions, resulting in the error you got.
BTW, you do not need to implement an "identity" layer yourself; PyTorch already has nn.Identity.
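Concretely: with a 224x224 input, ResNet-50's last feature map is (batch, 2048, 7, 7), and avgpool is what collapses it to (batch, 2048) before fc. Two hedged ways to fix it:
import torch.nn as nn

# option 1: keep the pooling (this is what the stock model does)
model.avgpool = nn.AdaptiveAvgPool2d((1, 1))
model.fc = nn.Linear(2048, 2, bias=True)

# option 2: really drop the pooling and size the linear layer to the
# flattened feature map (assumes 224x224 inputs, which give a 7x7 map)
model.avgpool = nn.Identity()
model.fc = nn.Linear(2048 * 7 * 7, 2, bias=True)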
| https://stackoverflow.com/questions/65543055/ |
create tensor in specific indexes without loops | I have a tensor t1 (with shape (2*n, 2*n)), and I need to create a tensor t2 (with shape (2*n,)) with the values of t1 at [i, (i+n) mod 2n] for each row i.
For example, given:
t1 = torch.tensor([[1, 2, 3, 4],
[5, 6, 7, 8],
[9 ,10,11,12],
[13,14,15,16]])
Here n=2.
t2 has to be [3,8,9,14].
I have this code:
t2 = torch.tensor([t1[i,(i+n)%(2*n)] for i in range(2*n)])
but I'm searching for a way without loops.
| You can build the index tensor without any Python loop using torch.arange, then pick the values from t1 with torch.gather:
>>> index = ((torch.arange(2*n) + n) % (2*n)).unsqueeze(1)
>>> torch.gather(t1, 1, index).flatten()
tensor([ 3, 8, 9, 14])
Alternatively, you can use torch.roll to shift by n an (2*n, 2*n) identity matrix, multiply by t1 to get a mask and apply that to t1 with torch.masked_select:
>>> mask = torch.eye(2*n).roll(n, 1).bool()
tensor([[False, False, True, False],
[False, False, False, True],
[ True, False, False, False],
[False, True, False, False]])
>>> torch.masked_select(t1, mask)
tensor([ 3, 8, 9, 14])
| https://stackoverflow.com/questions/65544574/ |
what does offsets mean in pytorch nn.EmbeddingBag? | I know what offsets means when it has two entries, but what does it mean when it has more than two? For example:
weight = torch.FloatTensor([[1, 2, 3], [4, 5, 6]])
embedding_sum = nn.EmbeddingBag.from_pretrained(weight, mode='sum')
print(list(embedding_sum.parameters()))
input = torch.LongTensor([0,1])
offsets = torch.LongTensor([0,1,2,1])
print(embedding_sum(input, offsets))
the result is :
[Parameter containing:
tensor([[1., 2., 3.],
[4., 5., 6.]])]
tensor([[1., 2., 3.],
[4., 5., 6.],
[0., 0., 0.],
[0., 0., 0.]])
Can anyone help me?
| import torch
import torch.nn as nn
weight = torch.FloatTensor([[1, 2, 3], [4, 5, 6]])
embedding_sum = nn.EmbeddingBag.from_pretrained(weight, mode='sum')
print(embedding_sum.weight)
""" output
Parameter containing:
tensor([[1., 2., 3.],
[4., 5., 6.]])
"""
input = torch.LongTensor([0, 1])
offsets = torch.LongTensor([0, 1, 2, 1])
According to these offsets you will get the following samples
"""
sample_1: input[0:1] # tensor([0])
sample_2: input[1:2] # tensor([1])
sample_3: input[2:1] # tensor([])
sample_4: input[1:] # tensor([1])
"""
Embedding the samples above
# tensor([0]) => lookup 0 => embedding_sum.weight[0] => [1., 2., 3.]
# tensor([1]) => lookup 1 => embedding_sum.weight[1] => [4., 5., 6.]
# tensor([]) => empty bag => [0., 0., 0.]
# tensor([1]) => lookup 1 => embedding_sum.weight[1] => [4., 5., 6.]
print(embedding_sum(input, offsets))
""" output
tensor([[1., 2., 3.],
[4., 5., 6.],
[0., 0., 0.],
[4., 5., 6.]])
"""
One more example:
input = torch.LongTensor([0, 1])
offsets = torch.LongTensor([0, 1, 0])
According to these offsets you will get the following samples
"""
sample_1: input[0:1] # tensor([0])
sample_2: input[1:0] # tensor([])
sample_3: input[0:] # tensor([0, 1])
"""
Embedding the samples above
# tensor([0]) => lookup 0 => embedding_sum.weight[0] => [1., 2., 3.]
# tensor([]) => empty bag => [0., 0., 0.]
# tensor([0, 1]) => lookup 0 and 1 then reduce by sum
# => embedding_sum.weight[0] + embedding_sum.weight[1] => [5., 7., 9.]
print(embedding_sum(input, offsets))
""" output
tensor([[1., 2., 3.],
[0., 0., 0.],
[5., 7., 9.]])
"""
| https://stackoverflow.com/questions/65547335/ |
How do I to average irregularly spaced x & y coordinate tensor into a grid with a specific cell size? | I have an algorithm that generates a tensor of irregularly spaced x and y coordinates (ex: torch.size([3600, 2])), and I need to average the points into grid cells of a specific size (ex: 8 by 8). The resulting grid needs to be either an array or tensor.
It's not required, but I would also like to be able to determine if any of the resulting cells have less than a specified number of points in them.
For example I can graph the tensor using matplotlib's plt.scatter, and it looks like this:
In the above example, 100,000 points exist but the number of points can sometimes be in the tens of millions.
I've tried using histogram approaches, and most of them use a specific number of cells vs a specific cell size. Matplotlib can seemingly do it in a graph, but that doesn't help me get an array or tensor.
Edit:
This code might work, if it can be made to work properly.
def grid_torch(x_coords, y_coords, grid_size=(8,8), x_extent=(0., 1.), y_extent=(0., 1.)):
x_coords = ((x_coords - x_extent[0]) / (x_extent[1] - x_extent[0])) * grid_size[0]
y_coords = ((y_coords - y_extent[0]) / (y_extent[1] - y_extent[0])) * grid_size[1]
x_list = []
for x in range(grid_size[0]):
x = torch.ones_like(x_coords) * x
y_list = []
for y in range(grid_size[1]):
y = torch.ones_like(y_coords) * y
in_bounds_x = torch.logical_and(x <= x_coords, x_coords <= x + 1)
in_bounds_y = torch.logical_and(y <= y_coords, y_coords <= y + 1)
in_bounds = torch.logical_and(in_bounds_x, in_bounds_y)
in_bounds_indices = torch.where(in_bounds)
print(in_bounds_indices)
y_list.append(in_bounds_indices)
x_list.append(torch.stack(y_list))
return torch.stack(x_list)
out = grid_torch(xy_tensor[:,0], xy_tensor[:,1])
print(out.shape)
def create_grid(grid_layout, activ, grid_size=(8,8), min_density=8):
cells = []
for x in range(grid_size[0]):
for y in range(grid_size[1]):
indices = grid_layout[x, y]
if len(indices) > min_density:
average_activation = torch.mean(activ[indices])
cells.append((average_activation, x, y))
print(average_activation, x, y)
return torch.stack(cells)
grid_test = create_grid(out, xy_tensor, grid_size=(8,8))
| I think this code would give you a good starting point.
import numpy as np
import torch

def grid_torch(x_coords, y_coords, grid_size=(8, 8), x_extent=(0., 1.), y_extent=(0., 1.)):
    # This part converts coordinates to bin numbers (like (2,5), (7,7) etc)
    x_bin = (((x_coords - x_extent[0]) / (x_extent[1] - x_extent[0])) * grid_size[0]).int()
    y_bin = (((y_coords - y_extent[0]) / (y_extent[1] - y_extent[0])) * grid_size[1]).int()

    counts = torch.zeros(grid_size)
    means = torch.zeros(list(grid_size) + [2])

    for x in range(grid_size[0]):
        for y in range(grid_size[1]):
            # these tensors are 1 where (x_bin == x and y_bin == y), 0 elsewhere
            x_where = 1 * (x_bin == x)
            y_where = 1 * (y_bin == y)
            p_where = (x_where * y_where)
            cnt = p_where.sum()
            counts[x, y] = cnt

            # we'll average both x and y coords separately.
            # you can embed min_density logic here.
            if cnt > 0:
                means[x, y, 0] = (x_coords * p_where).sum() / p_where.sum()
                means[x, y, 1] = (y_coords * p_where).sum() / p_where.sum()

    return counts, means
# Generate sample points
points = torch.tensor(np.concatenate([
np.random.normal(loc=0.2, scale=0.1, size=(1000, 2)),
np.random.normal(loc=0.6, scale=0.1, size=(1000, 2))
]).clip(0,1)).float()
# plt.scatter(points[:,0], points[:,1])
# plt.grid()
counts, means = grid_torch(points[:,0], points[:,1])
counts
>>>
tensor([[ 47., 114., 75., 10., 0., 0., 0., 0.],
[102., 204., 141., 27., 0., 0., 0., 0.],
[ 60., 101., 74., 16., 7., 4., 1., 0.],
[ 5., 17., 9., 23., 72., 51., 10., 0.],
[ 1., 1., 4., 54., 186., 141., 28., 3.],
[ 0., 0., 3., 47., 154., 117., 14., 0.],
[ 0., 0., 0., 9., 37., 24., 4., 0.],
[ 0., 0., 0., 2., 0., 1., 0., 0.]])
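If the double Python loop gets too slow for tens of millions of points, the binning can be vectorized with torch.bincount; a sketch reusing x_bin, y_bin, x_coords, y_coords from the function above:
n_cells = grid_size[0] * grid_size[1]
# clamp handles points sitting exactly on the upper extent
flat = (x_bin.long() * grid_size[1] + y_bin.long()).clamp(0, n_cells - 1)
counts = torch.bincount(flat, minlength=n_cells).reshape(grid_size).float()
sum_x = torch.bincount(flat, weights=x_coords, minlength=n_cells).reshape(grid_size)
sum_y = torch.bincount(flat, weights=y_coords, minlength=n_cells).reshape(grid_size)
mean_x = sum_x / counts.clamp(min=1)
mean_y = sum_y / counts.clamp(min=1)
sparse_cells = counts < 8  # cells with fewer than the minimum number of points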
| https://stackoverflow.com/questions/65551875/ |
PyTorch mini batch, when to call optimizer.zero_grad() | When we use mini batch, should I call optimizer.zero_grad() before starting the iteration? Or inside the iteration? I think the second code is correct, but I'm not sure.
nb_epochs = 20
for epoch in range(nb_epochs + 1):
optimizer.zero_grad() # THIS PART!!
for batch_idx, samples in enumerate(dataloader):
x_train, y_train = samples
prediction = model(x_train)
cost = F.mse_loss(prediction, y_train)
cost.backward()
optimizer.step()
print('Epoch {:4d}/{} Batch {}/{} Cost: {:.6f}'.format(
epoch, nb_epochs, batch_idx+1, len(dataloader),
cost.item()
))
or
nb_epochs = 20
for epoch in range(nb_epochs + 1):
for batch_idx, samples in enumerate(dataloader):
x_train, y_train = samples
prediction = model(x_train)
optimizer.zero_grad() #THIS PART!!
cost = F.mse_loss(prediction, y_train)
cost.backward()
optimizer.step()
print('Epoch {:4d}/{} Batch {}/{} Cost: {:.6f}'.format(
epoch, nb_epochs, batch_idx+1, len(dataloader),
cost.item()
))
Which one is correct? The only difference is location of optimizer.zero_grad().
| Gradients accumulate by default every time you call .backward() on the computational graph.
In the first snippet, you reset the gradients only once per epoch, so the gradients of all len(dataloader) batches accumulate on top of each other and are only cleared when the next epoch starts. In the second snippet, you are doing the right thing, which is to reset the gradients once per batch, right before the backward pass.
So your assumptions were right.
There are some instances where accumulating gradients is needed, but most times it's not.
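For completeness, a sketch of when deliberate accumulation is useful: simulating a larger batch size by stepping the optimizer only every accum_steps mini-batches (reusing the names from your snippets):
accum_steps = 4  # effective batch size = accum_steps * batch size

optimizer.zero_grad()
for batch_idx, samples in enumerate(dataloader):
    x_train, y_train = samples
    cost = F.mse_loss(model(x_train), y_train) / accum_steps  # scale so the sum matches one big batch
    cost.backward()                                           # gradients accumulate across iterations
    if (batch_idx + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()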
| https://stackoverflow.com/questions/65570250/ |
Torch throws a RuntimeError: element 0 of tensors does not require grad... but can't find where computational graph is severed | I am getting the above error:
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
I looked this up and it looks like the computational graph is not connected for some reason. However, I cannot find the location where the graph is severed.
My code is a reproduction of the arjovsky WGAN: https://github.com/martinarjovsky/WassersteinGAN
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import os
import json
import random
import numpy as np  # needed for np.stack below
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.optim as optim
import torch.utils.data
import torchvision.datasets as dset
import torchvision.transforms as transforms
import torchvision.utils as vutils
from torch.autograd import Variable
class MLP_G(nn.Module):
def __init__(self, isize, nz, ngf, ngpu):
super(MLP_G, self).__init__()
self.ngpu = ngpu
main = nn.Sequential(
# Z goes into a linear of size: ngf
nn.Linear(nz, ngf),
nn.ReLU(True),
nn.Linear(ngf, ngf),
nn.ReLU(True),
nn.Linear(ngf, ngf),
nn.ReLU(True),
nn.Linear(ngf, isize),
)
self.main = main
self.isize = isize
self.nz = nz
def forward(self, input):
input = input.view(input.size(0), input.size(1))
if isinstance(input.data, torch.cuda.FloatTensor) and self.ngpu > 1:
output = nn.parallel.data_parallel(self.main, input, range(self.ngpu))
else:
output = self.main(input)
return output.view(output.size(0), self.isize)
class MLP_D(nn.Module):
def __init__(self, isize, nz, ndf, ngpu):
super(MLP_D, self).__init__()
self.ngpu = ngpu
main = nn.Sequential(
# Z goes into a linear of size: ndf
nn.Linear(isize, ndf),
nn.ReLU(True),
nn.Linear(ndf, ndf),
nn.ReLU(True),
nn.Linear(ndf, ndf),
nn.ReLU(True),
nn.Linear(ndf, 1),
)
self.main = main
self.isize = isize
self.nz = nz
def forward(self, input):
input = input.view(input.size(0),input.size(1))
if isinstance(input.data, torch.cuda.FloatTensor) and self.ngpu > 1:
output = nn.parallel.data_parallel(self.main, input, range(self.ngpu))
else:
output = self.main(input)
output = output.mean(0)
return output.view(1)
netG = None #path to saved generator
netD = None #discriminator path
batchSize = 1000 #size of batch (which is size of data)
cuda = False
lrD = lrG = .00005
beta1 = .5
niter = 25
experiment = '/content/drive/MyDrive/savefolder'
clamp_upper = .01
clamp_lower = -clamp_upper
manualSeed = random.randint(1, 10000) # fix seed
print("Random Seed: ", manualSeed)
random.seed(manualSeed)
torch.manual_seed(manualSeed)
cudnn.benchmark = True
dataset = torch.tensor(np.stack([x,y, instrument], axis = 1)).float().reshape(-1,3)
ngpu = 1
nz = 4 #three latents and the instrument
ngf = 128
ndf = 128
# custom weights initialization called on netG and netD
def weights_init(m):
classname = m.__class__.__name__
if classname.find('Conv') != -1:
m.weight.data.normal_(0.0, 0.02)
elif classname.find('BatchNorm') != -1:
m.weight.data.normal_(1.0, 0.02)
m.bias.data.fill_(0)
netG = MLP_G(2, nz, ngf, ngpu)
netG.apply(weights_init)
print(netG)
netD = MLP_D(3, nz, ndf, ngpu)
print(netD)
input = torch.FloatTensor(batchSize, 2)
noise = torch.FloatTensor(batchSize, nz-1)
fixed_noise = torch.FloatTensor(batchSize, nz-1).normal_(0, 1)
one = torch.FloatTensor([1])
mone = one * -1
# setup optimizer
optimizerD = optim.Adam(netD.parameters(), lr=lrD, betas=(beta1, 0.999))
optimizerG = optim.Adam(netG.parameters(), lr=lrG, betas=(beta1, 0.999))
real_cpu = data = dataset
gen_iterations = 0
for epoch in range(niter):
#data_iter = iter(dataloader)
############################
# (1) Update D network
###########################
for p in netD.parameters(): # reset requires_grad
p.requires_grad = True # they are set to False below in netG update
# train the discriminator Diters times
if gen_iterations < 25 or gen_iterations % 500 == 0:
Diters = 100
else:
Diters = 5
j = 0
while j < Diters:
j += 1
# clamp parameters to a cube
for p in netD.parameters():
p.data.clamp_(clamp_lower, clamp_upper)
# train with real
netD.zero_grad()
if cuda:
real_cpu = real_cpu.cuda()
input.resize_as_(real_cpu).copy_(real_cpu)
inputv = Variable(input, requires_grad=False)
errD_real = netD(inputv)
errD_real.backward(one)#Error Occurs here
# train with fake
noise.resize_(batchSize, nz-1).normal_(0, 1)
noisev = torch.cat([Variable(noise, requires_grad=False), dataset[:,2].reshape(-1,1)], 1)# totally freeze netG
fake = torch.cat([Variable(netG(noisev).data), dataset[:,2].view(-1,1)], 1)
inputv = fake
errD_fake = netD(inputv)
errD_fake.backward(mone)
errD = errD_real - errD_fake
optimizerD.step()
############################
# (2) Update G network
###########################
for p in netD.parameters():
p.requires_grad = False # to avoid computation
netG.zero_grad()
# in case our last batch was the tail batch of the dataloader,
# make sure we feed a full batch of noise
noise.resize_(batchSize, nz-1).normal_(0, 1)
noisev = torch.cat([Variable(noise), dataset[:,2].view(-1,1)], 1)
fake = torch.cat([netG(noisev), dataset[:,2].view(-1,1)], 1)
errG = netD(fake)
errG.backward(one)
optimizerG.step()
gen_iterations += 1
i = 0
print('[%d/%d][%d] Loss_D: %f Loss_G: %f Loss_D_real: %f Loss_D_fake %f'
% (epoch, niter, gen_iterations,
errD.data[0], errG.data[0], errD_real.data[0], errD_fake.data[0]))
# if gen_iterations % 500 == 0:
# real_cpu = real_cpu.mul(0.5).add(0.5)
# vutils.save_image(real_cpu, '{0}/real_samples.png'.format(opt.experiment))
# fake = netG(Variable(fixed_noise, volatile=True))
# fake.data = fake.data.mul(0.5).add(0.5)
# vutils.save_image(fake.data, '{0}/fake_samples_{1}.png'.format(opt.experiment, gen_iterations))
# do checkpointing
torch.save(netG.state_dict(), '{0}/netG_epoch_{1}.pth'.format(experiment, epoch))
torch.save(netD.state_dict(), '{0}/netD_epoch_{1}.pth'.format(experiment, epoch))
Error occurs on the line: errD_real.backward(one). The error might have something to do with the computational graph being cleared, as the code runs for one iteration and then throws the error. Thanks for your help.
 | You most certainly need to add requires_grad=True on one. You could define it as:
one = torch.tensor([1], dtype=torch.float16, requires_grad=True)
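Applied to the snippet in the question, that means replacing the two lines that define one and mone; a sketch (if float16 causes issues on CPU, the default float32 is also worth trying):
one = torch.tensor([1.0], requires_grad=True)
mone = one * -1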
| https://stackoverflow.com/questions/65570549/ |
Tensorboard: All experiments were written as one (without provided tags) | I wanted to compare several runs, which I did in a loop, creating a new SummaryWriter instance for each like this:
for experiment_name in experiments:
logger = SummaryWriter(self._log_path, comment=experiment_name)
...
for epoch in range(5):
...
logger.add_scalar("Epoch Loss", loss, epoch)
...
logger.close()
In the log path I got several files like this:
events.out.tfevents.1609675249.nlp-vm.13735.0
events.out.tfevents.1609679736.nlp-vm.13735.1
events.out.tfevents.1609687200.nlp-vm.13735.2
events.out.tfevents.1609691662.nlp-vm.13735.3
events.out.tfevents.1609699158.nlp-vm.13735.4
events.out.tfevents.1609703743.nlp-vm.13735.5
events.out.tfevents.1609711308.nlp-vm.13735.6
events.out.tfevents.1609716054.nlp-vm.13735.7
But TensorBoard displays all runs as one:
(screenshots omitted: the dashboard shows a single combined run instead of separate runs per experiment)
Could you tell me what I should do to fix this, and can I do it without re-running all the experiments?
 | The event files that I had in one folder should each be in a separate folder; the folder name will then be displayed as the experiment name.
Also found the important note in the SummaryWriter documentation:
comment (string): Comment log_dir suffix appended to the default
log_dir. If log_dir is assigned, this argument has no effect.
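To answer the second part: yes, you can fix the existing runs without re-running them by moving each events.out.tfevents.* file into its own subfolder, since TensorBoard treats every subfolder of the log directory as a separate run. For future runs, a minimal sketch that builds a per-experiment log_dir instead of relying on comment:
import os

for experiment_name in experiments:
    # one subfolder per experiment; its name shows up as the run name in TensorBoard
    logger = SummaryWriter(os.path.join(self._log_path, experiment_name))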
| https://stackoverflow.com/questions/65575183/ |
torchtext ImportError in colab | I am trying to run this tutorial in colab.
However, when I try to import a bunch of modules:
import io
import torch
from torchtext.utils import download_from_url, extract_archive
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator
It gives me the errors for extract_archive and build_vocab_from_iterator:
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-5-a24e72502dbc> in <module>()
1 import io
2 import torch
----> 3 from torchtext.utils import download_from_url, extract_archive
4 from torchtext.data.utils import get_tokenizer
5 from torchtext.vocab import build_vocab_from_iterator
ImportError: cannot import name 'extract_archive'
ImportError Traceback (most recent call last)
<ipython-input-4-02a401fd241b> in <module>()
3 from torchtext.utils import download_from_url
4 from torchtext.data.utils import get_tokenizer
----> 5 from torchtext.vocab import build_vocab_from_iterator
6
7 url = 'https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-v1.zip'
ImportError: cannot import name 'build_vocab_from_iterator'
Please help me with this one.
| You need to upgrade torchtext first
!pip install -U torchtext==0.8.0
Currently, version 0.8.0 works with torch 1.7.0 (no need to upgrade torch or torchvision).
Update (Sep 2021)
Currently, torchtext is already at 0.10.0 and you don't need to upgrade anything.
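If you are unsure which version your Colab runtime currently ships, you can check before upgrading:
import torchtext
print(torchtext.__version__)  # the imports in the question need >= 0.8.0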
| https://stackoverflow.com/questions/65575871/ |
Pytorch transformer forward function masks implementation for decoder forward function | I am trying to use and learn the PyTorch Transformer with the DeepMind math dataset. I have a tokenized (char, not word) sequence that is fed into the model. The model's forward function does one forward pass for the encoder and multiple forward passes for the decoder (until all outputs in the batch reach the <eos> token; this is still a TODO).
I am struggling with the Transformer masks and the decoder forward pass, as it throws the error:
k = k.contiguous().view(-1, bsz * num_heads, head_dim).transpose(0, 1)
RuntimeError: shape '[-1, 24, 64]' is invalid for input of size 819200.
Source is N = 32, S = 50, E = 512. Target is N = 32, S = 3, E = 512.
It is possible that my mask implementation is wrong, or that the differing source and target lengths are the problem; I'm not really sure.
import math
import numpy as np
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
# function to positionally encode src and target sequencies
def __init__(self, d_model, dropout=0.1, max_len=5000):
super(PositionalEncoding, self).__init__()
self.dropout = nn.Dropout(p=dropout)
pe = torch.zeros(max_len, d_model)
position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
pe[:, 0::2] = torch.sin(position * div_term)
pe[:, 1::2] = torch.cos(position * div_term)
pe = pe.unsqueeze(0).transpose(0, 1)
self.register_buffer('pe', pe)
def forward(self, x):
x = x + self.pe[:x.size(0), :]
return self.dropout(x)
class MyTransformerModel(nn.Module):
# should implement init and forward function
# define separate functions for masks
# define forward function with
# implement:
# embedding layer
# positional encoding
# encoder layer
# decoder layer
# final classification layer
# encoder -> forward once
# decoder -> forward multiple times (for one encoder forward)
# decoder output => concatenate to input e.g. decoder_input = torch.cat([decoder_input], [decoder_output])
# early stopping => all in batch reach <eos> token
def __init__(self, vocab_length = 30, sequence_length = 512, num_encoder_layers = 3, num_decoder_layers = 2, num_hidden_dimension = 256, feed_forward_dimensions = 1024, attention_heads = 8, dropout = 0.1, pad_idx = 3, device = "CPU", batch_size = 32):
super(MyTransformerModel, self).__init__()
self.src_embedding = nn.Embedding(vocab_length, sequence_length)
self.pos_encoder = PositionalEncoding(sequence_length, dropout)
self.src_mask = None # attention mask
self.memory_mask = None # attention mask
self.pad_idx = pad_idx
self.device = device
self.batch_size = batch_size
self.transformer = nn.Transformer(
sequence_length,
attention_heads,
num_encoder_layers,
num_decoder_layers,
feed_forward_dimensions,
dropout,
)
def src_att_mask(self, src_len):
mask = (torch.triu(torch.ones(src_len, src_len)) == 1).transpose(0, 1)
mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
return mask
def no_peak_att_mask(self, batch_size, src_len, time_step):
mask = np.zeros((batch_size, src_len), dtype=bool)
mask[:, time_step: ] = 1 # np.NINF
mask = torch.from_numpy(mask)
return mask
def make_src_key_padding_mask(self, src):
# mask "<pad>"
src_mask = src.transpose(0, 1) == self.pad_idx
return src_mask.to(self.device)
def make_trg_key_padding_mask(self, trg):
tgt_mask = trg.transpose(0, 1) == self.pad_idx
return tgt_mask.to(self.device)
def forward(self, src, trg):
src_seq_length, N = src.shape
trg_seq_length, N = trg.shape
embed_src = self.src_embedding(src)
position_embed_src = self.pos_encoder(embed_src)
embed_trg = self.src_embedding(trg)
position_embed_trg = self.pos_encoder(embed_trg)
src_padding_mask = self.make_src_key_padding_mask(src)
trg_padding_mask = self.make_trg_key_padding_mask(trg)
trg_mask = self.transformer.generate_square_subsequent_mask(trg_seq_length).to(self.device)
time_step = 1
att_mask = self.no_peak_att_mask(self.batch_size, src_seq_length, time_step).to(self.device)
encoder_output = self.transformer.encoder.forward(position_embed_src, src_key_padding_mask = src_padding_mask)
# TODO : implement loop for transformer decoder forward fn, implement early stopping
# where to feed decoder_output?
decoder_output = self.transformer.decoder.forward(position_embed_trg, encoder_output, trg_mask, att_mask, trg_padding_mask, src_padding_mask)
return decoder_output
Can anyone pinpoint where I have made a mistake?
 | It looks like I messed up the dimension order (nn.Transformer does not have a batch-first option). The corrected code is below:
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from einops import rearrange
# PositionalEncoding is the class defined in the question above

class MyTransformerModel(nn.Module):
def __init__(self, d_model = 512, vocab_length = 30, sequence_length = 512, num_encoder_layers = 3, num_decoder_layers = 2, num_hidden_dimension = 256, feed_forward_dimensions = 1024, attention_heads = 8, dropout = 0.1, pad_idx = 3, device = "CPU", batch_size = 32):
#, ninp, device, nhead=8, nhid=2048, nlayers=2, dropout=0.1, src_pad_idx = 1, max_len=5000, forward_expansion= 4):
super(MyTransformerModel, self).__init__()
self.src_embedding = nn.Embedding(vocab_length, d_model)
self.pos_encoder = PositionalEncoding(d_model, dropout)
self.vocab_length = vocab_length
self.d_model = d_model
self.src_mask = None # attention mask
self.memory_mask = None # attention mask
self.pad_idx = pad_idx
self.device = device
self.batch_size = batch_size
self.transformer = nn.Transformer(
d_model,
attention_heads,
num_encoder_layers,
num_decoder_layers,
feed_forward_dimensions,
dropout,
)
self.fc = nn.Linear(d_model, vocab_length)
# self.init_weights() <= used in tutorial
def src_att_mask(self, src_len):
mask = (torch.triu(torch.ones(src_len, src_len)) == 1).transpose(0, 1)
mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
return mask
def no_peak_att_mask(self, batch_size, src_len, time_step):
mask = np.zeros((batch_size, src_len), dtype=bool)
mask[:, time_step: ] = 1 # np.NINF
mask = torch.from_numpy(mask)
# mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
return mask
def make_src_key_padding_mask(self, src):
# mask "<pad>"
src_mask = src.transpose(0, 1) == self.pad_idx
# src_mask = src == self.pad_idx
# (N, src_len)
return src_mask.to(self.device)
def make_trg_key_padding_mask(self, trg):
# same as above -> expected tgt_key_padding_mask: (N, T)
tgt_mask = trg.transpose(0, 1) == self.pad_idx
# tgt_mask = trg == self.pad_idx
# (N, src_len)
return tgt_mask.to(self.device)
def init_weights(self):
initrange = 0.1
nn.init.uniform_(self.encoder.weight, -initrange, initrange)
nn.init.zeros_(self.decoder.weight)
nn.init.uniform_(self.decoder.weight, -initrange, initrange)
def forward(self, src, trg):
N, src_seq_length = src.shape
N, trg_seq_length = trg.shape
# S - source sequence length
# T - target sequence length
# N - batch size
# E - feature number
# src: (S, N, E) (sourceLen, batch, features)
# tgt: (T, N, E)
# src_mask: (S, S)
# tgt_mask: (T, T)
# memory_mask: (T, S)
# src_key_padding_mask: (N, S)
# tgt_key_padding_mask: (N, T)
# memory_key_padding_mask: (N, S)
src = rearrange(src, 'n s -> s n')
trg = rearrange(trg, 'n t -> t n')
print("src shape {}".format(src.shape))
print(src)
print("trg shape {}".format(trg.shape))
print(trg)
embed_src = self.src_embedding(src)
print("embed_src shape {}".format(embed_src.shape))
print(embed_src)
position_embed_src = self.pos_encoder(embed_src)
print("position_embed_src shape {}".format(position_embed_src.shape))
print(position_embed_src)
embed_trg = self.src_embedding(trg)
print("embed_trg shape {}".format(embed_trg.shape))
print(embed_trg)
position_embed_trg = self.pos_encoder(embed_trg)
# position_embed_trg = position_embed_trg.transpose(0, 1)
print("position_embed_trg shape {}".format(position_embed_trg.shape))
print(position_embed_trg)
src_padding_mask = self.make_src_key_padding_mask(src)
print("KEY - src_padding_mask shape {}".format(src_padding_mask.shape))
print("should be of shape: src_key_padding_mask: (N, S)")
print(src_padding_mask)
trg_padding_mask = self.make_trg_key_padding_mask(trg)
print("KEY - trg_padding_mask shape {}".format(trg_padding_mask.shape))
print("should be of shape: trg_key_padding_mask: (N, T)")
print(trg_padding_mask)
trg_mask = self.transformer.generate_square_subsequent_mask(trg_seq_length).to(self.device)
print("trg_mask shape {}".format(trg_mask.shape))
print("trg_mask should be of shape tgt_mask: (T, T)")
print(trg_mask)
# att_mask = self.src_att_mask(trg_seq_length).to(self.device)
time_step = 1
# error => memory_mask: expected shape! (T, S) !!! this is not a key_padding_mask!
# att_mask = self.no_peak_att_mask(self.batch_size, src_seq_length, time_step).to(self.device)
# print("att_mask shape {}".format(att_mask.shape))
# print("att_mask should be of shape memory_mask: (T, S)")
# print(att_mask)
att_mask = None
# get encoder output
# forward(self, src: Tensor, mask: Optional[Tensor] = None, src_key_padding_mask: Optional[Tensor] = None)
# forward encoder just once for a batch
# attention forward of encoder expects => src, src_mask, src_key_padding_mask +++ possible positional encoding error !!!
encoder_output = self.transformer.encoder.forward(position_embed_src, src_key_padding_mask = src_padding_mask)
print("encoder_output")
print("encoder_output shape {}".format(encoder_output.shape))
print(encoder_output)
# forward decoder till all in batch did not reach <eos>?
# def forward(self, tgt: Tensor, memory: Tensor, tgt_mask: Optional[Tensor] = None,
# memory_mask: Optional[Tensor] = None, tgt_key_padding_mask: Optional[Tensor] = None,
# memory_key_padding_mask: Optional[Tensor] = None)
# first forward
decoder_output = self.transformer.decoder.forward(position_embed_trg, encoder_output, trg_mask, att_mask, trg_padding_mask, src_padding_mask)
# TODO: target in => target out shifted by one, loop till all in batch meet stopping criteria || max len is reached
#
print("decoder_output")
print("decoder_output shape {}".format(decoder_output.shape))
print(decoder_output)
output = rearrange(decoder_output, 't n e -> n t e')
output = self.fc(output)
print("output")
print("output shape {}".format(output.shape))
print(output)
predicted = F.log_softmax(output, dim=-1)
print("predicted")
print("predicted shape {}".format(predicted.shape))
print(predicted)
# top k
top_value, top_index = torch.topk(predicted, k=1)
top_index = torch.squeeze(top_index)
print("top_index")
print("top_index shape {}".format(top_index.shape))
print(top_index)
print("top_value")
print("top_value shape {}".format(top_value.shape))
print(top_value)
return top_index
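As a side note, if you'd rather avoid the einops dependency, the rearrange calls above can be replaced with plain permute; a sketch of the equivalents:
src = src.permute(1, 0)                   # 'n s -> s n'
trg = trg.permute(1, 0)                   # 'n t -> t n'
output = decoder_output.permute(1, 0, 2)  # 't n e -> n t e'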
| https://stackoverflow.com/questions/65588829/ |
What should the output size of an image classifier model be? | I'm performing an image classification task. Images are labeled as 0, 1, or 2. Should the size of the last linear layer's output in the model be 3 or 1? In general, for a 3-class problem, the output is set to 3 and the class with the maximum probability among the three is returned. But I have seen the last layer set to 1 in some code, and I think that is actually logical too. What do you think? (Also, I don't use a softmax or sigmoid function in the last layer.)
 | To perform classification into c classes (c = 3 in your example) you need to predict the probability of each class, therefore the last layer should produce a c-dimensional output.
Usually you do not explicitly apply softmax to the "raw predictions" (aka "logits") - the loss function usually does that for you in a more numerically-robust way (see, e.g., nn.CrossEntropyLoss).
After you trained the model, at inference time you can take argmax over the predicted c logits and output a single scalar - the index of the predicted class. This can only be done during inference since argmax is not a differentiable operation.
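A minimal sketch of both stages (the feature size of 128 and the batch size of 8 are arbitrary assumptions for illustration):
import torch
import torch.nn as nn

num_classes = 3
classifier_head = nn.Linear(128, num_classes)  # last layer: one raw logit per class
criterion = nn.CrossEntropyLoss()              # applies log-softmax + NLL internally

logits = classifier_head(torch.randn(8, 128))                  # (batch, num_classes)
loss = criterion(logits, torch.randint(0, num_classes, (8,)))  # training

predicted_class = logits.argmax(dim=1)         # inference: one scalar index per sample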
| https://stackoverflow.com/questions/65592554/ |
Pytorch lightning Datamodule override warning: Signature of method '.setup()' does not match signature of base method in class 'LightningDataModule' | The following is a working PyTorch Lightning DataModule.
import os
from pytorch_lightning import LightningDataModule
import torchvision.datasets as datasets
from torchvision.transforms import transforms
import torch
from torch.utils.data import DataLoader
from Testing.Research.config.paths import mnist_data_download_folder
class PressureDataModule(LightningDataModule):
def __init__(self, config):
super().__init__()
self._config = config
def prepare_data(self):
pass
def setup(self, stage):
# transform
transform = transforms.Compose([transforms.ToTensor()])
mnist_train_full = datasets.MNIST(mnist_data_download_folder, train=True, download=False, transform=transform)
mnist_test = datasets.MNIST(mnist_data_download_folder, train=False, download=False, transform=transform)
# train/val split
train_size = int(self._config.train_size /
(self._config.train_size + self._config.val_size) * len(mnist_train_full))
val_size = len(mnist_train_full) - train_size
mnist_train, mnist_val = torch.utils.data.random_split(mnist_train_full, [train_size, val_size])
# assign to use in dataloaders
self._train_dataset = mnist_train
self._val_dataset = mnist_val
self._test_dataset = mnist_test
def train_dataloader(self):
return DataLoader(self._train_dataset, batch_size=self._config.batch_size, num_workers=self._config.num_workers)
def val_dataloader(self):
return DataLoader(self._val_dataset, batch_size=self._config.batch_size, num_workers=self._config.num_workers)
def test_dataloader(self):
return DataLoader(self._test_dataset, batch_size=self._config.batch_size, num_workers=self._config.num_workers)
PyCharm doesn't like setup, warning:
Signature of method 'PressureDataModule.setup()' does not match
signature of base method in class 'LightningDataModule'
Why is PyCharm complaining about a signature mismatch?
Is it because of the different parameters? What is the correct number of parameters?
What's the correct way to fix this?
 | It seems that simply copy-pasting the parent method's signature solves this issue (note that Optional has to be imported from typing):
from typing import Optional

def setup(self, stage: Optional[str] = None) -> None:
    ...
| https://stackoverflow.com/questions/65594849/ |