instruction | input | output | source
---|---|---|---|
How to shrink a 2D tensor to another 2D tensor using a boolean mask? | Say I have a 2D PyTorch tensor and a 2D numpy boolean array as follows,
a = torch.tensor([[ 0., 1., 2.],
[ 3., 4., 5.],
[ 6., 7., 8.],
[ 9., 10., 11.],
[12., 13., 14.]])
m = numpy.array([[ False, True, False],
[ True, False, True],
[ False, True, True],
[ False, False, False],
[ True, False, False]])
They have the same shape, and the number of Trues in each column of m is the same.
I need to get the 2x3 tensor that is
a.transpose(0,1).masked_select(torch.from_numpy(m.transpose())).reshape(a.shape[1],-1).transpose(0,1)
which is
tensor([[ 3., 1., 5.],
[12., 7., 8.]])
The actual tensor is very large, and the operation needs to be performed many times. So I want to ask: what is an efficient way of doing this (or the most efficient way)?
| In my benchmarks, a jitted numba solution is the fastest I could find.
Benchmarks for a, m with shape (10000, 200) (all solutions produce equal result tensors):

# | method | time
---|---|---
1 | @numba.jit | 13.2 ms (3.46x)
2 | list comprehension | 31.3 ms (1.46x)
3 | baseline | 45.7 ms (1.00x)
Generation of sufficiently large sample data for benchmarking
import torch
import numpy as np

def generate_data(rows=500, columns=100):
    a = torch.from_numpy(np.random.uniform(1, 10, (rows, columns)).astype(np.float32))
    # argsort trick by @divakar https://stackoverflow.com/a/55317373/14277722
    def shuffle_along_axis(a, axis):
        idx = np.random.rand(*a.shape).argsort(axis=axis)
        return np.take_along_axis(a, idx, axis=axis)
    m = shuffle_along_axis(np.full((columns, rows), np.random.randint(2, size=rows)), 1).astype('bool').T
    return a, np.ascontiguousarray(m)

a, m = generate_data(10000, 200)
a, m = generate_data(10000,200)
A jitted numba implementation
import numba as nb

@nb.njit
def gather2d(arr1, arr2):
    res = np.zeros((np.count_nonzero(arr2[:, 0]), arr1.shape[1]), np.float32)
    counter = np.zeros(arr1.shape[1], dtype=np.intp)
    for i in range(arr1.shape[0]):
        for j in range(arr1.shape[1]):
            if arr2[i, j]:
                res[counter[j], j] = arr1[i, j]
                counter[j] += 1
    return res

torch.from_numpy(gather2d(a.numpy(), m))
Output
# %timeit 10 loops, best of 5: 13.2 ms per loop
tensor([[2.1846, 7.8890, 8.8218, ..., 4.8309, 9.2853, 6.4404],
[5.8842, 3.7332, 6.7436, ..., 1.2914, 3.2983, 3.5627],
[9.5128, 2.4283, 2.2152, ..., 4.9512, 9.7335, 9.6252],
...,
[7.3193, 7.8524, 9.6654, ..., 3.3665, 8.8926, 4.7660],
[1.3829, 1.3347, 6.6436, ..., 7.1956, 4.0446, 6.4633],
[6.4264, 3.6283, 3.6385, ..., 8.4152, 5.8498, 5.0281]])
Against a vectorized baseline solution
# %timeit 10 loops, best of 5: 45.7 ms per loop
a.gather(0, torch.from_numpy(np.nonzero(m.T)[1].reshape(-1, m.shape[1], order='F')))
A python list comprehension turns out to be surprisingly fast
def g(arr1, arr2):
    return np.array([i[j] for i, j in zip(arr1.T, arr2.T)]).T

# %timeit 10 loops, best of 5: 31.3 ms per loop
torch.from_numpy(g(a.numpy(), m))
| https://stackoverflow.com/questions/71641977/ |
How to resolve the error: RuntimeError: received 0 items of ancdata | I have a torch.utils.data.DataLoader. I have created them with the following code.
transform_train = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])
trainset = CIFAR100WithIdx(root='.',
                           train=True,
                           download=True,
                           transform=transform_train,
                           rand_fraction=args.rand_fraction)
train_loader = torch.utils.data.DataLoader(trainset,
                                           batch_size=args.batch_size,
                                           shuffle=True,
                                           num_workers=args.workers)
But when I run the following code I get an error.
train_loader_2 = []
for i, (inputs, target, index_dataset) in enumerate(train_loader):
    train_loader_2.append((inputs, target, index_dataset))
The error is
Traceback (most recent call last):
File "main_superloss.py", line 460, in <module>
main()
File "main_superloss.py", line 456, in main
main_worker(args)
File "main_superloss.py", line 374, in main_worker
train_loader, val_loader = get_train_and_val_loader(args)
File "main_superloss.py", line 120, in get_train_and_val_loader
for i, (inputs, target, index_dataset) in enumerate(train_loader):
File "/home/C00423766/.conda/envs/dp/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 804, in __next__
idx, data = self._get_data()
File "/home/C00423766/.conda/envs/dp/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 771, in _get_data
success, data = self._try_get_data()
File "/home/C00423766/.conda/envs/dp/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 724, in _try_get_data
data = self.data_queue.get(timeout=timeout)
File "/home/C00423766/.conda/envs/dp/lib/python3.7/multiprocessing/queues.py", line 113, in get
return _ForkingPickler.loads(res)
File "/home/C00423766/.conda/envs/dp/lib/python3.7/site-packages/torch/multiprocessing/reductions.py", line 284, in rebuild_storage_fd
fd = df.detach()
File "/home/C00423766/.conda/envs/dp/lib/python3.7/multiprocessing/resource_sharer.py", line 58, in detach
return reduction.recv_handle(conn)
File "/home/C00423766/.conda/envs/dp/lib/python3.7/multiprocessing/reduction.py", line 185, in recv_handle
return recvfds(s, 1)[0]
File "/home/C00423766/.conda/envs/dp/lib/python3.7/multiprocessing/reduction.py", line 161, in recvfds
len(ancdata))
RuntimeError: received 0 items of ancdata
The reason I want to get the data inside a list is because I want to reorder the samples. And not in a random way but in a particular way. How can I do that?
| I was facing a similar issue with my code, and based on some discussions (check #1, #2, #3) I used ulimit -n 2048 to increase the maximum number of file descriptors a process can have. You can read more about ulimit here.
About the issue - the discussions suggest that it has something to do with PyTorch's forked multiprocessing code.
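An alternative often suggested in those discussions is to switch PyTorch's tensor-sharing strategy away from file descriptors entirely — a minimal sketch:

import torch.multiprocessing

# Share tensors via the file system instead of file descriptors,
# which avoids exhausting the per-process descriptor limit.
torch.multiprocessing.set_sharing_strategy('file_system')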
On the second part of your question - how to reorder a dataloader - you can refer to this answer.
| https://stackoverflow.com/questions/71642653/ |
PyTorch: How to calculate output size of the CNN? | I went through this PyTorch CNN implementation available here: https://machinelearningknowledge.ai/pytorch-conv2d-explained-with-examples/
I am unable to understand how they replace the '?' with some value. What is the formula for calculating the CNN layer output?
This needs to be calculated by hand in PyTorch, unlike in TensorFlow/Keras. If there is any other blog that explains this well, please drop it in the comments.
# Implementation of CNN/ConvNet Model
class CNN(torch.nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        # L1 ImgIn shape=(?, 28, 28, 1)
        # Conv -> (?, 28, 28, 32)
        # Pool -> (?, 14, 14, 32)
        self.layer1 = torch.nn.Sequential(
            torch.nn.Conv2d(1, 32, kernel_size=3, stride=1, padding=1),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(kernel_size=2, stride=2),
            torch.nn.Dropout(p=1 - keep_prob))
        # L2 ImgIn shape=(?, 14, 14, 32)
        # Conv ->(?, 14, 14, 64)
        # Pool ->(?, 7, 7, 64)
        self.layer2 = torch.nn.Sequential(
            torch.nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(kernel_size=2, stride=2),
            torch.nn.Dropout(p=1 - keep_prob))
        # L3 ImgIn shape=(?, 7, 7, 64)
        # Conv ->(?, 7, 7, 128)
        # Pool ->(?, 4, 4, 128)
        self.layer3 = torch.nn.Sequential(
            torch.nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(kernel_size=2, stride=2, padding=1),
            torch.nn.Dropout(p=1 - keep_prob))
        # L4 FC 4x4x128 inputs -> 625 outputs
        self.fc1 = torch.nn.Linear(4 * 4 * 128, 625, bias=True)
        torch.nn.init.xavier_uniform(self.fc1.weight)
        self.layer4 = torch.nn.Sequential(
            self.fc1,
            torch.nn.ReLU(),
            torch.nn.Dropout(p=1 - keep_prob))
        # L5 Final FC 625 inputs -> 10 outputs
        self.fc2 = torch.nn.Linear(625, 10, bias=True)
        torch.nn.init.xavier_uniform_(self.fc2.weight)  # initialize parameters

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = self.layer3(out)
        out = out.view(out.size(0), -1)  # Flatten them for FC
        out = self.fc1(out)
        out = self.fc2(out)
        return out

# instantiate CNN model
model = CNN()
model
Thanks!
| I assume your calculation is wrong because:
PyTorch supports images in the format C * H * W (e.g. 3x32x32, not 32x32x3)
The first dimension is always the batch dimension and must be omitted from the calculation, because all nn.Modules handle it by default
So if you want to calculate the input size for the first Linear layer, you can use this trick:
conv = nn.Sequential(self.layer1, self.layer2, self.layer3, nn.Flatten())
out = conv(torch.randn(1, im_height, im_width).unsqueeze(0))
# fc_layer_in_channels = out.shape[1]
self.fc1 = torch.nn.Linear(out.shape[1], 625, bias=True)
but only if you know im_height, im_width.
The best practice is to use torch.nn.AdaptiveAvgPool2d.
With this layer you can always get an output of fixed spatial size.
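For reference, a minimal sketch of the standard output-size formula from the PyTorch Conv2d/MaxPool2d docs, applied to the layers above (the helper name is my own):

import math

def conv2d_out(size, kernel_size, stride=1, padding=0, dilation=1):
    # floor((size + 2*padding - dilation*(kernel_size - 1) - 1) / stride) + 1
    return math.floor((size + 2 * padding - dilation * (kernel_size - 1) - 1) / stride) + 1

print(conv2d_out(28, 3, stride=1, padding=1))  # conv in layer1 -> 28
print(conv2d_out(28, 2, stride=2))             # pool in layer1 -> 14
print(conv2d_out(7, 2, stride=2, padding=1))   # pool in layer3 -> 4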
| https://stackoverflow.com/questions/71647309/ |
Difference in the order of applying linear decoder and average pooling for sequence models | I am working with sequence modelling in PyTorch and trying to determine whether the order of the pooling and linear decoding layers matters. Given a sequence of shape (Batch, SeqLen, dim_model) that I want to transform into (Batch, dim_output), I need a pooling layer to reduce the second dimension (SeqLen) and an affine transformation that maps dim_model to dim_output. Assume Batch = 16, SeqLen = 6000, dim_model = 32, dim_output = 5; we have the following input:
import torch
pooler = lambda x: x.mean(dim=1)
decoder = torch.nn.Linear(32, 5)
x = torch.randn(16, 6000, 32)
Would this:
y = decoder(pooler(x))
Be the same as:
y = pooler(decoder(x))
The normalized difference between both outputs suggest that they are close:
torch.norm(decoder(pooler(x)) - pooler(decoder(x)))
output:
tensor(6.5412e-08, grad_fn=<CopyBackwards>)
But can one say they are equivalent? Are the gradients computed in the same way?
I am interested in the case of an arbitrary pooling layer; this includes, for instance, the "last" pooler:
pooler = lambda x: x[:,-1]
torch.norm(decoder(pooler(x)) - pooler(decoder(x)))
output:
tensor(0., grad_fn=<CopyBackwards>)
| A linear layer does x -> Ax+b for some matrix A and vector b.
If you have a bunch of x (x1, x2, x3, ..., xn), then A[(x1+...+xn)/n] = (Ax1 + ... + Axn)/n, so for mean pooling, applying pooling first and then the linear layer gives (up to floating point errors) the same value as applying the linear layer first and then pooling.
For "last pooling", the result is the same because it doesn't matter whether you apply A to every element and afterwards pick only the final one, or pick the final one and apply A to it.
However, for plenty of other operations, the result would not be the same. E.g. for max pooling, the result would in general not be the same.
e.g. if x1 = (1, 0, 0), x2 = (0, 1, 0), x3 = (0, 0, 1), and A = ((1, 1, 1)), then Ax1 = Ax2 = Ax3 = (1), so applying max pooling after the linear layer just gives you (1),
but max pooling applied to x1, x2, x3 gives you (1, 1, 1), and A(1, 1, 1) = 3.
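A quick sketch verifying that max-pooling counterexample in PyTorch (the weights are set to A = (1, 1, 1) as above):

import torch

decoder = torch.nn.Linear(3, 1, bias=False)
with torch.no_grad():
    decoder.weight.fill_(1.0)  # A = (1, 1, 1)

x = torch.eye(3).unsqueeze(0)           # one batch containing x1, x2, x3
max_pool = lambda t: t.max(dim=1).values

print(max_pool(decoder(x)))  # tensor([[1.]])
print(decoder(max_pool(x)))  # tensor([[3.]])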
| https://stackoverflow.com/questions/71650951/ |
How to insert a value at a fixed position of a PyTorch tensor | I have a PyTorch tensor
x = [[1,2,3,4,5]]
Now I want to insert a value at a fixed position of the tensor x. For example, if I insert 11 at position 3, then x will be
x = [[1,2,3,11,4,5]]
How can I perform this operation in PyTorch?
| Dynamically extending arrays to arbitrary sizes along the non-singleton dimensions, such as the one you mentioned, is unsupported in PyTorch, mainly because memory is pre-allocated during tensor construction and set to a fixed size depending on the data type. The only way to grow a non-singleton dimension is to create a new (empty/zero) tensor with the target shape, insert values at the desired position(s), and copy over the existing values.
In [24]: z = torch.zeros(1, 6)
In [27]: t
Out[27]: tensor([[1, 2, 3, 4, 5]])
In [30]: z[:, :3] = t[:, :3]
In [33]: z[:, -2:] = t[:, -2:]
In [36]: z[z == 0] = 11
In [37]: z
Out[37]: tensor([[ 1., 2., 3., 11., 4., 5.]])
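Alternatively, a more direct sketch of the same insertion using torch.cat (assuming the insertion index is known up front):

pos, val = 3, 11
t_new = torch.cat([t[:, :pos], torch.tensor([[val]]), t[:, pos:]], dim=1)
# tensor([[ 1,  2,  3, 11,  4,  5]])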
However, if you had instead wanted to expand the tensor along the singleton dimension, that's easy to achieve using tensor.expand(new_shape). In the below example, we expand the tensor t to length 3 along the 0th dimension, which is originally a singleton dimension.
# make a copy for in-place modification since `expand()` returns a view
In [64]: t_expd = t.expand(3, -1).clone()
In [65]: t_expd
Out[65]:
tensor([[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5]])
# modify 2nd and 3rd rows
In [66]: t_expd[1:, ...] = 23
In [67]: t_expd
Out[67]:
tensor([[ 1, 2, 3, 4, 5],
[23, 23, 23, 23, 23],
[23, 23, 23, 23, 23]])
| https://stackoverflow.com/questions/71669183/ |
RuntimeError: only batches of spatial targets supported (3D tensors) but got targets of dimension: 4 | I am having a hard time understanding image segmentation. I have implemented a Unet model for image segmentation. I am using the PASCAL VOC dataset and I am trying to train my model. However, I got stuck when calculating the loss. I am unsure of what the expected shapes of the output and target classes should be. Can someone please educate me on what I am doing wrong? My only guess is that I am missing something when it comes to the ground truth images, since I don't know how the model will learn which class is which. Thanks!
Here is my Unet class:
import torch
import torch.nn as nn
from torchvision import transforms
def x2conv(in_channels, out_channels):
    double_conv = nn.Sequential(
        nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=0),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=0),
        nn.ReLU(inplace=True))
    return double_conv

class Encoder(nn.Module):
    def __init__(self, chs):
        super().__init__()
        self.enc_blocks = nn.ModuleList(
            [x2conv(chs[i], chs[i+1]) for i in range(len(chs)-1)])
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)

    def forward(self, x):
        ftrs = []
        for block in self.enc_blocks:
            x = block(x)
            ftrs.append(x)
            x = self.pool(x)
        return ftrs

class Decoder(nn.Module):
    def __init__(self, chs):
        super().__init__()
        self.chs = chs
        self.upconvs = nn.ModuleList(
            [nn.ConvTranspose2d(chs[i], chs[i+1], kernel_size=2, stride=2) for i in range(len(chs)-1)])
        self.dec_blocks = nn.ModuleList(
            [x2conv(chs[i], chs[i+1]) for i in range(len(chs)-1)])

    def forward(self, x, encoder_features):
        for i in range(len(self.chs)-1):
            x = self.upconvs[i](x)
            enc_ftrs = self.crop(encoder_features[i], x)
            x = torch.cat([x, enc_ftrs], dim=1)
            x = self.dec_blocks[i](x)
        return x

    def crop(self, enc_ftrs, x):
        _, _, H, W = x.shape
        enc_ftrs = transforms.CenterCrop([H, W])(enc_ftrs)
        return enc_ftrs

class UNet(nn.Module):
    def __init__(self, enc_chs, dec_chs, num_class):
        super(UNet, self).__init__()
        self.encoder = Encoder(enc_chs)
        self.decoder = Decoder(dec_chs)
        self.softmax = nn.Conv2d(dec_chs[-1], num_class, kernel_size=1)

    def forward(self, x):
        enc_ftrs = self.encoder(x)
        out = self.decoder(enc_ftrs[::-1][0], enc_ftrs[::-1][1:])
        out = self.softmax(out)
        return out
And here is my dataset class:
from PIL import Image
import torchvision
import torchvision.transforms as T  # needed for T.Compose below
VOC_CLASSES = [ # How to use?
"background",
"aeroplane",
"bicycle",
"bird",
"boat",
"bottle",
"bus",
"car",
"cat",
"chair",
"cow",
"diningtable",
"dog",
"horse",
"motorbike",
"person",
"pottedplant",
"sheep",
"sofa",
"train",
"tvmonitor",
]
VOC_COLORMAP = [ # How to use?
[0, 0, 0], # Background
[128, 0, 0], # Aeroplane
[0, 128, 0], # Bicycle
[128, 128, 0], # Bird
[0, 0, 128], # Boat
[128, 0, 128], # Bottle
[0, 128, 128], # Bus
[128, 128, 128], # Car
[64, 0, 0], # Cat
[192, 0, 0], # Chair
[64, 128, 0], # Cow
[192, 128, 0], # Diningtable
[64, 0, 128], # Dog
[192, 0, 128], # Horse
[64, 128, 128], # Motorbike
[192, 128, 128], # Person
[0, 64, 0], # Pottedplant
[128, 64, 0], # Sheep
[0, 192, 0], # Sofa
[128, 192, 0], # Train
[0, 64, 128], # tvmonitor
]
class VocDataset(torchvision.datasets.VOCSegmentation):
    def __init__(self, image_set, transform, root="../data/VOCtrainval_11-May-2012/", download=False, year="2012"):
        self.transform = transform
        self.year = year
        super().__init__(root=root, image_set=image_set,
                         download=download, transform=transform, year=year)

    def __len__(self):
        return len(self.images)

    def __getitem__(self, index):
        # open images and do transformation; img = jpg, mask = png
        img = Image.open(self.images[index]).convert("RGB")
        target = Image.open(self.masks[index]).convert("RGB")
        if self.transform:
            img = self.transform(img)
        trfm = T.Compose([T.ToTensor(), T.Resize((388, 388))])
        target = trfm(target)
        return img, target
and lastly here is my train function
import torch
import torch.nn as nn
import torch.optim as optim
from unet import UNet
from torch.utils.data import DataLoader
from dataset import VocDataset
import torchvision.transforms as T
import torch.nn.functional as F
# Hyperparameters etc.
STD = [0.2686, 0.2652, 0.2812] # Std for dataset
MEAN = [0.4568, 0.4431, 0.4083] # Mean for dataset
MOMENTUM = 0.9
LEARNING_RATE = 1e-4
BATCH_SIZE = 32
NUM_EPOCHS = 1
NUM_WORKERS = 2
NUM_CLASSES = 20
TRAIN_SET = "train"
VAL_SET = "val"
ENC_CHANNELS = (3, 64, 128, 256, 512, 1024) # Encoder channels
DEC_CHANNELS = (1024, 512, 256, 128, 64) # Decoder channels
SIZE = (388, 388)  # not defined in the original snippet; inferred from the shapes noted below
TRANSFORM = T.Compose(
    [T.ToTensor(), T.Resize(SIZE), T.Normalize(MEAN, STD)]
)
def main():
    training_data = VocDataset(TRAIN_SET, TRANSFORM)
    train_dataloader = DataLoader(
        training_data, batch_size=BATCH_SIZE, shuffle=True, num_workers=NUM_WORKERS, drop_last=True)

    # Create instance of unet
    unet = UNet(ENC_CHANNELS, DEC_CHANNELS, NUM_CLASSES)

    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(
        unet.parameters(), lr=LEARNING_RATE, momentum=MOMENTUM)

    for epoch in range(NUM_EPOCHS):  # loop over the dataset multiple times
        running_loss = 0.0
        for i, data in enumerate(train_dataloader, 0):
            # get the inputs; data is a list of [inputs, labels]
            inputs, labels = data  # Shape for labels and inputs are: [32,3,388,388]
            # zero the parameter gradients
            optimizer.zero_grad()

            # forward + backward + optimize
            outputs = unet(inputs)  # output shape is [32, 32, 388, 388]
            loss = criterion(outputs, labels)  # Error here
            loss.backward()
            optimizer.step()

    # print('Finished Training')

if __name__ == "__main__":
    main()
| For starters, your labels and outputs have different dimensions (32 vs. 3 channels). Cross-entropy loss expects them either to have the same number of channels, or for the target to have only one channel with integer values indicating the relevant class.
Let's work with the latter case. Here, we need to reduce the target to a single channel of shape [32 x 388 x 388] for your input and batch size. (Secondarily, the UNet should ideally have one output channel for each class — it looks like there are 22 classes, so you should change the final output layer of the UNet decoder to have 22 outputs.)
To convert the label of size [32 x 3 x 388 x 388] to [32 x 388 x 388], you need to use the colormap for conversion. That is, create a new tensor target of size [32 x 1 x 388 x 388]. For each value target[i,j,k], assign the index into VOC_COLORMAP that matches the value stored in the pixels at label[i,:,j,k].
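A sketch of that colormap conversion, assuming the target comes out of ToTensor as a [B, 3, H, W] float tensor scaled to [0, 1]:

import torch

def rgb_to_class_index(label, colormap):
    # label: [B, 3, H, W] float tensor; colormap: list of RGB triples (0-255).
    target = torch.zeros(label.shape[0], label.shape[2], label.shape[3], dtype=torch.long)
    rgb = (label * 255).round().long()
    for idx, color in enumerate(colormap):
        color_t = torch.tensor(color).view(1, 3, 1, 1)
        target[(rgb == color_t).all(dim=1)] = idx  # pixels matching this color get class idx
    return target

# target = rgb_to_class_index(labels, VOC_COLORMAP)  # -> [32, 388, 388], dtype long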
| https://stackoverflow.com/questions/71674595/ |
Stable Baselines3 - Setting "manually" the q_values | What I have done
I'm using the DQN algorithm in Stable Baselines 3 for a two-player board-type game. In this game, 40 moves are available, but once one is made, it can't be made again.
I trained my first model against an opponent that would choose its move randomly. If the model makes an invalid move, I give a negative reward equal to the max score one can obtain and stop the game.
The issue
Once that was done, I trained a new model against the one I obtained from the first run. Unfortunately, the training process eventually gets blocked, as the opponent seems to loop on an invalid move. This means that, despite everything I tried in the first training, the first model still predicts invalid moves. Here's the code for the "dumb" opponent:
while(self.dumb_turn):
    # The opponent chooses a move
    chosen_line, _states = model2.predict(self.state, deterministic=True)
    # We check whether the move is valid or not
    while(line_exist(chosen_line, self.state)):
        chosen_line, _states = model2.predict(self.state, deterministic=True)
    # Once a good move is made, we register it as a move and add it to the state space
    self.state[chosen_line] = 1
What I would like to do but don't know how
A solution would be to manually set the Q-values of invalid moves to -inf so that the opponent avoids those moves and the training algorithm does not get stuck. I've been told how to access these values:
import torch as th
from stable_baselines3 import DQN

model = DQN("MlpPolicy", "CartPole-v1")
env = model.get_env()
obs = env.reset()
with th.no_grad():
    obs_tensor, _ = model.q_net.obs_to_tensor(obs)
    q_values = model.q_net(obs_tensor)
But I don't know how to set them to -infinity.
If somebody could help me, I would be very grateful.
| I recently had a similar problem, in which I needed to directly alter the q-values produced by the RL model during training in order to influence its actions.
To do this I overrode some methods of the library:
# Imports
import torch as th
from stable_baselines3 import DQN
from stable_baselines3.dqn.policies import QNetwork, DQNPolicy

# Override some methods of the class QNetwork used by the DQN model in order to set
# the q-values of some actions to a negative value.
# Two possible methods to override:
# Override _predict ---> alter q-values only during predictions but not during training
# Override forward  ---> alter q-values also during training (Attention: here we are working with batches of q-values)
class QNetwork_modified(QNetwork):
    def forward(self, obs: th.Tensor) -> th.Tensor:
        """
        Predict the q-values.
        :param obs: Observation
        :return: The estimated Q-Value for each action.
        """
        # Compute the q-values using the QNetwork
        q_values = self.q_net(self.extract_features(obs))
        # For each observation in the training batch:
        for i in range(obs.shape[0]):
            # Here you can alter q_values[i]
            pass
        return q_values

# Override the make_q_net method of the DQN policy used by the DQN model to make it use the new DQN network
class DQNPolicy_modified(DQNPolicy):
    def make_q_net(self) -> DQNPolicy:
        # Make sure we always have separate networks for features extractors etc.
        net_args = self._update_features_extractor(self.net_args, features_extractor=None)
        return QNetwork_modified(**net_args).to(self.device)

model = DQN(DQNPolicy_modified, env, verbose=1)
Personally, I don't like this approach too much, and I would suggest you first try some "more natural" alternatives, such as also giving your model some kind of history of which actions have already been selected as input, in order to help the model learn that pre-selected actions should be avoided.
For example, you could enrich the input to the RL model with an additional binary mask where the moves already chosen have their corresponding bit set to 1. (In this case you would have to modify the gym environment.)
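A minimal sketch of what the alteration inside forward could look like, assuming you can derive a boolean mask of invalid moves from the observation (invalid_moves_from_obs would be your own helper):

import torch as th

def mask_invalid(q_values: th.Tensor, invalid_mask: th.Tensor) -> th.Tensor:
    # invalid_mask: boolean tensor of shape [batch, n_actions], True where a move is invalid.
    # Setting those q-values to -inf guarantees the greedy policy never selects them.
    return q_values.masked_fill(invalid_mask, float('-inf'))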
| https://stackoverflow.com/questions/71678249/ |
My model memory size keeps decreasing and isn't clearing. And as a result, I get a CUDA memory issue | I'm trying to run a deep network in CUDA/Pytorch. But I keep getting a GPU issue that tells me I'm out of memory in in my GPU as follows:
CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 11.17 GiB total capacity; 10.62 GiB already allocated; 14.81 MiB free; 10.63 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
And I get the same sort of error even with an HPC, just that the numbers are bigger.
As far as I can tell, my model is fine and as expected. (My professors told us we can run CIFAR with < 5 M parameters and my model has 4.7 M parameters).
I create data loaders as follows:
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
                                transforms.CenterCrop(32),
                                transforms.RandomRotation(15)
                                ])
# Augmentation helps prevent overfitting. We use Bilinear and not the default (nearest) since Bilinear is a more efficient technique
batch_size = 128
train = datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
test = datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
# train=train.type(torch.IntTensor)
# train=train.to(uint8)
train_size = int(len(train))
test_size = int(len(test))
train_idx = list(range(train_size))
test_idx = list(range(test_size))
split = int(np.floor(0.77 * train_size))
train_idx, val_idx = train_idx[:split], train_idx[split:]
train_sampler = SubsetRandomSampler(train_idx)
val_sampler = SubsetRandomSampler(val_idx)
train_loader = DataLoader(train, batch_size=batch_size, num_workers=1, sampler=train_sampler)
valid_loader = DataLoader(train, batch_size=batch_size, num_workers=1, sampler=val_sampler)
test_loader = DataLoader(test, batch_size=batch_size, shuffle=False, num_workers=2)
So I split the training data into train and validation sets in a 77/23 fashion (per the 0.77 split above).
To run my epochs:
for e in range(epochs):
    running_loss = 0.0
    v_loss = 0.0
    correct = 0
    v_correct = 0
    for i, data in enumerate(train_loader):
        print("Ran a batch")
        x, y = data
        x = x.to(device)
        y = y.to(device)
        optimizer.zero_grad()
        y_hat = model(x)
        loss = criterion(y_hat, y)  # Comparing prediction to actual output
        loss.backward
        optimizer.step()
        running_loss += loss
        pred = y_hat.argmax(axis=0)  # The index associated with correct prediction
        correct += y[pred]  # Either they match and correct. Or different and wrong
        GPUtil.showUtilization()
        del x
        del y
        GPUtil.showUtilization()
        gc.collect()
        torch.cuda.empty_cache()
        GPUtil.showUtilization()
    print("end of epoch")
In each epoch, I load a batch that I feed to my model. Unfortunately, it seems like my model can never even train a full batch, despite my constantly deleting my cache and tensors.
When I look at memory, while GPU utilisation stays balanced and doesn't shoot up, it seems like the memory does, and I don't know why that's the case.
From this error, I can see that the memory utilized never clears; it only keeps increasing.
| ID | GPU | MEM |
| 0 | 57% | 76% |
....
| 0 | 68% | 100% |
| ID | GPU | MEM |
When I look at this problem online, I'm told that decreasing my batch size will help with CUDA running out of memory,
but even when my batch size is 1, the memory keeps growing and causing issues.
As a result, I'm not sure how to solve this problem; I can't tell what's going wrong here.
| Probably a typo, but it should be loss.backward() — an actual method call; loss.backward by itself does nothing. Also, when accumulating running_loss, call the .item() method on loss, so the full line looks like running_loss += loss.item(). In this discussion there is an answer on why that should reduce memory usage (...the cumulative loss will hold a reference to each individual loss, which in turn holds references to each of the computation nodes that computed it...).
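A sketch of the corrected inner-loop lines:

loss.backward()              # note the parentheses: this actually runs backprop
optimizer.step()
running_loss += loss.item()  # a plain Python float, so the graph can be freed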
| https://stackoverflow.com/questions/71683934/ |
Cuda:0 device type tensor to numpy problem for plotting graph | As mentioned in the title, I am facing the problem of
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
I found out that a .cpu() call is needed to overcome the problem, but I have tried various ways and still can't solve it.
def plot(val_loss, train_loss, typ):
    plt.title("{} after epoch: {}".format(typ, len(train_loss)))
    plt.xlabel("Epoch")
    plt.ylabel(typ)
    plt.plot(list(range(len(train_loss))), train_loss, color="r", label="Train " + typ)
    plt.plot(list(range(len(val_loss))), val_loss, color="b", label="Validation " + typ)
    plt.legend()
    plt.savefig(os.path.join(data_dir, typ + ".png"))
    plt.close()
| I guess during loss calculation, when you try to save the loss, instead of
train_loss.append(loss)
it should be
train_loss.append(loss.item())
item() returns the value of the tensor as a standard Python number, therefore, train_loss will be a list of numbers and you will be able to plot it.
You can read more about item() here
https://pytorch.org/docs/stable/generated/torch.Tensor.item.html
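If you instead need a whole tensor on the CPU (rather than a scalar), the same idea applies via .detach().cpu().numpy() — a minimal sketch, assuming train_loss is a list of CUDA tensors:

import torch
losses_np = torch.stack(train_loss).detach().cpu().numpy()
plt.plot(losses_np)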
| https://stackoverflow.com/questions/71686820/ |
Django web Deployment Failed on azure | 10:47:19 AM django-face-restore: ERROR: Could not find a version that satisfies the requirement torch==TORCH_VERSION+cpu (from versions: 1.4.0, 1.5.0, 1.5.1, 1.6.0, 1.7.0, 1.7.1, 1.8.0, 1.8.1, 1.9.0, 1.9.1, 1.10.0, 1.10.1, 1.10.2, 1.11.0)
10:47:19 AM django-face-restore: ERROR: No matching distribution found for torch==TORCH_VERSION+cpu
10:47:19 AM django-face-restore: WARNING: You are using pip version 21.1.1; however, version 22.0.4 is available.
10:47:19 AM django-face-restore: You should consider upgrading via the '/tmp/8da12d59fc3e034/antenv/bin/python -m pip install --upgrade pip' command.
10:47:19 AM django-face-restore: "2022-03-31 07:00:00"|ERROR|Failed pip installation with exit code: 1
10:47:21 AM django-face-restore: /opt/Kudu/Scripts/starter.sh oryx build /tmp/zipdeploy/extracted -o /home/site/wwwroot --platform python --platform-version 3.8 -i /tmp/8da12d59fc3e034 --compress-destination-dir -p virtualenv_name=antenv --log-file /tmp/build-debug.log
10:47:21 AM django-face-restore: Deployment Failed
| To resolve this "ERROR: No matching distribution found for torch==TORCH_VERSION+cpu" error (note that the literal placeholder TORCH_VERSION in your requirements was never replaced with an actual version number):
You need to install a specific version of torch; try either of the following ways:
Add the following to your requirements.txt file:
--find-links https://download.pytorch.org/whl/torch_stable.html
torch==1.7.0+cpu
OR
python -m pip install torch==1.7.0 -f https://download.pytorch.org/whl/torch_stable.html
OR
python3.8 -m pip install torch==1.7.0+cpu torchvision==0.8.1+cpu torchaudio==0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
References: Install PyTorch from requirements.txt , ERROR: Could not find a version that satisfies the requirement torch==1.7.0+cpu
, and Could not find a version that satisfies the requirement torch==1.3.1
| https://stackoverflow.com/questions/71687284/ |
How can I implement a simple dot counting problem using a regression instead of a classification approach in Pytorch? | I'm trying to figure out how to solve the really simple problem of counting how many pixels in an image are white.
I have 20x20 pixel images (zeros matrix) with 1 to 20 pixels randomly set to 1.
Checking some tutorials, I'm able to solve this problem via a classification approach with the model using CrossEntropyLoss() and the Adam optimizer:
nn.Linear(20*20, 100),
nn.ReLU(),
nn.Linear(100, 30),
nn.ReLU(),
nn.Linear(30, 20)
Here my output is a vector of size 20 with the probabilities that the image contains 1 to 20 white pixels (ones).
Using argmax() I can simply retrieve the most probable "class".
What I want to do now is to solve the same problem not as a classification task but as a "counter", where, with the image as input, the NN can estimate the number of white pixels as a single integer output.
I've made some changes to the code, using one neuron as output and MSELoss() (and other loss functions), but I still cannot get the output to be an integer (1 to 20) instead of a real number.
In the end, what I am trying to understand is how to see a NN as a function that, in this case, goes from R^n to R.
Any ideas?
Using one neuron as output, the predictions are real numbers and the accuracy is always around 4.9%, with a loss that stays roughly constant across epochs.
| Keep in mind that you probably have to normalize the outputs. Your model should still output something between 0 and 1, where 0 means 0 white pixels, 1 means 20 white pixels, 0.5 means 10, and so on. Therefore use a sigmoid on the output neuron and multiply it by 20 to get the estimated number of white pixels.
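A minimal sketch of that normalized regression setup, assuming MSELoss on targets scaled to [0, 1] and rounding at inference time:

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20 * 20, 100), nn.ReLU(),
    nn.Linear(100, 30), nn.ReLU(),
    nn.Linear(30, 1), nn.Sigmoid(),     # single output in [0, 1]
)
criterion = nn.MSELoss()

x = torch.rand(8, 400)                            # dummy batch of flattened 20x20 images
target_counts = torch.randint(1, 21, (8, 1)).float()
loss = criterion(model(x), target_counts / 20.0)  # normalize targets to [0, 1]

# At inference, scale back and round to get an integer count in 1..20.
pred_counts = (model(x) * 20).round().clamp(1, 20)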
| https://stackoverflow.com/questions/71694867/ |
PyTorch adapt binary classification model to output probabilities of both classes | My dataset has 14 features and a target containing {0,1}.
I have trained this binary classifier:
class SimpleBinaryClassifier(nn.Module):
    def __init__(self, input_shape):
        super().__init__()
        self.fc1 = nn.Linear(input_shape, 64)
        self.fc2 = nn.Linear(64, 32)
        self.dropout = nn.Dropout(p=0.1)
        self.fc3 = nn.Linear(32, 1)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        x = self.dropout(x)
        x = self.fc3(x)
        return x
with the following criterion and training loop:
criterion = nn.BCEWithLogitsLoss()

def binary_acc(y_pred, y_test):
    y_pred_tag = torch.round(torch.sigmoid(y_pred))
    correct_results_sum = (y_pred_tag == y_test).sum().float()
    acc = correct_results_sum / y_test.shape[0]
    acc = torch.round(acc * 100)
    return acc

model.train()
for e in range(1, EPOCHS + 1):
    epoch_loss = 0
    epoch_acc = 0
    for X_batch, y_batch in train_loader:
        X_batch, y_batch = X_batch.to(device), y_batch.to(device)
        optimizer.zero_grad()
        y_pred = model(X_batch)
        loss = criterion(y_pred, y_batch.unsqueeze(1))
        acc = binary_acc(y_pred, y_batch.unsqueeze(1))
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item()
        epoch_acc += acc.item()
    print(f'Epoch {e+0:03}: | Loss: {epoch_loss/len(train_loader):.5f} | Acc: {epoch_acc/len(train_loader):.3f}')
This model, when called like sigmoid(model(input_tensor)), outputs a single number in [0,1]. The pipeline I'm working with expects a model to output probabilities [p_class1, p_class2].
How can I adapt the model and the training loop?
If I set the output of the last layer to 2, I have problems with the criterion inside the training loop.
class SimpleBinaryClassifier2(nn.Module):
    def __init__(self, input_shape):
        super().__init__()
        self.fc1 = nn.Linear(input_shape, 64)
        self.fc2 = nn.Linear(64, 32)
        self.dropout = nn.Dropout(p=0.1)
        self.fc3 = nn.Linear(32, 2)  # now it's 2

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        x = self.dropout(x)
        x = self.fc3(x)
        return x
I use CrossEntropyLoss:
model = SimpleBinaryClassifier2(input_shape=14)
model.to(device)
print(model)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
criterion = nn.CrossEntropyLoss()
and replace y_pred_tag = torch.round(torch.sigmoid(y_pred)) with argmax(softmax)
def binary_acc2(y_pred, y_test):
    y_pred_tag = torch.argmax(torch.softmax(y_pred, dim=1), dim=1)
    correct_results_sum = (y_pred_tag == y_test).sum().float()
    acc = correct_results_sum / y_test.shape[0]
    acc = torch.round(acc * 100)
    return acc
Then the train loop rises an error:
model.train()
for e in range(1, EPOCHS+1):
epoch_loss = 0
epoch_acc = 0
for X_batch, y_batch in train_loader:
X_batch, y_batch = X_batch.to(device), y_batch.to(device)
optimizer.zero_grad()
y_pred = model(X_batch)
loss = criterion(y_pred, y_batch)
acc = binary_acc(y_pred, y_batch)
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_acc += acc.item()
print(f'Epoch {e+0:03}: | Loss: {epoch_loss/len(train_loader):.5f} | Acc: {epoch_acc/len(train_loader):.3f}')
The error is the following:
RuntimeError: "nll_loss_forward_reduce_cuda_kernel_2d_index" not implemented for 'Float'
I already looked at this other post, where the cause of that error was that the element was a float and not a tensor, but in my case the datasets are tensors:
train_dataset = GenericDataset(torch.FloatTensor(X_train), torch.FloatTensor(y_train))
test_dataset = GenericDataset(torch.FloatTensor(X_test), torch.FloatTensor(y_test))
| According to the nn.CrossEntropyLoss documentation, it expects the target to be of type long, not float, while in your train_dataset you clearly convert it to float.
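A sketch of the fix, reusing the GenericDataset from the question — build the targets as long tensors (or cast them inside the loop):

train_dataset = GenericDataset(torch.FloatTensor(X_train), torch.LongTensor(y_train))
test_dataset = GenericDataset(torch.FloatTensor(X_test), torch.LongTensor(y_test))
# or, inside the training loop:
# loss = criterion(y_pred, y_batch.long())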
| https://stackoverflow.com/questions/71696991/ |
What is the significance of 1.000000015047466e+30? | I was attempting to do JIT compilation on a pytorch-based module from an NLP library and I saw that one of the generated fused CUDA kernel code implementations mentions the number 1.000000015047466e+30:
#define NAN __int_as_float(0x7fffffff)
#define POS_INFINITY __int_as_float(0x7f800000)
#define NEG_INFINITY __int_as_float(0xff800000)
template<typename T>
__device__ T maximum(T a, T b) {
return isnan(a) ? a : (a > b ? a : b);
}
template<typename T>
__device__ T minimum(T a, T b) {
return isnan(a) ? a : (a < b ? a : b);
}
extern "C" __global__
void fused_add_mul_mul_sub(float* tattn_mask1_1, float* tac_2, float* tbd_2, float* output_1, float* aten_mul_1) {
{
if (blockIdx.x<1ll ? 1 : 0) {
if ((long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)<169ll ? 1 : 0) {
if (blockIdx.x<1ll ? 1 : 0) {
float v = __ldg(tattn_mask1_1 + (long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x));
aten_mul_1[(long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)] = v * 1.000000015047466e+30f;
} } }if ((long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)<2028ll ? 1 : 0) {
float v_1 = __ldg(tac_2 + (long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x));
float v_2 = __ldg(tbd_2 + 12ll * (((long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)) % 169ll) + ((long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)) / 169ll);
float v_3 = __ldg(tattn_mask1_1 + ((long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)) % 169ll);
output_1[(long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)] = (v_1 + v_2) * 0.125f - v_3 * 1.000000015047466e+30f;
}}
}
It feels like this should be some sort of FLOAT_MAX constant, but I don't know any numerical type that would have this as a limit.
Google searching for this constant just yields a handful of results that seem to suggest it may be a physical constant of some kind:
This MATLAB forum link suggests it may be a default value for some physical phenomena
This Github repo on whole cell electrophysiology seems to use it as a max limit of some kind.
This Opensea NFT link shows that the number is somehow used as a parameter for some kind of fractal art?
I'm absolutely baffled because I'm doing natural language processing using CUDA and have absolutely no clue why this number is appearing in my CUDA kernel code and why it's also being used in hard science research code. Does this number have some special floating point properties or something? Is it serving as a numerical stability factor somehow? Any leads would be greatly appreciated.
| 1.000000015047466e+30f is 10^30 rounded to the nearest value representable in float (IEEE-754 binary32, also called “single precision”), then rounded to 16 decimal digits, then formatted as a float literal.
Thus, it likely originated as 1e30 or another representation of 10^30 that was subsequently converted to float and then used to produce new source code.
(The float nearest 10^30 is exactly 1000000015047466219876688855040.)
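A quick sketch verifying this with NumPy:

import numpy as np

x = np.float32(1e30)
print(f"{x:.15e}")          # 1.000000015047466e+30
print(int(np.float64(x)))   # 1000000015047466219876688855040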
| https://stackoverflow.com/questions/71699232/ |
Object detection from synthetic to real life data with Yolov5 | Currently trying yolov5 with custom synthetic data. The dataset we've created consists of 8 different objects. Each object has a minimum of 1500 pictures/labels, where the pictures are split 500/500/500 between normal/fog/distractors around the object. Sample images from the dataset are in the first imgur link. The model is not trained from scratch, but from the standard yolov5 .pt weights.
So far we've tried:
Adding more data (from 300 images per object, to 4500)
Creating more complex data (distractors on/around objects)
Running multiple runs of training
Trained with network size small, medium, large, xlarge
Different batch size between 4-32 (depending on model size)
Everything so far has resulted in good/great detection on synthetic data, but it is completely off when used on real-life data.
Examples: it thinks whole pictures of unrelated objects are a paper box, that walls are pallets, etc. Quick sample images are in the last imgur link.
Anyone got clues for how to improve the training or the data to be better suited for real-life detection? Or how to better interpret the results? I don't understand how the model draws the conclusion that a whole picture, with unrelated objects, is a box/pallet.
Results from training uploaded to imgur:
https://imgur.com/a/P0TQeBl
Example on real life data:
https://imgur.com/a/SGY7w8w
| There are a couple of things you can do to improve results.
After training your model with synthetic data, fine-tune it with real training data, with a smaller learning rate (1/10th maybe). This reduces the gap between synthetic and real-life images. In some cases, rather than fine-tuning, training the model on mixed (synthetic + real) data produces better results.
Generate images structurally similar to real-life examples. For example, put humans inside forklifts, or pallets or barrels on forks, etc. Models learn from it.
Randomize the texture on items that you want to detect. Models tend to focus on textures for detection. By randomizing textures, with lots of variability including non-natural occurrences, you force the model to learn to identify objects not based on their textures. Although the texture of an object is sometimes a good identifier, synthetic data suffers from not replicating that feature well enough, hence the domain gap, so you reduce its impact on the model's decisions.
I am not sure whether the screenshot accurately represents your data generation distribution; if so, you have to randomize the angles, sizes and occlusion amounts of objects more.
Use objects that you don't want to detect, but that will be present in your inference images, as distractors, rather than simple shapes like spheres.
Randomize lighting more: intensity, color, angles etc.
Increase background and ground randomization. Use HDRIs; there are lots of free HDRIs available.
Balance your dataset
https://imgur.com/a/LdCa8aO
| https://stackoverflow.com/questions/71711521/ |
Input and Output to the lstms in pytorch | I want to implement LSTMs with a CNN in PyTorch, as my data is time-series data, i.e. frames of video for heart rate detection. I am struggling with the input and output dimensions for LSTMs — how to properly configure the dimensions/parameters/arguments at the input of the LSTM in PyTorch, as it gets quite confusing when considering time steps, hidden state, etc.
My output from the CNN is "2 batches of 256 frames", which is now the input to the LSTM:
batch is 2
features = 256
The output is also a batch of 2 with 256 frames.
| Generally, the input shape of sequential data takes the form (batch_size, seq_len, num_features). Based on your explanation, I assume your input is of the form (2, 256), where 2 is the batch size and 256 is the sequence length of scalars (a 1-dimensional feature). Therefore, you should reshape your input to (2, 256, 1) via inputs.unsqueeze(2).
To declare and use an LSTM model, simply try:
from torch import nn

model = nn.LSTM(
    input_size=1,                  # 1-dimensional features
    batch_first=True,              # batch is the first (zero-th) dimension
    hidden_size=some_hidden_size,  # maybe 64, 128, etc.
    num_layers=some_num_layers,    # maybe 1 or 2
    proj_size=1,                   # output should also be 1-dimensional
)
outputs, (hidden_state, cell_state) = model(inputs)
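A runnable sketch with concrete values plugged into the assumed shapes:

import torch
from torch import nn

inputs = torch.randn(2, 256).unsqueeze(2)   # (batch=2, seq_len=256, features=1)
model = nn.LSTM(input_size=1, hidden_size=64, num_layers=1,
                batch_first=True, proj_size=1)
outputs, (h, c) = model(inputs)
print(outputs.shape)                        # torch.Size([2, 256, 1])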
| https://stackoverflow.com/questions/71714857/ |
TypeError: forward() missing 1 required positional argument in a method | I use the following model:
model = DeepGraphInfomax(128, pos_summary_t).to(device)
which looks like:
class DeepGraphInfomax(torch.nn.Module):
    def __init__(self, hidden_channels, pos_summary):  # , encoder):#, summary, corruption):
        super().__init__()
        self.hidden_channels = hidden_channels
        # self.encoder = GCNEncoder()  # needs to be defined either here or given to here (my comment)
        self.pos_summary = pos_summary
        # self.corruption = corruption
        self.weight = Parameter(torch.Tensor(hidden_channels, hidden_channels))
        # self.reset_parameters()

    def forward(self, pos_summary, *args, **kwargs):
        # pos_z = self.encoder(*args, **kwargs)
        # cor = self.corruption(*args, **kwargs)
        # cor = cor if isinstance(cor, tuple) else (cor, )
        # neg_z = self.encoder(*cor)
        summary = self.summary(pos_summary, *args, **kwargs)
        return summary  # pos_z#, neg_z, summary
but running model() gives me the error:
TypeError: forward() missing 1 required positional argument:
'pos_summary'
| model is an object since you instantiated DeepGraphInfomax.
model() calls the .__call__ function.
forward is called inside the .__call__ function, i.e. by model().
Have a look here.
The TypeError means that you should pass an input to the forward function, i.e. call model(data).
Here is an example:
import torch
import torch.nn as nn

class example(nn.Module):
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(5, 2), nn.ReLU(), nn.Linear(2, 1), nn.Sigmoid())

    def forward(self, x):
        return self.mlp(x)

# instantiate object
test = example()
input_data = torch.tensor([1.0, 2.0, 3.0, 4.0, 5.0])

# () and forward() are equal
print(test(input_data))
print(test.forward(input_data))

# outputs for me
# (tensor([0.5387], grad_fn=<SigmoidBackward>),
#  tensor([0.5387], grad_fn=<SigmoidBackward>))
| https://stackoverflow.com/questions/71717638/ |
How to Reverse Order of Rows in a Tensor | I'm trying to reverse the order of the rows in a tensor that I create. I have tried with tensorflow and pytorch. The only thing I have found is the torch.flip() method. This does not work, as it reverses not only the order of the rows, but also all of the elements in each row. I want the elements to remain the same. Is there an indexing operation that does this? For instance:
tensor_a = [1, 2, 3]
[4, 5, 6]
[7, 8, 9]
I want it to be returned as:
[7, 8, 9]
[4, 5, 6]
[1, 2, 3]
however, torch.flip(tensor_a) =
[9, 8, 7]
[6, 5, 4]
[3, 2, 1]
Anyone have any suggestions?
| According to the documentation, torch.flip has the argument dims, which controls which axes are flipped. In this case torch.flip(tensor_a, dims=(0,)) will return the expected result. Also, torch.flip(tensor_a, dims=(0, 1)) will reverse the whole tensor, and torch.flip(tensor_a, dims=(1,)) will reverse every row, like [1, 2, 3] --> [3, 2, 1].
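A quick demonstration of the row-only flip:

import torch

t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(torch.flip(t, dims=(0,)))
# tensor([[7, 8, 9],
#         [4, 5, 6],
#         [1, 2, 3]])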
| https://stackoverflow.com/questions/71723788/ |
ConvNeXt torchvision - specify input channels | How do I change the number of input channels in the torchvision ConvNeXt model? I am working with grayscale images and want 1 input channel instead of 3.
import torch
from torchvision.models.convnext import ConvNeXt, CNBlockConfig
# this is the given configuration for the 'tiny' model
block_setting = [
CNBlockConfig(96, 192, 3),
CNBlockConfig(192, 384, 3),
CNBlockConfig(384, 768, 9),
CNBlockConfig(768, None, 3),
]
model = ConvNeXt(block_setting)
# my sample image (N, C, H, W) = (16, 1, 50, 50)
im = torch.randn(16, 1, 50, 50)
# forward pass
model(im)
output:
RuntimeError: Given groups=1, weight of size [96, 3, 4, 4], expected input[16, 1, 50, 50] to have 3 channels, but got 1 channels instead
However, if I change my input shape to (16, 3, 50, 50) it seems to work fine.
The torchvision source code seems to be based on their github implementation, but where do I specify in_chans with the torchvision interface?
| You can rewrite the whole input layer; model._modules["features"][0][0] is
nn.Conv2d(3, 96, kernel_size=(4, 4), stride=(4, 4))
Then, you only need to change the in_channels
>>> model._modules["features"][0][0] = nn.Conv2d(1, 96, kernel_size=(4, 4), stride=(4, 4))
>>> model(im)
tensor([[-0.4854, -0.1925, 0.1051, ..., -0.2310, -0.8830, -0.0251],
[ 0.3332, -0.4205, -0.3007, ..., 0.8530, 0.1429, -0.3819],
[ 0.1794, -0.7546, -0.7835, ..., -0.8072, -0.0972, 0.7413],
...,
[ 0.1356, 0.0868, 0.6135, ..., -0.1382, -0.2001, 0.2415],
[-0.1612, -0.4812, 0.1271, ..., -0.6594, 0.2706, 1.0833],
[ 0.0243, -0.5039, -0.4086, ..., 0.4233, 0.0389, 0.2787]],
grad_fn=<AddmmBackward0>)
| https://stackoverflow.com/questions/71728710/ |
Float64 Normalisation in pytorch | So when I want to normalize float64 data for deep learning, should I map float64_max to 1, or just map every image's max value to 1?
I read a .nii file to get a 3D array of type float64, and its values are very large. I need to normalize it into 0-1 and float32 type to input to my deep learning model. So I was wondering which normalization way is better or correct.
Make float64_max to be 1:
return torch.tensor(img / sys.float_info.max).to(torch.float32)
Make every image's max value as 1:
return torch.nn.functional.normalize(torch.tensor(img)).to(torch.float32)
Also, because the float64 range is much bigger than float32, when I do return torch.tensor(img / sys.float_info.max).to(torch.float32), the values have a chance of being all 0.
| No, you should normalize by the maximum possible value across all images, not the maximum value of each image separately.
For a general 8-bit RGB image, the maximum value of a pixel channel is 255, so you can divide the entire image's channel values by 255.
In this way, you obtain a new image of type float32 (or float64) with all pixel channel values in the range [0, 1], but the min is not always 0 and the max is not always 1.
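A minimal sketch of that idea, assuming data_max is the maximum possible intensity for your dataset (e.g. 255 for 8-bit images) rather than sys.float_info.max:

import torch

def normalize(img, data_max):
    # Divide by the dataset-wide maximum, not the per-image maximum
    # and not the float64 max (which would squash everything to ~0).
    return torch.as_tensor(img / data_max, dtype=torch.float32)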
| https://stackoverflow.com/questions/71747126/ |
CUDA 11.3 not being detected by PyTorch [Anaconda] | I am running Ubuntu 20.04 on GTX 1050TI. I have installed CUDA 11.3.
nvidia-smi output:
Wed Apr 6 18:27:23 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 465.19.01 Driver Version: 465.19.01 CUDA Version: 11.3 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:01:00.0 Off | N/A |
| N/A 44C P8 N/A / N/A | 11MiB / 4040MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 3060 G /usr/lib/xorg/Xorg 4MiB |
| 0 N/A N/A 4270 G /usr/lib/xorg/Xorg 4MiB |
+-----------------------------------------------------------------------------+
nvcc --version output:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Sun_Mar_21_19:15:46_PDT_2021
Cuda compilation tools, release 11.3, V11.3.58
Build cuda_11.3.r11.3/compiler.29745058_0
Anaconda PyTorch isn't detecting CUDA:
> import torch
> torch.cuda.is_available()
> False
Any ideas how to solve the issue?
| The solution:
Conda in my case installed the CPU build. You can easily identify your build type by running torch.version.cuda, which should return a version string if you have the CUDA build; if you get None, then you are running the CPU build and it will not detect CUDA.
To fix that I installed torch using pip instead :
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113
| https://stackoverflow.com/questions/71770438/ |
why does pytorch's utils.save_image() change the color of my image | I am saving two images with PyTorch's utils.save_image() function here: one is the real image and the other is the perturbed image (just a patch). However, the latter image loses its real appearance when saved with save_image().
# save source image
utils.save_image(data.data, "./%s/%d_%d_org.png" % ("log", batch_idx, labels),
                 normalize=True)
# save adv image
utils.save_image(noised_data.data, "./%s/%d_%d_adv.png" % ("log", batch_idx, target_class), normalize=True)
This is the saved clean image
This is the saved perturbed image
| So, I managed to solve this by adding make_grid() to the save call and clipping the image to [0,1] without normalizing. (normalize=True rescales the tensor using its own min and max, which shifts the colors when the perturbed image's value range differs from the original's.)
utils.save_image(utils.make_grid(torch.clip(noised_data.data, 0, 1)), "./%s/%d_%d_adv.png" % ("log", batch_idx, target_class))
| https://stackoverflow.com/questions/71770745/ |
Combine multiple DataLoaders sequentially | I'm interested in how I'd go about combining multiple DataLoaders sequentially for training. I understand I can use ConcatDataset to combine datasets first, but this does not work for my use case. I have a custom collate_fn that is passed to each dataloader, and this function depends on an attribute of the underlying Dataset. So, I'll have a set of custom DataLoaders like the following:
def custom_collate(sample, ref):
    data = clean_sample(torch.stack([x[0] for x in sample]), ref)
    labels = torch.tensor([x[1] for x in sample])
    return data, labels

class CollateLoader(torch.utils.data.DataLoader):
    def __init__(self, ref, *args, **kwargs):
        collate_fn = functools.partial(custom_collate, ref=ref)
        super().__init__(collate_fn=collate_fn, *args, **kwargs)
Where ref is a property of the custom Dataset class and is passed on initialization of a CollateLoader. Also, I know transforms can be applied in the Dataset, but in my case it must be done batch-wise.
So, how would I go about combining multiple DataLoaders? In the PyTorch-Lightning LightningDataModule, we can do something like
def train_dataloader(self):
    return [data_loader_1, data_loader_2]
But this will return a list of batches, not the batches sequentially.
| I ran into the same problem and found a workaround. I overrode the epoch training loop using the Loops API from PyTorch Lightning, defining a class CustomLoop which inherits from pytorch_lightning.loops.TrainingEpochLoop and overrides the advance() method. I copy-pasted the source code from pytorch_lightning and replaced these lines with:
if not hasattr(self, 'dataloader_idx'):
    self.dataloader_idx = 0
if not isinstance(data_fetcher, DataLoaderIterDataFetcher):
    batch_idx = self.batch_idx + 1
    batch = next(data_fetcher.dataloader.loaders[self.dataloader_idx])
    self.dataloader_idx += 1
    if self.dataloader_idx == len(data_fetcher.dataloader.loaders):
        self.dataloader_idx = 0
else:
    batch_idx, batch = next(data_fetcher)
That way, instead of iterating over the CombinedLoader, I make it iterate over one dataloader at a time.
Then, to make use of this custom loop, you have to replace the default loop in the Trainer:
trainer.fit_loop.replace(epoch_loop=CustomLoop)
trainer.fit(my_model)
| https://stackoverflow.com/questions/71774659/ |
pytorch Error: module 'torch.nn' has no attribute 'ReLu' | i am working in google colab, so i assume its the current version of pytorch.
I tried this:
class Fc(nn.Module):
    def __init__(self):
        super(Fc, self).__init__()
        self.flatt = nn.Flatten()
        self.seq = nn.Sequential(nn.Linear(28*28, 512),
                                 nn.ReLU(),
                                 nn.Linear(512, 512),
                                 nn.ReLu(),
                                 nn.Linear(512, 10), nn.ReLu())

    def forward(x):
        p = self.flatt(x)
        p = self.seq(p)
        return p

m1 = Fc()
and got:
<ipython-input-85-142a1e77b6b6> in <module>()
----> 1 m1 = Fc()
<ipython-input-84-09df3be0b613> in __init__(self)
4 self.flatt = nn.Flatten()
5 self.relu = torch.nn.modules.activation.ReLU()
----> 6 self.seq = nn.Sequential(nn.Linear(28*28, 1012), nn.ReLU(),
nn.Linear(1012, 512), nn.ReLu(), nn.Linear(512, 10), nn.ReLu())
AttributeError: module 'torch.nn' has no attribute 'ReLu'
What I am doing wrong here?
| You got a typo regarding casing. It's called ReLU not ReLu.
import torch.nn as nn

class Fc(nn.Module):
    def __init__(self):
        super(Fc, self).__init__()
        self.flatt = nn.Flatten()
        self.seq = nn.Sequential(nn.Linear(28*28, 512),
                                 # TODO: Adjust here
                                 nn.ReLU(),
                                 nn.Linear(512, 512),
                                 nn.ReLU(),
                                 # TODO: Adjust here
                                 nn.Linear(512, 10), nn.ReLU())

    def forward(self, x):  # note: forward also needs self
        p = self.flatt(x)
        p = self.seq(p)
        return p

m1 = Fc()
| https://stackoverflow.com/questions/71796137/ |
Inputting a torch 3d tensor into a keras.Sequential model | Here I have a PyTorch tensor object which I need to use for training a neural network. Can PyTorch tensors be used for training a Keras neural network, and if so, what should the value of the input_shape parameter be when the training data has 3 dimensions?
Here's the code:
image, label = val_dset.__getitem__(20)
print(image, label)
Output:
tensor([[[0.8549, 0.8549, 0.8588, ..., 0.8667, 0.8627, 0.8627],
[0.8549, 0.8549, 0.8588, ..., 0.8667, 0.8667, 0.8667],
[0.8549, 0.8549, 0.8549, ..., 0.8667, 0.8667, 0.8667],
...,
[0.1216, 0.1412, 0.1686, ..., 0.5490, 0.7529, 0.5686],
[0.1451, 0.1843, 0.2667, ..., 0.4510, 0.6627, 0.5765],
[0.3098, 0.4078, 0.5098, ..., 0.3529, 0.4588, 0.4431]],
[[0.8549, 0.8549, 0.8588, ..., 0.8902, 0.8824, 0.8824],
[0.8549, 0.8549, 0.8588, ..., 0.8863, 0.8824, 0.8824],
[0.8549, 0.8549, 0.8588, ..., 0.8863, 0.8824, 0.8784],
...,
[0.1294, 0.1451, 0.1725, ..., 0.4745, 0.6824, 0.4902],
[0.1451, 0.1843, 0.2627, ..., 0.3804, 0.5804, 0.5098],
[0.2863, 0.3686, 0.4745, ..., 0.3020, 0.3843, 0.3922]],
[[0.8510, 0.8510, 0.8510, ..., 0.8667, 0.8627, 0.8627],
[0.8510, 0.8510, 0.8510, ..., 0.8667, 0.8667, 0.8667],
[0.8510, 0.8510, 0.8510, ..., 0.8667, 0.8667, 0.8627],
...,
[0.1294, 0.1451, 0.1765, ..., 0.3804, 0.5294, 0.3882],
[0.1490, 0.1882, 0.2627, ..., 0.3137, 0.4588, 0.4039],
[0.2902, 0.3647, 0.4588, ..., 0.2471, 0.3176, 0.3059]]]) 1
image.shape
Output:
torch.Size([3, 224, 224])
For the network:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, callbacks, Input

model = keras.Sequential([
    Input(shape=(_)),
    layers.BatchNormalization(),
    layers.Dropout(0.5),
    layers.Dense(512, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(1024, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(1024, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(1024, activation='relu'),
    layers.BatchNormalization(),
    layers.Dropout(0.5),
    layers.Dense(26, activation='softmax')
])
model.compile(
    loss='categorical_crossentropy',
    optimizer='adam',
    metrics=['accuracy']
)
early_stopping = keras.callbacks.EarlyStopping(
    patience=50,
    min_delta=0.005,
    restore_best_weights=True
)
| You can call .numpy() on the torch tensor and then convert the result to a TensorFlow tensor like below:
>>> torch_image = torch.rand(3, 224, 224)
>>> torch_image.shape
torch.Size([3, 224, 224])
>>> tf_image = tf.convert_to_tensor(torch_image.numpy())
>>> tf_image
<tf.Tensor: shape=(3, 224, 224), dtype=float32, numpy=
array([[[0.7935423 , 0.7479191 , 0.976204 , ..., 0.31874692,
0.70065683, 0.77162033],
...,
[0.19848353, 0.41827488, 0.5245047 , ..., 0.28861862,
0.5350955 , 0.6847391 ],
[0.57963634, 0.8628217 , 0.0179103 , ..., 0.19654012,
0.38167596, 0.5232694 ]]], dtype=float32)>
Or, if you have multiple images that you want to convert to tensors and train your model on, you can use tf.data.Dataset.from_tensor_slices; this function does the conversion for you by default, turning the torch tensors into TensorFlow tensors like below:
(In the below example I create 100 random arrays as images)
import torch
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, callbacks, Input
torch_image = torch.rand(100, 3, 224, 224)
torch_label = torch.randint(1,5,(100,))
dataset = tf.data.Dataset.from_tensor_slices((torch_image, torch_label))
dataset = dataset.repeat(10).batch(5)
# for check type Tensor or Torch
# tf_image , label_image = next(iter(dataset))
# >>> tf_image
# <tf.Tensor: shape=(5, 3, 224, 224), dtype=float32, numpy=...>
model = keras.Sequential([
Input(shape=(3,224,224,)),
layers.Flatten(),
layers.Dense(64, activation='relu'),
layers.BatchNormalization(),
layers.Dropout(0.5),
layers.Dense(1, activation='softmax')
])
model.compile(
loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
early_stopping = keras.callbacks.EarlyStopping(
patience=50, min_delta=0.005, restore_best_weights=True)
model.fit(dataset, epochs=10)
Output:
Epoch 1/10
200/200 [==============================] - 7s 37ms/step - loss: 0.0000e+00 - accuracy: 0.2500
| https://stackoverflow.com/questions/71797688/ |
How can I sum parts of a pytorch tensor of variable sizes? | Let's consider an example.
I have a tensor of size (10, 3).
I want to sum the first 3 rows, the next 2 rows, and the next 5 rows along axis 0.
For example from:
t = torch.ones([10, 3])
I want to get:
[
[3.0, 3.0, 3.0],
[2.0, 2.0, 2.0],
[5.0, 5.0, 5.0],
]
I want to specify a tensor with values, a tensor with the part sizes (and possibly an axis), and get back a tensor summed along that axis in parts of the specified sizes.
How can I achieve that?
| Following the great idea of @ben-grossmann, I modified it a little to use a sparse tensor and make it more efficient, and implemented it as a function:
def sum_var_parts(t, lens):
t_size_0 = t.size(0)
ind_x = torch.repeat_interleave(torch.arange(lens.size(0)), lens)
indices = torch.cat(
[
torch.unsqueeze(ind_x, dim=0),
torch.unsqueeze(torch.arange(t_size_0), dim=0)
],
dim=0
)
M = torch.sparse_coo_tensor(
indices,
torch.ones(t_size_0, dtype=torch.float32),
size=[lens.size(0), t_size_0]
)
return M @ t
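For example, a quick check with the data from the question (lens holds the part sizes as a tensor):
t = torch.ones([10, 3])
lens = torch.tensor([3, 2, 5])
print(sum_var_parts(t, lens))
# tensor([[3., 3., 3.],
#         [2., 2., 2.],
#         [5., 5., 5.]])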
| https://stackoverflow.com/questions/71800953/ |
How To Return Incorrectly Predicted Images | I am having trouble figuring out how to create a list containing the first 10 image IDs that were incorrectly predicted.
import os
import torch
import torchvision
from torch.utils.data import random_split
from torchvision.datasets import ImageFolder
from torchvision.transforms import ToTensor
from torch.utils.data.dataloader import DataLoader
import torch.nn as nn
import torch.nn.functional as F
batch_size=128
def predict_image(img, model):
# Convert to a batch of 1
xb = to_device(img.unsqueeze(0), device)
# Get predictions from model
yb = model(xb)
# Pick index with highest probability
_, preds = torch.max(yb, dim=1)
# Retrieve the class label
return dataset.classes[preds[0].item()]
def invalid_predictions(n=10):
invalid_ids = []
while invalid_ids
return invalid_ids
# This method should return a list of first
# 10 image ids that the model could not
# predict correctly.
# For example [40, 35, 20, ...]
| def invalid_predictions(images, labels, n=10):  # n must come after the required arguments
    invalid_ids = []
    image_count = 0
    invalid_count = 0
    while invalid_count < n:
        prediction = predict_image(images[image_count], model)
        if prediction != labels[image_count]:
            invalid_ids.append(image_count)
            invalid_count += 1
        image_count += 1
    return invalid_ids
| https://stackoverflow.com/questions/71813347/ |
how to overfit a model on a single batch in keras? | I am trying to overfit my model on a single batch to check model integrity. I am using Keras and TensorFlow for the implementation of my model and coding style for this project.
I know how to get a single batch and overfit the model in PyTorch, but I have no idea how to do it in Keras.
To get a single batch in PyTorch, I used:
data, target = next(iter(train_dataset))
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr = 0.0001)
for epoch in range(epochs):
print(f"Epoch [{epoch}/{epochs}]")
# for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device), target.to(device)
data = data.reshape(data.shape[0], -1)
# forward
score = model(data)
loss = criterion(score, target)
print(f"Loss: {loss.item()}")
# backward
optimizer.zero_grad()
loss.backward()
optimizer.step()
How can I do this in Keras? Is there any helpful material?
| Thank you everyone for coming here. I found a solution and here it is:
datagen = ImageDataGenerator(rescale=1 / 255.0,
rotation_range=20,
zoom_range=0.2,
width_shift_range=0.05,
height_shift_range=0.05,
shear_range=0.2,
horizontal_flip=True,
fill_mode="nearest"
)
# preprocessing_function=preprocess_input,
# Declare an image generator for validation & testing without generation
test_datagen = ImageDataGenerator(rescale = 1./255,)#preprocessing_function=preprocess_input
# Declare generators for training, validation, and testing from DataFrames
train_gen = datagen.flow_from_directory(directory_train,
target_size=(512, 512),
color_mode='rgb',
batch_size=BATCH_SIZE,
class_mode='binary',
shuffle=True)
val_gen = test_datagen.flow_from_directory(directory_val,
target_size=(512, 512),
color_mode='rgb',
batch_size=BATCH_SIZE,
class_mode='binary',
shuffle=False)
test_gen = test_datagen.flow_from_directory(directory_test,
target_size=(512, 512),
color_mode='rgb',
batch_size=BATCH_SIZE,
class_mode='binary',
shuffle=False)
train_images, train_labels = next(iter(train_gen))
val_images, val_labels = next(iter(val_gen))
test_images, test_labels = next(iter(val_gen))
#check shape for selected Batch
print("Length of Train images : {}".format(len(train_images)))
print("shape of Train images : {}".format(train_images.shape))
print("shape of Train labels : {}".format(train_labels.shape))
Length of Train images : 32
shape of Train images : (32, 512, 512, 3)
shape of Train labels : (32,)
history = model.fit(train_images, train_labels,
use_multiprocessing=True,
workers=16,
epochs=100,
class_weight=class_weights,
validation_data=(val_images, val_labels),
shuffle=True,
callbacks=call_backs)
| https://stackoverflow.com/questions/71823551/ |
How can I use 2 images as a training sample in PyTorch? | I just started learning deep learning, and my first homework is to build a leaf-classification system based on convolutional neural networks. I built a ResNet-34 model with code from GitHub to do it. However, my teacher told me that the basic training unit in his dataset is an image pair. I should use 2 images (photos of the same leaf under different light conditions) as the input, combining two 3-channel images into one 6-channel image, but I don't know how to input 2 images and combine them into 6 channels. How can I do that? Are there any functions? Should I modify the structure of the ResNet network?
This is my dataset; you can see that every two images are of the same leaf.
| You have several issues to tackle:
You need a Dataset with a __getitem__ method that returns 2 images (and a label) instead of the basic ones that returns a single image and a label. You'll probably need to customize your own dataset.
Make sure the augmentations you apply to your images are applied in the same manner to each pair.
You need to modify ResNet-34 network to get as an input 2 images, instead of one. See, e.g., this answer how that can be done.
You need to change the first convolution layer to have 6 input channels instead of 3 (see the sketch after this list).
If you want to use pre-trained weights, you will not be able to load the existing state_dict of ResNet-34 because of changes #3 and #4; you'll have to do it manually for the first time.
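A minimal sketch of points 3 and 4, assuming torchvision's ResNet-34 (the number of classes and batch size are illustrative assumptions):
import torch
import torch.nn as nn
from torchvision.models import resnet34

model = resnet34(num_classes=100)  # hypothetical number of leaf classes
# replace the stem so it accepts 6 input channels instead of 3
model.conv1 = nn.Conv2d(6, 64, kernel_size=7, stride=2, padding=3, bias=False)

img_a = torch.rand(8, 3, 224, 224)  # batch of photos under the first lighting
img_b = torch.rand(8, 3, 224, 224)  # the paired photos under the second lighting
x = torch.cat([img_a, img_b], dim=1)  # -> shape (8, 6, 224, 224)
out = model(x)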
| https://stackoverflow.com/questions/71852199/ |
Combination of features of convolutional layers channel-by-channel in a multi-branch model | The convolutional model presented below, has two branches and each branch (for example) has two stages (convolutional layers).
My aim is to combine the weighted feature maps (channels) of the first convolutional layer from the second branch with the channels of the first convolutional layer from the first branch.
I want to extract the channels from the first convolutional layer in the second branch, multiply them by a weight (weight is a class in the code that makes the output a weighted version of its input) and stack them with the channels of the counterpart convolutional layer from the first branch. Afterwards, a 1x1 conv2d reduces the stacked feature maps back to their initial size, and these combined channels should be used by the first branch, so that its next convolutional layers are computed from them. After that, I want the same kind of combination between the second convolutional layers of the branches. (In other words, I want to combine features channel-by-channel between branches.)
Please find the main_class (the whole model that consists of two branches) and the first_branch and second_branch below:
class main_class(nn.Module):
def __init__(self, pretrained=False):
super(main_class, self).__init__()
self.input=input_data() # input_data is a class the provides the input data for the each branch
self.conv_t2 = BasicConv3d(...........)
self.second_branch=second_branch(512, out_sigmoid=True)
self.conv_t1 = BasicConv3d(..............)
self.first_branch=first_branch(512, out_sigmoid=True)
self.last = nn.Conv2d(4, 1, kernel_size=1, stride=1)
self.sigmoid = nn.Sigmoid()
def forward(self, x, par = False):
x1, x2 = self.input(x)
#second branch
y2 = self.conv_t2(x2)
out2 = self.second_branch(y2)
#first branch
y1 = self.conv_t1(x1)
out1 = self.first_branch(y1)
x = torch.cat((out2, out1), 1)
x = self.last(x)
out = self.sigmoid(x)
if par:
return out1, out2, out
return out
The first_branch:
class first_branch(nn.Module):
def __init__(self, in_channel=512, out_channel=[380, 200], out_sigmoid=False):
super(first_branch, self).__init__()
self.out_sigmoid=out_sigmoid
self.deconvlayer1_2 = self._make_deconv(in_channel, out_channel[0], num_conv=3)
self.upsample1_2=Upsample(scale_factor=2, mode='bilinear')
self.combined1_2 = nn.Conv2d(720, 380, kernel_size=1, stride=1, padding=0)
self.deconvlayer1_1 = self._make_deconv(out_channel[0], out_channel[1], num_conv=3)
self.upsample1_1=Upsample(scale_factor=2, mode='bilinear')
self.combined1_1 = nn.Conv2d(400, 200, kernel_size=1, stride=1, padding=0)
def forward(self, x):
x=self.deconvlayer1_2(x)
x = self.upsample1_2(x)
x=self.deconvlayer1_1(x)
x = self.upsample1_1(x)
if self.out_sigmoid:
x=self.sigmoid(x)
return x
The second_branch:
class second_branch(nn.Module):
def __init__(self, in_channel=512, out_channel=[380,200], out_sigmoid=False):
super(second_branch, self).__init__()
self.out_sigmoid=out_sigmoid
self.weight = weight() # weight is a class that weighted its input
self.deconvlayer2_2 = self._make_deconv(in_channel, out_channel[0], num_conv=3)
self.upsample2_2=Upsample(scale_factor=2, mode='bilinear')
self.deconvlayer2_1 = self._make_deconv(out_channel[0], out_channel[1], num_conv=3)
self.upsample2_1=Upsample(scale_factor=2, mode='bilinear')
def forward(self, x):
x=self.deconvlayer2_2(x)
x = self.upsample2_2(x)
weighted2_2 = self.weight(x)
x=self.deconvlayer2_1(x)
x = self.upsample2_1(x)
weighted2_1 = self.weight(x)
if self.out_sigmoid:
x=self.sigmoid(x)
return x, weighted2_1, weighted2_2
To implement the mentioned idea in the main_class, I modified it as follows (instead of calling the first_branch class in the forward function of main_class, I inlined the lines of first_branch.forward there):
class main_class(nn.Module):
def __init__(self, pretrained=False):
super(main_class, self).__init__()
self.input=input_data() # input_data is a class the provides the input data for the each branch
self.conv_t2 = BasicConv3d(....................)
self.second_branch=second_branch(512, out_sigmoid=True)
self.conv_t1 = BasicConv3d(............)
self.first_branch=first_branch(512, out_sigmoid=True)
self.last = nn.Conv2d(4, 1, kernel_size=1, stride=1)
self.sigmoid = nn.Sigmoid()
def forward(self, x, par = False):
x1, x2 = self.input(x)
#second branch
y2 = self.conv_t2(x2)
out2, weighted2_1, weighted2_2 = self.second_branch(y2)
#first branch
y1 = self.conv_t1(x1)
# instead of using the first_branch class, I write the lines of first_branch.forward() inline below:
x=self.deconvlayer1_2(y1)
x = self.upsample1_2(x)
stacking_2 = torch.stack(x, weighted2_2)
x = self.first_branch.combined1_2(stacking_2)
x=self.deconvlayer1_1(x)
x = self.upsample1_1(x)
stacking_1 = torch.stack(x, weighted2_1)
x = self.first_branch.combined1_1(stacking_1)
out1=self.sigmoid(x)
x = torch.cat((out2, out1), 1)
x = self.last(x)
out = self.sigmoid(x)
if par:
return out1, out2, out
return out
I face the following error:
TypeError: Cannot create a consistent method resolution order (MRO) for bases Module, second_branch
How can I fix this problem and how can I make the code able to have the interactions between new branches that may be added later to the model (for example if I have three branches, how can I have this kind of data combination between the third branch and the second one, and between the output of the previous combination and the first branch)?
| In your main_class, the second branch does not receive any additional arguments; it's only the first one that needs the extra weighted inputs and has to be executed second (in order). You could just add a parameter to the forward method of that branch, like so:
class first_branch(nn.Module):
    ...
    def forward(self, x, weighted_x: list = []):
        x = self.deconvlayer1_2(x)
        x = self.upsample1_2(x)
        out1 = None
        if len(weighted_x) > 0:
            # concatenate along the channel dimension; the 1x1 'combined' conv
            # then squeezes the stacked maps back to the original channel count
            x = torch.cat((x, weighted_x[0]), dim=1)
            x = self.combined1_2(x)
        x = self.deconvlayer1_1(x)
        x = self.upsample1_1(x)
        out2 = None
        if len(weighted_x) > 1:
            x = torch.cat((x, weighted_x[1]), dim=1)
            x = self.combined1_1(x)
        if self.out_sigmoid:
            x = self.sigmoid(x)
        return x, out1, out2
As you can see, there's a lot of boilerplate code, which you can avoid by creating a small submodule that does this part of the forward; a sketch of that idea follows. You could then store multiple such modules in your first_branch inside a ModuleList and iterate over them.
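A minimal sketch of that refactor (the channel counts in the commented usage are illustrative assumptions, not values from the question):
import torch
import torch.nn as nn

class CombineBlock(nn.Module):
    """Concatenate a weighted map from another branch, then squeeze the channels back."""
    def __init__(self, own_ch, other_ch):
        super().__init__()
        self.reduce = nn.Conv2d(own_ch + other_ch, own_ch, kernel_size=1)

    def forward(self, x, weighted=None):
        if weighted is not None:
            x = self.reduce(torch.cat((x, weighted), dim=1))
        return x

# in first_branch.__init__, one block per stage:
# self.combiners = nn.ModuleList([CombineBlock(380, 380), CombineBlock(200, 200)])
# in first_branch.forward, iterate over the stages, combiners, and weighted inputs together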
| https://stackoverflow.com/questions/71872584/ |
Reduce multiclass image classification to binary classification in Pytorch | I am working on the STL-10 image dataset, which consists of 10 different classes. I want to reduce this multiclass image classification problem to a binary classification problem, such as class 1 vs. the rest. I am using PyTorch torchvision to download and use the STL data, but I am unable to set it up as one vs. the rest.
train_data=torchvision.datasets.STL10(root='data',split='train',transform=data_transforms['train'], download=True)
test_data=torchvision.datasets.STL10(root='data',split='test',transform=data_transforms['val'], download=True)
train_dataloader = DataLoader(train_data,batch_size = 64,shuffle=True,num_workers=2)
test_dataloader = DataLoader(test_data,batch_size = 64,shuffle=True,num_workers=2)
| For torchvision datasets, there is a built-in way to do this. You need to define a transformation function or class and pass it as target_transform while creating the dataset.
torchvision.datasets.STL10(root: str, split: str = 'train', folds: Union[int, NoneType] = None, transform: Union[Callable, NoneType] = None, target_transform: Union[Callable, NoneType] = None, download: bool = False)
Here is a working example for reference :
import torchvision
from torch.utils.data import DataLoader
from torchvision import transforms
class Multi2UniLabelTfm():
def __init__(self,pos_label=5):
if isinstance(pos_label,int) or isinstance(pos_label,float):
pos_label = [pos_label,]
self.pos_label = pos_label
def __call__(self,y):
# if y==self.pos_label:
if y in self.pos_label:
return 1
else:
return 0
if __name__=='__main__':
test_tfms = transforms.Compose([
transforms.ToTensor()
])
data_transforms = {'val':test_tfms}
#Original Labels
# target_transform = None
# Label 5 is converted to 1. Rest are 0.
# target_transform = Multi2UniLabelTfm(pos_label=5)
# Labels 5,6,7 are converted to 1. Rest are 0.
target_transform = Multi2UniLabelTfm(pos_label=[5,6,7])
test_data=torchvision.datasets.STL10(root='data',split='test',transform=data_transforms['val'], download=True, target_transform=target_transform)
test_dataloader = DataLoader(test_data,batch_size = 64,shuffle=True,num_workers=2)
for idx,(x,y) in enumerate(test_dataloader):
print(idx,y)
if idx == 5:
break
| https://stackoverflow.com/questions/71889622/ |
Training a u-net for multi-landmark heatmap regression producing the same heatmap for each channel | I’m training a U-Net (model below) to predict 4 heatmaps (a Gaussian centered around a keypoint, one in each channel). Each channel is for some reason outputting the same result; an example is given for a test image, where blue is the ground truth for that channel and red is the output of the U-Net. I have tried using L1, MSE and adaptive wing loss (Wang 2019), and none of them is able to regress the heatmaps. I'm not sure what I'm doing wrong and would appreciate any advice. Thanks
test1:
test2:
test3:
test4:
class CNN(nn.Module):
def __init__(self):
super(CNN,self).__init__()
self.layer1 = nn.Sequential(
nn.Conv2d(1, 64,kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
nn.ReLU(),
nn.BatchNorm2d(64))
self.layer2 = nn.Sequential(
nn.Conv2d(64, 64,kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
nn.ReLU(),
nn.BatchNorm2d(64))
self.layer3 = nn.Sequential(
nn.MaxPool2d(2, stride=2, padding=0))
self.layer4 = nn.Sequential(
nn.Conv2d(64,128,kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
nn.ReLU(),
nn.BatchNorm2d(128))
self.layer5 = nn.Sequential(
nn.Conv2d(128, 128,kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
nn.ReLU(),
nn.BatchNorm2d(128))
self.layer6 = nn.Sequential(
nn.MaxPool2d(2, stride=2, padding=0))
self.layer7 = nn.Sequential(
nn.Conv2d(128, 256,kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
nn.ReLU(),
nn.BatchNorm2d(256))
| It's hard to see how this is a U-Net. The only components that modify the spatial shape of your input are your MaxPool2d layers; there is no decoder/upsampling path. You have two of these pooling layers, so for a given input of size [B, 1, H, W], your output will have the shape [B, 256, H/4, W/4].
I think you need to give a more complete code snippet (don't have enough rep to leave this as a comment).
| https://stackoverflow.com/questions/71910583/ |
attempting to manually download MNIST pytorch dataset in databricks | I've attempted a couple of different iterations now to get the dataset manually loaded into Databricks's DBFS so that PyTorch can load it. However, the MNIST dataset seems to just be some binary files. Am I expected to unzip them first, or just point to the gzipped tarball? So far all my trials have produced this error:
train_dataset = datasets.MNIST(
13 'dbfs:/FileStore/tarballs/train_images_idx3_ubyte.gz',
14 train=True,
RuntimeError: Dataset not found. You can use download=True to download it
I am aware I can set download=True; however, due to the firewalls this is not an option, and I want to just upload the files and wire them in myself via DBFS. Has anyone done this as well?
EDIT: @alexey suggested I need to add the extra paths MNIST/raw
And then change the input to
train_dataset = datasets.MNIST(
'/dbfs/FileStore/tarballs',
train=True,
download=False,
transform=transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]))
data_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
But same error
| My code and dir:
train_loader = torch.utils.data.DataLoader(
datasets.MNIST('../colabx/data', train=True, download=False,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
....\colabx\data\MNIST\raw>ls
t10k-images-idx3-ubyte train-images-idx3-ubyte
t10k-images-idx3-ubyte.gz train-images-idx3-ubyte.gz
t10k-labels-idx1-ubyte train-labels-idx1-ubyte
t10k-labels-idx1-ubyte.gz train-labels-idx1-ubyte.gz
| https://stackoverflow.com/questions/71923061/ |
Assigning weights to the feature maps of a convolutional layer | I'm using a class SE_Block (squeeze-and-excitation block) and feed the N feature maps (channels) of a convolutional layer as input to this SE_Block. My goal is that after using the SE_Block, each of its input feature maps obtains its own weight. In other words, the SE_Block aims to assign a weight to each of the feature maps.
But I face the following error:
TypeError: __init__() missing 1 required positional argument: 'c'
class SE_Block(nn.Module):
"credits: https://github.com/moskomule/senet.pytorch/blob/master/senet/se_module.py#L4"
def __init__(self, c, r=16):
super().__init__()
self.squeeze = nn.AdaptiveAvgPool2d(1)
self.excitation = nn.Sequential(
nn.Linear(c, c // r, bias=False),
nn.ReLU(inplace=True),
nn.Linear(c // r, c, bias=False),
nn.Sigmoid()
)
def forward(self, x):
bs, c, _, _ = x.shape
y = self.squeeze(x).view(bs, c)
y = self.excitation(y).view(bs, c, 1, 1)
return x * y.expand_as(x)
My code:
class myclass(nn.Module):
def __init__(self, in_channel=1024, out_channel=[512], out_sigmoid=False):
super(myclass, self).__init__()
self.out_sigmoid=out_sigmoid
self.SEBlock = SE_Block()
self.deconvlayer = self._make_deconv(in_channel, out_channel[0], num_conv=3)
self.upsample=Upsample(scale_factor=2, mode='bilinear')
def _make_deconv(self, in_channel, out_channel, num_conv=2, kernel_size=3, stride=1, padding=1):
layers=[]
layers.append(BasicConv2d(in_channel, out_channel ,kernel_size=kernel_size, stride=stride, padding=padding))
for i in range(1, num_conv):
layers.append(_SepConv2d(out_channel, out_channel,kernel_size=kernel_size, stride=stride, padding=padding))
return nn.Sequential(*layers)
def forward(self, x):
x=self.deconvlayer(x)
x = self.upsample(x)
w = self.SEBlock(x)
return x, w
| When you're creating the SE_Block, you're not passing the c (channel) argument.
You need to add:
class myclass(nn.Module):
def __init__(self, in_channel=1024, out_channel=512, out_sigmoid=False):
...
self.SEBlock = SE_Block(out_channel) # Adding argument here
...
You also have some errors in the forward part of your class implementation; it should be rewritten like:
class myclass(nn.Module):
...
def forward(self, x):
x = self.deconvlayer(x) # Remove the 5_5 part
x = self.upsample(x) # Remove the 5_5 part
w = self.SEBlock(x)
return x, w
| https://stackoverflow.com/questions/71935848/ |
How to define the number of classes for Detectron2 bounding box prediction in Pytorch? | Where should I define the number of classes?
ROI_HEADS or RETINANET?
Or should both have the same value?
cfg.MODEL.RETINANET.NUM_CLASSES =int( len(Classe_list)-1)
cfg.MODEL.ROI_HEADS.NUM_CLASSES=int( len(Classe_list)-1)
| It depends on the network architecture you choose to use. If you use "MaskRCNN", then you should set cfg.MODEL.ROI_HEADS.NUM_CLASSES.
The deeper reason is that ROI_HEADS is the component used by MaskRCNN. If you use a different network, you may need to change different settings depending on its implementation.
| https://stackoverflow.com/questions/71940944/ |
Automatically check available GPU on Google Colab | Is there an automatic way to check which GPU is currently available on Google Colab (Pro)?
Say I would like to use a Tesla P100 instead of the Tesla T4 to train my model, is there a way to periodically check with a python script in Colab whether the P100 is available?
I have tried killing the kernel periodically, but it won't restart automatically after shutting down:
import os
def restart_runtime():
os.kill(os.getpid(), 9)
Thank you
| There is no way to request a specific GPU, but you can check which one you currently have. Add the line:
!nvidia-smi
to the beginning of your code and then keep on disconnecting and reconnecting the runtime until you get the GPU that you want.
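If you'd rather check from Python (e.g., periodically inside a script, as asked), PyTorch can report the name of the currently assigned GPU:
import torch
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. 'Tesla T4' or 'Tesla P100-PCIE-16GB'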
| https://stackoverflow.com/questions/71952532/ |
Curious loss in a CNN | In my CNN for image classification, I get a curious loss and I don't know what's wrong. I'd be glad if you could help me find the failure.
Here is an excerpt of my print output, and at the end there is my code:
Train Epoch: 1 [0/2048 (0%)] Loss: 0.654869
Train Epoch: 1 [64/2048 (3%)] Loss: 0.271722
Train Epoch: 1 [128/2048 (6%)] Loss: 0.001958
Train Epoch: 1 [192/2048 (9%)] Loss: 0.003399
Train Epoch: 1 [256/2048 (12%)] Loss: 0.000000
Train Epoch: 1 [320/2048 (16%)] Loss: 0.006664
Train Epoch: 1 [384/2048 (19%)] Loss: 0.000000
Train Epoch: 1 [448/2048 (22%)] Loss: 0.000000
Train Epoch: 1 [512/2048 (25%)] Loss: 0.000000
Train Epoch: 1 [576/2048 (28%)] Loss: 0.000000
Train Epoch: 2 [0/2048 (0%)] Loss: 173505.656250
Train Epoch: 2 [64/2048 (3%)] Loss: 0.000000
Train Epoch: 2 [128/2048 (6%)] Loss: 0.000000
Train Epoch: 2 [192/2048 (9%)] Loss: 33394.285156
Train Epoch: 2 [256/2048 (12%)] Loss: 0.000000
Train Epoch: 2 [320/2048 (16%)] Loss: 0.000000
Train Epoch: 2 [960/2048 (47%)] Loss: 0.000000
Train Epoch: 2 [1024/2048 (50%)] Loss: 636908.437500
Train Epoch: 2 [1088/2048 (53%)] Loss: 32862667387437056.000000
Train Epoch: 2 [1152/2048 (56%)] Loss: 15723443952412777718762887446528.000000
Train Epoch: 2 [1216/2048 (59%)] Loss: nan
Train Epoch: 2 [1280/2048 (62%)] Loss: nan
Train Epoch: 2 [1344/2048 (66%)] Loss: nan
Train Epoch: 2 [1408/2048 (69%)] Loss: nan
Here is the code for the training:
def trainM(epoch):
model.train()
for batch_id, (data, target) in enumerate(net.train_data):
target = torch.LongTensor(target[64*batch_id:64*(batch_id+1)])
data = Variable(data)
target = Variable(target)
optimizer.zero_grad()
out = model(data)
criterion = F.nll_loss
loss = criterion(out,target)
loss.backward()
optimizer.step()
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(epoch,batch_id*len(data), len(net.train_data)*64, 100*batch_id/len(net.train_data), loss.item()))
for item in range(1,10):
trainM(item)
That's the code for the neural network, and at the end there is the dataPrep method for data preparation.
train_data = []
target_list = []
class Netz(nn.Module):
def __init__(self):
super(Netz, self).__init__()
self.conv1 = nn.Conv2d(1, 10,kernel_size=5)
self.conv2 = nn.Conv2d(10,20, kernel_size = 5)
self.conv_dropout = nn.Dropout2d()
self.fc1 = nn.Linear(1050,60)
self.fc2 = nn.Linear(60,2)
self.fce = nn.Linear(20,1)
def forward(self,x):
x = self.conv1(x)
x = F.max_pool2d(x, 2)
x = F.relu(x)
x = self.conv2(x)
x = self.conv_dropout(x)
x = F.max_pool2d(x,2)
x = F.relu(x)
x = x.reshape(x.shape[0], x.shape[1], -1)
x = F.relu(self.fc1(x))
x = self.fc2(x)
x = self.fce(x.permute(0,2,1)).squeeze(-1)
return F.log_softmax(x, -1)
def dataPrep(list_of_data, data_path, category, quantity):
global train_data
global target_list
train_data_list = []
transform = transforms.Compose([
transforms.ToTensor(),
])
len_data = len(train_data)
for item in list_of_data:
f = random.choice(list_of_data)
list_of_data.remove(f)
try:
img = Image.open(data_path +f)
except:
continue
img_crop = img.crop((310,60,425,240))
img_tensor = transform(img_crop)
train_data_list.append(img_tensor)
if category == True:
target = 1
else:
target = 0
target_list.append(target)
if len(train_data_list) >=64:
train_data.append((torch.stack(train_data_list), target_list))
train_data_list = []
if (len_data*64 + quantity) <= len(train_data)*64:
break
return list_of_data
| I might also suggest that the network needs to be initialized with random parameters for the convolutional layer weights. By default these weights are 0, which probably means that you end up predicting all one class. This might explain the very low (0) or very high losses (based on the makeup of the particular batch).
| https://stackoverflow.com/questions/71955542/ |
Fine tune/train a pre-trained BERT on data with no sentences but only words (bank transactions) | I have a lot of bank transactions which I want to classify into different categories. The issue is that the text is not a sentence as such but consists only of words, e.g. "private withdrawal", "payment invoice 19234", "taxes" etc.
Since the domain is so specific, I think we might get better performance by fine-tuning an already pre-trained BERT, compared to just using the pre-trained BERT right away, but how do we do that when we don't have any sentences? I.e., how would the "guess next sentence" (next sentence prediction) part be created? Or can we skip it?
| Your problem is a sequence classification problem. If you want to use a pre-trained model, you want to do transfer learning. Basically you want to use the BERT base model and add a classification layer on top.
You can check huggingface for that https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertForSequenceClassification
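A minimal sketch of that setup with the transformers library (the number of labels and the label ids are assumptions for illustration; no next-sentence-prediction objective is needed for classification fine-tuning):
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=5)  # hypothetical number of categories

texts = ["private withdrawal", "payment invoice 19234", "taxes"]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
labels = torch.tensor([0, 1, 2])  # hypothetical category ids

out = model(**batch, labels=labels)  # out.loss for fine-tuning, out.logits for predictions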
My answer isn't very specific, feel free to add details to your question
| https://stackoverflow.com/questions/71967595/ |
Training multiple pytorch models on GPUs | I'm trying to implement something with pytorch.
I have 2 GPUs and I want to train 2 models as below:
model0 = Mymodel().to('cuda:0')
model1 = Mymodel().to('cuda:1')
opt0 = torch.optim.Adam(model0.parameters(), lr=0.01)
opt1 = torch.optim.Adam(model1.parameters(), lr=0.01)
# 1.Forward data into model0 on GPU0
out = model0(x.to('cuda:0'))
# 2.Calculate the loss on model0, update model0's parameters
model0.loss.backward()
opt0.step()
opt0.zero_grad()
# 3.Use model0's output as input of model1 on GPU1
out = model1(out.to('cuda:1'))
# 4.Calculate the loss on model1, update model1's parameters
model1.loss.backward()
opt1.step()
opt1.zero_grad()
I want to train them simultaneously to speed up the whole procedure, but I think the code as written will wait for step 2 (or 4) to finish before doing step 3 (or 1). How can I implement my idea? Which technique do I need (e.g. model parallelism, threads, multiprocessing...)?
I've considered some articles like this one, but I think there is something wrong with the result, and I think it doesn't actually train the models simultaneously.
| You have a strong dependency between the 2 models: the 2nd one always needs the output from the previous one, so that data flow will always be sequential for a given batch.
I think you might need some sort of multiprocessing (take a look at torch.multiprocessing) or some kind of queue where you can store the output from the first model; a simple pipelining sketch follows.
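A minimal single-process pipelining sketch, assuming (as in the question) that each model computes its own loss, so no gradients need to cross the GPU boundary. Since CUDA kernels are launched asynchronously, GPU 1 can process the previous batch's output while GPU 0 already works on the next batch:
prev_out = None  # stage-1 output from the previous iteration
for x in loader:  # hypothetical dataloader
    # stage 1 on GPU 0
    out = model0(x.to('cuda:0'))
    model0.loss.backward()  # following the question's loss-as-attribute convention
    opt0.step(); opt0.zero_grad()

    # stage 2 on GPU 1 consumes the *previous* batch's stage-1 output
    if prev_out is not None:
        out1 = model1(prev_out)
        model1.loss.backward()
        opt1.step(); opt1.zero_grad()

    # detach so stage 2's backward never reaches stage 1's graph, and copy
    # asynchronously so the transfer overlaps with compute
    prev_out = out.detach().to('cuda:1', non_blocking=True)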
| https://stackoverflow.com/questions/71970110/ |
While training BERT variant, getting IndexError: index out of range in self | While training XLMRobertaForSequenceClassification:
xlm_r_model(input_ids = X_train_batch_input_ids
, attention_mask = X_train_batch_attention_mask
, return_dict = False
)
I faced following error:
Traceback (most recent call last):
File "<string>", line 3, in <module>
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/models/roberta/modeling_roberta.py", line 1218, in forward
return_dict=return_dict,
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/models/roberta/modeling_roberta.py", line 849, in forward
past_key_values_length=past_key_values_length,
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/models/roberta/modeling_roberta.py", line 132, in forward
inputs_embeds = self.word_embeddings(input_ids)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/sparse.py", line 160, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py", line 2044, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
IndexError: index out of range in self
Below are details:
Creating model
config = XLMRobertaConfig()
config.output_hidden_states = False
xlm_r_model = XLMRobertaForSequenceClassification(config=config)
xlm_r_model.to(device) # device is device(type='cpu')
Tokenizer
xlmr_tokenizer = XLMRobertaTokenizer.from_pretrained('xlm-roberta-large')
MAX_TWEET_LEN = 402
>>> df_1000.info() # describing a data frame I have pre populated
<class 'pandas.core.frame.DataFrame'>
Int64Index: 1000 entries, 29639 to 44633
Data columns (total 2 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 text 1000 non-null object
1 class 1000 non-null int64
dtypes: int64(1), object(1)
memory usage: 55.7+ KB
X_train = xlmr_tokenizer(list(df_1000[:800].text), padding=True, max_length=MAX_TWEET_LEN+5, truncation=True) # +5: a head room for special tokens / separators
>>> list(map(len,X_train['input_ids'])) # why is it 105? shouldn't it be MAX_TWEET_LEN+5 = 407?
[105, 105, 105, 105, 105, 105, 105, 105, 105, 105, 105, 105, 105, 105, ...]
>>> type(train_index) # describing (for clarity) training fold indices I pre populated
<class 'numpy.ndarray'>
>>> train_index.size
640
X_train_fold_input_ids = np.array(X_train['input_ids'])[train_index]
X_train_fold_attention_mask = np.array(X_train['attention_mask'])[train_index]
>>> i # batch id
0
>>> batch_size
16
X_train_batch_input_ids = X_train_fold_input_ids[i:i+batch_size]
X_train_batch_input_ids = torch.tensor(X_train_batch_input_ids,dtype=torch.long).to(device)
X_train_batch_attention_mask = X_train_fold_attention_mask[i:i+batch_size]
X_train_batch_attention_mask = torch.tensor(X_train_batch_attention_mask,dtype=torch.long).to(device)
>>> X_train_batch_input_ids.size()
torch.Size([16, 105]) # why 105? Shouldn't this be MAX_TWEET_LEN+5 = 407?
>>> X_train_batch_attention_mask.size()
torch.Size([16, 105]) # why 105? Shouldn't this be MAX_TWEET_LEN+5 = 407?
After this I make the call to xlm_r_model(...) as stated at the beginning of this question and end up with the specified error.
Even noticing all these details, I am still not able to see why I am getting the specified error. Where am I going wrong?
| As per this post on GitHub, there can possibly be many reasons for this. Below is the list of reasons summarised from that post (as of April 24, 2022; note that the 2nd and 3rd reasons are not tested):
Mismatching vocabulary size of tokenizer and bert model. This will cause the tokenizer to generate IDs that the model cannot understand. ref
Model and data existing on different devices (CPUs, GPUs, TPUs) ref
Sequences of length more than 512 (which is max for BERT-like models) ref
In my case, it was the first reason, a mismatching vocab size, and here is how I fixed it:
xlmr_tokenizer = XLMRobertaTokenizer.from_pretrained('xlm-roberta-large')
config = XLMRobertaConfig()
config.vocab_size = xlmr_tokenizer.vocab_size # setting both to have same vocab size
| https://stackoverflow.com/questions/71984994/ |
Resize to 224×224 directly or resize to 256×256 then crop 224×224? | My images in the training set are leaves like this:
[image: example leaf from the training set]
Each image is 572×108, and my ResNet network needs 224×224 images as input.
I found that most of the code processes the images in the second way (resize to 256×256, then crop 224×224), so I did that.
As a result, parts of my leaves were cut off, which may influence the leaf classification. Like this:
[image: cropped leaf with parts cut off]
The first way (resize to 224×224 directly) may keep a more complete structure. Like this:
[image: directly resized leaf, structure intact]
I am worried that if I choose the first way I will lose many training images, since randomly cropping 224×224 patches can generate more samples for training.
Which should I choose?
| Running two experiments and comparing their evaluation results is the simplest solution; a sketch of the two pipelines follows.
A complete view is not necessary for a model to classify images, just as it is not for a human. On the contrary, learning from cropped images can normally improve the generalization capacity of a model.
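The two preprocessing options you would compare, as standard torchvision transforms (normalization omitted for brevity):
from torchvision import transforms

# option 1: resize directly to the network input size
direct = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# option 2: resize the short side to 256, then take a random 224x224 crop
resize_crop = transforms.Compose([
    transforms.Resize(256),
    transforms.RandomCrop(224),
    transforms.ToTensor(),
])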
| https://stackoverflow.com/questions/71989157/ |
Is there a multi-image to 1-image deep learning method? (pix2pix?) | I'm trying to build a video stabilization deep learning model.
I want to make the model predict how the frame should be stabilized depending on the last 10 frames.
I have tried pix2pix, which is image-to-image, but I didn't get a good result.
So I want the same as pix2pix, but with multiple input images mapping to 1 output image.
Is there such a method, or can I do it using pix2pix?
| So, I do not know if you actually need to build this video stabilization using deep learning or if you just want an off-the-shelf solution.
For the off-the-shelf solution, you can look into vidgear, which has an awesome stabilization system built in (a usage sketch follows): https://abhitronix.github.io/vidgear/latest/gears/stabilizer/overview/
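A minimal usage sketch based on the linked docs (the input file name is a placeholder; check the overview page for the exact options):
import cv2
from vidgear.gears.stabilizer import Stabilizer

stream = cv2.VideoCapture("shaky_video.mp4")  # hypothetical input video
stab = Stabilizer(smoothing_radius=30, crop_n_zoom=True, border_size=5)

while True:
    ok, frame = stream.read()
    if not ok:
        break
    stabilized = stab.stabilize(frame)  # returns None while its frame buffer fills
    if stabilized is None:
        continue
    cv2.imshow("stabilized", stabilized)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

stab.clean()
stream.release()
cv2.destroyAllWindows()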
If you want a more advanced solution and architecture, you could take a look at this list of papers with code: https://paperswithcode.com/task/video-stabilization
Given the current architecture of pix2pix, I do not see how multiple input images would provide stabilization since, just as you said, pix2pix considers neither its previous output nor the flow of images when generating its prediction.
I hope that it helps ^^
| https://stackoverflow.com/questions/71995440/ |
Splitting a Tensor channelwise | I am dumping a tensor of size [1,3,224,224] to a file and would like to split it into 3 tensors of size [1,1,224,224], one for each RGB channel, and dump them into 3 separate files. How do I implement this?
| I think the simplest way is by a loop:
for c in range(x.shape[1]):
torch.save(x[:, c:c+1, ...], f'channel{c}.pth')
Note the indexing of the second (channel) dimension: you want the saved tensor to have a singleton channel dimension. If you were to index it using x[:, c, ...] you will get a tensor of shape [1, 224, 224] (no channel dimension at all).
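An equivalent loop without manual slicing uses torch.split, whose chunks also keep the singleton channel dimension:
for c, chan in enumerate(torch.split(x, 1, dim=1)):
    torch.save(chan, f'channel{c}.pth')  # chan has shape [1, 1, 224, 224]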
| https://stackoverflow.com/questions/71997364/ |
Unable to convert the pytorch model to the TorchScript format | I loaded the pretrained PyTorch model file, and when I try to run it with torch.jit.script I get the error below. When I try to run an inbuilt pretrained model from pytorch.org it works perfectly fine (Ex. Link to example code), but it throws an error for the custom-built pretrained model (Git repo containing the pretrained model weights), (pretrained weight).
encoder = enCoder()
encoder = torch.nn.DataParallel(encoder)
encoder.load_state_dict(weights['state_dict'])
encoder.eval()
torchscript_model = torch.jit.script(encoder)
# Error
---------------------------------------------------------------------------
NotSupportedError Traceback (most recent call last)
[<ipython-input-30-1d9f30e14902>](https://localhost:8080/#) in <module>()
1 # torch.quantization.convert(encoder, inplace=True)
2
----> 3 torchscript_model = torch.jit.script(encoder)
8 frames
[/usr/local/lib/python3.7/dist-packages/torch/jit/_script.py](https://localhost:8080/#) in script(obj, optimize, _frames_up, _rcb, example_inputs)
1256 obj = call_prepare_scriptable_func(obj)
1257 return torch.jit._recursive.create_script_module(
-> 1258 obj, torch.jit._recursive.infer_methods_to_compile
1259 )
1260
[/usr/local/lib/python3.7/dist-packages/torch/jit/_recursive.py](https://localhost:8080/#) in create_script_module(nn_module, stubs_fn, share_types, is_tracing)
449 if not is_tracing:
450 AttributeTypeIsSupportedChecker().check(nn_module)
--> 451 return create_script_module_impl(nn_module, concrete_type, stubs_fn)
452
453 def create_script_module_impl(nn_module, concrete_type, stubs_fn):
[/usr/local/lib/python3.7/dist-packages/torch/jit/_recursive.py](https://localhost:8080/#) in create_script_module_impl(nn_module, concrete_type, stubs_fn)
461 """
462 cpp_module = torch._C._create_module_with_type(concrete_type.jit_type)
--> 463 method_stubs = stubs_fn(nn_module)
464 property_stubs = get_property_stubs(nn_module)
465 hook_stubs, pre_hook_stubs = get_hook_stubs(nn_module)
[/usr/local/lib/python3.7/dist-packages/torch/jit/_recursive.py](https://localhost:8080/#) in infer_methods_to_compile(nn_module)
730 stubs = []
731 for method in uniqued_methods:
--> 732 stubs.append(make_stub_from_method(nn_module, method))
733 return overload_stubs + stubs
734
[/usr/local/lib/python3.7/dist-packages/torch/jit/_recursive.py](https://localhost:8080/#) in make_stub_from_method(nn_module, method_name)
64 # In this case, the actual function object will have the name `_forward`,
65 # even though we requested a stub for `forward`.
---> 66 return make_stub(func, method_name)
67
68
[/usr/local/lib/python3.7/dist-packages/torch/jit/_recursive.py](https://localhost:8080/#) in make_stub(func, name)
49 def make_stub(func, name):
50 rcb = _jit_internal.createResolutionCallbackFromClosure(func)
---> 51 ast = get_jit_def(func, name, self_name="RecursiveScriptModule")
52 return ScriptMethodStub(rcb, ast, func)
53
[/usr/local/lib/python3.7/dist-packages/torch/jit/frontend.py](https://localhost:8080/#) in get_jit_def(fn, def_name, self_name, is_classmethod)
262 pdt_arg_types = type_trace_db.get_args_types(qualname)
263
--> 264 return build_def(parsed_def.ctx, fn_def, type_line, def_name, self_name=self_name, pdt_arg_types=pdt_arg_types)
265
266 # TODO: more robust handling of recognizing ignore context manager
[/usr/local/lib/python3.7/dist-packages/torch/jit/frontend.py](https://localhost:8080/#) in build_def(ctx, py_def, type_line, def_name, self_name, pdt_arg_types)
300 py_def.col_offset + len("def"))
301
--> 302 param_list = build_param_list(ctx, py_def.args, self_name, pdt_arg_types)
303 return_type = None
304 if getattr(py_def, 'returns', None) is not None:
[/usr/local/lib/python3.7/dist-packages/torch/jit/frontend.py](https://localhost:8080/#) in build_param_list(ctx, py_args, self_name, pdt_arg_types)
324 expr = py_args.kwarg
325 ctx_range = ctx.make_range(expr.lineno, expr.col_offset - 1, expr.col_offset + len(expr.arg))
--> 326 raise NotSupportedError(ctx_range, _vararg_kwarg_err)
327 if py_args.vararg is not None:
328 expr = py_args.vararg
NotSupportedError: Compiled functions can't take variable number of arguments or use keyword-only arguments with defaults:
File "/usr/local/lib/python3.7/dist-packages/torch/nn/parallel/data_parallel.py", line 147
def forward(self, *inputs, **kwargs):
~~~~~~~ <--- HERE
with torch.autograd.profiler.record_function("DataParallel.forward"):
if not self.device_ids:
`
### Versions
Collecting environment information...
PyTorch version: 1.10.0+cu111
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.12.0
Libc version: glibc-2.26
Python version: 3.7.13 (default, Mar 16 2022, 17:37:17) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
Is CUDA available: False
CUDA runtime version: 11.1.105
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.10.0+cu111
[pip3] torchaudio==0.10.0+cu111
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.11.0
[pip3] torchvision==0.11.1+cu111
[conda] Could not collect
Any help is appreciated.
| torch.jit.script creates a ScriptFunction (a Function with a Graph) by parsing the Python source code of module.forward().
If your module contains some grammar that cannot be handled by the Python parser, it will fail. This happens especially for objects that do not have a static type.
Using torch.jit.trace avoids such problems. It creates the Graph during the op-call process (the C++ way). It will never fail this way, but it cannot handle if-else branch cases. If you have branches, you should trace the model every iteration, leading to 2 forwards and 1 backward in each training step. With a branch-free model, you can reuse the traced ScriptFunction.
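A minimal tracing sketch for the model in the question (the example input shape is an assumption; encoder.module unwraps the nn.DataParallel wrapper, whose *args/**kwargs forward() is exactly what torch.jit.script rejects in the traceback):
example = torch.rand(1, 3, 224, 224)  # hypothetical input shape
traced = torch.jit.trace(encoder.module, example)
traced.save("encoder_traced.pt")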
| https://stackoverflow.com/questions/72003175/ |
Cannot import name 'functional_datapipe' from 'torch.utils.data' | When I am running datasets_utils.py from '/usr/local/lib/python3.7/dist-packages/torchtext/data/datasets_utils.py' in Google Colab, the following error occurs even with the most updated versions of Python packages:
ImportError: cannot import name 'functional_datapipe' from 'torch.utils.data' (/usr/local/lib/python3.7/dist-packages/torch/utils/data/init.py)
Are there any solutions to such errors, as I could not find functional_datapipe even in the official torch.utils.data documentation? The following is an excerpt from datasets_utils.py in the Google Colab environment:
import functools
import inspect
import os
import io
import torch
from torchtext.utils import (
validate_file,
download_from_url,
extract_archive,
)
from torch.utils.data import functional_datapipe, IterDataPipe
from torch.utils.data.datapipes.utils.common import StreamWrapper
import codecs
| It might be available only in torchdata.datapipes; see the sketch below.
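A hedged sketch of the replacement imports, assuming the separate torchdata package is installed (pip install torchdata):
from torchdata.datapipes import functional_datapipe
from torchdata.datapipes.iter import IterDataPipe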
| https://stackoverflow.com/questions/72009516/ |
What does Union from typing module in Python do? | I was looking at the implementation of the ResNet deep learning architecture in PyTorch from GitHub.
At line 167, inside the initializer of another class definition, which defines ResNet and is also named ResNet, I saw the code below:
block: Type[Union[BasicBlock, Bottleneck]],
BasicBlock and Bottleneck are two classes defined before line 167. When I looked up what Type and Union do, as far as I understood, it says that block could be either BasicBlock or Bottleneck. block itself is passed to a function named _make_layer, which is defined at line 223 and is a method of the ResNet class, and I wonder: how does _make_layer know whether the passed argument is BasicBlock or Bottleneck?
When someone makes an instance of the ResNet class, should they pass a BasicBlock or a Bottleneck? And is this how _make_layer knows what it got as its argument? Then why do we need to use Union?
| To reinforce the previous answer: Python couldn't care less about type annotations. They are not enforced at runtime at all; their sole purpose is for linters. A quick demonstration follows.
IDEs like PyCharm also have linters built into them. PyCharm has caught numerous bugs for me: "you said that function was supposed to take two strings, but you're passing it an integer and a string". Python itself just doesn't care.
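A small self-contained example of the same Type[Union[...]] annotation (the classes here are stand-ins, not the real torchvision ones):
from typing import Type, Union

class BasicBlock: ...
class Bottleneck: ...

def make_layer(block: Type[Union[BasicBlock, Bottleneck]]) -> None:
    # the annotation only documents intent for readers and linters;
    # Python performs no check here, so block can be any callable class
    layer = block()
    print(type(layer).__name__)

make_layer(BasicBlock)  # fine: prints 'BasicBlock'
make_layer(str)         # also runs: no runtime type enforcement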
| https://stackoverflow.com/questions/72017661/ |
Troubles while compiling C++ program with PyTorch, HElib and OpenCV | I'm trying to compile my C++ program that uses the libraries HElib, OpenCV and PyTorch. I'm on Ubuntu 20.04. The entire code is:
#include <iostream>
#include <fstream>
#include <string>
#include <vector>
#include <cstdint>
#include <memory>
#include <stdio.h>
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>
#include <helib/helib.h>
#include <torch/torch.h>
#include "include/mnist/mnist_reader.hpp"
using namespace cv;
using namespace torch;
using namespace std;
using namespace mnist;
using namespace helib;
int main(int argc, char* argv[]) {
Tensor tensor = torch::rand({2, 3});
cout << tensor << endl;
Mat imageMat = Mat(image, true).reshape(0, 1).t();
return 0;
}
(where image is a 28x28 matrix).
I'm compiling it with the command (I know I should be using cmake but I'm new to C++ and for now I'd like to learn how to link libraries properly just from the command line):
g++ -g -O2 -std=c++17 -pthread -march=native prova.cpp -lopencv_core -lopencv_highgui -lopencv_imgcodecs -o prova -I/home/lulu/helib_install/helib_pack/include -I/usr/include/opencv4 -I/home/lulu/libtorch/include -I/home/lulu/libtorch/include/torch/csrc/api/include -I/home/lulu/libtorch/include/torch -L/home/lulu/helib_install/helib_pack/lib -L/usr/include/opencv4 -L/home/lulu/libtorch/lib -lhelib -lntl -lgmp -lm -ltorch -ltorch_cpu -lc10 -D_GLIBCXX_USE_CXX11_ABI=0
The error I get is the following:
/usr/bin/ld: /tmp/cc3mP2mc.o: in function `cv::Mat::Mat(int, int, int, void*, unsigned long)':
/usr/include/opencv4/opencv2/core/mat.inl.hpp:548: undefined reference to `cv::error(int, std::string const&, char const*, char const*, int)'
collect2: error: ld returned 1 exit status
Deleting the flag -D_GLIBCXX_USE_CXX11_ABI=0 doesn't help, I tried.
I also tried setting the variable LD_LIBRARY_PATH to /home/lulu/libtorch/lib, but that doesn't help either.
I think I'm linking all the libraries I need, so what am I missing?
Thanks in advance for the help.
| I found the answer, but with my little experience I can't really explain what I've done; I'll just describe the steps.
I've re-downloaded PyTorch from its website, selecting the libtorch-cxx11-abi-shared-with-deps version (the one compiled with -D_GLIBCXX_USE_CXX11_ABI=1).
Then I had to add the flag -Wl,-rpath,/path/to/pytorch/lib to the compilation command, because for some reason the runtime loader didn't find libc10 and libtorch_cpu, so the final command was:
g++ -g -O2 -std=c++17 \
-pthread \
-march=native \
-I/home/lulu/helib_install/helib_pack/include \
-I/usr/include/opencv4 \
-I/home/lulu/libtorch/include \
-I/home/lulu/libtorch/include/torch/csrc/api/include \
-I/home/lulu/libtorch/include/torch \
-L/home/lulu/helib_install/helib_pack/lib \
-L/usr/include/opencv4 \
-L/home/lulu/libtorch/lib \
-Wl,-rpath,/home/lulu/libtorch/lib \
prova.cpp \
-lopencv_core -lopencv_highgui -lopencv_imgcodecs \
-lhelib -lntl -lgmp -lm \
-ltorch -ltorch_cpu -lc10 \
-o prova
| https://stackoverflow.com/questions/72030738/ |
Pytorch: Having trouble understanding the in-place replacement happening | This seems to be a common error people get, but I can't really understand the real cause.
I am having trouble figuring out where the in-place replacement is happening.
My forward function:
def forward(self, input, hidden=None):
if hidden is None :
hidden = self.init_hidden(input.size(0))
out, hidden = self.lstm(input, hidden)
out = self.linear(out)
return out, hidden
The training loop
def training(dataloader, iterations, device):
torch.autograd.set_detect_anomaly(True)
model = NModel(662, 322, 2, 1)
hidden = None
model.train()
loss_fn = F.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
running_loss = []
last_loss = 0
for i, (feature, label) in tqdm(enumerate(dataloader)):
optimizer.zero_grad()
outputs, hidden = model(feature, hidden)
loss = loss_fn(outputs, label)
print("loss item" , loss.item())
running_loss.append(loss.item())
loss.backward(retain_graph=True)
optimizer.step()
if i%1000 == 0:
last_loss = len(running_loss) /1000
return last_loss
The error's stack trace
Traceback (most recent call last):
File "main.py", line 18, in <module>
main()
File "main.py", line 14, in main
training(dataloader=training_loader, iterations=3, device=0)
File "/home//gitclones/feature-extraction/training.py", line 30, in training
loss.backward(retain_graph=True)
File "/home/miniconda3/envs/pytorch-openpose/lib/python3.7/site-packages/torch/_tensor.py", line 307, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/home//miniconda3/envs/pytorch-openpose/lib/python3.7/site-packages/torch/autograd/__init__.py", line 156, in backward
allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [322, 1288]], which is output 0 of AsStridedBackward0, is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
When I remove optimizer.step() the code runs, but then no parameter updates happen, so no real training takes place.
[EDIT]
Strangely, it now works when I don't pass the hidden state as input in the forward pass:
def forward(self, input, hidden=None):
if hidden is None :
hidden = self.init_hidden(input.size(0))
out, hidden = self.lstm(input)
out = self.linear(out)
return out, hidden
| Adding hidden = tuple([each.data for each in hidden]) after your optimizer.step() fixes the error: it detaches the hidden state, so the next backward pass no longer reaches into the previous iteration's (already modified) graph. You can achieve the same effect with hidden = tuple([each.detach() for each in hidden]).
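Concretely, in the training loop from the question the detach goes right after the optimizer step, and retain_graph=True is then no longer needed:
outputs, hidden = model(feature, hidden)
loss = loss_fn(outputs, label)
loss.backward()  # plain backward; the graph can be freed after each batch
optimizer.step()
hidden = tuple(h.detach() for h in hidden)  # cut the graph between batches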
| https://stackoverflow.com/questions/72038258/ |
Terminate called after throwing an instance of 'std::bad_alloc' from importing torch_geometric | I am writing in python and getting the error:
"terminate called after throwing an instance of 'std::bad_alloc'.
what(): std::bad_alloc.
Aborted (core dumped)"
After lots of debugging, I found out the source of the issue is:
import torch_geometric
I even created a file with just this line of code, and I still get the error.
I am running in a conda environment (4.10.3)
I made sure that I installed torch_geometric while I was in the conda environment. I tried deleting and reinstalling, but this did not work.
I also tried deleting and reinstalling torch/cuda.
I googled the error, but only came up with issues about data allocation, and I'm not sure how that would apply here, since I am just importing torch_geometric.
Any ideas?
| This problem is caused by mismatched PyTorch versions.
The PyTorch currently in use is 1.11.0, but torch-scatter and torch-sparse were installed against 1.10.1:
pip install torch-scatter -f https://data.pyg.org/whl/torch-1.10.1+cu113.html
pip install torch-sparse -f https://data.pyg.org/whl/torch-1.10.1+cu113.html
So torch-1.10.1 was used to install scatter and sparse, but torch-1.11.0 was the actual version.
Simply doing:
pip uninstall torch-scatter
pip uninstall torch-sparse
pip install torch-scatter -f https://data.pyg.org/whl/torch-1.11.0+cu113.html
pip install torch-sparse -f https://data.pyg.org/whl/torch-1.11.0+cu113.html
Resolves the issue.
| https://stackoverflow.com/questions/72039582/ |
Pytorch Temporal Fusion Transformer - TimeSeriesDataSet TypeError: '<' not supported between instances of 'int' and 'str' | I'm following Temporal-Fusion-Transformer (TFT) tutorial in the PytorchForecasting (https://pytorch-forecasting.readthedocs.io/en/stable/tutorials/stallion.html#Demand-forecasting-with-the-Temporal-Fusion-Transformer) to train TFT model with custom dataset to predict "booking" value based on several static/time-varying features for each Sold TO Party in each Region and Sub-Region.
When converting the dataframe into a PyTorch Forecasting TimeSeriesDataSet, I encountered the error "TypeError: '<' not supported between instances of 'int' and 'str'". Does anyone know the potential cause of this error?
The code is as follows:
max_prediction_length = 2
max_encoder_length = 8
training_cutoff = ctmdata["time_idx"].max() - max_prediction_length
target_value = "Booking"
key_idx = ["Region","Sub-Region","Sold TO Party Code"]
training = TimeSeriesDataSet(
ctmdata[lambda x: x.time_idx <= training_cutoff],
time_idx="time_idx",
target=target_value,
group_ids=key_idx,
min_encoder_length=max_encoder_length // 2, # keep encoder length long (as it is in the validation set)
max_encoder_length=max_encoder_length,
min_prediction_length=1,
max_prediction_length=max_prediction_length,
static_categoricals=["Region","Sub-Region","Sold TO Party Code","Customer Type","Customer Segment L1","Customer Segment L2"],
static_reals=[],
time_varying_known_categoricals=["Supply Chain Customer","Quarter"],
variable_groups={}, # group of categorical variables can be treated as one variable
time_varying_known_reals=["time_idx"],
time_varying_unknown_categoricals=["DW Customer"],
time_varying_unknown_reals=[
"Booking","yeojohnson_Booking","avg_booking_bySubRegion","avg_booking_byCTMSegL1",
"Billing","yeojohnson_Billing","avg_billing_bySubRegion","avg_billing_byCTMSegL1",
"MGP","MGP%","BB Ratio"
],
target_normalizer=GroupNormalizer(
groups=key_idx, transformation="softplus"
) # use softplus and normalize by group
)
The error message shows:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-119-9cc2c7eb9aa5> in <module>()
38 ],
39 target_normalizer=GroupNormalizer(
---> 40 groups=key_idx, transformation="softplus"
41 ) # use softplus and normalize by group
42 )
4 frames
/usr/local/lib/python3.7/dist-packages/pytorch_forecasting/data/timeseries.py in __init__(self, data, time_idx, target, group_ids, weight, max_encoder_length, min_encoder_length, min_prediction_idx, min_prediction_length, max_prediction_length, static_categoricals, static_reals, time_varying_known_categoricals, time_varying_known_reals, time_varying_unknown_categoricals, time_varying_unknown_reals, variable_groups, constant_fill_strategy, allow_missing_timesteps, lags, add_relative_time_idx, add_target_scales, add_encoder_length, target_normalizer, categorical_encoders, scalers, randomize_length, predict_mode)
432
433 # preprocess data
--> 434 data = self._preprocess_data(data)
435 for target in self.target_names:
436 assert target not in self.scalers, "Target normalizer is separate and not in scalers."
/usr/local/lib/python3.7/dist-packages/pytorch_forecasting/data/timeseries.py in _preprocess_data(self, data)
651 # use existing encoder - but a copy of it not too loose current encodings
652 encoder = deepcopy(self.categorical_encoders.get(group_name, NaNLabelEncoder()))
--> 653 self.categorical_encoders[group_name] = encoder.fit(data[name].to_numpy().reshape(-1), overwrite=False)
654 data[group_name] = self.transform_values(name, data[name], inverse=False, group_id=True)
655
/usr/local/lib/python3.7/dist-packages/pytorch_forecasting/data/encoders.py in fit(self, y, overwrite)
88
89 idx += offset
---> 90 for val in np.unique(y):
91 if val not in self.classes_:
92 self.classes_[val] = idx
<__array_function__ internals> in unique(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/numpy/lib/arraysetops.py in unique(ar, return_index, return_inverse, return_counts, axis)
270 ar = np.asanyarray(ar)
271 if axis is None:
--> 272 ret = _unique1d(ar, return_index, return_inverse, return_counts)
273 return _unpack_tuple(ret)
274
/usr/local/lib/python3.7/dist-packages/numpy/lib/arraysetops.py in _unique1d(ar, return_index, return_inverse, return_counts)
331 aux = ar[perm]
332 else:
--> 333 ar.sort()
334 aux = ar
335 mask = np.empty(aux.shape, dtype=np.bool_)
TypeError: '<' not supported between instances of 'int' and 'str'
Thanks for help!
| Check the values of the columns listed in static_categoricals=["Region","Sub-Region","Sold TO Party Code","Customer Type","Customer Segment L1","Customer Segment L2"]; most likely one of them contains both str and int values, and the encoder fails when it tries to sort (compare) them.
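A quick way to rule this out is to cast every group/categorical column to str before building the TimeSeriesDataSet; a minimal sketch (the column names are taken from the question):
categorical_cols = [
    "Region", "Sub-Region", "Sold TO Party Code", "Customer Type",
    "Customer Segment L1", "Customer Segment L2",
    "Supply Chain Customer", "Quarter", "DW Customer",
]
for col in categorical_cols:
    ctmdata[col] = ctmdata[col].astype(str)  # avoids mixed int/str values in the encoder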
| https://stackoverflow.com/questions/72041613/ |
Torchvision transforms.ToTensor shows different range results; does not scale array to [0,1] as the documentation says | I am converting a numpy array to a tensor with the following code:
self.transform_1 = transforms.Compose([transforms.ToTensor()])
source_parsing_np = cv2.imread(source_parsing_path, cv2.IMREAD_GRAYSCALE) # integer values in the range [0, 14]
source_parsing_tensor = self.transform_1(source_parsing_np)
As the documentation says, the data will be scaled to [0.0, 1.0]. But in my environment, two different results appeared at different times.
Specifically, in the previous training and testing code and in the Jupyter notebook code I'm testing now, the tensor values are still integers in the range [0, 14]. (screenshot: wrong range result)
When I use the same code in the test phase again, the data truly gets scaled to [0.0, 1.0], which differs from the previous training phase, while another numpy array passed through the same transform is unchanged (still integers in [0, 24]). (screenshot: same transform, different result)
Because of the above problem I cannot reproduce my model test results. I would be very thankful for any information on this.
| I found the reason: the conversion occurs only when the numpy.ndarray has dtype = np.uint8, but my dtype is np.long; sorry for my carelessness.
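A quick check of the dtype-dependent behavior (a small sketch, assuming torchvision's standard ToTensor):
import numpy as np
from torchvision import transforms

to_tensor = transforms.ToTensor()
a = np.full((4, 4), 14, dtype=np.uint8)
print(to_tensor(a).max())  # tensor(0.0549): uint8 input is divided by 255
b = np.full((4, 4), 14, dtype=np.int64)
print(to_tensor(b).max())  # tensor(14): other integer dtypes are left unscaled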
| https://stackoverflow.com/questions/72041753/ |
Converting PyTorch Boolean target to regression target | Question
I have code that is based on Part 2, Chapter 11 of Deep Learning with PyTorch, by Luca Pietro Giovanni Antiga, Thomas Viehmann, and Eli Stevens. It's working just fine. It predicts the value of a Boolean variable. I want to convert this so that it predicts the value of a real number variable that happens to be always between 0 and 34.
There are two parts that I don't know how to convert. First, this part:
pos_t = torch.tensor([
not candidateInfo_tup.isNodule_bool,
candidateInfo_tup.isNodule_bool
],
dtype=torch.long,
)
(Why are two values passed in here when one is completely determined by the other?)
and then this part:
self.head_linear = nn.Linear(1152, 2)
self.head_softmax = nn.Softmax(dim=1)
How do I do this?
Guess
I don't want people to think I haven't thought about this at all, so here is my guess:
First part:
age_t = torch.tensor(candidateInfo_tup.age_int, dtype=torch.double)
Second part:
self.head_linear = nn.Linear(299520, 1)
self.head_relu = nn.ReLU()
I'm also guessing that I need to change this:
loss_func = nn.CrossEntropyLoss(reduction='none')
to something like this:
loss_func = nn.L1Loss()
My guesses are based on this article by Christian Versloot.
| The example from the book is working but it has some redundant elements which confuse you.
Normally an output size of 1 is enough for a binary classification problem. To bring it to 0 or 1, one may use sigmoid and then rounding, as in the example here: PyTorch Binary Classification - same network structure, 'simpler' data, but worse performance?
Or just put after single output neuron this:
y_pred_binary = torch.round(torch.sigmoid(y_pred))
The book example uses an output size of 2 and then applies softmax to get to 0 and 1. This works, but such a technique is typically used in multi-class classification.
For prediction of the 0-34 variable:
if these are discrete values, it is called "multiclass classification", as Ken indicated. Use an output of size 35 and softmax in this case. Search for "pytorch multiclass classification" for examples.
if this is a regression, then your changes seem in the right direction, except the 'Second part': instead of ReLU, clip the output at both ends to [0, 34]. Also, 299520 is too large for the previous layer; use whatever input size there was before. Search for "pytorch regression" for examples.
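Putting that together, a minimal sketch (not the book's exact code; keeping the 1152 input size and clamping to [0, 34] are assumptions based on the question):
import torch
import torch.nn as nn

head_linear = nn.Linear(1152, 1)  # single output neuron for regression
loss_func = nn.L1Loss()

def forward_and_loss(features, targets):
    out = head_linear(features).squeeze(1)  # [batch, 1] -> [batch]
    out = torch.clamp(out, 0.0, 34.0)       # clip instead of ReLU/softmax
    return loss_func(out, targets.float())
Note that clamp passes no gradient outside [0, 34], so some people prefer 34 * torch.sigmoid(x) as a smooth alternative.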
| https://stackoverflow.com/questions/72049566/ |
Which method should I use to sample from a normal distribution? | I'm trying to sample batch_size points from an N-dim standard Gaussian distribution. But I noticed there are two similar functions I can use, and I want to know which one is correct, or whether both are.
Assume I want to sample 8 points from a 2-dim standard Gaussian.
torch.distributions.MultivariateNormal(torch.zeros(2), torch.eye(2)).sample([8]), it will return a tensor of size [8,2]
torch.randn(8,2)
They return similar outputs, but I'm wondering whether they're the same.
| torch.randn gives you samples from a univariate standard normal distribution and reshapes them to the desired shape. So the mean of all the samples is 0 with unit variance.
x = torch.randn(1000000,2).numpy()
assert np.isclose(np.mean(x.flatten()), 0, atol=0.01)
plt.hist(x.flatten())
MultivariateNormal generates samples from a multivariate normal distribution. It can be parameterized by a mean vector and a covariance matrix.
x = torch.distributions.MultivariateNormal(
torch.zeros(2), torch.eye(2)).sample([10000]).numpy()
assert np.isclose(np.mean(x.flatten()), 0, atol=0.01)
plt.hist(x.flatten())
The above usage is sort of a hack: we are creating two standard normals in two different dimensions, and since the distributions (mean and variance) are the same, we can treat them as coming from a single distribution; hence the flattened array can be considered to be from a univariate standard normal.
The real purpose of MultivariateNormal:
x = torch.distributions.MultivariateNormal(
torch.tensor([-2.0, 2.0]), torch.eye(2)).sample([10000]).numpy()
plt.hist(x[:, 0])
plt.hist(x[:, 1])
| https://stackoverflow.com/questions/72054349/ |
How to fix "initial_lr not specified when resuming optimizer" error for scheduler? | In PyTorch I have configured SGD like this:
sgd_config = {
'params' : net.parameters(),
'lr' : 1e-7,
'weight_decay' : 5e-4,
'momentum' : 0.9
}
optimizer = SGD(**sgd_config)
My requirements are:
Total epochs are 100
Every 30 epochs learning rate is decreased by a factor of 10
Decreasing learning rate will stop at 60 epochs
So for 100 epochs I will get two times a decrease of 0.1 of my learning rate.
I read about learning rate scheduler, available in torch.optim.lr_scheduler so I decided to try using that instead of manually adjusting the learning rate:
scheduler = lr_scheduler.StepLR(optimizer, step_size=30, last_epoch=60, gamma=0.1)
However I am getting
Traceback (most recent call last):
File "D:\Projects\network\network_full.py", line 370, in <module>
scheduler = lr_scheduler.StepLR(optimizer, step_size=30, last_epoch=90, gamma=0.1)
File "D:\env\test\lib\site-packages\torch\optim\lr_scheduler.py", line 367, in __init__
super(StepLR, self).__init__(optimizer, last_epoch, verbose)
File "D:\env\test\lib\site-packages\torch\optim\lr_scheduler.py", line 39, in __init__
raise KeyError("param 'initial_lr' is not specified "
KeyError: "param 'initial_lr' is not specified in param_groups[0] when resuming an optimizer"
I read a post here and I still don't get how I would use the scheduler for my scenario. Maybe I am just not understanding the definition of last_epoch given that the documentation is very brief on this parameter:
last_epoch (int) – The index of last epoch. Default: -1.
Since the argument is made available to the user and there is no explicit prohibition on using a scheduler for less epochs than the optimizer itself, I am starting to think it's a bug.
| You have misunderstood the last_epoch argument and you are not using the correct learning rate scheduler for your requirements.
This should work:
optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30, 60], gamma=0.1, last_epoch=args.current_epoch - 1)
The last_epoch argument makes sure to use the correct LR when resuming training. It defaults to -1, so the epoch before epoch 0.
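If you are training from scratch rather than resuming, you can leave last_epoch at its default of -1 and the initial_lr error never appears; a minimal sketch (train_one_epoch is a placeholder for your per-batch loop):
scheduler = lr_scheduler.MultiStepLR(optimizer, milestones=[30, 60], gamma=0.1)
for epoch in range(100):
    train_one_epoch()  # forward/backward/optimizer.step() per batch
    scheduler.step()   # lr is multiplied by 0.1 after epochs 30 and 60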
| https://stackoverflow.com/questions/72058575/ |
UserWarning: Using a target size (torch.Size([1])) that is different to the input size (torch.Size([1, 1])) | I have this code:
actual_loes_score_g = actual_loes_score_t.to(self.device, non_blocking=True)
predicted_loes_score_g = self.model(input_g)
loss_func = nn.L1Loss()
loss_g = loss_func(
predicted_loes_score_g,
actual_loes_score_g,
)
where predicted_loes_score_g is tensor([[-24.9374]], grad_fn=<AddmmBackward0>) and actual_loes_score_g is tensor([20.], dtype=torch.float64). (I am using a batch size of 1 for debugging purposes.)
I am getting this warning:
torch/nn/modules/loss.py:96: UserWarning: Using a target size (torch.Size([1])) that is
different to the input size (torch.Size([1, 1])). This will likely lead to incorrect
results due to broadcasting. Please ensure they have the same size.
How do I correctly ensure they have the same size?
I thought this might be the answer:
predicted_loes_score = predicted_loes_score_g.detach()[0]
loss_g = loss_func(
predicted_loes_score,
actual_loes_score_g,
)
but then I get this error later:
torch/autograd/__init__.py", line 154, in backward
Variable._execution_engine.run_backward(
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
| predicted_loes_score_g = tensor([[-24.9374]], grad_fn=<AddmmBackward0>)
which is size [1,1]
actual_loes_score_g = tensor([20.], dtype=torch.float64)
which is size [1]
You need to either remove a dimension from your prediction or add a dimension to your target. I would recommend the latter because that extra dimension corresponds to your batch size. Try this:
actual_loes_score_g = actual_loes_score_g.unsqueeze(1)
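For example:
actual = torch.tensor([20.], dtype=torch.float64)
print(actual.shape)               # torch.Size([1])
print(actual.unsqueeze(1).shape)  # torch.Size([1, 1]), now matching the prediction's shape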
| https://stackoverflow.com/questions/72061934/ |
Overlap two tensors of different size based on an offset in PyTorch | I have the following structure:
torch.Size([channels, width, height])
Let's say I have a tensor a
torch.Size([4, 512, 512])
And tensor b
torch.Size([4, 100, 100])
What I would like to do is to create a tensor c that is the result of "placing" tensor b on an arbitrary (width, height) coordinate offset of tensor a. For example, let's say I would like to place tensor b on (300,100) of tensor a
So for tensor a's width between the 300-400 position, the values on tensor a should be replaced by the 100 values of tensor b width.
For tensor a's height between the 100-200 position, the values of tensor a should be replaced by the 100 values of tensor b height.
I would also like to choose for which channels I want to do this substitution and for which channels I would keep tensor a's value
(PS: The image is just an easy-to-illustrate example, but I would like to do it in a more generalisable way, so I'm not interested in converting to PIL, using PIL.paste, and converting back to a tensor; I would like to do all operations directly with tensors.)
| This function would be fragile without a bunch of pre-conditions to catch size mismatches, but I think this is basically what you're describing:
def place(a: torch.Tensor, b: torch.Tensor,
height: int, width: int,
channels: list[int]) -> torch.Tensor:
"""create a tensor ``c`` that is the result of
"placing" tensor ``b`` on an arbitrary (height, width)
coordinate offset of tensor ``a``,
for only the specified ``channels``
"""
channels_b, height_b, width_b = b.size()
c = a.clone().detach()
for channel in channels:
c[channel,
height:height + height_b,
width:width + width_b] = b[channel]
return c
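A usage sketch with the sizes from the question:
import torch

a = torch.zeros(4, 512, 512)
b = torch.ones(4, 100, 100)
c = place(a, b, height=100, width=300, channels=[0, 2])
print(c[0, 100:200, 300:400].mean())  # tensor(1.) -> replaced by b
print(c[1, 100:200, 300:400].mean())  # tensor(0.) -> channel 1 left untouched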
| https://stackoverflow.com/questions/72069422/ |
torch_optimizer has no attribute 'SGD' | I am trying to import torch_optimizer on Google Colab. I have successfully !pip installed torch_optimizer and then imported it. However, every attribute I call with torch_optimizer gives an attribute error:
AttributeError: module 'torch_optimizer' has no attribute 'SGD'
This holds true for SGD, Adam, etc.
Here is a photo of my code (screenshot: Pytorch Optimizer). Thanks!
| I have checked the documentation of pytorch-optimizer. The vanilla SGD is not there; the library only ships SGD variants such as AccSGD, SGDW, SGDP, etc. You can use the plain PyTorch optimizer torch.optim.SGD instead. Check this visualization script, where the baseline SGD is compared to the other methods implemented by this library. The same holds for Adam as well; it is pretty clear from the script.
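For example (model is assumed to be your nn.Module):
import torch
import torch_optimizer

optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)       # plain SGD from core PyTorch
optimizer = torch_optimizer.SGDW(model.parameters(), lr=1e-2, momentum=0.9)  # an SGD variant from this library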
| https://stackoverflow.com/questions/72070882/ |
pytorch layer input, output shape calculation | Can anyone help me understand: when I use Conv1d and then a Linear layer, what will be the input of the Linear layer? How do I calculate how many input features I should pass in PyTorch?
| In Pytorch, Linear layers operate using only the last dimension of the input tensor: [*features_in] -> [*,features_out].
However, Conv1D layers consider the last 2 dimensions of the input tensor: [batches,channels_in, length_in] -> [batches,channels_out, length_out].
Therefore, if no pre-processing is used, Linear layers will only work with the signals defined for every channel, i.e., [batches,channels_in,features_in] -> [batches,channels_in,features_out]. This behavior is rarely desired, so people usually flatten tensors before passing them to a Linear layer. For example, it's common to use Linear(x.view(n_batches,-1)).
The behavior you need depends on the details of your application; the sketch below shows the shape bookkeeping. Good luck,
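After a Conv1d, L_out = floor((L_in + 2*padding - dilation*(kernel_size - 1) - 1)/stride + 1), and the Linear layer's in_features after flattening is channels_out * L_out. A small worked sketch:
import torch
import torch.nn as nn

x = torch.randn(8, 16, 100)              # [batch, channels_in, length_in]
conv = nn.Conv1d(16, 32, kernel_size=5)  # L_out = 100 - (5 - 1) - 1 + 1 = 96
h = conv(x)                              # [8, 32, 96]
h = h.flatten(1)                         # [8, 32 * 96]
fc = nn.Linear(32 * 96, 10)              # in_features = channels_out * L_out
out = fc(h)                              # [8, 10]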
Sources:
https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html (Conv1d)
https://pytorch.org/docs/stable/generated/torch.nn.Linear.html (Linear)
| https://stackoverflow.com/questions/72078299/ |
RuntimeError: output with shape [1] doesn't match the broadcast shape [10] | Hi, I am trying to write RBM model code using the PyTorch module, but I ran into an issue in the visible-to-hidden layer. Here is the problematic part of the code.
h_bias = (self.h_bias.clone()).expand(10)
v = v.clone().expand(10)
p_h = F.sigmoid(
F.linear(v, self.W, bias=h_bias)
)
sample_h = self.sample_from_p(p_h)
return p_h, sample_h
and each parameter's size and number of dimensions is as follows:
parameter  size                 ndim
h_bias     torch.Size([10])     1
v          torch.Size([10])     1
self.W     torch.Size([1, 10])  2
Traceback (most recent call last):
File "/Users/bahk_insung/Documents/Github/ecg-dbn/model.py", line 68, in <module>
v, v1 = rbm(sample_data)
File "/Users/bahk_insung/miniforge3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/bahk_insung/Documents/Github/ecg-dbn/RBM.py", line 54, in forward
pre_h1, h1 = self.v_to_h(v)
File "/Users/bahk_insung/Documents/Github/ecg-dbn/RBM.py", line 36, in v_to_h
F.linear(v, self.W, bias=h_bias)
File "/Users/bahk_insung/miniforge3/lib/python3.9/site-packages/torch/nn/functional.py", line 1849, in linear
return torch._C._nn.linear(input, weight, bias)
RuntimeError: output with shape [1] doesn't match the broadcast shape [10]
I think the dimensions and sizes are mismatched, which is why this happened, but I can't find a solution. Please help me, guys. Thank you.
| If you look at the pytorch functional.linear documentation it shows the weight parameter can be either 1D or 2D: "Weight: (out_features, in_features) or (in_features)". Since your weight is 2D ([1, 10]) it indicates that you are trying to create an output of size "1" with an input size of "10". The linear transform does not know how to change your inputs of size 10 into an output of size 1. If your weight will always be [1, N] then you can use squeeze to change it to 1D like so:
F.linear(v, self.W.squeeze(), bias=h_bias)
This would create an output of size 10.
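You can verify the resulting shape directly:
import torch
import torch.nn.functional as F

v = torch.randn(10)
W = torch.randn(1, 10)
h_bias = torch.randn(10)
print(F.linear(v, W.squeeze(), bias=h_bias).shape)  # torch.Size([10])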
| https://stackoverflow.com/questions/72081872/ |
How to install fastai on Mac m1 | I am trying to install fastai (version 1.0.61) on my new Mac m1.
I first tried:
pip install fastai==1.0.61
This gave me an error that I didn't have cmake, so I installed cmake successfully with brew install cmake.
Then, rerunning the fastai install command, I get this error:
Collecting fastai==1.0.61
Using cached fastai-1.0.61-py3-none-any.whl (239 kB)
Collecting fastprogress>=0.2.1
Using cached fastprogress-1.0.2-py3-none-any.whl (12 kB)
Collecting numexpr
Using cached numexpr-2.8.1-cp310-cp310-macosx_11_0_arm64.whl
Collecting scipy
Using cached scipy-1.8.0-cp310-cp310-macosx_12_0_arm64.whl (28.7 MB)
Collecting pyyaml
Using cached PyYAML-6.0-cp310-cp310-macosx_11_0_arm64.whl (173 kB)
Collecting bottleneck
Using cached Bottleneck-1.3.4-cp310-cp310-macosx_11_0_arm64.whl
Collecting matplotlib
Using cached matplotlib-3.5.1-cp310-cp310-macosx_11_0_arm64.whl (7.2 MB)
Collecting pynvx>=1.0.0
Using cached pynvx-1.0.0.tar.gz (150 kB)
Preparing metadata (setup.py) ... done
Collecting beautifulsoup4
Using cached beautifulsoup4-4.11.1-py3-none-any.whl (128 kB)
Collecting nvidia-ml-py3
Using cached nvidia_ml_py3-7.352.0-py3-none-any.whl
Requirement already satisfied: requests in /Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages (from fastai==1.0.61) (2.27.1)
Requirement already satisfied: Pillow in /Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages (from fastai==1.0.61) (9.1.0)
Requirement already satisfied: pandas in /Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages (from fastai==1.0.61) (1.4.2)
Collecting packaging
Using cached packaging-21.3-py3-none-any.whl (40 kB)
Requirement already satisfied: torch>=1.0.0 in /Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages (from fastai==1.0.61) (1.11.0)
Requirement already satisfied: torchvision in /Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages (from fastai==1.0.61) (0.12.0)
Requirement already satisfied: numpy>=1.15 in /Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages (from fastai==1.0.61) (1.22.3)
Requirement already satisfied: typing-extensions in /Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages (from torch>=1.0.0->fastai==1.0.61) (4.2.0)
Collecting soupsieve>1.2
Using cached soupsieve-2.3.2.post1-py3-none-any.whl (37 kB)
Collecting pyparsing>=2.2.1
Using cached pyparsing-3.0.8-py3-none-any.whl (98 kB)
Collecting kiwisolver>=1.0.1
Using cached kiwisolver-1.4.2-cp310-cp310-macosx_11_0_arm64.whl (63 kB)
Requirement already satisfied: python-dateutil>=2.7 in /Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages (from matplotlib->fastai==1.0.61) (2.8.2)
Collecting cycler>=0.10
Using cached cycler-0.11.0-py3-none-any.whl (6.4 kB)
Collecting fonttools>=4.22.0
Using cached fonttools-4.33.3-py3-none-any.whl (930 kB)
Requirement already satisfied: pytz>=2020.1 in /Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages (from pandas->fastai==1.0.61) (2022.1)
Requirement already satisfied: certifi>=2017.4.17 in /Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages (from requests->fastai==1.0.61) (2021.10.8)
Requirement already satisfied: idna<4,>=2.5 in /Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages (from requests->fastai==1.0.61) (3.3)
Requirement already satisfied: charset-normalizer~=2.0.0 in /Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages (from requests->fastai==1.0.61) (2.0.12)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages (from requests->fastai==1.0.61) (1.26.9)
Requirement already satisfied: six>=1.5 in /Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages (from python-dateutil>=2.7->matplotlib->fastai==1.0.61) (1.16.0)
Building wheels for collected packages: pynvx
Building wheel for pynvx (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [78 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build/lib.macosx-11.0-arm64-cpython-310
creating build/lib.macosx-11.0-arm64-cpython-310/pynvx
copying pynvx/pynvml.py -> build/lib.macosx-11.0-arm64-cpython-310/pynvx
copying pynvx/__init__.py -> build/lib.macosx-11.0-arm64-cpython-310/pynvx
running build_ext
-- The C compiler identification is AppleClang 13.1.6.13160021
-- The CXX compiler identification is AppleClang 13.1.6.13160021
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found PythonInterp: /Users/connoreaton/miniforge3/envs/spaff_nlp/bin/python3.10 (found version "3.10.4")
-- Found PythonLibs: /Users/connoreaton/miniforge3/envs/spaff_nlp/lib/libpython3.10.dylib
-- Performing Test HAS_CPP14_FLAG
-- Performing Test HAS_CPP14_FLAG - Success
-- pybind11 v2.3.dev0
-- Performing Test HAS_FLTO
-- Performing Test HAS_FLTO - Success
-- LTO enabled
CMake Error at /opt/homebrew/Cellar/cmake/3.23.1/share/cmake/Modules/FindCUDA.cmake:859 (message):
Specify CUDA_TOOLKIT_ROOT_DIR
Call Stack (most recent call first):
CMakeLists.txt:8 (find_package)
-- Configuring incomplete, errors occurred!
See also "/private/var/folders/fn/lgvtrxd90cl3nqryfdfcsld80000gn/T/pip-install-81yvkyvs/pynvx_19ff0317a3854976bb7ea683521d385f/build/temp.macosx-11.0-arm64-cpython-310/CMakeFiles/CMakeOutput.log".
See also "/private/var/folders/fn/lgvtrxd90cl3nqryfdfcsld80000gn/T/pip-install-81yvkyvs/pynvx_19ff0317a3854976bb7ea683521d385f/build/temp.macosx-11.0-arm64-cpython-310/CMakeFiles/CMakeError.log".
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/private/var/folders/fn/lgvtrxd90cl3nqryfdfcsld80000gn/T/pip-install-81yvkyvs/pynvx_19ff0317a3854976bb7ea683521d385f/setup.py", line 68, in <module>
setup(
File "/Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages/setuptools/__init__.py", line 87, in setup
return distutils.core.setup(**attrs)
File "/Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 148, in setup
return run_commands(dist)
File "/Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 163, in run_commands
dist.run_commands()
File "/Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 967, in run_commands
self.run_command(cmd)
File "/Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages/setuptools/dist.py", line 1214, in run_command
super().run_command(command)
File "/Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages/wheel/bdist_wheel.py", line 299, in run
self.run_command('build')
File "/Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages/setuptools/_distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages/setuptools/dist.py", line 1214, in run_command
super().run_command(command)
File "/Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages/setuptools/_distutils/command/build.py", line 136, in run
self.run_command(cmd_name)
File "/Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages/setuptools/_distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages/setuptools/dist.py", line 1214, in run_command
super().run_command(command)
File "/Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/private/var/folders/fn/lgvtrxd90cl3nqryfdfcsld80000gn/T/pip-install-81yvkyvs/pynvx_19ff0317a3854976bb7ea683521d385f/setup.py", line 32, in run
self.build_extension(ext)
File "/private/var/folders/fn/lgvtrxd90cl3nqryfdfcsld80000gn/T/pip-install-81yvkyvs/pynvx_19ff0317a3854976bb7ea683521d385f/setup.py", line 56, in build_extension
subprocess.check_call(['cmake', ext.sourcedir] + cmake_args, cwd=self.build_temp, env=env)
File "/Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/subprocess.py", line 369, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '/private/var/folders/fn/lgvtrxd90cl3nqryfdfcsld80000gn/T/pip-install-81yvkyvs/pynvx_19ff0317a3854976bb7ea683521d385f', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY=/private/var/folders/fn/lgvtrxd90cl3nqryfdfcsld80000gn/T/pip-install-81yvkyvs/pynvx_19ff0317a3854976bb7ea683521d385f/build/lib.macosx-11.0-arm64-cpython-310', '-DPYTHON_EXECUTABLE=/Users/connoreaton/miniforge3/envs/spaff_nlp/bin/python3.10', '-DCMAKE_BUILD_TYPE=Release']' returned non-zero exit status 1.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for pynvx
Running setup.py clean for pynvx
Failed to build pynvx
Installing collected packages: pynvx, nvidia-ml-py3, soupsieve, scipy, pyyaml, pyparsing, kiwisolver, fonttools, fastprogress, cycler, bottleneck, packaging, beautifulsoup4, numexpr, matplotlib, fastai
Running setup.py install for pynvx ... error
error: subprocess-exited-with-error
× Running setup.py install for pynvx did not run successfully.
│ exit code: 1
╰─> [82 lines of output]
running install
/Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running build
running build_py
creating build
creating build/lib.macosx-11.0-arm64-cpython-310
creating build/lib.macosx-11.0-arm64-cpython-310/pynvx
copying pynvx/pynvml.py -> build/lib.macosx-11.0-arm64-cpython-310/pynvx
copying pynvx/__init__.py -> build/lib.macosx-11.0-arm64-cpython-310/pynvx
running build_ext
-- The C compiler identification is AppleClang 13.1.6.13160021
-- The CXX compiler identification is AppleClang 13.1.6.13160021
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found PythonInterp: /Users/connoreaton/miniforge3/envs/spaff_nlp/bin/python3.10 (found version "3.10.4")
-- Found PythonLibs: /Users/connoreaton/miniforge3/envs/spaff_nlp/lib/libpython3.10.dylib
-- Performing Test HAS_CPP14_FLAG
-- Performing Test HAS_CPP14_FLAG - Success
-- pybind11 v2.3.dev0
-- Performing Test HAS_FLTO
-- Performing Test HAS_FLTO - Success
-- LTO enabled
CMake Error at /opt/homebrew/Cellar/cmake/3.23.1/share/cmake/Modules/FindCUDA.cmake:859 (message):
Specify CUDA_TOOLKIT_ROOT_DIR
Call Stack (most recent call first):
CMakeLists.txt:8 (find_package)
-- Configuring incomplete, errors occurred!
See also "/private/var/folders/fn/lgvtrxd90cl3nqryfdfcsld80000gn/T/pip-install-81yvkyvs/pynvx_19ff0317a3854976bb7ea683521d385f/build/temp.macosx-11.0-arm64-cpython-310/CMakeFiles/CMakeOutput.log".
See also "/private/var/folders/fn/lgvtrxd90cl3nqryfdfcsld80000gn/T/pip-install-81yvkyvs/pynvx_19ff0317a3854976bb7ea683521d385f/build/temp.macosx-11.0-arm64-cpython-310/CMakeFiles/CMakeError.log".
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/private/var/folders/fn/lgvtrxd90cl3nqryfdfcsld80000gn/T/pip-install-81yvkyvs/pynvx_19ff0317a3854976bb7ea683521d385f/setup.py", line 68, in <module>
setup(
File "/Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages/setuptools/__init__.py", line 87, in setup
return distutils.core.setup(**attrs)
File "/Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 148, in setup
return run_commands(dist)
File "/Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 163, in run_commands
dist.run_commands()
File "/Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 967, in run_commands
self.run_command(cmd)
File "/Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages/setuptools/dist.py", line 1214, in run_command
super().run_command(command)
File "/Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages/setuptools/command/install.py", line 68, in run
return orig.install.run(self)
File "/Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages/setuptools/_distutils/command/install.py", line 670, in run
self.run_command('build')
File "/Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages/setuptools/_distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages/setuptools/dist.py", line 1214, in run_command
super().run_command(command)
File "/Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages/setuptools/_distutils/command/build.py", line 136, in run
self.run_command(cmd_name)
File "/Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages/setuptools/_distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages/setuptools/dist.py", line 1214, in run_command
super().run_command(command)
File "/Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/private/var/folders/fn/lgvtrxd90cl3nqryfdfcsld80000gn/T/pip-install-81yvkyvs/pynvx_19ff0317a3854976bb7ea683521d385f/setup.py", line 32, in run
self.build_extension(ext)
File "/private/var/folders/fn/lgvtrxd90cl3nqryfdfcsld80000gn/T/pip-install-81yvkyvs/pynvx_19ff0317a3854976bb7ea683521d385f/setup.py", line 56, in build_extension
subprocess.check_call(['cmake', ext.sourcedir] + cmake_args, cwd=self.build_temp, env=env)
File "/Users/connoreaton/miniforge3/envs/spaff_nlp/lib/python3.10/subprocess.py", line 369, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '/private/var/folders/fn/lgvtrxd90cl3nqryfdfcsld80000gn/T/pip-install-81yvkyvs/pynvx_19ff0317a3854976bb7ea683521d385f', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY=/private/var/folders/fn/lgvtrxd90cl3nqryfdfcsld80000gn/T/pip-install-81yvkyvs/pynvx_19ff0317a3854976bb7ea683521d385f/build/lib.macosx-11.0-arm64-cpython-310', '-DPYTHON_EXECUTABLE=/Users/connoreaton/miniforge3/envs/spaff_nlp/bin/python3.10', '-DCMAKE_BUILD_TYPE=Release']' returned non-zero exit status 1.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> pynvx
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
I am clearly unable to install pynvx. This makes sense because pynvx deals with CUDA, which apparently Mac M1 cannot support. I tried the CPU only build install documentation from fastai here:
https://fastai1.fast.ai/install.html
But I get the same error.
What can I do? If I can't train models locally that's okay, but I want to at least be able to load models and predict with them.
| fastai 1.0.61 pulls in pynvx, which needs the CUDA toolkit to build. CUDA is available only on Nvidia GPUs, and the Mac M1 is a completely different SoC with no Nvidia GPU.
You can read the actual error in the build log:
CMake Error at /opt/homebrew/Cellar/cmake/3.23.1/share/cmake/Modules/FindCUDA.cmake:859
| https://stackoverflow.com/questions/72088567/ |
Access all batch outputs at the end of epoch in callback with pytorch lightning | The documentation for the on_train_epoch_end, https://pytorch-lightning.readthedocs.io/en/stable/extensions/callbacks.html#on-train-epoch-end, states:
To access all batch outputs at the end of the epoch, either:
Implement training_epoch_end in the LightningModule and access outputs via the module OR
Cache data across train batch hooks inside the callback implementation to post-process in this hook.
I am trying to use the first alternative with the following LightningModule and Callback setup:
import pytorch_lightning as pl
from pytorch_lightning import Callback
class LightningModule(pl.LightningModule):
def __init__(self, *args):
super().__init__()
self.automatic_optimization = False
def training_step(self, batch, batch_idx):
return {'batch': batch}
def training_epoch_end(self, training_step_outputs):
# training_step_outputs has all my batches
return
class MyCallback(Callback):
def on_train_epoch_end(self, trainer, pl_module):
# pl_module.batch ???
return
How do I access the outputs via the pl_module in the callback? What is the recommended way of getting access to training_step_outputs in my callback?
| You can store the outputs of each training batch in a state and access it at the end of the training epoch. Here is an example -
from pytorch_lightning import Callback
class MyCallback(Callback):
def __init__(self):
super().__init__()
self.state = []
def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, unused=0):
self.state.append(outputs)
def on_train_epoch_end(self, trainer, pl_module):
# access output using state
all_outputs = self.state
Hope this helps you!
| https://stackoverflow.com/questions/72134203/ |
Improve Python (.exe) startup time | I created an exe with PyInstaller. As soon as I enable the --onefile flag, the exe needs several minutes to start. When I build the application with the --onedir flag, the exe launches immediately. In order to distribute the application better, it is important for me that the exe is created with the --onefile flag.
My problem can be reproduced with the following two scripts.
In main.py I only import the torch module, because the problems only occur with this module.
#main.py
print("Test1")
import torch
print("Test2")
print(torch.cuda.is_available())
print("Test3")
I generate the exe with the setup.py file.
#setup.py
import PyInstaller.__main__
import pyinstaller_versionfile
PyInstaller.__main__.run([
'main.py',
'--onefile',
'--name=Object Detection',
'--console',
])
Alternatively, the exe can also be generated using the Powershell command.
pyinstaller --noconfirm --onefile --console --name "Object Detection"
So my question is what can be done to make the application start faster? What is the specific cause of the problem?
By excluding the module --exclude-module=torch the application also starts immediately,
but that's not my goal.
| PyInstaller's --onefile mode has to unpack all libraries into a temporary directory before starting; with --onedir they are already unpacked on disk. The slowdown is noticeable with big libraries like PyTorch.
| https://stackoverflow.com/questions/72143729/ |
PyTorch distributed dataLoader | Any recommended ways to make PyTorch DataLoader (torch.utils.data.DataLoader) work in distributed environment, single machine and multiple machines? Can it be done without DistributedDataParallel?
| Maybe you need to make your question clearer. DistributedDataParallel is abbreviated as DDP; it is what you use to train a model in a distributed environment. This question seems to ask how to arrange the data loading process for distributed training.
First of all,
torch.utils.data.DataLoader is suitable for both distributed and non-distributed training; usually there is nothing special to do about it.
But the sampling strategy varies between these two modes: you need to specify a sampler for the dataloader (the sampler argument of DataLoader), and adopting torch.utils.data.distributed.DistributedSampler is the simplest way.
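A minimal sketch, assuming the process group is already initialized (e.g. via torch.distributed.init_process_group), dataset is a map-style Dataset, and num_epochs is a placeholder:
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

sampler = DistributedSampler(dataset, shuffle=True)  # rank/world size come from the process group
loader = DataLoader(dataset, batch_size=32, sampler=sampler)

for epoch in range(num_epochs):
    sampler.set_epoch(epoch)  # gives each epoch a different shuffle across ranks
    for batch in loader:
        ...                   # forward/backward as usual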
| https://stackoverflow.com/questions/72154443/ |
torch.nn.BCELoss() and torch.nn.functional.binary_cross_entropy | What is the basic difference between these two loss functions? I have already tried using both of them.
| The difference is that nn.BCELoss and F.binary_cross_entropy are two PyTorch interfaces to the same operation.
The former, torch.nn.BCELoss, is a class and inherits from nn.Module which makes it handy to be used in a two-step fashion, as you would always do in OOP (Object Oriented Programming): initialize then use. Initialization handles parameters and attributes initialization as the name implies which is quite useful when using stateful operators such as parametrized layers and the kind. This is the way to go when implementing classes of your own, for example:
class Trainer():
    def __init__(self, model):
        self.model = model
        self.loss = nn.BCELoss()

    def __call__(self, x, y):
        y_hat = self.model(x)
        loss = self.loss(y_hat, y)
        return loss
On the other hand, the latter, torch.nn.functional.binary_cross_entropy, is the functional interface. It is actually the underlying operator used by nn.BCELoss, as you can see at this line. You can use this interface, but it can become cumbersome when using stateful operators. In this particular case, the binary cross-entropy loss does not have parameters (in the most general case), so you could do:
class Trainer():
    def __init__(self, model):
        self.model = model

    def __call__(self, x, y):
        y_hat = self.model(x)
        loss = F.binary_cross_entropy(y_hat, y)
        return loss
| https://stackoverflow.com/questions/72167344/ |
Which nvidia drivers version do I need? | I'm running on Ubuntu 20.04.4 LTS
I'm going to work with cuda 11.3 and torch 1.11 python 3.8
Which nvidia driver (version) do I need to install ?
How can I do it ?
| Download and install the latest version and you will be ok:
https://www.nvidia.com/download/index.aspx
| https://stackoverflow.com/questions/72181403/ |
my code couldn't find an instance attribute that was created during initialization, using PyTorch | I implemented a dataset class to use with my model, and when I started training I got this error:
Traceback (most recent call last):
File "model.py", line 146, in <module>
train = Train()
File "model.py", line 70, in __init__
self.dataset.get_label()
File "model.py", line 61, in get_label
return self.label
AttributeError: 'MaskDataset' object has no attribute 'label'
The code below raises the error, but I don't know why it's a problem. I checked 'self.imgs' and 'self.label' using print(self.imgs) and print(self.label), and they looked perfect.
So, I mean, I don't know why the Python interpreter can't find an attribute that was created during initialization.
class MaskDataset(object):
def __init__(self, transforms,path):
self.data = data.Data()
self.transform = transforms
self.path = path
if 'Validation' in self.path :
self.img_path = "/home/ubuntu/lecttue-diagonosis/YangDongJae/ai/data/Validation/images/"
self.lab_path = "/home/ubuntu/lecttue-diagonosis/YangDongJae/ai/data/Validation/annotations/"
self.label = list(sorted(os.listdir(self.lab_path)))
self.imgs = list(sorted(os.listdir(self.img_path)))
elif 'train' in self.path:
self.img_path = "/home/ubuntu/lecttue-diagonosis/YangDongJae/ai/data/Training/images/"
self.lab_path = "/home/ubuntu/lecttue-diagonosis/YangDongJae/ai/data/Training/annotations/"
self.label = list(sorted(os.listdir(self.lab_path)))
self.imgs = list(sorted(os.listdir(self.img_path)))
def __getitem__(self,idx):
file_image = self.imgs[idx]
file_label = self.label[idx]
img_path = self.img_path+file_image
label_path = self.lab_path + file_label
img = Image.open(img_path).convert("RGB")
target = self.data.generate_target(label_path)
if self.transform is not None:
img = self.transform(img)
return img, target
class Train(MaskDataset):
def __init__(self,epochs = 100, lr = 0.005, momentum = 0.9, weight_decay = 0.0005):
self.data_transform = transforms.Compose([ # transforms.Compose: chains the listed transforms so they run in sequence
transforms.ToTensor() # ToTensor: converts a numpy image to a torch image
])
self.dataset = MaskDataset(self.data_transform,'/home/ubuntu/lecttue-diagonosis/YangDongJae/ai/data/Training/')
self.val_dataset = MaskDataset(self.data_transform, '/home/ubuntu/lecttue-diagonosis/YangDongJae/ai/data/Validation/')
self.data_loader = torch.utils.data.DataLoader(self.dataset, batch_size = 10, collate_fn = self.collate_fn)
self.val_data_loader = torch.utils.data.DataLoader(self.val_dataset, batch_size = 10,collate_fn = self.collate_fn)
self.num_classes = 8
self.epochs = epochs
self.momentum = momentum
self.lr = 0.005
self.weight_decay = weight_decay
| This is happening because the elif condition is not True during the self.dataset object creation. Note that self.path contains the sub-string Train starting with an uppercase T, while the elif compares it against lower-case train, which evaluates to False. This can be fixed by changing the elif as:
elif 'train'.lower() in self.path.lower():
    self.img_path = "/home/ubuntu/lecttue-diagonosis/YangDongJae/ai/data/Training/images/"
    self.lab_path = "/home/ubuntu/lecttue-diagonosis/YangDongJae/ai/data/Training/annotations/"
    self.label = list(sorted(os.listdir(self.lab_path)))
    self.imgs = list(sorted(os.listdir(self.img_path)))
You may also change the if statement for validation case similarly.
| https://stackoverflow.com/questions/72181975/ |
Databricks notebook hanging with pytorch | We have a Databricks notebooks issue. One of our notebook cells seems to be hanging, while the driver logs do show that the notebook cell has been executed. Does anyone know why our notebook cell keeps hanging, and does not complete? See below the details.
Situation
We are training a ML model with pytorch in the Databricks notebook UI
The training uses mlflow to register a model
At the end of the cell we print a statement "Done with training"
We are using a single node cluster with
Databricks Runtime: 10.4 LTS ML (includes Apache Spark 3.2.1, GPU, Scala 2.12)
Node type: Standard_NC6s_v3
Observations
In the Databricks notebook UI we see the cell running pytorch training and showing the intermediate logs of the training
After a while, the model is registered in mlflow, but we don't see this log in the Databricks notebook UI
We can also see the print statement "Done with training" in the driver logs. We don't see this statement in the Databricks notebook UI
Code
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks.early_stopping import EarlyStopping
trainer = Trainer(gpus=-1, num_sanity_val_steps=0, logger = logger, callbacks=[EarlyStopping(monitor="test_loss", patience = 2, mode = "min", verbose=True)])
with mlflow.start_run(experiment_id = experiment_id) as run:
trainer.fit(model, train_loader, val_loader)
mlflow.log_param("param1", param1)
mlflow.log_param("param2", param2)
mlflow.pytorch.log_model(model._image_model, artifact_path="model", registered_model_name="image_model")
mlflow.pytorch.log_state_dict(model._image_model.state_dict(), "model")
print("Done with training")
Packages
mlflow-skinny==1.25.1
torch==1.10.2+cu111
torchvision==0.11.3+cu111
Solutions that I tried that did not work
Tried adding cache deletion, but that did not work
# Cleaning up to avoid any open processes...
del trainer
torch.cuda.empty_cache()
# force garbage collection
gc.collect()
Tried forcing exiting the notebook, but also did not work
parameters = json.dumps({"Status": "SUCCESS", "Message": "DONE"})
dbutils.notebook.exit(parameters)
| I figured out the issue. To solve this, adjust the parameters for the torch.utils.data.DataLoader
Disable pin_memory
Set num_workers to 30% of total vCPU (e.g. 1 or 2 for Standard_NC6s_v3)
For example:
train_loader = DataLoader(
train_dataset,
batch_size=32,
num_workers=1,
pin_memory=False,
shuffle=True,
)
This issue seems to be related to PyTorch and is due to a deadlock issue. See the details here.
| https://stackoverflow.com/questions/72183733/ |
Normalization required for pre-trained model (PyTorch) | I am using a pre-trained model from pytorch:
model = models.resnet50(pretrained=True).to(device)
for param in model.parameters():
param.requires_grad = False
model.fc = Identity()
Should I normalize the data using my data mean and std or use the values used by the model creators?
class customDataset(torch.utils.data.Dataset):
'Characterizes a dataset for PyTorch'
def __init__(self, X, y):
'Initialization'
self.normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
self.X = X
self.y = torch.tensor(y, dtype=torch.float32)
def __len__(self):
return len(self.X)
def __getitem__(self, idx):
X = self.X[idx]
X = ToTensor()(X).type(torch.float32)[:3,:]
X = self.normalize(X)
return X, self.y[idx]
| You must use the normalization mean and std that was used during training. Based on the training data normalization, the model was optimized. In order for the model to work as expected the same data distribution has to be used.
If you train a model from scratch, you can use your dataset specific normalization parameters.
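For that case, a minimal sketch for computing per-channel statistics over your own data (dataset is assumed to yield (image, label) pairs of 3-channel float tensors):
import torch
from torch.utils.data import DataLoader

loader = DataLoader(dataset, batch_size=64)
channel_sum = torch.zeros(3)
channel_sq = torch.zeros(3)
n_pixels = 0
for X, _ in loader:
    channel_sum += X.sum(dim=[0, 2, 3])        # sum over batch, height, width
    channel_sq += (X ** 2).sum(dim=[0, 2, 3])
    n_pixels += X.numel() / X.size(1)          # pixels per channel
mean = channel_sum / n_pixels
std = (channel_sq / n_pixels - mean ** 2).sqrt()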
| https://stackoverflow.com/questions/72184771/ |
Exception when converting Unet from pytorch to onnx | I'm trying to convert a Unet model from PyTorch to ONNX.
Running the following code:
import torch
from unets import Unet, thin_setup
net = Unet(in_features=3, down=[16, 32, 64, 64, 64], up=[64, 64, 64, 128 + 1],
setup={**thin_setup, 'bias': True, 'padding': True})
net.eval()
inputs = torch.randn((1, 3, 768, 768))
outputs = net(inputs)
torch.onnx.export(net, inputs, "unet.onnx", opset_version=12)
a RuntimeError: Unsupported: ONNX export of instance_norm for unknown channel size. exception is raised.
How can I solve it?
remark: I suspect that this is due to a node of an upsample layer that has no output shape:
%196 : Float(*, *, *, *, strides=[589824, 9216, 96, 1], requires_grad=1, device=cpu) = onnx::Resize[coordinate_transformation_mode="pytorch_half_pixel", cubic_coeff_a=-0.75, mode="linear", nearest_mode="floor"](%169, %194, %195, %193) # ~/miniconda/envs/my_env/lib/python3.7/site-packages/torch/nn/functional.py:3709:0
environment: python 3.7 / torch 1.9.1+cu102 / onnx 1.10.2
| The problem is due to ONNX not having an implementation of the PyTorch 2D Instane Normalization layer.
The solution was to copy the relevant UNet code and implement the layer myself:
class InstanceNormAlternative(nn.InstanceNorm2d):
    def forward(self, inp: Tensor) -> Tensor:
        self._check_input_dim(inp)
        desc = 1 / (inp.var(axis=[2, 3], keepdim=True, unbiased=False) + self.eps) ** 0.5
        retval = (inp - inp.mean(axis=[2, 3], keepdim=True)) * desc
        return retval
Make sure to use unbiased variance if you wish to be as similar as possible to PyTorch.
NOTE: CoreML tools cannot convert the variance operator from PyTorch to CoreML. Make sure to use PyTorch's nn.InstanceNorm2d layer (and not the above alternative) when converting to CoreML.
FREE TIP: If converting PyTorch UNets to TF, you are also going to encounter the following error:
RuntimeError: Resize coordinate_transformation_mode=pytorch_half_pixel is not supported in Tensorflow
The remedy is to change the interpolation parameters in TrivialUpsample.forward to align_corners=True. In my experience, the effect of the change on the network output was minor.
This answer was written with the help of Michał Tyszkiewicz.
| https://stackoverflow.com/questions/72187686/ |
Extracting feature maps from ResNet | TLDR - what is considered best practice when extracting feature maps from ResNet?
I'm trying to feed the entire CIFAR10 dataset through ResNet18, to extract a new dataset that consists of some non-output activation of every sample in CIFAR10. I have implemented a code that generates this dataset, but the running time takes too long (exceeds Google Colab free RAM access, which is quite some RAM). The code I've implemented is based on a blog post called Intermediate Activations — the forward hook.
activation = {}
def get_activation(name):
"""
when given as input to register_forward_hook, this function is implicitly called when model.forward() is performed
and saves the output of layer 'name' in the dictionary described above.
:param name:
:return:
"""
def hook(model, input, output):
activation[name] = output.detach()
return hook
the get_activation helper function is used inside the activation_maps function which takes the feature map provided from the 4th layer, 2nd BasicBlock, conv1 layer (batch-size, 3,224,224) -> (batch-size,512,7,7) of ResNet18
(PS - this layer was arbitrarily chosen - is there a known layer from which the activations are better?)
ResNet18 = torch.hub.load('pytorch/vision:v0.10.0', 'resnet18', pretrained=True)
def activation_maps(name='conv1'):
"""
This function takes a batch and returns some non - last activation alongside the true labels
:return: train_activations_and_true_labels: array of tuples (Activation,True_labels) as train data
"""
non_output_activation_map = ResNet18.layer4[1].register_forward_hook(get_activation(name))
# now we create a list of activations and true labels for every sample.
# This means that if we looped over (X,y) in a dataloader, we can now loop (activation,y) which is
# an element in the arrays below, like a regular dataloader.
train_activations_and_true_labels = []
for i, (X_train, y_train) in enumerate(train_dataloader):
out = ResNet18(X_train)
train_activations_and_true_labels.append((activation[name], y_train))
print(f"Training data [{i}/{len(train_dataloader)}]", end='\r')
non_output_activation_map.remove() # detaching hooks
return train_activations_and_true_labels
Now, this code runs, but it exceeds the memory capacity of my PyCharm/Google Colab session. Am I missing something? What is the best approach when extracting feature maps?
| What batch size are you using, and how much RAM do you have available? Resnet is a somewhat large model, and the layer you're extracting is quite large as well so storing all that in memory might be causing issues.
Try reducing your batch size, or storing intermediary results to disk and clearing them from memory.
You might also consider turning off the gradient computation when calling the ResNet18 model, this would save a good bit of memory. Putting the @torch.no_grad() decorator on activation_maps(name='conv1') might work.
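A sketch of those suggestions combined; disabling autograd and moving activations to the CPU are the key points (this reuses the question's get_activation hook):
@torch.no_grad()
def activation_maps(name='conv1'):
    hook = ResNet18.layer4[1].register_forward_hook(get_activation(name))
    outputs = []
    for X_train, y_train in train_dataloader:
        ResNet18(X_train)                                  # forward pass fires the hook
        outputs.append((activation[name].cpu(), y_train))  # keep results off the GPU
    hook.remove()
    return outputs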
| https://stackoverflow.com/questions/72189867/ |
I was struggling to build an LSTM model; during the model evaluation stage, a bug shows tensors on different devices. Can anybody help me out? |
for the model building stage
import torch
from torch import nn
import torch.nn.functional as F
from torch import utils
class LSTM(nn.Module):
def __init__(self, vocab_size, embed_dim):
super().__init__()
self.embedding = nn.EmbeddingBag(vocab_size, embed_dim) # embedding layer
self.lstm = nn.LSTM(embed_dim,hidden_dim, vocab_size,n_class,bidirectional=True, batch_first=True, dropout=0.3)
self.fc1 = nn.Linear(hidden_dim * n_class,hidden_dim)
def forward(self, text):
text_embeded = self.embedding(text)
output, (h_n, c_n) = self.lstm(text_embeded)
out = torch.cat([h_n[-1, :, :], h_n[-2, :, :]], dim=-1)
out_fc1 = self.fc1(output)
out_fc1_relu = F.relu(out_fc1)
return out_fc1_relu
# model initalization
# hyperparameter
embed_dim = 32
hidden_dim = 16
n_class = 5
model = LSTM(vocab_size, embed_dim)
loss_fn = torch.nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
for the evaluation stage
def device():
if torch.cuda.is_available():
return torch.device('cuda')
else:
return torch.device('cpu')
from sklearn.metrics import accuracy_score
loss_list_train= []
loss_list_test = []
epochs = 30
for epoch in range(epochs):
loss_now = 0.0
correct = 0
for sentence, targets in train_loader:
sentence = sentence.to(device())
targets = targets.to(device())
temp_batch_size = sentence.shape[0]
model.train()
optimizer.zero_grad()
pred = model(sentence)
# loss = loss_function(pred.view(-1, pred.shape[-1]), targets.view(-1))
loss = F.nll_loss(pred, targets)
loss.backward()
optimizer.step()
loss_now += loss.item() * temp_batch_size
predicted = torch.argmax(pred,-1)
correct += accuracy_score(predicted.view(-1).cpu().numpy(), targets.view(-1).cpu().numpy()) * temp_batch_size
model.eval()
train_loss = loss_now/len(train_data)
outputs = model(torch.from_numpy(sentence_padding_train).to(device())) # training set raw outputs
predicted = torch.argmax(outputs, 1) # training set predictions
f1 = f1_score(label_train, predicted.cpu().numpy().reshape(-1), average='weighted')
# training set f1-score
outputs_test = model(torch.from_numpy(sentence_padding_val). to(device())) # testing set raw outputs
loss_test = loss_function(outputs_test, torch.from_numpy(label_val).view(-1).to(device())) # testing set average loss
predicted_test = torch.argmax(outputs_test, 1) # testing set predictions
f1_test = f1_score(label_val, predicted_test.cpu().numpy(), average='weighted')
# testing set f1-score
loss_list_train.append(train_loss)
loss_list_test.append(loss_test.item())
print('Epoch {}: {:.4f} (train loss), {:.4f} (val loss), {:.4f} (train f1-score), {:.4f} (test f1-score), {:.4f}'.format(epoch + 1, train_loss, loss_test.item,f1,f1_test))
The bug is shown below; it says "Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument indices in method wrapper___embedding_bag)". I've already defined the device, so how can I solve this problem?
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-99-2fb3a5047897> in <module>()
22 model.train()
23 optimizer.zero_grad()
---> 24 pred = model(sentence)
25 # loss = loss_function(pred.view(-1, pred.shape[-1]), targets.view(-1))
26 loss = F.nll_loss(pred, targets)
4 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
<ipython-input-94-7b5f176858ed> in forward(self, text)
14 def forward(self, text):
15
---> 16 text_embeded = self.embedding(text)
17
18 output, (h_n, c_n) = self.lstm(text_embeded)
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/sparse.py in forward(self, input, offsets, per_sample_weights)
385 self.scale_grad_by_freq, self.mode, self.sparse,
386 per_sample_weights, self.include_last_offset,
--> 387 self.padding_idx)
388
389 def extra_repr(self) -> str:
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in embedding_bag(input, weight, offsets, max_norm, norm_type, scale_grad_by_freq, mode, sparse, per_sample_weights, include_last_offset, padding_idx)
2363
2364 ret, _, _, _ = torch.embedding_bag(
-> 2365 weight, input, offsets, scale_grad_by_freq, mode_enum, sparse, per_sample_weights, include_last_offset, padding_idx
2366 )
2367 return ret
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument indices in method wrapper___embedding_bag)
| The error indicates that two tensors are on different devices, which is exactly what the message states.
Since you'd moved your input tensor to your GPU via sentence = sentence.to(device()) but not your model, the model's parameters are on the CPU, which causes this error. Just add model.to(device()) before your training procedure starts.
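For example, a minimal sketch reusing the question's device() helper:
model = LSTM(vocab_size, embed_dim)
model.to(device())  # move the embedding, LSTM and linear parameters onto the GPU if one is available
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)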
| https://stackoverflow.com/questions/72199637/ |
How to access the save results of yolov5 in different folder? | I am using the below code to load the trained custom Yolov5 model and perform detections.
import cv2
import torch
from PIL import Image
model = torch.hub.load('ultralytics/yolov5', 'custom',
path='yolov5/runs/train/exp4/weights/best.pt', force_reload=True)
img = cv2.imread('example.jpeg')[:, :, ::-1] # OpenCV image (BGR to RGB)
results = model(img, size=416)
#To display and save results I am using:
results.print()
results.save()
results.show()
My question is how I can save the results in a different directory so that I can use them in my web-based application. For your reference, I am using Streamlit. For instance, at the moment, results (images) are being saved in runs\detect\exp*. I want to change it. Can anyone please guide me?
| You can make changes in the function definition of results.save(); the function can be found in the file yolov5/models/common.py. By default the definition is:
def save(self, labels=True, save_dir='runs/detect/exp'):
save_dir = increment_path(save_dir, exist_ok=save_dir != 'runs/detect/exp', mkdir=True) # increment save_dir
self.display(save=True, labels=labels, save_dir=save_dir) # save results
Change the save_dir argument to the desired save location and the files will be saved in the new directory.
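Alternatively, since save_dir is a parameter of the function shown above, you can pass the desired location at call time instead of editing the source (the directory name here is just an example):
results.save(save_dir='static/detections')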
| https://stackoverflow.com/questions/72207081/ |
PyTorch - FineTuning bert - Oscillating loss - Very bad accuracy | I have been trying to train a model on vulnerability detection through source code. And, after a little bit of searching, I thought a very good starting point could be using a pre-trained transformer model from HuggingFace with PyTorch and PyTorch Lightning. I chose DistilBert because it was the fastest one.
I have an imbalanced dataset, approximately 70% non-vulnerable and 30% vulnerable functions.
However, my results have been very bad. The model does not seem to learn and generalize. Specifically, during training the train loss is heavily oscillating, accuracy is around 70 percent and recall is extremely low (implying that the model always predicts one label).
I was wondering if there is anything that might be obviously problematic with my code. This is the first time I am using a pre-trained model and PyTorch Lightning, and I cannot really tell what might have gone wrong.
class Model(pl.LightningModule):
def __init__(self, n_classes, n_training_steps, n_warmup_steps, lr, fine_tune=False):
super().__init__()
self.save_hyperparameters()
self.bert = DistilBert.from_pretrained(BERT_MODEL_NAME, return_dict=True)
for name, param in self.bert.named_parameters():
param.requires_grad = False
self.classifier = nn.Linear(self.bert.config.hidden_size, self.hparams.n_classes)
self.criterion = nn.BCELoss()
def finetune(self):
self.fine_tune = True
for name, param in self.bert.named_parameters():
if 'layer.5' in name:
param.requires_grad = True
def forward(self, input_ids, attention_mask, labels=None):
x = self.bert(input_ids, attention_mask=attention_mask)
x = x.last_hidden_state[:,0,:]
x = self.classifier(x)
x = torch.sigmoid(x)
x = x.squeeze(dim=-1)
loss = 0
if labels is not None:
loss = self.criterion(x, labels.float())
return loss, x
def training_step(self, batch, batch_idx):
enc, labels = batch
input_ids, attention_mask = enc
loss, outputs = self.forward(input_ids, attention_mask, labels)
self.log("train_loss", loss, prog_bar=True, logger=True)
return {'loss': loss, 'predictions': outputs, 'labels': labels}
def validation_step(self, batch, batch_idx):
enc, labels = batch
input_ids, attention_mask = enc
loss, outputs = self.forward(input_ids, attention_mask, labels)
r = recall(outputs[:], labels[:])
self.log("val_loss", loss, prog_bar=True, logger=True)
self.log("val_recall", r, prog_bar=True, logger=True)
return {'loss': loss, 'predictions': outputs, 'labels': labels}
def test_step(self, batch, batch_idx):
enc, labels = batch
input_ids, attention_mask = enc
loss, outputs = self.forward(input_ids, attention_mask, labels)
self.log("test_loss", loss, prog_bar=True, logger=True)
return {'loss': loss, 'predictions': outputs, 'labels': labels}
def training_epoch_end(self, outputs):
labels = []
predictions = []
for o in outputs:
for o_labels in o['labels'].detach().cpu():
labels.append(o_labels)
for o_preds in o['predictions'].detach().cpu():
predictions.append(o_preds)
labels = torch.stack(labels).int()
predictions = torch.stack(predictions)
class_recall = recall(predictions[:], labels[:])
self.logger.experiment.add_scalar("recall/Train", class_recall, self.current_epoch)
def validation_epoch_end(self, outputs):
labels = []
predictions = []
for o in outputs:
for o_labels in o['labels'].detach().cpu():
labels.append(o_labels)
for o_preds in o['predictions'].detach().cpu():
predictions.append(o_preds)
labels = torch.stack(labels).int()
predictions = torch.stack(predictions)
class_recall = recall(predictions[:], labels[:])
self.logger.experiment.add_scalar("recall/Validation", class_recall, self.current_epoch)
def test_epoch_end(self, outputs):
labels = []
predictions = []
for o in outputs:
for o_labels in o['labels'].detach().cpu():
labels.append(o_labels)
for o_preds in o['predictions'].detach().cpu():
predictions.append(o_preds)
labels = torch.stack(labels).int()
predictions = torch.stack(predictions)
class_recall = recall(predictions[:], labels[:])
self.logger.experiment.add_scalar("recall/Test", class_recall, self.current_epoch)
def configure_optimizers(self):
optimizer = AdamW(self.parameters(), lr=self.hparams.lr if self.hparams.fine_tune == False else self.hparams.lr // 100)
scheduler = get_linear_schedule_with_warmup(
optimizer,
num_warmup_steps=self.hparams.n_warmup_steps,
num_training_steps=self.hparams.n_training_steps
)
return dict(
optimizer=optimizer,
lr_scheduler=dict(
scheduler=scheduler,
interval='step'
)
)
if __name__ == "__main__":
data_module = SourceCodeDataModule(batch_size=BATCH_SIZE)
steps_per_epoch = len(train_loader) // BATCH_SIZE
total_training_steps = steps_per_epoch * N_EPOCHS
warmup_steps = total_training_steps // 5
model = Model(
n_classes=1,
n_warmup_steps = warmup_steps,
n_training_steps=total_training_steps,
lr=2e-5
)
logger = TensorBoardLogger("lightning_logs", name="bert_predictor")
early_stopping_callback = EarlyStopping(monitor='val_loss', patience=2)
trainer = pl.Trainer(
logger=logger,
checkpoint_callback=checkpoint_callback,
callbacks=[early_stopping_callback],
max_epochs=N_EPOCHS,
gpus=1 if str(device).startswith('cuda') else 0,
progress_bar_refresh_rate=30
)
# First just train the final layer.
trainer.fit(model, datamodule=data_module)
result = trainer.test(model, datamodule=data_module)
print(f"Result when training classifier only: {result}")
# Then train the whole model
model = Model.load_from_checkpoint(trainer.checkpoint_callback.best_model_path)
model.finetune()
trainer.fit(model, datamodule=data_module)
result = trainer.test(model, datamodule=data_module)
print(f"Result when fine tuning: {result}")
| Here,
def finetune(self):
self.fine_tune = True
for name, param in self.bert.named_parameters():
if 'layer.5' in name:
param.requires_grad = True
try to unfreeze more layers at the end of the neural net; maybe the weights are saturated and not learning enough. Also, pay attention to the loss you are using, as well as the activation function at the output (for example, nn.BCELoss expects probabilities, so it must be paired with a sigmoid output as in your forward, while nn.BCEWithLogitsLoss expects raw logits).
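For example, a hedged sketch that unfreezes the last two DistilBERT transformer blocks (they are numbered 0 to 5) instead of only the last one:
def finetune(self):
    self.fine_tune = True
    for name, param in self.bert.named_parameters():
        if 'layer.5' in name or 'layer.4' in name:  # unfreeze the two final blocks
            param.requires_grad = True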
| https://stackoverflow.com/questions/72207543/ |
TypeError: backward() got an unexpected keyword argument 'variables' | I am using the recurrent Gaussian Process library. I believe the code was developed with older versions of Python and PyTorch. I ran one of the experiments of the model after cloning the repository
python ./testing/rnn_rgp_test.py
I got this error message from this line of the rnn_encoder.py script:
./RGP/autoreg/rnn_encoder.py", line 274, in backward_computation
torch.autograd.backward( variables=self.forward_means_list + self.forward_vars_list,
TypeError: backward() got an unexpected keyword argument 'variables'
I will be grateful if someone can point out how I can fix this error?
| Version 0.3.1 of PyTorch seems to be the last version with the variables parameter. Ideally, the RGP library should have documented which version of their dependencies they use but they didn't. Given that their Git repo seems to be inactive, you have several choices:
Use old versions of whatever libraries they require. You will have to go from one error to the next, hoping that things work as intended.
Fork RGP and re-implement the logic with current libraries. This will likely involve significant coding and may not even be possible at all.
Try to find a different library that implements RGPs.
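If you choose the first option, pinning the library is a one-liner (assuming 0.3.1 wheels are still available for your platform): pip install torch==0.3.1. If you instead patch the call site, note that the variables keyword was renamed to tensors in PyTorch 0.4, so the failing line in rnn_encoder.py would become roughly:
torch.autograd.backward(
    tensors=self.forward_means_list + self.forward_vars_list,
    # any old grad_variables=... argument becomes grad_tensors=...
)
Be aware that other parts of the library may rely on further removed APIs, so this single change may not be enough.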
| https://stackoverflow.com/questions/72208333/ |
Why is RandomCrop with size 84 and padding 8 returning an image size of 84 and not 100 in pytorch? | I was using the mini-imagenet data set and noticed this line of code:
elif data_augmentation == 'lee2019':
normalize = Normalize(
mean=[120.39586422 / 255.0, 115.59361427 / 255.0, 104.54012653 / 255.0],
std=[70.68188272 / 255.0, 68.27635443 / 255.0, 72.54505529 / 255.0],
)
train_data_transforms = Compose([
ToPILImage(),
RandomCrop(84, padding=8),
ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
RandomHorizontalFlip(),
ToTensor(),
normalize,
])
test_data_transforms = Compose([
normalize,
])
but when I checked the image size it was 84 instead of 100 (after adding padding):
X.size()=torch.Size([50, 3, 84, 84])
what is going on with this? Shouldn't it be 100?
reproduction:
import random
from typing import Callable
import learn2learn as l2l
import numpy as np
import torch
from learn2learn.data import TaskDataset, MetaDataset, DataDescription
from learn2learn.data.transforms import TaskTransform
from torch.utils.data import Dataset
class IndexableDataSet(Dataset):
def __init__(self, datasets):
self.datasets = datasets
def __len__(self) -> int:
return len(self.datasets)
def __getitem__(self, idx: int):
return self.datasets[idx]
class SingleDatasetPerTaskTransform(Callable):
"""
Transform that samples a data set first, then creates a task (e.g. n-way, k-shot) and finally
applies the remaining task transforms.
"""
def __init__(self, indexable_dataset: IndexableDataSet, cons_remaining_task_transforms: Callable):
"""
:param: cons_remaining_task_transforms; constructor that builds the remaining task transforms. Cannot be a list
of transforms because we don't know apriori which is the data set we will use. So this function should be of
type MetaDataset -> list[TaskTransforms] i.e. given the dataset it returns the transforms for it.
"""
self.indexable_dataset = MetaDataset(indexable_dataset)
self.cons_remaining_task_transforms = cons_remaining_task_transforms
def __call__(self, task_description: list):
"""
idea:
- receives the index of the dataset to use
- then use the normal NWays l2l function
"""
# - this is what I wish could have gone in a seperate callable transform, but idk how since the transforms take apriori (not dynamically) which data set to use.
i = random.randint(0, len(self.indexable_dataset) - 1)
task_description = [DataDescription(index=i)] # using this to follow the l2l convention
# - get the sampled data set
dataset_index = task_description[0].index
dataset = self.indexable_dataset[dataset_index]
dataset = MetaDataset(dataset)
# - use the sampled data set to create task
remaining_task_transforms: list[TaskTransform] = self.cons_remaining_task_transforms(dataset)
description = None
for transform in remaining_task_transforms:
description = transform(description)
return description
def sample_dataset(dataset):
def sample_random_dataset(x):
print(f'{x=}')
i = random.randint(0, len(dataset) - 1)
return [DataDescription(index=i)]
# return dataset[i]
return sample_random_dataset
def get_task_transforms(dataset: IndexableDataSet) -> list[TaskTransform]:
"""
:param dataset:
:return:
"""
transforms = [
sample_dataset(dataset),
l2l.data.transforms.NWays(dataset, n=5),
l2l.data.transforms.KShots(dataset, k=5),
l2l.data.transforms.LoadData(dataset),
l2l.data.transforms.RemapLabels(dataset),
l2l.data.transforms.ConsecutiveLabels(dataset),
]
return transforms
def print_datasets(dataset_lst: list):
for dataset in dataset_lst:
print(f'\n{dataset=}\n')
def get_indexable_list_of_datasets_mi_and_cifarfs(root: str = '~/data/l2l_data/') -> IndexableDataSet:
from learn2learn.vision.benchmarks import mini_imagenet_tasksets
datasets, transforms = mini_imagenet_tasksets(root=root)
mi = datasets[0].dataset
from learn2learn.vision.benchmarks import cifarfs_tasksets
datasets, transforms = cifarfs_tasksets(root=root)
cifarfs = datasets[0].dataset
dataset_list = [mi, cifarfs]
dataset_list = [l2l.data.MetaDataset(dataset) for dataset in dataset_list]
dataset = IndexableDataSet(dataset_list)
return dataset
# -- tests
def loop_through_l2l_indexable_datasets_test():
"""
:return:
"""
# - for determinism
random.seed(0)
torch.manual_seed(0)
np.random.seed(0)
# - options for number of tasks/meta-batch size
batch_size: int = 10
# - create indexable data set
indexable_dataset: IndexableDataSet = get_indexable_list_of_datasets_mi_and_cifarfs()
# - get task transforms
def get_remaining_transforms(dataset: MetaDataset) -> list[TaskTransform]:
remaining_task_transforms = [
l2l.data.transforms.NWays(dataset, n=5),
l2l.data.transforms.KShots(dataset, k=5),
l2l.data.transforms.LoadData(dataset),
l2l.data.transforms.RemapLabels(dataset),
l2l.data.transforms.ConsecutiveLabels(dataset),
]
return remaining_task_transforms
task_transforms: TaskTransform = SingleDatasetPerTaskTransform(indexable_dataset, get_remaining_transforms)
# -
taskset: TaskDataset = TaskDataset(dataset=indexable_dataset, task_transforms=task_transforms)
# - loop through tasks
for task_num in range(batch_size):
print(f'{task_num=}')
X, y = taskset.sample()
print(f'{X.size()=}')
print(f'{y.size()=}')
print(f'{y=}')
print()
print('-- end of test --')
# -- Run experiment
if __name__ == "__main__":
import time
from uutils import report_times
start = time.time()
# - run experiment
loop_through_l2l_indexable_datasets_test()
# - Done
print(f"\nSuccess Done!: {report_times(start)}\a")
context: https://github.com/learnables/learn2learn/issues/333
crossposted:
https://discuss.pytorch.org/t/why-is-randomcrop-with-size-84-and-padding-8-returning-an-image-size-of-84-and-not-100-in-pytorch/151463
https://www.reddit.com/r/pytorch/comments/uno1ih/why_is_randomcrop_with_size_84_and_padding_8/
| The padding is applied to the input image or tensor before the random crop. Ultimately, the output image has a spatial size equal to the size(s) given to the T.RandomCrop function, since the crop happens after the padding.
After all, it makes more sense to pad the input image rather than the cropped image, doesn't it?
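A quick check (a minimal sketch; recent torchvision versions accept tensors directly) confirms the behavior:
import torch
import torchvision.transforms as T

x = torch.rand(3, 84, 84)
out = T.RandomCrop(84, padding=8)(x)  # pads to 100x100 first, then crops back to 84x84
print(out.shape)                      # torch.Size([3, 84, 84])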
| https://stackoverflow.com/questions/72208865/ |
TypeError: forward() takes 1 positional argument but 2 were given during inference of a PyTorch model | My model looks like the following:
class RankingModel(nn.Module):
def __init__(self, conf: Dict[Text, Any], **kwargs: Any):
super(RankingModel, self).__init__()
self.conf = deepcopy(conf)
......
def forward(self, **_features): # the model input is a torch.utils.data.Dataset()
### model body part.
return prob
Then I train my model using:
trainer = Trainer(
model=model,
args=training_args,
train_dataset=training_dataset,
eval_dataset=valid_dataset,
compute_metrics=compute_metrics_for_binary_classification,
callbacks=[callback],
)
trainer.train()
Then I predict the result with the model.
test_predict = model(x_test)
I get the error:
TypeError Traceback (most recent call last)
Input In [18], in <cell line: 9>()
17 x_test.from_dict(feature_test)
18 x_test.set_format(tensor_type="torch")
---> 20 test_predict = model(x_test) # trainer.predict(x_test).predictions
21 if np.argmax(test_predict) < 5:
22 recall_counter = recall_counter + 1
File ~/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py:1051, in Module._call_impl(self, *input, **kwargs)
1047 # If we don't have any hooks, we want to skip the rest of the logic in
1048 # this function, and just call forward.
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
TypeError: forward() takes 1 positional argument but 2 were given
But all is OK if I predict the result through:
test_predict = trainer.predict(x_test).predictions
Why may I not use model(x_test) to get inference result? Could you please give me any suggestions? Thanks.
| Your forward expects keyword arguments (e.g. forward(data=myarray)) because you defined it with a double asterisk (**_features) and no positional parameter.
Either use def forward(self, input, **kwargs), which reads the first call argument positionally and collects the rest as kwargs,
or call it with:
model(keyword=x_test) and then access it inside your forward function with _features['keyword']
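Both options as a sketch (the names input and keyword are illustrative):
# Option 1: define a positional parameter
def forward(self, input, **kwargs):
    ...
test_predict = model(x_test)

# Option 2: keep forward(self, **_features) and call with a keyword
test_predict = model(keyword=x_test)  # inside forward: _features['keyword']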
| https://stackoverflow.com/questions/72211429/ |
Why won't PyTorch RNN accept unbatched input? | I'm trying to train a PyTorch RNN to predict the next value in a 1D sequence. According to the PyTorch documentation page, I think I should be able to feed unbatched input to the RNN with shape [L, H_in], where L is the length of the sequence and H_in is the input size. That is, a 2D tensor.
https://pytorch.org/docs/stable/generated/torch.nn.RNN.html
import torch
x1 = torch.tensor([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0], [7.0], [8.0]])
x1_input = x1[0:-1, :]
x1_target = x1[1:, :]
rnn = torch.nn.RNN(1, 1, 1)
optimizer_prediction = torch.optim.Adam(rnn.parameters())
prediction_loss = torch.nn.L1Loss()
rnn.train()
epochs = 100
for i in range(0, epochs):
output_x1 = rnn(x1_input)
print(output_x1)
error_x1 = prediction_loss(output_x1, x1_target)
error_x1.backward()
optimizer_prediction.step()
optimizer_prediction.zero_grad()
However, PyTorch is complaining that it expects a 3D input vector (i.e. including a dimension for the batch):
RuntimeError: input must have 3 dimensions, got 2
What is the correct method for feeding unbatched input to an RNN?
| I would recommend turning your input into a 3d array by adding a batch size of one with:
torch.unsqueeze(x1_input, dim=0).
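Applied to the question's loop (a minimal sketch; note that nn.RNN returns a tuple, and that unbatched 2D input is only supported in newer PyTorch releases, so older versions require the batch dimension):
x1_input = x1_input.unsqueeze(0)  # [L, H_in] -> [1, L, H_in]
output, hidden = rnn(x1_input)    # nn.RNN returns (output, h_n)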
| https://stackoverflow.com/questions/72234859/ |
How do I calculate the mean of values with the same label given by a mask for multi-dimensional data? | My input x is a tensor of [B, C, H, W] dim. B is batch size, C number of channels, H height and W width. I have a mask m of [H, W] dim. For each batch size and each channel I want to use the mask m to calculate the mean of all values in [H, W] with the same label.
For example:
B = 2
C = 2
H = 2
W = 3
x = torch.tensor([[[[1,2,3],[4,5,6]],[[7,8,9],[0,1,2]]],[[[1,2,3],[4,5,6]],[[7,8,9],[0,1,2]]]])
m = torch.tensor([[0,0,1],[0,2,2]])
I expect as output:
mean = torch.tensor([[2.3,3,5.5],[5,9,1.5],[2.3,3,5.5],[5,9,1.5]])
out = torch.tensor([[[[2.3,2.3,3],[2.3,5.5,5.5]],[[5,5,9],[5,1.5,1.5]]],[[[2.3,2.3,3],[2.3,5.5,5.5]],[[5,5,9],[5,1.5,1.5]]]])
What I want is the following:
1.) The mean value for each label [mean of label 0, mean of label 1, mean of label 2, mean of label 3, mean of label 4]
2.) I want to replace each value in x with the mean value of its associated label. Output should have the size [B, C, H, W].
I thought to somehow use torch_scatter.scatter_mean.
| I figured out how to get the mean values for multi-dimensional data. I flatten H and W and then call scatter_mean from torch_scatter. That gives me the mean values per label for each channel and batch.
x = x.view(B, C, H*W)
m = m.view(1, 1, H*W)  # the [H, W] mask flattened to (1, 1, H*W); recent torch_scatter versions broadcast the index over B and C
mean = torch_scatter.scatter_mean(x, m, dim=-1)
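For step 2 (writing each label's mean back to every pixel of that label), a hedged sketch using torch.gather on the result above:
idx = m.expand(B, C, H*W)                          # per-pixel label index, broadcast over B and C
out = torch.gather(mean, -1, idx).view(B, C, H, W)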
| https://stackoverflow.com/questions/72255702/ |
Steps for Machine Learning in Pytorch | When we define our model in PyTorch, we run through a number of epochs. Within the iteration over epochs, I want to know the following.
What is the difference between the two following code snippets, in which only the order of operations differs? The two snippet versions are:
The version I found in tutorials.
The code provided by my supervisor for the project.
Tutorial Version
for i in range(epochs):
logits = model(x)
loss = loss_fcn(logits,lables)
loss.backward()
optimizer.step()
optimizer.zero_grad()
Supervisor Version
for i in range(epochs):
logits = model(x)
loss = loss_fcn(logits,lables)
optimizer.zero_grad()
loss.backward()
optimizer.step()
| The only difference is when the gradients are cleared (i.e., when you call optimizer.zero_grad()). The first version zeroes out the gradients after updating the weights (optimizer.step()); the second one zeroes them out before the backward pass. Both versions will run fine. The only practical difference is the very first iteration, where the second snippet is better, as it makes sure any residual gradients are zero before the new ones are computed. Check this link that explains why you would zero the gradients
| https://stackoverflow.com/questions/72262608/ |
pytorch: simple recurrent neural network for image classification | I am making a simple recurrent neural network architecture for CIFAR10 image classification. I am not interested in using the pre-defined RNN class in PyTorch because I am implementing it from scratch according to the figure. I am getting an error about input tensors not being on the same device. I am not sure whether my code is right or wrong. Is there any simple way to write the FC layer without hard-coded shapes and parameters?
Figure
Code
class RNN(nn.Module):
def __init__(self, input_size, hidden_size, output_size, num_classes):
super(RNN, self).__init__()
self.hidden_size = hidden_size
self.input_to_hidden = nn.Linear(in_features=input_size + hidden_size, out_features=output_size)
self.input_to_output = nn.Linear(in_features=input_size + hidden_size, out_features=output_size)
self.softmax = nn.LogSoftmax(dim=1)
self.fc = nn.Linear(hidden_size, num_classes)
def forward(self, input_tensor):
combined = torch.cat((input_tensor, torch.zeros(input_tensor.size(0))), 1)
hidden = self.input_to_hidden(combined)
output = self.input_to_output(combined)
output = self.softmax(output)
return output, hidden
Trackback
Traceback (most recent call last):
File "/media/cvpr/CM_1/tutorials/rnn.py", line 81, in <module>
outputs = model(images)
File "/home/cvpr/anaconda3/envs/tutorials/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/media/cvpr/CM_1/tutorials/rnn.py", line 33, in forward
combined = torch.cat((input_tensor, torch.zeros(input_tensor.size(0))), 1)
RuntimeError: All input tensors must be on the same device. Received cuda:0 and cpu
| You need to make sure the tensors are on the same device (cpu/gpu) before concatenating them
you can add a device parameter to your class and use it:
class RNN(nn.Module):
def __init__(self, input_size, hidden_size, output_size, num_classes, device='cuda'):
super(RNN, self).__init__()
self.device = device
self.hidden_size = hidden_size
self.input_to_hidden = nn.Linear(in_features=input_size + hidden_size, out_features=output_size)
self.input_to_output = nn.Linear(in_features=input_size + hidden_size, out_features=output_size)
self.softmax = nn.LogSoftmax(dim=1)
self.fc = nn.Linear(hidden_size, num_classes)
def forward(self, input_tensor):
# move the input and create the zero "hidden" part on the same device; the zeros need shape [batch, hidden_size] for the cat to work
combined = torch.cat((input_tensor.to(self.device), torch.zeros(input_tensor.size(0), self.hidden_size, device=self.device)), 1)
hidden = self.input_to_hidden(combined)
output = self.input_to_output(combined)
output = self.softmax(output)
return output, hidden
| https://stackoverflow.com/questions/72267262/ |
Transforms.Normalize returns values higher than 255 Pytorch | I am working on a video dataset; I read the frames as integers and convert them to a float32 numpy array.
After being loaded, they appear in a range between 0 and 255:
[165., 193., 148.],
[166., 193., 149.],
[167., 193., 149.],
...
Finally, to feed them to my model and stack the frames I do the "ToTensor()" plus my transformation [transforms.Resize(224), transforms.Normalize([0.454, 0.390, 0.331], [0.164, 0.187, 0.152])]
and here the code to transform and stack the frames:
res_vframes = []
for i in range(len(v_frames)):
res_vframes.append(self.transforms((v_frames[i])))
res_vframes = torch.stack(res_vframes, 0)
The problem is that after the transformation the values appears in this way, which has values higher than 255:
[tensor([[[1003.3293, 1009.4268, 1015.5244, ..., 1039.9147, 1039.9147,
1039.9147],...
Any idea on what I am missing or doing wrong?
| The behavior of torchvision.transforms.Normalize:
output[channel] = (input[channel] - mean[channel]) / std[channel]
Since your input values are still in the [0, 255] range rather than [0, 1], and the std values are smaller than 1, the normalization makes the computed values much larger instead of standardizing them.
The class ToTensor() scales values to [0, 1] only if a condition is satisfied: the input must be a uint8 array, so that the converted tensor is a ByteTensor. Check this code from the official PyTorch source:
if isinstance(pic, np.ndarray):
# handle numpy array
if pic.ndim == 2:
pic = pic[:, :, None]
img = torch.from_numpy(pic.transpose((2, 0, 1))).contiguous()
# backward compatibility
if isinstance(img, torch.ByteTensor):
return img.to(dtype=default_float_dtype).div(255)
else:
return img
Therefore you need to divide the tensor by 255 explicitly, or make your input match the condition above (pass the frames as uint8) so that ToTensor() does the scaling.
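A minimal sketch of both fixes, reusing the question's loop variables (and assuming numpy is imported as np):
# Option 1: scale explicitly so Normalize sees values in [0, 1]
res_vframes.append(self.transforms(v_frames[i] / 255.0))
# Option 2: keep the frames as uint8 so ToTensor() takes the ByteTensor branch
v_frames = [f.astype(np.uint8) for f in v_frames]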
| https://stackoverflow.com/questions/72275297/ |
No CMAKE_CUDA_COMPILER could be found when installing pytorch | I am trying to install pytorch from source. The reason why I am doing this (instead of just pip install pytorch) is because I need the sm_86 support for my GPU (NVIDIA GTX 3060 Ti) and so I have set the TORCH_CUDA_ARCH_LIST=8.6 variable. I've read that this variable affects only the source installation.
Basically I am following this guide (linux system, using pip instead of conda) but I was not able to understand how to correctly set the CMAKE_PREFIX_PATH variable.
Despite this I tried in any case to install pytorch with the python3 setup.py install command but it returned this error:
CMake Error at cmake/public/cuda.cmake:47 (enable_language):
No CMAKE_CUDA_COMPILER could be found.
Tell CMake where to find the compiler by setting either the environment
variable "CUDACXX" or the CMake cache entry CMAKE_CUDA_COMPILER to the full
path to the compiler, or to the compiler name if it is in the PATH.
Call Stack (most recent call first):
cmake/Dependencies.cmake:43 (include)
CMakeLists.txt:696 (include)
The log file shows this:
Checking whether the CUDA compiler is NVIDIA using "" did not match "nvcc: NVIDIA \(R\) Cuda compiler driver":
Checking whether the CUDA compiler is Clang using "" did not match "(clang version)":
Compiling the CUDA compiler identification source file "CMakeCUDACompilerId.cu" failed.
Compiler: CMAKE_CUDA_COMPILER-NOTFOUND
Build flags: ;-Xfatbin;-compress-all
Id flags: -v
The output was:
No such file or directory
Compiling the CUDA compiler identification source file "CMakeCUDACompilerId.cu" failed.
Compiler: CMAKE_CUDA_COMPILER-NOTFOUND
Build flags:
Id flags: -v
The output was:
No such file or directory
Can anybody help me solve this?
Update
Cuda seems to be installed. With apt list --installed|grep cuda this is the output (I am Italian "sconosciuto"=unknown :)
cuda-11-7/sconosciuto,now 11.7.0-1 amd64 [installato, automatico]
cuda-cccl-11-7/sconosciuto,now 11.7.58-1 amd64 [installato, automatico]
cuda-command-line-tools-11-7/sconosciuto,now 11.7.0-1 amd64 [installato, automatico]
cuda-compiler-11-7/sconosciuto,now 11.7.0-1 amd64 [installato, automatico]
cuda-cudart-11-7/sconosciuto,now 11.7.60-1 amd64 [installato, automatico]
cuda-cudart-dev-11-7/sconosciuto,now 11.7.60-1 amd64 [installato, automatico]
cuda-cuobjdump-11-7/sconosciuto,now 11.7.50-1 amd64 [installato, automatico]
cuda-cupti-11-7/sconosciuto,now 11.7.50-1 amd64 [installato, automatico]
cuda-cupti-dev-11-7/sconosciuto,now 11.7.50-1 amd64 [installato, automatico]
cuda-cuxxfilt-11-7/sconosciuto,now 11.7.50-1 amd64 [installato, automatico]
cuda-demo-suite-11-7/sconosciuto,now 11.7.50-1 amd64 [installato, automatico]
cuda-documentation-11-7/sconosciuto,now 11.7.50-1 amd64 [installato, automatico]
cuda-driver-dev-11-7/sconosciuto,now 11.7.60-1 amd64 [installato, automatico]
cuda-drivers-515/sconosciuto,now 515.43.04-1 amd64 [installato, automatico]
cuda-drivers/sconosciuto,now 515.43.04-1 amd64 [installato, automatico]
cuda-gdb-11-7/sconosciuto,now 11.7.50-1 amd64 [installato, automatico]
cuda-libraries-11-7/sconosciuto,now 11.7.0-1 amd64 [installato, automatico]
cuda-libraries-dev-11-7/sconosciuto,now 11.7.0-1 amd64 [installato, automatico]
cuda-memcheck-11-7/sconosciuto,now 11.7.50-1 amd64 [installato, automatico]
cuda-nsight-11-7/sconosciuto,now 11.7.50-1 amd64 [installato, automatico]
cuda-nsight-compute-11-7/sconosciuto,now 11.7.0-1 amd64 [installato, automatico]
cuda-nsight-systems-11-7/sconosciuto,now 11.7.0-1 amd64 [installato, automatico]
cuda-nvcc-11-7/sconosciuto,now 11.7.64-1 amd64 [installato, automatico]
cuda-nvdisasm-11-7/sconosciuto,now 11.7.50-1 amd64 [installato, automatico]
cuda-nvml-dev-11-7/sconosciuto,now 11.7.50-1 amd64 [installato, automatico]
cuda-nvprof-11-7/sconosciuto,now 11.7.50-1 amd64 [installato, automatico]
cuda-nvprune-11-7/sconosciuto,now 11.7.50-1 amd64 [installato, automatico]
cuda-nvrtc-11-7/sconosciuto,now 11.7.50-1 amd64 [installato, automatico]
cuda-nvrtc-dev-11-7/sconosciuto,now 11.7.50-1 amd64 [installato, automatico]
cuda-nvtx-11-7/sconosciuto,now 11.7.50-1 amd64 [installato, automatico]
cuda-nvvp-11-7/sconosciuto,now 11.7.50-1 amd64 [installato, automatico]
cuda-runtime-11-7/sconosciuto,now 11.7.0-1 amd64 [installato, automatico]
cuda-sanitizer-11-7/sconosciuto,now 11.7.50-1 amd64 [installato, automatico]
cuda-toolkit-11-7-config-common/sconosciuto,now 11.7.60-1 all [installato, automatico]
cuda-toolkit-11-7/sconosciuto,now 11.7.0-1 amd64 [installato, automatico]
cuda-toolkit-11-config-common/sconosciuto,now 11.7.60-1 all [installato, automatico]
cuda-toolkit-config-common/sconosciuto,now 11.7.60-1 all [installato, automatico]
cuda-tools-11-7/sconosciuto,now 11.7.0-1 amd64 [installato, automatico]
cuda-visual-tools-11-7/sconosciuto,now 11.7.0-1 amd64 [installato, automatico]
cuda/sconosciuto,now 11.7.0-1 amd64 [installato]
libcuda1/sconosciuto,now 515.43.04-1 amd64 [installato, automatico]
nvidia-cuda-mps/sconosciuto,now 515.43.04-1 amd64 [installato, automatico]
| My guess is that the CUDA installation is somehow messed up / invisible - otherwise CMake should have noticed it. You can overcome the issue more "manually" by running CMake like so:
CUDACXX=/usr/local/cuda-11.7/bin/nvcc cmake -S /path/to/source/dir -B /path/to/build/dir
(as you have installed CUDA under /usr/local/cuda-11.7)
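Since the question builds through python3 setup.py install rather than invoking CMake directly, the equivalent (a sketch using the same install prefix) is to export the variables before building:
export CUDACXX=/usr/local/cuda-11.7/bin/nvcc
export PATH=/usr/local/cuda-11.7/bin:$PATH
python3 setup.py install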
A less-likely cause is that CMake doesn't properly recognize the just-released CUDA 11.7, although I doubt it.
| https://stackoverflow.com/questions/72278881/ |
Massive neural network training time increase by inverting images in a data set | I have been working with neural networks for a few months now and I have a little mystery that I can't solve on my own.
I wanted to create and train a neural network which can identify simple geometric shapes (squares, circles, and triangles) in 56*56 pixel greyscale images. If I use images with a black background and a white shape, everything work pretty well. The training time is about 18 epochs and the accuracy is pretty close to 100% (usually 99.6 % - 99.8%).
But all that changes when I invert the images (i.e., now a white background and black shapes). The training time skyrockets to somewhere around 600 epochs and during the first 500-550 epochs nothing really happens. The loss barely decreases in those first 500-550 epochs and it just seems like something is "stuck".
Why does the training time increase so much and how can I reduce it (if possible)?
|
Color inversion
You have to essentially “switch” WxH pixels, hence touching every possible pixel during augmentation for every image, which amounts to lots of computation.
In total it would be DxWxH operations per epoch (D being size of your dataset).
You might want to precompute these and feed your neural network with them afterwards.
Loss
It is harder for neural networks, as white is encoded with 1 while black is encoded with 0; the inverse gives us 0 for white and 1 for black.
This means most of the neural network's weights are activated by background pixels!
What is more, every sensible signal (0 in the case of inversion) is multiplied by a zero value and has no effect on the final loss.
With hard {0, 1} encoding the neural network essentially tries to get signal from the background (now encoded as 1), which is mostly meaningless (each weight will tend to zero or almost zero as it bears little to no information), and what it does instead is fit the distribution of your labels (if I only ever predict one label I will get a smaller loss, no matter the input).
Experiment if you are bored
Try encoding your pixels with smooth values, e.g. white being 0.1 and black being 0.9 (although 1.0 might work okayish, more epochs might be needed, as 0 is very hard to obtain via backprop) and see what the results are.
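As a sketch, assuming the pixel values x are already scaled to [0, 1]:
x = 0.9 - 0.8 * x  # maps white (1.0) -> 0.1 and black (0.0) -> 0.9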
| https://stackoverflow.com/questions/72286749/ |
AttributeError: 'Upsample' object has no attribute 'recompute_scale_factor' | I get error on line x_stats = dec(z).float().
import torch.nn.functional as F
z_logits = enc(x)
z = torch.argmax(z_logits, axis=1)
z = F.one_hot(z, num_classes=enc.vocab_size).permute(0, 3, 1, 2).float()
x_stats = dec(z).float()
x_rec = unmap_pixels(torch.sigmoid(x_stats[:, :3]))
x_rec = T.ToPILImage(mode='RGB')(x_rec[0])
display_markdown('Reconstructed image:')
display(x_rec)
I tried to downgrade and reinstall the torch package but that didn't help the issue. My package version is torch==1.11.0
Full traceback:
AttributeError Traceback (most recent call last)
/Users/hanpham/github/DALL-E/notebooks/usage.ipynb Cell 4' in <cell line: 7>()
4 z = torch.argmax(z_logits, axis=1)
5 z = F.one_hot(z, num_classes=enc.vocab_size).permute(0, 3, 1, 2).float()
----> 7 x_stats = dec(z).float()
8 x_rec = unmap_pixels(torch.sigmoid(x_stats[:, :3]))
9 x_rec = T.ToPILImage(mode='RGB')(x_rec[0])
File /opt/homebrew/lib/python3.9/site-packages/torch/nn/modules/module.py:1110, in Module._call_impl(self, *input, **kwargs)
1106 # If we don't have any hooks, we want to skip the rest of the logic in
1107 # this function, and just call forward.
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/homebrew/lib/python3.9/site-packages/dall_e/decoder.py:94, in Decoder.forward(self, x)
91 if x.dtype != torch.float32:
92 raise ValueError('input must have dtype torch.float32')
---> 94 return self.blocks(x)
File /opt/homebrew/lib/python3.9/site-packages/torch/nn/modules/module.py:1110, in Module._call_impl(self, *input, **kwargs)
1106 # If we don't have any hooks, we want to skip the rest of the logic in
1107 # this function, and just call forward.
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/homebrew/lib/python3.9/site-packages/torch/nn/modules/container.py:141, in Sequential.forward(self, input)
139 def forward(self, input):
140 for module in self:
--> 141 input = module(input)
142 return input
File /opt/homebrew/lib/python3.9/site-packages/torch/nn/modules/module.py:1110, in Module._call_impl(self, *input, **kwargs)
1106 # If we don't have any hooks, we want to skip the rest of the logic in
1107 # this function, and just call forward.
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/homebrew/lib/python3.9/site-packages/torch/nn/modules/container.py:141, in Sequential.forward(self, input)
139 def forward(self, input):
140 for module in self:
--> 141 input = module(input)
142 return input
File /opt/homebrew/lib/python3.9/site-packages/torch/nn/modules/module.py:1110, in Module._call_impl(self, *input, **kwargs)
1106 # If we don't have any hooks, we want to skip the rest of the logic in
1107 # this function, and just call forward.
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/homebrew/lib/python3.9/site-packages/torch/nn/modules/upsampling.py:154, in Upsample.forward(self, input)
152 def forward(self, input: Tensor) -> Tensor:
153 return F.interpolate(input, self.size, self.scale_factor, self.mode, self.align_corners,
--> 154 recompute_scale_factor=self.recompute_scale_factor)
File /opt/homebrew/lib/python3.9/site-packages/torch/nn/modules/module.py:1185, in Module.__getattr__(self, name)
1183 if name in modules:
1184 return modules[name]
-> 1185 raise AttributeError("'{}' object has no attribute '{}'".format(
1186 type(self).__name__, name))
AttributeError: 'Upsample' object has no attribute 'recompute_scale_factor'
| Installing an older Torch version solves the issue:
pip install torchvision==0.10.1
pip install torch==1.9.1
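If you would rather keep torch 1.11, a commonly used workaround is to patch the loaded modules (an assumption here: the decoder was pickled with an older torch whose Upsample lacked the newer attribute):
for m in dec.modules():
    if isinstance(m, torch.nn.Upsample):
        m.recompute_scale_factor = None  # add the attribute the newer forward() expects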
| https://stackoverflow.com/questions/72297590/ |
Pytorch-forecasting:: Univariate AssertionError: filters should not remove entries all entries | I tried to do univariate forecasting with Pytorch-Forecasting.
But I got the following error from TimeSeriesDataSet:
AssertionError: filters should not remove entries all entries - check
encoder/decoder lengths and lags
I have tried googling for the error, read the suggestions, and made sure my training_df has a sufficient number of rows (I have plenty: 196). Also, I only have one group_id, which is 1; there is no other group_id, so all 196 rows should be in the same group.
My dataframes sample:
note: all rows has same group value = 1
PutCall_Ratio_Total time_idx group
Date
2006-02-24 11119.140000 0 1
2006-02-25 7436.316667 1 1
2006-02-26 3753.493333 2 1
I have training_df with length of 196
len(training_df)
196
And here is my TimeSeriesDataSet portion:
context_length = 28*7
prediction_length = 7
# setup Pytorch Forecasting TimeSeriesDataSet for training data
training_data = TimeSeriesDataSet(
training_df,
time_idx="time_idx",
target="PutCall_Ratio_Total",
group_ids=["group"],
time_varying_unknown_reals=["PutCall_Ratio_Total"],
max_encoder_length=context_length,
max_prediction_length=prediction_length
)
| After some experimentation, it seems that the training_df length (196) should be larger
than or equal to (context_length + prediction_length).
So for example above it works once I update the context_length to 27 * 7 instead of 28 * 7.
Since 27 * 7 + 7 = 196.
While 28 * 7 + 7 > 196.
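A small sketch that derives the encoder length from the data instead of hard-coding it:
prediction_length = 7
context_length = len(training_df) - prediction_length  # 196 - 7 = 189 = 27 * 7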
| https://stackoverflow.com/questions/72302786/ |
Google Colab: torch cuda is true but No CUDA GPUs are available | I use Google Colab to train the model. As the picture shows, when I input 'torch.cuda.is_available()' the output is 'true'. But then, when I run the code, it raises the error RuntimeError: No CUDA GPUs are available.
| Try installing the cudatoolkit version you want to use:
"conda install pytorch torchvision cudatoolkit=10.1 -c pytorch"
| https://stackoverflow.com/questions/72303759/ |
Pytorch: Dataloader shuffle=False producing same batches | class DataSet(torch.utils.data.Dataset):
def __init__(self,dataframe,n_classes=3,w=384,h=384,apply_aug=False):
self.data=dataframe
self.n_classes=n_classes
self.apply_aug=apply_aug
self.w=w
self.h=h
self.transform=A.Compose([A.Rotate(limit=30,p=0.8),
A.HorizontalFlip(),
A.CoarseDropout(max_height=0.1,max_width=0.1,p=1.0),
A.ShiftScaleRotate(shift_limit=0.09,scale_limit=0.2,rotate_limit=0)
])
def __len__(self):
return self.data.shape[0]
def __getitem__(self,idx):
IMG=np.zeros((1,self.w,self.h),dtype=np.float32)
MASK=np.zeros((self.n_classes,self.w,self.h),dtype=np.float32)
path=self.data.iloc[idx]["image_path"]
encoded_list=self.data.iloc[idx]["segmentation"]
width=int(self.data.iloc[idx]["width"][0])
heigth=int(self.data.iloc[idx]["heigth"][0])
class_list=self.data.iloc[idx]["class"]
img=implt.imread(path)
print(f"idx is {idx}")
IMG[:,:width,:heigth]=img
for class_idx,c in enumerate(class_list):
if str(encoded_list[class_idx])=="nan":
mask=np.zeros((1,width,heigth))
else:
mask=RLE_TO_MASK(encoded_list[class_idx],width,heigth)
MASK[c,:width,:heigth]=mask[0]
if self.apply_aug:
transformed=self.transform(image=IMG,mask=MASK)
IMG,MASK=transformed['image'],transformed['mask']
IMG=IMG/255.0
return IMG,MASK
Above is the dataset function created. It outputs the images and masks.
When I set shuffle=True for the data loader, it works fine, but when I set shuffle=False, the data loader provides the same batch that was produced before instead of the next batch.
dataloader=torch.utils.data.DataLoader(DataSet(df,apply_aug=True),batch_size=BATCH_SIZE,shuffle=False)
for i in range(2):
images,masks=next(iter(dataloader))
print()
print(images.shape,masks.shape)
idx is 0
idx is 1
idx is 2
idx is 3
idx is 4
idx is 5
idx is 6
idx is 7
idx is 8
idx is 9
idx is 10
idx is 11
idx is 12
idx is 13
idx is 14
idx is 15
idx is 16
idx is 17
idx is 18
idx is 19
idx is 20
idx is 21
idx is 22
idx is 23
idx is 24
idx is 25
idx is 26
idx is 27
idx is 28
idx is 29
idx is 30
idx is 31
idx is 0
idx is 1
idx is 2
idx is 3
idx is 4
idx is 5
idx is 6
idx is 7
idx is 8
idx is 9
idx is 10
idx is 11
idx is 12
idx is 13
idx is 14
idx is 15
idx is 16
idx is 17
idx is 18
idx is 19
idx is 20
idx is 21
idx is 22
idx is 23
idx is 24
idx is 25
idx is 26
idx is 27
idx is 28
idx is 29
idx is 30
idx is 31
torch.Size([32, 1, 384, 384]) torch.Size([32, 3, 384, 384])
When shuffle=True
for i in range(2):
images,masks=next(iter(dataloader))
print()
print(images.shape,masks.shape)
idx is 25498
idx is 15357
idx is 11275
idx is 36247
idx is 33223
idx is 8566
idx is 14229
idx is 23999
idx is 28883
idx is 8847
idx is 35485
idx is 36647
idx is 22422
idx is 3693
idx is 32525
idx is 19464
idx is 22187
idx is 38244
idx is 7795
idx is 3690
idx is 7461
idx is 36806
idx is 22455
idx is 6817
idx is 8789
idx is 37809
idx is 33157
idx is 22828
idx is 35858
idx is 38320
idx is 2684
idx is 29708
idx is 38240
idx is 28020
idx is 10356
idx is 20215
idx is 18561
idx is 30083
idx is 30997
idx is 14020
idx is 20896
idx is 25551
idx is 2735
idx is 19138
idx is 23026
idx is 30677
idx is 26664
idx is 2731
idx is 14150
idx is 16735
idx is 28621
idx is 18268
idx is 11793
idx is 35654
idx is 4470
idx is 11312
idx is 37349
idx is 27501
idx is 5389
idx is 34019
idx is 24120
idx is 38311
idx is 14880
idx is 9533
torch.Size([32, 1, 384, 384]) torch.Size([32, 3, 384, 384])
| You use the iterator incorrectly:
next(iter(dataloader))
Every step you create a brand-new iterator and take only its first element, so with shuffle=False you always get the very first batch (with shuffle=True the fresh iterator reshuffles the data, which hides the problem). Instead you should create the iterator once, before the for-loop, and call next() on it in every step.
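For example, reusing the question's dataloader:
it = iter(dataloader)
for i in range(2):
    images, masks = next(it)  # advances to the next batch on each call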
But why not simply iterate over your dataloader this way:
for images,masks in dataloader:
# do sth with data
| https://stackoverflow.com/questions/72314135/ |
How to calculate mutual information in PyTorch (differentiable estimator) | I am training a model with pytorch, where I need to calculate the degree of dependence between two tensors (let's say they are the two tensors each containing values very close to zero or one, e.g. v1 = [0.999, 0.998, 0.001, 0.98] and v2 = [0.97, 0.01, 0.997, 0.999]) as a part of my loss function. I am trying to calculate mutual information, but I can't find any mutual information estimation implementation in PyTorch. Has such a thing been provided anywhere?
| Mutual information is defined for distributions, not individual points. So I will write the next part assuming v1 and v2 are samples from a distribution p. I will also assume that you have n samples from p, with n > 1.
You want a method to estimate mutual information from samples. There are many ways to do this. One of the simplest ways to do this would be to use a non-parametric estimator like NPEET (https://github.com/gregversteeg/NPEET). It works with numpy (you can convert from torch to numpy for this). There are more involved parametric models for which you may be able to find implementation in pytorch (See https://arxiv.org/abs/1905.06922).
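A hedged sketch with NPEET (assuming it is installed from the linked repository; the import path may differ by version):
from npeet import entropy_estimators as ee

v1_np = v1.detach().cpu().numpy().reshape(-1, 1)
v2_np = v2.detach().cpu().numpy().reshape(-1, 1)
mi_estimate = ee.mi(v1_np, v2_np)  # kNN-based non-parametric estimate; note it is not differentiable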
If you only have two vectors and want to compute a similarity measure, a dot product similarity would be more suitable than mutual information as there is no distribution.
| https://stackoverflow.com/questions/72323285/ |
How to require gradient only for some tensor elements in a pytorch tensor? | I would like to use a tensor in which only a few elements are variable, i.e., considered during the backpropagation step. Consider for example:
self.conv1 = nn.Conv2d(3, 16, 3, 1, padding=1, bias=False)
mask = torch.zeros(self.conv1.weight.data.shape, requires_grad=False)
self.conv1.weight.data[0, 0, 0, 0] += mask[0, 0, 0, 0]
print(self.conv1.weight.data[0, 0 , 0, 0].requires_grad)
The output will be False.
| You can only switch on and off gradient computation at the tensor level which means that the requires_grad is not element-wise. What you observe is different because you have accessed the requires_grad attribute of conv1.weight.data which is not the same object as its wrapper tensor conv1.weight!
Notice the difference:
>>> conv1 = nn.Conv2d(3, 16, 3) # requires_grad=True by default
>>> conv1.weight.requires_grad
True
>>> conv1.weight.data.requires_grad
False
conv1.weight is the weight tensor while conv1.weight.data is the underlying data tensor which never requires gradient because it is at a different level.
Now onto how to solve the partially requiring gradient computation on a tensor. Instead of looking at solving it as "only require gradient for some elements of tensor", you can think of it as "don't require gradient for some elements of tensor". You can do so by overwriting the gradient values on the tensor at the desired positions after the backward pass:
>>> conv1 = nn.Conv2d(3, 1, 2)
>>> mask = torch.ones_like(conv1.weight)
For example, to prevent the update of the first component of the convolutional layer:
>>> mask[0,0,0,0] = 0
After the backward pass, you can mask the gradient on conv1.weight:
>>> conv1(torch.rand(1,3,10,10)).mean().backward()
>>> conv1.weight.grad *= mask
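Put together in a sketched training step (plain SGD assumed; optimizers with momentum or weight decay may still move the masked weights via their internal state), the masking goes between backward() and the optimizer update:
>>> optimizer = torch.optim.SGD(conv1.parameters(), lr=0.1)
>>> optimizer.zero_grad()
>>> conv1(torch.rand(1,3,10,10)).mean().backward()
>>> conv1.weight.grad *= mask  # zero the gradients of the frozen entries
>>> optimizer.step()           # the masked weights stay unchanged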
| https://stackoverflow.com/questions/72325827/ |
How can I change my environment architecture to arm64 from x86-64? | I am an MacBook M1 user and I am trying to use M1 GPU (MPS) supported by Pytorch. I read that I need to make sure my system is arm64 rather than x86 so I created my env as below:
CONDA_SUBDIR=osx-arm64 conda create -n nlp2 --clone nlp
(nlp2) twang20@C02G82XRQ05N ~ % python --version
Python 3.9.7
(nlp2) twang20@C02G82XRQ05N ~ % conda config --env --set subdir
osx-arm64
(nlp2) twang20@C02G82XRQ05N ~ % uname -m
arm64
However, in torch, when I checked the environment info, it still tells me my architecture is x86-64. I cannot find a way to change it to arm64.
get_pretty_env_info()
Out[2]:
PyTorch version: 1.12.0.dev20220520
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 11.6.5 (x86_64)
GCC version: Could not collect
Clang version: 13.0.0 (clang-1300.0.29.30)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.7 (default, Sep 16 2021, 08:50:36) [Clang 10.0.0 ] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.20.3
[pip3] numpydoc==1.1.0
[pip3] torch==1.12.0.dev20220520
[pip3] torchaudio==0.12.0.dev20220520
[pip3] torchvision==0.13.0.dev20220520
[conda] blas
I would expect to see something like this:
OS: macOS 11.6.5 (arm64)
GCC version: Could not collect
Clang version: 13.0.0 (clang-1300.0.29.30)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.7 (default, Sep 16 2021, 08:50:36) [Clang 10.0.0 ] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
How can I make it happen? Thanks.
| Cloning is going to copy/link the packages from the previous environment, which is already x86_64. Instead, you would need to recreate the environment. Something like:
## dump previous environment
conda env export -n nlp --from-history > nlp_x86.yaml
## create new one with temp subdir
CONDA_SUBDIR=osx-arm64 conda env create -n nlp_arm -f nlp_x86.yaml
## permanently set subdir after creation
conda activate nlp_arm
conda config --env --set subdir osx-arm64
However, you'll likely need to edit the YAML to add channels, adjust packages, etc. For example, some packages may not yet be available.
In particular, the M1 support from PyTorch is still only on nightly builds, so you'll need the pytorch-nightly channel.
Also, note they aren't yet building other PyTorch packages (e.g., torchvision) for osx-arm64, so at the time of this writing, I wouldn't expect full environments to simply swap out to M1 support. Might need to wait for an official release.
| https://stackoverflow.com/questions/72343472/ |
Input Shape of Deep learning model | I have created deep learning models with different input shapes.
For testing, I am manually resizing the images according to the model's input shape.
I need to resize the image to the input shape of the deep model.
Is there any command to find the input shape of a model in PyTorch?
model = torch.load(config.MODEL_PATH).to(config.DEVICE)
im = cv2.resize(im, (INPUT_IMAGE_HEIGHT,INPUT_IMAGE_HEIGHT))
How can I find the INPUT_IMAGE_HEIGHT,INPUT_IMAGE_HEIGHT from model
Thank You
| This is a tricky question because your input size will can depend on several components of your model. The short answer is you can't.
Concerning the number of channels in your input tensor, you can infer this solely based on the first convolutional layer. Assuming your model is a two dimensional convolutional network, then you can get the input channel number based on :
for child in model.modules():
if type(child).__name__ == 'Conv2d':
print(child.weight.size(1))
break
Now for the input size, as I said, you may not be able to infer this information at all. Indeed some convolutional networks, such as classification networks, may require specific dimensions such that the bottleneck can be flattened and fed into a fully-connected network. This is not always true though; networks that use some sort of pooling operation (average or maximum pooling) will alleviate the need to provide fixed input shapes. Other networks, such as dense prediction networks, may not need a specific input shape at all, given the symmetry between input and output...
This all depends on the network's design and as such, I'm afraid there is no definitive answer to your question.
| https://stackoverflow.com/questions/72346720/ |
Why does a call to torch.tensor inside of apply_async fail to complete (seems to block execution)? | I'm trying to understand why the following simple example doesn't successfully complete execution and seems to get stuck on the first line of really_simple_func (on Ubuntu machines, but not Windows). The code is:
import torch as t
import numpy as np
import multiprocessing as mp # I've tried both multiprocessing
# import torch.multiprocessing as mp # and torch.multiprocessing
def really_simple_func():
temp_val_2 = t.tensor(np.zeros(425447)[0:400000]) # this is the line that blocks.
return 4.3
if __name__ == "__main__":
print("Run brief starting")
some_zeros = np.zeros(425447)
temp_val = t.tensor(some_zeros[0:400000]) # DELETE THIS LINE TO MAKE IT WORK
pool = mp.Pool(processes=1)
job = pool.apply_async(really_simple_func)
print("just before job.get()")
result = job.get()
print("Run brief completed. Reward = {}".format(result))
I have torch 1.11.0 installed, numpy 1.22.3 and have tried both CPU and GPU versions of Torch. When I run this code on two different Ubuntu machines, I get the following output:
Run brief starting
just before job.get()
However, the code never successfully completes (doesn't print the "Run brief completed" line). (It does complete on a third Windows box).
On the Ubuntu machines, if I delete the line with the comment "#DELETE THIS LINE TO MAKE IT WORK" the execution DOES complete, printing the final line as expected. Similarly, if I leave the line defining temp_val in but delete the line with the comment "This is the line that blocks" it will also complete. Moreover, if I reduce the size of the temp_val tensor (say from 400000 to 4000) it will also complete successfully. Finally, it is worth noting that while I can reproduce this behaviour on two different Ubuntu machines, this code does actually complete on my Windows machine - though, as far as I can tell, the versions of key packages, such as torch, are the same.
I don't understand this behaviour. I suspect it is something to do with the way torch allocates memory or stores information. I've tried calling del temp_val to free up memory, but that doesn't seem to fix things. It seems to me that the async call to t.tensor within really_simple_func is stopped from completing if there has already been a call to t.tensor in the main code block, creating a sufficiently large tensor.
I don't understand why this is happening, or even if that is the correct explanation. In any case, what would be best practice if I do need to do some tensor processing within apply_async as well as in the main thread? More generally, what is Torch waiting on when I make a call to t.tensor?
(Obviously, this is just the simplest version of the real code I'm trying to get to work that reproduced this issue. I realise that calling mp.Pool with only one process doesn't really make sense...nor, indeed, does using apply_async to call a function that returns a constant!)
| Unfortunately, I cannot provide any answers to your questions.
I can, however, share experiences with seemingly the same issue. I use a Linux machine with torch 1.8.1 and numpy 1.19.2.
When I run the following code on my machine:
with Pool(max_pool) as p:
    pool_outputs = list(
        tqdm(
            p.imap(lambda f: get_model_results_per_query_file(get_preds, tokenizer, f), query_files),
            total=len(query_files)
        )
    )
The function get_model_results_per_query_file contains operations similar to the following:
feats = features.unsqueeze(0).repeat(batch_size, 1, 1).to(device)
(features is a torch tensor)
The first round of jobs automatically fails, and new ones are immediately started (which do not fail for some reason). The whole process never completes though, since the main process still seems to be waiting for the first failed jobs.
If I remove the lines in my code involving the repeat function, no jobs fail.
I managed to solve my issue and preserve the same results by adapting a similar solution to yours:
feats = torch.as_tensor(np.tile(features, (batch_size, 1, 1))).to(device)
I believe as_tensor works in a similar fashion to from_numpy in this case.
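A quick sketch to check that the two calls are equivalent (the shapes here are arbitrary stand-ins for features and batch_size):
import numpy as np
import torch

features = torch.rand(4, 8)  # stand-in feature tensor
batch_size = 3

a = features.unsqueeze(0).repeat(batch_size, 1, 1)                  # failing call
b = torch.as_tensor(np.tile(features.numpy(), (batch_size, 1, 1)))  # workaround

print(torch.equal(a, b))  # True: both produce the same (3, 4, 8) tensor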
I only managed to find this solution thanks to your post and your proposed workaround, so thank you!
| https://stackoverflow.com/questions/72348083/ |
The same pretrained model with the same input, run multiple times, gives different outputs | I load a pretrained Resnet152 from torchvision. I evaluate the model multiple times with the same input image, but each time the output is different. It's very strange. Does anyone know the reason? My code is
from torchvision import transforms
import torch
from torchvision import models
from PIL import Image
# load a pretrained model
model = models.resnet152(pretrained=True)
# load a image and preprocess it
preprocessor = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(
        mean=[0.485, 0.456, 0.406],
        std=[0.229, 0.224, 0.225]
    )])
input_image = Image.open('lion2.jpg')
input_tensor = preprocessor(input_image)
input_batch = torch.unsqueeze(input_tensor, 0)
# run multiple times and print output
for k in range(5):
    model.train()
    out = model(input_batch)
    model.eval()
    out2 = model(input_batch)
    print(out2[0][:10].cpu().detach())
The outputs are
tensor([ 0.4722, -2.1463, -0.5993, -0.3880, -2.6292, 1.9123, -1.7939, -0.3289,
-0.3189, 0.5306])
tensor([ 0.4407, -2.0370, -0.7397, -0.4447, -2.6059, 1.9052, -1.9715, -0.6495,
-0.5361, 0.2618])
tensor([ 0.3874, -1.9249, -0.8254, -0.5408, -2.5266, 1.8302, -2.1151, -0.8739,
-0.7206, 0.0478])
tensor([ 0.3150, -1.8490, -0.9004, -0.6544, -2.4615, 1.7409, -2.2083, -1.0194,
-0.8352, -0.1017])
tensor([ 0.2310, -1.7754, -0.8858, -0.7081, -2.3238, 1.5943, -2.2625, -1.1185,
-0.9551, -0.2954])
(If I remove either model.train() or model.eval(), the output stays constant.)
| The model torchvision.models.resnet152 contains batch normalization layers with track_running_stats set to True. This means that whenever the model is called in training mode (i.e., when model.train() is set), the running_mean and running_var parameters of such batch normalization layers get updated to include the data of the batch passed in that call.
In your example, each time you call the model under model.train() in the loop, the running_mean and running_var parameters of all the batch normalization layers are updated. These are then frozen at the updated values when you call model.eval(), and used in the second forward pass, which causes the outputs to be different.
Since you are passing exactly the same inputs every time in the loop, it implies that the running_mean and running_var will converge to a constant after a large number of iterations. You can check that as a result the outputs in eval() mode eventually become identical.
The standard way of evaluating networks typically involves calling model.eval() once and using it over the entire test set, and so does not exhibit this apparent discrepancy.
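To see the mechanism in isolation, here is a minimal sketch with a standalone batch normalization layer (the shapes are arbitrary):
import torch

bn = torch.nn.BatchNorm2d(3)  # track_running_stats=True by default
x = torch.rand(8, 3, 4, 4)

print(bn.running_mean)  # starts at zeros
bn.train()
bn(x)                   # a train-mode forward pass updates the running statistics
print(bn.running_mean)  # has moved toward the batch mean

bn.eval()
y1, y2 = bn(x), bn(x)       # eval-mode forward passes use the frozen statistics
print(torch.equal(y1, y2))  # True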
| https://stackoverflow.com/questions/72357323/ |
Detect Apple silicon GPU core count | Similar to this question, I am interested in detecting the exact GPU inside a Mac equipped with Apple silicon.
I am interested in knowing the exact GPU core count.
sysctl -a | grep gpu
or
sysctl -a | grep core
does not seem to provide anything useful.
| You can use ioreg like this:
ioreg -l | grep gpu-core-count
You can also look up an object whose class is named something like AGXAcceleratorG13X and see all of its properties; gpu-core-count will also be there.
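If you need the value programmatically, a small Python sketch could look like this (it assumes the property appears in the ioreg output as "gpu-core-count" = N, which is what the grep above relies on):
import re
import subprocess

# Query the I/O registry and pull out the gpu-core-count property.
out = subprocess.run(["ioreg", "-l"], capture_output=True, text=True).stdout
match = re.search(r'"gpu-core-count"\s*=\s*(\d+)', out)
if match:
    print("GPU cores:", int(match.group(1)))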
| https://stackoverflow.com/questions/72363212/ |
How to not break differentiability with a model's output? | I have an autoregressive language model in Pytorch that generates text, which is a collection of sentences, given one input:
output_text = ["sentence_1. sentence_2. sentence_3. sentence_4."]
Note that the output of the language model is in the form of logits (probabilities over the vocabulary), which can be converted to token IDs or strings.
Some of these sentences need to go into another model to get a loss that should affect only those sentences:
loss1 = model2("sentence_2")
loss2 = model2("sentence_4")
loss_total = loss1+loss2
What is the correct way to break/split the generated text from the first model without breaking differentiability? That is, so the corresponding text (from above) will look like a pytorch tensor of tensors (in order to then use some of them in the next model):
"[["sentence_1."]
["sentence_2."]
["sentence_3."]
["sentence_4."]]
For example, Python's split(".") method will most likely break differentiability, but will allow me to take each individual sentence and insert it into the second model to get a loss.
| Okay, solved it. Posting the answer for completeness.
Since the output is in the form of logits, I can take the argmax to get the indices of each token. This should allow me to know where each period is (to know where the end of the sentence is). I can then split the sentences in the following way to maintain the gradients:
sentences_list = []
r = torch.rand(50) #imagine that this is the output logits (though instead of a tensor of values it will be a tensor of tensors)
period_indices = [10,30,49]
sentences_list.append(r[0:10])
sentences_list.append(r[10:30])
sentences_list.append(r[30:])
Now each element in sentences_list is a sentence that I can send to another model to get a loss.
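A sketch that generalizes the manual slicing above (here each chunk includes its period index; the exact boundary convention is up to you):
import torch

r = torch.rand(50, requires_grad=True)  # stand-in for the output logits
period_indices = [10, 30, 49]           # positions of the period tokens

# Slicing is differentiable, so gradients flow through each sentence chunk.
sentences_list, start = [], 0
for end in period_indices:
    sentences_list.append(r[start:end + 1])  # chunk includes its period token
    start = end + 1

loss = sentences_list[1].sum() + sentences_list[2].sum()  # losses on selected sentences
loss.backward()                  # gradients reach r only through the selected chunks
print(r.grad[:11].abs().sum())   # zero: the first sentence was not part of the loss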
| https://stackoverflow.com/questions/72368294/ |
How to write many similar python scripts at once? | I need to have 100 similar Python scripts whose MyData classes run from MyData_1 to MyData_100.
import torch
import numpy as np
from torch_geometric.data import InMemoryDataset, Data
from torch_geometric.utils import to_undirected
class MyData_1(InMemoryDataset):
    def __init__(self, root, transform=None):
        super(MyData_1, self).__init__(root, transform)
        self.data, self.slices = torch.load(self.processed_paths[0])

    @property
    def raw_file_names(self):
        return "mydata_1.npz"

    @property
    def processed_file_names(self):
        return "data_1.pt"

    def process(self):
        raw_data = np.load(self.raw_paths[0])
        cluster_data = torch.load('./raw/total_clusters.pt')
        x = torch.from_numpy(raw_data['x'])
        y = torch.from_numpy(raw_data['y'])
        pos = torch.stack([x, y], dim=-1)
        cp = torch.from_numpy(raw_data['cp'])

        data_list = []
        for i in range(cp.size(0)):
            data = Data(x=cp[i].view(-1, 1), pos=pos.view(-1, 2), cluster=cluster_data[0])
            data_list.append(data)
        torch.save(self.collate(data_list), self.processed_paths[0])
I'm trying to do this because each MyData class loads a different file (mydata_1.npz through mydata_100.npz) to generate its dataset.
Is there any way to make this fast?
Thanks in advance!
| I didn't fully understand the reason why you need to create 100 different classes.
Is it because you need to return mydata_1.npz to mydata_100.npz? If so, you can create a single class like this:
class Myclass:
    def __init__(self, index):
        self.index = index

    def raw_file_names(self):
        return "mydata_{}.npz".format(self.index)
Then, in another script such as main.py, you can create/assign instances like:
for i in range(100):
    exec('dataset_{} = MyData_{}({})'.format(i, i, i))
I believe you can build your own code that fits your problem with above examples.
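As a side note, a plain list avoids exec entirely; a minimal sketch using the Myclass defined above:
datasets = [Myclass(i) for i in range(1, 101)]  # one instance per data file
print(datasets[0].raw_file_names())             # "mydata_1.npz"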
| https://stackoverflow.com/questions/72371704/ |
How to free all GPU memory from pytorch.load? | This code fills some GPU memory and doesn't let it go:
def checkpoint_mem(model_name):
    checkpoint = torch.load(model_name)
    del checkpoint
    torch.cuda.empty_cache()
Printing memory with the following code:
print(torch.cuda.memory_reserved(0))
print(torch.cuda.memory_allocated(0))
shows BEFORE running checkpoint_mem:
0
0
and AFTER:
121634816
97332224
This is with torch.__version__ 1.11.0+cu113 on Google colab.
Does torch.load leak memory? How can I get the GPU memory completely cleared?
| It probably doesn't. Also, it depends on what you call a memory leak. In this case, after the program ends, all memory should be freed; Python has a garbage collector, so freeing might not happen immediately (after your del, or after leaving the scope) the way it does in C++ or similar languages with RAII.
del
del is called by Python and only removes the reference (same as when the object goes out of scope in your function).
torch.nn.Module does not implement __del__, hence its reference is simply removed.
All of the elements within torch.nn.Module have their references removed recursively (so for each CUDA torch.Tensor instance their __del__ is called).
del on each tensor is a call to release memory
More about __del__
Caching allocator
Another thing: the caching allocator occupies part of the memory so it doesn't have to compete with other apps for CUDA memory when you are going to use it.
Also, I assume PyTorch is loaded lazily, hence you get 0 MB used at the very beginning, but AFAIK PyTorch itself, during startup, reserves some part of CUDA memory.
The short story is given here, longer one here in case you didn’t see it already.
Possible experiments
You may try to run time.sleep(5) after your function and measure afterwards.
You can get snapshot of the allocator state via torch.cuda.memory_snapshot to get more info about allocator’s reserved memory and inner workings.
You might set the environment variable PYTORCH_NO_CUDA_MEMORY_CACHING=1 and see whether anything changes (a combined sketch of these experiments follows below).
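A rough sketch of the experiments above ("model.pt" is a hypothetical checkpoint path; set PYTORCH_NO_CUDA_MEMORY_CACHING=1 in the environment before launching to test the last point):
import time
import torch

torch.cuda.init()
before = torch.cuda.memory_reserved(0)

checkpoint = torch.load("model.pt", map_location="cuda")
del checkpoint
torch.cuda.empty_cache()
time.sleep(5)  # give the allocator a moment before measuring

print(before, torch.cuda.memory_reserved(0))
snapshot = torch.cuda.memory_snapshot()  # the allocator's internal state
print(len(snapshot), "memory segments reserved")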
Disclaimer
Not a CUDA expert by any means, so someone with more insight could probably expand (and/or correct) my current understanding as I am sure way more things happen under the hood.
| https://stackoverflow.com/questions/72380592/ |
Torch neural network does not train | I have implemented a very simple neural network in the torch framework
def mlp(sizes, activation, output_activation=torch.nn.Identity):
    layers = []
    for j in range(len(sizes)-1):
        act = activation if j < len(sizes)-1 else output_activation
        layers += [torch.nn.Linear(sizes[j], sizes[j+1]), act()]
    return torch.nn.Sequential(*layers)
In order to train a network to make regression on the function y=sin(x)
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)
the training code is here
size = [1,20,20,1]
activation = torch.nn.ReLU
model = mlp(size, activation)
optimizer = torch.optim.SGD(model.parameters(), lr=0.002)
n_epoch = 600
mse_loss = torch.nn.MSELoss()
X = x.unsqueeze(-1)
for i in range(n_epoch):
    y_pred = model(X)
    step_loss = mse_loss(y_pred, y)
    optimizer.zero_grad()
    step_loss.backward()
    optimizer.step()
Unfortunately, the network only learns an almost constant function $y=0$.
I have already tried many things
Change Hyperparameters of the network
Add mini batches in training
Change the number of epochs and learning rate
But nothing seems to work. The problem is so simple that I think there is an error in the code.
| I am not sure if this is the main cause, but the statement
act = activation if j < len(sizes)-1 else output_activation
appears to be logically incorrect. In the loop, j takes values from 0 to len(sizes)-2, so the condition j < len(sizes)-1 is always true. This means that your network has a ReLU right at the end, and so can only ever give non-negative outputs. This can be corrected by changing that statement to:
act = activation if j < len(sizes)-2 else output_activation
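For completeness, the corrected builder:
import torch

def mlp(sizes, activation, output_activation=torch.nn.Identity):
    layers = []
    for j in range(len(sizes)-1):
        # Only the last layer (j == len(sizes)-2) gets the output activation.
        act = activation if j < len(sizes)-2 else output_activation
        layers += [torch.nn.Linear(sizes[j], sizes[j+1]), act()]
    return torch.nn.Sequential(*layers)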
| https://stackoverflow.com/questions/72383323/ |
How to apply differential privacy on list of data? | How to apply differential privacy on a list of data.
OpenMined released a differential privacy project called PyDP 2 years ago.
In the examples provided, they showed how to use PyDP on data by computing statistical features such as the mean, max, and median.
Is there a way to apply differential privacy to a list of data and get the (noised) list back, without computing any statistical feature?
e.g. input_list = [1.03,2.23,3.058,4.97]
out_put_differential_privacy_list = dp_function(input_list)
out_put_differential_privacy_list
>> [1.01,2.03,3.8,4.04]
How is the noise added to the data (they use the Laplace mechanism)?
Is the noise added taking into account the whole data set, or is it added considering each single value at a time?
I couldn't find the github code for pydp.algorithms.laplacian.
These are the statistical features they showed how to compute.
from pydp.algorithms.laplacian import (
    BoundedSum,
    BoundedMean,
    BoundedStandardDeviation,
    Count,
    Max,
    Min,
    Median,
)
Are there also functions to compute differentially private percentiles?
Any other resources will also be welcome.
| Here are my two cents on the question,
The idea of differential privacy is to publish aggregated information about sensitive values only if noise is added to the aggregated info. This will in turn make it infeasible to match sensitive values to their owners, and also make the output not highly dependent on any particular entry in the dataset.
The way noise is added in differential privacy is by injecting Laplace noise into each piece of data at a time, which in turn adds noise to the overall dataset. The essential idea of DP is the following:
A(D,f) = f(D) + noise
A = some randomised algorithm
This is to ensure that the result each time will be slightly different.
f = the query function; its sensitivity (delta f) determines to what degree an individual piece of data can affect the output.
D = the dataset you want to 'mask', the overall thing, in your case it would be the list of numbers.
noise = Laplace noise with scale lambda = delta f / epsilon (which equals 1/epsilon when the sensitivity is 1)
The epsilon value here indicates the privacy loss incurred by adding/removing an entry from the dataset, i.e. making adjustments to the dataset. The smaller the epsilon, the less privacy loss from such adjustments, which means better protection for privacy.
And as you can see now, the noise depends only on the sensitivity and the epsilon value, and has nothing to do with the underlying dataset.
... they showed how to compute the PyDP on the data by computing some statistical features such as the mean, Max, Median.
Let's say, for example, we have a bunch of numbers like you have here. We could first find the max number in the list, which would be 4.97, then draw the noise eta from Lap(4.97/epsilon). I believe the idea is to anchor the noise scale to some statistical feature of the data.
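A rough sketch of that per-value Laplace mechanism (dp_function is the hypothetical name from the question; taking the sensitivity from the data itself is a simplification and not strictly private):
import numpy as np

def dp_function(values, epsilon=1.0):
    # Simplification: sensitivity taken as max(values), mirroring the 4.97
    # example above. A real mechanism must fix the sensitivity independently
    # of the data, otherwise the privacy guarantee does not hold.
    sensitivity = max(values)
    return [v + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
            for v in values]

print(dp_function([1.03, 2.23, 3.058, 4.97]))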
Hope this is somewhat useful :)
| https://stackoverflow.com/questions/72390974/ |
Adding in the weight parameter for PyTorch's cross-entropy loss causes datatype RuntimeError | I'm currently using PyTorch to train a neural network. The dataset that I'm using is a binary classification dataset with a large number of 0's.
I decided to try and use the weight parameter of PyTorch's cross-entropy loss. I calculated the weights via sklearn.utils.class_weight.compute_class_weight and got weight values of [0.58479532, 3.44827586].
When I added this class_weights tensor into the weight parameter of my loss (i.e., criterion = nn.CrossEntropyLoss(weight=class_weights)), I'm suddenly getting a RuntimeError: expected scalar type Float but found Double. The outputs and labels that I'm feeding into my loss are of types float32 and int64, respectively. The loss was working fine, but when I add the weight parameter I'm getting this error. Attempting to cast my data via outputs.float() doesn't seem to work either.
Does anybody know why this error may be occurring and how I might fix this? Thanks.
| You can create a tensor from your weights as follows.
Also, remember to match the devices between the weights and the rest of your tensors.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
weights = torch.tensor([0.58479532, 3.44827586],dtype=torch.float32).to(device)
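Then plug it into the loss as before (outputs and labels here stand for the tensors from your training loop, assumed to live on the same device):
import torch.nn as nn

criterion = nn.CrossEntropyLoss(weight=weights)
loss = criterion(outputs, labels)  # outputs: float32 logits, labels: int64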
| https://stackoverflow.com/questions/72400142/ |
How to train pytorch model using large data file while using Dataloader? | I am using pytorch Dataset class and Dataloader to load data. The class and loader looks like the following.
class Dataset(Dataset):
    def __init__(self):
        self.input_and_label = json.load(open(path_to_large_json_file)) # Large file
        self.dataset_size = len(self.input_and_label)

    def __getitem__(self, index):
        # Convert to tensor
        input_data = torch.LongTensor(self.input_and_label[index][0]) # Input X
        label_data = torch.LongTensor(self.input_and_label[index][1]) # Label y
        return input_data, label_data

    def __len__(self):
        return self.dataset_size
And the iterator is generated like,
train_loader = torch.utils.data.DataLoader(
    Dataset(),
    # Batch size
    batch_size = 8, # This is expected to be large, 8 is for trial -- didn't work
    shuffle = True,
    pin_memory = False #True
)
The data file is a large JSON file, but I am getting a memory error:
<RuntimeError: CUDA out of memory. Tried to allocate... ... ... >
Note:
The large json file content is list of numbers like,
[0, 1 , 0, 0,..... 4000 numbers] <-- this is the input_data
[0, 2, 2, ... 50 numbers ] <-- this is the label
So, probably batch size 8 (that means 8 such pairs), or 800 ... should not matter much
Can someone please help me: how can I get the iterator without loading the large file at once? Or any other solution is welcome. Thank you very much for your support.
| You get a CUDA OOM error; it is not related to the file itself being large, but to a single example being large.
The JSON file loads correctly to RAM, but 8 examples cannot fit on your GPU (which is often the case for images/videos, especially with high resolution).
Solutions
Use a larger GPU (e.g. cloud provided)
Use a smaller batch size (even of size 1) and use gradient accumulation (a sketch follows below). It will run slowly, but the results will be as good as those of larger batches.
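A sketch of gradient accumulation (model, criterion, optimizer and train_loader are the objects from your setup):
accum_steps = 8  # with batch_size=1 in the loader, the effective batch size is 8
optimizer.zero_grad()
for step, (inputs, labels) in enumerate(train_loader):
    inputs, labels = inputs.cuda(), labels.cuda()
    loss = criterion(model(inputs), labels) / accum_steps  # scale to keep the average
    loss.backward()                                        # gradients accumulate across steps
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()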
| https://stackoverflow.com/questions/72410553/ |