instruction | input | output | source
---|---|---|---|
Alternate training between two datasets | I am trying to alternate between an augmented and a non-augmented dataset in each epoch (for example: augmented in one epoch, non-augmented in the next), but I couldn't figure out how to do it. My approach was re-creating the DataLoader in every epoch, but I think it's wrong, because when I print the indices in __getitem__ in my Dataset class, there are a lot of duplicate indices.
Here is my code for training:
for i in range(epoch):
train_loss = 0.0
valid_loss = 0.0
since = time.time()
scheduler.step(i)
lr = scheduler.get_lr()
#######################################################
#Training Data
#######################################################
model_test.train()
k = 1
tx=""
lx=""
random_ = random.randint(0,1)
print("QPQPQPQPQPQPQPQPPQPQ")
print(random_)
print("QPQPQPQPQPQPQPQPPQPQ")
if random_== 0:
tx = torchvision.transforms.Compose([
# torchvision.transforms.Resize((128,128)),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])])
lx = torchvision.transforms.Compose([
# torchvision.transforms.Resize((128,128)),
torchvision.transforms.Grayscale(),
torchvision.transforms.ToTensor(),
# torchvision.transforms.Lambda(lambda x: torch.cat([x, 1 - x], dim=0))
])
else:
tx = torchvision.transforms.Compose([
# torchvision.transforms.Resize((128,128)),
torchvision.transforms.CenterCrop(96),
torchvision.transforms.RandomRotation((-10, 10)),
# torchvision.transforms.RandomHorizontalFlip(),
torchvision.transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
])
lx = torchvision.transforms.Compose([
# torchvision.transforms.Resize((128,128)),
torchvision.transforms.CenterCrop(96),
torchvision.transforms.RandomRotation((-10, 10)),
torchvision.transforms.Grayscale(),
torchvision.transforms.ToTensor(),
# torchvision.transforms.Lambda(lambda x: torch.cat([x, 1 - x], dim=0))
])
Training_Data = Images_Dataset_folder(t_data,
l_data,tx,lx)
train_loader = torch.utils.data.DataLoader(Training_Data, batch_size=batch_size, sampler=train_sampler,
num_workers=num_workers, pin_memory=pin_memory,)
valid_loader = torch.utils.data.DataLoader(Training_Data, batch_size=batch_size, sampler=valid_sampler,
num_workers=num_workers, pin_memory=pin_memory,)
for x,y in train_loader:
x, y = x.to(device), y.to(device)
#If want to get the input images with their Augmentation - To check the data flowing in net
input_images(x, y, i, n_iter, k)
# grid_img = torchvision.utils.make_grid(x)
#writer1.add_image('images', grid_img, 0)
# grid_lab = torchvision.utils.make_grid(y)
opt.zero_grad()
y_pred = model_test(x)
lossT = calc_loss(y_pred, y) # Dice_loss Used
train_loss += lossT.item() * x.size(0)
lossT.backward()
# plot_grad_flow(model_test.named_parameters(), n_iter)
opt.step()
x_size = lossT.item() * x.size(0)
k = 2
Here is my code for the dataset:
def __init__(self, images_dir, labels_dir, transformI=None,
transformM=None):
self.images = sorted(os.listdir(images_dir))
self.labels = sorted(os.listdir(labels_dir))
self.images_dir = images_dir
self.labels_dir = labels_dir
self.transformI = transformI
self.transformM = transformM
self.tx=self.transformI
self.lx=self.transformM
def __len__(self):
return len(self.images)
def __getitem__(self, i):
with open("/content/x.txt", "a") as o:
o.write(str(i)+"\n")
i1 = Image.open(self.images_dir + self.images[i])
l1 = Image.open(self.labels_dir + self.labels[i])
seed = np.random.randint(0, 2 ** 32) # make a seed with numpy generator
# apply this seed to img tranfsorms
random.seed(seed)
torch.manual_seed(seed)
img = self.tx(i1)
# apply this seed to target/label tranfsorms
random.seed(seed)
torch.manual_seed(seed)
label = self.lx(l1)
return img, label
How can I achieve what I want?
Thanks in advance.
| Instantiating a dataset and data loader for each epoch doesn't seem to be the way to go. Instead, you may want to instantiate two sets of dataset + data loader, each one with its corresponding augmentation pipeline.
Here is an example to give you a basic frame:
Start by defining the transformation pipelines inside the dataset itself (T below is the usual alias: import torchvision.transforms as T):
class Images_Dataset_folder(Dataset):
def __init__(self, images_dir, labels_dir, augment=False):
super().__init__()
self.tx, self.lx = self._augmentations() if augment else self._no_augmentations()
def __len__(self):
pass
def __getitem__(self, i):
pass
def _no_augmentations(self):
tx = T.Compose([
T.ToTensor(),
T.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])])
lx = T.Compose([
T.Grayscale(),
T.ToTensor()])
return tx, lx
def _augmentations(self):
tx = T.Compose([
T.CenterCrop(96),
T.RandomRotation((-10, 10)),
T.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
T.ToTensor(),
T.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])])
lx = T.Compose([
T.CenterCrop(96),
T.RandomRotation((-10, 10)),
T.Grayscale(),
T.ToTensor()])
return tx, lx
Then you can construct your training loop as:
# augmented images dataset
aug_trainset = Images_Dataset_folder(t_data, l_data, augment=True)
aug_dataloader= DataLoader(aug_trainset, batch_size=batch_size)
# unaugmented images dataset
unaug_trainset = Images_Dataset_folder(t_data, l_data, augment=False)
unaug_dataloader = DataLoader(unaug_trainset, batch_size=batch_size)
# on each epoch you go through the
for i in range(epochs//2):
# call train loop on augmented data loader
train(model, aug_dataloader)
# call train loop with un-augmented data loader
train(model, unaug_dataloader )
This being said, you will essentially loop over the dataset twice per pass through that loop: once with augmented images and a second time with unaugmented images.
If you want to iterate only once, then the easiest solution I can come up with is having a random flag inside __getitem__ that decides whether or not the current image gets augmented.
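A minimal sketch of that flag (assuming the dataset stores both pipelines; the attribute names plain_tx and plain_lx are illustrative, not from the original code):
def __getitem__(self, i):
    img = Image.open(self.images_dir + self.images[i])
    label = Image.open(self.labels_dir + self.labels[i])
    if random.random() < 0.5:  # augment roughly half of the samples
        return self.tx(img), self.lx(label)
    return self.plain_tx(img), self.plain_lx(label)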
Side note: you wouldn't want to use train data in your validation set!
| https://stackoverflow.com/questions/70813287/ |
Element-wise matrix vector multiplication | I have a tensor m which stores n 3 x 3 matrices with dim n x 3 x 3 and a tensor v with n 3x1 vectors and dim n x 3. How can I apply element-wise matrix-vector multiplication, i.e. multiply the i-th matrix with the i-th vector, to get an output tensor with dim n x 3?
Thanks for your help.
| You want to perform a matrix multiplication operation (__matmul__) in a batch-wise manner. Intuitively you can use the batch-matmul operator torch.bmm. Keep in mind you first need to unsqueeze one dimension on v such that it becomes a 3D tensor. In this case indexing the last dimension with None as v[..., None] will provide a shape of (n, 3, 1).
With torch.bmm:
>>> torch.bmm(m, v[..., None])
As it turns out, torch.matmul handles this case out-of-the-box:
>>> torch.matmul(m, v[..., None]) # same as m@v[..., None]
If you want explicit control over the operation, you can go with torch.einsum:
>>> torch.einsum('bij,bj->bi', m, v)
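A quick self-contained check of all three variants; note that bmm and matmul return shape (n, 3, 1), so the trailing singleton dimension has to be squeezed to obtain the desired n x 3 result:
import torch

n = 4
m = torch.randn(n, 3, 3)
v = torch.randn(n, 3)

out_bmm = torch.bmm(m, v[..., None]).squeeze(-1)   # (n, 3)
out_matmul = (m @ v[..., None]).squeeze(-1)        # (n, 3)
out_einsum = torch.einsum('bij,bj->bi', m, v)      # (n, 3)

assert torch.allclose(out_bmm, out_matmul)
assert torch.allclose(out_bmm, out_einsum)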
| https://stackoverflow.com/questions/70823447/ |
Split features, preprocess some of them, then join them back together. (hangs forever) | I'm trying to feed all the features (except the first one) to some layers (nn.Linear + nn.LeakyReLU), get the output, then reassemble the initial data structure and feed it to the last layers. But the training process just hangs forever and I don't get any output.
To be clear, the code works fine without this, but I'm trying to improve the results by preprocessing some of the features before feeding them (with the unprocessed first feature) to the last layer.
Any help would be much appreciated.
Here is my code:
def forward(self, x):
# save the residual for the skip connection
res = x[:, :, 0:self.skip]
xSignal = np.zeros((len(x),len(x[0]),1))
xParams = np.zeros((len(x),len(x[0]),len(x[0][0])-1))
# separate data
for b in range(len(x)):
for c in range(len(x[b])):
for d in range(len(x[b][c])):
if d == 0:
xSignal[b][c][d] = x[b][c][d]
else:
xParams[b][c][d-1] = x[b][c][d]
# pass parameters through first network
xParams = torch.from_numpy(xParams).cuda().float()
xParams = self.paramsLinear(xParams)
xParams = self.paramsLeakyRelu(xParams)
# make new array with output and the signal
xConcat = np.zeros((len(x),len(x[0]),len(x[0][0])))
for b in range(len(x)):
for c in range(len(x[b])):
for d in range(len(x[b][c])):
if d == 0:
xConcat[b][c][d] = xSignal[b][c][d]
else:
xConcat[b][c][d] = xParams[b][c][d-1]
# convert to tensor
xConcat = torch.from_numpy(xConcat).cuda().float()
# pass it through the recurrent part
xConcat, self.hidden = self.rec(xConcat, self.hidden)
# then the linear part and return
return self.lin(xConcat) + res
| Well, as it turns out, slicing is WAY faster and easier than iterating. I also used the torch.cat function to put everything back into one tensor.
def forward(self, x):
# save the residual for the skip connection
res = x[:, :, 0:self.skip]
# split features
xSignal = x[:, :, 0:1]
xParams = x[:, :, 1:]
# pass only some features through first layers
xParams = self.paramsLinear(xParams)
xParams = self.paramsLeakyRelu(xParams)
# put everything back together
x = torch.cat((xSignal, xParams), 2)
# pass it through the last layers
x, self.hidden = self.rec(x, self.hidden)
# then the linear part and return
return self.lin(x) + res
It's now training as expected :)
| https://stackoverflow.com/questions/70828018/ |
RuntimeError: Input type and weight type should be the same | I'm trying to swap ResNet blocks for ResNeXt blocks in my current model. Everything worked and I even trained the model for 1000+ epochs with the ResNet blocks, but when I added the following class to the model, it returned this error (it ran without errors on my local CPU but raised the error when running in Colab).
Added Class :
class GroupConv1D(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, padding, stride, groups):
super(GroupConv1D, self).__init__()
if not in_channels % groups == 0:
raise ValueError("The input channels must be divisible by the no. of groups")
if not out_channels % groups == 0:
raise ValueError("The output channels must be divisible by the no. of groups")
self.kernel_size = kernel_size
self.stride = stride
self.padding = padding
self.groups = groups
self.group_in_num = in_channels // groups
self.group_out_num = out_channels // groups
self.conv_list = []
for i in range(self.groups):
self.conv_list.append(
nn.Conv1d(
in_channels=self.group_out_num,
out_channels=self.group_out_num,
kernel_size=kernel_size,
stride=stride,
padding=padding)
)
def forward(self, inputs):
feature_map_list = []
for i in range(self.groups):
x_i = self.conv_list[i](
inputs[:, i * self.group_in_num: (i + 1) * self.group_in_num]
)
feature_map_list.append(x_i)
out = torch.concat(feature_map_list, dim=1)
return out
The Error :
Traceback (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/content/drive/MyDrive/FYPprototypeTest2/train.py", line 268, in <module>
cycleGAN.trainModel()
File "/content/drive/MyDrive/FYPprototypeTest2/train.py", line 140, in trainModel
B_fake = self.A_generator_B(A_real, A_mask)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1102, in
_call_impl
return forward_call(*input, **kwargs)
File "/content/drive/MyDrive/FYPprototypeTest2/model.py", line 235, in forward
resnet_block_1 = self.resnet_block_1(conv2d_conv1d)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1102, in
_call_impl
return forward_call(*input, **kwargs)
File "/content/drive/MyDrive/FYPprototypeTest2/model.py", line 88, in forward
group_layer = self.groupConv_1(layer_one_GLU)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1102, in
_call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/container.py", line 141, in
forward
input = module(input)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1102, in
_call_impl
return forward_call(*input, **kwargs)
File "/content/drive/MyDrive/FYPprototypeTest2/model.py", line 46, in forward
inputs[:, i * self.group_in_num: (i + 1) * self.group_in_num]
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1102, in
_call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py", line 301, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py", line 298, in
_conv_forward
self.padding, self.dilation, self.groups)
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should
be the same
Help would be hugely appreciated.
| Your problem in your new class GroupConv1D is that you store all your convolution modules in a regular Python list self.conv_list instead of using nn containers.
All methods that affect nn.Modules (e.g., .to(device), .eval(), etc.) are applied recursively to all relevant members of the "root" nn.Module.
However, how can PyTorch tell which are the relevant members?
For this you have containers: they group together sub-modules, buffers and parameters such that PyTorch can recursively apply all relevant nn.Module methods to them.
See, e.g., this answer.
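A sketch of the fix in __init__ (only the container type changes; everything else stays the same):
self.conv_list = nn.ModuleList()  # registered container instead of a plain Python list
for i in range(self.groups):
    self.conv_list.append(
        nn.Conv1d(in_channels=self.group_in_num,  # note: matches the slice width used in forward
                  out_channels=self.group_out_num,
                  kernel_size=kernel_size,
                  stride=stride,
                  padding=padding))
With this, model.to(device) (or .cuda()) moves the group convolutions' weights to the GPU along with the rest of the model, which resolves the input/weight type mismatch.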
| https://stackoverflow.com/questions/70829410/ |
Can't connect to GPU when building PyTorch projects | Before this, I was able to connect to the GPU through CUDA runtime version 10.2. But then I ran into an error when setting up one of my projects.
Using torch 1.10.1+cu102 (NVIDIA GeForce RTX 3080)
UserWarning:
NVIDIA GeForce RTX 3080 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
After some reading, it seems that sm_86 is only supported from CUDA version 11.0 and above. That's why I upgraded to the latest CUDA version, and I haven't been able to connect to the GPU since.
I have tried many ways, reinstalling cuda toolkit, PyTorch, torchvision and stuff but nothing works.
CUDA Toolkit I've used:
$ wget https://developer.download.nvidia.com/compute/cuda/11.6.0/local_installers/cuda_11.6.0_510.39.01_linux.run
$ sudo sh cuda_11.6.0_510.39.01_linux.run
PyTorch I've installed (tried both conda and pip):
$ conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
$ pip3 install torch==1.10.1+cu113 torchvision==0.11.2+cu113 torchaudio==0.10.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
These are some basic info:
(base) ubuntu@DESKTOP:~$ python
Python 3.9.5 (default, Jun 4 2021, 12:28:51)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.__version__
'1.10.1+cu113'
>>> x = torch.rand(6,6)
>>> print(x)
tensor([[0.0228, 0.3868, 0.9742, 0.2234, 0.5682, 0.7747],
[0.2643, 0.3911, 0.3464, 0.5072, 0.4041, 0.4268],
[0.2247, 0.0936, 0.4250, 0.1128, 0.0261, 0.5199],
[0.0224, 0.7463, 0.1391, 0.8092, 0.3742, 0.2054],
[0.3951, 0.4205, 0.6270, 0.4561, 0.4784, 0.5958],
[0.8430, 0.5078, 0.7759, 0.5266, 0.4925, 0.7557]])
>>> torch.cuda.get_arch_list()
[]
>>> torch.cuda.is_available()
False
>>> torch.version.cuda
'11.3'
>>> torch.cuda.device_count()
0
Below are my configurations.
(base) ubuntu@DESKTOP:~$ ls -l /usr/local/ | grep cuda
lrwxrwxrwx 1 root root 21 Jan 24 13:47 cuda -> /usr/local/cuda-11.3/
lrwxrwxrwx 1 root root 25 Jan 17 10:52 cuda-11 -> /etc/alternatives/cuda-11
drwxr-xr-x 17 root root 4096 Jan 24 13:48 cuda-11.3
drwxr-xr-x 18 root root 4096 Jan 24 10:17 cuda-11.6
ubuntu version:
(base) ubuntu@DESKTOP:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.3 LTS
Release: 20.04
Codename: focal
nvidia-smi:
(base) ubuntu@DESKTOP:~$ nvidia-smi
Mon Jan 24 17:22:42 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.39.01 Driver Version: 511.23 CUDA Version: 11.6 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... On | 00000000:02:00.0 Off | N/A |
| 0% 26C P8 5W / 320W | 106MiB / 10240MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 4009 G /Xorg N/A |
| 0 N/A N/A 4025 G /xfce4-session N/A |
| 0 N/A N/A 4092 G /xfwm4 N/A |
| 0 N/A N/A 25903 G /msedge N/A |
+-----------------------------------------------------------------------------+
nvcc --version:
(base) ubuntu@DESKTOP:~$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Sun_Mar_21_19:15:46_PDT_2021
Cuda compilation tools, release 11.3, V11.3.58
Build cuda_11.3.r11.3/compiler.29745058_0
| I'm answering my own question.
PyTorch pip wheels and Conda binaries ship with their own CUDA runtime.
But they do not come with NVCC, which needs to be installed separately (e.g. from conda-forge/cudatoolkit-dev), and that is very troublesome during installation.
So what I did was install NVCC from the NVIDIA CUDA toolkit:
$ wget https://developer.download.nvidia.com/compute/cuda/11.6.0/local_installers/cuda_11.6.0_510.39.01_linux.run
And the Conda PyTorch GPU version:
$ conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
As it turns out, the two installations are not compatible with each other.
Therefore, the steps I took to solve this issue:
Remove any Conda environments in Ubuntu.
Clean the pip and Conda package lists until no PyTorch, torchvision, CUDA, etc. packages remain.
Install the NVIDIA CUDA toolkit cuda_11.3 first, from NVIDIA's official website.
Reinstall PyTorch only. $ pip3 install torch==1.10.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
| https://stackoverflow.com/questions/70831932/ |
Applying non-torch function on loss before calling backward()? | I want to apply a custom non-torch function to the final calculated loss before computing the gradients (calling backward()). An example would be to replace the torch.mean() on the loss vector with a custom Pythonic, non-torch mean function. But doing so will break the computation graph. I cannot rewrite the custom mean function using torch operators, and I am at a loss as to how to do this. Any suggestions?
| In pytorch you can easily do this by inheriting from torch.autograd.Function: All you need to do is implement your custom forward() and the corresponding backward() methods. Because I don't know the function you intend to write, I'll demonstrate it by implementing the sine function in a way that works with the automatic differentiation. Note that you need to have a method to compute the derivative of your function with respect to its input to implement the backward pass.
import torch
class MySin(torch.autograd.Function):
@staticmethod
def forward(ctx, inp):
""" compute forward pass of custom function """
ctx.save_for_backward(inp) # save activation for backward pass
return inp.sin() # compute forward pass, can also be computed by any other library
@staticmethod
def backward(ctx, grad_out):
""" compute product of output gradient with the
jacobian of your function evaluated at input """
inp, = ctx.saved_tensors
grad_inp = grad_out * torch.cos(inp) # propagate gradient, can also be computed by any other library
return grad_inp
To use it you can use the function sin = MySin.apply on your input.
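A short usage sketch, checking the custom backward pass against the analytic derivative:
sin = MySin.apply
x = torch.linspace(0, 3, 5, requires_grad=True)
sin(x).sum().backward()
print(torch.allclose(x.grad, torch.cos(x.detach())))  # True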
There is also another example worked out in the documentation.
| https://stackoverflow.com/questions/70833287/ |
Adding image size as second input to existing PyTorch model | I am using pretrained torchvision models in PyTorch and transfer learning to classify my own data set. That is working fine, but I think I could further improve my classification performance. Our images come in different dimensions; all of them are resized to fit the input of my model (e.g. to 224x224 pixels).
However, the original image size often says a lot about the class this image belongs to. So I thought it might help the model to add the original image dimensions as a second input.
Currently I build my model in PyTorch like this:
model = resnet50(pretrained=True) # Could be another base model as well
for module, param in zip(model.modules(), model.parameters()):
if isinstance(module, nn.BatchNorm2d):
param.requires_grad = False
model.fc = nn.Sequential(
nn.Linear(2048, 512),
nn.ReLU(),
nn.Dropout(0.25),
nn.Linear(512, 256),
nn.ReLU(),
nn.Dropout(0.25),
nn.Linear(256, num_classes),
)
Now how would I add another (two-dimensional?) input to that model so that I can feed x and y dimensions of the original image to the model? Also, where does that make most sense - directly into the "beginning" of the model, or better somewhere "in between"?
| One way to inject the extra data into the model is directly at the linear layers.
This has the drawback of not affecting the conv layers.
Note that I injected it before the final layer, but this can go into any layer.
model.start = nn.Sequential(
nn.Linear(2048, 512),
nn.ReLU(),
nn.Dropout(0.25),
nn.Linear(512, 256),
nn.ReLU(),
nn.Dropout(0.25),
)
model.end = nn.Sequential(
nn.Linear(256 + 2, num_classes),
)
and your forward should be (pseudocode) something like
def forward(x):
x1 = model.start(x)
mid = torch.cat([x1, extra_2d_data], dim=1)  # concatenate the hidden features (not x) with the extra input
x2 = model.end(mid)
return x2
See also this
| https://stackoverflow.com/questions/70835614/ |
Kernel Size for 3D Convolution | The kernel size of 3D convolution is defined using depth, height and width in Pytorch or TensorFlow. For example, if we consider a CT/MRI image data with 300 slices, the input tensor can be (1,1,300,128,128), corresponding to (N,C,D,H,W). Then, the kernel size can be (3,3,3) for depth, height and width. When doing 3D convolution, the kernel is passed in 3 directions.
However, I was confused if we change the situation from CT/MRI to a colourful video. Let the video has 300 frames, then the input tensor will be (1,3,300,128,128) because of 3 channels for RGB images. I know that for a single RGB image, the kernel size can be 3X3X3 for channels, height and width. But when it comes to a video, it seems both Pytorch and Tensorflow still use depth, height and width to set the kernel size. My question is, if we still use a kernel of (3,3,3), is there a potential fourth dimension for the colour channels?
| Yes.
Actually the convolution operation occurring in a CNN is one dimension higher than its namesake. The channel dimension is always spanned by the entire kernel though, so there's no sliding along the channel dimension. For example, a 2D convolution layer with kernel size set to 5x5 applied to a 3 channel input is actually using a kernel of shape 3x5x5 (assuming channel first notation). Each output channel is the result of convolving the input with a different 3x5x5 kernel, so there is one of these 3x5x5 kernels for each output channel.
This is the same for videos. A 3D convolution layer is actually performing a 4D convolution in the same way. So an input of shape 1x3x300x128x128 with kernel size set to 3x3x3 will actually be performing 4D convolutions with kernels of shape 3x3x3x3.
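You can verify this directly from the weight shapes; the layout is (out_channels, in_channels, k_d, k_h, k_w):
import torch.nn as nn

conv2d = nn.Conv2d(in_channels=3, out_channels=10, kernel_size=5)
print(conv2d.weight.shape)  # torch.Size([10, 3, 5, 5])

conv3d = nn.Conv3d(in_channels=3, out_channels=8, kernel_size=3)
print(conv3d.weight.shape)  # torch.Size([8, 3, 3, 3, 3])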
| https://stackoverflow.com/questions/70841365/ |
PyTorch - How to process model input in parallel? | Consider the following simple neural net:
from time import sleep
import torch

class CustomNN(torch.nn.Module):
def __init__(self):
super(CustomNN, self).__init__()
def forward(self, x):
sleep(1)
return x
I am wondering if we can call forward() in parallel. Following the official tutorial, I thought that the following code would work:
x = torch.rand(10, 5).cuda()
futures = [torch.jit.fork(model, x[i,:]) for i in range(10)]
results = [torch.jit.wait(fut) for fut in futures]
I expected this to run in about 1 second, but it still sleeps for the full 10 seconds. Is there any way to call the model in parallel?
| You cannot do this in Python, unfortunately, as per torch.jit.fork.
To serve TorchScript modules you need to make a C++ application with a proper thread pool – see here
Besides, your code does not include (or you do not show) the conversion to TorchScript.
You have to wrap the module with torch.jit.script for it to be scripted (or traced) into a ScriptModule:
traced_NN = torch.jit.script(CustomNN())
Even then, it won't work, as only PyTorch functions (and not even all of them), Python builtins, and the math module are supported in TorchScript (see here).
| https://stackoverflow.com/questions/70843645/ |
Compute means for different subsets of 1d Tensor | Imagine the following scenario:
data = torch.Tensor([0.5,0.4,1.2,1.1,0.4,0.4])
indices = torch.Tensor([0,1,1,2,2,2])
What I would like to achieve is the following:
Compute the mean over the subset of samples within data as indexed by indices
subset_means == torch.Tensor([0.5, 0.8, 0.8, 0.63, 0.63, 0.63])
I have not been able to come up with a satisfactory solution so far.
| You can use Tensor.index_put to accumulate values of an array according to some index array. This way you can sum up all values belonging to the same index. In the following snippet I use a separate call with an array of just ones to count the number of occurrences of each index, to be able to compute the means from the sums:
import torch
data = torch.tensor([0.5,0.4,1.2,1.1,0.4,0.4])
indices = torch.tensor([0,1,1,2,2,2]).to(torch.long)
# sum groups according to indices
accum = torch.zeros((indices.max()+1, )).index_put((indices,), data, accumulate=True)
# count groups according to indices
cnt = torch.zeros((indices.max()+1,)).index_put((indices,), torch.ones((1,)), accumulate=True)
# compute means and expand according to indices
subset_means = (accum / cnt)[indices]
print(subset_means)
#subset_means == torch.Tensor([0.5, 0.8, 0.8, 0.63, 0.63, 0.63])
| https://stackoverflow.com/questions/70845096/ |
Batch-wise norm of a tensor | I have a tensor t of dim n x 3. When I apply torch.linalg.norm it returns one single value. What I need is a batch-wise norm function which will return a tensor with n norms, one for each vector in t.
Thanks for your help.
| The most relevant documentation here is:
https://pytorch.org/docs/stable/generated/torch.linalg.norm.html
In a terminal you could start python3 and then try the following Python commands:
>>> import torch
>>> from torch import linalg as LA
>>> c = torch.tensor([[1., 2., 3.],
... [-1, 1, 4]])
>>> LA.norm(c, dim=0)
tensor([1.4142, 2.2361, 5.0000])
>>> LA.norm(c, dim=1)
tensor([3.7417, 4.2426])
Conclusion:
In your specific case you will need to do:
torch.linalg.norm(t,dim=1)
| https://stackoverflow.com/questions/70846362/ |
Elegant way to quickly load only a small subset of data in detectron2 | I'm looking for an elegant way to load only a small subset of data in detectron2 in order to speed up the training startup for debugging purposes.
I'm building my own instance segmentation model with detectron2 and running it the usual way:
train_net.py --config-file our_training_config.yaml
But it takes several minutes to load everything...
...
[01/25 13:11:48 d2.data.datasets.coco]: Loading datasets/coco/annotations/instances_train2017.json takes 20.74 seconds.
[01/25 13:11:50 d2.data.datasets.coco]: Loaded 118287 images in COCO format from datasets/coco/annotations/instances_train2017.json
...
I was wondering if there is a parameter/trick/flag which allows one to load only a small subset of examples (say, 100) only to quickly see if all the forward and backward calls works.
Now it is a bit annoying during the debugging process, since each bug and fix requires another slow run to test if everything works.
Technically one can just cut instances_train2017.json down in size, but I believe there are less nasty solutions to this problem.
| I had the same problem; then I found RandomSubsetTrainingSampler, which comes built in with detectron2. It allows loading a small fraction of the dataset for training. You can change the config file like this:
DATALOADER:
SAMPLER_TRAIN: "RandomSubsetTrainingSampler"
RANDOM_SUBSET_RATIO: 0.1
or you can simply pass a sampler to the train loader:
subset_sampler = RandomSubsetTrainingSampler(len(dataset), 0.1)
build_detection_train_loader(cfg, sampler=subset_sampler)
RANDOM_SUBSET_RATIO is between 0 and 1 so 0.1 means 10% of the training dataset. You can see how it is enabled by default in _train_loader_from_config when building a train loader.
However, it seems that currently there is no such nice way of loading a small part of the validation data using the config file. You can similarly pass a subset sampler to the build_detection_test_loader.
| https://stackoverflow.com/questions/70849422/ |
How does PyTorch know to which neural network the training loss shall be propagated back if you have multiple neural networks? | I want to train a neural network with the help of two other neural networks, which are already trained and tested. The input of the network that I want to train is simultaneously fed to the first static network. The output of the network that I want to train is fed to the second static network. The loss shall be computed on the outputs of the static networks and propagated back to the network being trained.
# Initialization
var_model_statemapper = NeuralNetwork(9, [('linear', 9), ('relu', None), ('dropout', 0.2), ('linear', 8)])
var_model_panda = NeuralNetwork(9, [('linear', 9), ('relu', None), ('dropout', 0.2), ('linear', 27)])
var_model_panda.load_state_dict(torch.load("panda.pth"))
var_model_ur5 = NeuralNetwork(8, [('linear', 8), ('relu', None), ('dropout', 0.2), ('linear', 24)])
var_model_ur5.load_state_dict(torch.load("ur5.pth"))
var_loss_function = torch.nn.MSELoss()
var_optimizer = torch.optim.Adam(var_model_statemapper.parameters(), lr=0.001)
# Forward Propagation
var_panda_output = var_model_panda(var_statemapper_input)
var_ur5_output = var_model_ur5(var_statemapper_output)
var_train_loss = var_loss_function(var_panda_output, var_ur5_output)
# Backward Propagation
var_optimizer.zero_grad()
var_train_loss.backward()
var_optimizer.step()
You can see that the "var_model_statemapper" is the network that shall be trained. The networks "var_model_panda" and "var_model_ur5" are initialized and their state_dicts are being read from the according ".pth" files, so these networks need to be static. My main question is, which of the networks is updated in the backward propagation? Just the "var_model_statemapper" or all networks? And if the "var_model_statemapper" isn't updated, how do I achive this? And does PyTorch know which network to update just from the initialization of the optimizer?
| Formalizing your pipeline to get a good idea of the setup:
x --- | state_mapper | --> y --- | ur5 | --> ur5_out
\ |
\ ↓
\--- | panda | --> panda_out ----------- | loss_fn | --> loss
Here is what is happening with the lines you provided:
var_optimizer.zero_grad() # 0.
var_train_loss.backward() # 1.
var_optimizer.step() # 2.
Calling zero_grad on an optimizer will clear the cache of all parameter gradients contained in that optimizer. In your case, you have var_optimizer registered with the parameters from var_model_statemapper (the model that you want to optimize).
When you infer loss and backpropagate on it via the backward call, the gradients will propagate through the parameters of all three models.
Then calling step on the optimizer will update the parameters registered in the optimizer it is called on. In your case, this means var_optimizer.step() will update all parameters of the model var_model_statemapper alone, using the gradients computed in step 1. (namely via the backward call on var_train_loss).
All in all, your current approach will only update the parameters of var_model_statemapper. Ideally, you can freeze models var_model_panda and var_model_ur5 by setting their parameters' requires_grad flag to False. This will save speed on inference and training since their gradients won't be computed and stored during backpropagation.
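A short sketch of that freezing step (a standard pattern, added here for illustration):
for p in var_model_panda.parameters():
    p.requires_grad = False
for p in var_model_ur5.parameters():
    p.requires_grad = False
var_model_panda.eval()  # also disables dropout in the frozen networks
var_model_ur5.eval()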
| https://stackoverflow.com/questions/70850782/ |
Unable to reload class inherited from nn.Module with importlib | I am trying to reload a class using importlib; however, I am facing an error stating it is not a module. This is in a Jupyter notebook.
Class code
import torch.nn as nn
import torch.nn.functional as F
import torch
class FeedForwardNeuralNetwork(nn.Module):
def __init__(self, input_size, layers_data, random_seed=42):
super(FeedForwardNeuralNetwork, self).__init__()
torch.manual_seed(random_seed)
# So that number of dense layers are configurable.
self.layers = nn.ModuleList()
for size, activation in layers_data:
self.layers.append(nn.Linear(input_size, size, bias=False))
torch.nn.init.xavier_uniform(self.layers[-1].weight)
self.layers.append(activation)
input_size = size
def forward(self, x):
x = torch.flatten(x, start_dim=1)
for layer in self.layers:
x = layer(x)
return x
Reload code
import importlib
from feed_forward_neural_network import FeedForwardNeuralNetwork
print((type(nn.Module)))
print(type(FeedForwardNeuralNetwork))
importlib.reload(FeedForwardNeuralNetwork)
Error
<class 'type'>
<class 'type'>
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [53], in <module>
4 print((type(nn.Module)))
5 print(type(FeedForwardNeuralNetwork))
----> 6 importlib.reload(FeedForwardNeuralNetwork)
File /usr/lib/python3.9/importlib/__init__.py:140, in reload(module)
134 """Reload the module and return it.
135
136 The module must have been successfully imported before.
137
138 """
139 if not module or not isinstance(module, types.ModuleType):
--> 140 raise TypeError("reload() argument must be a module")
141 try:
142 name = module.__spec__.name
TypeError: reload() argument must be a module
Environment Details
Python version: 3.9.5
IPython : 8.0.0
jupyter_client : 7.1.1
jupyter_core : 4.9.1
| reload()'s argument must be a module, not a class. You need to reload the module that defines the class instead (note this needs import sys):
importlib.reload(sys.modules.get(FeedForwardNeuralNetwork.__module__))
<module 'feed_forward_neural_network' from '/home/harshit/Downloads/feed_forward_neural_network/feed_forward_neural_network.py'>
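Note that reloading a module does not update names that were already imported with from ... import; you have to re-bind them afterwards, e.g.:
import importlib
import feed_forward_neural_network

importlib.reload(feed_forward_neural_network)
FeedForwardNeuralNetwork = feed_forward_neural_network.FeedForwardNeuralNetwork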
| https://stackoverflow.com/questions/70852442/ |
Combine features in tensor in pytorch | I have a tensor with dim 2n x m. I want to compute an output tensor with dim n x m, where the i-th and the i+1-th entry are added together and divided by 2, i.e. (f_0, f_1, f_2, f_3, ...) -> ((f_0+f_1)/2, (f_2+f_3)/2, ...). How can I achieve this without looping over the tensor?
Thanks for your help.
| I would reshape the tensor to (n,2,m) and take the mean of dim 1.
In [7]: x = torch.arange(12).view(4,3).float()
In [8]: x
Out[8]:
tensor([[ 0., 1., 2.],
[ 3., 4., 5.],
[ 6., 7., 8.],
[ 9., 10., 11.]])
In [9]: x.view(2,2,3).mean(dim=1)
Out[9]:
tensor([[1.5000, 2.5000, 3.5000],
[7.5000, 8.5000, 9.5000]])
| https://stackoverflow.com/questions/70853047/ |
What makes a pre-trained model in pytorch misclassify an image | I successfully trained a Data-Efficient Image Transformer (DeiT) on the CIFAR-10 dataset with an accuracy of about 95% and saved it for later use. I created a separate class to load the model and make inference on just one image. I keep getting a different prediction every time I run it.
import torch
from models.deit import deit_small_patch16_224
import matplotlib.pyplot as plt
import torch.nn.functional as F
import numpy as np
from PIL import Image
from torchvision.transforms import transforms as transforms
class_names = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
model = deit_small_patch16_224(pretrained=True, use_top_n_heads=8, use_patch_outputs=False)
checkpoint = torch.load("./checkpoint/deit224.t7")
model.load_state_dict(checkpoint, strict=False)
model.head = torch.nn.Linear(in_features=model.head.in_features, out_features=10)
model.eval()
img = Image.open("cats.jpeg")
img_tensor = torch.tensor(np.array(img))/255.0
img_tensor = img_tensor.unsqueeze(0).permute(0, 3, 1, 2)
# print(img_tensor.shape)
with torch.no_grad():
output = model(img_tensor)
predicted_class = np.argmax(output)
print(predicted_class)
| Yes, I figured out the error; the updated code is below. The checkpoint stores the weights under a "model" key, with an extra prefix on every parameter name, so the original load_state_dict(checkpoint, strict=False) silently loaded nothing and the head stayed randomly initialized. Stripping the prefix and loading the cleaned state dict (after replacing the head) fixes it.
import torch
from models.deit import deit_small_patch16_224
from torch.utils.data import dataset
import torchvision.datasets
import matplotlib.pyplot as plt
import torch.nn.functional as F
import numpy as np
from PIL import Image
from torchvision.transforms import transforms as transforms
class_names = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
model = deit_small_patch16_224(pretrained=True, use_top_n_heads=8, use_patch_outputs=False)
checkpoint = torch.load("./checkpoint/deit224.t7")
state_dict = checkpoint["model"]
new_state_dict = {}
for key in state_dict:
new_key = '.'.join(key.split('.')[1:])
new_state_dict[new_key] = state_dict[key]
model.head = torch.nn.Linear(in_features=model.head.in_features, out_features=10)
model.load_state_dict(new_state_dict)
model.eval()
img = Image.open("cats.jpeg")
trans = transforms.ToTensor()
# img_tensor = torch.tensor(np.array(img, dtype=np.float64))/255.0
img_tensor = torch.tensor(np.array(img))/255.0
# img_tensor = torch.tensor(np.array(img))
img_tensor = img_tensor.unsqueeze(0).permute(0, 3, 1, 2)
# print(img_tensor.shape)
with torch.no_grad():
output = model(img_tensor)
predicted_class = np.argmax(output)
print(predicted_class)
| https://stackoverflow.com/questions/70853637/ |
How does a decaying learning rate schedule with AdamW influence the weight decay parameter? | According to the Pytorch documentation
https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html
the AdamW optimiser computes at each step the product of the learning rate gamma and the weight decay coefficient lambda. The product
gamma*lambda =: p
is then used as the actual weight for the weight decay step. To see this, consider the second line within the for-loop of the AdamW algorithm as listed in the linked documentation.
But what if the learning rate gamma shrinks after each epoch because we use (say) an exponential learning rate decay schedule? Is p consistently computed using the initial learning rate gamma, so that p stays constant during the whole training process? Or does p shrink dynamically as gamma shrinks, due to an implicit interaction with the learning rate decay schedule?
Thanks!
| The function torch.optim._functional.adamw is called each time you step the optimizer using the current parameters of the optimizer (that call occurs at torch/optim/adamw.py:145). This is the function that actually updates the model parameter values. So after a learning-rate scheduler changes the optimizer parameters, the steps afterwards will use those parameters, not the initial ones.
To verify this, the product is recomputed at each step in the code at torch/optim/_functional.py:137.
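A small sketch to observe this: in PyTorch's implementation the decoupled decay multiplies each parameter by 1 - lr * weight_decay, read from the current param_group, so the effective decay factor changes as the scheduler shrinks lr:
import torch

p = torch.nn.Parameter(torch.ones(1))
opt = torch.optim.AdamW([p], lr=0.1, weight_decay=0.5)
sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.1)

for _ in range(3):
    lr = opt.param_groups[0]['lr']
    print('lr:', lr, 'effective decay factor:', 1 - lr * 0.5)
    opt.zero_grad()
    p.sum().backward()
    opt.step()
    sched.step()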
| https://stackoverflow.com/questions/70854091/ |
Pytorch: Most computationally and memory efficient way to make a series of concatenations from extracting tensor rows? | Say that this is my sample tensor
sample = torch.tensor(
[[2, 7, 3, 1, 1],
[9, 5, 8, 2, 5],
[0, 4, 0, 1, 4],
[5, 4, 9, 0, 0]]
)
I want to have a new tensor, which will consist of concatenations of 2 rows from the sample tensor.
So I have a tensor which contains pairs of the row numbers that I want concatenated into a single row for the new tensor
cat_indices = torch.tensor([[0, 1], [1, 2], [0, 2], [2, 3]])
The current method I am using is this
torch.cat((sample[cat_indices[:,0]], sample[cat_indices[:,1]]), dim=1)
Which gives the desired result
tensor([[2, 7, 3, 1, 1, 9, 5, 8, 2, 5],
[9, 5, 8, 2, 5, 0, 4, 0, 1, 4],
[2, 7, 3, 1, 1, 0, 4, 0, 1, 4],
[0, 4, 0, 1, 4, 5, 4, 9, 0, 0]])
Is this the most memory and computationally efficient method of doing this? I am not sure because I am making two calls to cat_indices, and then I am doing a concatenation operation.
I feel that there should be a way to do this via some sort of view. Perhaps advanced indexing. I've tried things like sample[cat_indices[:,0], cat_indices[:,1]] or sample[cat_indices[0], cat_indices[1]] but I can't make the view come out right.
| What you have should be pretty fast. An alternative is
sample[cat_indices].reshape(cat_indices.shape[0],-1)
You would have to benchmark the performance on your machine though to see which is better.
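Both variants produce identical results, which you can sanity-check before benchmarking:
out_cat = torch.cat((sample[cat_indices[:, 0]], sample[cat_indices[:, 1]]), dim=1)
out_reshape = sample[cat_indices].reshape(cat_indices.shape[0], -1)
assert torch.equal(out_cat, out_reshape)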
| https://stackoverflow.com/questions/70854880/ |
Initialise Pytorch layer with local random number generator | When writing larger programs that require determinism for random processes, it is generally considered good practice to create function-specific random number generators (RNGs) and pass those to the randomness-dependent functions (rather than setting a global seed and having the functions depend on it). See also here.
For example, when I have a function that generates some sample using numpy, I use a rng that I create at the beginning of the script:
# at the beginning of the script
import numpy as np
seed = 5465
rng = np.random.default_rng(seed)
# much later in the script
def generate_sample(rng, size):
return rng.random(size)
generate_sample(rng, size=5)
I am trying to achieve the same when initialising a torch.nn.Linear layer, i.e. use a pre-defined rng to reproducibly initialise the layer.
Is this (reasonably) possible or am I forced to set the Pytorch global seed via torch.manual_seed() instead?
| Generators are available in PyTorch, and the in-place PRNG-based functions can take one as input, e.g. the normal distribution generator (Tensor.normal_ accepts a generator= argument).
When it comes to torch.nn.Linear, it has no such parameter, as you observed. You can still manually manage the internal state of the global PRNG with a helper that saves the current state, loads the desired state, constructs your nn.Linear, saves the resulting state, and resets the global PRNG to its original state. With this approach you rely on torch.manual_seed(), but also on the state getter and setter functions (torch.random.get_rng_state / torch.random.set_rng_state) to reach states beyond the freshly seeded ones.
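A minimal sketch of such a state-swapping helper, built only on torch.random's state getters and setters (the class name is illustrative):
import torch

class LocalTorchRNG:
    def __init__(self, seed):
        with torch.random.fork_rng(devices=[]):  # fork the CPU generator, don't disturb the global stream
            torch.manual_seed(seed)
            self.state = torch.random.get_rng_state()

    def __enter__(self):
        self._outer = torch.random.get_rng_state()
        torch.random.set_rng_state(self.state)

    def __exit__(self, *exc):
        self.state = torch.random.get_rng_state()  # remember where this local stream stopped
        torch.random.set_rng_state(self._outer)

rng = LocalTorchRNG(5465)
with rng:
    layer = torch.nn.Linear(4, 4)  # reproducibly initialized from the local state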
| https://stackoverflow.com/questions/70855134/ |
Image Feature Extraction in PyTorch | I am having difficulties understanding this code snippet.
import torch
import torch.nn as nn
import torchvision.models as models
def ResNet152(out_features = 10):
return getattr(models, "resnet152")(pretrained=False, num_classes = out_features)
def VGG(out_features = 10):
return getattr(models, "vgg19")(pretrained=False, num_classes = out_features)
In this code segment, features for an input image are extracted by a ResNet-152 and a VGG-19 model. But I have a question: from which part of these models are the features being extracted? Is it the last pooling layer, the layer before the classification layer, or something else?
| Note that getattr(models, 'resnet152') is equivalent to models.resnet152.
Hence, the code below is returning the model itself.
getattr(models, "resnet152")(pretrained=False, num_classes = out_features)
# is same as
models.resnet152(pretrained=False, num_classes = out_features)
Now, if you look at the structure of the model by simply printing it, the last layer is a fully-connected layer, so that is what you're getting as features here.
print(ResNet152())
ResNet(
(conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
...
(avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
(fc): Linear(in_features=2048, out_features=10, bias=True)
)
The same is the case for VGG().
| https://stackoverflow.com/questions/70859276/ |
there is no change to the dataset even after random affine (pytorch) | I downloaded two versions of the MNIST dataset: one without any transformation, and one with a random affine transform and normalization.
w/o transformation code:
mnist_train = torchvision.datasets.MNIST(root='MNIST_data/',
train=True,
transform=transforms.ToTensor(),
download=True)
with transformation code:
train_transforms = transforms.Compose(
[transforms.RandomAffine(degrees=30),
transforms.ToTensor(),
transforms.Normalize(mean=(0.5,), std=(0.5,))])
augmented_data = torchvision.datasets.MNIST(root='aug_data/',
train=True,
transform=train_transforms,
download=True)
I printed several samples from those two, but the results were all identical.
Can anybody figure out the cause?
Thank you.
| There is no augmented MNIST dataset and download=True just downloads the same plain MNIST dataset in the roots you have specified.
The transform is applied online when you access an element of the dataset, e.g. augmented_data[0]. In this way, the transformation you have specified is applied to the first image of MNIST.
To check the difference, you therefore have to display the images accessed in this way with some visualization tool, e.g. matplotlib.pyplot.imshow:
Code
import torch
import torchvision
from torchvision import transforms
import matplotlib.pyplot as plt
mnist_train = torchvision.datasets.MNIST(root='MNIST_data/',
train=True,
transform=transforms.ToTensor(),
download=True)
train_transforms = transforms.Compose(
[transforms.RandomAffine(degrees=30),
transforms.ToTensor(),
transforms.Normalize(mean=(0.5,), std=(0.5,))])
# you can use the same root, since the data is the same
augmented_data = torchvision.datasets.MNIST(root='MNIST_data/',
train=True,
transform=train_transforms,
download=True)
im = mnist_train[0][0][0]
aug_im = augmented_data[0][0][0]
# Normal image
plt.imshow(im, cmap="gray")
plt.colorbar()
plt.show()
# Agumented image
plt.imshow(aug_im, cmap="gray")
plt.colorbar()
plt.show()
Result
Normal Image
Augmented Image
Note the difference of value ranges in the right-sided color bar.
| https://stackoverflow.com/questions/70862122/ |
How to create batches using PyTorch DataLoader such that each example in a given batch has the same value for an attribute? | Suppose I have a list, datalist which contains several examples (which are of type torch_geometric.data.Data for my use case). Each example has an attribute num_nodes
For demo purpose, such datalist can be created using the following snippet of code
import torch
from torch_geometric.data import Data # each example is of this type
import networkx as nx # for creating random data
import numpy as np
# the python list containing the examples
datalist = []
for num_node in [9, 11]:
for _ in range(1024):
edge_index = torch.from_numpy(
np.array(nx.fast_gnp_random_graph(num_node, 0.5).edges())
).t().contiguous()
datalist.append(
Data(
x=torch.rand(num_node, 5),
edge_index=edge_index,
edge_attr=torch.rand(edge_index.size(1))
)
)
From the above datalist object, I can create a torch_geometric.loader.DataLoader (which subclasses torch.utils.data.DataLoader) naively (without any constraints) by using the DataLoader constructor as:
from torch_geometric.loader import DataLoader
dataloader = DataLoader(
datalist, batch_size=128, shuffle=True
)
My question is: how can I use the DataLoader class to ensure that each example in a given batch has the same value for the num_nodes attribute?
PS:
I tried to solve it and came up with a hacky solution by combining multiple DataLoader objects using the combine_iterators function snippet from here as follows:
def get_combined_iterator(*iterables):
nexts = [iter(iterable).__next__ for iterable in iterables]
while nexts:
next = random.choice(nexts)
try:
yield next()
except StopIteration:
nexts.remove(next)
datalists = defaultdict(list)
for data in datalist:
datalists[data.num_nodes].append(data)
dataloaders = (
DataLoader(data, batch_size=128, shuffle=True) for data in datalists.values()
)
batches = get_combined_iterator(*dataloaders)
But, I think that there must be some elegant/better method of doing it, hence this question.
| If your underlying dataset is map-style, you can define a torch.utils.data.Sampler which returns the indices of the examples you want to batch together. An instance of this will be passed as a batch_sampler kwarg to your DataLoader, and you can remove the batch_size kwarg, as the sampler will form batches for you depending on how you implement it.
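A sketch of such a sampler for this case (the class name is illustrative): it groups example indices by num_nodes and yields whole batches from one group at a time.
import random
from collections import defaultdict
from torch.utils.data import Sampler

class SameNumNodesBatchSampler(Sampler):
    def __init__(self, datalist, batch_size):
        self.batch_size = batch_size
        self.groups = defaultdict(list)
        for idx, data in enumerate(datalist):
            self.groups[data.num_nodes].append(idx)

    def __iter__(self):
        batches = []
        for indices in self.groups.values():
            random.shuffle(indices)
            batches += [indices[i:i + self.batch_size]
                        for i in range(0, len(indices), self.batch_size)]
        random.shuffle(batches)  # mix batches coming from different groups
        yield from batches

    def __len__(self):
        return sum(-(-len(g) // self.batch_size) for g in self.groups.values())

# loader = DataLoader(datalist, batch_sampler=SameNumNodesBatchSampler(datalist, 128))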
| https://stackoverflow.com/questions/70864887/ |
What is different between DataLoader and DataLoader2 in PyTorch? | I developed a custom dataset by using the PyTorch dataset class. The code is like that:
class CustomDataset(torch.utils.data.Dataset):
def __init__(self, root_path, transform=None):
self.path = root_path
self.mean = mean
self.std = std
self.transform = transform
self.images = []
self.masks = []
for add in os.listdir(self.path):
# Some script to load file from directory and appending address to relative array
...
self.masks.sort()
self.images.sort()
def __len__(self):
return len(self.images)
def __getitem__(self, item):
image_address = self.images[item]
mask_address = self.masks[item]
if self.transform is not None:
augment = self.transform(image=np.asarray(Image.open(image_address, 'r', None)),
mask=np.asarray(Image.open(mask_address, 'r', None)))
image = Image.fromarray(augment['image'])
mask = augment['mask']
if self.transform is None:
image = np.asarray(Image.open(image_address, 'r', None))
mask = np.asarray(Image.open(mask_address, 'r', None))
# Handle Augmentation here
return image, mask
Then I created an object of this class and passed it to torch.utils.data.DataLoader. Although this works well with DataLoader, with torch.utils.data.DataLoader2 I got a problem. The error is this:
dataloader = torch.utils.data.DataLoader2(dataset=dataset, batch_size=2, pin_memory=True, num_workers=4)
Exception: thread parallelism mode is not supported for old DataSets
My question is: why was the DataLoader2 module added to PyTorch, how is it different from DataLoader, and what are its benefits?
PyTorch Version: 1.10.1
| You should definitely not use DataLoader2.
torch.utils.data.DataLoader2 (actually torch.utils.data.dataloader_experimental.DataLoader2)
was added as an experimental "feature" as a future replacement for DataLoader. It is defined here. Currently, it is only accessible on the master branch (unstable) and is of course not documented on the official pages.
| https://stackoverflow.com/questions/70865699/ |
Is there a way to send patches of image into a transformer model for inference or combine the patches together to make one image? | I am making inference with a single image of size 224x224 on a vision transformer model (deit). However, I divided the image into 196 patches and manipulated the pixels of one patch to check its behaviour. Each patch is of size 16x16.
On feeding these patches to the model, I got the error: Input image size (16*16) doesn't match (224*224). Of course, the model was trained on 224x224 images and needs the same size. An idea is to combine these patches back into one complete image, but I am having trouble getting that to work.
The single image shape: [1, 3, 224, 224]
The divided-into-patches image shape: [196, 16, 16, 3]
import torch
from models.deit import deit_small_patch16_224
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
import os
from torchvision.transforms import transforms as transforms
from torchvision.utils import make_grid
def into_patches(im, xPieces, yPieces):
imgwidth, imgheight = im.size
height = imgheight // yPieces
width = imgwidth // xPieces
#fig, axs = plt.subplots(yPieces, xPieces)
img_list = []
for i in range(0, yPieces):
for j in range(0, xPieces):
box = (j * width, i * height, (j + 1) * width, (i + 1) * height)
a = im.crop(box)
np_img = np.asarray(a)
if i ==6 and j ==5:
np_img.setflags(write=1)
np_img[:] =0
img_list.append(np_img)
return img_list
class_names = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
model = deit_small_patch16_224(pretrained=True, use_top_n_heads=8, use_patch_outputs=False)
checkpoint = torch.load("./checkpoint/deit224.t7")
state_dict = checkpoint["model"]
new_state_dict = {}
for key in state_dict:
new_key = '.'.join(key.split('.')[1:])
new_state_dict[new_key] = state_dict[key]
model.head = torch.nn.Linear(in_features=model.head.in_features, out_features=10)
model.load_state_dict(new_state_dict)
model.eval()
img = Image.open("bird.jpeg")
img = img.resize((224, 224), resample=0)
a = np.array(into_patches(img, 14, 14))
img_tensor = torch.tensor(a)
# print(img_tensor.shape)
with torch.no_grad():
output = model(img_tensor)
predicted_class = np.argmax(output)
print(predicted_class.item())
I get the following error:-
AssertionError: Input image size (16*3) doesn't match model (224*224).
Is there any way to combine these 196 patches back into a 224x224 image?
| You can use fold and unfold to extract the patches, manipulate them and then re-arrange them back into an image:
# nnf is torch.nn.functional (import torch.nn.functional as nnf)
# divide the batch of images into non-overlapping patches
u = nnf.unfold(x, kernel_size=16, stride=16, padding=0)
# manipulate patch number 17
u[..., 17] = my_manipulation(u[..., 17])
# fold the patches back together
f = nnf.fold(u, x.shape[-2:], kernel_size=16, stride=16, padding=0)
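A runnable round-trip check with the assumed imports spelled out; since the 16x16 patches don't overlap, folding reproduces the original image exactly:
import torch
import torch.nn.functional as nnf

x = torch.randn(1, 3, 224, 224)
u = nnf.unfold(x, kernel_size=16, stride=16, padding=0)  # (1, 3*16*16, 196)
f = nnf.fold(u, x.shape[-2:], kernel_size=16, stride=16, padding=0)
print(torch.allclose(f, x))  # True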
| https://stackoverflow.com/questions/70865704/ |
What values does torch.Conv2D return? | x = torch.randn(3, 64, 161, 161)
model = nn.Conv2d(64, 1, kernel_size=1)
result = model(x)
print(result.shape)
output: torch.Size([3, 1, 161, 161])
The output's first two values are 3 and 1. I understood it as:
3 is the number of input channels that the model initially received.
1 is the number of output channels after the model processed the tensor.
Did I understand correctly?
| No, the number of input channels is 64 (the number of output channels is 1). 3 is the batch size in your code snippet.
The convention PyTorch uses for all tensors with 2D multi-channel data is to have shape [N,C,H,W] where
N is batch dimension
C is channel dimension
H is height dimension
W is width dimension
For Conv2d
N will be the same for both the input and output tensors since each batch element in the input tensor produces one corresponding element in the output tensor.
The input tensor's C must be equal to in_channels. This is the first argument you provided to Conv2d. In your case it is 64 so you need to have an input tensor with 64 channels.
The output tensor's C will be equal to out_channels. This is the second argument you provide to Conv2d. In your case it is 1 so the output tensor will have 1 channel.
W and H are the width and height of the data that is being convolved. The output shape will vary depending on the kernel_size, padding, stride, and dilation arguments of Conv2d. The equation relating the input H/W to the output H/W is H_out = floor((H_in + 2*padding - dilation*(kernel_size - 1) - 1) / stride + 1), and likewise for W_out.
For more information I recommend you check the official documentation.
| https://stackoverflow.com/questions/70869897/ |
Retraining a Model from 3 Channels (RGB) to 4 Channels (RGBA), can I use the 3 channel weights? | I need to expand a model from RGB to RGBA. I can handle the code rewrite on the model, but instead of retraining the entire model from scratch, I would love to start it off with its 3-channel weights + zeros.
Is there an easy way to change torch's save of 3 channel weights into 4?
| Yes, you can do a little bit of "model surgery". Assuming the input to the model is only processed directly by a convolutional layer, you can just replace that conv layer with another that has in_channels set to 4. Then you can set weights to zero and copy over the old weights (and biases if applicable) from the original conv layer.
For example, say we had a simple model that looked like this
import torch
import torch.nn as nn
import torch.nn.functional as F
class SimpleModel(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3, 10, kernel_size=3, padding=1, bias=True)
self.conv2 = nn.Conv2d(10, 5, kernel_size=3, padding=1, bias=True)
self.linear = nn.Linear(125, 1)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.relu(self.conv2(x))
return self.linear(x.flatten(start_dim=1))
model = SimpleModel()
Supposing that the model is trained at this point, we could perform the surgery as follows
y_rgb = torch.randn(1, 3, 5, 5)
# get performance on initial z_rgb
z_rgb = model(y_rgb)
# perform model surgery
with torch.no_grad():
new_conv1 = nn.Conv2d(4, 10, kernel_size=3, padding=1, bias=True)
new_conv1.weight.zero_()
new_conv1.weight[:,:3,...]=model.conv1.weight
new_conv1.bias.copy_(model.conv1.bias)
model.conv1 = new_conv1
# add a random alpha channel to y_rgba
y_alpha = torch.randn(1,1,5,5)
y_rgba = torch.cat([y_rgb, y_alpha], dim=1)
# get results on rgba model
z_rgba = model(y_rgba)
# compare z_rgb and z_rgba, print mean-square difference
z_err = ((z_rgba-z_rgb)**2).mean().item()
print('Err:', z_err)
# save results to a new file
torch.save(model.state_dict(), 'checkpoint_rgba.pt')
which should give you an error of zero or very close to zero.
Of course if you don't have a bias term in your first conv layer then you don't need to copy that over.
Assuming you've saved the new state dictionary, then you will probably want to update the model class definition so that your input convolution layer takes 4 channel input instead of 3. Then next time you can directly load the new state dictionary without additional steps.
Now it's not strictly necessary to do the surgery on the model directly. Though I tend to prefer it as I find it easier to verify correctness.
Assuming you saved off the state dictionary for the RGB model, you could also just directly modify the state dictionary.
# assuming you saved RGB model using torch.save(model.state_dict(), 'checkpoint_rgb.pt')
state_dict = torch.load('checkpoint_rgb.pt')
old_weight = state_dict['conv1.weight']
state_dict['conv1.weight'] = torch.zeros(
    old_weight.shape[0],
    old_weight.shape[1] + 1,
    old_weight.shape[2],
    old_weight.shape[3]
).type_as(old_weight)
state_dict['conv1.weight'][:,:3,...] = old_weight
torch.save(state_dict, 'checkpoint_rgba.pt')
| https://stackoverflow.com/questions/70870984/ |
correct shape (BS,H,W,C) not working in torchvision.utils.save_image | Let's BS be the batch size, H the height, w the weight, and c the number of channels which is 3 in my case.
When I save my image in this shape (BS,C,H,W) with
torchvision.utils.save_image(image, path)
it works, but the image is unreadable since the format is wrong.
But when I reshape my image into what I thought was the right format, (BS, H, W, C), the code below does not work:
image = image.reshape(BS,H,W,C)
torchvision.utils.save_image(image, path)
Here is the error that I am stuck in:
TypeError: Cannot handle this data type: (1, 1, 256), |u1
| You wanted your image to have size (BS, C, H, W), but you are incorrectly reshaping it.
Assuming the image.shape is (BS, H, W, C), perhaps you meant to perform image = image.permute(0, 3, 1, 2) which moves the channel dimension in the second position, to obtain the required shape.
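To see why reshape is the wrong tool here (shapes are illustrative): reshape merely reinterprets the underlying memory, scrambling the pixels, whereas permute actually moves the axes:
import torch

img = torch.rand(8, 256, 256, 3)             # (BS, H, W, C)
bad = img.reshape(8, 3, 256, 256)            # wrong: reinterprets memory, scrambles the image
good = img.permute(0, 3, 1, 2).contiguous()  # right: moves the channel axis, pixels stay intact
print(good.shape)                            # torch.Size([8, 3, 256, 256])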
| https://stackoverflow.com/questions/70872079/ |
Cannot figure out dense layers dimensions to run the neural network | I am trying to build a multi layer neural network. I have train data with shape:
train[0][0].shape
(4096,)
Below is my dense layer
from collections import OrderedDict
n_out = 8
net = nn.Sequential(OrderedDict([
('hidden_linear', nn.Linear(4096, 1366)),
('hidden_activation', nn.Tanh()),
('hidden_linear', nn.Linear(1366, 456)),
('hidden_activation', nn.Tanh()),
('hidden_linear', nn.Linear(456, 100)),
('hidden_activation', nn.Tanh()),
('output_linear', nn.Linear(100, n_out))
]))
I am using crossentropy as the loss function. The problem I have is when I train the model with the below code:
learning_rate = 0.001
optimizer = torch.optim.SGD(net.parameters(), lr=learning_rate)
n_epochs = 40
for epoch in range(n_epochs):
    for snds, labels in final_train_loader:
        outputs = net(snds.view(snds.shape[0], -1))
        loss = loss_fn(outputs, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print("Epoch: %d, Loss: %f" % (epoch, float(loss)))
The error I receive is the matrix multiplication error.
RuntimeError: mat1 and mat2 shapes cannot be multiplied (100x4096 and 456x100)
I have the dimensions wrong but cannot figure out how to get it right.
The OrderedDict contains three Linear layers associated with the same key, hidden_linear (the same happens with nn.Tanh under hidden_activation). Since dictionary keys must be unique, each duplicate silently overwrites the previous entry, so the resulting network only keeps the last nn.Linear(456, 100) before the output layer — which is exactly why the (100x4096 and 456x100) multiplication fails. In order to make it work you need to give such layers different names:
inp = torch.rand(100, 4096)
net = nn.Sequential(OrderedDict([
('hidden_linear0', nn.Linear(4096, 1366)),
('hidden_activation0', nn.Tanh()),
('hidden_linear1', nn.Linear(1366, 456)),
('hidden_activation1', nn.Tanh()),
('hidden_linear2', nn.Linear(456, 100)),
('hidden_activation2', nn.Tanh()),
('output_linear', nn.Linear(100, n_out))
]))
net(inp) # now it works!
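To see why the original version fails, note that OrderedDict silently keeps only the last value per duplicate key, so most of the layers never make it into the network:
from collections import OrderedDict
import torch.nn as nn

d = OrderedDict([('hidden_linear', nn.Linear(4096, 1366)),
                 ('hidden_linear', nn.Linear(1366, 456))])
print(len(d))  # 1 -- the first Linear was silently overwritten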
| https://stackoverflow.com/questions/70877356/ |
How to input the image with the size 4x4 and replace the Fully Connected Layers using the Convolution Layers? | I'm a beginner at the PyTorch library, and I got stuck in an exercise.
The code below works for the input image with size 2x2. I'm trying to do the same thing as below but the input image with size 4x4.
The code:
import torch
Assume that we have a 2x2 input image
inputs = torch.tensor([[[[1., 2.],
[3., 4.]]]])
inputs.shape
Output: torch.Size([1, 1, 2, 2])
A fully connected layer, which maps the 4 input features to 2 outputs, would be computed as follows:
fc = torch.nn.Linear(4, 2)
weights = torch.tensor([[1.1, 1.2, 1.3, 1.4],
[1.5, 1.6, 1.7, 1.8]])
bias = torch.tensor([1.9, 2.0])
fc.weight.data = weights
fc.bias.data = bias
torch.relu(fc(inputs.view(-1, 4)))
Output: tensor([[14.9000, 19.0000]], grad_fn=<ReluBackward0>)
We obtain the same outputs if we use a convolutional layer whose kernel size matches the size of the input feature array:
conv = torch.nn.Conv2d(in_channels=1,
out_channels=2,
kernel_size=inputs.squeeze(dim=(0)).squeeze(dim=(0)).size())
print(conv.weight.size())
print(conv.bias.size())
Output: torch.Size([2, 1, 2, 2])
Output: torch.Size([2])
conv.weight.data = weights.view(2, 1, 2, 2)
conv.bias.data = bias
torch.relu(conv(inputs))
Output: tensor([[[[14.9000]],
[[19.0000]]]], grad_fn=<ReluBackward0>)
Replace the fully connected layer using a convolutional layer when we reshape the input image into a num_inputs x 1 x 1 image:
conv = torch.nn.Conv2d(in_channels=4,
out_channels=2,
kernel_size=(1, 1))
conv.weight.data = weights.view(2, 4, 1, 1)
conv.bias.data = bias
torch.relu(conv(inputs.view(1, 4, 1, 1)))
Output: tensor([[[[14.9000]],
[[19.0000]]]], grad_fn=<ReluBackward0>)
So based on this code how to input an image that has a size 4x4 and replace the Fully Connected Layers using Convolution Layers?
You simply need to change the shape of the input and reshape the weights to match the 4x4 input.
inputs = torch.randn(1, 1, 4, 4)
fc = torch.nn.Linear(16, 2)
torch.relu(fc(inputs.view(-1, 16)))
# output
tensor([[0.0000, 0.2525]], grad_fn=<ReluBackward0>)
Now, for conv layer
conv = torch.nn.Conv2d(in_channels=1,
out_channels=2,
kernel_size=inputs.squeeze(dim=(0)).squeeze(dim=(0)).size())
conv.weight.data = fc.weight.data.view(2, 1, 4, 4)
conv.bias.data = fc.bias.data
torch.relu(conv(inputs))
# output
tensor([[[[0.0000]],
[[0.2525]]]], grad_fn=<ReluBackward0>)
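For completeness, a sketch of the question's third variant (a 1x1 kernel on a num_inputs x 1 x 1 image) for the 4x4 case, reusing fc and inputs from above:
conv1x1 = torch.nn.Conv2d(in_channels=16, out_channels=2, kernel_size=(1, 1))
conv1x1.weight.data = fc.weight.data.view(2, 16, 1, 1)
conv1x1.bias.data = fc.bias.data
torch.relu(conv1x1(inputs.view(1, 16, 1, 1)))  # same values as the outputs above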
You can read Converting FC layers to CONV layers if not sure how conv layers params are taken.
| https://stackoverflow.com/questions/70879840/ |
Create a torch tensor with desired values | I want to create a torch tensor of size 100 with values 10 and 100.
For example: The following gives a tensor of values between 5 and 6.
torch.randint(5,7,(100,))
tensor([6, 6, 6, 5, 5, 6, 6, 6, 6, 5, 6, 6, 6, 6, 6, 5, 6, 5, 5, 6, 5, 5, 5, 5,
6, 5, 5, 5, 5, 5, 6, 6, 6, 5, 6, 6, 5, 5, 5, 5, 6, 5, 5, 5, 5, 5, 6, 5,
5, 6, 5, 6, 5, 6, 5, 6, 6, 6, 6, 5, 6, 6, 6, 5, 5, 5, 6, 6, 6, 6, 5, 6,
5, 5, 5, 5, 6, 6, 5, 6, 6, 6, 5, 5, 6, 6, 5, 6, 6, 6, 5, 5, 5, 5, 5, 6,
6, 6, 5, 6])
Instead of this, I want a tensor with values 10 and 100 and I do not want the values between the integers 10 and 100. Tensor should just contain 10 and 100. How do I do that?
Thanks in advance.
You can achieve that by using the Python function random.choices() to create a list of random numbers and then converting it to a tensor:
import random
import torch
list_numbers = random.choices([100,10], k=100)
random_numbers = torch.Tensor(list_numbers)
print(random_numbers)
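If you prefer to stay in pure PyTorch, one alternative is to index a values tensor with random 0/1 indices:
import torch

values = torch.tensor([10, 100])
random_numbers = values[torch.randint(0, 2, (100,))]
print(random_numbers)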
| https://stackoverflow.com/questions/70880517/ |
How to fix error with pytorch conv2d function? | I am trying to use conv2d function on these two tensors:
Z = np.random.choice([0,1],size=(100,100))
Z = torch.from_numpy(Z).type(torch.FloatTensor)
print(Z)
tensor([[0., 0., 1., ..., 1., 0., 0.],
[1., 0., 1., ..., 1., 1., 1.],
[0., 0., 0., ..., 0., 1., 1.],
...,
[1., 0., 1., ..., 1., 1., 1.],
[1., 0., 1., ..., 0., 0., 0.],
[0., 1., 1., ..., 1., 0., 0.]
and
filters = torch.tensor(np.array([[1,1,1],
[1,0,1],
[1,1,1]]), dtype=torch.float32)
print(filters)
tensor([[1., 1., 1.],
[1., 0., 1.],
[1., 1., 1.]])
But when I try to do torch.nn.functional.conv2d(Z,filters) this error returns:
RuntimeError: weight should have at least three dimensions
I really don't understand what is the problem here. How to fix it?
The input to torch.nn.functional.conv2d(input, weight) should be
input of shape (minibatch, in_channels, iH, iW) and weight of shape (out_channels, in_channels/groups, kH, kW).
You can use unsqueeze() to add fake batch and channel dimensions thus having sizes: input: (1, 1, 100, 100) and weight: (1, 1, 3, 3).
torch.nn.functional.conv2d(Z.unsqueeze(0).unsqueeze(0), filters.unsqueeze(0).unsqueeze(0))
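If you want the result back as a plain 2D map afterwards, you can squeeze the fake dimensions away again:
out = torch.nn.functional.conv2d(Z.unsqueeze(0).unsqueeze(0),
                                 filters.unsqueeze(0).unsqueeze(0)).squeeze()
print(out.shape)  # torch.Size([98, 98]) for a 100x100 input and a 3x3 kernel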
| https://stackoverflow.com/questions/70881910/ |
Run out of memory trying to create a tensor of size [2191, 512] with pytorch to save data from movie frames using CLIP | I'm using pytorch for the first time and I'm facing a problem I don't think I should. I currently have selected 2919 frames of a movie in jpg. I'm trying to transform all those images into a single tensor. I'm using CLIP to transform each image into a tensor of size [1, 512]. In the end, I expected to have a tensor of size [2919, 512], which should not use that much memory. But my code never finishes running and I can only assume I'm doing something terribly wrong.
First I'm doing my import and loading the model:
import torch
import clip
from glob import glob
from PIL import Image
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
Secondly, I'm reading the paths of all the images and initializing the "film" tensor with random values to overwrite. I tried generating an empty one and concatenating, but that also consumed too much memory:
files = glob(r"Films/**/*.jpg", recursive=True)
film = torch.rand((len(files), 512), dtype=torch.float32, device=device)
film_frame_count = 0
for file in files:
    print("Frame " + str(film_frame_count) + " out of " + str(len(files)))
    film[film_frame_count] = model.encode_image(preprocess(Image.open(file)).unsqueeze(0).to(device))[0]
    film_frame_count += 1
torch.save(film, 'output_tensor/' + film_code[1])
If anyone could point it out what I'm doing wrong I would appreciate.
The problem ended up being caused by PyTorch storing gradients for the whole computation graph; I needed to indicate that I didn't want them by wrapping the loop like this:
with torch.no_grad():
    # my code
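For reference, a minimal sketch of the fixed loop (variable names follow the question):
with torch.no_grad():
    for i, file in enumerate(files):
        print("Frame " + str(i) + " out of " + str(len(files)))
        film[i] = model.encode_image(preprocess(Image.open(file)).unsqueeze(0).to(device))[0]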
| https://stackoverflow.com/questions/70885777/ |
Pytorch `backward()` updates multiple models | Can anyone tell me why the gradients of the discriminator change as well and if there is a way to avoid it?
for i in range(2):
    X_fake = gen_model(z)
    pred_real = disc_model(X)
    pred_fake = disc_model(X_fake.detach())
    disc_loss = (loss_fn(pred_real, y) + loss_fn(pred_fake, y)) / 2
    disc_optimizer.zero_grad()
    disc_loss.backward()
    disc_optimizer.step()
    pred_fake = disc_model(X_fake)
    gen_loss = loss_fn(pred_fake, y)
    gen_optimizer.zero_grad()
    i == 1 and print_grads(disc_model)  # Checkpoint 1
    gen_loss.backward()
    i == 1 and print_grads(disc_model)  # Checkpoint 2
    gen_optimizer.step()
This is the rest of the code.
import torch
import torch.nn as nn
class Generator(nn.Module):
    def __init__(self):
        super(Generator, self).__init__()
        self._linear = nn.Sequential(nn.Linear(1, 5))

    def forward(self, X):
        return self._linear(X)

class Discriminator(nn.Module):
    def __init__(self):
        super(Discriminator, self).__init__()
        self._linear = nn.Sequential(nn.Linear(5, 1))

    def forward(self, X):
        return self._linear(X)

def print_grads(model):
    for params in model.parameters():
        print(params.grad)
# Build the model and data.
gen_model = Generator()
gen_optimizer = torch.optim.Adam(gen_model.parameters(), 1)
disc_model = Discriminator()
disc_optimizer = torch.optim.Adam(disc_model.parameters(), 1)
loss_fn = torch.nn.BCEWithLogitsLoss()
z = torch.rand((1, 1))
X = torch.rand((1, 5))
y = torch.rand((1, 1))
| The gradients of the discriminator have been updated because you have backpropagated the loss gen_loss through the discriminator up to the generator itself. In your current formulation of the training loop, this is not a problem since only the generator's parameters will get updated on the next line with gen_optimizer.step(). In other words: yes the gradients of the discriminator will be updated (they change from #1 to #2) but these won't be used to update the parameters of the discriminator. As long as you properly clear the gradient cache with zero_grad before backpropagating, you will be fine.
A common practice is to freeze the discriminator when training the generator, this avoids unnecessary gradient computation:
# freeze discriminator
disc_model.requires_grad_(False)
# train the generator
gen_optimizer.zero_grad()
gen_loss.backward()
gen_optimizer.step()
# unfreeze discriminator for next step
disc_model.requires_grad_(True)
| https://stackoverflow.com/questions/70886018/ |
Torchscript/C++ jit::trace model - Accessing layers parameters | I have a model I trained in python, traced using torch.jit.trace, and load into C++ using torch::jit::load.
Is there a way to access the last layer to pull the value for the models required output depth (for example, if it is a Conv2D layer going from 16 -> 2, I want to predefine a tensor for a depth [b,d->2,x,y] of 2)?
Not the most elegant way of solving this, but the most straightforward was just passing a dummy tensor through and accessing the output shape. Another approach I tried was walking the parameter list and looking for "softmax", but unfortunately I couldn't guarantee everyone's model spells it the same way. If someone else has a good answer for this feel free to share, but this will have to do for now.
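For reference, a sketch of the dummy-tensor trick in Python (the same idea carries over to a torch::jit::load-ed module in C++; the path and shapes here are placeholders):
import torch

traced = torch.jit.load("model.pt")  # placeholder path
dummy = torch.zeros(1, 16, 64, 64)   # any input shape the model accepts
depth = traced(dummy).shape[1]       # channel depth of the output, e.g. 2 for a 16 -> 2 head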
| https://stackoverflow.com/questions/70886218/ |
Normalize MNIST in PyTorch | I am trying to normalize MNIST dataset in PyTorch 1.9 and Python 3.8 to be between the range [0, 1] with the code (batch_size = 32).
# Specify path to MNIST dataset-
path_to_data = "path_to_dataset"
# Define transformation(s) to be applied to dataset-
transforms_MNIST = transforms.Compose(
[
transforms.ToTensor(),
transforms.Normalize(mean = (0.1307,), std = (0.3081,))
]
)
# Load MNIST dataset-
train_dataset = torchvision.datasets.MNIST(
# root = './data', train = True,
root = path_to_data + "data", train = True,
transform = transforms_MNIST, download = True
)
test_dataset = torchvision.datasets.MNIST(
# root = './data', train = False,
root = path_to_data + "data", train = False,
transform = transforms_MNIST
)
# Create training and testing dataloaders-
train_loader = torch.utils.data.DataLoader(
dataset = train_dataset, batch_size = batch_size,
shuffle = True
)
test_loader = torch.utils.data.DataLoader(
dataset = test_dataset, batch_size = batch_size,
shuffle = False
)
print(f"Sizes of train_dataset: {len(train_dataset)} and test_dataet: {len(test_dataset)}")
print(f"Sizes of train_loader: {len(train_loader)} and test_loader: {len(test_loader)}")
# Sizes of train_dataset: 60000 and test_dataet: 10000
# Sizes of train_loader: 1875 and test_loader: 313
# Sanity check-
print(f"train_dataset: min pixel value = {train_dataset.data.min().numpy():.3f} &"
f" max pixel value = {train_dataset.data.max().numpy():.3f}")
# train_dataset: min pixel value = 0.000 & max pixel value = 255.000
print(f"test_dataset: min pixel value = {test_dataset.data.min().numpy():.3f} &"
f" max pixel value = {test_dataset.data.max().numpy():.3f}")
# test_dataset: min pixel value = 0.000 & max pixel value = 255.000
print(f"len(train_loader) = {len(train_loader)} & len(test_loader) = {len(test_loader)}")
# len(train_loader) = 1875 & len(test_loader) = 313
# Sanity check-
len(train_dataset) / batch_size, len(test_dataset) / batch_size
# (1875.0, 312.5)
# Get some random batch of training images & labels-
images, labels = next(iter(train_loader))
# You get x images due to the specified batch size-
print(f"images.shape: {images.shape} & labels.shape: {labels.shape}")
# images.shape: torch.Size([32, 1, 28, 28]) & labels.shape: torch.Size([32])
# Get min and max values for normalized pixels in mini-batch-
images.min(), images.max()
# (tensor(-0.4242), tensor(2.8215))
The min and max for 'images' should be between 0 and 1; instead, they are -0.4242 and 2.8215. What is going wrong?
This happens because Normalize applies what is actually known (also) as standardization: output = (input - mean) / std.
ToTensor already scales the pixels to [0, 1]; Normalize then maps that range to [(0 - 0.1307)/0.3081, (1 - 0.1307)/0.3081] ≈ [-0.4242, 2.8215], which is exactly what you observed. So if [0, 1] is the range you want, simply remove Normalize from the pipeline — that scaling is already performed by ToTensor when the image is loaded.
| https://stackoverflow.com/questions/70892017/ |
Transform not getting applied on CustomDataset Pytorch | I images with folder structure as following :
root_dir
│
└───folder1
│ │ file011.png
│ │ file012.png
│
└───folder2
| │ file021.png
| │ file022.png
|
└───folder3
│ file031.png
│ file032.png
...
Now I wanted to create a custom Dataset without labels in PyTorch (since I am using it for GANs).
So I did the following :
class CurrencyDataset(Dataset):
    '''
    Currency Dataset with no labels
    '''
    def __init__(self, type, transform):
        '''
        Parameters
        type : "Train" or "Test"
        transform : Transformations to be applied
        '''
        root_dir = "indian-currency-notes-classifier/"
        # Storing images in a list
        self.data = []
        dir = os.path.join(root_dir, type)
        for note in os.listdir(dir):
            note_dir = os.path.join(dir, note)
            for img_name in os.listdir(note_dir):
                img = io.imread(os.path.join(note_dir, img_name))
                self.data.append(img)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        if torch.is_tensor(idx):
            idx = idx.tolist()
        x = self.data[idx]
        if self.transform:
            x = self.transform(x)
        return x
and used the following transformations :
transform = transforms.Compose([
transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
transforms.RandomRotation(15),
transforms.Resize((224, 224)),
transforms.ToTensor(),
])
train_ds = CurrencyDataset("Train", transform)
but on checking the shape and values of the Tensors, I found out that transformations were not getting applied
train_ds.data[0].shape
>> (1072, 1154, 3)
I am a bit new to PyTorch, so please let me know if I am doing something wrong here or what needs to be done to make it correct.
| You forgot to assign the transform object as an attribute of the instance. This, in turn, means self.transform evaluates to None in the __getitem__ function. Simply add the following in the __init__:
self.transform = transform
Additonally, you are not calling the proper function (__getitem__) with train_ds.data[0].shape, instead it should be train_ds[0] (in other words: train_ds.__getitem__(0)).
Finally, your pipeline doesn't have the correct order of transforms: T.Normalize operates on a torch.Tensor, so T.ToTensor must come before it:
transform = T.Compose([
T.RandomRotation(15),
T.Resize((224, 224)),
T.ToTensor(),
T.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])
| https://stackoverflow.com/questions/70892948/ |
Skip bad data points when loading data using DataLoader | I am trying to perform an image classification task using mini-imagenet dataset. The data that I want to use, contains a few bad data points(I am not sure why). I would like to load this data and train my model on it. In the process, I want to skip the bad data points completely. How do I do this?
The data loader I am using is as follows:
class MiniImageNet(Dataset):
def __init__(self, root, train=True,
transform=None,
index_path=None, index=None, base_sess=None):
if train:
setname = 'train'
else:
setname = 'test'
self.root = os.path.expanduser(root)
self.transform = transform
self.train = train # training set or test set
self.IMAGE_PATH = os.path.join(root, 'miniimagenet/images')
self.SPLIT_PATH = os.path.join(root, 'miniimagenet/split')
csv_path = osp.join(self.SPLIT_PATH, setname + '.csv')
lines = [x.strip() for x in open(csv_path, 'r').readlines()][1:]
self.data = []
self.targets = []
self.data2label = {}
lb = -1
self.wnids = []
for l in lines:
name, wnid = l.split(',')
path = osp.join(self.IMAGE_PATH, name)
if wnid not in self.wnids:
self.wnids.append(wnid)
lb += 1
self.data.append(path)
self.targets.append(lb)
self.data2label[path] = lb
self.y = self.targets
if train:
image_size = 84
self.transform = transforms.Compose([
transforms.RandomResizedCrop(image_size),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])])
else:
image_size = 84
self.transform = transforms.Compose([
transforms.Resize([image_size, image_size]),
transforms.CenterCrop(image_size),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])])
def __len__(self):
return len(self.data)
def __getitem__(self, i):
path, targets = self.data[i], self.targets[i]
image = self.transform(Image.open(path).convert('RGB'))
return image, targets
I tried to use a try-except sequence, but in that case, instead of skipping, the dataloader is returning None, causing an error. How do I completely skip a datapoint in a dataloader?
Try removing the bad data at the end of the __init__ function:
for i in range(len(self.data) - 1, -1, -1):
    # iterate in reverse so deletions don't shift the indices still to be visited
    if is_bad_data(self.data[i], self.targets[i]):
        del self.data[i]
        del self.targets[i]
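Alternatively, if the bad points can only be detected while loading, a sketch of another approach is to return None from __getitem__ in the except branch and drop those samples in a custom collate_fn (this assumes you can tolerate occasionally smaller batches):
from torch.utils.data import DataLoader
from torch.utils.data.dataloader import default_collate

def collate_skip_none(batch):
    # drop samples for which __getitem__ returned None
    batch = [sample for sample in batch if sample is not None]
    return default_collate(batch)

loader = DataLoader(dataset, batch_size=32, collate_fn=collate_skip_none)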
| https://stackoverflow.com/questions/70911801/ |
Pytorch features and classes from .npy files | I am very rookie in moving from TensorFlow to Pytorch. In tensorflow, I can simply load features and labels from separate .npy files and train a CNN using them. It is simple as below:
def finetune_resnet(file_train_classes, file_train_features, name_model_to_save):
#Lets load features and classes first
print("Loading, organizing and pre-processing features")
num_classes = 12
x_train=np.load(file_train_features)
y_train=np.load(file_train_classes)
#Defining train as 70% and validation 30% of the data
#The partition is stratified with a fixed random state
#Therefore, for all networks, the partition will be the same
x_train, x_validation, y_train, y_validation = train_test_split(x_train, y_train, test_size=0.30, stratify=y_train, random_state=42)
print("transforming to categorical")
y_train = to_categorical(y_train, num_classes)
y_validation = to_categorical(y_validation, num_classes)
y_train= tf.constant(y_train, shape=[y_train.shape[0], num_classes])
y_validation= tf.constant(y_validation, shape=[y_validation.shape[0], num_classes])
print("preprocessing data")
#Preprocessing data
x_train = x_train.astype('float32')
x_validation=x_validation.astype('float32')
x_train /= 255.
x_validation /= 255.
print("Setting up the network")
#Parameters for network training
batch_size = 32
epochs=300
sgd = SGD(lr=0.01)
trainAug = ImageDataGenerator(rotation_range=30,zoom_range=0.15,width_shift_range=0.2,height_shift_range=0.2,shear_range=0.15,horizontal_flip=True,fill_mode="nearest")
print("Compiling the network")
#Load model and prepare it for fine tuning
baseModel = ResNet50(weights="imagenet", include_top=False,
input_tensor=Input(shape=(224, 224, 3)))
# construct the head of the model that will be placed on top of the
# the base model
headModel = baseModel.output
headModel = Flatten(name="flatten")(headModel)
headModel = Dense(512, activation="relu")(headModel)
headModel = Dropout(0.5)(headModel)
headModel = Dense(num_classes, activation="softmax")(headModel)
# place the head FC model on top of the base model (this will become
# the actual model we will train)
model = Model(inputs=baseModel.input, outputs=headModel)
model.compile(loss="categorical_crossentropy", optimizer=sgd, metrics=["accuracy"])
trainAug.fit(x_train)
# Fit the model on the batches generated by datagen.flow().
print("[INFO] training head...")
H=model.fit(trainAug.flow(x_train, y_train, batch_size=batch_size), steps_per_epoch=x_train.shape[0] // batch_size, epochs=epochs, validation_data=(x_validation, y_validation), callbacks=callbacks)
However, I have no idea how to load, train and evaluate training and testing data if loading these data from .npy files. I checked a tutorial that loads training data from folders, which is not what I want.
How can I train and test a RESNET-50 model starting with imagenet weights loading train and test data from .npy files with Pytorch?
P.S.: most PyTorch training loops require a <class 'torch.utils.data.dataloader.DataLoader'> input to train. Is it possible to transform my training data in numpy arrays into such a format?
P.P.S.: you can try with my data here
| It seems like you need to create a custom Dataset.
class MyDataSet(torch.utils.data.Dataset):
    def __init__(self, x, y):
        super(MyDataSet, self).__init__()
        # store the raw tensors (x and y are the paths to the .npy files)
        self._x = np.load(x)
        self._y = np.load(y)

    def __len__(self):
        # a Dataset must know its size
        return self._x.shape[0]

    def __getitem__(self, index):
        x = self._x[index, :]
        y = self._y[index, :]
        return x, y
You can further use Dataset methods to split MyDataSet into train and validation (e.g., using torch.utils.data.random_split).
You might also find TensorDataset useful.
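For example, a minimal sketch combining both, reusing the question's file variables (note random_split is random, not stratified like your Keras split):
import numpy as np
import torch
from torch.utils.data import TensorDataset, random_split, DataLoader

x = torch.from_numpy(np.load(file_train_features)).float()
y = torch.from_numpy(np.load(file_train_classes)).long()
dataset = TensorDataset(x, y)
n_val = int(0.3 * len(dataset))
train_ds, val_ds = random_split(dataset, [len(dataset) - n_val, n_val])
train_loader = DataLoader(train_ds, batch_size=32, shuffle=True)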
| https://stackoverflow.com/questions/70925539/ |
How to build a tensor from one tensor of contents and another of indices? | I'm trying to assemble a tensor based on the contents of two other tensors, like so:
I have a 2D tensor called A, with shape I * J, and another 2D tensor called B, with shape M * N, whose elements are indices into the 1st dimension of A.
I want to obtain a 3D tensor C with shape M * N * J such that C[m,n,:] == A[B[m,n],:] for all m in [0, M) and n in [0, N).
I could do this using nested for-loops to iterate over all indices in M and N, assigning the right values to C at each one, but M and N are large so this is quite slow. I suspect there's some nicer, faster way of doing this using clever slicing or a built-in pytorch function, but I don't know what it would be. It looks a bit like somewhere one would use torch.gather(), but that requires all tensors to have the same number of dimensions. Does anyone know how this ought to be done?
EDIT: torch.index_select(input, dim, index) is almost what I want, but it won't work here because it requires that index be a 1D tensor, while my tensor of indices is 2D.
You could achieve this by flattening B, which lets you index the first dimension of A on a single axis. A reshape is then required to recover the final shape:
>>> A[B.flatten(),:].reshape(*B.shape, A.size(-1))
Indexing with A[B.flatten(),:] is equivalent to torch.index_select(A, 0, B.flatten()).
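Note that plain advanced indexing performs the flatten-and-reshape for you; a quick check on dummy data:
import torch

A = torch.rand(7, 4)
B = torch.randint(0, 7, (5, 6))
C = A[B.flatten(), :].reshape(*B.shape, A.size(-1))
assert torch.equal(C, A[B])              # advanced indexing gives the same result
assert torch.equal(C[2, 3], A[B[2, 3]])  # matches the formula from the question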
| https://stackoverflow.com/questions/70926905/ |
Pytorch - Problem with fine tune training from custom features and classes | The core of my problem is the fact that my features come from NumPy files (.npy).
Therefore I need the following class in my code
import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
from torch.utils.data import Dataset, DataLoader
from torchvision.models import resnet50
import time
import copy
class MyDataSet(torch.utils.data.Dataset):
    def __init__(self, x, y, transform=None):
        super(MyDataSet, self).__init__()
        # store the raw tensors
        self._x = np.load(x)
        self._y = np.load(y)
        self.transform = transform

    def __len__(self):
        # a Dataset must know its size
        return self._x.shape[0]

    def __getitem__(self, index):
        x = self._x[index, :]
        y = self._y[index, :]
        return x, y
To convert my NumPy files to DataLoaders I do the following. The code below seems to work (at least, no errors are returned)
#Transform dataset
transform = transforms.Compose([transforms.ToTensor()])
dataset = MyDataSet("train1-features.npy","train1-classes.npy",transform=transform)
dataloader = DataLoader(dataset, batch_size=32)
I am trying to fine-tune a RESNET-50 network in these data with 12 classes. Here is what I do
def set_parameter_requires_grad(model, feature_extracting):
    if feature_extracting:
        for param in model.parameters():
            param.requires_grad = False
feature_extract = True
batch_size = 8
num_epochs = 15
num_classes=12
model_ft = resnet50(pretrained=True)
set_parameter_requires_grad(model_ft, feature_extract)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, num_classes)
input_size = 224
if torch.cuda.is_available():
model_ft.cuda()
params_to_update = model_ft.parameters()
print("Params to learn:")
if feature_extract:
    params_to_update = []
    for name, param in model_ft.named_parameters():
        if param.requires_grad == True:
            params_to_update.append(param)
            print("\t", name)
else:
    for name, param in model_ft.named_parameters():
        if param.requires_grad == True:
            print("\t", name)
# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(params_to_update, lr=0.001, momentum=0.9)
# Setup the loss fxn
criterion = nn.CrossEntropyLoss()
Finally, here is the problematic training function
for epoch in range(num_epochs):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(dataloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data
        # transfer labels and inputs to cuda()
        inputs, labels = inputs.cuda(), labels.cuda()
        # zero the parameter gradients
        optimizer_ft.zero_grad()
        # forward + backward + optimize
        outputs = model_ft(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer_ft.step()
        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:  # print every 2000 mini-batches
            print(f'[{epoch + 1}, {i + 1:5d}] loss: {running_loss / 2000:.3f}')
            running_loss = 0.0
This returns me the following error once I execute the code:
Traceback (most recent call last):
File "train_my_data_example.py", line 89, in <module>
for i, data in enumerate(dataloader, 0):
File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/dataloader.py", line 517, in __next__
data = self._next_data()
File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/dataloader.py", line 557, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "train_my_data_example.py", line 29, in __getitem__
y = self._y[index, :]
IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed
The error is clearly the dataloader variable, so is this creation ok? I mean, I am loading NumPy data and transforming it to a data loader as below:
transform = transforms.Compose([transforms.ToTensor()])
dataset = MyDataSet("train1-features.npy","train1-classes.npy",transform=transform)
dataloader = DataLoader(dataset, batch_size=32)
Is there any error in my data loader or is the problem the training loop of Pytorch?
P.S.: you can reproduce my code by downloading the classes and features here
You are trying to index the second axis of an array which only has a single dimension. Simply replace y = self._y[index, :] with y = self._y[index].
Actually, when positioned last, the : is not required at all, as trailing dimensions are selected in full by default.
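So the corrected __getitem__ reads:
def __getitem__(self, index):
    x = self._x[index]  # same as self._x[index, :]
    y = self._y[index]  # _y is 1-D, so no second index
    return x, y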
| https://stackoverflow.com/questions/70929400/ |
Change the layer format to a different image resolution | There is a class where everything is set to 32x32 image format Taken from here
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)  # here I changed the image channel from 3 to 1
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 64, 5)
        self.fc1 = nn.Linear(64 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 22)  # here I changed the number of output neurons from 10 to 22

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = torch.flatten(x, 1)  # flatten all dimensions except batch
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
How do I adapt all of this to a 96x96 input resolution with 1 channel (grayscale)?
| At resolution 32x32 the output of conv2 is shaped (1, 64, 5, 5). On the other hand, if the input is at resolution 96x96, it will be (1, 64, 21, 21). This means fc1 needs to have 28_224 input neurons.
>>> self.fc1 = nn.Linear(64 * 21 * 21, 120)
Alternatively, you can use nn.LazyLinear, which will infer this number for you from the first forward pass.
>>> self.fc1 = nn.LazyLinear(120)
| https://stackoverflow.com/questions/70936822/ |
I am getting this error of Float.Tensor and cuda.FloatTenson mismatch | I am getting this error while running the training code of a model.
Traceback (most recent call last):
File "train.py", line 273, in <module>
train_loss[epoch - 1] = process_epoch(
File "train.py", line 240, in process_epoch
loss = loss_fn(model, batch)
File "train.py", line 221, in <lambda>
loss_fn = lambda model, batch: weak_loss(model, batch, normalization="softmax")
File "train.py", line 171, in weak_loss
corr4d = model(batch).to("cuda")
File "/home/srtf/anaconda3/envs/ncnet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/srtf/ncnet/lib/model.py", line 263, in forward
feature_A = self.FeatureExtraction(tnf_batch['source_image'])
File "/home/srtf/anaconda3/envs/ncnet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/srtf/ncnet/lib/model.py", line 84, in forward
features = self.model(image_batch)
File "/home/srtf/anaconda3/envs/ncnet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/srtf/anaconda3/envs/ncnet/lib/python3.8/site-packages/torch/nn/modules/container.py", line 100, in forward
input = module(input)
File "/home/srtf/anaconda3/envs/ncnet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/srtf/anaconda3/envs/ncnet/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 353, in forward
return self._conv_forward(input, self.weight)
File "/home/srtf/anaconda3/envs/ncnet/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 349, in _conv_forward
return F.conv2d(input, weight, self.bias, self.stride,
RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same
Cuda is there on the system. Where do I need to make changes in the code?
| Your input needs to be sent to the correct device:
>>> corr4d = model(batch.cuda())
Which will copy the batch to the GPU device ('cuda:0' by default).
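A common device-agnostic pattern (the loop and names are illustrative) that keeps the model and the data on the same device:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
for batch in dataloader:
    output = model(batch.to(device))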
| https://stackoverflow.com/questions/70936889/ |
RuntimeError: mat1 and mat2 shapes cannot be multiplied (4x73034 and 200x120) | Building a Neural Network layers for Skin detection dataset, and got a error here. I know i have done some mistake but cannot figure it out. Error is am getting is after taking image size 224*224 and channels 3: RuntimeError: mat1 and mat2 shapes cannot be multiplied (4x73034 and 200x120)
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 16, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(16, 26, 5)
        self.fc1 = nn.Linear(8 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 86)
        self.fc3 = nn.Linear(86, 2)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = torch.flatten(x, 1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net().to(device)
print(net)
These are the layers and Net module
<ipython-input-41-8c9bafb31c44> in forward(self, x)
16 x = self.pool(F.relu(self.conv2(x)))
17 x = torch.flatten(x,1)
---> 18 x = F.relu(self.fc1(x))
19 x = F.relu(self.fc2(x))
20 x = self.fc3(x)
Can anyone help me solve this.
| As Anant said, you need to match the flattened conv2 dimension (73034) to be the input dimension for the fc1 layer.
self.fc1 = nn.Linear(73034, 120)
The formula to calculate the output of each conv layer:
[(height or width) - kernel size + 2*padding] / stride + 1
For the following I will use the dimensions (Channels, Height, Width)
Input (3,224,224) -> conv1 -> (16,220,220) -> pool -> (16,110,110) -> conv2 -> (26,106,106) -> pool -> (26,53,53) -> flatten -> (73034)
It seems your batch size is 4, which refers to the "4" in (4x73034). If you print the dimensions of the output of conv1 or conv2 layers, the format will be (Batch, Channels, Height, Width).
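If you don't want to chase the formula by hand, you can also probe the flattened size empirically with a dummy tensor (reusing the question's net):
x = torch.zeros(1, 3, 224, 224)
features = net.pool(F.relu(net.conv2(net.pool(F.relu(net.conv1(x))))))
print(features.flatten(1).shape)  # torch.Size([1, 73034])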
| https://stackoverflow.com/questions/70937513/ |
Select elements of Tensor based on index tensor along same dimensions | I have the following two tensors
input shape: 16 32 32 3
index shape: 16 32 32 2
output shape: 16 32 32 3
The formula for the output would be:
output[b, h, w] = input[b, index[b, h, w, 0], index[b, h, w, 1]]
I tried to use torch.gather but I was not able to formulate the previous assignment.
Does anyone know how to do this in an efficient manner? Thanks!
For context: input contains a batch of 16 elemens where each one is a tensor of 32x32 that containts 3D points. index is a mapping from position to 3D point.
| You can achieve this by unraveling the indices from index (on dimensions 1 and 2) in order to index input on a single dimension using torch.gather.
This requires expanding the shape of the indexer to fit the shape of input:
Here is an example with some dummy data:
>>> x = torch.rand(16, 32, 32, 3)
>>> index = torch.randint(0, 10, (16,32,32,2))
Some manipulation on index is required to unravel the values:
>>> unraveled = x.size(2)*index[..., 0] + index[..., 1]  # linear index h*W + w
>>> u = unraveled.flatten(1).unsqueeze(-1).expand(-1, -1, x.size(-1))
Now u, reshaped expanded version of index has a shape of (16, 1024, 3).
The indexed tensor also needs to be flattened:
>>> x.flatten(1, 2)
torch.Size([16, 1024, 3])
Finally, you can gather on dim=1 (keep in mind the result needs to be reshaped to the desired shape i.e. the input's shape):
>>> out = x.flatten(1, 2).gather(1, u).reshape_as(x)
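You can convince yourself the result matches the formula from the question with a brute-force spot check:
for b, h, w in [(0, 0, 0), (3, 5, 7), (15, 31, 31)]:
    assert torch.equal(out[b, h, w], x[b, index[b, h, w, 0], index[b, h, w, 1]])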
| https://stackoverflow.com/questions/70937836/ |
How to perform matrix equality? | I have two matrices. A with size 160 x 250 and B with size 3200 x 250.
I want to get the set intersection of each row of A with each row of B to get a 160 x 3200 vector. (The set size is 250 elements)
Any ideas how to implement this?
I'm thinking it should require torch.eq, but not sure how to change the dimensions. For example:
result = torch.sum(torch.eq(A[0], B), dim=1) would give me a 3200 element vector comparison with just the 0th row of A. I want for all rows of A (160)
Assuming you want to check equality between the 160x3200 possible pairs of 250-feature vectors, you can do so with a broadcasting trick:
>>> (A[:, None] == B[None]).all(-1)
This yields a (160, 3200) boolean tensor in which entry (i, j) is True iff row i of A equals row j of B.
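If what you actually want is the number of matching positions per pair (generalising your torch.sum(torch.eq(...)) attempt) rather than whole-row equality, sum instead of all:
>>> counts = (A[:, None, :] == B[None, :, :]).sum(-1)  # shape (160, 3200)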
| https://stackoverflow.com/questions/70938794/ |
PyTorch doubly stochastic normalisation of 3D tensor | I'm trying to implement double stochastic normalisation of an N x N x P tensor as described in Section 3.2 in Gong, CVPR 2019. This can be done easily in the N x N case using matrix operations but I am stuck with the 3D tensor case. What I have so far is
def doubly_stochastic_normalise(E):
    """E: n x n x f"""
    E = E / torch.sum(E, dim=1, keepdim=True)  # normalised across rows
    F = E / torch.sum(E, dim=0, keepdim=True)  # normalised across cols
    E = torch.einsum('ijp,kjp->ikp', E, F)
    return E
but I'm wondering if there is a method without einsum.
In this setting, you can always fall back to using torch.matmul (batched matrix multiplication, to be more precise). However, this requires you to transpose the axes. Recall the matrix multiplication for two 3D inputs; in einsum notation, it gives us:
bik,bkj->bij
Notice how the k dimension gets reduces. To get to this setting, we need to transpose the inputs of the operator. In your case we have:
ijp ? kjp -> ikp
↓ ↓ ↑
pij @ pjk -> pik
This translates to:
>>> (E.permute(2,0,1) @ F.permute(2,1,0)).permute(1,2,0)
# ijp ➝ pij kjp ➝ pjk pik ➝ ikp
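A quick equivalence check on dummy data:
import torch

E = torch.rand(6, 6, 3)
F = torch.rand(6, 6, 3)
a = torch.einsum('ijp,kjp->ikp', E, F)
b = (E.permute(2, 0, 1) @ F.permute(2, 1, 0)).permute(1, 2, 0)
assert torch.allclose(a, b)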
You can argue your method is not only shorter but also a lot more readable. I would therefore stick with torch.einsum. The reason why the einsum operator is so useful here is because you can perform axes transpositions on the fly.
| https://stackoverflow.com/questions/70950648/ |
Miniforge Conda "PackagesNotFoundError" on ARM processor for PyTorch | I am unable to install any packages with miniforge 3 (conda 4.11.0).
I am attempting this on a Jetson Nano Developer Kit running Jetpack. Initially it had conda installed but it seems to have gone missing, so I decided to reinstall conda. It looks like the base version of anaconda/miniconda is having issues running on ARM processors, and so I downloaded miniforge which apparently is working.
I have set up an environment successfully, but attempting to download pytorch gives the following error:
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
PackagesNotFoundError: The following packages are not available from current channels:
- pytorch
Current channels:
- https://conda.anaconda.org/pytorch/linux-aarch64
- https://conda.anaconda.org/pytorch/noarch
- https://conda.anaconda.org/abinit/linux-aarch64
- https://conda.anaconda.org/abinit/noarch
- https://conda.anaconda.org/matsci/linux-aarch64
- https://conda.anaconda.org/matsci/noarch
- https://conda.anaconda.org/conda-forge/linux-aarch64
- https://conda.anaconda.org/conda-forge/noarch
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
This is for Python 3.7.12. It seems this issue persists no matter what version of pytorch I try to install.
I am however able to install some other packages, as I was able to install beautifulsoup4.
| There is no linux-aarch64 version of pytorch on the default conda channel, see here
This is of course package-specific. E.g., there is a linux-aarch64 version of beautifulsoup4, which is why you were able to install it without an issue.
You can try to install from a different channel that claims to provide a pytorch for aarch64, e.g.
conda install -c kumatea pytorch
| https://stackoverflow.com/questions/70954061/ |
How to train Pytorch model on custom data | I am very rookie in transferring my code from Keras/Tensorflow to Pytorch and I am trying to retrain my TF model in Pytorch, however, my dataset has some particularities which make it difficult to me to make it run in Pytorch.
To understand my issues, recall that I have a custom dataset initialized this way:
class MyDataSet(torch.utils.data.Dataset):
    def __init__(self, x, y, transform=None):
        super(MyDataSet, self).__init__()
        # store the raw tensors
        self._x = np.load(x)
        self._y = np.load(y)
        self._x = np.swapaxes(self._x, 3, 2)
        self._x = np.swapaxes(self._x, 2, 1)
        self.transform = transform

    def __len__(self):
        # a Dataset must know its size
        return self._x.shape[0]

    def __getitem__(self, index):
        x = self._x[index, :]
        y = self._y[index]
        return x, y
The shape of _self._x is (12000, 3, 224, 224) and the shape of self._y is (12000,)
I am fine-tuning a pre-trained RESNET-50 in this data, and the training happens the following way:
import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
import numpy as np
from torch.utils.data import Dataset, DataLoader
from torchvision.models import resnet50
import time
import copy
def set_parameter_requires_grad(model, feature_extracting):
if feature_extracting:
for param in model.parameters():
param.requires_grad = False
#Transform dataset
print("Loading Data")
transform = transforms.Compose([transforms.ToTensor()])
dataset = MyDataSet("me/train1-features.npy","/me/train1-classes.npy",transform=transform)
dataloader = DataLoader(dataset, batch_size=4)
print("Configuring network")
feature_extract = True
num_epochs = 15
num_classes=12
model_ft = resnet50(pretrained=True)
set_parameter_requires_grad(model_ft, feature_extract)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, num_classes)
if torch.cuda.is_available():
model_ft.cuda()
params_to_update = model_ft.parameters()
print("Params to learn:")
if feature_extract:
params_to_update = []
for name,param in model_ft.named_parameters():
if param.requires_grad == True:
params_to_update.append(param)
print("\t",name)
else:
for name,param in model_ft.named_parameters():
if param.requires_grad == True:
print("\t",name)
# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(params_to_update, lr=0.001, momentum=0.9)
# Setup the loss fxn
criterion = nn.CrossEntropyLoss()
#Train (how to validate?)
for epoch in range(num_epochs): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(dataloader, 0):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data
#transfer labels and inputs to cuda()
inputs,labels=inputs.cuda(), labels.cuda()
# zero the parameter gradients
optimizer_ft.zero_grad()
# forward + backward + optimize
outputs = model_ft(inputs)
loss = loss_func(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 2000 == 1999: # print every 2000 mini-batches
print(f'[{epoch + 1}, {i + 1:5d}] loss: {running_loss / 2000:.3f}')
running_loss = 0.0
However, whenever I run this code, I receive the following error
Traceback (most recent call last):
File "train_my_data_example.py", line 114, in <module>
outputs = model_ft(inputs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torchvision/models/resnet.py", line 249, in forward
return self._forward_impl(x)
File "/usr/local/lib/python3.8/dist-packages/torchvision/models/resnet.py", line 232, in _forward_impl
x = self.conv1(x)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/conv.py", line 399, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/conv.py", line 395, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.cuda.ByteTensor) and weight type (torch.cuda.FloatTensor) should be the same
I also can do the train and validation procedures normally on TF/Keras, but I don't know how to do that in my custom Dataset with Pytorch.
How can I solve my problem and also run train/val loop with Pytorch in my custom data?
It seems that np.load is loading raw uint8 data into _x, and since the transform is never applied in __getitem__, the DataLoader collates the raw array into a ByteTensor. You can fix this by making a small change in __getitem__:
def __getitem__(self, index):
    x = self._x[index, :]
    y = self._y[index]
    return x.astype(np.float32), y
| https://stackoverflow.com/questions/70955162/ |
Can I debug a python package after installing it? | I'm interested in a package and wanted to play around with the code: https://github.com/aiqm/torchani
The package itself is not complex and the key modules are included in the torchani folder. I wanted to use the VSCode debugger to do some experiments with the components and track the code. Do I need to run python setup.py --install, or I should simply go to the folder and run the modules without installing?
The problem is: there will be a lot of relative import issues if I directly run the code in the parent folder. If I install the package, then the code will probably be compiled and my changes will not be executed.
| You can install the package with python setup.py install (or pip install [-e] .).
For the debugging part you can use the debugger of VSCode, just set justMyCode: False in the launch.json.
| https://stackoverflow.com/questions/70959466/ |
torch.nn.functional.interpolate: difference between "linear" and "bilinear"? | In torch.nn.functional.interpolate what's the difference between the modes linear and bilinear?
To me, these are usually synonyms with regards to image resizing...
| Pytorch is explicitly differentiating between 1d interpolation (linear) and 2d interpolation (bilinear).
They differ in the dimensionality of the input argument they are allowed to work on (see here). Specifically, linear works on 3D inputs and bilinear works on 4D inputs, because the first two dimensions (mini-batch x channels) are understood not to be interpolated.
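To make the distinction concrete:
import torch
import torch.nn.functional as F

x1d = torch.rand(1, 3, 10)      # (N, C, L): 1D data
x2d = torch.rand(1, 3, 10, 10)  # (N, C, H, W): 2D data
print(F.interpolate(x1d, size=20, mode='linear', align_corners=False).shape)          # torch.Size([1, 3, 20])
print(F.interpolate(x2d, size=(20, 20), mode='bilinear', align_corners=False).shape)  # torch.Size([1, 3, 20, 20])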
| https://stackoverflow.com/questions/70959660/ |
Is there any way to change parameters in pretrained DetrFeatureExtractor? | For my model, I load pretrained version of DetrFeatureExtractor:
feature_extractor = DetrFeatureExtractor(return_tensors="pt"
,do_normalize = True
,size = 400).from_pretrained("facebook/detr-resnet-50")
But when I output parameters of this variable, I get:
DetrFeatureExtractor {
"do_normalize": true,
"do_resize": true,
"feature_extractor_type": "DetrFeatureExtractor",
"format": "coco_detection",
"image_mean": [
0.485,
0.456,
0.406
],
"image_std": [
0.229,
0.224,
0.225
],
"max_size": 1333,
"size": 800
}
which still has size = 800. Is that possible to change parameters of pretrained feature extractor and, if yes, how can I change them?
| Your values to the constructor are not taken, because you are calling .from_pretrained(), which loads all values from the respective config file of the pretrained model (in your case, the corresponding config file can be viewed here).
Even if some values might not be specified in the config, they will be primarily taken from the default values, instead of whatever you're passing before.
If you want to change attributes, you can do so after loading:
from transformers import DetrFeatureExtractor
model = DetrFeatureExtractor.from_pretrained("facebook/detr-resnet-50")
model.size = 400
print(model) # will show the correct size
I do want to point out that changing parameters of pretrained networks might lead to unexpected behavior - after all, the model wasn't originally trained with this particular setting. So just be aware that this might lead to detrimental performance, and experiment at your own risk ;-)
| https://stackoverflow.com/questions/70960986/ |
What is the official implementation of first order MAML using the higher PyTorch library? | After noticing that my custom implementation of first order MAML might be wrong I decided to google how the official way to do first order MAML is. I found a useful gitissue that suggests to stop tracking the higher order gradients. Which makes complete sense to me. No more derivatives over the derivatives. But when I tried setting it to false (so that no higher derivatives are tracked) I got that there was no more training of my models and the .grad fiedl was None. Which is obviously wrong.
Is this a bug in higher or what is going on?
To reproduce run the official MAML example higher has but slightly modified here. The main code is this though:
#!/usr/bin/env python3
#
# Copyright (c) Facebook, Inc. and its affiliates.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This example shows how to use higher to do Model Agnostic Meta Learning (MAML)
for few-shot Omniglot classification.
For more details see the original MAML paper:
https://arxiv.org/abs/1703.03400
This code has been modified from Jackie Loong's PyTorch MAML implementation:
https://github.com/dragen1860/MAML-Pytorch/blob/master/omniglot_train.py
Our MAML++ fork and experiments are available at:
https://github.com/bamos/HowToTrainYourMAMLPytorch
"""
import argparse
import time
import typing
import pandas as pd
import numpy as np
import matplotlib as mpl
mpl.use('Agg')
import matplotlib.pyplot as plt
plt.style.use('bmh')
import torch
from torch import nn
import torch.nn.functional as F
import torch.optim as optim
import higher
from support.omniglot_loaders import OmniglotNShot
def main():
argparser = argparse.ArgumentParser()
argparser.add_argument('--n_way', type=int, help='n way', default=5)
argparser.add_argument(
'--k_spt', type=int, help='k shot for support set', default=5)
argparser.add_argument(
'--k_qry', type=int, help='k shot for query set', default=15)
argparser.add_argument(
'--task_num',
type=int,
help='meta batch size, namely task num',
default=32)
argparser.add_argument('--seed', type=int, help='random seed', default=1)
args = argparser.parse_args()
torch.manual_seed(args.seed)
if torch.cuda.is_available():
torch.cuda.manual_seed_all(args.seed)
np.random.seed(args.seed)
# Set up the Omniglot loader.
# device = torch.device('cuda')
# from uutils.torch_uu import get_device
# device = get_device()
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
db = OmniglotNShot(
'/tmp/omniglot-data',
batchsz=args.task_num,
n_way=args.n_way,
k_shot=args.k_spt,
k_query=args.k_qry,
imgsz=28,
device=device,
)
# Create a vanilla PyTorch neural network that will be
# automatically monkey-patched by higher later.
# Before higher, models could *not* be created like this
# and the parameters needed to be manually updated and copied
# for the updates.
net = nn.Sequential(
nn.Conv2d(1, 64, 3),
nn.BatchNorm2d(64, momentum=1, affine=True),
nn.ReLU(inplace=True),
nn.MaxPool2d(2, 2),
nn.Conv2d(64, 64, 3),
nn.BatchNorm2d(64, momentum=1, affine=True),
nn.ReLU(inplace=True),
nn.MaxPool2d(2, 2),
nn.Conv2d(64, 64, 3),
nn.BatchNorm2d(64, momentum=1, affine=True),
nn.ReLU(inplace=True),
nn.MaxPool2d(2, 2),
Flatten(),
nn.Linear(64, args.n_way)).to(device)
# We will use Adam to (meta-)optimize the initial parameters
# to be adapted.
meta_opt = optim.Adam(net.parameters(), lr=1e-3)
log = []
for epoch in range(100):
train(db, net, device, meta_opt, epoch, log)
test(db, net, device, epoch, log)
# plot(log)
def train(db, net, device, meta_opt, epoch, log):
net.train()
n_train_iter = db.x_train.shape[0] // db.batchsz
for batch_idx in range(n_train_iter):
start_time = time.time()
# Sample a batch of support and query images and labels.
x_spt, y_spt, x_qry, y_qry = db.next()
task_num, setsz, c_, h, w = x_spt.size()
querysz = x_qry.size(1)
# TODO: Maybe pull this out into a separate module so it
# doesn't have to be duplicated between `train` and `test`?
# Initialize the inner optimizer to adapt the parameters to
# the support set.
n_inner_iter = 5
inner_opt = torch.optim.SGD(net.parameters(), lr=1e-1)
qry_losses = []
qry_accs = []
meta_opt.zero_grad()
for i in range(task_num):
with higher.innerloop_ctx(
net, inner_opt, copy_initial_weights=False,
# track_higher_grads=True,
track_higher_grads=False,
) as (fnet, diffopt):
# Optimize the likelihood of the support set by taking
# gradient steps w.r.t. the model's parameters.
# This adapts the model's meta-parameters to the task.
# higher is able to automatically keep copies of
# your network's parameters as they are being updated.
for _ in range(n_inner_iter):
spt_logits = fnet(x_spt[i])
spt_loss = F.cross_entropy(spt_logits, y_spt[i])
diffopt.step(spt_loss)
# The final set of adapted parameters will induce some
# final loss and accuracy on the query dataset.
# These will be used to update the model's meta-parameters.
qry_logits = fnet(x_qry[i])
qry_loss = F.cross_entropy(qry_logits, y_qry[i])
qry_losses.append(qry_loss.detach())
qry_acc = (qry_logits.argmax(
dim=1) == y_qry[i]).sum().item() / querysz
qry_accs.append(qry_acc)
# Update the model's meta-parameters to optimize the query
# losses across all of the tasks sampled in this batch.
# This unrolls through the gradient steps.
qry_loss.backward()
assert meta_opt.param_groups[0]['params'][0].grad is not None
meta_opt.step()
qry_losses = sum(qry_losses) / task_num
qry_accs = 100. * sum(qry_accs) / task_num
i = epoch + float(batch_idx) / n_train_iter
iter_time = time.time() - start_time
if batch_idx % 4 == 0:
print(
f'[Epoch {i:.2f}] Train Loss: {qry_losses:.2f} | Acc: {qry_accs:.2f} | Time: {iter_time:.2f}'
)
log.append({
'epoch': i,
'loss': qry_losses,
'acc': qry_accs,
'mode': 'train',
'time': time.time(),
})
def test(db, net, device, epoch, log):
# Crucially in our testing procedure here, we do *not* fine-tune
# the model during testing for simplicity.
# Most research papers using MAML for this task do an extra
# stage of fine-tuning here that should be added if you are
# adapting this code for research.
net.train()
n_test_iter = db.x_test.shape[0] // db.batchsz
qry_losses = []
qry_accs = []
for batch_idx in range(n_test_iter):
x_spt, y_spt, x_qry, y_qry = db.next('test')
task_num, setsz, c_, h, w = x_spt.size()
querysz = x_qry.size(1)
# doesn't have to be duplicated between `train` and `test`?
n_inner_iter = 5
inner_opt = torch.optim.SGD(net.parameters(), lr=1e-1)
for i in range(task_num):
with higher.innerloop_ctx(net, inner_opt, track_higher_grads=False) as (fnet, diffopt):
# Optimize the likelihood of the support set by taking
# gradient steps w.r.t. the model's parameters.
# This adapts the model's meta-parameters to the task.
for _ in range(n_inner_iter):
spt_logits = fnet(x_spt[i])
spt_loss = F.cross_entropy(spt_logits, y_spt[i])
diffopt.step(spt_loss)
# The query loss and acc induced by these parameters.
qry_logits = fnet(x_qry[i]).detach()
qry_loss = F.cross_entropy(
qry_logits, y_qry[i], reduction='none')
qry_losses.append(qry_loss.detach())
qry_accs.append(
(qry_logits.argmax(dim=1) == y_qry[i]).detach())
qry_losses = torch.cat(qry_losses).mean().item()
qry_accs = 100. * torch.cat(qry_accs).float().mean().item()
print(
f'[Epoch {epoch + 1:.2f}] Test Loss: {qry_losses:.2f} | Acc: {qry_accs:.2f}'
)
log.append({
'epoch': epoch + 1,
'loss': qry_losses,
'acc': qry_accs,
'mode': 'test',
'time': time.time(),
})
def plot(log):
# Generally you should pull your plotting code out of your training
# script but we are doing it here for brevity.
df = pd.DataFrame(log)
fig, ax = plt.subplots(figsize=(6, 4))
train_df = df[df['mode'] == 'train']
test_df = df[df['mode'] == 'test']
ax.plot(train_df['epoch'], train_df['acc'], label='Train')
ax.plot(test_df['epoch'], test_df['acc'], label='Test')
ax.set_xlabel('Epoch')
ax.set_ylabel('Accuracy')
ax.set_ylim(70, 100)
fig.legend(ncol=2, loc='lower right')
fig.tight_layout()
fname = 'maml-accs.png'
print(f'--- Plotting accuracy to {fname}')
fig.savefig(fname)
plt.close(fig)
# Won't need this after this PR is merged in:
# https://github.com/pytorch/pytorch/pull/22245
class Flatten(nn.Module):
def forward(self, input):
return input.view(input.size(0), -1)
if __name__ == '__main__':
main()
Note:
I asked a similar question here Would making the gradient "data" by detaching them implement first order MAML using PyTorch's higher library? but that one is slightly different. It asks about a custom implementation that detaches the gradients directly to make them "data". This one asks why the setting track_higher_grads=False breaks the population of gradients -- which, as I understand it, it should not.
related:
bug report since from the discussion I expect the flag to solve the issues: https://github.com/facebookresearch/higher/issues/129
https://github.com/facebookresearch/higher/issues?q=is%3Aissue+first+order+maml+is%3Aclosed
https://github.com/facebookresearch/higher/issues/63
https://github.com/facebookresearch/higher/issues/128
https://www.reddit.com/r/pytorch/comments/sixdqd/what_is_the_official_implementation_of_first/
https://www.reddit.com/r/pytorch/comments/si5xv1/would_making_the_gradient_data_by_detaching_them/
Bounty
Explain the reasoning for why the solution here works, i.e. why
track_higher_grads = True
...
diffopt.step(inner_loss, grad_callback=lambda grads: [g.detach() for g in grads])
computes FO MAML, but:
new_params = params[:]
for group, mapping in zip(self.param_groups, self._group_to_param_list):
for p, index in zip(group['params'], mapping):
if self._track_higher_grads:
new_params[index] = p
else:
new_params[index] = p.detach().requires_grad_() # LIKELY THIS LINE!!!
does not allow FO to work properly and sets .grad to None (i.e. does not populate the grad field). The assignment with p.detach().requires_grad_() honestly looks the same to me. This .requires_grad_() even seems extra "safe".
| The reason why track_higher_grads=False doesn't actually work is that it detaches the post-adaptation parameters themselves, rather than just the inner-loop gradients (see here). So you get no gradient at all from your outer-loop loss. What you really want is to detach only the inner-loop-computed gradients, while leaving the (otherwise trivial) computation graph between the model initialization and the adapted parameters intact.
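As a minimal sketch of that recipe (reusing net, inner_opt, n_inner_iter and the batch variables from the question's training loop), the detach is applied only to the inner-loop gradients via grad_callback, while the path from the initialization to the adapted parameters is kept:
with higher.innerloop_ctx(
    net, inner_opt, copy_initial_weights=False, track_higher_grads=True,
) as (fnet, diffopt):
    for _ in range(n_inner_iter):
        spt_loss = F.cross_entropy(fnet(x_spt[i]), y_spt[i])
        # detach only the inner-loop gradients; the adapted parameters
        # remain connected to the initialization in the autograd graph
        diffopt.step(spt_loss, grad_callback=lambda grads: [g.detach() for g in grads])
    qry_loss = F.cross_entropy(fnet(x_qry[i]), y_qry[i])
    qry_loss.backward()  # now populates the .grad fields of net.parameters()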
| https://stackoverflow.com/questions/70961541/ |
IndexError: Dimension out of range (expected to be in range of [-2, 1], but got 3) | I created a random tensor with the line below
x = torch.randn(4, 3)
and used the transpose function as shown here
torch.transpose(x, 0, 1)
I got the error below. Can anyone assist with a solution?
IndexError Traceback (most recent call last)
<ipython-input-19-28494ba2cedc> in <module>()
----> 1 torch.transpose(x, 0, 3)
IndexError: Dimension out of range (expected to be in range of [-2, 1], but got 3)
| You are trying to transpose x = torch.randn(4, 3), which is 2D. torch.transpose(x, 0, 1) works fine because you swap dimensions 0 and 1. But then you try to swap dimensions 0 and 3 by doing torch.transpose(x, 0, 3), and your x does not have a dimension 3: its valid dimensions are 0 and 1 (or -2 and -1), which is exactly the range the error message reports.
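A quick check of both cases (a minimal sketch):
import torch

x = torch.randn(4, 3)                    # 2D: valid dims are 0 and 1 (or -2 and -1)
print(torch.transpose(x, 0, 1).shape)    # torch.Size([3, 4]) -- works
x4 = torch.randn(2, 4, 3, 5)             # a 4D tensor does have a dimension 3
print(torch.transpose(x4, 0, 3).shape)   # torch.Size([5, 4, 3, 2]) -- works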
| https://stackoverflow.com/questions/70964146/ |
Take multiple slices in numpy/pytorch | I have a big one-dimensional array X.shape = (10000,), and a vector of indices y = [0, 7, 9995].
I would like to get a matrix with rows
[
X[0 : 100],
X[7 : 107],
concat(X[9995:], X[:95]),
]
That is, slices of length 100, starting at each index, with wrap-around.
I can do that with a python loop, but I'm wondering if there's a smarter batched way of doing it in pytorch or numpy, since my arrays can be quite large.
| Quite simple, actually.
For each element E in y, create a range from E to E + 100
Concatenate all the ranges horizontally
Modulo the resulting array by the length of X
indexes = np.hstack([np.arange(v, v + 100) for v in y]) % X.shape[0]
Output:
>>> indexes
array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21,
22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32,
33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43,
44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54,
55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65,
66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76,
77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87,
88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98,
99, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27,
28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38,
39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49,
50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60,
61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71,
72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82,
83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93,
94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104,
105, 106, 9995, 9996, 9997, 9998, 9999, 0, 1, 2, 3,
4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,
15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25,
26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36,
37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47,
48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58,
59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69,
70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80,
81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91,
92, 93, 94])
Now just index X with that:
X[indexes]
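Note that X[indexes] returns a flat array of length 100 * len(y); reshape it with X[indexes].reshape(len(y), 100) to get one slice per row. Alternatively, a sketch of a fully vectorized version that builds the 2D index matrix directly via broadcasting:
import numpy as np

X = np.arange(10000)
y = np.array([0, 7, 9995])

# add offsets 0..99 to every start index, with wrap-around
indexes = (y[:, None] + np.arange(100)) % X.shape[0]   # shape (3, 100)
result = X[indexes]                                    # one slice per row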
| https://stackoverflow.com/questions/70964569/ |
Getting a 'KeyError" while trying to access a Custom Dataset | I am trying to learn PyTorch and make use of a custom dataset. Code Credit - https://github.com/vineeth2309/Custom-Dataset-and-Dataloader-in-Torch.
However, when I run the code I get a 'KeyError'.
import glob
import cv2
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader
class CustomDataset(Dataset):
def __init__(self):
self.imgs_path = "Dog_Cat_Dataset/"
file_list = glob.glob(self.imgs_path + "*")
print(file_list)
self.data = []
for class_path in file_list:
class_name = class_path.split("/")[-1]
for img_path in glob.glob(class_path + "/*.jpeg"):
self.data.append([img_path, class_name])
print(self.data)
self.class_map = {"dogs" : 0, "cats": 1}
self.img_dim = (416, 416)
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
img_path, class_name = self.data[idx]
img = cv2.imread(img_path)
img = cv2.resize(img, self.img_dim)
class_id = self.class_map[class_name]
img_tensor = torch.from_numpy(img)
img_tensor = img_tensor.permute(2, 0, 1)
class_id = torch.tensor([class_id])
return img_tensor, class_id
if __name__ == "__main__":
dataset = CustomDataset()
print (dataset)
data_loader = DataLoader(dataset, batch_size=4, shuffle=True)
print (data_loader)
for imgs, labels in data_loader:
print("Batch of images has shape: ",imgs.shape)
print("Batch of labels has shape: ", labels.shape)
Stack-Trace :
C:\Users\parag\anaconda3\envs\tf-gpu\python.exe C:/Users/parag/PycharmProjects/Custom-Dataset-and-Dataloader-in-Torch/Main.py
['Dog_Cat_Dataset\\cats', 'Dog_Cat_Dataset\\dogs']
[['Dog_Cat_Dataset\\cats\\1.jpeg', 'Dog_Cat_Dataset\\cats'], ['Dog_Cat_Dataset\\cats\\2.jpeg', 'Dog_Cat_Dataset\\cats'], ['Dog_Cat_Dataset\\cats\\3.jpeg', 'Dog_Cat_Dataset\\cats'], ['Dog_Cat_Dataset\\cats\\4.jpeg', 'Dog_Cat_Dataset\\cats'], ['Dog_Cat_Dataset\\cats\\5.jpeg', 'Dog_Cat_Dataset\\cats'], ['Dog_Cat_Dataset\\dogs\\1.jpeg', 'Dog_Cat_Dataset\\dogs'], ['Dog_Cat_Dataset\\dogs\\2.jpeg', 'Dog_Cat_Dataset\\dogs'], ['Dog_Cat_Dataset\\dogs\\3.jpeg', 'Dog_Cat_Dataset\\dogs'], ['Dog_Cat_Dataset\\dogs\\4.jpeg', 'Dog_Cat_Dataset\\dogs'], ['Dog_Cat_Dataset\\dogs\\5.jpeg', 'Dog_Cat_Dataset\\dogs']]
<__main__.CustomDataset object at 0x00000254C0568FD0>
<torch.utils.data.dataloader.DataLoader object at 0x00000254C250A7F0>
Traceback (most recent call last):
File "C:\Users\parag\PycharmProjects\Custom-Dataset-and-Dataloader-in-Torch\Main.py", line 40, in <module>
for imgs, labels in data_loader.dataset:
File "C:\Users\parag\PycharmProjects\Custom-Dataset-and-Dataloader-in-Torch\Main.py", line 29, in __getitem__
class_id = self.class_map[class_name]
KeyError: 'Dog_Cat_Dataset\\cats'
Process finished with exit code 1
[My Folder Structure][1]
I have tried but am not able to resolve the error. Could someone please help me with this?
[1]: https://i.stack.imgur.com/1giJw.png
| You've used forward slashes where Windows paths use backslashes:
for class_path in file_list:
class_name = class_path.split("\\")[-1]
for img_path in glob.glob(class_path + "\*.jpeg"):
self.data.append([img_path, class_name])
I'm guessing you're running on Windows while the example was written for Linux.
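A platform-independent variant of the fix (a sketch using os.path, so the same code runs on both Windows and Linux):
import os

for class_path in file_list:
    class_name = os.path.basename(class_path)   # splits on the OS's own separator
    for img_path in glob.glob(os.path.join(class_path, "*.jpeg")):
        self.data.append([img_path, class_name])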
| https://stackoverflow.com/questions/70964670/ |
Way to define a generic tensor-like type (like tf.Tensor | torch.Tensor | np.ndarray | float) in python 3.8? | I'm annotating my code for a high level neural network library. I want to make it as flexible as possible while still clarifying type expectations for developers using the library. In many cases, this requires writing 'generic' functions that operate on tensors defined in arbitrary backends (like tf.Tensor, torch.Tensor, np.ndarray, as well as python types like float, int, and bool; it'd be nice if I could statically enforce dtypes as well, but that's not my concern here). Is there a way to define a generic Tensor-like type in python 3.8?
| After coding for a few hours, I began to see that you don't want to do this in most cases. Here are some reasons why:
it's not pythonic. "Avoid the magical wand" when it adds needless complexity
you'll have duck-typing issues like calling .size in torch but .shape in tf which results in a lot of if/elif/else structures.
you're going to be repeating yourself for a lot of generic-backend logic
I would recommend that other developers facing this predicament seriously ask themselves if they are willing to maintain a high-level framework on two backends. You see, any time you want to perform an operation on the tensor (say transpose()), you have to ask if that operation has an identical signature in all supported backends. If not, you start ending up with huge swaths of repeated if/elif/else code and development productivity really starts to suffer. If willing to proceed, try using an existing library like keras.backend, which has limited tensorflow and theano support, or write your own cross-framework tensor abstraction for us all to use. (And please post a comment if you made or found one)
In most cases however, it suffices to imitate the einops approach and simply define an arbitrary TypeVar like so:
Tensor = TypeVar('Tensor')
Also, CoPilot recommended this to me:
TensorType = type(None)
Then you can use this type var in your annotations as expected:
def iterate(self, a: Tensor, b: Tensor) -> Tensor:
c = self.neural_net(a)
d = c + a - b
return d / a
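If you still want some static structure without committing to a backend, one option in Python 3.8 is a typing.Protocol that declares only the operations your library actually calls (a sketch; the chosen members are illustrative assumptions, and static checkers may still flag backend-specific shape types):
from typing import Protocol, Tuple

class TensorLike(Protocol):
    # declare only what the library actually uses
    shape: Tuple[int, ...]

    def __add__(self, other): ...
    def __mul__(self, other): ...

def scale(t: TensorLike, factor: float) -> TensorLike:
    return t * factor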
| https://stackoverflow.com/questions/70965751/ |
How to deal with CUDA version? | How to set up different versions of CUDA in one OS?
Here is my problem: the latest Tensorflow with GPU support requires CUDA 11.2, whereas Pytorch works with 11.3. So what is the solution to install both libraries in Windows and Ubuntu?
| One solution is to use Docker Container Environment, which would only need the Nvidia Driver to be of version XYZ.AB; in this way, you can use both PyTorch and TensorFlow versions.
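For example (a sketch; the image tags are illustrative assumptions and should be checked against the registries), each framework runs in its own container with its own bundled CUDA toolkit, sharing only the host driver:
# TensorFlow container (ships its own CUDA runtime)
docker run --gpus all -it tensorflow/tensorflow:latest-gpu \
    python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
# PyTorch container (ships CUDA 11.3)
docker run --gpus all -it pytorch/pytorch:1.10.0-cuda11.3-cudnn8-runtime \
    python -c "import torch; print(torch.cuda.is_available())"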
A very good starting point for your problem would be this one(ML-WORKSPACE) : https://github.com/ml-tooling/ml-workspace
| https://stackoverflow.com/questions/70968734/ |
Is it possible to add tensors of different sizes together in pytorch? | I have an image gradient of size (3, 224, 224) and a patch of size (1, 768). Is it possible to add this gradient to the patch while keeping the patch's size of (1, 768)?
Forgive my inquisitiveness. I know pytorch utilizes broadcasting, but I am not sure if I will be able to do so with two such different tensors, in a way similar to the line below:
torch.add(a, b)
For example:
The end product would be the same patch on the left with the gradient of an entire image on the right added to it. My understanding is that it’s not possible, but knowledge isn’t bounded.
| I figured out how to do it myself. I divided the image gradient (right) into 16 x 16 patches and created a loop that adds each patch to the original image patch (left). This way, I was able to add a 224 x 224 image gradient into a 16 x 16 patch. I just wanted to see what would happen if I did that.
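A sketch of that procedure with torch.nn.functional.unfold, assuming the patch is a flattened 3x16x16 block (768 = 3*16*16):
import torch
import torch.nn.functional as F

grad = torch.randn(3, 224, 224)     # image gradient
patch = torch.randn(1, 768)         # flattened 3x16x16 patch

# split the gradient into 196 non-overlapping 16x16 patches: (1, 768, 196)
grad_patches = F.unfold(grad.unsqueeze(0), kernel_size=16, stride=16)

for k in range(grad_patches.shape[-1]):   # add every gradient patch
    patch = patch + grad_patches[0, :, k]

print(patch.shape)  # torch.Size([1, 768])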
| https://stackoverflow.com/questions/70971041/ |
What does it mean torch.rand(1, 3, 64, 64)? | I am a beginner in PyTorch. In one tutorial, I saw: torch.rand(1, 3, 64, 64). I understand that it creates a Tensor filled with random numbers (note: torch.rand samples uniformly from [0, 1); the standard normal distribution is what torch.randn gives).
The output looks like:
tensor([[[[0.1352, 0.5110, 0.7585, ..., 0.9067, 0.4730, 0.8077],
[0.2471, 0.8726, 0.3580, ..., 0.4983, 0.9747, 0.5219],
[0.8554, 0.4266, 0.0718, ..., 0.6734, 0.8739, 0.6137],
...,
[0.2132, 0.9319, 0.5361, ..., 0.3981, 0.2057, 0.7032],
[0.3347, 0.5330, 0.7019, ..., 0.6713, 0.0936, 0.4706],
[0.6257, 0.6656, 0.3322, ..., 0.6664, 0.8149, 0.1887]],
[[0.3210, 0.6469, 0.7772, ..., 0.3175, 0.5102, 0.9079],
[0.3054, 0.2940, 0.6611, ..., 0.0941, 0.3826, 0.3103],
[0.7484, 0.3442, 0.1034, ..., 0.8028, 0.4643, 0.2800],
...,
[0.9946, 0.5868, 0.8709, ..., 0.4837, 0.6691, 0.5303],
[0.1770, 0.5355, 0.8048, ..., 0.1843, 0.0658, 0.3817],
[0.9612, 0.0122, 0.5012, ..., 0.4198, 0.3294, 0.2106]],
[[0.5800, 0.5174, 0.5454, ..., 0.3881, 0.3277, 0.5470],
[0.8871, 0.7536, 0.9928, ..., 0.8455, 0.8071, 0.0062],
[0.2199, 0.0449, 0.2999, ..., 0.3570, 0.7996, 0.3253],
...,
[0.8238, 0.1100, 0.1489, ..., 0.0265, 0.2165, 0.2919],
[0.4074, 0.5817, 0.8021, ..., 0.3417, 0.1280, 0.9279],
[0.0047, 0.1796, 0.4522, ..., 0.3257, 0.2657, 0.4405]]]])
But what do the 4 parameters (1, 3, 64, 64) mean exactly? Thanks!
| This is the shape of the output tensor. Specifically, it means that your output tensor has
(1, 3, 64, 64): 1 element of shape (3, 64, 64) in dimension 0
(1, 3, 64, 64): 3 elements of shape (64, 64) in dimension 1 for a given dimension 0
(1, 3, 64, 64): 64 elements of shape (64,) in dimension 2 for a given dimension 1 and dimension 0
(1, 3, 64, 64): 64 scalars in dimension 3 for a given dimension 2, dimension 1, and dimension 0
You can confirm this by comparing the number of elements with the tensor's "capacity":
>>> torch.rand(1,3,64,64).numel()
12288
>>> 1 * 3 * 64 * 64
12288
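A side note on interpretation: a shape of (1, 3, 64, 64) is usually read with the (N, C, H, W) convention, i.e. a batch of 1 RGB image (3 channels) of height 64 and width 64:
x = torch.rand(1, 3, 64, 64)   # (batch, channels, height, width)
print(x.dim())     # 4
print(x.shape)     # torch.Size([1, 3, 64, 64])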
| https://stackoverflow.com/questions/70973324/ |
how to get mentions in pytorch NER instead of tokens? | I am using PyTorch and a pre-trained model.
Here is my code:
class NER(object):
def __init__(self, model_name_or_path, tokenizer_name_or_path):
self.tokenizer = AutoTokenizer.from_pretrained(tokenizer_name_or_path)
self.model = AutoModelForTokenClassification.from_pretrained(
model_name_or_path)
self.nlp = pipeline("ner", model=self.model, tokenizer=self.tokenizer)
def get_mention_entities(self, query):
return self.nlp(query)
when I call get_mention_entities and print its output for "اینجا دانشگاه صنعتی امیرکبیر است."
it gives:
[{'entity': 'B-FAC', 'score': 0.9454591, 'index': 2, 'word': 'دانشگاه', 'start': 6, 'end': 13}, {'entity': 'I-FAC', 'score': 0.9713519, 'index': 3, 'word': 'صنعتی', 'start': 14, 'end': 19}, {'entity': 'I-FAC', 'score': 0.9860724, 'index': 4, 'word': 'امیرکبیر', 'start': 20, 'end': 28}]
As you can see, it can recognize the university name, but there are three tokens in the list.
Is there any standard way to combine these tokens based on the "entity" attribute?
desired output is something like:
[{'entity': 'FAC', 'word': 'دانشگاه صنعتی امیرکبیر', 'start': 6, 'end': 28}]
Finally, I can write a function to iterate, compare, and merge the tokens based on the "entity" attribute, but I want a standard way like an internal PyTorch function or something like this.
my question is similar to this question.
PS: "دانشگاه صنعتی امیرکبیر" is a university name.
| Huggingface's NER pipeline has an argument grouped_entities=True which will do exactly what you seek: group BI into unified entities.
Adding
self.nlp = pipeline("ner", model=self.model, tokenizer=self.tokenizer, grouped_entities=True)
should do the trick
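Note that in more recent versions of transformers, grouped_entities is deprecated in favor of aggregation_strategy, so an equivalent call would look like:
self.nlp = pipeline("ner", model=self.model, tokenizer=self.tokenizer, aggregation_strategy="simple")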
| https://stackoverflow.com/questions/70978468/ |
Function of Activation functions | Is it possible to define a function that returns an activation function? I tried to do:
def activation():
# return nn.Sin()
# return nn.Tanh()
# return nn.Sigmoid()
# return nn.Tanhshrink()
return nn.Hardtanh(-1,1)
# return nn.Hardswish()
# return nn.functional.silu()
But I get an error when trying to call it. Here is an example:
def f():
return nn.Tanh()
input = torch.randn(2)
output = f(input)
print(output)
it outputs "TypeError: f() takes 0 positional arguments but 1 was given". It doesn't work even when I gave it an argument x.
| You can either use the object-oriented approach:
>>> f = nn.Tanh()
>>> output = f(x)
Or the functional approach where you will find the equivalent for nn.Tanh inside nn.functional as tanh.
>>> f = nn.functional.tanh
>>> output = f(x)
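To tie this back to the original activation() factory: it takes no arguments and returns a module, so build the module first, then apply it to the tensor (a sketch reusing the question's own code):
def activation():
    return nn.Hardtanh(-1, 1)

act = activation()        # call the factory with no arguments
input = torch.randn(2)
output = act(input)       # apply the returned module to the input
print(output)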
| https://stackoverflow.com/questions/70984949/ |
Which model/technique to use for specific sentence extraction? | I have a dataset of tens of thousands of dialogues / conversations between a customer and customer support. These dialogues, which could be forum posts or long-winded email conversations, have been hand-annotated to highlight the sentence containing the customer's problem. For example:
Dear agent, I am writing to you because I have a very annoying problem with my washing machine. I bought it three weeks ago and was very happy with it. However, this morning the door does not lock properly. Please help
Dear customer.... etc
The highlighted sentence would be:
However, this morning the door does not lock properly.
What approaches can I take to model this, so that in future I can automatically extract the customer's problem? The domain of the datasets is broad, but within the hardware space, so it could be appliances, gadgets, machinery etc.
What is this type of problem called?
I thought this might be called "intent recognition", but most guides seem to refer to multiclass classification. The sentence either is or isn't the customer's problem. I considered analysing each sentence and performing binary classification, but I'd like to explore options that take into account the context of the rest of the conversation if possible.
What resources are available to research how to implement this in Python (using tensorflow or pytorch)
I found a model on HuggingFace which has been pre-trained with customer dialogues, and have read the research paper, so I was considering fine-tuning this as a starting point, but I only have experience with text (multiclass/multilabel) classification when it comes to transformers.
| If you want to get a specific sentence (without any modification) from the original input text, that is often referred to as 'span classification' where the output is the index of the first and last word of the specific sentence.
The state of the art now is attention models like BERT. You can check the BERT models designed for the 'span classification' problem in huggingface, such as RobertaForQuestionAnswering (https://huggingface.co/docs/transformers/model_doc/roberta#transformers.TFRobertaForQuestionAnswering), available for both the TensorFlow and PyTorch libraries.
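A rough sketch of how such a span-extraction model is wired up in huggingface (the model name is an illustrative assumption, and the start/end head is randomly initialized until you fine-tune it on your annotated spans):
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForQuestionAnswering.from_pretrained("roberta-base")

inputs = tokenizer("Dear agent, ... the door does not lock properly. Please help", return_tensors="pt")
outputs = model(**inputs)                        # start/end logits over token positions
start = int(torch.argmax(outputs.start_logits))  # index of the first span token
end = int(torch.argmax(outputs.end_logits))      # index of the last span token
span = tokenizer.decode(inputs["input_ids"][0][start : end + 1])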
| https://stackoverflow.com/questions/70990722/ |
How to reload hydra config with enumerations | Is there a better way to reload a hydra config from an experiment with enumerations? Right now I reload it like so:
initialize_config_dir(config_dir=os.path.join(exp_dir, ".hydra"), job_name=config_name)
cfg = compose(config_name, overrides=overrides)
print(cfg.enum)
>>> ENUM1
But ENUM1 is actually an enumeration that normally loads as
>>> <SomeEnumClass.ENUM1: 'enum1'>
I am able to fix this by adding a configstore default to the experiment hydra file:
defaults:
- base_config_cs
Which now results in
initialize_config_dir(config_dir=os.path.join(exp_dir, ".hydra"), job_name=config_name)
cfg = compose(config_name, overrides=overrides)
print(cfg.enum)
>>> <SomeEnumClass.ENUM1: 'enum1'>
Is there a better way to do this without adding this? Or can I add the default in the python code?
| This is a good question -- reliably reloading configs from previous Hydra runs is an area that could be improved.
As you've discovered, loading the saved file config.yaml directly results in an untyped DictConfig object.
The solution below involves a script called reload.py that creates a config node with a defaults list that loads both the schema base_config_cs and the saved file config.yaml.
At the end of this post I also give a simple solution that involves loading .hydra/overrides.yaml to re-run the config composition process.
Suppose you've run a Hydra job with the following setup:
# app.py
from dataclasses import dataclass
from enum import Enum
import hydra
from hydra.core.config_store import ConfigStore
from omegaconf import DictConfig
class SomeEnumClass(Enum):
ENUM1 = 1
ENUM2 = 2
@dataclass
class Schema:
enum: SomeEnumClass
x: int = 123
y: str = "abc"
def store_schema() -> None:
cs = ConfigStore.instance()
cs.store(name="base_config_cs", node=Schema)
@hydra.main(config_path=".", config_name="foo")
def app(cfg: DictConfig) -> None:
print(cfg)
if __name__ == "__main__":
store_schema()
app()
# foo.yaml
defaults:
- base_config_cs
- _self_
enum: ENUM1
x: 456
$ python app.py y=xyz
{'enum': <SomeEnumClass.ENUM1: 1>, 'x': 456, 'y': 'xyz'}
After running app.py, there exists a directory outputs/2022-02-05/06-42-42/.hydra containing the saved file config.yaml.
As you correctly pointed out in your question, to reload the saved config you must merge the schema base_config_cs with the contents of config.yaml. Here is a pattern for accomplishing that:
# reload.py
import os
from hydra import compose, initialize_config_dir
from hydra.core.config_store import ConfigStore
from app import store_schema
config_name = "config"
exp_dir = os.path.abspath("outputs/2022-02-05/07-19-56")
saved_cfg_dir = os.path.join(exp_dir, ".hydra")
assert os.path.exists(f"{saved_cfg_dir}/{config_name}.yaml")
store_schema() # stores `base_config_cs`
cs = ConfigStore.instance()
cs.store(
name="reload_conf",
node={
"defaults": [
"base_config_cs",
config_name,
]
},
)
with initialize_config_dir(config_dir=saved_cfg_dir):
cfg = compose("reload_conf")
print(cfg)
$ python reload.py
{'enum': <SomeEnumClass.ENUM1: 1>, 'x': 456, 'y': 'xyz'}
In the above Python file, reload.py, we store a node called reload_conf in the ConfigStore. Storing reload_conf this way is equivalent to creating a file called reload_conf.yaml that is discoverable by Hydra on the config search path. This reload_conf node has a defaults list that loads both the schema base_config_cs and config. For this to work, the following two conditions must be met:
the schema base_config_cs must be stored in the ConfigStore. This is accomplished by calling the store_schema function that we have imported from app.py.
a config node with name specified by the variable config_name, i.e. config.yaml in this example, must be discoverable by Hydra (which is taken care of here by calling initialize_config_dir).
Note that in foo.yaml we have a defaults list ["base_config_cs", "_self_"] that loads the schema base_config_cs before loading the contents _self_ of foo. In order for reload_conf to reconstruct the app's config with the same merge order, base_config_cs should come before config_name in the defaults list belonging to reload_conf.
The above approach can be taken one step further by removing the defaults list from foo.yaml and using cs.store to ensure the same defaults list is used in both the app and the reloading script:
# app2.py
from dataclasses import dataclass
from enum import Enum
from typing import Any, List
import hydra
from hydra.core.config_store import ConfigStore
from omegaconf import MISSING, DictConfig
class SomeEnumClass(Enum):
ENUM1 = 1
ENUM2 = 2
@dataclass
class RootConfig:
defaults: List[Any] = MISSING
enum: SomeEnumClass = MISSING
x: int = 123
y: str = "abc"
def store_root_config(primary_config_name: str) -> None:
cs = ConfigStore.instance()
# defaults list defined here:
cs.store(
name="root_config", node=RootConfig(defaults=["_self_", primary_config_name])
)
@hydra.main(config_path=".", config_name="root_config")
def app(cfg: DictConfig) -> None:
print(cfg)
if __name__ == "__main__":
store_root_config("foo2")
app()
# foo2.yaml (note NO DEFAULTS LIST)
enum: ENUM1
x: 456
$ python app2.py hydra.job.chdir=false y=xyz
{'enum': <SomeEnumClass.ENUM1: 1>, 'x': 456, 'y': 'xyz'}
# reload2.py
import os
from hydra import compose, initialize_config_dir
from hydra.core.config_store import ConfigStore
from app2 import store_root_config
config_name = "config"
exp_dir = os.path.abspath("outputs/2022-02-05/07-45-43")
saved_cfg_dir = os.path.join(exp_dir, ".hydra")
assert os.path.exists(f"{saved_cfg_dir}/{config_name}.yaml")
store_root_config("config")
with initialize_config_dir(config_dir=saved_cfg_dir):
cfg = compose("root_config")
print(cfg)
$ python reload2.py
{'enum': <SomeEnumClass.ENUM1: 1>, 'x': 456, 'y': 'xyz'}
A simpler alternative approach is to use .hydra/overrides.yaml to recompose the app's configuration based on the overrides that were originally passed to Hydra:
# reload3.py
import os
import yaml
from hydra import compose, initialize
from app import store_schema
config_name = "config"
exp_dir = os.path.abspath("outputs/2022-02-05/07-19-56")
saved_cfg_dir = os.path.join(exp_dir, ".hydra")
overrides_path = f"{saved_cfg_dir}/overrides.yaml"
assert os.path.exists(overrides_path)
overrides = yaml.unsafe_load(open(overrides_path, "r"))
print(f"{overrides=}")
store_schema()
with initialize(config_path="."):
cfg = compose("foo", overrides=overrides)
print(cfg)
$ python reload3.py
overrides=['y=xyz']
{'enum': <SomeEnumClass.ENUM1: 1>, 'x': 456, 'y': 'xyz'}
This approach has its drawbacks: if your app's configuration involves some non-hermetic operation like querying a timestamp (e.g. via Hydra's now resolver) or looking up an environment variable (e.g. via the oc.env resolver), the configuration composed by reload.py might be different from the original version loaded in app.py.
| https://stackoverflow.com/questions/70991020/ |
I think I have reconstructed the computational graph before, but it tells me "Trying to backward through the graph a second time". Why? | An image describing my question
From my point of view, in every iteration the computational graph is constructed at the first arrow, and it is used and then deleted at the second arrow during the backward pass. So why does it tell me that:
RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
Here is my code:
def train(num_epoch = 10,len_vocab = 1, num_hidden=256,embedding_dim = 8):
data = get_data()
model = MyRNN(len_vocab,num_hidden,embedding_dim)
if os.path.exists('QingBinLi'):
model.load_state_dict(torch.load('QingBinLi'))
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=1e-5)
loss_for_draw = []
for epoch in range(num_epoch+1):
h = torch.randn(1,1,num_hidden)
loss_average = []
for i in range(data.shape[-2]):
optimizer.zero_grad()
#I think my computational graph will be constructed there
pre,h = model(data[:,:,i,:] ,h)
pre = pre.unsqueeze(0).unsqueeze(0)
loss = criterion(pre, data[:,:,i+1,:])
loss_average.append(loss)
#I think everytime the backward pass will delete the computational graph.
loss.backward()
nn.utils.clip_grad_norm_(model.parameters(), max_norm=10)
optimizer.step()
print(f"finish {i+1} times")
loss_for_draw.append(sum(loss_average)/len(loss_average))
torch.save(model.state_dict(), 'QingBinLi')
print(f'now epoch:{epoch}, loss = {loss_for_draw[-1]}')
return loss_for_draw
class MyRNN(nn.Module):
def __init__(self,len_vocab, num_hidden=256,embedding_dim = 8):
super(MyRNN,self).__init__()
self.rnn = nn.RNN(embedding_dim, num_hidden)
self.num_directions=1
self.output_model = nn.Linear(num_hidden, embedding_dim)
def forward(self, x, h):
y, h = self.rnn(x, h)
output = self.output_model(y.reshape((-1)))
return output, h
So, if I'm right, it shouldn't tell me "Trying to backward through the graph a second time".
So, where did I go wrong?
| The variables h and data carry gradient history, so we must add these 2 lines:
h = h.detach()
data = data.detach()
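In the training loop from the question, h = h.detach() goes right after the forward pass; it cuts the new graph off from the previous iteration's graph, whose buffers were already freed by the previous loss.backward(). A sketch of the placement, reusing the question's code:
data = data.detach()               # once, before the loop, if data requires grad
for i in range(data.shape[-2]):
    optimizer.zero_grad()
    pre, h = model(data[:, :, i, :], h)
    h = h.detach()                 # stop gradients from flowing into the freed graph
    pre = pre.unsqueeze(0).unsqueeze(0)
    loss = criterion(pre, data[:, :, i+1, :])
    loss.backward()
    optimizer.step()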
| https://stackoverflow.com/questions/70995032/ |
Pytorch-index on multiple dimension tensor in a batch | Given a 1-d tensor:
A = torch.tensor([1, 2, 3, 4])
suppose we have some "indexer tensor"
ind1 = torch.tensor([3, 0, 1])
ind2 = torch.tensor([[3, 0], [1, 2]])
as we run A[ind1] & A[ind2]
we get the results tensor([4, 1, 2]) & tensor([[4, 1],[2, 3]]),
which have the same shapes as the indexer tensors (ind1 and ind2), with values mapped from tensor A.
I want to ask how can I index on higher dimension tensors?
Currently I have one solution:
For a N-d tensor A, suppose we have the indexer tensor IND,
IND is like [[i11, i12, ... i1N], [i21, i22, ... i2N], ... [iM1, iM2, ... iMN]], where M is the number of indexed elements.
We can divide IND into N tensors, where
IND_1 = torch.tensor([i11, i21, ... iM1])
...
IND_N = torch.tensor([i1N, i2N, ... iMN])
as we run A[IND_1, ... IND_N], we get tensor(v1, v2, ... vM)
Example:
A = tensor([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]) # [2 * 2 * 2]
ind1 = tensor([1, 0, 1])
ind2 = tensor([1, 1, 0])
ind3 = tensor([0, 1, 0])
A[ind1, ind2, ind3]
=> tensor([7, 4, 5])
# and the good thing is you can control the shape of result tensor by modifying the inds' shape.
ind1 = tensor([[0, 0], [1, 0]])
ind2 = tensor([[1, 1], [0, 1]])
ind3 = tensor([[0, 1], [0, 0]])
A[ind1, ind2, ind3]
=> tensor([[3, 4],[5, 3]]) # same as inds' shape
Anyone has more elegant solutions?
| 1- Manual approach using unraveled indices on flattened input.
If you want to index on an arbitrary number of axes (all axes of A) then one straightforward approach is to flatten all dimensions and unravel the indices. Let's assume that A is 3D and we want to index it using a stack of ind1, ind2, and ind3:
>>> ind = torch.stack((ind1, ind2, ind3))
You can first unravel the indices using A's strides:
>>> unraveled = torch.tensor(A.stride()) @ ind.flatten(1)
Then flatten A, index it with unraveled and reshape to the final form:
>>> A.flatten()[unraveled].reshape_as(ind[0])
2- Using a simple split of ind.
You can actually perform the same operation using torch.chunk:
>>> A[ind.chunk(len(ind))][0]
Or alternatively torch.split which is identical:
>>> A[ind.split(1)][0]
3- Initial answer for single-axis indexing.
Let's take a minimal multi-dimensional example with A being a 2-D tensor defined as:
>>> A = torch.tensor([[1, 2, 3, 4],
[5, 6, 7, 8]])
From your description of the problem:
the same shape of index tensor and its value are mapped from tensor A.
Then the indexer tensor would require to have the same shape as the indexed tensor A, since this one is no longer flat. Otherwise, what would the result of A (shaped (2, 4)) indexed by ind1 (shape (3,)) be?
If you are indexing on a single dimension then you can utilize torch.gather:
>>> A.gather(1, ind2)
tensor([[4, 1],
[6, 7]])
| https://stackoverflow.com/questions/70997018/ |
PyTorch: Multi-class segmentation loss value != 0 when using target image as the prediction | I was performing semantic segmentation using PyTorch. There are a total of 103 different classes in the dataset and the targets are RGB images with only the Red channel containing the labels. I was using nn.CrossEntropyLoss as my loss function. For sanity, I wanted to check if using nn.CrossEntropyLoss is correct for this problem and whether it has the expected behaviour
I pick a random mask from my dataset and create a categorical version of it using this custom transform
class ToCategorical:
def __init__(self, n_classes: int) -> None:
self.n_classes = n_classes
def __call__(self, sample: torch.Tensor):
mask = sample.permute(1, 2, 0)
categories = torch.unique(mask).tolist()[1:] # get all categories other than 0
# build a tensor with `n_classes` channels
one_hot_image = torch.zeros(self.n_classes, *mask.shape[:-1])
for category in categories:
# get spacial locs where the categ is present
rows, cols, _ = torch.where(mask == category)
# in same spacial loc but in `categ` channel fill 1
one_hot_image[category, rows, cols] = 1
return one_hot_image
And then I send this image as the output (prediction) and use the ground truth mask as the target to the loss function.
import torch.nn as nn
mask = T.PILToTensor()(Image.open("path_to_image").convert("RGB"))
categorical_mask = ToCategorical(103)(mask).unsqueeze(0)
mask = mask[0].unsqueeze(0) # get only the red channel, add fake batch_dim
loss_fn = nn.CrossEntropyLoss()
target = mask
output = categorical_mask
print(output.shape, target.shape)
print(loss_fn(output, target.to(torch.long)))
I expected the loss to be zero but to my surprise, the output is as follows
torch.Size([1, 103, 600, 800]) torch.Size([1, 600, 800])
tensor(4.2836)
I verified with other samples in the dataset and I obtained similar values for other masks as well. Am I doing something wrong? I expect the loss to be = 0 when the output is the same as the target.
PS. I also know that nn.CrossEntropyLoss is the same as using log_softmax followed by nn.NLLLoss(), and I obtained the same value by using nllloss as well.
For Reference
Dataset used: UECFoodPixComplete
| I would like to address this:
I expect the loss to be = 0 when the output is the same as the target.
Even if the prediction matches the target, i.e. the prediction corresponds to a one-hot encoding of the labels contained in the dense target tensor, the loss itself is not supposed to equal zero. Actually, it can never be equal to zero because the nn.CrossEntropyLoss function is always positive by definition.
Let us take a minimal example with number of #C classes and a target y_pred and a prediction y_pred consisting of prefect predictions:
As a quick reminder:
The softmax is applied on the logits (q_i) as p_i = exp(q_i)/sum_j(exp(q_j)):
>>> p = F.softmax(y_pred, 1)
Similarly if you are using the log-softmax, defined as logp_i = log(p_i):
>>> logp = F.log_softmax(y_pred, 1)
Then comes the negative log-likelihood function computed between x the input and y the target: -y*x. In association with the softmax, it comes down to -y*p, or -y*logp respectively. In any case, whether you apply the log or not, only the predictions corresponding to the true classes will remain since the other ones are zeroed-out.
That being said, applying the NLLLoss on y_pred would indeed result with a 0 as you expected in your question. However, here we apply it on the probability distribution or log-probability: p, or logp respectively!
In our specific case, the logit is q_i = 1 for the true class and q_i = 0 for all other classes (there are #C - 1 of those). This means the softmax of the logit associated with the true class will equal exp(1)/sum_j(exp(q_j)). And since sum_j(exp(q_j)) = (#C - 1)*exp(0) + exp(1) = #C - 1 + e, we therefore have:
softmax(q_true) = e / (#C - 1 + e)
Similarly for log-softmax:
log-softmax(q_true) = log(e / (#C - 1 + e)) = 1 - log(#C - 1 + e)
If we proceed by applying the negative log-likelihood function we simply get cross-entropy(y_pred, y_true) = (nllloss o log-softmax)(y_pred, y_true). This results in:
loss = - (1 - log(#C - 1 + e)) = log(#C - 1 + e) - 1
This effectively corresponds to the minimum of the nn.CrossEntropyLoss function.
Regarding your specific case where #C = 103, you may have an issue in your code... since the average loss should equal to log(102 + e) - 1 i.e. around 3.65.
>>> y_true = torch.randint(0,103,(1,1,2,5))
>>> y_pred = torch.zeros(1,103,2,5).scatter(1, y_true, value=1)
You can see for yourself with one of the provided methods:
the builtin function nn.functional.cross_entropy:
>>> F.cross_entropy(y_pred, y_true[:,0])
tensor(3.6513)
manually computing the quantity:
>>> logp = F.log_softmax(y_pred, 1)
>>> -logp.gather(1, y_true).mean()
tensor(3.6513)
analytical result:
>>> log(102 + e) - 1
3.6513
| https://stackoverflow.com/questions/71000059/ |
Why is the GPU much slower than the cpu in google colab? | I'm training an RNN on google colab and this is my first time using a gpu to train a neural network. From my point of view, the GPU should be much faster than the cpu, and changing device from cpu to gpu should only require adding .to('cuda') in the definition of model/loss/variable and setting google colab to 'running on gpu'.
When I train it on the cpu, the average speed is 650 iterations/s:
Training on cpu in google colab
But when I train it on the gpu, the average speed is only 340 iterations/s, only half of the cpu's speed:
Training on gpu in google colab
and this happened on every epoch
Here is my code.
def train(num_epoch = 30,len_vocab = 1, num_hidden=256,embedding_dim = 8,batch_size = 100):
data = get_data()
model = MyRNN(len_vocab,num_hidden,embedding_dim).to('cuda') #here
if os.path.exists('QingBinLi'):
model.load_state_dict(torch.load('QingBinLi'))
criterion = nn.MSELoss().to('cuda') #here
optimizer = torch.optim.Adam(model.parameters(), lr=0.1, weight_decay=1e-5)
loss_for_draw = []
model.train()
data = data.detach().to('cuda') #here
for epoch in range(num_epoch+1):
h = torch.randn(1,batch_size,num_hidden).to('cuda') #here
loss_average = 0
for i in tqdm(range(data.shape[-2] -batch_size)):
optimizer.zero_grad()
pre,h = model(data[:,:,i:i+batch_size,:].squeeze(0) ,h)
h = h.detach()
pre = pre.unsqueeze(0).unsqueeze(0)
loss = criterion(pre, data[:,:,i+1:i+1+batch_size,:].squeeze(0))
loss_average += loss.item()
loss.backward()
nn.utils.clip_grad_norm_(model.parameters(), max_norm=10)
optimizer.step()
loss_for_draw.append(loss_average/(data.shape[-2] -batch_size))
torch.save(model.state_dict(), 'QingBinLi')
print(f'now epoch:{epoch}, loss = {loss_for_draw[-1]}')
return loss_for_draw
I just added .to('cuda') in those places when trying to run it on the gpu.
So, why it's much slower when I running my code on gpu? Maybe I should modify more code?
| My brother says that when the tensors are very big, such as 1 million dimensions, the gpu can be faster than the cpu. Otherwise, we don't even benefit from parallel computing, because the runtime is dominated not by the tensor multiplies but by copying tensors and other overhead like that.
My RNN has about 256x256 + 256x8 parameters and batch_size is 100, so its dimensions are much lower than 1 million. That is why the gpu is much slower.
And when I change my batch_size to 10000, the gpu reaches 145 iterations/s while the cpu manages only 15 iterations/s. This time the gpu is much faster.
For a CNN with stride one, on the gpu we can compute filter_size * image_size * batch_size multiplications, about 2,415,919,104, simultaneously. In that kind of computation the gpu is much faster.
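A quick benchmark sketch that shows the crossover (assuming a CUDA device is available; note the synchronize calls, since CUDA kernel launches are asynchronous):
import time
import torch

for n in (64, 4096):
    for device in ("cpu", "cuda"):
        if device == "cuda" and not torch.cuda.is_available():
            continue
        a = torch.randn(n, n, device=device)
        b = torch.randn(n, n, device=device)
        if device == "cuda":
            torch.cuda.synchronize()   # wait for setup to finish
        t0 = time.time()
        for _ in range(100):
            c = a @ b
        if device == "cuda":
            torch.cuda.synchronize()   # wait for all kernels to finish
        print(device, n, time.time() - t0)
For the small matrices the cpu usually wins; for the large ones the gpu wins by a wide margin.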
| https://stackoverflow.com/questions/71003586/ |
Is there any way to add the same index multiple times in one operation? | Let's say that I have two tensors value and index, which contain the data and all the indexes we need. I want to add one to the entry of value at each corresponding index. If an index appears k times in the tensor index, then that entry should be incremented by k, instead of one.
Here's an example:
value = torch.zeros(3) # [0, 0, 0]
index = torch.zeros(10).long() #[0,0,0,0,0,0,0,0,0,0]
ret = some_func(value, index) # [10, 0, 0]
I know that using a for loop to go through all the indexes in index could solve the problem, but I just want to ask if there is a more elegant way?
| One way you can do this is with scatter_add:
In [54]: value = torch.zeros(3)
In [55]: index = torch.tensor([0, 0, 1, 0, 2, 2, 1, 1, 1])
In [56]: value.scatter_add(0, index, torch.ones_like(index, dtype=value.dtype))
Out[56]: tensor([3., 4., 2.])
You can use scatter_add_ to operate on value in place.
You might find it more efficient to use bincount():
In [63]: index = torch.tensor([0, 0, 1, 0, 2, 2, 1, 1, 1])
In [64]: counts = index.bincount(minlength=value.shape[0])
In [65]: counts
Out[65]: tensor([3, 4, 2])
If in your actual problem, value is initialized with 0s, then you're done--just use counts as the result. Otherwise, add counts to value.
| https://stackoverflow.com/questions/71004354/ |
serving pytorch model using flask | If multiple clients make requests to one server (pytorch model), does a problem arise in Flask's multithreading or elsewhere?
As far as I know, if the pytorch model is run with multithreading using flask, the data will get tangled. If I'm wrong, please correct me.
| Use a multithreaded approach for Flask as well; requests will get throttled if you just start the usual single-threaded Flask server.
Use Gunicorn: https://gunicorn.org/ to start your Flask server, with a good number of threads passed as a parameter value to Gunicorn.
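For example (a sketch; the module and app names are assumptions for a Flask app exposed as app in app.py):
gunicorn --workers 2 --threads 4 --bind 0.0.0.0:8000 app:app
Each worker process loads its own copy of the model, so watch your memory budget; running the forward pass under torch.no_grad() with model.eval() keeps concurrent read-only requests from interfering with each other.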
| https://stackoverflow.com/questions/71005127/ |
using random_split() in python to split the training set into train and validation | train_dataset = torchvision.datasets.FashionMNIST(data_dir, train=True, download=True)
test_dataset = torchvision.datasets.FashionMNIST(data_dir, train=False, download=True)
Using the two lines above I load the FashionMNIST dataset, and then I convert the samples to Tensors and build DataLoaders using the lines below:
tr =transforms.Compose([transforms.ToTensor(),])
train_dataset.transform = tr
test_dataset.transform = tr
train_dataloader = DataLoader(train_dataset, batch_size=256, shuffle=True)
test_dataloader = DataLoader(test_dataset, batch_size=256, shuffle=False)
and then, using a for loop such as the one below, I iterate over the data and train the model in pytorch.
for i in train_dataloader:
But when I split the training data into two parts using random_split, I get an error in the for loop:
train_dataset, val_dataset = random_split(train_dataset, (50000, 10000))
train_dataset.transform = tr
test_dataset.transform = tr
val_dataset.transform = tr
train_dataloader = DataLoader(train_dataset, batch_size=256, shuffle=True)
validation_dataloader = DataLoader(val_dataset, batch_size=256, shuffle=True)
test_dataloader = DataLoader(test_dataset, batch_size=256, shuffle=False)
The error is:
default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found <class 'PIL.Image.Image'>
How can solve the issue?
| You should pass transform to your FashionMNIST dataset's constructor directly.
train_dataset = torchvision.datasets.FashionMNIST(data_dir, train=True, download=True, transform=tr)
test_dataset = torchvision.datasets.FashionMNIST(data_dir, train=False, download=True, transform=tr)
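Then the split and the loaders follow unchanged (a sketch reusing the question's variable names):
train_dataset, val_dataset = random_split(train_dataset, (50000, 10000))

train_dataloader = DataLoader(train_dataset, batch_size=256, shuffle=True)
validation_dataloader = DataLoader(val_dataset, batch_size=256, shuffle=True)
test_dataloader = DataLoader(test_dataset, batch_size=256, shuffle=False)
Since the Subsets returned by random_split wrap the underlying dataset, the transform set in the constructor is applied to all three splits, and there is no need to set .transform afterwards.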
| https://stackoverflow.com/questions/71007178/ |
Is it possible to run scatter matmul in pytorch? | Edit: apparently DGL is working on it already: https://github.com/dmlc/dgl/pull/3641
I have several types of embeddings and each one needs its own linear projection. I can solve the problem with a for loop of type:
emb_out = dict()
for ntype in ntypes:
emb_out[ntype] = self.lin_layer[ntype](emb[ntype])
But ideally, I wanted to do some sort of scatter operation to run it in parallel. Something like:
pytorch_scatter(lin_layers, embeddings, layer_map, reduce='matmul'), where the layer map tells which embedding should go through which layer. If I have 2 types of linear layers and batch_size = 5, then layer_map would be something like [1,0,1,1,0].
Would it be possible to vectorize the for loop in an efficient way, like in pytorch_scatter? Please check the minimal examples below.
import torch
import random
import numpy as np
seed = 42
torch.manual_seed(seed)
random.seed(seed)
def matmul_single_embtype(lin_layers, embeddings, layer_map):
#run single linear layer over all embeddings, irrespective of type
output_embeddings = torch.matmul(lin_layers[0], embeddings.T).T
return output_embeddings
def matmul_for_loop(lin_layers, embeddings, layer_map):
#let each embedding type have its own projection, looping over emb types
output_embeddings = dict()
for emb_type in np.unique(layer_map):
output_embeddings[emb_type] = torch.matmul(lin_layers[emb_type], embeddings[layer_map == emb_type].T).T
return output_embeddings
def matmul_scatter(lin_layers, embeddings, layer_map):
#parallelize the for loop by creating a diagonal matrix of lin layers
#this is very inefficient, because it creates a copy of the layer for each embedding, instead of broadcasting
mapped_lin_layers = [lin_layers[i] for i in layer_map]
mapped_lin_layers = torch.block_diag(*mapped_lin_layers) #batch_size*inp_size x batch_size*output_size
embeddings_stacked = embeddings.view(-1,1) #stack all embeddings to multiply the linear block
output_embeddings = torch.matmul(mapped_lin_layers, embeddings_stacked).view(embeddings.shape)
return output_embeddings
"""
GENERATE DATA
lin_layers:
List of matrices of size n_layer x inp_size x output_size
embeddings:
Matrix of size batch_size x inp_size
layer_map:
Vector os size batch_size stating which embedding should go thorugh each layer
"""
emb_size = 32
batch_size = 500
emb_types = 20
layer_map = [random.choice(list(range(emb_types))) for i in range(batch_size)]
lin_layers = [torch.arange(emb_size*emb_size, dtype=torch.float32).view(emb_size,emb_size) for i in range(emb_types)]
embeddings = torch.arange(batch_size*emb_size, dtype=torch.float32).view(batch_size,emb_size)
grouped_emb = {i: embeddings[layer_map==i] for i in np.unique(layer_map)} #separate embeddings by embedding type
#Run experiments
%timeit matmul_scatter(lin_layers, embeddings, layer_map)
%timeit matmul_for_loop(lin_layers, embeddings, layer_map)
%timeit matmul_single_embtype(lin_layers, embeddings, layer_map)
>>>>>133 ms ± 2.47 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
>>>>>1.64 ms ± 14 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
>>>>>31.4 µs ± 805 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Related stackoverflow question: how to vectorize the scatter-matmul operation
Related issue in pytorch: https://github.com/pytorch/pytorch/issues/31942
| Just found out that DGL is working on this feature already: https://github.com/dmlc/dgl/pull/3641
| https://stackoverflow.com/questions/71011135/ |
Pytorch expects each tensor to be equal size | When running this code: embedding_matrix = torch.stack(embeddings)
I got this error:
RuntimeError: stack expects each tensor to be equal size, but got [7, 768] at entry 0 and [8, 768] at entry 1
I'm trying to get embedding using BERT via:
split_sent = sent.split()
tokens_embedding = []
j = 0
for full_token in split_sent:
curr_token = ''
x = 0
for i,_ in enumerate(tokenized_sent[1:]):
token = tokenized_sent[i+j]
piece_embedding = bert_embedding[i+j]
if token == full_token and curr_token == '' :
tokens_embedding.append(piece_embedding)
j += 1
break
sent_embedding = torch.stack(tokens_embedding)
embeddings.append(sent_embedding)
embedding_matrix = torch.stack(embeddings)
Does anyone know how I can fix this?
| As per the PyTorch docs about the torch.stack() function, it needs the input tensors to have the same shape to stack. I don't know how you will be using the embedding_matrix, but you can either add padding to your tensors (a list of zeros at the end up to a certain user-defined length; this is recommended if you will train with this stacked tensor, refer to this tutorial) to make them equidimensional, or you can simply use something like torch.cat(data, dim=0).
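If padding is the route you take, a minimal sketch with the built-in helper, which zero-pads every sentence tensor up to the longest one:
from torch.nn.utils.rnn import pad_sequence

# embeddings is a list of tensors, each shaped (num_tokens_i, 768)
embedding_matrix = pad_sequence(embeddings, batch_first=True)
# -> shape (num_sentences, max_num_tokens, 768)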
| https://stackoverflow.com/questions/71011333/ |
Converting a PyG graph to a NetworkX graph | I am trying to convert my PyG graph to a NetworkX graph using to_networkx
According to the docs I can optionally pass node and edge attributes as str iterables, in addition to the Data object.
Below are by node and edge attribute lists, with values converted to strings:
Nodes: ['3.3375725746154785', '2.0086510181427',..., '1.5960148572921753', '3.621992349624634']
Edges: ['0.9940207804344958', '0.48573804411542043', ..., '0.7245483440145621', '0.24117984598949904']
to_networkx runs fine when I only pass it the Data object. However, when I also pass these attribute lists, I get the below error:
G[u][v][key] = values[key][i]
KeyError: '0.30194718370332896'
I've looked at the source code, but can't make out what it is doing. Could someone please help explain what is wrong with my attribute lists and what I need to change for them to be accepted.
What I can make out is that this error is specifically referring to my edge attributes. If I remove them, I get the following similar error related to the node attributes:
feat_dict.update({key: values[key][i]})
KeyError: '0.0'
How I construct my graph and pass it to to_networkx:
n1 = np.repeat(np.array([0,1,2,3,4,5,6]),5)
n2 = np.array([0,1,2,3,4,0,1,2,3,4,0,1,2,3,4,0,1,2,3,4,0,1,2,3,4,0,1,2,3,4,0,1,2,3,4])
cat = np.stack((n1,n2), axis=0)
e = torch.tensor(cat, dtype=torch.long)
edge_index = e.t().clone().detach()
edge_attr = torch.tensor(np.random.rand(35,1))
x = torch.tensor([[0], [0], [0], [0], [0], [1], [1]], dtype=torch.float)
data = Data(x=x, edge_index=edge_index.t().contiguous(), edge_attr = edge_attr)
Before I pass the node and edge attributes, I do the string conversion to conform to the str iterable requirement:
networkx_node_values = list(map(str, data.x.t()[0].tolist()))
networkx_edge_values = list(map(str, edge_attr.t()[0].tolist()))
networkX_graph = to_networkx(data, node_attrs = networkx_node_values, edge_attrs = networkx_edge_values)
| You need to pass the names of the attributes as a list:
to_networkx(<PyTorchGeometricDataObject>, node_attrs=[<Name of Node Attribute 1>, <Name of Node Attributes 2>, ... ], edge_attr=[<Edge Attribute 1>, ...])
Or in context, based on your given minimal example:
import numpy as np
import torch
from torch_geometric.data import Data
from torch_geometric.utils import to_networkx
n1 = np.repeat(np.array([0,1,2,3,4,5,6]),5)
n2 = np.array([0,1,2,3,4,0,1,2,3,4,0,1,2,3,4,0,1,2,3,4,0,1,2,3,4,0,1,2,3,4,0,1,2,3,4])
cat = np.stack((n1,n2), axis=0)
e = torch.tensor(cat, dtype=torch.long)
edge_index = e.t().clone().detach()
edge_attr = torch.tensor(np.random.rand(35,1))
x = torch.tensor([[0], [0], [0], [0], [0], [1], [1]], dtype=torch.float)
data = Data(x=x, edge_index=edge_index.t().contiguous(), edge_attr = edge_attr)
print(data)
# Data(edge_attr=[35, 1], edge_index=[2, 35], x=[7, 1])
networkX_graph = to_networkx(data, node_attrs=["x"], edge_attrs=["edge_attr"])
print(networkX_graph.nodes(data=True))
# [(0, {'x': 0.0}), (1, {'x': 0.0}),...
print(networkX_graph.edges(data=True))
# [(0, 0, {'edge_attr': 0.3412137594357493}), ...
| https://stackoverflow.com/questions/71011514/ |
Clipping a tensor if below a certain threshold | I have a tensor with dim n x 3, containing n 3d vectors. I want to compute a new tensor with dim n x 3. If the norm of a vector drops below a certain threshold I want to set it to the zero vector and get a new tensor which contains the index positions of the changed vectors.
Example: tensor([1, 2, 3], [0, 1, 1], [4, 5, 6], ...) would lead to tensor([1, 2, 3], [0, 0, 0], [4, 5, 6], ...) with tensor([1]) if the threshold was set to 1.5.
How can I achieve that without using a loop? Thanks for your help.
| Simply do a[vector_norm(a,dim=1) < thr] = 0, where thr is your threshold. Here's a demonstration.
import torch
from torch.linalg import vector_norm
n = 10
a = torch.rand(10,3)
print('a before:',a)
thr = 1
ind = vector_norm(a,dim=1) < thr
a[ind] = 0
print('a after:',a)
print('list of indices',ind.nonzero())
Result of an example run:
a before: tensor([[0.0708, 0.7559, 0.3974],
[0.2969, 0.0974, 0.8652],
[0.8074, 0.8180, 0.2432],
[0.9006, 0.2447, 0.1602],
[0.6289, 0.1976, 0.8543],
[0.2109, 0.7539, 0.6334],
[0.9100, 0.2514, 0.2314],
[0.6657, 0.1940, 0.6565],
[0.4577, 0.8439, 0.5681],
[0.5566, 0.9979, 0.1468]])
a after: tensor([[0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000],
[0.8074, 0.8180, 0.2432],
[0.0000, 0.0000, 0.0000],
[0.6289, 0.1976, 0.8543],
[0.2109, 0.7539, 0.6334],
[0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000],
[0.4577, 0.8439, 0.5681],
[0.5566, 0.9979, 0.1468]])
list of indices tensor([[0],
[1],
[3],
[6],
[7]])
| https://stackoverflow.com/questions/71020921/ |
How to use 'same' padding for maxpool1d | Tensorflow tf.keras.layers.MaxPool1D has the option to set padding='same' to make the output shape the same as the input shape. Is there something equivalent for torch.nn.MaxPool1d? I see that torch.nn.Conv1d has the option to set padding='same', but this option seems to be missing from maxpool. What is the current workaround for this?
| Note that the output of the keras version is only really the same shape as the input whenever you use it with stride and dilation set to 1, so I'll assume the same parameters in this answer.
For any odd kernel size, this is quite easily achievable in PyTorch by setting the padding to (kernel_size - 1)/2.
For an even kernel size, both sides of the input need to be padded by different amounts, and this does not seem possible in the current implementation of MaxPool1d.
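A common workaround (a sketch, assuming stride 1) is to pad asymmetrically yourself with -inf, so the padding can never win the max, and then pool with no built-in padding:
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(8, 16, 100)            # (batch, channels, length)
kernel_size = 4                        # even kernel
pad_left = (kernel_size - 1) // 2      # 1
pad_right = kernel_size // 2           # 2 (TF puts the extra pad on the right)

x_padded = F.pad(x, (pad_left, pad_right), value=float("-inf"))
out = nn.MaxPool1d(kernel_size, stride=1)(x_padded)
print(out.shape)                       # torch.Size([8, 16, 100])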
| https://stackoverflow.com/questions/71021725/ |
Loading a pretrained model in PyTorch, error:object not callable | I am trying to load the Efficientnet-b6 weights using PyTorch and Fastai:
PATH = '../input/EffnetB6/efficientnet_b6.pth'
model = torch.load(PATH)
The above model is part of another model:
class EARUnet(nn.Module):
def __init__(self, pretrained_net, out_ch=1):
super(EARUnet, self).__init__()
# print("EfficientUnet_git_b6_res")
self.pretrained_net = pretrained_net
.
.
When I run:
net = EARUnet(model,1)
learn.fit_flat_cos(10)
I get this error:
TypeError: 'collections.OrderedDict' object is not callable
What format is the model expected to be in?
| Given the limited context, I suspect that the problem resides in model, which probably contains an OrderedDict of the EfficientNet model's state dict, while EARUnet expects the EfficientNet nn.Module.
You should instead, try something like:
eff_net = EfficientNetB6()
eff_net_state_dict = torch.load(PATH)
eff_net.load_state_dict(eff_net_state_dict)
net = EARUnet(eff_net, 1)
Have a look at this page for more details.
| https://stackoverflow.com/questions/71023630/ |
LSTM, multi-variate, multi-feature in pytorch | I'm having trouble understanding the format of data for an LSTM in pytorch. Let's say I have a CSV file with 4 features, laid out in timestamps one after the other (a classic time series):
time1 feature1 feature2 feature3 feature4
time2 feature1 feature2 feature3 feature4
time3 feature1 feature2 feature3 feature4
time4 feature1 feature2 feature3 feature4, label
However, this entire set of 4 sequences only has a single label. The thing we're trying to classify started at time1, but we don't know how to label it until time 4.
My question is, can a typical pytorch LSTM support this? All of the tutorials I've read, watched, or walked through involve looking at a time sequence of a single feature, or a word model, which is still a dataset with a single dimension.
If it can support it, does the data need to be flattened in some way?
Pytorch's LSTM reference states:
input: tensor of shape (L, N, H_in) when batch_first=False or (N, L, H_in) when batch_first=True, containing the features of the input sequence. The input can also be a packed variable length sequence.
Does this mean that it cannot support any input that contains multiple sequences? Or is there another name for this?
I'm really lost here, and could use any advice, pointers, help, so on. Maybe some disambiguation too.
I've posted a couple times here but gotten no responses at all. If this post is misplaced, could someone kindly direct me towards the correct place to post it?
Edit: Following Daniel's advice, do i understand correctly that the four features should be put together like this:
[(feature1, feature2, feature3, feature4, feature1, feature2, feature3, feature4, feature1, feature2, feature3, feature4, feature1, feature2, feature3, feature4), label] when given to the LSTM?
If that's correct, is the input size (16) in this case?
Finally, I was under the impression that the output of the LSTM Would be the predicted label. Do I have that wrong?
| As you show, the LSTM layer's input size is (batch_size, sequence_length, feature_size). This means that the feature is assumed to be a 1D vector.
So to use it in your case you need to stack your four features into one vector (if they are more then 1D themselves then flatten them first) and use that vector as the layer's input.
Regarding the label: it is definitely supported to have a label only after several timesteps. The LSTM will output a sequence with the same length as the input sequence, but when training the LSTM you can choose to use any part of that sequence in the loss function. In your case you will want to use the last element only.
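A minimal sketch of that setup (the hidden size, the linear head and the MSE loss are assumptions for illustration):
import torch
import torch.nn as nn

batch_size, seq_len, n_features = 8, 4, 4  # 4 timesteps, 4 features per timestep
lstm = nn.LSTM(input_size=n_features, hidden_size=32, batch_first=True)
head = nn.Linear(32, 1)  # maps the last hidden state to the single label

x = torch.randn(batch_size, seq_len, n_features)
out, _ = lstm(x)          # out: (batch, seq_len, hidden)
pred = head(out[:, -1])   # keep only the last timestep's output
loss = nn.functional.mse_loss(pred, torch.randn(batch_size, 1))
loss.backward()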
| https://stackoverflow.com/questions/71023822/ |
Apply gradient to a tensor in a sparse way in PyTorch | I have a very large tensor L (millions of elements), from which I gather a relatively small subtensor S (maybe a thousand of elements).
I then apply my model to S, compute the loss, and backpropagate to S and to L with the intent to only update the selected elements in L. The problem is that PyTorch makes L's gradient a dense tensor of the same size as L, so it basically doubles L's memory usage.
Is there an easy way to compute and apply gradient to L without doubling memory usage?
Sample code to illustrate the problem:
import torch
import torch.nn as nn
from torch.nn.parameter import Parameter
net = nn.Sequential(
nn.Linear(1, 64),
nn.ReLU(),
nn.Linear(64,64),
nn.ReLU(),
nn.Linear(64, 1))
L = Parameter(torch.zeros([1024*1024*256], dtype=torch.float32))
L.data.uniform_(-1, 1)
indices = torch.randint(high=256*1024*1024, size=[1024])
S = torch.unsqueeze(L[indices], dim=1)
out = net(S)
loss = out.sum()
loss.backward()
print(loss)
g = L.grad
print(g.shape) # this is huge!
| You don't actually need requires_grad on L as gradients will be computed and applied manually. Instead, set it on S. That will stop backpropagation at S.
Then, you can update the values of L using S.grad and your preferred optimization. Something along these lines
L = torch.zeros([1024*1024*256], dtype=torch.float32)
...
S = torch.unsqueeze(L[indices], dim=1)
S.requires_grad_()
out = net(S)
loss = torch.abs(out).sum()
loss.backward()
with torch.no_grad():
L[indices] -= learning_rate * torch.squeeze(S.grad)
S.grad.zero_()
| https://stackoverflow.com/questions/71026805/ |
Pytorch to ONNX: Could not find an implementation for RandomNormalLike | I am trying to convert a fairly complex model from pytorch into ONNX. The conversion succeeds without error, but I am encountering this error when loading the model:
Traceback (most recent call last):
File "/home/***/***/***.py", line 50, in <module>
main()
File "/home/***/***/***.py", line 38, in main
ort_session = ort.InferenceSession(onnx_path, providers=[
File "/home/***/miniconda3/envs/***/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 324, in __init__
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "/home/***/miniconda3/envs/***/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 369, in _create_inference_session
sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.NotImplemented: [ONNXRuntimeError] : 9 : NOT_IMPLEMENTED : Could not find an implementation for RandomNormalLike(1) node with name 'RandomNormalLike_598'
I think that the RandomNormalLike node the error is complaining about might correspond to this module I have:
class NoiseInjection(nn.Module):
def __init__(self):
super().__init__()
self.weight = nn.Parameter(torch.zeros(1), requires_grad=True)
def forward(
self,
feat: torch.Tensor,
noise: Optional[torch.Tensor] = None,
) -> torch.Tensor:
if noise is None:
batch, _, height, width = feat.shape
noise = torch.randn(
batch, 1, height, width,
dtype=feat.dtype,
device=feat.device,
)
return feat + self.weight * noise
I also created a different implementation, but it leads to the same error: (edit: This version actually works. I made an unrelated mistake elsewhere that misled me into thinking it did not work)
def forward(
self,
feat: torch.Tensor,
noise: Optional[torch.Tensor] = None,
) -> torch.Tensor:
if noise is None:
noise = torch.randn_like(feat[:, 0:1])
return feat + self.weight * noise
My pytorch and onnx version are as follows:
$ conda list torch
# Name Version Build Channel
torch 1.10.0+cu113 pypi_0 pypi
torchaudio 0.10.0+cu113 pypi_0 pypi
torchvision 0.11.1+cu113 pypi_0 pypi
$ conda list onnx
# Name Version Build Channel
onnx 1.10.2 pypi_0 pypi
onnxruntime-gpu 1.9.0 pypi_0 pypi
What can be done to be able to export such a module to onnx and run it successfully?
| From checking online I found a similar issue on GitHub about conv (https://github.com/microsoft/onnxruntime/issues/3130); it could be that the types of the parameters used in torch are not compatible with the implementation of RandomNormalLike available in ONNX.
Could you check in netron what's inside the RandomNormalLike node/nodes to see if they comply with the spec: https://github.com/onnx/onnx/blob/main/docs/Operators.md#RandomNormal or https://github.com/onnx/onnx/blob/main/docs/Operators.md#RandomNormalLike
Cheers
EDIT: it turns out the RandomNormalLike node has a dtype of 10, which corresponds to fp16,
while the onnxruntime implementation only supports floats and doubles; see the source code here: https://github.com/microsoft/onnxruntime/blob/24e35fba3217bf33b0e4064bc71d271a61938ba0/onnxruntime/onnxruntime/core/providers/cpu/generator/random.cc#L354
The solution here is either to run the whole model in fp32, or to explicitly ask RandomNormalLike to use floats or doubles, hoping that torch allows mixed computation on fp16 and fp32/fp64, I guess
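To check which dtype the exported node actually requests, you could inspect the graph directly (the file path is an assumption for illustration):
import onnx

model = onnx.load("model.onnx")  # hypothetical path
for node in model.graph.node:
    if node.op_type == "RandomNormalLike":
        for attr in node.attribute:
            if attr.name == "dtype":
                # 1 = float32, 10 = float16, 11 = double (onnx.TensorProto enum)
                print(node.name, attr.i)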
| https://stackoverflow.com/questions/71031604/ |
pycave kmeans-gpu disable model trainer outputs from being printed | This is a code snippet to run KMeans using GPU.
Documentation link: https://pycave.borchero.com/sites/generated/clustering/kmeans/pycave.clustering.KMeans.html
import torch
from pycave.clustering import KMeans
X = torch.cat([
torch.randn(1000, 6) - 5,
torch.randn(1000, 6),
torch.randn(1000, 6) + 5,
])
estimator = KMeans(num_clusters = 3, trainer_params=dict(gpus=1,
enable_progress_bar=0,
max_epochs=100,))
labels = estimator.fit_predict(X).numpy()
pd.value_counts(labels)
The issue is with how to disable the console output from the estimator.
Current Output:
Running initialization...
{'batch_size': 3000, 'collate_fn': <function collate_tensor at 0x000002BE21221700>}
Fitting K-Means...
{'batch_size': 3000, 'collate_fn': <function collate_tensor at 0x000002BE21221700>}
{'batch_size': 1, 'sampler': None, 'batch_sampler': <pytorch_lightning.overrides.distributed.IndexBatchSamplerWrapper object at 0x000002BE593A55B0>, 'collate_fn': <function collate_tensor at 0x000002BE21221700>, 'shuffle': False, 'drop_last': False}
0 1000
2 1000
1 1000
dtype: int64
Expected Output:
0 1000
2 1000
1 1000
dtype: int64
Info regarding trainer_params parameter
(Optional[Dict[str, Any]]) --
Initialization parameters to use when initializing a PyTorch Lightning trainer. By default, it disables various stdout logs unless PyCave is configured to do verbose logging. Checkpointing and logging are disabled regardless of the log level.
| The dictionaries that are printed should never be there, that's a bug in a dependency. Resolved in the latest build.
As far as the PyCave logs are concerned (Running initialization... and Fitting K-Means...), you can turn them off easily by adding the following:
import logging
from pycave import set_logging_level
set_logging_level(logging.WARNING)
Note that set_logging_level(logging.WARNING) also turns off the progress bar and the model summary automatically so you don't have to set these flags explicitly.
| https://stackoverflow.com/questions/71032276/ |
Tensorflow: Create the torch.gather() equivalent in tensorflow | I want to replicate the torch.gather() function in TensorFlow 2.X.
I have a Tensor A (shape: [2, 4, 3]) and a corresponding Index-Tensor I (shape: [2,2,3]).
Using torch.gather() yields the following:
A = torch.tensor([[[10,20,30], [100,200,300], [1000,2000,3000]],
[[50,60,70], [500,600,700], [5000,6000,7000]]])
I = torch.tensor([[[0,1,0], [1,2,1]],
[[2,1,2], [1,0,1]]])
torch.gather(A, 1, I)
>
tensor([[[10, 200, 30], [100, 2000, 300]],
[5000, 600, 7000], [500, 60, 700]]])
I have tried using tf.gather(), but this did not yield pytorch-like results. I also tried to play around with tf.gather_nd(), but I could not find a suitable solution.
I found this StackOverflow post, but this seems not to work for me.
Edit:
When using tf.gather_nd(A, I), I get the following result:
tf.gather_nd(A, I)
>
[[100, 6000],
[ 0, 60]]
The result for tf.gather(A, I) is rather lengthy. It has the shape of [2, 2, 3, 4, 3]
| torch.gather and tf.gather_nd work differently and will therefore yield different results when using the same indices tensor (in some cases an error will also be returned). This is what the indices tensor would have to look like to get the same results:
import tensorflow as tf
A = tf.constant([[
[10,20,30], [100,200,300], [1000,2000,3000]],
[[50,60,70], [500,600,700], [5000,6000,7000]]])
I = tf.constant([[[
[0,0,0],
[0,1,1],
[0,0,2],
],[
[0,1,0],
[0,2,1],
[0,1,2],
]],
[[
[1,2,0],
[1,1,1],
[1,2,2],
],
[
[1,1,0],
[1,0,1],
[1,1,2],
]]])
print(tf.gather_nd(A, I))
tf.Tensor(
[[[ 10 200 30]
[ 100 2000 300]]
[[5000 600 7000]
[ 500 60 700]]], shape=(2, 2, 3), dtype=int32)
So, the question is actually: how are you calculating your indices, or are they always hard-coded? Also, check out this post on the differences of the two operations.
As for the post you linked that didn't work for you, you just need to cast the indices and everything should be fine:
def torch_gather(x, indices, gather_axis):
all_indices = tf.where(tf.fill(indices.shape, True))
gather_locations = tf.reshape(indices, [indices.shape.num_elements()])
gather_indices = []
for axis in range(len(indices.shape)):
if axis == gather_axis:
gather_indices.append(tf.cast(gather_locations, dtype=tf.int64))
else:
gather_indices.append(tf.cast(all_indices[:, axis], dtype=tf.int64))
gather_indices = tf.stack(gather_indices, axis=-1)
gathered = tf.gather_nd(x, gather_indices)
reshaped = tf.reshape(gathered, indices.shape)
return reshaped
I = tf.constant([[[0,1,0], [1,2,1]],
[[2,1,2], [1,0,1]]])
A = tf.constant([[
[10,20,30], [100,200,300], [1000,2000,3000]],
[[50,60,70], [500,600,700], [5000,6000,7000]]])
print(torch_gather(A, I, 1))
tf.Tensor(
[[[ 10 200 30]
[ 100 2000 300]]
[[5000 600 7000]
[ 500 60 700]]], shape=(2, 2, 3), dtype=int32)
| https://stackoverflow.com/questions/71035337/ |
In the PyTorch Distributed Data Parallel (DDP) tutorial, how does `setup` know it's rank? | For the tutorial Getting Started with Distributed Data Parallel
How does the setup() function know the rank when mp.spawn() doesn't pass the rank?
def setup(rank, world_size):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12355'
# initialize the process group
dist.init_process_group("gloo", rank=rank, world_size=world_size)
def demo_basic(rank, world_size):
print(f"Running basic DDP example on rank {rank}.")
setup(rank, world_size)
.......
def run_demo(demo_fn, world_size):
mp.spawn(demo_fn,
args=(world_size,),
nprocs=world_size,
join=True)
if __name__ == "__main__":
n_gpus = torch.cuda.device_count()
assert n_gpus >= 2, f"Requires at least 2 GPUs to run, but got {n_gpus}"
world_size = n_gpus
run_demo(demo_basic, world_size)
| mp.spawn does pass the rank to the function it calls.
From the torch.multiprocessing.spawn docs
torch.multiprocessing.spawn(fn, args=(), nprocs=1, join=True, daemon=False, start_method='spawn')
...
fn (function) -
Function is called as the entrypoint of the spawned process. This
function must be defined at the top level of a module so it can be
pickled and spawned. This is a requirement imposed by multiprocessing. The function is called as fn(i, *args), where i is the process index
and args is the passed through tuple of arguments.
So when spawn invokes fn it passes it the process index as the first argument.
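A minimal sketch of that behavior (the worker function and its extra args are made up for illustration):
import torch.multiprocessing as mp

def worker(rank, world_size, message):
    # rank is injected by mp.spawn as the first positional argument
    print(f"process {rank} of {world_size}: {message}")

if __name__ == "__main__":
    mp.spawn(worker, args=(4, "hello"), nprocs=4, join=True)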
| https://stackoverflow.com/questions/71036271/ |
How to Suppress "Using bos_token, but it is not set yet..." in HuggingFace T5 Tokenizer | I'd like to turn off the warning that huggingface is generating when I use unique_no_split_tokens
In[2] tokenizer = T5Tokenizer.from_pretrained("t5-base")
In[3] tokenizer(" ".join([f"<extra_id_{n}>" for n in range(1,101)]), return_tensors="pt").input_ids.size()
Out[3]: torch.Size([1, 100])
Using bos_token, but it is not set yet.
Using cls_token, but it is not set yet.
Using mask_token, but it is not set yet.
Using sep_token, but it is not set yet.
Anyone know how to do this?
| This solution worked for me:
tokenizer.add_tokens([f"<extra_id_{n}>" for n in range(1,100)], special_tokens=True)
model.resize_token_embeddings(len(tokenizer))
tokenizer.save_pretrained('pathToExtendedTokenizer/')
tokenizer = T5Tokenizer.from_pretrained("pathToExtendedTokenizer/")
| https://stackoverflow.com/questions/71039446/ |
Retrieve the PyTorch model from a PyTorch lightning model | I have trained a PyTorch lightning model that looks like this:
In [16]: MLP
Out[16]:
DecoderMLP(
(loss): RMSE()
(logging_metrics): ModuleList(
(0): SMAPE()
(1): MAE()
(2): RMSE()
(3): MAPE()
(4): MASE()
)
(input_embeddings): MultiEmbedding(
(embeddings): ModuleDict(
(LCLid): Embedding(5, 4)
(sun): Embedding(5, 4)
(day_of_week): Embedding(7, 5)
(month): Embedding(12, 6)
(year): Embedding(3, 3)
(holidays): Embedding(2, 1)
(BusinessDay): Embedding(2, 1)
(day): Embedding(31, 11)
(hour): Embedding(24, 9)
)
)
(mlp): FullyConnectedModule(
(sequential): Sequential(
(0): Linear(in_features=60, out_features=435, bias=True)
(1): ReLU()
(2): Dropout(p=0.13371112461182535, inplace=False)
(3): LayerNorm((435,), eps=1e-05, elementwise_affine=True)
(4): Linear(in_features=435, out_features=435, bias=True)
(5): ReLU()
(6): Dropout(p=0.13371112461182535, inplace=False)
(7): LayerNorm((435,), eps=1e-05, elementwise_affine=True)
(8): Linear(in_features=435, out_features=435, bias=True)
(9): ReLU()
(10): Dropout(p=0.13371112461182535, inplace=False)
(11): LayerNorm((435,), eps=1e-05, elementwise_affine=True)
(12): Linear(in_features=435, out_features=435, bias=True)
(13): ReLU()
(14): Dropout(p=0.13371112461182535, inplace=False)
(15): LayerNorm((435,), eps=1e-05, elementwise_affine=True)
(16): Linear(in_features=435, out_features=435, bias=True)
(17): ReLU()
(18): Dropout(p=0.13371112461182535, inplace=False)
(19): LayerNorm((435,), eps=1e-05, elementwise_affine=True)
(20): Linear(in_features=435, out_features=435, bias=True)
(21): ReLU()
(22): Dropout(p=0.13371112461182535, inplace=False)
(23): LayerNorm((435,), eps=1e-05, elementwise_affine=True)
(24): Linear(in_features=435, out_features=435, bias=True)
(25): ReLU()
(26): Dropout(p=0.13371112461182535, inplace=False)
(27): LayerNorm((435,), eps=1e-05, elementwise_affine=True)
(28): Linear(in_features=435, out_features=435, bias=True)
(29): ReLU()
(30): Dropout(p=0.13371112461182535, inplace=False)
(31): LayerNorm((435,), eps=1e-05, elementwise_affine=True)
(32): Linear(in_features=435, out_features=435, bias=True)
(33): ReLU()
(34): Dropout(p=0.13371112461182535, inplace=False)
(35): LayerNorm((435,), eps=1e-05, elementwise_affine=True)
(36): Linear(in_features=435, out_features=1, bias=True)
)
)
)
I need the corresponding PyTorch model to use in one of my other applications.
Is there a simple way to do that?
I thought of saving the checkpoint but then I don't know how to do it.
Can you please help?
Thanks
| You can manually save the weights of the torch.nn.Modules in the LightningModule. Something like:
trainer.fit(model, trainloader, valloader)
torch.save(
model.input_embeddings.state_dict(),
"input_embeddings.pt"
)
torch.save(model.mlp.state_dict(), "mlp.pt")
Then to load without needing Lightning:
# create the "blank" networks like they
# were created in the Lightning Module
input_embeddings = MultiEmbedding(...)
mlp = FullyConnectedModule(...)
# Load the models for inference
input_embeddings.load_state_dict(
torch.load("input_embeddings.pt")
)
input_embeddings.eval()
mlp.load_state_dict(
torch.load("mlp.pt")
)
mlp.eval()
For more information about saving and loading PyTorch Modules see Saving and Loading Models: Saving & Loading Model for Inference in the PyTorch documentation.
Since Lightning automatically saves checkpoints to disk (check the lightning_logs folder if using the default Tensorboard logger), you can also load a pretrained LightningModule and then save the state dicts without needing to repeat all the training. Instead of calling trainer.fit in the previous code, try
model = DecoderMLP.load_from_checkpoint("path/to/checkpoint.ckpt")
| https://stackoverflow.com/questions/71039820/ |
PyTorch - Neural Network - Output single scalar value | Let's say we have the following neural network in PyTorch
seq_model = nn.Sequential(
nn.Linear(1, 13),
nn.Tanh(),
nn.Linear(13, 1))
With the following input tensor
input = torch.tensor([1.0, 1.0, 5.0], dtype=torch.float32).unsqueeze(1)
I can run forward through the net and get
seq_model(input)
tensor([[-0.0165],
[-0.0165],
[-0.2289]], grad_fn=<TanhBackward0>)
I can probably also get a single scalar value as an output, but I'm not sure how.
Thank you. I'm trying to use such a network for reinforcement learning, and use it
as a value function approximator for game board state evaluation.
| The first dimension of input represents the number of observations in your minibatch (3), while the second dimension represents the number of features (1).
If you want to forward a single 3d input, the network must be modified (nn.Linear(1, 13) becomes nn.Linear(3, 13)), and you must remove unsqueeze(1) on input. Otherwise, you can merge the three outputs by using a loss to compute a single scalar from them.
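A minimal sketch of both options (treating the three values as one observation is an assumption about the intended semantics):
import torch
import torch.nn as nn

# Option 1: one 3-feature observation -> one scalar output.
seq_model = nn.Sequential(nn.Linear(3, 13), nn.Tanh(), nn.Linear(13, 1))
x = torch.tensor([[1.0, 1.0, 5.0]])  # shape (1, 3); no unsqueeze(1)
print(seq_model(x))                  # tensor of shape (1, 1)

# Option 2: keep the per-observation outputs and reduce them to one scalar,
# e.g. value = seq_model(inputs).mean()  # or .sum(), depending on the task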
| https://stackoverflow.com/questions/71040388/ |
Difference between the calculation of the training loss and validation loss using pytorch | I wanna use the following code of this traditional image classification problem for my regression problem. The code can be found here:
GeeksforGeeks-Training Neural Networks with Validation using Pytorch
class Network(nn.Module):
def __init__(self):
super(Network,self).__init__()
self.fc1 = nn.Linear(28*28, 256)
self.fc2 = nn.Linear(256, 128)
self.fc3 = nn.Linear(128, 10)
def forward(self, x):
x = x.view(1,-1)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
model = Network()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr = 0.01)
epochs = 5
for e in range(epochs):
train_loss = 0.0
model.train() # Optional when not using Model Specific layer
for data, labels in trainloader:
if torch.cuda.is_available():
data, labels = data.cuda(), labels.cuda()
optimizer.zero_grad()
target = model(data)
loss = criterion(target,labels)
loss.backward()
optimizer.step()
train_loss += loss.item()
valid_loss = 0.0
model.eval() # Optional when not using Model Specific layer
for data, labels in validloader:
if torch.cuda.is_available():
data, labels = data.cuda(), labels.cuda()
target = model(data)
loss = criterion(target,labels)
valid_loss = loss.item() * data.size(0)
print(f'Epoch {e+1} \t\t Training Loss: {train_loss / len(trainloader)} \t\t Validation Loss: {valid_loss / len(validloader)}')
I can understand why the training loss is summed up and then divided by the length of the training data in this example, but I can't get why the validation loss is also not summed up and divided by the length. If I understand correctly, the validation loss will be calculated here by using the validation loss of the last batch and then it is multiplied by the length of the batch size.
Is the calulation of the validation loss the correct way to do it? Can I use the code for my regression problem assuming I use regression-specific metrics (e.g. MSE instead of CrossEntropyLoss etc.)?
| Yes, you can use the code for your regression task. The targets of the code example are one-hot vectors or, in the MNIST example, the numbers 0 to 9, which symbolize the classes. You would make a scalar out of that in the regression case. The loss function, which is the cross-entropy in the example, can be replaced by the MSE in your case.
The validation loss in this snippet is effectively computed from the last batch only: valid_loss is overwritten (=) instead of accumulated (+=) on each iteration, so it ends up holding just the final batch's loss scaled by data.size(0).
Dividing that single value by len(validloader) therefore does not produce an average over the whole validation set.
On the linked web page, however, the validation loss is accumulated over all batches of the validation set, as it should be done.
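A sketch of the accumulated version (the torch.no_grad() context is an extra assumption, not part of the original snippet):
valid_loss = 0.0
model.eval()
with torch.no_grad():
    for data, labels in validloader:
        if torch.cuda.is_available():
            data, labels = data.cuda(), labels.cuda()
        target = model(data)
        loss = criterion(target, labels)
        valid_loss += loss.item() * data.size(0)  # accumulate, don't overwrite

valid_loss /= len(validloader.dataset)  # average over all validation samples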
| https://stackoverflow.com/questions/71048172/ |
Error loading caffe2_detectron_ops.dll when importing torch in IDE | Pytorch GPU did work for me, but after reinstalling anaconda I got this error:
Error loading “caffe2_detectron_ops.dll” (when installing the cpu version) or
Error loading “caffe2_detectron_ops_gpu.dll” (when installing the gpu version)
This error appears already when importing torch in spyder IDE.
Total error message: "OSError: [WinError 182] The operating system cannot run %1. Error loading "C:\Users\konin\anaconda3\envs\pytorch_env\lib\site-packages\torch\lib\caffe2_detectron_ops.dll" or one of its dependencies."
Simply deleting “caffe2_detectron_ops.dll” will give me a new error: Error loading “caffe2_module_test_dynamic.dll”, then error loading “caffe2_observers.dll”, …
Deleting all of them didn't solve my problem; it ended up with the error "ImportError: DLL load failed while importing _C". I'm working in a conda environment (python 3.9.7), and starting new environments gives the same errors. A CPU or GPU download of pytorch won't make a difference. Installing intel-openmp didn't fix it. Reinstalling torch didn't help.
I've done everything I could find about this error message. It is actually working when using the CMD prompt, but not when running the file in spyder.
Any suggestion would be really appreciated, thanks
| Btw, for everyone curious, I got it solved by resetting all of my environments and setting up my environments more cleanly. Probably this conda environment got corrupted by my base environment files.
| https://stackoverflow.com/questions/71048489/ |
create one-hot encoding for values of histogram bins | Given the tensor below of size torch.Size([22])
tensor([-20.1659, -19.7022, -17.4124, -16.7115, -16.4696, -15.6848, -15.5201, -14.5384, -12.5017, -12.4227, -11.0946, -10.7844, -10.5467, -9.3933, -4.2351, -4.0521, -3.8844, -3.8668, -3.7337, -3.7002, -3.6242, -3.5820])
and the below historgram:
hist = torch.histogram(tensor, 5)
hist
torch.return_types.histogram(
hist=tensor([3., 5., 5., 1., 8.]),
bin_edges=tensor([-20.1659, -16.8491, -13.5323, -10.2156, -6.8988, -3.5820]))
For each value of the tensor, how to create a one hot encoding that corresponds to its bin number, so that the output is a tensor of size torch.Size([22, 5])
| You can use torch.repeat_interleave
import torch
bins = torch.tensor([3, 5, 5, 1, 8])
one_hots = torch.eye(len(bins))
one_hots = torch.repeat_interleave(one_hots, bins, dim=0)
print(one_hots)
output
tensor([[1., 0., 0., 0., 0.],
[1., 0., 0., 0., 0.],
[1., 0., 0., 0., 0.],
[0., 1., 0., 0., 0.],
[0., 1., 0., 0., 0.],
[0., 1., 0., 0., 0.],
[0., 1., 0., 0., 0.],
[0., 1., 0., 0., 0.],
[0., 0., 1., 0., 0.],
[0., 0., 1., 0., 0.],
[0., 0., 1., 0., 0.],
[0., 0., 1., 0., 0.],
[0., 0., 1., 0., 0.],
[0., 0., 0., 1., 0.],
[0., 0., 0., 0., 1.],
[0., 0., 0., 0., 1.],
[0., 0., 0., 0., 1.],
[0., 0., 0., 0., 1.],
[0., 0., 0., 0., 1.],
[0., 0., 0., 0., 1.],
[0., 0., 0., 0., 1.],
[0., 0., 0., 0., 1.]])
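The approach above relies on the input values already being sorted, since it emits the one-hot rows in bin order. If the values may be unsorted, a more general route (a sketch, assuming the same 5-bin histogram) is torch.bucketize plus one_hot:
import torch
import torch.nn.functional as F

hist = torch.histogram(tensor, 5)
# right=True matches the histogram's left-closed bins; passing only the inner
# edges makes the maximum value land in the last bin
bin_ids = torch.bucketize(tensor, hist.bin_edges[1:-1], right=True)
one_hots = F.one_hot(bin_ids, num_classes=5).float()  # shape (22, 5)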
| https://stackoverflow.com/questions/71049941/ |
Transformers: How to use CUDA for inferencing? | I have fine-tuned my models with GPU but inferencing process is very slow, I think this is because inferencing uses CPU by default. Here is my inferencing code:
txt = "This was nice place"
model = transformers.BertForSequenceClassification.from_pretrained(model_path, num_labels=24)
tokenizer = transformers.BertTokenizer.from_pretrained('TurkuNLP/bert-base-finnish-cased-v1')
encoding = tokenizer.encode_plus(txt, add_special_tokens = True, truncation = True, padding = "max_length", return_attention_mask = True, return_tensors = "pt")
output = model(**encoding)
output = output.logits.softmax(dim=-1).detach().cpu().flatten().numpy().tolist()
Here is my second inferencing code, which is using pipeline (for different model):
classifier = transformers.pipeline("sentiment-analysis", model="distilbert-base-uncased-finetuned-sst-2-english")
result = classifier(txt)
How can I force transformers library to do faster inferencing on GPU? I have tried adding model.to(torch.device("cuda")) but that throws error:
Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu
I suppose the problem is related to the data not being sent to GPU. There is a similar issue here: pytorch summary fails with huggingface model II: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu
How would I send data to GPU with and without pipeline? Any advise is highly appreciated.
| You should transfer your input to CUDA as well before performing the inference:
device = torch.device('cuda')
# transfer model
model.to(device)
# define input and transfer to device
encoding = tokenizer.encode_plus(txt,
add_special_tokens=True,
truncation=True,
padding="max_length",
return_attention_mask=True,
return_tensors="pt")
encoding = encoding.to(device)
# inference
output = model(**encoding)
Be aware nn.Module.to is in-place, while torch.Tensor.to is not (it does a copy!).
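For the pipeline API from the question, the transfer is handled by the device argument (a GPU index, or -1 for the CPU):
classifier = transformers.pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    device=0,  # run the model on the first GPU
)
result = classifier(txt)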
| https://stackoverflow.com/questions/71050697/ |
How can reshape 2d tensor to 1d | tensor([[0, 5],
[1, 4],
[2, 2],
[4, 2],
[7, 9],
[2, 0])
I want to reshape this tensor to
tensor([[5],
[14],
[22],
[42],
[79],
[20])
How can I solve this problem? Please help me.
| I don't know about tensor data type, but for normal list you can iterate over each inner list and convert them to digit
def toDigit(l):
z = 1
s = 0
for v in l[::-1]:
s += v * z
z *= 10
return s
a = [[0, 5],[1, 4],[2, 2],[4, 2],[7, 9],[2, 0], [0,0]]
print([toDigit(t) for t in a])
The output will be:
[5, 14, 22, 42, 79, 20, 0]
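A tensor-native variant of the same idea, assuming exactly two digits (tens, ones) per row; it also keeps the 2D column shape asked for:
import torch

t = torch.tensor([[0, 5], [1, 4], [2, 2], [4, 2], [7, 9], [2, 0]])
digits = (t * torch.tensor([10, 1])).sum(dim=1, keepdim=True)
print(digits)  # tensor([[ 5], [14], [22], [42], [79], [20]])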
| https://stackoverflow.com/questions/71051774/ |
How to extract layer shape and type from ONNX / PyTorch? | I would like to 'translate' a PyTorch model to another framework (non-tf/keras).
I'm trying to take a pytorch model, and automate the translation to the other framework, which contains similar types of layers (i.e. conv2d, dense,...).
Is there a way from pytorch directly, or through onnx to retrieve a models layers, their types, shapes and connections ? (Weights are not important so far)
| From discussion in comments on your question:
each node in onnx has a list of named inputs and a list of named outputs.
For the input list accessed with node.input you have for each input index either the graph input_name that feeds that input or the name of a previous output that feeds that input. There are also initializers which are onnx parameters.
# model is an onnx model
graph = model.graph
# graph inputs
for graph_input in graph.input:
    print(graph_input.name)
# graph parameters (the initializers hold the weights)
for init in graph.initializer:
    print(init.name)
# graph outputs
for graph_output in graph.output:
    print(graph_output.name)
# iterate over nodes
for node in graph.node:
    # node inputs
    for idx, node_input_name in enumerate(node.input):
        print(idx, node_input_name)
    # node outputs
    for idx, node_output_name in enumerate(node.output):
        print(idx, node_output_name)
Shape inference is talked about here and for python here
The gist for python is found here
Reproducing the gist from 3:
from onnx import shape_inference
inferred_model = shape_inference.infer_shapes(original_model)
and find the shape info in inferred_model.graph.value_info.
You can also use netron or from GitHub to have a visual representation of that information.
| https://stackoverflow.com/questions/71057613/ |
(Torch tenor) Subtracting different dimension matrices | matrice1 = temp.unsqueeze(0)
print(M.shape)
matrice2 = M.permute(1, 0, 2, 3)
print(matrice2.shape)
print( torch.abs(matrice1 - matrice2).shape )
#torch.Size([1, 10, 3, 256])
#torch.Size([10, 1, 3, 256])
#torch.Size([10, 10, 3, 256])
I got the outcome above. I am wondering why the subtraction between two tensors of different shapes makes the outcome a tensor with shape [10, 10, 3, 256].
| According to the broadcast semantics of PyTorch, the two tensors are "broadcastable" in your case, so they are automatically expanded to the common size of torch.Size([10, 10, 3, 256]).
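A small demonstration of what that expansion does (shapes copied from the question):
import torch

a = torch.rand(1, 10, 3, 256)
b = torch.rand(10, 1, 3, 256)
# each size-1 dimension is stretched to match the other tensor
print((a - b).shape)  # torch.Size([10, 10, 3, 256])
# equivalent to subtracting the explicitly expanded tensors:
print(torch.equal(a - b, a.expand(10, 10, 3, 256) - b.expand(10, 10, 3, 256)))  # True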
| https://stackoverflow.com/questions/71058806/ |
How can I use custom tokenizer in opennmt transformer | I'm tring to transformer for translation with opennmt-py.
And I already have the tokenizer trained by sentencepiece(unigram).
But I don't know how to use my custom tokenizer in training config yaml.
I'm refering the site of opennmt-docs (https://opennmt.net/OpenNMT-py/examples/Translation.html).
Here are my code and the error .
# original_ko_en.yaml
## Where is the vocab(s)
src_vocab: /workspace/tokenizer/t_50k.vocab
tgt_vocab: /workspace/tokenizer/t_50k.vocab
# Corpus opts:
data:
corpus_1:
path_src: /storage/genericdata_basemodel/train.ko
path_tgt: /storage/genericdata_basemodel/train.en
transforms: [sentencepiece]
weight: 1
valid:
path_src: /storage/genericdata_basemodel/valid.ko
path_tgt: /storage/genericdata_basemodel/valid.en
transforms: [sentencepiece]
#### Subword
src_subword_model: /workspace/tokenizer/t_50k.model
tgt_subword_model: /workspace/tokenizer/t_50k.model
src_subword_nbest: 1
src_subword_alpha: 0.0
tgt_subword_nbest: 1
tgt_subword_alpha: 0.0
# filter
# src_seq_length: 200
# tgt_seq_length: 200
# silently ignore empty lines in the data
skip_empty_level: silent
# Train on a single GPU
world_size: 1
gpu_ranks: [0]
# General opts
save_model: /storage/models/opennmt_v1/opennmt
keep_checkpoint: 100
save_checkpoint_steps: 10000
average_decay: 0.0005
seed: 1234
train_steps: 500000
valid_steps: 20000
warmup_steps: 8000
report_every: 1000
# Model
decoder_type: transformer
encoder_type: transformer
layers: 6
heads: 8
word_vec_size: 512
rnn_size: 512
transformer_ff: 2048
dropout: 0.1
label_smoothing: 0.1
# Optimization
optim: adam
adam_beta1: 0.9
adam_beta2: 0.998
decay_method: noam
learning_rate: 2.0
max_grad_norm: 0.0
normalization: tokens
param_init: 0.0
param_init_glorot: 'true'
position_encoding: 'true'
# Batching
batch_size: 4096
batch_type: tokens
accum_count: 8
max_generator_batches: 2
# Visualization
tensorboard: True
tensorboard_log_dir: /workspace/runs/onmt1
And when I run onmt_train -config xxx.yaml, I get an error.
So, I have two questions:
1. My sentencepiece tokenizer vocab values are floats. How can I resolve the int error?
2. When training stops by accident, or I want to train some model.pt further, what is the command to resume training from that model.pt?
I look forward to any opinions.
Thanks.
| I got the answers.
1. We can use tools/spm_to_vocab in OpenNMT-py to convert the SentencePiece vocab (which contains float log-probabilities) into the vocab format OpenNMT expects, which fixes the int error.
2. The train_from argument is the one to resume training from a saved model.pt.
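A sketch of both steps; the script location and the checkpoint name are assumptions, so adjust them to your OpenNMT-py checkout and your save_model path:
# 1. Convert the SentencePiece .vocab (token<TAB>log-prob floats) into
#    the vocab format OpenNMT-py expects:
python OpenNMT-py/tools/spm_to_vocab.py < t_50k.vocab > t_50k.onmt.vocab

# 2. Resume training from a saved checkpoint:
onmt_train -config original_ko_en.yaml -train_from /storage/models/opennmt_v1/opennmt_step_10000.pt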
| https://stackoverflow.com/questions/71062002/ |
What makes the difference in "grad attribute" in the following context? | Consider the following two contexts
with torch.no_grad():
params = params - learning_rate * params.grad
and
with torch.no_grad():
params -= learning_rate * params.grad
In the second case .backward() is running smoothly and in the first case it is giving the
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
What is the reason for this, as it is normal to use x -= a and x = x - a interchangeably?
| Note that x -= a and x = x - a cannot be used interchangeably: The latter creates a new tensor that is assigned to the variable x, while the former performes an in place operation.
Therefore with
with torch.no_grad():
params -= learning_rate * params.grad
everything works fine in your optimization loop, while in
with torch.no_grad():
params = params - learning_rate * params.grad
the variable params gets overwritten with a new tensor. Since this new tensor was created within a torch.no_grad() context, this means that this new tensor has params.requires_grad=False and therefore does not have a .grad attribute. Therefore in the next iteration torch will complain that params.grad does not exist.
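A small demonstration of the difference (the values are arbitrary):
import torch

params = torch.randn(3, requires_grad=True)
params.sum().backward()
with torch.no_grad():
    params -= 0.1 * params.grad          # in-place: params keeps requires_grad
print(params.requires_grad)              # True
with torch.no_grad():
    params = params - 0.1 * params.grad  # rebinds the name to a new tensor
print(params.requires_grad)              # False -> the next backward() fails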
| https://stackoverflow.com/questions/71064780/ |
When doing pre-training of a transformer model, how can I add words to the vocabulary? | Given a DistilBERT trained language model for a given language, taken from the Huggingface hub, I want to pre-train the model on a specific domain, and I want to add new words that are:
definitely non existing in the original training set
and impossible to handle via word piece tokenization - basically you can think of these words as "codes" that are a normalized form of a named entity
Consider that:
I would like to avoid to learn a new tokenizer: I am fine to add the new words, and then let the model learn their embeddings via pre-training
the number of the "words" is way larger that the "unused" tokens in the "stock" vocabulary
The only advice that I have found is the one reported here:
Append it to the end of the vocab, and write a script which generates a new checkpoint that is identical to the pre-trained checkpoint, but but with a bigger vocab where the new embeddings are randomly initialized (for initialized we used tf.truncated_normal_initializer(stddev=0.02)). This will likely require mucking around with some tf.concat() and tf.assign() calls.
Do you think this is the only way of achieve my goal?
If yes, I do not have any idea of how to write this "script": does someone have hints on how to proceed (sample code, documentation, etc.)?
| As per my comment, I'm assuming that you go with a pre-trained checkpoint, if only to "avoid [learning] a new tokenizer."
Also, the solution works with PyTorch, which might be more suitable for such changes. I haven't checked Tensorflow (which is mentioned in one of your quotes), so no guarantees that this works across platforms.
To solve your problem, let us divide this into two sub-problems:
Adding the new tokens to the tokenizer, and
Re-sizing the token embedding matrix of the model accordingly.
The first can actually be achieved quite simply by using .add_tokens(). I'm referencing the slow tokenizer's implementation of it (because it's in Python), but from what I can see, this also exists for the faster Rust-based tokenizers.
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
# Will return an integer corresponding to the number of added tokens
# The input could also be a list of strings instead of a single string
num_new_tokens = tokenizer.add_tokens("dennlinger")
You can quickly verify that this worked by looking at the encoded input ids:
print(tokenizer("This is dennlinger."))
# 'input_ids': [101, 2023, 2003, 30522, 1012, 102]
The index 30522 now corresponds to the new token with my username, so we can check the first part. However, if we look at the function docstring of .add_tokens(), it also says:
Note, when adding new tokens to the vocabulary, you should make sure to also resize the token embedding matrix of the model so that its embedding matrix matches the tokenizer.
In order to do that, please use the PreTrainedModel.resize_token_embeddings method.
Looking at this particular function, the description is a bit confusing, but we can get a correctly resized matrix (with randomly initialized weights for new tokens), by simply passing the previous model size, plus the number of new tokens:
from transformers import AutoModel
model = AutoModel.from_pretrained("distilbert-base-uncased")
model.resize_token_embeddings(model.config.vocab_size + num_new_tokens)
# Test that everything worked correctly
model(**tokenizer("This is dennlinger", return_tensors="pt"))
EDIT: Notably, .resize_token_embeddings() also takes care of any associated weights; this means, if you are pre-training, it will also adjust the size of the language modeling head (which should have the same number of tokens), or fix tied weights that would be affected by an increased number of tokens.
| https://stackoverflow.com/questions/71067376/ |
Python - Show one image from an image array | I'm going through this tutorial on pytorch. https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html
And I've been able to show real images next to the fake ones that I have generated.
# Grab a batch of real images from the dataloader
real_batch = next(iter(dataloader))
# Plot the real images
plt.figure(figsize=(15,15))
plt.subplot(1,2,1)
plt.axis("off")
plt.title("Real Images")
plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=5, normalize=True).cpu(),(1,2,0)))
# Plot the fake images from the last epoch
plt.subplot(1,2,2)
plt.axis("off")
plt.title("Fake Images")
plt.imshow(np.transpose(img_list[-1],(1,2,0)))
plt.show()
Which from my dataset results in this:
I was wondering how I can show one image from the fake images generated. I also want to show it as a 512 X 512 image if possible.
Edit:
The img_list[-1].shape is torch.Size([3, 530, 530]).
Edit2:
This part of the training shows that img_list is a list of images with each image being a group of sub-images (not being able to separate them). Is there a way I can edit this to make img_list an image of each fake image generated?
| Here is what I wanted:
noise = torch.randn(1, nz, 1, 1, device=device)
with torch.no_grad():
newfake = netG(noise).detach().cpu()
plt.axis("off")
plt.imshow(np.transpose(newfake[0],(1,2,0)))
plt.show()
This generates a new image with new noise; img_list was combining the generated images into one grid image.
However, this code still only generates 64 by 64 pixel images.
| https://stackoverflow.com/questions/71074150/ |
How do you know when your learning rate should be reduced? | The question is more of a general question on tips on how to know if it should be reduced. If you aren't getting good results, would a reduction in learning rate help?
Generally you have a start and an end, and step-down milestones. How do you know the learning rate should not just go lower and lower? "Just try it" is probably the answer.
There was a paper that equated learning rate with complexity: as a model trains, its learning rate needs to be reduced so it can build more complex data patterns.
Any insights would be great.
Thanks.
| Often it makes sense to reduce the learning rate when no more improvements can be achieved with the currently set learning rate. The whole process can also be automated. If you use Pytorch have a look here: https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
But it is important to find a good learning rate for the beginning. I often try it out for a few epochs. If you don't get good results, it may be due to the learning rate, but it doesn't have to be. It can be too small as well as too large. If the loss values fluctuate strongly, it is probably too large; if they do not go down, it is probably too small.
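A sketch of the automated variant with ReduceLROnPlateau (the model and the training/validation routines are placeholders):
import torch

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # model assumed defined
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.1, patience=10)

for epoch in range(100):
    train_one_epoch(model, optimizer)  # hypothetical training step
    val_loss = validate(model)         # hypothetical validation routine
    scheduler.step(val_loss)           # lowers the lr when val_loss plateaus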
| https://stackoverflow.com/questions/71083305/ |
Cannot install pyansys using pip install | I am on Pycharm and wish to install the pyansys package, but I keep getting this error:
Collecting pyansys
Using cached pyansys-0.61.3.tar.gz (11 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [21 lines of output]
*** PyAnsys has moved (and expanded!) ***
To use PyAnsys you need to install the applicable packages for your
product:
MAPDL:
- ``pip install ansys-mapdl-core``
MAPDL Post-Processing:
- ``pip install ansys-mapdl-reader``
- ``pip install ansys-dpf-core``
- ``pip install ansys-dpf-reader``
PyAEDT
- ``pip install pyaedt``
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
So I went ahead and installed all but one of those packages. I also installed cython. The only package that cannot be installed is ansys-dpf-reader and here is its error message
ERROR: Could not find a version that satisfies the requirement ansys-pdf-reader (from versions: none)
ERROR: No matching distribution found for ansys-pdf-reader
What is going on?
| So it seems like 'ansys-dpf-reader' was changed to 'ansys-dpf-post' and the 'pyansys' is not a pacakge anymore, but has now expanded into these 5 packages. Installing all of them will allow the normal use of the old pyansys.
https://docs.pyansys.com/
| https://stackoverflow.com/questions/71088494/ |
How do I retrieve the weights estimated from a neural net using skorch? | I've trained a simple neural net using skorch to make it sklearn compatible and I would like to know how to retrieve the actual estimated weights.
Here's a replicable example of what I need.
The neural net presented here uses 10 features, has one hidden layer of 2 nodes, uses ReLu activation functions and linearly combines the output of the 2 nodes.
import torch
import numpy as np
from torch.autograd import Variable
# Create example data
np.random.seed(2022)
train_size = 1000
n_features= 10
X_train = np.random.rand(n_features, train_size).astype("float32")
l2_params_1 = np.random.rand(1,n_features).astype("float32")
l2_params_2 = np.random.rand(1,n_features).astype("float32")
l1_X = np.matmul(l2_params_1, X_train)
l2_X = np.matmul(l2_params_2, X_train)
y_train = l1_X + l2_X
# Defining my NN
class NNModule(torch.nn.Module):
def __init__(self, in_features):
super(NNModule, self).__init__()
self.l1 = torch.nn.Linear(in_features, 2)
self.a1 = torch.nn.ReLU()
self.l2 = torch.nn.Linear(2, 1)
def forward(self, x):
x = self.l1(x)
x = self.a1(x)
return self.l2(x)
# Initialize the NN
torch.manual_seed(200)
model = NNModule(in_features = 10)
model.l1.weight.data.uniform_(0.0, 1.0)
model.l1.bias.data.uniform_(0.0, 1.0)
# Define criterion and optimizer
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
# Train the NN
torch.manual_seed(200)
for epoch in range(100):
inputs = Variable(torch.from_numpy(np.transpose(X_train)))
labels = Variable(torch.from_numpy(np.transpose(y_train)))
optimizer.zero_grad()
outputs = model(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
The parameters at which I'm arriving are the following:
list(model.parameters())
[Output]:
[Parameter containing:
tensor([[0.8997, 0.8345, 0.8284, 0.6950, 0.5949, 0.1217, 0.9067, 0.1824, 0.8272,
0.2372],
[0.7525, 0.6577, 0.4358, 0.6109, 0.8817, 0.5429, 0.5263, 0.7531, 0.1552,
0.7066]], requires_grad=True),
Parameter containing:
tensor([0.6617, 0.1079], requires_grad=True),
Parameter containing:
tensor([[0.9225, 0.8339]], requires_grad=True),
Parameter containing:
tensor([0.0786], requires_grad=True)]
Now, to wrap my NNModule with skorch, I'm using this:
from skorch import NeuralNetRegressor
torch.manual_seed(200)
net = NeuralNetRegressor(
module=NNModule(in_features=10),
criterion=torch.nn.MSELoss,
optimizer=torch.optim.SGD,
optimizer__lr=0.01,
max_epochs=100,
verbose=0
)
net.fit(np.transpose(X_train), np.transpose(y_train))
And I'd like to retrieve the weights obtained in the training. I've used dir(net) to see if the weights are stored in any attributes to no avail.
| To retrieve the weights one needs to output them like this:
list(net.module.parameters())
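After fitting, skorch also exposes the initialized module as net.module_, which can be used the same way, e.g. to pull the weights out as NumPy arrays:
weights = [p.detach().cpu().numpy() for p in net.module_.parameters()]
for w in weights:
    print(w.shape)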
| https://stackoverflow.com/questions/71088862/ |
Reshaping tensor 2d to 1d | tensor([[17, 0],
[93, 0],
[0, 0],
[21, 0],
[19, 0])
I want to remove 0 from this tensor, which is a two-dimensional array, and make it a one-dimensional array.
How can I make this tensor with the tensor below?
tensor([[17],
[93],
[0],
[21],
[19])
When using the code below, there is a problem that the existing zero disappears.
How should I fix this?
x = x.flatten()
x = x[x!=0]
x = np.reshape(x, ( -1, x.shape[0] ))
array([[17, 93, 21, 19]])
| You can use slicing to select column 0
import torch
t = torch.tensor(
[[17, 0],
[93, 0],
[0, 0],
[21, 0],
[19, 0]]
)
print(t[:,0])
Output
tensor([17, 93, 0, 21, 19])
And if you want to keep it a 2D array then you can use numpy.reshape
import torch
import numpy as np
t = torch.tensor(
[[17, 0],
[93, 0],
[0, 0],
[21, 0],
[19, 0]]
)
print(np.reshape(t[:,0], (-1, 1)))
Output
array([[17],
[93],
[ 0],
[21],
[19]], dtype=int32)
| https://stackoverflow.com/questions/71093026/ |