instruction | input | output | source
---|---|---|---|
Pytorch - How to differentiate wrt two parameters | I am interested in combined derivatives using PyTorch:
In the code below I have tried this, but it computes two separate partial derivatives (i.e., it computes first df/dx and then df/dy). Is it possible to modify the code so that we can compute the derivative with respect to both parameters at once?
import torch

def function(x, y):
    f = x**3 + y**3
    return f

a = torch.tensor([4., 5., 6.], requires_grad=True)
b = torch.tensor([1., 2., 6.], requires_grad=True)
derivative = torch.autograd.functional.jacobian(function, (a, b))
print(derivative)
Thanks in advance!
| You can use torch.autograd.functional.hessian to get the combined derivatives.
>>> import torch.autograd.functional as A
>>> f = lambda x, y: (x**3 + y**3).mean()
>>> H = A.hessian(f, (a, b))
Since you have two inputs, the result will be a tuple containing 2 tuples.
More precisely, you will have
H[0][0] the 2nd derivative w.r.t. x: d²z/dx_j dx_k
H[0][1] the combined derivative w.r.t. x and y: d²z/dx_j dy_k
H[1][0] the combined derivative w.r.t. y and x: d²z/dy_j dx_k
H[1][1] the 2nd derivative w.r.t. y: d²z/dy_j dy_k
>>> H
((tensor([[ 8.,  0.,  0.],
          [ 0., 10.,  0.],
          [ 0.,  0., 12.]]),
  tensor([[0., 0., 0.],
          [0., 0., 0.],
          [0., 0., 0.]])),
 (tensor([[0., 0., 0.],
          [0., 0., 0.],
          [0., 0., 0.]]),
  tensor([[ 2.,  0.,  0.],
          [ 0.,  4.,  0.],
          [ 0.,  0., 12.]])))
Indeed if you look at the combined derivative: d²(x³+y³)/dxdy = d(3x²)/dy = 0, hence H[0][1] and H[1][0] are zero matrices.
On the other hand we have d²x³/dx² = 6x; since f averages the values, this gives 6x/3 = 2x. Similarly, d²y³/dy² = 6y gives 2y.
As a result, you find that H[0][0] = diag(2a) and H[1][1] = diag(2b).
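As a quick sanity check (reusing a, b, and H from the snippets above; the assertions just restate the derivation):
# diagonal blocks are diag(2a) and diag(2b); cross blocks vanish
assert torch.allclose(H[0][0], torch.diag(2 * a))
assert torch.allclose(H[1][1], torch.diag(2 * b))
assert torch.all(H[0][1] == 0) and torch.all(H[1][0] == 0)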
| https://stackoverflow.com/questions/68949618/ |
Model takes twice the memory footprint with distributed data parallel | I have a model that trains just fine on a single GPU. But I'm getting CUDA memory errors when I switch to Pytorch distributed data parallel (DDP). Specifically, the DDP model takes up twice the memory footprint compared to the model with no parallelism. Here is a minimal reproducible example:
import os
from torch.nn.parallel import DistributedDataParallel as DDP
import torch.distributed as dist
import torch.multiprocessing as mp
import torch

def train(rank, gpu_list, train_distributed):
    device_id = gpu_list[rank]
    model = torch.nn.Linear(1000, 1000)
    print(device_id, torch.cuda.memory_allocated(device_id))
    model.to(device_id)
    print(device_id, torch.cuda.memory_allocated(device_id))
    print(device_id, torch.cuda.memory_allocated(device_id))
    if train_distributed:
        # convert model to DDP
        dist.init_process_group("gloo", rank=rank, world_size=len(gpu_list))
        model = DDP(model, device_ids=[device_id], find_unused_parameters=False)
        print(device_id, torch.cuda.memory_allocated(device_id))

def train_distributed():
    gpu_list = [torch.device(i) for i in [5, 6]]
    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = '7676'
    mp.spawn(train, args=(gpu_list, True), nprocs=len(gpu_list), join=True)

if __name__ == '__main__':
    # First test one GPU
    train(0, [torch.device(5)], False)
    # Then test multiple GPUs
    train_distributed()
Output - note that the GPU usage doubles on both devices when switching to DDP:
cuda:5 0
cuda:5 4004352
cuda:5 4004352
cuda:5 4004352
cuda:5 0
cuda:6 0
cuda:5 4004352
cuda:5 4004352
cuda:6 4004352
cuda:6 4004352
cuda:5 8008704
cuda:6 8008704
Why does the model take up twice the space in DDP? Is it intended behavior? Is there a way to avoid this extra memory usage?
| I'm adding here the solution of @ptrblck written in the PyTorch discussion forum.
Here are two quotes.
The statement:
[...] the allocated memory get doubled when torch.distributed.Reducer is instantiated in the constructor of DistributedDataParallel
And the answer:
[...] the Reducer will create gradient buckets for each parameter, so that the memory usage after wrapping the model into DDP will be 2x model_parameter_size. Note that the parameter size of a model is often much smaller than the activation size so that this memory increase might or might not be significant
So, from here we can see the reason why the memory footprint sometimes doubles.
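As a possible mitigation (an addition of mine; availability depends on your PyTorch version), DistributedDataParallel accepts a gradient_as_bucket_view flag that makes the .grad tensors views into the Reducer's gradient buckets rather than separate copies, which removes most of this duplicated gradient memory:
# sketch: reuse the DDP call from the question with the extra flag
model = DDP(model, device_ids=[device_id], find_unused_parameters=False,
            gradient_as_bucket_view=True)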
| https://stackoverflow.com/questions/68949954/ |
Transformers Longformer IndexError: index out of range in self | From Transformers library I use LongformerModel, LongformerTokenizerFast, LongformerConfig (all of them use from_pretrained("allenai/longformer-base-4096")).
When I do
longformer(input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)
I get such an error:
~/tfproject/tfenv/lib/python3.7/site-packages/transformers/modeling_longformer.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
177 if inputs_embeds is None:
178 inputs_embeds = self.word_embeddings(input_ids)
--> 179 position_embeddings = self.position_embeddings(position_ids)
180 token_type_embeddings = self.token_type_embeddings(token_type_ids)
181
~/tfproject/tfenv/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
~/tfproject/tfenv/lib/python3.7/site-packages/torch/nn/modules/sparse.py in forward(self, input)
112 return F.embedding(
113 input, self.weight, self.padding_idx, self.max_norm,
--> 114 self.norm_type, self.scale_grad_by_freq, self.sparse)
115
116 def extra_repr(self):
~/tfproject/tfenv/lib/python3.7/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1722 # remove once script supports set_grad_enabled
1723 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1724 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1725
1726
IndexError: index out of range in self
Online I found that this might mean that my input to the model has more tokens than the model's max input size.
But I have checked, and all inputs have exactly 4098 tokens (the model's maximum input length); padding has been applied. The tokenizer has the same vocab size as the model.
I have no idea what's wrong.
| I have managed to fix this by reindexing my position_ids.
When PyTorch created that tensor, for some reason some values in position_ids were out of range for the position embedding table (which has 4098 entries).
I used:
position_ids = torch.stack([torch.arange(config.max_position_embeddings) for a in range(val_dataloader.batch_size)]).to(device)
to create position_ids for the entire batch.
Bear in mind that it might not be the best solution. The problem might need some more debugging. But for a quick fix, it works.
| https://stackoverflow.com/questions/68951828/ |
does it affect NN training accuracy if the color format of images is BGR not RGB? | I'm training a Neural Network with the ImageNet dataset and I noticed that images are in BGR color format when I read them using the OpenCV function cv2.imread(). So does it affect training accuracy? If yes, then how can I change it to RGB in PyTorch?
| It will not affect your NN's accuracy, in general. However, if you are using a pre-trained CNN, then it likely expects RGB images as input, and will not do as well on BGR images initially and will have to re-learn its weights for BGR.
You can convert BGR to RGB using cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB).
You can also consider the following alternatives for reading images:
torchvision.io.read_image(path) (https://pytorch.org/vision/stable/io.html#image)
torch.from_numpy(np.array(PIL.Image.open(path)))
torchvision.transforms.functional.pil_to_tensor(PIL.Image.open(path)) (https://pytorch.org/vision/stable/transforms.html#torchvision.transforms.functional.pil_to_tensor)
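Tying this together, a minimal sketch of a Dataset that performs the BGR-to-RGB conversion while loading with OpenCV (the class name and structure are my own illustration):
import cv2
import torch
from torch.utils.data import Dataset

class RGBImageDataset(Dataset):
    def __init__(self, paths):
        self.paths = paths
    def __len__(self):
        return len(self.paths)
    def __getitem__(self, idx):
        img_bgr = cv2.imread(self.paths[idx])  # OpenCV loads images as BGR
        img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)
        # HWC uint8 -> CHW float tensor in [0, 1]
        return torch.from_numpy(img_rgb).permute(2, 0, 1).float() / 255.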
| https://stackoverflow.com/questions/68951911/ |
Python: How to extract connected components (bounding boxes) from 3D numpy / torch array? | I have binary segmentation masks for 3D arrays in NumPy/Torch. I would like to convert these to bounding boxes (a.k.a. connected components). As a disclaimer, each array can contain multiple connected components/bounding boxes, meaning I can't just take the min and max non-zero index values.
For concreteness, suppose I have a 3D array (I'll use 2D because 2D is easier to visualize) of binary values. I would like to know what the connected components are. For instance, I would like to take this segmentation mask:
>>> segmentation_mask
array([[1, 0, 0, 0, 0],
       [0, 1, 0, 0, 0],
       [1, 1, 1, 0, 0],
       [1, 1, 0, 1, 0],
       [1, 1, 0, 0, 1]], dtype=int32)
and convert it to the connected components, where the connected components have arbitrary labels, i.e.
>>> connected_components
array([[1, 0, 0, 0, 0],
       [0, 2, 0, 0, 0],
       [2, 2, 2, 0, 0],
       [2, 2, 0, 3, 0],
       [2, 2, 0, 0, 4]], dtype=int32)
How do I do this with 3D arrays? I'm open to using Numpy, Scipy, Torchvision, opencv, any library.
| This should work for any number of dimensions:
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
segmentation_mask = np.array([[1, 0, 0, 0, 0],
                              [0, 1, 0, 0, 0],
                              [1, 1, 1, 0, 0],
                              [1, 1, 0, 1, 0],
                              [1, 1, 0, 0, 1]], dtype=np.int32)

row = []
col = []
segmentation_mask_reader = segmentation_mask.reshape(-1)
n_nodes = len(segmentation_mask_reader)

for node in range(n_nodes):
    idxs = np.unravel_index(node, segmentation_mask.shape)
    if segmentation_mask[idxs] == 0:
        col.append(n_nodes)
    else:
        for i in range(len(idxs)):
            if idxs[i] > 0:
                new_idxs = list(idxs)
                new_idxs[i] -= 1
                new_node = np.ravel_multi_index(new_idxs, segmentation_mask.shape)
                if segmentation_mask_reader[new_node] != 0:
                    col.append(new_node)
    while len(col) > len(row):
        row.append(node)

row = np.array(row, dtype=np.int32)
col = np.array(col, dtype=np.int32)
data = np.ones(len(row), dtype=np.int32)
graph = csr_matrix((data, (row, col)), shape=(n_nodes+1, n_nodes+1))

n_components, labels = connected_components(csgraph=graph)
background_label = labels[-1]

solution = np.zeros(segmentation_mask.shape, dtype=segmentation_mask.dtype)
solution_writer = solution.reshape(-1)
for node in range(n_nodes):
    label = labels[node]
    if label < background_label:
        solution_writer[node] = label + 1
    elif label > background_label:
        solution_writer[node] = label

print(solution)
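As an aside (not part of the original answer): scipy.ndimage.label computes the same kind of connected-component labeling directly, for arrays of any dimensionality, so for this example it is a much shorter alternative:
from scipy import ndimage

labeled, n_components = ndimage.label(segmentation_mask)
print(labeled)  # same grouping as above, up to the numbering of labels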
| https://stackoverflow.com/questions/68960509/ |
Get the index of subwords produced by BertTokenizer (in transformers library) | BertTokenizer can tokenize a sentence to a list of tokens, where some long words, e.g. "embeddings", are split into several subwords, i.e. 'em', '##bed', '##ding', and '##s'.
Is there a way to locate the subwords? For example,
t = BertTokenizer.from_pretrained('bert-base-uncased')
tokens = t('word embeddings', add_special_tokens=False)
location = locate_subwords(tokens)
I want the location be like [0, 1, 1, 1, 1] corresponding to ['word', 'em', '##bed', '##ding', '##s'], where 0 means normal word, 1 means subword.
| The fast tokenizers return a BatchEncoding object that has a built-in word_ids method:
from transformers import BertTokenizerFast
t = BertTokenizerFast.from_pretrained('bert-base-uncased')
tokens = t('word embeddings are vectors', add_special_tokens=False, return_attention_mask=False, return_token_type_ids=False)
print(tokens.word_ids())
Output:
[0, 1, 1, 1, 1, 2, 3]
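If you specifically want the 0/1 mask from the question, i.e. a 1 for every token belonging to a word that was split into sub-tokens, one possible post-processing step (my own sketch) is:
from collections import Counter

word_ids = tokens.word_ids()
counts = Counter(word_ids)  # how many tokens each word produced
location = [int(counts[w] > 1) for w in word_ids]
print(location)  # [0, 1, 1, 1, 1, 0, 0]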
| https://stackoverflow.com/questions/68961546/ |
PyTorch model takes too long to load the first time on a new machine | I have a manual scaling set-up on EC2 where I'm creating instances based on an AMI which already runs my code at boot (using systemd). I'm facing a fundamental problem: on the main instance (the one I use to create the AMI), the Python code takes 8 seconds to be ready after the image is booted (this includes importing libraries, loading state dicts of models, etc.). Now, on the instances I create with the AMI, the code takes 5+ minutes to boot up the first time; it takes especially long to load the state dicts from disk to GPU memory. After the first time, the code takes about the same as the main instance to load.
The AMI keeps the same pycache folders as the main instance, so it shouldn't take that much time, since the AMI should include everything, shouldn't it? So, my question is: is there any other caching to make CUDA / Python faster that I'm not taking into consideration? I'm only keeping the pycache/ folders, but I don't know if there's anything I could do to make sure it doesn't take that much time to boot everything the first time. This is my main structure:
# Import libraries
import torch
import numpy as np
# Import personal models (takes 1 minute)
from model1 import model1
from model2 import model2
# Load first model
model1_object = model1()
model2_object = model2()
# Load state dicts (takes 3+ minutes, the first time in new instances, seconds other times)
# Note: the models are a bit heavy
model1_object.load_state_dict(torch.load("model1.pth"))
model2_object.load_state_dict(torch.load("model2.pth"))
Note: I'm using g4dn.xlarge instances, for both the main instance and for newer ones in AWS.
| This was caused by the high latencies involved in restoring AWS EBS snapshots. When you first restore a snapshot, the latency is extremely high, which explains why the model takes so long to load in my example when the instance is freshly created.
Check the initialization section of this article: https://cloudonaut.io/ebs-snapshot-pitfalls/
The only solution that I've found to make an instance fast right after it is created is to enable Fast Snapshot Restore, which costs around $500 a month: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-fast-snapshot-restore.html
If you have time to spare, you can wait until the maximum performance is achieved, or try to warm the volume up beforehand https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-initialize.html
| https://stackoverflow.com/questions/68965072/ |
Does this LSTM loop code break the computational graph in PyTorch? | The code below is from https://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html
for i in inputs:
# Step through the sequence one element at a time.
# after each step, hidden contains the hidden state.
out, hidden = lstm(i.view(1, 1, -1), hidden)
For me it seems like handling the LSTM in this way breaks the computational graph as hidden keeps on getting overridden. Should all the hidden states not be stored in an array so the computational graph can be maintained, so backprop can flow through the hidden states?
| In this case, no: you are providing the hidden state from one time step to the next at every loop iteration. This means the gradient flow is kept and backpropagation will occur through the hidden states as well.
To give a clear answer to your question: yes, the hidden variable is being overwritten. However, the activations corresponding to those hidden states have been cached in memory for backpropagation.
If you take the example from the tutorial page, they are looping through the sequence of elements one by one:
torch.manual_seed(1)
lstm = nn.LSTM(3, 3) # H_in = 3, H_hidden = H_out = 3
inputs = torch.randn(5, 1, 3) # sequence length = 5
Our data sample is shaped as (sequence_length=5, batch_size=1, feature_size=3).
The hidden states h_0 and c_0 are initialized once:
hidden = (torch.randn(1, 1, 3),
          torch.randn(1, 1, 3))
Then we loop over the sequence and doing inference on the LSTM block with one element of the sequence at a time:
for element in inputs:
out, hidden = lstm(element[None], hidden)
The reshape with view in the tutorial page is superfluous and won't work in general cases where batch_size is not equal to 1... Doing element[None] will just add an additional dimension to the tensor which is what we want.
So the input element passed is essentially a single-element sequence shaped i.e. (1, 1, 3). Do note this is a stateful call since we are indeed providing the hidden states from the previous layer.
Here the final output and hidden states are:
>>> out
tensor([[[-0.3600, 0.0893, 0.0215]]], grad_fn=<StackBackward>)  # <- h_5
>>> hidden
(tensor([[[-0.3600, 0.0893, 0.0215]]], grad_fn=<StackBackward>),  # <- h_5
 tensor([[[-1.1298, 0.4467, 0.0254]]], grad_fn=<StackBackward>))  # <- c_5
This is actually the default behavior performed by nn.LSTM, i.e. calling it with the whole sequence: hidden states will be passed from one sequence element to another.
torch.manual_seed(1)
hidden = (torch.randn(1, 1, 3),
          torch.randn(1, 1, 3))
out, hidden = lstm(inputs, hidden)
Then:
>>> out
tensor([[[-0.2682, 0.0304, -0.1526]],  # <- h_1
        [[-0.5370, 0.0346, -0.1958]],  # <- h_2
        [[-0.3947, 0.0391, -0.1217]],  # <- h_3
        [[-0.1854, 0.0740, -0.0979]],  # <- h_4
        [[-0.3600, 0.0893, 0.0215]]], grad_fn=<StackBackward>)  # <- h_5
>>> hidden
(tensor([[[-0.3600, 0.0893, 0.0215]]], grad_fn=<StackBackward>),  # <- h_5
 tensor([[[-1.1298, 0.4467, 0.0254]]], grad_fn=<StackBackward>))  # <- c_5
You can see, here out contains the consecutive hidden states, while it only contained the last hidden state in the previous example.
| https://stackoverflow.com/questions/68966167/ |
Batchsize in DataLoader | I have two tensors:
x[train], y[train]
And the shape is
(311, 3, 224, 224), (311) # 311 Has No Information
I want to use DataLoader to load them batch by batch, the code I write is:
from torch.utils.data import Dataset

class KD_Train(Dataset):
    def __init__(self, a, b):
        self.imgs = a
        self.index = b

    def __len__(self):
        return len(self.imgs)

    def __getitem__(self, index):
        return self.imgs, self.index

kdt = KD_Train(x[train], y[train])
train_data_loader = Data.DataLoader(
    kdt,
    batch_size=64,
    shuffle=True,
    num_workers=0)
for step, (a, b) in enumerate(train_data_loader):
    print(a.shape)
    break
But it shows:
(64, 311, 3, 224, 224)
the DataLoader just adds a dimension instead of selecting batches. Does anyone know what I should do?
| Your dataset's __getitem__ method should return a single element, not the whole tensors. As written, it returns all 311 images for every index, so the DataLoader stacks 64 of those into shape (64, 311, 3, 224, 224):
def __getitem__(self, index):
    return self.imgs[index], self.index[index]
| https://stackoverflow.com/questions/68966864/ |
How to discern which image generated a particular feature map, while training CNNs? | Let's say I feed 3 grayscale images to a CNN, having a combined shape of 3,28,28. This process will generate multiple feature maps for each image. How do I identify which feature map corresponds to a particular image.
Here is some code -
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(256, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        print("Shape of x = ", x.shape)
        x = self.pool(F.relu(self.conv2(x)))
        print("Shape of x = ", x.shape)
        x = torch.flatten(x, 1)  # flatten all dimensions except batch
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()
foo = torch.randn(3, 1, 28, 28)
foo_cnn = net(foo)
For instance, the first convolution generated 6 feature maps from 3 images. Is there a way for me to identify which feature map belonged to which image, so that I can perform some operation on it.
| To distinguish which image generated which convolved feature maps, one must split the different input images into the batches dimension (#images=#batches), such that when applying any convolutional layers, they're applied on each image separately, not a weighted sum of the different input images as would be the case if they were split into the channel/depth dimension.
Right now you're not feeding 3 images into the model (in pytorch's eyes); that would require the input to be of the shape: (3, 1, 28, 28) for grayscale images and (3, 3, 28, 28) for RGB images. What you're doing instead is (in a sense) concatenating the 3 images into the depth dimension resulting in the shape: (1, 3, 28, 28), thus the 6 output feature maps cannot be attributed to a specific image (a weighted combination of the 3, since they're in depth dimension).
Therefore, reshaping the input to (3, 1, 28, 28) and changing conv1 to (1, 6, 5) will result in the following output: (3, 6, 12, 12) and hence, the 1st 6 feature maps in the 1st batch (of the output) correspond to the first image in the batch (of the input), and the 2nd 6 feature maps correspond to the 2nd image in the batch and so on.
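A minimal sketch of that fix applied to the code from the question (only conv1 changes; with 28x28 inputs the flattened size is still 16*4*4 = 256, so fc1 keeps working):
net = Net()
net.conv1 = nn.Conv2d(1, 6, 5)   # 1 input channel per grayscale image
foo = torch.randn(3, 1, 28, 28)  # 3 images as a batch of size 3
foo_cnn = net(foo)               # feature maps in batch slot i now belong to image i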
| https://stackoverflow.com/questions/68968144/ |
Pytorch lightning: 'CIFAR10DataModule' object has no attribute 'train_loader' | Would you tell me why I failed to use CIFAR10DataModule()?
At first, I ran the code on Google Colab:
from pl_bolts.datamodules import CIFAR10DataModule
dm = CIFAR10DataModule()
then, the code was performed for the confirmation
from torch.optim import Adam

optimizer = Adam(finetune_layer.parameters(), lr=1e-4)

for epoch in range(10):
    for batch in dm.train_loader:
        x, y = batch
        with torch.no_grad():
            features = backbone(x)
        preds = finetune_layer(features)
        loss = cross_entropy(preds, y)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        print(loss.item())
However, the message AttributeError: 'CIFAR10DataModule' object has no attribute 'train_loader' was returned after running the code.
When the code was run to confirm the dm,
for batch in dm.train_dataloader:
    x, y = batch
    print(x.shape, y.shape)
    break
The error says TypeError: 'method' object is not iterable.
The code looks the same as the example, so I wonder why such an error was generated?
| Two problems with your code:
First, the way you get the underlying PyTorch dataloader is dm.train_dataloader() not dm.train_loader. It is a function, not a property.
for batch in dm.train_dataloader():
    x, y = batch
    ...
Secondly, since you are trying to use a LightningDataModule without a Trainer, you need to manually invoke
dm.prepare_data()
dm.setup()
...in order for the dataloader to be available via .train_dataloader().
| https://stackoverflow.com/questions/68969811/ |
Batched index_fill in PyTorch | I have an index tensor of size (2, 3):
>>> index = torch.empty(6).random_(0, 8).view(2, 3)
tensor([[6., 3., 2.],
        [3., 4., 7.]])
And a value tensor of size (2, 8):
>>> value = torch.zeros(2, 8)
tensor([[0., 0., 0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0., 0., 0.]])
I want to set the elements of value to 1 at the positions given by index, along dim=-1. The output should be like:
>>> output
tensor([[0., 0., 1., 1., 0., 0., 1., 0.],
        [0., 0., 0., 1., 1., 0., 0., 1.]])
I tried value[range(2), index] = 1 but it triggers an error. I also tried torch.index_fill but it doesn't accept batched indices. torch.scatter requires creating an extra tensor of size 2*8 full of 1, which consumes unnecessary memory and time.
| You can actually use torch.Tensor.scatter_ by setting the value (int) option instead of the src option (Tensor).
>>> value.scatter_(dim=-1, index=index.long(), value=1)
>>> value
tensor([[0., 0., 1., 1., 0., 0., 1., 0.],
        [0., 0., 0., 1., 1., 0., 0., 1.]])
Make sure the index is of type int64 though.
| https://stackoverflow.com/questions/68970780/ |
Single weight definition and freezing in Conv2d | I am dealing with a Conv2d(3,3,kernel_size=5, stride=1) and I'd need to set some specific weights to zero and render them non updatable.
For example, if I do something like
model = nn.Conv2d(3,3,kernel_size=5, stride=1)
model.weight.requires_grad = False
everything works but it affects the whole layer. I'd want to do something like this instead:
model = nn.Conv2d(3,3,kernel_size=5, stride=1)
model.weight[0,2].requires_grad = False # this line does not work
model.weight[0,2] = 0 # this line does not work either
it just does not seem to support assignment and requires_grad manipulation for layer parameter subgroups. Has anyone already tackled this issue?
| You can zero out this filter channel by either accessing the data attribute
>>> model.weight.data[0, 2] = 0
or using the torch.no_grad context manager:
>>> with torch.no_grad():
... model.weight[0, 2] = 0
As you noticed, you can't have requires_grad set specifically for parts of a parameter. As such, all elements of a given parameter share the same flag; they either get updated or they don't.
One alternative way would be to kill the gradient of that channel manually just after the backward pass has been called, just before the optimizer step:
>>> model(torch.rand(2, 3, 100, 100)).mean().backward()
>>> model.weight.grad[0, 2] = 0
>>> optim.step()
This way your 3rd channel on filter n°1 won't get updated by the backward pass and remain at 0.
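If you'd rather not repeat that after every backward pass, a gradient hook on the parameter can do it automatically (a sketch of mine, one of several possible ways):
def zero_channel_grad(grad):
    grad = grad.clone()  # avoid modifying the incoming gradient in place
    grad[0, 2] = 0
    return grad

model.weight.register_hook(zero_channel_grad)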
| https://stackoverflow.com/questions/68972092/ |
How to fix incorrect channel size in pytorch neural network? | I'm working with the Google utterance dataset in spectrogram form. Each data point has dimension (160, 101). In my data loader, I used batch_size=128. Therefore, each batch has dimension (128, 160, 101).
I use a LeNet model with the following code as the model:
class LeNet(nn.Module):
    def __init__(self):
        super(LeNet, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16*5*5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 30)

    def forward(self, x):
        out = F.relu(self.conv1(x))
        out = F.max_pool2d(out, 2)
        out = F.relu(self.conv2(out))
        out = F.max_pool2d(out, 2)
        out = out.view(out.size(0), -1)
        out = F.relu(self.fc1(out))
        out = F.relu(self.fc2(out))
        out = self.fc3(out)
        return out
I tried unsqueezing the data with dim=3, but got this error:
Traceback (most recent call last):
File "train_speech.py", line 359, in <module>
train_loss, reg_loss, train_acc, cost = train(epoch)
File "train_speech.py", line 258, in train
outputs = (net(inputs))['out']
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/parallel/data_parallel.py", line 166, in forward
return self.module(*inputs[0], **kwargs[0])
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/content/gdrive/My Drive/Colab Notebooks/mixup_erm-master/models/lenet.py", line 15, in forward
out = F.relu(self.conv1(x))
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py", line 443, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py", line 440, in _conv_forward
self.padding, self.dilation, self.groups)
RuntimeError: Given groups=1, weight of size [6, 1, 5, 5], expected input[128, 160, 101, 1] to have 1 channels, but got 160 channels instead
How do I fix this issue?
EDIT: New Error Message Below
torch.Size([128, 160, 101])
torch.Size([128, 1, 160, 101])
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
Traceback (most recent call last):
File "train_speech.py", line 363, in <module>
train_loss, reg_loss, train_acc, cost = train(epoch)
File "train_speech.py", line 262, in train
outputs = (net(inputs))['out']
IndexError: too many indices for tensor of dimension 2
I'm unsqueezing the data in each batch. The relevant section of my training code is below. inputs is analogous to x.
print(inputs.shape)
inputs = inputs.unsqueeze(1)
print(inputs.shape)
outputs = (net(inputs))['out']
Edit 2: New Error
Traceback (most recent call last):
File "train_speech.py", line 361, in <module>
train_loss, reg_loss, train_acc, cost = train(epoch)
File "train_speech.py", line 270, in train
loss.backward()
File "/usr/local/lib/python3.7/dist-packages/torch/_tensor.py", line 255, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py", line 149, in backward
allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag
RuntimeError: Function AddmmBackward returned an invalid gradient at index 1 - got [128, 400] but expected shape compatible with [128, 13024]
Edit 3: Train Loop Below
def train(epoch):
    print('\nEpoch: %d' % epoch)
    net.train()
    train_loss = 0
    reg_loss = 0
    correct = 0
    total = 0
    for batch_idx, (inputs, targets) in enumerate(trainloader):
        if use_cuda:
            inputs, targets = inputs.cuda(), targets.cuda()
        inputs, targets_a, targets_b, lam, layer, cost = mixup_data(inputs, targets,
                                                                    args.alpha, args.mixupBatch, use_cuda)
        inputs, targets_a, targets_b = map(Variable, (inputs, targets_a, targets_b))
        outputs = net(inputs)
        loss = mixup_criterion(criterion, outputs, targets_a, targets_b, lam)
        train_loss += loss.item()
        _, predicted = torch.max(outputs.data, 1)
        total += targets.size(0)
        correct += (lam * predicted.eq(targets_a.data).cpu().sum().float()
                    + (1 - lam) * predicted.eq(targets_b.data).cpu().sum().float())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return (train_loss/batch_idx, reg_loss/batch_idx, 100.*correct/total, cost/batch_idx)
| You should expand on axis=1 a.k.a. the channel axis:
>>> x = x.unsqueeze(1)
If you're inside the dataset __getitem__, then it corresponds to axis=0.
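As a side note on the error from Edit 2: with (1, 160, 101) inputs, the spatial sizes go 160 -> 156 -> 78 -> 74 -> 37 and 101 -> 97 -> 48 -> 44 -> 22 through the two conv/pool stages, so the flattened feature size is 16*37*22 = 13024 rather than 16*5*5 = 400, which is exactly what the AddmmBackward message reports. Assuming the spectrogram size stays fixed, the first linear layer would need to be:
self.fc1 = nn.Linear(16 * 37 * 22, 120)  # 13024 input features for (160, 101) inputs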
| https://stackoverflow.com/questions/68976343/ |
The Number of Classes in Pytorch Pretrained Model | I want to use the pre-trained models in Pytorch to do image classification in my own datasets, but how should I change the number of classes while freezing the parameters of the feature extraction layer?
These are the models I want to include:
resnet18 = models.resnet18(pretrained=True)
densenet161 = models.densenet161(pretrained=True)
inception_v3 = models.inception_v3(pretrained=True)
shufflenet_v2_x1_0 = models.shufflenet_v2_x1_0(pretrained=True)
mobilenet_v3_large = models.mobilenet_v3_large(pretrained=True)
mobilenet_v3_small = models.mobilenet_v3_small(pretrained=True)
mnasnet1_0 = models.mnasnet1_0(pretrained=True)
resnext50_32x4d = models.resnext50_32x4d(pretrained=True)
vgg16 = models.vgg16(pretrained=True)
Thanks a lot in advance!
New codes I added:
import torch
import torch.nn as nn
from torchvision import models

class MyResModel(torch.nn.Module):
    def __init__(self):
        super(MyResModel, self).__init__()
        self.classifier = nn.Sequential(
            nn.Linear(512, 256),
            nn.ReLU(),
            nn.Dropout(p=0.5),
            nn.Linear(256, 3),
        )

    def forward(self, x):
        return self.classifier(x)

resnet18 = models.resnet18(pretrained=True)
resnet18.fc = MyResModel()

for param in resnet18.parameters():
    param.requires_grad_(False)
| You have to change the final Linear layer of the respective model.
For example in the case of resnet, when we print the model, we see that the last layer is a fully connected layer as shown below:
(fc): Linear(in_features=512, out_features=1000, bias=True)
Thus, you must reinitialize model.fc to be a Linear layer with 512 input features and num_classes output features:
model.fc = nn.Linear(512, num_classes)
For other models you can check here
To freeze the parameters of the network you have to use the following code:
for name, param in model.named_parameters():
    if 'fc' not in name:
        print(name, param.requires_grad)
        param.requires_grad = False
To validate:
for name, param in model.named_parameters():
    print(name, param.requires_grad)
Note that for this example 'fc' was the name of the classification layer. This is not the case for other models. You have to inspect the model in order to find the name of the classification layer.
| https://stackoverflow.com/questions/68980724/ |
Memory usage of torch.einsum | I have been trying to debug a certain model that uses torch.einsum operator in a layer which is repeated a couple of times.
While trying to analyze the GPU memory usage of the model during training, I have noticed that a certain Einsum operation dramatically increases the memory usage. I am dealing with multi-dimensional matrices. The operation is torch.einsum('b q f n, b f n d -> b q f d', A, B).
It is also worth mentioning that:
x was assigned before to a tensor of the same shape.
In every layer (they are all identical), the GPU memory increases linearly after this operation, and is not deallocated until the end of the model iteration.
I have been wondering why this operation uses so much memory, and why the memory stays allocated after every iteration over that layer type.
| Variable "x" is indeed overwritten, but the tensor data is kept in memory (also called the layer's activation) for later usage in the backward pass.
So in turn you are effectively allocating new memory data for the result of torch.einsum, but you won't be replacing x's memory even if it has been seemingly overwritten.
To put this to the test, you can compute the forward pass under the torch.no_grad() context manager (where those activations won't be kept in memory) and compare the memory usage with that of a standard inference.
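A rough way to measure the difference on a CUDA device (a sketch; model and x stand in for your network and input):
torch.cuda.reset_peak_memory_stats()
with torch.no_grad():
    model(x)
print(torch.cuda.max_memory_allocated())  # activations are not kept

torch.cuda.reset_peak_memory_stats()
model(x)
print(torch.cuda.max_memory_allocated())  # activations cached for the backward pass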
| https://stackoverflow.com/questions/68983642/ |
Pytorch sum jacobian over inputs instead of outputs | Suppose I have a tensor Y that is (directly or indirectly) computed from a tensor X.
Normally when I apply torch.autograd.grad(Y, X, grad_outputs=torch.ones_like(Y)), I get a gradient mask that is of the same shape as X. This mask is actually a weighted sum of the gradients of the elements of Y w.r.t. X.
Is it possible to get a gradient mask of the same shape as Y instead, of which each element mask[i][j] is the sum of the gradients of Y[i][j] w.r.t. X?
This is equivalent to summing the Jacobian J(Y,X) over the dimensions of X instead of over the dimensions of Y.
>>> X = torch.eye(2)
>>> X.requires_grad_()
# X = [1 0]
#     [0 1]
>>> Y = torch.sum(X*X, dim=0)
# Y = [1, 1]
>>> torch.autograd.grad(Y, X, grad_outputs=torch.ones_like(Y), retain_graph=True)
(tensor([[2., 0.],
         [0., 2.]]),)
But instead, I want:
# [2, 2]
because torch.sum(torch.autograd.grad(Y[0], X)[0]) equals 2 and torch.sum(torch.autograd.grad(Y[1], X)[0]) equals 2 as well.
It would be easy to calculate the Jacobian of Y w.r.t X and just sum over the dimensions of X. However, this is unfeasible memory-wise, as the functions I work with are neural networks with huge inputs and outputs.
Calculating each gradient separately (as I did in the comments) is also very undesirable because this is too slow.
| If you run pytorch nightly, https://github.com/pytorch/pytorch/issues/10223 is partially implemented and should do what you want for most simple graphs. You could also try using the trick described at https://j-towns.github.io/2017/06/12/A-new-trick.html .
EDIT: It looks like https://pytorch.org/docs/stable/generated/torch.autograd.functional.jvp.html#torch.autograd.functional.jvp implements the backwards of backwards trick for you. So you could just do:
from torch.autograd.functional import jvp

X = torch.eye(2)
X.requires_grad_()

def build_Y(x):
    return torch.sum(x*x, dim=0)

print(jvp(build_Y, X, torch.ones(X.shape))[1])
| https://stackoverflow.com/questions/68987046/ |
Calculate accuracy function in ShuffleNet | I'm using some code from ShuffleNet, but I have a problem understanding the calculation of correct in this function (it computes top-1 and top-5 precision).
As I understand, in the third line pred contains the indices, but I can't understand why, two lines later, it is compared with the target using the equality function, since pred holds the indices of the highest-probability outputs.
def accuracy(output, target, topk=(1,)):
    maxk = max(topk)
    batch_size = target.size(0)
    _, pred = output.topk(maxk, 1, True, True)
    pred = pred.t()
    correct = pred.eq(target.view(1, -1).expand_as(pred))
    res = []
    for k in topk:
        correct_k = correct[:k].contiguous().view(-1).float().sum(0)
        res.append(correct_k.mul_(100.0 / batch_size))
    return res
| Looking at the code, I can speculate that output is shaped (batch_size, n_logits) while the target is a dense representation, shaped (batch_size, 1). This means the ground truth class is designated by an integer value: the corresponding class label.
If we look into this implementation of the top-k accuracy, we first need to understand this: top-k accuracy is about counting how many ground truth labels are among the k highest predictions of our output. It's essentially a generalized form of the standard top-1 accuracy where we would only look at the single highest prediction and find out if it matches the target.
If we take a simple example with batch_size=2, n_logits=10, and k=3 i.e. we're interested in the top-3 accuracy. Here we sample a random prediction:
>>> output
tensor([[0.2110, 0.9992, 0.0597, 0.9557, 0.8316, 0.8407, 0.8398, 0.3631, 0.2889, 0.3226],
        [0.6811, 0.2932, 0.2117, 0.6522, 0.2734, 0.8841, 0.0336, 0.7357, 0.9232, 0.2633]])
We first look at the k highest logits and retrieve their indices:
>>> _, pred = output.topk(k=3, dim=1, largest=True, sorted=True)
>>> pred
tensor([[3, 6, 4],
        [7, 3, 5]])
This is nothing more than a sliced torch.argsort: output.argsort(1, descending=True)[:, :3] will return the same result.
We can then transpose to get batches last (3, 2):
>>> pred = pred.T
tensor([[3, 7],
        [6, 3],
        [4, 5]])
Now that we have the top-k predictions for each batch element, we need to compare those with the ground truths. Let us now imagine a target tensor (remember, it is shaped (batch_size=2, 1)):
>>> target
tensor([[1],
        [5]])
We first need to expand it to the shape of pred:
>>> target.view(1, -1).expand_as(pred)
tensor([[1, 5],
        [1, 5],
        [1, 5]])
We then compare the two with torch.eq, the element-wise equality operator:
>>> correct = torch.eq(pred, target.view(1, -1).expand_as(pred))
tensor([[False, False],
        [False, False],
        [False,  True]])
As you can tell, on the 2nd batch element one of the three highest predictions matches the ground-truth class label 5. On the first batch element, none of the three highest predictions matches the ground-truth label, so it is not correct. The second batch element counts as one 'correct'.
Of course, based on this equality mask tensor correct, you can slice it even more, to compute other top-k' accuracies where k' <= k. For instance k' = 1:
>>> correct[:1]
tensor([[False, False]])
Here for the top-1 accuracy, we have zero correct instances out of the two batch elements.
| https://stackoverflow.com/questions/68988480/ |
YoloV5 Custom retraining | I trained my custom data set in the yoloV5s model and I got 80% accuracy on my inference. Now I need to increase the accuracy by adding more images and labels.
My question here is: I already trained on 10,000+ labels to reach 80%, and it took 7 hours. Do I need to include the old 10,000+ data with my new data, which is only 1,000 items, to train and improve my accuracy?
Is there any way that I can include only the new data to retrain the model, even if I add a new class?
How can I save my time and space?
| The question you are asking is on the topic of continual learning, which is an active area of research nowadays. Since you need to add more classes to your model, you need to add the new class to the previous data and retrain the model from the start. If you don't do that, i.e., you only train on the new class, your model will forget completely about the previous data (the learned features); this forgetting is known as Catastrophic Forgetting.
Many people have suggested various ways to avoid this Catastrophic Forgetting; I personally feel that Progressive Neural Networks are highly immune to forgetting. Apart from that, you can find other methods here
As I told you, this is currently a highly active area of research; there is no foolproof solution. For now, the best way is to add the new data to the previous data and retrain your model.
| https://stackoverflow.com/questions/68993575/ |
CNN LSTM from Keras to PyTorch | I am trying to convert a notebook for a CNN LSTM model from Keras to PyTorch.
I am struggling with the dimensions/shapes in the model definition.
def build_model():
    # Inputs to the model
    input_img = layers.Input(shape=(200, 50, 1), name="image", dtype="float32")
    labels = layers.Input(name="label", shape=(None,), dtype="float32")

    # First conv block
    x = layers.Conv2D(32, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same", name="Conv1")(input_img)
    x = layers.MaxPooling2D((2, 2), name="pool1")(x)

    # Second conv block
    x = layers.Conv2D(64, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same", name="Conv2")(x)
    x = layers.MaxPooling2D((2, 2), name="pool2")(x)

    # We have used two max pool with pool size and strides 2.
    # Hence, downsampled feature maps are 4x smaller. The number of
    # filters in the last layer is 64. Reshape accordingly before
    # passing the output to the RNN part of the model
    x = layers.Reshape(target_shape=(50, 768), name="reshape")(x)
    x = layers.Dense(64, activation="relu", name="dense1")(x)
    x = layers.Dropout(0.2)(x)

    # RNNs
    x = layers.Bidirectional(layers.LSTM(128, return_sequences=True, dropout=0.25))(x)
    x = layers.Bidirectional(layers.LSTM(64, return_sequences=True, dropout=0.25))(x)

    # Output layer
    x = layers.Dense(20, activation="softmax", name="dense2")(x)  # 20 = 19 characters + UKN

    # Add CTC layer for calculating CTC loss at each step
    output = CTCLayer(name="ctc_loss")(labels, x)

    # Define the model
    model = keras.models.Model(inputs=[input_img, labels], outputs=output, name="ocr_cnn_lstm_model")

    # Compile the model and return
    model.compile(optimizer=keras.optimizers.Adam())
    return model
Currently I only have the first 2 convolutional layers, which are already not working:
# X_train Shape: (832, 1, 50, 200)
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # Defining a 2D convolution layer
        self.conv = nn.Conv2d(1, 32, kernel_size=3, padding='same')
        # Defining another 2D convolution layer
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, stride=1, padding='same')
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.relu = nn.ReLU(inplace=True)
        self.out = nn.Linear(64 * 7 * 7, 10)

    # Defining the forward pass
    def forward(self, x):
        x = self.conv(x)
        x = self.relu(x)
        x = self.pool(x)
        print(x.shape)
        x = x.view(x.size(0), -1)
        X = self.out(x)
        return x
It would be appreciated if someone could help me out with the input shapes (especially in nn.Linear but I doubt the rest corresponds to the initial notebook either).
When I try to run the model I get:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
/tmp/ipykernel_9064/4102025856.py in <module>
----> 1 out = model(torch.Tensor(X_train))
~/env/neural/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
/tmp/ipykernel_9064/3113669837.py in forward(self, x)
25 x = x.view(x.size(0),-1)
26
---> 27 X = self.out(x)
28 return x
~/env/neural/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
~/env/neural/lib/python3.7/site-packages/torch/nn/modules/linear.py in forward(self, input)
94
95 def forward(self, input: Tensor) -> Tensor:
---> 96 return F.linear(input, self.weight, self.bias)
97
98 def extra_repr(self) -> str:
~/env/neural/lib/python3.7/site-packages/torch/nn/functional.py in linear(input, weight, bias)
1845 if has_torch_function_variadic(input, weight):
1846 return handle_torch_function(linear, (input, weight), input, weight, bias=bias)
-> 1847 return torch._C._nn.linear(input, weight, bias)
1848
1849
RuntimeError: mat1 and mat2 shapes cannot be multiplied (832x80000 and 3136x10)
Thanks in advance.
| This works. You didn't use the right input size for the linear layer.
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # Defining a 2D convolution layer
        self.conv = nn.Conv2d(1, 32, kernel_size=3, padding=2)
        # Defining another 2D convolution layer
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=2)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.relu = nn.ReLU(inplace=True)
        self.out = nn.Linear(32 * 26 * 101, 10)

    # Defining the forward pass
    def forward(self, x):
        x = self.conv(x)
        x = self.relu(x)
        x = self.pool(x)
        print(x.shape)
        # torch.Size([832, 32, 26, 101])
        x = x.view(x.size(0), -1)
        x = self.out(x)  # assign back to x so the linear output is returned
        return x

if __name__ == "__main__":
    x = torch.randn(832, 1, 50, 200)
    net = Net()
    out = net(x)
| https://stackoverflow.com/questions/68997559/ |
Expected 3-dimensional input for 3-dimensional weight [200, 1, 4], but got 2-dimensional input of size [64, 1500] instead | I'm trying to create Conv1d. My data consists of byte streams with a length of 1500. My batch size is 64. I know that Conv1d expects the input to be [batch, channels, sequence_length]. Here is my neural net:
class ConvNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Sequential(
            nn.Conv1d(
                in_channels=1,
                out_channels=200,
                kernel_size=4,
                stride=3,
                padding=0),
            nn.ReLU()
        )

    def forward(self, x):
        output = self.conv1(x)
        return output
I got the error:
Expected 3-dimensional input for 3-dimensional weight [200, 1, 4], but got 2-dimensional input of size [64, 1500] instead
I don't know how to change the input to be compatible with what my convnet expects. Or should I change the model itself?
| You're feeding your model with data somewhere. You may have something like this:
model = ConvNet()
# get data from somewhere
input_x = ...  # your data
# feed the model with the data
output = model(input_x)
You can change the last part to:
# feed the model with the data
output = model(input_x.unsqueeze(1))
Or you can change your model's forward to:
def forward(self, x):
    output = self.conv1(x.unsqueeze(1))
    return output
but I don't recommend the second approach.
Both approaches will change the shape to [64, 1, 1500].
| https://stackoverflow.com/questions/68999758/ |
Multiplication of tensors with different dimensions | Given
a = torch.randn(40, 6)
b = torch.randn(40)
I want to multiply each row of a with the corresponding scalar from b, i.e.
c0 = a[0]*b[0]
c1 = a[1]*b[1]
...
This works just fine. But is there a more elegant way of doing this?
Thanks
| You want c.shape = (40, 6)? Then, simply:
c = a * b.unsqueeze(1)
Example with (2, 3) to make it readable:
import torch
torch.manual_seed(2021)
a = torch.randn(2, 3)
# > tensor([[ 2.2871, 0.6413, -0.8615],
# > [-0.3649, -0.6931, 0.9023]])
b = torch.randn(2)
# > tensor([-2.7183, -1.4478])
c = a * b.unsqueeze(1)
# > tensor([[-6.2169, -1.7434, 2.3418],
# > [ 0.5284, 1.0035, -1.3064]])
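Equivalently, the same operation can be spelled with indexing-based broadcasting or einsum:
c = a * b[:, None]                  # same as b.unsqueeze(1)
c = torch.einsum('ij,i->ij', a, b)  # einsum version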
| https://stackoverflow.com/questions/69001368/ |
Special indexing of two 1D numpy / torch arrays to produce another array | Looking for an efficient (vectorized) way in Numpy or PyTorch to achieve the array z when given input arrays x and y:
1D array x contains a list of increasing ID's, each of which repeats 1 or more times (not necessarily repeating for the same number of times for each ID). For example, [0 0 0 1 1 2 2 2 2]
1D array y of 0's and 1's. There is at least one element equal to "1" for each unique ID in x. For example, [1 1 0 1 1 0 0 1 0].
1D output array z which is equal to y, but keeping only the first occurrence of "1" in y per ID in x. The remaining elements of y for that ID should be set to "0". So in the example, the result would be [1 0 0 1 0 0 0 1 0]
x: [0 0 0 1 1 2 2 2 2]
y: [1 1 0 1 1 0 0 1 0]
z: [1 0 0 1 0 0 0 1 0]
I feel like there's a quick way to do this in Numpy or PyTorch, but I couldn't figure it out.
Edit: Here's the "slow" version using a while-loop
x = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2])
y = np.array([1, 1, 0, 1, 1, 0, 0, 1, 0])
z = y.copy()
n = z.shape[0]
i = 0
while i < n:
    if y[i] == 1:
        current_id = x[i]
        i += 1
        while i < n and x[i] == current_id:
            z[i] = 0
            i += 1
    else:
        i += 1
| You could use np.unique:
unq, ind = np.unique(np.stack((x, y)), axis=1, return_index=True)
ind now contains the first occurrence of each unique combination of elements. You just need to remove the ones where y is zero:
keep = unq[1, :] != 0
ind = ind[keep]
Now you can make z directly:
z = np.zeros_like(y)
z[ind] = 1
| https://stackoverflow.com/questions/69004064/ |
Changing a custom resnet 18 architecture subtly and still use it in pre-trained mode | Can I change a custom resnet18 architecture and still use it in pretrained=True mode? I am making a subtle change to the architecture of a custom resnet18, and when I run it, I get the following error:
This is how the custom resnet18 is called:
model = Resnet_18.resnet18(pretrained=True, embedding_size=args.dim_embed)
The new change in the custom resnet18:
self.layer_attend1 = nn.Sequential(nn.Conv2d(layers[0], layers[0], stride=2, padding=1, kernel_size=3),
                                   nn.AdaptiveAvgPool2d(1),
                                   nn.Softmax(1))
I am loading the checkpoint using:
checkpoint = torch.load(args.resume, encoding='latin1')
args.start_epoch = checkpoint['epoch']
best_acc = checkpoint['best_prec1']
tnet.load_state_dict(checkpoint['state_dict'])
The output of running the model is:
/scratch3/venv/fashcomp/lib/python3.8/site-packages/torchvision/transforms/transforms.py:310: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
warnings.warn("The use of the transforms.Scale transform is deprecated, " +
=> loading checkpoint 'runs/nondisjoint_l2norm/model_best.pth.tar'
Traceback (most recent call last):
File "main.py", line 352, in <module>
main()
File "main.py", line 145, in main
tnet.load_state_dict(checkpoint['state_dict'])
File "/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1406, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Tripletnet:
Missing key(s) in state_dict: "embeddingnet.embeddingnet.layer_attend1.0.weight", "embeddingnet.embeddingnet.layer_attend1.0.bias".
So, how can you implement small architectural changes without retraining from scratch every time?
P.S.: Cross-posting here: https://discuss.pytorch.org/t/can-i-change-a-custom-resnet-18-architecture-subtly-and-still-use-it-in-pre-trained-true-mode/130783 Thanks a lot to Rodrigo Berriel for teaching me about https://meta.stackexchange.com/a/141824/913043
| If you really want to do this, you should construct the model and then call load_state_dict with the argument strict=False (https://pytorch.org/docs/stable/generated/torch.nn.Module.html?highlight=load_state_dict#torch.nn.Module.load_state_dict).
Keep in mind that A) you should initialize any new layers you added explicitly, because they won't be initialized by the state dict, and B) the model will probably not work out of the box because of the uninitialized weights, but it should train faster than a randomly initialized model.
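A short sketch of what that could look like for the model in the question (the layer path is taken from the error message above; the initialization scheme is just one reasonable choice):
missing, unexpected = tnet.load_state_dict(checkpoint['state_dict'], strict=False)
print(missing)  # should list only the new layer_attend1 parameters

# explicitly initialize the new attention layer
for m in tnet.embeddingnet.embeddingnet.layer_attend1.modules():
    if isinstance(m, torch.nn.Conv2d):
        torch.nn.init.kaiming_normal_(m.weight)
        torch.nn.init.zeros_(m.bias)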
| https://stackoverflow.com/questions/69004513/ |
Adding a simple attention layer to a custom resnet 18 architecture causes error in forward pass | I am adding the following code to the custom resnet18 code:
self.layer1 = self._make_layer(block, 64, layers[0]) ## code existed before
self.layer2 = self._make_layer(block, 128, layers[1], stride=2) ## code existed before
self.layer_attend1 = nn.Sequential(nn.Conv2d(layers[0], layers[0], stride=2, padding=1, kernel_size=3),
                                   nn.AdaptiveAvgPool2d(1),
                                   nn.Softmax(1)) ## code added by me
and also the following in its forward pass (def forward(self, x)) in the same resnet18 custom code:
x = self.layer1(x) ## the code existed before
x = self.layer_attend1(x)*x ## I added this code
x = self.layer2(x) ## the code existed before
and I get the following error. I had no error before adding this attention layer. Any idea how I could fix it?
=> loading checkpoint 'runs/nondisjoint_l2norm/model_best.pth.tar'
=> loaded checkpoint 'runs/nondisjoint_l2norm/model_best.pth.tar' (epoch 5)
/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
Traceback (most recent call last):
File "main.py", line 352, in <module>
main()
File "main.py", line 153, in main
test_acc = test(test_loader, tnet)
File "main.py", line 248, in test
embeddings.append(tnet.embeddingnet(images).data)
File "/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/scratch3/research/code/fashion/fashion-compatibility/type_specific_network.py", line 101, in forward
embedded_x = self.embeddingnet(x)
File "/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/scratch3/research/code/fashion/fashion-compatibility/Resnet_18.py", line 110, in forward
x = self.layer_attend1(x)*x #so we don;t use name x1
File "/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/nn/modules/container.py", line 139, in forward
input = module(input)
File "/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 443, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 439, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [2, 2, 3, 3], expected input[256, 64, 28, 28] to have 2 channels, but got 64 channels instead
In VSCode, even though I added a breakpoint before the problematic layer, execution never even reached it.
| The problem comes from a confusion: layers[0] is not the number of output channels, as you probably expected, but the number of blocks that layer will have. What you actually need is 64, the number of output channels of the layer1 that precedes your custom code:
self.layer_attend1 = nn.Sequential(nn.Conv2d(64, 64, stride=2, padding=1, kernel_size=3),
                                   nn.AdaptiveAvgPool2d(1),
                                   nn.Softmax(1)) ## code added by me
| https://stackoverflow.com/questions/69006025/ |
Something wrong when I use the DataLoader | The code that preprocesses the dataset:
class data_test(Dataset):
    def __init__(self, data_root, transform=None):
        data_image = glob.glob(data_root + '/*.jpg')
        self.data_image = data_image
        self.transform = transform

    def __getitem__(self, index):
        data_image_path = self.data_image[index]
        image_data = cv2.imread(data_image_path, -1)  # unchanged
        if self.transform:
            image_data = self.transform(image_data)
        return image_data
The above operation is ordinary, but when I load the dataset,
dataset = data_test(train_dataset, transforms)
data = DataLoader(dataset, batch_size=8, num_workers=0)
for idx, data in enumerate(data):
    print(data.shape)
a NotImplementedError is raised.
| The error message is actually pretty specific: the error that was raised is NotImplementedError. You are supposed to implement the __len__ function in your custom dataset.
In your case that would be as simple as (assuming self.data_image contains all your dataset instances) adding this function to the data_test class:
def __len__(self):
return len(self.data_image)
| https://stackoverflow.com/questions/69006831/ |
Disable grad and backward Globally? | How to disable GLOBALLY grad,backward and any other non forward() functionality in Torch ?
I see examples of how to do it locally but not globally ?
The Docs say that what may be I'm looking is Inference only mode ! but how to set it globally.
| You can use torch.set_grad_enabled(False) to disable gradient propagation globally for the entire thread. Once you have called torch.set_grad_enabled(False), calling anything like backward() will raise an exception.
import numpy as np
import torch
a = torch.tensor(np.random.rand(64,5),dtype=torch.float32)
l = torch.nn.Linear(5,10)
o = torch.sum(l(a))
print(o.requires_grad) #True
o.backward()
print(l.weight.grad) #showed gradients
torch.set_grad_enabled(False)
o = torch.sum(l(a))
print(o.requires_grad) #False
o.backward()# RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
print(l.weight.grad)
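If you specifically want the "inference-only mode" you saw in the docs, here is a minimal sketch assuming PyTorch 1.9+ (it reuses l and a from above; inference_mode is even stricter than disabling grad):
import torch
with torch.inference_mode():  # everything inside skips autograd bookkeeping entirely
    o = torch.sum(l(a))
    print(o.requires_grad)    # False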
| https://stackoverflow.com/questions/69007342/ |
How to concat a tensor in pytorch? | What I want to do is something like this:
import torch
a = torch.arange(120).reshape(2, 3, 4, 5)
b = torch.cat(list(a), dim=2)
I want to know:
I have to convert tensor to a list, will this cause performance not good?
Even performance is OK, can I do this just with tensor?
| You want to:
Reduce the number of copies: in this specific scenario, copies need to be made since we are rearranging the layout of our underlying data.
Reduce or remove any torch.Tensor -> non-torch.Tensor conversions: this will be a pain point when working with a GPU since you're transferring data in and out of the device.
You can perform the same operation by permuting the axes such that axis=0 goes to axis=-2 (the axis before the last), then flattening the last two axes:
>>> a.permute(1,2,0,3).flatten(-2)
tensor([[[ 0, 1, 2, 3, 4, 60, 61, 62, 63, 64],
[ 5, 6, 7, 8, 9, 65, 66, 67, 68, 69],
[ 10, 11, 12, 13, 14, 70, 71, 72, 73, 74],
[ 15, 16, 17, 18, 19, 75, 76, 77, 78, 79]],
[[ 20, 21, 22, 23, 24, 80, 81, 82, 83, 84],
[ 25, 26, 27, 28, 29, 85, 86, 87, 88, 89],
[ 30, 31, 32, 33, 34, 90, 91, 92, 93, 94],
[ 35, 36, 37, 38, 39, 95, 96, 97, 98, 99]],
[[ 40, 41, 42, 43, 44, 100, 101, 102, 103, 104],
[ 45, 46, 47, 48, 49, 105, 106, 107, 108, 109],
[ 50, 51, 52, 53, 54, 110, 111, 112, 113, 114],
[ 55, 56, 57, 58, 59, 115, 116, 117, 118, 119]]])
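As a quick check that this matches the cat-over-list result from the question:
import torch
a = torch.arange(120).reshape(2, 3, 4, 5)
b = torch.cat(list(a), dim=2)           # original: tensor -> Python list -> tensor
c = a.permute(1, 2, 0, 3).flatten(-2)   # pure tensor ops, no list round-trip
print(torch.equal(b, c))                # True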
| https://stackoverflow.com/questions/69008405/ |
Meaning of **kwargs attribute put in classes of Machine Learning Models | I am wondering about the meaning of the attribute **kwargs that I've found typically added in the constructor of some machine learning models classes. For example considering a neural network in PyTorch:
class Model(nn.Module):
def __init__(self, input_dim, hidden_dim, output_dim, **kwargs)
is **kwargs associated with extra parameters that are defined later?
| This is not specific to machine learning models classes, it is rather a Python feature.
You are indeed right, it corresponds to additional keyword arguments. It will essentially collect the remaining passed named arguments that are not defined in the function header and add them to the dictionary variable kwargs. This variable can actually be renamed to any name; it is customary to keep 'args' for iterable unnamed arguments (*args) and 'kwargs' for keyword arguments (**kwargs).
This adds the flexibility to allow for additional arguments to be defined and passed to the function without having to specifically state their names in the header. One common use case is when extending a class. Here we are implementing a dummy 3x3 2D convolution layer named Conv3x3, which will extends the base nn.Conv2d module:
class Conv3x3(nn.Conv2d):
def __init__(self, **kwargs):
super().__init__(kernel_size=3, **kwargs)
As you can see, we didn't need to name all arguments and we still keep the same interface as nn.Conv2d in our Conv3x3 class initializer:
>>> Conv3x3(in_channels=3, out_channels=16)
Conv3x3(3, 16, kernel_size=(3, 3), stride=(1, 1))
There are a lot of nice things you can do with these two constructs; the Python documentation on function definitions covers them in detail.
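For instance, a small sketch based on the Model signature from your question; the forwarding to nn.LSTM here is just an illustration:
import torch.nn as nn
class Model(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim, **kwargs):
        super().__init__()
        # extra options such as num_layers=2 or dropout=0.2 pass straight through
        self.rnn = nn.LSTM(input_dim, hidden_dim, **kwargs)
        self.head = nn.Linear(hidden_dim, output_dim)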
| https://stackoverflow.com/questions/69010775/ |
How to integrate pytorch lightning profiler with tensorboard? | I know we can use torch profiler with tensorboard using something like this:
with torch.profiler.profile(
schedule=torch.profiler.schedule(wait=1, warmup=1, active=3, repeat=2),
on_trace_ready=torch.profiler.tensorboard_trace_handler('./log/resnet18'),
record_shapes=True,
with_stack=True
) as prof:
for step, batch_data in enumerate(train_loader):
if step >= (1 + 1 + 3) * 2:
break
train(batch_data)
prof.step() # Need to call this at the end of each step to notify profiler of steps' boundary.
It works perfectly with pytorch, but the problem is I have to use pytorch lightning and if I put this in my training step, it just doesn't create the log file nor does it create an entry for profiler. All I get is lightning_logs which isn't the profiler output. I couldn't find anything in the docs about lightning_profiler and tensorboard so does anyone have any idea?
Here's what my training function looks like:
def training_step(self, train_batch, batch_idx):
with torch.profiler.profile(
activities=[ProfilerActivity.CPU],
schedule=torch.profiler.schedule(
wait=1,
warmup=1,
active=2,
repeat=1),
with_stack=True,
on_trace_ready=torch.profiler.tensorboard_trace_handler('./logs'),
) as profiler:
x, y = train_batch
x = x.float()
logits = self.forward(x)
loss = self.loss_fn(logits, y)
profiler.step()
return loss
| You don't have to use raw torch.profiler at all. There is a whole page in the Lightning docs dedicated to profiling, and it's as easy as passing a trainer flag called profiler, like:
# other profilers are "simple", "advanced" etc
trainer = pl.Trainer(profiler="pytorch")
Also, set TensorBoardLogger as your preferred logger as you normally do
trainer = pl.Trainer(profiler="pytorch", logger=TensorBoardLogger(..))
| https://stackoverflow.com/questions/69014259/ |
TypeError: 'Vocab' object is not callable | I'm following the tutorial on torchtext transformers which is published on 1.9 pytorch. However, because I'm working on a Tegra TX2, I am stuck to using torchtext 0.6.0, and not 0.10.0 (which is what I assume the tutorial uses).
Following the tutorial, the following throws an error:
data = [torch.tensor(vocab(tokenizer(item)), dtype=torch.long) for item in raw_text_iter]
return torch.cat(tuple(filter(lambda t: t.numel() > 0, data)))
The error is:
TypeError: 'Vocab' object is not callable
I understand what the error means, what I don't know, is that is the expected return from Vocab in this case?
Looking at the documentation for TorchText 0.6.0 I see that it has:
stoi
itos
freqs
vectors
Is the example expecting the vectors from Vocab?
EDIT:
I looked up the 0.10.0 documentation and it doesn't have a __call__.
| Looking at the source for the implementation of Vocab in 0.10.0, apparently it is a subclass of torch.nn.Module, which means it inherits __call__ from there (calling it is roughly equivalent to calling its forward() method, but with some additional machinery for implementing hooks and such).
We can also see that it wraps some underling VocabPyBind object (equivalent to the Vocab class in older versions), and its forward() method just calls its lookup_indices method.
So in short, it seems the equivalent in older versions of the library would be to call vocab.lookup_indices(tokenizer(item)).
Update: Apparently in 0.6.0 the Vocab class does not even have a lookup_indices method, but reading the source, that is just equivalent to:
[vocab[token] for token in tokens]
If you're ever able to upgrade, for the sake of forward-compatibility you could write a wrapper like:
from torchtext.vocab import Vocab as _Vocab
class Vocab(_Vocab):
def lookup_indices(self, tokens):
return [self[token] for token in tokens]
def __call__(self, tokens):
return self.lookup_indices(tokens)
| https://stackoverflow.com/questions/69015430/ |
File "mtrand.pyx", line 905, in numpy.random.mtrand.RandomState.choice TypeError: 'dict_keys' object cannot be interpreted as an integer | I am switching from a much older version of PyTorch from 3 years ago to stable PyTorch 1.9 in CentOS 7 (GPU-based) and with no change in the original paper code, I get the following error. Is there a quick fix to this?
(fashcomp) [jalal@goku fashion-compatibility]$ python main.py --name test_baseline --learned --l2_embed --datadir ../../../data/fashion/
/scratch3/venv/fashcomp/lib/python3.8/site-packages/torchvision/transforms/transforms.py:310: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
warnings.warn("The use of the transforms.Scale transform is deprecated, " +
+ Number of params: 3191808
Traceback (most recent call last):
File "main.py", line 322, in <module>
main()
File "main.py", line 167, in main
train(train_loader, tnet, criterion, optimizer, epoch)
File "main.py", line 194, in train
for batch_idx, (img1, desc1, has_text1, img2, desc2, has_text2, img3, desc3, has_text3, condition) in enumerate(train_loader):
File "/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
data = self._next_data()
File "/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
data.reraise()
File "/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/_utils.py", line 425, in reraise
raise self.exc_type(msg)
ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "mtrand.pyx", line 905, in numpy.random.mtrand.RandomState.choice
TypeError: 'dict_keys' object cannot be interpreted as an integer
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/scratch3/research/code/fashion/fashion-compatibility/polyvore_outfits.py", line 338, in __getitem__
neg_im = self.sample_negative(outfit_id, pos_im, item_type)
File "/scratch3/research/code/fashion/fashion-compatibility/polyvore_outfits.py", line 235, in sample_negative
choice = np.random.choice(candidate_sets)
File "mtrand.pyx", line 907, in numpy.random.mtrand.RandomState.choice
ValueError: a must be 1-dimensional or an integer
and
$ pip freeze
absl-py==0.13.0
argon2-cffi==20.1.0
attrs==21.2.0
backcall==0.2.0
bleach==4.1.0
cachetools==4.2.2
certifi==2021.5.30
cffi==1.14.6
charset-normalizer==2.0.4
cycler==0.10.0
debugpy==1.4.1
decorator==5.0.9
defusedxml==0.7.1
entrypoints==0.3
google-auth==1.35.0
google-auth-oauthlib==0.4.5
grpcio==1.39.0
h5py==3.3.0
idna==3.2
importlib==1.0.4
ipykernel==6.2.0
ipython==7.26.0
ipython-genutils==0.2.0
ipywidgets==7.6.3
jedi==0.18.0
Jinja2==3.0.1
joblib==1.0.1
jsonschema==3.2.0
jupyter==1.0.0
jupyter-client==7.0.1
jupyter-console==6.4.0
jupyter-core==4.7.1
jupyterlab-pygments==0.1.2
jupyterlab-widgets==1.0.0
kiwisolver==1.3.1
Markdown==3.3.4
MarkupSafe==2.0.1
matplotlib==3.4.3
matplotlib-inline==0.1.2
mistune==0.8.4
nbclient==0.5.4
nbconvert==6.1.0
nbformat==5.1.3
nest-asyncio==1.5.1
notebook==6.4.3
numpy==1.21.2
oauthlib==3.1.1
packaging==21.0
pandas==1.3.2
pandocfilters==1.4.3
parso==0.8.2
pexpect==4.8.0
pickleshare==0.7.5
Pillow==8.3.1
prometheus-client==0.11.0
prompt-toolkit==3.0.20
protobuf==3.17.3
ptyprocess==0.7.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.20
Pygments==2.10.0
pyparsing==2.4.7
pyrsistent==0.18.0
python-dateutil==2.8.2
pytz==2021.1
pyzmq==22.2.1
qtconsole==5.1.1
QtPy==1.10.0
requests==2.26.0
requests-oauthlib==1.3.0
rsa==4.7.2
scikit-learn==0.24.2
scipy==1.7.1
Send2Trash==1.8.0
six==1.16.0
sklearn==0.0
tensorboard==2.6.0
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.0
terminado==0.11.1
testpath==0.5.0
threadpoolctl==2.2.0
torch==1.9.0
torch-tb-profiler==0.2.1
torchaudio==0.9.0
torchvision==0.10.0
tornado==6.1
traitlets==5.0.5
typing-extensions==3.10.0.0
urllib3==1.26.6
wcwidth==0.2.5
webencodings==0.5.1
Werkzeug==2.0.1
widgetsnbextension==3.5.1
Link to issue on the repo: https://github.com/mvasil/fashion-compatibility/issues/25
| You should convert your dict_keys to a list as explained in the comments above:
np.random.choice(list(candidate_sets))
This is likely due to a behavior change in newer NumPy versions.
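A minimal reproduction of the underlying issue:
import numpy as np
d = {"a": 1, "b": 2, "c": 3}
# np.random.choice(d.keys())             # raises: a must be 1-dimensional or an integer
print(np.random.choice(list(d.keys())))  # works: samples one key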
| https://stackoverflow.com/questions/69018998/ |
Training mask not used in Pytorch-Geometric when inputting data to train model (Docs) | I'm working through the Pytorch-Geometric docs (here).
In the below code, we see data being passed to the model without train_mask. However, when passing the output and the label to the loss function, train_mask is applied to both. Shouldn't we also be applying the train_mask to data when inputting it into the model? As I see it, it shouldn't be a problem. However, it looks like we are then wasting computation on outputs that are not used to train the model.
model.train()
for epoch in range(200):
optimizer.zero_grad()
out = model(data)
loss = F.nll_loss(out[data.train_mask], data.y[data.train_mask])
loss.backward()
optimizer.step()
| I think the main reason that in the Pytorch Geometric examples simply the output of all nodes are computed is a different one to the "no slicing of data issue" raised in the other answer. You need the hidden representation (derived by graph convolutions) of more nodes than the train_mask contains. Hence, you cannot simply only give the features (respectively the data) for those nodes. But some optimisation is possible, which I will discuss at the end.
I'll assume your setting is node classification (as in the example code and link in your question).
Example
Let's use a small toy example, which contains five nodes and the following edges:
A<->B
B<->C
C<->D
D<->E
and let's assume you use a 2-layer GNN with only node A as a training node. To calculate the GNN's output for A, you need the first hidden representation of B, which uses the input features of C. Hence, you need the 2-hop neighbourhood of A to calculate its output.
Possible Optimisation
If you have multiple training nodes (as you usually have) and a k-layered GNN, it usually operates on the k-hop neighbourhood (not always; dilated GNNs are one exception). Then, you can calculate the joined set of nodes by combining the k-hop neighbourhood of each training node. Since this is model dependent and requires some code, I guess that is why it was not included in an "introduction by example". You will probably only see an effect on larger graphs anyway, with negligible effects for graphs like Cora.
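A hedged sketch of this optimisation, assuming PyG's k_hop_subgraph utility and a model that takes (x, edge_index) instead of a Data object:
import torch.nn.functional as F
from torch_geometric.utils import k_hop_subgraph
train_idx = data.train_mask.nonzero(as_tuple=True)[0]
# nodes reachable within k hops of any training node (k = number of GNN layers)
subset, sub_edge_index, mapping, _ = k_hop_subgraph(
    train_idx, num_hops=2, edge_index=data.edge_index, relabel_nodes=True)
out = model(data.x[subset], sub_edge_index)
loss = F.nll_loss(out[mapping], data.y[train_idx])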
| https://stackoverflow.com/questions/69019682/ |
return _VF.norm(input, p, _dim, keepdim=keepdim) IndexError: Dimension out of range (expected to be in range of [-2, 1], but got 2) | I changed
if self.l2_norm:
norm = torch.norm(masked_embedding, p=2, dim=1) + 1e-10
masked_embedding = masked_embedding / norm.expand_as(masked_embedding)
to
if self.l2_norm:
masked_embedding = torch.nn.functional.normalize(masked_embedding, p=2.0, dim=2, eps=1e-10, out=None)
and now I get this new error (previously was getting a different error hence had to change it to so):
(fashcomp) [jalal@goku fashion-compatibility]$ python main.py --name test_baseline --learned --l2_embed --datadir ../../../data/fashion/
/scratch3/venv/fashcomp/lib/python3.8/site-packages/torchvision/transforms/transforms.py:310: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
warnings.warn("The use of the transforms.Scale transform is deprecated, " +
+ Number of params: 3191808
<class 'torch.utils.data.dataloader.DataLoader'>
/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
Traceback (most recent call last):
File "main.py", line 324, in <module>
main()
File "main.py", line 167, in main
train(train_loader, tnet, criterion, optimizer, epoch)
File "main.py", line 202, in train
acc, loss_triplet, loss_mask, loss_embed, loss_vse, loss_sim_t, loss_sim_i = tnet(anchor, far, close)
File "/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/scratch3/research/code/fashion/fashion-compatibility/tripletnet.py", line 146, in forward
acc, loss_triplet, loss_sim_i, loss_mask, loss_embed, general_x, general_y, general_z = self.image_forward(x, y, z)
File "/scratch3/research/code/fashion/fashion-compatibility/tripletnet.py", line 74, in image_forward
embedded_x, masknorm_norm_x, embed_norm_x, general_x = self.embeddingnet(x.images, c)
File "/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/scratch3/research/code/fashion/fashion-compatibility/type_specific_network.py", line 147, in forward
masked_embedding = torch.nn.functional.normalize(masked_embedding, p=2.0, dim=2, eps=1e-10, out=None)
File "/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/nn/functional.py", line 4428, in normalize
denom = input.norm(p, dim, keepdim=True).clamp_min(eps).expand_as(input)
File "/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/_tensor.py", line 417, in norm
return torch.norm(self, p, dim, keepdim, dtype=dtype)
File "/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/functional.py", line 1356, in norm
return _VF.norm(input, p, _dim, keepdim=keepdim) # type: ignore[attr-defined]
IndexError: Dimension out of range (expected to be in range of [-2, 1], but got 2)
This code has been previously run with Python 2 and a much older version of PyTorch that dates back to 3 years ago. I am running it with native Python 3.8 and PyTorch 1.9 GPU-based in CentOS 7.
$ pip freeze
absl-py==0.13.0
argon2-cffi==20.1.0
attrs==21.2.0
backcall==0.2.0
bleach==4.1.0
cachetools==4.2.2
certifi==2021.5.30
cffi==1.14.6
charset-normalizer==2.0.4
cycler==0.10.0
debugpy==1.4.1
decorator==5.0.9
defusedxml==0.7.1
entrypoints==0.3
google-auth==1.35.0
google-auth-oauthlib==0.4.5
grpcio==1.39.0
h5py==3.3.0
idna==3.2
importlib==1.0.4
ipykernel==6.2.0
ipython==7.26.0
ipython-genutils==0.2.0
ipywidgets==7.6.3
jedi==0.18.0
Jinja2==3.0.1
joblib==1.0.1
jsonschema==3.2.0
jupyter==1.0.0
jupyter-client==7.0.1
jupyter-console==6.4.0
jupyter-core==4.7.1
jupyterlab-pygments==0.1.2
jupyterlab-widgets==1.0.0
kiwisolver==1.3.1
Markdown==3.3.4
MarkupSafe==2.0.1
matplotlib==3.4.3
matplotlib-inline==0.1.2
mistune==0.8.4
nbclient==0.5.4
nbconvert==6.1.0
nbformat==5.1.3
nest-asyncio==1.5.1
notebook==6.4.3
numpy==1.21.2
oauthlib==3.1.1
packaging==21.0
pandas==1.3.2
pandocfilters==1.4.3
parso==0.8.2
pexpect==4.8.0
pickleshare==0.7.5
Pillow==8.3.1
prometheus-client==0.11.0
prompt-toolkit==3.0.20
protobuf==3.17.3
ptyprocess==0.7.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.20
Pygments==2.10.0
pyparsing==2.4.7
pyrsistent==0.18.0
python-dateutil==2.8.2
pytz==2021.1
pyzmq==22.2.1
qtconsole==5.1.1
QtPy==1.10.0
requests==2.26.0
requests-oauthlib==1.3.0
rsa==4.7.2
scikit-learn==0.24.2
scipy==1.7.1
Send2Trash==1.8.0
six==1.16.0
sklearn==0.0
tensorboard==2.6.0
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.0
terminado==0.11.1
testpath==0.5.0
threadpoolctl==2.2.0
torch==1.9.0
torch-tb-profiler==0.2.1
torchaudio==0.9.0
torchvision==0.10.0
tornado==6.1
traitlets==5.0.5
typing-extensions==3.10.0.0
urllib3==1.26.6
wcwidth==0.2.5
webencodings==0.5.1
Werkzeug==2.0.1
widgetsnbextension==3.5.1
GitHub issue and code can be found here.
| To switch to F.normalize, you need to make sure you're applying it on dim=1:
if self.l2_norm:
masked_embedding = F.normalize(masked_embedding, p=2.0, dim=1, eps=1e-10)
If you prefer the other alternative with either torch.norm or torch.Tensor.norm, you can use the option keepdim=True, which helps when doing in-place normalization:
if self.l2_norm:
norm = masked_embedding.norm(p=2, dim=1, keepdim=True) + 1e-10
masked_embedding /= norm
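As a quick sanity check that the two forms agree (up to the eps handling, which only matters for near-zero norms):
import torch
import torch.nn.functional as F
x = torch.randn(4, 8)
a = F.normalize(x, p=2.0, dim=1, eps=1e-10)
b = x / (x.norm(p=2, dim=1, keepdim=True) + 1e-10)
print(torch.allclose(a, b))  # True for typical inputs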
| https://stackoverflow.com/questions/69020802/ |
How to create a normal 2d distribution in pytorch | Given a tensor containing N points, represented in [x,y], I want to create a 2D gaussian distribution around each point, draw them on an empty feature map.
For example, the left image shows one given point (registered as a pixel on the feature map, whose value is set to 1). The right image adds a 2D Gaussian distribution around it.
How could I add such distribution for each point? Is there an API for it in pytorch?
| Sampling from the multivariate normal distribution
You can use MultivariateNormal to sample from a multivariate normal.
>>> h, w = 200, 200
>>> fmap = torch.zeros(h, w)
Fill fmap with the origin points:
>>> pts = torch.rand(20, 2)
>>> pts *= torch.tensor([h, w])
>>> x, y = pts.T.long()
>>> x, y = x.clip(0, h), y.clip(0, w)
>>> fmap[x, y] = 1
Following this, we can sample from the following distribution (you can adjust the covariance matrix accordingly):
>>> sampler = MultivariateNormal(pts.T, 10*torch.eye(len(pts)))
>>> for x in range(10):
... x, y = sampler.sample()
... x, y = x.clip(0, h).long(), y.clip(0, w).long()
... fmap[x, y] = 1
As a result, you can end up with something like:
Origin points
Normal sampling
This is not documented well enough, but you can pass the sample shape to the sample function. This allows you to sample multiple points per call, i.e. you only need one call to populate your canvas.
Here is a function to draw from MultivariateNormal:
def multivariate_normal_sampler(mean, cov, k):
sampler = MultivariateNormal(mean, cov)
return sampler.sample((k,)).swapaxes(0,1).flatten(1)
Then you can call it as:
>>> x, y = multivariate_normal_sampler(mean=pts.T, cov=50*torch.eye(len(pts)), k=1000)
Clip the samples:
>>> x, y = x.clip(0, h-1).long(), y.clip(0, w-1).long()
Finally insert into fmap and draw:
>>> fmap[x, y] += .1
Here is an example preview:
k=1,000
k=50,000
The utility function is available as torch.distributions.multivariate_normal.MultivariateNormal
Computing the density map using the pdf
Alternatively, instead of sampling from the normal distribution, you could compute the density values based on its probability density function (pdf):
A particular example of a two-dimensional Gaussian function is:
f(x, y) = 1 / (2π·sx·sy) · exp(-((x - mx)² / (2·sx²) + (y - my)² / (2·sy²)))
Origin points:
>>> h, w = 50, 50
>>> x0, y0 = torch.rand(2, 20)
>>> origins = torch.stack((x0*h, y0*w)).T
Define the gaussian 2D pdf:
def gaussian_2d(x=0, y=0, mx=0, my=0, sx=1, sy=1):
return 1 / (2*math.pi*sx*sy) * \
torch.exp(-((x - mx)**2 / (2*sx**2) + (y - my)**2 / (2*sy**2)))
Construct the grid and accumulate the gaussians from each origin points:
x = torch.linspace(0, h, h)
y = torch.linspace(0, w, w)
x, y = torch.meshgrid(x, y)
z = torch.zeros(h, w)
for x0, y0 in origins:
z += gaussian_2d(x, y, mx=x0, my=y0, sx=h/10, sy=w/10)
Multivariate normal distributions
The code to plot the grid of values is simply using matplotlib.pyplot.pcolormesh: plt.pcolormesh(x, y, z).
| https://stackoverflow.com/questions/69024270/ |
Search a tensor for data/values | Given:
tensor([[6, 6],
[4, 8],
[7, 5],
[7, 4],
[6, 4]])
How do I find the index of rows with values [7,5]?
In general, how do I search for indices of any values: full and partial row or column?
| Try with this:
>>> (a[:, None] == torch.tensor([7, 5])).all(-1).any(-1).nonzero().flatten().item()
2
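The same building blocks generalize to partial matches; a few sketches on the tensor a from the question:
# all (row, col) positions holding a single value:
(a == 7).nonzero()                                       # tensor([[2, 0], [3, 0]])
# indices of rows whose first column is 7 (partial-row match):
(a[:, 0] == 7).nonzero().flatten()                       # tensor([2, 3])
# indices of all rows equal to [7, 5], without assuming exactly one hit:
(a == torch.tensor([7, 5])).all(-1).nonzero().flatten()  # tensor([2])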
| https://stackoverflow.com/questions/69024478/ |
Pytorch data pipeline | I am trying to implement a bounded-buffer-like solution where the data generator and the model work as two separate processes. The data generator preprocesses the data and stores it in a shared queue (with a predefined max size to limit memory usage). The model, on the other hand, consumes data from this queue at its own pace until the queue is empty. Below is a snippet of my implementation.
'''
self._buffer is an object of multiprocessing.Queue
'''
def produce(self):
for obj in self._generator:
self._buffer.put(obj=obj, block=True, timeout=None)
self._buffer.put(obj=None)
def consume(self):
while True:
dat = self._buffer.get(block=True, timeout=None)
if dat is None:
break
else:
# Train model on `dat`
def run(self):
pt = multiprocessing.Process(target=self.produce)
ct = multiprocessing.Process(target=self.consume)
pt.start()
ct.start()
pt.join()
ct.join()
However, the solution above does not work. I used torch.multiprocessing as instructed by the documentation. I also set torch.multiprocessing.set_start_method('spawn') in order to avoid "RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method"
But now I get "TypeError: cannot pickle 'generator' object". How this can be fixed?
| Since you work with PyTorch, you should use the Dataset and DataLoader approach. This handles all problems with multiprocessing, shared memory and so on for you.
You can have map-style datasets or iterable-style ones. It is best to read the official documentation on what is what and how they work.
In your case you are probably fine with an iterable-style dataset. I used both approaches for similar cases. You can use the iterable-style dataset when you don't know how many samples you will be processing. For other cases I had a map-style dataset, where I knew the total number of my samples beforehand (e.g. processing all images in a directory) and could use a sequential sampler to give me the elements in order.
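As a minimal sketch of the iterable-style idea applied to your case, wrap a generator factory (not the generator object itself, which cannot be pickled) in an IterableDataset:
import torch
from torch.utils.data import IterableDataset, DataLoader
class GeneratorDataset(IterableDataset):
    def __init__(self, make_generator):
        # store a factory so each worker can build its own generator;
        # a generator object itself cannot be pickled
        self.make_generator = make_generator
    def __iter__(self):
        return iter(self.make_generator())
ds = GeneratorDataset(lambda: (torch.randn(3) for _ in range(100)))
# with num_workers > 1, each worker would replay the full stream, so shard
# it per worker (see torch.utils.data.get_worker_info) if that matters
loader = DataLoader(ds, batch_size=8, num_workers=0)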
Regarding one of your problems: all errors like TypeError: cannot pickle 'generator' object happen when you have objects which can't be serialized (pickle is used for serialization). In your case self._generator seems to be an object which can't be serialized for some reason. Without the code it is not possible to say why. I had cases where wrapped C++ packages created with pybind produced objects that were not serializable, or where I had some mutex variables somewhere.
| https://stackoverflow.com/questions/69026189/ |
Pytorch creating model from load_state_dict | I'm trying to learn how to save and load trained models in Pytorch, but so far, I'm only getting errors. Let's consider the following self-contained code:
import torch
lin=torch.nn.Linear; act=torch.nn.ReLU(); fnc=torch.nn.functional;
class Ann(torch.nn.Module):
def __init__(self):
super(Ann, self).__init__()
self.conv1 = torch.nn.Conv2d( 1, 10, kernel_size=5)
self.conv2 = torch.nn.Conv2d(10, 20, kernel_size=4)
self.drop = torch.nn.Dropout2d(p=0.5)
self.fc1 = torch.nn.Linear(320,128)
self.fc2 = torch.nn.Linear(128,10)
def forward(self, x):
x = self.conv1(x[:,None,:,:]);
x = fnc.relu(fnc.max_pool2d(x,2));
x = self.drop(self.conv2(x));
x = fnc.relu(fnc.max_pool2d(x,2));
x = torch.flatten(x,1);
x = fnc.relu(self.fc1(x));
x = fnc.dropout(self.fc2(x),training=self.training);
return fnc.log_softmax(x,dim=0)
x,y=torch.rand((5,28,28)),torch.randint(0,9,(5,));
f=fnc.nll_loss;
ann1 = torch.nn.Sequential( torch.nn.Flatten(start_dim=1),
lin(784,256), act, lin(256,128), act, lin(128,10), torch.nn.LogSoftmax(dim=1))
ann2=Ann()
F1 = torch.optim.SGD(ann1.parameters(),lr=0.01,momentum=0.5)
F2 = torch.optim.SGD(ann2.parameters(),lr=0.01,momentum=0.5)
F1.zero_grad(); y_=ann1(x); loss=f(y_,y); loss.backward(); F1.step()
print(x.dtype,y.dtype,x.shape,y.shape,y_.shape,loss);
F2.zero_grad(); y_=ann2(x); loss=f(y_,y); loss.backward(); F2.step()
print(x.dtype,y.dtype,x.shape,y.shape,y_.shape,loss);
name='/home/leon/'
#ann3 = ann1.__class__().load_state_dict(ann1.state_dict()); print(ann3(x)) #outputs errors
#ann4 = ann2.__class__().load_state_dict(ann2.state_dict()); print(ann4(x)) #outputs errors
torch.save( [ann1.state_dict(),F1.state_dict()], name+'annF1.pth');
torch.save( [ann2.state_dict(),F2.state_dict()], name+'annF2.pth');
a1,d1=torch.load(name+'annF1.pth')
a2,d2=torch.load(name+'annF2.pth') #so far, works as expected
ann3, F3 = ann1.__class__().load_state_dict(a1), F1.__class__().load_state_dict(d1) #outputs errors
ann4, F4 = ann2.__class__().load_state_dict(a2), F2.__class__().load_state_dict(d2) #outputs errors
As you can see, ann1 and ann2 work, since they produce valid output. However, (re)constructing a model ann3 and ann4 from the given state_dict() invariably gives two errors (respectively):
Unexpected key(s) in state_dict: "1.weight", "1.bias", "3.weight", "3.bias", "5.weight", "5.bias".
TypeError: '_IncompatibleKeys' object is not callable
Could anyone please show me how to properly construct a model from given parameters, so I can later export and import my trained models?
| You have two problems here:
Remove the .__class__()
Separate the definition of ann3 and ann4.
ann1.load_state_dict(ann1.state_dict())
ann3 = ann1
print(ann3(x))
ann2.load_state_dict(ann2.state_dict())
ann4 = ann2
print(ann4(x))
But what is the purpose of ann1.__class__().load_state_dict(ann1.state_dict())?
Maybe you wanted to do this?
ann3 = torch.nn.Sequential( torch.nn.Flatten(start_dim=1),
lin(784,256), act, lin(256,128), act, lin(128,10), torch.nn.LogSoftmax(dim=1))
ann3.load_state_dict(ann1.state_dict())
print(ann3(x))
ann4 = Ann()
ann4.load_state_dict(ann2.state_dict())
print(ann4(x))
This works the same as the official guide: create a new model with the same architecture, then load the saved/existing state_dict.
Saving & Loading Model for Inference
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH))
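Applied to your saved pair, a sketch restoring both model and optimizer (the SGD initializer needs the new model's parameters first, which is why F1.__class__() failed):
state_model, state_opt = torch.load(name + 'annF2.pth')
ann4 = Ann()
ann4.load_state_dict(state_model)
F4 = torch.optim.SGD(ann4.parameters(), lr=0.01, momentum=0.5)
F4.load_state_dict(state_opt)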
| https://stackoverflow.com/questions/69030546/ |
Pytorch SSLError on Dataloader when Workers are greater than 1 | I have created a Dataset object that loads some data from an API when loading an item
class MyDataset(Dataset):
def __init__(self, obj_ids = []):
"""
"""
super(Dataset, self).__init__()
self.obj_ids = obj_ids
def __len__(self):
return len(self.obj_ids)
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
result = session.get('/api/url/{}'.format(idx))
## Post processing work...
Then I add it to my Dataloader:
data_loader = torch.utils.data.DataLoader(
dataset, batch_size=2, shuffle=True, num_workers=1,
collate_fn=utils.collate_fn)
Everything works fine when training this with num_workers=1. But when I increase it to 2 or greater I get an error in my training loop.
On this line:
train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
SSLError: Caught SSLError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/urllib3/connectionpool.py", line 600, in urlopen
chunked=chunked)
File "/usr/local/lib/python3.7/dist-packages/urllib3/connectionpool.py", line 384, in _make_request
six.raise_from(e, None)
File "<string>", line 2, in raise_from
File "/usr/local/lib/python3.7/dist-packages/urllib3/connectionpool.py", line 380, in _make_request
httplib_response = conn.getresponse()
File "/usr/lib/python3.7/http/client.py", line 1373, in getresponse
response.begin()
File "/usr/lib/python3.7/http/client.py", line 319, in begin
version, status, reason = self._read_status()
File "/usr/lib/python3.7/http/client.py", line 280, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "/usr/lib/python3.7/socket.py", line 589, in readinto
return self._sock.recv_into(b)
File "/usr/lib/python3.7/ssl.py", line 1071, in recv_into
return self.read(nbytes, buffer)
File "/usr/lib/python3.7/ssl.py", line 929, in read
return self._sslobj.read(len, buffer)
ssl.SSLError: [SSL: DECRYPTION_FAILED_OR_BAD_RECORD_MAC] decryption failed or bad record mac (_ssl.c:2570)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/usr/local/lib/python3.7/dist-packages/urllib3/connectionpool.py", line 638, in urlopen
_stacktrace=sys.exc_info()[2])
File "/usr/local/lib/python3.7/dist-packages/urllib3/util/retry.py", line 399, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='mydomain.com', port=443): Max retries exceeded with url: 'url_with_error_is_here' (Caused by SSLError(SSLError(1, '[SSL: DECRYPTION_FAILED_OR_BAD_RECORD_MAC] decryption failed or bad record mac (_ssl.c:2570)')))
If I remove the post request, I stop getting the SSL error, so the problem most be something with the requests.post library or urllib maybe.
I changed the domain and url on the error to dummy values, but both url's and domains work when having just 1 worker.
I'm running this in a google collab environment with GPU enabled, but also tried it on my local machine and getting the same problem.
Can anyone help me to solve this issue?
| After debugging a bit and reading more about multiprocessing and requests.Session, it seems that the problem is that I cannot use requests.Session inside a Dataset, as PyTorch eventually uses multiprocessing in the training loop.
More about it on this question: How to assign python requests sessions for single processes in multiprocessing pool?
The issue is fixed by changing any session.get or session.post to requests.get and requests.post: without a shared Session the workers no longer share the same connection, which avoids the SSLError.
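If you still want connection pooling, one hedged alternative is a fresh Session per worker via the DataLoader's worker_init_fn; this assumes the dataset reads self.session inside __getitem__:
import requests
import torch
from torch.utils.data import get_worker_info
def worker_init_fn(worker_id):
    # each worker holds its own copy of the dataset; attach a fresh session to it
    get_worker_info().dataset.session = requests.Session()
data_loader = torch.utils.data.DataLoader(
    dataset, batch_size=2, shuffle=True, num_workers=4,
    worker_init_fn=worker_init_fn, collate_fn=utils.collate_fn)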
| https://stackoverflow.com/questions/69033787/ |
Can BERT output be fixed in shape, irrespective of string size? | I am confused about using huggingface BERT models and about how to make them yield a prediction at a fixed shape, regardless of input size (i.e., input string length).
I tried to call the tokenizer with the parameters padding=True, truncation=True, max_length = 15, but the prediction output dimensions for inputs = ["a", "a"*20, "a"*100, "abcede"*20000] are not fixed. What am I missing here?
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
inputs = ["a", "a"*20, "a"*100, "abcede"*20000]
for input in inputs:
inputs = tokenizer(input, padding=True, truncation=True, max_length = 15, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape, input, len(input))
output:
Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertModel: ['cls.seq_relationship.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.bias', 'cls.predictions.decoder.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.bias']
- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
torch.Size([1, 3, 768]) a 1
torch.Size([1, 12, 768]) aaaaaaaaaaaaaaaaaaaa 20
torch.Size([1, 15, 768]) aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa 100
torch.Size([1, 3, 768]) abcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcededeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeab....deabbcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcede 120000
| When you call the tokenizer with only one sentence and padding=True, truncation=True, max_length = 15, it will pad the output sequence to the longest input sequence and truncate if required. Since you are providing only one sentence, the tokenizer can not pad anything because it is already the longest sequence of the batch. That means you can achieve what you want in two ways:
Provide a batch:
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
inputs = ["a", "a"*20, "a"*100, "abcede"*200]
inputs = tokenizer(inputs, padding=True, truncation=True, max_length = 15, return_tensors="pt")
print(inputs["input_ids"])
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
Output:
tensor([[ 101, 1037, 102, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0],
[ 101, 13360, 11057, 11057, 11057, 11057, 11057, 11057, 11057, 11057,
2050, 102, 0, 0, 0],
[ 101, 13360, 11057, 11057, 11057, 11057, 11057, 11057, 11057, 11057,
11057, 11057, 11057, 11057, 102],
[ 101, 100, 102, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0]])
torch.Size([4, 15, 768])
Set padding="max_length":
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
inputs = ["a", "a"*20, "a"*100, "abcede"*200]
for i in inputs:
inputs = tokenizer(i, padding='max_length', truncation=True, max_length = 15, return_tensors="pt")
print(inputs["input_ids"])
outputs = model(**inputs)
print(outputs.last_hidden_state.shape, i, len(i))
Output:
tensor([[ 101, 1037, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0]])
torch.Size([1, 15, 768]) a 1
tensor([[ 101, 13360, 11057, 11057, 11057, 11057, 11057, 11057, 11057, 11057,
2050, 102, 0, 0, 0]])
torch.Size([1, 15, 768]) aaaaaaaaaaaaaaaaaaaa 20
tensor([[ 101, 13360, 11057, 11057, 11057, 11057, 11057, 11057, 11057, 11057,
11057, 11057, 11057, 11057, 102]])
torch.Size([1, 15, 768]) aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa 100
tensor([[101, 100, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0]])
torch.Size([1, 15, 768]) abcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcedeabcede 1200
| https://stackoverflow.com/questions/69046964/ |
Unknown behavior of hooks on batch norm in pytorch | I try to freeze the batch_norm layer and analyse their inputs/outputs with forward hooks
For fixed BN layers, I just couldn't understand why the hooked output is different from the output reproduced by the hooked input.
Really appreciate that if anyone could help me
Here's the code:
import torch
import torchvision
import numpy
def set_bn_eval(m):
classname = m.__class__.__name__
if classname.find('BatchNorm') != -1:
m.eval()
image = torch.randn((1, 3, 224, 224))
res = torchvision.models.resnet50(pretrained=True)
res.apply(set_bn_eval)
b = res(image)
layer_out = []
layer_in = []
def layer_hook(mod, inp, out):
layer_out.append(out)
layer_in.append(inp[0])
for name, key in res.named_modules():
hook = key.register_forward_hook(layer_hook)
res(image)
hook.remove()
out = layer_out.pop()
inp = layer_in.pop()
try:
assert (out.equal(key(inp)))
except AssertionError:
print(name)
break
| TLDR; Some operators will only appear in the forward of the module: such as non-parametrized layers.
Some components are not registered in the child module list. This can often be the case for activation functions, but it will ultimately depend on the module implementation. In your case, ResNet's Bottleneck block has its ReLUs applied in the forward definition, just after the batch normalization layer is called.
This means the output you will catch with the layer hook will be different from the tensor you compute from just the module and its input.
import torch.nn.functional as F
for name, module in res.named_modules():
if name != 'bn1':
hook = module.register_forward_hook(layer_hook)
res(image)
hook.remove()
inp = layer_in.pop()
out = layer_out.pop()
assert out.equal(F.relu(module(inp)))
Therefore, it's a bit tricky to actually implement since you can't rely entirely on the content of res.named_modules().
| https://stackoverflow.com/questions/69055763/ |
Why conda installs old pytorch with by default with cudatoolkit=11.2 | I am trying to install conda for cudatoolkit=11.2 on google colab using:
conda install pytorch cudatoolkit=11.2 -c pytorch -c nvidia
But why does it install the old pytorch=1.0.0 and not something >1.6?
If I try to force install pytorch=1.6, it gives the following error:
UnsatisfiableError: The following specifications were found to be incompatible with a past
explicit spec that is not an explicit spec in this operation (cudatoolkit):
- cudatoolkit=11.2
- pytorch=1.6 -> cudatoolkit[version='>=10.1,<10.2|>=10.2,<10.3|>=9.2,<9.3']
The following specifications were found to be incompatible with each other:
Output in format: Requested package -> Available versions
Package setuptools conflicts for:
setuptools
python=3.7 -> pip -> setuptools
conda[version='>=4.10.3'] -> setuptools[version='>=31.0.1']
wheel -> setuptools
pip -> setuptools
...
EDIT based on the answer:
When I try to use conda install -c conda-forge pytorch cudatoolkit=11.2, it gives the following error.
PackagesNotFoundError: The following packages are not available from current channels:
- cudatoolkit=11.2
- __glibc[version='>=2.17,<3.0.a0']
Current channels:
- https://conda.anaconda.org/conda-forge/linux-64
- https://conda.anaconda.org/conda-forge/noarch
- https://repo.anaconda.com/pkgs/main/linux-64
- https://repo.anaconda.com/pkgs/main/noarch
- https://repo.anaconda.com/pkgs/free/linux-64
- https://repo.anaconda.com/pkgs/free/noarch
- https://repo.anaconda.com/pkgs/r/linux-64
- https://repo.anaconda.com/pkgs/r/noarch
- https://repo.anaconda.com/pkgs/pro/linux-64
- https://repo.anaconda.com/pkgs/pro/noarch
| The pytorch channel doesn't yet have any pytorch builds compatible with cudatoolkit=11.2. What the solver finds is an old version of PyTorch where they did not have proper upper bounds on the dependency version.
If you insist on cudatoolkit=11.2, then you'll need to stick to the Conda Forge stack:
conda install -c conda-forge pytorch cudatoolkit=11.2
Otherwise, if you want the official PyTorch builds, then they build up to v11.1 compatible versions:
conda install -c pytorch -c conda-forge pytorch cudatoolkit=11.1
| https://stackoverflow.com/questions/69058891/ |
Pytorch geometric: how to explain the input in the below code-snippet? | I am reading PyTorch geometric documentation at https://pytorch-geometric.readthedocs.io/en/latest/notes/introduction.html
On this page, there is a code snippet:
import torch
from torch_geometric.data import Data
edge_index = torch.tensor([[0, 1, 1, 2],
[1, 0, 2, 1]], dtype=torch.long)
x = torch.tensor([[-1], [0], [1]], dtype=torch.float)
data = Data(x=x, edge_index=edge_index)
The output of the last line from the above code snippet is:
Data(edge_index=[2, 4], x=[3, 1])
How are the edge_index 2 and 4? If I understand correctly there are four edges being defined with an index starting from 0. Is this assumption wrong? Also, what does x =[3, 1] mean?
Data is a class, so I wouldn't expect it to return anything. Class definition is here: https://pytorch-geometric.readthedocs.io/en/latest/modules/data.html . I read the documentation. x should be the node feature matrix and edge_index should be the graph connectivity. But I don't understand the console output that I cross-checked in the Jupyter notebook.
| Okay, I think I now understand the output Data(edge_index=[2, 4], x=[3, 1]): [2, 4] is the shape of edge_index (2 rows for source/target nodes by 4 edges) and [3, 1] is the shape of x (3 nodes by 1 feature). But please, anyone correct me if I am wrong.
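A quick check confirms this reading:
print(data.edge_index.shape)  # torch.Size([2, 4]): 2 rows (source/target) x 4 directed edges
print(data.x.shape)           # torch.Size([3, 1]): 3 nodes x 1 feature each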
| https://stackoverflow.com/questions/69060114/ |
This torch project keep telling me "Expected 2 or more dimensions (got 1)" | I was trying to make my own neural network using PyTorch. I do not understand why my code is not working properly.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optimizers
import numpy as np
from tqdm import tqdm
import os
import hashlib
# Only for the first time
MAKE_DATA = False
class Model(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(2, 100)
self.fc2 = nn.Linear(100, 200)
self.fc3 = nn.Linear(200, 200)
self.fc4 = nn.Linear(200, 1)
def forward(self, x):
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x = self.fc4(x)
return F.relu(x)
def make_numirical(data):
data = str(data)
data = data.encode()
data = hashlib.md5(data).hexdigest()
str1 = ''
for c in data:
if not (c >= '0' and c <= '9'):
c = ord(c)
if c > 10:
c /= 10
str1 += str(int(c))
return int(str1[:20])
def make_train_data():
HITS = 'Songs_DB/Hits'
NOHITS = 'Songs_DB/NoHits'
hits_count = 0
no_hits_count = 0
LABELS = {HITS: 0, NOHITS: 0}
training_data = []
i = 0
for label in LABELS:
for f in tqdm(os.listdir(label)):
try:
path = os.path.join(label, f)
with open(path, 'rb') as file:
data = file.read()
file.close()
data = make_numirical(data)
data = int(data)
training_data.append([np.array([data]), np.eye(2)[i]])
if label == HITS:
hits_count += 1
else:
no_hits_count += 1
except:
pass
i += 1
np.random.shuffle(training_data)
np.save('training_data.npy', training_data)
print(hits_count)
print(no_hits_count)
if MAKE_DATA:
make_train_data()
model = Model()
# 1 = brown, 0 = not brown, 1 = cat, 0 = dog.
Xs = torch.Tensor([[0, 1], [1, 1], [1, 0], [1, 1], [1, 1], [0, 1], [1, 1], [0, 0], [1, 0]])
ys = torch.Tensor([[1], [0], [0], [1], [0], [1], [0], [1], [1]])
i = 0
for x in Xs:
output = model(x)
print(output)
loss = F.nll_loss(output, ys[i])
print(loss)
The program keeps giving me this error:
Expected 2 or more dimensions (got 1)
Can anyone explain what is wrong with my code?
| The tensor you use as the dataset, Xs is shaped (n, 2). So when looping over it each element x ends up as a 1D tensor shaped (2,). However, your module expects a batched tensor as input, i.e. here a 2D tensor shaped (n, 2), just like Xs. You have two possible options, either use a data loader and divide your dataset into batches, or unsqueeze your input x to make it two dimensional shaped (1, 2).
Using a TensorDataset and wrapping it with a DataLoader:
>>> dataset = TensorDataset(Xs, ys)
>>> dataloader = DataLoader(dataset, batch_size=4)
Then iterating over dataloader will return batches of four (inputs and corresponding labels):
>>> for x, y in dataloader:
... output = model(x)
... loss = F.nll_loss(output, y)
TensorDataset and Dataloader are both imported from torch.utils.data.
Or use torch.Tensor.unsqueeze on x to add one extra dimension:
>>> for x, y in zip(Xs, ys):
... output = model(x.unsqueeze(0))
... loss = F.nll_loss(output, y)
Alternatively, you can do x[None] which has the same effect.
| https://stackoverflow.com/questions/69066267/ |
Expected object of scalar type Long but got scalar type Int for argument #2 in loss function | I have encountered the following error:
RuntimeError Traceback (most recent call last)
<ipython-input-42-276f5444b449> in <module>
----> 1 train_epocs(model, optimizer, train_dl, valid_dl, epochs=15)
<ipython-input-39-6f4616cc5f25> in train_epocs(model, optimizer, train_dl, val_dl, epochs, C)
11 y_bb = y_bb.cuda().float()
12 out_class, out_bb = model(x)
---> 13 loss_class = F.cross_entropy(out_class, y_class, reduction="sum")
14 loss_bb = F.l1_loss(out_bb, y_bb, reduction="none").sum(1)
15 loss_bb = loss_bb.sum()
~\anaconda3\lib\site-packages\torch\nn\functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
2822 if size_average is not None or reduce is not None:
2823 reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2824 return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
2825
2826
RuntimeError: Expected object of scalar type Long but got scalar type Int for argument #2 'target' in call to _thnn_nll_loss_forward
I have been training the network with the following set-up. We have 26 classes. The code is adapted from https://jovian.ai/ranerajesh/road-signs-bounding-box-prediction/v/10. I have my own custom dataset that has been structured in the way required for this code to run. However, I have encountered a RuntimeError.
def normalize(im):
"""Normalizes images with Imagenet stats."""
imagenet_stats = np.array([[0.485, 0.456, 0.406], [0.229, 0.224, 0.225]])
return (im - imagenet_stats[0])/imagenet_stats[1]
class RoadDataset(Dataset):
def __init__(self, paths, bb, y, transforms=False):
self.transforms = transforms
self.paths = paths.values
self.bb = bb.values
self.y = y.values
def __len__(self):
return len(self.paths)
def __getitem__(self, idx):
path = self.paths[idx]
print(path)
y_class = self.y[idx]
x, y_bb = transformsXY(path, self.bb[idx], self.transforms)
x = normalize(x)
x = np.rollaxis(x, 2)
return x, y_class, y_bb
train_ds = RoadDataset(X_train['new_path'],X_train['new_bb'] ,y_train, transforms=True)
valid_ds = RoadDataset(X_val['new_path'],X_val['new_bb'],y_val)
batch_size = 2
train_dl = DataLoader(train_ds, batch_size=batch_size, shuffle=True)
valid_dl = DataLoader(valid_ds, batch_size=batch_size)
class BB_model(nn.Module):
def __init__(self):
super(BB_model, self).__init__()
resnet = models.resnet34(pretrained=True)
layers = list(resnet.children())[:8]
self.features1 = nn.Sequential(*layers[:6])
self.features2 = nn.Sequential(*layers[6:])
self.classifier = nn.Sequential(nn.BatchNorm1d(512), nn.Linear(512, 26))
self.bb = nn.Sequential(nn.BatchNorm1d(512), nn.Linear(512, 26))
def forward(self, x):
x = self.features1(x)
x = self.features2(x)
x = F.relu(x)
x = nn.AdaptiveAvgPool2d((1,1))(x)
x = x.view(x.shape[0], -1)
return self.classifier(x), self.bb(x)
def update_optimizer(optimizer, lr):
for i, param_group in enumerate(optimizer.param_groups):
param_group["lr"] = lr
def train_epocs(model, optimizer, train_dl, val_dl, epochs=10,C=1000):
idx = 0
for i in range(epochs):
model.train()
total = 0
sum_loss = 0
for x, y_class, y_bb in train_dl:
batch = y_class.shape[0]
x = x.cuda().float()
y_class = y_class.cuda()
y_bb = y_bb.cuda().float()
out_class, out_bb = model(x)
loss_class = F.cross_entropy(out_class, y_class, reduction="sum")
loss_bb = F.l1_loss(out_bb, y_bb, reduction="none").sum(1)
loss_bb = loss_bb.sum()
loss = loss_class + loss_bb/C
optimizer.zero_grad()
loss.backward()
optimizer.step()
idx += 1
total += batch
sum_loss += loss.item()
train_loss = sum_loss/total
val_loss, val_acc = val_metrics(model, valid_dl, C)
print("train_loss %.3f val_loss %.3f val_acc %.3f" % (train_loss, val_loss, val_acc))
return sum_loss/total
def val_metrics(model, valid_dl, C=1000):
model.eval()
total = 0
sum_loss = 0
correct = 0
for x, y_class, y_bb in valid_dl:
batch = y_class.shape[0]
x = x.cuda().float()
y_class = y_class.cuda()
y_bb = y_bb.cuda().float()
out_class, out_bb = model(x)
loss_class = F.cross_entropy(out_class, y_class, reduction="sum")
loss_bb = F.l1_loss(out_bb, y_bb, reduction="none").sum(1)
loss_bb = loss_bb.sum()
loss = loss_class + loss_bb/C
_, pred = torch.max(out_class, 1)
correct += pred.eq(y_class).sum().item()
sum_loss += loss.item()
total += batch
return sum_loss/total, correct/total
model = BB_model().cuda()
parameters = filter(lambda p: p.requires_grad, model.parameters())
optimizer = torch.optim.Adam(parameters, lr=0.006)
train_epocs(model, optimizer, train_dl, valid_dl, epochs=15)
| loss_class = F.cross_entropy(out_class, y_class, reduction="sum")
In the line above, y_class is the target for out_class (the model predictions). F.cross_entropy expects its target tensor to be of scalar type Long, but your y_class is Int. So you need to change y_class's type to Long by:
y_class = y_class.long()
loss_class = F.cross_entropy(out_class, y_class, reduction="sum")
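Alternatively, you can fix the dtype once at the data side; a sketch of the Dataset's __getitem__ from the question (casting to a plain Python int makes the default collate produce a LongTensor):
def __getitem__(self, idx):
    path = self.paths[idx]
    y_class = int(self.y[idx])  # collated into a LongTensor downstream
    x, y_bb = transformsXY(path, self.bb[idx], self.transforms)
    x = normalize(x)
    x = np.rollaxis(x, 2)
    return x, y_class, y_bb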
| https://stackoverflow.com/questions/69067391/ |
function for Ordinal Pooling Neural network | please I want to create a function that computes the Ordinal Pooling neural network like the following figure:
this is my function :
def Ordinal_Pooling_NN(x):
wights = torch.tensor([0.6, 0.25, 0.10, 0.05])
top = torch.topk(x, 4, dim = 1)
wights = wights.repeat(x.shape[0], 1)
result = torch.sum(wights * (top.values), dim = 1 )
return result
but as a result, I get the following error:
<ipython-input-112-ddf99c812d56> in Ordinal_Pooling_NN(x)
9 top = torch.topk(x, 4, dim = 1)
10 wights = wights.repeat(x.shape[0], 1)
---> 11 result = torch.sum(wights * (top.values), dim = 1 )
12 return result
RuntimeError: The size of tensor a (4) must match the size of tensor b (16) at non-singleton dimension 2
| Your implementation is actually correct; I believe you did not feed the function a 2D tensor: the input must have a batch axis. For instance, the code below will run:
>>> Ordinal_Pooling_NN(torch.tensor([[1.9, 0.4, 1.3, 0.8]]))
tensor([1.5650])
Do note that you are not required to repeat the weights tensor; it will be broadcast automatically when computing the point-wise multiplication. You only need the following:
def Ordinal_Pooling_NN(x):
w = torch.tensor([0.6, 0.25, 0.10, 0.05])
top = torch.topk(x, k=4, dim=1)
result = torch.sum(w*top.values, dim=1)
return result
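For instance, with a batch of two samples (illustrative values), broadcasting gives one weighted top-4 sum per row:
x = torch.tensor([[1.9, 0.4, 1.3, 0.8, 0.2, 0.0],
                  [1.7, 1.4, 0.3, 1.8, 1.2, 1.1]])
print(Ordinal_Pooling_NN(x))  # tensor([1.5650, 1.7050])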
| https://stackoverflow.com/questions/69068108/ |
How to get indices of tensors of same value in a 2-d tensor? | As described in title, given a 2-d tensor, let's say:
tensor([
[0, 1, 0, 1], # A
[1, 1, 0, 1], # B
[1, 0, 0, 1], # C
[0, 1, 0, 1], # D
[1, 1, 0, 1], # E
[1, 1, 0, 1] # F
])
It's easy enough to tell that "A and D" and "B, E and F" are two groups of tensors
of the same value (that is, A == D and B == E == F).
So my question is:
How to get indices of those groups?
Details:
Input: tensor above
Output: (0, 3), (1, 4, 5)
| A solution using PyTorch functions:
import torch
x = torch.tensor([
[0, 1, 0, 1], # A
[1, 1, 0, 1], # B
[1, 0, 0, 1], # C
[0, 1, 0, 1], # D
[1, 1, 0, 1], # E
[1, 1, 0, 1] # F
])
_, inv, counts = torch.unique(x, dim=0, return_inverse=True, return_counts=True)
print([tuple(torch.where(inv == i)[0].tolist()) for i, c in enumerate(counts) if c > 1])
# > [(0, 3), (1, 4, 5)]
| https://stackoverflow.com/questions/69075402/ |
pytorch hook function is not executed | May I know why the PyTorch hook function does not work?
| You might want to use torch.nn.Module.named_modules instead of torch.nn.Module.named_children. The latter will only return immediate child modules. In your case graph's immediate child is cells, so you won't be looping over modules inside of cells, i.e. the layers defined inside the ModuleList.
Either use named_modules:
for name, module in graph.named_modules():
    pass
Or use named_children on graph.cells directly:
for name, module in graph.cells.named_children():
pass
However, the latter alternative won't scale if you ever decide to add additional child modules to Graph.
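For instance, a hedged sketch of registering a callback on every submodule (assuming graph is your model instance; note that named_modules also yields container modules such as the ModuleList itself):
def print_hook(module, inputs, output):
    print(f'{module.__class__.__name__} was called')

for name, module in graph.named_modules():
    module.register_forward_hook(print_hook)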
| https://stackoverflow.com/questions/69078576/ |
Create a simple PyTorch neural network with normalized weights | I want to create a simple PyTorch neural network with the sum of its weights equal to 1. To understand my question, here is an example:
| You can simply normalize by the sum of all initialized weights:
>>> layer = nn.Linear(4, 1, bias=False)
>>> layer.weight
Parameter containing:
tensor([[-0.2565, 0.4753, -0.1129, 0.2327]], requires_grad=True)
Normalize layer.weight:
>>> layer.weight.data /= layer.weight.data.sum()
Then:
>>> layer.weight
Parameter containing:
tensor([[-0.7573, 1.4034, -0.3333, 0.6872]], requires_grad=True)
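You can verify the constraint holds (up to floating-point error):
>>> layer.weight.sum()
tensor(1.0000, grad_fn=<SumBackward0>)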
| https://stackoverflow.com/questions/69081227/ |
How to write a forward hook function for nn.Transformer in pytorch? | I have learnt that a forward hook function has the form hook_fn(m, x, y), where m refers to the model, x to the input, and y to the output. I want to write a forward hook function for nn.Transformer.
However, there are two inputs to the transformer layer: src and tgt. For example, >>> out = transformer_model(src, tgt). So how can I tell these inputs apart?
| Your hook will call your callback function with tuples for x and y, as described in the documentation page of torch.nn.Module.register_forward_hook (though it doesn't quite explain the types of x and y):
The input contains only the positional arguments given to the module.
Keyword arguments won’t be passed to the hooks and only to the
forward. [...].
model = nn.Transformer(nhead=16, num_encoder_layers=12)
src = torch.rand(10, 32, 512)
tgt = torch.rand(20, 32, 512)
Define your callback:
def hook(module, x, y):
print(f'is tuple={isinstance(x, tuple)} - length={len(x)}')
src, tgt = x
print(f'src: {src.shape}')
print(f'tgt: {tgt.shape}')
Hook to your nn.Module:
>>> model.register_forward_hook(hook)
Do an inference:
>>> out = model(src, tgt)
is tuple=True - length=2
src: torch.Size([10, 32, 512])
tgt: torch.Size([20, 32, 512])
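If you only need the hook temporarily, keep the handle returned by register_forward_hook so you can remove it later:
>>> handle = model.register_forward_hook(hook)
>>> out = model(src, tgt)  # hook fires here
>>> handle.remove()        # later calls run without the hook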
| https://stackoverflow.com/questions/69084540/ |
Classification with PyTorch is much slower than Tensorflow: 42min vs. 11min | I have been a TensorFlow user and am starting to use PyTorch. As a trial, I implemented simple classification tasks with both libraries.
However, PyTorch is much slower than TensorFlow: PyTorch takes 42 min while TensorFlow takes 11 min. I referred to the official PyTorch tutorial and made only small changes to it.
Could anyone share some advice for this problem?
Here is a summary of what I tried.
environment: Colab Pro+
dataset: Cifar10
classifier: VGG16
optimizer: Adam
loss: crossentropy
batch size: 32
PyTorch
Code:
import torch, torchvision
from torch import nn
from torchvision import transforms, models
from tqdm import tqdm
import time, copy
trans = transforms.Compose([transforms.Resize((224, 224)),
transforms.ToTensor(),])
data = {phase: torchvision.datasets.CIFAR10('./', train = (phase=='train'), transform=trans, download=True) for phase in ['train', 'test']}
dataloaders = {phase: torch.utils.data.DataLoader(data[phase], batch_size=32, shuffle=True) for phase in ['train', 'test']}
def train_model(model, criterion, optimizer, dataloaders, device, num_epochs=5):
since = time.time()
best_model_wts = copy.deepcopy(model.state_dict())
best_acc = 0.0
for epoch in range(num_epochs):
print('Epoch {}/{}'.format(epoch, num_epochs - 1))
print('-' * 10)
# Each epoch has a training and validation phase
for phase in ['train', 'test']:
if phase == 'train':
model.train() # Set model to training mode
else:
model.eval() # Set model to evaluate mode
running_loss = 0.0
running_corrects = 0
# Iterate over data.
for inputs, labels in tqdm(iter(dataloaders[phase])):
inputs = inputs.to(device)
labels = labels.to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward
# track history if only in train
with torch.set_grad_enabled(phase == 'train'):
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels)
# backward + optimize only if in training phase
if phase == 'train':
loss.backward()
optimizer.step()
# statistics
running_loss += loss.item() * inputs.size(0)
running_corrects += torch.sum(preds == labels.data)
epoch_loss = running_loss / len(dataloaders[phase])
epoch_acc = running_corrects.double() / len(dataloaders[phase])
print('{} Loss: {:.4f} Acc: {:.4f}'.format(
phase, epoch_loss, epoch_acc))
# deep copy the model
if phase == 'test' and epoch_acc > best_acc:
best_acc = epoch_acc
best_model_wts = copy.deepcopy(model.state_dict())
print()
time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format(
time_elapsed // 60, time_elapsed % 60))
print('Best val Acc: {:4f}'.format(best_acc))
# load best model weights
model.load_state_dict(best_model_wts)
return model
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = models.vgg16(pretrained=False)
model = model.to(device)
model = train_model(model=model,
criterion=nn.CrossEntropyLoss(),
optimizer=torch.optim.Adam(model.parameters(), lr=0.001),
dataloaders=dataloaders,
device=device,
)
Result:
Epoch 0/4
----------
0%| | 0/1563 [00:00<?, ?it/s]/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
100%|██████████| 1563/1563 [07:50<00:00, 3.32it/s]
train Loss: 75.5199 Acc: 3.2809
100%|██████████| 313/313 [00:38<00:00, 8.11it/s]
test Loss: 73.7274 Acc: 3.1949
Epoch 1/4
----------
100%|██████████| 1563/1563 [07:50<00:00, 3.33it/s]
train Loss: 73.8162 Acc: 3.2514
100%|██████████| 313/313 [00:38<00:00, 8.13it/s]
test Loss: 73.6114 Acc: 3.1949
Epoch 2/4
----------
100%|██████████| 1563/1563 [07:49<00:00, 3.33it/s]
train Loss: 73.7741 Acc: 3.1369
100%|██████████| 313/313 [00:38<00:00, 8.11it/s]
test Loss: 73.5873 Acc: 3.1949
Epoch 3/4
----------
100%|██████████| 1563/1563 [07:49<00:00, 3.33it/s]
train Loss: 73.7493 Acc: 3.1331
100%|██████████| 313/313 [00:38<00:00, 8.12it/s]
test Loss: 73.6191 Acc: 3.1949
Epoch 4/4
----------
100%|██████████| 1563/1563 [07:49<00:00, 3.33it/s]
train Loss: 73.7289 Acc: 3.1939
100%|██████████| 313/313 [00:38<00:00, 8.13it/s]test Loss: 73.5955 Acc: 3.1949
Training complete in 42m 22s
Best val Acc: 3.194888
Tensorflow
Code:
import tensorflow_datasets as tfds
from tensorflow.keras import applications, models
import tensorflow as tf
import time
ds_test, ds_train = tfds.load('cifar10', split=['test', 'train'])
def resize(ip):
image = ip['image']
label = ip['label']
image = tf.image.resize(image, (224, 224))
image = tf.expand_dims(image,0)
label = tf.one_hot(label,10)
label = tf.expand_dims(label,0)
return (image, label)
ds_train_ = ds_train.map(resize)
ds_test_ = ds_test.map(resize)
model = applications.vgg16.VGG16(input_shape = (224, 224, 3), weights=None, classes=10)
model.compile(optimizer='adam', loss = 'categorical_crossentropy', metrics= ['accuracy'])
batch_size = 32
since = time.time()
history = model.fit(ds_train_,
batch_size = batch_size,
steps_per_epoch = len(ds_train)//batch_size,
epochs = 5,
validation_steps = len(ds_test),
validation_data = ds_test_,
shuffle = True,)
time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format( time_elapsed // 60, time_elapsed % 60 ))
Result:
Epoch 1/5
1562/1562 [==============================] - 125s 69ms/step - loss: 36.9022 - accuracy: 0.1069 - val_loss: 2.3031 - val_accuracy: 0.1000
Epoch 2/5
1562/1562 [==============================] - 129s 83ms/step - loss: 2.3031 - accuracy: 0.1005 - val_loss: 2.3033 - val_accuracy: 0.1000
Epoch 3/5
1562/1562 [==============================] - 129s 83ms/step - loss: 2.3035 - accuracy: 0.1069 - val_loss: 2.3031 - val_accuracy: 0.1000
Epoch 4/5
1562/1562 [==============================] - 129s 83ms/step - loss: 2.3038 - accuracy: 0.1024 - val_loss: 2.3030 - val_accuracy: 0.1000
Epoch 5/5
1562/1562 [==============================] - 129s 83ms/step - loss: 2.3028 - accuracy: 0.1024 - val_loss: 2.3033 - val_accuracy: 0.1000
Training complete in 11m 23s
| This is because in your TensorFlow code, the data pipeline is feeding a batch of 1 image into the model per step instead of a batch of 32 images.
Passing batch_size into model.fit does not really control the batch size when the data is in the form of datasets. The reason the log showed a seemingly correct number of steps per epoch is that you passed steps_per_epoch into model.fit.
To correctly set the batch size:
ds_test, ds_train = tfds.load('cifar10', split=['test', 'train'])
def resize(ip):
image = ip['image']
label = ip['label']
image = tf.image.resize(image, (224, 224))
label = tf.one_hot(label,10)
return (image, label)
train_size=len(ds_train)
test_size=len(ds_test)
ds_train_ = ds_train.shuffle(train_size).batch(32).map(resize)
ds_test_ = ds_test.shuffle(test_size).batch(32).map(resize)
model.fit call:
history = model.fit(ds_train_,
epochs = 1,
validation_data = ds_test_)
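A quick sanity check on the pipeline defined above is to pull a single batch and print its shape:
for images, labels in ds_train_.take(1):
    print(images.shape)  # (32, 224, 224, 3) now that batching is applied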
After fixing the problem, TensorFlow achieved speed comparable to PyTorch. On my machine, PyTorch took ~27 minutes per epoch while TensorFlow took ~24 minutes per epoch.
According to benchmarks from NVIDIA, PyTorch and TensorFlow have similar speed performance in most popular deep learning applications with real-world datasets and problem sizes. (Reference: https://developer.nvidia.com/deep-learning-performance-training-inference)
| https://stackoverflow.com/questions/69086092/ |
PyTorch: "one of the variables needed for gradient computation has been modified by an inplace operation" | I'm training a PyTorch RNN on a text file of song lyrics to predict the next character given a character.
Here's how my RNN is defined:
import torch.nn as nn
import torch.optim
class RNN(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(RNN, self).__init__()
self.hidden_size = hidden_size
# from input, previous hidden state to new hidden state
self.i2h = nn.Linear(input_size + hidden_size, hidden_size)
# from input, previous hidden state to output
self.i2o = nn.Linear(input_size + hidden_size, output_size)
# softmax on output
self.softmax = nn.LogSoftmax(dim = 1)
def forward(self, input, hidden):
combined = torch.cat((input, hidden), 1)
#get new hidden state
hidden = self.i2h(combined)
#get output
output = self.i2o(combined)
#apply softmax
output = self.softmax(output)
return output, hidden
def initHidden(self):
return torch.zeros(1, self.hidden_size)
rnn = RNN(input_size = num_chars, hidden_size = 200, output_size = num_chars)
criterion = nn.NLLLoss()
lr = 0.01
optimizer = torch.optim.AdamW(rnn.parameters(), lr = lr)
Here's my training function:
def train(train, target):
hidden = rnn.initHidden()
loss = 0
for i in range(len(train)):
optimizer.zero_grad()
# get output, hidden state from rnn given input char, hidden state
output, hidden = rnn(train[i].unsqueeze(0), hidden)
#returns the index with '1' - indentifying the index of the right character
target_class = (target[i] == 1).nonzero(as_tuple=True)[0]
loss += criterion(output, target_class)
loss.backward(retain_graph = True)
optimizer.step()
print("done " + str(i) + " loop")
return output, loss.item() / train.size(0)
When I run my training function, I get this error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [274, 74]], which is output 0 of TBackward, is at version 5; expected version 3 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
Interestingly, it makes it through two complete loops of the training function before giving me that error.
Now, when I remove the retain_graph = True from loss.backward(), I get this error:
RuntimeError: Trying to backward through the graph a second time (or directly access saved variables after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved variables after calling backward.
It shouldn't be trying to go backward through the graph multiple times here. Perhaps the graph is not getting cleared between training loops?
| The issue is that you are accumulating your loss values (and, at the same time, the computation graphs attached to them) on the variable loss, here:
loss += criterion(output, target_class)
In turn, this means that at every iteration you are trying to backpropagate through the current and all previous loss values that were computed in earlier forward passes. In this particular instance, where you are looping through your dataset, it isn't the right thing to do.
A simple fix is to accumulate only loss's underlying scalar value, using item, and to backpropagate on the current loss tensor alone:
total_loss = 0
for i in range(len(train)):
optimizer.zero_grad()
output, hidden = rnn(train[i].unsqueeze(0), hidden)
target_class = (target[i] == 1).nonzero(as_tuple=True)[0]
loss = criterion(output, target_class)
loss.backward()
total_loss += loss.item()
Since you are updating the model's parameters straight after backpropagation, you don't need to retain the graph in memory.
| https://stackoverflow.com/questions/69092258/ |
Pytorch: How to index a tensor? | I am new to PyTorch and am still wrapping my head around how to form a proper gather statement. I have a 4D input tensor of size (1,200,61,1632), where 1632 is the time dimension. I want to index it with a tensor idx which is size (4,1632) where each row of idx is a value I want to extract from the input tensor. So the rows of idx look like:
[0,20,30,0]
[0,150,9,1]
[0,180,100,2]
...
So that the output has size 1632. In other words I want to do this:
output = []
for i in range(1632):
output.append(input[idx[0,i], idx[1,i], idx[2,i], idx[3,i]])
Is this an appropriate use case for torch.gather? Looking at the documentation for gather, it says the input and index tensors must have the same shape.
| Since PyTorch doesn't offer an implementation of ravel_multi_index, the ugly way of doing it is this one:
output = input[idx[0, :], idx[1, :], idx[2, :], idx[3, :]]
In NumPy, you could do it this way:
output = np.take(input, np.ravel_multi_index(idx, input.shape))
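If you want to mirror that NumPy approach in PyTorch, a hedged sketch (with the strides derived from the stated (1, 200, 61, 1632) shape) is to build the linear indices by hand and use torch.take:
strides = torch.tensor([[200 * 61 * 1632], [61 * 1632], [1632], [1]])
flat_idx = (idx * strides).sum(dim=0)  # (1632,) linear indices
output = torch.take(input, flat_idx)   # torch.take treats input as 1-D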
| https://stackoverflow.com/questions/69095350/ |
pic should be Tensor or ndarray. Got | I am a beginner in PyTorch. I want to train a network using the NYU dataset, but I am getting an error.
The error happens when I use the DataLoader to load my local dataset; I print the data to demonstrate that the code is right:
test=Mydataset(data_root,transforms,'image_train')
test2=DataLoader(test,batch_size=4,num_workers=0,shuffle=False)
for idx,data in enumerate(test2):
print(idx)
Here's the rest of the code with the Mydataset definition:
from __future__ import division,absolute_import,print_function
from PIL import Image
from torch.utils.data import DataLoader,Dataset
from torchvision.transforms import transforms
data_root='D:/AuxiliaryDocuments/NYU/'
transforms=transforms.Compose([transforms.ToPILImage(),
transforms.Resize(224,101),
transforms.ToTensor()])
filename_txt={'image_train':'image_train.txt','image_test':'image_test.txt',
'depth_train':'depth_train.txt','depth_test':'depth_test.txt'}
class Mydataset(Dataset):
def __init__(self,data_root,transformation,data_type):
self.transform=transformation
self.image_path_txt=filename_txt[data_type]
self.sample_list=list()
f=open(data_root+'/'+data_type+'/'+self.image_path_txt)
lines=f.readlines()
for line in lines:
line=line.strip()
line=line.replace(';','')
self.sample_list.append(line)
f.close()
def __getitem__(self, index):
item=self.sample_list[index]
img=Image.open(item)
if self.transform is not None:
img=self.transform(img)
idx=index
return idx,img
def __len__(self):
return len(self.sample_list)
| The error in the title is different from the one in the image (which you should have posted as text, by the way). Assuming the one from the image is correct, your problem is the following:
Your transforms pipeline begins with transforms.ToPILImage(), but the image is already opened as a PIL image in the dataset's __getitem__. If you remove that transformation, the code should run just fine.
# [...]
transforms = transforms.Compose([
transforms.ToPILImage(), # <<< remove this
transforms.Resize(224, 101),
transforms.ToTensor()
])
# [...]
class Mydataset(Dataset):
# [...]
def __getitem__(self, index):
item = self.sample_list[index]
img = Image.open(item) # <<< this image is already a PIL image
if self.transform is not None:
img = self.transform(img)
idx = index
return idx, img
# [...]
| https://stackoverflow.com/questions/69095479/ |
Pytorch RNN error: RuntimeError: input must have 3 dimensions got 1 | I am trying to train an RNN based off the code here
I also found two similar posts, but was not able to extrapolate from them what I should do to fix my problem here and here
The error is pretty easy to interpret: the model is expecting 3 dimensions, but I am only giving it 1. However, I do not know where to fix the issue. I know that a good Stack post should include data, but I am not sure how to include example tensors in the post. Apologies.
My inputs are 300-d word embeddings and my outputs are one-hot encoded vectors of length 11, where the model makes a classification choice in each of the 11 output dimensions.
I will start with the dataloader then go from there with the code.
from torch.utils.data import Dataset, DataLoader
class CustomDataset(Dataset):
def __init__(self, dat, labels):
self.labels = labels
self.dat = dat
def __len__(self):
return len(self.labels)
def __getitem__(self, idx):
label = self.labels[idx]
dat = self.dat[idx]
sample = {"Sample": dat, "Class": label}
return sample
I define my vanilla RNN as follows.
class VanillaRNN(nn.Module):
def __init__(self, input_size, output_size, hidden_dim, n_layers):
super(VanillaRNN, self).__init__()
# Defining some parameters
self.hidden_dim = hidden_dim
self.n_layers = n_layers
#Defining the layers
# RNN Layer
self.rnn = nn.RNN(input_size, hidden_dim, n_layers, batch_first=True)
# Fully connected layer
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, inputs):
batch_size = inputs.size(0)
# Initializing hidden state for first input using method defined below
hidden = self.init_hidden(batch_size)
# Passing in the input and hidden state into the model and obtaining outputs
out, hidden = self.rnn(inputs, hidden)
# Reshaping the outputs such that it can be fit into the fully connected layer
out = out.contiguous().view(-1, self.hidden_dim)
out = self.fc(out)
return out, hidden
def init_hidden(self, batch_size):
# This method generates the first hidden state of zeros which we'll use in the forward pass
# We'll send the tensor holding the hidden state to the device we specified earlier as well
hidden = torch.zeros(self.n_layers, batch_size, self.hidden_dim)
return hidden
and my training loop as follows
def plot_train_val(x, train, val, train_label,
val_label, title, y_label,
color):
plt.plot(x, train, label=train_label, color=color)
plt.plot(x, val, label=val_label, color=color, linestyle='--')
plt.legend(loc='lower right')
plt.xlabel('epoch')
plt.ylabel(y_label)
plt.title(title)
def count_parameters(model):
parameters = sum(p.numel() for p in model.parameters() if p.requires_grad)
return parameters
def init_weights(m):
if type(m) in (nn.Linear, nn.Conv1d):
nn.init.xavier_uniform_(m.weight)
# Training functioN
def train(model, device, train_loader, valid_loader, epochs, learning_rate):
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
train_loss, validation_loss = [], []
train_acc, validation_acc = [], []
for epoch in range(epochs):
#train
model.train()
running_loss = 0.
correct, total = 0, 0
steps = 0
for idx, batch in enumerate(train_loader):
text = batch["Sample"].to(device)
target = batch['Class'].to(device)
target = torch.autograd.Variable(target).long()
text, target = text.to(device), target.to(device)
# add micro for coding training loop
optimizer.zero_grad()
output, hideden = model(text)
print(output.shape, target.shape, target.view(-1).shape)
loss = criterion(output, target.view(-1))
loss.backward()
optimizer.step()
steps += 1
running_loss += loss.item()
# get accuracy
_, predicted = torch.max(output, 1)
print(predicted)
#predicted = torch.round(output.squeeze())
total += target.size(0)
correct += (predicted == target).sum().item()
train_loss.append(running_loss/len(train_loader))
train_acc.append(correct/total)
print(f'Epoch: {epoch + 1}, '
f'Training Loss: {running_loss/len(train_loader):.4f}, '
f'Training Accuracy: {100*correct/total: .2f}%')
# evaluate on validation data
model.eval()
running_loss = 0.
correct, total = 0, 0
with torch.no_grad():
for idx, batch in enumerate(valid_loader):
text = batch["Sample"].to(device)
print(type(text), text.shape)
target = batch['Class'].to(device)
target = torch.autograd.Variable(target).long()
text, target = text.to(device), target.to(device)
optimizer.zero_grad()
output = model(text)
loss = criterion(output, target)
running_loss += loss.item()
# get accuracy
_, predicted = torch.max(output, 1)
#predicted = torch.round(output.squeeze())
total += target.size(0)
correct += (predicted == target).sum().item()
validation_loss.append(running_loss/len(valid_loader))
validation_acc.append(correct/total)
print (f'Validation Loss: {running_loss/len(valid_loader):.4f}, '
f'Validation Accuracy: {100*correct/total: .2f}%')
return train_loss, train_acc, validation_loss, validation_acc
When I run the model with the following, I get the error provided below. Thanks in advance for any help.
# Model hyperparamters
#vocab_size = len(word_array)
learning_rate = 1e-3
output_size = 11
input_size = 300
epochs = 10
hidden_dim = 100
n_layers = 2
# Initialize model, training and testing
set_seed(SEED)
vanilla_rnn_model = VanillaRNN(input_size, output_size, hidden_dim, n_layers)
#vanilla_rnn_model = VanillaRNN(output_size, input_size, RNN_size, fc_size, DEVICE)
vanilla_rnn_model.to(DEVICE)
vanilla_rnn_start_time = time.time()
vanilla_train_loss, vanilla_train_acc, vanilla_validation_loss, vanilla_validation_acc = train(vanilla_rnn_model,
DEVICE,
train_loader,
valid_loader,
epochs = epochs,
learning_rate = learning_rate)
The error :(
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-31-bfd2f8f3456f> in <module>()
19 valid_loader,
20 epochs = epochs,
---> 21 learning_rate = learning_rate)
22 print("--- Time taken to train = %s seconds ---" % (time.time() - vanilla_rnn_start_time))
23 #test_accuracy = test(vanilla_rnn_model, DEVICE, test_iter)
6 frames
<ipython-input-30-db1fa6c8b625> in train(model, device, train_loader, valid_loader, epochs, learning_rate)
45 # add micro for coding training loop
46 optimizer.zero_grad()
---> 47 output, hideden = model(text)
48 print(output.shape, target.shape, target.view(-1).shape)
49 loss = criterion(output, target.view(-1))
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
<ipython-input-26-c34b90b3cbc3> in forward(self, x)
21
22 # Passing in the input and hidden state into the model and obtaining outputs
---> 23 out, hidden = self.rnn(x, hidden)
24
25 # Reshaping the outputs such that it can be fit into the fully connected layer
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/rnn.py in forward(self, input, hx)
263 assert hx is not None
264 input = cast(Tensor, input)
--> 265 self.check_forward_args(input, hx, batch_sizes)
266 _impl = _rnn_impls[self.mode]
267 if batch_sizes is None:
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/rnn.py in check_forward_args(self, input, hidden, batch_sizes)
227
228 def check_forward_args(self, input: Tensor, hidden: Tensor, batch_sizes: Optional[Tensor]):
--> 229 self.check_input(input, batch_sizes)
230 expected_hidden_size = self.get_expected_hidden_size(input, batch_sizes)
231
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/rnn.py in check_input(self, input, batch_sizes)
201 raise RuntimeError(
202 'input must have {} dimensions, got {}'.format(
--> 203 expected_input_dim, input.dim()))
204 if self.input_size != input.size(-1):
205 raise RuntimeError(
RuntimeError: input must have 3 dimensions, got 1
| First, you need to wrap your dataset in a proper dataloader, and you can do something like this:
from torch.utils.data import DataLoader
# [...]
# define a batch_size, I'll use 4 as an example
batch_size = 4
train_dset = CustomDataset(X2, y) # your current code (change train_loader to train_dset)
train_loader = DataLoader(train_dset, batch_size=batch_size, shuffle=True)
At this point, text should now be [4, 300].
Then, you said that your sequence length is equal to 1. To fix the error, you can add the length dimension using unsqueeze:
# [...]
output, hideden = model(text.unsqueeze(1))
# [...]
Now, text should be [4, 1, 300], and here you have the 3 dimensions the RNN forward call is expecting (your RNN has batch_first=True):
input: tensor of shape (L, N, H_in) when batch_first=False or (N, L, H_in) when batch_first=True containing the features of the input sequence. (...)
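You can confirm the effect of unsqueeze on a dummy batch (illustrative shapes only):
text = torch.rand(4, 300)        # (batch, features) from the dataloader
print(text.unsqueeze(1).shape)   # torch.Size([4, 1, 300]) -> (N, L, H_in) with L=1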
| https://stackoverflow.com/questions/69095808/ |
Import torch ModuleNotFoundError | enter image description here
import torch does not work in my Jupyter notebook, but it works fine in my terminal.
| This looks like an interpreter configuration problem: your Jupyter notebook may be using a different interpreter than your terminal. For example, if you are using Anaconda, the default env is (base), so to add a module to your environment, you go to the Anaconda prompt and do this:
$ conda activate base
(base) $ conda install pytorch torchvision torchaudio cpuonly -c pytorch
Use the above command to install the non-CUDA version of PyTorch. If you're not using Anaconda, you can just go to your terminal and do this:
$ workon myenv
(myenv) $ pip3 install torch torchvision torchaudio
And then try again. You need to do all this in the same environment as your jupyter notebook.
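To confirm which interpreter the notebook is actually running, you can check from a notebook cell:
import sys
print(sys.executable)  # install torch into this interpreter's environment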
| https://stackoverflow.com/questions/69096945/ |
Memory Leak in Pytorch Autograd of WGAN-GP | I want to use WGAN-GP, and when I run the code, it gives me an error:
def calculate_gradient_penalty(real_images, fake_images):
t = torch.rand(real_images.size(0), 1, 1, 1).to(real_images.device)
t = t.expand(real_images.size())
interpolates = t * real_images + (1 - t) * fake_images
interpolates.requires_grad_(True)
disc_interpolates = D(interpolates)
grad = torch.autograd.grad(
outputs=disc_interpolates, inputs=interpolates,
grad_outputs=torch.ones_like(disc_interpolates),
create_graph=True, retain_graph=True, allow_unused=True)[0]
grad_norm = torch.norm(torch.flatten(grad, start_dim=1), dim=1)
loss_gp = torch.mean((grad_norm - 1) ** 2) * lambda_term
return loss_gp
RuntimeError Traceback (most recent call
last) in
/opt/conda/lib/python3.8/site-packages/torch/tensor.py in
backward(self, gradient, retain_graph, create_graph, inputs)
243 create_graph=create_graph,
244 inputs=inputs)
--> 245 torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
246
247 def register_hook(self, hook):
/opt/conda/lib/python3.8/site-packages/torch/autograd/init.py in
backward(tensors, grad_tensors, retain_graph, create_graph,
grad_variables, inputs)
143 retain_graph = create_graph
144
--> 145 Variable.execution_engine.run_backward(
146 tensors, grad_tensors, retain_graph, create_graph, inputs,
147 allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag
RuntimeError: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 2;
15.75 GiB total capacity; 13.76 GiB already allocated; 2.75 MiB free; 14.50 GiB reserved in total by PyTorch)
The train code:
%%time
d_progress = []
d_fake_progress = []
d_real_progress = []
penalty = []
g_progress = []
data = get_infinite_batches(benign_data_loader)
one = torch.FloatTensor([1]).to(device)
mone = (one * -1).to(device)
for g_iter in range(generator_iters):
print('----------G Iter-{}----------'.format(g_iter+1))
for p in D.parameters():
p.requires_grad = True # This is by Default
d_loss_real = 0
d_loss_fake = 0
Wasserstein_D = 0
for d_iter in range(critic_iter):
D.zero_grad()
images = data.__next__()
if images.size()[0] != batch_size:
continue
# Train Discriminator
# Real Images
images = images.to(device)
z = torch.randn(batch_size, 100, 1, 1).to(device)
d_loss_real = D(images)
d_loss_real = d_loss_real.mean(0).view(1)
d_loss_real.backward(mone)
# Fake Images
fake_images = G(z)
d_loss_fake = D(fake_images)
d_loss_fake = d_loss_fake.mean(0).view(1)
d_loss_fake.backward(one)
# Calculate Penalty
gradient_penalty = calculate_gradient_penalty(images.data, fake_images.data)
gradient_penalty.backward()
# Total Loss
d_loss = d_loss_fake - d_loss_real + gradient_penalty
Wasserstein_D = d_loss_real - d_loss_fake
d_optimizer.step()
print(f'D Iter:{d_iter+1}/{critic_iter} Loss:{d_loss.detach().cpu().numpy()}')
time.sleep(0.1)
d_progress.append(d_loss) # Store Loss
d_fake_progress.append(d_loss_fake)
d_real_progress.append(d_loss_real)
penalty.append(gradient_penalty)
# Generator Updata
for p in D.parameters():
p.requires_grad = False # Avoid Computation
# Train Generator
# Compute with Fake
G.zero_grad()
z = torch.randn(batch_size, 100, 1, 1).to(device)
fake_images = G(z)
g_loss = D(fake_images)
g_loss = g_loss.mean().mean(0).view(1)
g_loss.backward(one)
# g_cost = -g_loss
g_optimizer.step()
print(f'G Iter:{g_iter+1}/{generator_iters} Loss:{g_loss.detach().cpu().numpy()}')
g_progress.append(g_loss) # Store Loss
Does anyone know how to solve this problem?
| All loss tensors which are saved outside of the optimization cycle (i.e. outside the for g_iter in range(generator_iters) loop) need to be detached from the graph. Otherwise, you are keeping all previous computation graphs in memory.
As such, you should detach anything that gets appended to d_progress, d_fake_progress, d_real_progress, penalty, and g_progress.
You can do so by converting the tensor to a scalar value with torch.Tensor.item, the graph will free itself on the following iteration. Change the following lines:
d_progress.append(d_loss) # Store Loss
d_fake_progress.append(d_loss_fake)
d_real_progress.append(d_loss_real)
penalty.append(gradient_penalty)
#######
g_progress.append(g_loss) # Store Loss
to:
d_progress.append(d_loss.item()) # Store Loss
d_fake_progress.append(d_loss_fake.item())
d_real_progress.append(d_loss_real.item())
penalty.append(gradient_penalty.item())
#######
g_progress.append(g_loss.item()) # Store Loss
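Alternatively, if you want to keep the tensors themselves (e.g. for later torch-based plotting), detaching also prevents the computation graphs from being retained:
d_progress.append(d_loss.detach().cpu())  # tensor kept, graph reference dropped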
| https://stackoverflow.com/questions/69098077/ |
Setting results of torch.gather(...) calls | I have a 2D PyTorch tensor of shape n by m. I want to index the second dimension using a list of indices (which could be done with torch.gather) and then also set new values at the result of the indexing.
Example:
data = torch.tensor([[0,1,2], [3,4,5], [6,7,8]]) # shape (3,3)
indices = torch.tensor([1,2,1], dtype=torch.long).unsqueeze(-1) # shape (3,1)
# data tensor:
# tensor([[0, 1, 2],
# [3, 4, 5],
# [6, 7, 8]])
I want to select the specified indices per row (which would be [1,5,7]) but then also set these values to another number, e.g. 42.
I can select the desired columns row wise by doing:
data.gather(1, indices)
tensor([[1],
[5],
[7]])
data.gather(1, indices)[:] = 42 # **This does NOT work**, since the result of gather
# does not use the same storage as the original tensor
which is fine, but I would like to change these values now, and have the change also affect the data tensor.
I can do what I want to achieve using this, but it seems to be very un-pythonic:
max_index = torch.max(indices)
for i in range(0, max_index + 1):
mask = (indices == i).nonzero(as_tuple=True)[0]
data[mask, i] = 42
print(data)
# tensor([[ 0, 42, 2],
# [ 3, 4, 42],
# [ 6, 42, 8]])
Any hints on how to do that more elegantly?
| What you are looking for is torch.scatter_ with the value option.
Tensor.scatter_(dim, index, src, reduce=None) → Tensor
Writes all values from the tensor src into self at the indices specified in the index tensor. For each value in src, its output index is specified by
its index in src for dimension != dim and by the corresponding value
in index for dimension = dim.
With 2D tensors as input and dim=1, the operation is:
self[i][index[i][j]] = src[i][j]
No mention of the value parameter though...
With value=42, and dim=1, this will have the following effect on data:
data[i][index[i][j]] = 42
Here applied in-place:
>>> data.scatter_(index=indices, dim=1, value=42)
>>> data
tensor([[ 0, 42, 2],
[ 3, 4, 42],
[ 6, 42, 8]])
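Since you have exactly one column index per row here, plain advanced indexing is an equivalent alternative:
>>> rows = torch.arange(data.size(0))
>>> data[rows, indices.squeeze(1)] = 42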
| https://stackoverflow.com/questions/69100302/ |
Hello World (aka MNIST) with feed-forward gets less accuracy with DistributedDataParallel (DDP) than a plain model on only one node | This is a cross-post of my question on the PyTorch forum.
When using DistributedDataParallel (DDP) from PyTorch on only one node I expect it to be the same as a script without DistributedDataParallel.
I created a simple MNIST training setup with a three-layer feed-forward neural network. It gives significantly lower accuracy (around 10%) if trained with the same hyperparameters, same epochs, and generally the same code, except for the usage of the DDP library.
I created a GitHub repository demonstrating my problem.
I hope it is a usage error of the library, but I do not see where the problem is; colleagues of mine have already audited the code. Also, I tried it on macOS with a CPU and on three different GPU/Ubuntu combinations (one with a 1080 Ti, one with a 2080 Ti, and a cluster with P100s), all giving the same results. Seeds are fixed for reproducibility.
| You are using different batch sizes in your two experiments: batch_size=128, and batch_size=32 for mnist-distributed.py and mnist-plain.py respectively. This would indicate that you won't have the same performance result with those two trainings.
| https://stackoverflow.com/questions/69100602/ |
Difficulty setting batch size correctly in 2-layer RNN | I am building an RNN that makes a multi-class classification over 11 output dimensions. The inputs are word embeddings that I took from a pretrained GloVe model.
The error I get is (full traceback at the end of the question):
ValueError: Expected input batch_size (1) to match target batch_size (11).
Note that here I use batch_size=1, and the error says "expected batch size 1 to match target batch_size (11)". However, if I change batch size to 11, the error changes to:
ValueError: Expected input batch_size (11) to match target batch_size (121).
I think that the error is coming from the shape of text which is torch.Size([11, 300]), which lacks a sequence length, but I thought that if I do not assign a seq length it defaults to 1. However, I do not know how to add this in.
Training loop:
def train(model, device, train_loader, valid_loader, epochs, learning_rate):
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
train_loss, validation_loss = [], []
train_acc, validation_acc = [], []
for epoch in range(epochs):
#train
model.train()
running_loss = 0.
correct, total = 0, 0
steps = 0
for idx, batch in enumerate(train_loader):
text = batch["Sample"].to(device)
target = batch['Class'].to(device)
print(text.shape, target.shape)
text, target = text.to(device), target.to(device)
# add micro for coding training loop
optimizer.zero_grad()
print(text.shape)
output, hidden = model(text.unsqueeze(1))
#print(output.shape, target.shape, target.view(-1).shape)
loss = criterion(output, target.view(-1))
loss.backward()
optimizer.step()
steps += 1
running_loss += loss.item()
# get accuracy
_, predicted = torch.max(output, 1)
print(predicted)
#predicted = torch.round(output.squeeze())
total += target.size(0)
correct += (predicted == target).sum().item()
train_loss.append(running_loss/len(train_loader))
train_acc.append(correct/total)
print(f'Epoch: {epoch + 1}, '
f'Training Loss: {running_loss/len(train_loader):.4f}, '
f'Training Accuracy: {100*correct/total: .2f}%')
# evaluate on validation data
model.eval()
running_loss = 0.
correct, total = 0, 0
with torch.no_grad():
for idx, batch in enumerate(valid_loader):
text = batch["Sample"].to(device)
print(type(text), text.shape)
target = batch['Class'].to(device)
target = torch.autograd.Variable(target).long()
text, target = text.to(device), target.to(device)
optimizer.zero_grad()
output = model(text)
loss = criterion(output, target)
running_loss += loss.item()
# get accuracy
_, predicted = torch.max(output, 1)
#predicted = torch.round(output.squeeze())
total += target.size(0)
correct += (predicted == target).sum().item()
validation_loss.append(running_loss/len(valid_loader))
validation_acc.append(correct/total)
print (f'Validation Loss: {running_loss/len(valid_loader):.4f}, '
f'Validation Accuracy: {100*correct/total: .2f}%')
return train_loss, train_acc, validation_loss, validation_acc
This is how I call the training loop:
# Model hyperparamters
#vocab_size = len(word_array)
learning_rate = 1e-3
hidden_dim = 100
output_size = 11
input_size = 300
epochs = 10
n_layers = 2
# Initialize model, training and testing
set_seed(SEED)
vanilla_rnn_model = VanillaRNN(input_size, output_size, hidden_dim, n_layers)
vanilla_rnn_model.to(DEVICE)
vanilla_rnn_start_time = time.time()
vanilla_train_loss, vanilla_train_acc, vanilla_validation_loss, vanilla_validation_acc = train(vanilla_rnn_model,
DEVICE,
train_loader,
valid_loader,
epochs = epochs,
learning_rate = learning_rate)
This is how I create the dataloaders:
# Splitting dataset
# define a batch_size, I'll use 4 as an example
batch_size = 1
train_dset = CustomDataset(X2, y) # create data set
train_loader = DataLoader(train_dset, batch_size=batch_size, shuffle=True) #load data with batch size
valid_dset = CustomDataset(X2, y)
valid_loader = DataLoader(valid_dset, batch_size=batch_size, shuffle=True)
g_seed = torch.Generator()
g_seed.manual_seed(SEED)
Full traceback:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-23-bfd2f8f3456f> in <module>()
19 valid_loader,
20 epochs = epochs,
---> 21 learning_rate = learning_rate)
22 print("--- Time taken to train = %s seconds ---" % (time.time() - vanilla_rnn_start_time))
23 #test_accuracy = test(vanilla_rnn_model, DEVICE, test_iter)
3 frames
<ipython-input-22-16748701034f> in train(model, device, train_loader, valid_loader, epochs, learning_rate)
47 output, hidden = model(text.unsqueeze(1))
48 #print(output.shape, target.shape, target.view(-1).shape)
---> 49 loss = criterion(output, target.view(-1))
50 loss.backward()
51 optimizer.step()
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py in forward(self, input, target)
1119 def forward(self, input: Tensor, target: Tensor) -> Tensor:
1120 return F.cross_entropy(input, target, weight=self.weight,
-> 1121 ignore_index=self.ignore_index, reduction=self.reduction)
1122
1123
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
2822 if size_average is not None or reduce is not None:
2823 reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2824 return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
2825
2826
ValueError: Expected input batch_size (1) to match target batch_size (11).
| You should not be using .view(-1). This line:
loss = criterion(output, target.view(-1))
should be:
loss = criterion(output, target)
It is effectively removing your batch dimension. For batch_size=1, it changes (1, 11) to (11,). When you changed batch_size to 11, then the view changed the shape from (11, 11) to (121,), hence the error.
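For reference, nn.CrossEntropyLoss expects logits of shape (N, C) and class-index targets of shape (N,); a minimal sketch with illustrative shapes:
criterion = nn.CrossEntropyLoss()
output = torch.randn(4, 11)           # (batch_size, num_classes) logits
target = torch.randint(0, 11, (4,))   # (batch_size,) class indices
print(criterion(output, target))      # shapes match, no error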
| https://stackoverflow.com/questions/69104293/ |
Problem with neural network: mat1 and mat2 shapes cannot be multiplied | I implemented a simple neural network like this:
import torch
from torch import nn
class Simple_NN(nn.Module):
'''
Multilayer Perceptron.
'''
def __init__(self, input_dim):
super().__init__()
self.input = input_dim
#self.out = out_dim
self.layer = nn.Linear(self.input, 1, bias=False)
def getweights(self):
return self.layer.weight
def normalize(self):
self.layer.weight.data /= self.layer.weight.data.sum()
return self.layer.weight
def forward(self, x, dim = 0):
sort = torch.sort(x, dim, descending = True)[0]
#top = torch.topk(x, 4, dim)
sort = self.layer(sort)
return sort
when I run this piece of code:
outputs = torch.tensor([[1.9, 0.4, 1.3, 0.8, 0.2, 0.0],[1.7, 1.4, 0.3, 1.8, 1.2, 1.1]])
model = Simple_NN(input_dim = outputs.shape[0])
model.getweights()
model.normalize()
I get the following result:
Parameter containing:
tensor([[0.9772, 0.0228]], requires_grad=True)
but, when I run this line:
model(outputs, dim=0)
I get this error:
<ipython-input-1-dd06de9bb6ad> in forward(self, x, dim)
20 sort = torch.sort(x, dim, descending = True)[0]
21 #top = torch.topk(x, 4, dim)
---> 22 sort = self.layer(sort)
23 return sort
RuntimeError: mat1 and mat2 shapes cannot be multiplied (2x6 and 2x1)
How can I solve this problem?
| As you didn't provide more details, here are 2 possible ways to solve this:
If the batch_size=2, the input_dim should be 6, not 2:
model = Simple_NN(input_dim = outputs.shape[1]) # change [0] to [1]
If the batch_size=6, then outputs needs to be transposed:
model(outputs.t(), dim=0) # add .t()
I think the correct solution to your case is the first one, but both of them work. It depends on what you actually want.
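The underlying rule is that nn.Linear operates on the last dimension of its input, which must equal in_features:
layer = nn.Linear(6, 1, bias=False)
print(layer(torch.rand(2, 6)).shape)  # torch.Size([2, 1]): batch of 2, 6 features each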
| https://stackoverflow.com/questions/69106204/ |
To get GPU support to create neural networks, is a Conda environment necessary? | So I recently started learning deep learning with the TensorFlow library in Python, and came across a problem. It turns out (and I genuinely didn't know this) that to use the GPU, especially with TensorFlow and PyTorch, you need the CUDA toolkit, the cuDNN libraries, and Visual Studio Community. Fair enough, I'll download them; I've already finished downloading VS Community.
But what I want to ask is: is setting up a Conda environment necessary? Can it not be done with pip?
Sorry if this is not the right place to ask this kind of question, but I couldn't find anything in detail about this.
| No, you can just use pip install tensorflow-gpu to install TensorFlow with GPU support. It's your choice if you want to create Conda environment or not. But before using that pip command, make sure you have CUDA 11.2 and cuDNN 8.1.
And in case of PyTorch just go to this site and copy the command and install it.
| https://stackoverflow.com/questions/69107178/ |
Generating probabilites from patches of image | I am working with an image of size 512x512. The image is divided into patches using einops with patch size of 32. The number of patches overall is 256, in other words, we get a new "image" of size 256x1024.
Since this image is actually a mask for a segmentation problem, the image is actually comprised of only 4 values (4 classes): 0 for the background, 1 for the first class, 2 for the second class, 3 for the third class.
My goal is to take every patch, and compute for every class C the following:
Number of pixels labeled C / total number of pixels in this patch.
This should give me an array of size 4 where the first entry is the number of background pixels (labeled as 0) over the total number of pixels in the patch (1024); the second, third and fourth entries are the same but for the corresponding classes.
In theory, I know that I need to iterate over every single patch and then count how many pixels of each class exist in the current patch, then divide by 1024. Doing this 256 times yields exactly what I want. The problem is that I have a (very) large number of images that I need to do this for, and the size of 512 is just an example to make the question simpler, therefore a for loop is out of the question.
I know that I can get the result that I want using numpy. I tried both numpy.apply_over_axes and numpy.apply_along_axis, but I don't know which one is better suited for this task; there is also numpy.where, which I don't know how to apply here.
Here is what I did:
from einops import rearrange
import numpy as np
labn = np.random.randint(4,size= (512,512)) # Every pixel in this image has a value in {0,1,2,3}
to_patch = rearrange(labn, "(h p1) (w p2) -> (h w) (p1 p2)", p1=32, p2=32)
print(to_patch.shape) # (256,1024)
c0 = np.full(1024, 0)
c1 = np.full(1024, 1)
c2 = np.full(1024, 2)
c3 = np.full(1024, 3)
def f(a):
_c0 = a == c0
_c1 = a == c1
_c2 = a == c2
_c3 = a == c3
pr = np.array([np.sum(_c0), np.sum(_c1), np.sum(_c2), np.sum(_c3)]) / 1024
return pr
resf = np.apply_along_axis(f, 1, to_patch)
print(resf.shape) # (256, 4)
Two things:
I want the output to be 256x4, where every array along the second axis sums to one.
Is there a faster/better/pythonic way to do this, preferably vectorized?
EDIT: I forgot to add the sum, so now I do get 256x4.
| There is a built-in function to count occurrences called torch.histc, it is similar to Python's collections.Counter.
torch.histc(input, bins=100, min=0, max=0, *, out=None) → Tensor
Computes the histogram of a tensor.
The elements are sorted into equal width bins between min and max. If
min and max are both zero, the minimum and maximum values of the data
are used.
Elements lower than min and higher than max are ignored.
You need to specify the number of bins (here, the number of classes C) as well as the min and max values for binning. Also, it doesn't operate per row: the resulting tensor contains global statistics of the input tensor regardless of its dimensions. As a possible workaround, you can iterate through your patches, calling torch.histc each time, then stack the results and normalize:
resf = torch.stack([torch.histc(patch, C, min=0, max=C-1) for patch in x]) / x.size(1)
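A fully vectorized alternative (no Python loop) is to one-hot encode the patches and average over the pixel axis, using torch.nn.functional.one_hot:
import torch.nn.functional as F
resf = F.one_hot(x.long(), num_classes=C).float().mean(dim=1)  # (256, C), rows sum to 1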
| https://stackoverflow.com/questions/69108737/ |
torch.return_types.max as Tensor | I try to pass torch.max() return type (torch.return_types.max) as argument to function torch.tile():
torch.tile(torch.max(x), (1, 1, 1, 5))
The error is: TypeError: tile(): argument 'input' (position 1) must be Tensor, not torch.return_types.max.
How can I convert torch.return_types.max to Tensor? Maybe I should use another function to find maximum in Tensor?
| For this error to occur, you have to be passing some dim= argument, because only then does torch.max return a named tuple of (values, indices).
You can fix that error by using only the first output:
torch.tile(torch.max(x, dim=0)[0], (1, 1, 1, 5))
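Equivalently, the named tuple returned by torch.max exposes its components as attributes, which reads a bit more clearly:
torch.tile(torch.max(x, dim=0).values, (1, 1, 1, 5))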
| https://stackoverflow.com/questions/69109069/ |
With torch or torchvision, how can I resize and crop an image batch, and get both the resizing scales and the new images? | I want to transform a batch of images such that they are randomly cropped (with fixed ratio) and resized (scaled). However, I want not only the new images but also a tensor of the scale factors applied to each image. For example, this torchvision transform will do the cropping and resizing I want:
scale_transform = torchvision.transforms.RandomResizedCrop(224, scale=(0.08, 1.0), ratio=(1.0, 1.0))
images_scaled = scale_transform(images_original)
But I also want to know the scale factors. How might I get those scale factors, or tackle this in a different way?
| If I understand correctly, you want to get the ratio by which the cropped part was resized. You can get it by computing the x and y size of the cropped part and dividing the original image size by it.
class MyRandomResizedCrop(object):
def __init__(self, size, scale, ratio):
self.t = torchvision.transforms.RandomResizedCrop(size, scale=scale, ratio=ratio)
self.size = size
self.scale = scale
self.ratio = ratio
def __call__(self, sample):
sample = F.to_pil_image(sample)
crop_size = self.t.get_params(sample, self.scale, self.ratio)  # returns (top, left, height, width)
x_size = crop_size[3]  # crop width
y_size = crop_size[2]  # crop height
x_ratio = sample.size[0] / x_size
y_ratio = sample.size[1] / y_size
ratio = (x_ratio, y_ratio)
output = F.crop(sample, *crop_size)
output = F.resize(output, self.size)
return ratio, output
import torchvision
from PIL import Image
import torchvision.transforms.functional as F
size = 244
scale = (0.08, 1.0)
ratio = (1.0, 1.0)
t = MyRandomResizedCrop(size, scale, ratio)
img = torch.rand((3,1024,1024), dtype=torch.float32)
r, img = t(img)
| https://stackoverflow.com/questions/69110968/ |
Plot the transformed (augmented) images in pytorch | I want to use one of the image augmentation techniques (for example rotation or horizontal flip) and apply it to some images of the CIFAR-10 dataset and plot them in PyTorch.
I know that we can use the following code to augmented images:
from torchvision import models, datasets, transforms
from torchvision.datasets import CIFAR10
data_transforms = transforms.Compose([
# add augmentations
transforms.RandomHorizontalFlip(p=0.5),
# The output of torchvision datasets are PILImage images of range [0, 1].
# We transform them to Tensors of normalized range [-1, 1]
transforms.ToTensor(),
transforms.Normalize(mean, std)
])
and then I use the transforms above when loading the CIFAR-10 dataset:
train_set = CIFAR10(
root='./data/',
train=True,
download=True,
transform=data_transforms)
As far as I know, when this code is used, all CIFAR10 datasets are transformed.
Question
My question is: how can I apply data transforms or augmentation techniques to some images of the dataset and plot them? For example, 10 images and their augmented versions.
|
when this code is used, all CIFAR10 datasets are transformed
Actually, the transform pipeline will only be called when images in the dataset are fetched via the __getitem__ function by the user or through a data loader. So at this point in time, train_set doesn't contain augmented images, they are transformed on the fly.
You will need to construct another dataset without augmentations.
>>> non_augmented = CIFAR10(
... root='./data/',
... train=True,
... download=True)
>>> train_set = CIFAR10(
... root='./data/',
... train=True,
... download=True,
... transform=data_transforms)
Stack some images together:
>>> imgs = torch.stack((*[non_augmented[i][0] for i in range(10)],
*[train_set[i][0] for i in range(10)]))
>>> imgs.shape
torch.Size([20, 3, 32, 32])
Then torchvision.utils.make_grid can be useful to create the desired layout:
>>> grid = torchvision.utils.make_grid(imgs, nrow=10)
There you have it!
>>> transforms.ToPILImage()(grid)
| https://stackoverflow.com/questions/69110975/ |
torch.utils.data.DataLoader - why it adds a dimension | from torchvision import datasets, transforms
transform = transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5,), (0.5,)),])
trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True) # A
trainloader = torch.utils.data.DataLoader(trainset.train_data, batch_size=64, shuffle=True) # B
dataiter = iter(trainloader)
images, labels = dataiter.next() # A
images = dataiter.next() # B
images.shape
Why does approach #A in the code above give torch.Size([64, 1, 28, 28]), while #B gives torch.Size([64, 28, 28])? Where does the second dimension with value 1 in #A come from?
Thank you in advance.
| The second dimension describes the color channels, which for grayscale is 1. RGB images would have 3 channels (red, green and blue) and would look something like 64, 3, W, H.
So when working with CNNs, your data normally has to be in the shape (batchsize, channels, width, height); therefore 64, 1, 28, 28 is correct.
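If you do end up with a (64, 28, 28) batch (as in variant B, where the raw data bypasses the transform) and need the channel dimension back, you can unsqueeze it:
images = images.unsqueeze(1)  # (64, 28, 28) -> (64, 1, 28, 28)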
| https://stackoverflow.com/questions/69111862/ |
PyTorch tutorial using testing dataset in training epoch | In PyTorch official tutorial: OPTIMIZING MODEL PARAMETERS. The dataset is said to be split into training data and testing data. However, the testing dataset was used in each epoch.
Shouldn't the testing dataset only be used once for the evaluating the final model? Or in this tutorial the 'testing dataset' is actually 'validation dataset' and there is no testing dataset in the code?
| Indeed in many basic ML/DL tutorials, the fine distinction between validation and test is often overlooked. In the tutorial you mentioned, since no "decision" is made based on the validation performance (e.g., early stopping), this set can be considered a "test" set and it is okay to monitor test performance during training.
| https://stackoverflow.com/questions/69113077/ |
AttributeError: module 'torch.fft' has no attribute 'fftfreq' | I followed the example from
https://pytorch.org/docs/stable/generated/torch.fft.fftshift.html#torch.fft.fftshift
import torch.fft
f = torch.fft.fftfreq(4)
a = torch.fft.fftshift(f)
print(a)
and got the error
AttributeError: module 'torch.fft' has no attribute 'fftfreq'
I tried pip torch==1.7.0+cu110 and pip torch==1.7.1+cu110 and also conda pytorch==1.7.1 with cudatoolkit=11.0.
Others have the same problem https://discuss.pytorch.org/t/unable-to-use-correctly-the-new-torch-fft-module/104560/6
But changing to torch1.7.0 didn't solve the problem.
How to use torch.fft correctly?
| Function torch.fft.fftfreq was introduced in PyTorch version 1.8.0. You need to upgrade to this version or higher in order to use it.
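If upgrading is not an option, a hedged workaround is to build the frequencies by hand, mirroring the fftfreq definition (a sketch; torch.roll replaces fftshift, which may also be unavailable on 1.7):
import torch

def fftfreq(n, d=1.0):
    # frequencies [0, 1, ..., n//2 - 1, -(n//2), ..., -1] / (d * n)
    pos = torch.arange(0, (n - 1) // 2 + 1)
    neg = torch.arange(-(n // 2), 0)
    return torch.cat((pos, neg)) / (d * n)

f = fftfreq(4)                      # tensor([ 0.0000,  0.2500, -0.5000, -0.2500])
a = torch.roll(f, f.numel() // 2)   # same reordering as fftshift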
| https://stackoverflow.com/questions/69115115/ |
How to measure the semantic similarities among image features extracted by pre-trained models(e.g. vgg, resnet...)? | As far as I know, pre-trained models play well in many tasks as a feature-extractor, thanks to their abundant training dataset.
However, I'm wondering whether the model, let's say VGG-16,
has the ability to extract some "semantic" information from an input image.
If the answer is positive, given an unlabeled dataset,
is it possible to "cluster" images by measuring the semantic similarities of the extracted features?
Actually, I've made some attempts:
Load pre-trained vgg-16 through Pytorch.
Load Cifar-10 dataset and transform to batched-tensor X, of size(5000, 3, 224, 224).
Fine-tune vgg.classifier, define its output dimension as 4096.
Extract features:
features = vgg.features(X).view(X.shape[0], -1) # X: (5000, 3, 224, 224)
features = vgg.classifier(features) # features: (5000, 25088)
return features # features: (5000, 4096)
Tried out cosine similarity, inner product, and torch.cdist, only to find several bad clusters.
Any suggestion? Thanks in advance.
| You might not want to go all the way to the last layer, as these contain features specific to the classification task at hand. Using features from layers earlier in the classifier might help. Additionally, you want to switch to eval mode since VGG-16 has a dropout layer in its classifier.
>>> vgg16 = torchvision.models.vgg16(pretrained=True).eval()
Truncate the classifier:
>>> vgg16.classifier = vgg16.classifier[:4]
Now vgg16's classifier will look like:
(classifier): Sequential(
(0): Linear(in_features=25088, out_features=4096, bias=True)
(1): ReLU(inplace=True)
(2): Dropout(p=0.5, inplace=False)
(3): Linear(in_features=4096, out_features=4096, bias=True)
)
Then extract the features:
>>> vgg16(torch.rand(1, 3, 124, 124)).shape
torch.Size([1, 4096])
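From there, a simple way to probe semantic similarity is to L2-normalize the features and take pairwise cosine similarities (a sketch, assuming X is your batch of images):
feats = vgg16(X)                                    # (N, 4096)
feats = torch.nn.functional.normalize(feats, dim=1) # unit-norm rows
sim = feats @ feats.T                               # (N, N) cosine-similarity matrix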
| https://stackoverflow.com/questions/69115300/ |
How to concatenate a list of tensors on a specific axis? | I have a list (my_list) of tensors all with the same shape. I want to concatenate them on the channel axis.
Helping code
for i in my_list:
print(i.shape) #[1, 3, 128, 128] => [batch, channel, width, height]
I would like to get a new tensor i.e. new_tensor = [1, 3*len(my_list), width, height]
I don't want to use torch.stack() to add a new dimension. And i am unable to figure out how can I use torch.cat() to do this?
| Given a example list containing 10 tensors shaped (1, 3, 128, 128):
>>> my_list = [torch.rand(1, 3, 128, 128) for _ in range(10)]
You are looking to concatenate your tensors along axis=1, because the second dimension is the channel axis you want to combine. You can do so using torch.cat:
>>> res = torch.cat(my_list, axis=1)
>>> res.shape
torch.Size([1, 30, 128, 128])
Note that torch.vstack is not equivalent here: it stacks along the first dimension, so torch.vstack(my_list) would return a tensor of shape (10, 3, 128, 128) instead of (1, 30, 128, 128).
| https://stackoverflow.com/questions/69115837/ |
How can I get a batch of samples from a dataset given a list of idxs in pytorch? | I have a torch.utils.data.Dataset object, I would like to have a DataLoader or a similar object that accepts a list of idxs and returns a batch of samples with the corresponding idxs.
Example, I have
list_idxs = [10, 109, 7, 12]
I would like to do like:
batch = loader.getbatch(list_idxs)
where batch contains:
[sample10, sample109, sample7, sample12]
Is there a simple and elegant way to do that in an optimized way?
| If I understand your question correctly, you could have a DataLoader return a sequence of hand-selected batches using a custom batch_sampler (you don't even need to pass it a sampler in this case).
Given an arbitrary Dataset:
>>> from torch.utils.data import DataLoader, Dataset
>>> from torch.utils.data.sampler import Sampler
>>> class MyDataset(Dataset):
... def __getitem__(self, idx):
... return idx
you can then define something like:
>>> class MyBatchSampler(Sampler):
... def __init__(self, batches):
... self.batches = batches
...
... def __iter__(self):
... for batch in self.batches:
... yield batch
...
... def __len__(self):
... return len(self.batches)
which just takes a list of lists containing dataset indices to include in each batch.
Then:
>>> dataset = MyDataset()
>>> batch_sampler = MyBatchSampler([[1, 2, 3], [5, 6, 7], [4, 2, 1]])
>>> dataloader = DataLoader(dataset=dataset, batch_sampler=batch_sampler)
>>> for batch in dataloader:
... print(batch)
...
tensor([1, 2, 3])
tensor([5, 6, 7])
tensor([4, 2, 1])
Should be easy to extend to your actual Dataset, etc.
| https://stackoverflow.com/questions/69121760/ |
Convert at::Tensor to double in C++ when using LibTorch (PyTorch) | In the following code, I want to compare the loss (data type at::Tensor) with a lossThreshold (data type double). I want to convert loss to double before making that comparison. How do I do it?
int main() {
auto const input1(torch::randn({28*28}));
auto const input2(torch::randn({28*28}));
double const lossThreshold{0.05};
auto const loss{torch::nn::functional::mse_loss(input1, input2)}; // this returns an at::Tensor datatype
return loss > lossThreshold ? EXIT_FAILURE : EXIT_SUCCESS;
}
| Thanks to GitHub Copilot which recommended this solution. I guess I should leave my job now. :(
The solution is using the item<T>() template function as follows:
int main() {
auto const input1(torch::randn({28*28})); // at::Tensor
auto const input2(torch::randn({28*28})); // at::Tensor
double const lossThreshold{0.05}; // double
auto const loss{torch::nn::functional::mse_loss(input1, input2).item<double>()}; // the item<double>() converts at::Tensor to double
return loss > lossThreshold ? EXIT_FAILURE : EXIT_SUCCESS;
}
| https://stackoverflow.com/questions/69122354/ |
CrossEntropyLoss equivalence to LogSoftmax + NLLLoss | According to the docs, CrossEntropyLoss criterion combines LogSoftmax function and NLLLoss criterion.
That is all fine and well, but testing it doesn't seem to substantiate this claim (ie assertion fails):
model_nll = nn.Sequential(nn.Linear(3072, 1024),
nn.Tanh(),
nn.Linear(1024, 512),
nn.Tanh(),
nn.Linear(512, 128),
nn.Tanh(),
nn.Linear(128, 2),
nn.LogSoftmax(dim=1))
model_ce = nn.Sequential(nn.Linear(3072, 1024),
nn.Tanh(),
nn.Linear(1024, 512),
nn.Tanh(),
nn.Linear(512, 128),
nn.Tanh(),
nn.Linear(128, 2),
nn.LogSoftmax(dim=1))
loss_fn_ce = nn.CrossEntropyLoss()
loss_fn_nll = nn.NLLLoss()
t = torch.rand(1,3072)
target = torch.tensor([1])
with torch.no_grad():
loss_nll = loss_fn_nll(model_nll(t), target)
loss_ce = loss_fn_ce(model_ce(t), target)
assert torch.eq(loss_nll, loss_ce)
I'm obviously missing something basic here.
| As you noticed, the weights are initialized randomly.
One way to get two modules sharing the same weights is to simply export with state_dict the state of one and set it on the other with load_state_dict.
This is a one-liner:
>>> model_ce.load_state_dict(model_nll.state_dict())
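With the weights synced, the original assertion should pass; note that torch.allclose is safer than torch.eq for floating-point comparisons. (As an aside, CrossEntropyLoss already applies LogSoftmax internally; the extra LogSoftmax in model_ce happens to be harmless here because log-softmax is idempotent.)
model_ce.load_state_dict(model_nll.state_dict())
with torch.no_grad():
    loss_nll = loss_fn_nll(model_nll(t), target)
    loss_ce = loss_fn_ce(model_ce(t), target)
assert torch.allclose(loss_nll, loss_ce)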
| https://stackoverflow.com/questions/69123404/ |
Trying to backward through the graph a second time with GANs model | I'm trying to setup a simple GANs training loop but am getting the following error:
RuntimeError: Trying to backward through the graph a second time (or directly access saved variables after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved variables after calling backward.
for epoch in range(N_EPOCHS):
# gets data for the generator
for i, batch in enumerate(dataloader, 0):
# passing target images to the Discriminator
global_disc.zero_grad()
output_disc = global_disc(batch.to(device))
error_target = loss(output_disc, torch.ones(output_disc.shape).cuda())
error_target.backward()
# apply mask to the images
batch = apply_mask(batch)
# passes fake images to the Discriminator
global_output, local_output = gen(batch.to(device))
output_disc = global_disc(global_output.detach())
error_fake = loss(output_disc, torch.zeros(output_disc.shape).to(device))
error_fake.backward()
# combines the errors
error_total = error_target + error_fake
optimizer_disc.step()
# updates the generator
gen.zero_grad()
error_gen = loss(output_disc, torch.ones(output_disc.shape).to(device))
error_gen.backward()
optimizer_gen.step()
break
break
As far as I can tell, I have the operations in the right order, I'm zeroing out the gradients, and I'm detaching the output of the generator before it goes into discriminator.
This article was helpful but I'm still running into something I don't understand.
| Two important points come to mind:
You should feed your generator with noise, and not the real input:
global_output, local_output = gen(noise.to(device))
Above noise should have the appropriate shape (it is the input of your generator).
In order to optimize the generator, you are required to recompute the discriminator output, because it has already been backpropagated on. Simply add this line to recompute output_disc:
# updates the generator
gen.zero_grad()
output_disc = global_disc(global_output)
# ...
Please refer to this tutorial provided by PyTorch for a full walkthrough.
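Putting both points together, here is a minimal sketch of the corrected loop body (assuming noise is defined; the other names follow the question):
# --- discriminator update ---
global_disc.zero_grad()
out_real = global_disc(batch.to(device))
error_target = loss(out_real, torch.ones_like(out_real))
global_output, local_output = gen(noise.to(device))   # noise, not the real batch
out_fake = global_disc(global_output.detach())        # detach: no gradients into the generator here
error_fake = loss(out_fake, torch.zeros_like(out_fake))
(error_target + error_fake).backward()
optimizer_disc.step()
# --- generator update ---
gen.zero_grad()
out_fake = global_disc(global_output)                 # recompute: the previous graph was already used
error_gen = loss(out_fake, torch.ones_like(out_fake))
error_gen.backward()
optimizer_gen.step()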
| https://stackoverflow.com/questions/69123542/ |
Understanding why memory allocation occurs during inference, backpropagation, and model update | In the process of tracking down a GPU OOM error, I made the following checkpoints in my Pytorch code (running on Google Colab P100):
learning_rate = 0.001
num_epochs = 50
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print('check 1')
!nvidia-smi | grep MiB | awk '{print $9 $10 $11}'
model = MyModel()
print('check 2')
!nvidia-smi | grep MiB | awk '{print $9 $10 $11}'
model = model.to(device)
print('check 3')
!nvidia-smi | grep MiB | awk '{print $9 $10 $11}'
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
print('check 4')
!nvidia-smi | grep MiB | awk '{print $9 $10 $11}'
for epoch in range(num_epochs):
train_running_loss = 0.0
train_accuracy = 0.0
model = model.train()
print('check 5')
!nvidia-smi | grep MiB | awk '{print $9 $10 $11}'
## training step
for i, (name, output_array, input) in enumerate(trainloader):
output_array = output_array.to(device)
input = input.to(device)
comb = torch.zeros(1,1,100,1632).to(device)
print('check 6')
!nvidia-smi | grep MiB | awk '{print $9 $10 $11}'
## forward + backprop + loss
output = model(input, comb)
print('check 7')
!nvidia-smi | grep MiB | awk '{print $9 $10 $11}'
loss = my_loss(output, output_array)
print('check 8')
!nvidia-smi | grep MiB | awk '{print $9 $10 $11}'
optimizer.zero_grad()
print('check 9')
!nvidia-smi | grep MiB | awk '{print $9 $10 $11}'
loss.backward()
print('check 10')
!nvidia-smi | grep MiB | awk '{print $9 $10 $11}'
## update model params
optimizer.step()
print('check 11')
!nvidia-smi | grep MiB | awk '{print $9 $10 $11}'
train_running_loss += loss.detach().item()
print('check 12')
!nvidia-smi | grep MiB | awk '{print $9 $10 $11}'
temp = get_accuracy(output, output_array)
print('check 13')
!nvidia-smi | grep MiB | awk '{print $9 $10 $11}'
train_accuracy += temp
with the following output:
check 1
2MiB/16160MiB
check 2
2MiB/16160MiB
check 3
3769MiB/16160MiB
check 4
3769MiB/16160MiB
check 5
3769MiB/16160MiB
check 6
3847MiB/16160MiB
check 7
6725MiB/16160MiB
check 8
6725MiB/16160MiB
check 9
6725MiB/16160MiB
check 10
9761MiB/16160MiB
check 11
16053MiB/16160MiB
check 12
16053MiB/16160MiB
check 13
16053MiB/16160MiB
check 6
16053MiB/16160MiB
check 7
16071MiB/16160MiB
check 8
16071MiB/16160MiB
check 9
16071MiB/16160MiB
check 10
16071MiB/16160MiB
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-11-f566d09448f9> in <module>()
65
66 ## update model params
---> 67 optimizer.step()
68
69 print('check 11')
3 frames
/usr/local/lib/python3.7/dist-packages/torch/optim/optimizer.py in wrapper(*args, **kwargs)
86 profile_name = "Optimizer.step#{}.step".format(obj.__class__.__name__)
87 with torch.autograd.profiler.record_function(profile_name):
---> 88 return func(*args, **kwargs)
89 return wrapper
90
/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
26 def decorate_context(*args, **kwargs):
27 with self.__class__():
---> 28 return func(*args, **kwargs)
29 return cast(F, decorate_context)
30
/usr/local/lib/python3.7/dist-packages/torch/optim/adam.py in step(self, closure)
116 lr=group['lr'],
117 weight_decay=group['weight_decay'],
--> 118 eps=group['eps'])
119 return loss
/usr/local/lib/python3.7/dist-packages/torch/optim/_functional.py in adam(params, grads, exp_avgs, exp_avg_sqs, max_exp_avg_sqs, state_steps, amsgrad, beta1, beta2, lr, weight_decay, eps)
92 denom = (max_exp_avg_sqs[i].sqrt() / math.sqrt(bias_correction2)).add_(eps)
93 else:
---> 94 denom = (exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(eps)
95
96 step_size = lr / bias_correction1
RuntimeError: CUDA out of memory. Tried to allocate 2.32 GiB (GPU 0; 15.78 GiB total capacity; 11.91 GiB already allocated; 182.75 MiB free; 14.26 GiB reserved in total by PyTorch)
It makes sense to me that model = model.to(device) creates 3.7G of memory.
But why does running the model output = model(input, comb) create another 3G of memory?
And then loss.backward() creates another 3G of memory?
And then optimizer.step() creates another 6.3G of memory?
I would appreciate it if someone could explain how the PyTorch GPU memory allocation model is working in this example.
|
Inference
By default, an inference on your model will allocate memory to store the activations of each layer (activation as in intermediate layer inputs). This is needed for backpropagation where those tensors are used to compute the gradients. A simple but effective example is a function defined by f: x -> x². Here, df/dx = 2x, i.e. in order to compute df/dx you are required to keep x in memory.
If you use the torch.no_grad() context manager, you will allow PyTorch to not save those values, thus saving memory. This is particularly useful when evaluating or testing your model, i.e. when no backpropagation is performed. Of course, you won't be able to use this during training!
Backward propagation
The backward pass call will allocate additional memory on the device to store each parameter's gradient value. Only leaf tensor nodes (model parameters and inputs) get their gradient stored in the grad attribute. This is why the memory usage is only increasing between the inference and backward calls.
Model parameter update
Since you are using a stateful optimizer (Adam), some additional memory is required to save some parameters. Read related PyTorch forum post on that. If you try with a stateless optimizer (for instance SGD) you should not have any memory overhead on the step call.
All three steps can have memory needs. In summary, the memory allocated on your device will effectively depend on three elements:
The size of your neural network: the bigger the model, the more layer activations and gradients will be saved in memory.
Whether you are under the torch.no_grad context: in this case, only the state of your model needs to be in memory (no activations or gradients necessary).
The type of optimizer used: whether it is stateful (saves running estimates of the parameters during the update step) or stateless (doesn't need to).
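As a small illustration of the torch.no_grad point, this sketch (assuming a CUDA device is available) compares the peak memory of an inference with and without gradient tracking:
import torch
import torch.nn as nn

model = nn.Sequential(*[nn.Linear(4096, 4096) for _ in range(4)]).cuda()
x = torch.rand(256, 4096, device='cuda')

torch.cuda.reset_peak_memory_stats()
with torch.no_grad():
    model(x)
print('no grad  :', torch.cuda.max_memory_allocated())   # no activations stored

torch.cuda.reset_peak_memory_stats()
model(x)   # activations are kept for a potential backward pass
print('with grad:', torch.cuda.max_memory_allocated())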
| https://stackoverflow.com/questions/69125887/ |
Why my CNN regressor doesn't work (Pytorch) | I'm trying to convert my tensorflow code to pytorch.
Simply speaking, it estimates 7 values (numbers) from images using a CNN (a regressor).
The backbone network is VGG-16 with pretrained weights. I'd like to convert the last fcl (due to the ImageNet dataset, the last fcl output is 1000 classes) to (4096 x 4096), and add more fcls.
before :
vgg last fcl (4096 x 1000)
after:
vgg last fcl (change to 4096 x 4096)
----add fcl1 (4096 x 4096)
----add fcl2 (4096 x 2048)
└ add fclx (2048 x 3)
└ add fclq (2048 x 4)
: fcl2 is connected to two different tensors, with size of 3 and 4
Here, I tried to do it with only one image (for just debugging) and GT values (7 values) with L2 Loss.
If I do that using Tensorflow, the loss decreases drastically, and When I Infer an image, it gives almost similar values to GT.
However, If I try to do it using Pytorch, It looks like training doesn't work well.
I guess the loss should sharply decrease while training (almost for every iteration)
What's the problem?
The loss is actually |x-x'|^2 + b|q-q'|^2, well-known as L2-norm used in PoseNet(Kendall, 2015). x has three values of position and q has four values of quaternion(rotation). b is the hyperparameter determined by user.
from torchvision import models
import torch.nn as nn
import torch
from torch.autograd import Variable
import torch.optim as optim
import os
import os.path
import torch.utils.data as data
from torchvision import transforms as T
from PIL import Image
class DataSource(data.Dataset):
def __init__(self, root, train=True, transforms=None, txtName='dataset_train'):
self.root = os.path.expanduser(root)
self.transforms = transforms
self.train = train
self.imageFormat = '.jpg'
self.image_poses = []
self.image_paths = []
self.txtName = txtName
self._get_data()
if transforms is None:
normalize = T.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
if not train:
self.transforms = T.Compose(
[T.Resize(256),
T.CenterCrop(224),
T.ToTensor(),
normalize]
)
else:
self.transforms = T.Compose(
[T.Resize(256),
T.CenterCrop(224),
# T.RandomCrop(224),
T.ToTensor(),
normalize]
)
def _get_data(self):
txt_file = self.root + '/' + self.txtName + '.txt'
count = 0
with open(txt_file, 'r') as f:
for line in f:
if len(line.split()) != 8:
next(f)
fname, p0, p1, p2, p3, p4, p5, p6 = line.split()
p0 = float(p0); p1 = float(p1); p2 = float(p2);
p3 = float(p3); p4 = float(p4); p5 = float(p5); p6 = float(p6)
ImageFullName = self.root + '/' + fname
if count == 0:
if os.path.isfile(ImageFullName) == False:
self.imageFormat = '.png'
if self.imageFormat != '.jpg':
ImageFullName = ImageFullName.replace('.jpg', self.imageFormat)
self.image_poses.append([p0, p1, p2, p3, p4, p5, p6])
self.image_paths.append(ImageFullName)
count += 1
print('Total : ', len(self.image_paths), ' images')
def __getitem__(self, index):
img_path = self.image_paths[index]
img_pose = self.image_poses[index]
data = Image.open(img_path)
data = self.transforms(data)
return data, torch.tensor(img_pose)
def __len__(self):
return len(self.image_paths)
class PoseLoss(nn.Module):
def __init__(self, beta, device = 'cuda'):
super(PoseLoss, self).__init__()
self.beta = beta
self.device = device
self.t_loss_fn = nn.MSELoss()
def forward(self, x, q, poseGT):
GT_x = poseGT[:, 0:3]
GT_q = poseGT[:, 3:]
xx = Variable(x, requires_grad=True).to(self.device)
qq = Variable(q, requires_grad=True).to(self.device)
print('GT', GT_x, GT_q)
print('Estim', xx, qq)
loss = torch.sqrt(self.t_loss_fn(GT_x[:, :3].cpu(), xx[:, :3].cpu())) + self.beta*torch.sqrt(self.t_loss_fn(GT_q[:, 3:].cpu(), qq[:, 3:].cpu()))
return loss
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.backbone = models.vgg16(pretrained=True)
self.backbone._modules['classifier'][6] = nn.ReLU(nn.Linear(4096, 4096))
self.fcl = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU(), nn.Linear(4096, 2048), nn.ReLU())
self.xyz = nn.Linear(2048, 3)
self.q = nn.Linear(2048, 4)
def forward(self, x):
x1 = self.backbone(x)
x2 = self.fcl(x1)
xyz = self.xyz(x2)
q = self.q(x2)
return xyz, q
batch_size = 1
learning_rate = 10e-5
training_epochs = 100
if __name__ == "__main__":
device = 'cuda' if torch.cuda.is_available() else 'cpu'
data = DataSource(DatasetDirectory + DatasetFolder, train=True, transforms=None, txtName=TrainDatasetList)
data_loader = torch.utils.data.DataLoader(dataset=data, batch_size=batch_size, shuffle=False, num_workers=4)
model = Net().to(device)
model.train()
criterion = PoseLoss(beta = 100, device = device)
optimizer = optim.Adam(model.parameters(), lr=learning_rate, betas = (0.9, 0.999), eps =0.00000001)
iteration = 0
minloss = 10e8
minlossindex = -1
for epoch in range(1, training_epochs):
dataiter = iter(data_loader)
for Images, Poses in dataiter:
optimizer.zero_grad()
Images = Images.to(device).float()
x, q = model(Images)
loss = criterion(x, q, Poses)
loss.backward()
loss = loss.item()/ batch_size
optimizer.step()
print(epoch, ' : ', iteration , ' -> ' , loss, ' minloss ', minloss, ' at ', minlossindex)
if loss < minloss:
minloss = loss
minlossindex = iteration
if epoch < (int)(training_epochs*0.8):
torch.save(model.state_dict(), 'Min.pth')
iteration = iteration + 1
torch.save(model.state_dict(), 'Fin.pth')
The estimated results tends to be zero for all 7 values, I cannot come up with why it gives such values.
Also, as I mentioned above, the loss values do not decrease dramatically while training(I expected It should be decreased dramatically for every iteration until it converges, because I used only one image for training)
| If you move the data from GPU to CPU, you lose the history in the computation graph and therefore the gradients are not propagated back to the previous layers.
I do the following, usually the data is transferred to the device after sampling with the dataloader.
...
for Images, Poses in dataiter:
Images = Images.to(device)
Poses = Poses.to(device)
...
From here you will have all the data on the GPU. Also, it is not necessary to wrap x and q in Variable: when a layer is defined in PyTorch, its outputs automatically track gradients, so no explicit Variable (a deprecated API) is needed.
On the other hand, you don't need the sqrt in the loss either. The sqrt function is monotonically increasing, so minimizing MSE is the same as minimizing RMSE. Keeping the sqrt will probably make the training somewhat more unstable, and it is only useful if you want to penalize in the same order of magnitude as the data.
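For reference, a minimal sketch of the loss rewritten along those lines (no Variable wrapping, no device transfers, no sqrt; the slicing follows the question's pose layout):
class PoseLoss(nn.Module):
    def __init__(self, beta):
        super().__init__()
        self.beta = beta
        self.t_loss_fn = nn.MSELoss()
    def forward(self, x, q, poseGT):
        GT_x, GT_q = poseGT[:, :3], poseGT[:, 3:]
        # everything stays on the same device, so gradients flow back to the model
        return self.t_loss_fn(x, GT_x) + self.beta * self.t_loss_fn(q, GT_q)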
| https://stackoverflow.com/questions/69131370/ |
Define following multiplication of two tensors in pytorch lightning | I would like to multiply the following two tensors x (of shape (BS, N, C)) and y (of shape (BS, 1, C)) in the following way:
BS = x.shape[0]
N = x.shape[1]
out = torch.zeros(size=x.shape)
for i in range(BS):
for j in range(N):
out[i, j, :] = torch.mul(x[i, j, :], y[i, 0, :])
return out
Implementing it this way yields an error "RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking arugment for argument weight in method wrapper_native_layer_norm)"
When fixing it by setting
out = torch.zeros(size=x.shape).to('cuda')
then training takes forever, because my for loops aren't executed in parallel.
So my question is how to implement the two for loops above in the PyTorch Lightning way, so that I can define a function x = multiply_as_above(x, y) and use it in the forward(self) method of my neural network.
Btw the operation defined above looks to me like a convolution with kernel size 1. Maybe I can use that?
| Is there anything wrong with x*y? Thanks to broadcasting, the singleton second dimension of y is automatically expanded to match N, so as you can see in the code below, it yields exactly the same output as your function:
import torch
torch.manual_seed(2021)
BS = 2
N = 3
C = 4
x = torch.rand(BS, N, C)
y = torch.rand(BS, 1, C)
# your function
def f(x, y):
BS = x.shape[0]
N = x.shape[1]
out = torch.zeros(size=x.shape)
for i in range(BS):
for j in range(N):
out[i, j, :] = torch.mul(x[i, j, :], y[i, 0, :])
return out
out1 = f(x, y)
out2 = x*y
# comparing the outputs, we can see that they are identical
torch.all(out1 == out2)
# > tensor(True)
| https://stackoverflow.com/questions/69132547/ |
Delete a row by index from pytorch tensor | I have a pytorch tensor of size torch.Size([4, 3, 2])
tensor([[[0.4003, 0.2742],
[0.9414, 0.1222],
[0.9624, 0.3063]],
[[0.9600, 0.5381],
[0.5758, 0.8458],
[0.6342, 0.5872]],
[[0.5891, 0.9453],
[0.8859, 0.6552],
[0.5120, 0.5384]],
[[0.3017, 0.9407],
[0.4887, 0.8097],
[0.9454, 0.6027]]])
I would like to delete the 2nd row so that the tensor becomes torch.Size([3, 3, 2])
tensor([[[0.4003, 0.2742],
[0.9414, 0.1222],
[0.9624, 0.3063]],
[[0.5891, 0.9453],
[0.8859, 0.6552],
[0.5120, 0.5384]],
[[0.3017, 0.9407],
[0.4887, 0.8097],
[0.9454, 0.6027]]])
How can I delete the nth row of the 3D tensor?
| import torch
x = torch.randn(size=(4,3,2))
row_exclude = 2
x = torch.cat((x[:row_exclude],x[row_exclude+1:]))
print(x.shape)
>>> torch.Size([3, 3, 2])
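Alternatively, a boolean mask over the first dimension achieves the same result and generalizes to removing several rows at once:
mask = torch.arange(x.size(0)) != row_exclude
x = x[mask]   # shape: (3, 3, 2)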
| https://stackoverflow.com/questions/69132963/ |
train a model which is instantiated in another model (PyTorch) | I have two neural network classes, one of GNN type and the other a simple linear type; the latter is instantiated in the first. How can I train both at the same time?
here is an example:
class linear_NN(nn.Module):
def __init__(self, input_dim, out_dim...):
super().__init__()
def forward(self, x, dim = 0):
'''Forward pass'''
return x
the main class or the large class
class GNN(nn.Module):
def __init__(self, input_dim, n_hidden, out_dim...):
super().__init__()
def forward(self, h, dim = 0):
'''Forward pass'''
model=linear_NN(input, out..)
model(h, dim)
return h
| You must declare it in the __init__(...):
class GNN(nn.Module):
def __init__(self, input_dim, n_hidden, out_dim, ...):
super().__init__()
self.linear = linear_NN(input, out..)
def forward(self, h, dim = 0):
'''Forward pass'''
self.linear(h, dim)
return h
Then, the self.linear model will be registered to your GNN main model, and if you get GNN(...).parameters(), you'll see the linear parameters there.
| https://stackoverflow.com/questions/69135257/ |
'torchmetrics' does not work with PyTorchLightning | I am trying to understand how to use torchmetrics with PyTorch Lightning.
But I got the same output for Accuracy, F1-score, Precision, etc.
This is the code.
metric_acc = torchmetrics.Accuracy()
metric_f1 = torchmetrics.F1()
metric_pre = torchmetrics.Precision()
metric_rec = torchmetrics.Recall()
n_batches = 3
for i in range(n_batches):
# simulate a classification problem
preds = torch.randn(10, 5).softmax(dim=-1)
target = torch.randint(5, (10,))
acc = metric_acc(preds, target)
f1 = metric_f1(preds, target)
pre = metric_pre(preds, target)
rec = metric_rec(preds, target)
print(f"Accuracy on batch {i}: {acc}")
print(f"F1 score on batch {i}: {f1}")
print(f"pre score on batch {i}: {pre}")
print(f"rec score on batch {i}: {rec}")
print('-' * 20)
acc = metric_acc.compute()
f1 = metric_f1.compute()
pre = metric_pre.compute()
rec = metric_rec.compute()
print(f"Accuracy on all data: {acc}")
print(f"f1 score on all data: {f1}")
print(f"pre score on all data: {pre}")
print(f"rec score on all data: {rec}")
Result is here.
Accuracy on batch 0: 0.10000000149011612
F1 score on batch 0: 0.10000000894069672
pre score on batch 0: 0.10000000149011612
rec score on batch 0: 0.10000000149011612
--------------------
Accuracy on batch 1: 0.30000001192092896
F1 score on batch 1: 0.30000001192092896
pre score on batch 1: 0.30000001192092896
rec score on batch 1: 0.30000001192092896
--------------------
Accuracy on batch 2: 0.4000000059604645
F1 score on batch 2: 0.40000003576278687
pre score on batch 2: 0.4000000059604645
rec score on batch 2: 0.4000000059604645
--------------------
Accuracy on all data: 0.2666666805744171
f1 score on all data: 0.2666666805744171
pre score on all data: 0.2666666805744171
rec score on all data: 0.2666666805744171
Process finished with exit code 0
I got the same result when I used it with PyTorch Lightning, so I tried it with simple code and got the same thing. If you know the problem or the solution, please let me know. Thank you very much.
| The reason for this is that for multi-class classification, if you are using F1, Precision, Accuracy and Recall with micro averaging (the default), these are all equivalent metrics, so it is recommended to use macro averaging instead (which also requires num_classes):
metric_acc = torchmetrics.Accuracy(average='macro', num_classes=5)
metric_f1 = torchmetrics.F1(average='macro', num_classes=5)
metric_pre = torchmetrics.Precision(average='macro', num_classes=5)
metric_rec = torchmetrics.Recall(average='macro', num_classes=5)
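For illustration, rerunning the snippet from the question with these macro-averaged metrics will generally produce values that differ between the metrics:
import torch
preds = torch.randn(10, 5).softmax(dim=-1)
target = torch.randint(5, (10,))
print(metric_acc(preds, target))   # Accuracy (macro)
print(metric_f1(preds, target))    # F1 (macro)
print(metric_pre(preds, target))   # Precision (macro)
print(metric_rec(preds, target))   # Recall (macro)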
| https://stackoverflow.com/questions/69139618/ |
Share the output of one class to another class python | I have two DNNs; the first one returns two outputs. I want to use one of these outputs in a second class that represents another DNN, as in the following example:
I want to pass the output (x) to the second class to be concatenated with another variable (v). I found a solution by making the variable (x) a global variable, but I need a more efficient solution.
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
class Net(nn.Module):
def __init__(self):
..
def forward(self, x):
..
return x, z
class Net2(nn.Module):
def __init__(self):
..
def forward(self, v):
y = torch.cat(v, x)
return y
| You should not have to rely on global variables, you need to solve this following common practices. You can pass both v, and x as parameters of the forward of Net2. Something like:
class Net(nn.Module):
def forward(self, x):
z = x**2
return x, z
class Net2(nn.Module):
def forward(self, x, v):
y = torch.cat((v, x), dim=1)
return y
With dummy data:
>>> net = Net()
>>> net2 = Net2()
>>> input1 = torch.rand(1,10)
>>> input2 = torch.rand(1,20)
First inference:
>>> x, z = net(input1)
Second inference:
>>> out = net2(x, input2)
>>> out.shape
torch.Size([1, 30])
| https://stackoverflow.com/questions/69146243/ |
Dataset with 4D images: expected Byte but found Float | I have some MRI scans that I want to create a custom PyTorch Dataset out of. Each scan is a set of 31 RGB images, so the scans are 4 dimensional (Channels, Depth, Height, Width). The images are .png, and each scan is a folder with 31 images. After loading the scans, I tried passing them through a Conv3D, but I got an error (full traceback at the end):
x = torch.unsqueeze(dataset[0][0], 0)
x.shape # torch.Size([1, 3, 31, 512, 512])
m = nn.Conv3d(3,12,3)
out = m(x)
RuntimeError: expected scalar type Byte but found Float
How can I solve this error? I think it happens because I load the scans in as a NumPy array of NumPy arrays, but I don't know how else to do it. How can I load 4D image data into a custom Dataset?
Here's my custom Dataset class:
import torch
import os
import pandas as pd
from skimage import io
from torch.utils.data import Dataset
class TrainImages(Dataset):
def __init__(self, csv_file, root_dir, transform=None):
self.annotations = pd.read_csv(csv_file)
self.root_dir = root_dir
self.transform = transform
def __len__(self):
return len(self.annotations)
def __getitem__(self, index):
# The folder containing the images of a scan
img_path = os.path.join(self.root_dir, str(self.annotations.iloc[index, 0]).zfill(5))
# Create a tensor out of a numpy array of numpy arrays, where each array is an image in the scan
image = torch.from_numpy(np.array([np.array(Image.open(os.path.join(str(img_path),"rgb-"+str(i)+".png"))) for i in range(31)]).transpose(3,0,1,2).astype(np.uint8))
y_label = torch.tensor(int(self.annotations.iloc[index, 1]))
return (image, y_label)
Full traceback:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-29-f3c4dfbd5496> in <module>
1 m=nn.Conv3d(3,12,3)
----> 2 out=m(x)
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/conv.py in forward(self, input)
571 self.dilation, self.groups)
572 return F.conv3d(input, self.weight, self.bias, self.stride,
--> 573 self.padding, self.dilation, self.groups)
574
575
RuntimeError: expected scalar type Byte but found Float
| The error message can be confusing, but the problem is that your data has the Byte type, while conv3d expects Float. You need to change from np.uint8 to np.float32 in the __getitem__(...) of your Dataset:
image = torch.from_numpy(np.array([
np.array(Image.open(os.path.join(str(img_path),"rgb-"+str(i)+".png")))
for i in range(31)
]).transpose(3, 0, 1, 2).astype(np.float32)) # <<< changed from np.uint8 to float32
or, cast x to Float before passing to the model:
out = m(x.float())
Note that if you use a transform like .ToTensor() later on, this problem will be solved as well.
| https://stackoverflow.com/questions/69147789/ |
Pytorch: non-positive stride is not supported | I have some MRI scans, where each scan is a set of 31 RGB images. The dimensions of the input data are (Channels, Depth, Height, Width). The images are png, and each scan is a folder containing its 31 images.
I created a custom Dataset class:
class TrainImages(Dataset):
def __init__(self, csv_file, root_dir, transform=None):
self.annotations = pd.read_csv(csv_file)
self.root_dir = root_dir
self.transform = transform
def __len__(self):
return len(self.annotations)
def __getitem__(self, index):
img_path = os.path.join(self.root_dir, str(self.annotations.iloc[index, 0]).zfill(5))
image = torch.from_numpy(np.array([np.array(Image.open(os.path.join(str(img_path),"rgb-"+str(i)+".png"))) for i in range(31)]).transpose(3,0,1,2).astype(np.float32))
y_label = torch.tensor(int(self.annotations.iloc[index, 1]))
return (image, y_label)
Then, I created a small 3D CNN class:
class CNN2(nn.Module):
def __init__(self):
super(CNN2, self).__init__()
self.conv_layer1 = self._conv_layer(3, 12)
def _conv_layer(self, in_c, out_c, conv_kernel_size=3, padding=0):
layer = nn.Sequential(
nn.Conv3d(in_c, out_c, conv_kernel_size, padding),
)
return layer
def forward(self, x):
out = self.conv_layer1(x)
return out
Then, I tried to feed one scan into the CNN2 object:
x=torch.unsqueeze(dataset[0][0], 0)
x.shape #torch.Size([1, 3, 31, 512, 512])
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
data = x.to(device)
model = CNN2().to(device)
model(x)
But it produces this error:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-24-c89306854a22> in <module>
1 model = CNN_test().to(device)
----> 2 model(x)
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
<ipython-input-22-ac66ca7a2459> in forward(self, x)
14
15 def forward(self, x):
---> 16 out = self.conv_layer1(x)
17
18 return out
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/container.py in forward(self, input)
115 def forward(self, input):
116 for module in self:
--> 117 input = module(input)
118 return input
119
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/conv.py in forward(self, input)
571 self.dilation, self.groups)
572 return F.conv3d(input, self.weight, self.bias, self.stride,
--> 573 self.padding, self.dilation, self.groups)
574
575
RuntimeError: non-positive stride is not supported
However, when I just create a Conv3D object and pass the same scan in, no error results:
x=torch.unsqueeze(dataset[0][0], 0)
m=nn.Conv3d(3,12,3)
out=m(x)
I think the error might have to do with the dimensions of the input data, but I don't understand what "non-positive stride" means. I'm also confused why no error occurs when I just pass the data into a Conv3D object, but an error occurs when I pass the same data into an instance of the CNN class that does the same thing.
| The issue is not with your input shape, it has to do with your layer initialization. You have essentially defined your 3D convolution with this line:
nn.Conv3d(in_c, out_c, conv_kernel_size, padding)
The issue is nn.Conv3d function head is the following:
torch.nn.Conv3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)
Notice how stride is placed before padding. In your code, variable padding ends up being assigned to the stride argument.
To solve the issue, you can specify the argument name with a keyword argument, i.e. padding=padding. This disambiguates the issue with positional arguments stride and padding.
class CNN2(nn.Module):
def __init__(self):
super(CNN2, self).__init__()
self.conv_layer1 = self._conv_layer(3, 12)
def _conv_layer(self, in_c, out_c, conv_kernel_size=3, padding=0):
layer = nn.Sequential(
nn.Conv3d(in_c, out_c, conv_kernel_size, padding=padding))
return layer
def forward(self, x):
out = self.conv_layer1(x)
return out
| https://stackoverflow.com/questions/69148604/ |
Difference between autograd.grad and autograd.backward? | Suppose I have my custom loss function and I want to fit the solution of some differential equation with help of my neural network. So in each forward pass, I am calculating the output of my neural net and then calculating the loss by taking the MSE with the expected equation to which I want to fit my perceptron.
Now my doubt is: should I use grad(loss) or should I do loss.backward() for backpropagation to calculate and update my gradients?
I understand that while using loss.backward() I have to wrap my tensors with Variable and have to set the requires_grad = True for the variables w.r.t which I want to take the gradient of my loss.
So my questions are :
Does grad(loss) also requires any such explicit parameter to identify the variables for gradient computation?
How does it actually compute the gradients?
Which approach is better?
what is the main difference between the two in a practical scenario.
It would be better if you could explain the practical implications of both approaches because whenever I try to find it online I am just bombarded with a lot of stuff that isn't much relevant to my project.
| In addition to Ivan's answer, having torch.autograd.grad not accumulating gradients into .grad can avoid racing conditions in multi-thread scenarios.
Quoting PyTorch doc https://pytorch.org/docs/stable/notes/autograd.html#non-determinism
If you are calling backward() on multiple thread concurrently but with shared inputs (i.e. Hogwild CPU training). Since parameters are automatically shared across threads, gradient accumulation might become non-deterministic on backward calls across threads, because two backward calls might access and try to accumulate the same .grad attribute. This is technically not safe, and it might result in racing condition and the result might be invalid to use.
But this is expected pattern if you are using the multithreading approach to drive the whole training process but using shared parameters, user who use multithreading should have the threading model in mind and should expect this to happen. User could use the functional API torch.autograd.grad() to calculate the gradients instead of backward() to avoid non-determinism.
implementation details https://github.com/pytorch/pytorch/blob/7e3a694b23b383e38f5e39ef960ba8f374d22404/torch/csrc/autograd/functions/accumulate_grad.h
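For a concrete feel of the difference, here is a minimal sketch comparing the two APIs:
import torch

x = torch.tensor([2.0], requires_grad=True)
loss = (x ** 2).sum()

# functional API: gradients are returned, x.grad is NOT populated
(g,) = torch.autograd.grad(loss, (x,))
print(g, x.grad)    # tensor([4.]) None

# backward: gradients are accumulated into x.grad
loss2 = (x ** 2).sum()
loss2.backward()
print(x.grad)       # tensor([4.])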
| https://stackoverflow.com/questions/69148622/ |
What is the alias to pytorch NN.module in TensorFlow? | I am trying to implement Triplet attention in TensorFlow. One of the questions I am facing is what to use in place of nn.Module in TensorFlow.
class ChannelPool(nn.Module):
def forward(self, x):
return torch.cat( (torch.max(x,1)[0].unsqueeze(1), torch.mean(x,1).unsqueeze(1)), dim=1)
What do I put in place of nn.Module here?
| In this case, nn.Module is used to create a custom layer. TensorFlow has a tutorial on that, please take a look. In short, one way you could implement it is with tf.keras.layers.Layer, where call is the equivalent of forward in PyTorch:
class ChannelPool(tf.keras.layers.Layer):
def call(self, inputs):
return tf.concat((tf.reduce_max(inputs, axis=1, keepdims=True), tf.reduce_mean(inputs, axis=1, keepdims=True)), axis=1)
You can check that they are equivalent like this:
import torch
from torch import nn
import tensorflow as tf
import numpy as np
class PyTorch_ChannelPool(nn.Module):
def forward(self, x):
return torch.cat((torch.max(x, 1)[0].unsqueeze(1), torch.mean(x, 1).unsqueeze(1)), dim=1)
class TensorFlow_ChannelPool(tf.keras.layers.Layer):
def call(self, inputs):
return tf.concat((tf.reduce_max(inputs, axis=1, keepdims=True), tf.reduce_mean(inputs, axis=1, keepdims=True)), axis=1)
np.random.seed(2021)
x = np.random.random((1,2,3,4)).astype(np.float32)
a = PyTorch_ChannelPool()
b = TensorFlow_ChannelPool()
pytorch_output = a(torch.from_numpy(x)).numpy()
tensorflow_output = b(x).numpy()
np.all(pytorch_output == tensorflow_output)
# >>> True
| https://stackoverflow.com/questions/69148722/ |
Pooling for 1D tensor | I am looking for a way to reduce the length of a 1D tensor by applying a pooling operation. How can I do it? If I apply MaxPool1d, I get the error max_pool1d() input tensor must have 2 or 3 dimensions but got 1.
Here is my code:
import numpy as np
import torch
import torch.nn as nn
A = np.random.rand(768)
m = nn.MaxPool1d(4,4)
A_tensor = torch.from_numpy(A)
output = m(A_tensor)
| Your initialization is fine, you've defined the first two parameters of nn.MaxPool1d: kernel_size and stride. For one-dimensional max-pooling both should be integers, not tuples.
The issue is with your input, it should be two-dimensional (the batch axis is missing):
>>> m = nn.MaxPool1d(4, 4)
>>> A_tensor = torch.rand(1, 768)
Then inference will result in:
>>> output = m(A_tensor)
>>> output.shape
torch.Size([1, 192])
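Applied to the question's one-dimensional tensor, you can add the missing dimension with unsqueeze and remove it again afterwards:
A_tensor = torch.from_numpy(A)        # shape: (768,)
output = m(A_tensor.unsqueeze(0))     # shape: (1, 192)
output = output.squeeze(0)            # shape: (192,)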
| https://stackoverflow.com/questions/69150077/ |
using ImageFolder with albumentations in pytorch | I have a situation where I need to use ImageFolder with the albumentations lib to make the augmentations in pytorch - custom dataloader is not an option.
To this end, I am stumped and I am not able to get ImageFolder to work with albumenations. I have tried something along these lines:
class Transforms:
def __init__(self, transforms: A.Compose):
self.transforms = transforms
def __call__(self, img, *args, **kwargs):
return self.transforms(image=np.array(img))['image']
and then:
trainset = datasets.ImageFolder(traindir,transform=Transforms(transforms=A.Resize(32 , 32)))
where traindir is some dir with images. I however get thrown a weird error:
RuntimeError: Given groups=1, weight of size [16, 3, 3, 3], expected input[1024, 32, 32, 3] to have 3 channels, but got 32 channels instead
and I cant seem to find a reproducible example to make a simple aug pipleline work with imagefolder.
UPDATE
On the recommendation of @Shai, I have done this now:
class Transforms:
def __init__(self):
self.transforms = A.Compose([A.Resize(224,224),ToTensorV2()])
def __call__(self, img, *args, **kwargs):
return self.transforms(image=np.array(img))['image']
trainset = datasets.ImageFolder(traindir,transform=Transforms())
but I get thrown:
self.padding, self.dilation, self.groups)
RuntimeError: Input type (torch.cuda.ByteTensor) and weight type (torch.cuda.FloatTensor) should be the same
| You need to use ToTensorV2 transformation as the final one:
trainset = datasets.ImageFolder(traindir,transform=Transforms(transforms=A.Compose([A.Resize(32 , 32), ToTensorV2()]))
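Regarding the ByteTensor/FloatTensor error from the update: ToTensorV2 keeps uint8 images as byte tensors. One way to obtain float input (a sketch, assuming ImageNet-style normalization statistics are acceptable for your data) is to normalize before converting:
class Transforms:
    def __init__(self):
        self.transforms = A.Compose([
            A.Resize(224, 224),
            A.Normalize(),   # converts to float32 while normalizing
            ToTensorV2(),
        ])
    def __call__(self, img, *args, **kwargs):
        return self.transforms(image=np.array(img))['image']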
| https://stackoverflow.com/questions/69151052/ |
How to get confidence score from a trained pytorch model | I have a trained PyTorch model and I want to get the confidence score of predictions in range (0-100) or (0-1). The code below is giving me a score but its range is undefined. I want the score in a defined range of (0-1) or (0-100). Any idea how to get this?
conf, classes = torch.max(output, 1)
My code:
model = torch.load(r'best.pt')
model.eval()
def preprocess(imgs):
im = torch.from_numpy(imgs)
im = im.float() # uint8 to fp16/32
im /= 255.0
return im
img_path = cv2.imread("/content/634282.jpg",0)
cropped = cv2.resize(img_path,(28,28))
imgs = preprocess(np.array([[cropped]]))
def predict_allCharacters(imgs):
output = model(imgs)
conf, classes = torch.max(output, 1)
class_names = '0123456789'
return conf, class_names[classes.item()]
Model definition:
class CNN(nn.Module):
def __init__(self):
super(CNN, self).__init__()
self.conv1 = nn.Sequential(
nn.Conv2d(
in_channels=1,
out_channels=16,
kernel_size=5,
stride=1,
padding=2,
),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2),
)
self.conv2 = nn.Sequential(
nn.Conv2d(16, 32, 5, 1, 2),
nn.ReLU(),
nn.MaxPool2d(2),
)
# fully connected layer, output 10 classes
self.out = nn.Linear(32 * 7 * 7, 37)
def forward(self, x):
x = self.conv1(x)
x = self.conv2(x)
# flatten the output of conv2 to (batch_size, 32 * 7 * 7)
x = x.view(x.size(0), -1)
output = self.out(x)
return output # return x for visualization
| In your case, output represents the logits. One way of getting a probability out of them is to use the Softmax function. As it seems that output contains the outputs from a batch, not a single sample, you can do something like this:
probs = torch.nn.functional.softmax(output, dim=1)
Then, in probs, each row would have the probability (i.e., in range [0, 1], sum=1) of each class for a given sample.
So, your predict_allCharacters could be modified to:
def predict_allCharacters(imgs):
output = model(imgs)
probs = torch.nn.functional.softmax(output, dim=1)
conf, classes = torch.max(probs, 1)
class_names = '0123456789'
return conf, class_names[classes.item()]
| https://stackoverflow.com/questions/69154022/ |
Very high CPU usage when using opencv2 with multithreading in python | I am trying to create automatic attendance system with opencv2 in which i need to get rtsp stream from IP camera, find faces from it and recognize face.
I created different threads from frame catching and drawing because face recognition function needs some time to recognize face.
But just creating 2 threads, one for frame reading and other for drawing uses around 70% CPU.
and creating pytorch_facenet model increase usage 80-90% CPU.
does anyone know how to reduce CPU usage ?
my program:
import cv2
import threading
from facenet_pytorch import InceptionResnetV1
cap = cv2.VideoCapture("rtsp://test:[email protected]")
resnet = InceptionResnetV1(pretrained='vggface2').eval()
ret, frame = cap.read()
exit = False
def th1():
global ret, frame, exit
while True:
ret, frame = cap.read()
if exit:
break
def th2():
global ret, frame, exit
while True:
cv2.imshow('frame', frame)
cv2.waitKey(1)
if cv2.getWindowProperty('frame',cv2.WND_PROP_VISIBLE) < 1:
exit = True
break
t1 = threading.Thread(target=th1)
t1.start()
t2 = threading.Thread(target=th2)
t2.start()
Update:
I used time.sleep(0.2) in all my threads except frame reading,
and it worked: my CPU usage is 30% now.
| Two issues.
th2 runs in an almost-tight-loop. It won't consume a whole core of CPU because waitKey(1) sleeps for some time.
No synchronization at all between threads, but you need it. You need a threading.Event to notify the consumer thread of a fresh frame. The consumer thread must wait until a fresh frame is available, because it's pointless to display the same old frame again and again. You can be lazy and use waitKey(30) instead. For the displaying thread, that's good enough.
VideoCapture. You don't do any error checking at all! You must check:
cap = cv2.VideoCapture("rtsp://test:[email protected]")
assert cap.isOpened()
...
and
while True:
ret, frame = cap.read()
if not ret:
break
...
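For illustration, a minimal sketch of the Event-based handshake described above (variable names are illustrative):
import threading
import cv2

frame = None
frame_ready = threading.Event()
stop = False

def reader(cap):
    global frame, stop
    while not stop:
        ok, f = cap.read()
        if not ok:
            break
        frame = f
        frame_ready.set()            # notify the consumer of a fresh frame

def display():
    global stop
    while not stop:
        if not frame_ready.wait(timeout=1.0):
            continue                 # no fresh frame yet
        frame_ready.clear()
        cv2.imshow('frame', frame)
        if cv2.waitKey(1) == 27:     # Esc to quit
            stop = True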
| https://stackoverflow.com/questions/69157605/ |
Follow-up to "In PyTorch how are layer weights and biases initialized by default?" | In the answer with most voted of this question, it says:
Most layers are initialized using Kaiming Uniform method. Example layers include Linear, Conv2d, RNN etc.
I was actually wondering: Where does one know this from? For example, I would like to know the default initialization of torch.nn.Conv2d and torch.nn.BatchNorm2d for PyTorch 1.9.0. For torch.nn.Linear, I found the answer here (from the second answer of the above mentioned question).
| Convolutional modules such as nn.Conv1d, nn.Conv2d, and nn.Conv3d inherit from the _ConvNd class. This class has a reset_parameters function implemented just like nn.Linear:
def reset_parameters(self) -> None:
# Setting a=sqrt(5) in kaiming_uniform is the same as initializing with
# uniform(-1/sqrt(k), 1/sqrt(k)), where k = weight.size(1) * prod(*kernel_size)
# For more details see:
# https://github.com/pytorch/pytorch/issues/15314#issuecomment-477448573
init.kaiming_uniform_(self.weight, a=math.sqrt(5))
if self.bias is not None:
fan_in, _ = init._calculate_fan_in_and_fan_out(self.weight)
bound = 1 / math.sqrt(fan_in)
init.uniform_(self.bias, -bound, bound)
As for nn.BatchNorm2d, it has reset_parameters and reset_running_stats function:
def reset_parameters(self) -> None:
self.reset_running_stats()
if self.affine:
init.ones_(self.weight)
init.zeros_(self.bias)
def reset_running_stats(self) -> None:
if self.track_running_stats:
# running_mean/running_var/num_batches... are registered at runtime depending
# if self.track_running_stats is on
self.running_mean.zero_() # type: ignore[operator]
self.running_var.fill_(1) # type: ignore[operator]
self.num_batches_tracked.zero_() # type: ignore[operator]
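If you want to override these defaults, a common pattern is to walk the modules with apply (a sketch; the chosen initializers are only an example, and model is assumed to be your instantiated network):
import torch.nn as nn

def init_weights(m):
    if isinstance(m, (nn.Conv2d, nn.Linear)):
        nn.init.xavier_uniform_(m.weight)
        if m.bias is not None:
            nn.init.zeros_(m.bias)

model.apply(init_weights)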
| https://stackoverflow.com/questions/69160276/ |
One of the variables modified by an inplace operation | I am relatively new to Pytorch. Here I want to use this model to generate some images; however, since it was written before PyTorch 1.5 and the gradient calculation has been fixed since then, this is the error message I get.
RuntimeError: one of the variables needed for gradient computation has been
modified by an inplace operation: [torch.cuda.FloatTensor [1, 512, 4, 4]]
is at version 2; expected version 1 instead.
Hint: enable anomaly detection to find the operation that
failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
I have looked at past examples and am not sure what the problem is here; I believe it is happening within this region, but I don't know where! Any help would be greatly appreciated!
def process(self, images, edges, masks):
self.iteration += 1
# zero optimizers
self.gen_optimizer.zero_grad()
self.dis_optimizer.zero_grad()
# process outputs
outputs = self(images, edges, masks)
gen_loss = 0
dis_loss = 0
# discriminator loss
dis_input_real = torch.cat((images, edges), dim=1)
dis_input_fake = torch.cat((images, outputs.detach()), dim=1)
dis_real, dis_real_feat = self.discriminator(dis_input_real) # in: (grayscale(1) + edge(1))
dis_fake, dis_fake_feat = self.discriminator(dis_input_fake) # in: (grayscale(1) + edge(1))
dis_real_loss = self.adversarial_loss(dis_real, True, True)
dis_fake_loss = self.adversarial_loss(dis_fake, False, True)
dis_loss += (dis_real_loss + dis_fake_loss) / 2
# generator adversarial loss
gen_input_fake = torch.cat((images, outputs), dim=1)
gen_fake, gen_fake_feat = self.discriminator(gen_input_fake) # in: (grayscale(1) + edge(1))
gen_gan_loss = self.adversarial_loss(gen_fake, True, False)
gen_loss += gen_gan_loss
# generator feature matching loss
gen_fm_loss = 0
for i in range(len(dis_real_feat)):
gen_fm_loss += self.l1_loss(gen_fake_feat[i], dis_real_feat[i].detach())
gen_fm_loss = gen_fm_loss * self.config.FM_LOSS_WEIGHT
gen_loss += gen_fm_loss
# create logs
logs = [
("l_d1", dis_loss.item()),
("l_g1", gen_gan_loss.item()),
("l_fm", gen_fm_loss.item()),
]
return outputs, gen_loss, dis_loss, logs
def forward(self, images, edges, masks):
edges_masked = (edges * (1 - masks))
images_masked = (images * (1 - masks)) + masks
inputs = torch.cat((images_masked, edges_masked, masks), dim=1)
outputs = self.generator(inputs) # in: [grayscale(1) + edge(1) + mask(1)]
return outputs
def backward(self, gen_loss=None, dis_loss=None):
if dis_loss is not None:
dis_loss.backward()
self.dis_optimizer.step()
if gen_loss is not None:
gen_loss.backward()
self.gen_optimizer.step()
Thank you!
| You can't compute the loss for the discriminator and for the generator in one go and have the both back-propagations back-to-back like this:
if dis_loss is not None:
dis_loss.backward()
self.dis_optimizer.step()
if gen_loss is not None:
gen_loss.backward()
self.gen_optimizer.step()
Here's the reason why: when you call self.dis_optimizer.step(), you effectively in-place modify the parameters of the discriminator, the very same that were used to compute gen_loss which you are trying to backpropagate on. This is not possible.
You have to compute dis_loss backpropagate, update the weights of the discriminator, and clear the gradients. Only then can you compute gen_loss with the newly updated discriminator weights. Finally, backpropagate on the generator.
This tutorial is a good walkthrough over a typical GAN training.
| https://stackoverflow.com/questions/69163522/ |
Function AddmmBackward returned an invalid gradient | import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import matplotlib.pyplot as plt
import numpy as np
import torch.optim as optim
class NeuralNetwork(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(1, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 3)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = torch.flatten(x, 1) # flatten all dimensions except batch
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = NeuralNetwork()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
from torchvision import datasets, transforms
from torch.utils.data import DataLoader, random_split
def UploadData(path, train):
#set up transforms for train and test datasets
train_transforms = transforms.Compose([transforms.Grayscale(num_output_channels=1), transforms.Resize(255), transforms.CenterCrop(224), transforms.RandomRotation(30),
transforms.RandomHorizontalFlip(), transforms.transforms.ToTensor()])
valid_transforms = transforms.Compose([transforms.Grayscale(num_output_channels=1), transforms.Resize(255), transforms.CenterCrop(224), transforms.RandomRotation(30),
transforms.RandomHorizontalFlip(), transforms.transforms.ToTensor()])
test_transforms = transforms.Compose([transforms.Grayscale(num_output_channels=1), transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])
#set up datasets from Image Folders
train_dataset = datasets.ImageFolder(path + '/train', transform=train_transforms)
valid_dataset = datasets.ImageFolder(path + '/validation', transform=valid_transforms)
test_dataset = datasets.ImageFolder(path + '/test', transform=test_transforms)
#set up dataloaders with batch size of 32
trainloader = torch.utils.data.DataLoader(train_dataset, batch_size=32, shuffle=True)
validloader = torch.utils.data.DataLoader(valid_dataset, batch_size=32, shuffle=True)
testloader = torch.utils.data.DataLoader(test_dataset, batch_size=32, shuffle=True)
return trainloader, validloader, testloader
trainloader, validloader, testloader = UploadData("/home/lns/research/dataset", True)
epochs = 5
min_valid_loss = np.inf
for e in range(epochs):
train_loss = 0.0
for data, labels in trainloader:
# Transfer Data to GPU if available
if torch.cuda.is_available():
print("using GPU for data")
data, labels = data.cuda(), labels.cuda()
# Clear the gradients
optimizer.zero_grad()
# Forward Pass
target = net(data)
# Find the Loss
loss = criterion(target,labels)
# Calculate gradients
loss.backward()
# Update Weights
optimizer.step()
# Calculate Loss
train_loss += loss.item()
valid_loss = 0.0
model.eval() # Optional when not using Model Specific layer
for data, labels in validloader:
# Transfer Data to GPU if available
if torch.cuda.is_available():
print("using GPU for data")
data, labels = data.cuda(), labels.cuda()
# Forward Pass
target = net(data)
# Find the Loss
loss = criterion(target,labels)
# Calculate Loss
valid_loss += loss.item()
print('Epoch ',e+1, '\t\t Training Loss: ',train_loss / len(trainloader),' \t\t Validation Loss: ',valid_loss / len(validloader))
if min_valid_loss > valid_loss:
print("Validation Loss Decreased(",min_valid_loss,"--->",valid_loss,") \t Saving The Model")
min_valid_loss = valid_loss
# Saving State Dict
torch.save(net.state_dict(), '/home/lns/research/MODEL.pth')
After searching a lot, I am asking for help. Can someone help me
understand why this error occurs during backpropagation?
I followed the PyTorch CNN tutorial and a GeeksforGeeks tutorial.
The dataset consists of X-ray images converted to grayscale and resized to 255.
Is my neural network wrong, or is the data not processed correctly?
| This is a size mismatch between the output of your CNN and the number of neurons in your first fully-connected layer. Because of missing padding, the number of elements when flattened is 16*4*4, i.e. 256 (and not 16*5*5):
self.fc1 = nn.Linear(256, 120)
Once modified, the model will run correctly:
>>> model = NeuralNetwork()
>>> model(torch.rand(1, 1, 28, 28)).shape
torch.Size([1, 3])
Alternatively, you can use an nn.LazyLinear, which will deduce the in_features argument during the very first inference based on its input shape.
self.fc1 = nn.LazyLinear(120)
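A quick sanity check (my addition), reusing the same dummy 28x28 input as above:
>>> model = NeuralNetwork()  # with self.fc1 = nn.LazyLinear(120)
>>> model(torch.rand(1, 1, 28, 28)).shape  # first forward materializes fc1
torch.Size([1, 3])
>>> model.fc1.in_features  # inferred from the flattened input
256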
| https://stackoverflow.com/questions/69163646/ |
Getting error with Pytorch lightning when passing model checkpoint | I am training a multi-label classification problem using Hugging face models. I am using Pytorch Lightning to train the model.
Here is the code:
And early stopping triggers when the validation loss hasn't improved for the last 2 epochs:
early_stopping_callback = EarlyStopping(monitor='val_loss', patience=2)
We can start the training process:
checkpoint_callback = ModelCheckpoint(
dirpath="checkpoints",
filename="best-checkpoint",
save_top_k=1,
verbose=True,
monitor="val_loss",
mode="min"
)
trainer = pl.Trainer(
logger=logger,
callbacks=[early_stopping_callback],
max_epochs=N_EPOCHS,
checkpoint_callback=checkpoint_callback,
gpus=1,
progress_bar_refresh_rate=30
)
# checkpoint_callback=checkpoint_callback,
As soon as I run this, I get this error:
~/.local/lib/python3.6/site-packages/pytorch_lightning/trainer/connectors/callback_connector.py in _configure_checkpoint_callbacks(self, checkpoint_callback)
75 if isinstance(checkpoint_callback, Callback):
76 error_msg += " Pass callback instances to the `callbacks` argument in the Trainer constructor instead."
---> 77 raise MisconfigurationException(error_msg)
78 if self._trainer_has_checkpoint_callbacks() and checkpoint_callback is False:
79 raise MisconfigurationException(
MisconfigurationException: Invalid type provided for checkpoint_callback: Expected bool but received <class 'pytorch_lightning.callbacks.model_checkpoint.ModelCheckpoint'>. Pass callback instances to the `callbacks` argument in the Trainer constructor instead.
How can I fix this issue?
| You can look up the description of the checkpoint_callback argument in the documentation page of pl.Trainer:
checkpoint_callback (bool) – If True, enable checkpointing. It will configure a default ModelCheckpoint callback if there is no user-defined ModelCheckpoint in callbacks.
You shouldn't pass your custom ModelCheckpoint to this argument. I believe what you are looking to do is to pass both the EarlyStopping and ModelCheckpoint in the callbacks list:
early_stopping_callback = EarlyStopping(monitor='val_loss', patience=2)
checkpoint_callback = ModelCheckpoint(
dirpath="checkpoints",
filename="best-checkpoint",
save_top_k=1,
verbose=True,
monitor="val_loss",
mode="min")
trainer = pl.Trainer(
logger=logger,
callbacks=[checkpoint_callback, early_stopping_callback],
max_epochs=N_EPOCHS,
gpus=1,
progress_bar_refresh_rate=30)
| https://stackoverflow.com/questions/69164634/ |
potential bug when upgrading to PyTorch 1.9 ImportError: cannot import name 'int_classes' from 'torch._six' | I installed PyTorch using
$ pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
I get this error:
(proxy) [jalal@goku proxynca_pp]$ CUDA_VISIBLE_DEVICES=0,1 python train.py --dataset cub --config config/cub.json --mode train --apex --seed 0
Traceback (most recent call last):
File "train.py", line 3, in <module>
import dataset
File "/scratch3/research/code/fashion/proxynca_pp/dataset/__init__.py", line 6, in <module>
from . import utils
File "/scratch3/research/code/fashion/proxynca_pp/dataset/utils.py", line 8, in <module>
from torch._six import int_classes as _int_classes
ImportError: cannot import name 'int_classes' from 'torch._six' (/scratch3/venv/proxy/lib/python3.8/site-packages/torch/_six.py)
The code is from this GitHub repo.
What in the code should I change to get it working?
(proxy) [jalal@goku proxynca_pp]$ pip freeze
h5py==3.4.0
numpy==1.21.2
Pillow==8.3.2
scipy==1.7.1
torch==1.9.0+cu111
torchaudio==0.9.0
torchvision==0.10.0+cu111
tqdm==4.62.2
typing-extensions==3.10.0.2
| The easiest solution would be to just set int_classes = int instead of importing it from _six.
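A minimal, version-tolerant sketch of that change in dataset/utils.py (the try/except fallback is my suggestion, not from the original answer):
try:
    from torch._six import int_classes as _int_classes  # works on PyTorch < 1.9
except ImportError:
    _int_classes = int  # torch._six no longer exports int_classes in 1.9+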
| https://stackoverflow.com/questions/69170518/ |
installing NVIDIA Apex for Python 3.8.5 and compatible with PyTorch 1.9 | I am running code that apparently requires NVIDIA Apex (I initially didn't know and installed the wrong apex). I am unsure how to fix the final error:
(proxy) [jalal@goku proxynca_pp]$ CUDA_VISIBLE_DEVICES=0,1 python train.py --dataset cub --config config/cub.json --mode train --apex --seed 0
(1024, 4096)
train.py:12: MatplotlibDeprecationWarning: The 'warn' parameter of use() is deprecated since Matplotlib 3.1 and will be removed in 3.3. If any parameter follows 'warn', they should be pass as keyword, not positionally.
matplotlib.use('agg', warn=False, force=True)
Traceback (most recent call last):
File "train.py", line 70, in <module>
from apex import amp
File "/scratch3/venv/proxy/lib/python3.8/site-packages/apex/__init__.py", line 13, in <module>
from pyramid.session import UnencryptedCookieSessionFactoryConfig
ImportError: cannot import name 'UnencryptedCookieSessionFactoryConfig' from 'pyramid.session' (unknown location)
After I got the above error, I tried this answer: https://stackoverflow.com/a/67188946/2414957
(proxy) [jalal@goku proxynca_pp]$ pip uninstall apex
Found existing installation: apex 0.9.10.dev0
Uninstalling apex-0.9.10.dev0:
Would remove:
/scratch3/venv/proxy/lib/python3.8/site-packages/apex-0.9.10.dev0-py3.8.egg-info
/scratch3/venv/proxy/lib/python3.8/site-packages/apex/*
Proceed (Y/n)? y
Successfully uninstalled apex-0.9.10.dev0
(proxy) [jalal@goku proxynca_pp]$ git clone https://github.com/NVIDIA/apex
Cloning into 'apex'...
remote: Enumerating objects: 8256, done.
remote: Counting objects: 100% (343/343), done.
remote: Compressing objects: 100% (192/192), done.
remote: Total 8256 (delta 204), reused 240 (delta 139), pack-reused 7913
Receiving objects: 100% (8256/8256), 14.20 MiB | 0 bytes/s, done.
Resolving deltas: 100% (5605/5605), done.
(proxy) [jalal@goku proxynca_pp]$ cd apex
(proxy) [jalal@goku apex]$ pip install -v --disable-pip-version-check --no-cache-dir \
> --global-option="--cpp_ext" --global-option="--cuda_ext" ./
/scratch3/venv/proxy/lib/python3.8/site-packages/pip/_internal/commands/install.py:229: UserWarning: Disabling all use of wheels due to the use of --build-option / --global-option / --install-option.
cmdoptions.check_install_build_global(options)
Using pip 21.2.4 from /scratch3/venv/proxy/lib/python3.8/site-packages/pip (python 3.8)
Processing /scratch3/research/code/fashion/proxynca_pp/apex
DEPRECATION: A future pip version will change local packages to be built in-place without first copying to a temporary directory. We recommend you use --use-feature=in-tree-build to test your packages with this new behavior before it becomes the default.
pip 21.3 will remove support for this functionality. You can find discussion regarding this at https://github.com/pypa/pip/issues/7555.
Running command python setup.py egg_info
torch.__version__ = 1.9.0+cu111
running egg_info
creating /scratch/tmp/pip-pip-egg-info-yc32vm37/apex.egg-info
writing /scratch/tmp/pip-pip-egg-info-yc32vm37/apex.egg-info/PKG-INFO
writing dependency_links to /scratch/tmp/pip-pip-egg-info-yc32vm37/apex.egg-info/dependency_links.txt
writing top-level names to /scratch/tmp/pip-pip-egg-info-yc32vm37/apex.egg-info/top_level.txt
writing manifest file '/scratch/tmp/pip-pip-egg-info-yc32vm37/apex.egg-info/SOURCES.txt'
reading manifest file '/scratch/tmp/pip-pip-egg-info-yc32vm37/apex.egg-info/SOURCES.txt'
writing manifest file '/scratch/tmp/pip-pip-egg-info-yc32vm37/apex.egg-info/SOURCES.txt'
/scratch/tmp/pip-req-build-fg_khhkt/setup.py:67: UserWarning: Option --pyprof not specified. Not installing PyProf dependencies!
warnings.warn("Option --pyprof not specified. Not installing PyProf dependencies!")
Skipping wheel build for apex, due to binaries being disabled for it.
Installing collected packages: apex
Running command /scratch3/venv/proxy/bin/python3.8 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/scratch/tmp/pip-req-build-fg_khhkt/setup.py'"'"'; __file__='"'"'/scratch/tmp/pip-req-build-fg_khhkt/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' --cpp_ext --cuda_ext install --record /scratch/tmp/pip-record-u812zb2v/install-record.txt --single-version-externally-managed --compile --install-headers /scratch3/venv/proxy/include/site/python3.8/apex
torch.__version__ = 1.9.0+cu111
/scratch/tmp/pip-req-build-fg_khhkt/setup.py:67: UserWarning: Option --pyprof not specified. Not installing PyProf dependencies!
warnings.warn("Option --pyprof not specified. Not installing PyProf dependencies!")
Compiling cuda extensions with
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:01_CDT_2018
Cuda compilation tools, release 10.0, V10.0.130
from /usr/local/cuda-10.0/bin
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/scratch/tmp/pip-req-build-fg_khhkt/setup.py", line 159, in <module>
check_cuda_torch_binary_vs_bare_metal(CUDA_HOME)
File "/scratch/tmp/pip-req-build-fg_khhkt/setup.py", line 99, in check_cuda_torch_binary_vs_bare_metal
raise RuntimeError("Cuda extensions are being compiled with a version of Cuda that does " +
RuntimeError: Cuda extensions are being compiled with a version of Cuda that does not match the version used to compile Pytorch binaries. Pytorch binaries were compiled with Cuda 11.1.
In some cases, a minor-version mismatch will not cause later errors: https://github.com/NVIDIA/apex/pull/323#discussion_r287021798. You can try commenting out this check (at your own risk).
Running setup.py install for apex ... error
ERROR: Command errored out with exit status 1: /scratch3/venv/proxy/bin/python3.8 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/scratch/tmp/pip-req-build-fg_khhkt/setup.py'"'"'; __file__='"'"'/scratch/tmp/pip-req-build-fg_khhkt/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' --cpp_ext --cuda_ext install --record /scratch/tmp/pip-record-u812zb2v/install-record.txt --single-version-externally-managed --compile --install-headers /scratch3/venv/proxy/include/site/python3.8/apex Check the logs for full command output.
I have these packages installed:
(proxy) [jalal@goku apex]$ pip freeze
anykeystore==0.2
certifi==2021.5.30
charset-normalizer==2.0.4
cryptacular==1.6.2
cycler==0.10.0
defusedxml==0.7.1
greenlet==1.1.1
h5py==3.4.0
hupper==1.10.3
idna==3.2
joblib==1.0.1
kiwisolver==1.3.2
MarkupSafe==2.0.1
matplotlib==3.2.0
numpy==1.21.2
oauthlib==3.1.1
PasteDeploy==2.1.1
pbkdf2==1.3
Pillow==8.3.2
plaster==1.0
plaster-pastedeploy==0.7
pyparsing==2.4.7
pyramid==2.0
pyramid-mailer==0.15.1
python-dateutil==2.8.2
python3-openid==3.2.0
repoze.sendmail==4.4.1
requests==2.26.0
requests-oauthlib==1.3.0
scikit-learn==0.24.2
scipy==1.7.1
six==1.16.0
sklearn==0.0
SQLAlchemy==1.4.23
threadpoolctl==2.2.0
torch==1.9.0+cu111
torchaudio==0.9.0
torchvision==0.10.0+cu111
tqdm==4.62.2
transaction==3.0.1
translationstring==1.4
typing-extensions==3.10.0.2
urllib3==1.26.6
velruse==1.1.1
venusian==3.0.0
WebOb==1.8.7
WTForms==2.3.3
wtforms-recaptcha==0.3.2
zope.deprecation==4.4.0
zope.interface==5.4.0
zope.sqlalchemy==1.6
And here's the code is from this GitHub repo.
Edit: I found the steps through a stackoverflow answer that I can't find now (linked above). I don't know how to find the proper link or installation that is compatible with PyTorch 1.9.
FYI, the git repo has no installation instructions, hence I am installing things blindly.
| Installing CUDA 11.1, adding the following to ~/.bashrc, sourcing ~/.bashrc, and finally creating the symlink made it work:
export CUDA_HOME=/usr/local/cuda-11.1
export PATH=/usr/local/cuda-11.1/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-11.1/lib64:$LD_LIBRARY_PATH
This eliminates the need to uninstall CUDA 10.2, which may be needed later for other projects. Simply exporting the paths without creating the symlink didn't work.
$ sudo ln -sfT /usr/local/cuda-11.1/ /usr/local/cuda
^ Last command is assuming you have multiple CUDA versions installed in your machine.
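As a quick sanity check (my addition, not part of the original answer), you can confirm that the toolkit PyTorch was built against now matches the one on your PATH:
import torch
print(torch.version.cuda)         # should print '11.1' to match nvcc on the PATH
print(torch.cuda.is_available())  # True once the driver and toolkit line up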
For further information please read this GitHub issue.
| https://stackoverflow.com/questions/69170666/ |
How to calculate (N,*, input) matrix class (output, input) matrix(the result is (N,*,output)) by pytorch? | I want to rewrite what nn.Linear does. The problem is that the input size is (N, *, in_feature) and the weight size is (out_feature, in_feature). If I want the result to be (N, *, out_feature) using Python, how should I write the code?
input @ weight.T
is not right, sadly.
| The sizes need to match in order to apply @, i.e. __matmul__: the input x is shaped (N, *, in_feature) and the weight tensor w is shaped (out_feature, in_feature).
x = torch.rand(2, 4, 4, 10)
w = torch.rand(5, 10)
Taking the transpose of w will get you a shape of (in_feature, out_feature). Applying __matmul__ between x and w.T will reduce down to a shape of (N, *, out_feature):
>>> z = [email protected]
>>> z.shape
torch.Size([2, 4, 4, 5])
Or equivalently using torch.matmul:
>>> z = torch.matmul(x, w.T)
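An equivalent einsum formulation (my addition) makes the contracted dimension explicit:
>>> torch.einsum('...i,oi->...o', x, w).shape
torch.Size([2, 4, 4, 5])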
| https://stackoverflow.com/questions/69171235/ |
PyTorch: How to normalize a tensor when the image is cropped randomly? | Let's say we are working with the CIFAR-10 dataset and we want to apply some data augmentation and additionally normalize the tensors. Here is some reproducible code for this
from torchvision import transforms, datasets
import matplotlib.pyplot as plt
trafo = transforms.Compose([transforms.Pad(padding = 4, fill = 0, padding_mode = "constant"),
transforms.RandomHorizontalFlip(p=0.5),
transforms.RandomCrop(size = (32, 32)),
transforms.ToTensor(),
transforms.Normalize(mean = (0.0, 0.0, 0.0), std = (1.0, 1.0, 1.0))]
)
cifar10_full = datasets.CIFAR10(root = "CIFAR-10", train = True, transform = trafo, target_transform = None, download = True)
The normalization I chose so far would do nothing with the tensors since I set the mean and std to 0 and 1 respectively. According to the documentation of torchvision.transforms.Normalize, the provided means and standard deviations are for each channel of the input. However, the problem is that I cannot calculate the mean across each channel because of the random flipping and cropping. Therefore, my idea was something along the following lines:
trafo_1 = transforms.Compose([transforms.Pad(padding = 4, fill = 0, padding_mode = "constant"),
transforms.RandomHorizontalFlip(p=0.5),
transforms.RandomCrop(size = (32, 32)),
transforms.ToTensor()]
)
cifar10_full = datasets.CIFAR10(root = "CIFAR-10", train = True, transform = trafo_1, target_transform = None, download = True)
Now I could calculate the mean across each channel of the input and then normalize the tensors again. However, I cannot simply use transforms.Normalize() as cifar10_full is not the original dataset anymore. How could I proceed instead? (One solution would be to simply fix the seed of the random generators, i.e. use torch.manual_seed(0), but I would like to avoid this for now...)
| The mean and std are not computed per tensor but over the whole dataset. What you are trying to do doesn't really matter: you just want a scale that is good enough for the whole data representation. There is no exact mean or std you can get under random augmentations, so just use the mean and std of the actual data, which is pretty much the standard.
First, try to calculate the mean and std of the dataset (try random sampling), and use that for normalization.
# Calculate the mean, std of the complete dataset
import glob
import cv2
import numpy as np
import tqdm
import random
# calculating 3 channel mean and std for image dataset
means = np.array([0, 0, 0], dtype=np.float32)
stds = np.array([0, 0, 0], dtype=np.float32)
total_images = 0
randomly_sample = 5000
for f in tqdm.tqdm(random.sample(glob.glob("dataset_path/**/*.jpg", recursive=True), randomly_sample)):
img = cv2.imread(f)
means += img.mean(axis=(0,1))
stds += img.std(axis=(0,1))
total_images += 1
means = means / (total_images * 255.)
stds = stds / (total_images * 255.)
print("Total images: ", total_images)
print("Means: ", means)
print("Stds: ", stds)
Consider a simple scenario: will your images be augmented this way during actual testing or inference too? Probably not; you will have clean images that closely match the mean and std of the clean version of the data, so it is pointless to compute statistics over augmented images (a few random samples are enough), unless you want to apply TTA.
If you want to apply TTA too, then you can go ahead and run the augmentations on the images, randomly sample, and take the mean and std of those augmented images.
| https://stackoverflow.com/questions/69176748/ |
How does Learning Rate affect Gradients in Back Prop | I was wondering whether the learning rate is used to update the weights during backpropagation through the layers of a neural network. How and where do the weights get updated?
I can't see the connection between optimizer, learning rate, and the backpropagation function.
| Sure, a fair question that comes to mind is how backpropagation and the optimizer are related in all of this, and how the parameters are updated by the optimizer:
optimizer.zero_grad()
loss.backward()
optimizer.step()
Indeed, there doesn't seem to be any link between the optimizer and the loss, let alone the parameters of your model.
Optimizer
Every tensor object has a grad attribute which either contains a tensor representing its gradient, or None if it doesn't require gradient computation or simply doesn't have a gradient yet.
To optimize parameters in PyTorch you would go about initializing an optimizer by passing a list or iterator over those very parameters you want this optimizer to act upon:
mlp = nn.Sequential(nn.Linear(10, 2), nn.Linear(2, 1))
optimizer = torch.optim.SGD(mlp.parameters(), lr=1.0e-3)
An optimizer is not tasked with computing gradients, this is performed by another system in PyTorch.
When you call optimizer.step(), the optimizer will go over each provided parameter and update it based on the optimizer's update rule (i.e. the optimization method used) and the gradient associated with each parameter (namely the grad attribute).
For SGD it will be something like (leaving no_grad considerations aside):
for param in parameters:
param -= lr*param.grad
Backpropagation
To actually compute the backward propagation you would use torch.autograd.backward which usually comes in the form of torch.Tensor.backward (as a torch.Tensor method).
This function is a mutating operation which will update the grad attribute of all leaf tensor nodes requiring gradient computation. In other words, it will compute the gradient of the tensor that backward was called on w.r.t. each parameter of the model.
For example with model mlp, we backpropagate on a dummy loss:
>>> for x in mlp.parameters():
... print(tuple(x.shape), x.grad)
(2, 10) None
(2,) None
(1, 2) None
(1,) None
After inference and backpropagation on a random input:
>>> mlp(torch.rand(1, 10)).mean().backward()
Each grad attribute of the model's tensor parameter has been updated by that call:
>>> for x in mlp.parameters():
... print(tuple(x.shape), x.grad is not None, tuple(x.grad.shape))
(2, 10) True (2, 10)
(2,) True (2,)
(1, 2) True (1, 2)
(1,) True (1,)
Then you can call optimizer.step() to effectively perform the parameter update based on those gradients. Do note, the optimizer can only affect tensors that have been provided to it on initialization (recall the torch.optim.SGD(mlp.parameters(), lr=1.0e-3) part).
Finally you can zero the gradient of those parameter from the optimizer directly with zero_grad:
>>> optimizer.zero_grad()
This is roughly a shorthand for:
for param in mlp.parameters():
param.grad.zero_()
But its usefulness is a lot more apparent when using multiple parameter groups and/or multiple optimizers on the same model.
| https://stackoverflow.com/questions/69178744/ |
Why do repeated calls to torch.cuda.is_available() all return True? | I'm reading code that makes multiple calls to torch.cuda.is_available(). Each time it prepares to query a net, calls a function (shown below) that uses torch.cuda.is_available() to set the device (cuda or cpu) used. I don't understand why calls after the first one don't return False, thus pushing computation to the cpu.
Is the GPU released when the code leaves the method? Or does each call take up only a relatively small part of the GPU, so that the code would need to make multiple calls to this method before computation was pushed to the CPU?
Code in question:
def computeProposals(imageName):
app.config['args'].img = imageName
print('ARGs --img = ', app.config['args'].img)
# Setup device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# Setup Model
Config = namedtuple('Config', ['iSz', 'oSz', 'gSz', 'batch'])
config = Config(iSz=160, oSz=56, gSz=112, batch=1) # default for training
model = (models.__dict__[app.config['args'].arch](config))
model = load_pretrain(model, app.config['args'].resume)
model = model.eval().to(device)
scales_range = np.arange(app.config['args'].si,
app.config['args'].sf + app.config['args'].ss,
app.config['args'].ss)
scales = [2 ** i for i in scales_range]
meanstd = {'mean': [0.485, 0.456, 0.406], 'std': [0.229, 0.224, 0.225]}
infer = Infer(nps=app.config['args'].nps, scales=scales, meanstd=meanstd,
model=model, device=device)
print('| start')
tic = time.time()
im = np.array(Image.open(app.config['args'].img).convert('RGB'),
dtype=np.float32)
h, w = im.shape[:2]
img = np.expand_dims(np.transpose(im, (2, 0, 1)), axis=0).astype(np.float32)
img = torch.from_numpy(img / 255.).to(device)
infer.forward(img)
masks, scores = infer.getTopProps(.2, h, w)
toc = time.time() - tic
print('| done in %05.3f s' % toc)
return masks, scores
Here is the code that makes the repeated calls to computeProposals:
@app.route('/')
def index():
global cc_data
base_dir = app.config['base_dir']
img_name = app.config['img_name']
img_dir = os.path.join(base_dir, 'images')
img_path = os.path.join(img_dir, img_name)
print(img_path)
print('Loading image and proposals, please wait')
img = skio.imread(img_path)
img = img[:, :, :3]
masks, scores = computeProposals(img_path)
session['pos_wts'] = np.zeros(masks.shape[2], dtype=np.float64).tolist()
session['neg_wts'] = np.zeros(masks.shape[2], dtype=np.float64).tolist()
masks = np.transpose(masks, (2, 0, 1))
dilated = dilate_proposals(masks)
print('Loading done')
img_h = img.shape[0]
img_w = img.shape[1]
print('Image height {} and width {}'.format(img_h, img_w))
rendered_img = draw_buttons(np.copy(img), img_w)
if app.config['DoneFlag'] == 1:
rendered_img = draw_end(rendered_img, img_h)
img_stream = embed_image_html(rendered_img)
# Create dicts with session variables
cc_data = {'img_h': img_h, 'img_w': img_w, 'masks': masks,
'scores': scores, 'dilated': dilated,
'orig': np.copy(img).tolist(), 'render': rendered_img.tolist(),
'clicks': []}
session['response'] = {'input_img': img_stream,
'im_width': img_w, 'im_height': img_h,
'show_error': False}
return render_template('index.html', response=session['response'])
| Function torch.cuda.is_available() does not report whether a CUDA device is currently in use or whether there is memory left on the device(s). This means the returned value does not depend on the number of processes running or on the memory already allocated to those processes. It only returns whether one or more CUDA devices are accessible by PyTorch.
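If you want to inspect actual usage rather than mere availability, a minimal sketch (my addition) would be:
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
if device.type == 'cuda':
    print(torch.cuda.memory_allocated(device))  # bytes currently allocated by tensors
    print(torch.cuda.memory_reserved(device))   # bytes reserved by the caching allocator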
| https://stackoverflow.com/questions/69179102/ |