instruction | input | output | source
---|---|---|---
How to model this function in PyTorch | I want to train a feed-forward Neural Network with a single hidden layer that models the below equation.
h = g(W1.input1 + V1.input2 + b)
output1 = f(W2.h + b_w)
output2 = f(V2.h + b_v)
f and g are activation functions, h is the hidden representation, W1, W2, V1, V2 are Weight matrices, b, b_w, b_v are respective biases.
I can't concatenate 2 inputs because that will result in a single Weight matrix. I can't train two separate NNs because the latent representation will miss the interaction between 2 inputs. Any help is much appreciated. I have also attached the NN diagram below
| I decided to write my own linear layer which calculates h = g(W1.input1 + V1.input2 + b). I do this by creating two parameters, W1 and V1, multiplying input1 and input2 by them, and then adding everything together. The code is given below:
import torch
import torch.nn as nn
import math
class MyLinearLayer(nn.Module):
    def __init__(self, size_in1, size_out1):
        super().__init__()
        self.size_in1, self.size_out1 = size_in1, size_out1
        W_1 = torch.Tensor(size_out1, size_in1)
        V_1 = torch.Tensor(size_out1, size_in1)
        self.W1 = nn.Parameter(W_1)
        self.V1 = nn.Parameter(V_1)
        bias = torch.Tensor(size_out1)
        self.bias = nn.Parameter(bias)

    def forward(self, x):
        w_times_x = torch.mm(x[0], self.W1.t())
        v_times_x = torch.mm(x[1], self.V1.t())
        weight_times_x = torch.add(w_times_x, v_times_x)
        return torch.add(weight_times_x, self.bias)  # W1.x1 + V1.x2 + b
class NN(nn.Module):
    def __init__(self, in_ch, h_ch, out_ch):
        super().__init__()
        self.input = MyLinearLayer(in_ch, h_ch)
        self.W2 = nn.Linear(h_ch, out_ch)
        self.V2 = nn.Linear(h_ch, out_ch)
        self.act = nn.ReLU()

    def forward(self, i1, i2):
        # I pass in stacked input
        inp = torch.stack([i1, i2])
        h = self.act(self.input(inp))
        o1 = self.act(self.W2(h))
        o2 = self.act(self.V2(h))
        return o1, o2
model = NN(5, 10, 5)
o1,o2 = model(torch.rand(2, 5), torch.rand(2, 5))
for name, param in model.named_parameters():
    if param.requires_grad:
        print(name, '->', param.data.shape)
This outputs the 7 parameters to be trained:
input.W1 -> torch.Size([10, 5])
input.V1 -> torch.Size([10, 5])
input.bias -> torch.Size([10])
W2.weight -> torch.Size([5, 10])
W2.bias -> torch.Size([5])
V2.weight -> torch.Size([5, 10])
V2.bias -> torch.Size([5])
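Note that torch.Tensor(size_out1, size_in1) allocates uninitialized memory, so in practice the custom parameters should be explicitly initialized in __init__. A minimal sketch (my addition, not part of the original answer), assuming you want to mirror nn.Linear's default initialization:
# inside MyLinearLayer.__init__, after creating the parameters:
nn.init.kaiming_uniform_(self.W1, a=math.sqrt(5))
nn.init.kaiming_uniform_(self.V1, a=math.sqrt(5))
bound = 1 / math.sqrt(self.size_in1)
nn.init.uniform_(self.bias, -bound, bound)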
Thanks for all the inputs @aretor, @Ivan, and @DerekG
| https://stackoverflow.com/questions/71320267/ |
How can I reduce the dimension of a tensor after using Softmax? | I got a tensor of scores (let's call it logits_tensor) that has shape (1910, 164, 33).
Taking a look at it, logits_tensor[0][0]:
tensor([-2.5916, -1.5290, -0.8218, -0.8882, -2.0961, -2.1064, -0.7842, -1.5200,
-2.1324, -1.5561, -2.4731, -2.1933, -2.8489, -1.8257, -1.8033, -1.8771,
-2.8365, 0.6690, -0.6895, -1.7054, -2.4862, -0.8104, -1.5395, -1.1351,
-2.7154, -1.7646, -2.6595, -2.0591, -2.7554, -1.8661, -2.7512, -2.0655,
5.7374])
Now, by applying a softmax
probs_tensor = torch.nn.functional.softmax(logits_tensor, dim=-1)
I obtain another tensor with the same dimensions that contains probabilities, probs_tensor[0][0]:
tensor([2.3554e-04, 6.8166e-04, 1.3825e-03, 1.2937e-03, 3.8660e-04, 3.8263e-04,
1.4356e-03, 6.8778e-04, 3.7283e-04, 6.6341e-04, 2.6517e-04, 3.5078e-04,
1.8211e-04, 5.0665e-04, 5.1810e-04, 4.8127e-04, 1.8438e-04, 6.1396e-03,
1.5782e-03, 5.7138e-04, 2.6173e-04, 1.3984e-03, 6.7454e-04, 1.0107e-03,
2.0812e-04, 5.3857e-04, 2.2009e-04, 4.0118e-04, 1.9996e-04, 4.8660e-04,
2.0079e-04, 3.9860e-04, 9.7570e-01])
What I'd like to obtain is a tensor of shape (1910, 164) that contains the indices of the max probabilities (for each of the 164 elements) shown above, like this:
predictions[0]
> tensor([32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 1, 17, 17, 17,
17, 17, 17, 17, 17, 17, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32,
32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
Note that "32" is the index of the higher probability element in probs_tensor[0][0]. The same task can be achieved by using torch.argmax but I need the softmax step.
| Indeed you can apply torch.argmax on the tensor:
>>> logits_tensor = torch.rand(1910, 164, 33)
>>> probs_tensor = logits_tensor.softmax(-1)
>>> probs_tensor.argmax(-1).shape
torch.Size([1910, 164])
Do note, applying argmax on probs_tensor is identical to applying it on logits_tensor: softmax is monotonically increasing, so the logit with the highest value will remain the one with the highest probability mass.
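A quick sanity check of that claim (my addition):
>>> torch.equal(logits_tensor.argmax(-1), probs_tensor.argmax(-1))
True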
| https://stackoverflow.com/questions/71322361/ |
didn't match because some of the keywords were incorrect: dtype | The following code is an instance method built in a class
def get_samples_from_component(self, batchSize):
    SMALL = torch.tensor(1e-10, dtype=torch.float64, device=local_device)
    a_inv = torch.pow(self.kumar_a, -1)
    b_inv = torch.pow(self.kumar_b, -1)
    r1 = torch.tensor(SMALL, dtype=torch.float64, device=self.device)
    r2 = torch.tensor(1 - SMALL, dtype=torch.float64, device=self.device)
    v_means = torch.mul(self.kumar_b, beta_fn(1. + a_inv, self.kumar_b)).to(device=self.device)
    u = torch.distributions.uniform.Uniform(low=r1, high=r2).sample([1]).squeeze()
    v_samples = torch.pow(1 - torch.pow(u, b_inv), a_inv).to(device=self.device)
    if v_samples.ndim > 2:
        v_samples = v_samples.squeeze()
    v0 = v_samples[:, -1].pow(0).reshape(v_samples.shape[0], 1)
    v1 = torch.cat([v_samples[:, :self.z_dim - 1], v0], dim=1)
    n_samples = v1.size()[0]
    n_dims = v1.size()[1]
    components = torch.zeros((n_samples, n_dims)).to(device=self.device)
    for k in range(n_dims):
        if k == 0:
            components[:, k] = v1[:, k]
        else:
            components[:, k] = v1[:, k] * torch.stack([(1 - v1[:, j]) for j in range(n_dims) if j < k]).prod(axis=0)
    # ensure stick segments sum to 1
    assert_almost_equal(torch.ones(n_samples, device=self.device).cpu().numpy(),
                        components.sum(axis=1).detach().cpu().numpy(),
                        decimal=4, err_msg='stick segments do not sum to 1')
    print(f'size of sticks: {components}')
    components = torch.IntTensor(torch.argmax(torch.cat(self.compose_stick_segments(v_means), 1), 1), dtype=torch.long, device=self.device)
    components = torch.cat([torch.range(0, batchSize).unsqueeze(1), components.unsqueeze(1)], 1)
    print(f'size of sticks: {components}')
    all_z = []
    for d in range(self.z_dim):
        temp_z = torch.cat(1, [self.z_sample_list[k][:, d].unsqueeze(1) for k in range(self.K)])
        all_z.append(gather_nd(temp_z, components).unsqueeze(1))
    out = torch.cat(all_z, 1)
    return out
By running my code I get the following error message
components = torch.IntTensor(torch.argmax(torch.cat( self.compose_stick_segments(v_means),1) ,1), dtype=torch.long, device=self.device)
TypeError: new() received an invalid combination of arguments - got (Tensor, device=torch.device, dtype=torch.dtype), but expected one of:
* (*, torch.device device)
* (torch.Storage storage)
* (Tensor other)
* (tuple of ints size, *, torch.device device)
didn't match because some of the keywords were incorrect: dtype
* (object data, *, torch.device device)
didn't match because some of the keywords were incorrect: dtype
I would appreciate it if someone could suggest a solution for this error.
| v_means is already a tensor, so try to simply remove the redundant tensor construction in:
components = torch.IntTensor(torch.argmax(torch.cat( self.compose_stick_segments(v_means),1) ,1), dtype=torch.long, device=self.device)
to:
components = torch.argmax(torch.cat( self.compose_stick_segments(v_means),1) ,1)
or simply remove the dtype, since torch.IntTensor casts to integer anyway:
components = torch.IntTensor(torch.argmax(torch.cat( self.compose_stick_segments(v_means),1) ,1), device=self.device)
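A more idiomatic alternative (a sketch of mine, not from the original answer) is to cast the argmax result with .to(), which accepts both dtype and device:
components = torch.argmax(torch.cat(self.compose_stick_segments(v_means), 1), 1).to(dtype=torch.long, device=self.device)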
| https://stackoverflow.com/questions/71326214/ |
torchvision.datasets.ImageFolder is giving me a 3x3 grid of images instead of 1 image |
I can't figure out why it's giving me 9 gray images in a 3x3 grid instead of just one color image (original image is not gray and has RGB channels). I have spent 5 hours on this. Thanks for the help.
Here is my code
test_path = "asl_data/test/" #path to the folder
test_data = torchvision.datasets.ImageFolder(test_path, transform=torchvision.transforms.ToTensor())
def test32():
    for x, y in test_data:
        print(x.shape)
        x = x.reshape(533, 800, 3)
        plt.axis("off")
        plt.imshow(x)
        plt.show()
        plt.axis("off")
        plt.imshow(x[:176, :267, :])
        break
test32()
| Classic.
You reshape instead of permute: reshape just re-reads the values in memory order, so the channel-first data gets scrambled across the spatial dimensions, while permute actually reorders the axes.
See this thread on the crucial difference between the two.
Fix:
x = x.permute((1, 2, 0))
plt.imshow(x)
A simple visual example:
x, y = test_data[0] # take one image
x.shape # torch.Size([3, 223, 320])
# see the difference
fig, ax = plt.subplots(1,2)
ax[0].imshow(x.numpy().reshape(223, 320, 3))
ax[0].set_title('Wrong reshape instead of permute')
ax[1].imshow(x.permute((1,2,0)))
ax[1].set_title('correctly permuting')
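If the difference is still unclear, here is a tiny numeric demo (a sketch of mine) of what each operation does:
t = torch.arange(6).reshape(2, 3)  # tensor([[0, 1, 2], [3, 4, 5]])
t.reshape(3, 2)   # tensor([[0, 1], [2, 3], [4, 5]]) - just re-reads memory order
t.permute(1, 0)   # tensor([[0, 3], [1, 4], [2, 5]]) - a true transpose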
| https://stackoverflow.com/questions/71329696/ |
How can I calculate the F1-score and other classification metrics from a faster-RCNN? (object detection in PyTorch) | I'm trying to wrap my head around this but struggling to understand how I can compute the f1-score in an object detection task.
Ideally, I would like to know false positives, true positives, false negatives and true negatives for every target in the image (it's a binary problem with an object in the image as one class and the background as the other class).
Eventually I would also like to extract the false positive bounding boxes from the image. I'm not sure if this is efficient but I'd save the image names and bbox predictions and whether they are false positives etc. into a numpy file.
I currently have this set up with a batch size of 1 so I can apply a non-maximum suppression algorithm per image:
def apply_nms(orig_prediction, iou_thresh=0.3):
    # torchvision returns the indices of the bboxes to keep
    keep = torchvision.ops.nms(orig_prediction['boxes'], orig_prediction['scores'], iou_thresh)

    final_prediction = orig_prediction
    final_prediction['boxes'] = final_prediction['boxes'][keep]
    final_prediction['scores'] = final_prediction['scores'][keep]
    final_prediction['labels'] = final_prediction['labels'][keep]

    return final_prediction
cpu_device = torch.device("cpu")
model.eval()
with torch.no_grad():
    for images, targets in valid_data_loader:
        images = list(img.to(device) for img in images)
        outputs = model(images)
        outputs = [{k: v.to(cpu_device) for k, v in t.items()} for t in outputs]
        predictions = apply_nms(outputs[0], iou_thresh=0.3)
Any idea on how I can determine the aforementioned classification metrics and f1-score?
I've come across this line in an evaluation code provided by torchvision and wondering whether it would help me going forward:
res = {target["image_id"].item(): output for target, output in zip(targets, outputs)}
| The use of the terms precision, recall, and F1 score in object detection is slightly confusing because these metrics were originally used for binary evaluation tasks (e.g. classification). In any case, in object detection they have slightly different meanings:
let:
TP - set of predicted objects that are successfully matched to a ground truth object (above IOU threshold for whatever dataset you're using, generally 0.5 or 0.7)
FP - set of predicted objects that were not successfully matched to a ground truth object
FN - set of ground truth objects that were not successfully matched to a predicted object
Precision: TP / (TP + FP)
Recall: TP / (TP + FN)
F1: 2 * Precision * Recall / (Precision + Recall)
You can find many implementations of the matching step (matching ground truth and predicted objects), generally provided with a dataset for evaluation, or you can implement it yourself. I'll suggest the py-motmetrics repository.
A simple implementation of the IOU calculation might look like:
def iou(a, b):
    """
    Description
    -----------
    Calculates intersection over union for a pair of boxes

    Parameters
    ----------
    a : tensor of size [4]
        bounding box in (xmin, ymin, xmax, ymax) form
    b : tensor of size [4]
        bounding box in (xmin, ymin, xmax, ymax) form

    Returns
    -------
    iou - float between [0,1]
        iou for a and b
    """
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])

    minx = max(a[0], b[0])
    maxx = min(a[2], b[2])
    miny = max(a[1], b[1])
    maxy = min(a[3], b[3])

    intersection = max(0, maxx - minx) * max(0, maxy - miny)
    union = area_a + area_b - intersection
    iou = intersection / union
    return iou
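To turn IOU values into the TP/FP/FN counts defined above, you still need the matching step. Here is a minimal greedy matching sketch (a simple scheme of my own, not taken from py-motmetrics) built on the iou function above:
def match_counts(pred_boxes, gt_boxes, iou_thresh=0.5):
    # greedy one-to-one matching of predictions to ground-truth boxes
    matched = set()
    tp = 0
    for p in pred_boxes:
        best_iou, best_j = 0.0, None
        for j, g in enumerate(gt_boxes):
            if j in matched:
                continue
            overlap = iou(p, g)
            if overlap > best_iou:
                best_iou, best_j = overlap, j
        if best_j is not None and best_iou >= iou_thresh:
            matched.add(best_j)
            tp += 1
    fp = len(pred_boxes) - tp          # unmatched predictions
    fn = len(gt_boxes) - len(matched)  # unmatched ground-truth objects
    return tp, fp, fn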
| https://stackoverflow.com/questions/71330409/ |
Pass correct image shape to model that uses a conv1d and conv2d in the same network | So I am trying to implement the VGG network, everything as in the paper, but I have a problem when I am using the architecture variant that has a Conv1-256 layer as part of its network. Below is my code.
def _make_convo_layers(architecture) -> torch.nn.Sequential:
    """
    Create convolutional layers from the vgg architecture type passed in.
    :param architecture:
    """
    layers = []
    in_channels = 3
    for layer in architecture:
        if type(layer) == int:
            out_channels = layer
            layers += [nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1, stride=1), nn.ReLU()]
            # layers.append([nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1, stride=1) + nn.ReLU()])
            in_channels = layer
        elif (layer == 'Conv1-256'):
            out_channels = 256
            layers += [nn.Conv1d(256, out_channels, kernel_size=3, padding=1, stride=1), nn.ReLU()]
        elif (layer == 'LRN'):
            layers += [nn.LocalResponseNorm(5, alpha=0.0001, beta=0.75, k=1)]
        elif (layer == 'M'):
            layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
    return nn.Sequential(*layers)
Below I pass some random data to the model:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
vgg = VGGNet(config['vgg16-C1']).to(device)
x = torch.randn(1, 3, 224, 224).to(device)
model = vgg(x).to(device)
print(model.shape)
Below is the error I received when I passed the x variable to the model:
RuntimeError: Expected 3-dimensional input for 3-dimensional weight [256, 256, 3], but got 4-dimensional input of size [1, 256, 56, 56] instead
Any help will do, please.
| As jhso commented, you are doing this wrong: looking at the VGG explanation on the VGG page, "Conv1-256" doesn't mean a 1D convolution; it means performing a convolution with kernel size 1 instead of the original kernel size of 3.
def _make_convo_layers(architecture) -> torch.nn.Sequential:
    """
    Create convolutional layers from the vgg architecture type passed in.
    :param architecture:
    """
    layers = []
    in_channels = 3
    for layer in architecture:
        if type(layer) == int:
            out_channels = layer
            layers += [nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1, stride=1), nn.ReLU()]
            in_channels = layer
        elif (layer == 'Conv1-256'):
            out_channels = 256
            # a 2D convolution with kernel size 1; padding 0 keeps the spatial size unchanged
            layers += [nn.Conv2d(in_channels, out_channels, kernel_size=1, padding=0, stride=1), nn.ReLU()]
        elif (layer == 'LRN'):
            layers += [nn.LocalResponseNorm(5, alpha=0.0001, beta=0.75, k=1)]
        elif (layer == 'M'):
            layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
    return nn.Sequential(*layers)
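A quick sanity check (my addition, with illustrative shapes) that a 1x1 2D convolution accepts the 4-dimensional input and keeps the spatial size:
layer = nn.Conv2d(256, 256, kernel_size=1, padding=0, stride=1)
x = torch.randn(1, 256, 56, 56)
print(layer(x).shape)  # torch.Size([1, 256, 56, 56])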
| https://stackoverflow.com/questions/71330554/ |
Assign custom weight in pytorch | I'm trying to assign some custom weight to my PyTorch model but it doesn't work correctly.
class Mod(nn.Module):
    def __init__(self):
        super(Mod, self).__init__()
        self.linear = nn.Sequential(
            nn.Linear(1, 5)
        )

    def forward(self, x):
        x = self.linear(x)
        return x
mod = Mod()
mod.linear.weight = torch.tensor([1. ,2. ,3. ,4. ,5.], requires_grad=True)
mod.linear.bias = torch.nn.Parameter(torch.tensor(0., requires_grad=True))
print(mod.linear.weight)
>>> tensor([1., 2., 3., 4., 5.], requires_grad=True)
output = mod(torch.ones(1))
print(output)
>>> tensor([ 0.2657, 0.3220, -0.0726, -1.6987, 0.3945], grad_fn=<AddBackward0>)
The output is expected to be [1., 2., 3., 4., 5.] but it doesn't work as expected. What am I missing here?
| You are not updating the weights in the right place. Your self.linear is not a nn.Linear layer, but rather a nn.Sequential container. Your nn.Linear is the first layer in the sequential. To access it you need to index self.linear:
with torch.no_grad():
    mod.linear[0].weight.data = torch.tensor([1., 2., 3., 4., 5.], requires_grad=True)[:, None]
    mod.linear[0].bias.data = torch.zeros((5, ), requires_grad=True)  # bias is not a scalar here
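Re-running the check from the question should now give the expected result:
output = mod(torch.ones(1))
print(output)  # tensor([1., 2., 3., 4., 5.], grad_fn=...)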
| https://stackoverflow.com/questions/71331129/ |
Is there a PyTorch equivalent of tf.custom_gradient()? | I am new to PyTorch but have a lot of experience with TensorFlow.
I would like to modify the gradient of just a tiny piece of the graph: just the derivative of activation function of a single layer. This can be easily done in Tensorflow using tf.custom_gradient, which allows you to supply customized gradient for any functions.
I would like to do the same thing in PyTorch and I know that you can modify the backward() method, but that requires you to rewrite the derivative for the whole network defined in the forward() method, when I would just like to modify the gradient of a tiny piece of the graph. Is there something like tf.custom_gradient() in PyTorch? Thanks!
| You can do this in two ways:
1. Modifying the backward() function:
As you already said in your question, pytorch also allows you to provide a custom backward implementation. However, in contrast to what you wrote, you do not need to re-write the backward() of the entire model - only the backward() of the specific layer you want to change.
Here's a simple and nice tutorial that shows how this can be done.
For example, here is a custom clip activation that instead of killing the gradients outside the [0, 1] domain, simply passes the gradients as-is:
class MyClip(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return torch.clip(x, 0., 1.)

    @staticmethod
    def backward(ctx, grad):
        return grad
Now you can use the MyClip layer (by calling MyClip.apply(x)) wherever you like in your model, and you do not need to worry about the overall backward function.
2. Using a backward hook
pytorch allows you to attach hooks to different layers (= sub-nn.Modules) of your network. You can register_full_backward_hook to your layer. That hook function can modify the gradients:
The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations.
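For instance, a minimal sketch (illustrative names and model, my addition) that halves the gradients flowing into the first layer:
def halve_grads(module, grad_input, grad_output):
    # return a replacement for grad_input; keep None entries as None
    return tuple(g * 0.5 if g is not None else None for g in grad_input)

model = nn.Sequential(nn.Linear(4, 4), nn.ReLU())
handle = model[0].register_full_backward_hook(halve_grads)
# ...later, handle.remove() detaches the hook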
| https://stackoverflow.com/questions/71332437/ |
CNN: Why do we first resize the image to 256 and then center crop to 224? | The transformation for Alexnet image input is below:
transforms.Resize(256),
transforms.CenterCrop(224),
Why do we first resize the image to 256 and then center crop to 224? I know that 224x224 is the default image size of ImageNet but why we can't directly resize the image to 224x224?
| Perhaps this is best illustrated visually. Consider the following image (128x128px):
Say we would resize it to 16x16px directly, we'd end up with:
But if we'd resize it to 24x24px first,
and then crop it to 16x16px, it would look like this:
As you see, it's getting rid of the border while retaining detail in the center. Note the differences side by side:
The same applies to 224px vs 256px, except this is at a larger resolution.
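As a side note (an observation of mine about the torchvision API): passing a single int to transforms.Resize scales the shorter side to that size while preserving the aspect ratio, so Resize(256) + CenterCrop(224) also avoids the distortion that a direct Resize((224, 224)) would introduce on non-square images:
pipeline = transforms.Compose([transforms.Resize(256), transforms.CenterCrop(224)])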
| https://stackoverflow.com/questions/71341354/ |
How to convert a list of PyTorch tensors to a list of floats? | I have a list of tensors like the following one:
[tensor(0.9757), tensor(0.9987), tensor(0.9990), tensor(0.9994), tensor(0.9994)]
How can I change the type of the entire list and obtain something like:
[0.9757, 0.9987, 0.9990, 0.9994, 0.9994]
Thanks for the help!
| You can use .item() and a list comprehension, assuming that every element is a one-element tensor:
result = [tensor.item() for tensor in data]
print(type(result[0]))
print(result)
This prints the desired result, albeit with some unavoidable precision error:
<class 'float'>
[0.9757000207901001, 0.9987000226974487, 0.9990000128746033, 0.9994000196456909, 0.9994000196456909]
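An alternative that avoids the explicit Python loop (my suggestion, equivalent for one-element tensors) is to stack the scalars into a single tensor first:
result = torch.stack(data).tolist()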
| https://stackoverflow.com/questions/71342138/ |
When does Pytorch initialize parameters? | I’m now writing my own network with Pytorch, and I want to use a pretrained model in my net. Here is my overridden __init__() code:
class Generator(nn.Module):
    def __init__(self) -> None:
        super(Generator, self).__init__()
        model_path = "somedir"
        chekpoint = torch.load(model_path)
        h_model = H_model()
        h_model.load_state_dict(chekpoint['model'])
        # set to eval mode
        h_model.eval()
        self.H_model = h_model
        self.unet = UNet(enc_chs=(9, 64, 128, 256, 512), dec_chs=(512, 256, 128, 64), num_class=3, retain_dim=False, out_sz=(304, 304))
Here, the h_model is loaded from a checkpoint which I've trained well.
My question is: after this initialization, will the parameters in h_model be changed (are the pretrained parameter values modified by some function)? And why? (I mean, how does Pytorch treat a self-defined layer when it initializes parameters, and when does Pytorch initialize parameters?)
| For the basic layers (e.g., nn.Conv, nn.Linear, etc.) the parameters are initialized by the __init__ method of the layer.
For example, look at the source code of class _ConvNd(Module) (the class from which all other convolution layers are derived). At the bottom of its __init__ it calls self.reset_parameters() which initialize the weights.
Therefore, if your nn.Module does not have any "independent" nn.Parameters, only trainable parameters inside sub-nn.Modules, when you construct your network, all weights of the sub modules are being initialized as the sub modules are constructed.
That is, once you call h_model = H_model() the weights of h_model are already initialized to their default values. Calling h_model.load_state_dict(...) overrides these values to the desired pre-trained weights.
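A condensed sketch of that pattern (my paraphrase of the structure used in the PyTorch source, with illustrative names; assumes import math):
class MyLayer(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        self.reset_parameters()  # initialization happens here, at construction time

    def reset_parameters(self):
        nn.init.kaiming_uniform_(self.weight, a=math.sqrt(5))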
| https://stackoverflow.com/questions/71348500/ |
RuntimeError: mat1 and mat2 shapes cannot be multiplied (5x46656 and 50176x3) | I am trying to build a CNN model, and while building it I am getting the runtime error below. I have an image size of 224x224x3, a batch size of 5, and 3 classes to be predicted.
class CNN(nn.Module):
    def __init__(self):
        super().__init__()
        # 5*3*224*224
        self.conv1 = nn.Conv2d(3, 6, 3, 1)
        self.relu1 = nn.ReLU()
        self.pool = nn.MaxPool2d(2)
        # 112*112*6
        self.conv2 = nn.Conv2d(6, 16, 3, 1)
        self.relu2 = nn.ReLU()
        self.fc1 = nn.Linear(16*56*56, 3)

    def forward(self, x):
        x = self.pool(self.relu1(self.conv1(x)))
        x = self.pool(self.relu2(self.conv2(x)))
        x = x.flatten(1)
        x = self.fc1(x)
        return x
I get the run time error during training. How do I fix this?
| Considering the size of your input, your fully connected layer should have 16*54*54 neurons, not 16*56*56.
self.fc1 = nn.Linear(16*54*54, 3)
Alternatively, you can use a lazy module that will infer the number of input features needed: nn.LazyLinear:
self.fc1 = nn.LazyLinear(3)
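For reference, tracing the shapes shows where the 46656 in the error message comes from:
# 224 -> conv1 (k=3, s=1, no padding) -> 222 -> maxpool(2) -> 111
# 111 -> conv2 (k=3, s=1, no padding) -> 109 -> maxpool(2) -> 54 (floored)
# flatten: 16 * 54 * 54 = 46656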
| https://stackoverflow.com/questions/71349367/ |
PyTorch second derivative is zero everywhere | I'm working on a custom loss function that requires me to compute the trace of the hessian matrix of a neural network with respect to its inputs.
In one dimension, this reduces to the second derivative of the network with respect to its input. Suppose my network is u and my input is x. The problem is completely one-dimensional, so the relationship I'm trying to model is u(x) where u has one input and one output, and x is one-dimensional. However, I'm running with batches, so x "comes in" as a column vector and I'm expecting a column vector as output.
If we label the samples in the batch as x_1, x_2, ..., x_n, I'm thus interested in the following vectors:
In PyTorch, I have tried the following:
u = model(x)
d = grad(u, x, grad_outputs=torch.ones_like(u), create_graph=True)[0]
dd = grad(d, x, grad_outputs=torch.ones_like(d), retain_graph=True, create_graph=False)[0]
This works well for u' but u'' comes out as being zero everywhere:
I'm a bit confused about the whole notion of needing a vector-Jacobian product here to begin with. Should I view the computation as u mapping from R^n to R^n, where n is the size of my batch?
Any help is appreciated!
| Toy example
I have tried to devise a toy example for you:
x = torch.rand(3, requires_grad=True).reshape(-1, 1)
u = x ** 2.
# du/dx = 2 * x
dx = torch.autograd.grad(u, x, torch.ones_like(u), retain_graph=True, create_graph=True)[0]
# d^2 u/dx^2 = 2
ddx = torch.autograd.grad(dx, x, torch.ones_like(u))[0]
These are the outputs of x, dx, ddx:
tensor([[0.5217],
[0.0400],
[0.5509]], grad_fn=<ReshapeAliasBackward0>)
tensor([[1.0434],
[0.0800],
[1.1018]], grad_fn=<MulBackward0>)
tensor([[2.],
[2.],
[2.]])
So grad is computing the gradient of u with respect to x element-wise.
Should I view the computation as u mapping from R^n to R^n, where n is the size of my batch?
My answer is no, otherwise dx should have been a Jacobian matrix instead of a vector.
This works well for u' but u'' comes out as being zero everywhere
Given the limited context, the only thing I can think of is that the second derivative of model(x) with respect to x is zero.
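One concrete cause worth checking (an assumption on my part, since the model isn't shown): if model uses only linear layers and ReLU activations, it is piecewise linear, so its second derivative really is zero almost everywhere. A smooth activation such as Tanh gives a nonzero u'':
model = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))
x = torch.rand(3, 1, requires_grad=True)
u = model(x)
d = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
dd = torch.autograd.grad(d, x, torch.ones_like(d))[0]  # nonzero, unlike with ReLU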
| https://stackoverflow.com/questions/71350474/ |
Detectron2 prediction problem on my local machine with custom model | I trained a custom model with detectron2 on Google Colab, and OK, it's working correctly. The model was trained and the predictions were OK on Google Colab. But when I made predictions on my local machine, it didn't work. Here is a similar example on Google Colab: https://colab.research.google.com/drive/1bSlH5Am_zFEWbJ9zTRu2wFEDKDvn0LUv?usp=sharing
I exported the final model and ran it with this code:
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor
from detectron2.data import MetadataCatalog
from detectron2.utils.visualizer import Visualizer, ColorMode
import matplotlib.pyplot as plt
import cv2.cv2 as cv2
cfg = get_cfg()
cfg.merge_from_file("./detectron2_repo/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.WEIGHTS = "model_final.pth" # path for final model
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.8
predictor = DefaultPredictor(cfg)
im = cv2.imread('0.jpg')
outputs = predictor(im)
v = Visualizer(im[:, :, ::-1],
metadata=MetadataCatalog.get(cfg.DATASETS.TRAIN[0]),
scale=0.5,
instance_mode=ColorMode.IMAGE_BW)
out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
img = cv2.cvtColor(out.get_image()[:, :, ::-1], cv2.COLOR_RGBA2RGB)
cv2.imwrite('img.jpg',img)
I suppose that the cfg.merge_from_file is the problem. Is there another file? Where do I find it on Colab?
I tested the standard models and they worked well on my local machine; the problem is only with the custom model.
| I saved the configs with this comand and then I downloaded.
f = open('config.yml','w')
f.write(cfg.dump())
f.close()
and replaced:
cfg.merge_from_file("./detectron2_repo/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
by
cfg.merge_from_file("config.yml")
and worked.
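The same save, written with a context manager so the file is closed automatically (a stylistic alternative of mine):
with open('config.yml', 'w') as f:
    f.write(cfg.dump())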
| https://stackoverflow.com/questions/71352708/ |
What is the opposite of torch.unique_consecutive? | How can I efficiently revert torch.unique_consecutive? I.e.:
x = torch.tensor([1, 1, 2, 2, 3, 1, 1, 2])
output, counts = torch.unique_consecutive(x, return_counts=True)
y = torch.SOMETHING(output,counts) #y equals x
| Please use torch.repeat_interleave for your task:
x = torch.tensor([1, 1, 2, 2, 3, 1, 1, 2])
output, counts = torch.unique_consecutive(x, return_counts=True)
y = torch.repeat_interleave(output, counts)
#>>y = [1, 1, 2, 2, 3, 1, 1, 2]
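A quick round-trip check (my addition):
assert torch.equal(y, x)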
| https://stackoverflow.com/questions/71353375/ |
Constrained linear combination of learned parameters in pytorch? | I have three tensors X, Y, Z and I want to learn the optimal convex combination of these tensors with respect to some cost, i.e.
aX + bY + cZ such that a + b + c = 1. How can I do this easily in Pytorch?
I know that I could just concatenate along an unsqueezed axis and then apply linear layer as so:
X = X.unsqueeze(-1)
Y = Y.unsqueeze(-1)
Z = Z.unsqueeze(-1)
W = torch.cat([X, Y, Z], dim=-1)  # third axis has dimension 3
W = nn.Linear(3, 1)(W)
but this would not apply the convex combination constraint...
| I found an answer that works well. For those who are interested: this generalizes to a linear combination of N tensors; you just need to change the weights dimension and the number of tensors you concatenate.
weights = nn.Parameter(torch.rand(1,3))
X = X.unsqueeze(-1)
Y = Y.unsqueeze(-1)
Z = Z.unsqueeze(-1)
weights_normalized = nn.functional.softmax(weights, dim=-1)
output = torch.matmul(torch.cat([X, Y, Z], dim=-1), weights_normalized.t()).squeeze()
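Because the softmax outputs are nonnegative and sum to 1, the convexity constraint holds by construction; a quick check (my addition):
print(weights_normalized.sum())  # tensor(1.0000, grad_fn=<SumBackward0>)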
| https://stackoverflow.com/questions/71355416/ |
torchmetrics behaviour for one-hot encoded values | I am having a hard time understanding the following scenario. I have an output probability of 0.0 on each class, which means the value of metrics such as F1 score, accuracy and recall should be zero? However, I get the following:
import torch, torchmetrics
preds = torch.tensor([[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]])
target = torch.tensor([[1, 0, 0],
[0, 1, 0],
[0, 0, 1]])
print("F1: ", torchmetrics.functional.f1_score(preds, target))
print("Accuracy: ", torchmetrics.functional.accuracy(preds, target))
print("Recall: ", torchmetrics.functional.recall(preds, target))
print("Precision: ", torchmetrics.functional.precision(preds, target))
Output:
F1: tensor(0.)
Accuracy: tensor(0.6667)
Recall: tensor(0.)
Precision: tensor(0.)
Why is accuracy 0.6667? I would expect all outputs to be 0.0.
| Your preds is a probability array for a multi-label classification problem.
To make it simpler, I will assume an example like this:
preds = torch.tensor([[0., 0., 0.]]) # multi-labels [label1, label2, label3]
target = torch.tensor([[1, 0, 0]])
The true negatives are 2, since the classifier predicts non-existence for label2 and label3, and indeed label2 and label3 should not exist.
The true positives are 0, since the classifier never predicts the existence of a label that should exist.
The false negatives are 1, since the classifier predicts non-existence for label1 while label1 should exist.
The false positives are 0, since the classifier never predicts the existence of a label that should not exist.
Accuracy is (TP + TN) / (TP + TN + FP + FN) = (0 + 2) / (0 + 2 + 0 + 1) = 2/3 = 0.6667.
You can read here more about different metrics and their calculations.
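You can reproduce the number on the single-row example above (my addition): 2 of the 3 element-wise predictions (label2 and label3) are correct:
preds = torch.tensor([[0., 0., 0.]])
target = torch.tensor([[1, 0, 0]])
print(torchmetrics.functional.accuracy(preds, target))  # tensor(0.6667)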
| https://stackoverflow.com/questions/71358253/ |
Pytorch: How to get mean of slices along an axis where the slice indices are defined on a different tensor and gradients only flow into slices | I would like to take the mean along an axis of a tensor, where the slices are defined by a second tensor.
So this would be my sample tensor, from which I want to get the mean of slices along the first dimension:
import torch
sample = torch.arange(0,40).reshape(10,-1)
sample
tensor([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11],
[12, 13, 14, 15],
[16, 17, 18, 19],
[20, 21, 22, 23],
[24, 25, 26, 27],
[28, 29, 30, 31],
[32, 33, 34, 35],
[36, 37, 38, 39]])
And this would be the tensor which contains the start and end indices of the slices I would like to get the mean of:
mean_slices = torch.tensor([
[3, 5],
[1, 8],
[4, 8],
[6, 9],
])
Since Pytorch doesn't have ragged tensors, I use a trick described here
https://stackoverflow.com/a/71337491/3259896
where the cumsum is calculated along the entire axis I want the mean over. Then, for each slice, the cumsum row just before the start index is subtracted from the row at the end index. Finally, the result is divided by the length of the slices.
padded = torch.nn.functional.pad(
sample.cumsum(dim=0), (0, 0, 1, 0)
)
padded
tensor([[ 0, 0, 0, 0],
[ 0, 1, 2, 3],
[ 4, 6, 8, 10],
[ 12, 15, 18, 21],
[ 24, 28, 32, 36],
[ 40, 45, 50, 55],
[ 60, 66, 72, 78],
[ 84, 91, 98, 105],
[112, 120, 128, 136],
[144, 153, 162, 171],
[180, 190, 200, 210]])
pools = torch.diff(
padded[mean_slices], dim=1
).squeeze()/torch.diff(mean_slices, dim=1)
pools
tensor([[14., 15., 16., 17.],
[16., 17., 18., 19.],
[22., 23., 24., 25.],
[28., 29., 30., 31.]])
The only issue with this solution is that originally I was only looking to get the mean of specifically the rows defined by the slices, and while my current solution does that, the calculations involve all rows before the slice indices as well. So the backward pass may not work as intended.
Is this guess correct?
Is there a more exact and computationally efficient way to calculate the mean for the slices defined in a tensor?
| Why do you think that the gradient calculation involves the values outside the slices?
When you compute the sum over a slice using torch.cumsum, you sum all the values before the slice twice: once when estimating their sum, which is stored in the row just before the slice, and a second time when summing the slice together with those values, which is stored at the last row of the slice.
The most important thing is that you subtract the row before the slice's first row from its last row: that is, you eliminate the sum of all values outside the slice from the equation. Thus these values have no effect on the calculation nor on the gradients.
Consider the function f(x,y,z) = x + y + z - z. What is the gradient of f w.r.t z? Once z is eliminated, it has no effect on the value of f nor on its gradient.
Bottom line: your backward pass is correct, and is not affected by values outside the slices.
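You can also verify this empirically (a check of mine, reusing sample and mean_slices from the question): gradients for rows outside every slice come out exactly zero:
sample = torch.arange(0., 40.).reshape(10, -1).requires_grad_(True)
padded = torch.nn.functional.pad(sample.cumsum(dim=0), (0, 0, 1, 0))
pools = torch.diff(padded[mean_slices], dim=1).squeeze() / torch.diff(mean_slices, dim=1)
pools.sum().backward()
print(sample.grad[0])  # tensor([0., 0., 0., 0.]) - row 0 lies outside all slices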
regarding a more efficient implementation:
If the minimal slice starting index is high (that is, there's a large portion of sample that is ignored by all slices) you might remove it completely:
mn, mx = mean_slices.min(), mean_slices.max()  # only the relevant part of sample
padded_ef = torch.nn.functional.pad(
sample[mn:mx, :].cumsum(dim=0), (0, 0, 1, 0)
)
# sum the slices - need to shift the index
pools_ef = torch.diff(
padded_ef[mean_slices-mn], dim=1
).squeeze()/torch.diff(mean_slices, dim=1)
This results in the same pools, but potentially involves fewer elements of sample if the slices are "packed".
However, unless sample is very large w.r.t the slices, I don't believe this will give you a significant boost in run time.
| https://stackoverflow.com/questions/71358928/ |
Reshape PyTorch tensor so that matrices are horizontal | I'm trying to combine n matrices in a 3-dimensional PyTorch tensor of shape (n, i, j) into a single 2-dimensional matrix of shape (i, j*n). Here's a simple example where n=2, i=2, j=2:
m = torch.tensor([[[2, 3],
[5, 7]],
[[11, 13],
[17, 19]]])
m.reshape(2, 4)
I was hoping this would produce:
tensor([[ 2, 3, 11, 13],
[ 5, 7, 17, 19]])
But instead it produced:
tensor([[ 2, 3, 5, 7],
[11, 13, 17, 19]])
How do I do this? I tried torch.cat and torch.stack, but they require tuples of tensors. I could try and create tuples, but that seems inefficient. Is there a better way?
| To merge the n and j dimensions with reshape, they need to be adjacent in the shape. One can fix it with swapaxes:
m = torch.tensor([[[2, 3],
[5, 7]],
[[11, 13],
[17, 19]]])
m = m.swapaxes(0, 1)
m.reshape(2, 4)
tensor([[ 2, 3, 11, 13],
[ 5, 7, 17, 19]])
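Starting again from the original m, the same result as a one-liner with permute (my addition):
m.permute(1, 0, 2).reshape(2, 4)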
| https://stackoverflow.com/questions/71360303/ |
TypeError: _open() got an unexpected keyword argument 'pilmode' | I am training a CNN model on the COCO dataset, and I am getting this error after a few iterations. The error is not consistent: I got it once at iteration 1100, once at 4500, and once at 8900 (all of them within 1 epoch).
I thought that this error might be a bug in the new version of imageio, so I changed the version to 2.3.0 but still, after 8900 iterations in 1 epoch, I am getting this error.
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-46-4b33bec4a89e> in <module>()
52
53 # train for one epoch
---> 54 train_loss = train(train_loader, model, [criterion1, criterion2], optimizer)
55 print('train_loss: ',train_loss)
56
/usr/local/lib/python3.7/dist-packages/torch/_utils.py in reraise(self)
432 # instantiate since we don't know how to
433 raise RuntimeError(msg) from None
--> 434 raise exception
435
436
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "<ipython-input-34-4c8722b5b16b>", line 143, in __getitem__
image = imageio.imread(img_path, pilmode='RGB')
File "/usr/local/lib/python3.7/dist-packages/imageio/core/functions.py", line 206, in imread
reader = read(uri, format, 'i', **kwargs)
File "/usr/local/lib/python3.7/dist-packages/imageio/core/functions.py", line 129, in get_reader
return format.get_reader(request)
File "/usr/local/lib/python3.7/dist-packages/imageio/core/format.py", line 168, in get_reader
return self.Reader(self, request)
File "/usr/local/lib/python3.7/dist-packages/imageio/core/format.py", line 217, in __init__
self._open(**self.request.kwargs.copy())
TypeError: _open() got an unexpected keyword argument 'pilmode'
| I've had this error before. The TLDR is that you can't assume all of your data is clean and able to be parsed. You aren't loading the data in order as far as I can tell either. You may even have data shuffling enabled. With all of that in mind you should not expect it to fail deterministically at iteration 100 or 102 or anything.
The issue comes down to one (or more) of the files in COCO dataset is either corrupted or is of a different format. You can process the images in order with a batchsize of 1 and print out the file name to see which one it is.
To "fix" this issue you can do one of several things:
wrap the call to load the image in a try-except block and just skip it.
Convert the image yourself to another appropriate format.
Try a different way to load images in with pytorch.
See here as an example failure scenario when loading in images with imageio.
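A sketch of option 1 inside the dataset's __getitem__ (my addition; img_path and idx follow the traceback, the fallback strategy is an assumption):
try:
    image = imageio.imread(img_path, pilmode='RGB')
except TypeError:
    print(f'Skipping unreadable image: {img_path}')
    return self.__getitem__((idx + 1) % len(self))  # fall back to the next sample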
| https://stackoverflow.com/questions/71361442/ |
Quantized model gives negative accuracy after conversion from pytorch to ONNX | I'm trying to train a quantized model in pytorch and convert it to ONNX.
I employ the quantization-aware-training technique with the help of the pytorch_quantization package.
I used the below code to convert my model to ONNX:
from pytorch_quantization import nn as quant_nn
from pytorch_quantization import calib
from pytorch_quantization.tensor_quant import QuantDescriptor
from pytorch_quantization import quant_modules
import onnxruntime
import torch
import torch.utils.data
from torch import nn
import torchvision
def export_onnx(model, onnx_filename, batch_onnx, per_channel_quantization):
    model.eval()
    quant_nn.TensorQuantizer.use_fb_fake_quant = True  # We have to shift to pytorch's fake quant ops before exporting the model to ONNX
    if per_channel_quantization:
        opset_version = 13
    else:
        opset_version = 12

    # Export ONNX for multiple batch sizes
    print("Creating ONNX file: " + onnx_filename)
    dummy_input = torch.randn(batch_onnx, 3, 224, 224, device='cuda')  # TODO: switch input dims by model
    input_names = ['input']
    output_names = ['Linear[fc]']  ### ResNet34
    dynamic_axes = {'input': {0: 'batch_size'}}
    try:
        torch.onnx.export(model, dummy_input, onnx_filename, input_names=input_names,
                          export_params=True, output_names=output_names, opset_version=opset_version,
                          verbose=True, enable_onnx_checker=False, do_constant_folding=True)
    except ValueError:
        warnings.warn(UserWarning("Per-channel quantization is not yet supported in Pytorch/ONNX RT (requires ONNX opset 13)"))
        print("Failed to export to ONNX")
        return False

    return True
After conversion, I get the following warnings:
warnings.warn("'enable_onnx_checker' is deprecated and ignored. It will be removed in "
W0305 12:39:40.472136 140018114328384 tensor_quantizer.py:280] Use Pytorch's native experimental fake quantization.
/usr/local/lib/python3.8/dist-packages/pytorch_quantization/nn/modules/tensor_quantizer.py:285: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
Also, the accuracy is not valid for ONNX model!
Accuracy summary:
+-----------+-------+
| Stage | Top1 |
+-----------+-------+
| Finetuned | 38.03 |
| ONNX | -1.00 |
+-----------+-------+
More info is here:
pytorch 1.10.2+cu102
torchvision 0.11.3+cu102
TensorRT 8.2.3-1+cuda11.4
ONNX 1.11.0
ONNX Runtime 1.10.0
cuda 11.6
python 3.8
What is the problem with ONNX conversion?
| After some tries, I found that there is a version conflict. I changed the versions accordingly:
onnx == 1.9.0
onnxruntime == 1.8.1
pytorch == 1.9.0+cu111
torchvision == 0.10.0+cu111
| https://stackoverflow.com/questions/71362729/ |
how to combine two trained models using PyTorch? | I'm currently working on two models that use different types of data but are connected. I'd want to create a combination model that takes in one instance of each of the data types, runs them through each of the pre-trained models independently, and then processes the combined output of the two distinct models through a few feed-forward layers at the top.
So far, I've learned that I can change forward to accept both inputs, so I've just cloned the structures of my individual models into the combined one, processed each input individually through its own model's layers in forward, and then merged the outputs as specified. What I'm having trouble with is figuring out how to achieve this.
| As I understand from your question, you can create the two models; then you need a third model that combines both neural networks in its forward, and in the __main__ you can then load each model's state_dict,
for example:
the first model
import torch
import torch.nn as nn
import torch.nn.functional as F

class FirstM(nn.Module):
    def __init__(self):
        super(FirstM, self).__init__()
        self.fc1 = nn.Linear(10, 2)  # matches the 10-dim input x1 below

    def forward(self, x):
        x = self.fc1(x)
        return x
the second model
class SecondM(nn.Module):
    def __init__(self):
        super(SecondM, self).__init__()
        self.fc1 = nn.Linear(20, 2)

    def forward(self, x):
        x = self.fc1(x)
        return x
Here you create a model in which you merge both models, as follows:
class Combined_model(nn.Module):
    def __init__(self, modelA, modelB):
        super(Combined_model, self).__init__()
        self.modelA = modelA
        self.modelB = modelB
        self.classifier = nn.Linear(4, 2)

    def forward(self, x1, x2):
        x1 = self.modelA(x1)
        x2 = self.modelB(x2)
        x = torch.cat((x1, x2), dim=1)
        x = self.classifier(F.relu(x))
        return x
and then, outside the classes, in the main you can do the following:
# Create models and load state_dicts
modelA = FirstM()
modelB = SecondM()
# Load state dicts
modelA.load_state_dict(torch.load(PATH))
modelB.load_state_dict(torch.load(PATH))
model = Combined_model(modelA, modelB)
x1, x2 = torch.randn(1, 10), torch.randn(1, 20)
output = model(x1, x2)
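If you only want the new classifier head to train, you can optionally freeze the pretrained sub-models (a common follow-up, not part of the original answer):
for param in model.modelA.parameters():
    param.requires_grad = False
for param in model.modelB.parameters():
    param.requires_grad = False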
| https://stackoverflow.com/questions/71364119/ |
creating tensor by composition of smaller tensors | I would like to create a 4x4 tensor that is composed of four smaller 2x2 tensors in this manner:
The tensor I would like to create:
in_t = torch.tensor([[14, 7, 6, 2],
[ 4, 8, 11, 1],
[ 3, 5, 9, 10],
[12, 15, 16, 13]])
I would like to create this tensor composed from these four smaller tensors:
a = torch.tensor([[14, 7], [ 4, 8]])
b = torch.tensor([[6, 2], [11, 1]])
c = torch.tensor([[3, 5], [12, 15]])
d = torch.tensor([[9, 10], [16, 13]])
I have tried to use torch.cat like this:
mm_ab = torch.cat((a,b,c,d), dim=0)
but I end up with an 8x2 tensor.
| You can control the layout of your tensor and achieve the desired result with a combination of torch.transpose and torch.reshape. You can perform an outer transpose followed by an inner transpose:
>>> stack = torch.stack((a,b,c,d))
tensor([[[14, 7],
[ 4, 8]],
[[ 6, 2],
[11, 1]],
[[ 3, 5],
[12, 15]],
[[ 9, 10],
[16, 13]]])
Reshape-transpose-reshape-transpose-reshape:
>>> stack.reshape(4,2,-1).transpose(0,1).reshape(-1,2,4).transpose(0,1).reshape(-1,4)
tensor([[14, 7, 6, 2],
[ 4, 8, 11, 1],
[ 3, 5, 9, 10],
[12, 15, 16, 13]])
Essentially, reshapes allow you to group and view your tensor differently, while the transpose operation alters its layout (it won't remain contiguous), meaning you can achieve the desired output.
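For exactly four blocks there is also a simpler alternative (my addition) that reads directly off the target layout:
top = torch.cat((a, b), dim=1)
bottom = torch.cat((c, d), dim=1)
in_t = torch.cat((top, bottom), dim=0)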
| https://stackoverflow.com/questions/71365817/ |
Discrepancy between log_prob and manual calculation | I want to define a multivariate normal distribution with mean [1, 1, 1] and a variance-covariance matrix with 0.3 on the diagonal. After that I want to calculate the log likelihood on the datapoints [2, 3, 4].
By torch distributions
import torch
import torch.distributions as td
input_x = torch.tensor([2, 3, 4])
loc = torch.ones(3)
scale = torch.eye(3) * 0.3
mvn = td.MultivariateNormal(loc = loc, scale_tril=scale)
mvn.log_prob(input_x)
tensor(-76.9227)
From scratch
By using the formula for the log likelihood,
log p(x) = -log( sqrt( (2*pi)^k * det(Sigma) ) ) - (1/2) * (x - mu)^T * Sigma^(-1) * (x - mu),
we obtain the tensor:
first_term = (2 * np.pi* 0.3)**(3)
first_term = -np.log(np.sqrt(first_term))
x_center = input_x - loc
tmp = torch.matmul(x_center, scale.inverse())
tmp = -1/2 * torch.matmul(tmp, x_center)
first_term + tmp
tensor(-24.2842)
where I used the fact that for Sigma = 0.3 * I_3, det(Sigma) = 0.3^3, so (2*pi)^3 * det(Sigma) = (2*pi*0.3)^3.
My question is: what's the source of this discrepancy?
| You are passing the covariance matrix to the scale_tril instead of covariance_matrix. From the docs of PyTorch's Multivariate Normal
scale_tril (Tensor) – lower-triangular factor of covariance, with positive-valued diagonal
So, replacing scale_tril with covariance_matrix would yield the same results as your manual attempt.
In [1]: mvn = td.MultivariateNormal(loc = loc, covariance_matrix=scale)
In [2]: mvn.log_prob(input_x)
Out[2]: tensor(-24.2842)
However, it's more efficient to use scale_tril according to the authors:
...Using scale_tril will be more efficient:
You can calculate the lower Cholesky factor using torch.cholesky (torch.linalg.cholesky in newer releases):
In [3]: mvn = td.MultivariateNormal(loc = loc, scale_tril=torch.cholesky(scale))
In [4]: mvn.log_prob(input_x)
Out[4]: tensor(-24.2842)
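Equivalently (my addition), since the covariance here is 0.3 * I, its Cholesky factor is simply sqrt(0.3) * I:
In [5]: mvn = td.MultivariateNormal(loc = loc, scale_tril=torch.eye(3) * 0.3 ** 0.5)
In [6]: mvn.log_prob(input_x)
Out[6]: tensor(-24.2842)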
| https://stackoverflow.com/questions/71366015/ |
Create Pytorch "Stack of Views" to save on GPU memory | I am trying to expand datasets for analysis in Pytorch such that from one 1D (or 2D) tensor two stacks of views are generated. In the following image A (green) and B (blue) are views of the original tensor that are slid from left to right, which would then be combined into single tensors for batch processing:
The motivation behind using views for this is to save on GPU memory, since for large, multi-dimensional datasets this expansion process can convert a dataset of tens of MB into tens of GB despite tremendous data reuse (if normal tensors are used). Simply returning one view at a time is not desirable since the actual processing of tensors works in large batches.
Is what I'm trying to do possible in Pytorch? Simply using torch.stack(list of views) creates a new tensor with a copy of the original data, as verified by tensor.storage().data_ptr().
Another way to phrase the question: can you create batches of tensor views?
The current steps are:
Load and pre-process all datasets
Convert datasets into tensors and expand into stacks of sliding views, as shown above
Move all stacks to GPU to avoid transfer bottleneck during training
| As mentioned in the comments, Tensor.unfold can be used for this task. You call it on a tensor and provide the dimension to unfold along, the window length, and the step size. This returns a batch of views exactly like I was describing, though you have to unfold tensors one at a time for A and B.
The following code can be used to generate A and B:
A = source_tensor[:-B_length].unfold(0, A_length, 1)
B = source_tensor[A_length:].unfold(0, B_length, 1)
A.storage().data_ptr() == source_tensor.storage().data_ptr() returns True
Since the data pointers are the same it is correctly returning views of the original tensor instead of copies, which saves on memory.
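A tiny standalone demo (my addition) of the call signature unfold(dimension, size, step):
src = torch.arange(6)
src.unfold(0, 3, 1)
# tensor([[0, 1, 2],
#         [1, 2, 3],
#         [2, 3, 4],
#         [3, 4, 5]]) - every row is a view into src's storage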
| https://stackoverflow.com/questions/71366054/ |
Cannot create the calibration cache for the QAT model in tensorRT | I've trained a quantized model (with the help of the quantization-aware-training method in pytorch). I want to create the calibration cache to do inference in INT8 mode with TensorRT. When creating the calib cache, I get the following warning and the cache is not created:
[03/06/2022-08:14:07] [TRT] [W] Calibrator won't be used in explicit precision mode. Use quantization aware training to generate network with Quantize/Dequantize nodes.
[03/06/2022-08:14:11] [TRT] [W] Some weights are outside of int8_t range and will be clipped to int8_t range.
[03/06/2022-08:14:11] [TRT] [W] Some weights are outside of int8_t range and will be clipped to int8_t range.
[03/06/2022-08:14:11] [TRT] [W] Some weights are outside of int8_t range and will be clipped to int8_t range.
[03/06/2022-08:14:11] [TRT] [W] Some weights are outside of int8_t range and will be clipped to int8_t range.
I've trained the model accordingly and converted to ONNX:
import os
import sys
import argparse
import warnings
import collections
import torch
import torch.utils.data
from torch import nn
from tqdm import tqdm
import torchvision
from torchvision import transforms
from torch.hub import load_state_dict_from_url
from pytorch_quantization import nn as quant_nn
from pytorch_quantization import calib
from pytorch_quantization.tensor_quant import QuantDescriptor
from pytorch_quantization import quant_modules
import onnxruntime
import numpy as np
import models
import kornia
from prettytable import PrettyTable
from PIL import Image, ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
def get_parser():
"""
Creates an argument parser.
"""
parser = argparse.ArgumentParser(description='Classification quantization flow script')
parser.add_argument('--data-dir', '-d', type=str, help='input data folder', required=True)
parser.add_argument('--model-name', '-m', default='', help='model name: default resnet50')
parser.add_argument('--disable-pcq', '-dpcq', action="store_true", help='disable per-channel quantization for weights')
parser.add_argument('--out-dir', '-o', default='/tmp', help='output folder: default /tmp')
parser.add_argument('--print-freq', '-pf', type=int, default=20, help='evaluation print frequency: default 20')
parser.add_argument('--threshold', '-t', type=float, default=-1.0, help='top1 accuracy threshold (less than 0.0 means no comparison): default -1.0')
parser.add_argument('--batch-size-train', type=int, default=8, help='batch size for training: default 128')
parser.add_argument('--batch-size-test', type=int, default=8, help='batch size for testing: default 128')
parser.add_argument('--batch-size-onnx', type=int, default=20, help='batch size for onnx: default 1')
parser.add_argument('--seed', type=int, default=12345, help='random seed: default 12345')
checkpoint = parser.add_mutually_exclusive_group(required=True)
checkpoint.add_argument('--ckpt-path', default='', type=str, required=False,
help='path to latest checkpoint (default: none)')
checkpoint.add_argument('--ckpt-url', default='', type=str, required=False,
help='url to latest checkpoint (default: none)')
checkpoint.add_argument('--pretrained', action="store_true")
parser.add_argument('--num-calib-batch', default=8, type=int,
help='Number of batches for calibration. 0 will disable calibration. (default: 4)')
parser.add_argument('--num-finetune-epochs', default=0, type=int,
help='Number of epochs to fine tune. 0 will disable fine tune. (default: 0)')
parser.add_argument('--calibrator', type=str, choices=["max", "histogram"], default="max")
parser.add_argument('--percentile', nargs='+', type=float, default=[99.9, 99.99, 99.999, 99.9999])
parser.add_argument('--sensitivity', action="store_true", help="Build sensitivity profile")
parser.add_argument('--evaluate-onnx', action="store_true", help="Evaluate exported ONNX")
return parser
def prepare_model(
model_name,
num_class,
data_dir,
per_channel_quantization,
batch_size_train,
batch_size_test,
batch_size_onnx,
calibrator,
pretrained,
ckpt_path,
ckpt_url=None):
## Initialize quantization, model and data loaders
if per_channel_quantization:
        print('<<<<<<< Per channel quant >>>>>>>>')
quant_desc_input = QuantDescriptor(calib_method=calibrator)
quant_nn.QuantConv2d.set_default_quant_desc_input(quant_desc_input)
quant_nn.QuantLinear.set_default_quant_desc_input(quant_desc_input)
else:
## Force per tensor quantization for onnx runtime
        print('<<<<<<< Per tensor quant >>>>>>>>')
quant_desc_input = QuantDescriptor(calib_method=calibrator, axis=None)
quant_nn.QuantConv2d.set_default_quant_desc_input(quant_desc_input)
quant_nn.QuantConvTranspose2d.set_default_quant_desc_input(quant_desc_input)
quant_nn.QuantLinear.set_default_quant_desc_input(quant_desc_input)
quant_desc_weight = QuantDescriptor(calib_method=calibrator, axis=None)
quant_nn.QuantConv2d.set_default_quant_desc_weight(quant_desc_weight)
quant_nn.QuantConvTranspose2d.set_default_quant_desc_weight(quant_desc_weight)
quant_nn.QuantLinear.set_default_quant_desc_weight(quant_desc_weight)
if model_name in models.__dict__:
model = models.__dict__[model_name](pretrained=pretrained, quantize=True)
num_feats = model.fc.in_features
model.fc = nn.Linear(num_feats, num_class)
else:
print('Model is not available....downloading....')
quant_modules.initialize()
model = torchvision.models.__dict__[model_name](pretrained=pretrained)
if 'resnet' in model_name:
num_feats = model.fc.in_features
model.fc = nn.Linear(num_feats, num_class)
if 'densenet' in model_name:
num_feats = model.classifier.in_features
model.classifier = nn.Linear(num_feats, num_class)
quant_modules.deactivate()
if not pretrained:
if ckpt_path:
model = torch.load(ckpt_path)
else:
model = load_state_dict_from_url(ckpt_url)
if 'state_dict' in model.keys():
model = model['state_dict']
elif 'model' in model.keys():
model = model['model']
# model.load_state_dict(checkpoint)
model.eval()
model.cuda()
print(model)
## Prepare the data loaders
traindir = os.path.join(data_dir, 'train')
valdir = os.path.join(data_dir, 'test')
_args = collections.namedtuple("mock_args", ["model", "distributed", "cache_dataset"])
dataset, dataset_test, train_sampler, test_sampler = load_data(
traindir, valdir, _args(model=model_name, distributed=False, cache_dataset=False))
data_loader_train = torch.utils.data.DataLoader(
dataset, batch_size=batch_size_train,
sampler=train_sampler, num_workers=4, pin_memory=True)
data_loader_test = torch.utils.data.DataLoader(
dataset_test, batch_size=batch_size_test,
sampler=test_sampler, num_workers=4, pin_memory=True)
data_loader_onnx = torch.utils.data.DataLoader(
dataset_test, batch_size=batch_size_onnx,
sampler=test_sampler, num_workers=4, pin_memory=True)
return model, data_loader_train, data_loader_test, data_loader_onnx
def main(cmdline_args):
parser = get_parser()
args = parser.parse_args(cmdline_args)
print(parser.description)
print(args)
torch.manual_seed(args.seed)
np.random.seed(args.seed)
args.model_name = 'resnet34'
args.data_dir = '/home/Dataset/'
    args.disable_pcq = True  # set True to disable per-channel quantization
args.batch_size_train = 8
args.batch_size_test = 8
args.batch_size_onnx = 8
args.calibrator = 'max'
args.pretrained = True
args.ckpt_path = ''
args.ckpt_url = ''
args.num_class = 5
## Prepare the pretrained model and data loaders
model, data_loader_train, data_loader_test, data_loader_onnx = prepare_model(
args.model_name,
args.num_class,
args.data_dir,
not args.disable_pcq,
args.batch_size_train,
args.batch_size_test,
args.batch_size_onnx,
args.calibrator,
args.pretrained,
args.ckpt_path,
args.ckpt_url)
kwargs = {"alpha": 0.75, "gamma": 2.0, "reduction": 'mean'}
criterion = kornia.losses.FocalLoss(**kwargs)
## Calibrate the model
with torch.no_grad():
calibrate_model(
model=model,
model_name=args.model_name,
data_loader=data_loader_train,
num_calib_batch=args.num_calib_batch,
calibrator=args.calibrator,
hist_percentile=args.percentile,
out_dir=args.out_dir)
## Build sensitivy profile
if args.sensitivity:
build_sensitivity_profile(model, criterion, data_loader_test)
kwargs = {"alpha": 0.75, "gamma": 3.0, "reduction": 'mean'}
criterion = kornia.losses.FocalLoss(**kwargs)
optimizer = torch.optim.SGD(model.parameters(), lr=0.0001)
lr_scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, args.num_finetune_epochs)
for epoch in range(args.num_finetune_epochs):
        # Training a single epoch
train_one_epoch(model, criterion, optimizer, data_loader_train, "cuda", epoch, 100)
lr_scheduler.step()
if args.num_finetune_epochs > 0:
## Evaluate after finetuning
with torch.no_grad():
print('Finetune evaluation:')
top1_finetuned = evaluate(model, criterion, data_loader_test, device="cuda")
else:
top1_finetuned = -1.0
## Export to ONNX
onnx_filename = args.out_dir + '/' + args.model_name + ".onnx"
top1_onnx = -1.0
if export_onnx(model, onnx_filename, args.batch_size_onnx, not args.disable_pcq) and args.evaluate_onnx:
## Validate ONNX and evaluate
top1_onnx = evaluate_onnx(onnx_filename, data_loader_onnx, criterion, args.print_freq)
## Print summary
print("Accuracy summary:")
table = PrettyTable(['Stage','Top1'])
table.align['Stage'] = "l"
table.add_row( [ 'Finetuned', "{:.2f}".format(top1_finetuned) ] )
table.add_row( [ 'ONNX', "{:.2f}".format(top1_onnx) ] )
print(table)
## Compare results
if args.threshold >= 0.0:
if args.evaluate_onnx and top1_onnx < 0.0:
print("Failed to export/evaluate ONNX!")
return 1
if args.num_finetune_epochs > 0:
if top1_finetuned >= (top1_onnx - args.threshold):
print("Accuracy threshold was met!")
else:
print("Accuracy threshold was missed!")
return 1
return 0
def evaluate_onnx(onnx_filename, data_loader, criterion, print_freq):
print("Loading ONNX file: ", onnx_filename)
ort_session = onnxruntime.InferenceSession(onnx_filename)
with torch.no_grad():
metric_logger = utils.MetricLogger(delimiter=" ")
header = 'Test:'
with torch.no_grad():
for image, target in metric_logger.log_every(data_loader, print_freq, header):
image = image.to("cpu", non_blocking=True)
image_data = np.array(image)
input_data = image_data
# run the data through onnx runtime instead of torch model
input_name = ort_session.get_inputs()[0].name
raw_result = ort_session.run([], {input_name: input_data})
output = torch.tensor((raw_result[0]))
loss = criterion(output, target)
acc1, acc5 = utils.accuracy(output, target, topk=(1, 5))
batch_size = image.shape[0]
metric_logger.update(loss=loss.item())
metric_logger.meters['acc1'].update(acc1.item(), n=batch_size)
metric_logger.meters['acc5'].update(acc5.item(), n=batch_size)
# gather the stats from all processes
metric_logger.synchronize_between_processes()
print(' ONNXRuntime: Acc@1 {top1.global_avg:.3f} Acc@5 {top5.global_avg:.3f}'
.format(top1=metric_logger.acc1, top5=metric_logger.acc5))
return metric_logger.acc1.global_avg
def export_onnx(model, onnx_filename, batch_onnx, per_channel_quantization):
model.eval()
quant_nn.TensorQuantizer.use_fb_fake_quant = True # We have to shift to pytorch's fake quant ops before exporting the model to ONNX
if per_channel_quantization:
opset_version = 13
else:
opset_version = 12
# Export ONNX for multiple batch sizes
print("Creating ONNX file: " + onnx_filename)
dummy_input = torch.randn(batch_onnx, 3, 224, 224, device='cuda') #TODO: switch input dims by model
input_names = ['input']
if 'resnet' in onnx_filename:
print('Changing last layer of resnet...')
output_names = ['Linear[fc]'] ### ResNet34
if 'densenet' in onnx_filename:
print('Changing last layer of densenet...')
output_names = ['Linear[classifier]'] #### DenseNet
dynamic_axes = {'input': {0: 'batch_size'}}
try:
torch.onnx.export(model, dummy_input, onnx_filename, input_names=input_names,
export_params=True, output_names=output_names, opset_version=opset_version,
dynamic_axes=dynamic_axes, verbose=True, enable_onnx_checker=False, do_constant_folding=True)
except ValueError:
warnings.warn(UserWarning("Per-channel quantization is not yet supported in Pytorch/ONNX RT (requires ONNX opset 13)"))
print("Failed to export to ONNX")
return False
return True
def calibrate_model(model, model_name, data_loader, num_calib_batch, calibrator, hist_percentile, out_dir):
if num_calib_batch > 0:
print("Calibrating model")
with torch.no_grad():
collect_stats(model, data_loader, num_calib_batch)
if not calibrator == "histogram":
compute_amax(model, method="max")
calib_output = os.path.join(
out_dir,
F"{model_name}-max-{num_calib_batch*data_loader.batch_size}.pth")
# torch.save(model.state_dict(), calib_output) # Just weights
torch.save(model, calib_output) # whole model
else:
for percentile in hist_percentile:
print(F"{percentile} percentile calibration")
compute_amax(model, method="percentile")
calib_output = os.path.join(
out_dir,
F"{model_name}-percentile-{percentile}-{num_calib_batch*data_loader.batch_size}.pth")
torch.save(model, calib_output) # whole model
for method in ["mse", "entropy"]:
print(F"{method} calibration")
compute_amax(model, method=method)
calib_output = os.path.join(
out_dir,
F"{model_name}-{method}-{num_calib_batch*data_loader.batch_size}.pth")
# torch.save(model.state_dict(), calib_output)
torch.save(model, calib_output)
def collect_stats(model, data_loader, num_batches):
# Enable calibrators
for name, module in model.named_modules():
if isinstance(module, quant_nn.TensorQuantizer):
if module._calibrator is not None:
module.disable_quant()
module.enable_calib()
else:
module.disable()
# Feed data to the network for collecting stats
for i, (image, _) in tqdm(enumerate(data_loader), total=num_batches):
model(image.cuda())
if i >= num_batches:
break
# Disable calibrators
for name, module in model.named_modules():
if isinstance(module, quant_nn.TensorQuantizer):
if module._calibrator is not None:
module.enable_quant()
module.disable_calib()
else:
module.enable()
def compute_amax(model, **kwargs):
# Load calib result
for name, module in model.named_modules():
if isinstance(module, quant_nn.TensorQuantizer):
if module._calibrator is not None:
if isinstance(module._calibrator, calib.MaxCalibrator):
module.load_calib_amax()
else:
module.load_calib_amax(**kwargs)
print(F"{name:40}: {module}")
model.cuda()
def build_sensitivity_profile(model, criterion, data_loader_test):
quant_layer_names = []
for name, module in model.named_modules():
if name.endswith("_quantizer"):
module.disable()
layer_name = name.replace("._input_quantizer", "").replace("._weight_quantizer", "")
if layer_name not in quant_layer_names:
quant_layer_names.append(layer_name)
for i, quant_layer in enumerate(quant_layer_names):
print("Enable", quant_layer)
for name, module in model.named_modules():
if name.endswith("_quantizer") and quant_layer in name:
module.enable()
print(F"{name:40}: {module}")
with torch.no_grad():
evaluate(model, criterion, data_loader_test, device="cuda")
for name, module in model.named_modules():
if name.endswith("_quantizer") and quant_layer in name:
module.disable()
print(F"{name:40}: {module}")
if __name__ == '__main__':
res = main(sys.argv[1:])
exit(res)
More info regarding system:
TensorRT == 8.2
Pytorch == 1.9.0+cu111
Torchvision == 0.10.0+cu111
ONNX == 1.9.0
ONNXRuntime == 1.8.1
pycuda == 2021
| If the ONNX model has Q/DQ nodes in it, you may not need a calibration cache, because quantization parameters such as scale and zero point are included in the Q/DQ nodes. You can run the Q/DQ ONNX model directly with the TensorRT execution provider in ONNX Runtime (>= v1.9.0).
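For reference, a minimal sketch of that last suggestion (the file name is illustrative; this assumes an onnxruntime-gpu build compiled with TensorRT support):
import onnxruntime as ort
# ORT >= 1.9 requires the providers list; the TensorRT EP consumes the Q/DQ nodes directly
sess = ort.InferenceSession(
    "quantized_model.onnx",
    providers=["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"],
)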
| https://stackoverflow.com/questions/71368760/ |
The pytorch training model cannot be created successfully | I would like to do a neural network for regression analysis using optuna based on this site.
I would like to create a model with two 1D data as input and one 1D data as output in batch learning.
x is the training data and y is the teacher data.
class Model(nn.Module):
# Constructor (initialization at instance creation)
def __init__(self,trial, mid_units1, mid_units2):
super(Model, self).__init__()
self.linear1 = nn.Linear(2, mid_units1)
self.bn1 = nn.BatchNorm1d(mid_units1)
self.linear2 = nn.Linear(mid_units1, mid_units2)
self.bn2 = nn.BatchNorm1d(mid_units2)
self.linear3 = nn.Linear(mid_units2, 1)
self.activation = trial_activation(trial)
def forward(self, x):
x = self.linear1(x)
x = self.bn1(x)
x = self.activation(x)
x = self.linear2(x)
device = "cuda" if torch.cuda.is_available() else "cpu"
EPOCH = 100
x = torch.from_numpy(a[0].astype(np.float32)).to(device)
y = torch.from_numpy(a[1].astype(np.float32)).to(device)
def train_epoch(model, optimizer, criterion):
model.train()
optimizer.zero_grad() # reset gradients to zero
y_pred = model(x) # prediction
loss = criterion(y_pred.reshape(y.shape), y) # compute loss (align shapes)
loss.backward() # compute gradients
optimizer.step() # update parameters
return loss.item()
def trial_activation(trial):
activation_names = ['ReLU','logsigmoid']
activation_name = trial.suggest_categorical('activation', activation_names)
if activation_name == activation_names[0]:
activation = F.relu
else:
activation = F.logsigmoid
return activation
def objective(trial):
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# trial for the number of hidden-layer units
mid_units1 = int(trial.suggest_discrete_uniform("mid_units1", 1024*2,1024*4, 64*2))
mid_units2 = int(trial.suggest_discrete_uniform("mid_units2", 1024, 1024*2, 64*2))
net = Model(trial, mid_units1, mid_units2).to(device)
criterion = nn.MSELoss()
# trial for the optimizer
optimizer = trial_optimizer(trial, net)
train_loss = 0
for epoch in range(EPOCH):
train_loss = train_epoch(net, optimizer, criterion, device)
torch.save(net.state_dict(), str(trial.number) + "new1.pth")
return train_loss
storage_name = "a.sql"
study_name = 'a'
study = optuna.create_study(
study_name = study_name,
storage='sqlite:///' + storage_name,
load_if_exists=True,
direction='minimize')
TRIAL_SIZE = 100
study.optimize(objective, n_trials=TRIAL_SIZE)
error message
---> 28 loss = criterion(y_pred.reshape(y.shape), y) # compute loss (align shapes)
29 loss.backward() # compute gradients
30 optimizer.step() # update parameters
AttributeError: 'NoneType' object has no attribute 'reshape'
Because of the above error, I checked the value of y_pred and found it to be None.
model.train()
optimizer.zero_grad()
I am thinking that these two lines may be wrong, but I don't know how to solve this problem.
| With PyTorch, when you call y_pred = model(x), that invokes the forward function defined in the Model class.
So y_pred gets whatever forward returns; in your case it returns nothing, which is why you get a None value. You can change the forward function as below:
def forward(self, x):
x = self.linear1(x)
x = self.bn1(x)
x = self.activation(x)
x = self.linear2(x)
return x
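Note that this forward still skips the bn2, linear3 and activation modules defined in __init__; if the model is meant to output a single value per sample (as the reshape in the loss suggests), the full pass would presumably look like this (a sketch, not part of the original answer):
def forward(self, x):
    x = self.linear1(x)
    x = self.bn1(x)
    x = self.activation(x)
    x = self.linear2(x)
    x = self.bn2(x)
    x = self.activation(x)
    x = self.linear3(x)  # final layer maps to a single output
    return x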
| https://stackoverflow.com/questions/71369132/ |
Rounding only specific entries of torch.tensor() | Using torch.round() is it possible to eventually round specific entries of a tensor? Example:
tensor([ 8.5040e+00, 7.3818e+01, 5.2922e+00, -1.8912e-01, 5.4389e-01,
-3.6032e-03, 4.5763e-01, -2.7471e-02])
Desired output:
tensor([ 9., 74., 5., 0., 5.4389e-01,
-3.6032e-03, 4.5763e-01, -2.7471e-02])
(Only first 4 rounded)
| You can do it as follows (note that this rounds the first four entries in place):
a[:4]=torch.round(a[:4])
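For example, with the tensor from the question:
a = torch.tensor([8.5040e+00, 7.3818e+01, 5.2922e+00, -1.8912e-01,
                  5.4389e-01, -3.6032e-03, 4.5763e-01, -2.7471e-02])
a[:4] = torch.round(a[:4])
# a is now tensor([ 9.0000, 74.0000,  5.0000, -0.0000,  0.5439, -0.0036,  0.4576, -0.0275])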
| https://stackoverflow.com/questions/71370754/ |
I want to increase the frequency of each element of an array by a particular amount? | This is a sample array: [64, 45, 56, 67, 78, 12, 112, 232]
I want to increase the frequency of each element by a constant (taking 4 here):
[64,64,64,64, 45, 45, 45, 45, 56, 56, 56, 56, 67, 67, 67, 67, 78, 78, 78, 78, 12, 12, 12, 12, 112, 112, 112, 112, 232, 232, 232, 232]
Can anyone help me with this? It should not be hardcoded.
| You can use np.repeat to achieve this:
>>> a.repeat(4)
array([ 64, 64, 64, 64, 45, 45, 45, 45, 56, 56, 56, 56, 67,
67, 67, 67, 78, 78, 78, 78, 12, 12, 12, 12, 112, 112,
112, 112, 232, 232, 232, 232])
| https://stackoverflow.com/questions/71372812/ |
Installing PyTorch in Google Colabs | I am currently trying to run my code in Google Colabs and for that I need PyTorch. This is the feedback I got.
> ValueError Traceback (most recent call last)
<ipython-input-26-3d0da87b83cc> in <module>()
13 #from sentence_transformers import SentenceTransformer
14 get_ipython().system('pip install torch')
---> 15 import torch
16 get_ipython().system('pip install transformers')
17 from transformers import BertTokenizer, BertModel
>
> /usr/local/lib/python3.7/dist-packages/torch/__init__.py in <module>()
> 195 if USE_GLOBAL_DEPS:
> 196 _load_global_deps()
> --> 197 from torch._C import * # noqa: F403
> 198
> 199 # Appease the type checker; ordinarily this binding is inserted by the
>
> ValueError: module functions cannot set METH_CLASS or METH_STATIC
I read that it could have to do with Numpy and tried another version, which also failed.
| Remove the old NumPy and run the following command:
pip install -U numpy
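In a Colab cell that would be (assumption about Colab behavior: restart the runtime afterwards so the upgraded NumPy is actually loaded):
!pip install -U numpy
# after Runtime -> Restart runtime:
import numpy, torch
print(numpy.__version__, torch.__version__)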
Found the solution here.
| https://stackoverflow.com/questions/71374207/ |
pytorch tensor of tensors to a tensor | When I print a torch tensor, I get the below output. How could I get that tensor without [] for the inner elements?
I printed the type of the first element and it returned <class 'torch.Tensor'>. So this tensor seems to be a tensor of tensors... How could I convert it to a tensor of numbers?
tensor([[-5.6117e-01],
[ 3.5726e-01],
[-2.5853e-01],
[-4.8641e-01],
[-1.0581e-01],
[-1.8322e-01],
[-1.2732e+00],
[-5.9760e-02],
[ 1.2819e-01],
[ 6.3894e-02],
[-9.1817e-01],
[-1.6539e-01],
[-1.1471e+00],
[ 1.9666e-01],
[-6.3297e-01],
[-4.0876e-01],
[-2.4590e-02],
[ 2.7065e-01],
[ 3.5308e-01],
[-4.6348e-01],
[-4.1755e-01],
[-1.1554e-01],
[-4.2062e-01],
[ 1.4067e-01],
[-2.9788e-01],
[-7.4582e-02],
[-5.3751e-01],
[ 1.1344e-01],
[-2.6100e-01],
[ 2.6951e-02],
[-5.0437e-02],
[-1.9163e-01],
[-3.3893e-02],
[-5.9640e-01],
[-1.1574e-01],
[ 1.4613e-01],
[ 1.2263e-01],
[-1.5566e-01],
[ 1.4740e-01],
[-9.9924e-01],
[ 2.0878e-01],
[-2.0074e-01],
[ 7.8383e-02],
[ 7.4679e-02],
[-5.8065e-01],
[ 6.7777e-01],
[ 5.9879e-01],
[ 6.6301e-01],
[-4.7051e-01],
[-2.5468e-01],
[-2.7382e-01],
[ 1.7585e-01],
[ 3.6151e-01],
[-9.2532e-01],
[-1.6999e-01],
[ 8.4971e-02],
[-6.6083e-01],
[-3.1204e-02],
[ 6.3712e-01],
[-5.8580e-02],
[-7.7901e-04],
[-4.6792e-01],
[ 1.0796e-01],
[ 7.8766e-01],
[ 1.6809e-01],
[-7.0058e-01],
[-2.9299e-01],
[-8.2735e-02],
[ 2.0875e-01],
[-2.9426e-01],
[-7.6748e-02],
[-1.5762e-01],
[-5.7432e-01],
[-5.2042e-01],
[-1.5152e-01],
[ 1.4119e+00],
[-1.5752e-01],
[-3.0565e-01],
[-5.1378e-01],
[-5.8924e-01],
[-1.0163e+00],
[-2.2021e-01],
[ 2.9112e-02],
[ 1.8521e-01],
[ 6.2814e-01],
[-6.8793e-01],
[ 2.1395e-02],
[ 5.7168e-01],
[ 9.0977e-01],
[ 3.8899e-01],
[ 3.0209e-01],
[ 2.4655e-01],
[-1.1688e-01],
[-5.9835e-02],
[ 3.6426e-02],
[-5.2782e-01],
[ 1.4604e+00],
[ 2.9685e-01],
[-2.4077e-01],
[ 1.0163e+00],
[ 6.9770e-01],
[-2.6183e-01],
[ 3.6770e-01],
[ 3.6535e-03],
[ 4.2364e-01],
[-5.4703e-01],
[ 8.9173e-02],
[-3.9032e-01],
[-5.9740e-01],
[ 3.7479e-02],
[ 3.0257e-01],
[ 8.2539e-02],
[-6.0559e-01],
[-4.3660e-01],
[-7.0624e-01],
[-5.0503e-01],
[-4.0929e-01],
[-2.3300e-01],
[ 2.0298e-01],
[-6.3697e-01],
[-1.2584e-01],
[ 5.6092e-02],
[ 5.0150e-02],
[-1.5358e-01],
[ 2.9248e-02],
[ 1.1180e-01],
[-1.5535e-01],
[ 1.1964e-01],
[-6.5698e-01],
[ 4.1923e-01],
[ 7.4044e-02],
[ 2.4536e-02],
[ 3.2647e-01],
[-7.7464e-01],
[ 3.9898e-01],
[-2.5777e-01],
[ 8.5569e-02],
[-4.0305e-01],
[ 5.4463e-01],
[-3.4124e-01],
[-4.0789e-01],
[ 4.2093e-01],
[-3.8487e-01],
[-4.0491e-01],
[-2.1539e-01],
[-1.7979e-02],
[ 3.2492e-01],
[-2.0894e-01],
[ 2.5629e-01],
[ 9.6046e-01]], device='cuda:0', grad_fn=<AddmmBackward0>)
| That tensor has a singleton dimension (i.e. it is of shape [N, 1]). Just squeeze that dimension or pick the 0th element:
In [1]: import torch
In [2]: a = torch.zeros([10,1])
In [3]: a
Out[3]:
tensor([[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.]])
In [4]: a[:,0]
Out[4]: tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
In [5]: a.squeeze(1)
Out[5]: tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
| https://stackoverflow.com/questions/71376281/ |
Is there a faster way to compare two rows in two tensors than a for loop? | I'm new to PyTorch and I would like to check, for each element in tensor1, whether it is part of the same row in tensor2. For example, if I have two tensors as follows:
a = torch.tensor([[0,5],[1,5], [4,5], [7,8]])
b = torch.tensor([[0,1],[2,3], [7,5], [-1,7]])
the output should be as following:
[ True, False]
[False, False]
[False, False]
[False, False]
[False, False]
[False, True]
[False, True]
[False, False]
I know that I can do it using a simple for loop, but is there a way to do it faster in PyTorch?
| You need to use repeat_interleave on both tensors, then reshape the first tensor and compare it with the other tensor
like this
res = a.repeat_interleave(2, dim=1).reshape(-1, 2) == b.repeat_interleave(2, dim=0)
the output should be as follows
output
tensor([[ True, False],
[False, False],
[False, False],
[False, False],
[False, False],
[False, True],
[False, True],
[False, False]])
The output is a tensor. If you need it without the brackets, you can use the flatten method:
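For example:
res.flatten()
# tensor([ True, False, False, False, False, False, False, False,
#         False, False, False,  True, False,  True, False, False])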
| https://stackoverflow.com/questions/71378382/ |
How to initialise (and check sanity) weights efficiently of layers within complex (nested) modules in PyTorch? | Looking for an efficient way to access nested Modules and Layers to set the weights
I am replicating the DCGAN Paper and my code works as expected. I found out that in the paper, the authors said that:
All weights were initialized from a zero-centered Normal distribution
with standard deviation 0.02
This awesome answer explains that it can be done using torch.nn.init.normal_(nn.Conv2d(1,1,1, 1,1 ).weight.data, 0.0, 0.02) but I have complex structure using ModuleList and others. What is the most efficient way of doing this?
By Complex, please look at the code below for my implementation:
'''
Implement the Deep Convolution Gan AKA DCGAN in Pytorch: Paper at https://arxiv.org/pdf/1511.06434v2.pdf
'''
import torch
import torch.nn as nn
class GeneratorBlock(nn.Module):
'''
Generator Block uses TransposedConv2D -> Batch Norm (except LAST block) -> Relu
Note: kernel_size = 4, stride = 2, padding = 1 is used in the paper. When BatchNorm is used, Bias is not used for Conv2D
'''
def __init__(self, in_channels, out_channels, kernel_size = 4, stride = 2, padding = 1, use_batchnorm:bool = True):
super().__init__()
self.use_batchnorm = use_batchnorm
self.transpose_conv = nn.ConvTranspose2d(in_channels, out_channels, kernel_size = kernel_size, stride=stride, padding=padding, bias = not self.use_batchnorm)
self.batch_norm = nn.BatchNorm2d(out_channels) if self.use_batchnorm else None
self.activation = nn.ReLU() # Paper uses Relu in Generator Network
def forward(self, x):
x = self.transpose_conv(x)
return self.activation(self.batch_norm(x)) if self.use_batchnorm else self.activation(x)
class Generator(nn.Module):
'''
Generate Images using Transposed Convolution. Input is a random noise of [Batch, 100, 1,1] Dimension and then upsampled
'''
def __init__(self, input_features = 100, base_feature = 128, final_channels:int = 1):
'''
We use nn.Sequential here just to show the workings. If you want to make the layers dynamically using a loop, find nn.ModuleList() in the Discriminator block. Both work the same
So we'll use 'base_feature = 64' as a base for input and output channels
args:
input_features: The shape of Random Noise from which an image will be generated
base_feature: The shape of feature map or number or channels which will act as out base. Other inputs and outputs will be calculated based on this
final_channels: The channels / features which will be sent to the Discriminator as an input
'''
super(Generator, self).__init__()
# in Discriminator, we do the same work using ModuleList(). Uses 4 blocks
self.blocks = nn.Sequential(
GeneratorBlock(in_channels = input_features, out_channels = base_feature * 8, stride = 1, padding = 0), # from Random Noise, Generate 1024 features
GeneratorBlock(in_channels = base_feature * 8, out_channels = base_feature * 4), # 1024 -> 512 features
GeneratorBlock(in_channels = base_feature * 4, out_channels = base_feature * 2), # 512 -> 256 features
GeneratorBlock(in_channels = base_feature * 2, out_channels = base_feature), # 256 -> 128 features
nn.ConvTranspose2d(base_feature, final_channels, kernel_size = 4, stride = 2, padding = 1)# 128 -> final feature. It is just GeneratorBlock without ReLu and BatchNorm ;)
)
self.activation = nn.Tanh() # To make the outputs between [-1,1]
def forward(self, x):
'''
Takes Random Noise as input and Generte features from that
'''
return self.activation(self.blocks(x))
class DiscriminatorBlock(nn.Module):
'''
Discriminator Block uses Conv2D -> Batch Norm (except FIRST block) -> LeakyRelu
Note: kernel_size = 4, stride = 2, padding = 1 is used in the paper. When BatchNorm is used, Bias is not used for Conv2D
'''
def __init__(self, in_channels, out_channels, kernel_size = 4, stride = 2, padding = 1, use_batchnorm:bool = True):
super().__init__()
self.use_batchnorm = use_batchnorm
self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size, stride=stride, padding=padding, bias = not self.use_batchnorm)
self.batch_norm = nn.BatchNorm2d(out_channels) if self.use_batchnorm else None
self.activation = nn.LeakyReLU(0.2)
def forward(self, x):
x = self.conv(x)
return self.activation(self.batch_norm(x)) if self.use_batchnorm else self.activation(x)
class Discriminator(nn.Module):
'''
CNNs to classify whether the images generated by the Generator are as good as the real ones
Feature Changes as :: 1 -> 64 -> 128 -> 256 -> 512 -> 1
'''
def __init__(self, input_features = 1, output_features = 1, middle_features = [64,128,256]):
'''
In the paper, they take in a feature of [Batch, 1, 64, 64] from the Generator and then output a single number per sample in the batch
'''
super().__init__()
self.layers = nn.ModuleList() # Just a fancy method of stacking layers using loop
# in the paper, the first layer does not use BatchNorm
self.layers.append(DiscriminatorBlock(input_features, middle_features[0], use_batchnorm = False)) # 1 -> 64 Because the input has 1 channel
for i, channel in enumerate(middle_features): # total 4 blocks are used in paper. 1 has already been used in the line above. 3 blocks are these
self.layers.append(DiscriminatorBlock(channel, channel*2)) # 64 -> 128 --- 128 -> 256 --- 256 -> 512
self.final_conv = nn.Conv2d(in_channels = middle_features[-1]*2, out_channels = output_features, kernel_size = 4, stride = 2, padding = 0) # Input from previous layer 512 -> 1
self.sigmoid_layer = nn.Sigmoid() # gives whether an image is real or fake or more precisely, how CLOSE is it to the real image
def forward(self, x):
for layer in self.layers:
x = layer(x)
return self.sigmoid_layer(self.final_conv(x))
def test_DCGAN_code():
noise = torch.rand(10,100,1,1)
image = Generator()(noise)
result = Discriminator()(image)
print('Model Built Successfully!!! Generating 10 random samples and their end results')
print(f"'Z' random Noise shape: {noise.shape} || Generator output shape: {image.shape} || Discriminator shape: {result.shape}")
| You can simply iterate over all submodules, at the end of your __init__ method:
class Generator(nn.Module):
def __init__(self, ....):
# all code here
# ...
# init weights, at the very bottom of __init__
for sm in self.modules():
if isinstance(sm, nn.Conv2d):
# only conv2d will be initialized in this way
torch.nn.init.normal_(sm.weight.data, 0.0, 0.02)
done.
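As a follow-up, an equivalent and common pattern is to put the logic in a function and use Module.apply, which visits every submodule recursively (a sketch; the 0.02 standard deviation matches the DCGAN paper):
def weights_init(m):
    if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
        torch.nn.init.normal_(m.weight.data, 0.0, 0.02)
    elif isinstance(m, nn.BatchNorm2d):
        torch.nn.init.normal_(m.weight.data, 1.0, 0.02)
        torch.nn.init.constant_(m.bias.data, 0.0)

model = Generator()
model.apply(weights_init)  # applies weights_init to model and all its submodules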
| https://stackoverflow.com/questions/71379676/ |
PyTorch moving average computation creates inplace operation | I have a loss function that depends on an "exponential moving average" Z. A minimal example (pay special attention to the getUpdatedZ function):
import torch
import torch.nn as nn
class FeedForward(nn.Module):
def __init__(self):
super(FeedForward, self).__init__()
self.model = nn.Sequential(nn.Linear(1, 100),
nn.ReLU(),
nn.Linear(100, 1))
def forward(self, x):
return self.model(x)
model = FeedForward()
nEpochs = 100
optimizer = torch.optim.Adam(params=model.parameters(), lr=1e-3)
def getTrainingPoints():
return torch.rand(1000, 1)
def lossFunction(X, Z):
# Returning Z here is enough to expose the problem. The real loss is more complicated.
return Z
def getUpdatedZ(X, Z):
U = model(X)
Znew = torch.mean(U)
# Having Z in this computation creates an inplace operation (I'm not sure why).
# Returning, for example, Znew, does not cause any issues (but the computation is incorrect)
return 0.2 * Z + 0.8 * Znew
Z = torch.tensor([1.0])
X = getTrainingPoints()
for i in range(nEpochs):
optimizer.zero_grad()
Z = getUpdatedZ(X, Z)
loss = lossFunction(X, Z)
# loss function depends on gradient of the model in the real version of the code, hence retain_graph=True
loss.backward(retain_graph=True)
optimizer.step()
I get the following error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [100, 1]], which is output 0 of AsStridedBackward0, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
| After some trials, I think that the error arises because you are computing a recursive function (Z = getUpdatedZ(X, Z)) but you are changing some of its parameters (the weights of the Linear modules) at each iteration through optimizer.step().
You can call backward() just once at the end of the for loop, or you may want to break the autodifferentiation graph, for example by assigning Z = Z.detach() after loss.backward() (plain Z.detach() returns a new tensor and does not modify Z in place). Sometimes this trick is used to avoid overly complex and inefficient backpropagations (check, for example, this).
However in both cases, this will change the structure of the optimized function, so be sure of what you are doing.
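For the second option, the training loop would become something like this (a sketch of the detach approach):
Z = torch.tensor([1.0])
X = getTrainingPoints()

for i in range(nEpochs):
    optimizer.zero_grad()
    Z = getUpdatedZ(X, Z)
    loss = lossFunction(X, Z)
    loss.backward()
    optimizer.step()
    # cut the graph so the next iteration does not backpropagate
    # through weights that have already been updated
    Z = Z.detach()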
| https://stackoverflow.com/questions/71382491/ |
Calculating dimensions of fully connected layer? | I am struggling to work out how to calculate the dimensions for the fully connected layer. I am inputting images which are (448x448) with a batch size of 16. Below is the code for my convolutional layers:
class ConvolutionalNet(nn.Module):
def __init__(self, num_classes=182):
super().__init__()
self.layer1 = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(16),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2)
)
self.layer2 = nn.Sequential(
nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2)
)
self.layer3 = nn.Sequential(
nn.Conv2d(32, 32, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2)
)
self.layer4 = nn.Sequential(
nn.Conv2d(32, 64, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(64),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2)
)
self.layer5 = nn.Sequential(
nn.Conv2d(64, 64, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(64),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2)
)
I want to add a fully connected layer:
self.fc = nn.Linear(?, num_classes)
Would anyone be able to explain the best way to go about calculating this? Also, if I have multiple fully connected layers e.g. (self.fc2, self.fc3), would the second parameter always equal the number of classes. I am new to coding and finding it hard to wrap my head around this.
| The conv layers don't change the width/height of the features since you've set padding equal to (kernel_size - 1) / 2. Max pooling with kernel_size = stride = 2 will decrease the width/height by a factor of 2 (rounded down if input shape is not even).
Using 448 as input width/height, the output width/height will be 448 // 2 // 2 // 2 // 2 // 2 = 448/32 = 14 (where // is floor-divide operator).
The number of channels is fully determined by the last conv layer, which outputs 64 channels.
Therefore you will have a [B,64,14,14] shaped tensor, so the Linear layer should have in_features = 64*14*14 = 12544.
Note you'll need to flatten the input beforehand, something like:
self.layer6 = nn.Sequential(
nn.Flatten(),
nn.Linear(12544, num_classes)
)
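A quick way to sanity-check these numbers (a sketch; model is an instance of the ConvolutionalNet defined in the question):
model = ConvolutionalNet()
x = torch.randn(16, 3, 448, 448)
out = model.layer5(model.layer4(model.layer3(model.layer2(model.layer1(x)))))
print(out.shape)  # torch.Size([16, 64, 14, 14]) -> 64 * 14 * 14 = 12544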
| https://stackoverflow.com/questions/71385657/ |
RuntimeError: Given groups=1, weight of size [32, 16, 5, 5], expected input[16, 3, 448, 448] to have 16 channels, but got 3 channels instead | I am getting the following error and can't figure out why. I printed the input size of my tensor before it gets fed to the CNN:
torch.Size([16, 3, 448, 448])
Here is my error message:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-116-bfa18f2a99fd> in <module>()
14 # Forward pass
15 print(images.shape)
---> 16 outputs = modelll(images.float())
17 loss = criterion(outputs, labels)
18
6 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)
441 _pair(0), self.dilation, self.groups)
442 return F.conv2d(input, weight, bias, self.stride,
--> 443 self.padding, self.dilation, self.groups)
444
445 def forward(self, input: Tensor) -> Tensor:
RuntimeError: Given groups=1, weight of size [32, 16, 5, 5], expected input[16, 3, 448, 448] to have 16 channels, but got 3 channels instead
I defined a CNN with 5 convolutional layers and two fully connected layers. I am feeding in batches of 16 and have resized the images to be (448x448). The images are colour, so I assumed an input of torch.Size([16, 3, 448, 448]) would be correct. Do I need to rearrange my tensor to be torch.Size([3, 448, 448, 16])? Just guessing here as I am fairly new to coding. I have looked online but haven't been able to figure it out. Any help would be greatly appreciated.
#Defining CNN
class ConvolutionalNet(nn.Module):
def __init__(self, num_classes=182):
super().__init__()
self.layer1 = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(16),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2)
)
self.layer2 = nn.Sequential(
nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2)
)
self.layer3 = nn.Sequential(
nn.Conv2d(32, 32, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2)
)
self.layer4 = nn.Sequential(
nn.Conv2d(32, 64, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(64),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2)
)
self.layer5 = nn.Sequential(
nn.Conv2d(64, 64, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(64),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2)
)
self.fc1 = nn.Linear(10*10*64, 240)
self.fc2 = nn.Linear(240, 182)
def forward(self, x):
out = self.layer1(x)
out = self.layer2(x)
out = self.layer3(x)
out = self.layer4(x)
out = self.layer5(x)
out = out.reshape(out.size(0), -1)
out = F.relu(self.fc1((x)))
out = self.fc3(x)
return out
#Creating instance
modelll = ConvolutionalNet(num_classes).to(device)
modelll
num_classes = 182
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(modelll.parameters(), lr=3e-4)
#Training loop
modelll.train()
num_epochs = 5 # Set the model into `training` mode, because certain operators will perform differently during training and evaluation (e.g. dropout and batch normalization)
total_epochs = notebook.tqdm(range(num_epochs))
for epoch in total_epochs:
for i, (images, labels, m) in enumerate(train_loader):
# Move tensors to the configured device
images = images.to(device)
labels = labels.to(device)
# Forward pass
print(images.shape)
outputs = modelll(images.float())
loss = criterion(outputs, labels)
# Backward and optimize
loss.backward()
optimizer.step()
optimizer.zero_grad()
if (i + 1) % 10 == 0:
total_epochs.set_description(
'Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format(
epoch + 1, num_epochs, i + 1, len(train_loader), loss.item()))
| You haven't passed your output to the next layer's input, you're continually using the input. You should change your forward call to:
def forward(self, x):
out = self.layer1(x)
out = self.layer2(out)
out = self.layer3(out)
out = self.layer4(out)
out = self.layer5(out)
out = out.reshape(out.size(0), -1)
out = F.relu(self.fc1((out)))
out = self.fc2(out)  # fc2: only fc1 and fc2 are defined on this model
return out
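One further observation beyond the original error: with 448x448 inputs and five stride-2 max-pools, the flattened features have 64 * 14 * 14 = 12544 elements, so self.fc1 = nn.Linear(10*10*64, 240) will presumably raise a shape-mismatch error next; it should be nn.Linear(64*14*14, 240).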
| https://stackoverflow.com/questions/71386892/ |
Pytorch Custom dataloader: TypeError: pic should be PIL Image or ndarray. Got <class 'torch.Tensor'> | I want to use a custom Dataset to feed numpy data to a DataLoader. When I set the transform, I get the error TypeError: pic should be PIL Image or ndarray. Got <class 'torch.Tensor'>
import os
import torch
import numpy as np
from torch.utils.data import Dataset, TensorDataset, DataLoader
from torchvision import transforms
class CustomTensorDataset(Dataset):
"""
TensorDataset with support for transforms
"""
def __init__(self, tensors, transform=None):
assert all(tensors[0].size(0) == tensor.size(0) for tensor in tensors)
self.tensors = tensors
self.transform = transform
def __getitem__(self, index):
x = self.tensors[0][index]
if self.transform:
x = self.transform(x)
y = self.tensors[1][index]
return x, y
def __len__(self):
return self.tensors[0].size(0)
te_data = torch.FloatTensor(np.ones([100, 3, 32, 32]))
te_targets = torch.FloatTensor(np.ones([100]))
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.4914, 0.4822, 0.4465], std=[0.2023, 0.1994, 0.2010])
])
testset_custom = CustomTensorDataset(tensors=[te_data, te_targets], transform=transform)
# testset_custom = CustomTensorDataset(tensors=[te_data, te_targets], transform=None) # --> no error
for item in testset_custom:
print(item)
| Your input data to the Dataset needs to be a PIL image or a numpy array, but your te_data and te_targets are torch tensors. To solve this, just do not convert them to tensors before giving them to the Dataset, and keep the channel dimension last (transforms.ToTensor() itself converts the HWC numpy array to a CHW tensor):
te_data = np.ones([100, 32, 32, 3])
te_targets = np.ones([100])
The assert condition also needs to change, since the inputs are now numpy arrays:
assert all(tensors[0].shape[0] == tensor.shape[0] for tensor in tensors)
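The same change is likely needed in __len__, which also calls the tensor-only .size(0) (a point not covered by the original answer):
def __len__(self):
    return self.tensors[0].shape[0]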
| https://stackoverflow.com/questions/71387040/ |
Pytorch datasets.UDPOS.splits throwing error | I want to split the UDPOS dataset into train, valid, and test by fields. Below is my code-
import torch
import torch.nn as nn
import torch.optim as optim
from torchtext.legacy import data
from torchtext import datasets
SEED = 1234
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
TEXT = data.Field(lower = True)
UD_TAGS = data.Field(unk_token = None)
PTB_TAGS = data.Field(unk_token = None)
fields = (("text", TEXT), ("udtags", UD_TAGS), ("ptbtags", PTB_TAGS))
train_data, valid_data, test_data = datasets.UDPOS.splits(fields)
This code gives me the following error -
I am using Pytorch version - '1.10.2'.
How do I split the UDPOS dataset using fields in the current version?
| I solved the same problem by changing the code
from torchtext import datasets
to
from torchtext.legacy import datasets
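So both imports come from the legacy namespace (which, as far as I know, exists in torchtext 0.9 through 0.11 and was removed in 0.12):
from torchtext.legacy import data
from torchtext.legacy import datasets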
| https://stackoverflow.com/questions/71390078/ |
How to access latest torchvision.models (e.g. ViT)? | I have seen in the official torchvision docs that recently vision transformers and the ConvNeXt model families have been added to the PyTorch model zoo. However, even after upgrading to latest torchvision version 0.11.3 (via pip) these new models are not available:
>>> import torchvision; torchvision.__version__
'0.11.3+cu102'
>>> import torchvision.models as models
>>> model = models.resnext50_32x4d() # previous models work fine
>>> model = models.vit_b_16() # vision transformers don't work
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'torchvision.models' has no attribute 'vit_b_16'
Any ideas how I can access these latest model additions in PyTorch?
| Even though @Shai's answer is a nice addition, my original question was how I could access the official ViT and ConvNeXt models in torchvision.models. As it turned out, the answer was simply to wait. So for the record: after upgrading to the latest torchvision pip package, version 0.12, I got these new models as well.
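For example, after the upgrade (pretrained was still the accepted argument in torchvision 0.12):
import torchvision
print(torchvision.__version__)  # should be >= 0.12
model = torchvision.models.vit_b_16(pretrained=True)
model = torchvision.models.convnext_tiny(pretrained=True)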
| https://stackoverflow.com/questions/71393736/ |
Pytorch gradient error: nonetype unsupported operand type(s) for +: 'NoneType' and 'NoneType' | I get the following error message when i run the code given below. The code is a part of a bigger code but essentially i am computing gradients here. The variable displacements and colloc_points are of the type: tensor([[-0.0819, 0.1623, 0.1228]], grad_fn=),
tensor([[-0.0556, 2.2222, 0.1667]], requires_grad=True), respectively. How do I resolve this error?
---> 43 eps_xy = 0.5*(u_y+v_x)
TypeError: unsupported operand type(s) for +: 'NoneType' and 'NoneType'
u = displacement[0,0]
v = displacement[0,1]
w = displacement[0,2]
print(colloc_point)
x = colloc_point[0,0]
y = colloc_point[0,1]
z = colloc_point[0,2]
## Compute gradients
u_y = torch.autograd.grad(u, y, create_graph=True,allow_unused=True)[0]
v_x = torch.autograd.grad(v, x, create_graph=True,allow_unused=True)[0]
eps_xy = 0.5*(u_y + v_x)
| It is possible that, even if u, y, v and x all require gradients, u is not a function of y and, similarly, v does not depend on x.
You can check this by setting allow_unused=False and see if torch.autograd.grad returns an error.
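A minimal sketch of this failure mode (illustrative tensors, not the asker's model):
import torch
p = torch.randn(1, 3, requires_grad=True)
u = p.sum()     # u is computed from p ...
y = p[0, 1]     # ... while y is a separate node derived from p
print(torch.autograd.grad(u, y, allow_unused=True)[0])  # None: u does not depend on y
# slicing first and building u from the slices makes the gradient well-defined
x, y, z = p[0, 0], p[0, 1], p[0, 2]
u2 = x * y + z
print(torch.autograd.grad(u2, y, create_graph=True)[0])  # tensor equal to x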
| https://stackoverflow.com/questions/71395632/ |
Semantic segmentation with detectron2 | I used Detectron2 to train a custom model with Instance Segmentation and worked well. There are several Tutorials on google colab with Detectron2 using Instance Segmentation, but nothing about Semantic Segmentation. So, to train the Custom Instance Segmentation the code based on colab (https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5#scrollTo=7unkuuiqLdqd) is this:
from detectron2.engine import DefaultTrainer
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = ("balloon_train",)
cfg.DATASETS.TEST = ()
cfg.DATALOADER.NUM_WORKERS = 2
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml") # Let training initialize from model zoo
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.00025 # pick a good LR
cfg.SOLVER.MAX_ITER = 300 # 300 iterations seems good enough for this toy dataset; you will need to train longer for a practical dataset
cfg.SOLVER.STEPS = [] # do not decay learning rate
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 128 # faster, and good enough for this toy dataset (default: 512)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1 # only has one class (ballon). (see https://detectron2.readthedocs.io/tutorials/datasets.html#update-the-config-for-new-datasets)
# NOTE: this config means the number of classes, but a few popular unofficial tutorials incorrect uses num_classes+1 here.
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
And to train for semantic segmentation I replaced "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml" with "/Misc/semantic_R_50_FPN_1x.yaml"; basically I only changed the pre-trained model. And I got this error:
TypeError: cross_entropy_loss(): argument 'target' (position 2) must be Tensor, not NoneType
How do I set up semantic segmentation on Google Colab?
| To train for semantic segmentation you can use the same COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml model. You don't have to change this line.
The training code you showed in your question is correct and can be used for semantic segmentation as well. All that changes are the label files.
Once the model is trained, you can use it for inference by loading the model weights from the trained model
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 # set the testing threshold for this model
cfg.DATASETS.TEST = ("Detectron_terfspot_" + "test", ) # the name given to your dataset when loading/registering it
cfg.DATALOADER.NUM_WORKERS = 2
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1
predictor = DefaultPredictor(cfg)
| https://stackoverflow.com/questions/71396788/ |
How can I create images for each batch using Pytorch? | I want to make a binary classifier that classifies the following:
Class 1. Some images that I already have.
Class 2. Some images that I create from a function, using the images of class 1.
The problem is that instead of pre-creating the two classes, and then loading them, to speed up the process I would like the class 2 images to be created for each batch.
Any ideas on how I can tackle the problem? If I use the DataLoader as usual, I have to enter the images of both classes directly, but if I still don't have the images of the second class I don't know how to do it.
Thanks.
| You can tackle the problem in at least two ways.
(Preferred) You create a custom Dataset class, AugDset, such that AugDset.__len__() returns 2 * len(real_dset) and, when idx >= len(real_dset), AugDset.__getitem__(idx) generates the synthetic image from real_dset[idx - len(real_dset)] (see the sketch after this list).
You create a custom collate_fn function, passed to the DataLoader, that, given a batch, augments it with your synthetically generated images.
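A minimal sketch of the first option (real_dset, synth_fn and the returned labels are illustrative names and choices):
from torch.utils.data import Dataset

class AugDset(Dataset):
    def __init__(self, real_dset, synth_fn):
        self.real_dset = real_dset  # existing class-1 images
        self.synth_fn = synth_fn    # your function that builds a class-2 image

    def __len__(self):
        return 2 * len(self.real_dset)

    def __getitem__(self, idx):
        if idx < len(self.real_dset):
            img = self.real_dset[idx]
            return img, 0                                  # class 1, as-is
        img = self.real_dset[idx - len(self.real_dset)]
        return self.synth_fn(img), 1                       # class 2, built per access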
| https://stackoverflow.com/questions/71399540/ |
RuntimeError: 0D or 1D target tensor expected, multi-target not supported. I was training a deep learning model and I am getting this issue | My Training Model
def train(model,criterion,optimizer,iters):
epoch = iters
train_loss = []
validaion_loss = []
train_acc = []
validation_acc = []
states = ['Train','Valid']
for epoch in range(epochs):
print("epoch : {}/{}".format(epoch+1,epochs))
for phase in states:
if phase == 'Train':
model.train() # train the model if phase is 'Train'
dataload = train_data_loader
else:
model.eval()
dataload = valid_data_loader
run_loss, run_acc = 0, 0 # variables to accumulate loss and accuracy
for data in dataload:
inputs,labels = data
inputs = inputs.to(device)
labels = labels.to(device)
labels = labels.byte()
optimizer.zero_grad() #Using the optimizer
with torch.set_grad_enabled(phase == 'Train'):
outputs = model(inputs)
loss = criterion(outputs,labels.unsqueeze(1).float())
predict = outputs>=0.5
if phase == 'Train':
loss.backward() #backward propagation
optimizer.step()
acc = torch.sum(predict == labels.unsqueeze(1))
run_loss+=loss.item()
run_acc+=acc.item()/len(labels)
if phase == 'Train': # calculating train loss and accuracy
epoch_loss = run_loss/len(train_data_loader)
train_loss.append(epoch_loss)
epoch_acc = run_acc/len(train_data_loader)
train_acc.append(epoch_acc)
else: #training validation loss and accuracy
epoch_loss = run_loss/len(valid_data_loader)
validaion_loss.append(epoch_loss)
epoch_acc = run_acc/len(valid_data_loader)
validation_acc.append(epoch_acc)
print("{}, loss :{},accuracy:{}".format(phase,epoch_loss,epoch_acc))
history = {'Train_loss':train_loss,'Train_accuracy':train_acc,
'Validation_loss':validaion_loss,'Validation_Accuracy':validation_acc}
return model, history
I was experiencing the error "0D or 1D target tensor expected, multi-target not supported". Could you please help in rectifying the code described above? I referred to previous related articles but was unable to get the desired result. What code snippets do I have to change so that my model will run successfully? Any suggestions are most welcome. Thanks in advance.
| Your problem is that labels already have the correct shape to calculate the loss. When you add .unsqueeze(1) to labels, you give them shape [32, 1], which is not consistent with what the loss requires.
To fix the problem, you only need to remove .unsqueeze(1) for labels.
If you read the documentation of CrossEntropyLoss, the arguments are:
Input should have shape (N, C), which is outputs in your case, i.e. [32, 3].
Target should have shape (N,), which is labels in your case, i.e. [32]. Therefore, the loss function expects a 1D target, not a multi-target.
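A quick demonstration of the shape contract:
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
outputs = torch.randn(32, 3)          # (N, C) logits
labels = torch.randint(0, 3, (32,))   # (N,) class indices
loss = criterion(outputs, labels)               # works
# criterion(outputs, labels.unsqueeze(1))       # RuntimeError: 0D or 1D target tensor expected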
| https://stackoverflow.com/questions/71399847/ |
Huggingface Dataloader BERT ValueError: too many values to unpack (expected 2), AX hyperparameters tuning with Pytorch | I've been stuck with this error for one week now; I tried everything, so the fact is that I'm not understanding deeply what is happening (I'm new to PyTorch implementation). Anyway, I'm trying to implement a BERT classifier to discriminate between 2 sequence classes, with AX hyperparameters tuning.
Here is all my code, preceded by a sample of my dataset (I have 3 CSV files: train, test, val). Thank you very much!
0 1
M A T T D R P T P D G T D A I D L T T R V R R... 1
M K K L F Q T E P L L E L F N C N E L R I I G... 0
M L V A A A V C P H P P L L I P E L A A G A A... 1
M I V A W G N S G S G L L I L I L S L A V S A... 0
M V E E G R R L A A L H P N I V V K L P T T E... 1
M G S K V S K N A L V F N V L Q A L R E G L T... 1
M P S K E T S P A E R M A R D E Y Y M R L A M... 1
M V K E Y A L E W I D G Y R E R L V K V S D A... 1
M G T A A S Q D R A A M A E A A Q R V G D S F... 0
df_train=pd.read_csv('CLASSIFIER_train',sep=',',header=None)
df_train
class SequenceDataset(Dataset):
def __init__(self, sequences, targets, tokenizer, max_len):
self.sequences = sequences
self.targets = targets
self.tokenizer = tokenizer
self.max_len = max_len
def __len__(self):
return len(self.sequences)
def __getitem__(self, item):
sequences = str(self.sequences[item])
target = self.targets[item]
encoding = self.tokenizer.encode_plus(
sequences,
add_special_tokens=True,
max_length=self.max_len,
return_token_type_ids=False,
pad_to_max_length=True,
return_attention_mask=True,
return_tensors='pt',
)
return {
'sequences_text': sequences,
'input_ids': encoding['input_ids'].flatten(),
'attention_mask': encoding['attention_mask'].flatten(),
'targets': torch.tensor(target, dtype=torch.long)
}
def create_data_loader(df, tokenizer, max_len, batch_size):
ds = SequenceDataset(
sequences=df[0].to_numpy(),
targets=df[1].to_numpy(),
tokenizer=tokenizer,
max_len=max_len
)
return DataLoader(
ds,
batch_size=batch_size,
num_workers=2,
shuffle=True
)
BATCH_SIZE = 16
train_data_loader = create_data_loader(df_train, tokenizer, MAX_LEN, BATCH_SIZE)
val_data_loader = create_data_loader(df_val, tokenizer, MAX_LEN, BATCH_SIZE)
test_data_loader = create_data_loader(df_test, tokenizer, MAX_LEN, BATCH_SIZE)
def net_train(net, train_data_loader, parameters, dtype, device):
net.to(dtype=dtype, device=device)
# Define loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), # or any optimizer you prefer
lr=parameters.get("lr", 0.001), # 0.001 is used if no lr is specified
momentum=parameters.get("momentum", 0.9)
)
scheduler = optim.lr_scheduler.StepLR(
optimizer,
step_size=int(parameters.get("step_size", 30)),
gamma=parameters.get("gamma", 1.0), # default is no learning rate decay
)
num_epochs = parameters.get("num_epochs", 3) # Play around with epoch number
# Train Network
for _ in range(num_epochs):
for inputs, labels in train_data_loader:
# move data to proper dtype and device
inputs = inputs.to(dtype=dtype, device=device)
labels = labels.to(device=device)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
scheduler.step()
return net
def init_net(parameterization):
model = BertModel.from_pretrained(PRE_TRAINED_MODEL_NAME)
# The depth of unfreezing is also a hyperparameter
for param in model.parameters():
param.requires_grad = False # Freeze feature extractor
Hs = 512 # Hidden layer size; you can optimize this as well
model.fc = nn.Sequential(nn.Linear(2048, Hs), # attach trainable classifier
nn.ReLU(),
nn.Dropout(0.2),
nn.Linear(Hs, 10),
nn.LogSoftmax(dim=1))
return model # return untrained model
def train_evaluate(parameterization):
# constructing a new training data loader allows us to tune the batch size
train_data_loader=create_data_loader(df_train, tokenizer, MAX_LEN, batch_size=parameterization.get("batchsize", 32))
# Get neural net
untrained_net = init_net(parameterization)
# train
trained_net = net_train(net=untrained_net, train_data_loader=train_data_loader,
parameters=parameterization, dtype=dtype, device=device)
# return the accuracy of the model as it was trained in this run
return evaluate(
net=trained_net,
data_loader=test_data_loader,
dtype=dtype,
device=device,
)
classes=('0','1')
dtype = torch.float
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
best_parameters, values, experiment, model = optimize(
parameters=[
{"name": "lr", "type": "range", "bounds": [1e-6, 0.4], "log_scale": True},
{"name": "batchsize", "type": "range", "bounds": [16, 128]},
{"name": "momentum", "type": "range", "bounds": [0.0, 1.0]},
#{"name": "max_epoch", "type": "range", "bounds": [1, 30]},
#{"name": "stepsize", "type": "range", "bounds": [20, 40]},
],
evaluation_function=train_evaluate,
objective_name='accuracy',
)
print(best_parameters)
means, covariances = values
print(means)
print(covariances)
File "<ipython-input-71-e52ebc0d7b5b>", line 14, in train_evaluate
parameters=parameterization, dtype=dtype, device=device)
File "<ipython-input-61-66c57e7138fa>", line 20, in net_train
for inputs, labels in train_data_loader:
ValueError: too many values to unpack (expected 2)
| Your dataloader returns a dictionary, therefore the way you loop over and unpack it is wrong. It should be done as such:
# Train Network
for _ in range(num_epochs):
# Your dataloader returns a dictionary
# so access it as such
for batch in train_data_loader:
# move data to proper dtype and device
labels = batch['targets'].to(device=device)
atten_mask = batch['attention_mask'].to(device=device)
input_ids = batch['input_ids'].to(device=device)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(input_ids, attention_mask=atten_mask)
| https://stackoverflow.com/questions/71401458/ |
Using more than 1 metric in pytorch | I have some experience in TensorFlow but I'm new to PyTorch. Sometimes I need more than one metric to check the accuracy of training. In TensorFlow, I used to do as shown below. But I wonder how I could use more than one metric in PyTorch.
LR = 0.0001
optim = keras.optimizers.Adam(LR)
dice_loss_se2 = sm.losses.DiceLoss()
mae = tf.keras.losses.MeanAbsoluteError( )
metrics = [ mae,sm.metrics.IOUScore(threshold=0.5), sm.metrics.FScore(threshold=0.5) , dice_loss_se2]
model.compile(optimizer=optim,loss= dice_loss_se2,metrics= metrics)
| In PyTorch, training is done mostly through explicit loops, so you have to compute your metrics at each step. There are packages like torchmetrics with which you can compute each metric; here's an example:
import numpy as np
import torch
import torchmetrics
from tqdm import tqdm

preds, targets, test_loss = [], [], 0.0
for step, (test_image, test_labels) in tqdm(enumerate(test_dataloader), total=len(test_dataloader)):
test_batch_image = test_image.to('cuda')
test_batch_label = test_labels.to('cuda')
targets.append(test_labels)
with torch.no_grad():
logits = model(test_batch_image)
loss = criterion(logits, test_batch_label)
test_loss += loss.item()
preds.append(logits.detach().cpu().numpy().argmax(axis=1))
preds = torch.tensor(np.concatenate(preds))
targets = torch.tensor(np.concatenate(targets))
print('[Epoch %d] Test loss: %.3f' %(epoch + 1, test_loss/ len(test_dataloader)))
print('Accuracy: {}%'.format(round(torchmetrics.functional.accuracy(target=targets, preds=preds).item() * 100), 2))
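If you want several metrics handled together, torchmetrics also offers MetricCollection (exact class names and constructor arguments, e.g. F1Score or a required task argument in newer versions, depend on your torchmetrics version):
import torchmetrics

metrics = torchmetrics.MetricCollection({
    "acc": torchmetrics.Accuracy(),
    "f1": torchmetrics.F1Score(),
})
metrics.update(preds, targets)  # call once per batch
print(metrics.compute())        # dict with one entry per metric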
| https://stackoverflow.com/questions/71404067/ |
Why does torch transpose return a non-contiguous tensor but the equivalent view operation returns a contiguous one? | Both torch.Tensor.view() and torch.tensor.transpose() can transpose a 2 dimensional tensor, e.g.
a = torch.arange(8).reshape(2, 4)
a.t().is_contiguous() # False
a.view(4,2).is_contiguous() # True
But exchanging the dimensions with .view() results in a contiguous tensor while using .t() results in a non-contiguous one. It's clearly somehow possible to transpose while retaining contiguity, though I don't really understand how, but my main question is: why is this behavior different for these two functions?
The documentation for view says that the "returned tensor shares the same data" and for transpose that the "resulting out tensor shares its underlying storage with the input tensor" so I naively expect the contiguity status of the outputs to be the same for both functions, but perhaps "sharing the same data" and "sharing underlying storage" are two different things?
Edit: It did not occur to me that there is a way to reshape this tensor into 4,2 that is not tansposing lol. Thanks to the author of the accepted answer for pointing out that a.t() != a.view(dim1, dim0).
| With view(4, 2) you are not transposing the tensor, you are simply reshaping it.
Original tensor:
t = torch.tensor([[1, 2],
[3, 4],
[5, 6],
[7, 8]])
t.transpose() returns:
torch.tensor([[1, 3, 5, 7],
[2, 4, 6, 8]])
t.view(4, 2) returns:
t.tensor([[1, 2, 3, 4],
[5, 6, 7, 8]])
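The difference is also visible in the strides, since both operations reuse the same storage:
import torch
t = torch.arange(8).reshape(4, 2)
print(t.stride())             # (2, 1) -> contiguous
print(t.t().stride())         # (1, 2) -> strides swapped, non-contiguous
print(t.view(2, 4).stride())  # (4, 1) -> storage read in order, contiguous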
| https://stackoverflow.com/questions/71407872/ |
Training with threshold in PyTorch | I have a neural network, which produces a single value when excited with input. I need to use this value returned by the network to threshold another array. The result of this threshold operation is used to compute a loss function (the value of threshold is not known before hand and needs to be arrived at by training).
Following is an MWE
import torch
x = torch.randn(10, 1) # Say this is the output of the network (10 is my batch size)
data_array = torch.randn(10, 2) # This is the data I need to threshold
ground_truth = torch.randn(10, 2) # This is the ground truth
mse_loss = torch.nn.MSELoss() # Loss function
# Threshold
thresholded_vals = data_array * (data_array >= x) # Returns zero in all places where the value is less than the threshold, the value itself otherwise
# Compute loss and gradients
loss = mse_loss(thresholded_vals, ground_truth)
loss.backward() # Throws error here
Since the operation of thresholding returns a tensor array that is devoid of any gradients the backward() operation throws error.
How does one train a network in such a case?
| Your threshold function is not differentiable with respect to the threshold; therefore torch does not calculate a gradient for the threshold, which is why your example is not working.
import torch
x = torch.randn(10, 1, requires_grad=True) # Say this is the output of the network (10 is my batch size)
data_array = torch.randn(10, 2, requires_grad=True) # This is the data I need to threshold
ground_truth = torch.randn(10, 2) # This is the ground truth
mse_loss = torch.nn.MSELoss() # Loss function
# Threshold
thresholded_vals = data_array * (data_array >= x) # Returns zero in all places where the value is less than the threshold, the value itself otherwise
# Compute loss and gradients
loss = mse_loss(thresholded_vals, ground_truth)
loss.backward() # Throws error here
print(x.grad)
print(data_array.grad)
Output:
None #<- for the threshold x
tensor([[ 0.1088, -0.0617], #<- for the data_array
[ 0.1011, 0.0000],
[ 0.0000, 0.0000],
[-0.0000, -0.0000],
[ 0.2047, 0.0973],
[-0.0000, 0.2197],
[-0.0000, 0.0929],
[ 0.1106, 0.2579],
[ 0.0743, 0.0880],
[ 0.0000, 0.1112]])
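A common workaround (not part of the original answer) is to replace the hard threshold with a smooth surrogate so gradients can flow to x; the steepness k below is a hypothetical choice, and mse_loss and ground_truth are reused from the snippet above:
x = torch.randn(10, 1, requires_grad=True)
data_array = torch.randn(10, 2, requires_grad=True)
k = 50.0  # larger k -> closer to a hard step
soft_mask = torch.sigmoid(k * (data_array - x))
thresholded_vals = data_array * soft_mask
loss = mse_loss(thresholded_vals, ground_truth)
loss.backward()
print(x.grad)  # a real gradient now reaches the threshold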
| https://stackoverflow.com/questions/71409752/ |
How to add an additional output node during training for Pytorch? | I am making a class-incremental learning multi-label classifier. Here the model first trains with 7 labels. After training, another dataset emerges that contains the same labels except one more. I want to automatically add an extra node to the trained network and continue training on this new dataset. How can I do this?
class FeedForewardNN(nn.Module):
def __init__(self, input_size, h1_size = 264, h2_size = 128, num_services=8):
super().__init__()
self.input_size = input_size
self.lin1 = nn.Linear(input_size, h1_size)
self.lin2 = nn.Linear(h1_size, h2_size)
self.lin3 = nn.Linear(h2_size, num_services)
self.relu = nn.ReLU()
self.sigmoid = nn.Sigmoid()
def forward(self, x):
x = self.lin1(x)
x = self.relu(x)
x = self.lin2(x)
x = self.relu(x)
x = self.lin3(x)
x = self.sigmoid(x)
return x
This is the architecture of the feedforward Neural Network.
Then I first train on the data set with only 7 classes.
#Create NN
input_size = len(x_columns)
net1 = FeedForewardNN(input_size, num_services=7)
alpha= 0.001
#Define optimizer
optimizer = optim.Adam(net.parameters(), lr=alpha)
criterion = nn.BCELoss()
running_loss = 0
#Training Loop
loss_list = []
auc_list = []
for i in range(len(train_data_x)):
optimizer.zero_grad()
outputs = net1(train_data_x[i])
loss = criterion(outputs, train_data_y[i])
loss.backward()
optimizer.step()
However then, I want to add one additional output node, define the new weights but maintain the old trained weights, and train on this new data set.
| I suggest to replace layer with new one, having desired shape, and than partially assign its parameter values with old ones as follows:
def increaseClassifier( m: torch.nn.Linear ):
w = m.weight
b = m.bias
old_shape = m.weight.shape
m2 = nn.Linear( old_shape[1], old_shape[0] +1 )
m2.weight = nn.parameter.Parameter( torch.cat( (m.weight, m2.weight[0:1]) ) )
m2.bias = nn.parameter.Parameter( torch.cat( (m.bias, m2.bias[0:1]) ) )
return m2
class FeedForewardNN(nn.Module):
...
def incrHere(self):
self.lin3 = increaseClassifier( self.lin3 )
UPD:
Can you explain, how these additional weights that come with this new output node are initialized?
The initial weights for the new channel come from creating the new layer: the layer constructor makes new parameters with its default random initialization; we then replace part of them with the trained weights, and the remaining part is ready for new training.
m2.weight = nn.parameter.Parameter( torch.cat( (m.weight, m2.weight[0:1]) ) )
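A quick sanity check of the widening (illustrative sizes):
old = nn.Linear(4, 7)
new = increaseClassifier(old)
print(new.weight.shape, new.bias.shape)  # torch.Size([8, 4]) torch.Size([8])
assert torch.equal(new.weight[:7], old.weight)   # trained rows preserved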
| https://stackoverflow.com/questions/71410194/ |
What's the correct way to implement convolutional blocks of specified depth? | I'm trying to implement Bayesian optimization with a Convolutional Neural Network in PyTorch; specifically, I'm trying to translate the network structure from MATLAB's BayesOptExperiment to PyTorch. I want my network to have the following structure:
Input data -> convblock -> maxpool -> convblock -> maxpool -> convblock -> avgpool -> flatten -> linear -> softmax
where convblock consists of:
[conv2Dlayer -> batch normalization layer -> ReLU],
repeated a few times. The current version works as expected only when section_depth = 1, achieving accuracy of around 65-70%, but if I raise the depth of the convblock, accuracy plummets to around 10%. I'm definitely missing something, but I'm not sure what it is. The structure of my network:
import torch
import numpy as np
import torch.nn as nn
import torch.nn.functional as F
#...
class Net(nn.Module):
def __init__(self, section_depth):
super().__init__()
#! define network architecture
self.section_depth = section_depth
self.num_filters = round(16/np.sqrt(self.section_depth))
self.maxpool = nn.MaxPool2d(kernel_size=2, stride=2)
self.avgpool = nn.AvgPool2d(kernel_size=8)
self.block1 = nn.ModuleList()
self.block2 = nn.ModuleList()
self.block3 = nn.ModuleList()
self.batchnorm1 = nn.BatchNorm2d(self.num_filters)
self.batchnorm2 = nn.BatchNorm2d(2*self.num_filters)
self.batchnorm3 = nn.BatchNorm2d(4*self.num_filters)
for i in range(self.section_depth):
channels1 = 3 if i==0 else self.num_filters
channels2 = self.num_filters if i == 0 else 2*self.num_filters
channels3 = 2*self.num_filters if i == 0 else 4*self.num_filters
self.block1.append(nn.Conv2d(in_channels=channels1, out_channels=self.num_filters, kernel_size=3, padding='same'))
self.block2.append(nn.Conv2d(in_channels=channels2, out_channels=2*self.num_filters, kernel_size=3, padding='same'))
self.block3.append(nn.Conv2d(in_channels=channels3, out_channels=4*self.num_filters, kernel_size=3, padding='same'))
self.fc1 = nn.Linear(4*self.num_filters, 10) # ? number of outputs
self.softmax = nn.Softmax(dim=1)
def forward(self, x):
for i in self.block1:
x = F.relu(self.batchnorm1(i(x)))
x = self.maxpool(x)
for i in self.block2:
x = F.relu(self.batchnorm2(i(x)))
x = self.maxpool(x)
for i in self.block3:
x = F.relu(self.batchnorm3(i(x)))
x = self.avgpool(x)
x = torch.flatten(x, 1) # flatten all dimensions except batch
x = self.fc1(x)
x = self.softmax(x)
return x
Any help would be appreciated.
 | Ok, I figured it out. It turns out that a batch normalization layer has learnable parameters, so instead of reusing the same layer for the whole convblock I had to make a separate batchnorm layer for every convolutional layer.
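A minimal sketch of that fix for the first block (my own illustration; the other blocks follow the same pattern):
# in __init__: one BatchNorm2d per conv layer, kept in a parallel ModuleList
self.block1 = nn.ModuleList()
self.bn1 = nn.ModuleList()
for i in range(self.section_depth):
    in_ch = 3 if i == 0 else self.num_filters
    self.block1.append(nn.Conv2d(in_ch, self.num_filters, kernel_size=3, padding='same'))
    self.bn1.append(nn.BatchNorm2d(self.num_filters))

# in forward: each conv uses its own batchnorm
for conv, bn in zip(self.block1, self.bn1):
    x = F.relu(bn(conv(x)))
x = self.maxpool(x)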
| https://stackoverflow.com/questions/71416364/ |
RuntimeError: The size of tensor a (10) must match the size of tensor b (3) at non-singleton dimension 0 | I am using this intersection over union code to determine IOU from my predictions and targets:
def intersection_over_union(boxes_preds, boxes_labels):
"""
Calculates intersection over union
Parameters:
boxes_preds (tensor): Predictions of Bounding Boxes (BATCH_SIZE, 4)
boxes_labels (tensor): Correct labels of Bounding Boxes (BATCH_SIZE, 4)
box_format (str): midpoint/corners, if boxes (x,y,w,h) or (x1,y1,x2,y2)
Returns:
tensor: Intersection over union for all examples
"""
box1_x1 = boxes_preds[..., 0:1]
box1_y1 = boxes_preds[..., 1:2]
box1_x2 = boxes_preds[..., 2:3]
box1_y2 = boxes_preds[..., 3:4] # (N, 1)
box2_x1 = boxes_labels[..., 0:1]
box2_y1 = boxes_labels[..., 1:2]
box2_x2 = boxes_labels[..., 2:3]
box2_y2 = boxes_labels[..., 3:4]
x1 = torch.max(box1_x1, box2_x1)
y1 = torch.max(box1_y1, box2_y1)
x2 = torch.min(box1_x2, box2_x2)
y2 = torch.min(box1_y2, box2_y2)
# .clamp(0) is for the case when they do not intersect
intersection = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
box1_area = abs((box1_x2 - box1_x1) * (box1_y2 - box1_y1))
box2_area = abs((box2_x2 - box2_x1) * (box2_y2 - box2_y1))
return intersection / (box1_area + box2_area - intersection + 1e-6)
My inputs looks like this:
My target bounding boxes look like this:
print(targets[0]['boxes'])
tensor([[217., 481., 249., 511.],
[435., 191., 467., 223.],
[471., 86., 503., 118.]])
And my prediction bounding boxes look like this:
predictions['boxes']
tensor([[ 29.7859, 354.9666, 63.0900, 387.6363],
[469.1072, 85.6840, 503.1974, 119.7137],
[ 89.3957, 314.1584, 123.9789, 347.1621],
[432.2971, 188.4454, 468.4712, 227.3808],
[214.5407, 482.0136, 248.7030, 512.0000],
[329.1979, 340.8802, 366.3720, 375.8683],
[298.5089, 99.0098, 334.4280, 129.4205],
[ 0.0000, 347.7724, 17.3409, 384.5709],
[485.4312, 181.3882, 512.0000, 213.2009],
[144.5959, 356.5197, 183.4489, 387.4958]])
However, when I apply the IOU function:
iou = intersection_over_union(predictions['boxes'], targets[0]['boxes'])
I get this error:
RuntimeError: The size of tensor a (10) must match the size of tensor b (3) at non-singleton dimension 0
I'm not sure how I can fix the function as I'm guessing this means I have more predictions than targets...
 | I've instead opted for torchvision's own IoU function:
iou = torchvision.ops.box_iou(predictions['boxes'], targets[0]['boxes'])
There are no errors here.
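Note that box_iou returns the full pairwise IoU matrix of shape [10, 3] (one row per prediction, one column per target), which is why the shape mismatch disappears. To get, say, the best-matching target per prediction (my own illustration):
best_iou, best_target_idx = iou.max(dim=1)  # each of shape [10]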
I alternatively found this implementation (from https://github.com/amdegroot/ssd.pytorch/blob/master/layers/box_utils.py#L48):
def intersect(box_a, box_b):
""" We resize both tensors to [A,B,2] without new malloc:
[A,2] -> [A,1,2] -> [A,B,2]
[B,2] -> [1,B,2] -> [A,B,2]
Then we compute the area of intersect between box_a and box_b.
Args:
box_a: (tensor) bounding boxes, Shape: [A,4].
box_b: (tensor) bounding boxes, Shape: [B,4].
Return:
(tensor) intersection area, Shape: [A,B].
"""
A = box_a.size(0)
B = box_b.size(0)
max_xy = torch.min(box_a[:, 2:].unsqueeze(1).expand(A, B, 2),
box_b[:, 2:].unsqueeze(0).expand(A, B, 2))
min_xy = torch.max(box_a[:, :2].unsqueeze(1).expand(A, B, 2),
box_b[:, :2].unsqueeze(0).expand(A, B, 2))
inter = torch.clamp((max_xy - min_xy), min=0)
return inter[:, :, 0] * inter[:, :, 1]
def jaccard(box_a, box_b):
"""Compute the jaccard overlap of two sets of boxes. The jaccard overlap
is simply the intersection over union of two boxes. Here we operate on
ground truth boxes and default boxes.
E.g.:
A ∩ B / A ∪ B = A ∩ B / (area(A) + area(B) - A ∩ B)
Args:
box_a: (tensor) Ground truth bounding boxes, Shape: [num_objects,4]
box_b: (tensor) Prior boxes from priorbox layers, Shape: [num_priors,4]
Return:
jaccard overlap: (tensor) Shape: [box_a.size(0), box_b.size(0)]
"""
inter = intersect(box_a, box_b)
area_a = ((box_a[:, 2]-box_a[:, 0]) *
(box_a[:, 3]-box_a[:, 1])).unsqueeze(1).expand_as(inter) # [A,B]
area_b = ((box_b[:, 2]-box_b[:, 0]) *
(box_b[:, 3]-box_b[:, 1])).unsqueeze(0).expand_as(inter) # [A,B]
union = area_a + area_b - inter
return inter / union # [A,B]
Where applying jaccard produces the same output as torchvision's own function:
iou = jaccard(predictions['boxes'], targets[0]['boxes'])
An example tensor when printing IOU:
tensor([[0.0000, 0.0000],
[0.0000, 0.0000],
[0.0000, 0.0000],
[0.0000, 0.0000],
[0.0000, 0.9322],
[0.8021, 0.0000],
[0.0000, 0.0000],
[0.0000, 0.0000]])
| https://stackoverflow.com/questions/71417786/ |
Is there any way to include a counter (a variable that counts something) in a loss function in PyTorch? | These are some lines from my loss function. output is the output of a multiclass classification network.
bin_count=torch.bincount(torch.where(output>.1)[0], minlength=output.shape[0])
dr_output = (bin_count == 1) & (torch.argmax(output, dim=1)==labels)
I want dr_output.sum() to be part of my loss function. But there are many limitations in my implementation. Some functions are non-differentiable in pytorch, and also dr_output may be zero which is also not allowed if I only use dr_output as my loss. Can anyone please suggest to me a way around these problems?
| If I got it correctly:
bin_count=torch.bincount(torch.where(output>.1)[0], minlength=output.shape[0])
computes how many elements are greater than .1, for each row.
Instead:
dr_output = (bin_count == 1) & (torch.argmax(output, dim=1)==labels)
is true if there is exactly one element greater than .1 in the corresponding row, and the prediction is correct.
dr_output.sum() then counts how many rows verify this condition, so minimizing the loss may enforce incorrect predictions or distributions with more values greater than .1.
Given these considerations, you can approximate your loss with the following:
import torch
import torch.nn.functional as F
# x are the inputs, y the labels
mask = x > 0.1
p = F.softmax(x, dim=1)
out = p * (mask.sum(dim=1, keepdim=True) == 1)
loss = out[torch.arange(x.shape[0]), y].sum()
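A quick sanity check of the snippet above (my addition; the shapes are made up):
import torch
import torch.nn.functional as F

x = torch.randn(4, 3, requires_grad=True)  # 4 samples, 3 classes
y = torch.tensor([0, 2, 1, 0])
mask = x > 0.1
p = F.softmax(x, dim=1)
out = p * (mask.sum(dim=1, keepdim=True) == 1)
loss = out[torch.arange(x.shape[0]), y].sum()
loss.backward()  # differentiable, unlike bincount/argmax
print(x.grad)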
You can devise similar variants that are more fit for your problem.
| https://stackoverflow.com/questions/71418817/ |
Are there benefits to having Actor and Critic use significantly different models? | In Actor-Critic methods the Actor and Critic are assigned two complementary, but different goals. I'm trying to understand whether the differences between these goals (updating a policy and updating a value function) are large enough to warrant different models for the Actor and Critic, or if they are of similar enough complexity that the same model should be reused for simplicity. I realize that this could be very situational, but not in what way. For example, does the balance shift as the model complexity grows?
Please let me know if there are any rules of thumb for this, or if you know of a specific publication that addresses the issue.
 | The empirical results suggest the exact opposite - that it is important to have the same network doing both (up to some final layer/head). The main reason for this is that learning the value network (critic) provides signal for shaping the representation of the policy (actor) that otherwise would be nearly impossible to get.
In fact, if you think about it, these are extremely similar goals, since for an optimal deterministic policy
pi(s) = arg max_a Q(s, a) = arg max_a V(T(s, a))
where T is the transition dynamics.
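A minimal sketch of the usual shared-trunk layout (my own illustration, sizes arbitrary):
import torch.nn as nn

class ActorCritic(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        # shared representation trunk used by both heads
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh())
        self.policy_head = nn.Linear(hidden, n_actions)  # actor
        self.value_head = nn.Linear(hidden, 1)           # critic

    def forward(self, obs):
        h = self.trunk(obs)
        return self.policy_head(h), self.value_head(h)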
| https://stackoverflow.com/questions/71419306/ |
How to load data from multiple datasets in pytorch | I have two datasets of images - indoors and outdoors, they don't have the same number of examples.
Each dataset has images that contain a certain number of classes (minimum 1 maximum 4), these classes can appear in both datasets, and each class has 4 categories - red, blue, green, white.
Example:
Indoor - cats, dogs, horses
Outdoor - dogs, humans
I am trying to train a model, where I tell it, "here is an image that contains a cat, tell me its color" regardless of where it was taken (indoors, outdoors, in a car, on the moon)
To do that,
I need to present my model examples so that every batch has only one category (cat, dog, horse or human), but I want to sample from all datasets (two in this case) that contain these objects and mix them. How can I do this?
It has to take into account that the number of examples in each dataset is different, and that some categories appear in one dataset where others can appear in more than one.
and each batch must contain only one category.
I would appreciate any help, I have been trying to solve this for a few days now.
| Assuming the question is:
Combine 2+ data sets with potentially overlapping categories of objects (distinguishable by label)
Each object has 4 "subcategories" for each color (distinguishable by label)
Each batch should only contain a single object category
The first step will be to ensure consistency of the object labels from both data sets, if not already consistent. For example, if the dog class is label 0 in the first data set but label 2 in the second data set, then we need to make sure the two dog categories are correctly merged. We can do this "translation" with a simple data set wrapper:
class TranslatedDataset(Dataset):
"""
Args:
dataset: The original dataset.
translate_label: A lambda (function) that maps the original
dataset label to the label it should have in the combined data set
"""
def __init__(self, dataset, translate_label):
super().__init__()
self._dataset = dataset
self._translate_label = translate_label
def __len__(self):
return len(self._dataset)
def __getitem__(self, idx):
inputs, target = self._dataset[idx]
return inputs, self._translate_label(target)
The next step is combining the translated data sets together, which can be done easily with a ConcatDataset:
first_original_dataset = ...
second_original_dataset = ...
first_translated = TranslatedDataset(
first_original_dataset,
  lambda y: 0 if y == 2 else 2 if y == 0 else y,  # or similar
)
second_translated = TranslatedDataset(
second_original_dataset,
lambda y: y, # or similar
)
combined = ConcatDataset([first_translated, second_translated])
Finally, we need to restrict batch sampling to the same class, which is possible with a custom Sampler when creating the data loader.
class SingleClassSampler(torch.utils.data.Sampler):
def __init__(self, dataset, batch_size):
    super().__init__(dataset)
# We need to create sequential groups
# with batch_size elements from the same class
indices_for_target = {} # dict to store a list of indices for each target
for i, (_, target) in enumerate(dataset):
# converting to string since Tensors hash by reference, not value
str_targ = str(target)
if str_targ not in indices_for_target:
indices_for_target[str_targ] = []
indices_for_target[str_targ] += [i]
# make sure we have a whole number of batches for each class
trimmed = {
      k: v[:len(v) - (len(v) % batch_size)] if len(v) % batch_size else v
for k, v in indices_for_target.items()
}
# concatenate the lists of indices for each class
    self._indices = sum(trimmed.values(), [])
def __len__(self):
return len(self._indices)
def __iter__(self):
yield from self._indices
Then to use the sampler:
loader = DataLoader(
  combined,
  sampler=SingleClassSampler(combined, 64),
  batch_size=64,  # note: shuffle must not be set together with a custom sampler
)
I haven't run this code, so it might not be exactly right, but hopefully it will put you on the right track.
torch.utils.data Docs
| https://stackoverflow.com/questions/71422639/ |
RuntimeError: mat1 and mat2 shapes cannot be multiplied (25x340 and 360x1) | I get this error message and I'm not sure why. My input is (batch, 1, 312) from tabular data and this CNN is constructed for a regression prediction. I worked out the shapes for each step with the formula (input + 2*padding - filter size)/stride + 1 as in the comment below. The problem appears to occur at x = self.fc(x) and I can't figure out why. Your help is greatly appreciated. Thank you.
class CNNWeather(nn.Module):
# input (batch, 1, 312)
def __init__(self):
super(CNNWeather, self).__init__()
self.conv1 = nn.Conv1d(in_channels=1, out_channels=8, kernel_size=9, stride=1, padding='valid') # (312+2*0-9)/1 + 1 = 304
self.pool1 = nn.AvgPool1d(kernel_size=2, stride=2) # 304/2 = 302
self.conv2 = nn.Conv1d(in_channels=8, out_channels=12, kernel_size=3, stride=1, padding='valid') # (302-3)/1+1 = 300
self.pool2 = nn.AvgPool1d(kernel_size=2, stride=2) # 300/2 = 150
self.conv3 = nn.Conv1d(in_channels=12, out_channels=16, kernel_size=3, stride=1, padding='valid') # (150-3)/1+1 = 76
self.pool3 = nn.AvgPool1d(kernel_size=2, stride=2) # 76/2 = 38
self.conv4 = nn.Conv1d(in_channels=16, out_channels=20, kernel_size=3, stride=1, padding='valid') # (38-3)/1+1 = 36
self.pool4 = nn.AvgPool1d(kernel_size=2, stride=2) # 36/2 = 18 (batch, 20, 18)
self.fc = nn.Linear(in_features=20*18, out_features=1)
def forward(self, x):
x = self.pool1(F.relu(self.conv1(x)))
x = self.pool2(F.relu(self.conv2(x)))
x = self.pool3(F.relu(self.conv3(x)))
x = self.pool4(F.relu(self.conv4(x)))
print(x.size())
x = x.view(x.size(0), -1) # flatten (batch, 20*18)
x = self.fc(x)
return x
| The problem seems to be related to the input size of your FC layer:
self.fc = nn.Linear(in_features=20*18, out_features=1)
The output of the previous layer is 340, so you must use in_features=340.
These are the shapes of the output for the third and fourth layers.
torch.Size([5, 16, 73]) conv3 out
torch.Size([5, 16, 36]) pool3 out
torch.Size([5, 20, 34]) conv4 out
torch.Size([5, 20, 17]) pool4 out
Notice that out of the "pool4" layer come 20x17, meaning 340 elements. (The shape comments in your code go wrong at the first pooling: 304/2 is 152, not 302, and that error propagates through the later layers, which is how you arrived at 20*18 = 360.)
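So, keeping your naming, the fix is just:
self.fc = nn.Linear(in_features=20*17, out_features=1)  # 20 channels x 17 positions = 340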
| https://stackoverflow.com/questions/71423498/ |
Pytorch: layer not transferred on GPU with to() function | In the following code, I expected tensor x and layer l to both be on the GPU; instead only the tensor x turns out to be on the GPU, and not the layer l. In fact, using this approach results in RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! during a learning stage.
import torch
x = torch.zeros(1)
x = x.to('cuda')
try:
x.get_device()
print('x: gpu')
except:
print('x:','cpu')
l = torch.nn.Linear(1,1)
l = l.to('cuda')
try:
l.get_device()
print('l: gpu')
except:
print('l:','cpu')
the output is:
x: gpu
l: cpu
instead of both gpu.
why this?
Torch version: 1.10.2+cu113
 | you can't call .get_device() on an nn.Linear object; therefore your second try block fails and it prints the code in the exception part. In order to check what device your module is on, you can do the following:
print(next(l.parameters()).device)
output:
>> device(type='cuda', index=0)
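Alternatively (my addition), tensors expose .device directly, and a module can be checked through any of its parameters, which avoids the try/except entirely:
print(x.device)         # cuda:0
print(l.weight.device)  # cuda:0 -- so l.to('cuda') did move the layer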
| https://stackoverflow.com/questions/71426005/ |
Result type cast error when doing calculations with Pytorch model parameters | When I ran the code below:
import torchvision
model = torchvision.models.densenet201(num_classes=10)
params = model.state_dict()
for var in params:
params[var] *= 0.1
a RuntimeError was reported:
RuntimeError: result type Float can't be cast to the desired output type Long
But when I changed params[var] *= 0.1 to params[var] = params[var] * 0.1, the error disappears.
Why would this happen?
I thought params[var] *= 0.1 had the same effect as params[var] = params[var] * 0.1.
 | First, look at the first long-type parameter in densenet201: you will find features.norm0.num_batches_tracked, which indicates the number of mini-batches seen during training, used to calculate the running mean and variance when there is a BatchNormalization layer in the model. This parameter is a long-type number and cannot be float type because it behaves like a counter.
Second, in PyTorch, there are two types of operations:
Non-Inplace operations: you assign the new output after the calculation to a new copy of the variable, e.g. x = x + 1 or x = x / 2. The memory location of x before the assignment is not equal to the memory location after the assignment, because you have a copy of the original variable.
Inplace operations: the calculations are applied directly to the original copy of the variable without making any copy, e.g. x += 1 or x /= 2.
Let's move to your example to understand what happened:
Non-Inplace operation:
model = torchvision.models.densenet201(num_classes=10)
params = model.state_dict()
name = 'features.norm0.num_batches_tracked'
print(id(params[name])) # 140247785908560
params[name] = params[name] + 0.1
print(id(params[name])) # 140247785908368
print(params[name].type()) # changed to torch.FloatTensor
Inplace operation:
print(id(params[name])) # 140247785908560
params[name] += 1
print(id(params[name])) # 140247785908560
print(params[name].type()) # still torch.LongTensor
params[name] += 0.1 # trying to change the original copy's type to float in-place -> you get an error
Finally, some remarks:
In-place operations save some memory, but can be problematic when computing derivatives because of an immediate loss of history. Hence, their use is discouraged. Source
You should be cautious when you decide to use in-place operations since they overwrite the original content.
If you use pandas, this is a bit similar to the inplace=True in pandas :).
This is a good resource to read more about in-place operations source, and also read this discussion source.
| https://stackoverflow.com/questions/71427796/ |
How can I save the path of an image (string) from my dataloader (PyTorch)? | I've created a dataloader for my object detection task.
However, I cannot place the image/path name into a tensor. Instead I have it indexed, where in the last portion of the dataloader class, I have this:
target = {}
target['boxes'] = boxes
target['labels'] = labels
target['image_id'] = torch.tensor([index])
target['area'] = area
target['iscrowd'] = iscrowd
target['image_name'] = torch.tensor(index)
return image, target
where atm image_id and image_name are the same thing.
When I print out the image_name from the dataloader, I of course get this:
for image, target in valid_data_loader:
print(target[0]['image_name'])
Output:
tensor(0)
tensor(1)
tensor(2)
tensor(3)
tensor(4)
tensor(5)
tensor(6)
tensor(7)
I'm aware that strings can't be saved into torch tensors, so is there any way I can refer back to the original image name rather than the index of the tensor? Or would I just have to use the number that comes out and refer back to the dataset class (not dataloader)?
I ultimately want to save the image name, and attributes such as bounding box info to a separate numpy dataframe.
 | Ok, so this is a bit ad-hoc and not exactly what I was thinking, but here is one method I have used to retrieve the paths/image names. I basically extract the id from the tensor returned by the dataloader with .item(). I then use the tensor_id to find the corresponding id in the original dataframe:
for image, target in valid_data_loader:
tensor_id = target[0]['image_name'].item()
print(valid_df.iloc[tensor_id]['image_id'])
I don't know if this is efficient though but it got what I wanted...
| https://stackoverflow.com/questions/71429238/ |
Is there a function to divide an n-dimensional tensor by an n-1 dimensional tensor | I am not sure how to word this question properly, so I will show some examples to describe the desired behaviour.
I'm looking to divide a tensor in this specific way.
Divide a vector by 1 scalar, for example [1, 2, 3, 4, 5] divided by 2 = [0.5, 1, 1.5, 2, 2.5]
Divide a matrix by 2 scalars, for example [[1, 2, 3], [2, 3, 4]] by [2, 4] = [[0.5, 1, 1.5], [0.5, 0.75, 1]]
Divide a 3 dimensional tensor by a 2 dimensional tensor, for example [[[1, 2, 3], [2, 3, 4]], [[4,5,6], [7,8,9]], [[9,8,6], [1,2,3]]] divided by [[1, 2], [3, 4], [5, 6]] = [[[1, 2, 3], [1, 1.5, 2]], [[4/3,5/3,2], [7/4,2,9/4]], [[9/5,8/5,6/5], [1/6,2/6,3/6]]]
Divide a N dimensional tensor by a N-1 dimensional tensor...
I'm looking for a pytorch way to do this.
| You can expand the dimensions of the N-1 dimensional tensor to make it broadcastable with the N-dimensional tensor.
tensor_a / tensor_b.unsqueeze(-1)
This generalizes, even when the denominator is a scalar. The -1 dimension means the last dimension. This follows from python indexing rules, in which sequence[-1] gives you the last element of the sequence.
a = torch.as_tensor([1, 2, 3, 4, 5])
b = torch.as_tensor(2)
a / b.unsqueeze(-1)
# tensor([0.5000, 1.0000, 1.5000, 2.0000, 2.5000])
a = torch.as_tensor([[1, 2, 3], [2, 3, 4]])
b = torch.as_tensor([2, 4])
a / b.unsqueeze(-1)
# tensor([[0.5000, 1.0000, 1.5000],
# [0.5000, 0.7500, 1.0000]])
a = torch.as_tensor([[[1, 2, 3], [2, 3, 4]], [[4,5,6], [7,8,9]], [[9,8,6], [1,2,3]]])
b = torch.as_tensor([[1, 2], [3, 4], [5, 6]])
a / b.unsqueeze(-1)
# tensor([[[1.0000, 2.0000, 3.0000],
# [1.0000, 1.5000, 2.0000]],
#
# [[1.3333, 1.6667, 2.0000],
# [1.7500, 2.0000, 2.2500]],
#
# [[1.8000, 1.6000, 1.2000],
# [0.1667, 0.3333, 0.5000]]])
| https://stackoverflow.com/questions/71429755/ |
creating a dictionary of large file names in test dataloader and assigning the prediction of all 512x512 patches in it as a list for its values | I am not sure why building a dictionary as follows is not creating the desired output. Instead of ending up with a dictionary with 88 large filenames, I ended up with a dictionary with only 2 large filenames.
Quick intro to my test set. I have large images and I have tiled them into 512x512 patches. Below you can see number of large images and 512x512 patches for each positive and negative label:
--test
---pos_label 14, 11051
---neg_label 74, 45230
sample_fnames_labels = dataloaders_dict['test'].dataset.samples
test_large_images = {}
test_loss = 0.0
test_acc = 0
with torch.no_grad():
test_running_loss = 0.0
test_running_corrects = 0
print(len(dataloaders_dict['test']))
for i, (inputs, labels) in enumerate(dataloaders_dict['test']):
patch_name = sample_fname.split('/')[-1]
large_image_name = patch_name.split('_')[0]
test_inputs = inputs.to(device)
test_labels = labels.to(device)
test_outputs = saved_model_ft(test_inputs)
_, test_preds = torch.max(test_outputs, 1)
max_bs = len(test_preds)
for j in range(max_bs):
sample_file_name = sample_fnames_labels[i+j][0]
patch_name = sample_file_name.split('/')[-1]
large_image_name = patch_name.split('_')[0]
if large_image_name not in test_large_images.keys():
test_large_images[large_image_name] = list()
test_large_images[large_image_name].append(test_preds[j].item())
else:
test_large_images[large_image_name].append(test_preds[j].item())
#test_running_loss += test_loss.item() * test_inputs.size(0)
test_running_corrects += torch.sum(test_preds == test_labels.data)
#test_loss = test_running_loss / len(dataloaders_dict['test'].dataset)
test_acc = test_running_corrects / len(dataloaders_dict['test'].dataset)
Here the test_large_images dictionary only has two large images as keys instead of 88 test large images. Thanks for having a look.
Essentially I want to collect all the labels of the 512x512 patches of each large image as a list into a dictionary, with the large_image_filename as the key, so I can do majority voting later on.
Here's the used dataloader from PyTorch and batch size is 512.
# Create training and validation datasets
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x]) for x in ['train', 'val', 'test']}
# Create training and validation dataloaders
print('batch size: ', batch_size)
dataloaders_dict = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=batch_size, shuffle=True, num_workers=4) for x in ['train', 'val', 'test']}
Ultimately, I am hoping to get something like:
{large_image_1: [0, 1, 1, 0], large_image_2: [1, 1, 1, 0, 0 , 0 , 0, 0, 0], large_image_3: [0, 0], ...}
Please note that my large images are of different sizes in terms of number of 512x512 patches.
I do actually see 87 unique large image filenames below. Not sure why in the dictionary only two of them gets updated:
fnames = set()
for i in range(len(sample_fnames_labels)):
fname = sample_fnames_labels[i][0].split('/')[-1][:23]
fnames.add(fname)
print(len(fnames))
87
 | Fixed the problem by setting the batch size to 1 in the test dataloader (and keeping it unshuffled, so the loop index i lines up with dataset.samples[i]):
# Create training and validation datasets
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x]) for x in ['test']}
# Create training and validation dataloaders
dataloaders_dict = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=1, shuffle=False, num_workers=4) for x in ['test']}  # shuffle=False so index i matches dataset.samples[i]
test_large_images = {}
test_loss = 0.0
test_acc = 0
with torch.no_grad():
test_running_loss = 0.0
test_running_corrects = 0
print(len(dataloaders_dict['test']))
for i, (inputs, labels) in enumerate(dataloaders_dict['test']):
print(i)
test_input = inputs.to(device)
test_label = labels.to(device)
test_output = saved_model_ft(test_input)
_, test_pred = torch.max(test_output, 1)
sample_fname, label = dataloaders_dict['test'].dataset.samples[i]
patch_name = sample_fname.split('/')[-1]
large_image_name = patch_name.split('_')[0]
if large_image_name not in test_large_images.keys():
test_large_images[large_image_name] = list()
test_large_images[large_image_name].append(test_pred.item())
else:
test_large_images[large_image_name].append(test_pred.item())
#print('test_large_images.keys(): ', test_large_images.keys())
        test_running_corrects += torch.sum(test_pred == test_label.data)
test_acc = test_running_corrects / len(dataloaders_dict['test'].dataset)
print(test_acc)
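With the per-patch labels collected, the majority vote mentioned in the question can then be a simple Counter over each list (my own sketch):
from collections import Counter

majority_votes = {
    name: Counter(preds).most_common(1)[0][0]
    for name, preds in test_large_images.items()
}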
| https://stackoverflow.com/questions/71431268/ |
The size of tensor a (3) must match the size of tensor b (32) at non-singleton dimension 1 | I was training a deep learning model, but I am encountering the error The size of tensor a (3) must match the size of tensor b (32) at non-singleton dimension 1. Also, while training, the accuracy is above 1, i.e. I am getting accuracy values like 1.04 and 1.06.
Below is the training code:
def train(model,criterion,optimizer,iters):
epoch = iters
train_loss = []
validaion_loss = []
train_acc = []
validation_acc = []
states = ['Train','Valid']
for epoch in range(epochs):
print("epoch : {}/{}".format(epoch+1,epochs))
for phase in states:
if phase == 'Train':
model.train()
dataload = train_data_loader
else:
model.eval()
dataload = valid_data_loader
run_loss,run_acc = 0,0
for data in dataload:
inputs,labels = data
#print("Inputs:",inputs.shape)
#print("Labels:",labels.shape)
inputs = inputs.to(device)
labels = labels.to(device)
labels = labels.byte()
optimizer.zero_grad()
with torch.set_grad_enabled(phase == 'Train'):
outputs = model(inputs)
print("Outputs",outputs.shape)
loss = criterion(outputs,labels)
predict = outputs>=0.5
#print("Predict",predict.shape)
if phase == 'Train':
loss.backward()
optimizer.step()
acc = torch.sum(predict == labels.data)
run_loss+=loss.item()
#print("Running_Loss",run_loss)
run_acc+=acc.item()/len(labels)
#print("Running_Acc",run_acc)
if phase == 'Train':
epoch_loss = run_loss/len(train_data_loader)
train_loss.append(epoch_loss)
epoch_acc = run_acc/len(train_data_loader)
train_acc.append(epoch_acc)
else:
epoch_loss = run_loss/len(valid_data_loader)
validaion_loss.append(epoch_loss)
epoch_acc = run_acc/len(valid_data_loader)
validation_acc.append(epoch_acc)
print("{}, loss :{},accuracy:{}".format(phase,epoch_loss,epoch_acc))
history = {'Train_loss':train_loss,'Train_accuracy':train_acc,
'Validation_loss':validaion_loss,'Validation_Accuracy':validation_acc}
return model,history
Below is the code of the base model
model = models.resnet34(pretrained = True)
for param in model.parameters():
param.requires_grad = False
model.fc = nn.Sequential(nn.Linear(model.fc.in_features,out_features = 1024),nn.ReLU(),
nn.Linear(in_features = 1024,out_features = 512),nn.ReLU(),
nn.Dropout(0.3),
nn.Linear(in_features=512,out_features=256),nn.ReLU(),
nn.Linear(in_features = 256,out_features = 3),nn.LogSoftmax(dim = 1))
device = torch.device("cuda" if cuda.is_available() else "cpu")
print(device)
model.to(device)
optimizer = optim.Adam(model.parameters(),lr = 0.00001)
criterion = nn.CrossEntropyLoss()
I tried predict == labels.unsqueeze(1); it didn't raise any error, but the accuracy goes over 1. May I know where I have to change the code?
 | Your output tensor has size [32, 3]: 32 is the mini-batch size and 3 is the number of outputs of your neural network, e.g.
[[0.25, 0.45, 0.3],
[0.45, 0.15, 0.4],
....
....
[0.2, 0.15, 0.65]]
When you compare whether output >= 0.5 the result is predict tensor but it is bool tensor with the same size of output [32,3] like that:
[[False, False, False],
[False, False, False],
....
....
[False, False, True]]
and labels is a 1D tensor with 32 values, e.g.
[0,2,...,0]
The cause of the problem is here: to compare between predicts and labels, you should select the index of the maximum probability from each row in predicts tensor like that:
predicts = predicts.argmax(1)
# output
[0,0,...,2]
But predicts is bool tensor, and you cannot apply argmax to bool tensor directly. Therefore, you got the error message as you indicated in the comment. To solve this problem, you need only to do the following:
predicts = (output >= 0.5)*1
Now you can compare between the two tensors predicts, labels, because both have the same size.
In brief, you should use:
predicts = (output >= 0.5)*1
acc = torch.sum(predicts.argmax(1) == Labels)
Your problem is solved, but the accuracy is logically not correct.
Therefore, be careful if you want to use sigmoid with a multi-class problem, because you use output >= 0.5 while you have 3 classes in the output. This is not correct: imagine you have [0.15, 0.45, 0.4] in the output. Your predict will be [0, 0, 0], and then argmax(1) will select the first index since all entries are equal, whereas the second index should be selected because it has the largest probability. The best way, if you have a multi-class problem, is to use softmax instead of sigmoid (>= 0.5).
By the way, if you come back to your model structure (last line), you will find that you already use nn.LogSoftmax. You just need to remove the line predicts = (outputs >= 0.5) and use directly:
#before the for loop
num_corrects = 0
# inside the for loop
num_corrects = num_corrects + torch.sum(outputs.argmax(1) == Labels)
#outside the loop
train_accuracy = (100. * num_corrects / len(train_loader.dataset))
| https://stackoverflow.com/questions/71432489/ |
How to feed the last 4 concatenated hidden layers of BERT to FC layers | I am trying to do a classification task; I take the last 4 hidden layers from BERT and concatenate them.
out = model(...)
out=torch.cat([out['hidden_states'][-i] for i in range(1,5)],dim=-1)
Now the shape is (12, 200, 768*4), i.e. (batch, max_length, concatenated hidden size), but a fully connected layer needs two dimensions. So one way is to average over the sequence dimension, e.g. torch.mean(out, dim=1) on the (12, 200, 768*4) tensor, and get the output as (12, 768*4).
But I am confused about what the original BERT approach is.
 | There is no "original" BERT approach for classification with concatenated hidden layers. You have several options to proceed; I will just comment on your approach and suggest an alternative in the following.
Preliminary:
import torch.nn as nn
from transformers import BertTokenizerFast, BertModel
t = BertTokenizerFast.from_pretrained("bert-base-cased")
m = BertModel.from_pretrained("bert-base-cased")
fc = nn.Linear(768, 5)
s = ["This is a random sentence", "This is another random sentence with more words"]
i = t(s, padding=True,return_tensors="pt")
with torch.no_grad():
o = m(**i, output_hidden_states=True)
print(i)
At first, you should look at your input:
#print(i)
{'input_ids':
tensor([[ 101, 1188, 1110, 170, 7091, 5650, 102, 0, 0, 0],
[ 101, 1188, 1110, 1330, 7091, 5650, 1114, 1167, 1734, 102]]),
'token_type_ids':
tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]),
'attention_mask':
tensor([[1, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])
}
What you should notice here, is that the shorter sentence gets padded. That is relevant because simply pooling the mean with torch.mean, will result in different sentence embeddings for the same sentence depending on the number of padding tokens. Of course, the model will learn to handle that to some extent after sufficient training, but you should, however, use a more sophisticated mean function that removes the padding tokens right away :
def mean_pooling(model_output, attention_mask):
input_mask_expanded = attention_mask.unsqueeze(-1).expand(model_output.size()).float()
return torch.sum(model_output * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
o_mean = [mean_pooling(o.hidden_states[-x],i.attention_mask) for x in range(1,5)]
#we want a tensor and not a list
o_mean = torch.stack(o_mean, dim=1)
#we want only one tensor per sequence
o_mean = torch.mean(o_mean,dim=1)
print(o_mean.shape)
with torch.no_grad():
print(fc(o_mean))
Output:
torch.Size([2, 768])
tensor([[ 0.0677, -0.0261, -0.3602, 0.4221, 0.2251],
[-0.0328, -0.0161, -0.5209, 0.5825, 0.2405]])
These operations are pretty expensive and people often use an approach called cls pooling as a cheaper alternative with comparable performance:
#We only use the cls token (i.e. first token of the sequence)
#id 101
o_cls = [o.hidden_states[-x][:, 0] for x in range(1,5)]
#we want a tensor and not a list
o_cls = torch.stack(o_cls, dim=1)
#we want only one tensor per sequence
o_cls = torch.mean(o_cls,dim=1)
print(o_cls.shape)
with torch.no_grad():
print(fc(o_cls))
Output:
torch.Size([2, 768])
tensor([[-0.3731, 0.0473, -0.4472, 0.3804, 0.4057],
[-0.3468, 0.0685, -0.5885, 0.4994, 0.4182]])
| https://stackoverflow.com/questions/71434804/ |
Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! on DataLore | I'm working with IntelliJ DataLore to train a basic VGG16 CNN, but when I try to do it using a GPU machine I get the following error:
Traceback (most recent call last):
at block 20, line 1
at /data/workspace_files/train/trainer/training.py, line 115, in train(self, max_epochs)
at /data/workspace_files/train/trainer/training.py, line 46, in train_epoch(self, train_loader)
at /data/workspace_files/train/trainer/training.py, line 94, in forward_to_loss(self, step_images, step_labels)
at /opt/python/envs/default/lib/python3.8/site-packages/torch/nn/modules/module.py, line 1102, in _call_impl(self, *input, **kwargs)
at /data/workspace_files/models/vgg.py, line 49, in forward(self, x)
at /opt/python/envs/default/lib/python3.8/site-packages/torch/nn/modules/module.py, line 1102, in _call_impl(self, *input, **kwargs)
at /opt/python/envs/default/lib/python3.8/site-packages/torch/nn/modules/container.py, line 141, in forward(self, input)
at /opt/python/envs/default/lib/python3.8/site-packages/torch/nn/modules/module.py, line 1102, in _call_impl(self, *input, **kwargs)
at /opt/python/envs/default/lib/python3.8/site-packages/torch/nn/modules/linear.py, line 103, in forward(self, input)
at /opt/python/envs/default/lib/python3.8/site-packages/torch/nn/functional.py, line 1848, in linear(input, weight, bias)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_addmm)
Here is my code so you guys can review it.
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
model = model.to(device)
In this fragment of code I use self.device because I pass the device as parameter to the class Train
for _, (data, target) in tqdm(enumerate(train_loader, 1)):
self.optimizer.zero_grad()
step_images, step_labels = data.to(
self.device), target.to(self.device)
step_output, loss = self.forward_to_loss(step_images, step_labels)
I haven't had this issue before so I don't know if there something missing on DataLore or my code is wrong.
Hope you can help me!
| can you try this
step_output, loss = self.forward_to_loss(step_images.to(self.device), step_labels.to(self.device))
| https://stackoverflow.com/questions/71443214/ |
How to swap values in PyTorch tensor without using in-place operation (conserve gradient) | I have a tensor called state of shape torch.Size([N, 2**n, 2**n]), and I want to apply the following operations:
state[[0,1]] = state[[1,0]]
state[0] = -1*state[0]
Both of these are in-place operations. Are there some out-of-place operations that I can substitute them with? These lines are inside a for-loop, so it would be a bit difficult to just create new variables.
| I managed to figure it out!
Replace:
state[[0,1]] = state[[1,0]] # in-place operation
with:
state = state[[1,0]] # out-of-place operation
And for the second line, we replace:
state[0] = -1*state[0] # in-place operation
with:
# out-of-place operations
temp = torch.ones(state.shape).type(state.type()).to(state.device)
temp[1] = -1*temp[1]
state = state*temp
This seems to be doing the job!
| https://stackoverflow.com/questions/71454131/ |
Couldn't convert PyTorch model to ONNX | I used this repo: https://github.com/Turoad/lanedet
to convert a PyTorch model that uses MobileNetV2 as backbone to ONNX, but I didn't succeed.
I got a RuntimeError that says:
RuntimeError: Exporting the operator eye to ONNX opset version 12 is
not supported. Please open a bug to request ONNX export support for
the missing operator.
It's really disappointing, given the good results this model gives and the quick performance it provides.
Is there any way I can fix this bug? I need to convert it to ONNX and then to a TF Lite model to use it in an Android app.
I will provide the pretrained model that I have used and the way I followed in converting it.
Thank you so much for helping!
my colab notebook:
https://colab.research.google.com/drive/18udIh8tNJvti7jKmR4jRaRO-oYDgRmvA?usp=sharing
the pretrained model that I use:
https://drive.google.com/file/d/1o3-BgLIQesurIyDCKGliqbo2inUA5cPw/view?usp=sharing
 | Use torch>=1.7.0 to convert the model, because ONNX export support for the eye operation was added in that version.
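After upgrading, a generic export call would look like this (the dummy input shape is a placeholder — adjust it to what the lanedet model expects, and model stands for your loaded network):
import torch

dummy = torch.randn(1, 3, 288, 800)  # hypothetical input shape
torch.onnx.export(model, dummy, "lanedet.onnx", opset_version=12)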
| https://stackoverflow.com/questions/71455648/ |
Turning a list of 2D tensors with different lengths into one 3D tensor | I have a list of 3 tensors with shapes: (8, 2), (8, 4), (8, 6)
And I want to turn this list into this shape: (8, 3, x)
How do I do this? I know I need to use some combination of torch.cat, torch.stack and torch.transpose, but I can't figure it out.
Thanks in advance!
| As you said, you need to use torch.cat, but also torch.reshape. Assume the following:
a = torch.rand(8,2)
b = torch.rand(8,4)
c = torch.rand(8,6)
And assume that it is indeed possible to reshape the tensors to a (8,3,-1) shape, where -1 stands for as long as it needs to be; then:
d = torch.cat((a,b,c), dim=1)
e = torch.reshape(d, (8,3,-1))
I'll explain. Because dimension 1 is different in a, b, c, the concatenation has to be along dimension 1, as seen in variable d. Then, you can reshape the tensor as seen in e, where the -1 stands for "as long as it needs to be".
| https://stackoverflow.com/questions/71456457/ |
Pytorch GPU Out of memory on example script | I tried running the example script from official huggingface transformers repository with installed Python 3.10.2, PyTorch 1.11.0 and CUDA 11.3 for Sber GPT-3 Large.
Without any file modifications I ran this script with arguments:
--output_dir out --model_name_or_path sberbank-ai/rugpt3large_based_on_gpt2 --train_file dataset.txt --do_train --num_train_epochs 15 --overwrite_output_dir
and got
RuntimeError: CUDA out of memory. Tried to allocate 512.00 MiB
also tried --block_size 32 and --per_device_train_batch_size 1 but unsuccessfully.
My GPU is RTX 2060 6GB. Maybe it's a real lack of video memory? Can it be solved without buying a new GPU?
 | The GPT-3 Models have an extremely large number of parameters and are therefore very memory-heavy. Just to get an idea, if I understand Sber AI's documentation right, the Large model was pre-trained on 128/16 V100 GPUs (which have 32GB each) for multiple days. Model fine-tuning and inference are obviously going to be much easier on memory, but even that will require some serious hardware, at least for the larger models.
You can try to use the Medium and Small model and see if that works for you. Also you can always try to run it in a cloud service like Google Colab, they also have a notebook that demonstrates this. Make sure to activate GPU usage in notebook settings of Google Colab. In the free version you get some decent GPU, if you are more serious about this you can get the pro version for better hardware in their cloud. Probably a lot cheaper than buying a GPU more powerful than an RTX 2060 with the current prices. Of course there are many cloud hardware services where you can run a large model training/fine-tuning, not only Google.
| https://stackoverflow.com/questions/71456511/ |
PyTorch NN not training | I have a bespoke NN model which works and wanted to move it to the PyTorch framework. However, the network is not training likely due to some misconfiguration. Please advise if you see something that is odd/wrong or could be a contributing reason.
import torch
from torch import nn, optim
import torch.nn.functional as F
X_train_t = torch.tensor(X_train).float()
X_test_t = torch.tensor(X_test).float()
y_train_t = torch.tensor(y_train).long().reshape(-1, 1)
y_test_t = torch.tensor(y_test).long().reshape(-1, 1)
class Classifier(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(22, 10)
self.fc2 = nn.Linear(10, 1)
def forward(self, x):
# make sure input tensor is flattened
x = x.view(x.shape[0], -1)
x = F.relu(self.fc1(x))
x = F.log_softmax(self.fc2(x), dim=1)
return x
model = Classifier()
criterion = nn.BCELoss()
optimizer = optim.SGD(model.parameters(), lr=0.003)
epochs = 2000
steps = 0
train_losses, test_losses = [], []
for e in range(epochs):
# training loss
optimizer.zero_grad()
log_ps = model(X_train_t)
loss = criterion(log_ps, y_train_t.type(torch.float32))
loss.backward()
optimizer.step()
train_loss = loss.item()
# test loss
# Turn off gradients for validation, saves memory and computations
with torch.no_grad():
log_ps = model(X_test_t)
test_loss = criterion(log_ps, y_test_t.to(torch.float32))
ps = torch.exp(log_ps)
train_losses.append(train_loss/len(X_train_t))
test_losses.append(test_loss/len(X_test_t))
if (e % 100 == 0):
print("Epoch: {}/{}.. ".format(e, epochs),
"Training Loss: {:.3f}.. ".format(train_loss/len(X_train_t)),
"Test Loss: {:.3f}.. ".format(test_loss/len(X_test_t)))
Training is not happening:
Epoch: 0/2000.. Training Loss: 0.014.. Test Loss: 0.082..
Epoch: 100/2000.. Training Loss: 0.014.. Test Loss: 0.082..
...
 | The source of your problem is the fact that you apply the softmax operation on the output of self.fc2. The output of self.fc2 has a size of 1 and therefore the output of the softmax will be 1 regardless of the input. Read more on the softmax activation function in the pytorch package here. I suspect that you wanted to use the Sigmoid function to transform the output of the last linear layer to the interval [0,1] and then apply a log function of some sort.
Because the softmax results in an output of 1 regardless of the input, the model did not train well. I do not have access to your data so I cannot simulate it exactly, but from the information I have, replacing the softmax activation with the sigmoid should solve this.
A better and more numerically stable approach is to use BCEWithLogitsLoss instead of the criterion in criterion = nn.BCELoss() and remove the activation function at the end, since this criterion applies the sigmoid together with the BCE loss for a more numerically stable computation.
To summarize, my advice is to change criterion = nn.BCELoss() to criterion = nn.BCEWithLogitsLoss() and change the forward function as follows:
def forward(self, x):
    # make sure input tensor is flattened
    x = x.view(x.shape[0], -1)
    x = F.relu(self.fc1(x))
    x = self.fc2(x)
    return x
| https://stackoverflow.com/questions/71457035/ |
Matplotlib histogram of PyTorch Tensor | I have a tensor of size 10, containing only the values 0 and 1.
I want to plot an histogram of the tensor above, simply using matplotlib.pyplot.hist. This is my code:
import torch
import matplotlib.pyplot as plt
t = torch.tensor([0., 1., 1., 1., 0., 1., 1., 0., 0., 0.])
print(t)
plt.hist(t, bins=2)
plt.show()
And the output:
Why are there so many values in the histogram? Where did the rest of the values come from? How can I plot a correct histogram for my tensor?
 | The plt.hist(t, bins=2) function is not meant to work with tensors. For this to work properly, you can try using t.numpy() or t.tolist() instead. As far as I could educate myself, the way to compute a histogram with pytorch is through the torch.histc() function, and to plot the histogram you use the plt.bar() function as follows:
import torch
import matplotlib.pyplot as plt
t = torch.tensor([0., 0., 1., 1., 0., 1., 1., 0., 0., 0.])
hist = torch.histc(t, bins = 2, min = 0, max = 1)
bins = 2
x = range(bins)
plt.bar(x, hist, align='center')
plt.xlabel('Bins')
Some sources for plotting a histogram can be seen here and here. I could not find the root cause for this, and if someone could educate me that would be great, but as far as I am aware, this is the way to plot a tensor.
I changed the tensor to have 4 '1.0' and 6 '0.0' to be able to see the difference
| https://stackoverflow.com/questions/71458671/ |
Problem with data cast on the GPU in PyTorch | I'm trying to build an image classifier, but I'm having a problem with the data cast on the GPU.
def train(train_loader, net, epoch):
# Training mode
net.train()
start = time.time()
epoch_loss = []
pred_list, label_list = [], []
for batch in train_loader:
#Batch cast on the GPU
input, label = batch
input.to(args['device'])
label.to(args['device'])
#Forward
ypred = net(input)
loss = criterion(ypred, label)
epoch_loss.append(loss.cpu().data)
_, pred = torch.max(ypred, axis=1)
pred_list.append(pred.cpu().numpy())
label_list.append(label.cpu().numpy())
#Backward
optimizer.zero_grad()
loss.backward()
optimizer.step()
epoch_loss = np.asarray(epoch_loss)
pred_list = np.asarray(pred_list).ravel()
label_list = np.asarray(label_list).ravel()
acc = accuracy_score(pred_list, label_list)
end = time.time()
print('#################### Train ####################')
print('Epoch %d, Loss: %.4f +/- %.4f, Acc: %.2f, Time: %.2f' % (epoch, epoch_loss.mean(),
epoch_loss.std(), acc*100, end-start))
return epoch_loss.mean()
for epoch in range(args['epoch_num']):
train(train_loader, net, epoch)
break #Testing
Model already is in cuda, but i get error that says
Input type is torch.FloatTensor and not torch.cuda.FloatTensor
Whats the problem with input.to(args['device'])?
 | UPDATE: According to the OP, an additional data.to(device) before the train loop caused this issue.
you are probably getting a string like 0 or cuda from args['device']; you should build a device object first:
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # pass your args['device'] here
then use device to move the model to GPU:
model.to(device)
then call the model with:
for batch, (data, label) in enumerate(train_loader):
    # Batch cast on the GPU -- .to() on tensors is not in-place, so reassign
    data = data.to(device=device)
    label = label.to(device=device)
| https://stackoverflow.com/questions/71460983/ |
Setting constant learning rates in Pytorch | I am optimizing LSTM networks with PyTorch using the Adam optimizer. I have the feeling that my learning rate is decaying too fast, but I am not even 100% sure if Adam does that, since I can't find good documentation. If Adam decays the learning rate by default, is there a way to turn this off and set a constant learning rate?
| "I can't find good documentation" - you could read the original paper, for example. Also, the documentation is here: https://pytorch.org/docs/stable/generated/torch.optim.Adam.html.
If by "learning rate" you mean the lr parameter of torch.optim.Adam, then it remains constant - Adam itself doesn't modify it, in contrast to learning-rate schedulers. However, Adam applies extra scaling to the gradient, so the learning rate is applied to this transformation of the gradient, not the gradient itself. This can't be turned off because it is the essence of the algorithm. If you'd like to apply the learning rate directly to the gradient, use stochastic gradient descent.
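A toy check to convince yourself (my addition, names arbitrary):
import torch

model = torch.nn.Linear(4, 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
model(torch.randn(2, 4)).sum().backward()
opt.step()
print(opt.param_groups[0]['lr'])  # still 0.001 -- only an explicit scheduler would change it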
| https://stackoverflow.com/questions/71461240/ |
pytorch BCEWithLogitsLoss calculating pos_weight | I have a neural network as below for binary prediction. My classes are heavily imbalanced and class 1 occurs only 2% of times. Showing last few layers only
self.batch_norm2 = nn.BatchNorm1d(num_filters)
self.fc2 = nn.Linear(np.sum(num_filters), fc2_neurons)
self.batch_norm3 = nn.BatchNorm1d(fc2_neurons)
self.fc3 = nn.Linear(fc2_neurons, 1)
My loss is as below. Is this a correct way to calculate pos_weight parameter? I looked into official documentation at this link and it shows that pos_weight needs to have one value for each class for multiclass classification. Not sure if for the binary class it is a difference scenario. I tried to input 2 values and I was getting an error
My question: for binary problem, would pos_weight be a single value unlike multiclass classification where it needs to a list/array with length equal to number of classes?
BCE_With_LogitsLoss=nn.BCEWithLogitsLoss(pos_weight=class_wts[0]/class_wts[1])
My y variable is a single variable that has 0 or 1 to represent the actual class and the neural network outputs a single value
--------------------------------------------------Update 1
based upon the answer by Shai I have below questions:
BCEWithLogitsLoss - if it is a multiclass problem then how to use pos_weigh parameter?
Is there any example of using focal loss in pytorch? I found some links but most of them were old - dating 2 or 3 or more years
For training I am oversampling my class 1. Is focal loss still appropiate?
| The documentation of pos_weight is indeed a bit unclear. For BCEWithLogitsLoss pos_weight should be a torch.tensor of size=1:
BCE_With_LogitsLoss=nn.BCEWithLogitsLoss(pos_weight=torch.tensor([class_wts[0]/class_wts[1]]))
However, in your case, where pos class occurs only 2% of the times, I think setting pos_weight will not be enough.
Please consider using Focal loss:
Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, Piotr Dollár Focal Loss for Dense Object Detection (ICCV 2017).
Apart from describing Focal loss, this paper provides a very good explanation as to why CE loss performs so poorly in the case of imbalance. I strongly recommend reading this paper.
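For reference, a minimal binary focal loss sketch on raw logits (my own rendering of the paper's formula, not an official PyTorch API; targets must be float):
import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    # standard BCE term equals -log(p_t)
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction='none')
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)            # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()     # FL = alpha_t * (1 - p_t)^gamma * BCE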
Other alternatives are listed here.
| https://stackoverflow.com/questions/71462326/ |
How to use PyTorch's nn.MultiheadAttention | I want to use PyTorch's nn.MultiheadAttention but it doesn't work.
I just want to use the functionality of pytorch for the manual calculated example of attention
I always got an error when trying to run this example.
import torch.nn as nn
embed_dim = 4
num_heads = 1
x = [
[1, 0, 1, 0], # Input 1
[0, 2, 0, 2], # Input 2
[1, 1, 1, 1] # Input 3
]
x = torch.tensor(x, dtype=torch.float32)
w_key = [
[0, 0, 1],
[1, 1, 0],
[0, 1, 0],
[1, 1, 0]
]
w_query = [
[1, 0, 1],
[1, 0, 0],
[0, 0, 1],
[0, 1, 1]
]
w_value = [
[0, 2, 0],
[0, 3, 0],
[1, 0, 3],
[1, 1, 0]
]
w_key = torch.tensor(w_key, dtype=torch.float32)
w_query = torch.tensor(w_query, dtype=torch.float32)
w_value = torch.tensor(w_value, dtype=torch.float32)
keys = x @ w_key
querys = x @ w_query
values = x @ w_value
multihead_attn = nn.MultiheadAttention(embed_dim, num_heads)
attn_output, attn_output_weights = multihead_attn(querys, keys, values)
| Try this.
First, your x is a (3x4) matrix. So you need a weight matrix of (4x4) instead.
Seems nn.MultiheadAttention only supports batch mode, although the doc says it supports unbatched input. So let's just put your one data point in batch mode via .unsqueeze(0).
embed_dim = 4
num_heads = 1
x = [
[1, 0, 1, 0], # Seq 1
[0, 2, 0, 2], # Seq 2
[1, 1, 1, 1] # Seq 3
]
x = torch.tensor(x, dtype=torch.float32)
w_key = [
[0, 0, 1, 1],
[1, 1, 0, 1],
[0, 1, 0, 1],
[1, 1, 0, 1]
]
w_query = [
[1, 0, 1, 1],
[1, 0, 0, 1],
[0, 0, 1, 1],
[0, 1, 1, 1]
]
w_value = [
[0, 2, 0, 1],
[0, 3, 0, 1],
[1, 0, 3, 1],
[1, 1, 0, 1]
]
w_key = torch.tensor(w_key, dtype=torch.float32)
w_query = torch.tensor(w_query, dtype=torch.float32)
w_value = torch.tensor(w_value, dtype=torch.float32)
keys = (x @ w_key).unsqueeze(0) # to batch mode
querys = (x @ w_query).unsqueeze(0)
values = (x @ w_value).unsqueeze(0)
multihead_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
attn_output, attn_output_weights = multihead_attn(querys, keys, values)
| https://stackoverflow.com/questions/71464582/ |
Can't backward pass two losses in Classification Transformer Model | For my model I'm using a roberta transformer model and the Trainer from the Huggingface transformer library.
I calculate two losses:
lloss is a Cross Entropy Loss and dloss calculates the loss inbetween hierarchy layers.
The total loss is the sum of lloss and dloss. (Based on this)
When calling total_loss.backwards() however, I get the error:
RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed
Any idea why that happens? Can I force it to only call backwards once? Here is the loss calculation part:
dloss = calculate_dloss(prediction, labels, 3)
lloss = calculate_lloss(predeiction, labels, 3)
total_loss = lloss + dloss
total_loss.backward()
def calculate_lloss(predictions, true_labels, total_level):
'''Calculates the layer loss.
'''
loss_fct = nn.CrossEntropyLoss()
lloss = 0
for l in range(total_level):
lloss += loss_fct(predictions[l], true_labels[l])
return self.alpha * lloss
def calculate_dloss(predictions, true_labels, total_level):
'''Calculate the dependence loss.
'''
dloss = 0
for l in range(1, total_level):
current_lvl_pred = torch.argmax(nn.Softmax(dim=1)(predictions[l]), dim=1)
prev_lvl_pred = torch.argmax(nn.Softmax(dim=1)(predictions[l-1]), dim=1)
D_l = self.check_hierarchy(current_lvl_pred, prev_lvl_pred, l) #just a boolean tensor
l_prev = torch.where(prev_lvl_pred == true_labels[l-1], torch.FloatTensor([0]).to(self.device), torch.FloatTensor([1]).to(self.device))
l_curr = torch.where(current_lvl_pred == true_labels[l], torch.FloatTensor([0]).to(self.device), torch.FloatTensor([1]).to(self.device))
dloss += torch.sum(torch.pow(self.p_loss, D_l*l_prev)*torch.pow(self.p_loss, D_l*l_curr) - 1)
return self.beta * dloss
| There is nothing wrong with having a loss that is the sum of two individual losses, here is a small proof of principle adapted from the docs:
import torch
import numpy
from sklearn.datasets import make_blobs
class Feedforward(torch.nn.Module):
def __init__(self, input_size, hidden_size):
super(Feedforward, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.fc1 = torch.nn.Linear(self.input_size, self.hidden_size)
self.relu = torch.nn.ReLU()
self.fc2 = torch.nn.Linear(self.hidden_size, 1)
self.sigmoid = torch.nn.Sigmoid()
def forward(self, x):
hidden = self.fc1(x)
relu = self.relu(hidden)
output = self.fc2(relu)
output = self.sigmoid(output)
return output
def blob_label(y, label, loc): # assign labels
target = numpy.copy(y)
for l in loc:
target[y == l] = label
return target
x_train, y_train = make_blobs(n_samples=40, n_features=2, cluster_std=1.5, shuffle=True)
x_train = torch.FloatTensor(x_train)
y_train = torch.FloatTensor(blob_label(y_train, 0, [0]))
y_train = torch.FloatTensor(blob_label(y_train, 1, [1,2,3]))
x_test, y_test = make_blobs(n_samples=10, n_features=2, cluster_std=1.5, shuffle=True)
x_test = torch.FloatTensor(x_test)
y_test = torch.FloatTensor(blob_label(y_test, 0, [0]))
y_test = torch.FloatTensor(blob_label(y_test, 1, [1,2,3]))
model = Feedforward(2, 10)
criterion = torch.nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr = 0.01)
model.eval()
y_pred = model(x_test)
before_train = criterion(y_pred.squeeze(), y_test)
print('Test loss before training' , before_train.item())
model.train()
epoch = 20
for epoch in range(epoch):
optimizer.zero_grad() # Forward pass
y_pred = model(x_train) # Compute Loss
lossCE= criterion(y_pred.squeeze(), y_train)
lossSQD = (y_pred.squeeze()-y_train).pow(2).mean()
loss=lossCE+lossSQD
print('Epoch {}: train loss: {}'.format(epoch, loss.item())) # Backward pass
loss.backward()
optimizer.step()
There must be a second place where you directly or indirectly call backward on some variable that then traverses your graph. It is a bit too much to ask for the complete code here; only you can check this, or at least reduce it to a minimal example (while doing so, you might already find the issue). Apart from that, I would start checking:
Does it already occur in the first iteration of training? If not: are you reusing any calculation results for the second iteration without a detach?
When you do backward on your losses individually lloss.backward() followed by dloss.backward() (this has the same effect as adding them together first as gradients are accumulated): what happens? This will let you track down for which of the two losses the error occurs.
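(If you ever do need a legitimate second backward pass through the same graph, you can keep the intermediate buffers alive explicitly, at a memory cost, with total_loss.backward(retain_graph=True). That is a workaround for intentional reuse, though, not a fix for an accidental double call.)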
| https://stackoverflow.com/questions/71465239/ |
Can I train my pretrained model with a totally different architecture? | I have trained a pretrained ResNet18 model with my custom dataset on Pytorch and wondered whether I could transfer my model file to train another one with a different architecture, e.g. ResNet50. I know I have to save my model accordingly (explained well on another post here) but this was a question that I have never thought before.
I was planning to use more advanced models like VisionTransformers (ViT) but I couldn't figure out whether I had to start with a pretrained ViT already or I could just take my previous model file and use it as the pretrained model to train a ViT.
Example Scenario: ResNet18 --> ResNet50 --> Inception v3 --> ViT
My best guess it that it's not possible due to number of weights, neurons and layer structures but I would love to hear that if I miss a crucial point here. Thanks!
Between models that only differ in number of layers (ResNet-18 and ResNet-50), it has been done to initialize some layers of the larger model from the weights of the smaller model's layers. Conversely, you can truncate a larger model by taking a subset of regularly spaced layers and initializing a smaller model from them. In both cases, you need to retrain everything at the end if you hope to achieve semi-decent performance.
The whole point of using architectures that vastly differ (vision transformers vs CNNs) is to learn different features from the inputs and unlock new levels of semantic understanding. Recent models like BeiT also use new self-supervised training schemes that have nothing to do with the classic ImageNet pretraining. Using trained weights from another model would go against the point.
Having said that, if you want to use a ViT, why not start from the pretrained weights available on HuggingFace and fine-tune them on the data you used to train your ResNet50?
| https://stackoverflow.com/questions/71470169/ |
Pytorch: Weighting in BCEWithLogitsLoss, but with 'weight' instead of 'pos_weight' | I'm looking how to do class weighting using BCEWithLogitsLoss.
https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html
The example on how to use pos_weight seems clear to me. If there are 3x more negative samples than positive samples, then you can set pos_weight=3
Does the weight parameter do the same thing?
Say that I set it weight=torch.tensor([1, 3]). Is that the same thing as pos_weight=3
Also, is weight normalized? Is weight=torch.tensor([1, 3]) the same as weight=torch.tensor([3, 9]), or are they different in how they affect the magnitude of the loss?
| They are different things. pos_weight is size n_classes. weight is size batch_size. In the formula in the page you linked, weight refers to the w_n variable, whereas pos_weight refers to the p_c variable.
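A minimal sketch to make the shapes concrete (the values are arbitrary):
import torch
import torch.nn as nn

logits = torch.randn(4, 3)              # batch_size=4, n_classes=3
targets = torch.empty(4, 3).random_(2)  # binary targets

# pos_weight: one weight per class, shape (n_classes,)
crit_pos = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([1.0, 3.0, 1.0]))

# weight: one weight per batch element; shaped (batch_size, 1) so it
# broadcasts over the per-element losses
crit_w = nn.BCEWithLogitsLoss(weight=torch.tensor([[1.0], [3.0], [1.0], [1.0]]))

print(crit_pos(logits, targets).item(), crit_w(logits, targets).item())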
| https://stackoverflow.com/questions/71472148/ |
Image augmentation on deep learning training data | I have a question about mean and standard deviation in image augmentation.
Are the two parameters recommended to be filled in?
If so, how do I find those numbers? Do I have to iterate through the data (and each image channel) before training to compute them?
import albumentations as A
train_transform = A.Compose(
[
A.Resize(height=IMAGE_HEIGHT, width=IMAGE_WIDTH),
A.ColorJitter(brightness=0.3, hue=0.3, p=0.3),
A.Rotate(limit=5, p=1.0),
# A.HorizontalFlip(p=0.3),
# A.VerticalFlip(p=0.2),
A.Normalize(
mean=[0.0, 0.0, 0.0],# <-----------this parameter
std=[1.0, 1.0, 1.0],# <-----------this parameter
max_pixel_value=255.0,
),
ToTensorV2(),
],
)
Yes, it is strongly recommended to normalize your images in most cases; obviously you will face some situations that do not require normalization. The reason is to keep the values in a certain range. The output of the network, even if the network is 'big', is strongly influenced by the input data range. If you keep your input range out of control, your predictions will drastically change from one to another. Thus, the gradient would be out of control too and might make your training inefficient. I invite you to read this and that answer to get more details about the 'why' behind normalization and a deeper understanding of the behaviours.
It is quite common to normalize images with the ImageNet mean & standard deviation: mean = [0.485, 0.456, 0.406], std = [0.229, 0.224, 0.225]. Of course, if your dataset is realistic enough for your production context, you could also compute and use its own mean and std instead of ImageNet's.
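For example, plugging the ImageNet statistics into the transform from the question (a minimal sketch, only the Normalize step shown):
A.Normalize(
    mean=[0.485, 0.456, 0.406],
    std=[0.229, 0.224, 0.225],
    max_pixel_value=255.0,
),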
Finally, keep those values in mind: once your model is trained, you will still need to normalize any new image the same way to achieve good accuracy on future inferences.
| https://stackoverflow.com/questions/71472904/ |
How am I incorrectly using SubsetRandomSampler? | I have a custom defined dataset:
rcvdataset = rcvLSTMDataSet('foo.csv', 'foolabels.csv')
I also define the following:
batch_size = 50
validation_split = .2
shuffle_rcvdataset = True
random_seed= 42
```
rcvdataset_size = len(rcvdataset)
indices = list(range(rcvdataset_size))
split = int(np.floor(validation_split * rcvdataset_size))
if shuffle_rcvdataset :
np.random.seed(random_seed)
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
test_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(rcvdataset, batch_size=batch_size,
sampler=train_sampler)
test_loader = torch.utils.data.DataLoader(rcvdataset, batch_size=batch_size,
sampler=test_sampler)
```
using this training call:
```
def train(dataloader, model, loss_fn, optimizer):
size = len(dataloader.dataset)
model.train()
for batch, (X, y) in enumerate(dataloader):
X, y = X.to(device), y.to(device)
# Compute prediction error
pred = model(X)
loss = loss_fn(pred, y)
# Backpropagation
optimizer.zero_grad()
loss.backward()
optimizer.step()
if batch % 100 == 0:
loss, current = loss.item(), batch * len(X)
print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")
```
But when i try to run it, i get:
Epoch 1
-------------------------------
Traceback (most recent call last):
File "lstmTrainer.py", line 94, in <module>
train(train_sampler, model, loss_fn, optimizer)
File "lstmTrainer.py", line 58, in train
size = len(dataloader.dataset)
AttributeError: 'SubsetRandomSampler' object has no attribute 'dataset'
If i instead load the dataset indirectly with:
train(train_loader, model, loss_fn, optimizer)
it tells me:
TypeError: default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found <class 'pandas.core.series.Series'>
I'm not clear on what the first error is, at all. Is the second error trying to tell me that somewhere in my dataset, something isn't a tensor?
Thank you.
As requested, here is rcvDataSet.py:
from __future__ import print_function, division
import os
import torch
import pandas as pd
import numpy as np
from torch.utils.data import Dataset, DataLoader
class rcvLSTMDataSet(Dataset):
"""rcv dataset."""
TIMESTEPS = 10
def __init__(self, csv_data_file, annotations_file):
"""
Args:
csv_data_file (string): Path to the csv file with the training data
annotations_file (string): Path to the file with the annotations
"""
self.csv_data_file = csv_data_file
self.annotations_file = annotations_file
self.labels = pd.read_csv(annotations_file)
self.data = pd.read_csv(csv_data_file)
def __len__(self):
return len(self.labels)
def __getitem__(self, idx):
"""
pytorch expects whatever data is returned is in the form of a tensor. Included, it expects the label for the data.
Together, they make a tuple.
"""
# convert every ten indexes and label into one observation
Observation = []
counter = 0
start_pos = self.TIMESTEPS *idx
avg_1 = 0
avg_2 = 0
avg_3 = 0
while counter < self.TIMESTEPS:
Observation.append(self.data.iloc[idx + counter])
avg_1 += self.labels.iloc[idx + counter][2]
avg_2 += self.labels.iloc[idx + counter][1]
avg_3 += self.labels.iloc[idx + counter][0]
counter += 1
avg_1 = avg_1 / self.TIMESTEPS
avg_2 = avg_2 / self.TIMESTEPS
avg_3 = avg_3 / self.TIMESTEPS
current_labels = [avg_1, avg_2, avg_3]
print(current_labels)
return Observation, current_labels
def main():
loader = rcvLSTMDataSet('foo1.csv','foo2.csv')
j = 0
while j < len(loader.data % loader.TIMESTEPS):
print(loader.__getitem__(j))
j += 1
if "__main__" == __name__:
main()
Cause: If you look at the error message, you will find that you call the train function like this:
train(train_sampler, model, loss_fn, optimizer)
This is not correct; you should call train() with train_loader, not with train_sampler.
Solution: you should correct it to:
train(train_loader, model, loss_fn, optimizer)
Error message:
Epoch 1
-------------------------------
Traceback (most recent call last):
File "lstmTrainer.py", line 94, in <module>
train(train_sampler, model, loss_fn, optimizer) <------ look here
File "lstmTrainer.py", line 58, in train
size = len(dataloader.dataset)
AttributeError: 'SubsetRandomSampler' object has no attribute 'dataset'
Second Error message:
If you look at your Dataset class rcvLSTMDataSet, you will find that the Observation list appends items of type pandas.core.series.Series, not plain scalar numbers, because you read whole rows from your csv file. You should use .iloc[....].values instead of .iloc[....]. By doing this, you ensure that your list contains numeric (or char) types that can be converted to a tensor smoothly, without errors.
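Concretely, the fix in __getitem__ is a one-line change:
Observation.append(self.data.iloc[idx + counter].values)  # numpy array, not a pandas Series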
Final remark:
You can read about DataLoader and Samplers here; I summarized some points:
Samplers are used to specify the sequence of indices/keys used in data
loading.
Data loader combines a dataset and a sampler, and provides an iterable
over the given dataset.
PyTorch provides two data primitives: torch.utils.data.DataLoader and
torch.utils.data.Dataset that allow you to use pre-loaded datasets as
well as your own data. Dataset stores the samples and their
corresponding labels, and DataLoader wraps an iterable around the
Dataset to enable easy access to the samples.
| https://stackoverflow.com/questions/71473013/ |
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first. Error occured using Pytorch | I've read questions about the same error, here on StackOverflow, but unfortunately they does not work.
I have a defined function:
def plot_loss(train_loss, validation_loss, title):
plt.grid(True)
plt.xlabel("subsequent epochs")
plt.ylabel('average loss')
plt.plot(range(1, len(train_loss)+1), train_loss, 'o-', label='training')
plt.plot(range(1, len(validation_loss)+1), validation_loss, 'o-', label='validation')
plt.legend()
plt.title(title)
plt.show()
and the problem is that in line
plt.plot(range(1, len(validation_loss)+1), validation_loss, 'o-', label='validation')
this error occurs. Then it flows through gca().plot(...) in pyplot.py and eventually to return self.numpy()in tensor.py
In testing phase I have the following function defined:
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data)
# calculate and sum up batch loss
test_loss += F.nll_loss(output, target, reduction='mean')
# get the index of class with the max log-probability
prediction = output.argmax(dim=1)
# item() returns value of the given tensor
correct += prediction.eq(target).sum().item()
test_loss /= len(test_loader)
return test_loss
I've tried to change the line
prediction = output.argmax(dim=1)
as it was described in another questions about the same error but unfortunately it did not help.
I've tried to run this code on Google Colab and also on my local machine with GPU (cuda is available) but unfortunately the same error occurs.
EDIT I've managed to find the solution in this link. It seems that it is connected with moving data between CUDA and CPU: invoking .cpu() on the returned loss solved it (see the answer below).
| I've managed to find the solution in this link. It seems that it is connected with moving data between CUDA and CPU. I've invoked .cpu() and it's solved:
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data)
# calculate and sum up batch loss
test_loss += F.nll_loss(output, target, reduction='mean')
# get the index of class with the max log-probability
prediction = output.argmax(dim=1)
# item() returns value of the given tensor
correct += prediction.eq(target).sum().item()
test_loss /= len(test_loader)
return test_loss.cpu()
| https://stackoverflow.com/questions/71473842/ |
Build confusion matrix for instance segmantation (mask r-cnn from detectron2) | I've trained a mask r-cnn on corn images (I cannot show examples because they are confidential), but they are basically pictures of corn kernels scattered over a flat surface.
There are different kinds of corn kernels I want to be able to segment and classify. I understand the AP metrics are the best way of measuring the performance of an instance segmentation algorithm and I know a confusion matrix for this kind of algorithm doesn't usually make sense.
But for this specific case, where I have 4 classes of very similar objects, I would like to be able to set a fixed AP value, like AP50/AP75, and build a confusion matrix for that.
Would it be possible? How would I do it?
I used detectron2 library to train and get predictions. Here is the code I use to load my trained model from disk, generate predictions in the validation set, and visualize the results:
import detectron2
from detectron2.utils.logger import setup_logger
setup_logger()
import numpy as np
import matplotlib.pyplot as plt
import os, json, cv2, random, gc
from detectron2 import model_zoo
from detectron2.data.datasets import register_coco_instances
from detectron2.checkpoint import DetectionCheckpointer, Checkpointer
from detectron2.data import MetadataCatalog, DatasetCatalog, build_detection_test_loader
from detectron2.engine import DefaultTrainer, DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer, ColorMode
from detectron2.modeling import build_model
from detectron2.evaluation import COCOEvaluator, inference_on_dataset
train_annotations_path = "./data/cvat-corn-train-coco-1.0/annotations/instances_default.json"
train_images_path = "./data/cvat-corn-train-coco-1.0/images"
validation_annotations_path = "./data/cvat-corn-validation-coco-1.0/annotations/instances_default.json"
validation_images_path = "./data/cvat-corn-validation-coco-1.0/images"
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = ("train-corn",)
cfg.DATASETS.TEST = ("validation-corn",)
cfg.DATALOADER.NUM_WORKERS = 2
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml") # Let training initialize from model zoo
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.00025
cfg.SOLVER.MAX_ITER = 10000
cfg.SOLVER.STEPS = []
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 128
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 4
cfg.OUTPUT_DIR = "./output"
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.7
register_coco_instances(
"train-corn",
{},
train_annotations_path,
train_images_path
)
register_coco_instances(
"validation-corn",
{},
validation_annotations_path,
validation_images_path
)
metadata_train = MetadataCatalog.get("train-corn")
dataset_dicts = DatasetCatalog.get("train-corn")
cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.7
predictor = DefaultPredictor(cfg)
predicted_images_path = os.path.abspath("./predicted/")
dataset_dicts_validation = DatasetCatalog.get("validation-corn")
for d in dataset_dicts_validation:
im = cv2.imread(d["file_name"])
outputs = predictor(im)
v = Visualizer(im[:, :, ::-1],
metadata=metadata_train,
scale=0.5,
instance_mode=ColorMode.IMAGE_BW
)
out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
fig = plt.figure(frameon=False, dpi=1)
fig.set_size_inches(1024,1024)
ax = plt.Axes(fig, [0., 0., 1., 1.])
ax.set_axis_off()
fig.add_axes(ax)
ax.imshow(cv2.cvtColor(out.get_image()[:, :, ::-1], cv2.COLOR_BGR2RGB), aspect='auto')
fig.savefig(f"{predicted_images_path}/{d['file_name'].split('/')[-1]}")
That is what my output for a given image looks like:
It is a dictionary with an Instances object as its only value, the Instances object has four lists: pred_boxes, scores, pred_classes and pred_masks. And can be visualized using the detectron2 visualizer, but I can't show the visualization for confidentiality reasons.
Those are the metrics I have for the model right now:
And for each class:
And I noticed visually that some of the kernels are being confused for other classes, specially between classes ardido and fermentado, that is why I want to somehow be able to build a confusion matrix.
I expect the confusion matrix would look something like this:
EDIT:
I found this repository:
https://github.com/kaanakan/object_detection_confusion_matrix
And tried to use it:
from confusion_matrix import ConfusionMatrix
cm = ConfusionMatrix(4, CONF_THRESHOLD=0.3, IOU_THRESHOLD=0.3)
for d in dataset_dicts_validation:
img = cv2.imread(d["file_name"])
outputs = predictor(img)
labels = list()
detections = list()
for ann in d["annotations"]:
labels.append([ann["category_id"]] + ann["bbox"])
for coord, conf, cls in zip(
outputs["instances"].get("pred_boxes").tensor.cpu().numpy(),
outputs["instances"].get("scores").cpu().numpy(),
outputs["instances"].get("pred_classes").cpu().numpy()
):
detections.append(list(coord) + [conf] + [cls])
cm.process_batch(np.array(detections), np.array(labels))
But the matrix I got is clearly wrong, and I'm having a hard time trying to fix it.
| I was able to do it, I built the confusion matrix function from scratch:
import pandas as pd
import torch
from detectron2.structures import Boxes, pairwise_iou
def coco_bbox_to_coordinates(bbox):
out = bbox.copy().astype(float)
out[:, 2] = bbox[:, 0] + bbox[:, 2]
out[:, 3] = bbox[:, 1] + bbox[:, 3]
return out
def conf_matrix_calc(labels, detections, n_classes, conf_thresh, iou_thresh):
confusion_matrix = np.zeros([n_classes + 1, n_classes + 1])
l_classes = np.array(labels)[:, 0].astype(int)
l_bboxs = coco_bbox_to_coordinates((np.array(labels)[:, 1:]))
d_confs = np.array(detections)[:, 4]
d_bboxs = (np.array(detections)[:, :4])
d_classes = np.array(detections)[:, -1].astype(int)
detections = detections[np.where(d_confs > conf_thresh)]
labels_detected = np.zeros(len(labels))
detections_matched = np.zeros(len(detections))
for l_idx, (l_class, l_bbox) in enumerate(zip(l_classes, l_bboxs)):
for d_idx, (d_bbox, d_class) in enumerate(zip(d_bboxs, d_classes)):
iou = pairwise_iou(Boxes(torch.from_numpy(np.array([l_bbox]))), Boxes(torch.from_numpy(np.array([d_bbox]))))
if iou >= iou_thresh:
confusion_matrix[l_class, d_class] += 1
labels_detected[l_idx] = 1
detections_matched[d_idx] = 1
for i in np.where(labels_detected == 0)[0]:
confusion_matrix[l_classes[i], -1] += 1
for i in np.where(detections_matched == 0)[0]:
confusion_matrix[-1, d_classes[i]] += 1
return confusion_matrix
n_classes = 4
confusion_matrix = np.zeros([n_classes + 1, n_classes + 1])
for d in dataset_dicts_validation:
img = cv2.imread(d["file_name"])
outputs = predictor(img)
labels = list()
detections = list()
for coord, conf, cls, ann in zip(
outputs["instances"].get("pred_boxes").tensor.cpu().numpy(),
outputs["instances"].get("scores").cpu().numpy(),
outputs["instances"].get("pred_classes").cpu().numpy(),
d["annotations"]
):
labels.append([ann["category_id"]] + ann["bbox"])
detections.append(list(coord) + [conf] + [cls])
confusion_matrix += conf_matrix_calc(np.array(labels), np.array(detections), n_classes, conf_thresh=0.5, iou_thresh=0.5)
matrix_indexes = metadata_train.get("thing_classes") + ["null"]
pd.DataFrame(confusion_matrix, columns=matrix_indexes, index=matrix_indexes)
I built conf_matrix_calc, which computes the confusion matrix for each image, then I executed it for every image. It took me a while to make it work, because there was a hidden problem. For some reason the labels are saved in a different format than the detections: instead of [x1, y1, x2, y2], they are saved as [x1, y1, x2-x1, y2-y1] (i.e. [x, y, width, height], which is what coco_bbox_to_coordinates converts back). Searching online, I didn't find anywhere in detectron's or coco's documentation where that is described, but I found one format that was saved as [(x1+x2)/2, (y1+y2)/2, x2-x1, y2-y1]; anyway, that wasn't my case. I only found that out because I opened the images and checked the pixel coordinates of the boxes in the labels and the predictions and noticed something was wrong.
Anyway, now it works, that is my result:
| https://stackoverflow.com/questions/71475164/ |
PyTorch normalization in onnx model | I am doing image classification in PyTorch, in which I used this transform:
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
and completed the training. After, I converted the .pth model file to .onnx file
Now, at inference time, how should I apply this transform to a numpy array, since the ONNX runtime handles input as numpy arrays?
| You have a couple options.
Since normalize is pretty trivial to write yourself you could just do
import numpy as np
mean = np.array([0.485, 0.456, 0.406]).reshape(-1,1,1)
std = np.array([0.229, 0.224, 0.225]).reshape(-1,1,1)
x_normalized = (x - mean) / std
which doesn't require the pytorch or torchvision libraries at all.
If you are still using your pytorch dataset you could use the following transform
transforms.Compose([
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
torch.Tensor.numpy # or equivalently transforms.Lambda(lambda x: x.numpy())
])
which will just apply the normalization to the tensor then convert it to a numpy array.
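For completeness, a minimal sketch of feeding the normalized array to the exported model with onnxruntime (the file name model.onnx and the single-input assumption are placeholders for your own setup):
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name
# x_normalized shaped (3, H, W) after normalization; add a batch dimension
outputs = session.run(None, {input_name: x_normalized[None].astype(np.float32)})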
| https://stackoverflow.com/questions/71477766/ |
OpenCV Window Freezing when trying to combine with Neural Network Image Classification | I am using a NN to detect 4 types of objects (chassis, front-spoiler, hubcap, wheel) in the live feed of my webcam. When one is detected, I want to display an image with information about it (chassis.png, front-spoiler.png, hubcap.png, wheel.png).
When I run my NN and hold one of the items in front of the webcam, the OpenCV window freezes and doesn't display anything. What is the reason for that?
def displayImg(path):
img = cv2.imread(path)
cv2.namedWindow("window", cv2.WND_PROP_FULLSCREEN)
cv2.setWindowProperty("window",cv2.WND_PROP_FULLSCREEN,cv2.WINDOW_FULLSCREEN)
cv2.imshow("window", img)
# ----------------LIVE DETECTIONS ---------------
imagePath = "picture.jpg"
frontSpoilerImageOpen = False
chassisImageOpen = False
hubcapImageOpen = False
wheelImageOpen = False
model = torch.hub.load('ultralytics/yolov5', 'custom', path='yolov5/runs/train/exp5/weights/last.pt', force_reload=True)
cap = cv2.VideoCapture(0)
while cap.isOpened():
ret, frame = cap.read()
results = model(frame)
try:
detectedItem = results.pandas().xyxy[0].iloc[0, 6]
if detectedItem == "front-spoiler" and not frontSpoilerImageOpen:
frontSpoilerImageOpen = False
chassisImageOpen = False
hubcapImageOpen = False
wheelImageOpen = False
displayImg(os.path.join("imagesToDisplay", "front-spoiler.png"))
frontSpoilerImageOpen = True
elif detectedItem == "chassis" and not chassisImageOpen:
frontSpoilerImageOpen = False
chassisImageOpen = False
hubcapImageOpen = False
wheelImageOpen = False
displayImg(os.path.join("imagesToDisplay", "chassis.png"))
chassisImageOpen = True
elif detectedItem == "hubcap" and not hubcapImageOpen:
frontSpoilerImageOpen = False
chassisImageOpen = False
hubcapImageOpen = False
wheelImageOpen = False
displayImg(os.path.join("imagesToDisplay", "hubcap.png"))
hubcapImageOpen = True
elif detectedItem == "wheel" and not wheelImageOpen:
frontSpoilerImageOpen = False
chassisImageOpen = False
hubcapImageOpen = False
wheelImageOpen = False
displayImg(os.path.join("imagesToDisplay", "wheel.png"))
wheelImageOpen = True
except Exception as e:
print(e)
| Your code contains no waitKey at all.
OpenCV GUI (imshow) requires waitKey to work.
This is described in all OpenCV documentation and tutorials.
one basic tutorial: https://docs.opencv.org/4.x/db/deb/tutorial_display_image.html
documentation for imshow mentions the requirement: https://docs.opencv.org/4.x/d7/dfc/group__highgui.html#ga453d42fe4cb60e5723281a89973ee563
waitKey isn't about delays or breaks. It runs the event loop that all GUI processing requires.
You can use waitKey(1) for the shortest non-zero delay of one millisecond (a little more in practice), or you can use pollKey(), which will not wait even that millisecond.
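Applied to the displayImg function from the question, a minimal sketch:
def displayImg(path):
    img = cv2.imread(path)
    cv2.namedWindow("window", cv2.WND_PROP_FULLSCREEN)
    cv2.setWindowProperty("window", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)
    cv2.imshow("window", img)
    cv2.waitKey(1)  # runs the GUI event loop; cv2.pollKey() also works without waiting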
| https://stackoverflow.com/questions/71478801/ |
Slice pytorch tensor using coordinates tensor without loop | I have a tensor T with dimension (d1 x d2 x d3 x ... dk) and a tensor I with dimension (p x q). Here, I contains coordinates of T but q < k, each column of I corresponds to a dimension of T. I have another tensor V of dimension p x di x ...dj where sum([di, ..., dj]) = k - q. (di, .., dj) corresponds to missing dimensions from I. I need to perform T[I] = V
A specific example of such a problem using numpy arrays is posted here[1].
The solution[2] uses fancy indexing[3], which relies on numpy.index_exp. In PyTorch such an option is not available. Is there any alternative way to mimic this in PyTorch without using loops or casting tensors to numpy arrays?
Below is a demo:
import torch
t = torch.randn((32, 16, 60, 64)) # tensor
i0 = torch.randint(0, 32, (10, 1)).to(dtype=torch.long) # indexes for dim=0
i2 = torch.randint(0, 60, (10, 1)).to(dtype=torch.long) # indexes for dim=2
i = torch.cat((i0, i2), 1) # indexes
v = torch.randn((10, 16, 64)) # to be assigned
# t[i0, :, i2, :] = v ?? Obviously this does not work
[1] Slice numpy array using list of coordinates
[2] https://stackoverflow.com/a/42538465/6422069
[3] https://numpy.org/doc/stable/reference/generated/numpy.s_.html
| After some discussion in the comments, we arrived at the following solution:
import torch
t = torch.randn((32, 16, 60, 64)) # tensor
# indices
i0 = torch.randint(0, 32, (10,)).to(dtype=torch.long) # indexes for dim=0
i2 = torch.randint(0, 60, (10,)).to(dtype=torch.long) # indexes for dim=2
v = torch.randn((10, 16, 64)) # to be assigned
t[(i0, slice(None), i2, slice(None))] = v
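Here slice(None) plays the role of : for each dimension that is not being indexed, which mimics numpy.index_exp / numpy.s_ without leaving PyTorch.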
| https://stackoverflow.com/questions/71479364/ |
How to install .whl files using dockerfile | I am trying to install .whl files using Dockerfile. Here is my dockerfile.
FROM us-docker.pkg.dev/vertex-ai/training/tf-gpu.2-8:latest
USER root
CMD python --version
COPY *.txt /
ADD ./train/ /train/
RUN pip install ./whl_files/*.whl
# RUN pip install whl_files/torchaudio-0.11.0+cu113-cp39-cp39-linux_x86_64.whl
# RUN pip install whl_files/torchvision-0.12.0+cu113-cp39-cp39-linux_x86_64.whl
# RUN pip3 install --trusted-host files.pythonhosted.org --trusted-host pypi.org --trusted-host pypi.python.org torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
# RUN pip3 install --trusted-host files.pythonhosted.org --trusted-host pypi.org --trusted-host pypi.python.org torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
RUN pip3 install --ignore-installed -r requirements.txt --trusted-host pypi.org --trusted-host files.pythonhosted.org
COPY *.py /root/
RUN chmod +x /root/docker_fetch.py
RUN python /root/docker_fetch.py
ENTRYPOINT ["/bin/bash"]
I am getting this error:
=> ERROR [4/8] RUN pip install ./whl_files/*.whl 0.9s
------
> [4/8] RUN pip install ./whl_files/*.whl:
#8 0.711 WARNING: Requirement './whl_files/*.whl' looks like a filename, but the file does not exist
#8 0.711 ERROR: *.whl is not a valid wheel filename.
#8 0.856 Could not fetch URL https://pypi.org/simple/pip/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/pip/ (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1091)'))) - skipping
------
executor failed running [/bin/bash -c pip install ./whl_files/*.whl]: exit code: 1
There is no firewall issue.
I tried to install using this also but that also didn't work.
RUN pip3 install --trusted-host files.pythonhosted.org --trusted-host pypi.org --trusted-host pypi.python.org torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
| To expand on my comment:
It looks like you haven't copied any files to whl_files in your container; you only ADD and COPY some txt files and the train/ folder.
The whl files need to be there in the container (just like any other file that's to be used) to be installed (unless you use Buildkit's cache volumes instead, but it makes things more complicated).
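A minimal sketch of the missing step, assuming the wheels sit in a whl_files/ directory next to the Dockerfile:
COPY whl_files/ /whl_files/
RUN pip install /whl_files/*.whl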
| https://stackoverflow.com/questions/71483736/ |
AttributeError: 'T5Config' object has no attribute 'adapters' | How to solve this error? I've created a .pkl object of the T5-base model and tried to execute it, but suddenly I got this error message. I wondered a bit and tried to google it, but couldn't find any reason why I got this error!
| Well, I was thinking right!!!
I did 2 experiments:
Install only the transformers library:
when I load the model, each layer of the model is without an adapter attribute!
Install only the adapter-Transformers library:
when I load the model, each layer of the model has an adapter attribute!
Conclusion: install adapter-Transformers
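In other words (assuming a clean environment, since the two packages conflict):
pip uninstall -y transformers
pip install adapter-transformers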
| https://stackoverflow.com/questions/71486242/ |
Batched input shows 3d, but got 2d, 2d tensor | I have this training loop
def train(dataloader, model, loss_fn, optimizer):
size = len(dataloader.dataset)
model.train()
for batch, (X, y) in enumerate(dataloader):
X, y = torch.stack(X).to(device), torch.stack(y).to(device)
# Compute prediction error
pred = model(X)
loss = loss_fn(pred, y)
# Backpropagation
optimizer.zero_grad()
loss.backward()
optimizer.step()
if batch % 100 == 0:
loss, current = loss.item(), batch * len(X)
print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")
and this lstm:
import torch
import torch.nn as nn
import pandas as pd
import numpy as np
class BELT_LSTM(nn.Module):
def __init__(self, input_size, hidden_size, num_layers):
super (BELT_LSTM, self).__init__()
self.hidden_size = hidden_size
self.num_layers = num_layers
self.input_size = input_size
self.BELT_LSTM = nn.LSTM(input_size, hidden_size, num_layers)
def forward(self, x):
# receive an input, create a new hidden state, return output?
# reset the hidden state?
hidden = (torch.zeros(self.num_layers, self.hidden_size), torch.zeros(self.num_layers, self.hidden_size))
x, _ = self.BELT_LSTM(x, hidden)
#since our observation has several sequences, we only want the output after the last sequence of the observation'''
x = x[:, -1]
return x
and here's the dataset class:
from __future__ import print_function, division
import os
import torch
import pandas as pd
import numpy as np
import math
from torch.utils.data import Dataset, DataLoader
class rcvLSTMDataSet(Dataset):
"""rcv dataset."""
TIMESTEPS = 10
def __init__(self, csv_data_file, annotations_file):
"""
Args:
csv_data_file (string): Path to the csv file with the training data
annotations_file (string): Path to the file with the annotations
"""
self.csv_data_file = csv_data_file
self.annotations_file = annotations_file
self.labels = pd.read_csv(annotations_file)
self.data = pd.read_csv(csv_data_file)
def __len__(self):
return math.floor(len(self.labels) / 10)
def __getitem__(self, idx):
"""
pytorch expects whatever data is returned is in the form of a tensor. Included, it expects the label for the data.
Together, they make a tuple.
"""
# convert every ten indexes and label into one observation
Observation = []
counter = 0
start_pos = self.TIMESTEPS *idx
avg_avg_1 = 0
avg_avg_2 = 0
avg_avg_3 = 0
while counter < self.TIMESTEPS:
Observation.append(self.data.iloc[idx + counter].values)
avg_avg_1 += self.labels.iloc[idx + counter][2]
avg_avg_2 += self.labels.iloc[idx + counter][1]
avg_avg_3 += self.labels.iloc[idx + counter][0]
counter += 1
#average the avg_1, avg_2, avg_3 for TIMESTEPS length
avg_avg_1 = avg_avg_1 / self.TIMESTEPS
avg_avg_2 = avg_avg_2 / self.TIMESTEPS
avg_avg_3 = avg_avg_3 / self.TIMESTEPS
current_labels = [avg_avg_1, avg_avg_2, avg_avg_3]
print(current_labels)
return Observation, current_labels
def main():
loader = rcvDataSet('foo','foo2.csv')
j = 0
while j < len(loader.data % loader.TIMESTEPS):
print(loader.__getitem__(j))
j += 1
if "__main__" == __name__:
main()
When running this, I get:
File "module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "lstm.py", line 21, in forward
x, _ = self.BELT_LSTM(x, hidden)
File "module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "rnn.py", line 747, in forward
raise RuntimeError(msg)
RuntimeError: For batched 3-D input, hx and cx should also be 3-D but got (2-D, 2-D) tensors
But as far as I can tell, I've followed the nn.LSTM instructions for both setting up the layers and shaping the data properly. What am I doing wrong?
For reference, the incoming data is rows from a csv file, 12 columns wide, and I serve 10 rows per observation.
Thanks
| Your code:
hidden = (torch.zeros(self.num_layers, self.hidden_size), torch.zeros(self.num_layers, self.hidden_size))
x, _ = self.BELT_LSTM(x, hidden)
Here hx and cx are both 2-D tensors. The correct way should be:
h_0 = torch.randn(self.num_directions * self.num_layers, self.batch_size, self.hidden_size)
c_0 = torch.randn(self.num_directions * self.num_layers, self.batch_size, self.hidden_size)
x, _ = self.BELT_LSTM(x, (h_0, c_0))
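If you don't want to store batch_size on the module, a minimal sketch deriving the shapes inside forward (this assumes the nn.LSTM default batch_first=False, i.e. x is (seq_len, batch, input_size)):
def forward(self, x):
    num_directions = 1  # would be 2 with bidirectional=True
    batch_size = x.size(1)
    h_0 = torch.zeros(num_directions * self.num_layers, batch_size, self.hidden_size)
    c_0 = torch.zeros(num_directions * self.num_layers, batch_size, self.hidden_size)
    x, _ = self.BELT_LSTM(x, (h_0, c_0))
    return x[-1]  # output at the last time step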
| https://stackoverflow.com/questions/71486886/ |
Is there any indexing method in pytorch? | I've been studying PyTorch recently.
But this problem is so weird:
x=np.arange(24)
ft=torch.FloatTensor(x)
print(ft.view([@1])[@2])
Answer = tensor([[13., 16.], [19., 22.]])
Can there be indexing methods @1 and @2 that satisfy the Answer?
| by viewing ft as a tensor with 6 columns:
ft.view(-1, 6)
Out[]:
tensor([[ 0., 1., 2., 3., 4., 5.],
[ 6., 7., 8., 9., 10., 11.],
[12., 13., 14., 15., 16., 17.],
[18., 19., 20., 21., 22., 23.]])
you place the elements (13, 19), and (16, 22) on top of each other.
Now you only need to pick them up from the right rows/columns:
.view(-1, 6)[2:, (1, 4)]
Out[]:
tensor([[13., 16.],
[19., 22.]])
| https://stackoverflow.com/questions/71495423/ |
Custom Sampler correct use in Pytorch | I have a map-style dataset, which is used for instance segmentation tasks.
The dataset is very imbalanced, in the sense that some images have only 10 objects while others have up to 1200.
How can I limit the number of objects per batch?
A minimal reproducible example is:
import math
import torch
import random
import numpy as np
import pandas as pd
from torch.utils.data import Dataset
from torch.utils.data.sampler import BatchSampler
np.random.seed(0)
random.seed(0)
torch.manual_seed(0)
W = 700
H = 1000
def collate_fn(batch) -> tuple:
return tuple(zip(*batch))
class SyntheticDataset(Dataset):
def __init__(self, image_ids):
self.image_ids = torch.tensor(image_ids, dtype=torch.int64)
self.num_classes = 9
def __len__(self):
return len(self.image_ids)
def __getitem__(self, idx: int):
"""
returns single sample
"""
# print("idx: ", idx)
# deliberately left dangling
# id = self.image_ids[idx].item()
# image_id = self.image_ids[idx]
image_id = torch.as_tensor(idx)
image = torch.randint(0, 255, (H, W))
num_objects = random.randint(10, 1200)
image = torch.randint(0, 255, (3, H, W))
masks = torch.randint(0, 255, (num_objects, H, W))
target = {}
target["image_id"] = image_id
areas = torch.randint(100, 20000, (1, num_objects), dtype=torch.int64)
boxes = torch.randint(100, H * W, (num_objects, 4), dtype=torch.int64)
labels = torch.randint(1, self.num_classes, (1, num_objects), dtype=torch.int64)
iscrowd = torch.zeros(len(labels), dtype=torch.int64)
target["boxes"] = boxes
target["labels"] = labels
target["area"] = areas
target["iscrowd"] = iscrowd
target["masks"] = masks
return image, target, image_id
class BalancedObjectsSampler(BatchSampler):
"""Samples either batch_size images or batches num_objs_per_batch objects.
Args:
data_source (list): contains tuples of (img_id).
batch_size (int): batch size.
num_objs_per_batch (int): number of objects in a batch.
Return
yields the batch_ids/image_ids/image_indices
"""
def __init__(self, data_source, batch_size, num_objs_per_batch, drop_last=False):
self.data_source = data_source
self.sampler = data_source
self.batch_size = batch_size
self.drop_last = drop_last
self.num_objs_per_batch = num_objs_per_batch
self.batch_count = math.ceil(len(self.data_source) / self.batch_size)
def __iter__(self):
obj_count = 0
batch = []
batches = []
counter = 0
for i, (k, s) in enumerate(self.data_source.iteritems()):
if (
obj_count <= obj_count + s
and len(batch) <= self.batch_size - 1
and obj_count + s <= self.num_objs_per_batch
and i < len(self.data_source) - 1
):
# because of https://pytorch.org/docs/stable/data.html#data-loading-order-and-sampler
batch.append(i)
obj_count += s
else:
batches.append(batch)
yield batch
obj_count = 0
batch = []
counter += 1
obj_sums = {}
batch_size = 10
workers = 4
fake_image_ids = np.random.randint(1600000, 1700000, 100)
# assigning any in-range number objects count to each image
for i, k in enumerate(fake_image_ids):
obj_sums[k] = random.randint(10, 1200)
obj_counts = pd.Series(obj_sums)
train_dataset = SyntheticDataset(image_ids=fake_image_ids)
balanced_sampler = BalancedObjectsSampler(
data_source=obj_counts,
batch_size=batch_size,
num_objs_per_batch=1500,
drop_last=False,
)
data_loader_sampler = torch.utils.data.DataLoader(
train_dataset,
num_workers=workers,
collate_fn=collate_fn,
sampler=balanced_sampler,
)
data_loader_iter = torch.utils.data.DataLoader(
train_dataset,
batch_size=batch_size,
shuffle=False,
num_workers=workers,
collate_fn=collate_fn,
)
Iterating over the balanced_sampler
for i, bal_batch in enumerate(balanced_sampler):
print(f"batch_{i}: ", bal_batch)
yields
batch_0: [0]
batch_1: [2, 3]
batch_2: [5]
batch_3: [7]
batch_4: [9, 10]
batch_5: [12, 13, 14, 15]
batch_6: [17, 18]
batch_7: [20, 21, 22]
batch_8: [24, 25]
batch_9: [27]
batch_10: [29]
batch_11: [31]
batch_12: [33]
batch_13: [35, 36, 37]
batch_14: [39, 40]
batch_15: [42, 43]
batch_16: [45, 46]
batch_17: [48, 49, 50]
batch_18: [52, 53, 54]
batch_19: [56]
batch_20: [58, 59]
batch_21: [61, 62]
batch_22: [64]
batch_23: [66]
batch_24: [68]
batch_25: [70, 71]
batch_26: [73]
batch_27: [75, 76, 77]
batch_28: [79, 80]
batch_29: [82, 83, 84, 85, 86, 87]
batch_30: [89]
batch_31: [91]
batch_32: [93, 94]
batch_33: [96]
batch_34: [98]
The above displayed values are the images' indices, but could also be the batch index or even the images' ids.
By running
for i, batch in enumerate(data_loader_sampler):
print("__sample__: ", i, len(batch[0]))
One sees that the batch contains a single sample instead of the expected amount.
__sample__: 0 1
__sample__: 1 1
__sample__: 2 1
__sample__: 3 1
__sample__: 4 1
__sample__: 5 1
__sample__: 6 1
__sample__: 7 1
__sample__: 8 1
__sample__: 9 1
__sample__: 10 1
__sample__: 11 1
__sample__: 12 1
__sample__: 13 1
__sample__: 14 1
__sample__: 15 1
__sample__: 16 1
__sample__: 17 1
__sample__: 18 1
__sample__: 19 1
__sample__: 20 1
__sample__: 21 1
__sample__: 22 1
__sample__: 23 1
__sample__: 24 1
__sample__: 25 1
__sample__: 26 1
__sample__: 27 1
__sample__: 28 1
__sample__: 29 1
__sample__: 30 1
__sample__: 31 1
__sample__: 32 1
__sample__: 33 1
__sample__: 34 1
What I am really trying to prevent is the following behavior that arises from
for i, batch in enumerate(data_loader_iter):
print("__iter__: ", i, sum([k["masks"].shape[0] for k in batch[1]]))
which is
__iter__: 0 2510
__iter__: 1 2060
__iter__: 2 2203
__iter__: 3 2815
ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
Traceback (most recent call last):
File "/usr/lib/python3.8/multiprocessing/queues.py", line 239, in _feed
obj = _ForkingPickler.dumps(obj)
File "/usr/lib/python3.8/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "/blip/venv/lib/python3.8/site-packages/torch/multiprocessing/reductions.py", line 328, in reduce_storage
fd, size = storage._share_fd_()
RuntimeError: falseINTERNAL ASSERT FAILED at "../aten/src/ATen/MapAllocator.cpp":300, please report a bug to PyTorch. unable to write to file </torch_431207_56>
Traceback (most recent call last):
File "/blip/venv/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 990, in _try_get_data
data = self._data_queue.get(timeout=timeout)
File "/usr/lib/python3.8/multiprocessing/queues.py", line 107, in get
if not self._poll(timeout):
File "/usr/lib/python3.8/multiprocessing/connection.py", line 257, in poll
return self._poll(timeout)
File "/usr/lib/python3.8/multiprocessing/connection.py", line 424, in _poll
r = wait([self], timeout)
File "/usr/lib/python3.8/multiprocessing/connection.py", line 931, in wait
ready = selector.select(timeout)
File "/usr/lib/python3.8/selectors.py", line 415, in select
fd_event_list = self._selector.poll(timeout)
File "/blip/venv/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
_error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 431257) is killed by signal: Bus error. It is possible that dataloader's workers are out of shared memory. Please try to raise your shared memory limit.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "so.py", line 170, in <module>
for i, batch in enumerate(data_loader_iter):
File "/blip/venv/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
data = self._next_data()
File "/blip/venv/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1186, in _next_data
idx, data = self._get_data()
File "/blip/venv/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1152, in _get_data
success, data = self._try_get_data()
File "/blip/venv/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1003, in _try_get_data
raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
RuntimeError: DataLoader worker (pid(s) 431257) exited unexpectedly
which invariably happens when the number of objects per batch is greater than ~2500.
An immediate workaround would be to set the batch_size low; I just need a more optimal solution.
| If what you are trying to solve really is:
ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
You could try resizing the allocated shared memory with
# mount -o remount,size=<whatever_is_enough>G /dev/shm
However, as this is not always possible, one fix to your problem would be
class SyntheticDataset(Dataset):
def __init__(self, image_ids):
self.image_ids = torch.tensor(image_ids, dtype=torch.int64)
self.num_classes = 9
def __len__(self):
return len(self.image_ids)
def __getitem__(self, indices):
worker_info = torch.utils.data.get_worker_info()
batch = []
for i in indices:
sample = self.get_sample(i)
batch.append(sample)
gc.collect()
return batch
def get_sample(self, idx: int):
image_id = torch.as_tensor(idx)
image = torch.randint(0, 255, (H, W))
num_objects = idx
image = torch.randint(0, 255, (3, H, W))
masks = torch.randint(0, 255, (num_objects, H, W))
target = {}
target["image_id"] = image_id
areas = torch.randint(100, 20000, (1, num_objects), dtype=torch.int64)
boxes = torch.randint(100, H * W, (num_objects, 4), dtype=torch.int64)
labels = torch.randint(1, self.num_classes, (1, num_objects), dtype=torch.int64)
iscrowd = torch.zeros(len(labels), dtype=torch.int64)
target["boxes"] = boxes
target["labels"] = labels
target["area"] = areas
target["iscrowd"] = iscrowd
target["masks"] = masks
return image, target, image_id
and
class BalancedObjectsSampler(BatchSampler):
"""Samples either batch_size images or batches num_objs_per_batch objects.
Args:
data_source (list): contains tuples of (img_id).
batch_size (int): batch size.
num_objs_per_batch (int): number of objects in a batch.
Return
yields the batch_ids/image_ids/image_indices
"""
def __init__(self, data_source, batch_size, num_objs_per_batch, drop_last=False):
self.data_source = data_source
self.sampler = data_source
self.batch_size = batch_size
self.drop_last = drop_last
self.num_objs_per_batch = num_objs_per_batch
self.batch_count = math.ceil(len(self.data_source) / self.batch_size)
obj_count = 0
batch = []
batches = []
batches_sums = []
for i, (k, s) in enumerate(self.data_source.iteritems()):
if (
len(batch) < self.batch_size
and obj_count + s < self.num_objs_per_batch
and i < len(self.data_source) - 1
):
batch.append(s)
obj_count += s
else:
batches.append(len(batch))
batches_sums.append(obj_count)
obj_count = 0
batch = []
self.batches = batches
self.batch_count = len(batches)
def __iter__(self):
batch = []
img_counts_id = 0
for idx, (k, s) in enumerate(self.data_source.iteritems()):
if len(batch) < self.batches[img_counts_id] and idx < len(self.data_source):
batch.append(s)
elif len(batch) == self.batches[img_counts_id]:
gc.collect()
yield batch
batch = []
if img_counts_id < self.batch_count - 1:
img_counts_id += 1
else:
break
if len(batch) > 0 and not self.drop_last:
yield batch
def __len__(self) -> int:
if self.drop_last:
return len(self.data_source) // self.batch_size
else:
return (len(self.data_source) + self.batch_size - 1) // self.batch_size
As SyntheticDataset's __getitem__ was receiving a list of indices, the simplest solution is to just iterate over the indices and retrieve a list of samples. You may just have to collate the output differently in order to feed it to your model.
For the BalancedObjectsSampler, I calculated the size of each batch within __init__ and used it in __iter__ to assemble the batches.
NOTE: This may still fail if your num_workers > 0, since you are trying to pack up to 1500 objects into a batch and each worker typically loads one batch at a time. Hence, you have to re-assess num_objs_per_batch when considering multiprocessing.
| https://stackoverflow.com/questions/71500629/ |
How to seperate code into train, val and test functions for pytorch cnn? | I am training a cnn using pytorch and have created a training loop. As I am performing optimisation and experimenting with hyper-parameter tuning, I want to separate my training, validation and testing into different functions. I need to be able to record my accuracy and loss for each function in order to plot graphs. For this I want to create a function which returns the accuracy.
I am pretty new to coding and was wondering the best way to go about this. I feel like my code is a bit messy at the moment. I need to be able to feed in various hyper-parameters for experimentation in my training function. Could anyone offer any advice? Below is what I have so far:
def train_model(model, optimizer, data_loader, num_epochs, criterion=criterion):
total_epochs = notebook.tqdm(range(num_epochs))
for epoch in total_epochs:
model.train()
train_correct = 0.0
train_running_loss=0.0
train_total=0.0
for i, (img, label) in enumerate(data_loader['train']):
#uploading images and labels to GPU
img = img.to(device)
label = label.to(device)
#training model
outputs = model(img)
#computing losss
loss = criterion(outputs, label)
#propagating the loss backwards
optimizer.zero_grad()
loss.backward()
optimizer.step()
train_running_loss += loss.item()
_, predicted = outputs.max(1)
train_total += label.size(0)
train_correct += predicted.eq(label).sum().item()
train_loss=train_running_loss/len(data_loader['train'])
train_accu=100.*train_correct/train_total
print('Train Loss: %.3f | Train Accuracy: %.3f'%(train_loss,train_accu))
I have also experimented with making a function to record accuracy:
def accuracy(outputs, labels):
_, preds = torch.max(outputs, dim = 1)
return torch.tensor(torch.sum(preds == labels).item() / len(preds))
| First, note that:
Unless you have some specific motivation, validation (and testing) should be performed on a different dataset than the training set, so you should use a different DataLoader. The computation time will increase because of an additional for loop at every epoch.
Always call model.eval() before validation/testing.
That said, The signature of the validation function is pretty much similar to that of train_model
# criterion is passed if you want to register the validation loss too
def validate_model(model, eval_loader, criterion):
...
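For instance, a minimal sketch of its body (assuming the same criterion and device as in the training loop):
@torch.no_grad()
def validate_model(model, eval_loader, criterion):
    model.eval()
    running_loss, correct, total = 0.0, 0, 0
    for img, label in eval_loader:
        img, label = img.to(device), label.to(device)
        outputs = model(img)
        running_loss += criterion(outputs, label).item()
        _, predicted = outputs.max(1)
        total += label.size(0)
        correct += predicted.eq(label).sum().item()
    # return loss and accuracy so the caller can store them for plotting
    return running_loss / len(eval_loader), 100.0 * correct / total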
Then, in train_model, after each epoch, you can call the function validate_model and store the returned metrics in some data structure (list, tensor, etc.) that will be used later for plotting.
At the end of the training, you can then use the same validate_model function for testing.
Instead of coding the accuracy by yourself, you can use Accuracy from TorchMetrics
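A minimal sketch with recent TorchMetrics versions (num_classes is a placeholder for your own class count):
import torchmetrics
metric = torchmetrics.Accuracy(task="multiclass", num_classes=num_classes).to(device)
# inside the evaluation loop:
metric.update(outputs, label)
# after the loop:
epoch_acc = metric.compute()
metric.reset()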
Finally, if you feel the need to level up, you can use DL training frameworks like PyTorch Lightning or FastAI. Give also a look at some hyperparameter tuning library such as Ray Tune.
| https://stackoverflow.com/questions/71512214/ |
loss increasing significantly in training loop | My training loss is increasing quite dramatically and I'm not sure why. I was thinking it could be something to do with how I am calculating the loss; I think it might be because I am printing the running loss instead of the loss per batch. Below is my training loop:
def train_model(model, optimizer, train_loader, num_epochs, criterion=criterion):
total_epochs = notebook.tqdm(range(num_epochs))
model.train()
running_loss=0
correct=0
total=0
for epoch in total_epochs:
for i, (x_train, y_train) in enumerate(train_loader):
x_train = x_train.to(device)
y_train = y_train.to(device)
y_pred = model(x_train)
loss = criterion(y_pred, y_train)
optimizer.zero_grad()
loss.backward()
optimizer.step()
running_loss += loss.item()
_, predicted = y_pred.max(1)
train_loss=running_loss/len(train_loader)
total += y_train.size(0)
correct += predicted.eq(y_train).sum().item()
train_loss=running_loss/len(train_loader)
train_accu=100.*correct/total
print('Train Loss: %.3f | Train Accuracy: %.3f'%(train_loss,train_accu))
However when i call train_model():
train_md = train_model(cnn_net, optimizer, data_loaders['train'], 10)
It returns this:
Train Loss: 1.472 | Train Accuracy: 47.949
Train Loss: 2.655 | Train Accuracy: 53.324
Train Loss: 3.732 | Train Accuracy: 56.521
Train Loss: 4.750 | Train Accuracy: 58.565
Train Loss: 5.728 | Train Accuracy: 60.130
Train Loss: 6.673 | Train Accuracy: 61.364
Train Loss: 7.590 | Train Accuracy: 62.335
Train Loss: 8.484 | Train Accuracy: 63.190
Train Loss: 9.365 | Train Accuracy: 63.934
Train Loss: 10.225 | Train Accuracy: 64.571
| You keep accumulating the loss onto running_loss.
That's the reason why it's increasing every epoch!
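A minimal sketch of the fix: reset the running statistics at the start of each epoch instead of once before the epoch loop:
for epoch in total_epochs:
    running_loss = 0
    correct = 0
    total = 0
    for i, (x_train, y_train) in enumerate(train_loader):
        ...  # rest of the loop unchanged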
| https://stackoverflow.com/questions/71514447/ |
Error training ELMo - RuntimeError: The size of tensor a (5158) must match the size of tensor b (5000) at non-singleton dimension 1 | I am trying to train my own custom ELMo model on AllenNLP.
The following bug RuntimeError: The size of tensor a (5158) must match the size of tensor b (5000) at non-singleton dimension 1 arises when training the model. There are instances where the size of tensor a is stated to be other values (e.g. 5300). When I tested on a small subset of files, I was able to train the model successfully.
Based on my intuition, this is something that deals with the number of tokens in my model. More specifically specific files which has tokens more than 5000. However, there is no parameter within the AllenNLP package which allows me to tweak this to bypass this error.
Any advice on how I can overcome this issue? Would tweaking the PyTorch code to set it at a 5000 size work (If yes, how can I do that)? Any insights will be deeply appreciated.
FYI, I am currently using a customised DatasetReader for tokenisation purposes. I've generated my own vocab list before training the model (to save some time) which is used to train the ELMo model via AllenNLP.
Update: I found out that there is a variable in AllenNLP, max_len=5000, which is why the error is showing. See code here. I've tweaked the parameter to larger values and ended up with a CUDA out-of-memory error on many occasions instead, making me believe this should not be touched.
Environment: Python 3.6.9, Linux Ubuntu, allennlp=2.9.1, allennlp-models=2.9.0
Traceback:
Traceback (most recent call last):
File "/home/jiayi/.local/bin/allennlp", line 8, in <module>
sys.exit(run())
File "/home/jiayi/.local/lib/python3.6/site-packages/allennlp/__main__.py", line 34, in run
main(prog="allennlp")
File "/home/jiayi/.local/lib/python3.6/site-packages/allennlp/commands/__init__.py", line 121, in main
args.func(args)
File "/home/jiayi/.local/lib/python3.6/site-packages/allennlp/commands/train.py", line 120, in train_model_from_args
file_friendly_logging=args.file_friendly_logging,
File "/home/jiayi/.local/lib/python3.6/site-packages/allennlp/commands/train.py", line 179, in train_model_from_file
file_friendly_logging=file_friendly_logging,
File "/home/jiayi/.local/lib/python3.6/site-packages/allennlp/commands/train.py", line 246, in train_model
file_friendly_logging=file_friendly_logging,
File "/home/jiayi/.local/lib/python3.6/site-packages/allennlp/commands/train.py", line 470, in _train_worker
metrics = train_loop.run()
File "/home/jiayi/.local/lib/python3.6/site-packages/allennlp/commands/train.py", line 543, in run
return self.trainer.train()
File "/home/jiayi/.local/lib/python3.6/site-packages/allennlp/training/gradient_descent_trainer.py", line 720, in train
metrics, epoch = self._try_train()
File "/home/jiayi/.local/lib/python3.6/site-packages/allennlp/training/gradient_descent_trainer.py", line 741, in _try_train
train_metrics = self._train_epoch(epoch)
File "/home/jiayi/.local/lib/python3.6/site-packages/allennlp/training/gradient_descent_trainer.py", line 459, in _train_epoch
batch_outputs = self.batch_outputs(batch, for_training=True)
File "/home/jiayi/.local/lib/python3.6/site-packages/allennlp/training/gradient_descent_trainer.py", line 352, in batch_outputs
output_dict = self._pytorch_model(**batch)
File "/home/jiayi/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/jiayi/.local/lib/python3.6/site-packages/allennlp_models/lm/models/language_model.py", line 257, in forward
embeddings, mask
File "/home/jiayi/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/jiayi/.local/lib/python3.6/site-packages/allennlp_models/lm/modules/seq2seq_encoders/bidirectional_lm_transformer.py", line 282, in forward
token_embeddings = self._position(token_embeddings)
File "/home/jiayi/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/jiayi/.local/lib/python3.6/site-packages/allennlp_models/lm/modules/seq2seq_encoders/bidirectional_lm_transformer.py", line 68, in forward
return x + self.positional_encoding[:, : x.size(1)]
RuntimeError: The size of tensor a (5385) must match the size of tensor b (5000) at non-singleton dimension 1
AllenNLP training config file:
// For more info on config files generally, see https://guide.allennlp.org/using-config-files
local NUM_GRAD_ACC = 4;
local BATCH_SIZE = 1;
local BASE_LOADER = {
"max_instances_in_memory": 8,
"batch_sampler": {
"type": "bucket",
"batch_size": BATCH_SIZE,
"sorting_keys": ["source"]
}
};
{
"dataset_reader" : {
"type": "mimic_reader",
"token_indexers": {
"tokens": {
"type": "single_id"
},
"token_characters": {
"type": "elmo_characters"
}
},
"start_tokens": ["<S>"],
"end_tokens": ["</S>"],
},
"train_data_path": std.extVar("MIMIC3_NOTEEVENTS_DISCHARGE_PATH"),
// Note: We don't set a validation_data_path because the softmax is only
// sampled during training. Not sampling on GPUs results in a certain OOM
// given our large vocabulary. We'll need to evaluate against the test set
// (when we'll want a full softmax) with the CPU.
"vocabulary": {
// Use a prespecified vocabulary for efficiency.
"type": "from_files",
"directory": std.extVar("ELMO_VOCAB_PATH"),
// Plausible config for generating the vocabulary.
// "tokens_to_add": {
// "tokens": ["<S>", "</S>"],
// "token_characters": ["<>/S"]
// },
// "min_count": {"tokens": 3}
},
"model": {
"type": "language_model",
"bidirectional": true,
"num_samples": 8192,
# Sparse embeddings don't work with DistributedDataParallel.
"sparse_embeddings": false,
"text_field_embedder": {
"token_embedders": {
"tokens": {
"type": "empty"
},
"token_characters": {
"type": "character_encoding",
"embedding": {
"num_embeddings": 262,
// Same as the Transformer ELMo in Calypso. Matt reports that
// this matches the original LSTM ELMo as well.
"embedding_dim": 16
},
"encoder": {
"type": "cnn-highway",
"activation": "relu",
"embedding_dim": 16,
"filters": [
[1, 32],
[2, 32],
[3, 64],
[4, 128],
[5, 256],
[6, 512],
[7, 1024]],
"num_highway": 2,
"projection_dim": 512,
"projection_location": "after_highway",
"do_layer_norm": true
}
}
}
},
// Consider the following.
// remove_bos_eos: true,
// Applies to the contextualized embeddings.
"dropout": 0.1,
"contextualizer": {
"type": "bidirectional_language_model_transformer",
"input_dim": 512,
"hidden_dim": 4096,
"num_layers": 2,
"dropout": 0.1,
"input_dropout": 0.1
}
},
"data_loader": BASE_LOADER,
// "distributed": {
// "cuda_devices": [0, 1],
// },
"trainer": {
"num_epochs": 10,
"cuda_devices": [0, 1, 2, 3],
"optimizer": {
// The gradient accumulators in Adam for the running stdev and mean for
// words not used in the sampled softmax would be decayed to zero with the
// standard "adam" optimizer.
"type": "dense_sparse_adam"
},
// "grad_norm": 10.0,
"learning_rate_scheduler": {
"type": "noam",
// See https://github.com/allenai/calypso/blob/master/calypso/train.py#L401
"model_size": 512,
// See https://github.com/allenai/calypso/blob/master/bin/train_transformer_lm1b.py#L51.
// Adjusted based on our sample size relative to Calypso's.
"warmup_steps": 6000
},
"num_gradient_accumulation_steps": NUM_GRAD_ACC,
"use_amp": true
}
}
| By setting the max_tokens variable of the custom DatasetReader to below 5000, this error no longer occurs. This was also suggested by one of AllenNLP's contributors, to make sure the tokenizer truncates the input to 5000 tokens.
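A minimal sketch of the truncation (the constant and helper are my own; in practice this goes into the custom DatasetReader's tokenisation step):
MAX_TOKENS = 4998  # leave room for the <S> and </S> tokens added by the reader

def truncate(tokens):
    # keep inputs below the transformer's positional-encoding limit (max_len=5000)
    return tokens[:MAX_TOKENS]

print(len(truncate(list(range(6000)))))  # 4998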
Same question was posted on AllenNLP: https://github.com/allenai/allennlp/discussions/5601
| https://stackoverflow.com/questions/71514727/ |
Type Hinting torch.float for special form type hint | I have a function that can take a typing.Union of types including the type torch.float. But if I use the typing.Union type with torch.float as argument, I receive an error. Here is an example:
from typing import Union
import torch
def fct(my_float_or_tensor: Union[torch.float, torch.Tensor]):
pass
And I get the error
TypeError: Union[t0, t1, ...]: each t must be a type. Got torch.float32.
What am I doing wrong?
Interestingly, the same problem occurs with the special type typing.Tuple but not if I use torch.float directly when type hinting.
| There is a difference between "dtypes" and "types". torch.float is a dtype. For type hinting, use torch.FloatTensor (there are also others, e.g. DoubleTensor, HalfTensor).
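A short sketch following that advice:
from typing import Union
import torch

# torch.FloatTensor is a class, so it is a valid Union member,
# unlike the dtype object torch.float
def fct(my_float_or_tensor: Union[torch.FloatTensor, torch.Tensor]):
    pass

fct(torch.rand(2, 2))  # runs without the TypeError from the question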
| https://stackoverflow.com/questions/71515031/ |
Create iterator from a Data Frame in Python | I am working on an NLP project using Seq2Seq. I created a data frame from my dataset and then created a batch iterator using DataLoader; see the following code:
# creates lists containing each pair
original_word_pairs = [[w for w in l.split('\t')] for l in lines[:num_examples]]
data = pd.DataFrame(original_word_pairs, columns=["src", "trg"])
# convert the data to tensors and pass to the Dataloader
# to create a batch iterator
class MyData(Dataset):
def __init__(self, X, y):
self.data = X
self.target = y
# TODO: convert this into torch code if possible
self.length = [ np.sum(1 - np.equal(x, 0)) for x in X]
def __getitem__(self, index):
x = self.data[index]
y = self.target[index]
x_len = self.length[index]
return x,y,x_len
def __len__(self):
return len(self.data)
train_dataset = MyData(input_tensor_train, target_tensor_train)
val_dataset = MyData(input_tensor_val, target_tensor_val)
train_dataset = DataLoader(train_dataset, batch_size = BATCH_SIZE,
drop_last=True,
shuffle=True)
test_dataset= DataLoader(val_dataset, batch_size = BATCH_SIZE,
drop_last=True,
shuffle=True)
That is part of my code; the thing is, I want to use the iterators like this:
for i, batch in enumerate(iterator):
src = batch.src
trg = batch.trg
But I got an error "AttributeError: 'list' object has no attribute 'src'"
How can I use the iterator and access a specific column?
| You can redefine __getitem__ in your Dataset to return a dictionary:
def __getitem__(self, index):
x = self.data[index]
y = self.target[index]
x_len = self.length[index]
return {"src": x, "trg": y, "x_len": x_len}
The default collate_fn of DataLoader will take care of batching the dictionary values instead of single observations, but you need to convert x_len to a tensor inside __getitem__ to make it work (or you can pass a custom collate_fn).
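With that change, the loop from the question works once attribute access is replaced by dictionary access (train_dataset being the DataLoader from the question):
for i, batch in enumerate(train_dataset):
    src = batch["src"]
    trg = batch["trg"]
    x_len = batch["x_len"]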
| https://stackoverflow.com/questions/71515161/ |
Equivalent to torch.rfft() in newest PyTorch version | I want to estimate the Fourier transform of a given image of size BxCxWxH
In previous torch versions, the following did the job:
fft_im = torch.rfft(img, signal_ndim=2, onesided=False)
and the output was of size:
BxCxWxHx2
However, with the new version of rfft:
fft_im = torch.fft.rfft2(img, dim=2, norm=None)
I do not get the same results. Am I missing something?
| A few issues
The dim argument you provided has an invalid type; it should be a tuple of two numbers or omitted entirely. Really, PyTorch should raise an exception. I would argue that the fact this ran without an exception is a bug in PyTorch (I opened a ticket stating as much).
PyTorch now supports complex tensor types, so FFT functions return those instead of adding a new dimension for the real/imaginary parts. You can use torch.view_as_real to convert to the old representation. It is also worth pointing out that view_as_real doesn't copy data, since it returns a view, so it shouldn't slow things down in any noticeable way.
PyTorch no longer gives the option of disabling the one-sided calculation in RFFT, probably because disabling one-sided makes the result identical to torch.fft.fft2, which is in conflict with the 13th aphorism of PEP 20. The whole point of providing a special real-valued version of the FFT is that you need only compute half the values for each dimension, since the rest can be inferred via Hermitian symmetry.
So from all that you should be able to use
fft_im = torch.view_as_real(torch.fft.fft2(img))
Important If you're going to pass fft_im to other functions in torch.fft (like fft.ifft or fft.fftshift) then you'll need to convert back to the complex representation using torch.view_as_complex so those functions don't interpret the last dimension as a signal dimension.
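A small round-trip sketch of that point:
import torch

img = torch.rand(1, 3, 8, 8)
fft_im = torch.view_as_real(torch.fft.fft2(img))           # (B, C, W, H, 2), like the old layout
img_back = torch.fft.ifft2(torch.view_as_complex(fft_im))  # convert back before calling ifft2
print(torch.allclose(img, img_back.real, atol=1e-5))       # True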
| https://stackoverflow.com/questions/71515439/ |
Not able to switch off batch norm layers for faster-rcnn (PyTorch) | I'm trying to switch off batch norm layers in a faster-rcnn model for evaluation mode.
I'm doing a sanity check atm:
@torch.no_grad()
def evaluate_loss(model, data_loader, device):
val_loss = 0
model.train()
for images, targets in data_loader:
# check that all layers are in train mode
# for name, module in model.named_modules():
# if hasattr(module, 'training'):
# print('{} is training {}'.format(name, module.training))
# # set bn layers to eval
for module in model.modules():
if isinstance(module, torch.nn.BatchNorm2d):
module.eval()
# bn layers are now in eval
for name, module in model.named_modules():
if hasattr(module, 'training'):
print('{} is training {}'.format(name, module.training))
However, all the batch norm layers are still in training mode. When I replace it with for example Conv2d, I get the expected behaviour of False. Here is an example snippet of the output:
backbone.body.layer4.0.conv1 is training True
backbone.body.layer4.0.bn1 is training True
backbone.body.layer4.0.conv2 is training True
backbone.body.layer4.0.bn2 is training True
backbone.body.layer4.0.conv3 is training True
backbone.body.layer4.0.bn3 is training True
Why is this happening? What can I do to switch off these layers? I have tried this with all variations of batch norm as provided by torch.nn.
| After further investigation, and after printing out all modules of the faster-rcnn, it turns out the pretrained model uses FrozenBatchNorm2d instead of BatchNorm2d.
Furthermore, unlike what's currently stated by the documentation, you must call torchvision.ops.misc.FrozenBatchNorm2d instead of torchvision.ops.FrozenBatchNorm2d.
Additionally, as the layers are already frozen, there is no need to "switch off" these layers, so model.eval() is probably not required.
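For reference, a sketch of the adjusted sanity check (note the ops.misc import path; model is the faster-rcnn from the question):
from torchvision.ops.misc import FrozenBatchNorm2d

for name, module in model.named_modules():
    if isinstance(module, FrozenBatchNorm2d):
        # these layers use fixed statistics and parameters, so no eval() call is needed
        print(f'{name} is a FrozenBatchNorm2d')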
| https://stackoverflow.com/questions/71521735/ |
How to remove a prediction head from pytorch model based on the output tensor? | I am working on a ViT (Vision Transformer) related project, and some low-level definitions are deep inside the timm library, which I cannot change. The low-level library code includes a linear classification prediction head, which is not part of my network.
Everything was fine until I switched to a DDP parallel implementation. PyTorch complained about some parameters which didn't contribute to the loss, and it instructed me to use "find_unused_parameters=True". In fact, this is a common scenario, and it worked again once I added "find_unused_parameters=True" to the training routine. However, I am only allowed to change the model definition in our code base; I cannot modify anything related to training.
So I guess the only thing I can do right now is to "remove" the linear head from the model.
Although I cannot dig into the low level definition of ViT, but I can output this tensor like this:
encoder_output, linear_head_output = ViT(input)
Is it possible to remove this linear prediction head based on this linear_head_output tensor?
| Just set the num_classes=0 when you create your ViT model by calling timm.create_model().
Here is an example from TIMM documentation on Feature Extraction:
import torch
import timm
m = timm.create_model('resnet50', pretrained=True, num_classes=0, global_pool='')
o = m(torch.randn(2, 3, 224, 224))
print(f'Unpooled shape: {o.shape}')
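The same pattern works for a ViT; a sketch (the model name here is just an example):
import torch
import timm

vit = timm.create_model('vit_base_patch16_224', pretrained=True, num_classes=0)
features = vit(torch.randn(1, 3, 224, 224))  # encoder output only, no linear head
print(f'Feature shape: {features.shape}')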
| https://stackoverflow.com/questions/71521771/ |
Manual Bidirectional torch.nn.RNN Implementation | I'm trying to reimplement the torch.nn.RNN module without C++/CUDA bindings, i.e., using simple tensor operations and associated logic. I have developed the following RNN class and associated testing logic, which can be used to compare output with a reference module instance:
import torch
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, input_size, hidden_size, num_layers, bidirectional=False):
super(RNN, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.num_layers = num_layers
self.bidirectional = bidirectional
self.w_ih = [torch.randn(hidden_size, input_size)]
if bidirectional:
self.w_ih_reverse = [torch.randn(hidden_size, input_size)]
for layer in range(num_layers - 1):
self.w_ih.append(torch.randn(hidden_size, hidden_size))
if bidirectional:
self.w_ih_reverse.append(torch.randn(hidden_size, hidden_size))
self.w_hh = torch.randn(num_layers, hidden_size, hidden_size)
if bidirectional:
self.w_hh_reverse = torch.randn(num_layers, hidden_size, hidden_size)
def forward(self, input, h_0=None):
if h_0 is None:
if self.bidirectional:
h_0 = torch.zeros(2, self.num_layers, input.shape[1], self.hidden_size)
else:
h_0 = torch.zeros(1, self.num_layers, input.shape[1], self.hidden_size)
if self.bidirectional:
output = torch.zeros(input.shape[0], input.shape[1], 2 * self.hidden_size)
else:
output = torch.zeros(input.shape[0], input.shape[1], self.hidden_size)
for t in range(input.shape[0]):
print(input.shape, t)
input_t = input[t]
if self.bidirectional:
input_t_reversed = input[-1 - t]
for layer in range(self.num_layers):
h_t = torch.tanh(torch.matmul(input_t, self.w_ih[layer].T) + torch.matmul(h_0[0][layer], self.w_hh[layer].T))
h_0[0][layer] = h_t
if self.bidirectional:
h_t_reverse = torch.tanh(torch.matmul(input_t_reversed, self.w_ih_reverse[layer].T) + torch.matmul(h_0[1][layer], self.w_hh_reverse[layer].T))
h_0[1][layer] = h_t_reverse
input_t = h_t
if self.bidirectional:
# This logic is incorrect for bidirectional RNNs with multiple layers
input_t = torch.cat((h_t, h_t_reverse), dim=-1)
input_t_reversed = input_t
output[t, :, :self.hidden_size] = h_t
if self.bidirectional:
output[-1 - t, :, self.hidden_size:] = h_t_reverse
return output
if __name__ == '__main__':
input_size = 10
hidden_size = 12
num_layers = 2
batch_size = 2
bidirectional = True
input = torch.randn(2, batch_size, input_size)
rnn_val = torch.nn.RNN(input_size=input_size, hidden_size=hidden_size, num_layers=num_layers, bias=False, bidirectional=bidirectional, nonlinearity='tanh')
rnn = RNN(input_size=input_size, hidden_size=hidden_size, num_layers=num_layers, bidirectional=bidirectional)
for i in range(rnn_val.num_layers):
rnn.w_ih[i] = rnn_val._parameters['weight_ih_l%d' % i].data
rnn.w_hh[i] = rnn_val._parameters['weight_hh_l%d' % i].data
if bidirectional:
rnn.w_ih_reverse[i] = rnn_val._parameters['weight_ih_l%d_reverse' % i].data
rnn.w_hh_reverse[i] = rnn_val._parameters['weight_hh_l%d_reverse' % i].data
output_val, hn_val = rnn_val(input)
output = rnn(input)
print(output_val)
print(output)
My implementation appears to work for vanilla RNNs with an arbitrary number of layers and different batch sizes/sequence lengths, as well as for single-layered bidirectional RNNs; however, it does not produce the correct result for multi-layered bidirectional RNNs.
For the sake of simplicity, bias terms are not currently implemented, and only the tanh activation function is supported. I have narrowed the logic error down to the line input_t = torch.cat((h_t, h_t_reverse), dim=-1), as the first output sequence is incorrect.
It would be greatly appreciated if someone could point me in the correct direction and let me know what the problem is!
| There are two possible approaches to forward:
step through all layers with the first elements, then step through time (the approach taken in the snippet above)
sequentially step through layers, computing the output (equivalently, the input for the next layer) for each index
So while the first one works for a one-directional RNN, it does not work properly for a bidirectional multi-layered RNN. To illustrate, take 2 layers (the same case as in the code): to compute output[0], the input from the previous layer is needed, and it is the concatenation of:
the hidden vector from the normal pass after 1 step (because it is right at the start of the sequence)
and the hidden vector from the reverse pass after seq_length steps (one needs to step through the whole sequence, from end to start, to get it)
So when the step through layers is taken first, only one step in time has been made (pass length equal to 1), and therefore output[0] receives garbage as input, because the second part is not correct (there was no full pass from end to start).
To overcome this issue, I'd suggest rewriting the loops in forward from:
for t in range(input.shape[0]):
...
for layer in range(self.num_layers):
...
to something like:
for layer in range(self.num_layers):
...
for t in range(input.shape[0]):
...
As an alternative, keep forward for the other, normal cases, but write another function forward_bidir for the bidirectional multilayer case, and put these loops there.
It is also worth noting that w_ih[k] for k > 0 in the bidirectional case has shape (hidden_size, 2 * hidden_size), as stated in the PyTorch documentation on RNN. Also, the function torch.allclose serves better for testing outputs than prints.
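For illustration, a minimal sketch of such a forward_bidir, relying on the corrected w_ih shapes just mentioned and omitting biases as in the original:
def forward_bidir(self, input, h_0):
    layer_input = input  # (seq_len, batch, features)
    for layer in range(self.num_layers):
        fwd, bwd = [], []
        h_f, h_b = h_0[0][layer], h_0[1][layer]
        for t in range(layer_input.shape[0]):
            h_f = torch.tanh(layer_input[t] @ self.w_ih[layer].T
                             + h_f @ self.w_hh[layer].T)
            h_b = torch.tanh(layer_input[-1 - t] @ self.w_ih_reverse[layer].T
                             + h_b @ self.w_hh_reverse[layer].T)
            fwd.append(h_f)
            bwd.append(h_b)
        bwd.reverse()  # align the reverse pass with forward time indices
        layer_input = torch.stack(
            [torch.cat((f, b), dim=-1) for f, b in zip(fwd, bwd)])
    return layer_input  # (seq_len, batch, 2 * hidden_size)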
For code fixes, check the gist; no optimisations were made, the main aim being to preserve the original idea. It seems to work for all configurations listed above (one-directional, multi-layered, bi-directional).
| https://stackoverflow.com/questions/71522409/ |
RuntimeError: mat1 and mat2 shapes cannot be multiplied (200x16 and 32x1) | I would like to combine the outputs of two fully connected branches by concatenating them. Then, after concatenating them, I want to feed the result into another fully connected layer.
I can see that the error is caused by not setting cat_x = torch.cat([x, x1]) properly. However, I do not know how to solve this problem.
import torch
from torch import nn, optim
import numpy as np
from matplotlib import pyplot as plt
class Regression(nn.Module):
def __init__(self):
super().__init__()
self.linear1 = nn.Linear(2, 32)
self.linear2 = nn.Linear(32, 16)
self.linear3 = nn.Linear(32, 1)
def forward(self, input):
x = nn.functional.relu(self.linear1(input))
x = nn.functional.relu(self.linear2(x))
x1 = nn.functional.elu(self.linear1(input))
x1 = nn.functional.elu(self.linear2(x1))
cat_x = torch.cat([x, x1])
cat_x = self.linear3(cat_x)
return cat_x
def train(model, optimizer, E, iteration, x, y):
losses = []
for i in range(iteration):
optimizer.zero_grad() # reset gradients to zero
y_pred = model(x) # forward pass (prediction)
loss = E(y_pred.reshape(y.shape), y) # compute the loss (align shapes)
loss.backward() # compute gradients
optimizer.step() # update parameters
losses.append(loss.item()) # accumulate loss values
print('epoch=', i+1, 'loss=', loss)
return model, losses
def test(model, x):
y_pred = model(x).data.numpy().T[0] # predict
return y_pred
x = np.random.uniform(0, 10, 100) # create random x values
y = np.random.uniform(0.9, 1.1, 100) * np.sin(2 * np.pi * 0.1 * x) # create a noisy sine wave
x = torch.from_numpy(x.astype(np.float32)).float() # convert x to a tensor
y = torch.from_numpy(y.astype(np.float32)).float() # convert y to a tensor
X = torch.stack([torch.ones(100), x], 1) # append a constant-1 column to x for the intercept
net = Regression()
optimizer = optim.RMSprop(net.parameters(), lr=0.01) # use RMSprop as the optimizer
E = nn.MSELoss()
net, losses = train(model=net, optimizer=optimizer, E=E, iteration=5000, x=X, y=y)
error message
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in linear(input, weight, bias)
1846 if has_torch_function_variadic(input, weight, bias):
1847 return handle_torch_function(linear, (input, weight, bias), input, weight, bias=bias)
-> 1848 return torch._C._nn.linear(input, weight, bias)
1849
1850
RuntimeError: mat1 and mat2 shapes cannot be multiplied (200x16 and 32x1)
| Since both x and x1 have shape (100, 16), torch.cat concatenates them along dim 0 by default, producing a (200, 16) tensor instead of the (100, 32) tensor that linear3 expects. For your code to work, change cat_x = torch.cat([x, x1]) to cat_x = torch.cat([x, x1], dim=1)
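A quick shape check illustrates the difference:
import torch

x = torch.randn(100, 16)
x1 = torch.randn(100, 16)
print(torch.cat([x, x1]).shape)         # torch.Size([200, 16]) -> breaks linear3
print(torch.cat([x, x1], dim=1).shape)  # torch.Size([100, 32]) -> matches linear3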
| https://stackoverflow.com/questions/71527183/ |
How to structure a cnn for fine-tuning? | I want to fine-tune a model so that I can experiment with various different hyper parameters. For example:
Filters
Regularisation
Convolutional filter size
Learning rate
Optimizers
I have chosen to do this in PyTorch and have created a base model (see below). However, I am unsure of the best way to set up my code in order to do this, specifically my ConvNet and train function. I will need to make comparisons as I go along using graphs. Could anyone offer any advice on the best way to structure my code and go about this?
class ConvNet(nn.Module):
def __init__(self, num_classes=10):
super().__init__()
self.layer1 = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, stride=1),
nn.BatchNorm2d(16),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2)
)
self.layer2 = nn.Sequential(
nn.Conv2d(16, 32, kernel_size=3, stride=1),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2)
)
self.fcl = nn.Sequential(
nn.Flatten(),
nn.Linear(1152, 100)
)
def forward(self, x):
out = self.layer1(x)
out = self.layer2(out)
out = self.fcl(out)
return out
| If you're looking for a simple tutorial, PyTorch has one that is explained well for computer vision here: https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
This section is most relevant to your question, so you can see how fine-tuning is done when the optimizer is adjusted.
model_ft = models.resnet18(pretrained=True)
num_ftrs = model_ft.fc.in_features
# Here the size of each output sample is set to 2.
# Alternatively, it can be generalized to nn.Linear(num_ftrs, len(class_names)).
model_ft.fc = nn.Linear(num_ftrs, 2)
model_ft = model_ft.to(device)
criterion = nn.CrossEntropyLoss()
# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
You have to train your model completely first. Then you can refer to this post: https://discuss.pytorch.org/t/how-to-perform-finetuning-in-pytorch/419/8?u=nullpointer
ignored_params = list(map(id, model.fc.parameters()))
base_params = filter(lambda p: id(p) not in ignored_params,
model.parameters())
optimizer = torch.optim.SGD([
{'params': base_params},
{'params': model.fc.parameters(), 'lr': opt.lr}
], lr=opt.lr*0.1, momentum=0.9)
If you'd like to change the weights of a layer without changing the weights of the network while you're fine-tuning, you can do something like the following:
model = models.vgg16(pretrained=True)
print(list(list(model.classifier.children())[1].parameters()))
mod = list(model.classifier.children())
mod.pop()
mod.append(torch.nn.Linear(4096, 2))
new_classifier = torch.nn.Sequential(*mod)
print(list(list(new_classifier.children())[1].parameters()))
model.classifier = new_classifier
If you're looking to add layers or filters to the current model:
class MyModel(nn.Module):
def __init__(self, pretrained_model):
super().__init__()  # required when subclassing nn.Module
self.pretrained_model = pretrained_model
self.last_layer = ... # create layer
def forward(self, x):
return self.last_layer(self.pretrained_model(x))
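Regarding the hyper-parameter sweeps asked about in the question, a sketch of my own (not from the linked tutorials) is to expose each knob as an argument and loop over configurations:
import torch.nn as nn
import torch.optim as optim

def make_model(filters=(16, 32), kernel_size=3, num_classes=100):
    layers, in_ch = [], 3
    for out_ch in filters:
        layers += [nn.Conv2d(in_ch, out_ch, kernel_size=kernel_size),
                   nn.BatchNorm2d(out_ch), nn.ReLU(),
                   nn.MaxPool2d(kernel_size=2, stride=2)]
        in_ch = out_ch
    layers += [nn.Flatten(), nn.LazyLinear(num_classes)]  # LazyLinear infers in_features
    return nn.Sequential(*layers)

def make_optimizer(name, params, lr, weight_decay=0.0):
    if name == 'sgd':
        return optim.SGD(params, lr=lr, momentum=0.9, weight_decay=weight_decay)
    return optim.Adam(params, lr=lr, weight_decay=weight_decay)

for cfg in [{'name': 'sgd', 'lr': 1e-2}, {'name': 'adam', 'lr': 1e-3}]:
    model = make_model()
    optimizer = make_optimizer(cfg['name'], model.parameters(), cfg['lr'])
    # train here and record per-epoch losses for comparison plots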
| https://stackoverflow.com/questions/71527284/ |
How can I send a Data Loader to the GPU in Google Colab? | I have two data loaders and I am trying to send them to the GPU using .to(device) but this is not working.
This is the code I am using:
# to create a batch iterator
class MyData(Dataset):
def __init__(self, X, y):
self.data = X
self.target = y
# TODO: convert this into torch code is possible
self.length = [ np.sum(1 - np.equal(x, 0)) for x in X]
def __getitem__(self, index):
x = self.data[index]
y = self.target[index]
x_len = self.length[index]
xx_len = torch.tensor(x_len)
return {"src": x, "trg": y, "x_len": xx_len}
def __len__(self):
return len(self.data)
dataset = DataLoader(train_dataset, batch_size = BATCH_SIZE,
drop_last=True,
shuffle=True)
test_Dataset= DataLoader(val_dataset, batch_size = BATCH_SIZE,
drop_last=True,
shuffle=True)
I also tried to use pin_memory = True but this is not working too.
| You do not move the dataloader to the GPU. Instead, create batch tensors that store the data and then move these tensors to the GPU.
train_dataloader = DataLoader(MyData(X, y), batch_size=BS)  # instantiate the Dataset, don't pass the class
...
def train(nn, optim, train_dataloader, etc...):
for batch in train_dataloader:
batch = batch.to('cuda')
...
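Note that the MyData dataset above returns a dictionary per sample, so the default collate yields a dict of tensors, which has no .to method; a sketch of moving each field individually (dataset being the train DataLoader from the question):
for batch in dataset:
    batch = {k: v.to('cuda') for k, v in batch.items()}
    src, trg, x_len = batch["src"], batch["trg"], batch["x_len"]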
| https://stackoverflow.com/questions/71527963/ |
How do non-image datatypes such as 3D objects, audio, video, etc work with Activeloop Hub? | I was using Hub, a dataset format for AI that allows data streaming to GPUs without sacrificing performance.
I have been using Hub for image datasets and would like to try to use the product for other data types.
How would Hub work for different data types such as 3D objects, audio, video, etc?
The following Activeloop Hub doc has an example of how to upload image datasets to Hub and I am using a similar approach for working with my image dataset.
| You can currently upload any type of data as uncompressed arrays. Simply don't assign a Hub type (htype), which is a property of each tensor that tells Hub and the Activeloop Platform how to optimally store, parse, and visualize Hub datasets.
If you do this you'll be able to .append any array you like, as long as len(ds.tensor[i].numpy().shape) is equal for all samples. So, the samples don't have to have the same shape, but they need to have the same number of dimensions.
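A rough sketch of what this could look like (the calls follow the Hub 2.x docs; treat the exact names as assumptions):
import hub
import numpy as np

ds = hub.dataset('./my_audio_ds')
ds.create_tensor('signal')  # no htype assigned -> generic uncompressed arrays

ds.signal.append(np.random.randn(16000))  # e.g. one second of raw audio
ds.signal.append(np.random.randn(8000))   # shapes may differ per sample...
# ...but the number of dimensions must match (here every sample is 1-D)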
Going forward, Activeloop Hub will be adding support for application-specific multi-dimensional htypes such as video, image, and audio, etc. These will include an appropriate compression.
Hub has htype docs that can help you understand dimensionality & conventions for each type.
| https://stackoverflow.com/questions/71534239/ |