st47868 | Environments:
ubuntu 16.04
cuda8.0
python: 3.5.2
pytorch: 1.0.1
torchvision: 0.2.2
Hi guys, when I use torch.onnx.export, I get some problems. The code is just from the "Example: End-to-end AlexNet from PyTorch to Caffe2" (https://pytorch.org/docs/stable/onnx.html?highlight=export#torch.onnx.export)
import torch
import torchvision
dummy_input = torch.randn(10, 3, 224, 224, device='cuda')
model = torchvision.models.alexnet(pretrained=True).cuda()
input_names = [ "actual_input_1" ] + [ "learned_%d" % i for i in range(16) ]
output_names = [ "output1" ]
torch.onnx.export(model, dummy_input, "alexnet.onnx", verbose=True, input_names=input_names, output_names=output_names)
Here is the error:
Traceback (most recent call last):
File "pytorch_to_onnx.py", line 10, in <module>
torch.onnx.export(model, dummy_input, "alexnet.onnx", verbose=True)#, input_names=input_names, output_names=output_names)
File "/usr/local/lib/python3.5/dist-packages/torch/onnx/__init__.py", line 27, in export
return utils.export(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/torch/onnx/utils.py", line 104, in export
operator_export_type=operator_export_type)
File "/usr/local/lib/python3.5/dist-packages/torch/onnx/utils.py", line 281, in _export
example_outputs, propagate)
File "/usr/local/lib/python3.5/dist-packages/torch/onnx/utils.py", line 227, in _model_to_graph
graph = _optimize_graph(graph, operator_export_type)
File "/usr/local/lib/python3.5/dist-packages/torch/onnx/utils.py", line 155, in _optimize_graph
graph = torch._C._jit_pass_onnx(graph, operator_export_type)
File "/usr/local/lib/python3.5/dist-packages/torch/onnx/__init__.py", line 52, in _run_symbolic_function
return utils._run_symbolic_function(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/torch/onnx/utils.py", line 504, in _run_symbolic_function
return fn(g, *inputs, **attrs)
File "/usr/local/lib/python3.5/dist-packages/torch/onnx/symbolic.py", line 89, in wrapper
return fn(g, *args)
File "/usr/local/lib/python3.5/dist-packages/torch/onnx/symbolic.py", line 600, in adaptive_avg_pool2d
assert output_size == [1, 1], "Only output_size=[1, 1] is supported"
AssertionError: Only output_size=[1, 1] is supported
By the way, the documentation for torch.onnx.export() is vague; I cannot understand what the arguments input_names and output_names are for.
Any cues would be appreciated! |
st47869 | Finally, I fixed this problem by downgrading torchvision from 0.2.2 to 0.2.1.
I can't believe there are such huge differences |
st47870 | The original AlexNet definition in v0.2.1 does not have AdaptiveAvgPool2d, but v0.2.2 does. As stated in the error log, the error is mainly caused by AdaptiveAvgPool2d having output size other than [1,1].
I had the same error as you and solved it by changing the layer to fixed-size pooling (nn.AvgPool2d). |
st47871 | I tried it and it works. In alexnet.py, line 34, I changed self.avgpool = nn.AdaptiveAvgPool2d((6, 6)) to self.avgpool = nn.AvgPool2d((1, 1)) |
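A minimal sketch of that workaround, assuming a torchvision AlexNet that exposes an avgpool attribute (as in 0.2.2 and later). With 224x224 inputs the feature map reaching avgpool is already 6x6, so a fixed AvgPool2d with kernel size 1 keeps the classifier input at 256*6*6:
import torch
import torchvision

model = torchvision.models.alexnet(pretrained=True)
model.avgpool = torch.nn.AvgPool2d(kernel_size=1)  # fixed-size pooling instead of AdaptiveAvgPool2d

dummy_input = torch.randn(10, 3, 224, 224)
torch.onnx.export(model, dummy_input, "alexnet.onnx", verbose=True)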
st47872 | Hi,
I am working on a VAE, and I know that people use early stopping to pick a better epoch; however, is there a better way to do so? After all, early stopping requires a validation dataset, but my model is used to cluster the data from the latent variables, and I also don't want to reduce the amount of data used for training. |
st47873 | I am using a ResNet based on the PyTorch code, and I want to get the outputs of blocks 7 and 6 and use them as inputs to another model (so the gradient should flow through them). How can I do that? For example:
x = torch.rand(2, 3, 224, 224)
resnet18 = models.resnet18()
model = nn.Sequential(*list(resnet18.children())[:-2])
output1 = model(x)
I also want to have output2 and output3, the outputs of the 7th sequential block ((7): Sequential) and the 6th sequential block ((6): Sequential) in the model. I don't want to send x through the model two or three times because it will cost more GPU memory.
Is there a way to do it?
if I print my model it is like this
(0): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
(4): Sequential(
(0): BasicBlock(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): BasicBlock(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(5): Sequential(
(0): BasicBlock(
(conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(6): Sequential(
(0): BasicBlock(
(conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(7): Sequential(
(0): BasicBlock(
(conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
)
This question is more or less asked in this and this link, but the answers are not very clear yet, and I am not sure if those answers are really correct and if their method keeps the gradient.
Really appreciate your help
Update 1:
I noticed the following gives me the features, but the main problem is that the gradient does not exist in this method (the outputs are detached).
I need to have gradients for all of the hooks.
x = torch.rand(2,3,224,224)
activation = {}
def get_activation(name):
    def hook(model, input, output):
        activation[name] = output.detach()
    return hook
res50_model = models.resnet50(pretrained=True)
res50_model.layer4[2].relu.register_forward_hook(get_activation('layer4'))
res50_model.layer3[5].relu.register_forward_hook(get_activation('layer3'))
res50_model.layer2[3].relu.register_forward_hook(get_activation('layer2'))
res50_model.layer1[2].relu.register_forward_hook(get_activation('layer1'))
out = res50_model(x)
print(activation['layer1'].shape)
print(activation['layer2'].shape)
print(activation['layer3'].shape)
print(activation['layer4'].shape) |
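A minimal sketch (assuming torchvision's resnet50, as in the snippet above) of the same hook pattern without .detach(), so the stored activations stay attached to the autograd graph and gradients can flow through them:
import torch
from torchvision import models

activation = {}

def get_activation(name):
    def hook(model, input, output):
        # keep the output attached to the graph (no .detach())
        activation[name] = output
    return hook

model = models.resnet50(pretrained=False)
model.layer3.register_forward_hook(get_activation('layer3'))
model.layer4.register_forward_hook(get_activation('layer4'))

x = torch.rand(2, 3, 224, 224)
out = model(x)

# a loss built from the hooked activations backpropagates into the backbone
loss = out.sum() + activation['layer3'].mean() + activation['layer4'].mean()
loss.backward()
print(model.conv1.weight.grad is not None)  # True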
st47874 | Hi,
I'm trying to train a three-layer fully connected net to approximate a simple sine function. The net seems to have a hard time learning the parameters. It converges around the center, but not at the edges. It also takes many more cycles than the claims I saw in academic papers.
What am I doing wrong?
Please see attached code.
Thanks!
[attached plot: model output vs. target sine wave]
import numpy as np
import math
import torch
import torch.optim as optim
import torch.nn as nn
from matplotlib import pyplot as plt
myseed = 44
np.random.seed(myseed)
# Data Generation
data_range = 15
x = data_range*(np.random.rand(data_range*100, 1))-data_range/2
frq = 2
lr = 2e-3
n_epochs = 15000
plot_every = n_epochs/5
y = np.sin(frq*x)
device = 'cuda' if torch.cuda.is_available() else 'cpu'
x_train_tensor = torch.from_numpy(x).float().to(device)
y_train_tensor = torch.from_numpy(y).float().to(device)
torch.manual_seed(myseed)
hidden = 200
hidden2 = 200
model = nn.Sequential(
nn.Linear(1, hidden),
nn.ReLU(),
nn.Linear(hidden, hidden2),
nn.ReLU(),
nn.Linear(hidden2,1),
).to(device)
loss_fn = nn.MSELoss(reduction='mean')
optimizer = optim.SGD(model.parameters(), lr=lr)
losses = []
# variables for ploting results
res = 10
x_axis = (np.arange(data_range*res)-data_range/2*res).reshape(data_range*res,1)/res
x_axis_torch = torch.from_numpy(x_axis).float().to(device)
# For each epoch...
for epoch in range(n_epochs):
model.train()
yhat = model(x_train_tensor)
loss = loss_fn(y_train_tensor, yhat)
loss.backward()
optimizer.step()
optimizer.zero_grad()
losses.append(loss)
if (epoch % plot_every)==0:
out = model(x_axis_torch).cpu().detach().numpy()
out2 = ( np.sin(frq*x_axis_torch.cpu())) .detach().numpy()
plt.plot(out)
plt.plot(out2)
plt.show()
plt.plot(losses)
plt.show() |
st47875 | The paper is called: “The Convergence Rate of Neural Networks for Learned Functions of Different Frequencies”. In one of the examples it is shown that after 50 epochs you get a pretty clear sine wave. |
st47876 | Hmm well I’ve not yet checked the paper but I see nothing wrong with your code tho
Is ur architecture the same as the one implemented in the paper? |
st47877 | No, the paper did not include these details. However, this is such a simple example, and my code produces weird results, converging only in the center. Regardless of the paper, it seems like I'm doing something wrong. |
st47878 | Two things jump out: try moving optimizer.zero_grad() to the beginning of the loop, and move model.train() outside of the training loop. Also, add another dimension to your data and see if it helps at all. Finally, are the 3 lines after # variables for plotting results necessary? What if you plot y_hat at the end of training against y over x_train_tensor?
Are your data samples of size 22500? |
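A minimal sketch of the reordering suggested above (zero_grad at the start of each iteration, model.train() set once before the loop), reusing the model, optimizer, loss_fn, and tensors defined in the original post:
model.train()  # set once; there is no eval phase inside this loop
for epoch in range(n_epochs):
    optimizer.zero_grad()          # clear gradients before the forward pass
    yhat = model(x_train_tensor)
    loss = loss_fn(yhat, y_train_tensor)
    loss.backward()
    optimizer.step()
    losses.append(loss.item())     # store a plain float instead of the graph-attached tensor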
st47879 | I was playing around with your code, and using exactly your parameters, but including momentum (0.9) and weight_decay(1e-5), you get a better fit. However, it still takes 1000’s of epochs to get there, way more than the 50 mentioned in the paper. The best looking fit I found was using,
4 layers of 512 neurons each
max_epochs = 2000
lr = 1e-2
By 800 epochs, you have almost a perfect fit. However, the loss seems a bit iffy. I will still look into it. I’m curious as to how they managed to get that result in 50 epochs because that is blowing my mind.
Edit: Wanted to mention that this worked only for a frequency of 2. When changed to 4, it cannot estimate the full wave. |
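A sketch of that configuration for reference (4 hidden layers of 512 units each, SGD with momentum 0.9, weight_decay 1e-5, lr 1e-2); these values are the ones quoted above, not taken from the paper:
import torch.nn as nn
import torch.optim as optim

hidden = 512
model = nn.Sequential(
    nn.Linear(1, hidden), nn.ReLU(),
    nn.Linear(hidden, hidden), nn.ReLU(),
    nn.Linear(hidden, hidden), nn.ReLU(),
    nn.Linear(hidden, hidden), nn.ReLU(),
    nn.Linear(hidden, 1),
)
optimizer = optim.SGD(model.parameters(), lr=1e-2, momentum=0.9, weight_decay=1e-5)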
st47880 | Probably the OP is referring to this diagram in the paper
[Figure 2 from the paper: fit of a superposition of two sine waves over training]
If that is the case, then, as the legend says, the network is fitting a superposition of two sine waves with frequencies 4 and 14, not just one. Additionally, at epoch 50 they claim the network has learned the lower-frequency component, and only at epoch 22452 does it completely fit the superposition of sine functions |
st47881 | I’v implemented your suggestions, no effect… accept for adding a dimension to the data. why should it change anything?
Thanks for your suggestions! |
st47882 | Tried that as well. I can get the function to fit, but it doesn’t work as described in the paper. It starts mirroring the function in the middle, and then gradually fans out.
The data_range parameter also plays a role. If I want to fit it to just one oscillation, I can get it done within 100 epochs. I’m wondering if that is enough as you can simply concatenate (don’t know if that’s the right word) it multiple times. |
st47883 | pchandrasekaran:
Tried that as well. I can get the function to fit, but it doesn’t work as described in the paper. It starts mirroring the function in the middle, and then gradually fans out
I suppose that means you tried on two superimposed sine waves? That's important because it provides more structure for the network to fit. But if it didn't work then that's sketchy. Are you replicating the same net architecture and params as in the paper? Disclaimer: I haven't read the full paper in detail, so I don't know what net they used to produce Figure 2.
pchandrasekaran:
If I want to fit it to just one oscillation, I can get it done within 100 epochs. I’m wondering if that is enough as you can simply concatenate
I don't suppose that would work in reality. What I mean is that usually we don't know the underlying structure of the data; that's one of the main reasons we use these nets. We usually hope to approximate the data as closely as possible, but if we know our problem in advance then we could use that knowledge. |
st47884 | kirk86:
I suppose that means you tried on two superimposed sine waves?, that’s important cause it provides more structure for the network to fit. But if it didn’t work then that’s sketchy, are you replicating the same net architecture and params as in the paper, discliamer haven’t read the full paper in detail so I dunno what net they used to produce Figure 2.
Yup, that’s on the superposition. I skimmed through the paper, but wasn’t able to find any architecture or params related to the network used in producing Figure. 2.
kirk86:
I don’t suppose that would work in reality, what I mean is usually we don’t know the underlying structure of data that’s one of the main reasons we use these nets. We usually hope to approximate the data as close as possible but if we know in advance our problem then we could use that knowledge.
That does make sense. Even playing around with the superimposed signal, increasing the number of data points, increases the time it needed to learn the entire scope of the function. |
st47885 | The paper shows that networks have a tendency to converge to lower frequencies faster than to higher frequencies, so, for the purpose of this discussion, I think you can disregard the higher frequency, and you can see a clear shape of a sine wave after 50 epochs. |
st47886 | The thing is, there is no mention of the architecture used in getting that figure. Upon trying it out myself, the learning process doesn’t work as described in the paper. As mentioned in one of my previous replies, you can get a sine wave within 100 epochs using the architecture I mentioned above, but that is only for a single oscillation. |
st47887 | After reading your comment I was thinking what would happen if you were to blow up the network? You could try depth, width and then both. In other words try to heavily overparameterise the network and see what happens? |
st47888 | The thing is, you should be able to approximate any function using a two-layered net. This is the main reason this example is interesting to me. @pchandrasekaran has already shown that adding layers and epochs will get a convergence of more cycles… |
st47889 | Not going to lie, this has piqued my interest as well. 2 layers does work, but it can predict a maximum of 2 oscillations before it fails, and that too only at around 800 epochs. I will work on this during the weekend and will mention if I figure out anything. My main gripe is that the learning for the superimposed function isn’t taking place the way it’s been described.
@kirk86 Exploding it yields the same results, just more time to get there, but at literally the same epoch, lol. |
st47890 | endrew:
you should be able to approximate any function using a two layered net.
There are several approximation theorems and plenty of subtleties and nuances when transitioning from theory to practice. E.g., those theorems talk about continuous functions and hold almost surely as the number of samples goes to infinity. Computers are by design discrete machines, and just thinking about the differences between continuous and discrete optimisation, it seems like magic (at least to me) that we are able to get away with it and have these models working in the first place. |
st47891 | PyTorch supplies pretrained ResNets with different depths. The ResNet has the "BasicBlock" or "Bottleneck" structure. I want to ask how to get the separate conv feature maps from these pretrained ResNets (for example, resnet18 or resnet50)? |
st47892 | Your question is not clear at all. What do you mean by "get separate conv feature maps"?
Also, there are a few posts explaining how to extract intermediate outputs of resnet, please read those. |
st47893 | What I mean is "extracting image features" from different layers in ResNet, and I read this post.
I just wonder how this can be done in ResNet. One possible way is to construct a partial model which only uses the front layers for forwarding. Are there other ways?
By the way, the Torch version of ResNet has a nice script to extract features from ResNet. If PyTorch also had a corresponding one, that would be better. |
st47894 | In your case, attaching a hook to the layer you want seems easier.
model = models.resnet18()
outputs = []
def hook(module, input, output):
    outputs.append(output)
model.layer2[0].conv1.register_forward_hook(hook)
(Typing from the phone, the code is an approximation and might contain errors) |
st47895 | Here is what I do:
class Resnet50Extractor(nn.Module):
    def __init__(self, submodule, extracted_layer):
        super(Resnet50Extractor, self).__init__()
        self.submodule = submodule
        self.extracted_layer = extracted_layer

    def forward(self, x):
        if self.extracted_layer == 'maxpool':
            modules = list(self.submodule.children())[:4]
        elif self.extracted_layer == 'inner-layer-3':
            modules = list(self.submodule.children())[:6]
            third_module = list(self.submodule.children())[6]
            third_module_modules = list(third_module.children())[:3]  # take the first three inner modules
            third_module = nn.Sequential(*third_module_modules)
            modules.append(third_module)
        elif self.extracted_layer == 'layer-3':
            modules = list(self.submodule.children())[:7]
        else:  # after avg-pool
            modules = list(self.submodule.children())[:9]
        self.submodule = nn.Sequential(*modules)
        x = self.submodule(x)
        return x
then call like below:
model_ft = models.resnet50(pretrained=True)
extractor = Resnet50Extractor(model_ft, extracted_layer)
features = extractor(input_tensor) |
st47896 | alic:
Resnet50Extractor
are there any pros or cons of using your method compared to register_forward_hook? |
st47897 | Hello!
Say, we have an intermediate layer of a neural network, which gets two inputs:
Output generated by a previous layer in the following format: N=2000 elements, Cin=10 channels, H=W=100 pixels
Another input in the following format: N=2000 elements, Cin=1 channel, H=W=100 pixels
And we need to combine these two inputs into one and then apply a convolution to the combined data, which will have the following format: N=2000 elements, Cin = 10+1 = 11 channels, H=W=100 pixels
What is the most efficient way to do this?
I’ve considered several options, but all of them look bad.
In the given layer, construct a new tensor which will have, for each input element, 10 channels from the first input stream and 1 channel from the second one. This approach requires resizing the input tensor on the fly, which is not very efficient.
Use 11 input channels in all previous operations, but somehow restrict the operations to use only 10 of them to avoid adding unused weights. Unfortunately, I didn't find out how to implement this approach.
Use 11 input channels in all previous operations, and do not restrict them from using the 11th channel. This will generate redundant connections (weights) between neurons.
Do you have any ideas? |
st47898 | If you don't want to use torch.cat ("resizing"), you can use the distributivity property:
conv(cat(A,B), cat(W1,W2), bias) = conv(A,W1)+conv(B,W2, bias)
and do two “conv” ops (10 to 10 and 1 to 10) |
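A quick numerical check of that identity (a sketch with a smaller spatial size than in the question; 10-channel and 1-channel inputs, 10 output channels):
import torch
import torch.nn.functional as F

A = torch.randn(2, 10, 8, 8)   # first input stream
B = torch.randn(2, 1, 8, 8)    # second input stream
W = torch.randn(10, 11, 3, 3)  # weight for the concatenated 11-channel input
bias = torch.randn(10)

W1, W2 = W[:, :10], W[:, 10:]  # split the weight along the input-channel dimension

out_cat = F.conv2d(torch.cat([A, B], dim=1), W, bias, padding=1)
out_split = F.conv2d(A, W1, padding=1) + F.conv2d(B, W2, bias, padding=1)

print(torch.allclose(out_cat, out_split, atol=1e-5))  # True up to floating-point error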
st47899 | Hi! Thank you for your answer. Eventually, I’ve used torch.cat.
I use groups=1 in the conv operation (both outputs depend on both inputs), therefore the property has to be rewritten in the following way:
conv(cat(A,B), cat(W1,W2)) = cat(conv(A,cat(W1,W2)), conv(B,cat(W1,W2)))
And the implementation of this technique seems to be even less efficient than a single torch.cat.
st47900 | groups don’t matter, the principle is the same as with matrix multiplication:
with concatenated inputs you have
batch x 11 @ 11 x 10 = batch x 10
equivalent with split matrices:
batch x 10 @ 10 x 10 + batch x 1 @ 1 x 10
Now, two separate matmul/conv ops with smaller tensors are likely to be slower than one with concatenated inputs. The only upside of avoiding cat is memory savings (also note that inplace summation is possible). |
st47901 | I have a batch of N rows each of M values that are sorted along dim=1. For each row, I want to find the first nonzero element index from M sorted values. I’d like to do it efficiently without the for-loop.
x = torch.randn(5, 7)
x[x<0] = 0
x = x.sort(dim=1)
first_nonzero = f(x) |
st47902 | Solved by adrianjav in post #4
It is easier if you count the number of zero elements in that dimension
x = torch.randn(5, 7)
x[x<0] = 0
x = x.sort(dim=1)[0] # You forgot that sort returns a pair
first_nonzero = (x == 0).sum(dim=1)
Even easier, you can skip the x[x<0] = 0 line and count the non-positive elements:
x = torch.rand… |
st47903 | Hi, I coded a solution for your problem. You can check it in the link below.
GitHub: Goutam-Kelam/PytorchForum (contains my trials for answering the questions asked in the forum)
Hope this works out for you. |
st47904 | Here is another try, especially if you want to only use torch APIs.
import torch
def f(x):
    # non zero values mask
    non_zero_mask = x != 0
    # operations on the mask to find first nonzero values in the rows
    mask_max_values, mask_max_indices = torch.max(non_zero_mask, dim=1)
    # if the max-mask is zero, there is no nonzero value in the row
    mask_max_indices[mask_max_values == 0] = -1
    return mask_max_indices
x = torch.randn(4, 5)
x[x<0] = 0
x, sort_indices = x.sort(dim=1)
print('x', x)
first_nonzero = f(x)
print(first_nonzero) |
st47905 | It is easier if you count the number of zero elements in that dimension
x = torch.randn(5, 7)
x[x<0] = 0
x = x.sort(dim=1)[0] # You forgot that sort returns a pair
first_nonzero = (x == 0).sum(dim=1)
Even easier, you can skip the x[x<0] = 0 line and count the non-positive elements:
x = torch.randn(5, 7)
x = x.sort(dim=1)[0]
first_positive = (x <= 0).sum(dim=1) |
st47906 | In the case that there are no positive numbers in a row, the answer might be misleading. Just a minor caveat to be handled I guess. |
st47907 | True, in that case first_nonzero[i] == x.size(1). I don’t think is a caveat tho, since you’ll have to mark those cases somehow. |
st47908 | Thanks for replies.
I’m interested in pytorch-only implementation to keep my code eco-friendly. Sorry for not mentioning this beforehand.
@InnovArul, your solution implies that torch.max always returns the first occurrence. I can't find that mentioned in the docs. Though it works with the current version 0.4.1, I think it's better to avoid relying on the internal implementation of PyTorch functions. For that reason, @adrianjav's answer looks more compatible, as long as ByteTensor supports the sum operation. |
st47909 | yes. I like @adrianjav’s answer as well. It looks more fail safe to me. All the best! |
st47910 | I was looking for a solution that does not assume the input is already sorted. Since this assumption was not mentioned in the topic title, it may be useful to have a solution that works in that case as well. I think the following should work in the general case. The idea is that an element is the first nonzero element if it is nonzero and the cumulative sum of a nonzero indicator is 1.
import torch
def first_nonzero(x, axis=0):
    nonz = (x > 0)
    return ((nonz.cumsum(axis) == 1) & nonz).max(axis)
x = (torch.rand(10, 5) * 10 - 6).int().clamp(0, 10)
print (x)
# Function returns if there are any nonzero's and the index of the first nonzero (0 if no nonzero)
any_nonz, idx_first_nonz = first_nonzero(x, axis=1)
print (any_nonz, idx_first_nonz)
# If you want -1 for rows with no nonzero's
idx_first_nonz[any_nonz == 0] = -1
print (idx_first_nonz) |
st47911 | @wouter’s solution above was what I was looking for.
I would like to point out two things this though.
First, it actually finds the first nonnegative element, not the first nonzero element. I.e. it finds the first element that is zero or larger.
Second, it has a failure case when all elements are negative along the axis. In this case it returns 0, which indicates that there is a nonnegative element at index zero, which is wrong.
One way to improve this is to find the places at which the sum of (x > 0) along the axis equals zero. This indicates that all elements along the axis were negative. We can then set those to some different value, e.g. -1 or float('nan'). We can use the already computed cumulative sum to do this.
def first_nonnegative(tensor, axis=0, value_for_all_nonnegative=-1):
    nonnegative = (tensor > 0)
    cumsum = nonnegative.cumsum(axis=axis)
    all_negative = cumsum[-1] == 0  # any dimensions where all are negative
    nonnegative_idx = ((cumsum == 1) & nonnegative).max(0).indices
    nonnegative_idx[all_negative] = value_for_all_nonnegative
    return nonnegative_idx |
st47912 | Glad that it was of help to you! You are right that it finds the first non-negative value (I guess I implicitly assumed only non-negative values), but you can easily change nonz = (x > 0) to nonz = (x != 0).
I already accounted for the failure case you describe by returning any_nonz as well, which you can use to set rows without any nonzero to -1 via idx_first_nonz[any_nonz == 0] = -1, as done in my example. |
st47913 | Haha! Don’t know how I missed that in your example, sorry for ranting on with that.
And yea the other thing is pretty easy to change |
st47914 | I started experimenting with the XLM framework myself from https://github.com/facebookresearch/XLM.
I have the impression that the documentation is only for those who have implemented it, since there is no other documentation available (in case of an error you just have to sort it out yourself).
Nevertheless, I have some questions. I have my own data (monolingual and parallel) in txt files, and I'd like to apply the whole preprocessing (fastBPE…) until I have data understandable by the train.py script (BERT, XLM…).
At the same time, I'd like to understand all the formats (pth, …) used on the official site.
Each time I use the data from Wikipedia and …, it wastes a lot of time |
st47915 | I see some unusual behavior when I train my multi-label classifier using BCEWithLogitsLoss with pos_weight calculated using the following function.
def calculate_pos_weights(class_counts):
    pos_weights = np.ones_like(class_counts)
    neg_counts = [len(data)-pos_count for pos_count in class_counts]
    print(neg_counts)
    for cdx, (pos_count, neg_count) in enumerate(zip(class_counts, neg_counts)):
        pos_weights[cdx] = neg_count / (pos_count + 1e-5)
    print(pos_weights)
    return torch.as_tensor(pos_weights, dtype=torch.float)
criterion = nn.BCEWithLogitsLoss(pos_weight=calculate_pos_weights(weight_balance).to(device))
[screenshot: training curves for precision, recall, F1 score, and loss]
Notice how the Recall starts at 1 and is reducing while precision is increasing (therefore improving the F1 score), but the loss is going up, indicating that the model is over-fitting.
When I train this model without the pos_weight I do not have this issue but I believe I can get better performance with a weighted loss.
Can anyone find and explain the issue?
Thanks! |
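For reference, pos_weight only scales the positive term of the per-class binary cross entropy. A minimal sketch of what the weighted criterion computes, with random data and hypothetical weights (not the values from the post above):
import torch
import torch.nn as nn

pos_weight = torch.tensor([2.0, 5.0, 0.5])  # hypothetical neg_count / pos_count ratios for 3 classes
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

logits = torch.randn(4, 3)
targets = torch.randint(0, 2, (4, 3)).float()

loss = criterion(logits, targets)

# equivalent manual computation: -[w_p * y * log(sigmoid(x)) + (1 - y) * log(1 - sigmoid(x))]
p = torch.sigmoid(logits)
manual = -(pos_weight * targets * torch.log(p) + (1 - targets) * torch.log(1 - p)).mean()
print(torch.allclose(loss, manual, atol=1e-5))  # True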
st47916 | I am trying to run a simple SeqGAN model but I am getting an error.
The SeqGAN model is like below:
class GANLoss(nn.Module):
    """Reward-Refined NLLLoss function for adversarial training of the Generator"""
    def __init__(self):
        super(GANLoss, self).__init__()

    def forward(self, prob, target, reward):
        """
        Args:
            prob: (N, C), torch Variable
            target : (N, ), torch Variable
            reward : (N, ), torch Variable
        """
        N = target.size(0)
        C = prob.size(1)
        one_hot = torch.zeros((N, C))
        if prob.is_cuda:
            one_hot = one_hot.cuda()
        one_hot.scatter_(1, target.data.view((-1, 1)), 1)
        one_hot = one_hot.type(torch.ByteTensor)
        one_hot = Variable(one_hot)
        if prob.is_cuda:
            one_hot = one_hot.cuda()
        loss = torch.masked_select(prob, one_hot)
        # ====================================================================
        loss = loss * reward
        loss = -torch.sum(loss)
        return loss
def main():
random.seed(SEED)
np.random.seed(SEED)
# Build up dataset
s_train, s_test = load_from_big_file('F:/H-data3.txt')
# idx_to_word: List of id to word
# word_to_idx: Dictionary mapping word to id
idx_to_word, word_to_idx = fetch_vocab(s_train, s_train, s_test)
# input_seq, target_seq = prepare_data(DATA_GERMAN, DATA_ENGLISH, word_to_idx)
global VOCAB_SIZE
VOCAB_SIZE = len(idx_to_word)
save_vocab(CHECKPOINT_PATH+'metadata.data', idx_to_word, word_to_idx, VOCAB_SIZE, g_emb_dim, g_hidden_dim)
print('VOCAB SIZE:' , VOCAB_SIZE)
# Define Networks
generator = Generator(VOCAB_SIZE, g_emb_dim, g_hidden_dim, opt.cuda)
discriminator = Discriminator(d_num_class, VOCAB_SIZE, d_emb_dim, d_filter_sizes, d_num_filters, d_dropout)
target_lstm = TargetLSTM(VOCAB_SIZE, g_emb_dim, g_hidden_dim, opt.cuda)
if opt.cuda:
generator = generator.cuda()
discriminator = discriminator.cuda()
target_lstm = target_lstm.cuda()
# Generate toy data using target lstm
print('Generating data ...')
# Generate samples either from sentences file or lstm
# Sentences file will be structured input sentences
# LSTM based is BOG approach
generate_real_data('F:/H-data3.txt', BATCH_SIZE, GENERATED_NUM, idx_to_word, word_to_idx, POSITIVE_FILE, TEST_FILE)
# generate_samples(target_lstm, BATCH_SIZE, GENERATED_NUM, POSITIVE_FILE, idx_to_word)
# generate_samples(target_lstm, BATCH_SIZE, 10, TEST_FILE, idx_to_word)
# Create Test data iterator for testing
test_iter = GenDataIter(TEST_FILE, BATCH_SIZE)
# Load data from file
gen_data_iter = GenDataIter(POSITIVE_FILE, BATCH_SIZE)
# Pretrain Generator using MLE
gen_criterion = nn.NLLLoss(size_average=False)
gen_optimizer = optim.Adam(generator.parameters())
if opt.cuda:
gen_criterion = gen_criterion.cuda()
print('Pretrain with MLE ...')
for epoch in range(PRE_EPOCH_NUM):
loss = train_epoch(generator, gen_data_iter, gen_criterion, gen_optimizer)
print('Epoch [%d] Model Loss: %f'% (epoch, loss))
sys.stdout.flush()
# generate_samples(generator, BATCH_SIZE, GENERATED_NUM, EVAL_FILE)
# eval_iter = GenDataIter(EVAL_FILE, BATCH_SIZE)
# loss = eval_epoch(target_lstm, eval_iter, gen_criterion)
# print('Epoch [%d] True Loss: %f' % (epoch, loss))
# Pretrain Discriminator
dis_criterion = nn.NLLLoss(size_average=False)
dis_optimizer = optim.Adam(discriminator.parameters())
if opt.cuda:
dis_criterion = dis_criterion.cuda()
print('Pretrain Discriminator ...')
for epoch in range(3):
generate_samples(generator, BATCH_SIZE, GENERATED_NUM, NEGATIVE_FILE)
dis_data_iter = DisDataIter(POSITIVE_FILE, NEGATIVE_FILE, BATCH_SIZE)
for _ in range(3):
loss = train_epoch(discriminator, dis_data_iter, dis_criterion, dis_optimizer)
print('Epoch [%d], loss: %f' % (epoch, loss))
sys.stdout.flush()
#=========================================================================================================================================
#==========================================================================================================================================
# Adversarial Training
rollout = Rollout(generator, 0.8)
print('#####################################################')
print('Start Adversarial Training...\n')
gen_gan_loss = GANLoss()
gen_gan_optm = optim.Adam(generator.parameters())
if opt.cuda:
gen_gan_loss = gen_gan_loss.cuda()
gen_criterion = nn.NLLLoss(size_average=False)
if opt.cuda:
gen_criterion = gen_criterion.cuda()
dis_criterion = nn.NLLLoss(size_average=False)
dis_optimizer = optim.Adam(discriminator.parameters())
if opt.cuda:
dis_criterion = dis_criterion.cuda()
for total_batch in range(TOTAL_BATCH):
## Train the generator for one step
for it in range(1):
samples = generator.sample(BATCH_SIZE, g_sequence_len)
# construct the input to the genrator, add zeros before samples and delete the last column
zeros = torch.zeros((BATCH_SIZE, 1)).type(torch.LongTensor)
if samples.is_cuda:
zeros = zeros.cuda()
inputs = Variable(torch.cat([zeros, samples.data], dim = 1)[:, :-1].contiguous())
targets = Variable(samples.data).contiguous().view((-1,))
# calculate the reward
rewards = rollout.get_reward(samples, 16, discriminator)
rewards = Variable(torch.Tensor(rewards))
if opt.cuda:
rewards = torch.exp(rewards.cuda()).contiguous().view((-1,))
prob = generator.forward(inputs)
# print('SHAPE: ', prob.shape, targets.shape, rewards.shape)
loss = gen_gan_loss(prob, targets, rewards)
gen_gan_optm.zero_grad()
loss.backward()
gen_gan_optm.step()
# print('GEN PRED DIM: ', prob.shape)
I am getting this error:
loss = loss * reward
RuntimeError: The size of tensor a (110) must match the size of tensor b (11) at non-singleton dimension 1
I am using Windows 10, Nvidia GeForce GTX 1050 , cuda 9.2, and pytorch version is 1.0.1 |
st47917 | Could you check the shape of one_hot?
Also, which batch size are you using?
I can’t run the code currently, so if you provide all parameters and (random) input data, I could try to debug it. |
st47918 | Can you try to run the code? The whole program is here: https://github.com/bhushan23/Transformer-SeqGAN-PyTorch (data, core, seq_gan). Thanks! |
st47919 | Thank you very much for your kind reply. Can you give me specific instructions on what I need to modify, or can you help me run the code?
Thanks again! |
st47920 | I’ve cloned the repo and could try to debug it. Could you tell me which script you are using and what you’ve changed so far, e.g. different input shapes etc.? |
st47921 | I am using the data(train_data_obama), the scripts data_iter.py/ helper.py in core and the scripts discriminator.py /generator.py/loss.py/main.py/rollout.py/target_lstm.py in seq_gan. I have not changed !Thanks ! |
st47922 | Thanks for the information!
I've run the script and the main method seems to run successfully.
At least the train_data_obama.txt was loaded and the loss decreased in each epoch until I killed the script.
Could you clone the repo again and try it?
If you haven’t changed anything, I’m not sure why the script seems to throw this error on your machine. |
st47923 | Thank you very much for helping to debug this program. When I run the script, the loss also decreases in each epoch. But this error appears at "Start Adversarial Training…"! When you run it, does this error not appear?
[screenshot of the error] |
st47924 | Hi, Mr. Yu,
I'm also trying to run this code and am facing the same problem as you.
I figured out why it goes wrong.
In the adversarial training,
# Adversarial Training
rollout = Rollout(generator, 0.8)
print('#####################################################')
print('Start Adversarial Training...\n')
......
# calculate the reward
rewards = rollout.get_reward(samples, 16, discriminator)
rewards = Variable(torch.Tensor(rewards))
if opt.cuda:
rewards = torch.exp(rewards.cuda()).contiguous().view((-1,))
prob = generator.forward(inputs)
# print('SHAPE: ', prob.shape, targets.shape, rewards.shape)
The error happens because the author flattens the reward to one dimension only in the GPU branch. However, you and I both run this code in a CPU environment, which is what causes the error.
So here is the solution:
add this
rewards = torch.exp(rewards).contiguous().view((-1,))
after
rewards = Variable(torch.Tensor(rewards))
Hope it can help |
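A slightly cleaner, device-agnostic variant of that fix (a sketch, not the repository's code, and assuming the exp is intended on both CPU and GPU): build the reward tensor on the same device as the samples and reshape it once, so both paths behave the same:
rewards = rollout.get_reward(samples, 16, discriminator)
rewards = torch.as_tensor(rewards, dtype=torch.float, device=samples.device)
rewards = torch.exp(rewards).contiguous().view(-1)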
st47925 | I’m trying to set up PyTorch 1.7.0 on a Windows 10 machine with 2 GPUs (2080ti) and CUDA 10.2.
It installs correctly, and at first everything looks ok - torch.cuda.is_available() returns True, device_count() returns 2, get_device_name() returns ‘GeForce RTX 2080 Ti’, and get_device_properties().total_memory shows 11GB for each card.
However, if I create a tensor of size 1 and try to put it on either GPU I get ‘RuntimeError: CUDA error: out of memory’. If I try to put a tensor on GPU 0 after trying to put one on GPU 1, the error changes to ‘RuntimeError: CUDA error: all CUDA-capable devices are busy or unavailable’
I’ve noticed that the first time I call any CUDA function (including is_available) the display goes blank for about 10 to 15 seconds. Windows event log shows that the Nvidia driver crashed and was restarted during this time. is_available returns True when this happens, and subsequent calls do not cause this driver crash.
I have tried different versions of PyTorch and Cuda with similar results, and I have updated Windows and Nvidia drivers. I have another machine with identical hardware also running Windows 10 that can run PyTorch with no problems.
Any ideas? |
st47926 | Hi
I'm using ResNet18 (not pre-trained) for training on images with shape (1, 224, 224).
I have 15 output classes.
Hence I have modified the first Conv2d and the last Linear layer accordingly.
Sequential(
(0): Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3))
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
(4): Sequential(
(0): BasicBlock(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): BasicBlock(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(5): Sequential(
(0): BasicBlock(
(conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(6): Sequential(
(0): BasicBlock(
(conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(7): Sequential(
(0): BasicBlock(
(conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(8): AdaptiveAvgPool2d(output_size=(1, 1))
(9): Linear(in_features=512, out_features=15, bias=True)
)
On training I get the following error:
RuntimeError: size mismatch, m1: [512 x 1], m2: [512 x 15] at /pytorch/aten/src/THC/generic/THCTensorMathBlas.cu:290
It would be really great if someone could help me:) |
st47927 | Solved by ptrblck in post #2
It seems you’ve wrapped all modules into an nn.Sequential block.
If that’s the case, you are removing the flattening, which is used here directly in the forward method.
You could keep the original architecture and change both layers by reassigning the new layers:
model = models.resnet18()
model.c… |
st47928 | It seems you’ve wrapped all modules into an nn.Sequential block.
If that's the case, you are removing the flattening, which is used here directly in the forward method.
You could keep the original architecture and change both layers by reassigning the new layers:
model = models.resnet18()
model.conv1 = ...
Or by adding an nn.Flatten() module inside the nn.Sequential. |
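A minimal sketch of the second option (nn.Flatten() inside the nn.Sequential; note nn.Flatten is available in torch >= 1.2), also swapping the first conv for 1-channel inputs and the head for 15 classes as described above:
import torch
import torch.nn as nn
from torchvision import models

resnet18 = models.resnet18()
resnet18.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)  # 1-channel input

model = nn.Sequential(
    *list(resnet18.children())[:-1],  # everything up to and including the adaptive avg pool
    nn.Flatten(),                     # replaces the torch.flatten(x, 1) done in forward()
    nn.Linear(512, 15),               # 15 output classes
)

out = model(torch.randn(2, 1, 224, 224))
print(out.shape)  # torch.Size([2, 15])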
st47929 | Hello Sir ptrblck,
Can you please point out where I am wrong here? I am replicating the VGG19_bn architecture (NOT pre-trained) to design a CNN model for image classification. I have 224x224 images and 20 output classes.
I am getting this Error:
RuntimeError: size mismatch, m1: [64 x 25088], m2: [4096 x 20] at /pytorch/aten/src/THC/generic/THCTensorMathBlas.cu:283
Kindly help me out. Thank You
Here is my code:
# Replicating Architecture of VGG19_bn
class Unit(nn.Module):
def __init__(self,in_channels,out_channels):
super(Unit,self).__init__()
self.conv = nn.Conv2d(in_channels=in_channels,kernel_size=3,out_channels=out_channels,stride=1,padding=1)
self.bn = nn.BatchNorm2d(num_features=out_channels)
self.relu = nn.ReLU(inplace=True)
def forward(self,input):
output = self.conv(input)
output = self.bn(output)
output = self.relu(output)
return output
class CNN(nn.Module):
def __init__(self,num_classes=20):
super(CNN,self).__init__()
#Create 16 layers of the unit with max pooling in between
self.unit1 = Unit(in_channels=3,out_channels=64)
self.unit2 = Unit(in_channels=64, out_channels=64)
self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
self.unit3 = Unit(in_channels=64, out_channels=128)
self.unit4 = Unit(in_channels=128, out_channels=128)
self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
self.unit5 = Unit(in_channels=128, out_channels=256)
self.unit6 = Unit(in_channels=256, out_channels=256)
self.unit7 = Unit(in_channels=256, out_channels=256)
self.unit8 = Unit(in_channels=256, out_channels=256)
self.pool3 = nn.MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
self.unit9 = Unit(in_channels=256, out_channels=512)
self.unit10 = Unit(in_channels=512, out_channels=512)
self.unit11 = Unit(in_channels=512, out_channels=512)
self.unit12 = Unit(in_channels=512, out_channels=512)
self.pool4 = nn.MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
self.unit13 = Unit(in_channels=512, out_channels=512)
self.unit14 = Unit(in_channels=512, out_channels=512)
self.unit15 = Unit(in_channels=512, out_channels=512)
self.unit16 = Unit(in_channels=512, out_channels=512)
self.pool5 = nn.MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
self.avgpool = nn.AdaptiveAvgPool2d(output_size=(7,7)) #output_size=7, 7
#Add all the units into the Sequential layer in exact order
self.net = nn.Sequential(self.unit1, self.unit2, self.pool1, self.unit3, self.unit4, self.pool2, self.unit5, self.unit6
,self.unit7, self.unit8, self.pool3, self.unit9, self.unit10, self.unit11, self.unit12, self.pool4
,self.unit13, self.unit14, self.unit15, self.unit16, self.pool5, self.avgpool)
self.fc = nn.Linear(in_features=512*7*7, out_features=4096) #25088
self.fc = nn.Dropout(p=0.5, inplace=False)
self.fc = nn.Linear(in_features=4096, out_features=4096)
self.fc = nn.Dropout(p=0.5, inplace=False)
self.fc = nn.Linear(in_features=4096, out_features=num_classes) #20
def forward(self, input):
output = self.net(input)
output = output.view(-1,512*7*7)
output = self.fc(output)
return output |
st47930 | You are replacing self.fc multiple times in these lines:
self.fc = nn.Linear(in_features=512*7*7, out_features=4096) #25088
self.fc = nn.Dropout(p=0.5, inplace=False)
self.fc = nn.Linear(in_features=4096, out_features=4096)
self.fc = nn.Dropout(p=0.5, inplace=False)
self.fc = nn.Linear(in_features=4096, out_features=num_classes) #20
If you want to apply these layers sequentially, wrap them in an nn.Sequential container:
self.fc = nn.Sequential(nn.Linear(512*7*7, 4096),
nn.Dropout(),
...
) |
st47931 | Hi Sir ptrblck,
I really appreciate your response and your help. However, I am still struggling to achieve my final goal here. I have managed to replicate the VGG19_bn architecture and train the model on my custom dataset. I now have the saved model and want to extract the feature vectors from the trained model and save them as a .mat file, so that I can feed those extracted features to another neural network for final classification.
Please help me out by pointing me in the correct direction.
I just want to extract 4096-dim feature vectors.
Thank You
class VGG(nn.Module):
def __init__(self, features, num_classes=20, init_weights=True):
super(VGG, self).__init__()
self.features = features
self.avgpool = nn.AdaptiveAvgPool2d((7, 7))
self.classifier = nn.Sequential(
nn.Linear(512 * 7 * 7, 4096),
nn.ReLU(True),
nn.Dropout(),
nn.Linear(4096, 4096),
nn.ReLU(True),
nn.Dropout(),
nn.Linear(4096, num_classes),
)
if init_weights:
self._initialize_weights()
def forward(self, x):
x = self.features(x)
x = self.avgpool(x)
x = torch.flatten(x, 1)
x = self.classifier(x)
return x
def _initialize_weights(self):
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
if m.bias is not None:
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.BatchNorm2d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.Linear):
nn.init.normal_(m.weight, 0, 0.01)
nn.init.constant_(m.bias, 0)
def make_layers(cfg, batch_norm=False):
layers = []
in_channels = 3
for v in cfg:
if v == 'M':
layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
else:
conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)
if batch_norm:
layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
else:
layers += [conv2d, nn.ReLU(inplace=True)]
in_channels = v
return nn.Sequential(*layers)
cfgs = {
'A': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
'B': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
'D': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],
'E': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'],
}
def _vgg(arch, cfg, batch_norm, pretrained, progress, **kwargs):
if pretrained:
kwargs['init_weights'] = False
model = VGG(make_layers(cfgs[cfg], batch_norm=batch_norm), **kwargs)
if pretrained:
state_dict = load_state_dict_from_url(model_urls[arch],
progress=progress)
model.load_state_dict(state_dict)
return model
def vgg11(pretrained=False, progress=True, **kwargs):
r"""VGG 11-layer model (configuration "A") from
`"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>`_
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr
"""
return _vgg('vgg11', 'A', False, pretrained, progress, **kwargs)
def vgg19_bn(pretrained=False, progress=True, **kwargs):
r"""VGG 19-layer model (configuration 'E') with batch normalization
`"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>`_
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr
"""
return _vgg('vgg19_bn', 'E', True, pretrained, progress, **kwargs)
TRAIN_DATA_PATH = '/content/images/train_set/'
TEST_DATA_PATH = '/content/images/test_set/'
BATCH_SIZE = 32
#Define transformations for the training set, flip the images randomly, crop out and apply mean and std normalization
train_transformations = transforms.Compose([
transforms.Resize((224,224)),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
train_set = torchvision.datasets.ImageFolder(root=TRAIN_DATA_PATH, transform=train_transformations)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=BATCH_SIZE, shuffle=True, num_workers=4)
#Define transformations for the test set
test_transformations = transforms.Compose([
#transforms.Resize(size=256),
#transforms.CenterCrop(size=224),
transforms.Resize((224,224)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
test_set = torchvision.datasets.ImageFolder(root=TEST_DATA_PATH, transform=test_transformations)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=BATCH_SIZE, shuffle=False, num_workers=4)
#Check if gpu support is available
cuda_avail = torch.cuda.is_available()
#Create model, optimizer and loss function
#model = vgg19_bn(in_channels=3,num_classes=20).to(device)
model = vgg19_bn()
if cuda_avail:
model.cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001,weight_decay=0.0001)
loss_fn = nn.CrossEntropyLoss()
#Create a learning rate adjustment function that divides the learning rate by 10 every 30 epochs
def adjust_learning_rate(epoch):
lr = 0.001
if epoch > 180:
lr = lr / 1000000
elif epoch > 150:
lr = lr / 100000
elif epoch > 120:
lr = lr / 10000
elif epoch > 90:
lr = lr / 1000
elif epoch > 60:
lr = lr / 100
elif epoch > 30:
lr = lr / 10
for param_group in optimizer.param_groups:
param_group["lr"] = lr
def save_models(epoch):
torch.save(model.state_dict(), "/content/VGG19ImageCNN_{}.model".format(epoch))
print("Checkpoint saved")
def test():
model.eval()
test_acc = 0.0
for i, (images, labels) in enumerate(test_loader):
if cuda_avail:
images = torch.as_tensor(images.cuda())
labels = torch.as_tensor(labels.cuda())
#Predict classes using images from the test set
outputs = model(images)
_,prediction = torch.max(outputs.data, 1)
#prediction = prediction.cpu()
test_acc += torch.sum(prediction == labels)
#Compute the average acc and loss over all 10000 test images
test_acc = test_acc/len(test_loader.dataset) #test_acc = test_acc / 200
return test_acc
def train(num_epochs):
best_acc = 0.0
for epoch in range(num_epochs):
model.train()
train_acc = 0.0
train_loss = 0.0
for i, (images, labels) in enumerate(train_loader):
#Move images and labels to gpu if available
if cuda_avail:
images = torch.as_tensor(images.cuda())
labels = torch.as_tensor(labels.cuda())
#Clear all accumulated gradients
optimizer.zero_grad()
#Predict classes using images from the test set
outputs = model(images)
#Compute the loss based on the predictions and actual labels
loss = loss_fn(outputs,labels)
#Backpropagate the loss
loss.backward()
#Adjust parameters according to the computed gradients
optimizer.step()
train_loss += loss.cpu().data * images.size(0) #train_loss += loss.cpu().data[0] * images.size(0)
_, prediction = torch.max(outputs.data, 1) #_, prediction = torch.max(outputs.data, 1)
train_acc += torch.sum(prediction == labels)
#Call the learning rate adjustment function
adjust_learning_rate(epoch)
#Compute the average acc and loss over all 800 training images
train_acc = train_acc/len(train_loader.dataset) #train_acc = train_acc / 800 # train_loss = train_loss/len(train_loader.dataset)
train_loss = train_loss/len(train_loader.dataset) #train_loss = train_loss / 800
#Evaluate on the test set
test_acc = test()
# Save the model if the test acc is greater than our current best
if test_acc > best_acc:
save_models(epoch)
best_acc = test_acc
# Print the metrics
print("Epoch {}, Train Accuracy: {} , TrainLoss: {} , Test Accuracy: {}".format(epoch, train_acc, train_loss, test_acc))
# Extracting Feature vectors from the saved model VGG19ImageCNN_88.model
filepath = "/content/VGG19ImageCNN_88.model"
if cuda_avail:
model.cuda()
for i, (images, labels) in enumerate(train_loader):
if cuda_avail:
images = torch.as_tensor(images.cuda())
#Predict classes using images from the test set
#outputs = model(images)
print("Shape of First image:",images.shape)
break
model.load_state_dict(torch.load(filepath))
model_ft = model
#print(model)
### strip the last layer with -1 and second last layer with -2
feature_extractor = torch.nn.Sequential(*list(model_ft.classifier.children())[:-2])
print("Feature extractor",feature_extractor)
### check this works
x = images
#print("Print x",x.shape)
output = feature_extractor(x) # output now has the features corresponding to input x
print("Shape of Ouput is:",output.shape)
# If extracted features are equivalent to 4096 dimensional vector then SAVE the extracted features
# To (.mat) file to feed them as input to another N
if __name__ == "__main__":
#train(200) # 200 is the number of epochs
I am still getting this error message.
RuntimeError: size mismatch, m1: [21504 x 224], m2: [25088 x 4096] at /pytorch/aten/src/THC/generic/THCTensorMathBlas.cu:283 |
st47932 | Your model seems to work fine using:
model = vgg19_bn()
x = torch.randn(1, 3, 224, 224)
out = model(x)
If you are getting the shape mismatch error using:
feature_extractor = torch.nn.Sequential(*list(model_ft.classifier.children())[:-2])
then note that nn.Sequential will only use all registered modules in a sequential way and will not apply the functional calls defined in the forward method.
Also, you are currently only using the model_ft.classifier.children(), which will skip all other layers, so that your input won’t have the desired shape.
If you want to get the 4096 features from the first linear layer, you could use:
model = vgg19_bn()
model.classifier = model.classifier[0] # only use first linear layer
or alternatively you could replace the last linear layer with nn.Identity(), which would apply the additional relu, dropout and linear layer:
# alternatively
model.classifier[6] = nn.Identity() # replace last linear layer with Identity |
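For reference, a minimal sketch of how the 4096-dim features could then be pulled out with the second approach (assuming torchvision's vgg19_bn and 224x224 inputs):
```python
import torch
import torch.nn as nn
from torchvision.models import vgg19_bn

model = vgg19_bn()
model.classifier[6] = nn.Identity()  # final class projection removed
model.eval()

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    features = model(x)
print(features.shape)  # torch.Size([1, 4096])
```
The resulting features tensor could then be saved (e.g. via scipy.io.savemat) and fed to the second network.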
st47933 | Hello ptrblck sir,
I’m facing an issue with view, could you please help me with it below is the error:
RuntimeError Traceback (most recent call last)
in
----> 1 learn.fit_one_cycle(2)
~\Anaconda3\envs\fastai_v1\lib\site-packages\fastai\train.py in fit_one_cycle(learn, cyc_len, max_lr, moms, div_factor, pct_start, final_div, wd, callbacks, tot_epochs, start_epoch)
21 callbacks.append(OneCycleScheduler(learn, max_lr, moms=moms, div_factor=div_factor, pct_start=pct_start,
22 final_div=final_div, tot_epochs=tot_epochs, start_epoch=start_epoch))
—> 23 learn.fit(cyc_len, max_lr, wd=wd, callbacks=callbacks)
24
25 def fit_fc(learn:Learner, tot_epochs:int=1, lr:float=defaults.lr, moms:Tuple[float,float]=(0.95,0.85), start_pct:float=0.72,
~\Anaconda3\envs\fastai_v1\lib\site-packages\fastai\basic_train.py in fit(self, epochs, lr, wd, callbacks)
198 else: self.opt.lr,self.opt.wd = lr,wd
199 callbacks = [cb(self) for cb in self.callback_fns + listify(defaults.extra_callback_fns)] + listify(callbacks)
–> 200 fit(epochs, self, metrics=self.metrics, callbacks=self.callbacks+callbacks)
201
202 def create_opt(self, lr:Floats, wd:Floats=0.)->None:
~\Anaconda3\envs\fastai_v1\lib\site-packages\fastai\basic_train.py in fit(epochs, learn, callbacks, metrics)
99 for xb,yb in progress_bar(learn.data.train_dl, parent=pbar):
100 xb, yb = cb_handler.on_batch_begin(xb, yb)
–> 101 loss = loss_batch(learn.model, xb, yb, learn.loss_func, learn.opt, cb_handler)
102 if cb_handler.on_batch_end(loss): break
103
~\Anaconda3\envs\fastai_v1\lib\site-packages\fastai\basic_train.py in loss_batch(model, xb, yb, loss_func, opt, cb_handler)
24 if not is_listy(xb): xb = [xb]
25 if not is_listy(yb): yb = [yb]
—> 26 out = model(*xb)
27 out = cb_handler.on_loss_begin(out)
28
~\Anaconda3\envs\fastai_v1\lib\site-packages\torch\nn\modules\module.py in call(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
–> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
in forward(self, x)
33 def forward(self, x):
34 x = self.seq1(x)
---> 35 x = x.view(-1, 126*126*16)
36 x = self.seq2(x)
37 x = self.seq3(x)
RuntimeError: shape ‘[-1, 254016]’ is invalid for input of size 1548800
and this is the code
class BuildingSegmenterNet(nn.Module):
def __init__(self):
super(BuildingSegmenterNet, self).__init__()
self.seq1 = nn.Sequential(
nn.Conv2d(3, 16, (5,5)),
nn.MaxPool2d((2,2))
)
self.seq2 = nn.Sequential(
nn.Linear((126*126*16), 512),
nn.ReLU()
)
self.dropout1 = nn.Dropout(0.33)
self.seq3 = nn.Sequential(
nn.Linear(512, 128),
nn.ReLU()
)
self.seq4 = nn.Sequential(
nn.Linear(128, 128),
nn.ReLU()
)
self.seq5 = nn.Sequential(
nn.Linear(128, 512),
nn.ReLU()
)
self.seq6 = nn.Sequential(
nn.Linear(512, 256*256),
nn.ReLU()
)
self.seq7 = nn.Sequential(
nn.Conv2d(1, 1, (3,3), padding = 1),
nn.Sigmoid()
)
def forward(self, x):
x = self.seq1(x)
x = x.view(-1, 126*126*16)
x = self.seq2(x)
x = self.seq3(x)
x = self.dropout1(x)
x = self.seq4(x)
x = self.seq5(x)
x = self.seq6(x)
x = x.view(x.size(0), -1)
x = self.seq7(x)
return x
My image size is 200*200 RGB and I have kept the batch size as 8.
Could you please help me to resolve this error.
thanking you in advance
Regareds,
Rahul Ramchandra Uppari. |
st47934 | Change this line:
x = x.view(-1, 126*126*16)
to
x = x.view(x.size(0), -1)
and rerun your code. The current view operation is failing, since x doesn’t contain a matching number of elements.
After you’ve changed the view operation, you might get a shape mismatch error in the next linear layer and should be able to fix it by changing the in_features of this layer.
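A quick way to find the right in_features is to run the conv part once on a dummy input and print the flattened shape (a sketch, assuming the 224x224 inputs the DataBunch produces):
```python
import torch
import torch.nn as nn

seq1 = nn.Sequential(nn.Conv2d(3, 16, (5, 5)), nn.MaxPool2d((2, 2)))
with torch.no_grad():
    out = seq1(torch.randn(1, 3, 224, 224))
print(out.shape)              # torch.Size([1, 16, 110, 110])
print(out.view(1, -1).shape)  # torch.Size([1, 193600]) -> in_features of the next Linear
```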
PS: you can post code snippets by wrapping them into three backticks ```, which would make debugging easier. |
st47935 | Many thanks ptrblck.
I’m new to coding, could you please help me with one more thing Im getting the below error and don’t know how to fix it.
RuntimeError Traceback (most recent call last)
in
----> 1 learn.fit_one_cycle(2)
~\Anaconda3\envs\fastai_v1\lib\site-packages\fastai\train.py in fit_one_cycle(learn, cyc_len, max_lr, moms, div_factor, pct_start, final_div, wd, callbacks, tot_epochs, start_epoch)
21 callbacks.append(OneCycleScheduler(learn, max_lr, moms=moms, div_factor=div_factor, pct_start=pct_start,
22 final_div=final_div, tot_epochs=tot_epochs, start_epoch=start_epoch))
—> 23 learn.fit(cyc_len, max_lr, wd=wd, callbacks=callbacks)
24
25 def fit_fc(learn:Learner, tot_epochs:int=1, lr:float=defaults.lr, moms:Tuple[float,float]=(0.95,0.85), start_pct:float=0.72,
~\Anaconda3\envs\fastai_v1\lib\site-packages\fastai\basic_train.py in fit(self, epochs, lr, wd, callbacks)
198 else: self.opt.lr,self.opt.wd = lr,wd
199 callbacks = [cb(self) for cb in self.callback_fns + listify(defaults.extra_callback_fns)] + listify(callbacks)
–> 200 fit(epochs, self, metrics=self.metrics, callbacks=self.callbacks+callbacks)
201
202 def create_opt(self, lr:Floats, wd:Floats=0.)->None:
~\Anaconda3\envs\fastai_v1\lib\site-packages\fastai\basic_train.py in fit(epochs, learn, callbacks, metrics)
99 for xb,yb in progress_bar(learn.data.train_dl, parent=pbar):
100 xb, yb = cb_handler.on_batch_begin(xb, yb)
–> 101 loss = loss_batch(learn.model, xb, yb, learn.loss_func, learn.opt, cb_handler)
102 if cb_handler.on_batch_end(loss): break
103
~\Anaconda3\envs\fastai_v1\lib\site-packages\fastai\basic_train.py in loss_batch(model, xb, yb, loss_func, opt, cb_handler)
24 if not is_listy(xb): xb = [xb]
25 if not is_listy(yb): yb = [yb]
—> 26 out = model(*xb)
27 out = cb_handler.on_loss_begin(out)
28
~\Anaconda3\envs\fastai_v1\lib\site-packages\torch\nn\modules\module.py in call(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
–> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
in forward(self, x)
34 x = self.seq1(x)
35 x = x.view(x.size(0), -1)
—> 36 x = self.seq2(x)
37 x = self.seq3(x)
38 x = self.dropout1(x)
~\Anaconda3\envs\fastai_v1\lib\site-packages\torch\nn\modules\module.py in call(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
–> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
~\Anaconda3\envs\fastai_v1\lib\site-packages\torch\nn\modules\container.py in forward(self, input)
98 def forward(self, input):
99 for module in self:
–> 100 input = module(input)
101 return input
102
~\Anaconda3\envs\fastai_v1\lib\site-packages\torch\nn\modules\module.py in call(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
–> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
~\Anaconda3\envs\fastai_v1\lib\site-packages\torch\nn\modules\linear.py in forward(self, input)
85
86 def forward(self, input):
—> 87 return F.linear(input, self.weight, self.bias)
88
89 def extra_repr(self):
~\Anaconda3\envs\fastai_v1\lib\site-packages\torch\nn\functional.py in linear(input, weight, bias)
1608 if input.dim() == 2 and bias is not None:
1609 # fused op is marginally faster
-> 1610 ret = torch.addmm(bias, input, weight.t())
1611 else:
1612 output = input.matmul(weight.t())
RuntimeError: size mismatch, m1: [8 x 193600], m2: [254016 x 512] at C:/cb/pytorch_1000000000000/work/aten/src\THC/generic/THCTensorMathBlas.cu:283 |
st47936 | That’s the shape mismatch error I’ve mentioned in my previous post.
Change the in_features of self.seq2 to 254016. |
st47937 | I have changed the in_features in self.seq2 to 254016 but still the error persists.
def __init__(self):
super(BuildingSegmenterNet, self).__init__()
self.seq1 = nn.Sequential(
nn.Conv2d(3, 16, (5,5)),
nn.MaxPool2d((2,2))
)
self.seq2 = nn.Sequential(
nn.Linear((254016), 512),
nn.ReLU()
)
self.dropout1 = nn.Dropout(0.33)
self.seq3 = nn.Sequential(
nn.Linear(512, 128),
nn.ReLU()
)
Have I made the proper changes as you mentioned |
st47938 | Could you post the complete model definition as well as the input shapes, so that I could rerun the code, please? |
st47939 | Sure ptrblck
Below is the code.
import fastai
from fastai import *
from fastai.vision import *
path = pathlib.Path('E:\dataset')
path
bs = 8
data = ImageDataBunch.from_folder(path=path,bs=bs,ds_tfms=get_transforms(),size=224,valid_pct=0.3)
data.normalize(imagenet_stats)
class SegmentationCrossEntropyLoss(nn.Module):
def __init__(self):
super(SegmentationCrossEntropyLoss, self).__init__()
def forward(self, preds, targets):
loss = nn.functional.binary_cross_entropy_with_logits(preds, targets.float())
return loss
class BuildingSegmenterNet(nn.Module):
def __init__(self):
super(BuildingSegmenterNet, self).__init__()
self.seq1 = nn.Sequential(
nn.Conv2d(3, 16, (5,5)),
nn.MaxPool2d((2,2))
)
self.seq2 = nn.Sequential(
nn.Linear((254016), 512),
nn.ReLU()
)
self.dropout1 = nn.Dropout(0.33)
self.seq3 = nn.Sequential(
nn.Linear(512, 128),
nn.ReLU()
)
self.seq4 = nn.Sequential(
nn.Linear(128, 128),
nn.ReLU()
)
self.seq5 = nn.Sequential(
nn.Linear(128, 512),
nn.ReLU()
)
self.seq6 = nn.Sequential(
nn.Linear(512, 256*256),
nn.ReLU()
)
self.seq7 = nn.Sequential(
nn.Conv2d(1, 1, (3,3), padding = 1),
nn.Sigmoid()
)
def forward(self, x):
x = self.seq1(x)
x = x.view(x.size(0), -1)
x = self.seq2(x)
x = self.seq3(x)
x = self.dropout1(x)
x = self.seq4(x)
x = self.seq5(x)
x = self.seq6(x)
x = x.view(-1, 1, 256, 256)
x = self.seq7(x)
return x
learn = Learner(data = data, model=BuildingSegmenterNet(), loss_func = SegmentationCrossEntropyLoss(), metrics = accuracy)
learn.fit_one_cycle(2) |
st47940 | Thanks for the code.
My previous recommendation was wrong: the in_features of seq2 were already set to 254016 and should instead be set to 110*110*16=193600:
self.seq2 = nn.Sequential(
nn.Linear((110*110*16), 512),
nn.ReLU()
) |
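With that change applied, a dummy-batch sanity check should run end to end (a sketch; 224x224 is the size the DataBunch resizes to):
```python
model = BuildingSegmenterNet()   # with in_features=110*110*16 in seq2
x = torch.randn(8, 3, 224, 224)  # batch size 8 as in the DataBunch
out = model(x)
print(out.shape)                 # torch.Size([8, 1, 256, 256])
```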
st47941 | I have two tensors I want to update “at the same time”, and those updates depend on each other, inside a for loop. In “general Python”, I can update them on one line:
for i in range(10):
x, y = x + y/2, y + x/2
x = some_nn(x)
y = some_nn(y)
Is this safe to do in PyTorch in the forward method? Specifically is it ok for autograd, and for eventual multiple GPU training?
Alternatively, I think, I could do this as
for i in range(10):
x_old = x.clone() # possibly with .detach(), too?
y_old = y.clone()
x = x + y_old/2
y = y + x_old/2
x = some_nn(x)
y = some_nn(y)
(Of course, I could put some_nn a line earlier in both cases, but I don’t think that’s important here.)
Is there a reason to choose one method over the other, or is there a better method I haven’t thought of? |
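For what it's worth, one way to check how autograd handles the tuple-assignment version is a gradcheck on a small double-precision toy case (a sketch; some_nn is replaced here by a single shared linear layer, which is an assumption):
```python
import torch
import torch.nn as nn

lin = nn.Linear(4, 4).double()

def step(x, y):
    for _ in range(3):
        x, y = x + y / 2, y + x / 2  # RHS is evaluated before the names are rebound
        x = lin(x)
        y = lin(y)
    return (x + y).sum()

x0 = torch.randn(2, 4, dtype=torch.double, requires_grad=True)
y0 = torch.randn(2, 4, dtype=torch.double, requires_grad=True)
print(torch.autograd.gradcheck(step, (x0, y0)))  # True if the gradients check out
```
Since the right-hand side of a tuple assignment is evaluated before either name is rebound, the clone-based variant should compute the same thing as long as the clones are not detached.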
st47942 | Hi, does anybody know why this error is raised? It points at if (labels[i] == predicted[i]):.
I am working on image segmentation, with both my images and labels being .tiff files. I have checked to ensure that my labels and predicted are tensors, so I am confused as to why a boolean issue arises when both are tensors.
‘’’ with torch.no_grad():
# accuracy of each class
n_classes_correct = [0 for i in range(self.numClass)]
n_classes_samples = [0 for i in range(self.numClass)]
cmatrix = np.zeros((self.numClass,self.numClass),np.int16)
self.model = self.model.eval()
eval_loss = 0
itera = len(self.evalloader)
for i, (images,labels) in tqdm(enumerate(self.evalloader), total=itera):
images, labels = map(lambda x:x.to(device),[images,labels])
#The output of a label should be a tensor and not a tuple, if it is, look back at your y_label output of your dataset (make sure it is a tensor or a int to be able to convert into a tensor)
outputs = self.model(images)
# overall accuracy of model
_, predicted = torch.max(outputs, 1)
self.n_samples += labels.size(0)
self.n_correct += (predicted == labels).sum().item()
# loss = self.loss_function(outputs, labels,weight=torch.FloatTensor([0.2,1.0,0.4,0]).to(device))
# eval_loss += loss.item()
# for confusion matrix later
for j, k in zip(labels.cpu().numpy().flatten(),predicted.cpu().numpy().flatten()):
cmatrix[j,k] += 1
print(images,labels,predicted)
print('\n')
print(type(predicted),type(labels))
for i in range(labels.size(0)):
print(i)
print(labels[i], predicted[i])
if (labels[i] == predicted[i]):
n_classes_correct[labels[i]] += 1
n_classes_samples[labels[i]] += 1 '''
The error message
‘’’---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
in
3 # classes = [“E_needleleaf”,“E_broadleaf”, “D_needleleaf”, “D_broadleaf”, “MixedForest”, “Closeshrublands”, “Openshrublands”, “WoodySavannas”, “Savannas”, “Grasslands”, “PermWetland”, “Cropland”, “Urban”, “VegeMosaics”, “Snow&Ice”, “Barren”, “WaterBodies”]
4 checkpoint = None
----> 5 train = trainer(imgdir= imgdir, classes = classes, reloadmode=‘same’, num_epochs = 5)
6 train
in init(self, imgdir, classes, num_epochs, reloadmode, checkpoint, bs, report)
158
159 print('\n'+'*'*6+'EVAL FOR ONE EPOCH'+'*'*6)
–> 160 overacc = self.evali()
161
162 if self.bestAccuracy is None or overacc >= self.bestAccuracy or reloadmode == ‘different’:
in evali(self)
334 print(i)
335 print(labels[i], predicted[i])
–> 336 if (labels[i] == predicted[i]):
337 n_classes_correct[labels[i]] += 1
338 n_classes_samples[labels[i]] += 1
RuntimeError: Boolean value of Tensor with more than one value is ambiguous ‘’’ |
st47944 | This error is raised, if you are trying to compare more than a single value without calling all() or any() on the result.
This comparison:
labels[i] == predicted[i]
is apparently creating multiple result values so you could check the shape of labels[i] and predicted[i] and make sure to apply the right behavior. |
st47945 | hmm would it be possible to give an example of how i can implement the torch.all() function? Thanks alot! |
st47946 | (labels[i] == predicted[i]).all() should work. However, I don’t think this will yield your desired result as it seems you expect to compare scalar values instead of tensors, so I would still recommend to check the shape and make sure the indexing works as expected. |
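A tiny illustration of the difference (hypothetical tensors):
```python
import torch

a = torch.tensor([1, 2, 3])
b = torch.tensor([1, 2, 0])
print(a == b)          # tensor([ True,  True, False]) -> ambiguous inside an `if`
print((a == b).all())  # tensor(False) -> usable as a single condition
print((a == b).sum())  # tensor(2)     -> per-element match count, often what you want per pixel
```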
st47947 | Hi, I'm training a classification model with a large number of classes (40000), and I want each batch to contain no more than one sample from each class.
I've tried to create a custom sampler, but it had too many edge cases; I also tried to fix things up after I already had the batch "in my hands", which suffered from the same issue.
Do you have any idea how to do this simply?
Of course, my batch size << number of classes.
Thanks for your help! |
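In case it helps, one possible sketch of a batch sampler that draws at most one sample per class per batch (labels is assumed to be a list/array holding the class index of every dataset item; the names are illustrative):
```python
import random
from collections import defaultdict
from torch.utils.data import Sampler

class OnePerClassBatchSampler(Sampler):
    def __init__(self, labels, batch_size):
        self.batch_size = batch_size
        self.by_class = defaultdict(list)
        for idx, lab in enumerate(labels):
            self.by_class[lab].append(idx)

    def __iter__(self):
        # one random index per class, classes shuffled each epoch
        picks = [random.choice(idxs) for idxs in self.by_class.values()]
        random.shuffle(picks)
        for i in range(0, len(picks), self.batch_size):
            yield picks[i:i + self.batch_size]

    def __len__(self):
        return (len(self.by_class) + self.batch_size - 1) // self.batch_size

# usage: DataLoader(dataset, batch_sampler=OnePerClassBatchSampler(labels, batch_size=64))
```
Each batch then contains distinct classes by construction; the caveat is that only one sample per class is visited per epoch, so the picks would need to be re-drawn (or the epoch repeated) to eventually cover all samples.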
st47948 | Hi, could I set different batch sizes for training and validation? The GPU RAM is not enough.
For example,
Training: batchsize 128
Validation: batchsize 1
I think the batch size for validation will not affect the validation loss and accuracy, right? |
st47950 | Yes, you can use different batch sizes and the batch size during evaluation (after calling model.eval()) will not affect the validation results.
Are you using larger inputs during the validation or why do you have to reduce the batch size by 128x? |
st47951 | Now I am using batch size 128 for both training and validation but the gpu ram (2080Ti 11G) is full.
By the way, my task is to combine image model and language model to classify. I am not sure whether my model is too large or not.
There are 443,757 question for training 214,354 for validation. I think the batch size 128 is a little bit small. The training time is almost 2.5 hours per epoch. It really drives me crazy… |
st47952 | You can reduce the memory usage during validation by wrapping the validation loop in a with torch.no_grad() block, which will make sure to not store the intermediate activations, which would be needed to calculate the gradients. If you aren’t using it already, you might be able to increase the batch size during validation further and speed up this loop. |
st47953 | Hi, I am trying to implement an LSTM model to predict coronavirus cases and I got this error : RuntimeError: Expected hidden[0] size (2, 134, 14), got (2, 14, 14)
Here is some informations :
Len test : 47
Len validation : 37
Len train : 149
seq_len = 14
input_size = 1
output_size = 1
hidden_dim = 14
n_layers = 2
dropout = 0.4
num_epochs = 10
and the model is that :
class LSTM(nn.Module):
def __init__(self, input_size, output_size, hidden_dim, n_layers,seq_len,dropout):
super(LSTM, self).__init__()
self.hidden_dim=hidden_dim
self.lstm = nn.LSTM(input_size, hidden_dim, n_layers, batch_first=True, dropout = dropout)
self.fc = nn.Linear(hidden_dim, output_size)
def reset_hidden_state(self):
self.hidden = (
torch.zeros(n_layers, seq_len, hidden_dim),
torch.zeros(n_layers, seq_len, hidden_dim))
def forward(self, sequences):
lstm_out, self.hidden = self.lstm(
sequences.view(len(sequences), seq_len, -1),
self.hidden)
last_time_step = lstm_out.view(seq_len, len(sequences), self.n_hidden)[-1]
y_pred = self.linear(last_time_step)
return y_pred
What can I do to solve this problem ? |
st47954 | There are a few issues in your code:
the states should have the shape [num_layers * num_directions, batch_size, hidden_size], while your self.hidden tuple contains the seq_len.
if you want to permute the dimensions in sequences using sequences.view(len(sequences), seq_len, -1), note that you would interleave the data, so use sequences.permute instead.
the same applies for lstm_out.view. Swapping dimensions should be done with permute. Since you are using batch_first=True, you can instead index the output via lstm_out[:, -1, :] without any permutation.
self.linear is undefined and should be replaced with self.fc.
The docs also provide additional information about the expected shapes. |
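Putting those points together, a corrected sketch of the module could look like this (with batch_first=True the states are [num_layers, batch_size, hidden_dim]):
```python
import torch
import torch.nn as nn

class LSTMRegressor(nn.Module):
    def __init__(self, input_size, output_size, hidden_dim, n_layers, dropout):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.n_layers = n_layers
        self.lstm = nn.LSTM(input_size, hidden_dim, n_layers,
                            batch_first=True, dropout=dropout)
        self.fc = nn.Linear(hidden_dim, output_size)

    def forward(self, sequences):
        # sequences: [batch_size, seq_len, input_size]
        batch_size = sequences.size(0)
        h0 = torch.zeros(self.n_layers, batch_size, self.hidden_dim,
                         device=sequences.device)
        c0 = torch.zeros(self.n_layers, batch_size, self.hidden_dim,
                         device=sequences.device)
        lstm_out, _ = self.lstm(sequences, (h0, c0))
        last_time_step = lstm_out[:, -1, :]  # last step of every sequence
        return self.fc(last_time_step)

model = LSTMRegressor(input_size=1, output_size=1, hidden_dim=14, n_layers=2, dropout=0.4)
out = model(torch.randn(134, 14, 1))  # e.g. a batch of 134 sequences of length 14
print(out.shape)                      # torch.Size([134, 1])
```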
st47955 | I use a GAN in an NLP task. The discriminator's loss is approximately stable at 0.8, while the generator's is only around 0.3. This pic shows the loss during my training; does that mean my discriminator is weak? How can I strengthen my discriminator? |
st47956 | Hi All,
I am using the custom loss function:
loss = torch.mean(torch.square(torch.sqrt(y_true + 1e-10) - torch.sqrt(y_predict + 1e-10)) + 10*torch.square(torch.square(torch.sqrt(y_true + 1e-10) - torch.sqrt(y_predict + 1e-10))))
After some iteration, I am getting below error.
[W python_anomaly_mode.cpp:60] Warning: Error detected in PowBackward0. Traceback of forward call that caused the error:
File “main.py”, line 38, in
main()
File “main.py”, line 34, in main
train(dataloader_train=train_dl, dataloader_eval=valid_dl, model=model, hyper_params=train_params, device=‘cuda’)
File “train_model.py”, line 81, in train
loss = my_cost(outputs,labels)
File “train_model.py”, line 15, in my_cost
loss = torch.mean(torch.square(torch.sqrt(y_true + 1e-10) - torch.sqrt(y_predict + 1e-10)) + 10*torch.square(torch.square(torch.sqrt(y_true + 1e-10) - torch.sqrt(y_predict + 1e-10))))
(function print_stack)
Traceback (most recent call last):
File “main.py”, line 38, in
main()
File “main.py”, line 34, in main
train(dataloader_train=train_dl, dataloader_eval=valid_dl, model=model, hyper_params=train_params, device=‘cuda’)
File “train_model.py”, line 83, in train
loss.backward()
File “/usr/local/lib/python3.6/dist-packages/torch/tensor.py”, line 185, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File “/usr/local/lib/python3.6/dist-packages/torch/autograd/init.py”, line 127, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: Function ‘PowBackward0’ returned nan values in its 0th output.
My final output is after relu activation, so I am sending only +ve values to the sqrt function |
st47957 | Maybe you could output the value of y_true and y_predict when the exception happens to be sure. |
st47958 | Thank you the reply Tom.
I have printed the estimated output when the exception happened.
y_true::tensor([[[0.9508, 0.9464, 0.9941, …, 0.1872, 0.4230, 0.4505],
[0.9412, 0.9590, 0.9167, …, 0.0199, 0.0446, 0.0476],
[1.0088, 0.9939, 1.1853, …, 0.0752, 0.1353, 0.1411],
…,
[1.0073, 1.0330, 1.0652, …, 0.3139, 0.7555, 0.8156],
[0.9773, 0.9773, 0.9945, …, 0.6462, 0.8663, 0.8789],
[1.0590, 1.0328, 1.0088, …, 0.8305, 0.9132, 0.9175]]],
device=‘cuda:0’)
y_predict::tensor([[[0.2288, 0.0000, 0.0000, …, 0.0000, 0.0000, 0.1711],
[0.2288, 0.0000, 0.0000, …, 0.0000, 0.0000, 0.1711],
[0.2288, 0.0000, 0.0000, …, 0.0000, 0.0000, 0.1711],
…,
[0.2288, 0.0000, 0.0000, …, 0.0000, 0.0000, 0.1711],
[0.2288, 0.0000, 0.0000, …, 0.0000, 0.0000, 0.1711],
[0.2288, 0.0000, 0.0000, …, 0.0000, 0.0000, 0.1711]]],
device=‘cuda:0’, grad_fn=) |
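One common mitigation worth trying here (an assumption on my part, not a confirmed root cause) is to clamp the arguments before the square root so the backward of sqrt never sees values at or below zero:
```python
import torch

def my_cost(y_predict, y_true, eps=1e-8):
    # hypothetical rewrite of the loss above with clamped sqrt inputs
    rt = torch.sqrt(torch.clamp(y_true, min=0.0) + eps)
    rp = torch.sqrt(torch.clamp(y_predict, min=0.0) + eps)
    d2 = (rt - rp) ** 2
    return torch.mean(d2 + 10 * d2 ** 2)
```
If the values themselves are already finite, gradient clipping (torch.nn.utils.clip_grad_norm_ after loss.backward()) is another common safeguard.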
st47959 | Given two patterns/images A and B, is there any loss function that would compute how these patterns are similar? |
st47960 | There are lots of different loss functions you could use, depending on how simple you want to make things, and what aspect of image similarity you care about. It’s a huge field with lots of ongoing active research. Some keywords to search are “perceptual similarity” “image similarity metric” “structural similarity”.
The simplest thing you could do is use MSE (Mean Squared Error). |
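For the simplest option, a per-pixel MSE between two image tensors is a one-liner (a sketch, assuming float images in [0, 1]); structural metrics such as SSIM or LPIPS need dedicated implementations:
```python
import torch
import torch.nn.functional as F

a = torch.rand(1, 3, 64, 64)
b = torch.rand(1, 3, 64, 64)
mse = F.mse_loss(a, b)              # 0 for identical images, grows with dissimilarity
psnr = 10 * torch.log10(1.0 / mse)  # a common derived similarity score
print(mse.item(), psnr.item())
```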
st47961 | I am writing a class for a restricted Boltzmann machine using PyTorch. I am using the weights and biases within a call to the LBFGS optimizer, so I'd like to have all the weights and biases as Parameters. However, when I try to initialize them as CUDA tensors, I am getting the following error:
TypeError: cannot assign 'torch.cuda.FloatTensor' as parameter 'W' (torch.nn.Parameter or None expected)
Here is my code:
class RBM(nn.Module):
def __init__(self,
visible_units = 256,
hidden_units = 64,
epsilonw = 0.1, #learning rate for weights
epsilonvb = 0.1, #learning rate for visible unit biases
epsilonhb = 0.1, #learning rate for hidden unit biases
weightcost = 0.0002,
initialmomentum = 0.5,
finalmomentum = 0.9,
use_gpu = False
):
super(RBM,self).__init__()
self.desc = "RBM"
self.visible_units = visible_units
self.hidden_units = hidden_units
self.epsilonw = epsilonw
self.epsilonvb = epsilonvb
self.epsilonhb = epsilonhb
self.weightcost = weightcost
self.initialmomentum = initialmomentum
self.finalmomentum = finalmomentum
self.use_gpu = use_gpu
self.device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
self.h_bias = nn.Parameter(torch.zeros(self.hidden_units)) #hidden layer bias
self.v_bias = nn.Parameter(torch.zeros(self.visible_units)) #visible layer bias
if self.use_gpu:
self.W = nn.Parameter(0.1*torch.randn(self.visible_units, self.hidden_units),
device=self.device)
self.v_bias = nn.Parameter(torch.zeros(self.hidden_units),
device=self.device)
self.h_bias = nn.Parameter(torch.zeros(self.visible_units),
device=self.device)
else:
self.W = nn.Parameter(0.1*torch.randn(self.visible_units, self.hidden_units))
self.h_bias = nn.Parameter(torch.zeros(self.hidden_units)) #hidden layer bias
self.v_bias = nn.Parameter(torch.zeros(self.visible_units)) #visible layer bias
The full traceback for the call is:
from RBM import RBM
rbm = RBM(use_gpu=1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/deep_autoencoder/RBM.py", line 54, in __init__
self.W = nn.Parameter(0.1*torch.randn(self.visible_units, self.hidden_units),
File "/home/anaconda3/envs/torch_env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 792, in __setattr__
raise TypeError("cannot assign '{}' as parameter '{}' "
TypeError: cannot assign 'torch.cuda.FloatTensor' as parameter 'W' (torch.nn.Parameter or None expected)
I’m not sure what I’m doing wrong here. The code I’ve written seems to reflect the best advice I’ve seen in this forum. |
st47962 | Hi,
As the message mentions, it should be either a Parameter or None.
So you most likely want to wrap your Tensor into a Parameter before setting it? |
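A sketch of the two usual patterns — either wrap the (possibly CUDA) tensor in nn.Parameter before assigning it, or register plain parameters and move the whole module afterwards:
```python
import torch
import torch.nn as nn

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# option 1: create the tensor on the device, then wrap it
W = nn.Parameter(0.1 * torch.randn(256, 64, device=device))

# option 2: register plain parameters and move the module as a whole
class RBM(nn.Module):
    def __init__(self, visible_units=256, hidden_units=64):
        super().__init__()
        self.W = nn.Parameter(0.1 * torch.randn(visible_units, hidden_units))
        self.v_bias = nn.Parameter(torch.zeros(visible_units))
        self.h_bias = nn.Parameter(torch.zeros(hidden_units))

rbm = RBM().to(device)  # parameters stay nn.Parameter instances, now on the GPU
```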
st47963 | Hello,
I am new to pytorch (I’m using 1.6.0), and I know that this topic or similar has a number of entries, but after studying them I can’t yet see the problem with my code, and would appreciate help with this. I define the following model:
import torch
“”" Model definition “”"
class NNModel( torch.nn.Module ):
def __init__( self, nFeatures, nNeurons ):
"""
The model consists of two hidden layers with tanh activation and a single neuron output
from the third layer. The input to the first layer is a tensor containing the input features
(nFeatures); the output of the third layer is a single number.
"""
super( NNModel, self).__init__()
self.linear1 = torch.nn.Linear( nFeatures, nNeurons )
self.activn1 = torch.nn.Tanh()
self.linear2 = torch.nn.Linear( nNeurons, 1 )
self.activn2 = torch.nn.Tanh()
def forward( self, x ):
"""
x is a tensor containing all symmetry functions for the present configuration; therefore
it has dimensions (nObservations, nFeatures). The model must loop over each observation,
calculating the contribution of each one to the output (the sum of them).
"""
nObservations, _ = x.shape
z = torch.zeros( nObservations, requires_grad = True )
for n in range( nObservations ):
y = self.linear1( x[n,:] )
y = self.activn1( y )
y = self.linear2( y )
z[n] = self.activn2( y )
addition = z.sum()
return addition
My loss function and optimizer are:
lossFunction = torch.nn.MSELoss( reduction = 'sum' )
optimizer = torch.optim.SGD( model.parameters(), lr=1.0e-4 )
and I run this in a loop like so:
for t in range( 500 ):
# forward pass
for n in range( nCases ):
y_pred[n] = model( sym[n] )
# compute and print loss
loss = lossFunction( y_pred, energy )
print( t, loss.item() )
# zero gradients, perform a backward pass and update weights
optimizer.zero_grad()
loss.backward( )
optimizer.step()
The first pass through the loop prints a loss value, but on the next iteration the program crashes with the known RuntimeError: leaf variable has been moved into the graph interior problem.
I guess this has to do with the loop over nObservations in the forward function definition, but I do not understand why, nor what I can do to solve this problem. Any help would be appreciated. Thanks! |
st47964 | Hi,
This error message has been improved on master. If you use a nightly build, it should raise an error directly at the place where the faulty inplace op happens.
This happens because you modify a view of a leaf Tensor in place, and such an inplace op is not allowed. In this case I guess this is your z.
Note that if you don’t actually need the .grad field on z, you can create it with requires_grad=False and that should solve the error. |
st47965 | Thank you albanD; I did install the nightly build and indeed it gives a slightly different output message, however I am still lost as to why my code doesn’t work. I will try to tinker with it some more and see if I can make some progress. Thanks for your help. |
st47966 | The updated error should point at the line that is doing an inplace operation that is not allowed.
But for the code above, I think changing z = torch.zeros( nObservations, requires_grad = True ) to z = torch.zeros( nObservations) should work. |
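As a side note, the in-place assignment can also be avoided entirely by collecting the per-observation outputs in a list and stacking them (a sketch of the same forward):
```python
def forward(self, x):
    outputs = []
    for n in range(x.shape[0]):
        y = self.activn1(self.linear1(x[n, :]))
        y = self.activn2(self.linear2(y))
        outputs.append(y)
    return torch.stack(outputs).sum()
```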
st47967 | Thank you so much, albanD. Yes, now it appears to work fine, though I need to do more testing. However, I also need to progress in my understanding: setting the flag requires_grad = True in z appeared to me to be necessary in order to be able to compute the gradient of the value returned by forward() and thus optimize the model parameters.
Thanks again. |