st31368
|
@ptrblck Hi, ptrblck. Could you please explain it in detail? Since the error is “expected Variable[CPUType]”, while your comment is “creating new tensors on the CPU”. In addition, I don’t understand why we need to modify the code after setting model = model.to('cuda')
|
st31369
|
helloworld2021:
Could you please explain it in detail? Since the error is “expected Variable[CPUType]”, while your comment is “creating new tensors on the CPU”.
The error message tries to be helpful in claiming which type is expected and which types were found.
However, this expectation could be a guess in case both types would be valid for this operation and, if I’m not mistaken, the error message uses the parameter as the expected type and the input as the wrong one (as done here), since this issue is more common.
helloworld2021:
In addition, I don’t understand why we need to modify the code after setting model = model.to('cuda')
In this case, each forward pass creates new tensors in the forward method without using a device agnostic approach:
def forward(self, x, adj_mat):
    weight_prod = torch.DoubleTensor(self.weight_mat(x))
Since no device attribute was used, the new weight_prod tensors will be created on the CPU by default.
Note that this approach would also detach weight_prod from the computation graph, so I would stick to my suggestion in using .double() if the dtype should be changed.
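A minimal sketch of that .double() approach (assuming a CUDA device is available and using an nn.Linear as a stand-in for self.weight_mat):
import torch
import torch.nn as nn

layer = nn.Linear(4, 4).to('cuda')                # stand-in for self.weight_mat
x = torch.randn(2, 4, device='cuda')
out = layer(x).double()                           # float64, stays on the GPU, keeps the graph
print(out.dtype, out.device, out.requires_grad)   # torch.float64 cuda:0 True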
|
st31370
|
Thank you for your help. Based on previous study and your comments, if I’m not mistaken, the error is due to transferring tensors from CPU to GPU (model = model.to('cuda')). After this operation, self.weight_mat(x) is on the GPU, but the operation torch.DoubleTensor(self.weight_mat(x)) can only be done on the CPU.
However, I don’t fully understand your solution. Since the current error comes from weight_prod = torch.DoubleTensor(self.weight_mat(x)), why do you suggest operating on other variables, out = out.double()?
How about just setting
weight_prod = torch.nn.Parameter(self.weight_mat(x))
|
st31371
|
helloworld2021:
but the operation torch.DoubleTensor(self.weight_mat(x)) can only be done on cpu.
This operation is explicitly creating a CPUTensor, so it’s not a limitation of where this operation can be executed, but what it’s used for.
helloworld2021:
However, I don’t fully understand your solution.
What the user tried to do seemed to be a transformation of the float32 tensor to a float64 tensor (you could ask him to double check).
This can be easily done via the tensor.double() operation, which neither will change the device (no device mismatch) nor will it detach the tensor.
Your approach creates a new parameter (which won’t be optimized, as it depends on the input x, is recreated in each iteration, and is thus unknown to the optimizer), which will also detach the operation from the computation graph and won’t change the dtype to float64, which seems to be the original use case.
|
st31372
|
Thank you for your reply. If I have to build a tensor in the forward function, how can I avoid the conflict between CPU tensors and GPU tensors after setting model.cuda()?
In this thread, apaszke just recommends using nn.Parameter. However, in the discussion above, you said
creates a new parameter (which won’t be optimized, as it depends on the input x, is recreated in each iteration, and is thus unknown to the optimizer), which will also detach the operation from the computation graph
For me, a beginner in PyTorch, it’s somewhat confusing. I can understand your comments on the effects of nn.Parameter in the forward function (the optimizer cannot work on it), but then what is the right way to build a tensor in the forward function when the tensor is related to both the weight parameter and the output at the same time?
Thank you!
Thank you!
|
st31373
|
If you want to create a trainable parameter or a tensor, which should be registered in the module during its initialization, you could use an nn.Parameter or register the tensor via self.register_buffer in the __init__ method.
This will make sure to transfer the tensors to the appropriate device when model.to() is called.
However, if you want to create a new tensor in the forward method (which is different than the original question), you could reuse the .device attribute of a known parameter or the input:
def forward(self, x):
    my_new_tensor = torch.randn(1, device=x.device)
    x = x + my_new_tensor
    return x
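For the __init__ case mentioned above, a minimal sketch (module and tensor names are illustrative):
import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(4, 4))   # trainable, visible to the optimizer
        self.register_buffer('scale', torch.ones(4))    # not trainable, but moved by .to()

    def forward(self, x):
        return x @ self.weight * self.scale

model = MyModule().to('cuda')
print(model.weight.device, model.scale.device)          # both cuda:0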
|
st31374
|
I’m grateful for your help. Now I think I should avoid using nn.parameter in the forward function and use your method instead.
Thanks again
|
st31375
|
RuntimeError: Expected 5-dimensional input for 5-dimensional weight [32, 3, 1, 5, 5], but got 4-dimensional input of size [3, 256, 128, 128] instead
|
st31376
|
Based on the error message it seems you are using an nn.Conv3d module, which expects a 5-dimensional tensor as the input in the shape [batch_size, channels, depth, height, width], while one of these dimensions is missing (given the posted shape, I guess it could be the channel dimension).
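Since the posted weight has in_channels=3, one hedged guess is that the batch dimension is the one missing, in which case unsqueeze would make the shapes line up:
import torch
import torch.nn as nn

conv = nn.Conv3d(in_channels=3, out_channels=32, kernel_size=(1, 5, 5))
x = torch.randn(3, 256, 128, 128)   # the 4-dimensional shape from the error message
x = x.unsqueeze(0)                  # add a batch dimension -> [1, 3, 256, 128, 128]
out = conv(x)
print(out.shape)                    # torch.Size([1, 32, 256, 124, 124])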
|
st31377
|
x_train, x_test, y_train, y_test = load_data(test_size=0.25)
n_epoch=50
model = MLPClassifier(alpha=0.01, batch_size=128, epsilon=1e-08, hidden_layer_sizes=(300,), learning_rate='adaptive',max_iter=500,early_stopping=True)
model.fit(x_train, y_train)
y_pred = model.predict(x_test)
accuracy = accuracy_score(y_true=y_test, y_pred=y_pred)
print(classification_report(y_pred, y_test))
As you can see, the code is able to compute the accuracy, classification report, and confusion matrix.
But I don’t know how to plot in this situation.
I tried the code below using some online resource, but I am getting a small error.
scores_train = []
scores_test = []
epoch = 0
while epoch < n_epoch:
    print('epoch: ', epoch)
    # SHUFFLING
    random_perm = np.random.permutation(x_train.shape[0])
    mini_batch_index = 0
    while True:
        # MINI-BATCH
        indices = random_perm[mini_batch_index:mini_batch_index + 128]
        model.partial_fit(x_train[indices], y_train[indices], classes=7)
        mini_batch_index += 128
        if mini_batch_index >= x_train.shape[0]:
            break
    # SCORE TRAIN
    scores_train.append(model.score(x_train, y_train))
    # SCORE TEST
    scores_test.append(model.score(x_test, y_test))
    epoch += 1
plt.plot(scores_train, color='green', alpha=0.8, label='Train')
plt.plot(scores_test, color='magenta', alpha=0.8, label='Test')
plt.title("Accuracy over epochs", fontsize=14)
plt.xlabel('Epochs')
plt.legend(loc='upper left')
plt.show()
The error is: TypeError: only integer scalar arrays can be converted to a scalar index at the line
model.partial_fit(x_train[indices], y_train[indices], classes=7)
I know it’s not directly related to PyTorch, but I hope someone can guide me.
|
st31378
|
Solved by krishna511 in post #11
|
st31379
|
Can you print types and shapes of x_train, y_train? type(x_train), x_train.shape, ...
|
st31380
|
I would recommend providing a minimum working example (MWE)
as a Colab Notebook in a Github Gist. Then people can quickly reproduce the problem.
Also,
krishna511:
scores_train = []
scores_test = []
epoch = 0
while epoch < n_epoch:
    print('epoch: ', epoch)
    # SHUFFLING
    random_perm = np.random.permutation(x_train.shape[0])
    mini_batch_index = 0
    while True:
        # MINI-BATCH
        indices = random_perm[mini_batch_index:mini_batch_index + 128]
        model.partial_fit(x_train[indices], y_train[indices], classes=7)
        mini_batch_index += 128
        if mini_batch_index >= x_train.shape[0]:
            break
    # SCORE TRAIN
    scores_train.append(model.score(x_train, y_train))
    # SCORE TEST
    scores_test.append(model.score(x_test, y_test))
    epoch += 1
You don’t have to do this manually. You could use from torch.utils.data import DataLoader and drop the last batch. Something like this:
train_set = VisionDataset(root=path, train=True, download=True, transform=transform)
train_loader = DataLoader(train_set, batch_size, drop_last=True, shuffle=True, num_workers=num_workers, pin_memory=True)
Then, for training an epoch, you can do something like:
for inputs, labels in train_loader:
    inputs, labels = inputs.to(self._device), labels.to(self._device)
    ...
|
st31381
|
Thank you @m3tobom_M for the reply. The shapes are:
print(x_train.shape)
(360, 180)
type(x_train)
Out[8]: numpy.ndarray
len(y_train)
Out[7]: 360
type(y_train)
Out[9]: list
|
st31382
|
Tried that sir @m3tobom_M, it’s not working.
Actually y_train contains strings, y_train[1] = ‘class_label’
|
st31383
|
Probably your class labels shouldn’t be a string and should be represented with numbers. I think you should do some research about your task. You can start with reading word embeddings. (I guess).
|
st31384
|
@m3tobom_M Sir, my task is classification, and MLPClassifier can deal with string-type targets.
But this loss and accuracy plot is the problem for me now.
|
st31385
|
@m3tobom_M Sir, I am following this link. I changed my x_train and y_train types to be exactly the same as in the given example; they are both
type(x_train)
Out[10]: numpy.ndarray
type(y_train)
Out[11]: numpy.ndarray
Now when I’m trying it, it’s giving the error
ValueError: Expected array-like (array or non-string sequence), got 7
The complete traceback is
model.partial_fit(x_train[indices], y_train[indices], classes=7)
File "C:\Users\krishna\Anaconda3\lib\site-packages\sklearn\neural_network\_multilayer_perceptron.py", line 1061, in _partial_fit
if _check_partial_fit_first_call(self, classes):
File "C:\Users\krishna\Anaconda3\lib\site-packages\sklearn\utils\multiclass.py", line 316, in _check_partial_fit_first_call
if not np.array_equal(clf.classes_, unique_labels(classes)):
File "C:\Users\krishna\Anaconda3\lib\site-packages\sklearn\utils\multiclass.py", line 77, in unique_labels
ys_types = set(type_of_target(x) for x in ys)
File "C:\Users\krishna\Anaconda3\lib\site-packages\sklearn\utils\multiclass.py", line 77, in <genexpr>
ys_types = set(type_of_target(x) for x in ys)
File "C:\Users\krishna\Anaconda3\lib\site-packages\sklearn\utils\multiclass.py", line 243, in type_of_target
raise ValueError('Expected array-like (array or non-string sequence), '
ValueError: Expected array-like (array or non-string sequence), got 7
@ptrblck sir you are my last hope
|
st31386
|
Got the result by just putting
early_stopping=False, warm_start=True
in MLPClassifier. I don’t know much about it, but it solved the purpose.
Thanks
|
st31387
|
I want to make a loss module with a trainable parameter, so I made the following:
class CustomLoss(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.gamma = torch.nn.Parameter(torch.FloatTensor([.5]))

    def forward(self, od_loss, depth_loss):
        print(self.gamma)
        loss = od_loss + self.gamma * depth_loss
        print(self.gamma)
        return loss

c_loss = CustomLoss()
c_loss.train()
total_loss = c_loss(od_loss, depth_loss)
total_loss.backward()
but printing the learnable parameter gamma shows that the parameter didn’t change.
|
st31388
|
Solved by ptrblck in post #2
|
st31389
|
The trainable parameters will be updated by the optimizer, once gradients were calculated in the backward() pass and optimizer.step() was called.
In your current code snippet you are printing the value of self.gamma before and after using it in the forward, so it’s expected that these values weren’t changed yet.
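A minimal sketch of the full update, reusing the CustomLoss module from above (the scalar losses are dummies for illustration):
import torch

c_loss = CustomLoss()
optimizer = torch.optim.SGD(c_loss.parameters(), lr=0.1)

od_loss = torch.tensor(1.0)
depth_loss = torch.tensor(2.0)

before = c_loss.gamma.item()
total_loss = c_loss(od_loss, depth_loss)
optimizer.zero_grad()
total_loss.backward()
optimizer.step()                       # gamma only changes here
print(before, c_loss.gamma.item())     # 0.5 -> 0.3 (grad of gamma is depth_loss = 2.0)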
|
st31390
|
Okay. But I have two optimizers, one for each loss (there are two models but I train them end to end), so I think (please correct me) I won’t put the CustomLoss parameters in either of them and I will give it its own optimizer, as sketched below.
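A hedged sketch of both options (the Linear is a hypothetical stand-in for one of the networks); either works, just don’t register the loss parameters in both:
import torch

model = torch.nn.Linear(4, 4)   # hypothetical stand-in for one of the two models
c_loss = CustomLoss()

# option 1: a dedicated optimizer for the loss parameters
opt_loss = torch.optim.Adam(c_loss.parameters(), lr=1e-3)

# option 2: add them to an existing optimizer as an extra param group
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
opt.add_param_group({'params': c_loss.parameters()})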
|
st31391
|
Hello,
I’m working in a virtual environment and I have installed pipenv, but when I try to install PyTorch 1.8.0 with CUDA with this command:
pipenv install git+https://github.com/pytorch/pytorch#egg=pytorch
I get this error:
Installing git+https://github.com/pytorch/pytorch#egg=pytorch…
WARNING: Expecting value: line 1 column 1 (char 0)
Installation Failed
Thanks
|
st31392
|
I’m not sure if this workflow would be supported, as I assume this would try to build the wheels from source. If that’s the intended use case, refer to this section, or alternatively install the wheels as described here.
|
st31393
|
Thanks for the answer. I just uninstalled and reinstalled via the wheels and it seems to be working well!
|
st31394
|
I am new to PyTorch.
I see that nn.Conv2d takes a PyTorch tensor x as input, and parameters such as in_channels, among others.
My question is: since the function performs a 2D convolution, in_channels would always be equal to the channel depth of the input tensor x, so why do we need to specify it as a parameter?
Thanks
|
st31395
|
Solved by mailcorahul in post #4
|
st31396
|
But the depth of the input tensor x is only known when forward is called…
So… forwarding and initializing the network simultaneously?
|
st31397
|
It is because nn.Conv2d in essence uses a 3D filter, i.e. filter_size x filter_size x input_channels. And as @alan_ayu pointed out, you need the filter size, input channels, and output channels to define Conv2d’s parameters.
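The weight shape makes this concrete; each of the out_channels filters spans all input channels:
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)
print(conv.weight.shape)   # torch.Size([16, 3, 3, 3]) -> [out_channels, in_channels, kH, kW]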
|
st31398
|
Additionally to what was said, have a look at CS231n, where the convolution operation including all shapes is well explained.
|
st31399
|
I believe every channel of, let’s say, an RGB image contributes to the overall properties of that image, meaning all of the channels must be taken into account when performing the convolution. Say you have just one filter: the convolution applies that filter to all 3 channels, which outputs 3 different results of the same dimension, and those 3 results are added.
|
st31400
|
Thanks for your reply.
Can we not define the filter size at runtime? Is that against standard PyTorch practice?
|
st31401
|
@ptrblck
Is there any way we could mimic the Keras Conv2D layer in PyTorch without the need to specify in_channels?
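For reference, newer PyTorch releases (1.8+) ship lazy modules that infer in_channels on the first forward pass; a minimal sketch:
import torch
import torch.nn as nn

conv = nn.LazyConv2d(out_channels=16, kernel_size=3)   # in_channels left unspecified
x = torch.randn(1, 3, 32, 32)
out = conv(x)                                          # first call materializes the weight
print(conv.weight.shape)                               # torch.Size([16, 3, 3, 3])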
|
st31402
|
I am training a model. It ran smoothly for 10 epochs, but at epoch 11, during iteration 1022, it gave an error. I think it’s related to batchnorm, but I am confused about how to tackle it. The autograd anomaly detector gives the following output. I am training in half precision mode with amp_opt_level 1.
(BS 8) loss: 0.6229: 41%|█████▋ | 1022/2505 [1:01:46<1:29:42, 3.63s/it][W python_anomaly_mode.cpp:60] Warning: Error detected in CudnnBatchNormBackward. Traceback of forward call that caused the error:
File "main.py", line 725, in <module>
main()
File "main.py", line 721, in main
processor.start()
File "main.py", line 657, in start
self.train(epoch, save_model=save_model)
File "main.py", line 502, in train
output = self.model(batch_data)
File "/home/linux/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/linux/phd_codes/models_pristine/MS-G3D_part_GCN_with_GRU_AE/model/msg3d.py", line 189, in forward
x = F.relu(self.sgcn1(x) + self.gcn3d1(x), inplace=True)
File "/home/linux/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/linux/anaconda3/lib/python3.8/site-packages/torch/nn/modules/container.py", line 117, in forward
input = module(input)
File "/home/linux/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/linux/phd_codes/models_pristine/MS-G3D_part_GCN_with_GRU_AE/model/ms_tcn.py", line 97, in forward
out = tempconv(x)
File "/home/linux/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/linux/anaconda3/lib/python3.8/site-packages/torch/nn/modules/container.py", line 117, in forward
input = module(input)
File "/home/linux/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/linux/anaconda3/lib/python3.8/site-packages/torch/nn/modules/batchnorm.py", line 131, in forward
return F.batch_norm(
File "/home/linux/anaconda3/lib/python3.8/site-packages/torch/nn/functional.py", line 2014, in batch_norm
return torch.batch_norm(
(function print_stack)
(BS 8) loss: 0.6229: 41%|█████▋ | 1022/2505 [1:01:47<1:29:39, 3.63s/it]
Traceback (most recent call last):
File "main.py", line 725, in <module>
main()
File "main.py", line 721, in main
processor.start()
File "main.py", line 657, in start
self.train(epoch, save_model=save_model)
File "main.py", line 513, in train
scaled_loss.backward()
File "/home/linux/anaconda3/lib/python3.8/site-packages/torch/tensor.py", line 185, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/linux/anaconda3/lib/python3.8/site-packages/torch/autograd/__init__.py", line 125, in backward
|
st31403
|
This is just the part of the training output where the error occurred; my training stopped as I set autograd_anomaly = True. Until epoch 11, training was smooth.
[ Tue Jun 1 07:36:56 2021 ] Training epoch: 10, LR: 0.0500
(BS 8) loss: 0.4095: 45%|██████▎ | 1127/2505 [1:07:54<1:23:12, 3.62s/it]Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 131072.0
(BS 8) loss: 0.5079: 90%|██████████████▎ | 2243/2505 [2:15:27<15:52, 3.63s/it]Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 131072.0
(BS 8) loss: 0.0766: 100%|████████████████| 2505/2505 [2:31:18<00:00, 3.62s/it]
[ Tue Jun 1 10:08:15 2021 ] Mean training loss: 0.4249 (BS 16: 0.8497).
[ Tue Jun 1 10:08:15 2021 ] Time consumption: [Data]00%, [Network]99%
[ Tue Jun 1 10:08:15 2021 ] Eval epoch: 10
100%|███████████████████████████████████████| 1031/1031 [03:59<00:00, 4.31it/s]
Accuracy: 0.7487717595681446 model: msg3d_with_part
[ Tue Jun 1 10:12:14 2021 ] Mean test loss of 1031 batches: 0.8397054561819415.
[ Tue Jun 1 10:12:15 2021 ] Top 1: 74.88%
[ Tue Jun 1 10:12:15 2021 ] Top 5: 94.84%
[ Tue Jun 1 10:12:15 2021 ] Training epoch: 11, LR: 0.0500
(BS 8) loss: 0.6229: 41%|█████▋ | 1022/2505 [1:01:46<1:29:42, 3.63s/it][W python_anomaly_mode.cpp:60] Warning: Error detected in CudnnBatchNormBackward. Traceback of forward call that caused the error:
File "main.py", line 725, in <module>
main()
|
st31404
|
I am attempting to implement the following loss function. To my understanding, this is essentially a BCE loss where we need to work with the weight parameter. I initially started with
counting the number of positive examples and then weight = total_samples / number_positive_per_class. However, clearly this is not what the paper suggests.
I read through the docs and it seems I need to set the pos_weight parameter. However, there is nothing to set the neg_weights, as might be needed in this case. So, can I use the ratio pos_weight = neg_samples / pos_samples?
If this is the way, can I use the direct ratio, or do I need to do something else to ensure the loss values are correctly weighted?
Any help would be highly appreciated. Thank you
|
st31405
|
Solved by ptrblck in post #6
|
st31406
|
I think you could transform the posted formula into the pos_weight formula used in nn.BCEWithLogitsLoss as given in the docs.
As described, pos_weight is specified as nb_neg/nb_pos and is multiplied with the “positive” part of the loss (the left summand). If you divide both summands by lambda_0 in your formula, you would end up with the same loss formulation.
|
st31407
|
Thank you so much for the reply. Here is a follow-up, though. I see there is also an implementation of MultiLabelSoftMarginLoss. I think in spirit it is similar to BCEWithLogitsLoss, and since I have a multi-label classification, I thought it would be better to use MultiLabelSoftMarginLoss. However, there is no pos_weight parameter in it. Is there a way I can use it, or should I stick to BCEWithLogitsLoss?
Also, just to be sure, should I use pos_weight = neg_samples_of_each_class / pos_samples_of_each_class in the loss formulation?
|
st31408
|
I don’t know how MultiLabelSoftMarginLoss would compare to BCEWithLogitsLoss, so let’s wait for some experts to chime in.
In case you are dealing with a multi-label classification, pos_weight should get a value for each class, i.e. it should be a tensor containing nb_classes values defined as [nb_neg_class0/nb_pos_class0, nb_neg_class1/nb_pos_class1, ...].
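A minimal sketch of computing such a per-class pos_weight from a multi-hot target matrix (assumed layout [num_samples, nb_classes], dummy data):
import torch
import torch.nn as nn

targets = torch.randint(0, 2, (1000, 5)).float()   # dummy multi-hot labels

nb_pos = targets.sum(dim=0)                        # positives per class
nb_neg = targets.size(0) - nb_pos                  # negatives per class
pos_weight = nb_neg / nb_pos.clamp(min=1)          # [nb_classes]; clamp guards empty classes

criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)
logits = torch.randn(1000, 5)
loss = criterion(logits, targets)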
|
st31409
|
Thank you for the quick response. Just to confirm: even if I use only BCEWithLogitsLoss, it should be fine in the multi-label scenario? I just want to make sure that the loss I am computing this way is not incorrect.
|
st31410
|
Yes, nn.BCEWithLogitsLoss can be used for binary and multi-label classification use cases.
|
st31411
|
Hi,
Kindly help me resolve this problem:
descriptor ‘__subclasses__’ of ‘type’ object needs an argument
I am importing some libraries in Jupyter.
|
st31412
|
Is this issue caused by an import of a PyTorch module? If so, could you please post the lines of code showing the imports and, if possible, a minimal code snippet to reproduce this issue?
|
st31413
|
Thank you so much for your reply. Following are my imports; I am trying to implement this code for ResNet18_64: GitHub - tjmoon0104/pytorch-tiny-imagenet
import torch, os
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as datasets
import torch.utils.data as data
import torchvision.transforms as transforms
import torchvision.models as models
from train_model import train_model
from test_model import test_model
%matplotlib inline
|
st31414
|
None of these imports raises the error, when I execute them (and copy the missing definitions) in my setup.
Did you verify that these imports are causing the issue?
|
st31415
|
Thank you so much for your reply, kindly check the attached screenshots
|
st31416
|
Thanks for the update. It seems the error is raised in the livelossplot package, so you could create an issue in their repository.
Based on this issue in typing it seems that newer Python versions would fix this issue (or you could also try to update typing).
|
st31417
|
Hi,
Can anyone help me with visualizing a network that has a Beta-VAE, a Discriminator, and a Task Model?
I want to visualize the losses for each of the modules, and the latent space for sample selection, over some iterations.
|
st31418
|
U-Net code:
class DoubleConv(nn.Module):
    """(convolution => [BN] => ReLU) * 2"""
    def __init__(self, in_channels, out_channels, mid_channels=None):
        super().__init__()
        if not mid_channels:
            mid_channels = out_channels
        self.double_conv = nn.Sequential(
            nn.Conv2d(in_channels, mid_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(mid_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True)
        )

    def forward(self, x):
        return self.double_conv(x)

class UNet(nn.Module):
    def __init__(self):
        super(UNet, self).__init__()
        self.n_channels = 1
        self.n_classes = 2
        self.bilinear = True
        self.inc = DoubleConv(1, 64)
        self.down1 = Down(64, 128)
        self.down2 = Down(128, 256)
        self.down3 = Down(256, 512)
        factor = 2 if True else 1
        self.down4 = Down(512, 1024 // factor)

class Down(nn.Module):
    def __init__(self, in_channels, out_channels):
        super(Down, self).__init__()
        self.maxpool_conv = nn.Sequential(
            nn.MaxPool2d(2),
            DoubleConv(in_channels, out_channels)
        )

    def forward(self, x):
        return self.maxpool_conv(x)

    def forward(self, x):
        x1 = self.inc(x)
        x2 = self.down1(x1)
        x3 = self.down2(x2)
        x4 = self.down3(x3)
        x5 = self.down4(x4)
        self.up1 = Up(1024, 512 // factor, bilinear)
        self.up2 = Up(512, 256 // factor, bilinear)
        self.up3 = Up(256, 128 // factor, bilinear)
        self.up4 = Up(128, 64, bilinear)

class Up(nn.Module):
    """Upscaling then double conv"""
    def __init__(self, in_channels, out_channels, bilinear=True):
        super(Up, self).__init__()
        if bilinear:
            self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
            self.conv = DoubleConv(in_channels, out_channels, in_channels // 2)
        else:
            self.up = nn.ConvTranspose2d(in_channels, in_channels // 2, kernel_size=2, stride=2)
            self.conv = DoubleConv(in_channels, out_channels)

    def forward(self, x1, x2):
        x1 = self.up(x1)
        diffY = x2.size()[2] - x1.size()[2]
        diffX = x2.size()[3] - x1.size()[3]
        x1 = F.pad(x1, [diffX // 2, diffX - diffX // 2,
                        diffY // 2, diffY - diffY // 2])
        x = torch.cat([x2, x1], dim=1)
        return self.conv(x)

x = self.up1(x5, x4)
x = self.up2(x, x3)
x = self.up3(x, x2)
x = self.up4(x, x1)
self.outc = OutConv(64, n_classes)

class OutConv(nn.Module):
    def __init__(self, in_channels, out_channels):
        super(OutConv, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        return self.conv(x)

I tested this code but got an error.
[error screenshot]
|
st31419
|
I can’t seem to find the forward method in the UNet class. Also, can you tell me why you need two forward methods in the Down class? Thank you
|
st31420
|
@randinoo Hmm, I think there are many things that need to be changed. First of all, I think DoubleConv is fine.
As I guess, Down is for reducing the resolution using MaxPool and doing DoubleConv after that, so I think that class is fine too.
I think the Up class needs some modifications:
class Up(nn.Module):
    """Upscaling then double conv"""
    def __init__(self, in_channels, out_channels, bilinear=True):
        super(Up, self).__init__()
        if bilinear:
            self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
            self.conv = DoubleConv(in_channels, out_channels, in_channels // 2)
        else:
            self.up = nn.ConvTranspose2d(in_channels, in_channels // 2, kernel_size=2, stride=2)
            self.conv = DoubleConv(in_channels, out_channels)

    def forward(self, x1, x2):
        x1 = self.up(x1)
        diffY = x2.size()[2] - x1.size()[2]
        diffX = x2.size()[3] - x1.size()[3]
        x1 = F.pad(x1, [diffX // 2, diffX - diffX // 2,
                        diffY // 2, diffY - diffY // 2])
        x = torch.cat([x2, x1], dim=1)
        return self.conv(x)
I do not know what you are doing with diffY and diffX, but let’s consider those as the right ones.
I think OutConv is good too. Now for the final UNet class:
class UNet(nn.Module):
    def __init__(self):
        super(UNet, self).__init__()
        self.n_channels = 1
        self.n_classes = 2
        self.bilinear = True
        self.inc = DoubleConv(1, 64)
        self.down1 = Down(64, 128)
        self.down2 = Down(128, 256)
        self.down3 = Down(256, 512)
        factor = 2 if True else 1
        self.down4 = Down(512, 1024 // factor)
        self.up1 = Up(1024, 512 // factor, bilinear)
        self.up2 = Up(512, 256 // factor, bilinear)
        self.up3 = Up(256, 128 // factor, bilinear)
        self.up4 = Up(128, 64, bilinear)
        self.outc = OutConv(64, n_classes)

    def forward(self, x):
        x = self.inc(x)
        x = self.down1(x)
        x = self.down2(x)
        x = self.down3(x)
        x = self.down4(x)
        x = self.up1(x)
        x = self.up2(x)
        x = self.up3(x)
        x = self.up4(x)
        x = self.outc(x)
        return x
I have just started coding in PyTorch too, but I was working on U-Net recently and I think those should be the right classes. I hope it helps you.
Good luck
|
st31421
|
Accessing model learnable parameters using (model.variable_name.weight) and trying to update them with an external function.
|
st31422
|
Could you describe the issue you are facing a bit more, please?
Do you get an unexpected error or is your workflow failing in any other way?
|
st31423
|
When I run my experiments on the GPU, they occupy a large amount of CPU memory (~2.3GB). However, when I run my experiments on the CPU, they occupy a very small amount of CPU memory (<500MB). This memory overhead restricts me in training multiple models.
Can someone please help me debug which component is causing this memory overhead?
I have added sample code at GitHub - divyeshrajpura4114/asv-sample
|
st31424
|
This is likely the CUDA initialization. PyTorch comes with a relatively large number of kernels and CUDA does something with them on startup. This is particularly difficult on more constrained platforms like the Jetson.
I thought of patching PyTorch to load all kernels through nvrtc instead of linking them into PyTorch but it is quite a bit of work and I was hoping someone else would fix things instead (with the recent split of libtorch_cuda, it seems people are digging in various directions there even if we’re not there yet).
Best regards
Thomas
|
st31425
|
@tom Thanks for your response.
These are bit of new concepts for me. But, what I understand is that the CPU memory usage will be high whenever we train a model on GPU.
I have one more question. When I increase num_workers to 4 in DataLoader, each 4 process is taking high CPU memory (as I mensioned above ~2.3GB)?? Is it normal then?
|
st31426
|
@ptrblck @tom, I have multiple GPUs and also a large amount of CPU RAM, but when I start 2 trainings simultaneously, the response time is highly degraded because of memory consumption (maybe it’s also doing some processing on the CPU to some extent). Is there any way to improve simultaneous trainings?
|
st31427
|
You mean except buy more RAM? (This is only half-joking. I only have a single GPU, but I chose to max out my computer’s RAM capacity (which is 128GB); compared to GPU prices, that seems only reasonable. For many commercial situations, “buy more RAM” might be the solution.)
More seriously: at least part of the memory usage is somewhat fundamental to how Python multiprocessing works and doesn’t work, combined with limitations of how CUDA works.
But one thing you can look into is to split part of the processing in the dataset into preprocessing, move other parts (i.e. augmentation) to the GPU, and then get by with fewer processes for the dataloader.
For real-world applications I have rarely seen a data pipeline that could not be drastically sped up with some tweaks.
Best regards
Thomas
|
st31428
|
Hi, I have a model for which I need to save the weights during training (for example, 20 to 50 samples of weights every 50 epochs), then at test time load the model, run inference using these weights, and take an average of the predictions. But when I use pickle, I have the problem that when I load the weights on a different GPU, I get the following error:
Attempting to deserialize object on CUDA device 2 but torch.cuda.device_count() is 2. Please use torch.load with map_location to map your storages to an existing device.
I save the weights during training using the following commands:
weight_set_samples = []
weight_set_samples.append(copy.deepcopy(model.state_dict()))
and when training is finished I save the weights using:
pickle.dump(net.weight_set_samples, model_dir+'/state_dicts.pkl', pickle.HIGHEST_PROTOCOL)
But when I use the following code to load the weights:
with open(model_dir+'/state_dicts.pkl', 'rb') as weights:
    weight_set_samples = pickle.load(weights)
I run into a problem when the GPU during testing is not the same as the GPU during training.
I read the pickle documentation; there wasn’t anything like map_location for pickle objects that would solve the problem.
I would appreciate it if someone had an idea to solve this, since these are long-running processes and repeating them is really time consuming.
My second question is how I can do this without pickle so that I don’t run into this problem. I want to save the weights as an array or list so that it is easy to load the model with these weights and run inference quickly.
|
st31429
|
The error is raised by PyTorch in its serialization module, and the map_location argument can be specified in torch.load. I don’t know how pickle could be directly used to avoid this error, but I would recommend sticking to torch.save/load and specifying the pickle_module, if necessary.
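A minimal sketch with torch.save/torch.load, reusing weight_set_samples and model_dir from the question:
import torch

# saving: torch.save can serialize a plain list of state_dicts
torch.save(weight_set_samples, model_dir + '/state_dicts.pt')

# loading on a machine with a different GPU setup
weight_set_samples = torch.load(model_dir + '/state_dicts.pt', map_location='cpu')
# or remap directly onto a specific device
weight_set_samples = torch.load(model_dir + '/state_dicts.pt', map_location='cuda:0')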
|
st31430
|
Hi, I have a ModuleDict where each module takes a similar-size input, x.
I have two use cases for it:
For a single input x, run every module in the dictionary and create a tensor by stacking all outputs.
Given an ordered map of {key, x}, run each x through the module in the dictionary for that key and stack the outputs in that order. I can change the input of this case to be a batch of x and a corresponding list of keys.
Currently I am doing this by looping over the dictionary. Is there a better way to do this?
|
st31431
|
Solved by ptrblck in post #2
|
st31432
|
I think iterating the dict would be the right approach. If you are concerned about the performance of this loop, you could use a dict comprehension, which could yield a speedup, but I would recommend to profile the code first and check, if this loop is indeed the bottleneck in your code.
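A minimal sketch of both use cases with a comprehension over the ModuleDict (toy modules and shapes):
import torch
import torch.nn as nn

modules = nn.ModuleDict({'a': nn.Linear(8, 4), 'b': nn.Linear(8, 4)})
x = torch.randn(2, 8)

# case 1: one input through every module, stacked in dict order
out = torch.stack([m(x) for m in modules.values()], dim=0)              # [2, 2, 4]

# case 2: an ordered {key: input} map routed to the matching modules
inputs = {'a': torch.randn(2, 8), 'b': torch.randn(2, 8)}
out2 = torch.stack([modules[k](v) for k, v in inputs.items()], dim=0)   # [2, 2, 4]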
|
st31433
|
I’m using PyTorch’s new register_full_backward_hook and getting this error with no error message:
terminate called after throwing an instance of 'python_error'
what():
Aborted
How do I debug this???
|
st31434
|
You could run the script in gdb and check the backtrace via:
gdb --args python script.py args
...
run
...
bt
This should point towards the failing operation.
|
st31435
|
Hi
I have installed CUDA 8 and PyTorch 0.4.1 with conda.
The backprop looks too slow.
Here is the report using torch.utils.bottleneck:
[bottleneck report screenshot]
Here is my test code for the main training process in debug mode:
start1 = time.time()
for _ in range(100):
    word_in1 = torch.cuda.LongTensor(word_in)
    word_out1 = torch.cuda.LongTensor(word_out)
    label = torch.cuda.DoubleTensor(train_label)
    emb_u = nn.functional.embedding(word_in1, syn0)
    emb_v = nn.functional.embedding(word_out1, syn1)
    outs = torch.sigmoid(torch.sum(torch.mul(emb_u, emb_v), dim=-1))
    loss = Lossfunc.cuda()(outs, label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(time.time() - start1)

optimizer = optim.SGD([syn0, syn1], lr=alpha)
Lossfunc = nn.BCELoss(reduction='sum')
and I found that the last three lines (.zero_grad(), .backward(), .step()) occupy most of the time.
So what should I do next?
|
st31436
|
Hi,
Why do you think it is too slow?
Running the backward should be between 1 and 2x the forward pass.
Then the gradient step depends on the size of your weights.
If your Embedding layers are very large compared to the rest of the net, you can do sparse updates by using sparse=True for them (see the docs) and by using an optimizer that supports sparse updates, like SGD or SparseAdam.
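A minimal sketch of that sparse setup (toy sizes):
import torch
import torch.nn as nn

emb = nn.Embedding(num_embeddings=100000, embedding_dim=128, sparse=True)
opt = torch.optim.SparseAdam(emb.parameters(), lr=1e-3)

idx = torch.randint(0, 100000, (32,))
loss = emb(idx).sum()
loss.backward()   # produces sparse gradients touching only the 32 used rows
opt.step()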
|
st31437
|
Thank you for your reply.
The reason I think it’s slow is that the same training process costs two minutes using numpy+CPU but an hour using PyTorch+GPU.
Here is my numpy code:
z = np.dot(syn0[context_word],syn1[word_out].T)
p = expit(z)
g = alpha * (label-p)
neu1e = syn1[x_]
syn1[x_] += np.outer(g,syn0[context_word])
syn0[context_word] += np.dot(g,neu1e)
There must be something wrong with my pytorch code.
|
st31438
|
According to my test, the time for the backward pass is about 170x that of the forward pass: 5.8s vs 0.035s over 100 iterations.
|
st31439
|
Hi,
Could you send me a full script that runs, with all the sizes being the ones you use, replacing your data with random tensors?
I can guess two possible things here:
The graph expands across iterations and thus the traversal for the backward becomes dead slow. I need the full running code to check that.
Given the simplicity of your graph, and the fact that it’s mainly Embedding from what I see, you might be hitting a very bad worst case. Here again, try sparse=True for the embedding; it was made for that purpose.
|
st31440
|
albanD:
Hi,
Could you send me a full script that runs, with all the sizes being the ones you use, replacing your data with random tensors?
I can guess two possible things here:
The graph expands across iterations and thus the traversal for the backward becomes dead slow. I need the full running code to check that.
Given the simplicity of your graph, and the fact that it’s mainly Embedding from what I see, you might be hitting a very bad worst case. Here again, try sparse=True for the embedding; it was made for that purpose.
Thank you very much!
Here is my code:
import torch.optim as optim
import torch
import torch.nn as nn
import numpy as np
import os
import time

if __name__ == '__main__':
    os.environ['CUDA_VISIBLE_DEVICES'] = '0'
    syn0 = torch.randn((2829, 100), requires_grad=True, device='cuda')
    syn1 = torch.randn((2829, 100), requires_grad=True, device='cuda')
    optimizer = optim.SGD([syn0, syn1], lr=0.025)
    Lossfunc = nn.BCELoss(reduction='sum').cuda()
    start1 = time.time()
    for index, _ in enumerate(range(40000)):
        word_in = np.random.randint(low=0, high=2829, size=32)
        word_out = np.random.randint(low=0, high=2829, size=32)
        if index % 10000 == 0:
            print(('%d of 40000 (%.2f%%)') % (index, index / 400.0))
        word_in1 = torch.cuda.LongTensor(word_in)
        word_out1 = torch.cuda.LongTensor(word_out)
        label = torch.cuda.FloatTensor([1] + [0]*31)
        emb_u = nn.functional.embedding(word_in1, syn0, sparse=True)
        emb_v = nn.functional.embedding(word_out1, syn1, sparse=True)
        outs = torch.sigmoid(torch.sum(torch.mul(emb_u, emb_v), dim=-1))
        loss = Lossfunc(outs, label)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(time.time() - start1)
When I replace my data with random tensors, the backward becomes about 2x slower than the forward pass, just like you said.
But it’s still very slow; it took about 1 min to run 40000 iterations on a Tesla M40. Is there any mistake in my code? If not, can you give me some suggestions to speed it up? Should I use torch.set_num_threads() or torch.multiprocessing?
Thanks again!
|
st31441
|
Hi,
Some observations:
My first point does not happen, so no problem on that side.
sparse=True makes the perf slightly worse here. This is expected, as your embedding is not that big.
In my tests, I moved all the data generation out of the loop, but that does not change much.
Here each forward-backward-update takes <1ms on my machine. I don’t think you can expect it to be much faster; it’s just that the outer loop is very large. Try removing the Python if statement in your loop and you will actually see the difference in runtime.
The GPU usage is actually quite low; increasing the batch size to 128 still gives me a runtime of <1ms per iteration. So if you want this to run faster, increase the batch size.
torch.set_num_threads will only change CPU core usage for heavy operations, but you don’t do any such operation on the CPU here.
torch.multiprocessing would allow you to do multi-CPU/multi-GPU, but you can’t fully use the GPU you already have, so there is little hope of improving on that side.
|
st31442
|
Thank you! Your advice is very helpful to me.
The GPU usage is indeed low, but when I want to use the CPU and remove all “.cuda” calls, it becomes about 50x slower. Why?
|
st31443
|
Well, because even low GPU usage is much faster than the CPU, especially for such ops.
|
st31444
|
But the following numpy code runs much faster for the same training process, using only the CPU (4s vs 65s):
syn0 = np.random.uniform(low=-0.5/100, high=0.5/100, size=(2829, 100))
syn1 = np.zeros(shape=(2829, 100))
start1 = time.time()
for index, _ in enumerate(range(40000)):
    x_ = np.random.randint(low=0, high=2829, size=32)
    if index % 10000 == 0:
        print(('%d of 40000 (%.2f%%)') % (index, index / 400.0))
    context_word = np.random.randint(low=0, high=2829, size=1)[0]
    label = np.array([1] + [0]*5)
    z = np.dot(syn0[context_word], syn1[x_].T)
    p = expit(z)
    g = 0.025 * (label - p)
    neu1e = syn1[x_]
    syn1[x_] += np.outer(g, syn0[context_word])
    syn0[context_word] += np.dot(g, neu1e)
print(time.time() - start1)
I want to use the autograd functionality provided by PyTorch, but it slows things down severely.
I tried increasing the batch size, and the performance drops correspondingly.
|
st31445
|
Hi,
It is expected that there is some overhead from the autograd engine, especially for such a small graph, but it looks a bit too much in this case.
I’m not super fluent in numpy code, but it looks like:
Your context word is of size 1, while in the PyTorch code it’s of size batch_size=32.
Your label is of size 6, while in the PyTorch code it’s of size 32.
What is the expit function doing?
Have you tried replacing each op in your numpy code with the torch counterpart? This should give similar runtime on CPU and a speedup on GPU if the ops are big enough.
|
st31446
|
Hi~ I also encountered this problem. My code is like this:
# fd_prob: [batch_size, tgt_len, vocab_size], the word probability distribution
# bd_hyp: [batch_size, infer_len], another output's labels
# fd_bd_attn: [batch_size, tgt_len, infer_len], the edit probability distribution
# fd_p_gen: [batch_size, tgt_len, 1], the copy mode probability
batch_size, tgt_len, _ = fd_prob.size()
_, infer_len = bd_hyp.size()
# incorporate copy mode
for i in range(batch_size):
    for j in range(tgt_len):
        for k in range(infer_len):
            fd_prob[i][j][bd_hyp[i][k]] += (1 - fd_p_gen[i][j][0]) * fd_bd_attn[i][j][k]
loss = criterion(fd_prob, ground_truth)
loss.backward()
I modified the output distribution with a 3-level loop containing lots of indexing, and I found that the forward costs less than 1s while the backward costs more than 1.5 min, which is unacceptable. I think it’s due to all the indexing. Have you found a solution, or is there an elegant way to do this?
Also see Indexing is very slow for backpropagation
|
st31447
|
Hi,
This is expected. Each operation you do adds a node to the computational graph. You’re creating a huge graph here, so the backward pass is going to be very slow. You will need to parallelize your operations using builtin functions and/or masking.
|
st31448
|
Thank you for your quick reply. I fixed this problem by using scatter_add_ for this operation.
|
st31449
|
SkyAndCloud:
Thank you for your quick reply. I fixed this problem by using scatter_add_ for this operation.
I am running into the same problem; could you please provide the modified code?
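For reference, a hedged guess at what the scatter_add_ version might look like, matching the loop fd_prob[i][j][bd_hyp[i][k]] += (1 - fd_p_gen[i][j][0]) * fd_bd_attn[i][j][k] (toy sizes):
import torch

B, T, I, V = 2, 3, 4, 10                         # batch, tgt_len, infer_len, vocab
fd_prob = torch.rand(B, T, V)
fd_bd_attn = torch.rand(B, T, I)
fd_p_gen = torch.rand(B, T, 1)
bd_hyp = torch.randint(0, V, (B, I))

contribution = (1 - fd_p_gen) * fd_bd_attn       # [B, T, I] via broadcasting
index = bd_hyp.unsqueeze(1).expand(-1, T, -1)    # [B, T, I]
# fd_prob[b, t, bd_hyp[b, k]] += contribution[b, t, k]
fd_prob = fd_prob.scatter_add(2, index, contribution)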
|
st31450
|
Good day,
How can I generate a mask tensor that has a specific ratio of 0s and 1s? For example, a 70:30 ratio of 0s and 1s in a 5 by 10 tensor would generate
[[0,0,0,0,0,0,0,0,1,0],
[1,1,1,0,0,0,1,1,0,0],
[0,0,1,0,0,1,1,0,1,0],
[0,1,0,1,0,1,1,0,0,0],
[0,0,0,1,0,0,0,0,0,0]]
Thanks in advance.
|
st31451
|
Solved by ptrblck in post #2
|
st31452
|
You could create the values using the defined number of samples beforehand, shuffle them, and reshape to the desired output:
ones, zeros = 30, 70
x = torch.cat((torch.zeros(zeros), torch.ones(ones)))
x = x[torch.randperm(x.size(0))]
x = x.view(10, 10)
print(x)
> tensor([[0., 0., 0., 0., 0., 0., 0., 1., 0., 0.],
[1., 0., 0., 0., 0., 0., 1., 1., 1., 1.],
[1., 0., 0., 0., 0., 0., 1., 0., 0., 0.],
[0., 0., 1., 1., 0., 0., 1., 0., 1., 0.],
[0., 1., 0., 0., 0., 1., 1., 0., 0., 0.],
[0., 0., 0., 1., 1., 0., 0., 0., 1., 1.],
[1., 0., 0., 0., 0., 0., 1., 1., 0., 0.],
[0., 0., 0., 0., 1., 1., 0., 0., 0., 0.],
[0., 1., 1., 0., 0., 0., 1., 0., 0., 0.],
[1., 0., 0., 0., 0., 0., 1., 0., 1., 0.]])
|
st31453
|
Hello. I would like to ask for a mechanism in PyTorch to control the seed of random number generators. People who use PyTorch from a multi-threaded application can face the problem that, if several threads perform model initialization with randomized schemes (He initialization, Lecun initialization, etc.), the initialization is really a part of global state, which users of the Python API can only access via functions like torch.manual_seed.
I think it would be better to redesign this and make the random generator state a part of the model.
Here is a code snippet which demonstrates the problem with global state for numpy.random and Python’s random:
#!/usr/bin/env python3
import numpy as np
import random
import threading, time

class WorkerThread(threading.Thread):
    def __init__(self, i, sleep_seconds):
        threading.Thread.__init__(self)
        self.th_number = i
        self.sleep_seconds = sleep_seconds

    def run(self):
        np.random.seed(123)
        random.seed(123)
        time.sleep(self.sleep_seconds)
        print(self.th_number, np.random.random(), "(np.random)")
        print(self.th_number, random.random(), "(random)")

th = [WorkerThread(k, 1*k) for k in range(3)]
for t in th: t.start()
for t in th: t.join()
|
st31454
|
This known limitation is caused by the forked subprocesses and third-party libraries, as shown by your code snippet and explained in the FAQ as well as the Randomness docs.
While the DataLoader already sets the seed for the random module (as well as PyTorch), note that numpy is not a requirement, which is why previous suggestions to seed third-party libraries in PyTorch’s code were declined.
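For reference, the per-worker seeding pattern from the Randomness docs, usable when numpy must be seeded in each DataLoader worker:
import random
import numpy as np
import torch
from torch.utils.data import DataLoader

def seed_worker(worker_id):
    # derive a per-worker seed from the base seed set by the DataLoader
    worker_seed = torch.initial_seed() % 2**32
    np.random.seed(worker_seed)
    random.seed(worker_seed)

g = torch.Generator()
g.manual_seed(0)
# loader = DataLoader(dataset, num_workers=4, worker_init_fn=seed_worker, generator=g)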
|
st31455
|
Thanks, what you are talking about is very interesting, and thanks for the reference, but I did not mean DataLoaders and the forking strategy for parallel threads.
I mostly care about the initialization of models and having the ability to specify the initialization of a model in a thread-safe way.
So my suggestion is to add this seed-controlling technique to the public API at the level of models, if possible, for cases when people do not use DataLoaders.
Very often in low-level libraries, memory and logging are user-specified callbacks; in my opinion, PyTorch should similarly allow the user to control random seeds. It’s an important thing for me as a user.
st31456
|
class Autoencoder(nn.Module):
    def __init__(self):
        super(Autoencoder, self).__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, 3)
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid()
        )

    def forward(self, x):
        x = self.encoder(x)
        x = self.decoder(x)
        return x

The error is in loss_train = criterion(output_train, y_train.long()):
RuntimeError: only batches of spatial targets supported (3D tensors) but got targets of dimension: 1
|
st31457
|
Since you are passing the targets as LongTensors, I assume you are using nn.CrossEntropyLoss.
Also based on the posted architecture it seems you are working on a multi-class segmentation use case.
If that’s the case, the model output is expected to contain logits (so remove the sigmoid) in the shape [batch_size, nb_classes, height, width], while the target should be a LongTensor in the shape [batch_size, height, width] containing the class indices in the range [0, nb_classes-1].
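A minimal shape sketch for that setup (dummy sizes):
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
output = torch.randn(4, 3, 64, 64)           # [batch_size, nb_classes, H, W] logits
target = torch.randint(0, 3, (4, 64, 64))    # [batch_size, H, W] class indices
loss = criterion(output, target)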
|
st31458
|
Hi @ptrblck, it’s the same thing:
class Autoencoder(nn.Module):
    def __init__(self):
        super(Autoencoder, self).__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, 3)
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
        )

    def forward(self, x):
        x = self.encoder(x)
        x = self.decoder(x)
        return x

x_train, y_train = Variable(train_x), Variable(train_y)
x_val, y_val = Variable(val_x), Variable(val_y)
if torch.cuda.is_available():
    x_train = x_train.cuda()
    y_train = y_train.cuda()
    x_val = x_val.cuda()
    y_val = y_val.cuda()

optimizer.zero_grad()
output_train = model(x_train.float())
output_val = model(x_val.float())
loss_train = criterion(output_train, y_train.long())
help me plzz
|
st31459
|
Are x_train, y_train, x_val, y_val batched inputs or only single samples? That may explain the shape/batch error.
Also, is this an image autoencoder? If it is, then:
Normalize your inputs
No need to pass targets as long
Use MSELoss() as the criterion (see the sketch below)
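A minimal sketch of that MSE setup, assuming the Autoencoder posted above and a normalized image batch:
import torch
import torch.nn as nn

model = Autoencoder()
criterion = nn.MSELoss()

x_batch = torch.randn(8, 1, 28, 28)   # hypothetical normalized inputs
output = model(x_batch)
loss = criterion(output, x_batch)     # the reconstruction target is the input itself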
|
st31460
|
Here is a link to a beginner-friendly image autoencoder template that I have written. The same concept can be applied to any image-feature-extracting autoencoder. Please feel free to PM me if you have any doubts about what the code does; I’ll be happy to help.
Regards,
|
st31461
|
RuntimeError: Calculated padded input size per channel: (1 x 1). Kernel size: (3 x 3). Kernel size can’t be greater than actual input size
|
st31462
|
That would be because your inputs are too small. Since that notebook is based on MNIST, the minimum input size needs to be 28x28.
|
st31463
|
Hi @pchandrasekaran, when I tested this GAN code:
class DiscriminatorNet(torch.nn.Module):
    """
    A three hidden-layer discriminative neural network
    """
    def __init__(self):
        super(DiscriminatorNet, self).__init__()
        n_features = 40
        n_out = 2
        self.hidden0 = nn.Sequential(
            nn.Linear(n_features, 1024),
            nn.LeakyReLU(0.2),
            nn.Dropout(0.3)
        )
        self.hidden1 = nn.Sequential(
            nn.Linear(1024, 512),
            nn.LeakyReLU(0.2),
            nn.Dropout(0.3)
        )
        self.hidden2 = nn.Sequential(
            nn.Linear(512, 256),
            nn.LeakyReLU(0.2),
            nn.Dropout(0.3)
        )
        self.out = nn.Sequential(
            torch.nn.Linear(256, n_out),
            torch.nn.Sigmoid()
        )

    def forward(self, x):
        x = self.hidden0(x)
        x = self.hidden1(x)
        x = self.hidden2(x)
        x = self.out(x)
        return x

[error screenshot]
help me plzz
|
st31464
|
Now, I’m making an assumption here as I have limited information; I’m assuming you want a single output in the range (0, 1) and are tackling a binary classification problem.
Change n_out to 1 and use nn.BCELoss(). [If n_out is 2, use nn.NLLLoss(), and some extra changes are needed, so leave it for now]
If n_out=1, you’ll need to binarize the output from the network in order to use with sklearn’s accuracy_score since a sigmoided output is going to be a float in the interval [0, 1]. You can do that by:
threshold = 0.5
network_output[network_output > threshold] = 1
network_output[network_output <= threshold] = 0
|
st31465
|
@pchandrasekaran @ptrblck
RuntimeError: The size of tensor a (16) must match the size of tensor b (5177) at non-singleton dimension 3
with this code:
class Autoencoder(nn.Module):
    def __init__(self):
        super(Autoencoder, self).__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, 3)
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
        )

    def forward(self, x):
        x = self.encoder(x)
        x = self.decoder(x)
        return x

# clearing the gradients of the model parameters
optimizer.zero_grad()
# predictions for the training and validation sets
output_train = model(x_train.float())
output_val = model(x_val.float())
loss_train = criterion(output_train, y_train.long())
loss_val = criterion(output_val, y_val.long())
|
st31466
|
Your model works correctly using random input shapes, so I guess the shape mismatch is caused in the loss calculation, in which case you would have to check the shapes of the model output and target tensor and make sure they have the expected shapes.
I don’t know which criterion you are using, but the docs explain the expected shapes for them.
|
st31467
|
I have a dataset with over 1000 classes and just 10 images in each class. In the end, I have to predict whether any new image is part of the dataset or not. I tried implementing this with my own neural network and tried Siamese networks. My model reaches 72% validation accuracy but produces wrong outputs. I’ve been on this for days. Could you guys help me out?
|