st32468
|
Solved by liyz15 in post #2
Have you ensured that only one process is writing to checkpoint? Multiple processes writing to the same checkpoint will corrupt it.
|
st32469
|
Have you ensured that only one process is writing to checkpoint? Multiple processes writing to the same checkpoint will corrupt it.
|
st32470
|
I did not consider this problem.
It looks like I have to modify my code and re-train my model.
Thanks a lot.
|
st32471
|
There is a function in NumPy called np.add.at which allows adding values to elements accessed by a multi-index with repeated elements, such that for each repeated element all of its corresponding values are included in the summation. For example:
A = np.zeros(5)
np.add.at(A, [1, 1, 2], 1)
A
produces:
array([0., 2., 1., 0., 0.])
Right now I really need the same thing in PyTorch (I can’t avoid repeated indices in my task), since plain summation behaves differently:
A = torch.zeros(5)
A[[1, 1, 2]] += 1
A
produces:
tensor([ 0., 1., 1., 0., 0.])
Is there any way to simulate behavior of np.add.at by PyTorch operations?
Thank you!
|
st32472
|
Looks like I have already found a solution myself:
A = torch.zeros(5)
A.index_add_(0, torch.LongTensor([1, 1, 2]), torch.FloatTensor([1, 1, 1]))
A
It seems that this can also be used for multi-dimensional tensors if they are flattened beforehand. It would be very handy to have such a function for multi-dimensional tensors as well (if one does not already exist).
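For reference, a minimal sketch of the same idea on a multi-dimensional tensor (the shapes and indices here are made up): flatten the tensor, convert the multi-index to a linear index, and call index_add_ on the flattened view; index_put_ with accumulate=True is another option.
import torch

A = torch.zeros(3, 4)
idx = torch.tensor([[0, 0], [0, 0], [1, 2]])   # (row, col) pairs, with a repeated entry
vals = torch.tensor([1.0, 1.0, 1.0])

# linearize the multi-index and accumulate on a flattened view (shares storage with A)
flat_idx = idx[:, 0] * A.size(1) + idx[:, 1]
A.view(-1).index_add_(0, flat_idx, vals)
print(A)  # A[0, 0] == 2.0, A[1, 2] == 1.0

# equivalent alternative: A.index_put_((idx[:, 0], idx[:, 1]), vals, accumulate=True)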
|
st32473
|
Hello,
Is there any scalable solution to this problem? I am using multi-dimensional tensors and would like to sum elements using indices stored in another tensor, where some indices appear more than once. In NumPy, np.add.at does the job.
Thanks
|
st32474
|
class Net(Module):
    def __init__(self):
        super(Net, self).__init__()
        self.cnn = Sequential(
            Conv2d(in_channels=1, out_channels=4, kernel_size=3, stride=1),
            ReLU(inplace=True),
            MaxPool2d(kernel_size=3, stride=1),
            Conv2d(in_channels=4, out_channels=4, kernel_size=3, stride=1),
            ReLU(inplace=True),
            MaxPool2d(kernel_size=3, stride=1),
            Linear(in_features=1000, out_features=500),
            ReLU(inplace=True),
            Linear(in_features=500, out_features=250),
            ReLU(inplace=True),
            Linear(in_features=250, out_features=4)
        )

    def forward(self, x):
        x = self.cnn(x)
        x = x.view(x.size(0), -1)
        return x
Please tell me what is wrong, I don’t get it.
Error is: RuntimeError: Given groups=1, weight of size [4, 1, 3, 3], expected input[2190, 64, 64, 3] to have 1 channels, but got 64 channels instead
|
st32475
|
It seems you are passing the input tensor in a channels-last memory format, while PyTorch expects channels-first by default, so you would have to permute the dimensions into [batch_size, channels, height, width]. In case you are changing the memory format via tensor.to(memory_format=torch.channels_last) note that you should not change the dimensions manually, but let PyTorch do it internally.
Once you’ve permuted the input tensor you would run into the next error of a wrong number of in_channels in the first conv layer.
While it seems that your input tensor uses 3 channels, the first conv layer uses 1 input channel, so you would have to change this next.
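A minimal sketch of that fix (illustrative sizes, assuming the input really is NHWC with 3 channels as the error message suggests):
import torch

x = torch.randn(8, 64, 64, 3)            # NHWC, like the input in the error message
x = x.permute(0, 3, 1, 2).contiguous()    # -> NCHW: [8, 3, 64, 64]

# the first conv layer then needs in_channels=3 to accept this input
conv = torch.nn.Conv2d(in_channels=3, out_channels=4, kernel_size=3, stride=1)
out = conv(x)
print(out.shape)  # torch.Size([8, 4, 62, 62])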
|
st32476
|
I have a problem regarding a large variation in the result I get, by running my model multiple times. The exact same architecture and training gives anywhere from 91.5% to 93.4% accuracy on image classification (cifar 10).
The problem is that I don’t know how to use the torch random seed in order to get the better results, not the worse ones. I tried various values for the random seed, with:
torch.manual_seed(7)
and I get the lower bound of the results. Any ideas?
|
st32477
|
If you are using the GPU, you might also need to set torch.cuda.manual_seed_all.
http://pytorch.org/docs/master/cuda.html#random-number-generator
|
st32478
|
@smth do we need to set torch.manual_seed() and torch.cuda.manual_seed_all() or the second is enough? thanks.
|
st32479
|
With the latest PyTorch 0.3 release you only need to set torch.manual_seed, which will seed all devices.
|
st32480
|
the CPU RNG is platform-independent. I am not sure about the CUDA RNG and what guarantees NVIDIA gives across GPU models, CUDA versions and platforms.
|
st32481
|
smth:
torch.cuda.manual_seed_all.
Funny, even though I have included both:
torch.manual_seed(999)
and
if torch.cuda.is_available():
torch.cuda.manual_seed_all(999)
I am still getting inconsistent results, fluctuating 1-2% by re-running the model. I wonder why that could be?
|
st32482
|
Could you try adding torch.backends.cudnn.deterministic = True to your code?
cuDNN has some non-deterministic methods, so small fluctuations might come from this.
|
st32483
|
I added:
torch.backends.cudnn.deterministic = True in addition to:
torch.manual_seed(999) and
if torch.cuda.is_available(): torch.cuda.manual_seed_all(999)
but the accuracy for the same model/same data still varies considerably across runs. I’ve even tried duplicating the above in the code and switching to the latest version of PyTorch (0.3.1), but I’m still getting the same variability in accuracy across runs for the same model/same data. Weird.
|
st32484
|
Hi,
I’m having the same issue, did you figure out a way to make the results consistent across runs?
Thanks,
Amir
|
st32485
|
Hi,
Have you figured out how to make the results reproducible now?
Thanks,
Darren
|
st32486
|
Same problem here, running on PyTorch 0.4. I am using RReLU though; even though I’ve set all the flags mentioned above, results differ by a margin of +/- 0.5% from run to run.
|
st32487
|
What’s the number of workers for your dataloader? The following post might be helpful for deterministic results.
Deterministic/non-deterministic results with PyTorch
Hi,
I’ve had some trouble in reproducing results, and after reading a few posts there seem to be multiple causes for non-determinism, some which are expected. I’d like to check if I got it right.
Issue 1
Dataloaders with multiple threads: They seem to be problematic, even if randoms seeds are set beforehand. [ref].
Issue 2
cuDNN: Apparently cuDNN seems to have non-deterministic kernels [ref1][ref2]
Issue 3
GPU: Apparently, some reductions are non-deterministic in GPU, even without cuDNN. …
|
st32488
|
Was following this post because I ran into the same issues training an autoencoder. I don’t know if the OP has solved the problem, but I did a test last night on an AWS GPU with CUDA enabled, and the parameters below gave me consistent results.
torch.backends.cudnn.deterministic = True
torch.manual_seed(999)
Further, I explicitly call model.eval() after training when computing the decoders and encoders.
Alternatively, when I used only the settings below, the results were inconsistent.
torch.backends.cudnn.deterministic = True
torch.cuda.manual_seed_all(999)
As an above poster mentioned, torch.manual_seed() seems to apply to both CUDA and CPU devices in the latest version. So if you’re not getting consistent results with torch.cuda.manual_seed_all, try just torch.manual_seed. This may depend on the PyTorch version you have installed… Hope this helps.
|
st32489
|
Good info.
The docs also suggest setting torch.backends.cudnn.benchmark = False,
and remember that NumPy should be seeded as well.
–> Randomness [Docs]
|
st32490
|
Sounds like there is another related question here.
Anyway, I think this can be a solution:
manualSeed = 1
np.random.seed(manualSeed)
random.seed(manualSeed)
torch.manual_seed(manualSeed)
# if you are using GPU
torch.cuda.manual_seed(manualSeed)
torch.cuda.manual_seed_all(manualSeed)
torch.backends.cudnn.enabled = False
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
Also, in the dataloader I set num_workers = 0.
Based on here,
you also need to change worker_init_fn as:
def _init_fn():
    np.random.seed(manualSeed)

DataLoading = data.DataLoader(..., batch_size=...,
                              collate_fn=...,
                              num_workers=...,
                              shuffle=...,
                              pin_memory=...,
                              worker_init_fn=_init_fn)
I noticed that if we don’t set torch.backends.cudnn.enabled = False, the results are very close but sometimes don’t match.
P.S. I’m using PyTorch 1.0.1.
|
st32491
|
Thanks!
num_workers = 0 and torch.backends.cudnn.enabled = False are what really works! I also noticed that if you run one training step 10 times using only num_workers = 0, you get exactly the same output 8 times and a different output 2 times.
|
st32492
|
np.random.seed(0)
random.seed(0)
torch.manual_seed(0)
torch.cuda.manual_seed(0)
torch.cuda.manual_seed_all(0)
torch.backends.cudnn.enabled = False
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
and setting dataloader like the following:
torch.utils.data.DataLoader(training, shuffle = True, batch_size=BATCH_SIZE, worker_init_fn=np.random.seed(0),num_workers=0)
WORKED FOR ME!
I am using Pytorch version 1.0.0.
|
st32493
|
I tried exactly the same settings; even with torch.backends.cudnn.enabled = False, the results are not the same… Do you have any idea?
|
st32494
|
I have a 2D tensor:
samples = torch.Tensor([
[0.1, 0.1], #-> group / class 1
[0.2, 0.2], #-> group / class 2
[0.4, 0.4], #-> group / class 2
[0.0, 0.0] #-> group / class 0
])
and a label for each sample corresponding to a class:
labels = torch.LongTensor([1, 2, 2, 0])
so len(samples) == len(labels). Now I want to calculate the mean for each class / label. Because there are 3 classes (0, 1, and 2), the final tensor should have dimension [n_classes, samples.shape[1]], so the expected solution should be:
result == torch.Tensor([
[0.1, 0.1],
[0.3, 0.3], # -> mean of [0.2, 0.2] and [0.4, 0.4]
[0.0, 0.0]
])
Question: Can this be done in pure pytorch (i.e. no numpy so that I can autograd) and ideally without for loops?
|
st32495
|
Solved by ptrblck in post #2
You could use scatter_add_ and torch.unique to get a similar result.
However, the result tensor will be sorted according to the class index:
samples = torch.Tensor([
[0.1, 0.1], #-> group / class 1
[0.2, 0.2], #-> group / class 2
…
|
st32496
|
You could use scatter_add_ and torch.unique to get a similar result.
However, the result tensor will be sorted according to the class index:
samples = torch.Tensor([
[0.1, 0.1], #-> group / class 1
[0.2, 0.2], #-> group / class 2
[0.4, 0.4], #-> group / class 2
[0.0, 0.0] #-> group / class 0
])
labels = torch.LongTensor([1, 2, 2, 0])
labels = labels.view(labels.size(0), 1).expand(-1, samples.size(1))
unique_labels, labels_count = labels.unique(dim=0, return_counts=True)
res = torch.zeros_like(unique_labels, dtype=torch.float).scatter_add_(0, labels, samples)
res = res / labels_count.float().unsqueeze(1)
|
st32497
|
worked like a charm thanks a lot! I also posted on stackoverflow 139 and got the following alternative which I leave here for future reference:
M = torch.zeros(labels.max()+1, len(samples))
M[labels, torch.arange(4)] = 1
M = torch.nn.functional.normalize(M, p=1, dim=1)
torch.mm(M, samples)
|
st32498
|
If your labels are sparse, like [1, 2, 2, 2] (where 0 is missing), the first solution doesn’t work. The second one works but also outputs the mean for label 0; here is my fix:
def mean_by_label(samples, labels):
    ''' select mean(samples), count() from samples group by labels order by labels asc '''
    weight = torch.zeros(labels.max() + 1, samples.shape[0]).to(samples.device)  # L, N
    weight[labels, torch.arange(samples.shape[0])] = 1
    label_count = weight.sum(dim=1)
    weight = torch.nn.functional.normalize(weight, p=1, dim=1)  # l1 normalization
    mean = torch.mm(weight, samples)  # L, F
    index = torch.arange(mean.shape[0])[label_count > 0]
    return mean[index], label_count[index]
|
st32499
|
Hi, thanks for sharing your code! Do you know how to calculate the variance for each label (in a similar way)?
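Not from the original thread, but one possible sketch, reusing the same one-hot weight trick as mean_by_label above and computing the per-group variance as E[x^2] - E[x]^2:
import torch

def var_by_label(samples, labels):
    # per-group (biased/population) variance for groups that actually appear
    idx = torch.arange(samples.shape[0], device=samples.device)
    weight = torch.zeros(labels.max() + 1, samples.shape[0], device=samples.device)  # L, N
    weight[labels, idx] = 1
    count = weight.sum(dim=1)
    weight = torch.nn.functional.normalize(weight, p=1, dim=1)
    mean = weight @ samples
    mean_sq = weight @ samples.pow(2)
    var = mean_sq - mean.pow(2)       # E[x^2] - E[x]^2 per group
    keep = torch.arange(mean.shape[0], device=samples.device)[count > 0]
    return var[keep], count[keep]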
|
st32500
|
As the previous solutions do not work for the case of sparse groups (e.g., not all the groups are present in the data), I made one:
def groupby_mean(value: torch.Tensor, labels: torch.LongTensor) -> (torch.Tensor, torch.LongTensor):
    """Group-wise average for (sparse) grouped tensors

    Args:
        value (torch.Tensor): values to average (# samples, latent dimension)
        labels (torch.LongTensor): labels for embedding parameters (# samples,)

    Returns:
        result (torch.Tensor): (# unique labels, latent dimension)
        new_labels (torch.LongTensor): (# unique labels,)

    Examples:
        >>> samples = torch.Tensor([
        ...     [0.15, 0.15, 0.15],  #-> group / class 1
        ...     [0.2, 0.2, 0.2],     #-> group / class 3
        ...     [0.4, 0.4, 0.4],     #-> group / class 3
        ...     [0.0, 0.0, 0.0]      #-> group / class 0
        ... ])
        >>> labels = torch.LongTensor([1, 5, 5, 0])
        >>> result, new_labels = groupby_mean(samples, labels)
        >>> result
        tensor([[0.0000, 0.0000, 0.0000],
                [0.1500, 0.1500, 0.1500],
                [0.3000, 0.3000, 0.3000]])
        >>> new_labels
        tensor([0, 1, 5])
    """
    uniques = labels.unique().tolist()
    labels = labels.tolist()

    key_val = {key: val for key, val in zip(uniques, range(len(uniques)))}
    val_key = {val: key for key, val in zip(uniques, range(len(uniques)))}

    labels = torch.LongTensor(list(map(key_val.get, labels)))
    labels = labels.view(labels.size(0), 1).expand(-1, value.size(1))

    unique_labels, labels_count = labels.unique(dim=0, return_counts=True)
    result = torch.zeros_like(unique_labels, dtype=torch.float).scatter_add_(0, labels, value)
    result = result / labels_count.float().unsqueeze(1)

    new_labels = torch.LongTensor(list(map(val_key.get, unique_labels[:, 0].tolist())))
    return result, new_labels
|
st32501
|
from torch_geometric.datasets import KarateClub
this gives me the error
OSError: /usr/local/lib/python3.7/dist-packages/torch_sparse/_convert_cpu.so: undefined symbol: _ZN6caffe28TypeMeta21_typeMetaDataInstanceIN3c107complexIfEEEEPKNS_6detail12TypeMetaDataEv
I am running on colab.
|
st32502
|
I’ve a similar issue to this: Import torch file with trained model in CUDA, in a CPU machine
I trained a model on Colab with CUDA and saved it as a model.pickle file.
When I want to load the model on my local computer via:
with open("D:/models/resnet34_new.pickle", 'rb') as file:
    model = torch.load(file, map_location=torch.device('cpu'))
it shows the error:
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
I’ve searched for solutions online, but none works for me.
What can I do to load the model?
Thank you for any help!
|
st32503
|
Have you tried saving the model as a .pt file? I’ve used that method and it works.
# Saving
path = "./your_name.pt"
torch.save(model.state_dict(), path)
# Loading
model.load_state_dict(torch.load(path, map_location=torch.device("cpu")))
|
st32504
|
Hey, I have a strange code reproducibility issue:
torch.manual_seed(0)
torch.cuda.manual_seed(0)
np.random.seed(0)
random.seed(0)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
All these options are set at the beginning of the training function, the number of dataloader workers is set to zero, and I use only one GPU for training. The dataloader loads the data once and then only reuses it.
On the first launch of the training function I get RESULT 1, but on the second and all subsequent launches I get RESULT 2.
If I increase the number of training epochs, on the first launch I get RESULT 2 plus some extra epochs, and on all subsequent launches I get RESULT 3.
Any ideas what can cause this issue?
|
st32505
|
You might get non-deterministic results if you don’t set all required flags mentioned in the reproducibility docs.
Could you take a look at these docs and add e.g. torch.use_deterministic_algorithms(True) to the script?
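For illustration, a minimal setup combining the flags discussed in this thread with torch.use_deterministic_algorithms (a sketch assuming PyTorch >= 1.8; the CUBLAS_WORKSPACE_CONFIG variable is only needed for deterministic cuBLAS behaviour on CUDA >= 10.2):
import os
import random
import numpy as np
import torch

os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # set before CUDA work starts
seed = 0
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.backends.cudnn.benchmark = False
torch.use_deterministic_algorithms(True)  # raises an error on non-deterministic ops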
|
st32506
|
In my LightningModule subclass, I used self.hparams = hparams, which worked fine a week ago but now unexpectedly raises an exception saying “AttributeError: can’t set attribute”.
I believe something has changed recently in the parent class LightningModule, as this issue would normally be solved by adding a property setter, but here I cannot access the parent class.
class T5FineTuner(pl.LightningModule):
    def __init__(self, hparams):
        super(T5FineTuner, self).__init__()
        self.hparams = hparams
Is there any way to solve this given the recent update to the source code, please?
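One commonly suggested workaround (a sketch, assuming a recent PyTorch Lightning release where hparams became a read-only property) is to store the hyperparameters via save_hyperparameters instead of assigning to self.hparams directly:
import pytorch_lightning as pl

class T5FineTuner(pl.LightningModule):
    def __init__(self, hparams):
        super().__init__()
        # stores hparams under self.hparams without triggering the property setter
        self.save_hyperparameters(hparams)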
|
st32507
|
I want to sample a tensor of probability distributions with shape (N, C, H, W), where dimension 1 (size C) contains normalized probability distributions with ‘C’ possibilities. Is there a way to efficiently sample all the distributions in the tensor in parallel? I just need to sample each distribution once, so the result could either be a one-hot tensor with the same shape or a tensor of indices with shape (N, 1, H, W).
|
st32508
|
Solved by KFrank in post #2
Hi Learned!
Yes, you can use torch.distributions.Categorical, provided you
adjust your distributions tensor so that its last dimension is the distribution
dimension.
Here is an example script:
import torch
print (torch.__version__)
_ = torch.random.manual_seed (2021)
N = 2
C = 3
H = 5
W = 7
…
|
st32509
|
Hi Learned!
LearnedLately:
Is there a way to efficiently sample all the distributions in the tensor in parallel?
Yes, you can use torch.distributions.Categorical, provided you
adjust your distributions tensor so that its last dimension is the distribution
dimension.
Here is an example script:
import torch
print (torch.__version__)
_ = torch.random.manual_seed (2021)
N = 2
C = 3
H = 5
W = 7
probs = torch.randn (N, C, H, W).softmax (1)
print ('probs = ...')
print (probs)
print ('probs.sum (1) = ...')
print (probs.sum (1))
sample = torch.distributions.Categorical (probs = probs.transpose (1, -1)).sample().transpose (-1, -2).unsqueeze (1)
print ('sample.shape =', sample.shape)
print ('sample = ...')
print (sample)
And here is its output:
1.7.1
probs = ...
tensor([[[[0.1498, 0.3152, 0.2946, 0.6541, 0.3106, 0.4475, 0.3918],
[0.1289, 0.2494, 0.5813, 0.1555, 0.2688, 0.1649, 0.6196],
[0.1607, 0.7599, 0.2339, 0.3343, 0.6459, 0.7187, 0.5310],
[0.2014, 0.0938, 0.2341, 0.8172, 0.3617, 0.0953, 0.6246],
[0.8510, 0.1427, 0.0091, 0.1163, 0.2765, 0.6657, 0.2254]],
[[0.7174, 0.1177, 0.1747, 0.1609, 0.3015, 0.0444, 0.2602],
[0.1545, 0.5129, 0.2338, 0.4810, 0.2133, 0.6208, 0.1486],
[0.3673, 0.0383, 0.2041, 0.4826, 0.0756, 0.1309, 0.2405],
[0.4219, 0.5621, 0.0419, 0.0825, 0.4854, 0.4959, 0.0707],
[0.1043, 0.7390, 0.1671, 0.5642, 0.5226, 0.3112, 0.3942]],
[[0.1329, 0.5671, 0.5306, 0.1850, 0.3879, 0.5082, 0.3480],
[0.7167, 0.2377, 0.1849, 0.3635, 0.5179, 0.2143, 0.2318],
[0.4720, 0.2018, 0.5620, 0.1831, 0.2785, 0.1503, 0.2285],
[0.3767, 0.3441, 0.7239, 0.1003, 0.1529, 0.4088, 0.3047],
[0.0447, 0.1183, 0.8238, 0.3194, 0.2009, 0.0231, 0.3803]]],
[[[0.6440, 0.1537, 0.0505, 0.0511, 0.0996, 0.1050, 0.4653],
[0.1242, 0.2676, 0.6757, 0.1266, 0.6718, 0.2993, 0.0868],
[0.7833, 0.4048, 0.6902, 0.2550, 0.2607, 0.1759, 0.1606],
[0.1922, 0.3755, 0.6223, 0.2364, 0.3413, 0.9021, 0.5981],
[0.2017, 0.5419, 0.5284, 0.3065, 0.4233, 0.1412, 0.2183]],
[[0.3134, 0.2802, 0.6204, 0.7494, 0.3884, 0.0774, 0.4969],
[0.1248, 0.6669, 0.1558, 0.2342, 0.0883, 0.0252, 0.8172],
[0.1465, 0.3188, 0.0329, 0.6245, 0.6833, 0.2322, 0.1315],
[0.4668, 0.2589, 0.2702, 0.0258, 0.3919, 0.0188, 0.1836],
[0.3882, 0.3065, 0.2767, 0.0930, 0.1194, 0.4706, 0.0861]],
[[0.0425, 0.5662, 0.3291, 0.1995, 0.5120, 0.8176, 0.0378],
[0.7510, 0.0655, 0.1685, 0.6392, 0.2399, 0.6755, 0.0960],
[0.0702, 0.2764, 0.2768, 0.1205, 0.0560, 0.5918, 0.7079],
[0.3410, 0.3655, 0.1075, 0.7378, 0.2668, 0.0791, 0.2184],
[0.4101, 0.1517, 0.1949, 0.6006, 0.4573, 0.3881, 0.6956]]]])
probs.sum (1) = ...
tensor([[[1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000],
[1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000],
[1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000],
[1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000],
[1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000]],
[[1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000],
[1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000],
[1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000],
[1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000],
[1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000]]])
sample.shape = torch.Size([2, 1, 5, 7])
sample = ...
tensor([[[[1, 0, 2, 0, 0, 0, 2],
[1, 0, 2, 2, 2, 1, 2],
[2, 0, 0, 0, 0, 0, 0],
[1, 1, 0, 0, 1, 2, 0],
[0, 1, 2, 2, 1, 0, 1]]],
[[[0, 2, 1, 1, 1, 1, 1],
[0, 1, 0, 2, 0, 0, 1],
[2, 2, 0, 0, 1, 2, 2],
[2, 2, 0, 2, 1, 0, 0],
[1, 1, 2, 2, 0, 0, 2]]]])
Best.
K. Frank
|
st32510
|
Hi,
If I move a tensor to Cuda using .to(device), does it mean that then all the arithmetic operations done with that tensor will happen on Cuda?
Thanks.
|
st32511
|
Solved by Tejan_Mehndiratta in post #3
Yes, it’s true. If you move a tensor to CUDA and then do any operation on that tensor, the operation happens on CUDA.
Thanks.
|
st32512
|
My understanding is that any operation you do on a tensor that is on the device will happen on the GPU; you can also use all the torch methods on that tensor, and those operations will also happen on the GPU.
|
st32513
|
Yes, it’s true. If you move a tensor to CUDA and then do any operation on that tensor, the operation happens on CUDA.
Thanks.
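A quick illustrative check (assuming a CUDA-capable machine):
import torch

if torch.cuda.is_available():
    x = torch.randn(3, device="cuda")
    y = x * 2 + 1       # the arithmetic runs on the GPU
    print(y.device)     # cuda:0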
|
st32514
|
Hello and regards…
In the test phase, after changing one layer, I cannot feed data to the changed model and get the output images. Please guide me…
my code:
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import matplotlib.pyplot as plt
from torch.utils.data.sampler import SubsetRandomSampler
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
#Converting data to torch.FloatTensor
transform = transforms.ToTensor()
# Download the training and test datasets
train_data = datasets.MNIST(root='data', train=True, download=True, transform=transform)
test_data = datasets.MNIST(root='data', train=False, download=True, transform=transform)
#Prepare data loaders
train_loader = torch.utils.data.DataLoader(train_data, batch_size=32, num_workers=0)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=32, num_workers=0)
class ConvAutoencoder(nn.Module):
    def __init__(self):
        super(ConvAutoencoder, self).__init__()
        # Encoder
        self.conv1 = nn.Conv2d(1, 16, 3, stride=2, padding=1)
        self.conv2 = nn.Conv2d(16, 8, 3, stride=2, padding=1)
        self.conv3 = nn.Conv2d(8, 8, 3)
        # Decoder
        self.conv4 = nn.ConvTranspose2d(8, 8, 3)
        self.conv5 = nn.ConvTranspose2d(8, 16, 3, stride=2, padding=1, output_padding=1)
        self.conv6 = nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = F.relu(self.conv3(x))
        x = F.relu(self.conv4(x))
        x = F.relu(self.conv5(x))
        x = F.relu(self.conv6(x))
        return x
#Instantiate the model
model = ConvAutoencoder()
print(model)
def train(model, num_epochs=20, batch_size=64, learning_rate=1e-3):
    torch.manual_seed(42)
    criterion = nn.MSELoss()  # mean square error loss
    optimizer = torch.optim.Adam(model.parameters(),
                                 lr=learning_rate,
                                 weight_decay=1e-5)  # <--
    # train_loader = train_loader
    outputs = []
    for epoch in range(num_epochs):
        for data in train_loader:
            img, _ = data
            recon = model(img)
            loss = criterion(recon, img)
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
        print('Epoch:{}, Loss:{:.4f}'.format(epoch + 1, float(loss)))
        outputs.append((epoch, img, recon),)
    return outputs
model = ConvAutoencoder()
max_epochs =10
outputs = train(model, num_epochs=max_epochs)
for k in range(0, max_epochs, 9):
    plt.figure(figsize=(9, 2))
    imgs = outputs[k][1].detach().numpy()
    recon = outputs[k][2].detach().numpy()
    for i, item in enumerate(imgs):
        if i >= 9: break
        plt.subplot(2, 9, i + 1)
        plt.imshow(item[0])
    for i, item in enumerate(recon):
        if i >= 9: break
        plt.subplot(2, 9, 9 + i + 1)
        plt.imshow(item[0])
a=(ConvAutoencoder().conv3.weight)
a0=a[:,0,:,:]
a1=a[:,1,:,:]
a2=a[:,2,:,:]
a3=a[:,3,:,:]
a4=a[:,4,:,:]
a5=a[:,5,:,:]
a6=a[:,6,:,:]
a7=a[:,7,:,:]
a0=a1
a1=a2
a2=a3
a3=a4
a4=a5
a5=a6
a6=a7
a7=a0
a = torch.cat((a0, a1, a2, a3, a4, a5, a6, a7))
model = ConvAutoencoder()
a = a.reshape(8, 8, 3, 3)
model.conv3.weight = nn.Parameter(a)
print(model.conv3.weight)
def test(model, test_loader):
    with torch.no_grad():
        for data in test_loader:
            output = model(data)
        return output
output.view(1, 28, 28)
error of my code:
<ipython-input-23-b8f79c214741> in <module>()
25 # plt.imshow(item)
26
---> 27 output.view(1, 28, 28)
NameError: name 'output' is not defined
|
st32515
|
Solved by ptrmcl in post #7
I think you can just remove that line? I think the output dim will be batch size x 1 x 28 x 28
|
st32516
|
I think it’s because output is defined in the test function’s scope. You try calling view outside this scope, so it’s undefined.
|
st32517
|
I want to feed the test data to the changed model in the test phase and take the outputs, to check how the outputs change when one layer is changed.
|
st32518
|
I think you can just remove that line? I think the output dim will be batch size x 1 x 28 x 28
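As a sketch of that suggestion (not the original poster’s code), the test loop could collect the reconstructions inside the function instead of calling .view() outside its scope; note the MNIST loader yields (image, label) pairs:
import torch

def test(model, test_loader):
    model.eval()
    outputs = []
    with torch.no_grad():
        for img, _ in test_loader:
            recon = model(img)      # shape: [batch_size, 1, 28, 28]
            outputs.append(recon)
    return torch.cat(outputs)

# reconstructions = test(model, test_loader)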
|
st32519
|
I keep getting this issue when running DDP in pytorch:
Traceback (most recent call last):
File "ml4coq-proj/embeddings_zoo/tree_nns/main_brando.py", line 330, in <module>
main_distributed()
File "ml4coq-proj/embeddings_zoo/tree_nns/main_brando.py", line 230, in main_distributed
mp.spawn(fn=train, args=(opts,), nprocs=opts.world_size)
File "/home/miranda9/miniconda3/envs/automl-meta-learning/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 199, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/home/miranda9/miniconda3/envs/automl-meta-learning/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 157, in start_processes
while not context.join():
File "/home/miranda9/miniconda3/envs/automl-meta-learning/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 105, in join
raise Exception(
Exception: process 1 terminated with signal SIGSEGV
but this error is rather uninformative (it doesn’t tell me which process failed or what it was trying to access, for example), so I am unsure what I need to do to solve it.
Some research tells you that:
SIGSEGV: On a Unix operating system such as Linux, a “segmentation violation” (also known as “signal 11”, “SIGSEGV”, “segmentation fault” or, abbreviated, “sig11” or “segfault”) is a signal sent by the kernel to a process when the system has detected that the process was attempting to access a memory address that does not belong to it. Typically, this results in the offending process being terminated.
Yes, I do have multiprocessing code, as the usual mp.spawn(fn=train, args=(opts,), nprocs=opts.world_size) requires.
First I read the docs on sharing strategies, which talk about how tensors are shared in PyTorch:
Note that it applies only to CPU tensor - CUDA tensors will always use the CUDA API, as that’s the only way they can be shared.
I was using the file_system sharing strategy since it seemed to give me fewer issues when I needed lots of processes, but I went down to only 2 processes and 2 GPUs and to the file_descriptor sharing strategy. I thought that perhaps if the processes had their own cached file descriptors then there wouldn’t be issues.
I did check the available CUDA devices:
$ echo $CUDA_VISIBLE_DEVICES
1,3
all seems fine.
I am unsure what might be causing the issue. There are possible issues like:
two processes are trying to checkpoint at the same time but I always only let rank=0 do the checkpointing so that doesn’t make sense.
two processes are writing to tensorboard but I also only allow rank=0 to do the logging (or any of the printing).
So I am unsure what could be causing the issue. It could be that having my dataset concatenated into 1 single json file is causing the issue, but that wasn’t causing issues yesterday with multiple GPUs… though, if that is the case, it would be hard to fix, since DDP (distributed data parallel) uses the DistributedSampler, which doesn’t place any restriction like that on my dataset or dataloaders… or at least as far as I know (afaik).
Last thing: yesterday I was getting a weird error too, and somehow it occurred to me to check the GPU type. I was getting an issue because I was using a K40 GPU. I made sure that was not the case. Yesterday I was using a Quadro RTX 6000; today it seems these are the GPUs I got:
$ nvidia-smi
Tue Mar 2 12:15:04 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.36.06 Driver Version: 450.36.06 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 TITAN X (Pascal) Off | 00000000:02:00.0 Off | N/A |
| 22% 37C P0 56W / 250W | 0MiB / 12196MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 TITAN X (Pascal) Off | 00000000:03:00.0 Off | N/A |
| 24% 39C P0 56W / 250W | 0MiB / 12196MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 2 TITAN X (Pascal) Off | 00000000:82:00.0 Off | N/A |
| 53% 84C P2 244W / 250W | 11935MiB / 12196MiB | 57% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 3 TITAN X (Pascal) Off | 00000000:83:00.0 Off | N/A |
| 25% 39C P0 56W / 250W | 0MiB / 12196MiB | 3% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 2 N/A N/A 31809 C python 11933MiB |
+-----------------------------------------------------------------------------+
Not sure if that is causing the issue, but it’s not always realistic to get the Quadros, so I want it to work for the Titans too (and anything that isn’t the K40s, since the K40s seem to no longer be supported by PyTorch).
There are a few PyTorch discussion forum posts and GitHub issues, but none seems very helpful (to me at least; it’s not clear what they did to solve things despite the discussion ending):
Multiprocessing using torch.multiprocessing - #3 by Brando_Miranda
Using torch.Tensor over multiprocessing.Queue + Process fails - #12 by eacousineau
Process 3 terminated with signal SIGSEGV · Issue #1720 · pytorch/fairseq · GitHub
SIGSEGV while running train.py on a multi GPU setup · Issue #1308 · pytorch/fairseq · GitHub
Exception: process 0 terminated with signal SIGSEGV · Issue #118 · facebookresearch/SlowFast · GitHub
Segmentation fault
crossposted:
python - How to fix a SIGSEGV in pytorch when using distributed training (e.g. DDP)? - Stack Overflow
https://www.reddit.com/r/pytorch/comments/lwbb72/how_to_fix_a_sigsegv_in_pytorch_when_using/
|
st32520
|
Hey @Brando_Miranda,
I have a very similar, if not the same, issue (difficult to say). Have you found a solution to this problem? In my case the issue also occurs rather infrequently. Running on the same server (same GPUs, environment, etc.), training my model sometimes succeeds and sometimes ends with SIGSEGV.
Cheers
Edit: If it is of any help, I posted my code here.
|
st32521
|
Hi Dsethcz. I have not been able to solve the weird memory errors. However, I noticed that it happened only at the end of my training script (i.e., once the dist group was destroyed). So in theory my model seemed to have trained to the end, and the issue is something in shutting down the distributed setup. From skimming your posts it seems you’re having the same behaviour. Did you try checkpointing your model, or doing whatever you had to do, up to the point where you have to destroy the dist group?
For me, I just collect all the processes with some wait call and then allow them to crash (so that the same error occurs), and it seems to give the same error. I think it must be something with my server, because on the DGX I have access to I don’t think it has ever happened. I recommend ignoring it if you fall into my scenario.
Best of luck!
|
st32522
|
Hmm, that is not an optimal solution…
Have you tested whether it trains correctly until the crash occurs (for example, by comparing to single-GPU or DataParallel training)?
|
st32523
|
Single GPU works fine. The DGX machine works fine. I can’t see a pattern in which GPU is crashing on me.
I don’t use DataParallel, so no.
Yeah, I know it’s suboptimal, but sometimes, due to the law of diminishing returns, the last tiny gain (which is just that my script doesn’t print an error) isn’t worth the days/weeks of effort I’ve already put into solving it. It’s also not really my job to fix PyTorch not working. So I am happy with my current solution as long as it trains to completion, as it does now.
|
st32524
|
Not really a solution, but a workaround that seems to work: I downgraded from Python 3.9.1 to 3.8.7. A more detailed report can be found here.
|
st32525
|
I also played around with versions of things. Python 3.8, PyTorch 1.7.1, and CUDA 10.2 (but with a driver of at least 11.0) seemed to work consistently for me.
Glad your Python version change worked.
|
st32526
|
Windows 10.
I installed PyTorch with CUDA via:
conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia
Now:
torch.cuda.is_available()
>>> False
But:
torch.backends.cudnn.enabled
>>> True
|
st32527
|
Did the install logs show that the cudatoolkit was indeed installed or did you see a fallback to a CPU version?
If you’ve verified that the CUDA version was indeed picked, the error might be raised by a setup issue, i.e. your GPU cannot be detected.
Is nvidia-smi working properly, and do you have a sufficiently new NVIDIA driver to run the CUDA 11.1 runtime?
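A quick way to check what was actually installed (illustrative; the exact version strings will differ):
import torch

print(torch.__version__)            # e.g. '1.8.1+cu111' for a CUDA build, '+cpu' for a CPU-only wheel
print(torch.version.cuda)           # CUDA runtime the binary was built with (None for CPU-only builds)
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))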
|
st32528
|
[screenshot 712×366]
Also, if I run a model on the GPU, it shows AssertionError: Torch not compiled with CUDA enabled
|
st32529
|
This could mean that a CPU-only package is already installed. Create a new virtual environment and reinstall PyTorch there (and check the install logs for cudatoolkit or the corresponding CUDA tag in the wheel name) or alternatively uninstall all PyTorch packages in your current environment and reinstall it again.
|
st32530
|
I would like to know if there is a Discord server for the PyTorch community.
Some technologies have created their own servers on Discord.
What do you all think?
Sorry for the off-topic.
Best regards,
Matheus Santos.
|
st32531
|
In my project, I am trying to compare two techniques:
spatial convolution and filtering via FFT.
At this point I am only focused on the number of operations; optimizing parallelism is not part of my scope.
So, in theory, the techniques I am comparing against the standard ones should be faster because they require fewer operations. But that hasn’t been the case when I time both executions. I am assuming that, though running on the CPU, there is still a lot of parallelism going on.
So, how do I turn that off?
From the torch notes I am assuming it would be something like this:
torch.set_num_interop_threads(1)
torch.set_num_threads(1)
But when timing both techniques, that still doesn’t give the results I am looking for; that is, spatial convolution still takes less time than filtering via FFT.
PS: These are the accumulated timings for 128 runs of:
Standard Convolution
Custom implementation of Convolution, based on GEMM method
FFT filtering python implementation
FFT filtering C++ extension implementation
# time_default, time_custom, time_python, time_cpp
0.8500628471374512, 5.313140630722046, 2.029076337814331, 1.942748785018921
# time_default, time_custom, time_python, time_cpp
# Setting threads num. to 1
0.894277811050415, 5.291566371917725, 1.9720547199249268, 1.8894264698028564
How come the standard conv2d is way faster when it is supposed to be slower?
|
st32532
|
What does “standard conv2d” mean in this case? Are you referring to just calling F.conv2d as-is? Even with a single thread, this will typically dispatch to an MKLDNN implementation on many systems, which means it will use a highly optimized vectorized and memory-hierarchy-aware implementation. You are correct in that there will still be implicit parallelism here due to the use of vectorized instructions. On the other hand, even without parallelism, performance is going to be highly dependent on how the computation aligns with a CPU’s caches (which again is going to be highly optimized here).
So in practice, even if there is some theoretical complexity improvement, this improvement can often be difficult to realize given that different algorithms will often have different memory access patterns and different levels of vectorization friendliness. It becomes even more difficult to compare when the baseline is a highly tuned library implementation (as F.conv2d will use in this case).
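For what it’s worth, a minimal timing sketch with the thread counts pinned (sizes are made up; note that set_num_interop_threads must be called before any parallel work starts):
import time
import torch
import torch.nn.functional as F

torch.set_num_interop_threads(1)   # inter-op parallelism
torch.set_num_threads(1)           # intra-op parallelism

x = torch.randn(1, 3, 256, 256)
w = torch.randn(8, 3, 5, 5)

start = time.perf_counter()
for _ in range(128):
    F.conv2d(x, w)
print("conv2d, 128 runs:", time.perf_counter() - start)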
|
st32533
|
Hi,
I saw that in 1.5 the usage of tensor.data was removed from most places.
I wonder why?
I want to change weights explicitly so that forward and backward will not use the same data (I have used .data so far to do this). What options do I have for that?
|
st32534
|
Solved by albanD in post #4
can you elaborate on the side effects you know?
Break the computational graph
Break the inplace correctness checks
Allow untracked inplace changes
Allow inplace metadata changes of a Tensor
When using nn.Parameter, it alias with the .data in that class that refers to the underlying Tensor.
whe…
|
st32535
|
Hi,
because it has many side effects that do more harm than good.
You can use t2 = t.detach() to get a new Tensor that has the same content but does not share the gradient history.
For modifying a Tensor inplace, you can also use things like
with torch.no_grad():
# Ops here won't be tracked
t.zero_()
|
st32536
|
Can you elaborate on the side effects you know of?
When is it a good idea to still use .data?
Sometimes changing in place will raise autograd errors.
It seems to me that tensor.data.clone() is slightly more efficient than tensor.detach().clone() when we explicitly want to clone just the data.
Similarly, when we want to send just the data to a device:
a = tensor.data.to(device)
# vs
a = tensor.detach().to(device)
# vs*
with torch.no_grad():
    a = tensor.to(device)  # doesn't it send the grad too?
|
st32537
|
can you elaborate on the side effects you know?
Break the computational graph
Break the inplace correctness checks
Allow untracked inplace changes
Allow inplace metadata changes of a Tensor
When using nn.Parameter, it aliases with the .data attribute in that class that refers to the underlying Tensor.
when is it good idea to still use data?
For 99.9% of the users: never.
The only case where it is at the moment (until we provide an API for it) is to go around the inplace correctness checks when they are too strict and you know exactly what you’re doing.
seems to me like tensor.data.clone() is slightly more efficient than tensor.detach().clone()
What do you mean by more efficient?
If you mean faster, then not really. Both do a shallow copy of the Tensor. .detach() might be imperceptibly faster as it does not need to recreate inplace correctness tracking metadata.
For sending just the content to a new device: a = tensor.detach().to(device) will do the trick.
|
st32538
|
Thanks, this is very informative. Indeed I use data to overcome inplace correctness checks.
Didn’t know data does a shallow copy.
You say we can just avoid the correctness checks under torch.no_grad?
One thing I want to fully understand is the aliasing you mentioned in nn.Parameter:
a = torch.randn(1)
b = nn.Parameter(a)
When we do b.data, is it exactly like calling a.data? Is that what you mean? Is it a problem?
|
st32539
|
You say we can just avoid the correctness checks under torch.no_grad?
No, you only ignore these ops for gradient computation.
Avoiding correctness checks is the last “valid” use case for .data in the sense that we don’t have another way to do this yet (be careful though! You can get wrong gradients by doing so!!)
When we do b.data is it exactly like calling a.data ?
I am not 100% sure what the behavior is now that Variables don’t exist, but I would avoid it.
But yes I think the two are the same. And so have the same limitations as .data in general.
|
st32540
|
Hi @albanD,
I don’t understand what are the main differences between calling a tensor.data and tensor.detach().
Could you please help me out here?
My understanding is just this:
Both the methods are removing the tensor from the computational graph.
But I don’t understand the difference the two methods provide.
Thanks.
|
st32541
|
Hi,
The difference is that one does it in a way that the autograd knows about and the other (.data) hides it from the autograd.
This means that many of the sanity checks that the autograd does to ensure it always return correct gradients won’t be able to run properly if you use .data.
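A small illustration of the difference (a sketch, not from the thread): modifying a saved intermediate through .data silently corrupts the backward pass, while the tracked in-place version is caught by the version-counter check.
import torch

x = torch.tensor([2.0], requires_grad=True)
y = x ** 2
z = y ** 2                # backward of z needs the saved value of y

y.data.mul_(10)           # hidden from autograd: no error, but the saved y is corrupted
z.backward()
print(x.grad)             # silently wrong gradient

# the tracked equivalent, y.mul_(10), would instead raise:
# "one of the variables needed for gradient computation has been modified by an inplace operation"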
|
st32542
|
I am using PyTorch, and I need to calculate the gradients of the network at a series of points to add to the loss.
I used the code in the function get_sdf_gradient_dont_work:
[screenshot 942×429]
and I got a memory leak on the GPU.
But when I use the code in the function get_sdf_gradient_work, it seems to work (not the functionality I need, but no memory leak).
The network is a simple MLP using only linear layers and LeakyReLU.
and the multiresolution function is:
@staticmethod
def multi_resolution(points, resolutions):
    resolutions_sin = torch.cat([torch.sin((2 ** i) * pi * points) for i in range(1, resolutions)], dim=-1)
    resolutions_cos = torch.cat([torch.cos((2 ** i) * pi * points) for i in range(1, resolutions)], dim=-1)
    return torch.cat([points, resolutions_sin, resolutions_cos], dim=-1)
Does someone know how to solve this?
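Without seeing the screenshotted code, a common pattern for this kind of loss term (a sketch with hypothetical names, not the poster’s code): compute the input gradient with torch.autograd.grad and create_graph=True so it can be added to the loss, instead of calling backward() inside the loop.
import torch

def sdf_gradient(model, points):
    points = points.detach().requires_grad_(True)
    sdf = model(points)
    grad = torch.autograd.grad(
        outputs=sdf,
        inputs=points,
        grad_outputs=torch.ones_like(sdf),
        create_graph=True,      # keeps the gradient differentiable so it can enter the loss
    )[0]
    return grad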
|
st32543
|
I read in the blog post when PyTorch 1.7 was released that PyTorch now supports A100 GPUs. However, the pre-built binaries apparently do not support the sm_80 arch, and PyTorch has to be built from source to support it. Is this accurate? Is there a comprehensive guide on how to build from source and enable sm_80 arch support? Or even better, is there a proper PyTorch Docker image that supports sm_80?
|
st32544
|
amirhf:
However the pre-built binaries apparently do not support sm_80 arch and pytorch has to be built from source to support that. Is this accurate?
No, that’s not true and the binaries are shipping with sm_80 since the 1.7.0 release.
|
st32545
|
Hi @ptrblck - I just updated my PyTorch to version 1.8.1 and I am getting an error which basically says that sm_80 is not supported on my A100 GPU. I opened a separate thread about this here: NVIDIA A100 GPU - RuntimeError: cuDNN error: CUDNN_STATUS_MAPPING_ERROR
|
st32546
|
Right - I think I have fixed this now. I’ll post the solution in the other thread.
|
st32547
|
I have created a custom layer in which I have initialized a Glorot (Xavier) uniform weight given input_shape = (1, 3, 299, 299).
class CustomLayer(torch.nn.Module):
    def __init__(self, input_shape):
        super(CustomLayer, self).__init__()
        zeros = torch.zeros(input_shape)
        self.weights = torch.nn.Parameter(zeros)
        torch.nn.init.xavier_uniform_(self.weights)

    def forward(self, x):
        out = torch.tanh(self.weights)
        return out + x

class MimicAndFool(torch.nn.Module):
    def __init__(self, input_shape):
        super(MimicAndFool, self).__init__()
        self.custom = CustomLayer(input_shape)

    def forward(self, x):
        out = self.custom(x)
        # statement 2
        # statement 3...
        # statement n
        return out
After I print the summary:
input_shape = (1, 3, 299, 299)
maf = MimicAndFool(input_shape)
summary(maf, (3, 299, 299))
it says that the number of parameters for this custom layer is zero!
[screenshot 807×127]
Please tell me what I’m missing. The number of parameters should have been 3 * 299 * 299 = 268203.
|
st32548
|
Solved by ptrblck in post #2
Your custom layer contains the weigth parameter printed by this code snippet:
class CustomLayer(torch.nn.Module):
def __init__(self, input_shape):
super(CustomLayer,self).__init__()
zeros = torch.zeros(input_shape)
self.weights = torch.nn.Parameter(zeros)
torch.n…
|
st32549
|
Your custom layer contains the weigth parameter printed by this code snippet:
class CustomLayer(torch.nn.Module):
    def __init__(self, input_shape):
        super(CustomLayer, self).__init__()
        zeros = torch.zeros(input_shape)
        self.weights = torch.nn.Parameter(zeros)
        torch.nn.init.xavier_uniform_(self.weights)

    def forward(self, x):
        out = torch.tanh(self.weights)
        return out + x

class MimicAndFool(torch.nn.Module):
    def __init__(self, input_shape):
        super(MimicAndFool, self).__init__()
        self.custom = CustomLayer(input_shape)

    def forward(self, x):
        out = self.custom(x)
        # statement 2
        # statement 3...
        # statement n
        return out
layer = CustomLayer((1, 1))
print(dict(layer.named_parameters()))
> {'weights': Parameter containing:
tensor([[-0.7979]], requires_grad=True)}
module = MimicAndFool((1, 1))
print(dict(module.named_parameters()))
> {'custom.weights': Parameter containing:
tensor([[0.8589]], requires_grad=True)}
so I assume torchsummary might not return the right values for this custom module.
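You can also count the parameters directly, independent of torchsummary (a sketch reusing the MimicAndFool class from the snippet above):
maf = MimicAndFool((1, 3, 299, 299))
n_params = sum(p.numel() for p in maf.parameters() if p.requires_grad)
print(n_params)  # 1 * 3 * 299 * 299 = 268203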
|
st32550
|
So I tried training the model, but the loss remains almost the same. The difference between the initial and final weights turns out to be zero, meaning that no training has actually happened for those weights. I’ve frozen all of the other layers except the custom layer.
[screenshot 1459×845]
|
st32551
|
Could you check, if you get valid gradients in the custom module by calling
print(model.custom.weights.grad)
after the backward() call?
|
st32552
|
[screenshot 852×685]
I do get grads at every iteration, but it seems they are very small.
|
st32553
|
Oh, I found out the reason why the final minus initial weights were giving me a zero tensor: they were both pointing to the same object. When I printed them, I got to see their difference.
|
st32554
|
Hello, I have the same question as you.
Could you tell me more about how you handled the 0 trainable parameters problem in the PyTorch summary?
|
st32555
|
Hello everybody,
I just came across a behavior that I would expect to throw an exception.
If you feed None as hidden state to nn.GRU or nn.LSTM, it won’t throw an exception.
Instead it will use a hidden state made of zeros.
I’m wondering whether this behavior is intentional or not.
torch version 1.8.1+cu111
|
st32556
|
This is intentional.
If you look at the source code you will see:
github.com
pytorch/pytorch/blob/master/torch/nn/modules/rnn.py#L659
if isinstance(orig_input, PackedSequence):
    input, batch_sizes, sorted_indices, unsorted_indices = input
    max_batch_size = batch_sizes[0]
    max_batch_size = int(max_batch_size)
else:
    batch_sizes = None
    max_batch_size = input.size(0) if self.batch_first else input.size(1)
    sorted_indices = None
    unsorted_indices = None

if hx is None:
    num_directions = 2 if self.bidirectional else 1
    real_hidden_size = self.proj_size if self.proj_size > 0 else self.hidden_size
    h_zeros = torch.zeros(self.num_layers * num_directions,
                          max_batch_size, real_hidden_size,
                          dtype=input.dtype, device=input.device)
    c_zeros = torch.zeros(self.num_layers * num_directions,
                          max_batch_size, self.hidden_size,
                          dtype=input.dtype, device=input.device)
    hx = (h_zeros, c_zeros)
else:
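A quick check that passing hx=None is equivalent to passing explicit zeros (illustrative sizes):
import torch

rnn = torch.nn.GRU(input_size=4, hidden_size=8, batch_first=True)
x = torch.randn(2, 5, 4)

out_none, h_none = rnn(x)                  # hx=None -> zero-initialized hidden state
out_zero, h_zero = rnn(x, torch.zeros(1, 2, 8))
print(torch.allclose(out_none, out_zero))  # True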
|
st32557
|
RuntimeError Traceback (most recent call last)
<ipython-input-53-62648c78c606> in <module>
6 val_loss , val_accuracy = [], []
7 # Train and evaluate
----> 8 model, hist = train_model(model, data_loaders, criterion, optimizer_ft, num_epochs=num_epochs)
<ipython-input-43-7052c1a4d9e9> in train_model(model, dataloaders, criterion, optimizer, num_epochs)
51 # statistics
52 running_loss += loss.item() * inputs.size(0)
---> 53 running_corrects += torch.sum(preds == labels.data)
54
55 epoch_loss = running_loss / len(dataloaders[phase].dataset)
~/anaconda3/lib/python3.7/site-packages/torch/tensor.py in wrapped(*args, **kwargs)
25 return handle_torch_function(wrapped, args, *args, **kwargs)
26 try:
---> 27 return f(*args, **kwargs)
28 except TypeError:
29 return NotImplemented
RuntimeError: The size of tensor a (32) must match the size of tensor b (23) at non-singleton dimension 1
|
st32558
|
After reading the RNN source code in PyTorch and some blogs about RNNs, I want to verify my thoughts about RNN training. That is, for example, if we change batch_size from 1 to 4, the training process is totally different.
Here is the explanation about nn.RNN() in the PyTorch documentation:
h_n of shape (num_layers * num_directions, batch, hidden_size)
If batch_size = 1, then we only get one initial hidden state (typically a tensor of all zeros), while if batch_size = 4, we get four initial hidden states, which means the input hidden state when training on the fourth word is now different from the input hidden state of the previous training process.
Here is my reasoning: in the former case (batch_size = 1), the input hidden state of the fourth word contains the information of the first three words, while in the latter case it is all zeros. Therefore, the two training processes are totally different.
Can anybody tell me whether my thoughts are right or not?
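A small experiment that may help check this (illustrative sizes): processing one sequence step by step while carrying the hidden state reproduces the full-sequence result, whereas resetting the hidden state to zeros for each chunk (which is what treating the chunks as independent batch entries amounts to) would not.
import torch

rnn = torch.nn.RNN(input_size=1, hidden_size=3, batch_first=True)
x = torch.randn(1, 4, 1)              # one sequence of 4 steps

out_full, _ = rnn(x)                  # full sequence, hidden state carried internally

h, outs = None, []
for t in range(4):
    o, h = rnn(x[:, t:t + 1, :], h)   # carry h across chunks
    outs.append(o)
print(torch.allclose(out_full, torch.cat(outs, dim=1), atol=1e-6))  # True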
|
st32559
|
def forward(self, input):
    H1 = self.hidden_layer_1(input)
    H1 = self.relu(H1)
    final_inputs = self.output(H1)

    optimizer = torch.optim.SGD(self.parameters(), lr=lr, momentum=momentum)

    weights = np.load('./w0.npy')
    bias = np.load('./b0.npy')
    self.hidden_layer_1.weight.data = torch.from_numpy(weights).float()
    self.hidden_layer_1.bias.data = torch.from_numpy(bias).float()

    # freeze
    for param in self.hidden_layer_1.parameters():
        param.requires_grad = False
I am trying to freeze the layer but the loss is still changing after epochs.
|
st32560
|
Is hidden_layer_1 the only layer in your model?
Show more complete code if possible.
|
st32561
|
But I saw that you call self.output in this forward function. Doesn’t self.output have trainable parameters?
Because if your model is still learning, it means that something is changing somewhere. If you only have one layer and you want to freeze it, what is the point of training the model here then?
|
st32562
|
I was checking whether the freezing part works for the first layer; then I would add a second hidden layer which I would not freeze.
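For reference, a common freezing pattern (a sketch with made-up layer sizes, not the poster’s model): set requires_grad = False before building the optimizer, outside of forward, and pass only the trainable parameters to the optimizer.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 20),   # stand-in for hidden_layer_1
    nn.ReLU(),
    nn.Linear(20, 1),    # output layer, left trainable
)
for param in model[0].parameters():
    param.requires_grad = False

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=0.01, momentum=0.9
)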
|
st32563
|
Hello,
I’m new to PyTorch and I have trouble understanding how my LSTM works with different input shapes of my data.
My data consists of signals where each sequence has a length of ~300,000. The sequence length varies between 5,000 and 500,000, but mostly it is around 300,000. I am trying to solve a many-to-one task, so one sequence belongs to one class. In total I have 65,000 different sequences (each around 300,000 time steps long).
Example:
Sequence 1 (length: 300 000) belongs to class 0
Sequence 2 (length: 310 000) belongs to class 1
Sequence 3 (length: 290 000) belongs to class 2
Sequence 4 (length: 280 000) belongs to class 0
…
I ran my LSTM with two different input shapes, with batch_first = True:
(1, sequence, 1): Here I had to limit the sequence length to 500 to be able to train the network. I’m planning to use batches here, where one batch contains the whole sequence (600, 500, 1). I assume that the network gets one value at a time, but the training curve does not decrease.
(1, 1, sequence):
Here I padded or cut the sequences to an equal length (250,000) for all sequences.
I assume that the network gets the whole sequence at the same time; the training curve decreases.
Both implementations were working and giving some results (not good ones, but results).
Could someone explain to me the difference in how the network works with the different input shapes?
Do both make sense?
Thanks for any help
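A sketch contrasting the two shapes (layer sizes are made up): with (batch, seq, 1) the LSTM recurs over seq time steps with one feature each, while with (batch, 1, seq) there is a single time step with seq features, i.e. no recurrence at all.
import torch

lstm_steps = torch.nn.LSTM(input_size=1, hidden_size=16, batch_first=True)
x_steps = torch.randn(1, 500, 1)        # 500 recurrent steps, 1 feature per step
out, _ = lstm_steps(x_steps)
print(out.shape)                        # torch.Size([1, 500, 16])

lstm_once = torch.nn.LSTM(input_size=250_000, hidden_size=16, batch_first=True)
x_once = torch.randn(1, 1, 250_000)     # a single step with 250000 features
out, _ = lstm_once(x_once)
print(out.shape)                        # torch.Size([1, 1, 16])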
|
st32564
|
Hi, I know there is a 2D RoI Align in TorchVision (torch.ops.torchvision.roi_align).
However, I am using 3D data (feature maps with shape Batch, Channel, Depth, Height, Width).
Is there a 3D RoI Align available? Or how can I modify the 2D RoI Align to make it work for 3D data?
Many thanks.
|
st32565
|
When PyTorch is installed, it always also installs its own CUDA libraries for a specific CUDA version, and for one PyTorch version it is possible to install that version with various versions of the CUDA library included.
I have PyTorch version 1.7.1; how can I figure out, from within Python, the version of the CUDA library that was installed with it?
|
st32566
|
Solved by eqy in post #2
Does torch.version.cuda work?
|
st32567
|
Yes that worked, thank you!
I tried all variations of torch.cuda.__version__ instead
|