st47168 | I think the confusion is due to the limitation of the text. Check the derivation below and I am happy to help with any more questions.
Note: to implement this in PyTorch, you need to respect the dimensions (e.g., adding squeeze and unsqueeze where necessary).
[Attached image: handwritten derivation, 1280×997] |
st47169 | I am trying to implement a function similar to numpy.roll. To keep the computation on the GPU, I want to use only torch tensors, but it seems to be hard.
Ultimately, I want to implement image gradient with forward difference and Neumann boundary condition. For example, the numpy version of it is as follows:
def grad(u):
    # u: 2-d image
    ux = np.roll(u, -1, axis=1) - u
    uy = np.roll(u, -1, axis=0) - u
    ux[:, -1] = 0
    uy[-1, :] = 0
    return ux, uy
I tried to use a [-1, 1] filter with torch.nn.functional.conv2d, but because there is no boundary option other than zero padding and the filter has an even size, it gets complicated.
Is there someone who can help implement one like np.roll? |
st47170 | Thanks for your reply, although I checked before. As you said, it seems hard to implement. |
st47171 | It should be quite simple to implement yourself. Just slice the tensor into two pieces, swap them, and cat along the same dimension that you used to split. |
st47172 | I made an account to say that I think that's a bit yucky; it's nicer to read roll. |
st47173 | I haven't tested this extensively, but it seems to cover the case where you just want a single split. The logic for a negative shift could probably be cleaned up a little:
def roll(tensor, shift, axis):
    if shift == 0:
        return tensor

    if axis < 0:
        axis += tensor.dim()

    dim_size = tensor.size(axis)
    after_start = dim_size - shift
    if shift < 0:
        after_start = -shift
        shift = dim_size - abs(shift)

    before = tensor.narrow(axis, 0, dim_size - shift)
    after = tensor.narrow(axis, after_start, shift)
    return torch.cat([after, before], axis) |
st47174 | Simple solution to roll around first axis:
def roll(x, n):
    return torch.cat((x[-n:], x[:-n]))
Test like:
x = torch.arange(5)
print("Orig:", x)
print("Roll 2:", roll(x, 2))
print("Roll -2:", roll(x, -2))
Outputs:
Orig: tensor([0, 1, 2, 3, 4])
Roll 2: tensor([3, 4, 0, 1, 2])
Roll -2: tensor([2, 3, 4, 0, 1])
To roll around second axis, use:
def roll_1(x, n):
    return torch.cat((x[:, -n:], x[:, :-n]), dim=1)
It probably can be generalised, but I didn’t need it. |
st47175 | @jaromiru thank you!
The solution below is a generalization of yours to an arbitrary axis:
def roll(x: torch.Tensor, shift: int, dim: int = -1, fill_pad: Optional[int] = None):
    if 0 == shift:
        return x
    elif shift < 0:
        shift = -shift
        gap = x.index_select(dim, torch.arange(shift))
        if fill_pad is not None:
            gap = fill_pad * torch.ones_like(gap, device=x.device)
        return torch.cat([x.index_select(dim, torch.arange(shift, x.size(dim))), gap], dim=dim)
    else:
        shift = x.size(dim) - shift
        gap = x.index_select(dim, torch.arange(shift, x.size(dim)))
        if fill_pad is not None:
            gap = fill_pad * torch.ones_like(gap, device=x.device)
        return torch.cat([gap, x.index_select(dim, torch.arange(shift))], dim=dim) |
st47176 | I tried to use yours, but I get a compilation error saying that Optional is not defined. |
st47177 | Extending the solution to support devices
from typing import Optional
def roll(x: torch.Tensor, shift: int, dim: int = -1, fill_pad: Optional[int] = None):
    device = x.device
    if 0 == shift:
        return x
    elif shift < 0:
        shift = -shift
        gap = x.index_select(dim, torch.arange(shift, device=device))
        if fill_pad is not None:
            gap = fill_pad * torch.ones_like(gap, device=device)
        return torch.cat([x.index_select(dim, torch.arange(shift, x.size(dim), device=device)), gap], dim=dim)
    else:
        shift = x.size(dim) - shift
        gap = x.index_select(dim, torch.arange(shift, x.size(dim), device=device))
        if fill_pad is not None:
            gap = fill_pad * torch.ones_like(gap, device=device)
        return torch.cat([gap, x.index_select(dim, torch.arange(shift, device=device))], dim=dim) |
st47178 | Since this is still getting answers in 2020 and is the top Google result, it's worth pointing out that there is now a proper torch.roll function. |
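For reference, a quick illustration of the built-in function (expected outputs shown as comments, matching the roll helpers above):
import torch

x = torch.arange(5)            # tensor([0, 1, 2, 3, 4])
print(torch.roll(x, 2))        # tensor([3, 4, 0, 1, 2])
print(torch.roll(x, -2))       # tensor([2, 3, 4, 0, 1])
# rolling along a chosen dimension, analogous to np.roll(u, -1, axis=1):
u = torch.arange(9).reshape(3, 3)
print(torch.roll(u, shifts=-1, dims=1))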
st47179 | Hi,
For batchnorm, the docs say "The mean and standard-deviation are calculated per-dimension over the mini-batches". But for BatchNorm1d, when the input is of size (N, C, L), it seems N and L are merged together and the mean/var are calculated per channel C. I checked the dimension of the running mean/var: it is of size C.
I was wondering whether there is a built-in way to compute the mean/var for each C and L, while the weight/bias is only per C (shared over L).
Thanks. |
st47180 | You could use nn.LayerNorm and specify the normalized_shape which should be used to calculate the mean and standard deviation. However, I think you would need to set elementwise_affine=False and could apply a linear layer instead on the output using your desired shape. |
st47181 | Thanks. But LayerNorm computes the mean/var over the neurons, and they are computed during both training and testing. I still want to compute the mean/var for each neuron over the batch only during training, and keep them fixed for testing. |
st47182 | In that case, your best bet might be to implement this type of normalization layer manually. |
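For readers looking for a starting point, here is a minimal sketch of such a manual layer, assuming statistics per (C, L) position computed over N only (with running estimates used at test time) and an affine transform per C shared over L; the class name, momentum handling, and buffer shapes are illustrative, not an existing PyTorch module:
import torch
import torch.nn as nn

class BatchNormPerCL(nn.Module):
    # hypothetical layer: stats over the batch dim only (per C and L), affine over C only
    def __init__(self, num_channels, length, momentum=0.1, eps=1e-5):
        super().__init__()
        self.register_buffer('running_mean', torch.zeros(num_channels, length))
        self.register_buffer('running_var', torch.ones(num_channels, length))
        self.weight = nn.Parameter(torch.ones(num_channels, 1))   # shared over L
        self.bias = nn.Parameter(torch.zeros(num_channels, 1))
        self.momentum, self.eps = momentum, eps

    def forward(self, x):                       # x: (N, C, L)
        if self.training:
            mean = x.mean(dim=0)                # (C, L)
            var = x.var(dim=0, unbiased=False)  # (C, L)
            with torch.no_grad():
                self.running_mean.mul_(1 - self.momentum).add_(self.momentum * mean)
                self.running_var.mul_(1 - self.momentum).add_(self.momentum * var)
        else:
            mean, var = self.running_mean, self.running_var
        x_hat = (x - mean) / (var + self.eps).sqrt()
        return self.weight * x_hat + self.bias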
st47183 | Hi,
I encountered the following error. How should I modify the code?
Traceback (most recent call last):
File "E:\Wx\pixelda\pixelda2.py", line 261, in <module>
imgs_A = Variable(imgs_A.type(FloatTensor).expand(batch_size, 1, opt.img_size, opt.img_size))
RuntimeError: The expanded size of the tensor (1) must match the existing size (3) at non-singleton dimension 1
[Finished in 4.3s with exit code 1]
This is a code snippet:
# Configure input
imgs_A = Variable(imgs_A.type(FloatTensor).expand(batch_size, 1, opt.img_size, opt.img_size))
labels_A = Variable(labels_A.type(LongTensor))
imgs_B = Variable(imgs_B.type(FloatTensor)) |
st47184 | tensor.expand returns a new view of the tensor with singleton dimensions expanded to a larger size.
Based on your code snippet I assume you would like to expand the batch dimension.
If that’s the case, you would need to keep the channel, height, and width dimension equal, which is not the case in your code snippet.
imgs_A seems to have 3 channels, while you are passing 1 to the expand operation in dim1.
PS: Variables are deprecated since PyTorch 0.4, so you can use tensors in newer versions. |
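To illustrate the point above: expand only broadcasts singleton dimensions, so a size-3 channel dimension cannot be expanded to 1 (shapes below are illustrative):
import torch

img = torch.randn(1, 3, 64, 64)    # 3 channels at dim1
# img.expand(16, 1, 64, 64)        # RuntimeError: non-singleton dim1 (3) cannot be expanded to 1
out = img.expand(16, 3, 64, 64)    # OK: only the singleton batch dim is expanded
gray = torch.randn(1, 1, 64, 64)
out2 = gray.expand(16, 1, 64, 64)  # OK: dim1 is a singleton and stays 1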
st47185 | Hi,
Suppose I want to ensemble two models. I found two methods; one is like this:
model1 = torchvision.models.resnet50().cuda()
model2 = torchvision.models.resnet101().cuda()
inten = torch.randn(16, 3, 224, 224).cuda()
n_iter = 1000
for i in range(n_iter):
    out = model1(inten).softmax(1)
    out += model2(inten).softmax(1)
And another method is to define these two models as children of a nn.Module:
class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.model1 = torchvision.models.resnet50().cuda()
        self.model2 = torchvision.models.resnet101().cuda()

    def forward(self, x):
        out = self.model1(x).softmax(1)
        out += self.model2(x).softmax(1)
        return out

model = Model().cuda()
for i in range(n_iter):
    out = model(inten)
Which one is faster and better optimized during inference? Will there be a difference in speed and memory usage between these two methods? |
st47186 | Both methods should yield the same performance.
The two code snippets would take slightly different Python paths, but the overhead shouldn’t be visible in your profiling. |
st47187 | I am trying to create a GRU but I am running into a tensor size problem. The shape of the input tensor is (16, 3, 256, 256). But when I try to run the GRU I get an error on self.gru(out) that says RuntimeError: input must have 3 dimensions, got 4. I believe this is because my out shape is (16, 3, 127, 127), but I don't know how I would go about changing this to the right shape.
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.pool = nn.MaxPool2d(4, 2)
        self.gru = nn.GRU(input_size=128, hidden_size=hidden_size, bidirectional=True)

    def forward(self, x):
        out = self.pool(x)
        out = torch.squeeze(out, -1)
        out = self.gru(out) |
st47188 | From the docs:
input of shape (seq_len, batch, input_size): tensor containing the features of the input sequence. The input can also be a packed variable length sequence. See torch.nn.utils.rnn.pack_padded_sequence() for details.
So indeed a 3-dimensional input tensor is expected in the mentioned shape. |
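As an illustration of one way to get from the 4-dimensional conv/pool output to the expected 3-dimensional GRU input, here is a hedged sketch that treats the height axis as the time axis and flattens channels and width into features; the choice of which axis plays the role of seq_len is an assumption, not the only option:
import torch
import torch.nn as nn

x = torch.randn(16, 3, 127, 127)                  # (N, C, H, W) after pooling
n, c, h, w = x.shape
seq = x.permute(2, 0, 1, 3).reshape(h, n, c * w)  # (seq_len=H, batch=N, input_size=C*W)
gru = nn.GRU(input_size=c * w, hidden_size=64, bidirectional=True)
out, hidden = gru(seq)                            # out: (H, N, 2 * 64)
print(out.shape)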
st47189 | I am trying to use pytorch_lightning with multiple GPU, but get the following error:
RuntimeError: All input tensors must be on the same device. Received cuda:0 and cuda:3
How can I fix this? Below is an MWE:
import torch
from torch import nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
import pytorch_lightning as pl
class DataModule(pl.LightningDataModule):
    def __init__(self):
        super().__init__()

    def setup(self, stage):
        # called on each gpu
        n = 128
        x = torch.randn(n, 1, 64, 64)
        data = list(zip(x, x))
        self.test = DataLoader(data, batch_size=32)
        self.train = DataLoader(data, batch_size=32)
        self.val = DataLoader(data, batch_size=32)

    def train_dataloader(self):
        return self.train

    def val_dataloader(self):
        return self.val

    def test_dataloader(self):
        return self.test

class Net(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=(1, 1))

    def forward(self, x):
        return self.net(x)

    def validation_step(self, batch, batch_idx):
        loss = self.compute_batch_loss(batch, batch_idx)
        self.log('val_loss', loss)
        return loss

    def compute_batch_loss(self, batch, batch_idx):
        x, y = batch
        y_hat = self.net(x)
        loss = F.mse_loss(y_hat, y)
        return loss

    def training_step(self, batch, batch_idx):
        loss = self.compute_batch_loss(batch, batch_idx)
        self.log('train_loss', loss)
        return loss

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
        return optimizer

dm = DataModule()
model = Net()
trainer = pl.Trainer(gpus=4,
                     distributed_backend="dp",
                     max_epochs=1,
                     )
trainer.fit(model, dm) |
st47190 | Your code works fine on my machine and outputs:
Epoch 0: 100%|████████████████████████████████████████████████████| 8/8 [00:00<00:00, 111.96it/s, loss=2.734, v_num=0] |
st47191 | Thanks a lot for digging into this! You also used 4 gpus? Could you rerun with say 10 epochs? This behavior is somewhat random for me and does not always trigger after the first epoch. |
st47192 | Update: I can reproduce the issue using 100 epochs and get:
File "tmp.py", line 64, in <module>
trainer.fit(model, dm)
File "/opt/conda/envs/tmp/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 444, in fit
results = self.accelerator_backend.train()
File "/opt/conda/envs/tmp/lib/python3.8/site-packages/pytorch_lightning/accelerators/dp_accelerator.py", line 106, in train
results = self.train_or_test()
File "/opt/conda/envs/tmp/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 74, in train_or_test
results = self.trainer.train()
File "/opt/conda/envs/tmp/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 493, in train
self.train_loop.run_training_epoch()
File "/opt/conda/envs/tmp/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 589, in run_training_epoch
self.trainer.run_evaluation(test_mode=False)
File "/opt/conda/envs/tmp/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 609, in run_evaluation
eval_loop_results = self.evaluation_loop.log_epoch_metrics(deprecated_eval_results, epoch_logs, test_mode)
File "/opt/conda/envs/tmp/lib/python3.8/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 210, in log_epoch_metrics
eval_loop_results = self.trainer.logger_connector.on_evaluation_epoch_end(
File "/opt/conda/envs/tmp/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/logger_connector.py", line 113, in on_evaluation_epoch_end
self._log_on_evaluation_epoch_end_metrics(epoch_logs)
File "/opt/conda/envs/tmp/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/logger_connector.py", line 181, in _log_on_evaluation_epoch_end_metrics
reduced_epoch_metrics = dl_metrics[0].__class__.reduce_on_epoch_end(dl_metrics)
File "/opt/conda/envs/tmp/lib/python3.8/site-packages/pytorch_lightning/core/step_result.py", line 464, in reduce_on_epoch_end
recursive_stack(result)
File "/opt/conda/envs/tmp/lib/python3.8/site-packages/pytorch_lightning/core/step_result.py", line 603, in recursive_stack
result[k] = collate_tensors(v)
File "/opt/conda/envs/tmp/lib/python3.8/site-packages/pytorch_lightning/core/step_result.py", line 625, in collate_tensors
return torch.stack(items)
RuntimeError: All input tensors must be on the same device. Received cuda:1 and cuda:3
I’m not familiar enough with PyTorch Lightning and would suggest to create an issue with this code snippet in their repository.
CC @williamFalcon for visibility. |
st47193 | github.com/PyTorchLightning/pytorch-lightning
Weird behaviour multi-GPU (dp, gpus > 1) tensors not converted to cuda 221
opened
Oct 29, 2020
ClaartjeBarkhof
🐛 Bug
I encounter unexpected behaviour for different versions of Pytorch Lightning:
When using PL version 1.0.4 my train step is ignored all...
DP
bug / fix
help wanted
waiting on author |
st47194 | Hi, I’ve set up my code to use either epoch-frequency schedulers like MultiStepLR or batch-frequency schedulers like OneCycleLR. In both cases optimizer.step() happens before scheduler.step(). However, when I use a batch-frequency scheduler, I get the warning advising me that the ordering is wrong:
UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
Is there a chance this is a false warning in the batch-scheduler case, or is there still likely to be something incorrect with my code? |
st47195 | Solved by ptrblck in post #4
In that case amp can skip the optimizer.step(), if invalid gradients are found and you should thus also skip the scheduler.step().
As a workaround, you could check the scale value via scaler.get_scale() and skip the scheduler.step(), if it was decreased. We’ll provide a better API for it so that yo… |
st47196 | There might always be a bug we haven't found yet, but I haven't seen this error in the latest release, so I guess your code might be accidentally using the scheduler and optimizer in the wrong order.
Could you post a minimal, executable code snippet which shows this warning?
Also, are you using automatic mixed-precision training? If so, note that the optimizer.step() call can be skipped, if invalid gradients are found and the GradScaler needs to reduce the scaling factor. |
st47197 | Hi, thanks for your reply. I am also using AMP (also using SWA). Here’s the skeleton of what I’m doing:
model = create_model(args)
model = nn.DataParallel(model)
swa_model = torch.optim.swa_utils.AveragedModel(model)
scaler = torch.cuda.amp.GradScaler()
optimizer = torch.optim.AdamW(model.parameters(), lr=7e-3)
scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer,
                                                max_lr=7e-2,
                                                epochs=5, steps_per_epoch=len(dataloader),
                                                pct_start=0.3)
swa_scheduler = torch.optim.swa_utils.SWALR(optimizer,
                                            swa_lr=0.5,
                                            anneal_epochs=5)
num_epochs = 20
swa_start = 15

for epoch in range(1, num_epochs + 1):
    for imgs, labels in dataloader:
        with torch.cuda.amp.autocast():
            output = model(imgs)
            loss = loss_fn(output, labels)
        optimizer.zero_grad()
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()
        if epoch < swa_start:
            scheduler.step()
        if epoch >= swa_start:
            swa_model.update_parameters(model)
            swa_scheduler.step() |
st47198 | In that case amp can skip the optimizer.step(), if invalid gradients are found and you should thus also skip the scheduler.step().
As a workaround, you could check the scale value via scaler.get_scale() and skip the scheduler.step(), if it was decreased. We’ll provide a better API for it so that you could directly check, if the optimizer.step() was skipped. |
st47199 | Thanks, something like this?
for epoch in range(epochs):
    for batch in dataloader:
        start_scale = scaler.get_scale()
        # do stuff
        end_scale = scaler.get_scale()
        if not end_scale < start_scale:
            scheduler.step()
I’m not quite sure if I follow where to place the checks. If I put the start_scale check at the beginning of each batch loop and the end_scale check at the very end, for nearly every batch I get:
start_scale == 65536.0
end_scale == 32.0
And so it would never get to the scheduler.step()
Edit: Actually by the start of epoch 2 it looks like it settles so that both start_scale and end_scale == 32.0
Should I similarly suppress the swa_scheduler step or is that left alone? |
st47200 | The scale factor should balance itself after a while and not decrease in each iteration. A value of 32.0 sounds reasonable. The end_scale should be extracted after the scaler.step() operation.
What is swa_scheduler exactly doing? If it’s (re-)creating the “averaged” model using the updated parameters, I think you should also skip it, since the model wasn’t updated. |
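Putting the previous posts together, here is a sketch of where the scale check could sit relative to scaler.step(); the variable names (model, loss_fn, dataloader, scaler, optimizer, scheduler) refer to the skeleton above, and this is a workaround pattern rather than an official API:
import torch

for imgs, labels in dataloader:
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = loss_fn(model(imgs), labels)
    scaler.scale(loss).backward()
    scale_before = scaler.get_scale()
    scaler.step(optimizer)               # may be skipped internally if inf/nan gradients are found
    scaler.update()                      # decreases the scale if the step was skipped
    if scaler.get_scale() >= scale_before:
        scheduler.step()                 # only step the scheduler if the optimizer actually stepped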
st47201 | I'm confused by the documentation of torch.renorm. Can you tell me what renorm is used for and how the example from the docs below is computed?
>>> x
tensor([[ 1., 1., 1.],
[ 2., 2., 2.],
[ 3., 3., 3.]])
>>> torch.renorm(x, 1, 0, 5)
tensor([[ 1.0000, 1.0000, 1.0000],
[ 1.6667, 1.6667, 1.6667],
[ 1.6667, 1.6667, 1.6667]]) |
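To spell out the computation in that example: torch.renorm(input, p, dim, maxnorm) rescales every sub-tensor along dim whose p-norm exceeds maxnorm so that its norm becomes exactly maxnorm, and leaves the others untouched. A small check:
import torch

x = torch.tensor([[1., 1., 1.],
                  [2., 2., 2.],
                  [3., 3., 3.]])
# L1 norms of the rows (the sub-tensors along dim=0): 3, 6, 9
# maxnorm=5: row 0 is untouched (3 <= 5), row 1 is scaled by 5/6, row 2 by 5/9
print(torch.renorm(x, p=1, dim=0, maxnorm=5))
# tensor([[1.0000, 1.0000, 1.0000],
#         [1.6667, 1.6667, 1.6667],
#         [1.6667, 1.6667, 1.6667]])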
st47202 | Hi Guys,
I'm trying to set up a dict with some parameters, like optimizers and others, for a grid search. How should I do it?
Here is my code:
class Net(torch.nn.Module):
    def __init__(self):
        '''
        A feedforward neural network.
        Arguments:
            n_feature: How many features in your data
            n_hidden: How many neurons in the hidden layer
            n_output: How many neurons in the output layer (default=1)
        '''
        super(Net, self).__init__()
        self.hidden = torch.nn.Linear(D_in, H, bias=True)    # hidden layer
        self.predict = torch.nn.Linear(H, D_out, bias=True)  # output layer
        self.n_feature, self.n_hidden, self.n_output = D_in, H, D_out

    def forward(self, x, **kwargs):
        '''
        Arguments:
            x: Features to predict
        '''
        torch.nn.init.constant_(self.hidden.bias.data, 1)
        torch.nn.init.constant_(self.predict.bias.data, 1)
        x = torch.sigmoid(self.hidden(x))   # activation function for hidden layer
        x = torch.sigmoid(self.predict(x))  # sigmoid output
        return x

from skorch import NeuralNetRegressor

net = NeuralNetRegressor(Net,
                         max_epochs=100,
                         lr=0.001,
                         verbose=1)

X_trf = X
y_trf = y.reshape(-1, 1)

from sklearn.model_selection import GridSearchCV
# from sklearn.metrics import SCORERS
# SCORERS.keys()

params = {
    'lr': [0.001, 0.005, 0.01, 0.05, 0.1, 0.2, 0.3],
    'max_epochs': list(range(500, 5500, 500))
}

gs = GridSearchCV(net, params, refit=False, scoring='r2', verbose=1, cv=10)
gs.fit(X_trf, y_trf) |
st47203 | Hi, dedeco,
Have you solved this issue? I ran into the same question.
Thanks and best regards,
Qiang |
st47204 | Hi, how can I export models to ONNX with an output of List[Dict[str, Tensor]], just like the detection models in torchvision?
The following code mocks that, but it does not work.
Any ideas?
Thanks in advance.
### test return with dict
import torch
import onnx
import io
import onnx.helper as helper
from torch.jit.annotations import Tuple, List, Dict, Optional
from torch import Tensor

class DummyModel(torch.nn.Module):
    def __init__(self):
        super(DummyModel, self).__init__()

    def forward(self, x: Tensor, y: Tensor, z: int) -> List[Dict[str, Tensor]]:
        res = torch.jit.annotate(List[Dict[str, torch.Tensor]], [])
        xy = x + y
        xz = x + z
        yz = y + z
        res.append({'xy': xy, 'xz': xz, 'yz': yz})
        return res

model = DummyModel()
input_data = (torch.arange(4).reshape(2, 2) for _ in range(3))
desired = model(*input_data)
print(f'Expect:\n{desired}')

# onnx_io = "sample.onnx"
onnx_io = io.BytesIO()
torch.onnx.export(
    model,
    input_data,
    onnx_io,
    verbose=False,
    input_names=None,
    output_names=None,
)
model_onnx = onnx.load(onnx_io)
# print(helper.printable_graph(model_onnx.graph))

sess = rt.InferenceSession(out_path)
input_names = [_.name for _ in sess.get_inputs()]
print(input_names)

# forward model
input_data_npy = [_.detach().cpu().numpy() for _ in input_data]
actual = sess.run(None, {name: data for name, data in zip(input_names, input_data_npy)})
actual = actual[0]
print(f'Actual:\n{actual}')
for k, v in desired.items():
    np.testing.assert_allclose(desired[k], actual[k]) |
st47205 | Hi, It seems that there is something wrong with the format of the input data.
I use input_data = [torch.arange(4).reshape(2,2) for _ in range(3)] and
torch.onnx.export(
    model,
    tuple(input_data),
    onnx_io,
    verbose=False,
    input_names=None,
    output_names=None,
)
instead, and it works fine. |
st47206 | Thanks a lot.
The outputs are not Dict[str, Tensor] but List[Tensor], and the names are not right.
Do you have any suggestions regarding this?
Expect:
[{'xy': tensor([[0, 2],
[4, 6]]), 'xz': tensor([[0, 2],
[4, 6]]), 'yz': tensor([[0, 2],
[4, 6]])}]
input_names=['0', '1', '2']
Actual:
[array([[0, 2],
[4, 6]], dtype=int64), array([[0, 2],
[4, 6]], dtype=int64), array([[0, 2],
[4, 6]], dtype=int64)] |
st47207 | As discussed on GitHub, one option is using desired_list, _ = torch.jit._flatten(desired) to set the data type of the outputs on the PyTorch side to a list. |
st47208 | Hi,
I am currently working on a semantic segmentation project based on satellite images.
I was doing some research and chanced upon this thing called a Gaussian filter, where the weights of the
filter are concentrated in the middle. However, I am confused as to what it really is and where I apply it in my model. Is a Gaussian filter/kernel a data augmentation? Is it the same as Gaussian blur or Gaussian noise? Or is it something I have to implement in my conv layers when they have kernels inside? Hope someone can explain this to me, thank you! |
st47209 | A Gaussian filter in image processing is also called Gaussian blur and is a low-pass filter.
You can apply it on your images to blur them, if you think it might be beneficial for the training.
It’s not the same as Gaussian noise, which is usually additive noise to the input signal.
You can also implement the Gaussian filters in a conv layer, but depending on your use case you might want to use PIL methods to add the blur to the images.
Where did you read about these filters and did the source explain where and how these filters are applied? |
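For the conv-layer route mentioned above, here is a hedged sketch of building a fixed Gaussian kernel and applying it as a depthwise convolution; the kernel size and sigma are arbitrary illustrative choices:
import torch
import torch.nn.functional as F

def gaussian_kernel2d(ksize=5, sigma=1.0):
    ax = torch.arange(ksize, dtype=torch.float32) - (ksize - 1) / 2.0
    xx, yy = torch.meshgrid(ax, ax)
    kernel = torch.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return kernel / kernel.sum()

img = torch.randn(1, 3, 64, 64)                  # (N, C, H, W)
k = gaussian_kernel2d().repeat(3, 1, 1, 1)       # one identical kernel per channel: (3, 1, 5, 5)
blurred = F.conv2d(img, k, padding=2, groups=3)  # depthwise: each channel blurred independently
print(blurred.shape)                             # torch.Size([1, 3, 64, 64])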
st47210 | I can't really recall where I read it, but yeah, I was a little confused when doing my research on this. So like you mentioned, some of them were applying it as a data augmentation while others were applying it in their model when calling nn.Conv2d(…dilation=2). But I sort of understand the difference between these two already. Thank you for the explanation!
The case I am working with is the SEN12MS dataset, which has 13 bands per image. So I have to do my data augmentation on tensors instead of PIL images, as PIL can only work on 3-channel images. |
st47211 | Hi,
I encountered a strange problem where passing a torch.Tensor to torchvision.transforms.functional.resize works in Visual Studio Code, but when I try to run torchvision.transforms.functional.resize in a Jupyter notebook with the same input, it gives this error:
TypeError: img should be PIL Image. Got <class 'torch.Tensor'>
Both notebooks are running the same version of torchvision, 0.7.0. |
st47212 | Solved by ptrblck in post #4
Transformations accepting tensors were added in 0.8.0 as stated in the release notes, so I guess your VS environment might use the latest torchvision release, while your Jupyter env might use an older one. |
st47213 | It’s hard to say without looking at your code, but check the type of whatever is being passed into resize to ensure that it’s a PIL image as expected (you mentioned that you are putting in a tensor and it expects a PIL image). When weird issues come up in a jupyter notebook it might help to refresh the kernel to ensure that you are not using old variables as well. |
st47214 | Yeah, resize accepts either a PIL Image or a torch.Tensor as stated in the documentation; that's why I find it quite weird. |
st47215 | Transformations accepting tensors were added in 0.8.0 as stated in the release notes, so I guess your VS environment might use the latest torchvision release, while your Jupyter env might use an older one. |
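A quick way to confirm this in both environments:
import torch, torchvision
print(torch.__version__, torchvision.__version__)
# tensor inputs to the functional transforms require torchvision >= 0.8.0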
st47216 | File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 224, in __init__
    sampler = RandomSampler(dataset, generator=generator)
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/sampler.py", line 96, in __init__
    "value, but got num_samples={}".format(self.num_samples))
ValueError: num_samples should be a positive integer value, but got num_samples=0 |
st47217 | Hi. Can you please check that the path to the dataset is correct? I had a similar issue and providing the correct path to the dataset solved it. |
st47218 | How can I make the generator give the data I expect? Can you give me some guidance? |
st47219 | Hi. So, I don't think the issue is related to the generator. The generator here is basically a PyTorch generator object that manages the state of the algorithm which produces pseudo-random numbers, like which device to use (cpu/gpu), the state of the generator, setting the seed, etc. See https://pytorch.org/docs/stable/generated/torch.Generator.html
For your problem, you just need to provide the right path to the dataset.
Can you provide a bit more details here like what is the generator in your case and what kind of dataset are you using? |
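For reference, the error can be reproduced with a dataset of length zero, which usually means the Dataset found no samples (e.g. a wrong root path); a minimal, self-contained illustration:
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.empty(0, 3, 224, 224))       # an empty dataset, as if no files were found
print(len(dataset))                                         # 0
loader = DataLoader(dataset, batch_size=32, shuffle=True)   # ValueError: num_samples should be a positive integer value, but got num_samples=0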
st47220 | I am using some pictures labeled like the VOC dataset (JPEGImages and SegmentationClass), and I want to use them in another model (image and mask). I put them directly into the folder. How does the image get converted into a mask? Is there a specific format? |
st47221 | Hi. So the images that you have in 1, 2, 3, 4 are basically the segmentation masks. The difference in their colors represents different classes/labels.
It looks like the model that you are using right now does handle the case when the mask is RGB instead of grayscale.
Just change the mask color from grayscale to rgb if your image has 3 channels.
I hope this gives some clarity. |
st47222 | I changed maskcolormode to rgb but the images still do not load in.
What else should I do to apply this model? Can you give me some guidance?
GitHub: msminhas93/DeepLabv3FineTuning — Tutorial on fine tuning the DeepLabv3 segmentation network for your own segmentation task in PyTorch. |
st47223 | Could the reason be that only 4 images cannot be divided into the train and val datasets in the network?
So I tried part of VOC in this model, but the image sizes are not the same; how should I make them the same? |
st47224 | Hi. Yes, the number of images could be a reason, depending on how you are loading them. To make the images the same size, you can use a custom transform, something like this:
# Transforms for DataLoader
class Resize(object):
""" Resize Image and/or Masks """
def __init__(self, height, width):
"""
Args:
height (int): resize to height
width (int): resize to width
"""
self.height = height
self.width = width
def __call__(self, sample):
"""
Args:
sample (dict): input image/mask pair
Returns:
dict: image/mask pair as ndarray
"""
image, mask = sample['image'], sample['mask']
dim = (self.width, self.height)
image = image.resize(dim)
mask = mask.resize(dim)
return {'image': image,
'mask': mask}
NOTE that the images in the above Resize transform are loaded using PIL.Image.
Don’t forget to convert the above images to tensor by either using the ToTensor transform as below:
# Convert Image from Numpy to Tensor
class ToTensor(object):
""" Convert ndarray to Tensor """
def __call__(self, sample):
"""
Args:
sample (dict): input image/mask pair
Returns:
torch Tensor: image/mask tensor
"""
image, mask = sample['image'], sample['mask']
image = np.asarray(image)
mask = np.asarray(mask)
if len(image.shape) == 3:
image = image.transpose(2, 0, 1)
if len(mask.shape) == 2:
mask = mask.reshape((1,) + mask.shape)
if len(image.shape) == 2:
image = image.reshape((1,) + image.shape)
return {'image': torch.from_numpy(np.array(image)),
'mask': torch.from_numpy(np.array(mask))}
Then you can setup these transforms:
transforms.Compose([Resize(330, 422), ToTensor()])
The reason for writing these transforms instead of using the built-in ones is that we have an image as the feature and another image as the label, and we need to apply the same transforms to both so as to get an image/mask pair.
Hope this helps |
st47225 | Hi. So, you need to change your dataloader from that GitHub repository as below to make it work with the VOC segmentation dataset.
import torch
import torchvision
from torch.utils.data import Dataset
from PIL import Image
import glob
import numpy as np
class SegmentationDataset(Dataset):
"""Segmentation Dataset"""
def __init__(self, root_dir: str, image_dir: str, mask_dir: str,
transform=None, seed: int = None, fraction: float = None,
subset: str = None, imagecolormode: str = 'rgb',
maskcolormode: str = 'rgb'):
"""
Args:
root_dir (str): dataset dir path
image_dir (str): input image dir name
mask_dir (str): mask image dir name
transform: PyTorch data transform
seed (int): random seed for reproducibility
fraction (float): dataset train/test split percentage
subset (str): subset from existing dataset
imagecolormode (str): input image color mode
maskcolormode (str): input mask color mode
"""
self.color_dict = {'rgb': 1, 'grayscale': 0}
assert (imagecolormode in ['rgb', 'grayscale'])
assert (maskcolormode in ['rgb', 'grayscale'])
self.imagecolorflag = self.color_dict[imagecolormode]
self.maskcolorflag = self.color_dict[maskcolormode]
self.root_dir = root_dir
self.transform = transform
if not fraction:
# UPDATE: Get the Segmentation Masks Before Images
self.mask_names = sorted(
glob.glob(os.path.join(self.root_dir, mask_dir, '*')))
# UPDATE: Get images with the names in the mask_names list but with updated path and '.jpg' extension
self.image_names = sorted(
os.path.join(self.root_dir, image_dir, fname.split('/')[4].split('.png')[0] + '.jpg')
for fname in self.mask_names)
else:
assert (subset in ['Train', 'Test'])
self.fraction = fraction
# UPDATE: Get the Segmentation Masks Before Images
self.mask_list = np.array(
sorted(glob.glob(os.path.join(self.root_dir, mask_dir, '*'))))
# UPDATE: Get images with the names in the mask_names list but with updated path and '.jpg' extension
self.image_list = np.array(
sorted(os.path.join(self.root_dir, image_dir, fname.split('/')[4].split('.png')[0] + '.jpg')
for fname in self.mask_list))
if seed:
np.random.seed(seed)
indices = np.arange(len(self.image_list))
np.random.shuffle(indices)
self.image_list = self.image_list[indices]
self.mask_list = self.mask_list[indices]
if subset == 'Train':
self.image_names = self.image_list[:int(
np.ceil(len(self.image_list) * (1 - self.fraction)))]
self.mask_names = self.mask_list[:int(
np.ceil(len(self.mask_list) * (1 - self.fraction)))]
else:
self.image_names = self.image_list[int(
np.ceil(len(self.image_list) * (1 - self.fraction))):]
self.mask_names = self.mask_list[int(
np.ceil(len(self.mask_list) * (1 - self.fraction))):]
def __getitem__(self, idx):
"""
Args:
idx (int): index of input image
Returns:
dict: image and mask image
"""
img_name = self.image_names[idx]
mask_name = self.mask_names[idx]
image = Image.open(img_name)
mask = Image.open(mask_name)
sample = {'image': image, 'mask': mask}
if self.transform:
sample = self.transform(sample)
return sample
def __len__(self):
"""
Returns: length of dataset
"""
return len(self.image_names)
Now you can call the above dataset class as follows:
dataset = SegmentationDataset(root_dir='./VOCdevkit/VOC2012/', image_dir='JPEGImages', mask_dir='SegmentationClass', seed=100, fraction=0.1, subset='Train')
and everything should work. You can test the image/mask pairs using the following code:
import matplotlib.pyplot as plt

for i, data in enumerate(dataset):
    image, mask = data['image'], data['mask']
    show_image_mask(image, mask)
    if i > 5:
        break
Hope this helps |
st47226 | So grateful for your help and the awesome code!
But training does not begin when running the command.
[Attached screenshot: 1573×778]
Could it be that the train and val datasets were not split successfully? |
st47227 | plt.imshow might work, but currently you are passing an invalid image path, thus the image cannot be loaded. |
st47228 | It does not work within a loop even when the image path is right; it just gives no feedback. [Attached screenshot: 1361×458] |
st47229 | Ok. So I instantiated the dataset line just to explain the concept and test it out, but you don’t need that. Just make changes to the SegmentationDataset class and that’s it. Nothing else needs to be changed in your code from that GitHub repository. It has the code to instantiate the dataset and split into train and test sets in the get_dataloader_single_folder function. |
st47230 | Also up there, plt.imshow() takes only one image at a time. So if you just do plt.imshow(image), that should work. |
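A small sketch of displaying the pair side by side instead, matching the show_image_mask name used in the earlier snippet (the body here is an assumption; it expects PIL images or HWC arrays):
import matplotlib.pyplot as plt

def show_image_mask(image, mask):
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
    ax1.imshow(image); ax1.set_title('image'); ax1.axis('off')
    ax2.imshow(mask);  ax2.set_title('mask');  ax2.axis('off')
    plt.show()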
st47231 | So grateful. But when I only change imageFolder to image_dir in SegDataset, it gives me a TypeError from __init__(); if I change it back to imageFolder, SegDataset cannot work well alone. What causes the problem and how should I define them?
[Attached screenshots] |
st47232 | Thank you very much for teaching me the original function. I did some breakpoint debugging in Colab and the image has been read, but something is still wrong with the dataloader. Can you give me some suggestions?
[Attached screenshot: 1419×817] |
st47233 | The original problem appears again, and this time it is more specific… but I cannot understand it.
[Attached screenshot: 1362×802] |
st47234 | Hi guys, I am trying to use this code to predict the evolution of a signal whose frequency is a function of time. The idea is that I generate some samples (6000) and, after performing the usual preprocessing steps (scaling, dividing into testing and training data), I try to use a window of 100 prior datapoints and no features to predict the following single datapoint.
Even though my network is simple, it does not seem able to produce a meaningful prediction, and this can easily be seen by running the fitting, which shows that the testing and training losses do not decrease.
What am I doing wrong?
I am posting a link to my notebook, hosted on GitHub.
Write me if you can't access it; it's my first time using git.
Gist: https://gist.github.com/enrolivi/77d34ed242ee789daa7ea318bae515a2 (lstm-pytorch.ipynb, truncated preview omitted) |
st47235 | First of all, thanks for reaching out; sorry if I am not being clear.
I am trying to perform a prediction on a series of data points, all belonging to the same time series (which I generated with a sinusoidal function whose frequency is time-dependent).
In order to do so I built an LSTM network and implemented a simple training algorithm to which I feed my preprocessed data, which have been formatted as tensors.
I appended the model and training algorithm, but if you manage to look at my code you will surely see the problem in an instant.
Thank you for the attention.
class LSTM(nn.Module):
def __init__(self, n_features, n_hidden, seq_len, n_layers):
super(LSTM, self).__init__()
self.n_hidden = n_hidden
self.seq_len = seq_len
self.n_layers = n_layers
# lstm_input.shape = (seq_len = number of input sequences, batch_size = number of elements in each sequence,
# input_size = third dimension of the sequence)
# output.shape = (seq_len, batch, num_directions * hidden_size), with one dimension
self.lstm = nn.LSTM(
input_size=n_features,
hidden_size=n_hidden,
num_layers=n_layers,
)
#pass output of lstm to the linear
#output of the lstm is passed down to the linear, which performs a linear transf with random weights and bias(until update)
self.linear = nn.Linear(in_features=n_hidden, out_features=1)
def reset_hidden_state(self):
#it generates a tensor which is made of 2 3-dimensional tensors side by side
self.hidden = (torch.zeros(self.n_layers, self.seq_len, self.n_hidden), torch.zeros(self.n_layers, self.seq_len, self.n_hidden))
def forward(self, sequences):
#we pick all the sequences and pass them to te LSTM at once
#lstm_out, self.hidden = self.lstm(sequences.view(len(sequences), self.seq_len, -1), self.hidden)
lstm_out, self.hidden = self.lstm(sequences.view(len(sequences), 1, -1), self.hidden)
#print(lstm_out.shape)
#view keeps all original data while changing their shape into a 3-dimensional tensor
last_time_step = lstm_out.view(self.seq_len, len(sequences), self.n_hidden)[-1]
#print(last_time_step.shape)
#we pass the output of the last time stem to the linear operator, to get the prediction
#y_pred = self.linear(last_time_step)
y_pred = self.linear(lstm_out.view(len(sequences),-1))
return y_pred
def train_model(model, train_data, train_labels, test_data=None, test_labels=None):
#define loss unction as MSE and optimiser as adam
loss_fn = torch.nn.MSELoss(reduction='sum')
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)#learning rate
num_epochs = 60
#initialize to 0 the test and train history
train_hist = np.zeros(num_epochs)
test_hist = np.zeros(num_epochs)
#at each epoch we reset the hidden state, run the whole set, compute the loss
for t in range(num_epochs):
model.reset_hidden_state()
optimiser.zero_grad()
y_pred = model(X_train)
loss = loss_fn(y_pred.float(), y_train)
if test_data is not None:
with torch.no_grad(): #disables gradient calculation for subsequent lines of code
#predict the output thanks to the model, the compute the loss thanks to verify data
y_test_pred = model(X_test)
test_loss = loss_fn(y_test_pred.float(), y_test)
test_hist[t] = test_loss.item()
#printing progress
if t % 1 == 0:
print(f'Epoch {t} train loss: {loss.item()} test loss: {test_loss.item()}')
elif t % 1 == 0:
print(f'Epoch {t} train loss: {loss.item()}')
#including the losses in the train history
train_hist[t] = loss.item()
#we reset the previous gradient, compute the new one and give a forward step to the optimizer
#optimiser.zero_grad()
loss.backward()
optimiser.step()
return model.eval(), train_hist, test_hist |
st47236 | I don’t know which shape X_train has, but the sequences.view operation might be wrong, if you are trying to permute the dimensions.
By default nn.LSTM expects the input in the shape [seq_len, batch_size, features]. If you want to permute an input of [batch_size, seq_len, features] to this shape, use sequences = sequences.permute(1, 0, 2) instead.
The same applied for the lstm_out tensor, which has the shape [seq_len, batch_size, num_directions*hidden_size] by default. |
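A minimal illustration of the two equivalent options described above, with illustrative shapes:
import torch
import torch.nn as nn

x = torch.randn(8, 100, 1)                 # [batch_size, seq_len, features]

# option 1: permute to the default [seq_len, batch_size, features] layout
lstm = nn.LSTM(input_size=1, hidden_size=16)
out, _ = lstm(x.permute(1, 0, 2))          # out: [100, 8, 16]

# option 2: tell the module that the batch dimension comes first
lstm_bf = nn.LSTM(input_size=1, hidden_size=16, batch_first=True)
out_bf, _ = lstm_bf(x)                     # out_bf: [8, 100, 16]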
st47237 | my data are a time-series of 1000 elements, created as
# Number of sample points
N = 1000
# sample spacing
T = 1.0 / 500.0
time = np.linspace(0.0, N*T, N)
new_datas = []
for i in range(len(time)):
    new_data = np.sin((4.0 + 6*(i/N)) * 2.0*np.pi*time[i]) + np.random.normal(scale=0.25)
    new_datas.append(new_data)
that I then divide in sequences with
# dataframe function
def create_sequences(data, seq_length):
    xs = []
    ys = []
    for i in range(len(data)-seq_length-1):
        x = data[i:(i+seq_length)]
        y = data[i+seq_length]
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)
where seq_length is the length of the sequence I am supplying, so the first sequence should look like
[s0,s1,…s99] and [s100] is my first label,
then the following sequence will be:
[s1, …s100] and the second label will be [s101]and so on and so forth.
so the data will be in the shape [number of sequences, sequence_length]
after converting to tensors and reshaping, it should look like [number of sequences, sequence length, features], where features is 1
So, if my split is 80% train data and the rest test data, my two datasets will have 800 and 200 elements respectively, from a starting dataset of 1000 points.
Once I split them with the above function I will have
X_train.shape = (699, 100, 1) = number of sequences, length of each sequence, n of features y_train.shape = (699, 1) = number of sequences, n of features
torch.Size([699, 100, 1]) torch.Size([699, 1])
torch.Size([99, 100, 1]) torch.Size([99, 1]) |
st47238 | Thanks for the information. In that case, you should either use batch_first=True while creating the nn.LSTM module or use the permute approach as mentioned before.
Your current code:
sequences.view(len(sequences), 1, -1)
would reshape the data to [699, 1, 100], which is wrong for the current setup of your nn.LSTM module (explained in the previous post) and would also move the temporal dimension to the features. |
st47239 | Hello everyone,
I have been working on converting a Keras LSTM time-series prediction model into PyTorch for a project I am working on. I am new to PyTorch and have been using this as a chance to get familiar with it. I have implemented a model based on what I can find on my own, but the outputs do not compare like I was expecting. I expect some variation due to random weight initialization but not this much.
The Keras model implements some early stopping, which I have not done in PyTorch. I’m hoping to rule out any model issues before going down that rabbit hole.
In short, I am trying to implement what looks like a 2-layer LSTM network with a full-connected, linear output layer.
Both LSTM layers have the same number of features (80). I believe PyTorch LSTM dropout does not apply to the last layer, which is slightly different from the Keras model that has a dropout after each LSTM layer. However, I've removed all dropout and got similar, mismatched results.
I am reasonably confident in my handling of input shapes and ‘many-to-one’ down-selecting with my linear layer, but nothing is off the table at this point.
My X_train data set is size (454, 250, 25), for 454 samples of 250 time steps with 25 features.
My y_train data set is size (454,1), for 454 samples of 1 target feature.
Batch size is 64 samples.
Here is the Keras model I am trying to match. I have replaced variable parameters with numerical values to improve readability between code segments.
Keras imports:
from keras.models import Sequential, load_model
from keras.callbacks import History, EarlyStopping, Callback
from keras.layers.recurrent import LSTM
from keras.layers.core import Dense, Activation, Dropout
import numpy as np
import os
import logging
Following Keras model is wrapped in a class with the following definition:
cbs = [History(), EarlyStopping(monitor='val_loss',
patience=10,
min_delta=0.0003,
verbose=0)]
self.model = Sequential()
self.model.add(LSTM(80, input_shape=(None, 25), return_sequences=True))
self.model.add(Dropout(0.3))
self.model.add(LSTM(80, return_sequences=False))
self.model.add(Dropout(0.3))
self.model.add(Dense(1))
self.model.add(Activation('linear'))
self.model.compile(loss='mse',
optimizer='adam')
self.model.fit(X_train,
y_train,
batch_size=64,
epochs=35,
validation_split=0.2,
callbacks=cbs,
verbose=True)
PyTorch attempt:
import numpy as np
import os
import logging
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, Dataset, TensorDataset
from torch.utils.data.dataset import random_split
from sklearn.model_selection import train_test_split
class LSTM_PT(nn.Module):
def __init__(self, n_features =25, hidden_dims = [80,80], seq_length = 250, batch_size=64, n_predictions=1, device = torch.device("cuda:0"), dropout=0.3):
super(LSTM_PT, self).__init__()
self.n_features = n_features
self.hidden_dims = hidden_dims
self.seq_length = seq_length
self.num_layers = len(self.hidden_dims)
self.batch_size = batch_size
self.device = device
print(f'number of layers :{self.num_layers}')
self.lstm1 = nn.LSTM(
input_size = n_features,
hidden_size = hidden_dims[0],
batch_first = True,
dropout = dropout,
num_layers = self.num_layers)
self.linear = nn.Linear(self.hidden_dims[0], n_predictions)
self.hidden = (
torch.randn(self.num_layers, self.batch_size, self.hidden_dims[0]).to(self.device),
torch.randn(self.num_layers, self.batch_size, self.hidden_dims[0]).to(self.device)
)
def init_hidden_state(self):
#initialize hidden states (h_n, c_n)
self.hidden = (
torch.randn(self.num_layers, self.batch_size, self.hidden_dims[0]).to(self.device),
torch.randn(self.num_layers, self.batch_size, self.hidden_dims[0]).to(self.device)
)
def forward(self, sequences):
batch_size, seq_len, n_features = sequences.size() #batch_first
# LSTM inputs: (input, (h_0, c_0))
#input of shape (seq_len, batch, input_size).... input_size = num_features
#or (batch, seq_len, input_size) if batch_first = True
lstm1_out , (h1_n, c1_n) = self.lstm1(sequences, (self.hidden[0], self.hidden[1])) #hidden[0] = h_n, hidden[1] = c_n
#Output: output, (h_n, c_n)
#output is of shape (batch_size, seq_len, hidden_size) with batch_first = True
last_time_step = lstm1_out[:,-1,:] #lstm_out[:,-1,:] or h_n[-1,:,:]
y_pred = self.linear(last_time_step)
#output is shape (N, *, H_out)....this is (batch_size, out_features)
return y_pred
def initialize_weights(model):
if type(model) in [nn.Linear]:
nn.init.xavier_uniform_(model.weight.data)
elif type(model) in [nn.LSTM, nn.RNN, nn.GRU]:
nn.init.xavier_uniform_(model.weight_hh_l0)
nn.init.xavier_uniform_(model.weight_ih_l0)
def predict_model(model, data, batch_size, device):
print('Starting predictions...')
data_loader = DataLoader(dataset=data, batch_size=batch_size, drop_last=True)
y_hat = torch.empty(data_loader.batch_size,1).to(device)
with torch.no_grad():
for X_batch in data_loader:
y_hat_batch = model(X_batch)
y_hat = torch.cat([y_hat, y_hat_batch])
y_hat = torch.flatten(y_hat[batch_size:,:]).cpu().numpy() #y_hat[batchsize:] is to remove first empty 'section'
print('Predictions complete...')
return y_hat
def train_model( model, train_data, train_labels, test_data, test_labels, batch_size, num_epochs, device):
model.apply(initialize_weights)
training_losses = []
validation_losses = []
loss_function = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters())
train_hist = np.zeros(num_epochs)
X_train, X_validation, y_train, y_validation = train_test_split(train_data, train_labels, train_size=0.8)
train_dataset=TensorDataset(X_train,y_train)
validation_dataset=TensorDataset(X_validation,y_validation)
train_loader = DataLoader(dataset=train_dataset, batch_size=batch_size, drop_last=True, shuffle=True)
val_loader = DataLoader(dataset=validation_dataset, batch_size=batch_size, drop_last=True, shuffle=True)
model.train()
print("Beginning model training...")
for t in range(num_epochs):
train_losses_batch = []
for X_batch_train, y_batch_train in train_loader:
y_hat_train = model(X_batch_train)
loss = loss_function(y_hat_train.float(), y_batch_train)
train_loss_batch = loss.item()
loss.backward()
optimizer.step()
optimizer.zero_grad()
train_losses_batch.append(train_loss_batch)
training_loss = np.mean(train_losses_batch)
training_losses.append(training_loss)
with torch.no_grad():
val_losses_batch = []
for X_val_batch, y_val_batch in val_loader:
model.eval()
y_hat_val = model(X_val_batch)
val_loss_batch = loss_function(y_hat_val.float(), y_val_batch).item()
val_losses_batch.append(val_loss_batch)
validation_loss = np.mean(val_losses_batch)
validation_losses.append(validation_loss)
print(f"[{t+1}] Training loss: {training_loss} \t Validation loss: {validation_loss} ")
print('Training complete...')
return model.eval()
Keras results (target):
y_hat is model output and y_test is test data.
My PyTorch output (y_hat) is similar in shape but plateaus at ~-0.5. I would post the image but new users can only post one |
st47240 | I’ve gone down the rabbit hole of differences between Keras and PyTorch and learned about how each package initializes weights differently. I’ve changed my code to use the same initialization methods and got MUCH better results.
Anyone who gets different results between the packages when their model looks correct should try this. |
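For anyone looking for a concrete starting point, here is a hedged sketch of what matching Keras' default initializers (glorot_uniform for input weights, orthogonal for recurrent weights, zeros for biases; Keras' unit_forget_bias is ignored here) could look like; this is an assumption about the kind of change described above, not the poster's exact code:
import torch.nn as nn

def keras_like_init(module):
    if isinstance(module, nn.Linear):
        nn.init.xavier_uniform_(module.weight)
        nn.init.zeros_(module.bias)
    elif isinstance(module, nn.LSTM):
        for name, param in module.named_parameters():
            if 'weight_ih' in name:
                nn.init.xavier_uniform_(param)   # Keras kernel: glorot_uniform
            elif 'weight_hh' in name:
                nn.init.orthogonal_(param)       # Keras recurrent_kernel: orthogonal
            elif 'bias' in name:
                nn.init.zeros_(param)            # Keras bias: zeros

model.apply(keras_like_init)   # `model` stands for the LSTM_PT instance from the post above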
st47241 | Could you please post the link about the differences between Keras and PyTorch and how each package initializes weights differently? Thanks! |
st47242 | Or can you post your changed PyTorch code? I have the same issue. I'd appreciate any help! |
st47243 | Hi,
I've realised there is some strange behaviour if I use max pooling in this model with the GPU version.
I'm making a batch of identical samples. They have temporal features to be processed with 2D convolutions, thus I process everything in the batch dimension.
I've seen that, if I use max_pool, the output is different for the different elements of the batch; they are exactly the same otherwise.
This doesn't happen with the CPU version, only on the GPU.
It happens on a Titan GTX, a 1080 Ti and a Quadro P6000, with 2 different computers and setups.
import torch
import sys
import subprocess
from torch import nn
from torchaudio.transforms import MelSpectrogram, Spectrogram
import torch
N_FFT = 512
N_MELS = 256
HOP_LENGTH = 130
AUDIO_FRAMERATE = 16000
def get_sys_info():
"""
:param log: Logging logger in which to parse info
:type log: logging.logger
:return: None
"""
result = subprocess.Popen(["nvidia-smi", "--format=csv",
"--query-gpu=index,name,driver_version,memory.total,memory.used,memory.free"],
stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
nvidia = result.stdout.readlines().copy()
nvidia = [str(x) for x in nvidia]
nvidia = [x[2:-3] + '\r\t' for x in nvidia]
acum = ''
for x in nvidia:
acum = acum + x
return (' Python VERSION: {0} \n\t'
' pyTorch VERSION: {1} \n\t'
' CUDA VERSION: {2}\n\t'
' CUDNN VERSION: {3} \n\t'
' Number CUDA Devices: {4} \n\t'
' Devices: {5}\n\t'
'Active CUDA Device: GPU {6} \n\t'
'Available devices {7} \n\t'
'Current cuda device {8} \n\t'.format(sys.version, torch.__version__, torch.version.cuda,
torch.backends.cudnn.version(), torch.cuda.device_count(),
acum, torch.cuda.current_device(), torch.cuda.device_count(),
torch.cuda.current_device()))
def make_audio_block(filter_in, filters_out, lrn, max_pool=None, padding=0, stride=1, kernel_size=(3, 3)):
layers = [nn.Conv2d(filter_in, filters_out, kernel_size=kernel_size, padding=padding, stride=stride)]
layers.append(nn.ReLU(False))
if max_pool is not None:
layers.append(nn.MaxPool2d(max_pool))
return nn.Sequential(*layers)
def reshape(x, unrolled_shape):
return x.view(*unrolled_shape, *x.shape[1:])
def check(x, unrolled_shape):
x_unrolled = reshape(x, unrolled_shape)
ref = x_unrolled[0]
all_equal = True
error_abs = [None for _ in range(unrolled_shape[0])]
error_mean = [None for _ in range(unrolled_shape[0])]
error_abs[0] = 0
error_mean[0] = 0
for i in range(1, unrolled_shape[0]):
all_equal = all_equal and torch.allclose(ref, x_unrolled[i])
diff = torch.abs(ref - x_unrolled[i])
diff = diff[diff > 0]
error_abs[i] = diff.sum()
error_mean[i] = diff.mean()
return all_equal, max(error_abs), max(error_mean)
class AudioEncoder(nn.Module):
# 'filter_size': [96, 256, 512, 512*6*6]
def __init__(self, pooling, pooling_type='AvgPool'):
super(AudioEncoder, self).__init__()
assert pooling_type in ['MaxPool', 'AvgPool'], f'Pooling of type{pooling_type} should be MaxPool or AvgPool'
filters = [1, 32, 64, 128]
# self.preproc = MelSpectrogram(sample_rate=AUDIO_FRAMERATE, n_fft=N_FFT, hop_length=HOP_LENGTH, n_mels=N_MELS)
self.preproc = Spectrogram(n_fft=N_FFT, hop_length=HOP_LENGTH)
self.b1 = make_audio_block(filters[0], filters[1], lrn=False, max_pool=pooling, padding=2, kernel_size=(7, 7),
stride=(2, 1))
self.b2 = make_audio_block(filters[1], filters[2], lrn=False, max_pool=pooling, padding=0, kernel_size=(7, 3))
self.b3 = make_audio_block(filters[2], filters[3], lrn=False, padding=0, kernel_size=(7, 3))
if pooling_type == 'MaxPool':
self.pooling = nn.AdaptiveMaxPool2d((1, None))
else:
self.pooling = nn.AdaptiveAvgPool2d((1, None))
def forward(self, x):
verbose = True
unrolled_shape = x.shape[:2]
print(f'Unrolled shape: {unrolled_shape}')
if verbose:
print(f'Input--> Shape: {x.shape}, device:{x.device}')
x = self.preproc(x)
if verbose:
print(f'FFT --> Shape: {x.shape}')
x = x.view(-1, 1, *x.shape[2:])
if verbose:
equal, abs_max, mean_max = check(x, unrolled_shape)
print(f'view --> Shape: {x.shape}, all equal: {equal}, max error: abs {abs_max},mean {mean_max}')
x = self.b1(x)
if verbose:
equal, abs_max, mean_max = check(x, unrolled_shape)
print(f'b1 --> Shape: {x.shape}, all equal: {equal}, max error: abs {abs_max},mean {mean_max}')
x = self.b2(x)
if verbose:
equal, abs_max, mean_max = check(x, unrolled_shape)
print(f'b2 --> Shape: {x.shape}, all equal: {equal}, max error: abs {abs_max},mean {mean_max}')
x = self.b3(x)
if verbose:
equal, abs_max, mean_max = check(x, unrolled_shape)
print(f'b3 --> Shape: {x.shape}, all equal: {equal}, max error: abs {abs_max},mean {mean_max}')
x = self.pooling(x)
if verbose:
equal, abs_max, mean_max = check(x, unrolled_shape)
print(f'pooling --> Shape: {x.shape}, all equal: {equal}, max error: abs {abs_max},mean {mean_max}')
x = x.squeeze()
return x, x.shape
BATCH_SIZE = 16
@torch.no_grad()
def run_test(pooling, pooling_type, device):
device = torch.device(device)
model = AudioEncoder(pooling, pooling_type).to(device)
inp_element = torch.rand(25, 4480).to(device)
inp = torch.stack([inp_element.clone() for _ in range(BATCH_SIZE)])
# print(model)
y, shape = model(inp)
is_identical, max_abs, max_mean = check(y, [BATCH_SIZE, 25])
if is_identical:
print(f"Test: Pooling {pooling}, {pooling_type}. Device {device}, max_abs {max_abs},max_mean {max_mean} OK")
else:
print(
f"Test: Pooling {pooling}, {pooling_type}. Device {device}, max_abs {max_abs},max_mean {max_mean}. Failed")
print('---------------------------------------')
pooling_tests = [None, (3, 3)]
pooling_types = ['AvgPool']
devices = ['cuda:0', 'cuda:1', 'cpu']
if __name__ == '__main__':
print(get_sys_info())
for device in devices:
for pooling_i in pooling_tests:
for pooling_type_i in pooling_types:
run_test(pooling_i, pooling_type_i, device)
Python VERSION: 3.6.9 (default, Oct 8 2020, 12:12:24)
[GCC 8.4.0]
pyTorch VERSION: 1.7.0+cu110
CUDA VERSION: 11.0
CUDNN VERSION: 8004
Number CUDA Devices: 3
Active CUDA Device: GPU 0
Available devices 3
Current cuda device 0
Unrolled shape: torch.Size([16, 25])
Input--> Shape: torch.Size([16, 25, 4480]), device:cuda:0
/home/jfm/.local/lib/python3.6/site-packages/torch/functional.py:516: UserWarning: stft will require the return_complex parameter be explicitly specified in a future PyTorch release. Use return_complex=False to preserve the current behavior or return_complex=True to return a complex output. (Triggered internally at /pytorch/aten/src/ATen/native/SpectralOps.cpp:653.)
normalized, onesided, return_complex)
/home/jfm/.local/lib/python3.6/site-packages/torch/functional.py:516: UserWarning: The function torch.rfft is deprecated and will be removed in a future PyTorch release. Use the new torch.fft module functions, instead, by importing torch.fft and calling torch.fft.fft or torch.fft.rfft. (Triggered internally at /pytorch/aten/src/ATen/native/SpectralOps.cpp:590.)
normalized, onesided, return_complex)
FFT --> Shape: torch.Size([16, 25, 257, 35])
view --> Shape: torch.Size([400, 1, 257, 35]), all equal: True, max error: abs 0,mean 0
b1 --> Shape: torch.Size([400, 32, 128, 33]), all equal: True, max error: abs 0,mean 0
b2 --> Shape: torch.Size([400, 64, 122, 31]), all equal: True, max error: abs 0,mean 0
b3 --> Shape: torch.Size([400, 128, 116, 29]), all equal: True, max error: abs 0,mean 0
pooling --> Shape: torch.Size([400, 128, 1, 29]), all equal: True, max error: abs 0,mean 0
Test: Pooling None, AvgPool. Device cuda:0, max_abs 0,max_mean 0 OK
---------------------------------------
Unrolled shape: torch.Size([16, 25])
Input--> Shape: torch.Size([16, 25, 4480]), device:cuda:0
FFT --> Shape: torch.Size([16, 25, 257, 35])
view --> Shape: torch.Size([400, 1, 257, 35]), all equal: True, max error: abs 0,mean 0
b1 --> Shape: torch.Size([400, 32, 42, 11]), all equal: True, max error: abs 0,mean 0
b2 --> Shape: torch.Size([400, 64, 12, 3]), all equal: True, max error: abs 0,mean 0
b3 --> Shape: torch.Size([400, 128, 6, 1]), all equal: False, max error: abs 0.030087150633335114,mean 3.579247049856349e-06
pooling --> Shape: torch.Size([400, 128, 1, 1]), all equal: False, max error: abs 0.0028738644905388355,mean 1.6516462437721202e-06
Test: Pooling (3, 3), AvgPool. Device cuda:0, max_abs 0.0028738644905388355,max_mean 1.6516462437721202e-06. Failed
---------------------------------------
Unrolled shape: torch.Size([16, 25])
Input--> Shape: torch.Size([16, 25, 4480]), device:cuda:1
FFT --> Shape: torch.Size([16, 25, 257, 35])
view --> Shape: torch.Size([400, 1, 257, 35]), all equal: True, max error: abs 0,mean 0
b1 --> Shape: torch.Size([400, 32, 128, 33]), all equal: True, max error: abs 0,mean 0
b2 --> Shape: torch.Size([400, 64, 122, 31]), all equal: True, max error: abs 0,mean 0
b3 --> Shape: torch.Size([400, 128, 116, 29]), all equal: True, max error: abs 0,mean 0
pooling --> Shape: torch.Size([400, 128, 1, 29]), all equal: True, max error: abs 0,mean 0
Test: Pooling None, AvgPool. Device cuda:1, max_abs 0,max_mean 0 OK
---------------------------------------
Unrolled shape: torch.Size([16, 25])
Input--> Shape: torch.Size([16, 25, 4480]), device:cuda:1
FFT --> Shape: torch.Size([16, 25, 257, 35])
view --> Shape: torch.Size([400, 1, 257, 35]), all equal: True, max error: abs 0,mean 0
b1 --> Shape: torch.Size([400, 32, 42, 11]), all equal: True, max error: abs 0,mean 0
b2 --> Shape: torch.Size([400, 64, 12, 3]), all equal: True, max error: abs 0,mean 0
b3 --> Shape: torch.Size([400, 128, 6, 1]), all equal: False, max error: abs 0.03450433909893036,mean 4.106193046027329e-06
pooling --> Shape: torch.Size([400, 128, 1, 1]), all equal: False, max error: abs 0.0032321936450898647,mean 1.921637021951028e-06
Test: Pooling (3, 3), AvgPool. Device cuda:1, max_abs 0.0032321936450898647,max_mean 1.921637021951028e-06. Failed
---------------------------------------
Unrolled shape: torch.Size([16, 25])
Input--> Shape: torch.Size([16, 25, 4480]), device:cpu
FFT --> Shape: torch.Size([16, 25, 257, 35])
view --> Shape: torch.Size([400, 1, 257, 35]), all equal: True, max error: abs 0,mean 0
b1 --> Shape: torch.Size([400, 32, 128, 33]), all equal: True, max error: abs 0,mean 0
b2 --> Shape: torch.Size([400, 64, 122, 31]), all equal: True, max error: abs 0,mean 0
b3 --> Shape: torch.Size([400, 128, 116, 29]), all equal: True, max error: abs 0,mean 0
pooling --> Shape: torch.Size([400, 128, 1, 29]), all equal: True, max error: abs 0,mean 0
Test: Pooling None, AvgPool. Device cpu, max_abs 0,max_mean 0 OK
---------------------------------------
Unrolled shape: torch.Size([16, 25])
Input--> Shape: torch.Size([16, 25, 4480]), device:cpu
FFT --> Shape: torch.Size([16, 25, 257, 35])
view --> Shape: torch.Size([400, 1, 257, 35]), all equal: True, max error: abs 0,mean 0
b1 --> Shape: torch.Size([400, 32, 42, 11]), all equal: True, max error: abs 0,mean 0
b2 --> Shape: torch.Size([400, 64, 12, 3]), all equal: True, max error: abs 0,mean 0
b3 --> Shape: torch.Size([400, 128, 6, 1]), all equal: True, max error: abs 0,mean 0
pooling --> Shape: torch.Size([400, 128, 1, 1]), all equal: True, max error: abs 0,mean 0
Test: Pooling (3, 3), AvgPool. Device cpu, max_abs 0,max_mean 0 OK
---------------------------------------
Process finished with exit code 0 |
st47244 | Did you mean average pooling or max pooling? Your script seems to only be testing average pooling, no?
For average pooling, I could see different batches being averaged in a different order, leading to a 1e-6 error that then gets amplified later in the network. |
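Just to illustrate why reduction order matters: float32 addition is not associative, so even on CPU two accumulation orders usually disagree slightly. A tiny self-contained check (unrelated to your model):
import torch
torch.manual_seed(0)
x = torch.rand(10000)
s_vec = x.sum()           # vectorized / pairwise accumulation
s_loop = torch.zeros(())
for v in x:               # plain sequential accumulation
    s_loop += v
print((s_vec - s_loop).abs().item())  # usually a small non-zero value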
st47245 | Well, btw, it happens for max pooling as well. The pooling type flag only controls the last pooling (but the error arises before it).
Given the fact that the same machine processes all the elements, I would expect the output to be identical, as happens for cpu. Btw the error does not happen with pytorch 1.5. But yes, that small error is later amplified so that the output is substantially different for the same input.
I’m still trying to reduce the minimal example
Python VERSION: 3.6.9 (default, Oct 8 2020, 12:12:24)
[GCC 8.4.0]
pyTorch VERSION: 1.5.0
CUDA VERSION: 10.2
CUDNN VERSION: 7605
Number CUDA Devices: 3
Active CUDA Device: GPU 0
Available devices 3
Current cuda device 0
Unrolled shape: torch.Size([16, 25])
Input--> Shape: torch.Size([16, 25, 4480]), device:cuda:0
FFT --> Shape: torch.Size([16, 25, 257, 35])
view --> Shape: torch.Size([400, 1, 257, 35]), all equal: True, max error: abs 0,mean 0
b1 --> Shape: torch.Size([400, 32, 128, 33]), all equal: True, max error: abs 0,mean 0
b2 --> Shape: torch.Size([400, 64, 122, 31]), all equal: True, max error: abs 0,mean 0
b3 --> Shape: torch.Size([400, 128, 116, 29]), all equal: True, max error: abs 0,mean 0
pooling --> Shape: torch.Size([400, 128, 1, 29]), all equal: True, max error: abs 0,mean 0
Test: Pooling None, AvgPool. Device cuda:0, max_abs 0,max_mean 0 OK
---------------------------------------
Unrolled shape: torch.Size([16, 25])
Input--> Shape: torch.Size([16, 25, 4480]), device:cuda:0
FFT --> Shape: torch.Size([16, 25, 257, 35])
view --> Shape: torch.Size([400, 1, 257, 35]), all equal: True, max error: abs 0,mean 0
b1 --> Shape: torch.Size([400, 32, 42, 11]), all equal: True, max error: abs 0,mean 0
b2 --> Shape: torch.Size([400, 64, 12, 3]), all equal: True, max error: abs 0,mean 0
b3 --> Shape: torch.Size([400, 128, 6, 1]), all equal: True, max error: abs 0,mean 0
pooling --> Shape: torch.Size([400, 128, 1, 1]), all equal: True, max error: abs 0,mean 0
Test: Pooling (3, 3), AvgPool. Device cuda:0, max_abs 0,max_mean 0 OK
---------------------------------------
Unrolled shape: torch.Size([16, 25])
Input--> Shape: torch.Size([16, 25, 4480]), device:cuda:1
FFT --> Shape: torch.Size([16, 25, 257, 35])
view --> Shape: torch.Size([400, 1, 257, 35]), all equal: True, max error: abs 0,mean 0
b1 --> Shape: torch.Size([400, 32, 128, 33]), all equal: True, max error: abs 0,mean 0
b2 --> Shape: torch.Size([400, 64, 122, 31]), all equal: True, max error: abs 0,mean 0
b3 --> Shape: torch.Size([400, 128, 116, 29]), all equal: True, max error: abs 0,mean 0
pooling --> Shape: torch.Size([400, 128, 1, 29]), all equal: True, max error: abs 0,mean 0
Test: Pooling None, AvgPool. Device cuda:1, max_abs 0,max_mean 0 OK
---------------------------------------
Unrolled shape: torch.Size([16, 25])
Input--> Shape: torch.Size([16, 25, 4480]), device:cuda:1
FFT --> Shape: torch.Size([16, 25, 257, 35])
view --> Shape: torch.Size([400, 1, 257, 35]), all equal: True, max error: abs 0,mean 0
b1 --> Shape: torch.Size([400, 32, 42, 11]), all equal: True, max error: abs 0,mean 0
b2 --> Shape: torch.Size([400, 64, 12, 3]), all equal: True, max error: abs 0,mean 0
b3 --> Shape: torch.Size([400, 128, 6, 1]), all equal: True, max error: abs 0,mean 0
pooling --> Shape: torch.Size([400, 128, 1, 1]), all equal: True, max error: abs 0,mean 0
Test: Pooling (3, 3), AvgPool. Device cuda:1, max_abs 0,max_mean 0 OK
---------------------------------------
Unrolled shape: torch.Size([16, 25])
Input--> Shape: torch.Size([16, 25, 4480]), device:cpu
FFT --> Shape: torch.Size([16, 25, 257, 35])
view --> Shape: torch.Size([400, 1, 257, 35]), all equal: True, max error: abs 0,mean 0
b1 --> Shape: torch.Size([400, 32, 128, 33]), all equal: True, max error: abs 0,mean 0
b2 --> Shape: torch.Size([400, 64, 122, 31]), all equal: True, max error: abs 0,mean 0
b3 --> Shape: torch.Size([400, 128, 116, 29]), all equal: True, max error: abs 0,mean 0
pooling --> Shape: torch.Size([400, 128, 1, 29]), all equal: True, max error: abs 0,mean 0
Test: Pooling None, AvgPool. Device cpu, max_abs 0,max_mean 0 OK
---------------------------------------
Unrolled shape: torch.Size([16, 25])
Input--> Shape: torch.Size([16, 25, 4480]), device:cpu
FFT --> Shape: torch.Size([16, 25, 257, 35])
view --> Shape: torch.Size([400, 1, 257, 35]), all equal: True, max error: abs 0,mean 0
b1 --> Shape: torch.Size([400, 32, 42, 11]), all equal: True, max error: abs 0,mean 0
b2 --> Shape: torch.Size([400, 64, 12, 3]), all equal: True, max error: abs 0,mean 0
b3 --> Shape: torch.Size([400, 128, 6, 1]), all equal: True, max error: abs 0,mean 0
pooling --> Shape: torch.Size([400, 128, 1, 1]), all equal: True, max error: abs 0,mean 0
Test: Pooling (3, 3), AvgPool. Device cpu, max_abs 0,max_mean 0 OK
---------------------------------------
Process finished with exit code 0 |
st47246 | It seems reshaping (ravel/unravel) is one of the causes. It works fine without it.
import torch
import sys
import subprocess
from torch import nn
from torchaudio.transforms import MelSpectrogram, Spectrogram
import torch
N_FFT = 512
N_MELS = 256
HOP_LENGTH = 130
AUDIO_FRAMERATE = 16000
def get_sys_info():
"""
:param log: Logging logger in which to parse info
:type log: logging.logger
:return: None
"""
result = subprocess.Popen(["nvidia-smi", "--format=csv",
"--query-gpu=index,name,driver_version,memory.total,memory.used,memory.free"],
stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
nvidia = result.stdout.readlines().copy()
nvidia = [str(x) for x in nvidia]
nvidia = [x[2:-3] + '\r\t' for x in nvidia]
acum = ''
for x in nvidia:
acum = acum + x
return (' Python VERSION: {0} \n\t'
' pyTorch VERSION: {1} \n\t'
' CUDA VERSION: {2}\n\t'
' CUDNN VERSION: {3} \n\t'
' Number CUDA Devices: {4} \n\t'
' Devices: {5}\n\t'
'Active CUDA Device: GPU {6} \n\t'
'Available devices {7} \n\t'
'Current cuda device {8} \n\t'.format(sys.version, torch.__version__, torch.version.cuda,
torch.backends.cudnn.version(), torch.cuda.device_count(),
acum, torch.cuda.current_device(), torch.cuda.device_count(),
torch.cuda.current_device()))
def make_audio_block(filter_in, filters_out, lrn, max_pool=None, padding=0, stride=1, kernel_size=(3, 3)):
layers = [nn.Conv2d(filter_in, filters_out, kernel_size=kernel_size, padding=padding, stride=stride)]
layers.append(nn.ReLU(False))
if max_pool is not None:
layers.append(nn.MaxPool2d(max_pool))
return nn.Sequential(*layers)
def reshape(x, unrolled_shape):
return x.view(*unrolled_shape, *x.shape[1:])
def check(x):
x_unrolled = x
ref = x_unrolled[0]
all_equal = True
error_abs = [None for _ in range(x.shape[0])]
error_mean = [None for _ in range(x.shape[0])]
error_abs[0] = 0
error_mean[0] = 0
for i in range(1, x.shape[0]):
all_equal = all_equal and torch.allclose(ref, x_unrolled[i])
diff = torch.abs(ref - x_unrolled[i])
diff = diff[diff > 0]
error_abs[i] = diff.sum()
error_mean[i] = diff.mean()
return all_equal, max(error_abs), max(error_mean)
class AudioEncoder(nn.Module):
# 'filter_size': [96, 256, 512, 512*6*6]
def __init__(self, pooling, pooling_type='AvgPool'):
super(AudioEncoder, self).__init__()
assert pooling_type in ['MaxPool', 'AvgPool'], f'Pooling of type{pooling_type} should be MaxPool or AvgPool'
filters = [1, 32, 64, 128]
# self.preproc = MelSpectrogram(sample_rate=AUDIO_FRAMERATE, n_fft=N_FFT, hop_length=HOP_LENGTH, n_mels=N_MELS)
self.preproc = Spectrogram(n_fft=N_FFT, hop_length=HOP_LENGTH)
self.b1 = make_audio_block(filters[0], filters[1], lrn=False, max_pool=pooling, padding=2, kernel_size=(7, 7),
stride=(2, 1))
self.b2 = make_audio_block(filters[1], filters[2], lrn=False, max_pool=pooling, padding=0, kernel_size=(7, 3))
self.b3 = make_audio_block(filters[2], filters[3], lrn=False, padding=0, kernel_size=(7, 3))
if pooling_type == 'MaxPool':
self.pooling = nn.AdaptiveMaxPool2d((1, None))
else:
self.pooling = nn.AdaptiveAvgPool2d((1, None))
def forward(self, x):
verbose = True
if verbose:
print(f'Input--> Shape: {x.shape}, device:{x.device}')
x = self.preproc(x).unsqueeze(1)
if verbose:
print(f'FFT --> Shape: {x.shape}')
x = self.b1(x)
if verbose:
equal, abs_max, mean_max = check(x)
print(f'b1 --> Shape: {x.shape}, all equal: {equal}, max error: abs {abs_max},mean {mean_max}')
x = self.b2(x)
if verbose:
equal, abs_max, mean_max = check(x)
print(f'b2 --> Shape: {x.shape}, all equal: {equal}, max error: abs {abs_max},mean {mean_max}')
x = self.b3(x)
if verbose:
equal, abs_max, mean_max = check(x)
print(f'b3 --> Shape: {x.shape}, all equal: {equal}, max error: abs {abs_max},mean {mean_max}')
x = self.pooling(x)
if verbose:
equal, abs_max, mean_max = check(x)
print(f'pooling --> Shape: {x.shape}, all equal: {equal}, max error: abs {abs_max},mean {mean_max}')
x = x.squeeze()
return x, x.shape
BATCH_SIZE = 16
@torch.no_grad()
def run_test(pooling, pooling_type, device):
device = torch.device(device)
model = AudioEncoder(pooling, pooling_type).to(device)
inp_element = torch.rand(4480).to(device)
inp = torch.stack([inp_element.clone() for _ in range(BATCH_SIZE)])
# print(model)
y, shape = model(inp)
is_identical, max_abs, max_mean = check(y)
if is_identical:
print(f"Test: Pooling {pooling}, {pooling_type}. Device {device}, max_abs {max_abs},max_mean {max_mean} OK")
else:
print(
f"Test: Pooling {pooling}, {pooling_type}. Device {device}, max_abs {max_abs},max_mean {max_mean}. Failed")
print('---------------------------------------')
pooling_tests = [None, (3, 3)]
pooling_types = ['AvgPool']
devices = ['cuda:0', 'cuda:1', 'cpu']
if __name__ == '__main__':
print(get_sys_info())
for device in devices:
for pooling_i in pooling_tests:
for pooling_type_i in pooling_types:
run_test(pooling_i, pooling_type_i, device)
Python VERSION: 3.6.9 (default, Oct 8 2020, 12:12:24)
[GCC 8.4.0]
pyTorch VERSION: 1.7.0+cu110
CUDA VERSION: 11.0
CUDNN VERSION: 8004
Number CUDA Devices: 3
Active CUDA Device: GPU 0
Available devices 3
Current cuda device 0
Input--> Shape: torch.Size([16, 4480]), device:cuda:0
/home/jfm/.local/lib/python3.6/site-packages/torch/functional.py:516: UserWarning: stft will require the return_complex parameter be explicitly specified in a future PyTorch release. Use return_complex=False to preserve the current behavior or return_complex=True to return a complex output. (Triggered internally at /pytorch/aten/src/ATen/native/SpectralOps.cpp:653.)
normalized, onesided, return_complex)
/home/jfm/.local/lib/python3.6/site-packages/torch/functional.py:516: UserWarning: The function torch.rfft is deprecated and will be removed in a future PyTorch release. Use the new torch.fft module functions, instead, by importing torch.fft and calling torch.fft.fft or torch.fft.rfft. (Triggered internally at /pytorch/aten/src/ATen/native/SpectralOps.cpp:590.)
normalized, onesided, return_complex)
FFT --> Shape: torch.Size([16, 1, 257, 35])
b1 --> Shape: torch.Size([16, 32, 128, 33]), all equal: True, max error: abs 0,mean 0
b2 --> Shape: torch.Size([16, 64, 122, 31]), all equal: True, max error: abs 0,mean 0
b3 --> Shape: torch.Size([16, 128, 116, 29]), all equal: True, max error: abs 0,mean 0
pooling --> Shape: torch.Size([16, 128, 1, 29]), all equal: True, max error: abs 0,mean 0
Test: Pooling None, AvgPool. Device cuda:0, max_abs 0,max_mean 0 OK
---------------------------------------
Input--> Shape: torch.Size([16, 4480]), device:cuda:0
FFT --> Shape: torch.Size([16, 1, 257, 35])
b1 --> Shape: torch.Size([16, 32, 42, 11]), all equal: True, max error: abs 0,mean 0
b2 --> Shape: torch.Size([16, 64, 12, 3]), all equal: True, max error: abs 0,mean 0
b3 --> Shape: torch.Size([16, 128, 6, 1]), all equal: True, max error: abs 0,mean 0
pooling --> Shape: torch.Size([16, 128, 1, 1]), all equal: True, max error: abs 0,mean 0
Test: Pooling (3, 3), AvgPool. Device cuda:0, max_abs 0,max_mean 0 OK
---------------------------------------
Input--> Shape: torch.Size([16, 4480]), device:cuda:1
FFT --> Shape: torch.Size([16, 1, 257, 35])
b1 --> Shape: torch.Size([16, 32, 128, 33]), all equal: True, max error: abs 0,mean 0
b2 --> Shape: torch.Size([16, 64, 122, 31]), all equal: True, max error: abs 0,mean 0
b3 --> Shape: torch.Size([16, 128, 116, 29]), all equal: True, max error: abs 0,mean 0
pooling --> Shape: torch.Size([16, 128, 1, 29]), all equal: True, max error: abs 0,mean 0
Test: Pooling None, AvgPool. Device cuda:1, max_abs 0,max_mean 0 OK
---------------------------------------
Input--> Shape: torch.Size([16, 4480]), device:cuda:1
FFT --> Shape: torch.Size([16, 1, 257, 35])
b1 --> Shape: torch.Size([16, 32, 42, 11]), all equal: True, max error: abs 0,mean 0
b2 --> Shape: torch.Size([16, 64, 12, 3]), all equal: True, max error: abs 0,mean 0
b3 --> Shape: torch.Size([16, 128, 6, 1]), all equal: True, max error: abs 0,mean 0
pooling --> Shape: torch.Size([16, 128, 1, 1]), all equal: True, max error: abs 0,mean 0
Test: Pooling (3, 3), AvgPool. Device cuda:1, max_abs 0,max_mean 0 OK
---------------------------------------
Input--> Shape: torch.Size([16, 4480]), device:cpu
FFT --> Shape: torch.Size([16, 1, 257, 35])
b1 --> Shape: torch.Size([16, 32, 128, 33]), all equal: True, max error: abs 0,mean 0
b2 --> Shape: torch.Size([16, 64, 122, 31]), all equal: True, max error: abs 0,mean 0
b3 --> Shape: torch.Size([16, 128, 116, 29]), all equal: True, max error: abs 0,mean 0
pooling --> Shape: torch.Size([16, 128, 1, 29]), all equal: True, max error: abs 0,mean 0
Test: Pooling None, AvgPool. Device cpu, max_abs 0,max_mean 0 OK
---------------------------------------
Input--> Shape: torch.Size([16, 4480]), device:cpu
FFT --> Shape: torch.Size([16, 1, 257, 35])
b1 --> Shape: torch.Size([16, 32, 42, 11]), all equal: True, max error: abs 0,mean 0
b2 --> Shape: torch.Size([16, 64, 12, 3]), all equal: True, max error: abs 0,mean 0
b3 --> Shape: torch.Size([16, 128, 6, 1]), all equal: True, max error: abs 0,mean 0
pooling --> Shape: torch.Size([16, 128, 1, 1]), all equal: True, max error: abs 0,mean 0
Test: Pooling (3, 3), AvgPool. Device cpu, max_abs 0,max_mean 0 OK
---------------------------------------
Process finished with exit code 0 |
st47247 | Given the fact that the same machine processes all the elements, I would expect the output to be identical, as happens for cpu.
Don’t the cpu tests above actually give the same result? Only the cuda version shows differences right?
Also, you can try to set OMP_NUM_THREADS=1 (or the corresponding MKL variable if you use that) to avoid non-determinism on the CPU.
It seems reshaping (ravel/unravel) is one of the causes. It works fine without it.
What did you change exactly in the code? |
st47248 | Okay, let me introduce it again.
This is the simplest case I could find.
The idea is to ravel/unravel temporal 5-dimensional tensors so that one of the dimensions is processed in the batch dimension.
It’s basically the following ops:
View-->conv2d-->view
Only cuda shows differences.
Torch 1.6 (cuda 10.__) is ok. Torch 1.7 (cuda 11) is not.
So it seems that setting
torch.backends.cudnn.deterministic = True
torch.set_deterministic(True)
solves the issue. However, this wasn't necessary in PyTorch 1.6, whereas it is in PyTorch 1.7.
Do you know what changed?
import torch
import sys
import subprocess
from torch import nn
import torch
def get_sys_info():
"""
:param log: Logging logger in which to parse info
:type log: logging.logger
:return: None
"""
result = subprocess.Popen(["nvidia-smi", "--format=csv",
"--query-gpu=index,name,driver_version,memory.total,memory.used,memory.free"],
stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
nvidia = result.stdout.readlines().copy()
nvidia = [str(x) for x in nvidia]
nvidia = [x[2:-3] + '\r\t' for x in nvidia]
acum = ''
for x in nvidia:
acum = acum + x
return (' Python VERSION: {0} \n\t'
' pyTorch VERSION: {1} \n\t'
' CUDA VERSION: {2}\n\t'
' CUDNN VERSION: {3} \n\t'
' Number CUDA Devices: {4} \n\t'
' Devices: {5}\n\t'
'Active CUDA Device: GPU {6} \n\t'
'Available devices {7} \n\t'
'Current cuda device {8} \n\t'.format(sys.version, torch.__version__, torch.version.cuda,
torch.backends.cudnn.version(), torch.cuda.device_count(),
acum, torch.cuda.current_device(), torch.cuda.device_count(),
torch.cuda.current_device()))
def reshape(x, unrolled_shape):
return x.view(*unrolled_shape, *x.shape[1:])
def check(x_unrolled,unrolled_shape):
ref = x_unrolled[0]
all_equal = True
error_abs = [None for _ in range(unrolled_shape[0])]
error_mean = [None for _ in range(unrolled_shape[0])]
error_abs[0] = 0
error_mean[0] = 0
for i in range(1, unrolled_shape[0]):
all_equal = all_equal and torch.allclose(ref, x_unrolled[i])
diff = torch.abs(ref - x_unrolled[i])
diff = diff[diff > 0]
error_abs[i] = diff.sum()
error_mean[i] = diff.mean()
return all_equal, max(error_abs), max(error_mean)
class Toy(nn.Module):
def __init__(self):
super(Toy, self).__init__()
self.conv = nn.Conv2d(1,128, padding=0, kernel_size=(7, 3))
def forward(self, x):
unraveled_shape = x.shape[:2]
print(f'Input--> Shape: {x.shape}, device:{x.device}')
x = x.view(-1, 1, 12,3).contiguous()
undo = x.view(*unraveled_shape, 12,3).contiguous()
print(f'Raveled View OP --> Shape: {x.shape}')
print(f'Unraveled View OP --> Shape: {undo.shape}')
equal, abs_max, mean_max = check(undo, unraveled_shape)
print(f'View Results: Are equal: {equal}, Max. Abs. Diff: {abs_max}, Mean Abs. Diff: {mean_max}')
x = self.conv(x).contiguous()
undo = x.view(*unraveled_shape, 128, 6,1).contiguous()
print(f'Raveled Conv2D OP --> Shape: {x.shape}')
print(f'UnRaveled Conv2D OP --> Shape: {undo.shape}')
equal, abs_max, mean_max = check(undo, unraveled_shape)
print(f'View Results: Are equal: {equal}, Max. Abs. Diff: {abs_max}, Mean Abs. Diff: {mean_max}')
return x
BATCH_SIZE = 16
@torch.no_grad()
def run_test( device):
device = torch.device(device)
model = Toy().to(device)
inp_element = torch.rand(25, 12,3).to(device)
inp = torch.stack([inp_element.clone() for _ in range(BATCH_SIZE)])
# print(model)
y = model(inp)
print('---------------------------------------')
devices = ['cuda:0', 'cuda:1', 'cpu']
if __name__ == '__main__':
print(get_sys_info())
for device in devices:
run_test( device)
REPORT FOR TORCH 1.6
Python VERSION: 3.6.9 (default, Oct 8 2020, 12:12:24)
[GCC 8.4.0]
pyTorch VERSION: 1.6.0
CUDA VERSION: 10.2
CUDNN VERSION: 7605
Number CUDA Devices: 3
Active CUDA Device: GPU 0
Available devices 3
Current cuda device 0
Input--> Shape: torch.Size([16, 25, 12, 3]), device:cuda:0
Raveled View OP --> Shape: torch.Size([400, 1, 12, 3])
Unraveled View OP --> Shape: torch.Size([16, 25, 12, 3])
View Results: Are equal: True, Max. Abs. Diff: 0, Mean Abs. Diff: 0
Raveled Conv2D OP --> Shape: torch.Size([400, 128, 6, 1])
UnRaveled Conv2D OP --> Shape: torch.Size([16, 25, 128, 6, 1])
View Results: Are equal: True, Max. Abs. Diff: 0, Mean Abs. Diff: 0
---------------------------------------
Input--> Shape: torch.Size([16, 25, 12, 3]), device:cuda:1
Raveled View OP --> Shape: torch.Size([400, 1, 12, 3])
Unraveled View OP --> Shape: torch.Size([16, 25, 12, 3])
View Results: Are equal: True, Max. Abs. Diff: 0, Mean Abs. Diff: 0
Raveled Conv2D OP --> Shape: torch.Size([400, 128, 6, 1])
UnRaveled Conv2D OP --> Shape: torch.Size([16, 25, 128, 6, 1])
View Results: Are equal: True, Max. Abs. Diff: 0, Mean Abs. Diff: 0
---------------------------------------
Input--> Shape: torch.Size([16, 25, 12, 3]), device:cpu
Raveled View OP --> Shape: torch.Size([400, 1, 12, 3])
Unraveled View OP --> Shape: torch.Size([16, 25, 12, 3])
View Results: Are equal: True, Max. Abs. Diff: 0, Mean Abs. Diff: 0
Raveled Conv2D OP --> Shape: torch.Size([400, 128, 6, 1])
UnRaveled Conv2D OP --> Shape: torch.Size([16, 25, 128, 6, 1])
View Results: Are equal: True, Max. Abs. Diff: 0, Mean Abs. Diff: 0
---------------------------------------
REPORT FOR PYTORCH 1.7
Python VERSION: 3.6.9 (default, Oct 8 2020, 12:12:24)
[GCC 8.4.0]
pyTorch VERSION: 1.7.0+cu110
CUDA VERSION: 11.0
CUDNN VERSION: 8004
Number CUDA Devices: 3
Active CUDA Device: GPU 0
Available devices 3
Current cuda device 0
Input--> Shape: torch.Size([16, 25, 12, 3]), device:cuda:0
Raveled View OP --> Shape: torch.Size([400, 1, 12, 3])
Unraveled View OP --> Shape: torch.Size([16, 25, 12, 3])
View Results: Are equal: True, Max. Abs. Diff: 0, Mean Abs. Diff: 0
Raveled Conv2D OP --> Shape: torch.Size([400, 128, 6, 1])
UnRaveled Conv2D OP --> Shape: torch.Size([16, 25, 128, 6, 1])
View Results: Are equal: False, Max. Abs. Diff: 0.00018408219330012798, Mean Abs. Diff: 2.6632262617454217e-08
---------------------------------------
Input--> Shape: torch.Size([16, 25, 12, 3]), device:cuda:1
Raveled View OP --> Shape: torch.Size([400, 1, 12, 3])
Unraveled View OP --> Shape: torch.Size([16, 25, 12, 3])
View Results: Are equal: True, Max. Abs. Diff: 0, Mean Abs. Diff: 0
Raveled Conv2D OP --> Shape: torch.Size([400, 128, 6, 1])
UnRaveled Conv2D OP --> Shape: torch.Size([16, 25, 128, 6, 1])
View Results: Are equal: False, Max. Abs. Diff: 0.0001880067866295576, Mean Abs. Diff: 2.7989695894348188e-08
---------------------------------------
Input--> Shape: torch.Size([16, 25, 12, 3]), device:cpu
Raveled View OP --> Shape: torch.Size([400, 1, 12, 3])
Unraveled View OP --> Shape: torch.Size([16, 25, 12, 3])
View Results: Are equal: True, Max. Abs. Diff: 0, Mean Abs. Diff: 0
Raveled Conv2D OP --> Shape: torch.Size([400, 128, 6, 1])
UnRaveled Conv2D OP --> Shape: torch.Size([16, 25, 128, 6, 1])
View Results: Are equal: True, Max. Abs. Diff: 0, Mean Abs. Diff: 0
---------------------------------------
Process finished with exit code 0 |
st47249 | It would be very helpful if you could get the cudnn versions to match.
The fact that setting torch.backends.cudnn.deterministic = True fixes it seems to hint that something changed on the cudnn side.
In particular, for the old version, it was picking a deterministic algorithm by chance, which is not the case for the new version? |
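For reference, these are the standard knobs I would compare between the two installs (just the usual settings, not a guaranteed fix):
import torch
print(torch.backends.cudnn.version())      # compare this between the 1.6 and 1.7 installs
torch.backends.cudnn.benchmark = False     # disable autotuning so the algorithm choice is stable
torch.backends.cudnn.deterministic = True  # restrict cuDNN to deterministic algorithms
torch.set_deterministic(True)              # 1.7: raise an error on known non-deterministic ops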
st47250 | I’m currently using a Transformer decoder as an autoregressive model to generate a sequence. (So each element within a sequence depends on all the previously generated elements)
Now if I want to generate a sequence, I have to generate each element one by one in sequence. When I use the forward pass of the transformer decoder, the embeddings for all the previous elements in the sequence are always recomputed, but I actually only need to compute the very last element to attach to the sequence (since the previous elements are unchanged).
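For concreteness, my generation loop currently looks roughly like this (a minimal sketch with placeholder sizes and greedy decoding; every step re-runs the decoder over the whole prefix):
import torch
import torch.nn as nn

d_model, vocab_size, max_len = 512, 1000, 20
embed = nn.Embedding(vocab_size, d_model)
decoder = nn.TransformerDecoder(nn.TransformerDecoderLayer(d_model, nhead=8), num_layers=6)
to_logits = nn.Linear(d_model, vocab_size)

def causal_mask(sz):
    # float mask: -inf above the diagonal, 0 elsewhere
    return torch.triu(torch.full((sz, sz), float('-inf')), diagonal=1)

memory = torch.rand(10, 1, d_model)       # encoder output: (src_len, batch, d_model)
ys = torch.zeros(1, 1, dtype=torch.long)  # start token, shape (tgt_len, batch)

for _ in range(max_len):
    tgt = embed(ys)                                            # (tgt_len, batch, d_model)
    out = decoder(tgt, memory, tgt_mask=causal_mask(ys.size(0)))
    next_token = to_logits(out[-1:]).argmax(-1)                # only the last position is actually new
    ys = torch.cat([ys, next_token], dim=0)                    # yet the next pass recomputes everything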
Is there any way to do this more efficiently in PyTorch using the MultiheadAttention/TransformerDecoder module? (Let me know if my question is unclear) |
st47251 | Hi,
Problem:
Based on the issues on GitHub, PyTorch does not support torch.solve for sparse tensors (neither forward nor backward). The main issue is the runtime error: no stride. I wonder if there are any workarounds for any special case so I can fix my issue. Based on my experiments, there is no way I can handle my problem using dense matrices, and I also need the backward pass.
I do not know if it is possible, but can I use a third-party lib such as CuPy and still maintain the computational graph and backward through it?
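To make the idea concrete, this is the kind of wrapper I have in mind: a rough sketch that wraps scipy.sparse.linalg.spsolve in a custom autograd.Function (CPU only, A treated as a constant, gradient only w.r.t. b via the adjoint system A^T grad_b = grad_x; the class name is mine, not an existing PyTorch API):
import numpy as np
import torch
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import spsolve

class SparseSolve(torch.autograd.Function):
    @staticmethod
    def forward(ctx, A_csr, b):
        # A_csr: scipy.sparse.csr_matrix (constant), b: 1-D torch tensor
        x = spsolve(A_csr, b.detach().cpu().numpy())
        ctx.A_csr = A_csr
        return torch.as_tensor(x, dtype=b.dtype)

    @staticmethod
    def backward(ctx, grad_x):
        # x = A^{-1} b  =>  dL/db = A^{-T} dL/dx
        grad_b = spsolve(ctx.A_csr.T.tocsr(), grad_x.detach().cpu().numpy())
        return None, torch.as_tensor(grad_b, dtype=grad_x.dtype)

A = csr_matrix(np.array([[4.0, 1.0, 0.0],
                         [1.0, 3.0, 0.0],
                         [0.0, 0.0, 2.0]]))
b = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
x = SparseSolve.apply(A, b)
x.sum().backward()
print(x, b.grad)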
Rewriting spdiags & spsolve?
I’m trying to rewrite some legacy code and switch from scipy to pytorch and I’m having difficulties re-writing the spdiags and spsolve.
How could I perform these operations on GPUs?
A = spdiags(B.T,d,k,k)
A = A + A.T + spdiags(D.T, 0, k, k)
return spsolve(A, input)
github.com
Quansight-Labs/rfcs/blob/pearu/rfc0005/RFC0003-sparse-roadmap/SparseSupportState.md#linear-algebra-algorithms 1
docs.scipy.org
scipy.sparse.linalg.spsolve — SciPy v1.5.4 Reference Guide 2
github.com/pytorch/pytorch
The state of sparse Tensors (issue opened Jul 21, 2018 by weiyangfb): "This note tries to summarize the current state of sparse tensor in pytorch. It describes important invariance and properties of sparse..." [module: sparse, triaged]
github.com/pytorch/pytorch
Todo functions and autograd supports for Sparse Tensor (issue opened Jun 25, 2018 by weiyangfb): "Here summarizes a list of requested Sparse Tensor functions and autograd supports from previous PRs. Please feel free to comment on..." [module: sparse, triaged]
So far:
I have found that TensorFlow also does not support these sparse operations, for the same reasons as PyTorch.
Apparently, there are some libraries such as cuSolver which support the forward pass, but the backward still needs to be implemented. As there are no Python bindings for this lib, I wonder whether it makes sense to make this happen in PyTorch.
Another approach which I don't like (nothing is like PyTorch) is to change frameworks to Jax, as apparently it supports forward/backward for spsolve or cholesky_solve, or even to use another language such as Julia.
Please let me know if there is any ongoing project on solving this issue or any other ideas.
Bests |
st47252 | I have two questions:
first: what is the difference between googlenet.py and inception.py? If I want to use the GoogLeNet model, which one is suitable?
second:
[two screenshots omitted]
I want to know which function is right if I want to get the output's size. When I use the VGG model the shape is ok.
Thanks so much |
st47253 | github.com
pytorch/vision/blob/d2c763e14efe57e4bf3ebf916ec243ce8ce3315c/torchvision/models/googlenet.py#L19 2
from torch import Tensor
from .utils import load_state_dict_from_url
__all__ = ['GoogLeNet', 'googlenet', "GoogLeNetOutputs", "_GoogLeNetOutputs"]
model_urls = {
# GoogLeNet ported from TensorFlow
'googlenet': 'https://download.pytorch.org/models/googlenet-1378be20.pth',
}
GoogLeNetOutputs = namedtuple('GoogLeNetOutputs', ['logits', 'aux_logits2', 'aux_logits1'])
GoogLeNetOutputs.__annotations__ = {'logits': Tensor, 'aux_logits2': Optional[Tensor],
'aux_logits1': Optional[Tensor]}
# Script annotations failed with _GoogleNetOutputs = namedtuple ...
# _GoogLeNetOutputs set here for backwards compat
_GoogLeNetOutputs = GoogLeNetOutputs
def googlenet(pretrained=False, progress=True, **kwargs):
r"""GoogLeNet (Inception v1) model architecture from
As you can see, GoogleNetOutputs is a namedtuple, not a tensor. |
st47254 | GoogLeNet returns a named tuple and thus behaves differently than VGG. See the source 3
GoogLeNetOutputs = namedtuple('GoogLeNetOutputs', ['logits', 'aux_logits2', 'aux_logits1'])
Try
output = net(x)
output_logits = output.logits
print(output_logits.shape) |
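Note that the namedtuple is only returned while the aux classifiers are active, i.e. in training mode with aux_logits=True; in eval mode you get a plain tensor, so .shape works directly. A quick check (assuming torchvision is installed):
import torch
from torchvision.models import googlenet

net = googlenet(aux_logits=True)
x = torch.rand(2, 3, 224, 224)

net.train()
out = net(x)
print(type(out).__name__, out.logits.shape)   # GoogLeNetOutputs torch.Size([2, 1000])

net.eval()
with torch.no_grad():
    out = net(x)
print(out.shape)                              # torch.Size([2, 1000]) -- plain tensor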
st47255 | Thank you for your code, it is OK. Could you answer my first question? Thanks again. |
st47256 | 111186:
first:what’s the different between googlenet.py and inception.py
googlenet.py: GoogLeNet aka Inception v1
inception.py: Inception v3
For more info:
import torchvision
help(torchvision.models.googlenet)
help(torchvision.models.inception_v3) |
st47257 | Hello All,
I am trying to use LSTMCell to form multiple LSTM layers instead of using LSTM. I am getting the following error, and it looks like it is due to nn.utils.rnn.pack_padded_sequence, but I am not able to debug it further. Could you please help me with this?
My network definition is:
class RNN(nn.Module):
def __init__(self, vocab_size, embedding_dim, hidden_dim, output_dim, n_layers,
bidirectional, dropout, pad_idx):
super().__init__()
self.n_layers = n_layers
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
# Number of time steps
self.sequence_len = 3
self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx = pad_idx)
# Initialize LSTM Cell for the first layer
self.lstm_cell_layer_1 = nn.LSTMCell(self.embedding_dim, self.hidden_dim)
# Initialize LSTM Cell for the second layer
self.lstm_cell_layer_2 = nn.LSTMCell(self.hidden_dim, self.hidden_dim)
# Initialize LSTM Cell for the second layer
self.lstm_cell_layer_3 = nn.LSTMCell(self.hidden_dim, self.hidden_dim)
## we would need to initialize the hidden and cell state for each LSTM layer.
self.fc = nn.Linear(hidden_dim*2, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, text, text_lengths):
#text = [sent len, batch size]
embedded = self.dropout(self.embedding(text))
#embedded = [sent len, batch size, emb dim]
## Initialisation
#pack sequence
packed_embedded = nn.utils.rnn.pack_padded_sequence(embedded, text_lengths)
# packed_embedded = embedded
print(f'text_lengths{text_lengths}')
print(f'text.size(0){text.size(0)}')
# out = packed_embedded.view(self.sequence_len, text.size(0), -1)
# Creation of cell state and hidden state for layer 1
hidden_state = torch.zeros(self.embedding_dim, self.hidden_dim)
cell_state = torch.zeros(self.embedding_dim, self.hidden_dim)
# Creation of cell state and hidden state for layer 2
hidden_state_2 = torch.zeros(self.embedding_dim, self.hidden_dim)
cell_state_2 = torch.zeros(self.embedding_dim, self.hidden_dim)
# Creation of cell state and hidden state for layer 3
hidden_state_3 = torch.zeros(self.embedding_dim, self.hidden_dim)
cell_state_3 = torch.zeros(self.embedding_dim, self.hidden_dim)
# Weights initialization
torch.nn.init.xavier_normal_(hidden_state)
torch.nn.init.xavier_normal_(cell_state)
torch.nn.init.xavier_normal_(hidden_state_2)
torch.nn.init.xavier_normal_(cell_state_2)
torch.nn.init.xavier_normal_(hidden_state_3)
torch.nn.init.xavier_normal_(cell_state_3)
## End of Initialisation
# Unfolding LSTM
for input_t in range(self.sequence_len):
# print(f'packed_embedded.size(0){len(packed_embedded)}')
hidden_state_1, cell_state_1 = self.lstm_cell_layer_1(packed_embedded,(hidden_state, cell_state))
hidden_state_2, cell_state_2 = self.lstm_cell_2(hidden_state_1, (hidden_state_2, cell_state_2))
hidden_state_3, cell_state_3 = self.lstm_cell_3(hidden_state_2, (hidden_state_3, cell_state_3))
#unpack sequence
output, output_lengths = nn.utils.rnn.pad_packed_sequence(hidden_state_3)
hidden = self.dropout(torch.cat((hidden_state_3[-2,:,:], hidden_state_3[-1,:,:]), dim = 1))
return self.fc(hidden)
Error:
AttributeError Traceback (most recent call last)
in ()
7 start_time = time.time()
8
----> 9 train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
10 valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
11
5 frames
in train(model, iterator, optimizer, criterion)
13
14 text_lengths = text_lengths.cpu()
---> 15 predictions = model(text, text_lengths).squeeze(1)
16
17 loss = criterion(predictions, batch.label)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
in forward(self, text, text_lengths)
76 for input_t in range(self.sequence_len):
77 # print(f'packed_embedded.size(0){len(packed_embedded)}')
---> 78 hidden_state_1, cell_state_1 = self.lstm_cell_layer_1(packed_embedded,(hidden_state, cell_state))
79 hidden_state_2, cell_state_2 = self.lstm_cell_2(hidden_state_1, (hidden_state_2, cell_state_2))
80 hidden_state_3, cell_state_3 = self.lstm_cell_3(hidden_state_2, (hidden_state_3, cell_state_3))
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/rnn.py in forward(self, input, hx)
963
964 def forward(self, input: Tensor, hx: Optional[Tuple[Tensor, Tensor]] = None) -> Tuple[Tensor, Tensor]:
--> 965 self.check_forward_input(input)
966 if hx is None:
967 zeros = torch.zeros(input.size(0), self.hidden_size, dtype=input.dtype, device=input.device)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/rnn.py in check_forward_input(self, input)
788
789 def check_forward_input(self, input: Tensor) -> None:
--> 790 if input.size(1) != self.input_size:
791 raise RuntimeError(
792 "input has inconsistent input_size: got {}, expected {}".format(
AttributeError: ‘PackedSequence’ object has no attribute ‘size’ |
st47258 | Specifically, I see here in the docs 2:
>>> transformer_model = nn.Transformer(nhead=16, num_encoder_layers=12)
>>> src = torch.rand((10, 32, 512))
>>> tgt = torch.rand((20, 32, 512))
>>> out = transformer_model(src, tgt)
I'm unsure what the tgt is. The docs say tgt – the sequence to the decoder (required). But I'm not passing a sequence to the decoder. I want the decoder to give me an output, don't I? |
st47259 | Hi, I am trying to train an rcnn model, but I have a few problems when I am using batch_size>1.
My function which makes the dataset returns a single img tensor and a single target (dictionary):
return (
img,
target,
)
Later I am loading datasets to dataloader which loads data to the training process.
with tqdm(train_loader) as _tqdm:
for x, y in _tqdm:
x = x.to(device)
for key, value in y.items():
y[key] = torch.tensor(value).to(device)
outputs = model(x, y)
When batch_size is e.g. == 2, then x is a list of two tensors of shape [D, H, W] but y is a dict where every key has two values. Then this error occurs:
.../generalized_rcnn.py", line 64, in forward boxes = target["boxes"] TypeError: string indices must be integers
So I assumed that it probably can't handle dictionaries and I pass:
output(x, [y])
Then error changes to:
.../transform.py", line 99, in forward
target_index = targets[i] if targets is not None else None
IndexError: list index out of range
Any ideas how to make it run properly? |
st47260 | Tomash:
then x is a list of two tensors of shape [D, H, W] but y is a dict where every key has two values.
Would it work if you pass y as a list of two dicts? |
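Something along these lines, a minimal sketch that uses a torchvision detection model as a stand-in for your custom RCNN (the toy dataset, image sizes and class count are made up):
import torch
from torch.utils.data import DataLoader, Dataset
from torchvision.models.detection import fasterrcnn_resnet50_fpn

class ToyDetDataset(Dataset):
    def __len__(self):
        return 4
    def __getitem__(self, idx):
        img = torch.rand(3, 64, 64)
        target = {"boxes": torch.tensor([[10.0, 10.0, 40.0, 40.0]]),
                  "labels": torch.tensor([1])}
        return img, target

def detection_collate(batch):
    # keep images and per-image target dicts as parallel lists instead of stacking
    return tuple(zip(*batch))

loader = DataLoader(ToyDetDataset(), batch_size=2, collate_fn=detection_collate)
model = fasterrcnn_resnet50_fpn(num_classes=2)
model.train()

for images, targets in loader:
    images = list(images)                 # list of Tensor[C, H, W]
    targets = [dict(t) for t in targets]  # list of dicts, one per image
    loss_dict = model(images, targets)
    print({k: round(v.item(), 4) for k, v in loss_dict.items()})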
st47261 | Finally, I solved this by adding:
y_list = []
for i in range(0, len(x)):
y_list.append(y) |
st47262 | Hello,
I am trying to train a multi-modal model on multiple GPUs using torch.nn.DataParallel.
However, I have multiple modalities so the input to the model is a dictionary.
Is there any way to make this work on multiple GPUs? As far as I’ve understood DataParallel only works if the input to the model is a tensor.
My input to the model looks like the following:
{
'modality_1': torch.Tensor((bs, *img_size)),
'modality_2': torch.Tensor((bs, *img_size)),
'modality_3': torch.Tensor((bs, *img_size)),
} |
st47263 | Solved by klory in post #2
Sol 1: define your own class for your inputs and inplement the to() function
Sol2:
self.data = {k: v.to(device) for k, v in self.data.items()}
Ref: |
st47264 | Sol 1: define your own class for your inputs and implement the to() function
Sol 2:
self.data = {k: v.to(device) for k, v in self.data.items()}
Ref:
huggingface.co
transformers.tokenization_utils_base — transformers 3.5.0 documentation 19 |
st47265 | Hi @klory, thanks for your answer!
I’m not quite sure if I understand correctly: should I send each value in the dict to another GPU for parallel computing? |
st47266 | You only need to wrap your model in nn.DataParallel; the data will be automatically distributed to all GPUs as long as you called to() on it. |
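A minimal sketch of what I mean (the module and the modality keys are placeholders, not your actual model):
import torch
import torch.nn as nn

class MultiModalNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.head = nn.Linear(128, 10)
    def forward(self, batch):
        # each replica receives a dict whose values are its slice of the batch
        return self.head(batch['modality_1'])

device = torch.device('cuda')
model = nn.DataParallel(MultiModalNet()).to(device)

batch = {'modality_1': torch.rand(8, 128),
         'modality_2': torch.rand(8, 128)}
batch = {k: v.to(device) for k, v in batch.items()}

out = model(batch)   # dict values are scattered along dim 0 across the GPUs
print(out.shape)     # torch.Size([8, 10])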
st47267 | Oh okay I see, looks easy
I’m getting following error though on the line results = model(batch):
File "miniconda3/envs/mimic/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "miniconda3/envs/mimic/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 156, in forward
return self.gather(outputs, self.output_device)
File "miniconda3/envs/mimic/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 168, in gather
return gather(outputs, output_device, dim=self.dim)
File "miniconda3/envs/mimic/lib/python3.8/site-packages/torch/nn/parallel/scatter_gather.py", line 68, in gather
res = gather_map(outputs)
File "miniconda3/envs/mimic/lib/python3.8/site-packages/torch/nn/parallel/scatter_gather.py", line 61, in gather_map
return type(out)(((k, gather_map([d[k] for d in outputs]))
File "miniconda3/envs/mimic/lib/python3.8/site-packages/torch/nn/parallel/scatter_gather.py", line 61, in <genexpr>
return type(out)(((k, gather_map([d[k] for d in outputs]))
File "miniconda3/envs/mimic/lib/python3.8/site-packages/torch/nn/parallel/scatter_gather.py", line 61, in gather_map
return type(out)(((k, gather_map([d[k] for d in outputs]))
File "miniconda3/envs/mimic/lib/python3.8/site-packages/torch/nn/parallel/scatter_gather.py", line 61, in <genexpr>
return type(out)(((k, gather_map([d[k] for d in outputs]))
File "miniconda3/envs/mimic/lib/python3.8/site-packages/torch/nn/parallel/scatter_gather.py", line 63, in gather_map
return type(out)(map(gather_map, zip(*outputs)))
TypeError: 'Laplace' object is not iterable
Is there anything I could be doing wrong?
I have applied the following line to the batch:
batch = {k: v.to(torch.device('cuda')) for k, v in batch.items()} |