st31168
|
So you have prior = p = N(0,1) and q = N(mu, diag(var))
It seems that you compute D_KL(p || q) in the first case and the more common D_KL(q || p) in the second. KL divergence is not symmetric, so these will differ.
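For illustration (not from the original post), a minimal sketch using torch.distributions with made-up parameters, showing that the two directions give different values:
import torch
from torch.distributions import Normal, kl_divergence

p = Normal(torch.tensor(0.0), torch.tensor(1.0))  # prior N(0, 1)
q = Normal(torch.tensor(0.5), torch.tensor(2.0))  # q = N(mu, sigma), values made up
print(kl_divergence(p, q))  # D_KL(p || q)
print(kl_divergence(q, p))  # D_KL(q || p) -- generally a different value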
Best regards
Thomas
|
st31169
|
I have a system that uses a few different open-source networks for various applications. I’m just starting to write unit tests with nose. The first plan was to test everything with one large test, but that is causing crashes. If I comment things out, all 3 of my networks will pass their individual tests. But if they are all called in succession in the same script, the first one will pass and then the second one will crash on loss.backward() with either a “RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR” or a segmentation fault on different runs. Any thoughts on what might be causing this? Should I just write 3 separate tests?
|
st31170
|
Solved by eqy in post #4
Can you check that nothing from the tests inadvertently keeps tensors/variables around when they aren’t needed (e.g., returned values or global variables)? A litmus test for this is to use trivially small inputs for the tests so that it is expected that everything should fit in memory even if it is …
|
st31171
|
Can you keep an eye on the memory usage between the tests (e.g., via nvidia-smi)? It might be that something is kept in memory from each of the tests, causing an OOM when all three are run in succession.
|
st31172
|
That could be it… beforehand it is
1502MiB / 4043MiB
and if I stop in the middle it is
2209MiB / 4043MiB.
Is there a way to clear that up? I do already have a call to torch.cuda.empty_cache().
I also remember reading that with PyTorch, when GPU memory is freed, nvidia-smi will still show it as used even though it can be reused by future PyTorch code.
|
st31173
|
Can you check that nothing from the tests inadvertently keeps tensors/variables around when they aren’t needed (e.g., returned values or global variables)? A litmus test for this is to use trivially small inputs for the tests so that it is expected that everything should fit in memory even if it is all kept.
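As a rough sketch (the helper name is made up), you could also print the allocated CUDA memory between the tests to spot anything that stays alive:
import torch

def report_memory(tag):
    # memory_allocated: tensors currently alive; memory_reserved: cache held by PyTorch
    print(f"{tag}: allocated={torch.cuda.memory_allocated() / 1e6:.1f} MB, "
          f"reserved={torch.cuda.memory_reserved() / 1e6:.1f} MB")

report_memory("after test 1")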
|
st31174
|
Hmm, global variables could definitely be it… I know there are a couple of global things defined; if any global ever holds a reference to the network itself, I guess that would mean the whole network is kept alive. Maybe just having multiple tests is a better idea. Thanks for the help!
|
st31175
|
Based on this post, I am running some tests to check that the framework has no errors/bugs.
1. I created a training dataset with one single data point (3D image and mask) to overfit the U-Net. The validation set is the same data point. After training for 200 epochs I got the following curves.
[screenshot: training/validation curves, 1216×1212]
Both the train and validation dice scores seem to reach 80% (I expected them to reach 100%), and the validation loss curve follows the training curve very closely.
2. With the same loss, learning rate, and weight initialization, I changed the validation set to completely different image patches. The result:
[screenshot: training/validation curves, 1212×1184]
The validation curves behave in a random way (as they should), and for some reason the dice goes beyond 1 on the train samples (all samples are the same image/mask).
The dice overshoot may be due to the MSDL (multi-sourced dice loss) I used (which doesn’t take the background into account).
From these test results, is it confirmed that the current framework works? Are there any other checks that need to be performed?
If the setup is good, shouldn’t it overfit in the first case, with the dice metric reaching nearly 1?
|
st31176
|
I am creating a custom DataLoader for my Dataset.
from __future__ import print_function, division

import os
import cv2
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import torch
from skimage import io, transform
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
Here is my custom class
class DFU_Dataset(Dataset):
    def __init__(self, root_dir, csv, transform):
        self.root_dir = root_dir
        self.landmarks_frame = pd.read_csv(csv)
        self.transform = transform

    def __len__(self):
        return len(self.landmarks_frame)

    def __getitem__(self, idx):
        img_name = os.path.join(self.root_dir, self.landmarks_frame.iloc[idx, 0])
        image = io.imread(img_name)
        label = np.argmax(self.landmarks_frame.loc[idx, 'none':'both'].values)

        # Transform
        if self.transform is not None:
            image = self.transform(torch.from_numpy(image))
            label = self.transform(torch.from_numpy(label))

        sample = {'image': image, 'label': label}
        return sample
transform = transforms.Compose([transforms.ToTensor()])
DFU_Dataset = DFU_Dataset(root_dir = '/Users/sidraaleem/Documents/code/DFU/Labelled_test_images',
csv = '/Users/sidraaleem/Documents/code/DFU/Labelled_data_ground_truth.csv',
transform = transform
)
Here I am trying to check whether the image and label have been converted to tensors:
for i in range(len(DFU_Dataset)):
sample = DFU_Dataset[i]
print(sample)
However, I am having the below error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-232-2d085f7b0abc> in <module>
1 for i in range(len(DFU_Dataset)):
----> 2 sample = DFU_Dataset[i]
3 print(sample)
<ipython-input-229-31f45f491e1e> in __getitem__(self, idx)
18 # # Transform
19 if self.transform is not None:
---> 20 image = self.transform(torch.from_numpy(image))
21 label = self.transform(torch.from_numpy(label))
22 sample = {'image': image, 'label': label}
~/opt/anaconda3/lib/python3.8/site-packages/torchvision/transforms/transforms.py in __call__(self, img)
65 def __call__(self, img):
66 for t in self.transforms:
---> 67 img = t(img)
68 return img
69
~/opt/anaconda3/lib/python3.8/site-packages/torchvision/transforms/transforms.py in __call__(self, pic)
102 Tensor: Converted image.
103 """
--> 104 return F.to_tensor(pic)
105
106 def __repr__(self):
~/opt/anaconda3/lib/python3.8/site-packages/torchvision/transforms/functional.py in to_tensor(pic)
62 """
63 if not(F_pil._is_pil_image(pic) or _is_numpy(pic)):
---> 64 raise TypeError('pic should be PIL Image or ndarray. Got {}'.format(type(pic)))
65
66 if _is_numpy(pic) and not _is_numpy_image(pic):
TypeError: pic should be PIL Image or ndarray. Got <class 'torch.Tensor'>
|
st31177
|
Some transforms expect to be passed PIL Images and some expect tensors. There are utility methods to convert between the two in case you have one format and need the other, e.g.:
ToPILImage
ToTensor
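Applied to the question’s code, a minimal sketch (the shape is made up): pass the numpy array directly to ToTensor instead of converting it with torch.from_numpy first.
import numpy as np
from torchvision import transforms

to_tensor = transforms.ToTensor()
image_np = np.zeros((224, 224, 3), dtype=np.uint8)  # hypothetical HxWxC uint8 image
image_t = to_tensor(image_np)                       # -> float tensor of shape [3, 224, 224]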
|
st31178
|
I am trying to understand the implementation of a Siamese Network in PyTorch. In particular, I am confused as to why we are able to run two images through the same model twice. In examples such as these:
How to create a Siamese network vision
I’m trying to send 2 images through a siamese network. It looks like it’s as easy as writing a for-loop, calling forward for each leg of the siamese net. Is this correct? I’ve written a baby siamese net below:
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import pdb
class BabyNet(torch.nn.Module):
def __init__(self):
super(BabyNet, self).__init__()
self.conv1 = nn.Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), paddi…
Setting up a Siamese Network model
I’m trying my hands on Siamese Network models. After searching for some examples, this seems to be the common way to set up the model
class SiameseNetwork(nn.Module):
def __init__(self, core_model):
super(SiameseNetwork, self).__init__()
# Define layers
self.layer1 = ...
self.layer2 = ...
...
def forward_one(input):
X = self.layer1(input)
X = self.layer2(X)
...
return X
def forward(self, input1, input2):
output1 = self.layers(…
it seems like the default way to do this in PyTorch is something like (for example):
def forward(self, input1, input2):
    out1 = self.conv(input1)
    out2 = self.conv(input2)
    return self.fc(torch.cat((out1, out2), dim=1))
My question is: why can we run two input images through the same layers twice? I understand that in a Siamese network both legs should share the same parameters, but in torch aren’t the local gradients at each layer computed during the forward pass? Wouldn’t running the second image through the network remove the local gradients from the first image and ruin the backpropagation step?
Clearly this isn’t the case, but I’d love to understand why.
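For what it’s worth, a minimal sketch (not from the linked threads) of the scenario being asked about: two forward passes through the same layer create two separate nodes in the autograd graph, and backward() accumulates the gradient contributions from both inputs into the shared weight.
import torch
import torch.nn as nn

layer = nn.Linear(4, 2)                        # shared "leg" of the Siamese net
x1, x2 = torch.randn(1, 4), torch.randn(1, 4)
out = layer(x1).sum() + layer(x2).sum()
out.backward()
print(layer.weight.grad.shape)                 # one gradient tensor, with contributions from both passes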
|
st31179
|
Hi,
I am getting the following error when running the GAT operator, specifically in the “loss = train(…)” line:
[screenshot: error message, 1011×165]
This is what train does:
[screenshot: the train function, 869×240]
Full error:
[screenshot: full traceback (part 1), 1008×741]
[screenshot: full traceback (part 2), 1018×736]
Could you please help me solve this error? Thank you!
|
st31180
|
The error is raised in the src.expand_as(other) operation, since the shape of other cannot be used to expand src, so you would need to check the input shapes for this particular operation and make sure they match the expected shapes.
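As a minimal sketch of that shape requirement (tensors are made up): src can only be expanded along singleton dimensions.
import torch

src = torch.randn(3, 1)
print(src.expand_as(torch.randn(3, 4)).shape)  # works: the singleton dim 1 is expanded to 4
# src.expand_as(torch.randn(5, 4))             # would raise a RuntimeError: dim 0 (size 3) cannot become 5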
PS: you can post code snippets by wrapping them into three backticks ```, which would make debugging easier and would also allow the forum search to index the code.
|
st31181
|
Hi,
I am using the pytorch-geometric package to implement a GCN in PyTorch.
The simplest example on the pytorch-geometric official homepage is a bit confusing to me, so I’m asking a question.
(Introduction by Example — pytorch_geometric 1.7.0 documentation)
import torch
from torch_geometric.data import Data
edge_index = torch.tensor([[0, 1, 1, 2],
[1, 0, 2, 1]], dtype=torch.long)
x = torch.tensor([[-1], [0], [1]], dtype=torch.float)
data = Data(x=x, edge_index=edge_index)
print(data)
→ Data(edge_index=[2, 4], x=[3, 1])
In the code above, the [[-1], [0], [1]] list is considered to be the embedded feature vectors of the nodes, but it seems that it is not specified which node each feature vector belongs to. Since it is a list, the only information we have is the values and their indices. So, does the index become the node name?
That is, since ‘0’, ‘1’, ‘2’ appearing in edge_index are node names, is the feature vector of node ‘0’=[-1], the feature vector of node ‘1’=[0], and the feature vector of node ‘2’=[1]?
|
st31182
|
python -m torch.utils.collect_env
the output:
Collecting environment information…
PyTorch version: 1.6.0
Is debug build: No
CUDA used to build PyTorch: Could not collect
OS: Microsoft Windows 10 Pro
GCC version: Could not collect
CMake version: Could not collect
Python version: 3.7
Is CUDA available: No
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA GeForce 930MX
Nvidia driver version: 465.89
cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.1\bin\cudnn_ops_train64_8.dll
Versions of relevant libraries:
[pip3] numpy==1.18.5
[pip3] numpydoc==0.9.1
[pip3] torch==1.8.1
[pip3] torch-nightly==1.0.0.dev20190325
[pip3] torchaudio==0.6.0
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.9.1
[pip3] torchvision==0.9.1
[conda] _pytorch_select 0.1 cpu_0
[conda] _tflow_select 2.3.0 mkl
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.1.1 heb2d755_7 conda-forge
[conda] libblas 3.8.0 14_mkl conda-forge
[conda] libcblas 3.8.0 14_mkl conda-forge
[conda] liblapack 3.8.0 14_mkl conda-forge
[conda] libmklml 2019.0.5 haa95532_0
[conda] mkl 2019.4 245
[conda] mkl-service 2.3.0 py37h196d8e1_0
[conda] mkl_fft 1.3.0 py37h46781fe_0
[conda] mkl_random 1.0.2 py37h343c172_0
[conda] numpy 1.18.1 py37hc71023c_0 conda-forge
[conda] numpy-base 1.18.5 py37hc3f5095_0
[conda] numpydoc 0.9.1 py_0
[conda] pytorch 1.6.0 cpu_py37h538a6d7_0
[conda] tensorflow 2.1.0 mkl_py37ha977152_0
[conda] tensorflow-base 2.1.0 mkl_py37h230818c_0
[conda] torch 1.8.1 pypi_0 pypi
[conda] torch-nightly 1.0.0.dev20190325 pypi_0 pypi
[conda] torchaudio 0.6.0 py37 pytorch
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchtext 0.9.1 pypi_0 pypi
[conda] torchvision 0.9.1 pypi_0 pypi
nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Tue_Sep_15_19:12:04_Pacific_Daylight_Time_2020
Cuda compilation tools, release 11.1, V11.1.74
Build cuda_11.1.relgpu_drvr455TC455_06.29069683_0
nvidia-smi.exe
Thu Jun 3 14:37:38 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 465.89 Driver Version: 465.89 CUDA Version: 11.3 |
|-------------------------------+----------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce … WDDM | 00000000:01:00.0 Off | N/A |
| N/A 44C P8 N/A / N/A | 37MiB / 2048MiB | 1% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
|
st31183
|
Hi!
I’m trying to understand the MultiheadAttention module (MultiheadAttention — PyTorch 1.8.1 documentation) and whether I can use it to compute self-attention as can be done with the Keras implementation (MultiHeadAttention layer).
With the Keras implementation I’m able to run self-attention over a 1D vector the following way:
import tensorflow as tf
layer = tf.keras.layers.MultiHeadAttention(num_heads=2, key_dim=2)
input_tensor = tf.keras.Input(shape=[8, 16])
output_tensor = layer(input_tensor, input_tensor)
print(output_tensor.shape)
(None, 8, 16)
I’ve tried to do the same with the PyTorch implementation but couldn’t make it work, probably because of my lacking knowledge of the Transformer implementation.
import torch
import torch.nn as nn
multihead_attn = nn.MultiheadAttention(16, 8)
input_tensor = torch.zeros(8,60)
multihead_attn(input_tensor,input_tensor,input_tensor)
With error
-> 4624 tgt_len, bsz, embed_dim = query.size()
4625 assert embed_dim == embed_dim_to_check
4626 # allow MHA to have different sizes for the feature dimension
ValueError: not enough values to unpack (expected 3, got 2)
It seems the Keras layer does some work under the hood to make it easier for the user and performs self-attention when both query and value are the same. Is there a way to do the same with PyTorch?
|
st31184
|
Do you know, what happens under the hood for 1D inputs in Keras?
I guess you could mimic this behavior e.g. by using unsqueeze and/or expand on the input tensors, once you know how 1D tensors should be handled.
|
st31185
|
Hi, thanks for the reply.
I haven’t found any particular code in the Keras call function (compute_attention) that does some under-the-hood 1D transformation (at least none that I can identify). This goes a bit beyond my understanding; I haven’t been able to find any 1D attention examples. Do you know if issues in the PyTorch repo can be used to ask how to do calculations like this?
|
st31186
|
I don’t think the GitHub repository is the right place to ask questions about the usage of modules, as it’s intended to create issues in case the framework itself encounters a bug.
For your use case: since you cannot see what Keras is doing in the background, you could try to compare the results between Keras and PyTorch by adding the missing dimensions.
I.e. the PyTorch layer expects e.g. query to have the shape:
query: (L, N, E), where L is the target sequence length, N is the batch size, and E is the embedding dimension.
Based on this, one dimension is missing from your input (could it be the batch dimension?).
If so, use input_tensor = input_tensor.unsqueeze(1) and pass it to the module.
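A minimal sketch of that suggestion, assuming the missing dimension is indeed the batch dimension and using embed_dim=16 with 8 heads:
import torch
import torch.nn as nn

mha = nn.MultiheadAttention(embed_dim=16, num_heads=8)
x = torch.zeros(8, 16).unsqueeze(1)   # (L, E) -> (L, N=1, E)
out, attn_weights = mha(x, x, x)      # self-attention: query = key = value
print(out.shape)                      # torch.Size([8, 1, 16])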
|
st31187
|
I think I found the solution: I can use TransformerEncoderLayer (TransformerEncoderLayer — PyTorch 1.8.1 documentation) to perform self-attention myself.
|
st31188
|
Hi,
I need some advice for the following. I have a text classification problem where I’m training for 2 labels (let’s call them A and B). The classes for A and B differ: A can be 0-4 and B can be 0-8 (A can be cast to 0-8, I guess).
How would I go about this problem? I can use a one-hot for each label, but then I don’t know how to calculate the loss.
|
st31189
|
If I understand you correctly, you have two classification tasks (so it would be multi-task not multilabel in the usual lingo).
In that case, you can just output 14 logits and then group them for the loss if you want.
More concretely, you could output a batch x 14 vector of scores (so no final activations) and organize your labels into two batch-sized tensors.
Then compute F.cross_entropy(output[:, :5], target_a) + F.cross_entropy(output[:, 5:], target_b).
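For concreteness, a minimal sketch with made-up tensors (5 logits for label A, 9 logits for label B):
import torch
import torch.nn.functional as F

batch_size = 4
output = torch.randn(batch_size, 14, requires_grad=True)
target_a = torch.randint(0, 5, (batch_size,))   # classes 0-4
target_b = torch.randint(0, 9, (batch_size,))   # classes 0-8
loss = F.cross_entropy(output[:, :5], target_a) + F.cross_entropy(output[:, 5:], target_b)
loss.backward()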
Best regards
Thomas
|
st31190
|
Thanks, I’ll give it a try:
concat the one-hots as input
aggregate the losses via the grouped outputs
|
st31191
|
I wonder why we don’t use the model.eval() call in the training_step method of the LightningModule.
def training_step(self, batch, batch_idx):
    x, y = batch
    pred = self(x)  # but our model is in training mode now …
|
st31192
|
Solved by tom in post #2
There are two parts to this.
training_step is about training, so it seems natural that the model is in training mode.
Lightning automatically sets the model to training mode for training_step and to eval for validation.
Best regards
Thomas
|
st31193
|
There are two parts to this.
training_step is about training, so it seems natural that the model is in training mode.
Lightning automatically sets the model to training mode for training_step and to eval for validation.
Best regards
Thomas
|
st31194
|
Thanks for clarification, @tom !
Does Lightning do the same for the “on_train_epoch_end” callback, or do I have to call model.eval() manually?
Which of the pre-defined callbacks come with model.eval() “under the hood”?
|
st31195
|
tom:
Lightning automatically sets the model to training for training_step and to eval for validation.
is it correct that the training loss is calculated with the model in training mode?
|
st31196
|
andreys42:
Does Lightning do the same for the “on_train_epoch_end” callback, or do I have to call model.eval() manually?
I don’t know; you could find out yourself with assert not model.training (for eval mode) or similar.
Which of the pre-defined callbacks come with model.eval() “under the hood”?
The rule of thumb is that training is set up with train and validation/testing with eval, but if you want this in more detail, I would check in the Lightning discussions.
andreys42:
is it correct that the training loss is calculated with the model in training mode?
I think the training loss is just computed during training, yes.
|
st31197
|
I guess the PL authors took care of switching between eval/train mode within the pre-defined callbacks… But the problem is that when I try to predict test data in the “on_fit_end” callback without using model.eval(), it gives me a different result than predicting outside the training routine (and of course using model.eval() in advance). That’s why I wonder if the ‘on_fit_end’ callback comes with model.eval()… but I guess I chose an inappropriate forum.
|
st31198
|
When a tensor is multiplied/divided by 1, or has 0 added/subtracted, we expect it to remain the same. So the most intuitive implementation appears to be:
def my_add(a, b):
    if is_scalar(b) and b == 0:
        return a
    if is_scalar(a) and a == 0:
        return b
    return a + b
for some type-checking function is_scalar.
In other words, we can short-circuit these operations instead of performing element-wise addition by 0 on every element of a tensor. However, this is not the case in Numpy or PyTorch, which can be verified by the following script:
import torch
from timeit import timeit

n, c, h, w = 64, 3, 128, 128
arr = torch.rand(n, c, h, w, device="cuda")
t0 = timeit("arr", globals=globals(), number=1000)
t1 = timeit("arr.clone()", globals=globals(), number=1000)
t2 = timeit("arr + 0", globals=globals(), number=1000)
# t0 < t1 < t2
This can be especially time-consuming when a user inadvertently uses the following code as a basic building block in a network that deals with really large data batches, thinking it’s just an innocent summation:
def func(x: torch.Tensor):
    y = 0
    for i in range(n):
        y += do_something(x)
    return y
I see no reason against such short-circuiting (operators, autograd graphs, etc.) but I couldn’t find any discussion on this topic at all. Perhaps the improvement is too small for most users? I’m personally working on edge computing so every bit of time matters a lot.
|
st31199
|
I try to use rescaling weight in torch.nn.functional.cross_entropy, and find the result very hard to understand. Here is the code:
>>> import torch
>>> import torch.nn.functional as F
>>> pred = torch.tensor([[[0.8054, 0.6918],
[0.8704, 0.1927],
[0.4033, 0.3574],
[0.6289, 0.2227],
[0.0425, 0.8065]],
[[0.4279, 0.4677],
[0.4958, 0.3767],
[0.3411, 0.9530],
[0.4712, 0.7330],
[0.9196, 0.8033]]]).float() # [2, 5, 2], 5-way classification
>>> label = torch.tensor([[2, 4], [1, 3]]).long() # [2, 2]
>>> weight = torch.tensor([0.0886, 0.2397, 0.1851, 0.2225, 0.2640]).float() # weight.sum() == 1
>>> loss1 = F.cross_entropy(pred, label, reduction='mean', weight=weight)
>>> loss1
tensor(1.5594)
>>> loss2 = F.cross_entropy(pred, label, reduction='none', weight=weight).sum() / label.numel()
>>> loss2
tensor(0.3553)
>>> loss3 = F.cross_entropy(pred, label, reduction='sum', weight=weight) / label.numel()
>>> loss3
tensor(0.3553)
If I understand correctly, loss1 should be the same as loss2. Obviously the reduction does not work the way I assumed. So I wonder: how does F.cross_entropy perform the mean reduction when weight is provided? Thanks!
|
st31200
|
Solved by ptrblck in post #2
nn.CrossEntropyLoss normalizes with the used weights, so you would have to change the loss2 calculation to:
loss2 = F.cross_entropy(pred, label, reduction='none', weight=weight).sum() / weight[label].sum()
loss2
> tensor(1.5594)
This post also describes it using another example.
|
st31201
|
nn.CrossEntropyLoss normalizes with the used weights, so you would have to change the loss2 calculation to:
loss2 = F.cross_entropy(pred, label, reduction='none', weight=weight).sum() / weight[label].sum()
loss2
> tensor(1.5594)
This post also describes it using another example.
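In other words, with weight and reduction='mean' the per-sample losses are normalized by the sum of the applied class weights rather than by the number of samples:
loss = sum_i(weight[target_i] * loss_i) / sum_i(weight[target_i])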
|
st31202
|
How can I check the dimensions of an HDF file, and is it still an HDF file if it is passed again to another HDF dataset?
I have an HDF file which is split into a trainset and a testset as shown below, but when I try to check the dimensions of the HDF it says HDF5 has no attribute “keys”.
trainset = Dataset4DFromHDF5(args.data,
labels=(ref_type,),
device=loader_device,
start=args.intervals[0],
end=args.intervals[1],
crop=args.crop,
augment=args.img_augm,
augment_freq=args.freq_augm)
testset = Dataset4DFromHDF5(args.data,
labels=(ref_type,),
device=loader_device,
start=args.intervals[2],
end=args.intervals[3],
crop=args.crop,
augment=False,
augment_freq=False)
list(trainset.keys())
AttributeError: ‘DatasetDeepPhysHDF5’ object has no attribute ‘keys’
|
st31203
|
I don’t know how DatasetDeepPhysHDF5 is implemented, but would assume that it might be a custom torch.utils.data.Dataset, which wraps the HDF dataset internally and is thus not implementing the keys() operation. You could check it via print(type(trainset)) and check which methods are implemented for this class.
If my guess is correct, you might be able to access the internal HDF dataset and call keys() on it directly.
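If the HDF5 file path is accessible (args.data above), a minimal sketch using h5py directly to inspect the file’s structure:
import h5py

with h5py.File(args.data, "r") as f:    # args.data is the path passed to the dataset above
    print(list(f.keys()))               # top-level groups/datasets
    # print(f["<some_dataset>"].shape)  # hypothetical: check a specific dataset's dimensions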
|
st31204
|
What is the difference between epoch and iterations?
In this 3D multi-class segmentation paper in section 3 Experiments & Results the authors mention “…Training of 20,000 iterations.” Does it mean they ran 20,000 subvolume patches?
|
st31205
|
Solved by ptrblck in post #2
An epoch usually refers to using the entire dataset once, while an iteration usually refers to a training step using a single batch.
In case you know the batch size and the number of samples in the Dataset, you could transform “Training of 20,000 iterations” into epochs.
|
st31206
|
An epoch usually refers to using the entire dataset once, while an iteration usually refers to a training step using a single batch.
In case you know the batch size and the number of samples in the Dataset, you could transform “Training of 20,000 iterations” into epochs.
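A hedged example with made-up numbers: with a batch size of 4 and 1,000 training samples, 20,000 iterations would correspond to roughly 80 epochs.
iterations, batch_size, num_samples = 20_000, 4, 1_000   # made-up values
epochs = iterations * batch_size / num_samples
print(epochs)  # 80.0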
|
st31207
|
I have an issue with TensorBoard in PyTorch: it does not show new updates as training goes on to further epochs.
So I decided to check its version, but from torch.utils.tensorboard import version does not exist in PyTorch, so I was wondering how I can check which TensorBoard version is available.
|
st31208
|
Try typing which tensorboard in your terminal. It should exist if you installed it with pip as mentioned in the TensorBoard README (although the documentation doesn’t tell you that you can now launch tensorboard without doing anything else).
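Alternatively (assuming the standalone tensorboard package is installed), its version can be printed from Python:
import tensorboard
print(tensorboard.__version__)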
|
st31209
|
I am currently trying to implement a UNet-like architecture.
The tensors I am concatenating have the same shape and are both of class torch.Tensor.
However, the operation
x = torch.cat((x, enc_out4), dim=1)
returns a tuple of length 1 containing the concatenated tensor.
I am using torch 1.8.1.
Help would be much appreciated.
Code of model:
class TestModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.num_features = 80
        self.encoder1 = nn.Sequential(
            nn.Conv3d(in_channels=2, out_channels=self.num_features, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(negative_slope=0.2)
        )
        self.encoder2 = nn.Sequential(
            nn.Conv3d(in_channels=self.num_features, out_channels=self.num_features*2, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm3d(num_features=self.num_features*2),
            nn.LeakyReLU(negative_slope=0.2)
        )
        self.encoder3 = nn.Sequential(
            nn.Conv3d(in_channels=self.num_features*2, out_channels=self.num_features*4, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm3d(num_features=self.num_features*4),
            nn.LeakyReLU(negative_slope=0.2)
        )
        self.encoder4 = nn.Sequential(
            nn.Conv3d(in_channels=self.num_features*4, out_channels=self.num_features*8, kernel_size=4, stride=1),
            nn.BatchNorm3d(num_features=self.num_features*8),
            nn.LeakyReLU(negative_slope=0.2)
        )
        self.bottleneck = nn.Sequential(
            nn.Linear(in_features=640, out_features=640),
            nn.ReLU(),
            nn.Linear(in_features=640, out_features=640),
            nn.ReLU()
        )
        self.decoder1 = nn.Sequential(
            nn.ConvTranspose3d(in_channels=self.num_features*8*2, out_channels=self.num_features*4, kernel_size=4, stride=1),
            nn.BatchNorm3d(num_features=self.num_features*4),
            nn.ReLU()
        )
        self.decoder2 = nn.Sequential(
            nn.ConvTranspose3d(in_channels=self.num_features*4*2, out_channels=self.num_features*2, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm3d(num_features=self.num_features*2),
            nn.ReLU()
        )
        self.decoder3 = nn.Sequential(
            nn.ConvTranspose3d(in_channels=self.num_features*2*2, out_channels=self.num_features, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm3d(num_features=self.num_features),
            nn.ReLU()
        )
        self.decoder4 = nn.Sequential(
            nn.ConvTranspose3d(in_channels=self.num_features*2, out_channels=1, kernel_size=4, stride=2, padding=1)
        )

    def forward(self, x):
        b = x.shape[0]
        # Encode
        enc_out1 = self.encoder1(x)
        enc_out2 = self.encoder2(enc_out1)
        enc_out3 = self.encoder3(enc_out2)
        enc_out4 = self.encoder4(enc_out3)
        x = enc_out4.view(b, -1)
        x = self.bottleneck(x)
        x = x.view(x.shape[0], x.shape[1], 1, 1, 1)
        # Decode
        #print(type(x), type(enc_out4), x.shape==enc_out4.shape)
        x = torch.cat((x, enc_out4), dim=1),
        #print(type(x), len(x), x[0].shape, type(x[0]))
        x = x[0]
        x = self.decoder1(x)
        x = torch.cat((x, enc_out3), dim=1),
        x = x[0]
        x = self.decoder2(x)
        x = torch.cat((x, enc_out2), dim=1),
        x = x[0]
        x = self.decoder3(x)
        x = torch.cat((x, enc_out1), dim=1),
        x = x[0]
        x = self.decoder4(x)
        x = torch.squeeze(x, dim=1)
        x = torch.log(torch.add(x, 1.0))
        return x
|
st31210
|
I cannot reproduce the issue on 1.8.1 and get a tensor in the expected shape:
x = torch.randn(1, 3, 24, 24)
enc_out4 = torch.randn(1, 3, 24, 24)
x = torch.cat((x, enc_out4), dim=1)
print(x.shape)
> torch.Size([1, 6, 24, 24])
print(type(x))
> <class 'torch.Tensor'>
EDIT: I just re-checked your code and you are indeed manually creating the tuple by adding a comma at the end in:
x = torch.cat((x, enc_out4), dim=1),
|
st31211
|
Hi there, I want to train DeepLabV3 on my own dataset with 4-channel images, but I didn’t find any PyTorch implementation of DeepLabV3 where I could change the parameters and the number of input channels of the model to fit my (4-channel) images.
How can I modify DeepLabV3 to adapt it to my dataset?
|
st31212
|
Solved by ptrblck in post #2
torchvision provides deeplabv3 implementations here and you could manipulate the first conv layer as seen here:
model = models.segmentation.deeplabv3_resnet50(pretrained=False, progress=True, num_classes=21, aux_loss=None)
x = torch.randn(2, 3, 224, 224)
out = model(x)
model.backbone.conv1 = nn.C…
|
st31213
|
torchvision provides deeplabv3 implementations here and you could manipulate the first conv layer as seen here:
model = models.segmentation.deeplabv3_resnet50(pretrained=False, progress=True, num_classes=21, aux_loss=None)
x = torch.randn(2, 3, 224, 224)
out = model(x)
model.backbone.conv1 = nn.Conv2d(4, 64, 7, 2, 3, bias=False)
x = torch.randn(2, 4, 224, 224)
out = model(x)
|
st31214
|
I am working on SqueezeNet and I want to replace a layer without changing the dimensions of any other layer in the pre-trained network and then finetune it.
net = SqueezeNet()
state_dict = torch.load('../pretrainedmodels/squeezenet.pth')
net.load_state_dict(state_dict, strict=True)
print(net)
for name, child in net.named_children():
    for x, y in child.named_children():
        print(name, x)
Output is
SqueezeNet(
(features): Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(2, 2))
(1): ReLU(inplace=True)
(2): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=True)
(3): Fire(
(squeeze): Conv2d(64, 16, kernel_size=(1, 1), stride=(1, 1))
(squeeze_activation): ReLU(inplace=True)
(expand1x1): Conv2d(16, 64, kernel_size=(1, 1), stride=(1, 1))
(expand1x1_activation): ReLU(inplace=True)
(expand3x3): Conv2d(16, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(expand3x3_activation): ReLU(inplace=True)
)
(4): Fire(
(squeeze): Conv2d(128, 16, kernel_size=(1, 1), stride=(1, 1))
(squeeze_activation): ReLU(inplace=True)
(expand1x1): Conv2d(16, 64, kernel_size=(1, 1), stride=(1, 1))
(expand1x1_activation): ReLU(inplace=True)
(expand3x3): Conv2d(16, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(expand3x3_activation): ReLU(inplace=True)
)
(5): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=True)
(6): Fire(
(squeeze): Conv2d(128, 32, kernel_size=(1, 1), stride=(1, 1))
(squeeze_activation): ReLU(inplace=True)
(expand1x1): Conv2d(32, 128, kernel_size=(1, 1), stride=(1, 1))
(expand1x1_activation): ReLU(inplace=True)
(expand3x3): Conv2d(32, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(expand3x3_activation): ReLU(inplace=True)
)
(7): Fire(
(squeeze): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
(squeeze_activation): ReLU(inplace=True)
(expand1x1): Conv2d(32, 128, kernel_size=(1, 1), stride=(1, 1))
(expand1x1_activation): ReLU(inplace=True)
(expand3x3): Conv2d(32, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(expand3x3_activation): ReLU(inplace=True)
)
(8): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=True)
(9): Fire(
(squeeze): Conv2d(256, 48, kernel_size=(1, 1), stride=(1, 1))
(squeeze_activation): ReLU(inplace=True)
(expand1x1): Conv2d(48, 192, kernel_size=(1, 1), stride=(1, 1))
(expand1x1_activation): ReLU(inplace=True)
(expand3x3): Conv2d(48, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(expand3x3_activation): ReLU(inplace=True)
)
(10): Fire(
(squeeze): Conv2d(384, 48, kernel_size=(1, 1), stride=(1, 1))
(squeeze_activation): ReLU(inplace=True)
(expand1x1): Conv2d(48, 192, kernel_size=(1, 1), stride=(1, 1))
(expand1x1_activation): ReLU(inplace=True)
(expand3x3): Conv2d(48, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(expand3x3_activation): ReLU(inplace=True)
)
(11): Fire(
(squeeze): Conv2d(384, 64, kernel_size=(1, 1), stride=(1, 1))
(squeeze_activation): ReLU(inplace=True)
(expand1x1): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1))
(expand1x1_activation): ReLU(inplace=True)
(expand3x3): Conv2d(64, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(expand3x3_activation): ReLU(inplace=True)
)
(12): Fire(
(squeeze): Conv2d(512, 64, kernel_size=(1, 1), stride=(1, 1))
(squeeze_activation): ReLU(inplace=True)
(expand1x1): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1))
(expand1x1_activation): ReLU(inplace=True)
(expand3x3): Conv2d(64, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(expand3x3_activation): ReLU(inplace=True)
)
)
(classifier): Sequential(
(0): Dropout(p=0.5, inplace=False)
(1): Conv2d(512, 1000, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU(inplace=True)
(3): AdaptiveAvgPool2d(output_size=(1, 1))
)
)
features 0
features 1
features 2
features 3
features 4
features 5
features 6
features 7
features 8
features 9
features 10
features 11
features 12
classifier 0
classifier 1
classifier 2
classifier 3
I want to change the fourth layer, the (3) Fire module, to some FireB module (user-defined, with the same input and output dimensions). How do I do this transformation easily in the pretrained network?
|
st31215
|
Solved by ptrblck in post #2
Could you try to assign your new layer to the one you would like to replace?
net = models.SqueezeNet()
net.features[3] = nn.Conv2d(96, 128, 1, 1) # Replace this with your custom layer
|
st31216
|
Could you try to assign your new layer to the one you would like to replace?
net = models.SqueezeNet()
net.features[3] = nn.Conv2d(96, 128, 1, 1) # Replace this with your custom layer
|
st31217
|
Hi,
I’m trying to change some modules.
I know their relative names (model.layer.1.conv …), and I have target modules that I want to overwrite them with; they are saved as a dict {name: module}.
I know that I can change a model’s module by changing its attribute (i.e. model.layer[1].conv = nn.Conv2d(3, 1, 1, 1)), but calling getattr won’t do what I want:
names = ['layer', 0, 'conv']
for name in names:
    try:
        module = model[0]
    except:
        module = getattr(model, name)
The code isn’t complete, but you can see that I’m trying to use getattr to get the attribute of the wanted layer and overwrite it with a different layer.
However, it seems that rebinding the object returned by getattr does not change the model, so assigning module = nn.Conv2d(3, 1, 1, 1) won’t change the network.
Is there any way to do this? I have several modules to change and I can’t do them all by hand.
Help much appreciated! Thanks
|
st31218
|
To assign a new module, you could use setattr as an alternative to the direct assignment.
Assigning a new module to the object returned via getattr won’t work, as you already explained.
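A minimal sketch of that idea (the helper name is made up): walk down the dotted name with getattr and assign the final attribute with setattr.
import torch.nn as nn

def replace_module(model: nn.Module, dotted_name: str, new_module: nn.Module) -> None:
    # e.g. dotted_name = "layer.0.conv"; getattr also resolves Sequential children ("0", "1", ...)
    parts = dotted_name.split(".")
    parent = model
    for p in parts[:-1]:
        parent = getattr(parent, p)
    setattr(parent, parts[-1], new_module)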
|
st31219
|
Is it possible to replace one layer with nn.Sequential(...) containing multiple layers?
Example:
net = models.SqueezeNet()
net.features[3] = nn.Sequential(
nn.Linear(...),
nn.ReLU(...),
nn.Dropout(...)
)
|
st31220
|
Yes, that is possible, and the nn.Sequential container with all its internal layers will be called as the replacement layer.
|
st31221
|
What’s the best way to accomplish this when iterating over modules?
for module in model.modules():
    classname = module.__class__.__name__
    if 'Linear' in classname:
        module = nn.Sequential(...)  # replacing Linear with multiple layers defined in Sequential
Is this the correct approach to modify model on-the-fly?
Edit: Assigning to module as shown didn’t work, probably it creates a copy to iterate over?
|
st31222
|
I think you would have to use setattr with the module names to assign the new nn.Sequential module to the attribute that was used for the previous linear layer.
|
st31223
|
Hi,
Exponential Moving Average (EMA) is an important feature in state-of-the-art research; in TensorFlow it is already implemented as tf.train.ExponentialMovingAverage. I wonder why the PyTorch team has not released an official version of EMA.
In other related questions, no expert has confirmed that this is the correct implementation:
Exponential Moving Average in PyTorch, for weights and gradients nlp
Do we need to apply exponential moving average to weights during training when we use Adam (or other optimizers)?
My EMA is defined as:
class EMA(object):
    def __init__(self, mu):
        self.mu = mu
        self.shadow = {}

    def register(self, name, val):
        self.shadow[name] = val.clone()

    def __call__(self, name, x):
        assert name in self.shadow
        new_average = (1.0 - self.mu) * x + self.mu * self.shadow[name]
        self.shadow[name] = new_average.clone()
        return new_average
My code is like…
How to apply exponential moving average decay for variables?
I am reading following paper. And it uses EMA decay for variables.
BI-DIRECTIONAL ATTENTION FLOW FOR MACHINE COMPREHENSION
During training, the moving averages of all weights of the model are
maintained with the exponential decay rate of 0.999.
They use TensorFlow and I found the related code of EMA.
In PyTorch, how do I apply EMA to Variables? In TensorFlow, there is tf.train.ExponentialMovingAverage class.
https://www.tensorflow.org/versions/r0.12/api_docs/python/train/moving_avera…
Or in this repo, the issue said there was some problem with the code.
|
st31224
|
Let’s say I have a pretrained autoencoder, and I just need the pretrained encoder as a part of a new model.
class autoencoder(nn.Module):
    def __init__(self):
        super(autoencoder, self).__init__()
        self.encoder = nn.Sequential(
            nn.Linear(28 * 28, 128),
            nn.ReLU(True),
            nn.Linear(128, 64),
            nn.ReLU(True), nn.Linear(64, 12), nn.ReLU(True), nn.Linear(12, 3))
        self.decoder = nn.Sequential(
            nn.Linear(3, 12),
            nn.ReLU(True),
            nn.Linear(12, 64),
            nn.ReLU(True),
            nn.Linear(64, 128),
            nn.ReLU(True), nn.Linear(128, 28 * 28), nn.Tanh())

    def forward(self, x):
        x = self.encoder(x)
        x = self.decoder(x)
        return x

    def get_encoder(self):
        return self.encoder


class autoencoder2(nn.Module):
    def __init__(self, encoder):
        super(autoencoder2, self).__init__()
        self.encoder = encoder
        self.decoder = nn.Sequential(
            nn.Linear(3, 12),
            nn.ReLU(True),
            nn.Linear(12, 64),
            nn.ReLU(True),
            nn.Linear(64, 128),
            nn.ReLU(True), nn.Linear(128, 28 * 28), nn.Tanh())

    def forward(self, x):
        x = self.encoder(x)
        x = self.decoder(x)
        return x

    def get_encoder(self):
        return self.encoder
pretrn_AE = autoencoder()
checkpoint = torch.load('AE.pt', map_location='cpu')
pretrn_AE.load_state_dict(checkpoint['model'])
pretrn_encoder = pretrn_AE.get_encoder()
new_AE = autoencoder2(pretrn_encoder)
While training new_AE, will this cause a warning or error saying that some trainable weights are not used (i.e. the decoder in pretrn_AE)?
To play it safe, I want to have a clean copy of the pretrained encoder to feed into new_AE, so that while training new_AE, it has nothing to do with pretrn_AE.
Any advice?
Thanks.
|
st31225
|
Your workflow looks generally alright, but you could also use copy.deepcopy to create a new instance of the internal encoder.
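A minimal sketch of that suggestion, reusing the names from the code above:
import copy

pretrn_encoder = copy.deepcopy(pretrn_AE.get_encoder())  # independent copy, detached from pretrn_AE
new_AE = autoencoder2(pretrn_encoder)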
|
st31226
|
Is it possible to randomize the augmentation in the Dataset class?
def __init__(self, subdict, num_labels, params=None, isTransform=None, isplot=None):
    """
    :param subdict: dictionary of 3D MR images e.g. ['img_sub'] = '/user/318_T1w.nii.gz'
    :param num_labels: number of segmentation labels
    """
    self.subdict = subdict
    self.num_labels = num_labels
    self.img_subs = subdict['img_subs']
    self.img_files = subdict['img_files']
    self.seg_subs = subdict['seg_subs']
    self.seg_files = subdict['seg_files']
    self.isTransfom = isTransform
    self.isplot = isplot
    self.params = params

def __getitem__(self, index):
    sub_name = self.img_subs[index]
    if self.isTransfom:
        img, seg = self.imaugment(imgnp, segnp)

def imaugment(self, X, Y):
    """
    Preprocess the tuple (image, mask) and then apply if selected:
    augmentation techniques adapted from Keras ImageDataGenerator
    elastic deformation
    """
    if Y is not None and X.shape != Y.shape:
        raise ValueError("image and mask should have the same size")
    if self.params["augmentation"][0] == True:
        X, Y = random_transform(X, Y, **self.params["random_deform"])
    if self.params["augmentation"][1] == True:
        X, Y = deform_pixel(X, Y, **self.params["e_deform_p"])
    if self.params["augmentation"][2] == True:
        X, Y = deform_grid(X, Y, **self.params["e_deform_g"])
    return X, Y
This way, I can only choose True/False, which turns the augmentation on or off for the entire dataset.
What I want is to augment some indices/samples and not others.
Thanks for the help.
|
st31227
|
You could use transforms.RandomApply to randomly apply the provided transformations.
|
st31228
|
Hello @ptrblck
I am using a python dictionary that has parameters for imaugment method in the data.Dataset class.
The params is as below:
params = {}
params["augmentation"] = [1, 1, 1] # , 1, 1]
params["only"] = None
params["e_deform_p"] = dict()
params["e_deform_g"] = dict()
params["random_deform"] = dict()
params["random_deform"]['height_shift_range'] = 0.1 # 0.1
params["random_deform"]['width_shift_range'] = 0.1 # 0.1
params["random_deform"]['depth_shift_range'] = None # ?
params["random_deform"]['rotation_range_alpha'] = 5
params["random_deform"]['rotation_range_beta'] = None
params["random_deform"]['rotation_range_gamma'] = None
params["random_deform"]["horizontal_flip"] = True
params["random_deform"]["vertical_flip"] = True
params["random_deform"]["z_flip"] = False
# Add elastic deformations
params["e_deform_p"]["alpha"] = 5
params["e_deform_p"]["sigma"] = 2
params["e_deform_g"]["sigma"] = 5 # 10
params["e_deform_g"]["points"] = 3
I pass the params to the dataset and imaugment method is performed on image and mask.
With RandomApply as you’ve mentioned:
transforms = transforms.RandomApply(params, p=0.3)
dataSet = Seg_aug(train_dict, num_labels=9, params=transforms, isTransform=True, isplot=True)
tl = DataLoader(dataSet, batch_size=10, shuffle=True, num_workers=1)
x, y, z = next(iter(tl))
I am getting the following error:
TypeError: 'RandomApply' object is not subscriptable
Is there a way to apply imaugment with params to p=0.5 (50%) of the images+masks?
|
st31229
|
RandomApply expects a sequence or nn.ModuleList of transformations as given in the docs, while params is a dict as you’ve mentioned, so it won’t be compatible.
In case you want to use this params dict I guess you could manually apply these transformations randomly in the __getitem__ of your Dataset by sampling a random number and then applying the corresponding transformation on the data.
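A minimal sketch of that approach (maybe_augment is a hypothetical helper you could call from __getitem__ with self.imaugment and the loaded image/mask):
import random

def maybe_augment(img, seg, augment_fn, p=0.5):
    # apply augment_fn to (img, seg) with probability p, otherwise return them unchanged
    if random.random() < p:
        return augment_fn(img, seg)
    return img, seg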
|
st31230
|
Hi, @ptrblck
I could manage to randomize the transforms/augmentations from dataset with a probability input.
Thanks for the help.
|
st31231
|
OS: Ubuntu 20.04.2 LTS
Python Version: 3.8.10
GPU: RTX A6000
Cuda Version: 11.1, V11.1.74
CuDNN Version: 8.0.5
magma-cuda111 2.5.2
PyTorch: v1.7.1-rc3
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(26): error: expected a ";"
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(27): error: variable "THC_API" is not a type name
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(27): error: variable "uint8_t" has already been defined
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(27): error: expected a ";"
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(29): error: variable "THC_API" is not a type name
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(29): error: namespace "at" has no actual member "StorageImpl"
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(29): error: expected a ";"
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(30): error: variable "THC_API" is not a type name
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(30): error: namespace "at" has no actual member "StorageImpl"
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(30): error: expected a ";"
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(31): error: variable "THC_API" is not a type name
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(31): error: namespace "at" has no actual member "StorageImpl"
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(31): error: expected a ";"
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(32): error: variable "THC_API" is not a type name
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(32): error: namespace "at" has no actual member "StorageImpl"
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(32): error: expected a ";"
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(34): error: variable "THC_API" is not a type name
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(34): error: namespace "at" has no actual member "StorageImpl"
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(34): error: expected a ";"
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(37): error: variable "THC_API" is not a type name
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(37): error: namespace "at" has no actual member "StorageImpl"
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(37): error: expected a ";"
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(41): error: this declaration has no storage class or type specifier
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(41): error: variable "THC_API" has already been defined
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(41): error: expected a ";"
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(42): error: this declaration has no storage class or type specifier
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(42): error: variable "THC_API" has already been defined
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(42): error: expected a ";"
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(43): error: this declaration has no storage class or type specifier
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(43): error: variable "THC_API" has already been defined
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(43): error: expected a ";"
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(45): error: this declaration has no storage class or type specifier
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(45): error: variable "THC_API" has already been defined
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(45): error: expected a ";"
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(46): error: this declaration has no storage class or type specifier
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(46): error: variable "THC_API" has already been defined
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(46): error: expected a ";"
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(48): error: this declaration has no storage class or type specifier
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(48): error: variable "THC_API" has already been defined
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(48): error: expected a ";"
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(50): error: this declaration has no storage class or type specifier
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(50): error: variable "THC_API" has already been defined
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(50): error: expected a ";"
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(22): error: variable "THC_API" is not a type name
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(22): error: "int8_t" has already been declared in the current scope
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(22): error: expected a ";"
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(23): error: this declaration has no storage class or type specifier
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(23): error: variable "THC_API" has already been defined
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(23): error: expected a ";"
/tmp/tmp/pytorch/aten/src/THC/generic/THCStorage.h(26): error: this declaration has no storage class or type specifier
Error limit reached.
100 errors detected in the compilation of "/tmp/tmp/pytorch/aten/src/ATen/native/cuda/DistanceKernel.cu".
Compilation terminated.
CMake Error at torch_cuda_generated_DistanceKernel.cu.o.Release.cmake:281 (message):
Error generating file
/tmp/tmp/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/./torch_cuda_generated_DistanceKernel.cu.o
[4006/4993] Building NVCC (Device) object caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/torch_cuda_generated_BinaryLogicalOpsKernels.cu.o
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "setup.py", line 760, in <module>
build_deps()
File "setup.py", line 310, in build_deps
build_caffe2(version=version,
File "/tmp/tmp/pytorch/tools/build_pytorch_libs.py", line 62, in build_caffe2
cmake.build(my_env)
File "/tmp/tmp/pytorch/tools/setup_helpers/cmake.py", line 345, in build
self.run(build_args, my_env)
File "/tmp/tmp/pytorch/tools/setup_helpers/cmake.py", line 141, in run
check_call(command, cwd=self.build_dir, env=env)
File "/home/oreo/anaconda3/envs/pytorch_1.7/lib/python3.8/subprocess.py", line 364, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '128']' returned non-zero exit status 1.
|
st31232
|
Could you update to a stable tag (you are currently using 1.7.1-rc3) or the current master and try to rebuild? I’m unsure if your current version had known issues during the build.
|
st31233
|
I tried to use the tag v1.7.1 and got the same error. Switching to current master compiles successfully but I in fact need PyTorch 1.7.
|
st31234
|
Unfortunately, I cannot reproduce the build issue and was able to rebuild v1.7.1 with CUDA11.1 just now.
|
st31235
|
In this code:
def rotation_points_single_angle_cuda(points, angle, axis=0):
    # points: [N, 3]
    rot_sin = np.sin(angle)
    rot_cos = np.cos(angle)
    if axis == 1:
        rot_mat_T = np.array(
            [[rot_cos, 0, -rot_sin], [0, 1, 0], [rot_sin, 0, rot_cos]],
        )
    elif axis == 2 or axis == -1:
        rot_mat_T = np.array(
            [[rot_cos, -rot_sin, 0], [rot_sin, rot_cos, 0], [0, 0, 1]],
        )
    elif axis == 0:
        rot_mat_T = np.array(
            [[1, 0, 0], [0, rot_cos, -rot_sin], [0, rot_sin, rot_cos]],
        )
    else:
        raise ValueError("axis should in range")
    points_ = torch.matmul(points, torch.from_numpy(rot_mat_T).float().cuda())
    return points_
It gives me an error that a value was changed in-place.
This is the error:
[W python_anomaly_mode.cpp:60] Warning: Error detected in MmBackward. Traceback of forward call that caused the error:
File "train.py", line 137, in <module>
train(args)
File "train.py", line 121, in train
batch = COR(infos, sample["idx"], pred_disp ,pipeline )
File "/notebooks/E2E/cor_interface.py", line 52, in COR
res, _ = pipeline(res_temp, infos[idx[i]])
File "/notebooks/cia/det3d/datasets/pipelines/compose.py", line 23, in __call__
res, info = t(res, info)
File "/notebooks/cia/det3d/datasets/pipelines/preprocess_v4.py", line 170, in __call__
gt_dict["gt_boxes"], points = prep.global_rotation(gt_dict["gt_boxes"], points,
File "/notebooks/cia/det3d/core/sampler/preprocess.py", line 826, in global_rotation
points[:, :3] = box_np_ops.rotation_points_single_angle_cuda(points[:, :3], noise_rotation, axis=2)
File "/notebooks/cia/det3d/core/bbox/box_np_ops.py", line 453, in rotation_points_single_angle_cuda
points_ = torch.matmul(points,torch.from_numpy(rot_mat_T).float().cuda())
(function print_stack)
Traceback (most recent call last):
File "train.py", line 137, in <module>
train(args)
File "train.py", line 129, in train
total_loss.backward()
File "/opt/conda/envs/cia/lib/python3.8/site-packages/torch/tensor.py", line 185, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/opt/conda/envs/cia/lib/python3.8/site-packages/torch/autograd/__init__.py", line 125, in backward
Variable._execution_engine.run_backward(
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [17045, 3]], which is output 0 of SliceBackward, is at version 4; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
|
st31236
|
Could you check the next operations applied on points_, which might manipulate it inplace?
|
st31237
|
def global_rotation(gt_boxes, points, rotation=np.pi / 4):
    if not isinstance(rotation, list):
        rotation = [-rotation, rotation]
    noise_rotation = np.random.uniform(rotation[0], rotation[1])
    points[:, :3] = box_np_ops.rotation_points_single_angle_cuda(points[:, :3], noise_rotation, axis=2)
    gt_boxes[:, :3] = box_np_ops.rotation_points_single_angle(gt_boxes[:, :3], noise_rotation, axis=2)
    if gt_boxes.shape[1] > 7:
        gt_boxes[:, 6:8] = box_np_ops.rotation_points_single_angle(
            np.hstack([gt_boxes[:, 6:8], np.zeros((gt_boxes.shape[0], 1))]),
            noise_rotation,
            axis=2,
        )[:, :2]
    gt_boxes[:, -1] += noise_rotation
    return gt_boxes, points
in this function ?
gt_dict["gt_boxes"], points = prep.random_flip(gt_dict["gt_boxes"], points)
gt_dict["gt_boxes"], points = prep.global_translate_(gt_dict["gt_boxes"], points, self.global_translate_noise_std)
gt_dict["gt_boxes"], points = prep.global_rotation(gt_dict["gt_boxes"], points,
rotation=self.global_rotation_noise)
gt_dict["gt_boxes"], points = prep.global_scaling_v2(gt_dict["gt_boxes"], points,
*self.global_scaling_noise)
Only global_rotation gives the error; if I comment it out, it works.
|
st31238
|
I don’t know, if this operation:
points[:, :3] = box_np_ops.rotation_points_single_angle_cuda
would use the previous points_ tensor, but if so, it could be problematic, since it’s changing it inplace, so you might want to create a new tensor instead.
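A minimal sketch of that suggestion, reusing the names from the snippet above: build a new points tensor instead of writing into the slice inplace.
import torch

rotated = box_np_ops.rotation_points_single_angle_cuda(points[:, :3], noise_rotation, axis=2)
points = torch.cat([rotated, points[:, 3:]], dim=1)  # new tensor, no inplace modification of points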
|
st31239
|
[screenshot: 731×411]
Hi everyone,
I am using the mounted drive, but sometimes the screen shows this problem. How can I avoid it? Thanks.
|
st31240
|
This issue doesn’t seem to be PyTorch-related, so I would recommend to post this question in a Google/Colab discussion board or raise an issue in their GitHub repository.
|
st31241
|
Unfortunately not, as I don’t know where Colab questions are discussed. Maybe you could take a look at StackOverflow or check other Colab references.
|
st31242
|
Hello, in my code I have a tensor of size [1, 8, 64, 1024].
Let’s say I want to reshape it to its original size, which is [1, 512, 1024].
So I want to “integrate” (this is not exactly the word) the 8x64 dimensions into one dimension of 512.
I used view(*(1, 512, 1024)) to get from [1, 8, 64, 1024] back to [1, 512, 1024].
But then I was experimenting to understand torch functions, and with
permute(0, 2, 1, 3) followed by reshape(1, 512, 1024) I got the same result.
The results I get are equal, checking with torch.eq(). But which is better to use for less complexity?
Thanks a lot
|
st31243
|
Solved by eqy in post #2
I’m confused that you get the same results here. (e.g., in this code snippet the results are clearly not the same)
$ cat temp.py
import torch
a = torch.randn(1, 8, 64, 1024)
b = a.reshape(1, 512, 1024)
c = a.permute(0, 2, 1, 3).reshape(1, 512, 1024)
a = a.view(1, 512, 1024)
print(torch.allclose(a…
|
st31244
|
I’m confused that you get the same results here. (e.g., in this code snippet the results are clearly not the same)
$ cat temp.py
import torch
a = torch.randn(1, 8, 64, 1024)
b = a.reshape(1, 512, 1024)
c = a.permute(0, 2, 1, 3).reshape(1, 512, 1024)
a = a.view(1, 512, 1024)
print(torch.allclose(a,b))
print(torch.allclose(b,c))
$ python3 temp.py
True
False
$
In summary permute is very different from view and reshape in that it actually changes the data layout or ordering of elements (e.g., consider what happens as you access each element by incrementing the last index by 1).
This post (For beginners: Do not use view() or reshape() to swap dimensions of tensors! - PyTorch Forums) is a great intro to the pitfalls of using view or reshape when the intent is to change the ordering of elements.
|
st31245
|
The following error occurred during model training.
ValueError: operands could not be broadcast together with shapes (1855,) (1855,64)
input: audio feature (waveform), log mel spectrogram = 64
using the librosa package
AudioAugmentation is not used.
I padded the variable-length inputs (audio features) to the longest length with the class clotho_collate_fn. The detailed code is as follows.
settings_features =
{'keep_raw_audio_data': False,
 'process': {'sr': 44100,
             'sr_resample': 16000,
             'nb_fft': 1024,
             'hop_size': 512,
             'nb_mels': 64,
             'window_function': 'hann',
             'center': True,
             'f_min': 0.0,
             'f_max': None,
             'htk': False,
             'power': 1.0,
             'norm': 1}}
import numpy as np
import random
from tools.features_log_mel_bands import feature_extraction
from pathlib import Path
import pysndfx
import gc
import copy
from tools.file_io import load_audio_file
import torch
__author__ = 'Nikita Kuzmin -- Lomonosov Moscow State University'
class MixUp:
def __init__(self, p, settings_features, simple_concat_captions=True,
sample_audio=False):
self.p = p
self.sample_audio = sample_audio
self.settings_features = settings_features
self.simple_concat_captions = simple_concat_captions
def from_mel(self, mel):
return 700 * (10 ** (mel / 2595.0) - 1)
def to_mel(self, hertz):
return 2595.0 * np.log10(1 + hertz / 700.0)
def mix_audio(self, first_audio, second_audio):
a = np.random.uniform(0.4, 0.6)  # drawn from a beta distribution
shorter, longer = first_audio, second_audio
if shorter.shape[0] == longer.shape[0]:
if self.sample_audio:
return (longer + shorter) / 2.0
else:
longer = from_mel_to_audio(longer, **self.settings_features['process']) * a
shorter = from_mel_to_audio(shorter,
**self.settings_features['process'])
return feature_extraction((longer + shorter) / 2, **self.settings_features['process'])
if first_audio.shape[0] > second_audio.shape[0]:
shorter, longer = longer, shorter
if self.sample_audio:
start = random.randint(0, longer.shape[0] - 1 - shorter.shape[0])
end = start + shorter.shape[0]
longer *= a
longer[start:end] += np.dot(shorter,(1 - a)) #shorter * (1 - a)
else:
longer = from_mel_to_audio(longer, **self.settings_features['process']) * a
shorter = from_mel_to_audio(shorter,
**self.settings_features['process'])
start = random.randint(0, longer.shape[0] - 1 - shorter.shape[0])
end = start + shorter.shape[0]
longer[start:end] += np.dot(shorter,(1 - a))
longer = feature_extraction(longer,
**self.settings_features['process'])
return longer
def mix_labels(self, first_labels, second_labels):
if self.simple_concat_captions:
return np.hstack([first_labels[:-1], second_labels[1:]])
else:
first_token = first_labels[0]
last_token = first_labels[-1]
first_labels = first_labels[1:-1]
second_labels = second_labels[1:-1]
res = np.empty((first_labels.size + second_labels.size,),
dtype=first_labels.dtype)
min_size = min(first_labels.size, second_labels.size)
res[0:2*min_size:2] = first_labels[:min_size]
res[1:2*min_size:2] = second_labels[:min_size]
if first_labels.size > second_labels.size:
res[min_size * 2:] = first_labels[min_size:]
elif second_labels.size > first_labels.size:
res[min_size*2:] = second_labels[min_size:]
res = np.concatenate(([first_token], res))
res = np.concatenate((res, [last_token]))
return res
def mix_audio_and_labels(self,
first_audio, second_audio,
first_labels, second_labels):
mixed_audio = self.mix_audio(first_audio, second_audio)
mixed_labels = self.mix_labels(first_labels, second_labels)
return mixed_audio, mixed_labels
def __call__(self, dataset, inputs):
resulted_audio, resulted_labels, filename = inputs[0], inputs[1], inputs[2]
if np.random.uniform() <= self.p:
random_sample = dataset.random_sample(sample_audio=self.sample_audio)
resulted_audio, resulted_labels = self.mix_audio_and_labels(
resulted_audio, random_sample[0],
resulted_labels, random_sample[1]
)
return resulted_audio, resulted_labels
class AudioAugmentation:
# https://github.com/ex4sperans/freesound-classification
def __init__(self, p):
self.p = p
self.effects_chain = (
pysndfx.AudioEffectsChain()
.reverb(
reverberance=random.randrange(50),
room_scale=random.randrange(50),
stereo_depth=random.randrange(50)
)
.pitch(shift=random.randrange(-300, 300))
.overdrive(gain=random.randrange(2, 10))
.speed(random.uniform(0.9, 1.1))
)
def __call__(self, dataset, inputs):
resulted_audio = inputs[0]
captions = inputs[1]
del inputs
gc.collect()
if np.random.uniform() < self.p:
resulted_audio = torch.from_numpy(self.effects_chain(resulted_audio.numpy()))
return resulted_audio, captions
#clotho_collate_fn
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from typing import MutableSequence, Union, Tuple, AnyStr
from numpy import ndarray
import torch
from torch import cat as pt_cat, zeros as pt_zeros, \
ones as pt_ones, from_numpy, Tensor
from hparams import hparams as hp
from data_augmentation.SpecAugment import spec_augment
__author__ = 'Konstantinos Drossos -- Tampere University'
__docformat__ = 'reStructuredText'
__all__ = ['clotho_collate_fn']
def clotho_collate_fn(batch: MutableSequence[ndarray],
nb_t_steps: Union[AnyStr, Tuple[int, int]],
input_pad_at: str,
output_pad_at: str) \
-> Tuple[Tensor, Tensor]:
"""Pads data.
:param batch: Batch data.
:type batch: list[numpy.ndarray]
:param nb_t_steps: Number of time steps to\
pad/truncate to. Can use\
'max', 'min', or exact number\
e.g. (1024, 10).
:type nb_t_steps: str|(int, int)
:param input_pad_at: Pad input at the start or\
at the end?
:type input_pad_at: str
:param output_pad_at: Pad output at the start or\
at the end?
:type output_pad_at: str
:return: Padded data.
:rtype: torch.Tensor, torch.Tensor
"""
if type(nb_t_steps) == str:
truncate_fn = max if nb_t_steps.lower() == 'max' else min
in_t_steps = truncate_fn([i[0].shape[0] for i in batch])
out_t_steps = truncate_fn([i[1].shape[0] for i in batch])
else:
in_t_steps, out_t_steps = nb_t_steps
in_dim = batch[0][0].shape[-1]
eos_token = batch[0][1][-1]
PAD = 4367
input_tensor, output_tensor = [], []
for in_b, out_b in batch:
if in_t_steps >= in_b.shape[0]:
padding = pt_zeros(in_t_steps - in_b.shape[0], in_dim).float()
data = [from_numpy(in_b).float()]
if input_pad_at.lower() == 'start':
data.insert(0, padding)
else:
data.append(padding)
tmp_in: Tensor = pt_cat(data)
else:
tmp_in: Tensor = from_numpy(in_b[:in_t_steps, :]).float()
input_tensor.append(tmp_in.unsqueeze_(0))
if out_t_steps >= out_b.shape[0]:
padding = pt_ones(out_t_steps - len(out_b)).mul(PAD).long()
data = [from_numpy(out_b).long()]
if output_pad_at.lower() == 'start':
data.insert(0, padding)
else:
data.append(padding)
tmp_out: Tensor = pt_cat(data)
else:
tmp_out: Tensor = from_numpy(out_b[:out_t_steps]).long()
output_tensor.append(tmp_out.unsqueeze_(0))
input_tensor = pt_cat(input_tensor)
output_tensor = pt_cat(output_tensor)
file_names = [i[2] for i in batch]
return input_tensor, output_tensor, file_names
def clotho_collate_fn_eval(batch: MutableSequence[ndarray],
nb_t_steps: Union[AnyStr, Tuple[int, int]],
input_pad_at: str,
output_pad_at: str,
split: str,
augment:bool) \
-> Tuple[Tensor, Tensor, Tensor, list]:
"""Pads data.
:param batch: Batch data.
:type batch: list[numpy.ndarray]
:param nb_t_steps: Number of time steps to\
pad/truncate to. Can use\
'max', 'min', or exact number\
e.g. (1024, 10).
:type nb_t_steps: str|(int, int)
:param input_pad_at: Pad input at the start or\
at the end?
:type input_pad_at: str
:param output_pad_at: Pad output at the start or\
at the end?
:type output_pad_at: str
:return: Padded data.
:rtype: torch.Tensor, torch.Tensor
"""
if type(nb_t_steps) == str:
truncate_fn = max if nb_t_steps.lower() == 'max' else min
in_t_steps = truncate_fn([i[0].shape[0] for i in batch])
out_t_steps = truncate_fn([i[1].shape[0] for i in batch])
else:
in_t_steps, out_t_steps = nb_t_steps
in_dim = batch[0][0].shape[-1]
eos_token = batch[0][1][-1]
batch = sorted(batch, key=lambda x: x[-1],reverse=True)
PAD = 4367
input_tensor, output_tensor = [], []
for in_b, out_b, ref, filename,out_len in batch:
if in_t_steps >= in_b.shape[0]:
padding = pt_zeros(in_t_steps - in_b.shape[0], in_dim).float()
data = [from_numpy(in_b).float()]
if input_pad_at.lower() == 'start':
data.insert(0, padding)
else:
data.append(padding)
tmp_in: Tensor = pt_cat(data)
else:
tmp_in: Tensor = from_numpy(in_b[:in_t_steps, :]).float()
input_tensor.append(tmp_in.unsqueeze_(0))
if out_t_steps >= out_b.shape[0]:
padding = pt_ones(out_t_steps - len(out_b)).mul(PAD).long()
data = [from_numpy(out_b).long()]
if output_pad_at.lower() == 'start':
data.insert(0, padding)
else:
data.append(padding)
tmp_out: Tensor = pt_cat(data)
else:
tmp_out: Tensor = from_numpy(out_b[:out_t_steps]).long()
output_tensor.append(tmp_out.unsqueeze_(0))
input_tensor = pt_cat(input_tensor)
if augment:
input_tensor = spec_augment(input_tensor)
output_tensor = pt_cat(output_tensor)
all_ref = [i[2] for i in batch]
filename = [i[3] for i in batch]
*_, target_len = zip(*batch)
target_len = torch.LongTensor(target_len)
file_names = [i[2] for i in batch]
return input_tensor, output_tensor,file_names,target_len, all_ref
# EOF
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from typing import Tuple, List, AnyStr, Union
from pathlib import Path
from numpy import ndarray, recarray
from torch.utils.data import Dataset
from numpy import load as np_load
import torchaudio
import torch
import numpy as np
import os
__author__ = 'Konstantinos Drossos -- Tampere University'
__docformat__ = 'reStructuredText'
__all__ = ['ClothoDataset']
class ClothoDataset(Dataset):
def __init__(self, data_dir: Path,
split: AnyStr,
input_field_name: AnyStr,
output_field_name: AnyStr,
load_into_memory: bool,
transforms=None) \
-> None:
"""Initialization of a Clotho dataset object.
:param data_dir: Directory with data.
:type data_dir: pathlib.Path
:param split: Split to use (i.e. 'development', 'evaluation')
:type split: str
:param input_field_name: Field name of the clotho data\
to be used as input data to the\
method.
:type input_field_name: str
:param output_field_name: Field name of the clotho data\
to be used as output data to the\
method.
:type output_field_name: str
:param load_into_memory: Load all data into memory?
:type load_into_memory: bool
"""
super(ClothoDataset, self).__init__()
the_dir: Path = data_dir.joinpath(split)
self.examples: List[Path] = sorted(the_dir.iterdir())
self.input_name: str = input_field_name
self.output_name: str = output_field_name
self.load_into_memory: bool = load_into_memory
self.transforms = transforms
self.resampler = torchaudio.transforms.Resample(orig_freq=settings_features['process']['sr'],
new_freq=settings_features['process']['sr_resample'])
if load_into_memory:
self.examples: List[recarray] = [np_load(str(f), allow_pickle=True)
for f in self.examples]
def __len__(self) \
-> int:
"""Gets the amount of examples in the dataset.
:return: Amount of examples in the dataset.
:rtype: int
"""
return len(self.examples)
def __getitem__(self,
item: int) \
-> Tuple[ndarray, ndarray]:
"""Gets an example from the dataset.
:param item: Index of the item.
:type item: int
:return: Input and output values.
:rtype: numpy.ndarray. numpy.ndarray
"""
ex: Union[Path, recarray] = self.examples[item]
if not self.load_into_memory:
ex: recarray = np_load(str(ex), allow_pickle=True)
in_e, ou_e = [ex[i].item() for i in [self.input_name, self.output_name]]
return in_e, ou_e
class ClothoDatasetEval(Dataset):
def __init__(self, data_dir: Path,
split: AnyStr,
input_field_name: AnyStr,
output_field_name: AnyStr,
load_into_memory: bool,
transforms=None) \
-> None:
"""Initialization of a Clotho dataset object.
:param data_dir: Directory with data.
:type data_dir: pathlib.Path
:param split: Split to use (i.e. 'development', 'evaluation')
:type split: str
:param input_field_name: Field name of the clotho data\
to be used as input data to the\
method.
:type input_field_name: str
:param output_field_name: Field name of the clotho data\
to be used as output data to the\
method.
:type output_field_name: str
:param load_into_memory: Load all data into memory?
:type load_into_memory: bool
"""
super(ClothoDatasetEval, self).__init__()
the_dir: Path = data_dir.joinpath(split)
self.split = split
if split == 'evaluation':
self.examples: List[Path] = sorted(the_dir.iterdir())[::5] # changed
else:
self.examples: List[Path] = sorted(the_dir.iterdir()) # changed
# self.examples: List[Path] = sorted(the_dir.iterdir())
self.input_name: str = input_field_name
self.output_name: str = output_field_name
self.load_into_memory: bool = load_into_memory
self.data_dir = the_dir
self.transforms = transforms
self.resampler = torchaudio.transforms.Resample(orig_freq=settings_features['process']['sr'],
new_freq=settings_features['process']['sr_resample'])
if load_into_memory:
self.examples: List[recarray] = [np_load(str(f), allow_pickle=True)
for f in self.examples]
def __len__(self) \
-> int:
"""Gets the amount of examples in the dataset.
:return: Amount of examples in the dataset.
:rtype: int
"""
return len(self.examples)
def __getitem__(self,
item: int):
"""Gets an example from the dataset.
:param item: Index of the item.
:type item: int
:return: Input and output values.
:rtype: numpy.ndarray. numpy.ndarray
"""
ex: Union[Path, recarray] = self.examples[item]
if not self.load_into_memory:
ex: recarray = np_load(str(ex), allow_pickle=True)
in_e, ou_e = [ex[i].item() for i in [self.input_name, self.output_name]]
all_ref = get_all_ref(ex['file_name'].item(), self.data_dir)
filename = str(ex['file_name'].item())
out_len = len(ou_e)
if self.transforms is not None:
for transform in self.transforms:
in_e, ou_e = transform(dataset=self, inputs=(in_e, ou_e, filename))
return in_e, ou_e, all_ref, filename,out_len
def random_sample(self, sample_audio=True):
"""
Sampling audio or melspectrogram and encoded output
:return:
"""
item = random.randint(0, len(self.examples) - 1)
ex = self.examples[item]
if not self.load_into_memory:
ex = np_load(str(ex), allow_pickle=True)
#if sample_audio:
thedir = Path('./create_dataset/data/clotho_audio_files/').joinpath(self.split)
filename = Path(thedir, ex.file_name[0])
in_e = torchaudio.load(filepath=filename)[0][0]
in_e = self.resampler.forward(in_e)
ou_e = ex[self.output_name].item()
else:
in_e, ou_e = [ex[i].item()
for i in [self.input_name, self.output_name]]
return in_e, ou_e
def get_all_ref(filename, data_dir):
filename = str(filename)
# tgt = [np.load(d, allow_pickle=True).words_ind.tolist()
tgt = [np.load(d, allow_pickle=True)['words_ind'].item().tolist()
for d in [os.path.join(data_dir, 'clotho_file_{filename}.wav_{i}.npy'.
format(filename=filename[:-4], # strip '.wav'
i=i)) for i in range(5)] # wav_0-wav_4
]
return tgt
# EOF
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from typing import Callable, Union, Tuple, AnyStr, Optional
from functools import partial
from pathlib import Path
from torch.utils.data.dataloader import DataLoader
from typing import MutableSequence, MutableMapping, Union,\
Tuple, List
#from .clotho_dataset import ClothoDataset, ClothoDatasetEval
#from .collate_fn import clotho_collate_fn, clotho_collate_fn_eval
__author__ = 'Konstantinos Drossos'
__docformat__ = 'reStructuredText'
__all__ = ['get_clotho_loader']
def get_clotho_loader(data_dir: Path,
split: str,
settings_features:MutableMapping[
str, Union[str, bool, MutableMapping[str, str]]],
input_field_name: str,
output_field_name: str,
load_into_memory: bool,
batch_size: int,
nb_t_steps_pad: Union[AnyStr, Tuple[int, int]],
shuffle: Optional[bool] = True,
drop_last: Optional[bool] = True,
input_pad_at: Optional[str] = 'start',
output_pad_at: Optional[str] = 'end',
num_workers: Optional[int] = 1,
return_reference: Optional[bool] = False,
) \
-> DataLoader:
"""Gets the clotho data loader.
:param return_reference:
:param data_dir: Directory with data.
:type data_dir: pathlib.Path
:param split: Split to use (i.e. 'development', 'evaluation')
:type split: str
:param input_field_name: Field name of the clotho data\
to be used as input data to the\
method.
:type input_field_name: str
:param output_field_name: Field name of the clotho data\
to be used as output data to the\
method.
:type output_field_name: str
:param load_into_memory: Load all data into memory?
:type load_into_memory: bool
:param batch_size: Batch size to use.
:type batch_size: int
:param nb_t_steps_pad: Number of time steps to\
pad/truncate to. Can use\
'max', 'min', or exact number\
e.g. (1024, 10).
:type nb_t_steps_pad: str|(int, int)
:param shuffle: Shuffle examples? Defaults to True.
:type shuffle: bool, optional
:param drop_last: Drop the last examples if not making\
a batch of `batch_size`? Defaults to True.
:type drop_last: bool, optional
:param input_pad_at: Pad input at the start or\
at the end?
:type input_pad_at: str
:param output_pad_at: Pad output at the start or\
at the end?
:type output_pad_at: str
:param num_workers: Amount of workers, defaults to 1.
:type num_workers: int, optional
:return: Dataloader for Clotho data.
:rtype: torch.utils.data.dataloader.DataLoader
"""
transforms = []
transforms.append(MixUp(p=0.5,
settings_features=settings_features,
simple_concat_captions=True,
sample_audio=True))
if return_reference:
dataset: ClothoDatasetEval = ClothoDatasetEval(
data_dir=data_dir, split=split,
input_field_name=input_field_name,
output_field_name=output_field_name,
load_into_memory=load_into_memory,transforms=transforms)
collate_fn: Callable = partial(
clotho_collate_fn_eval,
nb_t_steps=nb_t_steps_pad,
input_pad_at=input_pad_at,
output_pad_at=output_pad_at, split=split)
else:
dataset: ClothoDataset = ClothoDataset(
data_dir=data_dir, split=split,
input_field_name=input_field_name,
output_field_name=output_field_name,
load_into_memory=load_into_memory)
collate_fn: Callable = partial(
clotho_collate_fn,
nb_t_steps=nb_t_steps_pad,
input_pad_at=input_pad_at,
output_pad_at=output_pad_at)
return DataLoader(
dataset=dataset, batch_size=batch_size,
shuffle=shuffle, num_workers=num_workers,
drop_last=drop_last, collate_fn=collate_fn)
# EOF
Define the training data
training_data = get_clotho_loader(data_dir=data_dir, split='development',
settings_features=settings['feature_extraction_settings'],
input_field_name='features',
output_field_name='words_ind',
load_into_memory=False,
batch_size=hp.batch_size,
nb_t_steps_pad='max',
num_workers=4, return_reference=True)
Training
def train():
model.train()
total_loss_text = 0.
start_time = time.time()
batch = 0
for src, tgt, tgt_len, ref in training_data:
src = src.to(device)
tgt = tgt.to(device)
tgt_pad_mask = get_padding(tgt, tgt_len)
tgt_in = tgt[:, :-1]
tgt_pad_mask = tgt_pad_mask[:, :-1]
tgt_y = tgt[:, 1:]
optimizer.zero_grad()
output = model(src, tgt_in, target_padding_mask=tgt_pad_mask)
loss_text = criterion(output.contiguous().view(-1, hp.ntoken), tgt_y.transpose(0, 1).contiguous().view(-1))
loss = loss_text
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), hp.clip_grad)
optimizer.step()
total_loss_text += loss_text.item()
writer.add_scalar('Loss/train-text', loss_text.item(), (epoch - 1) * len(training_data) + batch)
batch += 1
if batch % hp.log_interval == 0 and batch > 0:
mean_text_loss = total_loss_text / hp.log_interval
elapsed = time.time() - start_time
current_lr = [param_group['lr'] for param_group in optimizer.param_groups][0]
logging.info('| epoch {:3d} | {:5d}/{:5d} batches | lr {:02.2e} | ms/batch {:5.2f} | '
'loss-text {:5.4f}'.format(
epoch, batch, len(training_data), current_lr,
elapsed * 1000 / hp.log_interval, mean_text_loss))
total_loss_text = 0
start_time = time.time()
epoch = 1
if hp.mode == 'train':
while epoch < hp.training_epochs + 1:
epoch_start_time = time.time()
train()
torch.save(model.state_dict(), '{log_dir}/{num_epoch}.pt'.format(log_dir=log_dir, num_epoch=epoch))
scheduler.step(epoch)
eval_all(evaluation_beam, word_dict_pickle_path=word_dict_pickle_path)
eval_with_beam(evaluation_beam, max_len=30, eos_ind=9, word_dict_pickle_path=word_dict_pickle_path,
beam_size=2)
eval_with_beam(evaluation_beam, max_len=30, eos_ind=9, word_dict_pickle_path=word_dict_pickle_path,
beam_size=3)
eval_with_beam(evaluation_beam, max_len=30, eos_ind=9, word_dict_pickle_path=word_dict_pickle_path,
beam_size=4)
epoch += 1
The error that occurs:
ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
File “/home/hj20/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py”, line 202, in _worker_loop
data = fetcher.fetch(index)
File “/home/hj20/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py”, line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File “/home/hj20/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py”, line 44, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File “”, line 171, in getitem
in_e, ou_e = transform(dataset=self, inputs=(in_e, ou_e, filename))
File “”, line 105, in call
resulted_labels, random_sample[1]
File “”, line 94, in mix_audio_and_labels
mixed_audio = self.mix_audio(first_audio, second_audio)
File “”, line 56, in mix_audio
longer[start:end] += np.dot(shorter,(1 - a)) #shorter * (1 - a)
ValueError: operands could not be broadcast together with shapes (1855,) (1855,64)
help me ㅠㅠ
|
st31246
|
I think the broadcasting should be possible if you unsqueeze(1) the missing dim1 in one of the tensors involved in the operation that is raising the error (assuming that broadcast is really wanted).
|
st31247
|
You should use it on the tensor having only one dimension (check the shape of longer and shorter) and also make sure these shapes “make sense”, i.e. you really want to unsqueeze and broadcast the operation.
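A minimal sketch of the suggested fix, assuming the two arrays really are meant to be combined frame-wise (shapes mirror the error message; the names are illustrative, not the poster's variables):
import numpy as np

a = 0.5
longer = np.zeros(1855)          # 1-D slice, shape (1855,)
shorter = np.ones((1855, 64))    # 2-D mel features, shape (1855, 64)

# (1855,) and (1855, 64) cannot broadcast; adding the missing dim makes it work,
# but only do this if mixing a 1-D signal with 2-D features actually makes sense:
mixed = longer[:, None] + shorter * (1 - a)   # result shape (1855, 64)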
|
st31248
|
Hi! I am new and just installed PyTorch via Anaconda. I used the steps listed on the PyTorch website and even tried reinstalling following those instructions as well. My installation of PyTorch goes smoothly, but when I try running the verification code, I get the error listed: Intel MKL FATAL ERROR: Cannot load libmkl_core.dylib. I am using a Mac. Whenever I have researched this topic in the past, the most similar solution I have seen is: https://github.com/JuliaPy/PyPlot.jl/issues/315 but I am still a beginner and all the comments on this site seem incredibly advanced ): Thank you in advance!
|
st31249
|
Solved by Cami_Williams in post #4
Hey Autumn! Have you tried:
conda install nomkl numpy scipy scikit-learn numexpr
conda remove mkl mkl-service
I have had similar issues myself trying to run PyTorch locally. If you can try using Google colab or Jupyter notebooks instead for experimentation. You can switch the runtime type’s hardwa…
|
st31250
|
It seems you're missing the Intel math kernel libraries, from what I gather. Have you already tried installing MKL in the same environment that you installed PyTorch in, i.e. running:
conda install mkl
in your terminal?
|
st31251
|
I just tried that and it downloaded, but whenever I restarted Anaconda, the same error was given. Here’s the expanded error output if it’s helpful:
Adapting from protocol version 5.1 (kernel 79056037-3763-439f-b1f0-27896a9bb6e1) to 5.3 (client).
INTEL MKL ERROR: dlopen(/Users/name/anaconda3/lib/libmkl_core.dylib, 9): image not found.
Intel MKL FATAL ERROR: Cannot load libmkl_core.dylib.
|
st31252
|
Hey Autumn! Have you tried:
conda install nomkl numpy scipy scikit-learn numexpr
conda remove mkl mkl-service
I have had similar issues myself trying to run PyTorch locally. If you can, try using Google Colab or Jupyter notebooks instead for experimentation. You can switch the runtime type’s hardware accelerator to CPU/GPU/TPU, which (I have found) is sometimes all it takes. The cloud notebooks make setup a lot easier.
|
st31253
|
I got this issue too and fixed it. Thanks so much! @Autumn_Thompson, @Cami_Williams
|
st31254
|
It took me an hour to install and remove as per your instructions.
Unfortunately my problem still persists.
I am using Jupyter.
|
st31255
|
This link gives a detailed explanation for the problem in question from multiple aspects along with solutions.
https://docs.anaconda.com/mkl-optimizations/index.html
Enjoy!
|
st31256
|
Hi,
I am trying to profile my code with the profiler module. Here are the results given by the profiler:
------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg Self CUDA Self CUDA % CUDA total CUDA time avg # of Calls
------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
aten::to 1.61% 13.661ms 18.94% 160.685ms 57.800us 11.134ms 1.23% 156.273ms 56.213us 2780
aten::copy_ 18.26% 154.965ms 18.26% 154.965ms 195.170us 151.355ms 16.74% 151.355ms 190.623us 794
Optimizer.step#SGD.step 4.55% 38.603ms 13.86% 117.564ms 117.564ms 39.414ms 4.36% 117.565ms 117.565ms 1
aten::add 6.44% 54.640ms 6.44% 54.640ms 17.769us 55.985ms 6.19% 55.985ms 18.206us 3075
aten::add_ 6.39% 54.254ms 6.39% 54.254ms 15.404us 45.105ms 4.99% 45.105ms 12.807us 3522
aten::conv2d 0.47% 3.992ms 6.21% 52.728ms 110.079us 2.794ms 0.31% 68.593ms 143.201us 479
CudnnConvolutionBackward 0.43% 3.682ms 6.20% 52.600ms 149.009us 1.476ms 0.16% 73.655ms 208.655us 353
aten::cudnn_convolution_backward 0.93% 7.918ms 5.77% 48.918ms 138.577us 3.593ms 0.40% 72.179ms 204.473us 353
aten::convolution 0.46% 3.891ms 5.74% 48.736ms 101.744us 2.782ms 0.31% 65.799ms 137.368us 479
aten::_convolution 0.71% 6.014ms 5.29% 44.845ms 93.622us 4.255ms 0.47% 63.018ms 131.561us 479
------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
But I only have two “.to” calls in my code, one for the model and the other for the inputs. Where can the other ~2000 calls come from?
Here are also sample traces from the profiler:
[Screenshot: Capture d’écran du 2021-06-02 21-50-03, 1422×452]
[Screenshot: Capture d’écran du 2021-06-02 21-58-32, 1204×365]
Thanks for your help
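One way to track this down (a minimal sketch with an illustrative model and tensors, not the poster's code): label code regions with record_function and record Python stacks, so key_averages can show which call sites produce the aten::to entries — many of them often come from library internals (optimizer state, loss computation, logging), not only from your explicit .to calls:
import torch
from torch.profiler import profile, record_function, ProfilerActivity

model = torch.nn.Linear(10, 10).cuda()
data = torch.randn(32, 10)

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
             with_stack=True) as prof:
    with record_function("data_to_gpu"):
        x = data.to("cuda")
    with record_function("forward"):
        out = model(x)

# Grouping by stack shows the Python call sites behind each aten::to entry.
print(prof.key_averages(group_by_stack_n=5).table(sort_by="cpu_time_total"))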
|
st31257
|
Hi, I’m new to the distributed data parallel (DDP) module.
My model, which uses SyncBatchNorm, starts training on 2 GPUs in a single node, but it seems to use a lot of CPU cores and appears to be limited by the CPU.
I tried to profile my code and the two most CPU-time-consuming operations are “to” (which is called around 4000 times) and “syncBatchNorm”.
So I don’t know if this is normal, and whether any optimizations are possible?
Thanks for your help!
|
st31258
|
Do you have a code snippet that reproduces the issue? From the description I guess most of the profiler results come from moving your model and parameters from CPU to GPU, instead of from the actual training. Can you profile it after moving the model to the GPU? The model should be moved to the GPU before wrapping it with DDP; see the DDP tutorial.
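A minimal sketch of that ordering (assuming the process group is already initialized, e.g. via torchrun; the model and ranks are illustrative):
import torch
from torch.nn.parallel import DistributedDataParallel as DDP

def build_ddp_model(local_rank):
    model = torch.nn.Sequential(
        torch.nn.Conv2d(3, 16, kernel_size=3),
        torch.nn.BatchNorm2d(16),
        torch.nn.ReLU(),
    )
    # Convert BatchNorm layers to SyncBatchNorm before wrapping.
    model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
    model = model.to(local_rank)                  # move to the GPU first...
    model = DDP(model, device_ids=[local_rank])   # ...then wrap with DDP
    return model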
|
st31259
|
Hi
I defined a ResNet as follows
# Residual Block
class DenseResidual(torch.nn.Module):
def __init__(self, inp_dim, neurons, layers, **kwargs):
super(DenseResidual, self).__init__(**kwargs)
self.h1 = torch.nn.Linear(inp_dim, neurons)
self.hidden = [torch.nn.Linear(neurons, neurons)
for _ in range(layers-1)]
def forward(self, inputs):
h = torch.tanh(self.h1(inputs))
x = h
for layer in self.hidden:
x = torch.tanh(layer(x))
# Defining Residual Connection and returning
return x + h
# ResNet Architecture
class MyResNet(torch.nn.Module):
def __init__(self, **kwargs):
super(MyResNet, self).__init__(**kwargs)
self.b1 = DenseResidual(2, 8, 3)
self.b2 = DenseResidual(8, 16, 3)
self.hn = torch.nn.Linear(16, 8)
self.out = torch.nn.Linear(8, 1)
def forward(self, inputs):
x = self.b1(inputs)
x = self.b2(x)
x = torch.tanh(self.hn(x))
x = self.out(x)
return x
model = MyResNet()
When I run the forward pass using code
model.to(device)
optimizer = torch.optim.Adam(model.parameters())
criterion = torch.nn.MSELoss()
EPOCHS = 5
for epoch in range(EPOCHS):
optimizer.zero_grad()
train_m.requires_grad = True
p = model(train_m)
print(p)
I get an error message
RuntimeError Traceback (most recent call last)
<ipython-input-22-cf2450a381be> in <module>
9
10 train_m.requires_grad = True
---> 11 p = model(train_m)
12 print(p)
13
~\miniconda3\envs\torch\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
<ipython-input-20-e99f45da034d> in forward(self, inputs)
9
10 def forward(self, inputs):
---> 11 x = self.b1(inputs)
12 x = self.b2(x)
13 x = torch.tanh(self.hn(x))
~\miniconda3\envs\torch\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
<ipython-input-14-9481614a812a> in forward(self, inputs)
11 x = h
12 for layer in self.hidden:
---> 13 x = torch.tanh(layer(x))
14
15 # Defining Residual Connection and returning
~\miniconda3\envs\torch\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~\miniconda3\envs\torch\lib\site-packages\torch\nn\modules\linear.py in forward(self, input)
91
92 def forward(self, input: Tensor) -> Tensor:
---> 93 return F.linear(input, self.weight, self.bias)
94
95 def extra_repr(self) -> str:
~\miniconda3\envs\torch\lib\site-packages\torch\nn\functional.py in linear(input, weight, bias)
1688 if input.dim() == 2 and bias is not None:
1689 # fused op is marginally faster
-> 1690 ret = torch.addmm(bias, input, weight.t())
1691 else:
1692 output = input.matmul(weight.t())
RuntimeError: Tensor for 'out' is on CPU, Tensor for argument #1 'self' is on CPU, but expected them to be on GPU (while checking arguments for addmm)
As I’ve moved the model to the GPU, I can’t understand why this is happening.
|
st31260
|
Solved by Tushar_Gautam in post #7
Hi, I found the solution to this problem. I forgot to use ModuleList in class defining Residual Block. When I added it, the code ran perfectly. Here’s the modified code:
# Residual Block
class DenseResidual(torch.nn.Module):
def __init__(self, inp_dim, neurons, layers, **kwargs):
super(…
|
st31261
|
Did you also move the input data to the GPU?
Note that you would have to reassign tensors (unlike modules):
tensor = tensor.to('cuda:0') # needs assignment
model.to('cuda:0') # works without
|
st31262
|
I have been getting the exact same error. Even with the reassignment. Could there be any other issue?
|
st31263
|
Could you post a code snippet to reproduce this issue, so that we can have a look?
|
st31264
|
Hi, I found the solution to this problem. I forgot to use ModuleList in the class defining the Residual Block. When I added it, the code ran perfectly. Here’s the modified code:
# Residual Block
class DenseResidual(torch.nn.Module):
def __init__(self, inp_dim, neurons, layers, **kwargs):
super(DenseResidual, self).__init__(**kwargs)
self.h1 = torch.nn.Linear(inp_dim, neurons)
self.hidden = [torch.nn.Linear(neurons, neurons)
for _ in range(layers-1)]
# Using ModuleList so that this layer list can be moved to CUDA
self.hidden = torch.nn.ModuleList(self.hidden)
def forward(self, inputs):
h = torch.tanh(self.h1(inputs))
x = h
for layer in self.hidden:
x = torch.tanh(layer(x))
# Defining Residual Connection and returning
return x + h
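A quick way to check that the fix took effect (a small sketch; assumes a CUDA device is available): with ModuleList the hidden layers are registered submodules, so they appear in parameters() and are moved by .to(...):
block = DenseResidual(2, 8, 3)
print(len(list(block.parameters())))               # 6 parameter tensors: h1 plus the two registered hidden layers
block.to('cuda:0')
print(next(block.hidden[0].parameters()).device)   # cuda:0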
|
st31265
|
I got the same bug as yours, but I have used ModuleList. Have you solved the problem?
|
st31266
|
def clones(module, N):
"Produce N identical layers."
return nn.ModuleList([copy.deepcopy(module) for _ in range(N)])
class Encoder(nn.Module):
"Core encoder is a stack of N layers"
def __init__(self, layer, N):
# layer = one EncoderLayer object, N=6
super(Encoder, self).__init__()
self.layers = clones(layer, N)
# deep copy, N=6
self.norm = LayerNorm(layer.size)
def forward(self, x, mask):
"Pass the input (and mask) through each layer in turn."
# x is alike (30, 10, 512)
# (batch.size, sequence.len, d_model)
# mask is a matrix like (batch.size, 10, 10)
for layer in self.layers:
x = layer(x, mask)
return self.norm(x)
class EncoderLayer(nn.Module):
"Encoder is made up of self-attn and "
"feed forward (defined below)"
def __init__(self, size, self_attn, feed_forward, dropout):
super(EncoderLayer, self).__init__()
self.self_attn = self_attn
self.feed_forward = feed_forward
self.sublayer = clones(SublayerConnection(size, dropout), 2)
# use deep copy to create two complete copies of SublayerConnection
self.size = size # 512
def forward(self, x, mask):
"Follow Figure 1 (left) for connections."
# x shape = (batch_size, sequence_length, d_model)
# mask is a (batch.size, sequence_length, sequence_length) matrix indicating, for a given word w, which words are visible to w
x = self.sublayer[0](x,
lambda x: self.self_attn(x, x, x, mask))
# x (batch_size, sequence_length, d_model) -> self_attn (MultiHeadAttention)
# shape is same (batch_size, sequence_length, d_model) -> SublayerConnection
# -> (batch_size, sequence_length, d_model)
return self.sublayer[1](x, self.feed_forward)
# x, together with the feed_forward object, is passed to the second SublayerConnection
class Encoder_model(nn.Module):
def __init__(self, encoder, src_embed, generator):
super(Encoder_model, self).__init__()
self.encoder = encoder
self.src_embed = src_embed
self.generator = generator
def forward(self, src, src_mask):
out = self.encoder(self.src_embed(src), src_mask)
out = self.generator(out)
return out
def make_Encoder_model(src_vocab, N=6, d_model=64, d_ff=512, h=8, input_dim=168, out_dim = 24, dropout=0):
c = copy.deepcopy
attn = MultiHeadedAttention(h, d_model)
ff = PositionwiseFeedForward(d_model, d_ff, dropout)
position = PositionalEncoding(d_model, dropout)
model = Encoder_model(Encoder(EncoderLayer(d_model, c(attn), c(ff), dropout), N),
nn.Sequential(Embeddings(d_model, src_vocab), c(position)),
EncoderGenerator(d_model, 1, input_dim, out_dim))
for p in model.parameters():
if p.dim() > 1:
nn.init.xavier_uniform(p)
return model
model = make_ConvEncoder_model(src_vocab, N, d_model, d_ff, h, vector_in_dim, vector_out_dim-1, dropout)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = model.to(device)
Could you help me to solve my problem?
/content/Transformer_forecasting/Module/model.py:113: UserWarning: nn.init.xavier_uniform is now deprecated in favor of nn.init.xavier_uniform_.
nn.init.xavier_uniform(p)
Traceback (most recent call last):
File “main.py”, line 78, in
Train()
File “main.py”, line 52, in Train
myloss = run_module.run_epoch(epoch, DataSet.construct_batch(dataloader_train, vector_in_dim, vector_out_dim), model, loss, model_name)
File “/content/Transformer_forecasting/train_module/run_module.py”, line 60, in run_epoch
out = model.forward(batch.src, batch.src_mask)
File “/content/Transformer_forecasting/Module/model.py”, line 64, in forward
out = self.encoder(self.src_embed(src), src_mask)
File “/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py”, line 727, in _call_impl
result = self.forward(*input, **kwargs)
File “/usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py”, line 117, in forward
input = module(input)
File “/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py”, line 727, in _call_impl
result = self.forward(*input, **kwargs)
File “/content/Transformer_forecasting/Module/Embedding.py”, line 17, in forward
return self.lut(x)*math.sqrt(self.d_model)
File “/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py”, line 727, in _call_impl
result = self.forward(*input, **kwargs)
File “/usr/local/lib/python3.6/dist-packages/torch/nn/modules/linear.py”, line 93, in forward
return F.linear(input, self.weight, self.bias)
File “/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py”, line 1692, in linear
output = input.matmul(weight.t())
RuntimeError: Tensor for ‘out’ is on CPU, Tensor for argument #1 ‘self’ is on CPU, but expected them to be on GPU (while checking arguments for addmm)
|
st31267
|
@ptrblck I’m facing a similar error:
model = torch.nn.Sequential(
torch.nn.Conv1d(1, 1024, kernel_size=7, stride=1, padding=0, dilation=1, groups=1, bias=True),
torch.nn.BatchNorm1d(1024),
torch.nn.ReLU(inplace=True),
torch.nn.Conv1d(1024, 1024, kernel_size=1, stride=1, padding=0, dilation=1, groups=1, bias=True),
torch.nn.BatchNorm1d(1024),
torch.nn.ReLU(inplace=True),
torch.nn.Conv1d(1024, 51, kernel_size=1, stride=1, padding=0, dilation=1, groups=1, bias=True)
)
classifier = torch.nn.Linear(51 * 7, 51)
model.cuda()
for e in range(num_epochs):
running_loss = 0
for batch in train_loader:
features, labels = batch[:, :-1], batch[:, -1]
features, labels = features.to(device), labels.to(device)
features = features.unsqueeze(dim=1)
outputs = model(features)
outputs = outputs.view(outputs.size(0), -1)
scores = classifier(outputs)
loss = criterion(outputs, labels.long())
loss.backward()
optimizer.zero_grad()
optimizer.step()
running_loss += loss.item()
Here’s the trace
RuntimeError Traceback (most recent call last)
<ipython-input-93-8ef77d1136dc> in <module>
11 outputs = model(features)
12 outputs = outputs.view(outputs.size(0), -1)
---> 13 scores = classifier(outputs)
14 loss = criterion(outputs, labels.long())
15 loss.backward()
~/miniconda3/envs/mytraining/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
~/miniconda3/envs/mytraining/lib/python3.6/site-packages/torch/nn/modules/linear.py in forward(self, input)
92
93 def forward(self, input: Tensor) -> Tensor:
---> 94 return F.linear(input, self.weight, self.bias)
95
96 def extra_repr(self) -> str:
~/miniconda3/envs/mytraining/lib/python3.6/site-packages/torch/nn/functional.py in linear(input, weight, bias)
1751 if has_torch_function_variadic(input, weight):
1752 return handle_torch_function(linear, (input, weight), input, weight, bias=bias)
-> 1753 return torch._C._nn.linear(input, weight, bias)
1754
1755
RuntimeError: Tensor for 'out' is on CPU, Tensor for argument #1 'self' is on CPU, but expected them to be on GPU (while checking arguments for addmm)
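A hedged guess based on the snippet above (not a confirmed diagnosis): model is moved with model.cuda(), but classifier is a separate nn.Linear that is never moved, so its weights stay on the CPU while outputs is on the GPU — which matches this error. A minimal sketch of the change, assuming the same device variable as in the loop:
classifier = torch.nn.Linear(51 * 7, 51).to(device)   # move it like the model
# ...
scores = classifier(outputs)                          # both operands now on the same device

# Side note (unrelated to this error): optimizer.zero_grad() is usually called
# before loss.backward(); calling it between backward() and step() clears the
# gradients that step() is about to use.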
|