st31768
Thanks! However, in this way, will the data sampling strategy be affected? That is, if I have 10 batches of data and I call next 20 times, will that be 2 full iterations?
st31769
Alternatively you should be able to just catch the StopIteration exception and reinitialize the generator (maybe with some previous shuffling)
st31770
Also, it seems that every time I call next(dataloader.__iter__()), I get the same data.
st31771
next(dataloader.__iter__()) creates a new iterator starting at the beginning of the dataset every time you call it. Something like the following could do the trick (I did not test the code, just typed it here):

# create dataloader-iterator
data_iter = iter(data_loader)

# iterate over dataset
# alternatively you could use while(True)
for i in range(NUM_ITERS_YOU_WANT):
    try:
        data = next(data_iter)
    except StopIteration:
        # StopIteration is thrown if dataset ends
        # reinitialize data loader
        data_iter = iter(data_loader)
        data = next(data_iter)
st31772
Yes, that will be 2 full iterations. You can even disregard the shuffling of the data indices by the dataloader and follow your own strategy in the next function. For example, irrespective of what index the dataloader sends, you can calculate your own index based on your dataset; that way your strategy would not be affected even if you put a huge number in len. It's all subjective how you want to design your dataset.
st31773
This should do the trick.

def loopy(dl):
    while True:
        for x in iter(dl):
            yield x

Then, everywhere you used to use iter(dataloader), use loopy(dataloader). Beware though, that samplers passed to the DataLoader can raise StopIteration() exceptions.
st31774
@justusschock how can I make the data loader shuffle the data every time its iterator is reinitialized? @AlexisW maybe you know?
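For illustration, a minimal sketch (not from the thread, dataset is a placeholder): with shuffle=True, or any random sampler, the DataLoader draws a new permutation every time a fresh iterator is created, so re-creating the iterator after StopIteration already reshuffles the data.

from torch.utils.data import DataLoader

loader = DataLoader(dataset, batch_size=4, shuffle=True)

data_iter = iter(loader)   # first pass, first random order
# ... exhaust it ...
data_iter = iter(loader)   # new iterator -> new random permutation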
st31775
Official non-hacky support for this is happening in https://github.com/pytorch/pytorch/pull/19228
st31776
As @SimonW mentioned, the release of PyTorch 1.2 brought with it a new dataset class: torch.utils.data.IterableDataset. You can read the official documentation for IterableDataset here.
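For illustration, a minimal sketch (untested; my_dataset is a placeholder) of an IterableDataset that cycles over an existing map-style dataset forever, so next() never raises StopIteration:

import torch
from torch.utils.data import IterableDataset, DataLoader

class InfiniteDataset(IterableDataset):
    """Cycles over an underlying map-style dataset forever, reshuffling each pass."""
    def __init__(self, dataset):
        super().__init__()
        self.dataset = dataset

    def __iter__(self):
        while True:
            for idx in torch.randperm(len(self.dataset)).tolist():
                yield self.dataset[idx]

# usage: my_dataset is any map-style dataset
loader = DataLoader(InfiniteDataset(my_dataset), batch_size=4)
data_iter = iter(loader)
batch = next(data_iter)  # can be called as often as needed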
st31777
I am trying to do section 3 of this Google Colaboratory notebook. However, there are two errors. When I try to train my model, I get an error from the loss function: multi-target not supported at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:15. The second error occurs when I change hidden_dim back to 256: I get an error regarding batch normalization. How do I fix these problems?
st31778
Solved by ptrblck in post #4 Remove the unnecessary dim1 from the target via .squeeze(1) and it should work.
st31779
This error is raised if the shapes of the input tensors are unexpected, so make sure the model output has the shape [batch_size, nb_classes, *], while the target has the shape [batch_size, *] containing the class indices in the range [0, nb_classes-1]. Here is a small code snippet showing the error:

batch_size, nb_classes = 2, 10
x = torch.randn(batch_size, nb_classes)
y = torch.randint(0, nb_classes, (batch_size,))
criterion = nn.CrossEntropyLoss()
loss = criterion(x, y)  # works

y = torch.randint(0, nb_classes, (batch_size, nb_classes))
loss = criterion(x, y)
> RuntimeError: 1D target tensor expected, multi-target not supported
st31780
Still not sure how to fix this. My pred.shape is [169343, 40], my data.y shape is [169343, 1], where each entry in data.y is in [0, 39].
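Applying the .squeeze(1) suggestion from above to these shapes would look like this (a sketch; pred, data and criterion as defined in the thread):

# pred: [169343, 40], data.y: [169343, 1] with class indices in [0, 39]
target = data.y.squeeze(1).long()   # -> [169343]
loss = criterion(pred, target)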
st31781
The outputs of torch.fft.irfft and torch.fft.irfftn are real. Would it be possible, by any chance, to get the complex version of it, i.e., the result before it is clipped to the real domain? I already went through this code on GitHub and could not find a way yet: ATen/native/SpectralOps, ATen/native/cuda/SpectralOps.
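For reference, a small untested sketch of one way to keep a complex result: torch.fft.ifft on the full spectrum preserves the complex output, whereas torch.fft.irfft on the half-spectrum returns only the real signal.

import torch

n = 8
x = torch.randn(n)

x_real = torch.fft.irfft(torch.fft.rfft(x), n=n)   # real output
x_complex = torch.fft.ifft(torch.fft.fft(x))       # complex output, imaginary part ~0 here

print(torch.allclose(x_complex.real, x_real, atol=1e-6))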
st31782
This error pops up after the 9th iteration of training:

Traceback (most recent call last):
  File "train.py", line 75, in <module>
    train(args)
  File "train.py", line 64, in train
    output = CIA_interface.cia_forward(batch, epoch, i)
  File "/notebooks/E2E/cia_interface.py", line 90, in cia_forward
    losses = self.model(example, return_loss=True)
  File "/opt/conda/envs/btngan1/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/notebooks/cia/det3d/models/detectors/voxelnet.py", line 38, in forward
    return self.bbox_head.loss(example, preds)
  File "/notebooks/cia/det3d/models/bbox_heads/mg_head_v4_release.py", line 612, in loss
    iou_pred_loss = iou_pred_loss.sum() / batch_size
RuntimeError: CUDA error: invalid configuration argument
st31783
Could you post the output of python -m torch.utils.collect_env as well as an executable code snippet to reproduce this issue?
st31784
(btngan1) root@n1byw4j8oz:/notebooks# python -m torch.utils.collect_env
Collecting environment information...
PyTorch version: 1.6.0
Is debug build: No
CUDA used to build PyTorch: 10.2

OS: Ubuntu 20.04.1 LTS
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
CMake version: version 3.19.4

Python version: 3.8
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: Quadro P5000
Nvidia driver version: 450.36.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.1.0

Versions of relevant libraries:
[pip3] numpy==1.19.2
[pip3] torch==1.6.0
[pip3] torch-scatter==2.0.6
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.7.0
[pip3] vit-pytorch==0.15.2
[conda] blas 1.0 mkl
[conda] cudatoolkit 10.2.89 hfd86e86_1
[conda] mkl 2020.2 256
[conda] mkl-service 2.3.0 py38he904b0f_0
[conda] mkl_fft 1.3.0 py38h54f3939_0
[conda] mkl_random 1.1.1 py38h0573a6f_0
[conda] numpy 1.19.2 py38h54aff64_0
[conda] numpy-base 1.19.2 py38hfa32c7d_0
[conda] pytorch 1.6.0 py3.8_cuda10.2.89_cudnn7.6.5_0 pytorch
[conda] torch-scatter 2.0.6 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchvision 0.7.0 py38_cu102 pytorch
[conda] vit-pytorch 0.15.2 pypi_0 pypi
st31785
Thanks! Could you update PyTorch in the meantime to the latest nightly release (in a new virtual env, if necessary) and rerun your code?
st31786
I will try. The thing is that there are a lot of dependencies on the PyTorch and CUDA versions, because there are many libraries to set up, like the IOU3D_CUDA extension inherited from the PointRCNN model, among other things. Can you tell me what this error means? It is not intuitive, and the line that produces the error executed normally 8 times before.
st31787
The error is raised by an invalid kernel launch config. In case you are using a custom CUDA extension, you could try to rerun the code via CUDA_LAUNCH_BLOCKING=1 python script.py args and check the kernel launch configs in the failing operation given by the stack trace.
st31788
Traceback (most recent call last):
  File "train.py", line 75, in <module>
    train(args)
  File "train.py", line 64, in train
    output = CIA_interface.cia_forward(batch, epoch, i)
  File "/notebooks/E2E/cia_interface.py", line 90, in cia_forward
    losses = self.model(example, return_loss=True)
  File "/opt/conda/envs/btngan1/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/notebooks/cia/det3d/models/detectors/voxelnet.py", line 38, in forward
    return self.bbox_head.loss(example, preds)
  File "/notebooks/cia/det3d/models/bbox_heads/mg_head_v4_release.py", line 612, in loss
    iou_pred_loss = iou_pred_loss.sum() / batch_size
RuntimeError: CUDA error: invalid configuration argument

It gives me the same error. The error is in the sum() function, and yes, I use IOU3D_CUDA from this repo: GitHub - sshaoshuai/PointRCNN: PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud, CVPR 2019.
st31789
In that case you could e.g. add debug prints to the CUDA code and check the launch configs, as apparently one kernel call (in the 9th iteration) is using invalid values (I guess the grid or block dimension might be too large).
st31790
Well, I'm new to using CUDA extensions. Could you show me where to put these debug prints, e.g. in which file or extension?
st31791
Okay, now I changed the random seed and the error occurs in the 11th iteration instead. @ptrblck
st31792
You could check the launch config e.g. here and make sure the values are valid. I’m not familiar with the code base and this was the first .cu file I’ve found, so the error could of course be raised by another kernel.
st31793
I am new to deep learning and PyTorch. I have a dataset of 6000 images with all four classes in a single folder. I used the following snippet to load my data:

torchvision.datasets.ImageFolder(root='/content/drive/My Drive/DFU/base_dir/train_dir', transform=None)

I read that for ImageFolder the images should be organized into sub-folders based on class labels. However, my dataset has all four class images in a single folder. I have a .csv file (Ground_Truth) that contains the one-hot-encoded class label for each image. How do I load my dataset in PyTorch?
st31794
You should create a custom dataset class, which loads the corresponding csv file and returns the tuple <image, label> in the __getitem__ function: Writing Custom Datasets, DataLoaders and Transforms — PyTorch Tutorials 1.8.1+cu102 documentation
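A minimal sketch of such a dataset (the column layout and the argmax conversion are assumptions about the csv file, adjust as needed):

import os
import pandas as pd
import torch
from PIL import Image
from torch.utils.data import Dataset

class CSVImageDataset(Dataset):
    # assumes the first csv column holds the file name and the remaining
    # columns hold the one-hot encoded label
    def __init__(self, csv_file, root_dir, transform=None):
        self.df = pd.read_csv(csv_file)
        self.root_dir = root_dir
        self.transform = transform

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        row = self.df.iloc[idx]
        image = Image.open(os.path.join(self.root_dir, row.iloc[0])).convert('RGB')
        label = int(row.iloc[1:].to_numpy().astype(float).argmax())  # one-hot -> class index
        if self.transform is not None:
            image = self.transform(image)
        return image, label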
st31795
@carloalbertobarbano the sample example has all the images of the same type in a single directory. I have multiple classes and all the images are in one directory. I have a .csv file (Ground_Truth_File) that has an image name column and then the respective one-hot encodings.
st31796
The sample in the docs shows you how to create your custom dataset by extending all of the required functions. Of course, you need to adjust the logic to suit your needs (read the CSV file and return the required attributes in __getitem__), as I suggested in my previous reply.
st31797
class DFUDataset(Dataset):
    def __init__(self, csv_file, root_dir, transform=None):
        self.DFU = pd.read_csv(csv_file)
        self.root_dir = root_dir
        self.transform = transform

    def __len__(self):
        return len(self.DFU)

    def __getitem__(self, idx):
        if torch.is_tensor(idx):
            idx = idx.tolist()
        img_name = os.path.join(self.root_dir, self.DFU.iloc[idx, 0])  # image names
        image = io.imread(img_name)
        img_label = os.path.join(self.root_dir, self.DFU.iloc[0, idx])  # image labels
        sample = {'image': image, 'img_label': img_label}
        return sample

DFU_dataset = DFUDataset(
    csv_file='C:/Users/aleems2/Desktop/dfu/DFUC2021_trainset_210427/DFUC2021_train/Labelled_data_ground_truth.csv',
    root_dir="C:/Users/aleems2/Desktop/dfu/DFUC2021_trainset_210427/DFUC2021_train/Labelled_test_images")

I am debugging, but my code never reaches the __len__ and __getitem__ functions.
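As a side note (not part of the original post): __len__ and __getitem__ are only executed when the dataset is indexed or iterated, e.g. through a DataLoader; just constructing the dataset will not hit them. A usage sketch with the DFU_dataset defined above:

from torch.utils.data import DataLoader

loader = DataLoader(DFU_dataset, batch_size=4, shuffle=True)
sample = DFU_dataset[0]          # calls __getitem__ directly
for batch in loader:             # also calls __len__/__getitem__ under the hood
    print(batch['image'].shape)
    break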
st31798
Hi, I am a beginner in PyTorch. I have deployed a model in Django for classifying whether a lung x-ray image is positive or negative for pneumonia. My question is: how can I make sure that the uploaded image is actually a lung x-ray? (Sorry for my English.)
st31799
I have implemented a Transformer model based on nn.Transformer. Unfortunately, my model is not learning. I have LSTM-based networks that learn well on the same sequence-to-sequence dataset. Can you please suggest what could be wrong?

class TransformerBase(nn.Module):
    def __init__(self, input_size, output_size, hidden_size=256):
        super(TransformerBase, self).__init__()
        self.input_size = input_size
        self.output_size = output_size
        self.hidden_size = hidden_size

        self.transformer_model = nn.Transformer(d_model=hidden_size)
        self.embedding_input = nn.Embedding(self.input_size, hidden_size)
        self.embedding_output = nn.Embedding(self.output_size, hidden_size)
        self.pos_encoder = PositionalEncoding(hidden_size, 0.1)
        self.fc_out = nn.Linear(hidden_size, self.output_size)

    def forward(self, src, trg):
        embedded_input = self.pos_encoder(self.embedding_input(src) * math.sqrt(self.hidden_size))
        embedding_output = self.pos_encoder(self.embedding_output(trg) * math.sqrt(self.hidden_size))
        tgt_mask = generate_square_subsequent_mask(trg.shape[0])
        x = self.transformer_model(src=embedded_input, tgt=embedding_output, tgt_mask=tgt_mask.to(device))
        return self.fc_out(x)

def generate_square_subsequent_mask(sz: int):
    mask = (torch.triu(torch.ones(sz, sz)) == 1).transpose(0, 1)
    mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
    return mask

# https://pytorch.org/tutorials/beginner/transformer_tutorial.html
class PositionalEncoding(nn.Module):
    def __init__(self, d_model, dropout=0.1, max_len=5000):
        super(PositionalEncoding, self).__init__()
        self.dropout = nn.Dropout(p=dropout)

        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        pe = pe.unsqueeze(0).transpose(0, 1)
        self.register_buffer('pe', pe)

    def forward(self, x):
        x = x + self.pe[:x.size(0), :]
        return self.dropout(x)

def train(model, iterator, optimizer, criterion, clip):
    model.train()
    epoch_loss = 0
    for i, batch in enumerate(iterator):
        # Get input and targets and move to cuda
        src = batch.src.to(device)
        trg = batch.trg.to(device)

        optimizer.zero_grad()
        output = model(src, trg[:-1])

        output = output.reshape(-1, output.shape[-1]).contiguous()
        trg = trg[1:].reshape(-1).contiguous()

        loss = criterion(output, trg)
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
        optimizer.step()
        epoch_loss += loss.item()
    return epoch_loss / len(iterator)

criterion = nn.CrossEntropyLoss(ignore_index=pad_idx)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
st31800
@odats Can you explain to me what exactly is happening? Are you getting repeated tokens during training?
st31801
The model converges to almost the same prediction for most inputs (I think the small variation is due to dropout). The loss slowly decreases to a stable but high point. LSTM-based models achieve an N-times better score. Do you think the problem might be in the model definition or in the training process?
st31802
Can you post some examples of your prediction outputs? From the training phase is fine.
st31803
training:

tensor([[ 2.4237,  2.5905,  2.2169, -10.5635, -1.0118, -10.1921],
        [ 3.0716,  3.8252,  3.1995,  -8.1787, -4.2393,  -8.7988],
        [ 2.8787,  3.6345,  2.8708,  -8.3596, -4.4989,  -8.4322],
        ...,
        [ 1.5031,  1.0884,  1.2739,  -9.6650,  2.4617,  -9.0367],
        [ 1.3291,  1.5163,  1.4576, -10.0381,  2.1117,  -9.5944],
        [ 0.9889,  1.2590,  1.0715,  -9.5567,  2.5086,  -9.4345]], grad_fn=<ViewBackward>)
tensor([[ 3.1798,  3.6515,  2.7826,  -8.4505, -4.2253,  -8.5992],
        [ 3.2187,  3.5649,  2.8608,  -8.9943, -3.8257,  -8.1756],
        [ 2.8734,  3.4187,  2.6570,  -7.9880, -4.0824,  -8.3490],
        ...,
        [ 1.0231,  1.5060,  0.9266,  -9.2981,  2.0333,  -9.3198],
        [ 1.3715,  1.2931,  1.0752,  -9.4257,  2.2230,  -9.2558],
        [ 1.7830,  1.6602,  1.5629, -10.3801,  1.7262, -10.1634]], grad_fn=<ViewBackward>)
tensor([[ 3.0281,  3.7362,  3.0541,  -8.4012, -4.0972,  -8.8399],
        [ 2.5452,  3.5018,  2.7213,  -7.9301, -3.7148,  -8.4083],
        [ 2.9479,  3.6801,  2.8139,  -8.8359, -3.9234,  -8.8597],
        ...

prediction loop (SOS=3, EOS=4, PAD=5):

def show_predictions_transformer(model, loader):
    model.eval()
    with torch.no_grad():
        for source, target in loader:
            target_max_len = 12
            outputs = torch.zeros(target_max_len, dtype=torch.long)
            outputs[0] = torch.LongTensor([SOS])
            for i in range(1, target_max_len):
                pred = model(source, outputs[:i].unsqueeze(1))
                outputs[i] = pred[-1].argmax(1)
                if outputs[i] == EOS:
                    # print('eos')
                    break
            print('targ', target.squeeze())
            print('pred', outputs)

targ tensor([3, 1, 1, 2, 1, 0, 2, 4])
pred tensor([3, 1, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0])
Epoch: 04 | Time: 0m 27s
    Train Loss: 1.180 | Train PPL: 3.254
    Val. Loss: 1.104 | Val. PPL: 3.017
targ tensor([3, 2, 2, 2, 0, 1, 1, 4])
pred tensor([3, 1, 0, 0, 4, 0, 0, 0, 0, 0, 0, 0])
Epoch: 05 | Time: 0m 27s
    Train Loss: 1.176 | Train PPL: 3.240
    Val. Loss: 1.131 | Val. PPL: 3.098
targ tensor([3, 2, 1, 1, 2, 1, 0, 4])
pred tensor([3, 1, 0, 0, 0, 0, 4, 0, 0, 0, 0, 0])
Epoch: 06 | Time: 0m 26s
    Train Loss: 1.166 | Train PPL: 3.210
st31804
Your mask might be off? I'm not entirely sure, but it seems that you are setting trg[:-1] to be all but the last value.
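For reference, a minimal sketch (not the poster's exact setup; pad_idx and the (seq_len, batch) layout are assumptions) of how the subsequent mask and the padding masks are usually passed to nn.Transformer:

import torch
import torch.nn as nn

pad_idx = 0
transformer = nn.Transformer(d_model=32, nhead=4)

src_tokens = torch.randint(0, 5, (10, 2))   # (src_len, batch)
tgt_tokens = torch.randint(0, 5, (7, 2))    # (tgt_len, batch), already shifted
src = torch.randn(10, 2, 32)                # embedded source
tgt = torch.randn(7, 2, 32)                 # embedded target

tgt_mask = transformer.generate_square_subsequent_mask(tgt_tokens.size(0))
src_key_padding_mask = (src_tokens == pad_idx).transpose(0, 1)   # (batch, src_len)
tgt_key_padding_mask = (tgt_tokens == pad_idx).transpose(0, 1)   # (batch, tgt_len)

out = transformer(src, tgt,
                  tgt_mask=tgt_mask,
                  src_key_padding_mask=src_key_padding_mask,
                  tgt_key_padding_mask=tgt_key_padding_mask,
                  memory_key_padding_mask=src_key_padding_mask)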
st31805
It's an old issue, but I think I found something related to it recently. When I looked into my site-packages/torch/nn/functional.py, which I assume is where the MultiheadAttention function lives, I found that the attention scores are not divided by the square root of the attention dimension, as suggested in the Transformer paper. As the paper's reasoning for this division suggests, omitting it may saturate the softmax function.
st31806
Hello everyone, how do I reset the parameters of layer4 in resnet18 using Module.apply? Here is my code, but it doesn't work. Thank you.

def reset_weights(m):
    for name, layer in m.named_children():
        if name == 'layer4' and hasattr(layer, 'reset_parameters'):
            print(f'Reset trainable parameters of layer = {layer}')
            layer.reset_parameters()

model.apply(reset_weights)
st31807
Solved by ptrblck in post #2 layer4 is an nn.Sequential module, which doesn’t have the reset_parameters method, so your conditions won’t be met. You could iterate all modules of layer4 and check for this attribute: def reset_weights(m): for name, layer in m.named_children(): if name=='layer4': print(na…
st31808
layer4 is an nn.Sequential module, which doesn't have the reset_parameters method, so your conditions won't be met. You could iterate all modules of layer4 and check for this attribute:

def reset_weights(m):
    for name, layer in m.named_children():
        if name == 'layer4':
            print(name)
            for n, l in layer.named_modules():
                print(n)
                if hasattr(l, 'reset_parameters'):
                    print(f'Reset trainable parameters of layer = {l}')
                    l.reset_parameters()

model = models.resnet18()
model.apply(reset_weights)
st31809
How can I reset the parameters of layer3 and the higher layers of ResNet18 so that I can train those layers?
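Extending the snippet above along the same lines (a sketch), you could iterate over the sub-modules of layer3 and layer4 (and optionally fc) and call reset_parameters where it exists:

from torchvision import models

model = models.resnet18(pretrained=True)

for name in ['layer3', 'layer4', 'fc']:
    layer = getattr(model, name)
    for m in layer.modules():
        if hasattr(m, 'reset_parameters'):
            m.reset_parameters()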
st31810
I trained my network on a GPU device and saved a checkpoint with torch.save. Loading this checkpoint on my CPU device gives an error:

raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
st31811
You can remap the Tensor location at load time using the map_location argument to torch.load. For example, this will forcefully remap everything onto CPU:

torch.load('my_file.pt', map_location=lambda storage, location: 'cpu')

While this will only map storages from GPU0:

torch.load('my_file.pt', map_location={'cuda:0': 'cpu'})
st31812
I'm trying to load a GPU-trained model onto a CPU with the code you suggested:

torch.load('my_file.pt', map_location=lambda storage, location: 'cpu')

… and I get this error:

Traceback (most recent call last):
  File "net_predict.py", line 146, in <module>
    net = torch.load(f_net, map_location=(lambda storage, location: 'cpu'))
  File "/home/[...]/anaconda2/lib/python2.7/site-packages/torch/serialization.py", line 248, in load
    return _load(f, map_location, pickle_module)
  File "/home/[...]/anaconda2/lib/python2.7/site-packages/torch/serialization.py", line 340, in _load
    tensor = tensor_type._new_with_metadata_file(f, storage)
AttributeError: type object 'str' has no attribute '_new_with_metadata_file'

(I replaced my username with [...]) Any idea what I'm doing wrong?
st31813
I’m sorry, my bad. This should work: torch.load('my_file.pt', map_location=lambda storage, loc: storage)
st31814
It works - brilliant! Out of curiosity: could you explain what this does? I’m not sure how it knows to remap storage to CPU, since the lambda returns the storage it got as an argument.
st31815
Sure. map_location can be a dict, where the locations given as keys are remapped to their values. Alternatively, we support passing in a function that will receive a CPU storage and its serialized location, and it should return some storage that will replace the CPU one. If you just want to load everything onto the CPU, you can simply return the first argument, but you could do some more exotic things like sending all CUDA tensors to the next GPU by parsing out the original device from the loc argument.
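For illustration, an untested sketch of such a map_location function that keeps everything on the CPU except storages saved on cuda:0, which it sends to cuda:1:

def remap(storage, loc):
    # loc is the serialized location string, e.g. 'cpu' or 'cuda:0';
    # storage arrives already loaded on the CPU
    if loc == 'cuda:0':
        return storage.cuda(1)   # move tensors that lived on GPU0 to GPU1
    return storage               # everything else stays on the CPU

state = torch.load('my_file.pt', map_location=remap)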
st31816
@apaszke Hi! I am sorry to reopen this thread. I have encountered a problem when using the above method to load a GPU-trained model in CPU mode. The code fragment is:

import torch
encoder = torch.load('encoder.pt', map_location=lambda storage, loc: storage)
decoder = torch.load('decoder.pt', map_location=lambda storage, loc: storage)
encoder.cpu()
decoder.cpu()

The error I got is shown in the attached screenshot. The full code can be viewed at seq2seq-translation/eval.py. How can I load a GPU-trained model on a CPU device (without any GPUs) correctly? Thank you for your great work!
st31817
Hey, no problem! I only have a couple more questions: What’s your PyTorch version? Do you have torch.__version__? If no, when did you install it? When did you create that checkpoint?
st31818
Good morning! My PyTorch version is 0.1.9+b46d5e0. I compiled PyTorch from source since I want to try using half tensors with stateless methods. (You mentioned it in this pull request. Excellent!) I created the checkpoint about 12 hours ago, also using the 0.1.9+b46d5e0 version of PyTorch. Thank you very much!
st31819
I have uploaded some test data to my GitHub repo. If you have time, maybe you can try it:

train a model: python train_attn.py
load the model and do some inference: python eval.py

Maybe this is useful for providing some information to solve the problem. Thank you!
st31820
@apaszke I suspect it may be a version problem. I can't reproduce this error when I use the 0.1.9_2 version of PyTorch. Thanks!
st31821
Sorry to reopen the thread. After running the code:

params = torch.load(input_file, lambda storage, loc: storage)

I met the same problem as Yangyu did before. The error message shows:

TypeError: set_ received an invalid combination of arguments - got (torch.FloatStorage, int, tuple, tuple), but expected one of:
 * no arguments
 * (torch.cuda.FloatTensor source)
 * (torch.cuda.FloatStorage storage)
 * (torch.cuda.FloatStorage sourceStorage, int storage_offset, int ... size)
      didn't match because some of the arguments have invalid types: (!torch.FloatStorage!, int, !tuple!, !tuple!)
 * (torch.cuda.FloatStorage sourceStorage, int storage_offset, torch.Size size)
 * (torch.cuda.FloatStorage sourceStorage, int storage_offset, torch.Size size, tuple strides)

I just updated my PyTorch to the latest version in the master branch. The version number is 0.1.11+761eef1. Any idea why? Thanks, Yaozong
st31822
Hello, I tried to load a snapshot from GPU training to run it in CPU mode, but faced the same problem described above. Of course I tried the given advice, but it had no effect.

torch.load('./snapshots/cpu_final_snapshot.pth', map_location=lambda storage, loc: storage)

I have the following traceback:

Traceback (most recent call last):
  File "predict.py", line 39, in <module>
    params = torch.load('./snapshots/cpu_final_snapshot.pth', map_location=lambda storage, loc: storage)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/serialization.py", line 222, in load
    return _load(f, map_location, pickle_module)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/serialization.py", line 370, in _load
    result = unpickler.load()
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/cuda/__init__.py", line 279, in __new__
    _lazy_init()
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/cuda/__init__.py", line 96, in _lazy_init
    _check_driver()
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/cuda/__init__.py", line 63, in _check_driver
    raise AssertionError("Torch not compiled with CUDA enabled")

torch.__version__ is '0.1.10_1'. I would appreciate any help.
st31823
It seems that I found the cause of the "invalid combination of arguments" error. Yesterday I used a model trained on the 0.1.9 version of PyTorch and loaded it on the CPU using the latest 0.1.11 version, and the error appeared. Today I retrained the model using the latest 0.1.11 version and also loaded it with that version, and everything works. So I guess there are inconsistencies between models saved with different versions of PyTorch.
st31824
We have trained an AlexNet with the PyTorch ImageNet example (https://github.com/pytorch/examples/blob/master/imagenet/main.py) and have been struggling to convert the model for use on CPU and for inference only. Here is a solution for AlexNet: https://github.com/e-lab/pytorch-toolbox/blob/master/convert-save-load.md It would be nice to have something more generic…
st31825
When I use torch 1.0.0, the given code produces the following result:

torch.load('save/best_BiLSTMCRF_pos_2019-01-10 12-42-50', map_location=lambda storage, location: 'cpu')

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/jiaxin/.local/lib/python3.6/site-packages/torch/serialization.py", line 367, in load
    return _load(f, map_location, pickle_module)
  File "/home/jiaxin/.local/lib/python3.6/site-packages/torch/serialization.py", line 538, in _load
    result = unpickler.load()
  File "/home/jiaxin/.local/lib/python3.6/site-packages/torch/_utils.py", line 135, in _rebuild_tensor_v2
    tensor = _rebuild_tensor(storage, storage_offset, size, stride)
  File "/home/jiaxin/.local/lib/python3.6/site-packages/torch/_utils.py", line 129, in _rebuild_tensor
    module = importlib.import_module(storage.module)
AttributeError: 'str' object has no attribute 'module'

Is anything wrong with the new version of PyTorch?
st31826
Had the same thing. See the comments about using map_location=lambda storage, location: storage instead of 'cpu'
st31827
If you want to force the map_location to CPU, you can eliminate the lambda and simply use:

torch.load('save/best_BiLSTMCRF_pos_2019-01-10 12-42-50', map_location='cpu')

This is discussed in the report for issue #9139.
st31828
Sorry for reviving this post. I have a closely related question. I want to do the exact same thing, but using the C++ front-end, i.e. I want to save a model trained with the C++ front-end on a GPU and then load it with the C++ front-end on a CPU device. Is it possible? The documentation on torch::load does not mention a map_location argument. Thanks for any help.
st31829
Hi everyone, I have a question about how to change the normalization method in ResNet. When I first looked at the code of ResNet, I found that there is an attribute named norm_layer, through which the BN layers are created. So I tried initializing norm_layer with nn.GroupNorm. However, I noticed that in the ResNet code we just pass nn.BatchNorm2d to norm_layer and use it to create the network. Sadly, GN takes two parameters, which means I can't pass the GN parameters through the ResNet class. So my question is: if I want to change the normalization from BN to GN, should I rewrite Bottleneck and ResNet? I already tried it, but I get an error about the weight shape versus the network input shape.

def norm2d(num_channels_per_group, planes):
    print("num_channels_per_group:{}".format(num_channels_per_group))
    if num_channels_per_group > 0:
        return GroupNorm(num_channels_per_group, planes, affine=True)
    else:
        return nn.BatchNorm2d(planes)


class Bottleneck(nn.Module):
    expansion = 4

    def __init__(self, inplanes, planes, stride=1, downsample=None, group_norm=0):
        super(Bottleneck, self).__init__()
        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
        self.bn1 = norm2d(group_norm, planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn2 = norm2d(group_norm, planes)
        self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
        self.bn3 = norm2d(group_norm, planes * 4)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        residual = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)

        out = self.conv3(out)
        out = self.bn3(out)

        if self.downsample is not None:
            residual = self.downsample(x)

        out += residual
        out = self.relu(out)
        return out


class ResNet(nn.Module):
    def __init__(self, block, layers, num_classes=1000, group_norm=0):
        self.inplanes = 64
        super(ResNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.bn1 = norm2d(64, group_norm)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(block, 64, layers[0], group_norm=group_norm)
        self.layer2 = self._make_layer(block, 128, layers[1], stride=2, group_norm=group_norm)
        self.layer3 = self._make_layer(block, 256, layers[2], stride=2, group_norm=group_norm)
        self.layer4 = self._make_layer(block, 512, layers[3], stride=2, group_norm=group_norm)
        self.avgpool = nn.AvgPool2d(7, stride=1)
        self.fc = nn.Linear(512 * block.expansion, num_classes)

        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.data.normal_(0, math.sqrt(2. / n))
            elif isinstance(m, nn.BatchNorm2d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()
            elif isinstance(m, GroupNorm):
                m.weight.data.fill_(1)
                m.bias.data.zero_()

        for m in self.modules():
            if isinstance(m, Bottleneck):
                m.bn3.weight.data.fill_(0)
            if isinstance(m, BasicBlock):
                m.bn2.weight.data.fill_(0)

    def _make_layer(self, block, planes, blocks, stride=1, group_norm=0):
        downsample = None
        if stride != 1 or self.inplanes != planes * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(self.inplanes, planes * block.expansion,
                          kernel_size=1, stride=stride, bias=False),
                norm2d(planes * block.expansion, group_norm),
            )

        layers = []
        layers.append(block(self.inplanes, planes, stride, downsample, group_norm))
        self.inplanes = planes * block.expansion
        for i in range(1, blocks):
            layers.append(block(self.inplanes, planes, group_norm=group_norm))

        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        x = self.avgpool(x)
        x = x.view(x.size(0), -1)
        x = self.fc(x)
        return x


def _resnet_gn(arch, block, layers, pretrained, num_classes, group_norm, **kwargs):
    model = ResNet(block, layers, num_classes, group_norm, **kwargs)
    if pretrained:
        model.load_state_dict(torch.load(MODEL_DIR))
    return model


class PD2SEModel(nn.Module):
    def __init__(self):
        super(PD2SEModel, self).__init__()
        # res_net_50_base = models.resnet50(pretrained=True)
        res_net_50_gn = _resnet_gn('resnet50', Bottleneck, [3, 4, 6, 3],
                                   pretrained=False, num_classes=45, group_norm=16)
        # res_net_50_base = models.resnet50()
        children_list = list(res_net_50_gn.children())
        ...

And the error is as follows:

Traceback (most recent call last):
  File "/home/wzq/Work4money/AI-challenge-plant/code/train1.py", line 305, in <module>
    train_model()
  File "/home/wzq/Work4money/AI-challenge-plant/code/train1.py", line 140, in train_model
    out1, out2, out3 = PD2SE(img)
  File "/usr/local/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/wzq/Work4money/AI-challenge-plant/code/network.py", line 55, in forward
    severity_class = self.Layer0(x)
  File "/usr/local/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
  File "/usr/local/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/normalization.py", line 225, in forward
    input, self.num_groups, self.weight, self.bias, self.eps)
  File "/usr/local/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/functional.py", line 1692, in group_norm
    torch.backends.cudnn.enabled)
RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [16] and input of shape [32, 64, 112, 112]

Hoping for your reply. Thanks a lot.
st31830
Solved by ptrblck in post #4 Rewriting the model definition would of course work. However, using getattr and setattr might be the hacky but faster way: class MyModel(nn.Module): def __init__(self): super(MyModel, self).__init__() self.conv1 = nn.Conv2d(3, 3, 3, 1, 1) self.bn1 = nn.BatchNorm2d(3) …
st31831
It seems you are passing the arguments to your norm2d method in ResNet in the wrong order:

self.bn1 = norm2d(64, group_norm)

I assume it should be created as norm2d(group_norm, 64), as done in Bottleneck.
st31832
Hi, @ptrblck. Thanks for your reply. I have tried another way to change it, and it works. The code is as follows:

class ResNet(torchvision.models.resnet.ResNet):
    def __init__(self, block, layers, num_classes=1000, group_norm=False):
        if group_norm:
            norm_layer = lambda x: nn.GroupNorm(32, x)
        else:
            norm_layer = None
        super(ResNet, self).__init__(block, layers, num_classes, norm_layer=norm_layer)
        if not group_norm:
            self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=0, ceil_mode=True)  # change
            for i in range(2, 5):
                getattr(self, 'layer%d' % i)[0].conv1.stride = (2, 2)
                getattr(self, 'layer%d' % i)[0].conv2.stride = (1, 1)

But for my assignment I should probably change the BN in ShuffleNet to GN instead. That really puzzles me, since I import ShuffleUnit from pytorchcv and it does not have an attribute like norm_layer in ResNet. By the way, I have tried the named_modules method and selected the BN layers from ShuffleNet, but it didn't work, of course; we can't change a value in a tuple.

modules = model.named_modules()
for i in modules:
    if isinstance(i[1], nn.BatchNorm2d):
        i[2] = lambda x: nn.GroupNorm(32, x)
        print(i)

So I'm trying to rewrite the ShuffleUnit class from pytorchcv and add the new attribute to it, like this:

class ShuffleUnit(pytorchcv.models.shufflenetv2.ShuffleUnit):
    def __init__(self, in_channels, out_channels, downsample=False, use_residual=True, use_se=False, group_norm=False):
        if group_norm:
            norm_layer = lambda x: nn.GroupNorm(32, x)
        else:
            norm_layer = None
        super(ShuffleUnit, self).__init__()

Actually, I stopped at this point, since I'm not sure this method would work. Is there any advice about this? The ShuffleUnit code from pytorchcv is as follows:

class ShuffleUnit(nn.Module):
    """
    ShuffleNetV2 unit.

    Parameters:
    ----------
    in_channels : int
        Number of input channels.
    out_channels : int
        Number of output channels.
    downsample : bool
        Whether do downsample.
    use_se : bool
        Whether to use SE block.
    use_residual : bool
        Whether to use residual connection.
    """
    def __init__(self, in_channels, out_channels, downsample, use_se, use_residual):
        super(ShuffleUnit, self).__init__()
        self.downsample = downsample
        self.use_se = use_se
        self.use_residual = use_residual
        mid_channels = out_channels // 2

        self.compress_conv1 = conv1x1(
            in_channels=(in_channels if self.downsample else mid_channels),
            out_channels=mid_channels)
        self.compress_bn1 = nn.BatchNorm2d(num_features=mid_channels)
        self.dw_conv2 = depthwise_conv3x3(
            channels=mid_channels,
            stride=(2 if self.downsample else 1))
        self.dw_bn2 = nn.BatchNorm2d(num_features=mid_channels)
        self.expand_conv3 = conv1x1(
            in_channels=mid_channels,
            out_channels=mid_channels)
        self.expand_bn3 = nn.BatchNorm2d(num_features=mid_channels)
        if self.use_se:
            self.se = SEBlock(channels=mid_channels)
        if downsample:
            self.dw_conv4 = depthwise_conv3x3(
                channels=in_channels,
                stride=2)
            self.dw_bn4 = nn.BatchNorm2d(num_features=in_channels)
            self.expand_conv5 = conv1x1(
                in_channels=in_channels,
                out_channels=mid_channels)
            self.expand_bn5 = nn.BatchNorm2d(num_features=mid_channels)
        self.activ = nn.ReLU(inplace=True)
        self.c_shuffle = ChannelShuffle(
            channels=out_channels,
            groups=2)

Thanks a lot.
st31833
Rewriting the model definition would of course work. However, using getattr and setattr might be the hacky but faster way:

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv1 = nn.Conv2d(3, 3, 3, 1, 1)
        self.bn1 = nn.BatchNorm2d(3)

    def forward(self, x):
        x = self.bn1(self.conv1(x))
        return x

model = MyModel()
print(model)

for name, module in model.named_modules():
    if isinstance(module, nn.BatchNorm2d):
        # Get current bn layer
        bn = getattr(model, name)
        # Create new gn layer
        gn = nn.GroupNorm(1, bn.num_features)
        # Assign gn
        print('Swapping {} with {}'.format(bn, gn))
        setattr(model, name, gn)

print(model)

Let me know if this would work for your model.
st31834
Thank you @ptrblck. This really works for me. Before your answer I did not realize that we could use getattr and setattr to change the model. I will take notes on this. Have a nice day!
st31835
@ptrblck I have an EfficientNet model and I want to replace all of its batchnorm layers with groupnorm. How do I do it? Here is my model code:

out_dim = 5
enet_type = 'efficientnet-b0'
pretrained_model = {
    'efficientnet-b0': '../input/efficientnet-pytorch/efficientnet-b0-08094119.pth'
}

class enetv2(nn.Module):
    def __init__(self, backbone, out_dim):
        super(enetv2, self).__init__()
        self.enet = enet.EfficientNet.from_name(backbone)
        self.enet.load_state_dict(torch.load(pretrained_model[backbone]))
        self.myfc = nn.Linear(self.enet._fc.in_features, out_dim)
        self.enet._fc = nn.Identity()

    def extract(self, x):
        return self.enet(x)

    def forward(self, x):
        x = self.extract(x)
        x = self.myfc(x)
        return x

model = enetv2(enet_type, out_dim=out_dim)
model = model.to(device)

With your code I get this error:

AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>
      2     if isinstance(module, nn.BatchNorm2d):
      3         # Get current bn layer
----> 4         bn = getattr(model, name)
      5         # Create new gn layer
      6         gn = nn.GroupNorm(1, bn.num_features)

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __getattr__(self, name)
    592                 return modules[name]
    593         raise AttributeError("'{}' object has no attribute '{}'".format(
--> 594             type(self).__name__, name))
    595
    596     def __setattr__(self, name, value):

AttributeError: 'enetv2' object has no attribute 'enet._bn0'
st31836
however model.enet._bn0 gives me this output : BatchNorm2d(32, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
st31837
My approach is hacky, as described, and if it's not working on your particular model, I would recommend creating a custom model by deriving from your base model and explicitly replacing the layers you would like to change.
st31838
Sorry @ptrblck, I have never converted a model into a custom model and I don't know how to do it efficiently; a little help from you would be highly appreciated. I was using EfficientNet from here: https://github.com/lukemelas/EfficientNet-PyTorch
st31839
@ptrblck i tried to change all the bn layers to gn layers and was able to change but while training model with groupnorm i am getting this error : --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <timed exec> in <module> <ipython-input-22-cf57c28a75f1> in train_epoch(loader, optimizer) 9 loss_func = criterion 10 optimizer.zero_grad() ---> 11 logits = model(data) 12 loss = loss_func(logits, target) 13 loss.backward() /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 548 result = self._slow_forward(*input, **kwargs) 549 else: --> 550 result = self.forward(*input, **kwargs) 551 for hook in self._forward_hooks.values(): 552 hook_result = hook(self, input, result) <ipython-input-10-902386208771> in forward(self, x) 26 27 def forward(self, x): ---> 28 x = self.extract(x) 29 x = self.myfc(x) 30 return x <ipython-input-10-902386208771> in extract(self, x) 23 24 def extract(self, x): ---> 25 return self.enet(x) 26 27 def forward(self, x): /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 548 result = self._slow_forward(*input, **kwargs) 549 else: --> 550 result = self.forward(*input, **kwargs) 551 for hook in self._forward_hooks.values(): 552 hook_result = hook(self, input, result) /kaggle/input/efficientnet-pytorch/EfficientNet-PyTorch/EfficientNet-PyTorch-master/efficientnet_pytorch/model.py in forward(self, inputs) 176 177 # Convolution layers --> 178 x = self.extract_features(inputs) 179 180 # Pooling and final linear layer /kaggle/input/efficientnet-pytorch/EfficientNet-PyTorch/EfficientNet-PyTorch-master/efficientnet_pytorch/model.py in extract_features(self, inputs) 158 159 # Stem --> 160 x = relu_fn(self._bn0(self._conv_stem(inputs))) 161 162 # Blocks /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 548 result = self._slow_forward(*input, **kwargs) 549 else: --> 550 result = self.forward(*input, **kwargs) 551 for hook in self._forward_hooks.values(): 552 hook_result = hook(self, input, result) /opt/conda/lib/python3.7/site-packages/torch/nn/modules/normalization.py in forward(self, input) 223 def forward(self, input): 224 return F.group_norm( --> 225 input, self.num_groups, self.weight, self.bias, self.eps) 226 227 def extra_repr(self): /opt/conda/lib/python3.7/site-packages/torch/nn/functional.py in group_norm(input, num_groups, weight, bias, eps) 1971 + list(input.size()[2:])) 1972 return torch.group_norm(input, num_groups, weight, bias, eps, -> 1973 torch.backends.cudnn.enabled) 1974 1975 RuntimeError: expected device cpu but got device cuda:0 here is my full model : enetv2( (enet): EfficientNet( (_conv_stem): Conv2dStaticSamePadding( 3, 32, kernel_size=(3, 3), stride=(2, 2), bias=False (static_padding): ZeroPad2d(padding=(0, 1, 0, 1), value=0.0) ) (_bn0): GroupNorm(1, 32, eps=1e-05, affine=True) (_blocks): ModuleList( (0): MBConvBlock( (_depthwise_conv): Conv2dStaticSamePadding( 32, 32, kernel_size=(3, 3), stride=[1, 1], groups=32, bias=False (static_padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0) ) (_bn1): GroupNorm(1, 32, eps=1e-05, affine=True) (_se_reduce): Conv2dStaticSamePadding( 32, 8, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) (_se_expand): Conv2dStaticSamePadding( 8, 32, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) (_project_conv): Conv2dStaticSamePadding( 32, 16, kernel_size=(1, 1), stride=(1, 1), 
bias=False (static_padding): Identity() ) (_bn2): GroupNorm(1, 16, eps=1e-05, affine=True) ) (1): MBConvBlock( (_expand_conv): Conv2dStaticSamePadding( 16, 96, kernel_size=(1, 1), stride=(1, 1), bias=False (static_padding): Identity() ) (_bn0): GroupNorm(1, 96, eps=1e-05, affine=True) (_depthwise_conv): Conv2dStaticSamePadding( 96, 96, kernel_size=(3, 3), stride=[2, 2], groups=96, bias=False (static_padding): ZeroPad2d(padding=(0, 1, 0, 1), value=0.0) ) (_bn1): GroupNorm(1, 96, eps=1e-05, affine=True) (_se_reduce): Conv2dStaticSamePadding( 96, 4, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) (_se_expand): Conv2dStaticSamePadding( 4, 96, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) (_project_conv): Conv2dStaticSamePadding( 96, 24, kernel_size=(1, 1), stride=(1, 1), bias=False (static_padding): Identity() ) (_bn2): GroupNorm(1, 24, eps=1e-05, affine=True) ) (2): MBConvBlock( (_expand_conv): Conv2dStaticSamePadding( 24, 144, kernel_size=(1, 1), stride=(1, 1), bias=False (static_padding): Identity() ) (_bn0): GroupNorm(1, 144, eps=1e-05, affine=True) (_depthwise_conv): Conv2dStaticSamePadding( 144, 144, kernel_size=(3, 3), stride=(1, 1), groups=144, bias=False (static_padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0) ) (_bn1): GroupNorm(1, 144, eps=1e-05, affine=True) (_se_reduce): Conv2dStaticSamePadding( 144, 6, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) (_se_expand): Conv2dStaticSamePadding( 6, 144, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) (_project_conv): Conv2dStaticSamePadding( 144, 24, kernel_size=(1, 1), stride=(1, 1), bias=False (static_padding): Identity() ) (_bn2): GroupNorm(1, 24, eps=1e-05, affine=True) ) (3): MBConvBlock( (_expand_conv): Conv2dStaticSamePadding( 24, 144, kernel_size=(1, 1), stride=(1, 1), bias=False (static_padding): Identity() ) (_bn0): GroupNorm(1, 144, eps=1e-05, affine=True) (_depthwise_conv): Conv2dStaticSamePadding( 144, 144, kernel_size=(5, 5), stride=[2, 2], groups=144, bias=False (static_padding): ZeroPad2d(padding=(1, 2, 1, 2), value=0.0) ) (_bn1): GroupNorm(1, 144, eps=1e-05, affine=True) (_se_reduce): Conv2dStaticSamePadding( 144, 6, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) (_se_expand): Conv2dStaticSamePadding( 6, 144, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) (_project_conv): Conv2dStaticSamePadding( 144, 40, kernel_size=(1, 1), stride=(1, 1), bias=False (static_padding): Identity() ) (_bn2): GroupNorm(1, 40, eps=1e-05, affine=True) ) (4): MBConvBlock( (_expand_conv): Conv2dStaticSamePadding( 40, 240, kernel_size=(1, 1), stride=(1, 1), bias=False (static_padding): Identity() ) (_bn0): GroupNorm(1, 240, eps=1e-05, affine=True) (_depthwise_conv): Conv2dStaticSamePadding( 240, 240, kernel_size=(5, 5), stride=(1, 1), groups=240, bias=False (static_padding): ZeroPad2d(padding=(2, 2, 2, 2), value=0.0) ) (_bn1): GroupNorm(1, 240, eps=1e-05, affine=True) (_se_reduce): Conv2dStaticSamePadding( 240, 10, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) (_se_expand): Conv2dStaticSamePadding( 10, 240, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) (_project_conv): Conv2dStaticSamePadding( 240, 40, kernel_size=(1, 1), stride=(1, 1), bias=False (static_padding): Identity() ) (_bn2): GroupNorm(1, 40, eps=1e-05, affine=True) ) (5): MBConvBlock( (_expand_conv): Conv2dStaticSamePadding( 40, 240, kernel_size=(1, 1), stride=(1, 1), bias=False (static_padding): Identity() ) (_bn0): GroupNorm(1, 240, 
eps=1e-05, affine=True) (_depthwise_conv): Conv2dStaticSamePadding( 240, 240, kernel_size=(3, 3), stride=[2, 2], groups=240, bias=False (static_padding): ZeroPad2d(padding=(0, 1, 0, 1), value=0.0) ) (_bn1): GroupNorm(1, 240, eps=1e-05, affine=True) (_se_reduce): Conv2dStaticSamePadding( 240, 10, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) (_se_expand): Conv2dStaticSamePadding( 10, 240, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) (_project_conv): Conv2dStaticSamePadding( 240, 80, kernel_size=(1, 1), stride=(1, 1), bias=False (static_padding): Identity() ) (_bn2): GroupNorm(1, 80, eps=1e-05, affine=True) ) (6): MBConvBlock( (_expand_conv): Conv2dStaticSamePadding( 80, 480, kernel_size=(1, 1), stride=(1, 1), bias=False (static_padding): Identity() ) (_bn0): GroupNorm(1, 480, eps=1e-05, affine=True) (_depthwise_conv): Conv2dStaticSamePadding( 480, 480, kernel_size=(3, 3), stride=(1, 1), groups=480, bias=False (static_padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0) ) (_bn1): GroupNorm(1, 480, eps=1e-05, affine=True) (_se_reduce): Conv2dStaticSamePadding( 480, 20, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) (_se_expand): Conv2dStaticSamePadding( 20, 480, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) (_project_conv): Conv2dStaticSamePadding( 480, 80, kernel_size=(1, 1), stride=(1, 1), bias=False (static_padding): Identity() ) (_bn2): GroupNorm(1, 80, eps=1e-05, affine=True) ) (7): MBConvBlock( (_expand_conv): Conv2dStaticSamePadding( 80, 480, kernel_size=(1, 1), stride=(1, 1), bias=False (static_padding): Identity() ) (_bn0): GroupNorm(1, 480, eps=1e-05, affine=True) (_depthwise_conv): Conv2dStaticSamePadding( 480, 480, kernel_size=(3, 3), stride=(1, 1), groups=480, bias=False (static_padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0) ) (_bn1): GroupNorm(1, 480, eps=1e-05, affine=True) (_se_reduce): Conv2dStaticSamePadding( 480, 20, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) (_se_expand): Conv2dStaticSamePadding( 20, 480, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) (_project_conv): Conv2dStaticSamePadding( 480, 80, kernel_size=(1, 1), stride=(1, 1), bias=False (static_padding): Identity() ) (_bn2): GroupNorm(1, 80, eps=1e-05, affine=True) ) (8): MBConvBlock( (_expand_conv): Conv2dStaticSamePadding( 80, 480, kernel_size=(1, 1), stride=(1, 1), bias=False (static_padding): Identity() ) (_bn0): GroupNorm(1, 480, eps=1e-05, affine=True) (_depthwise_conv): Conv2dStaticSamePadding( 480, 480, kernel_size=(5, 5), stride=[1, 1], groups=480, bias=False (static_padding): ZeroPad2d(padding=(2, 2, 2, 2), value=0.0) ) (_bn1): GroupNorm(1, 480, eps=1e-05, affine=True) (_se_reduce): Conv2dStaticSamePadding( 480, 20, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) (_se_expand): Conv2dStaticSamePadding( 20, 480, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) (_project_conv): Conv2dStaticSamePadding( 480, 112, kernel_size=(1, 1), stride=(1, 1), bias=False (static_padding): Identity() ) (_bn2): GroupNorm(1, 112, eps=1e-05, affine=True) ) (9): MBConvBlock( (_expand_conv): Conv2dStaticSamePadding( 112, 672, kernel_size=(1, 1), stride=(1, 1), bias=False (static_padding): Identity() ) (_bn0): GroupNorm(1, 672, eps=1e-05, affine=True) (_depthwise_conv): Conv2dStaticSamePadding( 672, 672, kernel_size=(5, 5), stride=(1, 1), groups=672, bias=False (static_padding): ZeroPad2d(padding=(2, 2, 2, 2), value=0.0) ) (_bn1): GroupNorm(1, 672, eps=1e-05, affine=True) 
(_se_reduce): Conv2dStaticSamePadding( 672, 28, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) (_se_expand): Conv2dStaticSamePadding( 28, 672, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) (_project_conv): Conv2dStaticSamePadding( 672, 112, kernel_size=(1, 1), stride=(1, 1), bias=False (static_padding): Identity() ) (_bn2): GroupNorm(1, 112, eps=1e-05, affine=True) ) (10): MBConvBlock( (_expand_conv): Conv2dStaticSamePadding( 112, 672, kernel_size=(1, 1), stride=(1, 1), bias=False (static_padding): Identity() ) (_bn0): GroupNorm(1, 672, eps=1e-05, affine=True) (_depthwise_conv): Conv2dStaticSamePadding( 672, 672, kernel_size=(5, 5), stride=(1, 1), groups=672, bias=False (static_padding): ZeroPad2d(padding=(2, 2, 2, 2), value=0.0) ) (_bn1): GroupNorm(1, 672, eps=1e-05, affine=True) (_se_reduce): Conv2dStaticSamePadding( 672, 28, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) (_se_expand): Conv2dStaticSamePadding( 28, 672, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) (_project_conv): Conv2dStaticSamePadding( 672, 112, kernel_size=(1, 1), stride=(1, 1), bias=False (static_padding): Identity() ) (_bn2): GroupNorm(1, 112, eps=1e-05, affine=True) ) (11): MBConvBlock( (_expand_conv): Conv2dStaticSamePadding( 112, 672, kernel_size=(1, 1), stride=(1, 1), bias=False (static_padding): Identity() ) (_bn0): GroupNorm(1, 672, eps=1e-05, affine=True) (_depthwise_conv): Conv2dStaticSamePadding( 672, 672, kernel_size=(5, 5), stride=[2, 2], groups=672, bias=False (static_padding): ZeroPad2d(padding=(1, 2, 1, 2), value=0.0) ) (_bn1): GroupNorm(1, 672, eps=1e-05, affine=True) (_se_reduce): Conv2dStaticSamePadding( 672, 28, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) (_se_expand): Conv2dStaticSamePadding( 28, 672, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) (_project_conv): Conv2dStaticSamePadding( 672, 192, kernel_size=(1, 1), stride=(1, 1), bias=False (static_padding): Identity() ) (_bn2): GroupNorm(1, 192, eps=1e-05, affine=True) ) (12): MBConvBlock( (_expand_conv): Conv2dStaticSamePadding( 192, 1152, kernel_size=(1, 1), stride=(1, 1), bias=False (static_padding): Identity() ) (_bn0): GroupNorm(1, 1152, eps=1e-05, affine=True) (_depthwise_conv): Conv2dStaticSamePadding( 1152, 1152, kernel_size=(5, 5), stride=(1, 1), groups=1152, bias=False (static_padding): ZeroPad2d(padding=(2, 2, 2, 2), value=0.0) ) (_bn1): GroupNorm(1, 1152, eps=1e-05, affine=True) (_se_reduce): Conv2dStaticSamePadding( 1152, 48, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) (_se_expand): Conv2dStaticSamePadding( 48, 1152, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) (_project_conv): Conv2dStaticSamePadding( 1152, 192, kernel_size=(1, 1), stride=(1, 1), bias=False (static_padding): Identity() ) (_bn2): GroupNorm(1, 192, eps=1e-05, affine=True) ) (13): MBConvBlock( (_expand_conv): Conv2dStaticSamePadding( 192, 1152, kernel_size=(1, 1), stride=(1, 1), bias=False (static_padding): Identity() ) (_bn0): GroupNorm(1, 1152, eps=1e-05, affine=True) (_depthwise_conv): Conv2dStaticSamePadding( 1152, 1152, kernel_size=(5, 5), stride=(1, 1), groups=1152, bias=False (static_padding): ZeroPad2d(padding=(2, 2, 2, 2), value=0.0) ) (_bn1): GroupNorm(1, 1152, eps=1e-05, affine=True) (_se_reduce): Conv2dStaticSamePadding( 1152, 48, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) (_se_expand): Conv2dStaticSamePadding( 48, 1152, kernel_size=(1, 1), stride=(1, 1) (static_padding): 
Identity() ) (_project_conv): Conv2dStaticSamePadding( 1152, 192, kernel_size=(1, 1), stride=(1, 1), bias=False (static_padding): Identity() ) (_bn2): GroupNorm(1, 192, eps=1e-05, affine=True) ) (14): MBConvBlock( (_expand_conv): Conv2dStaticSamePadding( 192, 1152, kernel_size=(1, 1), stride=(1, 1), bias=False (static_padding): Identity() ) (_bn0): GroupNorm(1, 1152, eps=1e-05, affine=True) (_depthwise_conv): Conv2dStaticSamePadding( 1152, 1152, kernel_size=(5, 5), stride=(1, 1), groups=1152, bias=False (static_padding): ZeroPad2d(padding=(2, 2, 2, 2), value=0.0) ) (_bn1): GroupNorm(1, 1152, eps=1e-05, affine=True) (_se_reduce): Conv2dStaticSamePadding( 1152, 48, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) (_se_expand): Conv2dStaticSamePadding( 48, 1152, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) (_project_conv): Conv2dStaticSamePadding( 1152, 192, kernel_size=(1, 1), stride=(1, 1), bias=False (static_padding): Identity() ) (_bn2): GroupNorm(1, 192, eps=1e-05, affine=True) ) (15): MBConvBlock( (_expand_conv): Conv2dStaticSamePadding( 192, 1152, kernel_size=(1, 1), stride=(1, 1), bias=False (static_padding): Identity() ) (_bn0): GroupNorm(1, 1152, eps=1e-05, affine=True) (_depthwise_conv): Conv2dStaticSamePadding( 1152, 1152, kernel_size=(3, 3), stride=[1, 1], groups=1152, bias=False (static_padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0) ) (_bn1): GroupNorm(1, 1152, eps=1e-05, affine=True) (_se_reduce): Conv2dStaticSamePadding( 1152, 48, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) (_se_expand): Conv2dStaticSamePadding( 48, 1152, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) (_project_conv): Conv2dStaticSamePadding( 1152, 320, kernel_size=(1, 1), stride=(1, 1), bias=False (static_padding): Identity() ) (_bn2): GroupNorm(1, 320, eps=1e-05, affine=True) ) ) (_conv_head): Conv2dStaticSamePadding( 320, 1280, kernel_size=(1, 1), stride=(1, 1), bias=False (static_padding): Identity() ) (_bn1): GroupNorm(1, 1280, eps=1e-05, affine=True) (_fc): Identity() ) (myfc): Linear(in_features=1280, out_features=5, bias=True) (avg_pool): GeM(p=3.0000, eps=1e-06) )
st31840
Thanks, I just solved the error: after replacing bn with gn I had to call model = model.to(device) again.
st31841
Hi, I know it's way too late, but for everyone who is trying to replace batch norm with group norm for more complex models like you wanted to do, you can use the following hacky approach:

def batch_norm_to_group_norm(layer):
    """Iterates over a whole model (or layer of a model) and replaces every
    batch norm 2D with a group norm

    Args:
        layer: model or one layer of a model like resnet34.layer1 or Sequential(), ...
    """
    for name, module in layer.named_modules():
        if name:
            try:
                # name might be something like: model.layer1.sequential.0.conv1
                # --> this wont work. Except this case
                sub_layer = getattr(layer, name)
                if isinstance(sub_layer, torch.nn.BatchNorm2d):
                    num_channels = sub_layer.num_features
                    # first level of current layer or model contains a batch norm --> replacing.
                    layer._modules[name] = torch.nn.GroupNorm(
                        constants.GROUP_NORM_LOOKUP[num_channels], num_channels)
            except AttributeError:
                # go deeper: set name to layer1, getattr will return layer1 --> call this func again
                name = name.split('.')[0]
                sub_layer = getattr(layer, name)
                sub_layer = batch_norm_to_group_norm(sub_layer)
                layer.__setattr__(name=name, value=sub_layer)
    return layer

And the GROUP_NORM_LOOKUP is built as follows:

# group norm paper stated, that 16 channels per group are best
# group norm paper stated, that 8 or 32 groups are best
# channels per group = num_features / num_group
# channels per group have less influence when being >= 8, so here I try to set optimal values.
# Paper values:
# groups (G):            64    32    16    8     4     2     1 (=LN)
# validation error in %: 24.6  24.1  24.6  24.4  24.6  24.7  25.3
#                        0.5   -     0.5   0.3   0.5   0.6   1.2
# channels per group:    64    32    16    8     4     2     1 (=IN)
# validation error in %: 24.4  24.5  24.2  24.3  24.8  25.6  28.4
#                        0.2   0.3   -     0.1   0.6   1.4   4.2

# num_channels: num_groups
GROUP_NORM_LOOKUP = {
    16: 2,     # -> channels per group: 8
    32: 4,     # -> channels per group: 8
    64: 8,     # -> channels per group: 8
    128: 8,    # -> channels per group: 16
    256: 16,   # -> channels per group: 16
    512: 32,   # -> channels per group: 16
    1024: 32,  # -> channels per group: 32
    2048: 32,  # -> channels per group: 64
}

Of course you can use a different approach for setting the number of groups.
st31842
Hi, thanks for your answer, but I have a question: what does the last line of the code do? layer.__setattr__(name=name, value=sub_layer)
st31843
Hi, suppose for the first call of this function that layer is a neural network containing an nn.Sequential() with batch norm. While iterating we will at some point hit the sequential layer, which raises the AttributeError and triggers the recursive part. In this case the function passes the current sub_layer (the nn.Sequential()), separated from the actual model, and changes every batch norm inside it to a group norm. Since the actual model has not changed yet, we then need to overwrite the original nn.Sequential() with the returned one. I think you are wondering about the name, right? The name returned by layer.named_modules() is a chained name as described in my function (I can't edit the post, but model.layer1.sequential… is actually not correct; it was only meant to show the theoretical build. The correct name would be layer1.0…). Example with a ResNet: the network has attributes like self.layer1, which is a sequential. When iterating over the modules, PyTorch returns name = layer1.0 or layer1.1 and so on, where layer1 is the attribute of the ResNet model and 0 is the attribute of layer1. Zero in that case is the index of the first module in the sequential as a string, and the module is added like a key-value pair. See the sequential __init__, which is built as follows: def __init__(self, *args): super(Sequential, self).__init__() if len(args) == 1 and isinstance(args[0], OrderedDict): for key, module in args[0].items(): self.add_module(key, module) else: for idx, module in enumerate(args): self.add_module(str(idx), module) The sequential actually adds each module by passing str(idx). You will overwrite the resnet.layer1 attribute multiple times, but since this is done in no time, I did not insert another mechanism to check whether layer1 was already processed. So as a result: I split the name to get layer1 out of layer1.0 then I get the sequential by getattr I change batch norm to group norm inside the sequential I add the corrected layer1 back to the ResNet by setattr.
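To make the naming concrete, here is a minimal illustration (just a sketch, independent of the actual ResNet):
import torch.nn as nn

class Tiny(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))

tiny = Tiny()
for name, module in tiny.named_modules():
    print(repr(name), type(module).__name__)
# '' Tiny
# 'layer1' Sequential
# 'layer1.0' Conv2d
# 'layer1.1' BatchNorm2d
getattr(tiny, 'layer1.1') raises an AttributeError, while getattr(tiny, 'layer1') returns the sequential, which is exactly what the except branch of the helper relies on.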
st31844
Thank you very much for your detailed answer. Is it true that if you modify the BN inside sub_layer, the actual layer will also be modified, so we don't need the second __setattr__ (layer.__setattr__(name=name, value=sub_layer))?
st31845
Ah, now I get your point. I had not thought about that, but you are right: the __setattr__ is not needed here. Thanks for pointing that out.
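A tiny sketch illustrating why (the submodule returned by getattr is the same object, not a copy):
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))

net = Net()
sub = getattr(net, 'layer1')            # same object as net.layer1
sub._modules['1'] = nn.GroupNorm(2, 8)  # mutate it in place

print(net.layer1[1])  # GroupNorm(2, 8, eps=1e-05, affine=True) -> the parent sees the change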
st31846
The metrics during train() are not bad, but when it comes to validation/eval mode the metrics are a disaster. So I tried printing the predicted values, as in the graph below. image835×591 51.7 KB The blue curve is the ground truth and the orange one is my prediction. From the graph, the output of the model in validation mode is almost the same every time. So I'm wondering why this happens and what I can do to fix it. Here are the relevant parts of my code. To rule out overfitting I actually use the same dataset for training and validation, but the validation output is still constant. Pseudo code:
def train(train_inf, model, criterion, optimizer):
    # some unimportant code is omitted
    model.train()
    for idx, movie in enumerate(train_video_names):
        movie_path = os.path.join(video_path, movie)
        train_dataset = video_dataset(movie_path, annot_path)
        train_loader = DataLoader(train_dataset, batch_size=4, num_workers=4)
        for i, data_batch in enumerate(train_loader):
            img_data, label = data_batch
            img_data = img_data.type('torch.FloatTensor').cuda()
            label = label.type('torch.FloatTensor').cuda()
            output = model(img_data)
            loss = criterion(output, label)
            optimizer.zero_grad()
            loss.backward()
            nn.utils.clip_grad_norm_(model.parameters(), max_norm=20, norm_type=2)
            optimizer.step()
def validate(val_inf, model, criterion, optimizer):
    # some unimportant code is omitted
    model.eval()
    for idx, movie in enumerate(val_video_names):
        movie_path = os.path.join(video_path, movie)
        val_dataset = video_dataset(movie_path, annot_path)
        val_loader = DataLoader(val_dataset, batch_size=4, num_workers=4)
        for i, data_batch in enumerate(val_loader):
            img_data, label = data_batch
            img_data = img_data.type('torch.FloatTensor').cuda()
            label = label.type('torch.FloatTensor').cuda()
            output = model(img_data)
            loss = criterion(output, label)
def main():
    train(train_inf, model, criterion, optimizer)
    validate(val_inf, model, criterion, optimizer)
Notes:
1. I have already made sure that the data used in validation mode is not the same (not even close), yet the output is still almost identical.
2. Because I use the same data in train mode and eval mode, it should not be an overfitting problem.
3. I use the transformer architecture. I don't think that is the cause, I just want to provide more information.
Thanks for your help. Any kind of suggestion is appreciated.
st31847
If you remove the model.eval() for testing purposes, does it improve the results? It might be useful to print or track some tensors layer by layer to see where the divergence begins during training and validation.
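A rough sketch of how such layer-by-layer tracking could look (hypothetical helper, assuming model and a sample batch img_data from your snippet; adapt as needed):
import torch

def register_stat_hooks(model, stats):
    # record mean/std of every leaf module's output in the given dict
    handles = []
    for name, module in model.named_modules():
        if len(list(module.children())) == 0:  # leaf modules only
            def hook(mod, inp, out, name=name):
                if torch.is_tensor(out):
                    stats[name] = (out.mean().item(), out.std().item())
            handles.append(module.register_forward_hook(hook))
    return handles

stats_train, stats_eval = {}, {}

model.train()
handles = register_stat_hooks(model, stats_train)
with torch.no_grad():
    model(img_data)
for h in handles:
    h.remove()

model.eval()
handles = register_stat_hooks(model, stats_eval)
with torch.no_grad():
    model(img_data)
for h in handles:
    h.remove()

for name in stats_train:
    print(name, 'train:', stats_train[name], 'eval:', stats_eval.get(name))
The first layer whose statistics differ noticeably between the two runs is a good place to start looking (e.g. dropout or normalization layers that behave differently in train/eval mode).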
st31848
Sorry for the late reply. I ran some experiments today. The behaviour is the same if I remove model.eval(). I also tried printing the parameters of specific layers, and they are the same parameters. After further debugging, I found that when I stop at the last batch in train(), no matter what I input, it produces almost the same result. As you can see, when I stop at the last batch in train(), the result is the same, but in the last batch the result will change. Do you have any more suggestions?
st31849
image1092×502 30.5 KB Here is part of my code. I have no idea what I am doing that could cause this to happen.
st31850
Hello all. I'm trying to build a neural machine translation system and to load the train and validation CSVs, but after more than an hour TabularDataset is still loading. Can you please tell me a solution or an alternative way to load the dataset? I have uploaded an image of the code; please click on it to see it in better quality. screencapture-colab-research-google-drive-1Kwx-samRtoy5d6ZYObJ37vA3yAuTR-sM-2021-05-26-19_22_211264×1011 56.7 KB
st31851
Hello, I have the below code which uses a fine tuned BERT model to predict if a sentence is positive or negative. I have 2 problems: The BertClassifier class is defined in another .py file where i trained and fine tuned the model. If i try to import just the class by from bert import BertClassifier and run the code, it will start to train the model again and that’s why i defined the class again. Is there any way i can use the import without starting the training? If I use the below code, I receive the error Input, output and indices must be on the current device. It works fine if it is run on cpu but it takes some time to return the result and this is why i would like to run it on GPU. What can i do, modify, to resolve this? import torch from transformers import BertTokenizer, BertModel import torch.nn as nn class BertClassifier(nn.Module): """Bert Model for Classification Tasks.""" def __init__(self, freeze_bert=False): super(BertClassifier, self).__init__() # Instantiate BERT model self.bert = BertModel.from_pretrained('bert-base-multilingual-uncased') self.lstm = nn.LSTM(768, 50, batch_first=True, bidirectional=True) self.linear = nn.Linear(50*2 , 2) # Freeze the BERT model if freeze_bert: for param in self.bert.parameters(): param.requires_grad = False def forward(self, input_ids, attention_mask): # Feed input to BERT outputs = self.bert(input_ids=input_ids,attention_mask=attention_mask) sequence_output = outputs[0] sequence_output, _ = self.lstm(sequence_output) linear_output = self.linear(sequence_output[:, -1]) return linear_output device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') model = BertClassifier(freeze_bert=False) tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-uncased', do_lower_case=True) model.load_state_dict(torch.load('finetuned_model.pt')) max_input_len = tokenizer.max_model_input_sizes['bert-base-multilingual-uncased'] init_token_id = tokenizer.cls_token_id # input token eos_token_id = tokenizer.sep_token_id # end of sentence token # function to make sentiment prediction during inference def predict_sentiment(model, tokenizer, sentence): model.eval() tokens = tokenizer.tokenize(sentence) tokens = tokens[:max_input_len - 2] indexed = [init_token_id] + tokenizer.convert_tokens_to_ids(tokens) + [eos_token_id] tensor = torch.LongTensor(indexed).to(device) tensor = tensor.unsqueeze(0) padded_sequences = tokenizer([sentence], padding=True) attention_mask = padded_sequences["attention_mask"] attention_mask = torch.LongTensor(attention_mask) prediction = torch.sigmoid(model(tensor,attention_mask)) if prediction[0][0] > prediction[0][1]: return "Negative" else: return "Positive" sentiment = predict_sentiment(model, tokenizer, "It is such a wonderful weather outside.") print(sentiment)
st31852
Solved by ptrblck in post #2 I guess the other .py file doesn’t use a guard such as if __name__ == "__main__", but executes code in the global space directly. If that’s the case, move the executable code (which would start the training) into the if-clause protection. Make sure all model parameters as well as the inputs and…
st31853
I guess the other .py file doesn’t use a guard such as if __name__ == "__main__", but executes code in the global space directly. If that’s the case, move the executable code (which would start the training) into the if-clause protection. Make sure all model parameters as well as the inputs and targets are transferred properly to the GPU via .cuda() or .to('cuda'). Based on the error message it seems that at least some tensors are still on the CPU.
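A short sketch of how both points could look in code (based on the snippet in the question; the layout of the training file is assumed, not tested):
# 1) in bert.py: protect the training code so that importing BertClassifier
#    does not start training again
# if __name__ == "__main__":
#     ...  # training loop goes here

# 2) move the model and *all* input tensors to the same device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = BertClassifier(freeze_bert=False)
model.load_state_dict(torch.load('finetuned_model.pt'))
model = model.to(device)

# inside predict_sentiment, also push the attention mask to the device
tensor = torch.LongTensor(indexed).unsqueeze(0).to(device)
attention_mask = torch.LongTensor(attention_mask).to(device)
prediction = torch.sigmoid(model(tensor, attention_mask))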
st31854
HI, I am trying to call the native functions. However, I met ‘CUDA error: an illegal memory access was encountered’ when I ran the CUDA version and it gave ‘Segmentation fault’ when I switched to the CPU version. I tested the code on pytorch 1.5/1.6/1.7 with cuda 9.2, all pytorch version gave the same error. What I was trying to do: First wrap the functions: #include <torch/extension.h> #include <ATen/NativeFunctions.h> #include <ATen/Config.h> std::tuple<at::Tensor, at::Tensor, at::Tensor> layer_norm_forward_cpu( const at::Tensor & input, const at::Tensor & weight, const at::Tensor & bias, int64_t M, int64_t N, double eps) { return at::native::layer_norm_cpu(input, weight, bias, M, N, eps); } std::tuple<at::Tensor, at::Tensor, at::Tensor> backward_layer_norm_cpu( const at::Tensor & grad_out, const at::Tensor & input, const at::Tensor & mean, const at::Tensor & rstd, const at::Tensor & weight, int64_t M, int64_t N, std::array<bool,3> output_mask) { return at::native::layer_norm_backward_cpu(grad_out, input, mean, rstd, weight, M, N, output_mask); } std::tuple<at::Tensor, at::Tensor, at::Tensor> layer_norm_forward_cuda( const at::Tensor & input, const at::Tensor & weight, const at::Tensor & bias, int64_t M, int64_t N, double eps) { return at::native::layer_norm_cuda(input, weight, bias, M, N, eps); } std::tuple<at::Tensor, at::Tensor, at::Tensor> backward_layer_norm_cuda( const at::Tensor & grad_out, const at::Tensor & input, const at::Tensor & mean, const at::Tensor & rstd, const at::Tensor & weight, int64_t M, int64_t N, std::array<bool,3> output_mask) { return at::native::layer_norm_backward_cuda(grad_out, input, mean, rstd, weight, M, N, output_mask); } PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { m.def("layer_norm_forward_cpu", &layer_norm_forward_cpu, "layer norm forward (cpu version)"); m.def("layer_norm_backward_cpu", &backward_layer_norm_cpu, "layer norm backward (cpu version)"); m.def("layer_norm_forward_cuda", &layer_norm_forward_cuda, "layer norm forward (cuda version)"); m.def("layer_norm_backward_cuda",&backward_layer_norm_cuda, "layer norm backward (cuda version)"); } and then call the functions in import torch import torch.nn as nn import torch.nn.functional as F import pdb import native class layer_norm(torch.autograd.Function): @staticmethod def forward(ctx, x, normalized_shape, weight, bias, eps, training): N = 1 if isinstance(normalized_shape, int): N = normalized_shape elif isinstance(normalized_shape, (list, tuple)): for i in normalized_shape: N *= i else: raise RuntimeError("unexpected type of normalized_shape".format(type(normalized_shape))) M = x.nelement() // N if x.is_cuda: y, mean, rstd = native.layer_norm_forward_cuda(x, weight, bias, M, N, eps) else: y, mean, rstd = native.layer_norm_forward_cpu(x, weight, bias, M, N, eps) if training: ctx.layer_norm_input = x ctx.layer_norm_parameters = (mean, rstd, weight, M, N) return y @staticmethod def backward(ctx, grad_output): x = ctx.layer_norm_input mean, rstd, weight, M, N = ctx.layer_norm_parameters output_mask = [True, True, True] if grad_output.is_cuda: grad_input, grad_weight, grad_bias = native.layer_norm_backward_cuda(grad_output, x, mean, rstd, weight, M, N, output_mask) else: grad_input, grad_weight, grad_bias = native.layer_norm_backward_cpu(grad_output, x, mean, rstd, weight, M, N, output_mask) ctx.layer_norm_input = None ctx.layer_norm_parameters = None return grad_input, None, grad_weight, grad_bias, None, None, None, None class LayerNorm(nn.LayerNorm): def __init__(self, normalized_shape, eps=1e-05, 
elementwise_affine=True): nn.LayerNorm.__init__(self, normalized_shape, eps=eps, elementwise_affine=elementwise_affine) def forward(self, x): y = layer_norm.apply(x, self.normalized_shape, self.weight, self.bias, self.eps, self.training) return y if __name__ == "__main__": seed = 2809 torch.manual_seed(seed) torch.cuda.manual_seed_all(seed) torch.backends.cudnn.benchmark = True torch.backends.cudnn.enabled = True torch.backends.cudnn.deterministic=True #https://github.com/pytorch/pytorch/issues/8019 model = nn.Sequential( nn.Conv2d(64, 64, kernel_size=3, padding=1, bias=False), LayerNorm([64,56,56]) ) print(model) #model = model.cuda() model.train() iteration = 10 for i in range(iteration): print("index: ", i) x = torch.rand(512,64,56,56) x = x - 0.5 #x = x.cuda() y = model(x) z = y.sum() z.backward() I also uploaded all the code to GitHub - irving-qin/nativefunctions Just run bash install.sh for testing. The forward process of the wrapped layer norm seemed to be normal. However, it throwed errors in the backward function. Thanks so much if anyone could give me some tips.
st31855
Solved by singleroc in post #4 Hi, I found the reason for the issue: grad_output was not contiguous. After adding grad_output = grad_output.contiguous(), the error is gone.
st31856
Your current code cannot be built using a recent source build and fails with: /workspace/src/nativefunctions/native.cpp:39:66: error: could not convert ‘mean’ from ‘const at::Tensor’ to ‘c10::IntArrayRef’ {aka ‘c10::ArrayRef<long int>’} 39 | return at::native::layer_norm_backward_cuda(grad_out, input, mean, rstd, weight, M, N, output_mask); | ^~~~ | | | const at::Tensor EDIT: it seems the IntArrayRef normalized_shape input is missing as seen here 2.
st31857
ptrblck: Your current code cannot be built using a recent source build and fails with: Hi @ptrblck, thanks so much for your attention to the question. It seems the API was changed in PyTorch 1.8. In my early trial I didn't build the code against the PyTorch source; instead I built it against a pre-installed version. I found the function definitions in a path like ~/.pyenv/versions/3.6.5/lib/python3.6/site-packages/torch/include/ATen/NativeFunctions.h (the path prefix might differ between machines). Running grep layer_norm ~/.pyenv/versions/3.6.5/lib/python3.6/site-packages/torch/include/ATen/NativeFunctions.h on PyTorch 1.5/1.6/1.7 gave
CAFFE2_API Tensor layer_norm(const Tensor & input, IntArrayRef normalized_shape, const Tensor & weight={}, const Tensor & bias={}, double eps=1e-05, bool cudnn_enable=true);
CAFFE2_API std::tuple<Tensor,Tensor,Tensor> layer_norm_cpu(const Tensor & input, const Tensor & weight, const Tensor & bias, int64_t M, int64_t N, double eps);
CAFFE2_API std::tuple<Tensor,Tensor,Tensor> layer_norm_cuda(const Tensor & input, const Tensor & weight, const Tensor & bias, int64_t M, int64_t N, double eps);
CAFFE2_API std::tuple<Tensor,Tensor,Tensor> layer_norm_backward_cpu(const Tensor & grad_out, const Tensor & input, const Tensor & mean, const Tensor & rstd, const Tensor & weight, int64_t M, int64_t N, std::array<bool,3> output_mask);
CAFFE2_API std::tuple<Tensor,Tensor,Tensor> layer_norm_backward_cuda(const Tensor & grad_out, const Tensor & input, const Tensor & mean, const Tensor & rstd, const Tensor & weight, int64_t M, int64_t N, std::array<bool,3> output_mask);
I just tried the same grep on PyTorch 1.8, and it gave
TORCH_API Tensor layer_norm(const Tensor & input, IntArrayRef normalized_shape, const Tensor & weight={}, const Tensor & bias={}, double eps=1e-05, bool cudnn_enable=true);
TORCH_API std::tuple<Tensor,Tensor,Tensor> layer_norm_cpu(const Tensor & input, IntArrayRef normalized_shape, const Tensor & weight, const Tensor & bias, double eps);
TORCH_API std::tuple<Tensor,Tensor,Tensor> layer_norm_cuda(const Tensor & input, IntArrayRef normalized_shape, const Tensor & weight, const Tensor & bias, double eps);
TORCH_API std::tuple<Tensor,Tensor,Tensor> math_native_layer_norm(const Tensor & input, IntArrayRef normalized_shape, const Tensor & weight, const Tensor & bias, double eps);
TORCH_API std::tuple<Tensor,Tensor,Tensor> layer_norm_backward_cpu(const Tensor & grad_out, const Tensor & input, IntArrayRef normalized_shape, const Tensor & mean, const Tensor & rstd, const Tensor & weight, const Tensor & bias, std::array<bool,3> output_mask);
TORCH_API std::tuple<Tensor,Tensor,Tensor> layer_norm_backward_cuda(const Tensor & grad_out, const Tensor & input, IntArrayRef normalized_shape, const Tensor & mean, const Tensor & rstd, const Tensor & weight, const Tensor & bias, std::array<bool,3> output_mask);
I have updated the code in the GitHub repo (please run git pull) to make it support PyTorch 1.8. However, the error was still there.
st31858
Hi, I found the reason for the issue: grad_output was not contiguous. After adding grad_output = grad_output.contiguous(), the error is gone.
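For anyone hitting the same error, the change is just one line at the top of the custom backward from the code above (sketch, not re-tested here):
    @staticmethod
    def backward(ctx, grad_output):
        # the incoming grad_output is not guaranteed to be contiguous,
        # while the native layer_norm backward kernels expect contiguous memory
        grad_output = grad_output.contiguous()
        x = ctx.layer_norm_input
        mean, rstd, weight, M, N = ctx.layer_norm_parameters
        output_mask = [True, True, True]
        if grad_output.is_cuda:
            grad_input, grad_weight, grad_bias = native.layer_norm_backward_cuda(
                grad_output, x, mean, rstd, weight, M, N, output_mask)
        else:
            grad_input, grad_weight, grad_bias = native.layer_norm_backward_cpu(
                grad_output, x, mean, rstd, weight, M, N, output_mask)
        ctx.layer_norm_input = None
        ctx.layer_norm_parameters = None
        return grad_input, None, grad_weight, grad_bias, None, None, None, None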
st31859
Hi, I have seen this image which describes the difference between InstanceNorm and BatchNorm, so they seem pretty different. Correct me if I'm wrong, but BatchNorm with batch size 1 isn't equal to InstanceNorm, is it?
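A minimal check one could run to compare them (just a sketch; affine scaling is disabled and running statistics are left at their defaults to keep the comparison simple):
import torch
import torch.nn as nn

x = torch.randn(1, 3, 8, 8)

bn = nn.BatchNorm2d(3, affine=False)
inorm = nn.InstanceNorm2d(3, affine=False)

bn.train()
print(torch.allclose(bn(x), inorm(x), atol=1e-5))  # True: with a single sample the batch statistics equal the per-instance statistics

bn.eval()  # BatchNorm now normalizes with its running statistics, InstanceNorm still per instance
print(torch.allclose(bn(x), inorm(x), atol=1e-5))  # generally False
So in training mode with batch size 1 the forward passes coincide, but the two layers still differ in how they track and use running statistics.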
st31860
Hi, I was trying to calculate the inference time for a couple of batches. Model being used: ResNet18, num_workers = 4, image size = 3, 1024, 3072 (channels, height, width). Here's how I'm calculating the inference time:
elapsed_time = 0
with torch.no_grad():
    for batch_num, data in enumerate(valloader, 0):
        # get the images and labels and move tensors to GPU
        inputs = data["image"]
        labels = data["label"].to(device)
        labels = labels.long()
        # For time sync in CUDA
        torch.cuda.synchronize()
        start_time = time.time()
        output = net(inputs.to(device))
        torch.cuda.synchronize()
        end_time = time.time()
        elapsed_time += (end_time - start_time)

execution_time = elapsed_time * 1000  # convert to milliseconds
avg_inference_time = execution_time / len(valloader)
print("Avg Inference Time: ", avg_inference_time)
This is the result I get:
Avg Inference Time: 60.62364959716797 for a batch size of 3
Avg Inference Time: 122.38671875 for a batch size of 6
Avg Inference Time: 103.451 for a batch size of 5
Avg Inference Time: 20.9023842 for a batch size of 1
If things were running in parallel, shouldn't the time be the same for any batch size, as long as it fits in GPU memory? Or is it that, because my resolution is quite high, the activations grow with the batch size, so a larger batch takes more time per inference step? Could anyone please throw some light on this topic? Thanks
st31861
Trying to run this piece of code from Ben Trevett's GitHub repo, but it takes a very long time for something pretty basic. Any thoughts on why this is happening and how I can get around it?
from torchtext.legacy import datasets
train_data, test_data = datasets.IMDB.splits(TEXT, LABEL)
st31862
number of train data: 346 number of test data: 69 Epoch: [0] [0/346] eta: 0:35:20 lr: 0.000019 loss: -312.6024 (-312.6024) loss_classifier: 1.5789 (1.5789) loss_box_reg: 0.1299 (0.1299) loss_mask: -314.3485 (-314.3485) loss_objectness: 0.0266 (0.0266) loss_rpn_box_reg: 0.0106 (0.0106) time: 6.1275 data: 0.1599 max mem: 0 Loss is nan, stopping training {‘loss_classifier’: tensor (nan, grad_fn = ), ‘loss_box_reg’: tensor (nan, grad_fn = ), ‘loss_mask’: tensor (nan, grad_fn = ), ’ tensor (nan, grad_fn = ), ‘loss_rpn_box_reg’: tensor (nan, grad_fn = )} An exception has occurred, use% tb to see the full traceback. SystemExit : 1 And this is the dataset code class maskrcnn_Dataset(torch.utils.data.Dataset): def __init__(self, root, transforms=None): self.root = root self.transforms = transforms # load all image files, sorting them to # ensure that they are aligned self.imgs = list(sorted(os.listdir(os.path.join(root, "images")))) self.masks = list(sorted(os.listdir(os.path.join(root, "masks")))) #self.class_masks = list(sorted(os.listdir(os.path.join(root, "SegmentationClass")))) def __getitem__(self, idx): # load images ad masks img_path = os.path.join(self.root, "images", self.imgs[idx]) x=self.imgs[idx].split('.') mask_path = os.path.join(self.root, "masks", self.masks[idx]) #class_mask_path = os.path.join(self.root, "SegmentationClass", self.class_masks[idx]) #read and convert image to RGB img = cv2.imread(img_path) mask_for_all=[] img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # note that we haven't converted the mask to RGB, # because each color corresponds to a different instance # with 0 being background # mask = Image.open(mask_path) mask_folder=os.path.join(self.root,"masks") source_mask = os.path.join(mask_folder, x[0]) #print(os.listdir(source_mask)) boxes = [] xx=trier(os.listdir(source_mask)) #print(xx) for file_name in xx: mask = Image.open(os.path.join(source_mask,file_name)) mask = np.array(mask) mask_for_all.append(mask) obj_ids = np.unique(mask) obj_ids = obj_ids[1:] masks = mask == obj_ids[:, None, None] num_objs = len(obj_ids) for i in range(num_objs): pos = np.where(masks[i]) xmin = np.min(pos[1]) xmax = np.max(pos[1]) ymin = np.min(pos[0]) ymax = np.max(pos[0]) boxes.append([xmin, ymin, xmax, ymax]) num_objs=len(boxes) boxes = torch.as_tensor(boxes, dtype=torch.float32) # there is only one class if(self.root.find("train")!=-1): #print("bisgltjf") labels =class_ids_train[class_ids_train_names.index(self.imgs[idx])] #print(labels) else: labels =class_ids_val[class_ids_val_names.index(self.imgs[idx])] #print('l3assba') #labels = np.array([]) #for i in range(masks.shape[0]): # labels = np.append(labels, (masks[i] * class_mask).max()) labels = torch.as_tensor(labels, dtype=torch.int64) #print(boxes,":",labels) masks = torch.as_tensor(mask_for_all, dtype=torch.uint8) #print(labels) #print(masks) #print(masks.shape) image_id = torch.tensor([idx]) area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0]) # suppose all instances are not crowd iscrowd = torch.zeros((num_objs,), dtype=torch.int64) #print(img.shape) #print(self.imgs[idx]) target = {} target["boxes"] = boxes #print(boxes) target["labels"] = labels #print(labels.shape) target["masks"] = masks #print(masks.shape) target["image_id"] = image_id #print(image_id.shape) target["area"] = area #print(area) target["iscrowd"] = iscrowd #print(iscrowd.shape) if self.transforms is not None: img, target = self.transforms(img, target) return img, target def __len__(self): return len(self.imgs)
st31863
I'm not sure how you are calculating the loss_mask, but do you expect it to be negative, as that's a bit unusual? If not, I guess this might cause your model to diverge and create invalid outputs in the end.
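One thing that might be worth checking (just a guess based on the posted dataset code, not something I have verified): torchvision's Mask R-CNN expects target["masks"] to contain binary per-instance masks (0/1), and passing raw mask values such as 255 or instance ids can make the BCE-based mask loss negative. A quick sanity check could look like:
# hypothetical check, adapt the root path to your setup
dataset = maskrcnn_Dataset(root="path/to/train", transforms=None)
img, target = dataset[0]
print(target["masks"].dtype, target["masks"].unique())
# expected: torch.uint8 with values tensor([0, 1]); anything else could be
# binarized, e.g. target["masks"] = (target["masks"] > 0).to(torch.uint8)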
st31864
Thank you for your reply. I do not expect it to be negative, and I use these helper functions to train my model with this dataset: from engine import train_one_epoch, evaluate import utils
st31866
I have been training a multi-layer ResNet with the structure
DataParallel(
  (module): resnet(
    (phi): Tanh()
    (stack): ModuleList(
      (0): Linear(in_features=5000, out_features=3000, bias=True)
      (1): Block(
        (L1): Linear(in_features=3000, out_features=3000, bias=True)
        (L2): Linear(in_features=3000, out_features=3000, bias=True)
        (phi): Tanh()
      )
      (2): Block(
        (L1): Linear(in_features=3000, out_features=3000, bias=True)
        (L2): Linear(in_features=3000, out_features=3000, bias=True)
        (phi): Tanh()
      )
      (3): Block(
        (L1): Linear(in_features=3000, out_features=3000, bias=True)
        (L2): Linear(in_features=3000, out_features=3000, bias=True)
        (phi): Tanh()
      )
      (4): Linear(in_features=3000, out_features=1460, bias=True)
    )
  )
)
I have noticed that the input layer Linear(in_features=5000, out_features=3000, bias=True) takes
1.201575756072998 layer:0
while the other layers take
0.0006377696990966797 layer:1
0.0002562999725341797 layer:2
0.00022459030151367188 layer:3
7.271766662597656e-05 layer:4
Why would this happen?
...
x_input = torch.rand(1, 5000).to(device)
pre = model(x_input)
..
def forward(self, x):
    # first layer
    for i in range(len(self.stack)):
        t1 = time.time()
        x = self.stack[i](x)
        t2 = time.time()
        print(t2 - t1, 'layer:{}'.format(i))
# with 4 * NVIDIA RTX 3090
st31867
CUDA operations are executed asynchronously so you would have to synchronize the code via torch.cuda.synchronize() before starting and stopping the timer, otherwise you would profile e.g. the CUDA context creation, the kernel launches etc. Also, you should add warmup iterations and measure the average time of multiple iterations.
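A rough timing sketch along those lines (assuming the model and input from the snippet above; the numbers will of course differ on your setup):
import time
import torch

device = 'cuda'
model = model.to(device)
x_input = torch.rand(1, 5000, device=device)

# warmup iterations (CUDA context creation, cudnn benchmarking, allocator warmup, ...)
for _ in range(10):
    model(x_input)
torch.cuda.synchronize()

n_iters = 100
t0 = time.time()
for _ in range(n_iters):
    model(x_input)
torch.cuda.synchronize()
print('avg forward time: {:.6f}s'.format((time.time() - t0) / n_iters))
The same pattern (synchronize before starting and before stopping the timer) also applies if you keep the per-layer prints inside forward.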