Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same
****I set my model and data to the same device, but always raise the error like this: RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same** The following is training code** total_epoch = 1 best_epoch = 0 training_losses = [] val_losses = [] for epoch in range(total_epoch): epoch_train_loss = 0 for X, y in train_loader: X, y = X.cuda(), y.cuda() optimizer.zero_grad() result = model(X) loss = criterion(result, y). epoch_train_loss += loss.item() loss.backward() optimizer.step() training_losses.append(epoch_train_loss) epoch_val_loss = 0 correct = 0 total = 0 with torch.no_grad(): for X, y in val_loader: X, y = X.cuda(), y.cuda() result = model(X) loss = criterion(result, y) epoch_val_loss += loss.item() _, maximum = torch.max(result.data, 1) total += y.size(0) correct += (maximum == y).sum().item() val_losses.append(epoch_val_loss) accuracy = correct/total print("EPOCH:", epoch, ", Training Loss:", epoch_train_loss, ", Validation Loss:", epoch_val_loss, ", Accuracy: ", accuracy) if min(val_losses) == val_losses[-1]: best_epoch = epoch checkpoint = {'model': model, 'state_dict': model.state_dict(), 'optimizer' : optimizer.state_dict()} torch.save(checkpoint, models_dir + '{}.pth'.format(epoch)) print("Model saved") when i Run the following code for detection using cv2.capture(0) . import cvlib as cv from PIL import Image cap = cv2.VideoCapture(0) font_scale=1 thickness = 2 red = (0,0,255) green = (0,255,0) blue = (255,0,0) font=cv2.FONT_HERSHEY_SIMPLEX face_cascade = cv2.CascadeClassifier( cv2.data.haarcascades +'haarcascade_frontalface_default.xml') while(cap.isOpened()): ret, frame = cap.read() if ret == True: gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) faces = face_cascade.detectMultiScale(gray, 1.4, 4) for (x, y, w, h) in faces: cv2.rectangle(frame, (x, y), (x+w, y+h), blue, 2) croped_img = frame[y:y+h, x:x+w] pil_image = Image.fromarray(croped_img, mode = "RGB") pil_image = train_transforms(pil_image) image = pil_image.unsqueeze(0) result = loaded_model(image) _, maximum = torch.max(result.data, 1) prediction = maximum.item() if prediction == 0: cv2.putText(frame, "Masked", (x,y - 10), font, font_scale, green, thickness) cv2.rectangle(frame, (x, y), (x+w, y+h), green, 2) elif prediction == 1: cv2.putText(frame, "No Mask", (x,y - 10), font, font_scale, red, thickness) cv2.rectangle(frame, (x, y), (x+w, y+h), red, 2) cv2.imshow('frame',frame) if (cv2.waitKey(1) & 0xFF) == ord('q'): break else: break cap.release() cv2.destroyAllWindows() about the function loaded_model declared as below def load_checkpoint(filepath): checkpoint = torch.load(filepath) model = checkpoint['model'] model.load_state_dict(checkpoint['state_dict']) for parameter in model.parameters(): parameter.requires_grad = False return model.eval() filepath = models_dir + str(best_epoch) + ".pth" loaded_model = load_checkpoint(filepath) ERROR: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-29-b3a630684f44> in <module>() 43 44 ---> 45 result = loaded_model(image) 46 _, maximum = torch.max(result.data, 1) 47 prediction = maximum.item() 5 frames /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight) 344 _pair(0), self.dilation, self.groups) 345 return F.conv2d(input, weight, self.bias, self.stride, --> 346 self.padding, self.dilation, self.groups) 347 348 def forward(self, input): RuntimeError: Input type (torch.FloatTensor) and weight 
type (torch.cuda.FloatTensor) should be the same **I hope you can answer it.Thanks!**
The model's weights are on the GPU, while the image is on the CPU. You need to put it onto the GPU as well. image = pil_image.unsqueeze(0) image = image.cuda() result = loaded_model(image) It looks like you didn't manually put the model onto the GPU, but rather that you saved the model's weights, which were originally on the GPU, and PyTorch keeps the device information when saving the state dict. If you want to run the model on the CPU, you should make sure that the weights are on the CPU. torch.load accepts a map_location argument, which forces the loaded data to be on the specified device, rather than using the saved device. # Load weights onto the CPU regardless of saved device. checkpoint = torch.load(filepath, map_location="cpu")
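For completeness, here is how the pieces could fit together — a sketch, assuming the asker's checkpoint format is unchanged. Either load everything onto one device and move each input there before inference, or load onto the CPU with map_location:

import torch

def load_checkpoint(filepath, device="cpu"):
    # map_location forces all saved tensors onto the given device.
    checkpoint = torch.load(filepath, map_location=device)
    model = checkpoint['model']
    model.load_state_dict(checkpoint['state_dict'])
    for parameter in model.parameters():
        parameter.requires_grad = False
    return model.to(device).eval()

loaded_model = load_checkpoint(filepath, device="cuda")  # or "cpu"

# The input must live on the same device as the weights.
image = pil_image.unsqueeze(0).cuda()
result = loaded_model(image)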
https://stackoverflow.com/questions/62302878/
Installing PyTorch3D fails with anaconda and pip on Windows 10
I saw that more people seem to have the same issue, but it was not resolved. I am trying to install Pytorch3D with Anaconda and got the following PackageNotFound error. Collecting package metadata (current_repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Collecting package metadata (repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. PackagesNotFoundError: The following packages are not available from current channels: - pytorch3d Current channels: - https://conda.anaconda.org/pytorch3d/win-64 - https://conda.anaconda.org/pytorch3d/noarch - https://repo.anaconda.com/pkgs/main/win-64 - https://repo.anaconda.com/pkgs/main/noarch - https://repo.anaconda.com/pkgs/r/win-64 - https://repo.anaconda.com/pkgs/r/noarch - https://repo.anaconda.com/pkgs/msys2/win-64 - https://repo.anaconda.com/pkgs/msys2/noarch To search for alternate channels that may provide the conda package you're looking for, navigate to https://anaconda.org and use the search bar at the top of the page. I have also tried using pip install 'git+https://github.com/facebookresearch/pytorch3d.git' and get the following: C:\Users\Alexandra>pip install 'git+https://github.com/facebookresearch/pytorch3d.git' ERROR: Invalid requirement: "'git+https://github.com/facebookresearch/pytorch3d.git'" C:\Users\Alexandra>pip install git+https://github.com/facebookresearch/pytorch3d.git Collecting git+https://github.com/facebookresearch/pytorch3d.git Cloning https://github.com/facebookresearch/pytorch3d.git to c:\users\alexan~1\appdata\local\temp\pip-req-build-uspo7an4 Running command git clone -q https://github.com/facebookresearch/pytorch3d.git 'C:\Users\ALEXAN~1\AppData\Local\Temp\pip-req-build-uspo7an4' ERROR: Error [WinError 2] The system cannot find the file specified while executing command git clone -q https://github.com/facebookresearch/pytorch3d.git 'C:\Users\ALEXAN~1\AppData\Local\Temp\pip-req-build-uspo7an4' ERROR: Cannot find command 'git' - do you have 'git' installed and in your PATH? I am on Windows 10, using python 3.8, PyTorch 1.5 and CUDA 10.2. I am very new to python, so I have no idea how to fix this (you can tell, that I've never installed from git before..) (please be lenient!) Thank you! EDIT: Thank you for your answers. I did install Git and it got me a bit further, but still not completeing the build.. Also, interesting enough, when I run the commands !pip install torch torchvision !pip install 'git+https://github.com/facebookresearch/pytorch3d.git@stable' in Google Collab it seems to work, but I cannot run it , let's say, in jupyter. Any more ideas?
Edit 10-17-2022 With CUDA 11.6, downloading CUB and setting CUB_HOME is no longer necessary. Trying to use CUB_HOME will give an nvcc.exe compile error. Any previous CUB_HOME environment variable should be deleted and the command line restarted before running setup. Original Answer I have also tried to install pytorch3d on Windows 10. As of writing this there is no Windows package in https://anaconda.org/pytorch3d/pytorch3d. The Pytorch3d install doc has detailed instructions, but some information is missing and is only found inside various issues. Following various issues I was able to get pytorch3d installed by compiling from source on pytorch 1.8.1 and 1.10.0 (this version is not yet supported in the official docs for pytorch3d 0.6.0). I have tested on pytorch 1.8.1 with CUDA 10.2 and pytorch 1.10.0 with CUDA 11.3. I had CUDA Toolkit 11.0 and CuDNN installed separately, with environment variables set to be used by tensorflow-gpu. For both environments a new Python 3.9 was used. Visual Studio 16.11.5 was used with Desktop Development with C++ enabled, and CMake 3.21.3. It is probably better to have the same CUDA Toolkit version as the PyTorch GPU version; there was a warning regarding the version, but in my case it was installed successfully. The pytorch3d source code must be downloaded and extracted in order to compile. When running python setup.py install from the pytorch3d folder, it looked for CUDA_HOME. It was able to find the correct CUDA path, probably based on other flags. I initially faced these errors: RuntimeError: Error compiling objects for extension. xutility(...): error: expected a "(" Install on Windows Create a conda environment and install torch and dependencies. conda create -n pytorch3d python=3.9 conda activate pytorch3d conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch conda install -c fvcore -c iopath -c conda-forge fvcore iopath Install the appropriate CUDA Toolkit and CuDNN and set the environment variables. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\bin C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\libnvvp C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\extras\CUPTI\lib64 C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\include Set the correct path to the cl.exe Visual C++ compiler. This allows running the compilation from a conda prompt with the correct environment selected. Hostx64 was used, with the x86 folder inside, as the x64 one gave an error for me. In my case it was: C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Current\Bin C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\Hostx64\x86 Following this issue, the PYTORCH3D_NO_NINJA system environment variable was set with a value of 1, FORCE_CUDA with a value of 1 (though not required if CUDA is available in pytorch), and the CUB_HOME system environment variable was set after downloading CUB from https://github.com/NVIDIA/cub/releases. PYTORCH3D_NO_NINJA 1 CUB_HOME C:\portable\cub-1.9.9 The following env variables were probably set by the CUDA Toolkit install. CUDA_PATH_V11_0 C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0 CUDA_PATH C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0 Following this issue, instead of removing -std=c++14 from extra_compile_args in setup.py, commenting out "-std=c++14" in nvcc_args worked for me. I did check that the suggested method in the issue works. Now running python setup.py install from the pytorch3d source folder should start the compile and install it. Install the following requirements in the conda env to run the demos and examples.
conda install jupyter pip install scikit-image matplotlib imageio plotly opencv-python Some of the steps mentioned above are probably not needed. There is no need to modify any header files. This is the source that worked for me: https://github.com/facebookresearch/pytorch3d/tree/bfeb82efa38f29ed5b9cf8d8986fab744fe559ea.
https://stackoverflow.com/questions/62304087/
Pytorch installation on windows
I have tried many suggestions online to install in my virtual env "torch" but to no avail. It won't let me import torch. I am able to install torchvision through conda though following this link: https://pytorch.org/get-started/locally/. Any suggestions are welcome! Here is the error message (I downgrade to python 3.5 in the virtualenv) (env_peem) PS E:\Users\Maggie\TS_MatSeg_Share> pip install torch Collecting torch\Users\Maggie\TS_MatSeg_Share> Using cached torch-0.1.2.post2.tar.gz (128 kB) Requirement already satisfied: pyyaml in e:\users\maggie\ts_matseg_share\env_peem\lib\site-packages (from torch) (5.3.1) Building wheels for collected packages: torch Building wheel for torch (setup.py) ... error ERROR: Command errored out with exit status 1: command: 'E:\Users\Maggie\TS_MatSeg_Share\env_peem\Scripts\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'E:\\Users\\Maggie\\AppData\\Local\\Temp\\7\\pip-install-80ts7bt9\\torch\\setup.py'"'"'; __file__='"'"'E:\\Users\\Maggie\\AppData\\Local\\Temp\\7\\pip-install-80ts7bt9\\torch\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'E:\Users\Maggie\AppData\Local\Temp\7\pip-wheel-xn9ld0bu' cwd: E:\Users\Maggie\AppData\Local\Temp\7\pip-install-80ts7bt9\torch\ Complete output (30 lines): running bdist_wheel running build running build_deps Traceback (most recent call last): File "<string>", line 1, in <module> File "E:\Users\Maggie\AppData\Local\Temp\7\pip-install-80ts7bt9\torch\setup.py", line 265, in <module> description="Tensors and Dynamic neural networks in Python with strong GPU acceleration", File "E:\Users\Maggie\TS_MatSeg_Share\env_peem\lib\site-packages\setuptools\__init__.py", line 144, in setup return distutils.core.setup(**attrs) File "C:\Users\Maggie\AppData\Local\Programs\Python\Python35\lib\distutils\core.py", line 148, in setup dist.run_commands() File "C:\Users\Maggie\AppData\Local\Programs\Python\Python35\lib\distutils\dist.py", line 955, in run_commands self.run_command(cmd) File "C:\Users\Maggie\AppData\Local\Programs\Python\Python35\lib\distutils\dist.py", line 974, in run_command cmd_obj.run() File "E:\Users\Maggie\TS_MatSeg_Share\env_peem\lib\site-packages\wheel\bdist_wheel.py", line 223, in run self.run_command('build') File "C:\Users\Maggie\AppData\Local\Programs\Python\Python35\lib\distutils\cmd.py", line 313, in run_command self.distribution.run_command(command) File "C:\Users\Maggie\AppData\Local\Programs\Python\Python35\lib\distutils\dist.py", line 974, in run_command cmd_obj.run() File "C:\Users\Maggie\AppData\Local\Programs\Python\Python35\lib\distutils\command\build.py", line 135, in run self.run_command(cmd_name) File "C:\Users\Maggie\AppData\Local\Programs\Python\Python35\lib\distutils\cmd.py", line 313, in run_command self.distribution.run_command(command) File "C:\Users\Maggie\AppData\Local\Programs\Python\Python35\lib\distutils\dist.py", line 974, in run_command cmd_obj.run() File "E:\Users\Maggie\AppData\Local\Temp\7\pip-install-80ts7bt9\torch\setup.py", line 51, in run from tools.nnwrap import generate_wrappers as generate_nn_wrappers ImportError: No module named 'tools.nnwrap' ---------------------------------------- ERROR: Failed building wheel for torch Running setup.py clean for torch ERROR: Command errored out with exit status 1: command: 'E:\Users\Maggie\TS_MatSeg_Share\env_peem\Scripts\python.exe' -u -c 'import sys, setuptools, tokenize; 
sys.argv[0] = '"'"'E:\\Users\\Maggie\\AppData\\Local\\Temp\\7\\pip-install-80ts7bt9\\torch\\setup.py'"'"'; __file__='"'"'E:\\Users\\Maggie\\AppData\\Local\\Temp\\7\\pip-install-80ts7bt9\\torch\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' clean --all cwd: E:\Users\Maggie\AppData\Local\Temp\7\pip-install-80ts7bt9\torch Complete output (2 lines): running clean error: [Errno 2] No such file or directory: '.gitignore' ---------------------------------------- ERROR: Failed cleaning build dir for torch Failed to build torch Installing collected packages: torch Running setup.py install for torch ... error ERROR: Command errored out with exit status 1: command: 'E:\Users\Maggie\TS_MatSeg_Share\env_peem\Scripts\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'E:\\Users\\Maggie\\AppData\\Local\\Temp\\7\\pip-install-80ts7bt9\\torch\\setup.py'"'"'; __file__='"'"'E:\\Users\\Maggie\\AppData\\Local\\Temp\\7\\pip-install-80ts7bt9\\torch\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'E:\Users\Maggie\AppData\Local\Temp\7\pip-record-z6gt4ig9\install-record.txt' --single-version-externally-managed --compile --install-headers 'E:\Users\Maggie\TS_MatSeg_Share\env_peem\include\site\python3.5\torch' cwd: E:\Users\Maggie\AppData\Local\Temp\7\pip-install-80ts7bt9\torch\ Complete output (23 lines): running install running build_deps Traceback (most recent call last): File "<string>", line 1, in <module> File "E:\Users\Maggie\AppData\Local\Temp\7\pip-install-80ts7bt9\torch\setup.py", line 265, in <module> description="Tensors and Dynamic neural networks in Python with strong GPU acceleration", File "E:\Users\Maggie\TS_MatSeg_Share\env_peem\lib\site-packages\setuptools\__init__.py", line 144, in setup return distutils.core.setup(**attrs) File "C:\Users\Maggie\AppData\Local\Programs\Python\Python35\lib\distutils\core.py", line 148, in setup dist.run_commands() File "C:\Users\Maggie\AppData\Local\Programs\Python\Python35\lib\distutils\dist.py", line 955, in run_commands self.run_command(cmd) File "C:\Users\Maggie\AppData\Local\Programs\Python\Python35\lib\distutils\dist.py", line 974, in run_command cmd_obj.run() File "E:\Users\Maggie\AppData\Local\Temp\7\pip-install-80ts7bt9\torch\setup.py", line 99, in run self.run_command('build_deps') File "C:\Users\Maggie\AppData\Local\Programs\Python\Python35\lib\distutils\cmd.py", line 313, in run_command self.distribution.run_command(command) File "C:\Users\Maggie\AppData\Local\Programs\Python\Python35\lib\distutils\dist.py", line 974, in run_command cmd_obj.run() File "E:\Users\Maggie\AppData\Local\Temp\7\pip-install-80ts7bt9\torch\setup.py", line 51, in run from tools.nnwrap import generate_wrappers as generate_nn_wrappers ImportError: No module named 'tools.nnwrap' ---------------------------------------- ERROR: Command errored out with exit status 1: 'E:\Users\Maggie\TS_MatSeg_Share\env_peem\Scripts\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'E:\\Users\\Maggie\\AppData\\Local\\Temp\\7\\pip-install-80ts7bt9\\torch\\setup.py'"'"'; __file__='"'"'E:\\Users\\Maggie\\AppData\\Local\\Temp\\7\\pip-install-80ts7bt9\\torch\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', 
'"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'E:\Users\Maggie\AppData\Local\Temp\7\pip-record-z6gt4ig9\install-record.txt' --single-version-externally-managed --compile --install-headers 'E:\Users\Maggie\TS_MatSeg_Share\env_peem\include\site\python3.5\torch' Check the logs for full command output.
It tries to install torch-0.1.2.post2.tar.gz, which is an extremely outdated version (and it was probably more involved to install even back then). You don't want that version, but rather the most recent one, which currently is 1.5.0. You didn't select the appropriate configuration from PyTorch - Getting Started Locally; if you select Windows, which is the operating system you are using, the installation command is: pip install torch===1.5.0 torchvision===0.6.0 -f https://download.pytorch.org/whl/torch_stable.html That is because PyTorch no longer publishes the Windows versions to PyPI, so it needs to be installed from their custom registry.
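After installing, a quick sanity check (assuming the install succeeded) confirms the right build was picked up:

import torch

print(torch.__version__)          # should print 1.5.0, not 0.1.2.post2
print(torch.cuda.is_available())  # True only with a CUDA build and a GPU present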
https://stackoverflow.com/questions/62313841/
How did Pytorch process images in ImageNet when training resnet pretrained models in torchvision.models.resnet34?
I downloaded the pretrained parameters of resnet34 from torchvision.models and loaded them into a TensorFlow 1.x network, but I only get 58% accuracy testing on the ImageNet 2015 validation set (50,000 pictures). I guess it may be caused by a different preprocessing method for the data set. The validation data I am using is in TFRecord format, processed by my friend. So I am wondering: how does PyTorch process ImageNet images when training the resnet34 pretrained model? Turn RGB to BGR? Scale pixel values to 0-1?
From the documentation: All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (3 x H x W), where H and W are expected to be at least 224. The images have to be loaded in to a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225]. You can use the following transform to normalize: normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) The documentation links this example for preprocessing ImageNet data.
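So the models expect RGB (no BGR conversion) with pixel values scaled to [0, 1] before normalization. For validation, the linked example also resizes and center-crops; a typical evaluation pipeline, assuming the standard 224 x 224 input, looks like this:

import torchvision.transforms as transforms

val_transform = transforms.Compose([
    transforms.Resize(256),      # resize the shorter side to 256
    transforms.CenterCrop(224),  # crop the central 224 x 224 patch
    transforms.ToTensor(),       # RGB, scaled to [0, 1], shape C x H x W
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])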
https://stackoverflow.com/questions/62326497/
Prunning model doesn't improve inference speed or reduce model size
I'm trying to prune my model in PyTorch with torch.nn.utils.prune, which provides 2 tensors, one is the original weight and the other is a mask contain 0s and 1s that help us close certain connections in the network. I have tried both of the solutions, but none improve the inference speed: Use the network after pruning to infer which will first close some connections with the mask and then run inference. Zeros out the original weights with the mask and then remove the mask from the state_dict to infer. Is there a way to improve the speed with the model tensor and the mask? Doesn't multiply with a non-zero float number with 0 will faster than multiply 2 floats with each other? Here is my prune function and the pruning speed calculating procedure: def prune_net(net): """Prune 20% net's weights that have abs(value) approx. 0 Function that will be use when an iteration is reach Args: Return: newnet (nn.Module): a newnet contain mask that help prune network's weight """ if not isinstance(net,nn.Module): print('Invalid input. Must be nn.Module') return newnet = copy.copy(net) modules_list = [] for name, module in newnet.named_modules(): if isinstance(module, torch.nn.Conv2d): modules_list += [(module,'weight'),(module,'bias')] if isinstance(module, torch.nn.Linear): modules_list += [(module,'weight'),(module,'bias')] prune.global_unstructured( modules_list, pruning_method=prune.L1Unstructured, amount=0.2,) return newnet Test inference speed 1st case: import torch from torch import nn import torch.nn.utils.prune as prune import torch.nn.functional as F import time from torch.autograd import Variable torch.set_default_tensor_type('torch.cuda.FloatTensor') old_net = init_your_net() new_net = prune_net(old_net) new_net = prune_net(new_net) old_net.eval() new_net.eval() old_net = old_net.cuda() new_net = new_net.cuda() dataset = load_your_dataset() for i in range(100): x = dataset[i] x = x.cuda() y = x.cuda() #new infer start_time = time.perf_counter() detections = new_net(x).data time_new += time.perf_counter() - start_time #old infer start_time = time.perf_counter() detections = old_net(y).data time_old += time.perf_counter() - start_time print('old ',time_old) print('new ', time_new) Test inference speed 2nd case: import torch from torch import nn import torch.nn.utils.prune as prune import torch.nn.functional as F import time from torch.autograd import Variable torch.set_default_tensor_type('torch.cuda.FloatTensor') old_net = init_your_net() new_net = prune_net(old_net) new_net = prune_net(new_net) # Apply mask to model tensor and remove mask from state_dict for name, module in new_net.named_modules(): if isinstance(module, torch.nn.Conv2d): prune.remove(module,'weight') prune.remove(module,'bias') if isinstance(module, torch.nn.Linear): prune.remove(module,'weight') prune.remove(module,'bias') old_net.eval() new_net.eval() old_net = old_net.cuda() new_net = new_net.cuda() dataset = load_your_dataset() for i in range(100): x = dataset[i] x = x.cuda() y = x.cuda() #new infer start_time = time.perf_counter() detections = new_net(x).data time_new += time.perf_counter() - start_time #old infer start_time = time.perf_counter() detections = old_net(y).data time_old += time.perf_counter() - start_time print('old ',time_old) print('new ', time_new) UPDATE I found torch have a sparse module that can reduce memory usage if we prune enough parameter but it hasn't support nn.Module yet, only Tensor object. 
Here are some useful link: https://github.com/pytorch/pytorch/issues/36214#issuecomment-619586452 https://pytorch.org/docs/stable/sparse.html
It is important to understand the difference between unstructured pruning and structured pruning. Structured pruning: the dimensions of the weight tensors are reduced by removing entire rows/columns of the tensors. This translates into removing neurons with all their incoming and outgoing connections (in dense layers) or entire convolutional filters (in convolutional layers). Unstructured pruning: individual weights can be "removed" (zeroed-out) without constraints of the shape of the final tensor. This translates into removing individual connections between neurons (in dense layers) or removing individual weights of the convolutional filters (in convolutional layers). Notice that the resulting weight tensors can be sparse but maintain their original shape. Currently, torch.nn.utils.prune only supports unstructured pruning, which hardly helps to reduce the inference cost because GPUs are not optimized for sparse matrix multiplications. While you might want to reduce the dimensions of your weight tensors to reduce the number of floating-point operations, unstructured pruning produces weight tensors with many zeros but does not automatically reduce the size of such tensors. Unstructured pruning can help improve the performance only when a lot of weights are removed. In this case, you can either rely on PyTorch sparse operations or try to find rows/columns that contain all zeros and thus can be removed. Instead, if you want to look into structured pruning, you can take a look at TorchPruner, a library that I have developed myself for research purposes and that provides utilities to find the least important neurons and slice the weight tensors accordingly.
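To see concretely why this doesn't speed up inference, the following small check (a sketch using the same torch.nn.utils.prune API) shows that an unstructured-pruned weight keeps its original shape; only its values are zeroed, so the dense matrix multiplication does the same amount of work:

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(100, 100)
prune.l1_unstructured(layer, name="weight", amount=0.5)
prune.remove(layer, "weight")  # bake the mask into the weight tensor

sparsity = (layer.weight == 0).float().mean().item()
print(layer.weight.shape)            # torch.Size([100, 100]) -- unchanged
print("sparsity: {:.0%}".format(sparsity))  # ~50% zeros, but dense GEMM still runs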
https://stackoverflow.com/questions/62326683/
How to rename a Pytorch object?
My Pytorch Model: EfficientDet( (backbone): EfficientNetFeatures( (conv_stem): Conv2d(4, 48, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(48, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) ... ... Is there a way to rename backbone object to some other name?
We can rename an attribute of an instance using the following function. def rename_attribute(obj, old_name, new_name): obj._modules[new_name] = obj._modules.pop(old_name) Example class EfficientNetFeatures(nn.Module): def __init__(self): super(EfficientNetFeatures, self).__init__() self.conv_stem = nn.Conv2d(4, 48, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) self.bn1 = nn.BatchNorm2d(48, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) class EfficientDet(nn.Module): def __init__(self): super(EfficientDet, self).__init__() self.backbone = EfficientNetFeatures() model = EfficientDet() print(model) rename_attribute(model, 'backbone', 'newname') print(model) Outputs: EfficientDet( (backbone): EfficientNetFeatures( (conv_stem): Conv2d(4, 48, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(48, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) ) ) EfficientDet( (newname): EfficientNetFeatures( (conv_stem): Conv2d(4, 48, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(48, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) ) )
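One caveat to add (an assumption about later use, not part of the rename itself): the rename also changes the keys under which parameters appear in state_dict(), so checkpoints saved before the rename need their keys remapped before loading. A hypothetical sketch, assuming a checkpoint file saved under the old backbone name:

import torch

# Hypothetical remapping of an old checkpoint after the rename.
old_state = torch.load("checkpoint.pth")
new_state = {k.replace("backbone.", "newname.", 1): v for k, v in old_state.items()}
model.load_state_dict(new_state)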
https://stackoverflow.com/questions/62334279/
TypeError: __call__() takes 2 positional arguments but 3 were given. To train Raccoon prediction model using FastRCNN through Transfer Learning
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor from engine import train_one_epoch, evaluate import utils import torchvision.transforms as T num_epochs = 10 for epoch in range(num_epochs): train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10) lr_scheduler.step() evaluate(model, data_loader_test, device=device) I am using the same code as provided in this link Building Raccoon Model but mine is not working. This is the error message I am getting TypeError Traceback (most recent call last) in () 2 for epoch in range(num_epochs): 3 # train for one epoch, printing every 10 iterations 4 ----> train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10) 5 # update the learning rate 6 lr_scheduler.step() 7 frames in getitem(self, idx) 29 target["iscrowd"] = iscrowd 30 if self.transforms is not None: 31 ---> img, target = self.transforms(img, target) 32 return img, target 33 TypeError: call() takes 2 positional arguments but 3 were given
You are using the wrong Compose. Note what the tutorial says (https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html#putting-everything-together): "In references/detection/, we have a number of helper functions to simplify training and evaluating detection models. Here, we will use references/detection/engine.py, references/detection/utils.py and references/detection/transforms.py. Just copy them to your folder and use them here." Those helper scripts define their own Compose and flip transforms that operate on (image, target) pairs: https://github.com/pytorch/vision/blob/6315358dd06e3a2bcbe9c1e8cdaa10898ac2b308/references/detection/transforms.py#L17 I did the same thing before noticing this. Do not use the Compose from torchvision.transforms, or else you will get the error above. Download their module and use its transforms instead.
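For reference, the detection helpers' Compose looks roughly like this (paraphrased from the linked transforms.py). Each transform takes and returns both the image and the target, which is exactly why torchvision.transforms.Compose, whose transforms accept a single argument, raises the "takes 2 positional arguments but 3 were given" error:

class Compose:
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, image, target):
        # Each transform receives and returns (image, target).
        for t in self.transforms:
            image, target = t(image, target)
        return image, target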
https://stackoverflow.com/questions/62341052/
How to use permute on this Input and Target?
I am having errors on my semantic segmentation masks with 5 classes + 1 (background). How do I use permute to avoid this? Target size (torch.Size([4, 1, 320, 480, 6])) must be the same as input size (torch.Size([4, 6, 320, 480]))
You can combine permute and unsqueeze: import torch x = torch.rand((4, 6, 320, 480)) new_x = x.permute((0,2,3,1)).unsqueeze(1) # new_x.shape = torch.Size([4, 1, 320, 480, 6])
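Depending on which shape is actually the wrong one, the cleaner fix may be to reshape the target to match the model output instead; a sketch, assuming the target really is channels-last with a singleton second dimension:

# target: [4, 1, 320, 480, 6] -> [4, 6, 320, 480]
y = y.squeeze(1).permute(0, 3, 1, 2)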
https://stackoverflow.com/questions/62348624/
Python Neural Network: 'numpy.ndarray' object has no attribute 'dim'
I am using this database for modeling http://archive.ics.uci.edu/ml/datasets/Car+Evaluation after preprocessing X_train = df.drop('class', axis=1).to_numpy() y_train = df['class'].to_numpy() X_train, X_test, y_train, y_test = train_test_split(X_train, y_train, test_size=0.2) class class network(nn.Module): def __init__(self, input_size, hidden1_size, hidden2_size, num_classes): super(network, self).__init__() self.fc1 = nn.Linear(input_size, hidden1_size) self.relu1 = nn.ReLU() self.fc2 = nn.Linear(hidden1_size, hidden2_size) self.relu2 = nn.ReLU() self.fc3 = nn.Linear(hidden2_size, num_classes) def forward(self, x): out = self.fc1(x) out = self.relu1(out) out = self.fc2(out) out = self.relu2(out) out = self.fc3(out) return out net = network(input_size=6, hidden1_size=5, hidden2_size=4, num_classes=4) optimizer = torch.optim.SGD(net.parameters(), lr=0.2) loss_func = torch.nn.MSELoss() Error is in this block plt.ion() for t in range(200): prediction = net(X_train) # input x and predict based on x loss = loss_func(prediction, y_train) # must be (1. nn output, 2. target) optimizer.zero_grad() # clear gradients for next train loss.backward() # backpropagation, compute gradients optimizer.step() # apply gradients if t % 5 == 0: # plot and show learning process plt.cla() plt.scatter(x.data.numpy(), y.data.numpy()) plt.plot(x.data.numpy(), prediction.data.numpy(), 'r-', lw=5) plt.text(0.5, 0, 'Loss=%.4f' % loss.data.numpy(), fontdict={'size': 20, 'color': 'red'}) plt.pause(0.1) plt.ioff() plt.show() Error message AttributeError Traceback (most recent call last) in () 2 3 for t in range(200): ----> 4 prediction = net(X_train) # input x and predict based on x 5 6 loss = loss_func(prediction, y_train) # must be (1. nn output, 2. target) > 4 frames /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py > in linear(input, weight, bias) 1606 if any([type(t) is not > Tensor for t in tens_ops]) and has_torch_function(tens_ops): 1607 > return handle_torch_function(linear, tens_ops, input, weight, > bias=bias) > -> 1608 if input.dim() == 2 and bias is not None: 1609 # fused op is marginally faster 1610 ret = torch.addmm(bias, input, weight.t()) > > AttributeError: 'numpy.ndarray' object has no attribute 'dim'
In prediction = net(X_train), X_train is a numpy array, but torch expects a tensor. You need to convert it to a torch tensor (and move it to the GPU if you want to use one). The first line should be X_train = torch.from_numpy(df.drop('class', axis=1).to_numpy())
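Note that torch.from_numpy keeps numpy's dtype, which for this dataset is likely float64, while the network's weights are float32, so an explicit cast avoids the next dtype mismatch. A sketch of the conversion, reusing the asker's dataframe:

import torch

X_train = torch.from_numpy(df.drop('class', axis=1).to_numpy()).float()
y_train = torch.from_numpy(df['class'].to_numpy()).float()  # MSELoss expects float targets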
https://stackoverflow.com/questions/62350980/
PyTorch ToTensor Changes C x H x W (5 x 600 x 900) to H x C x W (900 x 5 x 600)
Here is my DataLoader. When I use ToTensor, it changes the dimensions of the image to H x C x W. Is permute okay to fix this or this might change some orientation? class DPWHDataset(Dataset): def __init__(self, mean=None, std=None, phase=None, dataset=None): self.data = dataset self.mean = mean self.std = std self.phase = phase self.transforms = get_transforms(phase, mean, std) def __len__(self): return len(self.data) def __getitem__(self, idx): image_name = self.data[idx] image_path = image_prefix + image_name + ".jpg" mask_path = binary_mask_prefix + image_name + "_mask.png" mask = cv2.imread(mask_path, 0) print(image_path) # image = np.array(Image.open(image_path)) # mask = np.array(Image.open(mask_path)) image = cv2.imread(image_path) image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) mask = create_channel_mask(mask) # augmented = self.transforms(image=image, mask=mask) # image = augmented['image'] # mask = augmented['mask'] image = torchvision.transforms.ToTensor()(image) image = torchvision.transforms.Normalize(mean=self.mean, std=self.std)(image) mask = torchvision.transforms.ToTensor()(mask) return image, mask
According to the documentation, torchvision.transforms.ToTensor converts a PIL Image or numpy.ndarray (H x W x C) to a torch.FloatTensor of shape (C x H x W). So, in the following line: image = torchvision.transforms.ToTensor()(image) the resulting image tensor has shape (C x H x W), while the input array has shape (H x W x C). You can verify this by printing the shapes. And yes, you can adjust the shape using torch.permute; it won't change the image content, only the axis order.
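A quick check with a hypothetical 5-channel mask array confirms the conversion is H x W x C -> C x H x W, not an arbitrary reordering:

import numpy as np
import torchvision

mask = np.zeros((600, 900, 5), dtype=np.float32)  # H x W x C
tensor = torchvision.transforms.ToTensor()(mask)
print(tensor.shape)  # torch.Size([5, 600, 900]) -- C x H x W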
https://stackoverflow.com/questions/62357045/
Is A PyTorch Dataset Accessed by Multiple DataLoader Workers?
When using more than 1 DataLoader workers in PyTorch, does every worker access the same Dataset instance? Or does each DataLoader worker have their own instance of Dataset? from torch.utils.data import DataLoader, Dataset class NumbersDataset(Dataset): def __init__(self): self.samples = list(range(1, 1001)) def __len__(self): return len(self.samples) def __getitem__(self, idx): return self.samples[idx] dataset = NumbersDataset() train_loader = DataLoader(dataset, num_workers=4)
It seems like they are accessing the same instance (though see the caveat below). I tried adding a static variable inside the dataset class and incrementing it every time a new instance is created. The code can be found below. from torch.utils.data import DataLoader, Dataset class NumbersDataset(Dataset): i = 0 def __init__(self): NumbersDataset.i += 1 self.samples = list(range(1, 1001)) def __len__(self): return len(self.samples) def __getitem__(self, idx): return self.samples[idx] dataset_1 = NumbersDataset() train_loader = DataLoader(dataset_1, num_workers=4) for i, data in enumerate(train_loader): pass dataset_2 = NumbersDataset() train_loader = DataLoader(dataset_2, num_workers=4) for i, data in enumerate(train_loader): pass print(NumbersDataset.i) The output is 2. Hope it helps :D
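A caveat: the counter above only shows that constructing two datasets increments the class variable twice in the main process; it doesn't observe the workers. Per the PyTorch data loading docs, with num_workers > 0 each worker process receives its own copy of the dataset object, so mutations made inside workers are not visible in the main process. A small sketch demonstrating this:

from torch.utils.data import DataLoader, Dataset

class CountingDataset(Dataset):
    def __init__(self):
        self.accessed = 0
        self.samples = list(range(100))

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        self.accessed += 1  # mutates the copy living in the worker process
        return self.samples[idx]

dataset = CountingDataset()
for _ in DataLoader(dataset, num_workers=4):
    pass
print(dataset.accessed)  # 0 -- each worker mutated its own replica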
https://stackoverflow.com/questions/62361162/
Installing torchaudio in Windows
I am trying to install torchaudio on Windows from source. I installed sox and added it to the PATH environment variable, then ran python setup.py install from the repository cloned from GitHub. When I import torchaudio, I get the warning "No audio backend is available". I think this means that sox is not configured correctly, or that I have done something very wrong. How do I install torchaudio on Windows? My Linux machine, which has CUDA, is not good enough, so I can't use Linux.
You need to install an audio backend. On Windows it's PySoundFile: pip install PySoundFile This should do the trick. You can follow the torchaudio backend documentation here: https://pytorch.org/audio/backend.html
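If torchaudio still reports no backend after installing the package, the backend can also be selected explicitly, assuming a torchaudio version that exposes set_audio_backend as described on the linked page:

import torchaudio

torchaudio.set_audio_backend("soundfile")  # use PySoundFile on Windows
print(torchaudio.get_audio_backend())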
https://stackoverflow.com/questions/62369522/
PyTorch: how to apply another transform to an existing Dataset?
This is a code example: dataset = datasets.MNIST(root=root, train=istrain, transform=None) #preserve raw img print(type(dataset[0][0])) # <class 'PIL.Image.Image'> dataset = torch.utils.data.Subset(dataset, indices=SAMPLED_INDEX) # for resample for ind in range(len(dataset)): img, label = dataset[ind] # <class 'PIL.Image.Image'> <class 'int'>/<class 'numpy.int64'> img.save(fp=os.path.join(saverawdir, f'{ind:02d}-{int(label):02d}.png')) dataset.transform = transforms.Compose([ transforms.RandomResizedCrop(image_size), transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ]) #transform for net forwarding print(type(dataset[0][0])) # expected <class 'torch.Tensor'>, however it's still <class 'PIL.Image.Image'> Since dataset is randomly resampled, I don't want to reload a new dataset with transform, but just apply transform to the already existing dataset. Thanks for your help :D
You can create a small wrapper Dataset that will take care of applying the given transform to the underlying dataset on the fly. Here's an example that was posted over on the PyTorch forums: https://discuss.pytorch.org/t/torch-utils-data-dataset-random-split/32209/4 class MyDataset(Dataset): def __init__(self, subset, transform=None): self.subset = subset self.transform = transform def __getitem__(self, index): x, y = self.subset[index] if self.transform: x = self.transform(x) return x, y def __len__(self): return len(self.subset) With your code it could look something like: dataset = datasets.MNIST(root=root, train=istrain, transform=None) #preserve raw img print(type(dataset[0][0])) # <class 'PIL.Image.Image'> dataset = torch.utils.data.Subset(dataset, indices=SAMPLED_INDEX) # for resample transformed_dataset = MyDataset(dataset, transform=transforms.Compose([ transforms.RandomResizedCrop(image_size), transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ]))
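Alternatively, since torch.utils.data.Subset keeps a reference to the underlying dataset, you can set the transform on that wrapped dataset. This also explains why the original code had no effect: the Subset's __getitem__ delegates to the underlying dataset, whose transform was still None, so assigning dataset.transform on the Subset only set an unused attribute. Note this mutates the original dataset for every consumer of it:

# 'dataset' here is the Subset; '.dataset' is the underlying MNIST instance.
dataset.dataset.transform = transforms.Compose([
    transforms.RandomResizedCrop(image_size),
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,)),
])
print(type(dataset[0][0]))  # <class 'torch.Tensor'>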
https://stackoverflow.com/questions/62371522/
(pytorch / mse) How can I change the shape of tensor?
Problem definition: I have to use MSELoss function to define the loss to classification problem. Therefore it keeps saying the error message regarding the shape of tensor. Entire error message: torch.Size([32, 10]) torch.Size([32]) --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) in 53 output = model.forward(images) 54 print(output.shape, labels.shape) ---> 55 loss = criterion(output, labels) 56 loss.backward() 57 optimizer.step() /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in call(self, *input, **kwargs) 530 result = self._slow_forward(*input, **kwargs) 531 else: --> 532 result = self.forward(*input, **kwargs) 533 for hook in self._forward_hooks.values(): 534 hook_result = hook(self, input, result) /opt/conda/lib/python3.7/site-packages/torch/nn/modules/loss.py in forward(self, input, target) 429 430 def forward(self, input, target): --> 431 return F.mse_loss(input, target, reduction=self.reduction) 432 433 /opt/conda/lib/python3.7/site-packages/torch/nn/functional.py in mse_loss(input, target, size_average, reduce, reduction) 2213 ret = torch.mean(ret) if reduction == 'mean' else torch.sum(ret) 2214 else: -> 2215 expanded_input, expanded_target = torch.broadcast_tensors(input, target) 2216 ret = torch._C._nn.mse_loss(expanded_input, expanded_target, _Reduction.get_enum(reduction)) 2217 return ret /opt/conda/lib/python3.7/site-packages/torch/functional.py in broadcast_tensors(*tensors) 50 [0, 1, 2]]) 51 """ ---> 52 return torch._C._VariableFunctions.broadcast_tensors(tensors) 53 54 > RuntimeError: The size of tensor a (10) must match the size of tensor b (32) at non-singleton dimension 1 How can I reshape the tensor, and which tensor (output or labels) should I change to calculate the loss? Entire code is attached below. 
import numpy as np import torch # Loading the Fashion-MNIST dataset from torchvision import datasets, transforms # Get GPU Device device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))]) # Download and load the training data trainset = datasets.FashionMNIST('MNIST_data/', download = True, train = True, transform = transform) testset = datasets.FashionMNIST('MNIST_data/', download = True, train = False, transform = transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size = 32, shuffle = True, num_workers=4) testloader = torch.utils.data.DataLoader(testset, batch_size = 32, shuffle = True, num_workers=4) # Examine a sample dataiter = iter(trainloader) images, labels = dataiter.next() # Define the network architecture from torch import nn, optim import torch.nn.functional as F model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10), nn.LogSoftmax(dim = 1)) model.to(device) # Define the loss criterion = nn.MSELoss() # Define the optimizer optimizer = optim.Adam(model.parameters(), lr = 0.001) # Define the epochs epochs = 5 train_losses, test_losses = [], [] for e in range(epochs): running_loss = 0 for images, labels in trainloader: # Flatten Fashion-MNIST images into a 784 long vector images = images.to(device) labels = labels.to(device) images = images.view(images.shape[0], -1) # Training pass optimizer.zero_grad() output = model.forward(images) print(output.shape, labels.shape) loss = criterion(output, labels) loss.backward() optimizer.step() running_loss += loss.item() else: test_loss = 0 accuracy = 0 # Turn off gradients for validation, saves memory and computation with torch.no_grad(): # Set the model to evaluation mode model.eval() # Validation pass for images, labels in testloader: images = images.to(device) labels = labels.to(device) images = images.view(images.shape[0], -1) ps = model(images) test_loss += criterion(ps, labels) top_p, top_class = ps.topk(1, dim = 1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) model.train() print("Epoch: {}/{}..".format(e+1, epochs), "Training loss: {:.3f}..".format(running_loss/len(trainloader)), "Test loss: {:.3f}..".format(test_loss/len(testloader)), "Test Accuracy: {:.3f}".format(accuracy/len(testloader)))
From the output you print before it errors: torch.Size([32, 10]) and torch.Size([32]). The left one is what the model gives you and the right one comes from the trainloader; labels in that form are normally used with something like nn.CrossEntropyLoss. And from the full error log, the error comes from this line: loss = criterion(output, labels) The way to make this work with MSELoss is one-hot encoding. A quick way to write it: ones = torch.eye(10).to(device) # 10 = number of classes labels = ones.index_select(0, labels)
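Since PyTorch 1.1 there is also a built-in helper for this. A sketch using torch.nn.functional.one_hot (MSELoss needs floating-point inputs, hence the cast):

import torch.nn.functional as F

labels_onehot = F.one_hot(labels, num_classes=10).float()
loss = criterion(output, labels_onehot)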
https://stackoverflow.com/questions/62383595/
Horovod Elasticity: Adjust Number of Workers at Runtime
I have been using Horovod with both TensorFlow and PyTorch in Docker; everything works fine with a fixed number of containers, as explained in Horovod docker. I have checked the Horovod Elastic demos in the Horovod examples, but they don't show how to change the number of workers at runtime. What I need to know is how to scale the number of workers up or down at runtime.
What you need for that is a Docker-specific host discovery that tells Elastic Horovod about all available containers. A generic way to do this is by using horovodrun and providing a host discovery script via --host-discovery-script. When invoked, the script returns a list of available hosts. See the Running with horovodrun section of the Elastic Horovod documentation. In the near future there will be service provider specific host discoveries built into Horovod so users do not need to implement scripts for common providers.
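A discovery script is simply any executable that prints the currently available hosts, one per line, optionally as hostname:slots. For a Docker setup it might look something like the rough sketch below; the container naming convention and the one-slot-per-container assumption are hypothetical and would need to match your deployment:

#!/usr/bin/env python
# discover_hosts.py -- hypothetical discovery script for Elastic Horovod.
import subprocess

# Assumes worker containers are named like "hvd-worker-*".
names = subprocess.check_output(
    ["docker", "ps", "--filter", "name=hvd-worker", "--format", "{{.Names}}"],
    text=True,
).split()

for name in names:
    print("{}:1".format(name))  # one slot (GPU) per container, assumed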
https://stackoverflow.com/questions/62393009/
Pytorch Error, RuntimeError: expected scalar type Long but found Double
I have run into the following error while training a BERT classifier. The type(b_input_mask) = type(b_labels) = torch.Tensor type(b_labels[i]) = tensor(1., dtype=torch.float64) type(b_input_masks[i]) = class'torch.Tensor' What could be the possible data type error here since I have not typecasted any variable to either long or double? Thanks in advance!
In a classification task, the data type of the labels should be Long, but you have them as float64: b_labels[i] is tensor(1., dtype=torch.float64). Cast the labels to long before computing the loss.
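A one-line fix, assuming b_labels holds integer class indices stored as floats:

b_labels = b_labels.long()  # equivalently: b_labels.to(torch.long)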
https://stackoverflow.com/questions/62400112/
Simple L1 loss in PyTorch
I want to calculate L1 loss in a neural network, I came across this example at https://discuss.pytorch.org/t/simple-l2-regularization/139/2, but there are some errors in this code. Is this really how to calculate L1 Loss in a NN or is there a simpler way? l1_crit = nn.L1Loss() reg_loss = 0 for param in model.parameters(): reg_loss += l1_crit(param) factor = 0.0005 loss += factor * reg_loss Is this equivalent in any way to simple doing: loss = torch.nn.L1Loss() I assume not, because I am not passing along any network parameters. Just checking if there isn existing function to do this.
If I understand correctly, you want to compute the L1 loss of your model (as you say in the beginning). However, I think you might have gotten confused with the discussion in the PyTorch forum. From what I understand, in the PyTorch forum and in the code you posted, the author is trying to regularize the network weights with an L1 penalty. So it is trying to enforce that weight values fall in a sensible range (not too big, not too small) — that is weight regularization using the L1 norm (which is why it uses model.parameters()). For weight normalization specifically, see: https://pytorch.org/docs/master/generated/torch.nn.utils.weight_norm.html On the other hand, L1 loss is just a way to measure how much two values differ, so the "loss" is a measure of this difference. L1 loss computes the mean absolute error, loss = |x - y|, where x and y are the values to compare; it takes two values as input and produces one value as output. See: https://pytorch.org/docs/master/generated/torch.nn.L1Loss.html To answer your question: no, the above snippets are not equivalent, since the first is trying to do weight regularization and the second is computing a loss. This would be the loss computation with some context: sample, target = dataset[i] target_predicted = model(sample) loss = torch.nn.L1Loss() loss_value = loss(target_predicted, target)
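As for the errors in the snippet you quoted: nn.L1Loss requires two arguments (input and target), so l1_crit(param) fails. An L1 weight penalty can be written without the loss module at all; a minimal sketch, with the regularization strength as a hyperparameter you would tune:

l1_lambda = 0.0005
l1_penalty = sum(p.abs().sum() for p in model.parameters())
loss = loss + l1_lambda * l1_penalty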
https://stackoverflow.com/questions/62404149/
pytorch versus autograd.numpy
What are the big differences between pytorch and numpy, in particular, the autograd.numpy package? ( since both of them can compute the gradient automatically for you.) I know that pytorch can move tensors to GPU, but is this the only reason for choosing pytorch over numpy? While pytorch is well known for deep learning, obviously it can be used for almost any machine learning algorithm, its nn.Module structure is very flexible and we don't have to confine to the neural networks. (although I've never seen any neural network model written in numpy) So I'm wondering what's the biggest difference underlying pytorch and numpy.
I'm not sure if this question can be objectively answered, but besides the GPU functionality, PyTorch offers: parallelisation across GPUs; parallelisation across machines; DataLoaders / manipulators, incl. asynchronous pre-fetching; optimizers; predefined/pretrained models (which can save you a lot of time); and more. But as you said, it's built around deep/machine learning, so that is what it's good at, while numpy (together with scipy) is much more general and can be used to solve a large range of other engineering problems (possibly using methods that are not en vogue at the moment).
https://stackoverflow.com/questions/62404451/
PyTorch Networks within a Model
I would like to define a network that comprises many templates. Below under Network Definitions is a simplified example where the first network definition is used as a template in the second one. This doesn't work - when I initialise my optimiser is says that the network parameters are empty! How should I do this properly? The network that I ultimately want is very complicated. Main Function if __name__ == "__main__": myNet = Network().cuda().train() optimizer = optim.SGD(myNet.parameters(), lr=0.01, momentum=0.9) Network definitions: class NetworkTemplate(nn.Module): def __init__(self): super(NetworkTemplate, self).__init__() self.conv1 = nn.Conv2d(1, 3, kernel_size=1, bias=False) self.bn1 = nn.BatchNorm2d(3) def forward(self, x): x = self.conv1(x) x = self.bn1(x) return x class Network(nn.Module): def __init__(self, nNets): super(Network, self).__init__() self.nets = [] for curNet in range(nNets): self.nets.append(NetworkTemplate()) def forward(self, x): for curNet in self.nets: x = curNet(x) return x
The parameters show up as empty because a plain Python list does not register its contents as submodules; wrap them in a container module instead. Just use torch.nn.Sequential: set self.nets = torch.nn.Sequential(*self.nets) after you have populated self.nets, and then return self.nets(x) in your forward function. If you want to do something more complicated, you can put all networks into a torch.nn.ModuleList; however, you'll then need to take care of calling them manually in your forward method (which is more involved than Sequential, but more flexible).
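A minimal sketch of the ModuleList variant, keeping the asker's class names:

import torch.nn as nn

class Network(nn.Module):
    def __init__(self, nNets):
        super(Network, self).__init__()
        # ModuleList registers each template so its parameters are visible
        # to optimizers, .cuda(), state_dict(), etc.
        self.nets = nn.ModuleList(NetworkTemplate() for _ in range(nNets))

    def forward(self, x):
        for net in self.nets:
            x = net(x)
        return x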
https://stackoverflow.com/questions/62407647/
Pytorch doesn't find a CUDA device
I tried using Cuda in Pytorch in my set up but it can't be detected and I am puzzled as to why. torch.cuda.is_available() return False. Digging deeper, torch._C._cuda_getDeviceCount() returns 0. Using version 1.5, e.g. $ pip freeze | grep torch torch==1.5.0 I tried to write a small C program to do the same, e.g. #include <stdio.h> #include <cuda_runtime_api.h> int main() { int count = 0; cudaGetDeviceCount(&count); printf("Device count: %d\n", count); return 0; } prints 1, so the Cuda runtime can obviously find a device. Also, running nvidia-smi: +-----------------------------------------------------------------------------+ | NVIDIA-SMI 435.21 Driver Version: 435.21 CUDA Version: 10.1 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 GeForce GTX 106... Off | 00000000:02:00.0 On | N/A | | 0% 41C P8 9W / 200W | 219MiB / 6075MiB | 0% Default | +-------------------------------+----------------------+----------------------+ So where did my Cuda device disappear in Python?
I just realized that there is a different version of PyTorch for every minor version of CUDA, so in my case torch==1.5.0 apparently defaults to CUDA 10.2, while the special package torch==1.5.0+cu101 works. I hope this clears things up for other people who, like me, start by reading the docs on PyPI (more up-to-date docs, if you know where to look, are here: https://pytorch.org/get-started/locally/)
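For reference, the CUDA-specific wheels are installed from PyTorch's own index rather than PyPI; at the time, the command for the CUDA 10.1 build looked like this (version pins as in the answer above):

pip install torch==1.5.0+cu101 torchvision==0.6.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html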
https://stackoverflow.com/questions/62407851/
Building RNN from scratch in pytorch
I am trying to build RNN from scratch using pytorch and I am following this tutorial to build it. import torch import torch.nn as nn import torch.nn.functional as F class BasicRNN(nn.Module): def __init__(self, n_inputs, n_neurons): super(BasicRNN, self).__init__() self.Wx = torch.randn(n_inputs, n_neurons) # n_inputs X n_neurons self.Wy = torch.randn(n_neurons, n_neurons) # n_neurons X n_neurons self.b = torch.zeros(1, n_neurons) # 1 X n_neurons def forward(self, X0, X1): self.Y0 = torch.tanh(torch.mm(X0, self.Wx) + self.b) # batch_size X n_neurons self.Y1 = torch.tanh(torch.mm(self.Y0, self.Wy) + torch.mm(X1, self.Wx) + self.b) # batch_size X n_neurons return self.Y0, self.Y1 class CleanBasicRNN(nn.Module): def __init__(self, batch_size, n_inputs, n_neurons): super(CleanBasicRNN, self).__init__() self.rnn = BasicRNN(n_inputs, n_neurons) self.hx = torch.randn(batch_size, n_neurons) # initialize hidden state def forward(self, X): output = [] # for each time step for i in range(2): self.hx = self.rnn(X[i], self.hx) output.append(self.hx) return output, self.hx FIXED_BATCH_SIZE = 4 # our batch size is fixed for now N_INPUT = 3 N_NEURONS = 5 X_batch = torch.tensor([[[0,1,2], [3,4,5], [6,7,8], [9,0,1]], [[9,8,7], [0,0,0], [6,5,4], [3,2,1]] ], dtype = torch.float) # X0 and X1 model = CleanBasicRNN(FIXED_BATCH_SIZE,N_INPUT,N_NEURONS) a1,a2 = model(X_batch) Running this code returns this error RuntimeError: size mismatch, m1: [4 x 5], m2: [3 x 5] at /pytorch/.. After some digging I found this error happens when passing the hidden states to the BasicRNN model N_INPUT = 3 # number of features in input N_NEURONS = 5 # number of units in layer X0_batch = torch.tensor([[0,1,2], [3,4,5], [6,7,8], [9,0,1]], dtype = torch.float) #t=0 => 4 X 3 X1_batch = torch.tensor([[9,8,7], [0,0,0], [6,5,4], [3,2,1]], dtype = torch.float) #t=1 => 4 X 3 test_model = BasicRNN(N_INPUT,N_NEURONS) a1,a2 = test_model(X0_batch,X1_batch) a1,a2 = test_model(X0_batch,torch.randn(1,N_NEURONS)) # THIS LINE GIVES ERROR What is happening in the hidden states and How can I solve this problem?
Maybe the tutorial is wrong: torch.mm(X1, self.Wx) tries to multiply a 4 x 5 tensor (the hidden state) by the 3 x 5 weight Wx, which doesn't work. Even if you make the shapes line up by rewriting it as torch.mm(X1, self.Wx.t()), you would expect a 4 x 5 output, but the result is a 4 x 3 tensor. The root problem is that CleanBasicRNN passes the previous hidden state where BasicRNN expects the input at the next time step; the hidden-to-hidden multiplication should use Wy, not Wx.
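A corrected single-step cell, as a sketch: the recurrent term multiplies the previous hidden state by Wy (n_neurons x n_neurons), while the input term multiplies the current input by Wx (n_inputs x n_neurons), so both products come out as [batch, n_neurons]:

import torch
import torch.nn as nn

class BasicRNNCell(nn.Module):
    def __init__(self, n_inputs, n_neurons):
        super(BasicRNNCell, self).__init__()
        self.Wx = torch.randn(n_inputs, n_neurons)   # input -> hidden
        self.Wy = torch.randn(n_neurons, n_neurons)  # hidden -> hidden
        self.b = torch.zeros(1, n_neurons)

    def forward(self, x, hx):
        # x: [batch, n_inputs], hx: [batch, n_neurons]
        return torch.tanh(torch.mm(x, self.Wx) + torch.mm(hx, self.Wy) + self.b)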
https://stackoverflow.com/questions/62408067/
Get Training error if setting CUDA VISIBLE DEVICE
I am using PyTorch and CUDA 10.1. If I set CUDA_VISIBLE_DEVICES in training, the loss is always NaN; if I don't set it, everything works well. Does anyone know what the problem is?
CUDA_VISIBLE_DEVICES is an OS-level environment variable read by the CUDA runtime. It controls which of your machine's GPUs are made available to perform CUDA computations, and it must be set prior to running your code. If you are trying to control whether PyTorch uses GPUs, and which ones, you should use the built-in torch.cuda package for device management. import torch n_gpus = torch.cuda.device_count() if n_gpus > 0: device = torch.device("cuda:0") # first device as indexed by pytorch cuda print("cuda:0 is device {}".format(torch.cuda.get_device_name(device))) # prints name of device if n_gpus > 1: # if you have more than one device, and so on device2 = torch.device("cuda:1") print("cuda:1 is device {}".format(torch.cuda.get_device_name(device2))) # from here, decide which device you want to use and # transfer files to this device accordingly model.to(device) x.to(device2) # etc. The only reason you'd want to use CUDA_VISIBLE_DEVICES is if you have multiple GPUs and need some of them available for CUDA/PyTorch tasks and others for non-CUDA tasks, and you are worried about the small amount of GPU memory that torch.cuda consumes on a GPU once it is registered as a PyTorch device. For most applications this isn't necessary and you should just use PyTorch's device management.
https://stackoverflow.com/questions/62416184/
Pytorch vision size mismatch, m1
im trying to run a simple linear regression but i have error when i try to train. The size of images is the shapes of data train print(dataset_train[0][0].shape) shows me torch.Size([3, 227, 227]) size_of_image=3*227*227 class linearRegression(nn.Module): def __init__(self, inputSize, outputSize): super(linearRegression, self).__init__() self.linear = nn.Linear(inputSize, outputSize) def forward(self, x): out = self.linear(x) return out model = linearRegression(size_of_image, 1) optimizer = torch.optim.SGD(model.parameters(), lr=0.1) criterion = torch.nn.CrossEntropyLoss() trainloader = DataLoader(dataset = dataset_train, batch_size = 1000) for epoch in range(5): for x, y in trainloader: yhat = model(x) loss = criterion(yhat, y) optimizer.zero_grad() loss.backward() optimizer.step() I tried to unserstand what its the mean of error but i dont found a solution, can anyone help me? --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-44-6f00f9272a22> in <module> 1 for epoch in range(5): 2 for x, y in trainloader: ----> 3 yhat = model(x) 4 loss = criterion(yhat, y) 5 optimizer.zero_grad() ~/PycharmProjects/estudios/venv/lib/python3.8/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 548 result = self._slow_forward(*input, **kwargs) 549 else: --> 550 result = self.forward(*input, **kwargs) 551 for hook in self._forward_hooks.values(): 552 hook_result = hook(self, input, result) <ipython-input-21-d20eb6e0c349> in forward(self, x) 5 6 def forward(self, x): ----> 7 out = self.linear(x) 8 return out ~/PycharmProjects/estudios/venv/lib/python3.8/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 548 result = self._slow_forward(*input, **kwargs) 549 else: --> 550 result = self.forward(*input, **kwargs) 551 for hook in self._forward_hooks.values(): 552 hook_result = hook(self, input, result) ~/PycharmProjects/estudios/venv/lib/python3.8/site-packages/torch/nn/modules/linear.py in forward(self, input) 85 86 def forward(self, input): ---> 87 return F.linear(input, self.weight, self.bias) 88 89 def extra_repr(self): ~/PycharmProjects/estudios/venv/lib/python3.8/site-packages/torch/nn/functional.py in linear(input, weight, bias) 1610 ret = torch.addmm(bias, input, weight.t()) 1611 else: -> 1612 output = input.matmul(weight.t()) 1613 if bias is not None: 1614 output += bias Im RuntimeError: size mismatch, m1: [681000 x 227], m2: [154587 x 1] at /pytorch/aten/src/TH/generic/THTensorMath.cpp:41
In linearRegression, you have defined the linear transformation as: nn.Linear(3*227*227, 1) which means the Linear layer expects 3*227*227 input features and will output 1 feature. However, you feed a 4D tensor of shape [1000, 3, 227, 227] (batch-channel-height-width) to the Linear layer, which treats the last dimension as the feature dimension. This means the Linear layer is getting 227 input features instead of 3*227*227, so you get the following error: RuntimeError: size mismatch, m1: [681000 x 227], m2: [154587 x 1] Note that the Linear layer multiplies the input by a weight matrix of shape in_features x out_features (in your case, [154587 x 1]), and the input to a Linear layer is flattened to a 2D tensor; in your case, it is [1000*3*227 x 227] = [681000 x 227]. So, an attempt to perform matrix multiplication of two tensors with shapes [681000 x 227] and [154587 x 1] results in the above error.
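A sketch of one possible fix, assuming you want to keep nn.Linear(3*227*227, 1): flatten each sample inside forward so the Linear layer sees all 154587 features at once:
class linearRegression(nn.Module):
    def __init__(self, inputSize, outputSize):
        super(linearRegression, self).__init__()
        self.linear = nn.Linear(inputSize, outputSize)

    def forward(self, x):
        x = x.view(x.size(0), -1)  # flatten [N, 3, 227, 227] -> [N, 154587]
        return self.linear(x)
(Separately, note that nn.CrossEntropyLoss expects one output feature per class, so a single output unit is its own problem if you have more than one class.)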
https://stackoverflow.com/questions/62420902/
Unable to get cuda available in Jupyter Notebook or Spyder for PyTorch
I've installed PyTorch with CUDA using both pip and conda. When I run this command in IDLE: >> import torch >> torch.cuda.is_available() I get "True", but in Spyder or Jupyter Notebook it gives "False", even after updating the package and conda. The conda update of pytorch cuda was from 10.1 to 10.2
It can be resolved by creating another conda environment and then installing PyTorch in it. Type: >conda create -n yourenvname python=x.x anaconda where yourenvname is the environment name and python=x.x is the Python version for your environment. Activate the environment using: >conda activate yourenvname then install PyTorch with CUDA: >conda install pytorch torchvision cudatoolkit=10.2 -c pytorch Open Spyder or Jupyter Notebook and verify that it is installed by typing: > import torch > torch.cuda.is_available()
https://stackoverflow.com/questions/62423921/
group rows by specific value in one column and calculate the mean in PyTorch
the sample tensor: tensor([[ 0., 1., 2., 3., 4., 5.], # class1 [ 6., 7., 8., 9., 10., 11.], # class3 [12., 13., 14., 15., 16., 17.], # class2 [18., 19., 20., 21., 22., 23.], # class0 [24., 25., 26., 27., 28., 29.] # class1 ]) the expected result: tensor([[18., 19., 20., 21., 22., 23.], # class0 [12., 13., 14., 15., 16., 17.], # class1 [12., 13., 14., 15., 16., 17.], # class2 [ 6., 7., 8., 9., 10., 11.] # class3 ]) Is there a pure PyTorch method to implement this?
You can add according to class index using index_add and then divide by the number of each label, computed using unique: # inputs x = torch.arange(30.).view(5,6) # sample tensor c = torch.tensor([1, 3, 2, 0, 1], dtype=torch.long) # class indices # allocate space for output result = torch.zeros((c.max() + 1, x.shape[1]), dtype=x.dtype) # use index_add_ to sum up rows according to class result.index_add_(0, c, x) # use "unique" to count how many of each class _, counts = torch.unique(c, return_counts=True) # divide the sum by the counts to get the average result /= counts[:, None] The result is as expected: Out[*]: tensor([[18., 19., 20., 21., 22., 23.], [12., 13., 14., 15., 16., 17.], [12., 13., 14., 15., 16., 17.], [ 6., 7., 8., 9., 10., 11.]])
https://stackoverflow.com/questions/62424100/
Decrease the maximum learning rate after every restart
I'm training a neural network for a computer vision-based task. For the optimizer, I found out that it isn't ideal to use a single learning rate for the entire training, so people use learning rate schedulers to decay the learning rate in a specific manner. To do this, I tried out PyTorch's CosineAnnealingWarmRestarts(). What this does is anneal/decrease the initial learning rate (set by us) in a cosine manner until it hits a restart. After this "restart," the learning rate is set back to the initial learning rate, and the cycle happens again. This worked pretty well for me, but I wanted to make one change: I want to change the learning rate the optimizer is assigned after each restart, so that after every restart the maximum learning rate for the optimizer also decreases. Can this be done in PyTorch?
It seems to me that a straightforward solution would be to inherit from CosineAnnealingWarmRestarts and then change its self.optimizer parameters inside an overridden step function. In pseudo-code, that would be something like class myScheduler(torch.optim.lr_scheduler.CosineAnnealingWarmRestarts): def __init__(self, optimizer, T_0, T_mult=1, eta_min=0, last_epoch=-1): #initialize base class super().__init__(.... blablabla ...) def step(self): #call step() from base class #Do some book-keeping to determine if you've hit a restart #now change optimizer lr for each parameter group if some_condition: #condition like number of iterations, restarts, etc self.optimizer.param_groups[i]['lr'] *= some_coef
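A more concrete sketch of the same idea, under the assumption that a restart can be detected by T_cur wrapping around to a smaller value, and with a hypothetical decay factor; check the scheduler internals of your PyTorch version before relying on this:
import torch
from torch.optim.lr_scheduler import CosineAnnealingWarmRestarts

class DecayingCosineAnnealingWarmRestarts(CosineAnnealingWarmRestarts):
    def __init__(self, optimizer, T_0, T_mult=1, eta_min=0, decay=0.5, last_epoch=-1):
        self.decay = decay  # hypothetical: factor applied to the max LR at every restart
        super().__init__(optimizer, T_0, T_mult=T_mult, eta_min=eta_min, last_epoch=last_epoch)

    def step(self, epoch=None):
        prev_T_cur = self.T_cur
        super().step(epoch)
        if self.T_cur < prev_T_cur:  # T_cur wrapped around, i.e. a restart just happened
            # base_lrs holds the per-group maximum LRs used by get_lr()
            self.base_lrs = [lr * self.decay for lr in self.base_lrs]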
https://stackoverflow.com/questions/62427719/
pytorch: How to implement one link per neuron?
For example, I would like to have a standard feed-forward neural network with the following structure: n input neurons n neurons on the second layer 2 neurons on the third layer n neurons on the fourth layer where the i-th neuron in the first layer is connected precisely to the i-th neuron in the second layer (don't know how to do that) the second and the third layer are fully connected, the same goes for the third and the fourth layer (I know how to do that - using nn.Linear) loss function is MSE + L1 norm of the (vector of) weights between the first two layers (depends on the solution of the question whether I can do that) Motivation: I want to implement an autoencoder and try to achieve some sparsity (this is why the inputs are multiplied by a single weight (going from the first to the second layer)).
You can implement a custom layer, similar to nn.Linear: import math import torch from torch import nn class ElementWiseLinear(nn.Module): __constants__ = ['n_features'] n_features: int weight: torch.Tensor def __init__(self, n_features: int, bias: bool = True) -> None: super(ElementWiseLinear, self).__init__() self.n_features = n_features self.weight = nn.Parameter(torch.Tensor(1, n_features)) if bias: self.bias = nn.Parameter(torch.Tensor(n_features)) else: self.register_parameter('bias', None) self.reset_parameters() def reset_parameters(self) -> None: nn.init.kaiming_uniform_(self.weight, a=math.sqrt(5)) if self.bias is not None: fan_in, _ = nn.init._calculate_fan_in_and_fan_out(self.weight) bound = 1 / math.sqrt(fan_in) nn.init.uniform_(self.bias, -bound, bound) def forward(self, input: torch.Tensor) -> torch.Tensor: output = torch.mul(input, self.weight) if self.bias is not None: output += self.bias return output def extra_repr(self) -> str: return 'in_features={}, out_features={}, bias={}'.format( self.n_features, self.n_features, self.bias is not None ) and use it like this: x = torch.rand(3) layer = ElementWiseLinear(3, bias=False) output = layer(x) Of course you can make things a lot simpler than that :)
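For the L1 penalty on those element-wise weights mentioned in the question, a minimal sketch is to add the term to the loss by hand (lambda_l1 is a hypothetical hyperparameter, and the reconstruction target here is simplified to the layer's own input):
criterion = nn.MSELoss()
lambda_l1 = 1e-4  # hypothetical sparsity strength

output = layer(x)
loss = criterion(output, x) + lambda_l1 * layer.weight.abs().sum()  # reconstruction + L1
loss.backward()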
https://stackoverflow.com/questions/62429782/
What is the difference between detach, clone and deepcopy in Pytorch tensors in detail?
I've been struggling to understand the differences between .clone(), .detach() and copy.deepcopy when using PyTorch, in particular with PyTorch tensors. I tried writing down all my questions about their differences and use cases and became overwhelmed quickly; I realized that perhaps laying out the 4 main properties of PyTorch tensors would clarify which one to use much better than going through every small question. The 4 main properties I realized one needs to keep track of are: if one has a new pointer/reference to a tensor if one has a new tensor object instance (and thus most likely this new instance has its own metadata like requires_grad, shape, is_leaf, etc.) if it has allocated new memory for the tensor data (i.e. whether this new tensor is a view of a different tensor) if it's tracking the history of operations or not (or even if it's tracking a completely new history of operations or the same old one, in the case of deep copy) According to what I mined out from the PyTorch forums and the documentation, these are my current distinctions for each when used on tensors: Clone For clone: x_cloned = x.clone() I believe this is how it behaves according to the main 4 properties: the cloned x_cloned has its own python reference/pointer to the new object it has created its own new tensor object instance (with its separate metadata) it has allocated new memory for x_cloned with the same data as x it is keeping track of the original history of operations, and in addition includes this clone operation as .grad_fn=<CloneBackward> The main use of this, as I understand it, is to create copies of things so that inplace_ operations are safe. In addition, coupled with .detach as .detach().clone() (the "better" order to do it btw) it creates a completely new tensor that has been detached from the old history and thus stops gradient flow through that path. Detach x_detached = x.detach() creates a new python reference (the only operation that does not is x_new = x, of course). One can use id to check this, I believe it has created its own new tensor object instance (with its separate metadata) it has NOT allocated new memory for x_detached (it shares the same data as x) it cuts the history of the gradients and does not allow gradients to flow through it. I think it's right to think of it as having no history, as a brand new tensor. I believe the only sensible use I know of is creating new copies with their own memory when coupled with .clone(), as .detach().clone(). Otherwise, I am not sure what the use is. Since it points to the original data, doing in-place ops might be potentially dangerous (since it changes the old data, but the change to the old data is NOT known by autograd in the earlier computation graph). copy.deepcopy x_deepcopy = copy.deepcopy(x) one has a new pointer/reference to the tensor it creates a new tensor instance with its own metadata (all of the metadata should point to deep copies, so new objects, if it's implemented as one would expect, I hope) it has its own memory allocated for the tensor data If it truly is a deep copy, I would expect a deep copy of the history. So it should do a deep replication of the history. This seems really expensive, but it is at least semantically consistent with what deep copy should be. I don't really see a use case for this. I assume anyone trying to use this really meant 1) .detach().clone() or just 2) .clone() by itself, depending on whether one wants to stop gradient flows to the earlier graph with 1, or just wants to replicate the data with new memory with 2).
So this is the best way I have to understand the differences as of now, rather than asking about all the different scenarios where one might use them. So is this right? Does anyone see any major flaw that needs to be corrected? My own worry is about the semantics I gave to deep copy, and I wonder if it's correct with respect to deep copying the history. I think a list of common use cases for each would be wonderful. Resources these are all the resources I've read and participated in to arrive at the conclusions in this question: Migration guide to 0.4.0 https://pytorch.org/blog/pytorch-0_4_0-migration-guide/ Confusion about using clone: https://discuss.pytorch.org/t/confusion-about-using-clone/39673/3 Clone and detach in v0.4.0: https://discuss.pytorch.org/t/clone-and-detach-in-v0-4-0/16861/2 Docs for clone: https://pytorch.org/docs/stable/tensors.html#torch.Tensor.clone Docs for detach (search for the word detach in your browser, there is no direct link): https://pytorch.org/docs/stable/tensors.html#torch.Tensor Difference between detach().clone() and clone().detach(): https://discuss.pytorch.org/t/difference-between-detach-clone-and-clone-detach/34173 Why am I able to change the value of a tensor without the computation graph knowing about it in Pytorch with detach? Copy.deepcopy() vs clone(): https://discuss.pytorch.org/t/copy-deepcopy-vs-clone/55022/10
Note: Since this question was posted the behaviour and doc pages for these functions have been updated. torch.clone() Copies the tensor while maintaining a link in the autograd graph. To be used if you want to e.g. duplicate a tensor as an operation in a neural network (for example, passing a mid-level representation to two different heads for calculating different losses): Returns a copy of input. NOTE: This function is differentiable, so gradients will flow back from the result of this operation to input. To create a tensor without an autograd relationship to input see detach(). torch.Tensor.detach() Returns a view of the original tensor without the autograd history. To be used if you want to manipulate the values of a tensor (not in place) without affecting the computational graph (e.g. reporting values midway through the forward pass). Returns a new Tensor, detached from the current graph. The result will never require gradient. This method also affects forward mode AD gradients and the result will never have forward mode AD gradients. NOTE: Returned Tensor shares the same storage with the original one. In-place modifications on either of them will be seen, and may trigger errors in correctness checks. copy.deepcopy deepcopy is a generic python function from the copy library which makes a copy of an existing object (recursively if the object itself contains objects). This is used (as opposed to more usual assignment) when the underlying object you wish to make a copy of is mutable (or contains mutables) and would be susceptible to mirroring changes made in one: Assignment statements in Python do not copy objects, they create bindings between a target and an object. For collections that are mutable or contain mutable items, a copy is sometimes needed so one can change one copy without changing the other. In a PyTorch setting, as you say, if you want a fresh copy of a tensor object to use in a completely different setting with no relationship or effect on its parent, you should use .detach().clone(). IMPORTANT NOTE: Previously, in-place size / stride / storage changes (such as resize_ / resize_as_ / set_ / transpose_) to the returned tensor also update the original tensor. Now, these in-place changes will not update the original tensor anymore, and will instead trigger an error. For sparse tensors: In-place indices / values changes (such as zero_ / copy_ / add_) to the returned tensor will not update the original tensor anymore, and will instead trigger an error.
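A small runnable illustration of the three behaviours (outputs checked against recent PyTorch versions):
import copy
import torch

x = torch.ones(2, requires_grad=True)
y = x.clone()           # new memory, still connected to the graph (y.grad_fn is CloneBackward)
z = x.detach()          # shares storage with x, no autograd history
w = x.detach().clone()  # independent copy with no history
d = copy.deepcopy(x)    # recursive copy; keeps requires_grad=True

print(y.requires_grad, z.requires_grad, w.requires_grad, d.requires_grad)
# True False False True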
https://stackoverflow.com/questions/62437509/
tensor index manipulation with "..."
Hi, I'm new to PyTorch and torch tensors. I'm reading YOLOv3 code and encountered this question. I think it relates to tensor indexing with ..., but it's difficult to search for ... on Google, so I figured I'd ask it here. The code is: prediction = ( x.view(num_samples, self.num_anchors, self.num_classes + 5, grid_size, grid_size) .permute(0, 1, 3, 4, 2) .contiguous() ) print (prediction.shape) # Get outputs x = torch.sigmoid(prediction[..., 0]) # Center x y = torch.sigmoid(prediction[..., 1]) # Center y w = prediction[..., 2] # Width h = prediction[..., 3] # Height pred_conf = torch.sigmoid(prediction[..., 4]) # Conf pred_cls = torch.sigmoid(prediction[..., 5:]) # Cls pred. My understanding is that prediction will be a tensor with shape [batch, anchor, x_grid, y_grid, class]. But what does prediction[..., x] do (x=0,1,2,3,4,5)? Is it similar to numpy indexing of [:, x]? If so, the calculation of x, y, w, h, pred_conf and pred_cls doesn't make sense.
It's called Ellipsis. It indicates unspecified dimensions of an ndarray or tensor. Here, if prediction's shape is [batch, anchor, x_grid, y_grid, class] then prediction[..., 0] # is equivalent to prediction[:,:,:,:,0] prediction[..., 1] # is equivalent to prediction[:,:,:,:,1] More: prediction[0, ..., 0] # equivalent to prediction[0,:,:,:,0] You can also write ... as Ellipsis: prediction[Ellipsis, 0]
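A quick check of the equivalence:
import torch

prediction = torch.arange(720.).view(2, 3, 4, 5, 6)
print(prediction[..., 0].shape)  # torch.Size([2, 3, 4, 5])
print(torch.equal(prediction[..., 0], prediction[:, :, :, :, 0]))  # True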
https://stackoverflow.com/questions/62442198/
Multiclass semantic segmentation model evaluation
I am doing a project on multiclass semantic segmentation. I have formulated a model that outputs pretty decent segmented images by decreasing the loss value. However, I cannot evaluate the model performance with metrics such as mean IoU or the Dice coefficient. In the case of binary semantic segmentation it was easy to just set a threshold of 0.5 to classify outputs as object or background, but that does not work for multiclass semantic segmentation. Could you please tell me how to obtain model performance on the aforementioned metrics? Any help will be highly appreciated! By the way, I am using the PyTorch framework and the CamVid dataset.
If anyone is interested in this answer, please also look at this issue. The author of the issue points out that mIoU can be computed in a different way (and that method is more accepted in the literature). So, consider that before using the implementation for any formal publication. Basically, the other method suggested by the issue-poster is to separately accumulate the intersections and unions over the entire dataset and divide them at the final step. The method in the original answer below computes intersection and union for a batch of images, then divides them to get the IoU for the current batch, and then takes a mean of the IoUs over the entire dataset. However, this original method is problematic because the final mean IoU varies with the batch size. On the other hand, the mIoU would not vary with the batch size for the method mentioned in the issue, as the separate accumulation ensures that batch size is irrelevant (though a higher batch size can definitely help speed up the evaluation). Original answer: Given below is an implementation of mean IoU (Intersection over Union) in PyTorch (it assumes import numpy as np, import torch and import torch.nn.functional as F). def mIOU(label, pred, num_classes=19): pred = F.softmax(pred, dim=1) pred = torch.argmax(pred, dim=1) # [N, C, H, W] -> [N, H, W] iou_list = list() present_iou_list = list() pred = pred.view(-1) label = label.view(-1) # Note: Following for loop goes from 0 to (num_classes-1) # and ignore_index is num_classes, thus ignore_index is # not considered in computation of IoU. for sem_class in range(num_classes): pred_inds = (pred == sem_class) target_inds = (label == sem_class) if target_inds.long().sum().item() == 0: iou_now = float('nan') else: intersection_now = (pred_inds[target_inds]).long().sum().item() union_now = pred_inds.long().sum().item() + target_inds.long().sum().item() - intersection_now iou_now = float(intersection_now) / float(union_now) present_iou_list.append(iou_now) iou_list.append(iou_now) return np.mean(present_iou_list) The prediction of your model will have one channel of scores per class, so first take the softmax (if your model doesn't already) followed by argmax to get the index with the highest probability at each pixel. Then, we calculate the IoU for each class (and take the mean over the classes at the end). We can reshape both the prediction and the label as 1-D vectors (I read that it makes the computation faster). For each class, we first identify the indices of that class using pred_inds = (pred == sem_class) and target_inds = (label == sem_class). The resulting pred_inds and target_inds will have 1 at pixels labelled as that particular class and 0 for any other class. Then, there is a possibility that the target does not contain that particular class at all. This will make that class's IoU calculation invalid as it is not present in the target. So, you assign such classes a NaN IoU (so you can identify them later) and do not involve them in the calculation of the mean. If the particular class is present in the target, then pred_inds[target_inds] will give a vector of 1s and 0s, where indices with 1 are those where prediction and target are equal and zero otherwise. Taking the sum of all elements of this will give us the intersection. If we add all the elements of pred_inds and target_inds, we'll get the union + intersection of pixels of that particular class. So, we subtract the already calculated intersection to get the union. Then, we can divide the intersection by the union to get the IoU of that particular class and add it to a list of valid IoUs. At the end, you take the mean of the entire list to get the mIoU.
If you want the Dice Coefficient, you can calculate it in a similar fashion.
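For example, here is a sketch that mirrors the mIoU function above (same assumed imports, same conventions; Dice = 2·|A∩B| / (|A| + |B|)):
def dice_coefficient(label, pred, num_classes=19):
    pred = torch.argmax(F.softmax(pred, dim=1), dim=1).view(-1)
    label = label.view(-1)
    dice_list = []
    for sem_class in range(num_classes):
        pred_inds = (pred == sem_class)
        target_inds = (label == sem_class)
        if target_inds.long().sum().item() == 0:  # class absent from target, skip it
            continue
        intersection = (pred_inds[target_inds]).long().sum().item()
        denom = pred_inds.long().sum().item() + target_inds.long().sum().item()
        dice_list.append(2.0 * intersection / denom)
    return np.mean(dice_list)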
https://stackoverflow.com/questions/62461379/
PyTorch Dataset / Dataloader batching
I'm a little confused regarding the 'best practice' for implementing a PyTorch data pipeline on time series data. I have an HD5 file which I read using a custom DataLoader. It seems that I should return the data samples as a (features, targets) tuple with the shape of each being (L, C), where L is seq_len and C is the number of channels - i.e. don't perform batching in the data loader, just return the samples as tuples. PyTorch modules seem to require a batch dim, i.e. Conv1D expects (N, C, L). I was under the impression that the DataLoader class would prepend the batch dimension, but it isn't; I'm getting data shaped (N,L). dataset = HD5Dataset(args.dataset) dataloader = DataLoader(dataset, batch_size=N, shuffle=True, pin_memory=is_cuda, num_workers=num_workers) for i, (x, y) in enumerate(train_dataloader): ... In the code above the shape of x is (N,C) not (1,N,C), which results in the code below (from a public git repo) failing on the first line. def forward(self, x): """expected input shape is (N, L, C)""" x = x.transpose(1, 2).contiguous() # input should have dimension (N, C, L) The documentation states: When automatic batching is enabled It always prepends a new dimension as the batch dimension which leads me to believe that automatic batching is disabled, but I don't understand why?
If you have a dataset of pairs of tensors (x, y), where each x is of shape (C,L), then (assuming import torch and import torch.utils.data as data_utils): N, C, L = 5, 3, 10 dataset = [(torch.randn(C,L), torch.ones(1)) for i in range(50)] dataloader = data_utils.DataLoader(dataset, batch_size=N) for i, (x,y) in enumerate(dataloader): print(x.shape) will produce (50/N)=10 batches of shape (N,C,L) for x: torch.Size([5, 3, 10]) torch.Size([5, 3, 10]) torch.Size([5, 3, 10]) torch.Size([5, 3, 10]) torch.Size([5, 3, 10]) torch.Size([5, 3, 10]) torch.Size([5, 3, 10]) torch.Size([5, 3, 10]) torch.Size([5, 3, 10]) torch.Size([5, 3, 10])
https://stackoverflow.com/questions/62461602/
Efficient batch derivative operations in PyTorch
I am using Pytorch to implement a neural network that has (say) 5 inputs and 2 outputs class myNetwork(nn.Module): def __init__(self): super(myNetwork,self).__init__() self.layer1 = nn.Linear(5,32) self.layer2 = nn.Linear(32,2) def forward(self,x): x = torch.relu(self.layer1(x)) x = self.layer2(x) return x Obviously, I can feed this an (N x 5) Tensor and get an (N x 2) result, net = myNetwork() nbatch = 100 inp = torch.rand([nbatch,5]) inp.requires_grad = True out = net(inp) I would now like to compute the derivatives of the NN output with respect to one element of the input vector (let's say the 5th element), for each example in the batch. I know I can calculate the derivatives of one element of the output with respect to all inputs using torch.autograd.grad, and I could use this as follows: deriv = torch.zeros([nbatch,2]) for i in range(nbatch): for j in range(2): deriv[i,j] = torch.autograd.grad(out[i,j],inp,retain_graph=True)[0][i,4] However, this seems very inefficient: it calculates the gradient of out[i,j] with respect to every single element in the batch, and then discards all except one. Is there a better way to do this?
By virtue of backpropagation, if you computed the gradient w.r.t. only a single input, the computational savings wouldn't necessarily amount to much: you would only save some computation in the first layer, since all layers afterwards need to be backpropagated through either way. So this may not be the optimal way, but it doesn't actually create much overhead, especially if your network has many layers. By the way, is there a reason that you need to loop over nbatch? If you wanted the gradient of each element of a batch w.r.t. a parameter, I could understand that, because PyTorch will lump them together, but you seem to be solely interested in the input...
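One concrete shortcut, sketched with the asker's variable names: since out[i, j] depends only on inp[i], the gradient of the batch-sum sum(out[:, j]) w.r.t. inp gives every row its own gradient, so you only need one backward pass per output dimension instead of one per batch element:
deriv = torch.zeros(nbatch, 2)
for j in range(2):
    # row i of grads only receives the contribution from out[i, j]
    grads = torch.autograd.grad(out[:, j].sum(), inp, retain_graph=True)[0]
    deriv[:, j] = grads[:, 4]  # derivative w.r.t. the 5th input element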
https://stackoverflow.com/questions/62463093/
What does unsqueeze do in pytorch?
lower_bounds = torch.max(set_1[:, :2].unsqueeze(1), set_2[:, :2].unsqueeze(0)) #(n1, n2, 2) This code snippet uses unsqueeze(1) for one tensor, but unsqueeze(0) for another. What is the difference between them?
unsqueeze turns an n-dimensional tensor into an (n+1)-dimensional one, by adding an extra dimension of size one. However, since it is ambiguous which axis the new dimension should lie across (i.e. in which direction the tensor should be "unsqueezed"), this needs to be specified by the dim argument. Hence the resulting unsqueezed tensors have the same information, but the indices used to access them are different. For an effectively 2d matrix, going from a 2d tensor to a 3d one, there are 3 choices for the new dimension's position. (The original answer illustrated this with a diagram of squeeze/unsqueeze.)
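In code, the three choices look like this:
import torch

x = torch.zeros(2, 3)
print(x.unsqueeze(0).shape)  # torch.Size([1, 2, 3])
print(x.unsqueeze(1).shape)  # torch.Size([2, 1, 3])
print(x.unsqueeze(2).shape)  # torch.Size([2, 3, 1])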
https://stackoverflow.com/questions/62464462/
Evaluate CNN model for multiclass image classification
I want to ask what metrics can be used to evaluate my CNN model for multiclass classification. I have 3 classes for now, and I'm just using accuracy and a confusion matrix, and also plotting the model's loss. Is there any other metric that can be used to evaluate my model's performance?
Evaluating the performance of a model is one of the most crucial phases of any machine learning project cycle and must be done effectively. Since you have mentioned that you are using accuracy and a confusion matrix for the evaluation, I would like to add some points for developing a better evaluation strategy: Consider you are developing a classifier that classifies an EMAIL into SPAM or NON-SPAM (HAM). One of the possible evaluation criteria can be the FALSE POSITIVE RATE, because it can be really annoying if a non-spam email ends up in the spam category (which means you will miss a valuable email). So, I recommend you consider metrics based on the problem you are targeting. There are many metrics, such as F1 score, recall and precision, that you can choose based on the problem you are having. You can visit: https://medium.com/apprentice-journal/evaluating-multi-class-classifiers-12b2946e755b for better understanding.
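As a practical starting point, scikit-learn's classification_report prints per-class precision, recall and F1 in one call (the class names below are placeholders; y_true and y_pred stand for the label and prediction arrays you collect during evaluation):
from sklearn.metrics import classification_report

print(classification_report(y_true, y_pred, target_names=['class0', 'class1', 'class2']))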
https://stackoverflow.com/questions/62468271/
Intermittent "RuntimeError: CUDA out of memory" error in Google Colab Fine Tuning BERT Base Cased with Transformers and PyTorch
I'm running the following code to fine-tune a BERT Base Cased model in Google Colab. Sometimes the code runs fine first time without error. Other times, the same code, using the same data, results in a "CUDA out of memory" error. Previously, restarting the runtime or exiting the notebook, going back into the notebook, doing a factory runtime restart, and re-running the code runs successfully without error. Just now though, I've tried a restart and re-try 5 times and got the error every time. The issue doesn't appear to be the combination of data and code that I'm using because sometimes it works without error. So it appears to be something to do with the Google Colab runtime. Does anyone know why this is happening, why it is intermittent, and/or what I can do about it? I'm using Huggingface's transformers library and PyTorch. The code cell that results in an error: # train the model %%time history = defaultdict(list) for epoch in range(EPOCHS): print(f'Epoch {epoch + 1}/{EPOCHS}') print('-' * 10) train_acc, train_loss = train_epoch( model, train_data_loader, loss_fn, optimizer, device, scheduler, train_set_length ) print(f'Train loss {train_loss} accuracy {train_acc}') dev_acc, dev_loss = eval_model( model, dev_data_loader, loss_fn, device, evaluation_set_length ) print(f'Dev loss {dev_loss} accuracy {dev_acc}') history['train_acc'].append(train_acc) history['train_loss'].append(train_loss) history['dev_acc'].append(dev_acc) history['dev_loss'].append(dev_loss) model_filename = f'model_{epoch}_state.bin' torch.save(model.state_dict(), model_filename) The full error: RuntimeError Traceback (most recent call last) <ipython-input-29-a13774d7aa75> in <module>() ----> 1 get_ipython().run_cell_magic('time', '', "\nhistory = defaultdict(list)\n\nfor epoch in range(EPOCHS):\n\n print(f'Epoch {epoch + 1}/{EPOCHS}')\n print('-' * 10)\n\n train_acc, train_loss = train_epoch(\n model,\n train_data_loader, \n loss_fn, \n optimizer, \n device, \n scheduler, \n train_set_length\n )\n\n print(f'Train loss {train_loss} accuracy {train_acc}')\n\n dev_acc, dev_loss = eval_model(\n model,\n dev_data_loader,\n loss_fn, \n device, \n evaluation_set_length\n )\n\n print(f'Dev loss {dev_loss} accuracy {dev_acc}')\n\n history['train_acc'].append(train_acc)\n history['train_loss'].append(train_loss)\n history['dev_acc'].append(dev_acc)\n history['dev_loss'].append(dev_loss)\n \n model_filename = f'model_{epoch}_state.bin'\n torch.save(model.state_dict(), model_filename)") 15 frames <decorator-gen-60> in time(self, line, cell, local_ns) <timed exec> in <module>() /usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py in forward(self, hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask) 234 # Take the dot product between "query" and "key" to get the raw attention scores. 235 attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2)) --> 236 attention_scores = attention_scores / math.sqrt(self.attention_head_size) 237 if attention_mask is not None: 238 # Apply the attention mask is (precomputed for all layers in BertModel forward() function) RuntimeError: CUDA out of memory. Tried to allocate 24.00 MiB (GPU 0; 7.43 GiB total capacity; 5.42 GiB already allocated; 8.94 MiB free; 5.79 GiB reserved in total by PyTorch)
I was facing the same problem with transformers. Transformers are extremely memory intensive, so there is quite a high probability that we will run out of memory or hit the runtime limit while training larger models or training for more epochs. There are some promising, well-known, out-of-the-box strategies to solve these problems, and each strategy comes with its own benefits: Dynamic Padding and Uniform Length Batching (smart batching) Gradient Accumulation Freeze Embedding Numeric Precision Reduction Gradient Checkpointing Training neural networks on a batch of sequences requires them to have the exact same length to build the batch matrix representation. Because real-life NLP datasets are always made of texts of variable lengths, we often need to make some sequences shorter by truncating them, and some others longer by adding at the end a repeated fake token called a "pad" token. Because the pad token doesn't represent a real word, when most computations are done, before computing the loss, we erase the pad token signal by multiplying it by 0 through the "attention mask" matrix for each sample, which identifies the [PAD] tokens and tells the Transformer to ignore them. Dynamic Padding: Here we limit the number of added pad tokens to reach the length of the longest sequence of each mini batch, instead of a fixed value set for the whole train set. Because the number of added tokens changes across mini batches, we call it "dynamic" padding. Uniform Length Batching: We push the logic further by generating batches made of similar-length sequences, so we avoid extreme cases where most sequences in the mini batch are short and we are required to add lots of pad tokens to each of them because one sequence of the same mini batch is very long.
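Of these, gradient accumulation is usually the simplest to drop into an existing training loop. A sketch using the names from the question's code (accumulation_steps is a hypothetical setting, and compute_loss stands in for however your loop currently obtains the batch loss):
accumulation_steps = 4  # effective batch size = batch_size * accumulation_steps
optimizer.zero_grad()
for step, batch in enumerate(train_data_loader):
    loss = compute_loss(model, batch)  # hypothetical helper returning the batch loss
    (loss / accumulation_steps).backward()  # gradients accumulate across iterations
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()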
https://stackoverflow.com/questions/62468346/
AutoTokenizer.from_pretrained fails to load locally saved pretrained tokenizer (PyTorch)
I am new to PyTorch and recently I have been trying to work with Transformers. I am using the pretrained tokenizers provided by HuggingFace. I am successful in downloading and running them, but if I try to save them and load them again, an error occurs. If I use AutoTokenizer.from_pretrained to download a tokenizer, then it works. [1]: tokenizer = AutoTokenizer.from_pretrained('distilroberta-base') text = "Hello there" enc = tokenizer.encode_plus(text) enc.keys() Out[1]: dict_keys(['input_ids', 'attention_mask']) But if I save it using tokenizer.save_pretrained("distilroberta-tokenizer") and try to load it locally, then it fails. [2]: tmp = AutoTokenizer.from_pretrained('distilroberta-tokenizer') --------------------------------------------------------------------------- OSError Traceback (most recent call last) /opt/conda/lib/python3.7/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs) 238 resume_download=resume_download, --> 239 local_files_only=local_files_only, 240 ) /opt/conda/lib/python3.7/site-packages/transformers/file_utils.py in cached_path(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, local_files_only) 266 # File, but it doesn't exist. --> 267 raise EnvironmentError("file {} not found".format(url_or_filename)) 268 else: OSError: file distilroberta-tokenizer/config.json not found During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) <ipython-input-25-3bd2f7a79271> in <module> ----> 1 tmp = AutoTokenizer.from_pretrained("distilroberta-tokenizer") /opt/conda/lib/python3.7/site-packages/transformers/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 193 config = kwargs.pop("config", None) 194 if not isinstance(config, PretrainedConfig): --> 195 config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs) 196 197 if "bert-base-japanese" in pretrained_model_name_or_path: /opt/conda/lib/python3.7/site-packages/transformers/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs) 194 195 """ --> 196 config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) 197 198 if "model_type" in config_dict: /opt/conda/lib/python3.7/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs) 250 f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a {CONFIG_NAME} file\n\n" 251 ) --> 252 raise EnvironmentError(msg) 253 254 except json.JSONDecodeError: OSError: Can't load config for 'distilroberta-tokenizer'. Make sure that: - 'distilroberta-tokenizer' is a correct model identifier listed on 'https://huggingface.co/models' - or 'distilroberta-tokenizer' is the correct path to a directory containing a config.json file It's saying config.json is missing from the directory. On checking the directory, I get this list of files: [3]: !ls distilroberta-tokenizer Out[3]: merges.txt special_tokens_map.json tokenizer_config.json vocab.json I know this problem has been posted about earlier, but none of the solutions seem to work. I have also tried to follow the docs but still can't make it work. Any help would be appreciated.
There is currently an issue under investigation which only affects the AutoTokenizers but not the underlying tokenizers like (RobertaTokenizer). For example the following should work: from transformers import RobertaTokenizer tokenizer = RobertaTokenizer.from_pretrained('YOURPATH') To work with the AutoTokenizer you also need to save the config to load it offline: from transformers import AutoTokenizer, AutoConfig tokenizer = AutoTokenizer.from_pretrained('distilroberta-base') config = AutoConfig.from_pretrained('distilroberta-base') tokenizer.save_pretrained('YOURPATH') config.save_pretrained('YOURPATH') tokenizer = AutoTokenizer.from_pretrained('YOURPATH') I recommend to either use a different path for the tokenizers and the model or to keep the config.json of your model because some modifications you apply to your model will be stored in the config.json which is created during model.save_pretrained() and will be overwritten when you save the tokenizer as described above after your model (i.e. you won't be able to load your modified model with tokenizer config.json).
https://stackoverflow.com/questions/62472238/
Does tracking the loss via lists affect the training?
I wanted to plot the loss of my CNN, so I created lists before starting to train with test_loss_history = [] train_loss_history = [] and added the values after every epoch with train_loss_history.append(train_loss) and test_loss_history.append(test_loss). I had done the same with the accuracy before, but when I add these lines for the loss, the accuracy drops by around 40%. Does storing values affect the training process in any way? I am using Google Colab and training a ResNet18 with a subset of MNIST. My code looks like this: train_loss_history = [] train_acc_history = [] for epoch in range(epoch_resume, opt.max_epochs): ... for i, data in enumerate(trainloader, 0): train_loss += imgs.size(0)*criterion(logits, labels).data ... train_loss /= len(trainset) train_acc_history.append(train_acc) train_loss_history.append(train_loss)
You can just use TensorBoard to plot the loss and any other metrics you want to keep track of. There is no need to save the metrics yourself; TensorBoard has your back.
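In plain PyTorch the usual entry point is torch.utils.tensorboard.SummaryWriter; a minimal sketch:
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter()  # logs to ./runs/ by default
for epoch in range(num_epochs):
    # ... training loop that computes train_loss ...
    writer.add_scalar('Loss/train', train_loss, epoch)
writer.close()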
https://stackoverflow.com/questions/62475859/
Huggingface language modeling stuck at data reading phase
I have a large file (1 GB+) with a mix of short and long texts (format: wikitext-2) for fine-tuning the masked language model with bert-large-uncased as the baseline model. I followed the instructions at https://github.com/huggingface/transformers/tree/master/examples/language-modeling. The process seems to be stuck at the stage "Creating features from dataset file at <file loc>". I am unsure what is wrong: is it really stuck, or does it just take really long for a file of this size? The command looks pretty much like this: export TRAIN_FILE=/path/to/dataset/my.train.raw export TEST_FILE=/path/to/dataset/my.test.raw python run_language_modeling.py \ --output_dir=local_output_dir \ --model_type=bert \ --model_name_or_path=local_bert_dir \ --do_train \ --train_data_file=$TRAIN_FILE \ --do_eval \ --eval_data_file=$TEST_FILE \ --mlm Added: The job is running on CPU
Since the file is huge, I would strongly recommend trying your code on a toy dataset before running it on your actual large data. This will be helpful when you debug, too. If your system has multiple cores, follow a multi-processing strategy. Take a look at https://github.com/PyTorchLightning/pytorch-lightning.
https://stackoverflow.com/questions/62476760/
Why Tensor.clone().detach() is recommended when copying a tensor?
What is the difference between copying a tensor using torch.tensor(sourcetensor) and tensor.clone().detach()? Just as torch.tensor(srctsr) always copies data, tensor.clone().detach() copies data too. x = torch.tensor([1, 2, 3]) y1 = x.clone().detach() y2 = torch.tensor(x) x[0] = 0 print(y1, y2) # both are same So they seem to be exactly the same. Below is the explanation given in the PyTorch documentation about torch.tensor() and torch.clone().detach(): Therefore torch.tensor(x) is equivalent to x.clone().detach() and torch.tensor(x, requires_grad=True) is equivalent to x.clone().detach().requires_grad_(True). The equivalents using clone() and detach() are recommended. So if they are equivalent to each other, why is .clone().detach() preferred over the other?
The difference is described here. I am adding some text (from the link) for the sake of completeness. torch.tensor() always copies data. If you have a Tensor data and want to avoid a copy, use torch.Tensor.requires_grad_() or torch.Tensor.detach(). When data is a tensor x, torch.tensor() reads out ‘the data’ from whatever it is passed, and constructs a leaf variable. Therefore torch.tensor(x) is equivalent to x.clone().detach() and torch.tensor(x, requires_grad=True) is equivalent to x.clone().detach().requires_grad_(True). The equivalents using clone() and detach() are recommended.
https://stackoverflow.com/questions/62484790/
Pytorch MSE loss function nan during training
I am trying linear regression on the Boston dataset. The MSE loss function has been nan since the first iteration. I tried altering the learning rate and batch_size, but to no avail. from torch.utils.data import TensorDataset , DataLoader inputs = torch.from_numpy(Features).to(torch.float32) targets = torch.from_numpy(target).to(torch.float32) train_ds = TensorDataset(inputs , targets) train_dl = DataLoader(train_ds , batch_size = 5 , shuffle = True) model = nn.Linear(13,1) opt = optim.SGD(model.parameters(), lr=1e-5) loss_fn = F.mse_loss def fit(num_epochs, model, loss_fn, opt, train_dl): # Repeat for given number of epochs for epoch in range(num_epochs): # Train with batches of data for xb,yb in train_dl: # 1. Generate predictions pred = model(xb) # 2. Calculate loss loss = loss_fn(pred, yb) # 3. Compute gradients loss.backward() # 4. Update parameters using gradients opt.step() # 5. Reset the gradients to zero opt.zero_grad() # Print the progress if (epoch+1) % 10 == 0: print('Epoch [{}/{}], Loss: {}'.format(epoch+1, num_epochs, loss.item())) fit(100, model, loss_fn , opt , train_dl)
Pay attention to: Use normalization: x = (x - x.mean()) / x.std() y_train / y_test have to have shape (-1, 1). Use y_train.view(-1, 1) (if y_train is a torch.Tensor or similar). (Not your case, but for someone else:) If you use torch.nn.MSELoss(reduction='sum') then you have to reduce the sum to a mean. It can be done with torch.nn.MSELoss() or in the train loop: l = loss(y_pred, y) / y.shape[0]. Example: ... loss = torch.nn.MSELoss() ... for epoch in range(num_epochs): for x, y in train_iter: y_pred = model(x) l = loss(y_pred, y) optimizer.zero_grad() l.backward() optimizer.step() print("epoch {} loss: {:.4f}".format(epoch + 1, l.item()))
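Applied to the asker's variable names, the first two points might look like this (a sketch; it standardizes each of the 13 features and reshapes the targets to match the model's output):
inputs = torch.from_numpy(Features).to(torch.float32)
inputs = (inputs - inputs.mean(dim=0)) / inputs.std(dim=0)  # per-feature standardization
targets = torch.from_numpy(target).to(torch.float32).view(-1, 1)  # shape [N, 1]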
https://stackoverflow.com/questions/62485229/
How to add a learnable bias to one of the network output channel in pytorch
class pu_fc(nn.Module): def __init__(self, input_dim): super(pu_fc, self).__init__() self.input_dim = input_dim self.fc1 = nn.Linear(input_dim, 50) self.fc2 = nn.Linear(50, 2) self.loss_fn = custom_NLL() device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") self.bias = torch.autograd.Variable(torch.rand(1,1), requires_grad=True).to(device) def forward(self, x): out = self.fc1(x) out = F.relu(out, inplace=True) out = self.fc2(out) out[..., 1] = out[..., 1] + self.bias print('bias: ', self.bias) return out As you can see from the code, I wanted to add a bias term to the second output channel. However, my implementation does not work: the bias term is not updated at all. It stayed the same during training, so I assume it is not learnable. So the question is: how can I make the bias term learnable? Is it possible to do this? Below is some output of the bias during training. Any hint is appreciated, thanks in advance! bias: tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>) bias: tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>) bias: tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>) bias: tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>) bias: tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>) Current Epoch: 1 Epoch loss: 0.4424589276313782 bias: tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>) bias: tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>) bias: tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>) bias: tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>) bias: tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>) bias: tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>) bias: tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>) bias: tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>) bias: tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>) bias: tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>) Current Epoch: 2 Epoch loss: 0.3476297199726105 bias: tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>) bias: tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>) bias: tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>) bias: tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>) bias: tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>) bias: tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>)
The bias should be an nn.Parameter. Being a parameter means that it will show up in model.parameters() and also automatically be transferred to the specified device when calling model.to(device). self.bias = nn.Parameter(torch.rand(1,1)) Note: Don't use Variable, it was deprecated with PyTorch 0.4.0, which was released over 2 years ago, and all of its functionality has been merged into the tensors.
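Applied to the asker's module, the fix might look like this (a sketch; the rest of the class is unchanged, and the explicit device handling can be dropped because model.to(device) now moves the bias too):
class pu_fc(nn.Module):
    def __init__(self, input_dim):
        super(pu_fc, self).__init__()
        self.input_dim = input_dim
        self.fc1 = nn.Linear(input_dim, 50)
        self.fc2 = nn.Linear(50, 2)
        self.loss_fn = custom_NLL()
        self.bias = nn.Parameter(torch.rand(1, 1))  # registered parameter, visible to the optimizer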
https://stackoverflow.com/questions/62487029/
Runtime error: CUDA out of memory: Can't train SEGAN
I am currently trying to run a SEGAN for speech enhancement but can't seem to get the network to start training since it raises the following error: Runtime error: CUDA out of memory: Tried to allocate 30.00 MiB (GPU 0; 3.00 GiB total capacity; 2.00 GiB already allocated; 5.91 MiB free; 2.03 GiB reserved in total by PyTorch I have already tried to include torch.cuda.empty_cache(), but that did not seem to solve the issue. This is the script I am currently running: import argparse import os import torch import torch.nn as nn from scipy.io import wavfile from torch import optim from torch.autograd import Variable from torch.utils.data import DataLoader from tqdm import tqdm from data_preprocess import sample_rate from model import Generator, Discriminator from utils import AudioDataset, emphasis if __name__ == '__main__': parser = argparse.ArgumentParser(description='Train Audio Enhancement') parser.add_argument('--batch_size', default=50, type=int, help='train batch size') parser.add_argument('--num_epochs', default=86, type=int, help='train epochs number') opt = parser.parse_args() BATCH_SIZE = opt.batch_size NUM_EPOCHS = opt.num_epochs # load data torch.cuda.empty_cache() print('loading data...') train_dataset = AudioDataset(data_type='train') test_dataset = AudioDataset(data_type='test') train_data_loader = DataLoader(dataset=train_dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=4) test_data_loader = DataLoader(dataset=test_dataset, batch_size=BATCH_SIZE, shuffle=False, num_workers=4) # generate reference batch ref_batch = train_dataset.reference_batch(BATCH_SIZE) # create D and G instances discriminator = Discriminator() generator = Generator() if torch.cuda.is_available(): discriminator.cuda() generator.cuda() ref_batch = ref_batch.cuda() ref_batch = Variable(ref_batch) print("# generator parameters:", sum(param.numel() for param in generator.parameters())) print("# discriminator parameters:", sum(param.numel() for param in discriminator.parameters())) # optimizers g_optimizer = optim.RMSprop(generator.parameters(), lr=0.0001) d_optimizer = optim.RMSprop(discriminator.parameters(), lr=0.0001) for epoch in range(NUM_EPOCHS): train_bar = tqdm(train_data_loader) for train_batch, train_clean, train_noisy in train_bar: # latent vector - normal distribution z = nn.init.normal(torch.Tensor(train_batch.size(0), 1024, 8)) if torch.cuda.is_available(): train_batch, train_clean, train_noisy = train_batch.cuda(), train_clean.cuda(), train_noisy.cuda() z = z.cuda() train_batch, train_clean, train_noisy = Variable(train_batch), Variable(train_clean), Variable(train_noisy) z = Variable(z) # TRAIN D to recognize clean audio as clean # training batch pass discriminator.zero_grad() outputs = discriminator(train_batch, ref_batch) clean_loss = torch.mean((outputs - 1.0) ** 2) # L2 loss - we want them all to be 1 clean_loss.backward() # TRAIN D to recognize generated audio as noisy generated_outputs = generator(train_noisy, z) outputs = discriminator(torch.cat((generated_outputs, train_noisy), dim=1), ref_batch) noisy_loss = torch.mean(outputs ** 2) # L2 loss - we want them all to be 0 noisy_loss.backward() # d_loss = clean_loss + noisy_loss d_optimizer.step() # update parameters # TRAIN G so that D recognizes G(z) as real generator.zero_grad() generated_outputs = generator(train_noisy, z) gen_noise_pair = torch.cat((generated_outputs, train_noisy), dim=1) outputs = discriminator(gen_noise_pair, ref_batch) g_loss_ = 0.5 * torch.mean((outputs - 1.0) ** 2) # L1 loss between generated output and
clean sample l1_dist = torch.abs(torch.add(generated_outputs, torch.neg(train_clean))) g_cond_loss = 100 * torch.mean(l1_dist) # conditional loss g_loss = g_loss_ + g_cond_loss # backprop + optimize g_loss.backward() g_optimizer.step() train_bar.set_description( 'Epoch {}: d_clean_loss {:.4f}, d_noisy_loss {:.4f}, g_loss {:.4f}, g_conditional_loss {:.4f}' .format(epoch + 1, clean_loss.data[0], noisy_loss.data[0], g_loss.data[0], g_cond_loss.data[0])) # TEST model test_bar = tqdm(test_data_loader, desc='Test model and save generated audios') for test_file_names, test_noisy in test_bar: z = nn.init.normal(torch.Tensor(test_noisy.size(0), 1024, 8)) if torch.cuda.is_available(): test_noisy, z = test_noisy.cuda(), z.cuda() test_noisy, z = Variable(test_noisy), Variable(z) fake_speech = generator(test_noisy, z).data.cpu().numpy() # convert to numpy array fake_speech = emphasis(fake_speech, emph_coeff=0.95, pre=False) for idx in range(fake_speech.shape[0]): generated_sample = fake_speech[idx] file_name = os.path.join('results', '{}_e{}.wav'.format(test_file_names[idx].replace('.npy', ''), epoch + 1)) wavfile.write(file_name, sample_rate, generated_sample.T) # save the model parameters for each epoch g_path = os.path.join('epochs', 'generator-{}.pkl'.format(epoch + 1)) d_path = os.path.join('epochs', 'discriminator-{}.pkl'.format(epoch + 1)) torch.save(generator.state_dict(), g_path) torch.save(discriminator.state_dict(), d_path)
Try to lower your batch size (as David S mentioned). Also, run the test without gradient computation by using a with torch.no_grad(): statement. If you wish to run your training with a bigger batch and you have insufficient memory, one solution is to use gradient accumulation.
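For the test loop specifically, disabling autograd during inference frees a significant amount of memory. A sketch against the asker's test loop (torch.randn stands in for the deprecated nn.init.normal call):
with torch.no_grad():
    for test_file_names, test_noisy in test_bar:
        z = torch.randn(test_noisy.size(0), 1024, 8)
        if torch.cuda.is_available():
            test_noisy, z = test_noisy.cuda(), z.cuda()
        fake_speech = generator(test_noisy, z).cpu().numpy()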
https://stackoverflow.com/questions/62513349/
T5 model custom vocabulary
Is there a way to choose my own custom vocabulary for the T5 model while fine-tuning it for a text summarization task? I tried using a SentencePiece model to create my custom tokenizer, but the model predicted some tokens that were not present in my tokenizer, and hence the tokenizer treats them as unknown tokens.
It is okay to add a few tokens, but you cannot use a totally different vocabulary while fine-tuning! The pre-trained weights are trained with the pre-trained vocabulary :) If you change the vocabulary, the trained weights become meaningless and invalid! If you want to use another vocabulary you have to train from scratch! To add tokens to the vocabulary you can, for example, do: tokenizer = BertTokenizer.from_pretrained(model_name) tokenizer.add_tokens(['new', 'codekali', 'blabla']) model = BertModel.from_pretrained(model_name, return_dict=False) model.resize_token_embeddings(len(tokenizer)) The last line is important because you need to tell the model that the number of tokens has changed.
https://stackoverflow.com/questions/62519413/
Pytorch Cityscapes Dataset, train_distribute problem - "Typeerror: path should be string, bytes, pathlike or integer, not NoneType"
I'm very unfamiliar with machine learning, Python, and such, so forgive my oblivious errors. I'm trying to use machine learning systems on a dataset of streetscapes I have. I found a lot of resources, and I'm working off of this package, which has a lot of examples and seems straightforward. When I attempted to run the train_distribute.py file, I received this error: (base) corey@corona:~/Desktop/pycity/GALD-Net-master$ python train_distribute.py Traceback (most recent call last): File "train_distribute.py", line 261, in <module> main() File "train_distribute.py", line 136, in main if not os.path.exists(args.save_dir): File "/home/corey/anaconda3/lib/python3.7/genericpath.py", line 19, in exists os.stat(path) TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType Looking in the code, it's coming from these lines: def main(): # make save dir if args.local_rank == 0: if not os.path.exists(args.save_dir): os.makedirs(args.save_dir) # launch the logger Log.init( log_level=args.log_level, I'm guessing this means I need a more exact file structure, and to point the code at the right location. I'm in no way a computer scientist and have close to zero understanding of what does what and how things like this work. Any advice on what I'm doing wrong and how I can approach fixing things?
From the error message, my guess would be that args.save_dir is None. os.path.exists cannot deal with None as a path: >>> import os >>> os.path.exists(None) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python3.8/genericpath.py", line 19, in exists os.stat(path) TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType Looking at the script you cited, the save_dir argument has a default value of None. It might be useful to make this a required argument and remove the default value, since the main function depends on it.
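In argparse terms, making the argument required might look like this (a sketch; the help text is illustrative):
parser.add_argument('--save_dir', type=str, required=True,
                    help='directory where checkpoints and logs are written')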
https://stackoverflow.com/questions/62520641/
Pointer type behavior in pytorch
Is there pointer type assignment in pytorch? What I would like to do is have the following (pseudo-code): b=8 a->b print(a) --> 8 b=10 print(a) --> 10
It depends on how you modify b: # Assign a new object to b : b*2 >>> b = torch.tensor([8]); a=b; print(a); b=b*2; print(a) tensor([8]) tensor([8]) # Assign a new object to b : tensor([10]) >>> b = torch.tensor([8]); a=b; print(a); b=torch.tensor([10]); print(a) tensor([8]) tensor([8]) # In-place operation which doesn't assign a new object >>> b = torch.tensor([8]); a=b; print(a); b*=2; print(a) tensor([8]) tensor([16]) # In-place operation because it acts directly on the underlying data >>> b = torch.tensor([8]); a=b; print(a); b.data*=2; print(a) tensor([8]) tensor([16]) So you can't really do pointer assignment in PyTorch... but you can get the desired result using in-place operations or by acting directly on the underlying data of the tensor.
https://stackoverflow.com/questions/62523708/
How to freeze selected layers of a model in Pytorch?
I am using the mobileNetV2 and I only want to freeze part of the model. I know I can use the following code to freeze the entire model MobileNet = models.mobilenet_v2(pretrained = True) for param in MobileNet.parameters(): param.requires_grad = False but I want everything from (15) onward to remain unfrozen. How can I selectively freeze everything before the desired layer is frozen? (15): InvertedResidual( (conv): Sequential( (0): ConvBNReLU( (0): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (1): ConvBNReLU( (0): Conv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=960, bias=False) (1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (2): Conv2d(960, 160, kernel_size=(1, 1), stride=(1, 1), bias=False) (3): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (16): InvertedResidual( (conv): Sequential( (0): ConvBNReLU( (0): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (1): ConvBNReLU( (0): Conv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=960, bias=False) (1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (2): Conv2d(960, 160, kernel_size=(1, 1), stride=(1, 1), bias=False) (3): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (17): InvertedResidual( (conv): Sequential( (0): ConvBNReLU( (0): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (1): ConvBNReLU( (0): Conv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=960, bias=False) (1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (2): Conv2d(960, 320, kernel_size=(1, 1), stride=(1, 1), bias=False) (3): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (18): ConvBNReLU( (0): Conv2d(320, 1280, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(1280, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) ) (classifier): Sequential( (0): Dropout(p=0.2, inplace=False) (1): Linear(in_features=1280, out_features=1000, bias=True) ) )
PyTorch's model implementations are well modularized, so just as you freeze the whole model with for param in MobileNet.parameters(): param.requires_grad = False, you may afterwards do for param in MobileNet.features[15].parameters(): param.requires_grad = True to unfreeze the parameters in (15). Loop from 15 to 18 to unfreeze the last several layers, as in the sketch below.
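A minimal sketch of that loop, assuming the torchvision MobileNetV2 whose features run from index 0 through 18:

from torchvision import models

MobileNet = models.mobilenet_v2(pretrained=True)

# freeze everything first
for param in MobileNet.parameters():
    param.requires_grad = False

# then unfreeze blocks (15) through (18)
for idx in range(15, 19):
    for param in MobileNet.features[idx].parameters():
        param.requires_grad = True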
https://stackoverflow.com/questions/62523912/
Save only best weights with huggingface transformers
Currently, I'm building a new transformer-based model with huggingface-transformers, where the attention layer is different from the original one. I used run_glue.py to check the performance of my model on the GLUE benchmark. However, I found that the Trainer class of huggingface-transformers saves all the checkpoints I set, where I can set the maximum number of checkpoints to save. However, I want to save only the weights (or other stuff like the optimizer state) with the best performance on the validation dataset, and the current Trainer class doesn't seem to provide such a thing. (If we set the maximum number of checkpoints, it removes the older checkpoints, not the ones with worse performance.) Someone already asked the same question on GitHub, but I can't figure out how to modify the script to do what I want. Currently, I'm thinking about making a custom Trainer class that inherits the original one and changes the train() method, and it would be great if there's an easy and simple way to do this. Thanks in advance.
You may try the following parameters of the Hugging Face Trainer:

training_args = TrainingArguments(
    output_dir='/content/drive/results',  # output directory
    do_predict=True,
    num_train_epochs=3,              # total number of training epochs
    per_device_train_batch_size=4,   # batch size per device during training
    per_device_eval_batch_size=2,    # batch size for evaluation
    warmup_steps=1000,               # number of warmup steps for learning rate
    save_steps=1000,
    save_total_limit=10,
    load_best_model_at_end=True,
    weight_decay=0.01,               # strength of weight decay
    logging_dir='./logs',            # directory for storing logs
    logging_steps=0,
    evaluate_during_training=True)

There may be better ways to avoid too many checkpoints while selecting the best model. So far you cannot save only the best model, but load_best_model_at_end checks whether each evaluation yields better results than the previous ones.
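As a side note, more recent transformers releases added arguments that make "keep only the best model" explicit; a hedged sketch (these argument names assume a newer version than the one used in the question):

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir='./results',
    evaluation_strategy='epoch',        # evaluate once per epoch
    save_strategy='epoch',              # checkpoint on the same schedule
    load_best_model_at_end=True,
    metric_for_best_model='eval_loss',  # which metric defines "best"
    greater_is_better=False,            # lower eval_loss is better
    save_total_limit=1,                 # prune all but the most relevant checkpoints
)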
https://stackoverflow.com/questions/62525680/
What machine instance to use for running GPU workloads in Google Cloud Platform
I am trying to run an Elasticsearch BERT application and would like to understand the minimal configuration for fine-tuning the model using a GPU. What machine configuration should I be using? Reference GitHub: Fast-Bert
You would probably need to attach different GPUs to your compute instance and test performance. The Tesla T4 is the cheapest, while the Tesla V100 is the most expensive. The n1-highmem or n1-highcpu families of compute instances would be a good place to start. Some of the specs are published by Google.
https://stackoverflow.com/questions/62526950/
RuntimeError: CUDA out of memory in training with pytorch "Pose2Seg"
When I run this code https://github.com/erezposner/Pose2Seg And I made all steps in this tutorial https://towardsdatascience.com/detection-free-human-instance-segmentation-using-pose2seg-and-pytorch-72f48dc4d23e but I have this error in cuda: RuntimeError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 4.00 GiB total capacity; 2.57 GiB already allocated; 74.77 MiB free; 2.85 GiB reserved in total by PyTorch) (malloc at ..\c10\cuda\CUDACachingAllocator.cpp:289) (no backtrace available) How can I solve this? (base) C:\Users\ASUS\Pose2Seg>python train.py 06-23 07:30:01 ===========> loading model <=========== total params in model is 334, in pretrained model is 336, init 334 06-23 07:30:03 ===========> loading data <=========== loading annotations into memory... Done (t=4.56s) creating index... index created! 06-23 07:30:08 ===========> set optimizer <=========== 06-23 07:30:08 ===========> training <=========== C:\Users\ASUS\Anaconda3\Anaconda\lib\site-packages\torch\nn\functional.py:2796: UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead. warnings.warn("nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.") C:\Users\ASUS\Anaconda3\Anaconda\lib\site-packages\torch\nn\functional.py:2973: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details. "See the documentation of nn.Upsample for details.".format(mode)) C:\Users\ASUS\Anaconda3\Anaconda\lib\site-packages\torch\nn\functional.py:3289: UserWarning: Default grid_sample and affine_grid behavior has changed to align_corners=False since 1.3.0. Please specify align_corners=True if the old behavior is desired. See the documentation of grid_sample for details. warnings.warn("Default grid_sample and affine_grid behavior has changed " C:\Users\ASUS\Anaconda3\Anaconda\lib\site-packages\torch\nn\functional.py:3226: UserWarning: Default grid_sample and affine_grid behavior has changed to align_corners=False since 1.3.0. Please specify align_corners=True if the old behavior is desired. See the documentation of grid_sample for details. warnings.warn("Default grid_sample and affine_grid behavior has changed " 06-23 07:30:13 Epoch: [0][0/56599] Lr: [6.68e-05] Time 4.228 (4.228) Data 0.028 (0.028) loss 0.85738 (0.85738) 06-23 07:30:22 Epoch: [0][10/56599] Lr: [6.813333333333334e-05] Time 0.847 (1.280) Data 0.012 (0.051) loss 0.44195 (0.71130) 06-23 07:30:33 Epoch: [0][20/56599] Lr: [6.946666666666667e-05] Time 0.882 (1.180) Data 0.045 (0.037) loss 0.41523 (0.60743) Traceback (most recent call last): File "train.py", line 157, in <module> optimizer, epoch, iteration) File "train.py", line 74, in train loss.backward() File "C:\Users\ASUS\Anaconda3\Anaconda\lib\site-packages\torch\tensor.py", line 198, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "C:\Users\ASUS\Anaconda3\Anaconda\lib\site-packages\torch\autograd\__init__.py", line 100, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: CUDA out of memory. 
Tried to allocate 128.00 MiB (GPU 0; 4.00 GiB total capacity; 2.57 GiB already allocated; 74.77 MiB free; 2.85 GiB reserved in total by PyTorch) (malloc at ..\c10\cuda\CUDACachingAllocator.cpp:289) (no backtrace available) cudatoolkit == 10.1.243 python3.6.5 The version of libs: >>> import tensorflow 2020-06-23 09:45:01.840827: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll >>> tensorflow.__version__ '2.2.0' >>> import keras Using TensorFlow backend. >>> keras.__version__ '2.3.1' >>> import torch >>> torch.__version__ '1.5.1' >>> import torchvision >>> torchvision.__version__ '0.6.1' >>> import pycocotools train.py code import os import sys import time import logging import argparse import numpy as np from tqdm import tqdm import torch import torch.utils.data from lib.averageMeter import AverageMeters from lib.logger import colorlogger from lib.timer import Timers from lib.averageMeter import AverageMeters from lib.torch_utils import adjust_learning_rate import os from modeling.build_model import Pose2Seg from datasets.CocoDatasetInfo import CocoDatasetInfo, annToMask from test import test NAME = "release_base" # Set `LOG_DIR` and `SNAPSHOT_DIR` def setup_logdir(): timestamp = time.strftime("%Y-%m-%d_%H_%M_%S", time.localtime()) LOGDIR = os.path.join(os.getcwd(), 'logs', '%s_%s' % (NAME, timestamp)) SNAPSHOTDIR = os.path.join( os.getcwd(), 'snapshot', '%s_%s' % (NAME, timestamp)) if not os.path.exists(LOGDIR): os.makedirs(LOGDIR) if not os.path.exists(SNAPSHOTDIR): os.makedirs(SNAPSHOTDIR) return LOGDIR, SNAPSHOTDIR LOGDIR, SNAPSHOTDIR = setup_logdir() # Set logging logger = colorlogger(log_dir=LOGDIR, log_name='train_logs.txt') # Set Global Timer timers = Timers() # Set Global AverageMeter averMeters = AverageMeters() def train(model, dataloader, optimizer, epoch, iteration): # switch to train mode model.train() averMeters.clear() end = time.time() for i, inputs in enumerate(dataloader): averMeters['data_time'].update(time.time() - end) iteration += 1 lr = adjust_learning_rate(optimizer, iteration, BASE_LR=0.0002, WARM_UP_FACTOR=1.0/3, WARM_UP_ITERS=1000, STEPS=(0, 14150*15, 14150*20), GAMMA=0.1) # forward outputs = model(**inputs) # loss loss = outputs # backward averMeters['loss'].update(loss.data.item()) optimizer.zero_grad() loss.backward() optimizer.step() # measure elapsed time averMeters['batch_time'].update(time.time() - end) end = time.time() if i % 10 == 0: logger.info('Epoch: [{0}][{1}/{2}]\t' 'Lr: [{3}]\t' 'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t' 'Data {data_time.val:.3f} ({data_time.avg:.3f})\t' 'loss {loss.val:.5f} ({loss.avg:.5f})\t' .format( epoch, i, len(dataloader), lr, batch_time=averMeters['batch_time'], data_time=averMeters['data_time'], loss=averMeters['loss']) ) if i % 10000 == 0: torch.save(model.state_dict(), os.path.join( SNAPSHOTDIR, '%d_%d.pkl' % (epoch, i))) torch.save(model.state_dict(), os.path.join( SNAPSHOTDIR, 'last.pkl')) return iteration class Dataset(): def __init__(self): ImageRoot = r'C:\Users\ASUS\Pose2Seg\data\coco2017\train2017' AnnoFile = r'C:\Users\ASUS\Pose2Seg\data\coco2017\annotations\person_keypoints_train2017_pose2seg.json' self.datainfos = CocoDatasetInfo( ImageRoot, AnnoFile, onlyperson=True, loadimg=True) def __len__(self): return len(self.datainfos) def __getitem__(self, idx): rawdata = self.datainfos[idx] img = rawdata['data'] image_id = rawdata['id'] height, width = img.shape[0:2] gt_kpts = np.float32(rawdata['gt_keypoints']).transpose( 0, 2, 1) # (N, 
17, 3) gt_segms = rawdata['segms'] gt_masks = np.array([annToMask(segm, height, width) for segm in gt_segms]) return {'img': img, 'kpts': gt_kpts, 'masks': gt_masks} def collate_fn(self, batch): batchimgs = [data['img'] for data in batch] batchkpts = [data['kpts'] for data in batch] batchmasks = [data['masks'] for data in batch] return {'batchimgs': batchimgs, 'batchkpts': batchkpts, 'batchmasks': batchmasks} if __name__ == '__main__': logger.info('===========> loading model <===========') model = Pose2Seg().cuda() # model.init("") model.train() logger.info('===========> loading data <===========') datasetTrain = Dataset() dataloaderTrain = torch.utils.data.DataLoader(datasetTrain, batch_size=1, shuffle=True, num_workers=0, pin_memory=False, collate_fn=datasetTrain.collate_fn) logger.info('===========> set optimizer <===========') ''' set your optimizer like this. Normally is Adam/SGD. ''' #optimizer = torch.optim.SGD(model.parameters(), 0.0002, momentum=0.9, weight_decay=0.0005) optimizer = torch.optim.Adam( model.parameters(), 0.0002, weight_decay=0.0000) iteration = 0 epoch = 0 try: while iteration < 14150*25: logger.info('===========> training <===========') iteration = train(model, dataloaderTrain, optimizer, epoch, iteration) epoch += 1 logger.info('===========> testing <===========') test(model, dataset='cocoVal', logger=logger.info) test(model, dataset='OCHumanVal', logger=logger.info) except (KeyboardInterrupt): logger.info('Save ckpt on exception ...') torch.save(model.state_dict(), os.path.join( SNAPSHOTDIR, 'interrupt_%d_%d.pkl' % (epoch, iteration))) logger.info('Save ckpt done.')
Your GPU doesn't have enough memory. Try reducing the batch size. If the error persists, reduce the input image size as well (see the sketch below); it should then work fine. By the way, for this type of model, 8 GB of GPU memory is recommended.
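A hedged sketch of the image-size route; the 0.5 factor and the use of cv2 are assumptions about the pipeline, and any masks or keypoints would need the same scaling:

import cv2

def downscale(img, factor=0.5):
    # shrink both spatial dimensions before the sample reaches the model
    h, w = img.shape[:2]
    return cv2.resize(img, (int(w * factor), int(h * factor)))

# e.g. inside Dataset.__getitem__: img = downscale(rawdata['data'])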
https://stackoverflow.com/questions/62529109/
What is the correct way to fetch weights and biases in a pytorch model and copy those to a similar layer in another model?
I am trying to copy weights from a pretrained model layer by layer into another model with exactly the same structure. The original model gives an accuracy of 94% on a binary image classification problem, but the target model is unable to predict and outputs only one class for the entire test set. For example, I used this piece of code to manually copy weights from the stem of the pretrained model to the stem of the target: modelmix.stem[0].weight = modelSep.stem[0].weight modelmix.stem[1].weight = modelSep.stem[1].weight modelmix.stem[1].bias = modelSep.stem[1].bias where modelmix is the target and modelSep is the pretrained model. I used a similar snippet for all the other layers. The target model is not working even though I can see the weights are similar for all layers. I am using PyTorch 1.1. Thank you
You can create another model whose parameter names match, for example:

import torch.nn as nn

model1 = nn.Sequential()
model1.add_module('layer1', nn.Linear(10, 20))
model1.add_module('layer2', nn.Linear(20, 10))

model2 = nn.Sequential()
model2.add_module('layer1', nn.Linear(10, 20))
model2.add_module('layer2', nn.Linear(20, 10))
model2.add_module('layer3', nn.Linear(10, 5))

Then you can load model1's state_dict into model2 (and vice versa) with the kwarg strict=False: model2.load_state_dict(model1.state_dict(), strict=False) If you want something more custom, you can proceed as you mention and copy parameters manually.
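If the two architectures really are identical, a hedged alternative to per-layer assignment is copying the full state_dict, which also carries the BatchNorm running statistics (running_mean/running_var) that manual .weight/.bias copying misses; those buffers are a plausible cause of the one-class predictions:

import torch

# copies weights, biases *and* buffers such as BatchNorm running stats
modelmix.load_state_dict(modelSep.state_dict())

# or, for manual copying, use in-place copy_ under no_grad
# (assumes both models enumerate parameters in the same order)
with torch.no_grad():
    for p_tgt, p_src in zip(modelmix.parameters(), modelSep.parameters()):
        p_tgt.copy_(p_src)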
https://stackoverflow.com/questions/62529729/
Pytorch code error.... ' NoneType' object has no attribute 'zero_' 'unsupported operand type(s) for *: 'float' and 'NoneType''
lr=0.001 x=np.linspace(-6,6,120) y=0.5+3*x-x**2+np.exp(-0.4*x) x=torch.from_numpy(x) y=torch.from_numpy(y) w0=torch.tensor(0.1,requires_grad=True) w1=torch.tensor(0.1,requires_grad=True) w2=torch.tensor(0.1,requires_grad=True) w3=torch.tensor(0.1,requires_grad=True) Y=w0+w1*x+w2*x**2+torch.exp(w3*x) #opt=torch.optim.Adam([w0,w1,w2,w3],0.001) L=nn.MSELoss() for i in range(20): opt.zero_grad() err=L(Y,y) print(err) err.backward(retain_graph=True) with torch.no_grad(): w0=w0-lr*w0.grad w1=w1-lr*w1.grad w2=w2-lr*w2.grad w3=w3-lr*w3.grad w0.grad.zero_() w1.grad.zero_() w2.grad.zero_() w3.grad.zero_() from the line w0 = w0 - lr * w0.grad: ===> unsupported operand type(s) for *: 'float' and 'NoneType' from the line w0.grad.zero_(): ===> 'NoneType' object has no attribute 'zero_' comes up. How should I fix it? And x=np.linspace(-6,6,120) y=0.5+3*x-x**2+np.exp(-0.4*x) x=torch.from_numpy(x) y=torch.from_numpy(y) w0=torch.tensor(0.1,requires_grad=True) w1=torch.tensor(0.1,requires_grad=True) w2=torch.tensor(0.1,requires_grad=True) w3=torch.tensor(0.1,requires_grad=True) Y=w0+w1*x+w2*x**2+torch.exp(w3*x) opt=torch.optim.Adam([w0,w1,w2,w3],0.001) L=nn.MSELoss() for i in range(20): opt.zero_grad() err=L(Y,y) print(err) err.backward(retain_graph=True) opt.step() If I try this code, then err is not updated. What is the problem? and how should I fix it Further, should the input of nn.MSELoss() be torch.double? Sometimes I get expected dtype Double but got dtype Float error. What should be the type of parameters w0, w1, ...?
You should do the following:

for i in range(20):
    Y = w0 + w1*x + w2*x**2 + torch.exp(w3*x)
    err = L(Y, y)
    print(err)
    err.backward(retain_graph=True)
    with torch.no_grad():
        w0.add_(-lr*w0.grad)
        w1.add_(-lr*w1.grad)
        w2.add_(-lr*w2.grad)
        w3.add_(-lr*w3.grad)
        w0.grad.zero_()
        w1.grad.zero_()
        w2.grad.zero_()
        w3.grad.zero_()

Please note that you were computing Y=w0+w1*x+w2*x**2+torch.exp(w3*x) outside of the for loop, which is wrong; as a result err was never updating. Also, reassigning w0 = w0 - lr*w0.grad inside torch.no_grad() creates a brand-new tensor that is no longer a leaf with requires_grad=True, so on the next iteration w0.grad is None: that is where both of your error messages come from. The in-place add_ keeps the original leaf tensors. dtype=torch.float64 should be fine.
https://stackoverflow.com/questions/62533460/
Parallel hyperparameter optimization with pytorch on a multi-gpu machine
I have access to a multi-GPU machine and I am running a grid-search loop for parameter optimisation. I would like to know if I can distribute several iterations of the loop over multiple GPUs at the same time, and if so, how do I do it (what mechanism? threading? how do I gather the results if the loop executes asynchronously? etc.) Thank you.
I'd suggest using Optuna to handle hyper-parameters search, which should in general perform better than grid search (you can still use it with grid sampling though). I have modified Optuna distributed example to use one GPU per process. Create a training script like: # optimize.py import sys import optuna import your_model DEVICE = 'cuda:' + sys.argv[1] def objective(trial): hidden_size = trial.suggest_int('hidden_size', 8, 64, log=True) # define other hyperparameters return your_model.score(hidden_size=hidden_size, device=DEVICE) if __name__ == '__main__': study = optuna.load_study(study_name='distributed-example', storage='sqlite:///example.db') study.optimize(objective, n_trials=100) In terminal: pip install optuna optuna create-study --study-name "distributed-example" --storage "sqlite:///example.db" Then for every GPU device: python optimize.py 0 python optimize.py 1 ... Finally, best results can be easily discovered: import optuna study = optuna.create_study(study_name='distributed-example', storage='sqlite:///example.db', load_if_exists=True) print(study.best_params) print(study.best_value) Or even visualized.
https://stackoverflow.com/questions/62535341/
Display misclassified images in pytorch
I'm new to pytorch and numpy so this may be a dumb question. I'd like to see some images misclassified by my net, with the correct label and the predicted label. Here is my code valid_and_test_set = torchvision.datasets.MNIST("./mnist", train=False, download=True) dataset_valid, dataset_test = torch.utils.data.random_split(valid_and_test_set,[5000, 5000]) dataset_test.dataset.transform = transform #transform is composed by unsqueeze, normalize, view and gaussian noise with randn dataset_test.dataset.target_transform = OneHot() #OneHot return the label dataloader_test = torch.utils.data.DataLoader(dataset_test.dataset, batch_size=5000, num_workers=num_workers, pin_memory=True) def test(dataset, dataloader): net.eval() with torch.no_grad(): for batch in dataloader: inputs = batch[0] inputs = inputs.to(device, non_blocking=True) outputs = net(inputs) predictions = torch.argmax(outputs, dim=1) return predictions Thank you in advance
There are at least two ways you could do this. One is to store the images that were misclassified during evaluation (running through the test data) and plot those; this is shown here, and in the sketch below. Another way is to make use of TensorBoard, which is quite elegant in my opinion; you can find a comprehensive guide for it here.
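A minimal sketch of the first approach, assuming the question's net and device, and a loader that yields (image, integer-label) batches:

import torch

misclassified = []
net.eval()
with torch.no_grad():
    for inputs, labels in dataloader_test:
        inputs, labels = inputs.to(device), labels.to(device)
        preds = torch.argmax(net(inputs), dim=1)
        wrong = preds != labels
        for img, true, pred in zip(inputs[wrong], labels[wrong], preds[wrong]):
            misclassified.append((img.cpu(), true.item(), pred.item()))

# each stored triple can then be plotted, e.g. with plt.imshow and the
# correct/predicted labels in the title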
https://stackoverflow.com/questions/62537079/
Iterating through DataLoader (PyTorch): RuntimeError: Expected object of scalar type unsigned char but got scalar type float for sequence element 9
I am new to PyTorch and am running into an expected error. The overall context is trying to build a building segmentation model off of Spacenet imagery. I am forked off of this repo from someone at Microsoft AI who built a segmentation model, and I am just trying to re-run her training scripts. I've been able to download the data, and do the pre-processing. My issue comes when trying to actually train the model, I am trying to iterate through my DataLoader, and I get the following error message: RuntimeError: Expected object of scalar type unsigned char but got scalar type float for sequence element 9. Snippets of code that are useful: I have a dataset.py that creates the SpaceNetDataset class and looks like: import os # Ignore warnings import warnings import numpy as np from PIL import Image import torch from torch.utils.data import Dataset warnings.filterwarnings('ignore') class SpaceNetDataset(Dataset): """Class representing a SpaceNet dataset, such as a training set.""" def __init__(self, root_dir, splits=['trainval', 'test'], transform=None): """ Args: root_dir (string): Directory containing folder annotations and .txt files with the train/val/test splits splits: ['trainval', 'test'] - the SpaceNet utilities code would create these two splits while converting the labels from polygons to mask annotations. The two splits are created after chipping larger images into the required input size with some overlaps. Thus to have splits that do not have overlapping areas, we manually split the images (not chips) into train/val/test using utils/split_train_val_test.py, followed by using the SpaceNet utilities to annotate each folder, and combine the trainval and test splits it creates inside each folder. transform (callable, optional): Optional transform to be applied on a sample. 
""" self.root_dir = root_dir self.transform = transform self.image_list = [] self.xml_list = [] data_files = [] for split in splits: with open(os.path.join(root_dir, split + '.txt')) as f: data_files.extend(f.read().splitlines()) for line in data_files: line = line.split(' ') image_name = line[0].split('/')[-1] xml_name = line[1].split('/')[-1] self.image_list.append(image_name) self.xml_list.append(xml_name) def __len__(self): return len(self.image_list) def __getitem__(self, idx): img_path = os.path.join(self.root_dir, 'RGB-PanSharpen', self.image_list[idx]) target_path = os.path.join(self.root_dir, 'annotations', self.image_list[idx].replace('.tif', 'segcls.tif')) image = np.array(Image.open(img_path)) target = np.array(Image.open(target_path)) target[target == 100] = 1 # building interior target[target == 255] = 2 # border sample = {'image': image, 'target': target, 'image_name': self.image_list[idx]} if self.transform: sample = self.transform(sample) return sample To create the DataLoader, I have something like: dset_train = SpaceNetDataset(data_path_train, split_tags, transform=T.Compose([ToTensor()])) loader_train = DataLoader(dset_train, batch_size=train_batch_size, shuffle=True, num_workers=num_workers) I then iterate over the data loader by doing something like: for batch in loader_train: image_tensors = batch['image'] images = batch['image'].cpu().numpy() break # take the first shuffled batch but then I get the error: Traceback (most recent call last): File "training/train_aml.py", line 137, in <module> sample_images_train, sample_images_train_tensors = get_sample_images(which_set='train') File "training/train_aml.py", line 123, in get_sample_images for i, batch in enumerate(loader): File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 345, in __next__ data = self._next_data() File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 856, in _next_data return self._process_data(data) File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 881, in _process_data data.reraise() File "/usr/local/lib/python3.6/dist-packages/torch/_utils.py", line 395, in reraise raise self.exc_type(msg) RuntimeError: Caught RuntimeError in DataLoader worker process 0. Original Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop data = fetcher.fetch(index) File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch return self.collate_fn(data) File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/collate.py", line 74, in default_collate return {key: default_collate([d[key] for d in batch]) for key in elem} File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/collate.py", line 74, in <dictcomp> return {key: default_collate([d[key] for d in batch]) for key in elem} File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/collate.py", line 55, in default_collate return torch.stack(batch, 0, out=out) RuntimeError: Expected object of scalar type unsigned char but got scalar type float for sequence element 9. The error seems quite similar to this one, although I did try a similar solution by casting: dtype = torch.cuda.CharTensor if torch.cuda.is_available() else torch.CharTensor for batch in loader: batch['image'] = batch['image'].type(dtype) batch['target'] = batch['target'].type(dtype) but I end up with the same error. 
A couple of other things that are weird: This seems to be non-deterministic. Most of the time I get this error, but some times the code keeps running (not sure why) The "Sequence Element" number at the end of the error message keeps changing. In this case it was "sequence element 9" sometimes it's "sequence element 2", etc. Not sure why.
Ah, never mind. It turns out unsigned char comes from C++, where it gives you 0 to 255, so it makes sense that's what it expects from image data. So I actually fixed this by doing:

image = np.array(Image.open(img_path)).astype(np.int)
target = np.array(Image.open(target_path)).astype(np.int)

inside the SpaceNetDataset class, and it seemed to work!
https://stackoverflow.com/questions/62543665/
cannot import torch audio ' No audio backend is available.'
import torchaudio When I try to import torchaudio in PyCharm, I get this error: UserWarning: No audio backend is available. warnings.warn('No audio backend is available.')
You need to install an audio file I/O backend: on Linux it's SoX, on Windows it's SoundFile. To check whether one is set, run str(torchaudio.get_audio_backend()); if the result is 'None', install a backend.

SoundFile for Windows: pip install soundfile
SoX for Linux: pip install sox

Check out the PyTorch audio backend docs here.
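A hedged check-and-set sketch; set_audio_backend should be available in torchaudio versions contemporary with this question:

import torchaudio

if torchaudio.get_audio_backend() is None:
    # "soundfile" on Windows, "sox" on Linux
    torchaudio.set_audio_backend("soundfile")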
https://stackoverflow.com/questions/62543843/
Huggingface GPT2 and T5 model APIs for sentence classification?
I've successfully used the Hugging Face Transformers BERT model to do sentence classification using the BertForSequenceClassification class and API. I've used it for both 1-sentence sentiment analysis and 2-sentence NLI. I can see that other models have analogous classes, e.g. XLNetForSequenceClassification and RobertaForSequenceClassification. This type of sentence classification usually involves placing a classifier layer on top of a dense vector representing the entirety of the sentence. Now I'm trying to use the GPT2 and T5 models. However, when I look at the available classes and API for each one, there is no equivalent "ForSequenceClassification" class. For example, for GPT2 there are GPT2Model, GPT2LMHeadModel, and GPT2DoubleHeadsModel classes. Perhaps I'm not familiar enough with the research on GPT2 and T5, but I'm certain that both models are capable of sentence classification. So my questions are: Which Hugging Face classes for GPT2 and T5 should I use for 1-sentence classification? Which classes should I use for 2-sentence (sentence-pair) classification (like natural language inference)? Thank you for any help.
Well, why not use the code of GPT2LMHeadModel itself as inspiration:

class MyGPT2LMHeadModel(GPT2PreTrainedModel):
    def __init__(self, config, num_classes):
        super().__init__(config)
        self.transformer = GPT2Model.from_pretrained('gpt2')
        #self.lm_head = nn.Linear(config.n_embd, config.vocab_size, bias=False)
        self.lm_head = nn.Linear(config.n_embd, num_classes, bias=False)
        ...

    def forward(...):
        hidden_states = self.transformer(...)[0]
        lm_logits = self.lm_head(hidden_states)
        ...
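Filling in the elided parts, a hedged end-to-end sketch; pooling the hidden state of the last token is an assumption here, not something the answer prescribes:

import torch.nn as nn
from transformers import GPT2Model, GPT2PreTrainedModel

class GPT2ForClassification(GPT2PreTrainedModel):
    def __init__(self, config, num_classes):
        super().__init__(config)
        self.transformer = GPT2Model(config)
        # classification head in place of the vocab-sized LM head
        self.score = nn.Linear(config.n_embd, num_classes, bias=False)

    def forward(self, input_ids, attention_mask=None):
        hidden_states = self.transformer(input_ids, attention_mask=attention_mask)[0]
        pooled = hidden_states[:, -1]  # hidden state of the last token
        return self.score(pooled)

For 2-sentence tasks, the usual trick is to pack both sentences into one sequence (separated by the tokenizer's special token) and classify that single sequence.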
https://stackoverflow.com/questions/62561471/
TensorRT (C++ API) undefined reference to `createNvOnnxParser_INTERNAL'
I am trying to create a tensorrt engine from ONNX model using the TensorRT C++ API. I have written code to read, serialize and write a tensorrt engine to disk as per the documentation. I have installed tensorrt7 on colab using debian installation instructions. This is my c++ code that I am compiling using g++ rnxt.cpp -o rnxt #include <cuda_runtime_api.h> #include <NvOnnxParser.h> #include <NvInfer.h> #include <cstdlib> #include <fstream> #include <iostream> #include <sstream> #include <iterator> #include <algorithm> class Logger : public nvinfer1::ILogger { void log(Severity severity, const char* msg) override { // suppress info-level messages if (severity != Severity::kINFO) std::cout << msg << std::endl; } } gLogger; int main(){ int maxBatchSize = 32; nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger); const auto explicitBatch = 1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH); nvinfer1::INetworkDefinition* network = builder->createNetworkV2(explicitBatch); nvonnxparser::IParser* parser = nvonnxparser::createParser(*network, gLogger); parser->parseFromFile("saved_resnext.onnx", 1); for (int i = 0; i < parser->getNbErrors(); ++i) { std::cout << parser->getError(i)->desc() << std::endl; } builder->setMaxBatchSize(maxBatchSize); nvinfer1::IBuilderConfig* config = builder->createBuilderConfig(); config->setMaxWorkspaceSize(1 << 20); nvinfer1::ICudaEngine* engine = builder->buildEngineWithConfig(*network, *config); parser->destroy(); network->destroy(); config->destroy(); builder->destroy(); nvinfer1::IHostMemory *serializedModel = engine->serialize(); std::ofstream engine_file("saved_resnext.engine"); engine_file.write((const char*)serializedModel->data(),serializedModel->size()); serializedModel->destroy(); return 0; } While compiling, I get the following error: /tmp/ccJaGxCX.o: In function `nvinfer1::(anonymous namespace)::createInferBuilder(nvinfer1::ILogger&)': rnxt.cpp:(.text+0x19): undefined reference to `createInferBuilder_INTERNAL' /tmp/ccJaGxCX.o: In function `nvonnxparser::(anonymous namespace)::createParser(nvinfer1::INetworkDefinition&, nvinfer1::ILogger&)': rnxt.cpp:(.text+0x43): undefined reference to `createNvOnnxParser_INTERNAL' collect2: error: ld returned 1 exit status I also get error related to <cuda_runtime_api.h> so I have added (pasted) those files from cuda's include directory (/usr/local/cuda-11.0/targets/x86_64-linux/include) to the /usr/include directoryafter which I am getting the said error. I don't have much experience with C++ and any help would be appreciated. Edit: I have also installed libnvinfer using !apt-get install -y libnvinfer7=7.1.3-1+cuda11.0 !apt-get install -y libnvinfer-dev=7.1.3-1+cuda11.0
This problem occurs because nvonnxparser.so was not linked. Just add target_link_libraries(${TARGET_NAME} nvonnxparser) to your CMakeLists.txt.
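For the plain g++ invocation used in the question (no CMake), the equivalent fix is to pass the TensorRT libraries to the linker; the include and library paths below are typical defaults and may differ on your system:

g++ rnxt.cpp -o rnxt -I/usr/local/cuda/include -L/usr/local/cuda/lib64 -lnvinfer -lnvonnxparser -lcudart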
https://stackoverflow.com/questions/62573335/
What does array[...,list([something]) mean?
I am going through the following lines of code but I didn't understand image[...,list()]. What do the three dots mean? self.probability = 0.5 self.indices = list(permutations(range(3), 3)) if random.random() < self.probability: image = np.asarray(image) image = Image.fromarray(image[...,list(self.indices[random.randint(0, len(self.indices) - 1)])]) What exactly is happening in the above lines? I have understood that the list() part is taking random channels from image? Am I correct?
list(permutations(range(3), 3)) generates all permutations of the integers 0, 1, 2.

from itertools import permutations
list(permutations(range(3), 3))
# [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]

So the following chooses among these permutation tuples: list(self.indices[random.randint(0, len(self.indices) - 1)]) In any case you'll have a permutation over the last axis of image, which is usually the image channels (RGB). Note that with the ellipsis (...) in image[..., ixs] we are taking full slices over all axes except the last, so this is performing a shuffling of the image channels. An example run:

indices = list(permutations(range(3), 3))
indices[np.random.randint(0, len(indices) - 1)]
# (2, 0, 1)

Here's an example; note that this does not change the shape, since we are using integer array indexing on the last axis only:

a = np.random.randint(0, 5, (5, 5, 3))
a[..., (0, 2, 1)].shape
# (5, 5, 3)
https://stackoverflow.com/questions/62574264/
Build vocabulary only from training data or entire data?
Should I build the vocabulary only from the training data or from all the data? Wouldn't that affect the test data either way? I mean: If we only build the vocab from the training data, the model won't recognize many of the words in the validation and test data if a word is not in the vocabulary. Would using a pre-trained word embedding help in this situation (i.e. the model learns the new word not from the training data but from the pre-trained word embedding)? If yes, would a randomly initialized word embedding have the same effect? On the contrary, I've seen many examples where the coders build their vocab from the entire dataset, so the test and validation data are shared with the training data. Wouldn't this be an obvious data-leakage problem?
If you're talking about word embeddings, then you should have some special token for out-of-vocabulary words (you probably don't want to have all unique words, but rather top N). E.g. add a special token like [UNK], and replace every unknown word with it. If you have pre-trained word embeddings and small training set, use them as initial point. Also, there's no reason to initialize embeddings for the words that you won't optimize during training. The only information that may leak is word frequency, which is not a serious issue.
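A minimal sketch of building a top-N vocabulary from the training split only and mapping everything else to [UNK]; the token lists here are placeholders:

from collections import Counter

train_tokens = "the cat sat on the mat".split()  # placeholder training corpus
test_tokens = "the dog sat".split()              # placeholder test sentence

counts = Counter(train_tokens)
itos = ['[PAD]', '[UNK]'] + [w for w, _ in counts.most_common(10000)]
stoi = {w: i for i, w in enumerate(itos)}

unk = stoi['[UNK]']
ids = [stoi.get(w, unk) for w in test_tokens]    # unseen "dog" maps to [UNK]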
https://stackoverflow.com/questions/62575028/
pytorch , changing learning rate during training
x=np.linspace(0,20,100) g=1+0.2*np.exp(-0.1*(x-7)**2) y=np.sin(g*x) plt.plot(x,y) plt.show() x=torch.from_numpy(x) y=torch.from_numpy(y) x=x.reshape((100,1)) y=y.reshape((100,1)) MM=nn.Sequential() MM.add_module('L1',nn.Linear(1,128)) MM.add_module('R1',nn.ReLU()) MM.add_module('L2',nn.Linear(128,128)) MM.add_module('R2',nn.ReLU()) MM.add_module('L3',nn.Linear(128,128)) MM.add_module('R3',nn.ReLU()) MM.add_module('L4',nn.Linear(128,128)) MM.add_module('R5',nn.ReLU()) MM.add_module('L5',nn.Linear(128,1)) MM.double() L=nn.MSELoss() lr=3e-05 ###### opt=torch.optim.Adam(MM.parameters(),lr) ######### Epo=[] COST=[] for epoch in range(8000): opt.zero_grad() err=L(torch.sin(MM(x)),y) Epo.append(epoch) COST.append(err) err.backward() if epoch%100==0: print(err) opt.step() Epo=np.array(Epo)/1000. COST=np.array(COST) pred=torch.sin(MM(x)).detach().numpy() Trans=MM(x).detach().numpy() x=x.reshape((100)) pred=pred.reshape((100)) Trans=Trans.reshape((100)) fig = plt.figure(figsize=(10,10)) #ax = fig.gca(projection='3d') ax = fig.add_subplot(2,2,1) surf = ax.plot(x,y,'r') #ax.plot_surface(x_dat,y_dat,z_pred) #ax.plot_wireframe(x_dat,y_dat,z_pred,linewidth=0.1) fig.tight_layout() #plt.show() ax = fig.add_subplot(2,2,2) surf = ax.plot(x,pred,'g') fig.tight_layout() ax = fig.add_subplot(2,2,3) surff=ax.plot(Epo,COST,'y+') plt.ylim(0,1100) ax = fig.add_subplot(2,2,4) surf = ax.plot(x,Trans,'b') fig.tight_layout() plt.show() This is the original code 1. For changing learning rate during training, I tried to move the position of 'opt' as Epo=[] COST=[] for epoch in range(8000): lr=3e-05 ###### opt=torch.optim.Adam(MM.parameters(),lr) ######### opt.zero_grad() err=L(torch.sin(MM(x)),y) Epo.append(epoch) COST.append(err) err.backward() if epoch%100==0: print(err) opt.step() This is code 2. The code 2 also operate, but the result is quite different with code 1. What is the difference and for changing learning rate during training(like lr=(1-epoch/10000 *0.99), what should I do?
You shouldn't move the optimizer definition into the training loop, because the optimizer keeps other information related to the training history; e.g. in the case of Adam there are running averages of the gradients that are stored and updated dynamically by the optimizer's internal mechanism. Instantiating a new optimizer on each iteration makes you lose this history. To update the learning rate dynamically there are lots of scheduler classes provided in PyTorch (exponential decay, cyclical decay, cosine annealing, ...). You can check the documentation for the full list of schedulers, or you can implement your own if needed: https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate Example from the documentation: to decay the learning rate by multiplying it by 0.5 every 10 epochs you can use the StepLR scheduler as follows:

opt = torch.optim.Adam(MM.parameters(), lr)
scheduler = torch.optim.lr_scheduler.StepLR(opt, step_size=10, gamma=0.5)

And in your original code 1 you can do:

for epoch in range(8000):
    opt.zero_grad()
    err = L(torch.sin(MM(x)), y)
    Epo.append(epoch)
    COST.append(err)
    err.backward()
    if epoch % 100 == 0:
        print(err)
    opt.step()
    scheduler.step()

As I said, there are many other types of LR schedulers, so you can choose one from the documentation or implement your own.
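For the specific schedule asked about, lr = lr0 * (1 - epoch/10000 * 0.99), LambdaLR takes the multiplier function directly; a minimal sketch:

import torch

opt = torch.optim.Adam(MM.parameters(), lr=3e-05)
# the lambda returns a factor that multiplies the initial learning rate
scheduler = torch.optim.lr_scheduler.LambdaLR(
    opt, lr_lambda=lambda epoch: 1 - epoch / 10000 * 0.99)

# call scheduler.step() once per epoch, after opt.step()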
https://stackoverflow.com/questions/62575226/
TensorboardX input problem about add_scalar()
When I use tensorboardX to plot my loss, it shows me this: AssertionError Traceback (most recent call last) <ipython-input-76-73419a51fcc9> in <module> ----> 1 writer.add_scalar('resnet34_loss', loss) F:\Program Files\Python\lib\site-packages\tensorboardX\writer.py in add_scalar(self, tag, scalar_value, global_step, walltime) 403 scalar_value = workspace.FetchBlob(scalar_value) 404 self._get_file_writer().add_summary( --> 405 scalar(tag, scalar_value), global_step, walltime) 406 407 def add_scalars(self, main_tag, tag_scalar_dict, global_step=None, walltime=None): F:\Program Files\Python\lib\site-packages\tensorboardX\summary.py in scalar(name, scalar, collections) 145 name = _clean_tag(name) 146 scalar = make_np(scalar) --> 147 assert(scalar.squeeze().ndim == 0), 'scalar should be 0D' 148 scalar = float(scalar) 149 return Summary(value=[Summary.Value(tag=name, simple_value=scalar)]) AssertionError: scalar should be 0D I have turned the loss from a float into an np.array, and I have read the tensorboardX docs, which say that add_scalar() must be given scalar data; I do that, but it still raises this error. Thanks for your help!
I had the same issue, here is a minimal sample to reproduce your error, writer = SummaryWriter(osp.join('runs', 'hello')) loss = np.random.randn(10) writer.add_scalar(tag='Checking range', scalar_value=loss) writer.close() This returns, Traceback (most recent call last): File "untitled0.py", line 26, in <module> writer.add_scalar(tag='Checking range', scalar_value=loss) File "/home/melike/anaconda2/envs/pooling/lib/python3.6/site-packages/torch/utils/tensorboard/writer.py", line 346, in add_scalar scalar(tag, scalar_value), global_step, walltime) File "/home/melike/anaconda2/envs/pooling/lib/python3.6/site-packages/torch/utils/tensorboard/summary.py", line 248, in scalar assert(scalar.squeeze().ndim == 0), 'scalar should be 0D' AssertionError: scalar should be 0D As indicated by the assertion error, scalar.squeeze().ndim should have 0-dimension. Let's check our scalar_value which is loss, print(loss.squeeze().ndim) This outputs 1 So, we found the reason of error, add_scalar expects 0-d scalar after squeeze operation and we gave it a 1-d scalar. Tensorboard page of PyTorch docs has add_scalar examples. Let's convert our code to that version. writer = SummaryWriter(osp.join('runs', 'hello')) loss = np.random.randn(10) for i, val in enumerate(loss): writer.add_scalar(tag='Checking range', scalar_value=val, global_step=i) writer.close() And this is the output,
https://stackoverflow.com/questions/62596016/
Pytorch error "RuntimeError: index out of range: Tried to access index 512 out of table with 511 rows"
I have sentences that I vectorize using sentence_vector() method of BiobertEmbedding python module (https://pypi.org/project/biobert-embedding/). For some group of sentences I have no problem but for some others I have the following error message : File "/home/nobunaga/.local/lib/python3.6/site-packages/biobert_embedding/embedding.py", line 133, in sentence_vector encoded_layers = self.eval_fwdprop_biobert(tokenized_text) File "/home/nobunaga/.local/lib/python3.6/site-packages/biobert_embedding/embedding.py", line 82, in eval_fwdprop_biobert encoded_layers, _ = self.model(tokens_tensor, segments_tensors) File "/home/nobunaga/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__ result = self.forward(*input, **kwargs) File "/home/nobunaga/.local/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 730, in forward embedding_output = self.embeddings(input_ids, token_type_ids) File "/home/nobunaga/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__ result = self.forward(*input, **kwargs) File "/home/nobunaga/.local/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 268, in forward position_embeddings = self.position_embeddings(position_ids) File "/home/nobunaga/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__ result = self.forward(*input, **kwargs) File "/home/nobunaga/.local/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 114, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/home/nobunaga/.local/lib/python3.6/site-packages/torch/nn/functional.py", line 1467, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: index out of range: Tried to access index 512 out of table with 511 rows. at /pytorch/aten/src/TH/generic/THTensorEvenMoreMath.cpp:237 I discovered that for some group of sentences, the problem was related to tags like <tb> for instance. But for others, even when tags are removed, the error message is still there. (Unfortunately I can't share the code for confidentiality reasons) Do you have any ideas of what could be the problem? Thank you by advance EDIT : you are right cronoik, it will be better with an example. Example : sentences = ["This is the first sentence.", "This is the second sentence.", "This is the third sentence." biobert = BiobertEmbedding(model_path='./biobert_v1.1_pubmed_pytorch_model') vectors = [biobert.sentence_vector(doc) for doc in sentences] This last line of code is what caused the error message in my opinion.
The problem is that the biobert-embedding module isn't taking care of the of the maximum sequence length of 512 (tokens not words!). This is the relevant source code. Have a look at the example below to force the error you received: from biobert_embedding.embedding import BiobertEmbedding #sentence has 385 words sentence = "The near-ubiquity of ASCII was a great help, but failed to address international and linguistic concerns. The dollar-sign was not so useful in England, and the accented characters used in Spanish, French, German, and many other languages were entirely unavailable in ASCII (not to mention characters used in Greek, Russian, and most Eastern languages). Many individuals, companies, and countries defined extra characters as needed—often reassigning control characters, or using value in the range from 128 to 255. Using values above 128 conflicts with using the 8th bit as a checksum, but the checksum usage gradually died out. Text is considered plain-text regardless of its encoding. To properly understand or process it the recipient must know (or be able to figure out) what encoding was used; however, they need not know anything about the computer architecture that was used, or about the binary structures defined by whatever program (if any) created the data. Text is considered plain-text regardless of its encoding. To properly understand or process it the recipient must know (or be able to figure out) what encoding was used; however, they need not know anything about the computer architecture that was used, or about the binary structures defined by whatever program (if any) created the data. Text is considered plain-text regardless of its encoding. To properly understand or process it the recipient must know (or be able to figure out) what encoding was used; however, they need not know anything about the computer architecture that was used, or about the binary structures defined by whatever program (if any) created the data. Text is considered plain-text regardless of its encoding. To properly understand or process it the recipient must know (or be able to figure out) what encoding was used; however, they need not know anything about the computer architecture that was used, or about the binary structures defined by whatever program (if any) created the data The near-ubiquity of ASCII was a great help, but failed to address international and linguistic concerns. The dollar-sign was not so useful in England, and the accented characters used in Spanish, French, German, and many other languages were entirely unavailable in ASCII (not to mention characters used in Greek, Russian, and most Eastern languages). Many individuals, companies, and countries defined extra characters as needed—often reassigning control" longersentence = sentence + ' some' biobert = BiobertEmbedding() print('sentence has {} tokens'.format(len(biobert.process_text(sentence)))) #works biobert.sentence_vector(sentence) print('longersentence has {} tokens'.format(len(biobert.process_text(longersentence)))) #didn't work biobert.sentence_vector(longersentence) Output: sentence has 512 tokens longersentence has 513 tokens #your error message.... What you should do is to implement a sliding window approach to process these texts: import torch from biobert_embedding.embedding import BiobertEmbedding maxtokens = 512 startOffset = 0 docStride = 200 sentence = "The near-ubiquity of ASCII was a great help, but failed to address international and linguistic concerns. 
The dollar-sign was not so useful in England, and the accented characters used in Spanish, French, German, and many other languages were entirely unavailable in ASCII (not to mention characters used in Greek, Russian, and most Eastern languages). Many individuals, companies, and countries defined extra characters as needed—often reassigning control characters, or using value in the range from 128 to 255. Using values above 128 conflicts with using the 8th bit as a checksum, but the checksum usage gradually died out. Text is considered plain-text regardless of its encoding. To properly understand or process it the recipient must know (or be able to figure out) what encoding was used; however, they need not know anything about the computer architecture that was used, or about the binary structures defined by whatever program (if any) created the data. Text is considered plain-text regardless of its encoding. To properly understand or process it the recipient must know (or be able to figure out) what encoding was used; however, they need not know anything about the computer architecture that was used, or about the binary structures defined by whatever program (if any) created the data. Text is considered plain-text regardless of its encoding. To properly understand or process it the recipient must know (or be able to figure out) what encoding was used; however, they need not know anything about the computer architecture that was used, or about the binary structures defined by whatever program (if any) created the data. Text is considered plain-text regardless of its encoding. To properly understand or process it the recipient must know (or be able to figure out) what encoding was used; however, they need not know anything about the computer architecture that was used, or about the binary structures defined by whatever program (if any) created the data The near-ubiquity of ASCII was a great help, but failed to address international and linguistic concerns. The dollar-sign was not so useful in England, and the accented characters used in Spanish, French, German, and many other languages were entirely unavailable in ASCII (not to mention characters used in Greek, Russian, and most Eastern languages). Many individuals, companies, and countries defined extra characters as needed—often reassigning control" longersentence = sentence + ' some' sentences = [sentence, longersentence, 'small test sentence'] vectors = [] biobert = BiobertEmbedding() #https://github.com/Overfitter/biobert_embedding/blob/b114e3456de76085a6cf881ff2de48ce868e6f4b/biobert_embedding/embedding.py#L127 def sentence_vector(tokenized_text, biobert): encoded_layers = biobert.eval_fwdprop_biobert(tokenized_text) # `encoded_layers` has shape [12 x 1 x 22 x 768] # `token_vecs` is a tensor with shape [22 x 768] token_vecs = encoded_layers[11][0] # Calculate the average of all 22 token vectors. 
sentence_embedding = torch.mean(token_vecs, dim=0) return sentence_embedding for doc in sentences: #tokenize your text docTokens = biobert.process_text(doc) while startOffset < len(docTokens): print(startOffset) length = min(len(docTokens) - startOffset, maxtokens) #now we calculate the sentence_vector for the document slice vectors.append(sentence_vector( docTokens[startOffset:startOffset+length] , biobert) ) #stop when the whole document is processed (document has less than 512 #or the last document slice was processed) if startOffset + length == len(docTokens): break startOffset += min(length, docStride) startOffset = 0 P.S.: Your partial success with removing <tb> was possible because removing <tb> will remove 4 tokens ('<', 't', '##b', '>').
https://stackoverflow.com/questions/62598130/
Modifying a pretrained model in PyTorch
I am attempting to modify this particular section of code in the mobilenetv2 model (17): InvertedResidual( (conv): Sequential( (0): ConvBNReLU( (0): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) After the conv2d section, I want to add a max-pooling layer but I am having trouble figuring out to do so. I suspect it would be similar to doing something like this: MobileNet.features = nn.Sequential(nn.Linear(1280, 1000), nn.LeakyReLU(), nn.Dropout(0.5), nn.Linear(1000,3), nn.LogSoftmax(dim=1)) Where I would do something like: MobileNet.features[17].conv[0] = nn.ConvBRELU(nn.Conv2d(),nn.maxpool,nn.BatchNorm(),nn.ReLU()) but when I tried that I got the error message module 'torch.nn' has no attribute 'ConvBNReLU' How can I go about modifying the provided section of code?
ConvBNReLU is not an nn module -- you can find all the available nn modules here. It is defined in torchvision. You would need to import it with from torchvision.models.mobilenet import ConvBNReLU While you cannot just insert a max-pool into ConvBNReLU, it simply inherits from nn.Sequential and helps specify the parameters. I would suggest you make a new class, copying the code from ConvBNReLU, and insert a max-pool there:

class ConvMaxPoolBNReLU(nn.Sequential):
    def __init__(self, in_planes, out_planes, kernel_size=3, stride=1, groups=1, norm_layer=None):
        padding = (kernel_size - 1) // 2
        if norm_layer is None:
            norm_layer = nn.BatchNorm2d
        super(ConvMaxPoolBNReLU, self).__init__(
            nn.Conv2d(in_planes, out_planes, kernel_size, stride, padding, groups=groups, bias=False),
            nn.MaxPool2d(2),
            norm_layer(out_planes),
            nn.ReLU6(inplace=True)
        )
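Hedged usage, with the in/out channel counts read off the architecture printed in the question; note that the replaced block is freshly initialized (its pretrained weights are lost) and the max-pool halves the spatial resolution from that point on:

from torchvision import models

MobileNet = models.mobilenet_v2(pretrained=True)
MobileNet.features[17].conv[0] = ConvMaxPoolBNReLU(160, 960, kernel_size=1)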
https://stackoverflow.com/questions/62602296/
Config change for a pre-trained transformer model
I am trying to implement a classification head for the reformer transformer. The classification head works fine, but when I try to change one of the config parameters- config.axial_pos_shape i.e sequence length parameter for the model it throws an error; size mismatch for reformer.embeddings.position_embeddings.weights.0: copying a param with shape torch.Size([512, 1, 64]) from checkpoint, the shape in current model is torch.Size([64, 1, 64]). size mismatch for reformer.embeddings.position_embeddings.weights.1: copying a param with shape torch.Size([1, 1024, 192]) from checkpoint, the shape in current model is torch.Size([1, 128, 192]). The config: { "architectures": [ "ReformerForSequenceClassification" ], "attention_head_size": 64, "attention_probs_dropout_prob": 0.1, "attn_layers": [ "local", "lsh", "local", "lsh", "local", "lsh" ], "axial_norm_std": 1.0, "axial_pos_embds": true, "axial_pos_embds_dim": [ 64, 192 ], "axial_pos_shape": [ 64, 256 ], "chunk_size_feed_forward": 0, "chunk_size_lm_head": 0, "eos_token_id": 2, "feed_forward_size": 512, "hash_seed": null, "hidden_act": "relu", "hidden_dropout_prob": 0.05, "hidden_size": 256, "initializer_range": 0.02, "intermediate_size": 3072, "is_decoder": true, "layer_norm_eps": 1e-12, "local_attention_probs_dropout_prob": 0.05, "local_attn_chunk_length": 64, "local_num_chunks_after": 0, "local_num_chunks_before": 1, "lsh_attention_probs_dropout_prob": 0.0, "lsh_attn_chunk_length": 64, "lsh_num_chunks_after": 0, "lsh_num_chunks_before": 1, "max_position_embeddings": 8192, "model_type": "reformer", "num_attention_heads": 2, "num_buckets": [ 64, 128 ], "num_chunks_after": 0, "num_chunks_before": 1, "num_hashes": 1, "num_hidden_layers": 6, "output_past": true, "pad_token_id": 0, "task_specific_params": { "text-generation": { "do_sample": true, "max_length": 100 } }, "vocab_size": 320 } Python Code: config = ReformerConfig() config.max_position_embeddings = 8192 config.axial_pos_shape=[64, 128] #config = ReformerConfig.from_pretrained('./cnp/config.json', output_attention=True) model = ReformerForSequenceClassification(config) model.load_state_dict(torch.load("./cnp/pytorch_model.bin"))
I run into the same issue, trying to halve the size of the 65536 (128*512) by default max sequence length used in Reformer pre-training. As @cronoik mentioned, you must: load pretrained Reformer resize it to your need by dropping unnecessary weights save this new model load this new model to perform your desired tasks Those unnecessary weights are the ones from the Position Embeddings layer. In Reformer model, the Axial Position Encodings strategy was used to learn the position embeddings (rather than having fixed ones like BERT). Axial Position Encodings stores position embeddings in a memory efficient manner, using two small tensors rather than a big one. However, the idea of position embeddings remains exactly the same, which is obtaining different embeddings for each position. That said, in theory (correct me if I am misunderstanding somewhere), removing the last position embeddings to match your custom max sequence length should not hurt the performance. You can refer to this post from HuggingFace to see a more detailed description of Axial Position Encodings and understand where to truncate your position embeddings tensor. I have managed to resize and use Reformer with a custom max length of 32768 (128*256) with the following code: # Load intial pretrained model model = ReformerForSequenceClassification.from_pretrained('google/reformer-enwik8', num_labels=2) # Reshape Axial Position Embeddings layer to match desired max seq length model.reformer.embeddings.position_embeddings.weights[1] = torch.nn.Parameter(model.reformer.embeddings.position_embeddings.weights[1][0][:256]) # Update the config file to match custom max seq length model.config.axial_pos_shape = 128, 256 model.config.max_position_embeddings = 128*256 # 32768 # Save model with custom max length output_model_path = "path/to/model" model.save_pretrained(output_model_path)
https://stackoverflow.com/questions/62603089/
Saving a PyTorch models state_dict into redis cache
I’m building a distributed parameter-server type architecture and want to communicate model updates through table solutions on Azure. I’m having a hard time finding any useful information about saving a PyTorch model’s state_dict into a Redis cache. I’ve given up on Azure Cosmos tables because of the size limit (64 KB) per entity and looked toward Redis, since model state_dict params/weights are much larger, even for a small model. Does anyone have any recommendations on how to pursue this? Or whether this is even possible?
My solution (after @GuyKorland commented above) was RedisAI. I implemented a key-value mechanism for the model data and communicated it that way between VMs:

for name, param in model.named_parameters():
    redisai_client.tensorset(name, param.detach().cpu().numpy())
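On the consumer side, a hedged sketch of reading the tensors back with redisai-py's tensorget and loading them into a replica model:

import redisai
import torch

redisai_client = redisai.Client(host='localhost', port=6379)

state = {name: torch.from_numpy(redisai_client.tensorget(name))
         for name, _ in model.named_parameters()}
# strict=False because buffers (e.g. BatchNorm stats) were not cached
model.load_state_dict(state, strict=False)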
https://stackoverflow.com/questions/62603414/
Trouble with PyTorch's 'ToPILImage'
Why does this not work? import torchvision.transforms.functional as tf from torchvision import transforms pic = np.random.randint(0, 255+1, size=28*28).reshape(28, 28) pic = pic.astype(int) plt.imshow(pic) t = transforms.ToPILImage() t(pic.reshape(28, 28, 1)) # tf.to_pil_image(pic.reshape(28, 28, 1)) A beautiful random picture is plotted by matplotlib, but no matter what datatype I chose for my NumPy ndarray, neither to_pil_image or ToPILImage work as expected. The docs have this to say: Converts a tensor ... or a numpy ndarray of shape H x W x C to a PIL Image while preserving the value range. ... If the input has 1 channel, the mode is determined by the data type (i.e int , float , short ). None of these datatypes work except for "short". Everything else results in: TypeError: Input type int64/float64 is not supported thrown from torchvision/transforms/functional.py in to_pil_image(). Further, even though the short datatype will work for the stand alone code snippet I provided first, it breaks down when used inside a transform.Compose() called from a Dataset object's __getitem__: choices = transforms.RandomChoice([transforms.RandomAffine(30), transforms.RandomPerspective()]) transform = transforms.Compose([transforms.ToPILImage(), transforms.RandomApply([choices], 0.5), transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))]) trainset = MNIST('data/train.csv', transform=transform) trainloader = DataLoader(trainset, batch_size=32, shuffle=True, num_workers=4) RuntimeError: DataLoader worker (pid 12917) is killed by signal: Floating point exception. RuntimeError: DataLoader worker (pid(s) 12917) exited unexpectedly
The above answer worked for me only with the following change: pic = pic.astype('uint8') Hope it works for you.
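For reference, a minimal sketch of the full fix under the same 28x28 setup as the question (uint8 is the dtype ToPILImage handles most reliably for single-channel arrays):

import numpy as np
from torchvision import transforms

pic = np.random.randint(0, 256, size=(28, 28)).astype('uint8')  # cast to uint8 before converting
t = transforms.ToPILImage()
img = t(pic.reshape(28, 28, 1))  # single-channel uint8 array -> PIL image in mode 'L'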
https://stackoverflow.com/questions/62617533/
Why return self.head(x.view(x.size(0), -1)) in the nn.Module for pyTorch reinforcement learning example
I understand that the pole-balancing example requires 2 outputs. Reinforcement Learning (DQN) Tutorial Here is the output for self.head print ('x',self.head) x = Linear(in_features=512, out_features=2, bias=True) When I run the epochs, below are the outputs: print (self.head(x.view(x.size(0), -1))) return self.head(x.view(x.size(0), -1)) tensor([[-0.6945, -0.1930]]) tensor([[-0.0195, -0.1452]]) tensor([[-0.0906, -0.1816]]) tensor([[ 0.0631, -0.9051]]) tensor([[-0.0982, -0.5109]]) ... The size of x is: x = torch.Size([121, 32, 2, 8]) So I am trying to understand what x.view(x.size(0), -1) is doing. I understand from the comment in the code that it's returning: Returns tensor([[left0exp,right0exp]...]). But how is x, which is torch.Size([121, 32, 2, 8]), reduced to a tensor of size 2? Is there an alternative way of writing that makes more sense? What if I had 4 outputs? How would I represent that? Why x.size(0)? Why -1? So it appears to take self.head with 4 outputs to 2 outputs. Is that correct? At the bottom is the class I am referring to: class DQN(nn.Module): def __init__(self, h, w, outputs): super(DQN, self).__init__() self.conv1 = nn.Conv2d(3, 16, kernel_size=5, stride=2) self.bn1 = nn.BatchNorm2d(16) self.conv2 = nn.Conv2d(16, 32, kernel_size=5, stride=2) self.bn2 = nn.BatchNorm2d(32) self.conv3 = nn.Conv2d(32, 32, kernel_size=5, stride=2) self.bn3 = nn.BatchNorm2d(32) # Number of Linear input connections depends on output of conv2d layers # and therefore the input image size, so compute it. def conv2d_size_out(size, kernel_size = 5, stride = 2): return (size - (kernel_size - 1) - 1) // stride + 1 convw = conv2d_size_out(conv2d_size_out(conv2d_size_out(w))) convh = conv2d_size_out(conv2d_size_out(conv2d_size_out(h))) linear_input_size = convw * convh * 32 self.head = nn.Linear(linear_input_size, outputs) # Called with either one element to determine next action, or a batch # during optimization. Returns tensor([[left0exp,right0exp]...]). def forward(self, x): x = F.relu(self.bn1(self.conv1(x))) x = F.relu(self.bn2(self.conv2(x))) x = F.relu(self.bn3(self.conv3(x))) return self.head(x.view(x.size(0), -1))
x.view(x.size(0), -1) is flattening the tensor; this is because the Linear layer only accepts a vector (1d array) per sample. To break it down, x.view() reshapes the tensor to the specified shape (more info). x.size(0) returns the 1st dimension of the tensor (which is the batch size; this should remain constant). The -1 in x.view() is a filler; in other words, it stands for a dimension we don't know, so PyTorch automatically calculates it. For example, if x = torch.tensor([1,2,3,4]), to reshape the tensor to a 2x2 you could do x.view(2,2) or x.view(2,-1) or x.view(-1,2). The output shape is not a tensor shape of 2, but that of 121,2 (the 121 is the batch size, and the 2 comes from the Linear layer's output). So to change the output size from 2 to 4, you would have to change the outputs argument in the __init__ function to 4.
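As a minimal sketch with the sizes from the question (the 512 in_features comes from flattening 32*2*8; the random tensor is just a stand-in for the conv output):

import torch

x = torch.randn(121, 32, 2, 8)      # [batch, channels, height, width] after the conv stack
flat = x.view(x.size(0), -1)        # [121, 512], since 32 * 2 * 8 = 512
head = torch.nn.Linear(512, 2)      # matches in_features=512, out_features=2
out = head(flat)                    # [121, 2]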
https://stackoverflow.com/questions/62621317/
What are the differences between torch.jit.trace and torch.jit.script in torchscript?
Torchscript provides torch.jit.trace and torch.jit.script to convert pytorch code from eager mode to script mode. From the documentation, I understand that torch.jit.trace cannot handle control flow and other data structures present in the Python code. Hence torch.jit.script was developed to overcome the problems in torch.jit.trace. But it looks like torch.jit.script works for all the cases, so why do we need torch.jit.trace? Please help me understand the difference between these two methods.
If torch.jit.script works for your code, then that's all you should need. Code that uses dynamic behavior such as polymorphism isn't supported by the compiler torch.jit.script uses, so for cases like that, you would need to use torch.jit.trace.
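A minimal sketch of the difference (my own illustration, not from the docs): scripting compiles data-dependent control flow, while tracing records only the branch taken for the example input.

import torch

class M(torch.nn.Module):
    def forward(self, x):
        if x.sum() > 0:        # data-dependent control flow
            return x * 2
        return x - 2

m = M()
scripted = torch.jit.script(m)               # compiles both branches
traced = torch.jit.trace(m, torch.ones(3))   # records only the branch taken during tracing
print(scripted(-torch.ones(3)))  # uses the else branch: tensor([-3., -3., -3.])
print(traced(-torch.ones(3)))    # still multiplies by 2: tensor([-2., -2., -2.])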
https://stackoverflow.com/questions/62626052/
how to modify resnet 50 with 4 channels as input using pre-trained weights in Pytorch?
I would like to change the resnet50 so that I can switch to 4 channel input, use the same weights for the rgb channels and initialize the last channel with a normal with mean 0 and variance 0.01. here is my code: import torch.nn as nn import torch from torchvision import models from misc.layer import Conv2d, FC import torch.nn.functional as F from misc.utils import * import pdb class Res50(nn.Module): def __init__(self, pretrained=True): super(Res50, self).__init__() self.de_pred = nn.Sequential(Conv2d(1024, 128, 1, same_padding=True, NL='relu'), Conv2d(128, 1, 1, same_padding=True, NL='relu')) self._initialize_weights() res = models.resnet50(pretrained=pretrained) pretrained_weights = res.conv1.weight res.conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3,bias=False) res.conv1.weight[:,:3,:,:] = pretrained_weights res.conv1.weight[:,3,:,:].data.normal_(0.0, std=0.01) self.frontend = nn.Sequential( res.conv1, res.bn1, res.relu, res.maxpool, res.layer1, res.layer2 ) self.own_reslayer_3 = make_res_layer(Bottleneck, 256, 6, stride=1) self.own_reslayer_3.load_state_dict(res.layer3.state_dict()) def forward(self,x): x = self.frontend(x) x = self.own_reslayer_3(x) x = self.de_pred(x) x = F.upsample(x,scale_factor=8) return x def _initialize_weights(self): for m in self.modules(): if isinstance(m, nn.Conv2d): m.weight.data.normal_(0.0, std=0.01) if m.bias is not None: m.bias.data.fill_(0) elif isinstance(m, nn.BatchNorm2d): m.weight.fill_(1) m.bias.data.fill_(0) but it produces the following error, does anyone have any advice? /usr/local/lib/python3.6/dist-packages/torch/tensor.py:746: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. warnings.warn("The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad " Traceback (most recent call last): File "train.py", line 62, in <module> cc_trainer = Trainer(loading_data,cfg_data,pwd) File "/content/drive/My Drive/Folder/Code/trainer.py", line 28, in __init__ self.optimizer = optim.Adam(self.net.CCN.parameters(), lr=cfg.LR, weight_decay=1e-4) #remenber was 1e-4 File "/usr/local/lib/python3.6/dist-packages/torch/optim/adam.py", line 44, in __init__ super(Adam, self).__init__(params, defaults) File "/usr/local/lib/python3.6/dist-packages/torch/optim/optimizer.py", line 51, in __init__ self.add_param_group(param_group) File "/usr/local/lib/python3.6/dist-packages/torch/optim/optimizer.py", line 206, in add_param_group raise ValueError("can't optimize a non-leaf Tensor") ValueError: can't optimize a non-leaf Tensor
Ideally, ResNet accepts 3-channel input. To make it work for a 4-channel input, you have to add one extra layer (2D conv) and pass the 4-channel input through this layer to make its output suitable for the ResNet architecture. Steps: Copy the model weight weight = model.conv1.weight.clone() Add the extra 2d conv for the 4-channel input model.conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3, bias=False) #here 4 indicates 4-channel input You can add ReLU and BatchNorm on top of the extra conv2d. In this example, I am not using them. Connect the extra conv2d with the ResNet model (using the weight you copied before) with torch.no_grad(): model.conv1.weight[:, :3] = weight model.conv1.weight[:, 3] = model.conv1.weight[:, 0] Done. Sorry, I didn't modify your code; you can adapt these changes into it.
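A consolidated sketch of those steps; note that I initialize the fourth channel with the normal(0, 0.01) the question asked for, instead of copying channel 0 as the answer does, so that last line is my own assumption:

import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(pretrained=True)
weight = model.conv1.weight.clone()                  # pretrained RGB filters, [64, 3, 7, 7]
model.conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3, bias=False)
with torch.no_grad():
    model.conv1.weight[:, :3] = weight               # reuse the pretrained RGB filters
    model.conv1.weight[:, 3].normal_(0.0, std=0.01)  # fresh init for the extra channel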
https://stackoverflow.com/questions/62629114/
Anaconda always want to replace my GPU Pytorch version to CPU Pytorch version when updating
I have a newly installed Anaconda3 (version 2020.02) environment, and I have installed the PyTorch GPU version with the command conda install pytorch torchvision cudatoolkit=10.2 -c pytorch. I have verified that my PyTorch indeed runs fine on GPU. However, whenever I update Anaconda with conda update --all, the following message always shows: The following packages will be SUPERSEDED by a higher-priority channel: pytorch pytorch::pytorch-1.5.0-py3.7_cuda102_~ --> pkgs/main::pytorch-1.5.0-cpu_py37h9f948e0_0 In other words, it always wants to replace my GPU-version PyTorch with the CPU version. If I continue the update, it installs the CPU-version PyTorch and my previous PyTorch code no longer runs on GPU. I have also tried the command conda update --all --no-channel-priority but the message still shows. To my knowledge I have never modified Anaconda channels or added custom channels. How can I get rid of this message?
It's happening because, by default, conda prefers packages from a higher-priority channel over any version from a lower-priority one. -- conda docs You can solve this problem by setting the priority of the pytorch channel higher than the default channel, by changing the order in .condarc -- more here channels: - pytorch - defaults - conda-forge channel_priority: true or you can update while specifying the channel as an option: conda update --all -c pytorch
https://stackoverflow.com/questions/62630186/
How can I fix the weights of 'torch.nn.Linear'?
I want to know why there are two tensors in the parameter list of nn.Linear. I tried to set the parameters, but it didn't work. How can I fix it? XX=torch.from_numpy(X) YY=torch.from_numpy(Y) Ytt=torch.from_numpy(Yt) XX=XX.view(100,1) YY=YY.view(100,1) Ytt=Ytt.view(100,1) class model(torch.nn.Module): def __init__(self): super(model,self).__init__() self.linear1 = torch.nn.Linear(1,2).double() self.linear2 = torch.nn.Linear(2,2).double() self.linear3 = torch.nn.Linear(2,1).double() def forward(self,x): x=F.relu(self.linear1(x)) x=F.relu(self.linear2(x)) x=self.linear3(x) return x M=model() L=nn.MSELoss() print(list(M.linear1.parameters())) list(M.linear1.parameters())[0]=torch.Tensor([[-0.1], [ 0.2]]) print(list(M.linear1.parameters())) Then [Parameter containing: tensor([[-0.2288], [ 0.2211]], dtype=torch.float64, requires_grad=True), Parameter containing: tensor([-0.9185, -0.2458], dtype=torch.float64, requires_grad=True)] [Parameter containing: tensor([[-0.2288], [ 0.2211]], dtype=torch.float64, requires_grad=True), Parameter containing: tensor([-0.9185, -0.2458], dtype=torch.float64, requires_grad=True)]
You have two parameter tensors in each nn.Linear: one for the weight matrix and the other for the bias. The function this layer implements is y = Wx + b. Your attempt didn't work because assigning to an element of the Python list returned by list(M.linear1.parameters()) only rebinds that list entry; it never touches the module's actual parameters. You can set the values of a parameter tensor by accessing its data: with torch.no_grad(): M.linear1.weight.data[...] = torch.Tensor([[-0.1], [0.2]])
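A minimal sketch applied to the model from the question, setting both the weight and the bias in place:

with torch.no_grad():
    M.linear1.weight.copy_(torch.tensor([[-0.1], [0.2]], dtype=torch.float64))  # shape (2, 1) matches nn.Linear(1, 2)
    M.linear1.bias.zero_()                                                      # the second parameter tensor
print(list(M.linear1.parameters()))  # now shows the new values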
https://stackoverflow.com/questions/62635046/
How to write a RNN with RNNCell in pytorch?
I am trying to rewrite a code from this simple Vanilla RNN to RNNCell format in pytorch. This is the full code import torch import torch.nn as nn from torch.autograd import Variable torch.manual_seed(777) class SimpleRNN(nn.Module): def __init__(self,inputs,hiddens,n_class): super().__init__() self.rnn = nn.RNNCell(inputs,hiddens) self.linear = nn.Linear(hiddens,n_class) self.hiddens = hiddens def forward(self,x): hx = torch.zeros((x.shape[1],hiddens)) rnn_out = [] for i in x: hx = self.rnn(i,hx) rnn_out.append(hx) linear_out = self.linear(rnn_out.view(-1, hiddens)) return linear_out # hyperparameters seq_len = 6 # |hihell| == 6, equivalent to time step input_size = 5 # one-hot size batch_size = 1 # one sentence per batch num_layers = 1 # one-layer rnn num_classes = 5 # predicting 5 distinct character hidden_size = 4 # output from the RNN # create an index to character mapping idx2char = ['h', 'i', 'e', 'l', 'o'] # Teach hihell -> ihello x_data = [[0, 1, 0, 2, 3, 3]] # hihell x_one_hot = [[[1, 0, 0, 0, 0], # h 0 [0, 1, 0, 0, 0], # i 1 [1, 0, 0, 0, 0], # h 0 [0, 0, 1, 0, 0], # e 2 [0, 0, 0, 1, 0], # l 3 [0, 0, 0, 1, 0]]] # l 3 x_one_hot = torch.Tensor(x_one_hot) y_data = torch.Tensor([1, 0, 2, 3, 3, 4]) # ihello rnn = SimpleRNN(input_size,hidden_size,num_classes) # train the model num_epochs = 15 criterion = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(rnn.parameters(), lr = 0.1) for epoch in range(1, num_epochs + 1): optimizer.zero_grad() outputs = rnn(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() # check the current predicted string # max gives the maximum value and its # corresponding index, we will only # be needing the index _, idx = outputs.max(dim = 1) idx = idx.data.numpy() result_str = [idx2char[c] for c in idx] print('epoch: {}, loss: {:1.3f}'.format(epoch, loss.item())) print('Predicted string: ', ''.join(result_str)) I am trying to emulate the original RNN + fully connected class object from the tutorial and reusing many of its code. I computed the rnn_out and appended its value in a python list rnn_out = [] for i in x: hx = rnn(i,hx) rnn_out.append(hx) because its a python list I cant further execute the code and it will result in this error AttributeError: 'list' object has no attribute 'view' How should I write a RNN using RNNCell?
I am not sure the rest of your code is alright, but in order to fix this error you can convert your rnn_out list to a torch tensor by adding the following line after the end of your for loop: rnn_out = torch.stack(rnn_out)
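For reference, a sketch of the corrected forward (also replacing the stray global hiddens with self.hiddens, which looks like a bug in the original). torch.stack concatenates the per-step hidden states along a new first dimension, so the view into (-1, hiddens) works:

def forward(self, x):
    hx = torch.zeros((x.shape[1], self.hiddens))
    rnn_out = []
    for i in x:                      # iterate over the time dimension
        hx = self.rnn(i, hx)
        rnn_out.append(hx)
    rnn_out = torch.stack(rnn_out)   # [seq_len, batch, hiddens]
    return self.linear(rnn_out.view(-1, self.hiddens))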
https://stackoverflow.com/questions/62642034/
Applying a 2D convolution kernel to each channel in Pytorch?
I have a single 2D kernel of size [3,3], and a Tensor of size [B, 64, H, W]. My question is, how can I apply the same 2D kernel to each input channel? Should I reshape/repeat the kernel? I tried to repeat my kernel as follows: kernel = kernel.repeat((B, 64, 1, 1)) But when I apply it the tensor size changes to [1, 64, H, 1].
One way is to use grouped convolutions with one group per input channel. Example using nn.functional.conv2d directly # suppose kernel.shape == [3, 3] and x.shape == [B, 64, H, W] weights = kernel[None, None, ...].repeat(64, 1, 1, 1) y = nn.functional.conv2d(x, weights, groups=64) or using nn.Conv2d conv = nn.Conv2d(64, 64, 3, groups=64, bias=False) conv.weight.data = kernel[None, None, ...].repeat(64, 1, 1, 1) y = conv(x) Of course you could also specify any padding, stride, or dilation that you want by including those arguments.
https://stackoverflow.com/questions/62643204/
Modified PyTorch loss function BCEWithLogitsLoss returns NaNs
I'm trying to solve a binary classification problem (target=0 and target=1) with an exception: Some of my labels are classified as target=0.5 on purpose, and I wish to have zero loss for either classifying it as 0 or 1 (i.e both classes are "correct"). I tried to implement a custom loss from scratch, based on PyTorch's BCEWithLogitsLoss: class myLoss(torch.nn.Module): def __init__(self, pos_weight=1): super().__init__() self.pos_weight = pos_weight def forward(self, input, target): epsilon = 10 ** -44 my_bce_loss = -1 * (self.pos_weight * target * F.logsigmoid(input + epsilon) + (1 - target) * log(1 - sigmoid(input) + epsilon)) add_loss = (target - 0.5) ** 2 * 4 mean_loss = (my_bce_loss * add_loss).mean() return mean_loss epsilon was chosen so the log will be bounded to -100, as suggested in BCE loss. However I'm still getting NaN errors, after several epochs: Function 'LogBackward' returned nan values in its 0th output. or Function 'SigmoidBackward' returned nan values in its 0th output. Any suggestions how can I correct my loss function? maybe by somehow inherit and modify forward function? Update: The way I call my custom loss function: y = batch[:, -1, :].to(self.device, dtype=torch.float32) y_pred_batch = self.model(x) LossFun = myLoss(self.pos_weight) batch_result.loss = LossFun.forward(y_pred_batch, y) I use Temporal Convolutional Network model, implemented as follows: out = self.conv1(x) out = self.chomp1(out) out = self.elu(out) out = self.dropout1(out) res = x if self.downsample is None else self.downsample(x) return self.tanh(out + res)
Try it this way: class myLoss(torch.nn.Module): def __init__(self, pos_weight=1): super().__init__() self.pos_weight = pos_weight def forward(self, input, target): epsilon = 10 ** -44 input = input.sigmoid().clamp(epsilon, 1 - epsilon) my_bce_loss = -1 * (self.pos_weight * target * torch.log(input) + (1 - target) * torch.log(1 - input)) add_loss = (target - 0.5) ** 2 * 4 mean_loss = (my_bce_loss * add_loss).mean() return mean_loss To test, I run 1000 backward passes: target = torch.randint(high=2, size=(32,)) loss_fn = myLoss() for i in range(1000): inp = torch.rand(1, 32, requires_grad=True) loss = loss_fn(inp, target) loss.backward() if torch.isnan(loss): print('Loss NaN') if torch.isnan(inp.grad).any(): print('NaN') Everything works fine.
https://stackoverflow.com/questions/62652271/
PyTorch - Save just the model structure without weights and then load and train it
I want to separate model structure authoring and training. The model author designs the model structure, saves the untrained model to a file and then sends it training service which loads the model structure and trains the model. Keras has the ability to save the model config and then load it. How can the same be accomplished with PyTorch?
You can write your own function to do that in PyTorch. Saving of weights is straightforward: you simply do torch.save(model.state_dict(), 'weightsAndBiases.pth'). For saving the model structure, you can do this (assume you have a model class named Network, and you instantiate it as yourModel = Network()): model_structure = {'input_size': 784, 'output_size': 10, 'hidden_layers': [each.out_features for each in yourModel.hidden_layers], 'state_dict': yourModel.state_dict() #if you want to save the weights } torch.save(model_structure, 'model_structure.pth') Similarly, we can write a function to load the structure. def load_structure(filepath): structure = torch.load(filepath) model = Network(structure['input_size'], structure['output_size'], structure['hidden_layers']) # model.load_state_dict(structure['state_dict']) if you had saved weights as well return model model = load_structure('model_structure.pth') print(model) Edit: Okay, the above was the case when you have access to the source code for your class, or if the class is relatively simple, so you could define a generic class like this: class Network(nn.Module): def __init__(self, input_size, output_size, hidden_layers, drop_p=0.5): ''' Builds a feedforward network with arbitrary hidden layers. Arguments --------- input_size: integer, size of the input layer output_size: integer, size of the output layer hidden_layers: list of integers, the sizes of the hidden layers ''' super().__init__() # Input to a hidden layer self.hidden_layers = nn.ModuleList([nn.Linear(input_size, hidden_layers[0])]) # Add a variable number of more hidden layers layer_sizes = zip(hidden_layers[:-1], hidden_layers[1:]) self.hidden_layers.extend([nn.Linear(h1, h2) for h1, h2 in layer_sizes]) self.output = nn.Linear(hidden_layers[-1], output_size) self.dropout = nn.Dropout(p=drop_p) def forward(self, x): ''' Forward pass through the network, returns the output logits ''' for each in self.hidden_layers: x = F.relu(each(x)) x = self.dropout(x) x = self.output(x) return F.log_softmax(x, dim=1) However, that will only work for simple cases, so I suppose that's not what you intended. One option is to define the architecture of the model in a separate .py file and import it along with other necessities (if the model architecture is complex), or you can altogether define the model then and there. Another option is converting your PyTorch model to ONNX and saving it. The other option is that, as in TensorFlow where you can create a .pb file defining both the architecture and the weights of the model, in PyTorch you would do something like this: torch.save(model, filepath) This will save the model object itself, as torch.save() is just a pickle-based save at the end of the day. model = torch.load(filepath) This however has limitations: your model class definition might not be picklable (possible in some complicated models). Because this is such an iffy workaround, the answer that you'll usually get is - No, you have to declare the class definition before loading the trained model, i.e. you need to have access to the model class source code. Side notes: An official answer by one of the core PyTorch devs on the limitations of loading a PyTorch model without code: We only save the source code of the class definition. We do not save beyond that (like the package sources that the class is referring to). import foo class MyModel(...): def forward(input): foo.bar(input) Here the package foo is not saved in the model checkpoint.
There are limitations on robustly serializing Python constructs. For example, the default picklers cannot serialize lambdas. There are helper packages that can serialize more Python constructs than the standard library, but they still have limitations. Dill is one such package. Given these limitations, there is no robust way to have torch.load work without having the original source files.
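As a minimal sketch of the ONNX route mentioned above (the flat 784-dim dummy input is an assumption matching the feedforward Network example; use your model's real input shape):

import torch

yourModel.eval()                           # export in inference mode
dummy_input = torch.randn(1, 784)          # hypothetical input shape for the Network above
torch.onnx.export(yourModel, dummy_input, "model.onnx")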
https://stackoverflow.com/questions/62666027/
is it possible to remove the dtype from tensor in pytorch?
targets-> [{'boxes': tensor([[ 23.7296, 28.9209, 122.0997, 213.2374]], device='cuda:0', dtype=torch.float64), 'labels': tensor([1], device='cuda:0'), 'area': tensor([18131.2344], device='cuda:0'), 'iscrowd': tensor([0], device='cuda:0')}] Right now the boxes tensor has dtype=torch.float64. Is it possible to make it just look like: targets-> [{'boxes': tensor([[ 23.7296, 28.9209, 122.0997, 213.2374]], device='cuda:0',), 'labels': tensor([1], device='cuda:0'), 'area': tensor([18131.2344], device='cuda:0'), 'iscrowd': tensor([0], device='cuda:0')}]
All tensors have a dtype attribute, no exceptions. However, PyTorch has a default float dtype, usually torch.float32 (single precision 32bit floating point). When displaying tensors with this default dtype, it is omitted. However, your boxes tensor has a non-default dtype, torch.float64, and therefore it is being displayed. You can use the .to() command to cast this tensor to the default torch.float32 dtype, and consequently make PyTorch not explicitly display the dtype: targets[0]['boxes'] = targets[0]['boxes'].to(dtype=torch.float32) # .to() is not an in-place operation This will result in In [*]: targets Out[*]: [{'boxes': tensor([[ 23.7296, 28.9209, 122.0997, 213.2374]], device='cuda:0'), 'labels': tensor([1], device='cuda:0'), 'area': tensor([18131.2344], device='cuda:0'), 'iscrowd': tensor([0], device='cuda:0')}]
https://stackoverflow.com/questions/62670050/
Runtime Error - element 0 of tensors does not require grad and does not have a grad_fn
I am using a Unet model for semantic segmentation - I have a custom dataset of images and their masks both in .png format. I have looked in the online forums and tried stuff, but not much works? Any suggestions in how to resolve the error or improve the code would be helpful. model.eval() with torch.no_grad(): for xb, yb in val_dl: yb_pred = model(xb.to(device)) # yb_pred = yb_pred["out"].cpu() print(yb_pred.shape) yb_pred = torch.argmax(yb_pred,axis = 1) break print(yb_pred.shape) criteron = nn.CrossEntropyLoss(reduction = 'sum') opt = optim.Adam(model.parameters(), lr = 3e-4) def loss_batch(loss_func, output, target, opt = None): loss = loss_func(output, target) if opt is not None: opt.zero_grad() loss.backward() opt.step() return loss.item(), None lr_scheduler = ReduceLROnPlateau(opt, mode = 'min', factor = 0.5, patience= 20, verbose = 1) def get_lr(opt): for param_group in opt.param_groups: return param_group['lr'] current_lr = get_lr(opt) print('current_lr = {}'.format(current_lr)) def loss_epoch(model, loss_func, dataset_dl, sanity_check = False, opt = None): running_loss = 0.0 len_data = len(dataset_dl.dataset) for xb, yb in dataset_dl: xb = xb.to(device) yb = yb.to(device) # xb = torch.tensor(xbh, requires_grad=True) output = model(xb) loss_b, metric_b = loss_batch(loss_func, output, yb, opt) running_loss += loss_b if sanity_check is True: break loss = running_loss/float(len_data) return loss, None def train_val(model, params): num_epochs = params["num_epochs"] loss_func = params["loss_func"] opt = params["optimizer"] train_dl = params["train_dl"] val_dl = params["val_dl"] sanity_check = params["sanity_check"] lr_scheduler = params["lr_scheduler"] path2weights = params["path2weights"] loss_history = {"train": [], "val": []} best_model_wts = copy.deepcopy(model.state_dict()) best_loss = float('inf') for epoch in range(num_epochs): current_lr = get_lr(opt) print('Epoch {}/{}, current_lr = {}'.format(epoch, num_epochs - 1, current_lr)) with torch.enable_grad(): model.train() train_loss, _ = loss_epoch(model, loss_func, train_dl, sanity_check, opt) loss_history["train"].append(train_loss) model.eval() with torch.no_grad(): val_loss, _ = loss_epoch(model, loss_func, val_dl, sanity_check, opt) loss_history["val"].append(val_loss) if val_loss < best_loss: best_loss = val_loss best_model_wts = copy.deepcopy(model.state_dict()) torch.save(model.state_dict(), path2weights) print("copied best model weights!!") lr_scheduler.step(val_loss) if current_lr != get_lr(opt): print("Loading best model weights!!") model.load_state_dict(best_model_wts) print("train Loss: %.6f" %(train_loss)) print("val_loss: %.6f" %(val_loss)) print("-"*20) model.load_state_dict(best_model_wts) return model, loss_history, metric_history path2models = "./models/" if not os.path.exists(path2models): os.mkdir(path2models) param_train = { "num_epochs": 10, "loss_func": criteron, "optimizer": opt, "train_dl": train_dl, "val_dl": val_dl, "sanity_check": False, "lr_scheduler": lr_scheduler, "path2weights": path2models + "weights.pt" model, loss_hist, _ = train_val(model, param_train) The error message looks like - File "", line 10, in model, loss_hist, _ = train_val(model, param_train) File "", line 27, in train_val val_loss, _ = loss_epoch(model, loss_func, val_dl, sanity_check, opt) File "", line 13, in loss_epoch loss_b, metric_b = loss_batch(loss_func, output, yb, opt) File "", line 6, in loss_batch loss.backward() File "C:\Users\W540\anaconda3\lib\site-packages\torch\tensor.py", line 198, in backward 
torch.autograd.backward(self, gradient, retain_graph, create_graph) File "C:\Users\W540\anaconda3\lib\site-packages\torch\autograd_init_.py", line 100, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn I am not sure which variable to set as require_grad = True or where I should enable grad...
You can try this before loss.backward(): loss = Variable(loss, requires_grad = True) Or, because Variable has been deprecated in PyTorch (it still exists, but only for backward compatibility), you can do the same thing simply with the following code: loss.requires_grad = True
https://stackoverflow.com/questions/62699306/
Error trying to build docker for flask app
I'm trying to build a Docker image to run a flask app. I've never done this before. I have the flask app working locally. Here is my approach: My directory structure for the project looks like this: model.pkl README.md images/ static/ Dockerfile flaskapp.py requirements.txt templates/ I can launch the flask app by running python flaskapp.py and it runs in my browser (locally). I want to create a Docker image so other machines can run this project without dealing with all the dependency stuff. To do so, I've done the following: I created a Dockerfile with this inside: FROM python:3 COPY requirements.txt /tmp COPY flaskapp.py /tmp COPY model.pkl /tmp COPY images /tmp COPY static /tmp COPY templates /tmp WORKDIR /tmp ADD flaskapp.py / RUN pip install -r requirements.txt CMD [ "python", "flaskapp.py" ] Ran the command docker build -t python-barcode . That worked, so I then ran docker run python-barcode. The terminal printed out * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit), but it didn't work and I got this error in the browser: This site can't be reached. 0.0.0.0 refused to connect. Try: Checking the connection Checking the proxy and the firewall ERR_CONNECTION_REFUSED So I did some digging and I updated my Dockerfile to this (adding the last line): FROM python:3 COPY requirements.txt /tmp COPY flaskapp.py /tmp COPY model.pkl /tmp COPY images /tmp COPY static /tmp COPY templates /tmp WORKDIR /tmp ADD flaskapp.py / RUN pip install -r requirements.txt CMD [ "python", "flaskapp.py" ] CMD ["flask", "run", "--host", "0.0.0.0" ] Then running docker run python-barcode again, I get this error: Usage: flask run [OPTIONS] Error: Could not locate Flask application. You did not provide the FLASK_APP environment variable. For more information see http://flask.pocoo.org/docs/latest/quickstart/ How should I proceed?
If its relevant, my flaskapp.py looks like this: model = load_learner('', 'model.pkl') app = Flask(__name__) app.config['SEND_FILE_MAX_AGE_DEFAULT'] = 0 def classify(document): X = document y = model.predict(X) return y class ReviewForm(Form): pred = TextAreaField('',[validators.DataRequired(),validators.length(min=1)]) @app.route('/') def index(): form = ReviewForm(request.form) return render_template('reviewform.html', form=form) @app.route('/results', methods=['POST']) def results(): form = ReviewForm(request.form) if request.method == 'POST' and form.validate(): sequence = request.form['pred'] y = classify(sequence) return render_template('results.html', y = y) return render_template('reviewform.html', form=form) if __name__ == '__main__': app.run(host= '0.0.0.0') EDIT 1 Now I am getting this error: [2020-07-03 00:29:51,222] ERROR in app: Exception on / [GET] Traceback (most recent call last): File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1988, in wsgi_app response = self.full_dispatch_request() File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1641, in full_dispatch_request rv = self.handle_user_exception(e) File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1544, in handle_user_exception reraise(exc_type, exc_value, tb) File "/usr/local/lib/python3.8/site-packages/flask/_compat.py", line 33, in reraise raise value File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1639, in full_dispatch_request rv = self.dispatch_request() File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1625, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "flaskapp.py", line 94, in index return render_template('reviewform.html', form=form) File "/usr/local/lib/python3.8/site-packages/flask/templating.py", line 133, in render_template return _render(ctx.app.jinja_env.get_or_select_template(template_name_or_list), File "/usr/local/lib/python3.8/site-packages/jinja2/environment.py", line 930, in get_or_select_template return self.get_template(template_name_or_list, parent, globals) File "/usr/local/lib/python3.8/site-packages/jinja2/environment.py", line 883, in get_template return self._load_template(name, self.make_globals(globals)) File "/usr/local/lib/python3.8/site-packages/jinja2/environment.py", line 857, in _load_template template = self.loader.load(self, name, globals) File "/usr/local/lib/python3.8/site-packages/jinja2/loaders.py", line 115, in load source, filename, uptodate = self.get_source(environment, name) File "/usr/local/lib/python3.8/site-packages/flask/templating.py", line 57, in get_source return self._get_source_fast(environment, template) File "/usr/local/lib/python3.8/site-packages/flask/templating.py", line 85, in _get_source_fast raise TemplateNotFound(template) jinja2.exceptions.TemplateNotFound: reviewform.html
You can have only one CMD per Dockerfile; only the last one takes effect, so the other is ignored. CMD [ "python", "flaskapp.py" ] CMD ["flask", "run", "--host", "0.0.0.0" ] Just remove the second one, as you are already listening on all interfaces in the code. CMD [ "python", "flaskapp.py" ] Now run the Docker container with this command, publishing the container's port 5000 to the host: docker run -p 5000:5000 -it python-barcode And then you will be able to hit the endpoint http://localhost:5000
https://stackoverflow.com/questions/62706674/
How to find the mean and Std of a 3 channel image
from numpy import asarray from PIL import Image image = Image.open('../input/chest-xray-pneumonia/chest_xray/train/NORMAL/IM-0115-0001.jpeg') pixels = asarray(image) pixels = pixels.astype('float32') means = pixels.mean(axis=(0,1), dtype='float64') stds = pixels.std(axis=(0,1), dtype='float64') print('Means: %s, Stds: %s' % (means, stds)) pixels = (pixels - means) / stds means = pixels.mean(axis=(0,1), dtype='float64') stds = pixels.std(axis=(0,1), dtype='float64') print('Means: %s, Stds: %s' % (means, stds)) output>> Means: 128.90747832983968, Stds: 62.30103035552067 Means: 1.2235509834827096e-07, Stds: 1.0000000181304383 The problem is that with a 3-channel image I only get a single value for the mean and the std, instead of one per channel.
Separate the image channels into r, g, b using OpenCV, then use the numpy mean and std functions to calculate the mean and standard deviation for each channel. Example of separating the image into rgb channels: import cv2 import numpy as np img = cv2.imread("image.jpg") b = img[:,:,0] g = img[:,:,1] r = img[:,:,2]
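Completing the answer, a minimal sketch of the per-channel statistics (note that cv2.imread returns channels in B, G, R order). If your original code returned scalars, it is likely because PIL loaded the X-ray as a single-channel image of shape (H, W); in that case convert with image.convert('RGB') first if you need three values.

import cv2
import numpy as np

img = cv2.imread("image.jpg")        # shape (H, W, 3), channels in B, G, R order
means = img.mean(axis=(0, 1))        # one mean per channel
stds = img.std(axis=(0, 1))          # one std per channel
print('Means: %s, Stds: %s' % (means, stds))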
https://stackoverflow.com/questions/62709189/
Is there a way to obtain IUV map from image in tensorflow?
I have been using detectron2/densepose to generate IUV map which helped me generate UV texture from the input image. Now, for deployment, I need to have IUV map generation from input image in client-side using JavaScript. I am familiar with TensorFlow but the current dense pose model runs only on PyTorch and interconversion tools are giving errors too. Any comment or solution to the problem will be very helpful. IUV image needed: Final UV map:
First, you need to use the dump command to generate a pkl file; after that we can generate the needed IUV image. Inside DensePose, use apply_net.py to generate the pkl file. It has a lot of other options as well. You can check them out here. python3 apply_net.py dump configs/densepose_rcnn_R_101_FPN_s1x.yaml /models/R_101_FPN_s1x.pkl /Images/frame.jpg --output output.pkl -v After having the pkl file, we need to extract the information from it and create the IUV image you need. import pickle import numpy as np from PIL import Image img = Image.open('/Images/frame.jpg') img_w ,img_h = img.size with open('output.pkl','rb') as f: data=pickle.load(f) i = data[0]['pred_densepose'][0].labels.cpu().numpy() uv = data[0]['pred_densepose'][0].uv.cpu().numpy() iuv = np.stack((uv[1,:,:], uv[0,:,:], i * 0,)) iuv = np.transpose(iuv, (1,2,0)) iuv_img = Image.fromarray(np.uint8(iuv*255),"RGB") iuv_img.show() #It shows only the cropped person box = data[0]["pred_boxes_XYXY"][0] box[2]=box[2]-box[0] box[3]=box[3]-box[1] x,y,w,h=[int(v) for v in box] bg=np.zeros((img_h,img_w,3)) bg[y:y+h,x:x+w,:]=iuv bg_img = Image.fromarray(np.uint8(bg*255),"RGB") bg_img.save('output.png')
https://stackoverflow.com/questions/62710513/
Why EfficientNet same model return different predictions
!pip install efficientnet_pytorch -q import torch import torch.nn as nn import efficientnet_pytorch as efn device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') device model = efn.EfficientNet.from_name('efficientnet-b0') model = model.to(device) img = torch.ones((2, 3, 680, 680))*0.5 img = img.to(device) preds1 = model(img) preds2 = model(img) preds3 = model(img) print(preds1[0][0]) print(preds2[0][0]) print(preds3[0][0]) del model, img, preds1, preds2, preds3 And preds1, preds2, and preds3 are different. I am confused about this point, model is same and input is same, why predictions are different? tensor(0.2599, grad_fn=<SelectBackward>) tensor(0.1364, grad_fn=<SelectBackward>) tensor(0.1263, grad_fn=<SelectBackward>)
Wow, solved! I checked the efficientnet-pytorch source code and found that I should switch the model to eval() mode: in train mode, layers like dropout (and EfficientNet's drop-connect) are applied randomly, which is why repeated forward passes on the same input gave different outputs. Now the predictions are the same!! model = efn.EfficientNet.from_name('efficientnet-b0') model = model.to(device) _ = model.eval() ## hey, look here!! img = torch.ones((2, 3, 680, 680))*0.5 img = img.to(device) preds1 = model(img) preds2 = model(img) preds3 = model(img) print(preds1[0][0]) print(preds2[0][0]) print(preds3[0][0])
https://stackoverflow.com/questions/62714886/
Error while inferencing a LSTM model with the help of onnx-runtime . Invalid Argument Error
I have exported a LSTM model from pytorch to onnx . The model takes sequences of length 200. It has hidden state size 256 , number of layers = 2.The forward function takes input size of (batches , sequencelength) as input along with a tuple consisting of hidden state and cell state. I am getting an error while inferencing the model with onnx runtime. hidden state and cell state dimensions are same. ioio1 = np.random.rand(1,200) ioio2 = np.zeros((2,1,256),dtype = np.float) pred = runtime_session.run([output_name],{runtime_session.get_inputs()[0].name:ioio1, runtime_session.get_inputs()[1].name :ioio2, runtime_session.get_inputs()[2].name : ioio2}) InvalidArgument Traceback (most recent call last) <ipython-input-204-3928823f661e> in <module>() 1 pred = runtime_session.run([output_name],{runtime_session.get_inputs()[0].name:ioio1, 2 runtime_session.get_inputs()[1].name :ioio2, ----> 3 runtime_session.get_inputs()[2].name : ioio2}) /usr/local/lib/python3.6/dist-packages/onnxruntime/capi/session.py in run(self, output_names, input_feed, run_options) 109 output_names = [output.name for output in self._outputs_meta] 110 try: --> 111 return self._sess.run(output_names, input_feed, run_options) 112 except C.EPFail as err: 113 if self._enable_fallback: InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Unexpected input data type. Actual: (N11onnxruntime17PrimitiveDataTypeIdEE) , expected: (N11onnxruntime17PrimitiveDataTypeIlEE)
This issue is similar: https://github.com/microsoft/onnxruntime/issues/4423 Resolution: ioio1 = np.random.rand(1,200) is float64 (double), which isn't the dtype your model is expecting.
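A minimal sketch of the fix, assuming the exported graph expects float32 for all three inputs (cast the arrays before calling run):

import numpy as np

ioio1 = np.random.rand(1, 200).astype(np.float32)   # np.random.rand returns float64 by default
ioio2 = np.zeros((2, 1, 256), dtype=np.float32)     # was dtype=np.float, i.e. float64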
https://stackoverflow.com/questions/62720299/
Does creating a data loader inside another data loader in pytorch slow things down (during meta-learning)?
I was trying to create a data loader for meta-learning but got that my code is extremely slow and I can't figure out why. I am doing this because a set of data sets (so I need data loaders for them) is what is used in meta-learning. I am wondering if it's because I have a collate function generating data loaders. Here is the collate function that generates data loaders (and receives ALL the data sets): class GetMetaBatch_NK_WayClassTask: def __init__(self, meta_batch_size, n_classes, k_shot, k_eval, shuffle=True, pin_memory=True, original=False, flatten=True): self.meta_batch_size = meta_batch_size self.n_classes = n_classes self.k_shot = k_shot self.k_eval = k_eval self.shuffle = shuffle self.pin_memory = pin_memory self.original = original self.flatten = flatten def __call__(self, all_datasets, verbose=False): NUM_WORKERS = 0 # no need to change get_data_loader = lambda data_set: iter(data.DataLoader(data_set, batch_size=self.k_shot+self.k_eval, shuffle=self.shuffle, num_workers=NUM_WORKERS, pin_memory=self.pin_memory)) #assert( len(meta_set) == self.meta_batch_size*self.n_classes ) # generate M N,K-way classification tasks batch_spt_x, batch_spt_y, batch_qry_x, batch_qry_y = [], [], [], [] for m in range(self.meta_batch_size): n_indices = random.sample(range(0,len(all_datasets)), self.n_classes) # create N-way, K-shot task instance spt_x, spt_y, qry_x, qry_y = [], [], [], [] for i,n in enumerate(n_indices): data_set_n = all_datasets[n] dataset_loader_n = get_data_loader(data_set_n) # get data set for class n data_x_n, data_y_n = next(dataset_loader_n) # get all data from current class spt_x_n, qry_x_n = data_x_n[:self.k_shot], data_x_n[self.k_shot:] # [K, CHW], [K_eval, CHW] # get labels if self.original: #spt_y_n = torch.tensor([n]).repeat(self.k_shot) #qry_y_n = torch.tensor([n]).repeat(self.k_eval) spt_y_n, qry_y_n = data_y_n[:self.k_shot], data_y_n[self.k_shot:] else: spt_y_n = torch.tensor([i]).repeat(self.k_shot) qry_y_n = torch.tensor([i]).repeat(self.k_eval) # form K-shot task for current label n spt_x.append(spt_x_n); spt_y.append(spt_y_n) # array length N with tensors size [K, CHW] qry_x.append(qry_x_n); qry_y.append(qry_y_n) # array length N with tensors size [K, CHW] # form N-way, K-shot task with tensor size [N,W, CHW] spt_x, spt_y, qry_x, qry_y = torch.stack(spt_x), torch.stack(spt_y), torch.stack(qry_x), torch.stack(qry_y) # form N-way, K-shot task with tensor size [N*W, CHW] if verbose: print(f'spt_x.size() = {spt_x.size()}') print(f'spt_y.size() = {spt_y.size()}') print(f'qry_x.size() = {qry_x.size()}') print(f'spt_y.size() = {qry_y.size()}') print() if self.flatten: CHW = qry_x.shape[-3:] spt_x, spt_y, qry_x, qry_y = spt_x.reshape(-1, *CHW), spt_y.reshape(-1), qry_x.reshape(-1, *CHW), qry_y.reshape(-1) ## append to N-way, K-shot task to meta-batch of tasks batch_spt_x.append(spt_x); batch_spt_y.append(spt_y) batch_qry_x.append(qry_x); batch_qry_y.append(qry_y) ## get a meta-set of M N-way, K-way classification tasks [M,K*N,C,H,W] batch_spt_x, batch_spt_y, batch_qry_x, batch_qry_y = torch.stack(batch_spt_x), torch.stack(batch_spt_y), torch.stack(batch_qry_x), torch.stack(batch_qry_y) return batch_spt_x, batch_spt_y, batch_qry_x, batch_qry_y that is passed to another data loader here: def get_meta_set_loader(meta_set, meta_batch_size, n_episodes, n_classes, k_shot, k_eval, pin_mem=True, n_workers=4): """[summary] Args: meta_set ([type]): the meta-set meta_batch_size ([type]): [description] n_classes ([type]): [description] pin_mem (bool, optional): [Since returning cuda 
tensors in dataloaders is not recommended due to cuda subties with multithreading, instead set pin=True for fast transfering of the data to cuda]. Defaults to True. n_workers (int, optional): [description]. Defaults to 4. Returns: [type]: [description] """ if n_classes > len(meta_set): raise ValueError(f'You really want a N larger than the # classes in the meta-set? n_classes, len(meta_set = {n_classes, len(meta_set)}') collator_nk_way = GetMetaBatch_NK_WayClassTask(meta_batch_size, n_classes, k_shot, k_eval) episodic_sampler = EpisodicSampler(total_classes=len(meta_set), n_episodes=n_episodes) episodic_metaloader = data.DataLoader( meta_set, num_workers=n_workers, pin_memory=pin_mem, # to make moving to cuda more efficient collate_fn=collator_nk_way, # does the collecting to return M N,K-shot task batch_sampler=episodic_sampler # for keeping track of the episode ) return episodic_metaloader (will generate a smaller example) related: https://discuss.pytorch.org/t/what-does-runtimeerror-cuda-driver-error-initialization-error-mean/87505/6
Conceptually, PyTorch dataloaders should have no problem being fast even if one is created inside the other. One way to debug your issue is to use the line_profiler package to get a better idea of where the slowdown happens. If you cannot resolve the issue after using line_profiler, please update your question with the output of the profiler to help us understand what might be wrong. Allow the profiler to run for some time to gather enough statistics about the execution of your dataloader. The @profile decorator works for both functions and class methods, so it should work for your dataloader functions.
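A minimal sketch of line_profiler's explicit API, applied to the collate function from the question (the names collator_nk_way and all_datasets are taken from the question; this is an illustration, not a drop-in):

from line_profiler import LineProfiler

lp = LineProfiler()
profiled_call = lp(collator_nk_way.__call__)   # wrap the collate function
batch = profiled_call(all_datasets)            # run it once or a few times to gather statistics
lp.print_stats()                               # per-line timings, showing where the time goes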
https://stackoverflow.com/questions/62723569/
Pytorch: RuntimeError: expected dtype Float but got dtype Long
I encounter this weird error when building a simple NN in Pytorch. I dont understand this error and why this consern Long and Float datatype in backward function. Anyone encounter this before? Thanks for any help. Traceback (most recent call last): File "test.py", line 30, in <module> loss.backward() File "/home/liuyun/anaconda3/envs/torch/lib/python3.7/site-packages/torch/tensor.py", line 198, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "/home/liuyun/anaconda3/envs/torch/lib/python3.7/site-packages/torch/autograd/__init__.py", line 100, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: expected dtype Float but got dtype Long (validate_dtype at /opt/conda/conda-bld/pytorch_1587428398394/work/aten/src/ATen/native/TensorIterator.cpp:143) frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x4e (0x7f5856661b5e in /home/liuyun/anaconda3/envs/torch/lib/python3.7/site-packages/torch/lib/libc10.so) frame #1: at::TensorIterator::compute_types() + 0xce3 (0x7f587e3dc793 in /home/liuyun/anaconda3/envs/torch/lib/python3.7/site -packages/torch/lib/libtorch_cpu.so) frame #2: at::TensorIterator::build() + 0x44 (0x7f587e3df174 in /home/liuyun/anaconda3/envs/torch/lib/python3.7/site-packages /torch/lib/libtorch_cpu.so) frame #3: at::native::smooth_l1_loss_backward_out(at::Tensor&, at::Tensor const&, at::Tensor const&, at::Tensor const&, long) + 0x193 (0x7f587e22cf73 in /home/liuyun/anaconda3/envs/torch/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #4: <unknown function> + 0xe080b7 (0x7f58576960b7 in /home/liuyun/anaconda3/envs/torch/lib/python3.7/site-packages/torc h/lib/libtorch_cuda.so) frame #5: at::native::smooth_l1_loss_backward(at::Tensor const&, at::Tensor const&, at::Tensor const&, long) + 0x16e (0x7f587 e23569e in /home/liuyun/anaconda3/envs/torch/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #6: <unknown function> + 0xed98af (0x7f587e71c8af in /home/liuyun/anaconda3/envs/torch/lib/python3.7/site-packages/torc h/lib/libtorch_cpu.so) frame #7: <unknown function> + 0xe22286 (0x7f587e665286 in /home/liuyun/anaconda3/envs/torch/lib/python3.7/site-packages/torc h/lib/libtorch_cpu.so) Here is the source code: import torch import torch.nn as nn import numpy as np import torchvision from torchvision import models from UTKLoss import MultiLoss from ipdb import set_trace # out features [13, 2, 5] model_ft = models.resnet18(pretrained=True) num_ftrs = model_ft.fc.in_features model_ft.fc = nn.Linear(num_ftrs, 20) model_ft.cuda() criterion = MultiLoss() optimizer = torch.optim.Adam(model_ft.parameters(), lr = 1e-3) image = torch.randn((1, 3, 128, 128)).cuda() age = torch.randint(110, (1,)).cuda() gender = torch.randint(2, (1,)).cuda() race = torch.randint(5, (1,)).cuda() optimizer.zero_grad() output = model_ft(image) age_loss, gender_loss, race_loss = criterion(output, age, gender, race) loss = age_loss + gender_loss + race_loss loss.backward() optimizer.step() Here is what I define my loss function import torch import torch.nn as nn import torch.nn.functional as F class MultiLoss(nn.Module): def __init__(self): super().__init__() def forward(self, output, age, gender, race): age_pred = output[:, :13] age_pred = torch.sum(age_pred, 1) gender_pred = output[:, 13: 15] race_pred = output[:, 15:] age_loss = F.smooth_l1_loss(age_pred.view(-1, 1), age.cuda()) gender_loss = F.cross_entropy(gender_pred, torch.flatten(gender).cuda(), reduction='sum') race_loss = F.cross_entropy(race_pred, 
torch.flatten(race).cuda(), reduction='sum') return age_loss, gender_loss, race_loss
Change the criterion call to: age_loss, gender_loss, race_loss = criterion(output, age.float(), gender, race) If you look at your error, you can trace it to: frame #3: at::native::smooth_l1_loss_backward_out In the MultiLoss class, smooth_l1_loss works with age, so I changed its type to float (the expected dtype is Float) while passing it to the criterion. You can check that age is torch.int64 (i.e. torch.long) by printing age.dtype. I am not getting the error after doing this. Hope it helps.
https://stackoverflow.com/questions/62726792/
pytorch albumentations augmentation p value?
augmented_images(raw.image_id.unique()[1230], albumentations.HorizontalFlip(p=1)) For the augmented image, what does p=1 mean? Does changing this value change the angle? If not, how should I apply horizontal augmentation with various different angles?
As you can see in the docs of albumentations.HorizontalFlip: Parameters: p (float) – probability of applying the transform. Default: 0.5. If you want to rotate, you should consider using albumentations.augmentations.transforms.Rotate: Rotate(limit=90, interpolation=1, border_mode=4, value=None, mask_value=None, always_apply=False, p=0.5) Rotate the input by an angle selected randomly from the uniform distribution. Parameters: limit ((int, int) or int) – range from which a random angle is picked. If limit is a single int an angle is picked from (-limit, limit). Default: (-90, 90) [...] p (float) – probability of applying the transform. Default: 0.5.
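A minimal sketch combining both transforms (the 128x128 random array is just a stand-in for a real image; adjust limit and p to taste):

import albumentations
import numpy as np

transform = albumentations.Compose([
    albumentations.HorizontalFlip(p=0.5),     # flipped half the time
    albumentations.Rotate(limit=30, p=0.5),   # random angle in [-30, 30], applied half the time
])
image = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)  # stand-in for a real image
augmented = transform(image=image)["image"]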
https://stackoverflow.com/questions/62730486/
RuntimeError: CuDNN error: CUDNN_STATUS_SUCCESS
I am running code that I downloaded from GitHub. It is supposed to be working (I saw that other people managed to activate it). When I try to run it I get the following error message: RuntimeError: CuDNN error: CUDNN_STATUS_SUCCESS The code uses pytorch 0.4.1. I have CUDA installed. When I run the command cat /usr/local/cuda/version.txt I get the answer: CUDA Version 10.0.130 When I run the command conda list -n <my env name> I see: cudatoolkit ver 9.0 cudnn ver 7.6.5 And now, my question: what should I do to avoid this error? Do I need to use pip install for a more recent version of cudnn? If so, which one?
I also faced the same issue. In my case, the PyTorch version was 0.4.1 and the CUDA version was 9.0. I solved the issue by adding this line of code: torch.backends.cudnn.benchmark = True
https://stackoverflow.com/questions/62752522/
Hugging face: tokenizer for masked lm question
I am using transformer version 3.0.0 for my project and have some questions. I want to use a bert model with masked lm pretraining for protein sequences. To get a character level tokenizer I derived from the BertTokenizer from transformers import BertTokenizer class DerivedBertTok(BertTokenizer): def __init__(self, **kwargs): super().__init__(**kwargs) def tokenize(self, text): if isinstance(text, np.ndarray): assert len(text) == 1 text = text[0] return [x if x in self.vocab else self.unk_token for x in text] my vocab looks like this [PAD] [CLS] [SEP] [UNK] [MASK] A R N D B C E Q Z G H I L K M F P S T W Y V The usage seems quite similar to what i have seen in the docs: d_tokenizer = DerivedBertTok( vocab_file=vocab_path, do_lower_case=False, do_basic_tokenize=False, tokenize_chinese_chars=False ) d_tokenizer.encode_plus(np.array(["AXEF"])[0], max_length=20, pad_to_max_length=True, add_special_tokens=True, truncation=True, return_tensors='pt') From this I was building a pytorch Dataset with a custom collate function. all the collate function does is taking all input tensors and stacking them from transformers import BatchEncoding def collate_fn(self, batch): # this function will not work for higher dimension inputs elem = batch[0] elem_type = type(elem) if isinstance(elem, BatchEncoding): new_shapes = {key: (len(batch), value.shape[1]) for key, value in elem.items()} outs = {key: value.new_empty(new_shapes[key]) for key, value in elem.items()} if torch.utils.data.get_worker_info() is not None: [v.share_memory_() for v in outs.values()] return {key: torch.stack(tuple((d[key].view(-1) for d in batch)), 0, out=outs[key]) for key in elem.keys()} else: raise ValueError(f"type: {elem_type} not understood") Question 1: So I was wondering if the BatchEncoding or another class is already capable of doing this (and doing it possibly better?). Or using a different Dataset/ DataLoader class altogether. Question 2: Additionally, I want to mask some of the Inputs as required for the masked LM, however I did not manage find any implementation in the transformer library. Are there any recommendations for doing this?
After some more digging I found a DataCollator that implements randomly replacing tokens with the mask token, at: https://github.com/huggingface/transformers/blob/615be03f9d961c0c9722fe10e7830e011066772e/src/transformers/data/data_collator.py#L69. So I changed my DataSource to return raw text instead of a BatchEncoding in the __getitem__ method, and then do the encoding and masking in the collate function.
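For reference, a minimal sketch of constructing that collator with the question's tokenizer (in transformers 3.0.0, to the best of my knowledge, its __call__ expects a list of input_id tensors, so the dataset or collate function must supply those):

from transformers import DataCollatorForLanguageModeling

collator = DataCollatorForLanguageModeling(
    tokenizer=d_tokenizer,      # the derived character-level tokenizer from the question
    mlm=True,                   # enable masked-LM masking
    mlm_probability=0.15,       # fraction of tokens to mask, the BERT default
)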
https://stackoverflow.com/questions/62757772/
How to use pytorch in flask and run it on wsgi_mod for apache2
I'm trying to run a flask app with apache in wsgi_mod As Describe in following link https://pytorch.org/tutorials/recipes/deployment_with_flask.html i configure my app for working with pytorch when my app tries to import torchvision it hangs with no error log. Is there any way to fix this issue? the main python code is: import torchvision.models as models import torchvision.transforms as transforms from flask import Flask app = Flask(__name__) @app.route('/') def hello(): return ("Hello World qwertyuiop!!") if __name__ == "main": app.run() and the apache2 config is: User daemon Group daemon </IfModule> LoadFile "G:/Python3764/python37.dll" LoadModule wsgi_module "G:/Python3764/lib/site-packages/mod_wsgi/server/mod_wsgi.cp37-win_amd64.pyd" WSGIPythonHome "G:/Python3764" <VirtualHost *:80> ServerName localhost:80 WSGIScriptAlias / "D:/WSGIAppDir/web.wsgi" DocumentRoot "D:/WSGIAppDir" <Directory "D:/WSGIAppDir"> Require all granted </Directory> </VirtualHost> import sys sys.path.insert(0, 'D:/WSGIAppDir') from hello import app as application
Add the following line to the file sites-available/your-site.conf: WSGIApplicationGroup %{GLOBAL} This forces your app to run in the main Python interpreter; some packages with C extensions, PyTorch among them, do not work in mod_wsgi sub-interpreters, which is why the torchvision import hangs without an error log. A placement example is shown below.
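For example, placed inside the question's own VirtualHost block (a sketch; the paths are the ones from the question):

<VirtualHost *:80>
    ServerName localhost:80
    WSGIScriptAlias / "D:/WSGIAppDir/web.wsgi"
    WSGIApplicationGroup %{GLOBAL}
    DocumentRoot "D:/WSGIAppDir"
    <Directory "D:/WSGIAppDir">
        Require all granted
    </Directory>
</VirtualHost>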
https://stackoverflow.com/questions/62788479/
Training Detectron2 on part of COCO dataset
I'm trying to train a model with Detectron2 and the COCO dataset for vehicle and person detection, and I'm having problems with model loading. I've used posts here on SO and https://github.com/immersive-limit/coco-manager (the filter.py file) to filter the COCO dataset down to only the annotations and images for the classes "person", "car", "bike", "truck" and "bicycle". My directory structure is now:

main
- annotations:
  - instances_train2017_filtered.json
  - instances_val2017_filtered.json
- images:
  - train2017_filtered (lots of images inside)
  - val2017_filtered (lots of images inside)

Basically, the only things I've done here were to remove documents and images not corresponding to those classes and to change their IDs (so they run from 1 to 5). Then I used code from the Detectron2 tutorial:

import random
import os

import cv2
from detectron2.data import MetadataCatalog, DatasetCatalog
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer, DefaultPredictor
from detectron2.config import get_cfg
from detectron2.model_zoo import model_zoo
from detectron2.utils.visualizer import Visualizer

register_coco_instances("train", {},
                        "/home/jakub/Projects/coco/annotations/instances_train2017_filtered.json",
                        "/home/jakub/Projects/coco/images/train2017_filtered/")
register_coco_instances("val", {},
                        "/home/jakub/Projects/coco/annotations/instances_val2017_filtered.json",
                        "/home/jakub/Projects/coco/images/val2017_filtered/")

metadata = MetadataCatalog.get("train")
dataset_dicts = DatasetCatalog.get("train")

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = ("train",)
cfg.DATASETS.TEST = ()
cfg.DATALOADER.NUM_WORKERS = 2
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.00025
cfg.SOLVER.MAX_ITER = 300
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 512
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 5

os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()

cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
cfg.DATASETS.TEST = ("val", )
predictor = DefaultPredictor(cfg)

img = cv2.imread("demo/input.jpg")
outputs = predictor(img)

for d in random.sample(dataset_dicts, 1):
    im = cv2.imread(d["file_name"])
    outputs = predictor(im)
    v = Visualizer(im[:, :, ::-1], metadata=metadata, scale=0.8)
    out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
    cv2.imwrite('demo/output_retrained.jpg', out.get_image()[:, :, ::-1])

During training, I get the following errors:

Unable to load 'roi_heads.box_predictor.cls_score.weight' to the model due to incompatible shapes: (81, 1024) in the checkpoint but (6, 1024) in the model!
Unable to load 'roi_heads.box_predictor.cls_score.bias' to the model due to incompatible shapes: (81,) in the checkpoint but (6,) in the model!
Unable to load 'roi_heads.box_predictor.bbox_pred.weight' to the model due to incompatible shapes: (320, 1024) in the checkpoint but (20, 1024) in the model!
Unable to load 'roi_heads.box_predictor.bbox_pred.bias' to the model due to incompatible shapes: (320,) in the checkpoint but (20,) in the model!
Unable to load 'roi_heads.mask_head.predictor.weight' to the model due to incompatible shapes: (80, 256, 1, 1) in the checkpoint but (5, 256, 1, 1) in the model!
Unable to load 'roi_heads.mask_head.predictor.bias' to the model due to incompatible shapes: (80,) in the checkpoint but (5,) in the model!

The model cannot predict anything useful after training, despite total_loss decreasing during training. I understand that I should get warnings because of the size mismatch (I've reduced the number of classes), which is normal from what I've seen on the internet, but I don't get "Skipped" after each error line. I think the model is actually not loading these weights at all, and I wonder why and how I can fix this.

EDIT
For comparison, similar behaviour in an almost identical situation was reported as an issue, but there each error line ended with "Skipped", making them effectively warnings, not errors: https://github.com/facebookresearch/detectron2/issues/196
This 'warning' basically says that you are trying to initialize weights from a model that was trained on a different number of classes, which is expected, as you have read. I suspect you are not getting any results from your training because your MetadataCatalog does not have the 'thing_classes' property set; you are only calling MetadataCatalog.get("train"). Calling

MetadataCatalog.get("train").set(thing_classes=["person", "car", "bike", "truck", "bicycle"])

should solve the issue. If it doesn't, your json is most likely corrupted.
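For context, a minimal sketch of how the registration and metadata lines could fit together (the class list mirrors the five categories from the question; the order is an assumption and should match category IDs 1 through 5 in the filtered json):

from detectron2.data import MetadataCatalog
from detectron2.data.datasets import register_coco_instances

register_coco_instances("train", {},
                        "/home/jakub/Projects/coco/annotations/instances_train2017_filtered.json",
                        "/home/jakub/Projects/coco/images/train2017_filtered/")
# Explicitly tell Detectron2 the class names for this dataset
MetadataCatalog.get("train").set(
    thing_classes=["person", "car", "bike", "truck", "bicycle"])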
https://stackoverflow.com/questions/62796881/
using 2d array as indices of a 4d array
I have a NumPy 2D array of shape (4000, 8000), produced by a tensor.max() operation, that stores indices into the first dimension of a 4D array of shape (30, 4000, 8000, 3). I need to obtain a (4000, 8000, 3) array that, at each position, uses the index from the 2D max array to pick the pixel from the corresponding image:

A = np.random.randint(0, 29, (4000, 8000), dtype=int)
B = np.random.randint(0, 255, (30, 4000, 8000, 3), dtype=np.uint8)

final = np.zeros((B.shape[1], B.shape[2], 3))
r = 0
for row in A:
    c = 0
    for col in row:
        x = A[r, c]
        final[r, c] = B[x, r, c]
        c = c + 1
    r = r + 1

print(final.shape)

Is there any vectorised way to do this? I am fighting with RAM usage when using loops. Thanks
You can use np.take_along_axis. First let's create some data (you should have provided a reproducible example):

>>> N, H, W, C = 10, 20, 30, 3
>>> arr = np.random.randn(N, H, W, C)
>>> indices = np.random.randint(0, N, size=(H, W))

Then we'll use np.take_along_axis. For that, the indices array must have the same number of dimensions as the arr array, so we use np.newaxis to insert axes where they are missing:

>>> res = np.take_along_axis(arr, indices[np.newaxis, ..., np.newaxis], axis=0)

This already gives usable output, but with a singleton dimension on the first axis:

>>> res.shape
(1, 20, 30, 3)

So we can squeeze that out:

>>> res = np.squeeze(res)
>>> res.shape
(20, 30, 3)

And finally check that the data is as we wanted:

>>> np.all(res[0, 0] == arr[indices[0, 0], 0, 0])
True
>>> np.all(res[5, 3] == arr[indices[5, 3], 5, 3])
True
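Under the same logic, the double loop from the question should presumably collapse to one line (A and B as defined there; the index array gets a leading and a trailing singleton axis, and the leading axis is dropped again afterwards):

final = np.take_along_axis(B, A[np.newaxis, ..., np.newaxis], axis=0)[0]
print(final.shape)  # (4000, 8000, 3), dtype uint8, no Python loops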
https://stackoverflow.com/questions/62817662/
Conv3d error on dimensions of a 5d tensor
I have a tensor of shape [5, 1, 3, 126, 126], which represents a video (5 frames, each a 126x126 RGB image). I need to forward it through

self.resnet = nn.Sequential(
    nn.Conv3d(5, 5, 1),
    nn.UpsamplingBilinear2d(size=None, scale_factor=0.5)
)

but I get

RuntimeError: Given groups=1, weight of size [5, 5, 1, 1, 1], expected input[5, 1, 3, 126, 126] to have 5 channels, but got 1 channels instead

I have probably misunderstood how Conv3d works, but I can't really understand why the expected dimensions are so different from the ones my 5D tensor has at that moment.
This is happening because the shape of your tensor is wrong. The Conv3d class expects the batch size to come first, then the number of channels, then the number of frames, and then the height and width. That is why you are getting the error. You should change the shape of your input tensor to [5, 3, 1, 126, 126]. Your Conv3d parameters are also wrong: the first number should be the number of input channels the Conv3d receives, which in your case is 3, because it is an RGB image. The second number is the number of output channels, which you do not need to change.
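A minimal sketch of that reshape, treating the 5 frames as the batch dimension as the answer suggests (the UpsamplingBilinear2d stage is left out, since this only demonstrates the Conv3d input layout):

import torch
import torch.nn as nn

x = torch.randn(5, 1, 3, 126, 126)   # (frames, 1, channels, H, W)
x = x.permute(0, 2, 1, 3, 4)         # -> [5, 3, 1, 126, 126]

conv = nn.Conv3d(in_channels=3, out_channels=5, kernel_size=1)
out = conv(x)                        # 3 input channels now match
print(out.shape)                     # torch.Size([5, 5, 1, 126, 126])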
https://stackoverflow.com/questions/62834321/
Problems initializing model in pytorch
I can't initialize my model in PyTorch; I get:

TypeError Traceback (most recent call last)
<ipython-input-82-9bfee30a439d> in <module>()
    288 dataset = News_Dataset(true_path=args.true_news_file, fake_path=args.fake_news_file,
    289                        embeddings_path=args.embeddings_file)
--> 290 classifier = News_classifier_resnet_based().cuda()
    291 try:
    292     classifier.load_state_dict(torch.load(args.model_state_file))

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    548             result = self._slow_forward(*input, **kwargs)
    549         else:
--> 550             result = self.forward(*input, **kwargs)
    551         for hook in self._forward_hooks.values():
    552             hook_result = hook(self, input, result)

TypeError: forward() missing 1 required positional argument: 'input'

Someone asked for code; it is given below.

class News_classifier_resnet_based(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.activation = torch.nn.ReLU6()
        self.sigmoid = torch.nn.Sigmoid()
        self.positional_encodings = PositionalEncoder()
        self.resnet = list(torch.hub.load('pytorch/vision:v0.6.0', 'resnet18', pretrained=True).children())
        self.to_appropriate_shape = torch.nn.Conv2d(in_channels=1, out_channels=1, kernel_size=77)
        self.conv1 = torch.nn.Conv2d(in_channels=1, out_channels=64, kernel_size=7, stride=2, padding=3)
        self.conv1.weight = torch.nn.Parameter(self.resnet[0].weight[:, 0, :, :].data)
        self.center = torch.nn.Sequential(*self.resnet[1:-2])
        self.conv2 = torch.nn.Conv2d(in_channels=512, out_channels=1, kernel_size=1)
        self.conv3 = torch.nn.Conv2d(in_channels=1, out_channels=1, kernel_size=7)
        self.title_conv = torch.nn.Sequential(
            torch.nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, stride=3),
            self.activation(),
            torch.nn.Conv2d(in_channels=1, out_channels=1, kernel_size=2, stride=2),
            self.activation(),
            torch.nn.Conv2d(in_channels=1, out_channels=1, kernel_size=2, stride=2)
        )
        self.title_lin = torch.nn.Linear(25, 1)
        self.year_lin = torch.nn.Linear(10, 1)
        self.month_lin = torch.nn.Linear(12, 1)
        self.day_lin = torch.nn.Linear(31, 1)
        self.date_lin = torch.nn.Linear(3, 1)
        self.final_lin = torch.nn.Linear(3, 1)

    def forward(self, x_in):
        # input shape - (batch_size, 3+title_len+seq_len, embedding_dim)
        # output shape - (batch_size, 1)
        year = x_in[:, 0, :10]
        month = x_in[:, 1, :12]
        day = x_in[:, 2, :31]
        title = x_in[:, 3:3+args.title_len, :]
        text = x_in[:, 3+args.title_len:, :]
        title = self.positional_encodings(title)
        text = self.positional_encodings(text)
        text = text.unsqueeze(1)
        text = self.activation(self.to_appropriate_shape(text))
        text = self.activation(self.conv1(text))
        text = self.activation(self.center(text))
        text = self.activation(self.conv2(text))
        text = self.activation(self.conv3(text))
        text = text.reshape(args.batch_size, -1)
        title = title.unsqueeze(1)
        title = self.activation(self.title_conv(title))
        title = title.reshape(args.batch_size, -1)
        title = self.activation(self.title_lin(title))
        year = self.activation(self.year_lin(year))
        month = self.activation(self.month_lin(month))
        day = self.activation(self.day_lin(day))
        date = torch.cat([year, month, day], dim=-1)
        date = self.activation(self.date_lin(date))
        final = torch.cat([date, title, text], dim=-1)
        final = self.sigmoid(self.final_lin(final))
        return final

classifier = News_classifier_resnet_based().cuda()

What should I do? StackOverflow asked for more details: I'm trying to classify texts using word embeddings, and the problem lies in the last line. I am working in Google Colab. Also, when I created models in other code blocks, I had no problems.
The problem is in your __init__ function. When you create title_conv, instead of passing the activation object you created earlier, you are calling the activation with no arguments. You can fix it by changing that part of the code to this:

self.title_conv = torch.nn.Sequential(
    torch.nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, stride=3),
    self.activation,  # Notice I have removed ()
    torch.nn.Conv2d(in_channels=1, out_channels=1, kernel_size=2, stride=2),
    self.activation,  # Notice I have removed ()
    torch.nn.Conv2d(in_channels=1, out_channels=1, kernel_size=2, stride=2)
)
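To see why the original line fails: self.activation is already an instance of torch.nn.ReLU6, so self.activation() invokes its forward pass with no input tensor, which is exactly the missing 'input' argument in the traceback. A two-line reproduction:

import torch

act = torch.nn.ReLU6()
act()  # raises TypeError: forward() missing 1 required positional argument: 'input'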
https://stackoverflow.com/questions/62849789/
MNIST dataset fails to transform to tensor object
How do I properly transform the MNIST dataset to tensor type? I tried the code below, but it does not work. The error message AttributeError: 'int' object has no attribute 'type' indicates it is not a tensor type. The code below can be tested in Google Colab. It appears that PyTorch version 1.3.1 can run this, but 1.5.1 cannot.

>>> import torch
>>> import torch.nn as nn
>>> import torchvision.transforms as transforms
>>> import torchvision.datasets as dsets
>>> import numpy as np
>>> torch.__version__
'1.5.1+cu101'
>>> train_dataset = dsets.MNIST(root='./data', train=True, download=True, transform=transforms.ToTensor())
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to ./data/MNIST/raw/train-images-idx3-ubyte.gz
100.1%
Extracting ./data/MNIST/raw/train-images-idx3-ubyte.gz to ./data/MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz to ./data/MNIST/raw/train-labels-idx1-ubyte.gz
113.5%
Extracting ./data/MNIST/raw/train-labels-idx1-ubyte.gz to ./data/MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz to ./data/MNIST/raw/t10k-images-idx3-ubyte.gz
100.4%
Extracting ./data/MNIST/raw/t10k-images-idx3-ubyte.gz to ./data/MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz to ./data/MNIST/raw/t10k-labels-idx1-ubyte.gz
180.4%
Extracting ./data/MNIST/raw/t10k-labels-idx1-ubyte.gz to ./data/MNIST/raw
Processing...
/pytorch/torch/csrc/utils/tensor_numpy.cpp:141: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program.
Done!
>>> print("Print the training dataset:\n ", train_dataset)
Print the training dataset:
  Dataset MNIST
    Number of datapoints: 60000
    Root location: ./data
    Split: Train
    StandardTransform
Transform: ToTensor()
>>> print("Type of data element: ", train_dataset[0][1].type())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'int' object has no attribute 'type'
You need to access the first element (index 0, the image tensor), not the second one (index 1, the label):

>>> print("Type of data element: ", train_dataset[0][0].type())
Type of data element:  torch.FloatTensor
>>> print(train_dataset[0][0].shape, train_dataset[0][1])
(torch.Size([1, 28, 28]), 5)
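In other words, each dataset item is an (image, label) tuple; the ToTensor transform applies only to the image, while the label stays a plain Python int. A quick check, under the same setup:

img, label = train_dataset[0]
print(type(img), img.shape)   # <class 'torch.Tensor'> torch.Size([1, 28, 28])
print(type(label), label)     # <class 'int'> 5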
https://stackoverflow.com/questions/62873654/
Loss function negative log likelihood giving loss despite perfect accuracy
I am debugging a sequence-to-sequence model and purposely tried to perfectly overfit a small dataset of ~200 samples (sentence pairs of length between 5 and 50). I am using negative log-likelihood loss in PyTorch. I get a low loss (~1e-5), but the accuracy on the same dataset is only 33%. I trained the model on 3 samples as well and obtained 100% accuracy, yet during training the loss was still nonzero. I was under the impression that negative log-likelihood only gives a nonzero loss (in the same region of ~1e-5) if there is a mismatch between predicted and target labels. Is a bug in my code likely?
There is no bug in your code. The way things usually work in deep nets is that the network predicts the logits (i.e., log-likelihoods). These logits are then transformed into probabilities using softmax (or a sigmoid function). Cross-entropy is finally evaluated based on the predicted probabilities. The advantage of this approach is that it is numerically stable and easy to train with. On the other hand, because of the softmax you can never have "perfect" 0/1 probabilities for your predictions: even when your network has perfect accuracy, it will never assign probability 1 to the correct prediction, only a value "close to one". As a result, the loss will always be positive (albeit small).
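A tiny sketch that makes this concrete: even when the logit for the correct class dominates by a wide margin, the cross-entropy loss is small but strictly positive.

import torch
import torch.nn.functional as F

logits = torch.tensor([[10.0, -10.0, -10.0]])  # confident, correct prediction
target = torch.tensor([0])
loss = F.cross_entropy(logits, target)
print(loss.item())  # tiny (on the order of 1e-9) but never exactly 0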
https://stackoverflow.com/questions/62886651/
How can I install PyTorch?
I installed PyTorch in Anaconda, but when I run

pip install torchvision

Anaconda shows me this error:

No matching distribution found for torch==1.4.0 (from torchvision)

Did I install it badly?
The following worked for me. First install MKL:

conda install -c anaconda mkl

After this, install pytorch and torchvision:

conda install pytorch torchvision cudatoolkit=10.0 -c pytorch

For pip, note that the PyPI package is named torch, not pytorch:

pip install torch torchvision
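Once either route finishes, a quick sanity check (the version numbers will vary with the build you picked):

import torch
import torchvision

print(torch.__version__)
print(torchvision.__version__)
print(torch.cuda.is_available())  # True only on a CUDA-enabled install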
https://stackoverflow.com/questions/62888778/
Modify PyTorch model for inference - then resume training
I'd like to alternate inference and training of a model in PyTorch, but I need to modify it during inference, and I have some questions about that.

1. Can I unload the model from the GPU by calling model.to('cpu'), make a modified copy (and run it on the GPU), and then move the original back by calling model.to('cuda')? In other words, is moving the model gpu->cpu->gpu a lossless operation?
2. What happens to the parameters that were passed to the optimizer? I don't want to lose the optimizer state.
3. What is the best way to make a copy of an in-memory model? I can save it and then reload a copy, but I'm not sure that is necessary just to copy.
4. If I want to run inference in half precision (more than 2x faster in this case), can I change the model to half and then change it back? Is that lossless? (Does the model keep a full-precision copy of everything, or does it replace the weights with half-precision in place?)

The model is a ResNet50 lookalike. There is not enough GPU memory for two models :)
"What is the best way to make a copy of an in-memory model? I can save it and then reload a copy, but not sure if that is necessary just to copy." You can copy a model with: import copy ... best_model = copy.deepcopy(model) with best_model you can save to disk or load in other model, etc
https://stackoverflow.com/questions/62890734/
pytorch: Random classifier: ValueError: optimizer got an empty parameter list
Is there a best practice or efficient way to have a random classifier in PyTorch? My random classifier basically looks like this:

def forward(self, inputs):
    # get a random tensor
    logits = torch.rand(batch_size, num_targets, num_classes)
    return logits

This should be fine in principle, but the optimizer raises a ValueError, because the classifier, in contrast to all other classifiers / models in the system, obviously does not have any parameters that can be optimized. Is there a built-in torch solution to this, or must I change the system's architecture (to not perform optimization)?

Edit: If I add some arbitrary parameters to the model as shown below, the loss raises RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

def __init__(self, transformer_models: Dict, opt: Namespace):
    super(RandomMulti, self).__init__()
    self.num_classes = opt.polarities_dim
    # add some parameters so that the optimizer doesn't raise an exception
    self.some_params = nn.Linear(2, 2)

My assumption really would be that there is a simpler solution, since having a random baseline classifier is a rather common thing in machine learning.
Indeed, having a "random" baseline is common practice, but usually you do not need to explicitly generate one, let alone "train" it. In most cases you have quite accurate expected values for the "random" baseline. For instance, in ImageNet classification you have 1000 categories of equal size, so predicting a category at random should give you an expected accuracy of 1/1000. You do not need to instantiate a random classifier to produce that number.
If you insist on explicitly instantiating a random classifier, what would be the meaning of "training" it? Hence the errors you get: PyTorch simply cannot understand what you are doing. You can have a random classifier and you can evaluate its performance, but there is no meaning to training it.
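A sketch of evaluating such a baseline with no model and no optimizer at all (the class count and sample size are made up for illustration):

import torch

num_classes = 5
n = 100_000
labels = torch.randint(0, num_classes, (n,))
random_preds = torch.randint(0, num_classes, (n,))  # the whole "classifier"

accuracy = (random_preds == labels).float().mean().item()
print(accuracy)  # hovers around 1/num_classes = 0.2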
https://stackoverflow.com/questions/62892891/
pytorch backward error, one of variables for gradient computation modified by an inplace operation
I'm new to PyTorch and have been trying to implement a text summarization network. When I call loss.backward(), an error appears:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [10, 1, 1, 400]], which is output 0 of UnsqueezeBackward0, is at version 98; expected version 97 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

It's a seq2seq model, and I think the problem comes from this code snippet:

final_dists = torch.zeros((batch_size, dec_max_len, extended_vsize))  # to hold the model outputs with extended vocab
attn_dists = torch.zeros((batch_size, dec_max_len, enc_max_len))  # to retain the attention weights over decoder steps
coverages = torch.zeros((batch_size, dec_max_len, enc_max_len))  # the coverages are retained to compute coverage loss

inp = self.emb_dropout(self.embedding(dec_batch[:, 0]))  # starting input: <SOS>, shape [batch_size]
# self.prev_coverage is the accumulated coverage
coverage = None  # initially None, but accumulates

with torch.autograd.set_detect_anomaly(True):
    for i in range(1, dec_max_len):
        # NOTE: the outputs, attn_dists, p_gens assignments start from i=1 (DON'T FORGET!)
        vocab_dists, hidden, attn_dists_tmp, p_gen, coverage = self.decoder(inp, hidden, enc_outputs, enc_lens, coverage)
        attn_dists[:, i, :] = attn_dists_tmp.squeeze(1)
        coverages[:, i, :] = coverage.squeeze(1)
        # vocab_dists: [batch_size, 1, dec_vocab_size]  Note: this is the normalized probability
        # hidden: [1, batch_size, dec_hid_dim]
        # attn_dists_tmp: [batch_size, 1, enc_max_len]
        # p_gen: [batch_size, 1]
        # coverage: [batch_size, 1, enc_max_len]

        # ===================================================================
        # To compute the final dist in the pointer-generator network by extending the vocabulary
        vocab_dists_p = p_gen.unsqueeze(-1) * vocab_dists  # [batch_size, 1, dec_vocab_size]; note we want to maintain vocab_dists for teacher_forcing_ratio
        attn_dists_tmp = (1 - p_gen).unsqueeze(-1) * attn_dists_tmp  # [batch_size, 1, enc_max_len]; note we want to maintain attn_dists for later use
        extra_zeros = torch.zeros((batch_size, 1, max_art_oovs)).to(self.device)
        vocab_dists_extended = torch.cat((vocab_dists_p, extra_zeros), dim=2)  # [batch_size, 1, extended_vsize]

        attn_dists_projected = torch.zeros((batch_size, 1, extended_vsize)).to(self.device)
        indices = enc_batch_extend_vocab.clone().unsqueeze(1)  # [batch_size, 1, enc_max_size]
        attn_dists_projected = attn_dists_projected.scatter(2, indices, attn_dists_tmp)
        # We need this, otherwise we would modify a leaf Variable inplace
        # attn_dists_projected_clone = attn_dists_projected.clone()
        # attn_dists_projected_clone.scatter_(2, indices, attn_dists_tmp)  # this will project the attention weights
        # attn_dists_projected.scatter_(2, indices, attn_dists_tmp)

        final_dists[:, i, :] = vocab_dists_extended.squeeze(1) + attn_dists_projected.squeeze(1)
        # ===================================================================
        # teacher forcing: whether to use the predicted or the decoder sequence label
        if random.random() < teacher_forcing_ratio:
            inp = self.emb_dropout(self.embedding(dec_batch[:, i]))
        else:
            inp = self.emb_dropout(self.embedding(vocab_dists.squeeze(1).argmax(1)))

If I remove the for loop and just do one step of updating attn_dists[:, 1, :] etc., with a toy loss on the outputs returned by forward, it works fine. Does anyone have any idea what is wrong here? I don't see an inplace operation here. Many thanks!
From looking at your code, the problem likely comes from the following lines:

attn_dists[:, i, :] = attn_dists_tmp.squeeze(1)
coverages[:, i, :] = coverage.squeeze(1)

You are performing an in-place operation that conflicts with the graph created by PyTorch for backprop. It should be solved by concatenating the new info at every loop iteration (you may run out of memory very soon!). Since attn_dists_tmp and coverage already have shape [batch_size, 1, enc_max_len], they can be concatenated directly along dim=1, without the squeeze:

attn_dists = torch.cat((attn_dists, attn_dists_tmp), dim=1)
coverages = torch.cat((coverages, coverage), dim=1)

You should change their initialization as well (e.g. to a tensor with size 0 along dim=1); otherwise you will end up with a tensor twice the size you were accounting for.
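An equivalent pattern, sketched under the same shape assumptions, is to collect the per-step tensors in a plain Python list and concatenate once after the loop; this avoids both the in-place writes and the repeated reallocation:

attn_list, cov_list = [], []
for i in range(1, dec_max_len):
    vocab_dists, hidden, attn_dists_tmp, p_gen, coverage = self.decoder(inp, hidden, enc_outputs, enc_lens, coverage)
    attn_list.append(attn_dists_tmp)  # each entry: [batch_size, 1, enc_max_len]
    cov_list.append(coverage)
    # ... rest of the loop body unchanged ...
attn_dists = torch.cat(attn_list, dim=1)  # [batch_size, dec_max_len-1, enc_max_len]
coverages = torch.cat(cov_list, dim=1)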
https://stackoverflow.com/questions/62910872/
How to initialise the weights for a specific task, and backpropagation modification
My model is used to predict values by minimising a loss function L. But the loss function doesn't have a single point of global minimum; rather, there is a large number of points where it achieves its global minimum. The model is set up like this:

- The model input is an [n x n] tensor (say inp = [[i_11, i_12, ..., i_1n], [i_21, i_22, ..., i_2n], ..., [i_n1, i_n2, ..., i_nn]]) and the model output is an [n x 1] tensor (say out1 = [o_1, o_2, ..., o_n]).
- The output tensor out1 is passed through a function f to get out2 (say f(o_1, o_2, ..., o_n) = [O_1, O_2, ..., O_n]).
- These two values (out1 and out2) are compared with MSELoss, i.e. Loss = ||out1 - out2||.

Now, there are a lot of values of [o_1, o_2, ..., o_n] for which the Loss reaches its minimum. But I want the values of [o_1, o_2, ..., o_n] for which |o_1| + |o_2| + ... + |o_n| is maximum.

Right now, the weights are initialised randomly:

self.weight = torch.nn.parameter.Parameter(torch.FloatTensor(in_features, out_features))

for some values of in_features and out_features. But by doing this, I am getting the values of [o_1, o_2, ..., o_n] for which |o_1| + |o_2| + ... + |o_n| is minimum.

I know this problem can be solved without using deep learning, but I am trying to get results like this for some task computation. Is there a way to change this to get the largest values predicted at the output of the neural net? Or is there any other technique (a backpropagation change) to get the desired largest-valued output? Thanks in advance.

EDIT 1: Based on the answer, out1 = [o_1, o_2, ..., o_n] is tending to a zero-valued tensor. In the initial epochs, out2 = [O_1, O_2, ..., O_n] takes very large values, but subsequently comes down to lower values. The code snippet below gives the idea:

import time

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np

class Model(nn.Module):
    def __init__(self, inp_l, hid_l, out_l=1):
        super(Model, self).__init__()
        self.lay1 = nn.Linear(inp_l, hid_l)
        self.lay2 = nn.Linear(hid_l, out_l)
        self.dp = nn.Dropout(p=0.5)

    def forward(self, inp):
        self.out1 = torch.tensor([]).float()
        for row in range(inp.shape[0]):
            y = self.lay1(inp[row])
            y = F.relu(y)
            y = self.dp(y.float())
            y = self.lay2(y)
            y = F.relu(y)
            self.out1 = torch.cat((self.out1, y))
        return self.out1.view(inp.shape[0], -1)

def function_f(inp, out1):
    '''
    Some functional computation is done to return out2.
    '''
    return out2

def train_model(epoch):
    model.train()
    t = time.time()
    optimizer.zero_grad()
    out1 = model(inp)
    out2 = function_f(inp, out1)
    loss1 = ((out1 - out2)**2).mean()
    loss2 = -out1.abs().mean()
    loss_train = loss1 + loss2
    loss_train.backward(retain_graph=True)
    optimizer.step()
    if epoch % 40 == 0:
        print('Epoch: {:04d}'.format(epoch + 1),
              'loss_train: {:.4f}'.format(loss_train.item()),
              'time: {:.4f}s'.format(time.time() - t))

model = Model(inp_l=10, hid_l=5, out_l=1)
optimizer = optim.Adam(model.parameters(), lr=0.001)
inp = torch.randint(100, (10, 10)).float()

for ep in range(100):
    train_model(ep)

But the out1 value goes to the trivial solution, i.e. the zero-valued tensor, which is the minimum-valued solution. As mentioned before the EDIT, I want the max-valued solution. Thank you.
I am not sure I fully understand what you want. Your weight initialization is overly complicated as well; you may just do:

self.weight = torch.nn.Linear(in_features, out_features)

If you want the largest value over a batch of inputs you may simply do:

y = self.weight(x)
return y.max(dim=0)[0]

But I am not entirely sure that is what you meant with your question.

EDIT: It seems you have two objectives. The first thing I would try is to convert both of them into losses to be minimized by the optimizer:

loss1 = MSE(out1, out2)
loss2 = -out1.abs().mean()
loss = loss1 + loss2

Minimizing loss will simultaneously minimize the MSE between out1 and out2 and maximize the absolute values of out1 (minimizing -out1.abs().mean() is the same as maximizing out1.abs().mean()). Notice that it is possible your neural net will just create large biases and zero out the weights as a lazy solution to the objective. You may turn off biases to avoid that problem, but I would still expect some other training problems.
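If one term overwhelms the other (for instance collapsing out1 to zero, as in the question's EDIT), a common pattern is to weight the two objectives; the coefficient below is a made-up knob to tune, not something prescribed above:

alpha = 0.1  # hypothetical trade-off weight; raise it to push |out1| up harder
loss = loss1 + alpha * loss2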
https://stackoverflow.com/questions/62911421/
Why did this CUDA error occur in PyTorch?
While building an RNN model, I ran into the error below. The following is part of my code:

class RNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.embedding = nn.Embedding(emb_num, emb_size)
        self.dropout1 = nn.Dropout(dropout_rate)
        self.LSTM = nn.LSTM(50, 128, 1, bidirectional=True)
        self.dropout2 = nn.Dropout(dropout_rate)
        self.full_connect = nn.Linear(256, 5)  # biLSTM state * 2

    def forward(self, x):
        x = self.embedding(x)
        x = x.permute(1, 0, 2)
        x = self.dropout1(x)
        _, (hn, cn) = self.LSTM(x)
        out = self.dropout2(hn)
        # print(out.shape)
        out = torch.cat([out[i, :, :] for i in range(2)], 1)
        out = out.squeeze()
        out = self.full_connect(out)
        return out

def train():
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate, weight_decay=0.001)
    Loss = nn.CrossEntropyLoss()
    for epoch in range(epochs):
        model.train()
        max_acc = 0
        print('epoch:{}'.format(epoch + 1))
        for i, data in enumerate(trainloader, 0):
            X_train, y_train = data
            optimizer.zero_grad()
            X_train = X_train.long().to(device)
            y_train = y_train.long().to(device)
            output = model(X_train)
            loss = Loss(output, y_train)
            loss.backward()
            optimizer.step()
        print('loss:{:3f}'.format(loss))
        model.eval()
        acc = valid(validloader)
        print('epoch:{} acc:{}'.format(epoch + 1, acc))
        if epoch + 1 == 50:
            torch.save(model.state_dict(), 'epoch50.pt')
        if acc > max_acc:
            max_acc = acc
            torch.save(model.state_dict(), 'max_acc model.pt')
    torch.save(model.state_dict(), 'final model.pt')

def valid(dataloader):
    correct = 0
    total = 0
    with torch.no_grad():
        for i, data in enumerate(dataloader, 0):
            X_train, y_train = data
            #optimizer.zero_grad()
            X_train = X_train.long().to(device)
            y_train = y_train.long().to(device)
            output = model(X_train)
            #loss = Loss(output, y_train)
            #loss.backward()
            #optimizer.step()
            correct += (torch.argmax(output, dim=1) == y_train).sum().item()
            total += y_train.shape[0]
    return correct / total

In the code above, I created a dev set to test the model during training, but after 4 or more epochs this error occurred:

Traceback (most recent call last):
  File "c:\Users\hhhh\Desktop\NLP-beginner\task2\task2.py", line 287, in <module>
    train()
  File "c:\Users\hhhh\Desktop\NLP-beginner\task2\task2.py", line 185, in train
    acc = valid(validloader)
  File "c:\Users\hhhh\Desktop\NLP-beginner\task2\task2.py", line 207, in valid
    correct += (torch.argmax(output, dim=1) == y_train).sum().item()
RuntimeError: CUDA error: unspecified launch failure

I have tried switching to the CPU to train the model, but the training speed is too slow, even for 1 epoch. Is my computer's configuration simply not enough to run this?
To check whether your system has CUDA or not, guard the device selection and fall back to the CPU when it is unavailable:

from torch import device
from torch.cuda import is_available

def main():
    # args.no_cuda is assumed to be a command-line flag defined elsewhere
    use_cuda = not args.no_cuda and is_available()
    dev = device("cuda" if use_cuda else "cpu")
    model = RNN().to(device=dev)
    # Call the train and valid methods below

if __name__ == '__main__':
    main()
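If your script has no args object, a simpler version of the same check, with no command-line flags, could be:

import torch

dev = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = RNN().to(dev)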
https://stackoverflow.com/questions/62911946/