Columns: instruction (string, 13 to 150 chars), input (string, 36 to 29.1k chars), output (string, 31 to 29.5k chars), source (string, 45 chars)
How do I add an image title to tensorboardX?
I am currently using tensorboardX to visualize input images while training a ResNet image classifier. Is there a way to add the image title along with the added image? I would like to have the image name (as stored in the dataset) displayed below the image in the tensorboard display. So far I have tried passing a comment parameter into my tensorboard writer, which does not seem to do the job. Currently, the relevant lines of my code are: pretrain_train_writer = SummaryWriter('log/pretrain_train') img_grid = vutils.make_grid(inputs[tp_idx_0], normalize=True, scale_each=True, nrow=8) pretrain_val_writer.add_image('true_positive_class_0', img_grid, global_step=epoch, comment = img_path)
There is no way of doing it directly with tensorboard; instead you have to create images with titles using matplotlib and then supply them to tensorboard. Here is a sample code from the tensorboard documentation:

def plot_to_image(figure):
  """Converts the matplotlib plot specified by 'figure' to a PNG image and
  returns it. The supplied figure is closed and inaccessible after this call."""
  # Save the plot to a PNG in memory.
  buf = io.BytesIO()
  plt.savefig(buf, format='png')
  # Closing the figure prevents it from being displayed directly inside
  # the notebook.
  plt.close(figure)
  buf.seek(0)
  # Convert PNG buffer to TF image
  image = tf.image.decode_png(buf.getvalue(), channels=4)
  # Add the batch dimension
  image = tf.expand_dims(image, 0)
  return image

def image_grid():
  """Return a 5x5 grid of the MNIST images as a matplotlib figure."""
  # Create a figure to contain the plot.
  figure = plt.figure(figsize=(10, 10))
  for i in range(25):
    # Start next subplot.
    plt.subplot(5, 5, i + 1, title=class_names[train_labels[i]])
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(train_images[i], cmap=plt.cm.binary)
  return figure

# Prepare the plot
figure = image_grid()
# Convert to image and log
with file_writer.as_default():
  tf.summary.image("Training data", plot_to_image(figure), step=0)

Here is the link to the doc: https://www.tensorflow.org/tensorboard/image_summaries
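If you want to stay within tensorboardX/PyTorch rather than switching to TensorFlow summaries, the same matplotlib trick works because SummaryWriter has an add_figure method that renders a figure as an image. A minimal sketch, assuming inputs[tp_idx_0] is a batch of CHW tensors (from the question) and img_paths is a list of the corresponding file names (an assumption, not in the original code):

import matplotlib.pyplot as plt
from tensorboardX import SummaryWriter

def make_titled_grid(images, titles, nrow=8):
    # One subplot per image, with the image name as the subplot title
    fig = plt.figure(figsize=(12, 12))
    rows = (len(images) + nrow - 1) // nrow
    for i, (img, title) in enumerate(zip(images, titles)):
        ax = fig.add_subplot(rows, nrow, i + 1)
        ax.imshow(img.permute(1, 2, 0).cpu().numpy())  # CHW -> HWC for matplotlib
        ax.set_title(title, fontsize=6)
        ax.axis('off')
    return fig

pretrain_val_writer = SummaryWriter('log/pretrain_val')
fig = make_titled_grid(inputs[tp_idx_0], img_paths, nrow=8)
pretrain_val_writer.add_figure('true_positive_class_0', fig, global_step=epoch)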
https://stackoverflow.com/questions/60907358/
Why PyTorch nn.Module.cuda() not moving Module tensor but only parameters and buffers to GPU?
nn.Module.cuda() moves all model parameters and buffers to the GPU. But why not the model member tensor? class ToyModule(torch.nn.Module): def __init__(self) -> None: super(ToyModule, self).__init__() self.layer = torch.nn.Linear(2, 2) self.expected_moved_cuda_tensor = torch.tensor([0, 2, 3]) def forward(self, input: torch.Tensor) -> torch.Tensor: return self.layer(input) toy_module = ToyModule() toy_module.cuda() next(toy_module.layer.parameters()).device >>> device(type='cuda', index=0) for the model member tensor, the device stays unchanged. >>> toy_module.expected_moved_cuda_tensor.device device(type='cpu')
If you define a tensor inside the module it needs to be registered as either a parameter or a buffer so that the module is aware of it. Parameters are tensors that are to be trained and will be returned by model.parameters(). They are easy to register: all you need to do is wrap the tensor in the nn.Parameter type and it will be automatically registered. Note that only floating point tensors can be parameters.

class ToyModule(torch.nn.Module):
    def __init__(self) -> None:
        super(ToyModule, self).__init__()
        self.layer = torch.nn.Linear(2, 2)
        # registering expected_moved_cuda_tensor as a trainable parameter
        self.expected_moved_cuda_tensor = torch.nn.Parameter(torch.tensor([0., 2., 3.]))

    def forward(self, input: torch.Tensor) -> torch.Tensor:
        return self.layer(input)

Buffers are tensors that will be registered in the module so methods like .cuda() will affect them, but they will not be returned by model.parameters(). Buffers are not restricted to a particular data type.

class ToyModule(torch.nn.Module):
    def __init__(self) -> None:
        super(ToyModule, self).__init__()
        self.layer = torch.nn.Linear(2, 2)
        # registering expected_moved_cuda_tensor as a buffer
        # Note: this creates a new member variable named expected_moved_cuda_tensor
        self.register_buffer('expected_moved_cuda_tensor', torch.tensor([0, 2, 3]))

    def forward(self, input: torch.Tensor) -> torch.Tensor:
        return self.layer(input)

In both of the above cases the following code behaves the same:

>>> toy_module = ToyModule()
>>> toy_module.cuda()
>>> next(toy_module.layer.parameters()).device
device(type='cuda', index=0)
>>> toy_module.expected_moved_cuda_tensor.device
device(type='cuda', index=0)
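As a quick check (not part of the original answer), you can see what ended up registered where: parameters appear in named_parameters(), buffers in named_buffers(), and both follow the module across devices.

toy_module = ToyModule()
print([name for name, _ in toy_module.named_parameters()])  # e.g. ['layer.weight', 'layer.bias', ...]
print([name for name, _ in toy_module.named_buffers()])     # ['expected_moved_cuda_tensor'] in the buffer version

toy_module.to('cuda')  # equivalent to .cuda(); moves parameters and buffers alike
print(toy_module.expected_moved_cuda_tensor.device)         # device(type='cuda', index=0)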
https://stackoverflow.com/questions/60908827/
Error trying to convert simple convolutional model to CoreML
I'm trying to convert a simple GAN generator (from ClusterGAN): self.name = 'generator' self.latent_dim = latent_dim self.n_c = n_c self.x_shape = x_shape self.ishape = (128, 7, 7) self.iels = int(np.prod(self.ishape)) self.verbose = verbose self.model = nn.Sequential( # Fully connected layers torch.nn.Linear(self.latent_dim + self.n_c, 1024), nn.BatchNorm1d(1024), nn.LeakyReLU(0.2, inplace=True), torch.nn.Linear(1024, self.iels), nn.BatchNorm1d(self.iels), nn.LeakyReLU(0.2, inplace=True), # Reshape to 128 x (7x7) Reshape(self.ishape), # Upconvolution layers nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1, bias=True), nn.BatchNorm2d(64), nn.LeakyReLU(0.2, inplace=True), nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1, bias=True), nn.Sigmoid() ) But onnx-coreml fails with Error while converting op of type: BatchNormalization. Error message: provided number axes 2 not supported I thought it was the BatchNorm2d, so I tried reshaping and applying BatchNorm1d, but I get the same error. Any thoughts? I'm very surprised that I'm having problems converting such a simple model, so I'm assuming that I must be missing something obvious. I'm targeting iOS 13 and using Opset v10 for the onnx conversion.
Core ML does not have 1-dimensional batch norm. The tensor must have at least rank 3. If you want to convert this model, you should fold the batch norm weights into those of the preceding layer and remove the batch norm layer. (I don't think PyTorch has a way to automatically do this for you.)
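As a sketch of the folding described above (this is the standard algebra for an eval-mode BatchNorm2d following a convolution, not an official converter; for ConvTranspose2d the output-channel axis of the weight is dimension 1, so the reshape of the scale would change accordingly):

import torch

def fold_bn_into_conv(conv, bn):
    # y = gamma * (conv(x) - mean) / sqrt(var + eps) + beta  ==>  scaled weights + adjusted bias
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    conv.weight.data = conv.weight.data * scale.reshape(-1, 1, 1, 1)   # scale each output channel
    bias = conv.bias.data if conv.bias is not None else torch.zeros_like(bn.running_mean)
    conv.bias = torch.nn.Parameter((bias - bn.running_mean) * scale + bn.bias)
    return conv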
https://stackoverflow.com/questions/60917399/
Pytorch cuda get_device_name and current_device() hang and are killed?
I've just installed a new GPU (RTX 2070) in my machine alongside the old GPU. I wanted to see if PyTorch picked it up, so following the instructions here: How to check if pytorch is using the GPU?, I ran the following commands (Python 3.6.9, Linux Mint Tricia 19.3):

>>> import torch
>>> torch.cuda.is_available()
True
>>> torch.cuda.current_device()
Killed
>>> torch.cuda.get_device_name(0)
Killed

Both of the Killed processes took some time and one of them froze the machine for half a minute or so. Does anyone have any experience with this? Are there some setup steps I'm missing?
If I understand correctly, you would like to list the available cuda devices. This can be done via nvidia-smi (not a PyTorch function), and both your old GPU and the RTX 2070 should show up, as devices 0 and 1. In PyTorch, if you want to pass data to one specific device, you can do device = torch.device("cuda:0") for GPU 0 and device = torch.device("cuda:1") for GPU 1. While running, you can do nvidia-smi to check the memory usage & running processes for each GPU.
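A minimal sketch of selecting a specific GPU from PyTorch once both cards are visible (the device indices are assumptions; the ordering depends on your system):

import torch

print(torch.cuda.device_count())    # number of visible GPUs, should be 2 here
device0 = torch.device("cuda:0")
device1 = torch.device("cuda:1")
x = torch.randn(8, 8).to(device1)   # place a tensor on GPU 1
print(x.device)                     # device(type='cuda', index=1)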
https://stackoverflow.com/questions/60917618/
KeyError: 'answers' error when using BioASQ dataset using Huggingface Transformers
I am using run_squad.py https://github.com/huggingface/transformers/blob/master/examples/run_squad.py from Huggingface Transformers for fine-tuning on BioASQ Question Answering dataset. I have converted the tensorflow weights provided by the authors of BioBERT https://github.com/dmis-lab/bioasq-biobert to Pytorch as discussed here https://github.com/huggingface/transformers/issues/312. Further, I am using the preprocessed data of BioASQ https://github.com/dmis-lab/bioasq-biobert which is converted to the SQuAD form. However, when I am running the run_squad.py script with the below parameters --model_type bert \ --model_name_or_path /scratch/oe7/uk1594/BioBERT/BioBERT-PyTorch/BioBERTv1.1-SQuADv1.1-Factoid-PyTorch/ \ --do_train \ --do_eval \ --save_steps 1000 \ --train_file $data/BioASQ-train-factoid-6b.json \ --predict_file $data/BioASQ-test-factoid-6b-1.json \ --per_gpu_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2.0 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /scratch/oe7/uk1594/BioBERT/BioBERT-PyTorch/QA_output_squad/BioASQ-factoid-6b/BioASQ-factoid-6b-1-issue-23mar/ I get the below error: 03/23/2020 12:53:12 - INFO - transformers.modeling_utils - loading weights file /scratch/oe7/uk1594/BioBERT/BioBERT-PyTorch/QA_output_squad/BioASQ-factoid-6b/BioASQ-factoid-6b-1-issue-23mar/pytorch_model.bin 03/23/2020 12:53:15 - INFO - __main__ - Creating features from dataset file at . 0%| | 0/1 [00:00<?, ?it/s] 0%| | 0/1 [00:00<?, ?it/s] Traceback (most recent call last): File "run_squad.py", line 856, in <module> main() File "run_squad.py", line 845, in main result = evaluate(args, model, tokenizer, prefix=global_step) File "run_squad.py", line 299, in evaluate dataset, examples, features = load_and_cache_examples(args, tokenizer, evaluate=True, output_examples=True) File "run_squad.py", line 475, in load_and_cache_examples examples = processor.get_dev_examples(args.data_dir, filename=args.predict_file) File "/scratch/oe7/uk1594/lib/python3.7/site-packages/transformers/data/processors/squad.py", line 522, in get_dev_examples return self._create_examples(input_data, "dev") File "/scratch/oe7/uk1594/lib/python3.7/site-packages/transformers/data/processors/squad.py", line 549, in _create_examples answers = qa["answers"] KeyError: 'answers' Really appreciate your help. Thanks a lot for your guidance. The evaluaton dataset is looks like this: { "version": "BioASQ6b", "data": [ { "title": "BioASQ6b", "paragraphs": [ { "context": "emMAW: computing minimal absent words in external memory. Motivation: The biological significance of minimal absent words has been investigated in genomes of organisms from all domains of life. For instance, three minimal absent words of the human genome were found in Ebola virus genomes", "qas": [ { "question": "Which algorithm is available for computing minimal absent words using external memory?", "id": "5a6a3335b750ff4455000025_000" } ] } ] } ] }
The BioASQ evaluation files are test files that don't contain answers; they are only used for predictions. For evaluation during training you can use a portion of the training files.
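A rough sketch of carving a dev split out of a SQuAD-format training file for that purpose (the file names come from the question; the 10% split and output names are arbitrary assumptions):

import json
import random

with open('BioASQ-train-factoid-6b.json') as f:
    squad = json.load(f)

random.seed(0)
random.shuffle(squad['data'])
n_dev = max(1, len(squad['data']) // 10)   # hold out roughly 10% of the articles
dev = {'version': squad['version'], 'data': squad['data'][:n_dev]}
train = {'version': squad['version'], 'data': squad['data'][n_dev:]}

with open('BioASQ-train-factoid-6b-split.json', 'w') as f:
    json.dump(train, f)
with open('BioASQ-dev-factoid-6b-split.json', 'w') as f:
    json.dump(dev, f)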
https://stackoverflow.com/questions/60942088/
BERT training with character embeddings
Does it make sense to change the tokenization paradigm in the BERT model, to something else? Maybe just a simple word tokenization or character level tokenization?
That is one motivation behind the paper "CharacterBERT: Reconciling ELMo and BERT for Word-Level Open-Vocabulary Representations From Characters" where BERT's wordpiece system is discarded and replaced with a CharacterCNN (just like in ELMo). This way, a word-level tokenization can be used without any OOV issues (since the model attends to each token's characters) and the model produces a single embedding for any arbitrary input token. Performance-wise, the paper shows that CharacterBERT is generally at least as good as BERT while at the same time being more robust to noisy texts.
https://stackoverflow.com/questions/60942550/
How to install torch 0.4.1 on Windows 10?
I have windows 10 on a Lenovo Thinkpad P72 with a Nvidia Quadro P5200, and I absolutely need to install (py)torch v0.4.1 to use a 3D Mask R-CNN. So I tried the following link: https://github.com/pytorch/pytorch/issues/19457 However, when I finish with "python setup.py install", I obtain: C:\Users\...\pytorch-0.4.1\build>msbuild INSTALL.vcxproj /p:Configuration=Release Microsoft (R) Build Engine, version 4.8.3752.0 [Microsoft .NET Framework, Version 4.0.30319.42000] Copyright (C) Microsoft Corporation. Tous droits réservés. La génération a démarré 31/03/2020 07:03:00. Projet "C:\Users\...\pytorch-0.4.1\build\INSTALL.vcxproj" sur le noud 1 (cibles par défaut). C:\Users\...\pytorch-0.4.1\build\INSTALL.vcxproj(32,3): error MSB4019: Le projet importé "C:\Microsoft.Cpp.Default.props" est introuvable. Vérifiez que le chemin d'accès dans la déclaration <Import> est correct et que le fichier existe sur le disque. Génération du projet "C:\Users\...\pytorch-0.4.1\build\INSTALL.vcxproj" terminée (cibles par défaut) -- ÉCHEC. ÉCHEC de la build. "C:\Users\...\pytorch-0.4.1\build\INSTALL.vcxproj" (cible par défaut) (1) -> C:\Users\...\pytorch-0.4.1\build\INSTALL.vcxproj(32,3): error MSB4019: Le projet importé "C:\Microsoft.Cpp.Default.props" est introuvable. Vérifiez que le chemin d'accès dans la déclaration <Import> est correct et que le fichier existe sur le disque. 0 Avertissement(s) 1 Erreur(s) Temps écoulé 00:00:00.28 C:\Users\...\pytorch-0.4.1\build>IF ERRORLEVEL 1 exit 1 Failed to run 'tools\build_pytorch_libs.bat --use-cuda --use-nnpack caffe2 nanopb libshm_windows' Since I wasn't able to solve this issue, I copied all the missing files there, and then I obtained (even if C:\Microsoft.Build.CppTasks.Common.dll exists): ÉCHEC de la build. "C:\Users\...\pytorch-0.4.1\build\INSTALL.vcxproj" (cible par défaut) (1) -> "C:\Users\...\pytorch-0.4.1\build\ZERO_CHECK.vcxproj" (cible par défaut) (2) -> (SetBuildDefaultEnvironmentVariables cible) -> C:\Microsoft.Cpp.Current.targets(64,5): error MSB4062: Impossible de charger la tâche "SetEnv" à partir de l'assembly C:\Microsoft.Build.CppTasks.Common.dll. Impossible de charger le fichier ou l'assembly 'Microsoft.Build.Utilities.Core, Version=14.0.0.0, Culture=neutral, PublicKeyToken=...' ou une de ses dépendances. Le fichier spécifié est introuvable. Assurez-vous que la déclaration <UsingTask> est correcte, que l'assembly et toutes ses dépendances sont disponibles et que la tâche contient une classe publique qui implémente Microsoft.Build.Framework.ITask. [C:\Users\...\pytorch-0.4.1\build\ZERO_CHECK.vcxproj] Someone has an idea?
For pip:

pip install torch===0.4.1 torchvision===0.5.0 -f https://download.pytorch.org/whl/torch_stable.html
https://stackoverflow.com/questions/60944201/
Use a generator to perform operation on a matrix in Python
I have a similarity matrix (torch tensor) which is a cosine similarity matrix between two matrix (source and target). From the matrix I need to obtain the sum of the distance between the top nearest neighbor of each source and target. Then fillup two defaultdicts using the computed values above as shown in the code snippet below import torch from collections import defaultdict src2tgt = defaultdict(dict) tgt2src = defaultdict(dict) #similarity matrix between source and target matrix matx = torch.Tensor([[3,2,1,7],[1,1,0,8],[0,7,1,0],[2,0,0,0],[1,5,2,1]]) #the src and tgt src = torch.LongTensor([[1,1],[1,2],[1,3],[1,4],[1,5]]) tgt = torch.LongTensor([[2,1],[2,2],[2,3],[2,4]]) #the data above are dummy, in my actual code, they are generated by a process similarities=[] #I need a kinda of the sum of nearest neighbor k = 2 nearestSrc = torch.topk(matx, k, dim=1, largest=True, sorted=False, out=None) sumDistSource = torch.sum(nearestSrc[0], 1) nearestTgt = torch.topk(matx, k, dim=0, largest=True, sorted=False, out=None) sumDistTarget = torch.sum(nearestTgt[0], 0) #finally fill default dictionary of source2target and target2source for i in range(matx.shape[0]): for j in range(matx.shape[1]): src2tgt[src[i]][tgt[j]] = matx[i][j].tolist() / (sumDistSource[i].tolist() + sumDistTarget[j].tolist()) tgt2src[tgt[j]][src[i]] = matx[i][j].tolist() / (sumDistTarget[j].tolist() + sumDistSource[i].tolist()) similarities.append(matx[i][j].tolist() ) Is there a way I can optimize the above code, either using a generator without having to create nearestSrc, sumDistSource, nearestTgt, sumDistTarget explicitly thus requiring less memory? Or can I also reduce the double loop?
I don't think it's necessary to save memory here. Let's say the shape of matx is [n x m], then the nearestSrc/Tgt and sumDistSource/Target tensors will contain no more than 2 * (n + m), memory consumption of which is almost ignorable compared to matx. Besides, I don't think PyTorch provides an API to generate top-k elements on-the-fly, and it would be hard to implement a differentiable or GPU-optimized version of that. The double-loop can be optimized, although I'm a bit confused by what you're doing here. It seems that the values you're computing for src2tgt[src[i]][tgt[j]] and tgt2src[tgt[j]][src[i]] are exactly the same. Also, I don't think storing these in a nested dictionary is a good idea, for two reasons: src[i] and tgt[j] are floating-point tensors, and interestingly, the hash value for PyTorch tensors does not depend on tensor values. In fact, it is equivalent to the id function. See more discussions here. This means that two tensors with the same value would still be stored as different keys in the dictionary. Besides, it's also probably not a good idea to use floating-point numbers as keys, because testing equality for floating-point numbers often require special care. For more information on this topic, consider reading this very helpful blog. Taking values out of a tensor and storing them in another structure would prevent further optimizations. Operations on tensors can benefit from the highly-optimized PyTorch functions, which scales sublinearly on GPUs (i.e., a 2x increase in data size leads to a <2x increase in compute time). It's often more desirable to do stuff in tensor form as much as you can. Thus, we can optimize your code by first computing all the values you need to store in src2tgt by a batch tensor operation, and then storing them into the dictionary. norm = (sumDistSource.unsqueeze(-1).expand(-1, matx.size(1)) + sumDistTarget.unsqueeze(0).expand(matx.size(0), -1)) s2t = (matx / norm).tolist() src_vals = src.tolist() tgt_vals = tgt.tolist() for i, s in enumerate(src_vals): s = tuple(s) # convert to tuples because lists are not hashable for j, t in enumerate(tgt_vals): t = tuple(t) src2tgt[s][t] = s2t[i][j] tgt2src[t][s] = s2t[i][j] similarities = matx.flatten().tolist() However, the speedup would be small as long as you still use a dictionary. I would encourage you to consider other ways to store the data.
https://stackoverflow.com/questions/60949990/
Tensorflow and PyTorch hang on initializing with CUDA
When I try to run a very minimal Tensorflow example: import tensorflow as tf c = tf.constant([1,2,3]) The system hangs forever (at least for ten minutes) with no sign of what it is doing. It uses 100% of one virtual CPU core when in this state. When run in a Juypter notebook the kernel outputs this to the console: 2020-03-31 11:12:04.840507: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer.so.6'; dlerror: libnvinfer.so.6: cannot open shared object file: No such file or directory 2020-03-31 11:12:04.840576: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer_plugin.so.6'; dlerror: libnvinfer_plugin.so.6: cannot open shared object file: No such file or directory 2020-03-31 11:12:04.840589: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly. 2020-03-31 11:12:05.521172: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1 2020-03-31 11:12:05.539193: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-03-31 11:12:05.539639: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1 coreClock: 1.7845GHz coreCount: 15 deviceMemorySize: 7.93GiB deviceMemoryBandwidth: 238.66GiB/s 2020-03-31 11:12:05.539841: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-03-31 11:12:05.541113: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2020-03-31 11:12:05.542119: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10 2020-03-31 11:12:05.542324: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10 2020-03-31 11:12:05.543632: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10 2020-03-31 11:12:05.544401: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10 2020-03-31 11:12:05.547212: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2020-03-31 11:12:05.547337: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-03-31 11:12:05.548015: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-03-31 11:12:05.548512: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0 2020-03-31 11:12:05.567845: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3393550000 Hz 2020-03-31 11:12:05.568364: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x564107e16440 initialized for platform Host (this does not guarantee that XLA will be used). 
Devices: 2020-03-31 11:12:05.568395: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version I did have Tensorflow working previously on this system, so I think this might be some sort of library issue that got caused by a system update. The GPU is an Nvidia GTX 1070. Tensorflow version is 2.1.0, and hasn't changed since when it was working. Running Arch Linux, if that matters. I tried downgrading from CUDA 10.2 to 10.1, but the issue still occurs. I can also reproduce this with PyTorch: import torch import transformers t = torch.tensor([1,2,3]) t.cuda() (import transformers prevents a "CUDA: Out of memory" issue - there must be something it does that initializes PyTorch that I don't know how to do.) This has the same issue, where it freezes pegging one CPU core, though it produces less output: 020-03-31 11:13:41.428483: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer.so.6'; dlerror: libnvinfer.so.6: cannot open shared object file: No such file or directory 2020-03-31 11:13:41.428571: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer_plugin.so.6'; dlerror: libnvinfer_plugin.so.6: cannot open shared object file: No such file or directory 2020-03-31 11:13:41.428587: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly. I'm pretty sure the complaints about TensorRT are not relevant, because when I had this working previously it would output those as well. How can I resolve this issue? Or at least, what else can I do to determine what it is doing while frozen?
My issue was caused by a ulimit I had set on the amount of virtual memory the Python process was allowed to consume (with ulimit -Sv 12000000 in zsh). I don't know why that would cause it to hang, but if anyone else encounters a similar issue, make sure you aren't limiting virtual memory.
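If you want to check from inside Python whether such a limit is active (a small sketch, Linux only):

import resource

soft, hard = resource.getrlimit(resource.RLIMIT_AS)   # address-space (virtual memory) limit in bytes
print(soft, hard)                                     # resource.RLIM_INFINITY (-1) means no limit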
https://stackoverflow.com/questions/60954107/
Pytorch: multi-target error with CrossEntropyLoss
So I was training a Conv. Neural Network. Following are the essential details: original label dim = torch.Size([64, 1]) output from the net dim = torch.Size([64, 2]) loss type = nn.CrossEntropyLoss() error = RuntimeError: multi-target not supported at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:15 WHERE AM I WRONG..? training: EPOCHS = 5 LEARNING_RATE = 0.0001 BATCH_SIZE = 64 net = Net().to(device) optimizer = optim.Adam(net.parameters(), lr=LEARNING_RATE) loss_log = [] loss_log = train(net, trainSet, loss_log, EPOCHS, LEARNING_RATE, BATCH_SIZE) train function: def train(net, train_set, loss_log=[], EPOCHS=5, LEARNING_RATE=0.001, BATCH_SIZE=32): print('Initiating Training..') loss_func = nn.CrossEntropyLoss() # Iteration Begins for epoch in tqdm(range(EPOCHS)): # Iterate over every sample in the batch for data in tqdm(trainSet, desc=f'Iteration > {epoch+1}/{EPOCHS} : ', leave=False): x, y = data net.zero_grad() #Compute the output output, sm = net(x) # Compute Train Loss loss = loss_func(output, y.to(device)) # Backpropagate loss.backward() # Update Parameters optimizer.step() # LEARNING_RATE -= LEARNING_RATE*0.0005 loss_log.append(loss) lr_log.append(LEARNING_RATE) return loss_log, lr_log FULL ERROR: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-20-8deb9a27d3b4> in <module>() 13 14 total_epochs += EPOCHS ---> 15 loss_log = train(net, trainSet, loss_log, EPOCHS, LEARNING_RATE, BATCH_SIZE) 16 17 plt.plot(loss_log) 4 frames <ipython-input-9-59e1d2cf0c84> in train(net, train_set, loss_log, EPOCHS, LEARNING_RATE, BATCH_SIZE) 21 # Compute Train Loss 22 # print(output, y.to(device)) ---> 23 loss = loss_func(output, y.to(device)) 24 25 # Backpropagate /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 530 result = self._slow_forward(*input, **kwargs) 531 else: --> 532 result = self.forward(*input, **kwargs) 533 for hook in self._forward_hooks.values(): 534 hook_result = hook(self, input, result) /usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py in forward(self, input, target) 914 def forward(self, input, target): 915 return F.cross_entropy(input, target, weight=self.weight, --> 916 ignore_index=self.ignore_index, reduction=self.reduction) 917 918 /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction) 2019 if size_average is not None or reduce is not None: 2020 reduction = _Reduction.legacy_get_string(size_average, reduce) -> 2021 return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) 2022 2023 /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction) 1836 .format(input.size(0), target.size(0))) 1837 if dim == 2: -> 1838 ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) 1839 elif dim == 4: 1840 ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index) RuntimeError: multi-target not supported at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:15
The problem is that your target tensor is 2-dimensional ([64,1] instead of [64]), which makes PyTorch think that you have more than 1 ground truth label per data. This is easily fixed via loss_func(output, y.flatten().to(device)). Hope this helps!
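A tiny illustration of the shape requirement (not from the original answer; random data stands in for the model output):

import torch
import torch.nn as nn

loss_func = nn.CrossEntropyLoss()
output = torch.randn(64, 2)            # [batch, num_classes], like the net's output
y = torch.randint(0, 2, (64, 1))       # [64, 1] target -> "multi-target not supported"
# loss_func(output, y)                 # would raise the error from the question
loss = loss_func(output, y.flatten())  # [64] target -> works
print(loss.item())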
https://stackoverflow.com/questions/60961466/
Importance weighted autoencoder doing worse than VAE
I've been implementing VAE and IWAE models on the caltech silhouettes dataset and am having an issue where the VAE outperforms IWAE by a modest margin (test LL ~120 for VAE, ~133 for IWAE!). I don't believe this should be the case, according to both theory and experiments produced here. I'm hoping someone can find some issue in how I'm implementing that's causing this to be the case. The network I'm using to approximate q and p is the same as that detailed in the appendix of the paper above. The calculation part of the model is below: data_k_vec = data.repeat_interleave(K,0) # Generate K samples (in my case K=50 is producing this behavior) mu, log_std = model.encode(data_k_vec) z = model.reparameterize(mu, log_std) # z = mu + torch.exp(log_std)*epsilon (epsilon ~ N(0,1)) decoded = model.decode(z) # this is the sigmoid output of the model log_prior_z = torch.sum(-0.5 * z ** 2, 1)-.5*z.shape[1]*T.log(torch.tensor(2*np.pi)) log_q_z = compute_log_probability_gaussian(z, mu, log_std) # Definitions below log_p_x = compute_log_probability_bernoulli(decoded,data_k_vec) if model_type == 'iwae': log_w_matrix = (log_prior_z + log_p_x - log_q_z).view(-1, K) elif model_type =='vae': log_w_matrix = (log_prior_z + log_p_x - log_q_z).view(-1, 1)*1/K log_w_minus_max = log_w_matrix - torch.max(log_w_matrix, 1, keepdim=True)[0] ws_matrix = torch.exp(log_w_minus_max) ws_norm = ws_matrix / torch.sum(ws_matrix, 1, keepdim=True) ws_sum_per_datapoint = torch.sum(log_w_matrix * ws_norm, 1) loss = -torch.sum(ws_sum_per_datapoint) # value of loss that gets returned to training function. loss.backward() will get called on this value Here are the likelihood functions. I had to fuss with the bernoulli LL in order to not get nan during training def compute_log_probability_gaussian(obs, mu, logstd, axis=1): return torch.sum(-0.5 * ((obs-mu) / torch.exp(logstd)) ** 2 - logstd, axis)-.5*obs.shape[1]*T.log(torch.tensor(2*np.pi)) def compute_log_probability_bernoulli(theta, obs, axis=1): # Add 1e-18 to avoid nan appearances in training return torch.sum(obs*torch.log(theta+1e-18) + (1-obs)*torch.log(1-theta+1e-18), axis) In this code there's a "shortcut" being used in that the row-wise importance weights are being calculated in the model_type=='iwae' case for the K=50 samples in each row, while in the model_type=='vae' case the importance weights are being calculated for the single value left in each row, so that it just ends up calculating a weight of 1. Maybe this is the issue? Any and all help is huge - I thought that addressing the nan issue would permanently get me out of the weeds but now I have this new problem. EDIT: Should add that the training scheme is the same as that in the paper linked above. That is, for each of i=0....7 rounds train for 2**i epochs with a learning rate of 1e-4 * 10**(-i/7)
The K-sample importance weighted ELBO is

$$ \textrm{IW-ELBO}(x,K) = \log \frac{1}{K} \sum_{k=1}^K \frac{p(x \vert z_k) p(z_k)}{q(z_k;x)}$$

For the IWAE there are K samples originating from each datapoint x, so you want to have the same latent statistics mu_z, Sigma_z obtained through the amortized inference network, but sample z multiple (K) times for each x. So it's computationally wasteful to compute the forward pass for data_k_vec = data.repeat_interleave(K,0); you should compute the forward pass once for each original datapoint, then repeat the statistics output by the inference network for sampling:

mu = torch.repeat_interleave(mu, K, 0)
log_std = torch.repeat_interleave(log_std, K, 0)

Then sample z_k. And now repeat your datapoints data_k_vec = data.repeat_interleave(K,0), and use the resulting tensor to efficiently evaluate the conditional p(x | z_k) for each importance sample z_k. Note you may also want to use the logsumexp operation when calculating the IW-ELBO for numerical stability. I can't quite figure out what's going on with the log_w_matrix calculation in your post, but this is what I would do:

log_pz = ...
log_qzCx = ...
log_pxCz = ...

log_iw = log_pxCz + log_pz - log_qzCx
log_iw = log_iw.reshape(-1, K)
iwelbo = torch.logsumexp(log_iw, dim=1) - np.log(K)

EDIT: Actually, after thinking about it a bit and using the score function identity, you can interpret the IWAE gradient as an importance weighted estimate of the standard single-sample gradient, so the method in the OP for calculation of the importance weights is equivalent (if a bit wasteful), provided you place a stop_gradient operator around the normalized importance weights, which you call ws_norm. So the main problem is the absence of this stop_gradient operator.
https://stackoverflow.com/questions/60974047/
optimized_execution() takes 1 positional argument but 2 were given
I'm following the pytorch sagemaker docs here and I'm stuck on this line torch.jit.optimized_execution(True, {'target_device': 'eia:device ordinal'}) When I run it, I get the error optimized_execution() takes 1 positional argument but 2 were given. I'm using pytorch 1.3.1, but I tried with 1.4.0 and was running into similar problems. Can I use optimized execution without this second argument? How can I specify the accelerator?
(I'll refer to the Elastic Inference enabled PyTorch framework as "PyTorch-EI" for convenience) Are you using SageMaker through notebook or hosting? SageMaker notebook support is not currently released, so there's no official notebook kernel / Conda environment that you can activate that will have the Elastic Inference enabled PyTorch framework. You could however create your own by activating the pytorch_p36 environment (which has standard PyTorch), uninstalling PyTorch, and then installing using the PyTorch-EI 1.3.1 framework wheel - linked from here. SageMaker hosting does currently support PyTorch-EI out of the box. If you're currently using SageMaker hosting and having issues, share some of your inference code + what container you're using. List of containers can be found at here. Also note that EC2 currently supports Elastic Inference through the DLAMI v27.0. The Conda environment name is amazonei_pytorch_p36.
https://stackoverflow.com/questions/60981262/
Reading h5py files into tensors
So I have a training set and a test set both in h5py format. I also have a data_load function that loads the files and returns NumPy arrays. The main problem is I don't need NumPy as I am working with Tensors. I am expecting to have an x&y tensor of size N(batch size) and D_in(input size for each image) and D_out(Output size of each tensor). The problem: x&y do not get converted to tensors of dimensions mentioned below.If anything their types remain to be numpy.ndarray. Any help is appreciated. def load_data(train_file, test_file): # Load the training data train_dataset =h5py.File(train_file, 'r') # Separate features(x) and labels(y) for training set train_set_x_orig =np.array(train_dataset["train_set_x"][:]) train_set_y_orig =np.array(train_dataset["train_set_y"][:]) # Load the test data test_dataset =h5py.File(test_file,'r') # Separate features(x) and labels(y) for training set test_set_x_orig =np.array(test_dataset["test_set_x"][:]) test_set_y_orig =np.array(test_dataset["test_set_y"][:]) classes = np.array(test_dataset["list_classes"][:]) # the list of classes train_set_y_orig = torch.from_numpy(train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))) test_set_y_orig = torch.from_numpy(test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))) return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes x = torch.Tensor(N, D_in) y = torch.Tensor(N, D_out) train_file="data/train_catvnoncat.h5" test_file="data/test_catvnoncat.h5" x,y,_,_,_=load_data(train_file,test_file)
Because you did not convert train_set_x_orig to a torch tensor before returning. Either use torch.from_numpy() on train_set_x_orig before returning as you do with train_set_y_orig or cast it to a tensor before assigning to x. However, y should be of type torch.tensor. Below is a demonstration that explains the issue: # some sample tensor In [27]: x = torch.Tensor(3, 2) # check its type In [28]: type(x) Out[28]: torch.Tensor # some sample ndarray In [29]: arrx = np.arange(6).reshape(3, -1) # assign array to tensor # note that now the object `x` refers to the numpy array object In [30]: x = arrx # see that the type() of `x` is now numpy ndarray In [31]: type(x) Out[31]: numpy.ndarray Also, as hpaulj pointed out in the comments, there is no need to wrap the sliced objects from h5py in np.array() since the sliced objects are already of type numpy ndarrays. So, you can just get rid of them and the code will look more cleaner!
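For example, the feature arrays could be converted the same way as the labels inside load_data (a sketch of just the changed lines; the names come from the question):

# h5py slicing already yields numpy ndarrays, so no np.array() wrapper is needed
train_set_x_orig = torch.from_numpy(train_dataset["train_set_x"][:])
test_set_x_orig = torch.from_numpy(test_dataset["test_set_x"][:])
print(type(train_set_x_orig))   # <class 'torch.Tensor'>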
https://stackoverflow.com/questions/60993802/
How to optimize pip imports for Dockerfile layers caching
I have a Dockerfile for ML/DL stack that needs a lot of requirements that could be logically split into python standard libraries and python ml libraries at least: Python libraries (requirements.txt): Cython python-dateutil==2.8.0 setuptools>=41.0.0 progressbar2 argparse smart_open backoff boto3 botocore google protobuf tornado==5.1.1 Python ml libraries (requirements.lib.txt): numpy==1.15.1 intel-numpy matplotlib pandas scipy==1.2.1 scikit-learn==0.21.3 torch tensorflow==1.14.0 keras==2.1.1 Now when I build my Docker image it turns out that the whole image size is ~5-7GB. The other point is that I have, in terms of size in MB, small layers (< 100MB), big layers (~100MB-500MB) and huge layers (>500MB up to 1-2GB). Of course ml python libraries cached to layers do not help since torch itself is about 800MB, tensorflow is ~500MB, intel mkl is about 300MB, etc. Currently to prevent packages version override I do like COPY $ROOT_APPLICATION/src/requirements.txt /tmp/requirements.txt RUN cat /tmp/requirements.txt | xargs -n 1 -L 1 pip3 install COPY $ROOT_APPLICATION/src/requirements.lib.txt /tmp/requirements.lib.txt RUN cat /tmp/requirements.lib.txt | xargs -n 1 -L 1 pip3 install where I do a copy to /tmp before RUN for caching reasons. To keep everything compact I start from the good python slim buster: FROM python:3.7.4-slim-buster while the missing part of the Dockerfile is the apt-get of required libraries like: ######################################## BASE SYSTEM RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections RUN apt-get update && apt-get install -y --no-install-recommends \ software-properties-common \ build-essential \ pkg-config \ libjemalloc2 \ libgmp3-dev \ libicu-dev \ python3.7-icu \ swig \ curl \ unzip \ cron \ jq ######################################## AUDIO RUN apt-get update && apt-get install -y \ libyaml-0-2 \ libfftw3-3 \ libtag1v5 \ libsamplerate0 \ libasound-dev \ portaudio19-dev \ libportaudio2 \ libportaudiocpp0 \ ffmpeg \ espeak How to optimize layers caching to 1) keep the docker images as small as possible 2) optimize layer's size for bandwidth when using docker push (then avoiding to push to registry huge layers over the network)?
You're going to want pip install --no-cache-dir, so it doesn't keep copies of the downloads around. You don't want to keep the toolchain (compiler etc.) installed, but you need them to build the image. So what you do is, you use multi-stage builds: you use one image to build everything, and then a second image that just copies the built packages over and omits all the build tools and artifacts. You can find a guide to multi-stage builds for Python here (it's three parts, this is part 1): https://pythonspeed.com/articles/smaller-python-docker-images/
https://stackoverflow.com/questions/60997203/
How to revert BERT/XLNet embeddings?
I've been experimenting with stacking language models recently and noticed something interesting: the output embeddings of BERT and XLNet are not the same as the input embeddings. For example, this code snippet: bert = transformers.BertForMaskedLM.from_pretrained("bert-base-cased") tok = transformers.BertTokenizer.from_pretrained("bert-base-cased") sent = torch.tensor(tok.encode("I went to the store the other day, it was very rewarding.")) enc = bert.get_input_embeddings()(sent) dec = bert.get_output_embeddings()(enc) print(tok.decode(dec.softmax(-1).argmax(-1))) Outputs this for me: ,,,,,,,,,,,,,,,,, I would have expected the (formatted) input sequence to be returned since I was under the impression that the input and output token embeddings were tied. What's interesting is that most other models do not exhibit this behavior. For example, if you run the same code snippet on GPT2, Albert or Roberta, it outputs the input sequence. Is this a bug? Or is it expected for BERT/XLNet?
Not sure if it's too late, but I've experimented a bit with your code and it can be reverted. :) bert = transformers.BertForMaskedLM.from_pretrained("bert-base-cased") tok = transformers.BertTokenizer.from_pretrained("bert-base-cased") sent = torch.tensor(tok.encode("I went to the store the other day, it was very rewarding.")) print("Initial sentence:", sent) enc = bert.get_input_embeddings()(sent) dec = bert.get_output_embeddings()(enc) print("Decoded sentence:", tok.decode(dec.softmax(0).argmax(1))) For this, you get the following output: Initial sentence: tensor([ 101, 146, 1355, 1106, 1103, 2984, 1103, 1168, 1285, 117, 1122, 1108, 1304, 10703, 1158, 119, 102]) Decoded sentence: [CLS] I went to the store the other day, it was very rewarding. [SEP]
https://stackoverflow.com/questions/60997438/
Google Colab becomes slower with the same code sometimes. What are the possible reasons?
I am training a CNN model with Google Colab's GPU through PyTorch. My question is, even when running the same code, it sometimes gets about three times slower (30s -> 90s in my case). I've tried restarting the runtime (it clears all local variables but keeps files); it doesn't work. I have seen this post; however, I've checked my GPU status and it works well with 11.5GB. Sometimes it goes back to normal after disconnecting it for a while, though I still want to figure out what the possible reason could be.
Guys, I think I found the possible answer here. So it might be a limit of Google Colab itself. Due to their policy, sometimes you'll get fewer computation resources, which slows down the process even though nothing in the code has changed.
https://stackoverflow.com/questions/61016380/
Text classification using BERT - how to handle misspelled words
I am not sure if this is the best place to submit that kind of question, perhaps CrossValdation would be a better place. I am working on a text multiclass classification problem. I built a model based on BERT concept implemented in PyTorch (huggingface transformer library). The model performs pretty well, except when the input sentence has an OCR error or equivalently it is misspelled. For instance, if the input is "NALIBU DRINK" the Bert tokenizer generates ['na', '##lib', '##u', 'drink'] and model's prediction is completely wrong. On the other hand, if I correct the first character, so my input is "MALIBU DRINK", the Bert tokenizer generates two tokens ['malibu', 'drink'] and the model makes a correct prediction with very high confidence. Is there any way to enhance Bert tokenizer to be able to work with misspelled words?
You can leverage BERT's power to rectify the misspelled word. The article linked below beautifully explains the process with code snippets https://web.archive.org/web/20220507023114/https://www.statestitle.com/resource/using-nlp-bert-to-improve-ocr-accuracy/ To summarize, you can identify misspelled words via a SpellChecker function and get replacement suggestions. Then, find the most appropriate replacement using BERT.
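A rough sketch of that idea, assuming the pyspellchecker package for candidate generation and the transformers fill-mask pipeline for ranking (the exact recipe in the linked article differs, and the function and variable names here are illustrative):

from spellchecker import SpellChecker   # pip install pyspellchecker
from transformers import pipeline

spell = SpellChecker()
fill_mask = pipeline('fill-mask', model='bert-base-uncased')

def correct_sentence(sentence):
    words = sentence.lower().split()
    for i, word in enumerate(words):
        if word in spell.unknown([word]):                     # likely misspelling / OCR error
            candidates = spell.candidates(word)               # edit-distance suggestions
            masked = ' '.join(words[:i] + [fill_mask.tokenizer.mask_token] + words[i + 1:])
            for pred in fill_mask(masked):                    # BERT suggestions, best first
                if pred['token_str'].strip() in candidates:   # keep a plausible, in-context fix
                    words[i] = pred['token_str'].strip()
                    break
    return ' '.join(words)

print(correct_sentence("nalibu drink"))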
https://stackoverflow.com/questions/61016422/
How to index a 3-d tensor with 2-d tensor in pytorch?
import torch a = torch.rand(5,256,120) min_values, indices = torch.min(a,dim=0) aa = torch.zeros(256,120) for i in range(256): for j in range(120): aa[i,j] = a[indices[i,j],i,j] print((aa==min_values).sum()==256*120) I want to know how to avoid to using the for-for loop to get the aa values? (I want to use the indices to select elements in another 3-d tensors so I can't use the values return by min directly)
You can use torch.gather aa = torch.gather(a, 0, indices.unsqueeze(0)) as explained here: Slicing a 4D tensor with a 3D tensor-index in PyTorch
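Plugged into the code from the question, this reproduces the loop result:

import torch

a = torch.rand(5, 256, 120)
min_values, indices = torch.min(a, dim=0)
aa = torch.gather(a, 0, indices.unsqueeze(0)).squeeze(0)   # same shape as min_values
print(torch.equal(aa, min_values))                         # True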
https://stackoverflow.com/questions/61031110/
Pytorch - Indexing a range of multiple Indices?
Lets say I have a tensor of size [100, 100] and I have a set of start_indices and end_indices of size [100] I want to be able to do something like this: tensor[start_indices:end_indices, :] = 0 Unfortunately, I get an error saying TypeError: only integer tensors of a single element can be converted to an index So is this actually possible without a for loop?
To the best of my knowledge this is not possible without some sort of loop or list comprehension. Below are some alternatives which may be useful depending on your use-case. Specifically if you are looking to reuse the same start_indices and end_indices for multiple assignments, or if you are looking have only one in-place assignment to tensor then the solutions below would be useful. If instead of start_indices and end_indices you were given a list of indices, for example row_indices = torch.cat([torch.arange(s, e, dtype=torch.int64) for s, e in zip(start_indices, end_indices)]) Then this would be possible using tensor[row_indices, :] = 0 Or if you were given a mask mask = torch.zeros(tensor.shape, dtype=torch.bool, device=tensor.device) for s, e in zip(start_indices, end_indices): mask[s:e, :] = True then this would be possible using tensor[mask] = 0
https://stackoverflow.com/questions/61034839/
Maxpool of an image in pytorch
I'm trying to just apply maxpool2d (from torch.nn) on a single image (not as a maxpool layer). Here is my code right now: name = 'astronaut' imshow(images[name], name) img = images[name] # pool of square window of size=3, stride=1 m = nn.MaxPool2d(3,stride = 1) img_transform = torch.Tensor(images[name]) plt.imshow(m(img_transform).view((512,510))) The issue is, this code gives me a very green image as a result. I am sure the problem is with the dimensions of view, but I was unable to find how to apply maxpool to just one image so I couldn't fix it. The dimension of the image I'm considering is 512x512. The arguments for view make no sense for me right now, it's just the only number that gives a result... If for example, I gave 512,512 as the argument for view, I get the following error: RuntimeError: shape '[512, 512]' is invalid for input of size 261120 If anyone can tell me how to apply maxpool, avgpool, or minpool to an image and display the result I would be super grateful! Thanks (:
Assuming your image is a numpy.array upon loading (please see comments for explanation of each step): import numpy as np import torch # Assuming you have 3 color channels in your image # Assuming your data is in Width, Height, Channels format numpy_img = np.random.randint(low=0, high=255, size=(512, 512, 3)) # Transform to tensor tensor_img = torch.from_numpy(numpy_img) # PyTorch takes images in format Channels, Width, Height # We have to switch their dimensions using `permute` tensor_img = tensor_img.permute(2, 0, 1) tensor_img.shape # Shape [3, 512, 512] # Layers always need batch as first dimension (even for one image) # unsqueeze will add it for you ready_tensor_img = tensor_img.unsqueeze(dim=0) ready_tensor_img.shape # Shape [1, 3, 512, 512] pooling = torch.nn.MaxPool2d(kernel_size=3, stride=1) # You need to cast your image to float as # pooling is not implemented for Tensors of type long new_img = pooling(ready_tensor_img.float()) If your image is black and white you would need shape [1, 1, 512, 512] (single channel only), you can't leave/squeeze those dimensions, they always have to be there for any torch.nn.Module! To transform tensor into image again you could use similar steps: # Cast to long and squeeze batch dimension no_batch = new_img.long().squeeze(dim=0) # Unpermute width_height_channels = no_batch.permute(1, 2, 0) width_height_channels.shape # Shape: [510, 510, 3] # Cast to numpy and you have your image final_image = width_height_channels.numpy()
https://stackoverflow.com/questions/61049808/
Why can't a CUDA model be initialized in the __init__ method of a class that inherits multiprocessing.Process?
Here is my code: from MyDetector import Helmet_Detector from multiprocessing import Process class Processor(Process): def __init__(self): super().__init__() self.helmet_detector = Helmet_Detector() def run(self): print(111) if __name__ == '__main__': p=Processor() p.start() As you can see, the class 'Processor' inherits multiprocessing.Process, and Helmet_Detector is a YOLO model using cuda. But when I ran it, the error occurred as follow: THCudaCheck FAIL file=C:\w\1\s\tmp_conda_3.7_075911\conda\conda-bld\pytorch_1579075223148\work\torch/csrc/generic/StorageSharing.cpp line=245 error=71 : operation not supported Traceback (most recent call last): File "E:/python-tasks/WHU-CSTECH/Processor.py", line 17, in <module> p.start() File "C:\Anaconda\lib\multiprocessing\process.py", line 112, in start self._popen = self._Popen(self) File "C:\Anaconda\lib\multiprocessing\context.py", line 223, in _Popen return _default_context.get_context().Process._Popen(process_obj) File "C:\Anaconda\lib\multiprocessing\context.py", line 322, in _Popen return Popen(process_obj) File "C:\Anaconda\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__ reduction.dump(process_obj, to_child) File "C:\Anaconda\lib\multiprocessing\reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) File "C:\Anaconda\lib\site-packages\torch\multiprocessing\reductions.py", line 242, in reduce_tensor event_sync_required) = storage._share_cuda_() RuntimeError: cuda runtime error (71) : operation not supported at C:\w\1\s\tmp_conda_3.7_075911\conda\conda-bld\pytorch_1579075223148\work\torch/csrc/generic/StorageSharing.cpp:245 then I tried to intialize the Helmet_Detector in run method: def run(self): print(111) self.helmet_detector = Helmet_Detector() No error occurred. Could anyone please tell me the reason for this and how could I solve this problem? Thank you!
The error occurs because Python's multiprocessing requires Process class objects to be picklable so that data can be transferred to the process being created, i.e. serialization and deserialization of the object. A suggestion to overcome the issue: lazily instantiate the Helmet_Detector object (hint: try a property in Python). Edit: As per the comment by @jodag, you should use PyTorch's multiprocessing library instead of the standard multiprocessing library. Example:

import torch.multiprocessing as mp

class Processor(mp.Process):
    . . .
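A sketch of what the lazy instantiation could look like (Helmet_Detector comes from the question; the property simply defers construction to the first access inside run, i.e. inside the child process):

import torch.multiprocessing as mp
from MyDetector import Helmet_Detector

class Processor(mp.Process):
    def __init__(self):
        super().__init__()
        self._helmet_detector = None        # nothing CUDA-related is created in the parent process

    @property
    def helmet_detector(self):
        if self._helmet_detector is None:   # built lazily, on first access in the child process
            self._helmet_detector = Helmet_Detector()
        return self._helmet_detector

    def run(self):
        print(111)
        detector = self.helmet_detector     # the model is instantiated here, after the spawn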
https://stackoverflow.com/questions/61052513/
How does the neural network definition in PyTorch use Python classes?
in order to understand how this code works, I have written a small reproducer. How does the self.hidden variable use a variable x in the forward method? enter code class Network(nn.Module): def __init__(self): super().__init__() # Inputs to hidden layer linear transformation self.hidden = nn.Linear(784, 256) # Output layer, 10 units - one for each digit self.output = nn.Linear(256, 10) # Define sigmoid activation and softmax output self.sigmoid = nn.Sigmoid() self.softmax = nn.Softmax(dim=1) def forward(self, x): # Pass the input tensor through each of our operations x = self.hidden(x) x = self.sigmoid(x) x = self.output(x) x = self.softmax(x) return x
You misunderstood what self.hidden = nn.Linear(784, 256) does. You wrote that: hidden is defined as a function but this is not true. self.hidden is an object of the class nn.Linear. And when you call self.hidden(...), you are not passing arguments to nn.Linear; you are passing arguments to __call__ (defined in the nn.Linear class). If you want more details on that, I have expanded on how it works in PyTorch: see this answer.
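A small illustration (not from the original answer): self.hidden is a module object built once in __init__, and calling it later with x goes through nn.Module.__call__, which dispatches to forward:

import torch
import torch.nn as nn

hidden = nn.Linear(784, 256)   # an object, constructed once (as in __init__)
x = torch.randn(64, 784)       # a batch of 64 flattened images
out = hidden(x)                # hidden.__call__(x), which calls hidden.forward(x)
print(out.shape)               # torch.Size([64, 256])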
https://stackoverflow.com/questions/61068166/
Anaconda Integration with Cuda 9.0 shows Incompatible Package Error
I am trying to install CUDA 9.0 with NVIDIA-SMI: 445.75 in Windows 10. My Cuda 9.0 installation is successful, as shown from Command-prompt *(DL) C:\Users\User>nvcc --version nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2017 NVIDIA Corporation Built on Fri_Sep__1_21:08:32_Central_Daylight_Time_2017 **Cuda compilation tools, release **9.0**, V9.0.176*** (1) I downloaded cudnn-9.0-windows10-x64-v7.zip, extracted it, and moved it to the fold, which was created when Cuda was installed. (2) In the terminal prompt of the Anaconda, I input conda install pytorch=1.1.0 torchvision=0.3.0 cudatoolkit=9.0 –c pytorch. However, Anaconda prompt gives the following error **Error messages** *Collecting package metadata (current_repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source. Collecting package metadata (repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Solving environment: | Found conflicts! Looking for incompatible packages. This can take several minutes. Press CTRL-C to abort. failed UnsatisfiableError: The following specifications were found to be incompatible with the existing python installation in your environment: Specifications: - pytorch=1.1.0 -> python[version='>=3.5,<3.6.0a0|>=3.6,<3.7.0a0|>=3.7,<3.8.0a0'] - torchvision=0.3.0 -> python[version='>=3.5,<3.6.0a0|>=3.6,<3.7.0a0|>=3.7,<3.8.0a0'] Your python: python=3.8 If python is on the left-most side of the chain, that's the version you've asked for. When python appears to the right, that indicates that the thing on the left is somehow not available for the python version you are constrained to. Note that conda will not change your python version to a different minor version unless you explicitly specify that. The following specifications were found to be incompatible with each other: Output in format: Requested package -> Available versions Package cudatoolkit conflicts for: torchvision=0.3.0 -> cudatoolkit[version='>=10.0,<10.1|>=9.0,<9.1'] pytorch=1.1.0 -> cudatoolkit[version='>=10.0,<10.1|>=9.0,<9.1'] torchvision=0.3.0 -> pytorch[version='>=1.1.0'] -> cudatoolkit[version='>=10.1,<10.2|>=9.2,<9.3']The following specifications were found to be incompatible with your CUDA driver: - feature:/win-64::__cuda==11.0=0 Your installed CUDA driver is: 11.0*
I solved this issue as follows. Open the Anaconda Powershell Prompt by searching for it in the start menu, then run the command conda install -c anaconda tensorflow-gpu. You may be asked to confirm the installation. Finally, tensorflow-gpu appears in the list of installed packages. Reference: https://anaconda.org/anaconda/tensorflow-gpu
https://stackoverflow.com/questions/61072464/
Pytorch "NCCL error": unhandled system error, NCCL version 2.4.8"
I use pytorch to distributed training my model.I have two nodes and two gpu for each node, and I run the code for one node: python train_net.py --config-file configs/InstanceSegmentation/pointrend_rcnn_R_50_FPN_1x_coco.yaml --num-gpu 2 --num-machines 2 --machine-rank 0 --dist-url tcp://192.168.**.***:8000 and the other: python train_net.py --config-file configs/InstanceSegmentation/pointrend_rcnn_R_50_FPN_1x_coco.yaml --num-gpu 2 --num-machines 2 --machine-rank 1 --dist-url tcp://192.168.**.***:8000 However the other has RuntimeError problem global_rank 3 machine_rank 1 num_gpus_per_machine 2 local_rank 1 global_rank 2 machine_rank 1 num_gpus_per_machine 2 local_rank 0 Traceback (most recent call last): File "train_net.py", line 109, in <module> args=(args,), File "/root/detectron2_repo/detectron2/engine/launch.py", line 49, in launch daemon=False, File "/root/anaconda3/envs/PointRend/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 171, in spawn while not spawn_context.join(): File "/root/anaconda3/envs/PointRend/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 118, in join raise Exception(msg) Exception: -- Process 0 terminated with the following error: Traceback (most recent call last): File "/root/anaconda3/envs/PointRend/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap fn(i, *args) File "/root/detectron2_repo/detectron2/engine/launch.py", line 72, in _distributed_worker comm.synchronize() File "/root/detectron2_repo/detectron2/utils/comm.py", line 79, in synchronize dist.barrier() File "/root/anaconda3/envs/PointRend/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 1489, in barrier work = _default_pg.barrier() RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:410, unhandled system error, NCCL version 2.4.8 IF I change mask-rank = 1 to mask-rank = 0, then no error will be reported, but can't distributed training,Does anyone know why this error may occur?
A number of things can cause this issue, see for example 1, 2. Adding the line import os os.environ["NCCL_DEBUG"] = "INFO" to your script will log more specific debug info leading up to the error, giving you a more helpful error message to google.
https://stackoverflow.com/questions/61075390/
Kornia rotation not quite rotating as expected
I'm trying to use the kornia.geometry.transform.rotate function, in Python, to rotate a PyTorch tensor by arbitrary angles. However if I do a simple 90 degree rotation, the resulting tensor doesn't look like it's been fully rotated. Here's some sample code: import torch from kornia.geometry.transform import rotate import matplotlib.pyplot as plt a = torch.ones((1,64,64)) a[0,:,2] += 1 angle = torch.tensor([90]) c = rotate(a,angle) plt.figure() plt.subplot(121) plt.imshow(a[0].detach().numpy()) plt.subplot(122) plt.imshow(c[0].detach().numpy()) And the results before and after the rotation: Am I missing a subtlety due to the tensor being too coarse here, which causes interpolation issues or something that would be alleviated with a much finer grained tensor? Many thanks in advance! Note I am using: python 3.6.10 kornia 0.2.0 pytorch 1.4.0
To use Kornia for this, you can also use the Rotate class. Below is an example of rotating all tensors in a mini-batch by a fixed 45 degrees:
import kornia as tgm

# set the rotation angles - assume batch size is N
angle = torch.tensor([45] * N).cuda()

# do the rotation:
tensor_rotated = tgm.Rotate(angle)(tensor_input)
The only caveat right now is that it seems super slow. Hope this helps!
https://stackoverflow.com/questions/61076613/
Image classification in Pytorch
I'm working on facenet-pytorch library in Pytorch, I want to know the data augmentation should be in train dataset or test data set? how many images should I put to test data set at least (I've used 2% of images in test data set) I have 21 classes(21 persons face) and with (vggface2 dataset ) with evaluation mode , does it enough for training and test data set? how to visualize the images in test dataset to display if a face matched or not I tried this but it will rise this error : TypeError: Invalid shape (3, 160, 160) for image data The shape of images are : (10, 3, 160, 160) dataiter = iter(test_loader) images, labels = dataiter.next() # get predictions preds = np.squeeze(net(images).data.max(1, keepdim=True)[1].numpy()) images = images.numpy() # plot the images in the batch, along with predicted and true labels fig = plt.figure(figsize=(25, 4)) for idx in np.arange(batch_size): ax = fig.add_subplot(2, batch_size/2, idx+1, xticks=[], yticks=[]) ax.imshow(np.squeeze(images[idx]), cmap='gray') ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]), color=("green" if preds[idx]==labels[idx] else "red")) how to take input faces from webcam after detected the face (prediction function)? cap = cv.VideoCapture(0) while True: ret, frame = cap.read() frame = cv.resize(frame, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA) image = predict_draw_bounding_box(frame) cv.imshow('Output', image) c = cv.waitKey(1) if c == 27: break cap.release() cv.destroyAllWindows() But I don't know to implement predict_draw_bounding_box function? Thanks for any advice
That's a lot of questions; you should probably split those up into multiple questions. In any case, I'll try answering some. Data augmentation should generally be done on the train dataset only. Typical augmentations include random rotation, resized crops, horizontal flips, cutout etc. All of these only go on the train set. Other than this, off the top of my head, I can only think of channel normalization as the only augmentation you usually apply to both training and testing set. You compute x-x_mean/sigma channelwise for all images in a dataset. The percentage of images in your test dataset is entirely empirical, and depends on how many images you actually have. For very large datasets with a million plus images, small percentages like 2% is okay. However if your number of images is in the ten thousands, thousands, or even less, it's good practice to keep around 20% as the test set. Can't understand your question. Your images are in the shape (3, 160, 160). It's the channel-first syntax used by pytorch's nn.Module system, but plotting an RGB image in matplotlib requires it to have the channel in the last dimension, ie, (160,160,3). If images is a batch of images of shape (10,3,160,160), then do: ... images = images.numpy() images = images.swapaxes(1,2).swapaxes(2,3) ... This will reshape it to (10,160,160,3), without harming the axes order. No clue.
https://stackoverflow.com/questions/61101206/
pytorch element intersection
When I calculate the Hit Ratio, I need to calculate the number of elements of predict tensor in the target tensor, I wanna calculate the number of elements in their intersection. For example: [# of classes: 20, # of samples: 2] target: tensor([[14, 13, 8, 11, 18, 12, 5, 1, 0, 10], [ 8, 10, 2, 10, 7, 17, 6, 12, 13, 14]]) pred_idx: (HR@5): tensor([[14, 11, 8, 19, 4], [ 6, 9, 8, 13, 18]]) now when I do >>> (pred_idx & target).sum((1,2)) RuntimeError: The size of tensor a (5) must match the size of tensor b (10) at non-singleton dimension 1. But the thing is, the prediction and target have different size, how can I calculate the number of elements in the pred@5 that are also in the target?
Perhaps you could convert to numpy and then use its set operations. import torch import numpy as np target = torch.tensor([[14, 13, 8, 11, 18, 12, 5, 1, 0, 10], [ 8, 10, 2, 10, 7, 17, 6, 12, 13, 14]]) pred_idx = torch.tensor([[14, 11, 8, 19, 4], [ 6, 9, 8, 13, 18]]) Find elements of p@5 in target: [np.intersect1d(t,p) for t,p in zip(target.cpu().numpy(),pred_idx.cpu().numpy())] Find number of elements in p@5 also in target: [len(np.intersect1d(t,p)) for t,p in zip(target.cpu().numpy(),pred_idx.cpu().numpy())]
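If you would rather stay on the GPU and avoid the numpy round-trip, here is a pure-PyTorch sketch (note that, unlike intersect1d, this counts every matching prediction, so duplicate predictions would each be counted):
# pred_idx: (2, 5), target: (2, 10)
hits = (pred_idx.unsqueeze(2) == target.unsqueeze(1)).any(dim=2)  # (2, 5) bool, one flag per prediction
num_hits = hits.sum(dim=1)                                        # tensor([3, 3]) for the example above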
https://stackoverflow.com/questions/61108901/
Cannot improve model accuracy
I am building a general-purpose NN that would classify images (Dog/No Dog) and movie reviews(Good/Bad). I have to stick to a very specific architecture and loss function so changing these two seems out of the equation. My architecture is a two-layer network with relu followed by a sigmoid and a cross-entropy loss function. With 1000 epochs and a learning rate of around .001 I am getting 100 percent training accuracy and .72 testing accuracy.I was looking for suggestions to improve my testing accuracy.This is the layout of what I have: def train_net(epochs,batch_size,train_x,train_y,model_size,lr): n_x,n_h,n_y=model_size model = Net(n_x, n_h, n_y) optim = torch.optim.Adam(model.parameters(),lr=0.005) loss_function = nn.BCELoss() train_losses = [] accuracy = [] for epoch in range(epochs): count=0 model.train() train_loss = [] batch_accuracy = [] for idx in range(0, train_x.shape[0], batch_size): batch_x = torch.from_numpy(train_x[idx : idx + batch_size]).float() batch_y = torch.from_numpy(train_y[:,idx : idx + batch_size]).float() model_output = model(batch_x) batch_accuracy=[] loss = loss_function(model_output, batch_y) train_loss.append(loss.item()) preds = model_output > 0.5 nb_correct = (preds == batch_y).sum() count+=nb_correct.item() optim.zero_grad() loss.backward() # Scheduler made it worse # scheduler.step(loss.item()) optim.step() if epoch % 100 == 1: train_losses.append(train_loss) print("Iteration : {}, Training loss: {} ,Accuracy %: {}".format(epoch,np.mean(train_loss),(count/train_x.shape[0])*100)) plt.plot(np.squeeze(train_losses)) plt.ylabel('loss') plt.xlabel('iterations (per tens)') plt.title("Learning rate =" + str(lr)) plt.show() return model My model parameters: batch_size = 32 lr = 0.0001 epochs = 1500 n_x = 12288 # num_px * num_px * 3 n_h = 7 n_y = 1 model_size=n_x,n_h,n_y model=train_net(epochs,batch_size,train_x,train_y,model_size,or) and this is the testing phase. model.eval() #Setting the model to eval mode, hence making it deterministic. test_loss = [] count=0; loss_function = nn.BCELoss() for idx in range(0, test_x.shape[0], batch_size): with torch.no_grad(): batch_x = torch.from_numpy(test_x[idx : idx + batch_size]).float() batch_y = torch.from_numpy(test_y[:,idx : idx + batch_size]).float() model_output = model(batch_x) preds = model_output > 0.5 loss = loss_function(model_output, batch_y) test_loss.append(loss.item()) nb_correct = (preds == batch_y).sum() count+=nb_correct.item() print("test loss: {},test accuracy: {}".format(np.mean(test_loss),count/test_x.shape[0])) Things I have tried: Messing around with the learning rate, having momentum, using schedulers and changing batch sizes.Of course these were mainly guesses and not based on any valid assumptions.
The issue you're facing is overfitting. With 100% accuracy on the training set, your model is effectively memorizing the training set, then failing to generalize to unseen samples. The good news is this is a very common major challenge! You need regularization. One method is dropout, whereby on different training epochs a random set of the NN connections are dropped, forcing the network to "learn" alternate pathways and weights, and softening sharp peaks in parameter space. Since you need to keep your architecture and loss function the same, you won't be able to add such an option in (though for completeness, read this article for a description and implementation of dropout in PyTorch). Given your constraints, you'll want to use something like L2 or L1 weight regularization. This typically shows up in the way of adding an additional term to the cost/loss function, which penalizes large weights. In PyTorch, L2 regularization is implemented via the torch.optim construct, with the option weight_decay. (See documentation: torch.optim, search for 'L2') For your code, try something like: def train_net(epochs,batch_size,train_x,train_y,model_size,lr): ... optim = torch.optim.Adam(model.parameters(),...,weight_decay=0.01) ...
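Since weight_decay gives you L2 regularization out of the box, a sketch of adding an L1 penalty manually inside the training loop (the strength value here is just an assumption to tune on validation data) would be:
l1_lambda = 1e-4   # assumed penalty strength
l1_penalty = sum(p.abs().sum() for p in model.parameters())
loss = loss_function(model_output, batch_y) + l1_lambda * l1_penalty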
https://stackoverflow.com/questions/61110186/
PyTorch Model throwing error for list of layers
I have designed the following torch model with 2 conv2d layers. It works without any error. import torch.nn as nn from torchsummary import summary class mini_unet(nn.Module): def __init__(self): super(mini_unet, self).__init__() self.c1 = nn.Conv2d(1, 1, 3, padding = 1) self.r1 = nn.ReLU() self.c2 = nn.Conv2d(1, 1, 3, padding = 1) self.r2 = nn.ReLU() def forward(self, x): x = self.c1(x) x = self.r1(x) x = self.c2(x) x = self.r2(x) return x a = mini_unet().cuda() print(a) But, let's say I have too many layers, I don't want to explicitly write each of them in the forward function. So, I used a list to automate it like below. import torch.nn as nn from torchsummary import summary class mini_unet2(nn.Module): def __init__(self): super(mini_unet2, self).__init__() self.layers = [nn.Conv2d(1, 1, 3, padding = 1), nn.ReLU(), nn.Conv2d(1, 1, 3, padding = 1), nn.ReLU()] def forward(self, x): for l in self.layers: x = l(x) return x a2 = mini_unet2().cuda() print(a2) summary(a2, (1,4,4)) This gives me the following error which is strange, I have used cuda() why it doesn't work? RuntimeError Traceback (most recent call last) <ipython-input-36-1d71e75b96e0> in <module> 17 a2 = mini_unet2().cuda() 18 print(a2) ---> 19 summary(a2, (1,4,4)) ~/anaconda3/envs/torch/lib/python3.6/site-packages/torchsummary/torchsummary.py in summary(model, input_size, batch_size, device) 70 # make a forward pass 71 # print(x.shape) ---> 72 model(*x) 73 74 # remove these hooks ~/anaconda3/envs/torch/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 487 result = self._slow_forward(*input, **kwargs) 488 else: --> 489 result = self.forward(*input, **kwargs) 490 for hook in self._forward_hooks.values(): 491 hook_result = hook(self, input, result) <ipython-input-36-1d71e75b96e0> in forward(self, x) 12 def forward(self, x): 13 for l in self.layers: ---> 14 x = l(x) 15 return x 16 ~/anaconda3/envs/torch/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 487 result = self._slow_forward(*input, **kwargs) 488 else: --> 489 result = self.forward(*input, **kwargs) 490 for hook in self._forward_hooks.values(): 491 hook_result = hook(self, input, result) ~/anaconda3/envs/torch/lib/python3.6/site-packages/torch/nn/modules/conv.py in forward(self, input) 318 def forward(self, input): 319 return F.conv2d(input, self.weight, self.bias, self.stride, --> 320 self.padding, self.dilation, self.groups) 321 322 RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
The error may be a little counter-intuitive, but it originates from using a plain Python list for the layers. From the documentation, you need to use torch.nn.ModuleList to contain the submodules, not a Python list. So just wrapping the list in nn.ModuleList(list) will solve the error.
import torch.nn as nn
from torchsummary import summary

class mini_unet2(nn.Module):
    def __init__(self):
        super(mini_unet2, self).__init__()
        self.layers = nn.ModuleList([nn.Conv2d(1, 1, 3, padding = 1),
                                     nn.ReLU(),
                                     nn.Conv2d(1, 1, 3, padding = 1),
                                     nn.ReLU()])

    def forward(self, x):
        for l in self.layers:
            x = l(x)
        return x

a2 = mini_unet2().cuda()
print(a2)
summary(a2, (1,4,4))
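Since the layers here are only ever applied one after the other, nn.Sequential would work just as well and removes the loop entirely — a small sketch:
self.layers = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1), nn.ReLU(),
                            nn.Conv2d(1, 1, 3, padding=1), nn.ReLU())

def forward(self, x):
    return self.layers(x)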
https://stackoverflow.com/questions/61116039/
Maybe I found something strange in PyTorch, which results in a property setter not working
Maybe I found something strange on pytorch, which result in property setter not working. Below is a minimal example that demonstrates this: import torch.nn as nn class A(nn.Module): def __init__(self): super(A, self).__init__() self.aa = 1 self.oobj = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] @property def obj(self): print('get attr [obj]: {0}'.format(self.oobj)) return self.oobj @obj.setter def obj(self, val): print('set attr [obj] to {0}'.format(val)) self.oobj = val class B(nn.Module): def get_attr(self): print('no any attr.') class C: def get_attr(self): print('no any attr.') b = A() # set obj, and prints my setter message b.obj # get obj using my getter # respectively run the following 3 lines, only the last line not call the setter I defined explicitly. b.obj = C() # set obj, and prints my setter message # b.obj = [1, 2, 3] # set obj, and prints my setter message # b.obj = B() # set obj, but it doesn't print my setter message The last line doesn't call property setter I defined on class A, but call setter on torch.nn.Module. Because A regard B as a nn.Module, call the setter on nn.Module to set attr [obj] as a Module, but it still strange, why not call the setter I explicitly defined on class A? And my project needs to set a nn.Module attribute via setter I defined explicitly, which causes BUG( because it failed). Now I change my code solved the BUG, but still puzzle with the problem.
It may not look obvious at first, but up until you set b.obj as a nn.Module object, you are defining a normal attribute; but once you set b.obj as a nn.Module object, then you can "only" replace b.obj with another nn.Module, because you registered it to _modules. Let me walk you through the code and you'll get it. nn.Module()'s __setattr__ implementation can be found here. First, you defined a new nn.Module: b = A() # btw, why not a = A() :) Then, you set (I'll skip unnecessary steps to reproduce the behavior): b.obj = [1, 2, 3] In this case, because [1,2,3] is not a nn.Parameter; You haven't set a nn.Parameter as attribute before; [1,2,3] is not a nn.Module; You haven't set a nn.Module as attribute before; You haven't registered a buffer before; Then, this line will be execute: object.__setattr__(self, name, value) which is nothing but a normal attribute set, which calls your setter. Now, when you set: b.obj = B() Then, because B() is a nn.Module, the following block will be executed instead: modules = self.__dict__.get('_modules') if isinstance(value, Module): if modules is None: raise AttributeError( "cannot assign module before Module.__init__() call") remove_from(self.__dict__, self._parameters, self._buffers) modules[name] = value So, now you are actually registering a nn.Module to self.__dict__.get('_modules') (print it before and after and you'll see... do it before and after setting [1,2,3] as well). After this point, if you are not setting a nn.Parameter, and you try to set .obj again, then it will fall into this block: elif modules is not None and name in modules: if value is not None: raise TypeError("cannot assign '{}' as child module '{}' " "(torch.nn.Module or None expected)" .format(torch.typename(value), name)) modules[name] = value That is: you already have modules['obj'] set to something and from now on you need to provide another nn.Module or None if you want to set it again. And, as you can see, because you are providing a list if you try to set b.obj = [1,2,3] again, you'll get the error message in the block above, and that is what you get. If you really want set it to something else, then you have to delete it before: b.obj = B() del b.obj b.obj = [1,2,3]
https://stackoverflow.com/questions/61116433/
Differentiable image compression operations in PyTorch
During a CNN classification model training while calculating the loss I am applying the encoding jpeg compression on the image in PyTorch. While I call loss.backward() it must also backpropagate through encoding and compression operation performed on the images. Are those compression algorithms (e.g. encoding and JPEG compression) are differentiable otherwise how to backpropagate the loss gradient through those operations? If those operations are not differentiable is there any differentiable compression algorithm that exists in PyTorch which performs H.264 encoding and JPEG compression? Any suggestions will be highly helpful.
To start with, carefully consider whether you need to differentiate across the JPEG compression step. The vast majority of projects do not differentiate across this step, and if you're unsure if you need to, you probably don't. If you really need to differentiate across an image compressor, you might consider a codec that is easier to implement than JPEG. Wavelett-based compression (the technology behind the ill-fated JPEG 2000 format) is mathematically elegant and easy to differentiate across. In a recent application of this technique, Thies et al. 2019 represent an image as a laplacian pyramid, with a loss component that serves to force sparsity in the higher resolution levels. Now, as a thought experiment, we can look at the different steps within JPEG compression and determine if they could be implemented in a differentiable way. Color transform (RBG to YCbCr): We can represent this as a point-wise convolution. Chroma downsampling: Easy enough with torch.nn.functional.interpolate on the chroma channels. Discrete Cosine Transform (DCT): Now things are getting interesting. Here is a Pytorch implementation of DCT that might work: https://github.com/zh217/torch-dct. Quantization table: Easy again. This should just be multiplying output of the DCT with the values in the table. Huffman encoding: Hard; I'm not sure this is possible. The number of output elements is going to vary based on the image entropy, which rules out many differentiable building blocks. Depending on your application, you might be able to skip this step (this step is lossless compression; so if you're trying to differentiate across the compression artifacts introduced by JPEG, the previous steps should be sufficient). For an interesting related work on inputting JPEG DCT components directly into a neural net, see Faster Neural Networks Straight from JPEG.
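For step 3, a tiny sketch of what the differentiable DCT could look like, assuming the torch-dct package linked above (which exposes dct_2d/idct_2d):
import torch
import torch_dct as dct

blocks = torch.randn(16, 8, 8, requires_grad=True)   # hypothetical batch of 8x8 pixel blocks
coeffs = dct.dct_2d(blocks, norm='ortho')             # forward 2D DCT, differentiable
recon = dct.idct_2d(coeffs, norm='ortho')             # inverse 2D DCT
recon.sum().backward()                                 # gradients flow back into `blocks`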
https://stackoverflow.com/questions/61132905/
Why doesn't nn.Sequential contain a softmax output layer in the example?
The example from PyTorch's official tutorial has the following ConvNet. My understanding is that the output layer uses a softmax to estimate the digit an image corresponds to. Why doesnt the code have a softmax layer or fully connected layer? model = nn.Sequential( nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(), nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(), nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1), Lambda(lambda x: x.view(x.size(0), -1)), ) opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
This is a very good question! The reason why no fully-connected layer is used is because of a technique called Global Average Pooling, implemented via nn.AdaptiveAvgPool2d(1). The benefits of this operation over fc layers were introduced in this paper, including reducing the number of model parameters while preserving performance, acting as a regulariser, and modelling deep localisation information. GAP can be used in place of fc, as well as before a subsequent fc layer. As for why there is no softmax layer, I think that this is because they use the CrossEntropyLoss loss function in the backend. This function takes in raw logits and combines nn.LogSoftmax() and nn.NLLLoss() in one computation. So there is no need to perform an additional softmax function before loss evaluation.
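In other words, the raw outputs of this model are fed straight into the loss, and a softmax is only needed if you explicitly want probabilities at inference time — a small sketch, where xb and yb stand for a batch of images and integer labels:
import torch.nn as nn
import torch.nn.functional as F

criterion = nn.CrossEntropyLoss()
logits = model(xb)                   # raw scores from the Sequential above, shape (N, 10)
loss = criterion(logits, yb)         # log-softmax + NLL are applied inside the loss
probs = F.softmax(logits, dim=1)     # only for inspecting class probabilities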
https://stackoverflow.com/questions/61150929/
Creating a stack of convolutional layers using for loop in forward function of a pytoch class for Residual block
I'm defining a residual block in pytorch for ResNet in which you can input how many convolutional layers you want to have and not necessarily two. This is done through a parameter named nc (number of Convs). The first layer gets ni as the number of input nf number of filters. But from second layer on I put them in a for loop. Here's my code: class ResBlock(nn.Module): def __init__(self, ni, nf,nc=2): super().__init__() self.conv1 = nn.Conv2d(ni,nf, kernel_size=3, stride=2, padding=0) self.conv2 = nn.Conv2d(nf,nf, kernel_size=3, stride=1, padding=0) self.conv1x1 = nn.Conv2d(ni, nf, kernel_size=1, stride=1, padding=0) self.nc = nc def forward(self, x): y = self.conv1(x) for i in range(self.nc-1): y = self.conv2(y) print(torch.mean(y)) return self.conv1x1(x) + y But no matter what value I give to nc, it always returns 2 convs with kernel size 3. I'm not sure if for loop can really do this job in pytorch but it was working when I used functional API in Keras. Could anyone help me understand what's going on?
Yeah, printing a nn.Module object is often misleading. When you print, you get: # for ni=3, nf=16 ResBlock( (conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(2, 2)) (conv2): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1)) (conv1x1): Conv2d(3, 16, kernel_size=(1, 1), stride=(1, 1)) ) because these are the only 3 Modules you registered in the __init__ of the ResBlock. The actual forward can (and in your case will) be doing something completely different.
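If the intention was for each of the nc convolutions to have its own weights (rather than re-applying the same self.conv2 object nc-1 times), a sketch would be to register them explicitly with nn.ModuleList, so they also show up when you print the block — only the changed lines are shown:
# in __init__:
self.convs = nn.ModuleList(
    [nn.Conv2d(nf, nf, kernel_size=3, stride=1, padding=0) for _ in range(nc - 1)]
)

# in forward, after y = self.conv1(x):
for conv in self.convs:
    y = conv(y)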
https://stackoverflow.com/questions/61164539/
Predicting the next track using a vanilla RNN in PyTorch
For some context, I have a set of 37 playlists of 12 tracks long. Each track has been hand-selected in a certain way. Early songs in the playlist are generally more chilled and as the playlist progresses tracks begin to increase in tempo. I decided to commit to a project and build a deep playlist generator. I am implementing a many-to-many vanilla RNN in PyTorch and am seeking clarity on how to train the RNN one playlist at a time, where each track is then parsed and the model predicts the features of the next track. Pictured is a Many-to-many RNN - for this case - each red box is the current track's features and the opposite blue box is the predicted next track's features: The feature set (9), X, looks like so: The target y simply mirrors the above feature set of the next track. For my RNN Class it looks like so: class RNNEstimator(nn.Module): def __init__(self, input_size=9, hidden_size=30, output_size=9): super(RNNEstimator, self).__init__() self.hidden_size = hidden_size self.i2h = nn.Linear(input_size + hidden_size, hidden_size) self.i2o = nn.Linear(input_size + hidden_size, output_size) def forward(self, inp, hidden): print("inp", inp.shape) print("hid", hidden.shape) combined = torch.cat((inp, hidden), 1) hidden = self.i2h(combined) output = self.i2o(combined) return output, hidden def initHidden(self): return torch.zeros(1, self.hidden_size) This is taken from the PyTorch tutorials page. However, I have adapted the RNN Class to output 9 features rather than a binary classification. The playlist dataset has been processed into a tensor of shape torch.Size([37, 12, 18]), and stride (12, 1, 444)) - meaning 37 playlist, 12 tracks longs with 9 X_features + 9 y_features (18). The train_rnn function: # Model Initiation device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = RNNEstimator(9, 30, 9) optimizer = optim.Adam(model.parameters(), lr=0.001) loss_fn = torch.nn.L1Loss() # Training function for RNN def train_rnn(model, train_loader, epochs, criterion, optimizer, device): model.train() # Make sure that the model is in training mode. # training loop is provided for epoch in range(1, epochs + 1): for batch in train_loader: total_loss = 0 # get data batch_x = batch[:, :9, :].float().squeeze() batch_y = batch[:, 9:, :].float() batch_x = batch_x.to(device) batch_y = batch_y.to(device) optimizer.zero_grad() hidden = model.initHidden() # For each track in batch/playlist # TODO: THIS NEEDS WORK for x, y in zip(batch_x, batch_y): output, hidden = model(x, hidden) loss = criterion(output, y) loss.backward() optimizer.step() total_loss += loss.data.item() if epoch % 10 == 0: print('Epoch: {}/{}.............'.format(epoch, epochs), end=' ') print("Loss: {:.4f}".format(loss.item())) What I am trying to understand is how to train this model one playlist per batch? I am getting an error from the cat function like so: IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1) The model should parse each track (t) - via the forward method - then output the next track (t+1). The hidden state will reset each playlist given they are independent of one another.
Even though you want to run with a batch size of 1, your input (x) still needs a batch dimension. Try: output, hidden = model(x.unsqueeze(0), hidden)
https://stackoverflow.com/questions/61165591/
what is the difference between if-else statement and torch.where in pytorch?
See the code snippet: import torch x = torch.tensor([-1.], requires_grad=True) y = torch.where(x > 0., x, torch.tensor([2.], requires_grad=True)) y.backward() print(x.grad) The output is tensor([0.]), but import torch x = torch.tensor([-1.], requires_grad=True) if x > 0.: y = x else: y = torch.tensor([2.], requires_grad=True) y.backward() print(x.grad) The output is None. I'm confused that why the output of torch.where is tensor([0.])? update import torch a = torch.tensor([[1,2.], [3., 4]]) b = torch.tensor([-1., -1], requires_grad=True) a[:,0] = b (a[0, 0] * a[0, 1]).backward() print(b.grad) The output is tensor([2., 0.]). (a[0, 0] * a[0, 1]) is not in any way related to b[1], but the gradient of b[1] is 0 not None.
Tracking-based AD, like PyTorch's autograd, works by recording the operations that are actually executed. You can't track through things that are not function calls intercepted by the library. With an if statement like this, there is no connection between x and y, whereas with where, x and y are linked in the expression tree. Now for the differences: in the first snippet, 0 is the correct derivative of the function x ↦ (x > 0 ? x : 2) at the point -1, since the negative branch is constant. In the second snippet, as said, x is not in any way related to y (in the else branch). Therefore, the derivative of y with respect to x is undefined, which is represented as None. (Differentiating through such native control flow is possible in principle, even in Python, but it requires more sophisticated technology like source transformation. I don't think it is possible with PyTorch.)
https://stackoverflow.com/questions/61184437/
How to add pooling layer to BERT QA for large text
I'm trying to implement a Question answering system that deal with large input text: so the idea is to split the large input text into subsequences of 510 tokens, after I will generate the representation of each sequence independently and using a pooling layer to generate the final representation of the input sequence. I using the CamemBERT model for French language. I have tried the following code: class CamemBERTQA(nn.Module): # the initialization of the model def __init__(self, do_lower_case: bool = True): super(CamemBERTQA, self).__init__() self.config_keys = ['do_lower_case'] self.do_lower_case = do_lower_case self.camembert = CamembertForQuestionAnswering.from_pretrained('fmikaelian/camembert-base-fquad') self.tokenizer = CamembertTokenizer.from_pretrained('fmikaelian/camembert-base-fquad', do_lower_case=do_lower_case) self.cls_token_id = self.tokenizer.convert_tokens_to_ids([self.tokenizer.cls_token])[0] self.sep_token_id = self.tokenizer.convert_tokens_to_ids([self.tokenizer.sep_token])[0] self.pool = nn.MaxPool2d(2, 2) # Split long input text into subsequences with overlapping def split_text(self, text, max_length, overlapp): #511 max f = [] text = text.split() for i in range(0, int(len(text)-overlapp),(max_length-overlapp)): f.append(" ".join(text[i:i+max_length])) # print (f) return f # Generate representation of a text, def text_representation(self, l): # l here is a list result = [] for i in l: input_ids = torch.tensor([self.tokenizer.encode(i, add_special_tokens=True)]) with torch.no_grad(): last_hidden_states = model(input_ids)[0] # Models outputs are now tuples result.append(last_hidden_states) # print(last_hidden_states[0]) return result def forward(self, text, input_ids): # Split input text to subsequences of 511 with overlapping subsequences = self.split_text(text, 511, 10) # Generate IDs of each subsequence (Sequence representation) input_ids_list = self.text_representation(subsequences) print("input_ids_list") # Pooling layer # pool = self.pool(...) ########### The problem is here: how can I add a pooling layer ################# # input_ids = # the final output of the pooling layer, the result should contain 510 elements/tokens # generate the start and end logits of the answer start_scores, end_scores = self.camembert(torch.tensor([input_ids])) start_logits = torch.argmax(start_scores) end_logits = torch.argmax(end_scores)+1 outputs = (start_logits, end_logits,) # print(outputs) return outputs Since I'm a beginner with pyTorch, I'm not sure about if the code should be like that. Please if you have any advice or if you need any further information contact me.
I'm pretty new to all of this myself, but maybe this could help you: def max_pooling(input_tensor, max_sequence_length): mxp = nn.MaxPool2d((max_sequence_length, 1),stride=1) return mxp(input_tensor)
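If the goal is to pool the per-chunk CamemBERT outputs produced by text_representation above, another sketch (assuming every chunk is padded to the same length so the tensors can be stacked) is to max-pool across the chunk dimension rather than using a fixed MaxPool2d window:
# result: list of tensors of shape (1, seq_len, hidden_size), one per chunk
stacked = torch.cat(result, dim=0)        # (num_chunks, seq_len, hidden_size)
pooled = stacked.max(dim=0).values        # (seq_len, hidden_size) – element-wise max over the chunks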
https://stackoverflow.com/questions/61185592/
What does "conda install pytorch torchvision cudatoolkit=10.2 -c pytorch" install?
I tried installing PyTorch on my system with not just the pip install pytorch -c pytorch command but with conda install pytorch torchvision cudatoolkit=10.2 -c pytorch but I see a very long command prompt running since last 2 hours giving a very large outputs. Is the process going good? I've CUDA 10.2 installed and also tensorflow-gpu and cuDNN too. I see these arguments being shown. Here's the long list of commands running since past 2 hours https://drive.google.com/file/d/1D4p9bfxNHXZfe8PCjc45OPlPZfasxk_4/view?usp=sharing Thanks in advance.
The preferred way of installing PyTorch is through Anaconda; it has some of the common dependencies (packages) pre-installed and saves you a lot of time. Try a clean install of Conda and run: conda install pytorch torchvision cudatoolkit=10.1 -c pytorch The main difference between Anaconda and a vanilla Python installation is the packages that come pre-installed and the source of those packages. Conda has its own Python environment, its own set of packages and the Conda CLI (and a GUI now) to manage the environment. The conda command can be thought of as pip, the difference being that conda can install libraries and packages that are not only for Python. Refer to here for more details: https://www.anaconda.com/understanding-conda-and-pip/ As for your log, I don't see anything wrong with it. Just let it do its job. If there is an error message I missed in there, let me know and I'll take a look.
https://stackoverflow.com/questions/61186333/
how does BatchNorm1d() method whithin the torch library work?
I'm learning pytorch. I don't know if this question is stupid, but I can't find the official docs explaining nn.BatchNorm1d. I'm wondering how torch.nn.BatchNorm1d(d1) works. I know that batch norm is about making the mean and variance of a batch of examples 0 and 1 respectively. I'm wondering if there is an nn.BatchNorm2d; if so, what does it do? What is the d1 parameter?
BatchNorm1d normalises data to 0 mean and unit variance for 2/3-dimensional data (N, C) or (N, C, L), computed over the channel dimension at each (N, L) or (N,) slice; while BatchNorm2d does the same thing for 4 dimensions (N, C, H, W), computed over the channel dimension at each (N, H, W) slice. Which one to use depends on the dimensionality of the input data. For instance, in image processing, feature maps usually have 2 spatial dimensions (N, C, H, W), so BatchNorm2d is useful there. However, for some NLP tasks, if there is only the length dimension to consider, one would use BatchNorm1d. For both functions, the d1 parameter is the number of features (channels), and equals dim C of the input tensor.
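A tiny sketch of the shapes involved (the sizes here are arbitrary examples):
import torch
import torch.nn as nn

bn1d = nn.BatchNorm1d(16)            # 16 = number of channels/features, i.e. the d1 argument
x = torch.randn(8, 16, 100)          # (N, C, L)
print(bn1d(x).shape)                 # torch.Size([8, 16, 100])

bn2d = nn.BatchNorm2d(16)
y = torch.randn(8, 16, 32, 32)       # (N, C, H, W)
print(bn2d(y).shape)                 # torch.Size([8, 16, 32, 32])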
https://stackoverflow.com/questions/61193517/
Wrong Number of Init Arguments for Tanh in Pytorch
For a homework assignment, I am implementing a simple neural network in Python using Pytorch. Here is my network class: class Net(torch.nn.Module): def __init__(self, layer_dims, activation="sigmoid"): super(Net, self).__init__() layers = [] if activation == 'sigmoid': for i in range(1, len(layer_dims) - 1): layers.append(nn.Sigmoid(layer_dims[i - 1], layer_dims[i])) layers.append(nn.Sigmoid(layer_dims[i - 1])) layers.append(nn.Sigmoid(layer_dims[-2], layer_dims[-1])) layers.append(nn.Sigmoid()) elif activation == 'relu': for i in range(1, len(layer_dims) - 1): layers.append(nn.ReLu(layer_dims[i - 1], layer_dims[i])) layers.append(nn.ReLU(layer_dims[i - 1])) layers.append(nn.ReLu(layer_dims[-2], layer_dims[-1])) layers.append(nn.ReLu()) elif activation == 'tanh': for i in range(1, len(layer_dims) - 1): layers.append(nn.Tanh(layer_dims[i - 1], layer_dims[i])) layers.append(nn.Tanh(layer_dims[i - 1])) layers.append(nn.Tanh(layer_dims[-2], layer_dims[-1])) layers.append(nn.Tanh()) elif activation == 'identity': for i in range(1, len(layer_dims) - 1): layers.append(nn.Identity(layer_dims[i - 1], layer_dims[i])) layers.append(nn.Identity(layer_dims[i - 1])) layers.append(nn.Identity(layer_dims[-2], layer_dims[-1])) layers.append(nn.Identity()) self.out = nn.Sequential(*layers) def forward(self, input): return self.out(input) def train(data, labels, n, l, activation='sigmoid'): if activation not in ['sigmoid','identity','tanh','relu']: return net = Net([l for i in range(0,n)], activation) optim = torch.optim.Adam(net.parameters()) for i in range(0,5): ypred = net.forward(torch.Tensor(data)) ypred.backward() optim.step() optim.zero_grad() ypred = net.forward(torch.Tensor(data)) return (net, torch.nn.CrossEntropyLoss(ypred, labels), net.parameters(), ypred) When testing this, I have been trying to run the following code segment: for i in range(3,5): for num in [10,30,50]: print(train(data.get('X_trn'), data.get('y_trn'), i, num, activation='tanh')) Which is erroring out with a TypeError, saying that init() takes 1 positional argument when 3 is given. <ipython-input-30-376b6c739a71> in __init__(self, layer_dims, activation) 18 elif activation == 'tanh': 19 for i in range(1, len(layer_dims) - 1): ---> 20 layers.append(nn.Tanh(layer_dims[i - 1], layer_dims[i])) 21 layers.append(nn.Tanh(layer_dims[i - 1])) 22 layers.append(nn.Tanh(layer_dims[-2], layer_dims[-1])) TypeError: __init__() takes 1 positional argument but 3 were given This error has appeared when I switch the activation function as well. I am unsure what the issue is, because as far as I know, when you create a layer you need to give the input and output dimensions, which is what I have. Any help sorting this out would be appreciated.
The error says it clearly: Tanh takes only 1 argument, a tensor — its constructor takes no layer dimensions at all. From the documentation, https://pytorch.org/docs/stable/nn.html

Tanh
class torch.nn.Tanh
Applies the element-wise function:
Tanh(x) = tanh(x) = (e^x - e^{-x}) / (e^x + e^{-x})
Shape:
Input: (N, *) where * means any number of additional dimensions
Output: (N, *), same shape as the input

There are several mistakes here, and you also didn't give a data sample. Activation functions accept a single tensor; you are passing them two list elements (layer dimensions) as if they were linear layers. Usually, you can use torch.cat to concatenate two tensors. I would suggest you start with a simpler model and read the documentation first.
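As a sketch of what the layer list was presumably meant to look like (an assumption about the intent: the dimensions belong to nn.Linear layers, and the activations are constructed with no arguments):
acts = {'sigmoid': nn.Sigmoid, 'relu': nn.ReLU, 'tanh': nn.Tanh, 'identity': nn.Identity}
layers = []
for i in range(1, len(layer_dims)):
    layers.append(nn.Linear(layer_dims[i - 1], layer_dims[i]))
    layers.append(acts[activation]())
self.out = nn.Sequential(*layers)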
https://stackoverflow.com/questions/61193726/
Can't train ResNet using gpu with pytorch
I'm trying to use gpu to train a ResNet architecture on CIFAR10 dataset. Here's my code for ResNet : import torch import torch.nn as nn import torch.nn.functional as F class ResNetBlock(nn.Module): def __init__(self, in_planes, planes, stride=1): super(ResNetBlock, self).__init__() self.stride = stride self.in_planes=in_planes self.planes = planes if stride!=1: self.fx = nn.Sequential(nn.Conv2d(in_planes, planes, 3, stride=2, padding=1), nn.ReLU(), nn.Conv2d(planes, planes,3, padding=1)) else: self.fx = nn.Sequential(nn.Conv2d(planes, planes, 3, padding = 1), nn.ReLU(), nn.Conv2d(planes, planes,3, padding=1)) def forward(self, x): if self.stride ==1: fx = self.fx(x) id = nn.Sequential() out = fx + id(x) relu = nn.ReLU() return relu(out) else: fx = self.fx(x) id = nn.Conv2d(self.in_planes, self.planes, 2, stride = 2) out = fx + id(x) relu = nn.ReLU() return relu(out) class ResNet(nn.Module): def __init__(self, block, num_blocks, num_classes=10, num_filters=16, input_dim=3): super(ResNet, self).__init__() self.in_planes = num_filters self.conv1 = nn.Conv2d(input_dim, num_filters, kernel_size=3, stride=1, padding=1, bias=False) self.bn1 = nn.BatchNorm2d(num_filters) layers = [] plane = num_filters for nb in num_blocks: layer = self._make_layer(block,plane ,nb,2) layers.append(layer) plane*=2 self.layers = nn.Sequential(*layers) self.linear = nn.Linear(2304, num_classes) def _make_layer(self, block, planes, num_blocks, stride): layers = [] block1 = ResNetBlock(planes, 2*planes, stride = 2) planes *=2 layers.append(block1) for i in range(1,num_blocks): block = ResNetBlock(planes, planes, stride =1) layers.append(block) return nn.Sequential(*layers) def forward(self, x): out = F.relu(self.bn1(self.conv1(x))) out = self.layers(out) out = F.avg_pool2d(out, 4) out = out.view(out.size(0), -1) out = self.linear(out) return out # (1 + 2*(1 + 1) + 2*(1 + 1) + 2*(1 + 1) + 2*(1 + 1)) + 1 = 18 def ResNet18(): return ResNet(ResNetBlock, [2,2,2,2]) Then I train the network using gpu : net = ResNet18() net = net.to('cuda') train2(net, torch.optim.Adam(net.parameters(), lr=0.001), trainloader, criterion, n_ep=3) And I get the error : RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same which is annoying because my weights should be cuda as well because of the resnet.cuda(). With another network the train function works well, so it must come from the classes mentioned above. Also, next(resnet.parameters()).is_cuda returns True. Update : Here's my training function. def train(net, optimizer, trainload, criterion, n_ep=10, cuda = True): if cuda: net = net.to('cuda') for epoch in range(n_ep): for data in trainload: inputs, labels = data if cuda: inputs = inputs.type(torch.cuda.FloatTensor) labels = labels.type(torch.cuda.LongTensor) optimizer.zero_grad() print(next(net.parameters()).is_cuda) ## this actually prints "True" ! outputs = net.forward(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() return net The thing is, this training function works well with another type of net. 
For example is used this one (AlexNet) : class AlexNet(nn.Module): def __init__(self, num_classes=1000): super(AlexNet, self).__init__() self.features = nn.Sequential(nn.Conv2d(3,64,11), nn.ReLU(),nn.MaxPool2d(2, stride = 2), nn.Conv2d(64,192,5), nn.ReLU(), nn.MaxPool2d(2, stride = 2), nn.Conv2d(192,384,3), nn.ReLU(),nn.Conv2d(384,256,3), nn.ReLU(), nn.Conv2d(256,256,3), nn.ReLU()) self.avgpool = nn.AdaptiveAvgPool2d((6, 6)) self.classifier = nn.Sequential( nn.Dropout(), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True), nn.Dropout(), nn.Linear(4096, 4096), nn.ReLU(inplace=True), nn.Linear(4096, num_classes),) def forward(self, x): x = self.features(x) x = self.avgpool(x) x = x.view(x.size(0), 256 * 6 * 6) x = self.classifier(x) return x and with this one the gpu training works well. There's something else I don't understand. I tried to train a network that I moved to GPU (using .cuda() ) with training data that I did not move to GPU (on purpose). And this time I get the error that weights type is torch.cuda and data type isn't. EDIT : I thought it had to do with using nn.ModuleList instead of regular python lists. However I tried that and it has not fixed the issue.
We would need a snippet of your training loop to better determine your error. I am assuming that somewhere in that loop you have some lines of code which do the following: for data, label in CifarDataLoader: data, label = data.to('cuda'), label.to('cuda') My first guess would be to add a line just before the for loop -> resnet = resnet.to('cuda') Let me know if this works; if not, I would need more of your code to find the error.
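One pattern in the posted ResBlock that can produce exactly this weight/input type mismatch is building layers inside forward() (e.g. id = nn.Conv2d(self.in_planes, self.planes, 2, stride=2)): such a module is created fresh on the CPU at every call, so net.cuda() never moves its weights (and they are never trained either). A rough sketch of registering the shortcut in __init__ instead — only the changed lines are shown:
# in __init__, after defining self.fx:
if stride != 1:
    self.shortcut = nn.Conv2d(in_planes, planes, 2, stride=2)
else:
    self.shortcut = nn.Identity()

# in forward:
def forward(self, x):
    return F.relu(self.fx(x) + self.shortcut(x))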
https://stackoverflow.com/questions/61197394/
ValueError: Target and input must have the same number of elements. target nelement (50) != input nelement (100)
I'm new to Pytorch so I tried to learn it by creating simple dogs vs cats classification. The code: class DogCatClassifier(nn.Module): def __init__(self): super().__init__() self.conv1 = nn.Conv2d(1, 32, 5) self.conv2 = nn.Conv2d(32, 64, 5) self.conv3 = nn.Conv2d(64, 128, 5) self.fc1 = nn.Linear(512, 256) self.fc2 = nn.Linear(256, 2) def forward(self, x): x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2)) print("1-st: ", x.shape) x = F.max_pool2d(F.relu(self.conv2(x)), (2, 2)) print("2-nd: ", x.shape) x = F.max_pool2d(F.relu(self.conv3(x)), (2, 2)) print("3-rd: ", x.shape) x = torch.flatten(x, start_dim=1) x = F.relu(self.fc1(x)) print("6-th: ", x.shape) x = self.fc2(x) # bc this is our output layer. No activation here. print("7-th: ", x.shape) x = F.sigmoid(x) print("8-th: ", x.shape) return x I pass a single batch of data (data shape is (50, 1, 50, 50) model = DogCatClassifier() images, labels = next(iter(train_loader)) preds = model(images) print(pred) loss = F.binary_cross_entropy(preds, labels) My prediction shape is (50, 2), so as I understand F.binary_cross_entropy(preds, labels) checks both predictions from a single image and that's why I get 100 predictions against 50 labels. Coming from tensorflow I thought that I could just implement the same logic like using sigmoid as last activation and binary_cross_entropy as loss function. What I don't understand is how to make this piece of code work.
Your problem arises because you are using binary cross entropy instead of regular cross entropy. As the name implies, it checks whether the label is correct or not, thus the shape of both tensors (preds and labels in your code) should be the same. As you are giving the confidence of both classes, the BCE loss function gets confused and the code crashes. You can do one of two things: 1- Change to F.cross_entropy(preds, labels) as your loss function. 2- Change your code to pick the maximum value as the target: pred = pred.argmax(dim=1, keepdim=True) # gets the max value Let me know if this works; if it doesn't, please update with the new error.
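If you go with option 1, note that F.cross_entropy expects raw logits (no sigmoid) and integer class indices as targets — a small sketch, assuming labels has shape (50,) with values 0 or 1:
# in forward(): return the raw scores, no sigmoid
x = self.fc2(x)
return x

# in the training code:
preds = model(images)                   # shape (50, 2), raw logits
loss = F.cross_entropy(preds, labels)   # log-softmax + NLL are applied internally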
https://stackoverflow.com/questions/61206312/
How can I load a partial pretrained pytorch model?
I'm trying to get a pytorch model running on a sentence classification task. As I am working with medical notes I am using ClinicalBert (https://github.com/kexinhuang12345/clinicalBERT) and would like to use its pre-trained weights. Unfortunately the ClinicalBert model only classifies text into 1 binary label while I have 281 binary labels. I am therefore trying to implement this code https://github.com/kaushaltrivedi/bert-toxic-comments-multilabel/blob/master/toxic-bert-multilabel-classification.ipynb where the end classifier after bert is 281 long. How can I load the pre-trained Bert weights from the ClinicalBert model without loading the classification weights? Naively trying to load the weights from the pretrained ClinicalBert weights I get the following error: size mismatch for classifier.weight: copying a param with shape torch.Size([2, 768]) from checkpoint, the shape in current model is torch.Size([281, 768]). size mismatch for classifier.bias: copying a param with shape torch.Size([2]) from checkpoint, the shape in current model is torch.Size([281]). I currently tried to replace the from_pretrained function from the pytorch_pretrained_bert package and pop the classifier weights and biases like this: def from_pretrained(cls, pretrained_model_name, state_dict=None, cache_dir=None, *inputs, **kwargs): ... if state_dict is None: weights_path = os.path.join(serialization_dir, WEIGHTS_NAME) state_dict = torch.load(weights_path, map_location='cpu') state_dict.pop('classifier.weight') state_dict.pop('classifier.bias') old_keys = [] new_keys = [] ... And I get the following error message: INFO - modeling_diagnosis - Weights of BertForMultiLabelSequenceClassification not initialized from pretrained model: ['classifier.weight', 'classifier.bias'] In the end I would like to load the bert embeddings from the clinicalBert pretrained weights and have the top classifier weights initialized randomly.
Removing the keys in the state dict before loading is a good start. Assuming you're using nn.Module.load_state_dict to load the pretrained weights then you'll also need to set the strict=False argument to avoid errors from unexpected or missing keys. This will ignore entries in the state_dict that aren't present in the model (unexpected keys) and, more importantly for you, will leave the missing entries with their default initialization (missing keys). For safety you can check the return value of the method to verify the weights in question are part of the missing keys and that there aren't any unexpected keys.
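A minimal sketch of what that could look like, assuming model is the 281-label BertForMultiLabelSequenceClassification and weights_path points at the ClinicalBert checkpoint:
state_dict = torch.load(weights_path, map_location='cpu')
state_dict.pop('classifier.weight', None)
state_dict.pop('classifier.bias', None)

load_result = model.load_state_dict(state_dict, strict=False)
print(load_result.missing_keys)     # should only list the classifier weight/bias
print(load_result.unexpected_keys)  # should be empty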
https://stackoverflow.com/questions/61211685/
Loss Function in Multi-GPUs training (PyTorch)
I use Pytorch and BERT to traing a model. Everithing works great on one GPU, but when I try to use multi GPUs I am getting an error: ValueError Traceback (most recent call last) <ipython-input-168-507223f9879c> in <module>() 92 # single value; the `.item()` function just returns the Python value 93 # from the tensor. ---> 94 total_loss += loss.item() 95 96 # Perform a backward pass to calculate the gradients. ValueError: only one element tensors can be converted to Python scalars Can someone help me what I am missing and how I should fix it? Here is my code for training: import random seed_val = 42 random.seed(seed_val) np.random.seed(seed_val) torch.manual_seed(seed_val) torch.cuda.manual_seed_all(seed_val) loss_values = [] for epoch_i in range(0, epochs): t0 = time.time() total_loss = 0 for step, batch in enumerate(train_dataloader): if step % 40 == 0 and not step == 0: elapsed = format_time(time.time() - t0) b_input_ids = batch[0].to(device).long() b_input_mask = batch[1].to(device).long() b_labels = batch[2].to(device).long() model.zero_grad() outputs = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels) loss = outputs[0] total_loss += loss.item() loss.backward() torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) optimizer.step() scheduler.step() avg_train_loss = total_loss / len(train_dataloader) loss_values.append(avg_train_loss) print("") print(" Average training loss: {0:.2f}".format(avg_train_loss)) print(" Training epcoh took: {:}".format(format_time(time.time() - t0))) And here is my code for the model: from transformers import BertForSequenceClassification, AdamW, BertConfig model_to_parallel = BertForSequenceClassification.from_pretrained( "./bert_cache.zip", num_labels = 2, output_attentions = False, output_hidden_states = False, ) model = nn.DataParallel(model_to_parallel, device_ids=[0,1,2,3]) model.to(device)
After loss = outputs[0], the loss is a multi-element tensor whose size is the number of GPUs. You can use loss = outputs[0].mean() instead.
https://stackoverflow.com/questions/61214154/
PyTorch equivalent of numpy's np.random.RandomState
I'm looking for a way to create random objects without actually altering the pytorch global seed. i.e. an equivalent to numpy's: rand_gen = np.random.RandomState(seed) rand_gen.randint(0, 256, self.image_dim)) # for example
You could pass your torch.Generator manually to the random function. I think this code should work: gen0 = torch.Generator() gen1 = torch.Generator() gen0 = gen0.manual_seed(0) gen1 = gen1.manual_seed(1) torch.rand(5, generator=gen0) torch.rand(5, generator=gen0) torch.rand(5, generator=gen1) torch.rand(5, generator=gen1) gen0 = gen0.manual_seed(0) gen1 = gen1.manual_seed(1) torch.rand(5, generator=gen1) torch.rand(5, generator=gen1) torch.rand(5, generator=gen0) torch.rand(5, generator=gen0)
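To mirror the numpy randint example from the question, most torch sampling functions accept that generator keyword — a small sketch, where image_dim stands for whatever shape you need:
gen = torch.Generator().manual_seed(seed)
sample = torch.randint(0, 256, image_dim, generator=gen)   # e.g. image_dim = (3, 64, 64)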
https://stackoverflow.com/questions/61224933/
How are contents of hidden_states tuple in BertModel in the transformers library arranged
model = BertModel.from_pretrained('bert-base-uncased', config=BertConfig.from_pretrained('bert-base-uncased',output_hidden_states=True)) outputs = model(input_ids) hidden_states = outputs[2] hidden_states is a tuple of 13 torch.FloatTensors. Each tensor is of size: (batch_size, sequence_length, hidden_size). According to the documentation, the 13 tensors are the hidden states of the embedding and the 12 encoder layers. My question: Is hidden_states[0] the embedding layer while hidden_states[12] is the 12th encoder layer or Is hidden_states[0] the embedding layer while hidden_states[12] is the 1st encoder layer or Is hidden_states[0] the 12th encoder layer while hidden_states[12] is the embedding layer or Is hidden_states[0] the 1st encoder layer while hidden_states[12] is the embedding layer I havent found this found clearly stated anywhere else.
Looking at the source-code for BertModel, it can be concluded that hidden_states[0] contains the outputs of the initial embedding layer, and the rest of the elements in tuples contain the hidden states in the increasing order of each layer. Simply put, hidden_states[1] contains the outputs of the first layer of BERT and hidden_states[12] contains the last i.e. 12th layer.
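A quick sanity check of that ordering (a sketch): the last element of the tuple should be identical to the model's last_hidden_state output, while hidden_states[0] holds the embedding output:
last_hidden_state = outputs[0]
print(torch.equal(hidden_states[-1], last_hidden_state))   # True – hidden_states[12] is the final (12th) encoder layer
print(hidden_states[0].shape)                               # same (batch, seq_len, hidden) shape, embedding output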
https://stackoverflow.com/questions/61227950/
How to solve ' CUDA out of memory. Tried to allocate xxx MiB' in pytorch?
I am trying to train a CNN in pytorch,but I meet some problems. The RuntimeError: RuntimeError: CUDA out of memory. Tried to allocate 512.00 MiB (GPU 0; 2.00 GiB total capacity; 584.97 MiB already allocated; 13.81 MiB free; 590.00 MiB reserved in total by PyTorch) This is my code: import os import numpy as np import cv2 import torch as t import torch.nn as nn import torchvision.transforms as transforms from torch.utils.data import DataLoader,Dataset import time import matplotlib.pyplot as plt %matplotlib inline root_path='C:/Users/60960/Desktop/recet-task/course_LeeML20/course_LeeML20-datasets/hw3/food-11' training_path=root_path+'/training' testing_path=root_path+'/testing' validation_path=root_path+'/validation' def readfile(path,has_label): img_paths=sorted(os.listdir(path)) x=np.zeros((len(img_paths),128,128,3),dtype=np.uint8) y=np.zeros((len(img_paths)),dtype=np.uint8) for i,file in enumerate(img_paths): img=cv2.imread(path+'/'+file) x[i,:,:]=cv2.resize(img,(128,128)) if has_label: y[i]=int(file.split('_')[0]) if has_label: return x,y else: return x def show_img(img_from_cv2): b,g,r=cv2.split(img_from_cv2) img=cv2.merge([r,g,b]) plt.imshow(img) plt.show() x_train,y_train=readfile(training_path,True) x_val,y_val=readfile(validation_path,True) x_test=readfile(testing_path,False) train_transform=transforms.Compose([ transforms.ToPILImage(), transforms.RandomHorizontalFlip(), transforms.RandomRotation(15), transforms.ToTensor() ]) test_transform=transforms.Compose([ transforms.ToPILImage(), transforms.ToTensor() ]) class ImgDataset(Dataset): def __init__(self,x,y=None,transform=None): self.x=x self.y=y if y is not None: self.y=t.LongTensor(y) self.transform=transform def __len__(self): return len(self.x) def __getitem__(self,idx): X=self.x[idx] if self.transform is not None: X=self.transform(X) if self.y is not None: Y=self.y[idx] return X,Y return X batch_size=128 train_set=ImgDataset(x_train,y_train,transform=train_transform) val_set=ImgDataset(x_val,y_val,transform=test_transform) train_loader=DataLoader(train_set,batch_size=batch_size,shuffle=True) val_loader=DataLoader(val_set,batch_size=batch_size,shuffle=False) class Classifier(nn.Module): def __init__(self): super(Classifier,self).__init__() self.cnn=nn.Sequential( nn.Conv2d(3,64,3,1,1), nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2,2,0), nn.Conv2d(64,128,3,1,1), nn.BatchNorm2d(128), nn.ReLU(), nn.MaxPool2d(2,2,0), nn.Conv2d(128,256,3,1,1), nn.BatchNorm2d(256), nn.ReLU(), nn.MaxPool2d(2,2,0), nn.Conv2d(256,512,3,1,1), nn.BatchNorm2d(512), nn.ReLU(), nn.MaxPool2d(2,2,0), nn.Conv2d(512,512,3,1,1), nn.BatchNorm2d(512), nn.ReLU(), nn.MaxPool2d(2,2,0) ) self.fc=nn.Sequential( nn.Linear(512*4*4,1024), nn.ReLU(), nn.Linear(1024,512), nn.ReLU(), nn.Linear(512,11) ) def forward(self,x): out=self.cnn(x) out=out.view(out.size()[0],-1) return self.fc(out) model=Classifier().cuda() loss_fn=nn.CrossEntropyLoss() optim=t.optim.Adam(model.parameters(),lr=0.001) epochs=30 for epoch in range(epochs): epoch_start_time=time.time() train_acc=0.0 train_loss=0.0 val_acc=0.0 val_loss=0.0 model.train() for i,data in enumerate(train_loader): optim.zero_grad() train_pred=model(data[0].cuda()) batch_loss=loss_fn(train_pred,data[1].cuda()) batch_loss.backward() optim.step() train_acc+=np.sum(np.argmax(train_pred.cpu().data.numpy(),axis=1)==data[1].numpy()) train_loss+=batch_loss.item() model.eval() with t.no_grad(): for i,data in enumerate(val_loader): val_pred=model(data[0].cuda()) batch_loss=loss_fn(val_pred,data[1].cuda()) 
val_acc+=np.sum(np.argmax(val_pred.cpu().data.numpy(),axis=1)==data[1].numpy()) val_loss+=batch_loss.item() print('[%03d/%03d] %2.2f sec(s) Train Acc: %3.6f Loss: %3.6f | Val Acc: %3.6f loss: %3.6f' % (epoch + 1, epochs, time.time()-epoch_start_time,train_acc/train_set.__len__(), train_loss/train_set.__len__(), val_acc/val_set.__len__(), val_loss/val_set.__len__())) x_train_val=np.concatenate((x_train,x_val),axis=0) y_train_val=np.concatenate((y_train,y_val),axis=0) train_val_set=ImgDataset(x_train_val,x_train_val,train_transform) train_val_loader=DataLoader(train_val_set,batch_size=batch_size,shuffle=True) model_final=Classifier().cuda() loss_fn=nn.CrossEntropy() optim=t.optim.Adam(model_final.parameters(),lr=0.001) epochs=30 for epoch in range(epochs): epoch_start_time=time.time() train_acc=0.0 train_loss=0.0 model_final.train() for i,data in enumerate(train_val_loader): optim.zero_grad() train_pred=model_final(data[0].cuda()) batch_loss=loss_fn(train_pred,data[1].cuda()) batch_loss.backward() optim.step() train_acc+=np.sum(np.argmax(train_pred.cpu().data.numpy(),axis=1)==data[1].numpy()) train_loss+=batch_loss.item() print('[%03d/%03d] %2.2f sec(s) Train Acc: %3.6f Loss: %3.6f' % (epoch + 1, epochs, time.time()-epoch_start_time,train_acc/train_val_set.__len__(), train_loss/train_val_set.__len__())) test_set=ImgDataset(x_test,transform=test_transform) test_loader=DataLoader(test_set,batch_size=batch_size,shuffle=False) model_final.eval() prediction=[] with t.no_grad(): for i,data in enumerate(test_loader): test_pred=model_final(data.cuda()) test_label=np.argmax(test_pred.cpu().data.numpy(),axis=1) for y in test_label: prediction.append(y) with open('predict.csv','w') as f: f.write('Id,Category\n') for i,y in enumerate(prediction): f.write('{},{}\n,'.format(i,y)) Pytorch version is 1.4.0, opencv2 version is 4.2.0. The training dataset are pictures like these:training set The error happens at this line: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-1-770be67177f4> in <module> 119 for i,data in enumerate(train_loader): 120 optim.zero_grad() --> 121 train_pred=model(data[0].cuda()) 122 batch_loss=loss_fn(train_pred,data[1].cuda()) 123 batch_loss.backward() I have already installed: some information. GPU utilization is low,close to zero: GPU utilization. Error message says: RuntimeError: CUDA out of memory. Tried to allocate 512.00 MiB. So I want to know how to allocate more memory. What's more, I have tried to reduce the batch size to 1, but this doesn't work. HELP!!!
Before reducing the batch size, check the status of GPU memory: nvidia-smi Then check which process is eating up the memory, note its PID, and kill that process with sudo kill -9 PID or sudo fuser -v /dev/nvidia* sudo kill -9 PID
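If no other process turns out to be occupying the GPU, it can also help to inspect what PyTorch itself is holding from inside the script. A minimal sketch (generic, not tied to the code above):
import torch

print(torch.cuda.memory_allocated() / 1024**2, "MiB currently held by live tensors")
print(torch.cuda.max_memory_allocated() / 1024**2, "MiB peak so far")

# Release cached blocks back to the driver; this does not free live tensors,
# so if usage stays high the model/activations themselves are too large for 2 GiB.
torch.cuda.empty_cache()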
https://stackoverflow.com/questions/61234957/
Pytorch - TypeError: ToTensor() takes no arguments using torchvision.transform
I’m trying to load in a dataset for super-resolution and I have set up two functions which use Compose to crop and resize the images. The function I have created for the input images works correctly and they are outputting as expected. The transform function for the target images is basically identical, just omitting the resize part of it. def input_trans(c_size, sF): return Compose([ CenterCrop(c_size), Resize(c_size // sF), ToTensor(), ]) def goal_trans(c_size): return Compose([ CenterCrop(c_size), ToTensor(), ]) These functions are used in my dataset class when the images are loaded. I originally had goal = input.Copy() but I have changed it so both input and goal load the image separately. (was testing if the .copy() was the issue def __getitem__(self, idx): input = Image.open(self.image_filenames[idx]).convert('RGB') goal = Image.open(self.image_filenames[idx]).convert('RGB') if self.input_transform: input = self.input_transform(input) if self.goal_transform: print(goal) print(goal.size) goal = self.goal_transform(goal) return input, goal The error I receive is the following: Traceback (most recent call last): File "main.py", line 128, in <module> main() # execute this only when run directly, not when imported! File "main.py", line 55, in main train_model(epoch) File "main.py", line 40, in train_model for data_item, batch in enumerate(training_data_loader): File "C:\Users\[NAME]\anaconda3\envs\pytorch\lib\site-packages\torch\utils\data\dataloader.py", line 345, in __next__ data = self._next_data() File "C:\Users\[NAME]\anaconda3\envs\pytorch\lib\site-packages\torch\utils\data\dataloader.py", line 385, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "C:\Users\[NAME]\anaconda3\envs\pytorch\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in fetch data = [self.dataset[idx] for idx in possibly_batched_index] File "C:\Users\[NAME]\anaconda3\envs\pytorch\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in <listcomp> data = [self.dataset[idx] for idx in possibly_batched_index] File "main.py", line 118, in __getitem__ goal = self.goal_transform(goal) File "C:\Users\[NAME]\anaconda3\envs\pytorch\lib\site-packages\torchvision\transforms\transforms.py", line 70, in __call__ img = t(img) TypeError: ToTensor() takes no arguments Confuses me because it doesn’t seem to have a problem with the first transformation (Ive checked and it does output before crashing). I would appreciate any help you guys can give, Thanks :)
Problem solved! It had nothing to do with torchvision.transforms. I wasn't actually using the functions above, but older inline Compose declarations I had tried previously. My bad
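For anyone who lands here with the same message: one common way to get TypeError: ToTensor() takes no arguments is to put the ToTensor class itself into Compose instead of an instance of it, so that Compose ends up calling ToTensor(img). A hypothetical illustration (not the asker's exact code):
from torchvision.transforms import Compose, CenterCrop, ToTensor

bad = Compose([CenterCrop(128), ToTensor])     # missing () - calling this on an image raises the TypeError
good = Compose([CenterCrop(128), ToTensor()])  # instantiate the transform first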
https://stackoverflow.com/questions/61250268/
Can auto-encoder encode new vector without re-training afresh?
Here is a simple autoencoder to encode 3 vectors of dimension 1x3 : [1,2,3],[1,2,3],[100,200,500] to 1x1 : epochs = 1000 from pylab import plt plt.style.use('seaborn') import torch.utils.data as data_utils import torch import torchvision import torch.nn as nn from torch.autograd import Variable cuda = torch.cuda.is_available() FloatTensor = torch.cuda.FloatTensor if cuda else torch.FloatTensor import numpy as np import pandas as pd import datetime as dt features = torch.tensor(np.array([ [1,2,3],[1,2,3],[100,200,500] ])) print(features) batch = 1 data_loader = torch.utils.data.DataLoader(features, batch_size=2, shuffle=False) encoder = nn.Sequential(nn.Linear(3,batch), nn.Sigmoid()) decoder = nn.Sequential(nn.Linear(batch,3), nn.Sigmoid()) autoencoder = nn.Sequential(encoder, decoder) optimizer = torch.optim.Adam(params=autoencoder.parameters(), lr=0.001) encoded_images = [] for i in range(epochs): for j, images in enumerate(data_loader): # images = images.view(images.size(0), -1) images = Variable(images).type(FloatTensor) optimizer.zero_grad() reconstructions = autoencoder(images) loss = torch.dist(images, reconstructions) loss.backward() optimizer.step() # encoded_images.append(encoder(images)) # print(decoder(torch.tensor(np.array([1,2,3])).type(FloatTensor))) encoded_images = [] for j, images in enumerate(data_loader): images = images.view(images.size(0), -1) images = Variable(images).type(FloatTensor) encoded_images.append(encoder(images)) The variable encoded_images is an array of size 3 where each array entry represents the reduced dimensionality of a feature array : [tensor([[0.9972], [0.9972]], grad_fn=<SigmoidBackward>), tensor([[1.]], grad_fn=<SigmoidBackward>)] In order to determine similarity of a new feature, for example [1,1,1] is it required to re-train the network or can the existing trained network configuration/weights be "bootstrapped" such that the new vector can be encoded without require to retrain the network afresh ?
Sorry but your code is a mess... And if it's just to showcase the autoencoder idea (here you just have X, Y, Z coordinates while you name it image) it's chosen pretty poorly. Out of the way: If it's an image you won't be able to encode it as a single pixel, this needs a little more sophistication. Source code Here is a simple autoencoder to encode 3 vectors of dimension 1x3 : [1,2,3],[1,2,3],[100,200,500] to 1x1 Which is true only in this case as you have batch of 3 elements (while you named batch out_features of the network!). Their dimensions are not 1x3, it's just 3 as well. Here is a Minimal Reproducible Example with commentary: import torch # Rows are batches, there could be 3, there could be a thousand data = torch.tensor([[1, 2, 3], [1, 2, 3], [100, 200, 500]]).float() # 3 input features, columns of data encoder = torch.nn.Sequential(torch.nn.Linear(3, 1), torch.nn.Sigmoid()) decoder = torch.nn.Sequential(torch.nn.Linear(1, 3), torch.nn.Sigmoid()) autoencoder = torch.nn.Sequential(encoder, decoder) optimizer = torch.optim.Adam(params=autoencoder.parameters(), lr=0.001) epochs = 10000 for i in range(epochs): optimizer.zero_grad() reconstructions = autoencoder(data) loss = torch.dist(data, reconstructions) loss.backward() optimizer.step() # Print loss every 100 epoochs if i % 100 == 0: print(loss) Will it work? This one is more interesting. In principle, if your neural network is trained, you don't have to retrain it to include example it didn't previously see (as the goal of neural network is to learn some patterns to solve the task). In your case it won't. Why it won't work? First of all, you have sigmoid activation in decoder which restricts output to [0, 1] range. You are trying to predict data which is outside this range so it's impossible. Without running, I can tell you what's the loss of this example will go towards to (with all weights being +inf). All predictions will be always [1, 1, 1] (or as close to it as possible) as this value penalizes the network the least, so you just have to calculate distance of each vector in data to [1, 1, 1]. Here loss is stuck around 546.2719. Weights and biases are around 10 (which is pretty huge for sigmoid) after 100000 epochs. Your values may vary but the trend is clear (though it will stop, as 10 is pretty close to 1 when you squash it with sigmoid). Removing torch.nn.Sigmoid from decoder What if we remove torch.nn.Sigmoid() from decoder? It will learn to almost perfectly reconstruct just your 3 examples, with loss being 0.002 after "only" 500000 epochs: Here are the learned weights of decoder: tensor([[ 99.0000], [198.0000], [496.9999]], requires_grad=True) And here is the bias: tensor([1.0000, 2.0000, 2.9999]) And here is the output of encoder for each example: tensor([[2.2822e-13], [2.2822e-13], [1.0000e+00]]) Analysis of results Your network learned just what you told it to learn, which is... magnitude (+ clever bias hackery). [1, 2, 3] vector Take [1, 2, 3] example (repeated twice). encoding of it is 2e-13 and goes towards zero, so we will assume it's zero. Now, multiply 0 with all the weights, you still get zero. Add bias which is [1.0, 2.0, 2.99999] and you magically got your input reconstructed. [100, 200, 500] vector You can probably see where it's going. Encoded value is 1.0, when multiplied by decoder weights we get [99.0, 198.0, 497.0]. Add bias to it and voila, we get our [100.0, 200.0, 500.0]. 
[1, 1, 1] vector In your case it obviously will not work as magnitude of [1, 1, 1] is really small, hence it will be encoded as zero and reconstruced as [1, 2, 3]. Removing torch.nn.Sigmoid from encoder A little off-topic, but when you remove sigmoid from encoder it won't be able to learn this pattern as "easily". The reason is the network has to be more conservative with the weights (as those won't be squashed). You would have to drop the learning rate (preferably constantly lowering it as the training progresses) as it becomes unstable at some point (when trying to hit "the perfect spot"). Learning similarity It's hard (at least for the network) to define "similar" in this case. Is [1, 2, 3] similar to [3, 2, 1]? It has no concept of different dimensions and is required to squash those three numbers into a single value (later used for reconstruction). As demonstrated, it would probably learn some implicit patterns in your data to be good at reconstructing "at least something", but won't find the general pattern you are looking for. Still it depends on your data and it's properties but I would argue for no in general and I think it's generalization capabilities would be poor. And as you've seen in the analysis above, neural network is pretty good at finding those patterns even when you didn't see them (or maybe you did and that's what you were after?) or they don't exist at all. If you need dimension similarity (and it's not just a thought experiment), you have a lot of "human-made" stuff like the p-norm, some encodings (those also measure similiarity but in a different way) so it's better to go for that IMO.
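To answer the literal question: once trained, the network can encode a new vector without any retraining by simply running it through the encoder in inference mode. A minimal sketch reusing the encoder variable from the code above:
new_vector = torch.tensor([[1., 1., 1.]])   # note the leading batch dimension

with torch.no_grad():                       # inference only, no gradients needed
    code = encoder(new_vector)              # 1x1 encoding computed with the existing weights
print(code)
Whether that encoding is useful for similarity is a separate matter, as discussed above.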
https://stackoverflow.com/questions/61260489/
About using RNN in pytorch
I am trying to use RNN to do a binary classification. But when my model is training, it gets stuck at loss.backward(). Here is my model: class RNN2(nn.Module): def __init__(self, input_size, hidden_size, output_size=2, num_layers=1): super(RNN2, self).__init__() self.rnn = nn.RNN(input_size, hidden_size, num_layers) self.reg = nn.Linear(hidden_size, output_size) #self.softmax = nn.LogSoftmax(dim=1) def forward(self,x): x, hidden = self.rnn(x) return self.reg(x[:,2]) rnn = RNN2(13,10) criterion = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate) for e in range(10): out = rnn(train_X) optimizer.zero_grad() print(out[0]) print(out.shape) print(train_Y.shape) loss = criterion(out, train_Y) print(loss) loss.backward() print("1") optimizer.step() print("2") The shape of train_X is 420000*3*13 and the shape of train_Y is 420000 So it can print loss. Can anyone tell me why it gets stuck at loss.backward(). It can't print 1.
You have to know that in RNNs, computing the backward function for a sequence of length 420000 is extremely slow. If you run your code on a machine with a GPU (or google colab) and add the following lines before the for loop, your code finishes executing in less than two minutes. rnn = rnn.cuda() train_X = train_X.cuda() train_Y = train_Y.cuda() Note that by default, the second input dimension passed to RNN will be treated as the batch size. Therefore, if 420000 is actually the number of samples (the batch dimension), pass batch_first=True to the RNN constructor. self.rnn = nn.RNN(input_size, hidden_size, num_layers, batch_first=True) This would significantly speed up the process (less than one second in google colab). However, if that is not the case, you should try chunking the sequences into smaller parts and increasing the batch size from 3 to a larger value.
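If 420000 is indeed the sample dimension, a sketch of feeding it in mini-batches with a DataLoader (assuming batch_first=True as above, and reusing rnn, criterion and optimizer from the question):
from torch.utils.data import TensorDataset, DataLoader

dataset = TensorDataset(train_X, train_Y)              # 420000 samples of shape 3x13
loader = DataLoader(dataset, batch_size=256, shuffle=True)

for xb, yb in loader:
    optimizer.zero_grad()
    out = rnn(xb.cuda())                               # forward on one mini-batch only
    loss = criterion(out, yb.cuda())
    loss.backward()
    optimizer.step()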
https://stackoverflow.com/questions/61260945/
TensorBoard: Tutorial Pytorch: module 'tensorflow._api.v2.io.gfile' has no attribute 'get_filesystem'
I am having issues running this tutorial about Pytorch and TensorBoard with Embeddings https://pytorch.org/tutorials/intermediate/tensorboard_tutorial.html I am getting this message: AttributeError Traceback (most recent call last) <ipython-input-10-e0404d94b4cd> in <module>() 22 writer.add_embedding(features, 23 metadata=class_labels, ---> 24 label_img=images.unsqueeze(1)) 25 writer.close() /usr/local/lib/python3.6/dist-packages/torch/utils/tensorboard/writer.py in add_embedding(self, mat, metadata, label_img, global_step, tag, metadata_header) 779 save_path = os.path.join(self._get_file_writer().get_logdir(), subdir) 780 --> 781 fs = tf.io.gfile.get_filesystem(save_path) 782 if fs.exists(save_path): 783 if fs.isdir(save_path): and I can't visualize the projector
Try this import tensorflow as tf import tensorboard as tb tf.io.gfile = tb.compat.tensorflow_stub.io.gfile
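For clarity, a sketch of where that patch goes relative to the embedding call (reusing the variable names from the question):
import tensorflow as tf
import tensorboard as tb
tf.io.gfile = tb.compat.tensorflow_stub.io.gfile   # apply the patch before add_embedding is called

from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter()
writer.add_embedding(features, metadata=class_labels, label_img=images.unsqueeze(1))
writer.close()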
https://stackoverflow.com/questions/61261451/
Is it possible to use LSTM predictions as inputs for next time steps?
I am working with LSTM (in PyTorch) for multivariate time series prediction. Let’s imagine the situation: I have 2 time series, A and B, and I want to predict t-value of B using previous values of A and B (before t). Such prediction works fine, my model gets good results. But what if (during testing, after training) I want to use predicted values of B as inputs for next time step instead of real values? For example: I predict first value of B, make a step, put predicted value instead of a real, and make prediction again. Then I use two predicted values instead of real two, and so on. In some steps only predicted values will be in time series B. Are there any possibilities to do that?
This is exactly what people do for machine translation and text generation in general. In this case, the LSTM predicts a distribution over a vocabulary, you select one word and use it as an input to the network in the next step. See PyTotrch tutorial on machine translation for more details. The important point is that the LSTM executed in two regimes: For training: as a standard sequence labeling. It is provided input and it should predict one step in the future. For inference: It gradually generates new samples and uses them as the next input. In PyTorch, this needs to be implemented with an explicit for loop.
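A minimal sketch of that inference loop (variable names are hypothetical; it assumes the model takes the last k timesteps of A and B and predicts the next value of B):
model.eval()
preds = []
b_window = list(B[:k])                       # seed with real values of B
with torch.no_grad():
    for t in range(k, len(A)):
        x = torch.tensor([A[t-k:t], b_window[-k:]], dtype=torch.float32)
        x = x.T.unsqueeze(0)                 # shape (1, k, 2): batch, time, features
        b_next = model(x).item()
        preds.append(b_next)
        b_window.append(b_next)              # feed the prediction back as the next input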
https://stackoverflow.com/questions/61265768/
How to visualize a torch_geometric graph in Python?
Let's consider as an example that I have the following adjacence matrix in coordinate format: > edge_index.numpy() = array([[ 0, 1, 0, 3, 2], [ 1, 0, 3, 2, 1]], dtype=int64) which means that the node 0 is linked toward the node 1, and vice-versa, the node 0 is linked to 3 etc... Do you know a way to draw this graph as in networkx with nx.draw() ? Thank you.
import networkx as nx edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]], dtype=torch.long) x = torch.tensor([[-1], [0], [1]], dtype=torch.float) data = torch_geometric.data.Data(x=x, edge_index=edge_index) g = torch_geometric.utils.to_networkx(data, to_undirected=True) nx.draw(g)
https://stackoverflow.com/questions/61274847/
BERT token importance measuring issue. Grad is none
I am trying to measure token importance for BERT via comparing token embedding grad value. So, to get the grad, I've copied the 2.8.0 forward of BertModel and changed it a bit: huggingface transformers 2.8.0 BERT https://github.com/huggingface/transformers/blob/11c3257a18c4b5e1a3c1746eefd96f180358397b/src/transformers/modeling_bert.py Code: embedding_output = self.embeddings( input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds ) embedding_output = embedding_output.requires_grad_(True) # my code encoder_outputs = self.encoder( embedding_output, attention_mask=extended_attention_mask, head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_extended_attention_mask, ) sequence_output = encoder_outputs[0] sequence_output.mean().backward() # my code assert(embedding_output.grad is not None) # my code Colab link: https://colab.research.google.com/drive/1MggBUaDWAAZNuXbTDM11E8jvdMGEkuRD But it gives assertion error. I do not understand why and it seems to be a bug for me. Please, help!
I needed to add this line: embedding_output = torch.tensor(embedding_output, requires_grad=True) It seems that I used the .requires_grad_ method incorrectly.
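For reference, the likely underlying reason (an interpretation based on autograd semantics, not stated in the original answer): .grad is only populated for leaf tensors, and embedding_output is a non-leaf tensor produced by self.embeddings, so requires_grad_(True) alone does not make backward() store a gradient on it, whereas torch.tensor(...) creates a fresh leaf. An alternative sketch that keeps the original graph is to call retain_grad():
embedding_output = self.embeddings(input_ids=input_ids, position_ids=position_ids,
                                   token_type_ids=token_type_ids, inputs_embeds=inputs_embeds)
embedding_output.retain_grad()        # keep .grad on this non-leaf tensor

# ... encoder forward pass exactly as in the question ...

sequence_output.mean().backward()
assert embedding_output.grad is not None   # now holds the token-wise gradients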
https://stackoverflow.com/questions/61286574/
Why not super().__init__(Model,self) in Pytorch
For torch.nn.Module() According to the official documentation: Base class for all neural network modules. Your models should also subclass this class. Modules can also contain other Modules, allowing to nest them in a tree structure. You can assign the submodules as regular attributes. import torch.nn as nn import torch.nn.functional as F class Model(nn.Module): def __init__(self): super(Model, self).__init__() self.conv1 = nn.Conv2d(1, 20, 5) self.conv2 = nn.Conv2d(20, 20, 5) def forward(self, x): x = F.relu(self.conv1(x)) return F.relu(self.conv2(x)) It used super(Model, self).__init__() Why not super().__init__(Model, self)
This construct: super().__init__(self) is valid only in Python 3.x whereas the following construct, super(Model, self).__init__() works both in Python 2.x and Python 3.x. So, the PyTorch developers didn't want to break all the code that's written in Python 2.x by enforcing the Python 3.x syntax of super() since both constructs essentially do the same thing in this case, which is initializing the following variables: self.training = True self._parameters = OrderedDict() self._buffers = OrderedDict() self._backward_hooks = OrderedDict() self._forward_hooks = OrderedDict() self._forward_pre_hooks = OrderedDict() self._state_dict_hooks = OrderedDict() self._load_state_dict_pre_hooks = OrderedDict() self._modules = OrderedDict() For details, see the relevant discussion in the PyTorch forum on the topic, is-there-a-reason-why-people-use-super-class-self-init-instead-of-super-init?
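A tiny illustration of the two equivalent spellings:
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        # super().__init__()            # Python 3 only
        super(Model, self).__init__()   # works on both Python 2 and Python 3
        self.conv1 = nn.Conv2d(1, 20, 5)
Note that super().__init__(Model, self), as written in the question, is not an equivalent spelling: it would pass Model and self as positional arguments to nn.Module.__init__, which takes none, and so would raise a TypeError.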
https://stackoverflow.com/questions/61288224/
Finding memory leak in python by tracemalloc module
I have a python script which uses an opensource pytorch model and this code has a memory leak. I am running this with memory_profiler mprof run --include-children python my_sctipt.py and get the following image: I am trying to search for the reason of the leak by the system python module tracemalloc: tracemalloc.start(25) while True: ... snap = tracemalloc.take_snapshot() domain_filter = tracemalloc.DomainFilter(True, 0) snap = snap.filter_traces([domain_filter]) stats = snap.statistics('lineno', True) for stat in stats[:10]: print(stat) If looking only at tracemalloc output, I will not be able to identify the problem. I assume that the problem is in the C extension but, I would like to make sure it is true. I tried to change the domain by DomainFilter, but I have output only in 0 domain. Also, I don't understand the meaning of the parameter which tracemalloc.start(frameno) has got, frameno is a number of the most recent frames, but nothing happens when I change it. What can I do next to find the problematic place in the code which causes the memory leak? Looking forward to your answer.
Given that your guess is that the problem is in the C extension, but that you want to make sure this is true, I would suggest that you do so using a tool that is less python-specific like https://github.com/vmware/chap or at least if you are able to run your program on Linux. What you will need to do is run your script (uninstrumented) and at some point gather a live core (for example, using "gcore pid-of-your-running-program"). Once you have that core, open that core in chap ("chap your-core-file-path") and try the following command from the chap prompt: summarize writable The output will be something like this, but your numbers will likely vary considerably: chap> summarize writable 5 ranges take 0x2021000 bytes for use: stack 6 ranges take 0x180000 bytes for use: python arena 1 ranges take 0xe1000 bytes for use: libc malloc main arena pages 4 ranges take 0x84000 bytes for use: libc malloc heap 8 ranges take 0x80000 bytes for use: used by module 1 ranges take 0x31000 bytes for use: libc malloc mmapped allocation 4 ranges take 0x30000 bytes for use: unknown 29 writable ranges use 0x23e7000 (37,646,336) bytes. The lines in the summary are given in decreasing order of byte usage, so you can follow that order. So looking at the top one first we see that the use is "stack": 5 ranges take 0x2021000 bytes for use: stack This particular core was for a very simple python program that starts 4 extra threads and has all 5 threads sleep. The reason large stack allocations can happen rather easily with a multi-threaded python program is that python uses pthreads to create additional threads and pthreads uses the ulimit value for stack size as a default. If your program has a similarly large value, you can change the stack size in one of several ways, including running "ulimit -s" in the parent process to change the default stack size. To see what values actually make sense you can use the following command from the chap prompt: chap> describe stacks Thread 1 uses stack block [0x7fffe22bc000, 7fffe22dd000) current sp: 0x7fffe22daa00 Peak stack usage was 0x7798 bytes out of 0x21000 total. Thread 2 uses stack block [0x7f51ec07c000, 7f51ec87c000) current sp: 0x7f51ec87a750 Peak stack usage was 0x2178 bytes out of 0x800000 total. Thread 3 uses stack block [0x7f51e7800000, 7f51e8000000) current sp: 0x7f51e7ffe750 Peak stack usage was 0x2178 bytes out of 0x800000 total. Thread 4 uses stack block [0x7f51e6fff000, 7f51e77ff000) current sp: 0x7f51e77fd750 Peak stack usage was 0x2178 bytes out of 0x800000 total. Thread 5 uses stack block [0x7f51e67fe000, 7f51e6ffe000) current sp: 0x7f51e6ffc750 Peak stack usage was 0x2178 bytes out of 0x800000 total. 5 stacks use 0x2021000 (33,689,600) bytes. So what you see above is that 4 of the stacks are 8MiB in size but could easily be well under 64KiB. Your program may not have any issues with stack size, but if so, you can fix them as described above. Continuing with checking for causes of growth, look at the next line from the summary: 6 ranges take 0x180000 bytes for use: python arena So python arenas use the next most memory. These are used strictly for python-specific allocations. So if this value is large in your case it disproves your theory about C allocations being the culprit, but there is more you can do later to figure out how those python allocations are being used. 
Looking at the remaining lines of the summary, we see a few with "libc" as part of the "use" description: 1 ranges take 0xe1000 bytes for use: libc malloc main arena pages 4 ranges take 0x84000 bytes for use: libc malloc heap 1 ranges take 0x31000 bytes for use: libc malloc mmapped allocation Note that libc is responsible for all that memory, but you can't know that the memory is used for non-python code because for allocations beyond a certain size threshold (well under 4K) python grabs memory via malloc rather than grabbing memory from one of the python arenas. So lets assume that you have resolved any issues you might have had with stack usage and you have mainly "python arenas" or "libc malloc" related usages. The next thing you want to understand is whether that memory is mostly "used" (meaning allocated but never freed) or "free" (meaning "freed but not given back to the operating system). You can do that as shown: chap> count used 15731 allocations use 0x239388 (2,331,528) bytes. chap> count free 1563 allocations use 0xb84c8 (754,888) bytes. So in the above case, used allocations dominate and what one should do is to try to understand those used allocations. The case where free allocations dominate is much more complex and is discussed a bit in the user guide but would take too much time to cover here. So lets assume for now that used allocations are the main cause of growth in your case. We can find out why we have so many used allocations. The first thing we might want to know is whether any allocations were actually "leaked" in the sense that they are no longer reachable. This excludes the case where the growth is due to container-based growth. One does this as follows: chap> summarize leaked 0 allocations use 0x0 (0) bytes. So for this particular core, as is pretty common for python cores, nothing was leaked. Your number may be non-zero. If it is non-zero but still much lower than the totals associated with memory used for "python" or "libc" reported above, you might just make a note of the leaks but continue to look for the real cause of growth. The user guide has some information about investigating leaks but it is a bit sparse. If the leak count is actually big enough to explain your growth issue, you should investigate that next but if not, read on. Now that you are assuming container-based growth the following commands are useful: chap> redirect on chap> summarize used Wrote results to scratch/core.python_5_threads.summarize_used chap> summarize used /sortby bytes Wrote results to scratch/core.python_5_threads.summarize_used::sortby:bytes The above created two text files, one which has a summary ordered in terms of object counts and another which has a summary in terms of the total bytes used directly by those objects. At present chap has only very limited support for python (it finds those python objects, in addition to any allocated by libc malloc but for python objects the summary only breaks out limited categories for python objects in terms of patterns (for example, %SimplePythonObject matches things like "int", "str", ... that don't hold other python objects and %ContainerPythonObject matches things like tuple, list, dict, ... that do hold references to other python objects). With that said, it should be pretty easy to tell from the summary whether the growth in used allocations is primarily due to objects allocated by python or objects allocated by native code. 
So in this case, given that you specifically are trying to find out whether the growth is due to native code or not, look in the summary for counts like the following, all of which are python-related: Pattern %SimplePythonObject has 7798 instances taking 0x9e9e8(649,704) bytes. Pattern %ContainerPythonObject has 7244 instances taking 0xc51a8(807,336) bytes. Pattern %PyDictKeysObject has 213 instances taking 0xb6730(747,312) bytes. So in the core I have been using for an example, definitely python allocations dominate. You will also see a line for the following, which is for allocations that chap does not yet recognize. You can't make assumptions about whether these are python-related or not. Unrecognized allocations have 474 instances taking 0x1e9b8(125,368) bytes. This will hopefully answer your basic question of what you can do next. At least at that point you will understand whether the growth is likely due to C code or python code and depending on what you find, the chap user guide should help you go further from there.
https://stackoverflow.com/questions/61288749/
PyTorch: why the difference between dir(nn.Module()) and dir(nn.Module)
Tried e = dir(nn.Module()) f = dir(nn.Module) print([item for item in e if item not in f]) It gives ['_backward_hooks', '_buffers', '_forward_hooks', '_forward_pre_hooks', '_load_state_dict_pre_hooks', '_modules', '_parameters', '_state_dict_hooks', 'training'] why these are only available for the object not class ? What's special about these attributes?
It's the other way around, those attributes are only available on the object (e in your case), but not on the class. The reason is simple, those are the attributes that are created in the constructor, hence they don't exist on the class and are only created when the object is created. From the nn.Module implementation: def __init__(self): """ Initializes internal Module state, shared by both nn.Module and ScriptModule. """ torch._C._log_api_usage_once("python.nn_module") self.training = True self._parameters = OrderedDict() self._buffers = OrderedDict() self._backward_hooks = OrderedDict() self._forward_hooks = OrderedDict() self._forward_pre_hooks = OrderedDict() self._state_dict_hooks = OrderedDict() self._load_state_dict_pre_hooks = OrderedDict() self._modules = OrderedDict() There is nothing special about these attributes, they are simply used to track the state of each individual module.
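A minimal illustration of this general Python behaviour (nothing PyTorch-specific):
class A:
    def __init__(self):
        self.x = 1            # only created when an instance is constructed

print('x' in dir(A))          # False - the class itself has no such attribute
print('x' in dir(A()))        # True  - the instance does, after __init__ has run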
https://stackoverflow.com/questions/61290416/
The Batch Size of batch normalisation and gradient descent
We need to choose a batch size for gradient descent as well as for batch normalization; they are both called batch size, but in the actual implementation, do they need to be the same? Or how does the framework handle them otherwise? In PyTorch, for example, one batch size is defined in the dataloader, e.g. torch.utils.data.DataLoader(image_datasets[x], batch_size=16, shuffle=True, num_workers=4) and if using BN self.bn1 = nn.BatchNorm2d(16) Does it have to be the same (16) or can it be different? If different, is there any preferred relationship between the two 'batch sizes'? Thanks
No, batch_size is only defined in the data loader, not in the model. The BatchNorm2d has a num_features parameter, and it depends on the number of channels and not the batch size, as you can see in the docs. They are completely unrelated. BatchNorm2d torch.nn.BatchNorm2d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) Parameters num_features – C from an expected input of size (N,C,H,W)(N, C, H, W)(N,C,H,W) eps – a value added to the denominator for numerical stability. Default: 1e-5 momentum – the value used for the running_mean and running_var computation. Can be set to None for cumulative moving average (i.e. simple average). Default: 0.1 affine – a boolean value that when set to True, this module has learnable affine parameters. Default: True track_running_stats – a boolean value that when set to True, this module tracks the running mean and variance, and when set to False, this module does not track such statistics and always uses batch statistics in both training and eval modes. Default: True
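A small sketch making the distinction concrete (toy numbers, unrelated to the 16 in the question):
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(num_features=3)    # 3 = number of channels, unrelated to the DataLoader batch_size

x = torch.randn(16, 3, 32, 32)         # a batch of 16 images with 3 channels
y = bn(x)                              # fine
x = torch.randn(7, 3, 32, 32)          # a different batch size
y = bn(x)                              # still fine - only the channel count has to match num_features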
https://stackoverflow.com/questions/61293098/
pytorch geometric "Detected that PyTorch and torch_sparse were compiled with different CUDA versions" on google colab
I'm new to pytorch geometric; I tried to install it on my computer but failed, so I'm trying to run the code on Google Colab instead. According to this previous question (which didn't help me, and I'm not sure it's the same issue): PyTorch Geometric CUDA installation issues on Google Colab I did: !pip install --upgrade torch-scatter !pip install --upgrade torch-sparse !pip install --upgrade torch-cluster !pip install --upgrade torch-spline-conv !pip install torch-geometric !pip install torch-cluster==latest+cu101 -f https://s3.eu-central-1.amazonaws.com/pytorch-geometric.com/whl/torch-1.4.0.html !pip install torch-scatter==latest+cu101 torch-sparse==latest+cu101 torch-spline-conv==latest+cu101 -f https://s3.eu-central-1.amazonaws.com/pytorch-geometric.com/whl/torch-1.4.0.html they print: Successfully installed torch-cluster-1.5.4 Successfully installed torch-scatter-2.0.4 torch-sparse-0.6.1 torch-spline-conv-1.2.0 However, when I try to run import torch_geometric.datasets as datasets I get: RuntimeError: Detected that PyTorch and torch_sparse were compiled with different CUDA versions. PyTorch has CUDA version 10.1 and torch_sparse has CUDA version 0.0. Please reinstall the torch_sparse that matches your PyTorch install. Any help would be greatly appreciated.
I came up with the following snippet that should work on Colab to install PyTorch Geometric and its dependencies: https://gist.github.com/ameya98/b193856171d11d37ada46458f60e73e7 # Add this in a Google Colab cell to install the correct version of Pytorch Geometric. import torch def format_pytorch_version(version): return version.split('+')[0] TORCH_version = torch.__version__ TORCH = format_pytorch_version(TORCH_version) def format_cuda_version(version): return 'cu' + version.replace('.', '') CUDA_version = torch.version.cuda CUDA = format_cuda_version(CUDA_version) !pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html !pip install torch-sparse -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html !pip install torch-cluster -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html !pip install torch-spline-conv -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html !pip install torch-geometric
https://stackoverflow.com/questions/61297150/
From Coco annotation json to semantic segmentation image like VOC's .png in pytorch
I am trying to use COCO 2014 data for semantic segmentation training in PyTorch. I have a PSPNet model with a Cross Entropy loss function that worked perfectly on the PASCAL VOC 2012 dataset. Now I am trying to use a portion of COCO pictures to do the same process. But COCO has JSON data instead of .png images for annotation, and I somehow have to convert one to the other. I have noticed that there is annToMask in pycocotools, but I cannot quite figure out how to use that function in my case. This is kind of what my dataloader's pull item looks like def pull_item(self, index): I DONT KNOW WHAT TO DO HERE raw_img = self.transform(raw_img) anns_img = self.transform(anns_img) return raw_img, anns_img Below is what my training function that uses data from dataloaders looks like. for images, labels in dataloaders_dict[phase]: images = images.to(device) labels = torch.squeeze(labels) labels = labels.to(device) with torch.set_grad_enabled(phase == 'train'): outputs = net(images) loss = criterion(outputs, labels.long())
I have worked on creating a Data Generator for the COCO dataset with PyCOCO and I think my experience can help you out. My post on medium documents the entire process from start to finish, including the creation of masks. However, point to note, I was working with Tensorflow Keras and not pytorch. But the logic flow should largely be the same, so I am sure you can take back something useful from it.
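In case it helps, a rough sketch of how annToMask can be used to build a VOC-style label mask for one image (the annotation path and variable names are placeholders, not code from the post above):
import numpy as np
from pycocotools.coco import COCO

coco = COCO('annotations/instances_train2014.json')   # placeholder path
img_id = coco.getImgIds()[0]
info = coco.loadImgs(img_id)[0]

# start from a background mask and paint each annotation with its category id
label_mask = np.zeros((info['height'], info['width']), dtype=np.uint8)
for ann in coco.loadAnns(coco.getAnnIds(imgIds=img_id)):
    binary = coco.annToMask(ann)                      # HxW array of 0/1
    label_mask[binary == 1] = ann['category_id']

# label_mask can now be treated much like a VOC .png annotation inside the dataset class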
https://stackoverflow.com/questions/61318213/
How to understand hidden_states of the returns in BertModel?(huggingface-transformers)
Returns last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)): Sequence of hidden-states at the output of the last layer of the model. pooler_output (torch.FloatTensor: of shape (batch_size, hidden_size)): Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pre-training. This output is usually not a good summary of the semantic content of the input, you’re often better with averaging or pooling the sequence of hidden-states for the whole input sequence. hidden_states (tuple(torch.FloatTensor), optional, returned when config.output_hidden_states=True): Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when config.output_attentions=True): Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. This is from https://huggingface.co/transformers/model_doc/bert.html#bertmodel. Although the description in the document is clear, I still don't understand the hidden_states of returns. There is a tuple, one for the output of the embeddings, and the other for the output of each layer. Please tell me how to distinguish them, or what is the meaning of them? Thanks very much!![wink~
hidden_states (tuple(torch.FloatTensor), optional, returned when config.output_hidden_states=True): Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. For a given token, its input representation is constructed by summing the corresponding token embedding, segment embedding, and position embedding. This input representation is called the initial embedding output which can be found at index 0 of the tuple hidden_states. This figure explains how the embeddings are calculated. The remaining 12 elements in the tuple contain the output of the corresponding hidden layer. E.g: the last hidden layer can be found at index 12, which is the 13th item in the tuple. The dimension of both the initial embedding output and the hidden states are [batch_size, sequence_length, hidden_size]. It would be useful to compare the indexing of hidden_states bottom-up with this image from the BERT paper.
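For completeness, a short sketch of how to obtain that tuple in transformers 2.x (assuming bert-base, so 1 embedding output + 12 layer outputs = 13 entries):
from transformers import BertModel, BertTokenizer
import torch

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased', output_hidden_states=True)

inputs = torch.tensor([tokenizer.encode("hello world", add_special_tokens=True)])
last_hidden_state, pooler_output, hidden_states = model(inputs)

print(len(hidden_states))        # 13: hidden_states[0] is the initial embedding output
print(hidden_states[0].shape)    # (batch_size, sequence_length, hidden_size)
print(hidden_states[-1].shape)   # output of the last (12th) layer, equal to last_hidden_state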
https://stackoverflow.com/questions/61323621/
IndexError: Target -1 is out of bounds error in tabular learner fatai2
Getting the below error when trying to fit a tabular_learner from fastai2 library. used data loaders learn = tabular_learner(dls, layers=[1000,500], metrics=accuracy) learn.fit(30,1e-2) IndexError Traceback (most recent call last) <ipython-input-35-f0c57ab3748f> in <module> ----> 1 learn.fit(30,1e-2) /mnt/c/fastai2/fastai2/learner.py in fit(self, n_epoch, lr, wd, cbs, reset_opt) 191 self.epoch=epoch; self('begin_epoch') 192 self._do_epoch_train() --> 193 self._do_epoch_validate() 194 except CancelEpochException: self('after_cancel_epoch') 195 finally: self('after_epoch') /mnt/c/fastai2/fastai2/learner.py in _do_epoch_validate(self, ds_idx, dl) 173 dl,old,has = change_attrs(dl, names, [False,False]) 174 self.dl = dl; self('begin_validate') --> 175 with torch.no_grad(): self.all_batches() 176 except CancelValidException: self('after_cancel_validate') 177 finally: /mnt/c/fastai2/fastai2/learner.py in all_batches(self) 141 def all_batches(self): 142 self.n_iter = len(self.dl) --> 143 for o in enumerate(self.dl): self.one_batch(*o) 144 145 def one_batch(self, i, b): /mnt/c/fastai2/fastai2/learner.py in one_batch(self, i, b) 149 self.pred = self.model(*self.xb); self('after_pred') 150 if len(self.yb) == 0: return --> 151 self.loss = self.loss_func(self.pred, *self.yb); self('after_loss') 152 if not self.training: return 153 self.loss.backward(); self('after_backward') /mnt/c/fastai2/fastai2/layers.py in __call__(self, inp, targ, **kwargs) 291 if targ.dtype in [torch.int8, torch.int16, torch.int32]: targ = targ.long() 292 if self.flatten: inp = inp.view(-1,inp.shape[-1]) if self.is_2d else inp.view(-1) --> 293 return self.func.__call__(inp, targ.view(-1) if self.flatten else targ, **kwargs) 294 295 # Cell ~/anaconda3/envs/py3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 530 result = self._slow_forward(*input, **kwargs) 531 else: --> 532 result = self.forward(*input, **kwargs) 533 for hook in self._forward_hooks.values(): 534 hook_result = hook(self, input, result) ~/anaconda3/envs/py3/lib/python3.6/site-packages/torch/nn/modules/loss.py in forward(self, input, target) 914 def forward(self, input, target): 915 return F.cross_entropy(input, target, weight=self.weight, --> 916 ignore_index=self.ignore_index, reduction=self.reduction) 917 918 ~/anaconda3/envs/py3/lib/python3.6/site-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction) 2019 if size_average is not None or reduce is not None: 2020 reduction = _Reduction.legacy_get_string(size_average, reduce) -> 2021 return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) 2022 2023 ~/anaconda3/envs/py3/lib/python3.6/site-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction) 1836 .format(input.size(0), target.size(0))) 1837 if dim == 2: -> 1838 ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) 1839 elif dim == 4: 1840 ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index) IndexError: Target -1 is out of bounds. Any clue would be much appreciated! Thanks
Finally figured this out: it happened because my validation set accidentally had more dependent-variable classes than my training set (or maybe it was the other way around). To fix it I had to ensure that the number of classes in the training set and validation set are the same, i.e. make sure this check holds: len(train_df["my_category"].unique()) == len(valid_df["my_category"].unique())
https://stackoverflow.com/questions/61347613/
Need help understanding this Python list syntax
I'm having trouble understanding what this syntax means in Python: out = out[lengths - 1, range(len(lengths))] Why is there a range inside a list? How does that work? For context, I'm training a machine learning model in PyTorch. lengths is a list of the lengths of the input.
I assume lengths is an array of integers (probably a NumPy array). The first index, lengths - 1, gives a list of indices with 1 subtracted from each element. The second index, range(len(lengths)), gives the numbers from 0 up to (but not including) the size of lengths. I don't know what the specific logic is in your code, but in general you can give a list of indices to pick the data at specific locations. out = np.array([[1,2,3],[4,5,6],[7,8,9]]) -> array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) first_idx = [0, 1, 2] second_idx = [2, 1, 0] # (0,2), (1,1), (2,0) out[first_idx, second_idx] -> array([3, 5, 7])
https://stackoverflow.com/questions/61356477/
PyTorch - sparse tensors do not have strides
I am building my first sentiment analysis model for a small dataset of 1000 reviews using TF-IDF approach along with LSTM using the below code. I am preparing the train data by preprocessing it and feeding to the Vectorizer as below def tfidf_features(X_train, X_val, X_test): tfidf_vectorizer = TfidfVectorizer(analyzer='word', token_pattern = '(\S+)', min_df = 5, max_df = 0.9, ngram_range=(1,2)) X_train=tfidf_vectorizer.fit_transform(X_train) X_val=tfidf_vectorizer.transform(X_val) X_test=tfidf_vectorizer.transform(X_test) return X_train, X_val, X_test, tfidf_vectorizer.vocabulary_ I am converting my csr_matrix to a pytorch tensor using the below code def spy_sparse2torch_sparse(data): samples=data.shape[0] features=data.shape[1] values=data.data coo_data=data.tocoo() indices=torch.LongTensor([coo_data.row,coo_data.col]) t=torch.sparse.FloatTensor(indices,torch.from_numpy(values).float(),[samples,features]) return t And I am getting the training sentences tensor as this tensor(indices=tensor([[ 0, 0, 1, ..., 599, 599, 599], [ 97, 131, 49, ..., 109, 65, 49]]), values=tensor([0.6759, 0.7370, 0.6076, ..., 0.3288, 0.3927, 0.3288]), size=(600, 145), nnz=1607, layout=torch.sparse_coo) I am creating a TensorDataSet using the below code wherein I am also converting my label data from bumpy to a torch tensor train_data = TensorDataset(train_x, torch.from_numpy(train_y)) I have defined my LSTM network and calling it with the following parameters n_vocab = len(vocabulary) n_embed = 100 n_hidden = 256 n_output = 1 # 1 ("positive") or 0 ("negative") n_layers = 2 net = Sentiment_Lstm(n_vocab, n_embed, n_hidden, n_output, n_layers) I have also defined the loss and optimizer. Now I am training my model using the below code print_every = 100 step = 0 n_epochs = 4 # validation loss increases from ~ epoch 3 or 4 clip = 5 # for gradient clip to prevent exploding gradient problem in LSTM/RNN for epoch in range(n_epochs): h = net.init_hidden(batch_size) for inputs, labels in train_loader: step += 1 # making requires_grad = False for the latest set of h h = tuple([each.data for each in h]) net.zero_grad() output, h = net(inputs) loss = criterion(output.squeeze(), labels.float()) loss.backward() nn.utils.clip_grad_norm(net.parameters(), clip) optimizer.step() if (step % print_every) == 0: net.eval() valid_losses = [] v_h = net.init_hidden(batch_size) for v_inputs, v_labels in valid_loader: v_inputs, v_labels = inputs.to(device), labels.to(device) v_h = tuple([each.data for each in v_h]) v_output, v_h = net(v_inputs) v_loss = criterion(v_output.squeeze(), v_labels.float()) valid_losses.append(v_loss.item()) print("Epoch: {}/{}".format((epoch+1), n_epochs), "Step: {}".format(step), "Training Loss: {:.4f}".format(loss.item()), "Validation Loss: {:.4f}".format(np.mean(valid_losses))) net.train() However, I am getting a major error on the line output, h = net(inputs) as RuntimeError: sparse tensors do not have strides The workarounds given on other websites are not understandable. I am expecting an exact code change I need to make in order to fix this issue.
Pytorch does not support sparse (S) to sparse matrix multiplication. Let us consider torch.sparse.mm(c1,c2), where c1 and c2 are sparse_coo_tensor matrices. case1: If both c1 and c2 are S --> It gives the error RuntimeError: sparse tensors do not have strides. case2: If c1 is dense (D) and c2 is S --> It gives the same error. case3: Only when c1 is S and c2 is D --> It works fine. Reference: https://blog.csdn.net/w55100/article/details/109086131 I guess the matrix multiplication happening in your Sentiment_Lstm might be falling under the first two cases, and thereby throwing this error. By using a dense input format it should work.
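A tiny sketch of the three cases, and of densifying to sidestep the error:
import torch

i = torch.tensor([[0, 1], [1, 0]])
v = torch.tensor([3., 4.])
s = torch.sparse_coo_tensor(i, v, (2, 2))   # sparse matrix
d = torch.randn(2, 2)                       # dense matrix

torch.sparse.mm(s, d)          # case 3: sparse x dense - works
# torch.sparse.mm(s, s)        # case 1: sparse x sparse - RuntimeError: sparse tensors do not have strides
# torch.sparse.mm(d, s)        # case 2: dense x sparse - same error

x_dense = s.to_dense()         # densify the TF-IDF batch before feeding it to the network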
https://stackoverflow.com/questions/61364160/
Making transformers BertForSequenceClassification initial layers non-trainable for pytorch training
I'm trying to do a transfer learning with BertForSequenceClassification https://huggingface.co/transformers/model_doc/bert.html#bertforsequenceclassification This is my simple NN model for classification. from transformers import BertTokenizer, BertForSequenceClassification class NN(nn.Module): def __init__(self): super(NN, self).__init__() self.bert = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels = 17) def forward(self, x): return self.bert(x) a = NN() Once I print my model I get this: NN( (bert): BertForSequenceClassification( (bert): BertModel( (embeddings): BertEmbeddings( (word_embeddings): Embedding(30522, 768, padding_idx=0) (position_embeddings): Embedding(512, 768) (token_type_embeddings): Embedding(2, 768) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) (encoder): BertEncoder( (layer): ModuleList( (0): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (1): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (2): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (3): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) 
(dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (4): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (5): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (6): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (7): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): 
Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (8): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (9): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (10): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (11): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) ) ) (pooler): BertPooler( (dense): Linear(in_features=768, out_features=768, bias=True) (activation): Tanh() ) ) (dropout): Dropout(p=0.1, inplace=False) (classifier): Linear(in_features=768, out_features=17, 
bias=True) ) ) I want to make only the last Linear layer trainable but I can't access the layers of my model. I have tried iterating model as a list and features but both gives error that there is no such attribute. a.features How can make all my layers frozen except the last linear layer?
As the documentation says,
class transformers.BertForSequenceClassification(config)
Bert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output), e.g. for GLUE tasks. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
So, you can treat your bert model as an nn.Module container and use the attributes and functions documented at https://pytorch.org/docs/stable/nn.html
You can use list(a.parameters()) to get the parameters and set requires_grad to make them trainable or non-trainable.
for pp in a.parameters():
    pp.requires_grad = False
list(a.parameters())[-1].requires_grad = True
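One caveat worth noting: list(a.parameters())[-1] is typically only the bias of the final Linear layer, since parameters are returned in registration order with weight before bias, so the classifier weight would stay frozen. A minimal sketch that unfreezes the whole head by name (the classifier attribute is visible in the model printout above; the learning rate is just a placeholder):
import torch

for param in a.parameters():
    param.requires_grad = False

# `classifier` is the final Linear layer of BertForSequenceClassification,
# as shown in the printed model above; this re-enables its weight and bias.
for param in a.classifier.parameters():
    param.requires_grad = True

# Hand only the still-trainable parameters to the optimizer.
optimizer = torch.optim.Adam((p for p in a.parameters() if p.requires_grad), lr=2e-5)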
https://stackoverflow.com/questions/61374361/
How to initialize empty tensor with certain dimension and append to it through a loop without CUDA out of memory?
I am trying to append tensors (t) generated in a for-loop to a list [T] that accumulates all these tensors. Next, the list [T] requires to be converted into a tensor and needs to be loaded onto GPU. b_output = [] for eachInputId, eachMask in zip(b_input_ids, b_input_mask): # unrolled into each individual document # print(eachInputId.size()) # individual document here outputs = model(eachInputId, token_type_ids=None, attention_mask=eachMask) # combine the [CLS] output layer to form the document doc_output = torch.mean(outputs[1], dim=0) # size = [1, ncol] b_output.append( doc_output ) t_b_output = torch.tensor( b_output ) Another method that I tried was initializing a tensor {T} with fixed dimensions and appending the tensors (t) to it from the for-loop. b_output = torch.zeros(batch_size, hidden_units) b_output.to(device) # cuda device for index, (eachInputId, eachMask) in enumerate(zip(b_input_ids, b_input_mask)): # unrolled into each individual document # print(eachInputId.size()) # individual document here outputs = model(eachInputId, token_type_ids=None, attention_mask=eachMask) # combine the [CLS] output layer to form the document doc_output = torch.mean(outputs[1], dim=0) # size = [1, ncol] b_output[index] = doc_output Doing either of this produces this error: RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 11.17 GiB total capacity; 10.65 GiB already allocated; 2.81 MiB free; 10.86 GiB reserved in total by PyTorch) I assume this is because of appending the tensors (that are on the GPU) to a list (of course not on the GPU) and then trying to convert the list into a tensor (thats not on the GPU). What could be done to append those tensors to another tensor and then load the tensor to GPU for further processing? I will be grateful for any hint or information.
Try using torch.cat instead of torch.tensor. You are currently trying to allocate memory for your new tensor while all the other tensors are still stored, which might be the cause of the out-of-memory error. Change:
t_b_output = torch.tensor( b_output )
to:
t_b_output = torch.cat( b_output )
Hope this helps.
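A side note on shapes (an assumption, since torch.mean(outputs[1], dim=0) on a 2-D tensor actually yields a 1-D vector): whether torch.cat or torch.stack produces the desired [batch, hidden] tensor depends on whether each doc_output keeps a leading dimension of 1. A small sketch with made-up sizes:
import torch

hidden = 768  # made-up size for illustration
docs_1d = [torch.randn(hidden) for _ in range(4)]      # 1-D per document
docs_2d = [torch.randn(1, hidden) for _ in range(4)]   # keeps a leading 1

print(torch.stack(docs_1d).shape)        # torch.Size([4, 768])
print(torch.cat(docs_1d).shape)          # torch.Size([3072]) - flattened
print(torch.cat(docs_2d, dim=0).shape)   # torch.Size([4, 768])
If the loop is inference-only, wrapping it in torch.no_grad() also prevents PyTorch from keeping the autograd graph alive for every document, which is often the real source of the out-of-memory error.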
https://stackoverflow.com/questions/61390323/
Libtorch:how to create tensor from tensorRT fp16 half type pointer?
How do I create a tensor from a TensorRT fp16 half-type pointer in libtorch? I am working on a detection model. I changed its backbone to TensorRT to do FP16 inference, and the detection code such as box decoding and NMS is done in libtorch and torchvision, so how can I create an fp16 tensor from TensorRT half-type pointers? The relevant code to illustrate the issue:
// tensorRT code to get half type outputs
half_float::half* outputs[18];
doInference(*engine, data, outputs, 1);
// to get the final outputs with libtorch
vector<torch::Tensor> output;
// ???? how to feed the data in outputs to output ????
// get the result with libtorch method detect_trt->forward
auto res = detect_trt->forward(output);
Thanks in advance.
I have to do backbone inference in TensorRT, but the post-processing uses libtorch for convenience. I figured it out by using the following code:
out = torch::from_blob(outputs[i], {1, num, dim, dim}, torch::kFloat16).to(device_used);
https://stackoverflow.com/questions/61400032/
Pytorch: Weight in cross entropy loss
I was trying to understand how the weight argument in CrossEntropyLoss works through a practical example. So I first ran it as standard PyTorch code and then computed it manually. But the losses are not the same.
from torch import nn
import torch
softmax = nn.Softmax()
sc = torch.tensor([0.4, 0.36])
loss = nn.CrossEntropyLoss(weight=sc)
input = torch.tensor([[3.0, 4.0], [6.0, 9.0]])
target = torch.tensor([1, 0])
output = loss(input, target)
print(output)
>> 1.7529
Now for the manual calculation, first softmax the input:
print(softmax(input))
>> tensor([[0.2689, 0.7311],
           [0.0474, 0.9526]])
and then take the negative log of the correct class probability and multiply by the respective weight:
((-math.log(0.7311)*0.36) - (math.log(0.0474)*0.4))/2
>> 0.6662
What am I missing here?
To compute the weight of your classes, use sklearn.utils.class_weight.compute_class_weight(class_weight, *, classes, y) (read the sklearn documentation for details). This will return an array, i.e. the weights. For example:
import numpy as np
import torch
from torch import nn
from sklearn.utils import class_weight

x = torch.randn(20, 5)
y = torch.randint(0, 5, (20,))  # classes
class_weights = class_weight.compute_class_weight('balanced', np.unique(y), y.numpy())
class_weights = torch.tensor(class_weights, dtype=torch.float)
print(class_weights)  # tensor([1.0000, 1.0000, 4.0000, 1.0000, 0.5714])
Then pass it to nn.CrossEntropyLoss's weight argument:
criterion = nn.CrossEntropyLoss(weight=class_weights, reduction='mean')
loss = criterion(...)
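As a side note on the manual calculation in the original question: with reduction='mean' and a weight tensor, PyTorch divides by the sum of the selected class weights rather than by the number of samples. A minimal sketch reproducing the 1.7529 value from the question:
import math
import torch
from torch import nn

weights = torch.tensor([0.4, 0.36])
logits = torch.tensor([[3.0, 4.0], [6.0, 9.0]])
targets = torch.tensor([1, 0])

loss = nn.CrossEntropyLoss(weight=weights)(logits, targets)

# Each per-sample loss is scaled by the weight of its target class, and the
# mean is taken over the sum of those weights (0.36 + 0.4), not over 2.
manual = (-math.log(0.7311) * 0.36 - math.log(0.0474) * 0.4) / (0.36 + 0.4)
print(loss.item(), manual)  # both are approximately 1.75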
https://stackoverflow.com/questions/61414065/
Installing Pytorch Transformers in AWS Sagemaker
I'm trying to install the pytorch transformers package for my AWS Sagemaker notebook instance. However, it keeps giving me error of "No Module Found" for the package when i run my entry point script. I saw in an example for TensorFlowModel which requires to set up env but for Pytorch it is not the case (How do I load python modules which are not available in Sagemaker?). Anyway, below is my code : estimator = PyTorch(entry_point='model.py', role=role, framework_version='1.4.0', train_instance_count=2, train_instance_type='ml.c4.xlarge', source_dir = 'src', hyperparameters={ 'train_path': 's3://bucket-train', 'validation_path': 's3://bucket-val', 'epochs': 3, 'backend': 'gloo' })
Although you may be running that command from a SageMaker notebook, the training job you launch with the PyTorch estimator does not run on the notebook. It runs on remote, ephemeral infrastructure, so you need to install your package on that remote machine. To do that, add a requirements.txt file to the src source directory (the one passed as source_dir) that contains the list of extra packages you want to install, such as the transformers package.
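For reference, a hypothetical layout of that source directory (the package name is just an example; pin a version if you need reproducibility):
# src/
# ├── model.py          # the entry_point script; it can now `import transformers`
# └── requirements.txt  # contains e.g. the single line: transformers
#
# SageMaker pip-installs requirements.txt on the remote training instances
# before running model.py, so the import succeeds there.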
https://stackoverflow.com/questions/61415420/
ResNet model of pytorch and tensorflow give different results when stride=2
class BasicBlock(nn.Module): def __init__(self, in_planes, out_planes, stride, dropRate=0.0): super(BasicBlock, self).__init__() self.bn1 = nn.BatchNorm2d(in_planes) self.relu1 = nn.ReLU(inplace=True) self.conv1 = nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=1, bias=False) # 1 self.bn2 = nn.BatchNorm2d(out_planes) self.relu2 = nn.ReLU(inplace=True) self.conv2 = nn.Conv2d(out_planes, out_planes, kernel_size=3, stride=1, padding=1, bias=False) self.droprate = dropRate self.equalInOut = (in_planes == out_planes) self.convShortcut = (not self.equalInOut) and nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, padding=0, bias=False) or None def forward(self, x): if not self.equalInOut: x = self.relu1(self.bn1(x)) else: out = self.relu1(self.bn1(x)) out = self.relu2(self.bn2(self.conv1(out if self.equalInOut else x))) if self.droprate > 0: out = F.dropout(out, p=self.droprate, training=self.training) out = self.conv2(out) if self.convShortcut is not None: return torch.add(x if self.equalInOut else self.convShortcut(x), out) class NetworkBlock(nn.Module): def __init__(self, nb_layers, in_planes, out_planes, block, stride, dropRate=0.0): super(NetworkBlock, self).__init__() self.layer = self._make_layer(block, in_planes, out_planes, nb_layers, stride, dropRate) def _make_layer(self, block, in_planes, out_planes, nb_layers, stride, dropRate): layers = [] for i in range(int(nb_layers)): layers.append(block(i == 0 and in_planes or out_planes, out_planes, i == 0 and stride or 1, dropRate)) return nn.Sequential(*layers) def forward(self, x): return self.layer(x) class WideResNet(nn.Module): def __init__(self, depth=34, num_classes=10, widen_factor=10, dropRate=0.0): super(WideResNet, self).__init__() nChannels = [16, 16 * widen_factor, 32 * widen_factor, 64 * widen_factor] assert ((depth - 4) % 6 == 0) n = (depth - 4) / 6 block = BasicBlock # 1st conv before any network block self.conv1 = nn.Conv2d(3, nChannels[0], kernel_size=3, stride=1, padding=1, bias=False) # 1st block self.block1 = NetworkBlock(n, nChannels[0], nChannels[1], block, 1, dropRate) # 1st sub-block self.sub_block1 = NetworkBlock(n, nChannels[0], nChannels[1], block, 1, dropRate) # 2nd block self.block2 = NetworkBlock(n, nChannels[1], nChannels[2], block, 2, dropRate) # 2 # 3rd block self.block3 = NetworkBlock(n, nChannels[2], nChannels[3], block, 2, dropRate) # 2 # global average pooling and classifier self.bn1 = nn.BatchNorm2d(nChannels[3]) self.relu = nn.ReLU(inplace=True) self.fc = nn.Linear(nChannels[3], num_classes) self.nChannels = nChannels[3] for m in self.modules(): if isinstance(m, nn.Conv2d): n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels m.weight.data.normal_(0, math.sqrt(2. 
/ n)) elif isinstance(m, nn.BatchNorm2d): m.weight.data.fill_(1) m.bias.data.zero_() elif isinstance(m, nn.Linear): m.bias.data.zero_() def forward(self, x): out = self.conv1(x) out = self.block1(out) out = self.block2(out) out = self.block3(out) out = self.relu(self.bn1(out)) out = F.avg_pool2d(out, 8) out = out.view(-1, self.nChannels) return self.fc(out) def _conv(self, name, x, filter_size, in_filters, out_filters, strides, padding='SAME'): """Convolution.""" with tf.variable_scope(name): n = filter_size * filter_size * out_filters kernel = tf.get_variable( 'DW', [filter_size, filter_size, in_filters, out_filters], tf.float32, initializer=tf.random_normal_initializer( stddev=np.sqrt(2.0/n))) return tf.nn.conv2d(x, kernel, strides, padding=padding) def _residual(self, x, in_filter, out_filter, stride, activate_before_residual=False, is_log=False): """Residual unit with 2 sub layers.""" if activate_before_residual: x = self._batch_norm('bn1', x) x = self._relu(x) orig_x = x else: orig_x = x x = self._batch_norm('bn1', x) x = self._relu(x) x = self._conv('conv1', x, 3, in_filter, out_filter, stride) x = self._batch_norm('bn2', x) x = self._relu(x) x = self._conv('conv2', x, 3, out_filter, out_filter, [1, 1, 1, 1]) if in_filter != out_filter: orig_x = self._conv('shortcut_conv', orig_x, filter_size=1, in_filters=in_filter, out_filters=out_filter, strides=stride, padding="VALID") x += orig_x return x def _build_model(self): assert self.mode == 'train' or self.mode == 'eval' with tf.variable_scope('input'): self.x_input = tf.placeholder(tf.float32, shape=[None, 32, 32, 3]) self.y_input = tf.placeholder(tf.float32, shape=[None, 10]) self.is_training = tf.placeholder(tf.bool, shape=None) x = self._conv('conv1.weight', self.x_input, 3, 3, 16, self._stride_arr(1)) strides = [1, 2, 2] activate_before_residual = [True, True, True] res_func = self._residual # wide residual network (https://arxiv.org/abs/1605.07146v1) filters = [16, 160, 320, 640] with tf.variable_scope('block1.layer.0'): x = res_func(x, filters[0], filters[1], self._stride_arr(strides[0]), activate_before_residual[0]) for i in range(1, 5): with tf.variable_scope('block1.layer.%d' % i): x = res_func(x, filters[1], filters[1], self._stride_arr(1), False) with tf.variable_scope('block2.layer.0'): x = res_func(x, filters[1], filters[2], self._stride_arr(strides[1]), activate_before_residual[1], is_log=True) for i in range(1, 5): with tf.variable_scope('block2.layer.%d' % i): x = res_func(x, filters[2], filters[2], self._stride_arr(1), False) with tf.variable_scope('block3.layer.0'): x = res_func(x, filters[2], filters[3], self._stride_arr(strides[2]), activate_before_residual[2]) for i in range(1, 5): with tf.variable_scope('block3.layer.%d' % i): x = res_func(x, filters[3], filters[3], self._stride_arr(1), False) x = self._batch_norm('bn1', x) x = self._relu(x) x = self._global_avg_pool(x) with tf.variable_scope('fc'): self.pre_softmax = self._fully_connected(x, 10) I'm doing experiment on "adversarial defense", and I checked that the performances of pytorch and tensorflow is different with same weights (I exported it as numpy and loaded to pytorch and tensorflow) I printed out each result of WideResNet34 and calculate the difference of each output, then, the above output of below image comes out The results start to be different from block2. 
Then, when I changed the stride of every block to 1 (i.e. the stride of blocks 2 and 3), the second set of results came out. The differences are negligible at all layers, so I think the difference appears only when stride=2. I don't know why there is no difference when stride=1 but a difference when stride=2... Does anyone know what is going on here?
I finally found that the problem was the "padding". TensorFlow's "SAME" padding zero-pads asymmetrically (left=0, right=1, top=0, bottom=1) when symmetric padding would result in an odd number... PyTorch does not support asymmetric padding in nn.Conv2d, so it zero-pads symmetrically (left=1, right=1, top=1, bottom=1).
So, I think that when input size=8, filter size=3, and stride=2, the index of the left-top of the filter in TensorFlow would be 0, 2, 4, 6 but in PyTorch it would be -1 (zero-pad), 1, 3, 5...
I checked that when I zero-pad asymmetrically using nn.ZeroPad2d, it gives almost the same results (2-norm diff < 1e-2):
class BasicBlock(nn.Module):
    def __init__(self, in_planes, out_planes, stride, dropRate=0.0):
        super(BasicBlock, self).__init__()
        self.bn1 = nn.BatchNorm2d(in_planes)
        self.relu1 = nn.ReLU(inplace=True)
        if stride == 1:
            self.conv1 = nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=1, bias=False)  # 1
        else:
            self.conv1 = nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=0, bias=False)  # 1
            self.pad1 = nn.ZeroPad2d((0, 1, 0, 1))  # 0,1,0,1
        self.stride = stride
        self.bn2 = nn.BatchNorm2d(out_planes)
        self.relu2 = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(out_planes, out_planes, kernel_size=3, stride=1, padding=1, bias=False)
        self.droprate = dropRate
        self.equalInOut = (in_planes == out_planes)
        self.convShortcut = (not self.equalInOut) and nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, padding=0, bias=False) or None

    def forward(self, x):
        if not self.equalInOut:
            x = self.relu1(self.bn1(x))
        else:
            out = self.relu1(self.bn1(x))
        if self.stride == 1:
            out = self.relu2(self.bn2(self.conv1(out if self.equalInOut else x)))
        else:
            out = self.relu2(self.bn2(self.conv1(out if self.equalInOut else self.pad1(x))))
        if self.droprate > 0:
            out = F.dropout(out, p=self.droprate, training=self.training)
        out = self.conv2(out)
        return torch.add(x if self.equalInOut else self.convShortcut(x), out)
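A minimal standalone check of that trick (sizes are made up): padding only on the right and bottom with nn.ZeroPad2d reproduces TensorFlow's "SAME" behaviour for stride 2 on an even-sized input:
import torch
import torch.nn as nn

x = torch.randn(1, 3, 8, 8)
conv = nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=0, bias=False)
pad_tf_same = nn.ZeroPad2d((0, 1, 0, 1))  # (left, right, top, bottom): pad right/bottom only

out = conv(pad_tf_same(x))
print(out.shape)  # torch.Size([1, 16, 4, 4]), the same spatial size TF "SAME" would give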
https://stackoverflow.com/questions/61422046/
Loss not Converging for CNN Model
Image Transformation and Batch transform = transforms.Compose([ transforms.Resize((100,100)), transforms.ToTensor(), transforms.Normalize([0.485,0.456,0.406],[0.229,0.224,0.225]) ]) data_set = datasets.ImageFolder(root="/content/drive/My Drive/models/pokemon/dataset",transform=transform) train_loader = DataLoader(data_set,batch_size=10,shuffle=True,num_workers=6) Below is my Model class pokimonClassifier(nn.Module): def __init__(self): super().__init__() self.conv1 = nn.Conv2d(3,6,3,1) self.conv2 = nn.Conv2d(6,18,3,1) self.fc1 = nn.Linear(23*23*18,520) self.fc2 = nn.Linear(520,400) self.fc3 = nn.Linear(400,320) self.fc4 = nn.Linear(320,149) def forward(self,x): x = F.relu(self.conv1(x)) x = F.max_pool2d(x,2,2) x = F.relu(self.conv2(x)) x = F.max_pool2d(x,2,2) x = x.view(-1,23*23*18) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.relu(self.fc3(x)) x = F.log_softmax(self.fc4(x), dim=1) return x Creating Instance of model, Use GPU, Set Criterion and optimizer Here is firsr set lr = 0.001 then later changed to 0.0001 model = pokimonClassifier() model.to('cuda') criterion = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(model.parameters(),lr = 0.0001) Training Dataset for e in range(epochs): train_crt = 0 for b,(train_x,train_y) in enumerate(train_loader): b+=1 train_x, train_y = train_x.to('cuda'), train_y.to('cuda') # train model y_preds = model(train_x) loss = criterion(y_preds,train_y) # analysis model predicted = torch.max(y_preds,1)[1] correct = (predicted == train_y).sum() train_crt += correct # print loss and accuracy if b%50 == 0: print(f'Epoch {e} batch{b} loss:{loss.item()} ') # updating weights and bais optimizer.zero_grad() loss.backward() optimizer.step() train_loss.append(loss) train_correct.append(train_crt) My loss value remains between 4 - 3 and its not converging to 0. I am super new to deep learning and I don't know much about it. The dataset I am using is here: https://www.kaggle.com/thedagger/pokemon-generation-one A help will be much appreciated. Thank You
The problem with your network is that you are applying softmax() twice - once at the fc4() layer and once more when using nn.CrossEntropyLoss(). According to the official documentation, PyTorch takes care of the softmax() while applying nn.CrossEntropyLoss(). So in your code, please change this line:
x = F.log_softmax(self.fc4(x), dim=1)
to
x = self.fc4(x)
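A minimal sketch of the intended pairing, with made-up shapes: the network should return raw logits, and nn.CrossEntropyLoss applies log-softmax plus NLL internally:
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(10, 149)            # batch of 10, 149 classes as in the model above
targets = torch.randint(0, 149, (10,))

loss = nn.CrossEntropyLoss()(logits, targets)
loss_manual = F.nll_loss(F.log_softmax(logits, dim=1), targets)  # equivalent manual form
print(loss.item(), loss_manual.item())   # identical values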
https://stackoverflow.com/questions/61426911/
XLNetForSequenceClassification Pretrained model unable to load
I tried loading the XLNet pretrained but this occurred. I've tried this before and it worked, however, now it doesn't. Any suggestion on how to fix this problem? model = XLNetForSequenceClassification.from_pretrained("xlnet-large-cased", num_labels = 2) model.to(device) --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-55-d6f698a3714b> in <module>() ----> 1 model = XLNetForSequenceClassification.from_pretrained("xlnet-large-cased", num_labels = 2) 2 model.to(device) 3 frames /usr/local/lib/python3.6/dist-packages/torch/nn/modules/sparse.py in __init__(self, num_embeddings, embedding_dim, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse, _weight) 95 self.scale_grad_by_freq = scale_grad_by_freq 96 if _weight is None: ---> 97 self.weight = Parameter(torch.Tensor(num_embeddings, embedding_dim)) 98 self.reset_parameters() 99 else: RuntimeError: Trying to create tensor with negative dimension -1: [-1, 1024]
You should import XLNetForSequenceClassification from transformers and not from pytorch-transformers. First, make sure transformers is installed:
> pip install transformers
Then, in your code:
from transformers import XLNetForSequenceClassification
model = XLNetForSequenceClassification.from_pretrained("xlnet-large-cased", num_labels = 2)
This should work.
https://stackoverflow.com/questions/61431500/
Get the input channels for the conv2d from previous layer?
I was wondering if there are many convolutional layers (conv1 --> conv2 ). How can we get the input channels parameter for the conv2 from the conv1 output channel? class MyModel(nn.Module): def __init__(self, in_ch, num_features, out_ch2): super(MyModel, self).__init__() self.conv1 = nn.Conv2D(in_channels,num_features) self.conv2 = nn.Conv2D(in_channnels_from_out_channels_of_conv1,out_ch2) Can I get the out_channels from the conv1 layer and use it as in_ch for conv2?
The second parameter of the nn.Conv2d constructor is the number of output channels:
self.conv1 = nn.Conv2d(in_channels, conv1_out_channels)
self.conv2 = nn.Conv2d(conv1_out_channels, out_ch2)
as described in the docs. It is also available as a property:
self.conv1.out_channels
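A small sketch of how that looks in the module from the question (the kernel size is an assumption, since the original constructor calls omit it):
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self, in_ch, num_features, out_ch2):
        super(MyModel, self).__init__()
        self.conv1 = nn.Conv2d(in_ch, num_features, kernel_size=3)
        # reuse conv1's out_channels as conv2's in_channels
        self.conv2 = nn.Conv2d(self.conv1.out_channels, out_ch2, kernel_size=3)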
https://stackoverflow.com/questions/61441174/
PyTorch Dataset and Conv1d using a ton of memory
I am trying to write a convolutional neural network in pytorch. I'm very new to machine learning and PyTorch, so I'm not very familiar with the package. I have written a custom dataset and it has loaded my data from a csv file properly. However, when I load it into a data loader, my system monitor shows python using a huge amount of memory. I'm currently using only a fraction of my data set, and one instance uses about 5 gigs as a data loader. My dataset is 1 dimensional tensors. Each one is very long - about 33 million values. I used sys.getsizeof(train_set.sample_list[0][0].storage()) to check the size of the underlying tensor, and it was only 271 megabytes. Additionally, if I continue on and create an instance of my CNN, the initializer eats up memory until my kernel crashes. The reason for this is unclear to me. Here is the code for my custom Dataset: def __init__(self, csv_file, train): self.train = train self.df_tmp = pd.read_csv(csv_file, header=None, sep='\t') self.df_tmp.drop(self.df_tmp.shape[1]-1, axis=1, inplace=True) self.df = self.df_tmp.transpose() self.sample_list = [] for i in range(self.df.shape[0]): #num rows, 33 million ish sample = torch.tensor([self.df.iloc[i][1:].values]) label = torch.tensor(self.df.iloc[i][0]) self.sample_list.append((sample, label)) def __len__(self): return len(self.sample_list) def __getitem__(self, idx): return self.sample_list[idx] And the code for the NN: #input batch shape is (9 x 33889258 x 1) def __init__(self): super(CNN, self).__init__() #input channels 1, output 3 self.conv1 = torch.nn.Conv1d(1, out_channels=3, kernel_size=(100), stride=10, padding=1) #size in is 3,1,33889258 self.pool = torch.nn.MaxPool1d(kernel_size=2, stride=2, padding=0) self.fc1 = torch.nn.Linear(45750366, 1000) #3 * 1 * 3388917 self.fc2 = torch.nn.Linear(1000, 2) def forward(self, x): #size: (1x1x33889258) to (3x1x33889258) tmp = self.conv1(x.float()) x = F.relu(tmp) # x = self.pool(x) #whatever shape comes out of here needs to go into x.view x = x.view(45750366) #-1, 1*1*3388927 x = self.fc1(x) x = F.relu(x) x = self.fc2(x) return(x) Some of my input sizes might be off, I'm still working that out but the memory issue is preventing me from making progress
You are storing all datapoints in a list (i.e. in memory), which defeats the purpose of a custom dataset/dataloader. You should just keep a reference to the dataframe in your dataset class and, for each index, return the correct data, something like:
def __init__(self, csv_file, train):
    self.train = train
    self.df_tmp = pd.read_csv(csv_file, header=None, sep='\t')
    self.df_tmp.drop(self.df_tmp.shape[1]-1, axis=1, inplace=True)
    self.df = self.df_tmp.transpose()

def __len__(self):
    return self.df.shape[0]

def __getitem__(self, idx):
    sample = torch.tensor([self.df.iloc[idx][1:].values])
    label = torch.tensor(self.df.iloc[idx][0])
    return sample, label
One small note: you are returning tensors from the dataset's __getitem__ method; returning a pure numpy array is preferred and easier because the dataloader will convert it into a pytorch tensor.
https://stackoverflow.com/questions/61452287/
Unable to find a valid cuDNN algorithm to run convolution
I just got this message when trying to run a feed forward torch.nn.Conv2d, getting the following stacktrace: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-26-04bd4a00565d> in <module> 3 4 # call training function ----> 5 losses = train(D, G, n_epochs=n_epochs) <ipython-input-24-b539315e0aa0> in train(D, G, n_epochs, print_every) 46 real_images = real_images.cuda() 47 ---> 48 D_real = D(real_images) 49 d_real_loss = real_loss(D_real, True) # smoothing label 1 => 0.9 50 ~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 548 result = self._slow_forward(*input, **kwargs) 549 else: --> 550 result = self.forward(*input, **kwargs) 551 for hook in self._forward_hooks.values(): 552 hook_result = hook(self, input, result) <ipython-input-14-bf68e57c25ff> in forward(self, x) 48 """ 49 ---> 50 x = self.leaky_relu(self.conv1(x)) 51 x = self.leaky_relu(self.conv2(x)) 52 x = self.leaky_relu(self.conv3(x)) ~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 548 result = self._slow_forward(*input, **kwargs) 549 else: --> 550 result = self.forward(*input, **kwargs) 551 for hook in self._forward_hooks.values(): 552 hook_result = hook(self, input, result) ~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/container.py in forward(self, input) 98 def forward(self, input): 99 for module in self: --> 100 input = module(input) 101 return input 102 ~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 548 result = self._slow_forward(*input, **kwargs) 549 else: --> 550 result = self.forward(*input, **kwargs) 551 for hook in self._forward_hooks.values(): 552 hook_result = hook(self, input, result) ~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/conv.py in forward(self, input) 347 348 def forward(self, input): --> 349 return self._conv_forward(input, self.weight) 350 351 class Conv3d(_ConvNd): ~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight) 344 _pair(0), self.dilation, self.groups) 345 return F.conv2d(input, weight, self.bias, self.stride, --> 346 self.padding, self.dilation, self.groups) 347 348 def forward(self, input): RuntimeError: Unable to find a valid cuDNN algorithm to run convolution Running nvidia-smi shows: +-----------------------------------------------------------------------------+ | NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 10.2 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 GeForce GTX 770 On | 00000000:01:00.0 N/A | N/A | | 38% 50C P8 N/A / N/A | 624MiB / 4034MiB | N/A Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | 0 Not Supported | +-----------------------------------------------------------------------------+ I'm using Python 3.7, Pytorch 1.5, and GPU is Nvidia GeForce GTX 770, running on Ubuntu 18.04.2. I haven't found that error message anywhere. 
Does it ring any bell? Thanks a lot in advance.
According to this answer for similar issue with tensorflow, it could occur because the VRAM memory limit was hit (which is rather non-intuitive from the error message). For my case with PyTorch model training, decreasing batch size helped. You could try this or maybe decrease your model size to consume less VRAM.
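If you want to confirm that it is a memory problem rather than a genuine cuDNN issue, a cheap check is to print the allocator counters right before the failing convolution (a sketch; the exact numbers depend on your run and PyTorch version):
import torch

print(torch.cuda.get_device_name(0))
print("allocated: %.0f MiB" % (torch.cuda.memory_allocated() / 1024**2))
print("reserved:  %.0f MiB" % (torch.cuda.memory_reserved() / 1024**2))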
https://stackoverflow.com/questions/61467751/
Stacking binary mask frames in pytorch?
I am using PyTorch to attempt to create a 4-dimensional tensor (binary mask) using a "stack" of three-dimensional tensors that each hold binary masks. The three-dimensional tensors have n instances of some segmented object in a binary mask that is 704 high and 1280 wide. Let's say I have three of these 3-dimensional tensors. They are all of size [n, 704, 1280], where n is the number of individual objects (and thus, individual binary masks) for the frame, 704 is the height, and 1280 is the width:
t1.size = torch.Size([9, 704, 1280])
t2.size = torch.Size([12, 704, 1280])
t3.size = torch.Size([10, 704, 1280])
I want to create a stack of them by adding a fourth dimension, a, i.e. [a, n, 704, 1280], where a indexes the original 3-dimensional tensor. The goal is to have a 4-dimensional tensor that can hold the data of numerous 3-dimensional tensors. I have tried torch.stack([t1, t2, t3]) but that does not work since the second dimension, n, is not consistent between all the tensors. How can I go about this since stack does not work?
You can't. All tensor dimensions except the first must be the same. The only way to do this is to append dummy rows to the first and third tensors to make them all the same size (12, 704, 1280). Or you can stack them into one 3-dim tensor (i.e. concatenate along the first dimension).
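A sketch of the padding idea (shapes taken from the question): pad each mask tensor with empty masks along the first dimension up to the largest instance count, then stack:
import torch
import torch.nn.functional as F

t1 = torch.zeros(9, 704, 1280)
t2 = torch.zeros(12, 704, 1280)
t3 = torch.zeros(10, 704, 1280)

tensors = [t1, t2, t3]
n_max = max(t.shape[0] for t in tensors)
# F.pad takes (W_left, W_right, H_top, H_bottom, N_front, N_back) for a 3-D tensor
padded = [F.pad(t, (0, 0, 0, 0, 0, n_max - t.shape[0])) for t in tensors]
stacked = torch.stack(padded)
print(stacked.shape)  # torch.Size([3, 12, 704, 1280])
The extra rows are all-zero masks, so downstream code has to either ignore them or keep track of the true instance count per frame.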
https://stackoverflow.com/questions/61468368/
pytorch+tensorboard error " AttributeError: 'Tensor' object has no attribute 'items' "
Good afternoon. I want to log the loss of the train using the tensorboard in pytorch. I got an error there. AttributeError: 'Tensor' object has no attribute 'items' I want to solve this error and check the log using tensorboard. Here I show my code. l_mse = mseloss(img,decoder_out) writer.add_scalars("MSE",l_mse,n_iter) img is real image in GAN and decoder_out is Generator output. then I have error blow. Traceback (most recent call last): File "main.py", line 39, in <module> main() File "main.py", line 22, in main solover.train(dataloader) File "path to my file", line 239, in train writer.add_scalars("MSE",l_mse,n_iter) File "/~~/anaconda3/lib/python3.7/site-packages/torch/utils/tensorboard/writer.py", line 378, in add_scalars for tag, scalar_value in tag_scalar_dict.items(): AttributeError: 'Tensor' object has no attribute 'items' I tried writer.add_scalars("MSE",l_mse,n_iter).eval() writer.add_scalars("MSE",l_mse.item(),n_iter) writer.add_scalars("MSE",l_mse.detach().cpu().numpy(),n_iter) but still not work well.
You are calling writer.add_scalars (with an s). From the PyTorch TensorboardX documentation you can see that this function expects a dictionary as input.
add_scalars(main_tag, tag_scalar_dict, global_step=None, walltime=None)
tag_scalar_dict (dict) – key-value pairs storing the tag and corresponding values
writer = SummaryWriter()
r = 5
for i in range(100):
    writer.add_scalars('run_14h', {'xsinx': i*np.sin(i/r), 'xcosx': i*np.cos(i/r), 'tanx': np.tan(i/r)}, i)
writer.close()
Use writer.add_scalar instead. To log a scalar value, use writer.add_scalar('myscalar', value, iteration). Note that the program complains if you feed it a PyTorch tensor. Remember to extract the scalar value with x.item() if x is a torch scalar tensor.
writer.add_scalar("MSE", l_mse.item(), n_iter)
https://stackoverflow.com/questions/61471370/
CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`
I got the following error when I ran my pytorch deep learning model in colab /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in linear(input, weight, bias) 1370 ret = torch.addmm(bias, input, weight.t()) 1371 else: -> 1372 output = input.matmul(weight.t()) 1373 if bias is not None: 1374 output += bias RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)` I even reduced batch size from 128 to 64 i.e., reduced to half, but still, I got this error. Earlier, I ran the same code with a batch size of 128 but didn't get any error like this.
No, batch size does not matter in this case. The most likely reason is that there is an inconsistency between the number of labels and the number of output units. Try printing the size of the final output in the forward pass and check the size of the output:
print(model.fc1(x).size())
Here fc1 would be replaced by the name of your model's last linear layer before returning.
Make sure that label.size() is equal to prediction.size() before calculating the loss.
And even after fixing that problem, you'll have to restart the GPU runtime (I needed to do this in my case when using a Colab GPU).
This answer might also be helpful.
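A quick standalone illustration of the mismatch check suggested above (the tensors are made up; in practice use your model output and batch labels):
import torch

y_preds = torch.randn(8, 10)                       # model outputs 10 classes
labels = torch.tensor([0, 3, 16, 5, 9, 12, 1, 7])  # data contains labels up to 16

if labels.max().item() >= y_preds.shape[1]:
    print("Mismatch: largest label is", labels.max().item(),
          "but the model only outputs", y_preds.shape[1], "classes")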
https://stackoverflow.com/questions/61473330/
How calculate the dice coefficient for multi-class segmentation task using Python?
I am wondering how can I calculate the dice coefficient for multi-class segmentation. Here is the script that would calculate the dice coefficient for the binary segmentation task. How can I loop over each class and calculate the dice for each class? Thank you in advance import numpy def dice_coeff(im1, im2, empty_score=1.0): im1 = numpy.asarray(im1).astype(numpy.bool) im2 = numpy.asarray(im2).astype(numpy.bool) if im1.shape != im2.shape: raise ValueError("Shape mismatch: im1 and im2 must have the same shape.") im_sum = im1.sum() + im2.sum() if im_sum == 0: return empty_score # Compute Dice coefficient intersection = numpy.logical_and(im1, im2) return (2. * intersection.sum() / im_sum)
You can use dice_score for binary classes and then use binary maps for all the classes repeatedly to get a multiclass dice score. I'm assuming your images/segmentation maps are in the format (batch/index of image, height, width, class_map). import numpy as np import matplotlib.pyplot as plt def dice_coef(y_true, y_pred): y_true_f = y_true.flatten() y_pred_f = y_pred.flatten() intersection = np.sum(y_true_f * y_pred_f) smooth = 0.0001 return (2. * intersection + smooth) / (np.sum(y_true_f) + np.sum(y_pred_f) + smooth) def dice_coef_multilabel(y_true, y_pred, numLabels): dice=0 for index in range(numLabels): dice += dice_coef(y_true[:,:,:,index], y_pred[:,:,:,index]) return dice/numLabels # taking average num_class = 5 imgA = np.random.randint(low=0, high= 2, size=(5, 64, 64, num_class) ) # 5 images in batch, 64 by 64, num_classes map imgB = np.random.randint(low=0, high= 2, size=(5, 64, 64, num_class) ) plt.imshow(imgA[0,:,:,0]) # for 0th image, class 0 map plt.show() plt.imshow(imgB[0,:,:,0]) # for 0th image, class 0 map plt.show() dice_score = dice_coef_multilabel(imgA, imgB, num_class) print(f'For A and B {dice_score}') dice_score = dice_coef_multilabel(imgA, imgA, num_class) print(f'For A and A {dice_score}')
https://stackoverflow.com/questions/61488732/
Cannot import Pytorch [WinError 126] The specified module could not be found
I'm trying to do a basic install and import of Pytorch/Torchvision on Windows 10. I installed a Anaconda and created a new virtual environment named photo. I opened Anaconda prompt, activated the environment, and I ran: (photo) C:\Users\<user>\anaconda3\envs>conda install pytorch torchvision cudatoolkit=10.2 -c pytorch** This installed pytorch successfully. Running conda list I see: pytorch pytorch/win-64::pytorch-1.5.0-py3.7_cuda102_cudnn7_0 torchvision pytorch/win-64::torchvision-0.6.0-py37_cu102 Then I open a python command prompt while in the virtual environment, and type: import torch The following error is printed: Traceback (most recent call last): File "", line 1, in File "C:\Users\njord\anaconda3\envs\photo\lib\site-packages\torch__init__.py", line 81, in ctypes.CDLL(dll) File "C:\Users\njord\anaconda3\envs\photo\lib\ctypes__init__.py", line 364, in init self._handle = _dlopen(self._name, mode) OSError: [WinError 126] The specified module could not be found I have uninstalled/reinstalled python and anaconda but still run into the same issue. Advice appreciated.
Refer to below link: https://discuss.pytorch.org/t/cannot-import-torch-on-jupyter-notebook/79334 This is most probably because you are using a CUDA variant of PyTorch on a system that doesn’t have GPU driver installed. That is to say, if you don’t have a Nvidia GPU card, please install the cpu-only package according to the commands on https://pytorch.org. Conda conda install pytorch torchvision cpuonly -c pytorch Pip pip install torch==1.5.0+cpu torchvision==0.6.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
https://stackoverflow.com/questions/61488902/
Resuming pytorch model training raises error “CUDA out of memory”
My goal is to save the model at every epoch as I have to stop the training during the night and I don't want to lose progress. After I trained my model for 1 epoch I interrupted the process via terminal with CTRL+Z. When I tried to resume the training I got this error THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1525909934016/work/aten/src/THC/generic/THCStorage.cu line=58 error=2 : out of memory Traceback (most recent call last): File "train.py", line 174, in <module> train(train_loader, model, optimizer, epoch) File "train.py", line 97, in train loss1 = CE(atts, gts) File "/home/albytree/miniconda3/envs/cpd-wandb/lib/python2.7/site-packages/torch/nn/modules/module.py", line 491, in __call__ result = self.forward(*input, **kwargs) File "/home/albytree/miniconda3/envs/cpd-wandb/lib/python2.7/site-packages/torch/nn/modules/loss.py", line 500, in forward reduce=self.reduce) File "/home/albytree/miniconda3/envs/cpd-wandb/lib/python2.7/site-packages/torch/nn/functional.py", line 1516, in binary_cross_entropy_with_logits max_val = (-input).clamp(min=0) RuntimeError: cuda runtime error (2) : out of memory at /opt/conda/conda-bld/pytorch_1525909934016/work/aten/src/THC/generic/THCStorage.cu:58 The code that manages everything is this one import wandb import torch import torch.nn.functional as F from torch.autograd import Variable import numpy as np import pdb, os, argparse from datetime import datetime from model.CPD_models import CPD_VGG from model.CPD_ResNet_models import CPD_ResNet from data import get_loader from utils import clip_gradient, adjust_lr parser = argparse.ArgumentParser() parser.add_argument('--epoch', type=int, default=10, help='epoch number') parser.add_argument('--lr', type=float, default=1e-4, help='learning rate') parser.add_argument('--batchsize', type=int, default=1, help='training batch size') parser.add_argument('--trainsize', type=int, default=352, help='training dataset size') parser.add_argument('--clip', type=float, default=0.5, help='gradient clipping margin') parser.add_argument('--is_ResNet', type=bool, default=False, help='VGG or ResNet backbone') parser.add_argument('--decay_rate', type=float, default=0.1, help='decay rate of learning rate') parser.add_argument('--decay_epoch', type=int, default=50, help='every n epochs decay learning rate') parser.add_argument('--model_id', type=str, required=True, help='required unique id for trained model name') parser.add_argument('--resume', type=str, default='', help='path to resume model training from checkpoint') parser.add_argument('--wandb', type=bool, default=False, help='enable wandb tracking model training') opt = parser.parse_args() model_id = opt.model_id WANDB_EN = opt.wandb if WANDB_EN: wandb.init(entity="albytree", project="cpd-train") # Add all parsed config in one line if WANDB_EN: wandb.config.update(opt) tot_epochs = opt.epoch print("Training Info") print("EPOCHS: {}".format(opt.epoch)) print("LEARNING RATE: {}".format(opt.lr)) print("BATCH SIZE: {}".format(opt.batchsize)) print("TRAIN SIZE: {}".format(opt.trainsize)) print("CLIP: {}".format(opt.clip)) print("USING ResNet backbone: {}".format(opt.is_ResNet)) print("DECAY RATE: {}".format(opt.decay_rate)) print("DECAY EPOCH: {}".format(opt.decay_epoch)) print("MODEL ID: {}".format(opt.model_id)) # build models if opt.is_ResNet: model = CPD_ResNet() else: model = CPD_VGG() model.cuda() params = model.parameters() optimizer = torch.optim.Adam(params, opt.lr) # If no previous training, 0 epochs passed last_epoch = 0 resume_model_path = opt.resume; if 
resume_model_path: print("Loading previous trained model:"+resume_model_path) checkpoint = torch.load(resume_model_path) model.load_state_dict(checkpoint['model_state_dict']) optimizer.load_state_dict(checkpoint['optimizer_state_dict']) last_epoch = checkpoint['epoch'] last_loss = checkpoint['loss'] dataset_name = 'ECSSD' image_root = '../../DATASETS/TEST/'+dataset_name+'/im/' gt_root = '../../DATASETS/TEST/'+dataset_name+'/gt/' train_loader = get_loader(image_root, gt_root, batchsize=opt.batchsize, trainsize=opt.trainsize) total_step = len(train_loader) print("Total step per epoch: {}".format(total_step)) CE = torch.nn.BCEWithLogitsLoss() #################################################################################################### def train(train_loader, model, optimizer, epoch): model.train() for i, pack in enumerate(train_loader, start=1): optimizer.zero_grad() images, gts = pack images = Variable(images) gts = Variable(gts) images = images.cuda() gts = gts.cuda() atts, dets = model(images) loss1 = CE(atts, gts) loss2 = CE(dets, gts) loss = loss1 + loss2 loss.backward() clip_gradient(optimizer, opt.clip) optimizer.step() if WANDB_EN: wandb.log({'Loss': loss}) if i % 100 == 0 or i == total_step: print('{} Epoch [{:03d}/{:03d}], Step [{:04d}/{:04d}], Loss1: {:.4f} Loss2: {:0.4f}'. format(datetime.now(), epoch, opt.epoch, i, total_step, loss1.data, loss2.data)) # Save model and optimizer training data trained_model_data = { 'model_state_dict': model.state_dict(), 'optimizer_state_dict': optimizer.state_dict(), 'epoch': epoch, 'loss': loss } if opt.is_ResNet: save_path = 'models/CPD_Resnet/' else: save_path = 'models/CPD_VGG/' if not os.path.exists(save_path): print("Making trained model folder [{}]".format(save_path)) os.makedirs(save_path) torch_model_ext = '.pth' wandb_model_ext = '.h5' model_unique_id = model_id+'_'+'ep'+'_'+'%d' % epoch trained_model_name = 'CPD_train' save_full_path_torch = save_path + trained_model_name + '_' + model_unique_id + torch_model_ext save_full_path_wandb = save_path + trained_model_name + '_' + model_unique_id + wandb_model_ext if os.path.exists(save_full_path_torch): print("Torch model with name ["+save_full_path_torch+"] already exists!") answ = raw_input("Do you want to replace it? [y/n] ") if("y" in answ): torch.save(trained_model_data, save_full_path_torch) print("Saved torch model in "+save_full_path_torch) else: torch.save(trained_model_data, save_full_path_torch) print("Saved torch model in "+save_full_path_torch) if WANDB_EN: if os.path.exists(save_full_path_wandb): print("Wandb model with name ["+save_full_path_wandb+"] already exists!") answ = raw_input("Do you want to replace it? [y/n] ") if("y" in answ): wandb.save(save_full_path_wandb) print("Saved wandb model in "+save_full_path_wandb) else: wandb.save(save_full_path_wandb) print("Saved wandb model in "+save_full_path_wandb) #################################################################################################### print("Training on dataset: "+dataset_name) print("Train images path: "+image_root) print("Train gt path: "+gt_root) print("Let's go!") if WANDB_EN: wandb.watch(model, log="all") for epoch in range(last_epoch+1, tot_epochs+1): adjust_lr(optimizer, opt.lr, epoch, opt.decay_rate, opt.decay_epoch) train(train_loader, model, optimizer, epoch) print("TRAINING DONE!") It seems that there's something wrong with the loss but I cannot understand what's the problem. EDIT 1: I trained the model for 2 epochs without errors and then I interrupted the process. 
I also killed the process that was leaved in the gpu memory. After I tried to resume the model saved at epoch 1 and epoch 2 I got the same cuda error but in a different part of the code THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1525909934016/work/aten/src/THC/generic/THCStorage.cu line=58 error=2 : out of memory Traceback (most recent call last): File "train.py", line 191, in <module> train(train_loader, model, optimizer, epoch) File "train.py", line 112, in train atts, dets = model(images) File "/home/albytree/miniconda3/envs/cpd-wandb/lib/python2.7/site-packages/torch/nn/modules/module.py", line 491, in __call__ result = self.forward(*input, **kwargs) File "/home/albytree/TESI/CODICE/Workspace/ALGS/CPD/model/CPD_models.py", line 131, in forward detection = self.agg2(x5_2, x4_2, x3_2) File "/home/albytree/miniconda3/envs/cpd-wandb/lib/python2.7/site-packages/torch/nn/modules/module.py", line 491, in __call__ result = self.forward(*input, **kwargs) File "/home/albytree/TESI/CODICE/Workspace/ALGS/CPD/model/CPD_models.py", line 86, in forward x3_2 = torch.cat((x3_1, self.conv_upsample5(self.upsample(x2_2))), 1) RuntimeError: cuda runtime error (2) : out of memory at /opt/conda/conda-bld/pytorch_1525909934016/work/aten/src/THC/generic/THCStorage.cu:58 Moreover I tried to test the saved model at epoch 1 and epoch 2 and got this error Traceback (most recent call last): File "test.py", line 45, in <module> model.load_state_dict(torch.load(opt.model_path)) File "/home/albytree/miniconda3/envs/cpd-wandb/lib/python2.7/site-packages/torch/nn/modules/module.py", line 721, in load_state_dict self.__class__.__name__, "\n\t".join(error_msgs))) RuntimeError: Error(s) in loading state_dict for CPD_VGG: Missing key(s) in state_dict: "vgg.conv1.conv1_1.bias", "vgg.conv1.conv1_1.weight", "vgg.conv1.conv1_2.bias", "vgg.conv1.conv1_2.weight", "vgg.conv2.conv2_1.bias", "vgg.conv2.conv2_1.weight", "vgg.conv2.conv2_2.bias", "vgg.conv2.conv2_2.weight", "vgg.conv3.conv3_1.bias", "vgg.conv3.conv3_1.weight", "vgg.conv3.conv3_2.bias", "vgg.conv3.conv3_2.weight", "vgg.conv3.conv3_3.bias", "vgg.conv3.conv3_3.weight", "vgg.conv4_1.conv4_1_1.bias", "vgg.conv4_1.conv4_1_1.weight", "vgg.conv4_1.conv4_2_1.bias", "vgg.conv4_1.conv4_2_1.weight", "vgg.conv4_1.conv4_3_1.bias", "vgg.conv4_1.conv4_3_1.weight", "vgg.conv5_1.conv5_1_1.bias", "vgg.conv5_1.conv5_1_1.weight", "vgg.conv5_1.conv5_2_1.bias", "vgg.conv5_1.conv5_2_1.weight", "vgg.conv5_1.conv5_3_1.bias", "vgg.conv5_1.conv5_3_1.weight", "vgg.conv4_2.conv4_1_2.bias", "vgg.conv4_2.conv4_1_2.weight", "vgg.conv4_2.conv4_2_2.bias", "vgg.conv4_2.conv4_2_2.weight", "vgg.conv4_2.conv4_3_2.bias", "vgg.conv4_2.conv4_3_2.weight", "vgg.conv5_2.conv5_1_2.bias", "vgg.conv5_2.conv5_1_2.weight", "vgg.conv5_2.conv5_2_2.bias", "vgg.conv5_2.conv5_2_2.weight", "vgg.conv5_2.conv5_3_2.bias", "vgg.conv5_2.conv5_3_2.weight", "rfb3_1.branch0.0.bias", "rfb3_1.branch0.0.weight", "rfb3_1.branch1.0.bias", "rfb3_1.branch1.0.weight", "rfb3_1.branch1.1.bias", "rfb3_1.branch1.1.weight", "rfb3_1.branch1.2.bias", "rfb3_1.branch1.2.weight", "rfb3_1.branch1.3.bias", "rfb3_1.branch1.3.weight", "rfb3_1.branch2.0.bias", "rfb3_1.branch2.0.weight", "rfb3_1.branch2.1.bias", "rfb3_1.branch2.1.weight", "rfb3_1.branch2.2.bias", "rfb3_1.branch2.2.weight", "rfb3_1.branch2.3.bias", "rfb3_1.branch2.3.weight", "rfb3_1.branch3.0.bias", "rfb3_1.branch3.0.weight", "rfb3_1.branch3.1.bias", "rfb3_1.branch3.1.weight", "rfb3_1.branch3.2.bias", "rfb3_1.branch3.2.weight", "rfb3_1.branch3.3.bias", "rfb3_1.branch3.3.weight", 
"rfb3_1.conv_cat.bias", "rfb3_1.conv_cat.weight", "rfb3_1.conv_res.bias", "rfb3_1.conv_res.weight", "rfb4_1.branch0.0.bias", "rfb4_1.branch0.0.weight", "rfb4_1.branch1.0.bias", "rfb4_1.branch1.0.weight", "rfb4_1.branch1.1.bias", "rfb4_1.branch1.1.weight", "rfb4_1.branch1.2.bias", "rfb4_1.branch1.2.weight", "rfb4_1.branch1.3.bias", "rfb4_1.branch1.3.weight", "rfb4_1.branch2.0.bias", "rfb4_1.branch2.0.weight", "rfb4_1.branch2.1.bias", "rfb4_1.branch2.1.weight", "rfb4_1.branch2.2.bias", "rfb4_1.branch2.2.weight", "rfb4_1.branch2.3.bias", "rfb4_1.branch2.3.weight", "rfb4_1.branch3.0.bias", "rfb4_1.branch3.0.weight", "rfb4_1.branch3.1.bias", "rfb4_1.branch3.1.weight", "rfb4_1.branch3.2.bias", "rfb4_1.branch3.2.weight", "rfb4_1.branch3.3.bias", "rfb4_1.branch3.3.weight", "rfb4_1.conv_cat.bias", "rfb4_1.conv_cat.weight", "rfb4_1.conv_res.bias", "rfb4_1.conv_res.weight", "rfb5_1.branch0.0.bias", "rfb5_1.branch0.0.weight", "rfb5_1.branch1.0.bias", "rfb5_1.branch1.0.weight", "rfb5_1.branch1.1.bias", "rfb5_1.branch1.1.weight", "rfb5_1.branch1.2.bias", "rfb5_1.branch1.2.weight", "rfb5_1.branch1.3.bias", "rfb5_1.branch1.3.weight", "rfb5_1.branch2.0.bias", "rfb5_1.branch2.0.weight", "rfb5_1.branch2.1.bias", "rfb5_1.branch2.1.weight", "rfb5_1.branch2.2.bias", "rfb5_1.branch2.2.weight", "rfb5_1.branch2.3.bias", "rfb5_1.branch2.3.weight", "rfb5_1.branch3.0.bias", "rfb5_1.branch3.0.weight", "rfb5_1.branch3.1.bias", "rfb5_1.branch3.1.weight", "rfb5_1.branch3.2.bias", "rfb5_1.branch3.2.weight", "rfb5_1.branch3.3.bias", "rfb5_1.branch3.3.weight", "rfb5_1.conv_cat.bias", "rfb5_1.conv_cat.weight", "rfb5_1.conv_res.bias", "rfb5_1.conv_res.weight", "agg1.conv_upsample1.bias", "agg1.conv_upsample1.weight", "agg1.conv_upsample2.bias", "agg1.conv_upsample2.weight", "agg1.conv_upsample3.bias", "agg1.conv_upsample3.weight", "agg1.conv_upsample4.bias", "agg1.conv_upsample4.weight", "agg1.conv_upsample5.bias", "agg1.conv_upsample5.weight", "agg1.conv_concat2.bias", "agg1.conv_concat2.weight", "agg1.conv_concat3.bias", "agg1.conv_concat3.weight", "agg1.conv4.bias", "agg1.conv4.weight", "agg1.conv5.bias", "agg1.conv5.weight", "rfb3_2.branch0.0.bias", "rfb3_2.branch0.0.weight", "rfb3_2.branch1.0.bias", "rfb3_2.branch1.0.weight", "rfb3_2.branch1.1.bias", "rfb3_2.branch1.1.weight", "rfb3_2.branch1.2.bias", "rfb3_2.branch1.2.weight", "rfb3_2.branch1.3.bias", "rfb3_2.branch1.3.weight", "rfb3_2.branch2.0.bias", "rfb3_2.branch2.0.weight", "rfb3_2.branch2.1.bias", "rfb3_2.branch2.1.weight", "rfb3_2.branch2.2.bias", "rfb3_2.branch2.2.weight", "rfb3_2.branch2.3.bias", "rfb3_2.branch2.3.weight", "rfb3_2.branch3.0.bias", "rfb3_2.branch3.0.weight", "rfb3_2.branch3.1.bias", "rfb3_2.branch3.1.weight", "rfb3_2.branch3.2.bias", "rfb3_2.branch3.2.weight", "rfb3_2.branch3.3.bias", "rfb3_2.branch3.3.weight", "rfb3_2.conv_cat.bias", "rfb3_2.conv_cat.weight", "rfb3_2.conv_res.bias", "rfb3_2.conv_res.weight", "rfb4_2.branch0.0.bias", "rfb4_2.branch0.0.weight", "rfb4_2.branch1.0.bias", "rfb4_2.branch1.0.weight", "rfb4_2.branch1.1.bias", "rfb4_2.branch1.1.weight", "rfb4_2.branch1.2.bias", "rfb4_2.branch1.2.weight", "rfb4_2.branch1.3.bias", "rfb4_2.branch1.3.weight", "rfb4_2.branch2.0.bias", "rfb4_2.branch2.0.weight", "rfb4_2.branch2.1.bias", "rfb4_2.branch2.1.weight", "rfb4_2.branch2.2.bias", "rfb4_2.branch2.2.weight", "rfb4_2.branch2.3.bias", "rfb4_2.branch2.3.weight", "rfb4_2.branch3.0.bias", "rfb4_2.branch3.0.weight", "rfb4_2.branch3.1.bias", "rfb4_2.branch3.1.weight", "rfb4_2.branch3.2.bias", "rfb4_2.branch3.2.weight", "rfb4_2.branch3.3.bias", 
"rfb4_2.branch3.3.weight", "rfb4_2.conv_cat.bias", "rfb4_2.conv_cat.weight", "rfb4_2.conv_res.bias", "rfb4_2.conv_res.weight", "rfb5_2.branch0.0.bias", "rfb5_2.branch0.0.weight", "rfb5_2.branch1.0.bias", "rfb5_2.branch1.0.weight", "rfb5_2.branch1.1.bias", "rfb5_2.branch1.1.weight", "rfb5_2.branch1.2.bias", "rfb5_2.branch1.2.weight", "rfb5_2.branch1.3.bias", "rfb5_2.branch1.3.weight", "rfb5_2.branch2.0.bias", "rfb5_2.branch2.0.weight", "rfb5_2.branch2.1.bias", "rfb5_2.branch2.1.weight", "rfb5_2.branch2.2.bias", "rfb5_2.branch2.2.weight", "rfb5_2.branch2.3.bias", "rfb5_2.branch2.3.weight", "rfb5_2.branch3.0.bias", "rfb5_2.branch3.0.weight", "rfb5_2.branch3.1.bias", "rfb5_2.branch3.1.weight", "rfb5_2.branch3.2.bias", "rfb5_2.branch3.2.weight", "rfb5_2.branch3.3.bias", "rfb5_2.branch3.3.weight", "rfb5_2.conv_cat.bias", "rfb5_2.conv_cat.weight", "rfb5_2.conv_res.bias", "rfb5_2.conv_res.weight", "agg2.conv_upsample1.bias", "agg2.conv_upsample1.weight", "agg2.conv_upsample2.bias", "agg2.conv_upsample2.weight", "agg2.conv_upsample3.bias", "agg2.conv_upsample3.weight", "agg2.conv_upsample4.bias", "agg2.conv_upsample4.weight", "agg2.conv_upsample5.bias", "agg2.conv_upsample5.weight", "agg2.conv_concat2.bias", "agg2.conv_concat2.weight", "agg2.conv_concat3.bias", "agg2.conv_concat3.weight", "agg2.conv4.bias", "agg2.conv4.weight", "agg2.conv5.bias", "agg2.conv5.weight", "HA.gaussian_kernel". Unexpected key(s) in state_dict: "loss", "optimizer_state_dict", "model_state_dict", "epoch". Maybe I'm not saving the states as intended ? The weird thing is that before adding the resume training code I was just saving the model at every epoch only with torch.save(model.state_dict(), save_full_path_torch) : I managed to train the model in 10 epochs and it still works during testing.
Although this question was posted 5 months ago, in case anyone else comes across a similar issue, here is a simple solution. As explained in the PyTorch FAQ, the tensor defining the loss accumulates history across the training loop because the loss is a differentiable variable here. One simple solution is to typecast the loss to float. Secondly, make sure that you do not use the loss tensor anywhere else; use loss.item() when printing the loss or logging it to wandb.
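A minimal sketch of that pattern (the model and data here are placeholders): keep only a plain Python float of the loss outside the backward pass, never the tensor itself:
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.Adam(model.parameters())
criterion = torch.nn.MSELoss()

running_loss = 0.0
for _ in range(5):
    x, y = torch.randn(4, 10), torch.randn(4, 1)
    loss = criterion(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    running_loss += float(loss)  # or loss.item(); do not accumulate the tensor itself
print(running_loss / 5)
The same idea applies to the checkpoint dict in the question: storing loss.item() under the 'loss' key avoids keeping the graph alive through that reference.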
https://stackoverflow.com/questions/61509872/
Cannot import torch - Image not found
I'm trying to import torch but failed because of Image not Found error. Here is the error when I entered import torch: --------------------------------------------------------------------------- ImportError Traceback (most recent call last) in ----> 1 import torch /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/torch/__init__.py in 134 # See Note [Global dependencies] 135 _load_global_deps() --> 136 from torch._C import * 137 138 __all__ += [name for name in dir(_C) ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/torch/_C.cpython-38-darwin.so, 2): Library not loaded: @rpath/libc++.1.dylib Referenced from: /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/torch/_C.cpython-38-darwin.so Reason: image not found Thank you very much! EDIT: This works for me: >>> install_name_tool -add_rpath /usr/lib /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/torch/_C.cpython-38-darwin.so
I solved this the following way. Suppose you are using a virtual environment; replace YOUR_PATH_TO_PYTHON_ENV with your python environment path:
install_name_tool -add_rpath /usr/lib YOUR_PATH_TO_PYTHON_ENV/venv/lib/python3.8/site-packages/torch/_C.cpython-38-darwin.so
If you are using your local python, it will look something like:
install_name_tool -add_rpath /usr/lib /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/torch/_C.cpython-38-darwin.so
Basically, you need to add the rpath of your library to torch in your python environment.
https://stackoverflow.com/questions/61525299/
How to save custom embedding matrix to .txt file format?
I have made a dictionary which contains word and its corresponding word vector in the following format: {'word1': array([ 4.530e-02, -1.170e-02, -1.201e-01, 2.439e-01, 4.670e-02d], type=float32), 'word2': array([ 4.530e-02, -1.170e-02, -1.201e-01, 2.439e-01, 4.670e-02d], type=float32)} I would like to save this dictionary to custom_embeddings.txt file in the following format: The format of your custom_embeddings.txt file needs to be the token followed by the values of each of the dimensions for the embedding, all separated by a single space, e.g. here's two tokens with 5 dimensional embeddings: word1 4.530e-02 -1.170e-02 -1.201e-01 2.439e-01 4.670e-02d word2 4.530e-02 -1.170e-02 -1.201e-01 2.439e-01 4.670e-02d It will be really helpful if you could tell me how to achieve this result? Thanks in advance
Python's .items() call is an elegant way to loop over all the words in your dictionary. This will save the output as lines of a text file:
txt_filename = 'output.txt'
with open(txt_filename, 'w') as f:
    for word, vec in my_wordvec_dict.items():
        f.write('{} {}\n'.format(word, ' '.join(['{:e}'.format(item) for item in vec])))
https://stackoverflow.com/questions/61530603/
Why do we need state_dict = state_dict.copy()
I want to load the weights of a pre-trained model on my local model. I don’t understand why state_dict = state_dict.copy() is necessary if the two networks have the same name state_dict. # copy state_dict so _load_from_state_dict can modify it metadata = getattr(state_dict, '_metadata', None) state_dict = state_dict.copy() if metadata is not None: state_dict._metadata = metadata def load(module, prefix=''): local_metadata = {} if metadata is None else metadata.get(prefix[:-1], {}) module._load_from_state_dict( state_dict, prefix, local_metadata, True, missing_keys, unexpected_keys, error_msgs) for name, child in module._modules.items(): if child is not None: load(child, prefix + name + '.') start_prefix = '' # print("hasattr(model, 'bert')",hasattr(model, 'bert') ) :false if not hasattr(model, 'bert') and any(s.startswith('bert.') for s in state_dict.keys()): start_prefix = 'bert.' load(model, prefix=start_prefix) Note: the above code is from Hugging Face.
state_dict = state_dict.copy() does exactly what you tell it to do: it copies the state_dict. A state dict holds all the parameters of your model, and copying it allows the two dicts to be modified independently. One should be careful about whether you need a copy or a deepcopy, though!
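A small illustration of what that copy does and does not do: OrderedDict.copy() is shallow, so the parameter tensors are shared, but key-level changes made during loading do not touch the caller's original dict:
import torch

sd = torch.nn.Linear(2, 2).state_dict()
sd_copy = sd.copy()
sd_copy.pop("bias")

print(list(sd.keys()))                     # ['weight', 'bias'] - original unchanged
print(list(sd_copy.keys()))                # ['weight']
print(sd["weight"] is sd_copy["weight"])   # True - no tensor data is duplicated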
https://stackoverflow.com/questions/61531864/
Can conditionality be added inside Pytorch nn.Sequential()
Is there a way to add conditional statements inside the nn.Sequential(). Something similar to the code below. import torch class Building_Blocks(torch.nn.Module): def conv_block (self, in_features, out_features, kernal_size, upsample=False): block = torch.nn.Sequential( torch.nn.Conv2d(in_features, out_features, kernal_size), torch.nn.ReLU(inplace = True), torch.nn.Conv2d(out_features, out_features, kernal_size), torch.nn.ReLU(inplace = True), if(upsample): torch.nn.ConvTranspose2d(out_features, out_features, kernal_size) ) return block def __init__(self): super(Building_Blocks, self).__init__() self.contracting_layer1 = self.conv_block(3, 64, 3, upsample=True) def forward(self, x): x=self.contracting_layer1(x) return x
No, but in your case it's easy to take if out of nn.Sequential: class Building_Blocks(torch.nn.Module): def conv_block(self, in_features, out_features, kernal_size, upsample=False): layers = [ torch.nn.Conv2d(in_features, out_features, kernal_size), torch.nn.ReLU(inplace=True), torch.nn.Conv2d(out_features, out_features, kernal_size), torch.nn.ReLU(inplace=True), ] if upsample: layers.append( torch.nn.ConvTranspose2d(out_features, out_features, kernal_size) ) block = torch.nn.Sequential(*layers) return block def __init__(self): super(Building_Blocks, self).__init__() self.contracting_layer1 = self.conv_block(3, 64, 3, upsample=True) def forward(self, x): x = self.contracting_layer1(x) return x You can always construct a list containing layers however you want and unpack it into torch.nn.Sequential afterwards.
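For a quick check that the conditional branch ends up in the module, something like this should work (the 3x64x64 input size is arbitrary):
import torch

model = Building_Blocks()
x = torch.randn(1, 3, 64, 64)
out = model(x)
print(out.shape)  # the ConvTranspose2d is only part of the block because upsample=True was passed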
https://stackoverflow.com/questions/61545224/
CNN vs SVM for smile intensity detection training?
I have a dataset made up of images of faces, with the corresponding landmarks that make up the mouth. These landmarks are sets of 2D points (x, y pixel positions). Each image-landmark data pair is tagged as either a smile or neutral. What I would like to do is train a deep learning model to return a smile intensity for a new image-landmark data pair. What should I be searching for to help me with the next step? Is it a CNN that I need? In my limited understanding, the usual training input is just an image, whereas I would be passing in the landmark sets to train with. Or would an SVM approach be more accurate? I am looking for maximum accuracy, as much as is possible. What is the approach that I need called? I am happy to use PyTorch, Dlib or any framework; I am just a little stuck on the search terms to help me move forward. Thank you.
It's hard to tell without looking into the dataset and experimenting, but hopefully the following research materials will guide you in the right direction.
Machine learning-based approach: https://www.researchgate.net/publication/266672947_Estimating_smile_intensity_A_better_way
Deep learning (CNN): https://arxiv.org/pdf/1602.00172.pdf
A list of awesome papers for smile and smile intensity detection: https://github.com/EvelynFan/AWESOME-FER/blob/master/README.md
SmileNet project: https://sites.google.com/view/sensingfeeling/
Now, I'm assuming you don't have labels for the actual smile intensity. In that scenario, the existing smile detection methods can be used directly: take the last activation output (sigmoid) as a confidence score for smiling; the higher the confidence, the higher the intensity. You can also use the facial landmark points as separate features (for example, pass them through an LSTM block) and concatenate them to the CNN features at an early or later stage to improve the performance of your model. If you do have labels for smile intensity, you can solve it as a regression problem: the CNN has a single output and regresses the normalized smile intensity (with a sigmoid in this case).
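As a rough illustration of the regression setup described above (this is a sketch, not the method from any of the linked papers: the ResNet-18 backbone, the 68x2 landmark shape and the layer sizes are all assumptions, and a small MLP stands in for the landmark branch to keep things short):
import torch
import torch.nn as nn
import torchvision.models as models

class SmileIntensityNet(nn.Module):
    def __init__(self, num_landmarks=68):
        super().__init__()
        # image branch: a small pretrained CNN backbone
        backbone = models.resnet18(pretrained=True)
        backbone.fc = nn.Identity()  # keep the 512-d feature vector
        self.backbone = backbone
        # landmark branch: flatten the (num_landmarks, 2) points into a vector
        self.landmark_mlp = nn.Sequential(
            nn.Linear(num_landmarks * 2, 64),
            nn.ReLU(inplace=True),
        )
        # fused head regressing a single intensity in [0, 1]
        self.head = nn.Sequential(
            nn.Linear(512 + 64, 64),
            nn.ReLU(inplace=True),
            nn.Linear(64, 1),
            nn.Sigmoid(),
        )

    def forward(self, image, landmarks):
        img_feat = self.backbone(image)                    # (B, 512)
        lm_feat = self.landmark_mlp(landmarks.flatten(1))  # (B, 64)
        return self.head(torch.cat([img_feat, lm_feat], dim=1))  # (B, 1)
Such a model would be trained with an MSE or L1 loss against intensity labels, or with a binary cross-entropy loss against smile/neutral tags if only those are available.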
https://stackoverflow.com/questions/61549401/
pytorch: How to do layer wise multiplication?
I have a tensor containing five 2x2 matrices - shape (1,5,2,2), and a tensor containing 5 elements - shape ([5]). I want to multiply each 2x2 matrix(in the former tensor) with the corresponding value (in the latter tensor). The resultant tensor should be of shape (1,5,2,2). How to do that? Getting the following error when I run this code a = torch.rand(1,5,2,2) print(a.shape) b = torch.rand(5) print(b.shape) mul = a*b RuntimeError: The size of tensor a (2) must match the size of tensor b (5) at non-singleton dimension 3
You can use either a * b or torch.mul(a, b), but you must use permute() before and after the multiplication in order to get broadcast-compatible shapes:
import torch
a = torch.ones(1,5,2,2)
b = torch.rand(5)
a.shape # torch.Size([1, 5, 2, 2])
b.shape # torch.Size([5])
c = (a.permute(0,2,3,1) * b).permute(0,3,1,2)
c.shape # torch.Size([1, 5, 2, 2])
# OR
# c = torch.mul(a.permute(0,2,3,1), b).permute(0,3,1,2)
c.shape # torch.Size([1, 5, 2, 2])
The permute() function reorders the dimensions in the order of its arguments. I.e., a.permute(0,2,3,1) will have shape torch.Size([1, 2, 2, 5]), which is broadcast-compatible with b (torch.Size([5])) for element-wise multiplication, since the last dimension of a now equals the size of b. After the multiplication we permute again, back to the desired shape of torch.Size([1, 5, 2, 2]), with permute(0,3,1,2). You can read about permute() in the docs. Its arguments number the current dimensions of [1, 5, 2, 2] from 0 to 3, and the dimensions are rearranged in the order the arguments are given: for a.permute(0,2,3,1), the first dimension stays in place (the first argument is 0), the second dimension moves to the fourth position (the index 1 is the fourth argument), and the third and fourth dimensions move to the second and third positions (the indices 2 and 3 appear in the second and third places). Remember that the 4th dimension, for instance, is referred to by the argument 3 (not 4). EDIT If you want to element-wise multiply tensors of shape [32,5,2,2] and [32,5], for example, such that each 2x2 matrix is multiplied by the corresponding value, you could rearrange the dimensions to [2,2,32,5] with permute(2,3,0,1), perform the multiplication with a * b, and then return to the original shape with permute(2,3,0,1) again (this particular permutation is its own inverse). The key point is that the trailing dimensions of the first tensor need to align with the shape of the second tensor for broadcasting; in our case those are the last two dimensions. Hope that helps.
https://stackoverflow.com/questions/61555342/
RuntimeError: stack expects each tensor to be equal size, but got [32, 1] at entry 0 and [32, 0] at entry 1
I have a very large tensor of shape (512,3,224,224). I input it to model in batches of 32 and I then save the scores corresponding to the target label which is 2. in each iteration, after every slice, the shape of scores changes. Which leads to the following error. What am I doing wrong and how to fix it. label = torch.ones(1)*2 def sub_forward(self, x): x = self.vgg16(x) x = self.bn1(x) x = self.linear1(x) x = self.linear2(x) return x def get_scores(self, imgs, targets): b, _, _, _ = imgs.shape batch_size = 32 total_scores = [] for i in range(0, b, batch_size): scores = self.sub_forward(imgs[i:i+batch_size,:,:,:]) scores = F.softmax(scores) labels = targets[i:i+batch_size] labels = labels.long() scores = scores[:,labels] print(i," scores: ", scores) total_scores.append(scores) print(i," total_socres: ", total_scores) total_scores = torch.stack(total_scores) return scores 0 scores: tensor([[0.0811], [0.0918], [0.0716], [0.1680], [0.1689], [0.1319], [0.1556], [0.2966], [0.0913], [0.1238], [0.1480], [0.1215], [0.2524], [0.1283], [0.1603], [0.1282], [0.2668], [0.1146], [0.2043], [0.2475], [0.0865], [0.1869], [0.0860], [0.1979], [0.1677], [0.1983], [0.2623], [0.1975], [0.1894], [0.3299], [0.1970], [0.1094]], device='cuda:0') 0 total_socres: [tensor([[0.0811], [0.0918], [0.0716], [0.1680], [0.1689], [0.1319], [0.1556], [0.2966], [0.0913], [0.1238], [0.1480], [0.1215], [0.2524], [0.1283], [0.1603], [0.1282], [0.2668], [0.1146], [0.2043], [0.2475], [0.0865], [0.1869], [0.0860], [0.1979], [0.1677], [0.1983], [0.2623], [0.1975], [0.1894], [0.3299], [0.1970], [0.1094]], device='cuda:0')] 32 scores: tensor([], device='cuda:0', size=(32, 0)) 32 total_socres: [tensor([[0.0811], [0.0918], [0.0716], [0.1680], [0.1689], [0.1319], [0.1556], [0.2966], [0.0913], [0.1238], [0.1480], [0.1215], [0.2524], [0.1283], [0.1603], [0.1282], [0.2668], [0.1146], [0.2043], [0.2475], [0.0865], [0.1869], [0.0860], [0.1979], [0.1677], [0.1983], [0.2623], [0.1975], [0.1894], [0.3299], [0.1970], [0.1094]], device='cuda:0'), tensor([], device='cuda:0', size=(32, 0))] > RuntimeError: stack expects each tensor to be equal size, but got [32, 1] at entry 0 and [32, 0] at entry 1
I don't know what happened in your code, but honestly you shouldn't do the batching like that. Please use a Dataset with a DataLoader:
import torch

class MyDataloader(torch.utils.data.Dataset):
    def __init__(self):
        self.images = torch.Tensor(512, 3, 224, 224)

    def __len__(self):
        return 512

    def __getitem__(self, idx):
        return self.images[idx, :, :, :], torch.ones(1) * 2

train_data = MyDataloader()
train_loader = torch.utils.data.DataLoader(train_data,
                                           shuffle=True,
                                           num_workers=2,
                                           batch_size=32)
for batch_images, targets in train_loader:
    print(batch_images.shape)  # should be 32*3*224*224
    ...
    # now train your model
    logits = model(batch_images, targets)
https://stackoverflow.com/questions/61558291/
Problem with Dataloader object not subscriptable
I am now running a Python program using Pytorch. I use my own dataset, not torch.data.dataset. I download data from a pickle file extracted from feature extraction. But the following errors appear: Traceback (most recent call last): File "C:\Users\hp\Downloads\efficient_densenet_pytorch-master\demo-emotion.py", line 326, in <module> fire.Fire(demo) File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\fire\core.py", line 138, in Fire component_trace = _Fire(component, args, parsed_flag_args, context, name) File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\fire\core.py", line 468, in _Fire target=component.__name__) File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\fire\core.py", line 672, in _CallAndUpdateTrace component = fn(*varargs, **kwargs) File "C:\Users\hp\Downloads\efficient_densenet_pytorch-master\demo-emotion.py", line 304, in demo train(model,train_set1, valid_set=valid_set, test_set=test1, save=save, n_epochs=n_epochs,batch_size=batch_size,seed=seed) File "C:\Users\hp\Downloads\efficient_densenet_pytorch-master\demo-emotion.py", line 172, in train n_epochs=n_epochs, File "C:\Users\hp\Downloads\efficient_densenet_pytorch-master\demo-emotion.py", line 37, in train_epoch loader=np.asarray(list(loader)) File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\torch\utils\data\dataloader.py", line 345, in __next__ data = self._next_data() File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\torch\utils\data\dataloader.py", line 385, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in fetch data = [self.dataset[idx] for idx in possibly_batched_index] File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in <listcomp> data = [self.dataset[idx] for idx in possibly_batched_index] File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\torch\utils\data\dataset.py", line 257, in __getitem__ return self.dataset[self.indices[idx]] TypeError: 'DataLoader' object is not subscriptable The code is: train_set1 = Owndata() train1, test1 = train_set1 .get_splits() # prepare data loaders train_dl = torch.utils.data.DataLoader(train1, batch_size=32, shuffle=True) test_dl =torch.utils.data.DataLoader(test1, batch_size=1024, shuffle=False) test_set1 = Owndata() '''print('test_set# ',test_set)''' if valid_size: valid_set = Owndata() indices = torch.randperm(len(train_set1)) train_indices = indices[:len(indices) - valid_size] valid_indices = indices[len(indices) - valid_size:] train_set1 = torch.utils.data.Subset(train_dl, train_indices) valid_set = torch.utils.data.Subset(valid_set, valid_indices) else: valid_set = None model = DenseNet( growth_rate=growth_rate, block_config=block_config, num_classes=10, small_inputs=True, efficient=efficient, ) train(model,train_set1, valid_set=valid_set, test_set=test1, save=save, n_epochs=n_epochs, batch_size=batch_size, seed=seed) Any help is appreciated! Thanks a lot in advance!!
The line you quoted is not the one actually causing the error; it comes from the train function at the very end, which you are not showing. You are confusing two things: a torch.utils.data.Dataset object is indexable (dataset[5] works fine, for example); it is a simple object which defines how to get a single sample of data. A torch.utils.data.DataLoader is not indexable, only iterable; it usually returns batches of data from the above Dataset and can load in parallel using num_workers. The DataLoader is what you are trying to index, while you should be indexing the dataset instead. Please see the PyTorch documentation about data to get a better grasp of how these work.
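Concretely, in the snippet from the question the DataLoader (train_dl) is passed to torch.utils.data.Subset, which then tries to index it. A minimal sketch of the fix, reusing the variable names from the question: build the subsets from the dataset and only then wrap them in loaders.
indices = torch.randperm(len(train_set1)).tolist()
train_indices = indices[:len(indices) - valid_size]
valid_indices = indices[len(indices) - valid_size:]

# subset the *dataset*, not the DataLoader
train_subset = torch.utils.data.Subset(train_set1, train_indices)
valid_subset = torch.utils.data.Subset(train_set1, valid_indices)

# the DataLoaders are created last, from the (indexable) subsets
train_dl = torch.utils.data.DataLoader(train_subset, batch_size=32, shuffle=True)
valid_dl = torch.utils.data.DataLoader(valid_subset, batch_size=32, shuffle=False)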
https://stackoverflow.com/questions/61562456/
Why pandas.core.series.Series sometimes cannot convert to torch tensor in Python?
I have a dataframe from which I pick two columns: X_train, X_test, y_train, y_test = train_test_split(df["EnergyFront"], df["particle"], test_size=0.2) The type of both X_train and X_test is pandas.core.series.Series, and the two look quite similar. I can transform X_train to a torch tensor: X_train = torch.Tensor(X_train) but when I try to do the same with X_test: X_test = torch.Tensor(X_test) I get the following error: ValueError Traceback (most recent call last) <ipython-input-174-14117eb3ce4e> in <module>() ----> 1 X_test = torch.Tensor(X_test) ValueError: could not determine the shape of object type 'Series' How can I solve it? By the way, I am running on Google Colaboratory.
This issue is described here: https://github.com/pytorch/pytorch/pull/7583 In order to determine the shape of the series, they try to access the element with index 0. If that element is not found, this error occurs. In your case, presumably this happens because your X_test doesn't contain the first element of the whole Series. I believe a valid fix for your case would be to convert your X_test to an array like so: X_test = torch.Tensor(X_test.to_numpy())
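A small sketch of why this happens (the data is made up): after train_test_split the Series keeps its original index labels, so the label 0 may simply not be present in X_test, while converting to a NumPy array drops the labels entirely.
import pandas as pd
import torch

s = pd.Series([1.0, 2.0, 3.0, 4.0])
x_test = s.iloc[2:]                   # index labels are now 2 and 3; label 0 is gone

# torch.Tensor(x_test)                # raises: could not determine the shape of object type 'Series'
t = torch.Tensor(x_test.to_numpy())   # works: only the positional values are used
print(t)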
https://stackoverflow.com/questions/61565156/
Can not import fastai [WinError 126] The specified module could not be found
First I ran the following command: conda install -c pytorch -c fastai fastai. After the install finished, I imported this: from fastai.imports import * and got an error like this: Traceback (most recent call last): File "C:\Users\acer\Desktop\Reddit Bot\demo.py", line 70, in <module> from fastai.imports import * File "C:\Users\acer\anaconda3\lib\site-packages\fastai\imports\__init__.py", line 2, in <module> from .torch import * File "C:\Users\acer\anaconda3\lib\site-packages\fastai\imports\torch.py", line 1, in <module> import torch, torch.nn.functional as F File "C:\Users\acer\anaconda3\lib\site-packages\torch\__init__.py", line 81, in <module> ctypes.CDLL(dll) File "C:\Users\acer\anaconda3\lib\ctypes\__init__.py", line 364, in __init__ self._handle = _dlopen(self._name, mode) OSError: [WinError 126] The specified module could not be found I have looked at some posts on Stack Overflow but can't solve this. How can I fix it? Please help me.
This probably happens if you, like me, are using a machine without an NVIDIA GPU card. conda install -c pytorch -c fastai fastai will install a version that expects a GPU. To resolve this, I uninstalled pytorch and then conda-installed the CPU-only version using conda install pytorch torchvision cpuonly -c pytorch. It worked for me! You can check the PyTorch installation guide on the official website :)
https://stackoverflow.com/questions/61569612/
An out of bounds index error when using Pytorch gather
I have two tensors. I am trying to gather one element from each row, with the column specified by the indices in the second tensor. So I am trying to get: [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1] This is my code for this: self.manDistMat.gather(1, state.unsqueeze(-1)) with self.manDistMat being the 16x16 matrix and state.unsqueeze(-1) being the other tensor. When I try this I get this error: RuntimeError: index 578437695752307201 is out of bounds for dimension 1 with size 16 What am I doing wrong?
I encountered a similar problem. It appears to be a bug in PyTorch.
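For reference, a minimal sketch of the intended gather call (the values here are made up): the index tensor must be an integer (torch.long) tensor whose entries all lie within the size of the gathered dimension.
import torch

man_dist_mat = torch.arange(16 * 16).view(16, 16)       # stand-in for self.manDistMat
state = torch.randint(0, 16, (16,), dtype=torch.long)   # one column index per row

picked = man_dist_mat.gather(1, state.unsqueeze(-1))    # shape (16, 1)
print(picked.squeeze(-1))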
https://stackoverflow.com/questions/61572694/
Pytorch: Memory Efficient weighted sum with weights shared along channels
Inputs: 1) I = Tensor of dim (N, C, X) (Input) 2) W = Tensor of dim (N, X, Y) (Weight) Output: 1) O = Tensor of dim (N, C, Y) (Output) I want to compute: I = I.view(N, C, X, 1) W = W.view(N, 1, X, Y) PROD = I*W O = PROD.sum(dim=2) return O without incurring N * C * X * Y memory overhead. Basically I want to calculate the weighted sum of a feature map wherein the weights are the same along the channel dimension, without incurring memory overhead per channel. Maybe I could use from itertools import product O = torch.zeros(N, C, Y) for n, x, y in product(range(N), range(X), range(Y)): O[n, :, y] += I[n, :, x]*W[n, x, y] return O but that would be slower (no broadcasting) and I'm not sure how much memory overhead would be incurred by saving variables for the backward pass.
You can use torch.bmm (https://pytorch.org/docs/stable/torch.html#torch.bmm). Just do torch.bmm(I,W) To verify the results : import torch N, C, X, Y= 100, 10, 9, 8 i = torch.rand(N,C,X) w = torch.rand(N,X,Y) o = torch.bmm(i,w) # desired result code I = i.view(N, C, X, 1) W = w.view(N, 1, X, Y) PROD = I*W O = PROD.sum(dim=2) print(torch.allclose(O,o)) # should output True if outputs are same. EDIT: Ideally, I would assume using pytorch's internal matrix multiplication is efficient. However, you can also measure the memory usage with tracemalloc (at least on CPU). See https://discuss.pytorch.org/t/measuring-peak-memory-usage-tracemalloc-for-pytorch/34067 for GPU. import torch import tracemalloc tracemalloc.start() N, C, X, Y= 100, 10, 9, 8 i = torch.rand(N,C,X) w = torch.rand(N,X,Y) o = torch.bmm(i,w) # output is a tuple indicating current memory and peak memory print(tracemalloc.get_traced_memory()) You can do the same with other code and see the bmm implementation is indeed efficient. import torch import tracemalloc tracemalloc.start() N, C, X, Y= 100, 10, 9, 8 i = torch.rand(N,C,X) w = torch.rand(N,X,Y) I = i.view(N, C, X, 1) W = w.view(N, 1, X, Y) PROD = I*W O = PROD.sum(dim=2) # output is a tuple indicating current memory and peak memory print(tracemalloc.get_traced_memory())
https://stackoverflow.com/questions/61582511/
Pytorch | I don't know why it is throwing an error? (Beginner)
import torch.nn as nn import torch.nn.functional as F ## TODO: Define the NN architecture class Net(nn.Module): def __init__(self): super(Net, self).__init__() # linear layer (784 -> 1 hidden node) self.fc1 = nn.Linear(28 * 28, 512) self.fc2 = nn.Linear(512 * 512) self.fc3 = nn.Linear(512 * 10) def forward(self, x): # flatten image input x = x.view(-1, 28 * 28) # add hidden layer, with relu activation function x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.relu(self.fc3(x)) return x # initialize the NN model = Net() print(model) When I run this, it throws this error. Why? TypeError: __ init __() missing 1 required positional argument: 'out_features'
This error is because you have not provided the output size of the fully connected layer in your fc2 and fc3. Below is the modified code. I added the output size, I am not sure if this is the output size architecture you want. But for the demonstration, I put the output size. Please edit the code and add the output size as per your requirement. Remember that the output size of the previous fully connected layer should be the input size of the next FC layer. Else it will throw size mismatch error. import torch.nn as nn import torch.nn.functional as F ## TODO: Define the NN architecture class Net(nn.Module): def __init__(self): super(Net, self).__init__() # linear layer (784 -> 1 hidden node) self.fc1 = nn.Linear(28 * 28, 512) self.fc2 = nn.Linear(512 ,512*10) self.fc3 = nn.Linear(512 * 10,10) def forward(self, x): # flatten image input x = x.view(-1, 28 * 28) # add hidden layer, with relu activation function x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.relu(self.fc3(x)) return x # initialize the NN model = Net() print(model)
https://stackoverflow.com/questions/61587563/
Pytorch squeeze and unsqueeze
I don't understand what squeeze and unsqueeze do to a tensor, even after looking at the docs and related questions. I tried to understand it by exploring it myself in python. I first created a random tensor with x = torch.rand(3,2,dtype=torch.float) >>> x tensor([[0.3703, 0.9588], [0.8064, 0.9716], [0.9585, 0.7860]]) But regardless of how I squeeze it, I end up with the same results: torch.equal(x.squeeze(0), x.squeeze(1)) >>> True If I now try to unsqueeze I get the following, >>> x.unsqueeze(1) tensor([[[0.3703, 0.9588]], [[0.8064, 0.9716]], [[0.9585, 0.7860]]]) >>> x.unsqueeze(0) tensor([[[0.3703, 0.9588], [0.8064, 0.9716], [0.9585, 0.7860]]]) >>> x.unsqueeze(-1) tensor([[[0.3703], [0.9588]], [[0.8064], [0.9716]], [[0.9585], [0.7860]]]) However if I now create a tensor x = torch.tensor([1,2,3,4]), and I try to unsqueeze it then it appears that 1 and -1 makes it a column where as 0 remains the same. x.unsqueeze(0) tensor([[1, 2, 3, 4]]) >>> x.unsqueeze(1) tensor([[1], [2], [3], [4]]) >>> x.unsqueeze(-1) tensor([[1], [2], [3], [4]]) Can someone provide an explanation of what squeeze and unsqueeze are doing to a tensor? And what's the difference between providing the arguements 0, 1 and -1?
squeeze removes dimensions of size 1, and unsqueeze inserts a new dimension of size 1 at a given position (the original answer illustrates this with a diagram for an effectively 2D matrix). When you unsqueeze a tensor, it is ambiguous which dimension you wish to 'unsqueeze' it across (as a row or a column, etc.). The dim argument dictates this, i.e. the position of the new dimension to be added. Hence the resulting unsqueezed tensors contain the same information, but the indices used to access them are different.
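A quick shape check makes the role of the dim argument concrete, reusing the kind of tensors from the question:
import torch

x = torch.rand(3, 2)
print(x.unsqueeze(0).shape)   # torch.Size([1, 3, 2]) - new size-1 axis in front
print(x.unsqueeze(1).shape)   # torch.Size([3, 1, 2]) - new size-1 axis in the middle
print(x.unsqueeze(-1).shape)  # torch.Size([3, 2, 1]) - new size-1 axis at the end

# squeeze only removes dimensions of size 1, which is why squeezing a (3, 2)
# tensor along dim 0 or dim 1 changes nothing and the two results compare equal
print(x.squeeze(0).shape)                 # torch.Size([3, 2])
print(x.unsqueeze(0).squeeze(0).shape)    # torch.Size([3, 2])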
https://stackoverflow.com/questions/61598771/
Is there any difference between tensor2tensor and pytorch in view of memory?
I'm trying to train seq2seq model(transformer) with pytorch and tensor2tensor. When using tensor2tensor, the batch size can be like 1024, while pytorch model shows CUDA out of memory error with 8 batch size. Is there any technique used in tensor2tensor to make best use of memory. If anyone know this, please tell me. Thanks in advance.
In Tensor2Tensor by default, the batch size is specified in the number of tokens (subwords) per single GPU. This allows to use a higher number of short sequences (sentences) in one batch or a smaller number of long sequences. Most other toolkits use a fixed batch size specified in the number of sequences. Either way, it may be a good idea to limit the maximum sentence length in training to a reasonable number to prevent Out-of-memory errors and excessive padding. Some toolkits also prefer to specify the total batch size per all GPU cards.
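As a rough sketch of what batching by a token budget (rather than by a fixed number of sentences) can look like on the PyTorch side; this is not Tensor2Tensor's actual implementation, just an illustration of the idea with made-up names:
def batch_by_tokens(examples, max_tokens_per_batch=1024):
    """Group variable-length token-id lists so each padded batch stays under a token budget."""
    batch, batch_max_len = [], 0
    for seq in examples:
        new_max = max(batch_max_len, len(seq))
        # the padded cost of a batch is (number of sequences) * (longest sequence)
        if batch and new_max * (len(batch) + 1) > max_tokens_per_batch:
            yield batch
            batch, batch_max_len = [], 0
            new_max = len(seq)
        batch.append(seq)
        batch_max_len = new_max
    if batch:
        yield batch
Short sequences then pack into large batches while long sequences end up in small ones, which is what keeps peak memory usage roughly constant.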
https://stackoverflow.com/questions/61607629/
How to split a dataset into a custom training set and a custom validation set with pytorch?
I'm using a non-torchvision dataset and I have extracted it with the ImageFolder method. I'm trying to split the dataset into 20% validation set and 80% training set. I can only find this method (random_split) from PyTorch library which allows splitting dataset. However, this is random every time. I'm wondering is there a way to split the dataset with a specific amount in the PyTorch library? This is my code for extracting the dataset and split it randomly. transformations = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) ]) TrafficSignSet = datasets.ImageFolder(root='./train/', transform=transformations) ####### split data train_size = int(0.8 * len(TrafficSignSet)) test_size = len(TrafficSignSet) - train_size train_dataset_split, test_dataset_split = torch.utils.data.random_split(TrafficSignSet, [train_size, test_size]) #######put into a Dataloader train_dataset = torch.utils.data.DataLoader(train_dataset_split, batch_size=32, shuffle=True) test_dataset = torch.utils.data.DataLoader(test_dataset_split, batch_size=32, shuffle=True)
If you look "under the hood" of random_split you'll see it uses torch.utils.data.Subset to do the actual splitting. You can do so yourself with fixed indices: import random indices = list(range(len(TrafficSignSet)) random.seed(310) # fix the seed so the shuffle will be the same everytime random.shuffle(indices) train_dataset_split = torch.utils.data.Subset(TrafficSignSet, indices[:train_size]) val_dataset_split = torch.utils.data.Subset(TrafficSignSet, indices[train_size:])
https://stackoverflow.com/questions/61623709/
Pytorch equivalent features in tensorflow?
I recently was reading a Pytorch code and came across loss.backward() and optimizer.step() functions, are there any equivalent of these using tensorflow/keras?
The loss.backward() equivalent in TensorFlow is tf.GradientTape(). TensorFlow provides the tf.GradientTape API for automatic differentiation, i.e. computing the gradient of a computation with respect to its input variables. TensorFlow "records" all operations executed inside the context of a tf.GradientTape onto a "tape", and then uses that tape and the gradients associated with each recorded operation to compute the gradients of the "recorded" computation using reverse-mode differentiation. The optimizer.step() equivalent in TensorFlow is minimize(), which minimizes the loss by updating the variable list; calling minimize() takes care of both computing the gradients and applying them to the variables. If you want to process the gradients before applying them, you can instead use the optimizer in three steps: compute the gradients with tf.GradientTape, process the gradients as you wish, and apply the processed gradients with apply_gradients(). Hope this answers your question. Happy Learning.
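A short sketch of the standard TensorFlow 2 training step these map onto (the tiny model, optimizer and random data are just placeholders):
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
loss_fn = tf.keras.losses.MeanSquaredError()

x = tf.random.normal((8, 3))
y = tf.random.normal((8, 1))

with tf.GradientTape() as tape:                                     # record the forward pass
    loss = loss_fn(y, model(x))
grads = tape.gradient(loss, model.trainable_variables)              # ~ loss.backward()
optimizer.apply_gradients(zip(grads, model.trainable_variables))    # ~ optimizer.step()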
https://stackoverflow.com/questions/61623722/
PyTorch nn.Transformer learns to copy target
I’m trying to train a Transformer Seq2Seq model using nn.Transformer class. I believe I am implementing it wrong, since when I train it, it seems to fit too fast, and during inference it repeats itself often. This seems like a masking issue in the decoder, and when I remove the target mask, the training performance is the same. This leads me to believe I am doing the target masking wrong. Here is my model code: class TransformerModel(nn.Module): def __init__(self, vocab_size, input_dim, heads, feedforward_dim, encoder_layers, decoder_layers, sos_token, eos_token, pad_token, max_len=200, dropout=0.5, device=(torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu"))): super(TransformerModel, self).__init__() self.target_mask = None self.embedding = nn.Embedding(vocab_size, input_dim, padding_idx=pad_token) self.pos_embedding = nn.Embedding(max_len, input_dim, padding_idx=pad_token) self.transformer = nn.Transformer( d_model=input_dim, nhead=heads, num_encoder_layers=encoder_layers, num_decoder_layers=decoder_layers, dim_feedforward=feedforward_dim, dropout=dropout) self.out = nn.Sequential( nn.Linear(input_dim, feedforward_dim), nn.ReLU(), nn.Linear(feedforward_dim, vocab_size)) self.device = device self.max_len = max_len self.sos_token = sos_token self.eos_token = eos_token # Initialize all weights to be uniformly distributed between -initrange and initrange def init_weights(self): initrange = 0.1 self.encoder.weight.data.uniform_(-initrange, initrange) self.decoder.bias.data.zero_() self.decoder.weight.data.uniform_(-initrange, initrange) # Generate mask covering the top right triangle of a matrix def generate_square_subsequent_mask(self, size): mask = (torch.triu(torch.ones(size, size)) == 1).transpose(0, 1) mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0)) return mask def forward(self, src, tgt): # src: (Max source seq len, batch size, 1) # tgt: (Max target seq len, batch size, 1) # Embed source and target with normal and positional embeddings embedded_src = (self.embedding(src) + self.pos_embedding( torch.arange(0, src.shape[1]).to(self.device).unsqueeze(0).repeat(src.shape[0], 1))) # Generate target mask target_mask = self.generate_square_subsequent_mask(size=tgt.shape[0]).to(self.device) embedded_tgt = (self.embedding(tgt) + self.pos_embedding( torch.arange(0, tgt.shape[1]).to(self.device).unsqueeze(0).repeat(tgt.shape[0], 1))) # Feed through model outputs = self.transformer(src=embedded_src, tgt=embedded_tgt, tgt_mask=target_mask) outputs = F.log_softmax(self.out(outputs), dim=-1) return outputs
For those having the same problem, my issue was that I wasn't properly adding the SOS token to the target I was feeding the model, and the EOS token to the target I was using in the loss function. For reference: The target fed to the model should be: [SOS] .... And the target used for the loss should be: .... [EOS]
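In code this usually amounts to the standard one-step shift of a target sequence that already contains both special tokens; a minimal sketch with placeholder token ids:
import torch

SOS, EOS, PAD = 1, 2, 0  # placeholder token ids
# a target sequence stored as [SOS] w1 w2 w3 [EOS], shape (seq_len, batch)
tgt = torch.tensor([[SOS], [11], [12], [13], [EOS]])

decoder_input = tgt[:-1, :]   # [SOS] w1 w2 w3  -> fed to the model
decoder_target = tgt[1:, :]   # w1 w2 w3 [EOS]  -> compared against in the loss

print(decoder_input.squeeze(1).tolist())   # [1, 11, 12, 13]
print(decoder_target.squeeze(1).tolist())  # [11, 12, 13, 2]

# with the model from the question (which returns log-probabilities):
# outputs = model(src, decoder_input)
# loss = F.nll_loss(outputs.reshape(-1, vocab_size), decoder_target.reshape(-1), ignore_index=PAD)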
https://stackoverflow.com/questions/61626779/
Unable to install fastai on Jupyter Notebook
I'm currently trying to get fastai installed on a conda environment using the command conda install -c fastai fastai as shown in the installation guide. This is what appears when that command is ran: (fastai) C:\>conda install -v -c fastai fastai Collecting package metadata (current_repodata.json): ...working... Unable to retrieve repodata (response: 404) for https://conda.anaconda.org/fastai/win-64/current_repodata.json done Solving environment: ...working... failed with initial frozen solve. Retrying with flexible solve. Solving environment: ...working... failed with repodata from current_repodata.json, will retry with next repodata source. Collecting package metadata (repodata.json): ...working... done Solving environment: ...working... failed with initial frozen solve. Retrying with flexible solve. Solving environment: ...working... Found conflicts! Looking for incompatible packages. This can take several minutes. Press CTRL-C to abort. failed Traceback (most recent call last): File "C:\Users\username\AppData\Local\Continuum\anaconda2\lib\site-packages\conda\exceptions.py", line 1079, in __call__ return func(*args, **kwargs) File "C:\Users\username\AppData\Local\Continuum\anaconda2\lib\site-packages\conda\cli\main.py", line 84, in _main exit_code = do_call(args, p) File "C:\Users\username\AppData\Local\Continuum\anaconda2\lib\site-packages\conda\cli\conda_argparse.py", line 82, in do_call return getattr(module, func_name)(args, parser) File "C:\Users\username\AppData\Local\Continuum\anaconda2\lib\site-packages\conda\cli\main_install.py", line 20, in execute install(args, parser, 'install') File "C:\Users\username\AppData\Local\Continuum\anaconda2\lib\site-packages\conda\cli\install.py", line 308, in install raise e UnsatisfiableError: The following specifications were found to be incompatible with each other: Output in format: Requested package -> Available versionsThe following specifications were found to be incompatible with your CUDA driver: - feature:/win-64::__cuda==9.2=0 Your installed CUDA driver is: 9.2 I can't say I completely understand what the issue is here. I previously thought it was an issue with PyTorch but after installing PyTorch successfully, I am still greeted with this message. Any ideas on how I can this package installed on my environment? Any help is appreciated. Thank you!
Try downgrading from Python 3.8 to Python 3.7; it worked for me.
https://stackoverflow.com/questions/61627669/
Logloss metric in Fastai
I'm doing a competition on the Zindi platform where the evaluation metric for the challenge is Log Loss. I'm working with the fastai library and I want a log loss metric, but I didn't find LogLoss as a metric in this library. I tried some code like the function provided by sklearn (from sklearn.metrics import log_loss) but it didn't work. The link of the competition: https://zindi.africa/competitions/basic-needs-basic-rights-kenya-tech4mentalhealth
If you need it as a metric (it is typically used as a loss), you should be able to use the cross_entropy function from PyTorch, which is the log loss for multi-class targets:
import torch.nn.functional as F
metrics = [F.cross_entropy]  # plus other metrics if needed
model = cnn_learner(data, model, metrics=metrics, ...)
https://stackoverflow.com/questions/61627797/