st49368
I have successfully installed NVIDIA driver & cudatoolkit via conda. However, I am not able to use cuda in pytorch (even though it installed successfully). Previously, I was using Pytorch with CUDA 8.0, and wanted to upgrade. I removed / purge all CUDA through: sudo apt-get --purge remove cuda sudo apt-get autoremove dpkg --list |grep "^rc" | cut -d " " -f 3 | xargs sudo dpkg --purge Then I updated my Nvidia drivers to 4.10 via PPA (Ubuntu 16.04): sudo add-apt-repository ppa:graphics-drivers/ppa sudo apt-get update sudo apt-get install nvidia-410 Everything worked smoothly. The output of nvidia-smi: Fri Aug 23 22:29:48 2019 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 410.78 Driver Version: 410.78 CUDA Version: N/A | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 GeForce GTX 108... Off | 00000000:01:00.0 On | N/A | | 25% 35C P8 13W / 250W | 531MiB / 11177MiB | 1% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | 0 1445 G /usr/lib/xorg/Xorg 317MiB | | 0 2035 G compiz 101MiB | | 0 3572 G ...uest-channel-token=13099850080781834209 110MiB | +-----------------------------------------------------------------------------+ The output of cat /proc/driver/nvidia/version NVRM version: NVIDIA UNIX x86_64 Kernel Module 410.78 Sat Nov 10 22:09:04 CST 2018 GCC version: gcc version 4.9.4 (Ubuntu 4.9.4-2ubuntu1~16.04) Since I wanted conda to manage my CUDA version, I installed the cudatoolkit through conda env (python 3.6): conda install pytorch torchvision cudatoolkit=10.0 -c pytorch again, everything installs perfectly. When I run: print(torch.cuda.device_count()) # --> 0 print(torch.version.cuda) # --> 10.0.130 but using cuda fails. I get the following error message Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/rana/anaconda3/envs/py36torch12cu10/lib/python3.6/site-packages/torch/cuda/__init__.py", line 178, in _lazy_init _check_driver() File "/home/rana/anaconda3/envs/py36torch12cu10/lib/python3.6/site-packages/torch/cuda/__init__.py", line 99, in _check_driver http://www.nvidia.com/Download/index.aspx""") AssertionError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx I restarted, removed all irrelevant environment variables which may have caused issues (LD_LIBRARY_PATH), removed conda, reinstalled, tried cuda 9.2, but nothing works. I am not sure what the issue could be. Any ideas? I searched a bit, and found this 5 pytorch thread. Since I completely removed CUDA from my system this shouldn’t be the problem, but I think somehow it may be related. EDIT: It isn’t surprising given my error, but following this issue 11, I checked: torch._C._cuda_getDriverVersion() # -> 0
st49369
Solved by ptrblck in post #4.
st49370
That’s some good debugging. Could you post the output of dpkg -l | grep -i nvidia? Probably unrelated to this issue, but are you using a secure boot option?
st49371
Good call… the output of dpkg -l | grep -i nvidia is ii bbswitch-dkms 0.8-3ubuntu1 amd64 Interface for toggling the power on NVIDIA Optimus video cards rc nvidia-384 384.90-0ubuntu0.16.04.1 amd64 NVIDIA binary driver - version 384.90 hi nvidia-410 410.78-0ubuntu0~gpu16.04.1 amd64 NVIDIA binary driver - version 410.78 rc nvidia-opencl-icd-384 384.90-0ubuntu0.16.04.1 amd64 NVIDIA OpenCL ICD ii nvidia-prime 0.8.2 amd64 Tools to enable NVIDIA's Prime ii nvidia-settings 384.81-0ubuntu1 amd64 Tool for configuring the NVIDIA graphics driver very odd. so it seems the old driver (384) is still around. What do you think the best way to fix this is? About secure boot: I don’t think I changed the bootloader, and I am not running dualboot…
st49372
Thanks for the information. Based on the status codes, it looks like 384.81 is still installed (at least nvidia-settings) and still contains config files. I would recommend purging all drivers and reinstalling the latest (or desired) one.
st49373
Yay, it works! Posting my solution: Just purged nvidia by running: sudo apt-get remove --purge '^nvidia-.*' after reinstalling 410 via ppa, the output of dpkg -l | grep -i nvidia is: ii bbswitch-dkms 0.8-3ubuntu1 amd64 Interface for toggling the power on NVIDIA Optimus video cards ii libcuda1-410 410.78-0ubuntu0~gpu16.04.1 amd64 NVIDIA CUDA runtime library hi nvidia-410 410.78-0ubuntu0~gpu16.04.1 amd64 NVIDIA binary driver - version 410.78 ii nvidia-opencl-icd-410 410.78-0ubuntu0~gpu16.04.1 amd64 NVIDIA OpenCL ICD ii nvidia-prime 0.8.2 amd64 Tools to enable NVIDIA's Prime ii nvidia-settings 418.56-0ubuntu0~gpu16.04.1 amd64 Tool for configuring the NVIDIA graphics driver odd that nvidia settings is 418, but anyway it works. Also I used sudo apt-mark hold nvidia-410 to make sure the driver won’t update with sudo apt-get update.
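For readers following along: once the driver is fixed, a quick way to confirm PyTorch can actually see the GPU is a small check like the one below (a minimal sketch added here for completeness, assuming the same conda environment as in the question):

```python
# Run inside the conda env that has the cudatoolkit-enabled PyTorch installed.
import torch

print(torch.version.cuda)          # CUDA version the binaries were built against, e.g. 10.0.130
print(torch.cuda.is_available())   # should now be True instead of raising the driver assertion
print(torch.cuda.device_count())   # 1 for the single GTX 1080 Ti above
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```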
st49374
I am trying to implement binary classification. I have 100K (3 channel, 224 x 224px pre-resized) image dataset that I am trying to train the model for if picture is safe for work or not. I am data engineer with statistician background so I am working on the model like last 5-10 days. I have read many answers from ptrblck and tried to implement the solution based on suggestions but unfortunately loss didn’t decrease. Here is the class implemented by using PyTorch Lightning, from .dataset import CloudDataset from .split import DatasetSplit from pytorch_lightning import LightningModule from pytorch_lightning.metrics import Accuracy from torch import stack from torch.nn import BCEWithLogitsLoss, Conv2d, Dropout, Linear, MaxPool2d, ReLU from torch.optim import Adam from torch.utils.data import DataLoader from torch.utils.data.dataloader import default_collate from torchvision.transforms import ToTensor from util import logger from util.config import config class ClassifyModel(LightningModule): def __init__(self): super(ClassifyModel, self).__init__() # custom dataset split class ds = DatasetSplit(config.s3.bucket, config.train.ratio) # split records for train, validation and test self._train_itr, self._valid_itr, self._test_itr = ds.split() self.conv1 = Conv2d(3, 32, 3, padding=1) self.conv2 = Conv2d(32, 64, 3, padding=1) self.conv3 = Conv2d(64, 64, 3, padding=1) self.pool = MaxPool2d(2, 2) self.fc1 = Linear(7 * 28 * 64, 512) self.fc2 = Linear(512, 16) self.fc3 = Linear(16, 4) self.fc4 = Linear(4, 1) self.dropout = Dropout(0.25) self.relu = ReLU(inplace=True) self.accuracy = Accuracy() def forward(self, x): # comments are shape before execution # [32, 3, 224, 224] x = self.pool(self.relu(self.conv1(x))) # [32, 32, 112, 112] x = self.pool(self.relu(self.conv2(x))) # [32, 64, 56, 56] x = self.pool(self.relu(self.conv3(x))) # [32, 64, 28, 28] x = self.pool(self.relu(self.conv3(x))) # [32, 64, 14, 14] x = self.dropout(x) # [32, 64, 14, 14] x = x.view(-1, 7 * 28 * 64) # [32, 12544] x = self.relu(self.fc1(x)) # [32, 512] x = self.relu(self.fc2(x)) # [32, 16] x = self.relu(self.fc3(x)) # [32, 4] x = self.dropout(self.fc4(x)) # [32, 1] x = x.squeeze(1) # [32] return x def configure_optimizers(self): return Adam(self.parameters(), lr=0.001) def training_step(self, batch, batch_idx): image, target = batch target = target.float() output = self.forward(image) loss = BCEWithLogitsLoss() output = loss(output, target) logits = self(image) self.accuracy(logits, target) return {'loss': output} def validation_step(self, batch, batch_idx): image, target = batch target = target.float() output = self.forward(image) loss = BCEWithLogitsLoss() output = loss(output, target) return {'val_loss': output} def collate_fn(self, batch): batch = list(filter(lambda x: x is not None, batch)) return default_collate(batch) def train_dataloader(self): transform = ToTensor() workers = 0 if config.train.test else config.train.workers cds = CloudDataset(config.s3.bucket, self._train_itr, transform) return DataLoader( dataset=cds, batch_size=32, shuffle=True, num_workers=workers, collate_fn=self.collate_fn, ) def val_dataloader(self): transform = ToTensor() workers = 0 if config.train.test else config.train.workers cds = CloudDataset(config.s3.bucket, self._valid_itr, transform) return DataLoader( dataset=cds, batch_size=32, num_workers=workers, collate_fn=self.collate_fn, ) def test_dataloader(self): transform = ToTensor() workers = 0 if config.train.test else config.train.workers cds = CloudDataset(config.s3.bucket, self._test_itr, 
transform) return DataLoader( dataset=cds, batch_size=32, shuffle=True, num_workers=workers, collate_fn=self.collate_fn, ) def validation_epoch_end(self, outputs): avg_loss = stack([x['val_loss'] for x in outputs]).mean() logger.info(f'Validation loss is {avg_loss}') def training_epoch_end(self, outs): accuracy = self.accuracy.compute() logger.info(f'Training accuracy is {accuracy}') Here is the custom log output, epoch 0 Validation loss is 0.5988735556602478 Training accuracy is 0.4441356360912323 epoch 1 Validation loss is 0.6406065225601196 Training accuracy is 0.4441356360912323 epoch 2 Validation loss is 0.621654748916626 Training accuracy is 0.443579763174057 epoch 3 Validation loss is 0.5089989304542542 Training accuracy is 0.4580322504043579 epoch 4 Validation loss is 0.5484663248062134 Training accuracy is 0.4886047840118408 epoch 5 Validation loss is 0.5552918314933777 Training accuracy is 0.6142301559448242 epoch 6 Validation loss is 0.661466121673584 Training accuracy is 0.625903308391571
st49375
Solved by ptrblck in post #2.
st49376
The last squeeze() operation is most likely not needed (x = x.squeeze(1)), but might be alright if your target has the same shape ([32]). Could you try to overfit a small data sample, e.g. just 10 samples, and see if your model is able to do so by playing around with some hyperparameters? I think Lightning also ships with functionality in newer versions which does exactly this.
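As a concrete illustration of the overfitting check (a sketch added here, not part of the original reply; it assumes a pytorch_lightning version that exposes the Trainer's overfit_batches flag):

```python
# Repeatedly train on a handful of batches only; a working model/loss setup
# should be able to drive the training loss close to zero on them.
from pytorch_lightning import Trainer

model = ClassifyModel()                                 # the LightningModule defined above
trainer = Trainer(max_epochs=50, overfit_batches=10)    # reuse the same 10 batches every epoch
trainer.fit(model)
```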
st49377
When I remove x = x.squeeze(1) from forward, the loss function throws an exception: ValueError: Target size (torch.Size([32])) must be the same as input size (torch.Size([32, 1])). I have added batch normalization and weight initialization and updated the layers, so the problem is solved. Thank you @ptrblck
st49378
PyTorch traced model vs ONNX: which one is better for deploying a web app (front end or back end), or for running in other languages such as Java after training in Python?
st49379
I’m new to Python as well as machine learning. I’m trying to use logistic regression in a federated learning program for multiclass labels (sitting, sitting down, standing, standing up, walking) on the UCI HAR dataset. It works for binary classification, but for multiclass classification, when I try to compute precision and recall, it throws the following error: ValueError: Target is multiclass but average='binary'. Please choose another average setting, one of [None, 'micro', 'macro', 'weighted']. Can anyone please suggest how to proceed? I have tried to classify 6 labels with a logistic regression (multi-class) classifier with FL instead of using binary classification. Code: github.com/datafleets/horizontal-federated-learning-blog/blob/master/horizontal-fl.ipynb
st49380
sklearn.metrics.precision_score uses the binary average by default, which is only defined for the binary use case. For multi-class/-label classification, you would have to define one of the suggested average settings. The docs give you some information about each average.
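A small illustration of the different settings (the labels below are made up and only meant to show the API):

```python
from sklearn.metrics import precision_score

y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 0]

print(precision_score(y_true, y_pred, average='macro'))     # mean of the per-class precisions
print(precision_score(y_true, y_pred, average='micro'))     # aggregates TP/FP over all classes first
print(precision_score(y_true, y_pred, average='weighted'))  # per-class precision weighted by support
print(precision_score(y_true, y_pred, average=None))        # one precision value per class
```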
st49381
Hey mohammed_ibrahim, I’m one of the cofounders of DataFleets. Way to dive in on Federated Learning… was your question sufficiently addressed? -Nick
st49382
Hey Nick, unfortunately no. I want to use the code but for 6 classes, such that the prediction falls into one of the classes or activities (walking, walking upstairs, walking downstairs, sitting, standing, lying), i.e. the Kaggle dataset "Human Activity Recognition with Smartphones" (recordings of 30 study participants performing activities of daily living). Any advice or solution is highly appreciated.
st49383
I was recently practicing building an image classification model and the training loss is not changing. Can someone guide me on what am I doing wrong ? I am using the dataset available on Kaggle here. Model Definition: class Model(nn.Module): def __init__(self): super(Model, self).__init__() self.conv1 = nn.Conv2d(3, 16, kernel_size = (3, 3)) self.conv2 = nn.Conv2d(16, 64, kernel_size = (3, 3)) self.conv3 = nn.Conv2d(64, 256, kernel_size = (3, 3)) self.maxpool = nn.MaxPool2d(2, stride = 3) self.flatten = nn.Flatten() self.fc1 = nn.Linear(4*4*256, 256) self.fc2 = nn.Linear(256, 32) self.fc3 = nn.Linear(32, 10) def forward(self, image): image = F.relu(self.conv1(image)) image = self.maxpool(image) image = F.relu(self.conv2(image)) image = self.maxpool(image) image = F.relu(self.conv3(image)) image = self.maxpool(image) image = self.flatten(image) image = F.relu(self.fc1(image)) image = F.relu(self.fc2(image)) image = image.reshape(image.shape[0], -1) image = F.relu(self.fc3(image)) return image model = Model() And the training script: device = 'cpu' model = model.to(device) optimizer = torch.optim.Adam(model.parameters(), lr = 0.01) criterion = nn.CrossEntropyLoss() epochs = 5 training_loss = [] accuracy = [] thresh = 50 iters = 0 total_loss = 0 for e in range(epochs): for sample, label in image_loader: sample, label = sample.to(device), label.to(device) optimizer.zero_grad() output = model(sample) loss = criterion(output, label) loss.backward() optimizer.step() total_loss += loss.item() if iters%thresh == 0: pred = torch.argmax(output, dim = 1) correct = pred.eq(label) acc = torch.mean(correct.float()) print('[Epoch {}/{}] Iteration {} -> Train Loss: {:.4f}, Accuracy: {:.3f}'.format(e+1, epochs, iters, loss/thresh, acc)) training_loss.append(loss) accuracy.append(acc) total_loss = 0 iters += 1 plt.plot(loss_list, label='loss') plt.plot(acc_list, label='accuracy') plt.legend() plt.title('training loss and accuracy') plt.show() Below is the loss and accuracy while training: [Epoch 1/5] Iteration 0 -> Train Loss: 0.0402, Accuracy: 0.125 [Epoch 1/5] Iteration 50 -> Train Loss: 0.0402, Accuracy: 0.188 [Epoch 1/5] Iteration 100 -> Train Loss: 0.0402, Accuracy: 0.312 [Epoch 1/5] Iteration 150 -> Train Loss: 0.0402, Accuracy: 0.188 [Epoch 1/5] Iteration 200 -> Train Loss: 0.0402, Accuracy: 0.125 [Epoch 1/5] Iteration 250 -> Train Loss: 0.0402, Accuracy: 0.125 [Epoch 1/5] Iteration 300 -> Train Loss: 0.0402, Accuracy: 0.125 [Epoch 1/5] Iteration 350 -> Train Loss: 0.0402, Accuracy: 0.188 [Epoch 1/5] Iteration 400 -> Train Loss: 0.0402, Accuracy: 0.062 [Epoch 1/5] Iteration 450 -> Train Loss: 0.0402, Accuracy: 0.000 [Epoch 1/5] Iteration 500 -> Train Loss: 0.0402, Accuracy: 0.188 [Epoch 1/5] Iteration 550 -> Train Loss: 0.0402, Accuracy: 0.312 [Epoch 1/5] Iteration 600 -> Train Loss: 0.0402, Accuracy: 0.188 [Epoch 1/5] Iteration 650 -> Train Loss: 0.0402, Accuracy: 0.312 [Epoch 1/5] Iteration 700 -> Train Loss: 0.0402, Accuracy: 0.125 [Epoch 1/5] Iteration 750 -> Train Loss: 0.0402, Accuracy: 0.188 [Epoch 1/5] Iteration 800 -> Train Loss: 0.0402, Accuracy: 0.188 [Epoch 1/5] Iteration 850 -> Train Loss: 0.0402, Accuracy: 0.188 [Epoch 2/5] Iteration 900 -> Train Loss: 0.0402, Accuracy: 0.125 [Epoch 2/5] Iteration 950 -> Train Loss: 0.0402, Accuracy: 0.188 [Epoch 2/5] Iteration 1000 -> Train Loss: 0.0404, Accuracy: 0.062 [Epoch 2/5] Iteration 1050 -> Train Loss: 0.0402, Accuracy: 0.125 [Epoch 2/5] Iteration 1100 -> Train Loss: 0.0402, Accuracy: 0.125 [Epoch 2/5] Iteration 1150 -> Train Loss: 0.0402, 
Accuracy: 0.188 [Epoch 2/5] Iteration 1200 -> Train Loss: 0.0402, Accuracy: 0.000 [Epoch 2/5] Iteration 1250 -> Train Loss: 0.0402, Accuracy: 0.125 [Epoch 2/5] Iteration 1300 -> Train Loss: 0.0402, Accuracy: 0.188 [Epoch 2/5] Iteration 1350 -> Train Loss: 0.0402, Accuracy: 0.062 [Epoch 2/5] Iteration 1400 -> Train Loss: 0.0402, Accuracy: 0.250 [Epoch 2/5] Iteration 1450 -> Train Loss: 0.0402, Accuracy: 0.188 [Epoch 2/5] Iteration 1500 -> Train Loss: 0.0402, Accuracy: 0.438 [Epoch 2/5] Iteration 1550 -> Train Loss: 0.0402, Accuracy: 0.188 [Epoch 2/5] Iteration 1600 -> Train Loss: 0.0402, Accuracy: 0.250 [Epoch 2/5] Iteration 1650 -> Train Loss: 0.0402, Accuracy: 0.188 [Epoch 2/5] Iteration 1700 -> Train Loss: 0.0402, Accuracy: 0.375 [Epoch 2/5] Iteration 1750 -> Train Loss: 0.0402, Accuracy: 0.062 [Epoch 3/5] Iteration 1800 -> Train Loss: 0.0402, Accuracy: 0.250 [Epoch 3/5] Iteration 1850 -> Train Loss: 0.0402, Accuracy: 0.062 [Epoch 3/5] Iteration 1900 -> Train Loss: 0.0402, Accuracy: 0.250 [Epoch 3/5] Iteration 1950 -> Train Loss: 0.0402, Accuracy: 0.062 [Epoch 3/5] Iteration 2000 -> Train Loss: 0.0402, Accuracy: 0.250 [Epoch 3/5] Iteration 2050 -> Train Loss: 0.0402, Accuracy: 0.188 [Epoch 3/5] Iteration 2100 -> Train Loss: 0.0402, Accuracy: 0.062 [Epoch 3/5] Iteration 2150 -> Train Loss: 0.0402, Accuracy: 0.250 [Epoch 3/5] Iteration 2200 -> Train Loss: 0.0402, Accuracy: 0.375 [Epoch 3/5] Iteration 2250 -> Train Loss: 0.0402, Accuracy: 0.188 [Epoch 3/5] Iteration 2300 -> Train Loss: 0.0402, Accuracy: 0.125 [Epoch 3/5] Iteration 2350 -> Train Loss: 0.0402, Accuracy: 0.188 [Epoch 3/5] Iteration 2400 -> Train Loss: 0.0402, Accuracy: 0.125 [Epoch 3/5] Iteration 2450 -> Train Loss: 0.0402, Accuracy: 0.125 [Epoch 3/5] Iteration 2500 -> Train Loss: 0.0402, Accuracy: 0.250 [Epoch 3/5] Iteration 2550 -> Train Loss: 0.0402, Accuracy: 0.188 [Epoch 3/5] Iteration 2600 -> Train Loss: 0.0405, Accuracy: 0.125 [Epoch 4/5] Iteration 2650 -> Train Loss: 0.0402, Accuracy: 0.375 [Epoch 4/5] Iteration 2700 -> Train Loss: 0.0402, Accuracy: 0.000 [Epoch 4/5] Iteration 2750 -> Train Loss: 0.0402, Accuracy: 0.188 [Epoch 4/5] Iteration 2800 -> Train Loss: 0.0402, Accuracy: 0.125 [Epoch 4/5] Iteration 2850 -> Train Loss: 0.0402, Accuracy: 0.312 [Epoch 4/5] Iteration 2900 -> Train Loss: 0.0402, Accuracy: 0.312 [Epoch 4/5] Iteration 2950 -> Train Loss: 0.0402, Accuracy: 0.312 [Epoch 4/5] Iteration 3000 -> Train Loss: 0.0402, Accuracy: 0.312 [Epoch 4/5] Iteration 3050 -> Train Loss: 0.0402, Accuracy: 0.312 [Epoch 4/5] Iteration 3100 -> Train Loss: 0.0402, Accuracy: 0.062 [Epoch 4/5] Iteration 3150 -> Train Loss: 0.0402, Accuracy: 0.125 [Epoch 4/5] Iteration 3200 -> Train Loss: 0.0402, Accuracy: 0.062 [Epoch 4/5] Iteration 3250 -> Train Loss: 0.0402, Accuracy: 0.125 [Epoch 4/5] Iteration 3300 -> Train Loss: 0.0402, Accuracy: 0.188 [Epoch 4/5] Iteration 3350 -> Train Loss: 0.0402, Accuracy: 0.250 [Epoch 4/5] Iteration 3400 -> Train Loss: 0.0402, Accuracy: 0.312 [Epoch 4/5] Iteration 3450 -> Train Loss: 0.0402, Accuracy: 0.188 [Epoch 4/5] Iteration 3500 -> Train Loss: 0.0402, Accuracy: 0.062 [Epoch 5/5] Iteration 3550 -> Train Loss: 0.0402, Accuracy: 0.188 [Epoch 5/5] Iteration 3600 -> Train Loss: 0.0402, Accuracy: 0.062 [Epoch 5/5] Iteration 3650 -> Train Loss: 0.0402, Accuracy: 0.188 [Epoch 5/5] Iteration 3700 -> Train Loss: 0.0402, Accuracy: 0.062
st49384
Remove the last F.relu so that your model is able to return negative and positive logits and rerun the script. If that doesn’t help, try to overfit a small dataset (e.g. just 10 samples) by playing around with the hyperparameters.
st49385
I tried this, but didn’t help. When I removed the F.relu from the last layer, the loss deviated a bit, but it went just ± 0.002, and the accuracy also didn’t improve. I also added a large number of convolutional layers to try overfitting the model, but it still remains the same.
st49386
The model itself seems to be working and is able to overfit a small data sample perfectly: model = Model() optimizer = torch.optim.Adam(model.parameters(), lr=1e-3) data = torch.randn(10, 3, 140, 140) target = torch.randint(0, 10, (10,)) criterion = nn.CrossEntropyLoss() for epoch in range(100): optimizer.zero_grad() out = model(data) loss = criterion(out, target) loss.backward() optimizer.step() print('epoch {}, loss {}'.format(epoch, loss.item())) model.eval() pred = model(data) pred = torch.argmax(pred, dim=1) acc = (pred == target).float().mean() print(acc) > tensor(1.)
st49387
The model is correctly overfitting, which means there must be an issue with the training loop.
st49388
I am concatenating two vectors, say A and B, into a single vector. Each of them has a dimension of 128. During training I would like to randomly drop one of them completely. How can I do that?
st49389
You can simply define a dropout layer and pass the tensor to it. For example - a = torch.rand(1, 5) print(a) # output is tensor([[0.3119, 0.1485, 0.6420, 0.4604, 0.0724]]) dropout = torch.nn.Dropout(1) print(dropout(a)) # output is tensor([[0., 0., 0., 0., 0.]])
st49390
Yes the dropout layer will always make everything zero as it is set to 1. To randomly drop off the layer, generate a random number between 0 and 1 and if the number is greater than the threshold drop it prob = random.uniform(0, 1) if prob > threshold: b = dropout(b) Then you can concatenate the two vectors.
st49391
If you want to stack these tensors together, you could use something like this: a = torch.randn(128) b = torch.randn(128) c = torch.stack((a, b)) * torch.randperm(2).unsqueeze(1) which would zero out one of these tensors completely. If that’s not your use case, could you explain the shapes a bit more?
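A short usage sketch building on the snippet above (added for illustration, assuming A and B should afterwards be concatenated into one 256-dimensional vector):

```python
import torch

a = torch.randn(128)
b = torch.randn(128)

mask = torch.randperm(2).unsqueeze(1)            # either [[0], [1]] or [[1], [0]]
dropped = torch.stack((a, b)) * mask             # exactly one of the two rows becomes all zeros
combined = torch.cat((dropped[0], dropped[1]))   # shape [256], ready for the rest of the model
print(combined.shape)
```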
st49392
What could be used to replace LSTM sentence encoding with the Transformer model? Currently I am using nn.LSTM that accepts packed sequence that is constructed from an input (batch_size, max_seq_len, embed1) = (128, 20, 1024) and it outputs (1, batch_size, embed2) = (1, 128, 2048) E.g we learned a single embedding from multiple embeddings max_seq_len. How can I replace LSTM with Transformer to achieve the same results with exactly the same input? E.g input is a tensor of size (128, 20, 1024) and the output is a tensor of size (1, 128, 2048). Can I achieve this with nn.Tranformer or nn.TransformerEncoder? So far I could not figure out how to achieve this with these classes. Thank you
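One possible direction (a sketch added here, not a confirmed answer from the thread): run the sequence through nn.TransformerEncoder, pool over the time dimension, and project 1024 -> 2048. The mean-pooling and the projection layer are my own assumptions, and padding masks are omitted for brevity.

```python
import torch
import torch.nn as nn

batch_size, max_seq_len, embed1, embed2 = 128, 20, 1024, 2048

encoder_layer = nn.TransformerEncoderLayer(d_model=embed1, nhead=8)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
proj = nn.Linear(embed1, embed2)

x = torch.randn(batch_size, max_seq_len, embed1)
out = encoder(x.transpose(0, 1))        # nn.TransformerEncoder expects (seq_len, batch, d_model)
pooled = out.mean(dim=0)                # (128, 1024): one embedding per sequence
result = proj(pooled).unsqueeze(0)      # (1, 128, 2048), matching the LSTM output shape
print(result.shape)
```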
st49393
Hi, I want to save my model during training. My model has 9 conv layers with batch norm and softmax as the activation function. I use this method to save my training model, but when I resumed training it seemed to restart from the beginning. state = { 'epoch': epoch, 'state_dict': model.state_dict(), 'optimizer': optimizer.state_dict(), } torch.save(state, '/home/superblock/state_train.pt') state = torch.load('/home/superblock/state_train.pt') model.load_state_dict(state['state_dict']) optimizer.load_state_dict(state['optimizer']) What should I do? Does it need model.eval()? I read somewhere that for resuming training we don't use that. And I have another question: is it better to save when the loss is lower on the validation data or on the training data?
st49394
Solved by a_d in post #2.
st49395
model.eval() changes the behaviour of a few layers like batch norm and dropout. So if you want to run inference with your model, put the model in .eval() mode; if you want to continue training, put the model back into training mode using model.train(). Save at the lowest validation loss.
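A short sketch of both cases, reusing the checkpoint format and variable names from the question (the total epoch count below is a placeholder):

```python
import torch

state = torch.load('/home/superblock/state_train.pt')
model.load_state_dict(state['state_dict'])
optimizer.load_state_dict(state['optimizer'])
start_epoch = state['epoch'] + 1
num_epochs = 100                        # placeholder: total number of epochs to train

model.train()                           # resume training
for epoch in range(start_epoch, num_epochs):
    ...                                 # the usual training loop goes here

# For inference only:
# model.eval()
# with torch.no_grad():
#     output = model(some_input)
```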
st49396
Not sure if this is a feature, but is it possible to have individual linear parameters for each input along the axis that is not the input channel? Say an Nx4 matrix is fed to an nn.Linear layer with 4 inputs and 4 outputs; I would then have a parameter of size Nx4 for the bias and Nx4x4 for the weight instead of 4 and 4x4. (I know I can do this manually by setting up my own parameters, but is there a way to do so with nn.Linear?)
st49397
Solved by peepeepoopoo in post #2 just do it the way you think how it might work
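For readers who land here: a manual sketch of "doing it yourself" with a per-row weight and bias (my own example; nn.Linear itself always shares a single weight matrix across rows):

```python
import torch
import torch.nn as nn

class RowwiseLinear(nn.Module):
    """Applies a separate 4x4 weight and 4-d bias to every row of an Nx4 input."""
    def __init__(self, n_rows, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_rows, out_features, in_features) * 0.1)
        self.bias = nn.Parameter(torch.zeros(n_rows, out_features))

    def forward(self, x):                           # x: (n_rows, in_features)
        return torch.einsum('noi,ni->no', self.weight, x) + self.bias

layer = RowwiseLinear(n_rows=8, in_features=4, out_features=4)
out = layer(torch.randn(8, 4))                      # (8, 4), each row transformed by its own weights
print(out.shape)
```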
st49398
I am using an LSTM to build a language model and I found that freezing the embedding weight makes Volatile GPU-Util go up (30% -> 80%). I don't understand why this happens. Could you give me some advice about this issue? weight_vec = torch.load('./data/pretrained_embedding.pt') model.emb.weight.data.copy_(weight_vec) model.emb.weight.requires_grad = False
st49399
Solved by albanD in post #2.
st49400
Hi, This might be because the autograd is not running, so the CPU has less work to do and can send work to the GPU faster?
st49401
I see. I asked just out of curiosity. So the CPU is supposed to do autograd work even in CUDA (GPU) mode? I didn't know that. Many thanks.
st49402
I have a multi-class classification problem (5 classes), and the data is imbalanced. So I was thinking if we can let the model learn the best weights by making the “weights” option in CrossEntropyLoss learnable parameters. Is this a valid assumption or not?
st49403
Hello Emad! emad: So I was thinking if we can let the model learn the best weights by making the “weights” option in CrossEntropyLoss learnable parameters. First, you would not want to. Consider what would happen if you tried. Your optimizer would drive your loss to zero simply by driving all of the learnable class weights to zero (with reduction = 'sum') or drive only one class to have a non-zero weight, and drive your model to predict only that class (with reduction = 'mean'). Second, as written, pytorch’s CrossEntropyLoss doesn’t support calculating gradients with respect to the class weights. (You could, of course, write your own version of CrossEntropyLoss that did, but, in line with my first comment, you wouldn’t want to.) Best. K. Frank
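The usual alternative is to keep the class weights fixed, e.g. derived from the inverse class frequencies, or to use a WeightedRandomSampler. A minimal sketch with made-up class counts:

```python
import torch
import torch.nn as nn

class_counts = torch.tensor([1000., 200., 150., 100., 50.])          # hypothetical counts per class
weights = class_counts.sum() / (len(class_counts) * class_counts)    # rarer classes get larger weights

criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 5)                 # dummy model output for a batch of 8
targets = torch.randint(0, 5, (8,))        # dummy targets
print(criterion(logits, targets).item())
```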
st49404
(screenshot attached: pytorchhh, 888×414, 12.3 KB) I am getting the error - size mismatch, m1: [10 x 32], m2: [320 x 564] at C:\cb\pytorch_1000000000000\work\aten\src\TH/generic/THTensorMath.cpp:41
st49405
Solved by ptrblck in post #4.
st49406
Thanks for the answer, but I have one doubt: since my batch size is 10, do the images get loaded one by one or simultaneously?
st49407
The complete input batch will be used and you don’t have to specify the batch size in any layer arguments, such as the number of input features etc.
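A sketch of what the shape mismatch usually means (my reading of the error, since the attached screenshot isn't reproduced here): the tensor reaching the linear layer has 32 features per sample, but the layer was apparently built with in_features = 320 = 10 * 32, i.e. the batch size was folded into the layer size. The layer should only know the per-sample feature count:

```python
import torch
import torch.nn as nn

x = torch.randn(10, 32)      # batch of 10 samples, 32 features each
fc = nn.Linear(32, 564)      # in_features = 32; the batch size never appears here
out = fc(x)                  # (10, 564)
print(out.shape)
```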
st49408
I was building an Image Classification model using PyTorch, where I came across this error while training. ValueError: empty range for randrange() (0,-14, -14) I am using the dataset available on Kaggle here 1. I wasn’t able to identify what might be the issue here. I attaching the relevant code: Model Definition: class Model(nn.Module): def __init__(self): super(Model, self).__init__() self.conv1 = nn.Conv2d(3, 16, kernel_size = (3, 3)) self.conv2 = nn.Conv2d(16, 64, kernel_size = (3, 3)) self.conv3 = nn.Conv2d(64, 256, kernel_size = (3, 3)) self.maxpool = nn.MaxPool2d(2, stride = 3) self.flatten = nn.Flatten() self.fc1 = nn.Linear(4*4*256, 256) self.fc2 = nn.Linear(256, 32) self.fc3 = nn.Linear(32, 10) def forward(self, image): image = F.relu(self.conv1(image)) image = self.maxpool(image) image = F.relu(self.conv2(image)) image = self.maxpool(image) image = F.relu(self.conv3(image)) image = self.maxpool(image) image = self.flatten(image) image = F.relu(self.fc1(image)) image = F.relu(self.fc2(image)) image = image.reshape(image.shape[0], -1) image = F.relu(self.fc3(image)) return image model = Model() And the training script: device = 'cpu' model = model.to(device) optimizer = torch.optim.Adam(model.parameters(), lr = 0.01) criterion = nn.CrossEntropyLoss() epochs = 5 training_loss = [] accuracy = [] thresh = 50 iters = 0 total_loss = 0 for e in range(epochs): for sample, label in image_loader: sample, label = sample.to(device), label.to(device) optimizer.zero_grad() output = model(sample) loss = criterion(output, label) loss.backward() optimizer.step() total_loss += loss.item() if iters%thresh == 0: pred = torch.argmax(output, dim = 1) correct = pred.eq(label) acc = torch.mean(correct.float()) print('[Epoch {}/{}] Iteration {} -> Train Loss: {:.4f}, Accuracy: {:.3f}'.format(e+1, epochs, iters, loss/thresh, acc)) training_loss.append(loss) accuracy.append(acc) total_loss = 0 iters += 1 plt.plot(loss_list, label='loss') plt.plot(acc_list, label='accuracy') plt.legend() plt.title('training loss and accuracy') plt.show() Also, the logs: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-22-db0270104083> in <module> 11 total_loss = 0 12 for e in range(epochs): ---> 13 for sample, label in image_loader: 14 sample, label = sample.to(device), label.to(device) 15 optimizer.zero_grad() /opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py in __next__(self) 361 362 def __next__(self): --> 363 data = self._next_data() 364 self._num_yielded += 1 365 if self._dataset_kind == _DatasetKind.Iterable and \ /opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py in _next_data(self) 401 def _next_data(self): 402 index = self._next_index() # may raise StopIteration --> 403 data = self._dataset_fetcher.fetch(index) # may raise StopIteration 404 if self._pin_memory: 405 data = _utils.pin_memory.pin_memory(data) /opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index) 42 def fetch(self, possibly_batched_index): 43 if self.auto_collation: ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index] 45 else: 46 data = self.dataset[possibly_batched_index] /opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in <listcomp>(.0) 42 def fetch(self, possibly_batched_index): 43 if self.auto_collation: ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index] 45 else: 46 data = self.dataset[possibly_batched_index] 
<ipython-input-16-449d83383611> in __getitem__(self, idx) 10 def __getitem__(self, idx): 11 image = Image.open(self.data['filename'][idx]) ---> 12 image = self.transform(image) 13 image = image.numpy().astype('float32') 14 label = self.data['labels'][idx] /opt/conda/lib/python3.7/site-packages/torchvision/transforms/transforms.py in __call__(self, img) 59 def __call__(self, img): 60 for t in self.transforms: ---> 61 img = t(img) 62 return img 63 /opt/conda/lib/python3.7/site-packages/torchvision/transforms/transforms.py in __call__(self, img) 518 img = F.pad(img, (0, self.size[0] - img.size[1]), self.fill, self.padding_mode) 519 --> 520 i, j, h, w = self.get_params(img, self.size) 521 522 return F.crop(img, i, j, h, w) /opt/conda/lib/python3.7/site-packages/torchvision/transforms/transforms.py in get_params(img, output_size) 496 return 0, 0, h, w 497 --> 498 i = random.randint(0, h - th) 499 j = random.randint(0, w - tw) 500 return i, j, th, tw /opt/conda/lib/python3.7/random.py in randint(self, a, b) 220 """ 221 --> 222 return self.randrange(a, b+1) 223 224 def _randbelow(self, n, int=int, maxsize=1<<BPF, type=type, /opt/conda/lib/python3.7/random.py in randrange(self, start, stop, step, _int) 198 return istart + self._randbelow(width) 199 if step == 1: --> 200 raise ValueError("empty range for randrange() (%d,%d, %d)" % (istart, istop, width)) 201 202 # Non-unit step argument supplied. ValueError: empty range for randrange() (0,-14, -14)
st49409
I guess you are using a random crop transformation, which apparently fails. Could you post the code for the Dataset and the transformations you are passing to it?
st49410
Yes, I am using the RandomCrop transformation Here is the code: class ImageDataset(Dataset): def __init__(self, data, transform = None): self.data = data if transform != None: self.transform = transform else: self.transform = transforms.Compose([transforms.ToTensor()]) def __len__(self): return len(self.data) def __getitem__(self, idx): image = Image.open(self.data['filename'][idx]) image = self.transform(image) image = image.numpy().astype('float32') label = self.data['labels'][idx] return image, label transform = transforms.Compose([ transforms.ColorJitter(), transforms.RandomCrop(128), transforms.RandomHorizontalFlip(), transforms.RandomVerticalFlip(), transforms.ToTensor() ])
st49411
Are you making sure the images are not smaller than the crop size of 128? I get a similar issue if I pass small images to the transformation: img = TF.to_pil_image(torch.randn(3, 50, 50)) crop = transforms.RandomCrop(128) out = crop(img) > RuntimeError: random_ expects 'from' to be less than 'to', but got from=0 >= to=-77 The error message might be different if you are using an older torchvision, since the random size sampling was changed in this PR to torch.randint.
st49412
I figured it out. The image size was 150x150. It is working now; I replaced RandomCrop with Resize and the model is now training. Can you also help me with this one: Loss remains constant/unchanged?
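For reference, a sketch of a transform pipeline that avoids the error (assuming the same augmentations as above): either resize first so the crop always has enough pixels, or let RandomCrop pad small images.

```python
from torchvision import transforms

transform = transforms.Compose([
    transforms.ColorJitter(),
    transforms.Resize(150),                           # shorter side becomes 150, so 128 always fits
    transforms.RandomCrop(128, pad_if_needed=True),   # pad_if_needed guards against smaller images
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.ToTensor(),
])
```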
st49413
I have an input of BxKx768 which is my embedded features. Is there a way to give them to a transformer and get the output of size BxM where M can be any number?
st49414
Is there any way of changing the sample rate using torchaudio, either when loading it or afterwards via a transform, similar to how librosa allows librosa.load('soundfile.mp3', sr=16000)? This is an essential feature to have, as all ML models require a fixed sample rate of audio, but I cannot find it anywhere in the docs.
st49415
It’s not possible with the torchaudio library as of now. You could preprocess your files with a shell script that uses sox to do this. #!/bin/bash TMPDIR=/tmp/sox for fn in $(find . -name "*.wav"); do TMPFILE=$TMPDIR/$(basename $fn) sox $fn $TMPFILE rate 16000 mv $TMPFILE $fn done
st49416
Thanks David. Do you think there’s any chance that this feature will be added to torchaudio in a later release? It would be very useful to have.
st49417
Maybe, I’ve got some time and have been playing around with the pytorch cpp extensions. I’ll look into how difficult it would be to integrate more of the sox functions into the torchaudio library. But no promises.
st49418
Hi David, I just got round to running your script, but I get the error sox FAIL formats: can't open output file `/tmp/sox/filename.mp3': No such file or directory mv: cannot stat '/tmp/sox/filename.mp3': No such file or directory Any ideas why this is happening?
st49419
yes, /tmp/sox doesn’t exist. You need to create it first. Or use a different temporary directory.
st49420
@Blaze I know there doesn’t exist anything in torchaudio to do that for you, but I needed a back-propagable method for changing the sample_rate. So I’m using this: import torch.nn as nn import torch import torchaudio class ChangeSampleRate(nn.Module): def __init__(self, input_rate: int, output_rate: int): super().__init__() self.output_rate = output_rate self.input_rate = input_rate def forward(self, wav: torch.tensor) -> torch.tensor: # Only accepts 1-channel waveform input wav = wav.view(wav.size(0), -1) new_length = wav.size(-1) * self.output_rate // self.input_rate indices = (torch.arange(new_length) * (self.input_rate / self.output_rate)) round_down = wav[:, indices.long()] round_up = wav[:, (indices.long() + 1).clamp(max=wav.size(-1) - 1)] output = round_down * (1. - indices.fmod(1.)).unsqueeze(0) + round_up * indices.fmod(1.).unsqueeze(0) return output if __name__ == '__main__': wav, sr = torchaudio.load('small_stuff/original.wav') osr = 22050 batch = wav.unsqueeze(0).repeat(10, 1, 1) csr = ChangeSampleRate(sr, osr) out_wavs = csr(wav) torchaudio.save('down1.wav', out_wavs[0], osr)
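For readers on newer releases: torchaudio later gained a built-in resampling transform (this is an addition to the thread and assumes a version that ships torchaudio.transforms.Resample; the file names are placeholders):

```python
import torchaudio

wav, sr = torchaudio.load('soundfile.wav')                              # placeholder file name
resample = torchaudio.transforms.Resample(orig_freq=sr, new_freq=16000)
wav_16k = resample(wav)
torchaudio.save('soundfile_16k.wav', wav_16k, 16000)
```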
st49421
I am a beginner in Machine Learning and I want to use GPU to run my code. it works with the CPU (without model.cuda ()), but it doesn’t work with the GPU. Can anyone help me please? Code : class CNNModel(nn.Module): def __init__(self): super(CNNModel, self).__init__() self.conv_layer1 = self._conv_layer_set(3, 32) #The first dimension of Pytorch Convolution should always be #the the number of channels (3) self.conv_layer2 = self._conv_layer_set(32, 64) self.fc1 = nn.Linear(64*28*28*28, 2) self.fc2 = nn.Linear(1404928, num_classes) ###### take care self.relu = nn.LeakyReLU() self.batch=nn.BatchNorm1d(2) self.drop=nn.Dropout(p=0.15, inplace = True) def _conv_layer_set(self, in_c, out_c): conv_layer = nn.Sequential( nn.Conv3d(in_c, out_c, kernel_size=(3, 3, 3), padding=0), nn.LeakyReLU(), nn.MaxPool3d((2, 2, 2)), ) return conv_layer def forward(self, x): # Set 1 out = self.conv_layer1(x) out = self.conv_layer2(out) out = out.view(out.size(0), -1) out = self.fc1(out) out = self.relu(out) out = self.batch(out) out = self.drop(out) out = F.softmax(out, dim=1) return out #Definition of hyperparameters n_iters = 2 num_epochs = 2 # Create CNN model = CNNModel() model.cuda() print(model) # Cross Entropy Loss for param in model.parameters(): param.requires_grad = True error = nn.CrossEntropyLoss() # SGD Optimizer learning_rate = 0.001 optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate) ..... for epoch in range(num_epochs): outputs = [] outputs= torch.tensor(outputs) for fold in range(0, len(training_data), 4): xtrain = training_data[fold : fold+4] xtrain = torch.Tensor(xtrain) xtrain = Variable(xtrain.view(4,3,120,120,120)) # Clear gradients optimizer.zero_grad() # Forward propagation v = model(xtrain) outputs = torch.cat((outputs,v.detach()),dim=0) # Calculate softmax and ross entropy loss targets = torch.Tensor(targets) labels = targets outputs = torch.tensor(outputs, requires_grad=True) _, predicted = torch.max(outputs, 1) accuracy = accuracyCalc(predicted, targets) labels = labels.long() labels=labels.view(-1) loss = nn.CrossEntropyLoss() loss = loss(outputs, labels) # Calculating gradients loss.backward() # Update parameters optimizer.step() loss_list_train.append(loss.data) accuracy_list_train.append(accuracy/100) Result : ----------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-1-927d13183a0f> in <module> 173 optimizer.zero_grad() 174 # Forward propagation --> 175 v = model(xtrain) 176 outputs = torch.cat((outputs,v.detach()),dim=0) 177 # Calculate softmax and ross entropy loss /opt/tljh/user/envs/fethi_env/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 487 result = self._slow_forward(*input, **kwargs) 488 else: --> 489 result = self.forward(*input, **kwargs) 490 for hook in self._forward_hooks.values(): 491 hook_result = hook(self, input, result) <ipython-input-1-927d13183a0f> in forward(self, x) 108 def forward(self, x): 109 # Set 1 --> 110 out = self.conv_layer1(x) 111 out = self.conv_layer2(out) 112 out = out.view(out.size(0), -1) /opt/tljh/user/envs/fethi_env/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 487 result = self._slow_forward(*input, **kwargs) 488 else: --> 489 result = self.forward(*input, **kwargs) 490 for hook in self._forward_hooks.values(): 491 hook_result = hook(self, input, result) /opt/tljh/user/envs/fethi_env/lib/python3.6/site-packages/torch/nn/modules/container.py in forward(self, input) 90 
def forward(self, input): 91 for module in self._modules.values(): ---> 92 input = module(input) 93 return input 94 /opt/tljh/user/envs/fethi_env/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 487 result = self._slow_forward(*input, **kwargs) 488 else: --> 489 result = self.forward(*input, **kwargs) 490 for hook in self._forward_hooks.values(): 491 hook_result = hook(self, input, result) /opt/tljh/user/envs/fethi_env/lib/python3.6/site-packages/torch/nn/modules/conv.py in forward(self, input) 446 def forward(self, input): 447 return F.conv3d(input, self.weight, self.bias, self.stride, --> 448 self.padding, self.dilation, self.groups) 449 450 RuntimeError: Expected object of backend CPU but got backend CUDA for argument #2 'weight' thank you in advance
st49422
Solved by a_d in post #2.
st49423
As the error suggests, both your model and your input to the model should lie on the same device either the CPU or the GPU. Therefore once you shift the model to gpu, shift your input tensors too. Also, I suggest to not use the Variable API as it’s an old API. Not using the Variable would do too. xtrain = torch.tensor(xtrain).cuda() xtrain = xtrain.view(4, 3, 120, 120, 120) # Do not use the Variable....
st49424
Thank you for answering @a_d I tried this but it gives another error message --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-6-629bcd318457> in <module> 174 # Forward propagation 175 v = model(xtrain) --> 176 outputs = torch.cat((outputs,v.detach()),dim=0) 177 # Calculate softmax and ross entropy loss 178 targets = torch.Tensor(targets) RuntimeError: Expected object of backend CPU but got backend CUDA for sequence element 1 in sequence argument at position #1 'tensors'
st49425
This arises because the two tensors you want to cat are not on the same device; shift them either to the CPU or the GPU. Anna_yah: outputs = torch.cat((outputs,v.detach()),dim=0) Also, here you are concatenating an empty tensor to your model output; I do not know what exactly the overall model and objective are, but this statement seems to have no effect. Anna_yah: outputs = torch.tensor(outputs, requires_grad=True) Here you do not need to create the tensor all over again; you could just do outputs.requires_grad = True
st49426
a_d: Also, here you are concatenating an empty tensor to your model output; I do not know what exactly the overall model and objective are, but this statement seems to have no effect. The dataset is very large, so I split it into batches of 4 items and then concatenate the result of each batch with the other results. This way I can use all the images in the dataset in each epoch.
st49427
I used outputs= torch.tensor(outputs).cuda() and it works ! Thank you very much @a_d
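A compact version of the pattern discussed in this thread (a sketch reusing the CNNModel defined in the question; the input batch below is a random stand-in): create everything on one device and keep the accumulator there as well.

```python
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = CNNModel().to(device)                        # model weights on the chosen device
outputs = torch.empty(0, 2, device=device)           # accumulator on the same device; 2 = num classes

xtrain = torch.randn(4, 3, 120, 120, 120, device=device)   # stand-in for a real batch of 4 volumes
v = model(xtrain)                                    # input and weights now live on the same device
outputs = torch.cat((outputs, v.detach()), dim=0)    # no CPU/CUDA mismatch in the cat any more
```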
st49428
Hello all, I am having trouble understanding how the dataloader works internally, especially when we define the number of workers. I noticed a weird behavior and made a minimal code snippet replicating the issue. Here is the dataset class. class testClass(Dataset): def __init__(self): pass def __len__(self): return 1000 def __getitem__(self, item): print("Accessing the __getitem__ method") return torch.rand(10) I am calling the class as and testing as follows - dataset = testClass() dataloader = DataLoader(dataset, batch_size=5, num_workers=5) for _, data in enumerate(dataloader): print(data.shape) print('--------------------------------------------------') break when my batch_size is 5, the output is as follows - Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method torch.Size([5, 10]) -------------------------------------------------- Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method There are 2 things I am not able to understand. firstly, why is the __getitem__ method being called so many times, since my batch is 5, I expect it to be called only 5 times. Secondly, as you can see the why is it being called after I have printed the dashed lines, I have already received my first batch of data and added a break . Nothing should be printed after the dashed lines I suppose. Also the behaviour is not always the same, it sometimes prints or not prints after the dashed lines. This is also the issue when I specify num_workers=1 and batch_size=1. The output for this combination is as follows- Accessing the __getitem__ method Accessing the __getitem__ method torch.Size([1, 10]) -------------------------------------------------- Again it is being called twice. The only time I notice the expected behavior is when I do not pass the num_workers argument. 
For example for batch_size=1 and not passing num_workers the output is as follows - Accessing the __getitem__ method torch.Size([1, 10]) -------------------------------------------------- and for batch_size=5 the output is as follows(Again not passing num_workers argument when calling Dataloader). Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method Accessing the __getitem__ method torch.Size([5, 10]) -------------------------------------------------- When dealing with my original issue, I realized that I was loading gibberish data all along, that is wrong labels corresponding to input data… What am I doing wrong here?
st49429
Hi, firstly, why is the __getitem__ method being called so many times, since my batch is 5, I expect it to be called only 5 times This is because the worker processes are loading batches in advance, to be able to provide them to the training as quickly as possible when it asks for a new one. Secondly, as you can see the why is it being called after I have printed the dashed lines, I have already received my first batch of data and added a break . Nothing should be printed after the dashed lines I suppose. This most likely happens because the workers that load the data work asynchronously and load the data before the main process tells them to stop. Again it is being called twice. Again because it preloads 2 batches in advance. You can check the doc for DataLoader on master, and in particular the prefetch_factor argument that allows you to control how many batches are loaded in advance: https://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader
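A short sketch of controlling this with the dataset defined above (prefetch_factor is per worker, needs num_workers > 0, and requires a PyTorch version that exposes the argument):

```python
from torch.utils.data import DataLoader

dataset = testClass()
dataloader = DataLoader(dataset, batch_size=5, num_workers=2, prefetch_factor=2)

# Each worker keeps up to prefetch_factor batches ready, which is why __getitem__
# fires for more indices than a single batch before the loop body runs.
for _, data in enumerate(dataloader):
    print(data.shape)
    break
```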
st49430
Thanks a lot for your reply @albanD. a_d: Accessing the __getitem__ method Accessing the __getitem__ method torch.Size([1, 10]) -------------------------------------------------- Here, I noticed that it loaded the data of the first sample and the label of the second. This is actually the reason I started digging into the DataLoader… I am still not sure why that was happening.
st49431
a_d: Here, I noticed that it loaded the data of the first sample and the label of the second. I'm not sure what you mean by that? There is no concept of data and label here. The Dataset should return the right pair of them for a given item.
st49432
Hi, is there any good video tutorial for the PyTorch profilers? Currently I am lost between the profilers and utils.bottleneck. I read all the discussion questions here mentioning profilers but could not find a good starting point, as it's my first time diving into this topic. Currently there is just this one on YouTube as per my search, but that's also in Korean without any subtitles. I have seen the written tutorials, but they were not very explanatory for a beginner. Thanks.
st49433
Hi, have you seen our profiler recipe doc? https://pytorch.org/tutorials/recipes/recipes/profiler.html As for videos - please stay tuned and make sure to check our upcoming virtual developer day in November with many videos and tutorials; we plan to have one for the profiler too.
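For a quick start, the recipe boils down to something like this minimal sketch (any model works; resnet18 is just used as an example here):

```python
import torch
import torchvision.models as models
import torch.autograd.profiler as profiler

model = models.resnet18()
inputs = torch.randn(5, 3, 224, 224)

with profiler.profile(record_shapes=True) as prof:
    with profiler.record_function("model_inference"):
        model(inputs)

# Summarize where the time went, sorted by total CPU time.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```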
st49434
I have spent quite a bit of time looking for an example of using textual, numerical and categorical features together, but I couldn't find one. It would be nice to see a concrete example. Can you help with that?
st49435
Solved by a_d in post #12.
st49436
Hello, Firstly, it actually depends on how this data is correlated, and understanding that would be the first step towards incorporating this data in your model. You may want to introduce some data in the later layers of your model, and sometimes you can pass all the data to your model at once. Secondly, you might want to extract features from your data, and doing so from categorical data is tough, especially because one-hot encoded representations of such data result in sparse tensors, and applying normal layers to such data would yield meaningless results. I generally apply one of two strategies. One is to use an embedding layer for the one-hot encoded representation of each categorical field and then concatenate them. Another way is to use sparse convolutions and sparse linear layers; Nvidia has a package called Minkowski Engine which has sparse implementations of the convolutional operations. You might want to go through the theory first to see which fits your case better. To load the text data, load it as pretrained vector representations of the words. Is it a standard dataset you are working with?
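A rough sketch of the first strategy (embedding the categorical field and concatenating it with the numerical features and a text vector); all sizes below are made up for illustration:

```python
import torch
import torch.nn as nn

class MixedFeatureModel(nn.Module):
    def __init__(self, n_categories=10, cat_emb_dim=8, num_numeric=4, text_emb_dim=300):
        super().__init__()
        self.cat_emb = nn.Embedding(n_categories, cat_emb_dim)   # dense lookup instead of one-hot
        self.head = nn.Sequential(
            nn.Linear(cat_emb_dim + num_numeric + text_emb_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, cat_idx, numeric, text_vec):
        # cat_idx: (B,) long, numeric: (B, num_numeric), text_vec: (B, text_emb_dim)
        x = torch.cat([self.cat_emb(cat_idx), numeric, text_vec], dim=1)
        return self.head(x)

model = MixedFeatureModel()
out = model(torch.randint(0, 10, (2,)), torch.randn(2, 4), torch.randn(2, 300))
print(out.shape)   # torch.Size([2, 1])
```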
st49437
My previous reply covered the way you could include categorical data in your model. Your processing layers will be a part of your model, as they would be trainable. Could you please be more specific as to what exactly you are looking for and where you are getting stuck, maybe with a dummy example or a single datapoint from your dataset?
st49438
Not yet, I got a little busy, but I assume your main problem is converting the textual data to tensors? Am I correct?
st49439
Hello, I understand the issue now. Would I be right to assume that the numbers denote some weighting of the text? In that case, a simple element-wise multiplication of these weights (i.e. your numbers) with the vectorized text should work. Combining these three pieces of information might be done through some operation which relates them. This might be the way to go.
st49440
Okay, so you have many nodes containing data (like a graph or tree)… Well, given you have text, you would be padding the sentences to make them the same length. You might as well pad these numbers with zeros to make them the same length as your sentence and concat them with the embeddings. On a side note, if your data is structured like a tree, have you tried out graph networks and structured learning?
st49441
Hi everyone I have a general question regarding saving and loading models in PyTorch. My case: I save a checkpoint consisting of the model.state_dict, optimizer.state_dict, and the last epoch. The saved checkpoint refers to the best performing model, evaluated by accuracy. I load all the three checkpoint entries and resume…However, I do not want to continue training but I want to use the saved state and make one forward pass to get the same accuracy as I had when I saved the checkpoint. How can I do that? Basically, I want to be able to reproduce my results, since I have not figured out how to seed in PyTorch. It somehow does not really work…So I figured I can do it by saving and loading models. Any help is very much appreciated! All the best, snowe
st49442
Hi, first of all, here you can find a short documentation about reproducibility in PyTorch. I think your results could be reproduced if you're able to load the same data as in the original evaluation (e.g. your validation dataset) and you haven't used any random operations in your first evaluation, like Dropout for example. How different are your current results from the original ones?
st49443
Hi @Caruso, thank you for reaching out! I am aware of the different seeding functions in PyTorch and I have used them. Somehow my results differ in the range of roughly 1 - 2%. My model does not use Dropout… All the best, snowe
st49444
UPDATE I was able to make my results reproducible by using the following code: def set_seed(seed): torch.manual_seed(seed) np.random.seed(seed) random.seed(seed) # for cuda torch.cuda.manual_seed_all(seed) torch.backends.cudnn.deterministic = True torch.backends.cudnn.benchmark = False torch.backends.cudnn.enabled = False I call the set_seed function with every run I do. Tbh, I don’t fully understand why I cannot just set the seed once, but it works
st49445
Help!!! Does anyone know how to insert a new layer in the middle of a pre-trained model? E.g. insert a new conv in the middle of ResNet's bottleneck.
st49446
There’s no easy way to insert a new layer in the middle of an existing model as far as I’m aware. A definite solution is to build the structure that you want in a new class and then copy the corresponding weights over from the pretrained model.
st49447
Figured it out!!! After loading the model, we can directly specify model.conv_x = nn.Sequential(new_layer, model.conv_x); this way, we can still use the pretrained model.conv_x
st49448
@wangchust Can you help a newbie like me? I import a pretrained resnet34 as: resnet = models.resnet34(pretrained=True) Now I want to insert a conv2d 1x1 kernel layer before the fc to increase the channel size from 512 to 6000 and then add an fc of 6000 x 6000. I am so new to pytorch that I need some hand-holding. Can you write the lines of code needed? I am still at the monkey-see monkey-learn stage. Thanks in anticipation
st49449
I think the easiest approach would be to derive from ResNet and add your layers. This should do what you need: class MyResnet2(models.ResNet): def __init__(self, block, layers, num_classes=1000): super(MyResnet2, self).__init__(block, layers, num_classes) self.conv_feat = nn.Conv2d(in_channels=512, out_channels=6000, kernel_size=1) self.fc = nn.Linear(in_features=6000, out_features=6000) def forward(self, x): x = self.conv1(x) x = self.bn1(x) x = self.relu(x) x = self.maxpool(x) x = self.layer1(x) x = self.layer2(x) x = self.layer3(x) x = self.layer4(x) x = self.avgpool(x) x = self.conv_feat(x) x = x.view(x.size(0), -1) x = self.fc(x) return x from torchvision.models.resnet import BasicBlock model = MyResnet2(BasicBlock, [3, 4, 6, 3], 1000) x = Variable(torch.randn(1, 3, 224, 224)) output = model(x) Note that the shape of x in x= self.avgpool(x) is already [batch, 512, 1, 1] for x.shape = [batch, 3 ,224, 224]. You could therefore flatten x and just use two Linear layers, since it would be the same as a Conv2d with kernel_size=1.
st49450
@ptrblck Thank you, sir! Now if I re-read some of the tutorials, they will register in my head.
st49451
Sorry for being so late to reply. My way would be to replace Resnet.fc = nn.Linear(512, num_classes) with Resnet.fc = nn.Sequential(nn.Conv2d(512, 6000, 1), nn.Linear(6000, 6000)).
st49452
Using this approach you would have to define a Flatten layer, since the model would otherwise crash inside the Sequential container:

import torch
import torch.nn as nn
from torch.autograd import Variable

class Flatten(nn.Module):
    def __init__(self):
        super(Flatten, self).__init__()

    def forward(self, x):
        x = x.view(x.size(0), -1)
        return x

seq = nn.Sequential(
    nn.Conv2d(512, 6000, 1),
    Flatten(),
    nn.Linear(6000, 6000)
)

x = Variable(torch.randn(1, 512, 1, 1))
seq(x)
st49453
I find this very useful for creating a custom model that inserts a new layer in the middle of ResNet, but how do we get the pretrained weights? This gives me a new network that doesn’t have the pretrained weights. Thank you.
st49454
I want to add attention layers to a pretrained ResNet model. How can I do so after every ResNet block in the model?
st49455
This is what I did, and I think it works: vgg11.features[0] = nn.Sequential(inserted_layer, vgg11.features[0]) I checked the parameters of both versions, and the pretrained parameters seem to be preserved.
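One way to verify that, sketched with a hypothetical 1x1 inserted layer:

import torch
import torch.nn as nn
import torchvision.models as models

vgg11 = models.vgg11(pretrained=True)
original_weight = vgg11.features[0].weight.clone()

inserted_layer = nn.Conv2d(3, 3, kernel_size=1)   # hypothetical new layer
vgg11.features[0] = nn.Sequential(inserted_layer, vgg11.features[0])

# the original pretrained conv is now the second module inside the Sequential
print(torch.equal(vgg11.features[0][1].weight, original_weight))  # True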
st49456
I first divided the model into two parts; the 1st half contains the layers before the place where you want to add the new layer, and the 2nd half contains the layers after it. So something like this:

encoder = nn.Sequential(*list(m.children())[:8])
decoder = nn.Sequential(*list(m.children())[8:])

Then I added the layers I need, took the list of layers, and appended each layer of the 2nd half onto the 1st one. Like this:

tmp_1 = list(encoder.children())
tmp_2 = list(decoder.children())
for i in tmp_2:
    tmp_1.append(i)

And then:

model = nn.Sequential(*tmp_1)
st49457
Hi, I am new to deep learning. I am trying to build a CNN on CSV data that was derived from images, to predict x, y, z coordinates for a mono pong game. Now I am stuck at the preprocessing of the data. I have a (667, 225) CSV as the input set and a (667, 3) CSV of labels. I read through the internet but am still struggling with reshaping the CSV. How should I convert the CSV to a tensor? The actual pixel size of the original images is (1920, 1440). (I do not have the images.) Can anyone tell me how I can prepare the dataset to feed it to the architecture? And one more thing: in a CNN architecture, how do we decide the input size, hidden layers and kernel size? I know there is lots of information on the internet, but after reading it I became more confused. Please help me out. Thanks in advance.
st49458
From your CSV shape, I would have to say your images might be 15x15, and 667 is the number of images. For a PyTorch CNN the tensor should then be shaped (667, 1, 15, 15), following the (batch_size, num_channels, height, width) layout that nn.Conv2d expects.
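A minimal sketch of that conversion; the file names are placeholders and the CSVs are assumed to have no header row:

import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

inputs = np.loadtxt('inputs.csv', delimiter=',', dtype=np.float32)   # shape (667, 225)
labels = np.loadtxt('labels.csv', delimiter=',', dtype=np.float32)   # shape (667, 3)

x = torch.from_numpy(inputs).view(-1, 1, 15, 15)   # (667, 1, 15, 15) for nn.Conv2d
y = torch.from_numpy(labels)                       # (667, 3) coordinate targets

dataset = TensorDataset(x, y)
loader = DataLoader(dataset, batch_size=32, shuffle=True)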
st49459
I’m building a model similar to a GAN; the sigmoid layer of the discriminator outputs 1 when the discriminator gets better, which causes the loss term log(1 - D(z)) to become -inf. This is the code for my discriminator:

import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, hidden_size, latent_dim):
        super(Discriminator, self).__init__()
        self.dim_h = hidden_size
        self.n_z = latent_dim

        self.main = nn.Sequential(
            nn.Linear(self.n_z, self.dim_h),
            nn.LeakyReLU(),
            nn.Dropout(0.25),
            nn.Linear(self.dim_h, self.dim_h),
            nn.LeakyReLU(),
            nn.Dropout(0.5),
            nn.Linear(self.dim_h, 1),
            nn.Sigmoid()
        )

    def forward(self, x):
        x = self.main(x)
        return x

And this is the part that trains the discriminator:

d_optim.zero_grad()

X_train, y_train, length = next(data_generator(data, y, word_to_dict, batch_size=batch_size))
X_train = X_train.cuda()

encoded_z, _, _, _ = model.encode(X_train)

z = torch.randn_like(encoded_z)
z = z.cuda()

d_z = critic(z)
d_z_hat = critic(encoded_z)

d_z_loss = lambd * torch.log(d_z).mean()
d_z_hat_loss = lambd * torch.log(1 - d_z_hat).mean()

loss = -(d_z_hat_loss + d_z_loss)
loss.backward()
torch.nn.utils.clip_grad_norm_(critic.parameters(), 0.25)
d_optim.step()

To stop the -inf loss from causing instability during backpropagation, I am clipping the gradient by norm before d_optim.step(). I have also tried torch.nn.utils.clip_grad_value_(parameters, clip_value) to solve the -inf problem. Neither function worked correctly; is there anything I am missing?
st49460
Solved by ptrblck in post #2
st49461
The gradient norm clipping wouldn’t work, since multiplying a +/-Inf gradient with the scale factor won’t change the gradient (used here). While clipping the gradient with values would work for +/-Inf values, unfortunately the +/-Inf loss might create NaN gradients, as seen here:

import torch
import torch.nn as nn

model = nn.Linear(1, 1)
x = torch.randn(1, 1)

out = model(x)
loss = torch.log(out - out)
print(loss)
> tensor([[-inf]], grad_fn=<LogBackward>)

loss.backward()
print(model.weight.grad)
> tensor([[nan]])

I think the right approach would be to avoid creating the invalid loss values in the first place, e.g. by adding a small eps value to the loss calculation.
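A minimal sketch of that eps approach applied to the losses from the snippet above; the eps value is an arbitrary choice, and d_z, d_z_hat, lambd refer to the tensors defined in the original training code:

eps = 1e-8  # small constant to keep the log argument strictly positive

d_z_loss = lambd * torch.log(d_z + eps).mean()
d_z_hat_loss = lambd * torch.log(1 - d_z_hat + eps).mean()
loss = -(d_z_hat_loss + d_z_loss)

A numerically more stable alternative would be to drop the final nn.Sigmoid from the discriminator and feed the raw logits to nn.BCEWithLogitsLoss, which applies the log-sigmoid internally.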
st49462
Hello, I want to use one-hot encoded targets with a cross-entropy loss. For example, the input is [[0.1, 0.2, 0.8, 0, 0], [0, 0, 2, 0, 0, 1]] and the target is [[1, 0, 1, 0, 0], [1, 1, 1, 0, 0]]. I saw the discussion suggesting to take the argmax of the label to return the class index, but I have multiple 1s in one row and argmax will only return one index. How do I solve this problem?
st49463
You cannot use nn.CrossEntropyLoss for a multi-label classification and would need to use nn.BCEWithLogitsLoss instead. To do so you can keep the shape of the target tensor and transform it to a FloatTensor via target = target.float().
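A small sketch of that setup; the logit values are adapted from the example above purely for illustration:

import torch
import torch.nn as nn

logits = torch.tensor([[0.1, 0.2, 0.8, 0.0, 0.0],
                       [0.0, 0.0, 2.0, 0.0, 1.0]])   # raw model outputs, no sigmoid applied
target = torch.tensor([[1., 0., 1., 0., 0.],
                       [1., 1., 1., 0., 0.]])        # multi-hot targets as FloatTensor

criterion = nn.BCEWithLogitsLoss()
loss = criterion(logits, target)
print(loss)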
st49464
Hi there. I have a tensor, say v, of shape, say, (10, 3, 50, 25, 12). I have a variable shape that contains, say, (10, 2, 13, 18, 12). Is there a convenient way to “truncate” v so that its shape becomes shape? Thanks!
st49465
Solved by aliutkus in post #2 ok, got it: v[[slice(k) for k in shape]]
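For reference, a self-contained version of that idea; recent PyTorch (and NumPy) versions prefer indexing with a tuple of slices rather than a list:

import torch

v = torch.randn(10, 3, 50, 25, 12)
shape = (10, 2, 13, 18, 12)

truncated = v[tuple(slice(k) for k in shape)]  # same as v[:10, :2, :13, :18, :12]
print(truncated.shape)  # torch.Size([10, 2, 13, 18, 12])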
st49466
Hi, I’m working on my diploma and I decided to do image captioning. I’ve already implemented CNN -> LSTM (without attention) and it works. I also found that performance increased when I used a 2-layer LSTM. Then I decided to replace the RNN by a Transformer, using it in almost the same way (in the RNN case I fed the vector from the pre-trained CNN into the first time step of the LSTM with the caption as the output; in the Transformer case I feed this vector to the Transformer’s encoder, the caption to the Transformer’s decoder, and the caption shifted by one as the expected output). After this, I found that it doesn’t work well. Maybe the reason is that I put captions of the same length into each batch, so my tgt_key_padding_mask is always like [False, False, False, False, ...]? (I don’t use src and memory masks, as my input is the vector from the CNN.) What do you think about that, and can you suggest something to improve the performance?
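Not an answer to the performance question, but for reference, a minimal sketch of how the two decoder masks for nn.Transformer are typically built (the caption lengths below are made up); the causal tgt_mask matters even when tgt_key_padding_mask is all False:

import torch

def subsequent_mask(size):
    # float mask: -inf above the diagonal (future positions), 0 elsewhere
    mask = torch.triu(torch.ones(size, size), diagonal=1)
    return mask.masked_fill(mask == 1, float('-inf'))

def padding_mask(lengths, max_len):
    # bool mask of shape (batch, max_len); True marks padded positions
    idx = torch.arange(max_len).unsqueeze(0)
    return idx >= lengths.unsqueeze(1)

lengths = torch.tensor([7, 5, 9])                 # hypothetical caption lengths in a batch
tgt_mask = subsequent_mask(9)                     # pass as tgt_mask
tgt_key_padding_mask = padding_mask(lengths, 9)   # pass as tgt_key_padding_mask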
st49467
Hi, were you able to solve this? I am currently replacing my LSTM model with a Transformer and was trying to find any existing literature or source code. If you can help me, that would be great.