st115668
|
Hi Solomon K,
Thanks a lot for your reply. Yes, I do see that my two GPUs have the same physical ID, and this is exactly what I am confused about. Why are they assigned the same ID?
The output from your code is the following:
Screenshot from 2017-08-21 17-44-48.png (1334×222, 49.4 KB)
Sorry, the part of your code that runs command-line commands does not work and I have not figured out how to make it work, so I just ran them separately. As you can see, nvidia-smi labels them differently, but lshw labels them as having the same ID. What could be wrong in this case?
Thanks,
Shuokai
|
st115669
|
Strange …
Run this and report the output:
github.com
QuantScientist/Deep-Learning-Boot-Camp/blob/master/docker/deps_nvidia_docker.sh
#!/usr/bin/env bash
apt-get install nvidia-modprobe
# curl -O -s https://raw.githubusercontent.com/minimaxir/keras-cntk-docker/master/deps_nvidia_docker.sh
if lspci | grep -i 'nvidia'
then
echo "\nNVIDIA GPU is likely present."
else
echo "\nNo NVIDIA GPU was detected. Exiting ...\n"
exit 1
fi
echo "\nChecking for NVIDIA drivers ..."
# Check for CUDA and try to install.
if ! dpkg-query -W cuda; then
# The 16.04 installer works with 16.10.
curl -O http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
dpkg -i ./cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
apt-get update
(file truncated)
|
st115670
|
Hi,
The output from your script is the following,
save.PNG (865×940, 34.2 KB)
Seems to be similar to the outputs I got previously. Can you see if anything is wrong?
Cheers,
Shuokai
|
st115671
|
Reinstall NVidia driver in the HOST OS.
When you swap graphics cards or physically add or remove anything, the driver does NOT update the device info automatically. You have to rerun the driver installer.
Let me know if this solves your issue.
|
st115672
|
Hi guys,
Thanks a lot for your help. Yes, indeed I recently added another GPU. I will reinstall the Nvidia driver now. But could I ask how to do this in Ubuntu? The problem is I also have CUDA 8.0 installed; will reinstalling the Nvidia driver make me reinstall CUDA 8.0? Please see the pic below for the Nvidia packages on my machine.
Screenshot from 2017-08-22 09-24-58.png (1483×530, 115 KB)
I have searched around, but the answers I found vary. Some say I should use this command,
`sudo apt-get remove --purge nvidia*`
Some say I should use,
sudo nvidia-uninstall
To reinstall, I just use,
sudo add-apt-repository ppa:graphics-drivers/ppa
Is this the correct way? I guess my Cuda 8.0 will not be removed during the process?
Sorry, I am relatively new to Ubuntu and want to confirm before I do this.
Cheers
|
st115673
|
Hi guys,
I have reinstalled the Nvidia driver following my previous post, but the problem remains. I can still only use one GPU, and lshw gives exactly the same info as before. Both GPUs have the same physical ID, although nvidia-smi labels them differently.
I also rebooted my machine after reinstalling the Nvidia driver. Strange…
What could I do then?
Cheers
|
st115674
|
There are only two available PCIe x16 slots, so should I switch their positions?
|
st115675
|
Also, when I use the following command,
ubuntu-drivers devices
here is what I get:
Screenshot from 2017-08-22 10-31-32.png (810×347, 51.8 KB)
It seems Ubuntu can only find one GPU, which is the K2200 in this case.
|
st115676
|
@ShuokaiPan Don’t bother with NVidia repositories and apt-get. There is a better way.
Download the Ubuntu 16.04 drivers from NVidia website here:
http://www.nvidia.com/object/unix.html
It is a “run” file, so you just run it with bash and it will do a full uninstall and reinstall of the driver.
Then download NVidia CUDA Docker image here and run “nvidia-smi” inside the Docker image:
https://hub.docker.com/r/nvidia/cuda/tags/
Let us know what the output shows.
|
st115677
|
Hi @FuriouslyCurious
Thanks for the help. I will try that when convenient these days. But what is the difference between installing CUDA on ubuntu natively and installing the CUDA docker image?
Cheers
|
st115678
|
As I stated above, if you use nvidia-docker (as opposed to plain docker), you only need the drivers on the HOST machine.
Refer to my scripts above again.
|
st115679
|
Hello,
I have tried implementing an autoencoder for mnist, but the loss function does not seem to be accepting this type of network.
Code is as follows:
from __future__ import print_function
import argparse
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.autograd import Variable

parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
parser.add_argument('--batch-size', type=int, default=64, metavar='N',
                    help='input batch size for training (default: 64)')
parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
                    help='input batch size for testing (default: 1000)')
parser.add_argument('--epochs', type=int, default=10, metavar='N',
                    help='number of epochs to train (default: 10)')
parser.add_argument('--no-cuda', action='store_true', default=False,
                    help='disables CUDA training')
parser.add_argument('--seed', type=int, default=1, metavar='S',
                    help='random seed (default: 1)')
parser.add_argument('--log-interval', type=int, default=10, metavar='N',
                    help='how many batches to wait before logging training status')
args = parser.parse_args()
args.cuda = not args.no_cuda and torch.cuda.is_available()

torch.manual_seed(args.seed)
if args.cuda:
    torch.cuda.manual_seed(args.seed)

kwargs = {'num_workers': 1, 'pin_memory': True} if args.cuda else {}
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=True, download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=args.batch_size, shuffle=True, **kwargs)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=False, transform=transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
    ])),
    batch_size=args.batch_size, shuffle=True, **kwargs)

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.hidden = nn.Linear(784, 1000)
        self.hidden2 = nn.Linear(1000, 500)
        self.hidden3 = nn.Linear(500, 250)
        self.hidden4 = nn.Linear(250, 30)
        self.hidden5 = nn.Linear(30, 250)
        self.hidden6 = nn.Linear(250, 500)
        self.hidden7 = nn.Linear(500, 1000)
        self.hidden8 = nn.Linear(1000, 784)
        self.out = nn.Linear(784, 784)

    def forward(self, x):
        x = x.view(-1, 784)
        x = F.sigmoid(self.hidden(x))
        x = F.dropout(x, 0.1)
        x = F.sigmoid(self.hidden2(x))
        x = F.dropout(x, 0.1)
        x = F.sigmoid(self.hidden3(x))
        x = F.dropout(x, 0.1)
        x = F.sigmoid(self.hidden4(x))
        x = F.dropout(x, 0.1)
        x = F.sigmoid(self.hidden5(x))
        x = F.dropout(x, 0.1)
        x = F.sigmoid(self.hidden6(x))
        x = F.dropout(x, 0.1)
        x = F.sigmoid(self.hidden7(x))
        x = F.dropout(x, 0.1)
        x = F.sigmoid(self.hidden8(x))
        x = self.out(x)
        return x  # F.log_softmax(x)

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features

model = Net()
print(model)
if args.cuda:
    model.cuda()

optimizer = optim.SGD(model.parameters(), lr=.01, momentum=0)

def train(epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        if args.cuda:
            data, target = data.cuda(), data.cuda()
        target = Variable(target)
        data = Variable(data)
        optimizer.zero_grad()
        output = model(data)
        loss = F.cross_entropy(output, data)  # F.nll_loss(output, data)
        loss.backward()
        optimizer.step()
        if batch_idx % args.log_interval == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.data[0]))

def test():
    model.eval()
    test_loss = 0
    correct = 0
    for (data, target) in test_loader:
        if args.cuda:
            data, target = data.cuda(), data.cuda()
        target = Variable(target, volatile=True)
        data = Variable(data)
        output = model(data)
        test_loss += F.cross_entropy(output, data).data[0]  # F.nll_loss(output, target).data[0]  # F.nll_loss(output, target, size_average=False).data[0]  # sum up batch loss
        pred = output.data.max(1)[1]  # get the index of the max log-probability
        correct += pred.eq(target.data).cpu().sum()
    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))

for epoch in range(1, args.epochs + 1):
    train(epoch)
    test()
and the error I get is,
File “mymodelc.py”, line 139, in train
loss = F.cross_entropy(output, data)#F.nll_loss(output, data)
File “/home/slava/dev/miniconda2/lib/python2.7/site-packages/torch/nn/functional.py”, line 533, in cross_entropy
return nll_loss(log_softmax(input), target, weight, size_average)
File “/home/slava/dev/miniconda2/lib/python2.7/site-packages/torch/nn/functional.py”, line 501, in nll_loss
return f(input, target)
File “/home/slava/dev/miniconda2/lib/python2.7/site-packages/torch/nn/_functions/thnn/auto.py”, line 41, in forward
output, *self.additional_args)
TypeError: FloatClassNLLCriterion_updateOutput received an invalid combination of arguments - got (int, torch.FloatTensor, torch.FloatTensor, torch.FloatTensor, bool, NoneType, torch.FloatTensor), but expected (int state, torch.FloatTensor input, torch.LongTensor target, torch.FloatTensor output, bool sizeAverage, [torch.FloatTensor weights or None], torch.FloatTensor total_weight)
Thank you
|
st115680
|
You need to cast your target variable as a Long tensor. Right now it is a float tensor. Different loss functions require the input and target to be of different types. NLLCriterion needs target labels to be Long and input to be Float.
Look closely error says it got (“int, torch.FloatTensor, torch.FloatTensor …”) while it expected (int state, torch.FloatTensor input, torch.LongTensor target…)
Change the line
target = Variable(target)
to
target = target.long()
target = Variable(target)
Hope this helps! Read more about casting tensor to different type if this doesn’t work. Here - How to cast a tensor to another type?
|
st115681
|
I tried this and got
TypeError: FloatClassNLLCriterion_updateOutput received an invalid combination of arguments - got (int, torch.FloatTensor, torch.FloatTensor, torch.FloatTensor, bool, NoneType, torch.FloatTensor), but expected (int state, torch.FloatTensor input, torch.LongTensor target, torch.FloatTensor output, bool sizeAverage, [torch.FloatTensor weights or None], torch.FloatTensor total_weight)
then I tried changing forward to return x.long() as well
and now get,
Traceback (most recent call last):
File “”, line 1, in
File “mymodelc.py”, line 170, in
train(epoch)
File “mymodelc.py”, line 140, in train
loss = F.cross_entropy(output, data)#F.nll_loss(output, data)
File “/home/slava/dev/miniconda2/lib/python2.7/site-packages/torch/nn/functional.py”, line 533, in cross_entropy
return nll_loss(log_softmax(input), target, weight, size_average)
File “/home/slava/dev/miniconda2/lib/python2.7/site-packages/torch/nn/functional.py”, line 434, in log_softmax
return _functions.thnn.LogSoftmax()(input)
File “/home/slava/dev/miniconda2/lib/python2.7/site-packages/torch/nn/_functions/thnn/auto.py”, line 110, in forward
self._backend = type2backend[type(input)]
File “/home/slava/dev/miniconda2/lib/python2.7/site-packages/torch/_thnn/__init__.py”, line 15, in __getitem__
return self.backends[name].load()
KeyError: <class ‘torch.LongTensor’>
|
st115682
|
Please let me know what could be the issue, it seems like an important thing to be able to train autoencoders. Thank you
|
st115683
|
slavakung:
data
First up, I understand that you’re training an autoencoder, so you want to get the loss between the data and the output. In that case, you need to either somehow use a target that is the same as the data, or use a different loss function. NLL loss is used for classification into n classes; what you need is probably a different loss function.
But if you just want to run the code you gave me here, I think I found the error. It’s probably in the line
loss = F.cross_entropy(output, data)
The loss takes in the output and the TARGET, not the data. When you read from the data loader into (data, target), data stores the input data and target stores the ground truth labels. The loss is calculated on the predicted labels (output) and the ground truth labels (target).
So, that might be the error. Including the change I mentioned in the previous answer, all the changes are:
target = target.long()
target = Variable(target)
loss = F.cross_entropy(output, target)
So you either somehow set your target to be your data (but I doubt that’s possible with this loss), or else you use a different loss function.
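For reference, a minimal sketch of an autoencoder-style loss (assuming the flattened images themselves are used as the reconstruction target; variable names follow the code above):
data = Variable(data)
output = model(data)           # reconstruction, shape [batch, 784]
target = data.view(-1, 784)    # the input itself is the reconstruction target
criterion = nn.MSELoss()
loss = criterion(output, target)
loss.backward()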
|
st115684
|
Thank you very much. Indeed now the code runs.
However, I tried changing the loss function, with everything in the code being the same except now,
F.mse_loss(output, target)
and I get
AttributeError: ‘module’ object has no attribute ‘mse_loss’
but the documentation
http://pytorch.org/docs/master/nn.html
has
torch.nn.functional.mse_loss(input, target, size_average=True)[source]
|
st115685
|
Try doing
from torch import nn
criterion = nn.MSELoss()
loss = MSELoss(output,target)
loss.backward()
|
st115686
|
I’m guessing you meant
criterion = nn.MSELoss()
loss = criterion(output,target)
But now it complains again about the type, even though I still have the target.long() statement earlier
TypeError: FloatMSECriterion_updateOutput received an invalid combination of arguments - got (int, torch.FloatTensor, torch.LongTensor, torch.FloatTensor, bool), but expected (int state, torch.FloatTensor input, torch.FloatTensor target, torch.FloatTensor output, bool sizeAverage)
|
st115687
|
I tried now
target = target.float()
target = Variable(target)
data = Variable(data)
optimizer.zero_grad()
output = model(data)
criterion = nn.MSELoss()
loss = criterion(output,target)
and get
RuntimeError: input and target have different number of elements: input[64 x 784] has 50176 elements, while target[64] has 64 elements at /b/wheel/pytorch-src/torch/lib/THNN/generic/MSECriterion.c:12
|
st115688
|
If you don’t read the documentation, it’s much harder to help you. NLLLoss and MSELoss take targets of different formats.
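For concreteness, a rough sketch of the two target formats (shapes and values here are illustrative):
# NLLLoss / CrossEntropyLoss: target is a LongTensor of class indices, shape [batch]
scores = Variable(torch.randn(4, 10))                 # 4 samples, 10 classes
classes = Variable(torch.LongTensor([1, 0, 9, 3]))    # one class index per sample
loss1 = F.cross_entropy(scores, classes)
# MSELoss: target is a FloatTensor with the same shape as the input
recon = Variable(torch.randn(4, 784))
images = Variable(torch.randn(4, 784))
loss2 = nn.MSELoss()(recon, images)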
|
st115689
|
The error seems very strange to me; in the documentation MSELoss expects tensors of the same dimensions. I made the target the same exact thing as the input, and the output has the same exact dimensions as the input, as it is an autoencoder. Why would MSELoss complain that the dimensions do not match?
Not that it practically matters, but since you mention the documentation, perhaps there is some deeper problem: why does it say that there is a nn.functional.mse_loss, and when I try to assign the loss to this I get an error that nn.functional has no attribute ‘mse_loss’?
|
st115690
|
Hi,
I thought it was my own code that is problematic (see this post: CIFAR-10 bad results, loss: 1.1991, Accuracy: 6296/11001 (57%))
I ran the original example (http://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html), without any modifications, and this is the result:
Accuracy of the network on the 10000 test images: 57 %
Why are the results this bad?
|
st115691
|
GitHub: prlz77/ResNeXt.pytorch (Reproduces ResNet-V3 with pytorch)
This works very well.
|
st115692
|
How does one combine network parameters from two different networks?
Suppose I have two (could be more but let’s do two) distinct networks model1 and model2. I would like the optimizer to be aware of and optimize both models simultaneously.
I want to do something like this:
import torch.optim as optim
optimizer = optim.SGD(model1.parameters() + model2.parameters(), lr=0.001, momentum=0.9)
An example use case is where model1 is pretrained and potentially also model2.
Any ideas or suggestions?
|
st115693
|
This might work for anyone who is wondering.
import itertools
optimizer = optim.SGD(itertools.chain(model1.parameters(), model2.parameters()), lr=0.001, momentum=0.9)
|
st115694
|
another (actually worse) method:
optimizer = optim.SGD([p for p in model1.parameters()] +
                      [p for p in model2.parameters()],
                      lr=0.001, momentum=0.9)
|
st115695
|
My code was OK in v0.1.12.
Recently I updated to 0.2.0 using pip.
After that, when running the same code, I got this error. I do not know why.
AttributeError: ‘GRU’ object has no attribute ‘_param_buf_size’
|
st115696
|
Is there a version of cross entropy loss already implemented that supports smooth targets?
|
st115697
|
Hi,
Can someone please take a look and help me understand what I am doing wrong?
The original training set (https://www.kaggle.com/c/cifar-10/data) was split into train and validation sets.
Everything is documented here:
https://github.com/QuantScientist/Deep-Learning-Boot-Camp/blob/master/day%2002%20PyTORCH%20and%20PyCUDA/PyTorch/21-PyTorch-CIFAR-10-Custom-data-loader-from-scratch.ipynb
Thanks,
|
st115698
|
The network is too simple.
Try more complex, proven networks such as VGG and ResNet, or make your network deeper.
You should also train for more epochs (e.g. 100).
Actually, there are lots of things you can do to improve accuracy.
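For example, a rough sketch using torchvision (just an illustration: torchvision’s models are designed for 224×224 ImageNet-style inputs, so for 32×32 CIFAR-10 images you would either resize the images or adapt the architecture):
import torchvision.models as models
net = models.resnet18(num_classes=10)   # randomly initialised ResNet-18 with a 10-way classifier head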
|
st115699
|
Many Thanks,
just wanted to know it’s not my own Dataset loader that is causing this issue, but rather the simple network.
|
st115700
|
I have a problem with the latest PyTorch. For more details, refer to the issue I filed at pytorch/pytorch on GitHub. I wonder whether it is due to the latest PyTorch, since the code ran correctly with pytorch 0.1.2.
If you need to know more about the test code which causes the freezing of PyTorch 0.2.0_1, feel free to write me an email. My email is [email protected]
Do you have any ideas? Thank you!
|
st115701
|
It is because you have installed the CUDA 7.5 version, I think (and you have a Pascal GPU).
Install the CUDA 8.0 version of pytorch 0.2.0 using the commands from http://pytorch.org and see if that fixes it.
|
st115702
|
No, I have no CUDA and just run PyTorch on the CPU.
After choosing the following environment options on pytorch.org: Linux, pip, Python 2.7, No CUDA,
the following command is given on http://pytorch.org/ and I used it to install PyTorch 0.2.0_1:
pip install http://download.pytorch.org/whl/cu75/torch-0.2.0.post1-cp27-cp27mu-manylinux1_x86_64.whl
Is there any problem? Thank you.
|
st115703
|
I made a wrong description of this problem before. It should be pytorch 0.1.12, not pytorch 0.1.2.
Recently, I tested it again. The code runs correctly with pytorch 0.1.12 but causes the freezing of PyTorch 0.2.0_1 and PyTorch 0.2.0_2.
|
st115704
|
Dear All,
This relates to one of my earlier posts (Custom data loader and label encoding with CIFAR-10), but it deserves a new thread.
When I iterate the Data set during training, like so:
for i, (inputs, labels) in enumerate(train_loader):
    print(type(inputs))
    print(type(labels))
    print("Label:" + str(labels))
The labels return 4 items as a tuple instead of only one item (for instance, “dog”). I understand that the problem is with my data loader; however, I cannot seem to figure out which line of code is the culprit.
The full code is here:
https://github.com/QuantScientist/Deep-Learning-Boot-Camp/blob/master/day%2002%20PyTORCH%20and%20PyCUDA/PyTorch/21-PyTorch-CIFAR-10-Custom-data-loader-from-scratch.ipynb
And this is the exception:
21-PyTorch-CIFAR-10-Custom-data-loader-from-scratch.jpg (997×1098, 194 KB)
Many thanks for any help!
|
st115705
|
Turn your labels into numbers, i.e. frog -> 0, truck -> 1, and then turn them into a tensor.
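A minimal sketch of that mapping (the class list here is just illustrative):
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck']
class_to_idx = {name: i for i, name in enumerate(classes)}
label = torch.LongTensor([class_to_idx['frog']])   # a LongTensor holding 6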
|
st115706
|
Thanks Chen, I found this yesterday, used defaultdict(LabelEncoder) and updated the notebook: https://github.com/QuantScientist/Deep-Learning-Boot-Camp/blob/master/day%2002%20PyTORCH%20and%20PyCUDA/PyTorch/21-PyTorch-CIFAR-10-Custom-data-loader-from-scratch.ipynb
I was surprised, since PyTorch can work with class labels that are not encoded, but when you write a custom dataset it actually forces you to do so (i.e. encode them).
|
st115707
|
Actually Dataset and DataLoader are not so complicated. I would strongly advise you to read
github.com
pytorch/vision/blob/master/torchvision/datasets/folder.py#L66-L125
if get_image_backend() == 'accimage':
return accimage_loader(path)
else:
return pil_loader(path)
class ImageFolder(data.Dataset):
"""A generic data loader where the images are arranged in this way: ::
root/dog/xxx.png
root/dog/xxy.png
root/dog/xxz.png
root/cat/123.png
root/cat/nsdf3.png
root/cat/asd932_.png
Args:
root (string): Root directory path.
transform (callable, optional): A function/transform that takes in an PIL image
(file truncated)
and
github.com
pytorch/pytorch/blob/master/torch/utils/data/dataloader.py#L81-L112
if pin_memory:
batch = pin_memory_batch(batch)
except Exception:
out_queue.put((idx, ExceptionWrapper(sys.exc_info())))
else:
out_queue.put((idx, batch))
numpy_type_map = {
'float64': torch.DoubleTensor,
'float32': torch.FloatTensor,
'float16': torch.HalfTensor,
'int64': torch.LongTensor,
'int32': torch.IntTensor,
'int16': torch.ShortTensor,
'int8': torch.CharTensor,
'uint8': torch.ByteTensor,
}
def default_collate(batch):
(file truncated)
|
st115708
|
Hi,
I am trying to use a Dataset loader in order to load the CIFAR-10 data set from a local drive.
For learning purposes, I do NOT wish to use the already available loader as shown here:
github.com
pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py
# -*- coding: utf-8 -*-
"""
Training a classifier
=====================
This is it. You have seen how to define neural networks, compute loss and make
updates to the weights of the network.
Now you might be thinking,
What about data?
----------------
Generally, when you have to deal with image, text, audio or video data,
you can use standard python packages that load data into a numpy array.
Then you can convert this array into a ``torch.*Tensor``.
- For images, packages such as Pillow, OpenCV are useful.
- For audio, packages such as scipy and librosa
- For text, either raw Python or Cython based loading, or NLTK and
(file truncated)
E.g. torchvision.datasets.CIFAR10.
I downloaded the data manually from here: https://www.kaggle.com/c/cifar-10/data
Few questions:
Using the original example, I can see that the original labels are NOT one-hot encoded. Do I assume correctly that
cross-entropy and negative log-likelihood losses in pytorch do NOT require one-hot encodings?
In my custom code (https://github.com/QuantScientist/Deep-Learning-Boot-Camp/blob/master/day%2002%20PyTORCH%20and%20PyCUDA/PyTorch/19%20PyTorch%20CIFAR-10.ipynb), I can see that the data as well as the labels are loaded correctly; when I tried (wrongly?) to one-hot encode them, an exception was thrown during iteration.
However, now I do not one-hot encode them (assuming 1 above is true), and the following exception is thrown while iterating the dataset:
TypeError: ufunc 'add' did not contain a loop with signature matching types dtype('S55') dtype('S55') dtype('S55')
This is the code for iterating:
imagesToShow = 4
for i, data in enumerate(train_loader, 0):
    lgr.info('i=%d: ' % (i))
    images, labels = data
    num = len(images)

    ax = plt.subplot(1, imagesToShow, i + 1)
    plt.tight_layout()
    ax.set_title('Sample #{}'.format(i))
    ax.axis('off')

    for n in range(num):
        image = images[n]
        label = labels[n]
        plt.imshow(GenericImageDataset.flaotTensorToImage(image))

    if i == imagesToShow - 1:
        break
Thanks for any help,
|
st115709
|
QuantScientist:
Using the original example, I can see that the original labels, are NOT one hot encoded, do I assume correctly that
cross-entropy and neg. log-likelihood losses in pytorch do NOT require one-hot encodings?
Yes, they don’t require one-hot encoding.
I had a rough look at your code; the bug seems to come from self.X_train: self.X_train[index] is not a string.
|
st115710
|
Thanks,
I had several other issues.
I have a basic working version here:
https://github.com/QuantScientist/Deep-Learning-Boot-Camp/blob/master/day%2002%20PyTORCH%20and%20PyCUDA/PyTorch/19%20PyTorch%20CIFAR-10.ipynb
image.png (710×446, 28.4 KB)
Now if my batch size is larger than 1, the images won’t display.
|
st115711
|
torchvision.utils.make_grid would help
gist.github.com
https://gist.github.com/anonymous/bf16430f7750c023141c562f3e9f2a91
normalize_image.ipynb (notebook preview truncated)
|
st115712
|
Hi, I have a quick question about indexing.
I have a 2d tensor src of size (n, 2) storing n 2d points, and another 2d tensor index of size (224, 224) storing indices. I would like to assign values to a 3d tensor output of size (224, 224, 2) so that
output[i][j] = src[index[i][j]]
It doesn’t seem like a difficult task but due to my noobness I can’t find a way to do this. I played around with scatter_ and gather but they seem to serve different purpose. Is there a simple way to do this in Pytorch? Thanks in advance for your help!
|
st115713
|
# create dummy src and index for example
n = 5
h = 224
w = 224
src = torch.arange(1, 11).view(n, 2)
index = torch.arange(1,h * w + 1).view(h, w).remainder_(n)
# do indexing operation
output = src[index.view(-1).long(), :].view(h, w, 2)
|
st115714
|
Hello,
The following code replaces a number (with probability corrupt_prob) of true labels in the training (or test) data with random labels:
labels = np.array(self.train_labels if self.train else self.test_labels)
np.random.seed(12345)
mask = np.random.rand(len(labels)) <= corrupt_prob
rnd_labels = np.random.choice(self.n_classes, mask.sum())
labels[mask] = rnd_labels
In PyTorch 0.2 it gave me the following error:
TypeError: len() of unsized object
The type of labels is [torch.LongTensor of size 60000].
Could anybody help me to fix this please?
Thank you in advance for your help!
|
st115715
|
Problem solved. It works if I convert the tensor to numpy array as follows:
labels_t = self.train_labels if self.train else self.test_labels
labels = labels_t.numpy()
|
st115716
|
maybe module.named_parameters will help
for name, param in model.named_parameters():
    if name.endswith('weight'):
        weight_norm(param)
|
st115717
|
Hello,
is there a tutorial on the use of multiple GPUs? For now I have only seen a small tutorial that says to wrap the module using DataParallel, but is that all one needs to do? Just write the model normally and then call
model = DataParallel(model).cuda()?
In the imagenet example, I have seen the use of distributed sampler when loading the training data. Is that something we need to care about?
Thank you!
|
st115718
|
There’s a little more nuance to it if you want to control the exact GPUs that you parallelize over. If you look at the doc for DataParallel you’ll see that you can specify device_ids. If you do that, you’ll also want to make sure you load all of your variables onto the same GPU to start with (with your_variable.cuda(device_id=ID)). That should be pretty much it.
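A rough sketch of what that looks like (the device ids are just an example):
model = model.cuda(0)                              # parameters start on the first GPU in the list
model = nn.DataParallel(model, device_ids=[0, 1])  # replicate across GPUs 0 and 1 at forward time
input = Variable(images.cuda(0))                   # inputs also start on that GPU
output = model(input)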
|
st115719
|
Thanks for your answer! It’s not clear to me what you mean with
“If you do that, you’ll also want to make sure you load all of your variables onto the same GPU to start with”
Which GPU are you talking about? I have, say, 3 of them, that I want to use. I have the input to the network; should I just call
input = input.cuda()
model = DataParallel(model)  # I want to use all available GPUs anyhow
output = model(input)
or do something else?
|
st115720
|
Yes, basically if you have 3 GPUs but only want to use 2 of them then you’d have to specify which ones you want to use. Otherwise you can call cuda() on the model and the Variables. Do keep in mind that unless you’re running this on a dedicated headless server, one of your GPUs may be tied up displaying your desktop etc. so you might get strange errors.
|
st115721
|
So I can specify multiple GPUs in the .cuda() call? Because if I have to specify only one, which one should I specify?
In any case I usually use all the GPUs, and if I want to restrict pytorch access I just set CUDA_VISIBLE_DEVICES to the devices I want to use
|
st115722
|
Hi.
Is there any example or related document which describes how to deal with variable length sequences in minibatch for 1D convolution?
Here is my detail situation.
I have two sequences with size : (#Channel, #Length).
Each pair of sequences has the same length, but the length differs across the dataset.
For example, let’s say (X1,Y1), (X2, Y2) is paired data with size
X1.size() --> [5, 10], Y1.size() --> [3, 10]
X2.size() --> [5, 20], Y2.size() --> [3, 20]
My goal is learning the model f such that Y = f(X)
I am considering f as 1D convolution with batch normalization like,
h1 = bn(relu(conv(input)))
h2 = bn(relu(conv(h1)))
However, I am confused about how to deal with multiple sequences in a minibatch. If we zero-pad the sequences when making the minibatch, there seems to be no example of excluding the zero-padded positions from the computation.
|
st115723
|
if you zero-pad the data, and then subsequently narrow the output to avoid the padding-contributed regions, this should be sufficient.
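A minimal sketch of that idea, reusing the conv/bn modules from the post above (shapes and names are illustrative, and the convolutions are assumed to preserve the length dimension, e.g. via padding):
criterion = nn.MSELoss()
# x: zero-padded inputs [batch, C_in, max_len]; y: padded targets [batch, C_out, max_len]
# lengths: Python list with the true length of each sequence in the batch
h = bn(F.relu(conv(x)))
loss = 0
for b, length in enumerate(lengths):
    out_b = h[b:b + 1].narrow(2, 0, length)   # keep only the un-padded region
    tgt_b = y[b:b + 1].narrow(2, 0, length)
    loss = loss + criterion(out_b, tgt_b)
loss = loss / len(lengths)
loss.backward()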
|
st115724
|
Generally, we create a tensor by following code:
t = torch.ones(4)
t is a tensor on the CPU. How can I create it on the GPU by default?
In other words, I want to create all my tensors on the GPU by default.
|
st115725
|
Hi, here is my code:
import torch
torch.set_default_tensor_type('torch.cuda.FloatTensor')
t = torch.rand(1,3,24,24)
and it caused bellow error:
TypeError Traceback (most recent call last)
in ()
----> 1 t = torch.rand(1,3,24,24)
TypeError: Type torch.cuda.FloatTensor doesn’t implement stateless methods
Any idea for this type of error?
|
st115726
|
use_cuda = torch.cuda.is_available()
FloatTensor = torch.cuda.FloatTensor if use_cuda else torch.FloatTensor
LongTensor = torch.cuda.LongTensor if use_cuda else torch.LongTensor
Tensor = FloatTensor

if use_cuda:
    lgr.info("Using the GPU")
    X = Variable(torch.from_numpy(x_data_np).cuda())  # Note the conversion for pytorch
    Y = Variable(torch.from_numpy(y_data_np).cuda())
else:
    lgr.info("Using the CPU")
    X = Variable(torch.from_numpy(x_data_np))  # Note the conversion for pytorch
    Y = Variable(torch.from_numpy(y_data_np))
|
st115727
|
Hi, I am trying out pytorch with a basic continuous bag of words (CBOW) word2vec implementation. The pytorch implementation seems to be very slow, requiring multiple hours. A similar implementation in tensorflow trains within 15-20 minutes on the text8.zip dataset available from http://mattmahoney.net/dc
I tried to debug the implementation and found that most of the time ~100ms each is being spent in the model and the call to backward method.
I am not able to find out how to speed it up.
class CBOW(nn.Module):
    def __init__(self, vocabulary_size, embedding_dimension):
        super(CBOW, self).__init__()
        self.vocabulary_size = vocabulary_size
        self.embedding_dimension = embedding_dimension
        self.embeddings = nn.Embedding(self.vocabulary_size, self.embedding_dimension, sparse=True)
        self.linear = nn.Linear(embedding_dimension, vocabulary_size)
        self.init_embeddings()

    def init_embeddings(self):
        initrange = 0.5 / self.embedding_dimension
        self.embeddings.weight.data.uniform_(-initrange, initrange)

    def forward(self, inputs):
        # print inputs.data.shape
        embedding = self.embeddings(inputs)
        avg_embedding = torch.mean(embedding, dim=1)
        out = self.linear(avg_embedding)
        log_probs = F.log_softmax(out)
        return log_probs
        # return torch.max(log_probs, dim=1, keepdim=True)[1]


class Word2Vec:
    def __init__(self):
        logger.info('CBOW Training ....')
        self.batch_size = 128
        self.embedding_dimension = 128
        self.skip_window = 1
        self.input_data = InputData()
        self.cbow = CBOW(vocabulary_size=VOCABULARY_SIZE, embedding_dimension=self.embedding_dimension)

    def train(self):
        loss_function = nn.NLLLoss()
        optimizer = optim.SGD(self.cbow.parameters(), lr=0.01, momentum=0.5)
        optimizer.zero_grad()
        epochs = 100001
        data_index = 0
        for epoch in range(epochs):
            batch_data, batch_labels, data_index = self.input_data.generate_batch_cbow(data_index, self.batch_size, self.skip_window)
            x_values = autograd.Variable(batch_data)
            y_labels = autograd.Variable(batch_labels[:, 0])
            # start_model = time.time()
            predicted = self.cbow(x_values)
            # end_model = time.time()
            # logger.info('Elapsed Time %s' % (end_model - start_model))
            loss = loss_function(predicted, y_labels)
            optimizer.zero_grad()
            # start_backward = time.time()
            loss.backward()
            # end_backward = time.time()
            # logger.info('Elapsed Time %s' % (end_backward - start_backward))
            optimizer.step()
            if epoch % 2000 == 0:
                print('[%d/%d] Loss: %.3f' % (epoch + 1, epochs, loss.data.mean()))
st115728
|
Matt, can you run the code with:
OMP_NUM_THREADS=1 MKL_NUM_THREADS=1 python foo.py
Does that help?
If your machine has lots of cores (20 cores for example), OpenMP’s threading overhead doesn’t fare well for small workloads.
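An in-script alternative for limiting CPU threads, if that is easier to test, is roughly:
import torch
torch.set_num_threads(1)   # limit intra-op CPU threads, analogous to OMP_NUM_THREADS=1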
|
st115729
|
I’m working on a sequence tagging algorithm, my input are sentence / labels pairs:
Pytorch is so much better than tensorflow . <- sentence
1 0 0 0 0 0 1 0 <- labels
I’m using a classic bi-LSTM with softmax to get one prediction per timestep. At training time, I’m using minibatches of size batch_size * max_sent_length * input_emb_size, where max_sent_length is the length of the longest sentence in the batch, I zero pad the others. I use nn.utils.rnn.pack_padded_sequence to compute just what is needed. Once I forward my input batch into my net, I need to compute the loss and backpropagate. I’m not sure how to properly compute the loss here:
The sentences being zero-padded, the only solution I see would be to iterate over each entry in the batch so I can ignore the zero-padded timesteps. Is there a better way?
If I call the criterion function multiple times to compute the loss, do I have to call the backward function each time? Or is there a smart way to accumulate the losses and call backward only once?
|
st115730
|
Hey, when I install pytorch on a Mac, something goes wrong:
ld: library not found for -lcudnn
clang: error: linker command failed with exit code 1 (use -v to see invocation)
error: command ‘clang’ failed with exit status 1
|
st115731
|
Hi,
I have problems understanding a piece of code in PyTorch:
github.com
pytorch/pytorch/blob/6648677acf1b653a2c80ee790dd85dc29664f6b0/torch/nn/modules/rnn.py#L104
fn.x_descs = cudnn.descriptor(any_param.new(1, self.input_size), 1)
fn.rnn_desc = rnn.init_rnn_descriptor(fn, handle)
# Allocate buffer to hold the weights
num_weights = rnn.get_num_weights(handle, fn.rnn_desc, fn.x_descs[0], fn.datatype)
fn.weight_buf = any_param.new(num_weights).zero_()
fn.w_desc = rnn.init_weight_descriptor(fn, fn.weight_buf)
# Slice off views into weight_buf
params = rnn.get_parameters(fn, handle, fn.weight_buf)
all_weights = [[p.data for p in l] for l in self.all_weights]
# Copy weights and update their storage
rnn._copyParams(all_weights, params)
for orig_layer_param, new_layer_param in zip(all_weights, params):
for orig_param, new_param in zip(orig_layer_param, new_layer_param):
orig_param.set_(new_param.view_as(orig_param))
self._data_ptrs = list(p.data.data_ptr() for p in self.parameters())
def _apply(self, fn):
Where is self.all_weights initialized, or where does it get its values here? I can only find self._all_weights with an underscore.
Thank you for any pointers,
Christoph
|
st115732
|
Ah, after sending this I directly found the property:
github.com
pytorch/pytorch/blob/master/torch/nn/modules/rnn.py#L203
self.input_size,
self.hidden_size,
num_layers=self.num_layers,
batch_first=self.batch_first,
dropout=self.dropout,
train=self.training,
bidirectional=self.bidirectional,
batch_sizes=batch_sizes,
dropout_state=self.dropout_state,
flat_weight=flat_weight
)
output, hidden = func(input, self.all_weights, hx)
if is_packed:
output = PackedSequence(output, batch_sizes)
return output, hidden
def __repr__(self):
s = '{name}({input_size}, {hidden_size}'
if self.num_layers != 1:
s += ', num_layers={num_layers}'
if self.bias is not True:
Thanks,
#close
|
st115733
|
I met a problem similar to https://github.com/fchollet/keras/issues/2115
It is about using a weight matrix in the loss function, which can be implemented in Keras. So how can it be implemented in PyTorch?
Here is the Keras code copied from the link above:
def w_categorical_crossentropy(y_true, y_pred, weights):
    nb_cl = len(weights)
    final_mask = K.zeros_like(y_pred[:, 0])
    y_pred_max = K.max(y_pred, axis=1)
    y_pred_max = K.reshape(y_pred_max, (K.shape(y_pred)[0], 1))
    y_pred_max_mat = K.cast(K.equal(y_pred, y_pred_max), K.floatx())
    for c_p, c_t in product(range(nb_cl), range(nb_cl)):
        final_mask += (weights[c_t, c_p] * y_pred_max_mat[:, c_p] * y_true[:, c_t])
    return K.categorical_crossentropy(y_pred, y_true) * final_mask
Any suggestions are welcome!
Thanks in advance!
Ben
|
st115734
|
Because I’m new to PyTorch, I can’t find the counterparts in PyTorch, especially for the for loop. I’d appreciate it if you could show me some code!
|
st115735
|
I think the major steps are:
1. calculate the cross entropy for each sample in a batch
2. calculate the weight for each sample, which is like a lookup table in a for loop
3. loss = sum(cross_entropy_tensor * weight_tensor) / batch_size
Now I can get a softmax tensor with shape batch_size * num_class by using nn.LogSoftmax. Then I’m a little confused about how to implement 1 and 2.
nn.NLLLoss seems to combine 1 and 3, with no per-sample weight.
|
st115736
|
I guess you can split the cross-entropy loss into [softmax, log, NLLLoss].
So you can multiply by a weight matrix after the log operation and pass the weighted log(p(x)) to NLLLoss.
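A rough sketch of per-sample weighting along those lines (logits, target and class_weights are illustrative names; this is the simplified per-true-class case, while the full weights[c_t, c_p] scheme from the Keras snippet would also use the predicted class):
log_probs = F.log_softmax(logits)                          # [batch, num_class]
ce = -log_probs.gather(1, target.view(-1, 1)).squeeze(1)   # cross entropy per sample
w = Variable(class_weights[target.data])                   # per-sample weight looked up by true class
loss = (ce * w).sum() / logits.size(0)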
|
st115737
|
I implemented the loss function. Here is the gist:
gist.github.com
https://gist.github.com/benwu232/1fbf1cd6b637810f5d57902fa6d4ef1b
gistfile1.txt
def one_hot(size, index):
""" Creates a matrix of one hot vectors.
```
import torch
import torch_extras
setattr(torch, 'one_hot', torch_extras.one_hot)
size = (3, 3)
index = torch.LongTensor([2, 0, 1]).view(-1, 1)
torch.one_hot(size, index)
# [[0, 0, 1], [1, 0, 0], [0, 1, 0]]
(file truncated)
Known issue: part of the program must run on the CPU, so it may be slow. In my own test, the speed is OK. If you have a better solution, please let me know!
|
st115738
|
The code ( https://gist.github.com/cswhjiang/be475ef9a3a7d1f781830ebfb7970719 ) failed on pytorch 0.2. What is the correct way to edit a parameter?
python3.6 -u weight_drop.py
Applying weight drop of 0 to weight_hh_l0
[['weight_ih_l0', 'bias_ih_l0', 'bias_hh_l0', 'weight_hh_l0_raw']]
odict_keys(['weight_ih_l0', 'bias_ih_l0', 'bias_hh_l0', 'weight_hh_l0_raw'])
Traceback (most recent call last):
File "weight_drop.py", line 44, in <module>
a.cuda()
File "/data1/XXX/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 147, in cuda
return self._apply(lambda t: t.cuda(device_id))
File "/data1/XXX/.local/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 116, in _apply
self.flatten_parameters()
File "/data1/XXX/.local/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 107, in flatten_parameters
rnn._copyParams(all_weights, params)
File "/data1/XXX/.local/lib/python3.6/site-packages/torch/backends/cudnn/rnn.py", line 187, in _copyParams
param_to.copy_(param_from, broadcast=False)
RuntimeError: invalid argument 2: sizes do not match at /pytorch/torch/lib/THC/THCTensorCopy.cu:31
|
st115739
|
This works:
m = nn.Conv2d(16, 32, (3, 3)).float()
loss = nn.NLLLoss2d()
input = autograd.Variable(torch.randn(3, 16, 10, 10))
target = autograd.Variable(torch.LongTensor(3, 8, 8).random_(0, 4))
input = m(input)
output = loss(input, target)
But this one does not work:
m = nn.Conv2d(16, 32, (3, 3)).float()
loss = nn.NLLLoss2d()
input = autograd.Variable(torch.randn(3, 16, 10, 10))
target = np.arange(3*32*8*8).reshape(3,32,8,8).astype('int64')
target = Variable(torch.from_numpy(target))
input = m(input)
output = loss(input, target)
The inputs and targets of both have the same shape and type, but the latter one (transformed from an ndarray) always throws a runtime error:
RuntimeError: only batches of spatial targets supported (3D tensors) but got targets of dimension: 4 at /b/wheel/pytorch-src/torch/lib/THNN/generic/SpatialClassNLLCriterion.c:39
Why does this happen? What is the correct way to convert from a numpy array to a torch tensor? Thanks.
|
st115740
|
I think your loss function input argument is wrong. As shown in the pytorch documentation, the shape of the input is Batch x C x H x W, and the target is Batch x H x W. http://pytorch.org/docs/master/nn.html?highlight=torch%20nn%20nll#torch.nn.NLLLoss2d
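So in the second snippet the target should be 3-dimensional and hold class indices in [0, C). A corrected sketch of that example:
m = nn.Conv2d(16, 32, (3, 3)).float()
loss = nn.NLLLoss2d()
input = autograd.Variable(torch.randn(3, 16, 10, 10))
target = np.random.randint(0, 32, size=(3, 8, 8)).astype('int64')   # [Batch, H, W] class indices
target = Variable(torch.from_numpy(target))
output = loss(m(input), target)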
|
st115741
|
Hello everybody:
First I want to give you a thumbs up, because pytorch rocks!
Now to my question (please be forgiving, I am a total pytorch newbie):
I want to adapt the seq2seq example to a time series prediction model.
I retraced the code, and in the AttnDecoderRNN class the forward function (attached to this message) gets an encoder_output variable as input but never uses it in any way.
Is there a reason for this?
Thanks in advance,
Florian
def forward(self, input, hidden, encoder_output, encoder_outputs):
    embedded = self.embedding(input).view(1, 1, -1)
    embedded = self.dropout(embedded)

    attn_weights = F.softmax(
        self.attn(torch.cat((embedded[0], hidden[0]), 1)))
    attn_applied = torch.bmm(attn_weights.unsqueeze(0),
                             encoder_outputs.unsqueeze(0))

    output = torch.cat((embedded[0], attn_applied[0]), 1)
    output = self.attn_combine(output).unsqueeze(0)

    for i in range(self.n_layers):
        output = F.relu(output)
        output, hidden = self.gru(output, hidden)

    output = F.log_softmax(self.out(output[0]))
    return output, hidden, attn_weights
|
st115742
|
That’s left over from an old implementation; a more up-to-date version is here: https://github.com/spro/practical-pytorch/blob/master/seq2seq-translation/seq2seq-translation.ipynb
|
st115743
|
I was trying to profile my code with line_profiler because cProfile does not give much useful information for some reason (e.g. does not measure things like a[x]). It seems like the actual computations happen only when one tries to infer resulting values, because in code above, if I add a print, runtime splits pretty much exactly into two parts where “inference” is required (prints and casting to float).
64 102 862955 8460.3 1.8 output_var = model(data) # [B, C, H, W]
65 101 600 5.9 0.0 class_n = output_var.size(1)
66 101 20781085 205753.3 42.8 print(output_var.sum())
67 101 10971 108.6 0.0 output_flat = output_var.permute(0, 2, 3, 1).contiguous().view(-1, class_n)
68 101 15572 154.2 0.0 cross_ent = F.cross_entropy(output_flat, target.view(-1), size_average=False)
69 101 18728730 185433.0 38.6 test_loss_t += cross_ent.data[0]
70 100 5518 55.2 0.0 pred = output_var.data.max(1)[1] # [B, H, W]
and without print on line 66
64 102 862955 8460.3 1.8 output_var = model(data) # [B, C, H, W]
65 102 606 5.9 0.0 class_n = output_var.size(1)
66 102 7586 74.4 0.0 output_flat = output_var.permute(0, 2, 3, 1).contiguous().view(-1, class_n)
67 102 12863 126.1 0.0 cross_ent = F.cross_entropy(output_flat, target.view(-1), size_average=False)
68 102 39538526 387632.6 81.3 test_loss_t += cross_ent.data[0]
see, the total runtime just split pretty much exactly between those two lines.
Do I interpret this right? If yes, is there a way to change this behaviour for profiling purposes? Thanks.
|
st115744
|
Is this on CUDA? CUDA kernel calls are asynchronous, kernels won’t complete running until later in your Python code. You can disable this with CUDA_LAUNCH_BLOCKING=1 but prepare for slower code.
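For targeted timing without slowing the whole run, you can also synchronize explicitly around the region of interest; a minimal sketch:
import time
torch.cuda.synchronize()          # make sure pending kernels are done before starting the timer
start = time.time()
output_var = model(data)
torch.cuda.synchronize()          # wait until the forward pass has actually finished
print('forward took %.4f s' % (time.time() - start))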
|
st115745
|
Indeed! Very useful for profiling, thank you! In my case, performance drop was absolutely minimal.
|
st115746
|
I am training network with two GPUs.
I found many topics saying that pinned memory can help improve training speed a lot, but when I used pinned memory there was no speedup. Following is my code.
1, Make the DataLoader return batches placed in pinned memory by passing pin_memory=True to its constructor.
data_loader = torch.utils.data.DataLoader(
    dataset,
    batch_size=self.opt.batchSize,  # batchSize is 6
    shuffle=bool(self.opt.shuffle_data),
    num_workers=int(self.opt.nThreads), pin_memory=True)
2, Use pin_memory() method and pass an additional async=True argument to a cuda() call.
#data is torch.FloatTensor, come from DataLoader
#self.input_A and self.input_B are torch.cuda.FloatTensor
input_A = torch.Tensor.pin_memory(data)
input_B = torch.Tensor.pin_memory(data)
self.input_A.resize_(input_A.size()).copy_(input_A, async=True)
self.input_B.resize_(input_B.size()).copy_(input_B, async=True)
self.real_A = Variable(self.input_A)
self.fake_B = self.netG.forward(self.real_A)
self.real_B = Variable(self.input_B)
--------start training D and G network-------------
Is my code wrong? Does anyone have any ideas, or can you give me an example showing how to use pinned memory? Thanks a lot.
|
st115747
|
I experience the same thing with a single-GPU setup. Passing pin_memory=True to DataLoader does not seem to improve performance in any way. From cProfile it seems like torch._C.CudaFloatTensorBase._copy() consumes one third of all batch processing time, which is a lot! Both with and without pin_memory(). Thank you!
Screenshot from 2017-08-24 19:12:13.png (1366×754, 56.6 KB)
So majority of time was spent moving variables to GPU and doing forward pass (this is GPU), backward pass took surprisingly little time.
UPDATE: It turns out that if one passes CUDA_LAUNCH_BLOCKING=1 when running a script, profiling results are much more meaningful. Here, for example, majority of time is spent in backwards_run and forward, which makes sense.
Screenshot from 2017-08-24 19:23:46.png (1347×779, 54.8 KB)
|
st115748
|
I was using torch.smm for CUDA sparse matrix multiplication a few weeks back; after I moved to 0.2.0, the same code complains:
RuntimeError: WARNING: Sparse Cuda Tensor op sspaddmm is not implemented at /pytorch/torch/lib/THCS/generic/THCSTensorMath.cu:156
Have the old sparse kernels been removed from 0.2.0, or has the API been changed?
Thanks.
|
st115749
|
When I look at the history of torch/lib/THCS/generic/THCSTensorMath.cu, it does not look like this kernel was ever implemented. So what is likely happening is that there was some change in how we select kernels and now we are attempting to use an unimplemented kernel. If you posted some code I might be able to say better.
|
st115750
|
Thanks. I went back and checked the code; I was doing torch.smm on the CPU. Any plans on implementing the corresponding smm kernels on the GPU?
|
st115751
|
Hi,
As always, as part of my first post I thank the developers for this amazing library that helps a lot of us in our deep learning escapades. I’ve finished running through the first tutorial involving the CIFAR10 dataset and have some questions.
Code
In this code block,
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
Could some explain in detail on what is going on here. These might be more of python related questions than a pytorch question but I think its crucial to understand what is happening here. I understand whats happening in an abstract level but not on the code level. In particular,
net is an object, so what is net(inputs) calling? It’s not a constructor, so I’m not sure what’s happening here. Also, this returns the output, but where is this function (if it is a function at all) defined to return the output, and what does it do?
Where do we call the forward function that was defined as part of the model class? I’m guessing this has something to do with the previous question.
Similar to the first point, criterion(outputs, labels): where is this function defined? I checked the docs for CrossEntropyLoss() and it’s a class that only takes weights and size_average in the constructor.
In the prediction code block,
outputs = net(Variable(images))
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum()
This code (net(images)) is similar to the training stage, so I’m not sure how we are “testing” because we don’t have testing mode. For example, in Keras for training we use model.fit and testing we use model.evaluate, and I’m not seeing a similar distinction here.
EDIT-1: I got the answers to the above questions from Learning PyTorch with Examples. It all happens through the __call__ function in Python.
Others
1. Can I get a small dataset from the dataloader for overfitting before I get the whole thing? I’m guessing I could just run the for loop till train_loader[:small_number], any thoughts?
2. The dataloader only provides train and test, how would I get a validation set out of this?
3. We print out loss.data[0], does it contain the loss for the entire mini-batch? Could I get some pointers on how to keep track of the loss history for entire epochs (for plotting purposes)?
4. If I want to use GPU, do I have to call the .cuda() function in every place where I have Variables and instantiation of my models? Or is there some global param I can set that automatically makes all the Variables and instantiated net into cuda compatible objects?
5. Why is torch.save(model.state_dict) recommended over torch.save(model) since the latter can be used to save the entire model including architecture and params?
6. The normalize method in transform takes a list of 2 tuples representing the desired mean and stddev for each of the color channels. Is that calculated within that particular set? How would I normalize the test set with the training set mean and stddev?
7. Can I add to the post category list or is it strictly confined to the 4 that is defined?
I apologize for a whole lot of questions, most of them born out of ignorance and I’m sure I’ll have more as I start using pytorch for my problems. If I need to split them up into separate posts, please let me know and I’ll edit the post accordingly.
Thanks and I appreciate everyone’s help!
|
st115752
|
No, these are great questions.
questions 1 & 2
In Python, there are several special methods which user-defined classes can override to allow certain kinds of operations on the class or instances.
They’re all surrounded by double underscores (so they’re called “dunder” methods):
__init__ is one of them, which defines the constructor;
__str__ is another one – what you implement there defines what Python will do if you call str(obj).
The __call__ dunder method defines what Python will do if you call
an instance of the class as if it were a function.
In PyTorch, the __call__ method of nn.Module instances sets up user-defined hooks if they exist, then calls the instance’s forward method. So calling net(var) is the same thing as calling net.__call__(var), which will itself call net.forward(var) to perform the actual forward pass.
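A tiny illustration of the same mechanism outside PyTorch (purely an example class):
class Doubler(object):
    def __call__(self, x):
        return 2 * x

d = Doubler()
print(d(21))   # 42 -- calling the instance invokes __call__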
question 3
The same thing is happening here.
nn.CrossEntropyLoss is a class whose constructor takes weights and size_average, but instances of nn.CrossEntropyLoss, including criterion, define a forward method which is called from nn.Module's __call__ method.
The distinction between training and testing modes is actually implemented similarly to Keras, but in PyTorch it’s done with a pair of methods that change the state of the model: model.train() sets the model to training mode while model.eval() sets it to test mode.
Other questions:
1. Yes, that should work.
2. I think that means CIFAR doesn’t natively have a validation set? If so you can always split the train set further.
3. Yes, loss.data[0] has the average loss for the minibatch. You can just keep appending it to a list to keep track of the losses for the whole epoch; make sure you use loss.data rather than just loss, because the temporary buffers for the graph won’t be freed if you keep around a bunch of loss variables. (See the sketch after this list.)
4. Yes, you have to call .cuda() on your model and your input data. You shouldn’t have to call it anywhere else – if you’re creating Variables in your model’s forward pass, you should use expressions like Variable(existing_var.data.new(size).zero_()) to make the new variable created on the same device (CPU/GPU) as the existing variable.
5. The latter saves the entire model using Python’s pickling, which is a very precarious way to save complicated custom classes. Basically, it doesn’t actually save the model’s structure, just the names of the classes that built it, so changing your model’s code can lead to weird and unpredictable behavior of loaded pickles, while with load_state_dict you know you’re only saving and loading the params.
6. Not a computer vision person; I don’t know.
7. If the site lets you add to the list, I think you should go ahead.
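A minimal sketch of point 3 (the names follow the tutorial code, but this is only illustrative):
epoch_losses = []
for inputs, labels in train_loader:
    optimizer.zero_grad()
    output = net(Variable(inputs))
    loss = criterion(output, Variable(labels))
    loss.backward()
    optimizer.step()
    epoch_losses.append(loss.data[0])   # store the float, not the Variable, so the graph can be freed
avg_epoch_loss = sum(epoch_losses) / len(epoch_losses)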
|
st115753
|
@jekbradbury
Thank you so much for your replies, I appreciate it. It cleared up a lot of stuff. I still have some questions; maybe other people can pop into the conversation.
Validation set:
The following code is how we load the CIFAR10 dataset. For test, we just set train=False
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
train_set = datasets.CIFAR10(root=expanduser('~/learning/cifar10-data'),
                             train=True, download=True, transform=transform)
train_loader = DataLoader(train_set, batch_size=4,
                          shuffle=True, num_workers=2)
My initial intuition was just to set train_loader = train_loader[:small_number] but I got an error:
train_loader[:100]
Traceback (most recent call last):
File “”, line 1, in
TypeError: ‘DataLoader’ object is not subscriptable
Then I thought I could mess with the train_set directly but I got another error:
train_set[:100]
Traceback (most recent call last):
File “”, line 1, in
File “/home/sudarshan/anaconda3/envs/torch/lib/python3.6/site-packages/torchvision-0.1.7-py3.6.egg/torchvision/datasets/cifar.py”, line 89, in getitem
File “/home/sudarshan/anaconda3/envs/torch/lib/python3.6/site-packages/numpy/core/fromnumeric.py”, line 550, in transpose
return _wrapfunc(a, ‘transpose’, axes)
File “/home/sudarshan/anaconda3/envs/torch/lib/python3.6/site-packages/numpy/core/fromnumeric.py”, line 57, in _wrapfunc
return getattr(obj, method)(*args, **kwds)
ValueError: axes don’t match array
Both these objects have a len function:
len(train_set)
50000
len(train_loader)
12500
So I’m not sure how to get a validation set out of this.
Train/Test mode:
According to the docs, eval has an effect only on dropout and batch norm, which makes sense since their behaviour differs during testing as opposed to training. Further, we don’t explicitly set model.train() or model.eval() when the testing is happening in the prediction code block.
So where is this flag being set and how do we know its not training again? I can think of two reasons on this works, but I’m not sure which one:
While loading the test_set, the train flag is set to False. Since testing is done on the test_loader (which was instantiated using the test_set), the mode was already set to “test” and testing automatically happened.
We just didn’t calculate loss, the gradients, and update the gradients through the optimization step. Take a look at the code blocks for training and testing:
Training block:
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
Test block:
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels.data).sum()
During testing (aka prediction) we don’t compute the loss, run the backward, and run the optimization.step() which would mean we are just getting the class labels. So by omitting those steps we do the prediction? This makes sense to me after thinking about it, but it would be helpful if I could get confirmation that this is in fact what is happening.
So the end-of-the-day question is: let’s say we have loaded our dataset using standard numpy techniques and converted it into torch Tensors, giving (X_train, y_train, X_test, y_test). How do we specify, when using X_test and y_test, that we are testing as opposed to training (which would mean calculating the loss and its gradients and updating the weights)?
Thanks.
|
st115754
|
Train/test mode is something like this:
net.train()
# train loop/function
for (images, labels) in train_loader:
    # train code

net.eval()
# test loop or function
for (images, labels) in test_loader:
    # test code, e.g. outputs = net(images)

So, you set the flags before you iterate over the corresponding data loader. You can wrap these in functions, which enables you to measure your performance on the validation set after every n training iterations.
|
st115755
|
@vabh
Thanks, but I don’t see that flag explicitly set in the examples shown in the tutorials here. Does that mean that when prediction is happening in that example, it is still in training mode?
|
st115756
|
The eval function changes the behaviour of dropout (no nodes are dropped) and batchnorm (global statistics are used rather than batch statistics) during testing. This is different from how they behave during training.
For all other operations/layers, the train and test outputs are the same. http://pytorch.org/docs/nn.html#torch.nn.Module.eval
The network in the example you linked does not have these layers, which is why I suspect they did not call the eval function. Calling the train and eval functions won't affect that model's output. For models which have dropout/batchnorm layers, it is quite imperative that you call the eval function before testing.
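A tiny illustration of that point, using only a dropout layer so the difference is easy to see:
import torch
import torch.nn as nn
from torch.autograd import Variable

drop = nn.Dropout(p=0.5)
x = Variable(torch.ones(1, 10))

drop.train()
print(drop(x))  # roughly half the entries are zeroed, the rest are scaled by 2

drop.eval()
print(drop(x))  # identity: all ones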
|
st115757
|
shaun:
So I’m not sure how to get a validation set out of this.
If the dataset is reasonably simple, you can split the dataset like so:
https://gist.github.com/t-vi/9f6118ff84867e89f3348707c7a1271f (validation_set_split.py)
import torch.utils.data
from torchvision import datasets, transforms
class PartialDataset(torch.utils.data.Dataset):
def __init__(self, parent_ds, offset, length):
self.parent_ds = parent_ds
self.offset = offset
self.length = length
assert len(parent_ds)>=offset+length, Exception("Parent Dataset not long enough")
super(PartialDataset, self).__init__()
(The embedded file is truncated here; see the gist link above for the full version.)
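Since the embed cuts off before the class is complete, here is a self-contained sketch of what such a wrapper typically looks like; this is my reconstruction under the standard Dataset protocol, not a verbatim copy of the gist:
import torch.utils.data
from torchvision import datasets, transforms

class PartialDataset(torch.utils.data.Dataset):
    """Expose the slice [offset, offset + length) of a parent dataset."""
    def __init__(self, parent_ds, offset, length):
        assert len(parent_ds) >= offset + length, "parent dataset not long enough"
        super(PartialDataset, self).__init__()
        self.parent_ds = parent_ds
        self.offset = offset
        self.length = length

    def __len__(self):
        return self.length

    def __getitem__(self, i):
        return self.parent_ds[i + self.offset]

# hypothetical usage: hold out the last 5000 MNIST training images for validation
full_train = datasets.MNIST('./data', train=True, download=True,
                            transform=transforms.ToTensor())
train_ds = PartialDataset(full_train, 0, 55000)
val_ds = PartialDataset(full_train, 55000, 5000)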
Best regards
Thomas
|
st115758
|
@vabh
You are right. I believe I've referenced this in my previous posts as well. Unfortunately, this still doesn't answer my question of how the system knows whether I'm training or testing. Is it just that I don't call loss.backward() and don't propagate the gradients and update the weights?
@tom
Thank you for this! This is exactly what I’ve been looking for!
This looks great, but I have one question. It is my understanding that when we validate the model we just use the entire dataset instead of going mini-batch by mini-batch. If we decide to do that, do you just keep track of the running loss and running accuracy for each of the validation set's mini-batches and average over the number of mini-batches? Or just use val_loader.val_ds.data directly for prediction?
|
st115759
|
Hello @shaun,
the idea of using a validation set is that whatever you plan to do with the test dataset should work for the validation set as well (really, you would use the test set's DataLoader setup for the val_dataset).
For example, in the MNIST example's (https://github.com/pytorch/examples/blob/master/mnist/main.py) test function, you can see how the function sums the loss and the correct guesses and then computes the average after the loop (for very large validation sets, you would need to look at overflow etc.).
As such, my suggestion would be to feed it to a dataloader that works similarly to the test one (for the MNIST example, in fact, you could make test_loader a parameter to the test function and feed in the val_loader; that would also do the model.eval() call mentioned earlier).
For “mass testing” I would expect that - like most examples I have seen - you would use minibatches as well if you have a sizeable validation set and aggregate the accuracy. If your validation set happens to fit in memory, you could pass batch_size = len(val_ds) to the validation DataLoader constructor.
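As a concrete (if minimal) sketch of such a mini-batch validation pass, with model, val_loader and criterion standing in for whatever you have defined:
import torch
from torch.autograd import Variable

def validate(model, val_loader, criterion):
    model.eval()  # evaluation behaviour for dropout/batchnorm
    total_loss, correct, n = 0.0, 0, 0
    for images, labels in val_loader:
        outputs = model(Variable(images, volatile=True))
        total_loss += criterion(outputs, Variable(labels)).data[0] * labels.size(0)
        _, predicted = torch.max(outputs.data, 1)
        correct += (predicted == labels).sum()
        n += labels.size(0)
    return total_loss / n, correct / n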
Personally, I would prefer to keep the workflow with Dataset->DataLoader->Validation as that is scalable and looks like an efficient use of my time, but you should certainly just do whatever works for you.
Regarding the distinction between test and train: You would call model.eval() for validation and testing, that’s how the model knows.
One thing that changes is that the forward pass’s info will not be kept around to save memory. Many things will not depend on it (so people might leave it out) in terms of numerical results of the forward pass, but for example for models using dropout in the usual fashion (as opposed to Yarin Gal and collaborators 2), the model needs to know whether you are training or testing in the forward pass before it knows whether you use backward.
Hope this helps, even if it’s just my very limited take on something that ultimately boils down to style preferences and I cannot vouch for my expertise in that.
Best regards
Thomas
|
st115760
|
This works well. I believe something like this should be part of PyTorch, at least as an example.
|
st115761
|
Might be a bit late to the party, but to create a train/val/test split (of 45k/5k/10k) in CIFAR10, this should work:
from copy import deepcopy  # deepcopy is needed below to duplicate the dataset object
# transform_train / transform_test are assumed to be defined elsewhere

n_train = 45000
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform_train)
# create a validation set
valset = deepcopy(trainset)
trainset.train_data = trainset.train_data[:n_train]
trainset.train_labels = trainset.train_labels[:n_train]
valset.train_data = valset.train_data[n_train:]
valset.train_labels = valset.train_labels[n_train:]
# create a test set
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform_test)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=128, shuffle=True, num_workers=2)
valloader = torch.utils.data.DataLoader(valset, batch_size=128, shuffle=True, num_workers=2)
testloader = torch.utils.data.DataLoader(testset, batch_size=100, shuffle=False, num_workers=2)
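A quick sanity check of the resulting sizes, using only the variables defined above:
print(len(trainset), len(valset), len(testset))  # expected: 45000 5000 10000
# note: valset still carries transform_train here; for evaluation one might
# prefer to assign valset.transform = transform_test instead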
|
st115762
|
I want to define very deep networks with identical layers. Say I want to define a block of 1000 convolutional layers; how can I do this without a for loop?
|
st115763
|
Is there any specific reason why torch.smm is only implemented on CPU but not GPU whereas torch.hsmm is implemented on both?
|
st115764
|
What is the best way to run the validation on CPU when the network is on GPU? I want to run the entire validation dataset during the validation phase, which will of course not fit on the GPU, so I would expect to run this on the CPU whenever I want a validation run. What's the best approach to doing this in PyTorch? The simplest (but probably not efficient) way I can think of is to somehow copy the entire network on each validation run with the device target being CPU (is there an easy way to do this?). Of course, perhaps the variables could just be retained and the values updated before the validation. Or is there a better way still? Or would it be recommended just to run the validation dataset in batches on GPU to process it? Other thoughts/suggestions? If copying the variables or their values is the recommended way, would someone mind giving a pointer as to which function I might use to copy over those values? Thank you!
|
st115765
|
One approach is to have two processes in two different terminals.
The first process runs training and, at the end of each epoch, writes the model to disk (torch.save(model.state_dict()) or similar).
The second process polls for a valid checkpoint on disk, and once it detects the file, it loads it and runs the checkpoint on the validation set.
Personally, I just run train -> validation -> train -> validation, all on GPU, because the GPU is just much faster.
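If you do go the two-process route, a minimal sketch of the hand-off could look like this; the file name and the MyNet model class are placeholders, not anything from the thread:
import torch

# training process (GPU side): snapshot the weights at the end of an epoch
torch.save(model.state_dict(), 'checkpoint.pth')

# validation process (CPU side): rebuild the model and pull the weights onto the CPU
model_cpu = MyNet()  # hypothetical model class
state = torch.load('checkpoint.pth', map_location=lambda storage, loc: storage)
model_cpu.load_state_dict(state)
model_cpu.eval()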
|
st115766
|
I’m trying to implement dqn, but having trouble processing the image before feeding it into my q network.
import torch
import torchvision.transforms as T  # assumed imports for the snippet below
from torch.autograd import Variable

transform = T.Compose([
T.ToPILImage(),
T.Lambda(lambda x: x.convert('L')),
T.Scale((84, 84), interpolation=T.Image.CUBIC),
T.ToTensor()
])
def process(img):
return Variable(torch.Tensor(transform(img))).unsqueeze(0)
class Qnet(nn.Module):
def __init__(self, num_actions):
super(Qnet, self).__init__()
# (84 - 8) / 4 + 1 = 20 (len,width) output size
self.cnn1 = nn.Conv2d(in_channels=4, out_channels=32, kernel_size=8, stride=4, padding=0)
self.relu1 = nn.ReLU()
# (20 - 4) / 2 + 1 = 9 (len,width) output size
self.cnn2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=4, stride=2, padding=0)
self.relu2 = nn.ReLU()
# (9 - 3) / 1 + 1 = 7 (len,width )output size
self.cnn3 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1, padding=0)
self.relu3 = nn.ReLU()
# fully connected layer
self.fc1 = nn.Linear(64 * 7 * 7, 512)
self.fc2 = nn.Linear(512, num_actions)
def forward(self, x):
out = self.cnn1(x)
out = self.relu1(out)
out = self.cnn2(out)
out = self.relu2(out)
out = self.cnn3(out)
out = self.relu3(out)
# Resize from (batch_size, 64, 7, 7) to (batch_size,64*7*7)
out = out.view(out.size(0), -1)
out = self.fc1(out)
return self.fc2(out)
When running my agent:
# my network:
Q = Qnet(env.action_space.n)
#policy
def e_greedy(state):
epsilon = 1 / epsilon_step
if np.random.random() < epsilon:
return np.random.choice(range(env.action_space.n), 1)[0]
else:
# since pytorch networks expect batch input instead of 1 state, add zeros into the Variable
qvalues = Q(state)
maxq, actions = torch.max(qvalues, 1)
return actions[0].data[0]
for episode in range(10000):
state = env.reset()
state = process(state)
while True:
env.render()
action = e_greedy(state)
When I call my network on the processed image state, I get this error:
Traceback (most recent call last):
File "dqn.py", line 89, in <module>
action = e_greedy(state)
File "dqn.py", line 69, in e_greedy
qvalues = Q(state)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 224, in __call__
result = self.forward(*input, **kwargs)
File "dqn.py", line 48, in forward
out = self.cnn1(x)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 224, in __call__
result = self.forward(*input, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 254, in forward
self.padding, self.dilation, self.groups)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/nn/functional.py", line 52, in conv2d
return f(input, weight, bias)
RuntimeError: Need input of dimension 4 and input.size[1] == 4 but got input to be of shape: [1 x 1 x 84 x 84] at /Users/soumith/code/builder/wheel/pytorch-src/torch/lib/THNN/generic/SpatialConvolutionMM.c:47
I don't understand what I'm doing wrong. I'm passing in a 4D tensor of the right size. Initially I thought that the network only expects batches instead of a single input, but even if I add an extra image to the batch (dimension = [2 x 1 x 84 x 84]), I get the same error. Thanks!
|
st115767
|
The network only expects batches, and your input has to have 4 channels.
So pass in an input of shape 1 x 4 x H x W.
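For example, one way to build that 1 x 4 x 84 x 84 input is to keep a short history of processed frames and stack them along the channel dimension; the deque-based history below is an assumption for illustration, not part of the original code:
from collections import deque
import torch
from torch.autograd import Variable

frame_history = deque(maxlen=4)

def to_state(processed_frame):
    # processed_frame: the 1 x 1 x 84 x 84 Variable returned by process(img)
    frame_history.append(processed_frame.data)
    while len(frame_history) < 4:  # pad with copies at the start of an episode
        frame_history.append(processed_frame.data)
    stacked = torch.cat(list(frame_history), 1)  # 1 x 4 x 84 x 84
    return Variable(stacked)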
|