st102800 | I’m working on a recurrent model and was running into NaN and inf loss values, but only when training on an Ubuntu machine. When running the model on the Ubuntu machine, the loss quickly approaches infinity and then NaN:
[epoch 0]
0/4731 [00:00<?, ?it/s]
1/4731 [00:00<1:13:42, 1.07it/s, acc=6.970%, acc_avg=6.970%, loss=5.097, loss_avg=5.097]
2/4731 [00:01<1:12:55, 1.08it/s, acc=20.000%, acc_avg=13.485%, loss=00inf, loss_avg=00inf]
4/4731 [00:03<1:01:55, 1.27it/s, acc=11.667%, acc_avg=15.909%, loss=00nan, loss_avg=00nan]
5/4731 [00:03<58:01, 1.36it/s, acc=6.667%, acc_avg=14.061%, loss=00nan, loss_avg=00nan]
6/4731 [00:04<57:29, 1.37it/s, acc=30.000%, acc_avg=16.717%, loss=00nan, loss_avg=00nan]
7/4731 [00:05<52:24, 1.50it/s, acc=16.667%, acc_avg=16.710%, loss=00nan, loss_avg=00nan]
8/4731 [00:06<1:00:44, 1.30it/s, acc=33.333%, acc_avg=18.788%, loss=00nan, loss_avg=00nan]
10/4731 [00:07<56:47, 1.39it/s, acc=38.333%, acc_avg=20.697%, loss=00nan, loss_avg=00nan]
11/4731 [00:08<56:05, 1.40it/s, acc=36.667%, acc_avg=22.149%, loss=00nan, loss_avg=00nan]
However, when running the identical code with identical training parameters (except on a Mac with a CPU rather than a GPU), training proceeds seemingly fine. Also strangely, the train accuracy drops by a huge margin on the Mac.
[epoch 0]
0/4731 [00:00<?, ?it/s]
1/4731 [00:05<7:48:52, 5.95s/it, acc=7.727%, acc_avg=7.727%, loss=5.022, loss_avg=5.022]
2/4731 [00:11<7:40:02, 5.84s/it, acc=10.455%, acc_avg=9.091%, loss=5.131, loss_avg=5.077]
3/4731 [00:17<7:36:24, 5.79s/it, acc=10.000%, acc_avg=9.394%, loss=5.053, loss_avg=5.069]
4/4731 [00:22<7:20:40, 5.59s/it, acc=8.182%, acc_avg=9.091%, loss=5.032, loss_avg=5.060]
5/4731 [00:27<7:21:14, 5.60s/it, acc=11.515%, acc_avg=9.576%, loss=5.156, loss_avg=5.079]
6/4731 [00:33<7:21:09, 5.60s/it, acc=9.848%, acc_avg=9.621%, loss=5.127, loss_avg=5.087]
7/4731 [00:38<7:14:57, 5.52s/it, acc=8.333%, acc_avg=9.437%, loss=5.068, loss_avg=5.084]
8/4731 [00:45<7:30:22, 5.72s/it, acc=6.818%, acc_avg=9.110%, loss=5.114, loss_avg=5.088]
9/4731 [00:50<7:28:37, 5.70s/it, acc=4.848%, acc_avg=8.636%, loss=5.072, loss_avg=5.086]
10/4731 [00:56<7:29:38, 5.71s/it, acc=8.333%, acc_avg=8.606%, loss=5.001, loss_avg=5.078]
Regardless, to remedy the issue on the Ubuntu machine which was going to perform the training, I tried to add gradient clipping to the model as follows:
def train(model, loader, criterion, optimizer, scheduler, device, clip=None, summary=None):
    loss_avg = RunningAverage()
    acc_avg = RunningAverage()
    model.train()
    with tqdm(total=len(loader)) as t:
        for i, (frames, label_map, centers, _) in enumerate(loader):
            frames, label_map, centers = frames.to(device), label_map.to(device), centers.to(device)
            outputs = model(frames, centers)
            loss = criterion(outputs, label_map)
            acc = accuracy(outputs, label_map)
            optimizer.zero_grad()
            loss.backward()
            if clip is not None:
                utils.clip_grad_norm_(model.parameters(), clip)
            optimizer.step()
            scheduler.step()
            loss_avg.update(loss.item())
            acc_avg.update(acc)
            if summary is not None:
                summary.add_scalar_value('Train Accuracy', acc)
                summary.add_scalar_value('Train Loss', loss.item())
            t.set_postfix(loss='{:05.3f}'.format(loss.item()), acc='{:05.3f}%'.format(acc * 100),
                          loss_avg='{:05.3f}'.format(loss_avg()), acc_avg='{:05.3f}%'.format(acc_avg() * 100))
            t.update()
    return loss_avg(), acc_avg()
I then tried to train the model with clip values of 100, 10, 1, 0.25, 0.1, 0.01, all the way down to 0.000001 and ran into the exact same output. I verified that the clip_norm function was being called by placing a print statement within that if statement. I then inspected model.named_parameters to verify that all the correct layers were included (which they were).
Any ideas as to (1) what’s causing the training to proceed fine on the Mac but the gradients to explode on the Ubuntu machine, (2) why the accuracy also decreases by a large amount on the Mac, and (3) why gradient clipping isn’t mitigating the issue? |
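For readers debugging a similar issue, here is a minimal sketch (a hypothetical helper, not part of the original post) that can be called right after loss.backward() and before optimizer.step() to locate where the first non-finite value appears. Note that clipping cannot repair gradients that are already NaN, so finding the step where the loss or gradients first blow up is usually more informative:

import math

def report_nonfinite(model, loss):
    # Check the loss value itself.
    loss_val = loss.item()
    if not math.isfinite(loss_val):
        print('non-finite loss:', loss_val)
    # Check the gradient norm of every parameter.
    for name, param in model.named_parameters():
        if param.grad is None:
            continue
        grad_norm = param.grad.norm().item()
        if not math.isfinite(grad_norm):
            print('non-finite gradient in', name)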
st102801 | So, PyTorch builds some final shared objects that contain all the code. If you want to use PyTorch as a C/C++ library, you have to link against _C.so, libshm, and libcaffe2.
The problem I’ve run into on OSX is this:
ld: can't link with bundle (MH_BUNDLE) only dylibs (MH_DYLIB) file
I need to add a target to pytorch to get it to build me a dylib file. I know that python can load it.
Following the response here:
stackoverflow.com – “What are the g++ flags to build a true .so/MH_BUNDLE shared library on mac osx (not a dylib)?” (asked by marathon, 1 Jul 2014)
I think I only need to change the command that produces the .so from using -bundle to -shared. I don’t know where that is, and I’m not familiar with python’s setup.py conventions. |
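A small, untested sketch of where that flag typically comes from (an assumption, not a verified fix): distutils-based builds on macOS take the extension-module link command from the LDSHARED config variable, which is usually what carries -bundle, and it can be inspected from Python:

import sysconfig

# Print the link command distutils would use for extension modules on this platform;
# on macOS it usually contains '-bundle'.
print(sysconfig.get_config_var('LDSHARED'))

# One possible (hypothetical) workaround is to export a modified LDSHARED, e.g. one
# containing '-dynamiclib' instead of '-bundle', in the environment before running
# setup.py, since distutils honors an LDSHARED environment override.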
st102802 | I’m trying to reproduce a classical result of Hochreiter and Schmidhuber, namely the addition problem (https://papers.nips.cc/paper/1215-lstm-can-solve-hard-long-time-lag-problems.pdf). This is cited in many modern papers that deal with training RNNs/LSTMs. My results just can’t seem to match what’s described in the paper. They claim that for sequences with T=100, they stop training after processing 74,000 sequences, at which point the average training error must be below 0.01 and all of the last 2,000 processed sequences must be classified correctly (error below 0.04). However, after processing 100,000 sequences, my average training error converges to slightly above 0.04 and I can never classify more than 1,400 (of the last 2,000) sequences correctly (see attached plots). My code is:
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import matplotlib.pyplot as plt
import numpy as np

# Generate Random Sequence
def gen_sequence(T):
    seq_length = int(torch.randint(T, T + int(T/10) + 1, (1,)).item())
    x = torch.zeros(seq_length, 2)
    x[:,0].uniform_(-1.0, 1.0)
    x[0,1] = -1.0
    x[seq_length - 1, 1] = -1.0
    first_mark = int(torch.randint(0, 10, (1,)).item())
    x[first_mark,1] = 1.0
    if first_mark == 0:
        x[0,0] = 0.0
    second_mark = int(torch.randint(0, int(T/2), (1,)).item())
    while second_mark == first_mark:
        second_mark = int(torch.randint(0, int(T/2), (1,)).item())
    x[second_mark,1] = 1.0
    return x.view(1,seq_length,2), 0.5 + (x[first_mark,0] + x[second_mark,0])/4.0

# LSTM architecture
class MyLSTM(nn.Module):
    def __init__(self):
        super(MyLSTM, self).__init__()
        self.lstm = torch.nn.LSTM(input_size=2, hidden_size=2, num_layers=3, batch_first=True)
        self.fc = nn.Linear(2,1)

    def forward(self, x):
        x = self.lstm(x)[0][0,-1,:]
        x = x.view(2,)
        return F.sigmoid(self.fc(x))

T = 100
ep = 100000
model = MyLSTM().cuda()
optimizer = optim.SGD(model.parameters(), lr=0.5)
running_mean = 0.0
means = np.zeros((ep,))
classifications = np.zeros((ep,))
classified = []

for j in range(ep):
    x, y = gen_sequence(T)
    x, y = x.cuda(), y.cuda()
    optimizer.zero_grad()
    loss = torch.pow(model(x) - y, 2)
    loss.backward()
    my_loss = loss.cpu().item()
    optimizer.step()
    running_mean = (j*running_mean + my_loss)/(j+1)
    means[j] = running_mean
    if len(classified) == 2000:
        classified.pop(0)
    if my_loss < 0.04:
        classified.append(1)
    else:
        classified.append(0)
    classifications[j] = sum(classified)

t = np.arange(1, ep+1)
plt.figure(1)
plt.plot(t, means)
plt.xlabel('Processed Sequences')
plt.ylabel('Running Mean of Training Error')
plt.figure(2)
plt.plot(t, classifications)
plt.xlabel('Processed Sequences')
plt.ylabel('Correctly Classified Sequences (from last 2000)')
plt.show()
I think I’m generating the sequences correctly. Is my LSTM architecture wrong? Maybe it’s the initialization, but they claim it shouldn’t matter much? Does anyone have a source where this is implemented, so I can at least check against it (doesn’t have to be in PyTorch)? Thanks. |
st102803 | Dear:
I understand the tensor shape torch.Size([1]) and the scalar shape torch.Size([]), but
what does torch.Size([0]) mean, as generated by torch.randn(0)? Is it a bug?
In [4]: a=torch.randn(0)
In [5]: a
Out[5]: tensor([])
In [8]: a.shape
Out[8]: torch.Size([0])
In [9]: a=torch.tensor(0)
In [10]: a.shape
Out[10]: torch.Size([])
In [11]: a=torch.tensor([0.0])
In [12]: a.shape
Out[12]: torch.Size([1]) |
st102804 | Solved by richard in post #2
A tensor of this size is 1-dimensional but has no elements.
Contrast this to a tensor of size torch.Size([1]), which means it is 1 dimensional and has one element. |
st102805 | (quoting dragen) torch.Size([0])
A tensor of this size is 1-dimensional but has no elements.
Contrast this to a tensor of size torch.Size([1]), which means it is 1 dimensional and has one element. |
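A quick illustration of the three cases discussed in this thread:

import torch

a = torch.randn(0)       # 1-dimensional, zero elements
b = torch.tensor(0)      # 0-dimensional scalar, one element
c = torch.tensor([0.0])  # 1-dimensional, one element

print(a.shape, a.numel())  # torch.Size([0]) 0
print(b.shape, b.numel())  # torch.Size([]) 1
print(c.shape, c.numel())  # torch.Size([1]) 1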
st102806 | Are there any cases where ReLU(inplace=True) will do something unexpected without warning about it? From this question, it was suggested that if it doesn’t cause an error, we shouldn’t expect any problems. However, I’m wondering about slightly more complex circumstances, specifically when using double backward (in a WGAN). In these kinds of cases, is it possible that using ReLU(inplace=True) might end up using the wrong number at some point? I suppose I’m also just a bit confused about how the backward is calculated after the inplace operation has taken place. |
st102807 | Solved by SimonW in post #2
If there isn’t any error, it should be fine in terms of autograd correctness (unless you are fiddling with .data). |
st102808 | If there isn’t any error, it should be fine in terms of autograd correctness (unless you are fiddling with .data). |
st102809 | Thanks. Do you happen to have a simple explanation of how the calculations are done inplace, but that the backward can still be calculated? That is, if values are being overwritten, how are the values used for the backward? Or does the backward just infer the original value based on the node? (For ReLU, this would make sense as it’s just deciding whether or not any gradient should be passed at all) |
st102810 | Yes, for things like relu, dropout it is easily decided by looking at the locations of non-zero values. This can’t be done generally though. If the input is needed to compute gradient in some backward function, then inplace operations cannot be used. |
st102811 | Hi all, I am running this code from the official documentation, but I am not getting a proper image as the output.
Super-resolution – PyTorch to Caffe2 using ONNX
https://pytorch.org/tutorials/advanced/super_resolution_with_caffe2.html
I get a blacked-out image as the output instead of the upscaled image.
Wrong output image: https://drive.google.com/file/d/1Q46dUuVAjhRGn8uLTUvdi0fnfvfkmJOM/view?usp=sharing |
st102812 | I have an nn.Embedding of dimension 5, say x = (x1, x2, …, x5). Now I would like to apply a function to it such that each embedding becomes 6-dimensional, with the new embedding tensor x’ = (2‖x‖, x1, x2, …, x5), where ‖x‖ is the norm of the original tensor. Is there a way to do this?
Thank you. |
st102813 | Yes you just need to concatenate them - https://pytorch.org/docs/stable/torch.html#torch.cat |
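A minimal sketch of that suggestion (the embedding layer and the index tensor below are placeholders): compute the norm of each embedding vector and concatenate 2‖x‖ in front of the original 5 dimensions.

import torch
import torch.nn as nn

embed = nn.Embedding(num_embeddings=1000, embedding_dim=5)
idx = torch.tensor([1, 2, 3])

x = embed(idx)                                     # shape (3, 5)
norm_feat = 2 * x.norm(p=2, dim=1, keepdim=True)   # shape (3, 1)
x_prime = torch.cat([norm_feat, x], dim=1)         # shape (3, 6)
print(x_prime.shape)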
st102814 | Hi, I noticed we can use the SVHN dataset via the official dataset wrapper.
My question is, how can I merge the train and extra parts together and use them for training?
The API does not provide any option for this case.
Thanks a lot in advance |
st102815 | I did it and here is the full source code:
However, I noticed it consumes a lot of memory. Is there an easy way to dump what I have read and then read it back batch by batch, so that it does not consume that much memory?
from __future__ import print_function
import torch.utils.data as data
from PIL import Image
import os
import os.path
import numpy as np
from utils import download_url, check_integrity


class SVHN(data.Dataset):
    """`SVHN <http://ufldl.stanford.edu/housenumbers/>`_ Dataset.
    Note: The SVHN dataset assigns the label `10` to the digit `0`. However, in this Dataset,
    we assign the label `0` to the digit `0` to be compatible with PyTorch loss functions which
    expect the class labels to be in the range `[0, C-1]`
    Args:
        root (string): Root directory of dataset where directory
            ``SVHN`` exists.
        split (string): One of {'train', 'test', 'extra'}.
            Accordingly dataset is selected. 'extra' is Extra training set.
        transform (callable, optional): A function/transform that takes in an PIL image
            and returns a transformed version. E.g, ``transforms.RandomCrop``
        target_transform (callable, optional): A function/transform that takes in the
            target and transforms it.
        download (bool, optional): If true, downloads the dataset from the internet and
            puts it in root directory. If dataset is already downloaded, it is not
            downloaded again.
    """
    url = ""
    filename = ""
    file_md5 = ""
    split_list = {
        'train': ["http://ufldl.stanford.edu/housenumbers/train_32x32.mat",
                  "train_32x32.mat", "e26dedcc434d2e4c54c9b2d4a06d8373"],
        'test': ["http://ufldl.stanford.edu/housenumbers/test_32x32.mat",
                 "test_32x32.mat", "eb5a983be6a315427106f1b164d9cef3"],
        'extra': ["http://ufldl.stanford.edu/housenumbers/extra_32x32.mat",
                  "extra_32x32.mat", "a93ce644f1a588dc4d68dda5feec44a7"]
    }

    def __init__(self, root, split='train-full',
                 transform=None, target_transform=None, download=False):
        self.root = os.path.expanduser(root)
        self.transform = transform
        self.target_transform = target_transform
        self.split = split  # training set or test set or extra set
        # if self.split not in [self.split_list,'train-full']:
        #     raise ValueError('Wrong split entered! Please use split="train" or train-full '
        #                      'or split="extra" or split="test"')
        self.urls = []
        self.filenames = []
        self.file_md5s = []
        if split == 'train-full':
            for splt in enumerate(self.split_list):
                if(splt[1] not in ['test']):
                    # print(splt)
                    self.urls.append(self.split_list[splt[1]][0])
                    self.filenames.append(self.split_list[splt[1]][1])
                    self.file_md5s.append(self.split_list[splt[1]][2])
        else:
            self.urls.append(self.split_list[split][0])
            self.filenames.append(self.split_list[split][1])
            self.file_md5s.append(self.split_list[split][2])
        if download:
            self.download()
        if not self._check_integrity():
            raise RuntimeError('Dataset not found or corrupted.' +
                               ' You can use download=True to download it')
        # import here rather than at top of file because this is
        # an optional dependency for torchvision
        import scipy.io as sio
        self.data = np.empty((32, 32, 3, 0))
        self.labels = np.empty(0)
        for i in range(len(self.filenames)):
            # reading(loading) mat file as array
            loaded_mat = sio.loadmat(os.path.join(self.root, self.filenames[i]))
            self.data = np.concatenate((self.data, loaded_mat['X']), axis=3)
            # loading from the .mat file gives an np array of type np.uint8
            # converting to np.int64, so that we have a LongTensor after
            # the conversion from the numpy array
            # the squeeze is needed to obtain a 1D tensor
            y = loaded_mat['y'].astype(np.int64).squeeze()
            self.labels = np.concatenate((self.labels, y), axis=0)
            # self.data = loaded_mat['X']
            # self.labels = loaded_mat['y'].astype(np.int64).squeeze()
        # the svhn dataset assigns the class label "10" to the digit 0
        # this makes it inconsistent with several loss functions
        # which expect the class labels to be in the range [0, C-1]
        np.place(self.labels, self.labels == 10, 0)
        self.data = np.transpose(self.data, (3, 2, 0, 1))

    def __getitem__(self, index):
        """
        Args:
            index (int): Index
        Returns:
            tuple: (image, target) where target is index of the target class.
        """
        img, target = self.data[index], int(self.labels[index])
        # doing this so that it is consistent with all other datasets
        # to return a PIL Image
        img = Image.fromarray(np.transpose(img, (1, 2, 0)))
        if self.transform is not None:
            img = self.transform(img)
        if self.target_transform is not None:
            target = self.target_transform(target)
        return img, target

    def __len__(self):
        return len(self.data)

    def _check_integrity(self):
        root = self.root
        for i in range(len(self.filenames)):
            md5 = self.file_md5s[i]
            fpath = os.path.join(root, self.filenames[i])
            return check_integrity(fpath, md5)

    def download(self):
        for i in range(len(self.filenames)):
            md5 = self.file_md5s[i]
            download_url(self.urls[i], self.root, self.filenames[i], md5)

    def __repr__(self):
        fmt_str = 'Dataset ' + self.__class__.__name__ + '\n'
        fmt_str += '    Number of datapoints: {}\n'.format(self.__len__())
        fmt_str += '    Split: {}\n'.format(self.split)
        fmt_str += '    Root Location: {}\n'.format(self.root)
        tmp = '    Transforms (if any): '
        fmt_str += '{0}{1}\n'.format(tmp, self.transform.__repr__().replace('\n', '\n' + ' ' * len(tmp)))
        tmp = '    Target Transforms (if any): '
        fmt_str += '{0}{1}'.format(tmp, self.target_transform.__repr__().replace('\n', '\n' + ' ' * len(tmp)))
        return fmt_str
I have another question: does PyTorch shuffle the data by itself or should I be doing the shuffling as well?
Do I still have to create a transform lambda to scale the input into the range 0–1, or does transforms.ToTensor() suffice?
Thanks in advance |
st102816 | I see that you have done the merge/fusion yourself by tweaking the original class. Nonetheless, if you want to do this ‘train’–‘extra’ merge, you can inherit the SVHN Dataset class and combine the two ‘splits’ as one. This would be a much neater solution. Here’s an example of how this could be done:
TVDatasetFusion
As for your second question, the shuffling is done when you instantiate the DataLoader, something like:
train_loader = torch.utils.data.DataLoader(train_set, batch_size, shuffle=True, num_workers=2) |
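An alternative sketch that avoids rewriting the class (paths and transform are placeholders): build two SVHN instances and merge them with torch.utils.data.ConcatDataset.

import torch
from torchvision import datasets, transforms

transform = transforms.ToTensor()
train_part = datasets.SVHN(root='./data', split='train', download=True, transform=transform)
extra_part = datasets.SVHN(root='./data', split='extra', download=True, transform=transform)

# Treat the two splits as one dataset for training.
merged = torch.utils.data.ConcatDataset([train_part, extra_part])
loader = torch.utils.data.DataLoader(merged, batch_size=128, shuffle=True, num_workers=2)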
st102817 | Thanks a lot, I’ll keep that in mind. I’m new to Python and know just the basics, which is why my methods are not efficient at all.
However, can you help me make this more memory efficient? The current (original) implementation tries to read everything into memory, and when both train and extra are merged they take a huge amount of memory (more than 15 GB!). What are my options here? |
st102818 | I have read some previous threads on this issue. It seems that PyTorch uses Magma for GPU linear algebra, as mentioned in this thread:
Issues about symeig and svd on GPU
Using Magma for hybrid systems is a good idea when there is only one process running, since it uses both the CPU and GPU to make the computation as fast as possible for different sized matrices.
However, I found that when I run several jobs together, the symeig function quickly makes my CPU a huge bottleneck. The issue is exactly as reported in the above thread: when I have a matrix A as a torch.cuda.FloatTensor of size 1024 x 1024 (I think for sizes like this, Magma chooses to place the major portion of the computation on the CPU), torch.symeig(A) occupies all of the CPU cores. Then if I run 4 such processes independently on 4 GPUs, it scales really poorly; the CPU becomes the bottleneck and my GPUs are all relatively idle.
Is it possible to add a feature to force symeig to run only on the GPU? Like @smth mentioned here:
switch CUDA svd and qr to using cuSolver
I appreciate Magma’s potential acceleration for a single process on a hybrid system, but usually on servers it’s better to put the computation tasks on the GPUs, or at least have such an option. Also, can anyone suggest a workaround for now? Would cupy + pytorch be a good option? I have many GPUs and would like to run independent jobs on them while only using the CPU to feed them. Thanks a lot in advance!
My quick benchmark on matrices of size 512x512, 2048x2048, and 4096x4096 shows that Magma is about 5x, 2.5x, and 2x faster than cupy, respectively, for symmetric matrix eigen-decomposition. This is consistent with the numbers reported by SsnL. It seems Magma is a good feature to keep in case there is only one process running. |
st102819 | If cupy does what you need then using it as a workaround is good. Just keep in mind autograd won’t work with the operations unless you define a backward function in a custom autograd function. |
st102820 | @richard Autograd is one issue, I can manually compute the gradients. But another issue is the communication.
Every time I need to do an svd or symeig, I have to copy the matrix from GPU to CPU, convert it to a CuPy array, and send it back to the GPU. Then after the actual operation, I copy it back to the CPU, convert it to a PyTorch tensor, and send it back to the GPU again. Using CuPy introduces these two round trips of communication, which is a little problematic. |
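For what it's worth, a sketch of how those extra host round trips could be avoided via DLPack, which lets PyTorch and CuPy share the same GPU memory. The exact function names depend on the library versions; fromDlpack/toDlpack below follow older CuPy releases, while newer ones spell them from_dlpack:

import torch
import cupy
from torch.utils.dlpack import to_dlpack, from_dlpack

t = torch.randn(1024, 1024, device='cuda')
t = t @ t.t()  # make it symmetric

# Hand the same GPU buffer to CuPy without copying through the host.
c = cupy.fromDlpack(to_dlpack(t))
w, v = cupy.linalg.eigh(c)

# Wrap the results as torch tensors, still on the GPU.
w_t = from_dlpack(w.toDlpack())
v_t = from_dlpack(v.toDlpack())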
st102821 | This is the model I am using:
class DecoderRNN(nn.Module):
    def __init__(self, embed_size, vocab_size, hidden_size, num_layers=1):
        super(DecoderRNN, self).__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.linear = torch.nn.Linear(2048, embed_size)
        self.bn = nn.BatchNorm1d(embed_size, momentum=0.01)
        self.gru = nn.GRU(embed_size, hidden_size, num_layers, batch_first=True)

    def forward(self, features, captions, lengths):
        pdb.set_trace()
        features = self.linear(features)
        embeddings = self.embed(captions)
        pdb.set_trace()
        embeddings = torch.cat((features.unsqueeze(1), embeddings), 1)
        packed = pack_padded_sequence(embeddings, lengths, batch_first=True)
        hiddens, _ = self.gru(packed)
        outputs = self.linear(hiddens[0])
        return outputs
However, after the forward pass through the embedding, all tensors on CUDA are messed up. The tensors which are not CUDA tensors seem to be fine. I get the following error; the pdb session below should clarify:
(Pdb) type(features.data)
<class ‘torch.cuda.FloatTensor’>
(Pdb) features.data
2.9174 1.9323 0.8640 … 0.1553 0.9829 0.8675
[torch.cuda.FloatTensor of size 1x2048 (GPU 0)]
(Pdb) aa = self.embed(captions)
(Pdb) aa.data
THCudaCheck FAIL file=/py/conda-bld/pytorch_1493669264383/work/torch/lib/THC/generic/THCTensorCopy.c line=65 error=59 : device-side assert triggered
*** RuntimeError: cuda runtime error (59) : device-side assert triggered at /py/conda-bld/pytorch_1493669264383/work/torch/lib/THC/generic/THCTensorCopy.c:65
(Pdb) aa.data.contiguous()
*** RuntimeError: cuda runtime error (59) : device-side assert triggered at /py/conda-bld/pytorch_1493669264383/work/torch/lib/THC/generic/THCTensorCopy.c:65
(Pdb) features
*** RuntimeError: cuda runtime error (59) : device-side assert triggered at /py/conda-bld/pytorch_1493669264383/work/torch/lib/THC/generic/THCTensorCopy.c:65
(Pdb) lengths
5332
[torch.LongTensor of size 1] |
st102822 | On first glance, it looks like your shapes aren’t right.
features is (1, D), where D=2048. embeddings on the line = self.embed... appears to be (1, S, E) where S is sequence length, presumably 5332.
So it would seem to me that cat only works if E=D, packing works if S = lengths - 1, and self.linear requires hidden_size = D.
Given the stack trace, it seems to me that the problem is with S, or lengths. |
st102823 | Thanks for the reply.
The sizes seem to be correct, though. embed_size is 256 and the vocabulary length is 256 (taking values from 0 to 255). Therefore, self.embed should produce a 256-dimensional vector. The output of linear is [1, 256] and that of the embedding should be [1, 5332, 256]. In the concat I call features.unsqueeze, which introduces one more dimension, so the dimensions should not be a problem.
On further inspection, I realize that when I try to print the embedding right after it is computed, I get the same error. So somehow the embedding is getting messed up, even though that statement itself executes correctly. |
st102824 | Interesting. I wonder if you accidentally have values beyond 255 in your caption. That usually gives a nastier looking error, but it could explain what you’re seeing.
You might try running this with the environment variable CUDA_LAUNCH_BLOCKING=1. That usually will make the stack trace point to the line where things are actually going wrong. |
st102825 | Thanks so much.
I was able to debug it. The captions did have entries greater than 255. |
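For reference, a one-line sanity check of this kind catches the problem with a readable error instead of a device-side assert (vocab_size stands for whatever num_embeddings was passed to nn.Embedding):

# captions: LongTensor of token indices
assert captions.min() >= 0 and captions.max() < vocab_size, \
    'caption indices out of range: {} .. {}'.format(captions.min().item(), captions.max().item())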
st102826 | I’m often in a situation where on my local machine (without a GPU) my code runs fine, but I’d like to test that all the tensors are put on the correct device if a GPU is present. I want to test this before I commit my code, but to do so would require deploying it to a remote machine, and those are usually running something I don’t want to interrupt, so I need to wait until they’re finished before testing and committing my code. Of course, there are ways I could change my workflow for committing my code, but ideally I would just be able to test the code locally by simulating that a GPU is present. Is there any way to do this? |
st102827 | Solved by ptrblck in post #2
I’m not aware of a way to simulate a GPU, but you could run an AWS instance to try it out or use a Colab notebook, although I’m not sure about the licensing. |
st102828 | I’m not aware of a way to simulate a GPU, but you could run an AWS instance to try it out or use a Colab notebook, although I’m not sure about the licensing. |
st102829 | Thanks! Not quite the answer I wanted to hear, but probably the correct answer nonetheless. Also, didn’t know about Colab, and that looks pretty neat, so thank you! |
st102830 | They provide a free GPU for, afaik, 24 hours. After that you would need to restart your notebook.
Because of that, I would look closely at the license and intellectual property terms if you are using code written for your employer. |
st102831 | Alternatively you could try to use one of the programs listed in this thread.
However, I am not sure whether any of these options are runnable in python. If not you might be able to use them together with torch.jit as it is mentioned in Road to 1.0 |
st102832 | I wrote a function that returns a balanced sampler for SubsetRandomSampler, which can be used as a sampler in DataLoader(s). The function is working well and might be useful for others. However, I wonder if anyone has any comments, suggestions or improvements.
import numpy as np
import torch


def get_a_balanced_sampler(score, sampler_size, labels, no_classes):
    '''
    Args in -
        score: posteriori values in the range [0 to 1] for each of the labels,
            a value 1 indicates that the label has high likeliness to be of a correct
            class
        sampler_size: the intended sampler size the user wants to get back, the
            size of the returned sampler will be (slightly) less than this, depending on
            the minimum number of labels per class
        labels: an array containing the labels
        no_classes: the number of classes in the problem
    Parameters -
        percentage_of_selected_samples: selecting 50% for the samples with the highest
            'score' values, thus, selection will be made randomly from these samples in a
            balanced manner. The 50% can be changed according to the user requirements,
            e.g., using less or higher values.
    '''
    percentage_of_selected_samples = 50/100
    len_labels_per_class = np.zeros(no_classes, dtype=int)
    idx_per_class = np.zeros([no_classes, len(labels)], dtype=int)
    for i in range(no_classes):
        idx_per_class[i] = labels == i
        len_labels_per_class[i] = sum(idx_per_class[i] == True)
    no_labels_per_class = min(len_labels_per_class)
    sampler_pool_size = int(no_labels_per_class * percentage_of_selected_samples)
    sampler_size = int(sampler_size/no_classes)
    if(sampler_size > sampler_pool_size):
        print('You need to decrease the value percentage_of_selected_samples: ', percentage_of_selected_samples)
        exit('Exiting function get_a_balanced_sampler(): sampler_size has become larger than sampler_pool_size')
    my_sampler = []
    for i in range(no_classes):
        sample_idx = (-score[idx_per_class[i]]).argsort()[:sampler_pool_size]
        sample_idx = np.random.permutation(sample_idx)
        sample_idx = sample_idx[:sampler_size]
        my_sampler.extend(sample_idx)
    if len(my_sampler) < 100: exit('Exiting function get_a_balanced_sampler(): small sampler has been generated')
    my_sampler = torch.utils.data.sampler.SubsetRandomSampler(my_sampler)
    return my_sampler |
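A small usage sketch under assumed names (train_set, train_scores and train_labels are placeholders): the returned sampler plugs straight into a DataLoader, and shuffle is left at its default False because a sampler is supplied.

import torch

sampler = get_a_balanced_sampler(train_scores, sampler_size=5000,
                                 labels=train_labels, no_classes=10)
loader = torch.utils.data.DataLoader(train_set, batch_size=64,
                                     sampler=sampler, num_workers=2)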
st102833 | I am working on training ResNet18 on the ImageNet dataset, following the official example.
As is recommended, I split one batch across 4 GPUs, with each one getting 128 samples (128 per GPU, 512 in total). Surprisingly, loading data costs about 75% (1.7s/2.2s) of the total time, and the time per iteration varies hugely, e.g. 17s vs. 0.5s. It seems that the GPUs are waiting for new data batches (GPU utilization drops to 0%), even though I run 16 workers to load data.
So, I am wondering: is there anything wrong with the settings? Is the batch size too large, or are there too many workers loading data? |
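One way to confirm that the loader (rather than the GPU work) is the bottleneck is to time the two parts of each iteration separately; a rough sketch, assuming the usual train_loader, model, criterion and optimizer variables:

import time
import torch

end = time.time()
for i, (images, target) in enumerate(train_loader):
    data_time = time.time() - end           # time spent waiting for the batch
    images, target = images.cuda(), target.cuda()
    output = model(images)
    loss = criterion(output, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    torch.cuda.synchronize()                # wait for GPU work so it can be timed
    compute_time = time.time() - end - data_time
    if i % 10 == 0:
        print('iter {}: data {:.3f}s, compute {:.3f}s'.format(i, data_time, compute_time))
    end = time.time()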
st102834 | I am using batch norm in my network. After each training epoch, I evaluate the loss on the validation set. Will the batch norm statistics from train() mode be carried over into the next epoch? I want to keep training these statistics. Thanks
for epoch in range(100):
    net.train()
    loss = ...
    loss.backward()
    optimizer.step()
    # Evaluate on the validation set
    with torch.no_grad():
        for val in valloader:
            images, targets = val
            net.eval()
            ... |
st102835 | Solved by ptrblck in post #2
The running statistics will be updated once your model is set to .train() again.
Your code snippet looks fine. You could move the net.eval() before the loop through your validation set, but it’s not a problem if you call .eval() repeatedly. |
st102836 | The running statistics will be updated once your model is set to .train() again.
Your code snippet looks fine. You could move the net.eval() before the loop through your validation set, but it’s not a problem if you call .eval() repeatedly. |
st102837 | Thanks. So the running statistics will be stored somewhere when I call .eval(), and if I call .train() again they will be recovered and updated. Am I right? |
st102838 | They will just not be updated. The running stats are already stored in bn.running_mean and bn.running_var. If you set this layer to eval, the running stats will just be applied without updating them. Have a look at this small example:
bn = nn.BatchNorm2d(3)
x = torch.randn(10, 3, 24, 24)
# Print initial stats
print(bn.running_mean, bn.running_var)
# Update once and print stats
output = bn(x)
print(bn.running_mean, bn.running_var)
# Set to eval; the stats should stay the same
bn.eval()
output = bn(x)
print(bn.running_mean, bn.running_var)
# Set to train again; the stats should be changed now
bn.train()
output = bn(x)
print(bn.running_mean, bn.running_var) |
st102839 | What’s the recommended method for GPU profiling? I installed the latest version of pytorch with conda, torch.__version__ reports 0.3.0.post4, but when I try to call torch.autograd.profiler.profile(use_cuda=True) I get the error __init__() got an unexpected keyword argument 'use_cuda'. Is this feature only available in the version from the github repo? |
st102840 | The use_cuda parameter is only available in versions newer than 0.3.0, yes. Even then it adds some overhead. The recommended approach appears to be the emit_nvtx function:
with torch.cuda.profiler.profile():
    model(x)  # Warmup CUDA memory allocator and profiler
    with torch.autograd.profiler.emit_nvtx():
        model(x) |
st102841 | Trying to run that code gives me an error about the use_cuda flag (with version 0.3.1). For example:
import torch
from torch.autograd import Variable

x = Variable(torch.randn(5,5), requires_grad=True).cuda()
with torch.autograd.profiler.profile() as prof:
    y = x**2
    with torch.autograd.profiler.emit_nvtx():
        y = x**2
print(prof)
Gives:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-18-c54fc33dff6e> in <module>()
1 with torch.autograd.profiler.profile() as prof:
2 y = x**2
----> 3 with torch.autograd.profiler.emit_nvtx():
4 y = x**2
5
~/.pyenv/versions/3.6.1/envs/phdnets2/lib/python3.6/site-packages/torch/autograd/profiler.py in __enter__(self)
213 self.entered = True
214 torch.cuda.synchronize()
--> 215 torch.autograd._enable_profiler(True)
216 return self
217 |
st102842 | I tried to run the script on 0.4.0 and it works fine with torch.autograd.profiler.profile(use_cuda=True).
It seems this problem should be solved by upgrading to 0.4.0.
import torch

cuda = torch.device('cuda')
x = torch.randn((1, 1), requires_grad=True)
print(x.device)
with torch.autograd.profiler.profile(use_cuda=True) as prof:
    y = x ** 2
    y.backward()
print(prof) |
st102843 | I’m obviously doing something wrong trying to finetune this implementation of SegNet. These are my results for accuracy and loss in TensorBoard.
[screenshot: tensorboard screenshot.png]
The loss graph has the right curve, but both functions present a very strange and wrong behaviour during the first training epoch. Based on accuracy, it almost looks like it performs finetuning correctly for the first epoch, then it starts from scratch.
This is the bare-bones of the code I’m working with:
def train(epoch):
    model.train()
    # update learning rate
    exp_lr_scheduler.step()
    total_loss = 0
    total_accuracy = 0
    # iteration over the batches
    for batch_idx, (img, gt) in enumerate(train_loader):
        input = Variable(img)
        target = Variable(gt)
        # initialize gradients
        optimizer.zero_grad()
        # predictions
        output = model(input)
        cr_en_loss = nn.CrossEntropyLoss()
        loss = cr_en_loss(output, target)
        loss.backward()
        optimizer.step()
        """
        Here I calculate accuracy for this batch and log results
        """
        total_loss += loss.data[0]
        total_accuracy += accuracy
    return total_loss / len(train_loader), total_accuracy / len(train_loader)


# create SegNet model
model = SegNet(input_channels, label_numbers)
th = torch.load('path/of/pretrained/weights.pth')
model.load_state_dict(th)

# finetuning - freezing all the net's layers but the last one
ftparams = ['conv11d.weight', 'conv11d.bias']
for name, param in model.named_parameters():
    if name not in ftparams:
        param.requires_grad = False

# define the optimizer
optimizer = optim.SGD(model.conv11d.parameters(), lr=lr, momentum=momentum)
exp_lr_scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=step_size, gamma=gamma)

transform_train = transforms.Compose([
    """
    Here I apply my transforms
    """
])

train_dataset = MyDataset(root_dir_img, root_dir_gt, transform_train)
train_loader = torch.utils.data.DataLoader(
    train_dataset,
    batch_size=batch_size,
    shuffle=True,
    num_workers=num_workers,
    pin_memory=True
)

for epoch in range(epochs):
    # training
    train_loss, train_acc = train(epoch)
Where is my mistake? Why does my net forget everything starting from the second epoch? |
st102844 | Solved by ptrblck in post #30
Add the following line:
elif mask.mode == '1':
    img2 = torch.from_numpy(np.array(mask, np.uint8, copy=False)) |
st102845 | While finetuning your model you have to make sure the learning rate is not too high, since the pre-trained model has already “good” weights.
How high is your learning rate? |
st102846 | learning rate = 0.001
momentum = 0.5
Could this be the fault of one of these parameters? The change of behaviour after the first epoch looks really strange to me, almost like finetuning is going okay at the beginning and then starting from scratch in the second epoch. |
st102847 | Try to lower the lr to 1e-4 or even 1e-5.
How did you choose the settings for StepLR? |
st102848 | I’ll try right now and let you know in a minute.
This is the lr scheduler:
exp_lr_scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)
Since it’s a pretrained net and I only need to tweak the last layer a little, I was thinking about training it only for 15 epochs. |
st102849 | I tried lr = 1e-4 and 1e-5 but the problem is still there, I think it has something to do with my implementation of this training method maybe? Or could it be the momentum? |
st102850 | Well, you are apparently using an older version of PyTorch, but this shouldn’t be the problem here I think.
However, you should upgrade to the latest stable release, since e.g. Variables and tensors were merged.
You can find the install instructions on the website.
Also, you don’t have to reconstruct the criterion in each run.
Move the cr_en_loss = nn.CrossEntropyLoss() above the for loop.
This shouldn’t be the problem either.
Is the loss increase exactly happening after one full epoch? |
st102851 | I’m stuck on the older version because of my work group; unfortunately I can’t upgrade to the new version right now.
The loss criterion is outside the for loop in my version of the code; I put it there for readability.
The drastic loss increase and accuracy decrease both happen after exactly one epoch, every time I run the experiment.
My guess is the net just uses the pretrained weights during the first epoch (which are okay at their job), giving me good results. Starting from the second epoch the net fresh-starts without any weights, learning from zero.
This should explain why the accuracy drops so much (it’s a segmentation task, and all images become almost completely black). |
st102852 | OK, something seems to be broken. Could you post the whole code?
As far as I can tell, the current code looks good.
If you cannot post the code due to your work policy, could you have a look at the norm of the gradients in the first and second epoch? |
st102853 | In a few moments I will post the whole code, no problem. I will comment some part to make it easier to read. |
st102854 | This is the full code:
import argparse
import logger
import time
import torch
import torch.backends.cudnn as cudnn
import torch.nn as nn
import torch.optim as optim
import transforms
from data import MyDataset
from segnet import SegNet
from torch.autograd import Variable


def train(epoch):
    model.train()
    # update learning rate
    exp_lr_scheduler.step()
    total_loss = 0
    total_accuracy = 0
    # iteration over the batches
    for batch_idx, (img, gt) in enumerate(train_loader):
        if use_cuda:
            img = img.cuda(async=True)
            gt = gt.cuda(async=True)
        input = Variable(img)
        target = Variable(gt)
        # initialize gradients
        optimizer.zero_grad()
        # predictions
        output = model(input)
        """
        output is (24, 2, 224, 224)
        target is (24, 1, 224, 224)
        Here I change target.view() and type in order to use nn.CrossEntropyLoss()
        """
        tb = target.size(0)
        tc = target.size(1)
        th = target.size(2)
        tw = target.size(3)
        target_long = target.view(tb, th, tw).long()
        loss = cren_loss(output.cuda(), target_long.cuda())
        loss.backward()
        optimizer.step()
        """
        This is a segmentation task, so in the next part I compute how many 1 pixels are correctly classificated
        as 1 and how many 0 pixels are correctly 0. Then I simply calculate the mean of foreground and background
        accuracy.
        """
        output_pred = softmax(output)
        _, prediction = output_pred.max(dim=1)
        prediction = prediction.unsqueeze(1)
        mat_zero2zero = ((prediction == 0) * (target == 0)).int()
        mat_one2one = ((prediction == 1) * (target == 1)).int()
        prediction_back = mat_zero2zero.sum().float()
        target_back = target.numel() - target.sum()
        prediction_fore = mat_one2one.sum().float()
        target_fore = target.sum()
        acc_back = prediction_back / target_back
        acc_fore = prediction_fore / target_fore
        accuracy = (acc_back + acc_fore) / 2
        # TensorBoard logging
        info = {'train-loss': loss.data[0],
                'train-accuracy': accuracy}
        for tag, value in info.items():
            log.scalar_summary(tag, value, batch_idx + 1)
        print('batch: %5s | loss: %.3f | acc_back: %.3f | acc_fore: %.3f | acc: %.3f |'
              % (str(batch_idx + 1) + '/' + str(len(train_loader)),
                 loss.data[0],
                 acc_back,
                 acc_fore,
                 accuracy),
              time.strftime("%H:%M:%S", time.gmtime(time.time())),
              'training')
        total_loss += loss.data[0]
        total_accuracy += accuracy
    return total_loss / len(train_loader), total_accuracy / len(train_loader)


# training settings
parser = argparse.ArgumentParser(description='PyTorch SegNet')
parser.add_argument('--epochs', type=int, default=10, help='train epochs')
parser.add_argument('--lr', type=float, default=0.0001, help='learning rate')
parser.add_argument('--momentum', type=float, default=0.5, help='SGD momentum')
parser.add_argument('--resume', '-r', action='store_true', help='resume from checkpoint')
args = parser.parse_args()

# cuda
use_cuda = torch.cuda.is_available()

input_nbr = 3
label_nbr = 2
img_size = 224
batch_size = 24
num_workers = 4
start_epoch = 0

softmax = torch.nn.Softmax(dim=1)
if use_cuda:
    cren_loss = nn.CrossEntropyLoss().cuda()
else:
    cren_loss = nn.CrossEntropyLoss()

# create SegNet model
model = SegNet(input_nbr, label_nbr)
model.load_from_filename('/path/to/pretrained/weights')

# convert to cuda if needed
if use_cuda:
    model.cuda()
    cudnn.benchmark = True
else:
    model.float()

# finetuning
ftparams = ['conv11d.weight', 'conv11d.bias']
for name, param in model.named_parameters():
    if name not in ftparams:
        param.requires_grad = False

# define the optimizer
optimizer = optim.SGD(model.conv11d.parameters(), lr=args.lr, momentum=args.momentum)
exp_lr_scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)

# define data
root_dir_img = '/path/to/img/dir'
root_dir_gt = './path/to/gt/dir'

transform_train = transforms.Compose([
    transforms.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.1),
    transforms.RandomResizedCrop(img_size),
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.ToTensor()
])

train_dataset = MyDataset(root_dir_img, root_dir_gt, transform_train)
train_loader = torch.utils.data.DataLoader(
    train_dataset,
    batch_size=batch_size,
    shuffle=True,
    num_workers=num_workers,
    pin_memory=True
)

# Set the logger
log = logger.Logger('./logs')

for epoch in range(start_epoch, start_epoch + args.epochs):
    print('epoch: %5s' % str(epoch+1))
    # training
    train_loss, train_acc = train(epoch)
    print('\nepoch: %5s | loss: %.3f | acc: %.3f |'
          % (str(epoch + 1) + '/' + str(start_epoch + args.epochs),
             train_loss,
             train_acc),
          time.strftime("%H:%M:%S", time.gmtime(time.time())),
          'training')
    print('\n') |
st102855 | Thanks for the code. I am currently working on it, creating some dummy data and targets.
One thing I’ve seen so far is the use of transformations.
Since you are working on a segmentation task, I assume you have segmentation maps as the target.
I cannot see, how your Dataset is implemented, but if you are using some random transformations like RandomResizedCrop, and flipping, you have to take care of applying them also on your target.
Otherwise your input will be transformed and the model might have a hard time to learn the relationship between the input and target.
The easiest way would be to use the functional API of torchvision.
Here is a small example I created a while ago.
Let me know, if this helps! |
st102856 | The transformations are already applied both on images and ground truths where needed.
The dataset consists of some objects and their binary segmentation maps.
I could provide you the code I’m using for dataset creation / transforms / net implementation if this could help.
Anyway, everything seems to work fine during the first epoch: accuracy is high and loss low, since the pretrained weights are good. The problem is the transition from the first epoch to the second; my guess is some parameters are not handled correctly.
How can I check the norm of the gradients you were talking about? |
st102857 | Could you post the transformation part of your Dataset please?
Are you using the transform_train in it?
You can check it with model.conv11d.weight.grad.norm(). |
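For completeness, a small sketch that prints the gradient norm of every trainable parameter right after loss.backward() inside train() (with the layers frozen as above, only conv11d should show gradients):

for name, param in model.named_parameters():
    if param.requires_grad and param.grad is not None:
        print(name, param.grad.norm().item())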
st102858 | This is my Dataset class.
import os
import torch.utils.data
from PIL import Image
from PIL import ImageFile


class MyDataset(torch.utils.data.Dataset):
    def __init__(self, root_dir_img, root_dir_gt, transform=None):
        self.root_dir_img = root_dir_img
        self.root_dir_gt = root_dir_gt
        self.transform = transform
        img_names = [os.path.join(root_dir_img, name) for name in os.listdir(root_dir_img) if
                     os.path.isfile(os.path.join(root_dir_img, name))]
        gt_names = [os.path.join(root_dir_gt, name) for name in os.listdir(root_dir_gt) if
                    os.path.isfile(os.path.join(root_dir_gt, name))]
        self.img_files = []
        self.gt_files = []
        for i in range(len(img_names)):
            self.img_files.append(Image.open(img_names[i]))
            self.gt_files.append(Image.open(gt_names[i]))

    def __len__(self):
        return len(self.img_files)

    def __getitem__(self, idx):
        ImageFile.LOAD_TRUNCATED_IMAGES = True
        img = self.img_files[idx]
        gt = self.gt_files[idx]
        sample = {'image': img, 'mask': gt}
        if self.transform:
            sample = self.transform(sample)
        img = sample['image']
        gt = sample['mask']
        return img, gt
I will check grad.norm() now. |
st102859 | epoch: 1/10 | loss: 0.499 | acc: 0.877 | 14:18:40 training
Variable containing:
0.5379
[torch.cuda.FloatTensor of size 1 (GPU 0)]
epoch: 2/10 | loss: 4.012 | acc: 0.506 | 14:18:48 training
Variable containing:
2.0424
[torch.cuda.FloatTensor of size 1 (GPU 0)]
epoch: 3/10 | loss: 4.082 | acc: 0.504 | 14:18:57 training
Variable containing:
2.2331
[torch.cuda.FloatTensor of size 1 (GPU 0)]
These are the stats and norm of the gradients for the first three epochs. |
st102860 | Could you try to run your code with one or two image–mask pairs and see how your model behaves then?
I still don’t see any obvious errors in your code, so we might have a look if the data is somehow corrupted/changed, even though you are not calling anything after the train() call, right? |
st102861 | I’m not calling anything after the train function.
If I try running the net it works fine; it does a good job at segmenting using the pretrained weights.
But the model obtained after finetuning is unusable (as shown by the accuracy drop from 85% to 50%).
I noticed that if I let the training process run for many epochs (100+) I get a working model, basically trained from scratch. This does not solve my problem, but I guess it is just another confirmation that the whole thing “is working”, but the parameters “get lost” moving from epoch 1 to epoch 2. |
st102862 | Yeah, I see the issue.
Could you remove the truncated images and try it again?
I still have the feeling the error is somehow related to the data.
EDIT: Also, could you remove the cuda() calls from this line:
loss = cren_loss(output.cuda(), target_long.cuda()) |
st102863 | You are probably on the right track.
I removed this line:
ImageFile.LOAD_TRUNCATED_IMAGES = True
And I got this error:
Traceback (most recent call last):
File "/.../train.py", line 192, in <module>
train_loss, train_acc = train(epoch)
File "/.../train.py", line 28, in train
for batch_idx, (img, gt) in enumerate(train_loader):
File "/.../venv/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 281, in __next__
return self._process_next_batch(batch)
File "/...e/venv/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 301, in _process_next_batch
raise batch.exc_type(batch.exc_msg)
OSError: Traceback (most recent call last):
File "/.../venv/lib/python3.6/site-packages/PIL/ImageFile.py", line 215, in load
s = read(self.decodermaxblock)
File "/.../venv/lib/python3.6/site-packages/PIL/PngImagePlugin.py", line 619, in load_read
cid, pos, length = self.png.read()
File "/.../venv/lib/python3.6/site-packages/PIL/PngImagePlugin.py", line 114, in read
length = i32(s)
File "/.../venv/lib/python3.6/site-packages/PIL/_binary.py", line 76, in i32be
return unpack(">I", c[o:o+4])[0]
struct.error: unpack requires a buffer of 4 bytes
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/.../venv/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 55, in _worker_loop
samples = collate_fn([dataset[i] for i in batch_indices])
File "/.../venv/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 55, in <listcomp>
samples = collate_fn([dataset[i] for i in batch_indices])
File "/.../data.py", line 41, in __getitem__
sample = self.transform(sample)
File "/.../transforms.py", line 584, in __call__
sample = t(sample)
File "/.../transforms.py", line 1074, in __call__
img = transform(img)
File "/.../transforms.py", line 584, in __call__
sample = t(sample)
File "/.../transforms.py", line 794, in __call__
return self.lambd(img)
File "/.../transforms.py", line 1048, in <lambda>
transforms.append(Lambda(lambda img: adjust_contrast(img, contrast_factor)))
File "/.../transforms.py", line 462, in adjust_contrast
enhancer = ImageEnhance.Contrast(img)
File "/.../venv/lib/python3.6/site-packages/PIL/ImageEnhance.py", line 66, in __init__
mean = int(ImageStat.Stat(image.convert("L")).mean[0] + 0.5)
File "/.../venv/lib/python3.6/site-packages/PIL/Image.py", line 879, in convert
self.load()
File "/.../venv/lib/python3.6/site-packages/PIL/ImageFile.py", line 220, in load
raise IOError("image file is truncated")
OSError: image file is truncated |
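A small sketch for locating damaged files up front instead of failing mid-training (the folder path is a placeholder); PIL raises the same "image file is truncated" error when forced to fully decode a broken file:

import os
from PIL import Image

root = '/path/to/img/dir'
for name in sorted(os.listdir(root)):
    path = os.path.join(root, name)
    try:
        with Image.open(path) as img:
            img.load()   # force a full decode; raises on truncated/corrupt files
    except (IOError, OSError) as e:
        print('bad file:', path, e)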
st102864 | Hey Pytorch user and developers
It seems that the current nightly pytorch is not up-to-date. It also does not have a cuda91 dependency.
When I type in:
conda install -c pytorch pytorch-nightly
It produces:
Solving package specifications: .
Package plan for installation in environment /opt/anaconda3/envs/test:
The following NEW packages will be INSTALLED:
intel-openmp: 2018.0.3-0
libgcc-ng: 7.2.0-hdf63c60_3
libgfortran-ng: 7.2.0-hdf63c60_3
libopenblas: 0.2.20-h9ac9557_7
libstdcxx-ng: 7.2.0-hdf63c60_3
ninja: 1.8.2-h2d50403_1 conda-forge
numpy-base: 1.14.3-py36h0ea5e3f_1
pytorch-nightly: 2018.05.07-py36_cuda8.0.61_cudnn7.1.2_1 pytorch
The following packages will be UPDATED:
mkl: 2017.0.3-0 --> 2018.0.3-1
numpy: 1.13.1-py36_0 --> 1.14.3-py36h28100ab_2
scipy: 0.19.1-np113py36_0 --> 1.1.0-py36hfc37229_0
Is this channel periodically maintained? I have had a hard time building pytorch directly from github… |
st102865 | We are having some build system issues in our nightly builds following the merge with caffe2 code. After releasing 0.4.1, we will fix that. cc @smth |
st102866 | Hello,
I am working on a project where I want one network that takes a 4x4, 7x7, or 14x14 image and reconstructs it to a 28x28 image (working with MNIST). My current architecture would be as such:
blow the smaller image up to the target dimensions, then encode it down to the original input dimensions (so the network can learn the mapping), and from there decode it up to the desired 28x28 dimension. But as far as I know I would have to make a different network for each of these cases.
Is there any way to do this dynamically in PyTorch? |
st102867 | If you are thinking about how to resize the images, you can just do that in your dataset class. IIUC, the resizing doesn’t need to be backprop-able. |
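A minimal sketch of that idea (the exact transform pipeline is an assumption): the low-resolution input is produced inside the dataset by resizing the 28x28 MNIST digit down, so the same network code can be pointed at whichever input size the dataset is configured for.

import torch
from torchvision import datasets, transforms

low_res = 7  # or 4, or 14

transform = transforms.Compose([
    transforms.Resize(low_res),   # downsample the 28x28 digit
    transforms.ToTensor(),
])
dataset = datasets.MNIST('./data', train=True, download=True, transform=transform)
x, y = dataset[0]
print(x.shape)  # 1 x low_res x low_res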
st102868 | How can I make the nn.Linear input size dynamic?
e.g. nn.Linear(dynamic_number, fixed_number) |
st102869 | Is there an existing dataset for text translated into emojis? Not just for a single word, but transcribing an entire sentence. Even if the domain of labels is limited.
Example
Input: “I’m flying over to Italy so I can eat some pasta”
Output: ️
Link (a similar project that utilizes a text-to-emoji database): getdango.com – “Dango, Your Emoji Assistant”: Dango quickly helps you find the best emoji, GIFs, & stickers. |
st102870 | Hi,
I’m using the PyTorch tensor library for tensor-based computations. I’m implementing a particular type of spiking neural network and I do not use PyTorch’s Network or Module; I just use tensors and their methods plus some functions in nn.functional.
Currently, in order to have a single code base capable of running on GPU and CPU, I just set the device property of the tensors, assuming that all computations that involve GPU-allocated tensors will be done completely by the GPU. I was monitoring my GPU usage and I found that it does not reach its maximum capacity even for a large instance of the problem (I’m not sure if it was large enough!). This made me unsure about my assumption. Is it enough to put all the tensors on the GPU in order to have all the computations on the GPU, or do I have to explicitly call .cuda() for all the built-in methods and functions? |
st102871 | Solved by ptrblck in post #2
It should be sufficient to move your tensors to the GPU.
You can check the current device by printing tensor.device.
If your GPU is not fully utilized your code might have some bottleneck or the operations are just too small so that the overhead of moving between host and device is bigger than the… |
st102872 | It should be sufficient to move your tensors to the GPU.
You can check the current device by printing tensor.device.
If your GPU is not fully utilized your code might have some bottleneck or the operations are just too small so that the overhead of moving between host and device is bigger than the actual performance gain. |
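A short sketch of the check mentioned above, plus a synchronized timing to see how long the GPU work actually takes (the tensors here are placeholders):

import time
import torch

x = torch.randn(1000, 1000, device='cuda')
w = torch.randn(1000, 1000, device='cuda')
print(x.device, w.device)          # both should report a cuda device

torch.cuda.synchronize()
start = time.time()
y = x @ w                          # runs on the GPU because both operands live there
torch.cuda.synchronize()           # wait for the kernel before reading the clock
print('elapsed: {:.4f}s'.format(time.time() - start))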
st102873 | I’m asking because I wonder why .zero_grad() would cause an out-of-memory error. From my understanding this op just sets param.grad.data to zero, so why would extra memory be required? |
st102874 | Is there a small script you can give to reproduce this? I am happy to look into what’s happening. |
st102875 | I may need a while to reduce the code to something as small as possible. The error traceback is as follows:
THCudaCheck FAIL file=/data/users/soumith/builder/wheel/pytorch-src/torch/lib/THC/generic/THCStorage.cu line=66 error=2 : out of memory
Warning: out of memory
Warning: out of memory
epoch = 0, loss =2.78738046, ER_train = 100.00, ER_batch = 100.00, time = 2.90s(2.90|0.00), progress = 0.00%, time remained = 1781.43h
epoch = 0, loss =2.77562714, ER_train = 98.44, ER_batch = 96.88, time = 0.73s(0.73|0.00), progress = 0.00%, time remained = 1983.91h
epoch = 0, loss =2.74634695, ER_train = 97.40, ER_batch = 95.31, time = 1.40s(1.40|0.00), progress = 0.04%, time remained = 5.93h
Warning: out of memory
Traceback (most recent call last):
File "DIC_train_pytorch.py", line 397, in <module>
optimizer.zero_grad()
File "/home/David/App/anaconda3/lib/python3.5/site-packages/torch/optim/optimizer.py", line 136, in zero_grad
param.grad.data.zero_()
RuntimeError: cuda runtime error (2) : out of memory at /data/users/soumith/builder/wheel/pytorch-src/torch/lib/THC/generic/THCTensorMath.cu:35
In the above traceback logs, "Warning: out of memory" is printed by my code to warn me that an out-of-memory exception (exactly the exception shown in the last line of the log above) has been caught. This exception is raised by PyTorch when the input training batch is too big. After catching the exception, I reduce the batch size and try the training procedure again. The corresponding code snippet is as follows:
optimizer.zero_grad()
try:
    if device >= 0:
        score = model(Variable(torch.from_numpy(X)).cuda(device))
    else:
        score = model(Variable(torch.from_numpy(X)))
except RuntimeError as e:
    if e.args[0].startswith('cuda runtime error (2) : out of memory'):
        print('Warning: out of memory')
        cached_data.extend(split_train_data([X, Y]))
        continue
    else:
        raise e |
st102876 | it’s possible that OOM occurs elsewhere but is reported at zero_grad.
Run your program with:
CUDA_LAUNCH_BLOCKING=1 python script.py
and see if it still reports the OOM at zero_grad. |
st102877 | We’re probably missing a check somewhere so the error pops up only there. You’re likely working under a super heavy memory pressure, and the model doesn’t fit. What’s the last operation you do (loss fn + last op before)? Did you try reducing the batch size? |
st102878 | I have the same error.
Tried running the model with CUDA_LAUNCH_BLOCKING=1 but still the error pops up at optimizer.zero_grad(). Can anyone help me out? I can post the model and training snippet if needed.
Thanks |
st102879 | I am trying to build a custom convolution using the method shown in pytorch unfold function.
The custom convolution function is given below:
import torch
from torch import nn
import torch.nn.functional as F
from torch.nn.parameter import Parameter
import math
from torch.nn.modules.utils import _pair


class customConv(nn.Module):
    def __init__(self, n_channels, out_channels, kernel_size, dilation=1, padding=0, stride=1, bias=True):
        super(customConv, self).__init__()
        self.kernel_size = _pair(kernel_size)
        self.out_channels = out_channels
        self.dilation = _pair(dilation)
        self.padding = _pair(padding)
        self.stride = _pair(stride)
        self.n_channels = n_channels
        self.weight = Parameter(torch.Tensor(self.out_channels, self.n_channels, self.kernel_size[0], self.kernel_size[1]))
        if bias:
            self.bias = Parameter(torch.Tensor(out_channels))
        else:
            self.register_parameter('bias', None)
        self.reset_parameters()

    def reset_parameters(self):
        n = self.n_channels
        for k in self.kernel_size:
            n *= k
        stdv = 1. / math.sqrt(n)
        self.weight.data.uniform_(-stdv, stdv)
        if self.bias is not None:
            self.bias.data.uniform_(-stdv, stdv)

    def forward(self, input_):
        hout = ((input_.shape[2] + 2 * self.padding[0] - self.dilation[0] * (self.kernel_size[0]-1)-1)//self.stride[0])+1
        wout = ((input_.shape[3] + 2 * self.padding[1] - self.dilation[1] * (self.kernel_size[1]-1)-1)//self.stride[1])+1
        inputUnfolded = F.unfold(input_, kernel_size=self.kernel_size, padding=self.padding, dilation=self.dilation, stride=self.stride)
        # check against None: truth-testing a multi-element Parameter would raise an error
        if self.bias is not None:
            convolvedOutput = (inputUnfolded.transpose(1, 2).matmul(
                self.weight.view(self.weight.size(0), -1).t()).transpose(1, 2)) + self.bias.view(-1, 1)
        else:
            convolvedOutput = (inputUnfolded.transpose(1, 2).matmul(self.weight.view(self.weight.size(0), -1).t()).transpose(1, 2))
        convolutionReconstruction = convolvedOutput.view(input_.shape[0], self.out_channels, hout, wout)
        return convolutionReconstruction
But when I try comparing it with the pytorch implementation, I do not get the exact value. The code to check for difference is provided below
import torch
from torch import nn
from customConvolve import customConv
torch.manual_seed(1)
input = torch.randn (10,3,64,64)
conv1 = nn.Conv2d(input.shape[1],5, kernel_size=3, dilation=1, padding=1, stride=1 ,bias = False)
conv1_output = conv1(input)
conv2 = customConv(n_channels=input.shape[1], out_channels=5, kernel_size=3, dilation=1, stride =1, padding = 1, bias = False)
conv2_output = conv2(input)
print(torch.equal(conv1.weight.data, conv2.weight.data))
print(torch.equal(conv1_output, conv2_output))
I would like to know why the variation exists and how to solve this?
Thank you. |
st102880 | To me, your implementation of the custom conv seems to be correct, except for two things in the testing code:
In the testing code, the two conv layers are not sharing the same weights. You can assign the weights of one conv layer to the other as follows:
conv1 = nn.Conv2d(input.shape[1], 1, kernel_size=3, dilation=1, padding=1, stride=1 ,bias = False)
conv2 = customConv(n_channels=input.shape[1], out_channels=1, kernel_size=3, dilation=1, stride =1, padding = 1, bias = False)
conv1.weight = conv2.weight
conv1.bias = conv2.bias
Once you assign the same weights, the conv outputs are very close (they probably differ only due to numerical precision). You can check using the L2 error:
print(torch.pow(conv1_output - conv2_output, 2).sum())
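For example, after sharing the weights you can also compare the two outputs directly (just a sketch; torch.allclose needs a reasonably recent PyTorch version, otherwise the squared-error check above is enough):
import torch
from torch import nn
from customConvolve import customConv   # the module defined in your post

conv1 = nn.Conv2d(3, 5, kernel_size=3, padding=1, bias=False)
conv2 = customConv(n_channels=3, out_channels=5, kernel_size=3, padding=1, bias=False)
conv1.weight = conv2.weight                    # share the exact same weight Parameter

x = torch.randn(10, 3, 64, 64)
out1, out2 = conv1(x), conv2(x)
print(torch.allclose(out1, out2, atol=1e-6))   # True, up to float32 precision
print((out1 - out2).abs().max())               # tiny residual, e.g. on the order of 1e-7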
Maybe, take a simple 5 x 5 input and verify the output by printing. |
st102881 | Yes, you are correct. I forgot to assign the same weights. The error is very low now. The variation is because of numerical precision.
Thank you so much. |
st102882 | Hi,
I have implemented a custom pytorch Dataset.
The dataset is initialized with a folder. Each file in the folder contains a long signal that I do some signal processing on to split it into M segments.
Right now for each of the files, the __getitem__() method generates an NxM matrix where N is a known number of features, and M is the number of extracted segments from that file. I have no way of knowing M in advance, i.e. not until I analyze the entire signal and it’s different per file.
The __len__() method currently returns the number of files in the folder the Dataset was initialized with.
What I actually want is to work with these segments as individual samples for my model, not the entire signal.
In other words, I want batch sizes of say 128 segments (so a batch would be of shape 128xN).
Ideally, I would give my dataset to some customized DataLoader and it would create these batches on the fly by loading one file (with M>128 segments) and taking random batches of 128 segments. When there aren’t enough segments left in the loaded file to fill a batch, a new file should be loaded (in random order) and so on.
I tried looking into the Sampler and BatchSampler classes and also at the custom collate_fn that can be provided to a DataLoader, but I haven’t found a way to achieve this… Everything seems to expect the number of samples to be known in advance.
So, is there some trick I can use?
Or do I need to resort to simply saving each segment to a separate file and loading them using the standard 1file=1sample based approach?
Thanks. |
st102883 | I don’t know a valid approach without knowing the number of segments in each file.
Don’t you have any way to calculate the number of segments beforehand? |
st102884 | Another approach would be to set an arbitrary high number as length and if you encounter the index to be to high you can choose of two options:
You can modify the index to be in your range (e.g. By using the % operator. This would result in a fixed epoch length but probably iterating much more often over some parts of the dataset (should not matter if you shuffle anyway)
You can raise the StopIteration yourself which should be caught inside the Dataloader/automatically inside the loop. This is what I did in early releases of pytorch to write own imagefolder-like datasets (if it is not caught you need to catch it yourself)
Note that for the second approach might cause loggers or other modules which are build upon a fixed number of batches to raise errors. |
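A minimal sketch of the first option (the names are made up, adapt it to whatever you can actually index):
from torch.utils.data import Dataset

class WrappedDataset(Dataset):
    def __init__(self, segments, virtual_len=10**6):
        self.segments = segments        # whatever is really indexable
        self.virtual_len = virtual_len  # arbitrary, fixed epoch length

    def __len__(self):
        return self.virtual_len

    def __getitem__(self, idx):
        return self.segments[idx % len(self.segments)]  # wrap any index into the real range
With shuffle=True in the DataLoader this simply revisits the real samples more often. |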
st102885 | Thanks. Interesting ideas.
Let’s say I can look at the folder, read some metadata about each file and calculate a rough upper bound K of the number of segments in all files combined.
The problem is that, given an index in the range [0, K-1], it’s not possible to know which file to look at to find that index. I could again do some estimation to pick a file based on the index in a deterministic way, load it, process it, and use the % operator to get a segment inside it even if it actually doesn’t have enough segments.
However, this would cause lots of re-loading of the files and re-running of the signal processing algorithms that split them… Each file would be processed many times, which is a non-negligible overhead. I could probably fix that with some caching, but even that might still be slow since I have a lot of data. The reason I was looking for some DataLoader trick here was to do all the loading and processing lazily, on demand…
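For the caching variant I have something like this in mind (just a sketch; split_signal stands for my own processing code and est_segments_per_file is only a rough estimate):
from functools import lru_cache
from torch.utils.data import Dataset

class SegmentDataset(Dataset):
    def __init__(self, files, est_segments_per_file):
        self.files = files
        self.est = est_segments_per_file

    def __len__(self):
        return len(self.files) * self.est  # rough upper bound K

    @lru_cache(maxsize=4)  # keep a few processed files around to avoid re-splitting
    def _load_and_split(self, file_idx):
        return split_signal(self.files[file_idx])  # expensive load + segmentation

    def __getitem__(self, idx):
        file_idx = (idx // self.est) % len(self.files)
        segments = self._load_and_split(file_idx)
        return segments[idx % len(segments)]  # wrap if the estimate was too optimistic
(Each DataLoader worker would keep its own cache, but that is probably acceptable.)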
Thanks again. |
st102886 | How exactly is your dataset structured?
I have uploaded a small snippet here 55 which is not directly related to your problem but should show how to use StopIteration inside an iterable (in this case the dataloaders are simply reset if running out of bounds). |
st102887 | While this approach can be used to create a Dataset with an unknown size (which is good!), it’s still not “lazy enough” just like that because when combined with Samplers and DataLoader I need to make sure to only generate indices of segments within the current file the DataSet has loaded and split.
It now seems possible to perhaps do this by coupling the Dataset and the Sampler, but I think for simplicity i’ll just pre-process all the files and split them in advance.
Thanks for your help |
st102888 | If you could provide more details towards your dataset structure (or maybe some example files) we may be able to help you further |
st102889 | Hi!
TensorFlow provides an implementation of the attention mechanism in tensorflow.contrib.seq2seq. I’m just wondering if PyTorch also provides an attention implementation to start from?
If not, I wish it could be included as nn.Attention.
I want to use an attention mechanism in an image classifier (roughly along the lines of the sketch below).
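For context, here is my own rough sketch of spatial soft attention over a CNN feature map (this is not from any library, just what I have in mind):
import torch.nn as nn
import torch.nn.functional as F

class SoftAttention(nn.Module):
    def __init__(self, in_channels):
        super(SoftAttention, self).__init__()
        self.score = nn.Conv2d(in_channels, 1, kernel_size=1)  # one attention score per location

    def forward(self, features):                                  # features: (B, C, H, W)
        b, c, h, w = features.size()
        scores = self.score(features).view(b, -1)                 # (B, H*W)
        alpha = F.softmax(scores, dim=1).view(b, 1, h, w)         # weights sum to 1 per image
        context = (alpha * features).view(b, c, -1).sum(dim=2)    # (B, C) weighted feature vector
        return context, alpha
The context vector would then feed a small linear classifier.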
Thank you |
st102890 | I am trying to implement a customized optimizer that performs a step-length search.
It requires computing $\phi(a) = net(w + a * p)$ and $d\phi/da$,
where
$a$ is the step-length,
$w$ is the weight vector,
$p$ is the step-direction vector,
net is the neural network function that outputs a scalar loss value, $L$.
I can compute $d\phi/da$, which is equivalent to $dL/da$, for the simple net below.
The question is:
For a net constructed using nn.Module:
how do we compute the gradient of the loss w.r.t. the step-length?
how do we update the net parameters so that we can compute the gradient of the loss w.r.t. the step-length?
#!/usr/bin/env python3
import torch
torch.manual_seed(12345)
N, D_in, H, D_out = 10, 2, 5, 1
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)
w1 = torch.randn(D_in, H, requires_grad=True)
w2 = torch.randn(H, D_out, requires_grad=True)
p1 = torch.randn(D_in, H, requires_grad=True)
p2 = torch.randn(H, D_out, requires_grad=True)
a = torch.randn(1, requires_grad=True)
w1 = torch.randn(D_in, H, requires_grad=True)
w2 = torch.randn(H, D_out, requires_grad=True)
w1 = w1 + a * p1
w2 = w2 + a * p2
h = x.mm(w1)
h_relu = h.clamp(min=0)
y_pred = h_relu.mm(w2)
loss = (y_pred - y).pow(2).mean()
# Compute ga: grad of loss wrt a
ga, = torch.autograd.grad(loss, a, create_graph=True)
print(ga)
# Net using nn.module #####
class Net(torch.nn.Module):
    def __init__(self, D_in, H, D_out):
        super(Net, self).__init__()
        self.hidden = torch.nn.Linear(D_in, H)
        self.output = torch.nn.Linear(H, D_out)

    def forward(self, x):
        y = torch.nn.functional.relu(self.hidden(x))
        y = self.output(y)
        return y
net = Net(2, 5, 1)
loss_fn = torch.nn.MSELoss()
p1 = torch.transpose(p1, 0, 1)
p2 = torch.transpose(p2, 0, 1)
for name, p in net.named_parameters():
    # Got RuntimeError: a leaf Variable that requires grad has been used in an in-place operation.
    # if name == 'hidden.weight':
    #     p.add_(a * p1)
    # elif name == 'output.weight':
    #     p.add_(a * p2)
    # else:
    #     pass

    # RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior
    # if name == 'hidden.weight':
    #     p = p + (a * p1)
    # elif name == 'output.weight':
    #     p = p + (a * p2)
    # else:
    #     pass

    # TODO: how to update the weight so that we can compute the gradient of loss wrt step-length a?
    pass
y_pred = net(x)
loss = loss_fn(y_pred, y)
print(loss.item())
# TODO: Compute ga: grad of loss wrt a
# ga, = torch.autograd.grad(loss, a, create_graph=True)
# print(ga) |
st102891 | Is your first approach working?
You can find a similar example here 7. Could you try to use the parameter updates shown in the code? |
st102892 | @ptrblck: thanks for the link.
Yes, I was following that tutorial and ended up with the following, which so far works as expected.
I would be grateful if you could comment on it, thank you
class BareNet(torch.nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim):
        super(BareNet, self).__init__()

        def _init_linear_layer(w, b):
            # Following:
            # https://github.com/pytorch/pytorch/blob/769cb5a6405b39a0678e6bc4f2d6fea62e0d3f12/torch/nn/modules/linear.py#L48
            stdv = 1. / math.sqrt(w.size(1))
            w.data.uniform_(-stdv, stdv)
            b.data.uniform_(-stdv, stdv)

        # hidden_w / output_w use the (out_features, in_features) layout of nn.Linear;
        # the direction tensors *_pw are stored already transposed, matching forward() below
        self.hidden_w = torch.nn.Parameter(torch.zeros(hidden_dim, input_dim))
        self.hidden_b = torch.nn.Parameter(torch.zeros(hidden_dim))
        self.hidden_pw = torch.nn.Parameter(torch.zeros(input_dim, hidden_dim))
        self.hidden_pb = torch.nn.Parameter(torch.zeros(hidden_dim))

        # note: (output_dim, hidden_dim) / (hidden_dim, output_dim) so forward() yields (N, output_dim)
        self.output_w = torch.nn.Parameter(torch.zeros(output_dim, hidden_dim))
        self.output_b = torch.nn.Parameter(torch.zeros(output_dim))
        self.output_pw = torch.nn.Parameter(torch.zeros(hidden_dim, output_dim))
        self.output_pb = torch.nn.Parameter(torch.zeros(output_dim))

        self.alpha = torch.nn.Parameter(torch.zeros(1))

        self.w_params = torch.nn.ParameterList([self.hidden_w, self.output_w])
        self.b_params = torch.nn.ParameterList([self.hidden_b, self.output_b])
        self.pw_params = torch.nn.ParameterList([self.hidden_pw, self.output_pw])
        self.pb_params = torch.nn.ParameterList([self.hidden_pb, self.output_pb])
        self.a_params = torch.nn.ParameterList([self.alpha])
        self.wb_params = [w for w in self.w_params] + [b for b in self.b_params]

        _init_linear_layer(self.hidden_w, self.hidden_b)
        _init_linear_layer(self.output_w, self.output_b)

    def forward(self, x):
        hidden_w = self.hidden_w.transpose(0, 1) + (self.alpha * self.hidden_pw)
        hidden_b = self.hidden_b + (self.alpha * self.hidden_pb)
        output_w = self.output_w.transpose(0, 1) + (self.alpha * self.output_pw)
        output_b = self.output_b + (self.alpha * self.output_pb)
        y = torch.nn.functional.relu(x.mm(hidden_w) + hidden_b)
        y = y.mm(output_w) + output_b
        return y
And in my_optim.py:
...
def _phi(alpha):
    # Update a_, pw_, pb_ params
    for p in a_params: p.data.fill_(alpha)
    for i, p in enumerate(pw_params): p.data.copy_(w_step_dirs[i].transpose(0, 1))
    for i, p in enumerate(pb_params): p.data.copy_(b_step_dirs[i])

    # Get loss and its grad
    loss = closure(do_backward=False)
    grad_alpha, = torch.autograd.grad(loss, a_params, create_graph=False)

    # Zero a_, pw_, pb_ params
    for p in a_params: p.data.fill_(0.0)
    for p in pw_params: p.data.copy_(torch.zeros_like(p))
    for p in pb_params: p.data.copy_(torch.zeros_like(p))

    return (loss.item(), grad_alpha.item())
... |
st102893 | Hi guys, I’m trying to debug my code, which contains assignments such as:
weights = list(model.parameters())
sigma_sum = [Variable(torch.zeros(weights[i].size())).cuda() for i in range(len(weights))]
The code executes normally when I run the program, but in debug mode it gets stuck for a long time and then shows “Unable to display frame variables”. See the screenshot:
[screenshot: image.png, 1101×724]
I ran my code in PyCharm on Ubuntu 14.04, PyTorch 0.2.0, with two 1080 Ti GPUs.
What’s the problem with it? Thanks in advance. |
st102894 | I have a large dataset (10GB); each data sample is about 100MB. I want to know whether it is wise to preload all of the data.
I tried loading everything up front and storing it as a property of the Dataset, so that __getitem__ only slices a piece of it and does the data augmentation.
Since the data augmentation is time-consuming, I used multiple workers in the DataLoader, but I found that it becomes very slow.
I wonder whether those sub-processes share the Dataset memory, or whether the dataset is copied to each sub-process?
Thank you |
st102895 | Storing 10GB in memory is never a good idea. I recommend you write the samples to csv-files or something similar and then use a read function that loads each sample from its csv-file in __getitem__(self, i).
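Something along these lines (just a sketch; the one-sample-per-csv layout and the column format are assumptions):
import os
import numpy as np
import torch
from torch.utils.data import Dataset

class LazyCSVDataset(Dataset):
    def __init__(self, root):
        self.paths = sorted(os.path.join(root, f)
                            for f in os.listdir(root) if f.endswith('.csv'))

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        sample = np.loadtxt(self.paths[idx], delimiter=',')  # read only when requested
        return torch.from_numpy(sample).float()
A file is only read when the DataLoader actually asks for that index, so memory usage stays flat. |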
st102896 | I would also load the data lazily as @string111 suggested. Especially in the beginning when you are experimenting with different hyperparameters it can take a lot of time to load the whole dataset just to realize after a few iterations that your model won’t learn anything. |
st102897 | Dear community,
I’m a noob with PyTorch and I tried to fit the exponent of an equation with a custom activation function. However, I failed to define it properly. Why?
import math
import torch
import torch.nn as nn  # needed for nn.Module below
from torch.autograd import Variable
from torch import optim
from torch.nn.parameter import Parameter


class powerActivation(nn.Module):
    def __init__(self):
        super(powerActivation, self).__init__()
        self.weight = Parameter(torch.Tensor(1, 1))
        self.reset_parameters()

    def reset_parameters(self):
        self.weight.data.uniform_(1, 2)

    def forward(self, x):
        return x ** self.weight


def build_model():
    model = torch.nn.Sequential()
    model.add_module("linear", powerActivation())
    return model


def train(model, loss, optimizer, x, y):
    x = Variable(x, requires_grad=False)
    y = Variable(y, requires_grad=False)

    # Reset gradient
    optimizer.zero_grad()

    # Forward
    fx = model.forward(x.view(len(x), 1))
    output = loss.forward(fx, y)

    # Backward
    output.backward()

    # Update parameters
    optimizer.step()
    return output.data[0]


def main():
    torch.manual_seed(42)
    X = torch.linspace(2, 10, 101)
    Y = X ** 2
    model = build_model()
    loss = torch.nn.MSELoss(size_average=True)
    optimizer = optim.SGD(model.parameters(), lr=0.1)
    batch_size = 10

    for i in range(20):
        cost = 0.
        num_batches = len(X) // batch_size
        for k in range(num_batches):
            start, end = k * batch_size, (k + 1) * batch_size
            cost += train(model, loss, optimizer, X[start:end], Y[start:end])
        print("Epoch = %d, cost = %s" % (i + 1, cost / num_batches))

    w = next(model.parameters()).data  # model has only one parameter
    print("w = %.2f" % w.numpy())      # will be approximately 2
    print(model(Variable(X)).data)
    print(list(zip(X, Y)))


main()
the result is:
Epoch = 1, cost = 2439.914280539751
Epoch = 2, cost = 2449.544422531128
Epoch = 3, cost = 2449.544422531128
Epoch = 4, cost = 2449.544422531128
Epoch = 5, cost = 2449.544422531128
Epoch = 6, cost = 2449.544422531128
Epoch = 7, cost = 2449.544422531128
Epoch = 8, cost = 2449.544422531128
Epoch = 9, cost = 2449.544422531128
Epoch = 10, cost = 2449.544422531128
Epoch = 11, cost = 2449.544422531128
Epoch = 12, cost = 2449.544422531128
Epoch = 13, cost = 2449.544422531128
Epoch = 14, cost = 2449.544422531128
Epoch = 15, cost = 2449.544422531128
Epoch = 16, cost = 2449.544422531128
Epoch = 17, cost = 2449.544422531128
Epoch = 18, cost = 2449.544422531128
Epoch = 19, cost = 2449.544422531128
Epoch = 20, cost = 2449.544422531128
w = -21.63
Thank you |
st102898 | Your code works if you lower the learning rate to approx. 1e-6.
I debugged it by looking at the gradient at the beginning of training, and it was way too big for your learning rate.
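This is roughly the kind of check I mean (a sketch reusing the definitions from your post):
model = build_model()
loss = torch.nn.MSELoss(size_average=True)
X = torch.linspace(2, 10, 101)
Y = X ** 2

fx = model(Variable(X.view(-1, 1)))
output = loss(fx, Variable(Y.view(-1, 1)))
output.backward()
print(next(model.parameters()).grad)  # inspect the gradient scale before choosing a learning rate

optimizer = optim.SGD(model.parameters(), lr=1e-6)  # small enough for a gradient of that magnitude
With the smaller step the code from your post then trains as expected. |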
st102899 | Hello all, I have a loss function as
loss = loss1 + 0.1 * loss2
where loss1 and loss2 are both CrossEntropyLoss. loss1 takes the network outputs and the ground-truth labels (a supervised loss), while loss2 takes the outputs and pseudo-labels obtained by thresholding those outputs (an unsupervised loss). The two terms are balanced by the weight 0.1. This is my implementation:
optimizer.zero_grad()
###############
#Loss1: given images and labels
###############
criterion = nn.CrossEntropyLoss().to(device)
outputs = model(images)
loss1 = criterion(outputs, labels)
loss1.backward()
###############
#Loss2: given images
###############
outputs = model(images)
labels = outputs>0.5
_, labels = torch.max(outputs, 1)*labels
loss2 = 0.1*criterion(outputs, labels)
loss2.backward()
optimizer.step()
Could you look at my implementation and give me comments on two things:
Is the implementation above correct for loss = loss1 + 0.1 * loss2?
Should optimizer.step() and optimizer.zero_grad() be applied once at the end, or around each loss's backward() call?
For the second point, I mean:
optimizer.zero_grad()
###############
#Loss1: given images and labels
###############
criterion = nn.CrossEntropyLoss().to(device)
outputs = model(images)
loss1 = criterion(outputs, labels)
###############
#Loss2: given images
###############
outputs = model(images)
labels = outputs>0.5
_, labels = torch.max(outputs, 1)*labels
loss2 = 0.1*criterion(outputs, labels)
loss = loss1+0.1*loss2
loss.backward()
optimizer.step()
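Here is a small toy check I am using while thinking about the second point (just a sketch with made-up numbers):
import torch

w = torch.nn.Parameter(torch.tensor([2.0]))
l1, l2 = (w * 3).sum(), (w * 5).sum()
(l1 + l2).backward()
print(w.grad)  # tensor([8.])

w.grad.zero_()
l1, l2 = (w * 3).sum(), (w * 5).sum()
l1.backward()
l2.backward()
print(w.grad)  # tensor([8.]) again, since gradients accumulate between zero_grad() calls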
Thanks in advance! |