st115068
|
gradient of norm at 0 is inf.
We fixed this instability 2 days ago in the master branch, so that for norm, the subgradient is used instead.
github.com/pytorch/pytorch: Norm subgradient at 0, by albanD on 12:19PM - 18 Sep 17 (3 commits, 2 files changed, 28 additions, 26 deletions).
It’ll be part of the next release, or if you are interested you can install the master branch from source via instructions: https://github.com/pytorch/pytorch#from-source
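A minimal illustration of the instability described above (a sketch, not the patch itself): the gradient of the L2 norm at x = 0 is x / ||x||, i.e. 0/0, which shows up as nan before the fix.
import torch
from torch.autograd import Variable

x = Variable(torch.zeros(3), requires_grad=True)
y = x.norm()
y.backward()
print(x.grad)  # nan on older builds; zeros (a valid subgradient) once the fix is in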
|
st115069
|
I installed PyTorch with Anaconda using the conda command on Python 2.7.
I am getting this error when trying to run print torch.rand(3,3).cuda():
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/tensor/anaconda3/envs/python2/lib/python2.7/site-packages/torch/_utils.py", line 66, in cuda
    return new_type(self.size()).copy(self, async)
  File "/home/tensor/anaconda3/envs/python2/lib/python2.7/site-packages/torch/cuda/__init__.py", line 266, in _lazy_new
    _lazy_init()
  File "/home/tensor/anaconda3/envs/python2/lib/python2.7/site-packages/torch/cuda/__init__.py", line 84, in _lazy_init
    _check_driver()
  File "/home/tensor/anaconda3/envs/python2/lib/python2.7/site-packages/torch/cuda/__init__.py", line 58, in _check_driver
    http://www.nvidia.com/Download/index.aspx""")
AssertionError:
Found no NVIDIA driver on your system. Please check that you
have an NVIDIA GPU and installed a driver from
http://www.nvidia.com/Download/index.aspx
|
st115070
|
hmm, that’s definitely weird.
What’s the output of the nvidia-smi command on your machine?
|
st115071
|
command not found is the error; even nvidia-settings fails:
nvidia-settings
ERROR: Error querying enabled displays on GPU 0 (Missing Extension).
|
st115072
|
nvidia-settings could not find the registry key file. This file should
have been installed along with this driver at
/usr/share/nvidia/nvidia-application-profiles-key-documentation. The
application profiles will continue to work, but values cannot be
prepopulated or validated, and will not be listed in the help text.
Please see the README for possible values and descriptions.
|
st115073
|
Maybe you have an incomplete driver installation. Try reinstalling the driver. nvidia-smi should be present usually.
|
st115074
|
What is the rationale for having torch.Tensor.scatter_ but not torch.scatter?
Most functions in Tensor are also present in torch. Why not scatter?
|
st115075
|
I guess we could provide torch.scatter(destination, indices, source); I can't remember the rationale for not implementing it the first time round.
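A rough sketch of what such a functional wrapper could look like today, built on the existing in-place Tensor.scatter_ method. The name and the extra dim argument below are assumptions, not an existing torch API.
import torch

def scatter(destination, dim, index, source):
    # out-of-place: clone first, then use the in-place method on the copy
    return destination.clone().scatter_(dim, index, source)

dest = torch.zeros(2, 4)
src = torch.rand(2, 4)
index = torch.LongTensor([[0, 1, 2, 3], [3, 2, 1, 0]])
print(scatter(dest, 1, index, src))  # dest itself is left untouched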
|
st115076
|
Hello everyone, this is the first time I'm dealing with RNNs and LSTMs, but I have to do a project with 500 voice recordings (each one a spoken sum of two numbers, for example "2 + 5"), and I have to create a model that learns from these voice recordings.
Is there someone who can help me?
THIS IS THE CODE:
# Define options
class Options():
    pass
opt = Options()
# Dataset options
opt.frac_train = 0.8
# Training options
opt.learning_rate = 0.001
opt.epochs = 1
# Model options
opt.hidden = 512
# Backend options
opt.cuda = False
# optimization function
opt.optim_function = "SGD"
# Momentum
opt.momentum = 0
# incremental lr
opt.incremental = False
# Imports
import math
import glob
import string
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
#all_char="0123456789";
#dataa=[]
#datatrainingprova = matrix[:8]
#datatestprova = matrix[8:10]
# Shuffle data
shuffled_idx = torch.randperm(len(matrix))
# Num samples
num_samples = len(matrix)
num_train = math.ceil(num_samples*opt.frac_train)
num_test = num_samples - num_train
# Separate splits
train_idx = shuffled_idx[0:num_train]
test_idx = shuffled_idx[num_train:]
#len(train_idx) #= 400
#len(test_idx) #= 100
# --------------------------------
class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(RNN, self).__init__()
        # Model layout
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        self.lstm1 = nn.LSTM(input_size, hidden_size)
        self.output = nn.Linear(hidden_size, output_size)
    def forward(self, input):
        output = Variable(torch.zeros(1, self.output_size));
        c = torch.zeros(1, self.hidden_size);
        h = torch.zeros(1, self.hidden_size);
        if opt.cuda:
            c = c.cuda()
            h = h.cuda()
        output = Variable(c);
        c_t = Variable(c);
        h_t = Variable(h);
        #h_t2 = Variable(torch.zeros(1, self.hidden_size) );
        #c_t2 = Variable(torch.zeros(1, self.hidden_size) );
        #for i in range(0, input.size(0)):
        h_t, c_t = self.lstm1(input, (h_t, c_t))
        #h_t2, c_t2 = self.lstm2(c_t, (h_t2, c_t2))
        output = self.output(c_t)
        return output[-1]
    #def init_hidden(self):
    #    return torch.zeros(1, self.hidden_size)
rnn = RNN(1, opt.hidden, 20)
# ----------
#Setup loss criterion
criterion = nn.MSELoss(size_average=False)
if opt.cuda: rnn.cuda()
optimizer=torch.optim.SGD(rnn.parameters(), lr = opt.learning_rate)
# ------------
num_train_t = num_train
num_test_t = num_test
# ------------
rnn = nn.LSTM(1, 20, 1, batch_first=True)
input = matrix[train_idx[i]]
input_size = input.size(0)
input = input.contiguous().view(1, input_size, 1)
input = Variable(input)
c = torch.zeros(1, 1, 20);
h = torch.zeros(1, 1, 20);
c_t = Variable(c);
h_t = Variable(h);
h_t, c_t = rnn(input, (h_t, c_t))
# -----------------------------
# Start training
for epoch in range(1, opt.epochs+1):
    # Training mode
    rnn.train()
    # Shuffle training data indexes
    train_idx = train_idx.index(torch.randperm(num_train))
    # Initialize training loss and accuracy
    train_loss = 0
    train_accuracy = 0
    # Process all training samples
    for i in range(0, num_train):
        # Prepare sample
        input = matrix[train_idx[i]]
        #input = input.contiguous().view(-1, input.size(-1))
        target = torch.LongTensor([labels[train_idx[i]]])
        # Initialize hidden state
        #hidden = rnn.init_hidden()
        # Check cuda
        if opt.cuda:
            input = input.cuda()
            hidden = hidden.cuda()
            target = target.cuda()
        # Wrap for autograd
        input = Variable(input)
        #hidden = Variable(hidden)
        target = Variable(target)
        #for j in range(0, input.size(0)):
        # Forward
        output, hidden = rnn(input)  # <------ here I am mistaken
        # Compute loss
        loss = criterion(output, target)
        train_loss += loss.data[0]
        # Compute accuracy
        _, pred = output.data.max(1)
        correct = pred.eq(target.data).sum()
        train_accuracy += correct
        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # Evaluation mode
    rnn.eval()
Error generated: dimension out of range (expected to be in range of [-1, 0], but got 1)
|
st115077
|
import pandas as pd
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.init as init
from torch.autograd import Variable
import pdb
import numpy as np
embedding_dim = 10
margin = 1
# Read data
trainFilename = 'base'
testFilename = 'test'
train = pd.read_csv(trainFilename, header=None, sep='\t', names=['1','2','3'], usecols=[0,1,2])
test = pd.read_csv(testFilename, header=None, sep='\t', names=['1','2','3'], usecols=[0,1,2])
numUsers = len(set(train.uid.unique()).union(set(test.uid.unique())))
numItems = len(set(train.iid.unique()).union(set(test.iid.unique())))
class modeler(nn.Module):
    def __init__(self, numUsers, numItems, embedding_dim):
        super(modeler, self).__init__()
        self.userEmbed = nn.Embedding(numUsers, embedding_dim)
        self.itemEmbed = nn.Embedding(numItems, embedding_dim)
        self.rel = nn.Parameter(torch.randn(1, embedding_dim), requires_grad=True)
    def forward(self, head, tail):
        userEmbeds = self.userEmbed(head)
        itemEmbeds = self.itemEmbed(tail)
        rel = self.rel
        out = (userEmbeds + rel - itemEmbeds).sum(1)
        return out
losses = []
model = modeler(numUsers, numItems, embedding_dim)
optimizer = optim.SGD(model.parameters(), lr = 0.01)
Given the above code, I want to also train the variable ‘rel’.
How can I do this?
|
st115078
|
You can print rel.grad to see what happens after loss.backward(); if the grad is 0, maybe it is outside of the computation graph.
|
st115079
|
Yes, exactly: the grad of rel turns out to be 0.
I think the problem is that rel is not registered as a leaf node.
P.S. Sorry, I figured it out; I had written the equation incorrectly.
|
st115080
|
Hi, I am trying to implement PCA using SVD in GPU. Following is my code. It doesn’t work when the number of records is greater than or equal to 100K. The number of features is 300 (100K by 300 matrix).
import pandas as pd
from sklearn.preprocessing import StandardScaler
import torch
from datetime import datetime
def load_data(filepath, header, sep):
    df = pd.read_csv(
        filepath_or_buffer=filepath,
        header=header,
        sep=sep)
    return df
def split_data(df, features_len):
    first_col = df[df.columns[0]]
    df[df.columns[0]] = first_col.apply(lambda x: x.split(',')[1])
    features = df.ix[:, 0:(features_len-2)]
    return features
def get_minimum_features(s, retainedVariance):
    var_percentage = (torch.cumsum(s, dim=0)/torch.sum(s))*100
    _, index = torch.max(torch.gt(var_percentage, retainedVariance), 0)
    return index
folder_path = '../pca/dataset/'
data_file = 'mat_200K_300F'
data_frame = load_data(folder_path + data_file, None, ' ')
features = split_data(data_frame, len(data_frame.columns))
normalized_features = StandardScaler().fit_transform(features)
U, s, V = torch.svd(torch.Tensor(normalized_features.T).cuda(), some=True)
k = get_minimum_features(s, 95)
U_reduced = U[:, : k[0]]
Z = torch.mm(torch.Tensor(normalized_features).cuda(), U_reduced.cuda())
The stack trace for when segmentation fault occurs is shown below.
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fff20c19b00 (LWP 21760)]
0x00007ffee15a2db0 in ?? ()
from /sabra/anaconda3/lib/python3.6/site-packages/numexpr/../../../libmkl_avx2.so
Missing separate debuginfos, use: debuginfo-install glibc-2.17-196.el7.x86_64
(gdb) where
#0 0x00007ffee15a2db0 in ?? ()
from /sabra/anaconda3/lib/python3.6/site-packages/numexpr/../../../libmkl_avx2.so
#1 0x0000000000000018 in ?? ()
#2 0x00007fff20c18870 in ?? ()
#3 0x0000000000000018 in ?? ()
#4 0x0000000000000000 in ?? ()
Based on the simple implementation below (2M by 300 matrix) that runs on GPU I think my code fails because a large amount of data is being transferred to and from CPU and GPU.
import torch
x = torch.zeros(2000000, 300).cuda();
u, s, v = torch.svd(x, some=True);
print(u);
Any help on understanding and fixing this issue is highly appreciated.
|
st115081
|
I just tried the small sample:
import torch
x = torch.zeros(2000000, 300).cuda();
u, s, v = torch.svd(x, some=True);
print(u);
It did not crash for me.
What GPU are you using? I wonder if it is because of some GPU memory situation.
|
st115082
|
Sorry if I was unclear about the question. The small sample is straightforward: it initializes the matrix on the GPU. But my code (the first code section) crashes since it initializes on the CPU and then needs to be transferred to the GPU.
Is there a better way to read a file with a large data matrix, load it onto the GPU, and compute the SVD on the GPU?
I have a Tesla K80 GPU.
|
st115083
|
I had made a mistake in the comparison. My data matrix loaded from the file has 200K records and 300 features but I use the transpose when calculating the SVD. This was the mistake.
U, s, V = torch.svd(torch.Tensor(normalized_features.T).cuda(), some=True)
In PyTorch, even the following simple code fails with a segmentation fault; there seems to be a limit on the number of columns in the matrix that SVD can handle.
import torch
x = torch.zeros(300, 100001).cuda();
u, s, v = torch.svd(x, some=True);
print(u);
So I made the following changes to my code and it works.
# perform singular value decomposition
U, s, V = torch.svd(torch.Tensor(normalized_features).cuda(), some=True)
# compute the value k: the minimum number of features that retains the given variance
k = get_minimum_features(s, 95)
# compute the reduced dimensional features by projection
V_reduced = torch.t(V)[:, : k[0]].cuda()
Z = torch.mm(torch.Tensor(normalized_features).cuda(), V_reduced)
|
st115084
|
Thanks a lot for the small failure case. I’ll look at the segfault and get this fixed.
I’ve opened an issue here: https://github.com/pytorch/pytorch/issues/2790
|
st115085
|
I am having trouble with the dataloader.
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
dataset = datasets.ImageFolder(root='images', transform=transforms.Compose([normalize]))
test_loader = torch.utils.data.DataLoader(dataset, batch_size=4, shuffle=True)
for i, (input, target) in enumerate(test_loader):
    pass
Error:
TypeError: zip argument #1 must support iteration
The “images” directory has a single subdirectory with a single image.
|
st115086
|
Maybe you could do something like this:
for i, itr in enumerate(test_loader):
    input, target = itr[0], itr[1]
    pass
|
st115087
|
It gives the same error. I think the problem is with test_loader not being iterable!
|
st115088
|
Oh, silly mistake. I forgot to include the ToTensor() transform.
This works:
dataset = datasets.ImageFolder(root='images', transform=transforms.Compose([transforms.ToTensor(), normalize]))
|
st115089
|
From the docs, we know that we only need to write __init__ and forward when extending torch.nn.
However, here a backward is added as well:
class ContentLoss(nn.Module):
    def __init__(self, target, weight):
        super(ContentLoss, self).__init__()
        # we 'detach' the target content from the tree used
        # to dynamically compute the gradient: this is a stated value,
        # not a variable. Otherwise the forward method of the criterion
        # will throw an error.
        self.target = target.detach() * weight
        self.weight = weight
        self.criterion = nn.MSELoss()
    def forward(self, input):
        self.loss = self.criterion(input * self.weight, self.target)
        self.output = input
        return self.output
    def backward(self, retain_variables=True):
        self.loss.backward(retain_variables=retain_variables)
        return self.loss
|
st115090
|
The reason for this is that we want to compute the gradient with respect to the computed loss self.loss, which is not the output of forward. So I have overridden the backward function to say that I just want to backpropagate through this value.
|
st115091
|
Hi alexis,
Thank you for your reply. The following is what I think about it.
Neural style gets its loss in a hidden layer. In the original (Lua) Torch implementation, take ContentLoss for example:
function ContentLoss:updateGradInput(input, gradOutput)
    if self.mode == 'loss' then
        if input:nElement() == self.target:nElement() then
            self.gradInput = self.crit:backward(input, self.target)
        end
        if self.normalize then
            self.gradInput:div(torch.norm(self.gradInput, 1) + 1e-8)
        end
        self.gradInput:mul(self.strength)
        -- This is where the layer loss has taken its effects.
        self.gradInput:add(gradOutput)
    else
        self.gradInput:resizeAs(gradOutput):copy(gradOutput)
    end
    return self.gradInput
end
We can see that self.gradInput:add(gradOutput) is where the “hidden layer gradient” is added to the main gradient flow.
However, in PyTorch we don’t have to add it ourselves. Because of PyTorch’s autograd mechanics, the gradient of a node is accumulated automatically. So for the back-propagation of the hidden gradient, we just need to override the backward function:
def backward(self, retain_variables=True):
    self.loss.backward(retain_variables=retain_variables)
    return self.loss
Then the node adds the gradient of self.loss and grad_output together and finally back-propagates it.
If I have misread anything, please tell me!
|
st115092
|
In fact, loss.backward(...) calls the backward function of the MSELoss, which already implements that back-propagation line. If I wanted to really create my own loss function, I would have had to implement the backward pass with such a line.
|
st115093
|
Hi all,
I’ve recently noticed that the PyTorch GRUCell is mathematically different from the TensorFlow one.
In PyTorch, the GRU cell is implemented like this:
r = sigmoid(W_{ir} x + b_{ir} + W_{hr} h + b_{hr})
z = sigmoid(W_{iz} x + b_{iz} + W_{hz} h + b_{hz})
n = tanh(W_{in} x + b_{in} + r * (W_{hn} h + b_{hn}))
h' = (1 - z) * n + z * h
In TensorFlow, the GRU cell is implemented like this:
r = sigmoid(W_{ir} x + b_{ir} + W_{hr} h + b_{hr})
z = sigmoid(W_{iz} x + b_{iz} + W_{hz} h + b_{hz})
n = tanh(W_{in} x + b_{in} + W_{hn} (r * h) + b_{hn}))
h' = (1 - z) * n + z * h
A subtle difference appears in the computation of n, where PyTorch first applies a linear layer to the memory state h and then multiplies by the gate r, whereas TF does these two in the reverse order (and merges the biases together, but this seems unimportant).
The original paper proposing the GRU, http://arxiv.org/abs/1406.1078, seems to match the TensorFlow version. The cuDNN implementation seems to match the PyTorch code.
In my case, it seems that the PyTorch version converges slower and to worse results. Has anybody measured differences between these two variants?
Does anybody know why this discrepancy appeared in the first place?
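A small numeric sketch of the difference described above (sizes and values are arbitrary); W_in x + b_in is collapsed into one precomputed vector for brevity.
import torch

hidden = 4
x_part = torch.randn(hidden)             # stands in for W_in x + b_in
W_hn = torch.randn(hidden, hidden)
b_hn = torch.randn(hidden)
h = torch.randn(hidden)
r = torch.sigmoid(torch.randn(hidden))   # a reset gate activation

# PyTorch/cuDNN order: gate applied after the hidden linear layer
n_pytorch = torch.tanh(x_part + r * (torch.mv(W_hn, h) + b_hn))
# TF / paper order: gate applied to h before the hidden linear layer
n_tf = torch.tanh(x_part + torch.mv(W_hn, r * h) + b_hn)
print((n_pytorch - n_tf).abs().max())    # generally nonzero: the variants differ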
|
st115094
|
aosokin:
In my case, it seems that the pytorch version converges slower and to worse results. Has anybody measured differences between these two variants?
For completeness: the difference appeared to come from other differences between the TF and PyTorch implementations. The difference coming directly from the different GRU equations was in fact very small, and its direction was not consistent.
|
st115095
|
If I have a variable x and I wrap it in a PyTorch structure, does it duplicate the data? As in:
x = torch.FloatTensor(x) #one copy?
xv = Variable( torch.FloatTensor(x)) #second copy?!
Do we now have 3 copies? One in numpy and two in PyTorch?
|
st115096
|
Hi,
Going from tensor to/from numpy array does not create any copy.
Wrapping tensor in a Variable does not create a copy.
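A quick check of this, assuming the torch.from_numpy path: the tensor shares memory with the numpy array, and wrapping it in a Variable reuses the same storage.
import numpy as np
import torch
from torch.autograd import Variable

x = np.ones(3, dtype=np.float32)
t = torch.from_numpy(x)        # no copy: shares memory with x
v = Variable(t)                # no copy either
v.data[0] = 5.0
print(x[0])                    # 5.0 -> the numpy array sees the change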
|
st115097
|
Given a triplet of a knowledge graph, (head, relation, tail), I want to implement the following:
embedding of head entity + embedding of relation = embedding of tail entity
How should I design my model?
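A minimal TransE-style sketch of one possible layout (names, sizes, and the use of a relation Embedding are my assumptions, not a prescribed design): score a triplet by how closely head + relation matches tail, and train so that true triplets score low, e.g. with a margin loss.
import torch
import torch.nn as nn
from torch.autograd import Variable

class TransE(nn.Module):
    def __init__(self, num_entities, num_relations, dim):
        super(TransE, self).__init__()
        self.entities = nn.Embedding(num_entities, dim)
        self.relations = nn.Embedding(num_relations, dim)

    def forward(self, head, relation, tail):
        h = self.entities(head)
        r = self.relations(relation)
        t = self.entities(tail)
        # per-triplet L2 distance between (h + r) and t
        return (h + r - t).norm(2, 1)

model = TransE(num_entities=100, num_relations=10, dim=20)
score = model(Variable(torch.LongTensor([0])),
              Variable(torch.LongTensor([1])),
              Variable(torch.LongTensor([2])))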
|
st115098
|
As reported in the issue https://github.com/pytorch/examples/issues/215, the example broke in PyTorch v0.2; in particular, the LBFGS optimizer behaves strangely: from STEP 4 it only prints one training loss case. Could this be caused by the upgrade of PyTorch?
|
st115099
|
For example:
gpu 0: 1
gpu 1: 2
gpu 3: 1
the total batch size is 4.
|
st115100
|
if you manually implement DataParallel yourself, this is possible.
We implemented DataParallel using collectives such as broadcast, scatter, gather.
See this function for reference (and if you want, copy it and modify it yourself):
github.com
pytorch/pytorch/blob/master/torch/nn/parallel/data_parallel.py#L76-L106
def parallel_apply(self, replicas, inputs, kwargs):
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
def gather(self, outputs, output_device):
return gather(outputs, output_device, dim=self.dim)
def data_parallel(module, inputs, device_ids=None, output_device=None, dim=0, module_kwargs=None):
r"""Evaluates module(input) in parallel across the GPUs given in device_ids.
This is the functional version of the DataParallel module.
Args:
module: the module to evaluate in parallel
inputs: inputs to the module
device_ids: GPU ids on which to replicate module
output_device: GPU location of the output Use -1 to indicate the CPU.
(default: device_ids[0])
Returns:
(snippet truncated)
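A rough sketch (not the built-in DataParallel) of how those collectives could be combined to split a batch unevenly across GPUs; the device ids and chunk sizes below are illustrative assumptions, and the module is assumed to already live on the first device.
import torch
from torch.autograd import Variable
import torch.cuda.comm as comm
from torch.nn.parallel import replicate, parallel_apply, gather

def uneven_data_parallel(module, batch, device_ids=(0, 1), chunk_sizes=(1, 3)):
    # split the batch with user-chosen sizes instead of an even split
    chunks = comm.scatter(batch, device_ids, chunk_sizes=chunk_sizes)
    replicas = replicate(module, device_ids)
    outputs = parallel_apply(replicas, [(Variable(c),) for c in chunks])
    # bring the per-GPU outputs back to the first device and concatenate
    return gather(outputs, device_ids[0])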
|
st115101
|
I wrote a customized module with c and cuda. It works fine with GPU 0. But when I switch to GPU 1 (I do have 2 GPUs on my machine), the following error occurs:
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1502006348621/work/torch/lib/THC/THCTensorCopy.cu line=100 error=77 : an illegal memory access was encountered
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1502006348621/work/torch/lib/THC/THCTensorCopy.cu line=100 error=77 : an illegal memory access was encountered
terminate called after throwing an instance of 'std::runtime_error'
what(): terminate called recursively
cuda runtime error (77) : an illegal memory access was encountered at /opt/conda/conda-bld/pytorch_1502006348621/work/torch/lib/THC/THCTensorCopy.cu:100
Aborted (core dumped)
I selected GPU using the following code:
torch.cuda.set_device(args.gpu_id)
A sample of my customized module is as follows:
void updateOutput_cuda(THCudaTensor *input, THCudaTensor *output,){
    input = THCudaTensor_newContiguous(state, input);
    THCudaTensor_resize4d(state, output, batchSize, nInputPlane, outputHeight, outputWidth);
    output = THCudaTensor_newContiguous(state, output);
    THCudaTensor_zero(state, output);
    THCudaTensor_free(state, input);
    THCudaTensor_free(state, output);
}
Thanks for your help!
|
st115102
|
I can only guess that there is a problem with the copying of your tensor. Are you sure the tensor dimensions are correct?
(screenshot: THCTensorCopy.cu at master · torch/cutorch)
|
st115103
|
I think it should be correct because it works well on GPU 0.
I have a feeling this problem is related to this one: https://github.com/pytorch/pytorch/issues/689
But I have no idea where the problem is.
|
st115104
|
Can you run this:
import pycuda
from pycuda import compiler
import pycuda.driver as drv
import torch
import sys
print('__Python VERSION:', sys.version)
print('__pyTorch VERSION:', torch.__version__)
print('__CUDA VERSION')
from subprocess import call
# call(["nvcc", "--version"]) does not work
! nvcc --version
print('__CUDNN VERSION:', torch.backends.cudnn.version())
print('__Number CUDA Devices:', torch.cuda.device_count())
print('__Devices')
call(["nvidia-smi", "--format=csv", "--query-gpu=index,name,driver_version,memory.total,memory.used,memory.free"])
print('Active CUDA Device: GPU', torch.cuda.current_device())
print ('Available devices ', torch.cuda.device_count())
print ('Current cuda device ', torch.cuda.current_device())
drv.init()
print("%d device(s) found." % drv.Device.count())
for ordinal in range(drv.Device.count()):
    dev = drv.Device(ordinal)
    print(ordinal, dev.name())
|
st115105
|
('__Python VERSION:', '2.7.13 |Continuum Analytics, Inc.| (default, Dec 20 2016, 23:09:15) \n[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)]')
('__pyTorch VERSION:', '0.2.0_2')
__CUDA VERSION
('__CUDNN VERSION:', 6021)
('__Number CUDA Devices:', 2L)
__Devices
index, name, driver_version, memory.total [MiB], memory.used [MiB], memory.free [MiB]
0, GeForce GTX TITAN X, 375.66, 12204 MiB, 8355 MiB, 3849 MiB
1, GeForce GTX TITAN X, 375.66, 12207 MiB, 9174 MiB, 3033 MiB
('Active CUDA Device: GPU', 0L)
('Available devices ', 2L)
('Current cuda device ', 0L)
2 device(s) found.
(0, 'GeForce GTX TITAN X')
(1, 'GeForce GTX TITAN X')
I commented out this line:
! nvcc --version
The memory usage seems to be consistent with nvidia-smi; I have 2 processes working with the GPUs.
|
st115106
|
Is it the first or the second call to THCudaTensor_newContiguous that crashes? Can you debug it?
input = THCudaTensor_newContiguous(state, input);
THCudaTensor_resize4d(state, output, batchSize, nInputPlane, outputHeight, outputWidth);
output = THCudaTensor_newContiguous(state, output);
|
st115107
|
The error is gone; it is possibly because I should not use OpenMP:
#pragma omp parallel for private(elt)
|
st115108
|
Hi,
the VRAM usage reported by nvidia-smi includes memory occupied by the pytorch caching allocator. Is it possible to disable the caching allocator to get the actual memory usage without caching?
Thanks,
Stefan
|
st115109
|
Not at the moment.
Also, note that the caching allocator is fundamental to getting decent performance on the GPU, because it avoids frees, which are expensive on the GPU (they imply a synchronization point).
|
st115110
|
Hi !
I found that most of the indexing functions, such as index_copy, index_select, and scatter, can only specify one dimension and can’t be used to get a multi-dimensional part of a multi-dimensional tensor. Is there an elegant way to do that?
|
st115111
|
You can use advanced indexing (a la numpy) for that. It currently has some limitations, but can work fine in many situations:
a = torch.rand(2, 3, 4, 5)
b = a[:, :, [2, 3], [0, 3]]
a[:, [1, 2], [0, 3], :] = 1
|
st115112
|
I was trying to run code on 4 GPUs, the GPU id is 4, 5, 6, 7. However I got this problem. When I am running on GPU 0, 1, 2, 3, it is fine. Does anyone have any idea about the reason here?
Traceback (most recent call last):
File "main_boxencoder_new_loss.py", line 279, in <module>
errD_real.backward()
File "/home/didoyang/anaconda2/lib/python2.7/site-packages/torch/autograd/variable.py", line 155, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
File "/home/didoyang/anaconda2/lib/python2.7/site-packages/torch/autograd/__init__.py", line 98, in backward
variables, grad_variables, retain_graph)
File "/home/didoyang/anaconda2/lib/python2.7/site-packages/torch/nn/parallel/_functions.py", line 25, in backward
return comm.reduce_add_coalesced(grad_outputs, self.input_device)
File "/home/didoyang/anaconda2/lib/python2.7/site-packages/torch/cuda/comm.py", line 122, in reduce_add_coalesced
result = reduce_add(flattened, destination)
File "/home/didoyang/anaconda2/lib/python2.7/site-packages/torch/cuda/comm.py", line 92, in reduce_add
nccl.reduce(inputs, outputs, root=destination)
File "/home/didoyang/anaconda2/lib/python2.7/site-packages/torch/cuda/nccl.py", line 161, in reduce
assert(root >= 0 and root < len(inputs))
AssertionError
|
st115113
|
Is it something like: your batch size is 4, and on 8 GPUs there are not enough examples to go around? (I’m not saying it is, just checking this point.)
|
st115114
|
I had the same problem with the same error message.
It occurs when I use GPU2 and GPU3 of the system.
The batch size is 32 so certainly larger than the number of GPUs.
To solve this problem I used the CUDA_VISIBLE_DEVICES flag:
CUDA_VISIBLE_DEVICES=2,3 python3 train.py --gpu_ids 0,1
|
st115115
|
Hi,
I used TensorFlow for deep learning research until recently and started learning PyTorch this time.
PyTorch’s syntax is simpler than TensorFlow’s, making it easier for me to implement neural network models.
As I was studying PyTorch, I created the tutorial code.
I hope this tutorial will help you get started with PyTorch.
GitHub
yunjey/pytorch-tutorial
PyTorch Tutorial for Deep Learning Researchers. Contribute to yunjey/pytorch-tutorial development by creating an account on GitHub.
|
st115116
|
Yes, whoever came up with PyTorch’s high-level design was a genius. I think its design is objectively superior to any other Python framework. In TF or Theano you invariably end up ditching the object-oriented style (if you had one to begin with at all); in PyTorch it makes too much sense to ditch.
|
st115117
|
The design was initially seeded from three libraries: torch-autograd, Chainer, LuaTorch-nn.
Then we iterated over it for over a month between Sam Gross, Adam Paszke, me, Adam Lerer, and Zeming Lin, with occasional input from pretty much everyone. We initially didn’t have a functional interface at all (F.relu() for example), and Sergey Zagoruyko pestered us to death until we saw value in it, and we hurriedly wrote and committed it at the last minute.
We’re glad that you like it.
|
st115118
|
Hi,
Thank you for these tutorials.
I recently went through a course on DL with Keras so I thought it would be a good idea to reproduce what I learned in that course and port it over and learn PyTorch.
It seems that I have got some fundamentals wrong in PyTorch. I copied your code for the linear regression sample, but it doesn’t fit correctly like it did in Keras. Obviously I am missing something. I tried different optimizers and learning rates.
PyTorch code
gist.github.com
https://gist.github.com/NinZine/0db6b8af0cb020160d8a4264da1b5b8b
linear_regression.py
import torch
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import torch.nn as nn
from torch.autograd import Variable
%matplotlib inline
# From here: https://github.com/Dataweekends/zero_to_deep_learning_udemy/blob/master/data/weight-height.csv
df = pd.read_csv('weight-height.csv')
(gist truncated)
Keras code
github.com
Dataweekends/zero_to_deep_learning_udemy/blob/master/course/3 Machine Learning.ipynb
What am I doing wrong?
Edit:
OK, I’ve realised my mistake: the number of epochs could be in the thousands, for example.
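A minimal sketch of that point with made-up data (not the gist itself): a single-feature linear regression in PyTorch usually needs thousands of epochs of plain SGD, not the handful a Keras default run might suggest.
import torch
import torch.nn as nn
from torch.autograd import Variable

x = Variable(torch.randn(100, 1))
y = Variable(3 * x.data + 2 + 0.1 * torch.randn(100, 1))   # noisy line y = 3x + 2

model = nn.Linear(1, 1)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(5000):          # thousands of epochs, not tens
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()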
|
st115119
|
Adding my own recommended list:
Great modular structure to each PyTorch class:
https://github.com/zhedongzheng/finch/tree/master/pytorch-models
Excellent for beginners:
GitHub
hunkim/DeepLearningZeroToAll
DeepLearningZeroToAll - TensorFlow Basic Tutorial Labs
GitHub
MorvanZhou/PyTorch-Tutorial
PyTorch-Tutorial - Build your neural network easy and fast
And my own Jupyter notebooks:
GitHub
QuantScientist/Deep-Learning-Boot-Camp
Deep-Learning-Boot-Camp - A community run, 5-day PyTorch Deep Learning Bootcamp
In Korean:
GitHub
GunhoChoi/PyTorch-FastCampus
PyTorch-FastCampus - Lecture materials for the "Introduction to Deep Learning with PyTorch" CAMP (www.fastcampus.co.kr/data_camp_pytorch/)
|
st115120
|
I am trying to switch my code to run on a CUDA-enabled machine. I received the warning and AssertionError below. The code works fine if I set cuda_on = False. Since the error message is very brief, I don’t know where the problem is. Any suggestion on how to solve it? Thanks!
Error message:
char_rnn_shakespeare.py:33: UserWarning: RNN module weights are not part of
single contiguous chunk of memory. This means they need to be compacted at
every call, possibly greately increasing memory usage. To compact weights again
call flatten_parameters().
output, self.hidden = self.lstm(input, self.hidden)
Traceback (most recent call last):
File "char_rnn_shakespeare.py", line 213, in <module>
all_losses = start_training()
File "char_rnn_shakespeare.py", line 193, in start_training
output, loss = train(input, target)
File "char_rnn_shakespeare.py", line 157, in train
output = rnn.forward(input)
File "char_rnn_shakespeare.py", line 33, in forward
output, self.hidden = self.lstm(input, self.hidden)
File "/home/chaiyong/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 224, in __call__
result = self.forward(*input, **kwargs)
File "/home/chaiyong/anaconda3/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 162, in forward
output, hidden = func(input, self.all_weights, hx)
File "/home/chaiyong/anaconda3/lib/python3.6/site-packages/torch/nn/_functions/rnn.py", line 351, in forward
return func(input, *fargs, **fkwargs)
File "/home/chaiyong/anaconda3/lib/python3.6/site-packages/torch/autograd/function.py", line 284, in _do_forward
flat_output = super(NestedIOFunction, self)._do_forward(*flat_input)
File "/home/chaiyong/anaconda3/lib/python3.6/site-packages/torch/autograd/function.py", line 306, in forward
result = self.forward_extended(*nested_tensors)
File "/home/chaiyong/anaconda3/lib/python3.6/site-packages/torch/nn/_functions/rnn.py", line 293, in forward_extended
cudnn.rnn.forward(self, input, hx, weight, output, hy)
File "/home/chaiyong/anaconda3/lib/python3.6/site-packages/torch/backends/cudnn/rnn.py", line 259, in forward
_copyParams(weight, params)
File "/home/chaiyong/anaconda3/lib/python3.6/site-packages/torch/backends/cudnn/rnn.py", line 186, in _copyParams
assert param_from.type() == param_to.type()
AssertionError
The code:
from __future__ import unicode_literals, print_function, division
from io import open
import glob
import unicodedata
import string
import torch
import torch.nn as nn
from torch.autograd import Variable
import random
import time
import math
import torch.optim as optim
all_letters = string.ascii_letters + " .,;'-"
n_letters = len(all_letters) + 1 # Plus EOS marker
batch_size = 5
input_length = 10
cuda_on = True
class LSTM(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(LSTM, self).__init__()
        self.hidden_size = hidden_size
        self.output_size = output_size
        self.o2o = nn.Linear(hidden_size, output_size)
        self.lstm = nn.LSTM(input_size, hidden_size, dropout=0.1, num_layers=1)
        self.softmax = nn.LogSoftmax()
        self.hidden = self.initHidden()
    def forward(self, input):
        output, self.hidden = self.lstm(input, self.hidden)
        output = self.o2o(output)
        for v in self.hidden:
            v.detach_()
        soutput = self.softmax(output[0])
        self.lstm.flatten_parameters()
        # print(soutput)
        return soutput
    def initHidden(self):
        h0 = Variable(torch.zeros(2, batch_size, self.hidden_size), requires_grad=True)
        c0 = Variable(torch.zeros(2, batch_size, self.hidden_size), requires_grad=True)
        if torch.cuda.is_available() and cuda_on:
            h0 = h0.cuda()
            c0 = c0.cuda()
        return h0, c0
def findFiles(path): return glob.glob(path)
# Turn a Unicode string to plain ASCII, thanks to http://stackoverflow.com/a/518232/2809427
def unicodeToAscii(s):
    return ''.join(
        c for c in unicodedata.normalize('NFD', s)
        if unicodedata.category(c) != 'Mn'
        and c in all_letters
    )
# Read a file and split into lines
def readLines(filename):
    lines = open(filename, encoding='utf-8').read().strip().split('\n')
    return [unicodeToAscii(line) for line in lines]
# Build a list of names
filename = 'data/shakespeare.txt'
lines = readLines(filename)
# Random item from a list
def randomChoice(l):
    return l[random.randint(0, len(l) - 1)]
# Get a random category and random line from that category
def randomTraining():
    line = randomChoice(lines)
    return line
def inputTensor(lines):
    tensors = []
    for index, line in enumerate(lines):
        tensor = torch.zeros(1, batch_size, n_letters)
        for i in range(input_length):
            if i < len(line):
                tensor[0][index][all_letters.find(line[i])] = 1
            else:
                tensor[0][index][n_letters - 1] = 1
        tensors.append(tensor)
    return tensors
def targetTensor(lines):
    targets = []
    for i in range(1, input_length + 1):
        target = []
        for idx, line in enumerate(lines):
            if i < len(line):
                target.append(all_letters.find(line[i]))
            else:
                target.append(n_letters - 1)
        targets.append(target)
    return torch.LongTensor(targets)
def randomTrainingExample():
    # create input of 5 lines
    lines = []
    while len(lines) != batch_size:
        line = randomTraining()
        # skip blank line
        while line == "":
            line = randomTraining()
        lines.append(line)
    input_tensors = inputTensor(lines)
    target_tensor = targetTensor(lines)
    return input_tensors, target_tensor
criterion = nn.NLLLoss()
learning_rate = 0.001
hidden_size = 30
rnn = LSTM(n_letters, hidden_size, n_letters)
optimizer = optim.SGD(rnn.parameters(), lr=learning_rate)
def train(input_line_tensor, target_line_tensor):
    rnn.zero_grad()
    # rnn.initHidden()
    loss = 0
    for i in range(len(input_line_tensor)):
        input = Variable(input_line_tensor[i])
        target = Variable(target_line_tensor[i])
        if torch.cuda.is_available() and cuda_on:
            input = input.cuda()
            target = target.cuda()
        output = rnn.forward(input)
        loss += criterion(output, target)
    loss.backward()
    optimizer.step()
    return output, loss.data[0] / len(input_line_tensor)
def timeSince(since):
    now = time.time()
    s = now - since
    m = math.floor(s / 60)
    s -= m * 60
    return '%dm %ds' % (m, s)
n_iters = 1000000
print_every = 50000
plot_every = 500
all_losses = []
def start_training():
    total_loss = 0
    start = time.time()
    # TRAINING
    for iter in range(1, n_iters + 1):
        input, target = randomTrainingExample()
        output, loss = train(input, target)
        total_loss += loss
        if iter % print_every == 0:
            print('%s (%d %d%%) %.4f' % (timeSince(start), iter, iter / n_iters * 100, loss))
        if iter % plot_every == 0:
            all_losses.append(total_loss / plot_every)
            total_loss = 0
    torch.save(rnn, "model.data")
    return all_losses
all_losses = start_training()
|
st115121
|
Hi,
You forgot to send your model to the GPU as well, with:
rnn.cuda()
criterion.cuda()
|
st115122
|
Due to a firewall, I could not install PyTorch using the conda command, so I cloned PyTorch from GitHub and installed it according to the README.md, but I got the following error:
/home/flml/pytorch/torch/lib/THC/THCAtomics.cuh(131): error: no instance of overloaded function “atomicCAS” matches the argument list
argument types are: (uint64_t *, uint64_t, uint64_t)
1 error detected in the compilation of “/tmp/tmpxft_0000559f_00000000-7_THCTensorIndex.cpp1.ii”.
CMake Error at THC_generated_THCTensorIndex.cu.o.cmake:267 (message):
Error generating file
/home/flml/pytorch/torch/lib/build/THC/CMakeFiles/THC.dir//./THC_generated_THCTensorIndex.cu.o
CMakeFiles/THC.dir/build.make:161: recipe for target ‘CMakeFiles/THC.dir/THC_generated_THCTensorIndex.cu.o’ failed
make[2]: *** [CMakeFiles/THC.dir/THC_generated_THCTensorIndex.cu.o] Error 1
make[2]: *** Waiting for unfinished jobs....
CMakeFiles/Makefile2:67: recipe for target ‘CMakeFiles/THC.dir/all’ failed
make[1]: *** [CMakeFiles/THC.dir/all] Error 2
Makefile:127: recipe for target ‘all’ failed
make: *** [all] Error 2
Besides, The nvcc -V and nvidia-smi command output the following lines:
nvcc: NVIDIA ® Cuda compiler driver
Copyright © 2005-2016 NVIDIA Corporation
Built on Tue_Jan_10_13:22:03_CST_2017
Cuda compilation tools, release 8.0, V8.0.61
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.66 Driver Version: 375.66 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Quadro K620 Off | 0000:01:00.0 On | N/A |
| 34% 40C P8 1W / 30W | 262MiB / 1999MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 990 G /usr/lib/xorg/Xorg 148MiB |
| 0 1756 G compiz 110MiB |
| 0 2299 G /usr/lib/firefox/firefox 1MiB |
+-----------------------------------------------------------------------------+
Anyone can give some help to solve this ,thanks !!!
|
st115123
|
This should now be fixed. If you pull the source code again and reinstall, it will work.
Sorry for the trouble.
|
st115124
|
Hello,
I am trying to use nn.Bottle, but I got
AttributeError: module 'torch.nn' has no attribute 'Bottle'
I am wondering how I can get around this.
Thanks
|
st115125
|
Neither does nn.Cup. This should be a severe bug and please post an issue in PyTorch repo.
|
st115126
|
I think you might be looking for some Caffe?
There is no need for nn.Bottle, because you can view your weights and apply the operation yourself.
That being said, nn.Linear supports some variant of nn.Bottle, as it uses matmul internally, which performs automatic broadcasting.
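A quick illustration of that point (sizes are arbitrary): nn.Linear already handles extra leading dimensions, the classic nn.Bottle use case, because the underlying matmul broadcasts over them.
import torch
import torch.nn as nn
from torch.autograd import Variable

linear = nn.Linear(8, 4)
x = Variable(torch.randn(5, 10, 8))    # (batch, time, features)
y = linear(x)                          # applied to the last dim at every position
print(y.size())                        # torch.Size([5, 10, 4])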
|
st115127
|
Hello. I need to unfold some feature maps of my network during training, which consumes a lot of CUDA memory. I found that the program crashes with “out of cuda memory” after a few training iterations. However, within the training loop the variables I allocate should be local to the ‘for’ statement, so I don’t know why it runs out of memory after a few successful iterations. I would expect the memory consumption to be the same at every iteration. Can anyone help me out? Thanks!
|
st115128
|
Two methods which I frequently use for debugging:
By @smth
import gc
import os
import sys
import psutil
import torch

def memReport():
    for obj in gc.get_objects():
        if torch.is_tensor(obj):
            print(type(obj), obj.size())

def cpuStats():
    print(sys.version)
    print(psutil.cpu_percent())
    print(psutil.virtual_memory())  # physical memory usage
    pid = os.getpid()
    py = psutil.Process(pid)
    memoryUse = py.memory_info()[0] / 2. ** 30  # memory use in GB...I think
    print('memory GB:', memoryUse)

cpuStats()
memReport()
Edited by @smth for PyTorch 0.4 and above, which doesn’t need the .data check.
|
st115129
|
Thanks! Does the Python gc collect garbage as soon as a variable has no references, or with a delay?
|
st115130
|
@smth
Thanks! Do you mean that:
def func():
    a = Variable(torch.randn(2,2))
    a = Variable(torch.randn(100,100))
    return
the memory allocated in a = Variable(torch.randn(2,2)) will be freed as soon as the code a = Variable(torch.randn(100,100)) is executed?
|
st115131
|
But, don’t forget that once you call a = Variable(torch.rand(2, 2)), a holds the data.
When you call a = Variable(torch.rand(100, 100)) afterwards, first Variable(torch.rand(100, 100)) is allocated (so the first tensor is still in memory), then it is assigned to a, and then Variable(torch.rand(2, 2)) is freed.
|
st115132
|
@fmassa
that means there have to be enough memory for two variable during the creation of the second variable?
|
st115133
|
That means that if you have something like
a = torch.rand(1024, 1024, 1024) # 4GB
# the following line allocates 4GB extra before the assignment,
# so you need to have 8GB in order for it to work
a = torch.rand(1024, 1024, 1024)
# now you only use 4GB
|
st115134
|
Hello.
I wrote some simple code to understand the class torch.nn.LSTM in PyTorch.
I changed the input’s axes from (seq_len, batch, input_size) to (batch, seq_len, input_size) when I use the batch_first option.
However, I could not understand why I get a different result with the batch_first option.
Here is my code.
import torch
import torch.autograd as autograd
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
torch.manual_seed(1)
lstm = nn.LSTM(input_size=3, hidden_size=3, num_layers=1)
lstm2 = nn.LSTM(input_size=3, hidden_size=3, num_layers=1, batch_first=True)
inputs = autograd.Variable(torch.randn((30)))
h0 = autograd.Variable(torch.randn(1, 2, 3))
c0 = autograd.Variable(torch.randn((1, 2, 3)))
inputs1 = inputs.view(5, 2, -1).contiguous()
inputs2 = torch.transpose(inputs1, 0, 1).contiguous()
out = lstm(inputs1, (h0, c0))[0]
print("Case 1")
print(torch.transpose(inputs1, 0, 1).contiguous())
print(torch.transpose(out, 0, 1).contiguous())
print("#######"*5)
out = lstm2(inputs2, (h0, c0))[0]
print("Case 2")
print(inputs2)
print(out)
And the result below.
Case 1
Variable containing:
(0 ,.,.) =
1.4114 -0.9804 -0.7578
-0.4270 -0.3868 -0.6089
1.1848 -1.0322 -0.7039
-0.8018 -0.7855 0.7877
-0.4594 -1.1798 0.3812
(1 ,.,.) =
-0.3782 1.7211 0.0310
1.1652 -0.1326 -0.0228
0.8813 1.4276 -0.9245
0.0786 1.7053 -0.8098
-0.0064 0.5302 0.9990
[torch.FloatTensor of size 2x5x3]
Variable containing:
(0 ,.,.) =
0.0122 -0.0571 -0.0294
0.0310 -0.0025 0.2622
-0.0092 -0.1369 0.1579
-0.0573 -0.0152 0.2817
-0.1080 0.0052 0.2817
(1 ,.,.) =
0.0885 0.0426 0.3910
0.0516 -0.0430 0.2333
0.0789 0.0719 0.1446
0.1162 0.1515 0.2469
0.1018 0.0987 0.3232
[torch.FloatTensor of size 2x5x3]
###################################
Case 2
Variable containing:
(0 ,.,.) =
1.4114 -0.9804 -0.7578
-0.4270 -0.3868 -0.6089
1.1848 -1.0322 -0.7039
-0.8018 -0.7855 0.7877
-0.4594 -1.1798 0.3812
(1 ,.,.) =
-0.3782 1.7211 0.0310
1.1652 -0.1326 -0.0228
0.8813 1.4276 -0.9245
0.0786 1.7053 -0.8098
-0.0064 0.5302 0.9990
[torch.FloatTensor of size 2x5x3]
Variable containing:
(0 ,.,.) =
0.1568 -0.2322 0.0824
0.1147 -0.0394 0.1534
0.0238 -0.1611 -0.0544
0.0642 -0.0818 0.0314
0.0626 -0.1119 0.0638
(1 ,.,.) =
-0.0989 0.0636 -0.1731
-0.0746 -0.0633 -0.3005
0.0611 0.0328 -0.3500
0.1299 0.1404 -0.1330
0.1082 -0.0088 -0.0390
[torch.FloatTensor of size 2x5x3]
I really want to understand this.
Thanks in advance.
|
st115135
|
Oh, I solved this just by adding torch.manual_seed(1) before defining each LSTM.
Now I understand that batch_first works as I expected.
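A condensed sketch of that fix, reusing the shapes from the question: seeding before creating each LSTM gives both modules identical weights, so batch_first only changes the input layout and the outputs match up to a transpose.
import torch
import torch.nn as nn
from torch.autograd import Variable

torch.manual_seed(1)
lstm = nn.LSTM(input_size=3, hidden_size=3, num_layers=1)
torch.manual_seed(1)
lstm2 = nn.LSTM(input_size=3, hidden_size=3, num_layers=1, batch_first=True)

inputs1 = Variable(torch.randn(5, 2, 3))           # (seq_len, batch, input)
inputs2 = inputs1.transpose(0, 1).contiguous()     # (batch, seq_len, input)
h0 = Variable(torch.zeros(1, 2, 3))
c0 = Variable(torch.zeros(1, 2, 3))

out1 = lstm(inputs1, (h0, c0))[0].transpose(0, 1)
out2 = lstm2(inputs2, (h0, c0))[0]
print((out1 - out2).abs().max())                   # ~0: same results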
|
st115136
|
I want to visualize a Python list where each element is a torch.FloatTensor variable.
Let’s say the list is stored in Dx. When I try the following:
plt.figure()
plt.plot(Dx)
I am getting the following error:
ValueError: x and y can be no greater than 2-D, but have shapes (1200,) and (1200, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1)
Can somebody please shed some light on this, or give me some guidance on how to plot these kinds of lists, which contain the error rates of a model? I just want an iteration-wise plot.
|
st115137
|
You need to do two things: remove the superfluous dimensions from the y argument to plt.plot and convert your tensors to numpy, like so:
In [1]: import torch
In [2]: import matplotlib.pyplot as plt
In [3]: x = torch.ones(1200)
In [4]: y = torch.ones(1200, 1, 1, 1, 1, )
In [5]: x.size()
Out[5]: torch.Size([1200])
In [6]: y.size()
Out[6]: torch.Size([1200, 1, 1, 1, 1])
In [7]: y.squeeze().size()
Out[7]: torch.Size([1200])
In [8]: plt.plot(x.numpy(), y.squeeze().numpy())
|
st115138
|
Hello,
I have a PyTorch variable:
preds[4,4]
Out[305]:
Variable containing:
-96.7809
[torch.cuda.FloatTensor of size 1 (GPU 0)]
I want to do the following:
import math
x = preds[4,4]
y = math.exp(x)
z = y / (y+1)
However, when I do:
y = math.exp(x)
I get the following error:
math.exp(preds[4,4])
TypeError: a float is required
How can I transform a torch Variable into a float in order to be able to do these operations?
Or is there any trick to do that in PyTorch?
Thanks
|
st115139
|
The Python math module doesn’t work on torch objects. You could get the Python float out of your variable with x.data[0]. However you probably just want to use torch.exp instead of math.exp (torch.exp works on Tensors and Variables). You could also just use x.exp() for the same result, if you prefer that syntax. Generally you want to find the PyTorch function for any mathematical operations you want to perform.
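A small sketch of those options (the value is copied from the question; a CPU tensor is used here for simplicity):
import torch
from torch.autograd import Variable

x = Variable(torch.Tensor([-96.7809]))
y = torch.exp(x)          # or x.exp(); stays a Variable, keeps autograd support
z = y / (y + 1)
as_float = x.data[0]      # a plain Python float, usable with math.exp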
|
st115140
|
I think it is because the data is in the form of a Tensor while you are processing it with the Python math library. My suggestion is to convert it to numpy first, or just use Tensor operations in PyTorch (see Tensor Operations). If you want to calculate with a plain Python function you can cast it as a numpy array; it is as simple as preds.data.numpy(). I also see that your tensor is on the GPU, so don’t forget to copy it back to the CPU first: preds.data.cpu().numpy().
|
st115141
|
For example, if I want to implement a SmoothL1Loss that outputs a per-instance loss, I need to check every element to see whether it is in [-1, 1]. Right now I only know that I can use torch.index_select and then cat the two parts together, but that way I can’t restore them to one tensor in the same order as the input. Is there any way to do this, or must I implement it using cffi?
|
st115142
|
You don’t need to implement it using C/CUDA (but it will be definitely more efficient if you do so).
Here is one (untested) implementation for SmoothL1Loss using the functional interface:
def smooth_l1_loss(input, target):
    diff = input - target
    mask = diff.abs() < 1
    diff[mask] = diff[mask] ** 2
    return diff
|
st115143
|
Yeah, it works on tensors, but if they are Variables, diff[mask] = diff[mask] ** 2 is an in-place operation, and on the backward pass it raises RuntimeError: a leaf Variable that requires grad has been used in an in-place operation.
|
st115144
|
Found a way: I just need to make a new Variable and assign the two parts to the new Variable. Thanks.
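A hedged, out-of-place sketch along those lines: combine the two masked parts into a fresh result instead of writing into diff, so autograd is happy. The constants here follow the usual SmoothL1 definition (0.5*d^2 inside [-1, 1], |d| - 0.5 outside), which differs slightly from the snippet above.
import torch
from torch.autograd import Variable

def smooth_l1_per_element(input, target):
    diff = (input - target).abs()
    mask = (diff < 1).float()
    return mask * 0.5 * diff ** 2 + (1 - mask) * (diff - 0.5)

x = Variable(torch.randn(4), requires_grad=True)
t = Variable(torch.randn(4))
loss = smooth_l1_per_element(x, t).sum()
loss.backward()   # no in-place error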
|
st115145
|
I am building a chatbot using a Seq2Seq model. During training, my GPU runs out of memory on pretty small batch sizes (anything >= 20); the TensorFlow implementations I have seen can comfortably handle larger models:
Here are some specs for context (my pytorch implementation/tensorflow implementations):
num of layers (applies to encoder and decoder) = 1/2
hidden size (applies to encoder and decoder) = 256/512
batch size = 20/64
vocab size (shared vocab between encoder and decoder) = 20,000/20,000
GPU total memory = 11GB (nvidia gtx 1080 ti)
longest seq len = 686 words(cornell movie dialog corpus)
I tried playing around with the code a bit; it appears that the GPU memory is not freed after each training iteration.
here is the model code:
import time
import torch
import torch.nn as nn
from torch.autograd import Variable
from torch.optim import SGD
from data import TRAIN_FILE_NAME
from data import VAL_FILE_NAME
from data import pad_seqs
from dataset import Dataset
from data import PAD_TOKEN
from data import load_vocab
from data import VOCAB_FILE_NAME
from numpy import prod
class Seq2Seq(nn.Module):
    def __init__(self, vocab_size, hidden_size):
        super(Seq2Seq, self).__init__()
        self.vocab_size = vocab_size
        self.hidden_size = hidden_size
        self.encoder = EncoderRNN(vocab_size, hidden_size)
        self.decoder = DecoderRNN(vocab_size, hidden_size)
    def _get_loss(self, batch):
        questions = [example['question'] for example in batch]
        answers = [example['answer'] for example in batch]
        answer_lens = [len(a) for a in answers]
        # print max(len(el) for el in questions), max(len(el) for el in answers)
        questions = Variable(torch.LongTensor(pad_seqs(questions))).cuda()
        answers = Variable(torch.LongTensor(pad_seqs(answers))).cuda()
        output = self(questions, answers)
        # print questions.size(), answers.size()
        loss = 0
        loss_fn = torch.nn.NLLLoss()
        batch_size = len(batch)
        for i in xrange(batch_size):
            loss += loss_fn(output[i, :answer_lens[i] - 1], answers[i, 1:answer_lens[i]])
        return loss / batch_size
    def forward(self, input_seqs, target_seqs):
        _, encoder_hidden = self.encoder(input_seqs)
        decoder_output, _ = self.decoder(target_seqs, encoder_hidden)
        return decoder_output
    def train(self, lr=1e-3, batch_size=1, iters=7500, print_iters=100):
        optimizer = SGD(self.parameters(), lr=lr)
        train_losses = []
        val_losses = []
        train = Dataset(TRAIN_FILE_NAME)
        val = Dataset(VAL_FILE_NAME)
        start_time = time.time()
        for i in xrange(1, iters + 1):
            train_batch = [train.get_random_example() for _ in xrange(batch_size)]
            val_batch = [val.get_random_example() for _ in xrange(batch_size)]
            train_loss = self._get_loss(train_batch)
            optimizer.zero_grad()
            train_loss.backward()
            optimizer.step()
            val_loss = self._get_loss(val_batch)
            train_losses.append(train_loss.data[0])
            val_losses.append(val_loss.data[0])
            if i % print_iters == 0:
                end_time = time.time()
                string = 'epoch: {}, iters: {}, train loss: {:.2f}, val loss: {:.2f}, time: {:.2f} s'
                print string.format(i / len(train), i, train_loss.data[0], val_loss.data[0], end_time - start_time)
                start_time = time.time()
        return train_losses, val_losses
class EncoderRNN(nn.Module):
    def __init__(self, vocab_size, hidden_size):
        super(EncoderRNN, self).__init__()
        self.num_layers = 1
        self.hidden_size = hidden_size
        self.embedding = nn.Embedding(vocab_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size, self.num_layers, batch_first=True)
    def init_hidden(self, batch_size):
        return Variable(torch.zeros(self.num_layers, batch_size, self.hidden_size)).cuda()
    def forward(self, input_seqs):
        input_seqs = self.embedding(input_seqs)
        batch_size = input_seqs.size()[0]
        output, hidden = self.gru(input_seqs, self.init_hidden(batch_size))
        return output, hidden
class DecoderRNN(nn.Module):
    def __init__(self, vocab_size, hidden_size):
        super(DecoderRNN, self).__init__()
        self.num_layers = 1
        self.hidden_size = hidden_size
        self.embedding = nn.Embedding(vocab_size, hidden_size)
        self.gru = nn.GRU(2 * hidden_size, hidden_size, self.num_layers, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)
        self.softmax = nn.LogSoftmax()
    def init_hidden(self, batch_size):
        return Variable(torch.zeros(self.num_layers, batch_size, self.hidden_size)).cuda()
    @staticmethod
    def create_rnn_input(embedded, thought):
        # reorder axes to be (seq_len, batch_size, hidden_size)
        embedded = embedded.permute(1, 0, 2)
        seq_len, batch_size, hidden_size = embedded.size()
        rnn_input = Variable(torch.zeros((seq_len, batch_size, 2 * hidden_size))).cuda()
        for i in xrange(seq_len):
            for j in xrange(batch_size):
                rnn_input[i, j] = torch.cat((embedded[i, j], thought[0, j]))
        # make batch first
        return rnn_input.permute(1, 0, 2)
    def softmax_batch(self, linear_output):
        result = Variable(torch.zeros(linear_output.size())).cuda()
        batch_size = linear_output.size()[0]
        for i in xrange(batch_size):
            result[i] = self.softmax(linear_output[i])
        return result
    def forward(self, target_seqs, thought):
        target_seqs = self.embedding(target_seqs)
        rnn_input = self.create_rnn_input(target_seqs, thought)
        batch_size = target_seqs.size()[0]
        output, hidden = self.gru(rnn_input, self.init_hidden(batch_size))
        output = self.softmax_batch(self.out(output))
        return output, hidden
and the training code:
import matplotlib.pyplot as plt
from data import VOCAB_SIZE
from models import Seq2Seq
def plot_loss(train_losses, val_losses):
    plt.plot(train_losses, color='red', label='train')
    plt.plot(val_losses, color='blue', label='val')
    plt.legend(loc='upper right', frameon=False)
    plt.show()
# VOCAB_SIZE is 20,000
model = Seq2Seq(VOCAB_SIZE, 256).cuda()
train_losses, val_losses = model.train(iters=1000, batch_size=100, print_iters=100)
plot_loss(train_losses, val_losses)
|
st115146
|
Can you run this before and after you start training:
from subprocess import call
! nvcc --version
print('__CUDNN VERSION:', torch.backends.cudnn.version())
print('__Number CUDA Devices:', torch.cuda.device_count())
call(["nvidia-smi", "--format=csv", "--query-gpu=index,name,driver_version,memory.total,memory.used,memory.free"])
|
st115147
|
before training:
(/home/jkarimi91/Apps/anaconda2/envs/torch) jkarimi91@jkarimi91-desktop:~/Projects/chatbot$ python call.py
('__CUDNN VERSION:', 6021)
('__Number CUDA Devices:', 1L)
index, name, driver_version, memory.total [MiB], memory.used [MiB], memory.free [MiB]
0, GeForce GTX 1080 Ti, 375.82, 11169 MiB, 569 MiB, 10600 MiB
during training (after input and target seqs have been pushed to gpu but before feeding them through the model):
(/home/jkarimi91/Apps/anaconda2/envs/torch) jkarimi91@jkarimi91-desktop:~/Projects/chatbot$ python chatbot.py
('__CUDNN VERSION:', 6021)
('__Number CUDA Devices:', 1L)
index, name, driver_version, memory.total [MiB], memory.used [MiB], memory.free [MiB]
0, GeForce GTX 1080 Ti, 375.82, 11169 MiB, 1063 MiB, 10106 MiB
('__CUDNN VERSION:', 6021)
('__Number CUDA Devices:', 1L)
index, name, driver_version, memory.total [MiB], memory.used [MiB], memory.free [MiB]
0, GeForce GTX 1080 Ti, 375.82, 11169 MiB, 4985 MiB, 6184 MiB
('__CUDNN VERSION:', 6021)
('__Number CUDA Devices:', 1L)
index, name, driver_version, memory.total [MiB], memory.used [MiB], memory.free [MiB]
0, GeForce GTX 1080 Ti, 375.82, 11169 MiB, 4985 MiB, 6184 MiB
('__CUDNN VERSION:', 6021)
('__Number CUDA Devices:', 1L)
index, name, driver_version, memory.total [MiB], memory.used [MiB], memory.free [MiB]
0, GeForce GTX 1080 Ti, 375.82, 11169 MiB, 5765 MiB, 5404 MiB
('__CUDNN VERSION:', 6021)
('__Number CUDA Devices:', 1L)
index, name, driver_version, memory.total [MiB], memory.used [MiB], memory.free [MiB]
0, GeForce GTX 1080 Ti, 375.82, 11169 MiB, 8881 MiB, 2288 MiB
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1503966894950/work/torch/lib/THC/generic/THCStorage.cu line=66 error=2 : out of memory
Traceback (most recent call last):
File "chatbot.py", line 16, in <module>
train_losses, val_losses = model.train(iters=1000, batch_size=100, print_iters=100)
File "/home/jkarimi91/Projects/chatbot/models.py", line 84, in train
train_loss.backward()
File "/home/jkarimi91/Apps/anaconda2/envs/torch/lib/python2.7/site-packages/torch/autograd/variable.py", line 156, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
File "/home/jkarimi91/Apps/anaconda2/envs/torch/lib/python2.7/site-packages/torch/autograd/__init__.py", line 98, in backward
variables, grad_variables, retain_graph)
RuntimeError: cuda runtime error (2) : out of memory at /opt/conda/conda-bld/pytorch_1503966894950/work/torch/lib/THC/generic/THCStorage.cu:66
|
st115148
|
My best guess is maybe there is a flaw/memory leak in init_hidden or create_rnn_input.
|
st115149
|
Two solutions/ideas come to mind to address this problem:
when computing the val loss, set volatile=True (see the sketch below)
many chatbot implementations that use Seq2Seq, coupled with the Cornell movie dialog corpus, limit the sequence length to ~20 words
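A generic, hedged illustration of the volatile idea (pre-0.4 API; a tiny CPU model is used here, not the chatbot code): a forward pass on volatile Variables keeps no autograd graph, so validation does not hold on to GPU memory.
import torch
import torch.nn as nn
from torch.autograd import Variable

model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()

x = Variable(torch.randn(4, 10), volatile=True)              # validation batch
y = Variable(torch.LongTensor([0, 1, 0, 1]), volatile=True)
val_loss = criterion(model(x), y)                            # no graph retained, no backward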
|
st115150
|
I am willing to run it locally and debug it; please upload the dataset and the full code to a git repo.
|
st115151
|
GitHub
jkarimi91/chatbot
chatbot - A Neural Conversation Model implemented in PyTorch.
|
st115152
|
I have a problem with indexing a Variable in a forward pass. I already know how to get a specified part and do some operation on it, but I don’t know how to write the result back to the corresponding area and let gradients flow through the indexing properly. For example:
input=Variable(torch.randn(2,3,5,6),requires_grad=True)
idx_w=Variable(torch.FloatTensor([2,3,4]),requires_grad=True).floor().detach()
idx_h=Variable(torch.FloatTensor([1,2]),requires_grad=True).floor().detach()
result=Variable(torch.zeros(input.size()),requires_grad=True)
output=input.index_select(dim=-1,index=idx_w.long()).index_select(dim=-2,index=idx_h.long())
output+=1
result[:,:,idx_h,idx_w]=output
But this causes an error that idx_h is a Variable and can’t be converted to a LongTensor.
Any advice would be appreciated!
|
st115153
|
I’m not sure I understand properly all the points, but here are some tips:
try using index_copy_ instead of advanced indexing (because you used two index_select calls, which is equivalent to input[:, :, :, idx_w.long()][:, :, idx_h.long(), :]); see the sketch below
don’t forget to call .long() on your indexing variables
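A small sketch of the index_copy_ suggestion, shown on plain tensors with the shapes from the question (writing along the last dimension only):
import torch

result = torch.zeros(2, 3, 5, 6)
idx_w = torch.LongTensor([2, 3, 4])
update = torch.ones(2, 3, 5, 3)
result.index_copy_(3, idx_w, update)   # write `update` into columns 2, 3, 4 of the last dim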
|
st115154
|
I have an [x, y, z] 3-d tensor and a [y, z] 2-d tensor. How can I easily use PyTorch to compute the products [1, y, z]*[y, z], [2, y, z]*[y, z], ..., [x, y, z]*[y, z], i.e. all of these results at once?
|
st115155
|
It’s not clear to me what operation you are looking to perform. Can you explain a bit more, e.g., give the sizes of the 3d and 2d tensor, and what you mean by the * operator, is it matrix multiply or pointwise multiply?
Maybe you could use torch.matmul?
|
st115156
|
If I have a tensor like x = torch.rand((3,4,8)) and I would like to slice x in order to fit it into
y = torch.rand((2,3,4,4)).
I am able to do slicing in 2D using torch.narrow or torch.select, but I am really confused in 3D.
I really need to know this because I want to split up a bunch of patches.
Thank you in advance.
|
st115157
|
I believe I should be using torch.split()
x = torch.rand((3,4,8))
slices = torch.split(x, 4, 2)
print(len(slices))
> 2
print(slices[0].size())
> torch.Size([3, 4, 4])
This will nicely return tuples of 3D tensors.
P.S. Just posting as it might help someone in need.
|
st115158
|
Note that you don’t need to use narrow or select, but instead basic indexing will do it for you.
For example
a = torch.rand(2, 4, 8)
print(a[:, 1, :3])
print(a[1, 2])
print(a[..., 1])
|
st115159
|
How do I multiply a vector A of size n x 1 with a tensor B of size n x m x k? What I want is for each element of A to be multiplied with the corresponding m x k matrix of B.
|
st115160
|
I’m not sure I understand your question properly, but you could do something like the following in PyTorch 0.2
result = A[:, :, None] * B
But I’m not sure that’s the operation you want to perform. If you write it down, I could have a better look.
|
st115161
|
I am at a hackathon, and I spent 8 hours trying to implement PatchMatch for N channels of images.
I know this is very unreasonable, but I am sincerely frustrated, and I hope somebody from this community can help me implement it.
I want to implement this paper: http://gfx.cs.princeton.edu/pubs/Barnes_2009_PAR/patchmatch.pdf
but instead of 3 channels, I want it across N channels. This is for a neural network, and I will be more than happy to collaborate with whoever helps me.
I would like to add this as a PyTorch extension: https://github.com/msracver/Deep-Image-Analogy/blob/master/windows/deep_image_analogy/source/GeneralizedPatchMatch.cuh
Help is greatly appreciated.
|
st115162
|
As I mentioned on Twitter, I’d start with something based on numpy and adapt/optimize it for PyTorch. Something like https://github.com/jabooth/patchmatch/blob/master/patchmatch.py for example.
It will definitely be slower than hand-written kernels, but it will also be much easier to debug.
Good luck!
|
st115163
|
I run the following code before defining the modules. (My model uses Embedding, Dropout, LSTM, and Linear layers.)
torch.manual_seed(1000)
torch.backends.cudnn.enabled = False
torch.cuda.manual_seed(1000)
However, the final results are still different for each trial (regardless of whether I use CPU or GPU). Are there any other things I should do to get deterministic results given the same input?
Thanks.
|
st115164
|
Are you using multiple GPUs? If yes, you need to use torch.cuda.manual_seed_all.
This should be enough and is enough for getting deterministic results in all examples we have. The problem probably lies somewhere in your code.
|
st115165
|
For example, if you’re using numpy or random modules, you need to seed them too.
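A short checklist version of that advice (the seed value is arbitrary): seed every source of randomness you use, not just torch.
import random
import numpy as np
import torch

seed = 1000
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)   # covers multi-GPU setups too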
|
st115166
|
Thanks! It was because I was using the random module.
BTW, is it deterministic to use cuDNN layers such as SpatialConvolution and SpatialMaxPooling, which can be non-deterministic in Lua Torch?
|
st115167
|
@supakjk I think that for the moment one cannot choose which algorithm will be used by cuDNN in PyTorch, so you can’t assume that it will pick the deterministic algorithm for SpatialConvolution. Also, SpatialMaxPooling is not deterministic in cuDNN.
|