st119768
|
I must be missing something. I thought register_buffer was to specify persistent fields, not the ones to be "cudaified". In any case, I presume it should be self.register_buffer('s', self.s)? The following, with s a Variable,
import torch
from torch import Tensor
from torch.nn.parameter import Parameter
from torch.nn import Module
from torch.autograd import Variable
class Blah(Module):
    def __init__(self, dim):
        super(Blah, self).__init__()
        # self.s = Parameter(torch.rand(dim), requires_grad = False)
        self.s = Variable(torch.rand(dim))
        self.register_buffer('s', self.s)
        self.t = Parameter(torch.rand(dim))

blah = Blah(10)
blah.zero_grad()
print('s', type(blah.s.data))
print('t', type(blah.t.data))
if torch.cuda.is_available():
    blah.cuda()
    print('s', type(blah.s.data))
    print('t', type(blah.t.data))
prints
s <class 'torch.FloatTensor'>
t <class 'torch.FloatTensor'>
s <class 'torch.FloatTensor'>
t <class 'torch.cuda.FloatTensor'>
And making s a Tensor instead of a Variable does not help.
|
st119769
|
What about this?
import torch
from torch import Tensor
from torch.nn.parameter import Parameter
from torch.nn import Module
from torch.autograd import Variable
class Blah(Module):
    def __init__(self, dim):
        super(Blah, self).__init__()
        self.register_buffer('s', torch.rand(dim))
        self.t = Parameter(torch.rand(dim))

blah = Blah(10)
blah.zero_grad()
print('s', type(blah.s))
print('t', type(blah.t.data))
if torch.cuda.is_available():
    blah.cuda()
    print('s', type(blah.s))
    print('t', type(blah.t.data))
Which outputs the following.
/home/atcold/anaconda3/bin/python /home/atcold/Work/buffer.py
s <class 'torch.FloatTensor'>
t <class 'torch.FloatTensor'>
s <class 'torch.cuda.FloatTensor'>
t <class 'torch.cuda.FloatTensor'>
Process finished with exit code 0
Here is a reference to the documentation of the register_buffer() method.
|
st119770
|
I'd also recommend using a buffer for that. However, it is a bug, and it should be fixed anyway.
@csarofeen Parameters can't be volatile, and it's likely not what's wanted. Volatile will forcefully turn off graph construction. You can read more in these notes.
|
st119771
|
I managed to make my PyTorch code work on CPU. While porting it over to GPU, I'm stuck at the backpropagation with this error: AttributeError: 'CudnnRNN' object has no attribute '_nested_output'
The API is designed this way because I need this API to interact with another code that requires such calls.
Simple LSTM (single input with multiple hidden states that are updated)
from torch.autograd import Variable
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
import matplotlib as mpl
mpl.use('Agg')
import matplotlib.pyplot as plt
class Net(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, bias, dropout):
        super(Net, self).__init__()
        self.rnn = nn.LSTM(input_size=input_size,
                           hidden_size=hidden_size,
                           num_layers=num_layers,
                           bias=bias,
                           dropout=dropout)

def input_var(i):
    test = np.array([i])
    # print(test.shape)
    # test = np.array([i])
    input_var = test.reshape(1, 1, 1)  # (seq_len, batch, input_size)
    input_var = torch.from_numpy(input_var).float()
    return input_var

def label_var(i):
    test = np.array([i*4])
    label_var = test.reshape(1, 1)  #
    label_var = torch.from_numpy(label_var).float()
    return label_var

class lstmModule:
    def __init__(self, input_size, hidden_size, num_layers, bias, dropout,
                 seq_len, batch_size, meta_lr, n_meta_iter):
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.bias = bias
        self.dropout = dropout
        self.seq_len = seq_len
        self.batch_size = batch_size
        self.meta_lr = meta_lr
        self.n_meta_iter = n_meta_iter
        self.net = Net(input_size=input_size,
                       hidden_size=hidden_size,
                       num_layers=num_layers,
                       bias=bias,
                       dropout=dropout)
        self.net.cuda()
        self.h0 = Variable(torch.randn(self.num_layers,
                                       self.batch_size,
                                       self.hidden_size)).cuda()
        self.c0 = Variable(torch.randn(self.num_layers,
                                       self.batch_size,
                                       self.hidden_size)).cuda()
        self.optimizer = optim.Adam(self.net.rnn.parameters(), lr=self.meta_lr)
        self.loss_lst = []
        self.loss_lst2 = []

    def lstm_forward(self, seq_num, meta_num):
        print('i fed', seq_num)

        def pseudo_loss(output, label):
            return torch.mean(torch.sum(torch.abs(output - label)))

        inp = input_var(seq_num)
        input = Variable(inp).cuda()
        lab = label_var(seq_num)
        label = Variable(lab).cuda()
        if seq_num == 0:
            # Ensure clear gradient buffer
            self.optimizer.zero_grad()
            self.loss_tot = [0 for i in range(self.hidden_size)]
            # Label concatenation
            self.label_all = label
            # LSTM
            output, hn = self.net.rnn(input, (self.h0, self.c0))
            output = 100 * output
            op = [output[:, :, i] for i in range(self.hidden_size)]
            self.output_all = op
            # print('1 step length:', len(self.output_all))
            self.h, self.c = hn
            print('Done', i)
        else:
            self.label_all = torch.cat((self.label_all, label), 0)
            output, hn = self.net.rnn(input, (self.h, self.c))
            output = 100 * output
            op = [output[:, :, i] for i in range(self.hidden_size)]
            self.h, self.c = hn
            self.output_all = [torch.cat((self.output_all[i], op[i]), 0) for i in range(self.hidden_size)]
            print('Done', i)
            # print('{} step length: {}'.format(i, len(self.output_all)))
            # print('{} step output size: {}'.format(i, output.size()))
            # print(self.output_all[0].size())
        print('-'*10)
        if seq_num == (self.seq_len - 1):
            # Get loss
            self.loss_tot = [self.loss_tot[i] + pseudo_loss(self.output_all[i], self.label_all) for i in range(self.hidden_size)]
            # Append loss
            self.loss_lst.append(self.loss_tot[0].cpu().data.numpy()[0])
            self.loss_lst2.append(self.loss_tot[1].cpu().data.numpy()[0])
            # Backprop
            print(len(self.loss_tot))
            print(self.loss_tot)
            for k in range(self.hidden_size):
                print('backprop', k)
                # print('backprop', k)
                # print(self.loss_tot[k].size())
                self.loss_tot[k].backward(retain_variables=True)
            # Update optimizer
            self.optimizer.step()
        if seq_num == (self.seq_len - 1) and meta_num == (self.n_meta_iter - 1):
            # print(len(self.loss_lst))
            print('Loss 1', self.loss_tot[0].cpu().data.numpy())
            print('Loss 2', self.loss_tot[1].cpu().data.numpy())
            plt.clf()
            plt.plot()
            plt.title('Loss Curve')
            plt.plot(self.loss_lst, label='Hidden 1')
            plt.plot(self.loss_lst2, label='Hidden 2')
            plt.legend(loc='best')
            plt.savefig('loss.png')

    def lstm_check(self, seq_num):
        inp = input_var(seq_num)
        input = Variable(inp).cuda()
        lab = label_var(seq_num)
        label = Variable(lab).cuda()
        if seq_num == 0:
            # Ensure clear gradient buffer
            self.optimizer.zero_grad()
            self.loss_tot = [0 for i in range(self.hidden_size)]
            # Label concatenation
            self.label_all = label
            # LSTM
            output, hn = self.net.rnn(input, (self.h0, self.c0))
            output = 100 * output
            op = [output[:, :, i] for i in range(self.hidden_size)]
            self.output_all = op
            self.h, self.c = hn
        else:
            self.label_all = torch.cat((self.label_all, label), 0)
            output, hn = self.net.rnn(input, (self.h, self.c))
            output = 100 * output
            op = [output[:, :, i] for i in range(self.hidden_size)]
            self.h, self.c = hn
            self.output_all = [torch.cat((self.output_all[i], op[i]), 0) for i in range(self.hidden_size)]
        if seq_num == (self.seq_len - 1):
            print('-' * 10)
            print(self.output_all[0].cpu().data.numpy())
            print(self.label_all.cpu().data.numpy())
            print('-' * 10)
            print(self.output_all[1].cpu().data.numpy())
            print(self.label_all.cpu().data.numpy())

N_meta = 100
LR_meta = 0.1
N_seq = 4
batch_size = 1
layers = 4
input_size = 1
hidden_size = 10

# Initialize and assign class to object once
# input_size, hidden_size, num_layers, bias, dropout, seq_len, batch_size, meta_lr, n_meta_iter):
print 'Initializing LSTM'
lstm = lstmModule(input_size, hidden_size, layers, True, 0.1, N_seq, batch_size, LR_meta, N_meta)
print 'Initialized LSTM'

# Run through meta iterations
print 'Training'
for j in range(N_meta):
    # Run through each step
    for i in range(N_seq):
        print('i start', i)
        lstm.lstm_forward(i, j)
print 'Done Training'

# Check
print 'Checking'
for i in range(N_seq):
    lstm.lstm_check(i)
print 'Done Checking'
Error:
Traceback (most recent call last):
File "test.py", line 202, in <module>
lstm.lstm_forward(i, j)
File "test.py", line 127, in lstm_forward
self.loss_tot[k].backward(retain_variables=True)
File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 158, in backward
self._execution_engine.run_backward((self,), (gradient,), retain_variables)
File "/usr/local/lib/python2.7/dist-packages/torch/autograd/function.py", line 208, in backward
nested_gradients = _unflatten(gradients, self._nested_output)
AttributeError: 'CudnnRNN' object has no attribute '_nested_output'
It works when backpropagating one of the hidden states (the first one) but not the second one onwards.
|
st119772
|
We’re aware of that issue. Right now there’s an error in RNNs that occurs when you try to backprop through them multiple times. However, your code doesn’t need to do that, and would be much more efficient if it didn’t.
Replacing
for k in range(self.hidden_size):
    self.loss_tot[k].backward(retain_variables=True)
with
sum(self.loss_tot).backward()
will be much better. Backproping from a number of losses is equal to backproping from their sum (the gradients are accumulated). Additionally, this will save a lot of computation, because the backward will batch all operations for all the losses and execute them in one go.
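For illustration, here is a minimal, self-contained sketch (with made-up tensors unrelated to the code above) showing that a single backward through the sum accumulates the same gradients as separate backward calls:

import torch
from torch.autograd import Variable

# Two independent losses built from the same leaf variable.
x = Variable(torch.ones(3), requires_grad=True)
losses = [(x * 2).sum(), (x * 3).sum()]

# Single backward through the sum.
sum(losses).backward()
print(x.grad)   # 5 5 5

# Separate backward calls accumulate into the same values.
y = Variable(torch.ones(3), requires_grad=True)
for loss in [(y * 2).sum(), (y * 3).sum()]:
    loss.backward()
print(y.grad)   # 5 5 5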
|
st119773
|
Thanks for the prompt reply on this issue.
Thankfully your recommendation works. Really appreciate it.
Cheers!
Ritchie
|
st119774
|
How can I multiply two 2D tensors (matrices) element-wise? torch.Tensor.cmul is not implemented. There is addcmul, but I am not sure how to use it for this without generating dummy variables.
|
st119775
|
I think that you can just use * such as:
a = torch.range(0, 99).view(10, 10)
b = torch.range(0, 99).view(10, 10)
c = a * b
|
st119776
|
Also, functionality of cmul is now merged into mul - you can use it both with tensors and scalars.
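For example (a small sketch):

import torch

a = torch.range(0, 99).view(10, 10)
b = torch.range(0, 99).view(10, 10)

c = a * b       # element-wise product
d = a.mul(b)    # same thing, tensor argument
e = a.mul(0.5)  # scalar argument also works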
|
st119777
|
I am trying to sum two tensors with dimensions:
a: 10 x 49 x 1024
b: 10 x 1024
Using the following code:
a + b.unsqueeze(1)
But it seems to expect both inputs to have the same dimensions, resulting in a RuntimeError:
RuntimeError: inconsistent tensor size at home/soumith/local/builder/wheel/pytorch-src/torch/lib/TH/generic/THTensorMath.c:601
Thanks
|
st119778
|
Don’t hate me. In Tensorflow, the following code works:
a = tf.placeholder(tf.float32, [10,49,1024])
b = tf.placeholder(tf.float32, [10,1024])
b + tf.expand_dims(a, 1)
I’ll replace my code using the following:
c = a + b.unsqueeze(1).repeat(1,a.size(1),1)
Correct?
|
st119779
|
I’m not hating anyone! I was just asking, sorry if you understood it that way.
Also, I’ve just realized that I have misread your question!
This should do it for you:
c = a + b.unsqueeze(1).expand_as(a)
It’s better to use expand in most cases, as it doesn’t allocate any new memory, while repeat does.
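A small sketch (with random tensors of the sizes from the question) showing that the two approaches give the same result, while expand avoids the extra allocation:

import torch

a = torch.randn(10, 49, 1024)
b = torch.randn(10, 1024)

c1 = a + b.unsqueeze(1).expand_as(a)              # view with stride 0, no extra memory for b
c2 = a + b.unsqueeze(1).repeat(1, a.size(1), 1)   # materializes a 10 x 49 x 1024 copy of b

print((c1 - c2).abs().max())  # 0.0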
|
st119780
|
Don’t hate me refers to the comparison with “competitors”
Thank you for your help!
|
st119781
|
After I wrapped my model with DataParallel, this error happened:
RuntimeError: Assertion `THCTensor_(checkGPU)(state, 5, input, gradOutput, gradWeight, sorted, indices)’ failed. Some of weight/gradient/input tensors are located on different GPUs. Please move them to a single one. at /home/soumith/local/builder/wheel/pytorch-src/torch/lib/THCUNN/generic/LookupTable.cu:17
My model includes an embedding() layer.
Is this caused by embedding()?
If so, any suggestion on how to do multi-gpu properly with embedding() layers inside the model?
Thanks
|
st119782
|
Yes, the embedding doesn't work with multi-GPU due to a bug. There's a PR that fixes it, but it needs some small changes.
|
st119783
|
Has the bug been fixed? I am hitting the same error:
RuntimeError: Assertion `THCTensor_(checkGPU)(state, 5, input, gradOutput, gradWeight, sorted, indices)’ failed. Some of weight/gradient/input tensors are located on different GPUs. Please move them to a single one. at /data/plat/peakzeng/solfware/pytorch/torch/lib/THCUNN/generic/LookupTable.cu:17
|
st119784
|
This is merged now. But you need to build from source if you need this change right away.
|
st119785
|
Hi, I was wondering if there is anything wrong with the example below. Basically I need several processes to enqueue tensors in a shared torch.multiprocessing.Queue. I figured to ask here first before posting an issue on github.
import torch
import torch.multiprocessing as mp
def put_in_q(idx, q):
    q.put(torch.IntTensor(2, 2).fill_(idx))
    # q.put(idx)  # works with int, float, str, np.ndarray, but not torch.Tensor
q = mp.Queue()
p = mp.Process(target=put_in_q, args=(0, q))
p.start()
x = q.get()
print(x)
p.join()
The error I get:
Traceback (most recent call last):
File "test_torch_queue.py", line 15, in <module>
x = q.get()
File "/home/florin/Tools/anaconda3/lib/python3.6/multiprocessing/queues.py", line 113, in get
return _ForkingPickler.loads(res)
File "/home/florin/Tools/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/reductions.py", line 72, in rebuild_storage_fd
fd = df.detach()
File "/home/florin/Tools/anaconda3/lib/python3.6/multiprocessing/resource_sharer.py", line 57, in detach
with _resource_sharer.get_connection(self._id) as conn:
File "/home/florin/Tools/anaconda3/lib/python3.6/multiprocessing/resource_sharer.py", line 87, in get_connection
c = Client(address, authkey=process.current_process().authkey)
File "/home/florin/Tools/anaconda3/lib/python3.6/multiprocessing/connection.py", line 493, in Client
answer_challenge(c, authkey)
File "/home/florin/Tools/anaconda3/lib/python3.6/multiprocessing/connection.py", line 732, in answer_challenge
message = connection.recv_bytes(256) # reject large message
File "/home/florin/Tools/anaconda3/lib/python3.6/multiprocessing/connection.py", line 216, in recv_bytes
buf = self._recv_bytes(maxlength)
File "/home/florin/Tools/anaconda3/lib/python3.6/multiprocessing/connection.py", line 407, in _recv_bytes
buf = self._recv(4)
File "/home/florin/Tools/anaconda3/lib/python3.6/multiprocessing/connection.py", line 379, in _recv
chunk = read(handle, remaining)
ConnectionResetError: [Errno 104] Connection reset by peer
Thanks!
|
st119786
|
The subprocess needs to be alive at the time when the master process receives the Tensor. There are two ways to fix that:
Use an Event to synchronize processes
Use file_system sharing strategy (not recommended)
This example should work:
import torch
import torch.multiprocessing as mp
def put_in_q(idx, q, evt):
    q.put(torch.IntTensor(2, 2).fill_(idx))
    evt.wait()
q = mp.Queue()
evt = mp.Event()
p = mp.Process(target=put_in_q, args=(0, q, evt))
p.start()
x = q.get()
evt.set()
print(x)
p.join()
To synchronize a larger number of processes you can use mp.Barrier (available only in Python3).
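For reference, a sketch of the Event-based approach extended to several producer processes (the worker is the same as above; the number of processes is arbitrary):

import torch
import torch.multiprocessing as mp

def put_in_q(idx, q, evt):
    q.put(torch.IntTensor(2, 2).fill_(idx))
    evt.wait()  # keep the producer alive until the consumer is done

if __name__ == '__main__':
    q = mp.Queue()
    evt = mp.Event()
    procs = [mp.Process(target=put_in_q, args=(i, q, evt)) for i in range(4)]
    for p in procs:
        p.start()
    tensors = [q.get() for _ in procs]  # receive while the producers are still alive
    evt.set()                           # now let the producers exit
    for p in procs:
        p.join()
    print(tensors)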
|
st119787
|
Hi,
Awesome library! I'd like to ask if it is possible to save a trained PyTorch model as a "*.t7" file and read it in (Lua) Torch.
Thanks
|
st119788
|
No, we don’t have such option. PyTorch allows for creating much more complex models than Lua Torch without a need to define a lot of helpers, and because of that it’s not easy to translate models in this direction.
|
st119789
|
I'm using this command to install: pip install https://s3.amazonaws.com/pytorch/whl/cu75/torch-0.1.8.post1-cp27-none-linux_x86_64.whl. But it throws an error. How can I fix it? Thanks.
Exception:
Traceback (most recent call last):
File “/usr/lib/python2.7/site-packages/pip/basecommand.py”, line 215, in main
status = self.run(options, args)
File “/usr/lib/python2.7/site-packages/pip/commands/install.py”, line 324, in run
requirement_set.prepare_files(finder)
File “/usr/lib/python2.7/site-packages/pip/req/req_set.py”, line 380, in prepare_files
ignore_dependencies=self.ignore_dependencies))
File “/usr/lib/python2.7/site-packages/pip/req/req_set.py”, line 620, in _prepare_file
session=self.session, hashes=hashes)
File “/usr/lib/python2.7/site-packages/pip/download.py”, line 821, in unpack_url
hashes=hashes
File “/usr/lib/python2.7/site-packages/pip/download.py”, line 659, in unpack_http_url
hashes)
File “/usr/lib/python2.7/site-packages/pip/download.py”, line 882, in _download_http_url
_download_url(resp, link, content_file, hashes)
File “/usr/lib/python2.7/site-packages/pip/download.py”, line 605, in _download_url
consume(downloaded_chunks)
File “/usr/lib/python2.7/site-packages/pip/utils/init.py”, line 852, in consume
deque(iterator, maxlen=0)
File “/usr/lib/python2.7/site-packages/pip/download.py”, line 571, in written_chunks
for chunk in chunks:
File “/usr/lib/python2.7/site-packages/pip/utils/ui.py”, line 139, in iter
for x in it:
File “/usr/lib/python2.7/site-packages/pip/download.py”, line 560, in resp_read
decode_content=False):
File “/usr/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/response.py”, line 357, in stream
data = self.read(amt=amt, decode_content=decode_content)
File “/usr/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/response.py”, line 324, in read
flush_decoder = True
File “/usr/lib64/python2.7/contextlib.py”, line 35, in exit
self.gen.throw(type, value, traceback)
File “/usr/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/response.py”, line 246, in _error_catcher
raise ReadTimeoutError(self._pool, None, ‘Read timed out.’)
ReadTimeoutError: HTTPSConnectionPool(host=‘s3.amazonaws.com’, port=443): Read timed out.
|
st119790
|
It’s a network error. Probably there’s a problem with your internet connection, or with a proxy if you use one.
|
st119791
|
I'm trying to get a pre-trained network to output something that makes sense… but I'm having some trouble.
Here’s the snippet.
# get input image
import skimage.io
import os
file_name = '26132.jpg'
if not os.access(file_name, os.R_OK):
    file_URL = 'http://www.zooclub.ru/attach/26000/26132.jpg'
    os.system('wget ' + file_URL)
img = skimage.io.imread(file_name)
# get model
import torchvision
resnet_18 = torchvision.models.resnet18(pretrained=True)
# get classes
file_name = 'synset_words.txt'
if not os.access(file_name, os.W_OK):
    synset_URL = 'https://github.com/szagoruyko/functional-zoo/raw/master/synset_words.txt'
    os.system('wget ' + synset_URL)
classes = list()
with open(file_name) as class_file:
    for line in class_file:
        classes.append(line.strip().split(' ', 1)[1].split(', ', 1)[0])
classes = tuple(classes)
# define image transformation
from torchvision import transforms as trn
centre_crop = trn.Compose([
    trn.ToPILImage(),
    trn.Scale(256),
    trn.CenterCrop(224),
    trn.ToTensor(),
    trn.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
# get top 5 probabilities
from torch.autograd import Variable as V
from torch.nn import functional as f
x = V(centre_crop(img).unsqueeze(0), volatile=True)
logit = resnet_18.forward(x)
h_x = f.softmax(logit).data.squeeze()
probs, idx = h_x.sort(0, True)
for i in range(0, 5):
    print('{:.3f} -> {}'.format(probs[i], classes[idx[i]]))
And this is the output
0.009 -> bucket
0.007 -> plunger
0.006 -> hook
0.005 -> water bottle
0.005 -> water jug
which should be, instead, roughly
0.99 -> German shepherd
0.01 -> malinois
0.00 -> Norwegian elkhound
0.00 -> Leonberg
0.00 -> red wolf
And this is the input picture.
|
st119792
|
It’s missing a call to resnet_18.eval(). Otherwise, it’s in training mode and batch normalization behaves differently.
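A minimal sketch of the fix, applied to the snippet above:

# Switch the pretrained model to inference mode before the forward pass,
# so batch normalization uses its running statistics instead of batch stats.
resnet_18.eval()

x = V(centre_crop(img).unsqueeze(0), volatile=True)
h_x = f.softmax(resnet_18(x)).data.squeeze()
probs, idx = h_x.sort(0, True)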
|
st119793
|
Pretrained networks saved in train mode?
OK, this was hard to guess.
Sweet. Something works today, finally. Phew.
Thank you, @colesbury.
0.935 -> German shepherd
0.033 -> Leonberg
0.031 -> malinois
0.000 -> Norwegian elkhound
0.000 -> African hunting dog
What do you think, shall we embed the classes tuple, input image size, and the normalisation settings in the network object and make it in eval mode by default?
This code above should really be a oneliner, IMO.
At least I should send a PR with this example to be included in torchvision documentation, right?
|
st119794
|
I tried the ImageNet example with ResNet152 on 8 GPUs, but it is much slower than fb.resnet.torch (1.5s vs 0.8s per iteration).
The replicate step in DataParallel could be the bottleneck that costs half of the forward time. The Broadcast function is called for every parameter and buffer, while in fb.resnet.torch the parameters are flattened first and bcast is only called once.
It's elegant to implement Broadcast as an Op/Function. I wonder if it is possible to overlap the communication with computation during forward/backward? Or is it necessary to flatten the parameters in order to improve the efficiency?
|
st119795
|
We're still working on that. It's true that replicate can add some overhead for larger networks, and that's why we also have a DataParallel module. The overhead of separate broadcasts isn't very large from what we've seen. Nevertheless, if it turns out to be quite high, we're going to overlap the transfers with the computation, so the overhead will be even smaller than if we were to sync the flattened parameters.
We don't support flattening the parameters. It's quite complex and bug-prone to do that correctly and maintain flexibility.
The main problem is that with 8 GPUs we're a bit limited because of Python's GIL (only one thread can execute Python code at a time). That's why we're working on moving more of the logic commonly used in vision networks, as well as some other logic, to our C backends, so they can proceed without blocking other threads.
|
st119796
|
How do I use "retain_variables" in a Variable's backward function?
I tried the following code:
import torch
from torch.autograd import Variable
x = Variable(torch.ones(2, 2), requires_grad = True)
y = x + 2
y.backward(torch.ones(2, 2), retain_variables=True )
print "first gradient of x is:"
print x.grad
z = y * y
gradient = torch.ones(2, 2)
z.backward(gradient)
print "second gradient of x is:"
print x.grad
import torch
from torch.autograd import Variable
x = Variable(torch.ones(2, 2), requires_grad = True)
y = x + 2
y.backward(torch.ones(2, 2), retain_variables=False)
print "first gradient of x is:"
print x.grad
z = y * y
gradient = torch.ones(2, 2)
z.backward(gradient)
print "second gradient of x is:"
print x.grad
Both print the same results:
first gradient of x is:
Variable containing:
1 1
1 1
[torch.FloatTensor of size 2x2]
second gradient of x is:
Variable containing:
7 7
7 7
[torch.FloatTensor of size 2x2]
|
st119797
|
Hi,
According to http://pytorch.org/docs/autograd.html#torch.autograd.Variable.backward this flag is used to prevent any buffer from being freed during the backprop (this is usually done to reduce the memory requirements).
In practice, that means that calling y.backward twice is only possible if the first one was done with retain_variables=True.
You can see this behaviour in the sample below when switching retain_variables between True and False:
import torch
from torch.autograd import Variable
x = Variable(torch.ones(2, 2), requires_grad = True)
y = x ** 2
y.backward(torch.ones(2, 2), retain_variables=False)
print "first backward of x is:"
print x.grad
y.backward(2*torch.ones(2, 2), retain_variables=False)
print "second backward of x is:"
print x.grad
|
st119798
|
Hi. I’m trying to understand something…
In the word_language_model example, the network is trained on “data” sequences which are args.bptt long, which are by default 20 words long (in batches which are also 20 by default):
output, hidden = model(data, hidden)
And then in generate.py, you load the same model via the checkpoint file, but the starting "input" is only one word long:
input = Variable(torch.rand(1, 1).mul(ntokens).long(), volatile=True)
and then you predict a new word via…
output, hidden = model(input, hidden)
How is this possible? If the model is expecting 20 inputs, shouldn’t it produce an error when you try to send it only 1?
Furthermore, when I try to actually send the generation code a sequence of length 20 by creating…
input = corpus.test[0:20]
print("input =",input)
Then I get…
('input = ',
142
78
54
251
2360
405
24
315
706
32
101
934
935
936
874
251
572
5564
2680
34
[torch.LongTensor of size 20]
)
Traceback (most recent call last):
File “generate.py 7”, line 85, in
output, hidden = model(input, hidden)
File “/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py”, line 210, in call
result = self.forward(*input, **kwargs)
File “/home/mcskwayrd/neural/torch/pytorch/examples/word_language_model/model.py”, line 27, in forward
emb = self.encoder(input)
File “/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py”, line 210, in call
result = self.forward(*input, **kwargs)
File “/usr/local/lib/python2.7/dist-packages/torch/nn/modules/sparse.py”, line 94, in forward
)(input, self.weight)
RuntimeError: expected a Variable argument, but got LongTensor
And if instead I use the get_batch() method, as it was used in main.py…
corpus = data.Corpus(args.data)
ntokens = len(corpus.dictionary)
hidden = model.init_hidden(1)
def batchify(data, bsz):  # breaks into parallel streams
    nbatch = data.size(0) // bsz
    data = data.narrow(0, 0, nbatch * bsz)
    data = data.view(bsz, -1).t().contiguous()
    if args.cuda:
        data = data.cuda()
    return data

eval_batch_size = 10
test_data = batchify(corpus.test, eval_batch_size)

def get_batch(source, i, evaluation=False):
    bptt = 20
    seq_len = min(bptt, len(source) - 1 - i)
    data = Variable(source[i:i+seq_len], volatile=evaluation)
    target = Variable(source[i+1:i+1+seq_len].view(-1))
    return data, target
#input = Variable(torch.rand(1, 1).mul(ntokens).long(), volatile=True)
input, targets = get_batch(test_data, 0, evaluation=True)
Then I when I get to the prediction step (i.e., " output, hidden = model(input, hidden)" ), I get the error…
Traceback (most recent call last):
File “generate.py 7”, line 96, in
output, hidden = model(input, hidden)
File “/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py”, line 210, in call
result = self.forward(*input, **kwargs)
File “/home/mcskwayrd/neural/torch/pytorch/examples/word_language_model/model.py”, line 28, in forward
output, hidden = self.rnn(emb, hidden)
File “/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py”, line 210, in call
result = self.forward(*input, **kwargs)
File “/usr/local/lib/python2.7/dist-packages/torch/nn/modules/rnn.py”, line 79, in forward
return func(input, self.all_weights, hx)
File “/usr/local/lib/python2.7/dist-packages/torch/nn/_functions/rnn.py”, line 228, in forward
return func(input, *fargs, **fkwargs)
File “/usr/local/lib/python2.7/dist-packages/torch/nn/_functions/rnn.py”, line 138, in forward
nexth, output = func(input, hidden, weight)
File “/usr/local/lib/python2.7/dist-packages/torch/nn/_functions/rnn.py”, line 67, in forward
hy, output = inner(input, hidden[l], weight[l])
File “/usr/local/lib/python2.7/dist-packages/torch/nn/_functions/rnn.py”, line 96, in forward
hidden = inner(input[i], hidden, *weight)
File “/usr/local/lib/python2.7/dist-packages/torch/nn/_functions/rnn.py”, line 22, in LSTMCell
gates = F.linear(input, w_ih, b_ih) + F.linear(hx, w_hh, b_hh)
File “/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py”, line 748, in add
return self.add(other)
File “/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py”, line 288, in add
return self._add(other, False)
File “/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py”, line 282, in _add
return Add(inplace)(self, other)
File “/usr/local/lib/python2.7/dist-packages/torch/autograd/_functions/basic_ops.py”, line 13, in forward
return a.add(b)
RuntimeError: inconsistent tensor size at /home/soumith/local/builder/wheel/pytorch-src/torch/lib/TH/generic/THTensorMath.c:601
Confused: how does sending 20-word sequences work in main.py but fail in generate.py?
PS: I see the documentation for torch.nn.RNN says the input is supposed to be a Tensor, which is just what I'm sending. It didn't say anything about needing a Variable or other "matrix":
input (seq_len, batch, input_size): tensor containing the features of the input sequence.
Thanks!
|
st119799
|
Yes, the documentation is wrong - all these arguments should be torch.autograd.Variables.
The confusion comes from a dynamic vs. static graph framework. PyTorch constructs the graph every time, so it doesn't care in advance what length of sequence you will be using with the RNN. The only arguments that you have to pass to the constructor of the RNN are how many features the input should have and what the hidden layer size is. Then you can use sequences of different lengths at every iteration, and it should work just fine.
One trick that can lower the memory usage is to forward a fake batch of the size of the longest sequence before training. This will allow our CUDA allocator to preallocate memory that can be reused for all (smaller) batches.
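A rough sketch of that warm-up trick, with made-up sizes (the exact shapes depend on your model, and this assumes a CUDA device is available):

import torch
import torch.nn as nn
from torch.autograd import Variable

rnn = nn.LSTM(input_size=128, hidden_size=256, num_layers=2).cuda()

max_len, batch = 50, 32   # longest sequence and batch size you expect
fake = Variable(torch.zeros(max_len, batch, 128).cuda(), volatile=True)
h0 = Variable(torch.zeros(2, batch, 256).cuda(), volatile=True)
c0 = Variable(torch.zeros(2, batch, 256).cuda(), volatile=True)

rnn(fake, (h0, c0))  # output is discarded; this only warms up the allocator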
|
st119800
|
Thanks for writing back Adam. So this is the “flexible input size” feature I’ve been hearing so much about. Great!
If I may ask a related question then: if I actually wanted to try to start the generator code using a sequence that is 20 timesteps long, using data from the “test” dataset as in the two attempts I listed above, how would you make it so that “model” would accept that input?
I tried converting “input” to a variable (in generate.py)…
eval_batch_size = 10
test_data = batchify(corpus.test, eval_batch_size)
def get_batch(source, i, evaluation=False):
    bptt = 20
    seq_len = min(bptt, len(source) - 1 - i)
    data = Variable(source[i:i+seq_len], volatile=evaluation)
    target = Variable(source[i+1:i+1+seq_len].view(-1))
    return data, target
input, target = get_batch(test_data, 0, evaluation=True)
#input = Variable(torch.rand(1, 1).mul(ntokens).long(), volatile=True)
input = Variable(input, volatile=True)
…but when I do that, I get the error that it’s already a Variable (presumably because of the cast in batchify):
Traceback (most recent call last):
File “generate.py”, line 83, in
input = Variable(input, volatile=True)
RuntimeError: Variable data has to be a tensor, but got Variable
But if it’s already a Variable, then I don’t understand why I can’t use it as an input to “model” further below. (?)
Because if I don’t include that extra “Variable” re-casting, then still I get “inconsistent tensor size”…
Traceback (most recent call last):
File “generate.py”, line 90, in
output, hidden = model(input, hidden)
File “/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py”, line 210, in call
result = self.forward(*input, **kwargs)
File “/home/mcskwayrd/neural/torch/pytorch/examples/word_language_model/model.py”, line 28, in forward
output, hidden = self.rnn(emb, hidden)
File “/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py”, line 210, in call
result = self.forward(*input, **kwargs)
File “/usr/local/lib/python2.7/dist-packages/torch/nn/modules/rnn.py”, line 79, in forward
return func(input, self.all_weights, hx)
File “/usr/local/lib/python2.7/dist-packages/torch/nn/_functions/rnn.py”, line 228, in forward
return func(input, *fargs, **fkwargs)
File “/usr/local/lib/python2.7/dist-packages/torch/nn/_functions/rnn.py”, line 138, in forward
nexth, output = func(input, hidden, weight)
File “/usr/local/lib/python2.7/dist-packages/torch/nn/_functions/rnn.py”, line 67, in forward
hy, output = inner(input, hidden[l], weight[l])
File “/usr/local/lib/python2.7/dist-packages/torch/nn/_functions/rnn.py”, line 96, in forward
hidden = inner(input[i], hidden, *weight)
File “/usr/local/lib/python2.7/dist-packages/torch/nn/_functions/rnn.py”, line 22, in LSTMCell
gates = F.linear(input, w_ih, b_ih) + F.linear(hx, w_hh, b_hh)
File “/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py”, line 748, in add
return self.add(other)
File “/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py”, line 288, in add
return self._add(other, False)
File “/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py”, line 282, in _add
return Add(inplace)(self, other)
File “/usr/local/lib/python2.7/dist-packages/torch/autograd/_functions/basic_ops.py”, line 13, in forward
return a.add(b)
RuntimeError: inconsistent tensor size at /home/soumith/local/builder/wheel/pytorch-src/torch/lib/TH/generic/THTensorMath.c:601
What is the source of this inconsistency, if the input length isn’t supposed to matter?
|
st119801
|
You shouldn’t rewrap the input into Variable again. get_batch already does it for you. It’s weird that you’re getting that error though. It seems that there’s some problem with the network definition. Are you using a model trained with main.py earlier?
|
st119802
|
Ok, removed the rewrap.
Yes, I’ve run main.py which finishes and saves a model.pt file, then I immediately run generate.py.
The only difference is, I replaced one line in generate.py ("input = Variable") with code borrowed from main.py:
corpus = data.Corpus(args.data)
ntokens = len(corpus.dictionary)
hidden = model.init_hidden(1)
# input = Variable(torch.rand(1, 1).mul(ntokens).long(), volatile=True)
def batchify(data, bsz):  # breaks into parallel streams
    nbatch = data.size(0) // bsz
    data = data.narrow(0, 0, nbatch * bsz)
    data = data.view(bsz, -1).t().contiguous()
    if args.cuda:
        data = data.cuda()
    return data

eval_batch_size = 10
test_data = batchify(corpus.test, eval_batch_size)

def get_batch(source, i, evaluation=False):
    bptt = 20
    seq_len = min(bptt, len(source) - 1 - i)
    data = Variable(source[i:i+seq_len], volatile=evaluation)
    target = Variable(source[i+1:i+1+seq_len].view(-1))
    return data, target
input, target = get_batch(test_data, 0, evaluation=True)
…and the rest of generate.py is unchanged from your original.
Running this version of the code produces an error about hidden size…
Traceback (most recent call last):
File “generate.py 1”, line 93, in
output, hidden = model(input, hidden)
File “/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py”, line 210, in call
result = self.forward(*input, **kwargs)
File “/home/mcskwayrd/neural/torch/pytorch/examples/word_language_model/model.py”, line 28, in forward
output, hidden = self.rnn(emb, hidden)
File “/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py”, line 210, in call
result = self.forward(*input, **kwargs)
File “/usr/local/lib/python2.7/dist-packages/torch/nn/modules/rnn.py”, line 79, in forward
return func(input, self.all_weights, hx)
File “/usr/local/lib/python2.7/dist-packages/torch/nn/_functions/rnn.py”, line 228, in forward
return func(input, *fargs, **fkwargs)
File “/usr/local/lib/python2.7/dist-packages/torch/autograd/function.py”, line 202, in _do_forward
flat_output = super(NestedIOFunction, self)._do_forward(*flat_input)
File “/usr/local/lib/python2.7/dist-packages/torch/autograd/function.py”, line 218, in forward
result = self.forward_extended(*nested_tensors)
File “/usr/local/lib/python2.7/dist-packages/torch/nn/_functions/rnn.py”, line 180, in forward_extended
cudnn.rnn.forward(self, input, hx, weight, output, hy)
File “/usr/local/lib/python2.7/dist-packages/torch/backends/cudnn/rnn.py”, line 244, in forward
hidden_size, tuple(hx.size())))
RuntimeError: Expected hidden size (2, 10L, 200), got (2L, 1L, 200L)
It seems that it wants the hidden size to somehow follow the batch size, only with no "L"s…
|
st119803
|
Ah, I see the problem. You've increased the input size, but you haven't changed the hidden = ... part, so the hidden state is too small. It's not the Ls that make the difference - the second dimension was expected to be 10, but is 1.
|
st119804
|
Right, OK. I needed to set eval_batch_size = 1, and I can keep hidden = model.init_hidden(1). That makes the dimensions agree.
The only issue is that "output" then ends up being [20 x 1 x 10000] instead of [1 x 1 x 10000] as the remainder of the code expects, so I grab only the last element of output via
output = output[-1]
The following, then, is some working code for generate.py that feeds it an initial sequence of length 20! Thanks for your help!
###############################################################################
# Language Modeling on Penn Tree Bank
#
# This file generates new sentences sampled from the language model
#
###############################################################################
import argparse
import time
import math
import torch
import torch.nn as nn
from torch.autograd import Variable
import data
parser = argparse.ArgumentParser(description='PyTorch PTB Language Model')
# Model parameters.
parser.add_argument('--data', type=str, default='./data/penn',
                    help='location of the data corpus')
parser.add_argument('--checkpoint', type=str, default='./model.pt',
                    help='model checkpoint to use')
parser.add_argument('--outf', type=str, default='generated.txt',
                    help='output file for generated text')
parser.add_argument('--words', type=int, default='1000',
                    help='number of words to generate')
parser.add_argument('--seed', type=int, default=1111,
                    help='random seed')
parser.add_argument('--cuda', action='store_true',
                    help='use CUDA')
parser.add_argument('--temperature', type=float, default=1.0,
                    help='temperature - higher will increase diversity')
parser.add_argument('--log-interval', type=int, default=100,
                    help='reporting interval')
args = parser.parse_args()

# Set the random seed manually for reproducibility.
torch.manual_seed(args.seed)
if torch.cuda.is_available():
    if not args.cuda:
        print("WARNING: You have a CUDA device, so you should probably run with --cuda")
    else:
        torch.cuda.manual_seed(args.seed)

if args.temperature < 1e-3:
    parser.error("--temperature has to be greater or equal 1e-3")

with open(args.checkpoint, 'rb') as f:
    model = torch.load(f)
if args.cuda:
    model.cuda()
else:
    model.cpu()

def batchify(data, bsz):  # breaks into parallel streams
    nbatch = data.size(0) // bsz
    data = data.narrow(0, 0, nbatch * bsz)
    data = data.view(bsz, -1).t().contiguous()
    if args.cuda:
        data = data.cuda()
    return data

corpus = data.Corpus(args.data)
ntokens = len(corpus.dictionary)

eval_batch_size = 1
test_data = batchify(corpus.test, eval_batch_size)
hidden = model.init_hidden(1)

def get_batch(source, i, evaluation=False):
    bptt = 20
    seq_len = min(bptt, len(source) - 1 - i)
    data = Variable(source[i:i+seq_len], volatile=evaluation)
    target = Variable(source[i+1:i+1+seq_len].view(-1))
    return data, target

input, target = get_batch(test_data, 0, evaluation=True)
# input = Variable(torch.rand(1, 1).mul(ntokens).long(), volatile=True)
# print("input = ", input)

if args.cuda:
    input.data = input.data.cuda()

with open(args.outf, 'w') as outf:
    for i in range(args.words):
        output, hidden = model(input, hidden)
        output = output[-1]
        # print("output = ", output)
        word_weights = output.squeeze().data.div(args.temperature).exp().cpu()
        word_idx = torch.multinomial(word_weights, 1)[0]
        input.data.fill_(word_idx)
        word = corpus.dictionary.idx2word[word_idx]
        outf.write(word + ('\n' if i % 20 == 19 else ' '))
        if i % args.log_interval == 0:
            print('| Generated {}/{} words'.format(i, args.words))
print(" ")
|
st119805
|
Actually that might not be what you want. You want to pass the long input only once, to initialize the hidden state, and then do the generation steps one by one. In this example you forward a sequence of 20 words from the data once, but every subsequent input is still of length 20, filled with the last sampled word repeated 20 times. You should forward the batch through the network only once and keep the last hidden state. Then, use that state with an input of length 1 to generate the data.
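A rough sketch of that flow, reusing the names from the script above (model, input, hidden, args, corpus), so treat it as a sketch rather than a drop-in replacement:

# Prime the model once with the 20 x 1 prompt and keep the final hidden state.
hidden = model.init_hidden(1)
output, hidden = model(input, hidden)
word_weights = output[-1].squeeze().data.div(args.temperature).exp().cpu()
word_idx = torch.multinomial(word_weights, 1)[0]

# From now on feed a single word (1 x 1) per step.
inp = Variable(torch.LongTensor([[word_idx]]), volatile=True)
if args.cuda:
    inp.data = inp.data.cuda()

words = [corpus.dictionary.idx2word[word_idx]]
for i in range(args.words):
    output, hidden = model(inp, hidden)            # one step per iteration
    word_weights = output.squeeze().data.div(args.temperature).exp().cpu()
    word_idx = torch.multinomial(word_weights, 1)[0]
    inp.data.fill_(word_idx)                       # 1 x 1, so this sets a single value
    words.append(corpus.dictionary.idx2word[word_idx])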
|
st119806
|
Oh, I see. Tensor.data.fill_() just repeats that last value over & over throughout the tensor.
So in my code,
input.data.fill_(word_idx)
was just taking that one value and repeating it.
Got it. I need to re-size input after the first iteration of the loop. I’ll work on that… Thanks.
|
st119807
|
I am trying to recreate a torch7 model architecture in pytorch. I have used several layers of SpatialFullConvolution and as such, was wondering if there is anything analogous to that in PyTorch. I have not been able to find anything similar by name.
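For what it's worth, the PyTorch counterpart of nn.SpatialFullConvolution is nn.ConvTranspose2d. A rough mapping of the constructor arguments (a sketch with made-up sizes):

import torch.nn as nn

# Lua: nn.SpatialFullConvolution(nInputPlane=16, nOutputPlane=8, kW=2, kH=2, dW=2, dH=2)
# maps roughly to:
upconv = nn.ConvTranspose2d(in_channels=16, out_channels=8, kernel_size=2, stride=2)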
|
st119808
|
Okay. I made the changes but now I am getting a runtime error:
RuntimeError: input and target have different number of elements:
input[128 x 1 x 128 x 128] has 2097152 elements, while target[128 x 2 x 128 x 128] has 4194304 element
This is the code for my model architecture:
class ColorizerNet(nn.Module):
    def __init__(self):
        super(ColorizerNet, self).__init__()
        self.layer1 = nn.Conv2d(1, 8, 2, 2)
        self.layer2 = nn.Conv2d(8, 16, 2, 2)
        self.layer3 = nn.ConvTranspose2d(16, 8, 2, 2)
        self.layer4 = nn.ConvTranspose2d(8, 1, 2, 2)

    def forward(self, x):
        x = F.relu(self.layer1(x))
        x = F.relu(self.layer2(x))
        x = F.relu(self.layer3(x))
        x = F.relu(self.layer4(x))
        return x
Am I making any obvious errors here? If required, I can start a separate thread on this followup error.
|
st119809
|
It’s an error in the loss function. Your output’s second dimension has a different size (1) than the target’s (2).
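If the 128 x 2 x 128 x 128 target really encodes two output channels (an assumption about your data), one way to make the shapes agree is to have the last transposed convolution emit two channels. A sketch:

import torch.nn as nn
import torch.nn.functional as F

class ColorizerNet(nn.Module):
    def __init__(self):
        super(ColorizerNet, self).__init__()
        self.layer1 = nn.Conv2d(1, 8, 2, 2)
        self.layer2 = nn.Conv2d(8, 16, 2, 2)
        self.layer3 = nn.ConvTranspose2d(16, 8, 2, 2)
        # 2 output channels instead of 1, assuming the target really has 2 channels
        self.layer4 = nn.ConvTranspose2d(8, 2, 2, 2)

    def forward(self, x):
        x = F.relu(self.layer1(x))
        x = F.relu(self.layer2(x))
        x = F.relu(self.layer3(x))
        x = F.relu(self.layer4(x))
        return x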
|
st119810
|
I made a customized layer for scalar × vector multiplication, where the scalar is a Variable and the vector is fixed.
class mul_scalar(torch.autograd.Function):
    """
    Customized autograd.Function of
        f(T, s) = s * T,
    where T is a fixed Tensor and s is a Variable
    """
    def forward(self, T, s_var):
        self.save_for_backward(T, s_var)
        return T.mul(s_var[0])

    def backward(self, grad_output):
        T, s_var = self.saved_tensors
        return grad_output.mul(s_var[0]), grad_output.dot(T)
I made my nn.Module and declared self.ms = mul_scalar() in nn.Module.__init__.
class Net(nn.Module):
    def __init__(self, var=1):
        super(Net, self).__init__()
        self.ms = mul_scalar()

    def forward(self, x):
        ...
        self.ms(x, w)
        ...
However, when backpropagating, there is an error related to retain variables. How do I declare my own function properly in this case?
As an alternative, I can use it in the forward function as follows (but I want to declare mul_scalar() in __init__):
def forward(self, x):
    c = Variable(torch.FloatTensor([1]), requires_grad=True)
    ms = mul_scalar()
    z1 = ms(x, c)
|
st119811
|
@Ja-Keoung_Koo Because all your operations are already differentiable by autograd, you don’t need to create a new torch.autograd.Function for that.
You can simply define a python function, and it will perform as expected:
def mul_scalar(T, s_var):
    # supposes that T is 1D var
    # need to unsqueeze s_var if not
    return T * s_var.expand_as(T)
|
st119812
|
@Ja-Keoung_Koo also, note that all arguments to forward have to be Variables. If you have any non-variable parameters (like a fixed tensor), you should pass them to the function’s __init__.
|
st119813
|
Thanks for the kind replies. Is it safe to use functions like expand_as, unsqueeze and so on? I wonder whether, when backpropagating, grad_output is automatically resized.
|
st119814
|
Yes, it is safe to use these functions, and they were unit tested so they should be fine.
|
st119815
|
Hi,
I am interested in using the cuda primitives and also DataParallel. Currently I have implemented my network; it has multiple inputs. Here is the train function:
for i, (input, target) in enumerate(train_loader):
    # measure data loading time
    data_time.update(time.time() - end)
    target = target.cuda(async=True)
    input_25 = torch.autograd.Variable(input[0])
    input_51 = torch.autograd.Variable(input[1])
    input_75 = torch.autograd.Variable(input[2])
    target_var = torch.autograd.Variable(target)

    # compute output
    output = model(patch25=input_25, patch51=input_51, patch75=input_75)
This is actually working if I keep everything on the CPU. The first implementation used DataParallel; however, the forward function only takes a single input, not a list or dict, as in the default implementation of nn.Module.forward() - maybe this is an intended choice.
So I tried to just move the network to the GPU:
# basic_conv() returns a nn.Module
net = basic_conv().cuda()
Then I get this error that I cannot interpret myself:
Traceback (most recent call last):
File "read_network.py", line 29, in <module>
net(patch25=in25, patch51=in51, patch75=in75)
File "/home/ganaye/deps/miniconda3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 210, in __call__
result = self.forward(*input, **kwargs)
File "/mnt/hdd/code/scripts/simple_conv.py", line 30, in forward
x_25 = self.conv2d_25_5(x['patch25'])
File "/home/ganaye/deps/miniconda3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 210, in __call__
result = self.forward(*input, **kwargs)
File "/home/ganaye/deps/miniconda3/lib/python3.5/site-packages/torch/nn/modules/conv.py", line 235, in forward
self.padding, self.dilation, self.groups)
File "/home/ganaye/deps/miniconda3/lib/python3.5/site-packages/torch/nn/functional.py", line 37, in conv2d
return f(input, weight, bias) if bias is not None else f(input, weight)
File "/home/ganaye/deps/miniconda3/lib/python3.5/site-packages/torch/nn/_functions/conv.py", line 33, in forward
output = self._update_output(input, weight, bias)
File "/home/ganaye/deps/miniconda3/lib/python3.5/site-packages/torch/nn/_functions/conv.py", line 88, in _update_output
return self._thnn('update_output', input, weight, bias)
File "/home/ganaye/deps/miniconda3/lib/python3.5/site-packages/torch/nn/_functions/conv.py", line 147, in _thnn
return impl[fn_name](self, self._bufs[0], input, weight, *args)
File "/home/ganaye/deps/miniconda3/lib/python3.5/site-packages/torch/nn/_functions/conv.py", line 225, in call_update_output
bias, *args)
TypeError: FloatSpatialConvolutionMM_updateOutput received an invalid combination of arguments - got (int, torch.FloatTensor, torch.FloatTensor, torch.cuda.FloatTensor, torch.cuda.FloatTensor, torch.FloatTensor, torch.FloatTensor, int, int, int, int, int, int), but expected (int state, torch.FloatTensor input, torch.FloatTensor output, torch.FloatTensor weight, [torch.FloatTensor bias or None], torch.FloatTensor finput, torch.FloatTensor fgradInput, int kW, int kH, int dW, int dH, int padW, int padH)
It seems the error is coming from here, this is the forward call of my network:
def forward(self, **x):
    # patch of size 25
    x_25 = self.conv2d_25_5(x['patch25'])
    x_25 = F.max_pool2d(x_25, 2, stride=1, padding=0)
I don't get why it would work in CPU mode and not on the GPU - the only thing I changed is calling cuda() on the network.
Help!
Thanks
|
st119816
|
Ok…
The input Variables were not transferred to the GPU; I assumed the DataLoader was doing this implicitly. So the cuda problem is solved, thanks!
Any ideas are welcome on how to use DataParallel with multiple inputs!
|
st119817
|
Modules can take as many parameters as you want, they’re not restricted to a single one. DataLoader never transfers the data to the GPU for you, so you have to do it manually. What’s the problem with the DataLoader with multiple inputs? Your dataset can return more than 2 values per index.
|
st119818
|
Sorry, I meant DataParallel instead of DataLoader. I would like to give multiple inputs to DataParallel. I will probably need to modify my network for each input to be distributed to a DataParallel.
Can you explain this code extracted from the imagenet example :
if args.arch.startswith('alexnet') or args.arch.startswith('vgg'):
    model.features = torch.nn.DataParallel(model.features)
    model.cuda()
else:
    model = torch.nn.DataParallel(model).cuda()
Is there a specific reason to separate the classifier and the features in the alexnet and vgg models ?
Why not give the whole model to DataParallel, like in the resnet model?
Thanks
|
st119819
|
I think the reason is that data parallelism is more efficient for convolutional layers. This is explained in https://arxiv.org/abs/1404.5997
|
st119820
|
@trypag we should support having multiple inputs to DataParallel. I've opened an issue for this.
|
st119821
|
I implemented data_parallel with two inputs, but it does not work:
def data_parallel2(module, input1, input2, device_ids, output_device=None):
    """Evaluates module(input) in parallel across the GPUs given in device_ids.

    This is the functional version of the DataParallel module.

    Args:
        module: the module to evaluate in parallel
        input: input to the module
        device_ids: GPU ids on which to replicate module
        output_device: GPU location of the output. Use -1 to indicate the CPU.
            (default: device_ids[0])

    Returns:
        a Variable containing the result of module(input) located on
        output_device
    """
    if not device_ids:
        return module(input1, input2)

    if output_device is None:
        output_device = device_ids[0]

    replicas = replicate(module, device_ids)
    input1s = scatter(input1, device_ids)
    input2s = scatter(input2, device_ids)
    replicas = replicas[:len(input1s)]
    outputs = parallel_apply2(replicas, input1s, input2s)
    return gather(outputs, output_device)


def parallel_apply2(modules, input1s, input2s):
    assert len(modules) == len(input1s)

    # Fast track
    if len(modules) == 1:
        return (modules[0](input1s[0], input2s[0]),)

    lock = threading.Lock()
    results = {}

    def _worker(module, input1, input2, results, lock):
        var_input1 = input1
        var_input2 = input2
        while not isinstance(var_input1, Variable):
            var_input1 = var_input1[0]
        while not isinstance(var_input2, Variable):
            var_input2 = var_input2[0]
        try:
            with torch.cuda.device_of(var_input1):
                output = module(input1, input2)
            with lock:
                results[input1] = output
        except Exception as e:
            with lock:
                results[input1] = e

    threads = [threading.Thread(target=_worker,
                                args=(module, input1, input2, results, lock))
               for module, input1, input2 in zip(modules, input1s, input2s)]
|
st119822
|
Hi, this is the example given in the documentation (I modified the numbers just to show a simple example of a 1-in, 1-out LSTM):
>>> rnn = nn.LSTM(1, 100, 4)
>>> input = Variable(torch.randn(1, 1, 1))
>>> h0 = Variable(torch.randn(4, 1, 100))
>>> c0 = Variable(torch.randn(4, 1, 100))
>>> output, hn = rnn(input, (h0, c0))
This gives an output dimension of [1, 1, 100].
How do I reduce it to be of size [1, 1, 1] (a basic 1-in, 1-out LSTM)? I tried a linear layer and it didn't work (the loss was not decreasing properly). I could only make a simple LSTM work with 1 hidden state at the moment.
Any ideas?
|
st119823
|
Not sure what a 1-in, 1-out LSTM is - do you want 1 input and output feature while using 100 for the hidden size? I think the standard practice is to add a Linear layer on top of the LSTM. If the network learns with hidden states of size 1 but doesn't with size 100, it might mean that the optimization becomes too complex for the simple task you're training the network to perform. I can't really help you a lot; it depends on a lot of factors.
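A minimal sketch of that setup (the sizes follow the snippet above: input size 1, hidden size 100, 4 layers), with a Linear head projecting back to one feature:

import torch
import torch.nn as nn
from torch.autograd import Variable

rnn = nn.LSTM(1, 100, 4)            # input size 1, hidden size 100, 4 layers
head = nn.Linear(100, 1)            # project the 100 hidden features to 1 output

inp = Variable(torch.randn(1, 1, 1))
h0 = Variable(torch.randn(4, 1, 100))
c0 = Variable(torch.randn(4, 1, 100))

output, hn = rnn(inp, (h0, c0))     # output: 1 x 1 x 100
pred = head(output.view(-1, 100))   # pred: 1 x 1
pred = pred.view(1, 1, 1)           # back to (seq_len, batch, 1) if needed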
|
st119824
|
Do we need to fill the gradients of the other Variables declared with requires_grad=True inside the Module with 0 as well?
|
st119825
|
It is expected that it only affects parameters - things you optimize are considered model parameters. Not sure what your exact use case is, but as @smth pointed out, you can just iterate over the other Variables and zero the gradients yourself.
|
st119826
|
Thanks for the explanations, but if I don't fill the other variables' grad.data with 0, will the grad of parameters that depend on those variables be wrongly estimated? Keeping track of all the variables and setting their grad to 0 properly seems quite error-prone.
|
st119827
|
No, why would it matter? Gradient of parameters is not a function of the gradient w.r.t. some other Variable.
|
st119828
|
Do you mean that whatever the module is, as long as it inherits from the base nn.Module class, simply calling the default model.zero_grad() is enough?
I mean the error can pass from another Variable to the parameter: if some Variable's grad is unintentionally accumulated from the last run, the gradient of the parameters may also be wrong.
|
st119829
|
I really can't help you a lot without knowing what "other Variables" you are referring to. Are you optimizing them too? If not, just doing zero_grad() should be enough. There's no way the content of .grad of a Variable can affect what gets accumulated into another .grad. That's not how derivatives work (of course I'm talking in a context where we don't have multi-backward).
|
st119830
|
Thanks for your reply.
For example, in the following case, do I need to add self.out.grad.data.fill_(0) to the model.zero_grad() function?
def __init__(self):
    self.out = torch.autograd.Variable(torch.zeros(timestep, batchsize,
                                                   self.W_decode.size()[1]),
                                       requires_grad=True)

def forward(self, input, state=None):
    # input is timestep*N
    batchsize, timestep = input.size()[1], input.size()[0]
    vec_input = input.view(-1)
    emb = torch.index_select(self.W_emb, 0, vec_input).view(timestep, batchsize, -1)  # emb = N*ninp
    inp = matmul(emb, self.W_rnn)
    state = torch.autograd.Variable(torch.zeros(inp.size()[1:])) if state is None else state
    for step in range(inp.size()[0]):
        this_input = inp[step]  # N * nhid
        this_input = torch.addmm(this_input, state, self.U_rnn)
        state = F.tanh(this_input + self.b_rnn.expand_as(this_input))
        self.out[step] = torch.addmm(self.out[step], state, self.W_decode)
        self.out[step] = F.softmax(self.out[step] + self.b_decode.expand_as(self.out[step]))
    return self.out
|
st119831
|
Oh, with this approach, you’re likely to have an expanding history, because self.out will contain a pointer to part of the graph for each iteration. As far as I see, you’re overwriting completely at every forward, so you should recreate it every time like that:
def __init__(self):
    self.out = get_new_out()

def forward(self, input):
    ...
    next_out = get_new_out()
    for step in range(inp.size(0)):
        next_out[step] = ...  # an expression containing self.out
    self.out = next_out
    return next_out
You don’t need to worry about zeroing any gradients then. Also, keep in mind that self.out doesn’t require gradient, so it will be always 0.
|
st119832
|
Thanks for the explanation.
In the first approach, I don't need to allocate memory every time, so will it be a little bit faster? And it was written that way to see if Module.zero_grad is always safe.
Later, self.out will be used to compute the loss, and I need to backpropagate from the loss to the model's parameters through self.out. In this case, it seems that self.out will need a meaningful gradient for the chain rule to work?
|
st119833
|
No, you're not going to feel any difference in speed. CPU allocations are fast and we have a custom CUDA allocator that caches the memory, so it's also very fast. I don't know how you define the safety of Module.zero_grad - it does what it's meant to do, i.e. zero the grad of the parameters.
No, self.out.grad will never be used in the process of computing gradients w.r.t. the parameters. It's just a buffer where the gradient gets accumulated; it is not used in any way during backpropagation.
|
st119834
|
Also, as I said, if you don’t reallocate the output your graphs will never be freed and that will blow up the memory. Don’t cache intermediate things too long, PyTorch philosophy is quite different from Lua torch.
|
st119835
|
Thanks for your explanation. I think I know what you mean.
Sorry, I am quite new to torch. In Theano or TensorFlow, I can get the gradient of any node in the computational graph. But it seems that PyTorch only saves the gradients of the leaf nodes?
from torch.autograd import Variable
x = Variable(torch.ones(2, 2), requires_grad = True)
y = x + 2
z = y * y * 3
out = z.mean()
out.backward(retain_variables=True)
print(x.grad)
I get
Variable containing:
4.5000 4.5000
4.5000 4.5000
[torch.FloatTensor of size 2x2]
But print(y.grad) gives me
Variable containing:
0 0
0 0
[torch.FloatTensor of size 2x2]
How do I get the gradient of the inner node (y)?
|
st119836
|
@ypxie autograd by default frees the intermediate gradients that are not needed anymore, so that the memory usage is minimal.
If you want to inspect internal gradients, you can use hooks, as explained in this post.
But if you don't want to free the gradients, you can pass retain_variables=True to backward, as explained in the docs.
|
st119837
|
Thanks for your reply.
But even if I pass retain_variables=True to backward, y.grad is still all zero.
Hooks look interesting, but is there any way I can return the gradient rather than modify or print it?
|
st119838
|
you can return the gradient into a separate variable using a closure. Look at this post for sample code: Why cant I see .grad of an intermediate variable?
|
st119839
|
retain_variables will only prevent autograd from freeing some buffers needed for backward (e.g. when you want to backprop multiple times through a graph). Use hooks to access intermediate gradients.
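A small sketch of the hook approach, reusing the example from earlier in the thread (the save_grad helper is just an illustrative closure, not a library function):

import torch
from torch.autograd import Variable

x = Variable(torch.ones(2, 2), requires_grad=True)
y = x + 2
z = (y * y * 3).mean()

grads = {}
def save_grad(name):
    def hook(grad):
        grads[name] = grad  # stash the gradient flowing into this Variable
    return hook

y.register_hook(save_grad('y'))
z.backward()
print(grads['y'])  # 4.5 everywhere: d(mean(3*y^2))/dy = 6*y/4 with y = 3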
|
st119840
|
In Lua Torch, we can access a Tensor through a LuaJIT FFI pointer as fast as in C. Do we have a similar thing in PyTorch?
|
st119841
|
No, Python doesn't have JITting.
However, you can use Cython; Cython code is just as concise and nice, and also has indexing and so on.
For now, in your Cython code, you can convert the Tensor with .numpy() and use Cython to modify it.
Here’s an example of using cython to modify numpy arrays:
github.com
rbgirshick/py-faster-rcnn/blob/master/lib/nms/cpu_nms.pyx 173
# --------------------------------------------------------
# Fast R-CNN
# Copyright (c) 2015 Microsoft
# Licensed under The MIT License [see LICENSE for details]
# Written by Ross Girshick
# --------------------------------------------------------
import numpy as np
cimport numpy as np
cdef inline np.float32_t max(np.float32_t a, np.float32_t b):
return a if a >= b else b
cdef inline np.float32_t min(np.float32_t a, np.float32_t b):
return a if a <= b else b
def cpu_nms(np.ndarray[np.float32_t, ndim=2] dets, np.float thresh):
cdef np.ndarray[np.float32_t, ndim=1] x1 = dets[:, 0]
cdef np.ndarray[np.float32_t, ndim=1] y1 = dets[:, 1]
cdef np.ndarray[np.float32_t, ndim=1] x2 = dets[:, 2]
(file truncated; see the link above for the full source)
|
st119842
|
Thanks a lot for the reply and example. I noticed the documentation mentions Cython and Numba; this is really sweet.
Another, maybe more noob, question: since we are talking about access speed here, what’s the overhead of the .numpy() operation?
|
st119843
|
The overhead of .numpy() is zero. We don’t do any memory copies whatsoever; we just recast the Tensor C struct as a numpy array (the underlying memory is shared between the Tensor and the numpy array), so it’s free.
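A quick way to convince yourself of the zero-copy behaviour (just a sketch):
import torch
t = torch.zeros(3)
a = t.numpy()   # no copy: a and t share the same underlying memory
a[0] = 42       # modifying the numpy array...
print(t)        # ...is reflected in the tensor: 42, 0, 0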
|
st119844
|
And if you feel comfortable with working with torch tensors, you can check out our ffi examples 393.
|
st119845
|
Using the pytorch framework.
Suppose you have 4 NN modules, of which 2 share weights. One objective relies on the computation of 3 of the modules (including the 2 that share weights), and the other objective relies on the computation of 2 modules, only 1 of which belongs to the weight-sharing pair; the other module is not used for the first objective.
What would the optimisation step in this scenario entail? With efficiency in mind.
|
st119846
|
4 NN modules of which 2 share weights
In this case, you only have 3 NN modules, and one of them is simply reused.
If you have multiple objectives that you want to backprop, you can use:
autograd.backward http://pytorch.org/docs/autograd.html#torch.autograd.backward 137
You give it the list of losses and grads.
The optimization step is pretty standard: you give all of the modules’ parameters to a single optimizer.
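A simplified sketch of what that could look like (the module, criterion, and data names here are made up for illustration; shared plays the role of the reused, weight-sharing module):
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
shared = nn.Linear(10, 10)   # hypothetical reused module
head1 = nn.Linear(10, 5)     # used only by objective 1
head2 = nn.Linear(10, 3)     # used only by objective 2
criterion = nn.MSELoss()
params = list(shared.parameters()) + list(head1.parameters()) + list(head2.parameters())
optimizer = optim.SGD(params, lr=0.01)
x1, t1 = Variable(torch.randn(4, 10)), Variable(torch.randn(4, 5))
x2, t2 = Variable(torch.randn(4, 10)), Variable(torch.randn(4, 3))
optimizer.zero_grad()
loss1 = criterion(head1(shared(x1)), t1)   # objective 1 goes through shared and head1
loss2 = criterion(head2(shared(x2)), t2)   # objective 2 reuses shared, plus head2
(loss1 + loss2).backward()                 # gradients from both objectives accumulate in shared
optimizer.step()                           # a single optimizer step updates all parameters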
|
st119847
|
So just to be clear, specify a single objective that merges (concat) all the sub-objectives and backward() on it? There won’t be any issue regarding going over the same variables twice through different pathways?
Thanks.
|
st119848
|
So just to be clear, specify a single objective that merges all the sub-objectives and backward() on it?
Yes.
There won’t be any issue regarding going over the same variables twice through different pathways?
No issues.
|
st119849
|
I trained pytorch example resnet18 on imagenet, after about 1 epoch the training hangs and nvidia-smi says GPU lost …
nvidia-smi -l 1 says:
Unable to determine the device handle for GPU 0000:09:00.0: GPU is lost. Reboot the system to recover this GPU
|
st119850
|
this is not specific to pytorch. it looks like you have either a hardware issue or a NVIDIA driver issue. I suspect hardware / thermal issue.
|
st119851
|
Hi, I want to do something similar to this:
mu = torch.zeros(5, 2)
sd = torch.ones(5)
torch.normal(mu, sd)
But I get RuntimeError: inconsistent tensor size.
I noticed in the 1d case it works:
mu = torch.zeros(5, 1)
sd = torch.ones(5)
torch.normal(mu, sd)
|
st119852
|
Sorry for the spam, sd = torch.ones(5, 2) works, so we could do sd.repeat(1, 2) if sd is one-dimensional.
Or torch.cat([sd, sd], 1) if sd is a Variable, since repeat is not supported by Variable.
|
st119853
|
Actually, if you want the mean/std to be the same for all samples, you can just pass a number to torch.normal.
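For instance (a quick sketch):
import torch
samples = torch.normal(torch.zeros(5, 2), 1.0)   # per-element means, one shared std
print(samples.size())                            # torch.Size([5, 2])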
|
st119854
|
Good call. The use case I had was a network outputting gaussian parameters mu in N x D and log_sd in N x 1, so the example above was a bit off.
After doing some more searching, I think that using expand_as might be the most efficient? To summarize:
mu = Variable(torch.zeros(5, 2))
sd = Variable(torch.rand(5, 1))
torch.normal(mu, sd.expand_as(mu))
|
st119855
|
Hi @smth I am trying to use the DataParallel routine, but getting stuck on an error. Right now I am just trying to DataParallelify just one conv module within my net. So previously I had:
self.conv1 = nn.Conv2d(N.inputChannels, N.outputChannels, N.kernelSquareSize, stride = (1,1), padding = (1,1));
and now, as per the DataParallel documentation, I have:
self.conv1 = nn.Conv2d(N.inputChannels, N.outputChannels, N.kernelSquareSize, stride = (1,1), padding = (1,1)); self.conv1 = torch.nn.DataParallel(self.conv1, device_ids = [1, 2])
With this new code however, I am unable to get it to run, and I get the following error:
File “/billly/sillyNet.py”, line 67, in forward_prop
x = F.max_pool2d(F.relu(self.bn1(self.conv1(x))), (2,2))
File “/data/venv/pytorch/local/lib/python2.7/site-packages/torch/nn/modules/module.py”, line 210, in call
result = self.forward(*input, **kwargs)
File “/data/venv/pytorch/local/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py”, line 45, in forward
return self.gather(outputs, self.output_device)
File “/data/venv/pytorch/local/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py”, line 57, in gather
return gather(outputs, output_device)
File “/data/venv/pytorch/local/lib/python2.7/site-packages/torch/nn/parallel/scatter_gather.py”, line 25, in gather
return gather_map(outputs)
File “/data/venv/pytorch/local/lib/python2.7/site-packages/torch/nn/parallel/scatter_gather.py”, line 23, in gather_map
return Gather(target_device)(*outputs)
File "/data/venv/pytorch/local/lib/python2.7/site-packages/torch/nn/parallel/functions.py", line 32, in forward
return comm.gather(inputs, self.dim, self.target_device)
File “/data/venv/pytorch/local/lib/python2.7/site-packages/torch/cuda/comm.py”, line 141, in gather
result.narrow(dim, chunk_start, tensor.size(dim)).copy(tensor, True)
RuntimeError: cuda runtime error (77) : an illegal memory access was encountered at /home/soumith/local/builder/wheel/pytorch-src/torch/lib/THC/THCTensorCopy.cu:85
And it is occurring during the (attempted) forward prop in my code…
thanks…
|
st119856
|
Can you please give us the parameters of the conv so we can try to reproduce the issue?
|
st119857
|
Hi @apaszke,
Sure, here is my complete snippet:
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.bn1 = nn.BatchNorm2d(8)
self.bn2 = nn.BatchNorm2d(16)
self.conv1 = nn.Conv2d(1, 8, 3 ,stride = (1,1), padding = (1,1))
self.conv1 = torch.nn.DataParallel(self.conv1, device_ids = [1, 2])
self.conv2 = nn.Conv2d(8, 16 ,3, stride = (1,1), padding = (1,1))
# etc ...
def forward_prop(self, x):
x = F.max_pool2d(F.relu(self.bn1(self.conv1(x))), (2,2))
x = F.max_pool2d(F.relu(self.bn2(self.conv2(x))), (2,2))
This is my only change. I will also mention that if I simply remove the torch.nn.DataParallel line in the above, my code runs and trains fine.
Thanks,
|
st119858
|
I need some additional information. Why do you define forward_prop and not forward? What’s the input size? On which GPUs are the modules? On which GPU is the input?
|
st119859
|
Hi @apaszke,
Why do you define forward_prop and not forward?
For this question, I am not sure what the relevance of the name is. Perhaps I am missing something, but to be honest I had defined it this way in my class before and it worked, so maybe there is something deeper here that I am missing? It’s just the name I give to the forward propagation function…
What’s the input size?
This one is a 100x100 image, single channel, minibatch size of 16.
On which GPUs are the modules? On which GPU is the input?
I believe they are all on my GPU 1, and I only say this because when I run nvidia-smi, this GPU is the only one that ever seems to be used. Put another way, nowhere in my code have I explicitly specified that I want to use specific GPUs, except for the device_ids in the seemingly problematic statement.
|
st119860
|
@Kalamaya the name forward is important because the __call__ operator uses forward from the module 4. You need to define a forward function in every Module that you create; otherwise it will raise an error in __call__.
|
st119861
|
Ah right. The problem is that we’re numbering GPUs starting from 0. So all the modules and data is on GPU0, but you’re telling the DataParallel to run on GPU1 and GPU2 (i.e. 2nd and 3rd GPU). Can you change that and see if it helps? If that’s it, then we need to improve the error message.
Also, unless this is a helper used in forward that’s defined somewhere else, you need to call it like that. @fmassa wrote a nice explanation why.
|
st119862
|
@fmassa @apaszke I guess I am getting very confused here… To my knowledge, my forward_prop member function IS my definition of the forward function in my Net module… have I misunderstood something here? For completeness, I have pasted my Net class here:
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# Define the network
self.bn1 = nn.BatchNorm2d(8)
self.bn2 = nn.BatchNorm2d(16)
self.conv1 = nn.Conv2d(1, 8, 3, stride = (1,1), padding = (1,1))
self.conv2 = nn.Conv2d(8, 16, 3, stride = (1,1), padding = (1,1))
self.conv3 = nn.Conv2d(16, 32, 2, stride = (1,1), padding = (0,0))
self.conv4 = nn.Conv2d(32, 64, 3, stride = (1,1), padding = (1,1))
self.conv5 = nn.Conv2d(64, 64, 3, stride = (3,3), padding = (0,0))
self.fc1 = nn.Linear(256, 32)
self.fc2 = nn.Linear(32, 16)
self.fc3 = nn.Linear(16, 2)
def forward_prop(self, x):
# Conv1 with batch norm
x = F.max_pool2d(F.relu(self.bn1(self.conv1(x))), (2,2))
# Conv2 with batch norm.
x = F.max_pool2d(F.relu(self.bn2(self.conv2(x))), (2,2))
# Conv3
x = F.max_pool2d(F.relu(self.conv3(x)), (2,2))
# Conv4
x = F.max_pool2d(F.relu(self.conv4(x)), (2,2))
# Conv5
x = F.relu(self.conv5(x))
# Flatten the feature map thus far for use in the fully connected layer.
x = x.view(-1, self.num_flat_features(x))
# Fully connected 1
x = F.relu(self.fc1(x))
# Fully connected 2
x = F.relu(self.fc2(x))
# Final layer
x = self.fc3(x)
return x
def num_flat_features(self, x):
# all dimensions except the batch dimension
size = x.size()[1:]
num_features = 1
for s in size:
num_features *= s
return num_features
So real quick, in the rest of my file, I basically have:
net = Net().cuda()
net.forward_prop(trainingBatch)
Now, given this setup, what I would simply like to do is use DataParallel. I looked at the DCGAN example, but I cannot use it at first glance because I do not know how to do a reshape operation inside nn.Sequential(), and as for the imagenet example, it is not clear to me what their “model” is vis-a-vis what I have here.
So instead I wrote a simple net (as above), and would like to use the DataParallel capability here. What would I do differently on this setup as above?
thanks again!
|
st119863
|
Ok, I got this to work: apparently the GPU indexing used by nvidia-smi is not the same as the indexing used by the program, and this was what was causing one of the issues. (There was another issue where it complained if the batch size was too small, but that seemed to go away once I made the batch size larger.) Thanks!!
|
st119864
|
You’re using the Module incorrectly. You should never apply a module by calling one of its methods directly. You should only define the forward function and then call the module itself like a function: module(input). This runs the __call__ method, which we’ve implemented and which calls into your forward function with the inputs; it’s necessary for some additional bookkeeping like hooks, etc.
So the problem was 1- vs 0-based GPU indexing, right? We need to fix that; it should never give you an invalid memory access.
|
st119865
|
@apaszke thanks, yes I now understand - I just need to fill in / define the forward function with whatever needs to be done, and then from the object instantiation I made, do something like:
myNet = Net()
myNet = torch.nn.DataParallel(myNet, device_ids=[0,1])
myNet.cuda()
output = myNet(input)
Is this correct?
However there is a subtlety that still confuses me: In the DCGAN example, we have:
[screenshot of the DCGAN example’s _netG class, whose forward calls nn.parallel.data_parallel on self.main]
The nn.parallel.data_parallel here is applied to a member of the _netG class, in this case self.main. This contrasts with what I have, where I apply torch.nn.DataParallel to the entire network object. I guess this works because the net object in my example and the nn.Sequential are both modules? At any rate, some elucidation of this would help me greatly. Thanks again.
|
st119866
|
They’re both equivalent. The DCGAN example was written when we didn’t have the DataParallel module, and data parallelism only existed in that functional form, which you can still use inside forward.
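For reference, a sketch of the two forms side by side (assuming at least two GPUs; the layers here are made up for illustration):
import torch
import torch.nn as nn
class WrappedInside(nn.Module):
    # functional form, as in the DCGAN example: parallelism applied inside forward
    def __init__(self):
        super(WrappedInside, self).__init__()
        self.main = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())
    def forward(self, x):
        return nn.parallel.data_parallel(self.main, x, device_ids=[0, 1])
# module form: wrap the whole network once, then call it like any other module
inner = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())
net = nn.DataParallel(inner, device_ids=[0, 1]).cuda()
# output = net(input)   # both forms split the batch along dim 0 across GPUs 0 and 1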
|
st119867
|
Hi,
This works OK:
x=Variable(torch.range(1,24))
x.resize(3,8)
But this gives an error:
x=Variable(torch.range(1,24))
m=Variable(torch.FloatTensor([3]))
n=Variable(torch.FloatTensor([8]))
x.resize(m,n)
Is this a bug?
The error message I get is:
RuntimeError: requested resize to Variable containing:
3
[torch.FloatTensor of size 1]
x Variable containing:
8
[torch.FloatTensor of size 1]
(Variable containing:
24
[torch.FloatTensor of size 1]
elements in total), but the given tensor has a size of 24 (24 elements). autograd’s resize can only change the shape of a given tensor, while preserving the number of elements.
|