st118968
|
Some things I would try:
have you tried to access your datasets instance through the interactive python console (if that works, does creating a data loader interactively work too?)
https://github.com/bamos/densenet.pytorch/blob/master/train.py#L101 11 (background commands are always a bit “difficult”)
https://github.com/bamos/densenet.pytorch/blob/master/train.py#L67 17 (you need to use pin_memory? - do not know this one)
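For the first suggestion, a rough interactive check could look like this (the dataset class and its arguments are placeholders for your own code, not something from the linked repo):
from torch.utils.data import DataLoader

ds = MyDataset(...)                  # your own Dataset subclass
print(len(ds))
print(ds[0])                         # does plain indexing work?

loader = DataLoader(ds, batch_size=4, num_workers=0)   # num_workers=0 keeps it single-process
print(next(iter(loader)))            # does a single batch come through?
If that all works interactively, the problem is more likely in how the training script is launched than in the dataset itself.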
|
st118969
|
Thanks Kaiser - I am starting debugging now and will look in to all three points you mentioned.
|
st118970
|
Hi Adam,
Yes - running the scripts in nvidia-docker with --ipc=host.
I got some strange behavior, so opened an issue about it here. https://github.com/pytorch/pytorch/issues/1120 52
Looks like models are hardcoded with 224x224 dimensions. I am looking into it now.
|
st118971
|
I’m a bit of a newbie, and I’m having what I think may be a fairly trivial problem. I’m trying to get my model to backprop properly, but it simply refuses to. Here’s a rough example of what my code does (the full code is here 10):
import torch
import torch.nn as nn
import torch.nn.functional as Funct
from torch.autograd import Variable
import torch.optim as optim


class EMM_NTM(nn.Module):
    def __init__(self, *params):
        # init hidden state, etc. Make sure all Variables have requires_grad=True
        pass

    def forward(self, h):
        # pass forward through the memory module (this one is really long)
        pass


class FeedForwardController(nn.Module):
    def __init__(self,
                 num_inputs,
                 num_hidden,
                 batch_size,
                 num_reads=1,
                 memory_dims=(128, 20)):
        super(FeedForwardController, self).__init__()

        self.num_inputs = num_inputs
        self.num_hidden = num_hidden
        self.batch_size = batch_size
        self.memory_dims = memory_dims

        self.in_to_hid = nn.Linear(self.num_inputs, self.num_hidden)
        self.read_to_hid = nn.Linear(self.memory_dims[1] * num_reads, self.num_hidden)

    def forward(self, x, read):
        x = x.contiguous()
        x = x.view(-1, num_flat_features(x))

        read = read.contiguous()
        read = read.view(-1, num_flat_features(read))

        x = Funct.relu(self.in_to_hid(x) + self.read_to_hid(read))
        return x


class NTM(nn.Module):
    def __init__(self,
                 num_inputs,
                 num_hidden,
                 num_outputs,
                 batch_size,
                 num_reads,
                 memory_dims=(128, 20)):
        super(NTM, self).__init__()

        self.num_inputs = num_inputs
        self.num_hidden = num_hidden
        self.num_outputs = num_outputs
        self.batch_size = batch_size
        self.num_reads = num_reads
        self.memory_dims = memory_dims

        self.hidden = Variable(torch.rand(batch_size, self.num_hidden), requires_grad=True)

        self.EMM = EMM_NTM(self.num_hidden, self.batch_size, num_reads=self.num_reads,
                           num_shifts=3, memory_dims=self.memory_dims)
        # self.EMM.register_backward_hook(print)  # <- an attempt to see what's happening, this doesn't print

        self.controller = FeedForwardController(self.num_inputs, self.num_hidden, self.batch_size,
                                                num_reads=self.num_reads, memory_dims=self.memory_dims)
        # self.controller.register_backward_hook(print)  # <- this doesn't print either

        self.hid_to_out = nn.Linear(self.num_hidden, self.num_outputs)

    def forward(self, x):
        x = x.permute(1, 0, 2, 3)

        def step(x_t):
            r_t = self.EMM(self.hidden)
            # r_t.register_hook(print)  # <- this one doesn't print

            h_t = self.controller(x_t, r_t)
            h_t = h_t.view(-1, num_flat_features(h_t))
            # self.hidden.register_hook(print)  # <- this one prints

            self.hidden = Variable(h_t.data, requires_grad=True)
            out = Funct.sigmoid(self.hid_to_out(self.hidden))
            return out

        outs = torch.stack(
            [
                step(x_t) for x_t in torch.unbind(x, 0)
            ], 0)
        outs = outs.permute(1, 0, 2)
        return outs
For some reason when I call backwards it doesn’t look like the gradients are getting updated. I tried adding a bunch of backward hooks to see when it stops printing, and it looks like the backward calls just aren’t happening in the child modules. Any idea how to fix this?
The reason I think that backward is not getting called is because I checked the parameters at the beginning of the training and after 1000 inputs (and calls to loss.backward(), zeroing gradients each time) and they are equal. I also printed a set of parameters for the first ~100 iterations and they didn’t change at all.
I included the controller code because the same issue seems to be happening there as in the EMM_NTM code; I think the same fix should apply to both. Any help would be great - I'm quite confused!
|
st118972
|
I see that in your code you unpacked the .data multiple times. Remember that every time you do that, you won’t save that part of the computation in autograd and it might break the backprop. For example, if you meant _write_to_mem to be differentiable, then it’s not. You unpacked the Variable that contained the whole history and then repacked it in a new one, that won’t backprop gradients to the graph that created the data. The problem is in here 20.
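A tiny illustration of the repacking problem (not code from your model, just the general pattern):
x = Variable(torch.ones(2, 2), requires_grad=True)
y = x * 3
z = Variable(y.data, requires_grad=True)   # wraps the raw tensor: the history that produced y is lost
z.sum().backward()
print(z.grad)   # z gets a gradient
print(x.grad)   # x never receives anything through the repacked Variable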
|
st118973
|
Cheers - I took out all the .data calls and something changed - all the variables in EMM_NTM (memory, wr, ww) are now nan (which I’m guessing comes from trying to backprop, though I’m not sure). I registered a backward hook and it looks like a bunch of gradients/parameters for nn.Linear modules are getting initialized to nan which is weird.
Still worrisome is that the state doesn’t seem to change at all after 1-100 call(s) to optimizer.step() even with a large learning rate. The way I’m testing this is to save the state_dict at the very beginning of the training process and then compare it with the state_dict later on during training.
The core of my training loop looks like this:
ntm = NTM(num_inputs, num_hidden, num_inputs, batch, num_reads=1)

try:
    ntm.load_state_dict(torch.load("models/copy_seqlen_{}.dat".format(seq_len)))
except (FileNotFoundError, AttributeError):
    pass

ntm.train()
state = ntm.state_dict()

criterion = nn.MSELoss()
optimizer = optim.RMSprop(ntm.parameters(), lr=5e-3, weight_decay=0.0005)

max_seq_len = 20
for length in range(10, max_seq_len):
    test = CopyTask(length, [num_inputs, 1], num_samples=2e4)
    data_loader = DataLoader(test, batch_size=batch, shuffle=True, num_workers=4)

    for epoch in range(5):
        for i, data in enumerate(data_loader, 0):
            inputs, labels = data
            inputs = Variable(inputs, requires_grad=True)
            labels = Variable(labels)

            optimizer.zero_grad()
            ntm.zero_grad()

            outputs = ntm(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()

            assert not (ntm.state_dict()['hid_to_out.bias'] == state['hid_to_out.bias'])[0]  # this just breaks it on the first loop

            # do stuff with the outputs, plot, running loss, etc.
|
st118974
|
Using model.[layer_name].weight.grad.data I am able to access the gradients for weights of a particular layer. However, I assume that these gradients are averaged across samples in a mini-batch. Is there a way for me to access the gradients of weights for each sample?
I was able to obtain per-sample gradients for activations or neurons using register_hook, so not sure what to do about weights.
Thanks!
|
st118975
|
There is no way to access the gradients wrt the weights for each individual sample. Gradients wrt weights in pytorch are always accumulated (even over the mini-batch). If you want gradients wrt each sample, you will have to run each sample individually through the network. This is because neither the THNN backend nor CuDNN supports per-sample gradients wrt the weights.
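A rough sketch of that per-sample workaround (model, criterion, inputs and targets are assumed to already exist; fc is just an example layer name):
per_sample_grads = []
for i in range(inputs.size(0)):
    model.zero_grad()
    out = model(inputs[i:i+1])              # batch of one
    loss = criterion(out, targets[i:i+1])
    loss.backward()
    per_sample_grads.append(model.fc.weight.grad.data.clone())
This is slow, since it forfeits the batching, but it gives one gradient tensor per sample.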
|
st118976
|
I have installed CUDA 7.5, and because my GPU is a GeForce card I use CUDA without cuDNN. After installing PyTorch following the commands on the website, I found I can't train the MNIST model, even with torch.backends.cudnn.enabled=False.
My GPU's compute capability is 2.1; I don't know if that is the cause of the error.
The error is: RuntimeError: cuda runtime error (8) : invalid device function at /data/users/soumith/builder/wheel/pytorch-src/torch/lib/THC/generic/THCTensorMath.cu:15
Can you tell me how to solve this problem? Thank you!
|
st118977
|
Alan,
for your particular GPU, you will have to build pytorch from source. Your GPU is too old to be supported by the binaries.
You can find instructions on how to build from source here:
https://github.com/pytorch/pytorch (Tensors and Dynamic neural networks in Python with strong GPU acceleration)
|
st118978
|
Hi, I just implemented a customized module as follows:
import numpy as np
import torch


class Loss(torch.nn.Module):
    '''
    Implement the loss function from output from RNN.
    Ref paper: https://arxiv.org/abs/1308.0850
    '''
    def __init__(self):
        '''
        x is sequence of coordinates with dim (batch, seq_length, 3).
        Parameters are sequence of output from rnn with dim (batch, seq_length, 128).
        '''
        self.e = []    # predicted end of stroke probability scalar
        self.m1 = []   # vector of means for x1 with len 20
        self.m2 = []   # vector of means for x2 with len 20
        self.pi = []   # vector of mixture density network coefficients with len 20
        self.rho = []  # vector of correlation with len 20
        self.s1 = []   # vector of standard deviation for x1 with len 20
        self.s2 = []   # vector of standard deviation for x2 with len 20
        self.x1 = []   # x1 coordinate at t+1
        self.x2 = []   # x2 coordinates at t+1
        self.et = []   # end of probability indicator from ground truth
        self.batch = 0       # batch size
        self.seq_length = 0  # reduce by 1 because loss is calculated at t+1 timestamp
        self.parameters = []

    def forward(self, x, para):
        '''
        Implement eq 26 of ref paper for each batch.
        Input:
            para: dim(seq_len, batch, 121)
            x: dim(seq_len, batch, 3)
        '''
        if x.size()[0] == para.size()[0]:
            self.seq_length = x.size()[0] - 1
            total_loss = 0
            for i in range(self.seq_length):
                # prepare parameters
                self.__get_para(i, x, para)
                normalpdf = self.__para2normal(self.x1, self.x2, self.m1, self.m2,
                                               self.s1, self.s2, self.rho)  # dim (n_batch, 20)
                single_loss = self.__singleLoss(normalpdf)
                total_loss += single_loss
            return total_loss
        else:
            raise Exception("x and para don't match")

    def __get_para(self, i, x, para):
        '''
        Slice and process parameters to the right form.
        Implementing eq 18-23 of ref paper.
        '''
        self.batch = x.size()[1]
        self.e = torch.sigmoid(-para[i, :, 0])  # eq 18
        self.parameters = para

        # slice remaining parameters and training inputs
        self.pi, self.m1, self.m2, self.s1, self.s2, self.rho = \
            torch.split(self.parameters[i, :, 1:], 20, dim=1)  # dim(batch, 20)
        self.x1 = x[i+1, :, 0].resize(self.batch, 1)  # dim(batch, 1)
        self.x2 = x[i+1, :, 1].resize(self.batch, 1)
        self.et = x[i+1, :, 2].resize(self.batch, 1)

        ## process parameters
        # pi
        max_pi = torch.max(self.pi, dim=1)[0]
        max_pi = max_pi.expand_as(self.pi)
        diff = self.pi - max_pi
        red_sum = torch.sum(diff, dim=1).expand_as(self.pi)
        self.pi = diff.div(red_sum)

        # sd
        self.s1 = self.s1.exp()
        self.s2 = self.s2.exp()

        # rho
        self.rho = self.rho.tanh()

        # reshape ground truth x1, x2 to match m1, m2 because broadcasting is currently not supported by pytorch
        self.x1 = self.x1.expand_as(self.m1)
        self.x2 = self.x2.expand_as(self.m2)

    def __para2normal(self, x1, x2, m1, m2, s1, s2, rho):
        '''
        Implement eq 24, 25 of ref paper.
        All input with dim(1, batch, 20)
        '''
        norm1 = x1.sub(m1)
        norm2 = x2.sub(m2)
        s1s2 = torch.mul(s1, s2)
        z = torch.pow(torch.div(norm1, s1), 2) + torch.pow(torch.div(norm2, s2), 2) - \
            2*torch.div(torch.mul(rho, torch.mul(norm1, norm2)), s1s2)
        negRho = 1 - torch.pow(rho, 2)
        expPart = torch.exp(torch.div(-z, torch.mul(negRho, 2)))
        coef = 2*np.pi*torch.mul(s1s2, torch.sqrt(negRho))
        result = torch.div(expPart, coef)
        return result

    def __singleLoss(self, normalpdf):
        '''
        Calculate loss for single time stamp. eq 26
        Input: normalpdf (1, n_batch, 20).
        '''
        epsilon = 1e-20  # floor of loss from mixture density component since initial loss could be zero
        mix_den_loss = torch.mul(self.pi, normalpdf)
        red_sum_loss = torch.sum(torch.log(mix_den_loss))  # sum for all batch
        end_loss = torch.sum(torch.log(torch.mul(self.e, self.et) + torch.mul(1 - self.e, 1 - self.et)))
        total_loss = -red_sum_loss - end_loss

        return total_loss / self.batch
when I call loss(x, para), the following error comes up:
‘Loss’ object has no attribute ‘_forward_hooks’
What does the error message imply ?
Specifically, what is hook used for ?
Did I get the error because I have no parameters in this module ?
|
st118979
|
Hi,
Firstly, let me say it would be nice if you could format all of your code, including the __init__ method.
Also, it would be helpful if you could provide a working example of how to replicate the error (the code doesn't run).
Nevertheless, your error is caused by the fact that you don't call the Module class's __init__. Your code should be:
class Loss(torch.nn.Module):
    def __init__(self):
        super(Loss, self).__init__()
        ...  # rest of the __init__ here
|
st118980
|
Hi, I am trying to create an optimizer of the RMSprop type.
The optimizer is initialized as follows:
optimizer = torch.optim.RMSprop(model.parameters(), alpha = 0.95, eps = 0.0001, centered = True)
Then I got the following error:
__init__() got an unexpected keyword argument 'centered'
I am wondering whether any change was made to RMSprop so that it no longer supports the centered version?
|
st118981
|
the centered option is in master but not in 0.1.10. It is due to appear in 0.1.11 today/tomorrow.
|
st118982
|
Hi,
what is the most elegant way to force a Tensor to always stay on the CPU?
I have a SparseLinear layer that won’t fit on my GPU, so I’d like that part of the net to stay on the CPU, even when the rest of my model lives on the GPU.
Currently I’m using a rather ugly hack, by simply replacing the cuda() method of the Tensor [1]
And one more question, what’s the reason cuda() / cpu() methods of a Model do not call the same methods of the children? E.g I thought that calling model.cuda() would call model.sparse_layer.cuda() (which would move result of sparse dot product to GPU), but that’s not the case, since self._apply() only works with parameters / buffers.
Is calling model.apply(lambda t: t.cuda()) the solution here, or I shouldn’t call cuda() on all the children?
[1]
def force_cpu_(tensor):
    tensor.cuda = types.MethodType(lambda self, *args, **kwargs: self, tensor)
    return tensor
|
st118983
|
For now, I would override the cuda method of the model’s top-level Module. That should be slightly less hacky than your (impressive!) monkeypatching.
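Something along these lines (a minimal sketch; the layer names and sizes are made up, not taken from the original model):
class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.sparse_part = nn.Linear(100000, 128)   # stand-in for the SparseLinear that must stay on the CPU
        self.dense_part = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

    def cuda(self, device_id=None):
        # only move the dense part; the sparse part keeps living on the CPU
        self.dense_part.cuda(device_id)
        return self

    def forward(self, x):
        h = self.sparse_part(x)            # computed on the CPU
        return self.dense_part(h.cuda())   # hand the result over to the GPU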
|
st118984
|
Since the area of Deep Learning is nascent, hundreds of brilliant research papers are being published every week. This requires that a Deep Learning framework have a policy for adding new modules.
Is there a policy for implementing new modules in PyTorch (pytorch.nn)?
Which modules do you like to add/ wish being added in PyTorch in nearby future? Please comment.
If I want to implement a new module (like Batch Renormalization), what are the criteria that my implementation should fulfill in order to get incorporated into PyTorch? How should I open an issue about it and submit a Pull Request? (I am relatively new to Open Source development.)
|
st118985
|
In general, we encourage users to build new modules in their own extensions, i.e. instead of torch.nn.BatchRenormalization, foo.BatchRenormalization. Here's an example: https://github.com/fxia22/stn.pytorch 137
We will take the most popular / most used modules and carefully integrate them into the core torch.nn.
For examples of how to contribute, you can look at Pull Requests that have already been made to the pytorch core that have contributed an additional module.
|
st118986
|
Hi,
I just wonder what is a tape-based autograd system?
I saw this term on the PyTorch About page.
Could you guys explain a little bit for me?
Thank you a lot.
|
st118987
|
something that uses reverse-mode automatic differentiation:
Justin Domke's Weblog – 24 Mar 09
A simple explanation of reverse-mode automatic differentiation 1.9k
My previous rant about automatic differentiation generated several requests for an explanation of how it works. This can be confusing because there are different types of automatic differentiation …
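Roughly, a "tape" records every operation during the forward pass and is then walked backwards to accumulate gradients via the chain rule. A toy illustration of the idea (this is just the concept, not how PyTorch is implemented internally):
class Var:
    def __init__(self, value):
        self.value = value
        self.grad = 0.0

tape = []   # each entry: (output, [(input, local_gradient), ...])

def mul(a, b):
    out = Var(a.value * b.value)
    tape.append((out, [(a, b.value), (b, a.value)]))   # d(out)/da = b, d(out)/db = a
    return out

def add(a, b):
    out = Var(a.value + b.value)
    tape.append((out, [(a, 1.0), (b, 1.0)]))
    return out

def backward(out):
    out.grad = 1.0
    for node, inputs in reversed(tape):        # replay the tape in reverse order
        for inp, local_grad in inputs:
            inp.grad += node.grad * local_grad

x, w, b = Var(2.0), Var(3.0), Var(1.0)
y = add(mul(x, w), b)            # y = x*w + b
backward(y)
print(x.grad, w.grad, b.grad)    # 3.0 2.0 1.0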
|
st118988
|
(screenshot attached: QQ截图20170321130328.jpg)
First, I was training with a regular Python script on 4 GPUs using DataParallel.
Then I loaded the saved parameters in an IPython notebook over SSH while the previous job was still running.
When I loaded them on a single GPU instead of with DataParallel, it said the weights don't exist. So instead I used DataParallel on the same GPUs as in the training process, and then the problem occurred.
IPython froze, and I immediately killed the job. Now my GPUs are locked up like the picture shows. I can restart any job but it only shows 1MB of memory whatever I try.
I've run into the same problem before; a reboot fixes it, but my peers are also using this remote server. What can I do ;-( I searched through 'ps -ef' but still cannot find the relevant jobs that caused the problem.
What have I done ;-(
|
st118989
|
So the problem is because the NVIDIA libraries we're using for inter-GPU communication in DataParallel do some funky stuff and they can leave the driver in an inconsistent state. Just remember to never launch multiple DataParallel jobs that share some of the GPUs (it's ok to run one job on GPUs 0, 1 and another on 2, 3).
|
st118990
|
I have rebooted my server, but still some strange errors occur occasionally
like ‘RuntimeError: cuda runtime error (4) : unspecified launch failure at /home/soumith/local/builder/wheel/pytorch-src/torch/lib/THC/generic/THCTensorCopy.c:18’.
And when I run 'nvidia-smi' it is sometimes extremely slow…
Thanks for your reply; now I won't make the same mistake again.
|
st118991
|
Eh… it seems like a serious problem.
The unspecified launch failure keeps occurring about 1 or 2 hours after I launch my code. I can't find any related solution. Should I reinstall my NVIDIA driver…?
|
st118992
|
Is your GPU becoming too hot? Occurring after 1 to 2 hours of running your code sounds like that might be the problem (an Unspecified Launch Failure can sometimes be caused by that).
|
st118993
|
Not sure why but this comment was addressed to me? I received an email notification.
(screenshot attached: Screenshot from 2017-03-22 18:09:38.png)
|
st118994
|
Sorry for my late reply!
I restarted my system again and I think the problem is solved; I can't reproduce it now.
Though I don't think it was due to temperature, since our school's GPUs are housed in a dedicated area where two air conditioners are running. But still, thanks for your reply. I admire your group's work!
|
st118995
|
Hello,
I’ve noticed that PyTorch (torch (0.1.10.post2), torchvision (0.1.7)) uses significantly more GPU RAM than e.g. Theano running a similar code. Unfortunately I cannot disclose the actual code but I think it should be possible to reproduce this behavior with the following simple sequential architecture (all dimensions are in form MINIBATCH_LENGTH x NUM_CHANNELS x H x W):
N - minibatch size, e.g. 20, padding mode == “same” everywhere
Input: Nx10x1024x1024
Layer0: Conv2D, stride=2, filters=32, output: Nx32x512x512
Layer1: Conv2D, stride=2, filters=64, output: Nx64x256x256
Layer2: Conv2D, stride=2, filters=128, output: Nx128x128x128
… all the way down to 16x16 feature maps
LayerX: Conv2D, stride=2, filters=…, output: Nx…x16x16
… now, go opposite way - transposed convolution
LayerX+1: ConvTranspose2D, stride=2, filters=…, output: Nx…x32x32
… all the way up …
Layer(Last): ConvTranspose2D, stride=2, filters=…, output: Nx10x1024x1024
Main loop:
net = Net()
net.cuda()
net.train()
optimizer = optim.SGD(net.parameters(), lr = ...)
criterion = nn.MSELoss()
input = Variable(torch.from_numpy(<your-Nx10x1024x1024-tensor>).cuda())
target = Variable(torch.from_numpy(<your-Nx10x1024x1024-tensor>).cuda())
for epoch in xrange(...):
    output = net(input)
    loss = criterion(output, target)
    net.zero_grad()
    loss.backward()
    optimizer.step()
On a GTX-1060 the corresponding code takes around 30% more GPU RAM than its Theano counterpart (execution times are about the same for the Theano and PyTorch versions).
Is this something that can be fixed?
Thanks,
|
st118996
|
PyTorch uses a caching memory allocator, which means that it might oversubscribe memory even if it is not using it, keeping blocks cached. So in that case, nvidia-smi won't give you the exact memory used by PyTorch. In out-of-memory situations, PyTorch automatically frees blocks that are cached.
This might be one reason to explain the memory difference.
|
st118997
|
thanks for your reply. This seems to be the case indeed (the maximum mini-batch size seems to be quite similar for both Theano and PyTorch).
However, is there a way to constrain memory allocation? In Theano there's lib.cnmem, which can be assigned a percentage of memory to pre-allocate. It's a soft limit but still helpful when sharing a GPU between multiple jobs.
|
st118998
|
Here’s a reproduction using 0.1.10_2. This is not an in-place operation, so I don’t understand why it’s failing for the Variable and not for the Tensor.
import torch
from torch.autograd import Variable
x = torch.rand(1, 2, 3, 4)
# This succeeds.
x.sub(x.max())
v = Variable(x, volatile=True)
# This fails with "RuntimeError: inconsistent tensor size at /data/users/soumith/builder/wheel/pytorch-src/torch/lib/TH/generic/THTensorMath.c:827"
v.sub(v.max())
|
st118999
|
you could do: v.sub(v.max().data[0]).
We are thinking about introducing an autograd.Scalar type to better do this, rather than returning scalars in autograd as 1-dimensional Tensors with 1 element.
|
st119000
|
I like the sound of that. Keeping the semantics of Tensor and Variable nearly identical would satisfy the principle of least surprise.
|
st119001
|
If I wrap an RNN with DataParallel, it seems like the output is not consistent with the target size. For instance, if the batch size is 32 and 2 GPUs are active, then 16 instances are processed per GPU. However, these instances should be aggregated at the end to get the whole batch of 32 instances for the loss function. But when I use an RNN, the aggregation is not happening and the model outputs only 16 instances, which conflicts with the target size.
I don't know, does that make sense?
|
st119002
|
you can look at https://github.com/OpenNMT/PyOpenNMT 350 to see how to wrap RNN in DataParallel
You have to check whether you are using batch_first for your RNN and also which dimension is being scattered by DataParallel.
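A rough sketch of those two points, assuming two GPUs are available (the sizes are made up; this is not OpenNMT code). Note that the wrapper only returns the RNN output, which also sidesteps the issue mentioned in the follow-up post below:
class RNNEncoder(nn.Module):
    def __init__(self):
        super(RNNEncoder, self).__init__()
        self.rnn = nn.LSTM(128, 256, batch_first=True)   # batch dimension comes first

    def forward(self, x):
        output, _ = self.rnn(x)   # return only the output, not the hidden state
        return output

model = nn.DataParallel(RNNEncoder().cuda(), device_ids=[0, 1])
x = Variable(torch.randn(32, 50, 128).cuda())   # (batch, seq_len, features); dim 0 gets scattered
print(model(x).size())                          # expect (32, 50, 256): the full batch, gathered back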
|
st119003
|
This is strange. If the custom RNN module returns both the RNN state and the output, DataParallel does not work and the problem above appears. If you just return the output of the RNN, then things are fine.
|
st119004
|
I am not sure if this is Pytorch related…apologies if not.
In [1]: %cpaste
Pasting code; enter '--' alone on the line to stop or use Ctrl-D.
import numpy as np
import torch
a = torch.from_numpy(np.random.rand(5000,100000).astype(np.float32))
b = torch.from_numpy(np.random.rand(5000,100000).astype(np.float32))
c = a.cuda()
d = b.cuda()
print(a.dot(b))
print(c.dot(d))
:<EOF>
124996952.0
124997016.0
|
st119005
|
Hi,
That is very likely due to the limited numerical precision of float32. If you use np.float64, the values should be much closer to each other (but it might be a bit large for the GPU memory).
I believe that torch hands the CPU calculation to a numerical library (with various options), so the method to compute the dot product can be slightly different between the two.
(I have also run into this e.g. when computing the mean over all images on a dataset with float32.)
|
st119006
|
Thanks, you are right about float64. The number of differing digits is similar (it depends on the experiment), but the numbers are much closer.
import numpy as np
import torch
a = torch.from_numpy(np.random.rand(5000,100000).astype(np.float64))
b = torch.from_numpy(np.random.rand(5000,100000).astype(np.float64))
c = a.cuda()
d = b.cuda()
print(a.dot(b))
print(c.dot(d))
:<EOF>
125000868.65247717
125000868.65247723
|
st119007
|
There’s also a possible difference in the execution order of the operations. I suppose dot product in CPU is done in sequence, while in GPU there must be a reduction.
Example:
CPU:
(((a + b) + c) + d)
GPU:
((a+b) + (c+d))
|
st119008
|
@kmichaelkills Thanks a lot for the answer. Out of interest, what is the reason for this difference in computing the dot product? Why there must be a reduction in GPU but it is sequential in CPU?
|
st119009
|
it is because GPUs have thousands of cores, and doing a map-reduce style computation best exploits the parallelism of GPUs
|
st119010
|
Once that I specify that my model will operate in the GPU (i.e. model.cuda()), then I also have to convert the inputs and targets to CUDA with .cuda() and get the results back with .cpu(). Is there a way to just tell pytorch to copy the data to GPU when required, and bring back the results to main memory when needed?
|
st119011
|
I am trying to adapt Keras' merge(mode='avg'). What is the torch way of doing this? I tried to sum and divide by the count, but it raises RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
def forward(self, inputs):
    outputs = []
    for x in inputs:
        x = self.conv_column(x)
        x = self.clf_layer(x)
        outputs.append(x)
    # avg all outputs here !
    return outputs
|
st119012
|
Hi!
What I want to do: create a numpy array, fill this array with text data and then use the Data loader to feed the data to neural net
Is this a good way to feed data to a neural network using Pytorch?
Thanks
|
st119013
|
you should look at our examples, which give you better hints on how to achieve this:
https://github.com/pytorch/examples 1.1k
|
st119014
|
Currently it seems that only sample-wise weighting is supported.
How about having an element-wise weighting option for loss functions?
|
st119015
|
There isn't an easy workaround. You can call the loss in a for-loop over the mini-batch.
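A rough sketch of that for-loop pattern with per-sample weights (outputs, targets and weights are assumed to exist; weights here is just a plain list of per-sample floats):
criterion = nn.MSELoss()
total = 0
for i in range(outputs.size(0)):
    # weight each sample's loss individually
    total = total + weights[i] * criterion(outputs[i:i+1], targets[i:i+1])
loss = total / outputs.size(0)
loss.backward()
For true element-wise weighting of simple losses, another option is to write the loss out of basic tensor ops on Variables (multiply by a weight tensor of the same shape before summing), since autograd differentiates those directly.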
|
st119016
|
Currently it seems that you set the LR once at the start, with optim.SGD for example. How do you anneal it?
|
st119017
|
If you recreate the optimizer, wouldn't it lose its state? It seems like it would. What do you mean by recreate?
|
st119018
|
I want to preprocess the text data, for example, converting each word to index and adding some pads (for seq2seq learning). Is it good way to handle this as below?
class MyDataset(torch.utils.data.Dataset):
    def __init__(self):
        self.data_files = os.listdir('data_dir')
        self.data_files.sort()

    def __getitem__(self, idx):
        data = load_file(self.data_files[idx])
        data = preprocess_data(data)  # preprocess
        return data

    def __len__(self):
        return len(self.data_files)

dset = MyDataset()
loader = torch.utils.data.DataLoader(dset, num_workers=8)
|
st119019
|
yes, this is a good way to handle text loading.
You can also look at the torchtext package for more complex examples:
https://github.com/pytorch/text (data loaders and abstractions for text and NLP)
|
st119020
|
The repo looks really good. Is there a potential timeline on when it might be released on pip?
|
st119021
|
I was following the example of SNLI classifier from pytorch official examples and found the following two classes.
class Bottle(nn.Module):
    def forward(self, input):
        if len(input.size()) <= 2:
            return super(Bottle, self).forward(input)
        size = input.size()[:2]
        out = super(Bottle, self).forward(input.view(size[0]*size[1], -1))
        return out.view(*size, -1)

class Linear(Bottle, nn.Linear):
    pass
I am not understanding the flow of execution when I use a Linear class instance. How is the class Linear associated with the class Bottle? I can understand that Linear inherits from nn.Linear, but what is the purpose of the first parameter in the class declaration - class Linear(Bottle, nn.Linear)?
Can anyone share your insight?
|
st119022
|
The Bottle layer is a custom layer that is designed to apply a Linear layer over 3D tensors by flattening out the tensor over batch and sequence length dimensions (or the first two dimensions generically).
The new Linear layer they define inherits from both nn.Linear and Bottle. This means that Linear has its __init__ and other methods the same as nn.Linear, and only the forward method is redefined as the forward method in Bottle.
@jekbradbury, do you think that it would be better if such complexities (multiple inheritance etc.,) are not there in intro examples?
|
st119023
|
@pranav
I guess this is more of a python question - But, doesn’t Bottle inherit from nn.Module alone?
Bottle.__bases__ returns (<class 'torch.nn.modules.module.Module'>,). So how does super(Bottle, self).forward(...) call Linear's forward (which in turn falls back to nn.Linear's forward)?
Shouldn’t it call nn.Module's forward which should raise a Not Implemented error?
|
st119024
|
In the model that they subsequently write, they use the Linear layer that they defined through multiple inheritance and not the Bottle layer.
|
st119025
|
This is the standard Python way to implement a "mixin", although it's a little confusing for people who are used to Java/C++ inheritance. Python's super function is poorly named; it should really be something like next_in_method_resolution_order; if you check out help(Linear) at the interactive prompt it will tell you that its MRO is Linear, Bottle, nn.Linear, nn.Module, object, so calling Linear.forward will call Bottle.forward and the super call inside that method will call nn.Linear.forward.
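You can check this yourself at the prompt (output abbreviated, the exact class paths will be longer):
print(Linear.__mro__)
# (Linear, Bottle, torch.nn.Linear, torch.nn.Module, object)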
|
st119026
|
I believe it's best if we do not have such concepts at all - just simple Python, letting PyTorch and the model take center stage.
|
st119027
|
Hi all,
I want to train a large CNN with limited amount of data.
So I need some random cropping and horizontal flipping to train this net.
How can I get the data augmentation by the torchvision.transforms ?
Could you please show me some examples?
Thanks.
|
st119028
|
you can look at the imagenet example to get yourself started. Here are the relevant lines:
https://github.com/pytorch/examples/blob/master/imagenet/main.py#L95-L103
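In short, the data augmentation part of that example boils down to something like this (a sketch; the crop size and normalization statistics are the usual ImageNet values, adjust them and the path to your own data):
import torchvision.transforms as transforms
import torchvision.datasets as datasets

train_transform = transforms.Compose([
    transforms.RandomSizedCrop(224),        # random crop, then resize to 224x224
    transforms.RandomHorizontalFlip(),      # random horizontal flip
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

train_dataset = datasets.ImageFolder('path/to/train', transform=train_transform)
The whole composed transform is applied to each image every time it is loaded, so each access yields a single randomly augmented image.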
|
st119029
|
Will the composed transform give just one output, or does every transform give an output image? Only the last one would then be the data augmentation method.
|
st119030
|
I was playing around with the MNIST example, and noticed that using TOO big batches (10,000 images per batch) seems to be hurting accuracy scores.
This got me thinking : What are the general rules of thumb you guys have found when it comes to batch sizes?
Also, what is your favorite optimizer and activation function?
Thanks!
|
st119031
|
The batch size is usually set between 64 and 256.
The batch size does have an effect on the final test accuracy. One way to think about it is that smaller batches means that the number of parameter updates per epoch is greater. Inherently, this update will be much more noisy as the loss is computed over a smaller subset of the data. However, this noise seems to help the generalization of the model.
Refer to: https://arxiv.org/abs/1609.04836 438 and https://arxiv.org/abs/1703.04933 258 for two recent studies on this subject
|
st119032
|
Thank you for the papers! I was trying to minimize noise through massive batches. The papers you provided are a great explanation of why massive batches may not work.
Thanks!
|
st119033
|
Currently I would like to perform this numpy equivalent command
a = np.random.randn(5, 4, 3, 3)
a[range(5), :, [0, 1, 1, 2, 0], [1, 2, 0, 1, 0]]
I am a bit confused how to use torch.gather() to achieve this.
|
st119034
|
It’s a bit involved to do it currently with pytorch.
A proof of concept that was working before some pytorch changes can be seen in https://gist.github.com/fmassa/f8158d1dfd25a8047c2c668a44ff57f4 198
(but you first need to perform the 2 slicings before using the advanced indexing)
|
st119035
|
I want to make a pixel-wise softmax loss function.
That is, my predicted value would be a matrix of integers and the target would also be a matrix of integers.
Is there a way I can create this loss function in PyTorch so that it stores the gradients for backprop?
|
st119036
|
There are other threads showcasing making custom loss functions. Any of those should get you started.
|
st119037
|
How would I apply a different learning rate to different portions of a model? Would it be as simple as creating two optimizers with different sets of model parameters and calling optimizer.step() on both for each batch?
|
st119038
|
Check the Per-parameter options section here: http://pytorch.org/docs/optim.html 6.2k
Instead of feeding in a generator over all parameters, pass an iterable of dicts, each with the key params and the value as the parameter group. It should be simple to group your model params according to the learning rates you want to apply to them.
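A short sketch of what that looks like (the attribute names features and classifier are placeholders for your own sub-modules):
optimizer = optim.SGD([
    {'params': model.features.parameters()},                 # uses the default lr given below
    {'params': model.classifier.parameters(), 'lr': 1e-2},   # its own, larger learning rate
], lr=1e-3, momentum=0.9)

optimizer.step()   # a single step updates both groups, each with its own learning rate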
|
st119039
|
Hi,
I’ve seen some recent progress on SparseTensors, and I have some questions:
Do You have a roadmap for the whole sparse module? I’m curious if you’re planning to use cuSPARSE and if there is any way to contribute to this module.
I’d need a backward pass for sparse_mat @ dense_mat. What are my options? Can I write a Function using the available spmm for forward, and add some quick-and-dirty cython code to do the backward? This would mean returning a sparse grad, is this supported?
Thank You for your time, I’m really enjoying PyTorch so far.
|
st119040
|
See this thread: Exploiting sparsity in batch operations?
torch.smm implements sparse * dense -> sparse, and it assumes your sparsity is strong enough that it'll help (you'll need a really sparse tensor in the forward op). Sparse gradients are supported, and are implemented in the Embedding module.
|
st119041
|
Hi,
for now sparse tensor support basically evolves with the needs of our respective projects. Basic GPU support is being worked on - it will rely on cuSPARSE for some operations.
returning sparse gradients from backward should work
in addition to the thread @ebetica pointed to, since pytorch supports hybrid tensors (i.e. tensors with both sparse and dense dimensions), we may add a sparse * dense -> hybrid function in the future (where “hybrid” here means one sparse dimension and one dense dimension). That would probably be more efficient in this case.
|
st119042
|
@martinraison @ebetica thanks for the answers.
I’ve just noticed that there is also an implementation of SparseLinear here 39. Can I use those
c functions to create a Function which will simply call self._backend.SparseLinear_accGradParameters, just like Embedding Function does (link 6)? Or there is some catch here?
I know that You guys probably have SparseLinear module on your roadmap, but I wanted to play around with it during the weekend, hence the question.
|
st119043
|
Hi, I am trying to implement a customized loss function as follows; can it work without a customized backward routine? Actually, how do I know whether the operations are supported by autograd automatically, so that I don't need to specify the backward pass?
class Loss(torch.autograd.Function):
    '''
    Implement the loss function from output from RNN.
    Ref paper: https://arxiv.org/abs/1308.0850
    '''
    def __init__(self):
        '''
        x is sequence of coordinates with dim (batch, seq_length, 3).
        Parameters are sequence of output from rnn with dim (batch, seq_length, 128).
        '''
        self.e = []    # predicted end of stroke probability scalar
        self.m1 = []   # vector of means for x1 with len 20
        self.m2 = []   # vector of means for x2 with len 20
        self.pi = []   # vector of mixture density network coefficients with len 20
        self.rho = []  # vector of correlation with len 20
        self.s1 = []   # vector of standard deviation for x1 with len 20
        self.s2 = []   # vector of standard deviation for x2 with len 20
        self.x1 = []   # x1 coordinate at t+1
        self.x2 = []   # x2 coordinates at t + 1
        self.et = []   # end of probability indicator from ground truth
        self.parameters = []
        self.batch = 0       # batch size
        self.seq_length = 0  # reduce by 1 because loss is calculated at t+1 timestamp

    def forward(self, x, para):
        '''
        Implement eq 26 of ref paper for single time stamp.
        '''
        self.save_for_backward(para)
        total_loss = 0
        for i in range(self.seq_length):
            # prepare parameters
            self.__get_para(i, x, para)
            normalpdf = self.__para2normal(self.x1, self.x2, self.m1, self.m2,
                                           self.s1, self.s2, self.rho)  # dim (n_batch, 20)
            single_loss = self.__singleLoss(normalpdf)
            total_loss += single_loss
        return total_loss

    def __get_para(self, i, x, para):
        '''
        Slice and process parameters to the right form.
        Implementing eq 18-23 of ref paper.
        '''
        self.batch = x.size()[0]
        self.e = torch.sigmoid(-para[:, i, 0])  # eq 18
        self.parameters = para
        self.seq_length = x.size()[1] - 1  # reduce by 1 because loss is calculated at t+1 timestamp

        # slice remaining parameters and training inputs
        self.pi, self.m1, self.m2, self.s1, self.s2, self.rho = \
            torch.split(self.parameters[:, i, 1:], 6, dim=1)
        self.x1 = x[:, i+1, 0]
        self.x2 = x[:, i+1, 1]
        self.et = x[:, i+1, 2]

        ## process parameters
        # pi
        max_pi = torch.max(self.pi, dim=1)[0]
        max_pi = max_pi.expand_as(self.pi)
        self.pi.sub_(max_pi)
        red_sum = torch.sum(self.pi, dim=1).expand_as(self.pi)
        self.pi.div_(red_sum)

        # sd
        self.s1.exp_()
        self.s2.exp_()

        # rho
        self.rho.tanh_()

        # reshape ground truth x1, x2 to match m1, m2 because broadcasting is currently not supported by pytorch
        self.x1.expand_as(self.m1)
        self.x2.expand_as(self.m2)

    def __para2normal(self, x1, x2, m1, m2, s1, s2, rho):
        '''
        Implement eq 24, 25 of ref paper.
        '''
        norm1 = x1.sub(m1)
        norm2 = x2.sub(m2)
        s1s2 = torch.mul(s1, s2)
        z = torch.pow(torch.div(norm1, s1), 2) + torch.pow(torch.div(norm2, s2), 2) - \
            2*torch.div(torch.mul(rho, torch.mul(norm1, norm2)), s1s2)
        negRho = 1 - torch.pow(rho, 2)
        expPart = torch.exp(torch.div(-z, torch.mul(negRho, 2)))
        coef = 2*np.pi*torch.mul(s1s2, torch.sqrt(negRho))
        result = torch.div(expPart, coef)
        return result

    def __singleLoss(self, normalpdf):
        '''
        Calculate loss for single time stamp.
        Input: normalpdf (n_batch, 20).
        '''
        epsilon = 1e-20  # floor of loss from mixture density component since initial loss could be zero
        mix_den_loss = torch.mul(self.pi, normalpdf)
        red_sum_loss = torch.sum(mix_den_loss)  # sum for all batch
        end_loss = torch.mul(self.e, self.et) + torch.mul(1 - self.e, 1 - self.et)
        total_loss = -torch.log(torch.clamp(red_sum_loss, min=epsilon)) - torch.log(end_loss)

        return total_loss / self.batch
|
st119044
|
If you write the loss function as a subclass of nn.Module then you don't need to specify the backward. If you write it as a subclass of autograd.Function you have to specify the backward.
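For example, a much simpler loss than yours, written as an nn.Module with only a forward (just an illustration of the pattern, not the loss from your paper):
class MaskedMSELoss(nn.Module):
    def forward(self, output, target, mask):
        # all inputs are Variables, so autograd records the graph and backward comes for free
        diff = (output - target) * mask
        return torch.sum(diff * diff) / torch.sum(mask)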
|
st119045
|
I use an embedding layer to project one-hot indices to continuous space. However, during the training, I don’t want to update the weight of it. How could I do that?
|
st119046
|
you can set the weight of the embedding layer to not require grad.
m = nn.Embedding(...)
m.weight.requires_grad=False
|
st119047
|
Oh, sorry. After setting Embedding.weight.requires_grad = False, an error was raised.
ValueError: optimizing a parameter that doesn't require gradients
The optimizer is used in the following way:
self.optimizer = optim.Adadelta(self.model.parameters(), lr=args.learning_rate)
And, model is defined as follow:
class DecomposableModel(nn.Module):
    def __init__(self, word_embedding, config):
        super(DecomposableModel, self).__init__()
        self.name = 'DecomposableModel'
        self.drop_p = config['drop_p']

        self.word_dim = word_embedding.embeddings.size(1)
        self.embedding = nn.Embedding(word_embedding.embeddings.size(0), self.word_dim)
        self.embedding.weight = nn.Parameter(word_embedding.embeddings)
        self.embedding.weight.requires_grad = False
        # self.embedding_normalize()

        self.F = nn.Linear(self.word_dim, config['F_dim'])
        self.G = nn.Linear(2 * self.word_dim, config['G_dim'])
        self.H = nn.Linear(2 * config['G_dim'], config['relation_num'])

        self.cuda_flag = config['cuda_flag']

    def forward(self, p_ids, h_ids):
        ...
|
st119048
|
Hi,
Please see this post Freeze the learnable parameters of resnet and attach it to a new network that solves exactly your problem.
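The gist of the fix described there, applied to your optimizer (a sketch, not quoted verbatim from that post): only hand the parameters that still require gradients to the optimizer.
trainable = [p for p in self.model.parameters() if p.requires_grad]
self.optimizer = optim.Adadelta(trainable, lr=args.learning_rate)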
|
st119049
|
Hi, I am new to pytorch. I want to implement a customized layer and insert it between two LSTM layers within a RNN network.
The layer should take input h and do the following:
parameters = W*h + b      # W is the weight of the layer
a = parameters[0:x]
b = parameters[x:2*x]
k = parameters[2*x:]
return some_complicated_function(a, b, k)
It seems that both autograd Function and nn.Module are used to design customized layers.
My question is
What are the differences between them in a single-layer case?
An autograd Function usually takes weights as input arguments. Can it store weights internally?
Which one should I pick for my implementation?
When do I need to specify a backward function, given that gradients are all auto-computed?
Thanks!
|
st119050
|
Hi,
This post Difference of methods between torch.nn and functional should answer most of your questions.
2: I would say nn.Module since you have parameters
3: You need to specify the backward function if you implement a Function, because it works with Tensors. On the other hand, nn.Modules work with Variables and are thus differentiated with autograd.
|
st119051
|
I went ahead and installed the new .whl that @smth put out, (changing to cu80 since I have cuda 8.0), and the install seemed to go well. I can train a net, but I am wondering if I need to do anything else to make sure that it uses my GPUs? When I do nvidia-smi I do not see any activity… how can I force it to train using my GPUs? thanks.
EDIT:
I use net.cuda() and this seems to work. However when I try to run my script I get this error:
Traceback (most recent call last):
  File "test.py", line 266, in <module>
    yEst = net.forward_prop(currentBatchData)
  File "test.py", line 126, in forward_prop
    x = F.max_pool2d(F.relu(self.conv1(x)), (2,2))
  File "/data/venv/pytorch/local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 210, in __call__
    result = self.forward(*input, **kwargs)
  File "/data/venv/pytorch/local/lib/python2.7/site-packages/torch/nn/modules/conv.py", line 235, in forward
    self.padding, self.dilation, self.groups)
  File "/data/venv/pytorch/local/lib/python2.7/site-packages/torch/nn/functional.py", line 37, in conv2d
    return f(input, weight, bias) if bias is not None else f(input, weight)
  File "/data/venv/pytorch/local/lib/python2.7/site-packages/torch/nn/_functions/conv.py", line 33, in forward
    output = self._update_output(input, weight, bias)
  File "/data/venv/pytorch/local/lib/python2.7/site-packages/torch/nn/_functions/conv.py", line 88, in _update_output
    return self._thnn('update_output', input, weight, bias)
  File "/data/venv/pytorch/local/lib/python2.7/site-packages/torch/nn/_functions/conv.py", line 147, in _thnn
    return impl[fn_name](self, self._bufs[0], input, weight, *args)
  File "/data/venv/pytorch/local/lib/python2.7/site-packages/torch/nn/_functions/conv.py", line 219, in call_update_output
    bias, *args)
TypeError: FloatSpatialConvolutionMM_updateOutput received an invalid combination of arguments - got (int, torch.FloatTensor, torch.FloatTensor, torch.cuda.FloatTensor, torch.cuda.FloatTensor, torch.FloatTensor, torch.FloatTensor, long, long, int, int, int, int), but expected (int state, torch.FloatTensor input, torch.FloatTensor output, torch.FloatTensor weight, [torch.FloatTensor bias or None], torch.FloatTensor finput, torch.FloatTensor fgradInput, int kW, int kH, int dW, int dH, int padW, int padH)
(If I do not try to run net.cuda() my script works find on the CPU).
|
st119052
|
Look like your input haven’t gotten sent to GPU yet.
Just to be sure, did you call something akin to input = Variable(input.cuda()) before forward it through the net?
|
st119053
|
@NgPDat Ok so just to be sure, all I did was run myDNN.cuda() after the net was created but before the training loop, and in the forward prop, I have myDNN.forward(input.cuda()). However this still gives me the same error…
|
st119054
|
Nevermind, I think it works now: I had forgotten to do the same for the labels as well. Thanks!
|
st119055
|
Hi,
I installed PyTorch by Docker image pytorch-cudnnv6 on a VM following [https://github.com/pytorch/pytorch#installation 19].
Then I tried to translate a test text using the pretrained model onmt_model_en_fr_b1M published on https://github.com/OpenNMT/OpenNMT-py 8 with the command:
python translate.py -model ../onmt-model/onmt_model_en_fr_b1M-261c69a7.pt -src ../test.txt -output ../test.tok
, but failed with the following error:
Traceback (most recent call last):
File "translate.py", line 116, in <module>
main()
File "translate.py", line 55, in main
translator = onmt.Translator(opt)
File "/root/OpenNMT-py/onmt/Translator.py", line 11, in __init__
checkpoint = torch.load(opt.model)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/torch/serialization.py", line 222, in load
return _load(f, map_location, pickle_module)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/torch/serialization.py", line 355, in _load
return legacy_load(f)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/torch/serialization.py", line 300, in legacy_load
obj = restore_location(obj, location)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/torch/serialization.py", line 85, in default_restore_location
result = fn(storage, location)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/torch/serialization.py", line 67, in _cuda_deserialize
return obj.cuda(device_id)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/torch/_utils.py", line 56, in _cuda
with torch.cuda.device(device):
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/torch/cuda/__init__.py", line 136, in __enter__
_lazy_init()
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/torch/cuda/__init__.py", line 96, in _lazy_init
_check_driver()
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/torch/cuda/__init__.py", line 70, in _check_driver
http://www.nvidia.com/Download/index.aspx""")
AssertionError:
Found no NVIDIA driver on your system. Please check that you
have an NVIDIA GPU and installed a driver from
http://www.nvidia.com/Download/index.aspx
It looks like there is no GPU support. Yes, I'm using a VM, so I don't have a GPU. Is this right?
My question is: how can I use the CPU with the OpenNMT commands in this environment, so as to avoid the error and get it working?
I then tried compiling and installing PyTorch from source without CUDA support on a VM following [https://github.com/pytorch/pytorch#installation 19], and executed the same command above again. And I also got the error below:
Traceback (most recent call last):
File "translate.py", line 123, in <module>
main()
File "translate.py", line 56, in main
translator = onmt.Translator(opt)
File "/root/OpenNMT-py/onmt/Translator.py", line 12, in __init__
checkpoint = torch.load(opt.model)
File "/root/anaconda3/lib/python3.6/site-packages/torch/serialization.py", line 229, in load
return _load(f, map_location, pickle_module)
File "/root/anaconda3/lib/python3.6/site-packages/torch/serialization.py", line 362, in _load
return legacy_load(f)
File "/root/anaconda3/lib/python3.6/site-packages/torch/serialization.py", line 307, in legacy_load
obj = restore_location(obj, location)
File "/root/anaconda3/lib/python3.6/site-packages/torch/serialization.py", line 85, in default_restore_location
result = fn(storage, location)
File "/root/anaconda3/lib/python3.6/site-packages/torch/serialization.py", line 67, in _cuda_deserialize
return obj.cuda(device_id)
File "/root/anaconda3/lib/python3.6/site-packages/torch/_utils.py", line 57, in _cuda
with torch.cuda.device(device):
File "/root/anaconda3/lib/python3.6/site-packages/torch/cuda/__init__.py", line 129, in __enter__
_lazy_init()
File "/root/anaconda3/lib/python3.6/site-packages/torch/cuda/__init__.py", line 89, in _lazy_init
_check_driver()
File "/root/anaconda3/lib/python3.6/site-packages/torch/cuda/__init__.py", line 56, in _check_driver
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
Could anyone help? Thanks a lot!
|
st119056
|
@lifengd I think that example requires a GPU, or has a command-line flag controlling whether one is required.
|
st119057
|
@smth
Hi Smth,
Do you mean the Python edition of OpenNMT requires a GPU? It won't work with CPU only on a VM?
And it seems there is no such control flag in the command help:
root@1bf56383b1ca:~/OpenNMT-py# python translate.py --help
usage: translate.py [-h] -model MODEL -src SRC [-tgt TGT] [-output OUTPUT]
[-beam_size BEAM_SIZE] [-batch_size BATCH_SIZE]
[-max_sent_length MAX_SENT_LENGTH] [-replace_unk]
[-verbose] [-n_best N_BEST] [-gpu GPU]
translate.py
optional arguments:
-h, --help show this help message and exit
-model MODEL Path to model .pt file
-src SRC Source sequence to decode (one line per sequence)
-tgt TGT True target sequence (optional)
-output OUTPUT Path to output the predictions (each line will be the
decoded sequence
-beam_size BEAM_SIZE Beam size
-batch_size BATCH_SIZE
Batch size
-max_sent_length MAX_SENT_LENGTH
Maximum sentence length.
-replace_unk Replace the generated UNK tokens with the source token
that had the highest attention weight. If phrase_table
is provided, it will lookup the identified source
token and give the corresponding target token. If it
is not provided (or the identified source token does
not exist in the table) then it will copy the source
token
-verbose Print scores and predictions for each sentence
-n_best N_BEST If verbose is set, will output the n_best decoded
sentences
-gpu GPU Device to run on
Thanks!
|
st119058
|
Hi, it's my second day with PyTorch. I have run into this problem:
a = Variable(torch.randn(5, 3))
b = Variable(torch.randn(3))
print(a/b)
It is throwing an error:
RuntimeError: inconsistent tensor size at /data/users/soumith/builder/wheel/pytorch-src/torch/lib/TH/generic/THTensorMath.c:869
What should I do now? In numpy, I would have done so through broadcasting:
x = np.random.randn(5,3)
y = np.random.randn(3)
print(x/y)
|
st119059
|
Thanks for the suggestion. Also, what is the reason that broadcasting is not supported?
|
st119060
|
I defined a function in which temporary variables are created, for example:
def foo(input_variable):
    tmp = Variable(torch.zeros(input_variable.size()))
    return torch.cat((input_variable, tmp), 1)
This function runs well on the CPU; however, when run on the GPU it breaks because tmp is created on the CPU by default.
Is there any elegant way to make this transparent, i.e. so that tmp is created on the same device as input_variable?
|
st119061
|
tensor.new 67 may be helpful
By the way, you can format your code by wrapping it in a fenced block: put a line with ```python before it and a line with ``` after it. It will then be rendered like this:
def foo(input_variable):
    tmp = Variable(torch.zeros(input_variable.size()))
    return torch.cat((input_variable, tmp), 1)
It's markdown syntax.
|
st119062
|
How do you use torch.Tensor.new ?
The following works when X is on the CPU but fails if it's not:
>>> X = Variable(torch.zeros(2, 2))
>>> torch.Tensor.new(X.data, torch.zeros(2, 2))
0 0
0 0
[torch.FloatTensor of size 2x2]
>>> X = Variable(torch.zeros(2, 2)).cuda()
>>> torch.Tensor.new(X.data, torch.zeros(2, 2))
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-5-44093d4f4ab5> in <module>()
----> 1 torch.Tensor.new(X.data, torch.zeros(2, 2))
TypeError: unbound method new() must be called with FloatTensor instance as first argument (got FloatTensor instance instead)
|
st119063
|
Hi,
If you have an input Tensor a, you should replace torch.FloatTensor(2,2) by a.new(2,2) to create a Tensor of the same type as a.
If you want to created tensor to be zeroed out, you can do b = a.new(2,2).zero_().
This will work for a being any type (cuda included).
If your second tensor already exist, you can also look into the type_as 8 method to change the type of a tensor to the type of another tensor.
In your case it was not working because torch.Tensor.new(X.data, torch.zeros(2, 2)) is equivalent, for X.data being a cuda.FloatTensor, to torch.cuda.FloatTensor(torch.FloatTensor(2,2).zero_()), meaning that you try to create a cuda tensor from a cpu tensor, which is not allowed.
|
st119064
|
After trying out with x.new(), it turns out it’s much slower than creating a Variable on CPU then moving it into GPU.
For example, I experimented with 3 ways of adding noise to input variable x:
Way 1:
noise =x.data.new(np.random.rand(*x.size()))
Way 2:
noise =torch.from_numpy(np.random.rand(*x.size()).astype(np.float32)).cuda()
Way 3:
noise = x.data.clone().normal_()
And then
x.data += noise
For batch size = 128, on my model way 1 runs 3 times slower than way 2 & 3 (~4s vs ~1.6s). Way3 is slightly faster than way 2.
Is this supposed to be or am I doing wrong? Besides, is there any way to get the device a tensor currently resides on?
|
st119065
|
what you want to do is:
noise = x.data.new(x.size()).normal_()
That will be the fastest.
|
st119066
|
smth:
noise = torch.data.new(x.size()).normal_()
did you mean x.data.new(...) ?
|
st119067
|
do
noise = x.data.new(*x.size()).normal_()
You should avoid the CPU -> GPU copy, that was what made it slow. Also use torch's built-in functions when available, even though it won't cost much to convert between numpy.ndarray and torch.Tensor.
as far as I know, tensor.new has 3 usages:
tensor.new(size1, size2, …)
tensor.new(Tensor) or tensor.new(ndarray)
tensor.new() which would create tensor, then you can resize like a.new().resize_as_(b)
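A quick illustration of the three usages (plain CPU tensor here, but the point is that the result always matches a's type and device):
a = torch.FloatTensor(4, 5)        # could just as well be a torch.cuda.FloatTensor
b = a.new(2, 3)                    # usage 1: new (uninitialized) tensor of the given size
c = a.new([[1, 2], [3, 4]])        # usage 2: new tensor built from data (nested list / ndarray)
d = a.new().resize_as_(a).zero_()  # usage 3: empty tensor, then resized (and zeroed) like another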
|