st118668
|
Hello,
I have a question about the “batch_first” parameter of both RNN and pack_padded_sequence.
Say I have a tensor AxBxC, where A = batch_size, B = seq_len.
I convert this tensor to a sequence with pack_padded_sequence(…, batch_first = True)
Now I input this converted sequence to an RNN, so the question is, if I need to set this RNN batch_first = True too?
Thanks!
|
st118669
|
Setting batch_first doesn’t affect the output of an RNN if the input is a packed sequence… just tried…
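A minimal sketch of the setup in question (hypothetical sizes): batch_first on pack_padded_sequence describes the layout of the padded tensor, while a packed input makes the RNN’s own batch_first flag irrelevant.
import torch
from torch.autograd import Variable
from torch.nn.utils.rnn import pack_padded_sequence

x = Variable(torch.randn(4, 10, 32))    # batch=4, seq_len=10, features=32
lengths = [10, 8, 6, 3]                 # must be sorted in decreasing order
packed = pack_padded_sequence(x, lengths, batch_first=True)
rnn = torch.nn.RNN(32, 64)              # batch_first left at its default
out, h = rnn(packed)                    # packed input: batch_first is ignored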
|
st118670
|
Anyone have any good ways to grab just the weights and not the biases from a module?
I can think of the following somewhat hacky solution: filter by dimension, since all biases should be one-dimensional, maybe?
for param in model.parameters():
    if param.dim() > 1:
        # this is a weight parameter
|
st118671
|
There is a very recent PR that adds a named_parameters functionality, so that you can select the parameters by name. You need the master branch of pytorch for it to work. Here is an example:
for name, param in model.named_parameters():
    if 'bias' not in name:
        # do something
|
st118672
|
Hi,
As far as I know, I can just browse the model by enumerating model.modules().
Any good tool to visualize the model ?
|
st118673
|
Hello guys,
I would like to know how to solve the following problem. Assume my code is:
class MyNet(nn.Module):
    def __init__(self, extractor):
        super(MyNet, self).__init__()
        self.features = nn.Sequential(
            # Select feature layers
            *list(extractor.children())[:-2]
        )
        self.maxpool1 = nn.MaxPool2d(2, 2)
        self.conv1 = nn.Conv2d(512, 1024, 3, padding=1)
        self.batchNorm1 = nn.BatchNorm2d(1024)
        self.conv2 = nn.Conv2d(1024, 512, 1)
        self.batchNorm2 = nn.BatchNorm2d(512)
        self.conv3 = nn.Conv2d(512, 1024, 3, padding=1)
        self.batchNorm3 = nn.BatchNorm2d(1024)
        self.conv4 = nn.Conv2d(1024, 512, 1)
        self.batchNorm4 = nn.BatchNorm2d(512)
        self.conv5 = nn.Conv2d(512, 1024, 3, padding=1)
        self.batchNorm5 = nn.BatchNorm2d(1024)
        self.final = nn.Conv2d(1024, 30, 1)

    def forward(self, input):
        output = self.features(input)
        output = self.maxpool1(output)
        output = f.leaky_relu(self.batchNorm1(self.conv1(output)), 0.1)
        output = f.leaky_relu(self.batchNorm2(self.conv2(output)), 0.1)
        output = f.leaky_relu(self.batchNorm3(self.conv3(output)), 0.1)
        output = f.leaky_relu(self.batchNorm4(self.conv4(output)), 0.1)
        output = f.leaky_relu(self.batchNorm5(self.conv5(output)), 0.1)
        output = f.dropout(output, p=0.5)
        output = self.final(output)
        output = f.sigmoid(output)
        return output
And here is a basic initialization of the above class:
resnet18 = torchvision.models.resnet18(pretrained=True)
net = MyNet(resnet18)
for param in net.features.parameters():
    param.requires_grad = False
imageSize = (448, 448)
nc = 3
input = V(torch.randn(1, nc, imageSize[0], imageSize[1]))
if cuda:
    net.cuda()
    input = input.cuda()
output = net(input)
But when I run the above code, I receive the following error:
Traceback (most recent call last):
  File "MyCode.py", line 194, in <module>
    output = net(input)
  File "/home/mlcmdeep/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "MyCode.py", line 110, in forward
    output = self.features(input)
  File "/home/mlcmdeep/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/mlcmdeep/anaconda3/lib/python3.6/site-packages/torch/nn/modules/container.py", line 64, in forward
    input = module(input)
  File "/home/mlcmdeep/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/mlcmdeep/anaconda3/lib/python3.6/site-packages/torch/nn/modules/container.py", line 64, in forward
    input = module(input)
  File "/home/mlcmdeep/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/mlcmdeep/anaconda3/lib/python3.6/site-packages/torchvision-0.1.8-py3.6.egg/torchvision/models/resnet.py", line 46, in forward
  File "/home/mlcmdeep/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/mlcmdeep/anaconda3/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py", line 43, in forward
    self.training, self.momentum, self.eps)
  File "/home/mlcmdeep/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py", line 438, in batch_norm
    f = torch._C._functions.BatchNorm(running_mean, running_var, training, momentum, eps, torch.backends.cudnn.enabled)
AttributeError: 'str' object has no attribute 'enabled'
Could you please help me?
|
st118674
|
I need elementwise multiplication between a 3-dimensional array and a vector. I understand broadcasting is not yet supported in pytorch, so I select a single element from my vector in a for loop as float(my_vector.data.numpy()[i]) and multiply it by 2d slices of my 3d matrix.
I need to update both the 3d array and the vector using the gradient. But when I call backward on my loss, only the 3d array’s grad attribute is populated with gradients; the vector’s grad attribute is left untouched. I am guessing this is because of the odd way I am accessing elements from the vector for multiplication (not sure, though). The problem is that torch.mul supports either passing a float value for scalar multiplication, or a FloatTensor for elementwise multiplication. Can someone suggest a modification to my code so that the gradients for the vector are also computed?
|
st118675
|
Nevermind, I figured out the solution. It was because of the way I was accessing the elements of the vector. If I use expand to turn my vector into the same dimension as the matrix, it works. Thank you.
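For anyone who hits the same issue, a minimal sketch of that fix (hypothetical shapes):
import torch
from torch.autograd import Variable

m = Variable(torch.randn(4, 3, 3), requires_grad=True)   # the 3d tensor
v = Variable(torch.randn(4), requires_grad=True)         # one scalar per 2d slice
prod = m * v.view(-1, 1, 1).expand_as(m)                 # no .data access, stays differentiable
prod.sum().backward()
print(v.grad)                                            # now populated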
|
st118676
|
I am trying to install pytorch on macOS sierra. I have encountered the following error.
make[2]: *** [CMakeFiles/THC.dir/THC_generated_THCTensorRandom.cu.o] Error 1
make[1]: *** [CMakeFiles/THC.dir/all] Error 2
make: *** [all] Error 2
|
st118677
|
Does torch.diag work on Variables without calling .data? I’m getting weird behavior:
x = Variable(torch.ones((5,5)))
print(torch.diag(x)) # prints diag
print(torch.diag(x,1)) # still prints middle diag
print(torch.diag(x,2)) # still prints middle diag
print(torch.diag(x.data, 2)) # prints correct diag
EDIT: this was a small bug which should be fixed now.
|
st118678
|
I’m a former Theano user and when using Theano, if you wanted to use the GPU and you had enough GPU memory available, you would set your whole dataset as theano.shared() to avoid going back and forth between CPU and GPU.
With PyTorch I’m not sure if there’s an equivalent to this… Let’s say I’m using a DataLoader class to iterate over my dataset minibatches. In the examples I’ve seen, you would only call data_batch.cuda() within the training loop, which makes me think that we’re only passing the data to the GPU as we are training.
Am I wrong? What is the best practice here in order to get full advantage of the GPU?
|
st118679
|
Hi, I’m not a pytorch expert nor a pytorch developer, but have you tried increasing the mini-batch size? Increase it until you get a resource allocation error, or something equivalent, where it cannot allocate any more memory. After that, decrease it until you find the sweet spot.
|
st118680
|
To add to James’ comment:
http://pytorch.org/docs/notes/cuda.html#use-pinned-memory-buffers
Rather than placing everything on the GPU in one go, you can ensure that each CPU-GPU memory transfer is as fast as possible.
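A minimal sketch of that idea (assuming an existing Dataset named train_set): pinned host memory makes each host-to-device copy as fast as possible, so transferring one minibatch at a time inside the loop stays cheap.
loader = torch.utils.data.DataLoader(train_set, batch_size=64,
                                     shuffle=True, pin_memory=True)
for data, target in loader:
    data, target = data.cuda(), target.cuda()   # one minibatch at a time
    # ... training step ...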
|
st118681
|
The following code results in nan, but as far as I can tell it shouldn’t?
def batch_outer_product(x: torch.autograd.Variable, y: torch.autograd.Variable):
    result = []
    for xv, yv in zip(x, y):
        result.append(xv.ger(yv).view(1, -1))
    result = torch.cat(result, 0)
    return result
x = torch.autograd.Variable(torch.ones(50,64), requires_grad=True)
y = torch.autograd.Variable(torch.ones(50,64), requires_grad=True)
xy = batch_outer_product(x,y)
loss = xy.sum()
print(loss)
|
st118682
|
Thanks for the report. It’s a bug, and I’ve fixed it in this PR https://github.com/pytorch/pytorch/pull/1236 that I just merged into master.
It will be in the next binary release next Wednesday, or you can install the master branch via the instructions here: https://github.com/pytorch/pytorch#from-source
In the meanwhile, if you want to continue using v0.1.11, then you can use this workaround:
import torch
import torch.autograd

torch.set_printoptions(profile='full')

def batch_outer_product(x: torch.autograd.Variable, y: torch.autograd.Variable):
    result = []
    for xv, yv in zip(x, y):
        out = torch.autograd.Variable(xv.data.new(xv.size(0), yv.size(0)).zero_())
        result.append(torch.addr(out, xv, yv).view(1, -1))
    result = torch.cat(result, 0)
    return result

x = torch.autograd.Variable(torch.ones(50, 64), requires_grad=True)
y = torch.autograd.Variable(torch.ones(50, 64), requires_grad=True)
xy = batch_outer_product(x, y)
loss = xy.sum()
print(loss)
|
st118683
|
Thank you for the quick response and also for the solution until the next release!
|
st118684
|
Hi, I encountered this error when doing torch.equal(X, Y) between two variables. I can circumvent this by doing torch.equal(X.data, Y.data). My question is: when running on GPU, will this solution be slower than supporting stateless equal() natively? Thanks!
|
st118685
|
You are using the GPU with torch.equal(X.data, Y.data). Some functions are not differentiable or have not yet been implemented in autograd; this is one of them.
|
st118686
|
I’m reading the source code of DiscoGAN and found they use this for the two generators’ parameters:
gen_params = chain(generator_A.parameters(), generator_B.parameters())
optim_gen = optim.Adam(gen_params, lr=args.learning_rate, betas=(0.5, 0.999), weight_decay=0.00001)
I can’t find documentation about what the chain operation does; could anyone please explain it to me?
|
st118687
|
It chains (concatenates) two iterable objects: chain yields the elements of the first iterator until it is exhausted, and then it yields the elements of the second one. In your code, chain puts together the parameters of the two generators so they will be optimized simultaneously. You can read more here: [docs].
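A two-line illustration of that behavior:
from itertools import chain

print(list(chain([1, 2, 3], ['a', 'b'])))   # [1, 2, 3, 'a', 'b']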
|
st118688
|
Hi, I was wondering how I should go about implementing a SBN in torch? (references 14 and 7 in https://arxiv.org/pdf/1506.05254v3.pdf, for example)
You guys put in something for stochastic nodes, with the RL example here: https://github.com/pytorch/examples/blob/master/reinforcement_learning/reinforce.py
But the neural network itself isn’t actually stochastic - just that the output is sampled. I get that this generates the next state (input to the network) and so in a way it’s like a stochastic neural network - but each stochastic node needs a reward right? Would I just use the same ‘reward’ for each stochastic neuron?
Cheers.
|
st118689
|
So I adapted the reinforce.py file to learn XOR, with the output sampled and using that to get a ‘reward’ (-loss). This is as opposed to using negative log likelihood as the loss.
A full SBN would also sample the hidden layer (whose neurons are Bernoulli r.v.s, parameterised by a sigmoid of an affine function of the previous layer).
Is there an efficient way to implement this as the full stochastic neuron? Would I call a simple one layer nn recursively, since Torch is dynamic?
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch.autograd as autograd
from torch.autograd import Variable

torch.manual_seed(543)

class Policy(nn.Module):
    def __init__(self):
        super(Policy, self).__init__()
        self.affine1 = nn.Linear(2, 10)
        self.affine2 = nn.Linear(10, 2)
        self.affine = nn.Linear(2, 2)
        self.saved_outputs = []
        self.rewards = []

    def forward(self, x):
        x = F.sigmoid(self.affine1(x))
        action_scores = self.affine2(x)
        return F.softmax(action_scores)

model = Policy()
optimizer = optim.Adam(model.parameters(), lr=1e-2)
x_input = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
target = np.array([[0.0], [1.0], [1.0], [0.0]])

for i_episode in range(400):
    for t in range(20):
        ind = np.random.randint(4)
        xin = x_input[ind]
        tar = target[ind]
        x_input_Tensor = torch.from_numpy(xin).float().unsqueeze(0)
        probs = model(Variable(x_input_Tensor))  # prob of y
        output = probs.multinomial()  # sampled from softmax
        print(xin, tar, output.data.numpy())
        model.saved_outputs.append(output)  # action is a torch.LongTensor, 0s and 1s
        reward = 1.0 * (output.data.numpy() == tar)
        model.rewards.append(reward)

saved_outputs = model.saved_outputs
rewards = []
for r in model.rewards[::-1]:
    R = r
    rewards.insert(0, R)
rewards = torch.Tensor(rewards)
rewards = (rewards - rewards.mean()) / rewards.std()
for output, r in zip(model.saved_outputs, rewards):
    output.reinforce(r)
optimizer.zero_grad()
autograd.backward(model.saved_outputs, [None for _ in model.saved_outputs])
optimizer.step()
del model.rewards[:]
del model.saved_outputs[:]
|
st118690
|
Yeah, you could just generate the probabilities of firing with a Linear followed by a Sigmoid, and then call .bernoulli() on the output. That will sample the activations. Then, when you’ll want to backpropagate the errors, you’ll need to provide a reward for every sampled value. Once you do this, you can just call .backward() on the output, and that should do it. You can reuse a single layer multiple times in a single forward pass, it’s perfectly valid in PyTorch.
I think you might have a small bug in your XOR example - you only call the optimizer once after all these iterations. Not sure if that’s what you wanted.
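A minimal sketch of the sampling part described above (hypothetical sizes and reward; the reinforce call is the same stochastic-Variable API used elsewhere in this thread):
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch import autograd
from torch.autograd import Variable

layer = nn.Linear(20, 10)
x = Variable(torch.randn(5, 20))
probs = F.sigmoid(layer(x))        # firing probabilities in (0, 1)
z = probs.bernoulli()              # sampled 0/1 activations (stochastic node)
reward = torch.ones(z.size())      # one reward per sampled value
z.reinforce(reward)
autograd.backward([z], [None])     # backpropagate the REINFORCE estimate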
|
st118691
|
Hi Adam, I came back to this this week, and I managed to get the basic SBN working. I took the basic mnist example https://github.com/pytorch/examples/blob/master/mnist/main.py and wrote my own net class.
I’m now trying to implement the MuProp algorithm (https://arxiv.org/abs/1511.05176) to derive a control variate for the REINFORCE signal - something to reduce the variance; also see http://dustintran.com/blog/muprop-unbiased-backpropagation-for-stochastic-neural-networks
Basically, you subtract from the ‘reward’ something (a control variate) that correlates with it (but does not depend on the sample generating the reward), but to keep the gradient estimate unbiased you add the mean that this term has contributed. The MuProp algorithm uses a control variate based on a Taylor expansion around what it calls the mean field network, but is really the deterministic ‘standard’ neural net (neurons take values of sigmoids without sampling).
Below is a single sample SBN that works with the MNIST example. If you change the line "z1.reinforce(loss_repeated-CV_repeated) " to just have loss_repeated as the reward, you have standard REINFORCE for an SBN.
I am OK with implementing the Taylor expansion, but I’m wondering how I would add back the mean to make the gradient unbiased. Should I use a hook to the parameter gradients to add something to the gradient before the update step? At the moment I call backward() on the deterministic network’s loss inside the forward pass to add this as a hook - can I do this?
Cheers,
George
PS. I’m aware this is pretty specific and may not be that easy to follow!
class SBNBase(nn.Module):
    def __init__(self):
        super(SBNBase, self).__init__()
        # seems to be more flexibility in using parameters than Linear layers
        self.w1 = Parameter(torch.Tensor(28 * 28, 200))
        self.wlast = Parameter(torch.Tensor(200, 10))

    def expected_loss(self, target, forward_result):
        (a1, mu1, z1), (a2, logprobs_out) = forward_result
        return F.nll_loss(logprobs_out, target)

    def forward(self, x, target):
        x = x.view(-1, 28 * 28)
        a1 = x.mm(self.w1)
        mu1 = F.sigmoid(a1)
        z1 = torch.bernoulli(mu1)  # first hidden layer samples
        alast = z1.mm(self.wlast)
        logprobs_out = F.log_softmax(alast)
        expected_loss = self.expected_loss(target, ((a1, mu1, z1), (alast, logprobs_out)))

        '''MuProp Taylor expansion, deterministic forward prop'''
        deta1 = x.mm(self.w1)
        detmu1 = F.sigmoid(deta1)
        detalast = detmu1.mm(self.wlast)
        detlogprobs_out = F.log_softmax(detalast)
        detexpected_loss = self.expected_loss(target, ((a1, mu1, z1), (detalast, detlogprobs_out)))
        detexpected_loss.backward()  # can I do this in forward???
        control_var = detexpected_loss.data + torch.sum(detmu1.grad.data * (z1.data - detmu1.data))
        loss_repeated = expected_loss.data.repeat(z1.size())
        CV_repeated = control_var.repeat(z1.size())
        z1.reinforce(loss_repeated - CV_repeated)  # REINFORCE stochastic layer, control variate included
        # print(self.w1.grad.size())
        cvmu = self.w1.grad.data  # Here is where I am confused! Is this gradient from the deterministic network?
        h1 = self.w1.register_hook(lambda grad: grad + cvmu)
        return ((alast, logprobs_out)), expected_loss
|
st118692
|
I am trying to use torch.kthvalue function. While the code runs perfectly on CPU, I get the following error on GPU.
RuntimeError: Type FloatTensor doesn't implement stateless method kthvalue
Is this a missing PyTorch feature? If so, are there any workarounds?
Edit: I currently perform the kthvalue operation in CPU and move the results to GPU.
|
st118693
|
kthvalue is a missing feature on the GPU. There are 3 or 4 functions still missing there.
topk is available on the GPU for FloatTensor and is currently being implemented for other tensor types. Maybe you can use torch.topk and take the last value? That might be faster.
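A sketch of that workaround (assuming a cuda FloatTensor x and topk’s largest=False flag; the last of the k smallest values is the kth smallest):
x = torch.rand(1000).cuda()
k = 5
vals, idx = x.topk(k, dim=0, largest=False, sorted=True)
kth_value = vals[k - 1]   # the value kthvalue(k) would return on the CPU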
|
st118694
|
Hello, everyone.
I want to run my pytorch codes on a board with ARM processor(aarch64).
The OS on that board is linux(Ubuntu 14.04).
I have tried so many things to build Pytorch on it but all failed.
Simple installation using Anaconda(or miniconda) has failed.
It seems Anaconda does not support aarch64 at all.
(the official x86-64, armv7l, and ppc64le binaries do not work with my board, which has an aarch64 processor)
So I installed some dependencies using pip and somehow managed to reach a point where I could build pytorch from source, and the messages from setup.py tell me that the CMake configure and generate steps completed successfully.
But the build process stopped at some point with the following messages.
…
aarch64-linux-gnu-gcc: internal compiler error: Killed (program cc1plus)
aarch64-linux-gnu-gcc: internal compiler error: Killed (program cc1plus)
Please submit a full bug report,
with preprocessed source if appropriate.
Please submit a full bug report,
with preprocessed source if appropriate.
See <file:///usr/share/doc/gcc-5/README.Bugs> for instructions.
See <file:///usr/share/doc/gcc-5/README.Bugs> for instructions.
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/home/firefly/Downloads/pytorch-master -I/home/firefly/Downloads/pytorch-master/torch/csrc -I/home/firefly/Downloads/pytorch-master/torch/lib/tmp_install/include -I/home/firefly/Downloads/pytorch-master/torch/lib/tmp_install/include/TH -I/home/firefly/Downloads/pytorch-master/torch/lib/tmp_install/include/THPP -I/home/firefly/Downloads/pytorch-master/torch/lib/tmp_install/include/THNN -I/usr/lib/python2.7/dist-packages/numpy/core/include -I/usr/include/python2.7 -c torch/csrc/autograd/functions/batch_normalization.cpp -o build/temp.linux-aarch64-2.7/torch/csrc/autograd/functions/batch_normalization.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY
error: command ‘aarch64-linux-gnu-gcc’ failed with exit status 4
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
So the question is, does pytorch support aarch64 processor?
And if so, it would be very much appreciated if anyone can tell me where I am doing wrong, or an easier way to build it.
Thank you
p.s.: I failed to install the MKL library on that board too, so I installed openBLAS instead. I heard that the performance gap between MKL and openBLAS is huge, and the runtime of the algorithm is very important in my project. Is there any suggestion or advice for installing MKL on an aarch64 processor? Thanks!
|
st118695
|
it does support aarch64 processor. Someone from NVIDIA wrote how to build it on their jetson platform:
https://gist.github.com/dusty-nv/ef2b372301c00c0a9d3203e42fd83426
pytorch_jetson_install.sh
#!/bin/bash
#
# pyTorch install script for NVIDIA Jetson TX1/TX2,
# from a fresh flashing of JetPack 2.3.1 / JetPack 3.0 / JetPack 3.1
#
# for the full source, see jetson-reinforcement repo:
# https://github.com/dusty-nv/jetson-reinforcement/blob/master/CMakePreBuild.sh
#
# note: pyTorch documentation calls for use of Anaconda,
# however Anaconda isn't available for aarch64.
|
st118696
|
Hey,
Everything is running smoothly in my model, but for some reason I get a weird error in the torch.cat() backward function. The forward pass and the loss compute OK, though… Does anyone have a clue what is going on?
Traceback (most recent call last):
File "/Users/miguel/Documents/Unbabel/pytorch-tools/pytorch_tools/models/slqe.py", line 217, in <module>
loss = net.update(input_source, input_target, tags, input_editor=input_editor, input_client=input_client)
File "/Users/miguel/Documents/Unbabel/pytorch-tools/pytorch_tools/models/slqe.py", line 176, in update
loss.backward()
File "/Users/miguel/Documents/Unbabel/pytorch-tools/venv/lib/python2.7/site-packages/torch/autograd/variable.py", line 146, in backward
self._execution_engine.run_backward((self,), (gradient,), retain_variables)
File "/Users/miguel/Documents/Unbabel/pytorch-tools/venv/lib/python2.7/site-packages/torch/autograd/_functions/tensor.py", line 314, in backward
in zip(self.input_sizes, _accumulate(self.input_sizes)))
File "/Users/miguel/Documents/Unbabel/pytorch-tools/venv/lib/python2.7/site-packages/torch/autograd/_functions/tensor.py", line 313, in <genexpr>
return tuple(grad_output.narrow(self.dim, end - size, size) for size, end
RuntimeError: out of range at /Users/soumith/code/builder/wheel/pytorch-src/torch/lib/TH/generic/THTensor.c:367
|
st118697
|
What is the next operation after torch.cat ?
Is the size of the gradients you give it correct?
|
st118698
|
I’m concatenating two different branches of my model, and afterwards it goes through an nn.Linear layer with a relu activation.
Do I need to pass gradients?
I’m computing the BCELoss and then calling loss.backward()
|
st118699
|
No you don’t, I was just wondering if you had a custom layer after that that could be misbehaving.
But that should work.
Could you make a minimal example that reproduces this error, please?
|
st118700
|
I have a custom layer before the torch.cat, and actually one after as well now that I recall…
Here’s the code:
https://gist.github.com/miguelvr/351a1b56209490e7dcfe757c5b2d7765
|
st118701
|
Oh,
actually, support for negative dimensions in all modules has been added recently.
It is on the latest master branch, but may not be in the binary release yet.
If you cannot install from source, change the dim argument you give to the cat function to be positive (with inp.dim()-1), as in the snippet below.
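A one-line sketch of that rewrite:
out = torch.cat([a, b], a.dim() - 1)   # same result as dim=-1, works on the binary release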
|
st118702
|
Lifesaver!!! Thank you so much, it worked!
It remains as a tip for the devs, though…
Cheers
|
st118703
|
As shown in the title, I want to know what is the function in pytorch that is equivalent to theano.tensor.dimshuffle or np.dimshuffle?
Thanks!
|
st118704
|
torch.permute() for swapping dimensions
torch.unsqueeze() to add an extra dimension
these two should serve you right
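A small sketch of a dimshuffle-style reordering using both (hypothetical shapes):
x = torch.randn(3, 4)
y = x.permute(1, 0).unsqueeze(0)   # roughly dimshuffle('x', 1, 0)
print(y.size())                    # torch.Size([1, 4, 3])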
|
st118705
|
Hi guys, I just have a quick question. Is it possible to use third-party torch packages in pytorch? For instance, is it possible to use the manifold package from https://github.com/clementfarabet/manifold? If so, how can we achieve that? Is it possible to install it through luarocks install manifold and then, in pytorch, import torch; import torch.manifold? If that’s not feasible, is there an equivalent method?
|
st118706
|
@apaszke thanks for the response. So there is no other way to use all those third party packages developed for torch ecosystem? Is pytorch going to support only the basic/main torch packages?
|
st118707
|
No there’s not, but I’m pretty sure you can find something in a (much richer) Python ecosystem. You can easily convert between tensors and numpy arrays.
|
st118708
|
I’m afraid that won’t work. PyTorch uses a slightly different version of C backends (0 vs 1-based indexing), and when you load Lua packages with lutorpy, it will resolve the Lua backend symbols to 0-based ones, likely leading to errors in weird places.
|
st118709
|
Ah, I see. Thanks for mentioning that. If you don’t mind, I have just one last question. Did you guys integrate the third-party rnn package into nn for pytorch, or is it a re-write?
|
st118710
|
I believe the functions you’re looking for are implemented in scipy and/or scikit-learn, and you can use them by calling .numpy() and torch.from_numpy() on Torch tensors.
|
st118711
|
Yep, that’s true for the case of manifold learning, but sklearn (I don’t want to be too harsh on it) has a very bad implementation that raises errors in some particular cases and doesn’t use the optimal approach. For instance, check the issues I raised on GitHub regarding the t-SNE and LLE implementations.
But in the case of the rnn package, I guess you’ll have to roll your own.
|
st118712
|
Hello everyone,
I send you this post to see if anyone can help me with compiling torch. It is very strange, but when I compile pytorch without the WITH_DISTRIBUTED=1 parameter, the compilation seems to go well. But when I set WITH_DISTRIBUTED=1 for python3 setup.py build_deps, it gives me the error I attach below. I need this flag enabled for parallelization.
I am using Debian 8.6.0, cmake 3.7.0, and Python 3.5.2, and I’m really very lost…
THE ERROR:
/usr/include/string.h:66:14: note: ‘memset’
extern void *memset (void *__s, int __c, size_t __n) __THROW __nonnull ((1));
^
CMakeFiles/THD.dir/build.make:398: recipe for target 'CMakeFiles/THD.dir/master_worker/master/THDStorage.cpp.o' failed
make[2]: *** [CMakeFiles/THD.dir/master_worker/master/THDStorage.cpp.o] Error 1
/soft/pytorch-dist/torch/lib/THD/base/data_channels/DataChannelMPI.cpp: In member function ‘virtual void thd::DataChannelMPI::send(const thd::Scalar&, int)’:
/soft/pytorch-dist/torch/lib/THD/base/data_channels/DataChannelMPI.cpp:329:52: error: invalid conversion from ‘const void*’ to ‘void*’ [-fpermissive]
MPI_UINT8_T, dst_rank, 0, MPI_COMM_WORLD);
^
In file included from /soft/pytorch-dist/torch/lib/THD/base/data_channels/DataChannelMPI.hpp:5:0,
from /soft/pytorch-dist/torch/lib/THD/base/data_channels/DataChannelMPI.cpp:1:
/usr/lib/openmpi/include/mpi.h:1384:20: note: initializing argument 1 of ‘int MPI_Send(void*, int, MPI_Datatype, int, int, MPI_Comm)’
OMPI_DECLSPEC int MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest,
^
/soft/pytorch-dist/torch/lib/THD/base/data_channels/DataChannelMPI.cpp: In member function ‘virtual void thd::DataChannelMPI::send(thpp::Tensor&, int)’:
/soft/pytorch-dist/torch/lib/THD/base/data_channels/DataChannelMPI.cpp:340:52: error: invalid conversion from ‘const void*’ to ‘void*’ [-fpermissive]
MPI_UINT8_T, dst_rank, 0, MPI_COMM_WORLD);
^
In file included from /soft/pytorch-dist/torch/lib/THD/base/data_channels/DataChannelMPI.hpp:5:0,
from /soft/pytorch-dist/torch/lib/THD/base/data_channels/DataChannelMPI.cpp:1:
/usr/lib/openmpi/include/mpi.h:1384:20: note: initializing argument 1 of ‘int MPI_Send(void*, int, MPI_Datatype, int, int, MPI_Comm)’
OMPI_DECLSPEC int MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest,
^
/soft/pytorch-dist/torch/lib/THD/base/data_channels/DataChannelMPI.cpp: In member function ‘virtual THDGroup thd::DataChannelMPI::newGroup(const std::vector<int>&)’:
/soft/pytorch-dist/torch/lib/THD/base/data_channels/DataChannelMPI.cpp:476:56: error: invalid conversion from ‘const int*’ to ‘int*’ [-fpermissive]
MPI_Group_incl(world_group, ranks.size(), ranks.data(), &ranks_group);
^
In file included from /soft/pytorch-dist/torch/lib/THD/base/data_channels/DataChannelMPI.hpp:5:0,
from /soft/pytorch-dist/torch/lib/THD/base/data_channels/DataChannelMPI.cpp:1:
/usr/lib/openmpi/include/mpi.h:1269:20: note: initializing argument 3 of ‘int MPI_Group_incl(MPI_Group, int, int*, ompi_group_t**)’
OMPI_DECLSPEC int MPI_Group_incl(MPI_Group group, int n, int *ranks,
^
/soft/pytorch-dist/torch/lib/THD/base/data_channels/DataChannelMPI.cpp:479:66: error: ‘MPI_Comm_create_group’ was not declared in this scope
MPI_Comm_create_group(MPI_COMM_WORLD, ranks_group, 0, &new_comm);
^
CMakeFiles/THD.dir/build.make:422: recipe for target 'CMakeFiles/THD.dir/master_worker/master/THDTensor.cpp.o' failed
make[2]: *** [CMakeFiles/THD.dir/master_worker/master/THDTensor.cpp.o] Error 1
CMakeFiles/THD.dir/build.make:158: recipe for target 'CMakeFiles/THD.dir/base/data_channels/DataChannelMPI.cpp.o' failed
make[2]: *** [CMakeFiles/THD.dir/base/data_channels/DataChannelMPI.cpp.o] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/THD.dir/all' failed
make[1]: *** [CMakeFiles/THD.dir/all] Error 2
Makefile:127: recipe for target 'all' failed
make: *** [all] Error 2
I followed these instructions:
Using Python 3 (Python 3.4)
Install build dependencies
Essentials
sudo apt-get update
sudo apt-get install git build-essential
ccache
sudo apt-get install ccache
export CC="ccache gcc"
export CXX="ccache g++"
CMake
The default CMake version in Debian’s repositories is too old.
Ubuntu 16.10 has version 3.5.2 and it works fine.
wget https://cmake.org/files/v3.7/cmake-3.7.0.tar.gz
tar xf cmake-3.7.0.tar.gz
rm cmake-3.7.0.tar.gz
cd cmake-3.7.0
./bootstrap
make
sudo make install
cd ..
Install THD dependencies
Asio C++ Library
sudo apt-get install libasio-dev
MPI implementation
sudo apt-get install mpich
Set up Python
sudo apt-get install python3-dev python3-pip
Set up virtual environment
sudo pip3 install virtualenv
virtualenv venv
source venv/bin/activate
Install PyTorch
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$HOME/pytorch-dist/torch/lib"
git clone https://github.com/apaszke/pytorch-dist/
cd pytorch-dist
pip3 install -r requirements.txt
WITH_DISTRIBUTED=1 python3 setup.py build_deps
WITH_DISTRIBUTED=1 python3 setup.py develop
Thanks a lot for your help.
Dani
|
st118713
|
Thanks, we’ll look into it. However, note that the distributed package is still in pre-alpha and will be likely slow or can break in weird ways. We’ll notify everyone once it’s ready for use.
|
st118714
|
Thanks for your answer, Adam .
What exactly are the requirements for pytorch to work?
I mean, versions of the operating system, gcc, cmake, cuda, etc. etc.
We are a little lost, but we think that the problem in compilation comes from cuda…
Thanks a lot!
|
st118715
|
cmake and an anaconda-based python install will get pytorch going with all required dependencies.
Optional dependencies are CUDA and CUDNN.
More can be read here: https://github.com/pytorch/pytorch#from-source or you can look at the docker image: https://github.com/pytorch/pytorch#docker-image
WITH_DISTRIBUTED is not fully fleshed out or supported; we can’t spec out its requirements yet.
|
st118716
|
Thank you guys for all your work. If I may, I would like to remind you of users like me who, in most cases, don’t have root privileges on the machines they work on (e.g. clusters). So whenever the pytorch distributed package rolls out, I would like to request, if possible, that you also make it available for easy installation via anaconda or pip, which might take care of the dependencies as well.
Thanks again, and keep up the good work!
Cheers.
|
st118717
|
Hi,
We have a machine with 4 Titan X GPUs, but we can only train PyTorch models on the 3 GPUs other than the primary one. Whenever we utilize the primary GPU, pytorch hangs after a few iterations, and nvidia-smi reports GPU loss afterwards; only rebooting can recover the machine.
We have tried uninstalling X.org from Ubuntu 16 desktop, re-installing Ubuntu 16 server without X, disconnecting the display, etc. Training always causes GPU loss if it utilizes the primary GPU.
We also tried to use MXNET train on 4 GPUs, it goes well without seeing this problem.
Any idea why Pytorch cannot train on the primary GPU? Any workaround we could try?
Thanks!
Ming
|
st118718
|
Hi Ming,
Try to update your NVIDIA drivers to latest version.
This is definitely an issue either related to the nvidia driver or related to hardware issue (overheating or other issue).
We also tried to use MXNET train on 4 GPUs, it goes well without seeing this problem.
This could be because when you are using MXNet, the mxnet install did not end up using cudnn or nccl (for example) and runs fine.
I would also suggest that you try installing pytorch from source or the docker image, just to see if that helps your case. It can also help me debug your case: https://github.com/pytorch/pytorch#from-source
|
st118719
|
Thank you Soumith for the prompt reply!
We are using Nvidia 375.26, CUDA 8.0 and CUDNN 5.1, which are the latest. I doubt it’s an overheating issue, because we can train the model using GPUs 1, 2, 3, but cannot use GPUs 0, 1, 2, where GPU 0 is the primary one.
We also see cases where, on a machine with a single GTX 1080 GPU, training a pytorch model that maximizes GPU memory usage can cause GPU failure too.
In any case, I will try installing pytorch from source and report back result.
|
st118720
|
Hi Ming,
Were you able to solve the problem? I am also stuck at the same problem…
|
st118721
|
Greetings!
I’m implementing a very simple recommender system in PyTorch, using the MovieLens dataset (the one with +20M ratings). Here is the github address to the notebook I’m creating: https://github.com/echo66/pytorch-rec-sys/blob/master/step-1.ipynb
I will summarize what I’m doing:
Each item and user will be described by a vector of latent features. So, all the parameters are within two matrices: the users and the items features matrices. Each matrix has F columns, being F the number of latent features.
The prediction is generated by the inner product of user and item vectors.
The loss function used is the Mean Squared Error.
I’m using two L2 regularizers, one for the user features and another for the item features.
I sorted the dataset by timestamp.
Due to the lack of good support for general sparse tensors (AFAIK) in PyTorch, for each batch I’m slicing the matrices. Check this function.
Batch size: 2000
Number of batches: 8542
Number of users: 259,137
Number of items: 165,201
Number of ratings: 24,404,096
Training dataset size: 17,082,867
Test dataset size: 7,321,228
Right now, each backward pass is taking between 7-10 seconds, while the forward passes are taking less than 0.5 seconds. Am I doing anything considered naive?
Thanks in advance!
|
st118722
|
Hey guys,
I’m trying to window word embeddings as a 3-gram. I know it is possible to do this with a convolutional layer, so I’m trying to implement it.
# (batch_size, sentence_size, embedding_size)
x = Variable(torch.rand(10, 5, 64))
conv = nn.Conv1d(64, 192, 3, padding=1)
y_hat = conv(x.permute(0, 2, 1)).permute(0, 2, 1)
Is this even remotely correct with respect to what I want to do?
|
st118723
|
Hi, I’m very interested in the implementation details of pytorch. I’ve been trying to read some of the pytorch source code but cannot seem to find a good entry point. Can anyone show me how pytorch integrates python with c/c++? Why not use numpy for numerical computation rather than a c/c++ dynamic lib? For extending python with c/c++, which is the best choice: cffi, ctypes, or cython? Thanks very much!
|
st118724
|
why not use numpy for numerical computation but use c/c++ dynamic lib instead?
Because numpy does not have GPU support.
For extending python with c/c++, which is the best choice, cffi, ctype, cython?
Internally we use our own C wrapper engine: https://github.com/pytorch/pytorch/tree/master/tools/cwrap
For end-users, we recommend that they use CFFI: https://github.com/pytorch/extension-ffi
|
st118725
|
I recently updated from pytorch 0.1.8 to 0.1.10 and I now get the following somewhat confusing error when I try to call torch.load on a file serialized with 0.1.8:
----> 1 a = torch.load("file.pth7")
/usr/local/lib/python2.7/dist-packages/torch/serialization.pyc in load(f, map_location, pickle_module)
220 f = open(f, 'rb')
221 try:
--> 222 return _load(f, map_location, pickle_module)
223 finally:
224 if new_fd:
/usr/local/lib/python2.7/dist-packages/torch/serialization.pyc in _load(f, map_location, pickle_module)
353 # try the legacy loader first, which only works if f is a tarfile
354 try:
--> 355 return legacy_load(f)
356 except tarfile.TarError:
357 pass
/usr/local/lib/python2.7/dist-packages/torch/serialization.pyc in legacy_load(f)
297 args = pickle_module.load(f)
298 key, location, storage_type = args
--> 299 obj = storage_type._new_with_file(f)
300 obj = restore_location(obj, location)
301 deserialized_objects[key] = obj
RuntimeError: Success
|
st118726
|
I encounter a similar problem with the version ‘0.1.10+ac9245a’ only.
I save my checkpoint using :
save_checkpoint({
    'epoch': epoch + 1,
    'arch': options['model']['arch'],
    'state_dict': model.state_dict(),
    'best_prec1': best_prec1,
}, is_best)
But when I try to load it with:
torch.load(filename)
I get this error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/cadene/anaconda3/envs/vqa/lib/python3.6/site-packages/torch/serialization.py", line 222, in load
return _load(f, map_location, pickle_module)
File "/home/cadene/anaconda3/envs/vqa/lib/python3.6/site-packages/torch/serialization.py", line 377, in _load
deserialized_objects[key]._set_from_file(f, offset)
RuntimeError: Success
It could be due to the fact that the data in my state_dict are of type torch.cuda.FloatTensor.
|
st118727
|
I was playing with a simple neural style transfer model, using the pretrained vgg model in torchvision. Following the official advice, I preprocessed the input images (content and style; the input is a random noise image, so I didn’t preprocess it) before training (see the prep function below), and then post-processed the images (content, style, optimized input) when training was done (see inv_prep below).
The problem is that preprocessing seems to give me worse results; I removed the preprocessing and tried again, which gave better results, both with the same initial input noise image.
# below, no mean/std preprocessing, only image Scale + axis-moving
prep = thv.transforms.Compose([
    thv.transforms.Scale(200),
    thv.transforms.Lambda(lambda x: np.moveaxis(np.array(x), 2, 0)),
    thv.transforms.Lambda(lambda x: th.from_numpy(x.astype('float32'))),
    thv.transforms.Lambda(lambda x: x.unsqueeze(0))
])
inv_prep = thv.transforms.Compose([
    thv.transforms.Lambda(lambda x: x.squeeze(0)),
    thv.transforms.Lambda(lambda x: (x.numpy().clip(0, 255).astype('uint8'))),
    thv.transforms.Lambda(lambda x: np.moveaxis(np.array(x), 0, 2)),
])

# below, with mean/std preprocessing, and yes I know I could use ToPILImage to convert tensors back to images
prep = thv.transforms.Compose([
    thv.transforms.Scale(200),
    thv.transforms.ToTensor(),  # need to do this before applying mean/std transformation
    thv.transforms.Normalize(mean=mean, std=std),
    thv.transforms.Lambda(lambda x: x.unsqueeze(0))
])
inv_prep = thv.transforms.Compose([
    thv.transforms.Lambda(lambda x: x.squeeze(0)),
    thv.transforms.Normalize(mean=[0, 0, 0], std=1 / std),
    thv.transforms.Normalize(mean=-mean, std=[1, 1, 1]),
    thv.transforms.Lambda(lambda x: (x.numpy().clip(0, 1) * 255).astype('uint8')),
    thv.transforms.Lambda(lambda x: np.moveaxis(np.array(x), 0, 2)),
])
This is the result image I got with mean/std preprocessing, 100 iterations:
This is the result image I got without preprocessing, 100 iterations:
Why is that? Did I do anything wrong? I checked the final optimized input: the pixel values range over roughly [-50, 50], and I’m not sure how to convert them back to real image pixel values. I think this is the part I didn’t get right, which gives me the bad result shown in the first picture (although I can still see the shape of Peter :-|).
|
st118728
|
if the input is not a natural image, but a random noise, just drawing it from a uniform [-1, 1] distribution makes much more sense than pre-processing it.
|
st118729
|
The input is indeed drawn from uniform (0,1), and not preprocessed in both experiments, but the optimized results are post processed, i.e. the inv_prep function.
|
st118730
|
I am just curious: in what scenario would one want to migrate from torch7 to PyTorch (other than liking to work in Python)?
|
st118731
|
More efficient memory consumption, and if you use nngraph in torch7, constructing computational graph is much easier and simpler in PyTorch.
|
st118732
|
Built-in autograd and RNN modules convinced me.
Anecdotally, a fairly complex seq2seq+attention model took me about a week to build in Torch. The majority of that was figuring out backpropagation. The same model took 3 weeks (and counting, I gave up) in TensorFlow. In PyTorch it took about an hour.
|
st118733
|
Thanks a lot. That seems a more compelling reason; in fact PyTorch seems more intuitive than Torch containers.
|
st118734
|
In my code below:
output_var.is_cuda is True.
_, max_probs = torch.max(output_var, 2)
print output_var.size()
print max_probs.size()
print torch.max(max_probs)
the outputs:
(10L, 26L, 37L)
(10L, 26L, 1L)
37
So the size of output_var is (10L, 26L, 37L), and with
_, max_probs = torch.max(output_var, 2)
the values of max_probs should range from 0 to 36, according to my understanding. Is that correct?
In my code, torch.max(max_probs) sometimes returns 37. It happens randomly.
Is it a bug in pytorch?
|
st118735
|
Hi Melody,
I’ve tried to reproduce this issue, but I am not able to.
Here is the snippet of code I am using:
import torch
from torch.autograd import Variable

output_var = Variable(torch.randn(10, 26, 37).cuda())
_, max_probs = torch.max(output_var, 2)
print output_var.size()
print max_probs.size()
for i in range(1000):
    assert torch.max(max_probs).data[0] == 36
Does this snippet fail on your computer?
If so, what is:
print(torch.__version__)
and also what is the output of:
nvidia-smi
|
st118736
|
I am trying to run a code where I am getting error in the following code snippet.
combined_representation = torch.cat([self.encoder_hidden_states1[last_time_step_sent1][0],
                                     self.encoder_hidden_states2[last_time_step_sent2][0]], 1)
if self.config.cuda:
    combined_representation = combined_representation.cuda()
print(combined_representation.size())  # prints torch.Size([16, 600])
scores = self.linear(combined_representation)
If I don’t convert the combined_representation to cuda, I get the following error.
RuntimeError: Assertion `THCTensor_(checkGPU)(state, 4, r_, t, m1, m2)’ failed. at /data/users/soumith/miniconda2/conda-bld/pytorch-0.1.9_1487344852722/work/torch/lib/THC/generic/THCTensorMathBlas.cu:230
Please note that self.encoder_hidden_states1 and self.encoder_hidden_states2 are cuda Variables, so why am I getting a non-cuda Variable after the torch.cat() operation?
|
st118737
|
torch.cat shouldn’t change a cuda variable to a non-cuda variable. I suspect you are doing something slightly unexpected here.
If you give a small snippet of ~10 lines that reproduces the issue, I can investigate.
|
st118738
|
Hello, I am new to Pytorch, so sorry for any naiveté or ignorance. I am doing a little experimenting, replicating the functionality of distributions in tensorflow.contrib. While working on a sample generator for random normal, I first erroneously passed a torch.Size value to randn, and then, after converting it to a tensor, still received an error. So the args supplied to randn need to be integers, then.
I notice that a lot of tensorflow arguments are tensors. Let’s say I store or derive, through a function call, the shape or size of my normal distribution parameters as regular integers. Is there possibly a performance hit when things get more complex, from having to derive and pass in integers instead of working with tensors as arguments?
That might be a really terrible question, so I am sorry if so
|
st118739
|
passing in a few arguments is unlikely to have any performance impact
x = torch.randn(10, 20)
y = torch.randn(*x.size()) # should work
|
st118740
|
I have a convNet:
class convNet(nn.Module):
    # constructor
    def __init__(self):
        super(convNet, self).__init__()
        # defining layers in convnet
        # input size = 1*657*1625
        self.conv1 = nn.Conv2d(1, 8, kernel_size=3, stride=1, padding=1)
        self.conv2 = nn.Conv2d(8, 16, kernel_size=3, stride=1, padding=1)
        self.pconv1 = nn.Conv2d(16, 16, kernel_size=(3, 3), stride=1, padding=(1, 1))
        self.pconv2 = nn.Conv2d(16, 16, kernel_size=(3, 7), stride=1, padding=(1, 3))
        self.pconv3 = nn.Conv2d(16, 16, kernel_size=(7, 3), stride=1, padding=(3, 1))
        self.conv3 = nn.Conv2d(16, 8, kernel_size=3, stride=1, padding=1)
        self.conv4 = nn.Conv2d(8, 1, kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        x = nnFunctions.leaky_relu(self.conv1(x))
        x = nnFunctions.leaky_relu(self.conv2(x))
        x = nnFunctions.leaky_relu(self.pconv1(x)) + nnFunctions.leaky_relu(self.pconv2(x)) + nnFunctions.leaky_relu(self.pconv3(x))
        x = nnFunctions.leaky_relu(self.conv3(x))
        x = nnFunctions.leaky_relu(self.conv4(x))
        return x
L1Loss function:
def L1Loss(outputs, targets):
    return Variable.abs(outputs - targets).sum()
When I train the above CNN without the following line in forward:
x = nnFunctions.leaky_relu(self.pconv1(x)) + nnFunctions.leaky_relu(self.pconv2(x)) + nnFunctions.leaky_relu(self.pconv3(x))
I get reasonable values for the loss, but if I add the above line to the net’s forward, I get a nan value for the loss.
Can someone explain?
|
st118741
|
maybe your learning rate is too high after the additional layers you added? (your network is not scale-invariant layer-wise). Either adjusting the learning rate or adding BatchNorm will help.
|
st118742
|
Why is the loss nan? If the learning rate is too high then the loss should be high not nan right?
|
st118743
|
Erm, I’m probably being very dumb, but I can’t see a simple way of doing this without defining a new layer class?
That is, what is the simplest way to constrain the weights of a linear layer to [0,1] or [-1,1]? I guess either will work.
|
st118744
|
You can clip the weights after the optimizer update each time or you can over-ride the forward call of Linear layer to do it before multiplying with input. If you are doing the latter, you could also apply tanh or sigmoid to the weights before doing the dot product.
Smooth ways of achieving this would be to simply include L1 or L2 penalty on the weights.
This is a very specific requirement, so I think there won’t be a basic way to do this.
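A minimal sketch of the first suggestion (clipping after each update; linear_layer and the loss computation are placeholders):
import torch.nn as nn
import torch.optim as optim

linear_layer = nn.Linear(10, 5)
optimizer = optim.SGD(linear_layer.parameters(), lr=0.1)
# ... compute the loss and call loss.backward(), then:
optimizer.step()
for p in linear_layer.parameters():
    p.data.clamp_(-1, 1)   # constrain weights (and biases) to [-1, 1]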
|
st118745
|
Hi @pranav thank you very much
I’m experimenting with different ways to implement something like, XNOR net layers,
http://allenai.org/plato/xnornet/
It seems to me now that there are a few different plausible ways to do this, so I just wanted to try a few things very quickly before running lots of experiments using hard-coded layers. My aim is to have a graph composed of mostly binary layers, which gives fast, low-cost inference.
Here’s the Torch modules in case you’re interested?
https://github.com/allenai/XNOR-Net/blob/master/newLayers/BinActiveZ.lua
Cheers,
Aj
|
st118746
|
Hi @AjayTalati ,
Cool!
If you want a FloatTensor, you could use
torch.rand(1000).round()
if you want a ByteTensor, this would work
(torch.rand(1000)>=0.5)
Best regards
Thomas
|
st118747
|
Hey @tom, some snippets to initialise weights and convert a real valued data_vec to -1 or 1 as they use in the paper above
a) Randomly Initialize weights as -1 or 1
weights = np.random.randint(2, size=10)
weights = 2 * weights
weights = weights - 1
b) convert data vectors to -1 or 1
data_vec = torch.randn(out_features, in_features)
weights_ge0 = torch.ge(data_vec, 0)                     # greater than zero
weights_le0 = torch.le(data_vec, 0)                     # less than zero
weights_le0xmin1 = torch.mul(weights_le0.float(), -1)   # flip 1 to -1
data_vec_for_xnor_layers = weights_ge0.float() + weights_le0xmin1.float()
Cheers,
Aj
|
st118748
|
Hi @AjayTalati
looking at the article and code, I think the trick with the weights during training is to change them before the forward pass (lines 1-6 in Algorithm 1 in the article, or https://github.com/allenai/XNOR-Net/blob/master/util.lua#L111-L122 in the code).
The linear/convolution layers themselves are left unchanged as far as I can see.
The new binarization layer is later used for the inputs.
Best regards
Thomas
|
st118749
|
I want to load my own data into the Net. There are 1000 silhouette images as input and the corresponding 1000 12d vectors as targets. The silhouette images are .png files in the folder /hsdata_train/; the 12d vectors are in the .txt file /data/txt/list_1000.txt.
The following is my code. What should I do? Please help, thanks!
text_path = '/data/txt/list_1000.txt'
float_data = []
with open(text_path) as f:
    text_data = f.readlines()
for line_index in range(len(text_data)):
    line = text_data[line_index].strip('\n')
    words = line.split()
    del words[0]
    float_data.append(words)
target_train = torch.from_numpy(np.asfarray(float_data))

# The output of torchvision datasets are PILImage images of range [0, 1].
# We transform them to Tensors of normalized range [-1, 1]
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
image_train = torchvision.datasets.ImageFolder(root='/home/pangdan/hsdata_train', transform=transform)
train = torch.utils.data.TensorDataset(image_train, target_train)
trainloader = torch.utils.data.DataLoader(train, batch_size=4,
                                          shuffle=True, num_workers=2)
|
st118750
|
Convert the images to numpy array and then you can use dataloader to use your custom data to feed it into the network.
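A minimal sketch of that suggestion (assuming the images were read into a numpy array imgs of shape (N, C, H, W), with target_train built as above):
images = torch.from_numpy(imgs).float()
train = torch.utils.data.TensorDataset(images, target_train)
trainloader = torch.utils.data.DataLoader(train, batch_size=4,
                                          shuffle=True, num_workers=2)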
|
st118751
|
I am working with CT scans, which have a sequence of 150 slices per scan. I take one slice at a time, run it through a 2D DenseNet (trained from scratch on medical images), and extract the last-layer features.
So at the end of the feature extraction phase I have a (num_features x 150) array. What is the best way to put a binary classifier on this array? Is it just one or multiple Linear layers, an LSTM, or are there any interesting arXiv papers I should look at?
nn.Linear(num_features * 150, 2)
Thanks!
|
st118752
|
Just put a multi-layer Linear + ReLU neural network on top of these features.
You can read this paper by Karpathy:
http://www.cv-foundation.org/openaccess/content_cvpr_2014/html/Karpathy_Large-scale_Video_Classification_2014_CVPR_paper.html
The early-fusion / late-fusion etc. are relevant.
|
st118753
|
Where can I find advice for writing performant PyTorch code? Are there things to be careful about like going back and forth between PyTorch and Numpy or always trying to use batching? What do you reckon are main aspects to keep in mind?
|
st118754
|
If you are using the GPU, then going back and forth between pytorch and numpy will be a performance bottleneck.
I don’t think we’ve really established performance tips as such.
|
st118755
|
Hi,
I am facing an issue when I try to train OpenNMT from a pretrained model (https://github.com/OpenNMT/OpenNMT-py/blob/master/train.py)
Following is the error I get :
$ python train.py -data data/5kprocessed.train.pt -train_from="50kcycle2_model_acc_31.18_ppl_99.08_e50.pt" -save_model 50kcycle2_finetuned -learning_rate=0.5 -brnn -epochs=500 -gpus 0
Namespace(batch_size=64, brnn=True, brnn_merge='concat', curriculum=False, data='data/5kprocessed.train.pt', dropout=0.5, epochs=500, extra_shuffle=False, gpus=[0], input_feed=1, layers=4, learning_rate=0.5, learning_rate_decay=0.5, log_interval=50, max_generator_batches=64, max_grad_norm=5, optim='sgd', param_init=0.1, pre_word_vecs_dec=None, pre_word_vecs_enc=None, rnn_size=1024, save_model='50kcycle2_finetuned', start_decay_at=8, start_epoch=1, train_from='50kcycle2_model_acc_31.18_ppl_99.08_e50.pt', train_from_state_dict='', word_vec_size=500)
Loading data from 'data/5kprocessed.train.pt'
Loading dicts from checkpoint at 50kcycle2_model_acc_31.18_ppl_99.08_e50.pt
vocabulary size. source = 25004; target = 25004
number of training sentences. 2984
maximum batch size. 64
Building model…
Loading model from checkpoint at 50kcycle2_model_acc_31.18_ppl_99.08_e50.pt
Traceback (most recent call last):
  File "train.py", line 353, in <module>
    main()
  File "train.py", line 302, in main
    generator_state_dict = chk_model.generator.state_dict()
AttributeError: 'dict' object has no attribute 'generator'
Any input on what’s causing this?
|
st118756
|
Hi,
I am trying to convert a lua/torch program to pytorch. I used the nn.MaskZeroCriterion method in lua. Is there any method in pytorch I can use? If not, can you suggest how I could implement it in pytorch?
|
st118757
|
I don’t think there is any implementation of a masked criterion. The best way is to mask the score vector/logits yourself, as in this thread, using masked_select; gather doesn’t work for variable-length sequences.
Having said that, I’m also interested to know if there’s a better approach to this:
How can i compute seq2seq loss using mask?
I am working on image captioning task with PyTorch.
In seq2seq, padding is used to handle the variable-length sequence problems.
Additionally, mask is multiplied by the calculated loss (vector not scalar) so that the padding does not affect the loss.
In TensorFlow, i can do this as below.
# targets is an int64 tensor of shape (batch_size, padded_length) which contains word indices.
# masks is a tensor of shape (batch_size, padded_length) which contains 0 or 1 (0 if pad otherwise 1).
out…
|
st118758
|
Is there a correct way to detach a loaded model, when we don’t need to compute its gradients?
For the moment, I’m doing this:
model = torch.load('mymodel.pth')
for variable in model.parameters():
    variable.detach_()
Here, I am lucky because the model contains Variable parameters, but it could contain sub-models…
|
st118759
|
the correct way is to make the model’s parameters not require gradients:
model = torch.load('mymodel.pth')
for p in model.parameters():
    p.requires_grad = False
Alternatively, if you are purely using the model for inference with no gradients needed, the input to the model can be volatile (this is better, it saves more memory as well):
myinput = torch.randn(10, 20) # example input
model = torch.load('mymodel.pth')
input = Variable(myinput, volatile=True)
output = model(input)
See these notes for more hints.
|
st118760
|
In :
http://pytorch.org/docs/torch.html#torch.addmm
we have :
torch.addmm(beta=1, mat, alpha=1, mat1, mat2, out=None)
out = (beta * M) + (alpha*mat1 @ mat2)
In torch.nn.Linear we have :
output.addmm_(0, 1, input, weight.t())
So is it the case that:
0 in output.addmm_(0, 1, input, weight.t()) is beta
1 in output.addmm_(0, 1, input, weight.t()) is M or mat
and so out = alpha*mat1 @ mat2?
Thanks
|
st118761
|
Notice that addmm_ is the in-place version of addmm, and it’s a bound method.
output.addmm_(0, 1, input, weight.t()) is actually translated to torch.Tensor.addmm_(0, output, 1, input, weight.t()).
The fact that is an in-place method implies that the result will be written to the same tensor output the method was called with.
Therefore what happens there is that output becomes 0 * output + 1 * input @ weight.t()
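A quick numeric check of that formula (hypothetical shapes; this assumes the legacy beta/alpha argument order used by nn.Linear above):
out = torch.zeros(2, 3)
a = torch.randn(2, 4)
b = torch.randn(4, 3)
out.addmm_(0, 1, a, b)                # out = 0 * out + 1 * (a @ b)
print((out - a.mm(b)).abs().max())    # approximately 0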
|
st118762
|
It’s hard to read the code on my phone; maybe you can use markdown to format your code.
You need
net.eval()
before output = net(image)
Also make sure you apply the same preprocessing to images during training and validation.
|
st118763
|
Hi,
Sorry about the styling.
But adding net.eval() worked great. Now it classifies correctly.
Thanks for the help!
|
st118764
|
I use an LSTM implemented like this:
class StackedLSTM(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers=1, dropout=0):
        super(StackedLSTM, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.dropout = nn.Dropout(dropout)
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(nn.LSTMCell(input_size, hidden_size))
            input_size = hidden_size

    def forward(self, input, hidden):
        h_0, c_0 = hidden
        h_1, c_1 = [], []
        for i, layer in enumerate(self.layers):
            h_1_i, c_1_i = layer(input, (h_0[i], c_0[i]))
            input = h_1_i
            if i + 1 != self.num_layers:
                input = self.dropout(input)
            h_1 += [h_1_i]
            c_1 += [c_1_i]
        h_1 = torch.stack(h_1)
        c_1 = torch.stack(c_1)
        return input, (h_1, c_1)
When I compare it to nn.LSTM (the official API) on a language model, the perplexity is close but the training speed differs a lot: my code needs about 170s per epoch, while nn.LSTM only takes 50s.
Can someone tell me the reason?
|
st118765
|
nn.LSTM uses the optimized CUDNN LSTM kernel by default, which avoids all the overhead of building a graph that contains hundreds of separate mathematical operations and the overhead of launching those operations as separate kernels. You’ll get the same slow performance by setting torch.backends.cudnn.enabled=False.
|
st118766
|
Hey guys,
I’m currently working with word embeddings and I need to perform a pointwise division over 5-dimensional tensors, and I’m not being very successful so far…
Maybe someone can tell me why this is not working:
def forward(self, x, mask):
    boolean_mask = torch.from_numpy(np.not_equal(mask.data.numpy(), self.mask_value).astype('int32'))
    count = boolean_mask.sum(self.axis)
    count.squeeze_(self.axis)
    count.unsqueeze_(len(count.size()))
    total = x.sum(self.axis)
    return total / count.float()
To give you a specific example:
x.size(): (10, 5, 6, 3, 64)
mask: (10, 5, 6, 3)
It goes well right before the return, where:
total.size(): (10, 5, 3, 64)
count.size(): (10, 5, 3, 1)
On one hand, if I don’t cast count to a Variable, the division gives me this:
assert not torch.is_tensor(other) AssertionError
On the other hand, if I do Variable(count), I get:
RuntimeError: inconsistent tensor size at /Users/soumith/code/builder/wheel/pytorch-src/torch/lib/TH/generic/THTensorMath.c:869
Is this a broadcasting problem, or something like that?
|
st118767
|
Hi,
I trained my model on 2 different GPUs (using Tensor.cuda()) and saved the parameters. Then I want to load those parameters into the CPU model, but I get this error: cuda runtime error (10) : invalid device ordinal at torch/csrc/cuda/Module.cpp:84
How can I solve this?
Thanks!
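Edit: one sketch that seems to work here is the map_location argument of torch.load (the same argument visible in the serialization tracebacks earlier in this thread), remapping every CUDA storage onto the CPU while loading:
params = torch.load('mymodel.pth', map_location=lambda storage, loc: storage)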
|