st115368
|
Hello, I have a problem with torch.max operation.
When I used torch.max with 1x1x1xn tensor, like
import torch
a = torch.randn(1, 1, 1, 5)
a.max(0)
It produces a tuple of two 1x1x1xn tensors (one for the maximum values, another for the indices).
However, if I try the same operation with n = 1, like
import torch
b = torch.randn(1, 1, 1, 1)
b.max(0)
torch.max produces a tuple of two 1x1x1 tensors.
On the other hand
import torch
c = torch.randn(2, 1, 1, 1)
c.max(0)
it produces a tuple of two 1x1x1 tensors.
I think b and c work correctly, and
a.max(0)
should produce a tuple of two 1x1xn tensors. What's wrong with it?
|
st115369
|
I think this is a bug that was recently fixed in master (typing from the phone, else I’d get you the reference)
|
st115370
|
I have trained a neural model and now I want to load it. Here is my code:
model = FlowNet()
model.load_state_dict(torch.load('/Users/hanjun/Desktop/model1000.pth', map_location=lambda storage, loc: storage))
But when I run it, the error confuses me. How can I modify my code?
|
st115371
|
I have seen this post:
Why Parameter is automatically registered?
It's because of the following code in __setattr__, but also in the rest of this file.
My question is:
Is there any way to avoid automatically registering parameters when setting a module as another one's attribute?
The only way I can think of is appending it to a list. What's the common approach (workaround)?
|
st115372
|
Yes, appending it to a list or putting it in any list-like structure that's not an nn.ModuleList should work.
Alternatively, you can override parameters() and in your custom function return a filtered set of parameters.
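A minimal sketch of both suggestions (the Wrapper class and its layers are illustrative, not from this thread):
import torch.nn as nn

class Wrapper(nn.Module):
    def __init__(self):
        super(Wrapper, self).__init__()
        self.registered = nn.Linear(4, 4)      # assigned as an attribute: auto-registered
        self.unregistered = [nn.Linear(4, 4)]  # plain Python list: not registered

    def parameters(self):
        # custom filter: expose only the registered layer's parameters
        return self.registered.parameters()
Note that modules kept in a plain list are also skipped by .cuda(), .state_dict(), etc., so they have to be handled manually.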
|
st115373
|
Hi, I am new to pytorch and I have some problems running my program on multiple GPUs.
I wrapped my network as the docs said
net = CNN()
net.load_state_dict(torch.load("cnn/model/epoch_1_subfile_55.pkl"))
net = torch.nn.DataParallel(net, device_ids=[0, 1, 2, 3])
net = net.cuda()
The net module has a method named forward_batch. When I ran this code, it told me that ‘DataParallel’ object has no attribute ‘forward_batch’. I wonder if there is any method to solve this problem?
Thank you a lot!
|
st115374
|
What are you trying to do? Calling forward_batch on net won't work after you wrapped it in a DataParallel, because a new class is returned.
|
st115375
|
Thanks for your reply! I just want to forward batches. I know people usually rewrite the forward method and derive the result by
out = net(input)
I add the forward_batch method because the inputs to my network sometimes are different. I derive the result by
out = net.forward_batch(input)
It seems that the returned class doesn’t support this method. Maybe I should change the name of the method to forward?
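For reference, a sketch of one workaround (not a confirmed fix from this thread): since DataParallel only intercepts forward(), you can dispatch inside forward() via a keyword argument, which DataParallel passes through to every replica:
import torch
import torch.nn as nn
from torch.autograd import Variable

class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.fc = nn.Linear(8, 2)

    def forward(self, x, batch_mode=False):
        # DataParallel only intercepts forward(), so dispatch here;
        # keyword arguments are forwarded to every replica
        if batch_mode:
            x = x.view(-1, 8)  # hypothetical batch-specific preprocessing
        return self.fc(x)

net = nn.DataParallel(CNN())  # with GPUs, add device_ids and .cuda()
out = net(Variable(torch.randn(4, 8)), batch_mode=True)
Alternatively, the wrapped module stays reachable as net.module, so net.module.forward_batch(input) still works, but that call bypasses the multi-GPU scatter.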
|
st115376
|
So I’m building a Sentiment analyzer , but having problems training it .
Here’s my sample neural network :
train_vecs_w2v = np.concatenate([word_vector(z, size) for z in tqdm(map(lambda x: x.words, X_train))])

class net(nn.Module):
    def __init__(self):
        super(net, self).__init__()
        self.l1 = nn.Linear(200, 32)
        self.relu = nn.ReLU()
        self.l2 = nn.Linear(32, 1)

    def forward(self, x):
        x = self.relu(self.l1(x))
        x = self.l2(x)
        x = F.sigmoid(x)
        return x

net = net()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.RMSprop(net.parameters(), lr=learning_rate)
inputs = Variable(torch.from_numpy(train_vecs_w2v))
targets = Variable(torch.from_numpy(Y_train))

for epoch in range(num_epochs):
    optimizer.zero_grad()
    outputs = net(inputs)
    loss = criterion(outputs, targets)
    loss.backward()
    optimizer.step()
    if (epoch + 1) % 5 == 0:
        print('Epoch [%d/%d], Loss: %.4f' % (epoch + 1, num_epochs, loss.data[0]))
Inputs has a dimension of 959646 X 200
targets - 959646 X 1
I’m getting this error:
TypeError : addmm_ received an invalid combination of arguments - got (int, int, torch.DoubleTensor, torch.FloatTensor), but expected one of:
* (torch.DoubleTensor mat1, torch.DoubleTensor mat2)
* (torch.SparseDoubleTensor mat1, torch.DoubleTensor mat2)
* (float beta, torch.DoubleTensor mat1, torch.DoubleTensor mat2)
* (float alpha, torch.DoubleTensor mat1, torch.DoubleTensor mat2)
* (float beta, torch.SparseDoubleTensor mat1, torch.DoubleTensor mat2)
* (float alpha, torch.SparseDoubleTensor mat1, torch.DoubleTensor mat2)
* (float beta, float alpha, torch.DoubleTensor mat1, torch.DoubleTensor mat2)
* (float beta, float alpha, torch.SparseDoubleTensor mat1, torch.DoubleTensor mat2)
The inputs variable is wrapped in Variable(); I can't figure it out.
Thanks.
|
st115377
|
Fixed:
I had to call .float() on the tensor.
Still can't understand why this happens; my numpy array was already float?
|
st115378
|
Numpy’s default is float64 aka double, pytorch defaults to float aka float32.
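A minimal sketch of the mismatch and the cast:
import numpy as np
import torch

a = np.zeros(3)                    # numpy default dtype: float64 (double)
t = torch.from_numpy(a)            # becomes a torch.DoubleTensor
t32 = torch.from_numpy(a).float()  # cast to float32, matching nn layers' default weights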
Best regards
Thomas
|
st115379
|
I just upgraded to v0.2 with pip install http://download.pytorch.org/whl/torch-0.2.0.post1-cp27-none-macosx_10_7_x86_64.whl and pip install torchvision. When I try importing torch in the shell, I get:
>>> import torch
RuntimeError: module compiled against API version 0xb but this version of numpy is 0xa
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/torch/__init__.py", line 53, in <module>
from torch._C import *
ImportError: numpy.core.multiarray failed to import
I already have numpy and can do
>>> import numpy as np
>>> np.empty(3)
array([ 0., 0., 2.])
>>>
But I cannot use PyTorch…
|
st115380
|
I had the same issue. I uninstalled and reinstalled numpy, then installed torch again. That fixed it for me.
|
st115381
|
This does not solve it for me:
import numpy as np
np.__version__
'1.11.2'
import torch
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
RuntimeError: module compiled against API version 0xb but this version of numpy is 0xa
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-2-21195f21f05e> in <module>()
----> 1 import torch
/home/twenty/anaconda3/envs/mp/lib/python3.5/site-packages/torch/__init__.py in <module>()
51 sys.setdlopenflags(_dl_flags.RTLD_GLOBAL | _dl_flags.RTLD_NOW)
52
---> 53 from torch._C import *
54
55 __all__ += [name for name in dir(_C)
ImportError: numpy.core.multiarray failed to import
|
st115382
|
I have the same issue.
>>> import numpy as np
>>> np.__version__
'1.12.1'
>>> import torch
RuntimeError: module compiled against API version 0xb but this version of numpy is 0xa
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/odin/miniconda3/lib/python3.5/site-packages/torch/__init__.py", line 53, in
<module>
from torch._C import *
ImportError: numpy.core.multiarray failed to import
Reinstalling does not help.
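For what it's worth, an assumption based on the message rather than a confirmed fix from this thread: numpy C API version 0xb corresponds to numpy 1.13, so the torch binary appears to have been built against a newer numpy than the one installed. Upgrading numpy (rather than reinstalling the same version) might help:
pip install --upgrade numpy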
|
st115383
|
Use .float() to convert your tensor to Float.
That pretty much fixes the error:
x = Variable(torch.from_numpy(x).float())
|
st115384
|
I have a float number saved in A, as follows, and I wish to convert it to a torch.autograd.Variable. Searching the internet I found I can do the following, but after doing it, when I print the value of B (the converted number) it does not show me anything.
Any suggestion?
A= np.float64(0.85210)
print(A)
print(type(A))
Output:
0.8521
<type 'numpy.float64'>
Then for converting I do:
B=(Variable((torch.from_numpy(np.array(A, dtype=np.float64))).cuda()))
print(B)
print(type(B))
and the output is:
Variable containing:[torch.cuda.DoubleTensor with no dimension]
<class 'torch.autograd.variable.Variable'>
P.S. the reason I convert to np.array first is that using torch.from_numpy(A) gives me the error:
RuntimeError: from_numpy expects an np.ndarray but got numpy.float64
|
st115385
|
torch.from_numpy() requires a numpy array as an argument, but what you are giving it is a float value.
First you need to convert your float value to a numpy array, which can be done like this:
A = 0.85210
A = numpy.array([A])
B = torch.autograd.Variable(torch.from_numpy(A))
This should work.
|
st115386
|
Dear All,
I created a minimal example here to reproduce the problem I am facing.
I have a data set with 21 features (e.g covariates) and 108405 rows, e.g.:
torch.Size([108405, 21])
I am interested in using a CNN, for this problem and I have the following NN architecture:
Net2 (
(l1): Sequential (
(0): Linear (21 -> 126)
(1): Dropout (p = 0.15)
(2): LeakyReLU (0.1)
(3): BatchNorm1d(126, eps=1e-05, momentum=0.1, affine=True)
)
(c1): Sequential (
(0): Conv1d(21, 126, kernel_size=(3,), stride=(1,), padding=(1,))
(1): Dropout (p = 0.25)
(2): LeakyReLU (0.1)
(3): BatchNorm1d(126, eps=1e-05, momentum=0.1, affine=True)
)
(out): Sequential (
(0): Linear (756 -> 1)
)
(sig): Sigmoid ()
)
Now, I would like to eliminate the first Linear (21 -> 126) layer:
[screenshot: 55-PyTorch-using-CONV1D-on-one-dimensional-data-CNN-minimal-example.jpg]
The only reason it is there is because I was unable to correctly shape my input to feed directly into the CNN that follows, e.g.:
[screenshot: 55-PyTorch-using-CONV1D-on-one-dimensional-data-CNN-minimal-example.jpg]
Is this possible? How can I reshape my x_tensor so that it can be fed directly into the Conv1d layer?
This is the full network:
# References:
# https://github.com/vinhkhuc/PyTorch-Mini-Tutorials/blob/master/5_convolutional_net.py
# https://gist.github.com/spro/c87cc706625b8a54e604fb1024106556
X_tensor_train = XnumpyToTensor(trainX)  # default order is NBC for a 3d tensor, but we have a 2d tensor
X_shape = X_tensor_train.data.size()

# Dimensions
# Number of features for the input layer
N_FEATURES = trainX.shape[1]
# Number of rows
NUM_ROWS_TRAINING = trainX.shape[0]
# this number has no meaning except for being divisible by 2
N_MULT_FACTOR = 6  # min should be 4
# Size of first linear layer
N_HIDDEN = N_FEATURES * N_MULT_FACTOR
# CNN kernel size
N_CNN_KERNEL = 3

DEBUG_ON = True

def debug(msg, x):
    if DEBUG_ON:
        print(msg + ', (size():' + str(x.size()))

class Net2(nn.Module):
    def __init__(self, n_feature, n_hidden, n_output, n_cnn_kernel, n_mult_factor=N_MULT_FACTOR):
        super(Net2, self).__init__()
        self.n_feature = n_feature
        self.n_hidden = n_hidden
        self.n_output = n_output
        self.n_cnn_kernel = n_cnn_kernel
        self.n_mult_factor = n_mult_factor
        self.n_l2_hidden = self.n_hidden * (self.n_mult_factor - self.n_cnn_kernel + 3)
        self.l1 = nn.Sequential(
            torch.nn.Linear(self.n_feature, self.n_hidden),
            torch.nn.Dropout(p=1 - .85),
            torch.nn.LeakyReLU(0.1),
            torch.nn.BatchNorm1d(self.n_hidden, eps=1e-05, momentum=0.1, affine=True)
        )
        self.c1 = nn.Sequential(
            torch.nn.Conv1d(self.n_feature, self.n_hidden,
                            kernel_size=(self.n_cnn_kernel,), stride=(1,), padding=(1,)),
            torch.nn.Dropout(p=1 - .75),
            torch.nn.LeakyReLU(0.1),
            torch.nn.BatchNorm1d(self.n_hidden, eps=1e-05, momentum=0.1, affine=True)
        )
        self.out = nn.Sequential(
            torch.nn.Linear(self.n_l2_hidden, self.n_output),
        )
        self.sig = nn.Sigmoid()

    def forward(self, x):
        debug('raw', x)
        varSize = x.data.shape[0]  # must be calculated here in forward() since it is a dynamic size
        x = self.l1(x)
        debug('after lin', x)
        # for CNN
        x = x.view(varSize, self.n_feature, self.n_mult_factor)
        debug('after view', x)
        x = self.c1(x)
        debug('after CNN', x)
        # for Linear layer
        x = x.view(varSize, self.n_hidden * (self.n_mult_factor - self.n_cnn_kernel + 3))
        debug('after 2nd view', x)
        # x = self.l2(x)
        x = self.out(x)
        debug('after self.out', x)
        x = self.sig(x)
        return x

net = Net2(n_feature=N_FEATURES, n_hidden=N_HIDDEN, n_output=1, n_cnn_kernel=N_CNN_KERNEL)
if use_cuda:
    net = net.cuda()
lgr.info(net)
b = net(X_tensor_train)
print('(b.size():' + str(b.size()))
Output:
(108405, 21)
<type 'numpy.ndarray'>
<class 'torch.cuda.FloatTensor'>
torch.Size([108405, 21])
raw, (size():torch.Size([108405, 21])
after lin, (size():torch.Size([108405, 126])
after view, (size():torch.Size([108405, 21, 6])
after CNN, (size():torch.Size([108405, 126, 6])
after 2nd view, (size():torch.Size([108405, 756])
after self.out, (size():torch.Size([108405, 1])
(b.size():torch.Size([108405, 1])
Many thanks,
|
st115387
|
I was able to resolve this (will upload a new version), but would greatly appreciate someone taking a look at the code.
|
st115388
|
Well … I celebrated too early: whenever I set N_CNN_KERNEL to anything other than 3, I get a dimension mismatch error.
Anyone…?
Thanks,
|
st115389
|
Hi,
I’m trying to investigate the reason for a high GPU memory usage in my code.
For that, I would like to list all allocated tensors/storages created explicitly or within autograd. The closest thing I found is Soumith’s snippet to iterate over all tensors known to the garbage collector.
However, there has to be something missing… For example, I run python -m pdb -c continue to break at a CUDA out-of-memory error (with or without CUDA_LAUNCH_BLOCKING=1). At this point, nvidia-smi reports around 9 GB occupied. In the snippet I sum the .numel()s of all tensors found and get 17092783 elements, which at a maximum of 8 B per element gives ~130 MB. In particular, many autograd Variables (intermediate computations) are missing from the list. Can anyone give me a hint? Thanks!
|
st115390
|
Martin,
it's possible that these references to Variables are alive, but not in Python. These buffers can belong to Functions that called save_for_backward on inputs they need for the gradient, and some Variable somewhere in your code is alive and holding a reference to the graph that keeps all these buffer references alive.
|
st115391
|
Thanks for the clarification! So is there any way to enumerate all these Tensors, ideally by somehow querying the memory manager? One could also traverse the autograd graph similarly to Sergey's visualization code; this should give me the saved Tensors as well, but will that be complete?
|
st115392
|
Newbie question but I was not able to google an answer.
When running Keras model on GPU (with TensorFlow backend) a message is displayed automatically showing total and free amount of GPU memory.
Can I get the same functionality in PyTorch? How can I display the total and free GPU memory with PyTorch?
What’s the simplest way to display total GPU memory consumed by all Variables in my program?
|
st115393
|
Other options are:
! watch -n 0.1 'ps f -o user,pgrp,pid,pcpu,pmem,start,time,command -p `lsof -n -w -t /dev/nvidia*`'
sudo apt-get install dstat #install dstat
sudo pip install nvidia-ml-py #install Python NVIDIA Management Library
wget https://raw.githubusercontent.com/datumbox/dstat/master/plugins/dstat_nvidia_gpu.py
sudo mv dstat_nvidia_gpu.py /usr/share/dstat/ #move file to the plugins directory of dstat
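Using the nvidia-ml-py package installed above, a minimal sketch to query total/free/used GPU memory from Python (pynvml's standard binding names):
import pynvml  # provided by the nvidia-ml-py package

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # GPU 0
info = pynvml.nvmlDeviceGetMemoryInfo(handle)  # sizes reported in bytes
print('total: %d MB' % (info.total // 1024 // 1024))
print('free:  %d MB' % (info.free // 1024 // 1024))
print('used:  %d MB' % (info.used // 1024 // 1024))
pynvml.nvmlShutdown()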
|
st115394
|
retain_graph (bool, optional) – If False, the graph used to compute the grad will be freed. Note that in nearly all cases setting this option to True is not needed and often can be worked around in a much more efficient way. Defaults to the value of create_graph.
I know that we don’t really need it in most cases, But i still want to know what cases would we need retain_graph to be true?
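One common case is backpropagating twice through (parts of) the same graph, e.g. two losses sharing intermediate results; a minimal sketch:
import torch
from torch.autograd import Variable

x = Variable(torch.randn(3), requires_grad=True)
y = x * 2                 # shared intermediate result
loss1 = y.sum()
loss2 = (y ** 2).sum()

loss1.backward(retain_graph=True)  # keep the buffers for the second pass
loss2.backward()                   # would fail without retain_graph above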
|
st115395
|
Hi, I want to free the GPU after the calculation of the neural network model. I tried del on the model, import gc and gc.collect(), but it just doesn't work. Is there any way to free the GPU memory?
|
st115396
|
The GPU memory is probably freed. But nvidia-smi's reporting of GPU memory will be misleading, because PyTorch uses its own caching memory allocator for the GPU.
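As a hedged aside (this API postdates the 0.2-era releases discussed here): newer PyTorch versions expose a call that returns cached, unused blocks to the driver so that nvidia-smi reflects the drop:
import torch
# releases cached blocks the allocator holds but no tensor is using;
# memory backing live tensors is NOT freed - delete those references first
torch.cuda.empty_cache()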
|
st115397
|
Hi,
I want to create a list that contains tensors of different sizes together. For example:
A list with two tensors
[[1,2,3],
[4,5,6,7]]
Is there any other way of keeping these tensors together apart from python list? Something more native to PyTorch…
|
st115398
|
Hi,
I am confused about how to merge all intermediary outputs in forward. In each loop iteration, att1 has shape (batch x 1 x 64 x 64); I want pfeat to have shape (batch x num_parts x 64 x 64).
When I run the script below, the problem arises: AttributeError: 'list' object has no attribute 'size'. How can I fix this? Thanks for any help.
def forward(self, x):
    pfeat = []
    if self.use_part is not None:
        for i in range(self.num_parts):
            att = self.attention(x)
            att1 = self.conv(att)
            pfeat.append(att)
        return pfeat
    else:
        return self.attention(x)
|
st115399
|
You may use torch.cat():
http://pytorch.org/docs/master/torch.html?highlight=cat#torch.cat
|
st115400
|
I have tried torch.cat() and torch.stack(), but it doesn't work. Maybe I have misused torch.cat. Thanks for your help.
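For reference, a sketch of how torch.cat can do the merge here: cat takes a list of tensors, so call it once on the whole list after the loop, along dim 1 (the tensors below are stand-ins for the att1 maps):
import torch

pfeat = [torch.randn(2, 1, 64, 64) for _ in range(4)]  # stand-ins for the att1 maps
merged = torch.cat(pfeat, dim=1)
print(merged.size())  # torch.Size([2, 4, 64, 64]) = batch x num_parts x 64 x 64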
|
st115401
|
I am trying to install warp-ctc for PyTorch. The installer requires that _cffi_backend be available on my machine (Ubuntu 16.04). Although I have the file _cffi_backend.cpython-36m-x86_64-linux-gnu.so under my Anaconda (Python 3.6.0) site-packages, whenever I write the instruction:
import _cffi_backend
I get the following error:
ImportError: /home/bishwajit/anaconda3/lib/python3.6/site-packages/_cffi_backend.cpython-36m-x86_64-linux-gnu.so: undefined symbol: PySlice_AdjustIndices
I browsed a lot and many people have had this problem before. But they were importing torch when the problem occurred; I don't have any problem importing torch, though. The answers said to downgrade the torch version (to 0.1.10), which I did with no effect. Is there any way out?
My current pytorch version is: ‘0.2.0_4’
|
st115402
|
iambishwa:
undefined symbol: PySlice_AdjustIndices
This seems to be unrelated to PyTorch:
https://bugzilla.redhat.com/show_bug.cgi?id=1435135
|
st115403
|
Use my docker file:
github.com
QuantScientist/Deep-Learning-Boot-Camp/blob/master/docker/Dockerfile.gpu3
FROM nvidia/cuda:8.0-cudnn6-devel-ubuntu16.04
ENV CUDA_ARCH_BIN "30 35 50 52 60"
ENV CUDA_ARCH_PTX "60"
RUN rm -rf /var/lib/apt/lists/*
RUN apt-get clean
RUN apt-get update && apt-get install --no-install-recommends -y \
git cmake build-essential libgoogle-glog-dev libgflags-dev libeigen3-dev libopencv-dev libcppnetlib-dev libboost-dev libboost-all-dev libboost-iostreams-dev libcurl4-openssl-dev protobuf-compiler libopenblas-dev libhdf5-dev libprotobuf-dev libleveldb-dev libsnappy-dev liblmdb-dev libutfcpp-dev wget unzip \
python \
python-dev \
python2.7-dev \
python3-dev \
python-virtualenv \
python-wheel \
python-tk \
pkg-config \
libopenblas-base \
(Dockerfile truncated; see the original on GitHub.)
I tested it with import _cffi_backend and it works.
|
st115404
|
Finally I found the trick. It was all due to version conflicts. It doesn’t work with Python 3.6+ as far as I have seen it. Works great with Python 3.5.0 and 3.5.2. Thanks!
|
st115405
|
I am trying to install pytorch on a Mac Pro with the following specs:
OS: 10.11
Python: 2.7
Cuda: 8.0.61
Apple LLVM version 7.0.2
I seem to have followed all the steps mentioned at https://github.com/pytorch/pytorch
and tried some of the recommendations given in several posts in this forum.
However, I am currently stuck with this error and can’t find a way out. Any help would be highly appreciated
In file included from /Users/Ashis/Documents/Github/pytorch/torch/lib/build/ATen/ATen/CPUByteTensor.cpp:1:
In file included from /Users/Ashis/Documents/Github/pytorch/torch/lib/build/ATen/ATen/CPUByteTensor.h:11:
In file included from /Users/Ashis/Documents/Github/pytorch/torch/lib/build/ATen/ATen/TensorMethods.h:4:
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:16:14: error:
call to constructor of 'at::Scalar' is ambiguous
Scalar() : Scalar(0L) {}
^ ~~
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:28:26: note:
candidate constructor
AT_FORALL_SCALAR_TYPES(DEFINE_IMPLICIT_CTOR)
^
/Users/Ashis/Documents/Github/pytorch/torch/lib/build/ATen/ATen/Type.h:18:35: note:
expanded from macro 'AT_FORALL_SCALAR_TYPES'
#define AT_FORALL_SCALAR_TYPES(_) \
^
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:23:3: note:
expanded from macro 'DEFINE_IMPLICIT_CTOR'
Scalar(type vv) \
^
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:28:26: note:
candidate constructor
/Users/Ashis/Documents/Github/pytorch/torch/lib/build/ATen/ATen/Type.h:19:19: note:
expanded from macro 'AT_FORALL_SCALAR_TYPES'
_(uint8_t,Byte,i) \
^
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:23:3: note:
expanded from macro 'DEFINE_IMPLICIT_CTOR'
Scalar(type vv) \
^
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:28:26: note:
candidate constructor
/Users/Ashis/Documents/Github/pytorch/torch/lib/build/ATen/ATen/Type.h:20:18: note:
expanded from macro 'AT_FORALL_SCALAR_TYPES'
_(int8_t,Char,i) \
^
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:23:3: note:
expanded from macro 'DEFINE_IMPLICIT_CTOR'
Scalar(type vv) \
^
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:28:26: note:
candidate constructor
/Users/Ashis/Documents/Github/pytorch/torch/lib/build/ATen/ATen/Type.h:21:20: note:
expanded from macro 'AT_FORALL_SCALAR_TYPES'
_(double,Double,d) \
^
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:23:3: note:
expanded from macro 'DEFINE_IMPLICIT_CTOR'
Scalar(type vv) \
^
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:28:26: note:
candidate constructor
/Users/Ashis/Documents/Github/pytorch/torch/lib/build/ATen/ATen/Type.h:22:18: note:
expanded from macro 'AT_FORALL_SCALAR_TYPES'
_(float,Float,d) \
^
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:23:3: note:
expanded from macro 'DEFINE_IMPLICIT_CTOR'
Scalar(type vv) \
^
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:28:26: note:
candidate constructor
/Users/Ashis/Documents/Github/pytorch/torch/lib/build/ATen/ATen/Type.h:23:14: note:
expanded from macro 'AT_FORALL_SCALAR_TYPES'
_(int,Int,i) \
^
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:23:3: note:
expanded from macro 'DEFINE_IMPLICIT_CTOR'
Scalar(type vv) \
^
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:28:26: note:
candidate constructor
/Users/Ashis/Documents/Github/pytorch/torch/lib/build/ATen/ATen/Type.h:24:19: note:
expanded from macro 'AT_FORALL_SCALAR_TYPES'
_(int64_t,Long,i) \
^
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:23:3: note:
expanded from macro 'DEFINE_IMPLICIT_CTOR'
Scalar(type vv) \
^
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:14:7: note:
candidate is the implicit move constructor
class Scalar {
^
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:14:7: note:
candidate is the implicit copy constructor
In file included from /Users/Ashis/Documents/Github/pytorch/torch/lib/build/ATen/ATen/CPUCharTensor.cpp:1:
In file included from /Users/Ashis/Documents/Github/pytorch/torch/lib/build/ATen/ATen/CPUCharTensor.h:11:
In file included from /Users/Ashis/Documents/Github/pytorch/torch/lib/build/ATen/ATen/TensorMethods.h:4:
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:16:14: error:
call to constructor of 'at::Scalar' is ambiguous
Scalar() : Scalar(0L) {}
^ ~~
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:28:26: note:
candidate constructor
AT_FORALL_SCALAR_TYPES(DEFINE_IMPLICIT_CTOR)
^
/Users/Ashis/Documents/Github/pytorch/torch/lib/build/ATen/ATen/Type.h:18:35: note:
expanded from macro 'AT_FORALL_SCALAR_TYPES'
#define AT_FORALL_SCALAR_TYPES(_) \
^
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:23:3: note:
expanded from macro 'DEFINE_IMPLICIT_CTOR'
Scalar(type vv) \
^
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:28:26: note:
candidate constructor
/Users/Ashis/Documents/Github/pytorch/torch/lib/build/ATen/ATen/Type.h:19:19: note:
expanded from macro 'AT_FORALL_SCALAR_TYPES'
_(uint8_t,Byte,i) \
^
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:23:3: note:
expanded from macro 'DEFINE_IMPLICIT_CTOR'
Scalar(type vv) \
^
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:28:26: note:
candidate constructor
/Users/Ashis/Documents/Github/pytorch/torch/lib/build/ATen/ATen/Type.h:20:18: note:
expanded from macro 'AT_FORALL_SCALAR_TYPES'
_(int8_t,Char,i) \
^
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:23:3: note:
expanded from macro 'DEFINE_IMPLICIT_CTOR'
Scalar(type vv) \
^
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:28:26: note:
candidate constructor
/Users/Ashis/Documents/Github/pytorch/torch/lib/build/ATen/ATen/Type.h:21:20: note:
expanded from macro 'AT_FORALL_SCALAR_TYPES'
_(double,Double,d) \
^
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:23:3: note:
expanded from macro 'DEFINE_IMPLICIT_CTOR'
Scalar(type vv) \
^
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:28:26: note:
candidate constructor
1 error generated.
/Users/Ashis/Documents/Github/pytorch/torch/lib/build/ATen/ATen/Type.h:22:18: note:
expanded from macro 'AT_FORALL_SCALAR_TYPES'
_(float,Float,d) \
^
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:23:3: note:
expanded from macro 'DEFINE_IMPLICIT_CTOR'
Scalar(type vv) \
^
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:28:26: note:
candidate constructor
/Users/Ashis/Documents/Github/pytorch/torch/lib/build/ATen/ATen/Type.h:23:14: note:
expanded from macro 'AT_FORALL_SCALAR_TYPES'
_(int,Int,i) \
^
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:23:3: note:
expanded from macro 'DEFINE_IMPLICIT_CTOR'
Scalar(type vv) \
^
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:28:26: note:
candidate constructor
/Users/Ashis/Documents/Github/pytorch/torch/lib/build/ATen/ATen/Type.h:24:19: note:
expanded from macro 'AT_FORALL_SCALAR_TYPES'
_(int64_t,Long,i) \
^
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:23:3: note:
expanded from macro 'DEFINE_IMPLICIT_CTOR'
Scalar(type vv) \
^
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:14:7: note:
candidate is the implicit move constructor
class Scalar {
^
/Users/Ashis/Documents/Github/pytorch/torch/lib/ATen/../ATen/Scalar.h:14:7: note:
candidate is the implicit copy constructor
1 error generated.
make[2]: *** [CMakeFiles/ATen.dir/ATen/CPUByteStorage.cpp.o] Error 1
make[2]: *** [CMakeFiles/ATen.dir/ATen/CPUCharStorage.cpp.o] Error 1
1 error generated.
make[2]: *** [CMakeFiles/ATen.dir/ATen/CPUByteTensor.cpp.o] Error 1
1 error generated.
make[2]: *** [CMakeFiles/ATen.dir/ATen/CPUCharTensor.cpp.o] Error 1
1 error generated.
1 error generated.
make[2]: *** [CMakeFiles/ATen.dir/ATen/CPUByteType.cpp.o] Error 1
make[2]: *** [CMakeFiles/ATen.dir/ATen/CPUCharType.cpp.o] Error 1
make[1]: *** [CMakeFiles/ATen.dir/all] Error 2
make: *** [all] Error 2
|
st115406
|
Sometimes we need to build models with if statements, like resizing the shorter side of an image to 640 in Faster R-CNN. In this situation, according to the doc here, ONNX can't correctly serialize the model. Does this problem have a solution?
|
st115407
|
We do not support exporting models that have control flow to ONNX yet; it's slated for a later release.
|
st115408
|
You can use this solution:
Define ModuleA, which expects a fixed-size input and doesn't have any control flow.
Define ModuleB, which handles the input resize and then calls ModuleA.
ModuleA can be exported into ONNX format, and you handle the resize manually, as sketched below.
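A minimal sketch of that split (module names, layer sizes and the 640 target are illustrative; F.interpolate is the modern name of the resize, F.upsample in older releases):
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModuleA(nn.Module):          # fixed-size input, no control flow: exportable
    def __init__(self):
        super(ModuleA, self).__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)

    def forward(self, x):          # expects e.g. a 3 x 640 x 640 input
        return self.conv(x)

class ModuleB(nn.Module):          # handles the dynamic resize, stays in Python
    def __init__(self):
        super(ModuleB, self).__init__()
        self.core = ModuleA()

    def forward(self, x):
        x = F.interpolate(x, size=(640, 640))  # the data-dependent step
        return self.core(x)

# export only the control-flow-free part:
# torch.onnx.export(ModuleB().core, torch.randn(1, 3, 640, 640), 'module_a.onnx')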
|
st115409
|
Hi,
I am porting a Torch model to a PyTorch model, but I have difficulties with how to share the parameters and gradParameters between two layers.
Here is the Torch script:
[screenshot: 1.png]
Thanks for any help.
|
st115410
|
You can reuse the same layer again and again; you don't need to clone a separate layer for a separate output.
For example, this works fine:
m = nn.Conv2d(...)
output1 = m(input1)
output2 = m(input2)
(output1 + output2).sum().backward()
|
st115411
|
Before I upgraded to the latest pytorch, this command worked for me:
ss = pp.gather(1, labels)
where pp is:
Variable containing:
0.0651 0.9349
0.6208 0.3792
0.3024 0.6976
0.2226 0.7774
0.0394 0.9606
0.4197 0.5803
0.1205 0.8795
0.3774 0.6226
0.1682 0.8318
0.2281 0.7719
0.3845 0.6155
0.4658 0.5342
0.4982 0.5018
0.2653 0.7347
0.2694 0.7306
0.6550 0.3450
[torch.cuda.FloatTensor of size 16x2 (GPU 0)]
and where labels
Variable containing:
1
1
1
1
1
1
1
1
0
0
0
0
0
0
0
0
[torch.cuda.LongTensor of size 16 (GPU 0)]
However after I upgraded to the latest version off of master, this same line gives me the following error:
*** RuntimeError: Input tensor must have same dimensions as output tensor at /data/pytorch/torch/lib/THC/generic/THCTensorScatterGather.cu:16
I’m not sure how/why this error occurs now, and in fact it does not even make sense to me, I do not see why/what sizes need to be the same here.
Thanks
|
st115412
|
The answer is that labels needs to be reshaped into an explicit 16x1 tensor from the 1-d size-16 tensor it currently is. Doing labels.view(-1, 1) will make it work.
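A sketch of the fixed call:
# labels: LongTensor of size 16 -> view as 16x1 so it has the same
# number of dimensions as pp (16x2), as gather now requires
ss = pp.gather(1, labels.view(-1, 1))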
|
st115413
|
Hi guys,
Did someone encounter the same problem after upgrading to v0.2?
I tried view(-1, 1) but had no luck…
got the following error message :
RuntimeError: Index tensor must have same dimensions as input tensor at /home/jdily/Desktop/project/lib/pytorch/torch/lib/THC/generic/THCTensorScatterGather.cu:111
And there is no error when running under cpu mode.
thx!
|
st115414
|
@jdily please see the release notes, especially the section called "Important Breakages and Workarounds".
Maybe you are affected by this:
https://github.com/pytorch/pytorch
|
st115415
|
[Edit: Sorry if my question is just a variation on the question: why are hand-implemented RNNs so slow – which has been asked already before. However, the thing that I don’t understand is that if the answer to that question is that kernel launches take a long time, why is my CPU pegged at 100% and a faster CPU core gives better performance?]
I have a model (made up of a hand-implemented LSTM and a three-layer convnet) that achieves GPU utilization of only around (very roughly) 50% when training. There is always one core pegged at 100%, and the model trains slower on a machine with a slower CPU and faster GPU than on a machine with a faster CPU and slower GPU. I can change the LSTM from using 1024 units to 2048 units, and the model only trains about 20% slower.
I have tried profiling the model using cProfile (and I also profiled an earlier version with the line profiler), but nothing jumps out at me as out of the ordinary. It seems to spend roughly half of its time in the forward pass and half of its time in the backwards pass. I am not doing any significant preprocessing or transferring large quantities of data between the CPU and GPU (at least not intentionally). The python process uses 22GB of virtual memory, but there is only about 2GB resident, and the system seems to have plenty of free memory. The hard disk is mostly idle. My best guess is that maybe I am doing some things that take a lot of computation and have to execute on the CPU, but it is not clear to me what these things would be. Or possibly there is kernel launch overhead or CPU-GPU latency that is adding up, but in that case I am not sure why faster single core performance would help.
|
st115416
|
If your RNN is written in terms of RNNCell/LSTMCell/GRUCell + for loops and it is a fairly small RNN, then maybe you are suffering from autograd overhead. We're working on this. See: https://github.com/pytorch/pytorch/issues/2518#issuecomment-327835296
|
st115417
|
I am trying to implement a loss function that combines hinge loss with KL divergence (paper, Lua implementation).
The current implementation of kl_div() returns either the sum or the mean of the values over the batch, but I need the per-example values. Calling kl_div() on each pair separately is extremely slow (5-10x slower than calling it on the entire batch). Any advice on what I need to change/implement to get a version of kl_div() that returns a vector of values rather than a sum or mean?
Thank you!
|
st115418
|
You could implement it separately in terms of autograd ops; maybe that's your best option for now.
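For instance, a minimal sketch of a per-example KL divergence built from elementary ops (assuming log_probs holds log-probabilities and target holds probabilities, mirroring kl_div's convention):
import torch

def kl_div_per_example(log_probs, target):
    # pointwise target * (log(target) - log_probs), reduced per row;
    # assumes target > 0 everywhere (kl_div treats 0 * log(0) as 0)
    return (target * (target.log() - log_probs)).sum(1)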
|
st115419
|
I am running this PyTorch example on a g2.2xlarge AWS machine. When I run time python imageNet.py ImageNet2, it runs well with the following timing:
real 3m16.253s
user 1m50.376s
sys 1m0.872s
However, when I add the world-size parameter, it gets stuck and does not execute anything. The command is as follows: time python imageNet.py --world-size 2 ImageNet2
So, how do I leverage the DistributedDataParallel functionality with the world-size parameter in this script. The world-size parameter is nothing but number of distributed processes.
Do I spin up another similar instance for this purpose? If yes, then how do the script recognize the instance? Do I need to add some parameters like the instance’s IP or something?
[Also asked the question on StackOverflow, if someone is willing to help there: https://stackoverflow.com/q/45674497/4993513]
|
st115420
|
Hi @Dawny33,
I recently played with distributed PyTorch, and I can give you some pointers here, though I'm not sure if you've already figured this out.
I'm not sure if you can create multiple processes on a single machine by running init_process_group on different threads (it works in MPI), but you can try that out and let me know. However, you can definitely run the distributed version of this example on a distributed cluster (with more than one EC2 instance).
To do that, the first thing you need to do is set up your own cluster on EC2, which means you need to make all your worker nodes ssh-able from the master node. Here is a good tutorial for doing this (http://mpitutorial.com/tutorials/running-an-mpi-cluster-within-a-lan/). Then, if you look at the example code, it uses TCP to init the cluster, so you will need to set init_method to the private address of your master node (with a self-defined port, e.g. 23456), and to set the rank for each node in your cluster, as described here (http://pytorch.org/docs/master/distributed.html#tcp-initialization).
After setting all these things up, run the code on each node (you may want to write a script to simplify this step), then you should get what you want.
Hope this one helps somehow.
|
st115421
|
Hi! I'm learning to design an FNN using PyTorch with a GPU, and I came across the problem that the test accuracy of my model jumped to 100% when the epoch reached around 20.
After debugging, I found that the labels of the training data and test data were changed. When the test accuracy became 100%, the labels of the test data were all the same, and so were those of the training data.
But I really couldn't figure out where my code changed the labels. T_T
My Python version is 2.7, my torch version is 0.2.0_2 and my GPU is a Titan Xp. I'm using PyCharm to program.
Here is my code.
I used the data provided by Michael Nielsen's website, Neural Networks and Deep Learning. Thanks to him for leading me into this area.
import random
import pickle
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable

### load data
f = open('/home/data/mnist.pkl')
training_data, validation_data, test_data = pickle.load(f)
### load data and stack the target with data
training_data = torch.cat((torch.from_numpy(training_data[0]).float(), torch.from_numpy(training_data[1]).float()), dim=1)
validation_data = torch.cat((torch.from_numpy(validation_data[0]).float(), torch.from_numpy(validation_data[1]).float()), dim=1)
test_data = torch.cat((torch.from_numpy(test_data[0]).float(), torch.from_numpy(test_data[1]).float()), dim=1)
Here is the FNN net.
### Forward Neural Network
class FNN_NET(nn.Module):
    def __init__(self):
        super(FNN_NET, self).__init__()
        self.linear1 = nn.Linear(784, 30)
        self.linear2 = nn.Linear(30, 30)
        self.linear3 = nn.Linear(30, 10)

    def forward(self, x):
        x = F.relu(self.linear1(x))
        x = F.relu(self.linear2(x))
        x = F.relu(self.linear3(x))
        return x

model = FNN_NET()
model.cuda()

batch_size = 30
learning_rate = 0.03
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
This is how I trained the net.
def train(epoch):
    model.train()
    data_temp = training_data  # train with training_data or validation_data
    random.shuffle(data_temp)
    mini_batches = [data_temp[k:k+batch_size] for k in xrange(0, len(data_temp), batch_size)]
    for batch_idx, data_mini_batch in enumerate(mini_batches):
        data, target = torch.split(data_mini_batch, 784, dim=1)  # split the data and target
        target = vectorize_result(len(data_mini_batch), target)  # vectorize the target
        target = torch.from_numpy(target).float()  # change the data type to use on the GPU
        data, target = data.cuda(), target.cuda()
        data, target = Variable(data), Variable(target)
        optimizer.zero_grad()
        output = model(data)
        loss = F.mse_loss(output, target, size_average=True)
        loss.backward()
        optimizer.step()
The test.
def test(epoch):
    model.eval()
    test_loss = 0
    correct = 0
    data_temp = test_data  # we can also test the model using the validation data
    random.shuffle(data_temp)
    mini_batches = [data_temp[k:k+batch_size] for k in xrange(0, len(data_temp), batch_size)]
    for batch_idx, data_mini_batch in enumerate(mini_batches):
        data, target = torch.split(data_mini_batch, 784, dim=1)  # split the data and target
        target_temp = target  # stored to compute "correct" below
        target = vectorize_result(len(data_mini_batch), target)  # vectorize the target
        target = torch.from_numpy(target).float()  # change the data type
        data, target = data.cuda(), target.cuda()  # move to the GPU
        data, target = Variable(data), Variable(target)
        output = model(data)
        test_loss = F.mse_loss(output, target, size_average=True).data[0]  # sum up batch loss
        pred = output.data.max(1, keepdim=True)[1]
        correct += pred.cpu().eq(target_temp.long()).sum()
    test_loss /= len(mini_batches)
    print('Train Epoch: {} \t test loss: {},\t test accuracy: {:.3f}%'.format(epoch, test_loss, 100 * correct / float(len(data_temp))))
Here I change the labels of the data into a one-hot vector.
def vectorize_result(batch_size, target):
    result = np.zeros((batch_size, 10))
    target = np.int_(target.numpy())
    for idx in range(batch_size):
        result[idx, target[idx]] = 1
    return result
Finally, the train and test of the model.
for epoch in range(1, 30):
    train(epoch)
    test(epoch)
|
st115422
|
Actually, if I move test(epoch) out of the epoch loop, testing only after all epochs of training are done, the problem is solved. But I still can't figure out why this works. Is it possible that the test procedure following training could change the data we read in?
|
st115423
|
Hi! I was wondering which class you would recommend using when one defines a function that takes variables or tensors as inputs and performs some transformations, so as to enable autocompletion etc. in an IDE. There seems to be no such thing as torch.Tensor; I was trying torch.FloatTensor or _TensorBase, but they seem to lack definitions of methods like torch.sum: they are defined implicitly via C extensions, I guess, so PyCharm cannot see them.
Thanks!
upd: maybe someone built some sort of stub files?
|
st115424
|
Hi, thanks for the answer! Could you please let me know what exactly was fixed in master? Do you have some nice base class that has declarations of all the required methods now?
|
st115425
|
This is related to #6194.
I have the same problem with Pycharm.
With ipython everything works fine, autocomplete finds everything, and the python interpreter is the same.
Please let us know when this issue is fixed
|
st115426
|
I am new to PyTorch. In the course of reading the tutorial and docs, I feel rather confused about one thing: what is the relationship among torch.nn, torch.nn.functional, and torch.autograd.Function? What follows is my understanding; I hope somebody can tell me whether I am right or not. torch.nn consists of modules (layers). These modules are constructed using the operations provided by torch.nn.functional. Furthermore, these operations are constructed based on the rules specified by torch.autograd.Function. Is my understanding right?
|
st115427
|
torch.nn contains non-functional, stateful network modules. They are used as functors, typically within a larger network module class, which is itself a functor and derives from nn.Module:
class MyNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        # create a nn.Linear functor:
        self.h1 = nn.Linear(3, 2)

    def forward(self, x):
        # call the functor:
        x = self.h1(x)
        return x
The functional versions are stateless and called directly, e.g. for softmax, which has no internal state:
x = torch.nn.functional.softmax(x)
There are functional versions of various stateful network modules. In this case, you have to pass in the state yourself. Conceptually, for Linear, it'd be something like:
x = torch.nn.functional.linear(x, weights_tensor)
(I haven't looked to see if this actually exists, but conceptually it'd be like that.)
torch.autograd.Function is used to create the torch.nn.functional operations, but you can ignore it for now. I've never looked at it yet… Only if you want to create some new functional method might you need to look at it, and not even necessarily then.
^^^ the above might not be entirely complete/correct, so someone else can patch up any holes/inaccuracies, but it's approximately correct, I think.
|
st115428
|
Imagine I'm doing RL, and I have per-state values represented as a tensor:
V = torch.IntTensor(2, 3)
V.zero_()
Then I have a state represented as another tensor:
s = torch.IntTensor([1, 1])
… and I want to increment the corresponding value in V by 1, something like:
V[s] += 1, or
V.select(s) += 1
Thoughts on how to do this? (Ideally I'd prefer not to have to write V[s[0], s[1], s[2], ...] += 1.) Happy to convert the index tensor to a LongTensor, if that will help.
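One approach that might work (a sketch, not a confirmed answer from this thread: convert the index tensor to a plain tuple so ordinary multi-dimensional indexing applies):
import torch

V = torch.IntTensor(2, 3).zero_()
s = torch.LongTensor([1, 1])
V[tuple(s.tolist())] += 1  # equivalent to V[1, 1] += 1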
|
st115429
|
I’m trying to update my installation from source. I’ve tried the most recent master branch as well as the v0.2.0 branch.
My system is Ubuntu 14.04, Python 2.6, Cuda 8.0, I’m using anaconda where I’ve installed the dependencies.
Both times I get the errors:
/home/rakelly/pytorch/torch/lib/THCUNN/generic/SpatialFullConvolution.cu(18): error: identifier “THNN_CudaHalfSpatialFullDilatedConvolution_updateOutput” is undefined
9 errors detected in the compilation of “/tmp/tmpxft_00006874_00000000-7_SpatialFullConvolution.cpp1.ii”.
CMake Error at THCUNN_generated_SpatialFullConvolution.cu.o.cmake:267 (message):
Error generating file
/home/rakelly/pytorch/torch/lib/build/THCUNN/CMakeFiles/THCUNN.dir//./THCUNN_generated_SpatialFullConvolution.cu.o
make[2]: *** [CMakeFiles/THCUNN.dir/THCUNN_generated_SpatialFullConvolution.cu.o] Error 1
make[2]: *** Waiting for unfinished jobs…
/home/rakelly/pytorch/torch/lib/THCUNN/generic/VolumetricFullConvolution.cu(17): error: identifier “THNN_CudaHalfVolumetricFullDilatedConvolution_updateOutput” is undefined
9 errors detected in the compilation of “/tmp/tmpxft_000068fd_00000000-7_VolumetricFullConvolution.cpp1.ii”.
CMake Error at THCUNN_generated_VolumetricFullConvolution.cu.o.cmake:267 (message):
Error generating file
/home/rakelly/pytorch/torch/lib/build/THCUNN/CMakeFiles/THCUNN.dir//./THCUNN_generated_VolumetricFullConvolution.cu.o
make[2]: *** [CMakeFiles/THCUNN.dir/THCUNN_generated_VolumetricFullConvolution.cu.o] Error 1
/home/rakelly/pytorch/torch/lib/THCUNN/LookupTableBag.cu(18): warning: variable “MODE_SUM” was declared but never referenced
/home/rakelly/pytorch/torch/lib/THCUNN/LookupTableBag.cu(18): warning: variable “MODE_SUM” was declared but never referenced
make[1]: *** [CMakeFiles/THCUNN.dir/all] Error 2
|
st115430
|
Try "python setup.py clean", and rebuild after that. This solved my problem.
|
st115431
|
I am trying to do something similar to the code below:
class MyClass(nn.Module):
    def __init__(self):
        for i in range(10):
            self.MySubClassArray.append(MySubClass())

    def forward(self, X_Array):
        # X_Array is a list of 10 elements
        Y_Array = map(lambda MySubClass, x: MySubClass(x), self.MySubClassArray, X_Array)
        return Y_Array
In the forward() call, all 10 calls to MySubClass() are executed sequentially (as far as I know; correct me here if I am wrong). However, can we make the execution parallel? It should be possible, as each instance of MySubClass() is independent of the others.
|
st115432
|
Hi,
if you indeed have an X_Array, for many use-cases (where the items have the same size), you can just glue your enumeration dimension into the batch dimension using .view:
y = MySubClass(X_Array.view(X_Array.size(0) * X_Array.size(1), X_Array.size(2)))
return y.view(X_Array.size(0), X_Array.size(1), y.size(2))
or so.
Of course, if the items are all of different shapes, that won’t work.
Best regards
Thomas
|
st115433
|
There are different instances of MySubClass() to be applied on X_Array(0), X_Array(1), etc., so it can't be done that way.
def __init__(self):
    for i in range(10):
        self.MySubClassArray.append(MySubClass())
I have a collection of different MySubClass() instances which are then applied to the elements of X_Array using the map function, which I intend to parallelize.
|
st115434
|
I’m porting some Lua/Torch code over to PyTorch, which used ParallelCriterion (to apply L1 loss separately to 2 different layers of a stacked network, applying final+intermediate supervision).
I see that the legacy nn API provided by PyTorch provides a ParallelCriterion implementation – but what is the “correct” / non-legacy way to do this?
Thanks for any help!
|
st115435
|
You simply do:
loss1 = criterion1(input1[:, 0])
loss2 = criterion2(input1[:, 1])
torch.cat([loss1, loss2], dim=0)
(An example of the two-criterion case is shown; if you have many criteria, just use a for loop.)
|
st115436
|
Thanks for your reply!
I have tried this, but when I call backward() on the resulting combined loss, I receive the following error:
“grad can be implicitly created only for scalar outputs”
|
st115437
|
Well, yes: your final output Variable is not a scalar if you cat two losses.
Maybe your intention is instead:
loss1 = criterion1(input1[:, 0])
loss2 = criterion2(input1[:, 1])
loss = loss1 + loss2
|
st115438
|
For example, the vgg16 net in the torchvision.models is defined as following. How to get the output of any middle layer in the sequential?
VGG (
(features): Sequential (
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU (inplace)
(2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU (inplace)
(4): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
(5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(6): ReLU (inplace)
(7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(8): ReLU (inplace)
(9): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
(10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(11): ReLU (inplace)
(12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(13): ReLU (inplace)
(14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(15): ReLU (inplace)
(16): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
(17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(18): ReLU (inplace)
(19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(20): ReLU (inplace)
(21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(22): ReLU (inplace)
(23): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
(24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(25): ReLU (inplace)
(26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(27): ReLU (inplace)
(28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(29): ReLU (inplace)
(30): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
)
(classifier): Sequential (
(0): Linear (25088 -> 4096)
(1): ReLU (inplace)
(2): Dropout (p = 0.5)
(3): Linear (4096 -> 4096)
(4): ReLU (inplace)
(5): Dropout (p = 0.5)
(6): Linear (4096 -> 1000)
)
)
|
st115439
|
You could just iterate the application yourself:
x = someinput
for l in vgg.features.children():
    x = l(x)
and keep what you want.
You can slice the modules by wrapping them in a list, so:
modulelist = list(vgg.features.children())
for l in modulelist[:5]:
    x = l(x)
keep = x
for l in modulelist[5:]:
    x = l(x)
Best regards
Thomas
|
st115440
|
In torch we can do torch.zeros(3, 4):randn() for in place randomization of values. This doesn’t seem to work in pytorch. Is this (minor but convenient) feature available in pytorch? (There is in place torch.zeros(3, 4).random_() but that’s just random floats, it seems)
|
st115441
|
you can do:
a = torch.zeros(3, 4) # or whatever shape
torch.randn(3, 4, out=a)
But yea we didn’t implement some methods. We just didn’t get around to it.
|
st115442
|
:randn() just sampled values from a standard normal distribution, so torch.Tensor(3, 4).normal_() is equivalent.
|
st115443
|
FYI, this isn’t implemented yet:
>>> torch.cuda.FloatTensor(3,2).random_()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'torch.cuda.FloatTensor' object has no attribute 'random_'
But, @smth’s suggestion works greatly.
|
st115444
|
Can pytorch handle models inside models? For example, let’s say we want to build a seq2seq model. Rather than creating an EncoderRNN class and DecoderRNN class then instantiating each separately inside of some training function (like in the pytorch tutorials), can we wrap them in a Seq2Seq class, which also inherits from nn.Module, instead?
|
st115445
|
From the documentation:
“Modules can also contain other Modules, allowing to nest them in a tree structure. You can assign the submodules as regular attributes.”
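For example, a minimal sketch (the encoder/decoder here are illustrative stand-ins, not the tutorial's exact classes):
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, encoder, decoder):
        super(Seq2Seq, self).__init__()
        self.encoder = encoder  # assigned as attributes, both are registered
        self.decoder = decoder  # as submodules of Seq2Seq automatically

    def forward(self, src, tgt):
        hidden = self.encoder(src)
        return self.decoder(tgt, hidden)
Seq2Seq(...).parameters() then covers both submodules, so a single optimizer suffices.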
|
st115446
|
Dear sir,
I am new to this and had an issue with GPU training in cifar10_tutorial. The CPU training code is fine!
My environment:
pytorch 0.2.0 py27hc03bea1_4cu80 [cuda80] soumith
torchvision 0.1.9 py27hdb88a65_1 soumith
Code:
In [30]: outputs = net.cuda(Variable(images))
I get:
RuntimeError Traceback (most recent call last)
in <module>()
----> 1 outputs = net.cuda(Variable(images))
/home/john/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.pyc in cuda(self, device_id)
    145         copied to that device
    146         """
--> 147         return self._apply(lambda t: t.cuda(device_id))
    148
    149     def cpu(self):
/home/john/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.pyc in _apply(self, fn)
    116     def _apply(self, fn):
    117         for module in self.children():
--> 118             module._apply(fn)
    119
    120         for param in self._parameters.values():
/home/john/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.pyc in _apply(self, fn)
    122         # Variables stored in modules are graph leaves, and we don't
    123         # want to create copy nodes, so we have to unpack the data.
--> 124         param.data = fn(param.data)
    125         if param._grad is not None:
    126             param._grad.data = fn(param._grad.data)
/home/john/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.pyc in <lambda>(t)
    145         copied to that device
    146         """
--> 147         return self._apply(lambda t: t.cuda(device_id))
    148
    149     def cpu(self):
/home/john/anaconda2/lib/python2.7/site-packages/torch/_utils.pyc in _cuda(self, device, async)
     51     if device is None:
     52         device = torch.cuda.current_device()
---> 53     if self.get_device() == device:
     54         return self
     55     else:
/home/john/anaconda2/lib/python2.7/site-packages/torch/autograd/variable.pyc in __bool__(self)
    121             return False
    122         raise RuntimeError("bool value of Variable objects containing non-empty " +
--> 123                            torch.typename(self.data) + " is ambiguous")
    124
    125     __nonzero__ = __bool__
RuntimeError: bool value of Variable objects containing non-empty torch.ByteTensor is ambiguous
Did anybody try the tutorial? Is this a known bug? How can I fix it?
Thanks
John
|
st115447
|
My CIFAR-10 example is fully working on the GPU:
https://github.com/QuantScientist/Deep-Learning-Boot-Camp/blob/master/day%2002%20PyTORCH%20and%20PyCUDA/PyTorch/21-PyTorch-CIFAR-10-Custom-data-loader-from-scratch.ipynb
|
st115448
|
http://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html
Maybe my code has a problem; can someone provide one to test my GPU?
|
st115449
|
Thanks; sorry, how do I get it?
I used wget https://github.com/QuantScientist/Deep-Learning-Boot-Camp/blob/master/day%2002%20PyTORCH%20and%20PyCUDA/PyTorch/21-PyTorch-CIFAR-10-Custom-data-loader-from-scratch.ipynb
Then I tried to open it, and it claims the file is not in JSON format.
|
st115450
|
Where did you copy the line output = net.cuda(Variable(images)) from?
If you want to train your model on the GPU, you should follow this part of the tutorial.
net.cuda()
images = Variable(images.cuda())
output = net(images)
|
st115451
|
Hi Allenye0119, thank you very much for the reply; that part is too brief.
Searching the internet, I cannot find a good document for it. I already did:
net = Net()
net.cuda()
forward + backward + optimize
outputs = net(inputs).cuda()  <== should I touch it or leave it alone?
If you can, could you provide a guide on how to modify http://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html to be GPU-trainable?
Thanks
|
st115452
|
Thanks everybody, I think I'm starting to figure it out. I will follow QuantScientist (Solomon K)'s example. This case can be closed.
|
st115453
|
Dear Quant, I ran your notebook, but it takes forever. How do I know it ran successfully on my GPU?
How do I debug it?
I think my GPU stats are higher than yours; is there any problem?
[garbled terminal output of: watch -n 0.1 nvidia-settings -q GPUUtilization -q UsedDedicatedGPUMemory; recoverable readings: graphics=3, memory=1, video=0, PCIe=1, UsedDedicatedGPUMemory = 264 MB]
|
st115454
|
The CUDA trick on my system takes a long time and seems to run forever!
I have: 01:00.0 VGA compatible controller: NVIDIA Corporation Device 1c20 (rev a1)
and I have tried nvidia drivers 375 and 384, all the same.
I use conda:
john@john-GS63VR-6RF ~ $ conda list | grep -i cuda
accelerate_cudalib 2.0 0
cuda80 1.0 0 soumith
cudatoolkit 7.0 1
pytorch 0.2.0 py27hc03bea1_4cu80 [cuda80] soumith
Why does the CUDA trick take so long on my laptop?
|
st115455
|
My apologies, there was a line there that should have been commented out; this is why it ran forever.
use_cuda = torch.cuda.is_available()
# use_cuda = False
FloatTensor = torch.cuda.FloatTensor if use_cuda else torch.FloatTensor
LongTensor = torch.cuda.LongTensor if use_cuda else torch.LongTensor
Tensor = FloatTensor
# if torch.cuda.is_available():
# print("WARNING: You have a CUDA device, so you should probably run with --cuda")
# ! watch -n 1 nvidia-smi
# ! nvidia-smi -l 1
# nvidia-settings -q GPUUtilization -q useddedicatedgpumemory
# You can also use:
# ! watch -n0.1 "nvidia-settings -q GPUUtilization -q useddedicatedgpumemory"
# ! pip install git+https://github.com/wookayin/gpustat.git@master
# ! watch --color -n1.0 gpustat
# ! gpustat
# ! watch -n 5 nvidia-smi --format=csv --query-gpu=power.draw,utilization.gpu,fan.speed,temperature.gpu
I updated the code,
|
st115456
|
Dear Quant,
I have another issue: where can I get trainLabels.csv?
DATA_ROOT = '/home/john/Downloads/data'
IMG_PATH = DATA_ROOT + '/train/'
IMG_EXT = '.png'
IMG_DATA_LABELS = DATA_ROOT + '/trainLabels.csv'
I put the orginal train data in train:
john@john-GS63VR-6RF ~/Downloads/data/train $ ls -l
total 181876
-rw-r--r-- 1 john john 158 Mar 30 2009 batches.meta
-rw-r--r-- 1 john john 31035704 Mar 30 2009 data_batch_1
-rw-r--r-- 1 john john 31035320 Mar 30 2009 data_batch_2
-rw-r--r-- 1 john john 31035999 Mar 30 2009 data_batch_3
-rw-r--r-- 1 john john 31035696 Mar 30 2009 data_batch_4
-rw-r--r-- 1 john john 31035623 Mar 30 2009 data_batch_5
-rw-r--r-- 1 john john 88 Jun 4 2009 readme.html
-rw-r--r-- 1 john john 31035526 Mar 30 2009 test_batch
But where do I get trainLabels.csv?
It throws an error later.
Thanks
John
|
st115457
|
Dear John,
At the top of the notebook it is stated:
For this to work:
Download data from https://www.kaggle.com/c/cifar-10/data
Remove headers from the CSV BEFORE running this code
in the images training folder copy 1.png to 0.png and add the same label inside training labels.
Did you get the data from Kaggle?
[screenshot: CIFAR-10 - Object Recognition in Images | Kaggle]
|
st115458
|
I want to calculate the determinant of a variable and if fact I want to get the gradient of the det w.r.t. each elements in the matrix.
|
st115459
|
You could compute the determinant as the product of all the matrix's eigenvalues, but I don't think there is backprop support for torch.eig yet.
|
st115460
|
I was unable to find the code for unfold() (http://pytorch.org/docs/master/tensors.html#torch.Tensor.unfold) in its GitHub repo.
|
st115461
|
Unfold is implemented in C here. It only performs size/stride modifications, so it can be performed in Python without issues.
Also, note that unfold supports backpropagation, so there might be no need to implement it yourself.
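For example (unfold(dim, size, step) yields sliding windows of length size taken every step elements along dim):
import torch

t = torch.arange(0, 7)
print(t.unfold(0, 3, 2))
# rows are the windows:
#  0  1  2
#  2  3  4
#  4  5  6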
|
st115462
|
Hi all,
Often in papers, notation is simplified by pooling all parameters into a single variable “theta”, and describe various operations in terms of that theta.
For example say for some reason you’re manually implementing momentum sgd (yes I know that this is usually done internally, but just for the sake of the example…). Momentum sgd is described by the pseudocode:
v = gamma*v + eta*grad(L, theta)
theta = theta - v
In reality, if theta is a bunch of parameters, you go:
dL_dtheta = grad(L, theta)
v = [gamma*v_ + eta*dl_dtheta_ for v_, dl_dtheta_ in zip(v, dL_dtheta)]
theta = [theta_ - v_ for v_, theta_ in zip(v, theta)]
But this is quite messy, as you end up with list expansions everywhere. It would be really nice to have a "CollectionVariable" or something: a variable that represents a whole collection and to which you could apply all kinds of elementary operations (+, -, *, …). Then you could write your code in the clean form above and still handle situations where your parameters are not all in one array.
Does PyTorch have anything like this, and if not, do you anticipate any difficulties in implementing such a thing?
|
st115463
|
Pytorch doesn’t have such a functionality by default.
Lua-Torch used to have such functionality (getParameters() would return a view of all the parameters concatenated into a single huge tensor), but it turned out there were many corner cases that made a proper implementation difficult.
But you can always have an approximation of it using simple elementary functions. For example, to append all the parameters in a single tensor in PyTorch, you could do something like
param_list = list(p.view(-1) for p in model.parameters())
concat_param = torch.cat(param_list)
|
st115464
|
I have a pretrained CNN model which has several BatchNorm2d layers.
I take a batch with 1 image, set model.eval() and run forward pass.
If I run the same image but with model.train() then I get different output!
As I understood it, the only difference in BatchNorm behaviour between eval and train mode is that in eval it doesn't update the moving averages. That would mean that in my experiment I should have gotten the same output in both cases!
Could you please point out my mistake?
|
st115465
|
In the train mode, batch normalization calculates the statistics for the batch you provide, normalizes by calculated statistics and applies affine transformation. In the eval mode, instead of calculating statistics, batch norm uses provided moving averages to normalize. Having different output in eval and train mode is expected.
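A minimal sketch demonstrating the difference (written against a recent PyTorch; older versions need Variable wrappers):
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(3)
x = torch.randn(2, 3, 4, 4)

bn.train()
out_train = bn(x)  # normalized with this batch's own mean/var (running stats updated too)

bn.eval()
out_eval = bn(x)   # normalized with the stored running mean/var

print((out_train - out_eval).abs().max())  # nonzero: the two modes genuinely differ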
|
st115466
|
I defined my nn.Embedding layer as
self.word_embedding = nn.Embedding(vec.shape[0], vec.shape[1], vocab_size, max_norm = 1, norm_type = 1)
When forwarding it by self.word_embedding(input), I came across the following error
TypeError: _renorm() takes exactly 5 arguments (4 given)
Is there anything wrong with max_norm and norm_type?
|
st115467
|
I adapted it from my docker image, in case you are using Linux directly:
https://github.com/QuantScientist/Deep-Learning-Boot-Camp/blob/master/day%2002%20PyTORCH%20and%20PyCUDA/PyTorch/build_torch.sh
[screenshot: torch01.png]
# PyTorch GPU and CPU
# If you dont have CUDA installed, run this first:
# https://github.com/QuantScientist/Deep-Learning-Boot-Camp/blob/master/docker/deps_nvidia_docker.sh
#GPU version
export PATH=/usr/local/cuda-8.0/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
export PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:${PATH}
export CUDA_BIN_PATH=/usr/local/cuda
export CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-8.0
# Build PyTorch from source
#git clone https://github.com/pytorch/pytorch.git
cd pytorch
git submodule update --init
#git checkout 4eb448a051a1421de1dda9bd2ddfb34396eb7287
export TORCH_CUDA_ARCH_LIST="3.5 5.2 6.0 6.1+PTX"
export TORCH_NVCC_FLAGS="-Xfatbin -compress-all"
#pip uninstall torch
#python setup.py clean
#python setup.py build
python setup.py install
# Build torch-vision from source
git clone https://github.com/pytorch/vision.git
cd vision
#git checkout 83263d8571c9cdd46f250a7986a5219ed29d19a1
git submodule update --init
python setup.py install
# CPU version
#pip install git+https://github.com/pytorch/tnt.git@master
|