st115168
|
@fmassa Can’t torch.backends.cudnn.enabled = False guarantee that CUDNN won’t be used, which means that the conv related operations would be deterministic even if the CUDA versions are used?
|
st115169
|
@apaszke what’s the use case of having both torch.cuda.manual_seed and torch.cuda.manual_seed_all ?
|
st115170
|
manual_seed seeds only the current GPU, manual_seed_all seeds all of them. We’re thinking about having torch.manual_seed seed both CPU and all GPUs.
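For example (a minimal sketch using calls that exist today):
import torch

torch.manual_seed(42)           # seeds the CPU RNG
torch.cuda.manual_seed(42)      # seeds only the current GPU
torch.cuda.manual_seed_all(42)  # seeds every visible GPU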
|
st115171
|
If I don’t have any random initialization or anything random in my neural network, is it going to be affected by torch.cuda.manual_seed()?
I am trying to understand the role of seeding in PyTorch. For example, if I have a model trained with a specific seed, can I say it will produce the same output for a specific input, while with no seed it’s not guaranteed to produce the same output?
One thing also bothers me: if I train without setting any seed, why would I get different outputs for the same input, given that there is no randomness associated with my model?
|
st115172
|
I found that multi-threaded pre-fetching of training samples also introduces randomness: in a new run, the samples are put into the queue in a new order, determined by the relative speed of the threads. I had to set the number of pre-fetching threads to 1 to solve the problem.
What’s more, if the pre-fetching thread (only one in this case) is not the main thread (i.e. it runs in parallel to the main thread) and both threads use random numbers, make sure the two threads use different random number generators, each with its own seed. Otherwise, their relative order of accessing the random number generator may differ between runs. To have separate generators, for example, the main thread may set the seed with numpy.random.seed(seed) and use numpy.random.uniform() to generate random values, while the pre-fetching thread creates its own generator with prng = numpy.random.RandomState(seed) and generates values with prng.uniform().
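A minimal sketch of the two-generator setup described above (the seed values are placeholders):
import numpy

seed = 1234

# main thread: uses the global generator
numpy.random.seed(seed)
value_main = numpy.random.uniform()

# pre-fetching thread: uses a private generator with its own seed
prng = numpy.random.RandomState(seed + 1)
value_prefetch = prng.uniform()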
BTW, I implemented the multi-threading in my own way using the threading package, not using the official one.
|
st115173
|
I have some code that loads in VGG and some pre loaded image:
vgg = models.vgg19(pretrained=True)
inp = Variable(img_arr)
vgg_output = vgg(inp) ; vgg_output
Here are 3 outputs:
[screenshot: three runs of vgg(inp) producing three different output tensors]
I was just wondering if this is normal behavior?
|
st115174
|
VGG uses dropout, which is in effect when nn.Module.training is True, which is true until you call nn.Module.eval(), like so:
In [1]: import torch
In [2]: from torch.autograd import Variable
In [3]: from torchvision import models
In [4]: vgg = models.vgg19(pretrained=True)
In [5]: img_arr = torch.randn(1, 3, 255, 255)
In [6]: inp = Variable(img_arr)
In [7]: vgg_output = vgg(inp); vgg_output
Out[7]:
Variable containing:
0.3886 1.7391 0.6889 ... -0.7818 -0.6299 1.4617
[torch.FloatTensor of size 1x1000]
In [8]: vgg_output = vgg(inp); vgg_output
Out[8]:
Variable containing:
0.2152 0.8930 -0.2694 ... -1.3448 -1.0411 2.2298
[torch.FloatTensor of size 1x1000]
In [9]: vgg = vgg.eval()
In [10]: vgg_output = vgg(inp); vgg_output
Out[10]:
Variable containing:
-0.1476 1.2013 0.4434 ... -1.2543 -0.6353 1.5407
[torch.FloatTensor of size 1x1000]
In [11]: vgg_output = vgg(inp); vgg_output
Out[11]:
Variable containing:
-0.1476 1.2013 0.4434 ... -1.2543 -0.6353 1.5407
[torch.FloatTensor of size 1x1000]
|
st115175
|
PyTorch installation and troubleshooting guide:
“PyTorch Linux + GPU installation and troubleshooting guide” (Medium, 16 Sep 17)
Happy programming,
|
st115176
|
Hi guys,
https://gist.github.com/santisy/40842eaf393356013fe60d30267007c0 — as in the code there: when I double the second dimension of the tensor, the time it costs also doubles. I don’t know the reason. Is this behavior expected? Is there a way to make the time even for different tensor sizes? I use the binary from the official website.
|
st115177
|
I’d like to build a C extension to binarize a FloatTensor, which may need to store the binarized values in a ByteTensor.
I followed this tutorial, but it didn’t talk much about the C interface of torch.
I tried to read the source code, but it seems pretty hard for me to figure this out.
So could anyone show me an example of creating a THByteTensor, or a better way to implement the binarization?
Thank you very much !
|
st115178
|
You could look at https://github.com/pytorch/extension-ffi/tree/master/package
Particularly the .c files in there.
|
st115179
|
Thank you!
I’ve solved my problem.
https://github.com/pytorch/extension-ffi/tree/master/package helped a lot. It is in fact an example of how to create a C extension.
What I really need are docs about Torch’s C API, but I can’t find any.
Now I’m trying to read the TH source code.
And thank you again! PyTorch is great!
|
st115180
|
probably about 1 microsecond (basically the cost of a python call). There is no memcopy or anything, so it’s quite efficient.
|
st115181
|
PyTorch tensors and NumPy arrays share the same memory locations.
I’ve compared converting to NumPy arrays from PyTorch and TensorFlow here: http://stsievert.com/blog/2017/09/07/pytorch/
On my local machine, PyTorch takes 0.5 microseconds to convert between the two.
|
st115182
|
@smth @stsievert what if I convert from a CUDA tensor? Is that also just two Python function calls, or does GPU tensor to CPU tensor take much longer? I have been using quite a number of conversions in my code and am wondering if it is slowing me down.
|
st115183
|
The code behind these timings can be found at https://github.com/stsievert/pytorch-timing-comparisons in Jupyter notebooks. They time converting a CPU tensor to a NumPy array, for both TensorFlow and PyTorch.
I would expect that converting from a PyTorch GPU tensor to an ndarray is O(n), since it has to transfer all n floats from GPU memory to CPU memory. I’m not sure of the O constant, but I would expect it to be fairly small.
|
st115184
|
Of course the big-O constant is small – memory copies are fast.
But the big-O constant is still significant. Memory is a bottleneck – CPUs spend most of their time waiting for registers to be filled, not waiting for computation to finish.
Try to minimize the number of CPU <=> GPU transfers. I believe you want to use async=True in .cuda() or .cpu() when you can.
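For illustration, a sketch of an asynchronous host-to-device copy; note that the keyword was async=True in the PyTorch of this era and was later renamed non_blocking=True, which is what the sketch uses:
import torch

cpu_tensor = torch.randn(1024, 1024).pin_memory()  # pinned memory enables async copies
gpu_tensor = cpu_tensor.cuda(non_blocking=True)    # may return before the copy finishes
result = gpu_tensor.cpu()                          # device-to-host copy synchronizes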
|
st115185
|
When I try to compile PyTorch from source, I encountered this issue (CUDA 8.0, cuDNN v5 and python3.5). Thank you!
/cm/shared/package/gcc/4.9.3/bin/ld: cannot find -lcuda
/cm/shared/package/gcc/4.9.3/bin/ld: skipping incompatible /lib/…/lib/libpthread.so when searching for -lpthread
/cm/shared/package/gcc/4.9.3/bin/ld: skipping incompatible /usr/lib/…/lib/libpthread.so when searching for -lpthread
collect2: error: ld returned 1 exit status
error: command ‘gcc’ failed with exit status 1
|
st115186
|
What platform is this? Linux? Which Linux?
Do you have the NVIDIA driver installed?
Post a full log of the command python setup.py install, not just the error message.
It’s weird that there is no pthread on your system; I am suspecting Solaris?
|
st115187
|
Thank you so much!!! It was right on point…
In our cluster, we have a central host that sends out slurm job requests to other nodes. But this host itself does not have GPU cards, hence no NVIDIA driver. That’s why I couldn’t install the master branch from this central host. But by requesting a node with a GPU card, it could be installed via that machine.
Many thanks again!!
|
st115188
|
M = torch.randn(4,2)
idx_mask = torch.Tensor([1,1,0,0]).long().view(-1,1)
I want to use idx_mask to get rows 0 and 1 from M.
This operation is done easily in MATLAB by M(idx_mask, :).
What is the best way to do it in PyTorch?
I know I can expand idx_mask to have the same dimensions as M and do the slicing, but I’m curious whether there is a better way.
|
st115189
|
you can use the nonzero function to get indices from a mask:
M = torch.randn(4,2)
idx_mask = torch.Tensor([1,1,0,0]).long()
M[idx_mask.nonzero().squeeze(1), :]  # nonzero() returns an (nnz, 1) tensor, so squeeze the extra dim
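An equivalent spelling with index_select, in case you prefer an explicit call (a sketch, not part of the original answer):
indices = idx_mask.nonzero().squeeze(1)   # LongTensor([0, 1])
rows = torch.index_select(M, 0, indices)  # rows 0 and 1 of M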
|
st115190
|
Curious about best practices.
When should I use cuda for matrix operations and when should I not use it?
Are cuda operations only suggested for large tensor multiplications?
What is a reasonable size after which it is advantageous to convert to cuda tensors?
Are there situations when one should not use cuda?
What’s the best way to convert between cuda and standard tensors?
Does sparsity affect the performance of cuda tensors?
|
st115191
|
if your project only has small computations, use CPU, otherwise use CUDA
Unless you are doing advanced code, stick to either using CPU for the entire project, or using CUDA for the entire project
depends on operation, have to benchmark and see
if the model is very, very small, then there is no point using CUDA
.cuda() and .cpu().
yes, sparse operations are usually faster on the CPU.
|
st115192
|
You can run this and provide the output:
https://github.com/QuantScientist/Deep-Learning-Boot-Camp/blob/master/docker/deps_nvidia_docker.sh
#!/usr/bin/env bash
apt-get install nvidia-modprobe
# curl -O -s https://raw.githubusercontent.com/minimaxir/keras-cntk-docker/master/deps_nvidia_docker.sh
if lspci | grep -i 'nvidia'
then
echo "\nNVIDIA GPU is likely present."
else
echo "\nNo NVIDIA GPU was detected. Exiting ...\n"
exit 1
fi
echo "\nChecking for NVIDIA drivers ..."
# Check for CUDA and try to install.
if ! dpkg-query -W cuda; then
# The 16.04 installer works with 16.10.
curl -O http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
dpkg -i ./cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
apt-get update
(file truncated)
|
st115193
|
I’m new to PyTorch; I’ve switched my project from TensorFlow because I need to work with dynamic computation graphs and modularity, but I’ve run into some difficulties.
I’m trying to implement a mixture of RNNs, something like this: http://cvgl.stanford.edu/papers/jain_cvpr16.pdf
I need to define different LSTM modules to use in different parts of the model, but they must share the same parameters. How can I do that?
|
st115194
|
you define one LSTM module and reuse it at multiple places (this is completely fine).
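A minimal sketch of what that looks like (module and size names are made up for illustration):
import torch.nn as nn

class Mixture(nn.Module):
    def __init__(self, input_size, hidden_size):
        super(Mixture, self).__init__()
        # one nn.LSTM instance -> one set of parameters
        self.shared_lstm = nn.LSTM(input_size, hidden_size)

    def forward(self, seq_a, seq_b):
        # calling the same module twice reuses the same weights
        out_a, _ = self.shared_lstm(seq_a)
        out_b, _ = self.shared_lstm(seq_b)
        return out_a, out_b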
|
st115195
|
Sorry, it was obvious. I had not yet understood how torch modules work; this solves it very easily.
|
st115196
|
I’ve heard that running on GPU can give nondeterministic results, but is this expected to happen on CPU?
At the beginning of the code, I’ve called torch.manual_seed(1234). I’ve also seeded the numpy and python random number generators just to be safe. I have a snippet in my code (inside a loop):
loss = self.loss_function(out, y)
dl_dold_state, dl_dtheta_direct = grad(loss, (state_vec, self.theta), retain_graph=True)
And I have verified that even when all input variables (loss, out, y, state_vec, self.theta) match their values from the last run of the code (at the same loop iteration), dl_dtheta_direct can output a slightly different value (the error is on the order of 1e-9).
I’m running on a laptop without a GPU, so this code is definitely running on the CPU. It may not seem like a big deal, but I’m getting unexpected behaviour in my code, and if there is some possibility of a bug in PyTorch’s grad operation, it opens up the possibility that the bug is not in my code but in PyTorch.
Anyone had a similar problem or know how to resolve it?
|
st115197
|
If you use the CPU, OpenMP multi-threading introduces non-determinism for some functions.
You can disable this and run with a single thread; that will run fully deterministically (on CPU):
OMP_NUM_THREADS=1 MKL_NUM_THREADS=1 python mycode.py
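Presumably the same can be done from inside the script with torch.set_num_threads, though whether it covers the MKL threads on your build is worth verifying:
import torch
torch.set_num_threads(1)  # restrict intra-op parallelism to a single thread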
|
st115198
|
Thanks for the reply.
But this still doesn’t seem to fix it for me. Are there any other possible sources of randomness in torch that I could try before I try to come up with a minimal example to reproduce the error?
|
st115199
|
except for manual_seed and threads, i cant think of much else. If you have any representative script that can reproduce the non-determinism, i’ll take a look.
|
st115200
|
Hi!
So I have no idea what’s going on.
Here is my code.
PS.
The output is definitely a function of the input (the model is a pretty good [93%] gender classifier).
def compute_saliency_maps(X, y, model):
    # Make sure the model is in "test" mode
    model.eval()
    # Wrap the input tensors in Variables
    X_var = Variable(X, requires_grad=True).cuda()
    y_var = Variable(y).cuda()
    scores = model(X_var)
    # Get the correct class computed scores.
    scores = scores.gather(1, y_var.view(-1, 1)).squeeze()
    # Backward pass, need to supply initial gradients of same tensor shape as scores.
    scores.backward(torch.FloatTensor([1.0, 1.0, 1.0, 1.0]).cuda())
    # Get gradient for image.
    saliency = X_var.grad.data  # Here X_var.grad is still None! What the hell?!
|
st115201
|
Please see this post for an answer: “Why can’t I see .grad of an intermediate variable?”
In your case, note that since you call .cuda(), the variable that you store in X_var is not the leaf Variable.
Either do
X_var = Variable(X.cuda(), requires_grad=True)
or
X_var = Variable(X, requires_grad=True)
X_var_cuda = X_var.cuda()
scores = model(X_var_cuda)
|
st115202
|
I want to fix all inputs to be the same when I run the code examples/mnist/main.py the next time.
I fixed the np.random seed, the torch CPU and CUDA random seeds, and used shuffle=False for the DataLoader, but the test result is different the next time I run the code with CUDA, while it is stable on the CPU. Is there something else to be set?
|
st115203
|
I am currently using minibatching for the first time with PyTorch. It was quite a mess to implement with padding (it’s an NLP system that uses sentences as word-wise input to RNNs), and now that it’s done, I wonder if the fact that on my CPU the learning gets slower with increasing batch size is due to my poor implementation (and would therefore carry over to the GPU) or due to the interaction between minibatches (and their matrix multiplications) and CPUs.
I can currently NOT test what happens if I would run the network on GPU.
|
st115204
|
Actually, it could be the implementation. If you’re looping over the batch, the CPU is not working in parallel. The batch should always be a dimension in the tensors that are multiplied.
With a GPU the implementation can be different, because you can make multiple calls and get parallel work. Larger batches are faster on GPU depending on the specs.
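To illustrate the point, a sketch contrasting a Python loop with a single batched call (torch.bmm):
import torch

batch = torch.randn(64, 32, 32)
weight = torch.randn(64, 32, 32)

# slow: Python loop, one small matmul per sample
outs = [batch[i].mm(weight[i]) for i in range(batch.size(0))]

# fast: one batched matmul over the whole batch dimension
out = torch.bmm(batch, weight)  # shape (64, 32, 32)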
|
st115205
|
Is there a way to convert dense tensors to sparse tensors without having to extract every value and coordinate and recreate the dense tensor as a sparse one?
|
st115206
|
When I install PyTorch on Mac OS X 10.11 using pip install http://download.pytorch.org/whl/torch-0.2.0.post3-cp36-cp36m-macosx_10_7_x86_64.whl, I get: torch-0.2.0.post3-cp36-cp36m-macosx_10_7_x86_64.whl is not a supported wheel on this platform.
Does it mean that PyTorch cannot be installed on OS X 10.11?
|
st115207
|
Use this:
https://github.com/QuantScientist/Deep-Learning-Boot-Camp/blob/master/day02-PyTORCH-and-PyCUDA/PyTorch/build_torch.sh
# PyTorch GPU and CPU
# If you dont have CUDA installed, run this first:
# https://github.com/QuantScientist/Deep-Learning-Boot-Camp/blob/master/docker/deps_nvidia_docker.sh
#GPU version
export PATH=/usr/local/cuda-8.0/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
export PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:${PATH}
export CUDA_BIN_PATH=/usr/local/cuda
export CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-8.0
# Build PyTorch from source
git clone https://github.com/pytorch/pytorch.git
cd pytorch
git submodule update --init
#git checkout 4eb448a051a1421de1dda9bd2ddfb34396eb7287
export TORCH_CUDA_ARCH_LIST="3.5 5.2 6.0 6.1+PTX"
(file truncated)
It will work
|
st115208
|
Thanks. This code ran into an error: PyPI not installed. Then I removed everything installed and ran the standard command provided on pytorch.org again. It works.
|
st115209
|
I am facing a scenario of a dynamics model in an RL setting. A simple MLP represents the dynamics model, and we need to predict rollout distributions. Concretely:
Step 0: Randomly draw 10 samples (initial states)
Iteration t = 1 to T:
(1) Feed the samples into the MLP; we obtain 10 outputs
(2) Compute the mean and standard deviation (i.e. fit a Gaussian), then sample 10 new results (discarding the old ones) from torch.normal(mean, std)
The dynamics model in each iteration is sharing parameters.
Given some differentiable cost function cost(x), the objective (loss function) is the sum of the average costs of the samples in each iteration, i.e.
J = mean(cost(x_0)) + ... + mean(cost(x_T))
where x_t is a set of 10 samples at the iteration t.
I am a bit confused about how to deal with backpropagation via .reinforce(), since this is a different scenario from policy gradient, where the analytic gradient is the log-probability of the action multiplied by the reward, so that the action (of type Variable) calls .reinforce(r).
|
st115210
|
I’m currently trying to implement the model in this paper https://arxiv.org/abs/1607.06854, which has 350 units that during one forward step can be evaluated completely independently of one another. I’d like to distribute the calculation of each unit across multiple GPUs (I have two) and, assuming this would be better performance-wise, across multiple streams.
I found this, which gives me a good idea of how to split the model across GPUs, but I can’t find any good examples of how to use streams in PyTorch, and I don’t understand the documentation. Is using the class torch.cuda.Stream to get additional concurrency feasible, or am I completely wrong about this? I have already implemented the unit as an nn.Module, let’s call it PVM_unit; I can add multiple instances of it to my model with setattr, but how can I run them as fast as possible independently?
Thanks, ahead of time.
|
st115211
|
Hi, I encountered a problem: when I use the Adadelta optimization method, I get exactly the same result every time on the CPU but different results on the GPU. Does anyone know if this is because the Adadelta algorithm is non-deterministic on GPU?
|
st115212
|
Just like we can access a network’s modules using net._modules and then use that to modify, change, or remove the modules, can we do the same with the function calls defined in the forward function?
For example, if we have a line out = out > 0 in the forward call, can we access this in any way (maybe with the new JIT mechanisms) and change it to out = out > 0.1?
|
st115213
|
If you want to access an operation inside a method of an object, as far as I know, that’s impossible anywhere in Python. You can only access the attributes and methods of the object. But you can still re-define the whole forward method by inheritance.
|
st115214
|
How can I get the indices of a multidimensional array efficiently? For example, I want to know the multidimensional indices of the values greater than 5.
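One way that should work (a sketch): a comparison produces a mask, and nonzero() returns one row of indices per matching element:
import torch

t = torch.randn(3, 4, 5) * 10
idx = (t > 5).nonzero()  # LongTensor of shape (num_matches, 3): one (i, j, k) row per hit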
|
st115215
|
Hi,
I experience performance differences when alternating the order of BatchNorm1d and AvgPool1d
What is the correct mathematically efficient order?
Why should there be a difference?
Regards,
|
st115216
|
In the batchNorm you compute a variance, so it’s O(N²) in the total number of elements, while the avgPool is just O(N). So I think you would improve performance by doing the avgPool first (which reduces the number of elements), and then the batchNorm.
But that’s just a feeling, not a precise explanation.
|
st115217
|
if this is our network:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)
        self.fc4 = nn.Linear(320, 40)
        self.fc5 = nn.Linear(40, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)
        x = F.relu(self.fc4(x))
        x = F.dropout(x, training=self.training)
        x = self.fc5(x)
        return F.log_softmax(x)

model = Net()
if args.cuda:
    model.cuda()
when you use print(model)
it just prints the network components that you defined in __init__, like this:
Net (
(conv1): Conv2d(1, 10, kernel_size=(5, 5), stride=(1, 1))
(conv2): Conv2d(10, 20, kernel_size=(5, 5), stride=(1, 1))
(conv2_drop): Dropout2d (p=0.5)
(fc1): Linear (320 -> 50)
(fc2): Linear (50 -> 10)
(fc4): Linear (320 -> 40)
(fc5): Linear (40 -> 10)
)
they are not the real data flow in the network. Please notice that I do not use fc1 and fc2; I use fc4 and fc5.
How can I know the real data-flow graph of my network? I need to know what my network looks like. There was an embarrassing bug where my network was not connected in the way I thought; it ended halfway through the backprop process.
|
st115218
|
Then I don’t think there’s an existing tool/script that can visualize the data flow.
|
st115219
|
This may also be helpful: “Model summary in pytorch” on stackoverflow.com, answered by SpiderWasp42 on 6 Mar 17.
A result (in my case):
Sequential (
(0): Linear (29 -> 1024), weights=((1024L, 29L), (1024L,)), parameters=30720
(1): Dropout (p = 0.05), weights=(), parameters=0
(2): Tanh (), weights=(), parameters=0
(3): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=True), weights=((1024L,), (1024L,)), parameters=2048
(4): Linear (1024 -> 128), weights=((128L, 1024L), (128L,)), parameters=131200
(5): Dropout (p = 0.05), weights=(), parameters=0
(6): Tanh (), weights=(), parameters=0
(7): Linear (128 -> 64), weights=((64L, 128L), (64L,)), parameters=8256
(8): Dropout (p = 0.05), weights=(), parameters=0
(9): LeakyReLU (0.01), weights=(), parameters=0
(10): Linear (64 -> 32), weights=((32L, 64L), (32L,)), parameters=2080
(11): Dropout (p = 0.05), weights=(), parameters=0
(12): Tanh (), weights=(), parameters=0
(13): Linear (32 -> 16), weights=((16L, 32L), (16L,)), parameters=528
(14): Dropout (p = 0.05), weights=(), parameters=0
(15): LeakyReLU (0.01), weights=(), parameters=0
(16): Linear (16 -> 1), weights=((1L, 16L), (1L,)), parameters=17
(17): Sigmoid (), weights=(), parameters=0
)
|
st115220
|
There is a great project! Use dmlc’s standalone TensorBoard with PyTorch:
https://github.com/lanpa/tensorboard-pytorch (tensorboard for pytorch)
|
st115221
|
I’m aware of this project, but I didn’t know that it can also visualize computing graph. Thanks for the info!
|
st115222
|
Just curious whether the softmax in a Fully Convolutional Network is equal to Softmax2d? In the source code it seems to be true, but the current version of the PyTorch documentation does not mention anything about 2D softmax.
http://pytorch.org/docs/master/_modules/torch/nn/modules/activation.html#Softmax
|
st115223
|
Hi,
All the other settings are the same; why do these two methods get different performance?
class net1(nn.Module):
    def __init__(self):
        super(net1, self).__init__()
        self.conv1 = nn.Conv2d(1, 16, 5, 1, 2)
        self.conv2 = nn.Conv2d(16, 32, 5, 1, 2)
        self.pool1 = nn.MaxPool2d(2)
        self.pool2 = nn.MaxPool2d(2)
        self.fc1 = nn.Linear(32 * 7 * 7, 10)

    def forward(self, x):
        out = F.relu(self.conv1(x))
        out = self.pool1(out)
        out = F.relu(self.conv2(out))
        out = self.pool2(out)
        out = out.view(-1, 32 * 7 * 7)
        out = F.relu(self.fc1(out))  # note: net1 applies a ReLU to the output of the final linear layer
        return out

class net2(nn.Module):
    def __init__(self):
        super(net2, self).__init__()
        self.conv1 = nn.Sequential(
            nn.Conv2d(
                in_channels=1,
                out_channels=16,
                kernel_size=5,
                stride=1,
                padding=2,
            ),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),
        )
        self.conv2 = nn.Sequential(
            nn.Conv2d(16, 32, 5, 1, 2),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.out = nn.Linear(32 * 7 * 7, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = x.view(x.size(0), -1)
        output = self.out(x)  # note: unlike net1, no activation follows the final linear layer here
        return output
|
st115224
|
Hi, I am not talking about speed; these two methods differ a lot in accuracy, and net2 gets higher accuracy.
|
st115225
|
Hi, All.
This is my simple GPU code in PyTorch.
How can I change this in CPU code?
filename = r'/Users/dti/Desktop/PyTorch4testAccuracy/hymenoptera_data/test/bees/1.jpg'
img = skimage.io.imread(filename)
x = V(centre_crop(img).unsqueeze(0), volatile=True).cuda()
model = models.__dict__['resnet18']
model = torch.nn.DataParallel(model).cuda()
model = torch.load('model5.pth')
logit = model(x)
print(logit)
Thanks in advance.
|
st115226
|
An example:
use_cuda = torch.cuda.is_available()
if use_cuda:
    X_tensor = Variable(torch.from_numpy(x_data_np).cuda())  # note the conversion for PyTorch
else:
    X_tensor = Variable(torch.from_numpy(x_data_np))  # note the conversion for PyTorch
|
st115227
|
remove the .cuda() and the torch.nn.DataParallel
filename = r'/Users/dti/Desktop/PyTorch4testAccuracy/hymenoptera_data/test/bees/1.jpg'
img = skimage.io.imread(filename)
x = V(centre_crop(img).unsqueeze(0), volatile=True)
model = models.__dict__['resnet18']
model = torch.load('model5.pth')
logit = model(x)
print(logit)
|
st115228
|
Two questions:
What is the meaning exactly of param.requires_grad = False?
I’m trying to do a different type of fine-tuning in which the first layer is new and the rest is already trained. I want to update only the weights of the first layer; to do that, I want the whole network to backpropagate and pass the loss down to the first layer, but not do any updates itself. How can I do that?
|
st115229
|
param.requires_grad = False will cause the parameter not to get a gradient stored in the loss.backward() call.
When you set param.requires_grad = False on some parameters, it will not affect the gradient calculation for the others. Note that while the errors need to backpropagate through the layers for your setup, the parameters of the layers are leaf nodes (and the backpropagation continues through to the input data).
Best regards
Thomas
|
st115230
|
When you create an optimizer, you pass it the parameters you want it to optimize, so just give it the ones in the first layer.
Note also that when you call backward, PyTorch will need to compute grads of variables other than the ones you specified when you created the optimizer, but they won’t be leaf variables. When you call zero_grad, it zeroes the grad of the parameters you specified at creation. It doesn’t need to zero the grad of the other variables it computed grads for, because grads only accumulate in leaf variables.
|
st115231
|
That’s exactly what I did. So you’re saying that even though I gave the optimizer only the params of the first layer and set param.requires_grad = False, the loss backpropagates all the way to the first layer?
|
st115232
|
Yes. The parameters you set requires_grad to False on are leaf variables anyway; you don’t have to backpropagate through them to get to the variables you want grads for. It is still a good idea to set requires_grad to False though, so that they don’t needlessly accumulate grads.
|
st115233
|
(I’m assuming you set requires_grad to False on all the parameters except those in the first layer that you want grads for. I guess you don’t explicitly say that that’s what you did.)
|
st115234
|
Yes, I did that, but how come the gradients are being calculated for layers I didn’t give to the optimizer?
|
st115235
|
Hi,
The optimizer accesses the gradients through the .grad attribute of the specific parameters that you assigned to the optimizer object. It does not compute the gradients but uses them when .step() is called.
The computation of these gradients is independent of the optimizer. The gradients are computed (and set as .grad) when you do the .backward() call.
|
st115236
|
Just to add to what Anuvabh writes, setting requires_grad to False won’t set the grad to zero or None. If a variable already has a grad from a previous backward pass, setting requires_grad to False on it will just stop future backward passes from accumulating their gradients into it.
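Putting the thread together, a sketch of the setup being discussed (model, first_layer, criterion, x, and y are placeholder names):
import torch.optim as optim

# freeze everything so the frozen parameters don't needlessly accumulate grads
for param in model.parameters():
    param.requires_grad = False

# un-freeze only the new first layer
for param in model.first_layer.parameters():
    param.requires_grad = True

# give the optimizer just the first layer's parameters
optimizer = optim.SGD(model.first_layer.parameters(), lr=0.01)

loss = criterion(model(x), y)
loss.backward()   # still backpropagates through all layers to reach the first one
optimizer.step()  # but only the first layer's weights are updated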
|
st115237
|
Thank you, now I get it. One more question: do you think it’s possible to fine-tune when the layer I add is connected to the input instead of the output? I tried to train the network, and obviously when I used the same input it learned very fast, but when I changed the input (added some noise to the image) it didn’t learn at all.
|
st115238
|
Great, glad everything is clear now. As for what you are trying, I don’t know if it can work. Is it based on some paper? Maybe pretrain a denoiser or gradually increase the noise from zero, and maybe one new layer at the beginning will not be enough capacity.
|
st115239
|
I was wondering how I could enforce synchronization for all cuda operations when profiling on GPU (in order to find the operations/function calls that are slow). Thanks!
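Assuming you mean wall-clock timing, torch.cuda.synchronize() blocks until all queued kernels have finished, so bracketing the timed region with it gives honest numbers (a sketch; model and inp are placeholders):
import time
import torch

torch.cuda.synchronize()  # make sure previously queued work is done
start = time.time()
out = model(inp)          # the operation being profiled
torch.cuda.synchronize()  # wait for the kernels launched above
print(time.time() - start)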
|
st115240
|
I need to do a full convolution using designated weights with a special size. I should produce outputs that have the same size as the input, but I can’t use padding because the values of the weights are sensitive to their location. Is there a way in PyTorch to do this elegantly?
Any advice would be appreciated.
|
st115241
|
Example: https://github.com/pytorch/examples/blob/0984955bb8525452d1c0e4d14499756eae76755b/imagenet/main.py#L241
What is the .pth extension?
Just curious, thanks.
|
st115242
|
Don’t think it means anything in particular, it just seems to be a convention for files saved using torch.save(). Compare this to .pck, which is commonly used with Python’s built-in pickle.dump().
|
st115243
|
Personally I tend to use .dat files to save and load models, but as mentioned above, .pth seems to be the most common preference for saving models.
|
st115244
|
Hmm, I wonder where this convention came from? Being new to pytorch, it was a bit confusing because .pth files are already used in the context of python as files in site packages that point to other directories on the system.
|
st115245
|
I have a model with a constant that is used in the forward pass in some autograd.Variable math.
I want to use DataParallel, so I’ve registered the constant Variable as a buffer (self.register_buffer(buffer_name, constant_variable)) so that it will be replicated with the model.
I have a second model (which inherits from the first) that wants to override/modify the constant Variable buffer_name.
When I try self.buffer_name = new_variable I get an error saying that I have to set a buffer using a Tensor.
TypeError: cannot assign ‘torch.autograd.variable.Variable’ as buffer ‘flow_mean’ (torch.Tensor or None expected)
I was able to register the buffer as a Variable initially, but am not able to __setattr__ with a Variable? This doesn’t seem right to me.
I can try to del self.buffer_name and reset, but that seems like a hack.
Is this behavior intentional, or should nn.Module support setting its buffers with Variables?
|
st115246
|
buffers have to be Tensors not Variables. It seems like a bug to allow you to set buffers as Variables the first time.
|
st115247
|
This is a seemingly stupid request for all post-post-doc AI researchers on here, but hear me out.
First thing that DL Newbies look for is an image inferencing example. Inferencing is computationally cheap and has a high WOW factor, which makes it the perfect “Hello World” for DL.
We have the reference PyTorch ImageNet training code on GitHub, but we don’t have a reference ImageNet inferencing example. If someone has elegant, memory-efficient CPU + GPU inferencing code sitting around, please submit a PR and add it to the examples repo!
https://github.com/pytorch/examples/blob/master/imagenet/main.py
import argparse
import os
import shutil
import time
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.distributed as dist
import torch.optim
import torch.utils.data
import torch.utils.data.distributed
import torchvision.transforms as transforms
import torchvision.datasets as datasets
import torchvision.models as models
model_names = sorted(name for name in models.__dict__
if name.islower() and not name.startswith("__")
and callable(models.__dict__[name]))
(file truncated)
Thanks!
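For reference, a minimal sketch of what such a script might look like, written against the later torch.no_grad() API (in the PyTorch of this thread you would wrap the input in Variable(..., volatile=True) instead); the image path is a placeholder:
import torch
from PIL import Image
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(pretrained=True)
model.eval()  # disable dropout, use batchnorm running stats

img = Image.open('cat.jpg')           # placeholder path
batch = preprocess(img).unsqueeze(0)  # (1, 3, 224, 224)

if torch.cuda.is_available():
    model, batch = model.cuda(), batch.cuda()

with torch.no_grad():
    logits = model(batch)
print(logits.topk(5))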
|
st115248
|
I have a pretty lean inference script included with a DPN conversion I recently posted at https://github.com/rwightman/pytorch-dpn-pretrained
It utilizes a Dataset that’s fine with images in arbitrary folder hierarchies, from flat to many levels. The script dumps top-5 prediction (by class id) to a csv file.
|
st115249
|
By inferencing you mean making a prediction for a new sample? Sure, I’ll add it to the tutorial I wrote for fine-tuning. The tutorial currently only does training. This is a good idea. Here’s the repo for now:
https://github.com/Spandan-Madan/Pytorch_fine_tuning_Tutorial (a short tutorial on performing fine-tuning or transfer learning in PyTorch)
|
st115250
|
Your inferencing example looks epic, Mr Wightman! You should submit it to the PyTorch examples repo below (perhaps inside the imagenet folder, since your code and arguments match the training script in that folder).
By the way, I missed the DPN paper this summer – thanks for the implementation! I have starred it and will check it out more.
https://github.com/pytorch/examples (a set of examples around PyTorch in Vision, Text, Reinforcement Learning, etc.)
|
st115251
|
Hello. I need to implement a functional that includes four for-loops. It is too slow to brute-force loop over the tensor. Therefore, I wonder if there is a way to write parallel CUDA code to manipulate a PyTorch tensor that is already on the GPU.
Is there a way, like in C or C++, to define block and thread dims and parallelize the for-loops across multiple CUDA cores?
Thanks!
|
st115252
|
You can use your own CUDA kernels inline if you want, using CuPy. See the code in the pyinn package for an example.
|
st115253
|
Hi, I get non-deterministic results when I run my code several times on a GPU.
I printed some losses: there are minor differences at the beginning, and they become totally different by the end.
I am wondering if there is an implementation error in my code. I use torchtext for processing the data. I set the random seed for torch and torch.cuda, and I run on one GPU. Random seeds are also set for the numpy and random modules.
And I have set model.train() when training and model.eval() when testing.
I notice that some people said it is because of the randomness of the GPU, but there is a 1% to 2% difference in my results, which is really a big gap.
The following is my model code, which is simply a conv net for text classification.
https://github.com/Impavidity/kim_cnn/blob/master/model.py
import torch
import torch.nn as nn
import torch.nn.functional as F

class KimCNN(nn.Module):
    def __init__(self, config):
        super(KimCNN, self).__init__()
        output_channel = config.output_channel
        target_class = config.target_class
        words_num = config.words_num
        words_dim = config.words_dim
        embed_num = config.embed_num
        embed_dim = config.embed_dim
        self.mode = config.mode
        Ks = 3  # there are three conv nets here
        if config.mode == 'multichannel':
            input_channel = 2
        else:
            input_channel = 1
(file truncated)
and this is the train.py:
https://github.com/Impavidity/kim_cnn/blob/master/train.py
Can someone help out of here? Thanks.
|
st115254
|
cuDNN convolution code is non-deterministic, and the deterministic version is slower.
See these bug reports on Torch and TensorFlow. I saw the same question on the Nvidia forums too.
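If you can afford the slower path, a deterministic configuration looks roughly like this (the cudnn.deterministic flag is real; full run-to-run determinism may still depend on other ops):
import torch

torch.manual_seed(1234)
torch.cuda.manual_seed_all(1234)
torch.backends.cudnn.deterministic = True  # select deterministic cuDNN algorithms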
|
st115255
|
Hello. I have a problem with running out of GPU memory. In my nn module, I’ve defined a custom functional; however, it is too memory-demanding. I wonder if I can switch to CPU mode when the program is about to run my custom functional. I think I can invoke the .cpu() function and then pass the feature map to the custom functional, but what will happen when the gradient is backpropagated? Can PyTorch automatically transfer the gradient from GPU to CPU inside the custom functional, and transfer it back from CPU to GPU after leaving it? Thanks!
|
st115256
|
Hi,
Yes, if you call .cpu() on the Variable before giving it to your function, it will be converted back to whatever type it had before the call when doing the backward.
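So a round trip like this should be handled by autograd (a sketch; custom_functional and x_gpu are placeholder names):
x = x_gpu.cpu()            # move to CPU; the transfer is recorded in the graph
y = custom_functional(x)   # the memory-hungry part runs on the CPU
out = y.cuda()             # move the result back to the GPU
out.sum().backward()       # gradients flow back across both transfers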
|
st115257
|
Hey guys,
It seems that if I press Ctrl+C on the command line, the program doesn’t quit (it shows a keyboard interruption but then gets stuck there). I am just wondering whether this is a common problem or whether I did something wrong?
|
st115258
|
Hmm, I am using Python. Though this issue occurs less often if I don’t stop at the very early stage of training. I will let you know if I see a clear pattern of why it happens.
Thanks, Soumith!
|
st115259
|
I ran a gradient check on max_pool2d and it returned False.
The code is as below:
from torch.autograd import Variable
from torch.autograd import gradcheck
import torch
import torch.nn.functional as F

if __name__ == '__main__':
    # type: maxpool 2d
    kernel_size = 2
    n_channel = 100
    feature_size = 6
    batch_size = 3
    input = (Variable(torch.randn(1, n_channel, feature_size, feature_size), requires_grad=True), kernel_size)
    f_max_pooling = F.max_pool2d
    print gradcheck(f_max_pooling, input, eps=1e-3)
If I set eps to the default, it always returns False; if I set it to 1e-3, it sometimes returns False. Any idea why?
|
st115260
|
The gradcheck can be sensitive to numerical precision. You could e.g. run it with .double() to somewhat alleviate that (I think this is what the test-suite does, too).
In the second part, you may be right at the boundary of the required precision, and depending on the randomness you either pass or not.
Best regards
Thomas
|
st115261
|
Thanks!
I tried with:
input = (Variable(torch.DoubleTensor(torch.randn(1, n_channel, feature_size, feature_size).double()), requires_grad=True), kernel_size)
This still sometimes return False.
Could you elaborate on
you may be right at the boundary of the required precision, and depending on the randomness you either pass or not?
|
st115262
|
Hi,
The problem is that maxpool is not really differentiable: it is non-differentiable at many points (wherever the input element that attains the maximum changes).
That means that if your test case hits one of these cases (where two inputs are very, very close), the gradient computed by finite differences will differ from the subgradient given by the backward pass.
Given your eps of 1e-3, the gradient will be wrong if, in your input, two values in the same patch differ by less than 1e-3. Since your input is quite large and you generate data in a very small range, I guess this happens all the time.
|
st115263
|
http://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html
I am following this post in my project work and I want to add more deep LSTM layers. How can I do that?
I tried some simple ways, but they didn’t work for me with the existing model.
|
st115264
|
If it is nn.LSTM, the number of layers is just a constructor argument:
http://pytorch.org/docs/master/nn.html#lstm
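For example (sizes are made up):
import torch.nn as nn
lstm = nn.LSTM(input_size=100, hidden_size=50, num_layers=3)  # a 3-layer stacked LSTM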
|
st115265
|
Hi guys, I have been working on an implementation of a convolutional lstm.
I implemented first a convlstm cell and then a module that allows multiple layers.
Here’s the code:
https://github.com/rogertrullo/pytorch_convlstm/blob/master/conv_lstm.py
import torch.nn as nn
from torch.autograd import Variable
import torch

def weights_init(m):
    classname = m.__class__.__name__
    if classname.find('Conv') != -1:
        m.weight.data.normal_(0.0, 0.02)
    elif classname.find('BatchNorm') != -1:
        m.weight.data.normal_(1.0, 0.02)
        m.bias.data.fill_(0)

class CLSTM_cell(nn.Module):
    """Initialize a basic Conv LSTM cell.

    Args:
        shape: int tuple that is the height and width of the hidden states h and c
        filter_size: int that is the height and width of the filters
        num_features: int that is the number of channels of the states, like hidden_size
    """
(file truncated)
It’d be nice if anybody could comment about the correctness of the implementation, or how can I improve it.
Thanks!
|
st115266
|
There is no need to implement an LSTM yourself. In the forward() of your CLSTM_cell, just feed the output of the conv into nn.LSTM, like the code below:
x = self.CNN(x)
x = x.view(x.size()[0], 512, -1)
# (batch, input_size, seq_len) -> (batch, seq_len, input_size)
x = x.transpose(1, 2)
# (batch, seq_len, input_size) -> (seq_len, batch, input_size)
x = x.transpose(0, 1).contiguous()
x, _ = self.LSTM1(x)
You should define your self.LSTM1 in your init, like
self.BiLSTM1 = nn.LSTM(input_size=nIn, hidden_size=nHidden, num_layers=1, dropout=0)
Also refer to the definition of nn.LSTM for how to use.
|
st115267
|
Hi,
I think this is different. I am trying to do something similar to what is presented in this paper.
Here the input is an image, and the states are also multichannel images. The input-to-hidden and hidden-to-hidden operations are convolutions instead of matrix-vector multiplications.
In your code you just convert the output of a CNN to a vector and use the regular LSTM.
|