st99668
|
Well, if you need 1e-8 precision you don’t really have a choice, you need to work with doubles I’m afraid.
|
st99669
|
class CNNText(nn.Module):
    def __init__(self):
        super(CNNText, self).__init__()
        self.encoder_tit = nn.Embedding(3281, 64)
        self.encoder_con = nn.Embedding(496037, 512)
        self.title_conv_1 = nn.Sequential(
            nn.Conv1d(in_channels=1,
                      out_channels=1,
                      kernel_size=(1, 64)),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=1),
        )
        self.title_conv_2 = nn.Sequential(
            nn.Conv1d(in_channels=1,
                      out_channels=1,
                      kernel_size=(2, 64)),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=1),
        )
        self.content_conv_3 = nn.Sequential(
            nn.Conv1d(in_channels=1,
                      out_channels=1,
                      kernel_size=(3, 512)),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=50)
        )
        self.content_conv_4 = nn.Sequential(
            nn.Conv1d(in_channels=1,
                      out_channels=1,
                      kernel_size=(3, 512)),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=50)
        )
        self.content_conv_5 = nn.Sequential(
            nn.Conv1d(in_channels=1,
                      out_channels=1,
                      kernel_size=(3, 512)),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=50)
        )
        self.fc = nn.Linear(5, 9)

    def forward(self, title, content):
        title = self.encoder_tit(title)
        print(title.size())
        title_out_1 = self.title_conv_1(title)
        title_out_2 = self.title_conv_2(title)
        content = self.encoder_con(content)
        content_out_3 = self.content_conv_3(content)
        content_out_4 = self.content_conv_4(content)
        content_out_5 = self.content_conv_5(content)
        conv_out = t.cat((title_out_1, title_out_2, content_out_3, content_out_4, content_out_5), dim=1)
        logits = self.fc(conv_out)
        return F.log_softmax(logits)

cnnt = CNNText()
optimizer = optim.Adam(cnnt.parameters(), lr=.001)
Loss = nn.NLLLoss()

for epoch in range(50):
    loss = 0
    t = ''.join(title[epoch])
    c = ''.join(content[epoch])
    T, C = variables_from_pair(t, c)
    # print(T.squeeze(1).unsqueeze(0))
    T = T.squeeze(1).unsqueeze(0)
    C = C.squeeze(1).unsqueeze(0)
    optimizer.zero_grad()
    out = cnnt(T, C)
    target = cla[epoch]
    loss += Loss(out, target)
    loss.backward()
    optimizer.step()
    print("Loss is {} at {} epoch".format(loss, epoch))
Error:
torch.Size([1, 3, 64])
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-34-328d44896eef> in <module>()
15 optimizer.zero_grad()
16
---> 17 out = cnnt(T, C)
18 target = cla[epoch]
19 loss += Loss(out, target)
/home/quoniammm/anaconda3/envs/py3Tfgpu/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
222 for hook in self._forward_pre_hooks.values():
223 hook(self, input)
--> 224 result = self.forward(*input, **kwargs)
225 for hook in self._forward_hooks.values():
226 hook_result = hook(self, input, result)
<ipython-input-31-fe95ab78725e> in forward(self, title, content)
52 title = self.encoder_tit(title)
53 print(title.size())
---> 54 title_out_1 = self.title_conv_1(title)
55 title_out_2 = self.title_conv_2(title)
56
/home/quoniammm/anaconda3/envs/py3Tfgpu/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
222 for hook in self._forward_pre_hooks.values():
223 hook(self, input)
--> 224 result = self.forward(*input, **kwargs)
225 for hook in self._forward_hooks.values():
226 hook_result = hook(self, input, result)
/home/quoniammm/anaconda3/envs/py3Tfgpu/lib/python3.6/site-packages/torch/nn/modules/container.py in forward(self, input)
65 def forward(self, input):
66 for module in self._modules.values():
---> 67 input = module(input)
68 return input
69
/home/quoniammm/anaconda3/envs/py3Tfgpu/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
222 for hook in self._forward_pre_hooks.values():
223 hook(self, input)
--> 224 result = self.forward(*input, **kwargs)
225 for hook in self._forward_hooks.values():
226 hook_result = hook(self, input, result)
/home/quoniammm/anaconda3/envs/py3Tfgpu/lib/python3.6/site-packages/torch/nn/modules/conv.py in forward(self, input)
152 def forward(self, input):
153 return F.conv1d(input, self.weight, self.bias, self.stride,
--> 154 self.padding, self.dilation, self.groups)
155
156
/home/quoniammm/anaconda3/envs/py3Tfgpu/lib/python3.6/site-packages/torch/nn/functional.py in conv1d(input, weight, bias, stride, padding, dilation, groups)
81 f = ConvNd(_single(stride), _single(padding), _single(dilation), False,
82 _single(0), groups, torch.backends.cudnn.benchmark, torch.backends.cudnn.enabled)
---> 83 return f(input, weight, bias)
84
85
RuntimeError: expected 3D tensor
The title is already a 3D tensor, so why does the RuntimeError say "expected 3D tensor"?
|
st99670
|
@chenyuntc I used the structure of your program. Can you help me solve this problem, please?
|
st99671
|
How about opening an issue here? I'll help you.
GitHub
chenyuntc/PyTorchText
PyTorchText - first-place solution of the "init" team in the Zhihu Kanshan Cup (知乎看山杯)
|
st99672
|
Of course. I have opened an issue here.
github.com/chenyuntc/PyTorchText
Issue: Why RuntimeError is expected 3D tensor?
opened by quoniammm
on 2017-09-27
closed by chenyuntc
on 2017-10-04
class CNNText(nn.Module):
def __init__(self):
super(CNNText, self).__init__()
self.encoder_tit = nn.Embedding(3281, 64)
self.encoder_con = nn.Embedding(496037, 512)
self.title_conv_1 = nn.Sequential(
nn.Conv1d(in_channels =...
|
st99673
|
My situation is: my CNN needs an input image of shape (3 x 96 x 96), so I fed that in and "expected 3d tensor" occurred. I then changed it to the shape (1 x 3 x 96 x 96), and it works.
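For reference, a minimal sketch of that fix (with hypothetical shapes): conv layers expect a leading batch dimension, which unsqueeze(0) adds.
import torch

img = torch.randn(3, 96, 96)   # a single image without a batch dimension
batch = img.unsqueeze(0)       # shape becomes (1, 3, 96, 96)
# conv layers expect (N, C, H, W), so feed `batch` rather than `img`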
|
st99674
|
Hi all,
I have a basic Pytorch question on how to use model.train() and .eval(). Let’s say I have a code like that:
for epochs:
    for training mini-batches:
        model.train()
        ...
    for validation data:
        model.eval()
        ...
Is this correct?
I have the feeling that using .train() and .eval() like above causes problems, because at some point both modes would seem to be in effect.
Thank you!
|
st99675
|
Yes, it is correct. For example, if you have dropout in your model, model.eval() will disable it. You have to explicitly change the model state back to training by calling model.train() to enable dropout again during the training loop.
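As a minimal sketch of that pattern (the loader, criterion, and optimizer names here are placeholders, not from the original post):
for epoch in range(num_epochs):
    model.train()                     # dropout/batchnorm in training mode
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    model.eval()                      # switch to inference behavior
    with torch.no_grad():             # no autograd bookkeeping for validation
        for x, y in val_loader:
            val_loss = criterion(model(x), y)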
|
st99676
|
Here is the detail:
sim_1 = torch.bmm(output, output_pos.transpose(1, 2)) # both output and output_pos have size of (6, 25, 100), so sim_1.shape = (6, 25, 25)
Then I want to apply scipy.optimize.linear_sum_assignment to get each matrix’s some elements sum up. But I can only think of a for-loop way to do it:
for i in range(sim_1.shape[0]):
    row_ind, col_ind = linear_sum_assignment(-sim_1[i].cpu().data.numpy())
    sim_1[i] = sim_1[i, row_ind, col_ind].sum().view(1, -1)  # This step failed. I want sim_1.shape = (6, 1) at last.
My torch version is 0.4. I want to ask, is there a correct way to do it? Thanks!
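One way that should work, a sketch assuming you only need the scalar sums: the assignment above fails because a (1, 1) result cannot be written back into the (25, 25) slice sim_1[i], so collect the sums in a fresh tensor instead.
import torch
from scipy.optimize import linear_sum_assignment

sums = []
for i in range(sim_1.shape[0]):
    row_ind, col_ind = linear_sum_assignment(-sim_1[i].cpu().data.numpy())
    # pick out the assigned entries and sum them; keep the scalar instead of
    # writing it back into sim_1
    sums.append(sim_1[i][row_ind, col_ind].sum())
result = torch.stack(sums).view(-1, 1)   # shape (6, 1)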
|
st99677
|
I have multiple losses: Say lossA, lossB
I have an option to configure which losses to pick for training.
My code for getting total loss is:
lossA = 0
lossB = 0
if config['use_lossA']:
    lossA = getLossA(input, output)
if config['use_lossB']:
    lossB = getLossB(input, output)
totalLoss = lossA + lossB
if training:
    # optimiser clear
    totalLoss.backward()
    # optimiser related step
totalLoss might not always be a Variable if both losses are disabled (which generally won't happen). But to be safe, what is the best method to combine these losses?
Is total = Variable(lossA + lossB, requires_grad=True) okay?
Is it okay to call backward on the total loss?
|
st99678
|
As a workaround, you can have a switch as below:
if not totalLoss == 0:
    totalLoss.backward()
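Alternatively, a sketch of a variant that avoids the problem entirely by starting from a zero tensor, so totalLoss is always a tensor and backward() is always valid (assumes torch >= 0.4):
totalLoss = torch.zeros(1, requires_grad=True)  # always a tensor, never a plain 0
if config['use_lossA']:
    totalLoss = totalLoss + getLossA(input, output)
if config['use_lossB']:
    totalLoss = totalLoss + getLossB(input, output)
if training:
    optimizer.zero_grad()
    totalLoss.backward()   # safe even when both losses are disabled
    optimizer.step()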
|
st99679
|
I have a question regarding training GANs and whether it is possible to avoid generating two sets of fake data at each batch (one for training the generator and the other for training the discriminator).
In most examples I see the following training procedure for training GANs:
Train Discriminator:
    Train on fake: use the generator's output
    Train on real
    Calculate loss (binary classification or whatever)
Train Generator:
    Generate (again) fake data
    Calculate loss, using the discriminator
Here is such an example https://github.com/pytorch/examples/blob/master/dcgan/main.py#L195
My question is why not:
first train the generator
keep the generator’s output
then train the discriminator, but instead of generating new fake data, use the previously generated data (while detaching them)
Here is such an example https://github.com/lidq92/PyTorch-GAN/blob/master/implementations/acgan/acgan.py#L184
Am I missing something? In the first case, aren't we simply wasting computation? Is there any benefit in doing things like this?
|
st99680
|
And do we normally use that in current SOTA models (any kind of LSTM)? I have heard someone say it should be set to 10^3 to 10^4; is that correct? Thanks!
|
st99681
|
What is cheap and preferable hosting for deploying deep learning/ pytorch models? I would like to run them on CPU and expose API. What if I have 1000 models on a single server? Would it be costly to do inference? How about online learning? Can you use simple flask web app hosting to run the models? Thank you!
|
st99682
|
Hi,
I am working on videos and I want to know how to visualize video data in pytorch.
|
st99683
|
My net: a stacked 5-layer LSTM and a Linear layer, with parameters like below:
num_layers = 5
bidirectional = 0  # 0 means unidirectional, 1 means bidirectional
batch_size = 64
input_size = 4
seq_len = 11
hidden_size = 20
output_size = 400
train result:
0 loss = tensor(6.7650)
50 loss = tensor(5.8131)
100 loss = tensor(4.6847)
150 loss = tensor(4.8746)
200 loss = tensor(4.6258)
250 loss = tensor(4.7645)
300 loss = tensor(4.6798)
350 loss = tensor(4.4751)
400 loss = tensor(4.6369)
450 loss = tensor(4.5987)
500 loss = tensor(4.6832)
550 loss = tensor(4.8298)
600 loss = tensor(4.5591)
650 loss = tensor(4.7101)
700 loss = tensor(4.7349)
750 loss = tensor(4.6433)
800 loss = tensor(4.6694)
850 loss = tensor(4.4634)
900 loss = tensor(4.8253)
950 loss = tensor(4.7901)
1000 loss = tensor(4.7092)
1050 loss = tensor(4.9188)
1100 loss = tensor(4.7793)
1150 loss = tensor(4.9890)
1200 loss = tensor(4.5564)
1250 loss = tensor(4.5261)
1300 loss = tensor(4.8063)
1350 loss = tensor(4.5315)
1400 loss = tensor(4.4564)
1450 loss = tensor(4.8649)
1500 loss = tensor(4.8288)
1550 loss = tensor(4.6560)
1650 loss = tensor(4.7287)
1700 loss = tensor(5.1292)
1750 loss = tensor(4.6975)
1800 loss = tensor(4.9339)
1850 loss = tensor(4.9230)
1900 loss = tensor(4.8357)
1950 loss = tensor(4.7121)
2000 loss = tensor(4.8539)
2050 loss = tensor(4.8245)
2100 loss = tensor(4.7509)
2150 loss = tensor(4.8134)
2200 loss = tensor(5.0023)
2250 loss = tensor(5.0936)
……
Why is the loss not decreasing? Any suggestions?
|
st99684
|
Hello, I've converted resnet from torchvision for use in caffe2, but I get different answers in the latter. The conv layers seem to give the same results, but the batch norms do not.
It's been hard to trace the differences, so I was hoping to get some help. I realize the frameworks may do floating-point math in a different order and accumulate error, etc., but I am hoping for a simple fix.
One thing I've tried is inverting the 'running_variance' from the pytorch bn when copying to the '_riv' and '_siv' variables in the caffe2 bn. It's unclear whether caffe2 stores and processes inverse variance (the code seems to suggest so, but the docs just talk about "running/saved_var"). Anyway, it didn't work.
ONNX is a no-go since it doesn't seem to support feature concatenation.
Thanks
|
st99685
|
Hi, I am following this procedure:
Train a network with train.py, and save the model with torch.save(model.state_dict(), 'model.pth')
Next I load the model in classify.py with model.load_state_dict(torch.load(opt.model)) and set model.eval().
However, now I notice the model gives entirely different results depending on whether I call .eval() or not. The model includes a couple of BatchNorm2d and Dropout layers, and performs fine without calling .eval(), but I was hoping for a slight improvement in performance with .eval() activated. What are things to look out for when encountering this problem?
|
st99686
|
Solved by bartolsthoorn in post #4
Figured it out, it did have to do with the sampling strategy.
Since BatchNorm requires mini-batches as a proxy to the population statistics the triplet samples should be selected randomly across many classes, instead of single-class batches.
|
st99687
|
BatchNorm and Dropout layers have different behavior in training and evaluation modes. Try to narrow down your problem by setting all the layers to “eval” mode except your BatchNorm layers. Then try the same thing with the Dropout layers.
BatchNorm uses a running average of mean and variance during training to compute the statistics used during eval mode. Check the running_mean and running_var properties on your BatchNorm layers.
One thing I’ve seen mess up BatchNorm is sending in uninitialized inputs in train mode. For example:
output_size = network(torch.FloatTensor(16, 3, 224, 224).cuda()).size() # BAD!
The uninitialized input can mess up the running averages, especially if it happens to contain NaNs.
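A sketch of that narrowing-down step (assuming 2d BatchNorm layers in an nn.Module called model):
import torch.nn as nn

model.eval()                        # put everything in eval mode...
for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):
        m.train()                   # ...except BatchNorm, kept in train mode
        print(m.running_mean, m.running_var)   # and inspect the running stats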
|
st99688
|
I have narrowed down the problem to the BatchNorm layers. Additionally I plotted the running_mean and running_var properties, and they seem fine (no NaNs for example.).
My guess is that it is related to the way I am sampling while training. I am using triplet loss, which means the same model is used 3 times on three batches, each containing samples of a single label.
anchor, positive, negative = model(input1), model(input2), model(input3)
loss = F.triplet_margin_loss(anchor, positive, negative)
Taking MNIST as an example, this means the mini-batch input1 at each iteration contains randomly picked samples from a single label (for example only 9’s). input2 contains the other samples from the same label as input1. Finally, input3 contains samples from another label, for example only contains 4's.
Will mini-batches like these cause problems with BatchNorm properties? Should I switch to another sampling strategy?
|
st99689
|
Figured it out, it did have to do with the sampling strategy.
Since BatchNorm requires mini-batches as a proxy to the population statistics the triplet samples should be selected randomly across many classes, instead of single-class batches.
|
st99690
|
I seem to have the same issue in my binary classification. When evaluating in .train() mode my model gets a high accuracy on my test data, whereas when I set the model to .eval() mode to switch dropout etc., my model always outputs class 0.
Switching everything but the BatchNorm layers to eval() mode gives me a good prediction. Unlike @bartolsthoorn, my mini-batches already contain both classes (i.e. they are shuffled), so that would not solve it.
My running_means do not contain any NaNs.
What else can I check to figure out why the batchnorm layers are breaking my evaluation mode?
|
st99691
|
How large is your batch size? A small batch size might yield noisy estimates of the running stats.
|
st99692
|
16, same as I used for training. Using a smaller batchsize (4 or 1) didn’t help. I’m using 3D data so I cannot really increase it above 16.
|
st99693
|
You could try to tune the momentum hyperparameter of BatchNorm or alternatively use GroupNorm, which should work better using small batch sizes.
|
st99694
|
Finally, I figured out that the issue lay in the preprocessing of my data: it wasn't exactly the same at test time as during training. A different model I had trained without batchnorm showed that it wouldn't work on the test data at all either. I find it quite interesting that the model with batchnorm is actually able to perform quite well in train() mode even though the normalization is different, while the same mismatch causes the model to fail in eval() mode.
Thanks for the pointer to GroupNorm, I haven’t been aware of it but will surely check it out!
|
st99695
|
I have a Raspberry Pi Compute Module 3 and would like to run pytorch (CPU only) on it. I tried following this guide, but I don't have enough space left on the device to increase the swap, so the compilation just freezes along the way…
But I had the idea of building a pytorch wheel for ARM. I downloaded the rpi toolchain and set the compiler flag to
export CC="/home/me/gits/rpi_tools/arm-bcm2708/arm-rpi-4.9.3-linux-gnueabihf/bin/arm-linux-gnueabihf-gcc"
I also disabled CUDA and distributed support:
export NO_CUDA=1
export NO_DISTRIBUTED=1
but when I tried running
python3 setup.py sdist bdist_wheel
I got the following warnings and, eventually, an error:
warning: check: missing required meta-data: url
warning: check: missing meta-data: either (author and author_email) or (maintainer and maintainer_email) must be supplied
+ USE_CUDA=0
+ USE_ROCM=0
+ USE_NNPACK=0
+ USE_MKLDNN=0
+ USE_GLOO_IBVERBS=0
+ USE_DISTRIBUTED_MW=0
+ FULL_CAFFE2=0
+ [[ 4 -gt 0 ]]
+ case "$1" in
+ USE_NNPACK=1
+ shift
+ [[ 3 -gt 0 ]]
+ case "$1" in
+ break
+ CMAKE_INSTALL='make install'
+ USER_CFLAGS=
+ USER_LDFLAGS=
+ [[ -n '' ]]
+ [[ -n '' ]]
+ [[ -n '' ]]
++ uname
+ '[' Linux == Darwin ']'
++ dirname tools/build_pytorch_libs.sh
+ cd tools/..
+++ pwd
++ printf '%q\n' /home/me/gits/pytorch
+ PWD=/home/me/gits/pytorch
+ BASE_DIR=/home/me/gits/pytorch
+ TORCH_LIB_DIR=/home/me/gits/pytorch/torch/lib
+ INSTALL_DIR=/home/me/gits/pytorch/torch/lib/tmp_install
+ THIRD_PARTY_DIR=/home/me/gits/pytorch/third_party
+ CMAKE_VERSION=cmake
+ C_FLAGS=' -I"/home/me/gits/pytorch/torch/lib/tmp_install/include" -I"/home/me/gits/pytorch/torch/lib/tmp_install/include/TH" -I"/home/me/gits/pytorch/torch/lib/tmp_install/include/THC" -I"/home/me/gits/pytorch/torch/lib/tmp_install/include/THS" -I"/home/me/gits/pytorch/torch/lib/tmp_install/include/THCS" -I"/home/me/gits/pytorch/torch/lib/tmp_install/include/THNN" -I"/home/me/gits/pytorch/torch/lib/tmp_install/include/THCUNN"'
+ C_FLAGS=' -I"/home/me/gits/pytorch/torch/lib/tmp_install/include" -I"/home/me/gits/pytorch/torch/lib/tmp_install/include/TH" -I"/home/me/gits/pytorch/torch/lib/tmp_install/include/THC" -I"/home/me/gits/pytorch/torch/lib/tmp_install/include/THS" -I"/home/me/gits/pytorch/torch/lib/tmp_install/include/THCS" -I"/home/me/gits/pytorch/torch/lib/tmp_install/include/THNN" -I"/home/me/gits/pytorch/torch/lib/tmp_install/include/THCUNN" -DOMPI_SKIP_MPICXX=1'
+ LDFLAGS='-L"/home/me/gits/pytorch/torch/lib/tmp_install/lib" '
+ LD_POSTFIX=.so
++ uname
+ [[ Linux == \D\a\r\w\i\n ]]
+ LDFLAGS='-L"/home/me/gits/pytorch/torch/lib/tmp_install/lib" -Wl,-rpath,$ORIGIN'
+ CPP_FLAGS=' -std=c++11 '
+ GLOO_FLAGS=
+ THD_FLAGS=
+ NCCL_ROOT_DIR=/home/me/gits/pytorch/torch/lib/tmp_install
+ [[ 0 -eq 1 ]]
+ [[ 0 -eq 1 ]]
+ [[ 0 -eq 1 ]]
+ CWRAP_FILES='/home/me/gits/pytorch/torch/lib/ATen/Declarations.cwrap;/home/me/gits/pytorch/torch/lib/THNN/generic/THNN.h;/home/me/gits/pytorch/torch/lib/THCUNN/generic/THCUNN.h;/home/me/gits/pytorch/torch/lib/ATen/nn.yaml'
+ CUDA_NVCC_FLAGS=' -I"/home/me/gits/pytorch/torch/lib/tmp_install/include" -I"/home/me/gits/pytorch/torch/lib/tmp_install/include/TH" -I"/home/me/gits/pytorch/torch/lib/tmp_install/include/THC" -I"/home/me/gits/pytorch/torch/lib/tmp_install/include/THS" -I"/home/me/gits/pytorch/torch/lib/tmp_install/include/THCS" -I"/home/me/gits/pytorch/torch/lib/tmp_install/include/THNN" -I"/home/me/gits/pytorch/torch/lib/tmp_install/include/THCUNN" -DOMPI_SKIP_MPICXX=1'
+ [[ -z '' ]]
+ CUDA_DEVICE_DEBUG=0
+ '[' -z 8 ']'
+ BUILD_TYPE=Release
+ [[ -n '' ]]
+ [[ -n '' ]]
+ echo 'Building in Release mode'
+ mkdir -p torch/lib/tmp_install
+ for arg in "$@"
+ [[ caffe2 == \n\c\c\l ]]
+ [[ caffe2 == \g\l\o\o ]]
+ [[ caffe2 == \c\a\f\f\e\2 ]]
+ pushd /home/me/gits/pytorch
+ build_caffe2
+ [[ -z '' ]]
+ EXTRA_CAFFE2_CMAKE_FLAGS=()
+ [[ -n '' ]]
+ [[ -n /usr/lib/python3/dist-packages ]]
+ EXTRA_CAFFE2_CMAKE_FLAGS+=("-DCMAKE_PREFIX_PATH=$CMAKE_PREFIX_PATH")
+ mkdir -p build
+ pushd build
+ cmake .. -DBUILDING_WITH_TORCH_LIBS=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_CAFFE2=0 -DBUILD_ATEN=ON -DBUILD_PYTHON=0 -DBUILD_BINARY=OFF -DBUILD_SHARED_LIBS=ON -DONNX_NAMESPACE=onnx_torch -DUSE_CUDA=0 -DCAFFE2_STATIC_LINK_CUDA= -DUSE_ROCM=0 -DUSE_NNPACK=1 -DCUDNN_INCLUDE_DIR= -DCUDNN_LIB_DIR= -DCUDNN_LIBRARY= -DUSE_MKLDNN=0 -DMKLDNN_INCLUDE_DIR= -DMKLDNN_LIB_DIR= -DMKLDNN_LIBRARY= -DCMAKE_INSTALL_PREFIX=/home/me/gits/pytorch/torch/lib/tmp_install -DCMAKE_EXPORT_COMPILE_COMMANDS=1 -DCMAKE_C_FLAGS= -DCMAKE_CXX_FLAGS= '-DCMAKE_EXE_LINKER_FLAGS=-L"/home/me/gits/pytorch/torch/lib/tmp_install/lib" -Wl,-rpath,$ORIGIN ' '-DCMAKE_SHARED_LINKER_FLAGS=-L"/home/me/gits/pytorch/torch/lib/tmp_install/lib" -Wl,-rpath,$ORIGIN ' -DCMAKE_PREFIX_PATH=/usr/lib/python3/dist-packages
CMake Error at cmake/MiscCheck.cmake:29 (message):
Please use GCC 6 or higher on Ubuntu 17.04 and higher. For more
information, see: https://github.com/caffe2/caffe2/issues/1633
Call Stack (most recent call first):
CMakeLists.txt:172 (include)
The gcc version of the selected toolchain is:
$ cc --version
arm-bcm2708-linux-gnueabi-gcc (crosstool-NG 1.15.2) 4.7.1 20120402 (prerelease)
Copyright (C) 2012 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
What am I doing wrong?
|
st99696
|
If L-BFGS is also fine for you, then yes:
https://pytorch.org/docs/stable/optim.html#torch.optim.LBFGS
https://pytorch.org/docs/stable/_modules/torch/optim/lbfgs.html
|
st99697
|
GitHub
hjmshi/PyTorch-LBFGS
A PyTorch implementation of L-BFGS. Contribute to hjmshi/PyTorch-LBFGS development by creating an account on GitHub.
|
st99698
|
I ran into the Runtime Error: CUDNN_STATUS_NOT_INITIALIZED when I was training a model that has a CNN.
I checked my installation
>>> import torch
>>> print(torch.backends.cudnn.is_acceptable(torch.cuda.FloatTensor(1)))
True
>>> print(torch.backends.cudnn.version())
7005
One thing that might cause the problem but I’m not sure:
On the server that I’m using:
(test_cudnn) zhangjuexiao@ubuntu:/data/disk1/private/zhangjuexiao/allennlp$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2016 NVIDIA Corporation
Built on Sun_Sep__4_22:14:01_CDT_2016
Cuda compilation tools, release 8.0, V8.0.44
cat /usr/local/cuda-8.0/include/cudnn.h | grep CUDNN_MAJOR -A 2
#define CUDNN_MAJOR 6
#define CUDNN_MINOR 0
#define CUDNN_PATCHLEVEL 21
But I installed this in my virtual environment:
conda install pytorch torchvision cuda90 -c pytorch
I've been tortured by it for days…
How to solve the problem? Any kind help is appreciated!
|
st99699
|
Here are the detailed trace back, if needed:
File "/data/disk1/private/zhangjuexiao/allennlp/allennlp/training/trainer.py", line 434, in _train_epoch
loss = self._batch_loss(batch, for_training=True) File
"/data/disk1/private/zhangjuexiao/allennlp/allennlp/training/trainer.py", line 371, in _batch_loss
output_dict = self._model(**batch) File
"/data/disk1/private/zhangjuexiao/anaconda3/envs/test_cudnn/lib/python3.6/site-
packages/torch/nn/modules/module.py", line 357, in call result = self.forward(*input, **kwargs) File
"/data/disk1/private/zhangjuexiao/allennlp/allennlp/models/reading_comprehension/bidaf.py", line
174, in forward embedded_question = self._highway_layer(self._text_field_embedder(question)) File
"/data/disk1/private/zhangjuexiao/anaconda3/envs/test_cudnn/lib/python3.6/site-
packages/torch/nn/modules/module.py", line 357, in call result = self.forward(*input, **kwargs) File
"/data/disk1/private/zhangjuexiao/allennlp/allennlp/modules/text_field_embedders/basic_text_field_e
mbedder.py", line 52, in forward token_vectors = embedder(tensor) File
"/data/disk1/private/zhangjuexiao/anaconda3/envs/test_cudnn/lib/python3.6/site-
packages/torch/nn/modules/module.py", line 357, in call result = self.forward(*input, **kwargs) File
"/data/disk1/private/zhangjuexiao/allennlp/allennlp/modules/token_embedders/token_characters_en
coder.py", line 36, in forward return self._dropout(self._encoder(self._embedding(token_characters),
mask)) File "/data/disk1/private/zhangjuexiao/anaconda3/envs/test_cudnn/lib/python3.6/site-
packages/torch/nn/modules/module.py", line 357, in call result = self.forward(*input, **kwargs) File
"/data/disk1/private/zhangjuexiao/allennlp/allennlp/modules/time_distributed.py", line 35, in forward
reshaped_outputs = self._module(*reshaped_inputs) File
"/data/disk1/private/zhangjuexiao/anaconda3/envs/test_cudnn/lib/python3.6/site-
packages/torch/nn/modules/module.py", line 357, in call result = self.forward(*input, **kwargs) File
"/data/disk1/private/zhangjuexiao/allennlp/allennlp/modules/seq2vec_encoders/cnn_encoder.py",
line 106, in forward self._activation(convolution_layer(tokens)).max(dim=2)[0] File
"/data/disk1/private/zhangjuexiao/anaconda3/envs/test_cudnn/lib/python3.6/site-
packages/torch/nn/modules/module.py", line 357, in call result = self.forward(*input, **kwargs) File
"/data/disk1/private/zhangjuexiao/anaconda3/envs/test_cudnn/lib/python3.6/site-
packages/torch/nn/modules/conv.py", line 168, in forward self.padding, self.dilation, self.groups) File
"/data/disk1/private/zhangjuexiao/anaconda3/envs/test_cudnn/lib/python3.6/site-
packages/torch/nn/functional.py", line 54, in conv1d return f(input, weight, bias) RuntimeError:
CUDNN_STATUS_NOT_INITIALIZED
|
st99700
|
Hi @juexZZ, any solution yet? I ran into the same issue while trying a batch_size > 1. My system details are:
Cuda version: 9.0
Cudnn version: 7102
Pytorch version: 0.4.0
GPU: GTX 1080 Ti
Driver version: 390.77
OS: Ubuntu 16.04
With a batch_size of 1, the training loop works fine. Detailed traceback:
<ipython-input-20-8f012bcb5dc5> in forward(self, x)
9
10 def forward(self, x):
---> 11 x = self.conv3d_1(x)
12 x = self.conv3d_2(x)
13 x = self.conv3d_3(x)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
489 result = self._slow_forward(*input, **kwargs)
490 else:
--> 491 result = self.forward(*input, **kwargs)
492 for hook in self._forward_hooks.values():
493 hook_result = hook(self, input, result)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py in forward(self, input)
89 def forward(self, input):
90 for module in self._modules.values():
---> 91 input = module(input)
92 return input
93
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
489 result = self._slow_forward(*input, **kwargs)
490 else:
--> 491 result = self.forward(*input, **kwargs)
492 for hook in self._forward_hooks.values():
493 hook_result = hook(self, input, result)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in forward(self, input)
419 def forward(self, input):
420 return F.conv3d(input, self.weight, self.bias, self.stride,
--> 421 self.padding, self.dilation, self.groups)
422
RuntimeError: CUDNN_STATUS_NOT_INITIALIZED
|
st99701
|
Does anyone know how ‘nn.Upsample’ works when its ‘align_corners’ parameter is set to True?
So, when you have a tensor as below
source : tensor([[[[ 2, -5, 10, 4 ]]]])
and apply upsample operation to it.
m = torch.nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
result = m(source)
Finally, get result as
tensor([[[[ 2.0, -1.0, -4.0, -0.7143, 5.7143, 9.1429, 6.5714, 4.0 ]]]])
So my question here is, how do I calculate these values from source-tensor?
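For reference: with align_corners=True, output index i maps to the input coordinate i * (in_size - 1) / (out_size - 1), and the result is linearly interpolated between the two neighboring source values. A small sketch reproducing the numbers above:
src = [2.0, -5.0, 10.0, 4.0]
out_size = 8
scale = (len(src) - 1) / (out_size - 1)          # 3/7 for 4 -> 8
for i in range(out_size):
    x = i * scale                                # real-valued source coordinate
    left = int(x)                                # floor
    right = min(left + 1, len(src) - 1)
    frac = x - left
    print((1 - frac) * src[left] + frac * src[right])
# prints 2.0, -1.0, -4.0, -0.7143, 5.7143, 9.1429, 6.5714, 4.0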
|
st99702
|
Hi there.
I am really new in the community.
I would like to ask about meta learning and out-of-memory errors.
I implemented code similar to what is explained in https://medium.com/huggingface/from-zero-to-research-an-introduction-to-meta-learning-8e16e677f78a, but now I have an issue with the memory.
I checked on CPU and GPU and both show the same problem.
Some details about the code:
Variable declaration:
model_foward = ClassifierWithState(RNNLM(args.n_vocab, args.layer, args.unit))
model_backward = ClassifierWithState(RNNLM(args.n_vocab, args.layer, args.unit))
optimizer = MetaLearner(None)
model_foward.cuda(gpu_id)
model_backward.cuda(gpu_id)
optimizer.cuda(gpu_id)
meta_optimizer = torch.optim.SGD(optimizer.parameters(), lr=1.0)
MetaLearner Class:
class MetaLearner(nn.Module):
    """ Bare Meta-learner class
    Should be added: initialization, hidden states, more control over everything
    """
    def __init__(self, model):
        super(MetaLearner, self).__init__()
        self.weights = torch.nn.Parameter(torch.Tensor(1, 2))

    def forward(self, forward_model, backward_model):
        """ Forward optimizer with a simple linear neural net
        Inputs:
            forward_model: PyTorch module with parameters gradient populated
            backward_model: PyTorch module identical to forward_model (but without gradients)
                updated at the Parameter level to keep track of the computation graph for meta-backward pass
        """
        f_model_iter = get_params(forward_model)
        b_model_iter = get_params(backward_model)
        for f_param_tuple, b_param_tuple in zip(f_model_iter, b_model_iter):  # loop over parameters
            # Prepare the inputs, we detach the inputs to avoid computing 2nd derivatives (re-pack in new Variable)
            (module_f, name_f, param_f) = f_param_tuple
            (module_b, name_b, param_b) = b_param_tuple
            inputs = torch.autograd.Variable(torch.stack([param_f.grad.data, param_f.data], dim=-1))
            # Optimization step: compute new model parameters, here we apply a simple linear function
            dW = F.linear(inputs, self.weights).squeeze()
            param_b = param_b + dW
            # Update backward_model (meta-gradients can flow) and forward_model (no need for meta-gradients).
            module_b._parameters[name_b] = param_b
            param_f.data = param_b.data
Training cycle:
meta_optimizer.zero_grad()
self.model_backward.zero_grad()
# Progress the dataset iterator for sentences at each iteration.
batch = train_iter.__next__()
losses = []
for j in six.moves.range(len(batch)):
    # print('{} / {} \r'.format(j, len(batch)))
    x, t = convert_examples(batch[j], self.device)
    self.model_foward.zero_grad()
    loss = 0
    count = 0
    state = None
    batch_size, sequence_length = x.shape
    # Sequence forward
    for i in six.moves.range(sequence_length):
        # Compute the loss at this time step and accumulate it
        state, loss_batch = self.model_foward(state, x[:, i], t[:, i])
        non_zeros = torch.sum(x[:, i] != 0, dtype=torch.float)
        loss += loss_batch * non_zeros
        count += int(non_zeros)
    losses.append(loss)
    loss.backward(retain_graph=True)
    self._optimizer(self.model_foward, self.model_backward)
meta_loss = sum(losses)
# logging.info('meta loss: {}'.format(float(meta_loss.detach())))
reporter.report({'loss': float(meta_loss.detach())}, meta_optimizer.target)
reporter.report({'count': count}, meta_optimizer.target)
self._optimizer.zero_grad()
meta_loss.backward()
if self.gradclip is not None:
    nn.utils.clip_grad_norm_(self.model_foward.parameters(), self.gradclip)
    nn.utils.clip_grad_norm_(self.model_backward.parameters(), self.gradclip)
meta_optimizer.step()
The RNNLM is an LSTM network with 1 layer.
The network and meta-network do not have many parameters to update, and I try to update the meta-learner only after a number of input samples. However, the app still breaks due to lack of memory.
Am I missing something?
I thought I was using shared weights, but is it required to declare them once more?
Regards.
|
st99703
|
Hi PyTorchers!
So, I was trying to look up the parameters of a pretrained model for fun.
When I run this code,
model = myModel()
for name, param in model.named_parameters():
    print('{}: {}'.format(name, param.shape))
I get these unexpected lines of output:
Upsample.1.weight: (torch.Size([256, 256, 3, 3]))
Upsample.1.bias: (torch.Size([256]))
The myModel class contains the following code in its __init__ method:
self.Upsample = torch.nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
Is this normal?
|
st99704
|
I think you have something additional somewhere else. There seems to be a sequential model with a convolution (?) in Upsample (if you permit a style hint you didn't ask for: why would you use upper case for your instances?).
list(torch.nn.Upsample().named_parameters())
shows that the vanilla Upsample module doesn't have parameters. If you assign things to it, it will keep them, though.
If you want people to look at this, you would likely have to show more of your code.
Best regards
Thomas
|
st99705
|
Yes, there is something more in the sequential model, like conv operations, as you mentioned.
I just didn't write out the whole code because I thought it would be hairy and unnecessary.
Anyway, that works for me. Thanks!
|
st99706
|
I got the following error when doing grad_clip, any idea why it happens?
Traceback (most recent call last):
File "main.py", line 230, in <module>
exp.run()
File "main.py", line 162, in run
train_loss, train_acc = self.model.train_epoch(self.train_dataloader)
File "/share/data/lang/users/zeweichu/universal-classification/927/model.py", line 122, in train_epoch
torch.nn.utils.clip_grad_norm_([p for p in self.network.parameters() if p.requires_grad], self.args.grad_clipping)
File "/share/data/speech/zewei/anaconda3/lib/python3.6/site-packages/torch/nn/utils/clip_grad.py", line 29, in clip_grad_norm_
total_norm += param_norm ** norm_type
RuntimeError: Expected object of type torch.cuda.FloatTensor but found type torch.cuda.DoubleTensor for argument #4 'other'
|
st99707
|
Could you check the dtypes of your model parameters and of self.args.grad_clipping?
One argument is a DoubleTensor while it should be a FloatTensor, or vice versa.
This is a common issue if you use numpy arrays and transform them into torch.tensors, as numpy uses float64 as the default dtype.
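For example, a sketch of the two usual fixes (model and np_array are placeholder names, not from the original post):
import numpy as np
import torch

# 1) unify parameter dtypes if some of them became float64:
model.float()   # casts all parameters and buffers to torch.float32
# 2) pick float32 explicitly when creating tensors from numpy:
w = torch.from_numpy(np_array.astype(np.float32))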
|
st99708
|
It seems that some of your parameters are of double dtype and some are of float dtype. This is currently not supported unfortunately.
|
st99709
|
I submitted an issue on this at https://github.com/pytorch/pytorch/issues/12159
|
st99710
|
First, I'm building a time series classifier, to classify each chunk/block of time. The data that will be used has already been tried, and failed, with standard ML techniques, as the long-term temporal order is important.
The basic idea is that for each Y-sec chunk/block of time, I use a series of CNNs with an AdaptiveMaxPool1d on top (to handle the various block sizes) to extract X features. Then an "outer" CNN works across all of those blocks, and finally a time-distributed dense layer or two produces a classification. Previously I had used a couple of LSTM layers with Keras for the "outer" part, but I'm intrigued by the recent findings on replacing LSTMs with CNNs.
My pared-down dataset is about 70GB in size, with ~2500 recordings (samples, in the pytorch sense), that are of various lengths and each recorded at a different rate. Each recording is divided into lengths of a fixed number of seconds, but each of those block lengths depends on the sample rate. Example:
Recording 1: 2213 blocks, with a sample rate of 128 Hz means each block is 3,840 samples (in the classical time series data sense) long.
Recording 2: 5127 blocks, with a sample rate of 512 Hz means each block is 15,360 samples long.
I would really like to avoid padding my data (in both “dimensions”), as just the back of the envelope calculation suggests the padded size will be at least 2x the raw data set size. I just think the overhead of transferring all of that data to the GPU is going to be a massive waste of time.
*I know that I could resample the raw data to the same sample rate, so that only the number of blocks is different. However, too much of my data is at 128 Hz, and I’m convinced I’m losing some information at that rate. And I hope the higher sample rate recordings can help inform the network.
|
st99711
|
Good morning
Can you pass me a link to something confirming your claim? I'm very intrigued by the idea myself and would love to know more.
apytorch:
Previously I had used a couple LSTM layers with Keras for the “outer” part, but I’m intrigued by the current findings replacing LSTMs with CNN.
One of the key elements of CNNs is that they excel where data have a grid-like structure. For images the grid is the distance between pixels, and for time series it's the constant data sample rate. So it might be a problem using two sample rates, as the CNN might have trouble when the grid structure differs from sequence to sequence.
apytorch:
My pared-down dataset is about 70GB in size, with ~2500 recordings (samples), that are of various lengths and each recorded at a different rate.
If you lose information at 128 Hz, is it possible to resample the data to 512 Hz then?
Another solution could be to simply clip the data sequences to the same length, but that would throw away a lot of otherwise useful data.
You might also be able to train one CNN on the short sequences and another CNN on the long sequences and combine them into one classification using some sort of feature fusion, like in the picture. [feature-fusion diagram omitted]
If none of these approaches works, I'm afraid I cannot think of anything other than padding.
Cheers.
|
st99712
|
There are 4 ways to deal with variable size inputs:
1. one-by-one training (batch size = 1).
2. resizing (or up/down-sampling) the samples to the same size in a batch.
3. cropping the samples to the same size in a batch.
4. padding the samples to the max size in a batch, and masking the redundant parts after every y=f(x).
I would really like to avoid padding my data (in both “dimensions”), as just the back of the envelope calculation suggests the padded size will be at least 2x the raw data set size. I just think the overhead of transferring all of that data to the GPU is going to be a massive waste of time.
Why is there a "2x"?
|
st99713
|
Thank you, Ditlev and wizardk, for responding.
Ditlev_Jorgensen:
Can you pass me a link to something confirming your claim, as Im very intrigued by the idea myself, and would love to know more.
Yup. I found these very interesting. For those that might need a little more reason to click: they found that LSTMs actually don't have that long a memory, and that a set of dilated CNNs can develop an exceptionally long memory over the sequence.
arxiv.org
arxiv.org
towardsdatascience.com
Ditlev_Jorgensen:
So it might be a problem using two sample rates, as the CNN might have trouble when the grid structure differs from sequence to sequence.
I had planned to solve the problem of different sample rates meeting the fixed-size dense layer by using an Adaptive Pool at the top of the "feature extracting" CNN (this also responds to your suggestion of training two CNNs). Also, I should add that this is continuous data.
Ditlev_Jorgensen:
If you lose information at 128 Hz, is it possible to resample the data to 512 Hz then?
I could, but then I blow up my data set even more. Since about half of the recordings are at 128 Hz, even without padding the sequences we're talking about a 4x increase for just those recordings.
Ditlev_Jorgensen:
Another solution could be to simply clip the data sequences to the same length, but that would throw away a lot of otherwise useful data.
I’m almost 100% positive that throwing away 3/4 of the data would not work. Even though I’m extracting some features that I’m not so confident the network can learn, I still expect that the bulk of the learning will occur on the raw data.
wizardk:
There are 4 ways to deal with variable size inputs:
Your list mixes possible solutions for the two different dimensions, but the solutions for one are not applicable to the other. The recording sequences themselves are of variable length, and the 1D main feature vector is also variable (the same size within a given recording, but varying by recording).
wizardk:
Why is there a "2x"?
The difference between the shortest and longest sequence (number of blocks of data) is over 3x, combined with the 4x difference between the lowest and highest sampling rate (the samples for each block of X seconds).
I still wonder if there is another solution to this. Although I would prefer not to, if necessary I can resample the blocks to be the same size and pad the sequences of blocks, which would actually be smaller memory-wise. However, I'm still not quite sure how to do the padding for the upper CNNs so that computation isn't wasted on empty blocks, as it appears the pad/pack functions in pytorch are just for RNNs.
Aside: I was thinking about it more after I posted last night, and I realized that, since pytorch only needs a reshape to emulate a time-distributed network, that might be of use (I had used keras' TimeDistributed wrapper around the feature-extraction part that runs on every block).
|
st99714
|
Hi apytorch,
First, you should ensure the C (channels) of the samples is the same. You can normalize them by resampling in preprocessing, or by pooling/convolution with stride 2 in an additional layer specifically for the high sample rate.
For padding, you can sort the samples by length and do a local random. That will keep the total padding size smaller, and maybe you will get faster convergence as well.
|
st99715
|
wizardk:
For padding, you can sort the samples by length and do a local random. That will keep the total padding size smaller, and maybe you will get faster convergence as well.
Yeah, I had read a few ideas about doing that, although I would assume some randomness has to be added to prevent the same batch from being presented every epoch.
I'm curious if you have any suggestions about how to do the padding when going through a CNN, instead of an RNN, so that the padded samples aren't computed.
|
st99716
|
Hi apytorch,
You can shuffle the samples within a window of 2x the batch size over the sorted samples; that's what I mean by "local random".
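A sketch of that "local random" (lengths is assumed to be a list of per-sample lengths; nothing here is from the original post):
import random

def locally_shuffled_order(lengths, batch_size):
    # sort indices by sample length, then shuffle inside 2x-batch-size windows
    order = sorted(range(len(lengths)), key=lambda i: lengths[i])
    window = 2 * batch_size
    for start in range(0, len(order), window):
        chunk = order[start:start + window]
        random.shuffle(chunk)
        order[start:start + window] = chunk
    return order
Batches drawn in this order contain samples of similar length, so per-batch padding stays small, while the window shuffle still varies the batch composition across epochs.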
I discussed this padding-and-masking issue a few days ago. You can find it here.
Padding and masking in convolution autograd
Hi, I’m using pytorch to do some encoding things on 1-D inputs by 1-D convolution.
I have 2 questions since the length of inputs is inconsistent in a batch.
1.Currently, I maintain some mask tensors for every layer to mask their outputs by myself. And in my code, I have to compute the change of each mask tensor in the forward method since the size of input and output may be different.
Q:Is this a regular way?
2.Now, I just mask the outputs for each layer without caring about gradients.
Q:Is…
|
st99717
|
I'm trying to build the onnx-caffe2 module from source. I can't seem to find the official repo for the latest code.
https://github.com/onnx/onnx-caffe2 points to the Caffe2 project. Caffe2 has been moved into Pytorch. I've built Pytorch from source and it builds Caffe2, but I don't see onnx-caffe2.
I've grepped the entire pytorch repo, and the only reference to onnx-caffe2 is in the onnx submodule.
|
st99718
|
Hi,
I am following the tutorial on loading a traced model and running a forward pass in C++, using the preview version (https://pytorch.org/tutorials/advanced/cpp_export.html).
Running this on CPU looks promising, but when I try running on the GPU I get this error from the module->forward() pass:
"Input type (Variable[CUDAFloatType]) and weight type (Variable[CPUFloatType]) should be the same…"
It seems obvious that the weights need to be of type "CUDA". In python I would have called .cuda() on the model, but I cannot figure out how this is done in the C++ API?
I am far from an expert here…
|
st99719
|
I am working on a research project and we are trying to optimize the network communication in a distributed PyTorch deployment. I modified the gloo library and I am able to compile and run programs that use this library (similar to what you can find in gloo/gloo/examples).
So I replaced the third_party/gloo folder in PyTorch with my version of gloo and I am trying to compile it.
The tricky part is that my version of gloo requires some additional libraries and special linker options, and I cannot find the right place where to specify these options.
I tried to add these options to the Caffe2_DEPENDENCY_LIBS variable in pytorch/caffe2/CMakeLists.txt with no success.
My compilation stops with the linker error:
/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to <my code>
/pytorch/build/lib/libcaffe2.so: undefined reference to <my code>
Can someone point me in the right direction?
Thanks!
|
st99720
|
How to solve this problem?
RuntimeError: ONNX export failed: Couldn’t export operator aten::upsample_trilinear3d
|
st99721
|
I ran the resnet18 imagenet example on an AWS p3.2xlarge with the official Amazon deep learning AMI, and I'm finding that the data loader is a bottleneck. This is evidenced both by a lack of constant GPU utilization and by the fact that when the training loop runs on constant data (no iteration through the dataloader), the time to train an epoch is cut by nearly 50% (with full GPU utilization). Has anyone else experienced this? If you want to replicate the issue exactly, replace the data loader loop with "for i in range(len(dataloader))" and set data to some constant batch so that no data actually has to be read from disk.
|
st99722
|
So the issue is that the p3 has insufficient CPU compute to match the GPU; you need this compute for loading images from disk and for transforming each image. Some solutions: change the loading format to something that loads faster, like hdf5 (I haven't tested this, but I know that loading is a huge bottleneck). You might also notice that the problem goes away if you use a larger model; basically the GPU becomes the bottleneck, as desired. What we're really after, though, is cost-effective training and not being wasteful, in which case the g3 might be a better option since it has a higher ratio of CPU compute to GPU. You can also experiment with using fewer transforms, or try implementing transforms that would normally run on the CPU inside the model itself so that they're done by the GPU. There also might be opportunities to improve the efficiency of the transforms. Lastly, setting num_workers to 8 helped me.
|
st99723
|
It seems I cannot printf in a .cu file, e.g. inside __global__ or __device__ functions.
But I can printf in a .cu file of plain C code.
Is there any advice about this?
|
st99724
|
I have used printf in .cu files in the same way as in any other C or C++ file. Did you try it?
For example, here.
|
st99725
|
Yes, I tried it in a C++ CUDA file, but it doesn't work. Do you use any other compile options?
|
st99726
|
You can refer to the Makefile that I have there, I guess.
github.com
InnovArul/personreid_normxcorr/blob/master/src/modules/Makefile
TORCH_INSTALL_DIR = /home/arul/torch/install
all: libNormCrossMapCorrelation.so libCrossInputNeighborhood.so
libNormCrossMapCorrelation.so: NormCrossMapCorrelation.cu
nvcc --std=c++11 --shared --compiler-options '-fPIC -Wall -O2' -I $(TORCH_INSTALL_DIR)/include -I $(TORCH_INSTALL_DIR)/include/TH -o libNormCrossMapCorrelation.so NormCrossMapCorrelation.cu
libCrossInputNeighborhood.so: CrossInputNeighborhood.cu
nvcc --std=c++11 --shared --compiler-options '-fPIC -Wall -O2' -I $(TORCH_INSTALL_DIR)/include -I $(TORCH_INSTALL_DIR)/include/TH -o libCrossInputNeighborhood.so CrossInputNeighborhood.cu
|
st99727
|
Hi
I'm overfitting a CNN on 3 data samples, one of each class. During training I run for 15 epochs.
The loss is decreasing beautifully.
However, when I want to find the train accuracy, I run the same three images through with model.eval(), and the model just predicts a constant class and thus gets 33.3 percent accuracy.
If I use model.train() it gets 100 percent accuracy. This is what I expect, as the loss is so low and the model has overfitted.
And remember, it's the same three samples I used when training, which it should be able to fit perfectly.
Is there something I have misunderstood? I use batchnorm layers and dropout, so I thought I had to use model.eval() in order to use the neurons that have been dropped etc.
Thanks
|
st99728
|
Your BatchNorm layers might have a wrong estimate of the running mean and std if you just use 3 samples.
Try to remove them or use something like InstanceNorm for such a small sample.
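The swap itself is mechanical, e.g. (a sketch, assuming 2d layers with 64 channels):
import torch.nn as nn

# before: norm = nn.BatchNorm2d(64)
norm = nn.InstanceNorm2d(64)   # per-sample statistics, no dependence on batch size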
|
st99729
|
I tried a slightly larger data set (50 samples); the loss still decreases and I still overfit (after around 25 epochs). But this is still only with model.train() and not with model.eval(). With model.eval() the accuracies never increase, even with a dropping loss.
[plots of the loss and the accuracies omitted]
The test accuracy stays the same through the entire training.
I will try using InstanceNorm() and see if that improves the issue.
Thanks!
|
st99730
|
I think this is interesting. By whatever we know about the training/testing paradigm, I would hope that whatever data we train on, the model already knows/can memorize. Given that the model is trained to the level of overfitting those data, I would not expect the model to perform badly on the same data.
|
st99731
|
For some reason, clamp_ and clamp don’t work on GPU.
torch version: 0.4.1
import torch
tensor = torch.tensor([[205.6188, 99.7265, 300.2824, 320.4083, 0.9999, 1.0000, 0.0000],
                       [165.0100, 305.5398, 266.4070, 330.8875, 0.9962, 0.9998, 36.0000]]).to('cuda')
img_dim = torch.tensor([3.8942, 2.5938, 3.8942, 2.5938])
Why does this fail on GPU?
tensor[:, :4] = tensor[:, :4].clamp_(0, 416) * img_dim
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: Expected object of type torch.cuda.FloatTensor but found type torch.FloatTensor for argument #2 'other'
But this works just fine:
tensor = tensor.to(‘cpu’)
tensor[:, :4] = tensor[:, :4].clamp_(0, 416) * img_dim
|
st99732
|
Solved by albanD in post #2
Because img_dim is not a cuda tensor, it’s a cpu tensor.
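So the fix is to put img_dim on the same device as the clamped tensor, e.g. (a sketch):
img_dim = torch.tensor([3.8942, 2.5938, 3.8942, 2.5938]).to('cuda')
# or, more generally, match the other tensor's device:
img_dim = img_dim.to(tensor.device)
tensor[:, :4] = tensor[:, :4].clamp_(0, 416) * img_dim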
|
st99733
|
When I try to compile a C extension using python build.py, it always raises an error:
CompileError: command 'gcc' failed with exit status 1 #235.
I searched the pytorch forums but got no answers.
I found this link https://github.com/jwyang/faster-rcnn.pytorch/issues/235 discussing the same problem, and someone gave a solution: switch pytorch to 0.4.0.
I want to know if there is a better solution, because it's annoying to install another version.
|
st99734
|
Solved by albanD in post #4
It looks like it did not compile nnd_cuda.cu. Is it specified properly in your setup.py script? Is your user allowed to write files on disk?
|
st99735
|
Hi,
Could you post the compilation log?
This error means that something failed during compilation; could you report the log here please?
|
st99736
|
/home/zeal/work/others-projects/3D-CODED/nndistance
generating /tmp/tmpALHKrQ/_my_lib.c
setting the current directory to '/tmp/tmpALHKrQ'
running build_ext
building '_my_lib' extension
creating home
creating home/zeal
creating home/zeal/work
creating home/zeal/work/others-projects
creating home/zeal/work/others-projects/3D-CODED
creating home/zeal/work/others-projects/3D-CODED/nndistance
creating home/zeal/work/others-projects/3D-CODED/nndistance/src
gcc -pthread -B /home/zeal/software/anaconda3/envs/py2.7/compiler_compat -Wl,--sysroot=/ -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/zeal/software/anaconda3/envs/py2.7/lib/python2.7/site-packages/torch/utils/ffi/../../lib/include -I/home/zeal/software/anaconda3/envs/py2.7/lib/python2.7/site-packages/torch/utils/ffi/../../lib/include/TH -I/home/zeal/software/anaconda3/envs/py2.7/include/python2.7 -c _my_lib.c -o ./_my_lib.o
gcc -pthread -B /home/zeal/software/anaconda3/envs/py2.7/compiler_compat -Wl,--sysroot=/ -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/zeal/software/anaconda3/envs/py2.7/lib/python2.7/site-packages/torch/utils/ffi/../../lib/include -I/home/zeal/software/anaconda3/envs/py2.7/lib/python2.7/site-packages/torch/utils/ffi/../../lib/include/TH -I/home/zeal/software/anaconda3/envs/py2.7/include/python2.7 -c /home/zeal/work/others-projects/3D-CODED/nndistance/src/my_lib.c -o ./home/zeal/work/others-projects/3D-CODED/nndistance/src/my_lib.o
gcc -pthread -shared -B /home/zeal/software/anaconda3/envs/py2.7/compiler_compat -L/home/zeal/software/anaconda3/envs/py2.7/lib -Wl,-rpath=/home/zeal/software/anaconda3/envs/py2.7/lib -Wl,--no-as-needed -Wl,--sysroot=/ ./_my_lib.o ./home/zeal/work/others-projects/3D-CODED/nndistance/src/my_lib.o /home/zeal/work/others-projects/3D-CODED/nndistance/src/nnd_cuda.cu.o -L/home/zeal/software/anaconda3/envs/py2.7/lib -lpython2.7 -o ./_my_lib.so
gcc: error: /home/zeal/work/others-projects/3D-CODED/nndistance/src/nnd_cuda.cu.o: No such file or directory
Traceback (most recent call last):
File "build.py", line 35, in <module>
ffi.build()
File "/home/zeal/software/anaconda3/envs/py2.7/lib/python2.7/site-packages/torch/utils/ffi/__init__.py", line 185, in build
_build_extension(ffi, cffi_wrapper_name, target_dir, verbose)
File "/home/zeal/software/anaconda3/envs/py2.7/lib/python2.7/site-packages/torch/utils/ffi/__init__.py", line 108, in _build_extension
ffi.compile(tmpdir=tmpdir, verbose=verbose, target=libname)
File "/home/zeal/software/anaconda3/envs/py2.7/lib/python2.7/site-packages/cffi/api.py", line 697, in compile
compiler_verbose=verbose, debug=debug, **kwds)
File "/home/zeal/software/anaconda3/envs/py2.7/lib/python2.7/site-packages/cffi/recompiler.py", line 1520, in recompile
compiler_verbose, debug)
File "/home/zeal/software/anaconda3/envs/py2.7/lib/python2.7/site-packages/cffi/ffiplatform.py", line 22, in compile
outputfilename = _build(tmpdir, ext, compiler_verbose, debug)
File "/home/zeal/software/anaconda3/envs/py2.7/lib/python2.7/site-packages/cffi/ffiplatform.py", line 58, in _build
raise VerificationError('%s: %s' % (e.__class__.__name__, e))
cffi.error.VerificationError: LinkError: command 'gcc' failed with exit status 1
|
st99737
|
It looks like it did not compile nnd_cuda.cu. Is it specified properly in your setup.py script? Is your user allowed to write files on disk?
|
st99738
|
Thanks for your help!
I think I know where something went wrong. I am sorry that I hadn't read the compilation log carefully.
|
st99739
|
error: no matching function for call to 'caffe2::Tensor::Tensor(std::vector&, std::vector&, NULL)'
caffe2::TensorCPU t(img_dims, data, NULL);
How do I set the arguments?
|
st99740
|
RuntimeError Traceback (most recent call last)
~/Ternary-Weights-Network/main.py in <module>()
138
139 if __name__ == '__main__':
--> 140 main()
~/Ternary-Weights-Network/main.py in main()
81 for epoch_index in range(1,args.epochs+1):
82 adjust_learning_rate(learning_rate,optimizer,epoch_index,args.lr_epochs)
—> 83 train(args,epoch_index,train_loader,model,optimizer,criterion)
84 acc = test(args,model,test_loader,criterion)
85 if acc > best_acc:
~/Ternary-Weights-Network/main.py in train(args, epoch_index, train_loader, model, optimizer, criterion)
96 optimizer.zero_grad()
97
—> 98 output = model(data)
99 loss = criterion(output,target)
100 loss.backward()
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
475 result = self._slow_forward(*input, **kwargs)
476 else:
–> 477 result = self.forward(*input, **kwargs)
478 for hook in self._forward_hooks.values():
479 hook_result = hook(self, input, result)
~/Ternary-Weights-Network/model.py in forward(self, x)
75 self.fc2 = TernaryLinear(512,10)
76 def forward(self,x):
—> 77 x = self.conv1(x)
78 x = F.relu(F.max_pool2d(self.bn_conv1(x),2))
79 x = self.conv2(x)
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
475 result = self._slow_forward(*input, **kwargs)
476 else:
–> 477 result = self.forward(*input, **kwargs)
478 for hook in self._forward_hooks.values():
479 hook_result = hook(self, input, result)
~/Ternary-Weights-Network/model.py in forward(self, input)
61 super(TernaryConv2d,self).__init__(*args,**kwargs)
62 def forward(self,input):
—> 63 self.weight.data = Ternarize(self.weight.data)
64 out = F.conv2d(input, self.weight, self.bias, self.stride,self.padding, self.dilation, self.groups)
65 return out
~/Ternary-Weights-Network/model.py in Ternarize(tensor)
14 output = torch.zeros(tensor.size())
15 delta = Delta(tensor)
—> 16 alpha = Alpha(tensor,delta)
17 for i in range(tensor.size()[0]):
18 for w in tensor[i].view(1,-1):
~/Ternary-Weights-Network/model.py in Alpha(tensor, delta)
34 count = truth_value.sum()
35 abssum = torch.matmul(absvalue,truth_value.type(torch.FloatTensor).view(-1,1))
—> 36 Alpha.append(abssum/count)
37 alpha = Alpha[0]
38 for i in range(len(Alpha) - 1):
RuntimeError: Expected object of type torch.FloatTensor but found type torch.LongTensor for argument #2 ‘other’
Help needed, thanks in advance
|
st99741
|
As the error message says, on the line Alpha.append(abssum/count) you perform a tensor operation between a FloatTensor and a LongTensor, while both should be of the same type. I would guess abssum is a FloatTensor and count is a LongTensor? You need to convert the LongTensor into a FloatTensor with .float() or .type_as(the_other).
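A sketch of the fix, reusing the names from the traceback above:
count = truth_value.sum().float()   # cast the LongTensor count to float
abssum = torch.matmul(absvalue, truth_value.float().view(-1, 1))
Alpha.append(abssum / count)        # both operands are now FloatTensors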
|
st99742
|
Thanks for your reply @albanD, I solved that problem. Now I have another error, which I'll post here… I believe you can help.
TypeError: ‘list’ object is not callable
Traceback (most recent call last):
File "/home/akb/pytorch_learning/main.py", line 141, in <module>
main()
File "/home/akb/pytorch_learning/main.py", line 133, in main
train(args,epoch_index,train_loader,model,optimizer,criterion)
File "/home/akb/pytorch_learning/main.py", line 13, in train
for batch_idx,(data,target) in enumerate(train_loader):
File "/home/akb/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 314, in __next__
batch = self.collate_fn([self.dataset[i] for i in indices])
File "/home/akb/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 314, in <listcomp>
batch = self.collate_fn([self.dataset[i] for i in indices])
File "/home/akb/anaconda3/lib/python3.6/site-packages/torchvision-0.2.1-py3.6.egg/torchvision/datasets/mnist.py", line 77, in __getitem__
File "/home/akb/anaconda3/lib/python3.6/site-packages/torchvision-0.2.1-py3.6.egg/torchvision/transforms/transforms.py", line 49, in __call__
TypeError: 'list' object is not callable
Thanks in advance
|
st99743
|
Hi,
Not sure what's happening here just from the stack trace, but it looks like you specified the transforms the wrong way for your dataset. Make sure to follow the examples in the docs.
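For example, a common mistake is to pass a plain list of transforms instead of wrapping them in transforms.Compose. A minimal sketch of the usual pattern (the exact transforms here are placeholders):
import torchvision.transforms as transforms
from torchvision import datasets

# Wrong: a plain Python list is not callable, which raises the error above.
# transform = [transforms.Resize(32), transforms.ToTensor()]

# Right: wrap the transforms in Compose so the dataset gets a single callable.
transform = transforms.Compose([
    transforms.Resize(32),
    transforms.ToTensor(),
])
train_set = datasets.MNIST('./data', train=True, download=True, transform=transform)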
|
st99744
|
Yes, you are right buddy, earlier I had added an extra Resize line… thanks for your help, appreciated.
|
st99745
|
Hi, I just started with PyTorch, and basic arithmetic operations are the best place to begin, I believe.
I performed element-wise multiplication using Torch with GPU support and using NumPy with the functions below, and found that NumPy runs faster than Torch, which shouldn't be the case, I suspect.
I want to know how to perform general arithmetic operations with Torch using the GPU.
Note: I ran these code snippets in a Google Colab notebook.
Define the default tensor type to enable the global GPU flag:
torch.set_default_tensor_type(torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor)
Initialize the Torch variables:
x = torch.Tensor(200, 100)  # is a FloatTensor
y = torch.Tensor(200, 100)

def mul(d, f):
    g = torch.mul(d, f).cuda()  # I explicitly called .cuda(), which is not necessary
    return g
When I call the function above with %timeit mul(x, y), it returns:
The slowest run took 10.22 times longer than the fastest. This could mean that
an intermediate result is being cached.
10000 loops, best of 3: 50.1 µs per loop
Now a trial with NumPy, using the same values from the Torch variables:
x_ = x.data.cpu().numpy()
y_ = y.data.cpu().numpy()

def mul_(d, f):
    g = d * f
    return g

%timeit mul_(x_, y_)
Returns:
The slowest run took 12.10 times longer than the fastest. This could mean that
an intermediate result is being cached.
100000 loops, best of 3: 7.73 µs per loop
I notice a huge speed difference.
I already posted the same question on Stack Overflow: https://stackoverflow.com/questions/52526082/pytorch-cuda-vs-numpy-for-arithematic-operations-fastest
|
st99746
|
Solved by albanD in post #3
|
st99747
|
I just noticed that it is a known issue: https://github.com/pytorch/pytorch/issues/1630.
I'll move forward assuming that the CPU is faster for small operations.
|
st99748
|
Hi,
This is expected for small operations.
The cost of launching the work on the GPU is actually more than what is spent actually doing the computation.
Try increasing your tensor size and you will see that the GPU version doesn't get much slower, while the CPU one gets slower very quickly.
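A rough sketch of how you might verify this, assuming a CUDA device is available and the default tensor type has not been changed. Note that CUDA kernels launch asynchronously, so torch.cuda.synchronize() is needed for fair timing:
import time
import torch

for n in (100, 1000, 4000):
    x_cpu = torch.rand(n, n)
    y_cpu = torch.rand(n, n)
    x_gpu = x_cpu.cuda()
    y_gpu = y_cpu.cuda()

    start = time.time()
    for _ in range(100):
        _ = x_cpu * y_cpu
    cpu_time = time.time() - start

    _ = x_gpu * y_gpu          # warm-up launch
    torch.cuda.synchronize()   # wait for pending work before timing
    start = time.time()
    for _ in range(100):
        _ = x_gpu * y_gpu
    torch.cuda.synchronize()   # wait for the kernels to actually finish
    gpu_time = time.time() - start

    print('n=%d  cpu=%.4fs  gpu=%.4fs' % (n, cpu_time, gpu_time))
As n grows, the GPU timing should stay nearly flat while the CPU timing grows quickly.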
|
st99749
|
I'm not sure I fully understand how to use IntList as an input for a function. First off, in the extension-cpp example, only Tensors are used as inputs. It would be nice to see examples with IntList, Scalar, and int64_t as inputs, so people who aren't so familiar with C++ could see those in use and apply them in their own extensions.
I was looking at the LossCTC file as an example of a cpp extension that uses IntList as an input. In this case, they've got IntList as the input and it accepts regular Python lists as expected. But when I tried to write my own extension, I ran into the following error if I use IntList as an input:
Emitting ninja build file /tmp/torch_extensions/test_cpp_ext/build.ninja...
Building extension module test_cpp_ext...
Loading extension module test_cpp_ext...
Traceback (most recent call last):
File "jit_intlist_input.py", line 9, in <module>
print(test_cpp_ext.f(x))
TypeError: f(): incompatible function arguments. The following argument types are supported:
1. (arg0: at::ArrayRef<long>) -> at::Tensor
Invoked with: [1, 2]
However, if I use std::vector<int64_t>, then I get the desired output:
Emitting ninja build file /tmp/torch_extensions/test_cpp_ext/build.ninja...
Building extension module test_cpp_ext...
Loading extension module test_cpp_ext...
tensor([1., 2.])
Should I be using std::vector<int64_t> for function inputs if I want the input to be a Python list, or should IntList work and I'm doing something wrong? I feel like the second is correct, but I can't figure out how to get IntList to work.
Below is some code to create a minimal example
**test.cpp**
#include <torch/torch.h>
namespace at {
Tensor f(IntList a) {
return tensor(a, torch::CPU(kFloat));
}
Tensor g(std::vector<int64_t> a) {
return tensor(a, torch::CPU(kFloat));
}
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
m.def("f", &f, "function with IntList input");
m.def("g", &g, "function without IntList input");
}
}
**jit_intlist_input.py**
import torch
from torch.utils.cpp_extension import load
test_cpp_ext = load(
'test_cpp_ext', ['test.cpp'], verbose=True)
x = [1, 2]
y = torch.LongTensor(x) # using y as an input doesn't work either
print(test_cpp_ext.f(x))
**jit_vector_input.py**
import torch
from torch.utils.cpp_extension import load
test_cpp_ext = load(
'test_cpp_ext', ['test.cpp'], verbose=True)
x = [1, 2]
print(test_cpp_ext.g(x))
|
st99750
|
Please feel free to open a GitHub issue requesting more comprehensive extension-cpp examples: https://github.com/pytorch/pytorch
|
st99751
|
@richard, do you have any suggestions related to my question about IntList?
I'll consider creating an issue about the examples, but before creating one I'd like to make sure that I'm not making a stupid mistake.
|
st99752
|
IntList is a defined type; you can look at two files:
https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/core/ScalarType.h
typedef ArrayRef<int64_t> IntList;
https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/core/ArrayRef.h
class ArrayRef final {
 public:
  using iterator = const T*;
  using const_iterator = const T*;
  using size_type = size_t;
  using reverse_iterator = std::reverse_iterator<iterator>;
  ...
Since it's an internal class of PyTorch, I guess it cannot be parsed by pybind11. So you can take a std::vector<int64_t> arg in the arguments, then convert it to IntList with IntList val(arg).
|
st99753
|
Thanks for the response @mingminzhen. Perhaps I’ll create an issue for this. It’s not high priority, but it’s just not clear in the docs that pybind11 won’t accept IntList as an input.
|
st99754
|
Hi David,
sorry, I forgot to report this here. After this came up again, we have an IntList conversion in master for a while:
github.com/pytorch/pytorch: "pybind conversion for IntList" by t-vi, Sep 8, 2018 (1 commit; 4 files changed, 160 additions, 98 deletions)
Best regards
Thomas
|
st99755
|
In the document of LSTM, it says:
dropout – If non-zero, introduces a dropout layer on the outputs of each RNN layer except the last layer
I have two questions:
Does it apply dropout at every time step of the LSTM?
If there is only one LSTM layer, will the dropout still be applied?
And it's very strange that even if I set dropout=1, it seems to have no effect on my network's performance. Like this:
self.lstm1 = nn.LSTM(input_dim, lstm_size1, dropout=1, batch_first=False)
This is only one layer, so I doubt the dropout really works.
|
st99756
|
Yes, dropout is applied at each time step; however, iirc, the mask for each time step is different.
If there is only one layer, dropout is not applied, as indicated in the docs (the only layer is the last layer).
|
st99757
|
Thank you!
And yes, your answer has been confirmed by my experiments.
I manually added a dropout layer after the LSTM, and it works well.
I had been stuck on this bug for a long time: with the same data and the same config, the PyTorch version was always overfitting. Finally! Thanks!
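For reference, a minimal sketch of that workaround (the sizes here are made up):
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=32, hidden_size=64, num_layers=1)  # the dropout arg would do nothing here
drop = nn.Dropout(p=0.5)

x = torch.rand(10, 4, 32)      # (seq_len, batch, input_size)
out, (h, c) = lstm(x)
out = drop(out)                # masks are sampled per element, so each time step gets its own mask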
|
st99758
|
Hi Young,
I was wondering, if you manually add a dropout layer after the LSTM, will the dropout mask be the same for all the time steps in a sequence? Or will it be different for each time step?
Thanks
|
st99759
|
LSTM dropout - Clarification of Last Layer
In the documentation for LSTM, for the dropout argument, it states:
introduces a dropout layer on the outputs of each RNN layer except the last layer
I just want to clarify what is meant by “everything except the last layer”.
Below I have an image of two possible options for the meaning.
Option 1: The final cell is the one that does not have dropout applied for the output.
Option 2: In a multi-layer LSTM, all the connections between layers have dropout applied, except the very top lay…
But in this post the figure shows it is not…
Which claim is true?
Thank you!
|
st99760
|
The RNN or LSTM network recurs for every step, which means at every step it acts like a normal fully-connected network. So the dropout is applied at each time step.
|
st99761
|
In newer versions of PyTorch, a 1-layer RNN does not take a meaningful dropout argument, so dropout is not applied at each step unless it is implemented manually (by rewriting the RNN module).
|
st99762
|
Yes, I guess your description is clearer.
I was trying to explain that dropout would be applied at every time step, meaning it acts on every h_t.
Dropout is placed between two stacked RNN units.
|
st99763
|
I have read the source code, and I tend to think that claim 2 is right: dropout works between layers, not at every timestep.
for i in range(num_layers):
    all_output = []
    for j, inner in enumerate(inners):
        l = i * num_directions + j
        hy, output = inner(input, hidden[l], weight[l], batch_sizes)
        next_hidden.append(hy)
        all_output.append(output)
    input = torch.cat(all_output, input.dim() - 1)
    if dropout != 0 and i < num_layers - 1:
        input = F.dropout(input, p=dropout, training=train, inplace=False)
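This matches the constructor behavior; a tiny sketch to illustrate:
import torch.nn as nn

# With a single layer, the dropout argument has no effect (recent versions warn about this):
one_layer = nn.LSTM(input_size=8, hidden_size=16, num_layers=1, dropout=0.5)

# With stacked layers, dropout is applied to the outputs of every layer except the last,
# exactly as the `i < num_layers - 1` check above shows:
two_layers = nn.LSTM(input_size=8, hidden_size=16, num_layers=2, dropout=0.5)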
|
st99764
|
I expand the data in __getitem__; how can I stack it in collate_fn?
import numpy as np
import torch
from torch.utils.data import Dataset

class BarDataset(Dataset):
    def __init__(self):
        self.data = list(np.random.rand(9))

    def __getitem__(self, idx):
        # some ops
        out = np.array([self.data[idx] * 2, self.data[idx] * 3])
        return out

    def __len__(self):
        return len(self.data)

train_dataset = BarDataset()
train_loader = torch.utils.data.DataLoader(
    train_dataset,
    batch_size=3,
    pin_memory=True
)

for item in train_loader:
    print(item)
# len(train_loader) == 3, but I want len(train_loader) == 3 * 2
|
st99765
|
Solved by ptrblck in post #4
Ah OK.
I guess you want to use the same data in your Dataset?
If so, you could just fake a new length and use a modulo operation on the index to avoid an out-of-range error:
class BarDataset(Dataset):
    def __init__(self):
        self.data = list(np.random.rand(9))

    def __getitem_…
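Since the quoted preview above is cut off, here is a sketch of what the full pattern presumably looks like (reconstructed from the description, not ptrblck's exact code):
import numpy as np
from torch.utils.data import Dataset

class BarDataset(Dataset):
    def __init__(self):
        self.data = list(np.random.rand(9))

    def __getitem__(self, idx):
        idx = idx % len(self.data)   # wrap around so the doubled length stays in range
        return np.array([self.data[idx] * 2, self.data[idx] * 3])

    def __len__(self):
        return 2 * len(self.data)    # fake a doubled length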
|
st99766
|
If you just want to change the shape of item, you could use item = item.view(6, 1) on it.
|
st99767
|
I want to expand the dataset, so that the new len(train_loader) is 2 * the old len(train_loader).
I am sorry that I didn't say it clearly.
|