st99968
In the following I am updating the weights in the resnet101 without a problem:
```python
base_ = ResNet(Bottleneck, [3, 4, 23, 3])
resnet_weights = torch.load('resnet101_caffe.pth')
base_.load_state_dict(resnet_weights, strict=False)
```
If I do print(resnet_weights.keys()) I get odict_keys(['conv1.weight', 'bn1.weight', 'bn1.bias', 'bn1.running_mean', 'bn1.running_var', 'layer1.0.conv1.weight', 'layer1.0.bn1.weight', 'layer1.0.bn1.bias', 'layer1.0.bn1.running_mean', 'layer1.0.bn1.running_var', 'layer1.0.conv2.weight', 'layer1.0.bn2.weight', 'layer1.0.bn2.bias', 'layer1.0.bn2.running_mean', 'layer1.0.bn2.running_var', 'layer1.0.conv3.weight', 'layer1.0.bn3.weight', ... My question is: if I use model = models.resnet101(pretrained=True), which is the one that ships with pytorch, how can I use the weights of that model to update my model with base_.load_state_dict(resnet_weights, strict=False) like I did above? Is it even possible?
st99969
Hi, If the keys are the same, you can do the following:
```python
model = models.resnet101(pretrained=True)
base_.load_state_dict(model.state_dict(), strict=False)
```
st99970
I see, thank you for your answer. I have a follow-up question: let's say my model is model = models.resnet101(pretrained=True), and then I finetune it and want to save it and load it again. Do you suggest saving and loading it like torch.save(the_model.state_dict(), PATH) and the_model.load_state_dict(torch.load(PATH)), or do you have another suggestion? In the linked post it says to do the_model = TheModelClass(*args, **kwargs) first, but I don't get why, or what it means by TheModelClass.
st99971
Yes, what you want to do here is fine. What it means is that creation of the model and loading of the weights are two different things: you don't save an nn.Module with all its weights. You save the weights on one side with .state_dict(), and the module's info on the other side by saving the arguments used to create it.
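For reference, a minimal sketch of that split, reusing TheModelClass, PATH, and the_model from the question above as placeholders:
```python
# save: only the weights go to disk; the constructor arguments are kept separately
the_model = TheModelClass(*args, **kwargs)
torch.save(the_model.state_dict(), PATH)

# load: rebuild the module from its arguments first, then load the weights into it
the_model = TheModelClass(*args, **kwargs)
the_model.load_state_dict(torch.load(PATH))
```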
st99972
I see! Thanks albanD: "You don't save an nn.Module with all its weights." Well, can I save and load an nn.Module with all its weights? If so, how?
st99973
Sorry, it was not clear: "You don't" means "You should not". Saving python class instances like models breaks so easily on so many levels that you have a very high chance of ending up with an object that you will never be able to load again. You can do it, but you shouldn't, so I won't tell you how to do it.
st99974
Creating a LongTensor with size up to 74240 yields a tensor filled with random long int noise; above 74240 it yields a tensor filled with zeros. Any idea why this is?
```
Python 3.6.5 | packaged by conda-forge | (default, Apr 6 2018, 13:44:09)
Type 'copyright', 'credits' or 'license' for more information
IPython 6.4.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import torch

In [2]: max(torch.LongTensor(74240))
tensor(9222708909033422584)

In [3]: max(torch.LongTensor(74241))
tensor(0)
```
st99975
Hi, Creating a Tensor will not initialize the memory. It will contain whatever the memory contained when it was returned by the allocator. In your case I guess that you have an allocator for small objects that doesn't initialize memory, and one for large objects that does zero out all the values.
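A minimal sketch of getting defined values instead of relying on whatever the allocator returns (nothing here depends on the small/large allocator behavior above):
```python
import torch

t = torch.LongTensor(74241)      # contents are whatever happened to be in memory
t.zero_()                        # explicitly fill with zeros if you need defined values

z = torch.zeros(74241).long()    # or build the tensor zero-initialized from the start
```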
st99976
Hello, I have the latest pytorch, cuda 9 and cudnn 7 installed using conda on linux (lubuntu 16.04, latest nvidia driver). I am running this code on a computer with 2x GTX 1080 Ti: GitHub tensorboy/pytorch_Realtime_Multi-Person_Pose_Estimation (a pytorch version of Realtime_Multi-Person_Pose_Estimation; the original code is at https://github.com/ZheC/Realtime_Multi-Person_Pose_Estimation). Previously, I was running the imagenet example from the pytorch examples. For both of them, the GPU RAM needed to run the training seems expensive compared to the same kind of training in caffe or tensorflow, which seem better at preventing greedy RAM usage. For instance, on the pose estimation, there is a burst of RAM either at the end of an epoch or at the beginning of the evaluation step, leading to a cuda error (no more memory). With 11Gb of RAM on each GPU, I can run the training only with batch size = 32, and within an epoch the script uses only half of the available RAM. For the imagenet example, same problem: I had to fix the batch_size to 80 to be able to train a VGG19_bn. Another problem: when restarting from a checkpoint, the script dies at the end of each epoch with a cuda error (no more memory), even with the same batch size. Can anyone help me understand this burst of RAM? Thank you.
st99977
If it can help, here is the package version from my conda environment: Name Version Build Channel appdirs 1.4.3 py37h28b3542_0 asn1crypto 0.24.0 py37_0 attrs 18.2.0 py37h28b3542_0 automat 0.7.0 py37_0 backcall 0.1.0 py37_0 blas 1.0 mkl bleach 2.1.4 py37_0 bzip2 1.0.6 h14c3975_5 ca-certificates 2018.03.07 0 cairo 1.14.12 h8948797_3 certifi 2018.8.24 py37_1 cffi 1.11.5 py37h9745a5d_0 chardet 3.0.4 py37_1 conda 4.5.11 py37_0 conda-env 2.6.0 1 constantly 15.1.0 py37h28b3542_0 cryptography 2.3.1 py37hc365091_0 cudatoolkit 9.0 h13b8566_0 cudnn 7.1.2 cuda9.0_0 cycler 0.10.0 py37_0 cython 0.28.5 py37hf484d3e_0 dbus 1.13.2 h714fa37_1 decorator 4.3.0 py37_0 easydict 1.8 entrypoints 0.2.3 py37_2 expat 2.2.5 he0dffb1_0 ffmpeg 4.0 hcdf2ecd_0 fontconfig 2.13.0 h9420a91_0 freeglut 3.0.0 hf484d3e_5 freetype 2.9.1 h8a8886c_0 future 0.16.0 glib 2.56.1 h000015b_0 gmp 6.1.2 h6c8ec71_1 graphite2 1.3.12 h23475e2_2 gst-plugins-base 1.14.0 hbbd80ab_1 gstreamer 1.14.0 hb453b48_1 harfbuzz 1.8.8 hffaf4a1_0 hdf5 1.10.2 hba1933b_1 html5lib 1.0.1 py37_0 hyperlink 18.0.0 py37_0 icu 58.2 h9c2bf20_1 idna 2.7 py37_0 incremental 17.5.0 py37_0 intel-openmp 2018.0.3 0 ipykernel 4.9.0 py37_0 ipython 6.5.0 py37_0 ipython_genutils 0.2.0 py37_0 ipywidgets 7.4.1 py37_0 jasper 2.0.14 h07fcdf6_1 jedi 0.12.1 py37_0 jinja2 2.10 py37_0 jpeg 9b h024ee3a_2 jsonschema 2.6.0 py37_0 jupyter 1.0.0 py37_5 jupyter_client 5.2.3 py37_0 jupyter_console 5.2.0 py37_1 jupyter_core 4.4.0 py37_0 kiwisolver 1.0.1 py37hf484d3e_0 libedit 3.1.20170329 h6b74fdf_2 libffi 3.2.1 hd88cf55_4 libgcc-ng 8.2.0 hdf63c60_1 libgfortran-ng 7.2.0 hdf63c60_3 libglu 9.0.0 hf484d3e_1 libopencv 3.4.2 hb342d67_1 libopus 1.2.1 hb9ed12e_0 libpng 1.6.34 hb9fc6fc_0 libsodium 1.0.16 h1bed415_0 libstdcxx-ng 8.2.0 hdf63c60_1 libtiff 4.0.9 he85c1e1_1 libuuid 1.0.3 h1bed415_2 libvpx 1.7.0 h439df22_0 libxcb 1.13 h1bed415_1 libxml2 2.9.8 h26e45fe_1 markupsafe 1.0 py37h14c3975_1 matplotlib 2.2.3 py37hb69df0a_0 mistune 0.8.3 py37h14c3975_1 mkl 2018.0.3 1 mkl_fft 1.0.4 py37h4414c95_1 mkl_random 1.0.1 py37h4414c95_1 nbconvert 5.3.1 py37_0 nbformat 4.4.0 py37_0 nccl 1.3.5 cuda9.0_0 ncurses 6.1 hf484d3e_0 ninja 1.8.2 py37h6bb024c_1 notebook 5.6.0 py37_0 numpy 1.15.1 py37h1d66e8a_0 numpy-base 1.15.1 py37h81de0dd_0 olefile 0.45.1 py37_0 opencv 3.4.2 py37h6fd60c2_1 openssl 1.0.2p h14c3975_0 pandoc 2.2.3.2 0 pandocfilters 1.4.2 py37_1 parso 0.3.1 py37_0 pcre 8.42 h439df22_0 pexpect 4.6.0 py37_0 pickleshare 0.7.4 py37_0 pillow 5.2.0 py37heded4f4_0 pip 10.0.1 py37_0 pixman 0.34.0 hceecf20_3 prometheus_client 0.3.1 py37h28b3542_0 prompt_toolkit 1.0.15 py37_0 protobuf 3.6.1 ptyprocess 0.6.0 py37_0 py-opencv 3.4.2 py37hb342d67_1 pyasn1 0.4.4 py37h28b3542_0 pyasn1-modules 0.2.2 py37_0 pycocotools 2.0.0 pycosat 0.6.3 py37h14c3975_0 pycparser 2.18 py37_1 pygments 2.2.0 py37_0 PyHamcrest 1.9.0 pyopenssl 18.0.0 py37_0 pyparsing 2.2.0 py37_1 pyqt 5.9.2 py37h22d08a2_0 pysocks 1.6.8 py37_0 python 3.7.0 hc3d631a_0 python-dateutil 2.7.3 py37_0 pytorch 0.4.1 py37ha74772b_0 pytz 2018.5 py37_0 pyyaml 3.13 py37h14c3975_0 pyzmq 17.1.2 py37h14c3975_0 qt 5.9.6 h52aff34_0 qtconsole 4.4.1 py37_0 readline 7.0 ha6073c6_4 requests 2.19.1 py37_0 ruamel_yaml 0.15.46 py37h14c3975_0 scipy 1.1.0 py37hfa4b5c9_1 send2trash 1.5.0 py37_0 service_identity 17.0.0 py37h28b3542_0 setuptools 40.0.0 py37_0 simplegeneric 0.8.1 py37_2 sip 4.19.8 py37hf484d3e_0 six 1.11.0 py37_1 sqlite 3.24.0 h84994c4_0 tensorboardX 1.4 terminado 0.8.1 py37_1 testpath 0.3.1 py37_0 tk 8.6.7 hc745277_3 torchvision 0.2.1 py37_1 pytorch tornado 5.1 
py37h14c3975_0 traitlets 4.3.2 py37_0 twisted 18.7.0 py37h14c3975_1 urllib3 1.23 py37_0 wcwidth 0.1.7 py37_0 webencodings 0.5.1 py37_1 wheel 0.31.1 py37_0 widgetsnbextension 3.4.1 py37_0 xz 5.2.4 h14c3975_4 yaml 0.1.7 had09818_2 zeromq 4.2.5 hf484d3e_1 zlib 1.2.11 ha838bed_2 zope 1.0 py37_1 zope.interface 4.5.0 py37h14c3975_0
st99978
My guesses are:
- You don't evaluate under a with torch.no_grad() environment.
- You are storing the tensor losses instead of their values (that is, you are missing a call to the item() method of the tensor), which keeps the whole computation graph in memory.
Check those two first and see if that's the problem; a minimal evaluation loop with both fixes is sketched below. At this moment I don't have time to check out your code.
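A minimal sketch of an evaluation loop applying both suggestions, assuming model, criterion, and a val_loader exist (the names are placeholders):
```python
model.eval()
total_loss = 0.0
with torch.no_grad():                  # no graph is built, so activations are freed right away
    for data, target in val_loader:
        output = model(data.cuda())
        loss = criterion(output, target.cuda())
        total_loss += loss.item()      # .item() extracts a python number, so no graph is kept alive
```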
st99979
Thank you for your answer. I already tested the loss hypothesis. I did not test with torch.no_grad(). The training is currently running (but it will fail as the memory gets more and more filled…). If someone has another hypothesis about this problem, they are welcome to submit it before the next training run. Thank you.
st99980
Sorry for my late answer. So, to the point: I did not manage to make it work with torch.no_grad(), and since I actually want to compare with other runs, I dropped this idea. I tried to add some "del" on tensors (as suggested in the pytorch documentation), but nothing changed. The surprising thing is that pytorch starts with half of the GPU memory and ends up filling it after several epochs, yet I get no cuda errors while running the training (at the end, the memory of both GPUs is almost full). I still do not understand what is happening… Is there any way to track all memory blocks on the GPU and where they were allocated? Thank you. PS: anyway, my run went fine, it is just a pity that with 2x 1080 Ti GPUs I cannot run with a bigger batch size…
st99981
I want to implement a framework like AlphaGo Zero. My MCTS needs to get values from the network very frequently and quickly, so is there a good communication mechanism that could do this? I hear that I could convert the PyTorch model to Caffe to use its C++ API, but is there another way?
st99982
Based on what I have been reading here, one can get L2 regularization by passing a value other than 0 to the optimizer through the weight_decay argument. Yet, one may implement a custom loss function like this one, where the L2 regularization is already taken into account:
```python
class AutoRec_Loss(torch.nn.Module):
    def __init__(self):
        super(AutoRec_Loss, self).__init__()

    def forward(self, predicted_ratings, real_ratings, weights, reg_strength):
        ratings_loss = torch.norm(real_ratings - predicted_ratings)
        # L2 regularization
        weights_regularization = (reg_strength / 2) * torch.norm(weights)
        return ratings_loss + weights_regularization
```
What would happen if I set a value other than 0 to the underlying optimizer given this loss function?
st99983
Hi, It will just increase the L2 regularization even more. Your loss would effectively be loss = ratings_loss + weights_regularization + weight_decay * weight_norm.
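For reference, a rough sketch of how the two penalties stack; the placeholder model and the optimizer settings here are illustrative, not a recommendation:
```python
import torch

model = torch.nn.Linear(10, 1)        # placeholder model
criterion = AutoRec_Loss()            # the custom loss above: already adds (reg_strength/2)*||w||
# weight_decay adds a second L2 penalty on the same weights inside the optimizer update,
# so the effective regularization strength is the sum of the two terms
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, weight_decay=1e-4)
```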
st99984
Hi everyone. I am implementing local reparameterization (https://papers.nips.cc/paper/5666-variational-dropout-and-the-local-reparameterization-trick.pdf) and realized that I somehow need a matrix that repeats the same vector parameter row-wise. Suppose a layer with 512 neurons. If I code this:
bias = nn.Parameter(torch.zeros(512,)).repeat(batch, 1)
and I now sample from this bias matrix, does pytorch (when performing backward) know that each row is the same parameter? What I want to do is avoid this:
```python
bias = nn.Parameter(torch.zeros(512,))
for i in range(batch):
    bias.sample()
```
st99985
Hi, Yes, it will work. Be careful though: if you do bias = nn.Parameter(torch.zeros(512,)).repeat(batch, 1), the python variable bias will not contain the original Parameter object and is not the tensor that will be optimized. You would need to do:
```python
bias_for_optimizer = nn.Parameter(torch.zeros(512,))
bias = bias_for_optimizer.repeat(batch, 1)
```
st99986
Because in the one-liner, the variable that you get is not a leaf tensor and so will not have its gradient saved, meaning you won't be able to optimize it:
```python
import torch
from torch import nn

a = nn.Parameter(torch.rand(10))
b = a.repeat(2)
b.sum().backward()
print("b is leaf: ", b.is_leaf)   # False
print("b.grad: ", b.grad)         # None
print("a is leaf: ", a.is_leaf)   # True
print("a.grad: ", a.grad)         # some gradients

a = nn.Parameter(torch.rand(10)).repeat(2)
a.sum().backward()
print("a is leaf: ", a.is_leaf)   # False
print("a.grad: ", a.grad)         # None
```
st99987
The problem with this approach is that the resulting tensor bias is not a Parameter:
```python
bias_for_optim = nn.Parameter(torch.zeros(topology[idx+1],).cuda())
bias = bias_for_optim.repeat(batch, 1)
print type(bias)
```
and it cannot be registered in the module. Should I wrap it in nn.Parameter again after the repeat?
st99988
That's my point: you should register bias_for_optim in the module and give it to the optimizer. The bias = bias_for_optim.repeat(batch, 1) should only be done during the forward pass.
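A minimal sketch of that pattern; the module name and sizes are made up for illustration:
```python
import torch
from torch import nn

class BiasedLayer(nn.Module):
    def __init__(self, n_units, batch):
        super(BiasedLayer, self).__init__()
        self.batch = batch
        # the leaf Parameter: this is what gets registered in the module and optimized
        self.bias = nn.Parameter(torch.zeros(n_units))

    def forward(self, x):
        # repeat on the fly; gradients from the repeated rows accumulate back into self.bias
        bias_matrix = self.bias.repeat(self.batch, 1)
        return x + bias_matrix
```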
st99989
Ah okay, I wanted to avoid calling repeat in each forward step for efficiency. However, I suppose .repeat() is well optimized (at least it should be much quicker than looping at the python level). Thanks @albanD
st99990
Hi, You can consider that repeat is literally for free. It changes 2 numbers in cpu memory. You should not worry about it.
st99991
Yes, I was talking more about memory reservation. However, pytorch's pooled memory allocator should not have problems at this level; my intention was to have all the memory allocated up front, to avoid the typical problems that arise when you allocate and deallocate dynamically.
st99992
Hi! I have following problem during PyTorch compilation on Linux (I’ve been able to successfully compile it on macOS): Install the project... -- Install configuration: "Release" -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/share/cmake/Gloo/GlooConfig.cmake -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/share/cmake/Gloo/GlooConfigVersion.cmake -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/share/cmake/Gloo/GlooTargets.cmake -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/share/cmake/Gloo/GlooTargets-release.cmake -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/lib/libgloo.a -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/lib/libgloo_builder.a -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/lib/libgloo_cuda.a -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/config.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/algorithm.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/allgather_ring.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/allreduce_halving_doubling.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/allreduce_bcube.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/allreduce_local.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/allreduce_ring.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/allreduce_ring_chunked.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/barrier.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/barrier_all_to_all.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/barrier_all_to_one.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/broadcast_one_to_all.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/reduce_scatter.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/context.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/math.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/pairwise_exchange.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/types.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/common/common.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/common/error.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/common/linux.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/common/linux_devices.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/common/logging.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/common/string.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/rendezvous/file_store.h -- Installing: 
/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/rendezvous/hash_store.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/rendezvous/prefix_store.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/rendezvous/store.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/rendezvous/context.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/transport/address.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/transport/buffer.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/transport/device.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/transport/pair.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/transport/tcp/address.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/transport/tcp/buffer.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/transport/tcp/device.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/transport/tcp/pair.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/allreduce_builder.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/broadcast_builder.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/cuda.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/cuda_allreduce_halving_doubling.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/cuda_allreduce_halving_doubling_pipelined.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/cuda_allreduce_local.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/cuda_allreduce_ring.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/cuda_allreduce_ring_chunked.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/cuda_broadcast_one_to_all.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/cuda_collectives_device.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/cuda_collectives_host.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/cuda_collectives_native.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/cuda_collectives_nccl.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/cuda_private.h -- Installing: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/gloo/cuda_workspace.h + popd ~/Programs/pytorch/third_party ~/Programs/pytorch ++ uname + [[ Linux == \D\a\r\w\i\n ]] + popd ~/Programs/pytorch + for arg in '"$@"' + [[ THD == \n\c\c\l ]] + [[ THD == \g\l\o\o ]] + [[ THD == \c\a\f\f\e\2 ]] + [[ THD == \T\H\D ]] + pushd /macierz/home/155079jp/Programs/pytorch/torch/lib ~/Programs/pytorch/torch/lib ~/Programs/pytorch + build THD + mkdir -p build/THD + pushd build/THD ~/Programs/pytorch/torch/lib/build/THD ~/Programs/pytorch/torch/lib ~/Programs/pytorch + BUILD_C_FLAGS= + case $1 in + 
BUILD_C_FLAGS=' -I"/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include" -I"/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/TH" -I"/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/THC" -I"/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/THS" -I"/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/THCS" -I"/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/THNN" -I"/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/THCUNN" -DOMPI_SKIP_MPICXX=1 -fexceptions' + cmake ../../THD -DCMAKE_MODULE_PATH=/macierz/home/155079jp/Programs/pytorch/cmake/Modules_CUDA_fix -DTorch_FOUND=1 -DCMAKE_INSTALL_PREFIX=/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install '-DCMAKE_C_FLAGS= -I"/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include" -I"/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/TH" -I"/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/THC" -I"/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/THS" -I"/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/THCS" -I"/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/THNN" -I"/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/THCUNN" -DOMPI_SKIP_MPICXX=1 -fexceptions ' '-DCMAKE_CXX_FLAGS= -I"/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include" -I"/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/TH" -I"/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/THC" -I"/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/THS" -I"/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/THCS" -I"/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/THNN" -I"/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/THCUNN" -DOMPI_SKIP_MPICXX=1 -fexceptions -std=c++11 ' '-DCMAKE_EXE_LINKER_FLAGS=-L"/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/lib" -Wl,-rpath,$ORIGIN ' '-DCMAKE_SHARED_LINKER_FLAGS=-L"/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/lib" -Wl,-rpath,$ORIGIN ' -DCMAKE_INSTALL_LIBDIR=/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/lib '-DCUDA_NVCC_FLAGS= -I"/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include" -I"/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/TH" -I"/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/THC" -I"/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/THS" -I"/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/THCS" -I"/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/THNN" -I"/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/THCUNN" -DOMPI_SKIP_MPICXX=1' -DCUDA_DEVICE_DEBUG=0 -DCMAKE_PREFIX_PATH=/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install '-Dcwrap_files=/macierz/home/155079jp/Programs/pytorch/torch/lib/ATen/Declarations.cwrap;/macierz/home/155079jp/Programs/pytorch/torch/lib/THNN/generic/THNN.h;/macierz/home/155079jp/Programs/pytorch/torch/lib/THCUNN/generic/THCUNN.h;/macierz/home/155079jp/Programs/pytorch/torch/lib/ATen/nn.yaml' -DTH_INCLUDE_PATH=/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include -DTH_LIB_PATH=/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/lib 
-DTH_LIBRARIES=/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/lib/libTH.so -DCAFFE2_LIBRARIES=/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/lib/libcaffe2.so -DTHNN_LIBRARIES=/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/lib/libTHNN.so -DTHCUNN_LIBRARIES=/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/lib/libTHCUNN.so -DTHS_LIBRARIES=/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/lib/libTHS.so -DTHC_LIBRARIES=/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/lib/libTHC.so -DTHCS_LIBRARIES=/macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/lib/libTHCS.so -DTH_SO_VERSION=1 -DTHC_SO_VERSION=1 -DTHNN_SO_VERSION=1 -DTHCUNN_SO_VERSION=1 -DTHD_SO_VERSION=1 -DUSE_CUDA=1 -DNO_NNPACK=0 -DNCCL_EXTERNAL=1 -Dnanopb_BUILD_GENERATOR=0 -DCMAKE_DEBUG_POSTFIX= -DCMAKE_BUILD_TYPE=Release -DCMAKE_EXPORT_COMPILE_COMMANDS=1 -- The C compiler identification is GNU 4.8.5 -- The CXX compiler identification is GNU 4.8.5 -- Check for working C compiler: /usr/bin/cc -- Check for working C compiler: /usr/bin/cc -- works -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Detecting C compile features -- Detecting C compile features - done -- Check for working CXX compiler: /usr/bin/c++ -- Check for working CXX compiler: /usr/bin/c++ -- works -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Detecting CXX compile features -- Detecting CXX compile features - done -- Performing Test HAS_THREAD_LOCAL -- Performing Test HAS_THREAD_LOCAL - Success -- Found MPI_C: /usr/lib64/mpi/gcc/mvapich2/lib64/libmpi.so (found version "3.0") -- Found MPI_CXX: /usr/lib64/mpi/gcc/mvapich2/lib64/libmpicxx.so (found version "3.0") -- Found MPI: TRUE (found version "3.0") -- Found Gloo: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include -- Looking for pthread.h -- Looking for pthread.h - found -- Looking for pthread_create -- Looking for pthread_create - not found -- Looking for pthread_create in pthreads -- Looking for pthread_create in pthreads - not found -- Looking for pthread_create in pthread -- Looking for pthread_create in pthread - found -- Found Threads: TRUE -- Caffe2: Found protobuf with new-style protobuf targets. 
-- Caffe2: Protobuf version 3.5.0 -- Found CUDA: /usr/local/cuda (found suitable version "9.1", minimum required is "7.0") -- Caffe2: CUDA detected: 9.1 -- Caffe2: CUDA nvcc is: /usr/local/cuda/bin/nvcc -- Caffe2: CUDA toolkit directory: /usr/local/cuda -- Caffe2: Header version is: 9.1 -- Found CUDNN: /usr/local/cuda/include -- Found cuDNN: v7.0.5 (include: /usr/local/cuda/include, library: /usr/local/cuda/lib64/libcudnn.so.7) -- Autodetected CUDA architecture(s): 6.1 -- Added CUDA NVCC flags for: -gencode;arch=compute_61,code=sm_61 -- Found CUDA: /usr/local/cuda (found suitable version "9.1", minimum required is "7.5") -- Found NCCL: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include -- Determining NCCL version from the header file: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include/nccl.h -- Found NCCL (include: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/include, library: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/lib/libnccl.so) -- MPI_LIBRARIES: /usr/lib64/mpi/gcc/mvapich2/lib64/libmpicxx.so;/usr/lib64/mpi/gcc/mvapich2/lib64/libmpi.so -- Found Gloo, will compile with Gloo distributed backend -- Building the gloo backend with TCP support only -- NCCL_LIBRARIES: /macierz/home/155079jp/Programs/pytorch/torch/lib/tmp_install/lib/libnccl.so -- Found NCCL, but the NCCL version is either not 2+ or not determinable, will not compile with NCCL distributed backend -- Excluding /macierz/home/155079jp/Programs/pytorch/torch/lib/THD/master_worker/worker/dispatch/Communication.cpp from the build -- Excluding /macierz/home/155079jp/Programs/pytorch/torch/lib/THD/master_worker/worker/dispatch/Generator.cpp from the build -- Excluding /macierz/home/155079jp/Programs/pytorch/torch/lib/THD/master_worker/worker/dispatch/Storage.cpp from the build -- Excluding /macierz/home/155079jp/Programs/pytorch/torch/lib/THD/master_worker/worker/dispatch/Tensor.cpp from the build -- Excluding /macierz/home/155079jp/Programs/pytorch/torch/lib/THD/master_worker/worker/dispatch/TensorCopy.cpp from the build -- Excluding /macierz/home/155079jp/Programs/pytorch/torch/lib/THD/master_worker/worker/dispatch/TensorLapack.cpp from the build -- Excluding /macierz/home/155079jp/Programs/pytorch/torch/lib/THD/master_worker/worker/dispatch/TensorMath.cpp from the build -- Excluding /macierz/home/155079jp/Programs/pytorch/torch/lib/THD/master_worker/worker/dispatch/TensorRandom.cpp from the build -- MPI_COMPILE_FLAGS: -fmessage-length=0;-funwind-tables;-fasynchronous-unwind-tables;-fstack-clash-protection -- MPI_LINK_FLAGS: -Wl,-rpath -Wl,/usr/lib64/mpi/gcc/mvapich2/lib64 -Wl,--enable-new-dtags -- Configuring done -- Generating done CMake Warning: Manually-specified variables were not used by the project: CMAKE_DEBUG_POSTFIX CMAKE_INSTALL_LIBDIR NCCL_EXTERNAL NO_NNPACK THCS_LIBRARIES THCUNN_LIBRARIES THCUNN_SO_VERSION THC_LIBRARIES THC_SO_VERSION THD_SO_VERSION THNN_LIBRARIES THNN_SO_VERSION THS_LIBRARIES TH_INCLUDE_PATH TH_LIBRARIES TH_LIB_PATH TH_SO_VERSION Torch_FOUND cwrap_files nanopb_BUILD_GENERATOR -- Build files have been written to: /macierz/home/155079jp/Programs/pytorch/torch/lib/build/THD + make install -j8 Scanning dependencies of target THD [ 5%] Building CXX object CMakeFiles/THD.dir/base/ChannelUtils.cpp.o c++: fatal error: no input files compilation terminated. 
/bin/sh: -funwind-tables: nie znaleziono polecenia /bin/sh: -fasynchronous-unwind-tables: nie znaleziono polecenia [ 11%] Building CXX object CMakeFiles/THD.dir/base/Cuda.cpp.o /bin/sh: -fstack-clash-protection: nie znaleziono polecenia CMakeFiles/THD.dir/build.make:62: polecenia dla obiektu 'CMakeFiles/THD.dir/base/ChannelUtils.cpp.o' nie powiodły się make[2]: *** [CMakeFiles/THD.dir/base/ChannelUtils.cpp.o] Błąd 127 make[2]: *** Oczekiwanie na niezakończone zadania.... c++: fatal error: no input files compilation terminated. /bin/sh: -funwind-tables: nie znaleziono polecenia [ 35%] Building CXX object CMakeFiles/THD.dir/base/RPCType.cpp.o [ 35%] Building CXX object CMakeFiles/THD.dir/base/DataChannel.cpp.o [ 35%] Building CXX object CMakeFiles/THD.dir/base/DataChannelRequest.cpp.o /bin/sh: -fasynchronous-unwind-tables: nie znaleziono polecenia /bin/sh: -fstack-clash-protection: nie znaleziono polecenia CMakeFiles/THD.dir/build.make:75: polecenia dla obiektu 'CMakeFiles/THD.dir/base/Cuda.cpp.o' nie powiodły się make[2]: *** [CMakeFiles/THD.dir/base/Cuda.cpp.o] Błąd 127 [ 47%] Building CXX object CMakeFiles/THD.dir/base/data_channels/DataChannelGloo.cpp.o [ 47%] Building CXX object CMakeFiles/THD.dir/base/data_channels/DataChannelMPI.cpp.o [ 47%] Building CXX object CMakeFiles/THD.dir/base/data_channels/DataChannelTCP.cpp.o c++: fatal error: no input files c++: fatal error: no input files compilation terminated. compilation terminated. c++: fatal error: no input files compilation terminated. /bin/sh: -funwind-tables: nie znaleziono polecenia /bin/sh: -funwind-tables: nie znaleziono polecenia /bin/sh: -funwind-tables: nie znaleziono polecenia /bin/sh: -fasynchronous-unwind-tables: nie znaleziono polecenia /bin/sh: -fasynchronous-unwind-tables: nie znaleziono polecenia /bin/sh: -fasynchronous-unwind-tables: nie znaleziono polecenia /bin/sh: -fstack-clash-protection: nie znaleziono polecenia /bin/sh: -fstack-clash-protection: nie znaleziono polecenia c++: fatal error: no input files CMakeFiles/THD.dir/build.make:114: polecenia dla obiektu 'CMakeFiles/THD.dir/base/RPCType.cpp.o' nie powiodły się make[2]: *** [CMakeFiles/THD.dir/base/RPCType.cpp.o] Błąd 127 compilation terminated. CMakeFiles/THD.dir/build.make:101: polecenia dla obiektu 'CMakeFiles/THD.dir/base/DataChannelRequest.cpp.o' nie powiodły się make[2]: *** [CMakeFiles/THD.dir/base/DataChannelRequest.cpp.o] Błąd 127 c++: fatal error: no input files compilation terminated. /bin/sh: -fstack-clash-protection: nie znaleziono polecenia CMakeFiles/THD.dir/build.make:88: polecenia dla obiektu 'CMakeFiles/THD.dir/base/DataChannel.cpp.o' nie powiodły się make[2]: *** [CMakeFiles/THD.dir/base/DataChannel.cpp.o] Błąd 127 c++: fatal error: no input files compilation terminated. 
/bin/sh: -funwind-tables: nie znaleziono polecenia /bin/sh: -funwind-tables: nie znaleziono polecenia /bin/sh: -fasynchronous-unwind-tables: nie znaleziono polecenia /bin/sh: -funwind-tables: nie znaleziono polecenia /bin/sh: -fasynchronous-unwind-tables: nie znaleziono polecenia /bin/sh: -fstack-clash-protection: nie znaleziono polecenia /bin/sh: -fasynchronous-unwind-tables: nie znaleziono polecenia /bin/sh: -fstack-clash-protection: nie znaleziono polecenia CMakeFiles/THD.dir/build.make:127: polecenia dla obiektu 'CMakeFiles/THD.dir/base/data_channels/DataChannelGloo.cpp.o' nie powiodły się make[2]: *** [CMakeFiles/THD.dir/base/data_channels/DataChannelGloo.cpp.o] Błąd 127 CMakeFiles/THD.dir/build.make:140: polecenia dla obiektu 'CMakeFiles/THD.dir/base/data_channels/DataChannelMPI.cpp.o' nie powiodły się make[2]: *** [CMakeFiles/THD.dir/base/data_channels/DataChannelMPI.cpp.o] Błąd 127 /bin/sh: -fstack-clash-protection: nie znaleziono polecenia CMakeFiles/THD.dir/build.make:153: polecenia dla obiektu 'CMakeFiles/THD.dir/base/data_channels/DataChannelTCP.cpp.o' nie powiodły się make[2]: *** [CMakeFiles/THD.dir/base/data_channels/DataChannelTCP.cpp.o] Błąd 127 CMakeFiles/Makefile2:67: polecenia dla obiektu 'CMakeFiles/THD.dir/all' nie powiodły się make[1]: *** [CMakeFiles/THD.dir/all] Błąd 2 Makefile:129: polecenia dla obiektu 'all' nie powiodły się make: *** [all] Błąd 2 Failed to run 'bash tools/build_pytorch_libs.sh --use-cuda --use-nnpack --use-mkldnn nccl caffe2 nanopb libshm gloo THD c10d' Sorry for polish language, it’s uni PC, I can’t change that (or can I? Do you know how?). I can translate if you don’t understand something. Versions: OS: openSUSE Leap 42.3 GCC: 4.8.5 conda: 4.5.11 Python: 3.6.5 CUDA: 9.1 cuDNN: 7.0 I work on NFS drive if it changes anything. I’ve executed those commands before installation: export CMAKE_PREFIX_PATH="$(dirname $(which conda))/../" conda install numpy pyyaml mkl mkl-include setuptools cmake cffi typing conda install -c mingfeima mkldnn conda install -c pytorch magma-cuda91 I’ve changed magma-cuda80 to magma-cuda91. I was looking through the Internet and couldn’t find any help. Do someone know what is going on? Thanks for any help!
st99994
Hi, It does look weird indeed. It looks like it splits the compilation command line in the middle, so the compiler does not have any input files specified, and it then tries to execute the compiler arguments as shell commands. Did you by any chance use a Windows computer to get the code before putting it on the machine where you compile? Will you need the distributed package? If not, try compiling with NO_DISTRIBUTED=1, as MPI might be the problem here. If that does not help, could you run make in verbose mode directly within /macierz/home/155079jp/Programs/pytorch/torch/lib/build/THD to see what the exact command is that it tries to execute, please? That would help us fix this problem.
st99995
Did you by any chance use a Windows computer to get the code before putting it on the machine where you compile? Nope. Will you need the distributed package? If not, try compiling with NO_DISTRIBUTED=1, as MPI might be the problem here. Yea, I don't think I'll be trying to distribute training anytime soon. I set this variable and it compiled and installed fine this time. Thanks a lot for the help!
st99996
Hello, When should we use repeat instead of expand? Apart from the case where we want to call an in-place function?
st99997
expand does not allocate new memory, which means that if you do
```python
x = torch.tensor([[1], [2], [3]])
expand_x = x.expand([3, 4])
x[0, 0] = 4
```
you will get
```
expand_x
tensor([[4, 4, 4, 4],
        [2, 2, 2, 2],
        [3, 3, 3, 3]])
```
If you use repeat_x = x.repeat(1, 4), changing x will not affect repeat_x.
st99998
If they have the same number of layers, I know that the approach from "How to load part of a pre-trained model?" works. However, if my new model has m+n layers and my old model has m layers, pytorch will complain about missing layers. How should I load the model then?
st99999
The pre-trained model is loaded as an OrderedDict by calling torch.load(); you can then extract weights from the dictionary and do what you want. For example, in your case, you could get your model's state_dict, then assign weights to the layers of interest and load the dict using model.load_state_dict().
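A rough sketch of that partial-loading recipe, assuming model is the new (larger) model and 'old_model.pth' is a placeholder filename holding the smaller model's state_dict:
```python
import torch

pretrained_dict = torch.load('old_model.pth')
model_dict = model.state_dict()

# keep only the weights whose names (and shapes) also exist in the new, larger model
pretrained_dict = {k: v for k, v in pretrained_dict.items()
                   if k in model_dict and v.size() == model_dict[k].size()}

model_dict.update(pretrained_dict)
model.load_state_dict(model_dict)
```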
st115000
```python
x = np.array([1.8507, -2.7324])
y = np.array([0.9722, 0.4470, 1.0000, 0.0000, 0.0000, 0.0000])
x = torch.from_numpy(x)
y = torch.from_numpy(y)
z = x.unsqueeze(0) * y.unsqueeze(1)
print(z)
```
or
```python
x = np.array([1.8507, -2.7324])
y = np.array([0.9722, 0.4470, 1.0000, 0.0000, 0.0000, 0.0000])
x = torch.from_numpy(x)
y = torch.from_numpy(y)
z = x * y.unsqueeze(1)
print(z)
```
Both work on my computer. I prefer the former, as it explicitly specifies the tensor shape.
st115001
@chenchr: Thanks for your suggestion. I could solve the problem using Smith's method, but this alternative solution also helped me to understand the behavior of "unsqueeze" better. I'm still new to pytorch, so it is taking some time to grasp the new concepts.
st115002
I’m training an object detection model in pytorch on Ubuntu 16.04 with a TitanX Pascal gpu. In the middle of training (after several thousand training iterations e.g. 70k), the training crashes with the message: [libprotobuf FATAL google/protobuf/wire_format.cc:830] CHECK failed: (output->ByteCount()) == (expected_endpoint): : Protocol message serialized to a size different from what was originally expected. Perhaps it was modified by another thread during serialization? terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: (output->ByteCount()) == (expected_endpoint): : Protocol message serialized to a size different from what was originally expected. Perhaps it was modified by another thread during serialization? Command terminated by signal 6 I’ve searched, but couldn’t find anyone experiencing similar errors. I’m using tensorboard-pytorch to visualize the training artifacts. Any ideas on how to resolve this issue?
st115003
can you check if by any chance you are running out of memory or disk space? It looks like one of those errors where it tried to write X bytes, but could only write bytes less than X due to some unknown constraints.
st115004
It’s training now, and the gpu is only using 7293MiB / 12189MiB. Since the crash happens almost at random, it’s difficult to pin down. I’m currently scaling the shortest image side to 800 pixels, but could try reducing that to 600 pixels and seeing whether the issue arises. My batch size is already 1 image.
st115005
I just noticed that I had a zombie process taking up some memory, so this may have been the culprit. After killing the zombie process, my gpu memory is down to 4228MiB / 12189MiB. Hopefully this was the issue.
st115006
The error was actually related to tensorboard. I removed the code portion that writes tfevents and the training completed just fine.
st115007
I found some features like upsampling_nearest are in the github but not in the conda package. Is there a timeline when the conda package will be updated?
st115008
the conda package was updated yesterday evening with the 0.1.7 release which has the upsampling_nearest available
st115009
trypag: conda update pytorch torchvision -c soumith
Oh, so the packages are not sent to the default channels, just like it happened with luarocks in Torch. I would therefore suggest running this command to automate the process: conda config --add channels soumith, which adds the new channel (soumith) to the top of the channel list, making it the highest priority.
st115010
How long does it usually take to update the packages? This commit from 4 days ago still seems not to be included.
st115011
I am currently trying to train a 3-layer LSTM for a classification task. The input sequences have variable length, so I pad every sequence with zeros to the longest one within the minibatch, and the padded labels are set to -1, which is ignored in the loss calculation. When I train the LSTM with batch_size=1 it works well: the cross entropy loss decreases and the training classification accuracy increases. The problem is that when I set batch_size > 1, e.g. batch_size=8, the loss decreases while the accuracy does not increase. Could anyone help me figure out why? Some related code is as follows:
```python
class Model(nn.Module):
    def __init__(self, args):
        super(Model, self).__init__()
        self.args = args
        self.n_d = args.feadim
        self.n_cell = args.hidnum
        self.depth = args.depth
        self.drop = nn.Dropout(args.dropout)
        self.n_V = args.statenum
        if args.lstm:
            self.rnn = nn.LSTM(self.n_d, self.n_cell, self.depth,
                               dropout=args.rnn_dropout,
                               batch_first=True)
        else:
            pass
        self.output_layer = nn.Linear(self.n_cell, self.n_V)

    def forward(self, x, hidden, lens):
        rnnout, hidden = self.rnn(x, hidden)
        output = self.drop(rnnout)
        output = output.view(-1, output.size(2))
        output = self.output_layer(output)
        return output, hidden


def train_model(epoch, model, train_reader):
    model.train()
    args = model.args
    batch_size = args.batch_size
    total_loss = 0.0
    criterion = nn.CrossEntropyLoss(size_average=False, ignore_index=-1)
    hidden = model.init_hidden(batch_size)
    i = 0
    running_acc = 0
    total_frame = 0
    while True:
        feat, label, length = train_reader.load_next_nstreams()
        if length is None or label.shape[0] < args.batch_size:
            break
        else:
            x, y = Variable(torch.from_numpy(feat)).cuda(), Variable(torch.from_numpy(label).long()).cuda()
            hidden = model.init_hidden(batch_size)
            hidden = (Variable(hidden[0].data), Variable(hidden[1].data)) if args.lstm else Variable(hidden.data)
            model.zero_grad()
            output, hidden = model(x, hidden, length)
            assert x.size(0) == batch_size
            loss = criterion(output, y.view(-1))
            _, predict = torch.max(output, 1)
            correct = (predict == y).sum()
            loss.backward()
            total_loss += loss.data[0]
            running_acc += correct.data[0]
            total_frame += sum(length)
            i += 1
            if i % 10 == 0:
                sys.stdout.write("time:{}, Epoch={},trbatch={},loss={:.4f},tracc={:.4f}\n".format(
                    datetime.now(), epoch, i, total_loss / total_frame, running_acc * 1.0 / total_frame))
                sys.stdout.flush()
```
st115012
hi Pan-Zhou, I am not sure of exactly why you are seeing this behavior. If you pin it down, I would love to know. Some easy things to try:
- increase / decrease the learning rate and see what happens
- print out the min/max values of the weights of the network over learning, as well as the norm
- check if somehow the weights are becoming NaN (a quick way to do the last two checks is sketched below)
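A quick sketch of the weight checks, assuming model is the network being trained:
```python
for name, p in model.named_parameters():
    w = p.data
    # min/max/norm of each parameter tensor
    print(name, 'min=%.4f max=%.4f norm=%.4f' % (float(w.min()), float(w.max()), float(w.norm())))
    if (w != w).any():   # NaN is the only value not equal to itself
        print('NaN detected in', name)
```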
st115013
Thanks for your advice. I did tune the learning rate and found it helps a little. I use about 150 hours of speech features to train a 3-layer lstm with 400 cells each, and set batch_size=4. Here is the log information and the weight norms after each epoch:
```
Epoch=0 lr=2.0000 train_loss=3.6674 dev_loss=2.9469 tracc=0.0777 validacc=0.0789 [58.3999m]
Epoch=1 lr=2.0000 train_loss=3.1931 dev_loss=2.8304 tracc=0.0786 validacc=0.0787 [57.8110m]
Epoch=2 lr=2.0000 train_loss=3.0930 dev_loss=2.7190 tracc=0.0793 validacc=0.0786 [58.2592m]
p_norm: ['7', '23', '0', '0', '23', '23', '0', '0', '23', '23', '0', '0', '37', '0']
p_norm: ['71', '82', '20', '20', '82', '82', '17', '17', '82', '99', '16', '16', '144', '27']
p_norm: ['91', '104', '25', '25', '98', '104', '21', '21', '95', '119', '19', '19', '160', '30']
p_norm: ['97', '110', '27', '27', '106', '118', '22', '22', '102', '129', '19', '19', '166', '31']
```
In fact I use the same data and the same data io function to train a 3-layer LSTM with tensorflow, and it works well. Its training loss and valid loss are:
```
End of epoch 0 with avg loss 3.66545295715 and accuracy 0.26187556982
End of epoch 1 with avg loss 2.78404808044 and accuracy 0.355499237776
End of epoch 2 with avg loss 2.55863642693 and accuracy 0.38808375597
End of epoch 3 with avg loss 2.42844891548 and accuracy 0.4079862535
End of epoch 4 with avg loss 2.33932137489 and accuracy 0.422125428915
End of epoch 5 with avg loss 2.20702433586 and accuracy 0.445332825184
End of epoch 6 with avg loss 2.12942314148 and accuracy 0.459314882755
End of epoch 7 with avg loss 2.08610677719 and accuracy 0.467183083296
End of epoch 8 with avg loss 2.06255722046 and accuracy 0.471532851458
End of epoch 9 with avg loss 2.04997444153 and accuracy 0.473860412836
epoch 0 valid split mean loss: 2.96507430077, accuracy: 0.331166476011
epoch 1 valid split mean loss: 2.6755862236, accuracy: 0.3697052598
epoch 2 valid split mean loss: 2.54053473473, accuracy: 0.389888346195
epoch 3 valid split mean loss: 2.47018957138, accuracy: 0.399514913559
epoch 4 valid split mean loss: 2.42790412903, accuracy: 0.40643504262
epoch 5 valid split mean loss: 2.35705971718, accuracy: 0.420234382153
epoch 6 valid split mean loss: 2.32587504387, accuracy: 0.426783770323
epoch 7 valid split mean loss: 2.31113815308, accuracy: 0.42946600914
epoch 8 valid split mean loss: 2.30379247665, accuracy: 0.430867373943
epoch 9 valid split mean loss: 2.29960465431, accuracy: 0.431604236364
```
st115014
Hello, I want to use multiprocessing to train several models simultaneously, where each process (cpu core) updates the parameters of its own model. Does PyTorch support such operations? For example, first I create a model list where each element is a separate model: net_list = [PyTorch Net for _ in range(threads)]. Then I use multiprocessing to start each process to update these models. Will this work with PyTorch?
st115015
Why do you need multiprocessing for that? You can easily do a bash script for that
st115016
Because after optimizing these networks on different threads, the weights should be exchanged, which is convenient to implement in a main process.
st115017
There is an official doc about multiprocessing, but I also found some problems with it; here is the problem. If you have any ideas, can you share them with me? Thanks.
st115018
Hi, I'm using pytorch on python 3.5.2. While attempting to use torch.multiprocessing.pool, I'm getting the following error. Code snippet:
```python
from torch.multiprocessing.pool import Pool
...
with Pool(processes=n_processes) as pool:
    games = pool.map(self.play_game, range(n_processes))
...
```
Error:
```
Traceback (most recent call last):
  File "rl_net.py", line 188, in
    agent.train(100000)
  File "rl_net.py", line 145, in train
    with Pool(processes=n_processes) as pool:
  File "/usr/lib/python3.5/multiprocessing/pool.py", line 150, in __init__
    self._setup_queues()
  File "/home/cs234-gpu2/.env3.5/lib/python3.5/site-packages/torch/multiprocessing/pool.py", line 23, in _setup_queues
    self._inqueue = SimpleQueue()
TypeError: __init__() missing 1 required keyword-only argument: 'ctx'
```
st115019
Use torch.multiprocessing.Pool instead of torch.multiprocessing.pool.Pool; the latter is a definition for Python 3.3 and under, and loaded into torch.multiprocessing.Pool – as stated in torch/multiprocessing/__init__.py.
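A minimal sketch of the suggested usage; the worker function here is just an example:
```python
import torch.multiprocessing as mp

def play_game(i):
    return i * i

if __name__ == '__main__':
    with mp.Pool(processes=4) as pool:
        games = pool.map(play_game, range(4))
    print(games)  # [0, 1, 4, 9]
```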
st115020
Hi, I would like to know how you guys deal with the dropout in testing since the dropout rate should be set to 0 while testing. I directly set model.training = False while testing. Or is there other ways to handle this?
st115021
YongyiTang92: Hi, I would like to know how you guys deal with the dropout in testing since the dropout rate should be set to 0 while testing. I directly set model.training = False while testing. Or is there other ways to handle this?
What do you mean by "the dropout rate should be set to 0 while testing"? Is the dropout rate different between train and test?
st115022
Hi, I wonder if it is possible to instantiate modules in __init__ as entries in a dictionary. For example, is the following allowed?
```python
class MyNet(nn.Module):
    def __init__(self):
        self.myModules = {}
        self.myModules['dog'] = nn.Linear(4096, 300)
        self.myModules['cat'] = nn.Linear(4096, 300)
        self.myModules['flower'] = nn.Linear(4096, 300)

    def forward(self, img):
        ...
```
This example is simple enough that it does not have to use a dictionary, but what I actually need to do is something like the following:
```python
class MyNet(nn.Module):
    def __init__(self, number_of_modules):
        self.myModules = {}
        for i in range(0, number_of_modules):
            self.myModules['dog' + str(i)] = nn.Linear(4096, 300)

    def forward(self, img):
        ...
```
The modules have the same shape, but I don't want them to share weights.
st115023
```python
def __init__(self):
    .....
    for i in range(0, number_of_modules):
        setattr(self, 'dog' + str(i), nn.Linear(4096, 300))

def forward(self, input):
    dog1 = self.dog1(input)
    dog2 = self.dog2(input)
    dog3 = getattr(self, 'dog3')(input)
```
or
```python
def __init__(self):
    .....
    modules = []
    for i in range(0, number_of_modules):
        modules.append(nn.Linear(4096, 300))
    self.modules = nn.ModuleList(modules)

def forward(self, input):
    outputs = []
    for model in self.modules:
        outputs.append(model(input))
```
Also, try to format your code like:
```Python
your code here
```
st115024
Hello, I'm seeking suggestions on the best practice for monitoring the "dead" neuron ratio during the training process in pytorch. The goal is to probably use this as one of the early-stop signals, so that I could abandon a certain model when I see that number go up too high, say 30% etc. Thanks,
st115025
If you do weight decay, you can do this: model.target_layer.weight.data.var(1) to get the variance of the output units' weights. If you don't do weight decay and have to check the unit activations, look up Module.register_forward_hook. This allows you to check the output of intermediate layers quite conveniently.
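A rough sketch of the hook approach; model.relu1 is a hypothetical layer name, and "dead" here just means an activation of zero or below for the current batch:
```python
dead_stats = {}

def make_hook(name):
    def hook(module, input, output):
        # fraction of units that did not activate (<= 0) for this batch
        dead_stats[name] = float((output.data <= 0).float().mean())
    return hook

model.relu1.register_forward_hook(make_hook('relu1'))
# after a forward pass, dead_stats['relu1'] holds the dead-unit ratio for that batch
```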
st115026
I save the model like this:
```python
torch.save({
    'epoch': epoch,
    'model': net,
    'model_state_dict': net.state_dict(),
    'best_mean_iu': meanIU_best,
}, os.path.join(model_path, 'checkpoint.pth.tar'))
```
But when I try to load the model from the checkpoint,
```python
checkpoint = torch.load('checkpoint.pth.tar')
net = torch.load(checkpoint['model'])
```
an error like this appears:
```
Traceback (most recent call last):
  File "/home/liuyf/DenseNet_clockwork/CamVid_DenseNet/camvid_train.py", line 35, in
    net = torch.load(checkpoint['model'])
  File "/usr/local/lib/python2.7/dist-packages/torch/serialization.py", line 231, in load
    return _load(f, map_location, pickle_module)
  File "/usr/local/lib/python2.7/dist-packages/torch/serialization.py", line 364, in _load
    return legacy_load(f)
  File "/usr/local/lib/python2.7/dist-packages/torch/serialization.py", line 299, in legacy_load
    with closing(tarfile.open(fileobj=f, mode='r:', format=tarfile.PAX_FORMAT)) as tar,
  File "/usr/lib/python2.7/tarfile.py", line 1691, in open
    return func(name, filemode, fileobj, **kwargs)
  File "/usr/lib/python2.7/tarfile.py", line 1721, in taropen
    return cls(name, mode, fileobj, **kwargs)
  File "/usr/lib/python2.7/tarfile.py", line 1579, in init
    self.offset = self.fileobj.tell()
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 262, in getattr
    type(self).name, name))
AttributeError: 'DataParallel' object has no attribute 'tell'
```
st115027
Any preferences? Do people generally favor one over the other? If so, why? crayon repo, tensorboard-pytorch repo
st115028
Hi, As one of the people who originally created crayon, I do not actively use it anymore (because I don't need this kind of visualization), and so no new features are being added to it by the original authors. I don't know what the status of tensorboard-pytorch is, though.
st115029
I would suggest tensorboard-pytorch. It’s easier to use especially if you used tensorboard before.
st115030
albanD: I do not actively use it anymore (because I don’t need this kind of visualization). Could you elaborate; are there other visualization tools you or @ruotianluo recommend?
st115031
I just don’t have any loss to plot. So I just don’t plot anything. I don’t use anything else.
st115032
I usually just use pyplot.plot in Jupyter. That flickers a bit when I clear the output and redraw, but it works quite well for me. I also have something in bokeh, which is a bit more appealing conceptually because it supports updates, but I found it a bit less convenient. Here is an example with pyplot. Best regards, Thomas
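A minimal sketch of the pyplot-in-Jupyter approach; num_steps and train_step are placeholders:
```python
# in a notebook cell, with %matplotlib inline enabled
import matplotlib.pyplot as plt
from IPython.display import clear_output

losses = []
for step in range(num_steps):
    losses.append(train_step())
    if step % 50 == 0:
        clear_output(wait=True)   # this redraw is what causes the flicker mentioned above
        plt.plot(losses)
        plt.show()
```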
st115033
I am following the tutorial for transfer learning at http://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html. I wish to train on a custom dataset which cannot be cropped, as that would result in relevant data being lost, and 224x224 is too small for my use case. Maybe I could resize my data to 480x640, but I would prefer not to alter the images. When I try to train the model I get an error about a size mismatch. It seems the implementation of the model only allows for images which are 224x224. Is this correct? Looking at the pytorch model (github.com/pytorch/vision/blob/master/torchvision/models/resnet.py) vs the torch version (github.com/facebook/fb.resnet.torch/blob/master/models/resnet.lua), there is a single kernel size 7 and a single input to AvgPool, which suggests that the input must be square, 32*7=224. In torch: model:add(Convolution(3,64,7,7,2,2,3,3)); in pytorch: self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False). If I change the torch model to model:add(Convolution(3,64,15,20,2,2,3,3)), it will at least allow me to train with 480*640 images… although it will not allow me to fine-tune a pretrained model. So I basically have 3 questions:
1. To train on different image sizes, can the pretrained models be used?
2. Do all images in training have to be the same size? (I thought fully convolutional networks would allow any input size… this training works with tensorflow and inception-v3.)
3. How do I fine-tune a model with images which are not 224*224?
st115034
Thanks for the pointer. I used model.avgpool = nn.AdaptiveAvgPool2d(1) to get this to work.
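For reference, a short sketch of that workaround on a torchvision resnet; the input size is just an example:
```python
import torch
import torch.nn as nn
from torch.autograd import Variable
import torchvision.models as models

model = models.resnet101(pretrained=True)
# AdaptiveAvgPool2d(1) always produces a 1x1 map per channel, so the final Linear
# layer keeps working regardless of the spatial size of the input image
model.avgpool = nn.AdaptiveAvgPool2d(1)

out = model(Variable(torch.randn(1, 3, 480, 640)))  # no size mismatch anymore
```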
st115035
I am trying to replicate the net2net torch code in pytorch, and here is where I am stuck. It uses m.output to reach the last feedforward state to compute BatchNorm statistics, but as far as I can see, that is not possible in Pytorch. Do you have any other alternative method that you might like to suggest? Especially @smth
st115036
this is not possible in pytorch. In pytorch, you might want to implement net2net as a utility function, that the user explicitly uses in their program. It’ll take weights of a Conv2d layer (or Linear layer) and return a new Conv2d layer that’s wider, or a Sequential of 2 Conv2d layers.
st115037
Hi, I just found that calling index_select() on long columns (dim=1) are much slower than doing it on long rows (dim=0). In fact, for large enough matrices, it’s much faster to first transpose the matrix and call index_select() on rows! I wonder if it’s a known performance issue and if there’s any way to mitigate the problem? Here’s the code I used to test. (I’m using Python 2.7.12, PyTorch 0.2.0.post1, and GTX 1080). It builds a 256*N matrix (for various N) and rearranges its rows. Then it does the same for columns. import time import numpy as np import torch DIM = 256 idxs = np.random.permutation(DIM) idxs = torch.LongTensor(idxs).cuda() def do_test(dim, sz, trans=False): if dim == 0: # Rearrange 256 rows, each size 'sz'. A = torch.cuda.FloatTensor(DIM, sz) else: # Rearrange 256 columns, each size 'sz'. A = torch.cuda.FloatTensor(sz, DIM) A.uniform_(-1.0, 1.0) B = A.new(A.shape) torch.cuda.synchronize() T0 = time.time() for step in xrange(10): if trans: T = A.t().clone() T.index_select(dim=1-dim, index=idxs, out=B) else: A.index_select(dim=dim, index=idxs, out=B) torch.cuda.synchronize() T1 = time.time() print ' %6d : Elapsed %.3f ms' % (sz, (T1 - T0) / 10 * 1000.0) sizes = [100, 200, 500, 1000, 2000, 5000, 10000, 20000, 50000, 100000] print 'Rearranging rows:' for sz in sizes: do_test(0, sz) print 'Rearranging columns:' for sz in sizes: do_test(1, sz) print 'Rearranging columns (with transpose):' for sz in sizes: do_test(1, sz, True) Result: Rearranging rows: 100 : Elapsed 0.051 ms 200 : Elapsed 0.049 ms 500 : Elapsed 0.056 ms 1000 : Elapsed 0.066 ms 2000 : Elapsed 0.084 ms 5000 : Elapsed 0.142 ms 10000 : Elapsed 0.235 ms 20000 : Elapsed 0.419 ms 50000 : Elapsed 0.999 ms 100000 : Elapsed 1.937 ms Rearranging columns: 100 : Elapsed 0.048 ms 200 : Elapsed 0.051 ms 500 : Elapsed 0.065 ms 1000 : Elapsed 0.111 ms 2000 : Elapsed 0.298 ms 5000 : Elapsed 1.166 ms 10000 : Elapsed 2.953 ms 20000 : Elapsed 7.699 ms 50000 : Elapsed 22.267 ms 100000 : Elapsed 44.207 ms Rearranging columns (with transpose): 100 : Elapsed 0.051 ms 200 : Elapsed 0.054 ms 500 : Elapsed 0.065 ms 1000 : Elapsed 0.081 ms 2000 : Elapsed 0.118 ms 5000 : Elapsed 0.236 ms 10000 : Elapsed 0.489 ms 20000 : Elapsed 1.472 ms 50000 : Elapsed 8.336 ms 100000 : Elapsed 14.236 ms As you can see, index_select(dim=1) is much slower than dim=0: more importantly, it grows faster than O(N): N=100000 is about 400 times slower than N=1000, and first transposing the matrix is about three times faster (for N=100000), even considering the time spent on transposing. Does anyone know what’s going on here?
st115038
How should I understand the backward() in stochastic functions? E.g. for the Normal distribution, grad_mean = -(output - mean)/std**2; why does it follow this formula? Is it a derivative of the Gaussian PDF? The forward pass only uses output = mean + std*eps where eps ~ N(0, 1), so shouldn't the gradient w.r.t. mean be the identity?
st115039
Gradient formulas are based on "Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning" (Williams, 1992), available at http://incompleteideas.net/sutton/williams-92.pdf. The implementation is at https://github.com/pytorch/pytorch/blob/master/torch/autograd/_functions/stochastic.py#L3-L5
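To connect that back to the question (as far as I understand the code): the backward is REINFORCE, not a path-wise derivative through output = mean + std*eps. The sample is treated as a constant, and the gradient is (up to the reward factor supplied via .reinforce()) minus the derivative of the Gaussian log-density with respect to the mean, i.e. d/dmean log N(x; mean, std) = (x - mean)/std**2, which is why it is not the identity. A quick standalone check with autograd (not the internal implementation):

import torch
from torch.autograd import Variable

mean = Variable(torch.Tensor([0.5]), requires_grad=True)
std = 2.0

# Draw one sample, treated as a constant afterwards.
x = mean.data + std * torch.randn(1)

# Gaussian log-density as a function of the mean (constant terms dropped).
log_prob = -0.5 * (Variable(x) - mean) ** 2 / std ** 2
log_prob.backward()

print(mean.grad.data)              # equals (x - mean) / std^2
print((x - mean.data) / std ** 2)  # same value, computed by hand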
st115040
Hello everyone, my neural network is optimized in such a way that the gradients are obtained by a separate algorithm. To pass the gradients to the network, I first clear the gradients by either:
(1) for i in list(net.parameters()): i.grad = None
(2) opt.zero_grad()
I assume that both methods do the same thing and have the same effect; is my understanding right? After clearing the gradients, I pass the gradients (the calculated gradients are a list in which each element is a numpy array with the same shape as the corresponding network parameter) to the network by:
for i in list(net.parameters()): i.grad = Variable(torch.from_numpy(GRADIENT_ARRAY))
After passing the gradients, I use optimizer.step() to update the parameters. Does calling optimizer.step() do the same thing as updating the parameters manually?
st115041
Yes, optimizer.step() applies the optimizer's update rule; x.data -= x.grad * learning_rate is a simple example of such an update rule (plain SGD).
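As far as I can tell, the two clearing methods differ only slightly: setting .grad = None removes the gradient tensor, while zero_grad() fills the existing one with zeros; since you overwrite .grad right afterwards, the effect is the same. A minimal sketch of the whole workflow (the gradient values below are placeholders for whatever your algorithm produces):

import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable

net = nn.Linear(4, 2)
opt = optim.SGD(net.parameters(), lr=0.1)

# Gradients computed outside autograd, one float32 array per parameter.
external_grads = [np.ones(p.size(), dtype=np.float32) for p in net.parameters()]

opt.zero_grad()  # or: for p in net.parameters(): p.grad = None
for p, g in zip(net.parameters(), external_grads):
    p.grad = Variable(torch.from_numpy(g))

opt.step()  # for plain SGD this does p.data -= lr * p.grad.data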
st115042
class EncoderRNN(nn.Module):
    def __init__(self, input_size, hidden_size, n_layers=1):
        super(EncoderRNN, self).__init__()
        self.n_layers = n_layers
        self.hidden_size = hidden_size
        self.embedding = nn.Embedding(input_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size)

    def forward(self, input, hidden):
        embedded = self.embedding(input).view(1, 1, -1)
        output = embedded
        for i in range(self.n_layers):
            output, hidden = self.gru(output, hidden)
        return output, hidden

    def initHidden(self):
        result = Variable(torch.zeros(1, 1, self.hidden_size))
        if use_cuda:
            return result.cuda()
        else:
            return result
st115043
One GRU reused for many layers: the weights will be shared across all layers. Pass the number of layers to nn.GRU (its num_layers argument) instead: then each layer will learn different weights.
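For example, a sketch of the num_layers version (the class name and sizes are just for illustration; note the initial hidden state now needs one slice per layer):

import torch
import torch.nn as nn
from torch.autograd import Variable

class StackedEncoderRNN(nn.Module):
    def __init__(self, input_size, hidden_size, n_layers=2):
        super(StackedEncoderRNN, self).__init__()
        self.n_layers = n_layers
        self.hidden_size = hidden_size
        self.embedding = nn.Embedding(input_size, hidden_size)
        # num_layers builds a stack of GRU layers, each with its own weights
        self.gru = nn.GRU(hidden_size, hidden_size, num_layers=n_layers)

    def forward(self, input, hidden):
        embedded = self.embedding(input).view(1, 1, -1)
        # one call runs the whole stack; no Python loop needed
        output, hidden = self.gru(embedded, hidden)
        return output, hidden

    def initHidden(self):
        # the hidden state has shape (n_layers, batch, hidden_size)
        return Variable(torch.zeros(self.n_layers, 1, self.hidden_size))

encoder = StackedEncoderRNN(input_size=10, hidden_size=16, n_layers=2)
hidden = encoder.initHidden()
output, hidden = encoder(Variable(torch.LongTensor([3])), hidden)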
st115044
Thanks for your response. Which one is preferred if I need to use multiple layers?
st115045
look at the source code of the model definition (i.e. the python class definition)
st115046
Why are there two bias terms in RNNCell when they are pointwise added together? Wouldn’t this be equivalent to using one bias and doubling its gradient? Although I’m not sure if doubling the gradient is the desired behavior…
st115048
Yes, it’d be equivalent to just learn 1 bias term. I guess it’s just convention to learn two bias terms for an Elman cell (or we just implemented it exactly as the formula says, rather than thinking this through). Here’s the relevant code that I double-checked https://github.com/pytorch/pytorch/blob/master/torch/nn/_functions/rnn.py#L14 12
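A quick way to convince yourself of the equivalence (a standalone check, not part of the library): fold one bias into the other and verify that the cell's output does not change.

import torch
import torch.nn as nn
from torch.autograd import Variable

cell = nn.RNNCell(4, 8)
x = Variable(torch.randn(1, 4))
h = Variable(torch.randn(1, 8))

out_two_biases = cell(x, h)

# Fold bias_hh into bias_ih and zero it out; since the cell computes
# tanh(W_ih x + b_ih + W_hh h + b_hh), only the sum of the biases matters.
cell.bias_ih.data.add_(cell.bias_hh.data)
cell.bias_hh.data.zero_()
out_one_bias = cell(x, h)

print((out_two_biases - out_one_bias).abs().max())  # ~0, up to floating point

As for the gradient question: both biases receive the gradient of the loss with respect to their sum, so under plain SGD the sum does move twice as fast as a single merged bias would, which matches the doubling intuition; the function being computed is still equivalent to having one bias.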
st115049
a = Variable(torch.LongTensor(torch.rand(2, 3)))

The error:

TypeError: torch.LongTensor constructor received an invalid combination of arguments - got (torch.FloatTensor), but expected one of:
 * no arguments
 * (int ...)
      didn't match because some of the arguments have invalid types: (torch.FloatTensor)
 * (torch.LongTensor viewed_tensor)
      didn't match because some of the arguments have invalid types: (torch.FloatTensor)
 * (torch.Size size)
      didn't match because some of the arguments have invalid types: (torch.FloatTensor)
 * (torch.LongStorage data)
      didn't match because some of the arguments have invalid types: (torch.FloatTensor)
 * (Sequence data)
      didn't match because some of the arguments have invalid types: (torch.FloatTensor)

Why is there no conversion? Is it just a design choice, or a small bug?
st115050
I'm not sure why you are trying to construct a LongTensor from a FloatTensor (we do not support this in the constructor). Instead, a much simpler and equivalent way: a = Variable(torch.rand(2, 3).long())
st115051
Hi, I am trying to train a model on the GPU. I can create simple tensors and do operations on them with CUDA. However, when I try to build a more complex model, it raises the exception "CUDNN_STATUS_NOT_INITIALIZED":

raise CuDNNError(status)
CuDNNError: 1: b'CUDNN_STATUS_NOT_INITIALIZED'

I did some research; a similar problem was reported in the TensorFlow discussions. A few people reported that it is a memory issue, and that limiting TF to a fraction of the GPU memory solves the problem. See the link: TF Discussion. I actually got a memory-related exception once, but I can't reproduce it. How do we do that in PyTorch? Thank you
st115052
Hi! So…this might be a silly question, but where is matmul? As can be seen from the code below from my Python interpreter, mm works fine (as well as @, which might or might not be the same as mm or matmul , I am not sure), but matmul doesn’t seem to exist in neither torch or a Tensor object. What’s going on?! >>> import torch >>> a=torch.randn(5,6) >>> b=torch.randn(6,7) >>> a.mm(b) 1.3938 1.3466 1.8738 2.9177 -2.7334 1.9803 0.1643 1.2277 1.2948 2.2676 1.6977 -2.8532 5.0795 2.0144 -1.9988 0.2808 -1.6006 -2.8685 0.5934 0.1643 0.1560 -0.6365 -1.3311 -4.0025 -2.5772 0.5418 -2.0688 1.9729 -0.3097 0.3623 1.6439 0.3341 1.5335 -2.8216 -2.1900 [torch.FloatTensor of size 5x7] >>> a.matmul(b) Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'torch.FloatTensor' object has no attribute 'matmul' >>> torch.matmul(b) Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: module 'torch' has no attribute 'matmul'
st115053
0.1.12_2. Thanks, it has only been a few months since I installed it. I guess things are moving super fast
st115054
I'm implementing a library for training paragraph vector models as proposed by Q. V. Le et al. (Distributed Representations of Sentences and Documents). The code is available on GitHub: https://github.com/inejc/paragraph-vectors I would appreciate any kind of feedback. Contributions in any form are also more than welcome (I have already opened some issues regarding future work).
st115055
Hi all, I use PyTorch version 0.2.0_4 and get an IndexError which I cannot explain:

print("X:", x.size())
print("TYPE:", type(self.neuron_map[k]))

gives

X: torch.Size([25, 8])
TYPE: <class 'list'>

Now x[:, self.neuron_map[k]] results in

IndexError: When performing advanced indexing the indexing objects must be LongTensors or convertible to LongTensors

I cannot understand why this happens and I have no idea how to fix this. Any help appreciated.
st115056
Can you do print(self.neuron_map[k])? I'm curious about its contents. Also try: x[:, torch.LongTensor(self.neuron_map[k])]
st115057
print("INDS:", self.neuron_map[k]) results in: INDS: [0, 1] Then, inds = torch.LongTensor(self.neuron_map[k]) runs into RuntimeError: tried to construct a tensor from a int sequence, but found an item of type numpy.int64 at index (0) I actually found a workaround: inds = np.array(self.neuron_map[k], dtype=np.int64) inds = torch.LongTensor(inds) nn_list.append(self.linears[k](x[:, inds])) I actually have an additional question. The reason, I am splitting the tensor is to apply linear units (like in last posted code line). For the result, i use: x_out = torch.cat(nn_list, 1) How efficient is this, as compared to manually implement an autograd.Function (forward and backward)?
st115058
It should be pretty efficient if x[:, inds] is large enough; the matrix multiply will probably dominate the cost. Writing a batched matrix multiply by hand is not easy to do efficiently.
st115059
Hi, I’m just starting with pytorch, so starting the models from the basic. So I was implementing the numpy model into pytorch. Following is the code I was trying. import torch import numpy as np import pandas as pd admissions = pd.read_csv('https://stats.idre.ucla.edu/stat/data/binary.csv') # Make dummy variables for rank data = pd.concat([admissions, pd.get_dummies(admissions['rank'], prefix='rank')], axis=1) data = data.drop('rank', axis=1) # Standarize features for field in ['gre', 'gpa']: mean, std = data[field].mean(), data[field].std() data.loc[:, field] = (data[field] - mean) / std # Split off random 10% of the data for testing np.random.seed(21) sample = np.random.choice(data.index, size=int(len(data) * 0.9), replace=False) data, test_data = data.ix[sample], data.drop(sample) # Split into features and targets features, targets = data.drop('admit', axis=1), data['admit'] features_test, targets_test = test_data.drop('admit', axis=1), test_data['admit'] dtype = torch.FloatTensor m = torch.nn.Sigmoid() n_hidden = 2 epochs = 10 learnrate = 0.005 n_records, n_features = features.shape last_loss = None weights_input_hidden = torch.randn(n_features, n_hidden).type(dtype) weights_hidden_output = torch.randn(n_hidden).type(dtype) for e in range(epochs): del_w_input_hidden = torch.from_numpy(np.zeros(weights_input_hidden.size())).type(dtype) del_w_hidden_output = torch.from_numpy(np.zeros(weights_hidden_output.size())).type(dtype) for x, y in zip(features.values, targets): hidden_input = torch.mm(x, weights_input_hidden) hidden_output = m(hidden_input) output = m(torch.mm(hidden_output, weights_hidden_output)) error = y - output output_error_term = error * output * (1 - output) hidden_error = torch.mm(output_error_term, weights_hidden_output) hidden_error_term = hidden_error * hidden_output * (1 - hidden_output) del_w_hidden_output += output_error_term * hidden_output del_w_input_hidden += hidden_error_term * x[:, None] weights_input_hidden += learnrate * del_w_input_hidden / n_records weights_hidden_output += learnrate * del_w_hidden_output / n_records if e % (epochs / 10) == 0: hidden_output = m(torch.mm(x, weights_input_hidden)) out = m(np.dot(hidden_output, weights_hidden_output)) loss = np.mean((out - targets) ** 2) if last_loss and last_loss < loss: print("Train loss: ", loss, " WARNING - Loss Increasing") else: print("Train loss: ", loss) last_loss = loss hidden = m(torch.mm(features_test, weights_input_hidden)) out = m(torch.mm(hidden, weights_hidden_output)) predictions = out > 0.5 accuracy = np.mean(predictions == targets_test) print("Prediction accuracy: {:.3f}".format(accuracy)) The error I’m getting is the following: Traceback (most recent call last): File “pytorch_tutorial.py”, line 50, in hidden_input = torch.mm(x, weights_input_hidden) TypeError: torch.mm received an invalid combination of arguments - got (numpy.ndarray, torch.FloatTensor), but expected one of: (torch.SparseFloatTensor mat1, torch.FloatTensor mat2) didn’t match because some of the arguments have invalid types: (!numpy.ndarray!, torch.FloatTensor) (torch.FloatTensor source, torch.FloatTensor mat2) didn’t match because some of the arguments have invalid types: (!numpy.ndarray!, torch.FloatTensor) I’m not getting how to convert the “x” into “torch.FloatTensor”. If someone can please guide me, how to resolve the issue. Edit: For comparison I’m putting the numpy code as well. 
def sigmoid(x): return 1 / (1 + np.exp(-x)) n_hidden = 2 epochs = 10 learnrate = 0.005 n_records, n_features = features.shape last_loss = None weights_input_hidden = np.random.normal(scale=1 / n_features ** .5, size=(n_features, n_hidden)) weights_hidden_output = np.random.normal(scale=1 / n_features ** .5, size=n_hidden) for e in range(epochs): del_w_input_hidden = np.zeros(weights_input_hidden.shape) del_w_hidden_output = np.zeros(weights_hidden_output.shape) for x, y in zip(features.values, targets): hidden_input = np.dot(x, weights_input_hidden) hidden_output = sigmoid(hidden_input) output = sigmoid(np.dot(hidden_output, weights_hidden_output)) error = y - output output_error_term = error * output * (1 - output) hidden_error = np.dot(output_error_term, weights_hidden_output) hidden_error_term = hidden_error * hidden_output * (1 - hidden_output) del_w_hidden_output += output_error_term * hidden_output del_w_input_hidden += hidden_error_term * x[:, None] weights_input_hidden += learnrate * del_w_input_hidden / n_records weights_hidden_output += learnrate * del_w_hidden_output / n_records if e % (epochs / 10) == 0: hidden_output = sigmoid(np.dot(x, weights_input_hidden)) out = sigmoid(np.dot(hidden_output, weights_hidden_output)) loss = np.mean((out - targets) ** 2) if last_loss and last_loss < loss: print("Train loss: ", loss, " WARNING - Loss Increasing") else: print("Train loss: ", loss) last_loss = loss hidden = sigmoid(np.dot(features_test, weights_input_hidden)) out = sigmoid(np.dot(hidden, weights_hidden_output)) predictions = out > 0.5 accuracy = np.mean(predictions == targets_test) print("Prediction accuracy: {:.3f}".format(accuracy)) Thank you!
st115060
Going from numpy to pytorch tensors and back is very simple.

Pytorch tensor to Numpy array: numpy_array = pytorch_tensor.numpy()
Numpy array to Pytorch tensor: pytorch_tensor = torch.from_numpy(numpy_array)

More info here.
st115061
dhpollack: x = torch.from_numpy(x).float()

@dhpollack: Thanks for your reply. But after implementing your suggestion, I'm getting the following error:

Traceback (most recent call last):
File "pytorch_tutorial.py", line 50, in
hidden_input = torch.mm(x, weights_input_hidden)
RuntimeError: matrices expected, got 1D, 2D tensors at d:\downloads\pytorch-master-1\torch\lib\th\generic/THTensorMath.c:1233

This code works perfectly when I run it using numpy, so I must be making some mistake in the conversion to pytorch.
st115062
What is happening is that numpy is more lenient with regards to vector/matrix multiplication than pytorch, so you need to make one or both of the tensors 2-dimensional rather than 1d. Look at the size of each tensor (x.size(), weights_input_hidden.size()); you'll find one or both have just one dimension. To add dummy dimensions use any (but not all!) of the following:

x = x.unsqueeze(0)
x.unsqueeze_(0)
x = x.view(1, -1).contiguous()
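A tiny standalone sketch of the mismatch and the fix (the shapes here are only illustrative):

import numpy as np
import torch

x = torch.from_numpy(np.random.rand(6)).float()  # 1D: size (6,)
w = torch.randn(6, 2)                             # 2D: size (6, 2)

# torch.mm(x, w) would fail: both arguments must be 2D matrices.
h = torch.mm(x.unsqueeze(0), w)                   # (1, 6) x (6, 2) -> (1, 2)
print(h.size())

# Alternatively, torch.mv handles the matrix-vector case directly.
h2 = torch.mv(w.t(), x)                           # size (2,)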
st115063
Full example of going from Numpy to PyTorch for binary classification: https://github.com/QuantScientist/Deep-Learning-Boot-Camp/blob/master/day02-PyTORCH-and-PyCUDA/PyTorch/18-PyTorch-NUMER.AI-Binary-Classification-BCELoss-0.691839667509.ipynb
st115064
@QuantScientist: Thanks for sharing the link. I already checked it and it's a wonderful presentation, but I wanted a simpler conversion, which I have already done. Yours is the next level of complexity, which I'll try.
st115065
Hi guys, I'm new to pytorch and cuda (BTW, pytorch is quite friendly to newcomers), and I'm confused when reading this code:

`net = torch.nn.DataParallel(net)`
`net = net.cuda()`

I only know that cuda() and DataParallel() have something to do with GPUs and parallel computation, etc. But what's the difference between those two lines? What do they do, respectively? Also, what happens if the second line is removed? Thanks in advance!
st115066
DataParallel wraps your model so that each forward pass is run across multiple GPUs (the input batch is scattered across the devices and the outputs are gathered back). .cuda() just moves your model's parameters onto the GPU.
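A sketch of the usual pattern (assuming two visible GPUs; the device ids and sizes are only illustrative):

import torch
import torch.nn as nn
from torch.autograd import Variable

net = nn.Linear(10, 2)
net = net.cuda()                                # move the parameters to GPU 0
net = nn.DataParallel(net, device_ids=[0, 1])   # replicate across GPUs on each forward

x = Variable(torch.randn(8, 10).cuda())
out = net(x)  # the batch of 8 is split 4/4 across the two GPUs

As far as I know, if you drop the .cuda() call the parameters stay on the CPU, and DataParallel (which expects the module to live on the first GPU in device_ids) will fail when you feed it CUDA inputs.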
st115067
I am trying to train a siamese network for a sentence-similarity task. I apply the same LSTM (with pack_padded_sequence) to the two sentences, take the norm of the difference between the two final outputs as the similarity, compute the error against the actual similarity score, and backpropagate. After some time (still within the first epoch) the gradients become very small and then turn into NaN.