st104700
Hi, I recently upgraded PyTorch to version 0.4 and tried to run it on Windows with multiple GPUs. However, the training speed is pretty slow, while I get much faster speed with the same GPUs on another PC with Ubuntu. What's the possible reason and how can I solve it? Thanks a lot!
st104701
I have a simple CNN image classifier inspired by the CIFAR-10 example from the tutorial. I want to check the norm of the gradient with respect to all weights of the network. I read here on this forum that one way to do this is accessing Module.parameters() after the .backward() pass. However, when trying this:

for p in net.parameters():
    print(type(p), p.grad.size())

I have the following output:

<class 'torch.nn.parameter.Parameter'> torch.Size([6, 3, 5, 5])
<class 'torch.nn.parameter.Parameter'> torch.Size([6])
<class 'torch.nn.parameter.Parameter'> torch.Size([16, 6, 5, 5])
<class 'torch.nn.parameter.Parameter'> torch.Size([16])
<class 'torch.nn.parameter.Parameter'> torch.Size([120, 400])
<class 'torch.nn.parameter.Parameter'> torch.Size([120])
<class 'torch.nn.parameter.Parameter'> torch.Size([84, 120])
<class 'torch.nn.parameter.Parameter'> torch.Size([84])
<class 'torch.nn.parameter.Parameter'> torch.Size([10, 84])
<class 'torch.nn.parameter.Parameter'> torch.Size([10])

First line: I recognize the dimension of the weight kernel of layer 1. Second line: ??? Every second line is an unexpected item to me. Why does it appear in Module.parameters()? Thanks for helping me understand this. Here is my network class:

class Net2(nn.Module):
    def __init__(self):
        super(Net2, self).__init__()
        self.conv = nn.Sequential(nn.Conv2d(3, 6, 5),
                                  nn.ReLU(),
                                  nn.MaxPool2d(2, 2),
                                  nn.Conv2d(6, 16, 5),
                                  nn.ReLU(),
                                  nn.MaxPool2d(2, 2))
        self.fc = nn.Sequential(nn.Linear(16*5*5, 120),
                                nn.ReLU(),
                                nn.Linear(120, 84),
                                nn.ReLU(),
                                nn.Linear(84, 10))

    def forward(self, x):
        x = self.conv(x)
        x = x.view(-1, 16*5*5)
        x = self.fc(x)
        return x
st104702
Try

for n, p in net.named_parameters():
    print(n, type(p), p.grad.size())

for an additional clue. Best regards, Thomas
st104703
Hi Thomas, Thanks a lot for your quick reaction. I was not aware of the existence of Module.named_parameters() and indeed it helped me understand what was going on. The new output is:

conv.0.weight <class 'torch.nn.parameter.Parameter'> torch.Size([6, 3, 5, 5])
conv.0.bias <class 'torch.nn.parameter.Parameter'> torch.Size([6])
conv.3.weight <class 'torch.nn.parameter.Parameter'> torch.Size([16, 6, 5, 5])
conv.3.bias <class 'torch.nn.parameter.Parameter'> torch.Size([16])
fc.0.weight <class 'torch.nn.parameter.Parameter'> torch.Size([120, 400])
fc.0.bias <class 'torch.nn.parameter.Parameter'> torch.Size([120])
fc.2.weight <class 'torch.nn.parameter.Parameter'> torch.Size([84, 120])
fc.2.bias <class 'torch.nn.parameter.Parameter'> torch.Size([84])
fc.4.weight <class 'torch.nn.parameter.Parameter'> torch.Size([10, 84])
fc.4.bias <class 'torch.nn.parameter.Parameter'> torch.Size([10])

So these additional lines refer to the bias parameters. By the way, I didn't expect the bias to be constant for every channel, so I learnt something else… Best regards, Adrien
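For the original goal of checking the gradient norm, here is a minimal sketch building on named_parameters() (it assumes net exists and .backward() has already been called; the total-norm computation is an illustration, not part of the thread):

total_sq = 0.0
for name, p in net.named_parameters():
    if p.grad is not None:
        param_norm = p.grad.norm(2)  # L2 norm of this parameter's gradient
        print(name, param_norm.item())
        total_sq += param_norm.item() ** 2
total_norm = total_sq ** 0.5  # global gradient norm over all parameters
print('total grad norm:', total_norm)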
st104704
NumPy and Theano support tensordot. It would be nice if PyTorch also had this feature.
st104705
If numpy supports it, PyTorch supports it, through the use of numpy():

import torch
import numpy as np

M = torch.randn(3, 3, 4)
v = torch.randn(2, 2, 3)
out = torch.Tensor(np.tensordot(v.numpy(), M.numpy(), axes=[[2], [0]]))

Not as concise, but it's there.
st104706
Not exactly true, because if you want to backprop through it, you will break the graph…
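For completeness, the same contraction can be written with differentiable tensor ops so the graph stays intact. A sketch for the axes=[[2], [0]] case above (requires_grad is added to make the point about backprop):

import torch

M = torch.randn(3, 3, 4)
v = torch.randn(2, 2, 3, requires_grad=True)
# contract the last axis of v with the first axis of M:
# flatten to 2-D, matrix-multiply, then restore the outer shapes
out = v.reshape(-1, 3).mm(M.reshape(3, -1)).reshape(2, 2, 3, 4)
out.sum().backward()  # gradients flow back to v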
st104707
For example, here is a snippet of code from an old version of PyTorch:

# classifier is the classifier of a torchvision pre-trained model
fc6 = nn.Conv2d(512, 4096, kernel_size=7)
fc6.weight.data.copy_(classifier[0].weight.data.view(4096, 512, 7, 7))
fc6.bias.data.copy_(classifier[0].bias.data)

Is there a better way to write this in version 0.4? Thanks in advance!
st104708
Operations on .data are hidden from autograd. In this case, if you used weight and bias in a graph before this segment and don't use .data, .detach(), or with torch.no_grad(), autograd may complain about tensors needed for backward being modified in-place.
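For the snippet in the question, a 0.4-style sketch using torch.no_grad() instead of .data (classifier as defined above; this is one possible rewrite, not the only correct one):

fc6 = nn.Conv2d(512, 4096, kernel_size=7)
with torch.no_grad():
    # in-place copies inside this block are hidden from autograd
    fc6.weight.copy_(classifier[0].weight.view(4096, 512, 7, 7))
    fc6.bias.copy_(classifier[0].bias)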
st104709
If I understood the migration guide correctly, we can simply replace .data with .detach() if the operation was not in-place, and use torch.no_grad() if it was an in-place operation. Like in this example: Detach and .data
st104710
Yes, you can generally do that, unless you are doing some hacks that you want hidden from autograd.
st104711
I have a layer that is forward-only, and using .data was the simplest way to implement it: second_loss(func(x.data), y). I could use the no_grad context, but that would have been not as elegant, because I do want grad for second_loss, just not for func.
st104712
What happens if I have second_loss(func(x.detach()), func2(x.detach()), y)? Then it would only matter whether x.detach(), being a function whose implementation I do not know, has an overhead or not.
st104713
dashesy: "whether x.detach(), being a function whose implementation I do not know, has an overhead or not."

Given that x.data is a property that does function calls in the background to re-wrap the underlying storage in a new variable, I would not know the overhead of x.data either (but I believe both are similar, after looking at the implementation). The migration guide fairly clearly advises: "However, .data can be unsafe in some cases. […] A safer alternative is to use x.detach(), […]" Best regards, Thomas
st104714
Good morning, I would like to represent a document by the sum of its weighted embeddings. I use: a batch size of 128 documents, each containing 1000 words; an embedding size of 300. This adds up to a 128x1000x300 tensor, which we'll mark as E. I also have 1000 weights per document, corresponding to the words, which adds up to a 128x1000 tensor, which we'll mark as W. For every word in the document I want to multiply it by the corresponding weight, which means multiplying all 300 embedding components by the same weight. I would be grateful if you could help me with that. Thank you, Ortal
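A minimal sketch of the broadcasting approach for this (shapes as described in the question; E and W here are illustrative random tensors): unsqueeze W to (128, 1000, 1) so it broadcasts over the 300 embedding components, then sum over the word dimension:

import torch

E = torch.randn(128, 1000, 300)  # batch x words x embedding
W = torch.randn(128, 1000)       # one weight per word

weighted = E * W.unsqueeze(2)    # (128, 1000, 300): each word scaled by its weight
docs = weighted.sum(dim=1)       # (128, 300): one vector per document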
st104715
I need to use the softmax function with some base of the exponent other than e. I can do so manually, but I am afraid it might not be numerically stable. Any simple solution to it?
st104716
Solved by Naman-ntc in post #2
st104717
Okay, it was me being stupid!!! If you want to change the exponent base to some k other than e, just multiply the input by ln k.
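A sketch of that trick: since k^x = e^(x·ln k), scaling the logits by ln k before the standard softmax changes the base while keeping the numerically stable implementation (the logits tensor here is an illustrative example):

import math
import torch
import torch.nn.functional as F

x = torch.randn(4, 10)  # illustrative logits
k = 2.0
out = F.softmax(x * math.log(k), dim=1)  # softmax with base k instead of e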
st104718
Hi, After I created an nn.Embedding object and changed the weights manually, the values at the padding index also changed, and the result of the embedding is not what I wanted. Don't you think that in the mechanism of nn.Embedding, even though the initialization values are changed manually, the values at the padding index should output 0s? Following is the example code I tried:

embed = nn.Embedding(5, 10, padding_idx=4)
initrange = 0.5
embed.weight.data.uniform_(-initrange, initrange)
print embed(Variable(torch.LongTensor([4])))
print embed(Variable(torch.LongTensor([4])))

Variable containing:
-0.2718 -0.1496  0.2677 -0.3810 -0.3220 -0.4013  0.2528  0.0429  0.1287 -0.3817
[torch.FloatTensor of size 1x10]
st104719
padding_idx is just a specific index in the weight matrix, so there is no mechanism of separation. After you change the weights, you have to reset the row at padding_idx to zeros, i.e.:

embed.weight.data[4] = 0
st104720
Isn't padding_idx special in the sense that we do not apply gradients to it? Or is the embedding corresponding to the padding index also updated when we use an optimizer?
st104721
padding_idx is ignored in computing backward gradients. Here's the code location reflecting that: https://github.com/pytorch/pytorch/blob/1848cad10802db9fa0aa066d9de195958120d863/aten/src/ATen/native/Embedding.cpp#L117
st104722
Thanks for the pointer. I had the confusion because I was using SparseEmbeddings with PyTorch 0.3.0, where the embedding corresponding to the padding index was also updated. The bug was fixed in 0.3.1. Adding it here in case someone else stumbles across it: https://github.com/pytorch/pytorch/issues/3506
st104723
Is there an easy way to do this in a general way? I'm doing seq2seq where the inputs are a set of variables rather than a single value. I've got a special start-of-sequence set value that I'd like to use to trigger operations within the forward, but I'm hoping that I can somehow prevent the weights from being updated. The approach I'm planning for now is to make the sequence all 0, with the target also 0, and hope that the net chooses the easy path of just connecting input to output, but ideally I'd like to just not update the weights at all. I'm also curious about the implementation here within the embedding layer. Is there corresponding code preventing the rest of the net from updating? Or does padding_idx just prevent the embedding from updating? Thanks in advance!
st104724
Will there be more batch matrix operations, such as batch versions of tril, triu, diag, trace? I can think of hacky ways to implement them by creating batch mask matrices, but that doesn't seem efficient.
st104725
At this time we don't have better plans for these than a naive for-loop. What are your use cases where these are needed? Can you elaborate?
st104726
I'm trying to implement the Normalized Advantage Function. In this case, you need to compute x^T P x in batch mode. There's already an implementation available here, but it requires creating a mask and multiplying the resulting matrix by it. It works for now, but I was hoping to play around with variations of NAF, and these methods would help me.
st104727
I'm not sure about your usage of $x^{\top}Px$; the Bilinear layer might be helpful.
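For reference, the batched quadratic form itself can be written with torch.bmm; a sketch assuming x of shape (B, n) and P of shape (B, n, n):

import torch

B, n = 32, 4
x = torch.randn(B, n)
P = torch.randn(B, n, n)

# x^T P x per batch element: (B,1,n) @ (B,n,n) @ (B,n,1) -> (B,1,1)
out = torch.bmm(x.unsqueeze(1), torch.bmm(P, x.unsqueeze(2))).view(B)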
st104728
Is there an example of the usage of the Bilinear layer? I wonder, for example, how one could use it in the MNIST example.
st104729
I think there is no way to do it in 0.4.0. By looking at the doc of the latest version (0.5), I found support for batch diagonal and einsum('bii->bi', (batch_matrix,)).
st104730
Just as a note regarding batch diag. Let's assume that we want to find the diagonals of N matrices of the same size (R, C) stacked in a variable x of size (N, R, C).

N = 2
R = 5
C = 3
x = torch.rand(N, R, C)

One way to get the diagonals is as follows:

x[[...] + [torch.arange(min(x.size()[-2:]), dtype=torch.long)] * 2]

However, this is very inefficient. The function that is implemented in PyTorch versions strictly greater than 0.4.0:

torch.diagonal(x, dim1=-2, dim2=-1)

A temporary equivalent solution for PyTorch 0.4.0 that only makes a view of the diagonals:

torch.from_numpy(x.numpy().diagonal(axis1=-2, axis2=-1))

Here is a comparison of the running times:

%timeit x[[...] + [torch.arange(min(x.size()[-2:]), dtype=torch.long)] * 2]
# 23.8 µs ± 687 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit torch.from_numpy(x.numpy().diagonal(axis1=-2, axis2=-1))
# 3.02 µs ± 102 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

We also note that it is slightly more efficient to stack the matrices as (R, C, N), where axis1=0 and axis2=1:

x = torch.rand(R, C, N)
%timeit x[[torch.arange(min(x.size()[:2]), dtype=torch.long)] * 2 + [...]].t()
# 19.6 µs ± 662 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit torch.from_numpy(x.numpy().diagonal(axis1=0, axis2=1))
# 2.58 µs ± 73.7 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
st104731
Hi all, I am working on binary images, and I don't want a binary image to turn into a gray image after passing through the network. Could you tell me how to make a binary activation function? Thank you!
st104732
My personal goal is to use the moving MNIST dataset to do some frame prediction with custom CLSTM cells. I am trying to understand how the normal LSTM takes in timestep data so I can mimic its behavior to create my own CLSTM cells. Let's just assume the LSTM cell takes in 2D images for now: how is the timestep of the inputs managed? For example, the data loader outputs a batch of a non-randomized sequence of images, [B, C, H, W], where B would represent the number of images. Normally with a CNN we just feed the entire batch to the network without worrying about timesteps, so here do I need to write another for loop to make sure each cell gets a different image from the above batch?
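To illustrate how a recurrent cell consumes one timestep at a time, here is a sketch with the built-in nn.LSTMCell on flattened inputs; a ConvLSTM cell would be driven the same way, just with image-shaped inputs and states (all names and shapes here are illustrative, not from the thread):

import torch
import torch.nn as nn

T, B, D = 10, 4, 64          # timesteps, batch, features
cell = nn.LSTMCell(D, 128)
x = torch.randn(T, B, D)     # a sequence: the first dim is time

h = torch.zeros(B, 128)
c = torch.zeros(B, 128)
for t in range(T):           # yes: an explicit loop over timesteps
    h, c = cell(x[t], (h, c))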
st104733
I don't want to have my net structure in the main/training file, so I created a new file in the same folder. In that file I created the net class AlexNet. That's my Net file:

import torch.nn as nn

class AlexNet(nn.Module):
    def __init__(self, num_classes=3):
        super(AlexNet, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(192, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Dropout(),
            nn.Linear(256 * 7 * 7, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(4096, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), 256 * 7 * 7)
        x = self.classifier(x)
        return x

And that's how I want to import the class in the main file:

from .Net import AlexNet

Net is the name of the file with the AlexNet class in it. When I want to run this I get the following error:

ModuleNotFoundError: No module named '__main__.Net'; '__main__' is not a package

I already tried it with an empty __init__.py file in the same folder, but it changed nothing.
st104734
It works like that, but then I get red underlines under Net and AlexNet in the statement from Net import AlexNet, with the message: Unresolved reference 'AlexNet'.
st104735
Does your IDE give you any hints on why it’s complaining? If it’s working, I would just ignore it.
st104736
I use PyCharm as my IDE and the hint is Unresolved reference 'AlexNet'. Yeah, I know I can ignore it, but it's marked like an error, and I would like to know the reason for it.
st104737
Recently, I updated my PyTorch from 0.3 to 0.4, so I also rewrote my C extension using the new C++ APIs in 0.4. Previously, if I wrapped my C extension with DataParallel, the CPU usage of my script could go above 100% (170%). However, if I wrapped the new C++ extension with DataParallel, the CPU usage could not go above 100% and the training was slowed down. After doing some research, I suspected that this is related to the GIL, and I also found a way to release the GIL in my C++ source files:

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
    m.def("forward", &foo_forward, "Foo Forward", py::call_guard<py::gil_scoped_release>());
    m.def("backward", &foo_backward, "Foo Backward", py::call_guard<py::gil_scoped_release>());
}

After I added py::call_guard<py::gil_scoped_release>(), the CPU usage could go above 100% and the training speed returned to normal. I just wonder if this will cause any potential problems. Thanks in advance.
st104738
Solved by royboy in post #2
st104739
This should be fine. There shouldn’t be any pytorch-specific problems, but of course all the typical things to avoid when releasing GIL in pybind will apply.
st104740
Can you please share some code showing how you added the line? Were you using CUDA, or did you run your network on the CPU?
st104741
I was curious why my old C extensions did not suffer from the GIL. It looks like ffi, which was previously used to compile PyTorch extensions, releases the GIL by default. Is that correct?
st104742
I was just following the tutorial here, https://github.com/pytorch/pytorch/releases/tag/v0.4.0, and added py::call_guard<py::gil_scoped_release>() as the last argument to m.def in PYBIND11_MODULE. The extension runs on the GPU. It has some looping, so it is CPU intensive as well.
st104743
Do you know of any documentation for the m.def arguments? What are the implications? I wonder if it would break things, assuming we stick to ATen and kernels (no Python object references).
st104744
PyTorch uses a library called pybind11 to create Python bindings. You can find the documentation here: https://pybind11.readthedocs.io/en/stable/index.html. I am not sure if releasing the GIL would break anything.
st104745
I am new to PyTorch, and I am working on training ImageNet. It seems like lots of others have this issue too. I don't know what causes it or how to fix it. Server: [screenshot: server.png] Local: [screenshot: local.png] The local machine runs on 1 GPU, the server runs on 4 GPUs (the batch size is different; there is not enough memory on the local machine). You can see that the server's average data load time is around 4 seconds, but the local machine only needs 0.03 seconds. Sometimes the server takes 30 seconds to load data. I think this is because it needs to load the data and then separate it across multiple GPUs, which is DataParallel. But why is it that slow? It would take 22.5 days over 100 epochs just to load the data but only 3.5 days to train. The code I modified is from https://github.com/jiecaoyu/XNOR-Net-PyTorch (ImageNet). I did not change the data loading code. Does someone have a similar issue and fixed it? Please help me.
st104746
It looks like the time is approximately the same (~0.7s) besides some spikes on the server. Could you try to increase the number of workers and run it again?
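For reference, this is the knob being suggested: a sketch of a DataLoader with several worker processes so batches are prepared in the background (dataset, the batch size, and pin_memory are illustrative choices, not from the thread):

from torch.utils.data import DataLoader

loader = DataLoader(dataset,
                    batch_size=256,
                    shuffle=True,
                    num_workers=8,    # background processes preparing batches
                    pin_memory=True)  # speeds up host-to-GPU copies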
st104747
I am trying to understand the reason that init_method is needed in the distributed package. NCCL2 only needs a way to broadcast the master's ncclGetUniqueId among the nodes and suggests MPI_Bcast, which I have used and which works. So why is there no mpi:// option for init_method, which seems like the more common choice? All other init methods (even file://) seem to rely on a master address and port, but that will be difficult if the ports are not open among nodes, or if there is more than one network interface (NIC). I now find that file:// may hang if there are multiple adapters, some of which do not resolve (docker, …). Is there a way to have a custom init_method, so that I can use MPI (which is already set up)? I want to use MPI for init, not for the backend. I created a feature request here.
st104748
Hello, I tried to install PyTorch on my Raspberry Pi 3, following this tutorial: https://gist.github.com/fgolemo/b973a3fa1aaa67ac61c480ae8440e754 But it doesn't work. Every time I get an error from CMake:

running install
running build_deps
+ WITH_CUDA=0
+ WITH_ROCM=0
+ WITH_NNPACK=0
+ WITH_MKLDNN=0
+ WITH_GLOO_IBVERBS=0
+ WITH_DISTRIBUTED_MW=0
+ [[ 4 -gt 0 ]]
+ case "$1" in
+ WITH_NNPACK=1
+ shift
+ [[ 3 -gt 0 ]]
+ case "$1" in
+ break
+ CMAKE_INSTALL='make install'
+ USER_CFLAGS=
+ USER_LDFLAGS=
+ [[ -n '' ]]
+ [[ -n '' ]]
+ [[ -n '' ]]
++ dirname tools/build_pytorch_libs.sh
+ cd tools/..
+++ pwd
++ printf '%q\n' /root/Download/pytorch
+ PWD=/root/Download/pytorch
+ BASE_DIR=/root/Download/pytorch
+ TORCH_LIB_DIR=/root/Download/pytorch/torch/lib
+ INSTALL_DIR=/root/Download/pytorch/torch/lib/tmp_install
+ THIRD_PARTY_DIR=/root/Download/pytorch/third_party
+ CMAKE_VERSION=cmake
+ C_FLAGS=' -DTH_INDEX_BASE=0 -I"/root/Download/pytorch/torch/lib/tmp_install/include" -I"/root/Download/pytorch/torch/lib/tmp_install/include/TH" -I"/root/Download/pytorch/torch/lib/tmp_install/include/THC" -I"/root/Download/pytorch/torch/lib/tmp_install/include/THS" -I"/root/Download/pytorch/torch/lib/tmp_install/include/THCS" -I"/root/Download/pytorch/torch/lib/tmp_install/include/THNN" -I"/root/Download/pytorch/torch/lib/tmp_install/include/THCUNN"'
+ C_FLAGS=' -DTH_INDEX_BASE=0 -I"/root/Download/pytorch/torch/lib/tmp_install/include" -I"/root/Download/pytorch/torch/lib/tmp_install/include/TH" -I"/root/Download/pytorch/torch/lib/tmp_install/include/THC" -I"/root/Download/pytorch/torch/lib/tmp_install/include/THS" -I"/root/Download/pytorch/torch/lib/tmp_install/include/THCS" -I"/root/Download/pytorch/torch/lib/tmp_install/include/THNN" -I"/root/Download/pytorch/torch/lib/tmp_install/include/THCUNN" -DOMPI_SKIP_MPICXX=1'
+ LDFLAGS='-L"/root/Download/pytorch/torch/lib/tmp_install/lib" '
+ LD_POSTFIX=.so.1
+ LD_POSTFIX_UNVERSIONED=.so
++ uname
+ [[ Linux == \D\a\r\w\i\n ]]
+ LDFLAGS='-L"/root/Download/pytorch/torch/lib/tmp_install/lib" -Wl,-rpath,$ORIGIN'
+ CPP_FLAGS=' -std=c++11 '
+ GLOO_FLAGS=
+ THD_FLAGS=
+ NCCL_ROOT_DIR=/root/Download/pytorch/torch/lib/tmp_install
+ [[ 0 -eq 1 ]]
+ [[ 0 -eq 1 ]]
+ [[ 0 -eq 1 ]]
+ CWRAP_FILES='/root/Download/pytorch/torch/lib/ATen/Declarations.cwrap;/root/Download/pytorch/torch/lib/THNN/generic/THNN.h;/root/Download/pytorch/torch/lib/THCUNN/generic/THCUNN.h;/root/Download/pytorch/torch/lib/ATen/nn.yaml'
+ CUDA_NVCC_FLAGS=' -DTH_INDEX_BASE=0 -I"/root/Download/pytorch/torch/lib/tmp_install/include" -I"/root/Download/pytorch/torch/lib/tmp_install/include/TH" -I"/root/Download/pytorch/torch/lib/tmp_install/include/THC" -I"/root/Download/pytorch/torch/lib/tmp_install/include/THS" -I"/root/Download/pytorch/torch/lib/tmp_install/include/THCS" -I"/root/Download/pytorch/torch/lib/tmp_install/include/THNN" -I"/root/Download/pytorch/torch/lib/tmp_install/include/THCUNN" -DOMPI_SKIP_MPICXX=1'
+ [[ '' -eq 1 ]]
+ '[' -z 4 ']'
+ BUILD_TYPE=Release
+ [[ -n '' ]]
+ [[ -n '' ]]
+ echo 'Building in Release mode'
Building in Release mode
+ mkdir -p torch/lib/tmp_install
+ for arg in "$@"
+ [[ caffe2 == \n\c\c\l ]]
+ [[ caffe2 == \g\l\o\o ]]
+ [[ caffe2 == \c\a\f\f\e\2 ]]
+ pushd /root/Download/pytorch
~/Download/pytorch ~/Download/pytorch
+ build_caffe2
+ mkdir -p build
+ pushd build
~/Download/pytorch/build ~/Download/pytorch ~/Download/pytorch
+ cmake .. -DCMAKE_BUILD_TYPE=Release -DBUILD_CAFFE2=OFF -DBUILD_ATEN=ON -DBUILD_PYTHON=OFF -DBUILD_BINARY=OFF -DBUILD_SHARED_LIBS=ON -DUSE_CUDA=0 -DUSE_ROCM=0 -DUSE_NNPACK=1 -DCUDNN_INCLUDE_DIR= -DCUDNN_LIB_DIR= -DCUDNN_LIBRARY= -DUSE_MKLDNN=0 -DMKLDNN_INCLUDE_DIR= -DMKLDNN_LIB_DIR= -DMKLDNN_LIBRARY= -DCMAKE_INSTALL_PREFIX=/root/Download/pytorch/torch/lib/tmp_install -DCMAKE_EXPORT_COMPILE_COMMANDS=1 -DCMAKE_C_FLAGS= -DCMAKE_CXX_FLAGS= -DCMAKE_EXE_LINKER_FLAGS= -DCMAKE_SHARED_LINKER_FLAGS=
CMake Error: CMake was unable to find a build program corresponding to "Unix Makefiles". CMAKE_MAKE_PROGRAM is not set. You probably need to select a different build tool.
CMake Error: CMAKE_CXX_COMPILER not set, after EnableLanguage
CMake Error: CMAKE_C_COMPILER not set, after EnableLanguage
-- Configuring incomplete, errors occurred!
Failed to run 'bash tools/build_pytorch_libs.sh --with-nnpack caffe2 nanopb libshm'

I hope someone can help me.
st104749
Yes, I found a solution. I used DietPi instead of the normal Raspbian image. After I changed this it worked, but it took too long, so I never installed it completely.
st104750
Hello, I'm trying to implement neuroevolution using PyTorch, and I run into a problem when I try to recover the perturbations generated by Gaussian noise. The principle is: I start from a base individual and create a number of offspring. For each offspring I: select an integer seed using numpy; call torch.manual_seed(numpy_seed); for each tensor in state_dict().values() create a normal perturbation using perturbation = torch.ones_like(v).normal_(); set the new tensor with v.copy_(v + perturbation*std); and record only the seed. I then get fitnesses for all offspring. Then, I want to move the base individual in the direction indicated by the rewards. The problem arises when I try to recover the perturbation. For each offspring I: get the corresponding seed and set torch.manual_seed(this_seed); then regenerate the perturbation, which is not the same! I can't figure out why. Could someone help? In case my explanations weren't clear, here's a link to the code: https://github.com/Mehd6384/EvolutionStrategies Thanks a lot!
st104751
I've tried to locate the error in your code, but couldn't find the right place. Could you explain a bit where the error occurs? The seeding and generation of the perturbation is deterministic:

torch.manual_seed(s)
perturbation = torch.ones_like(p).normal_()
print(perturbation)

This results in exactly the same values for each run.
st104752
Hey, thanks for trying it out. The problem occurs in either improve or improve_bis. When I look at the perturbation regenerated using the seed, it is never the same as the one I created in the add_pop method.
st104753
In improve and improve_bis you are getting the seeds from normalized_fitness which is returned by normalize_dico, while in add_pop you get a new random seed from a np.random.randint. It looks like the results should be different. Am I missing something?
st104754
Well, in add_pop I actually use the int from numpy as a seed in torch.manual_seed, and I keep it as a memory in fitness. Afterwards, when I'm trying to improve the base individual, I regenerate the perturbation using this same seed, so I expected it to be the same perturbation as before. No?
st104755
Ok, I see. Thanks for the info! I see a difference between improve and improve_bis in the order of setting the seed and getting the parameters, but this does not explain the difference between add_pop and improve_bis. I'll try to debug it later.
st104756
Yep, I tried both orders to see if I could figure out the problem, but so far in vain. Thanks a lot, that's very kind!
st104757
Sorry for the late reply. It seems the get_clone function is messing up the random values, and I don't know exactly what happens. However, just set your seed in add_pop right before the for loop and you should get the same results. I'll try to debug this issue a bit further.
st104758
OK, I found the issue. In get_clone you create a new instance of Ind and copy the weights into this copy. While instantiating this class, the Linear layers are initialized with random weights, so the PRNG has already been used by the time we get to the perturbation code.
st104759
Hey! Thanks a lot for sticking with the problem! That's very nice of you. I'm not sure I got it: even if the layers are initialized with random weights, shouldn't this be fixed when I copy the weights?
st104760
The weights will be the same if you copy them, but you are initializing Ind again in get_clone. This means that the PRNG will be called for sampling the weights, and calling .normal_() after this will yield different random numbers. Have a look at the following example:

seed = 2809

torch.manual_seed(seed)
print(torch.empty(5).normal_())

torch.manual_seed(seed)
print(torch.empty(5).normal_())  # Same numbers as before

torch.manual_seed(seed)
model = nn.Linear(10, 10)  # Init a model before sampling
print(torch.empty(5).normal_())  # Numbers are different, since the PRNG was called in nn.Linear
st104761
Oh ! I see ! Then I suppose that I should call manual_seed after initializing the model, no ?
st104762
Haha! You were right! It does work! Here's the test output:

Adding nb 0 seed: 20595
tensor([[ 1.0241, -0.2424],
        [ 1.8578, -1.2400],
        [ 1.1870, -1.7786],
        [ 2.1059,  0.0410],
        [-0.6069, -0.7865]])
tensor([-0.3170, -0.1979,  0.3467, -0.1865,  1.0867])
tensor([[-0.4042,  1.0822,  0.0599, -0.2872,  0.0582]])
tensor([-0.2743])
Adding nb 1 seed: 155213
tensor([[-0.8802, -0.0765],
        [-2.2938, -0.9288],
        [-1.2693, -0.8255],
        [ 0.0502, -0.4021],
        [-0.6032,  1.7732]])
tensor([ 0.0450, -0.4972,  1.1490,  1.3801,  2.4020])
tensor([[-0.7401,  1.1411,  0.3848, -0.8668, -0.6465]])
tensor([-0.7383])
seed: 20595 Character 0
tensor([[ 1.0241, -0.2424],
        [ 1.8578, -1.2400],
        [ 1.1870, -1.7786],
        [ 2.1059,  0.0410],
        [-0.6069, -0.7865]])
tensor([-0.3170, -0.1979,  0.3467, -0.1865,  1.0867])
tensor([[-0.4042,  1.0822,  0.0599, -0.2872,  0.0582]])
tensor([-0.2743])
seed: 155213 Character 1
tensor([[-0.8802, -0.0765],
        [-2.2938, -0.9288],
        [-1.2693, -0.8255],
        [ 0.0502, -0.4021],
        [-0.6032,  1.7732]])
tensor([ 0.0450, -0.4972,  1.1490,  1.3801,  2.4020])
tensor([[-0.7401,  1.1411,  0.3848, -0.8668, -0.6465]])
tensor([-0.7383])
st104763
Well, at least this part works; the rest is rather far from it, but that's another story. Thanks a lot for debugging this!
st104764
Hi, I am new to PyTorch, so sorry in advance if this is a trivial question. I would simply like to rescale nn.MSELoss() in the spirit of rescaled_MSE() = C*MSELoss(). I tried to define a tensor C = torch.FloatTensor(1); C[0] = "some number", but this gives an error because the multiplication does not accept the MSELoss as an argument. How can I rescale nn.MSELoss() without constructing a loss function myself? I know that this amounts to rescaling the gradients/learning rate, but I would like to fix the scale of the loss function to separate it conceptually from the learning-rate scale. Thank you, any comments are appreciated.
st104765
You could just rescale the loss. Currently it looks like you are trying to rescale a function, which is probably the workflow in static frameworks (e.g. Theano). Try the following:

criterion = nn.MSELoss()
scale = torch.tensor([2.0])

# your training procedure
loss = criterion(output, target)
loss = loss * scale
loss.backward()
...
st104766
Great! Thanks @ptrblck for your suggestion. The only change is that I need to use scale = torch.FloatTensor([2.0]). Otherwise it works.
st104767
If you are using the current stable release, you don't have to change this line. Have a look at the website for install instructions. There are a lot of improvements and nice features!
st104768
I compiled PyTorch from GitHub, and the version was '0.5.0a0+06d5dd0'. In version 0.3.0 I successfully used 'THCPFloatTensor_New' to convert a 'THCudaTensor*' to a 'PyObject*', but it fails in the new version, and I get the same error when using the function 'THPFloatTensor_New'. How should I replace the function? And could you tell me whether there is any article or tutorial on using the C++ API? Thank you very much.
st104769
Hi guys, I'm trying to optimize the following parameters while keeping the L param always lower triangular with a positive diagonal and the noise param always with a positive diagonal, but they are not updating correctly in the forward pass. I guess I'm doing something wrong with the autograd mechanism. Any help appreciated. Here is a sample snippet:

import torch
from torch.nn.parameter import Parameter

class Model(torch.nn.Module):
    def __init__(self, dim):
        """ Constructor. """
        super(Model, self).__init__()
        self.noise_vector = Parameter(torch.zeros(dim).cuda())
        self.noise = Parameter(torch.diag(torch.exp(self.noise_vector.data)).cuda())
        self.L_chol_cov_theta = Parameter(torch.randn(dim, dim).cuda())
        self.log_diag_L_chol_cov_theta = Parameter(torch.randn(dim).cuda())
        self.L = Parameter(torch.randn(dim, dim).cuda())
        self.L_chol_cov_theta.data = torch.tril(self.L_chol_cov_theta.data)
        self.L_chol_cov_theta.data -= torch.diag(torch.diag(self.L_chol_cov_theta.data))
        self.L.data = self.L_chol_cov_theta.data + torch.diag(torch.exp(self.log_diag_L_chol_cov_theta.data))

    def forward(self):
        # update parameters
        self.L_chol_cov_theta.data -= torch.diag(torch.diag(self.L_chol_cov_theta.data))
        self.L.data = self.L_chol_cov_theta.data + torch.diag(torch.exp(self.log_diag_L_chol_cov_theta.data))
        self.noise.data = torch.diag(torch.exp(self.noise_vector.data))
        return torch.mm(self.L, self.noise_vector.view(-1, 1))

model = Model(5)
optimizer = torch.optim.SGD(filter(lambda p: p.requires_grad, model.parameters()), lr=1e-3)

for epoch in range(10):
    optimizer.zero_grad()
    forward_pass_something = model()
    loss = calc_likelihood(samples, a_ground_truth)  # calc a custom loss
    loss.backward()
    optimizer.step()
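For comparison, the usual differentiable way to enforce such constraints is to keep unconstrained parameters and rebuild the constrained tensors inside forward, without touching .data. A minimal sketch (the parameter names are illustrative, and this has not been verified against the full training code above):

import torch
import torch.nn as nn

class TriangularModel(nn.Module):
    def __init__(self, dim):
        super(TriangularModel, self).__init__()
        # unconstrained parameters that the optimizer updates directly
        self.off_diag = nn.Parameter(torch.randn(dim, dim))
        self.log_diag = nn.Parameter(torch.randn(dim))
        self.noise_vector = nn.Parameter(torch.zeros(dim))

    def forward(self):
        # rebuild the constrained tensors on every call, so gradients
        # flow back into the raw parameters through tril/exp/diag
        L = torch.tril(self.off_diag, diagonal=-1) + torch.diag(torch.exp(self.log_diag))
        noise = torch.diag(torch.exp(self.noise_vector))
        return torch.mm(L, noise)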
st104770
I'm using PyTorch 0.4 and CUDA 9.1. The error message:

RuntimeError: cuda runtime error (10) : invalid device ordinal at /opt/conda/conda-bld/pytorch_1524590031827/work/aten/src/THC/THCGeneral.cpp:70

Any idea how to fix it? Update: torch.cuda.device_count() returns 0, which is weird. Here is my nvidia-smi:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 396.26                 Driver Version: 396.26                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla P100-PCIE...  Off  | 00000000:00:04.0 Off |                    0 |
| N/A   37C    P0    31W / 250W |      0MiB / 16280MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Fixed by reinstalling CUDA/cuDNN/conda… good luck :)
st104771
I am training a simple 2-layer MLP in an online learning setting where the batch size and number of epochs are both 1. The input size is (5000000, 28) and the network is (28-100-2). However, the training speed is very slow (13 seconds for 5000 instances) compared to my previous implementation in Keras (5 seconds for 10000 instances). Is there a way to improve the speed of online training? I put the code for the custom dataset and training here. I did try training on a GPU, but the speed is significantly slower than CPU because the batch size is 1 in this case.

class Stream(Dataset):
    def __init__(self, x, y):
        self.x = x
        self.y = y
        self.length = x.size(0)

    def __iter__(self):
        return self

    def __len__(self):
        return self.length

    def __getitem__(self, idx):
        x_ = self.x[idx]
        y_ = self.y[idx]
        return x_, y_

stream = DataLoader(Stream(x_train, y_train), batch_size=1, shuffle=False, num_workers=8)

for j, (x, y) in enumerate(stream):
    loss_, acc_ = model.observe(x, y)
    loss += loss_
    acc += acc_
st104772
What is model.observe returning? Are loss_ and acc_ detached? Could you post the model code, since maybe you are storing the whole computation graph in loss.
st104773
Here is the rest of the code, thank you.

class MLP(nn.Module):
    def __init__(self, sz):
        super(MLP, self).__init__()
        self.layer1 = nn.Linear(sz, 100)
        self.layer2 = nn.Linear(100, 2)

    def forward(self, x):
        x = relu(self.layer1(x))
        x = self.layer2(x)
        return x

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.net = MLP(28)
        self.opt = torch.optim.SGD(self.parameters(), lr=0.1)
        self.bce = torch.nn.CrossEntropyLoss()

    def observe(self, x, y):
        self.train()
        self.zero_grad()
        y_ = self.net(x)
        loss = self.bce(y_, y)
        loss.backward()
        self.opt.step()
        _, idx = torch.max(y_.data.cpu(), 1, keepdim=False)
        acc = (idx == y.data.cpu()).float()
        return loss, acc
st104774
Could you change the last line to return loss.item(), acc and run it again? Right now you are returning the computation graph, which might slow down your application and increase memory usage.
st104775
I modified it accordingly, but there was not much improvement in the running time. Is there anything else I can try?
st104776
Deep learners will find torch.einsum() helpful in multiple ways. Essentially, you can use it to carry out almost any sort of index-contraction operation on two tensors, thus helping circumvent loops. See https://rockt.github.io/2018/04/30/einsum for an intro and a few examples.
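A couple of illustrative one-liners (all shapes are made up for the example; the operand-tuple syntax matches the 0.4-era API used elsewhere in this thread):

import torch

a = torch.randn(8, 3, 4)
b = torch.randn(8, 4, 5)

bmm = torch.einsum('bij,bjk->bik', (a, b))               # batched matrix multiply
diag = torch.einsum('bii->bi', (torch.randn(8, 3, 3),))  # batch diagonal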
st104777
I'm a PyTorch beginner, and I'm curious how to mimic this functionality in PyTorch. My current approach defines a function and registers it with register_backward_hook:

def normalize_grad(module, grad_input, grad_output):
    # both grad_input and grad_output are of size one, but we'll do it in general anyway
    in_tup = []
    for gi, go in zip(grad_input, grad_output):
        gin = gi.div(torch.norm(gi, p=1) + 1e-8).mul(module.strength).add(go)
        in_tup.append(gin)
    return tuple(in_tup)

I'm not sure if this is correct, because it is very dramatically changing my results.
st104778
You almost surely don't want to use the module hook, but rather add mysomething.register_hook(...) on the variables. For example, I use (from the top of my head) mysomething.register_hook(lambda x: x.clamp(min=-10, max=10)) for recreating Graves' handwriting RNN. Best regards, Thomas
st104779
So what you’re suggesting is, given a class attribute loss that is the loss computed in a module, I should call something like loss.register_hook(lambda g: g.div(torch.norm(g, p=1) + 1e-8))?
st104780
I'm not sure whether you want to do this with loss or with some intermediate quantity inside your module, but yes, that looks like about what I'd do. (Note that you're not confined to a lambda; you can also define a function if you prefer.)
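A sketch of what the tensor-hook version of the gradient normalization from the question could look like (some_layer and strength are placeholders, not names from the thread):

# inside forward, on the intermediate tensor whose gradient should be normalized
h = some_layer(x)
h.register_hook(lambda g: g.div(g.norm(p=1) + 1e-8).mul(strength))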
st104781
I'm trying to build PyTorch from source, having just installed MAGMA due to the error:

No CUDA implementation of 'potrf'. Install MAGMA and rebuild cutorch

I'm doing this on a 2013 MacBook, so I'm aware support is limited; however, I did it before, so I have faith it will work again! The problem I'm having is that when I try to call setup.py, it keeps referencing a different version of gcc. I have both 8.1.0 and 5.3.0; 5 is the one I want. I've set up environment variables corresponding to this version in ~/.bash_profile:

$ echo $CC
/usr/local/bin/gcc-5
$ echo $CXX
/usr/local/bin/g++-5

I've also set up symbolic links to /usr/bin/gcc and /usr/bin/g++ in case it was calling them. However, when I run the command

CC=gcc-5 CXX=g++-5 DCUDA_HOST_COMPILER=/usr/local/bin/gcc-5 python setup.py install

I get the error:

-- Caffe2: CUDA detected: 8.0
-- Caffe2: CUDA nvcc is: /usr/local/cuda/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda
-- Caffe2: Header version is: 8.0
-- Found cuDNN: v6.0.21 (include: /usr/local/cuda/include, library: /usr/local/cuda/lib/libcudnn.6.dylib)
CMake Error at cmake/public/cuda.cmake:321 (message):
  CUDA 8.0 is not compatible with GCC version >= 6. Use the following option to use another version (for example): -DCUDA_HOST_COMPILER=/usr/bin/gcc-5
Call Stack (most recent call first):
  cmake/Dependencies.cmake:401 (include)
  CMakeLists.txt:204 (include)
-- Configuring incomplete, errors occurred!
See also "/Users/askates/Documents/GitRepos/pytorch/build/CMakeFiles/CMakeOutput.log".
See also "/Users/askates/Documents/GitRepos/pytorch/build/CMakeFiles/CMakeError.log".

and when I look at CMakeOutput.log I see it's referencing /usr/local/bin/gcc-8. How can I make it reference gcc-5 instead of gcc-8? What am I missing?
st104782
I seem to have gotten it to work with cmake CMakeLists.txt; however, I now get the error:

CMake Error at third_party/sleef/CMakeLists.txt:27 (message):
  SLEEF does not allow in-source builds.
  You can refer to doc/build-with-cmake.md for instructions on how provide a separate build directory.
  Note: Please remove autogenerated file `CMakeCache.txt` and directory `CMakeFiles` in the current directory.
-- Configuring incomplete, errors occurred!

I've tried building sleef separately, and if I run

$ mkdir third_party/sleef/build_dir
$ cd third_party/sleef/build_dir
$ cmake ..

it builds successfully, but when I try to run setup.py I get this:

-- The CXX compiler identification is unknown
-- The C compiler identification is unknown
CMake Error at CMakeLists.txt:6 (project):
  The CMAKE_CXX_COMPILER: /usr/local/bin/g++-8 is not a full path to an existing compiler tool.
  Tell CMake where to find the compiler by setting either the environment variable "CXX" or the CMake cache entry CMAKE_CXX_COMPILER to the full path to the compiler, or to the compiler name if it is in the PATH.
CMake Error at CMakeLists.txt:6 (project):
  The CMAKE_C_COMPILER: /usr/local/bin/gcc-8 is not a full path to an existing compiler tool.
  Tell CMake where to find the compiler by setting either the environment variable "CC" or the CMake cache entry CMAKE_C_COMPILER to the full path to the compiler, or to the compiler name if it is in the PATH.
-- Configuring incomplete, errors occurred!
st104783
I'm not sure what the official way is, but here is what has consistently worked for me for a long time now: I have set up two directories with symlinks that I add to my path. In ~/pytorch/compilers I have:

g++ -> /usr/bin/g++-5
gcc -> /usr/bin/gcc-5
x86_64-linux-gnu-g++ -> /usr/bin/x86_64-linux-gnu-g++-5
x86_64-linux-gnu-gcc -> /usr/bin/x86_64-linux-gnu-gcc-5

and because I like ccache (the Debian/unstable ccache seems to do OK) I also have ~/pytorch/compilers-with-ccache:

g++ -> /usr/bin/ccache
g++-5 -> /usr/bin/ccache
gcc -> /usr/bin/ccache
gcc-5 -> /usr/bin/ccache
x86_64-linux-gnu-g++ -> /usr/bin/ccache
x86_64-linux-gnu-g++-5 -> /usr/bin/ccache
x86_64-linux-gnu-gcc -> /usr/bin/ccache
x86_64-linux-gnu-gcc-5 -> /usr/bin/ccache

Then I compile PyTorch with

PATH=~/pytorch/compilers-with-ccache/:~/pytorch/compilers/:$PATH python3 setup.py bdist_wheel

to get a whl in the dist directory. Finally, I install with

sudo pip3 install -U --no-deps dist/something.whl

because my system Python 3 goes with pip3. You might have to clean (python3 setup.py clean) before rerunning setup.py. Best regards, Thomas
st104784
I just tried that; I don't have a ccache directory though, so I left that out. I also added all the others just in case:

$ mkdir compilers; cd compilers
$ ln -s /usr/local/bin/g++-5 g++
$ ln -s /usr/local/bin/gcc-5 gcc
$ ln -s /usr/local/bin/gcc-ar-5 gcc-ar
$ ln -s /usr/local/bin/gcc-nm-5 gcc-nm
$ ln -s /usr/local/bin/gcc-ranlib-5 gcc-ranlib
$ cd ..
$ PATH=~/Documents/GitRepos/pytorch/compilers/:$PATH python setup.py bdist_wheel

I get this again:

-- The CXX compiler identification is unknown
CMake Error at CMakeLists.txt:6 (project):
  The CMAKE_CXX_COMPILER: /usr/local/bin/g++-8 is not a full path to an existing compiler tool.
  Tell CMake where to find the compiler by setting either the environment variable "CXX" or the CMake cache entry CMAKE_CXX_COMPILER to the full path to the compiler, or to the compiler name if it is in the PATH.
-- Configuring incomplete, errors occurred!

I don't know why it's still defaulting to g++-8…

$ compilers/gcc --version
gcc (GCC) 5.3.0
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
st104785
I got the whole project again, overwriting all local changes. Your instructions seemed to work up until I got to:

[ 48%] Linking CXX shared library ../lib/libcaffe2.dylib
ld: library not found for -lgcc_s
collect2: error: ld returned 1 exit status
make[2]: *** [lib/libcaffe2.dylib] Error 1
make[1]: *** [caffe2/CMakeFiles/caffe2.dir/all] Error 2
make: *** [all] Error 2
Failed to run 'bash tools/build_pytorch_libs.sh --with-cuda --with-nnpack caffe2 nanopb libshm THD'
st104786
al3x: "library not found for -lgcc_s"

So, I know nothing at all about this (I missed that you are on OS X), but googling this error message, people apparently do funny manual symlinking with some success. Best regards, Thomas
st104787
I have a model that works fine with a single GPU, but when I want to use DataParallel I get this error:

File "mmod/runtorch.py", line 224, in train_batch
  loss = sum(model(data, labels))
File "/opt/conda/lib/python2.7/site-packages/torch/nn/modules/module.py", line 491, in __call__
  result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py", line 115, in forward
  return self.gather(outputs, self.output_device)
File "/opt/conda/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py", line 127, in gather
  return gather(outputs, output_device, dim=self.dim)
File "/opt/conda/lib/python2.7/site-packages/torch/nn/parallel/scatter_gather.py", line 68, in gather
  return gather_map(outputs)
File "/opt/conda/lib/python2.7/site-packages/torch/nn/parallel/scatter_gather.py", line 63, in gather_map
  return type(out)(map(gather_map, zip(*outputs)))
File "/opt/conda/lib/python2.7/site-packages/torch/nn/parallel/scatter_gather.py", line 55, in gather_map
  return Gather.apply(target_device, dim, *outputs)
File "/opt/conda/lib/python2.7/site-packages/torch/nn/parallel/_functions.py", line 54, in forward
  ctx.input_sizes = tuple(map(lambda i: i.size(ctx.dim), inputs))
File "/opt/conda/lib/python2.7/site-packages/torch/nn/parallel/_functions.py", line 54, in <lambda>
  ctx.input_sizes = tuple(map(lambda i: i.size(ctx.dim), inputs))
RuntimeError: dimension specified as 0 but tensor has no dimensions

If I return loss.unsqueeze(dim=0), then the error message becomes:

File "/opt/conda/lib/python2.7/site-packages/torch/autograd/__init__.py", line 27, in _make_grads
  raise RuntimeError("grad can be implicitly created only for scalar outputs")
RuntimeError: grad can be implicitly created only for scalar outputs

I know of this and this issue, but is there any workaround, or do I have to update from PyTorch 0.4 to master? Please note that I have multiple losses (the value returned from model.forward() is a tuple of 0-dim loss tensors) for different parts of the last layer, and therefore I use sum() and call backward on the result.
st104788
Solved by dashesy in post #2
st104789
Thanks to the workaround here. Instead of returning a tuple of 0-dim tensors for the loss:

return tuple(loss_list)

I return:

return torch.stack(loss_list).squeeze()

Everything works.
st104790
By this I mean: after each update of the weights, I'd like to store the values in a dictionary or list. This is for a particular layer, not for the whole network.
st104791
Do you want to overwrite the old values after each update? Would this work?

model = nn.Sequential(
    nn.Linear(20, 10),
    nn.ReLU(),
    nn.Linear(10, 2)
)
optimizer = optim.SGD(model.parameters(), lr=1e-3)
x = torch.randn(1, 20)

# Store just the first layer
weights = dict(model[0].named_parameters())

for epoch in range(10):
    output = model(x)
    output.mean().backward()
    optimizer.step()
    # Update weights
    weights = dict(model[0].named_parameters())
st104792
No, I'd actually like to store all the updates in a dictionary so that I can see how the weights change as training proceeds.
st104793
Then you could try the following:

model = nn.Sequential(
    nn.Linear(20, 10),
    nn.ReLU(),
    nn.Linear(10, 2)
)
optimizer = optim.SGD(model.parameters(), lr=1e-3)
x = torch.randn(1, 20)

# Store just the first layer
weights = {k: v.clone() for k, v in model[0].named_parameters()}
updates = {k: v.clone() for k, v in model[0].named_parameters()}

for epoch in range(10):
    optimizer.zero_grad()
    output = model(x)
    output.mean().backward()
    optimizer.step()
    new_weights = {k: v.clone() for k, v in model[0].named_parameters()}
    updates = {k: new_weights[k] - weights[k] for k in weights}
    weights = new_weights
    print(updates)
st104794
Just for future reference, and for whoever else looks over this question: I decided to use a pickle file to store the weights as the network was being trained. It was simpler, since I needed to load the weights in another file that I was using. But thank you very much for your help!
st104795
Hi, I would like to use REINFORCE as described in the docs and tried this in my code, using PyTorch v0.4.0:

m = torch.distributions.categorical.Categorical(probs)
action = m.sample()
loss = -m.log_prop(action) * reward

probs is a 128x10 tensor (128 is the batch size, 10 the number of actions). Running the code I get the following error:

AttributeError: 'Categorical' object has no attribute 'log_prop'

What could I be doing wrong? I also found log_prop in torch/distributions/categorical.py, so I do not really understand the error message.
st104796
Solved by SimonW in post #2: the doc you linked uses log_prob, not log_prop.
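I.e., the corrected snippet (probs and reward as in the question):

m = torch.distributions.Categorical(probs)
action = m.sample()
loss = -m.log_prob(action) * reward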
st104797
I'm converting a list of 3D numpy arrays (from an .npz file) into a tensor like so:

data = np.load("path.npz", encoding='bytes')
data = torch.from_numpy(data['arr_0']).unsqueeze(1).float()
train_kwargs = {'data_tensor': data}

The images are 8-channel 96x96 pixels, and currently I'm using a batch size of 64, so my tensor object has a size of torch.Size([64, 1, 8, 95, 95]). Here is a portion of the code I've been using in the encoding portion of my network:

def __init__(self, z_dim=10, nc=3):
    super(VAE, self).__init__()
    self.z_dim = z_dim
    self.nc = nc
    self.encoder = nn.Sequential(
        nn.Conv2d(nc, 32, 4, 2, 1),   # B, 32, 32, 32
        nn.ReLU(True),
        nn.Conv2d(32, 32, 4, 2, 1),   # B, 32, 16, 16
        nn.ReLU(True),
        nn.Conv2d(32, 64, 4, 2, 1),   # B, 64, 8, 8
        nn.ReLU(True),
        nn.Conv2d(64, 64, 4, 2, 1),   # B, 64, 4, 4
        nn.ReLU(True),
        nn.Conv2d(64, 256, 4, 1),     # B, 256, 1, 1
        nn.ReLU(True),
        View((-1, 256*1*1)),          # B, 256
        nn.Linear(256, z_dim*2),      # B, z_dim*2
    )

When I go to start training, I call my network to return mu and logvar for reparameterizing after every iteration, like so:

x_recon, mu, logvar = self.net(x)

When I do this, I'm getting the above error. Changing nc to 8 results in the same error. I want to keep these images as 8 channels for now and continue to do 2D convolutions on them. Is there an easy way to do this? I'm also considering doing 3D convolutions later on; perhaps you could answer how to input the image as a 3-dimensional volume as well, as opposed to an 8-channel image? Thanks in advance.
st104798
Solved by ptrblck in post #23: Ah sorry, my bad! I just added my batch size of 1 to the View() layer. You should change it to: encoder: View((batch_size, -1)); decoder: View((batch_size, -1, 3, 3)). Unfortunately, you cannot use x.size(0), since the layer is defined in a Sequential, so you have to know your batch size beforehan…
st104799
Your input should have the dimensions [batch_size, channels, height, width]. Just remove the additional 1 and your model should run. x = x.squeeze(1)