st103600
Because the product is the number of input values that contribute to an output value; the sum isn't.
st103601
Is PyTorch's default weight initialization He init or a variant? Is there some paper one can cite or read on it? I just can't recognize what init type it's using. Maybe it's something that doesn't appear in a paper?
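If the default scheme matters, one option is to apply He initialization explicitly, so your code documents it regardless of the version's built-in default. A minimal sketch (the default initialization has varied across PyTorch versions, so the explicit call is the only thing guaranteed here):

import torch.nn as nn

def he_init(m):
    # Kaiming/He normal init for conv and linear weights; zero the biases.
    if isinstance(m, (nn.Conv2d, nn.Linear)):
        nn.init.kaiming_normal_(m.weight)
        if m.bias is not None:
            nn.init.constant_(m.bias, 0.0)

model = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 2))
model.apply(he_init)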
st103602
Hello, my question is about the output of the loss function (cross entropy) for models initialized with ones vs. randn. If I initialize with ones, the loss is a valid float (e.g. -107), but if I go with randn, the loss always comes out as NaN. I have checked the gradients; they flow in both cases (the model gets updated as well). The only difference is that ones shows an actual loss and randn shows a NaN loss. For extra info: my sequences are up to 40 timesteps long, but as I said I do not suspect vanishing gradients, since I have checked them manually at every step. So what could be the problem? I'm out of ideas. Thank you. Edit: the code for the loss function is below:

def custom_entropy(output_seq, label_seq):
    loss_all = []  # one loss per step
    for t in range(len(label_seq)):
        lbl = label_seq[t]
        pred = output_seq[t]
        loss = (-torch.log(pred) * lbl).mean()
        loss_all.append(loss)
    return loss_all
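A common source of NaN here is torch.log receiving zeros or negative values, which happens easily when the network output is not a normalized probability (a negative total loss like -107 also hints at that). A minimal sketch of the same loss with the predictions clamped before the log, assuming pred is meant to lie in (0, 1):

import torch

def custom_entropy_stable(output_seq, label_seq, eps=1e-8):
    loss_all = []
    for t in range(len(label_seq)):
        lbl = label_seq[t]
        # clamp keeps log away from log(0) = -inf and from negative inputs
        pred = output_seq[t].clamp(min=eps, max=1.0)
        loss_all.append((-torch.log(pred) * lbl).mean())
    return loss_all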
st103603
Hi, I am trying to train and check the accuracy of my network. If I train using the ToTensor function it is very fast, taking almost 5 minutes for one epoch, but when I want to evaluate my network and use a custom collate function to read labels (which may sometimes be empty), it becomes too slow. I am using 4 workers. Even if I load the same batch and the same data, with the default collate it is very fast, but with my custom collate it becomes 4 times slower:

def pred_collate_fn(data):
    file_name = [item[cfg.SAMPLE_FILENAME] for item in data]
    pc = [item[cfg.SAMPLE_POINT_CLOUD] for item in data]
    voxel_grid = [item[cfg.SAMPLE_VOXEL_GRID].reshape(
        1,
        item[cfg.SAMPLE_VOXEL_GRID].shape[0],
        item[cfg.SAMPLE_VOXEL_GRID].shape[1],
        item[cfg.SAMPLE_VOXEL_GRID].shape[2]) for item in data]
    gt_objectness_map = [item['gt_objectness_map'] for item in data]
    gt_coordinates_map = [item['gt_coordinate_map'] for item in data]
    gt_labels = [item[cfg.LABEL_FILE] for item in data]
    image = [item[cfg.SAMPLE_IMAGE] for item in data]
    return [
        torch.FloatTensor(voxel_grid),         # -- voxel rep
        torch.FloatTensor(gt_objectness_map),  # -- objectness
        torch.FloatTensor(gt_coordinates_map), # -- coordinates
        file_name,
        np.array(pc),
        np.array(gt_labels),
        np.array(image)
    ]
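One likely culprit is torch.FloatTensor(list_of_numpy_arrays), which copies element by element; stacking in NumPy first and converting once is usually much faster. A sketch of that pattern (assuming each item is a NumPy array of one fixed shape):

import numpy as np
import torch

def stack_to_tensor(arrays):
    # np.stack builds one contiguous array; from_numpy wraps it without another copy
    return torch.from_numpy(np.stack(arrays)).float()

In pred_collate_fn this would replace, e.g., torch.FloatTensor(voxel_grid) with stack_to_tensor(voxel_grid).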
st103604
I noticed some inconsistency in the documentation and feel really confused. I ran into the problem of getting identical NumPy randomness in multiprocessing (via DataLoader with multiple workers), so I decided to use the worker_init_fn argument of the DataLoader. The note says: "each worker will have its PyTorch seed set to base_seed + worker_id, where base_seed is a long generated by main process using its RNG. You may use torch.initial_seed() to access the PyTorch seed for each worker in worker_init_fn, and use it to set other seeds before data loading." But the documentation for torch.initial_seed() says "Returns the current random seed of the current GPU.", which is not consistent with the note for DataLoader. My current solution is:

worker_init_fn=lambda x: np.random.seed((torch.initial_seed() + x) % (2 ** 32))

Though by printing I can see the np.random output is not identical, I'm still not sure I'm doing it correctly. I would also like to use the transform library in torchvision; it uses the standard random library, so should I also set that seed through worker_init_fn?
st103605
You looked at the wrong function; torch.initial_seed doesn't say that: https://pytorch.org/docs/stable/torch.html#torch.initial_seed Setting the seed that way should just work for versions >= 0.4. In fact, just using torch.initial_seed() as the seed is enough, as it already includes the + worker_id offset.
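For reference, a minimal worker_init_fn sketch that seeds both NumPy and Python's random module from the per-worker PyTorch seed (the modulo keeps the value inside NumPy's 32-bit seed range):

import random
import numpy as np
import torch

def worker_init_fn(worker_id):
    seed = torch.initial_seed() % (2 ** 32)  # already offset by worker_id
    np.random.seed(seed)
    random.seed(seed)

# loader = torch.utils.data.DataLoader(dataset, num_workers=4, worker_init_fn=worker_init_fn)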
st103606
I see. Thank you very much! I'm using Dash, and happened to look at torch.cuda.initial_seed.
st103607
Suppose there are two different LSTMs/BiLSTMs and I want to tie their weights. What is the best way to do it? There does not seem to be any torch.nn.functional interface. If I simply assign the weights after instantiating the LSTMs, like self.lstm2.weight_ih_l0 = self.lstm1.weight_ih_l0 etc., it seems to work, but there are two issues. First, I get the "UserWarning: RNN module weights are not part of single contiguous chunk of memory." warning. More importantly, there seems to be a memory leak: the GPU memory utilization keeps increasing and the process runs out of memory after a while. What is the best way to tie the weights of two different LSTMs?
st103608
Contiguous warnings are unavoidable for tied weights, unfortunately, due to cudnn limitations. Don’t know what’s going on with the memory.
st103609
One solution is to call lstm1.weight_ih_l0.data.copy_(lstm2.weight_ih_l0.data) for all weights after each optimizer update.
st103610
If we do the copy after every optim update, what happens to the gradients? Or are you suggesting to tie the weights as I mentioned in the question and do the copy?
st103611
Oh, I see the issue. Okay, maybe like this:

...
loss.backward()
for w1, w2 in zip(lstm1.parameters(), lstm2.parameters()):
    w1.grad.data.add_(w2.grad.data)
    w2.grad = None
optim.step()
for w1, w2 in zip(lstm1.parameters(), lstm2.parameters()):
    w2.data.copy_(w1.data)

Might be slow though.
st103612
Why does it make sense to sum up all the gradients of the same parameters with respect to the loss?
st103613
I see a related topic regarding my question here, but I could not find my answer there, so I am asking it here. Let's say I'm using the pretrained VGG and I want to extract the features from some specific layers. Here is what I have so far:

# Load the VGG:
vgg16 = models.vgg16(pretrained=True)
# Cut the part that I want:
new_base = (list(vgg16.children())[:-1])[0]
# If I print new_base, I get:
# Sequential(
#   (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
#   (1): ReLU(inplace)
#   (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
#   (3): ReLU(inplace)
#   (4): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), dilation=(1, 1), ceil_mode=False)
#   (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
#   ...
#   (30): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), dilation=(1, 1), ceil_mode=False)
# )

Then here is my feature extractor module:

class FeatureExtractor(nn.Module):
    def __init__(self, submodule, extracted_layers):
        super(FeatureExtractor, self).__init__()
        self.submodule = submodule
        self.extracted_layers = extracted_layers

    def forward(self, x):
        outputs = []
        for name, module in self.submodule._modules.items():
            x = module(x)
            if name in self.extracted_layers:
                outputs += [x]
        return outputs + [x]

Let's say I have my input, called Input, and I am interested in the computed features from the (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) and (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) layers. Can you please tell me what I should do and how? When I call FeatureExtractor, should I do something like FeatureExtractor(new_base, ???)? I am not sure what to put for ???, and I also don't know how to send my input to the forward function. My assumption is that, because I loaded the pretrained network, I don't have to train it anymore; it already has the pretrained weights and I can just use them, right? I also have a second, related question: is there any way I can just do y = new_base[n](x), where n is the number of the layer I am interested in and x is the input, and get the output y? Thanks
st103614
Solved by chenglu in post #5.
st103615
I don't think you can use vgg16 directly. If you want to use the feature maps of some layers in the middle, you may have to rewrite the forward function of the VGG class in the torchvision package; in that forward function you can return anything you want.
st103616
Thank you for your answer. But then I won't be able to use the pretrained weights… Let's see if the PyTorch people agree with you.
st103617
Sure, you can do whatever you want with this model! To extract the features from, say, layer (2), use vgg16.features[:3](input). Note that vgg16 has two parts, features and classifier. You can call them separately, slice them as you wish, and use them as operators on any input. In the above example, vgg16.features[:3] slices out the first 3 layers (0, 1 and 2) from the features part of the model, and then I applied the sliced sequence to the input.
st103618
You can still use the pretrained weights, here's some code:

import torch
import torch.utils.model_zoo as model_zoo
from torchvision.models.vgg import VGG, make_layers, cfg, vgg16

class MyVgg(VGG):
    def __init__(self):
        super().__init__(make_layers(cfg['D']))

    def forward(self, x):
        # Here, implement the forward function: keep the feature maps you like
        # and return them.
        pass

model = MyVgg()
model.load_state_dict(model_zoo.load_url('https://download.pytorch.org/models/vgg16-397923af.pth'), strict=True)
model(torch.rand(1, 3, 224, 224))
st103619
justusschock: Alternatively you could use forward_hooks to achieve this behavior. This is even better : )
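For reference, a sketch of the hook-based approach: register_forward_hook captures a layer's output during a normal forward pass, so the model itself never has to change (indices 2 and 5 follow the VGG16 features listing shown earlier):

import torch
from torchvision import models

vgg16 = models.vgg16(pretrained=True).eval()
features = {}

def save_output(name):
    def hook(module, inp, out):
        features[name] = out.detach()
    return hook

vgg16.features[2].register_forward_hook(save_output('conv2'))
vgg16.features[5].register_forward_hook(save_output('conv5'))

with torch.no_grad():
    vgg16(torch.rand(1, 3, 224, 224))
print(features['conv2'].shape, features['conv5'].shape)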
st103620
I am not sure how to do that. I tried both of the following, and each gave me an error:

Input = torch.rand(1, 3, 5, 5)
vgg16 = models.vgg16(pretrained=True)
new_base = (list(vgg16.children())[:-1])[0]
new_base[:3](Input)

Input = torch.rand(1, 3, 5, 5)
vgg16 = models.vgg16(pretrained=True)
new_base = (list(vgg16.children())[:-1])[0]
vgg16.features[:3](Input)

Both of the above cases cause an error and do not work. Can you please clarify how I should do that?
st103621
Can you please explain what cfg is and how it can be used? I printed it and it looks like this:

{'A': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
 'B': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
 'D': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],
 'E': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M']}

But I cannot figure out what it is saying and how I should use it.
st103622
Oh! This minimal code is working for me:

import torch
import torchvision.models.vgg as models

input = torch.rand(1, 3, 5, 5)
vgg16 = models.vgg16(pretrained=True)
output = vgg16.features[:3](input)
print(output)

I'm using PyTorch 0.4.0 to run this.
st103623
The cfg dict is for creating different kinds of VGG nets (like vgg16 or vgg19): the numbers are the output channels of successive conv layers, and 'M' marks a max-pool layer. Feel free to dig into the source code of vgg.py located at $YOUR_PYTHON3_DIR/lib/python3.6/site-packages/torchvision/models/vgg.py. It is very clear and simple how to use cfg after you have a look at that file. Plus, if you only want to extract features and not do any training, @KrnTneja's solution is better.
st103624
If you don’t want to, you can open ipython and experiment a little to find out what works in your version. (Not the best idea but works! )
st103625
Hi guys, I am trying to install PyTorch from source because of this (glibc version) issue: https://github.com/pytorch/pytorch/issues/6607
Below are the commands I used:

$ export CMAKE_PREFIX_PATH=/home/jamess/anaconda2/  # anaconda path
$ conda install numpy pyyaml mkl mkl-include setuptools cmake cffi typing
$ conda install -c mingfeima mkldnn
$ git clone --recursive https://github.com/pytorch/pytorch
$ cd pytorch
$ python setup.py install

System info:
OS: CentOS release 6.7
CUDA: N/A

I get the following error:

cmake … -DCMAKE_BUILD_TYPE=Release -DBUILD_CAFFE2=0 -DBUILD_ATEN=ON -DBUILD_PYTHON=0 -DBUILD_BINARY=OFF -DBUILD_SHARED_LIBS=ON -DONNX_NAMESPACE=onnx_torch -DUSE_CUDA=0 -DUSE_ROCM=0 -DUSE_NNPACK=1 -DCUDNN_INCLUDE_DIR= -DCUDNN_LIB_DIR= -DCUDNN_LIBRARY= -DUSE_MKLDNN=1 -DMKLDNN_INCLUDE_DIR=/home/jamess/anaconda2/bin/…/include -DMKLDNN_LIB_DIR=/home/jamess/anaconda2/bin/…/lib -DMKLDNN_LIBRARY=/home/jamess/anaconda2/bin/…/lib/libmkldnn.so -DCMAKE_INSTALL_PREFIX=/home/jamess/Downloads/pytorch/torch/lib/tmp_install -DCMAKE_EXPORT_COMPILE_COMMANDS=1 -DCMAKE_C_FLAGS= -DCMAKE_CXX_FLAGS= -DCMAKE_EXE_LINKER_FLAGS= -DCMAKE_SHARED_LINKER_FLAGS=
-- Need to define long as a separate typeid.
-- std::exception_ptr is NOT supported.
-- NUMA is not available
-- Turning off deprecation warning due to glog.
-- Building using own protobuf under third_party per request.
-- Use custom protobuf build.
-- Caffe2 protobuf include directory: $<BUILD_INTERFACE:/home/jamess/Downloads/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include>
-- The BLAS backend of choice: MKL
-- Checking for [mkl_intel_lp64 - mkl_gnu_thread - mkl_core - gomp - pthread - m - dl]
-- Library mkl_intel_lp64: /home/jamess/anaconda2/lib/libmkl_intel_lp64.so
-- Library mkl_gnu_thread: /home/jamess/anaconda2/lib/libmkl_gnu_thread.so
-- Library mkl_core: /home/jamess/anaconda2/lib/libmkl_core.so
-- Library gomp: -fopenmp
-- Library pthread: /usr/lib64/libpthread.so
-- Library m: /usr/lib64/libm.so
-- Library dl: /usr/lib64/libdl.so
-- Brace yourself, we are building NNPACK
CMake Warning at cmake/Dependencies.cmake:310 (find_package):
  By not providing "FindEigen3.cmake" in CMAKE_MODULE_PATH this project has
  asked CMake to find a package configuration file provided by "Eigen3", but
  CMake did not find one.
  Could not find a package configuration file provided by "Eigen3" with any
  of the following names:
    Eigen3Config.cmake
    eigen3-config.cmake
  Add the installation prefix of "Eigen3" to CMAKE_PREFIX_PATH or set
  "Eigen3_DIR" to a directory containing one of the above files. If "Eigen3"
  provides a separate development package or SDK, be sure it has been
  installed.
Call Stack (most recent call first):
  CMakeLists.txt:181 (include)
-- Did not find system Eigen. Using third party subdirectory.
-- Could NOT find pybind11 (missing: pybind11_INCLUDE_DIR)
CMake Error at cmake/Dependencies.cmake:803 (message):
  The C++ compiler does not support required functions. This is very likely
  due to a known bug in GCC 5 (and maybe other versions) on Ubuntu 17.10 and
  newer. For more information, see: https://github.com/pytorch/pytorch/issues/5229
Call Stack (most recent call first):
  CMakeLists.txt:181 (include)
-- Configuring incomplete, errors occurred!
See also "/home/jamess/Downloads/pytorch/build/CMakeFiles/CMakeOutput.log".
See also "/home/jamess/Downloads/pytorch/build/CMakeFiles/CMakeError.log".

Failed to run 'bash tools/build_pytorch_libs.sh --use-nnpack --use-mkldnn caffe2 nanopb libshm gloo THD'

Any suggestions?
st103626
James_Arambam: "The C++ compiler does not support required functions. This is very likely due to a known bug in GCC 5 (and maybe other versions) on Ubuntu 17.10 and newer. For more information, see: https://github.com/pytorch/pytorch/issues/5229" In that issue, this link seems to help with building PyTorch with your GCC. Could you check it?
st103627
Unfortunately, I don't have root access to the system, so I can't edit the files.
st103628
The original loss function is

L = \sum_{i=1}^{W} \sum_{j=1}^{H} \sum_{k=1}^{D} \left[ V_{kij} \log \hat{V}_{kij} + (1 - V_{kij}) \log(1 - \hat{V}_{kij}) \right]

with W = 192, H = 192, D = 200, where test is V and output is V̂. Thus I wrote a loss class:

class CrossEntropyLoss(nn.Module):
    def __init__(self):
        super(CrossEntropyLoss, self).__init__()

    def forward(self, output, test):
        # test is V, output is V^
        start_time = time.time()
        count, d, w, h = test.shape
        losses = 0
        for i in range(w):
            for j in range(h):
                for k in range(d):
                    losses += torch.mul(test[0][k][i][j], torch.log(output[0][k][i][j])) \
                        + torch.mul(1 - test[0][k][i][j], torch.log(1 - output[0][k][i][j]))
        cost_time = time.time() - start_time
        print('Costed time {:.0f}m {:.0f}s'.format(cost_time // 60, cost_time % 60))
        return losses

It's called by loss = criterion(output, test_vol); the shapes of output and test_vol are torch.Size([1, 200, 192, 192]). But it takes up too much memory; I hadn't even finished computing one loss when it shut down with an out-of-memory error:

THCudaCheck FAIL file=c:\users\administrator\downloads\new-builder\win-wheel\pytorch\aten\src\thc\generic/THCStorage.cu line=58 error=2 : out of memory
Traceback (most recent call last):
  File "E:/2018/formal/pytorch/human_pose/train.py", line 184, in <module>
    main()
  File "E:/2018/formal/pytorch/human_pose/train.py", line 172, in main
    loss = criterion(output, vol)
  File "C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "E:/2018/formal/pytorch/human_pose/train.py", line 89, in forward
    losses += torch.mul(test[0][k][i][j], torch.log(output[0][k][i][j])) + torch.mul((1-test[0][k][i][j]) , torch.log(1 - output[0][k][i][j]))
  File "C:\ProgramData\Anaconda3\lib\site-packages\torch\tensor.py", line 317, in __rsub__
    return -self + other
RuntimeError: cuda runtime error (2) : out of memory at c:\users\administrator\downloads\new-builder\win-wheel\pytorch\aten\src\thc\generic/THCStorage.cu:58

I know the amount of computation is very big (192 x 192 x 200 = 7,372,800 terms). I want to know how to optimize this loss function so I can train my network. Thanks all!
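The triple Python loop creates one autograd node per voxel, which is what exhausts memory; the same sum can be expressed with elementwise tensor ops so the graph contains only a handful of nodes. A sketch (the eps clamp is an addition so log(0) cannot produce inf; as in the post, this returns the positive log-likelihood sum, of which nn.BCELoss is the negated, averaged version):

import torch
import torch.nn as nn

class VoxelBCELoss(nn.Module):
    def forward(self, output, test, eps=1e-7):
        output = output.clamp(min=eps, max=1 - eps)
        # elementwise over the whole (1, D, W, H) volume, summed in one op
        return (test * torch.log(output)
                + (1 - test) * torch.log(1 - output)).sum()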
st103629
I was trying to do distributed training on my computers. It seems that I have to choose between Redis and MPI. Could anyone tell me the difference between them, or their pros and cons? There seems to be no article comparing these two message-passing approaches.
st103630
Hi, I'm trying to install PyTorch from source on a cluster where the CUDA/cuDNN directories aren't in the standard locations. I've seen that I have to export these environment variables:

# CUDA_HOME (Linux/OS X)
#   specify where CUDA is installed; usually /usr/local/cuda or /usr/local/cuda-x.y
#
# CUDNN_LIB_DIR
# CUDNN_INCLUDE_DIR
# CUDNN_LIBRARY
#   specify where cuDNN is installed
#
# LIBRARY_PATH
# LD_LIBRARY_PATH
#   we will search for libraries in these paths

Could you specify where they should point to? I expect to use CUDA 9.0 with cuDNN 7.5; I haven't seen any compatibility restriction about it. Are both OK? And lastly, is it as easy as exporting them as environment variables (using os.environ or from bash), or is there a special function for doing that? I'm asking because I'd say I exported them, but the installer is still searching in the default directory.
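For what it's worth, the variables just need to be present in the environment of the process that runs setup.py; exporting them in bash before the build works, and so does setting os.environ and spawning the build as a child process, since children inherit the environment. A sketch with placeholder paths (all four paths below are hypothetical; substitute your cluster's actual locations):

import os
import subprocess

env = os.environ.copy()
env['CUDA_HOME'] = '/opt/cuda-9.0'                          # hypothetical path
env['CUDNN_INCLUDE_DIR'] = '/opt/cudnn-7.5/include'         # hypothetical path
env['CUDNN_LIB_DIR'] = '/opt/cudnn-7.5/lib64'               # hypothetical path
env['CUDNN_LIBRARY'] = '/opt/cudnn-7.5/lib64/libcudnn.so'   # hypothetical path

# the build inherits this environment
subprocess.check_call(['python', 'setup.py', 'install'], env=env)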
st103631
So I'm building PyTorch from source and the caffe2 subrepo is still using the old hiprng libraries:

CMake Error: The following variables are used in this project, but they are set to NOTFOUND. Please set them or make sure they are set and tested correctly in the CMake files:
hiprng_LIBRARIES
    linked by target "caffe2_hip" in directory /home/thomas/code/pytorch/caffe2
hipsparse_LIBRARIES
    linked by target "caffe2_hip" in directory /home/thomas/code/pytorch/caffe2
-- Configuring incomplete, errors occurred!

I'm having some fun getting hipsparse installed, but that's neither here nor there. My question is that I can see this is being actively worked on ("Get ROCm building again on master" by Jorghi12 on github.com/pytorch/pytorch), and it's been changed in places ("Renamed from hiprng to hcrng since the soname is different now"), but not in others, specifically for caffe2 when building ATen (pytorch/pytorch/blob/master/cmake/Dependencies.cmake#L505):

set(Caffe2_HIP_INCLUDES
    ${hip_INCLUDE_DIRS} ${hcc_INCLUDE_DIRS} ${hsa_INCLUDE_DIRS} ${rocrand_INCLUDE_DIRS}
    ${hiprand_INCLUDE_DIRS} ${rocblas_INCLUDE_DIRS} ${miopen_INCLUDE_DIRS}
    ${thrust_INCLUDE_DIRS} $<INSTALL_INTERFACE:include> ${Caffe2_HIP_INCLUDES})
# This is needed for library added by hip_add_library (same for hip_add_executable)
hip_include_directories(${Caffe2_HIP_INCLUDES})
set(Caffe2_HIP_DEPENDENCY_LIBS
    ${rocrand_LIBRARIES} ${hiprand_LIBRARIES} ${PYTORCH_HIP_HCC_LIBRARIES}
    ${PYTORCH_MIOPEN_LIBRARIES} ${hipblas_LIBRARIES})
# Additional libraries required by PyTorch AMD that aren't used by Caffe2 (not in Caffe2's docker image)
if(BUILD_ATEN)
  set(Caffe2_HIP_DEPENDENCY_LIBS ${Caffe2_HIP_DEPENDENCY_LIBS} ${hipsparse_LIBRARIES} ${hiprng_LIBRARIES})
endif()
# TODO: There is a bug in rocblas's cmake files that exports the wrong targets name in ${rocblas_LIBRARIES}
list(APPEND Caffe2_HIP_DEPENDENCY_LIBS roc::rocblas)
else()
  caffe2_update_option(USE_ROCM OFF)
endif()
endif()
# ---[ ROCm

And it looks like the updates moved from hiprng to hcrng, which has itself been outmoded by rocrand, which also shows up in the build files. So, aside from getting hipsparse installed (I've got a bug in my hipconfig: https://github.com/ROCm-Developer-Tools/HIP/issues/552), any suggestions?
st103632
The "bug in my hipconfig" was due to the fact that Ubuntu 16.04 updated to the 4.15 Linux kernels, which aren't supported by ROCm at this time. So I've got hipconfig --platform outputting hcc like it should, but I'm still seeing the hiprng_LIBRARIES errors. I've installed the HcSPARSE library, but I don't really want to install HcRNG, as the repo complains that it's been superseded by rocrand. Ah well, I'll install it and see if we can get PyTorch built with ROCm.
st103633
Alright, now I'm here, and I'm opening an issue on GitHub:

make[2]: *** [caffe2/CMakeFiles/caffe2_hip.dir/__/aten/src/THC/caffe2_hip_generated_THCReduceApplyUtils.cu.o] Error 1
1 error generated.
In file included from /home/thomas/code/pytorch/aten/src/THC/THCStorage.cu:1:
In file included from /home/thomas/code/pytorch/aten/src/THC/THCStorage.hpp:6:
In file included from /home/thomas/code/pytorch/aten/src/THC/THCStorage.h:5:
/home/thomas/code/pytorch/build/caffe2/aten/src/THC/THCGeneral.h:13:10: fatal error: 'cuda_runtime.h' file not found
#include "cuda_runtime.h"
         ^~~~~~~~~~~~~~~~
1 error generated.
Died at /opt/rocm/hip/bin/hipcc line 496.
CMake Error at caffe2_hip_generated_THCStorage.cu.o.cmake:120 (message):
  Error generating /home/thomas/code/pytorch/build/caffe2/CMakeFiles/caffe2_hip.dir/__/aten/src/THC/./caffe2_hip_generated_THCStorage.cu.o
st103634
Hi, I used CTC loss to train my network. The CTC loss is borrowed from https://github.com/SeanNaren/deepspeech.pytorch. I set length_average = True in the CTC loss and was able to train my network. But I found that if I multiply the loss by 0, as in loss = 0 * CTC_loss(arguments…) (originally it was loss = CTC_loss(arguments…)), and then do loss.backward(), the result does not change. Does anyone have ideas why scaling by 0 does not change the result at all? Thank you very much.
st103635
There are a lot of similar functions, e.g. five_crop, affine, etc. What are the main differences? When should you use one over the other?
st103636
The functional API is stateless, i.e. you can use the functions directly, passing all necessary arguments. On the other side, torchvision.transforms are mostly classes which have some default parameters or which store the parameters you've provided. For example, using Normalize, you could define the class and use it with the passed parameters. Using the functional approach, you would have to pass the parameters every time:

transform = transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))
data = transform(data)

# Functional
data = TF.normalize(data, mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))

You can use the functional API to transform your data and target with the same random values, e.g. for random cropping:

i, j, h, w = transforms.RandomCrop.get_params(image, output_size=(512, 512))
image = TF.crop(image, i, j, h, w)
mask = TF.crop(mask, i, j, h, w)

Sometimes it's also just your coding style. While some find the functional API cleaner, some prefer the classes.
st103637
"You can use the functional API to transform your data and target with the same random values, e.g. for random cropping" Thank you, this was what I was looking for. TF is import torchvision.transforms.functional as TF, correct?
st103638
Dear fellow members, I have accidentally discovered that some NumPy operations, e.g. exp and argmax, work normally on PyTorch tensors. For example:

>>> a = torch.randn((3, 3))
>>> a
tensor([[ 1.1106,  1.3285, -1.7963],
        [ 0.1306,  0.1929, -0.6298],
        [-0.2770,  0.4762, -0.9054]])
>>> np.exp(a)
tensor([[ 3.0361,  3.7752,  0.1659],
        [ 1.1395,  1.2127,  0.5327],
        [ 0.7581,  1.6100,  0.4044]])
>>> np.argmax(a, axis=1)
tensor([ 1,  1,  1])
>>> np.argmax(a, axis=0)
tensor([ 0,  0,  1])

Of course they cannot be used to track the computational graph, but is this expected behavior that I did not know about? Thanks!
st103639
It is sort of expected: NumPy is very liberal in accepting inputs (you can pass lists to most functions, too), and PyTorch tries to play nicely by providing interfaces like __array__ (e.g. sometimes, but not always, you can pass tensors to matplotlib functions). Best regards, Thomas
st103640
Thank you so much! This makes it easy to do post-processing like calculating performance metrics, plotting, etc. How did I miss it!
st103641
Hi, I'd like to implement a gradient difference loss in PyTorch. However, when I try to use gradcheck to verify that it is correctly implemented, it always reports that the loss function is not correctly implemented. Can anyone help?
st103642
I'm not sure there is much that can go wrong when you're only composing existing forward operations (except breaking the graph), but gradcheck is usually very sensitive to numerical stability and should always be used on double tensors (you can call m.double() to convert parameters, similar to m.cuda()). Best regards, Thomas
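A minimal gradcheck sketch in double precision (the toy loss below is a stand-in, not the gradient difference loss from the question):

import torch
from torch.autograd import gradcheck

def my_loss(x, y):
    # placeholder differentiable loss; swap in the function under test
    return ((x - y) ** 2).mean()

x = torch.randn(4, 3, dtype=torch.double, requires_grad=True)
y = torch.randn(4, 3, dtype=torch.double)
print(gradcheck(my_loss, (x, y)))  # True if analytic and numeric grads match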
st103643
I am trying to run my code on two GPUs. However, it raises the error "tensors are on different GPUs". I tried printing the device of each input and variable, and the devices are consistent. I want to ask how to solve this. Thanks.
st103644
Thanks for your reply. My code is a little complex. It is a meta-learning code. If possible, can I send the code to you via email? Thanks.
st103645
Send it via PM please. Also, could you replace all data tensors etc. with random values? If that’s too complicated, just write down the expected shape for the tensors, so that I don’t have to load a dataset.
st103646
Thanks for the code. It's quite huge and somehow it cannot import learner.py. However, I skimmed through the code, and one reason might be that you initialize the states of your LSTM in MetaLearner.forward using .cuda(). This will push the states to the default GPU, and DataParallel might throw the error. You could fix this by checking which GPU the current LSTM parameters are on and creating the state accordingly:

device = next(self.parameters()).device
self.lstm_c0 = torch.zeros(...).to(device)

Well, this would work in 0.4.0. Since you are using an older version, you could try the following:

param = next(self.parameters())
is_cuda = param.is_cuda
if is_cuda:
    cuda_device = param.get_device()
    self.lstm_c0 = Variable(torch.zeros(...)).cuda(cuda_device)

The code isn't tested, so let me know if you encounter any errors.
st103647
Thanks a lot for your help. I am sorry that you cannot run it; I can run the code on my server. I also guess the error is caused by self.lstm_c0 and self.lstm_h0. I tried the code you provided, but it still raises the RuntimeError: tensors are on different GPUs.

param = next(self.parameters())
is_cuda = param.is_cuda
if is_cuda:
    cuda_device = param.get_device()
    self.lstm_c0 = Variable(torch.zeros(self.nParams, self.lstm.hidden_size), requires_grad=False).cuda(cuda_device)
    self.lstm_h0 = Variable(torch.zeros(self.nParams, self.lstm.hidden_size), requires_grad=False).cuda(cuda_device)
st103648
I think we should move this conversation to PMs, as nobody can follow the thread without your code.
st103649
optimizer.zero_grad()
for i_batch, sample in enumerate(dataloaders[phase]):
    if phase == 'train':
        img_list = sample['image']
        img_tensor = torch.stack([img for img in img_list]).squeeze(1)
        inputs = Variable(img_tensor, requires_grad=False).cuda()
        labels = Variable(sample['label']).cuda()
        weights = sample['weight'].cuda()
        outputs = model(inputs)
        criterion = nn.BCELoss(weight=weights.float())
        loss = criterion(outputs, labels.float())
        running_loss += loss.data[0]
        loss /= (batch_num * len(img_list))
        loss.backward()
        running_corrects += torch.sum((torch.max(outputs, 1)[0] > 0.5) == labels.byte()).float()
        del inputs, labels, outputs
        if i_batch % batch_num == (batch_num - 1):
            train_num += batch_num
            optimizer.step()
            # ipdb.set_trace()
            optimizer.zero_grad()
            loss.detach()
            print('batch: %d loss: %.4f acc: %.4f' % (i_batch, running_loss / train_num, running_corrects.data.cpu().numpy() / train_num))

Hi, the code follows an iter_size (batch_num) scheme: optimizer.step() is performed once every iter_size batches. The problem is that CUDA memory keeps increasing over the iterations. I've tried methods mentioned in similar topics, but they do not work. Can anyone help me?
st103650
The running_corrects accumulation actually holds the graph in CUDA memory, which makes the memory usage increase as you run more iterations.
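A sketch of the usual fix: accumulate plain Python numbers (via .item()) or detached tensors, so no autograd graph is kept alive across iterations (stand-in tensors below; .item() assumes PyTorch >= 0.4):

import torch
import torch.nn.functional as F

outputs = torch.rand(8, 1, requires_grad=True)  # stand-in for model(inputs)
labels = torch.randint(0, 2, (8, 1)).float()
loss = F.binary_cross_entropy(outputs, labels)

running_loss = 0.0
running_corrects = 0
# .item() and .detach() return graph-free values
running_loss += loss.item()
preds = (outputs.detach() > 0.5).float()
running_corrects += (preds == labels).sum().item()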
st103651
My model is ResNet-20; there are parameters from Conv2d, FC, and BN layers, but I just want to load the Conv2d weight parameters from the pretrained model. How can I solve this problem? Thanks for your help.
st103652
Why did I get None gradients from net.state_dict() as below? Shouldn't the gradient values from net.state_dict() be the same as those from net.named_parameters()? Note: using this state_dict() and this named_parameters().

net.zero_grad()
ypred = net(x)
loss = loss_fn(ypred, y)
loss.backward()

print('### net.named_parameters():')
for name, param in net.named_parameters():
    print('param.grad=', param.grad)

print('### net.state_dict():')
for key, val in net.state_dict().items():
    print('val.grad=', val.grad)

Output:

param.grad= tensor(1.00000e-02 *
       [[ 0.1781,  0.1962],
        [-1.3298, -1.9067],
        [-1.8645, -1.9591],
        [ 1.4285,  1.6071],
        [-1.3251, -1.3051]])
param.grad= tensor(1.00000e-02 *
       [ 0.3545, -3.0727, -3.7230,  2.9359, -2.5023])
param.grad= tensor([[-0.4717, -0.4988, -0.3780, -0.2974, -0.2383]])
param.grad= tensor([-0.7368])
### net.state_dict():
val.grad= None
val.grad= None
val.grad= None
val.grad= None

Thank you.
st103653
Solved by ptrblck in post #2 Try to pass keep_vars=True to net.state_dict(keep_vars=True) and it should work. Since the default is set to False, the underlying data of the tensors will be returned and thus detached from the variable.
st103654
Try to pass keep_vars=True to net.state_dict(keep_vars=True) and it should work. Since the default is set to False, the underlying data of the tensors will be returned and thus detached from the variable.
st103655
Thanks @ptrblck! Btw, the docs should be updated with this info (plus the fact that state_dict() returns an OrderedDict).
st103656
I was testing a basic neural network with one hidden layer. I tried various loss functions in the same session, and the net got stuck in what I assume was a local minimum, but it was completely wrong. I restarted the session, and 1 epoch was better than 1000. I am using Adam. My belief is that changing the error function drastically caused this mistake. I was wondering if anyone has encountered this problem in the past and how they dealt with it, and what command is used to reset weights within a session.
st103657
Did you change the loss function during the training? Sometimes one run might just be better than another; it depends highly on your use case (i.e. dataset, model, etc.). Setting the random seed at the beginning of your script helps when dealing with these issues. You can reset the weights by just creating the model again or by re-initializing it with an init function:

def weights_init(m):
    if isinstance(m, nn.Conv2d):
        torch.nn.init.xavier_uniform(m.weight.data)
        # xavier init needs >= 2 dims, so simply zero the 1-D bias
        m.bias.data.zero_()

model.apply(weights_init)
st103658
Looking back, I do think I changed the loss function during training. This is super helpful, thank you!!
st103659
Hi there, I like this flexible framework very much! I'm just wondering if there is any performance guide like https://www.tensorflow.org/performance/performance_guide that I can follow to accelerate my model, both in training and testing. One possible idea is to delete tensors you don't need any more; if the reference count goes to 0, then some GPU memory resources can be freed. Will this also lead to some speed-up? Thank you very much in advance!
st103660
It's more than just a bit rough, but you could check out the bottleneck documentation.
st103661
Hi, I tried this link and got an issue very similar to https://github.com/pytorch/pytorch/issues/6313. I'm actually asking about common tricks to improve the speed, like those in the link I posted.
st103662
In Lasagne, the code looks like this:

l_in_imgdim = nn.layers.InputLayer(
    shape=(batch_size, 2),
    name='imgdim'
)

layers = []
l_conv = Conv2DLayer(layers[-1], num_filters=32, filter_size=(7, 7), stride=(2, 2),
                     border_mode='same', W=nn.init.Orthogonal(1.0),
                     b=nn.init.Constant(0.1), untie_biases=True)
layers.append(l_conv)
# after adding some common layers here... the last one of these is an FC layer
l_first_repr = layers[-1]
l_coarse_repr = nn.layers.concat([l_first_repr, l_in_imgdim])
layers.append(l_coarse_repr)

My goal is to reimplement this Lasagne network in PyTorch.
st103663
You could just define your model and concatenate the tensors in the forward function. Here is a small example:

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.fc1a = nn.Linear(20, 10)
        self.fc1b = nn.Linear(20, 10)
        self.fc2 = nn.Linear(20, 2)

    def forward(self, x):
        x1 = F.relu(self.fc1a(x))
        x2 = F.relu(self.fc1b(x))
        x = torch.cat((x1, x2), dim=1)
        x = self.fc2(x)
        return x

model = MyModel()
x = torch.randn(1, 20)
output = model(x)
st103664
Regarding torch.symeig:

def forward(self, x):
    x = self.fc1(x)
    x = get_x(x)

def get_x(x):
    # using x, build the matrices
    for j in range(total_batch):
        evals[j], evecs[j] = torch.symeig(sim_total[j], eigenvectors=True)

Is there a way this task can be processed in parallel on the GPU? I want to parallelize only the torch.symeig part!
st103665
Hello everyone! I can't clearly understand how packed sequences work. If we look at the example from the docs:

a = torch.tensor([1, 2, 3])
b = torch.tensor([1, 2])
c = torch.tensor([1])
torch.nn.utils.rnn.pack_sequence([a, b, c])

it gives us something like this:

PackedSequence(data=tensor([ 1,  1,  1,  2,  2,  3]), batch_sizes=tensor([ 3,  2,  1]))

But when I change only one tensor, it gives completely different results:

a = torch.zeros(5)
b = torch.tensor([1, 2])
c = torch.tensor([1])
torch.nn.utils.rnn.pack_sequence([a, b, c])

PackedSequence(data=tensor([ 0.,  1.,  1.,  0.,  2.,  0.,  0.,  0.]), batch_sizes=tensor([ 3,  2,  1,  1,  1]))

What's going on? Why do we obtain batch_sizes with 5 elements when we gave only 3 tensors?
st103666
The batch_sizes tensor is telling you that:
- Your longest sequence is length 5 (because the tensor has five entries).
- Three of the sequences contain data at the 0th position.
- Two of the sequences contain data at the 1st position.
- One sequence contains data at the 2nd position.
- One sequence contains data at the 3rd position.
- One sequence contains data at the 4th position.
Those are all true statements:
- Your longest tensor is a, consisting of five 0s (note the presence and location of the five 0s in the data tensor).
- Your tensor c has only 1 element.
- Your tensor b has only 2 elements.
Most importantly/intuitively: batch_sizes are not lengths.
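A sketch that makes this visible: pad_packed_sequence recovers the padded batch and the per-sequence lengths from a PackedSequence (sequences are passed longest first, as pack_sequence expects):

import torch
from torch.nn.utils.rnn import pack_sequence, pad_packed_sequence

a = torch.zeros(5)
b = torch.tensor([1.0, 2.0])
c = torch.tensor([1.0])

packed = pack_sequence([a, b, c])
padded, lengths = pad_packed_sequence(packed, batch_first=True)
print(padded)   # shape (3, 5): one row per sequence, zero-padded
print(lengths)  # tensor([5, 2, 1]) -- the original lengths, not batch_sizes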
st103667
Hi y'all, I've been doing hyper-parameter training, and in my case the cost of evaluating various learning statistics and breaking epochs early was fine. But when my hyperparameter search was doing that, I found errors pointing to multiprocessing.py; I presume that my function started a new process before the old one was finished, or something of the sort. I looked at dataloader.py and did not find a method that joins the queues to prevent this error, so I just hacked in a time.sleep(arbitrary_n_seconds) and then call trainloader.__iter__()._shutdown_workers(). So, question: is there such a method built into DataLoader, or shouldn't there be one? It seems like simple thread management. PyTorch 0.4.0 from conda, Python 3.6.5, Ubuntu 16.04; no difference whether the device is CPU or GPU.
st103668
Could you show some code how you start new processes? In general it should be fine to simply call break inside the loop iterating through the dataloader.
st103669
Sure. Here I built a completely contrived example using CIFAR. It doesn't throw an error every time, so you may have to run it a few times before you see it; put it in a Jupyter notebook and go. The only thing you may want to change is the path of the data folder (~/data/ is my go-to location for datasets). My training setup is more complex: I run dozens of tests, load a different set of hyperparameters every time, and instead of initializing the net anew every time I load various saved .pths. It seems that the more complex the initialization, the more likely the error (but that's just circumstantial evidence and I don't see why it would make sense). As I said, if I let it sleep for a bit just before breaking, the error goes away, so my hunch is that shutdown_workers could use a join. Running this on CPU can also cause these errors. Thanks, X功祿

import os
import os.path as osp
import torchvision.datasets as datasets
import torchvision.transforms as transforms
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torch

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 120)
        self.fc3 = nn.Linear(120, 84)
        self.fc4 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = self.fc4(x)
        return x

def init_net(net, method="xavier", bias=0.0):
    """Initialize net according to a standard init.
    Arguments:
        net: torch.nn module
        method: default 'xavier'; also 'kaiming', 'sparse', 'orthogonal'
        bias: default 0
    """
    init_ = {"xavier": nn.init.xavier_normal_,
             "kaiming": nn.init.kaiming_normal_,
             "sparse": nn.init.sparse_,
             "orthogonal": nn.init.orthogonal_}
    for name, module in net.named_modules():
        if 'weight' in module._parameters.keys():
            init_[method](module.weight.data)
            if module.bias is not None:
                module.bias.data.fill_(bias)

CIFARROOT = osp.join(osp.expanduser('~'), 'data/CIFAR')
if not osp.exists(CIFARROOT):
    os.mkdir(CIFARROOT)
assert osp.exists(CIFARROOT), 'path does not exist'

transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = datasets.CIFAR10(root=CIFARROOT, train=True, download=True, transform=transform)
testset = datasets.CIFAR10(root=CIFARROOT, train=False, download=True, transform=transform)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

net = Net()
criterion = nn.CrossEntropyLoss()
epochs = 5
tests = (0.01, 0.001, 0.0001, 0.00001)
bs = 100
_device = 'cuda'

for lr in tests:
    init_net(net)
    net.to(_device)
    trainloader = torch.utils.data.DataLoader(trainset, batch_size=bs, shuffle=False, num_workers=8)
    optimizer = optim.SGD(net.parameters(), lr=lr, momentum=0.9)
    for epoch in range(epochs):
        for i, data in enumerate(trainloader, 0):
            optimizer.zero_grad()
            inputs, labels = data
            inputs, labels = inputs.to(_device), labels.to(_device)
            outputs = net(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()
            if i == 3:
                break
        break

Error:

Exception ignored in: <bound method _DataLoaderIter.__del__ of <torch.utils.data.dataloader._DataLoaderIter object at 0x7f9ef48936d8>>
Traceback (most recent call last):
  File "/home/z/miniconda3/envs/abb/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 349, in __del__
    self._shutdown_workers()
  File "/home/z/miniconda3/envs/abb/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 328, in _shutdown_workers
    self.worker_result_queue.get()
  File "/home/z/miniconda3/envs/abb/lib/python3.6/multiprocessing/queues.py", line 337, in get
    return _ForkingPickler.loads(res)
  File "/home/z/miniconda3/envs/abb/lib/python3.6/site-packages/torch/multiprocessing/reductions.py", line 70, in rebuild_storage_fd
    fd = df.detach()
  File "/home/z/miniconda3/envs/abb/lib/python3.6/multiprocessing/resource_sharer.py", line 57, in detach
    with _resource_sharer.get_connection(self._id) as conn:
  File "/home/z/miniconda3/envs/abb/lib/python3.6/multiprocessing/resource_sharer.py", line 87, in get_connection
    c = Client(address, authkey=process.current_process().authkey)
  File "/home/z/miniconda3/envs/abb/lib/python3.6/multiprocessing/connection.py", line 493, in Client
    answer_challenge(c, authkey)
  File "/home/z/miniconda3/envs/abb/lib/python3.6/multiprocessing/connection.py", line 737, in answer_challenge
    response = connection.recv_bytes(256)  # reject large message
  File "/home/z/miniconda3/envs/abb/lib/python3.6/multiprocessing/connection.py", line 216, in recv_bytes
    buf = self._recv_bytes(maxlength)
  File "/home/z/miniconda3/envs/abb/lib/python3.6/multiprocessing/connection.py", line 407, in _recv_bytes
    buf = self._recv(4)
  File "/home/z/miniconda3/envs/abb/lib/python3.6/multiprocessing/connection.py", line 379, in _recv
    chunk = read(handle, remaining)
ConnectionResetError: [Errno 104] Connection reset by peer
st103670
I have a weird bug where I have to load many NumPy arrays from disk, each array being one .npy file, but when I increase my batch size past a certain point, i.e. 64 (meaning 64 files should be loaded), I get this cryptic error: IOError: bad message length. My DataLoader is very standard and has 8 workers, so I'm not sure what the actual issue is.
st103671
Hello, I don't know where to put this kind of suggestion, therefore I put it in "Uncategorized". In many deep learning implementations we really need to care about tensor sizes: we need to know the size of the input and the size of the output of a function. In PyTorch versions up to 0.3, when I called the print function, PyTorch showed the content and also the size of the tensor. This was very convenient for me, because from a single print I immediately knew my tensor's size, its type, and its content; there was no need to call tensor.size() anymore. However, in PyTorch 0.4 I only get the content of the tensor, so I need to use tensor.size() again and again to debug my code. Here is an example of the print operation (I used a different tensor): in version 0.4 no size information is available, just the content, and I also don't know my tensor's type. However, I do appreciate that the tensor location information is given, like the GPU device. The requested features for the print function:
1. Print the size and type of the tensor, not only the content.
2. Print the tensor location, for example whether it is on GPU or CPU; if it is on GPU, please tell us which device.
This is just a suggestion, and I really appreciate your considering it. In my opinion this would be very convenient for debugging, because many mistakes lie in the tensor size format, like [batch, channel, height, width], and many people also keep the training data on the GPU but the test data on the CPU. Another mistake concerns the tensor type: for example, when doing temperature prediction most temperatures are floats, but a user may forget to convert their tensor to float, so the tensor stays integer; showing the tensor type would help a lot. Keep up the good work, can't wait to see PyTorch 1.0. Thank you!
st103672
herleeyandi: "Print the size and type of tensor, not only the content. Print the tensor location, for example whether it is on GPU or CPU; if it is on GPU, please tell us which device." I think the former was a conscious decision, because the size used to show up in too many places where it isn't useful. (Personally, I usually want to print either the size or the contents, but hardly ever both.) Also, it's easy to print both. The latter is already there: no device annotation means CPU, device='cuda:xx' means GPU; I'm not sure what you are missing there. The dtype is implied if it is the default you would get from a constructor (with a "." it's float, without it's long); otherwise the dtype is shown. Best regards, Thomas
st103673
Hi @tom, thank you for your reply. Yes, as I stated in my post, I appreciate the feature for the device location. My post is just a suggestion; many people may have a different style, but for me, having all the information from a single print command is more convenient, and printing the type and dimensions only needs one extra line in the print output. Thank you for the information about the dot "." meaning the tensor is float; I didn't know about that. I just hope that maybe in the next PyTorch release they add it all to the print function; however, if many PyTorch users want them separate, I am still OK with it. It's just about coding style, but printing everything in one function would make debugging more convenient for some people.
st103674
@tom Nothing is wrong, it just takes longer and is tedious to type for some people like me: it's faster to type print(x) rather than print(x, x.size(), x.type()), and people not familiar with Python may write a separate print for each, like print(x), print(x.size()), print(x.type()).
st103675
Consider a source tensor src and a destination tensor dst of the same size:

src = torch.rand(n, m)
dst = torch.zeros_like(src)

Furthermore, let's assume we are given an index tensor idx of size (2, n*m) in which each column idx[:, j] contains a target index for the corresponding entry src.view(-1)[j]. We can map src.view(-1)[j] to its target index in dst via

dst[tuple(idx)] = src.view(-1)

However, if this mapping is not 1-to-1, i.e. if several entries in src map to the same entry in dst, the evaluation order on the GPU seems to be random. In my case, though, I need to rely on a left-to-right evaluation: if a target index appears multiple times in idx, I want to write the value in src corresponding to the last appearance of that target index in idx. Any ideas how to achieve this?
st103676
Hi, I'm trying to generate time-series data with an LSTM and a Mixture Density Network as described in https://arxiv.org/pdf/1308.0850.pdf. Here is a link to my implementation: https://github.com/NeoVand/MDNLSTM. The repository contains a toy dataset to train the network. On training, the LSTM layer returns NaN for its hidden state after one iteration. There is a similar issue here ("Getting nan for gradients with LSTMCell"): "We are doing a customized LSTM using LSTMCell, on a binary classification; the loss is BCEWithLogits. We traced the problem back to loss.backward(): the calculated loss is not NaN, but the gradients calculated from the loss are NaNs. Things we've tried that are not working: PyTorch 3.1, 4.0, 5.0 all have the same problem; changing softmax to log-softmax in the forward pass; changing the loss to log-softmax + NLLLoss; changing the initialization of hidden and cell states to non-zeros. Any ideas?! Much appreciated!" I would appreciate any help on this.
st103677
The issue was caused by the log-sum-exp operation not being done in a numerically stable way. Here is an implementation of a weighted log-sum-exp trick that I used to fix the problem:

def weighted_logsumexp(x, w, dim=None, keepdim=False):
    if dim is None:
        x, dim = x.view(-1), 0
    xm, _ = torch.max(x, dim, keepdim=True)
    x = torch.where(
        # to prevent nasty NaNs
        (xm == float('inf')) | (xm == float('-inf')),
        xm,
        xm + torch.log(torch.sum(torch.exp(x - xm) * w, dim, keepdim=True)))
    return x if keepdim else x.squeeze(dim)

and, using that, I implemented the stable loss function:

def mdn_loss_stable(y, pi, mu, sigma):
    m = torch.distributions.Normal(loc=mu, scale=sigma)
    m_lp_y = m.log_prob(y)
    loss = -weighted_logsumexp(m_lp_y, pi, dim=2)
    return loss.mean()

This worked like a charm. In general, the problem is that torch won't report underflows.
st103678
In the PyTorch implementation of ResNet, I noticed that in the residual blocks

class Bottleneck(nn.Module):
    expansion = 4

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super(Bottleneck, self).__init__()
        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,  # <- here
                               padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv3 = nn.Conv2d(planes, planes * self.expansion, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(planes * self.expansion)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample
        self.stride = stride

only self.conv2 has a stride argument (stride=stride), and later on in the ResNet class there is

class ResNet(nn.Module):
    def __init__(self, block, layers, num_classes=1000):
        self.inplanes = 64
        super(ResNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(block, 64, layers[0])
        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)  # <- here
        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)  # <- here
        self.layer4 = self._make_layer(block, 512, layers[3], stride=2)  # <- here
        self.avgpool = nn.AvgPool2d(7, stride=1)
        self.fc = nn.Linear(512 * block.expansion, num_classes)

Does this mean that in Bottleneck only self.conv2 takes the stride argument from ResNet's _make_layer, and that conv1 and conv3 use the default stride of 1? Thanks
st103679
Hi, I am trying to see if there is an equivalent to the np.reshape() command in PyTorch. Inspecting the docs and googling around, I can't seem to find anything that does this, except in the "old" Torch manual. Does PyTorch support this for torch.Tensor, and if so, is there a reason I can't find it? How can I reshape a torch.Tensor? Thanks!
st103680
I think the view method would be the right tool for that. E.g., >>> import torch >>> t = torch.ones((2, 3, 4)) >>> t.size() torch.Size([2, 3, 4]) >>> t.view(-1, 12).size() torch.Size([2, 12])
st103681
Interesting, thanks @rasbt! I will take a look at it. To your knowledge, is it more or less similar to reshape? Any caveats to be aware of? Thanks.
st103682
A caveat is that it has to be contiguous, but that matches numpy as far as I know.
st103683
Since people have asked about this several times: yes. You just have to define the module version yourself:

class View(nn.Module):
    def __init__(self, *shape):
        super(View, self).__init__()
        self.shape = shape

    def forward(self, input):
        return input.view(*self.shape)

sequential_model = nn.Sequential(nn.Linear(10, 20), View(-1, 5, 4))
st103684
One more thing: np.reshape in NumPy lets you specify the order of reshaping. But I think torch.view() has no such feature.
st103685
You can use permute in PyTorch to reorder the dimensions:

t = torch.rand((2, 3, 4))
t = t.permute(1, 0, 2)

This changes the order of the dimensions.
st103686
Do you think it would be a good idea to add a reshape function as a copy of view in PyTorch, to accommodate heavy NumPy users?
st103687
My guess is that it was done to be consistent with Torch rather than NumPy, which makes sense. However, yeah, some of the naming conventions are a bit annoying for people coming from NumPy rather than Torch.
st103688
I think torch.Tensor.view() is the same as np.reshape(), based on my experiments, and torch.Tensor.permute() is the same as np.transpose(). But I have a question about the use of np.reshape: in the image processing domain, reshape changes the structural information of the image, and that is fatal.
st103689
In the image processing field we should use permute, in my opinion; so what is the point of view()'s existence? Thank you!
st103690
One use case: suppose you feed an image through several conv layers, and then you want to run the extracted features through a linear layer. The conv output has shape (batch_size, channels_out, height_out, width_out); now we want the linear layer to take all features from all channels and from every position in the image, but nn.Linear only acts on the last dimension of its input. That is where view comes in handy:

linear_input = conv_output.view(batch_size, channels_out * height_out * width_out)

linear_input is now of shape (batch_size, all_features_of_all_channels_at_all_positions).
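A self-contained sketch of that conv-to-linear flattening (the layer sizes are arbitrary examples):

import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
fc = nn.Linear(8 * 16 * 16, 10)

x = torch.randn(4, 3, 16, 16)                # (batch, channels, H, W)
conv_out = conv(x)                           # (4, 8, 16, 16)
flat = conv_out.view(conv_out.size(0), -1)   # (4, 8*16*16) = (4, 2048)
out = fc(flat)                               # (4, 10)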
st103691
A reshape method is now available as of torch 0.4.0: https://pytorch.org/docs/stable/torch.html?highlight=reshape#torch.reshape
st103692
I noticed that advanced indexing works differently on CPU and GPU with PyTorch 0.4 when the multi-index is long and contains repeated elements. Minimal working example: this code works on the CPU and saves the last element encountered for a repeatedly indexed cell.

len_idx = 150
for i in range(10):
    A = torch.zeros(5, 5, 5)
    idx_along_axis = torch.ones(len_idx).long()
    A[idx_along_axis, idx_along_axis, idx_along_axis] = torch.arange(len_idx)
    print('A[1, 1, 1]:', A[1, 1, 1].item())

produces:

A[1, 1, 1]: 149.0
A[1, 1, 1]: 149.0
A[1, 1, 1]: 149.0
A[1, 1, 1]: 149.0
A[1, 1, 1]: 149.0
A[1, 1, 1]: 149.0
A[1, 1, 1]: 149.0
A[1, 1, 1]: 149.0
A[1, 1, 1]: 149.0
A[1, 1, 1]: 149.0

The same code on the GPU saves a random element among all the values written to a repeated indexed cell:

len_idx = 150
for i in range(10):
    A = torch.zeros(5, 5, 5).cuda()
    idx_along_axis = torch.ones(len_idx).long().cuda()
    A[idx_along_axis, idx_along_axis, idx_along_axis] = torch.arange(len_idx).cuda()
    print('A[1, 1, 1]:', A[1, 1, 1].item())

produces different output each time, e.g.:

A[1, 1, 1]: 149.0
A[1, 1, 1]: 31.0
A[1, 1, 1]: 149.0
A[1, 1, 1]: 31.0
A[1, 1, 1]: 149.0
A[1, 1, 1]: 149.0
A[1, 1, 1]: 31.0
A[1, 1, 1]: 31.0
A[1, 1, 1]: 31.0
A[1, 1, 1]: 31.0

This only happens for large index arrays; for example, the problem cannot be reproduced with len_idx = 30 on my machine. If we take a larger len_idx, the set of saved values becomes even more random. Is this a bug in the framework? Thank you.
st103693
Hi, the problem is that you assign different values to the exact same entry. This is undefined behavior: in a single line you say a = 1 and a = 2; what would you expect the value of a to be after that line? It's hard to define a default behavior for this without a large performance cost, because the conflicting indices would need to be detected. On the CPU it appears that the value that comes last in the indexing wins; on the GPU you sometimes get another value.
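If last-write-wins semantics are needed, one deterministic workaround is to compute, for each target cell, the position of the last source element mapping to it, and then gather. A sketch using scatter_reduce with reduce='amax' (note: this method only exists in PyTorch releases far newer than this thread, roughly 1.12+):

import torch

n, m = 5, 5
src = torch.rand(n, m)
dst = torch.zeros_like(src)
idx = torch.randint(0, n, (2, n * m))  # columns are (row, col) targets; here n == m

lin = idx[0] * m + idx[1]              # flat target index per source element
pos = torch.arange(lin.numel())        # source positions, increasing left to right
last = torch.full((dst.numel(),), -1, dtype=torch.long)
# 'amax' keeps the largest (= last) source position per target cell
last = last.scatter_reduce(0, lin, pos, reduce='amax')
mask = last >= 0
dst.view(-1)[mask] = src.view(-1)[last[mask]]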
st103694
I am using this code for object detection (source). The code is written to be run on one GPU, but it also has the option of using multiple GPUs via nn.DataParallel. Unfortunately, even when I uncomment the nn.DataParallel lines it still uses only one GPU. Can anyone suggest what might be wrong? I uncomment the DataParallel calls in the following two files (the detector and the training solver):

from __future__ import print_function
import numpy as np
import torch
from torch.autograd import Variable
import torch.backends.cudnn as cudnn
from lib.layers import *
from lib.utils.timer import Timer
from lib.utils.data_augment import preproc
from lib.modeling.model_builder import create_model
from lib.utils.config_parse import cfg


class ObjectDetector:
    def __init__(self, viz_arch=False):
        self.cfg = cfg

        # Build model
        print('===> Building model')
        self.model, self.priorbox = create_model(cfg.MODEL)
        self.priors = Variable(self.priorbox.forward(), volatile=True)

        # Print the model architecture and parameters
        if viz_arch is True:
            print('Model architectures:\n{}\n'.format(self.model))

        # Utilize GPUs for computation
        self.use_gpu = torch.cuda.is_available()
        self.half = False
        if self.use_gpu:
            print('Utilize GPUs for computation')
            print('Number of GPU available', torch.cuda.device_count())
            self.model.cuda()
            self.priors.cuda()
            cudnn.benchmark = True
            self.model = torch.nn.DataParallel(self.model).module

        # Utilize half precision
        self.half = cfg.MODEL.HALF_PRECISION
        if self.half:
            self.model = self.model.half()
            self.priors = self.priors.half()

        # Build preprocessor and detector
        self.preprocessor = preproc(cfg.MODEL.IMAGE_SIZE, cfg.DATASET.PIXEL_MEANS, -2)
        self.detector = Detect(cfg.POST_PROCESS, self.priors)

        # Load weight:
        if cfg.RESUME_CHECKPOINT == '':
            AssertionError('RESUME_CHECKPOINT can not be empty')
        print('=> loading checkpoint {:s}'.format(cfg.RESUME_CHECKPOINT))
        checkpoint = torch.load(cfg.RESUME_CHECKPOINT)
        # checkpoint = torch.load(cfg.RESUME_CHECKPOINT, map_location='gpu' if self.use_gpu else 'cpu')
        self.model.load_state_dict(checkpoint)

        # test only
        self.model.eval()

    def predict(self, img, threshold=0.6, check_time=False):
        # make sure the input channel is 3
        assert img.shape[2] == 3
        scale = torch.Tensor([img.shape[1::-1], img.shape[1::-1]])
        _t = {'preprocess': Timer(), 'net_forward': Timer(), 'detect': Timer(), 'output': Timer()}

        # preprocess image
        _t['preprocess'].tic()
        x = Variable(self.preprocessor(img)[0].unsqueeze(0))
        if self.use_gpu:
            x = x.cuda()
        if self.half:
            x = x.half()
        preprocess_time = _t['preprocess'].toc()

        # forward
        _t['net_forward'].tic()
        out = self.model(x)  # forward pass
        net_forward_time = _t['net_forward'].toc()

        # detect
        _t['detect'].tic()
        detections = self.detector.forward(out)
        detect_time = _t['detect'].toc()

        # output
        _t['output'].tic()
        labels, scores, coords = [list() for _ in range(3)]
        # for batch in range(detections.size(0)):
        #     print('Batch:', batch)
        batch = 0
        for classes in range(detections.size(1)):
            num = 0
            while detections[batch, classes, num, 0] >= threshold:
                scores.append(detections[batch, classes, num, 0])
                labels.append(classes - 1)
                coords.append(detections[batch, classes, num, 1:] * scale)
                num += 1
        output_time = _t['output'].toc()
        total_time = preprocess_time + net_forward_time + detect_time + output_time

        if check_time is True:
            return labels, scores, coords, (total_time, preprocess_time, net_forward_time, detect_time, output_time)
        # total_time = preprocess_time + net_forward_time + detect_time + output_time
        # print('total time: {} \n preprocess: {} \n net_forward: {} \n detect: {} \n output: {}'.format(
        #     total_time, preprocess_time, net_forward_time, detect_time, output_time
        # ))
        return labels, scores, coords


from __future__ import print_function
import numpy as np
import os
import sys
import cv2
import random
import pickle

import torch
import torch.backends.cudnn as cudnn
from torch.autograd import Variable
import torch.optim as optim
from torch.optim import lr_scheduler
import torch.utils.data as data
import torch.nn.init as init

from tensorboardX import SummaryWriter

from lib.layers import *
from lib.utils.timer import Timer
from lib.utils.data_augment import preproc
from lib.modeling.model_builder import create_model
from lib.dataset.dataset_factory import load_data
from lib.utils.config_parse import cfg
from lib.utils.eval_utils import *
from lib.utils.visualize_utils import *


class Solver(object):
    """
    A wrapper class for the training process
    """
    def __init__(self):
        self.cfg = cfg

        # Load data
        print('===> Loading data')
        self.train_loader = load_data(cfg.DATASET, 'train') if 'train' in cfg.PHASE else None
        self.eval_loader = load_data(cfg.DATASET, 'eval') if 'eval' in cfg.PHASE else None
        self.test_loader = load_data(cfg.DATASET, 'test') if 'test' in cfg.PHASE else None
        self.visualize_loader = load_data(cfg.DATASET, 'visualize') if 'visualize' in cfg.PHASE else None

        # Build model
        print('===> Building model')
        self.model, self.priorbox = create_model(cfg.MODEL)
        self.priors = Variable(self.priorbox.forward())
        self.detector = Detect(cfg.POST_PROCESS, self.priors)

        # Utilize GPUs for computation
        self.use_gpu = torch.cuda.is_available()
        if self.use_gpu:
            print('Utilize GPUs for computation')
            print('Number of GPU available', torch.cuda.device_count())
            self.model.cuda()
            self.priors.cuda()
            cudnn.benchmark = True
            if torch.cuda.device_count() > 1:
                self.model = torch.nn.DataParallel(self.model, device_ids=[0, 1]).module

        # Print the model architecture and parameters
        print('Model architectures:\n{}\n'.format(self.model))

        # print('Parameters and size:')
        # for name, param in self.model.named_parameters():
        #     print('{}: {}'.format(name, list(param.size())))

        # print trainable scope
        print('Trainable scope: {}'.format(cfg.TRAIN.TRAINABLE_SCOPE))
        trainable_param = self.trainable_param(cfg.TRAIN.TRAINABLE_SCOPE)
        self.optimizer = self.configure_optimizer(trainable_param, cfg.TRAIN.OPTIMIZER)
        self.exp_lr_scheduler = self.configure_lr_scheduler(self.optimizer, cfg.TRAIN.LR_SCHEDULER)
        self.max_epochs = cfg.TRAIN.MAX_EPOCHS

        # metric
        self.criterion = MultiBoxLoss(cfg.MATCHER, self.priors, self.use_gpu)

        # Set the logger
        self.writer = SummaryWriter(log_dir=cfg.LOG_DIR)
        self.output_dir = cfg.EXP_DIR
        self.checkpoint = cfg.RESUME_CHECKPOINT
        self.checkpoint_prefix = cfg.CHECKPOINTS_PREFIX

    def save_checkpoints(self, epochs, iters=None):
        if not os.path.exists(self.output_dir):
            os.makedirs(self.output_dir)
        if iters:
            filename = self.checkpoint_prefix + '_epoch_{:d}_iter_{:d}'.format(epochs, iters) + '.pth'
        else:
            filename = self.checkpoint_prefix + '_epoch_{:d}'.format(epochs) + '.pth'
        filename = os.path.join(self.output_dir, filename)
        torch.save(self.model.state_dict(), filename)
        with open(os.path.join(self.output_dir, 'checkpoint_list.txt'), 'a') as f:
            f.write('epoch {epoch:d}: {filename}\n'.format(epoch=epochs, filename=filename))
        print('Wrote snapshot to: {:s}'.format(filename))

        # TODO: write relative cfg under the same page

    def resume_checkpoint(self, resume_checkpoint):
        if resume_checkpoint == '' or not os.path.isfile(resume_checkpoint):
            print(("=> no checkpoint found at '{}'".format(resume_checkpoint)))
            return False
        print(("=> loading checkpoint '{:s}'".format(resume_checkpoint)))
        checkpoint = torch.load(resume_checkpoint)

        # print("=> Weigths in the checkpoints:")
        # print([k for k, v in list(checkpoint.items())])

        # remove the module in the parrallel model
        if 'module.' in list(checkpoint.items())[0][0]:
            pretrained_dict = {'.'.join(k.split('.')[1:]): v for k, v in list(checkpoint.items())}
            checkpoint = pretrained_dict

        resume_scope = self.cfg.TRAIN.RESUME_SCOPE
        # extract the weights based on the resume scope
        if resume_scope != '':
            pretrained_dict = {}
            for k, v in list(checkpoint.items()):
                for resume_key in resume_scope.split(','):
                    if resume_key in k:
                        pretrained_dict[k] = v
                        break
            checkpoint = pretrained_dict

        pretrained_dict = {k: v for k, v in checkpoint.items() if k in self.model.state_dict()}
        # print("=> Resume weigths:")
        # print([k for k, v in list(pretrained_dict.items())])

        checkpoint = self.model.state_dict()

        unresume_dict = set(checkpoint) - set(pretrained_dict)
        if len(unresume_dict) != 0:
            print("=> UNResume weigths:")
            print(unresume_dict)

        checkpoint.update(pretrained_dict)

        return self.model.load_state_dict(checkpoint)

    def find_previous(self):
        if not os.path.exists(os.path.join(self.output_dir, 'checkpoint_list.txt')):
            return False
        with open(os.path.join(self.output_dir, 'checkpoint_list.txt'), 'r') as f:
            lineList = f.readlines()
        epoches, resume_checkpoints = [list() for _ in range(2)]
        for line in lineList:
            epoch = int(line[line.find('epoch ') + len('epoch '): line.find(':')])
            checkpoint = line[line.find(':') + 2:-1]
            epoches.append(epoch)
            resume_checkpoints.append(checkpoint)
        return epoches, resume_checkpoints

    def weights_init(self, m):
        for key in m.state_dict():
            if key.split('.')[-1] == 'weight':
                if 'conv' in key:
                    init.kaiming_normal(m.state_dict()[key], mode='fan_out')
                if 'bn' in key:
                    m.state_dict()[key][...] = 1
            elif key.split('.')[-1] == 'bias':
                m.state_dict()[key][...] = 0

    def initialize(self):
        if self.checkpoint:
            print('Loading initial model weights from {:s}'.format(self.checkpoint))
            self.resume_checkpoint(self.checkpoint)

        start_epoch = 0
        return start_epoch

    def trainable_param(self, trainable_scope):
        for param in self.model.parameters():
            param.requires_grad = False

        trainable_param = []
        for module in trainable_scope.split(','):
            if hasattr(self.model, module):
                # print(getattr(self.model, module))
                for param in getattr(self.model, module).parameters():
                    param.requires_grad = True
                trainable_param.extend(getattr(self.model, module).parameters())

        return trainable_param

    def train_model(self):
        previous = self.find_previous()
        if previous:
            start_epoch = previous[0][-1]
            self.resume_checkpoint(previous[1][-1])
        else:
            start_epoch = self.initialize()

        # export graph for the model, onnx always not works
        # self.export_graph()

        # warm_up epoch
        warm_up = self.cfg.TRAIN.LR_SCHEDULER.WARM_UP_EPOCHS
        for epoch in iter(range(start_epoch + 1, self.max_epochs + 1)):
            # learning rate
            sys.stdout.write('\rEpoch {epoch:d}/{max_epochs:d}:\n'.format(epoch=epoch, max_epochs=self.max_epochs))
            if epoch > warm_up:
                self.exp_lr_scheduler.step(epoch - warm_up)
            if 'train' in cfg.PHASE:
                self.train_epoch(self.model, self.train_loader, self.optimizer, self.criterion, self.writer, epoch, self.use_gpu)
            if 'eval' in cfg.PHASE:
                self.eval_epoch(self.model, self.eval_loader, self.detector, self.criterion, self.writer, epoch, self.use_gpu)
            if 'test' in cfg.PHASE:
                self.test_epoch(self.model, self.test_loader, self.detector, self.output_dir, self.use_gpu)
            if 'visualize' in cfg.PHASE:
                self.visualize_epoch(self.model, self.visualize_loader, self.priorbox, self.writer, epoch, self.use_gpu)

            if epoch % cfg.TRAIN.CHECKPOINTS_EPOCHS == 0:
                self.save_checkpoints(epoch)

    def test_model(self):
        previous = self.find_previous()
        if previous:
            for epoch, resume_checkpoint in zip(previous[0], previous[1]):
                if self.cfg.TEST.TEST_SCOPE[0] <= epoch <= self.cfg.TEST.TEST_SCOPE[1]:
                    sys.stdout.write('\rEpoch {epoch:d}/{max_epochs:d}:\n'.format(epoch=epoch, max_epochs=self.cfg.TEST.TEST_SCOPE[1]))
                    self.resume_checkpoint(resume_checkpoint)
                    if 'eval' in cfg.PHASE:
                        self.eval_epoch(self.model, self.eval_loader, self.detector, self.criterion, self.writer, epoch, self.use_gpu)
                    if 'test' in cfg.PHASE:
                        self.test_epoch(self.model, self.test_loader, self.detector, self.output_dir, self.use_gpu)
                    if 'visualize' in cfg.PHASE:
                        self.visualize_epoch(self.model, self.visualize_loader, self.priorbox, self.writer, epoch, self.use_gpu)
        else:
            sys.stdout.write('\rCheckpoint {}:\n'.format(self.checkpoint))
            self.resume_checkpoint(self.checkpoint)
            if 'eval' in cfg.PHASE:
                self.eval_epoch(self.model, self.eval_loader, self.detector, self.criterion, self.writer, 0, self.use_gpu)
            if 'test' in cfg.PHASE:
                self.test_epoch(self.model, self.test_loader, self.detector, self.output_dir, self.use_gpu)
            if 'visualize' in cfg.PHASE:
                self.visualize_epoch(self.model, self.visualize_loader, self.priorbox, self.writer, 0, self.use_gpu)

    def train_epoch(self, model, data_loader, optimizer, criterion, writer, epoch, use_gpu):
        model.train()

        epoch_size = len(data_loader)
        batch_iterator = iter(data_loader)

        loc_loss = 0
        conf_loss = 0
        _t = Timer()

        for iteration in iter(range((epoch_size))):
            images, targets = next(batch_iterator)
            if use_gpu:
                images = Variable(images.cuda())
                targets = [Variable(anno.cuda()) for anno in targets]
            else:
                images = Variable(images)
                targets = [Variable(anno) for anno in targets]
            _t.tic()
            # forward
            out = model(images, phase='train')

            # backprop
            optimizer.zero_grad()
            loss_l, loss_c = criterion(out, targets)

            # some bugs in coco train2017. maybe the annonation bug.
            if loss_l.data[0] == float("Inf"):
                continue

            loss = loss_l + loss_c
            loss.backward()
            optimizer.step()

            time = _t.toc()
            loc_loss += loss_l.data[0]
            conf_loss += loss_c.data[0]

            # log per iter
            log = '\r==>Train: || {iters:d}/{epoch_size:d} in {time:.3f}s [{prograss}] || loc_loss: {loc_loss:.4f} cls_loss: {cls_loss:.4f}\r'.format(
                prograss='#' * int(round(10 * iteration / epoch_size)) + '-' * int(round(10 * (1 - iteration / epoch_size))),
                iters=iteration, epoch_size=epoch_size, time=time,
                loc_loss=loss_l.data[0], cls_loss=loss_c.data[0])
            sys.stdout.write(log)
            sys.stdout.flush()

        # log per epoch
        sys.stdout.write('\r')
        sys.stdout.flush()
        lr = optimizer.param_groups[0]['lr']
        log = '\r==>Train: || Total_time: {time:.3f}s || loc_loss: {loc_loss:.4f} conf_loss: {conf_loss:.4f} || lr: {lr:.6f}\n'.format(
            lr=lr, time=_t.total_time, loc_loss=loc_loss / epoch_size, conf_loss=conf_loss / epoch_size)
        sys.stdout.write(log)
        sys.stdout.flush()

        # log for tensorboard
        writer.add_scalar('Train/loc_loss', loc_loss / epoch_size, epoch)
        writer.add_scalar('Train/conf_loss', conf_loss / epoch_size, epoch)
        writer.add_scalar('Train/lr', lr, epoch)

    def eval_epoch(self, model, data_loader, detector, criterion, writer, epoch, use_gpu):
        model.eval()

        epoch_size = len(data_loader)
        batch_iterator = iter(data_loader)

        loc_loss = 0
        conf_loss = 0
        _t = Timer()

        label = [list() for _ in range(model.num_classes)]
        gt_label = [list() for _ in range(model.num_classes)]
        score = [list() for _ in range(model.num_classes)]
        size = [list() for _ in range(model.num_classes)]
        npos = [0] * model.num_classes

        for iteration in iter(range((epoch_size))):
            # for iteration in iter(range((10))):
            images, targets = next(batch_iterator)
            if use_gpu:
                images = Variable(images.cuda())
                targets = [Variable(anno.cuda()) for anno in targets]
            else:
                images = Variable(images)
                targets = [Variable(anno) for anno in targets]

            _t.tic()
            # forward
            out = model(images, phase='train')

            # loss
            loss_l, loss_c = criterion(out, targets)

            out = (out[0], model.softmax(out[1].view(-1, model.num_classes)))

            # detect
            detections = detector.forward(out)

            time = _t.toc()

            # evals
            label, score, npos, gt_label = cal_tp_fp(detections, targets, label, score, npos, gt_label)
            size = cal_size(detections, targets, size)
            loc_loss += loss_l.data[0]
            conf_loss += loss_c.data[0]

            # log per iter
            log = '\r==>Eval: || {iters:d}/{epoch_size:d} in {time:.3f}s [{prograss}] || loc_loss: {loc_loss:.4f} cls_loss: {cls_loss:.4f}\r'.format(
                prograss='#' * int(round(10 * iteration / epoch_size)) + '-' * int(round(10 * (1 - iteration / epoch_size))),
                iters=iteration, epoch_size=epoch_size, time=time,
                loc_loss=loss_l.data[0], cls_loss=loss_c.data[0])
            sys.stdout.write(log)
            sys.stdout.flush()

        # eval mAP
        prec, rec, ap = cal_pr(label, score, npos)

        # log per epoch
        sys.stdout.write('\r')
        sys.stdout.flush()
        log = '\r==>Eval: || Total_time: {time:.3f}s || loc_loss: {loc_loss:.4f} conf_loss: {conf_loss:.4f} || mAP: {mAP:.6f}\n'.format(
            mAP=ap, time=_t.total_time, loc_loss=loc_loss / epoch_size, conf_loss=conf_loss / epoch_size)
        sys.stdout.write(log)
        sys.stdout.flush()

        # log for tensorboard
        writer.add_scalar('Eval/loc_loss', loc_loss / epoch_size, epoch)
        writer.add_scalar('Eval/conf_loss', conf_loss / epoch_size, epoch)
        writer.add_scalar('Eval/mAP', ap, epoch)
        viz_pr_curve(writer, prec, rec, epoch)
        viz_archor_strategy(writer, size, gt_label, epoch)

    def test_epoch(self, model, data_loader, detector, output_dir, use_gpu):
        model.eval()

        dataset = data_loader.dataset
        num_images = len(dataset)
        num_classes = detector.num_classes
        all_boxes = [[[] for _ in range(num_images)] for _ in range(num_classes)]
        empty_array = np.transpose(np.array([[], [], [], [], []]), (1, 0))

        _t = Timer()

        for i in iter(range((num_images))):
            img = dataset.pull_image(i)
            scale = [img.shape[1], img.shape[0], img.shape[1], img.shape[0]]
            if use_gpu:
                images = Variable(dataset.preproc(img)[0].unsqueeze(0).cuda())
            else:
                images = Variable(dataset.preproc(img)[0].unsqueeze(0))

            _t.tic()
            # forward
            out = model(images, phase='eval')

            # detect
            detections = detector.forward(out)

            time = _t.toc()

            # TODO: make it smart:
            for j in range(1, num_classes):
                cls_dets = list()
                for det in detections[0][j]:
                    if det[0] > 0:
                        d = det.cpu().numpy()
                        score, box = d[0], d[1:]
                        box *= scale
                        box = np.append(box, score)
                        cls_dets.append(box)
                if len(cls_dets) == 0:
                    cls_dets = empty_array
                all_boxes[j][i] = np.array(cls_dets)

            # log per iter
            log = '\r==>Test: || {iters:d}/{epoch_size:d} in {time:.3f}s [{prograss}]\r'.format(
                prograss='#' * int(round(10 * i / num_images)) + '-' * int(round(10 * (1 - i / num_images))),
                iters=i, epoch_size=num_images, time=time)
            sys.stdout.write(log)
            sys.stdout.flush()

        # write result to pkl
        with open(os.path.join(output_dir, 'detections.pkl'), 'wb') as f:
            pickle.dump(all_boxes, f, pickle.HIGHEST_PROTOCOL)

        # currently the COCO dataset do not return the mean ap or ap 0.5:0.95 values
        print('Evaluating detections')
        data_loader.dataset.evaluate_detections(all_boxes, output_dir)

    def visualize_epoch(self, model, data_loader, priorbox, writer, epoch, use_gpu):
        model.eval()

        img_index = random.randint(0, len(data_loader.dataset) - 1)

        # get img
        image = data_loader.dataset.pull_image(img_index)
        anno = data_loader.dataset.pull_anno(img_index)

        # visualize archor box
        viz_prior_box(writer, priorbox, image, epoch)

        # get preproc
        preproc = data_loader.dataset.preproc
        preproc.add_writer(writer, epoch)
        # preproc.p = 0.6

        # preproc image & visualize preprocess prograss
        images = Variable(preproc(image, anno)[0].unsqueeze(0))
        if use_gpu:
            images = images.cuda()

        # visualize feature map in base and extras
        base_out = viz_module_feature_maps(writer, model.base, images, module_name='base', epoch=epoch)
        extras_out = viz_module_feature_maps(writer, model.extras, base_out, module_name='extras', epoch=epoch)

        # visualize feature map in feature_extractors
        viz_feature_maps(writer, model(images, 'feature'), module_name='feature_extractors', epoch=epoch)

        model.train()
        images.requires_grad = True
        images.volatile = False
        base_out = viz_module_grads(writer, model, model.base, images, images, preproc.means, module_name='base', epoch=epoch)

        # TODO: add more...

    def configure_optimizer(self, trainable_param, cfg):
        if cfg.OPTIMIZER == 'sgd':
            optimizer = optim.SGD(trainable_param, lr=cfg.LEARNING_RATE,
                                  momentum=cfg.MOMENTUM, weight_decay=cfg.WEIGHT_DECAY)
        elif cfg.OPTIMIZER == 'rmsprop':
            optimizer = optim.RMSprop(trainable_param, lr=cfg.LEARNING_RATE,
                                      momentum=cfg.MOMENTUM, alpha=cfg.MOMENTUM_2,
                                      eps=cfg.EPS, weight_decay=cfg.WEIGHT_DECAY)
        elif cfg.OPTIMIZER == 'adam':
            optimizer = optim.Adam(trainable_param, lr=cfg.LEARNING_RATE,
                                   betas=(cfg.MOMENTUM, cfg.MOMENTUM_2),
                                   eps=cfg.EPS, weight_decay=cfg.WEIGHT_DECAY)
        else:
            AssertionError('optimizer can not be recognized.')
        return optimizer

    def configure_lr_scheduler(self, optimizer, cfg):
        if cfg.SCHEDULER == 'step':
            scheduler = lr_scheduler.StepLR(optimizer, step_size=cfg.STEPS[0], gamma=cfg.GAMMA)
        elif cfg.SCHEDULER == 'multi_step':
            scheduler = lr_scheduler.MultiStepLR(optimizer, milestones=cfg.STEPS, gamma=cfg.GAMMA)
        elif cfg.SCHEDULER == 'exponential':
            scheduler = lr_scheduler.ExponentialLR(optimizer, gamma=cfg.GAMMA)
        elif cfg.SCHEDULER == 'SGDR':
            scheduler = lr_scheduler.CosineAnnealingLR(optimizer, T_max=cfg.MAX_EPOCHS)
        else:
            AssertionError('scheduler can not be recognized.')
        return scheduler

    def export_graph(self):
        self.model.train(False)
        dummy_input = Variable(torch.randn(1, 3, cfg.MODEL.IMAGE_SIZE[0], cfg.MODEL.IMAGE_SIZE[1])).cuda()
        # Export the model
        torch_out = torch.onnx._export(self.model,          # model being run
                                       dummy_input,         # model input (or a tuple for multiple inputs)
                                       "graph.onnx",        # where to save the model (can be a file or file-like object)
                                       export_params=True)  # store the trained parameter weights inside the model file
        # if not os.path.exists(cfg.EXP_DIR):
        #     os.makedirs(cfg.EXP_DIR)
        # self.writer.add_graph(self.model, (dummy_input, ))


def train_model():
    s = Solver()
    s.train_model()
    return True


def test_model():
    s = Solver()
    s.test_model()
    return True
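For reference, the usual DataParallel pattern looks roughly like this (a minimal sketch with a stand-in model, not the project's actual code). Note that nn.DataParallel returns a wrapper module, and calling .module on the result, as the snippets above do, gives back the original unwrapped model:

import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # stand-in for the detection model
if torch.cuda.is_available() and torch.cuda.device_count() > 1:
    model = nn.DataParallel(model.cuda(), device_ids=[0, 1])  # keep the wrapper itself
    out = model(torch.randn(8, 10).cuda())  # the input batch is split across both GPUs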
st103695
Can you execute one of the following commands?

Shell:

echo $CUDA_VISIBLE_DEVICES

Or in Python:

import os
print(os.environ["CUDA_VISIBLE_DEVICES"])

This checks whether there are multiple GPUs available and visible on your machine.
st103696
INTERESTING! When I run nvidia-smi it shows my GPUs, but when I run echo $CUDA_VISIBLE_DEVICES, it shows nothing! Also, the second command gives this error:

>>> print(os.environ["CUDA_VISIBLE_DEVICES"])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/alireza/anaconda3/lib/python3.6/os.py", line 669, in __getitem__
    raise KeyError(key) from None
KeyError: 'CUDA_VISIBLE_DEVICES'

When I run echo $CUDA_VISIBLE_DEVICES=0,1, it shows =0,1. What should I do?
st103697
You could either set the variable at the top of your script:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"  # or other GPU ids if you want to
# your other imports and code

or set it in the terminal before executing your script:

export CUDA_VISIBLE_DEVICES=0,1

If you want to make it permanent, you can add the last variant to your .profile or your .bashrc.
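A quick way to verify that the setting took effect (assuming two GPUs with ids 0 and 1):

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"  # must be set before the first CUDA call

import torch
print(torch.cuda.device_count())  # should now print 2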
st103698
Two related questions:

if I create an nn.Dropout(0), does this get 'shortcutted' out, to become essentially an identity function, by an if statement inside F.dropout?
if it doesn't, what is the performance impact (if any) if I don't add such if dropout > 0: guards myself? (edit: my code is currently filled with such guards and I'm thinking of removing them)
st103699
Solved by albanD in post #5

I think this is the implementation of the nn Module for Dropout. And so yes, this check is already done.
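For reference, the early exit being described looks roughly like this (a simplified sketch, not the exact PyTorch source):

import torch

def dropout_sketch(input, p=0.5, training=True):
    # p == 0 (or eval mode) returns the input unchanged, so the guard costs nothing
    if p == 0 or not training:
        return input
    mask = (torch.rand_like(input) > p).float() / (1 - p)  # inverted dropout scaling
    return input * mask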