st32968
I have a question: is it safe to use torch.cuda.empty_cache() before each iteration during training?
st32969
This should be safe, but it might negatively affect performance, since PyTorch might need to reallocate this memory again. What is your use case for calling it in each iteration?
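For reference, a minimal sketch of the pattern being discussed, assuming a CUDA GPU and a toy model (the names and sizes here are made up). Calling torch.cuda.empty_cache() inside the loop is allowed, but each call returns cached blocks to the driver, so subsequent allocations have to go through the driver again.

```python
import torch

# Toy model and data, purely for illustration.
model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for step in range(10):
    x = torch.randn(64, 1024, device="cuda")

    optimizer.zero_grad()
    loss = model(x).sum()
    loss.backward()
    optimizer.step()

    # Safe, but releases cached blocks back to the driver, so the next
    # iteration has to re-request that memory (extra overhead).
    torch.cuda.empty_cache()
```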
st32970
What is the negative effect? My code runs more and more slowly and sometimes it crashes with a memory error, so I thought about calling it before each iteration. I don’t know whether it has an effect on the speed.
st32971
If you see increasing memory usage, you might accidentally be storing some tensors with an attached computation graph. E.g. if you store the loss for printing or debugging purposes, you should store loss.item() instead. This issue won’t be solved by clearing the cache repeatedly. As I said, this might just trigger unnecessary allocations which take some time, thus potentially slowing down your code.
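As a minimal sketch of the difference (toy model and made-up names): accumulating the loss tensor itself keeps every iteration's computation graph alive, while accumulating loss.item() stores only a Python float.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

running_loss = 0.0
for step in range(100):
    x = torch.randn(32, 10)
    y = torch.randint(0, 2, (32,))

    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

    # running_loss += loss        # keeps the graph of every step alive, memory grows
    running_loss += loss.item()   # stores a plain Python float instead
```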
st32972
Just to clarify, does item() deallocate loss? It’s not clear to me what exactly item() is doing.
st32973
.item() converts a tensor containing a single element into a Python number. It does not “deallocate” loss, but it won’t keep it (or its computation graph) alive either.
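A tiny example of what that means in practice (the values are made up):

```python
import torch

loss = (torch.ones(1, requires_grad=True) * 2).sum()
print(loss)               # tensor(2., grad_fn=<SumBackward0>)
print(loss.item())        # 2.0
print(type(loss.item()))  # <class 'float'>  -> no tensor, no autograd history
```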
st32974
This works only some of the time. Even when I clear out all the variables, restart the kernel, and execute torch.cuda.empty_cache() as the first line in my code, I still get a ‘CUDA out of memory’ error.
st32975
Hi, running empty_cache at the beginning of your process is not useful, as nothing has been allocated yet. When you restart the kernel, you force all memory to be deallocated. So if you still run out of memory, it is simply because your program requires more than what you have. You will most likely have to reduce the batch size or the size of your model.
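If it helps, a hedged sketch of the usual first step (the dataset, sizes and batch size here are invented): lower the batch size passed to the DataLoader and check how much memory the process actually allocates.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for the real dataset.
dataset = TensorDataset(torch.randn(1024, 10), torch.randint(0, 2, (1024,)))

# If e.g. batch_size=128 triggers 'CUDA out of memory', try something smaller.
loader = DataLoader(dataset, batch_size=32, shuffle=True)

if torch.cuda.is_available():
    print(f"{torch.cuda.memory_allocated() / 1024**2:.1f} MiB currently allocated")
```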
st32976
I have been trying to get a PyTorch version built from source which I can use for contributing for quite a while now. I’m cloning the repo and running python setup.py develop inside a designated anaconda environment. The build proceeds without errors. However, my problem is that I can never get the tests in test/run_test.sh to pass. Since I’m out of ideas, I’m currently trying to eliminate all warnings I get during the build process. The rationale is that hopefully, once this works, the tests will also succeed. Currently, I’m stuck with the following warnings, for which I couldn’t find a satisfactory answer after googling. I have omitted warnings which I think are insubstantial for now.

1. CMake Warning at /home/user/miniconda3/envs/torchdev37/lib/python3.7/site-packages/pybind11/share/cmake/pybind11/pybind11Tools.cmake:19 (message): Set PYBIND11_PYTHON_VERSION to search for a specific version, not PYTHON_VERSION (which is an output). Assuming that is what you meant to do and continuing anyway.
I installed pybind11 via pip inside my Miniconda environment. I’m not sure what I’m supposed to do here.

2. CMake Warning at cmake/External/nccl.cmake:62 (message): Objcopy version is too old to support NCCL library slimming
‘Update objcopy’ didn’t give me anything on Google. I’m on a newly set-up Ubuntu 20.04 machine.

3. CMake Warning (dev) at third_party/gloo/CMakeLists.txt:21 (option): Policy CMP0077 is not set: option() honors normal variables. Run "cmake --help-policy CMP0077" for policy details. Use the cmake_policy command to set the policy and suppress this warning.
Not sure whether this is important. (Edit: I got this one resolved.)

4. -- Could NOT find NCCL (missing: NCCL_INCLUDE_DIR NCCL_LIBRARY)
This one is super weird. I explicitly set these, and echo $NCCL_INCLUDE_DIR gives /home/user/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/include/, and similarly for the library path.

5. CMake Warning at cmake/Modules_CUDA_fix/upstream/FindCUDA.cmake:1915 (add_executable): Cannot generate a safe runtime search path for target generate_proposals_op_gpu_test because files in some directories may conflict with libraries in implicit directories: runtime library [libnvToolsExt.so.1] in /usr/lib/x86_64-linux-gnu may be hidden by files in: /usr/local/cuda-11.2/lib64 Some of these libraries may not be found correctly.
I got multiple warnings of the above form, all complaining specifically about runtime library [libnvToolsExt.so.1] in /usr/lib/x86_64-linux-gnu being hidden by files in /usr/local/cuda-11.2/lib64. I read Stack Overflow threads which dealt with the error message, but those didn’t seem actionable for this case, or maybe I just didn’t understand what exactly I’m supposed to do to remedy this. Set some environment variable maybe?

I would be really grateful for help in resolving these.
st32977
For now, I was able to resolve warning 4, -- Could NOT find NCCL (missing: NCCL_INCLUDE_DIR NCCL_LIBRARY). This is a confusing warning, because setting the environment variables NCCL_INCLUDE_DIR and NCCL_LIBRARY does not resolve the issue. Instead, the solution is to set the environment variable NCCL_ROOT_DIR. In my case, I set it to /home/user/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/. The other warnings are still there.
st32978
Are you using the base environment? Usually it is recommended to create different environments for different projects, to avoid conflicts between packages. So you could create a new environment, install PyTorch first, and then all the other packages. Nevertheless, I don’t think the problem is caused by PyTorch in this case. NCCL is an NVIDIA library, so maybe you could try to uninstall and reinstall CUDA. My suggestion is: create a new environment, install PyTorch immediately, then all the other packages you need for your project, and see if it works. Otherwise, try to reinstall CUDA as well. Check this CONDA CHEAT SHEET to see how to create a new env and activate it.
st32979
I am not using the base environment. I created a separate environment only for the PyTorch build. Thank you for the suggestion with CUDA, but as I wrote in my last post I got rid of the NCCL warning (4. of 5). I’m now left with warnings 1., 2., 3. and 5. described in my first post.
st32980
Usually most dependencies like pybind11 should be “vendored”, i.e. live in git submodules under third_party rather than be installed through conda (use git submodule update --init --recursive or so if you don’t have them). Notable exceptions are the compute libraries (CuDNN, Magma, MKL, …) and 2-3 Python things: numpy, pyyaml and one I forget. I’m assuming you generally follow the instructions in CONTRIBUTING.md? Best regards Thomas
st32981
Yes, I do follow the instructions in CONTRIBUTING.md and more specifically on how to build from source 1. The only thing I did differently before is that I didn’t install some of the required packages through conda, but instead used pip or apt-get. As a sanity check, I installed everything through conda now, inside a new environment with Python version 3.7.9. However, now the build fails completely, where before I had a ‘working’ build with failing tests from test/run_tests.sh. Here is the beginning of the output of python setup.py develop minus the error messages. Submodule path 'android/libs/fbjni': checked out 'b592c5591345a05341ed6cd31d214e71e8bf4229' Submodule path 'third_party/FP16': checked out '4dfe081cf6bcd15db339cf2680b9281b8451eeb3' Submodule path 'third_party/FXdiv': checked out 'b408327ac2a15ec3e43352421954f5b1967701d1' Submodule path 'third_party/NNPACK': checked out 'c07e3a0400713d546e0dea2d5466dd22ea389c73' Submodule path 'third_party/QNNPACK': checked out '7d2a4e9931a82adc3814275b6219a03e24e36b4c' Submodule path 'third_party/XNNPACK': checked out '55d53a4e7079d38e90acd75dd9e4f9e781d2da35' Submodule path 'third_party/benchmark': checked out '505be96ab23056580a3a2315abba048f4428b04e' Submodule path 'third_party/cpuinfo': checked out '5916273f79a21551890fd3d56fc5375a78d1598d' Submodule path 'third_party/cub': checked out 'd106ddb991a56c3df1b6d51b2409e36ba8181ce4' Submodule path 'third_party/eigen': checked out 'd41dc4dd74acce21fb210e7625d5d135751fa9e5' Submodule path 'third_party/fbgemm': checked out '580d6371fb4c4c606f6dcbb5b11085f5cfc73361' Submodule path 'third_party/fbgemm/third_party/asmjit': checked out '8b35b4cffb62ecb58a903bf91cb7537d7a672211' Submodule path 'third_party/fbgemm/third_party/cpuinfo': checked out 'ed8b86a253800bafdb7b25c5c399f91bff9cb1f3' Submodule path 'third_party/fbgemm/third_party/googletest': checked out 'cbf019de22c8dd37b2108da35b2748fd702d1796' Submodule path 'third_party/fmt': checked out 'cd4af11efc9c622896a3e4cb599fa28668ca3d05' Submodule path 'third_party/foxi': checked out 'bd6feb6d0d3fc903df42b4feb82a602a5fcb1fd5' Submodule path 'third_party/gemmlowp/gemmlowp': checked out '3fb5c176c17c765a3492cd2f0321b0dab712f350' Submodule path 'third_party/gloo': checked out '6f7095f6e9860ce4fd682a7894042e6eba0996f1' Submodule path 'third_party/googletest': checked out '2fe3bd994b3189899d93f1d5a881e725e046fdc2' Submodule path 'third_party/ideep': checked out 'f9468ff1a3d601b509ebe2c17d2ed0a58dffacee' Submodule path 'third_party/ideep/mkl-dnn': checked out '98be7e8afa711dc9b66c8ff3504129cb82013cdb' Submodule path 'third_party/ios-cmake': checked out '8abaed637d56f1337d6e1d2c4026e25c1eade724' Submodule path 'third_party/kineto': checked out '87c2a839b63f29ad0238345ab9d8dba5fde57f91' Submodule path 'third_party/kineto/libkineto/third_party/fmt': checked out '2591ab91c3898c9f6544fff04660276537d32ffd' Submodule path 'third_party/kineto/libkineto/third_party/googletest': checked out '7aca84427f224eeed3144123d5230d5871e93347' Submodule path 'third_party/nccl/nccl': checked out '033d799524fb97629af5ac2f609de367472b2696' Submodule path 'third_party/neon2sse': checked out '97a126f08ce318023be604d03f88bf0820a9464a' Submodule path 'third_party/onnx': checked out '54c38e6eaf557b844e70cebc00f39ced3321e9ad' Submodule path 'third_party/onnx/third_party/benchmark': checked out 'e776aa0275e293707b6a0901e0e8d8a8a3679508' Submodule path 'third_party/onnx/third_party/pybind11': checked out '80d452484c5409444b0ec19383faa84bb7a4d351' Submodule path 
'third_party/onnx/third_party/pybind11/tools/clang': checked out '6a00cbc4a9b8e68b71caf7f774b3f9c753ae84d5' Submodule path 'third_party/onnx-tensorrt': checked out 'c153211418a7c57ce071d9ce2a41f8d1c85a878f' Submodule path 'third_party/onnx-tensorrt/third_party/onnx': checked out '765f5ee823a67a866f4bd28a9860e81f3c811ce8' Submodule path 'third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark': checked out 'e776aa0275e293707b6a0901e0e8d8a8a3679508' Submodule path 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11': checked out 'a1041190c8b8ff0cd9e2f0752248ad5e3789ea0c' Submodule path 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang': checked out '6a00cbc4a9b8e68b71caf7f774b3f9c753ae84d5' Submodule path 'third_party/protobuf': checked out 'd0bfd5221182da1a7cc280f3337b5e41a89539cf' Submodule path 'third_party/protobuf/third_party/benchmark': checked out '5b7683f49e1e9223cf9927b24f6fd3d6bd82e3f8' Submodule path 'third_party/protobuf/third_party/googletest': checked out '5ec7f0c4a113e2f18ac2c6cc7df51ad6afc24081' Submodule path 'third_party/psimd': checked out '072586a71b55b7f8c584153d223e95687148a900' Submodule path 'third_party/pthreadpool': checked out 'a134dd5d4cee80cce15db81a72e7f929d71dd413' Submodule path 'third_party/pybind11': checked out '8de7772cc72daca8e947b79b83fea46214931604' Submodule path 'third_party/python-enum': checked out '4cfedc426c4e2fc52e3f5c2b4297e15ed8d6b8c7' Submodule path 'third_party/python-peachpy': checked out '07d8fde8ac45d7705129475c0f94ed8925b93473' Submodule path 'third_party/python-six': checked out '15e31431af97e5e64b80af0a3f598d382bcdd49a' Submodule path 'third_party/sleef': checked out 'e0a003ee838b75d11763aa9c3ef17bf71a725bff' Submodule path 'third_party/tbb': checked out 'a51a90bc609bb73db8ea13841b5cf7aa4344d4a9' Submodule path 'third_party/tensorpipe': checked out 'daa6e23a1f41d7a0a7227b1a0e541414da1f251d' Submodule path 'third_party/tensorpipe/third_party/googletest': checked out 'aee0f9d9b5b87796ee8a0ab26b7587ec30e8858e' Submodule path 'third_party/tensorpipe/third_party/libnop': checked out 'aa95422ea8c409e3f078d2ee7708a5f59a8b9fa2' Submodule path 'third_party/tensorpipe/third_party/libuv': checked out '1dff88e5161cba5c59276d2070d2e304e4dcb242' Submodule path 'third_party/tensorpipe/third_party/pybind11': checked out 'a23996fce38ff6ccfbcdc09f1e63f2c4be5ea2ef' Submodule path 'third_party/tensorpipe/third_party/pybind11/tools/clang': checked out '6a00cbc4a9b8e68b71caf7f774b3f9c753ae84d5' Submodule path 'third_party/zstd': checked out 'aec56a52fbab207fc639a1937d1e708a282edca8' -- The CXX compiler identification is GNU 9.3.0 -- The C compiler identification is GNU 9.3.0 -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: /usr/bin/c++ - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: /usr/bin/cc - skipped -- Detecting C compile features -- Detecting C compile features - done -- Not forcing any particular BLAS to be found -- Performing Test COMPILER_WORKS -- Performing Test COMPILER_WORKS - Success -- Performing Test SUPPORT_GLIBCXX_USE_C99 -- Performing Test SUPPORT_GLIBCXX_USE_C99 - Success -- Performing Test CAFFE2_EXCEPTION_PTR_SUPPORTED -- Performing Test CAFFE2_EXCEPTION_PTR_SUPPORTED - Success -- std::exception_ptr is supported. 
-- Performing Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING -- Performing Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING - Failed -- Turning off deprecation warning due to glog. -- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX2_EXTENSIONS -- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX2_EXTENSIONS - Success -- Current compiler supports avx2 extension. Will build perfkernels. -- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX512_EXTENSIONS -- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX512_EXTENSIONS - Success -- Current compiler supports avx512f extension. Will build fbgemm. -- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY -- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY - Success -- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY -- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY - Success -- Performing Test COMPILER_SUPPORTS_RDYNAMIC -- Performing Test COMPILER_SUPPORTS_RDYNAMIC - Success -- Building using own protobuf under third_party per request. -- Use custom protobuf build. -- -- 3.11.4.0 -- Looking for pthread.h -- Looking for pthread.h - found -- Performing Test CMAKE_HAVE_LIBC_PTHREAD -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed -- Check if compiler accepts -pthread -- Check if compiler accepts -pthread - yes -- Found Threads: TRUE -- Performing Test protobuf_HAVE_BUILTIN_ATOMICS -- Performing Test protobuf_HAVE_BUILTIN_ATOMICS - Success -- Caffe2 protobuf include directory: $<BUILD_INTERFACE:/home/username/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include> -- Trying to find preferred BLAS backend of choice: MKL -- MKL_THREADING = OMP -- Looking for sys/types.h -- Looking for sys/types.h - found -- Looking for stdint.h -- Looking for stdint.h - found -- Looking for stddef.h -- Looking for stddef.h - found -- Check size of void* -- Check size of void* - done -- Looking for cblas_sgemm -- Looking for cblas_sgemm - found -- MKL libraries: /usr/lib/x86_64-linux-gnu/libmkl_intel_lp64.so;/usr/lib/x86_64-linux-gnu/libmkl_gnu_thread.so;/usr/lib/x86_64-linux-gnu/libmkl_core.so;-fopenmp;/usr/lib/x86_64-linux-gnu/libpthread.so;/usr/lib/x86_64-linux-gnu/libm.so;/usr/lib/x86_64-linux-gnu/libdl.so -- MKL include directory: /home/username/miniconda3/pkgs/mkl-include-2021.2.0-h06a4308_296/include -- MKL OpenMP type: GNU -- MKL OpenMP library: -fopenmp -- The ASM compiler identification is GNU -- Found assembler: /usr/bin/cc -- Brace yourself, we are building NNPACK -- Performing Test NNPACK_ARCH_IS_X86_32 -- Performing Test NNPACK_ARCH_IS_X86_32 - Failed -- Found PythonInterp: /home/username/miniconda3/envs/torchdev/bin/python (found version "3.7.9") -- NNPACK backend is x86-64 -- Failed to find LLVM FileCheck -- Found Git: /usr/bin/git (found version "2.25.1") -- Performing Test HAVE_CXX_FLAG_STD_CXX11 -- Performing Test HAVE_CXX_FLAG_STD_CXX11 - Success -- Performing Test HAVE_CXX_FLAG_WALL -- Performing Test HAVE_CXX_FLAG_WALL - Success -- Performing Test HAVE_CXX_FLAG_WEXTRA -- Performing Test HAVE_CXX_FLAG_WEXTRA - Success -- Performing Test HAVE_CXX_FLAG_WSHADOW -- Performing Test HAVE_CXX_FLAG_WSHADOW - Success -- Performing Test HAVE_CXX_FLAG_WERROR -- Performing Test HAVE_CXX_FLAG_WERROR - Success -- Performing Test HAVE_CXX_FLAG_PEDANTIC -- Performing Test HAVE_CXX_FLAG_PEDANTIC - Success -- Performing Test HAVE_CXX_FLAG_PEDANTIC_ERRORS -- Performing Test HAVE_CXX_FLAG_PEDANTIC_ERRORS - Success -- Performing Test HAVE_CXX_FLAG_WSHORTEN_64_TO_32 -- Performing Test HAVE_CXX_FLAG_WSHORTEN_64_TO_32 - Failed -- Performing Test 
HAVE_CXX_FLAG_WFLOAT_EQUAL -- Performing Test HAVE_CXX_FLAG_WFLOAT_EQUAL - Success -- Performing Test HAVE_CXX_FLAG_FSTRICT_ALIASING -- Performing Test HAVE_CXX_FLAG_FSTRICT_ALIASING - Success -- Performing Test HAVE_CXX_FLAG_WNO_DEPRECATED_DECLARATIONS -- Performing Test HAVE_CXX_FLAG_WNO_DEPRECATED_DECLARATIONS - Success -- Performing Test HAVE_CXX_FLAG_WSTRICT_ALIASING -- Performing Test HAVE_CXX_FLAG_WSTRICT_ALIASING - Success -- Performing Test HAVE_CXX_FLAG_WD654 -- Performing Test HAVE_CXX_FLAG_WD654 - Failed -- Performing Test HAVE_CXX_FLAG_WTHREAD_SAFETY -- Performing Test HAVE_CXX_FLAG_WTHREAD_SAFETY - Failed -- Performing Test HAVE_CXX_FLAG_COVERAGE -- Performing Test HAVE_CXX_FLAG_COVERAGE - Success -- Performing Test COMPILER_SUPPORTS_AVX512 -- Performing Test COMPILER_SUPPORTS_AVX512 - Success -- Found OpenMP_C: -fopenmp (found version "4.5") -- Found OpenMP_CXX: -fopenmp (found version "4.5") -- Found OpenMP: TRUE (found version "4.5") -- Performing Test __CxxFlag__fno_threadsafe_statics -- Performing Test __CxxFlag__fno_threadsafe_statics - Success -- Performing Test __CxxFlag__fno_semantic_interposition -- Performing Test __CxxFlag__fno_semantic_interposition - Success -- Performing Test __CxxFlag__fmerge_all_constants -- Performing Test __CxxFlag__fmerge_all_constants - Success -- Performing Test __CxxFlag__fno_enforce_eh_specs -- Performing Test __CxxFlag__fno_enforce_eh_specs - Success -- Found Numa: /usr/include -- Found Numa (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libnuma.so) -- Using third party subdirectory Eigen. -- Found PythonInterp: /home/username/miniconda3/envs/torchdev/bin/python (found suitable version "3.7.9", minimum required is "3.0") -- Found PythonLibs: /home/username/miniconda3/envs/torchdev/lib/libpython3.7m.so.1.0 (found suitable version "3.7.9", minimum required is "3.0") -- Could NOT find pybind11 (missing: pybind11_DIR) -- Could NOT find pybind11 (missing: pybind11_INCLUDE_DIR) -- Using third_party/pybind11. 
-- pybind11 include dirs: /home/username/pytorch/cmake/../third_party/pybind11/include -- Found MPI_C: /usr/lib/x86_64-linux-gnu/libmpich.so (found version "3.1") -- Found MPI_CXX: /usr/lib/x86_64-linux-gnu/libmpichcxx.so (found version "3.1") -- Found MPI: TRUE (found version "3.1") -- MPI support found -- MPI compile flags: -- MPI include path: /usr/include/x86_64-linux-gnu/mpich -- MPI LINK flags path: -Wl,-Bsymbolic-functions -- MPI libraries: /usr/lib/x86_64-linux-gnu/libmpichcxx.so/usr/lib/x86_64-linux-gnu/libmpich.so -- Adding OpenMP CXX_FLAGS: -fopenmp -- Will link against OpenMP libraries: /usr/lib/gcc/x86_64-linux-gnu/9/libgomp.so;/usr/lib/x86_64-linux-gnu/libpthread.so -- Found CUDA: /usr/local/cuda-11.2 (found version "11.2") -- Caffe2: CUDA detected: 11.2 -- Caffe2: CUDA nvcc is: /usr/local/cuda-11.2/bin/nvcc -- Caffe2: CUDA toolkit directory: /usr/local/cuda-11.2 -- Caffe2: Header version is: 11.2 -- Found CUDNN: /home/username/miniconda3/pkgs/cudnn-7.6.5-cuda10.2_0/lib/libcudnn.so -- Found cuDNN: v7.6.5 (include: /home/username/miniconda3/pkgs/cudnn-7.6.5-cuda10.2_0/include, library: /home/username/miniconda3/pkgs/cudnn-7.6.5-cuda10.2_0/lib/libcudnn.so) -- /usr/local/cuda-11.2/lib64/libnvrtc.so shorthash is 369df368 -- Autodetected CUDA architecture(s): 7.5 7.5 -- Added CUDA NVCC flags for: -gencode;arch=compute_75,code=sm_75 -- Autodetected CUDA architecture(s): 7.5 7.5 -- Found CUB: /usr/local/cuda-11.2/include -- Gloo build as SHARED library -- MPI include path: /usr/include/x86_64-linux-gnu/mpich -- MPI libraries: /usr/lib/x86_64-linux-gnu/libmpichcxx.so/usr/lib/x86_64-linux-gnu/libmpich.so -- Found CUDA: /usr/local/cuda-11.2 (found suitable version "11.2", minimum required is "7.0") -- CUDA detected: 11.2 -- Found NCCL: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/include -- Determining NCCL version from the header file: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/include/nccl.h -- Found NCCL (include: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/include, library: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so) -- Found CUDA: /usr/local/cuda-11.2 (found version "11.2") -- Performing Test UV_LINT_W4 -- Performing Test UV_LINT_W4 - Failed -- Performing Test UV_LINT_NO_UNUSED_PARAMETER_MSVC -- Performing Test UV_LINT_NO_UNUSED_PARAMETER_MSVC - Failed -- Performing Test UV_LINT_NO_CONDITIONAL_CONSTANT_MSVC -- Performing Test UV_LINT_NO_CONDITIONAL_CONSTANT_MSVC - Failed -- Performing Test UV_LINT_NO_NONSTANDARD_MSVC -- Performing Test UV_LINT_NO_NONSTANDARD_MSVC - Failed -- Performing Test UV_LINT_NO_NONSTANDARD_EMPTY_TU_MSVC -- Performing Test UV_LINT_NO_NONSTANDARD_EMPTY_TU_MSVC - Failed -- Performing Test UV_LINT_NO_NONSTANDARD_FILE_SCOPE_MSVC -- Performing Test UV_LINT_NO_NONSTANDARD_FILE_SCOPE_MSVC - Failed -- Performing Test UV_LINT_NO_NONSTANDARD_NONSTATIC_DLIMPORT_MSVC -- Performing Test UV_LINT_NO_NONSTANDARD_NONSTATIC_DLIMPORT_MSVC - Failed -- Performing Test UV_LINT_NO_HIDES_LOCAL -- Performing Test UV_LINT_NO_HIDES_LOCAL - Failed -- Performing Test UV_LINT_NO_HIDES_PARAM -- Performing Test UV_LINT_NO_HIDES_PARAM - Failed -- Performing Test UV_LINT_NO_HIDES_GLOBAL -- Performing Test UV_LINT_NO_HIDES_GLOBAL - Failed -- Performing Test UV_LINT_NO_CONDITIONAL_ASSIGNMENT_MSVC -- Performing Test UV_LINT_NO_CONDITIONAL_ASSIGNMENT_MSVC - Failed -- Performing Test UV_LINT_NO_UNSAFE_MSVC -- Performing Test UV_LINT_NO_UNSAFE_MSVC - Failed -- Performing Test UV_LINT_WALL -- Performing Test UV_LINT_WALL - Success -- Performing 
Test UV_LINT_NO_UNUSED_PARAMETER -- Performing Test UV_LINT_NO_UNUSED_PARAMETER - Success -- Performing Test UV_LINT_STRICT_PROTOTYPES -- Performing Test UV_LINT_STRICT_PROTOTYPES - Success -- Performing Test UV_LINT_EXTRA -- Performing Test UV_LINT_EXTRA - Success -- Performing Test UV_LINT_UTF8_MSVC -- Performing Test UV_LINT_UTF8_MSVC - Failed -- Performing Test UV_F_STRICT_ALIASING -- Performing Test UV_F_STRICT_ALIASING - Success -- summary of build options: Install prefix: /home/username/pytorch/torch Target system: Linux Compiler: C compiler: /usr/bin/cc CFLAGS: -fopenmp -- Found uv: 1.38.1 (found version "1.38.1") -- -- ******** Summary ******** -- CMake version : 3.19.6 -- CMake command : /home/username/miniconda3/envs/torchdev/bin/cmake -- System : Linux -- C++ compiler : /usr/bin/c++ -- C++ compiler version : 9.3.0 -- CXX flags : -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -Wnon-virtual-dtor -- Build type : Release -- Compile definitions : TH_BLAS_MKL;ONNX_ML=1;ONNXIFI_ENABLE_EXT=1 -- CMAKE_PREFIX_PATH : /home/username/miniconda3/envs/torchdev/lib/python3.7/site-packages;/usr/local/cuda-11.2 -- CMAKE_INSTALL_PREFIX : /home/username/pytorch/torch -- CMAKE_MODULE_PATH : /home/username/pytorch/cmake/Modules;/home/username/pytorch/cmake/public/../Modules_CUDA_fix -- -- ONNX version : 1.8.0 -- ONNX NAMESPACE : onnx_torch -- ONNX_BUILD_TESTS : OFF -- ONNX_BUILD_BENCHMARKS : OFF -- ONNX_USE_LITE_PROTO : OFF -- ONNXIFI_DUMMY_BACKEND : OFF -- ONNXIFI_ENABLE_EXT : OFF -- -- Protobuf compiler : -- Protobuf includes : -- Protobuf libraries : -- BUILD_ONNX_PYTHON : OFF --
st32982
cont. -- ******** Summary ******** -- CMake version : 3.19.6 -- CMake command : /home/username/miniconda3/envs/torchdev/bin/cmake -- System : Linux -- C++ compiler : /usr/bin/c++ -- C++ compiler version : 9.3.0 -- CXX flags : -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -Wnon-virtual-dtor -- Build type : Release -- Compile definitions : TH_BLAS_MKL;ONNX_ML=1;ONNXIFI_ENABLE_EXT=1 -- CMAKE_PREFIX_PATH : /home/username/miniconda3/envs/torchdev/lib/python3.7/site-packages;/usr/local/cuda-11.2 -- CMAKE_INSTALL_PREFIX : /home/username/pytorch/torch -- CMAKE_MODULE_PATH : /home/username/pytorch/cmake/Modules;/home/username/pytorch/cmake/public/../Modules_CUDA_fix -- -- ONNX version : 1.4.1 -- ONNX NAMESPACE : onnx_torch -- ONNX_BUILD_TESTS : OFF -- ONNX_BUILD_BENCHMARKS : OFF -- ONNX_USE_LITE_PROTO : OFF -- ONNXIFI_DUMMY_BACKEND : OFF -- -- Protobuf compiler : -- Protobuf includes : -- Protobuf libraries : -- BUILD_ONNX_PYTHON : OFF -- Found CUDA with FP16 support, compiling with torch.cuda.HalfTensor -- Adding -DNDEBUG to compile flags -- Checking prototype magma_get_sgeqrf_nb for MAGMA_V2 -- Checking prototype magma_get_sgeqrf_nb for MAGMA_V2 - True -- Compiling with MAGMA support -- MAGMA INCLUDE DIRECTORIES: /home/username/miniconda3/pkgs/magma-2.5.0-hc5c8b49_0/include -- MAGMA LIBRARIES: /home/username/miniconda3/pkgs/magma-2.5.0-hc5c8b49_0/lib/libmagma.a -- MAGMA V2 check: 1 -- Could not find hardware support for NEON on this machine. -- No OMAP3 processor on this machine. -- No OMAP4 processor on this machine. -- Looking for cpuid.h -- Looking for cpuid.h - found -- Performing Test HAVE_GCC_GET_CPUID -- Performing Test HAVE_GCC_GET_CPUID - Success -- Performing Test NO_GCC_EBX_FPIC_BUG -- Performing Test NO_GCC_EBX_FPIC_BUG - Success -- Performing Test C_VSX_FOUND -- Performing Test C_VSX_FOUND - Failed -- Performing Test CXX_VSX_FOUND -- Performing Test CXX_VSX_FOUND - Failed -- Performing Test C_HAS_AVX_1 -- Performing Test C_HAS_AVX_1 - Failed -- Performing Test C_HAS_AVX_2 -- Performing Test C_HAS_AVX_2 - Success -- Performing Test C_HAS_AVX2_1 -- Performing Test C_HAS_AVX2_1 - Failed -- Performing Test C_HAS_AVX2_2 -- Performing Test C_HAS_AVX2_2 - Success -- Performing Test CXX_HAS_AVX_1 -- Performing Test CXX_HAS_AVX_1 - Failed -- Performing Test CXX_HAS_AVX_2 -- Performing Test CXX_HAS_AVX_2 - Success -- Performing Test CXX_HAS_AVX2_1 -- Performing Test CXX_HAS_AVX2_1 - Failed -- Performing Test CXX_HAS_AVX2_2 -- Performing Test CXX_HAS_AVX2_2 - Success -- AVX compiler support found -- AVX2 compiler support found -- Performing Test BLAS_F2C_DOUBLE_WORKS -- Performing Test BLAS_F2C_DOUBLE_WORKS - Failed -- Performing Test BLAS_F2C_FLOAT_WORKS -- Performing Test BLAS_F2C_FLOAT_WORKS - Success -- Performing Test BLAS_USE_CBLAS_DOT -- Performing Test BLAS_USE_CBLAS_DOT - Success -- Found a library with BLAS API (mkl). Full path: (/usr/lib/x86_64-linux-gnu/libmkl_intel_lp64.so;/usr/lib/x86_64-linux-gnu/libmkl_gnu_thread.so;/usr/lib/x86_64-linux-gnu/libmkl_core.so;-fopenmp;/usr/lib/x86_64-linux-gnu/libpthread.so;/usr/lib/x86_64-linux-gnu/libm.so;/usr/lib/x86_64-linux-gnu/libdl.so) -- Found a library with LAPACK API (mkl). -- MIOpen not found. 
Compiling without MIOpen support -- MKLDNN_CPU_RUNTIME = OMP -- Intel MKL-DNN compat: set DNNL_ENABLE_CONCURRENT_EXEC to MKLDNN_ENABLE_CONCURRENT_EXEC with value `ON` -- Intel MKL-DNN compat: set DNNL_BUILD_EXAMPLES to MKLDNN_BUILD_EXAMPLES with value `FALSE` -- Intel MKL-DNN compat: set DNNL_BUILD_TESTS to MKLDNN_BUILD_TESTS with value `FALSE` -- Intel MKL-DNN compat: set DNNL_LIBRARY_TYPE to MKLDNN_LIBRARY_TYPE with value `STATIC` -- Intel MKL-DNN compat: set DNNL_ARCH_OPT_FLAGS to MKLDNN_ARCH_OPT_FLAGS with value `-msse4` -- Intel MKL-DNN compat: set DNNL_CPU_RUNTIME to MKLDNN_CPU_RUNTIME with value `OMP` -- Found OpenMP_C: -fopenmp (found version "4.5") -- Found OpenMP_CXX: -fopenmp (found version "4.5") -- Primitive cache is enabled -- Found MKL-DNN: TRUE -- Looking for clock_gettime in rt -- Looking for clock_gettime in rt - found -- Looking for mmap -- Looking for mmap - found -- Looking for shm_open -- Looking for shm_open - found -- Looking for shm_unlink -- Looking for shm_unlink - found -- Looking for malloc_usable_size -- Looking for malloc_usable_size - found -- Performing Test C_HAS_THREAD -- Performing Test C_HAS_THREAD - Success -- Version: 7.0.3 -- Build type: Release -- CXX_STANDARD: 14 -- Performing Test has_std_14_flag -- Performing Test has_std_14_flag - Success -- Performing Test has_std_1y_flag -- Performing Test has_std_1y_flag - Success -- Performing Test SUPPORTS_USER_DEFINED_LITERALS -- Performing Test SUPPORTS_USER_DEFINED_LITERALS - Success -- Performing Test FMT_HAS_VARIANT -- Performing Test FMT_HAS_VARIANT - Success -- Required features: cxx_variadic_templates -- Looking for strtod_l -- Looking for strtod_l - not found -- CUDA build detected, configuring Kineto with CUPTI support. -- Configuring Kineto dependency: -- KINETO_SOURCE_DIR = /home/username/pytorch/third_party/kineto/libkineto -- KINETO_BUILD_TESTS = OFF -- KINETO_LIBRARY_TYPE = static -- CUDA_SOURCE_DIR = /usr/local/cuda-11.2 -- CUDA_cupti_LIBRARY = /usr/local/cuda-11.2/extras/CUPTI/lib64/libcupti_static.a -- CUPTI_INCLUDE_DIR = /usr/local/cuda-11.2/extras/CUPTI/include -- Found PythonInterp: /home/username/miniconda3/envs/torchdev/bin/python (found version "3.7.9") -- Kineto: FMT_SOURCE_DIR = /home/username/pytorch/third_party/fmt -- Kineto: FMT_INCLUDE_DIR = /home/username/pytorch/third_party/fmt/include -- Configured Kineto -- GCC 9.3.0: Adding gcc and gcc_s libs to link line -- Performing Test HAS_WERROR_FORMAT -- Performing Test HAS_WERROR_FORMAT - Success -- Performing Test HAS_WERROR_CAST_FUNCTION_TYPE -- Performing Test HAS_WERROR_CAST_FUNCTION_TYPE - Success -- Looking for backtrace -- Looking for backtrace - found -- backtrace facility detected in default set of libraries -- Found Backtrace: /usr/include -- NUMA paths: -- /usr/include -- /usr/lib/x86_64-linux-gnu/libnuma.so -- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT -- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT - Success -- Using ATen parallel backend: OMP -- Found OpenSSL: /usr/lib/x86_64-linux-gnu/libcrypto.so (found version "1.1.1f") -- Check size of long double -- Check size of long double - done -- Performing Test COMPILER_SUPPORTS_LONG_DOUBLE -- Performing Test COMPILER_SUPPORTS_LONG_DOUBLE - Success -- Performing Test COMPILER_SUPPORTS_FLOAT128 -- Performing Test COMPILER_SUPPORTS_FLOAT128 - Success -- Performing Test COMPILER_SUPPORTS_SSE2 -- Performing Test COMPILER_SUPPORTS_SSE2 - Success -- Performing Test COMPILER_SUPPORTS_SSE4 -- Performing Test COMPILER_SUPPORTS_SSE4 - Success -- Performing Test 
COMPILER_SUPPORTS_AVX -- Performing Test COMPILER_SUPPORTS_AVX - Success -- Performing Test COMPILER_SUPPORTS_FMA4 -- Performing Test COMPILER_SUPPORTS_FMA4 - Success -- Performing Test COMPILER_SUPPORTS_AVX2 -- Performing Test COMPILER_SUPPORTS_AVX2 - Success -- Performing Test COMPILER_SUPPORTS_AVX512F -- Performing Test COMPILER_SUPPORTS_AVX512F - Success -- Performing Test COMPILER_SUPPORTS_OPENMP -- Performing Test COMPILER_SUPPORTS_OPENMP - Success -- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES -- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES - Success -- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH -- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH - Success -- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM -- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM - Success -- Configuring build for SLEEF-v3.6.0 -- Using option `-Wall -Wno-unused -Wno-attributes -Wno-unused-result -Wno-psabi -ffp-contract=off -fno-math-errno -fno-trapping-math` to compile libsleef -- Building shared libs : OFF -- Building static test bins: OFF -- MPFR : LIB_MPFR-NOTFOUND -- GMP : LIBGMP-NOTFOUND -- RT : /usr/lib/x86_64-linux-gnu/librt.so -- FFTW3 : LIBFFTW3-NOTFOUND -- OPENSSL : 1.1.1f -- SDE : SDE_COMMAND-NOTFOUND -- RUNNING_ON_TRAVIS : -- COMPILER_SUPPORTS_OPENMP : 1 -- Include NCCL operators -- Excluding FakeLowP operators -- Including IDEEP operators -- Excluding image processing operators due to no opencv -- Excluding video processing operators due to no opencv -- Include Observer library -- breakpad library not found -- /usr/bin/c++ /home/username/pytorch/torch/abi-check.cpp -o /home/username/pytorch/build/abi-check -- Determined _GLIBCXX_USE_CXX11_ABI=1 -- MPI_INCLUDE_PATH: /usr/include/x86_64-linux-gnu/mpich -- MPI_LIBRARIES: /usr/lib/x86_64-linux-gnu/libmpichcxx.so;/usr/lib/x86_64-linux-gnu/libmpich.so -- MPIEXEC: /usr/bin/mpiexec -- Autodetected CUDA architecture(s): 7.5 7.5 -- pytorch is compiling with OpenMP. OpenMP CXX_FLAGS: -fopenmp. OpenMP libraries: /usr/lib/gcc/x86_64-linux-gnu/9/libgomp.so;/usr/lib/x86_64-linux-gnu/libpthread.so. -- Caffe2 is compiling with OpenMP. OpenMP CXX_FLAGS: -fopenmp. OpenMP libraries: /usr/lib/gcc/x86_64-linux-gnu/9/libgomp.so;/usr/lib/x86_64-linux-gnu/libpthread.so. 
-- Using lib/python3.7/site-packages as python relative installation path -- -- ******** Summary ******** -- General: -- CMake version : 3.19.6 -- CMake command : /home/username/miniconda3/envs/torchdev/bin/cmake -- System : Linux -- C++ compiler : /usr/bin/c++ -- C++ compiler id : GNU -- C++ compiler version : 9.3.0 -- Using ccache if found : ON -- Found ccache : /home/username/miniconda3/bin/ccache -- CXX flags : -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow -- Build type : Release -- Compile definitions : TH_BLAS_MKL;ONNX_ML=1;ONNXIFI_ENABLE_EXT=1;ONNX_NAMESPACE=onnx_torch;MAGMA_V2;IDEEP_USE_MKL;HAVE_MMAP=1;_FILE_OFFSET_BITS=64;HAVE_SHM_OPEN=1;HAVE_SHM_UNLINK=1;HAVE_MALLOC_USABLE_SIZE=1;USE_EXTERNAL_MZCRC;MINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -- CMAKE_PREFIX_PATH : /home/username/miniconda3/envs/torchdev/lib/python3.7/site-packages;/usr/local/cuda-11.2 -- CMAKE_INSTALL_PREFIX : /home/username/pytorch/torch -- USE_GOLD_LINKER : OFF -- -- TORCH_VERSION : 1.9.0 -- CAFFE2_VERSION : 1.9.0 -- BUILD_CAFFE2 : ON -- BUILD_CAFFE2_OPS : ON -- BUILD_CAFFE2_MOBILE : OFF -- BUILD_STATIC_RUNTIME_BENCHMARK: OFF -- BUILD_TENSOREXPR_BENCHMARK: OFF -- BUILD_BINARY : OFF -- BUILD_CUSTOM_PROTOBUF : ON -- Link local protobuf : ON -- BUILD_DOCS : OFF -- BUILD_PYTHON : True -- Python version : 3.7.9 -- Python executable : /home/username/miniconda3/envs/torchdev/bin/python -- Pythonlibs version : 3.7.9 -- Python library : /home/username/miniconda3/envs/torchdev/lib/libpython3.7m.so.1.0 -- Python includes : /home/username/miniconda3/envs/torchdev/include/python3.7m -- Python site-packages: lib/python3.7/site-packages -- BUILD_SHARED_LIBS : ON -- CAFFE2_USE_MSVC_STATIC_RUNTIME : OFF -- BUILD_TEST : True -- BUILD_JNI : OFF -- BUILD_MOBILE_AUTOGRAD : OFF -- BUILD_LITE_INTERPRETER: OFF -- INTERN_BUILD_MOBILE : -- USE_BLAS : 1 -- BLAS : mkl -- USE_LAPACK : 1 -- LAPACK : mkl -- USE_ASAN : OFF -- USE_CPP_CODE_COVERAGE : OFF -- USE_CUDA : ON -- Split CUDA : OFF -- CUDA static link : OFF -- USE_CUDNN : ON -- CUDA version : 11.2 -- cuDNN version : 7.6.5 -- CUDA root directory : /usr/local/cuda-11.2 -- CUDA library : /usr/local/cuda-11.2/lib64/stubs/libcuda.so -- cudart library : /usr/local/cuda-11.2/lib64/libcudart.so -- cublas library : /usr/local/cuda-11.2/lib64/libcublas.so -- cufft library : /usr/local/cuda-11.2/lib64/libcufft.so -- curand library : /usr/local/cuda-11.2/lib64/libcurand.so -- cuDNN library : /home/username/miniconda3/pkgs/cudnn-7.6.5-cuda10.2_0/lib/libcudnn.so -- nvrtc : /usr/local/cuda-11.2/lib64/libnvrtc.so -- CUDA include path : /usr/local/cuda-11.2/include -- NVCC executable : /usr/local/cuda-11.2/bin/nvcc -- NVCC flags : 
-Xfatbin;-compress-all;-DONNX_NAMESPACE=onnx_torch;-gencode;arch=compute_75,code=sm_75;-Xcudafe;--diag_suppress=cc_clobber_ignored,--diag_suppress=integer_sign_change,--diag_suppress=useless_using_declaration,--diag_suppress=set_but_not_used,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=implicit_return_from_non_void_function,--diag_suppress=unsigned_compare_with_zero,--diag_suppress=declared_but_not_referenced,--diag_suppress=bad_friend_decl;-std=c++14;-Xcompiler;-fPIC;--expt-relaxed-constexpr;--expt-extended-lambda;-Wno-deprecated-gpu-targets;--expt-extended-lambda;-Xcompiler;-fPIC;-DCUDA_HAS_FP16=1;-D__CUDA_NO_HALF_OPERATORS__;-D__CUDA_NO_HALF_CONVERSIONS__;-D__CUDA_NO_BFLOAT16_CONVERSIONS__;-D__CUDA_NO_HALF2_OPERATORS__ -- CUDA host compiler : /usr/bin/cc -- NVCC --device-c : OFF -- USE_TENSORRT : OFF -- USE_ROCM : OFF -- USE_EIGEN_FOR_BLAS : -- USE_FBGEMM : ON -- USE_FAKELOWP : OFF -- USE_KINETO : ON -- USE_FFMPEG : OFF -- USE_GFLAGS : OFF -- USE_GLOG : OFF -- USE_LEVELDB : OFF -- USE_LITE_PROTO : OFF -- USE_LMDB : OFF -- USE_METAL : OFF -- USE_PYTORCH_METAL : OFF -- USE_FFTW : OFF -- USE_MKL : ON -- USE_MKLDNN : ON -- USE_MKLDNN_CBLAS : OFF -- USE_NCCL : ON -- USE_SYSTEM_NCCL : OFF -- USE_NNPACK : ON -- USE_NUMPY : ON -- USE_OBSERVERS : ON -- USE_OPENCL : OFF -- USE_OPENCV : OFF -- USE_OPENMP : ON -- USE_TBB : OFF -- USE_VULKAN : OFF -- USE_PROF : OFF -- USE_QNNPACK : ON -- USE_PYTORCH_QNNPACK : ON -- USE_REDIS : OFF -- USE_ROCKSDB : OFF -- USE_ZMQ : OFF -- USE_DISTRIBUTED : ON -- USE_MPI : ON -- USE_GLOO : ON -- USE_TENSORPIPE : ON -- USE_DEPLOY : OFF -- Public Dependencies : Threads::Threads;caffe2::mkl;caffe2::mkldnn -- Private Dependencies : pthreadpool;cpuinfo;qnnpack;pytorch_qnnpack;nnpack;XNNPACK;fbgemm;/usr/lib/x86_64-linux-gnu/libnuma.so;fp16;/usr/lib/x86_64-linux-gnu/libmpichcxx.so;/usr/lib/x86_64-linux-gnu/libmpich.so;gloo;tensorpipe;aten_op_header_gen;foxi_loader;rt;fmt::fmt-header-only;kineto;gcc_s;gcc;dl -- Configuring done -- Generating done -- Build files have been written to: /home/username/pytorch/build and here is where the error occurs: [5818/6121] Linking CXX shared library lib/libtorch.so [5819/6121] Linking CXX executable bin/scalar_tensor_test FAILED: bin/scalar_tensor_test : && /usr/bin/c++ -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow -DHAVE_AVX_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -rdynamic -Wl,-Bsymbolic-functions caffe2/CMakeFiles/scalar_tensor_test.dir/__/aten/src/ATen/test/scalar_tensor_test.cpp.o -o bin/scalar_tensor_test 
-Wl,-rpath,/home/username/pytorch/build/lib:/usr/local/cuda-11.2/lib64:/home/username/miniconda3/pkgs/cudnn-7.6.5-cuda10.2_0/lib: lib/libgtest_main.a -Wl,--no-as-needed,"/home/username/pytorch/build/lib/libtorch.so" -Wl,--as-needed -Wl,--no-as-needed,"/home/username/pytorch/build/lib/libtorch_cpu.so" -Wl,--as-needed lib/libprotobuf.a -lmkl_intel_lp64 -lmkl_gnu_thread -lmkl_core -fopenmp /usr/lib/x86_64-linux-gnu/libpthread.so -lm /usr/lib/x86_64-linux-gnu/libdl.so lib/libdnnl.a -ldl -Wl,--no-as-needed,"/home/username/pytorch/build/lib/libtorch_cuda.so" -Wl,--as-needed lib/libc10_cuda.so lib/libc10.so /usr/local/cuda-11.2/lib64/libcudart.so /usr/lib/x86_64-linux-gnu/libnvToolsExt.so /usr/local/cuda-11.2/lib64/libcufft.so /usr/local/cuda-11.2/lib64/libcurand.so /usr/local/cuda-11.2/lib64/libcublas.so /home/username/miniconda3/pkgs/cudnn-7.6.5-cuda10.2_0/lib/libcudnn.so lib/libgtest.a -pthread && : /usr/bin/ld: warning: libcudart.so.10.0, needed by /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so.1, not found (try using -rpath or -rpath-link) /usr/bin/ld: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so.1: undefined reference to `[email protected]' /usr/bin/ld: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so.1: undefined reference to `[email protected]' /usr/bin/ld: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so.1: undefined reference to `[email protected]' /usr/bin/ld: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so.1: undefined reference to `[email protected]' /usr/bin/ld: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so.1: undefined reference to `[email protected]' /usr/bin/ld: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so.1: undefined reference to `[email protected]' /usr/bin/ld: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so.1: undefined reference to `[email protected]' /usr/bin/ld: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so.1: undefined reference to `[email protected]' /usr/bin/ld: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so.1: undefined reference to `[email protected]' /usr/bin/ld: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so.1: undefined reference to `[email protected]' /usr/bin/ld: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so.1: undefined reference to `[email protected]' /usr/bin/ld: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so.1: undefined reference to `[email protected]' /usr/bin/ld: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so.1: undefined reference to `[email protected]' /usr/bin/ld: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so.1: undefined reference to `[email protected]' /usr/bin/ld: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so.1: undefined reference to `[email protected]' /usr/bin/ld: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so.1: undefined reference to `[email protected]' /usr/bin/ld: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so.1: undefined reference to `[email protected]' /usr/bin/ld: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so.1: undefined reference to `[email protected]' /usr/bin/ld: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so.1: undefined reference to `[email protected]' /usr/bin/ld: 
/home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so.1: undefined reference to `[email protected]' /usr/bin/ld: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so.1: undefined reference to `[email protected]' /usr/bin/ld: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so.1: undefined reference to `[email protected]' /usr/bin/ld: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so.1: undefined reference to `[email protected]' /usr/bin/ld: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so.1: undefined reference to `[email protected]' /usr/bin/ld: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so.1: undefined reference to `[email protected]' /usr/bin/ld: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so.1: undefined reference to `[email protected]' /usr/bin/ld: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so.1: undefined reference to `[email protected]' /usr/bin/ld: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so.1: undefined reference to `[email protected]' /usr/bin/ld: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so.1: undefined reference to `[email protected]' /usr/bin/ld: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so.1: undefined reference to `[email protected]' /usr/bin/ld: /home/username/miniconda3/pkgs/nccl-1.3.5-cuda10.0_0/lib/libnccl.so.1: undefined reference to `[email protected]' collect2: error: ld returned 1 exit status [5820/6121] Building CXX object caffe2/torch/CMakeFiles/torch_python.dir/csrc/fx/fx_init.cpp.o [5821/6121] Building CXX object caffe2/torch/CMakeFiles/torch_python.dir/csrc/tensor/python_tensor.cpp.o [5822/6121] Building CXX object caffe2/torch/CMakeFiles/torch_python.dir/csrc/autograd/python_variable_indexing.cpp.o [5823/6121] Building CXX object caffe2/torch/CMakeFiles/torch_python.dir/csrc/jit/python/python_arg_flatten.cpp.o [5824/6121] Building CXX object caffe2/torch/CMakeFiles/torch_python.dir/csrc/jit/passes/onnx.cpp.o [5825/6121] Building CXX object caffe2/torch/CMakeFiles/torch_python.dir/csrc/jit/runtime/static/init.cpp.o [5826/6121] Building CXX object caffe2/torch/CMakeFiles/torch_python.dir/csrc/jit/python/python_interpreter.cpp.o [5827/6121] Building CXX object caffe2/torch/CMakeFiles/torch_python.dir/csrc/jit/backends/backend_init.cpp.o [5828/6121] Building CXX object caffe2/torch/CMakeFiles/torch_python.dir/csrc/jit/python/pybind_utils.cpp.o [5829/6121] Building CXX object caffe2/torch/CMakeFiles/torch_python.dir/csrc/jit/passes/onnx/shape_type_inference.cpp.o [5830/6121] Building CXX object caffe2/torch/CMakeFiles/torch_python.dir/csrc/jit/python/python_custom_class.cpp.o [5831/6121] Building CXX object caffe2/torch/CMakeFiles/torch_python.dir/csrc/jit/frontend/concrete_module_type.cpp.o [5832/6121] Building CXX object caffe2/torch/CMakeFiles/torch_python.dir/csrc/jit/python/python_tracer.cpp.o [5833/6121] Building CXX object caffe2/torch/CMakeFiles/torch_python.dir/csrc/jit/python/python_sugared_value.cpp.o [5834/6121] Building CXX object caffe2/torch/CMakeFiles/torch_python.dir/csrc/jit/python/python_tree_views.cpp.o [5835/6121] Building CXX object caffe2/torch/CMakeFiles/torch_python.dir/csrc/jit/tensorexpr/tensorexpr_init.cpp.o [5836/6121] Building CXX object caffe2/torch/CMakeFiles/torch_python.dir/csrc/jit/python/python_ir.cpp.o [5837/6121] Building CXX object caffe2/torch/CMakeFiles/torch_python.dir/csrc/jit/python/init.cpp.o [5838/6121] 
Building CXX object caffe2/torch/CMakeFiles/torch_python.dir/csrc/jit/python/script_init.cpp.o ninja: build stopped: subcommand failed.
st32983
Mixing libnccl compiled with CUDA 10 with the CUDA 11.2 you use for building PyTorch doesn’t work. I would probably use the vendored nccl.
st32984
Ok. I did a clean build now with CUDA 11.3, cuDNN 8.2.0 and /lib/x86_64-linux-gnu/libnccl.so. The build succeeds, and I have uploaded the output of python setup.py develop here 2 since it’s too large for one or two forum posts. But running test/run_test.sh I get fails immediately: Running test_import_time ... [2021-05-11 20:29:24.795319] Executing ['/home//miniconda3/envs/torchdev/bin/python', 'test_import_time.py'] ... [2021-05-11 20:29:24.795436] .. ---------------------------------------------------------------------- Ran 2 tests in 1.114s OK Running test_public_bindings ... [2021-05-11 20:29:26.483024] Executing ['/home//miniconda3/envs/torchdev/bin/python', 'test_public_bindings.py'] ... [2021-05-11 20:29:26.483127] . ---------------------------------------------------------------------- Ran 1 test in 0.000s OK Running test_type_hints ... [2021-05-11 20:29:27.053734] Executing ['/home//miniconda3/envs/torchdev/bin/python', 'test_type_hints.py'] ... [2021-05-11 20:29:27.053839] s ---------------------------------------------------------------------- Ran 1 test in 0.000s OK (skipped=1) Running test_autograd ... [2021-05-11 20:29:27.612298] Executing ['/home//miniconda3/envs/torchdev/bin/python', 'test_autograd.py'] ... [2021-05-11 20:29:27.612348] *** stack smashing detected ***: terminated Traceback (most recent call last): File "test/run_test.py", line 1169, in <module> main() File "test/run_test.py", line 1148, in main raise RuntimeError(err_message) RuntimeError: test_autograd failed! Received signal: SIGIOT
st32985
The next step could be to run under gdb (you can also run test_autograd directly if you want) and get a stack trace. I’m a bit surprised by the SIGIOT, though; it could be that you still have a library version mixup somewhere (try using ldd on the libraries in build/lib.*/torch/lib or so). Sometimes things like threading libraries are tricky.
st32986
II did gdb --args python test_autograd.py, followed by run with output [Thread debugging using libthread_db enabled] Using host libthread_db library “/lib/x86_64-linux-gnu/libthread_db.so.1”. [Detaching after fork from child process 153339] [New Thread 0x7fff817a9700 (LWP 153353)] *** stack smashing detected ***: terminated Thread 1 “python” received signal SIGABRT, Aborted. __GI_raise (sig=sig@entry=6) at …/sysdeps/unix/sysv/linux/raise.c:50 50 …/sysdeps/unix/sysv/linux/raise.c: No such file or directory. and backtrace with output #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50 [110/610] #1 0x00007ffff7dc1859 in __GI_abort () at abort.c:79 #2 0x00007ffff7e2c3ee in __libc_message (action=action@entry=do_abort, fmt=fmt@entry=0x7ffff7f5607c "*** %s ***: terminated\n") at ../sysdeps/posix/libc_fatal.c:155 #3 0x00007ffff7eceb4a in __GI___fortify_fail (msg=msg@entry=0x7ffff7f56064 "stack smashing detected") at fortify_fail.c:26 #4 0x00007ffff7eceb16 in __stack_chk_fail () at stack_chk_fail.c:24 #5 0x00007fffb578f0f7 in magma_init () from /home/user/pytorch/torch/lib/libtorch_cuda.so #6 0x00007fffb5060851 in at::cuda::detail::CUDAHooks::initCUDA() const () from /home/user/pytorch/torch/lib/libtorch_cuda.so #7 0x00007fffcafc1e10 in std::call_once<at::Context::lazyInitCUDA()::{lambda()#1}>(std::once_flag&, at:: Context::lazyInitCUDA()::{lambda()#1}&&)::{lambda()#2}::_FUN() () from /home/user/pytorch/torch/lib/libtorch_python.so #8 0x00007ffff7fa047f in __pthread_once_slow ( once_control=0x7fffc9c8fe40 <at::globalContext()::globalContext_>, init_routine=0x7fffd75987ba <std::__once_proxy()>) at pthread_once.c:116 #9 0x00007fffcafc1774 in THCPModule_initExtension(_object*, _object*) () from /home/user/pytorch/torch/lib/libtorch_python.so #10 0x00005555556b97e1 in _PyMethodDef_RawFastCallKeywords (method=0x555557ca3f40, self=0x7ffff6cf44d0, args=0x5555587d9c00, nargs=<optimized out>, kwnames=<optimized out>) at /tmp/build/80754af9/python_1598874792229/work/Objects/call.c:632 #11 0x00005555556b9a31 in _PyCFunction_FastCallKeywords (func=0x7ffff6cf7dc0, args=<optimized out>, nargs=<optimized out>, kwnames=<optimized out>) at /tmp/build/80754af9/python_1598874792229/work/Objects/call.c:732 #12 0x0000555555725ebd in call_function (kwnames=0x0, oparg=0, pp_stack=<synthetic pointer>) at /tmp/build/80754af9/python_1598874792229/work/Python/ceval.c:4568 #13 _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>) at /tmp/build/80754af9/python_1598874792229/work/Python/ceval.c:3093 #14 0x000055555566985b in function_code_fastcall (globals=<optimized out>, nargs=0, args=<optimized out>, co=0x7fff8fd3aa50) at /tmp/build/80754af9/python_1598874792229/work/Objects/call.c:283 #15 _PyFunction_FastCallDict (func=<optimized out>, args=0x0, nargs=0, kwargs=<optimized out>) at /tmp/build/80754af9/python_1598874792229/work/Objects/call.c:322 #16 0x00005555555c9ad0 in _PyObject_CallFunctionVa (callable=0x7fff8fc287a0, format=<optimized out>, va=<optimized out>, is_size_t=<optimized out>) at /tmp/build/80754af9/python_1598874792229/work/Objects/call.c:933 #17 0x00005555556c3287 in callmethod (is_size_t=0, va=0x7fffffffc710, format=0x7fffcb46aa4f "", callable=0x7fff8fc287a0) at /tmp/build/80754af9/python_1598874792229/work/Objects/call.c:1029 #18 PyObject_CallMethod (obj=<optimized out>, name=<optimized out>, format=0x7fffcb46aa4f "") at /tmp/build/80754af9/python_1598874792229/work/Objects/call.c:1048 #19 0x00007fffcaf80391 in torch::utils::cuda_lazy_init() () from 
/home/user/pytorch/torch/lib/libtorch_python.so #20 0x00007fffcafa8fac in torch::utils::(anonymous namespace)::internal_new_from_data(c10::TensorOptions, c10::ScalarType, c10::optional<c10::Device>, _object*, bool, bool, bool, bool) () from /home/user/pytorch/torch/lib/libtorch_python.so #21 0x00007fffcafadc99 in torch::utils::tensor_ctor(c10::DispatchKey, c10::ScalarType, _object*, _object* ) () from /home/user/pytorch/torch/lib/libtorch_python.so #22 0x00007fffcabe80ab in torch::autograd::THPVariable_tensor(_object*, _object*, _object*) () from /home/user/pytorch/torch/lib/libtorch_python.so #23 0x00005555556b99b6 in _PyMethodDef_RawFastCallKeywords (method=<optimized out>, self=0x0, args=0x5555586b4040, nargs=<optimized out>, kwnames=<optimized out>) at /tmp/build/80754af9/python_1598874792229/work/Objects/call.c:693 #24 0x00005555556b9a31 in _PyCFunction_FastCallKeywords (func=0x7ffff6d0d780, args=<optimized out>, nargs=<optimized out>, kwnames=<optimized out>) at /tmp/build/80754af9/python_1598874792229/work/Objects/call.c:732 #25 0x0000555555726483 in call_function (kwnames=0x7ffff6ec2f90, oparg=<optimized out>, pp_stack=<synthetic pointer>) at /tmp/build/80754af9/python_1598874792229/work/Python/ceval.c:4568 #26 _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>) at /tmp/build/80754af9/python_1598874792229/work/Python/ceval.c:3139 #27 0x0000555555668829 in _PyEval_EvalCodeWithName (_co=0x7fff89815c90, globals=<optimized out>, locals=<optimized out>, args=<optimized out>, argcount=<optimized out>, kwnames=0x0, kwargs=0x0, kwcount=0, kwstep=2, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0, name=0x0, qualname=0x0) at /tmp/build/80754af9/python_1598874792229/work/Python/ceval.c:3930 #28 0x0000555555669714 in PyEval_EvalCodeEx (_co=<optimized out>, globals=<optimized out>, locals=<optimized out>, args=<optimized out>, argcount=<optimized out>, kws=<optimized out>, kwcount=0, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0) at /tmp/build/80754af9/python_1598874792229/work/Python/ceval.c:3959 #29 0x000055555566973c in PyEval_EvalCode (co=<optimized out>, globals=<optimized out>, [42/610] locals=<optimized out>) at /tmp/build/80754af9/python_1598874792229/work/Python/ceval.c:524 #30 0x0000555555730e11 in builtin_exec_impl.isra.12 (locals=0x7fff898082d0, globals=0x7fff898082d0, source=0x7fff89815c90) at /tmp/build/80754af9/python_1598874792229/work/Python/bltinmodule.c:1079 #31 builtin_exec (module=<optimized out>, args=<optimized out>, nargs=<optimized out>) at /tmp/build/80754af9/python_1598874792229/work/Python/clinic/bltinmodule.c.h:283 #32 0x000055555568a4b2 in _PyMethodDef_RawFastCallDict (method=0x5555558812e0 <builtin_methods+480>, self=0x7ffff7617d10, args=<optimized out>, nargs=2, kwargs=0x7fff89808a00) at /tmp/build/80754af9/python_1598874792229/work/Objects/call.c:530 #33 0x000055555568a5d1 in _PyCFunction_FastCallDict (func=0x7ffff761ee10, args=<optimized out>, nargs=<optimized out>, kwargs=<optimized out>) at /tmp/build/80754af9/python_1598874792229/work/Objects/call.c:585 #34 0x0000555555726c33 in do_call_core (kwdict=0x7fff89808a00, callargs=0x7fff8a000820, func=0x7ffff761ee10) at /tmp/build/80754af9/python_1598874792229/work/Python/ceval.c:4641 #35 _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>) at /tmp/build/80754af9/python_1598874792229/work/Python/ceval.c:3191 #36 0x0000555555668829 in _PyEval_EvalCodeWithName (_co=0x7ffff75bf150, globals=<optimized out>, locals=<optimized out>, args=<optimized out>, argcount=<optimized out>, 
kwnames=0x0, kwargs=0x7fff8995db08, kwcount=0, kwstep=1, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0, name=0x7ffff75bd300, qualname=0x7ffff75bd300) at /tmp/build/80754af9/python_1598874792229/work/Python/ceval.c:3930 #37 0x00005555556b9107 in _PyFunction_FastCallKeywords (func=<optimized out>, stack=0x7fff8995daf0, nargs=3, kwnames=<optimized out>) at /tmp/build/80754af9/python_1598874792229/work/Objects/call.c:433 #38 0x0000555555725b29 in call_function (kwnames=0x0, oparg=<optimized out>, pp_stack=<synthetic pointer>) at /tmp/build/80754af9/python_1598874792229/work/Python/ceval.c:4616 #39 _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>) at /tmp/build/80754af9/python_1598874792229/work/Python/ceval.c:3093 #40 0x00005555556b8e7b in function_code_fastcall (globals=<optimized out>, nargs=2, args=<optimized out>, co=<optimized out>) at /tmp/build/80754af9/python_1598874792229/work/Objects/call.c:283 #41 _PyFunction_FastCallKeywords (func=<optimized out>, stack=0x5555585a6ed8, nargs=2, kwnames=<optimized out>) at /tmp/build/80754af9/python_1598874792229/work/Objects/call.c:408 #42 0x0000555555721740 in call_function (kwnames=0x0, oparg=<optimized out>, pp_stack=<synthetic pointer>) at /tmp/build/80754af9/python_1598874792229/work/Python/ceval.c:4616 #43 _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>) at /tmp/build/80754af9/python_1598874792229/work/Python/ceval.c:3110 #44 0x00005555556b8e7b in function_code_fastcall (globals=<optimized out>, nargs=1, args=<optimized out>, co=<optimized out>) at /tmp/build/80754af9/python_1598874792229/work/Objects/call.c:283 #45 _PyFunction_FastCallKeywords (func=<optimized out>, stack=0x55555852ce40, nargs=1, kwnames=<optimized out>) at /tmp/build/80754af9/python_1598874792229/work/Objects/call.c:408 #46 0x00005555557214b6 in call_function (kwnames=0x0, oparg=<optimized out>, pp_stack=<synthetic pointer>) at /tmp/build/80754af9/python_1598874792229/work/Python/ceval.c:4616 #47 _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>) at /tmp/build/80754af9/python_1598874792229/work/Python/ceval.c:3124 #48 0x00005555556b8e7b in function_code_fastcall (globals=<optimized out>, nargs=2, args=<optimized out>, co=<optimized out>) at /tmp/build/80754af9/python_1598874792229/work/Objects/call.c:283 #49 _PyFunction_FastCallKeywords (func=<optimized out>, stack=0x7fff898429e8, nargs=2, kwnames=<optimized out>) at /tmp/build/80754af9/python_1598874792229/work/Objects/call.c:408 #50 0x00005555557214b6 in call_function (kwnames=0x0, oparg=<optimized out>, pp_stack=<synthetic pointer>) at /tmp/build/80754af9/python_1598874792229/work/Python/ceval.c:4616 #51 _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>) at /tmp/build/80754af9/python_1598874792229/work/Python/ceval.c:3124 #52 0x000055555566985b in function_code_fastcall (globals=<optimized out>, nargs=2, args=<optimized out>, co=0x7ffff75c5930) at /tmp/build/80754af9/python_1598874792229/work/Objects/call.c:283 #53 _PyFunction_FastCallDict (func=<optimized out>, args=0x7fffffffd7e0, nargs=2, kwargs=<optimized out>) at /tmp/build/80754af9/python_1598874792229/work/Objects/call.c:322 #54 0x00005555556887ce in object_vacall (callable=0x7ffff75d1a70, vargs=<optimized out>) at /tmp/build/80754af9/python_1598874792229/work/Objects/call.c:1200 #55 0x00005555556e276d in _PyObject_CallMethodIdObjArgs (obj=<optimized out>, name=<optimized out>) at /tmp/build/80754af9/python_1598874792229/work/Objects/call.c:1250 #56 0x0000555555671fdc in 
import_find_and_load (abs_name=0x7ffff74b6c90) at /tmp/build/80754af9/python_1598874792229/work/Python/import.c:1652 #57 PyImport_ImportModuleLevelObject (name=0x7ffff74b6c90, globals=<optimized out>, --Type <RET> for more, q to quit, c to continue without paging-- locals=<optimized out>, fromlist=0x7ffff5df4e90, level=0) at /tmp/build/80754af9/python_1598874792229/work/Python/import.c:1764 #58 0x0000555555724479 in import_name (level=0x5555558be2e0 <small_ints+160>, fromlist=0x7ffff5df4e90, name=0x7ffff74b6c90, f=0x555555967280) at /tmp/build/80754af9/python_1598874792229/work/Python/ceval.c:4770 #59 _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>) at /tmp/build/80754af9/python_1598874792229/work/Python/ceval.c:2600 #60 0x0000555555668829 in _PyEval_EvalCodeWithName (_co=0x7ffff73eb420, globals=<optimized out>, locals=<optimized out>, args=<optimized out>, argcount=<optimized out>, kwnames=0x0, kwargs=0x0, kwcount=0, kwstep=2, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0, name=0x0, qualname=0x0) at /tmp/build/80754af9/python_1598874792229/work/Python/ceval.c:3930 #61 0x0000555555669714 in PyEval_EvalCodeEx (_co=<optimized out>, globals=<optimized out>, locals=<optimized out>, args=<optimized out>, argcount=<optimized out>, kws=<optimized out>, kwcount=0, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0) at /tmp/build/80754af9/python_1598874792229/work/Python/ceval.c:3959 #62 0x000055555566973c in PyEval_EvalCode (co=<optimized out>, globals=<optimized out>, locals=<optimized out>) at /tmp/build/80754af9/python_1598874792229/work/Python/ceval.c:524 #63 0x0000555555780f14 in run_mod (mod=<optimized out>, filename=<optimized out>, globals=0x7ffff7597be0, locals=0x7ffff7597be0, flags=<optimized out>, arena=<optimized out>) at /tmp/build/80754af9/python_1598874792229/work/Python/pythonrun.c:1035 #64 0x000055555578b331 in PyRun_FileExFlags (fp=0x5555558c4180, filename_str=<optimized out>, start=<optimized out>, globals=0x7ffff7597be0, locals=0x7ffff7597be0, closeit=1, flags=0x7fffffffdd10) at /tmp/build/80754af9/python_1598874792229/work/Python/pythonrun.c:988 #65 0x000055555578b523 in PyRun_SimpleFileExFlags (fp=0x5555558c4180, filename=<optimized out>, closeit=1, flags=0x7fffffffdd10) at /tmp/build/80754af9/python_1598874792229/work/Python/pythonrun.c:429 #66 0x000055555578c655 in pymain_run_file (p_cf=0x7fffffffdd10, filename=0x5555558c3900 L"test_autograd.py", fp=0x5555558c4180) at /tmp/build/80754af9/python_1598874792229/work/Modules/main.c:462 #67 pymain_run_filename (cf=0x7fffffffdd10, pymain=0x7fffffffde20) at /tmp/build/80754af9/python_1598874792229/work/Modules/main.c:1652 #68 pymain_run_python (pymain=0x7fffffffde20) at /tmp/build/80754af9/python_1598874792229/work/Modules/main.c:2913 #69 pymain_main (pymain=0x7fffffffde20) at /tmp/build/80754af9/python_1598874792229/work/Modules/main.c:3460 #70 0x000055555578c77c in _Py_UnixMain (argc=<optimized out>, argv=<optimized out>) at /tmp/build/80754af9/python_1598874792229/work/Modules/main.c:3495 #71 0x00007ffff7dc30b3 in __libc_start_main (main=0x555555649c90 <main>, argc=2, argv=0x7fffffffdf88, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fffffffdf78) at ../csu/libc-start.c:308 #72 0x0000555555730ff0 in _start () at ../sysdeps/x86_64/elf/start.S:103 I don’t know how to interpret this output though. I also uploaded the outputs of ldd build/lib/* here 1 and of torch/lib/* here 2. What am I supposed to look for?
st32987
Looks like it is related to magma. Apparently it’s not dynamically linked, though. Either magma using a different cuda version or having incompatible threading (I don’t even know if that is a thing with magma) could be not so good. You could try to disable magma and see if it helps (but of course, you’ll be missing quite a bit of cuda linear algebra then). I must admit I’m not sure what’s next. I only ever build my stuff on bare metal Debian systems and it mostly works for me, so I don’t have a lot of expertise debugging funny linking stuff.
st32988
I had used MAGMA installed via conda. Is this wrong? Apparently I have to register with http://magma.maths.usyd.edu.au/magma and wait a few days to install it manually. Is this something every PyTorch developer does? By the way: I’ve spent a couple of days fulltime so far on trying to build PyTorch, and it starts to feel like I’m doing something significantly wrong. I had set up the machine (with Ubuntu), installed Cuda, cuDNN etc., and then followed the instructions in contributing.md. As a potential first-time contributor: Are there any other ways to get started? I’m only doing this because I think I need my own build of PyTorch so I can get the tests in test/run_test.sh to pass before making a pull request. The actual coding to fix the issue was a mere fraction of the time I’ve invested into trying to build PyTorch. What else can I do? Would it help if I use Docker? Do I have to switch to AWS?
st32989
It probably is a decidedly nonstandard setup, but I literally use plain Debian/unstable, even with the CUDA libraries from the non-free section of their archive (they just don't have cuDNN, so I take that from NVidia). The other day profiling with kineto would not work, but other than that it usually just works, and I use cmake, python3-dev, python3-typing-extensions, python3-numpy-dev, python3-yaml and so on from Debian. No conda, Docker or other funny business. But again, this isn't the standard way and maybe someone else has better advice.
Unless you need Magma for what you are changing, you could just compile without it (USE_MAGMA=OFF or so). You can also just start individual tests by running python test/test_something.py TestCase.test_me, or use pytest with -k to filter. If it's a targeted fix, you have a test case for it, and you are reasonably sure that it works, you could probably just open a PR for it. I wouldn't debug code with the CI, but I have certainly submitted patches where I had overlooked failing CI cases because I ran tests locally too narrowly. So far no one has bitten my head off for that. Getting git-clang-format and python3 -m flake8 right on the first submission is probably a good idea.
Best regards
Thomas
st32990
I got the following error when running Inception_v3 with multithreading, after 4 warm-up runs, in a libtorch (C++) environment:

terminate called after throwing an instance of 'std::runtime_error'
what(): The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: CUDA driver error: invalid device context

What do I do?
st32991
Is the GPU still available after the run? If so, could you update to the latest nightly release and rerun the script?
st32992
@ptrblck I used torch 1.8.1, but the problem still remains. #include <cuda_runtime.h> #include <ATen/cuda/CUDAContext.h> #include <c10/cuda/CUDAGuard.h> #include <ATen/cuda/CUDAMultiStreamGuard.h> #include <ATen/cuda/CUDAEvent.h> #include <c10/cuda/CUDAStream.h> #include <pthread.h> #include <torch/script.h> #include <torch/torch.h> #include <typeinfo> #include <iostream> #include <inttypes.h> #include <functional> #include <memory> #include <stdlib.h> #include <c10/cuda/CUDAFunctions.h> #include <cuda_profiler_api.h> #include <limits.h> #include <time.h> #include <sys/time.h> #include <cuda_runtime.h> #include <thread> #include <limits.h> #include <time.h> #include <sys/time.h> #include <algorithm> #define n_dense 0 #define n_res 0 #define n_alex 0 #define n_vgg 0 #define n_wide 0 #define n_squeeze 0 #define n_mobile 0 #define n_mnasnet 0 #define n_inception 2 #define n_shuffle 0 #define n_resX 0 #define WARMING 4 namespace F = torch::nn::functional; using namespace std; c10::DeviceIndex GPU_NUM = 0; vector<torch::jit::IValue> inputs; vector<torch::jit::IValue> inputs2; torch::Device device = {at::kCUDA,GPU_NUM}; torch::Tensor x = torch::ones({1, 3, 224, 224}).to(device); torch::Tensor x2 = torch::ones({1, 3, 299, 299}).to(device); typedef struct _net { torch::jit::script::Module module; std::vector<torch::jit::IValue> input; std::string name; //network name int index_n; //network index int n_all; // all network num }Net; double what_time_is_it_now() { struct timeval time; if (gettimeofday(&time,NULL)){ return 0; } return (double)time.tv_sec + (double)time.tv_usec * .000001; } const int n_all = n_alex + n_vgg + n_res + n_dense + n_wide + n_squeeze + n_mobile + n_mnasnet + n_inception + n_shuffle + n_resX; double *start_time; double *end_time; void *network_predict(Net *net){ start_time[net->index_n] = what_time_is_it_now(); at::Tensor out; out = net->module.forward(net->input).toTensor(); end_time[net->index_n] = what_time_is_it_now(); std::cout << out.slice(/*dim=*/1, /*start=*/0, /*end=*/15) << "\n"; std::cout << "\n***** "<<net->name<<" EXECUTION TIME : "<< end_time[net->index_n]-start_time[net->index_n] <<"s ***** \n"; } int main(){ start_time = (double *)malloc(sizeof(double)*n_all); end_time = (double *)malloc(sizeof(double)*n_all); torch::jit::script::Module model[11]; model[0] = torch::jit::load("../densenet201_model.pt"); model[0].to(device); model[1] = torch::jit::load("../resnet152_model.pt"); model[1].to(device); model[2] = torch::jit::load("../alexnet_model.pt"); model[2].to(device); model[3] = torch::jit::load("../vgg_model.pt"); model[3].to(device); model[4] = torch::jit::load("../wideresnet_model.pt"); model[4].to(device); model[5] = torch::jit::load("../squeeze_model.pt"); model[5].to(device); model[6] = torch::jit::load("../mobilenet_model.pt"); model[6].to(device); model[7] = torch::jit::load("../mnasnet_model.pt"); model[7].to(device); model[8] = torch::jit::load("../inception_model.pt"); model[8].to(device); model[9] = torch::jit::load("../shuffle_model.pt"); model[9].to(device); model[10] = torch::jit::load("../resnext_model.pt"); model[10].to(device); pthread_t networkArray_dense[n_dense]; pthread_t networkArray_res[n_res]; pthread_t networkArray_alex[n_alex]; pthread_t networkArray_vgg[n_vgg]; pthread_t networkArray_wide[n_wide]; pthread_t networkArray_squeeze[n_squeeze]; pthread_t networkArray_mobile[n_mobile]; pthread_t networkArray_mnasnet[n_mnasnet]; pthread_t networkArray_inception[n_inception]; pthread_t networkArray_shuffle[n_shuffle]; pthread_t 
networkArray_resX[n_resX]; Net net_input_dense[n_dense]; Net net_input_res[n_res]; Net net_input_alex[n_alex]; Net net_input_vgg[n_vgg]; Net net_input_wide[n_wide]; Net net_input_squeeze[n_squeeze]; Net net_input_mobile[n_mobile]; Net net_input_mnasnet[n_mnasnet]; Net net_input_inception[n_inception]; Net net_input_shuffle[n_shuffle]; Net net_input_resX[n_resX]; inputs.push_back(x); auto x_ch0 = torch::unsqueeze(x2.index({torch::indexing::Slice(), 0}), 1) * (0.229 / 0.5) + (0.485 - 0.5) / 0.5; auto x_ch1 = torch::unsqueeze(x2.index({torch::indexing::Slice(), 1}), 1) * (0.224 / 0.5) + (0.456 - 0.5) / 0.5; auto x_ch2 = torch::unsqueeze(x2.index({torch::indexing::Slice(), 2}), 1) * (0.225 / 0.5) + (0.406 - 0.5) / 0.5; x_ch0.to(device); x_ch1.to(device); x_ch2.to(device); auto x_cat = torch::cat({x_ch0,x_ch1,x_ch2},1).to(device); inputs2.clear(); inputs2.push_back(x_cat); for(int i=0;i<n_dense;i++){ net_input_dense[i].module = model[0]; net_input_dense[i].input = inputs; net_input_dense[i].name = "DenseNet"; net_input_dense[i].index_n = i; for(int j=0;j<WARMING;j++){ network_predict(&net_input_dense[i]); cudaDeviceSynchronize(); net_input_dense[i].input = inputs; } } for(int i=0;i<n_res;i++){ net_input_res[i].module = model[1]; net_input_res[i].name = "ResNet"; net_input_res[i].input = inputs; net_input_res[i].index_n = i+n_dense; for(int j=0;j<WARMING;j++){ network_predict(&net_input_res[i]); cudaDeviceSynchronize(); net_input_res[i].input = inputs; } } for(int i=0;i<n_alex;i++){ net_input_alex[i].module = model[2]; net_input_alex[i].index_n = i+ n_res + n_dense; net_input_alex[i].input = inputs; net_input_alex[i].name = "AlexNet"; for(int j=0;j<WARMING;j++){ network_predict(&net_input_alex[i]); cudaDeviceSynchronize(); net_input_alex[i].input = inputs; } } for(int i=0;i<n_vgg;i++){ net_input_vgg[i].module = model[3]; net_input_vgg[i].input = inputs; net_input_vgg[i].name = "VGG"; net_input_vgg[i].index_n = i + n_alex + n_res + n_dense; for(int j=0;j<WARMING;j++){ network_predict(&net_input_vgg[i]); cudaDeviceSynchronize(); net_input_vgg[i].input = inputs; } } for(int i=0;i<n_wide;i++){ net_input_wide[i].module = model[4]; net_input_wide[i].input = inputs; net_input_wide[i].name = "WideResNet"; net_input_wide[i].index_n = i+n_alex + n_res + n_dense + n_vgg; for(int j=0;j<WARMING;j++){ network_predict(&net_input_wide[i]); cudaDeviceSynchronize(); net_input_wide[i].input = inputs; } } for(int i=0;i<n_squeeze;i++){ net_input_squeeze[i].module = model[5]; net_input_squeeze[i].name = "SqueezeNet"; net_input_squeeze[i].input = inputs; net_input_squeeze[i].index_n = i + n_alex + n_res + n_dense + n_vgg + n_wide; for(int j=0;j<WARMING;j++){ network_predict(&net_input_squeeze[i]); cudaDeviceSynchronize(); net_input_squeeze[i].input = inputs; } } for(int i=0;i<n_mobile;i++){ net_input_mobile[i].module = model[6]; net_input_mobile[i].input = inputs; net_input_mobile[i].name = "Mobile"; net_input_mobile[i].index_n = i + n_alex + n_res + n_dense + n_vgg + n_wide + n_squeeze; for(int j=0;j<WARMING;j++){ network_predict(&net_input_mobile[i]); cudaDeviceSynchronize(); } } for(int i=0;i<n_mnasnet;i++){ net_input_mnasnet[i].module = model[7]; net_input_mnasnet[i].input = inputs; net_input_mnasnet[i].name = "MNASNet"; net_input_mnasnet[i].index_n = i + n_alex + n_res + n_dense + n_vgg + n_wide + n_squeeze + n_mobile; for(int j=0;j<WARMING;j++){ network_predict(&net_input_mnasnet[i]); cudaDeviceSynchronize(); } } for(int i=0;i<n_inception;i++){ net_input_inception[i].module = model[8]; 
net_input_inception[i].input = inputs2; net_input_inception[i].name = "Inception_v3"; net_input_inception[i].index_n = i + n_alex + n_res + n_dense + n_vgg + n_wide + n_squeeze + n_mobile + n_mnasnet; for(int j=0;j<WARMING;j++){ network_predict(&net_input_inception[i]); cudaDeviceSynchronize(); } } for(int i=0;i<n_shuffle;i++){ net_input_shuffle[i].module = model[9]; net_input_shuffle[i].input = inputs; net_input_shuffle[i].name = "ShuffleNet"; net_input_shuffle[i].index_n = i + n_alex + n_res + n_dense + n_vgg + n_wide + n_squeeze + n_mobile + n_mnasnet + n_inception; for(int j=0;j<WARMING;j++){ network_predict(&net_input_shuffle[i]); cudaDeviceSynchronize(); } } for(int i=0;i<n_resX;i++){ net_input_resX[i].module = model[10]; net_input_resX[i].input = inputs; net_input_resX[i].name = "ResNext"; net_input_resX[i].index_n = i + n_alex + n_res + n_dense + n_vgg + n_wide + n_squeeze + n_mobile + n_mnasnet + n_inception + n_shuffle; for(int j=0;j<WARMING;j++){ network_predict(&net_input_resX[i]); cudaDeviceSynchronize(); net_input_resX[i].input = inputs; } } //cudaProfilerStart(); cudaDeviceSynchronize(); double time1 = what_time_is_it_now(); //cudaProfilerStart(); for(int i=0;i<n_dense;i++){ if (pthread_create(&networkArray_dense[i], NULL, (void *(*)(void*))network_predict, &net_input_dense[i]) < 0){ perror("thread error"); exit(0); } } for(int i=0;i<n_res;i++){ if (pthread_create(&networkArray_res[i], NULL, (void *(*)(void*))network_predict, &net_input_res[i]) < 0){ perror("thread error"); exit(0); } } for(int i=0;i<n_alex;i++){ if (pthread_create(&networkArray_alex[i], NULL, (void *(*)(void*))network_predict, &net_input_alex[i]) < 0){ perror("thread error"); exit(0); } } for(int i=0;i<n_vgg;i++){ if (pthread_create(&networkArray_vgg[i], NULL, (void *(*)(void*))network_predict, &net_input_vgg[i]) < 0){ perror("thread error"); exit(0); } } for(int i=0;i<n_wide;i++){ if (pthread_create(&networkArray_wide[i], NULL, (void *(*)(void*))network_predict, &net_input_wide[i]) < 0){ perror("thread error"); exit(0); } } for(int i=0;i<n_squeeze;i++){ if (pthread_create(&networkArray_squeeze[i], NULL, (void *(*)(void*))network_predict, &net_input_squeeze[i]) < 0){ perror("thread error"); exit(0); } } for(int i=0;i<n_mobile;i++){ if (pthread_create(&networkArray_mobile[i], NULL, (void *(*)(void*))network_predict, &net_input_mobile[i]) < 0){ perror("thread error"); exit(0); } } for(int i=0;i<n_mnasnet;i++){ if (pthread_create(&networkArray_mnasnet[i], NULL, (void *(*)(void*))network_predict, &net_input_mnasnet[i]) < 0){ perror("thread error"); exit(0); } } for(int i=0;i<n_inception;i++){ if (pthread_create(&networkArray_inception[i], NULL, (void *(*)(void*))network_predict, &net_input_inception[i]) < 0){ perror("thread error"); exit(0); } } for(int i=0;i<n_shuffle;i++){ if (pthread_create(&networkArray_shuffle[i], NULL, (void *(*)(void*))network_predict, &net_input_shuffle[i]) < 0){ perror("thread error"); exit(0); } } for(int i=0;i<n_resX;i++){ if (pthread_create(&networkArray_resX[i], NULL, (void *(*)(void*))network_predict, &net_input_resX[i]) < 0){ perror("thread error"); exit(0); } } for (int i = 0; i < n_dense; i++){ pthread_join(networkArray_dense[i], NULL); } for (int i = 0; i < n_res; i++){ pthread_join(networkArray_res[i], NULL); } for (int i = 0; i < n_alex; i++){ pthread_join(networkArray_alex[i], NULL); } for (int i = 0; i < n_vgg; i++){ pthread_join(networkArray_vgg[i], NULL); } for (int i = 0; i < n_wide; i++){ pthread_join(networkArray_wide[i], NULL); } for (int i = 0; i < n_squeeze; i++){ 
pthread_join(networkArray_squeeze[i], NULL); } for (int i = 0; i < n_mobile; i++){ pthread_join(networkArray_mobile[i], NULL); } for (int i = 0; i < n_mnasnet; i++){ pthread_join(networkArray_mnasnet[i], NULL); } for (int i = 0; i < n_inception; i++){ pthread_join(networkArray_inception[i], NULL); } for (int i = 0; i < n_shuffle; i++){ pthread_join(networkArray_shuffle[i], NULL); } for (int i = 0; i < n_resX; i++){ pthread_join(networkArray_resX[i], NULL); } double time2 = what_time_is_it_now(); std::cout << "\n***** TOTAL EXECUTION TIME : "<<time2-time1<<"s ***** \n"; } If I don’t warm-up, there’s no error. The inception only has this problem
st32993
Hi! I have this code import re from datasets import load_dataset import pandas as pd from torch.utils.data import Dataset from torch.utils.data import DataLoader from transformers import ( AdamW, T5ForConditionalGeneration, T5Tokenizer, get_linear_schedule_with_warmup ) class TranslateDataset(Dataset): def __init__( self, data_path: str, tokenizer: T5Tokenizer, max_seq_len: int = 20, max_target_len: int=20, memory_len: int =1, type: str = "train", ): self.type = type self.base_folder = data_path if type =='train': self.data = load_dataset('text', data_files={'train_en': self.base_folder + 'en_train', 'train_ru': self.base_folder + 'ru_train'} ) elif type =="dev": self.data = load_dataset('text', data_files={'dev_en': self.base_folder + 'en_dev', 'dev_ru': self.base_folder + 'ru_dev'} ) elif type =="test": self.data = load_dataset('text', data_files={ 'test_en': self.base_folder + 'en_test', 'test_ru': self.base_folder + 'ru_test'} ) else: raise ValueError("Invalid type dataset : %s /n data set should be one of the following {'train', 'dev', 'test' }" % type) self.tokenizer = tokenizer self.max_seq_len = max_seq_len self.max_target_len = max_target_len self.memory_len = memory_len print("data: ", self.data) def __len__(self): return len(self.data[self.type+'_en']) def __getitem__(self, index: int): source_row = self.data[self.type+'_en'][index]['text'] target_row = self.data[self.type+'_ru'][index]['text'] # process source source_row = source_row.split('_eos') num_seq = len(source_row) source_ids = [] source_masks = [] for i, sub_sentence in enumerate(source_row): source_encoding = self.tokenizer( sub_sentence, max_length=self.max_seq_len, padding='max_length', truncation=True, return_attention_mask=True, add_special_tokens=True, return_tensors="pt" ) source_ids.append(source_encoding["input_ids"].flatten()) source_masks.append(source_encoding["attention_mask"].flatten()) source_ids = torch.cat(source_ids) source_masks = torch.cat(source_masks) # process target targets_ids = [] targets_masks = [] target_row = target_row.split('_eos') # print("target_row: ", target_row) for i, sub_sentence in enumerate(target_row): target_encoding = self.tokenizer( sub_sentence, # max_length=self.max_seq_len, # 20 padding='max_length', truncation=True, return_attention_mask=True, add_special_tokens=True, return_tensors="pt" ) labels = target_encoding['input_ids'] labels[labels == 0] = -100 targets_ids.append(labels.flatten()) targets_masks.append(target_encoding["attention_mask"].flatten()) targets_ids = torch.cat(targets_ids) targets_masks = torch.cat(targets_masks) return dict( num_seq = 4, ) tokenizer = T5Tokenizer.from_pretrained('t5-small') dataset = TranslateDataset(data_path = base_folder, tokenizer=tokenizer, max_seq_len=12, max_target_len=12, memory_len =1, ) dataloader = DataLoader( dataset, batch_size=2, num_workers=2, ) next(iter(dataloader)) and this is the result: data: DatasetDict({ train_en: Dataset({ features: [‘text’], num_rows: 1500000 }) train_ru: Dataset({ features: [‘text’], num_rows: 1500000 }) }) {‘num_seq’: tensor([4, 4])} I suspect the reason is the implementation of dataloder or collate function , what is the solution?
st32994
You have to rewrite the collate function (you can copy-paste the default one, it's simple to understand) or write your own (see below).
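A minimal sketch of a custom collate_fn, assuming __getitem__ also returns the source_ids / target_ids tensors (which the posted __getitem__ builds but does not yet return); the key names are placeholders:

import torch
from torch.utils.data import DataLoader
from torch.nn.utils.rnn import pad_sequence

def my_collate(batch):
    # batch is a list of the dicts returned by __getitem__
    source_ids = [item["source_ids"] for item in batch]   # 1-D tensors of different lengths
    target_ids = [item["target_ids"] for item in batch]
    return {
        "source_ids": pad_sequence(source_ids, batch_first=True, padding_value=0),
        "target_ids": pad_sequence(target_ids, batch_first=True, padding_value=-100),
        "num_seq": torch.tensor([item["num_seq"] for item in batch]),
    }

dataloader = DataLoader(dataset, batch_size=2, num_workers=2, collate_fn=my_collate)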
st32995
I know a weighted sampler can mitigate the imbalanced-data problem. However, I wonder whether there is a way to load exactly the same number of samples per class. What I need is, for example, a batch with 10 samples from class A, 10 from class B, 10 from class C, etc. (I mean deterministically, not probabilistically, making sure to load 10 samples per class.) I also want to know how to combine such a solution with torch.utils.data.DataLoader.
st32996
Solved by Mika_Spilner in post #4.
st32997
I had the same problem when I re-implemented PFE, which needs 64 classes with 4 samples per class as an input. Here's my solution (not very good, but useful for me):

# inside the Dataset class (image_shape, get_image and get_label are defined elsewhere in my code)
def __init__(self):
    self.label_to_index = {'A': [1, 3, 4], 'B': [0, 2, 5], 'C': [6, 7, 8]}

def __getitem__(self, item):
    num_classes = len(self.label_to_index)
    images = np.empty((num_classes, *image_shape))
    labels = np.empty((num_classes,))
    for i, (label, indices) in enumerate(self.label_to_index.items()):
        index = np.random.choice(indices, 1, replace=True)[0]
        images[i, ...] = get_image(index)
        labels[i] = get_label(index)
    return torch.from_numpy(images), torch.from_numpy(labels)

As you can see, it returns (1 from A, 1 from B, 1 from C) as an item; I then set batch_size = 10. I think that, if the dataset is large enough, after a large number of iterations the random choice in each batch would be similar to shuffling the dataset and then iterating over it.
st32998
Thank you! That sounds nice; I'll try it. I'm also trying to implement metric learning (NCA) as you did.
st32999
Finally, I made a custom BatchSampler. I hope this code help someone like me. Reference here 75 import numpy as np import torch from torch.utils.data import DataLoader from torch.utils.data.sampler import BatchSampler class BalancedBatchSampler(BatchSampler): """ BatchSampler - from a MNIST-like dataset, samples n_classes and within these classes samples n_samples. Returns batches of size n_classes * n_samples """ def __init__(self, dataset, n_classes, n_samples): loader = DataLoader(dataset) self.labels_list = [] for _, label in loader: self.labels_list.append(label) self.labels = torch.LongTensor(self.labels_list) self.labels_set = list(set(self.labels.numpy())) self.label_to_indices = {label: np.where(self.labels.numpy() == label)[0] for label in self.labels_set} for l in self.labels_set: np.random.shuffle(self.label_to_indices[l]) self.used_label_indices_count = {label: 0 for label in self.labels_set} self.count = 0 self.n_classes = n_classes self.n_samples = n_samples self.dataset = dataset self.batch_size = self.n_samples * self.n_classes def __iter__(self): self.count = 0 while self.count + self.batch_size < len(self.dataset): classes = np.random.choice(self.labels_set, self.n_classes, replace=False) indices = [] for class_ in classes: indices.extend(self.label_to_indices[class_][ self.used_label_indices_count[class_]:self.used_label_indices_count[ class_] + self.n_samples]) self.used_label_indices_count[class_] += self.n_samples if self.used_label_indices_count[class_] + self.n_samples > len(self.label_to_indices[class_]): np.random.shuffle(self.label_to_indices[class_]) self.used_label_indices_count[class_] = 0 yield indices self.count += self.n_classes * self.n_samples def __len__(self): return len(self.dataset) // self.batch_size MNIST example: import torch import torchvision import torchvision.transforms as transforms from torchvision import datasets import numpy as np import matplotlib.pyplot as plt n_classes = 5 n_samples = 8 mnist_train = torchvision.datasets.MNIST(root="mnist/mnist_train", train=True, download=True, transform=transforms.Compose([transforms.ToTensor(),])) balanced_batch_sampler = BalancedBatchSampler(mnist_train, n_classes, n_samples) dataloader = torch.utils.data.DataLoader(mnist_train, batch_sampler=balanced_batch_sampler) my_testiter = iter(dataloader) images, target = my_testiter.next() def imshow(img): npimg = img.numpy() plt.imshow(np.transpose(npimg, (1, 2, 0))) imshow(torchvision.utils.make_grid(images))
st33000
Thanks, Mika, it works like a charm. I had to change the while condition to ensure that the last mini-batch is also delivered to the data loader:

# in __iter__'s while condition, change "<" to "<="
while self.count + self.batch_size <= len(self.dataset):
    # the rest of the code

In case anyone prefers using a library for this task, there is a similar sampler in PyTorch Metric Learning named MPerClassSampler. Refer to here.
st33001
my input tensor size [[10,3,256,256]) but my output tensor size ([10,1000]) is like this. I chose RGB images and my batch size 10. So my lost function doesn’t work. how can i fix this. i use ResNet-152 ["" def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1): “”“3x3 convolution with padding”"" return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=dilation, groups=groups, bias=False, dilation=dilation) def conv1x1(in_planes, out_planes, stride=1): “”“1x1 convolution”"" return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False) class BasicBlock(nn.Module): expansion = 1 def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1, base_width=64, dilation=1, norm_layer=None): super(BasicBlock, self).__init__() if norm_layer is None: norm_layer = nn.BatchNorm2d if groups != 1 or base_width != 64: raise ValueError('BasicBlock only supports groups=1 and base_width=64') if dilation > 1: raise NotImplementedError("Dilation > 1 not supported in BasicBlock") # Both self.conv1 and self.downsample layers downsample the input when stride != 1 self.conv1 = conv3x3(inplanes, planes, stride) self.bn1 = norm_layer(planes) self.relu = nn.ReLU(inplace=True) self.conv2 = conv3x3(planes, planes) self.bn2 = norm_layer(planes) self.downsample = downsample self.stride = stride def forward(self, x): identity = x out = self.conv1(x) out = self.bn1(out) out = self.relu(out) out = self.conv2(out) out = self.bn2(out) if self.downsample is not None: identity = self.downsample(x) out += identity out = self.relu(out) return out class Bottleneck(nn.Module): # Bottleneck in torchvision places the stride for downsampling at 3x3 convolution(self.conv2) # while original implementation places the stride at the first 1x1 convolution(self.conv1) # according to "Deep residual learning for image recognition"https://arxiv.org/abs/1512.03385. # This variant is also known as ResNet V1.5 and improves accuracy according to # https://ngc.nvidia.com/catalog/model-scripts/nvidia:resnet_50_v1_5_for_pytorch 2. 
expansion = 4 def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1, base_width=64, dilation=1, norm_layer=None): super(Bottleneck, self).__init__() if norm_layer is None: norm_layer = nn.BatchNorm2d width = int(planes * (base_width / 64.)) * groups # Both self.conv2 and self.downsample layers downsample the input when stride != 1 self.conv1 = conv1x1(inplanes, width) self.bn1 = norm_layer(width) self.conv2 = conv3x3(width, width, stride, groups, dilation) self.bn2 = norm_layer(width) self.conv3 = conv1x1(width, planes * self.expansion) self.bn3 = norm_layer(planes * self.expansion) self.relu = nn.ReLU(inplace=True) self.downsample = downsample self.stride = stride def forward(self, x): identity = x out = self.conv1(x) out = self.bn1(out) out = self.relu(out) out = self.conv2(out) out = self.bn2(out) out = self.relu(out) out = self.conv3(out) out = self.bn3(out) if self.downsample is not None: identity = self.downsample(x) out += identity out = self.relu(out) return out class ResNet(nn.Module): def __init__(self, block, layers, num_classes=1000, zero_init_residual=False, groups=1, width_per_group=64, replace_stride_with_dilation=None, norm_layer=None): super(ResNet, self).__init__() if norm_layer is None: norm_layer = nn.BatchNorm2d self._norm_layer = norm_layer self.inplanes = 64 self.dilation = 1 if replace_stride_with_dilation is None: # each element in the tuple indicates if we should replace # the 2x2 stride with a dilated convolution instead replace_stride_with_dilation = [False, False, False] if len(replace_stride_with_dilation) != 3: raise ValueError("replace_stride_with_dilation should be None " "or a 3-element tuple, got {}".format(replace_stride_with_dilation)) self.groups = groups self.base_width = width_per_group self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=7, stride=2, padding=3, bias=False) self.bn1 = norm_layer(self.inplanes) self.relu = nn.ReLU(inplace=True) self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) self.layer1 = self._make_layer(block, 64, layers[0]) self.layer2 = self._make_layer(block, 128, layers[1], stride=2, dilate=replace_stride_with_dilation[0]) self.layer3 = self._make_layer(block, 256, layers[2], stride=2, dilate=replace_stride_with_dilation[1]) self.layer4 = self._make_layer(block, 512, layers[3], stride=2, dilate=replace_stride_with_dilation[2]) self.avgpool = nn.AdaptiveAvgPool2d((1, 1)) self.fc = nn.Linear(512 * block.expansion, num_classes) for m in self.modules(): if isinstance(m, nn.Conv2d): nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)): nn.init.constant_(m.weight, 1) nn.init.constant_(m.bias, 0) # Zero-initialize the last BN in each residual branch, # so that the residual branch starts with zeros, and each residual block behaves like an identity. 
# This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677 if zero_init_residual: for m in self.modules(): if isinstance(m, Bottleneck): nn.init.constant_(m.bn3.weight, 0) elif isinstance(m, BasicBlock): nn.init.constant_(m.bn2.weight, 0) def _make_layer(self, block, planes, blocks, stride=1, dilate=False): norm_layer = self._norm_layer downsample = None previous_dilation = self.dilation if dilate: self.dilation *= stride stride = 1 if stride != 1 or self.inplanes != planes * block.expansion: downsample = nn.Sequential( conv1x1(self.inplanes, planes * block.expansion, stride), norm_layer(planes * block.expansion), ) layers = [] layers.append(block(self.inplanes, planes, stride, downsample, self.groups, self.base_width, previous_dilation, norm_layer)) self.inplanes = planes * block.expansion for _ in range(1, blocks): layers.append(block(self.inplanes, planes, groups=self.groups, base_width=self.base_width, dilation=self.dilation, norm_layer=norm_layer)) return nn.Sequential(*layers) def _forward_impl(self, x): # See note [TorchScript super()] x = self.conv1(x) x = self.bn1(x) x = self.relu(x) x = self.maxpool(x) x = self.layer1(x) x = self.layer2(x) x = self.layer3(x) x = self.layer4(x) x = self.avgpool(x) x = torch.flatten(x, 1) x = self.fc(x) return x def forward(self, x): return self._forward_impl(x) "]
st33002
The output shape of [10, 1000] looks valid for a mutli-class classification use case with 1000 classes. I guess the target shape might be wrong. nn.CrossEntropyLoss expects the target to have the shape [batch_size=10] containing the class indices in the range [0, nb_classes-1 = 999]. If your target is one-hot encoded, use target = torch.argmax(target, 1) to create the expected target with the class indices.
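As a quick illustration of the expected shapes (the random tensors are just stand-ins for the model output and a one-hot target):

import torch
import torch.nn as nn
import torch.nn.functional as F

criterion = nn.CrossEntropyLoss()
output = torch.randn(10, 1000)                                     # [batch_size, nb_classes] logits
one_hot = F.one_hot(torch.randint(0, 1000, (10,)), num_classes=1000)
target = torch.argmax(one_hot, dim=1)                              # [batch_size], class indices in [0, 999]
loss = criterion(output, target)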
st33003
I solved the size problem, but I have 6 classes in total. Hence, it expects the target to have [10, 6, 256, 256] dimensions. The output is [10, 2048, 256, 256] due to the ResNet. How can I fix the output? I want it to be [10, 6, 256, 256].
st33004
Are you working on a multi-class segmentation use case? If so, then the target should have the shape [10, 256, 256] and contain values in the range [0, 5]. How did you end up with an output of [10, 2048, 256, 256]? It seems that the spatial size wasn't reduced at all, which should be the case for a ResNet.
st33005
Yes, I am working on semantic segmentation on RGB images. Since batch_size = 10 and class_num = 6, my target size is [10, 6, 256, 256]. Yes, I use a ResNet architecture, so the output size is [10, 2048, 256, 256].
st33006
mustafa_emre_dos: Since Batch_size = 10 and class_num = 6, my target size is [10,6,256,256].

If you are using nn.CrossEntropyLoss or nn.NLLLoss, the target shape is wrong, as explained before.

mustafa_emre_dos: Yes, I use Resnet architecture, so the output size is [10,2048,256,256].

ResNet would return an output of [batch_size, 1000], so you would have to manipulate it in some way to get this output. What were these modifications?
st33007
I changed self.avgpool = nn.AdaptiveAvgPool2d((1, 1)) and self.fc = nn.Linear(512 * block.expansion, num_classes) in the last layer and deleted the line torch.flatten(x, 1). I used nn.Softmax2d() and self.avgpool = nn.AdaptiveAvgPool2d((256, 256)).
st33008
In that case you would have to reduce the output channels from 2048 to the number of classes, e.g. by using a conv layer with out_channels=nb_classes. The kernel size might be 1x1, but it depends on your use case, if that’s the best approach. The target would still be wrong: the target should have the shape [10, 256, 256] and contain values in the range [0, nb_classes-1]
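Roughly something like this, where scaled-down random tensors stand in for the backbone output and the target, and the 1x1 kernel is just one option:

import torch
import torch.nn as nn

nb_classes = 6
features = torch.randn(2, 2048, 64, 64)                 # stand-in for the backbone output
head = nn.Conv2d(2048, nb_classes, kernel_size=1)       # reduce 2048 channels to nb_classes
logits = head(features)                                 # [2, 6, 64, 64]

target = torch.randint(0, nb_classes, (2, 64, 64))      # class indices per pixel, not one-hot
loss = nn.CrossEntropyLoss()(logits, target)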
st33009
ptrblck: In that case you would have to reduce the output channels from 2048 to the number of classes, e.g. by using a conv layer with out_channels=nb_classes. The kernel size might be 1x1, but it depends on your use case, if that's the best approach.

Sorry, but I don't think I fully understand.
st33010
Hi! Can I have a kernel for a conv2d with some parameters trainable and some parameters not trainable? Let's say I have one kernel with dim 9x9: can I have the first 4x9 params trainable and the last 5x9 params not trainable?
st33011
Solved by ptrblck in post #2.
st33012
Yes, you could create a custom nn.Parameter for the trainable part and use torch.cat or torch.stack to create the weight tensor for the convolution. Once this is done, you could use the functional API via F.conv2d(input, weight, ...) to apply the convolution. You could use a custom nn.Module for these operations, if it would fit your use case.
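A rough sketch of that idea for the 9x9 kernel from the question; the split along the kernel height, the single in/out channel, and the random initialization are just assumptions:

import torch
import torch.nn as nn
import torch.nn.functional as F

class PartiallyFrozenConv(nn.Module):
    # 9x9 kernel: first 4 rows trainable, last 5 rows fixed
    def __init__(self, in_channels=1, out_channels=1):
        super().__init__()
        self.weight_trainable = nn.Parameter(torch.randn(out_channels, in_channels, 4, 9))
        self.register_buffer("weight_frozen", torch.randn(out_channels, in_channels, 5, 9))

    def forward(self, x):
        weight = torch.cat([self.weight_trainable, self.weight_frozen], dim=2)  # -> [out, in, 9, 9]
        return F.conv2d(x, weight, padding=4)

conv = PartiallyFrozenConv()
out = conv(torch.randn(1, 1, 32, 32))
out.mean().backward()   # only weight_trainable receives a gradient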
st33013
I have tried to freeze part of my model but it does not work. Gradient computation is still enabled for each layer. Is that some sort of bug or am I doing something wrong? model = models.resnet18(pretrained=True) # To freeze the residual layers for param in model.parameters(): param.require_grad = False for param in model.fc.parameters(): param.require_grad = True # Replace last layer num_features = model.fc.in_features model.fc = nn.Linear(num_features, 2) model.fc = nn.Dropout(0.5) # Find total parameters and trainable parameters total_params = sum(p.numel() for p in model.parameters()) print(f'{total_params:,} total parameters.') total_trainable_params = sum( p.numel() for p in model.parameters() if p.requires_grad) print(f'{total_trainable_params:,} training parameters.') 21,284,672 total parameters. 21,284,672 training parameters.
st33014
Solved by ptrblck in post #3.
st33015
The line model.fc = nn.Dropout(0.5) seems to make no sense, since it completely changes model.fc from a linear layer with trainable parameters to a layer without trainable parameters (nn.Dropout). Declare it as another layer:

model.fc = nn.Linear(num_features, 2)
model.dropout = nn.Dropout(0.5)

Or combine the two like this:

model.fc = nn.Sequential(
    nn.Linear(num_features, 2),
    nn.Dropout(0.5)
)

And let me know if it has changed anything.
st33016
You are creating a new .require_grad attribute for each parameter, while you most likely want to set the .requires_grad attribute to False for the parameters (note the missing s).
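For reference, a minimal corrected version of the freezing loop (keeping only the Linear replacement head from the original snippet):

import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False              # note the trailing "s"

model.fc = nn.Linear(model.fc.in_features, 2)   # newly created layers are trainable by default

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'{trainable:,} trainable parameters')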
st33017
Hello All; I have a very unbalanced dataset, which I tried to balance using the following code (Class Dataset then my code): class myDataset(Dataset): def __init__(self, csv_file, root_dir, target, length, transform=None): self.annotations = pd.read_csv(csv_file).iloc[:length,:] self.root_dir = root_dir self.transform = transform self.target = target self.length = length def __len__(self): return len(self.annotations) def __alltargets__(self): return self.annotations.loc[:,self.target] def __getitem__(self, index): img_path = os.path.join(self.root_dir, self.annotations.loc[index, 'image_id']) image = Image.open(img_path) image = np.array(image) if self.transform: image = self.transform(image=image)["image"] image = np.transpose(image, (2, 0, 1)).astype(np.float32) image = torch.tensor(image)# device=torch.device('cuda:0')) y_label = torch.tensor(int(self.annotations.loc[index, str(self.target)]))# device=torch.device('cuda:0')) return image, y_label And then my code: aug = al.Compose([ al.RandomResizedCrop(H, W, p=0.2), al.Resize(H, W), al.Transpose(p=0.2), al.HorizontalFlip(p=0.5), al.VerticalFlip(p=0.2), al.augmentations.Normalize(max_pixel_value=255.0, always_apply=True, p=1.0) ]) dataset = myDataset(csv_file=LABEL_PATH, root_dir=IMAGE_PATH, target='gender', length=LENGTH, transform=aug) l = dataset.__len__() y = dataset.__alltargets__() train_idx, valid_idx = train_test_split(np.arange(l), test_size=0.2, shuffle=True, stratify=y) train_sampler = torch.utils.data.SubsetRandomSampler(train_idx) test_sampler = torch.utils.data.SubsetRandomSampler(valid_idx) train_loader = DataLoader(dataset=dataset, batch_size=batch_size, shuffle=False, pin_memory=True, num_workers=4, sampler=train_sampler) test_loader = DataLoader(dataset=dataset, batch_size=batch_size, shuffle=False, pin_memory=True, num_workers=4, sampler=test_sampler) My Question please: My actual code is just splitting evenly the classes among train and test datasets. How can I make the mini-batches balanced also ? Thank you very much, Habib
st33018
Solved by ptrblck in post #4.
st33019
You could use a WeightedRandomSampler as described in this post with an example. If you want to split the dataset in a stratified way, you could use e.g. sklearn.model_selection.train_test_split with the stratify option for the indices of the dataset (and the targets as the inputs for stratify) and use these indices in Subsets.
st33020
Thank you @ptrblck for your response. In my code above, I've already used the stratify option of train_test_split, so my train and test sets are both well balanced. After that, I used SubsetRandomSampler based on these well-balanced indices. I'm struggling with how to add a WeightedRandomSampler on top of the SubsetRandomSampler. Thank you for your help. As you can see, all my data are initially in the dataset placeholder, which I have to split evenly for overfitting monitoring; my issue is that I don't know how to apply WeightedRandomSampler to the train dataset only. Thank you very much, Habib
st33021
You could create the weights using the code snippet from my previous post and replace the SubsetRandomSampler with the WeightedRandomSampler using the train_idx. Since you are already using a subset via the train_idx, note that you would also only need to calculate the weights for these indices and use it in the WeightedRandomSampler.
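A sketch of how that could look with the variable names from your code (the weight computation assumes integer class labels in the target column):

import numpy as np
import torch
from torch.utils.data import DataLoader, Subset, WeightedRandomSampler

targets = np.asarray(y)                                  # targets of the full dataset
train_targets = targets[train_idx]

class_counts = np.bincount(train_targets)
sample_weights = 1.0 / class_counts[train_targets]       # one weight per training sample

train_subset = Subset(dataset, train_idx)
sampler = WeightedRandomSampler(torch.as_tensor(sample_weights, dtype=torch.double),
                                num_samples=len(train_subset),
                                replacement=True)
train_loader = DataLoader(train_subset, batch_size=batch_size, sampler=sampler,
                          num_workers=4, pin_memory=True)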
st33022
Hello, I have a single model (Net) that contains two separate ResNet (CNN) models. I want to pass two datasets (with the same number and resolution of input images) into the same model using one dataloader. Can you help me?
st33023
From your other post - does this thread help you? The idea would be to (see the sketch below):
1. Create two datasets.
2. Create a final dataset that joins the two datasets.
3. Create a dataloader for the final, joined dataset to pass into your model.
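A minimal sketch of steps 2 and 3, assuming both datasets return (image, label) pairs and are index-aligned (dataset_a / dataset_b are placeholders for your two datasets):

import torch
from torch.utils.data import Dataset, DataLoader

class PairedDataset(Dataset):
    # joins two datasets of equal length so one loader yields samples from both
    def __init__(self, dataset_a, dataset_b):
        assert len(dataset_a) == len(dataset_b)
        self.dataset_a = dataset_a
        self.dataset_b = dataset_b

    def __len__(self):
        return len(self.dataset_a)

    def __getitem__(self, index):
        image_a, label_a = self.dataset_a[index]
        image_b, label_b = self.dataset_b[index]
        return image_a, image_b, label_a, label_b

loader = DataLoader(PairedDataset(dataset_a, dataset_b), batch_size=8, shuffle=True)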
st33024
We need more information to come to a solution. However, it is possible to return different images with their labels from the same dataloader:

class SampleDataloader(Dataset):
    def __init__(self, images1, images2, labels1, labels2):  # modify as per your use case
        self.images1, self.images2 = images1, images2
        self.labels1, self.labels2 = labels1, labels2
        self.length_samples = len(images1)  # number of samples

    def __len__(self):
        return self.length_samples

    def __getitem__(self, index):
        # returns a single sample (one image and label from each source) at a time
        return self.images1[index], self.images2[index], self.labels1[index], self.labels2[index]

As I've mentioned, if you are looking for more precise code, please describe what you are trying to do.
st33025
Why is my dimension unequal? Full script here: https://github.com/adomakor412/NERTO_CIMSS_GOES-R/blob/ca2d75bce4a279a8781c0fc1f3567bdac1856fce/AI_model-BW-NUMPY.ipynb device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # model = models.resnet50(pretrained=True) PATH = 'MODELS/model_epoch_60_May-10-21:1605_1620677105.pth' model = torch.load(PATH) #torch.save(model, 'ResnetPretrained.pth'); myTestData = [] myTrainData = [] myValData = [] def load_split_train_test(traindir, testdir, valdir): train_transforms = transforms.Compose([transforms.Resize(32), transforms.ToTensor(), ]) test_transforms = transforms.Compose([transforms.Resize(32), transforms.ToTensor(), ]) val_transforms = transforms.Compose([transforms.Resize(32), transforms.ToTensor(), ]) train_data = datasets.ImageFolder(traindir, transform=train_transforms) test_data = datasets.ImageFolder(testdir, transform=test_transforms) val_data = datasets.ImageFolder(valdir, transform=test_transforms) # train_idx = list(range(len(traindir))) # nr.shuffle(np.array(train_idx)) # test_idx = list(range(len(testdir))) # nr.shuffle(np.array(test_idx)) # val_idx = list(range(len(valdir))) # nr.shuffle(np.array(val_idx)) train_idx = list(range(len(traindir))) nr.shuffle(train_idx) test_idx = list(range(len(testdir))) nr.shuffle(test_idx) val_idx = list(range(len(valdir))) nr.shuffle(val_idx) train_sampler = SubsetRandomSampler(train_idx) test_sampler = SubsetRandomSampler(test_idx) val_sampler = SubsetRandomSampler(val_idx) # trainloader = torch.utils.data.DataLoader(train_data, sampler = train_sampler, batch_size = batch_size) # testloader = torch.utils.data.DataLoader(test_data, sampler = test_sampler, batch_size = batch_size) # valloader = torch.utils.data.DataLoader(val_data, sampler = val_sampler, batch_size = batch_size) trainloader = torch.utils.data.DataLoader(train_data, batch_size = batch_size) testloader = torch.utils.data.DataLoader(test_data, batch_size = batch_size) valloader = torch.utils.data.DataLoader(val_data, batch_size = batch_size) myTestData.append(test_data) myTrainData.append(train_data) myValData.append(val_data) return trainloader, testloader, valloader trainloader, testloader, valloader = load_split_train_test(data_train_dir, data_test_dir, data_val_dir) print(valloader.dataset.classes) print(testloader.dataset.classes) print(trainloader.dataset.classes) [‘fillin’, ‘sharkfin’] [‘fillin’, ‘sharkfin’] [‘fillin’, ‘sharkfin’] train_X = next(iter(trainloader.dataset))[0].numpy() test_X = trainloader.dataset.targets train_y = next(iter(testloader.dataset))[0].numpy() test_y = trainloader.dataset.targets train_dataset = TensorDataset(torch.Tensor(train_X), torch.Tensor(train_y)) train_loader = DataLoader(train_dataset, batch_size=16, shuffle=True) net = model criterion = nn.CrossEntropyLoss()# cross entropy loss optimizer = torch.optim.SGD(net.parameters(), lr=0.01) net.train() # for epoch in range(1000): # for inputs, targets in train_loader: # optimizer.zero_grad() # out = net(inputs) # loss = criterion(out, targets.long()) # loss.backward() # optimizer.step() # if epoch % 100 == 0: # print('number of epoch', epoch, 'loss', loss.item()) predict_out = net(torch.Tensor(test_X)) _, predict_y = torch.max(predict_out, 1) print('prediction accuracy', accuracy_score(test_y.data, predict_y.data)) print('macro precision', precision_score(test_y.data, predict_y.data, average='macro')) print('micro precision', precision_score(test_y.data, predict_y.data, average='micro')) print('macro recall', recall_score(test_y.data, 
predict_y.data, average='macro')) print('micro recall', recall_score(test_y.data, predict_y.data, average='micro'))
st33026
RuntimeError Traceback (most recent call last) in 27 # print(‘number of epoch’, epoch, ‘loss’, loss.item()) 28 —> 29 predict_out = net(torch.Tensor(test_X)) 30 _, predict_y = torch.max(predict_out, 1) 31 /scratch/adomakor412/conda/envs/MyEnv/lib/python3.6/site-packages/torch/nn/modules/module.py in call(self, *input, **kwargs) 530 result = self._slow_forward(*input, **kwargs) 531 else: → 532 result = self.forward(*input, **kwargs) 533 for hook in self._forward_hooks.values(): 534 hook_result = hook(self, input, result) /scratch/adomakor412/conda/envs/MyEnv/lib/python3.6/site-packages/torchvision/models/resnet.py in forward(self, x) 214 215 def forward(self, x): → 216 return self._forward_impl(x) 217 218 /scratch/adomakor412/conda/envs/MyEnv/lib/python3.6/site-packages/torchvision/models/resnet.py in _forward_impl(self, x) 197 def _forward_impl(self, x): 198 # See note [TorchScript super()] → 199 x = self.conv1(x) 200 x = self.bn1(x) 201 x = self.relu(x) /scratch/adomakor412/conda/envs/MyEnv/lib/python3.6/site-packages/torch/nn/modules/module.py in call(self, *input, **kwargs) 530 result = self._slow_forward(*input, **kwargs) 531 else: → 532 result = self.forward(*input, **kwargs) 533 for hook in self._forward_hooks.values(): 534 hook_result = hook(self, input, result) /scratch/adomakor412/conda/envs/MyEnv/lib/python3.6/site-packages/torch/nn/modules/conv.py in forward(self, input) 343 344 def forward(self, input): → 345 return self.conv2d_forward(input, self.weight) 346 347 class Conv3d(_ConvNd): /scratch/adomakor412/conda/envs/MyEnv/lib/python3.6/site-packages/torch/nn/modules/conv.py in conv2d_forward(self, input, weight) 340 _pair(0), self.dilation, self.groups) 341 return F.conv2d(input, weight, self.bias, self.stride, → 342 self.padding, self.dilation, self.groups) 343 344 def forward(self, input): RuntimeError: Expected 4-dimensional input for 4-dimensional weight 64 3 7 7, but got 1-dimensional input of size [1455] instead
st33027
Well, even though I had the right input dimensions, I also needed to set the last value of nn.Linear(..., last) to match the number of classes:

for param in model.parameters():
    param.requires_grad = False

model.fc = nn.Sequential(nn.Linear(2048, 512),
                         nn.ReLU(),
                         nn.Dropout(0.2),
                         nn.Linear(512, 2),
                         nn.LogSoftmax(dim=1))
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.fc.parameters(), lr=0.003)
model.to(device)

In general:

model.fc = nn.Sequential(nn.Linear(x, y),
                         nn.ReLU(),
                         nn.Dropout(0.2),
                         nn.Linear(y, len(classes)),  # number of classes
                         nn.LogSoftmax(dim=1))
st33028
Hello Guys, i have two models cnn (without fc) and the third do the concatenate and the FC there is the model : import torch import torch.nn as nn import torch.nn.functional class block(nn.Module): def __init__( self, in_channels, intermediate_channels, identity_downsample=None, stride=1 ): super(block, self).__init__() self.expansion = 4 self.conv1 = nn.Conv2d( in_channels, intermediate_channels, kernel_size=1, stride=1, padding=0 ) self.bn1 = nn.BatchNorm2d(intermediate_channels) #Le BatchNormalization applique une transformation qui maintient la sortie moyenne proche de 0 et l'écart type de sortie proche de 1. self.conv2 = nn.Conv2d( intermediate_channels, intermediate_channels, kernel_size=3, stride=stride, padding=1, ) self.bn2 = nn.BatchNorm2d(intermediate_channels ) self.conv3 = nn.Conv2d( intermediate_channels, intermediate_channels * self.expansion, kernel_size=1, stride=1, padding=0, ) self.bn3 = nn.BatchNorm2d(intermediate_channels * self.expansion) self.relu = nn.ReLU() self.identity_downsample = identity_downsample self.stride = stride def forward(self, x): identity = x.clone() x = self.conv1(x) x = self.bn1(x) x = self.relu(x) x = self.conv2(x) x = self.bn2(x) x = self.relu(x) x = self.conv3(x) x = self.bn3(x) if self.identity_downsample is not None: identity = self.identity_downsample(identity) x += identity x = self.relu(x) return x class ResNet(nn.Module): def __init__(self, block, layers, image_channels, num_classes): super(ResNet, self).__init__() self.in_channels = 4 self.conv1 = nn.Conv2d(image_channels, 4, kernel_size=3, stride=2, padding=3) self.bn1 = nn.BatchNorm2d(4) self.relu = nn.ReLU() self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) self.softmax = nn.Softmax(dim=1) # Essentially the entire ResNet architecture are in these 4 lines below self.layer1 = self._make_layer( block, layers[0], intermediate_channels=55, stride=1 ) self.layer2 = self._make_layer( block, layers[1], intermediate_channels=64, stride=2 ) self.layer3 = self._make_layer( block, layers[2], intermediate_channels=128, stride=2 ) self.avgpool = nn.AdaptiveAvgPool2d((1, 1)) def forward(self, x): x = self.conv1(x) x = self.bn1(x) x = self.relu(x) x = self.maxpool(x) x = self.layer1(x) x = self.layer2(x) x = self.layer3(x) x = self.avgpool(x) x = x.reshape(x.shape[0], -1) return x def _make_layer(self, block, num_residual_blocks, intermediate_channels, stride): identity_downsample = None layers = [] if stride != 1 or self.in_channels != intermediate_channels * 4: identity_downsample = nn.Sequential( nn.Conv2d( self.in_channels, intermediate_channels * 4, kernel_size=1, stride=stride, ), nn.BatchNorm2d(intermediate_channels * 4), ) layers.append( block(self.in_channels, intermediate_channels, identity_downsample, stride) ) self.in_channels = intermediate_channels * 4 for i in range(num_residual_blocks - 1): layers.append(block(self.in_channels, intermediate_channels)) return nn.Sequential(*layers) def ResNet50(img_channel=4, num_classes=2): return ResNet(block, [3,4,3], img_channel, num_classes) import torch.optim as optim net1 = ResNet50(img_channel=4, num_classes=2) ###############################################################################################################" ###################################################################################################################" import torch import torch.nn as nn import torch.nn.functional class block(nn.Module): def __init__( self, in_channels, intermediate_channels, identity_downsample=None, stride=1 ): super(block, 
self).__init__() self.expansion = 4 self.conv1 = nn.Conv2d( in_channels, intermediate_channels, kernel_size=1, stride=1, padding=0 ) self.bn1 = nn.BatchNorm2d(intermediate_channels) #Le BatchNormalization applique une transformation qui maintient la sortie moyenne proche de 0 et l'écart type de sortie proche de 1. self.conv2 = nn.Conv2d( intermediate_channels, intermediate_channels, kernel_size=3, stride=stride, padding=1, ) self.bn2 = nn.BatchNorm2d(intermediate_channels ) self.conv3 = nn.Conv2d( intermediate_channels, intermediate_channels * self.expansion, kernel_size=1, stride=1, padding=0, ) self.bn3 = nn.BatchNorm2d(intermediate_channels * self.expansion) self.relu = nn.ReLU() self.identity_downsample = identity_downsample self.stride = stride def forward(self, x): identity = x.clone() x = self.conv1(x) x = self.bn1(x) x = self.relu(x) x = self.conv2(x) x = self.bn2(x) x = self.relu(x) x = self.conv3(x) x = self.bn3(x) if self.identity_downsample is not None: identity = self.identity_downsample(identity) x += identity x = self.relu(x) return x class ResNet(nn.Module): def __init__(self, block, layers, image_channels, num_classes): super(ResNet, self).__init__() self.in_channels = 40 self.conv1 = nn.Conv2d(image_channels, 40, kernel_size=3, stride=2, padding=3) self.bn1 = nn.BatchNorm2d(40) self.relu = nn.ReLU() self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) self.softmax = nn.Softmax(dim=1) # Essentially the entire ResNet architecture are in these 4 lines below self.layer1 = self._make_layer( block, layers[0], intermediate_channels=32, stride=1 ) self.layer2 = self._make_layer( block, layers[1], intermediate_channels=64, stride=2 ) self.layer3 = self._make_layer( block, layers[2], intermediate_channels=128, stride=2 ) self.avgpool = nn.AdaptiveAvgPool2d((1, 1)) def forward(self, x): x = self.conv1(x) x = self.bn1(x) x = self.relu(x) x = self.maxpool(x) x = self.layer1(x) x = self.layer2(x) x = self.layer3(x) x = self.avgpool(x) x = x.reshape(x.shape[0], -1) return x def _make_layer(self, block, num_residual_blocks, intermediate_channels, stride): identity_downsample = None layers = [] if stride != 1 or self.in_channels != intermediate_channels * 4: identity_downsample = nn.Sequential( nn.Conv2d( self.in_channels, intermediate_channels * 4, kernel_size=1, stride=stride, ), nn.BatchNorm2d(intermediate_channels * 4), ) layers.append( block(self.in_channels, intermediate_channels, identity_downsample, stride) ) self.in_channels = intermediate_channels * 4 for i in range(num_residual_blocks - 1): layers.append(block(self.in_channels, intermediate_channels)) return nn.Sequential(*layers) def ResNet50(img_channel=40, num_classes=2): return ResNet(block, [3,4,3], img_channel, num_classes) import torch.optim as optim net2 = ResNet50(img_channel=40, num_classes=2) import torch.optim as optim net2 = ResNet50(img_channel=40, num_classes=2) class Net(nn.Module): def init(self): super(Net, self).__init__() self.feature1 = net1 self.feature2 = net2 self.fc = nn.Linear(128 *4,2) def forward(self, x,y): x1= self.feature1(x) x2= self.feature2(y) x3 = torch.cat((x1,x2),1) x3 = x3.view(x3.size(0), -1) x3 = self.fc(x3) return x3 net=Net() loss_fn = nn.BCEWithLogitsLoss() import torch.optim as optim net.train() i would like to build a data loader for each model and for the third model please help me
st33029
This is my data loader for one model import torch import numpy as np import glob import scipy import scipy.io as sio from ModelResNet import * from Preper import * from torch.utils.data import Dataset, DataLoader class ImageDataset: def init(self,alldata, transform=None): #self.root_dir = root_dir #load data from mat files self.alldata=alldata alldata_olivier = A alldata_non_olivier = Y #a list of label for i in range(5027) : #number of samples 'non olive' labels.append(0) for i in range(4990) : #number of samples 'olive labels.append(1) #shuffle data using sklearn self.numdata = 10017 self.transform = transform def __len__(self): return self.numdata def __getitem__(self, idx): label=labels[idx] #newidx = self.shuffle[idx] image = self.alldata[idx] label=np.asarray(label) #transform data from numpy to torch tensor imageTensor =np.asarray(alldata)# imageTensor =torch.from_numpy(imageTensor) #plt.imshow(imageTensor[:,:,0]) labelTensor =np.asarray(labels)# torch.from_numpy(label) labelTensor =torch.from_numpy(labelTensor) #print(imageTensor) return imageTensor , labelTensor if name == ‘main’: k= ImageDataset(z) k.getitem(60) import sklearn.model_selection as model_selection X_train, X_test, y_train, y_test = model_selection.train_test_split(alldata, labels, train_size=0.8,test_size=0.2) class trainData(): def __init__(self, X_data, y_data): self.X_data = X_data self.y_data = y_data def __getitem__(self, index): return self.X_data[index], self.y_data[index] def __len__ (self): return len(self.X_data) train_data = trainData(torch.FloatTensor(X_train), torch.FloatTensor(y_train), ) class testData(): def __init__(self, X_data): self.X_data = X_data def __getitem__(self, index): return self.X_data[index] def __len__ (self): return len(self.X_data) train_loader = DataLoader(dataset=train_data, batch_size=2, shuffle=True) net.to(device) device=torch.device(“cuda:0” if torch.cuda.is_available () else “cpu”) test_data = testData(torch.FloatTensor(X_test)) EPOCHS =21 LEARNING_RATE = 0.001 def binary_acc(y_pred, y_testt): y_pred_tag = torch.round(y_pred) correct_results_sum = (y_pred_tag[:,0] == y_testt).sum().float() acc = correct_results_sum/y_testt.shape[0] acc = torch.round(acc * 100) return acc net.train() for e in range(1, EPOCHS+1): epoch_loss = 0 epoch_acc = 0 for X_batch, y_batch in train_loader: X_batch =np.asarray(X_batch) X_batch =torch.from_numpy(X_batch) optimizer.zero_grad() X_batch= X_batch.permute(0,3,2,1).float() y_batch =np.asarray(y_batch) y_batch =torch.from_numpy(y_batch) X_batch, y_batch =X_batch.to(device) , y_batch.to(device) y_pred = net(X_batch) acc = binary_acc(y_pred, y_batch) loss = loss_fn(y_pred[:,0], y_batch.float()) loss.backward() optimizer.step() epoch_loss += loss.item() epoch_acc += acc.item() print('debut training : ') print(f'Epoch {e+0:03}: | Loss: {epoch_loss/len(train_loader):.5f} | Acc: {epoch_acc/len(train_loader):.3f}')
st33030
Hi Adam,
There's a fair bit of code here. Are you experiencing any errors or anything you weren't expecting?
From a quick look through, it looks as though you have a single model (Net) that contains two separate ResNet models. So it sounds as though you may want to pass two datasets into the same model using one dataloader? If so, following something like this will be helpful. Let me know how you get on or if I've misunderstood.
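If that's the case, a rough sketch of such a dataset could look like this (the arrays x1, x2 and y are placeholders for your own data, and net / loss_fn are the combined model and loss from your post above):

import torch
from torch.utils.data import Dataset, DataLoader

class TwoInputDataset(Dataset):
    def __init__(self, data1, data2, labels):
        # data1 feeds the first ResNet branch, data2 the second one
        self.data1 = data1
        self.data2 = data2
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        return self.data1[idx], self.data2[idx], self.labels[idx]

# placeholder tensors just to make the sketch runnable
x1 = torch.randn(10, 40, 64, 64)
x2 = torch.randn(10, 40, 64, 64)
y = torch.randint(0, 2, (10,)).float()

train_loader = DataLoader(TwoInputDataset(x1, x2, y), batch_size=2, shuffle=True)

for batch1, batch2, target in train_loader:
    output = net(batch1, batch2)          # Net.forward takes two inputs
    loss = loss_fn(output[:, 0], target)  # same loss call pattern as in your training loop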
st33031
How can I compute this function in a way that handles gradients correctly?

def f(x):
    return torch.where(x > 0, x, x / (1 - x))

This issue causes an incorrect nan gradient at x == 1:

x = torch.tensor(1., requires_grad=True)
y = f(x)
print(y)
y.backward()
print(x.grad)

I tried using masked_scatter but it also doesn't work:

def f(x):
    return x.masked_scatter(x < 0, x / (1 - x))
st33032
If your problem is related to the presence of NaNs, I think you could:

use an if statement to avoid x == 1;
set a smaller value for x, for instance x = 0.98;
before returning the value, check for NaNs: create a variable function = torch.where(x > 0, x, x / (1 - x)), then use torch.nan_to_num(function, nan=value).

I don't know if these solutions are the best, but I guess they're worth trying. Let me know if you're able to solve the problem.
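A rough way to realize the first two suggestions together (keeping the value that enters the risky branch away from 1, using the 0.98 mentioned above as an example threshold) could look like this:

import torch

def f(x):
    x_safe = x.clamp(max=0.98)          # keep the value fed to the risky branch away from 1
    return torch.where(x > 0, x, x_safe / (1 - x_safe))

x = torch.tensor(1., requires_grad=True)
y = f(x)
y.backward()
print(x.grad)   # finite now, instead of nan

Since the clamp only changes the branch where x > 0.98, and that branch is never selected there, the output of f is unchanged while the gradient stays finite.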
st33033
The solution is to use the Masking() layer available in Keras with mask_value=0. Without it, the empty (all-zero padding) vectors are counted in the loss; with Masking(), as outlined by Keras, the padding vectors are skipped and not included.
st33034
This is probably a human error, but I would like to note down the accuracy of AlexNet with already trained weights and then replace the conv layers with my custom layers and note down the results again.
I can find AlexNet and pre-trained weights here [AlexNet]
The datasets are downloaded from here [AT]
Main Folder Name : imagenet2012
Sub Folder 1: ILSVRC2012_img_train
Contains different folders (n01443537, n01484850, ... 15 of them, with images inside)
Sub Folder 2: ILSVRC2012_img_val (ILSVRC2012_val_00000001.JPEG etc. ... contains all images)

import torch.nn as nn
from torch.hub import load_state_dict_from_url
import torchvision
from torchvision import transforms
from torch.utils import data
from torchvision.datasets import ImageFolder
import torch

__all__ = ['AlexNet', 'alexnet']

data_transforms = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

data_dir = "/home/Sami/Documents/imagenet2012/"

model_urls = {
    'alexnet': 'https://download.pytorch.org/models/alexnet-owt-4df8aa71.pth',
}

train_dataset = torchvision.datasets.ImageFolder(root=data_dir, transform=data_transforms, target_transform=None, is_valid_file=None)
test_dataset = torchvision.datasets.ImageFolder(root=data_dir, transform=data_transforms, is_valid_file=None)

test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=100, shuffle=True)

test_ds = torch.utils.data.Subset(test_loader.dataset, range(0, 50))
test_loader_1 = torch.utils.data.DataLoader(dataset=test_ds, batch_size=100, shuffle=True)

class AlexNet(nn.Module):

    def __init__(self, num_classes=1000):
        super(AlexNet, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(192, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.avgpool = nn.AdaptiveAvgPool2d((6, 6))
        self.classifier = nn.Sequential(
            nn.Dropout(),
            nn.Linear(256 * 6 * 6, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(4096, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x):
        x = self.features(x)
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.classifier(x)
        return x

model = AlexNet()
state_dict = load_state_dict_from_url(model_urls['alexnet'], progress=True)
model.load_state_dict(state_dict)
print(model)

with torch.no_grad():
    correct = 0
    total = 0
    for images, labels in test_loader_1:
        out = model(images)
        _, predicted = torch.max(out.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

    print('Accuracy of the network on the 50 test images: {} %'.format(100 * correct / total))

But the accuracy stays at 0.0, or something close to that.
st33035
Solved by ptrblck in post #6
st33036
Could you please check if correct and total are stored as float values or as int? In the latter case, your accuracy might get rounded down to zero.
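For instance, an explicit cast rules out any integer rounding (the counts here are just placeholder values):

correct = 1
total = 50
accuracy = 100.0 * float(correct) / float(total)
print('Accuracy of the network on the 50 test images: {:.2f} %'.format(accuracy))  # 2.00 %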
st33037
You are right, it was int. After changing it to float I get 2.0 % on 50 images. Is this alright?
st33038
2% accuracy on 50 images would be a single correct sample. It depends what you define as “alright”, but I would say it’s a bit low.
st33039
I am not sure, but I was thinking maybe around 56-57% would be alright out of 1000 images (Comparisons or Accuracy).
The ILSVRC2012_img_train/ folder has only 15 sub-folders in it. Maybe my data loaders have some problems in them?
The directory structure is as follows

imagenet2012/
   ILSVRC2012_img_train/
       n01443537/
           n01443537_2.JPEG
           n01443537_16.JPEG
           .
           .
       n01484850/
           n01484850_17.JPEG
           n01484850_76.JPEG
           .
           .
       .
       .
   ILSVRC2012_img_val
       ILSVRC2012_val_00000001.JPEG
       ILSVRC2012_val_00000002.JPEG
       .
       .
       .
st33040
Based on your code it looks like you are using ImageFolder on your validation directory, which seems to contain only the images without any subfolders.
ImageFolder creates the targets based on subfolders, so your current Datasets might contain only a single class label. Could you check that?
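In other words, ImageFolder expects one subfolder per class directly under the root you pass in, and printing dataset.classes is a quick way to verify what it picked up. A small sketch (the paths and file names below are just examples based on your description):

from torchvision import datasets, transforms

# expected layout: class subfolders directly under the root
# /home/Sami/Documents/imagenet2012/ILSVRC2012_img_val/
#     n01443537/
#         ILSVRC2012_val_00000001.JPEG
#     n01484850/
#         ILSVRC2012_val_00000293.JPEG
#     ...

val_dataset = datasets.ImageFolder(
    root="/home/Sami/Documents/imagenet2012/ILSVRC2012_img_val",
    transform=transforms.ToTensor(),
)
print(val_dataset.classes)       # should list the synset folders, not 'ILSVRC2012_img_train' / '..._val'
print(len(val_dataset.classes))  # should equal the number of classes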
st33041
Yes, you are right. That seems to be the problem. Running the commands

print(test_dataset.classes)
print(test_dataset.class_to_idx)

yields

['ILSVRC2012_img_train', 'ILSVRC2012_img_val']
{'ILSVRC2012_img_train': 0, 'ILSVRC2012_img_val': 1}

Do I need to have some official dataset? As I cannot seem to download it when I use this one:

torchvision.datasets.ImageNet(root, split='train', download=True, **kwargs)
st33042
I think the download URL used in datasets.ImageNet was disabled (as described here), so you would need to register on their site and download the data separately.
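For reference, once the archives are downloaded manually and placed in the root folder, something along these lines should work, depending on your torchvision version (the file names below are the standard ILSVRC2012 archive names; adjust the root path to your setup):

from torchvision import datasets

# root contains the manually downloaded archives, e.g.
#   ILSVRC2012_img_val.tar
#   ILSVRC2012_devkit_t12.tar.gz
val_dataset = datasets.ImageNet(root="/home/Sami/Documents/imagenet2012", split="val")
print(len(val_dataset.classes))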
st33043
Thank you very much. I have already applied to register on their site. I will update this post as soon as I move ahead. In the meantime I will try this dataset [kaggle] and let you know. Thanks a ton, really.

Update: The above Kaggle dataset works fine. I have used this shell script to put the images into folders for "val" so that PyTorch's dataloader can easily identify the labels. For the test data folder, a similar script may be needed to put the images into folders.
Obtained accuracy on the val set: 70.1% (1000 images)
st33044
Hi,
Can you please share more details on the structure of the "test data folder" and the test dataloader for the ImageNet dataset? Thanks.
st33045
Hi, are you asking about this one?

__all__ = ['AlexNet', 'alexnet']

data_transforms = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

val_dirr = "~/Alexnet/val"

model_urls = {
    'alexnet': 'https://download.pytorch.org/models/alexnet-owt-4df8aa71.pth',
}

#train_dataset = torchvision.datasets.ImageFolder(root=..., transform=data_transforms, target_transform=None, is_valid_file=None)
test_dataset = torchvision.datasets.ImageFolder(root=val_dirr, transform=data_transforms, is_valid_file=None)

test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=100, shuffle=True)

# If making subsets
#test_ds = torch.utils.data.Subset(test_loader.dataset, range(0,2))
#test_loader_1 = torch.utils.data.DataLoader(dataset=test_ds, batch_size=1, shuffle=True)
st33046
I’m trying to train IAM Dataset on TPSSpatialTransformerNetwork but finally I got an error: shape ‘[-1, 2, 4, 28]’ is invalid for input of size 768 Each image in the dataset has the size of (32,128). I can not figure out the shape it got in the error step. And here is the code: class TPS_SpatialTransformerNetwork(nn.Module): def __init__(self): super(TPS_SpatialTransformerNetwork, self).__init__() self.conv1 = nn.Conv2d(1, 79, kernel_size=5) self.conv2 = nn.Conv2d(79, 256, kernel_size=5) self.conv2_drop = nn.Dropout2d() self.fc1 = nn.Linear(256, 512) self.fc2 = nn.Linear(512, 79) # Spatial transformer localization-network self.localization = nn.Sequential( nn.Conv2d(1, 8, kernel_size=7), nn.MaxPool2d(2, stride=2), nn.ReLU(True), nn.Conv2d(8, 79, kernel_size=5), nn.MaxPool2d(2, stride=2), nn.ReLU(True) ) # Regressor for the 3 * 2 affine matrix self.fc_loc = nn.Sequential( nn.Linear(79 * 4 * 28, 32), nn.ReLU(True), nn.Linear(32, 3 * 2) ) # Initialize the weights/bias with identity transformation self.fc_loc[2].weight.data.zero_() self.fc_loc[2].bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float)) # Spatial transformer network forward function def stn(self, x): xs = self.localization(x) xs = xs.view(-1, 79 * 4 * 28) theta = self.fc_loc(xs) theta = theta.view(-1, 2, 4,28) grid = F.affine_grid(theta, x.size()) x = F.grid_sample(x, grid) return x def forward(self, x): # transform the input x = self.stn(x) # Perform the usual forward pass x = F.relu(F.max_pool2d(self.conv1(x), 2)) x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2)) x = x.view(-1, 320) x = F.relu(self.fc1(x)) x = F.dropout(x, training=self.training) x = self.fc2(x) return F.log_softmax(x, dim=1) 4 frames /content/drive/My Drive/OCR/transformation.py in stn(self, x) 41 xs = xs.view(-1, 79 * 4 * 28) 42 theta = self.fc_loc(xs) ---> 43 theta = theta.view(-1, 2, 4,28) 44 45 RuntimeError: shape '[-1, 2, 4, 28]' is invalid for input of size 768
st33047
I have trained a Faster RCNN model for detecting fish underwater. I am currently using a random selection of frames from a 1hr video and training in batches. I have a few questions that might help improve the accuracy of the model: Should I be training on a 1hr video without shuffling the data? What’s the best way to go about this? Is there a way to inform the training of previous frames, so the model has some understanding of how fish move from frame to frame? This would help track fish between predicted frames, and reduce the number of false positives (eg. rocks, seaweed, misc. objects). Should I be considering alternatives to Faster RCNN to achieve better results? (eg. Mask RCNN, Instance segmentation, YOLO/DeepSort) Cheers!
st33048
# custom initialization of weights
self.hidden_layer_1.weight = torch.nn.Parameter(torch.from_numpy(weights))
a = self.hidden_layer_1.weight

num_epochs = 10
for epoch in range(num_epochs):
    print('Epoch {}'.format(epoch))
    torch.autograd.set_detect_anomaly(True)
    epoch_loss_train = []
    epoch_loss_validation = []

    for i, (x, y) in enumerate(zip(feature_trainloader, label_trainloader), 0):
        optimizer.zero_grad()
        output = self.forward(x)
        target = y
        loss = self.CrossEntropyLoss(output, target)
        epoch_loss_train.append(loss.item())

        y_true_train = torch.cat((y_true_train, target))
        _, pred_class = torch.max(output, dim=1)
        y_pred_train = torch.cat((y_pred_train, pred_class))

        loss.backward()
        optimizer.step()

b = self.hidden_layer_1.weight
print(torch.equal(a, b))  # True

I don't understand how the weights are the same after 10 epochs even though the loss is changing.
st33049
Could you share a colab notebook which can reproduce the issue? A minimal implementation would work.
st33050
As a sanity check, can you actually save (e.g., with something like .detach().clone()) or print the values before and after to compare them? I have a suspicion that a and b are the same reference, so they are both updated as the model is being trained.
st33051
Hi Jpj!

jpj:

# custom initialization of weights
self.hidden_layer_1.weight = torch.nn.Parameter(torch.from_numpy(weights))

At this point, you (presumably) have already created your optimizer, so your optimizer contains a collection of Parameters that it will be updating.
Because you have overwritten weight with a new Parameter (rather than having overwritten weight.data with a new tensor), and done so after creating your optimizer, your optimizer does not contain the new Parameter that is used (via the forward pass) to calculate the loss. (The Parameter that is modified by the optimizer, or would be if it had a non-trivial gradient, is the old Parameter that you have, in a sense, "hidden".)
So self.hidden_layer_1.weight doesn't change, because it is not in the optimizer's list of Parameters to update.

jpj:

I don't understand how weights are the same after 10 epochs even though the loss is changing

Why the loss is changing even though self.hidden_layer_1.weight is not: I will assume that self.hidden_layer_1 is a Linear. A Linear has both a weight and a bias. You haven't overwritten bias, so it is still updated by your optimizer, and still affects the loss you calculate.

Best.

K. Frank
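A small standalone illustration of that effect (the Linear layer, sizes, and SGD optimizer here are just placeholders):

import torch
import torch.nn as nn

layer = nn.Linear(4, 2)
opt = torch.optim.SGD(layer.parameters(), lr=0.1)   # optimizer captures the *current* Parameters

# replace the weight Parameter after the optimizer was created
layer.weight = nn.Parameter(torch.zeros(2, 4))
bias_before = layer.bias.detach().clone()

loss = layer(torch.randn(8, 4)).sum()
loss.backward()
opt.step()

print(torch.equal(layer.weight, torch.zeros(2, 4)))  # True: the new weight is unknown to the optimizer
print(torch.equal(layer.bias, bias_before))          # False: the bias is still updated, so the loss keeps changing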
st33052
Note that even after fixing the optimizer issue, the weights should not be compared via references. For example, this code snippet will report the "saved" weights as unchanged because both references refer to the same underlying tensor. Copying the weights before training will produce the expected results:

import torch

a = torch.nn.Linear(128, 1)
b = torch.nn.Sigmoid()
optimizer = torch.optim.SGD(a.parameters(), 1e-3)
criterion = torch.nn.BCELoss()

saved = a.weight
detached = a.weight.detach().clone()

for i in range(0, 512):
    inp = torch.randn((1, 128))
    label = float(torch.sum(inp[:, 64:]) > 0)
    label = torch.tensor([[label]])
    output = b(a(inp))
    loss = criterion(output, label)
    print(loss)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(torch.allclose(saved, a.weight))
print(saved == a.weight)
print(torch.allclose(detached, a.weight))
print(detached == a.weight)
st33053
Thanks for the reply. I do create my optimizer before this statement, you are correct. So if I understand correctly, self.hidden_layer_1.weight.data = my_custom_weights should fix it?
st33054
Hi Jpj! jpj: self.hidden_layer_1.weight.data = my_custom_weights should fix it? I think that will fix it. My preferred idiom (not that I know what I’m doing) would be: with torch.no_grad(): self.hidden_layer_1.weight.data.copy_(my_custom_weights) Best. K. Frank
st33055
Hi all,
I came across this post and I'm facing a similar issue with PyTorch where I don't know what would be the best approach:

Machine Learning Mastery – 14 May 17: How to use Different Batch Sizes when Training and Predicting with LSTMs

I am working on a network that is supposed to run in real time on the CPU during inference, taking a noisy audio chunk of 512 samples as input to produce a clean version of the same audio chunk of 512 samples as output. The network I am using involves LSTM layers that, according to the documentation, require a known batch size during training with input dimensions (seq_len, batch_size, input_size), which in my case would be (1, 1, 512):
https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html

I would ideally like to train the network on batches bigger than 1 (e.g. batch_size=32) but use the model during inference with batch_size=1, since it will be running in real time taking single input frames of 512 samples. Also, the model has to be stateful and, as far as I understand, this can be implemented by detaching the output hidden and cell states in order to feed them back in as the initial hidden and cell states of the next batch, is that correct?

Along with this one, my other questions would be:
1. Is it possible in PyTorch to train an LSTM with batch_size=32 and then use the same saved model to do inference in single steps with batch_size=1? (i.e. train on (1, 32, 512) and run inference on (1, 1, 512), assuming batch_first=False) If so, how? And may the predictions be affected by choosing a different batch_size during inference?
2. Would it be okay to just do training and inference with batch_size=1 and use the model like that, or would that be expected to be less time efficient than (1) during training?

Overall, my main concern is to obtain the least CPU-intensive model during inference.
Sorry if some questions may have an easy answer, but I couldn't find much information about similar scenarios. Any guidelines are highly appreciated.
Esteban
st33056
You can train your model with a batch size of, say, 32 and make individual inferences (i.e., batch size of 1). Why should this cause any problem? The only thing you have to consider is that you have to initialize the initial hidden state with the correct batch size: 32 for training, and 1 for inferencing.
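As a rough sketch of what that looks like in practice (the sizes are just examples based on the 512-sample frames mentioned above):

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=512, hidden_size=256)   # batch_first=False: (seq_len, batch, input_size)

def init_hidden(batch_size):
    # (num_layers, batch, hidden_size)
    return torch.zeros(1, batch_size, 256), torch.zeros(1, batch_size, 256)

# training step with batch_size = 32
x_train = torch.randn(1, 32, 512)
out, (h, c) = lstm(x_train, init_hidden(32))

# inference step with batch_size = 1, reusing the same (trained) weights
x_infer = torch.randn(1, 1, 512)
out, (h, c) = lstm(x_infer, init_hidden(1))

# stateful inference: feed the detached states back in for the next frame
h, c = h.detach(), c.detach()
next_frame = torch.randn(1, 1, 512)
out, (h, c) = lstm(next_frame, (h, c))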
st33057
Hi @vdw and thanks for your help! What would happen in such a case if I were to make my initial hidden states learnable, as suggested here? I am confused in the sense that now a whole bunch of time steps during training are in the same batch. I assume I should just copy the hidden state values to the new instance, but I am not sure whether the ones to be copied in such a case should be the topmost (i.e. [:, 0, :]) or the bottommost (i.e. [:, batch_size - 1, :]).
st33058
If the sequences within and between batches are independent, then it shouldn't really matter which one of the 32 hidden states you pick, be it the topmost, bottommost, or any other one. Arguably, if you train for a large number of epochs with enough shuffling, the 32 hidden states should end up being fairly similar. Or you simply make 32 inferences for the same input sequence, one with each of the 32 available hidden states, and see if it affects the result in any way :).
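For instance, if the initial states were learned with a batch dimension of 32, slicing out a single one is enough for batch-1 inference; a minimal sketch (sizes are again just examples):

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=512, hidden_size=256)

# learned initial states, trained with batch_size = 32
h0_learned = nn.Parameter(torch.zeros(1, 32, 256))
c0_learned = nn.Parameter(torch.zeros(1, 32, 256))

# at inference time with batch_size = 1, keep just one of the 32 learned states
h0 = h0_learned[:, 0:1, :].contiguous().detach()
c0 = c0_learned[:, 0:1, :].contiguous().detach()

x = torch.randn(1, 1, 512)   # (seq_len, batch, input_size)
out, (h, c) = lstm(x, (h0, c0))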
st33059
Thanks again @vdw! What could be a sound approach in the case of sequences that are dependent within a batch but not between batches? (Basically a single audio file chopped into smaller sequential frames per batch.)
st33060
I have created a neural network that is for some reason running extremely slowly (especially the backward part, which takes ~40x as long as the forward pass), so I decided to try using the profiler on it.
I'm currently using it like this, which I have basically taken straight from the profiler documentation:

with profiler.profile(record_shapes=True) as prof:
    with profiler.record_function("model_inference"):
        node_vec = model(input=coords_init,xn_attr=node_attr)
        print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))

However, when I use this I get the following error:

Traceback (most recent call last):
  File "/home/tue/PycharmProjects/GraphNetworks/src/protein_utils.py", line 41, in use_proteinmodel
    print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
  File "/home/tue/PycharmProjects/GraphNetworks/e3nn_cpu/lib/python3.8/site-packages/torch/autograd/profiler.py", line 552, in key_averages
    self._check_finish()
  File "/home/tue/PycharmProjects/GraphNetworks/e3nn_cpu/lib/python3.8/site-packages/torch/autograd/profiler.py", line 525, in _check_finish
    raise RuntimeError("can't export a trace that didn't finish running")
RuntimeError: can't export a trace that didn't finish running

The model runs fine (but a bit slow) when I don't use the profiler on it. The error seems to suggest that the model hasn't finished running, which I don't really understand, and unfortunately I haven't been able to find anyone else encountering this error.
I'm currently running this on my laptop using the CPU, with the following:

(e3nn_cpu) tue@tue-laptop:~/PycharmProjects/GraphNetworks$ pip list
Package               Version
--------------------- ---------
ase                   3.21.1
certifi               2020.12.5
chardet               4.0.0
cycler                0.10.0
decorator             4.4.2
e3nn                  0.2.7
googledrivedownloader 0.4
h5py                  3.2.1
idna                  2.10
isodate               0.6.0
Jinja2                2.11.3
joblib                1.0.1
kiwisolver            1.3.1
llvmlite              0.36.0
MarkupSafe            1.1.1
matplotlib            3.4.1
mpmath                1.2.1
networkx              2.5.1
numba                 0.53.1
numpy                 1.20.2
pandas                1.2.4
Pillow                8.2.0
pip                   21.0.1
pyparsing             2.4.7
python-dateutil       2.8.1
python-louvain        0.15
pytz                  2021.1
rdflib                5.0.0
requests              2.25.1
scikit-learn          0.24.1
scipy                 1.6.2
setuptools            56.0.0
six                   1.15.0
sympy                 1.8
threadpoolctl         2.1.0
torch                 1.8.1+cpu
torch-cluster         1.5.9
torch-geometric       1.7.0
torch-scatter         2.0.6
torch-sparse          0.6.9
torch-spline-conv     1.2.1
torchaudio            0.8.1
torchvision           0.9.1+cpu
tqdm                  4.60.0
typing-extensions     3.7.4.3
urllib3               1.26.4
st33061
Solved by tueboesen in post #2
st33062
I figured out the issue: the print statement needs to be outside both with statements, like this:

with profiler.profile(record_shapes=True) as prof:
    with profiler.record_function("model_inference"):
        node_vec = model(input=coords_init,xn_attr=node_attr)
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))

It would be nice if this were included in the documentation/tutorial.
st33063
I am trying to down sample a .mp4 file for a vision project. t = torchvision.io.read_video(r"path.mp4") video_tensor= t[0] # video_tensor.shape ---> [853, 1080, 1920, 3] video_tensor= video_tensor.permute(0,3,1,2) video_tensor= torch.reshape(video_tensor, (video_tensor.shape[0], video_tensor.shape[1], -1)) video_tensor= torch.nn.functional.interpolate(video_tensor.float(), scale_factor=0.3) I get this error: RuntimeError: [enforce fail at ..\c10\core\CPUAllocator.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 22071398400 bytes. Buy new RAM! Is this a viable way to try to downscale video, or are there other functions I should be using? Cheers!
st33064
Hello,
It is known that if there are ambiguities in a training dataset then a neural network will struggle to learn. Is there a way of checking whether a dataset is ambiguous before attempting to train a neural network, especially if it is a regression problem?
st33065
Solved by ariG23498 in post #2
st33066
You can always add a check in the __getitem__ of your dataset, to check for ambiguities.
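A very rough sketch of what such a check could look like for a regression dataset; here "ambiguous" is taken to mean identical inputs paired with conflicting targets, and the tolerances are arbitrary placeholders:

import torch
from torch.utils.data import Dataset

class CheckedRegressionDataset(Dataset):
    def __init__(self, inputs, targets, tol=1e-6):
        self.inputs, self.targets, self.tol = inputs, targets, tol

    def __len__(self):
        return len(self.inputs)

    def __getitem__(self, idx):
        x, y = self.inputs[idx], self.targets[idx]
        # flag samples whose input also appears elsewhere with a very different target
        # (O(N) per item, so only meant as a sanity check on small datasets)
        same_input = (self.inputs - x).abs().sum(dim=1) < self.tol
        if (self.targets[same_input] - y).abs().max() > 1e-3:
            print(f'warning: sample {idx} has duplicates with conflicting targets')
        return x, y

inputs = torch.tensor([[0.0, 1.0], [0.0, 1.0], [2.0, 3.0]])
targets = torch.tensor([0.5, 1.5, 2.0])
ds = CheckedRegressionDataset(inputs, targets)
_ = ds[0]   # prints the warning for the conflicting pair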
st33067
Hi, I have an application where I am running a set of models (~10) on variable-batch-sized input. So I'm using dynamic parallelism, since each of these models can be run in parallel. Each model's batch is around 500 items. But after getting it running, I actually see a significant slowdown compared to running them sequentially (just iterating over them). Is this expected, in the sense that the overhead for each process takes longer than the actual execution at this small data size?