st182100
I would like to train an ensemble of networks with the same architecture. Currently, I'm defining it this way:

    def make_net(network_specs):
        # return nn.Module
        ...

    class Ensemble(nn.Module):
        def __init__(self, network_specs, ensemble_size):
            super().__init__()
            self.ensemble_size = ensemble_size
            self.model = nn.ModuleList([make_net(network_specs) for _ in range(ensemble_size)])

        def forward(self, x):
            return torch.stack([self.model[i](x[i]) for i in range(self.ensemble_size)])

However, the backward pass through the stack operator doesn't seem to be parallelized (on the same GPU). The GPU utilization is very low, around 15%. Thinking that the dynamic graph might be the cause (similar thread here), I recently tried using @torch.jit. The performance still doesn't improve. What am I doing wrong here? How can I improve the performance of my ensemble model? Thanks.
st182101
Hi, All the operations that run on the GPU are asynchronous. So if the GPU usage is very low, it's most likely because your networks are not big enough to use all of the GPU.
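Because of this asynchrony, timing or utilization numbers are only meaningful if you synchronize before reading the clock. A minimal timing sketch (the helper name and iteration counts are illustrative, not part of the thread):

```python
import time
import torch

def time_gpu(fn, inp, n_iters=50):
    # warm-up so one-time CUDA initialization is not included in the measurement
    for _ in range(5):
        fn(inp)
    torch.cuda.synchronize()          # wait for all previously queued kernels
    start = time.perf_counter()
    for _ in range(n_iters):
        fn(inp)
    torch.cuda.synchronize()          # make sure the timed kernels actually finished
    return (time.perf_counter() - start) / n_iters
```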
st182102
If your network is big enough, you can try making bigger batches, and check CPU and disk usage: maybe you are doing some data manipulation inside the training loop and that is the bottleneck. If you don't move a big enough piece of data onto the GPU with .cuda() before training, that can also be a bottleneck, and in that case CPU usage should be pretty high.
st182103
Hi, If the network is small, we would expect the GPU utilization to get higher when we increase the ensemble size. However, the GPU utilization doesn't increase when we scale the ensemble size. The total run time increases almost linearly with respect to the ensemble size. Note that this is a reinforcement learning task (on simple environments), so data processing/transfer is not a bottleneck. Here is the visualization of my network when the ensemble size is 4 (attached figure: bootstrap_size=4.png).

And here's our profiling result for different ensemble sizes (time_forward and time_backward are each accumulated over 50 runs; for both backward and forward, time_B128/time_B4 ~ 28-30):

Bootstrap_size: 4
-----------------------------------------
time_backward 0.07882976531982422  mean_time_backward 0.0015765953063964844  time_forward 0.05576205253601074  mean_time_forward 0.0011152410507202148
time_backward 0.07231521606445312  mean_time_backward 0.0014463043212890624  time_forward 0.05587363243103027  mean_time_forward 0.0011174726486206056
time_backward 0.07005977630615234  mean_time_backward 0.0014011955261230469  time_forward 0.05555391311645508  mean_time_forward 0.0011110782623291015
time_backward 0.07131695747375488  mean_time_backward 0.0014263391494750977  time_forward 0.055143117904663086  mean_time_forward 0.0011028623580932617
time_backward 0.06970882415771484  mean_time_backward 0.001394176483154297  time_forward 0.05509185791015625  mean_time_forward 0.001101837158203125
time_backward 0.0810239315032959  mean_time_backward 0.001620478630065918  time_forward 0.05518746376037598  mean_time_forward 0.0011037492752075195
time_backward 0.07718276977539062  mean_time_backward 0.0015436553955078126  time_forward 0.05403590202331543  mean_time_forward 0.0010807180404663085

Bootstrap_size: 32
-----------------------------------------
time_backward 0.48969507217407227  mean_time_backward 0.009793901443481445  time_forward 0.4311997890472412  mean_time_forward 0.008623995780944825
time_backward 0.4772953987121582  mean_time_backward 0.009545907974243165  time_forward 0.516700029373169  mean_time_forward 0.01033400058746338
time_backward 0.4743640422821045  mean_time_backward 0.00948728084564209  time_forward 0.5470066070556641  mean_time_forward 0.01094013214111328
time_backward 0.5156633853912354  mean_time_backward 0.010313267707824708  time_forward 0.5515599250793457  mean_time_forward 0.011031198501586913
time_backward 0.48656153678894043  mean_time_backward 0.009731230735778808  time_forward 0.5587642192840576  mean_time_forward 0.011175284385681153
time_backward 0.48267650604248047  mean_time_backward 0.009653530120849609  time_forward 0.549140214920044  mean_time_forward 0.01098280429840088
time_backward 0.493422269821167  mean_time_backward 0.00986844539642334  time_forward 0.546377420425415  mean_time_forward 0.0109275484085083

Bootstrap_size: 128
-----------------------------------------
time_backward 2.0336191654205322  mean_time_backward 0.040672383308410644  time_forward 2.0258209705352783  mean_time_forward 0.040516419410705565
time_backward 2.0157926082611084  mean_time_backward 0.04031585216522217  time_forward 1.716789960861206  mean_time_forward 0.03433579921722412
time_backward 1.9942104816436768  mean_time_backward 0.039884209632873535  time_forward 1.6753108501434326  mean_time_forward 0.033506217002868655
time_backward 2.0784974098205566  mean_time_backward 0.04156994819641113  time_forward 1.6769888401031494  mean_time_forward 0.033539776802062986
time_backward 1.9966001510620117  mean_time_backward 0.03993200302124023  time_forward 1.6629443168640137  mean_time_forward 0.033258886337280275
time_backward 1.9680683612823486  mean_time_backward 0.039361367225646975  time_forward 1.679962158203125  mean_time_forward 0.0335992431640625
time_backward 2.00929856300354  mean_time_backward 0.0401859712600708  time_forward 1.664689302444458  mean_time_forward 0.03329378604888916
st182104
Hi, The thing with GPUs is that they are very good at doing one very parallel task, but not at doing many small parallel tasks. You can use the NVIDIA Visual Profiler (nvvp) if you want to look in more detail at how your code runs on the GPU. But you're most certainly going to see "low core usage" if you have small tasks.
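One way to turn the ensemble into a single large task is to fuse the members into batched operations instead of looping over separate modules. A minimal sketch for ensemble members that are plain linear layers (shapes and names are illustrative, not the poster's actual network_specs):

```python
import torch
import torch.nn as nn

class BatchedLinear(nn.Module):
    """Applies ensemble_size independent linear layers with one baddbmm call."""
    def __init__(self, ensemble_size, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(ensemble_size, in_features, out_features))
        self.bias = nn.Parameter(torch.zeros(ensemble_size, 1, out_features))

    def forward(self, x):                    # x: [ensemble_size, batch, in_features]
        return torch.baddbmm(self.bias, x, self.weight)

ens = BatchedLinear(ensemble_size=32, in_features=64, out_features=64).cuda()
x = torch.randn(32, 128, 64, device="cuda")
ens(x).sum().backward()                      # one batched kernel instead of 32 small ones
```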
st182105
Ubuntu 16.04 LTS, torch version 1.0.1.post2 (CPU version), Python 2.7.15 (Anaconda). Here is my code:

    import torch
    import torchvision
    import numpy as np
    from importlib import import_module
    from torch.nn import DataParallel

    model = import_module('net_detector')
    config1, nod_net, loss, get_pbb = model.get_model()
    checkpoint = torch.load('130.ckpt')
    model.load_state_dict(checkpoint['state_dict'])
    example = np.load('./236350_clean.npy')
    example = torch.from_numpy(example)
    traced_script_module = torch.jit.trace(model, example)

I got the following error:

    Traceback (most recent call last):
      File "convert2pt.py", line 18, in <module>
        traced_script_module = torch.jit.trace(nod_net, example)
      File "/home/qrf/anaconda2/lib/python2.7/site-packages/torch/jit/__init__.py", line 636, in trace
        var_lookup_fn, _force_outplace)
      File "/home/qrf/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 487, in __call__
        result = self._slow_forward(*input, **kwargs)
      File "/home/qrf/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 477, in _slow_forward
        result = self.forward(*input, **kwargs)
    TypeError: forward() takes exactly 3 arguments (2 given)
st182106
Do you have a version of the model that is compatible with the current version of PyTorch? We do not guarantee that tracing works on models written for PyTorch 0.3
st182107
Yes, thank you. Maybe I should retrain my model using the stable version to figure out if it's a "version" thing.
st182108
I saw that the JIT supports differentiation for only some operators (about 80), but the number of ATen operators (about 700) is much larger than what is currently supported. When do you plan to support differentiation for all operators?

github.com pytorch/pytorch/blob/master/torch/csrc/jit/autodiff.cpp#L30-L146

    bool isDifferentiable(Node* n) {
      // TODO: scalar-tensor ops should be canonicalized
      static OperatorSet differentiable_ops = {
          "aten::add(Tensor self, Tensor other, *, Scalar alpha) -> Tensor",
          "aten::add(Tensor self, Scalar other, Scalar alpha) -> Tensor",
          "aten::sub(Tensor self, Tensor other, *, Scalar alpha) -> Tensor",
          "aten::sub(Tensor self, Scalar other, Scalar alpha) -> Tensor",
          "aten::mul(Tensor self, Tensor other) -> Tensor",
          "aten::mul(Tensor self, Scalar other) -> Tensor",
          "aten::div(Tensor self, Tensor other) -> Tensor",
          "aten::div(Tensor self, Scalar other) -> Tensor",
          "aten::max(Tensor self, Tensor other) -> Tensor",
          "aten::min(Tensor self, Tensor other) -> Tensor",
          "aten::sigmoid(Tensor self) -> Tensor",
          "aten::tanh(Tensor self) -> Tensor",
          "aten::relu(Tensor self) -> Tensor",
          "aten::threshold(Tensor self, Scalar threshold, Scalar value) -> Tensor",
          "aten::erf(Tensor self) -> Tensor",
          "aten::erfc(Tensor self) -> Tensor",
          "aten::exp(Tensor self) -> Tensor",

(file excerpt truncated)
st182109
We plan to increase the number of symbolic derivatives that we support, but we don't have a specific timeline for when we'll reach full coverage. In the meantime, autograd still works for JIT modules, so as long as you set requires_grad on your input tensors you will be able to call backward() like in Python.
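For example (a minimal sketch), a scripted function still differentiates through regular autograd as long as the inputs require gradients:

```python
import torch

@torch.jit.script
def f(x):
    return torch.erf(x).sum()

x = torch.randn(4, requires_grad=True)
f(x).backward()        # falls back to plain autograd if no symbolic derivative exists
print(x.grad)
```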
st182110
Hi everyone, I'm trying to convert a multilabel classification model (PyTorch 1.0) to Torch Script via annotation. Here is my model:

    class MultiLabelModel(nn.Module):
        def __init__(self, basemodel, basemodel_output, num_classes):
            super(MultiLabelModel, self).__init__()
            self.basemodel = basemodel
            self.num_classes = num_classes
            for index, num_class in enumerate(num_classes):
                setattr(self, "FullyConnectedLayer_" + str(index), nn.Linear(basemodel_output, num_class))

        def forward(self, x):
            x = self.basemodel.forward(x)
            outs = list()
            dir(self)
            for index, num_class in enumerate(self.num_classes):
                fun = eval("self.FullyConnectedLayer_" + str(index))
                out = fun(x)
                outs.append(out)
            return outs

The basemodel is a predefined backbone model. I tried this:

    class MultiLabelModel(torch.jit.ScriptModule):
        __constants__ = ['num_classes']
        __constants__ = ['basemodel']
        __constants__ = ['basemodel_output']

        def __init__(self, basemodel, basemodel_output, num_classes):
            super(MultiLabelModel, self).__init__()
            self.basemodel = basemodel
            self.num_classes = num_classes
            self.basemodel_output = basemodel_output
            for index, num_class in enumerate(num_classes):
                setattr(self, "FullyConnectedLayer_" + str(index), nn.Linear(basemodel_output, num_class))

        @torch.jit.script_method
        def forward(self, x):
            x = self.basemodel.forward(x)
            outs = list()
            for index, num_class in enumerate(self.num_classes):
                fun = eval("self.FullyConnectedLayer_" + str(index))
                out = fun(x)
                outs.append(out)
            return outs

and I get the error:

    expected a value of type Tensor for argument '0' but found int

It seems like I can't define the multilabel classifier like this:

    for index, num_class in enumerate(num_classes):
        setattr(self, "FullyConnectedLayer_" + str(index), nn.Linear(basemodel_output, num_class))

Hope somebody can help me out.
st182111
Complete example:

    import torch
    import torchvision
    import torch.nn as nn

    MyClassNum = 10

    class FeatureExtraction(torch.nn.Module):
        def __init__(self):
            super(FeatureExtraction, self).__init__()
            self.resnet = torchvision.models.resnet18(pretrained=True)
            self.resnet = nn.Sequential(*list(self.resnet.children())[:-1])
            self.resnet.cuda()

        def forward(self, image_batch):
            return self.resnet(image_batch)

    class MultiLabelModel(torch.jit.ScriptModule):
        __constants__ = ['num_classes']

        def __init__(self, num_classes):
            super(MultiLabelModel, self).__init__()
            self.num_classes = MyClassNum
            for index in enumerate(range(0, self.num_classes)):
                setattr(self, "FullyConnectedLayer_" + str(index[0]), nn.Linear(512, 1))

        @torch.jit.script_method
        def forward(self, x):
            x = x.view(x.size(0), -1)  # flatten
            outs = list()
            for index in enumerate(range(0, self.num_classes)):
                fun = eval("self.FullyConnectedLayer_" + str(index[0]))
                out = fun(x)
                outs.append(out)
            return outs

    class MultiLabelClassifier(torch.jit.ScriptModule):
        def __init__(self, num_classes):
            super(MultiLabelClassifier, self).__init__()
            self.FeatureExtraction = FeatureExtraction()
            self.classifier = MultiLabelModel(num_classes)

        @torch.jit.script_method
        def forward(self, img):
            feature = self.FeatureExtraction(img)
            preLabels = self.classifier(feature)
            return preLabels

    my_script_module = MultiLabelClassifier(MyClassNum)
    my_script_module.save("model.pt")

Thanks for your quick reply.
st182112
Hi Team, I have a very simple fizbuz model made using the PyTorch Python APIs, which I have exported as a ScriptModule. I am loading the same module from Python and C++ and passing the same input, but I get a wrong output in C++. In fact, regardless of whatever input I pass, I get exactly the same values in the output from C++. Here are my Python and C++ code for the same. PS: I am a C++ noob.

    # ~/myp/HOD/8.P/FizBuzTorchScript> python fizbuz.py fizbuz_model.pt 2
    import sys
    import torch


    def main():
        net = torch.jit.load(sys.argv[1])
        temp = [int(i) for i in '{0:b}'.format(int(sys.argv[2]))]
        array = [0] * (10 - len(temp)) + temp
        inputs = torch.Tensor([array])
        print(inputs)
        # tensor([[0., 0., 0., 0., 0., 0., 0., 0., 1., 0.]])
        output = net(inputs)
        print(output)
        # tensor([[ -1.8873, -17.1001, -3.7774, 3.7985]], ...


    if __name__ == '__main__':
        main()

    // ~/myp/HOD/8.P/FizBuzTorchScript/build> ./fizbuz ../fizbuz_model.pt 2
    #include <torch/script.h>

    #include <iostream>
    #include <memory>
    #include <string>

    int main(int argc, const char* argv[]) {
        if (argc != 3) {
            std::cerr << "usage: <appname> <path> <int>\n";
            return -1;
        }
        std::string arg = argv[2];
        int x = std::stoi(arg);
        int array[10];
        int i;
        int j = 9;
        for (i = 0; i < 10; ++i) {
            array[j] = (x >> i) & 1;
            j--;
        }
        std::shared_ptr<torch::jit::script::Module> module = torch::jit::load(argv[1]);
        torch::Tensor tensor_in = torch::from_blob(array, {1, 10});
        std::vector<torch::jit::IValue> inputs;
        inputs.push_back(tensor_in);
        std::cout << inputs << '\n';
        /* 1e-45 *
           0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 1.4013 0.0000
           [ Variable[CPUFloatType]{1,10} ] */
        at::Tensor output = module->forward(inputs).toTensor();
        std::cout << output << '\n';
        /* 3.7295 -23.8977 -8.2652 -1.3901
           [ Variable[CPUFloatType]{1,4} ] */
    }
st182113
Solved by smth in post #5.
st182114
Here is the model, if it helps:

    input_size = 10
    output_size = 4
    hidden_size = 100


    class FizBuzNet(nn.Module):
        """
        2 layer network for predicting fiz or buz
        param: input_size -> int
        param: output_size -> int
        """

        def __init__(self, input_size, hidden_size, output_size):
            super(FizBuzNet, self).__init__()
            self.hidden = nn.Linear(input_size, hidden_size)
            self.out = nn.Linear(hidden_size, output_size)

        def forward(self, batch):
            hidden = self.hidden(batch)
            activated = torch.sigmoid(hidden)
            out = self.out(activated)
            return out
st182115
hhsecond:

    int array[10];
    ...
    torch::Tensor tensor_in = torch::from_blob(array, {1, 10});

From the docs, from_blob takes a void* and the (optional) TensorOptions specify the type, probably defaulting to float. So maybe declaring array to be a float array works better. Best regards, Thomas
st182116
tom: TensorOptions

That did not help. Does it look like a bug in the JIT module (I know, highly unlikely), or am I doing something wrong? Also, for whatever input I pass, I get exactly this same output:

    3.7295 -23.8977 -8.2652 -1.3901
    [ Variable[CPUFloatType]{1,4} ]
st182117
As Thomas said, you probably have to make array a float, you have it as int array[10]. If you have int array[10] and re-interpret it as a float, it’s probably going to have weird floats come out on the other side.
st182118
Thanks a ton @smth @tom. That worked. In fact, I got the answer two minutes ago from @lantiga and was about to post here.
st182119
Ahah, yes, I was posting here when I saw @smth's reply go live. I confirm that using float array[10] fixes it.
st182120
Hi, I'm trying to export a ModuleList of Sequentials. It's failing with the following error:

    RuntimeError: could not export python function call <python_value>. Remove calls to python functions before export.:
    @torch.jit.script_method
    def forward(self, x):
        # type: (List[Tensor]) -> List[Tensor]
        outputs = []
        for cur_xc in self.xc:
            out = cur_xc(torch.tensor(0))
                  ~~~~~~ <--- HERE
            outputs.append(out)
        return outputs

Example:

    from fractions import gcd

    import torch
    from torch import nn


    class CrossScale(torch.jit.ScriptModule):
        __constants__ = ['xc']

        def __init__(self, n, ng=32):
            super(CrossScale, self).__init__()
            xc = []
            for i in range(len(n)):
                m = nn.Sequential(
                    nn.Conv2d(n[i], n[i], 1, bias=False),
                    nn.GroupNorm(gcd(ng, n[i]), n[i]))
                xc.append(m)
            self.xc = nn.ModuleList(xc)

        @torch.jit.script_method
        def forward(self, x):
            # type: (List[Tensor]) -> List[Tensor]
            outputs = []
            for cur_xc in self.xc:
                out = cur_xc(torch.tensor(0))
                outputs.append(out)
            return outputs


    if __name__ == "__main__":
        n = [32, 64, 96]
        cs = CrossScale(n)
        cs.save("cs.pt")
st182121
Hi, try including the ModuleList in the __constants__ list. We have an issue up to improve the error message in this case: https://github.com/pytorch/pytorch/issues/16400
st182122
Hi, you can see in the snippet I included that the module list is already in the constants list.
st182123
lol you’re right, I wasn’t reading carefully enough! Thanks for filing the GH issue, will follow up there
st182124
Hello! Today I compared numba.jit and torch.jit and was very surprised. What am I doing wrong? (Attached figure: image.png.)

    import torch
    from numba import jit


    @torch.jit.script
    def torch_jit_sum(x : torch.Tensor):
        res = torch.zeros_like(x[0, 0])
        for i in range(x.shape[0]):
            for j in range(x.shape[1]):
                res += x[i, j]
        return res


    def blablabla(x):
        with torch.no_grad():
            return torch_jit_sum(x)


    def loop_sum(x):
        res = torch.zeros_like(x[0, 0])
        for i in range(x.shape[0]):
            for j in range(x.shape[1]):
                res += x[i, j]
        return res


    @jit
    def numba_sum(x):
        res = 0.0
        for i in range(x.shape[0]):
            for j in range(x.shape[1]):
                res += x[i, j]
        return res
st182125
Hi @Daulbaev! The reason torch.jit is slow in your example is that it's not designed for this particular use-case. The kinds of speed-ups you will see with torch.jit come from situations where, for example, you have a number of pointwise operations, and torch.jit is able to fuse them together and eliminate overhead and memory traffic. torch.jit, at this point in time, is not designed to take pointwise loops as you've written here and compile them into machine code directly.
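As an illustration of the kind of pattern that can benefit, a chain of pointwise ops (a tanh-based GELU approximation here) is something the fuser can combine into a single kernel. This is only a sketch; the fuser of that era mainly targets GPU tensors, and actual speed-ups depend on the PyTorch version and hardware:

```python
import torch

def gelu_like(x):
    # several elementwise ops in a row: a candidate for kernel fusion
    return 0.5 * x * (1.0 + torch.tanh(0.7978845608 * (x + 0.044715 * x * x * x)))

gelu_scripted = torch.jit.script(gelu_like)

x = torch.randn(4096, 4096, device="cuda")
for _ in range(20):               # warm-up iterations let the JIT profile and fuse the graph
    gelu_scripted(x)
torch.cuda.synchronize()
```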
st182126
Thank you for a quick response. Do you have an example of a function where torch.jit on CPU is faster than numba.jit?
st182127
Given that numba's JIT compiles single CUDA kernels, it's going to be at least as fast in execution. However, for many things, the expressive power of PyTorch is much greater, and the JIT will take those ops and optimize them. Best regards, Thomas
st182128
Hi, I successfully traced the model, but it looks like I cannot save it. I thought that if I could do something like model = trace(model, (params)), then it is ready for saving? Am I wrong? Torch version is '1.0.0.dev20190130'. Here is the trace:

    ---------------------------------------------------------------------------
    RuntimeError                              Traceback (most recent call last)
    <ipython-input-23-4014fec4f00b> in <module>
    ----> 1 ctc_model.save("ctc_test.ph")

    RuntimeError: could not export python function call <python_value>. Remove calls to python functions before export.:
    @torch.jit.script_method
    def forward(self, x, x_length):
        h_t, x_length = self.rnn(x, x_length)
                        ~~~~~~~~ <--- HERE
st182129
Solved by Michael_Suo in post #6.
st182130
You cannot export a model if it contains a Python function call. Is self.rnn() a Python function?
st182131
I think if self.rnn() is subclassed from torch.jit.ScriptModule(), then it should be possible to trace & save it.
st182132
Could you provide a script/model we can use the reproduce the problem? It’s hard to say what’s going on here without more information. Thanks!
st182133
Thank you very much for your replies! Actually, I think I fixed my original question: self.rnn was an nn.Module and now I also made it a ScriptModule, but now I have a new problem. It looks like I cannot loop over an nn.ModuleList. I tried to index it in the loop, but that did not work either. Is it even possible to use the JIT with nn.ModuleList? I omitted some parts for brevity:

    class PyramidalRNNENcoder(ScriptModule):
        __constants__ = ['num_layers']

        def __init__(self, num_mels, encoder_size, num_layers, downsampling=None, dropout=0.0):
            super(PyramidalRNNENcoder, self).__init__()
            ...
            self.rnns = nn.ModuleList()
            for i in range(num_layers):
                input_size = num_mels*2 if i == 0 else encoder_size*2
                lstm_i = nn.LSTM(input_size, hidden_size=encoder_size, bidirectional=True)
                initialize_lstm(lstm_i)
                self.rnns.append(lstm_i)
            self.num_layers = num_layers
            ...

        @torch.jit.script_method
        def forward(self, x, x_length):
            batch_size = x.size(0)
            ...
            idx = 0
            for rnn in self.rnns:
                ~~~~~~~~~~~~~~~~~~
    RuntimeError: python value of type 'ModuleList' cannot be used as a tuple:
                rnn_result = rnn(data)
st182134
Try putting "rnns" in the __constants__ attribute. We should work on having a better error msg here
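A minimal sketch of that fix using the 1.0-era API from this thread (current releases would instead call torch.jit.script on a regular nn.Module); the layers here are placeholders:

```python
import torch
import torch.nn as nn

class Stack(torch.jit.ScriptModule):
    # the ModuleList must be listed in __constants__ so the loop over it can be unrolled
    __constants__ = ['rnns']

    def __init__(self, num_layers, hidden):
        super(Stack, self).__init__()
        self.rnns = nn.ModuleList([nn.Linear(hidden, hidden) for _ in range(num_layers)])

    @torch.jit.script_method
    def forward(self, x):
        for layer in self.rnns:
            x = torch.relu(layer(x))
        return x

m = Stack(num_layers=3, hidden=16)
print(m(torch.randn(4, 16)).shape)
```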
st182135
When I implement a VAE in PyTorch, I need to use torch.randn_like() like this:

    """
    :param mean: torch.Tensor
    """
    eps = torch.randn_like(mean)

However, when I use torch.jit.trace to convert my VAE model into the graph representation, I get these warning messages:

    /*/lib/python3.6/site-packages/torch/jit/__init__.py:644: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
    Not within tolerance rtol=1e-05 atol=1e-05 at input[8, 15] (-0.3760705292224884 vs. 5.01010274887085) and 199 other locations (100.00%)
      _check_trace([example_inputs], func, executor_options, module, check_tolerance, _force_outplace)
    /*/lib/python3.6/site-packages/torch/jit/__init__.py:644: TracerWarning: Trace had nondeterministic nodes. Nodes:
        %eps : Float(10, 13) = aten::randn_like(%mean, %20, %21, %22), scope: Decoder
    This may cause errors in trace checking. To disable trace checking, pass check_trace=False to torch.jit.trace()
      _check_trace([example_inputs], func, executor_options, module, check_tolerance, _force_outplace)
    /*/lib/python3.6/site-packages/torch/jit/__init__.py:644: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
    Not within tolerance rtol=1e-05 atol=1e-05 at input[3, 8] (-1.5253040790557861 vs. 3.154987096786499) and 129 other locations (100.00%)
      _check_trace([example_inputs], func, executor_options, module, check_tolerance, _force_outplace)

What should I do? What is trace checking? What will passing check_trace=False to torch.jit.trace() cause?
st182136
Solved by Michael_Suo in post #2.
st182137
Trace checking compares the results of the traced function to the actual function to make sure they are the same. It’s just a sanity check, but non-deterministic ops like randn_like() will cause trace checking to fail. This failure is expected, and if you really want to use non-determinism in your model then you can disable the trace checking as the warning says.
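For reference, a minimal sketch of disabling the check for a module that samples noise in its forward (the module and shapes are illustrative):

```python
import torch
import torch.nn as nn

class Reparam(nn.Module):
    def forward(self, mean):
        eps = torch.randn_like(mean)   # non-deterministic, so the trace check would complain
        return mean + eps

traced = torch.jit.trace(Reparam(), torch.zeros(10, 13), check_trace=False)
```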
st182138
@Michael_Suo Thank you for your reply! I understand what happened. I’ll try it again with check_trace=False.
st182139
Hi. I am using PyTorch 1.0 for the first time since PyTorch 0.2.x. I have a simple question about torch.jit. Please check this code at gist: https://colab.research.google.com/gist/hellocybernetics/013f7d6fb007df1d8c70161872acce72/pytorch_jit_test.ipynb?authuser=1 The JIT did NOT improve the speed. Am I using torch.jit correctly?
st182140
Your code looks reasonable to me. Keep in mind that using the JIT may not necessarily yield big performance increases. Right now, the primary use case for the JIT is running PyTorch models in production without a dependency on Python.
st182141
Thank you for the reply. Before I came to PyTorch 1.0, I was using TensorFlow 1.x. Lately, TF has an "eager execution mode", which is define-by-run. In TF 2.0, eager execution is the default, so the code becomes pythonic like PyTorch and the difference in usage feeling will disappear. TF 2.0 also has a great JIT feature, tf.function, which turns eager code into a function that runs as a TF graph internally. This JIT makes eager code, which is much slower than PyTorch, faster than PyTorch. So I tried to make PyTorch faster with torch.jit. (I think Pyro uses torch.jit for speed in its variational inference API.) If torch.jit is not for speed, don't we need torch.jit for prototyping in research? And when considering production, what is the difference between using Caffe2 and torch.jit?
st182142
Hi, I'm getting an error when running a dummy JIT script. It seems it cannot infer types on a list of lists.

    import torch


    class PreProcessor(torch.jit.ScriptModule):
        def __init__(self):
            super(PreProcessor, self).__init__()

        @torch.jit.script_method
        def forward(self, frames):
            # type: (List[Tensor]) -> List[Tensor]
            lidars = []
            for i in range(len(frames)):
                frame = frames[i]
                lidars.append(frame)
            return lidars


    class Inference(torch.jit.ScriptModule):
        def __init__(self):
            super(Inference, self).__init__()
            self.preprocessor = PreProcessor()

        @torch.jit.script_method
        def forward(self, batched_frames):
            # type: (List[List[Tensor]]) -> List[List[Tensor]]
            data = []
            for i in range(len(batched_frames)):
                frames = batched_frames[i]
                preprocessed_data = self.preprocessor(frames)
                data.append(preprocessed_data)
            return data


    if __name__ == "__main__":
        p = Inference()
        print(p)
        p.save("p.pt")

The relevant stack trace output:

    RuntimeError: arguments for call are not valid:
    for operator aten::append(t[] self, t el) -> t[]:
    could not match type Tensor[] to t in argument 'el': type variable 't' previously matched to type Tensor is matched to type Tensor[]

    @torch.jit.script_method
    def forward(self, batched_frames):
        # type: (List[List[Tensor]]) -> List[List[Tensor]]
        data = []
        for i in range(len(batched_frames)):
            frames = batched_frames[i]
            preprocessed_data = self.preprocessor(frames)
            data.append(preprocessed_data)
            ~~~~~~~~~~~~~~~~~ <--- HERE
        return data

Is this a known issue?
st182143
Hi! Lists are statically typed in TorchScript. The default type for a list declared like foo = [] is List[Tensor]. So in data.append(preprocessed_data) you are trying to append a list of tensors to a list whose elements are typed as single tensors. You can fix this by annotating data properly (search for "Variable Type Annotation" here).
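A minimal sketch of the fix using torch.jit.annotate, with the preprocessing stubbed out for brevity:

```python
from typing import List

import torch
from torch import Tensor

class Inference(torch.jit.ScriptModule):
    @torch.jit.script_method
    def forward(self, batched_frames):
        # type: (List[List[Tensor]]) -> List[List[Tensor]]
        data = torch.jit.annotate(List[List[Tensor]], [])  # element type is List[Tensor]
        for i in range(len(batched_frames)):
            data.append(batched_frames[i])
        return data

m = Inference()
out = m([[torch.randn(2)], [torch.randn(3), torch.randn(4)]])
```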
st182144
I currently have a trained model for an RNN encoder-decoder, and I'm trying to follow the tutorial for deploying models with the JIT and C++ here: https://pytorch.org/tutorials/advanced/cpp_export.html. For the example inputs, I'm supposed to pass in what one would normally pass to the model's forward; however, I am getting the following error:

    >>> traced_script_module = torch.jit.trace(model, (example1, example2))
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/local/lib/python3.7/site-packages/torch/jit/__init__.py", line 634, in trace
        module = TopLevelTracedModule(func, **executor_options)
      File "/usr/local/lib/python3.7/site-packages/torch/jit/__init__.py", line 963, in init_then_register
        original_init(self, *args, **kwargs)
      File "/usr/local/lib/python3.7/site-packages/torch/jit/__init__.py", line 963, in init_then_register
        original_init(self, *args, **kwargs)
      File "/usr/local/lib/python3.7/site-packages/torch/jit/__init__.py", line 1316, in __init__
        self._name = orig.__name__
    AttributeError: 'dict' object has no attribute '__name__'

Here is the forward pass of the model:

    def forward(self, input, hidden, return_hiddens=False, noise=False):
        # input = (torch.randn(1,2))
        # hidden = (torch.randn(2,1,32), torch.randn(2,1,32))
        print('Input tuple length', len(input))
        print('Hidden tuple length', len(hidden))
        #print(input[0])
        #print(hidden[0].size())
        #print(hidden)
        #print(hidden[1].size())
        emb = self.drop(self.encoder(input.contiguous().view(-1, self.enc_input_size)))
        emb = emb.view(-1, input.size(1), self.rnn_hid_size)  # [seq_len * batch_size * feature_size]
        if noise:
            hidden = (F.dropout(hidden[0], training=True, p=0.9),
                      F.dropout(hidden[1], training=True, p=0.9))
        output, hidden = self.rnn(emb, hidden)
        output = self.drop(output)  # [(seq_len * batch_size) * feature_size]
        decoded = self.decoder(output.view(output.size(0)*output.size(1), output.size(2)))
        decoded = decoded.view(output.size(0), output.size(1), decoded.size(1))  # [seq_len * batch_size * feature_size]
        if self.res_connection:
            decoded = decoded + input
        if return_hiddens:
            return decoded, hidden, output
        return decoded, hidden

And here is the code I used when trying to call torch.jit.trace:

    model = torch.load('./gcloud-gpu-results/save/ecg/model_best/chfdb_chf13_45590.pth', map_location='cpu')
    example1 = (torch.randn(1,2))
    example2 = (torch.randn(2,1,32), torch.randn(2,1,32))
    traced_script_module = torch.jit.trace(model, (example1, example2))

Let me know if I should provide any more information or be more clear. Thank you!
st182145
Could you check what the type of the object you are torch.load()ing is? It seems that the tracing path thinks it's a dict for some reason.
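A quick way to run that check (a sketch; the path is the one from the post above, and the 'state_dict' key is only an assumption about how the checkpoint was saved):

```python
import torch

obj = torch.load('./gcloud-gpu-results/save/ecg/model_best/chfdb_chf13_45590.pth',
                 map_location='cpu')
print(type(obj))              # <class 'dict'> would mean a checkpoint was saved, not the module

if isinstance(obj, dict):
    print(list(obj.keys()))   # look for something like 'state_dict', 'epoch', ...
    # then rebuild the model and load the weights before tracing, e.g. (hypothetical class):
    # model = RNNPredictor(...); model.load_state_dict(obj['state_dict']); model.eval()
```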
st182146
Hi there, It seems to me that TorchScript is a subset of Pythran, or at least that we could enlarge Pythran so that it becomes a superset of TorchScript. If that's the case, it would be interesting to have both compilers share the same AST optimization engine. Pythran's optimizations have been designed to be relatively independent from the target, so there is probably room for cooperation there. Does that look like a good idea?
st182148
Hi! I am facing a very strange issue. I have two different environments for training NNs (both Ubuntu 20.04): an RTX 2070 Super mobile with CUDA 11.2 and PyTorch 1.10.0, and an RTX 3060 Ti with CUDA 11.4 and PyTorch 1.10.1. On the first computer, training my NN consumes around 1400 MB of GPU RAM, while the second one uses 2200 MB. Their training configs are the same: it's an RL project using PyTorch and Stable Baselines, and the code is the same. Nonetheless, the second computer still uses more GPU RAM, which prevents me from running more training processes in parallel. I have been advised about PyTorch flags; they are the same on both. I wonder if it is an issue with CUDA, PyTorch, or a hidden default config of each GPU.
st182149
I don’t know how you’ve installed PyTorch but given that you mention CUDA 11.4, I assume you are using source builds. In this case, the different CUDA versions as well as cuDNN (which versions did you build with?) would most likely have a different memory footprint. If you don’t want to load e.g. the cuDNN kernels and are dynamically linking to it during the build, disable cuDNN via torch.backends.cudnn.enabled=False and check the memory usage again. Also, different GPU generations would load a different amount of kernels in libs such as cuDNN. It’s also important what exactly you are measuring since PyTorch uses a caching allocator and can thus reuse memory so you should check the allocated and reserved memory via e.g. torch.cuda.memory_summary().
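For example, a minimal sketch of both suggestions (the tensor is only there to force context and allocator initialization):

```python
import torch

torch.backends.cudnn.enabled = False          # skip selecting/loading cuDNN kernels
device = torch.device('cuda:0')

x = torch.randn(64, 3, 224, 224, device=device)
print(torch.cuda.memory_allocated(device) / 1024**2, 'MB allocated')
print(torch.cuda.memory_summary(device))      # allocated vs. reserved breakdown
```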
st182150
Hello, I'm trying to calculate the cosine similarity between two tensors of different shapes: tensor1 is torch.Size([15, 24, 4, 120]) and tensor2 is torch.Size([5608, 4, 120]). To make them have the same number of dimensions, I used unsqueeze to make them both of size torch.Size([15, 24, 5608, 4, 120]). However, this new tensor makes me run out of memory because it has 969M elements and takes 3.8 GB. Is there a better way to calculate the cosine similarity without creating this giant tensor?
st182151
Solved by anantguptadbl in post #2.
st182152
@Yunchao_Liu you can try something like:

    tensor1 = torch.rand([15, 24, 4, 120])
    tensor2 = torch.rand([5608, 4, 120])
    cos = nn.CosineSimilarity(dim=0)

    def custom_cos(cur_tensor):
        return cos(cur_tensor, tensor2)

    results = map(custom_cos, tensor1.view(-1, 4, 120))
    results = torch.concat(list(results)).view(15, 24, 4, 120)
    results.size()

If you have the latest version you can also use https://pytorch.org/tutorials/prototype/vmap_recipe.html
st182153
Hello, I have a Flask server running inference for several different models on two identical computers that differ only in GPU: one has an RTX 3070 and the other a 2070 Super. Both computers have an i9-10900X, NVMe SSDs, and the same RAM. Both are up to date with every library and run PyTorch 1.10.0 with CUDA 11.3 and cuDNN 8.2.0. So, in summary, everything else is exactly the same. However, the RTX 3070 performs significantly slower and uses much more VRAM (~7.6 GB) than the 2070 Super (~6 GB). The 2070 Super can handle 50% more load and still perform better. Are there any known issues regarding performance on the 3070? Note 1: I use Transformer and attention networks, which I heard should run faster on 30-series GPUs. Note 2: I use half precision for the CNN models as well; can the issue be related to fp16? Note 3: I regularly clear the GPU cache to release some memory, otherwise VRAM keeps bloating under heavy load. Note 4: Flask runs multithreaded, if that helps. Note 5: This issue is impossible to reproduce due to the complexity of the code.
st182154
I’m not sure what “Note 3” means and how you are releasing the memory, but in case you are clearing the cache this would hit the performance. Note 5 makes it quite hard to give you a valid answer. Would it be possible to get a model with the input shapes showing the performance difference?
st182155
If clearing the cache were hurting performance, wouldn't it decrease the performance on both computers? They are running the same code at the same time, and there is a huge difference both in allocated VRAM and in inference speed, the latter caused by the lack of available GPU VRAM. I realized that when the GPU VRAM is full while using the Flask server, any request that needs more VRAM gets queued, which reduces inference speed. So the issue is that the RTX 3070 allocates more VRAM than the 2070 Super.
st182156
I get confused when I try to profile the memory consumption of my simple program.

    import torch
    import torch.nn as nn
    from memory_profiler import profile

    def simple_func():
        x = torch.rand(3000, 2000, requires_grad=True)
        f = nn.Linear(2000, 1000)
        y = f(x)
        dd = torch.autograd.grad(y, (x,) + tuple(f.parameters()), grad_outputs=torch.rand(3000, 1000))

    @profile
    def compound_func():
        simple_func()
        simple_func()
        simple_func()
        simple_func()

    compound_func()

The program outputs:

    Line #    Mem usage    Increment  Occurrences   Line Contents
    =============================================================
        48    306.8 MiB    306.8 MiB            1   @profile
        49                                          def compound_func():
        50    319.4 MiB     12.6 MiB            1       simple_func()
        51    403.2 MiB     83.8 MiB            1       simple_func()
        52    403.2 MiB      0.0 MiB            1       simple_func()
        53    403.2 MiB      0.0 MiB            1       simple_func()

I don't understand why the first two simple_func calls show an increment in memory usage. It seems there is often something left over in memory even though all objects are local and should be deleted after the call. I have tried an even simpler function:

    def simple_func():
        x = torch.rand(3000, 2000)

But there is still a leftover:

    Line #    Mem usage    Increment  Occurrences   Line Contents
    =============================================================
        48    306.8 MiB    306.8 MiB            1   @profile
        49                                          def compound_func():
        50    308.5 MiB      1.7 MiB            1       simple_func()
        51    331.2 MiB     22.7 MiB            1       simple_func()
        52    331.2 MiB      0.0 MiB            1       simple_func()
        53    331.2 MiB      0.0 MiB            1       simple_func()

Do we need to clean the memory manually? I have tried "del" before the end of simple_func, but it doesn't help. Or is it just a measurement issue with memory_profiler?
st182157
github.com pytorch/pytorch/blob/7ee0712642492ef221a69d3fdf13b607f406bd78/c10/cuda/CUDACachingAllocator.cpp#L102

    namespace {

    using stream_set = std::unordered_set<cuda::CUDAStream>;

    constexpr size_t kMinBlockSize = 512;       // all sizes are rounded to at least 512 bytes
    constexpr size_t kSmallSize = 1048576;      // largest "small" allocation is 1 MiB
    constexpr size_t kSmallBuffer = 2097152;    // "small" allocations are packed in 2 MiB blocks
    constexpr size_t kLargeBuffer = 20971520;   // "large" allocations may be packed in 20 MiB blocks
    constexpr size_t kMinLargeAlloc = 10485760; // allocations between 1 and 10 MiB may use kLargeBuffer
    constexpr size_t kRoundLarge = 2097152;     // round up large allocations to 2 MiB

    typedef std::bitset<static_cast<size_t>(StatType::NUM_TYPES)> StatTypes;

    void update_stat(Stat& stat, int64_t amount) {
      stat.current += amount;

When I check the code, the CUDACachingAllocator tries to allocate GPU memory with a best fit at 2 MB granularity. I found things like "x86 supports 4K, 2M, 4G pages" and "the CUDA driver needs pinned memory for cudaMemcpy, so the PyTorch CPU allocator uses cudaHostAlloc"... So I think 2 MB is simply a good size to manage between CPU and GPU. Is there any more specific reason for using the 2 MB size? Thanks
st182158
I have a tiny CNN model that gives this summary: ---------------------------------------------------------------- Layer (type) Output Shape Param # ================================================================ Conv2d-1 [-1, 32, 222, 222] 896 ReLU-2 [-1, 32, 222, 222] 0 MaxPool2d-3 [-1, 32, 111, 111] 0 Conv2d-4 [-1, 32, 109, 109] 9,248 ReLU-5 [-1, 32, 109, 109] 0 MaxPool2d-6 [-1, 32, 54, 54] 0 Conv2d-7 [-1, 32, 52, 52] 9,248 ReLU-8 [-1, 32, 52, 52] 0 MaxPool2d-9 [-1, 32, 26, 26] 0 Flatten-10 [-1, 21632] 0 Linear-11 [-1, 1] 21,633 Sigmoid-12 [-1, 1] 0 ================================================================ Total params: 41,025 Trainable params: 41,025 Non-trainable params: 0 ---------------------------------------------------------------- Input size (MB): 0.57 Forward/backward pass size (MB): 35.24 Params size (MB): 0.16 Estimated Total Size (MB): 35.97 ---------------------------------------------------------------- The input images are 3x224x224 and the batch size is 16. When I start training the model (with torch.optim.SGD), I get this: RuntimeError: CUDA out of memory. Tried to allocate 11.89 GiB (GPU 0; 8.00 GiB total capacity; 1.14 GiB already allocated; 4.83 GiB free; 1.14 GiB reserved in total by PyTorch) Why does it try to allocate so much memory for such a small network? Thanks.
st182159
I found the bug which was trivial: I was loading all the validation images into a single batch which was too big. I would delete this post but don’t see any option to do that.
st182160
Click the ‘show more’ button on the right bottom of your post. Then you can see a trash can.
st182161
If I increase num_workers from 0 to 4, could that solve the CUDA out of memory problem? There is enough CPU RAM, but GPU memory is exhausted... please help.
st182162
No, increasing num_workers in the DataLoader would use multiprocessing to load the data from the Dataset and would not avoid an out of memory on the GPU. To solve the latter you would have to reduce the memory usage by e.g. reducing the batch size or by using e.g. torch.utils.checkpoint.
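As a sketch of the second option, torch.utils.checkpoint trades compute for memory by recomputing activations during the backward pass (the model and segment count below are placeholders):

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

model = nn.Sequential(
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, 10),
).cuda()

x = torch.randn(256, 1024, device='cuda', requires_grad=True)
out = checkpoint_sequential(model, 2, x)   # 2 segments: only segment boundaries keep activations
out.sum().backward()
```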
st182163
Hi, ptrblck! Do you know why increasing num_workers to 2, 4 or 8 wouldn’t decrease the runtime?
st182164
You would have to profile your code and check how long the data loading takes. E.g. your overall data loading might be faster than the model training iteration and thus the time to load each batch could already be hidden. In such a case speeding up the data loading would of course not yield any performance improvement. On the other hand you could see a data loading bottleneck, but your system cannot speed it up further e.g. due to the limited read speeds of your SSD etc. Generally, I would recommend to profile the code to see where the bottlenecks are and then try to optimize it. EDIT: also explained in this answer from your double post.
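A rough sketch of such a check, wrapped around an existing training loop (loader, model, criterion, and optimizer stand in for your own objects; torch.utils.bottleneck or the profiler give more detail):

```python
import time
import torch

data_time, step_time = 0.0, 0.0
t0 = time.perf_counter()
for data, target in loader:
    t1 = time.perf_counter()
    data_time += t1 - t0                  # time spent waiting for the next batch

    data, target = data.cuda(), target.cuda()
    optimizer.zero_grad()
    loss = criterion(model(data), target)
    loss.backward()
    optimizer.step()
    torch.cuda.synchronize()              # make the asynchronous GPU work visible to the timer

    t0 = time.perf_counter()
    step_time += t0 - t1

print(f'data loading: {data_time:.2f}s, training steps: {step_time:.2f}s')
```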
st182165
May I ask how to test the batch loading speed and the model training iteration speed? Also, if I set num_workers = multiprocessing.cpu_count(), which maximizes CPU usage, and it still does not improve the runtime, does that mean there's no way to improve the runtime? I'm new to PyTorch, hence these silly questions.
st182166
I've figured it out. Setting num_workers = 0 actually speeds up my code because the loading time itself is short, while spawning new worker processes takes much longer. Therefore, using more workers, even multiprocessing.cpu_count(), slows down the runs.
st182167
    bool get_free_block(AllocParams& p) {
      BlockPool& pool = *p.pool;
      auto it = pool.lower_bound(&p.search_key);
      if (it == pool.end() || (*it)->stream != p.stream())
        return false;
      p.block = *it;
      pool.erase(it);
      return true;
    }

If a block is allocated by one stream and then split, leaving a free block, why can't we use that free block on another stream, given that cudaMalloc is host-blocking?
st182168
Let's say that I have a PyTorch tensor that I'm loading onto the CPU. I would like to experiment with different shapes and how they affect memory consumption, and I thought the best way to do this is to create a simple random tensor and then measure the memory consumption for different shapes. However, while attempting this, I noticed anomalies, so I decided to simplify the task further. I'm creating a 3 GB PyTorch tensor and want to measure its memory consumption with the psutil module. In order to get some statistics, I do this ten times in a for loop and consider the mean and std. I also move the tensor to the GPU and then use PyTorch functions to measure the allocated and reserved memory on the GPU. My code is this:

    import torch
    import psutil

    if __name__ == '__main__':
        resident_memories = []
        for i in range(10):
            x = torch.ones((3, 1024, 1024, 1024), dtype=torch.uint8)
            resident_memory = psutil.Process().memory_info().rss/1024**2
            resident_memories.append(resident_memory)
            del x, resident_memory
        print('Average resident memory [MB]: {} +/- {}'.format(torch.mean(torch.tensor(resident_memories)),
                                                               torch.std(torch.tensor(resident_memories))))
        del resident_memories

        device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
        alloc_memories = []
        reserved_memories = []
        for i in range(10):
            x = torch.ones((3, 1024, 1024, 1024), dtype=torch.uint8).to(device)
            alloc_memory = torch.cuda.memory_allocated(device=device)/1024**2
            reserved_memory = torch.cuda.memory_reserved(device=device)/1024**2
            alloc_memories.append(alloc_memory)
            reserved_memories.append(reserved_memory)
            del x, alloc_memory
        print('By tensors occupied memory on GPU [MB]: {} +/- {}\nCurrent GPU memory managed by caching allocator [MB]: {} +/- {}'.format(
            torch.mean(torch.tensor(alloc_memories)), torch.std(torch.tensor(alloc_memories)),
            torch.mean(torch.tensor(reserved_memories)), torch.std(torch.tensor(reserved_memories)))
        )

I obtain the following output:

    Average resident memory [MB]: 4028.602783203125 +/- 0.06685283780097961
    By tensors occupied memory on GPU [MB]: 3072.0 +/- 0.0
    Current GPU memory managed by caching allocator [MB]: 3072.0 +/- 0.0

I'm executing this code on a cluster, but I also ran the first part on the cloud and mostly observed the same behavior. When I ran this on the cluster, it was the only job on the CPU, so other jobs should (hopefully) not affect the memory consumption. I have two quick questions: i) Why is psutil.Process().memory_info().rss inaccurate when measuring the memory of a 3 GB tensor? ii) How can we (correctly) measure the memory of tensors on the CPU? One use case might be that the tensor is so huge that moving it onto a GPU might cause a memory error.
st182169
I’m not familiar enough with the Python memory management and garbage collection mechanism to be able to explain the effect you are seeing properly. Since rss returns the “non-swapped physical memory” of the process, you would not only see the required memory used by the tensor allocation but also the memory usage by the process itself, loaded libraries, as well as all other objects. My guess would be that the garbage collection kicks in at different intervals and might free some memory, so you could check if playing around with its thresholds might change the behavior.
st182170
Thanks for your reply! I want to be a bit cautious, since I'm inexperienced when it comes to Python/PyTorch memory management, but I don't think it's the garbage collection mechanism that is responsible for the effects we've seen so far. I think I found the underlying issue. Let me explain: when you wrote that I should play around with the thresholds of the garbage collection, I thought it might be best to completely turn it off. So if the optional garbage collection is enabled, I write at the beginning of my script:

    import gc

    if __name__ == '__main__':
        if gc.isenabled():
            gc.disable()

However, after some tests, I saw no big difference from the initially reported results. What I found is that the statement import torch is quite expensive in terms of memory. For this, I used the following code:

    import gc
    import torch
    import psutil

    if __name__ == '__main__':
        print('Is Pythons optional garbage collector enabled? Answer: {}'.format(gc.isenabled()))
        if gc.isenabled():
            gc.disable()
            print('Pythons optional garbage collector was disabled.')
        resident_memory = psutil.Process().memory_info().rss/1024**2
        print('\nResident memory [MB]: {}'.format(resident_memory))

If I execute this code on an RTX 2080 Ti with CUDA 10.1, I consistently get the following resident memory (this time I'm not reporting any error bars, but I made sure the results are not orders of magnitude apart across runs):

    Resident memory [MB]: 190.37109375

However, the same code executed on an RTX 2080 Ti with CUDA 11.0 yields:

    Resident memory [MB]: 981.79296875

For both CUDA versions I used Python 3.6.9; for CUDA 10.1 I used PyTorch 1.7.1+cu101, whereas for CUDA 11.0 I had PyTorch 1.7.1 available. My question would be: why is there such a huge discrepancy in resident memory when using different CUDA versions? (The results I reported in my initial post were on CUDA 11.3, if I'm not mistaken.)
st182171
GinnyWeasley: "but I don't think it's the garbage collection mechanism that is responsible for the effects we've seen so far"

I was thinking about the gc due to the change in the reported memory while using the same code snippet, but again I don't know what bookkeeping is done additionally etc. You might want to take a look at this issue which discusses the memory overhead, increase, etc.
st182172
I don't know what this means:

    If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
st182173
Take a look at the Memory Management docs which explain how the caching memory allocator works. The last section explains how the env variable can be used to prevent fragmentation in case your workload is suffering from it.
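For example, the option can be set through the environment before CUDA is initialized; the 128 MB threshold below is only an illustrative value to tune:

```python
import os
# safest to set this before torch initializes CUDA; exporting it in the shell also works
os.environ['PYTORCH_CUDA_ALLOC_CONF'] = 'max_split_size_mb:128'

import torch

x = torch.empty(1024, 1024, device='cuda')
print(torch.cuda.memory_summary())
```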
st182174
You could delete all tensors, parameters, models etc. and call empty_cache() afterwards to remove all allocations created by PyTorch. To also remove the CUDA context, you would have to shut down the Python session.
st182175
No, I don't think there is a way to list all tensors. You could try to use this approach, but as discussed in the topic, tensors kept alive in the C++ backend won't be returned by this approach.
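The linked approach walks Python's garbage collector, roughly like this sketch (best effort only; as noted, tensors held solely by the C++ backend won't show up):

```python
import gc
import torch

def live_cuda_tensors():
    for obj in gc.get_objects():
        try:
            if torch.is_tensor(obj) and obj.is_cuda:
                yield obj
        except Exception:
            pass          # some objects raise on attribute access

for t in live_cuda_tensors():
    print(type(t), tuple(t.shape), t.dtype, t.device)
```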
st182176
I see how to get the list of all tensors. However, I'm having a hard time moving the tensors to the CPU and then removing them from the GPU.
st182177
Below is the result of nivdia-smi command. Which shows that no process is running. but when i try to run my code it says RuntimeError: CUDA out of memory. Tried to allocate 1.02 GiB (GPU 3; 7.80 GiB total capacity; 6.24 GiB already allocated; 258.31 MiB free; 6.25 GiB reserved in total by PyTorch) nivdia-smi results: Thu Dec 23 11:52:33 2021 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 455.23.05 Driver Version: 455.23.05 CUDA Version: 11.1 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 GeForce RTX 208... Off | 00000000:05:00.0 Off | N/A | | 30% 43C P8 19W / 250W | 3MiB / 7981MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 1 GeForce RTX 208... Off | 00000000:06:00.0 Off | N/A | | 27% 41C P8 3W / 250W | 3MiB / 7982MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 2 GeForce RTX 208... Off | 00000000:09:00.0 Off | N/A | | 27% 39C P8 2W / 250W | 3MiB / 7982MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 3 GeForce RTX 208... Off | 00000000:0A:00.0 Off | N/A | | 27% 35C P8 3W / 250W | 3MiB / 7982MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | No running processes found | +-----------------------------------------------------------------------------+ Note: 1- I already tried: torch.cuda.empty_cache() 2- This is a shared sever and I have user access not root access.
st182178
The error message explains that you are running out of memory and cannot allocate the desired ~1GB. Your total GPU memory is given as 7.8GB, 6.25GB are already reserved by PyTorch, ~260MB are free, the rest is used by the CUDA context (since you’ve made sure no other applications are running and using the device).
st182179
What is PyTorch using this much memory for while I am not running anything? I feel like PyTorch has a memory problem.
st182180
The CUDA context loads the driver and all linked CUDA kernels, i.e. PyTorch native kernels, cuDNN, NCCL, etc. If you don’t want to load these kernels (and either drop the performance or remove specific utils), you could rebuild PyTorch from source without any additional libraries (i.e. NCCL, cuDNN, MAGMA,…).
st182181
So this means that I cannot run the code, which requires 1 GB more, on this server (without changing the PyTorch source code)?
st182182
Yes, that’s correct. You are running out of memory and would need to reduce the memory usage by e.g. lowering the batch size, using torch.utils.checkpoint, mixed-precision training, DistributedDataParallel, model sharding etc.
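As an illustration of one of those options, a minimal mixed-precision step with torch.cuda.amp (the model, shapes, and hyperparameters are placeholders):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler()

data = torch.randn(64, 512, device='cuda')
target = torch.randint(0, 10, (64,), device='cuda')

optimizer.zero_grad()
with torch.cuda.amp.autocast():              # run the forward pass in reduced precision
    loss = F.cross_entropy(model(data), target)
scaler.scale(loss).backward()                # scale to avoid underflow in fp16 gradients
scaler.step(optimizer)
scaler.update()
```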
st182183
I have java background and I am new to Python, so my assumption was that if we close the process then memory should be released. but I find out that it is not true. My question is: Why PyTorch does not free the memory after execution is completed. forPyGCL is my conda env on a shared server, and the below table shows that it is using the memory 5457MiB Why it does not free the memory, I can’t even run the next experiments. because now it says that memory is full. This problem wasted my 3 weeks and then I came to know that this is the problem that memory is not free after the execution. torch.cuda.empty_cache() is use less in this case. So i had to kill the process, but still getting this error. RuntimeError: CUDA out of memory. Tried to allocate 1.02 GiB (GPU 0; 23.70 GiB total capacity; 5.20 GiB already allocated; 573.56 MiB free; 5.23 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF Thu Dec 23 16:51:39 2021 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 460.27.04 Driver Version: 460.27.04 CUDA Version: 11.2 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 GeForce RTX 3090 Off | 00000000:02:00.0 Off | N/A | | 43% 49C P2 140W / 350W | 18322MiB / 24268MiB | 21% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 1 GeForce RTX 3090 Off | 00000000:03:00.0 Off | N/A | | 60% 60C P2 321W / 350W | 22951MiB / 24268MiB | 99% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 2 GeForce RTX 3090 Off | 00000000:82:00.0 Off | N/A | | 61% 61C P2 325W / 350W | 23130MiB / 24268MiB | 98% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 3 GeForce RTX 3090 Off | 00000000:83:00.0 Off | N/A | | 30% 39C P2 105W / 350W | 19942MiB / 24268MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | 0 N/A N/A 1800637 C .../envs/forPyGCL/bin/python 1827MiB | | 0 N/A N/A 3955522 C python 16493MiB | | 1 N/A N/A 3931490 C ...drsum-torch1.8/bin/python 22949MiB | | 2 N/A N/A 1800637 C .../envs/forPyGCL/bin/python 5457MiB | | 2 N/A N/A 3931491 C ...drsum-torch1.8/bin/python 17671MiB | | 3 N/A N/A 1800637 C .../envs/forPyGCL/bin/python 1825MiB | | 3 N/A N/A 1852217 C python 18115MiB | +-----------------------------------------------------------------------------+
st182184
Hi all, Due to a known memory limitation that causes errors on Windows when importing torch as multiple processes get spawned (refer to python - How to efficiently run multiple Pytorch Processes / Models at once ? Traceback: The paging file is too small for this operation to complete - Stack Overflow 2), I would like to avoid any unnecessary memory usage when training my distributed model. As such, I switched to torch.multiprocessing in the hope that at least the model parameters will be shared among processes. Now, I would like to understand what and how are Tensors actually shared among child mp.Processes because it seems to me that the data within Tensors remains consistent throughout multiprocessing when these get passed to ‘args’ (I haven’t used mp.Queue in my tests yet), even when I’m simply importing the default ‘multiprocessing’ package, and despite omitting the ‘share_memory_()’ calls altogether? This makes me suspect the memory is not actually shared, but copied around a lot, just like using a Manager.dict() or so. Another ‘strange’ thing that I observed is that an ‘args’ Tensor’s ‘is_shared()’ method does return False outside the mp.Process startup when no calls to ‘share_memory_()’ are being made, but returns True within the child processes to which it was sent no matter what (again, even if torch.multiprocessing is never imported). What is more, despite my best efforts in inspecting ‘data_ptr’ or the backend ‘storage’ objects, I was not able to confirm whether my Tensor data is ever truly shared or not. These aspects also beg my final question: How would I check whether the parameters of my model are actually living in a true shared memory space or not? Thanks for your time!
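One way to check empirically whether a tensor's storage is truly shared rather than copied is to let a child process write into it and see whether the parent observes the change. A minimal sketch (note that torch.multiprocessing registers reducers that move CPU tensors sent through process args into shared memory, which would also explain is_shared() returning True inside the children):

```python
import torch
import torch.multiprocessing as mp

def child(t):
    t[0] = 42.0                      # write into the (supposedly shared) storage

if __name__ == '__main__':
    mp.set_start_method('spawn', force=True)
    t = torch.zeros(4)
    t.share_memory_()                # move the storage into shared memory explicitly
    p = mp.Process(target=child, args=(t,))
    p.start()
    p.join()
    print(t)                         # tensor([42., 0., 0., 0.]) only if the memory is shared
```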
st182185
Hi, I am trying to implement basic support for having Persistent Memory (Intel Optane PMem) instead of DRAM as the underlying storage of a Tensor. PMem memory regions are basically also just regular pointers. However, I am currently not able to understand where the actual memory of a Tensor is allocated. Is there any documentation on that? I found that there are several libraries that are actually backing tensors (C10, ATen), but without a first introduction it’s hard to dive into that level of detail. For example, if I torch.load() a tensor from disk, how does the call stack work? Thank you so much for giving me initial pointers. Best, Maximilian
st182186
When I run my code, I get the runtime error below: CUDA out of memory. Tried to allocate 88.00 MiB (GPU 0; 8.00 GiB total capacity; 6.04 GiB already allocated; 0 bytes free; 6.17 GiB reserved in total by PyTorch) I don't understand why it says 0 bytes free. Shouldn't I have at least 6.17 - 6.04 = 0.13 GiB free? I suspect it may not really be a memory shortage problem.
st182188
You are trying to allocate 88MB. ~130MB are in the cache, but are not a contiguous block, so cannot be used to store the needed 88MB. 0B are free, which shows that the rest of your device memory is used to store the CUDA context, memory for other applications etc.
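To verify this yourself, the driver-level free/total numbers can be compared against what PyTorch has allocated and reserved; the remaining gap is roughly the CUDA context plus memory used by other applications on the same device. A small sketch (torch.cuda.mem_get_info() requires a reasonably recent PyTorch release):

import torch

free, total = torch.cuda.mem_get_info()     # driver-level view of the device
allocated = torch.cuda.memory_allocated()   # bytes held by live tensors
reserved = torch.cuda.memory_reserved()     # bytes held by the caching allocator

print(f"total      {total / 1024**3:.2f} GiB")
print(f"free       {free / 1024**3:.2f} GiB")
print(f"reserved   {reserved / 1024**3:.2f} GiB")
print(f"allocated  {allocated / 1024**3:.2f} GiB")
# total - free - reserved roughly corresponds to the CUDA context plus
# memory used by other processes on the same GPU.
print(f"context + other usage ~ {(total - free - reserved) / 1024**3:.2f} GiB")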
st182189
Hi, I have a big issue with memory. I am developing a large application with a GUI for testing and optimizing neural networks. The main program shows the GUI, while training runs in a thread. In my app I need to train many models with different parameters, one after another, so I create a new model for each attempt. When one training run finishes I want to delete that model and train a new one, but I cannot delete the old model. I tried something like this:

del model
torch.cuda.empty_cache()

but the GPU memory doesn't change. Then I tried:

model.cpu()
del model

When I move the model to the CPU, the GPU memory is freed, but the CPU memory increases. With each training attempt the memory keeps growing, and only when I close and restart the app is all of the memory freed. Is there a way to delete a model permanently from the GPU or CPU?
st182190
I cannot reproduce the issue using a recent master build:

import torch
from torchvision import models

model = models.resnet18()
print(torch.cuda.memory_allocated())
> 0
model.cuda()
print(torch.cuda.memory_allocated())
> 46861312
del model
print(torch.cuda.memory_allocated())
> 0
st182191
I have this issue as well, but if you use model = model.to('cuda'), the GPU memory will be freed when you delete the model.
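Putting the suggestions in this thread together, one pattern for training many models sequentially in a single process could look like the sketch below; make_model and train_one are placeholders for your own code. The key steps are dropping every reference that keeps CUDA tensors alive (model, optimizer state, cached outputs), forcing a garbage-collection pass, and only then clearing the CUDA cache:

import gc
import torch

def run_experiments(configs):
    for cfg in configs:
        model = make_model(cfg).to("cuda")   # placeholder model factory
        optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

        train_one(model, optimizer, cfg)     # placeholder training loop

        # Drop all references, collect, then release cached blocks.
        del model, optimizer
        gc.collect()
        torch.cuda.empty_cache()
        print("allocated after cleanup:", torch.cuda.memory_allocated())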
st182192
It seems that reserved memory + allocated memory is bigger than the memory shown in nvidia-smi. My allocated memory is 3199MB and my reserved memory is 8766MB, which adds up to 11965MB, but the memory shown in nvidia-smi is 11131MB. There is another question as well: I hit an OOM when trying to allocate about 1000MB even though there was still about 8766MB of reserved space. The reserved space is designed to be reused, so why is there still an OOM error?
st182193
The reserved memory reports the allocated memory and the cached memory, so you shouldn't sum them together. Take a look at the memory management docs for more information.
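To make the relationship concrete: memory_reserved() already includes memory_allocated(), so the reusable cache is their difference, and torch.cuda.memory_summary() prints a full breakdown. A quick sketch:

import torch

x = torch.randn(1024, 1024, device="cuda")

allocated = torch.cuda.memory_allocated()
reserved = torch.cuda.memory_reserved()
print(f"allocated: {allocated / 1024**2:.1f} MiB")   # live tensors
print(f"reserved:  {reserved / 1024**2:.1f} MiB")    # allocated + cached blocks
print(f"cached:    {(reserved - allocated) / 1024**2:.1f} MiB")

# Detailed per-pool statistics from the caching allocator.
print(torch.cuda.memory_summary(abbreviated=True))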
st182194
@ptrblck Thank you for your reply! I have checked the docs, but I am still confused about the memory. I hit an OOM error:

RuntimeError: CUDA out of memory. Tried to allocate 1.53 GiB (GPU 0; 11.75 GiB total capacity; 5.97 GiB already allocated; 449.69 MiB free; 8.99 GiB reserved in total by PyTorch).

It seems that I still have about 3 GiB (8.99 GiB - 5.97 GiB) of cached memory available for new tensors, yet the allocation failed. So I guess the 1.53 GiB of tensors might have been returned to free memory and then moved back into the cache, or the cached memory is occupied by PyTorch's internal context management; but why would it take up so much space? Did I do anything wrong? Thanks!
st182195
The error message points out that only 449.69 MiB are free; the rest of the reserved memory might be scattered and unusable due to e.g. memory fragmentation. The CUDA context is not captured by the allocated or reserved memory stats, as PyTorch doesn't track its memory usage.
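If fragmentation is indeed the issue, the error message's hint about max_split_size_mb can be tried by configuring the allocator before CUDA is initialized; the value 128 below is only an illustrative choice, not a recommendation, and exporting the variable in the launching shell is the safest option:

import os

# Must be set before the first CUDA allocation; setting it before importing
# torch (as here) avoids any ordering questions.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # noqa: E402

x = torch.randn(4096, 4096, device="cuda")
print(torch.cuda.memory_reserved())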
st182196
Questions and Help
The demo code is listed below. Why, after I call torch.cuda.empty_cache(), can only part of the reserved space generated by the forward pass of Net be released? In the second epoch, the reserved space grows further even though it performs the same operations as the first epoch. How does the reserved memory work, and why is it so much larger than the allocated memory?

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv = torch.nn.Conv2d(3, 28, (3, 3))  # in, out, kernel size
        self.maxpool1 = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.maxpool2 = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)

    def forward(self, x):
        x = self.conv(x)
        x = self.maxpool1(x)
        x = self.maxpool2(x)
        return x

def train():
    net = Net().to('cuda')
    for i in range(5):
        with torch.no_grad():
            frames_batches = torch.randn(512, 3, 224, 224).to('cuda')  # before forward 1st | memory_reserved: 296MB | memory allocated: 294MB
            pred = net(frames_batches)  # after forward 1st epoch | memory_reserved: 5014MB | memory allocated: 465MB
                                        # after forward 2nd epoch | memory_reserved: 5688MB | memory allocated: 465MB
            torch.cuda.empty_cache()  # after empty_cache 1st epoch | memory_reserved: 1644MB | memory allocated: 465MB

if __name__ == '__main__':
    train()
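One way to investigate this kind of question is to track peak statistics rather than instantaneous values: the forward pass needs large temporary buffers, which is why the reserved pool grows well beyond what remains allocated afterwards. A small sketch using the peak-memory helpers:

import torch

torch.cuda.reset_peak_memory_stats()

with torch.no_grad():
    x = torch.randn(512, 3, 224, 224, device="cuda")
    y = torch.nn.Conv2d(3, 28, 3).cuda()(x)

print("current allocated:", torch.cuda.memory_allocated() / 1024**2, "MiB")
print("peak allocated:   ", torch.cuda.max_memory_allocated() / 1024**2, "MiB")
print("current reserved: ", torch.cuda.memory_reserved() / 1024**2, "MiB")
print("peak reserved:    ", torch.cuda.max_memory_reserved() / 1024**2, "MiB")
# The reserved pool is sized for the peak, which is why it stays much larger
# than the allocation that remains after the intermediates are freed.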
st182197
Hello, We noticed that with a model in NHWC memory layout, and an explicit conversion around BatchNorm2d, the backward() call seems to have some interesting output (measured with hooks). The gradient wrt input for the last layer of the 2 layer model is NHWC, but the gradient wrt output for the first layer is NCHW. The issue does not exist if the explicit conversions are omitted and BatchNorm2d is allowed to be in NHWC. Can anyone help explain why the gradient tensor memory layout seems to be switched in this reproducer ? Reproducer import torch def is_channels_last(tensor): if tensor is not None: return tensor.is_contiguous(memory_format=torch.channels_last) else: return None def check(tensor): if isinstance(tensor, tuple): res = "" for i, t in enumerate(tensor): res += "{}".format(is_channels_last(t)) if i < len(tensor) - 1: res += ", " elif tensor is None: res = None else: res = is_channels_last(tensor) return res def fw_hook(module, input, output): if isinstance(module, torch.nn.Conv2d): print("Debug: forward module {}, input {}, output {}, weight {}, bias {}".format( module, check(input), check(output), check(module.weight), check(module.bias))) else: print("Debug: forward module {}, input {}, output {}".format( module, check(input), check(output))) def bw_hook(module, grad_wrt_input, grad_wrt_output): if isinstance(module, torch.nn.Conv2d): print("Debug: backward module {}, grad_wrt_input {}, grad_wrt_output {}, weight {}, bias {}".format( module, check(grad_wrt_input), check(grad_wrt_output), check(module.weight), check(module.bias))) else: print("Debug: backward module {}, grad_wrt_input {}, grad_wrt_output {}".format( module, check(grad_wrt_input), check(grad_wrt_output))) class BnAddRelu(torch.nn.BatchNorm2d): def __init__(self, planes, fuse_relu=False, bn_group=1): super(BnAddRelu, self).__init__(planes) self.fuse_relu_flag = fuse_relu def forward(self, x, z=None): x = x.to(memory_format=torch.contiguous_format) out = super().forward(x) if z is not None: z = z.to(memory_format=torch.contiguous_format) out = out.add_(z) if self.fuse_relu_flag: out = out.relu_() out = out.to(memory_format=torch.channels_last) return out class Model(torch.nn.Module): def __init__(self): super(Model, self).__init__() self.conv = torch.nn.Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) self.bn = BnAddRelu(128) def forward(self, input): out = self.conv(input) out = self.bn(out) return out model = Model().to(memory_format=torch.channels_last).cuda() for _, l in model._modules.items(): l.register_forward_hook(fw_hook) l.register_backward_hook(bw_hook) input = torch.rand(120, 128, 38, 38, dtype=torch.float, device="cuda").to(memory_format=torch.channels_last) output = model(input) output.backward(torch.rand_like(output)) Output $ python test_conv_bw.py Debug: forward module Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False), input True, output True, weight True, bias None /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py:1033: UserWarning: Using a non-full backward hook when the forward contains multiple autograd Nodes is deprecated and will be removed in future versions. This hook will be missing some grad_input. Please use register_full_backward_hook to get the documented behavior. 
warnings.warn("Using a non-full backward hook when the forward contains multiple autograd Nodes " Debug: forward module BnAddRelu(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True), input True, output True Debug: backward module BnAddRelu(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True), grad_wrt_input **True**, grad_wrt_output True Debug: backward module Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False), grad_wrt_input None, True, grad_wrt_output **False**, weight True, bias None
st182198
Hi, I have a memory leakage problem with the code below:

import torch
import torch.nn as nn


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.c, self.k, self.p, self.x = 256, 3, 1, 2304
        self.n, self.h, self.w = 0, 0, 0
        self.unfold = nn.Unfold(self.k, padding=self.p)
        self.conv = nn.Conv2d(self.c * self.k * self.k, self.x, 1, padding=0, groups=self.c * self.k * self.k)

    def forward(self, x: torch.Tensor):
        self.n, self.h, self.w = x.size(0), x.size(2), x.size(3)
        c1 = x.view(self.n, self.c, 1, self.h, self.w)
        c2 = self.unfold(x)
        c2 = c2.view(self.n, self.c, self.k * self.k, self.h, self.w)
        out = c1 + c2
        out = out.view(self.n, self.c * self.k * self.k, self.h, self.w)
        return self.conv(out)


if __name__ == '__main__':
    try:
        net = Net().cuda()
        x = torch.randn((1024, 256, 32, 32)).cuda()
        out = net(x)
        print(out)
    except Exception as ex:
        print(f' > CUDA memory: {torch.cuda.memory_allocated() / 1024 ** 3} GiB')
        del x, net
        print(ex)
        print(f' > CUDA memory: {torch.cuda.memory_allocated() / 1024 ** 3} GiB')

Here I define a network with a depth-wise convolution as the last layer. I'm using a 24GB RTX 3090, so when it reaches self.conv(out) it will certainly run out of CUDA memory due to the large tensors. We can then catch the exception and delete the references to x and net; the expected CUDA memory after del x, net should be zero, but I got 9 GB instead, which suggests a memory leak when the depth-wise convolution fails to allocate its output. If I change the group count of self.conv to something else, the CUDA memory correctly drops to zero.
st182199
I get the expected result:

> CUDA memory: 19.000017166137695 GiB
CUDA out of memory. Tried to allocate 9.00 GiB (GPU 0; 23.69 GiB total capacity; 19.00 GiB already allocated; 1.48 GiB free; 19.00 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
> CUDA memory: 0.0 GiB

but I think these "hacks" might easily fail (as seen in your setup) and would thus recommend setting a proper batch size before starting the training. In any case, you could check whether deleting out (if it was even created) solves your issue.
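As a concrete version of that last suggestion, the cleanup could also drop any partially created output and force a garbage-collection pass before checking the allocator again. The sketch below reuses Net from the snippet above and only restructures the exception handling; whether this actually returns the memory in the grouped-conv failure case is exactly what the original post is probing, so treat it as a diagnostic rather than a fix:

import gc
import torch

if __name__ == '__main__':
    net = Net().cuda()                      # Net as defined in the snippet above
    x = torch.randn((1024, 256, 32, 32)).cuda()
    out = None
    try:
        out = net(x)
    except RuntimeError as ex:
        print(ex)
    finally:
        del out, x, net                     # drop every reference, even on failure
        gc.collect()
        torch.cuda.empty_cache()
        print(f' > CUDA memory: {torch.cuda.memory_allocated() / 1024 ** 3} GiB')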