st182300
Ah, I wasn’t sure how transforms were written. It has the same results as the lambda approach, though: it won’t pickle if defined locally and will loop endlessly when placed globally. I was thinking the easiest way would be to subclass ImageFolder and alter the format before returning the item, e.g.:

class MyImageFolder(datasets.ImageFolder):
    def __getitem__(self, index: int) -> Tuple[Any, Any]:
        sample, target = super().__getitem__(index)
        sample = sample.to(memory_format=torch.channels_last)
        return sample, target

But that hangs in the exact same manner as well. The model trains fine (if with a big penalty to performance) if the conversion is done on the input within the training loop, or left implicit (which oddly has a lower performance hit, but is still about 4x slower than not changing the channel format).
st182301
I worked out (part of) the problem whilst working on another dataset: the endless looping appears to be a multiprocessing issue; removing the num_workers argument from the DataLoader moves past that step. That, however, introduces a different error: channels_last only applies to batched 4D tensors, so if the unsqueeze happens a 5D tensor is fed to the network, and if it doesn’t then no reformatting can happen. At the DataLoader level, then, it should be a custom collate function that alters the batch memory format?
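One way to do the batch-level conversion, as a rough sketch (the dataset and DataLoader arguments here are placeholders, not from the original posts), is a module-level collate_fn that builds the usual batch with the default collation and then converts only the image batch:

import torch
from torch.utils.data import DataLoader
from torch.utils.data.dataloader import default_collate

def channels_last_collate(batch):
    # Build the usual (N, C, H, W) batch first, then change only its memory layout.
    images, targets = default_collate(batch)
    return images.contiguous(memory_format=torch.channels_last), targets

# Hypothetical usage; `dataset` would be an ImageFolder-style dataset.
# loader = DataLoader(dataset, batch_size=32, num_workers=4,
#                     collate_fn=channels_last_collate)

Since the function lives at module level, it should also pickle cleanly when num_workers > 0.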
st182302
Hi, I am using the channels_last format together with amp for training. From PyTorch 1.8, when I detach a tensor that is in channels_last format and feed it to another model, PyTorch breaks with an error. Here’s an example snippet:

import torch
import torch.nn as nn
import torch.cuda.amp as amp

device = torch.device('cuda')
dtype = torch.float32
memory_format = torch.channels_last
# memory_format = torch.contiguous_format

model1 = nn.Conv2d(3, 3, 1, 1).to(device=device, dtype=dtype, non_blocking=True, memory_format=memory_format)
model2 = nn.Conv2d(3, 3, 1, 1).to(device=device, dtype=dtype, non_blocking=True, memory_format=memory_format)
input = torch.randn(1, 3, 4, 4).to(device, dtype=dtype, memory_format=memory_format)

with amp.autocast():
    out1 = model1(input)
    out2 = model2(out1.detach())

Here’s the error message:

RuntimeError: set_sizes_and_strides is not allowed on a Tensor created from .data or .detach().
If your intent is to change the metadata of a Tensor (such as sizes / strides / storage / storage_offset)
without autograd tracking the change, remove the .data / .detach() call and wrap the change in a
`with torch.no_grad():` block. For example, change:
    x.data.set_(y)
to:
    with torch.no_grad():
        x.set_(y)

If I change the memory format to torch.contiguous_format, it works fine. Is detaching a channels_last tensor forbidden? As there are no resizing/reshaping operations in the second model, the error message seems irrelevant. A related issue is here: https://github.com/pytorch/pytorch/issues/55301
PyTorch version: 1.8.1
st182303
I’m trying to run the ‘Play Mario with RL’ code in a Jupyter notebook right now. The cache function had to be modified due to an error thrown when using the original code verbatim. On the CPU it has no problems reaching episode 1000+. However, running this code on my local GPU always causes the GPU’s memory to run out around episode 560. Commenting out the cache function seems to stop the memory from filling up, so I’m guessing the problem may be in there. Does anything seem wrong at a quick glance? Thanks

class Mario:
    def __init__(self, state_dim, action_dim, save_dir, checkpoint=None):
        self.state_dim = state_dim
        self.action_dim = action_dim
        self.memory = deque(maxlen=100000)
        self.batch_size = 64

        self.exploration_rate = 1
        self.exploration_rate_decay = 0.99999975
        self.exploration_rate_min = 0.1
        self.gamma = 0.9

        self.curr_step = 0
        self.burnin = 1e5       # min. experiences before training
        self.learn_every = 3    # no. of experiences between updates to Q_online
        self.sync_every = 1e4   # no. of experiences between Q_target & Q_online sync
        self.save_every = 5e5   # no. of experiences between saving NN
        self.save_dir = save_dir

        # NN to predict the most optimal action - implemented in the Learn section
        self.net = DQN(self.state_dim, self.action_dim).float()
        self.net = self.net.to(device)
        if checkpoint:
            self.load(checkpoint)

        self.optimizer = torch.optim.Adam(self.net.parameters(), lr=0.00025)
        self.loss_fn = torch.nn.SmoothL1Loss()

    ... [CODE OMITTED] ...

    def cache(self, state, next_state, action, reward, done):
        """
        Store the experience to self.memory (replay buffer)

        Inputs:
        state (LazyFrame), next_state (LazyFrame), action (int), reward (float), done (bool)
        """
        state = torch.FloatTensor(np.array(state)).to(device)
        next_state = torch.FloatTensor(np.array(next_state)).to(device)
        action = torch.LongTensor([action]).to(device)
        reward = torch.DoubleTensor([reward]).to(device)
        done = torch.BoolTensor([done]).to(device)

        self.memory.append((state, next_state, action, reward, done,))

    ... [CODE OMITTED] ...
st182304
I cannot see any obvious issues in the posted code snippet. However, it also doesn’t show the overall usage. In the cache method you are appending device tensors to the dequeu and see an increase in memory. Are you removing these objects properly from self.memory and are not storing any other references to these tensors?
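One pattern that keeps the GPU footprint of the replay buffer bounded (an illustrative sketch, not the tutorial’s code; device refers to the same global device as in the snippet above, and recall mirrors the tutorial’s sampling method) is to store experiences as CPU tensors and move only the sampled mini-batch to the GPU:

import random
import numpy as np
import torch

def cache(self, state, next_state, action, reward, done):
    # Keep the replay buffer on the CPU so GPU memory stays bounded.
    state = torch.as_tensor(np.array(state), dtype=torch.float32)
    next_state = torch.as_tensor(np.array(next_state), dtype=torch.float32)
    action = torch.tensor([action], dtype=torch.long)
    reward = torch.tensor([reward], dtype=torch.float32)
    done = torch.tensor([done], dtype=torch.bool)
    self.memory.append((state, next_state, action, reward, done))

def recall(self):
    # Move only the sampled mini-batch to the GPU when it is actually needed.
    batch = random.sample(self.memory, self.batch_size)
    state, next_state, action, reward, done = map(torch.stack, zip(*batch))
    return tuple(t.to(device) for t in (state, next_state, action, reward, done))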
st182305
Hi, I encountered a weird situation for the same code, in PyTorch 1.1 to 1.6, the code takes almost constant memory, but in PyTorch 1.7 the memory consumptions keep increasing. Could you help out check what’s the reason? Is the mechanism to free GPU memory experiencing a big change? Thanks a lot. I defined an autograd function, where the backward function creates local variables, performs local backward, then deletes intermediate tensors. The link to the repo is GitHub - juntang-zhuang/torch_ACA: repo for paper: Adaptive Checkpoint Adjoint (ACA) method for gradient estimation in neural ODE 2, and can be reproduced by running “python cifar_classification/train_mem.py” Here’s the autograd function import torch import torch.nn as nn from .ode_solver_endtime import odesolve_endtime from torch.autograd import Variable import copy __all__ = ['odesolve_adjoint'] def flatten_params(params): flat_params = [p.contiguous().view(-1) for p in params] return torch.cat(flat_params) if len(flat_params) > 0 else torch.tensor([]) def flatten_params_grad(params, params_ref): _params = [p for p in params] _params_ref = [p for p in params_ref] flat_params = [p.contiguous().view(-1) if p is not None else torch.zeros_like(q).view(-1) for p, q in zip(_params, _params_ref)] return torch.cat(flat_params) if len(flat_params) > 0 else torch.tensor([]) class Checkpointing_Adjoint(torch.autograd.Function): @staticmethod def forward(ctx, *args): z0, func, flat_params, options= args[:-3], args[-3], args[-2], args[-1] if isinstance(z0,tuple): if len(z0) == 1: z0 = z0[0] ctx.func = func state0 = func.state_dict() ctx.state0 = state0 if isinstance(z0, tuple): ctx.z0 = tuple([_z0.data for _z0 in z0]) else: ctx.z0 = z0.data ctx.options = options with torch.no_grad(): solver = odesolve_endtime(func, z0, options, return_solver=True, regenerate_graph = False) #solver.func.load_state_dict(state0) ans, steps = solver.integrate(z0, return_steps=True) ctx.steps = steps #ctx.ans = ans return ans @staticmethod def backward(ctx, *grad_output): if isinstance(ctx.z0, tuple): z0 = tuple([Variable(_z0, requires_grad=True) for _z0 in ctx.z0]) else: z0 = Variable(ctx.z0, requires_grad=True) options = ctx.options func = ctx.func f_params = func.parameters() steps, state0 = ctx.steps, ctx.state0 func.load_state_dict(state0) if isinstance(z0, tuple) or isinstance(z0, list): use_tuple = True else: use_tuple = False z = z0 solver = odesolve_endtime(func, z, options, return_solver=True) # record inputs to each step inputs = [] inputs.append(z) #t0 = solver.t0 t_current = solver.t0 y_current = z for point in steps: solver.neval += 1 # print(y_current.shape) with torch.no_grad(): y_current, error, variables = solver.step(solver.func, t_current, point - t_current, y_current, return_variables=True) t_current = point if not use_tuple: inputs.append(Variable(y_current.data, requires_grad = True)) else: inputs.append([Variable(_y.data, requires_grad=True) for _y in y_current]) if use_tuple: solver.delete_local_computation_graph(list(error) + list(variables)) else: solver.delete_local_computation_graph([error] + list(variables)) # delete the gradient directly applied to the original input # if use tuple, input is directly concatenated with output grad_output = list(grad_output) if use_tuple: input_direct_grad = grad_output[0][0,...] grad_output[0] = grad_output[0][1,...] 
grad_output = tuple(grad_output) ################################### #print(steps) # note that steps does not include the start point, need to include it steps = [options['t0']] + steps # now two list corresponds, steps = [t0, teval1, teval2, ... tevaln, t1] # inputs = [z0, z1, z2, ... , z_out] ################################### inputs.pop(-1) steps2 = copy.deepcopy(steps) steps2.pop(0) steps.pop(-1) # steps = [t0, eval1, eval2, ... evaln, t1], after pop is [t0, eval1, ... evaln] # steps2 = [t0, eval1, eval2, ... evaln, t1], after pop is [eval1, ... evaln, t1] # after reverse, they are # steps = [evaln, evaln-1, ... eval2, eval1, t0] # steps2 = [t1, evaln, ... eval2, eval1s] param_grads = [] inputs.reverse() steps.reverse() steps2.reverse() assert len(inputs) == len(steps) == len(steps2), print('len inputs {}, len steps {}, len steps2 {}'.format(len(inputs), len(steps), len(steps2))) for input, point, point2 in zip(inputs, steps, steps2): if not use_tuple: input = Variable(input, requires_grad = True) else: input = [Variable(_, requires_grad = True) for _ in input] input = tuple(input) with torch.enable_grad(): #print(type(z)) y, error, variables = solver.step(solver.func, point, point2 - point, input, return_variables=True) param_grad = torch.autograd.grad( y, f_params, grad_output, retain_graph=True) grad_output = torch.autograd.grad( y, input, grad_output) param_grads.append(param_grad) if use_tuple: solver.delete_local_computation_graph(list(y) + list(error) + list(variables)) else: solver.delete_local_computation_graph([y, error] + list(variables)) # sum up gradients w.r.t parameters at each step, stored in out2 out2 = param_grads[0] for i in range(1, len(param_grads)): for _1, _2 in zip([*out2], [*param_grads[i]]): _1 += _2 # attach direct gradient w.r.t input if use_tuple: grad_output = list(grad_output) # add grad output to direct gradient if input_direct_grad is not None: grad_output[0] = input_direct_grad + grad_output[0]#torch.stack((input_direct_grad, grad_output[0]), dim=0) grad_output = tuple(grad_output) out = tuple([*grad_output] + [None, flatten_params_grad(out2, func.parameters()), None]) return out #return out1[0], out1[1], None, flatten_params_grad(out2, func.parameters()), None def odesolve_adjoint(func, z0, options = None): flat_params = flatten_params(func.parameters()) if isinstance(z0, tuple) or isinstance(z0, list): zs = Checkpointing_Adjoint.apply(*z0, func, flat_params, options) else: zs = Checkpointing_Adjoint.apply(z0, func, flat_params, options) return zs The definition of function to delete local variable is for i in inputs: i.set_() del i torch.cuda.empty_cache() return```
st182306
Hello, my PyTorch program is running slowly. I located the bottleneck with this kind of timing code:

from timeit import default_timer as timer
t1 = timer()
# ... some code here ...
t2 = timer()
print(t2 - t1)

After running this, I found the bottleneck:

for batch_idx, (x1, x2, y) in enumerate(train_loader.get_augmented_iterator(model.training)):
    x1 = torch.Tensor(x1).to(device)  # slow in this line of code
    x1 = x1.transpose(1, 3)

where get_augmented_iterator is a function I defined to load the data. The first line, x1 = torch.Tensor(x1).to(device), takes ~0.4s to execute, while get_augmented_iterator() takes about ~0.1s and includes some preprocessing steps I have to do at this stage. Suspecting the problem is not really in this line itself, I researched a little and found that if I add torch.cuda.empty_cache(), this line executes in normal time. However, torch.cuda.empty_cache() itself takes ~0.4s to execute, so I didn’t really solve the problem, but the cache must somehow be involved. I tried several other projects where very similar dataloaders were used, but I couldn’t reproduce the problem with their code. So my question is: how could this relate to an error in my code, and how can I solve it?
st182307
Here is an update: I ran it several more times and found that if I comment out the loss.backward() line, the speed is normal. Given that I obviously have to keep this line, how can I make the training speed normal? I believe the data somehow overflowed, so PyTorch has to clear the cache at every iteration. But why and how did that cause this problem?
st182308
CUDA operations are executed asynchronously, so you need to synchronize the code manually before starting and stopping the timers via torch.cuda.synchronize(). Based on your current description, you are “moving” the accumulated time from one operation to the next blocking one by commenting them out. If you don’t synchronize the code, the next blocking operation will accumulate all previously executed (async) operations.
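For reference, a minimal synchronized timing pattern (the layer and input are placeholders) could look like this:

import time
import torch

def time_op(layer, x, num_iter=100):
    torch.cuda.synchronize()   # wait for previously queued (async) kernels
    tic = time.time()
    for _ in range(num_iter):
        _ = layer(x)
    torch.cuda.synchronize()   # wait for the timed kernels before reading the clock
    return time.time() - tic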
st182309
I came across this quirky thing when playing around with sparse tensors: a sparse tensor takes a lot more memory than the corresponding dense tensor.

Example code:

import torch
from pytorch_memlab import MemReporter

DEVICE = 'cuda:0'
n = 1000
c = .6

t1 = torch.randn(100, n, n).to(DEVICE)
t1[torch.rand_like(t1) > c] = 0
t2 = t1.to_sparse()

torch.cuda.empty_cache()
reporter = MemReporter()
reporter.report()

Output:

Element type    Size                Used MEM
-------------------------------------------------------------------------------
Storage on cuda:0
Tensor0         (100, 1000, 1000)   381.47M
Tensor2         (3, 60000715)       1.34G
Tensor2         (60000715,)         228.88M
-------------------------------------------------------------------------------
Total Tensors: 340002860    Used Memory: 1.94G
The allocated memory on cuda:0: 1.94G
Memory differs due to the matrix alignment or invisible gradient buffer tensors
-------------------------------------------------------------------------------

t2 occupies around 1.5GB compared to t1's 380MB.

Device info:
PyTorch version: 1.7.1
CUDA version: 11.0
Device: GeForce RTX 2070

Is this a bug in the sparse memory format? I have a large sparse tensor (~30% sparsity) and I would like to reduce the GPU memory usage. Any help in reducing memory usage is appreciated.
st182311
I think the memory footprint is expected as described here. For your sparsity, this would be the memory usage:

DEVICE = 'cuda:0'
n = 100
c = .6

t1 = torch.randn(100, n, n).to('cuda')
t1[torch.rand_like(t1) > c] = 0
t2 = t1.to_sparse()

print((t2.indices().nelement() * 8 + t2.values().nelement() * 4) / 1024**2)

Note that I reduced the number of elements, but it should be the same relative usage compared to the dense tensor.
st182312
I am trying to set up my dataloader to sample from multiple csv files within a directory, each of which contains a variable number of samples.
- Each sample is a row in one of the csv files.
- Each file is too big to load in as a single batch (~50k rows).
- Each file contains a different number of samples (between 30k and 60k).
- There are several thousand csv files in the folder.
- The entire training set is too large to hold in memory (around 200M samples).
I have looked at some of the examples on this forum and they include using torchvision.datasets.DatasetFolder; however, I think that assumes each csv file contains a single sample, which does not apply to my situation.
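One way to approach this, as a rough sketch (the file pattern, column layout, and one-row-at-a-time reading strategy are assumptions, not from the original post), is a map-style Dataset that builds a global index over (file, row) pairs:

import bisect
import glob
import pandas as pd
import torch
from torch.utils.data import Dataset

class MultiCsvDataset(Dataset):
    def __init__(self, pattern):
        self.files = sorted(glob.glob(pattern))
        # Count rows per file once, up front, and build cumulative offsets.
        counts = [sum(1 for _ in open(f)) - 1 for f in self.files]  # minus header
        self.offsets = []
        total = 0
        for c in counts:
            total += c
            self.offsets.append(total)
        self.total = total

    def __len__(self):
        return self.total

    def __getitem__(self, idx):
        # Map the global index to a (file, row) pair via the cumulative offsets.
        file_idx = bisect.bisect_right(self.offsets, idx)
        row_idx = idx - (self.offsets[file_idx - 1] if file_idx > 0 else 0)
        # Read a single row lazily; caching the currently open file would be faster.
        row = pd.read_csv(self.files[file_idx], skiprows=1 + row_idx, nrows=1, header=None)
        values = torch.tensor(row.values[0], dtype=torch.float32)
        return values[:-1], values[-1]   # assumes the last column is the label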
st182313
Hi, I am training my own model using the data pipeline provided by Background-Matting and during training the cache memory (CPU memory) increases stably and sometimes runs out at the end. So my process is often killed with bus error. And I try a dummy script to test the dataloader: data_config_train = {'reso': [512, 512], 'trimapK': [5, 5], 'noise': True} train_dataset = AdobeDataAffineHR("Data_adobe/Adobe_train_data_50.csv", data_config_train) train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=8, num_workers=0, shuffle=True) device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") for epoch in range(10): for i, data in enumerate(train_loader): fg, bg, alpha, image, bg_tr = data['fg'].to(device), data['bg'].to(device), data['alpha'].to(device),\ data['image'].to(device), data['bg_tr'].to(device) if (i+1)%50 == 0: print("{} iterations finished!".format(i+1)) print("{} epochs finished!".format(epoch+1)) And the AdobeDataAffineHR code is here: class AdobeDataAffineHR(Dataset): def __init__(self, csv_file, data_config, transform=None): frames = pd.read_csv(csv_file, sep=';') self.frames = np.array(frames) self.transform = transform self.resolution = data_config['reso'] self.trimapK = data_config['trimapK'] self.noise = data_config['noise'] def __len__(self): return len(self.frames) def __getitem__(self, idx): # try: # load cv2.setNumThreads(0) cv2.ocl.setUseOpenCL(False) fg = io.imread(self.frames[idx, 0]) alpha = io.imread(self.frames[idx, 1]) image = io.imread(self.frames[idx, 2]) back = io.imread(self.frames[idx, 3]) fg = cv2.resize(fg, dsize=(800, 800)) alpha = cv2.resize(alpha, dsize=(800, 800)) back = cv2.resize(back, dsize=(800, 800)) image = cv2.resize(image, dsize=(800, 800)) sz = self.resolution # random flip if np.random.random_sample() > 0.5: alpha = cv2.flip(alpha, 1) fg = cv2.flip(fg, 1) back = cv2.flip(back, 1) image = cv2.flip(image, 1) trimap = generate_trimap(alpha, self.trimapK[0], self.trimapK[1], False) # randcom crop+scale different_sizes = [(576, 576), (608, 608), (640, 640), (672, 672), (704, 704), (736, 736), (768, 768), (800, 800)] crop_size = random.choice(different_sizes) x, y = random_choice(trimap, crop_size) fg = safe_crop(fg, x, y, crop_size, sz) alpha = safe_crop(alpha, x, y, crop_size, sz) image = safe_crop(image, x, y, crop_size, sz) back = safe_crop(back, x, y, crop_size, sz) trimap = safe_crop(trimap, x, y, crop_size, sz) fg, alpha, image, back = fg.astype(np.uint8), alpha.astype(np.uint8), \ image.astype(np.uint8), back.astype(np.uint8) # Perturb Background: random noise addition or gamma change if self.noise: if np.random.random_sample() > 0.6: sigma = np.random.randint(low=2, high=6) mu = np.random.randint(low=0, high=14) - 7 back_tr = add_noise(back, mu, sigma) else: back_tr = skimage.exposure.rescale_intensity(back, out_range=(0, 255)) back_tr = skimage.exposure.adjust_gamma(back_tr, np.random.normal(1, 0.12)) back_tr = back_tr.astype(np.uint8) sample = {'image': to_tensor(image), 'fg': to_tensor(fg), 'alpha': to_tensor(alpha), 'bg': to_tensor(back), 'bg_tr': to_tensor(back_tr), 'trimap': to_tensor(trimap)} if self.transform: sample = self.transform(sample) return sample # except Exception as e: # print("Error loading: " + self.frames[idx, 3]) # print(e) And the functions used above are the same as the BackgroundMatting project. 
I run the dummy script and can see a fast increase of cache memory (about 1GB per 50 mini-batches). Also, I tried several libs (PIL, skimage, cv2 and imageio) to read images, but the phenomenon is the same. I suspect it is the file cache that causes this memory consumption, but what’s weird is that cache memory still increases after one epoch, when in theory all images should already be cached. I am using a conda virtual env on an Ubuntu system with 128 GB RAM, and the basic environment is listed below:
python 3.6
pytorch 1.1.0
numpy==1.17.0
opencv-python==3.4.5.20
pandas
Pillow==6.1
scikit-image==0.14.2
scipy==1.2.1
tqdm
tensorboardX
st182314
I have noticed a disturbing pattern with pytorch (and other libraries too) but can’t get to the bottom of it! Any operation which involves bias term gives different (wrong results!) I will illustrate with Conv2d module here! Without Bias term! import torch import numpy as np torch.set_printoptions(precision=32) np.random.seed(23) # Creating random datapoints of Batch size=5 x_data = np.random.random(size=(5,1,1,1)).astype('float32') x = torch.as_tensor(x_data) # Defining Network net = torch.nn.Conv2d(in_channels=1,out_channels=1,kernel_size=1) # Initializing random weights and zero biases weight = np.random.random(size=(1,1,1,1)).astype('float32') bias = np.zeros(shape=(1,)).astype('float32') from collections import OrderedDict parameters = OrderedDict() parameters['weight'] = torch.as_tensor(weight) parameters['bias'] = torch.as_tensor(bias) # Loading the network parameters with custom W & B net.load_state_dict(parameters) output1 = net(x) With Bias term import torch import numpy as np torch.set_printoptions(precision=32) np.random.seed(23) # You can reinitialize x here or use the same x from before. Won't make a difference! x_data = np.random.random(size=(5,1,1,1)).astype('float32') x = torch.as_tensor(x_data) net = torch.nn.Conv2d(in_channels=1,out_channels=1,kernel_size=1) # Here too you can use the same weights as before, bias is randomly initialized! weight = np.random.random(size=(1,1,1,1)).astype('float32') bias = np.random.random(size=(1,)).astype('float32') from collections import OrderedDict parameters = OrderedDict() parameters['weight'] = torch.as_tensor(weight) parameters['bias'] = torch.as_tensor(bias) net.load_state_dict(parameters) output2 = net(x) # You can even print the actual values and notice the difference! print(output2-output1 == torch.as_tensor(bias)) My outputs are: tensor([[[[ True]]], [[[False]]], [[[False]]], [[[ True]]], [[[False]]]]) This True-False sequence is totally random. Choose a different value of seed or batch_size, you will get a different result. Although the error introduced by bias term is very small, but still it exists. Not able to know the reason is very frustrating. If you know the reason (and/or workaroud to avoid it), please help! Thanks! Note: I have observed the same randomness in other modules like BatchNorm and other libraries like PaddlePaddle. Also I am running this on CPU.
st182316
If you print the difference:

print(output2 - output1 - torch.as_tensor(bias))

you see that it is 1e-8ish, so this is an effect of numerical precision. With floating point, you cannot expect two “algebraically equivalent” computations (i.e. the maths say they should be the same, but they are calculated differently) to give exactly the same result; the rounding errors in the computation make them slightly different. This gets even more tricky when parallel computation is involved, which will not necessarily compute the same result when run twice because the order might be “random” (look for reproducible/deterministic in the PyTorch documentation; the CUDA atomicAdd instruction with floating point is perhaps the most common source of this in PyTorch). The most simple example perhaps is that 1e32+1-1e32 is not the same as 1e32-1e32+1.

Best regards

Thomas
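A quick way to see the ordering effect from that last example in plain Python (these are float64 operations):

print(1e32 + 1 - 1e32)   # 0.0 -> the +1 is lost, since the spacing between floats near 1e32 is ~2e16
print(1e32 - 1e32 + 1)   # 1.0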
st182317
Thanks for the quick reply and the nice explanation! So there is no way of getting exactly the same result with floating-point operations. Although the error due to numerical precision is negligible here, I wonder whether it affects the output of big models with billions of FLOPs.
st182318
Whether or not this is a problem depends a lot on the situation:
- Convolutions, linear layers and other reductions “average” errors in the input. If these are independent, then the errors won’t accumulate.
- For other operations, this is much more tricky, e.g. taking exponentials in softmax to get probabilities and then log (e.g. for KL divergence / “cross entropy” etc.) can run into precision problems, in particular with very small probabilities.
- Finally, while the ops in a single model run often are not problematic, this becomes different when you train models over millions or billions of iterations: in the course of a training run, these errors can accumulate quite dramatically, so the differences may mean that you cannot expect to reproduce the same weights as the end result when running the exact same code. This is quite annoying for reproducing scientific results etc.
st182319
Hi. I am studying memory bandwidth. I want to know how much data is read when I do a lookup on an embedding table. For example, when you look up three rows out of five, how do you work out how many bytes those three rows amount to? Also, I want to know how much memory bandwidth is consumed when looking up with the EmbeddingBag or Embedding modules. How can I check it? Thank you.
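As a back-of-the-envelope sketch (this just counts the bytes of the rows that are touched; it is not a measurement of actual DRAM traffic, which would need a GPU profiler such as Nsight Compute):

import torch
import torch.nn as nn

num_embeddings, embedding_dim = 5, 8
emb = nn.Embedding(num_embeddings, embedding_dim)   # float32 -> 4 bytes per element

indices = torch.tensor([0, 2, 4])                   # look up three of the five rows
out = emb(indices)

bytes_read = indices.numel() * embedding_dim * emb.weight.element_size()
print(bytes_read)   # 3 * 8 * 4 = 96 bytes of embedding data touched

The same counting applies to nn.EmbeddingBag, except that the gathered rows are additionally reduced (summed/averaged) per bag.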
st182320
I am performing some operations in Pytorch that require a very high degree of precision. I am currently using torch.float64 as my default datatype. All my numbers are real. I would like to use a float128 datatype (since memory is not an issue for this simulation). However, the only datatypes I find in the documentation is torch.complex128 where the real and imaginary parts are both 64 bits. Is there a datatype or a way I can use all the 128 bits for my real numbers? Thank you
st182322
Hi, We don’t have any support for float128 I’m afraid. I don’t actually think that this type is supported by cuda If you don’t care about cuda, I guess we could accept a PR adding this new data type and implementations for it but no core contributor is working on that atm. But you can open an issue with a feature request if you want to discuss this further.
st182323
Thanks a lot for your answer. For this problem, which is an optics simulation, I do not care about Cuda. I will open an issue with a feature request to discuss further
st182324
I am also working on an optimization problem for Molecular Dynamics simulations where I need high precision. After months of struggle to find out the bug in a seemingly correct code, I finally reached to the conclusion that the issue is with precision limitations of float64. It would be very helpful if support can be provided for float128 (equivalent to real128 in FORTRAN) in near future.
st182325
Hello, I have a question about memory layout and access efficiency. Let me explain the situation and then clarify the question. I have a certain number of nodes (points, vertices). Each point has its own embedding of a specific length (say 3 to 1024), so the tensor x has dimensions [batch_size, embedding_length, number_of_points] (by the way, my batch size is always 1). I also have an array of random indices called idx_p0 (like [0, 71, 234, 22, 0, ..., 2]). Using idx_p0, I do random access into x:

y = x[:, :, idx_p0]

Here comes the question: since I do random access into x, this causes memory accesses (towards the register file of the GPU), and I want to do this efficiently. Is it wise to change the layout of x via x_bar = x.transpose(2, 1).contiguous(), so that x_bar becomes [batch_size, number_of_points, embedding_length], and then do the random access as y = x_bar[:, idx_p0, :]? I want to know how PyTorch stores a 3-dimensional tensor. And again, my batch_size is usually only 1. Thank you.
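A quick way to see how PyTorch lays out a 3D tensor (row-major / C order, so the last dimension is contiguous) and what transpose(...).contiguous() changes, as a small sketch with made-up sizes:

import torch

x = torch.randn(1, 64, 1000)           # [batch, embedding_length, num_points]
print(x.stride())                       # (64000, 1000, 1) -> each embedding channel is contiguous over points

x_bar = x.transpose(2, 1).contiguous()  # [batch, num_points, embedding_length]
print(x_bar.stride())                   # (64000, 64, 1) -> each point's embedding is contiguous

idx = torch.randint(0, 1000, (4096,))
y1 = x[:, :, idx]        # gathers strided columns of x
y2 = x_bar[:, idx, :]    # gathers contiguous rows of length embedding_length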
st182326
Is there a way to know whether a tensor will fit into the remaining GPU ram, before creating it?
st182327
I’ve been trying to figure out the same thing. Recent paper “DNNMem” apparently can analyze a model file to figure out its size BEFORE loading into memory, but they haven’t released any code. Instead, what I’ve been doing at least is adding a lot of exception handling to respond to CUDA OOMs.
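For reference, a rough sketch of that kind of OOM handling (batch-halving is just one possible fallback strategy; the model and batch are placeholders):

import torch

def forward_with_fallback(model, batch):
    try:
        return model(batch)
    except RuntimeError as e:
        if "out of memory" not in str(e):
            raise
        # Free cached blocks and retry on two half-sized chunks of the batch.
        torch.cuda.empty_cache()
        halves = torch.chunk(batch, 2, dim=0)
        return torch.cat([model(h) for h in halves], dim=0)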
st182328
I’ve tried to calculate free memory via total_memory - memory_allocated or total_memory - memory_reserved, and I’ve also tried to do it with pynvml; here is the code snippet:

import pynvml

def remaining_memory():
    pynvml.nvmlInit()
    gpu_handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    info = pynvml.nvmlDeviceGetMemoryInfo(gpu_handle)
    return info.free

None of them is accurate. I guess it has something to do with memory fragmentation: if the remaining space is not contiguous, you can’t fit a large tensor into it even if it shows there is enough free memory. I just want to know how PyTorch checks whether a tensor will fit in memory.
st182329
PyTorch’s data loader uses Python multiprocessing and each worker process gets a replica of the dataset. When the dataset is huge, this data replication leads to memory issues. Normally, multiple processes should use shared memory to share data (unlike threads). I wonder if there is an easy way to share the common data across all the data-loading worker processes in PyTorch. Maybe someone has already coded this (I could not find it yet). Thanks.
st182331
If you are lazily loading the data (which is the common use case, if you are dealing with large datasets), the memory overhead from the copies might be small in comparison to the overall memory usage in the script. That being said, you could try to use shared arrays as described here instead.
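A minimal sketch of the shared-memory idea (assuming the dataset fits in CPU RAM once; the tensor names and sizes are made up): tensors moved into shared memory before the workers start are read by all workers instead of being copied per process:

import torch
from torch.utils.data import Dataset, DataLoader

class SharedDataset(Dataset):
    def __init__(self, data, targets):
        # share_memory_() moves the storage into shared memory, so the
        # worker processes read the same pages instead of private copies.
        self.data = data.share_memory_()
        self.targets = targets.share_memory_()

    def __len__(self):
        return len(self.targets)

    def __getitem__(self, idx):
        return self.data[idx], self.targets[idx]

dataset = SharedDataset(torch.randn(10000, 128), torch.randint(0, 10, (10000,)))
loader = DataLoader(dataset, batch_size=64, num_workers=4)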
st182332
Hi, I’m experimenting with different memory layouts based on these two documents:
- Convolutional Layers User Guide (from NVIDIA)
- Channels Last Memory Format in PyTorch (from the official PyTorch docs)

I tried to compare the NCHW model with the NHWC model with the following script:

from time import time
import torch
import torch.nn as nn

def time_layer(layer, feature, num_iter):
    """Time the total time used in num_iter forwardings."""
    tic = time()
    for _ in range(num_iter):
        _ = layer(feature)
    print(time() - tic, "seconds")

N, C, H, W, K = 32, 1024, 7, 7, 1024  # params from the NVIDIA doc

# NCHW tensor & layer
a = torch.empty(N, C, H, W, device="cuda:0")
conv_nchw = nn.Conv2d(C, K, 3, 1, 1).to("cuda:0")

# NHWC tensor & layer
b = torch.empty(N, C, H, W, device="cuda:0", memory_format=torch.channels_last)
conv_nhwc = nn.Conv2d(C, K, 3, 1, 1).to("cuda:0", memory_format=torch.channels_last)

time_layer(conv_nchw, a, 1000)  # NCHW kernel & NCHW tensor
time_layer(conv_nchw, b, 1000)  # NCHW kernel & NHWC tensor
time_layer(conv_nhwc, b, 1000)  # NHWC kernel & NHWC tensor
time_layer(conv_nhwc, a, 1000)  # NHWC kernel & NCHW tensor

And I got the following output (results looked similar in many repeated runs):

0.9735202789306641 seconds  # NCHW kernel & NCHW tensor
2.213291645050049 seconds   # NCHW kernel & NHWC tensor
2.3461294174194336 seconds  # NHWC kernel & NHWC tensor
2.7654671669006348 seconds  # NHWC kernel & NCHW tensor

I’m using a TITAN RTX GPU, which is supposed to have Tensor Cores, and PyTorch 1.7.0+cu101, which supports the channels_last format. So it’s surprising to see that the fastest timing happens with the NCHW kernel & NCHW tensor combination (which wouldn’t be as surprising if I didn’t have Tensor Cores on my GPU, because I guess the NCHW format was the one that was historically optimized). It’s not so surprising for the NCHW kernel & NHWC tensor and NHWC kernel & NCHW tensor combinations, because mixing up the formats is certainly no good for the computation. However, why is NHWC kernel & NHWC tensor not the fastest combination, which is supposed to be the most optimized one with Tensor Cores? Am I doing the layout optimization correctly? Am I missing anything?

Follow-up question: instead of running all 4 benchmarks in a script, I executed the 4 lines in the Python interpreter interactively, line by line, and got (results looked similar in many repeated runs):

>>> time_layer(conv_nchw, a, 1000)  # NCHW kernel & NCHW tensor
0.9541912078857422 seconds
>>> time_layer(conv_nchw, b, 1000)  # NCHW kernel & NHWC tensor
2.034724235534668 seconds
>>> time_layer(conv_nhwc, b, 1000)  # NHWC kernel & NHWC tensor
1.7101032733917236 seconds
>>> time_layer(conv_nhwc, a, 1000)  # NHWC kernel & NCHW tensor
1.9565918445587158 seconds

Why are the latter 3 timings shorter than those from the streamlined script? The only thing I can think of is that when executing in the interactive interpreter I left noticeable time gaps between two executions, while the script had no such gaps. Are there any nuances related to this? If you could answer I’d really appreciate the help!
st182333
@CDhere For benchmarking, should probably place a torch.cuda.synchronize(device="cuda:0") before you print. Also, the channels_last speedups for convs are most relevant for float16. You might want to set a, b and the convs dtype=torch.float16 If there is odd float32 behaviour like you see, could be a regression in 1.7 builds, I saw several issues that have been fixed in more recent NGC (20.12) container releases that build against newer versions of cuDNN/CUDA.
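Putting that advice together, a revised benchmark could look roughly like this (a sketch of the suggested changes, not measured numbers):

from time import time
import torch
import torch.nn as nn

N, C, H, W, K = 32, 1024, 7, 7, 1024
dtype = torch.float16  # Tensor Cores are mainly exercised with half precision

conv_nhwc = nn.Conv2d(C, K, 3, 1, 1).to(device="cuda:0", dtype=dtype, memory_format=torch.channels_last)
b = torch.empty(N, C, H, W, device="cuda:0", dtype=dtype, memory_format=torch.channels_last)

def time_layer_sync(layer, feature, num_iter):
    torch.cuda.synchronize(device="cuda:0")
    tic = time()
    for _ in range(num_iter):
        _ = layer(feature)
    torch.cuda.synchronize(device="cuda:0")  # wait for all queued kernels before printing
    print(time() - tic, "seconds")

time_layer_sync(conv_nhwc, b, 1000)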
st182334
I’m currently playing around with some transformers with variable batch sizes, and I’m running into pretty severe memory fragmentation issues, with CUDA OOM occurring at less than 70% GPU memory utilization. For example (see the GitHub link below for more extreme cases of failure at <50% GPU memory):

RuntimeError: CUDA out of memory. Tried to allocate 1.48 GiB (GPU 0; 23.65 GiB total capacity; 16.22 GiB already allocated; 111.12 MiB free; 22.52 GiB reserved in total by PyTorch)

This has been discussed before on the PyTorch forums [1, 2] and GitHub. Fragmentation is also mentioned briefly in the docs for torch.cuda.empty_cache():

empty_cache() doesn’t increase the amount of GPU memory available for PyTorch. However, it may help reduce fragmentation of GPU memory in certain cases. See Memory management for more details about GPU memory management.

According to this thread, we shouldn’t be relying on torch.cuda.empty_cache(). However (in my use case, at least), torch.cuda.empty_cache() is the difference between running code with GiBs of GPU memory to spare and CUDA OOM errors. None of these past threads (as far as I can tell) really led to conclusive mitigation strategies for this problem. My questions are:
- Are there “official” best practices for handling cases when GPU memory fragmentation is severely reducing effective usable GPU memory?
- Can we get more documentation about when torch.cuda.empty_cache() is the right tool for addressing OOM/fragmentation issues?
- As far as I can tell, moving all allocated tensors into main memory, emptying the cache, then moving them back to GPU memory a la this post seems a reasonable (if potentially expensive) strategy. Is this recommended?
st182335
In no way do I want to steal the focus from the super important issue you raise, but I am curious how your code would fare with the DeepSpeed integration, since it manages temporary memory allocations itself, and in my few experiments it needs 3-5x less GPU RAM with its ZeRO algorithms. That is, if you’re using the HF trainer; if you don’t use the latter, you can see from the code how to activate DeepSpeed in your own trainer. We are getting close to completing the integration, but you can already try my branch: github.com/huggingface/transformers, PR “[trainer] deepspeed integration” (huggingface:master ← stas00:ds, opened Dec 19, 2020 by stas00, +384 -23). There is a doc in it that explains how to activate it. You’d need to install DeepSpeed from its master. If you have any follow-up questions please ask in that PR thread so that we don’t derail this thread. We also recently added support for Sharded DDP via fairscale: if you use the HF trainer, just install fairscale and add --sharded_ddp to the training args and you should also see a huge improvement in memory utilization. Again, if you have questions let’s continue this discussion on the HF forums. Back to the topic of this thread.
st182336
Very interesting, thanks for sharing! I’m not using the HuggingFace trainer, but I went ahead and ran with deepspeed with my current code. I normally run: ENV=var python -m train --arg1 val1 --arg2 val2 I ran: ENV=var deepspeed ./train.py --arg1 val1 --arg2 val2 I don’t see any difference in runtime/memory usage, though. Is there anything else I need to do? My ds_report looks like this: JIT compiled ops requires ninja ninja .................. [OKAY] -------------------------------------------------- op name ................ installed .. compatible -------------------------------------------------- cpu_adam ............... [NO] ....... [OKAY] fused_adam ............. [NO] ....... [OKAY] fused_lamb ............. [NO] ....... [OKAY] [WARNING] sparse_attn requires one of the following commands '['llvm-config', 'llvm-config-9']', but it does not exist! sparse_attn ............ [NO] ....... [NO] transformer ............ [NO] ....... [OKAY] stochastic_transformer . [NO] ....... [OKAY] utils .................. [NO] ....... [OKAY] -------------------------------------------------- Did you pre-build the extension ops? I’ve got a CUDA Version mismatch that’s preventing me from pre-building them with DS_BUILD_OPS=1 pip install deepspeed.
st182337
Based on previous experience - continuing the ZeRO solution discussion here will derail your attempt to find core solutions. I highly recommend removing these follow ups and opening a separate thread on HF forums (and tag @stas) and we can continue there. But the quick answer is that based on your reply - you’re not using deepspeed, you’re just using its launcher. You need to: create a configuration file which sets up its ZeRO magic.See: DeepSpeed Configuration JSON - DeepSpeed 19 activate deepspeed Getting Started - DeepSpeed 5 use its wrapped model in training Getting Started - DeepSpeed 4 launch it as Getting Started - DeepSpeed 5 That’s when you will see huge improvements.
st182338
I’m running the following code snippet on PyTorch version 1.6.0:

import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(3, 16, 1),
            nn.ReLU(),
        )

    def forward(self, model_inputs):
        return self.layers(model_inputs)

device = torch.device('cuda:0')
torch.cuda.set_device(device)

model = Model()
model = model.to(device=device, memory_format=torch.channels_last)

x = torch.zeros((1, 3, 32, 32), dtype=torch.float, device=device)
x = x.contiguous(memory_format=torch.channels_last)

loss = model(x).mean()
loss.backward()

During backward, it generates the following message:

[W TensorIterator.cpp:924] Warning: Mixed memory format inputs detected while calling the operator. The operator will output channels_last tensor even if some of the inputs are not in channels_last format. (function operator())

If I remove the backward() call, then no warning is raised, and the same happens if I remove the ReLU from the model. Where may the formats not match? Can someone provide a deeper understanding of what’s happening here?
st182339
I don’t get this warning in the nightly release. Could you update to 1.7.1 or the nightly, as this warning might have been wrong and seems to be fixed?
st182340
I ran into an issue when using 16 K80 GPUs today. It turns out that the default behavior (peer-to-peer) in PyTorch will not support more than 8 devices, probably for very good reasons that I don’t understand. A proposed solution is to set NCCL_P2P_LEVEL=1 for the environment, but I’m not sure how to actually do that because I have never had to fiddle with NVIDA environment. Is there a PyTorch command that will let me set NCCL_P2P_LEVEL? Where do I change this? I’m building models in a Jupyter Notebook on a Windows machine.
st182341
You could set the env variable directly via NCCL_P2P_LEVEL=1 python script.py args. However, NCCL shouldn’t be supported on Windows, so you might need to use another backend.
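If you want to set it from inside a notebook rather than from the shell, a common pattern is to set the environment variable before any distributed/NCCL initialization happens:

import os
os.environ["NCCL_P2P_LEVEL"] = "1"  # must be set before NCCL is initialized

# e.g. later: torch.distributed.init_process_group(backend="nccl", ...)

As noted above, NCCL isn’t available on Windows, so there the usual choice is the gloo backend and this particular variable would not apply.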
st182342
Thanks @ptrblck! Is that the only way of training a model on more than 8 GPUs with PyTorch? Note that I have the model layers distributed across devices in a model parallel manner.
st182343
It might depend on your system and you can check the GPU connections via nvidia-smi topo -m. I’m able to train a ResNet on 16GPUs without any NCCL env variables.
st182344
Hi everyone, I am working on a Neuroevolution project. I am building a model which is composed by several DenseBlocks, however, the skip connections of each of the denseblocks may vary. I tell you all of this to put into context, that the forward pass is built dynamically. My code is the following: '''Networks class''' class CNN(nn.Module): def __init__(self, e, denseBlocks, links, classifier, init_weights = True): super(CNN, self).__init__() extraction = [] for block in denseBlocks: for layer in block: extraction += layer self.extraction = nn.Sequential(*extraction) self.classifier = nn.Sequential(*classifier) self.denseBlocks = denseBlocks self.links = links self.connections = e.second_level self.first_level = e.first_level self.nblocks = e.n_block def forward(self, x): '''Feature extraction''' for i in range(self.nblocks): block = self.denseBlocks[i] connections = self.connections[i] link = self.links[i] prev = -1 pos = 0 outputs = [] for j in range(self.first_level[i]['nconv']): if j == 0 or j == 1: x = nn.Sequential(*block[j])(x) outputs.append(x) else: conn = connections[pos:pos+prev] for c in range(len(conn)): if conn[c] == 1: x2 = outputs[c] x = torch.cat((x, x2), axis = 1) x = nn.Sequential(*block[j])(x) outputs.append(x) pos += prev prev += 1 x = nn.Sequential(*link)(x) x = torch.flatten(x,1) '''Classification''' x = self.classifier(x) return nn.functional.log_softmax(x, dim=1) Inside the __init__, I add the convolutional part (i.e. all the denseblocks and transition operations) as self.extraction and the classification part as self.classification. In the forward pass, I use the information inside self.denseBlock to dynamically pass the input x through the convolutional layers inside each block, until the end of the CNN. The problem is that I want to pass a random tensor just to see if the network computes correctly. I have: #Create network net = CNN(e, network[0], network[1], network[2]) net.to(device, dtype = torch.float32) #Create random tensor in cuda image = torch.rand([1, 1, 256, 256], device = device) net(image) I what I get is: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-83-0987cf7a2c44> in <module>() ----> 1 net(image) 6 frames /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight) 418 _pair(0), self.dilation, self.groups) 419 return F.conv2d(input, weight, self.bias, self.stride, --> 420 self.padding, self.dilation, self.groups) 421 422 def forward(self, input: Tensor) -> Tensor: RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same I have tried many combinations, like the .cuda(), not using the dtype = torch.float32 but I have no idea how to solve it. I have read that possibly not all the parts of the network are on the GPU. But I have defined two sequentials already, and the modules I use in the forward pass are the same that are inside the sequentials. Thanks in advance for your help!
st182345
Solved by ptrblck in post #2 Could you check, if all passed arguments are nn.Module objects, i.e. denseBlocks, links, and classifier? It’s unclear from the error message which module is raising this issue and the code looks generally alright. Wrapping the modules in nn.Sequential inside the forward is unusual, but shouldn’t c…
st182346
Could you check, if all passed arguments are nn.Module objects, i.e. denseBlocks, links, and classifier? It’s unclear from the error message which module is raising this issue and the code looks generally alright. Wrapping the modules in nn.Sequential inside the forward is unusual, but shouldn’t change the device. Feel free to add the missing module definitions in case you get stuck, so that we can help debugging.
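For reference, a minimal sketch of why this matters (the attribute name links is just illustrative): submodules kept in a plain Python list are invisible to .parameters() and .to(), while nn.ModuleList registers them:

import torch
import torch.nn as nn

class Wrong(nn.Module):
    def __init__(self):
        super().__init__()
        self.links = [nn.Conv2d(3, 8, 3), nn.Conv2d(8, 8, 3)]  # not registered

class Right(nn.Module):
    def __init__(self):
        super().__init__()
        self.links = nn.ModuleList([nn.Conv2d(3, 8, 3), nn.Conv2d(8, 8, 3)])  # registered

print(len(list(Wrong().parameters())))  # 0 -> these weights would stay on the CPU after .to(device)
print(len(list(Right().parameters())))  # 4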
st182347
Thank you very much! You were right! The elements inside the links list were not added correctly. I used the nn.ModuleList instead of a normal list and the problem finally solved!
st182348
Hello all, I’m trying to build a VGG16 model to make an ONNX export using PyTorch. I want to force the model to use my own set of weights and biases, but during this process my computer quickly runs out of memory. Here is how I wanted to do it (this is only a test; in the real version I read the weights and biases from a set of files). This example just forces all values to 0.5:

# Create empty VGG16 model (random weights)
from torchvision import models
from torchsummary import summary

vgg16 = models.vgg16()  # the structure is: vgg16.__dict__
summary(vgg16, (3, 224, 224))

# Convolutional layers
for layer in vgg16.features:
    print()
    print(layer)
    if hasattr(layer, 'weight'):
        dim = layer.weight.shape
        print(dim)
        print(str(dim[0]*(dim[1]*dim[2]*dim[3]+1)) + ' params')
        # Replace the weights and biases
        for i in range(dim[0]):
            layer.bias[i] = 0.5
            for j in range(dim[1]):
                for k in range(dim[2]):
                    for l in range(dim[3]):
                        layer.weight[i][j][k][l] = 0.5

# Dense layers
for layer in vgg16.classifier:
    print()
    print(layer)
    if hasattr(layer, 'weight'):
        dim = layer.weight.shape
        print(str(dim) + ' --> ' + str(dim[0]*(dim[1]+1)) + ' params')
        for i in range(dim[0]):
            layer.bias[i] = 0.5
            for j in range(dim[1]):
                layer.weight[i][j] = 0.5

When I look at the memory usage of the computer, it grows linearly and saturates the 16GB RAM while processing the first dense layer. Then Python crashes… Is there another, better way to do this, keeping in mind that I want to export the model to ONNX afterwards? Thanks for your help.
st182349
I guess you might be using an older PyTorch version, as you would get this error in the latest stable version: RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation. Wrap your code into a with torch.no_grad() block and check, if the memory is still growing.
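For example, a sketch of forcing all weights and biases without autograd tracking (and without the per-element Python loops, which are very slow) might look like this; in the real case the fill_ calls would become copy_ from the loaded arrays:

import torch
from torchvision import models

vgg16 = models.vgg16()

with torch.no_grad():
    for layer in list(vgg16.features) + list(vgg16.classifier):
        if hasattr(layer, 'weight') and layer.weight is not None:
            layer.weight.fill_(0.5)   # or layer.weight.copy_(torch.from_numpy(w))
        if hasattr(layer, 'bias') and layer.bias is not None:
            layer.bias.fill_(0.5)     # or layer.bias.copy_(torch.from_numpy(b))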
st182350
Hello, I got a trouble with CUDA memory I want to slice image input after ‘CUDA out of memory.’ occured. but, after ‘CUDA out of memory.’ error , MEMORY LEAK occured It seems like input random tensor “x” at line 79 [at inf class ,run function] didn’t free exactly. But there’s NO way to free it. Is there any idea to free it ?? HELP ME… I tried below but it didn’t work… del x x.to(torch.device(‘cpu’) **SYSTEM & ENV** GPU : NVIDIA 2080 ti OS : windows 10 + linux(centos) pytorch 1.5.1 import torch from torch import nn from math import sqrt # import torch.multiprocessing as mp import time def check_gpu(msg='gpu_check'): print(f'{msg:=^60}') # print('Memory Usage:') print(f'Allocated:, {round(torch.cuda.memory_allocated(0)/1024**3,6)}GB') # # print('Cached: ', round(torch.cuda.memory_cached(0)/1024**3,6), 'GB') # import gc # for i, obj in enumerate(gc.get_objects()): # try: # if torch.is_tensor(obj) or (hasattr(obj, 'data') and torch.is_tensor(obj.data)): # print(f'{i}||{type(obj)}||{obj.size()}') # except: # pass class Conv_ReLU_Block(nn.Module): def __init__(self): super(Conv_ReLU_Block, self).__init__() self.conv = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1, padding=1, bias=False) self.relu = nn.ReLU(inplace=True) def forward(self, x): return self.relu(self.conv(x)) # resnet network class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.residual_layer = self.make_layer(Conv_ReLU_Block, 18) self.input = nn.Conv2d(in_channels=1, out_channels=64, kernel_size=3, stride=1, padding=1, bias=False) self.output = nn.Conv2d(in_channels=64, out_channels=1, kernel_size=3, stride=1, padding=1, bias=False) self.relu = nn.ReLU(inplace=True) for m in self.modules(): if isinstance(m, nn.Conv2d): n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels m.weight.data.normal_(0, sqrt(2. / n)) def make_layer(self, block, num_of_layer): layers = [] for _ in range(num_of_layer): layers.append(block()) return nn.Sequential(*layers) def forward(self, x): # residual = x out = self.relu(self.input(x)) out = self.residual_layer(out) out = self.output(out) return out # inference class class inf: # init model def __init__(self): check_gpu('init') self.m = Net().to(torch.device('cuda')) self.m.eval() # self.m.share_memory() # run inference with random tensor # if tensor size (1,1,50000,1024) # -> CUDA out of memory. # -> memory leak occured while loop at main # # if tensor size (1,1,1024,1024) # -> great def run(self): check_gpu('run start') try: with torch.no_grad(): # x = torch.rand(1, 1, 50000, 1024).cuda() x = torch.rand(1, 1, 1024, 1024, device='cuda') check_gpu('allocated') x = self.m(x) except Exception as e: print(e) finally: torch.cuda.empty_cache() def main(): inf_ins = inf() for _ in range(100): t1 = time.time() inf_ins.run() t2 = time.time() # check_gpu('last') print(f'time: {t2-t1}') if __name__ == '__main__': main() result if use tensor (1,1,50000,1024) as input tensor allocated cuda memory rising ============================init============================ Allocated:, 0.0GB =========================run start========================== Allocated:, 0.002477GB =========================allocated========================== Allocated:, 0.193883GB CUDA out of memory. Tried to allocate 12.21 GiB (GPU 0; 11.00 GiB total capacity; 198.54 MiB already allocated; time: 0.3150339126586914 =========================run start========================== Allocated:, 0.193883GB =========================allocated========================== Allocated:, 0.385289GB CUDA out of memory. 
Tried to allocate 12.21 GiB (GPU 0; 11.00 GiB total capacity; 394.54 MiB already allocated; time: 0.31797289848327637 =========================run start========================== Allocated:, 0.385289GB =========================allocated========================== Allocated:, 0.576695GB CUDA out of memory. Tried to allocate 12.21 GiB (GPU 0; 11.00 GiB total capacity; 590.54 MiB already allocated; time: 0.31999874114990234 =========================run start========================== Allocated:, 0.576695GB =========================allocated========================== Allocated:, 0.768102GB CUDA out of memory. Tried to allocate 12.21 GiB (GPU 0; 11.00 GiB total capacity; 786.54 MiB already allocated; time: 0.3209991455078125 =========================run start========================== Allocated:, 0.768102GB =========================allocated========================== Allocated:, 0.959508GB CUDA out of memory. Tried to allocate 12.21 GiB (GPU 0; 11.00 GiB total capacity; 982.54 MiB already allocated; time: 0.31999993324279785 if use (1,1,1024,1024) as input tensor ============================init============================ Allocated:, 0.0GB =========================run start========================== Allocated:, 0.002477GB =========================allocated========================== Allocated:, 0.006383GB time: 1.4230003356933594 =========================run start========================== Allocated:, 0.002477GB =========================allocated========================== Allocated:, 0.006383GB time: 0.14099979400634766 =========================run start========================== Allocated:, 0.002477GB =========================allocated========================== Allocated:, 0.006383GB time: 0.13899970054626465
st182351
Is there a way in PyTorch to borrow memory from the CPU when training on the GPU? I am training a model related to video processing and would like to increase the batch size, to see whether a larger batch size can improve the results of the model by training it better, especially the BatchNorm3d part. The model requires a lot of memory; my CPU has more memory and could handle a larger batch size, but the GPU is much faster yet limited in memory. So I want to somehow make the CPU’s memory usable by the GPU, in order to increase the batch size (I use BatchNorm3d, and from what I understand the mini-batch size is a major factor). I want to know if there is something in PyTorch, or some external library for PyTorch, that allows that, and whether that solution is available for Windows, as I do not use Linux (unless it is the only way and no alternatives can be found).
st182352
Solved by ptrblck in post #2 I think Microsoft released a PyTorch package some time ago, where intermediate tensors could be pushed to the CPU temporarily to reduce the GPU memory usage. However, I can’t remember the name at the moment and don’t know if it’s still maintained. That being said, you could trace compute for memo…
st182353
salahelabyad: So I want to add the memory in the CPU as usable memory for the GPU somehow. I think Microsoft released a PyTorch package some time ago, where intermediate tensors could be pushed to the CPU temporarily to reduce the GPU memory usage. However, I can’t remember the name at the moment and don’t know if it’s still maintained. That being said, you could trace compute for memory via torch.utils.checkpoint.
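As a minimal sketch of trading compute for memory with checkpointing (the model segments and sizes here are placeholders, not the actual video model):

import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

model = nn.Sequential(
    *[nn.Sequential(nn.Conv3d(8, 8, 3, padding=1), nn.ReLU()) for _ in range(6)]
).cuda()
x = torch.randn(2, 8, 16, 64, 64, device='cuda', requires_grad=True)

# Only the segment boundaries are kept; activations inside each of the
# 3 segments are recomputed during backward, reducing peak memory.
out = checkpoint_sequential(model, 3, x)
out.mean().backward()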
st182354
I looked into torch.utils.checkpoint, it does satisfy what I needed. I also tried searching again with different keywords and found this repository https://github.com/IBM/pytorch-large-model-support 45 It enables memory swapping between CPU memory and GPU (similar to memory swap between CPU RAM and storage memory). I will leave the link here maybe it can help someone later on. I will look into microsoft releases, if I happen to find anything related to the topic I’ll try to remember to update the post. Thanks for the help
st182355
salahelabyad: I will look into microsoft releases, if I happen to find anything related to the topic I’ll try to remember to update the post. Ah no, you were right. It was indeed the linked repository by IBM, I just misremembered it which also explains why I couldn’t find it. Thanks for the link. Based on the last commits it seems that PyTorch 1.5.0 is at least supported.
st182356
Hi, I have a dataset that has the channel dimension last, so obviously PyTorch finds it unsuitable for the CNN that I’m using:

self.conv2d = nn.Sequential(
    nn.Conv2d(1, 64, (3, 6), (1, 1)),
    nn.ReLU()
)

The dataset has the dimensions samples x 10 x 6 x 1. What should I do? For now I’m transposing the data, but the network is not performing well.
st182357
Permuting the output is the right approach. What do you mean by “not performing well”? Is the model bad regarding the speed or accuracy? You could use the channels_last memory format as described here 398, which internally permutes the data to the channels last format. Note that the shape of the tensor would still indicate the standard contiguous format (channels first).
st182358
Thanks for answering, so the model is bad regarding the accuracy. So by using the channel_last i just need the dataset without permuting it, right?
st182359
Stefano_Setti: So by using the channel_last i just need the dataset without permuting it, right? Yes, as shown in the linked tutorial. Note that changing the memory layout will not fix the accuracy issue, so you might want to fix this first e.g. by playing around with some hyperparameters.
st182360
So I’m using the channels_last memory format in my custom dataloader, but I get this error:

RuntimeError: Given groups=1, weight of size 64 1 3 6, expected input[2048, 10, 6, 1] to have 1 channels, but got 10 channels instead

so apparently it is not working.
st182361
For reference, this is my dataloader:

import h5py as h5
import torch
from torch.utils import data

class Hdf5_dl(data.Dataset):
    """HDF5 datasets"""
    def __init__(self, archive, transform=None):
        self.archive = h5.File(archive, 'r')
        self.labels = torch.tensor(self.archive['labels'])
        self.data = torch.tensor(self.archive['data']).contiguous(memory_format=torch.channels_last)
        self.transform = transform

    def __getitem__(self, index):
        sample = self.data[index]
        if self.transform is not None:
            sample = self.transform(sample)
        return sample, self.labels[index]

    def __len__(self):
        return len(self.labels)

    def close(self):
        self.archive.close()
st182362
It’s working for me:

conv = nn.Conv2d(1, 64, (3, 6)).cuda().to(memory_format=torch.channels_last)
x = torch.randn(2048, 1, 10, 6).cuda().to(memory_format=torch.channels_last)
out = conv(x)
print(out.shape)
> torch.Size([2048, 64, 8, 1])

As said before, you should not manually permute the tensor, but handle it in the “standard” contiguous layout (channels-first or NCHW).
st182363
so right now what I’m doing is to mimic your code like that:

model = Network()
model = model.to(memory_format=torch.channels_last)
model = model.double()
criterion = nn.BCELoss()
optimizer = optim.Adam(params=model.parameters(), lr=0.01)
...
for epoch in range(epochs):  # loop over the dataset multiple times
    model.train()
    for j, data in enumerate(trainloader, 0):
        # Get the inputs; data is a list of [inputs, labels]
        inputs, labels = data
        print(inputs.stride())
        inputs = inputs.to(memory_format=torch.channels_last)
        print(inputs.stride())

        # Zero the parameter gradients
        optimizer.zero_grad()

        # Forward + Backward + Optimize
        outputs = model(inputs.double())
        loss = criterion(outputs, labels.double())
        loss.backward()
        optimizer.step()
        print("epoch\t{}\t\tbatch\t{}\nloss\t{}\n---".format(epoch, j, loss.item()))

and it is still not working. I even tried to print out the stride before and after the channels_last operation, and this is what I get:

(60, 6, 1, 1)
(60, 6, 1, 1)

Error:

RuntimeError: Given groups=1, weight of size 64 1 3 6, expected input[2048, 10, 6, 1] to have 1 channels, but got 10 channels instead
st182364
Could you print the shape of inputs? It should be [batch_size, channels, height, width].
st182365
So the shape of the input is [2048, 10, 6, 1], even if I convert the memory format like this:

for j, data in enumerate(trainloader, 0):
    inputs, labels = data
    inputs = inputs.to(memory_format=torch.channels_last)
    print("\ninput: ", inputs.shape)
st182366
As described before: your input has to be in the shape [batch_size, channels, height, width] before using to(memory_format=torch.channels_last) (have another look at my code snippet). You should not manually permute the tensor to the channels-last format, the to() operation will internally handle it for you. Since the conv layer has in_channels=1, your data should have the shape [2048, 1, 10, 6].
st182367
I think I didn’t explain myself properly: my dataset has the shape [batch_size, height, width, channels] from the beginning.
st182368
In that case you have to permute it, so that the shape is channels-first and apply the memory_format later.
st182369
Assuming the original inputs are contiguous, the tensors will become channels_last automatically after the permute call:

inputs, labels = data
inputs = inputs.permute(0, 3, 1, 2)
print("\ninput: ", inputs.shape, inputs.is_contiguous(memory_format=torch.channels_last))
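As a small follow-up sketch (shapes taken from this thread): after the permute call the tensor already reports the channels_last layout and can be passed to the conv layer directly.

import torch
import torch.nn as nn

conv = nn.Conv2d(1, 64, (3, 6)).to(memory_format=torch.channels_last)

x = torch.randn(2048, 10, 6, 1)   # NHWC data as stored in the HDF5 file
x = x.permute(0, 3, 1, 2)         # logical shape becomes NCHW: [2048, 1, 10, 6]
print(x.is_contiguous(memory_format=torch.channels_last))  # True
out = conv(x)
print(out.shape)                  # torch.Size([2048, 64, 8, 1])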
st182370
Hello! I am quite new to PyTorch and training DNN models in general. I’m working on audio separation and I would like to augment my dataset by cropping random overlapping segments of audio, adding noise, etc. What bothers me is how in general data augmentation works, meaning will I augment my data, save it to HDD and then load it, or is it done “per batch”, stored temporarily? I’m not sure if I have explained this properly. If the latter is the answer to my question, then is there anything similar (for audio) to augmentation for images in PyTorch (random crops, etc.)? Thanks in advance!
st182371
st182372
stases: What bothers me is how in general data augmentation works, meaning will I augment my data, save it to HDD and then load it, or is it done “per batch”, stored temporarily?

Data augmentation is applied on the fly for each batch during training. If you are using a Dataset and pass it to a DataLoader with multiple workers, the data loading and processing would be executed in the background while the model is training. I would have a look at torchaudio, which ships with some transformations.
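To make this concrete, a minimal sketch of on-the-fly augmentation inside a Dataset; the dummy waveforms, segment length, and noise level are assumptions for illustration, not part of the original answer:

import random
import torch
from torch.utils.data import Dataset, DataLoader

class AugmentedAudioDataset(Dataset):
    def __init__(self, waveforms, segment_len=16000, noise_std=0.01):
        self.waveforms = waveforms        # list of 1D tensors
        self.segment_len = segment_len
        self.noise_std = noise_std

    def __len__(self):
        return len(self.waveforms)

    def __getitem__(self, idx):
        wav = self.waveforms[idx]
        # random overlapping crop, drawn freshly every time the sample is requested
        start = random.randint(0, wav.size(0) - self.segment_len)
        segment = wav[start:start + self.segment_len]
        # additive Gaussian noise
        segment = segment + self.noise_std * torch.randn_like(segment)
        return segment

waveforms = [torch.randn(48000) for _ in range(100)]  # dummy data
loader = DataLoader(AugmentedAudioDataset(waveforms), batch_size=8, num_workers=2)

Nothing is written back to disk; every epoch sees newly randomized crops and noise.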
st182373
Hi, I have a question about CUDA out of memory. I already know how to solve it; I just wonder about the meaning of the message.

GPU: RTX 2080Ti, CUDA 10.1
PyTorch version: 1.6.0+cu101
Model: EfficientDet-D4

When I trained it with a batch size of 1, it took 9.5 GiB of GPU RAM. Then I tried to increase the batch size and it returned:

# Batch_size = 2
CUDA out of memory. Tried to allocate 14.00 MiB (GPU 0; 11.00 GiB total capacity; 8.32 GiB already allocated; 2.59 MiB free; 8.37 GiB reserved in total by PyTorch)
# Batch_size = 3
CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 11.00 GiB total capacity; 8.23 GiB already allocated; 48.59 MiB free; 8.32 GiB reserved in total by PyTorch)
# Batch_size = 4
CUDA out of memory. Tried to allocate 192.00 MiB (GPU 0; 11.00 GiB total capacity; 8.03 GiB already allocated; 90.59 MiB free; 8.28 GiB reserved in total by PyTorch)
# Batch_size = 5
CUDA out of memory. Tried to allocate 240.00 MiB (GPU 0; 11.00 GiB total capacity; 8.06 GiB already allocated; 38.59 MiB free; 8.33 GiB reserved in total by PyTorch)
# Batch_size = 10
CUDA out of memory. Tried to allocate 1.41 GiB (GPU 0; 11.00 GiB total capacity; 7.19 GiB already allocated; 964.59 MiB free; 7.43 GiB reserved in total by PyTorch)

I am confused about how to measure the allocated memory: why does the already allocated memory keep decreasing if I increase the batch size, and what is the meaning of reserved memory in that pop-up? I read the code and comments in the PyTorch github (lines 247-272). The comment mentioned the cached memory, so what is it in my case?

Note: I checked the memory taken by the driver after killing all processes and freeing all tasks; it took 0.4~0.5/11 GB on my GPU.
st182374
Toby: why does the already allocated memory keep decreasing if I increase the batch size

PyTorch tries to allocate the memory for the complete tensor, so increasing the batch size would also increase (some) tensors and thus the memory blocks are also bigger. If you are now running out of memory, the failed memory block might be bigger (as seen in the “tried to allocate …” message), while the already allocated memory is smaller.

Toby: and what is the meaning of reserved memory in that pop-up?

Reserved memory returns the allocated and cached memory. Cached memory is used to be able to reuse device memory without reallocating it.
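For reference, a small sketch of querying these statistics directly (the tensor size is arbitrary):

import torch

x = torch.randn(1024, 1024, device='cuda')
print(torch.cuda.memory_allocated() / 1024**2)  # memory currently occupied by tensors, in MiB
print(torch.cuda.memory_reserved() / 1024**2)   # allocated + cached memory, in MiB
del x
torch.cuda.empty_cache()
print(torch.cuda.memory_reserved() / 1024**2)   # cached memory released back to the driver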
st182375
I am not familiar with controlling memory or memory distribution in hardware, so I cannot discuss the first sentence further (if you have some related documents, they would help me a lot).

// The sum of "allocated" + "free" + "cached" may be less than the
// total capacity due to memory held by the driver and usage by other
// programs.

Following the PyTorch github (lines 257-259) that I mentioned above, and with your answer, I have a new question. Should it be:

total = allocated + free + cached + driver                     # driver is 0.4 GiB, mentioned above
      = allocated + free + (reserved - allocated) + driver     # following your answer
      = free + reserved + driver

Let’s take batch size = 2 as an example; we have:

2.59 MiB + 8.37 GiB + 0.4 GiB = 8.7725 GiB

So where is the rest of the memory? 2.2275 GiB
st182376
I am trying to load data into my Dataset class in the __getitem__ function rather than in __init__, because the data is very large and cannot be loaded into memory all at once. Since the index used by the DataLoader keeps increasing, I am keeping a record of the length of the previously loaded part of the data, but this length is reset to 0 (the value set in __init__) every time a new batch is loaded. Is there a way to not call the __init__ function of the Dataset?

def __init__(self, data_path, graph_args={}, train_val_test='train'):
    '''
    train_val_test: (train, val, test)
    '''
    self.data_path = data_path
    self.path_list = sorted(glob.glob(os.path.join(self.data_path, '*.txt')))
    self.all_feature = []
    self.all_adjacency = []
    self.all_mean_xy = []
    self.it = 0
    # self.load_data()
    # total_num = len(self.all_feature)
    # equally choose validation set
    self.feature_num = 0
    self.prev = 0

def __getitem__(self, idx):
    # C = 11: [frame_id, object_id, object_type, position_x, position_y, position_z,
    #          object_length, object_width, object_height, heading] + [mask]
    # if (idx >= self.feature_num):
    try:
        now_feature = self.all_feature[idx - self.prev].copy()
    except:
        path = self.path_list[self.it]
        self.it = self.it + 1
        self.all_feature, self.all_adjacency, self.all_mean_xy = generate_data(path)
        self.prev = self.feature_num
        self.feature_num = self.feature_num + len(self.all_feature)
        now_feature = self.all_feature[idx - self.prev].copy()
st182377
Be careful with trying to manipulate the Dataset in a DataLoader, if you are using multiple workers. Each worker will use a clone of the Dataset, so that changes to the internal states of the Dataset will not be reflected. Could this be the case for your issue?
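A toy sketch (not your code) illustrating this: with num_workers > 0 the attribute updates made inside __getitem__ happen in the worker copies and never reach the main process.

import torch
from torch.utils.data import Dataset, DataLoader

class Counter(Dataset):
    def __init__(self):
        self.calls = 0

    def __len__(self):
        return 4

    def __getitem__(self, idx):
        self.calls += 1          # modifies the worker's copy only
        return torch.tensor(idx)

if __name__ == "__main__":
    ds = Counter()
    for _ in DataLoader(ds, num_workers=2):
        pass
    print(ds.calls)  # 0 -> the main process never sees the workers' updates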
st182378
1.7 release and current master will have reset functionality, see https://github.com/pytorch/pytorch/pull/35795
st182379
Thank you so much, setting num_workers to 0 solved the issue. But I guess I’m restricting the capabilities of the machine by loading the data sequentially.
st182380
Dear all, I cannot figure out how to get rid of the out of memory error:

RuntimeError: CUDA out of memory. Tried to allocate 7.50 MiB (GPU 0; 11.93 GiB total capacity; 5.47 GiB already allocated; 4.88 MiB free; 81.67 MiB cached)

In fact, due to the recurrent architecture of my network I have to use retain_graph=True; otherwise I keep running into this error:

RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.

Here is the main part of my training loop:

for epoch in range(300):  # again, normally you would NOT do 300 epochs, it is toy data
    states = None  # torch.empty().to(device)
    for idx, image in enumerate(loader):
        # Step 1. Remember that Pytorch accumulates gradients.
        # We need to clear them out before each instance
        # Step 3. Run our forward pass.
        tensor = image[0].clone().to(device)
        if states is None:
            states = prednet.get_initial_states(tensor)
        prednet.zero_grad()
        # tensor = tensor.reshape(tensor.shape[0], 1, tensor.shape[1], tensor.shape[2], tensor.shape[3])
        tag_scores, states = prednet(tensor, states)

        # Step 4. Compute the loss, gradients, and update the parameters by
        # calling optimizer.step()
        loss = loss_function(tag_scores, torch.zeros_like(tag_scores))
        print(loss)
        loss.backward(retain_graph=True)
        for state in states:
            state.detach()
        optimizer.step()
        print('1 backward')
        torch.cuda.empty_cache()

Here is the forward function:

def forward(self, a, states=None):
    r_tm1 = states[:self.nb_layers]
    c_tm1 = states[self.nb_layers:2*self.nb_layers]
    e_tm1 = states[2*self.nb_layers:3*self.nb_layers]

    if self.extrap_start_time is not None:
        t = states[-1].copy()
        # if past self.extrap_start_time, the previous prediction will be treated as the actual
        a = torch.switch(t >= self.t_extrap, states[-2], a)

    c = []
    r = []
    e = []
    for l in reversed(range(self.nb_layers)):
        inputs = [r_tm1[l], e_tm1[l]]
        if l < self.nb_layers - 1:
            inputs.append(r_up)
        inputs = torch.cat(inputs, self.channel_axis)
        # print(inputs.shape)
        i = self.conv_layers['i'][l](inputs)
        f = self.conv_layers['f'][l](inputs)
        o = self.conv_layers['o'][l](inputs)
        # print('i', torch.isnan(i).any())
        # print('f', torch.isnan(f).any())
        # print('o', torch.isnan(o).any())
        # print('c', torch.isnan(o).any())
        # print('c', torch.isnan(self.conv_layers['c'][l](inputs)).any())
        _c = f * c_tm1[l] + i * self.conv_layers['c'][l](inputs)
        _r = o * self.LSTM_activation(_c)
        c.insert(0, _c)
        r.insert(0, _r)
        if l > 0:
            r_up = self.upsample(_r)

    for l in range(self.nb_layers):
        ahat = self.conv_layers['ahat'][l](r[l])
        if l == 0:
            value = torch.Tensor([self.pixel_max]).to(device)
            ahat = torch.min(ahat, value.expand_as(ahat))
            frame_prediction = ahat

        # compute errors
        e_up = self.error_activation(ahat - a)
        e_down = self.error_activation(a - ahat)

        e.append(torch.cat((e_up, e_down), dim=self.channel_axis))

        if l < self.nb_layers - 1:
            a = self.conv_layers['a'][l](e[l])
            a = self.pool(a)  # target for next layer

    if self.output_mode == 'prediction':
        output = frame_prediction
    else:
        for l in range(self.nb_layers):
            layer_error = torch.mean(torch.flatten(e[l], start_dim=1), dim=-1, keepdim=True)
            if l == 0:
                all_error = layer_error
            else:
                all_error = torch.cat((all_error, layer_error), dim=-1)
        if self.output_mode == 'error' and image_n == 0:
            output = all_error
            output = output.unsqueeze(1)
        # elif self.output_mode == 'error':
        #     all_error = all_error.unsqueeze(1)
        #     output = torch.cat((output, all_error), dim=1)
        else:
            output = torch.cat((torch.flatten(frame_prediction, start_dim=1), all_error), dim=-1)

    states = r + c + e
    if self.extrap_start_time is not None:
        states += [frame_prediction, t + 1]
    # return output, states
    return output, states
st182381
st182382
I assume you need the retain_graph=True setting, since you are not detaching the states tensor. If this is your use case, you would have to lower the batch size to be able to store all computation graphs on the device. If you don’t need to backpropagate through multiple steps, you might want to detach states via:

tag_scores, states = prednet(tensor, states.detach())
st182383
Thank you for your answer! I indeed need the backpropagation through time, but even reducing the batch size only delays the out of memory error. Also, I already tried to use

for state in states:
    state.detach()

It does not change the out of memory error after 5-10 batches.
st182384
state.detach() is not an inplace method and you would have to reassign the result as:

state = state.detach()

If that doesn’t help, could you post an executable code snippet?
st182385
Hey! Thank you so much for your time. It’s still not working. I am working on a 2-3 GB database and the network is “fairly” complex. I can send you the code, but making an executable snippet that reproduces the error would take me a lot of time, especially since I do not know how to make one. I will put the solution here if I ever find one.
st182386
But isn’t there a way to set retain_graph=False from time to time to save memory? I wanted to do something like this, but every time I do it at a given step I get:

RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.
st182387
The issue is raised since the intermediate tensors are already freed (after you’ve used retain_graph=True), while your backward() call tries to backpropagate through operations where these intermediates are already deleted. Detaching the tensor would solve the problem (the backward pass would stop at this point and will not backpropagate further), but I understand that it might not be trivial if the code base is complicated.
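A generic sketch of this detach pattern with a toy RNN (not the PredNet from this thread): the hidden state is detached between iterations, so each backward() only covers the current step and no retain_graph is needed.

import torch
import torch.nn as nn

rnn = nn.RNNCell(10, 20)
optimizer = torch.optim.SGD(rnn.parameters(), lr=0.01)
h = torch.zeros(4, 20)

for step in range(100):
    x = torch.randn(4, 10)
    h = rnn(x, h.detach())      # the graph starts fresh at every step
    loss = h.pow(2).mean()
    optimizer.zero_grad()
    loss.backward()             # no retain_graph=True needed
    optimizer.step()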
st182388
If anyone ever comes by: I ended up solving the issue by replacing the line

tag_scores, states = prednet(tensor, states)

with

tag_scores, states = prednet(tensor, [state.detach() for state in states])

It worked well! It is a similar solution to the one of @ptrblck, just that states is a list of tensors. Thank you for the help!
st182389
Hello, I am working on a classification problem, which consists of feature generation first and then classification. Due to my problem constraints, I generate the features first and then train the classifier separately. However, data loading has become the bottleneck of my pipeline and progress.

I save the generated data samples as lists of tensors in .pt files, each having dimensions [(50,10,10,10), (1)], the last (1) being the associated label tensor. I use standard dataset and dataloader code:

class LR_Dataset(Dataset):
    def __init__(self, filepath):
        self.filepath = filepath
        self.filenames = os.listdir(self.filepath)

    def __len__(self):
        return len(self.filenames)

    def __getitem__(self, idx):
        x, y = torch.load(os.path.join(self.filepath, self.filenames[idx]))
        return x, y

def dataloader(filepath, batch_size, num_workers=0):
    dataset = LR_Dataset(filepath)
    return DataLoader(dataset, batch_size=batch_size, num_workers=num_workers)

Note that the dataset folder contains ~150,000 .pt files, so I wonder whether reading the .pt files is taking the time, or whether it is because the .pt files contain tensors and not numpy arrays. I am facing a tough deadline; any help would be greatly appreciated. Thank you
st182390
I don’t think there is any overhead from using tensors instead of np arrays. You could try increasing num_workers > 0 to use multiprocessing in DataLoader
st182391
As @Dipayan_Das explained, multiple workers might give you a speedup. If that’s not helping, you could try to preload the complete dataset, which should take approx. 27 GB, if my calculation is right.
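A rough sketch of the preloading idea (file handling assumed to match the post above; it only works if the stacked tensors fit into host RAM):

import os
import torch
from torch.utils.data import Dataset

class PreloadedDataset(Dataset):
    def __init__(self, filepath):
        filenames = os.listdir(filepath)
        samples = [torch.load(os.path.join(filepath, f)) for f in filenames]
        # pay the file-reading cost once at start-up ...
        self.x = torch.stack([s[0] for s in samples])
        self.y = torch.stack([s[1] for s in samples])

    def __len__(self):
        return self.x.size(0)

    def __getitem__(self, idx):
        # ... so that __getitem__ becomes a cheap in-memory index lookup
        return self.x[idx], self.y[idx]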
st182392
Thanks a lot for the suggestion, but I have limited RAM of 16 GB and an 8 GB GPU. Any method other than loading the whole dataset into RAM would do. Additionally, I have indeed experimented with num_workers; however, the effect is not at all significant. Thank you
st182393
I have an error after adding one loss function:

class Latent_Classifier(nn.Module):
    def __init__(self):
        super(Latent_Classifier, self).__init__()
        self.encoder = nn.Sequential(
            nn.Linear(128, 750),
            nn.LeakyReLU(0.2),
            nn.Linear(750, 750),
            nn.Linear(750, 1)
        )

    def forward(self, latent_z):
        x1 = self.encoder(latent_z)
        print(x1.size())
        _eps = 1e-15
        loss = -(x1 + _eps).log().mean() - (1 - x1 + _eps).log().mean()
        return loss

I use this function as:

classifier = Latent_Classifier()
f_classifier = classifier(latent_f)
lm_classifier = classifier(latent_l)
loss = 4000 * (f_loss + m_loss) + 30 * (f_classifier + lm_classifier) + 2000 * lm_loss
loss.backward()

In loss.backward() I got the error message:

CUDA error: an illegal memory access was encountered

Before using the classifier loss I had no error message. Is there an error in the Latent_Classifier function? When I executed it using torch.device("cpu") instead of cuda:0, it worked well.
st182394
Could you rerun the script with:

CUDA_LAUNCH_BLOCKING=1 python script.py args

and post the stack trace here, please?
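If setting the variable on the command line is inconvenient, setting it from inside the script should also work, as long as it happens before any CUDA work is done:

import os
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'  # must be set before the CUDA context is created

import torch  # CUDA kernels launched after this point will run synchronously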
st182395
it’s my traceback message:

Traceback (most recent call last):
  File "train2.py", line 121, in <module>
    loss.backward()
  File "/home/hhhoh/.local/lib/python3.6/site-packages/torch/tensor.py", line 185, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/hhhoh/.local/lib/python3.6/site-packages/torch/autograd/__init__.py", line 127, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: CUDA error: an illegal memory access was encountered
Exception raised from copy_kernel_cuda at /pytorch/aten/src/ATen/native/cuda/Copy.cu:200 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x7f130683e1e2 in /home/hhhoh/.local/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: + 0x1e63b08 (0x7f1308b0ab08 in /home/hhhoh/.local/lib/python3.6/site-packages/torch/lib/libtorch_cuda.so)
frame #2: + 0xc282b9 (0x7f13424cf2b9 in /home/hhhoh/.local/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
frame #3: + 0xc25f28 (0x7f13424ccf28 in /home/hhhoh/.local/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
frame #4: at::native::copy_(at::Tensor&, at::Tensor const&, bool) + 0x44 (0x7f13424cf144 in /home/hhhoh/.local/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
frame #5: at::Tensor::copy_(at::Tensor const&, bool) const + 0x115 (0x7f1342bba095 in /home/hhhoh/.local/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
frame #6: + 0x37e647e (0x7f134508d47e in /home/hhhoh/.local/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
frame #7: at::Tensor::copy_(at::Tensor const&, bool) const + 0x115 (0x7f1342bba095 in /home/hhhoh/.local/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
frame #8: at::native::to(at::Tensor const&, c10::TensorOptions const&, bool, bool, c10::optional<c10::MemoryFormat>) + 0xb54 (0x7f134270b564 in /home/hhhoh/.local/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
frame #9: + 0x128850a (0x7f1342b2f50a in /home/hhhoh/.local/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
frame #10: + 0x2e749da (0x7f134471b9da in /home/hhhoh/.local/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
frame #11: + 0x10ea412 (0x7f1342991412 in /home/hhhoh/.local/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
frame #12: at::Tensor::to(c10::TensorOptions const&, bool, bool, c10::optional<c10::MemoryFormat>) const + 0x146 (0x7f1342bedf56 in /home/hhhoh/.local/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
frame #13: + 0x336a970 (0x7f1344c11970 in /home/hhhoh/.local/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
frame #14: torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&, std::shared_ptr<torch::autograd::ReadyQueue> const&) + 0x3fd (0x7f1344c173fd in /home/hhhoh/.local/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
frame #15: torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&) + 0x451 (0x7f1344c18fa1 in /home/hhhoh/.local/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
frame #16: torch::autograd::Engine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) + 0x89 (0x7f1344c11119 in /home/hhhoh/.local/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
frame #17: torch::autograd::python::PythonEngine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) + 0x4a (0x7f13523b14ba in /home/hhhoh/.local/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #18: + 0xbd6df (0x7f135350d6df in /usr/lib/x86_64-linux-gnu/libstdc++.so.6)
frame #19: + 0x76db (0x7f13559496db in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #20: clone + 0x3f (0x7f1355c82a3f in /lib/x86_64-linux-gnu/libc.so.6)
st182396
The illegal memory access was most likely triggered before the copy kernel, so the blocking launch is apparently not working. Could you post an executable code snippet, which would reproduce this issue?
st182397
No, if possible narrow the code down to a minimal snippet which reproduces the error, i.e. remove all data loading, metric calculation, etc., use random inputs, and try to isolate the illegal memory access to a few lines. What’s currently hard to debug is that your code apparently runs fine on the CPU and that the blocking launch isn’t working properly in your setup.
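Something along these lines (reusing the Latent_Classifier posted above with random inputs) is usually enough as a stand-alone reproduction:

import torch

# assumes the Latent_Classifier definition from the earlier post is available
model = Latent_Classifier().cuda()
latent = torch.randn(8, 128, device='cuda', requires_grad=True)
loss = model(latent)
loss.backward()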
st182398
I have checked that my input and model are on the GPU. However, it still seems not to be working. Can anyone help me with the bug I’m encountering?
st182399
I need to train image classifiers in different groups. When one group is trained, the script automatically loads and trains the next group via "for" statements. But when it starts to train the next group, an error occurs saying there is not enough memory on the GPU. How can I clear the GPU memory used by the last group's training before the script starts training the next group? I have tried to use torch.cuda.empty_cache() after each group's training finished, but it doesn't work.

time.sleep(5)
del model
del loss
gc.collect()
torch.cuda.empty_cache()  # clear the GPU cache
# print(torch.cuda.memory_stats(0))
time.sleep(10)
torch.cuda.empty_cache()  # clear the GPU cache
# print(torch.cuda.memory_stats(0))

PyTorch version is 1.6.0.