st182200 | >>> out = torch.randn(100,100)
>>> out.cuda()
>>> torch.cuda.memory_allocated(0)
40448
>>> out1 = torch.randn(100,100)
>>> torch.cuda.memory_allocated(0)
0
Why is this happening? I expected the output of torch.cuda.memory_allocated(0) to still be 40448, because out1 is totally different from out.
Note that,
>>> id(out)
139989884735632
>>> id(out1)
139989884693456
Note2
>>> out = torch.randn(100,100)
>>> out.cuda()
>>> out1 = torch.randn(90,90)
>>> torch.cuda.memory_allocated(0)
40448 |
st182201 | Solved by ptrblck in post #4
The underscore (_) is a special variable which will get the output of the last operation as seen here:
a = 5
a
Out[2]: 5
6
Out[3]: 6
_
Out[4]: 6
_ + _
Out[5]: 12
If you are stepping through the code in a REPL env I would guess that in particular this variable can hold a reference to the CUDATe… |
st182202 | I cannot reproduce the issue and get:
out = torch.randn(100,100)
out.cuda()
print(torch.cuda.memory_allocated(0))
> 0
out1 = torch.randn(100,100)
print(torch.cuda.memory_allocated(0))
> 0
Are you using a REPL environment where the output of the last operation might still be stored in the _ object? |
st182203 | Yes I was using the REPL env. Is there a docs/blog posts explaining what you just said here : the output of the last operation be stored in the _ object?
Thanks in advance. |
st182204 | The underscore (_) is a special variable which will get the output of the last operation as seen here:
a = 5
a
Out[2]: 5
6
Out[3]: 6
_
Out[4]: 6
_ + _
Out[5]: 12
If you are stepping through the code in a REPL env I would guess that in particular this variable can hold a reference to the CUDA tensor and could thus report the GPU usage. On the other hand, if you execute the entire script, the memory usage is shown as 0, which is expected since you didn’t assign the CUDA tensor to any variable, so it will be discarded. |
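For illustration, a minimal non-REPL sketch of the same effect: the CUDA tensor only stays allocated while some variable references it, which is the role the REPL’s _ variable plays implicitly:
import torch

# out.cuda() returns a new CUDA tensor; without assigning it, nothing
# references it and the allocation is released right away.
out = torch.randn(100, 100)
out.cuda()
print(torch.cuda.memory_allocated(0))  # 0

# Keeping a reference (as the REPL's `_` does implicitly) keeps the GPU memory allocated.
kept = out.cuda()
print(torch.cuda.memory_allocated(0))  # 40448

del kept
print(torch.cuda.memory_allocated(0))  # back to 0 (assuming no other CUDA tensors are alive)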
st182205 | I want a simple technique that will convert a pytorch nn.module to a float 64 model. |
st182206 | Solved by ptrblck in post #2
To transform all parameters and buffers of a module to float64 tensors, use model.double(). |
st182207 | To transform all parameters and buffers of a module to float64 tensors, use model.double(). |
st182208 | Buffers are tensors, which are registered to the parent module but don’t require gradients. E.g. the running stats in batchnorm layers are buffers. You can register buffers via self.register_buffer inside an nn.Module. |
st182209 | Just to replicate a historic approach (Kim, 2014), I am experimenting with conv1d for a sentence classification task. Embedding size for each word = 300. Sequence length (i.e. number of words in a sentence) = 512. So my input sentence is [512, 300]. (Let us omit batch size for this discussion). This way I can refer to each word in my input sentence as input[word_index]. A conv layer needs to convolve over sequence length (say, to get bi-grams or tri-grams). This means your embedding needs to be seen as ‘in_channels’.
Now, after digging around SO answers, it turns out, conv1d expects my input to have channels first, i.e. a sentence needs to be [300, 512].
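For illustration, a minimal sketch of that conversion, assuming an embedding output of shape [batch, seq_len, emb_dim]:
import torch
import torch.nn as nn

emb_dim, seq_len = 300, 512
x = torch.randn(4, seq_len, emb_dim)   # [batch, seq, channels] as produced by an embedding layer
x = x.transpose(1, 2).contiguous()     # -> [batch, channels, seq] as expected by nn.Conv1d

conv = nn.Conv1d(in_channels=emb_dim, out_channels=100, kernel_size=3)
out = conv(x)
print(out.shape)                       # torch.Size([4, 100, 510])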
This breaks the row-major convention which are historic defaults by C, NumPy, etc.
For output, as a diagram shows in Kim’s paper, one might expect sequence length to be the first index and channels (corresponding to your multiple filters) to be the second one. nn.conv1d gives output in [out_channels, sequence_length_with_padding_stride_calculation] format.
Why does PyTorch favor channels first?
If it’s a matter of convention, shouldn’t there be an easy way to choose your convention?
If it’s because CuDNN prefers NCHW, can PyTorch not internally change input?
The to and contiguous approach outlined here is convoluted. Why would you want devs to explicitly convert their input and their models? Can we not have a parameter in model declaration that decides channels_first or channels_last and the input is processed accordingly? If you want to maintain backwards compatibility, the parameter might have channels_first as default.
P.S. the conv1d documentation could greatly benefit if there is an explanation of this below the example, and possibly, if your input is channels_last, how to convert it and have contiguous memory allocation. If this needs to be added only in conv1d, I can raise a PR. If this needs explanation across multiple pages, a longer discussion might be helpful. |
st182210 | Yash_Jakhotiya:
Why does PyTorch favor channels first?
For historic performance reasons, if I’m not mistaken.
Yash_Jakhotiya:
If it’s a matter of convention, shouldn’t there be an easy way to choose your convention?
If it’s because CuDNN prefers NCHW, can PyTorch not internally change input?
PyTorch can change the memory layout via to(memory_format=torch.channels_last), which is the preferred format for mixed-precision convolutions using TensorCores.
Yash_Jakhotiya:
Why would you want devs to explicitly convert their input and their models? Can we not have a parameter in model declaration that decides channels_first or channels_last and the input is processed accordingly?
Wouldn’t this already be the case or are you concerned about having to add the to() operation to the input as well? Under the hood, the parameter layout might be respected and the inputs thus transformed for you, but explicit transformations are preferred over implicit ones. |
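For illustration, a minimal sketch of the explicit conversion mentioned above; channels_last applies to 4D (N, C, H, W) tensors, so a Conv2d is used here:
import torch
import torch.nn as nn

model = nn.Conv2d(3, 8, kernel_size=3, padding=1)
model = model.to(memory_format=torch.channels_last)          # convert the parameters

x = torch.randn(2, 3, 32, 32).to(memory_format=torch.channels_last)
out = model(x)
print(out.shape)                                             # logical shape unchanged: [2, 8, 32, 32]
print(out.is_contiguous(memory_format=torch.channels_last))  # typically True: the layout propagates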
st182211 | I am inclined towards a design where something like,
input = torch.randn(N, L, C)
conv_layer = torch.nn.Conv1d(in_channels=C, out_channels=C, kernel_size=3, memory_format='channels_last')
output = conv_layer(input)
works out of the box.
Is this possible in the current setup? And if not, what would be the minimal changes required here to make it work? |
st182212 | I am having some difficulties loading models on my device, it’s an Nvidia Xavier NX, and when I try to load a model to gpu there is a very large memory spike, over double of when the model finishes loading.
The only thing I can think of that would cause this is the model being loaded into CPU memory and then copied to GPU memory; since CPU and GPU share the same physical memory on this device, the model temporarily sits in RAM twice.
Is this the cause or something else? And is there a way I can work around it? |
st182213 | I am training a stacked GRU with a linear output layer. I have verified the input size is 3 dim. The code is giving me the following error: TypeError: relu(): argument ‘input’ (position 1) must be Tensor, not tuple
I have written code to convert my input numpy arrays to torch tensors but I am still getting that error.
class CustomDataset(Dataset):
    def __init__(self, X_data, Y_data):
        super(CustomDataset, self).__init__()
        self.X_data = X_data.float()
        self.Y_data = Y_data.float()
        # Initializes the data and preprocessing.

    def __getitem__(self, index):
        return self.X_data[:,:,:], self.Y_data[:,:,:]
        # Returns data (input and output) in batches.
        # There is an error in this part of the code.

    def __len__(self):
        return len(self.X_data[:,1,1])
        # Returns the size of the input data.
inputdata = np.load('4thin.npy')
outputdata = np.load('4thout.npy')
trainX = inputdata[:208,:,:10]
trainX = torch.from_numpy(trainX)
print(trainX.shape)
trainY = outputdata[:208,:,:8]
trainY = torch.from_numpy(trainY)
print(trainY.shape)
testX = inputdata[208:,:,:10]
testX = torch.from_numpy(testX)
print(testX.shape)
testY = outputdata[208:,:,:8]
testY = torch.from_numpy(testY)
print(testY.shape)
#select a subset of the features from the data for testing of the script
#Loads in the input and output numpy arrays and splits into training and validation datasets.
train_dataloader = CustomDataset(trainX, trainY)
test_dataloader = CustomDataset(testX, testY)
The size of each of the tensors is the following:
torch.Size([208, 3, 10])
torch.Size([208, 1, 8])
torch.Size([208, 3, 10])
torch.Size([208, 1, 8])
I set up the program to print the x tensor and got the following:
tensor([[[ 0., 0., 0., …, 0., 0., 0.],
[ 0., 0., 0., …, 0., 0., 0.],
[ 15., 5., 34., …, 4., 11., 1.]],
[[ 0., 0., 0., …, 7., 6., 7.],
[ 6., 15., 0., …, 0., 7., 0.],
[ 1., 5., 17., …, 61., 9., 0.]],
[[ 58., 58., 35., …, 87., 54., 38.],
[ 0., 0., 0., …, 0., 0., 0.],
[ 8., 0., 0., …, 0., 0., 0.]],
…,
[[ 2., 0., 8., …, 3., 2., 2.],
[ 4., 4., 30., …, 5., 0., 23.],
[125., 145., 97., …, 179., 704., 140.]],
[[ 0., 0., 0., …, 1., 0., 0.],
[ 78., 0., 43., …, 62., 0., 64.],
[ 45., 88., 0., …, 0., 15., 0.]],
[[ 0., 0., 0., …, 0., 0., 0.],
[ 0., 0., 0., …, 0., 0., 0.],
[112., 84., 104., …, 133., 125., 75.]]])
The script seems to think this is a tuple and not a tensor. Help would be much appreciated! |
st182214 | Could you post the model definition, please?
Sometimes these errors are caused by an accidental trailing comma in a line of code, e.g.
x = self.layer(x),
x = F.relu(x)
(note the comma in the first line of code). |
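Another pattern that raises the same error, shown purely as an illustrative sketch (it may or may not apply to the code above): recurrent layers such as nn.GRU return a tuple (output, hidden), which needs to be unpacked before calling F.relu:
import torch
import torch.nn as nn
import torch.nn.functional as F

gru = nn.GRU(input_size=10, hidden_size=16, num_layers=2, batch_first=True)
x = torch.randn(8, 3, 10)   # [batch, seq, features]

# F.relu(gru(x))            # TypeError: relu(): argument 'input' ... must be Tensor, not tuple
out, h_n = gru(x)           # unpack the (output, hidden) tuple first
y = F.relu(out)             # works: out is a Tensor of shape [8, 3, 16]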
st182215 | I use try-catch to enclose the forward and backward functions, and I also delete all the tensors after every batch.
try:
    decoder_output, loss = model(batch)
    if Q.qsize() > 0:
        break
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    train_loss.append(loss.mean().item())
    del decoder_output, loss
except Exception as e:
    optimizer.zero_grad()
    for p in model.parameters():
        if p.grad is not None:
            del p.grad
    torch.cuda.empty_cache()
    oom += 1
del batch
for p in model.parameters():
    if p.grad is not None:
        del p.grad
torch.cuda.empty_cache()
The training process is normal for the first thousands of steps; even if it hits an OOM exception, the exception is caught and the GPU memory is released.
But after thousands of batches have been trained, it suddenly keeps getting OOM for every batch and the memory never seems to be released anymore.
It’s so weird to me; are there any suggestions? (I’m using distributed data parallel) |
st182216 | Here is the code before the try-catch, i.e., the code to prepare input data with train_loader:
for i, batch in enumerate(train_loader):
    batch = {k: v.squeeze(0).cuda(non_blocking=True) for (k, v) in batch.items()}
    batch['rem_father_index'] = torch.split(batch['rem_father_index'], batch['rem_root_num'].tolist(), dim=0)
    batch['rem_father_index'] = [l.tolist() for l in batch['rem_father_index']]
    batch['tree_sizes'] = batch['tree_sizes'].tolist() |
st182217 | I have some updates:
If I create some tensors like: torch.zeros(bsz, max_len), will this cause some side-effect?
I inspected the Python garbage collector's tracked objects and found something weird:
[screenshot of the garbage collector output, 816×373]
There is a huge dict, which contains a lot of tensors, is this part of the graph or something? |
st182218 | Ideally, I would like to execute something like
t = torch.zeros([4, 3, 64, 64])
t[:, :, ::8, ::8].view(4, -1)
but that produces the error
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
Unfortunately, I can’t use .reshape() or .contiguous() because of memory consumption. This code is called too often to make a copy of the tensor each time. Instead I would like to create one big tensor and slice it each time.
Is there some way to use .transpose() or something similar in combination with the above .view() to achieve my goal? Is there a way to get a more detailed error message to understand which dimension exactly is the problem? Thanks a lot in advance! |
st182219 | Solved by googlebot in post #4
Yes, sorry, output from stride() should be reasoned about together with tensor size. Here, your last dimension says: increase pointer by 8 (last stride) 8 times (last dim size), but the third dimension wants to to take steps of 512 elements (it is skipping 7 segments altogether).
Stride of 8 corres… |
st182220 | The problem is that element spacing is irregular when you merge dimensions:
t[:,:,::8,::8].stride()
(12288, 4096, 512, 8)
So it is impossible to collapse three last numbers into one. |
st182221 | Thanks for your reply, could you elaborate a little? Can’t result of
t[:,:,::8,::8].view(4, -1)
be the tensor with the same underlying data in storage as t, size = (4, 3*64*64), and stride = (12288, 8)? Both 4096 and 512 are divisible by 8, so wouldn’t that be the solution view() goes for? |
st182222 | Yes, sorry, output from stride() should be reasoned about together with tensor size. Here, your last dimension says: increase pointer by 8 (last stride) 8 times (last dim size), but the third dimension wants to to take steps of 512 elements (it is skipping 7 segments altogether).
Stride of 8 corresponds to t[:,:,:,::8], where you have stride[i] = stride[i+1] * size[i+1] |
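A small sketch of that reasoning on the tensor from the question; reshape works because it silently makes a contiguous copy when the strides cannot be expressed by a view:
import torch

t = torch.zeros(4, 3, 64, 64)
s = t[:, :, ::8, ::8]
print(s.shape)     # torch.Size([4, 3, 8, 8])
print(s.stride())  # (12288, 4096, 512, 8)

# Flattening dims 1..3 would need a single constant step between consecutive
# elements, but the step is 8 within a row and much larger when jumping to the
# next row/channel, so view() cannot express the result without copying.
# s.view(4, -1)    # RuntimeError
flat = s.reshape(4, -1)  # works, but materializes a contiguous copy
print(flat.shape)        # torch.Size([4, 192])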
st182223 | Hi! I’m new to pytorch, and I’m trying to figure out how pytorch works.
I want to know when pytorch exactly alloc or dealloc tensors(specially output gradients and weight gradients) during training.
For this work, I’m trying to get logs for allocation/deallocation. I used some hooks with torch.cuda.memory_allocated(), but it’s hard to get exact moment and target tensor.
Is there a way to get logs for VRAM allocation/deallocation? (like vlog in tensorflow) or What do you think is appropriate to achieve my goal?
Thanks. |
st182224 | Hi all,
I’m currently working on training a model on a rather large dataset ( ~1 billion examples in total). To store and load this dataset, I’m employing the Webdataset library (https://github.com/webdataset/webdataset). I am storing preprocessed examples as Pytorch tensors in a tar file, per the Webdataset spec. I am running into a memory issue after a long period of training where the machine performing the reading/loading operation runs out of memory. After a bit of debugging, it seems to be caused by some unexpected (at least to me) behavior with the Pytorch serialization code.
Since each tensor is stored as a separate file, then tarred together, each must be loaded individually, meaning that ~1 billion torch.load() operations take place per epoch. I traced memory usage using the tracemalloc utility and discovered that some part of this serialization process is creating objects that are not garbage collected even after the tensor read from disk goes out of scope (or in the example below, is manually deleted).
I have included a minimal example below to show these objects. The example creates a dummy tensor, then reads it into memory repeatedly. The memory usage is queried in 10,000 step intervals. Given that the tensor goes out of scope immediately after it is read with torch.load(), I would not expect to see any objects from torch/serialization.py still in scope. However, I instead see ~10,000 new objects created each time the memory is queried (every 10,000 steps, see example output below). These objects also seem to never be released and cause constant linear memory growth.
Is there something I am missing here? Is there a way to remove references to these objects so they can be collected? Even though they are only 28B each, on average, they still seem to be causing memory use to grow linearly until the machine is out of memory (usually after many millions of examples are read from disk on a high-mem training machine).
Thank you in advance for any and all help you can offer!
Best,
Tyler
System info
Ubuntu 18.04.1
Python 3.8
Pytorch 1.9.0
Minimal reproducible example:
import torch
import tracemalloc

if __name__ == '__main__':
    # Create a dummy Tensor serialized to disk
    rand = torch.rand((20, 10))
    torch.save(rand, 'test.pth')
    # Start tracemalloc
    tracemalloc.start(30)
    old_snapshot = tracemalloc.take_snapshot()
    for i in range(100000):
        # Load the Tensor,
        test_tensor = torch.load('test.pth')
        # Immediately delete reference to test_tensor
        del test_tensor
        if i % 10000 == 0 and i != 0:
            # Take snapshot
            snapshot = tracemalloc.take_snapshot()
            # Print changes in memory consumption
            print(f'################# STEP {i} #################')
            for stat in snapshot.compare_to(old_snapshot, 'lineno')[:2]:
                print(str(stat))
            print('############################################')
            # Save snapshot
            old_snapshot = snapshot
Example output:
################# STEP 10000 #################
.../python3.8/site-packages/torch/serialization.py:845: size=274 KiB (+274 KiB), count=10005 (+10005), average=28 B
.../python3.8/site-packages/torch/serialization.py:242: size=274 KiB (+274 KiB), count=10003 (+10003), average=28 B
################# STEP 20000 #################
.../python3.8/site-packages/torch/serialization.py:845: size=547 KiB (+273 KiB), count=20005 (+10000), average=28 B
.../python3.8/site-packages/torch/serialization.py:242: size=547 KiB (+273 KiB), count=20003 (+10000), average=28 B
################# STEP 30000 #################
.../python3.8/site-packages/torch/serialization.py:242: size=820 KiB (+273 KiB), count=30003 (+10000), average=28 B
.../python3.8/site-packages/torch/serialization.py:845: size=821 KiB (+273 KiB), count=30004 (+9999), average=28 B
The two lines being referenced from torch/serialization.py above are:
https://github.com/pytorch/pytorch/blob/v1.9.0/torch/serialization.py#L845
def _load(zip_file, map_location, pickle_module, pickle_file='data.pkl', **pickle_load_args):
restore_location = _get_restore_location(map_location)
loaded_storages = {}
def load_tensor(data_type, size, key, location):
name = f'data/{key}'
dtype = data_type(0).dtype
storage = zip_file.get_storage_from_record(name, size, dtype).storage()
loaded_storages[key] = restore_location(storage, location)
def persistent_load(saved_id):
assert isinstance(saved_id, tuple)
typename = _maybe_decode_ascii(saved_id[0])
data = saved_id[1:]
assert typename == 'storage', \
f"Unknown typename for persistent_load, expected 'storage' but got '{typename}'"
data_type, key, location, size = data
and
https://github.com/pytorch/pytorch/blob/v1.9.0/torch/serialization.py#L242
if 'w' in mode:
return _open_buffer_writer(name_or_buffer)
elif 'r' in mode:
return _open_buffer_reader(name_or_buffer)
else:
raise RuntimeError(f"Expected 'r' or 'w' in mode but got {mode}")
class _open_zipfile_reader(_opener):
def __init__(self, name_or_buffer) -> None:
super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))
class _open_zipfile_writer_file(_opener):
def __init__(self, name) -> None:
super(_open_zipfile_writer_file, self).__init__(torch._C.PyTorchFileWriter(str(name)))
def __exit__(self, *args) -> None:
self.file_like.write_end_of_file() |
st182225 | del x doesn’t guarantee garbage collection in Python, you need to try with gc.collect() |
st182226 | Hi @VitalyFedyunin, thanks for the suggestion! Unfortunately, the issue persists even after adding an explicit call to gc.collect() after each read. I’ve included an updated script and output below.
import torch
import tracemalloc
import gc

if __name__ == '__main__':
    # Create a dummy Tensor serialized to disk
    rand = torch.rand((20, 10))
    torch.save(rand, 'test.pth')
    # Start tracemalloc
    tracemalloc.start(30)
    old_snapshot = tracemalloc.take_snapshot()
    for i in range(30001):
        # Load the Tensor
        test_tensor = torch.load('test.pth')
        # Immediately delete reference to test_tensor
        del test_tensor
        gc.collect()
        if i % 10000 == 0 and i != 0:
            # Take snapshot
            snapshot = tracemalloc.take_snapshot()
            # Print changes in memory consumption
            print(f'################# STEP {i} #################')
            for stat in snapshot.compare_to(old_snapshot, 'lineno')[:2]:
                print(str(stat))
            print('############################################')
            # Save snapshot
            old_snapshot = snapshot
Output:
################# STEP 10000 #################
.../python3.8/site-packages/torch/serialization.py:845: size=274 KiB (+274 KiB), count=10003 (+10003), average=28 B
.../python3.8/site-packages/torch/serialization.py:242: size=274 KiB (+274 KiB), count=10003 (+10003), average=28 B
############################################
################# STEP 20000 #################
.../python3.8/site-packages/torch/serialization.py:845: size=547 KiB (+273 KiB), count=20003 (+10000), average=28 B
.../python3.8/site-packages/torch/serialization.py:242: size=547 KiB (+273 KiB), count=20003 (+10000), average=28 B
############################################
################# STEP 30000 #################
.../python3.8/site-packages/torch/serialization.py:845: size=820 KiB (+273 KiB), count=30003 (+10000), average=28 B
.../python3.8/site-packages/torch/serialization.py:242: size=820 KiB (+273 KiB), count=30003 (+10000), average=28 B
############################################
I’ve also opened an issue on the Pytorch Github here:
github.com/pytorch/pytorch: Uncleared memory use after torch.load() (opened Aug 26, 2021 by TShimko126; labels: module: serialization, triaged)
## 🐛 Bug
When loading a tensor from disk using `torch.load()`, the full memory used by the loading process does not get collected even when the tensor goes out of scope. The amount of uncleared memory grows with the number of `torch.load()` operations.
## To Reproduce
Steps to reproduce the behavior:
Memory usage is measured using [guppy3](https://pypi.org/project/guppy3/).
1. Run the following script to get baseline memory usage after creating and deleting a random tensor:
```python
from torch import rand
from guppy import hpy
# Get the heap before Tensor creation
hp = hpy()
before = hp.heap()
# Create a dummy Tensor
rand = rand((20, 10))
del rand
# Get the after heap and diff
after = hp.heap()
leftover = after - before
print(leftover)
```
Output:
```
Partition of a set of 1 object. Total size = 408 bytes.
Index Count % Size % Cumulative % Kind (class / dict of class)
0 1 100 408 100 408 100 types.FrameType
```
2. Compare to the following output which creates a tensor, saves it to disk, then reads it with `torch.load()` and deletes it:
```python
from torch import rand, save, load
from guppy import hpy
# Create a dummy Tensor and save to disk
rand = rand((20, 10))
save(rand, 'test.pth')
del rand
# Get the heap before torch.load()
hp = hpy()
before = hp.heap()
# Read and del
test_tensor = load('test.pth')
del test_tensor
# Get the after heap and diff
after = hp.heap()
leftover = after - before
print(leftover)
print('\nGet referrer to leftover dict:')
print(leftover[1].byid.referrers.byvia)
```
Output:
```
Partition of a set of 2 objects. Total size = 640 bytes.
Index Count % Size % Cumulative % Kind (class / dict of class)
0 1 50 408 64 408 64 types.FrameType
1 1 50 232 36 640 100 dict (no owner)
Get referrer to leftover dict:
Partition of a set of 1 object. Total size = 416 bytes.
Index Count % Size % Cumulative % Referred Via:
0 1 100 416 100 416 100 "['Unpickler']", "['_unpickler']", '.__objclass__',
'.__self__', '[0]'
```
Given that the uncleared `dict` appears to be referred to by the `Unpickler` object, my thought is that this may be related to the wrapper defined [here](https://github.com/pytorch/pytorch/blob/master/torch/serialization.py#L869), but I may be completely wrong.
3. Finally, run this script to show that the memory usage allocated to the `dict` grows with subsequent `torch.load()` operations:
```python
from torch import rand, save, load
from guppy import hpy
# Create a dummy Tensor and save to disk
rand = rand((20, 10))
save(rand, 'test.pth')
del rand
# Get the heap before torch.load()
hp = hpy()
before = hp.heap()
# Read and del
for i in range(1000):
    test_tensor = load('test.pth')
    del test_tensor
del i
# Get the after heap and diff
after = hp.heap()
leftover = after - before
print(leftover)
print('\nGet referrer to leftover dict:')
print(leftover[0].byid.referrers.byvia)
```
Output:
```
Partition of a set of 2 objects. Total size = 9720 bytes.
Index Count % Size % Cumulative % Kind (class / dict of class)
0 1 50 9312 96 9312 96 dict (no owner)
1 1 50 408 4 9720 100 types.FrameType
```
Note the size increase of the `dict` from 408 -> 9312 between the examples in step 2 and step 3.
## Expected behavior
All memory occupied during/after the `torch.load()` operation should be cleared after the tensor is deleted/goes out of scope.
## Environment
```
PyTorch version: 1.9.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 10.15.7 (x86_64)
GCC version: Could not collect
Clang version: 12.0.0 (clang-1200.0.32.27)
CMake version: Could not collect
Libc version: N/A
Python version: 3.8.5 (default, Sep 4 2020, 02:22:02) [Clang 10.0.0 ] (64-bit runtime)
Python platform: macOS-10.15.7-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.19.4
[pip3] pytorch-lightning==1.2.10
[pip3] torch==1.9.0
[pip3] torch-cluster==1.5.9
[pip3] torch-geometric==1.7.1
[pip3] torch-scatter==2.0.7
[pip3] torch-sparse==0.6.10
[pip3] torch-spline-conv==1.2.1
[pip3] torchmetrics==0.2.0
[conda] blas 1.0 mkl
[conda] mkl 2019.5 intel_281 intel
[conda] mkl-service 2.3.0 py38h9ed2024_0
[conda] mkl_fft 1.2.0 py38hc64f4ea_0
[conda] mkl_random 1.1.1 py38h959d312_0
[conda] numpy 1.19.4 pypi_0 pypi
[conda] numpy-base 1.19.2 py38hcfb5961_0
[conda] pytorch-lightning 1.2.10 pypi_0 pypi
[conda] torch 1.9.0 pypi_0 pypi
[conda] torch-cluster 1.5.9 pypi_0 pypi
[conda] torch-geometric 1.7.1 pypi_0 pypi
[conda] torch-scatter 2.0.7 pypi_0 pypi
[conda] torch-sparse 0.6.10 pypi_0 pypi
[conda] torch-spline-conv 1.2.1 pypi_0 pypi
[conda] torchmetrics 0.2.0 pypi_0 pypi
```
## Additional context
I previously brought this issue up on the [Pytorch forum](https://discuss.pytorch.org/t/uncollected-object-references-from-torch-load-cause-memory-growth/130188) and will include the relevant text below:
I’m currently working on training a model on a rather large dataset ( ~1 billion examples in total, with multiple individual tensors that need to be loaded per each example). To store and load this dataset, I’m employing the [Webdataset library](https://github.com/webdataset/webdataset). I am storing preprocessed examples as Pytorch tensors in a tar file, per the Webdataset spec. I am running into a memory issue after a long period of training where the machine performing the reading/loading operation runs out of memory. After a bit of debugging, it seems to be caused by some unexpected (at least to me) behavior with the Pytorch serialization code.
Since each tensor is stored as a separate file, then tarred together, each must be loaded individually, meaning that ~1 billion torch.load() operations take place per epoch. I traced memory usage using the tracemalloc utility and discovered that some part of this serialization process is creating objects that are not garbage collected even after the tensor read from disk goes out of scope (or in the example below, is manually deleted).
I have included a minimal example below to show these objects. The example creates a dummy tensor, then reads it into memory repeatedly. The memory usage is queried in 10,000 step intervals. Given that that the tensor goes out of scope immediately after it is read with torch.load(), I would not expect to see any objects from torch/serialization.py to still be in scope. However, I instead see ~10,000 new objects created each time the memory is queried (every 10,000 steps, see example output below). These objects also seem to never be released and cause constant linear memory growth. Even though they are only 28B each, on average, they still seem to be causing memory use to grow linearly until the machine is out of memory (usually after many millions of examples are read from disk on a high-mem training machine).
Thank you in advance for any and all help you can offer! I sincerely apologize if this has been addressed elsewhere and I missed it while searching. Right now this is blocking bug for one of my ongoing projects.
```python
import torch
import tracemalloc
if __name__ == '__main__':
    # Create a dummy Tensor serialized to disk
    rand = torch.rand((20, 10))
    torch.save(rand, 'test.pth')
    # Start tracemalloc
    tracemalloc.start(30)
    old_snapshot = tracemalloc.take_snapshot()
    for i in range(30001):
        # Load the Tensor
        test_tensor = torch.load('test.pth')
        # Immediately delete reference to test_tensor
        del test_tensor
        if i % 10000 == 0 and i != 0:
            # Take snapshot
            snapshot = tracemalloc.take_snapshot()
            # Print changes in memory consumption
            print(f'################# STEP {i} #################')
            for stat in snapshot.compare_to(old_snapshot, 'lineno')[:2]:
                print(str(stat))
            print('############################################')
            # Save snapshot
            old_snapshot = snapshot
```
Output:
```
################# STEP 10000 #################
/usr/local/Caskroom/miniconda/base/envs/graphene/lib/python3.8/site-packages/torch/serialization.py:845: size=274 KiB (+274 KiB), count=10004 (+10004), average=28 B
/usr/local/Caskroom/miniconda/base/envs/graphene/lib/python3.8/site-packages/torch/serialization.py:242: size=274 KiB (+274 KiB), count=10003 (+10003), average=28 B
############################################
################# STEP 20000 #################
/usr/local/Caskroom/miniconda/base/envs/graphene/lib/python3.8/site-packages/torch/serialization.py:845: size=547 KiB (+273 KiB), count=20005 (+10001), average=28 B
/usr/local/Caskroom/miniconda/base/envs/graphene/lib/python3.8/site-packages/torch/serialization.py:242: size=547 KiB (+273 KiB), count=20003 (+10000), average=28 B
############################################
################# STEP 30000 #################
/usr/local/Caskroom/miniconda/base/envs/graphene/lib/python3.8/site-packages/torch/serialization.py:242: size=820 KiB (+273 KiB), count=30003 (+10000), average=28 B
/usr/local/Caskroom/miniconda/base/envs/graphene/lib/python3.8/site-packages/torch/serialization.py:845: size=821 KiB (+273 KiB), count=30004 (+9999), average=28 B
############################################
```
The relevant referenced lines are:
- [torch/serialization.py:845](https://github.com/pytorch/pytorch/blob/v1.9.0/torch/serialization.py#L845)
- [torch/serialization.py:242](https://github.com/pytorch/pytorch/blob/v1.9.0/torch/serialization.py#L242)
cc @mruberry |
st182227 | I recently stumbled upon something I don’t understand: When creating a float-tensor on GPU with only 1 element, I would assume it to take up 4 bytes of memory. However, torch.cuda.max_memory_allocated() returns 512 bytes? Why is that?
MWE to replicate:
import torch
print(torch.cuda.max_memory_allocated())
a = torch.tensor(1.0, device='cuda')
print(torch.cuda.max_memory_allocated()) # 512 bytes
print(torch.cuda.max_memory_reserved()) # 2097152 bytes = 2 MB
I’m aware that the CUDA context must be created on the GPU as well, which is why nvidia-smi shows values much higher than 0.5MB, around 1GB, but that’s not what I’m asking here. Why does PyTorch reserve half a MB for a 4byte tensor, and why does it cache 2MB when creating said tensor?
Following this answer on Stackoverflow, I did
import sys
print(sys.getsizeof(a)) # 64
print(sys.getsizeof(a.storage()))  # 60
which unfortunately is even more confusing. Anybody knows what’s going on here? |
st182228 | Solved by googlebot in post #2
that’s because pytorch manages its own buffers, you don’t want to do 4 byte allocations from “system” memory manager |
st182229 | that’s because pytorch manages its own buffers, you don’t want to do 4 byte allocations from “system” memory manager |
st182230 | not sure, that’s rather a common practice for high performance c++ programming - using specialized allocators tuned for program’s allocation patterns. with cuda’s limited memory it is even more essential, mostly to decrease fragmentation.
but, yeah, this happens to be documented - CUDA semantics — PyTorch master documentation |
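A small sketch that makes the 512-byte block granularity visible (assuming nothing else is allocated on the device):
import torch

base = torch.cuda.memory_allocated()

a = torch.tensor(1.0, device='cuda')         # 4 bytes of data
print(torch.cuda.memory_allocated() - base)  # 512: rounded up to the allocator block size

b = torch.randn(200, device='cuda')          # 800 bytes of data
print(torch.cuda.memory_allocated() - base)  # 1536 = 512 + 1024

print(torch.cuda.memory_reserved())          # e.g. 2097152: a whole cached 2 MB segment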
st182231 | Hi everyone,
Here is my question:
I have roughly 400,000 training data and each one is stored as a csv (~35 GB in total). I have a custom dataset object that reads these csv files in __getitem__. Currently, each epoch takes roughly 70 minutes with a batch size of 512.
So, I was wondering if there’s anyway to speed up the training without adding additional resources?
Thanks! |
st182232 | You should consider using torch.utils.data.DataLoader, and specify number of workers. These workers retrieve data from the dataset and will significantly improve the read spead. Here is a little snippet:
dataloader = torch.utils.data.DataLoader(dataset,
                                         batch_size=512,
                                         shuffle=True,
                                         num_workers=10) |
st182233 | Thanks for the suggestion. I tried it out but whenever I set num_workers to something greater than 1 the VM just freezes (running on GCP instance with 1 GPU). Is this perhaps a problem with memory? |
st182234 | Well, it seems there is an open issue for it: CPU memory gradually leaks when num_workers > 0 in the DataLoader 11. You can find a diverse set of possible solutions in the aforementioned link. |
st182235 | converting data to binary file format should help (e.g. read csv with pandas, write parsed data with numpy.save, numpy.memmap or torch.save). make sure to write 4-byte floats, unless you train with float64. |
st182236 | Hello,
I’m new to pytorch, and have an issue with assigning a large tensor to GPU. The memory of the tensor is larger than the GPU 16GB memory (and I have multiple such tensors for training/val/test). I suppose it should be a common issue that has been well resolved by many others. Any suggestion?
I read a related post here 4, which hasn’t been answered.
Thanks in advance! |
st182237 | Solved by Peishi in post #3
Thank you for the reply! The problem is solved by simply feeding the GPU with each batch during training, instead of sending all data together. But the model parallel approach looks also interesting. Is there any pytorch tutorial for that? |
st182238 | You could try to use a model parallel approach (i.e. splitting the operation into separate parts) to save memory, as you won’t be able to allocate more device memory than is available.
The implementation depends on the actual operation. |
st182239 | Thank you for the reply! The problem is solved by simply feeding the GPU with each batch during training, instead of sending all data together. But the model parallel approach looks also interesting. Is there any pytorch tutorial for that? |
st182240 | Hello!
I’ve looked around online but I still haven’t been able to figure out how to properly free GPU memory, here’s a link to a simple Colab demo explaining the situation [make sure to change the Runtime type to use a GPU]:
https://colab.research.google.com/drive/1i-1rIbcsxwm90krtPyP5px6P6owKUKYH?usp=sharing 5
I basically start by allocating a random tensor, move it to the GPU, report the GPU memory usage, then move the tensor back to the CPU, report the GPU memory usage, then delete the reference to the tensor and report the GPU memory usage once more. On the last two memory reports I’d expect the usage to go back to zero, but I keep getting a non zero answer (e.g. the usage goes down from 37% to 4%, but not zero).
How can I get the GPU memory back to zero in this simple case? Is this an issue with Google Colab specifically?
Thanks in advance! |
st182241 | This likely isn’t a leak.
PyTorch initializes CUDA “on demand” when you first use it and as part of this initialization, some global GPU memory is allocated. You would not expect this to be freed before terminating the Python process. |
st182242 | tom:
some global GPU memory is allocated
I wonder what global memory it is and its functions. |
st182243 | Zinuo:
I wonder what global memory it is and its functions.
These maintain state of the device and also work areas for various libraries I think.
You can poke around in the relevant PyTorch source directories and read up on context 1 in the CUDA docs and the libraries like cuDNN, cuBLAS etc. (Seems that NVidia+cloudfare helpfully decided I should not link to the NVidia docs more, so you’ll have to find those yourself. Typically the workflow is that you get a global handle). |
st182244 | Hi All,
I am very new to PyTorch and I’m seeing something weird when my code runs that I can’t figure out.
In short this I am applying a gaussian to many images and then a regression with brain data.
The code batches the gaussian/image process. The center location and width of the gaussian changes, each combination is considered one ‘model’ and we find the combination that provides the best prediction for the brain data. I use nvtop to monitor the GPU usage. I’m trying to find the best batch number for both images and brain data so that I can model a whole brain without it taking days and days.
So I call my function, I watch the gpu and it starts the first model, I can see the memory shoot up when it starts the image batching, which makes sense. But when the second model runs, it shoots up again, as if PyTorch is allocating a new set of memory and I’m not sure why it would. Even weirder is it only does this on the second model, it doesn’t keep going up for each model. If it was doing it for every model then I would assume it’s creating an unnecessary new tensor because of something in my code, but to do it just between the first and second loop is really puzzling to me. I’ve tried stopping the code at various places and looking at the tensors that have been created, but I can’t find the culprit as there is nothing new after the second run.
The code is below, it was mostly written by someone else and I just changed some aspects to ‘optimize’ it. I’m sure there are other things that can be done, but at the moment figuring out this memory thing is the most important because it severely limits the batch sizes I can use. And I also just want to gain a better understanding of memory issues so I can get better at using PyTorch in general. I appreciate any help or insights!
def learn_params_ridge_regressionM(data, voxels, _fmaps_fn, models, lambdas, aperture=1.0, _nonlinearity=None, zscore=False, sample_batch_size=100, voxel_batch_size=100, holdout_size=100, shuffle=True, add_bias=False):
"""
Learn the parameters of the fwRF model
Parameters
----------
data : ndarray, shape (#samples, #channels, x, y)
Input image block.
voxels: ndarray, shape (#samples, #voxels)
Input voxel activities.
_fmaps_fn: Torch module
Torch module that returns a list of torch tensors.
models: ndarray, shape (#candidateRF, 3)
The (x, y, sigma) of all candidate RFs for gridsearch.
lambdas: ndarray, shape (#candidateRegression)
The rigde parameter candidates.
aperture (default: 1.0): scalar
The span of the stimulus in the unit used for the RF models.
_nonlinearity (default: None)
A nonlinearity expressed with torch's functions.
zscore (default: False)
Whether to zscore the feature maps or not.
sample_batch_size (default: 100)
The sample batch size (used where appropriate)
voxel_batch_size (default: 100)
The voxel batch size (used where appropriate)
holdout_size (default: 100)
The holdout size for model and hyperparameter selection
shuffle (default: True)
Whether to shuffle the training set or not.
add_bias (default: False)
Whether to add a bias term to the ridge regression or not.
Returns
-------
losses : ndarray, shape (#voxels)
The final loss for each voxel.
lambdas : ndarray, shape (#voxels)
The regression regularization index for each voxel.
models : ndarray, shape (#voxels, 3)
The RF model (x, y, sigma) associated with each voxel.
params : list of ndarray, shape (#voxels, #features)
Can contain a bias parameter of shape (#voxels) if add_bias is True.
mst_mean : ndarray, shape (#voxels, #feature)
None if zscore is False. Otherwise returns zscoring average per feature.
mst_std : ndarray, shape (#voxels, #feature)
None if zscore is False. Otherwise returns zscoring std.dev. per feature.
"""
def _cofactor_fn(_x, lambdas):
'''input matrix [#samples, #features], a list of lambda values'''
_f = torch.stack([(torch.mm(torch.t(_x), _x) + torch.eye(_x.size()[1], device=device) * l).inverse() for l in lambdas], axis=0) # [#lambdas, #feature, #feature]
return torch.tensordot(_f, _x, dims=[[2],[1]]) # [#lambdas, #feature, #sample]
def _loss_fn(_cofactor, _vtrn, _xout, _vout):
'''input '''
_beta = torch.tensordot(_cofactor, _vtrn, dims=[[2], [0]]) # [#lambdas, #feature, #voxel]
_pred = torch.tensordot(_xout, _beta, dims=[[1],[1]]) # [#samples, #lambdas, #voxels]
_loss = torch.sum(torch.pow(_vout[:,None,:] - _pred, 2), dim=0) # [#lambdas, #voxels]
return _beta, _loss
#############################################################################
dtype = np.float32
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
trn_size = len(voxels) - holdout_size
assert trn_size>0, 'Training size needs to be greater than zero'
print ('trn_size = %d (%.1f%%)' % (trn_size, float(trn_size)*100/len(voxels)))
sys.stdout.flush()
nt = len(data)
nm = len(models)
nv = voxels.shape[1]
data = torch.from_numpy(data)
data = data.pin_memory()
if shuffle:
order = np.arange(len(voxels), dtype=int)
np.random.shuffle(order)
data = data[order]
voxels = voxels[order]
trn_voxels = torch.from_numpy(voxels[:trn_size]).to(device)
out_voxels = torch.from_numpy(voxels[trn_size:]).to(device)
### Calculate total feature count
nf = 0
_fmaps = _fmaps_fn(data[:3].float().to(device))
fmaps_rez = []
for k,_fm in enumerate(_fmaps):
nf += _fm.size()[1]
assert _fm.size()[2]==_fm.size()[3], 'All feature maps need to be square'
fmaps_rez += [_fm[k].size()[2],]
#print (_fm.size())
#print ('---------------------------------------')
#sys.stdout.flush()
#############################################################################
### Create full model value buffers
best_models = np.full(shape=(nv,), fill_value=-1, dtype=np.int)
best_lambdas = torch.ones(nv, device=device, dtype=torch.long).neg_()
best_losses = torch.full((nv,), float("Inf"), device=device)
best_w_params = torch.zeros(nv,nf,device=device)
nfd=nf
if add_bias:
nfd = nf+1
best_w_params = torch.cat([best_w_params, torch.ones(nv,1,device=device)], axis=1)
mst_mean = None
mst_std = None
if zscore:
mst_mean = torch.zeros(nv, nf, device=device)
mst_std = torch.zeros(nv, nf, device=device)
start_time = time.time()
vox_loop_time = 0
with torch.no_grad():
for m,(x,y,sigma) in enumerate(models):
print ('\rmodel %4d of %-4d' % (m, nm), flush=True)
mst = torch.zeros(nt, nf, device=device)
_pfs = [_to_torch(pnu.make_gaussian_mass(x, y, sigma, n_pix, size=aperture, dtype=dtype)[2], device=device) for n_pix in fmaps_rez]
for rt,rl in iterate_range(0, nt, sample_batch_size):
mst[rt] = torch.cat([torch.tensordot(_fm, _pf, dims=[[2,3], [0,1]]) for _fm,_pf in zip(_fmaps_fn(data[rt].float().to(device)), _pfs)], dim=1) # [#samples, #features]
if _nonlinearity is not None:
mst = _nonlinearity(mst)
if zscore:
mstm = torch.mean(mst, axis=0, keepdims=True) #[:trn_size]
msts = torch.std(mst, axis=0, keepdims=True) + 1e-6
mst -= mstm
mst /= msts
if add_bias:
mst = torch.cat([mst, torch.ones(len(mst), 1, device=device)], axis=1)
_xtrn = mst[:trn_size]
_xout = mst[trn_size:]
_cof = _cofactor_fn(_xtrn, lambdas)
vox_start = time.time()
for rv,lv in iterate_range(0, nv, voxel_batch_size):
_vtrn = trn_voxels[:,rv]
_vout = out_voxels[:,rv]
_betas, _loss = _loss_fn(_cof, _vtrn, _xout, _vout) # [#lambda, #feature, #voxel, ], [#lambda, #voxel]
_values, _select = torch.min(_loss, dim=0)
imp = _values<best_losses[rv]
if torch.sum(imp)>0:
arv = torch.arange(rv[0],rv[-1]+1)[imp]
li = _select[imp]
best_lambdas[arv] = li
best_losses[arv] = _values[imp]
best_models[arv.numpy()] = m
if zscore:
mst_mean[arv] = mstm # broadcast over updated voxels
mst_std[arv] = msts
best_w_params[arv,:]= _betas[:,:,imp].gather(dim=0,index=li.repeat(nfd,1).view(1,nfd,-1)).squeeze().T
vox_loop_time += (time.time() - vox_start)
#############################################################################
total_time = time.time() - start_time
inv_time = total_time - vox_loop_time
best_w_params=best_w_params.cpu().numpy()
return_params = [best_w_params[:,:nf],]
if add_bias:
return_params += [best_w_params[:,-1],]
else:
return_params += [None,]
print ('\n---------------------------------------')
print ('total time = %fs' % total_time)
print ('total throughput = %fs/voxel' % (total_time / nv))
print ('voxel throughput = %fs/voxel' % (vox_loop_time / nv))
print ('setup throughput = %fs/model' % (inv_time / nm))
sys.stdout.flush()
return best_losses.cpu().numpy(), best_lambdas.cpu().numpy(), [models[best_models],]+return_params+[mst_mean, mst_std]
The code for _fmaps_fn function that is fed into it is below. The code was originally written to extract features from AlexNet for the images. I’ve actually already made the features for my images, so instead of inputting the raw stimuli I’m giving it the features directly but need this function to play nice with the code as it is currently written. I thought about changing it to not be necessary but at some point my own model is going to require more complicated feature maps and I’ll probably need to give the raw stimuli and have a function to create the feature maps for each batch, so taking it out will be counterproductive.
class Torch_Split(nn.Module):
    def __init__(self, s, d):
        super(Torch_Split, self).__init__()
        self.s = nn.Parameter(torch.as_tensor(s), requires_grad=False)
        self.d = nn.Parameter(torch.as_tensor(d), requires_grad=False)
    def forward(self, _x):
        return list(torch.split(_x, self.s, self.d))
Thanks in advance for any help you can provide!! |
st182245 | Solved by tom in post #2
I’m not sure anything in your code immediately jumps into the eye that is bad.
So there a couple of things to keep in mind here:
PyTorch has a caching GPU memory allocator, this means that it will not return memory to the system when freeing tensors but instead keep it and re-use it for the next… |
st182246 | I’m not sure anything in your code immediately jumps into the eye that is bad.
So there a couple of things to keep in mind here:
PyTorch has a caching GPU memory allocator, this means that it will not return memory to the system when freeing tensors but instead keep it and re-use it for the next tensor (but torch.cuda.memory_allocated will give you the memory that is currently allocated to tensors. You can try releasing cached memory (with torch.cuda.empty_cache 1), but there are circumstances in which it is impossible to release all unused memory from the cache.
There are two typical things where you would see something like this:
When doing something like
a = torch.randn(5, 5, device="cuda")
a = torch.randn(5, 5, device="cuda")
the first tensor a will only go be freed after the second line has been completed (if there were an exception, you would still have the old value for a). This means that while the second line is executed, both tensors will be there at the same time. Similar effects can also happen with more complex code.
The caching and re-using can have the effect that after running the first model, there are “gaps” from releasing unused tensors that are too small for the tensors of the second model (e.g. because part of the memory is used by another tensor that was allocated in between). This is a bit of “bad luck” and often such effects get smaller over time (because we do not have bad luck all the time).
So I am not sure you have a memory leak here, from your description it might be benign.
Best regards
Thomas |
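A small sketch of the difference between the two counters mentioned above (live tensor memory vs. the allocator’s cache):
import torch

a = torch.randn(1024, 1024, device="cuda")   # ~4 MB tensor
print(torch.cuda.memory_allocated())          # ~4 MB held by live tensors
print(torch.cuda.memory_reserved())           # >= that: cached segments held by the allocator

del a
print(torch.cuda.memory_allocated())          # drops back down
print(torch.cuda.memory_reserved())           # unchanged: the memory stays in the cache

torch.cuda.empty_cache()
print(torch.cuda.memory_reserved())           # cache returned to the driver (where possible)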
st182247 | I’m using PyTorch Geometric, and I’m interested in converting a batch of graphs into separate batches of node features and edge features. In particular, I’m interested in being able to separate the nodes along the dimension by which graph they belong to, but I’m only given the batches in this format 1.
Concretely, I have tensors batch = Tensor([0, 0, 0, 0, 1, 1, 2]), nodes = Tensor([1, 2, 3, 4, 5, 6, 7]), and I want to convert it to a tensor that looks something like nodes = Tensor([[1,2,3,4], [5,6, pad, pad], [7, pad, pad, pad]) where I use pad to denote some padding tokens. In this case, I have turned a tensor of shape (7,) (number of nodes) into a tensor of shape (3, 4) (number of graphs in batch, maximum number of nodes per graph). |
st182248 | Is this what you’re looking for?
https://pytorch-geometric.readthedocs.io/en/latest/modules/utils.html#torch_geometric.utils.to_dense_batch 75 |
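A minimal sketch of that utility applied to the example from the question; the fill value defaults to 0 and the returned mask marks the real nodes:
import torch
from torch_geometric.utils import to_dense_batch

nodes = torch.tensor([1., 2., 3., 4., 5., 6., 7.]).unsqueeze(-1)  # [7, 1] node features
batch = torch.tensor([0, 0, 0, 0, 1, 1, 2])                        # graph id per node

dense, mask = to_dense_batch(nodes, batch)  # dense: [3, 4, 1], mask: [3, 4]
print(dense.squeeze(-1))
# tensor([[1., 2., 3., 4.],
#         [5., 6., 0., 0.],
#         [7., 0., 0., 0.]])
print(mask)
# tensor([[ True,  True,  True,  True],
#         [ True,  True, False, False],
#         [ True, False, False, False]])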
st182249 | I try to train two DNN jointly, The model is trained and goes to the validation phase after every 5 epochs, the problem is after the 5 epochs it is okay and no problem with memory, but after 10 epochs the model complains about Cuda memory. Any help to solve the memory issue.
class Trainer(BaseTrainer):
    def __init__(self, config, resume: bool, model, loss_function, optimizer, train_dataloader, validation_dataloader):
        super(Trainer, self).__init__(config, resume, model, loss_function, optimizer)
        self.train_data_loader = train_dataloader
        self.validation_data_loader = validation_dataloader
        self.model = self.model.double()

    def _train_epoch(self, epoch):
        for i, (mixture, clean, name, label) in enumerate(self.train_data_loader):
            mixture = mixture.to(self.device, dtype=torch.double)
            clean = clean.to(self.device, dtype=torch.double)
            enhanced = self.model(mixture).to(self.device)
            front_loss = self.loss_function(clean, enhanced)
            front_loss.backward(retain_graph=True)
            torch.cuda.empty_cache()
            model_back.train()
            y = model_back(enhanced.double().to(device2))
            back_loss = backend_loss(y[0], label[0].to(device2))
            print("Iteration %d in epoch%d--> loss = %f" % (i, epoch, back_loss.item()), end='\r')
            back_loss.backward(retain_graph=True)
            self.optimizer.zero_grad()
            self.optimizer.step()
            torch.cuda.empty_cache()
        dl_len = len(self.train_data_loader)

    @torch.no_grad()
    def _validation_epoch(self, epoch):
        sample_length = self.validation_custom_config["sample_length"]
        stoi_c_n = []  # clean and noisy
        stoi_c_e = []  # clean and enhanced
        stoi_e_n = []
        pesq_c_n = []
        pesq_c_e = []
        pesq_e_n = []
        correct = []
        for i, (mixture, clean, name, label) in enumerate(self.validation_data_loader):
            #assert len(name) == 1, "Only support batch size is 1 in enhancement stage."
            name = name[0]
            padded_length = 0
            mixture = mixture.to(self.device)
            if mixture.size(-1) % sample_length != 0:
                padded_length = sample_length - (mixture.size(-1) % sample_length)
                mixture = torch.cat([mixture, torch.zeros(1, 1, padded_length, device=self.device)], dim=-1)
            assert mixture.size(-1) % sample_length == 0 and mixture.dim() == 3
            mixture_chunks = list(torch.split(mixture, sample_length, dim=-1))
            enhanced_chunks = []
            for chunk in mixture_chunks:
                enhanced_chunks.append(self.model(chunk.double()).detach().cpu())
            enhanced = torch.cat(enhanced_chunks, dim=-1)  # [1, 1, T]
            enhanced = enhanced.to(self.device)
            #print(enhanced)
            if padded_length != 0:
                enhanced = enhanced[:, :, :-padded_length]
                mixture = mixture[:, :, :-padded_length]
            torch.cuda.empty_cache()
            model_back.eval()
            y_pred = model_back(enhanced.double().to(self.device))
            pred = torch.argmax(y_pred[0].detach().cpu(), dim=1)
            intent_pred = pred
            correct.append((intent_pred == label[0]).float())
            torch.cuda.empty_cache()
            acc = np.mean(np.hstack(correct))
            intent_acc = acc
            iter_acc = '\n iteration %d epoch %d -->' % (i, epoch)
            print(iter_acc, acc, best_accuracy)
            if intent_acc > best_accuracy:
                improved_accuracy = 'Current accuracy {}, {}'.format(intent_acc, best_accuracy)
                print(improved_accuracy)
                torch.save(model_back.state_dict(), '/home/mnabih/jt/best_model.pkl') |
st182250 | Hey guys,
I’m looking for suggestions on how to speed up process times when storing/accessing large amounts of data for training. Presently, I am using Pandas DataFrames saved to csv. Then I load just the part of the csv I need at any given time in the CustomDataset.
But this has been quite slow, in my opinion, as the csv file is over 300mb(might be bigger later, too). Loading the entire file and accessing the parts I need via .iloc is even slower. So I wanted to check what you all use. I see there are a few options:
Pandas DataFrames
Numpy Arrays
Tensors saved to .pt files
???
What have you found to work best for performance? |
st182251 | Solved by J_Johnson in post #3
What I ended up doing that worked quite well was split the data between files with training samples of 1000 each quite easily by setting the filename to 'data-'+str(i//1000)+'.csv'. Since the data has 180 points of data per training sample, the csv file sizes are just a couple mb each. But if your t… |
st182252 | It likely depends on the particular bottlenecks of your system. If there is a large amount of system memory, but limited file I/O, you might want to keep the entire file in memory at all times. If there isn’t much system memory, but sufficient file I/O you could split the file. The actual file format may not matter as much as the basic loading algorithm design. |
st182253 | What I ended up doing that worked quite well was split the data between files with training samples of 1000 each quite easily by setting the filename to 'data-'+str(i//1000)+'.csv'. Since the data has 180 points of data per training sample, the csv file sizes are just a couple mb each. But if your training samples are larger, the same would work by changing 1000---->100.
Just make sure to print “i” at the end of your preprocessing script just to make sure you know what to set the __len__ to in your CustomDataset. |
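A rough sketch of that layout (file names are made up, and the shards are assumed to have no header row): the preprocessing loop writes shards of 1000 samples, and the dataset maps a global index to a (file, row) pair:
import pandas as pd
import torch
from torch.utils.data import Dataset

SHARD = 1000  # samples per file

# preprocessing sketch: sample i is appended to 'data-' + str(i // SHARD) + '.csv'

class ShardedCsvDataset(Dataset):
    def __init__(self, num_samples):
        self.num_samples = num_samples

    def __len__(self):
        return self.num_samples

    def __getitem__(self, i):
        shard, row = i // SHARD, i % SHARD
        # read a single row out of the small shard file
        df = pd.read_csv(f"data-{shard}.csv", skiprows=row, nrows=1, header=None)
        return torch.tensor(df.values[0], dtype=torch.float32)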
st182254 | I have observed that strides of input and output tensors (just before and after network inference) are different. This behavior can be observed both in python and C++. I am not sure whether this is an inherent feature or a bug.
Example:
input_Tensor size: [1, 3, 256, 256]
input_Tensor strides: [256x256x3, 256x256, 256, 1]
output_Tensor size: [1, 3, 256, 256]
output_Tensor strides: [256x256x3, 1 , 256x3 , 3]
This behavior is specially problematic when in C++ (libtorch) we try to set input Tensor data from a buffer like this:
torch::Tensor input_Tensor = torch::from_blob(input_buffer,{1,3,256,256},torch::kFloat);
and after inference try to copy output Tensor data to a buffer like this:
memcpy(output_buffer, (float*)output_Tensor.data_ptr(), 256*256*3*sizeof(float));
So, if strides of input_Tensor and output_Tensor differs like in above example, we will get wrong data filled in output_buffer. One way to resolve this is to make output_Tensor contiguous before doing memcpy. |
st182255 | Solved by ptrblck in post #7
Thanks for the update.
I haven’t reproduced the complete issue in libtorch, since you are already manually manipulating the .data attribute (which is not recommended, as it could yield unwanted issues, so wrap the code in a with torch.no_grad() block and assign the new nn.Parameter directly if nece… |
st182256 | I guess you are using the channels_last memory format, which would change the meta data as described here 5. |
st182257 | Yes, output_Tensor is in channels_last format but I have not explicitly defined anywhere to use channels_last memory format. It’s just that the input_Tensor is contiguous but output_Tensor is not. Pytorch (libtorch) automatically did it. |
st182258 | That shouldn’t happen. Could you post an executable code snippet to reproduce this issue? |
st182259 | Please find below exact steps and code to reproduce this issue. In my original code, I have transferred my model weights from a caffe2 network, so below python code is simulating this thing. First we will create our model file from this python code:
My execution environment:
Python torch version: 1.7.0+cu101
Libtorch version: 1.7.0 cuda 10.1
class TestNet(nn.Module):
    def __init__(self):
        super(TestNet, self).__init__()
        self.conv1 = nn.Conv2d(2,10,3,padding=1)
    def forward(self,x):
        x = self.conv1(x)
        return x

def get_arr():
    arr = np.random.rand(10,3,3,2)
    arr = np.transpose(arr,(0,3,1,2))
    return arr

model = network.TestNet()
arr = get_arr()
model._modules["conv1"].weight.data = torch.from_numpy(arr.astype(np.float32))
print (arr.shape)
print (arr.strides)
print (arr.data.contiguous)
model.eval()
arr = np.zeros((1,2,256,256)).astype(np.float32)
inp_T = torch.from_numpy(arr)
traced_script_module = torch.jit.trace(model, inp_T)
traced_script_module.save("TestNet.pt")
We have created our model file from above code, now we will execute below C++ code:
torch::jit::script::Module model = torch::jit::load("TestNet.pt");
model.to(at::kCUDA);
model.eval();
torch::NoGradGuard no_grad;
torch::Tensor tensor_in;
float* in_data = new float[2 * 256 * 256];
tensor_in = torch::from_blob(in_data, { 1, 2, 256, 256 }, torch::kFloat);
tensor_in = tensor_in.to(at::kCUDA);
tensor_in.set_requires_grad(0);
cout << tensor_in.is_contiguous() << endl;
cout << tensor_in.strides() << endl;
cout << tensor_in.sizes() << endl;
std::vector<torch::jit::IValue> inputs;
inputs.push_back(tensor_in);
torch::Tensor pred_out = model.forward(inputs).toTensor();
cout << pred_out.is_contiguous() << endl;
cout << pred_out.strides() << endl;
cout << pred_out.sizes() << endl;
Output of python code:
(10, 2, 3, 3)
(144, 8, 48, 16)
False
Output of C++ code:
1
131072,65536,256,1
1,2,256,256
0
655360,1,2560,10
1,10,256,256
We can see from above C++ output that although our input tensor is contiguous, output tensor is not. Interestingly, I found that get_arr() function in python code is responsible for all this. For example, if we replace above get_arr() function with below one, we see that now our output tensor is also contiguous.
Replace above get_arr() with this:
def get_arr():
arr = np.random.rand(10,3,3,2)
arr = np.transpose(arr,(0,3,1,2))
arr = np.ascontiguousarray(arr)
return arr
Now, output of python code:
(10, 2, 3, 3)
(144, 72, 24, 8)
True
Output of C++ code:
1
131072,65536,256,1
1,2,256,256
1
655360,65536,256,1
1,10,256,256
Observe that the output tensor is contiguous now. How is this behaviour connected to our model weights? We can infer that if the model weights are not contiguous then the output tensor is also not contiguous. But how do we explain the following behaviour then:
If get_arr() is like this:
def get_arr():
arr = np.random.rand(10,2,3,3)
arr = np.transpose(arr,(0,1,3,2))
return arr
Output of python code:
(10, 2, 3, 3)
(144, 72, 8, 24)
False
Output of C++ code:
1
131072,65536,256,1
1,2,256,256
1
655360,65536,256,1
1,10,256,256
So, we can see that in this case the model weights are not contiguous but the output tensor is contiguous. It would be great if someone could explain this weird behaviour. |
st182260 | Thanks for the update.
I haven’t reproduced the complete issue in libtorch, since you are already manually manipulating the .data attribute (which is not recommended, as it could yield unwanted issues, so wrap the code in a with torch.no_grad() block and assign the new nn.Parameter directly if necessary) and set it into a channels-last format.
As your Python script shows, arr.data.contiguous returns False and also:
print(model.conv1.weight.stride())
> (18, 1, 6, 2)
print(model.conv1.weight.is_contiguous())
> False
print(model.conv1.weight.is_contiguous(memory_format=torch.channels_last))
> True
shows that your manual assignment is setting the weight parameter to channels_last. |
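For completeness, a minimal sketch of the recommended assignment (assuming the TestNet model and get_arr() from the snippet above): copy the converted weights into the existing parameter under no_grad instead of replacing .data, so the parameter keeps its default contiguous (NCHW) layout.
with torch.no_grad():
    # np.ascontiguousarray drops the transposed (channels_last-like) strides before the copy
    w = torch.from_numpy(np.ascontiguousarray(get_arr())).float()
    model.conv1.weight.copy_(w)
print(model.conv1.weight.is_contiguous())  # expected: True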
st182261 | Yes, it looks like so. Strides and contiguity of output tensor depends on model weights. Anyway, thanks Piotr for your help. |
st182262 | Hi, I was playing with PyTorch, trying to find out how much space some things occupy in VRAM using this script, and found that when I first send anything to the GPU I get a fixed memory usage (seen through nvidia-smi) of around 1284MB.
I was wondering what this is due to. I would have guessed the CUDA runtime sits in normal RAM, but it seems to also occupy space in VRAM, though I don't understand how that would work.
Thanks! |
st182263 | Solved by eqy in post #2
Great question. There are several things going on here. First, a bare CUDA context on a GPU will use about 300MiB VRAM (this gets initialized the first time you call .cuda()). Then libtorch_cuda.so (containing all GPU kernels for various tensor operators, etc.) will be loaded into VRAM taking anothe… |
st182264 | Great question. There are several things going on here. First, a bare CUDA context on a GPU will use about 300MiB VRAM (this gets initialized the first time you call .cuda()). Then libtorch_cuda.so (containing all GPU kernels for various tensor operators, etc.) will be loaded into VRAM taking another 500MiB or so. Finally if you use any cuDNN or cuBLAS functions loading those libraries will also take memory. Additionally, the caching allocator 5 may reserve additional GPU memory ahead of time even before it is required for tensors to save time when more allocations are needed. |
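To see this split for yourself, a rough sketch (not from the original post); the allocator statistics below only cover tensor memory, so the gap to the per-process number in nvidia-smi is roughly the context plus library overhead described above:
import torch

x = torch.randn(1, device="cuda")  # first CUDA op: context and kernel libraries get loaded
print(torch.cuda.memory_allocated() / 2**20, "MiB allocated for tensors")
print(torch.cuda.memory_reserved() / 2**20, "MiB reserved by the caching allocator")
# compare both numbers with the process entry shown by `nvidia-smi`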
st182265 | Hi thanks a lot for the answer! That makes perfect sense.
The way I originally thought about VRAM was that it only stored the tensors the GPU would operate on, but that the instructions were stored in RAM and fed into the GPU live on each step. From your answer I can see that I was totally wrong, and you actually do store libraries in VRAM. If you don’t mind me asking, is there any resource I could read to get a decent overall idea of how this flow works? |
st182266 | Sorry, I am relatively new to PyTorch and I know this is an old and common problem:
RuntimeError: CUDA out of memory. Tried to allocate 160.00 MiB (GPU 0; 14.76 GiB total capacity; 13.24 GiB already allocated; 97.75 MiB free; 13.63 GiB reserved in total by PyTorch)
I have been trying to train a CycleGAN for two days, but I keep running into GPU memory errors like the one above.
When I try to solve the runtime CUDA out-of-memory error, I try to interpret the message. My reading is that PyTorch first tried to allocate 160MB on the GPU during my runtime session. There is 14.76GB of total capacity on this particular GPU, but only 13.63GB is allowed to be used by PyTorch? And 13.24GB has already been allocated in this runtime session, so only 97.75MB of free memory is left, and since the required 160MB is larger than the free 97.75MB it throws the memory error?
Am I interpreting the message correctly?
If yes, I don't know why my training uses so much memory. For CycleGAN my generator has about 6 million parameters and the discriminator probably 0.5 million; adding that up it should be around 40-50MB. During my training batches my image size is 256 * 256 and each batch size is 30, which comes to around 8-9MB by my estimate. Even adding everything up, it's way less than 160MB. I don't think there is a memory leak, because the error is thrown in the first training epoch during the first generator forward pass. So did I mess something up, or is this behavior expected?
Also, is it true that the 13.24GB of GPU memory in the error message was allocated by me as well, or is it used by another application? I am asking because I am not doing anything with the GPU besides training the model, so how in the world has 13.24GB already been allocated?
One last question. I know that to fix the problem I need to either reduce the batch size or reduce the model size. However, many times when I reduce the batch size and model size, the reported required allocation sometimes even increases. For example, if I halve the channels in the generator's conv layers, the expected memory usage should drop by roughly half, but sometimes the error says the required allocation actually increased by half. Why is that happening? Also, regarding the free memory: sometimes it shows, for example, that I have 80MB free, but when I reduce the batch size and re-run it says I only have 40MB available. Why is this happening as well?
Sorry for the many questions, but the memory problem is just really frustrating to solve |
st182267 | Yes, the issue is that more memory is required than what is available with current allocations. Note that typical training will use far more memory than just what is required for model parameters and the input data. All of the intermediate activations are typically stored for the backward pass and this can often use the bulk of required memory during training. The behavior of the memory requirements being far greater than just the parameters and input size is expected.
(A clarification on input images: assuming single precision input, a batch size of 30 with 256x256 input images will use 30x256x256x3 (channels) x 4 bytes which is closer to 23MB.)
You can check the distribution of memory usage via nvidia-smi or similar commands. You might be able to get a few tens of MB back by killing any graphical environments (e.g., a Linux deskop environment) that are running concurrently on the GPU.
For this last part it is likely due to the different sizes of tensors that need to be allocated. The total memory requirement of the model maybe lower when you reduce the channels, but it may fail to allocate at a different part of the model so the error message reports a different number. For example, consider a hypothetical scenario where your model needs to allocate 3 tensors of size 30MiB each but there is only 50 MiB total of GPU memory available. After the first allocation, it will fail with 20MiB remaining while needing to allocate 30MiB for just the second tensor. (30/50MiB used) If you reduce each tensor to 20MiB, it will fail after the second allocation while needing to allocate 20MiB for the third tensor. (40/50MiB used) So really the “how much free” value reported in the error is just for diagnostic purposes and not an indicator that what you are doing isn’t helping. |
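If it helps, here is a rough sketch (not from the original posts) for watching allocated vs. peak memory around a forward/backward pass; the Conv2d is just a stand-in for the generator/discriminator:
import torch

def log_mem(tag):
    # memory tracked by PyTorch's caching allocator on the current device
    print(f"{tag}: allocated={torch.cuda.memory_allocated()/2**20:.0f} MiB, "
          f"peak={torch.cuda.max_memory_allocated()/2**20:.0f} MiB")

model = torch.nn.Conv2d(3, 64, 3, padding=1).cuda()
x = torch.randn(30, 3, 256, 256, device="cuda", requires_grad=True)

log_mem("before forward")
out = model(x)            # intermediate activations kept for the backward pass live here
log_mem("after forward")
out.sum().backward()
log_mem("after backward")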
st182268 | Thanks. But just to clarify: is the 13.24GB of allocated GPU memory in the error message also used by my runtime session, or is it used by some other application I am not aware of? Compared to 160MB it is still a 100x jump, which is a lot if it's all used by my session. |
st182269 | It is probably used by your session but it is hard to know without running something like nvidia-smi to see what process the memory belongs to. |
st182270 | Hi all,
I have developed a tracking system which operates on video. I also have strong embeddability constraints (my code needs to run on small cards/CPUs) and the tracker must run in real time. Thus, I spend a lot of time optimizing my code, using torch functions and doing as many operations as possible on the GPU.
But one point is not clear to me: I am not sure when data is exchanged between the CPU and the GPU, and how exactly memory is managed. For instance:
a = torch.tensor([1, 2, 3]).float().cuda() # tensor a will be on GPU
b = a + 1 # tensor b will also be on GPU
c = len(a) # tensor c will still be on GPU
A few questions about these basic operations:
On line 1, does the .float() function make a copy of a in memory?
On line 2, does the + operation copy the data of a to the CPU, then compute the result, then send the data back to the GPU? Or is torch intelligent enough to make the addition directly on the GPU? Is it always better to use the .add() torch function?
Do functions such as len() make a copy of the tensor on the CPU before counting?
And finally, is there a difference between these two lines:
a = torch.tensor(1.).cuda().add_(1.)
a = torch.tensor(1.).cuda().add_(torch.tensor(1.).cuda())
Thank you in advance, any help will be very appreciated! |
st182271 | Solved by albanD in post #2
c is not a Tensor here but a python number. And as such it will be on the CPU.
For you other questions:
.float() makes a full copy if the given Tensor is not of floating type, otherwise it returns the input as-is
All operations on GPU Tensors will happen on GPU. + or add() are the same thing so… |
st182272 | pierremrg:
c = len(a) # tensor c will still be on GPU
c is not a Tensor here but a python number. And as such it will be on the CPU.
For your other questions:
.float() makes a full copy if the given Tensor is not of floating type, otherwise it returns the input as-is
All operations on GPU Tensors will happen on GPU. + or add() are the same thing so you can use either.
len() returns a python number. So the result will always be on CPU (as python number cannot be on GPU). And it returns the size of the first dimension, which doesn’t need access to the content of the Tensor, only its metadata, which are read on the CPU side.
The two lines are very similar. The first one is preferable as a single plain number can be sent as an argument to the GPU kernel which will be faster than creating a full Tensor containing a 1. and send that to the GPU.
Some other pointers for perf:
PyTorch will never copy the content of a Tensor between devices for you. A copy only happens when you call .cuda(), .cpu(), etc.
It is better to create the Tensor you want on the right device directly to avoid extra copies: torch.tensor([1, 2, 3]).float().cuda() should be torch.tensor([1, 2, 3], dtype=torch.float, device="cuda") |
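A quick check of those points (a small sketch, not from the original answer):
import torch

a = torch.tensor([1., 2., 3.], device="cuda")
b = a + 1                      # computed on the GPU; the result stays on the GPU
print(b.device)                # cuda:0
print(type(len(a)), len(a))    # <class 'int'> 3 -> a plain Python number read from metadata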
st182273 | My question is: what happens to the memory region of a tensor object on the CPU when the tensor (in PyTorch) is sent to a CUDA device? |
st182274 | That depends.
In PyTorch, moving a CPU tensor to GPU does create a copy, so they both live independently after.
If the CPU tensor is still referenced, e.g.
by your program,
by autograd (if you used it in a computation involving values that required grad and PyTorch needs it for computing the backward),
by a "view"-Tensor (e.g. a slice),
it will still be around.
If the CPU tensor (or the storage, to be more precise) is not used after the copy, it will be freed (with the usual caveats about how memory allocation works with system libraries potentially caching etc.).
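A tiny sketch of these copy semantics (assuming a CUDA device is available):
import torch

cpu_t = torch.zeros(3)
gpu_t = cpu_t.cuda()   # full copy; the two tensors now live independently
cpu_t[0] = 42.0
print(gpu_t)           # still zeros: modifying the CPU tensor does not touch the GPU copy
del cpu_t              # once no references remain, the CPU storage can be freed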
Best regards
Thomas |
st182275 | Hello…
Suppose I have a tensor t that resides on GPU and I would like to apply some math operations on it with constants:
t = torch.FloatTensor(...).cuda()
t = 0.1 * t + 0.2
I wonder how Torch/Python handles the constants 0.1 and 0.2. Are they transferred from CPU to GPU memory each time this operation is performed, or is the code optimized automatically and the constants are moved to GPU?
I am concerned about the efficiency.
Should I define constant CUDA tensors and use them instead (less readable code)?
Thanks. |
st182276 | Solved by googlebot in post #2
it comes down to two cuda kernels that, strongly simplified, look like:
mul(float* a, float* b)
vs
mul(float* a, float b)
second one is marginally faster, as one “load from global memory pointer” operation is avoided
but this is not related to CPU-GPU transfers, note that function arguments, la… |
st182277 | it comes down to two cuda kernels that, strongly simplified, look like:
mul(float* a, float* b)
vs
mul(float* a, float b)
second one is marginally faster, as one “load from global memory pointer” operation is avoided
but this is not related to CPU-GPU transfers; note that the function arguments, launch configuration and the signal to execute the cuda kernel itself require a data transfer anyway (I'd assume it is a single transfer that uses the faster "constant memory", but I haven't investigated this).
The takeaway is that it is better NOT to use tensors when tensor-scalar operator versions exist (they usually do when you don't need to wrap numbers in tensors). OTOH, the timing difference is negligible. |
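To make the two call forms concrete, a small sketch (not from the original post) contrasting Python-number scalars with 0-dim tensor constants:
import torch

t = torch.randn(1000, 1000, device="cuda")

y1 = 0.1 * t + 0.2                        # scalars are passed directly as kernel arguments
c1 = torch.tensor(0.1, device="cuda")
c2 = torch.tensor(0.2, device="cuda")
y2 = c1 * t + c2                          # same math, but with extra tensor operands
print(torch.allclose(y1, y2))             # True; the scalar form is the preferred one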
st182278 | Thanks for the nice explanation.
This is a very common use-case, and I almost always see constants being used as constants in the equations instead of constant tensors.
There is also the caching; not familiar with GPU caching, but assuming it is similar to CPU caching, such constants are probably cached if they are frequently accessed. |
st182279 | It is not about caching; what I tried to explain is that to call a kernel you either transfer a pointer to the tensor data (an address of 8 bytes) or a scalar (1-8 bytes). It is the same overhead, and probably unavoidable (unless you were to compile specialized kernels with fewer function arguments). |
st182280 | I understand that, but the real cost comes from whether the value/pointer is accessed from cache or from GPU/CPU memory, the latter being much slower (especially from CPU memory). |
st182281 | In that sense, I think special memory area is used to pass arguments, reading from which is at least not slower than from cached global memory. And I haven’t mentioned that the version with pointers requires doing address arithmetic in all threads, so it is worse than my oversimplified example suggests. |
st182282 | I am using a pre-trained PyTorch model (NCHW format), but my acceleration platform requires the model in NHWC format.
Is there an easy way to convert a PyTorch model to NHWC format?
I have permuted the weights fetched from PyTorch's state_dict() method like this:
if('conv' in str(key)):
params[key] = value.permute(0,2,3,1)
But I am unable to repopulate the model with the permuted dictionary; model.load_state_dict(Updated_params) gives:
size mismatch for stage4.2.branches.2.3.conv1.weight: copying a param with shape torch.Size([128, 3,
3, 128]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
To resolve this, how can I define the layers of the new model in NHWC format in PyTorch?
The link below doesn't seem to solve this issue.
Channels Last Memory Format in PyTorch
Thanks! |
st182283 | Hi,
You are using wrong permutation order.
_singh:
value.permute(0,2,3,1)
See this example:
x = torch.randn(1, 128, 128, 3)
# your order
x.permute(0,2,3,1).shape # torch.Size([1, 128, 3, 128])
# correct order
x = x.permute(0, 3, 1, 2)
x.shape # torch.Size([1, 3, 128, 128])
And the error corresponds to this issue.
I am still not sure whether it would work fine or not because of the channel change, due to the forward method. For instance, concatenation, squeezing, and any other method that takes a dim argument may cause issues if it exists in the forward function. You may need to override the forward function w.r.t. the channel changes.
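A tiny illustration of that point about dim-dependent ops (a hypothetical forward fragment, not from this thread): in NCHW the channel axis is dim=1, while in a truly NHWC-ordered tensor it moves to dim=3.
import torch

a = torch.randn(1, 4, 16, 16)   # NCHW
b = torch.randn(1, 4, 16, 16)
cat_nchw = torch.cat([a, b], dim=1)             # concatenate along channels -> (1, 8, 16, 16)

a_nhwc = a.permute(0, 2, 3, 1)                  # (1, 16, 16, 4)
b_nhwc = b.permute(0, 2, 3, 1)
cat_nhwc = torch.cat([a_nhwc, b_nhwc], dim=3)   # same op, channel axis is now dim=3 -> (1, 16, 16, 8)
print(cat_nchw.shape, cat_nhwc.shape)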
Bests |
st182284 | Hi,
Thanks for your reply. If you look closely, you are recommending the solution the other way around, i.e. NHWC to NCHW. I want the opposite.
I want to go from (1, 3, 128, 128) to (1, 128, 128, 3), for which
value.permute(0,2,3,1)
is the correct order.
Thanks for the insight regarding forward function though. |
st182285 | _singh:
I want the opposite
Oh sorry! I mistook the weight for the input.
_singh:
The link below doesn’t seem to solve this issue.
Channels Last Memory Format in PyTorch
What was the issue with this tutorial? I think it is fine, as you can send all layers to channels-last or channels-first mode. I am not sure if I am missing something here.
import torch
import torch.nn as nn

# define channel first
conv = nn.Conv2d(128, 128, 3) # replicate [128, 128, 3, 3] weight tensor
print(conv.weight.shape)
print(conv.weight.stride())
# convert to channel last
conv = conv.to(memory_format=torch.channels_last)
print(conv.weight.shape)
print(conv.weight.stride())
# convert back to channel first
conv = conv.to(memory_format=torch.contiguous_format)
print(conv.weight.shape)
print(conv.weight.stride())
# output
# torch.Size([128, 128, 3, 3])
# (1152, 9, 3, 1)
# torch.Size([128, 128, 3, 3])
# (1152, 1, 384, 128)
# torch.Size([128, 128, 3, 3])
# (1152, 9, 3, 1) |
st182286 | Nikronic:
conv = conv.to(memory_format=torch.channels_last)
print(conv.weight.shape)
print(conv.weight.stride())
This is the correct way to convert the existing model or layer. Please also make sure you are converting inputs as well
input = input.to(memory_format=torch.channels_last) |
st182287 | Please see the code snippet below and the comments on the print lines:
import torch

device = 'cuda'
input = torch.randint(1, 10, (2, 8, 4, 4), dtype=torch.float32, device= device, requires_grad=True)
model = torch.nn.Conv2d(8, 4, 3)
print(input.shape) # torch.Size([2, 8, 4, 4])
input = input.contiguous(memory_format=torch.channels_last)
print(input.shape) # still torch.Size([2, 8, 4, 4]) - need [2, 4, 4, 8] (NHWC) here
model = model.to(memory_format=torch.channels_last)
print(model) # output= Conv2d(8, 4, kernel_size=(3, 3), stride=(1, 1))
model = model.to(device)
out = model(input)
print(out.shape) # output= torch.Size([2, 4, 2, 2]) | need [2, 2, 2, 4] (NHWC) here
print(out.is_contiguous(memory_format=torch.channels_last)) # Output: True |
st182288 | Please see this comment.
It appears memory_format = torch.channels_last is not converting the layers/input to NHWC shape; it performs a different function. The PyTorch Channels Last Memory Format page also doesn't mention NHWC anywhere on the whole page.
I hope I have put my requirement correctly. Thanks. |
st182289 | Can you please clarify what you mean by acceleration platform in this case? PyTorch operators (and modules) require CV tensors to be in the specific indexing order NCHW. To use accelerated NHWC kernels we preserve the dimension order but lay out the tensor in memory differently. |
st182290 | The acceleration platform is a custom processor. I am currently using the PyTorch model, converting it to ONNX/Keras and then porting it to the processor for inference, but I face this challenge of NCHW vs NHWC.
If possible, can you please shed some light on the possibility of updating the model definition after permuting the layers, as mentioned in my original comment?
thanks! |
st182291 | All PyTorch operators are written to take NCHW as the dimension order. There is no way to change it (you can only change the memory format, i.e. how the tensor is laid out in memory).
If you really want to change the order of dimensions you would need to permute each model parameter manually. Take into account that your model will not work in PyTorch anymore. |
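For illustration, a hedged sketch of such a manual permutation, building an NHWC copy of the weights purely for export to the custom runtime (the model variable is a hypothetical stand-in for the loaded PyTorch model; as noted above, the permuted dict is not loadable back into PyTorch):
import torch

nhwc_state = {}
for name, value in model.state_dict().items():
    if value.dim() == 4:   # rough heuristic for conv weights stored as (out_C, in_C, kH, kW)
        nhwc_state[name] = value.permute(0, 2, 3, 1).contiguous()  # -> (out_C, kH, kW, in_C)
    else:
        nhwc_state[name] = value
# nhwc_state can now be serialized for the accelerator's own loader.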
st182292 | Hello,
I hope this is the right forum to post my issue.
During the model training I received the following error message with an additional code snippet to reconstruct the error. The code snippet throws the same error.
RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR
You can try to repro this exception using the following code snippet. If that doesn't trigger the error, please include your original repro script when reporting this issue.
import torch
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.benchmark = True
torch.backends.cudnn.deterministic = False
torch.backends.cudnn.allow_tf32 = True
data = torch.randn([1, 256, 128, 128], dtype=torch.float, device='cuda', requires_grad=True)
net = torch.nn.Conv2d(256, 256, kernel_size=[3, 3], padding=[1, 1], stride=[1, 1], dilation=[1, 1], groups=1)
net = net.cuda().float()
out = net(data)
out.backward(torch.randn_like(out))
torch.cuda.synchronize()
If I set
torch.backends.cudnn.benchmark = False
the error is not triggered. Originally, the error was triggered when I used transforms.RandomCrop(256) for the training data and transforms.RandomCrop(512) for the validation data. With the same crop size the error is not triggered.
I don’t know if this is a bug or if I did something wrong.
BR,
Patrick |
st182293 | Could you check if you are running out of memory?
If that’s not the case, could you post an executable code snippet as well as the output of:
python -m torch.utils.collect_env |
st182294 | Hi,
thanks for the reply. I'm not running out of memory.
Environment information:
Collecting environment information...
PyTorch version: 1.8.0
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: Could not collect
Python version: 3.8 (64-bit runtime)
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: Quadro RTX 8000
GPU 1: Quadro RTX 8000
Nvidia driver version: 450.51.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.19.2
[pip3] torch==1.8.0
[pip3] torchvision==0.9.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 10.2.89 h8f6ccaa_8 conda-forge
[conda] mkl 2020.4 h726a3e6_304 conda-forge
[conda] mkl-service 2.3.0 py38h1e0a361_2 conda-forge
[conda] mkl_fft 1.3.0 py38h5c078b8_1 conda-forge
[conda] mkl_random 1.2.0 py38hc5bc63f_1 conda-forge
[conda] numpy 1.19.2 py38h54aff64_0
[conda] numpy-base 1.19.2 py38hfa32c7d_0
[conda] pytorch 1.8.0 py3.8_cuda10.2_cudnn7.6.5_0 pytorch
[conda] torchvision 0.9.0 py38_cu102 pytorch
I’m not sure what you mean with executable code snippet, the one provided above is executable and triggers the error. But I wrapped it in a main function:
import torch
def main():
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.benchmark = True
torch.backends.cudnn.deterministic = False
torch.backends.cudnn.allow_tf32 = True
data = torch.randn([1, 256, 128, 128], dtype=torch.float, device='cuda', requires_grad=True)
net = torch.nn.Conv2d(256, 256, kernel_size=[3, 3], padding=[1, 1], stride=[1, 1], dilation=[1, 1], groups=1)
net = net.cuda().float()
out = net(data)
out.backward(torch.randn_like(out))
torch.cuda.synchronize()
if __name__ == "__main__":
main()
Error:
Traceback (most recent call last):
File "error.py", line 16, in <module>
main()
File "error.py", line 11, in main
out = net(data)
File ".../anaconda3/envs/TestTorch18/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File ".../anaconda3/envs/TestTorch18/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 399, in forward
return self._conv_forward(input, self.weight, self.bias)
File ".../anaconda3/envs/TestTorch18/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 395, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR
You can try to repro this exception using the following code snippet. If that doesn't trigger the error, please include your original repro script when reporting this issue.
import torch
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.benchmark = True
torch.backends.cudnn.deterministic = False
torch.backends.cudnn.allow_tf32 = True
data = torch.randn([1, 256, 128, 128], dtype=torch.float, device='cuda', requires_grad=True)
net = torch.nn.Conv2d(256, 256, kernel_size=[3, 3], padding=[1, 1], stride=[1, 1], dilation=[1, 1], groups=1)
net = net.cuda().float()
out = net(data)
out.backward(torch.randn_like(out))
torch.cuda.synchronize()
ConvolutionParams
data_type = CUDNN_DATA_FLOAT
padding = [1, 1, 0]
stride = [1, 1, 0]
dilation = [1, 1, 0]
groups = 1
deterministic = false
allow_tf32 = true
input: TensorDescriptor 0x556f5877abe0
type = CUDNN_DATA_FLOAT
nbDims = 4
dimA = 1, 256, 128, 128,
strideA = 4194304, 16384, 128, 1,
output: TensorDescriptor 0x556f595a3a20
type = CUDNN_DATA_FLOAT
nbDims = 4
dimA = 1, 256, 128, 128,
strideA = 4194304, 16384, 128, 1,
weight: FilterDescriptor 0x556f595fe4b0
type = CUDNN_DATA_FLOAT
tensor_format = CUDNN_TENSOR_NCHW
nbDims = 4
dimA = 256, 256, 3, 3,
Pointer addresses:
input: 0x7ff411c00000
output: 0x7ff412e40000
weight: 0x7ff412c00000 |
st182295 | Great! Thanks for the update. We’ll check the workload and forward it to cudnn, if applicable. |
st182296 | Looking at the tutorial for the channels last format available at (beta) Channels Last Memory Format in PyTorch 4, I decided to try it out, but found myself facing a roughly x6 performance penalty rather than any gain. Looking at it, I feel that this is because the memory reorganisation was happening on the GPU–the only time the Input variable is directly exposed is within the training loop. Obviously, restructuring the data once it’s loaded onto the GPU is undesirable.
However, is there any way to change the channel ordering when working with the DataLoader class or using ImageFolder to load data? Neither class has its own memory format, since they are iterables, and passing in lambda x: x.to(memory_format=torch.channels_last) or lambda x: x.contiguous(memory_format=torch.channels_last) to a transforms.Lambda function results in this error:
AttributeError: Can't pickle local object 'main.<locals>.<lambda>'
It would seem that a unique transform method would be needed? |
st182297 | Could you try to pass a custom transformation method instead of a lambda to the transforms?
Alternatively you could also transform the data inside the Dataset in case you are using a custom implementation (or want to use one). |
st182298 | You mean including a method like:
def channels_last(input):
return(input.contiguous(memory_format=torch.channels_last))
and then putting that inside transforms.Lambda? That gives the same “can’t pickle” error AttributeError: Can't pickle local object 'main.<locals>.channels_last' if it’s included locally, and if it’s located globally then it appears to get stuck on something when inside the training loop–no error is thrown but it doesn’t complete an epoch (and system utilisation drops to nothing).
Writing a custom dataset would entail rewriting the whole functionality of ImageFolder, which I was hoping to avoid. |
st182299 | No, I was thinking about writing a transformation class, such as:
class ToChannelsLast:
def __call__(self, x):
if x.ndim == 3:
x = x.unsqueeze(0)
elif x.ndim !=4:
raise RuntimeError
return x.to(memory_format=torch.channels_last)
def __repr__(self):
return self.__class__.__name__ + '()'
import torchvision.transforms as transforms
transform = transforms.Compose([
transforms.ToPILImage(),
transforms.RandomCrop(224),
transforms.ToTensor(),
ToChannelsLast()
])
x = torch.randn(3, 256, 256)
out = transform(x)
out.is_contiguous(memory_format=torch.channels_last)
out.stride() |
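Since the original question involved ImageFolder and DataLoader, here is a hedged alternative sketch (the loader and model names are just examples): convert whole batches to channels_last right after they come off the loader, instead of per sample inside the transform pipeline.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

device = "cuda"
model = nn.Conv2d(3, 8, 3).to(device).to(memory_format=torch.channels_last)
loader = DataLoader(TensorDataset(torch.randn(16, 3, 224, 224)), batch_size=4)

for (images,) in loader:
    images = images.to(device, memory_format=torch.channels_last, non_blocking=True)
    out = model(images)
    print(out.is_contiguous(memory_format=torch.channels_last))  # expected: True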