st49268 | I tried reducing the patch size to 100*100 as well, but it still can’t fit in the GPU memory!! |
st49269 | Good day all,
I know there have been many answers to similar questions, however I haven’t found a solution. My torch.cuda.is_available() is returning False, but weirdly it used to return True with the same configuration. The code is running on a GPU cluster. I was wondering if anyone wiser than me can identify anything wrong with the following configuration:
PyTorch version: 1.4.0
Is debug build: False
CUDA used to build PyTorch: 10.0
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 4.9.0
Clang version: Could not collect
CMake version: Could not collect
Python version: 3.6 (64-bit runtime)
Is CUDA available: False
CUDA runtime version: 10.0.130
GPU models and configuration:
GPU 0: Tesla V100-PCIE-16GB
GPU 1: Tesla V100-PCIE-16GB
GPU 2: Tesla V100-PCIE-16GB
Nvidia driver version: 418.40.04
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.18.5
[pip3] numpydoc==1.1.0
[pip3] torch==1.4.0
[pip3] torchvision==0.5.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 10.0.130 0
[conda] mkl 2020.1 217
[conda] mkl-service 2.3.0 py36he904b0f_0
[conda] mkl_fft 1.1.0 py36h23d657b_0
[conda] mkl_random 1.1.1 py36h0573a6f_0
[conda] numpy 1.18.5 py36ha1c710e_0
[conda] numpy-base 1.18.5 py36hde5b4d6_0
[conda] numpydoc 1.1.0 py_0
[conda] pytorch 1.4.0 py3.6_cuda10.0.130_cudnn7.6.3_0 pytorch
[conda] torchvision 0.5.0 py36_cu100 pytorch
Here is the nvidia-smi output:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.40.04 Driver Version: 418.40.04 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla V100-PCIE... On | 00000000:3B:00.0 Off | Off |
| N/A 31C P0 25W / 250W | 0MiB / 16130MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 Tesla V100-PCIE... On | 00000000:AF:00.0 Off | Off |
| N/A 32C P0 27W / 250W | 0MiB / 16130MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 2 Tesla V100-PCIE... On | 00000000:D8:00.0 Off | Off |
| N/A 30C P0 22W / 250W | 0MiB / 16130MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
I appreciate any assistance. Thank you! |
st49270 | Solved by ptrblck in post #4
After e.g. a driver change, the nvidia-smi output might still be valid, but CUDA programs might not work properly and thus you should restart the system.
As a test you could compile any CUDA sample and run it. Since other conda envs are also not working, I assume that someone might have indeed chan… |
st49271 | Are other CUDA programs working on this server?
If so, did you change anything on this machine, e.g. did you upgrade some drivers without restarting the node? |
st49272 | It is possible that something could have changed without me knowing about it, since it is not my server. However the nvidia-smi output is still what I posted above. Perhaps I should try reinstall pytorch? Is there another Cuda test I could run?
The only thing that I tried to do was use another conda environment with the latest version of pytorch, version 1.6.0 with cuda 10.1. This latest pytorch version had the same problem unfortunately. |
st49273 | After e.g. a driver change, the nvidia-smi output might still be valid, but CUDA programs might not work properly and thus you should restart the system.
As a test you could compile any CUDA sample and run it. Since other conda envs are also not working, I assume that someone might have indeed changed some drivers or runtimes on the server.
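A quick PyTorch-side sanity check (a sketch, complementary to compiling a CUDA sample) forces an actual kernel launch, so a broken driver/runtime combination fails loudly:
import torch

print(torch.version.cuda, torch.cuda.is_available())
if torch.cuda.is_available():
    x = torch.randn(8, 8, device='cuda')
    print((x @ x).sum().item())  # launches a real kernel on the GPU |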
st49274 | It seems that it was definitely a problem on the cluster’s side. They resolved it after I queried them. Cheers! |
st49275 | I’m currently trying to replace values in an AxBxC matrix given an array of indices and corresponding values.
For example, I have an index array [[0,1], [1,0], [3,4]] for a 6x6x4 matrix, with corresponding values [[1,1,1,1],[2,2,2,2],[3,3,3,3]]. I’d like to replace the values in the 6x6x4 matrix at [0,1,:], [1,0,:], and [3,4,:] with [1,1,1,1], [2,2,2,2] and [3,3,3,3]. Extracting is possible with given indices, but not the other way around, it seems. |
st49276 | Solved by ptrblck in post #2
This should work:
A, B, C = 6, 6, 4
x = torch.zeros(A, B, C)
idx = torch.tensor([[0,1], [1,0], [3,4]])
val = torch.tensor([[1,1,1,1],[2,2,2,2],[3,3,3,3]]).float()
x[idx[:, 0], idx[:, 1]] = val
print(x)
> tensor([[[0., 0., 0., 0.],
[1., 1., 1., 1.],
[0., 0., 0., 0.],
… |
st49277 | This should work:
import torch

A, B, C = 6, 6, 4
x = torch.zeros(A, B, C)
idx = torch.tensor([[0,1], [1,0], [3,4]])
val = torch.tensor([[1,1,1,1],[2,2,2,2],[3,3,3,3]]).float()
x[idx[:, 0], idx[:, 1]] = val
print(x)
> tensor([[[0., 0., 0., 0.],
[1., 1., 1., 1.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]],
[[2., 2., 2., 2.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]],
[[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]],
[[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[3., 3., 3., 3.],
[0., 0., 0., 0.]],
[[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]],
[[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]]]) |
st49278 | Thank you! I actually forgot to convert the tensor (in this case val) from a numpy matrix, so that’s why it didn’t work, haha. |
st49279 | I am struggling to determine the cause of the following behavior of the 3-class classification code I have below. The classifier takes a 2D matrix of torch.Size([1, 1, 512, 512]). The output is torch.Size([256, 3]).
class Classifier(nn.Module):
    def __init__(self):
        super(Classifier, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=1, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.layer2 = nn.Sequential(
            nn.Conv2d(32, 64, kernel_size=5, stride=1, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.drop_out = nn.Dropout()
        self.fc1 = nn.Linear(4096, 1024)
        self.fc2 = nn.Linear(1024, 3)

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = out.view(-1, 4096)
        out = self.drop_out(out)
        out = self.fc1(out)
        out = self.fc2(out)
        return out
The problem is that the same model reports the following dimensions when the model summary is printed. I need to compare the output with a torch.Size([1]) tensor where the class labels are stored. Everything shows that this is designed for a 3-class classification problem, but I cannot determine where the value of 256 in the output tensor comes from.
summary(activity_recognizer,(1,512,512))
==========================================================================================
Layer (type:depth-idx) Output Shape Param #
==========================================================================================
├─Sequential: 1-1 [-1, 32, 256, 256] --
| └─Conv2d: 2-1 [-1, 32, 512, 512] 832
| └─ReLU: 2-2 [-1, 32, 512, 512] --
| └─MaxPool2d: 2-3 [-1, 32, 256, 256] --
├─Sequential: 1-2 [-1, 64, 128, 128] --
| └─Conv2d: 2-4 [-1, 64, 256, 256] 51,264
| └─ReLU: 2-5 [-1, 64, 256, 256] --
| └─MaxPool2d: 2-6 [-1, 64, 128, 128] --
├─Dropout: 1-3 [-1, 4096] --
├─Linear: 1-4 [-1, 1024] 4,195,328
├─Linear: 1-5 [-1, 3] 3,075
==========================================================================================
Total params: 4,250,499
Trainable params: 4,250,499
Non-trainable params: 0
Total mult-adds (G): 3.57
==========================================================================================
Input size (MB): 1.00
Forward/backward pass size (MB): 96.01
Params size (MB): 16.21
Estimated Total Size (MB): 113.22
==========================================================================================
I spent a few hours on this but it seems like I hit a dead end. I appreciate any help I can get here. |
st49280 | Solved by ptrblck in post #2
The view operation is wrong and will change the batch size.
Use out = out.view(out.size(0), -1) instead and set the in_features of self.fc1 to 128*128*64. |
st49281 | The view operation is wrong and will change the batch size.
Use out = out.view(out.size(0), -1) instead and set the in_features of self.fc1 to 128*128*64.
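As a concrete sketch (same architecture as in the question; with a [N, 1, 512, 512] input, layer2 outputs [N, 64, 128, 128], which also explains the mysterious 256: 64*128*128 / 4096 = 256 samples folded into the batch dimension):
import torch
import torch.nn as nn

class Classifier(nn.Module):
    def __init__(self):
        super(Classifier, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=1, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.layer2 = nn.Sequential(
            nn.Conv2d(32, 64, kernel_size=5, stride=1, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.drop_out = nn.Dropout()
        self.fc1 = nn.Linear(64 * 128 * 128, 1024)
        self.fc2 = nn.Linear(1024, 3)

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = out.view(out.size(0), -1)  # keep the batch dimension intact
        out = self.drop_out(out)
        out = self.fc1(out)
        return self.fc2(out)

x = torch.randn(1, 1, 512, 512)
print(Classifier()(x).shape)  # torch.Size([1, 3]) instead of [256, 3] |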
st49282 | I have a stack of MRI images (screenshot omitted), say in shape [train_size, h, w, d].
There is a lot of black space making my convolution very heavy, so I want to trim the images from the minimum index of the first non-zero pixel value from all 4 corners, i.e. identify the maximum distance I can trim my images from the four corners without losing any non-zero pixel values from any of the images in the whole training set. This is only for the height and width. How could I go about doing this? |
st49283 | Solved by ptrblck in post #2
This code should work for a [batch, channel, height, width]-shaped tensor:
x = torch.zeros(1, 3, 24, 24)
x[0, :, 5:10, 6:11] = torch.randn(1, 3, 5, 5)
non_zero = (x!=0).nonzero() # Contains non-zero indices for all 4 dims
h_min = non_zero[:, 2].min()
h_max = non_zero[:, 2].max()
w_min = non_zero[… |
st49284 | This code should work for a [batch, channel, height, width]-shaped tensor:
import torch

x = torch.zeros(1, 3, 24, 24)
x[0, :, 5:10, 6:11] = torch.randn(1, 3, 5, 5)
non_zero = (x!=0).nonzero()  # contains the non-zero indices for all 4 dims
h_min = non_zero[:, 2].min()
h_max = non_zero[:, 2].max()
w_min = non_zero[:, 3].min()
w_max = non_zero[:, 3].max()
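The crop itself is then a slice with those (inclusive) bounds; for the example above:
x_cropped = x[:, :, h_min:h_max + 1, w_min:w_max + 1]
print(x_cropped.shape)  # torch.Size([1, 3, 5, 5]) |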
st49285 | @ptrblck this solution gives the min and max nonzero cell across all images in the batch, which is unlikely to be the desired behavior.
I’m currently trying to solve the same problem, and so far haven’t found anything that can be done without a loop. |
st49286 | That’s right and I think it would be the only way to keep the batch intact, wouldn’t it?
If you crop each image separately, you could end up with variable spatial sizes, which would disallow creating a batch afterwards.
Let me know, if I’m missing something. |
st49287 | Yup @ptrblck you are missing something
I’ve figured out how to do this now so I can show you! Here’s how I did the batchwise nonzero thing:
github.com
fastai/fastai_dev/blob/master/dev/local/medical/imaging.py#L180
c = x.sum(dim).nonzero().cpu()
idxs,vals = torch.unique(c[:,0],return_counts=True)
vs = torch.split_with_sizes(c[:,1],tuple(vals))
d = {k.item():v for k,v in zip(idxs,vs)}
default_u = tensor([0,x.shape[-1]-1])
b = [d.get(o,default_u) for o in range(x.shape[0])]
b = [tensor([o.min(),o.max()]) for o in b]
return torch.stack(b)
#Cell
def mask2bbox(mask):
    no_batch = mask.dim()==2
    if no_batch: mask = mask[None]
    bb1 = _px_bounds(mask,-1).t()
    bb2 = _px_bounds(mask,-2).t()
    res = torch.stack([bb1,bb2],dim=1).to(mask.device)
    return res[...,0] if no_batch else res
#Cell
def _bbs2sizes(crops, init_sz, use_square=True):
    bb = crops.flip(1)
And here’s how I then use that to create cropped zoomed sections from a batch:
github.com
fastai/fastai_dev/blob/master/dev/local/medical/imaging.py#L199
def _bbs2sizes(crops, init_sz, use_square=True):
    bb = crops.flip(1)
    szs = (bb[1]-bb[0])
    if use_square: szs = szs.max(0)[0][None].repeat((2,1))
    overs = (szs+bb[0])>init_sz
    bb[0][overs] = init_sz-szs[overs]
    lows = (bb[0]/float(init_sz))
    return lows,szs/float(init_sz)
#Cell
def crop_resize(x, crops, new_sz):
    # NB assumes square inputs. Not tested for non-square anythings!
    bs = x.shape[0]
    lows,szs = _bbs2sizes(crops, x.shape[-1])
    if not isinstance(new_sz,(list,tuple)): new_sz = (new_sz,new_sz)
    id_mat = tensor([[1.,0,0],[0,1,0]])[None].repeat((bs,1,1)).to(x.device)
    with warnings.catch_warnings():
        warnings.filterwarnings('ignore', category=UserWarning)
        sp = F.affine_grid(id_mat, (bs,1,*new_sz))+1.
    grid = sp*unsqueeze(szs.t(),1,n=2)+unsqueeze(lows.t()*2.,1,n=2)
    return F.grid_sample(x.unsqueeze(1), grid-1)
As you see, the trick is to use grid_sample to resize them all to the same size after the crop. This is a nice way to do things like RandomResizeCrop on the GPU. |
st49288 | Thanks for sharing!
Do you know what speedup to expect? I assume it should be way faster than the loop. |
st49289 | Yeah my preprocessing overall is 150x faster on CUDA, and I think that’s nearly all in these two methods. |
st49290 | Updated github link to this solution:
github.com
fastai/fastai2/blob/f9231256e2a8372949123bda36e44cb0e1493aa2/fastai2/medical/imaging.py#L198
p = x.windowed(*window)
if remove_max: p[p==1] = 0
return gauss_blur2d(p, s=sigma*x.shape[-1])>thresh
# Cell
@patch
def mask_from_blur(x:DcmDataset, window, sigma=0.3, thresh=0.05, remove_max=True):
    return to_device(x.scaled_px).mask_from_blur(window, sigma, thresh, remove_max=remove_max)
# Cell
def _px_bounds(x, dim):
    c = x.sum(dim).nonzero().cpu()
    idxs,vals = torch.unique(c[:,0],return_counts=True)
    vs = torch.split_with_sizes(c[:,1],tuple(vals))
    d = {k.item():v for k,v in zip(idxs,vs)}
    default_u = tensor([0,x.shape[-1]-1])
    b = [d.get(o,default_u) for o in range(x.shape[0])]
    b = [tensor([o.min(),o.max()]) for o in b]
    return torch.stack(b)
# Cell |
st49291 | Hi,
When using binary cross-entropy I get better results than when I use cross-entropy.
Here is the code that I am using:
class BERT(nn.Module):
    def __init__(self):
        super(BERT, self).__init__()
        options_name = "bert-base-uncased"
        self.encoder = BertForSequenceClassification.from_pretrained(options_name, num_labels=3)
        for param in self.encoder.bert.parameters():
            param.requires_grad = True

    def forward(self, text, label):
        text_fea = self.encoder(text, labels=label)[0]
        return text_fea

def train(model,
          optimizer,
          criterion = nn.CrossEntropyLoss(),
          train_loader = train_iter,
          valid_loader = valid_iter,
          num_epochs = 5,
          eval_every = len(train_iter) // 2,
          file_path = destination_folder):
    # training loop
    model.train()
    for epoch in range(num_epochs):
        for titletext, labels in train_loader:
            labels = labels.type(torch.LongTensor)
            labels = labels.to(device)
            titletext = titletext.type(torch.LongTensor)
            titletext = titletext.to(device)
            optimizer.zero_grad()
            y_train_pred = model(titletext, None)
            loss = criterion(y_train_pred, labels)
            loss.backward()
            optimizer.step()
Any idea what could be wrong?
thank you |
st49292 | Hi Fatimah!
Fatimah:
When using binary cross-entropy I get better results than when I use cross-entropy.
If you are performing a single-label, multi-class classification problem, your first (and almost certainly best) choice for your loss function is CrossEntropyLoss.
If you are performing a multi-label, multi-class classification problem, you should use BCEWithLogitsLoss as your loss function.
If you want to switch between CrossEntropyLoss and BCEWithLogitsLoss, you do have to change the type, shape and values of the target (labels) you pass to your loss function.
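As a small sketch of that difference (assuming a batch of 4 samples and 3 classes):
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(4, 3)

# CrossEntropyLoss: targets are class indices, shape [N], dtype long
ce = nn.CrossEntropyLoss()
target_idx = torch.tensor([0, 2, 1, 2])
print(ce(logits, target_idx))

# BCEWithLogitsLoss: targets are (multi-)hot floats, shape [N, C]
bce = nn.BCEWithLogitsLoss()
target_hot = F.one_hot(target_idx, num_classes=3).float()
print(bce(logits, target_hot))
Good luck.
K. Frank |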
st49293 | I rented a server with 8 GPUs and 8 CPUs. I want to use DistributedDataParallel. Please give me a simple example of how to do this. I found a lot of different examples, but I do not fully understand them. |
st49294 | Hey @slavavs
This is the minimal example.
For more advanced usages, please see this overview.
BTW, could you please add a “distributed” tag to questions related to distributed training, so that people who work on that can get back to you promptly. Thanks!
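For reference, a minimal single-node sketch (assuming 8 GPUs as above; the address/port and the tiny model are placeholders for illustration):
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def worker(rank, world_size):
    # one process per GPU; NCCL is the recommended backend for GPU training
    dist.init_process_group("nccl", init_method="tcp://127.0.0.1:29500",
                            rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)
    model = DDP(nn.Linear(10, 10).to(rank), device_ids=[rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(5):
        optimizer.zero_grad()
        out = model(torch.randn(20, 10, device=rank))
        out.sum().backward()  # gradients are all-reduced across ranks here
        optimizer.step()
    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(worker, args=(8,), nprocs=8, join=True) |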
st49295 | I am getting this error after loading the model and when the model is called. Below is my code:
def main(hparams):
    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    net = Unet.load_from_checkpoint(hparams.checkpoint)
    net.freeze()
    net.to(device)
    for fn in tqdm(os.listdir(hparams.img_dir)):
        fp = os.path.join(hparams.img_dir, fn)
        img = Image.open(fp)
        mask = predict(net, img, device=device)
        mask_img = mask_to_image(mask)
        mask_img.save(os.path.join(hparams.out_dir, fn))

if __name__ == '__main__':
    parent_parser = ArgumentParser(add_help=False)
    parent_parser.add_argument('--checkpoint', required=True)
    parent_parser.add_argument('--img_dir', required=True)
    parent_parser.add_argument('--out_dir', required=True)
    parser = Unet.add_model_specific_args(parent_parser)
    hparams = parser.parse_args()
    main(hparams)
I am getting the error at the last line, main(hparams). Can someone help me with this, please? |
st49296 | Could you post a minimal, executable code snippet so that we could have a look, please? |
st49297 | Hello @Varun_Tirupathi, did you solve the problem? I am also running into the same problem! |
st49298 | hi, @akib62 I got this error when testing the model from a separate file called test.py. As I didn’t find a solution, I changed the methodology for testing. I don’t have a solution to this, I’m sorry. |
st49299 | Hi, @ptrblck I am writing the UNet code using PyTorch Lightning.
I am taking help from a repo.
The unet and train.py are working fine. But when running test.py, it shows the same error as in the question title and description for me.
In my understanding, when we run test.py, the images go through the pre-trained unet model, which requires n_channels!
But from test.py we are not passing any arguments like n_channels, because it is already defined! Most probably that is why the error shows up.
Any idea to solve the issue?
Updated:
Problem solved |
st49300 | Hi @akib62, please post the code snippet as well, for a better understanding of the issue, i.e. where the problem is arising.
Thanks |
st49301 | @Varun_Tirupathi same as yours!
You already gave the code snippet and already described the problem. |
st49302 | Hi @Varun_Tirupathi I solved the problem.
You have to use self.save_hyperparameters() in your Unet model to store the hyperparameters
I found the solution in this post here.
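For context, a sketch of what that looks like (a hypothetical Lightning module; the real Unet signature may differ):
import pytorch_lightning as pl

class Unet(pl.LightningModule):
    def __init__(self, n_channels, n_classes, lr=1e-3):
        super().__init__()
        self.save_hyperparameters()  # stores the init args inside the checkpoint
        ...

# load_from_checkpoint can then rebuild the model without re-passing the args:
# net = Unet.load_from_checkpoint('path/to/checkpoint.ckpt') |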
st49303 | Hi @akib62, how are you running test.py? What arguments are you passing? With the same command I’m now getting the error __init__() missing 1 required positional argument: ‘hparams’
python test.py --checkpoint lightning_logs/version_0/checkpoints/_ckpt_epoch_1.ckpt --img_dir dataset/carvana/test --out_dir result/carvana |
st49304 | After changing the code to save the hyperparameters, you have to retrain the model.
Yes, I used the same command |
st49305 | Hi, I was wondering if anyone can offer their aid.
I am using someone else’s repo that I want to finetune; it is an image colorization network. I want to utilise the models and introduce my own dataset to colorize various natural scenery.
I am having difficulty finetuning, as most of the tutorials are based on the pretrained models. I am currently struggling with loading their models and training on my dataset.
The GitHub repo is called “InstColorization” by ericsujw: https://github.com/ericsujw/InstColorization
Thanks in advance! |
st49306 | Hi @ptrblck, where do I begin. I solved some of my preexisting issues, but the first issue I had was listed here: [solved] KeyError: 'unexpected key "module.encoder.embedding.weight" in state_dict'.
I originally tried to change my model to nn.DataParallel, but I was running into all sorts of attribute errors, so I eventually used load_state_dict(state_dict, strict=False). My question now is: will having strict=False affect my model, or should I go for the solution in the above link to remove the ‘.module’ prefix?
Cheers |
st49307 | Go for the .module solution. Otherwise strict=False might just drop mismatched keys and you could end up with a randomly initialized model.
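A sketch of that prefix-stripping fix (assuming the checkpoint was saved from an nn.DataParallel-wrapped model and model is the plain, unwrapped module; the path is a placeholder):
import torch
from collections import OrderedDict

state_dict = torch.load('checkpoint.pth')
new_state_dict = OrderedDict(
    (k.replace('module.', '', 1), v) for k, v in state_dict.items()
)
model.load_state_dict(new_state_dict)  # strict=True now matches the keys |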
st49308 | Thanks, I will definitely give that a shot!
I have another question regarding fine-tuning. I am following this tutorial: https://pytorch.org/tutorials/beginner/finetuning_torchvision_models_tutorial.html
I have a param_to_update tensor which I want to feed into my optim function. Currently my optimizer is created as
self.optimizer_G = torch.optim.Adam(self.netG.parameters(),
                                    lr=opt.lr, betas=(opt.beta1, 0.999))
If I want to call the function, how do I set netG.parameters() = param_to_update?
Sorry if my explanation is a bit confusing. |
st49309 | If param_to_update is not part of the netG.parameters(), you could pass it additionally to the optimizer as:
self.optimizer_G = torch.optim.Adam(list(self.netG.parameters()) + [param_to_update], lr=opt.lr, betas=(opt.beta1, 0.999)) |
st49310 | So in the tutorial they passed params_to_update through the optim function as optimizer_ft = optim.SGD(params_to_update, lr=0.001, momentum=0.9).
So in my case I was hoping to pass my param_to_update into my optimizer by only calling the function. The function is in a different script file, so I thought the only way to call the function with my param_to_update tensor was to update my self.netG.parameters(). So is there a way to set self.netG.parameters() = param_to_update?
Sorry for the silly questions and thanks for the help! |
st49311 | I don’t understand why you would need to add this parameter to the model parameters.
Wouldn’t it work if you passed the parameter with netG.parameters() directly to the optimizer?
If you want to add the parameter later, you could still use optimizer.add_param_group.
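A tiny sketch of add_param_group (the model and the extra parameter here are hypothetical stand-ins):
import torch

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

extra = torch.nn.Parameter(torch.zeros(10))
optimizer.add_param_group({'params': [extra]})  # 'extra' is now optimized alongside the model |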
st49312 | I was hoping to pass param_to_update into the optimizer instead of netG.parameters(), but I’m not entirely sure how to achieve this.
I will try to follow your method though, thanks for the help |
st49313 | Hi @ptrblck, I was hoping you could aid me with another problem. I am currently getting the error RuntimeError: element 0 of variables does not require grad and does not have a grad_fn, and I have seen your previous solutions to this. The error occurs when the .backward() function is called, and I believe it is due to the loss tensors possibly having requires_grad = False?
Here are my optimize_parameters() and forward() functions:
def forward(self):
    if self.opt.stage == 'full' or self.opt.stage == 'instance':
        (_, self.fake_B_reg) = self.netG(self.real_A, self.hint_B, self.mask_B)
    else:
        print('Error! Wrong stage selection!')
        exit()

def optimize_parameters(self, optimize):
    self.forward()
    optimize.zero_grad()
    if self.opt.stage == 'full' or self.opt.stage == 'instance':
        self.loss_L1 = torch.mean(self.criterionL1(self.fake_B_reg.type(torch.cuda.FloatTensor),
                                                   self.real_B.type(torch.cuda.FloatTensor)))
        self.loss_G = 10 * torch.mean(self.criterionL1(self.fake_B_reg.type(torch.cuda.FloatTensor),
                                                       self.real_B.type(torch.cuda.FloatTensor)))
    else:
        print('Error! Wrong stage selection!')
        exit()
    self.loss_G.backward()
    optimize.step()
Any help is much appreciated |
st49314 | The computation graph seems to be detached at one point.
Could you check, if the model output and the loss tensors have a valid .grad_fn? |
st49315 | I have tried to print the .grad_fn and the requires_grad of the model output and the loss tensors. On both occasions the loss tensors do not print anything; I’m not entirely sure if I am getting this right.
I tried printing the first iteration of the loss tensor and it displayed tensor(2.155, device='cuda:0') |
st49316 | Are you wrapping the forward pass in a with torch.no_grad() block or disabling the gradient calculation globally?
If not, could you post the model definition, please? |
st49317 | I have not used torch.no_grad(). Here’s the model that I am working on:
class SIGGRAPHGenerator(nn.Module):
    def __init__(self, input_nc, output_nc, norm_layer=nn.BatchNorm2d, use_tanh=True, classification=True):
        super(SIGGRAPHGenerator, self).__init__()
        self.input_nc = input_nc
        self.output_nc = output_nc
        self.classification = classification
        use_bias = True

        # Conv1
        model1=[nn.Conv2d(input_nc, 64, kernel_size=3, stride=1, padding=1, bias=use_bias),]
        model1+=[nn.ReLU(True),]
        model1+=[nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1, bias=use_bias),]
        model1+=[nn.ReLU(True),]
        model1+=[norm_layer(64),]
        # add a subsampling operation

        # Conv2
        model2=[nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1, bias=use_bias),]
        model2+=[nn.ReLU(True),]
        model2+=[nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1, bias=use_bias),]
        model2+=[nn.ReLU(True),]
        model2+=[norm_layer(128),]
        # add a subsampling layer operation

        # Conv3
        model3=[nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1, bias=use_bias),]
        model3+=[nn.ReLU(True),]
        model3+=[nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1, bias=use_bias),]
        model3+=[nn.ReLU(True),]
        model3+=[nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1, bias=use_bias),]
        model3+=[nn.ReLU(True),]
        model3+=[norm_layer(256),]
        # add a subsampling layer operation

        # Conv4
        model4=[nn.Conv2d(256, 512, kernel_size=3, stride=1, padding=1, bias=use_bias),]
        model4+=[nn.ReLU(True),]
        model4+=[nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1, bias=use_bias),]
        model4+=[nn.ReLU(True),]
        model4+=[nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1, bias=use_bias),]
        model4+=[nn.ReLU(True),]
        model4+=[norm_layer(512),]

        # Conv5
        model5=[nn.Conv2d(512, 512, kernel_size=3, dilation=2, stride=1, padding=2, bias=use_bias),]
        model5+=[nn.ReLU(True),]
        model5+=[nn.Conv2d(512, 512, kernel_size=3, dilation=2, stride=1, padding=2, bias=use_bias),]
        model5+=[nn.ReLU(True),]
        model5+=[nn.Conv2d(512, 512, kernel_size=3, dilation=2, stride=1, padding=2, bias=use_bias),]
        model5+=[nn.ReLU(True),]
        model5+=[norm_layer(512),]

        # Conv6
        model6=[nn.Conv2d(512, 512, kernel_size=3, dilation=2, stride=1, padding=2, bias=use_bias),]
        model6+=[nn.ReLU(True),]
        model6+=[nn.Conv2d(512, 512, kernel_size=3, dilation=2, stride=1, padding=2, bias=use_bias),]
        model6+=[nn.ReLU(True),]
        model6+=[nn.Conv2d(512, 512, kernel_size=3, dilation=2, stride=1, padding=2, bias=use_bias),]
        model6+=[nn.ReLU(True),]
        model6+=[norm_layer(512),]

        # Conv7
        model7=[nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1, bias=use_bias),]
        model7+=[nn.ReLU(True),]
        model7+=[nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1, bias=use_bias),]
        model7+=[nn.ReLU(True),]
        model7+=[nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1, bias=use_bias),]
        model7+=[nn.ReLU(True),]
        model7+=[norm_layer(512),]

        # Conv8
        model8up=[nn.ConvTranspose2d(512, 256, kernel_size=4, stride=2, padding=1, bias=use_bias)]
        model3short8=[nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1, bias=use_bias),]
        model8=[nn.ReLU(True),]
        model8+=[nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1, bias=use_bias),]
        model8+=[nn.ReLU(True),]
        model8+=[nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1, bias=use_bias),]
        model8+=[nn.ReLU(True),]
        model8+=[norm_layer(256),]

        # Conv9
        model9up=[nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1, bias=use_bias),]
        model2short9=[nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1, bias=use_bias),]
        # add the two feature maps above
        model9=[nn.ReLU(True),]
        model9+=[nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1, bias=use_bias),]
        model9+=[nn.ReLU(True),]
        model9+=[norm_layer(128),]

        # Conv10
        model10up=[nn.ConvTranspose2d(128, 128, kernel_size=4, stride=2, padding=1, bias=use_bias),]
        model1short10=[nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1, bias=use_bias),]
        # add the two feature maps above
        model10=[nn.ReLU(True),]
        model10+=[nn.Conv2d(128, 128, kernel_size=3, dilation=1, stride=1, padding=1, bias=use_bias),]
        model10+=[nn.LeakyReLU(negative_slope=.2),]

        # classification output - possibly change this output
        model_class=[nn.Conv2d(256, 529, kernel_size=1, padding=0, dilation=1, stride=1, bias=use_bias),]

        # regression output
        model_out=[nn.Conv2d(128, 2, kernel_size=1, padding=0, dilation=1, stride=1, bias=use_bias),]
        if(use_tanh):
            model_out+=[nn.Tanh()]

        self.model1 = nn.Sequential(*model1)
        self.model2 = nn.Sequential(*model2)
        self.model3 = nn.Sequential(*model3)
        self.model4 = nn.Sequential(*model4)
        self.model5 = nn.Sequential(*model5)
        self.model6 = nn.Sequential(*model6)
        self.model7 = nn.Sequential(*model7)
        self.model8up = nn.Sequential(*model8up)
        self.model8 = nn.Sequential(*model8)
        self.model9up = nn.Sequential(*model9up)
        self.model9 = nn.Sequential(*model9)
        self.model10up = nn.Sequential(*model10up)
        self.model10 = nn.Sequential(*model10)
        self.model3short8 = nn.Sequential(*model3short8)
        self.model2short9 = nn.Sequential(*model2short9)
        self.model1short10 = nn.Sequential(*model1short10)
        self.model_class = nn.Sequential(*model_class)
        self.model_out = nn.Sequential(*model_out)
        self.upsample4 = nn.Sequential(*[nn.Upsample(scale_factor=4, mode='nearest'),])
        self.softmax = nn.Sequential(*[nn.Softmax(dim=1),])

    def forward(self, input_A, input_B, mask_B):
        conv1_2 = self.model1(torch.cat((input_A,input_B,mask_B),dim=1))
        conv2_2 = self.model2(conv1_2[:,:,::2,::2])
        conv3_3 = self.model3(conv2_2[:,:,::2,::2])
        conv4_3 = self.model4(conv3_3[:,:,::2,::2])
        conv5_3 = self.model5(conv4_3)
        conv6_3 = self.model6(conv5_3)
        conv7_3 = self.model7(conv6_3)
        conv8_up = self.model8up(conv7_3) + self.model3short8(conv3_3)
        conv8_3 = self.model8(conv8_up)
        if(self.classification):
            out_class = self.model_class(conv8_3)
            conv9_up = self.model9up(conv8_3.detach()) + self.model2short9(conv2_2.detach())
            conv9_3 = self.model9(conv9_up)
            conv10_up = self.model10up(conv9_3) + self.model1short10(conv1_2.detach())
            conv10_2 = self.model10(conv10_up)
            out_reg = self.model_out(conv10_2)
        else:
            out_class = self.model_class(conv8_3.detach())
            conv9_up = self.model9up(conv8_3) + self.model2short9(conv2_2)
            conv9_3 = self.model9(conv9_up)
            conv10_up = self.model10up(conv9_3) + self.model1short10(conv1_2)
            conv10_2 = self.model10(conv10_up)
            out_reg = self.model_out(conv10_2)
        return (out_class, out_reg) |
st49318 | Your model definition works and the output tensors have valid grad_fns, so I’m unsure why they are None in your script:
model = SIGGRAPHGenerator(3, 1)
x = torch.randn(1, 1, 24, 24)
out = model(x, x, x)
print(out[0].grad_fn)
> <ThnnConv2DBackward object at ...>
print(out[1].grad_fn)
> <TanhBackward object at ...> |
st49319 | Does this mean that the loss tensors do not have the attribute requires_grad=True? And if not, how do I set these tensors to have this attribute?
I saw in another post that you said loss.requires_grad = True will result in undesirable things. Thanks! |
st49320 | Could you verify, that your model outputs also have valid .grad_fns?
From your last post I understood that’s not the case. |
st49321 | Apologies for the late reply. I followed your steps, obtained similar results, and was able to print out
<ThnnConv2DBackward object at …> |
st49322 | Ah OK, could you post the loss function then?
If the model output contains a valid grad_fn, while the loss doesn’t, the loss function might detach the graph. |
st49323 | Hi ptrblck, I have found that when I use self.loss_G = Variable(self.loss_G, requires_grad=True), the error doesn’t occur, so I assume it is due to the loss tensors, but if there is some other reason for the error please let me know! Again, thanks for all the help!
However, here are the loss functions if you are still interested:
self.criterionL1 = networks.HuberLoss(delta=1. / opt.ab_norm)

class HuberLoss(nn.Module):
    def __init__(self, delta=.01):
        super(HuberLoss, self).__init__()
        self.delta=delta

    def __call__(self, in0, in1):
        mask = torch.zeros_like(in0)
        mann = torch.abs(in0-in1)
        eucl = .5 * (mann**2)
        mask[...] = mann < self.delta
        # loss = eucl*mask + self.delta*(mann-.5*self.delta)*(1-mask)
        loss = eucl*mask/self.delta + (mann-.5*self.delta)*(1-mask)
        return torch.sum(loss,dim=1,keepdim=True) |
st49324 | Hi,
I have a problem where I have 0 accuracy (and big loss) in training after resuming.
I managed to understand where it’s coming from.
When I resume to evaluate, and do “model.eval()” before processing data, I have a nice accuracy.
But when I resume before training, and do “model.train()”, my accuracy drops to 0.
I have a normal batch size, the same between train/eval, and I’m processing the same data the same way; the only difference is that with model.train() I get a really high loss, while with model.eval() the loss is low.
Do you know how I could solve it?
Thanks for your answers |
st49325 | Could it be that you save/load only the model or model.state_dict()?
Because for resuming training you should save and load a checkpoint.
If so, please refer to this and/or this tutorial |
st49326 | Maybe I wasn’t precise enough. It’s not that my accuracy slowly drops to 0, or that I’m wrongly training.
Because in that case, it could be a problem with the optimizer state dict etc. (I checked that possibility)
The problem is that, even without training, without doing any loss.backward() or optimizer.step(), I already have a loss that indicates that my model is garbage when it’s configured with model.train(). But everything is ok when I use model.eval(). For the exact same code, dataloader etc…
It’s as if the train() method made my model completely useless.
It’s not the training procedure: I don’t even have to execute it to already see the big difference in loss between train() and eval(). I’m doing fp32 training. I know that with misconfigured fp16 training you could have a bad eval() and a good train(), but here it’s the opposite.
Is there a way to configure model.train() the same way it’s configured when I do model.eval()? Like manually re-modifying the batchnorm layers etc.? |
st49327 | I am sorry I misunderstood your question at first.
But with the information given I can’t say anything more.
I also tried to recreate your problem but couldn’t. Everything works fine on my end.
There is no noticeably large difference in loss between using model.eval() or model.train() after loading. |
st49328 | Those are also my results for many models.
But I don’t know why this one has this problem; if someone has had this problem, maybe they could help.
It would be hard for me to give a minimal reproducible example; I’m using other libs, and maybe it’s because I’m using deformable conv, but no one has reported this issue.
I’ll keep looking into it anyway |
st49329 | I don’t know if this could help, but if I do the standard training procedure with model.eval() instead of model.train(), everything goes as expected. No accuracy drop whatsoever |
st49330 | One more piece of info: despite the training starting back from 0, it gets close to its saved score really fast. So I guess the convolution layers are fine, but it’s as if I had to retrain the batchnorm from scratch |
st49331 | No. You can restart a training with the old weights; some of them are good, so the model will progress quickly, and you can then see if it happens again. It probably won’t.
I had this problem but I don’t have it anymore |
st49332 | I have built a ResNet-50 for a classification task. I saved the model after training completed. However, when I load the model back and test the accuracy using the test set, it gives me an overall result of 89% accuracy, but a different accuracy is obtained if I retest the same test set (the test set is always the same). Is it a problem with the model? I assume the accuracy should always be the same for the same input, yet the accuracy actually fluctuates from 89 to 89.5 (not a big difference). Is this expected? I am pretty sure my code and algorithm are right. |
st49333 | You might get non-deterministic outputs from some operations, such as non-deterministic cudnn kernels, as explained in the reproducibility docs.
If you follow these docs, you should get deterministic outputs.
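For reference, a minimal sketch of the settings from those docs (flag names as of this PyTorch version; exact APIs vary across releases):
import torch

torch.manual_seed(0)
torch.backends.cudnn.deterministic = True  # restrict cudnn to deterministic kernels
torch.backends.cudnn.benchmark = False     # disable input-shape-based autotuning |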
st49334 | Why are the default input dimensions of LSTM [sequence_length, batch_size, feature_size]?
I think it is more natural to use data in the shape [batch_size, sequence_length, feature_size] |
st49335 | Before I comment on the principle, if your input_data is of shape [batch_size, sequence_length, feature_size], then input_data.permute(1, 0, 2) will transform it into shape [sequence_length, batch_size, feature_size].
I believe that permute doesn’t copy the data, it just alters the strides used for the underlying array, so it is very efficient.
I am running some analyses on some really long time-series data and I wanted to create sequential batches. I found that if my data was of shape [batch_size, sequence_length, feature_size], then selecting slices of the form [:, start:end, :] gave me non-contiguous tensors and the model couldn’t use them directly. So to avoid having to copy the tensor in order to make it contiguous, I first made sure my data was of shape [sequence_length, batch_size, feature_size] and then it all worked. |
st49336 | I also saw this with nn.MultiheadAttention. It’s still not clear to me why you would not have the first dimension be the batch size like for nn.Linear. |
st49337 | The layout is chosen for performance reasons, as also said here.
Also, for RNNs you can use batch_first=True to change the shapes.
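A quick sketch of batch_first (assuming an input feature size of 8 and a hidden size of 16):
import torch
import torch.nn as nn

rnn = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
x = torch.randn(4, 10, 8)  # [batch, seq, feature]
out, (h, c) = rnn(x)
print(out.shape)  # torch.Size([4, 10, 16]) |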
st49338 | If I have an even-sized convolutional filter, say 2 by 2, with weights f_{i,j} for 1 <= i <= 2, 1 <= j <= 2,
then if I apply it to a window centered at (x,y) of an input I (assuming appropriate padding),
will PyTorch apply something like
f_{1,1}·I(x,y) + f_{1,2}·I(x+1,y) + f_{2,1}·I(x,y+1) + f_{2,2}·I(x+1,y+1)?
I’m not sure what the formula would be for even-sized filters, basically.
Thanks. |
st49339 | Solved by ptrblck in post #2
I’m not sure what your notation means exactly, but you could use a simple example as given here:
weight = torch.ones(1, 1, 2, 2)
x = torch.arange(4*4).view(1, 1, 4, 4).float()
print(weight)
> tensor([[[[1., 1.],
[1., 1.]]]])
print(x)
> tensor([[[[ 0., 1., 2., 3.],
[ 4., … |
st49340 | I’m not sure what your notation means exactly, but you could use a simple example as given here:
import torch
import torch.nn.functional as F

weight = torch.ones(1, 1, 2, 2)
x = torch.arange(4*4).view(1, 1, 4, 4).float()
print(weight)
> tensor([[[[1., 1.],
[1., 1.]]]])
print(x)
> tensor([[[[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.],
[12., 13., 14., 15.]]]])
out = F.conv2d(x, weight, stride=1, padding=0)
print(out)
> tensor([[[[10., 14., 18.],
[26., 30., 34.],
[42., 46., 50.]]]])
out = F.conv2d(x, weight, stride=1, padding=1)
print(out)
> tensor([[[[ 0., 1., 3., 5., 3.],
[ 4., 10., 14., 18., 10.],
[12., 26., 30., 34., 18.],
[20., 42., 46., 50., 26.],
[12., 25., 27., 29., 15.]]]])
As you can see, the kernel will use windows of 2x2 to create its output.
Let me know, if this example makes the output clear or if you would need more information. |
st49341 | I’m trying to free some GPU memory so that other processes can use it. I tried to do that by executing torch.cuda.empty_cache() after deleting the tensor but for some reason it doesn’t seem to work.
I wrote this small script to replicate the problem
import os

import torch
from GPUtil import showUtilization  # assuming GPUtil provides the showUtilization helper used here

os.environ['CUDA_VISIBLE_DEVICES'] = '0'
showUtilization()
t = torch.zeros((1, 2**6, 2**6)).to(f'cuda')
showUtilization()
del t
torch.cuda.empty_cache()
showUtilization()
The memory utilization grows from 5% to 12% after allocating the tensor and stays at 12% even after emptying the cache.
Of course as the process terminates the memory is released but I’d need to do that while the process is running. Does anyone have any idea about how to solve this? |
st49342 | Your approach should work as shown here:
print(torch.cuda.memory_allocated())
> 0
print(torch.cuda.memory_reserved())
> 0
t = torch.zeros((1, 2**6, 2**6)).to(f'cuda')
print(torch.cuda.memory_allocated())
> 16384
print(torch.cuda.memory_reserved())
> 2097152
del t
print(torch.cuda.memory_allocated())
> 0
print(torch.cuda.memory_reserved())
> 2097152
torch.cuda.empty_cache()
print(torch.cuda.memory_allocated())
> 0
print(torch.cuda.memory_reserved())
> 0
Note that the first CUDA operation will create the CUDA context on the device, which will load all kernels, cudnn, etc. onto the device.
This memory is not reported by torch.cuda.memory_allocated() and torch.cuda.memory_reserved() and can be seen via nvidia-smi. |
st49343 | Thanks for your response. If I run your code I get the exact same results but for some reason nvidia-smi doesn’t seem to notice that the memory was deallocated.
If I run this code
import os
import re
import subprocess

import torch

def nvidia_smi():
    out = subprocess.Popen(['nvidia-smi'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    sout, serr = out.communicate()
    res = re.findall("([0-9]*)MiB / 16130MiB", str(sout))
    res = [int(x) for x in res]
    return res

os.environ['CUDA_VISIBLE_DEVICES'] = '2'
print(f'{nvidia_smi()[2]} MiB')
t = torch.zeros((1, 2**6, 2**6)).to(f'cuda')
print(f'{nvidia_smi()[2]} MiB')
del t
print(f'{nvidia_smi()[2]} MiB')
torch.cuda.empty_cache()
print(f'{nvidia_smi()[2]} MiB')
The result is
470 MiB
1473 MiB
1473 MiB
1471 MiB
As the process terminates, the used memory goes down to 470 MiB again.
For some reason empty_cache() manages to deallocate 2 MiB (this is consistent and not due to other processes on the same GPU; I’ve tried it multiple times). Thinking about it, I guess those 2 MiB are the size of the tensor I allocate. The other 1001 MiB are probably allocated by the CUDA backend for some internal reason.
I was going to ask if there’s a way to prevent that, but I don’t think so. |
st49344 | Andrea_Rosasco:
For some reason empty_cache() manages to deallocate 2 MiB (this is consistent and not due to other processes on the same GPU I’ve tried it multiple times). Thinkig about it I guess that those 2 MiB are the size of the tensor I allocate.
Yes, the 2MB are shown in the torch.cuda.memory_reserved() output, which gives you the allocated and cached memory: 2097152 / 1024**2 = 2.0MB.
Andrea_Rosasco:
The other 1001 MiB are probably allocated by the CUDA backend for some internal functioning reason.
I was gonna ask if there’s a way to prevent that but I don’t think so.
Yes, these are used by the CUDA context as described before and cannot be freed. |
st49345 | Hi,
I need to generate a skew-symmetric matrix from some weights. It would suffice to generate an upper triangular matrix A from the weights, since then
S = A - A.t()
would do the trick. The hard part is generating the matrix A from a vector, i.e. [x,y,z] to
0,x,y
0,0,z
0,0,0
and similarly for longer vectors.
Any ideas for how to do this? |
st49346 | def skewmat(x_vec):
    '''
    torch.matrix_exp(a)
    Eigen::Matrix3f mat = Eigen::Matrix3f::Zero();
    mat(0, 1) = -v[2]; mat(0, 2) = +v[1];
    mat(1, 0) = +v[2]; mat(1, 2) = -v[0];
    mat(2, 0) = -v[1]; mat(2, 1) = +v[0];
    return mat;

    input : (*, 3)
    output : (*, 3, 3)
    '''
    W_row0 = torch.tensor([0,0,0, 0,0,1, 0,-1,0]).view(3,3).to(x_vec.device)
    W_row1 = torch.tensor([0,0,-1, 0,0,0, 1,0,0]).view(3,3).to(x_vec.device)
    W_row2 = torch.tensor([0,1,0, -1,0,0, 0,0,0]).view(3,3).to(x_vec.device)
    x_skewmat = torch.stack([torch.matmul(x_vec, W_row0.t()), torch.matmul(x_vec, W_row1.t()), torch.matmul(x_vec, W_row2.t())], dim=-1)
    return x_skewmat
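For the general construction asked about (a length n*(n-1)/2 vector filling the strictly upper triangle of an n x n matrix), a sketch using torch.triu_indices:
import torch

def vec_to_skew(v, n):
    # v fills the strictly upper triangle row by row
    A = torch.zeros(n, n, dtype=v.dtype)
    rows, cols = torch.triu_indices(n, n, offset=1)
    A[rows, cols] = v
    return A - A.t()

print(vec_to_skew(torch.tensor([1., 2., 3.]), 3))
# tensor([[ 0.,  1.,  2.],
#         [-1.,  0.,  3.],
#         [-2., -3.,  0.]]) |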
st49347 | I am trying to implement a sequence-to-sequence prediction model for time series using either an LSTM or a GRU, but could not find good tutorials for this exact problem.
However, I found this, which is implemented for an NLP problem (machine translation):
Link to Notebook
After giving this a thorough read, I intuitively figured out the things below (which I am not sure are correct):
For time series data (1-dimensional data which has been converted to chunks of 12 time steps using a moving window of stride 1), we do not need an embedding layer
We do not need special tokens (e.g. <sos>, <eos>) at the beginning and end of each sequence
In the NLP data, they use the vocab size to represent each word in 1 x vocab_size dimensions; however, we just need a single dimension for each time step
It would be great if anyone could confirm these,
or if you have any notebook I can follow for sequence-to-sequence prediction of time series.
Thank you |
st49348 | Hi, I want to design a network that reconstructs a frame using the previous frame. I used conv layers with a simple MSE loss but it doesn’t work. Would you mind suggesting an architecture to help me? |
st49349 | I have model code defined as follows (model_baseline):
modules = []
for block in blocks:
    for bottleneck in block:
        modules.append(
            unit_module(bottleneck.in_channel,
                        bottleneck.depth,
                        bottleneck.stride))
self.body = Sequential(*modules)
self._initialize_weights()

def forward(self, x):
    x = self.input_layer(x)
    x = self.body(x)
    x = self.output_layer(x)
    return x |
Link: https://github.com/ZhaoJ9014/face.evoLVe.PyTorch/blob/d5e31893f7e30c0f82262e701463fd83d9725381/backbone/model_irse.py#L156
where blocks is
blocks = [
    get_block(in_channel=64, depth=64, num_units=3),
    get_block(in_channel=64, depth=128, num_units=13),
    get_block(in_channel=128, depth=256, num_units=30),
    get_block(in_channel=256, depth=512, num_units=3)
]
I want to access the output of block 0 in the body during training (get_block(in_channel=64, depth=64, num_units=3)). How can I do it in PyTorch? Note that the network is loaded from a pretrained model.
This is what I tried
modules_0 = []
modules = []
for block in blocks[0:1]:
    for bottleneck in block:
        modules_0.append(
            unit_module(bottleneck.in_channel,
                        bottleneck.depth,
                        bottleneck.stride))
self.body_0 = Sequential(*modules_0)

modules = []
for block in blocks[1:]:
    for bottleneck in block:
        modules.append(
            unit_module(bottleneck.in_channel,
                        bottleneck.depth,
                        bottleneck.stride))
self.body = Sequential(*modules)
self._initialize_weights()

def forward(self, x):
    x = self.input_layer(x)
    x = self.body_0(x)
    x = self.body(x)
    x = self.output_layer(x)
    return x
However, the above method cannot use the weights from a pretrained model trained on the model_baseline architecture. |
st49350 | You can use forward hooks to get the output of a specific module.
This post gives you an example. Let me know if you get stuck.
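For illustration, a minimal sketch of the forward-hook pattern (using a toy Sequential here in place of the real backbone):
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.Conv2d(8, 16, 3, padding=1))
activation = {}

def get_activation(name):
    def hook(module, inp, out):
        activation[name] = out.detach()  # stash the intermediate output
    return hook

model[0].register_forward_hook(get_activation('block0'))
_ = model(torch.randn(1, 3, 24, 24))
print(activation['block0'].shape)  # torch.Size([1, 8, 24, 24]) |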
st49351 | @ptrblck Thanks a lot for your help. I have tried your approach but it does not work. This is my full code:
https://gist.github.com/John1231983/aeba806e92ed62052e842f4de74049b1#file-model-py-L177 |
st49352 | You are registering the hook after the forward pass was already performed.
Could you use the same approach I’ve used in the linked post or explain what exactly is not working? |
st49353 | @ptrblck My aim is to get an attention map (CAM) at a selected layer during the forward pass. Is it possible to use your method in my case? |
st49354 | Yes, you should be able to get the intermediate activations, if you stick to my code example.
As explained before, you are registering the hook after the forward pass was already performed, so that you won’t be able to get this activation anymore. |
st49355 | Hi, I’m trying to implement Mesh R-CNN, but after running the demo file I’m getting the error
“ModuleNotFoundError: No module named ‘pytorch3d’”, even though I have downloaded it. |
st49356 | From the official tutorials on GitHub (PyTorch3D Tutorials), there wasn’t an import pytorch3d statement; rather, there were from ... import ... statements.
I think this is to streamline the workflow. |
st49357 | Are you running directly from the demo.py file, and where do you see this error?
Using the config provided, it should work. |
st49358 | If you want, we developed a Google Colab to try it: https://github.com/CDInstitute/CompoNET |
st49359 | Assuming my model is on a GPU already, is there a way to get a state_dict of the model with CPU tensors, without moving the model first to CPU and then back to GPU again?
Something like:
state_dict = model.state_dict()
state_dict = state_dict.cpu() |
st49360 | There is also a one-liner to create a CPU copy of the state_dict:
{k: v.cpu() for k, v in model.state_dict().items()} |
st49361 | Summary: I found that GPU memory usage (allocated and cached) increases significantly for Conv2d with specific input shapes;
e.g. going from torch.randn(14, 512, 2, 64) to torch.randn(15, 512, 2, 64), the memory can suddenly increase by 500~1000 MB, while if I change 64 to 128 (the width dimension), the memory usage comes back to normal.
layer = torch.nn.Conv2d(512,
                        512,
                        kernel_size=(3, 3),
                        stride=(2, 1),
                        padding=(1, 1),
                        bias=False).to('cuda:0').eval()

torch.cuda.reset_max_memory_allocated()
torch.cuda.reset_max_memory_cached()
torch.cuda.empty_cache()

with torch.no_grad():
    inputs = torch.randn(15, 512, 2, 64).to('cuda:0')
    out = layer(inputs)

print('Max gpu memory allocated : {:.2f}'.format(torch.cuda.max_memory_allocated() / 1024 / 1024))
print('Max gpu memory cached : {:.2f}'.format(torch.cuda.max_memory_reserved() / 1024 / 1024))
## return
## Max gpu memory allocated : 1885.24
## Max gpu memory cached : 2062.00
The weird thing is that when I change the width dimension from 64 to 128 (so the total input size increases), or the batch size from 15 to 14, the memory usage drops significantly.
torch.cuda.reset_max_memory_allocated()
torch.cuda.reset_max_memory_cached()
torch.cuda.empty_cache()

with torch.no_grad():
    inputs = torch.randn(15, 512, 2, 128).to('cuda:0')
    out = layer(inputs)

print('Max gpu memory allocated : {:.2f}'.format(torch.cuda.max_memory_allocated() / 1024 / 1024))
print('Max gpu memory cached : {:.2f}'.format(torch.cuda.max_memory_reserved() / 1024 / 1024))
## return
## Max gpu memory allocated : 680.68
## Max gpu memory cached : 846.00
Does anyone know the reason? Thanks in advance.
Version :
torch 1.6.0
torchvision 0.7.0 |
st49362 | Solved by albanD in post #2
Hi,
cudnn has many algorithms it can choose from to perform convolution. This choice depends on the input size and so a small change in input size might trigger a different algorithm to be used and thus different memory behavior.
You can try setting torch.backends.cudnn.benchmark=True for cudnn to… |
st49363 | Hi,
cudnn has many algorithms it can choose from to perform convolution. This choice depends on the input size and so a small change in input size might trigger a different algorithm to be used and thus different memory behavior.
You can try setting torch.backends.cudnn.benchmark=True for cudnn to try and choose the best algorithm (for speed, not memory). |
st49364 | I see, thanks for the reply. One more question: any suggestions for managing the memory? Since the input size might vary between inferences in my case, I am suffering from memory explosions from time to time. |
st49365 | There is no specific tool for this, I’m afraid.
You can also try torch.backends.cudnn.deterministic=True to force deterministic algorithms, which will reduce the set of available algos and prevent these switches. |
st49366 | Thanks!
Btw, how about CPU inference? Does RAM usage also face the same issue? And do we have an API to monitor memory usage for CPU inference? |
st49367 | On CPU we use our own algorithm by default. There is a single one, so the usage should be smooth.
If you explicitly use mkldnn or another CPU backend, though, you might see similar behavior. |