st46568 | Yes, it is like any other module in that regard. It is also the same as [sub]batching sequences (H1, H2) for an RNN module. |
st46569 | Hi, I have run into a problem: despite adding new parameters using nn.Parameter(), they are not added to the list returned by model.parameters().
I have simplified my code as below:
class CoNet(nn.Module):
    def __init__(self):
        super(CoNet, self).__init__()
        self.std = 0.01
        self.layers = [64, 32, 16, 8]
        self.class_size = 2
        self.initialize_variables()

    def initialize_variables(self):
        # weights and biases: apps
        self.weights_apps = {}
        self.biases_apps = {}
        for l in range(len(self.layers) - 1):
            self.weights_apps[l] = nn.Parameter(torch.normal(mean=0, std=self.std, size=(self.layers[l], self.layers[l+1]), requires_grad=True))
            self.biases_apps[l] = nn.Parameter(torch.normal(mean=0, std=self.std, size=(self.layers[l+1],), requires_grad=True))
        self.W_apps = nn.Parameter(torch.normal(mean=0, std=self.std, size=(self.layers[-1], self.class_size), requires_grad=True))
        self.b_apps = nn.Parameter(torch.normal(mean=0, std=self.std, size=(self.class_size,), requires_grad=True))

model = CoNet()
list(model.parameters())
[Parameter containing:
tensor([[-0.0046, 0.0010],
[-0.0066, -0.0176],
[ 0.0054, 0.0118],
[-0.0128, -0.0059],
[ 0.0155, 0.0053],
[-0.0106, -0.0087],
[ 0.0029, 0.0146],
[-0.0236, -0.0152]], requires_grad=True), Parameter containing:
tensor([0.0284, 0.0035], requires_grad=True)]
The output basically contains only the last two parameters added, namely self.W_apps and self.b_apps.
Is there any reason why only these two are added?
Help please, thanks! |
st46570 | Solved by Abhilash_Srivastava in post #2
Try using nn.ParameterList instead of the {} |
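For reference, a minimal sketch of that fix applied to the snippet above (only the weights dict is shown; the biases would follow the same pattern):

import torch
import torch.nn as nn

class CoNet(nn.Module):
    def __init__(self):
        super(CoNet, self).__init__()
        self.std = 0.01
        self.layers = [64, 32, 16, 8]
        # nn.ParameterList registers each entry with the module,
        # so the parameters show up in model.parameters()
        self.weights_apps = nn.ParameterList([
            nn.Parameter(torch.normal(mean=0.0, std=self.std,
                                      size=(self.layers[l], self.layers[l + 1])))
            for l in range(len(self.layers) - 1)])

print(len(list(CoNet().parameters())))  # 3 registered weight parameters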
st46571 | Hi everyone
I have two models that are essentially the same (same architecture, same number of parameters) but they yield different results. The first model is one from the PyTorch model zoo (a ResNet18 without pretrained weights) and the other one is essentially the same code, copy-pasted and a bit reformatted (I want to later try some stuff with the ResNet architecture, which is why I had to code it myself).
Somehow they yield different results even if I seed my code to make it reproducible… Does anyone know why that is the case? Is it because, although conceptually they are the same, PyTorch initialises different weights for them because they are different instances?
Any help is very much appreciated!
All the best
snowe
Edit: I just realised that when I instantiate two ResNets from PyTorch they also yield different results, even though they are the same. Is that behaviour to be expected? Is this because of some randomness in the batchnorm or so? |
st46572 | Solved by ptrblck in post #13
I get the same results, if I try to make sure to use the same calls into the PRNG:
torch.manual_seed(2809)
modelA = ResNet(BasicBlock, [2, 2, 2, 2], 1000)
in_ = modelA.fc.in_features
classes = 10
modelA.fc = nn.Linear(in_features=in_, out_features=classes)
torch.manual_seed(2809)
modelB = ResNet(Ba… |
st46573 | You should be able to get the same results between runs, at least I'm pretty sure of that. What did you set the seed on? I tried with
import torch
import torchvision.models as models
seed=42
torch.manual_seed(seed)
resnet18 = models.resnet18(pretrained=False)
x = torch.randn((1, 3, 224, 224))
print(resnet18(x)[0:10])
And I seem to get same results on different runs. If you are running on CUDA then you should add
torch.cuda.manual_seed_all(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
When training, I think it is also important to pass the seed to the DataLoader. Perhaps you could share some small code that gives you different results? |
st46574 | Hi @AladdinPerzon, thank you for your response!
I seed like this:
torch.manual_seed(seed)
np.random.seed(seed)
random.seed(seed)
# for cuda
torch.cuda.manual_seed_all(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.enabled = False
I am able to reproduce the results if I load the ResNet from PyTorch and use this one over and over again. But when I use my own implementation of the ResNet, with the same architecture and the same number of parameters, I don't get the same results as I did with the loaded one.
In other words, the loaded model and my own implementation yield different results, although the network is essentially the same… |
st46575 | Ah, got it. It seems to me that it must be the weight initialization, which seems strange since we've set the seed. This is indeed what I also get:
Edit: I just realised that when I instantiate two ResNets from PyTorch they also yield different results, even though they are the same.
Maybe someone can clarify this |
st46576 | snowe:
Edit: I just realised that when I instantiate two ResNets from PyTorch they also yield different results, even though they are the same. Is that behaviour to be expected? Is this because of some randomness in the batchnorm or so?
I cannot reproduce this issue:
torch.manual_seed(2809)
modelA = models.resnet18()
modelB = models.resnet18()
modelB.load_state_dict(modelA.state_dict())
x = torch.randn(8, 3, 224, 224)
outA = modelA(x)
outB = modelB(x)
print((outA - outB).abs().max())
> tensor(0., grad_fn=<MaxBackward1>)
Could you clarify your use case or post a code snippet to produce this issue? |
st46577 | If I understand correctly, the topic author is trying to understand why his own implementation and the implementation of resnet18 from torchvision give different results. First, I would try to catch some obvious mistakes, like instantiating the pretrained version from torchvision. Second, I would go through the source code of the torchvision model and make sure I did all the layer instantiations and weight initializations in my own model the same way they do in the 'reference' model. Without looking into the 'own' model code it is too broad a topic to discuss, right? |
st46578 | That’s right and a good idea to solve the original issue. I would start with making sure the torchvision implementation yields the same results first as a smoke test, as I guess there might be a misunderstanding how seeding works or another issue. |
st46579 | Could you explain why we are doing this line:
modelB.load_state_dict(modelA.state_dict())
If I do
seed=42
torch.manual_seed(seed)
resnetA = models.resnet18(pretrained=False)
resnetB = models.resnet18(pretrained=False)
x = torch.randn((1, 3, 224, 224))
print((resnetA(x) - resnetB(x)).abs().max())
I obtain different results. I’m assuming this is expected but I guess why they are different is not clear to me |
st46580 | Your code should yield different results, since you are only seeding the code once.
Each call into the pseudorandom number generator would yield a new random number.
Seeding will make sure that the sequence of these pseudorandom numbers is reproducible, but won’t yield the same numbers for random calls:
torch.manual_seed(2809)
print(torch.randn(2))
> tensor([-2.0748, 0.8152])
print(torch.randn(2))
> tensor([-1.1281, 0.8386])
torch.manual_seed(2809)
print(torch.randn(2))
> tensor([-2.0748, 0.8152])
print(torch.randn(2))
> tensor([-1.1281, 0.8386])
If you want to use the seed to initialize both models with the same random values, you would have to re-seed the code:
torch.manual_seed(2809)
modelA = models.resnet18()
torch.manual_seed(2809)
modelB = models.resnet18()
x = torch.randn(8, 3, 224, 224)
outA = modelA(x)
outB = modelB(x)
print((outA - outB).abs().max())
> tensor(0., grad_fn=<MaxBackward1>) |
st46581 | Thank you all for your inputs!
So I can reproduce the results of two ResNets I load from PyTorch. However, my own implementation still yields different results compared to the loaded ones, and I've checked it multiple times and cannot figure out why it behaves differently. I can reproduce the results of my own implementation as well, so there does not seem to be some weird random stuff happening…
Here is my ResNet implementation:
# this type of block is used to build ResNet18 and ResNet34
class BasicBlock(nn.Module):
    expansion = 1

    def __init__(self, in_channels, out_channels, stride=1, downsample=None):
        super(BasicBlock, self).__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1, stride=stride, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1, stride=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        identity = x
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
        out = self.conv2(out)
        out = self.bn2(out)
        if self.downsample is not None:
            identity = self.downsample(identity)
        out += identity
        out = self.relu(out)
        return out

class ResNet(nn.Module):
    def __init__(self, block, layers, num_classes):
        super(ResNet, self).__init__()
        self.in_channels = 64
        self.conv1 = nn.Conv2d(in_channels=3, out_channels=self.in_channels, kernel_size=7,
                               padding=3, stride=2, bias=False)
        self.bn1 = nn.BatchNorm2d(self.in_channels)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, padding=1, stride=2)
        self.layer1 = self._make_layer(block, layers[0], out_channels=64, stride=1)
        self.layer2 = self._make_layer(block, layers[1], out_channels=128, stride=2)
        self.layer3 = self._make_layer(block, layers[2], out_channels=256, stride=2)
        self.layer4 = self._make_layer(block, layers[3], out_channels=512, stride=2)
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(512 * block.expansion, num_classes)
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
            elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)

    def _make_layer(self, block, num_blocks, out_channels, stride):
        downsample = None
        if stride != 1 or self.in_channels != out_channels * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(self.in_channels, out_channels*block.expansion, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(out_channels*block.expansion)
            )
        layers = []
        layers.append(block(self.in_channels, out_channels, stride, downsample))
        self.in_channels = out_channels * block.expansion
        for _ in range(1, num_blocks):
            layers.append(block(self.in_channels, out_channels))
        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.fc(x)
        return x
And here is how I test it:
def set_seed(seed):
    torch.manual_seed(seed)
    np.random.seed(seed)
    random.seed(seed)
    # for cuda
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.enabled = False

set_seed(0)
modelA = PyTorchModels.resnet18()
in_ = modelA.fc.in_features
classes = 10
modelA.fc = nn.Linear(in_features=in_, out_features=classes)

set_seed(0)
modelB = PyTorchModels.resnet18()
in_ = modelB.fc.in_features
classes = 10
modelB.fc = nn.Linear(in_features=in_, out_features=classes)

set_seed(0)
modelC = ResNet(BasicBlock, [2, 2, 2, 2], 10)

t = torch.rand(32, 3, 32, 32)
outA = modelA(t)
outB = modelB(t)
outC = modelC(t)
print(outA[0])
print('\n')
print(outB[0])
print('\n')
print(outC[0])
tensor([-0.0160, -0.0413, 0.5379, -0.3654, -0.0620, -0.7079, -0.9632, -0.9346,
1.5941, 1.0369], grad_fn=<SelectBackward>)
tensor([-0.0160, -0.0413, 0.5379, -0.3654, -0.0620, -0.7079, -0.9632, -0.9346,
1.5941, 1.0369], grad_fn=<SelectBackward>)
tensor([ 0.1272, 0.1153, -0.4902, -0.2696, -0.4524, -0.4243, -0.5799, -0.0227,
0.5023, 0.8597], grad_fn=<SelectBackward>)
So the two loaded ResNets behave the same, but differently from my own… |
st46582 | I get the same results, if I try to make sure to use the same calls into the PRNG:
torch.manual_seed(2809)
modelA = ResNet(BasicBlock, [2, 2, 2, 2], 1000)
in_ = modelA.fc.in_features
classes = 10
modelA.fc = nn.Linear(in_features=in_, out_features=classes)
torch.manual_seed(2809)
modelB = ResNet(BasicBlock, [2, 2, 2, 2], 1000)
in_ = modelB.fc.in_features
classes = 10
modelB.fc = nn.Linear(in_features=in_, out_features=classes)
torch.manual_seed(2809)
modelC = models.resnet18()
in_ = modelC.fc.in_features
modelC.fc = nn.Linear(in_features=in_, out_features=classes)
x = torch.randn(8, 3, 224, 224)
outA = modelA(x)
outB = modelB(x)
outC = modelC(x)
print((outA - outB).abs().max())
> tensor(0., grad_fn=<MaxBackward1>)
print((outA - outC).abs().max())
> tensor(0., grad_fn=<MaxBackward1>)
In your example you are using modelC = ResNet(BasicBlock, [2, 2, 2, 2], 10), which will directly create a linear layer with 10 output classes.
While it’s a valid approach for your use case, this will break the comparison using seeds, since the calls to the PRNG in the torchvision implementation are:
-> init layer1
-> init layer2
...
-> init fc with 1000 output classes
-> init custom nn.Linear with 10 output classes
while you would skip the penultimate step.
If you use my code, you should get the same results.
That being said, I would recommend not using the seeding approach to compare models, as you would need to be familiar with which layers are initialized in which order.
The better approach is just to load the state_dict from one model into the other and test both models. |
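A minimal sketch of that state_dict-based comparison (assuming the custom ResNet uses the same parameter names and shapes as the torchvision model, as in this thread):

modelA = models.resnet18()
modelB = ResNet(BasicBlock, [2, 2, 2, 2], 1000)  # custom implementation
modelB.load_state_dict(modelA.state_dict())      # requires matching keys and shapes

modelA.eval()
modelB.eval()  # avoid updating batchnorm running stats during the check
x = torch.randn(8, 3, 224, 224)
print((modelA(x) - modelB(x)).abs().max())       # should be ~0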
st46583 | Thank you so much @ptrblck, it is working now!
Usually I wouldn’t use seeding to compare models but this time I just wanted to make sure that my implementation was correct. Therefore, I figured I just check the results using the same seed. But I see how this can lead to issues, as in this case for example.
Anyways, once again, thank you for the support!
All the best
snowe |
st46584 | Hi, I am trying to train Mask R-CNN myself and I found this Colab notebook:
colab.research.google.com (Google Colaboratory)
How can I modify this code to count people in images? |
st46585 | (screenshots of the code and the error omitted)
As far as I know, "'int' object is not callable" is an error you get when a function name has already been assigned to a variable and you then call it.
However, I cannot find any such aliasing in my code.
What could possibly have gone wrong? |
st46586 | Are you passing a tensor to the model or some other object?
Could you post a small, executable code snippet so that we could have a look? |
st46587 | import pandas as pd
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
from torch.utils.data import Dataset, DataLoader
import time
import math

USE_GPU = False
if USE_GPU and torch.cuda.is_available():
    device = torch.device('cuda')
else:
    device = torch.device('cpu')

vocab = open('vocab.txt').read().splitlines()
n_vocab = len(vocab)
torch.manual_seed(1)

def text2int(csv_file, dname, vocab):
    ret = []
    data = csv_file[dname].values
    for datum in data:
        for char in str(datum):
            idx = vocab.index(char)
            ret.append(idx)
    ret = np.array(ret)
    return ret

class NewsDataset(Dataset):
    def __init__(self, csv_file, vocab):
        self.csv_file = pd.read_csv(csv_file, sep='|')
        self.vocab = vocab
        self.len = len(self.csv_file)
        self.x_data = torch.tensor(text2int(self.csv_file, 'x_data', self.vocab))
        self.y_data = torch.tensor(text2int(self.csv_file, 'y_data', self.vocab))

    def __len__(self):
        return self.len

    def __getitem__(self, idx):
        return self.x_data[idx], self.y_data[idx]

dataset = NewsDataset(csv_file='data.csv', vocab=vocab)
train_loader = DataLoader(dataset=dataset,
                          batch_size=64,
                          shuffle=False,
                          num_workers=1)

from torch.autograd import Variable
import numpy as np

class selfModule(nn.Module):
    def __init__(self, inputdim, hiddendim, batchsize, outputdim, numlayers):
        super(selfModule, self).__init__()
        self.inputdim = inputdim
        self.hiddendim = hiddendim
        self.numlayers = numlayers
        self.outputdim = outputdim
        self.batchsize = batchsize
        self.lstm = nn.LSTM(self.inputdim, self.hiddendim, self.numlayers, bias=True, batch_first=False, bidirectional=False)
        self.fc = nn.Linear(self.hiddendim, self.outputdim)

    def forward(self, input):
        onehot_input = np.zeros((len(input), 86))  # 86 = dictionary size
        onehot_input[np.arange(len(input)), input] = 1
        lstm_out, self.hidden = self.lstm(onehot_input)
        prediction = self.fc(lstm_out[-1].view(self.batchsize, -1))
        return prediction.view(-1)

    def init_hidden_cell(self):
        return (torch.zeros(self.numlayers, self.batchsize, self.hiddendim), torch.zeros(self.numlayers, self.batchsize, self.hiddendim))

def timeSince(since):
    now = time.time()
    s = now - since
    m = math.floor(s / 60)
    s -= m * 60
    return '%dm %ds' % (m, s)

def train(dataset, model, optimizer, n_iters):
    model.to(device=device)
    model.train()
    start = time.time()
    print_every = 50
    for e in range(n_iters):
        model.hidden = model.init_hidden_cell()
        for i, (x, y) in enumerate(dataset):
            x = x.to(device=device)
            y = y.to(device=device)
            model.zero_grad()
            output = model(x)  #####
            loss = loss_fcn(output, y)
            loss.backward()
            optimizer.step()
        if e % print_every == 0:
            print('%s (%d %d%%) %.4f' % (timeSince(start), e, e / n_iters * 100, loss))

def test(start_letter):
    max_length = 1000
    with torch.no_grad():
        idx = vocab.index(start_letter)
        input_nparray = [idx]
        input_nparray = np.reshape(input_nparray, (1, len(input_nparray)))
        inputs = torch.tensor(input_nparray, device=device, dtype=torch.long)
        output_sen = start_letter
        for i in range(max_length):
            output = model(inputs)
            topv, topi = output.topk(1)
            topi = topi[-1]
            letter = vocab[topi]
            output_sen += letter
            idx = vocab.index(letter)
            input_nparray = np.append(input_nparray, [idx])
            inputs = torch.tensor(input_nparray, device=device, dtype=torch.long)
    return output_sen

print('using device:', device)
inputdim = 86
hiddendim = 100
batchsize = 64
outputdim = 86
numlayers = 128

loss_fcn = nn.NLLLoss()
model = selfModule(inputdim, hiddendim, batchsize, outputdim, numlayers)

do_restore = False
if do_restore:
    model.load_state_dict(torch.load('fng_pt.pt'))
    model.eval()
    model.to(device=device)
else:
    n_iters = 500
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999), eps=2e-16, weight_decay=0)
    train(train_loader, model, optimizer, n_iters)
    torch.save(model.state_dict(), 'fng_pt.pt')

print(test('W')) |
st46588 | here is all my code!!
I think I’m passing a tensor to the model but I’m not sure. |
st46589 | I am having the exact same error; I am passing a numpy.ndarray to the model after successfully training it. Can anybody help diagnose this, please?
(screenshot of the error omitted) |
st46590 | Pass a PyTorch tensor to the model: .size is an attribute that returns an int in numpy, while it's a function in PyTorch.
You can convert a numpy array to a tensor via tensor = torch.from_numpy(array). |
st46591 | I would like to be able to approach GPU out of memory issues more systematically
Are there some resources that explain roughly when allocations happen and when memory is released?
Are there tools that can show which tensors are alive at a given time?
As a concrete example, I am building a network that operates on 3d medical images and GPU memory is an issue. My model contains dense net like building blocks that look like
tmp = torch.cat([lots, of, inputs], 1)
small_output = conv(tmp)
I suspect that tmp eats up a lot of memory and is computationally cheap.
How do I tell PyTorch to recompute tmp whenever it is needed, instead of storing it for the backward pass?
If this is solved, how do I profile the improved memory footprint? |
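For the recomputation question specifically, torch.utils.checkpoint does exactly this: it frees the intermediate activations in the forward pass and recomputes them during backward. A minimal, self-contained sketch (the shapes and the conv layer are made-up stand-ins for the dense block above):

import torch
import torch.nn as nn
import torch.utils.checkpoint as cp

conv = nn.Conv3d(3 * 16, 16, kernel_size=3, padding=1)  # stand-in for the block's conv

def block(*inputs):
    tmp = torch.cat(inputs, 1)  # large, computationally cheap intermediate
    return conv(tmp)

inputs = [torch.randn(1, 16, 8, 8, 8, requires_grad=True) for _ in range(3)]
# tmp is not kept for backward; it is recomputed when gradients are needed
small_output = cp.checkpoint(block, *inputs)
small_output.sum().backward()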
st46592 | May I suggest starting from this tutorial and digging deeper as needed? https://pytorch.org/tutorials/recipes/recipes/profiler.html |
st46593 | I created two variants of a toy net with and without checkpointing (see end of post for code).
Profile looks essentially like
-------- ------------ ------------ ------------ ------------ ------------
Name CPU Mem Self CPU Mem CUDA Mem Self CUDA Mem # of Calls
-------- ------------ ------------ ------------ ------------ ------------
forward 51.61 Kb -276 b 394.53 Mb -3.91 Mb 1
backward -4 b -276 b 512 b -512 b 1
-------- ------------ ------------ ------------ ------------ ------------
Name CPU Mem Self CPU Mem CUDA Mem Self CUDA Mem # of Calls
-------- ------------ ------------ ------------ ------------ ------------
forward -4 b -276 b 3.82 Gb -39.45 Mb 1
backward -51.61 Kb -51.88 Kb -77.07 Mb -77.07 Mb 1
So it seems checkpointing could save a lot of memory, which is great. However, I don't understand what exactly is reported here.
What's the difference between CUDA Mem and Self CUDA Mem?
What is reported here? Is it peak allocated memory? Why are some numbers negative, like -39.45 Mb? Is it the sum of all allocations minus the sum of all deallocations?
In case somebody wants to play with it, here is my code:
import torch
from torch import nn
import torchvision.models as models
import torch.autograd.profiler as profiler
import pytorch_lightning as pl

class Model(torch.nn.Module):
    def __init__(self, ninput, nhidden, ncat, nrepeat, save_memory):
        super().__init__()
        self.save_memory = save_memory
        self.nrepeat = nrepeat
        self.ncat = ncat
        self.layer1 = nn.Linear(ninput, nhidden)
        self.layer2 = nn.Linear(ncat*nhidden, ninput)

    def forward_loop_body(self, x):
        x = self.layer1(x)
        x = torch.cat([x for _ in range(self.ncat)], 1)
        x = self.layer2(x)
        return x

    def forward(self, x):
        for _ in range(self.nrepeat):
            if self.save_memory and x.requires_grad:
                x = torch.utils.checkpoint.checkpoint(self.forward_loop_body, x)
            else:
                x = self.forward_loop_body(x)
        return x

device = torch.device("cuda:0")
nb = 1024
x = torch.randn(nb, 100).to(device)

for save_memory in [True, False]:
    model = Model(ninput=100,
                  nhidden=1000,
                  ncat=100,
                  nrepeat=10,
                  # save_memory=False,
                  save_memory=save_memory,
                  ).to(device)
    criterion = torch.nn.MSELoss()
    with profiler.profile(record_shapes=True, profile_memory=True) as prof:
        with profiler.record_function("forward"):
            y = model(x)
        with profiler.record_function("backward"):
            model.zero_grad()
            loss = criterion(x, y)
            loss.backward()
    filename = f"profile_save_memory={save_memory}.txt"
    with open(filename, "w") as io:
        io.write(prof.key_averages().table()) |
st46594 | I am not an expert in CUDA memory profiling, sorry for that.
As I understand from the tutorial:
Note the difference between self cpu time and cpu time - operators can call other operators; self cpu time excludes time spent in children operator calls, while total cpu time includes it.
It should be the same for self and total memory. Negatives are most likely releases of memory. For more information, we probably have to go into the documentation or maybe into the source. |
st46595 | I am reading this article https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html
(screenshot of the equation from the tutorial omitted)
Could you help me understand what this vertical bar means? |
st46596 | Solved by Abhilash_Srivastava in post #2
This notation is used to denote the value of the partial derivative of o with respect to xi at the value xi = 1.
So, in the equation do/dxi = 3(xi + 2)/2, substitute xi = 1 and you'll get 9/2. |
st46597 | This notation is used to denote the value of the partial derivative of o with respect to xi at the value xi = 1.
So, in the equation do/dxi = 3(xi + 2)/2, substitute xi = 1 and you'll get 9/2. |
st46598 | scipy's convolve has a mode='same' option which gives you an output with the same size as the input. How do I set parameters like stride and padding to achieve the same with torch.nn.functional.conv1d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1)? |
st46599 | For a stride and dilation of 1, the padding should be weight.size(2)//2 on both sides for an odd-sized kernel, and [weight.size(2)//2, weight.size(2)//2 - 1] for an evenly sized kernel (see the sketch below). |
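A quick check of those numbers (a sketch with random weights):

import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 100)

w_odd = torch.randn(1, 1, 5)  # odd-sized kernel
print(F.conv1d(x, w_odd, padding=w_odd.size(2) // 2).shape)  # torch.Size([1, 1, 100])

w_even = torch.randn(1, 1, 4)  # even-sized kernel: asymmetric padding
x_pad = F.pad(x, (w_even.size(2) // 2, w_even.size(2) // 2 - 1))
print(F.conv1d(x_pad, w_even).shape)  # torch.Size([1, 1, 100])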
st46600 | Thank you so much for this answer. However, I need a bigger stride; is there a formula considering this parameter? |
st46601 | For a more general formula, you could take a look at @rwightman's median pooling implementation.
This would calculate the padding for a 2-dimensional input, so you could remove one spatial dim for your use case. |
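For stride > 1, one common convention is the TF-style "same" padding, where the output length is ceil(input length / stride). A sketch of that formula for the 1-d case (the helper name is made up):

import math
import torch.nn.functional as F

def conv1d_same(x, weight, bias=None, stride=1, dilation=1):
    in_len = x.size(2)
    k = weight.size(2)
    out_len = math.ceil(in_len / stride)
    # total padding needed so that the output has out_len elements
    total_pad = max((out_len - 1) * stride + (k - 1) * dilation + 1 - in_len, 0)
    left = total_pad // 2
    right = total_pad - left
    x = F.pad(x, (left, right))
    return F.conv1d(x, weight, bias, stride=stride, dilation=dilation)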
st46602 | class RNN_Single(nn.Module):
    def __init__(self, input_dim, embed_size, hidden_state_size, classes):
        super(RNN_Single, self).__init__()
        self.embed = nn.Embedding(input_dim, embed_size)
        self.rnn = nn.RNN(embed_size, hidden_state_size)
        self.fc = nn.Linear(hidden_state_size, classes)

    def forward(self, input_batch):
        embedding_batch = self.embed(input_batch)
        output, hidden = self.rnn(embedding_batch)
        hidden = hidden.squeeze(0)
        assert torch.equal(output[-1,:,:], hidden)  # the last time step's output vector should equal the hidden vector
        return self.fc(hidden)

INPUT_DIM = len(NEWS.vocab)
EMBED_DIM = 128
HIDDEN_UNITS = 512
CLASSES = 4
model = RNN_Single(INPUT_DIM, EMBED_DIM, HIDDEN_UNITS, CLASSES)

def accuracy(preds, true):
    _, index = torch.max(preds, dim = 1)
    return (index == true).sum().float() / len(preds)

def train(model, iterator, optimizer, criterion):
    e_loss = 0
    e_acc = 0
    model.train()
    for batch in iterator:
        optimizer.zero_grad()
        preds = model(batch.title)  # call using the column name
        acc = accuracy(preds, batch.cat)
        loss = criterion(preds.squeeze(1), batch.cat)
        acc = accuracy(preds, batch.cat)
        loss.backward()
        optimizer.step()
        e_loss += loss.item()
        e_acc += acc.item()
    return e_loss/len(iterator), e_acc/len(iterator)

def eval(model, iterator):
    e_loss = 0
    e_acc = 0
    model.eval()
    for batch in iterator:
        preds = models(batch.title)
        loss = criterion(preds.squeeze(1), batch.cat)
        acc = accuracy(preds, batch.cat)
        e_loss += loss.item()
        e_acc += acc.item()
    return e_loss/len(iterator), e_acc/len(iterator)

N_EPOCHS = 5
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
    start_time = time.time()
    train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
    valid_loss, valid_acc = evaluate(model, test_iterator, criterion)
    end_time = time.time()
    epoch_mins, epoch_secs = epoch_time(start_time, end_time)
    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        torch.save(model.state_dict(), 'tut1-model.pt')
    print(f'Epoch: {epoch+1:02} / {N_EPOCHS} | Epoch Time: {epoch_mins}m {epoch_secs}s')
    print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
    print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}%')
The error is:
TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>()
      7 start_time = time.time()
      8
----> 9 train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
     10 valid_loss, valid_acc = evaluate(model, test_iterator, criterion)
     11

1 frames
<ipython-input> in train(model, iterator, optimizer, criterion)
     12         preds = model(batch.title)  # Call using the column name
     13         acc = accuracy(preds, batch.cat)
---> 14         loss = criterion(preds.squeeze(1), batch.cat)
     15         acc = accuracy(preds, batch.cat)
     16         loss.backward()

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    725             result = self._slow_forward(*input, **kwargs)
    726         else:
--> 727             result = self.forward(*input, **kwargs)
    728         for hook in itertools.chain(
    729                 _global_forward_hooks.values(),

TypeError: forward() takes 2 positional arguments but 3 were given |
st46603 | Solved by ptrblck in post #8
Thanks for the code.
You are overwriting the criterion after its definition:
criterion = nn.CrossEntropyLoss()
model = model.to(device)
criterion = model.to(device) |
st46604 | What kind of criterion are you using? Based on the stack trace it seems you might be using a custom nn.Module? If so, could you check its expected arguments and make sure that both tensors are accepted? |
st46605 | The criterion is CrossEntropyLoss. This is a multi-class problem with 4 classes. I am inheriting from the default nn.Module. |
st46606 | Are you using some old function definitions, where the passed criterion can be mapped to another argument?
In your code snippets you are e.g. using evaluate, while the definition seems to be eval(). |
st46607 | Sorry, @ptrblck, that was a blunder I made. I corrected it but still get the same error. I am new to PyTorch and DL, please bear with me.
import torch.optim as optim
optimizer = optim.SGD(model.parameters(), lr = 1e-3)
criterion = nn.CrossEntropyLoss()
model = model.to(device)
criterion = model.to(device) |
st46608 | Something might still be overwriting the definition of criterion, so could you post an executable code snippet to reproduce this issue, please? |
st46609 | Hi @ptrblck. You can find my code here: https://colab.research.google.com/drive/1stzMYrUPDXjWwPRZvJJsKAxtQ6P1ydUr?usp=sharing |
st46610 | Thanks for the code.
You are overwriting the criterion after its definition:
criterion = nn.CrossEntropyLoss()
model = model.to(device)
criterion = model.to(device) |
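In other words, the last line reassigns criterion to the model; a minimal sketch of the corrected setup:

criterion = nn.CrossEntropyLoss()
model = model.to(device)
# criterion = model.to(device)  # <- the bug: criterion now points to the model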
st46611 | I got the following error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
with anomaly detection enabled, the last line of “traceback of the forward call that caused the error” is
num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
(function _print_stack)
Traceback (most recent call last):
Error detected in MulBackward0. y has the same shape as the variable that was modified. If I understand this correctly, it seems the error is in '* y'. I don't understand how '*' could modify 'y' in place.
BTW c is a scalar tensor, and this multiplication is broadcasted.
Hope this is enough information. Thanks in advance. |
st46612 | What happens is that backprop reaches this line doing MulBackward and detects that some multiplicand is no longer available (it was overwritten). So the in-place operation is happening somewhere later in the forward pass; this is just the detection point. |
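A minimal sketch that reproduces this failure mode (assuming a scalar c that requires grad, as in the question):

import torch

c = torch.tensor(2.0, requires_grad=True)
y = c * torch.ones(3)
num = c * y           # MulBackward0 saves y for the backward pass
y += 1                # in-place update invalidates the saved tensor
num.sum().backward()  # RuntimeError: ... modified by an inplace operation
# fix: replace the in-place update with an out-of-place one, e.g. y = y + 1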
st46613 | Solved by utkuumetin in post #3
Thank you for the reply. After playing with the learning rate and the training loop, the loss is now decreasing. |
st46614 | Basically everything or nothing could be wrong.
It’s hard to tell the reason your model isn’t working without having any information.
I think a generally good approach would be to try to overfit a small data sample and make sure your model is able to overfit it properly. |
st46615 | ptrblck:
model isn’t working without having any information.
I think a generally good approach would be to try to overfit a small data sample and make sure your model is able to overfit it prop
Thank you for the reply. After playing with the learning rate and the training loop, the loss is now decreasing. |
st46616 | Hello, I am creating an RNN for binary classification. The goal is to look at binary arrays of length 60: arrays containing 2 or more consecutive 1s are not part of the grammar (target = 0), and those that don't are part of the grammar (target = 1). The test data is similar to the training data, except that it is of length 80. In my model I attempted to make the batch sizes 3 and 4 for the training and test set respectively; however, I get the error 'Expected input batch_size (3) to match target batch_size (1).' I am not sure how to get this network to work. The goal is to use the binary arrays to predict either class 1 or 0 - so the target batch size should only be 1, no?
Here is the model class:
input_size = 1
batch = 3
sequence_length = 20
hidden_size = 128
num_classes = 2
num_layers = 1

class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, output_size):
        super(RNN, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.rnn = nn.RNN(input_size, hidden_size, num_layers, batch_first = True)
        #self.fc = nn.Linear(hidden_size*sequence_length, output_size)
        self.fc = nn.Linear(self.hidden_size*sequence_length, output_size)
        self.softmax = nn.LogSoftmax(dim=1)

    def forward(self, x):
        #x = torch.reshape(x, (batch, sequence_length, input_size))
        # h0 = torch.zeros(self.num_layers, x.size(1), self.hidden_size).cuda()
        h0 = torch.zeros(self.num_layers, batch, self.hidden_size).cuda()
        #x = torch.unsqueeze(x, 0)
        # print(x.size())
        # forward propagation
        out, _ = self.rnn(x, h0)
        out = out.reshape(out.shape[0], -1)
        out = self.fc(out)
        output = self.softmax(out)
        return output
The test definition:
tot_losses = []
tot_counter = [i * len(train_loader.dataset) for i in range(num_epochs + 1)]

def test(model, loader, batch = 3):
    with torch.no_grad():
        model.eval()
        N = 0
        tot_loss, correct = 0.0, 0.0
        predictions = []
        targets = []
        for i, (data, target) in enumerate(loader):
            data, target = data.cuda(), target.cuda()
            if batch == 3:
                data = torch.reshape(data, (batch, sequence_length, input_size))
            else:
                data = torch.reshape(data, (4, sequence_length, input_size))
            #print(data.size())
            output = model(data)
            tot_loss += criterion(output, target).cpu().numpy()
            pred = output.data.max(1, keepdim = True)[1]
            targets.append(target.cpu())
            predictions.append(pred.cpu())
            correct += pred.eq(target.data.view_as(pred)).sum()
        tot_loss /= len(test_loader.dataset)
        tot_losses.append(tot_loss)
        confusion_matrix = ConfusionMatrix(predictions, targets, len(loader))
        return tot_loss, 100. * correct/len(loader.dataset), confusion_matrix
Training loop:
# train_losses = []
# train_counter = []
logdir = generate_unique_logpath(top_logdir, "RNN_Adam_IL20_LR00005")
print("Logging to {}".format(logdir))
# -> Prints out Logging to ./logs/linear_0
if not os.path.exists(logdir):
    os.mkdir(logdir)

print("Before Training Validation Set Performance:")
val_loss, val_acc, confusion_M = test(model, val_loader)
print("\nValidation : Avg. Loss : {:.4f}, Accuracy : {:.2f}\n".format(val_loss, val_acc))
print("Validation Set Confusion Matrix: \n" + str(confusion_M))
print()

model_checkpoint = ModelCheckpoint(logdir + "/best_model.pt", model)
for epoch in range(num_epochs):
    print("---------------------------------------------------------------------------")
    for batch_idx, (data, target) in enumerate(train_loader):
        data = data.cuda()
        targets = target.cuda()
        data = torch.reshape(data, (batch, sequence_length, input_size))
        model.train()
        output = model(data)
        # if batch_idx % 50:
        #     print(output)
        #     print(targets)
        loss = criterion(output, targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if batch_idx % 50 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(epoch, batch_idx * len(data), len(train_loader.dataset), 100. * batch_idx / len(train_loader), loss.item()))
            train_losses.append(loss.item())
            train_counter.append((batch_idx * batch_size_train) + ((epoch) * len(train_loader.dataset)))
    val_loss, val_acc, confusion_M = test(model, val_loader, batch = 4)
    model_checkpoint.update(val_loss)
    print("\n Validation : Avg. Loss : {:.4f}, Accuracy : {:.2f}\n".format(val_loss, val_acc))
    print("Validation Set Confusion Matrix: \n" + str(confusion_M))
    print()
    print("---------------------------------------------------------------------------")
error:
Logging to C:\Users\Daniel\OneDrive\Documents\Neural Networks Hw\Best Models Project\logs\RNN_Adam_IL20_LR00005_14
Before Training Validation Set Performance:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-23-359fc80a1bdb> in <module>
9
10 print("Before Training Validation Set Performance:")
---> 11 val_loss, val_acc, confusion_M = test(model, val_loader)
12 print("\nValidation : Avg. Loss : {:.4f}, Accuracy : {:.2f}\n".format(val_loss, val_acc))
13 print("Validation Set Confusion Matrix: \n" + str(confusion_M))
<ipython-input-21-ff2c743fda6d> in test(model, loader, batch)
19 #print(data.size())
20 output = model(data)
---> 21 tot_loss += criterion(output, target).cpu().numpy()
22 pred = output.data.max(1, keepdim = True)[1]
23 targets.append(target.cpu())
~\anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
530 result = self._slow_forward(*input, **kwargs)
531 else:
--> 532 result = self.forward(*input, **kwargs)
533 for hook in self._forward_hooks.values():
534 hook_result = hook(self, input, result)
~\anaconda3\lib\site-packages\torch\nn\modules\loss.py in forward(self, input, target)
202
203 def forward(self, input, target):
--> 204 return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction)
205
206
~\anaconda3\lib\site-packages\torch\nn\functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
1834 if input.size(0) != target.size(0):
1835 raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).'
-> 1836 .format(input.size(0), target.size(0)))
1837 if dim == 2:
1838 ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
ValueError: Expected input batch_size (3) to match target batch_size (1).
Thank you! |
st46617 | That part is not working as intended:
Danielr13:
for i, (data, target) in enumerate(loader):
    data, target = data.cuda(), target.cuda()
    if batch == 3:
        data = torch.reshape(data, (batch, sequence_length, input_size))
    else:
        data = torch.reshape(data, (4, sequence_length, input_size))
From the loader, you get a single (data, target)-pair – at least I assume given the error. You then forcefully reshape the single data item into some kind of batch, where I’m pretty sure you mess up your data. Anyway, you don’t reshape target the same way, so it’s still of size 1. |
st46618 | Hi, the data is a binary array of length 60 and the target is only size 1 since the whole string represents whether or not the data array is an element of the grammar or not. Does it still make sense to reshape my target if this is the case? |
st46619 | From what I understand, no. The reshape is wrong then. Does it work when you remove the if/else block? |
st46620 | I am working on an architecture where I experience spurious exploding gradients and I want to find out which operation exactly is causing them. I have already identified the parameters that are affected by these huge gradients and have code that identifies when unusual gradients occur, but I am unsure how I can proceed.
I think I know what causes it for some parameters, but others I have no clue.
Ideally, I would recalculate the gradient while retaining the graph and interactively traverse the gradient-calculation backwards to find out what’s going on.
When I recalculate my gradients using .backward(retain_graph=True), the grad_fn for the parameters is still None, and I am not sure how to actually interact with the graph and find the reason for the exploding gradient.
EDIT: OK, I've found out that by using create_graph=True the grad_fn gets populated, but I am unsure whether it's possible to interact with them in a meaningful way. I have not found a way to evaluate them or inspect their context. |
st46621 | Hi Leander,
Would something like torch.nn.utils.clip_grad_norm_ (the link for which can be found here) be useful? |
st46622 | What kind of code are you currently using to “identify when unusual gradients occur”?
You could register hooks on all parameters and maybe print some debug message, when high gradients are propagated.
Once the step is found, you could check all .grad attributes as well as the parameters to see, which operations causes the high gradients?
Maybe I’ve misunderstood your question and you are already doing exactly this. |
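A minimal sketch of that hook-based approach (the threshold is an arbitrary assumption):

import torch

def make_hook(name, threshold=1e4):
    def hook(grad):
        norm = grad.norm()
        if norm > threshold or not torch.isfinite(norm):
            print('suspicious gradient for {}: norm={:.3e}'.format(name, norm.item()))
        return grad
    return hook

# model is assumed to be the network under investigation
for name, param in model.named_parameters():
    if param.requires_grad:
        param.register_hook(make_hook(name))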
st46623 | @ptrblck yeah, I am doing something similar to this. I am keeping track of whether the gradient-norm is unexpected based on previous gradients and some absolute conditions. I am also generating plots containing the maximum and mean norm of the gradients per parameter. So I have already identified some parameters that sometimes (randomly) get unusually high gradients (they are very, very big. in the range of 1e+6).
I suspect that certain parametrizations of the operations I use lead to exploding gradients further down the network, since it occurs without a pattern and is relatively unaffected by other hyperparameters. The immediate operations are fairly standard (for example, some of the affected modules are standard 2d convolutions).
But I am not sure, some of the operations are very dynamic in nature and some get predicted by another network. It’s quite delicate and I have already thought of exploding gradients under certain scenarios, but it seems like I have not eliminated every scenario.
So I have the code to start a debugging session, the code to identify the affected parameters and the code to move my model and the data onto the CPU to enjoy a lot more available RAM and potentially trace the operations. But I am unsure how to start from here so that I can identify the culprits that let my gradients explode. They must originate from somewhere.
Sometimes it happens after a few minutes, sometimes I need to wait a few hours.
Ideally, I would have an interactive conversation with the autograd framework, where I would ask it questions about the results and the responsible computations and parameters to isolate and identify the origin. But I am fine with a cruder way, as long as I don't have to guess. There are a lot of moving parts.
@Prerna_Dhareshwar gradient clipping does not solve my problem, since the thing I am interested in is the source of the instability. It's more of a duct-tape approach. It's researchy, but I don't see an immediate reason why the model must be unstable. I think it's just a special condition I overlooked, but I am not sure how I am supposed to interact with the autograd framework in this scenario. |
st46624 | @LeanderK did you ever find a good methodology for debugging this problem? I have a similar issue, and most answers suggest clipping gradients, but I agree that is like applying duct tape and WD-40. |
st46625 | Hello @LeanderK - did you find what was causing this issue?
I am experiencing exploding gradients in a cascade of 2 models, where the first model W is unsupervised (trained using this loss) and the second H is fully supervised using CE loss.
Are you using a similar setting? In your original post you mentioned: "predicted by another network".
Please let me know if you have found a solution to this.
Thanks,
Megh |
st46626 | I tried this operation for one batch and it worked:
x = torch.FloatTensor([[ax,bx],[cx,dx],[ex,fx],[gx,hx]])
y = torch.FloatTensor([[ay,by],[cy,dy],[ey,fy],[gy,hy]])
z = x[:,None,0] * y[:,1]
print(z)
tensor([[ ax*by, ax*dy, ax*fy, ax*hy],
[ cx*by, cx*dy, cx*fy, cx*hy],
[ ex*by, ex*dy, ex*fy, ex*hy],
[ gx*by, gx*dy, gx*fy, gx*hy]])
I want to repeat the above operation for several batches, as in the example below, without using a for loop.
x = torch.FloatTensor([[[ax1,bx1],[cx1,dx1],[ex1,fx1],[gx1,hx1]],
[[ax2,bx2],[cx2,dx2],[ex2,fx2],[gx2,hx2]]])
y = torch.FloatTensor([[[ay1,by1],[cy1,dy1],[ey1,fy1],[gy1,hy1]],
[[ay2,by2],[cy2,dy2],[ey2,fy2],[gy2,hy2]]])
This is the result I want to get:
tensor([[[ ax1*by1, ax1*dy1, ax1*fy1, ax1*hy1],
[ cx1*by1, cx1*dy1, cx1*fy1, cx1*hy1],
[ ex1*by1, ex1*dy1, ex1*fy1, ex1*hy1],
[ gx1*by1, gx1*dy1, gx1*fy1, gx1*hy1]],
[[ ax2*by2, ax2*dy2, ax2*fy2, ax2*hy2],
[ cx2*by2, cx2*dy2, cx2*fy2, cx2*hy2],
[ ex2*by2, ex2*dy2, ex2*fy2, ex2*hy2],
[ gx2*by2, gx2*dy2, gx2*fy2, gx2*hy2]]])
I tried the following, but it gave me errors:
z = x[...,None,0] * y[...,1]
RuntimeError: The size of tensor a (4) must match the size of tensor b (2) at non-singleton dimension 1
Is there any way of doing this without a for loop? |
st46627 | Solved by ptrblck in post #2
This should work:
x = torch.arange(2 * 4 * 2).view(2, 4, 2)
y = torch.arange(2 * 4 * 2).view(2, 4, 2)
z = x[:, :, None, 0] * y[:, None, :, 1] |
st46628 | This should work:
x = torch.arange(2 * 4 * 2).view(2, 4, 2)
y = torch.arange(2 * 4 * 2).view(2, 4, 2)
z = x[:, :, None, 0] * y[:, None, :, 1] |
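For reference, the same batched product can also be written with einsum (an equivalent sketch):

z = torch.einsum('bi,bj->bij', x[:, :, 0], y[:, :, 1])
# identical to x[:, :, None, 0] * y[:, None, :, 1]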
st46629 | # no. of conv: 6
# no. of fc: 3
# Kernel size (conv): 3x3
# Stride (conv): 1x1
# Stride (maxPool) = 2x2
# Dilation: 1x1 (default)
# padding = 1
# Dropout in FC: 10%
import numpy as np
import torch
import torchvision
import torchvision.datasets as datasets
import torchvision.transforms as transforms
import torch.nn as nn
import torch.nn.functional as F
import torch.autograd as autograd
from torch.autograd import Variable

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)

class CNN2(nn.Module):
    def __init__(self):
        super(CNN2, self).__init__()
        # TODO: define your CNN
        # Convolution Layers
        self.conv1 = nn.Conv2d(3, 32, 3, padding=1)
        self.conv2 = nn.Conv2d(32, 64, 3, padding=1)
        self.conv3 = nn.Conv2d(64, 128, 3, padding=1)
        self.conv4 = nn.Conv2d(128, 128, 3, padding=1)
        self.conv5 = nn.Conv2d(128, 256, 3, padding=1)
        self.conv6 = nn.Conv2d(256, 256, 3, padding=1)
        # FC Layers
        self.fc1 = nn.Linear(4096, 1024)
        self.fc2 = nn.Linear(1024, 512)
        self.fc3 = nn.Linear(512, 10)
        # Sub Sampling (Max pooling)
        #self.maxpool2d1 = nn.MaxPool2d(kernel_size=2, stride=2)
        #self.maxpool2d2 = nn.MaxPool2d(kernel_size=2, stride=2)
        #self.maxpool2d3 = nn.MaxPool2d(kernel_size=2, stride=2)
        # Dropout: 10%
        self.drouput1 = nn.Dropout2d(p=0.1)
        self.drouput2 = nn.Dropout(p=0.1)
        self.drouput3 = nn.Dropout(p=0.1)
        # Activation Function
        self.relu = nn.ReLU()

    def forward(self, y):
        # TODO: define your forward function
        # 1st conv
        y = self.conv1
        # Batch Normalization over 4D input
        y = nn.BatchNorm2d(32)
        y = self.relu
        # 2nd conv
        y = self.conv2
        y = self.relu
        # Max pooling over a (2, 2) window with stride = 2 on 2nd conv layer
        y = nn.MaxPool2d(kernel_size=2, stride=2)
        # 3rd conv
        y = self.conv3
        y = nn.BatchNorm2d(128)
        y = self.relu
        # 4th conv
        y = self.conv4
        y = self.relu
        y = nn.MaxPool2d(kernel_size=2, stride=2)
        y = self.drouput1
        # 5th conv
        y = self.conv5
        y = nn.BatchNorm2d(256)
        y = self.relu
        # 6th conv
        y = self.conv6
        y = self.relu
        y = nn.MaxPool2d(kernel_size=2, stride=2)
        # flatten
        y = y.view(-1, self.num_flat_features(y))
        #y = y.view(y.size(0), -1)
        #y = torch.flatten(y, start_dim = 1)
        # fc layers
        y = self.drouput2
        y = self.fc1
        y = self.relu
        y = self.fc2
        y = self.relu
        y = self.drouput3
        y = self.fc3
        return y

    def num_flat_features(self, y):
        size = y.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features

cnn2 = CNN2()
print(cnn2)
params2 = list(cnn2.parameters())
print(len(params2))
print(params2[0].size())
print(params2[1].size())
print(params2[2].size())
print(params2[3].size())
print(params2[4].size())
print(params2[5].size())
print(params2[6].size())
print(params2[7].size())
print(params2[8].size())
print(params2[9].size())
print(params2[10].size())
print(params2[11].size())
print(params2[12].size())
print(params2[13].size())
print(params2[14].size())
print(params2[15].size())
print(params2[16].size())
print(params2[17].size())

cnn2 = CNN2().to(device)  # operate on GPU
print(cnn2)

## Define the Loss function and Optimizer
import torch.optim as optim
# TODO: you can change loss function and optimizer
criterion2 = nn.CrossEntropyLoss()
optimizer2 = optim.SGD(cnn2.parameters(), lr=0.001, momentum=0.9)

## Train the Network
n_epoch2 = 5
for epoch2 in range(n_epoch2):  # loop over the dataset multiple times
    running_loss2 = 0.0
    for i, data in enumerate(cifar_trainloader, 0):
        # TODO: write training code
        # get the inputs
        inputs, labels = data
        inputs = inputs.to(device)
        labels = labels.to(device)
        # wrap them in Variable
        inputs, labels = Variable(inputs), Variable(labels)
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        output2 = cnn2(inputs)
        loss2 = criterion2(output2, labels)
        loss2.backward()
        optimizer2.step()
        # print statistics
        running_loss2 += loss2.item()
        if i % 2000 == 1999:  # print every 2000 mini-batches
            print('[%d, %5d] loss2: %.3f' % (epoch2 + 1, i + 1, running_loss2 / 2000))
            running_loss2 = 0.0

print('Finished Training the CNN2 Model')
---------------------------------------------------------------------------
ModuleAttributeError                      Traceback (most recent call last)
<ipython-input-108-58a5de353b76> in <module>()
     20
     21     # forward + backward + optimize
---> 22     output2 = cnn2(inputs)
     23     loss2 = criterion2(output2, labels)
     24     loss2.backward()

2 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __getattr__(self, name)
    770             return modules[name]
    771         raise ModuleAttributeError("'{}' object has no attribute '{}'".format(
--> 772             type(self).__name__, name))
    773
    774     def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:

ModuleAttributeError: 'MaxPool2d' object has no attribute 'view' |
st46630 | Hi,
In your forward method, you are not calling any of the objects you instantiated in the __init__ method.
In Python, you first initialize a class to make an object, then call it:
self.conv1 = nn.Conv2d(#args)  # just the init; you still need to call it
# in forward
y = self.conv1(#some_input)
In none of your calls in forward have you specified an input.
Look at the way you have initialized criterion2 and used it by passing inputs. You need to do the same thing for all layers in the forward method, as sketched below.
I think before doing that, you should read through the basic PyTorch tutorials. They will help you get started.
Bests |
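A minimal sketch of the corrected pattern for the first block (the pooling layer here is an assumed addition, since the original creates nn.MaxPool2d inside forward):

# in __init__ - create each layer (and its learnable state) once:
#     self.bn1 = nn.BatchNorm2d(32)
#     self.maxpool = nn.MaxPool2d(kernel_size=2, stride=2)
# in forward - call each layer on the activation:
y = self.conv1(y)
y = self.bn1(y)
y = self.relu(y)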
st46631 | Hello,
I have two questions about autoencoder-based signal reconstruction (a vector here, considering an FC autoencoder). I would be really thankful if anyone helps me through this.
1: Does the reconstruction error depend on the vector size (for example, reconstructing a dataset with signals of dimension 3, like [x1,x2,x3], versus 4-dimensional inputs like [x1,x2,x3,x4])?
2: Can we say that the reconstruction error depends on the mean of the inputs? Would a set of inputs with higher values (like [10,10,10]) show a higher reconstruction error than inputs in a lower range (like [1,1,1])? If yes, is it recommended to normalize the data beforehand?
Thank you so much |
st46632 | It depends how the loss is calculated. If you are using e.g. nn.MSELoss in the default setup, the loss value should not depend on the input feature dimension:
x = torch.randn(1, 3)
y = torch.randn(1, 3)
criterion = nn.MSELoss()
loss_small = criterion(x, y)
x = torch.randn(1, 3000)
y = torch.randn(1, 3000)
loss_large = criterion(x, y)
print(loss_small)
> tensor(1.5440)
print(loss_large)
> tensor(2.0731)
However, you can of course use reduction='sum', which would change it.
It depends again on your use case and the loss value will depend on the magnitude:
y = torch.tensor([[1.]])
rel_err = 1e-1
x = y - y * rel_err
loss_small = criterion(x, y)
y = torch.tensor([[100.]])
x = y - y * rel_err
loss_large = criterion(x, y)
print(loss_small)
> tensor(0.0100)
print(loss_large)
> tensor(100.)
As you can see, the loss is much higher in the second use case even though the relative error is the same. I don’t know how you are interpreting the loss, but it doesn’t necessarily mean that the first use case is “better” than the second one.
That being said, normalizing the inputs often helps during the training so you might want to normalize the inputs anyway and could even “unnormalize” the outputs, if necessary. |
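A minimal sketch of that normalization round trip (train_data, x, model, and criterion are assumed from the use case):

mean = train_data.mean(dim=0)
std = train_data.std(dim=0) + 1e-8    # avoid division by zero

x_norm = (x - mean) / std             # normalize before the autoencoder
recon = model(x_norm) * std + mean    # "unnormalize" the reconstruction
loss = criterion(recon, x)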
st46633 | Thank you so much @ptrblck. You are right, I think I should deeply consider and think about my loss_function.
I think I should read your comments several times and think more. Thank you again. |
st46634 | I have a dataset with 1000 samples, 1 timestep, and 1 feature (1000, 1, 1). I see a lot of discussions where people create subsequences (splitting the single timestep column into multiple, using some number X - e.g. (100, 10, 1)), use that as input to an LSTM autoencoder, and argue that LSTMs/autoencoders can only learn when the data has multiple timesteps. But I get good results with just 1 timestep. What is the correct input here? Is it incorrect to use the univariate (1000, 1, 1) input? |
st46635 | Hi all,
cpp is far outside my comfort zone. So far I was using torch.nn.grad.conv2d_weight but this seems very slow. I want to know if using cudnn_convolution_backward_weight is faster. My cpp extension looks like this:
#include <torch/extension.h>
#include <vector>
#include <ATen/NativeFunctions.h>
#include <ATen/Config.h>

at::Tensor backward_weight(
    c10::ArrayRef<long int> weight_size,
    const at::Tensor& grad_output,
    const at::Tensor& input,
    c10::ArrayRef<long int> padding,
    c10::ArrayRef<long int> stride,
    c10::ArrayRef<long int> dilation,
    int64_t groups,
    bool benchmark,
    bool deterministic) {
  return at::cudnn_convolution_backward_weight(
      weight_size,
      grad_output,
      input,
      padding,
      stride,
      dilation,
      groups,
      benchmark,
      deterministic);
}

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
  m.def("backward", &backward_weight, "Conv2d backward cudnn");
}
Which I compile jit like this:
from torch.utils.cpp_extension import load
conv2d_cudnn = load(name="conv2d_backward", sources=["conv2d_backward.cpp"], verbose=True)
I can then use it in my python code: conv2d_cudnn.backward.
All my parameters seem correct (identical to torch.nn.grad.conv2d_weight; the order has to be a bit different though).
I receive the following error:
RuntimeError: cuDNN error: CUDNN_STATUS_BAD_PARAM (getWorkspaceSize at /opt/conda/conda-bld/pytorch_1549628766161/work/aten/src/ATen/native/cudnn/Conv.cpp:653)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x45 (0x7f1d5fc60cf5 in /home/hans/anaconda/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x1082df9 (0x7f1d63d8bdf9 in /home/hans/anaconda/lib/python3.6/site-packages/torch/lib/libcaffe2_gpu.so)
frame #2: at::native::raw_cudnn_convolution_backward_weight_out(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool) + 0x19b (0x7f1d63d89eeb in /home/hans/anaconda/lib/python3.6/site-packages/torch/lib/libcaffe2_gpu.so)
frame #3: at::native::cudnn_convolution_backward_weight(char const*, c10::ArrayRef<long>, at::TensorArg const&, at::TensorArg const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool) + 0x3f7 (0x7f1d63d8a797 in /home/hans/anaconda/lib/python3.6/site-packages/torch/lib/libcaffe2_gpu.so)
frame #4: at::native::cudnn_convolution_backward_weight(c10::ArrayRef<long>, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool) + 0xf7 (0x7f1d63d8aac7 in /home/hans/anaconda/lib/python3.6/site-packages/torch/lib/libcaffe2_gpu.so)
frame #5: at::CUDAFloatType::cudnn_convolution_backward_weight(c10::ArrayRef<long>, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool) const + 0xab (0x7f1d63e62fdb in /home/hans/anaconda/lib/python3.6/site-packages/torch/lib/libcaffe2_gpu.so)
frame #6: torch::autograd::VariableType::cudnn_convolution_backward_weight(c10::ArrayRef<long>, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool) const + 0x336 (0x7f1d5dacc436 in /home/hans/anaconda/lib/python3.6/site-packages/torch/lib/libtorch.so.1)
frame #7: <unknown function> + 0x34cb3 (0x7f1d42018cb3 in /tmp/torch_extensions/conv2d_backward/conv2d_backward.so)
frame #8: backward_weight(c10::ArrayRef<long>, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long, bool, bool) + 0x85 (0x7f1d42019015 in /tmp/torch_extensions/conv2d_backward/conv2d_backward.so)
frame #9: <unknown function> + 0x57f10 (0x7f1d4203bf10 in /tmp/torch_extensions/conv2d_backward/conv2d_backward.so)
frame #10: <unknown function> + 0x54a4e (0x7f1d42038a4e in /tmp/torch_extensions/conv2d_backward/conv2d_backward.so)
frame #11: <unknown function> + 0x50259 (0x7f1d42034259 in /tmp/torch_extensions/conv2d_backward/conv2d_backward.so)
frame #12: <unknown function> + 0x5093f (0x7f1d4203493f in /tmp/torch_extensions/conv2d_backward/conv2d_backward.so)
frame #13: <unknown function> + 0x412aa (0x7f1d420252aa in /tmp/torch_extensions/conv2d_backward/conv2d_backward.so)
<omitting python frames>
frame #26: torch::autograd::PyFunctionPostHook::operator()(std::vector<torch::autograd::Variable, std::allocator<torch::autograd::Variable> > const&, std::vector<torch::autograd::Variable, std::allocator<torch::autograd::Variable> > const&) + 0xe4 (0x7f1d81086b84 in /home/hans/anaconda/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #27: torch::autograd::Engine::evaluate_function(torch::autograd::FunctionTask&) + 0x1711 (0x7f1d5d9b40c1 in /home/hans/anaconda/lib/python3.6/site-packages/torch/lib/libtorch.so.1)
frame #28: torch::autograd::Engine::thread_main(torch::autograd::GraphTask*) + 0xc0 (0x7f1d5d9b4e80 in /home/hans/anaconda/lib/python3.6/site-packages/torch/lib/libtorch.so.1)
frame #29: torch::autograd::Engine::thread_init(int) + 0xc7 (0x7f1d5d9b1a47 in /home/hans/anaconda/lib/python3.6/site-packages/torch/lib/libtorch.so.1)
frame #30: torch::autograd::python::PythonEngine::thread_init(int) + 0x2a (0x7f1d8107633a in /home/hans/anaconda/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #31: <unknown function> + 0xb8678 (0x7f1d81d08678 in /home/hans/anaconda/lib/python3.6/site-packages/torch/../../../libstdc++.so.6)
frame #32: <unknown function> + 0x76ba (0x7f1d91d7e6ba in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #33: clone + 0x6d (0x7f1d91ab441d in /lib/x86_64-linux-gnu/libc.so.6)
Does anybody know where to look to debug?
Thanks! |
st46636 | Turns out it does work, and this error indicates that the shapes are not correct. It seems torch.nn.grad.conv2d_weight is a bit more forgiving about wrong shapes.
I tried:
grad_output shape: torch.Size([1, 32, 46, 46])
input shape: torch.Size([1, 128, 49, 49])
But it should have been (with padding 0):
input shape: torch.Size([1, 128, 48, 48])
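A quick way to sanity-check such shapes before calling the stricter cuDNN path is the standard conv output-size formula (a sketch; the 3x3 kernel with stride 1 and padding 0 is inferred from the 48 -> 46 sizes above):
import math

def conv_out_size(in_size, kernel, stride=1, padding=0, dilation=1):
    # floor((in + 2*p - d*(k-1) - 1) / s + 1), per spatial dimension
    return math.floor((in_size + 2 * padding - dilation * (kernel - 1) - 1) / stride + 1)

print(conv_out_size(48, kernel=3))  # 46 -> consistent with grad_output
print(conv_out_size(49, kernel=3))  # 47 -> the mismatch that triggers the error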
cudnn_convolution_backward_weight is about 3x faster than torch.nn.grad.conv2d_weight in my case |
st46637 | Can you give an example of how to call this function? I get the following error:
TypeError: backward(): incompatible function arguments. The following argument types are supported:
1. (arg0: at::IntArrayRef, arg1: at::Tensor, arg2: at::Tensor, arg3: at::IntArrayRef, arg4: at::IntArrayRef, arg5: at::IntArrayRef, arg6: int, arg7: bool, arg8: bool) -> at::Tensor
Invoked with: torch.Size([512, 512, 3, 3]), tensor([[[[ 0.0000e+00, 1.3958e-06, -4.4237e-05, ...]]]], device='cuda:0'), tensor([[[[ 5.1700e-07, -1.7602e-07, 2.0850e-07, ...]]]], device='cuda:0'), (1, 1), (1, 1), (1, 1), 1, True, False |
st46638 | Hi Rahan, it is a bit hard to see what is wrong due to the formatting. I call the function like this:
conv2d_cudnn.backward(module.weight.shape,   # weight_size
                      gradient,              # grad_output
                      input_tensor,          # input
                      module.padding, module.stride, module.dilation,
                      module.groups,
                      True, False)           # benchmark, deterministic |
st46639 | This worked for me, but I haven’t tested it with the latest PyTorch version:
#include <torch/extension.h>
#include <c10/util/ArrayRef.h>
#include <vector>
#include <ATen/NativeFunctions.h>
#include <ATen/Config.h>

at::Tensor backward_weight(
    std::vector<int64_t> weight_size,
    const at::Tensor& grad_output,
    const at::Tensor& input,
    std::vector<int64_t> padding,
    std::vector<int64_t> stride,
    std::vector<int64_t> dilation,
    int64_t groups,
    bool benchmark,
    bool deterministic) {
  // Thin wrapper around ATen's cuDNN backward-weight kernel.
  return at::cudnn_convolution_backward_weight(
      weight_size,
      grad_output,
      input,
      padding,
      stride,
      dilation,
      groups,
      benchmark,
      deterministic);
}

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
  m.def("backward", &backward_weight, "Conv2d backward cudnn");
}
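For completeness, a sketch of loading this extension from Python with torch.utils.cpp_extension.load (the source file name conv2d_backward.cpp is an assumption; the module name matches the /tmp/torch_extensions/conv2d_backward path in the stack trace above):
from torch.utils.cpp_extension import load

# JIT-compiles the C++ file above into an importable module.
conv2d_cudnn = load(name="conv2d_backward", sources=["conv2d_backward.cpp"], verbose=True)

# Then call it as shown earlier in the thread:
# grad_weight = conv2d_cudnn.backward(module.weight.shape, grad_output, input_tensor,
#                                     module.padding, module.stride, module.dilation,
#                                     module.groups, True, False) |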
st46640 | I have created a neural net with nn.ModuleList().
I want to report the R² score (coefficient of determination).
I would like to use it for 10 benchmark runs with different data splits and different random seeds for the network initialization.
I would also like to know how I can use these benchmark runs to obtain the R² scores.
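For reference, R² can be computed directly from predictions and targets; a minimal sketch (preds, targets, and the per-run body are placeholders for the actual benchmark code):
import torch

def r2_score(preds, targets):
    # R^2 = 1 - SS_res / SS_tot
    ss_res = ((targets - preds) ** 2).sum()
    ss_tot = ((targets - targets.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

scores = []
for run in range(10):
    torch.manual_seed(run)  # different seed per benchmark run
    # ... split the data, train the nn.ModuleList model, predict on the test split ...
    preds, targets = torch.randn(100), torch.randn(100)  # placeholders
    scores.append(r2_score(preds, targets).item())

scores = torch.tensor(scores)
print(f"R2 over 10 runs: {scores.mean():.3f} +/- {scores.std():.3f}") |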
st46641 | Hi, I have a problem with a memory leak (?).
When I load my dataset, memory usage grows to 100 processes with 30 GB of RES memory each. I think the problem is my Dataset class. So my question is how to load the data part by part (is it possible to reduce the memory needed in my computations this way?). I attach my class below.
import torchvision
from torch.utils.data import Dataset, DataLoader

class LoadDataset(Dataset):
def __init__(self):
self.images = []
self.targets = []
img_path, ann_path = (
"path",
"ann",
)
coco_ds = torchvision.datasets.CocoDetection(img_path, ann_path)
for i in range(0, len(coco_ds)):
img, ann = coco_ds[i]
images, targets = collate(
[img.copy(), img.copy()], [ann, ann], coco_ds.coco
)
for t in targets:
self.targets.append(t)
for image in images:
self.images.append(image)
def __len__(self):
return len(self.images)
def __getitem__(self, idx):
img = self.images[idx]
target = self.targets[idx]
return (
img,
target,
)
Later I load data before iteration over epochs like:
train_loader = DataLoader(LoadDataset(), batch_size=24, shuffle=True)
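One way to keep memory flat is to load each sample lazily in __getitem__ instead of materializing everything in __init__; a sketch under the post’s assumptions ("path"/"ann" as above; the post’s collate/transforms would still need to be applied, e.g. via a collate_fn):
import torchvision
from torch.utils.data import Dataset, DataLoader

class LazyDataset(Dataset):  # hypothetical name
    def __init__(self, img_path, ann_path):
        # Only the annotation index lives in RAM; images stay on disk.
        self.coco_ds = torchvision.datasets.CocoDetection(img_path, ann_path)

    def __len__(self):
        return len(self.coco_ds)

    def __getitem__(self, idx):
        img, ann = self.coco_ds[idx]  # loaded from disk on demand
        return img, ann

train_loader = DataLoader(LazyDataset("path", "ann"), batch_size=24, shuffle=True) |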
st46642 | Hi! I am trying to convert my input images and their masks from numpy.ndarray to tensors, but I encountered an error along the way.
for img in train_gen:
img = torch.from_numpy(img)
print(img) |
st46643 | and this is the error I got
TypeError Traceback (most recent call last)
<ipython-input-40-658119f1f5a6> in <module>()
1 for img in train_gen:
----> 2 img = torch.from_numpy(img)
3 print(img)
TypeError: expected np.ndarray (got tuple) |
st46644 | Hi,
From the error message, it looks like “img” is a tuple and not a numpy array. So you should fix your code to make sure it is actually a numpy array.
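Since the generator apparently yields (image, mask) tuples, one likely fix is to unpack first (a sketch; train_gen is from the original post):
import torch

for img, mask in train_gen:        # unpack the (image, mask) tuple
    img = torch.from_numpy(img)    # each element is now an ndarray
    mask = torch.from_numpy(mask)
    print(img.shape, mask.shape) |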
st46645 | I know it’s trivial for a parameter vector, but when I iterate through the params in, for example:
torch.cat([param.flatten() for param in self.policy.model.parameters()])
setting requires_grad to false returns an error. How do I turn off the gradient for the individual param scalar weights? |
st46646 | Hi,
Could you share a code sample of what you’re trying to do exactly and what is the exact error please? |
st46647 | I have a model and I want to disable some of the weights, so I flatten the weights of the model in order to iterate through them and turn off requires_grad for a subset:
params = torch.cat([param.flatten() for param in model.parameters()])
for i, param in enumerate(params):
if should_be_disabled[i]:
param.requires_grad_(False)
This raises an error saying that requires_grad can only be changed for leaf Tensors. |
st46648 | Hi,
Your first line actually concatenates all the parameters into a single big Tensor in a differentiable manner.
So the new params Tensor’s requires_grad property is independent of the ones on your original parameters.
You can disable gradients for the Tensors in model.parameters() though:
for j, p in enumerate(model.parameters()):
if should_be_disabled[j]:
p.requires_grad_(False)
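As a follow-up (a common pattern, not part of the original answer): if you later create an optimizer, you would typically filter out the frozen parameters as well:
import torch.optim as optim

# Only hand the still-trainable parameters to the optimizer.
optimizer = optim.SGD([p for p in model.parameters() if p.requires_grad], lr=0.01) |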
st46649 | The problem is that p is a vector in your example and I would like to disable individual scalar weights within that vector. (A subset of them) |
st46650 | I am afraid that this is not possible. Tensors are “elementary” autograd objects. And so either the whole Tensor requires gradients or not.
Note that you can just zero-out the gradients after they are computed if you just want to not have gradients for some entries in there. (you can even do that with a hook to make sure it happens every time a gradient is computed for that Tensor). |
st46651 | That would still be good. Thanks! Could you provide an example of this please?
My only concern is that I do three separate backwards passes for three separate loss terms, and I’m worried it’ll get convoluted because each one requires different gradients to be zeroed out. |
st46652 | Sure
def get_hook(param_idx):
def hook(grad):
grad = grad.clone() # NEVER change the given grad inplace
# Assumes 1D but can be generalized
for i in range(grad.size(0)):
if should_be_disabled[param_idx][i]:
grad[i] = 0
return grad
return hook
for j, p in enumerate(model.parameters()):
p.register_hook(get_hook(j))
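A note on the design: the per-element loop is easy to read but slow for large tensors. If should_be_disabled[param_idx] is stored as a boolean mask tensor of the same shape as the gradient (an assumption, not part of the original answer), the hook can be vectorized:
def get_hook(param_idx):
    mask = should_be_disabled[param_idx]  # assumed: bool tensor shaped like the gradient
    def hook(grad):
        # Returns a new tensor; the incoming grad is left untouched.
        return grad.masked_fill(mask, 0.0)
    return hook |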
st46653 | Hi guys,
I am trying to train a model on a modified COCO database. While loading data into the dataloaders (images and targets; code at the end of the post), htop shows about 100 processes, each using 60 GB of VIRT and about 40 GB of RES, yet the overall Mem bar shows only 50 GB / 504 GB. How should I interpret this? How can I be sure that I will not use too much memory?
Can you look at my code and check whether I am doing it right?
import torchvision
from torch.utils.data import Dataset, DataLoader

class LoadDataset(Dataset):
def __init__(self):
self.images = []
self.targets = []
img_path, ann_path = (
"path_to_images",
"path_to_annotations_json",
)
coco_ds = torchvision.datasets.CocoDetection(img_path, ann_path)
for i in range(0, len(coco_ds)):
img, ann = coco_ds[i]
for a in ann:
images, targets = collate(
[img.copy(), img.copy()], [[a], [a]], coco_ds.coco
)
for t in targets:
self.targets.append(t)
for image in images:
self.images.append(image)
def __len__(self):
return len(self.images)
def __getitem__(self, idx):
img = self.images[idx]
target = self.targets[idx]
return (
img,
target,
)
and later in the code: …
train_loader = DataLoader(LoadDataset(), batch_size=24, shuffle=True)  # note: instantiate the dataset, not the class |
st46654 | Hi! I was trying to do backward on the first derivative (Jacobian). I observed that memory usage continues to grow if I call y.backward(retain_graph=True, create_graph=True).
I have read this post https://discuss.pytorch.org/t/how-to-free-the-graph-after-create-graph-true/58476/4, where it is said that the graph will be deleted once the reference is deleted.
But I also found this post: https://github.com/pytorch/pytorch/issues/4661, stating that the leakage issue is still open.
I am confused. Could you please help me out? And I can’t use torch.autograd.grad, since my outputs y are vectors, not scalar outputs.
Thanks! |
st46655 | Solved by albanD in post #2
Hi,
The conclusion from the issue you linked is that this is expected behavior mostly (or something we should forbid people from doing).
torch.autograd.grad works for vectors as well. What is the issue you encounter when trying to use it? |
st46656 | Hi,
The conclusion from the issue you linked is that this is expected behavior mostly (or something we should forbid people from doing).
torch.autograd.grad works for vectors as well. What is the issue you encounter when trying to use it?
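For reference, a minimal sketch of torch.autograd.grad with a vector-valued output (grad_outputs plays the role of the vector you would pass to backward):
import torch

x = torch.randn(5, requires_grad=True)
y = x ** 2                                # vector-valued output

(grad_x,) = torch.autograd.grad(
    y, x,
    grad_outputs=torch.ones_like(y),      # weights for the vector-Jacobian product
    create_graph=True,                    # keep the graph for a second backward
)
(grad2_x,) = torch.autograd.grad(grad_x.sum(), x)
print(grad2_x)  # second derivative of x**2 is 2 everywhere |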
st46657 | Ah! I see! So I should stick to torch.autograd.grad, right?
I just figured out how to use torch.autograd.grad for vectors a couple of minutes ago.
Thanks! |
st46658 | Hi, I am facing the same problem, but I want to backward on the first derivative w.r.t. the network parameters. Since both torch.autograd.grad and torch.autograd.functional.jacobian apparently only take Tensor inputs while the network parameters are a tuple of tensors, is there a feasible way to do this? Thanks in advance for any possible help!
This post, Get gradient and Jacobian wrt the parameters, helps get the Jacobian, but I’m trying to backward further on the Jacobian. It would be really nice if PyTorch supported gradients w.r.t. PyTree objects like JAX does. |
st46659 | Hi,
Both these functions take either a single Tensor or a tuple of Tensors as input. So it should work just fine.
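A minimal sketch of differentiating w.r.t. a tuple of parameters (the gradient-norm penalty is just an example of backwarding through the first derivative):
import torch

model = torch.nn.Linear(3, 2)
x = torch.randn(4, 3)
loss = model(x).pow(2).sum()

params = tuple(model.parameters())        # a tuple of Tensors is accepted
grads = torch.autograd.grad(loss, params, create_graph=True)

penalty = sum(g.pow(2).sum() for g in grads)
penalty.backward()                        # backward through the first derivative |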
st46660 | Is it possible to train multiple models on multiple GPUs where each model is trained on a distinct GPU simultaneously?
for example, suppose there are 2 gpus,
model1 = model1.cuda(0)
model2 = model2.cuda(1)
then train these two models simultaneously by the same dataloader. |
st46661 | It should work! You have to make sure the Variables/Tensors are located on the right GPU.
Could you explain a bit more about your use case?
Are you merging the outputs somehow or are the models completely independent from each other? |
st46662 | Hi ptrblck, thanks for your reply. The models are completely independent of each other, but in some training steps they exchange information, so I need to train them simultaneously. BTW, if I want to train all the models simultaneously, how do I write the code? Currently my code looks like the following, but I guess the models are trained sequentially:
model1 = model1.cuda(0)
model2 = model2.cuda(1)
models = [model1, model2]
for (input, label) in data_loader:
for m in models:
m.train()
optimizer.zero_grad()
output = m(input)
loss = criterion(output, label)
loss.backward()
optimizer.step() |
st46663 | I think in your current implementation you would indeed have to wait until the optimization was done on each GPU.
If you just have two models, you could push each input and target tensor to the appropriate GPU and call the forward passes after each other.
Since these calls are performed asynchronously, you could achieve a speedup in this way.
The code should look like this:
input1 = input.to('cuda:0')
input2 = input.to('cuda:1')
# same for label
optimizer1.zero_grad()
optimizer2.zero_grad()
output1 = model1(input1) # should be an async call
output2 = model2(input2)
...
Unfortunately I cannot test it at the moment. Would you run it and check if it’s suitable for your use case?
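A sketch of the rest of the step under these assumptions (two independent optimizers and labels moved like the inputs; this is illustrative only, the ... above was left open in the original answer):
label1 = label.to('cuda:0')
label2 = label.to('cuda:1')

loss1 = criterion(output1, label1)   # computed on GPU 0
loss2 = criterion(output2, label2)   # computed on GPU 1

loss1.backward()
loss2.backward()
optimizer1.step()
optimizer2.step() |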
st46664 | Hi!
I am still interested in the topic. I am very new to PyTorch and currently would like to perform parallel training of different models on different GPUs (i.e. one model per GPU) for hyperparameter search, or simply to get results for different weight initializations. I know there is a lot of documentation pertaining to multiprocessing and existing frameworks for hyperparameter tuning, which I already checked; however, I only have a limited amount of time and am thus on the lookout for the very simplest way to achieve this. It would be extremely helpful, thank you for your attention. |
st46665 | You can look at Horovod, which is developed by Uber. It makes parallel training extremely easy. |
st46666 | Hi Boyu, how did you implement this in the end? I am confused by this problem and would be grateful for your reply. |
st46667 | Hi, in the end I used the mpi4py library to implement this. With MPI, you can assign each rank to train one model on one GPU. MPI also supports communication across ranks, with which you can implement some special operations.
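A minimal sketch of that pattern (launched with e.g. mpirun -np 2 python train.py; build_model is a hypothetical per-rank factory):
from mpi4py import MPI
import torch

comm = MPI.COMM_WORLD
rank = comm.Get_rank()              # one rank per model/GPU

torch.cuda.set_device(rank)         # assumes rank index == GPU index
model = build_model(rank).cuda(rank)

# Train independently; exchange data across ranks when needed, e.g.:
# comm.send(tensor.cpu().numpy(), dest=other_rank) |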