st31668
|
AlphaBetaGamma96:
Also, also, could the type of card be an issue as well? Some cards are GTX 745 cards whereas some are more modern Quadro cards.
Yes, this could play a role in the issue, but it’s hard to tell without a proper error message (“unknown error” is unfortunately not very helpful) and apparently you are unable to see any Xids.
|
st31669
|
Hi. You could try to reboot the remote PC. The problem may arise because the GPU drivers were updated recently without a reboot.
|
st31670
|
It does seem like a bit of a problem! Is there anything else that comes to mind or am I out of luck?
Also, I was wondering if I could ask another question about some errors I get? For some reason I occasionally get an issue with loading my model.
Traceback (most recent call last):
File "~/main.py", line 145, in <module>
state_dict = torch.load(f=model_path_pt, map_location=torch.device(device))
File "~/.local/lib/python3.6/site-packages/torch/serialization.py", line 595, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "~/.local/lib/python3.6/site-packages/torch/serialization.py", line 764, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
EOFError: Ran out of input
Traceback (most recent call last):
File "~/main.py", line 145, in <module>
state_dict = torch.load(f=model_path_pt, map_location=torch.device(device))
File "~/.local/lib/python3.6/site-packages/torch/serialization.py", line 594, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "~/.local/lib/python3.6/site-packages/torch/serialization.py", line 853, in _load
result = unpickler.load()
File "~/.local/lib/python3.6/site-packages/torch/serialization.py", line 845, in persistent_load
load_tensor(data_type, size, key, _maybe_decode_ascii(location))
File "~/.local/lib/python3.6/site-packages/torch/serialization.py", line 833, in load_tensor
storage = zip_file.get_storage_from_record(name, size, dtype).storage()
RuntimeError: [enforce fail at inline_container.cc:145] . PytorchStreamReader failed reading file data/67511648: file read failed
For the first model it seems that the file is just 0 MB in size, is that correct? I only say this from reading this thread on Stack Overflow here. For the second one, I’m not 100% sure what’s wrong. I did read your previous answer here, but I’m saving everything within a dictionary rather than saving the model directly, like this…
torch.save({'epoch': preepoch,
            'model_state_dict': net.state_dict(),
            'optim_state_dict': optim.state_dict(),
            'loss': mean_preloss,
            'chains': sampler.chains}, model_path_pt)
and then loaded with
state_dict = torch.load(f=model_path_pt, map_location=torch.device(device))
start=state_dict['epoch']+1
net.load_state_dict(state_dict['model_state_dict'])
optim.load_state_dict(state_dict['optim_state_dict'])
loss = state_dict['loss']
sampler.chains = state_dict['chains']
Thank you!
Edit: A follow-up question to the PytorchStreamReader error: I save my model each epoch, and each epoch takes around 0.3 s. Is it advisable to save at each epoch, or to save every n-th epoch? Could writing a file every 0.3 s be causing the issue? The error varies a bit: sometimes it’s “failed finding central directory”, “invalid header or archive is corrupted”, or “file read failed”!
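(For reference, a hedged sketch of one way to avoid truncated checkpoint files when saving this often: write to a temporary file first and atomically replace the target, reusing the names from above. The temporary path is hypothetical.)
import os
tmp_path = model_path_pt + '.tmp'
torch.save({'epoch': preepoch,
            'model_state_dict': net.state_dict(),
            'optim_state_dict': optim.state_dict(),
            'loss': mean_preloss,
            'chains': sampler.chains}, tmp_path)
os.replace(tmp_path, model_path_pt)  # atomic rename: a crash mid-save cannot corrupt the old file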
|
st31671
|
You could create a topic in the NVIDIA board following these steps to provide a full log, which might help to isolate the issue further.
|
st31672
|
Please help. I have a simple net which takes several inputs and outputs a single value. I can compute the gradient of the output w.r.t. all the inputs by autograd. Now I want to minimize the norm of this gradient w.r.t. the model parameters… Is it possible? I remember it was possible to do with TensorFlow 1.x, but I am not so sure about PyTorch. The sample code would be something like
x.requires_grad_()
y = Model(x)
y.backward(create_graph=True)  # create_graph=True so that x.grad is itself differentiable
loss = x.grad.square().sum()
loss.backward()  # backpropagates the gradient norm into the model parameters
Thanks!
|
st31673
|
I am using a modified Resnet18, with my own pooling function at the end of the Resnet.
Here is my code:
resnet = resnet18().cuda()  # a modified resnet

class Model():
    def __init__(self, model, pool):
        self.model = model
        self.pool = pool  # my own pool class which has trainable layers
    def forward(self, sample):
        output = self.model(sample)
        output = self.pool(output)
        output = F.normalize(output, p=2, dim=1)
        return output
Now, obviously I need to train not only the resnet part, but also the pool part.
But, when I check:
model = Model(model=resnet, pool= pool)
print(list(model.parameters()))
It gives:
AttributeError: 'Model' object has no attribute 'parameters'
Can anyone help?
|
st31675
|
You would have to derive your custom Model from nn.Module as:
class Model(nn.Module):
    def __init__(self, model, pool):
        super().__init__()
        ...
to make sure all nn.Module methods and attributes are available.
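(A filled-in sketch of this fix, assuming the resnet and pool objects from the question; assigning submodules on an nn.Module registers them, so model.parameters() then includes both.)
import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self, model, pool):
        super().__init__()
        self.model = model  # registered as a submodule
        self.pool = pool    # registered as a submodule
    def forward(self, sample):
        output = self.model(sample)
        output = self.pool(output)
        return F.normalize(output, p=2, dim=1)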
|
st31676
|
I’ve trained a model like the one below for 1000 epochs, but I forgot to flag the desired variable as an output of my model. Is there any way to extract it at test time? I want to visualize it but don’t know what I need to do (unfortunately all the inputs and outputs have the same name, so I don’t know how to find it!!)
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(...)
        self.conv2 = nn.Conv2d(...)
        self.conv3 = nn.Conv2d(...)
    def forward(self, x1):
        x1 = self.conv1(x1)
        x1 = self.conv2(x1)  # <-------------------- the variable that I want to visualize
        x1 = self.conv3(x1)
        return x1
|
st31677
|
Can you try returning two outputs instead?
e.g.,
def forward(self, x1):
    x1 = self.conv1(x1)
    saved = self.conv2(x1)
    x1 = self.conv3(saved)
    return x1, saved
|
st31678
|
No, I cannot train it again; it’s now a pretrained network, and I want to visualize the outputs of its hidden layers.
|
st31679
|
What happens if you define a second model with the additional outputs and just load the weights of the first model? There aren’t any additional parameters introduced here.
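(A hedged sketch of this idea, reusing the Net class from above: a subclass whose forward also returns the intermediate activation. Since no new parameters are introduced, the pretrained state_dict loads directly; the checkpoint path is hypothetical.)
class NetWithTap(Net):
    def forward(self, x1):
        x1 = self.conv1(x1)
        saved = self.conv2(x1)   # the activation to visualize
        x1 = self.conv3(saved)
        return x1, saved

model = NetWithTap()
model.load_state_dict(torch.load('pretrained.pt'))  # same parameters, so this just works
model.eval()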
|
st31680
|
I am trying to train a ViT model modification on the ImageNet dataset from scratch.
I am using 8 Tesla V100 GPUs and it is taking enormously long.
While inspecting the GPUs with nvidia-smi I get:
[screenshot of nvidia-smi output, 719×579]
I am using nn.DataParallel to train it. In my dataloader I am using num_workers = 8 and pin_memory=True of course.
I tried to increase the number of workers up to 16 as advised in Guidelines for assigning num_workers to DataLoader - #4 by YossiB, but it only froze the machine.
Is that normal? The estimated time per epoch is around 9 hours, which I think is too long, especially because I intend to train it for 300 epochs.
|
st31682
|
Obs: while increasing the number of workers from 0 to 8, the training time per epoch reduced from 16h to 6h, but that’s still too long.
I’ve seen some other ImageNet trainings finish in 29 hours.
ResNet-50 takes 29 hours using 8 Tesla P100 GPUs.
Could it be a PyTorch problem?
|
st31683
|
Even with 16 workers there might be a large imbalance between data loading time and GPU time. Do you have some statistics on the proportion of time spent on data in each batch (e.g., like in the classic ImageNet example here)?
|
st31684
|
So you are saying that the bottleneck is mostly loading the data (which is what I suspected, since it works fine for CIFAR100).
I don’t have the statistics but I can implement them.
|
st31685
|
Right, I think a few simple time.time() checks won’t affect performance and are a good sanity check for obvious bottlenecks.
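(A hedged sketch of such checks, in the style of the classic ImageNet example; train_loader and the loop body are placeholders. For exact GPU numbers you would call torch.cuda.synchronize() before reading the clock.)
import time

end = time.time()
for i, (images, target) in enumerate(train_loader):
    data_time = time.time() - end    # time spent waiting on the DataLoader
    # ... forward / backward / optimizer step ...
    batch_time = time.time() - end   # total time for this iteration
    end = time.time()
    if i % 100 == 0:
        print(f'data {data_time:.3f}s / batch {batch_time:.3f}s')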
|
st31686
|
We also recommend using DistributedDataParallel with one process per GPU for the best performance, instead of nn.DataParallel.
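(For reference, a minimal single-node DDP sketch with one process per GPU; MyModel and the data loading are placeholders, and the address/port are arbitrary.)
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def main(rank, world_size):
    dist.init_process_group('nccl', init_method='tcp://127.0.0.1:23456',
                            rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)
    model = MyModel().cuda(rank)          # hypothetical model
    model = DDP(model, device_ids=[rank])
    # each process builds its own DataLoader, typically with a DistributedSampler
    ...

if __name__ == '__main__':
    world_size = torch.cuda.device_count()  # 8 in this case
    mp.spawn(main, args=(world_size,), nprocs=world_size)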
|
st31687
|
Thanks for the reply!
I wonder why Distributed Data Parallel is any better — I am using 8 GPUs in the same machine.
Anyway, any tips/ideas on how I could load ImageNet faster with PyTorch?
Hopefully changing the HDD to an SSD will help.
The loading time is 5.2 s on average and the batch time is 0.8 s =/
@eqy
|
st31688
|
You can see the short note in the docs here for why Distributed Data Parallel can be faster for multi-GPU training.
Yes, switching from HDD to SSD can make a large difference, especially for “random” file I/O, as loading many ImageNet images can look like random reads to storage. If you have sufficient memory, you might consider increasing the prefetch_factor in your DataLoader as well to see if increased buffering helps. However, if your HDD cannot keep up with the data loading speed, it is tricky to fully utilize the GPUs.
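(A hedged sketch of these DataLoader knobs; the dataset and the concrete values are placeholders.)
train_loader = torch.utils.data.DataLoader(
    dataset,
    batch_size=256,
    shuffle=True,
    num_workers=8,            # parallel image decoding
    pin_memory=True,          # faster host-to-GPU copies
    prefetch_factor=4,        # batches buffered per worker (the default is 2)
    persistent_workers=True,  # keep workers alive between epochs
)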
|
st31689
|
After training the model, when I test it in batches using shuffle=False I get a good score; when I use the same model and the same test data with shuffle=True I get a bad score. I am confused why this is so.
dataset = pd.read_csv('Churn_Modelling.csv')
I shuffle the data before splitting data into train/test
from sklearn.utils import shuffle
data = shuffle(data)
data.reset_index(inplace=True, drop=True)
X = data[['Age','Tenure','Geography','Balance','EstimatedSalary','Gender','NumOfProducts','CreditScore','HasCrCard','IsActiveMember']]
Y = data['Exited']
I am embedding following categorical variables
categorical_columns = ['Geography', 'Gender', 'HasCrCard', 'IsActiveMember']
for col in categorical_columns:
X.loc[:,col] = X.loc[:,col].astype('category')
X['Geography'] = LabelEncoder().fit_transform(X['Geography'])
X['Gender'] = LabelEncoder().fit_transform(X['Gender'])
X['HasCrCard'] = LabelEncoder().fit_transform(X['HasCrCard'])
X['IsActiveMember'] = LabelEncoder().fit_transform(X['IsActiveMember'])
After the label encoding above these columns were converted to integers, hence re-converting them to category
for col in categorical_columns:
X.loc[:,col] = X.loc[:,col].astype('category')
X.dtypes
Get embedding categorical columns
embedded_cols = {n: len(col.cat.categories) for n,col in X[categorical_columns].items()}
embedded_cols
{'Geography': 3, 'Gender': 2, 'HasCrCard': 2, 'IsActiveMember': 2}
Splitting train/test data
X_train, X_val, y_train, y_val = train_test_split(X, Y, test_size=0.20, random_state=0)
The following class returns the categorical and numerical columns separately; the reason is that I want to embed the categorical columns separately and then combine them with the numerical features while training
class ShelterOutcomeDataset(Dataset):
    def __init__(self, X, Y, embedded_col_names):
        Xdata = X.copy()
        self.X1 = Xdata.loc[:, embedded_col_names].copy().values.astype(np.int64)          # categorical columns
        self.X2 = Xdata.drop(columns=embedded_col_names).copy().values.astype(np.float32)  # numerical columns
        self.y = Y.copy().values.astype(np.int64)
    def __len__(self):
        return len(self.y)
    def __getitem__(self, idx):
        return self.X1[idx], self.X2[idx], self.y[idx]
Size of embedding columns
embedding_sizes = [(n_categories, min(50, (n_categories+1)//2)) for _,n_categories in embedded_cols.items()]
embedding_sizes
[(3, 2), (2, 1), (2, 1), (2, 1)]
train_ds = ShelterOutcomeDataset(X_train,y_train ,categorical_columns)
embedded_col_names = embedded_cols.keys()
len(X.columns) - len(embedded_cols) #number of numerical columns
6
Model
class testNet(nn.Module):
    def __init__(self, emb_dims, n_cont):
        super().__init__()
        self.embeddings = nn.ModuleList([nn.Embedding(categories, size) for categories, size in emb_dims])
        no_of_embs = sum(e.embedding_dim for e in self.embeddings)  # length of all embeddings combined
        self.n_emb, self.n_cont = no_of_embs, n_cont
        self.lin1 = nn.Linear(self.n_emb + self.n_cont, 200)
        self.lin2 = nn.Linear(200, 100)
        self.lin3 = nn.Linear(100, 50)
        self.lin4 = nn.Linear(50, 2)
        self.bn1 = nn.BatchNorm1d(self.n_cont)
        self.bn2 = nn.BatchNorm1d(200)
        self.bn3 = nn.BatchNorm1d(100)
        self.bn4 = nn.BatchNorm1d(50)
        self.emb_drop = nn.Dropout(0.4)
        self.drops = nn.Dropout()
    def forward(self, x_cat, x_cont):
        x = [e(x_cat[:, i]) for i, e in enumerate(self.embeddings)]
        x = torch.cat(x, 1)
        x = self.emb_drop(x)
        x2 = self.bn1(x_cont)
        x = torch.cat([x, x2], 1)
        x = F.relu(self.lin1(x))
        x = self.drops(x)
        x = self.bn2(x)
        x = F.relu(self.lin2(x))
        x = self.drops(x)
        x = self.bn3(x)
        x = F.relu(self.lin3(x))
        x = self.drops(x)
        x = self.bn4(x)
        x = F.relu(self.lin4(x))
        return x
model = testNet(embedding_sizes,6)
print(model)
testNet(
(embeddings): ModuleList(
(0): Embedding(3, 2)
(1): Embedding(2, 1)
(2): Embedding(2, 1)
(3): Embedding(2, 1)
)
(lin1): Linear(in_features=9, out_features=200, bias=True)
(lin2): Linear(in_features=200, out_features=100, bias=True)
(lin3): Linear(in_features=100, out_features=50, bias=True)
(lin4): Linear(in_features=50, out_features=2, bias=True)
(bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(bn2): BatchNorm1d(200, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(bn3): BatchNorm1d(100, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(bn4): BatchNorm1d(50, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(emb_drop): Dropout(p=0.4, inplace=False)
(drops): Dropout(p=0.5, inplace=False)
)
Training
def get_optimizer(model, lr=0.001, wd=0.0):
    parameters = filter(lambda p: p.requires_grad, model.parameters())
    optim = torch_optim.Adam(parameters, lr=lr, weight_decay=wd)
    return optim

def init_weights(m):
    if type(m) == nn.Linear:
        nn.init.xavier_normal_(m.weight)

criterion = nn.CrossEntropyLoss()

def train_model(model, optim, train_dl):
    model.train()
    total = 0
    sum_loss = 0
    output = 0
    for cat, cont, y in train_dl:
        batch = y.shape[0]
        output = model(cat, cont)
        _, pred = torch.max(output, 1)  # pred is returned below but was never assigned
        loss = criterion(output, y)
        optim.zero_grad()
        loss.backward()
        optim.step()
        total += batch
        sum_loss += batch * loss.item()
    return sum_loss / total, pred

def train_loop(model, epochs, lr=0.01, wd=0.0):
    optim = get_optimizer(model, lr=lr, wd=wd)
    for epoch in range(epochs):
        loss, pred = train_model(model, optim, train_dl)
        if (epoch + 1) % 50 == 0:
            print(f'epoch : {epoch+1},training loss : {loss}')

sampler = class_imbalance_sampler(y_train)
batch_size = 1000
train_dl = DataLoader(train_ds, batch_size=batch_size, shuffle=True)
model = testNet(embedding_sizes, 6)
model.apply(init_weights)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
train_loop(model, epochs=200, lr=0.001, wd=0.00001)
Validation with shuffle=False — scores are below
valid_ds = ShelterOutcomeDataset(X_val,y_val , categorical_columns)
batch_size = 100
valid_dl = DataLoader(valid_ds, batch_size=batch_size,shuffle=False)<-------
valid_dl = DeviceDataLoader(valid_dl, device)
preds = []
with torch.no_grad():
    for cat, cont, y in valid_dl:
        model.eval()<-----------------------------------------------------
        output = model(cat, cont)
        _, pred = torch.max(output, 1)
        preds.append(pred.cpu().detach().numpy())
final_preds = [item for sublist in preds for item in sublist]
print(classification_report(y_val, np.array(final_preds)))
precision recall f1-score support
0 0.86 0.95 0.90 1610
1 0.63 0.37 0.47 390
accuracy 0.83 2000
macro avg 0.74 0.66 0.69 2000
weighted avg 0.82 0.83 0.82 2000
Validation with shuffle=True — scores are below
valid_ds = ShelterOutcomeDataset(X_val, y_val, categorical_columns)
batch_size = 100
valid_dl = DataLoader(valid_ds, batch_size=batch_size,shuffle=True)<--------
valid_dl = DeviceDataLoader(valid_dl, device)
preds = []
with torch.no_grad():
    for cat, cont, y in valid_dl:
        model.eval()<----------------------------------------------------
        output = model(cat, cont)
        _, pred = torch.max(output, 1)
        preds.append(pred.cpu().detach().numpy())
final_preds = [item for sublist in preds for item in sublist]
print(classification_report(y_val, np.array(final_preds)))
precision recall f1-score support
0 0.79 0.87 0.83 1576
1 0.23 0.14 0.17 424
accuracy 0.72 2000
macro avg 0.51 0.51 0.50 2000
weighted avg 0.67 0.72 0.69 2000
|
st31691
|
I cannot reproduce the issue after calling model.eval(), as neither are the running stats updated (as mentioned here), nor is the output showing any difference when shuffling the inputs, using this code snippet:
embedding_sizes = [(3, 2), (2, 1), (2, 1), (2, 1)]
model = testNet(embedding_sizes, 6)
print(model)
model.eval()
for name, module in model.named_modules():
    if 'bn' in name:
        print(module.running_mean)
        print(module.running_var)
# first pass
x_cat = torch.randint(0, 2, (16, 4))
x_cont = torch.randn(16, 6)
out_ref = model(x_cat, x_cont)
# running stats are not updated
for name, module in model.named_modules():
    if 'bn' in name:
        print(module.running_mean)
        print(module.running_var)
# shuffle data
idx = torch.randperm(x_cat.size(0))
out = model(x_cat[idx], x_cont[idx])
# compare to reference output
print((out - out_ref[idx]).abs().max())
> tensor(0., grad_fn=<MaxBackward1>)
# check running stats
# running stats are not updated
for name, module in model.named_modules():
    if 'bn' in name:
        print(module.running_mean)
        print(module.running_var)
|
st31692
|
@ptrblck here in my case the running stats are not being updated either; please see the code below. Not sure why shuffle=True and shuffle=False give different scores.
|
st31693
|
@ptrblck thanks for pointing me in the right direction. Using your code there were no changes in the running stats in my code either, and everything was fine except shuffling valid_dl: since I was shuffling valid_dl, the targets were shuffled as well, and I was then comparing the predictions with the non-shuffled y_val from train_test_split.
I am still not clear on what the running stats are for. What do they really tell, in layman’s terms?
|
st31694
|
The running stats of batchnorm layers are updated during training using the training batch stats and the formula mentioned in the docs, which can then be used during evaluation and makes the inference independent of the batch size. The BatchNorm paper explains this in more detail.
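(Concretely, the per-batch update for both the running mean and the running variance is, with the default momentum=0.1:)
running_stat = (1 - momentum) * running_stat + momentum * batch_stat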
|
st31695
|
Hi,
I have almost 300,000 records with a mix of categorical and numerical features. Most categorical variables with cardinality greater than 2 are embedded into 50% of their unique values. I defined the layers and neurons arbitrarily as follows for a binary classification problem (1 or 0); with the following layers and neurons I get a loss (cross entropy) of 0.52656052014033 at the 100th epoch.
My questions are:
Is there anything wrong with my code?
Is there any technique, like grid search, which I can use to choose the optimal number of hidden layers and the number of neurons in each hidden layer?
testNet(
(embeddings): ModuleList(
(0): Embedding(115, 8)
(1): Embedding(119, 10)
(2): Embedding(113, 7)
(3): Embedding(120, 10)
(4): Embedding(184, 42)
(5): Embedding(116, 8)
(6): Embedding(151, 26)
(7): Embedding(161, 31)
(8): Embedding(119, 10)
(9): Embedding(399, 50)
)
(lin1): Linear(in_features=213, out_features=90, bias=True)
(lin2): Linear(in_features=90, out_features=85, bias=True)
(lin3): Linear(in_features=85, out_features=80, bias=True)
(lin4): Linear(in_features=80, out_features=75, bias=True)
(lin5): Linear(in_features=75, out_features=70, bias=True)
(lin6): Linear(in_features=70, out_features=60, bias=True)
(lin7): Linear(in_features=60, out_features=50, bias=True)
(lin8): Linear(in_features=50, out_features=40, bias=True)
(lin9): Linear(in_features=40, out_features=30, bias=True)
(lin10): Linear(in_features=30, out_features=20, bias=True)
(lin11): Linear(in_features=20, out_features=10, bias=True)
(lin12): Linear(in_features=10, out_features=5, bias=True)
(lin13): Linear(in_features=5, out_features=2, bias=True)
(bn1): BatchNorm1d(11, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(bn2): BatchNorm1d(90, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(bn3): BatchNorm1d(85, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(bn4): BatchNorm1d(80, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(bn5): BatchNorm1d(75, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(bn6): BatchNorm1d(70, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(bn7): BatchNorm1d(60, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(bn8): BatchNorm1d(50, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(bn9): BatchNorm1d(40, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(bn10): BatchNorm1d(30, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(bn11): BatchNorm1d(20, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(bn12): BatchNorm1d(10, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(bn13): BatchNorm1d(5, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(emb_drop): Dropout(p=0.6, inplace=False)
(drops): Dropout(p=0.3, inplace=False)
)
epoch : 10,training loss : 0.6801770955721538
epoch : 20,training loss : 0.5797973778088887
epoch : 30,training loss : 0.548956808312734
epoch : 40,training loss : 0.5404320967992147
epoch : 50,training loss : 0.5338565409978231
epoch : 60,training loss : 0.5300635928471883
epoch : 70,training loss : 0.529638019879659
epoch : 80,training loss : 0.5281008475780488
epoch : 90,training loss : 0.525910607846578
epoch : 100,training loss : 0.52656052014033
Following are my code
class testNet(nn.Module):
    def __init__(self, emb_dims, n_cont):
        super().__init__()
        self.embeddings = nn.ModuleList([nn.Embedding(categories + 100, size) for categories, size in emb_dims])
        no_of_embs = sum(e.embedding_dim for e in self.embeddings)  # length of all embeddings combined
        self.n_emb, self.n_cont = no_of_embs, n_cont
        self.lin1 = nn.Linear(self.n_emb + self.n_cont, 90)
        self.lin2 = nn.Linear(90, 85)
        self.lin3 = nn.Linear(85, 80)
        self.lin4 = nn.Linear(80, 75)
        self.lin5 = nn.Linear(75, 70)
        self.lin6 = nn.Linear(70, 60)
        self.lin7 = nn.Linear(60, 50)
        self.lin8 = nn.Linear(50, 40)
        self.lin9 = nn.Linear(40, 30)
        self.lin10 = nn.Linear(30, 20)
        self.lin11 = nn.Linear(20, 10)
        self.lin12 = nn.Linear(10, 5)
        self.lin13 = nn.Linear(5, 2)
        self.bn1 = nn.BatchNorm1d(self.n_cont)
        self.bn2 = nn.BatchNorm1d(90)
        self.bn3 = nn.BatchNorm1d(85)
        self.bn4 = nn.BatchNorm1d(80)
        self.bn5 = nn.BatchNorm1d(75)
        self.bn6 = nn.BatchNorm1d(70)
        self.bn7 = nn.BatchNorm1d(60)
        self.bn8 = nn.BatchNorm1d(50)
        self.bn9 = nn.BatchNorm1d(40)
        self.bn10 = nn.BatchNorm1d(30)
        self.bn11 = nn.BatchNorm1d(20)
        self.bn12 = nn.BatchNorm1d(10)
        self.bn13 = nn.BatchNorm1d(5)
        self.emb_drop = nn.Dropout(0.6)
        self.drops = nn.Dropout(0.3)
    def forward(self, x_cat, x_cont):
        x = [e(x_cat[:, i]) for i, e in enumerate(self.embeddings)]
        x = torch.cat(x, 1)
        x = self.emb_drop(x)
        # batch normalization over continuous features
        x2 = self.bn1(x_cont)
        # concatenate embeddings and continuous features; dim=1 concatenates columns
        x = torch.cat([x, x2], 1)
        #x = F.relu(self.lin1(x))
        m = nn.LeakyReLU(0.01)
        x = m(self.lin1(x))
        x = self.drops(x)
        x = self.bn2(x)
        #x = F.relu(self.lin2(x))
        x = m(self.lin2(x))
        x = self.drops(x)
        x = self.bn3(x)
        x = self.lin3(x)
        x = self.drops(x)
        x = self.bn4(x)
        x = m(self.lin4(x))
        x = self.drops(x)
        x = self.bn5(x)
        x = m(self.lin5(x))
        x = self.drops(x)
        x = self.bn6(x)
        x = m(self.lin6(x))
        x = self.drops(x)
        x = self.bn7(x)
        x = m(self.lin7(x))
        x = self.drops(x)
        x = self.bn8(x)
        x = m(self.lin8(x))
        x = self.drops(x)
        x = self.bn9(x)
        x = m(self.lin9(x))
        x = self.drops(x)
        x = self.bn10(x)
        x = m(self.lin10(x))
        x = self.drops(x)
        x = self.bn11(x)
        x = m(self.lin11(x))
        x = self.drops(x)
        x = self.bn12(x)
        x = m(self.lin12(x))
        x = self.drops(x)
        x = self.bn13(x)
        return x
Training function
def get_optimizer(model, lr=0.001, wd=0.0):
    parameters = filter(lambda p: p.requires_grad, model.parameters())
    optim = torch_optim.Adam(parameters, lr=lr, weight_decay=wd)
    return optim

def init_weights(m):
    if type(m) == nn.Linear:
        nn.init.xavier_uniform_(m.weight)
        if m.bias is not None:
            torch.nn.init.zeros_(m.bias)

criterion = nn.CrossEntropyLoss()
to_device(criterion, device)

def train_model(model, optim, train_dl):
    model.train()
    total = 0
    sum_loss = 0
    output = 0
    for cat, cont, y in train_dl:
        batch = y.shape[0]
        output = model(cat, cont)
        _, pred = torch.max(output, 1)
        loss = criterion(output, y)
        optim.zero_grad()
        loss.backward()
        optim.step()
        total += batch
        sum_loss += batch * loss.item()
    return sum_loss / total, pred

def train_loop(model, epochs, lr, wd=0.0):
    optim = get_optimizer(model, lr=lr, wd=wd)
    for epoch in range(epochs):
        loss, pred = train_model(model, optim, train_dl)
        if (epoch + 1) % 10 == 0:
            print(f'epoch : {epoch+1},training loss : {loss}')

def class_imbalance_sampler(labels):
    class_count = np.array([len(np.where(labels.cpu().detach().numpy() == t)[0]) for t in np.unique(labels.cpu().detach().numpy())])
    print(class_count)
    weight = 1. / class_count
    samples_weight = np.array([weight[t] for t in labels.cpu().detach().numpy()])
    samples_weight = torch.from_numpy(samples_weight)
    sampler = WeightedRandomSampler(samples_weight.type('torch.DoubleTensor'), len(samples_weight))
    return sampler

y = torch.from_numpy(y_tr.to_numpy(np.int)).to(device)
sampler = class_imbalance_sampler(y)
batch_size = 512 * 2
train_dl = DataLoader(train_ds, batch_size=batch_size, sampler=sampler)
train_dl = DeviceDataLoader(train_dl, device)
model = testNet(embedding_sizes, 11)
model.apply(init_weights)
to_device(model, device)
print(model)
from collections import defaultdict
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
train_loop(model, epochs=100, lr=0.001)
|
st31697
|
There are a few things that are strange with the current model architecture. Usually the embedding size of the intermediate layers is not monotonically decreasing in MLPs. You might consider projecting the input to a larger dimension first (e.g., 1024) and using a shallower network (e.g., just 3-4 layers) to begin with. Additionally, models beyond a certain depth typically have residual connections (e.g., ResNets and Transformers), so the lack of residual connections may be an issue with so many linear layers.
As for searching for the model architecture, the issue is that this is usually very expensive, so randomized approaches are preferred over techniques like grid search. A reasonable heuristic with limited computational resources is to start with a much simpler model (e.g., fewer layers, fewer bells and whistles such as dropout) and to grow the model in width or depth until things stop improving.
|
st31698
|
@eqy thanks for your suggestion, and it sounds great to use ResNet. I tried to find an implementation, but I have a deep fully connected network, not a CNN; most use cases I found apply ResNet to CNNs, not to fully connected layers (I assume mine is a fully connected network). Could you please share some naive implementation using fully connected layers?
As you said Transformers — are they a type of ResNet as well?
Another question: what does “the embedding size of the intermediate layers is not monotonically decreasing in MLPs” mean, please?
Thanks
|
st31699
|
So rather than using the ResNet architecture directly, you can still take inspiration from the architecture by adding things like residual connections. If you take a look at the BasicBlock in ResNet, you can try a variation of the architecture but with fully connected layers rather than conv layers (the important part is the use of the identity to add the original input back to the output, which is the residual connection).
Transformers are not ResNets, but they are just another example of an architecture that uses a residual connection.
Typically the size of the model embedding is grown from the input layers, but in this case all of your layer sizes have a dimension smaller than the input. So I just mean that usually this dimension is much larger in the MLP.
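(A hedged sketch of a BasicBlock-style residual block built from fully connected layers instead of convolutions; dim is the feature width, kept constant so the identity can be added back.)
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualMLPBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.lin1 = nn.Linear(dim, dim)
        self.bn1 = nn.BatchNorm1d(dim)
        self.lin2 = nn.Linear(dim, dim)
        self.bn2 = nn.BatchNorm1d(dim)
    def forward(self, x):
        identity = x
        out = F.relu(self.bn1(self.lin1(x)))
        out = self.bn2(self.lin2(out))
        return F.relu(out + identity)  # the residual connection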
|
st31700
|
@eqy thanks for your support — I will use your suggestion and implement it.
Quick question: can I use a CNN for tabular data, i.e. non-image data?
|
st31701
|
If there is some kind of sequence to the tabular data (e.g., a time series) then CNNs can work, but usually CNNs are only used when there is some kind of locality (spatial or temporal).
|
st31702
|
@eqy suppose I have some synthetically generated data but am not sure whether it has some kind of locality — is there any statistical measure which can be used on these data to find whether it has (spatial/temporal) locality or not?
|
st31703
|
This is my code:
import torch
import torch.nn as nn
class AlexNet(nn.Module):
    def __init__(self, __output_size):
        super(AlexNet, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(in_channels=1, out_channels=96, kernel_size=11, stride=4),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2)
        )
        self.layer2 = nn.Sequential(
            nn.Conv2d(in_channels=96, out_channels=256, kernel_size=5, stride=1, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2)
        )
        self.layer3 = nn.Sequential(
            nn.Conv2d(in_channels=256, out_channels=384, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True)
        )
        self.layer4 = nn.Sequential(
            nn.Conv2d(in_channels=384, out_channels=384, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True)
        )
        self.layer5 = nn.Sequential(
            nn.Conv2d(in_channels=384, out_channels=256, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=1)
        )
        self.layer6 = nn.Sequential(
            nn.Dropout(p=0.5)
        )
        #self.avgpool = nn.AdaptiveAvgPool2d((6, 6))
        self.layer7 = nn.Sequential(
            nn.Linear(in_features=4096, out_features=4096, bias=True),
            nn.ReLU()
        )
        self.layer8 = nn.Sequential(
            nn.Dropout(p=0.5)
        )
        self.layer9 = nn.Sequential(
            nn.Linear(in_features=4096, out_features=4096, bias=True),
            nn.ReLU()
        )
        self.layer10 = nn.Sequential(
            nn.Linear(in_features=4096, out_features=1000, bias=True),
            nn.Softmax()
        )
    def forward(self, x):
        _output = self.layer1(x)
        _output = self.layer2(_output)
        _output = self.layer3(_output)
        _output = self.layer4(_output)
        _output = self.layer5(_output)
        _output = self.layer6(_output)
        # _output = self.avgpool(_output)
        _output = torch.flatten(_output, 1)
        _output = self.layer7(_output)
        _output = self.layer8(_output)
        _output = self.layer9(_output)
        _output = self.layer10(_output)
        return _output
This is my training part:
def train(_train_loader, _model, _num_epochs, _device, _criterion, _optimizer):
    try:
        _total_steps = len(_train_loader)
        for _epochs in range(_num_epochs):
            for i, (_images, _labels) in enumerate(_train_loader):
                _images = _images.to(_device)
                _labels = _labels.to(_device)
                # forward pass
                _outputs = _model(_images)
                _loss = _criterion(_outputs, _labels)
                # backward pass and optimization
                _optimizer.zero_grad()
                _loss.backward()
                _optimizer.step()
                if (i + 1) % 100 == 0:
                    print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format(_epochs + 1, _num_epochs, i + 1, _total_steps, _loss.item()))
    except Exception as error:
        print("An error occurred while training")
        print(error)
        raise error
Initially I was getting:
RuntimeError: Given groups=1, weight of size [96, 3, 11, 11], expected input[100, 1, 28, 28] to have 3 channels, but got 1 channels instead
Then I changed the number of input channels to 1 and got the above-mentioned error. Please help me resolve this error!
|
st31704
|
The error is raised if the spatial size of the input (and thus of an intermediate activation) is too small for the model architecture.
It seems you are using an input of 28x28 pixels, which would be too small for an AlexNet-like model (which was originally designed for 224x224 inputs), so you would have to either resize the inputs to a larger size or modify the model (e.g. by removing layers).
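(A hedged sketch of the resize option via torchvision, assuming the dataset yields PIL images:)
from torchvision import transforms

transform = transforms.Compose([
    transforms.Resize(224),  # upsample 28x28 -> 224x224
    transforms.ToTensor(),
])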
|
st31705
|
I’m trying to unpickle a pytorch tensor, but pickling it back yields different results across runs:
>>> import pickle
>>> tensor1 = pickle.load(f) # I cannot reproduce the issue with some minimal manually-created tensor, only with this specific file
>>> tensor2 = pickle.load(f)
>>> pickled_tensor1 = pickle.dumps(tensor1)
>>> pickled_tensor2 = pickle.dumps(tensor2)
>>> pickled_tensor1 == pickled_tensor2
False
Below are the values of pickled_tensor1 and pickled_tensor2 respectively:
b'\x80\x04\x95\x98\x01\x00\x00\x00\x00\x00\x00\x8c\x0ctorch._utils\x94\x8c\x12_rebuild_tensor_v2\x94\x93\x94(\x8c\rtorch.storage\x94\x8c\x10_load_from_bytes\x94\x93\x94B\r\x01\x00\x00\x80\x02\x8a\nl\xfc\x9cF\xf9 j\xa8P\x19.\x80\x02M\xe9\x03.\x80\x02}q\x00(X\x10\x00\x00\x00protocol_versionq\x01M\xe9\x03X\r\x00\x00\x00little_endianq\x02\x88X\n\x00\x00\x00type_sizesq\x03}q\x04(X\x05\x00\x00\x00shortq\x05K\x02X\x03\x00\x00\x00intq\x06K\x04X\x04\x00\x00\x00longq\x07K\x04uu.\x80\x02(X\x07\x00\x00\x00storageq\x00ctorch\nFloatStorage\nq\x01X\x0f\x00\x00\x00140382183041680q\x02X\x03\x00\x00\x00cpuq\x03K\x04Ntq\x04Q.\x80\x02]q\x00X\x0f\x00\x00\x00140382183041680q\x01a.\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00?\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00?\x94\x85\x94R\x94K\x00K\x02K\x02\x86\x94K\x02K\x01\x86\x94\x89\x8c\x0bcollections\x94\x8c\x0bOrderedDict\x94\x93\x94)R\x94t\x94R\x94.'
b'\x80\x04\x95\x98\x01\x00\x00\x00\x00\x00\x00\x8c\x0ctorch._utils\x94\x8c\x12_rebuild_tensor_v2\x94\x93\x94(\x8c\rtorch.storage\x94\x8c\x10_load_from_bytes\x94\x93\x94B\r\x01\x00\x00\x80\x02\x8a\nl\xfc\x9cF\xf9 j\xa8P\x19.\x80\x02M\xe9\x03.\x80\x02}q\x00(X\x10\x00\x00\x00protocol_versionq\x01M\xe9\x03X\r\x00\x00\x00little_endianq\x02\x88X\n\x00\x00\x00type_sizesq\x03}q\x04(X\x05\x00\x00\x00shortq\x05K\x02X\x03\x00\x00\x00intq\x06K\x04X\x04\x00\x00\x00longq\x07K\x04uu.\x80\x02(X\x07\x00\x00\x00storageq\x00ctorch\nFloatStorage\nq\x01X\x0f\x00\x00\x00140382172016592q\x02X\x03\x00\x00\x00cpuq\x03K\x04Ntq\x04Q.\x80\x02]q\x00X\x0f\x00\x00\x00140382172016592q\x01a.\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00?\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00?\x94\x85\x94R\x94K\x00K\x02K\x02\x86\x94K\x02K\x01\x86\x94\x89\x8c\x0bcollections\x94\x8c\x0bOrderedDict\x94\x93\x94)R\x94t\x94R\x94.'
My question is why is it happening and how can I prevent this?
I am using Python 3.8; PyTorch 1.7.0.
Cheers, Hlib.
|
st31706
|
Hello!
I would like to implement a slightly different version of conv2d and use it inside my neural network.
I would like to take into account additional binary data during the convolution. For the sake of clarity, let’s consider the first layer of my network. From the input grayscale image, I compute a binary mask where the object is white and the background is black. Then, for the convolution, I consider a fixed-size window filter moving equally along the image and the mask. If the center of the considered window belongs to the object (i.e. is white), then only the pixels of the grayscale image which are white in the mask for the considered window should contribute to the filtering. The same reasoning applies to pixels belonging to the background.
Here is my code for my custom layer :
class MyConv2d(nn.Module):
    def __init__(self, n_channels, out_channels, kernel_size, dilation=1, padding=0, stride=1):
        super(MyConv2d, self).__init__()
        self.kernel_size = (kernel_size, kernel_size)
        self.kernel_size_number = kernel_size * kernel_size
        self.out_channels = out_channels
        self.dilation = (dilation, dilation)
        self.padding = (padding, padding)
        self.stride = (stride, stride)
        self.n_channels = n_channels
        self.weights = nn.Parameter(torch.Tensor(self.out_channels, self.n_channels, self.kernel_size_number)).data.uniform_(0, 1)
    def forward(self, x, mask):
        width = self.calculateNewWidth(x)
        height = self.calculateNewHeight(x)
        result = torch.zeros(
            [x.shape[0] * self.out_channels, width, height], dtype=torch.float32, device=device
        )
        windows_x = self.calculateWindows(x)
        windows_mask = self.calculateWindows(mask)
        windows_mask[windows_mask < 1] = -1
        windows_mask_centers = windows_mask[:, :, windows_mask.size()[2] // 2].view(windows_mask.size()[0], windows_mask.size()[1], 1)
        windows_mask = windows_mask * windows_mask_centers
        windows_mask[windows_mask < 1] = 0
        windows_x_seg = windows_x * windows_mask
        for channel in range(x.shape[1]):
            for i_convNumber in range(self.out_channels):
                xx = torch.matmul(windows_x_seg[channel], self.weights[i_convNumber][channel])
                xx = xx.view(-1, width, height)
                result[i_convNumber * xx.shape[0] : (i_convNumber + 1) * xx.shape[0]] += xx
        result = result.view(x.shape[0], self.out_channels, width, height)
        return result
    def calculateWindows(self, x):
        windows = F.unfold(
            x, kernel_size=self.kernel_size, padding=self.padding, dilation=self.dilation, stride=self.stride
        )
        windows = windows.transpose(1, 2).contiguous().view(-1, x.shape[1], self.kernel_size_number)
        windows = windows.transpose(0, 1)
        return windows
    def calculateNewWidth(self, x):
        return (
            (x.shape[2] + 2 * self.padding[0] - self.dilation[0] * (self.kernel_size[0] - 1) - 1)
            // self.stride[0]
        ) + 1
    def calculateNewHeight(self, x):
        return (
            (x.shape[3] + 2 * self.padding[1] - self.dilation[1] * (self.kernel_size[1] - 1) - 1)
            // self.stride[1]
        ) + 1
Then, I would like to call MyConv2d from my network;
Here is a snippet of my network:
class MyNetwork(nn.Module):
    def __init__(self):
        super(MyNetwork, self).__init__()
        self.conv = MyConv2d(1, 64, 5, stride=2, padding=0)
        # etc
    def forward(self, x, mask):
        x = F.relu(self.conv(x, mask))
        # etc
        return x
First of all, I have a question regarding the execution speed. MyConv2d is much slower than conv2d (because of the double for loop I guess). Is there a way to speed it up?
Secondly, I have an issue at the very first iteration when I train my network on the GPU. Indeed, once the input has gone through my first custom layer, I get back NaN values in the output. Do you have any idea why this happens? Is there something wrong with my implementation of MyConv2d?
Last, I recently have a weird error that came out of the blue when I train my network:
copy_if failed to synchronize: cudaErrorIllegalAddress: an illegal memory access was encountered
This error occurs in MyConv2d when it runs into:
windows_mask[windows_mask < 1] = -1
Can you please help me fix this?
Many thanks in advance!
|
st31707
|
flora:
First of all, I have a question regarding the execution speed. MyConv2d is much slower than conv2d (because of the double for loop I guess). Is there a way to speed it up?
You could try to remove the loops and unfold the data, which could use more memory but might also be faster. Alternatively, you could also write a custom C++/CUDA extension, which could also yield a speedup.
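(For the shapes used above — windows_x_seg of shape (C, B·L, K) after calculateWindows, and self.weights of shape (O, C, K) — the double loop can in principle be collapsed into a single einsum; a hedged sketch, with the reshape back to the original layout still to be adapted:)
# result_flat[o, n] = sum over c and k of windows_x_seg[c, n, k] * weights[o, c, k]
result_flat = torch.einsum('cnk,ock->on', windows_x_seg, self.weights)
result = result_flat.view(self.out_channels, -1, width, height).transpose(0, 1)  # (B, O, W, H)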
flora:
Secondly, I have an issue at the very first iteration when I train my network on gpu. Indeed, once the input got through my first custom layer, I get back Nan values in the output. Do you have any idea why this happens? Is there something wrong with my implementation of MyConv2d?
You could add debug print statements and check which part of your custom layer returns the NaN values to narrow it down further.
flora:
Last, I recently have a weird error that came out of the blue when I train my network:
If you are using an older PyTorch version, please update to the latest stable, since indexing errors should raise RuntimeErrors, not fail with illegal memory accesses.
|
st31708
|
Dear ptrblck,
Thank you for your reply.
If you are using an older PyTorch version, please update to the latest stable, since indexing errors should raise RuntimeErrors, not fail with illegal memory accesses.
I’ve tried to update PyTorch to the latest version but I encountered difficulties. I tried several things:
First of all I updated my cudatoolkit from 10.0 to 10.2
Then I ran the command : conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
Nothing happened (I kept my old version of pytorch - 1.7.1)
Then I tried : pip install --upgrade torch torchvision torchaudio
Here pytorch was updated to 1.8.1 but I could not launch my code on jupyter notebook because the kernel crashed at the very beginning when importing packages.
Then I decided to remove pytorch entirely and install it back. I ran the following lines :
conda uninstall pytorch
pip uninstall torch
pip uninstall torch
conda uninstall torchvision
pip uninstall torchvision
pip uninstall torchvision
conda uninstall torchaudio
pip uninstall torchaudio
pip uninstall torchaudio
conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
But the version I got back was the previous one (1.7.1)
Then I wanted to force pytorch to update to 1.8.1. So I ran conda install pytorch=1.8.1 torchvision torchaudio cudatoolkit=10.2 -c pytorch
But I got several package conflicts (openssl, mkl_fft, python_abi, msgpack-python, m2w64-gcc-libs, tornado, vs2008_runtime, libssh2, prompt_toolkit, vs2015_runtime, clyent, pywin32, sqlite, blas, statsmodels, ibllvm9, jupyter_client, babel, zipp, libpng, hypothesis, ipython, xz, configparser, enum34, traitlets, sphinx, numpy, nbformat, jpeg, contextlib2, pycrypto, python-simplegeneric, matplotlib-inline, liblapacke, jupyterlab_widgets, snowballstemmer, backcall, urllib3, ipython_genutils, jupyterlab_server, notebook, rtree, pysocks, qtconsole, bkcharts, conda, pyopenss, toolz, matplotlib-base, html5lib, mkl-service, zlib, spyder-kernels, imagesize, qtawesome, pandocfilters, ptyprocess, python-language-server, jedi, anaconda-project, … ).
I am using python 3.8.3 and anaconda 4.10.1
Could you help me with this?
|
st31709
|
Your local CUDA toolkit won’t be used if you install the conda binaries or pip wheels; you would only need to install the NVIDIA driver.
Since you are apparently running into environment conflicts, you could try to create a new conda env and install the latest PyTorch version there.
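(For example, something along these lines — the env name is arbitrary, and the install command is the same one used above:)
conda create -n torch-latest python=3.8
conda activate torch-latest
conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch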
|
st31710
|
Thank you for your answer,
Ok, I will try to set up a new conda env and install PyTorch. I will let you know.
|
st31711
|
Dear Patrick,
I have created a new conda environment and have been able to install the latest version of PyTorch.
That said, I still had an error related to illegal memory access. I found in another post that it could be related to having variables both on the GPU and the CPU. So I checked the variable (i.e. weights) created in MyConv2d, and it appeared that it was created on the CPU by default. To correct this, I added the following lines at the top of the forward function:
if x.is_cuda:
self.weights = self.weights.cuda()
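(For reference, a more idiomatic alternative is to register the tensor as a parameter in __init__, so that model.to(device) moves it automatically and it shows up in model.parameters(); a sketch:)
self.weights = nn.Parameter(torch.empty(self.out_channels, self.n_channels, self.kernel_size_number).uniform_(0, 1))
# note: nn.Parameter(torch.Tensor(...)).data.uniform_(0, 1) assigns the plain tensor
# returned by .data.uniform_(), not the Parameter itself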
I also had a problem of exploding gradients at the very beginning of training, so I normalized the output of MyConv2d to check if it improves things. For now this issue seems solved, but I still have a problem with training, this time with memory. The training goes well for a big number of epochs (~100), but at some point I get this error:
RuntimeError: CUDA out of memory. Tried to allocate 656.00 MiB (GPU 0; 5.00 GiB total capacity; 345.42 MiB already allocated; 590.35 MiB free; 1.10 GiB reserved in total by PyTorch)
And I don’t understand why it occurs at this point; memory should be freed at the end of each epoch, right? Then I have another question: is there a tool (like the MATLAB profile viewer) to check the memory consumption of PyTorch code?
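(On the tooling question: PyTorch exposes its CUDA memory counters directly; a minimal sketch of checking them, e.g. once per epoch:)
print(torch.cuda.memory_allocated() / 1024**2, 'MiB allocated')
print(torch.cuda.memory_reserved() / 1024**2, 'MiB reserved by the caching allocator')
print(torch.cuda.memory_summary())  # detailed breakdown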
I post here the last version of my custom convolutional layer, if it helps.
class MyConv2d(nn.Module):
    def __init__(self, n_channels, out_channels, kernel_size, dilation=1, padding=0, stride=1):
        super(MyConv2d, self).__init__()
        self.kernel_size = (kernel_size, kernel_size)
        self.kernel_size_number = kernel_size * kernel_size
        self.out_channels = out_channels
        self.dilation = (dilation, dilation)
        self.padding = (padding, padding)
        self.stride = (stride, stride)
        self.n_channels = n_channels
        self.weights = nn.Parameter(torch.Tensor(self.out_channels, self.n_channels, self.kernel_size_number)).data.uniform_(0, 1)
    def forward(self, x, mask):
        if x.is_cuda:
            self.weights = self.weights.cuda()
        width = self.calculateNewWidth(x)
        height = self.calculateNewHeight(x)
        result = torch.zeros(
            [x.shape[0] * self.out_channels, width, height], dtype=torch.float32, device=device
        )
        result_mask = torch.zeros(
            [x.shape[0] * self.out_channels, width, height], dtype=torch.float32, device=device
        )
        windows_x = self.calculateWindows(x)
        windows_mask = self.calculateWindows(mask)
        windows_mask[windows_mask < 1] = -1
        windows_mask_centers = windows_mask[:, :, windows_mask.size()[2] // 2].view(windows_mask.size()[0], windows_mask.size()[1], 1)
        windows_mask = windows_mask * windows_mask_centers
        windows_mask[windows_mask < 1] = 0
        windows_x_seg = windows_x * windows_mask
        # compute the result of x with mask-aware convolution
        for i_convNumber in range(self.out_channels):
            for channel in range(x.shape[1]):
                xx = torch.matmul(windows_x_seg[channel], self.weights[i_convNumber][channel].view(-1, 1))
                xx = xx.view(-1, width, height) / torch.sum(windows_mask[channel], 1).view(-1, width, height)
                result[i_convNumber * xx.shape[0] : (i_convNumber + 1) * xx.shape[0]] += xx
            result[i_convNumber * xx.shape[0] : (i_convNumber + 1) * xx.shape[0]] /= x.shape[1]
        result = result.view(x.shape[0], self.out_channels, width, height)
        # compute the result of mask with mask-aware convolution
        windows_mask_seg = self.calculateWindows(mask) * windows_mask
        for i_convNumber in range(self.out_channels):
            for channel in range(x.shape[1]):
                xx = torch.matmul(windows_mask_seg[channel], self.weights[i_convNumber][channel].view(-1, 1))
                xx = xx.view(-1, width, height)
                result_mask[i_convNumber * xx.shape[0] : (i_convNumber + 1) * xx.shape[0]] += xx
        result_mask = result_mask.view(mask.shape[0], self.out_channels, width, height)
        result_mask = torch.clamp(result_mask, min=0, max=1)
        return result, result_mask
    def calculateWindows(self, x):
        windows = F.unfold(
            x, kernel_size=self.kernel_size, padding=self.padding, dilation=self.dilation, stride=self.stride
        )
        windows = windows.transpose(1, 2).contiguous().view(-1, x.shape[1], self.kernel_size_number)
        windows = windows.transpose(0, 1)
        return windows
    def calculateNewWidth(self, x):
        return (
            (x.shape[2] + 2 * self.padding[0] - self.dilation[0] * (self.kernel_size[0] - 1) - 1)
            // self.stride[0]
        ) + 1
    def calculateNewHeight(self, x):
        return (
            (x.shape[3] + 2 * self.padding[1] - self.dilation[1] * (self.kernel_size[1] - 1) - 1)
            // self.stride[1]
        ) + 1
Many thanks in advance
|
st31712
|
Typically a gradual OOM after many epochs can be the result of something in the training loop unwittingly holding on to previous data that no longer needs to be stored (e.g., the loss). Can you share the training loop of the model?
|
st31713
|
Dear eqy,
Thank you for your reply.
Here is my training loop :
# transfer the model to the gpu
model.to(device)
# define optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.000001)
# define loss function
distortion = nn.MSELoss().cuda()
beta = 0.00001
n_epochs = 400
for epoch in range(0, n_epochs):
    running_loss = 0.0
    for i_batch, data in enumerate(dataloader):
        batch_images = data[0].to(device).float()
        batch_masks = data[1].to(device).float()
        [decoded_images, x_quantized] = model(batch_images, batch_masks, 1, True)
        optimizer.zero_grad()
        loss_dist = distortion(decoded_images, batch_images)
        loss_bit = entropy_dist(x_quantized, model.phi, model.var)
        loss = beta * loss_dist + loss_bit
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    running_loss = running_loss / len(dataloader)
I don’t know if I should prevent batch_masks from being stored for backpropagation (batch_masks.detach()). batch_masks does not appear in the loss but is an input to my custom convolutional layers and is used to modify the main input batch_images.
|
st31714
|
Hello everyone.
In my code I’ll need to perform the outer product for both unbatched and batched data (tensors with dimension T1(Ndim) and T2(B, Ndim) respectively).
For now I was using:
prod = T1[:, None] * T1 # Unbatched
prod = T2[:, :, None] * T2[:, None, :] # Batched
But, is there a way to have an unified expression for both computations?
Thanks!
|
st31716
|
Well I found the solution. This works for both cases:
prod = T[..., None, :] * T[..., None]
where T can be either T1 or T2.
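(A quick shape check of the unified expression:)
T1 = torch.randn(5)     # unbatched, Ndim = 5
T2 = torch.randn(8, 5)  # batched, B = 8
print((T1[..., None, :] * T1[..., None]).shape)  # torch.Size([5, 5])
print((T2[..., None, :] * T2[..., None]).shape)  # torch.Size([8, 5, 5])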
|
st31717
|
Hello, I’m trying to understand and implement checkpointing for a BERT model that I have (because I cannot run the code on the GPU, and I already tried lowering the batch size as much as possible, but that didn’t help). Here is the model class:
class BertClassifier(nn.Module):
    def __init__(self, freeze_bert=False):
        super(BertClassifier, self).__init__()
        self.bert = BertModel.from_pretrained('bert-base-multilingual-uncased')
        self.lstm = nn.LSTM(768, 256, batch_first=True, bidirectional=True)
        self.linear = nn.Linear(256 * 2, 2)
        if freeze_bert:
            for param in self.bert.parameters():
                param.requires_grad = False
    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        sequence_output = outputs[0]
        sequence_output, _ = self.lstm(sequence_output)
        linear_output = self.linear(sequence_output[:, -1])
        return linear_output
And below is me trying to use the checkpointing from
(pytorch_memonger/Checkpointing_for_PyTorch_models.ipynb at master · prigoyal/pytorch_memonger · GitHub)
class BertClassifier(nn.Module):
    def __init__(self, freeze_bert=False):
        super(BertClassifier, self).__init__()
        self.bert = BertModel.from_pretrained('bert-base-multilingual-uncased')
        self.lstm = nn.LSTM(768, 256, batch_first=True, bidirectional=True)
        self.linear = nn.Linear(256 * 2, 2)
        if freeze_bert:
            for param in self.bert.parameters():
                param.requires_grad = False
    def run_function(self, start, end):
        def custom_forward(*inputs):
            output, hidden = self.lstm(inputs[0][start:(end + 1)], (inputs[1], inputs[2]))
            return output, hidden[0], hidden[1]
        return custom_forward
    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        sequence_output = outputs[0]
        # checkpoint self.lstm() computation
        output = []
        segment_size = len(modules) // segments
        for start in range(0, segment_size * (segments - 1), segment_size):
            end = start + segment_size - 1
            out = checkpoint.checkpoint(self.run_function(start, end), sequence_output, hidden[0], hidden[1])
            output.append(out[0])
            hidden = (out[1], out[2])
        out = checkpoint.checkpoint(self.run_function(end + 1, len(modules) - 1), sequence_output, hidden[0], hidden[1])
        output.append(out[0])
        hidden = (out[1], out[2])
        output = torch.cat(output, 0)
        hidden = (out[1], out[2])
        linear_output = self.linear(sequence_output[:, -1])
        return linear_output
I have a few questions on the above:
What are segments and modules? I saw modules declared in the “Checkpointing sequential models” section, but mine isn’t a sequential model; how can I declare the modules in this case? Also, segments was set to 2 — what does that 2 mean?
Will the above checkpointing technique work in my case? If not, how can I properly implement this?
Does the checkpointing only have to be done in the class declaration as above, or will there also have to be modifications to the overall training code?
Will the saving and loading of the state_dict stay the same? (e.g. torch.save(bert_classifier.state_dict(), ‘finetuned_model.pt’))
Is it possible to train the model on an English corpus and then test it on another language? Or is it possible to make the model language-independent?
Thanks in advance!
|
st31718
|
I’m trying to build a simple MNIST Model and this is what I’ve built -
training_loader = DataLoader(training_dataset, 128, shuffle=True)
validation_loader = DataLoader(validation_dataset, 128)

class mnistmodel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = nn.Linear(784, 10)
        self.linear2 = nn.Linear(10, 5)
        self.linear3 = nn.Linear(5, 10)
    def forward(self, xb):
        xb.reshape(-1, 784)
        predicted = F.relu(self.linear1(xb))
        predicted.reshape(-1, 10)
        predicted = F.relu(self.linear2(predicted))
        predicted.reshape(-1, 5)
        predicted = self.linear3(predicted)
        return predicted
    def training_step(self, batch):
        images, labels = batch
        predicted = self(images)
        loss = F.cross_entropy(predicted, labels)
        return loss
    def validation_step(self, batch):
        images, labels = batch
        predicted = self(images)
        loss = F.cross_entropy(predicted, labels)
        _, preds = torch.max(predicted, dim=1)
        accuracy = torch.tensor(torch.sum(preds == labels).item() / len(preds))
        return {'validation_loss': loss, 'validation_accuracy': accuracy}
    def validation_epoch_end(self, outputs):
        batch_losses = [x['validation_loss'] for x in outputs]
        epoch_loss = torch.stack(batch_losses).mean()
        batch_accs = [x['validation_acc'] for x in outputs]
        epoch_acc = torch.stack(batch_accs).mean()
        return {'validation_loss': epoch_loss.item(), 'validation_accuracy': epoch_acc.item()}
    def epoch_end(self, epoch, result):
        print(f"Epoch [{epoch}], val_loss: {result['validation_loss']}, val_acc: {result['validation_acc']}")

model = mnistmodel()

def fit_mnist(epochs, lr, model, training_loader, validation_loader, optimizer_function=torch.optim.SGD):
    optimizer = optimizer_function(model.parameters(), lr)
    history = []
    for epoch in range(epochs):
        for batch in training_loader:
            loss = model.training_step(batch)
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
        result = evaluate(model, validation_loader)
        model.epoch_end(epoch, result)
        history.append(result)
    return history

history1 = fit_mnist(5, 0.001, model, training_loader, validation_loader)
I get the following error -
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-162-48e4fe0cc2d9> in <module>()
----> 1 history1 = fit_mnist(5, 0.001, model, training_loader, validation_loader)
6 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in linear(input, weight, bias)
1751 if has_torch_function_variadic(input, weight):
1752 return handle_torch_function(linear, (input, weight), input, weight, bias=bias)
-> 1753 return torch._C._nn.linear(input, weight, bias)
1754
1755
RuntimeError: mat1 and mat2 shapes cannot be multiplied (3584x28 and 784x10)
I’m new to PyTorch, but as far as I understand the shapes seem to be fine. What is going wrong here?
|
st31720
|
reshape is not an in-place operation, so you need to assign the return value to another object:
xb = xb.reshape(-1, 784)
Generally, I would recommend this approach:
xb = xb.view(xb.size(0), -1)
to keep the batch dimension and to get a better error message in case the feature dimension is incorrect.
Also, you wouldn’t need to reshape the activations, as the linear layers should already return the expected output.
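(A quick shape check of the recommended form, e.g. for a batch of MNIST images:)
xb = torch.randn(128, 1, 28, 28)
xb = xb.view(xb.size(0), -1)
print(xb.shape)  # torch.Size([128, 784])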
|
st31721
|
Can anyone give me the formula for weighted NLL? I want to know where the weights get multiplied. I have read https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html but am not able to understand what $x_{n,y_n}$ is, basically. I want to write the formula for weighted NLL for C classes. Is the following correct (consider that I have correct labels and incorrect labels):
|
st31722
|
x corresponds to the logits given as the model output.
Here is a comparison between a manual approach and the weighted criterion.
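(For reference, a hedged sketch of such a comparison with reduction='mean'; note the built-in divides by the sum of the selected per-sample weights, not by N:)
import torch
import torch.nn.functional as F

logits = torch.randn(8, 4)                   # N=8 samples, C=4 classes
target = torch.randint(0, 4, (8,))
weight = torch.tensor([1.0, 2.0, 0.5, 1.0])  # per-class weights
log_probs = F.log_softmax(logits, dim=1)
w = weight[target]                           # w[n] = weight[y_n]
manual = -(w * log_probs[torch.arange(8), target]).sum() / w.sum()
builtin = F.nll_loss(log_probs, target, weight=weight)
print(torch.allclose(manual, builtin))  # True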
|
st31723
|
Hi. Thanks for the answer. Can you please tell me whether the formula I have written above for reduction==‘mean’ is correct or not?
|
st31724
|
I don’t know how y and y' are defined, e.g. is y a one-hot encoded target tensor acting as a mask (in which case indexing with the class indices directly would also work)?
If so, I guess the class index in w would be missing, but you could also compare it to the formula used in nn.CrossEntropyLoss to make sure it’s equivalent.
|
st31725
|
I have the following model:
class Model(nn.Module):
    def __init__(self, dim_in, lambda_=.3):
        super(Model, self).__init__()
        '''The linear transform layer'''
        self.lt = nn.Linear(in_features=dim_in, out_features=10, bias=True)
        '''The encoder'''
        self.feature_extractor = \
            nn.Sequential(
                #nn.Linear(in_features = 10, out_features = 30, bias=True),
                nn.BatchNorm1d(10),
                nn.ReLU(),
                nn.Linear(in_features=10, out_features=20, bias=True),
                nn.BatchNorm1d(20),
                nn.ReLU(),
                nn.Linear(in_features=20, out_features=10, bias=True),
                nn.BatchNorm1d(10),
                nn.ReLU()
            )
    def forward(self, x):
        transformed = self.lt(x)
        return self.feature_extractor(transformed)
I want to force the weight vectors of the linear transformation layer to be uncorrelated. I tried to include the dot products among the vectors in the cost function (as a proxy for correlations among them):
params=list(model.lt.parameters())[0]
dotprod=torch.tensordot(params, params, dims=([1],[1])).abs().fill_diagonal_(0).sum()/2
loss = other_losses + dotprod * weight
But this is not working, even with really high weight. The weight vectors from the lt layer are still highly correlated. I have also tried to remove other_losses, but no effect. What am I doing wrong?
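For what it's worth, one thing to check: a raw dot product also penalizes large vector norms, not just correlation. A cosine-similarity penalty normalizes the rows first. A sketch, reusing the names from the question above:
import torch.nn.functional as F
params = model.lt.weight                       # (10, dim_in)
normed = F.normalize(params, dim=1)            # unit-norm rows
cos = normed @ normed.t()                      # pairwise cosine similarities
penalty = cos.abs().fill_diagonal_(0).sum() / 2
loss = other_losses + weight * penalty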
|
st31728
|
I have batch data and want to apply a dot product to it. W is a trainable parameter.
How do I compute the dot product between the batch data and the weights?
hid_dim = 32
data = torch.randn(10, 2, 3, hid_dim)
data = data.view(10, 2*3, hid_dim)
W = torch.randn(hid_dim) # assume trainable parameters via nn.Parameter
result = torch.bmm(data, W).squeeze() # error, want (N, 6)
result = result.view(10, 2, 3) #
Update
This may look good.
hid_dim = 32
data = torch.randn(10, 2, 3, hid_dim)
data = data.view(10, 2*3, hid_dim)
W = torch.randn(hid_dim, 1) # assume trainable parameters via nn.Parameter
W = W.unsqueeze(0).expand(10, hid_dim, 1)
result = torch.bmm(data, W).squeeze() # (10, 6)
result = result.view(10, 2, 3)
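An equivalent and arguably simpler formulation (a sketch): matmul broadcasts the weight across the batch, so no manual expand is needed.
result = torch.matmul(data, W).squeeze(-1)  # (10, 2*3, hid_dim) @ (hid_dim, 1) -> (10, 6)
result = result.view(10, 2, 3)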
|
st31729
|
First of all, everything should be wrapped into Variable for W to be trainable. Secondly, W has shape (32,), which is not multiply-able with tensors of shape (2, 3). So I assume that W is of shape (3, 2).
Then, you can use torch.bmm(data, W.unsqueeze(0).expand(10, 3, 2)). You probably don't need the unsqueeze, but I don't have access to pytorch right now, so you can check yourself.
|
st31730
|
@SimonW
Thank you. I updated my post. How about my code in case of using 1 row W.
|
st31731
|
It still doesn’t make sense bmm (10, 2, 3) and (10, 32, 1). What exactly should be multiplied with each matrix?
|
st31732
|
bmm of (10, 2*3, hid_dim) and (10, hid_dim, 1), not (10, 2, 3) and (10, 32, 1). Sorry for the confusion.
|
st31733
|
Oh I see, sorry I missed the hid_dim. Yeah, I think your code should work. Are you still seeing errors?
|
st31734
|
I have tensors of size A: 32 x 4 x 1 and B: 4 x 2. I want my output tensor to be of shape 32 x 2 x 1
Can you explain, how can I multiply A and B?
|
st31735
|
This should work:
a = torch.randn(32, 4, 1)
b = torch.randn(4, 2)
c = torch.matmul(b.unsqueeze(0).permute(0, 2, 1), a)
print(c.shape)
> torch.Size([32, 2, 1])
|
st31736
|
Hi, I want to convert a batched dense edge adjacency matrix of size (B, N, N) to a batched sparse edge adjacency matrix of size (2, M), in which B denotes the batch size, N denotes the maximum number of nodes per graph, and M denotes the number of edges in one batch.
I could only find one function for this purpose in the package torch_geometric.utils, named dense_to_sparse. However, the source code shows that this function does not work for batched data. My solution is to iterate over the batch, apply this function to each graph, and aggregate the results to get the batched result. I wonder whether this iteration would hurt performance, and whether there is a better way to achieve the same purpose (see the sketch below).
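For reference, a minimal sketch (not a torch_geometric API) of doing the conversion in one shot, offsetting each graph's node indices by b * N the same way PyG batching stacks graphs into one big disconnected graph:
import torch
def batched_dense_to_sparse(adj):  # adj: (B, N, N)
    B, N, _ = adj.shape
    batch, row, col = adj.nonzero(as_tuple=True)
    # shift the node ids of graph b by b * N so all graphs share one index space
    return torch.stack([batch * N + row, batch * N + col], dim=0)  # (2, M)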
The other question is that when I convert a batched sparse edge adjacency matrix of size (2, M) with edge attributes of size (M, F), in which F denotes the feature dimension of each edge, to a batched dense edge adjacency matrix using the function to_dense_adj provided by torch_geometric.utils, it results in a tensor of size (B, N, N, F), but I couldn't find a function for converting such a tensor back. Is there any convenient way to convert such a tensor back to a sparse edge adjacency matrix of size (2, M) with edge attributes of size (M, F)?
|
st31737
|
For anyone who has the same problem, here is the solution.
torch_geometric discussion on github 149
|
st31738
|
I’m currently working on a CNN project using resnet18. I replaced the fully connected layers as per my requirements, but before doing this I set requires_grad to False for all layers in the resnet18 model. Now what I want to do is train my model for a certain number of epochs, overfit it to a task, and then unfreeze the gradients only for the convolution part. How do I do it?
Thank You
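One possible approach (a sketch, assuming the standard torchvision resnet18 attribute names, where the replaced head is model.fc): flip requires_grad back on for everything except the new head once the warm-up epochs are done.
for name, param in model.named_parameters():
    if not name.startswith('fc'):  # unfreeze everything except the replaced head
        param.requires_grad = True
# re-create the optimizer so it only holds the currently trainable tensors
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9)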
|
st31739
|
My current training bottleneck has been identified as data transfer to the GPU. I noticed that when using num_workers>0 I seem to hit a wall where the GPU transfer rate approaches that of non-pinned data transfer, which is far below the rate of pinned data transfer. This is based on the benchmark example found here: Slow CPU<=>GPU transfer
import torch
import time
#import os
#os.environ["CUDA_VISIBLE_DEVICES"] = '0'
rank = 0
gpu_device = torch.device("cuda:"+str(rank))
Batch = 512
N = 25
C = 50
W = 500
n_samples = 50
w1 = [torch.randn(Batch, N, C, W, device=torch.device("cpu")) for x in range(n_samples)]
w1_pinned = [torch.nn.Parameter(x).pin_memory() for x in w1]
w1_not_pinned = [torch.nn.Parameter(x) for x in w1]
x1 = torch.randn(Batch, N, C, W, dtype=torch.float32, device=gpu_device)
#Run a Pinned Memory Transfer Test
torch.cuda.synchronize()
t_start = time.time()
for i in range(n_samples):
x1.copy_(w1_pinned[i].data, non_blocking=True)
torch.cuda.synchronize()
t_end = time.time()
t_time = (t_end - t_start)*1000
t_size = n_samples*(Batch*N*C*W)*4/1024/1024
t_bw = t_size/(t_end-t_start)
print('Pinned Test')
print('size of transfer ', t_size, 'MB')
print('time taken by transfer ', t_time, 'mSec')
print('Effective bandwidth ', t_bw, 'MBps (Pinned)')
#Run a Non-Pinned Memory Transfer Test
torch.cuda.synchronize()
t_start = time.time()
for i in range(n_samples):
x1.copy_(w1_not_pinned[i].data, non_blocking=True)
torch.cuda.synchronize()
t_end = time.time()
t_time = (t_end - t_start)*1000
t_size = n_samples*(Batch*N*C*W)*4/1024/1024
t_bw = t_size/(t_end-t_start)
print('Not Pinned Test')
print('size of transfer ', t_size, 'MB')
print('time taken by transfer ', t_time, 'mSec')
print('Effective bandwidth ', t_bw, 'MBps (Not Pinned)')
I get the following consistent results for V100:
Pinned Test
size of transfer 61035.15625 MB
time taken by transfer 5160.215616226196 mSec
Effective bandwidth 11828.024406204298 MBps (Pinned)
Not Pinned Test
size of transfer 61035.15625 MB
time taken by transfer 29578.11951637268 mSec
Effective bandwidth 2063.5238902261713 MBps (Not Pinned)
Now if I throw in a simple DataLoader to accomplish this transfer (as you would in a training loop):
from torch.utils.data import DataLoader
class pytorch_dataset(torch.utils.data.Dataset):
def __init__(self, samples):
self.samples = samples
def __len__(self):
return len(self.samples)
def __getitem__(self, item):
return self.samples[item]
def DataLoader_Test(torch_loader, num_workers):
torch.cuda.synchronize()
t_start = time.time()
for x in torch_loader:
sample = x.to('cuda', non_blocking=True)
torch.cuda.synchronize()
t_end = time.time()
t_time = (t_end - t_start)*1000
t_size = n_samples*(Batch*N*C*W)*4/1024/1024
t_bw = t_size/(t_end-t_start)
print('DataLoader Test: ', num_workers, ' workers')
print('size of transfer ', t_size, 'MB')
print('time taken by transfer ', t_time, 'mSec')
print('Effective bandwidth ', t_bw, 'MBps (DataLoader:',num_workers,' workers)')
num_workers=0
torch_loader = DataLoader(pytorch_dataset(w1),
batch_size=None,
num_workers=num_workers,
pin_memory=True)
DataLoader_Test(torch_loader, num_workers)
num_workers=1
torch_loader = DataLoader(pytorch_dataset(w1),
batch_size=None,
num_workers=num_workers,
pin_memory=True)
DataLoader_Test(torch_loader, num_workers)
This is again the results for V100:
DataLoader Test: 0 workers
size of transfer 61035.15625 MB
time taken by transfer 5572.557687759399 mSec
Effective bandwidth 10952.80832786513 MBps (DataLoader: 0 workers)
DataLoader Test: 1 workers
size of transfer 61035.15625 MB
time taken by transfer 36907.78851509094 mSec
Effective bandwidth 1653.7202229021066 MBps (DataLoader: 1 workers)
Now, as the number of samples is increased, the DataLoader overhead diminishes and the transfer rate approaches that of the benchmarks above. (More workers just bind things up, as they are all accessing the same data in memory.)
I have yet to look under the hood, but I assume that when num_workers > 0 the results are passed through a multiprocessing queue. Does this queue retain data pinning?
If I check that the returned tensor is pinned via: x.is_pinned(), I get True in either case. But it seems strange that I can’t seem to achieve more than the non-pinned transfer rate.
|
st31740
|
I verified this is not a data-pinning issue, so it is OK to close this thread based on the title.
If the GPU is not used in the DataLoader test, the transfer rate for a single worker is still the same.
def DataLoader_Test(torch_loader, num_workers):
torch.cuda.synchronize()
t_start = time.time()
for x in torch_loader:
sample = x
#sample = x.to('cuda', non_blocking=True)
torch.cuda.synchronize()
t_end = time.time()
t_time = (t_end - t_start)*1000
t_size = n_samples*(Batch*N*C*W)*4/1024/1024
t_bw = t_size/(t_end-t_start)
print('DataLoader Test: ', num_workers, ' workers')
print('size of transfer ', t_size, 'MB')
print('time taken by transfer ', t_time, 'mSec')
print('Effective bandwidth ', t_bw, 'MBps (DataLoader:',num_workers,' workers)')
Result:
DataLoader Test: 0 workers
size of transfer 244140.625 MB
time taken by transfer 8440.999269485474 mSec
Effective bandwidth 28923.18992166928 MBps (DataLoader: 0 workers)
DataLoader Test: 1 workers
size of transfer 244140.625 MB
time taken by transfer 127765.62309265137 mSec
Effective bandwidth 1910.847527608873 MBps (DataLoader: 1 workers)
I did look at the DataLoader construction: it loads one queue and then passes the data through a second queue when pinning is enabled. It must be the queue throughput and pinning overhead that limit performance.
If the pin_memory is set to False, and again bypassing the GPU:
num_workers=0
torch_loader = DataLoader(pytorch_dataset(w1),
batch_size=None,
num_workers=num_workers,
pin_memory=False)
DataLoader_Test(torch_loader, num_workers)
num_workers=1
torch_loader = DataLoader(pytorch_dataset(w1),
batch_size=None,
num_workers=num_workers,
pin_memory=False)
DataLoader_Test(torch_loader, num_workers)
The results show bandwidth well above the pinned benchmark; the single-worker instance is passing through a single queue, but not the second pinning queue:
DataLoader Test: 0 workers
size of transfer 244140.625 MB
time taken by transfer 5.549430847167969 mSec
Effective bandwidth 43993813.36999484 MBps (DataLoader: 0 workers)
DataLoader Test: 1 workers
size of transfer 244140.625 MB
time taken by transfer 14370.611906051636 mSec
Effective bandwidth 16988.881656263326 MBps (DataLoader: 1 workers)
|
st31741
|
I want to create a custom loss function for multi-label classification. The idea is to weigh the positive and negative labels differently. For this, I am making use of this custom code implementation.
class WeightedBCEWithLogitLoss(nn.Module):
def __init__(self, pos_weight, neg_weight):
super(WeightedBCEWithLogitLoss, self).__init__()
self.register_buffer('neg_weight', neg_weight)
self.register_buffer('pos_weight', pos_weight)
def forward(self, input, target):
assert input.shape == target.shape, "The loss function received invalid input shapes"
y_hat = torch.sigmoid(input + 1e-8)
loss = -1.0 * (self.pos_weight * target * torch.log(y_hat + 1e-6) + self.neg_weight * (1 - target) * torch.log(1 - y_hat + 1e-6))
# Account for 0 times inf which leads to nan
loss[torch.isnan(loss)] = 0
# We average across each of the extra attribute dimensions to generalize it
loss = loss.mean(dim=1)
# We use mean reduction for our task
return loss.mean()
I started getting nan values, which I realized happened because of 0 times inf multiplication. I handled it as shown above. Next, I again saw inf as the error value and corrected it by adding 1e-6 inside the log (I tried 1e-8 but that still gave an inf value).
It would be great if someone can take a look and suggest further improvements and rectify any more bugs visible here.
|
st31742
|
Hi Chinmay!
chinmay5:
I want to create a custom loss function for multi-label classification. The idea is to weigh the positive and negative labels differently.
Do be aware that pytorch’s BCEWithLogitsLoss supports a
pos_weight constructor argument that will do what you want.
So unless this is a learning exercise, you should simply use
BCEWithLogitsLoss.
y_hat = torch.sigmoid(input + 1e-8)
The 1e-8 doesn’t do anything useful here. sigmoid() is very
well behaved (and equal to 0.5) when its argument is equal to
zero. So there’s no need or benefit in trying to move the argument
a little bit away from zero. (Furthermore, -1.e-8 is a perfectly
valid logit and argument to your loss function and your “fix” just
moves it to zero – not that anything bad happens at zero.)
loss = -1.0 * (self.pos_weight * target * torch.log(y_hat + 1e-6) + self.neg_weight * (1 - target) * torch.log(1 - y_hat + 1e-6))
Here you apply log() to sigmoid(). This is a source of numerical
instability. You should use the log-sum-exp 2 “trick” to compute
log (sigmoid()). (This is what pytorch’s BCEWithLogitsLoss
does internally.)
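For illustration, a stable variant of the loss above: torch.nn.functional.logsigmoid computes log(sigmoid(x)) using this trick, and log(1 - sigmoid(x)) = logsigmoid(-x). A sketch:
import torch.nn.functional as F
def weighted_bce_with_logits(input, target, pos_weight, neg_weight):
    # both terms stay in log-space, avoiding the sigmoid-then-log instability
    loss = -(pos_weight * target * F.logsigmoid(input)
             + neg_weight * (1 - target) * F.logsigmoid(-input))
    return loss.mean(dim=1).mean()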
I started getting nan values
…
Next, I again saw getting inf
With the code you posted, I don’t see why you would be getting
nans or infs. The 1.e-6 that you add to your log() functions
should protect against that.
Best.
K. Frank
|
st31743
|
Hi,
I posted a query here on github with the code so it is neat to view; apologies if this ends up being a duplicate.
After training the model, when I test it in batches using shuffle=False I get a good score, but when I use the same model and the same test records with shuffle=True I get a bad score. I am confused why that is.
Shuffling True on test data decreasing score 1
|
st31744
|
Hi.
I tried to look at the C code and at the code referenced in the docs, but still didn't figure it out.
Generally, BatchNorm aggregates the mean and the variance of the samples it sees in train mode and then, in eval mode, just uses them. My question: is the aggregation, i.e. the update of the mean and the variance with what was just seen in the current forward iteration, done in the forward pass, or, as is more conventional, in the backward pass, i.e. do the internal statistics include the current iteration's statistics only after backward() and step() are called?
Thank you in advance for clarification.
|
st31745
|
Solved by ptrblck in post #2
|
st31746
|
The running stats are updated during the forward pass as seen here:
bn = nn.BatchNorm2d(3)
x = torch.randn(16, 3, 24, 24)
print(bn.running_mean)
> tensor([0., 0., 0.])
print(bn.running_var)
> tensor([1., 1., 1.])
out = bn(x)
print(bn.running_mean)
> tensor([ 0.0002, -0.0007, 0.0006])
print(bn.running_var)
> tensor([0.9991, 1.0022, 0.9988])
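As a quick sanity check of the update rule: with the default momentum=0.1 and the buffers initialized to zeros/ones, the running mean after one batch is just 0.1 times the per-channel batch mean.
print(torch.allclose(bn.running_mean, 0.1 * x.mean(dim=(0, 2, 3))))
> True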
|
st31747
|
Hey Guys,
I am using multiple nn.BatchNorm1d() layers. I noticed that the activations for the same input change slightly when I include the input in different batches, even when I am using model.eval().
So I have input A and I feed it to the model in the same batch as inputs B and C. The activation of A will differ from the activation of A when I feed it through the network together with B, C, D, E.
Is this due to the BatchNorm? I feel like model.eval() should prevent this behavior, but maybe I messed up somewhere.
Is there an option to prevent this behavior?
|
st31748
|
How large are the relative and absolute errors?
Note that you might run into the limited floating point precision e.g. due to different batch sizes, different algorithms etc. You could try to use float64 and see if the error would be lower (and if you would really need it).
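A quick way to measure both, with hypothetical names for the single input A and its batched companions:
out_single = model(A.unsqueeze(0))[0]
out_batched = model(torch.stack([A, B, C]))[0]
abs_err = (out_single - out_batched).abs().max()
rel_err = abs_err / out_batched.abs().max()
print(abs_err.item(), rel_err.item())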
|
st31749
|
Why is track_running_stats set to True in eval? This may lead to performance degradation of a pretrained model as well, and in my opinion False should be the default behaviour in eval.
And what is the recommended way to set track_running_stats to False? Currently I am just using model.apply(fn).
In fact, if a model with track_running_stats is deployed, isn't it a very vulnerable model? Suppose it is provided to a user; he may feed it gibberish data to screw up the running stats.
Something like this happened to me, although instead of gibberish I had a batch size of 1, and running_mean and running_var changed pretty significantly, reducing the performance of my model somewhat drastically!
|
st31750
|
Solved by ptrblck in post #2
|
st31751
|
track_running_stats is used to initialize the running estimates as well as to check if they should be updated in training (line of code 195).
The running estimates won’t be updated in eval:
bn = nn.BatchNorm2d(3)
for _ in range(10):
x = torch.randn(10, 3, 24, 24)
out = bn(x)
print(bn.running_mean)
print(bn.running_var)
> tensor(1.00000e-03 *
[-0.7753, 0.7027, -1.4181])
tensor([ 1.0015, 1.0021, 0.9947])
bn.eval()
for _ in range(10):
x = torch.randn(10, 3, 24, 24)
out = bn(x)
print(bn.running_mean)
print(bn.running_var)
> tensor(1.00000e-03 *
[-0.7753, 0.7027, -1.4181])
tensor([ 1.0015, 1.0021, 0.9947])
|
st31752
|
Ohhh, sorry, my bad. I rechecked my script: I forgot to put model.eval() in one place and that led to this screw-up.
Thanks a lot @ptrblck
|
st31753
|
Hi, there may be a bug here: with track_running_stats=False, bn continues to track the running_mean.
|
st31754
|
What kind of bug did you observe? Did the running_mean update even after calling eval() on the module?
|
st31755
|
Hi @ptrblck, yes, it is happening even when I call model.eval(). When validating the model on the same test data, model performance degrades when I use shuffle=True. Please see the details with code here:
Model.eval() giving different result when shuffle is True and False 73
|
st31756
|
I am new to pytorch.
Consider this example:
inputs = torch.rand(1,1,10,10)
mod = nn.Conv2d(1,32,3,2,1)
out = mod(inputs)
print(out.shape)
the output is torch.Size([1, 32, 5, 5])
I think new_width = (old_width + 2*padding - kernel_size)/stride + 1,
but that isn't evenly divisible here.
So how does pytorch calculate it?
|
st31757
|
Solved by ptrblck in post #2
|
st31758
|
The complete formula for the output size is given in the docs 7.3k. If it’s not divisible, the output size seems to be rounded down.
EDIT: new link to the Conv2d docs 1.4k.
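For the example above, that gives floor((10 + 2*1 - 3)/2 + 1) = floor(5.5) = 5, matching the printed output size of 5.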
|
st31759
|
Is there a built in way to compute the output size of layers, without actually running the layer? I am looking for something like:
output_tensor_size = compute_output_size(layer, input_tensor_size)
|
st31760
|
I am trying to export a model (which, when dumped to disk as a .pt file, is 4 GB). The code I use is:
net.eval()
torch.onnx.export(net, # model being run
(doc_embeddings, shortlist), # model input (or a tuple for multiple inputs)
"/path/to/a/folder/net.onnx", # where to save the model (can be a file or file-like object)
opset_version=11, # the ONNX version to export the model to
input_names = ['doc_embeddings', 'shortlist'], # the model's input names
output_names = ['scores'], # the model's output names
dynamic_axes={'doc_embeddings' : {0 : 'batch_size'}, # variable length axes
'shortlist' : {0 : 'batch_size', 1: 'num_shortlist'},
'scores' : {0 : 'batch_size', 1: 'num_shortlist'}},
use_external_data_format=True,
verbose=True)
But this doesn't work, and I get the error: RuntimeError: Exporting model exceed maximum protobuf size of 2GB. Please call torch.onnx.export with use_external_data_format=True. I would expect the model to be exported with external data files, since use_external_data_format=True is already specified.
|
st31761
|
Hello everyone,
I'm trying to model the hidden dynamics of a system to predict the continuous-time behavior (trajectory) of the output by solving an ODE-based RNN (ODE: Ordinary Differential Equation).
References
The idea and algorithms are described in detail here:
the paper 2
torchdiffeq library
Idea in nutshell
Basically, I'm trying to move from a standard RNN model, which can only learn discrete behavior, to a general model that learns and predicts continuous-time behavior. The idea is to train a network to learn the changes in the hidden states; then, using accurate solvers, the IVP (initial value problem) can be solved to get the states/output at the evaluation instants.
Implementation
step 0
#import modules
import os
import argparse
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import time
import torch
import torch.nn as nn
import torch.optim as optim
#from torchdiffeq import odeint_adjoint as odeint #backprop. using adjoint method integrated in odeint
from torchdiffeq import odeint as odeint
from torch.utils.tensorboard import SummaryWriter
import shutil
#use GPU if available
device = "cuda" if torch.cuda.is_available() else "cpu"
print("Using {} device".format(device))
step 1
#create train dataset
#read csv dataframes
df_train = (pd.read_csv('some_csv_file'))
#split dataframes into time series
df_train.columns = ['T','in','out']
#assume a number divisible by data length for now
nbatchs = 100
#create sequence batchs as tensors
t_train = torch.tensor(1e6*df_train[['T']].values.reshape(nbatchs,-1,1)).to(device) #in us
x_train = torch.tensor(df_train[['in']].values.reshape(nbatchs,-1,1)).float().to(device) #in V
y_train = torch.tensor(df_train[['out']].values.reshape(nbatchs,-1,1)).float().to(device) #in V
#check sizes
y_train.size()
step 2
#define a fn that handles data and returns:
#data= list([init_x[i], init_y[i], time_series[i], targets[i]] for i in range(nbatchs))
#init_state= torch.zeros(1, 1, hidden_size)
#intializations
data_size = 1000
eval_pts = 10 #no. of eval pts for integration
seq_len = int(data_size/nbatchs) #batch length
s = int(seq_len/eval_pts) #sampling rate
niters = 1000 #no. of iterations
test_freq = 2 #test frequency
hidden_size = 10 #size of hidden layer
def get_data():
x0 = list([x_train[batch,0].view(-1,1,1) for batch in range(nbatchs)])
y0 = list([y_train[batch,0].view(-1,1,1) for batch in range(nbatchs)])
t = list([t_train[batch,::s].view(-1) for batch in range(nbatchs)])
y = list([y_train[batch,::s].view(-1,1,1,1) for batch in range(nbatchs)])
data= list([x0[i], y0[i], t[i], y[i]] for i in range(nbatchs))
init_state = torch.zeros(1, 1, hidden_size)
targets = y
return data, init_state, targets
step 3
Thanks to @albanD:
“./implementing-truncated-backpropagation-through-time/15500”
#This class trains func -> (dy/dt) and solves for y at predefined eval_pts
tot_loss= 0.0
class ODE_RNN_TBPTT():
def __init__(self, func, loss_fn, k, optimizer):
self.func = func
self.loss_fn = loss_fn
self.k = k
self.optimizer = optimizer
def train(self, data, init_state):
global tot_loss
h0 = init_state
#save prev hidden states
states = [(None, h0)]
#iterate on batches
for batch, (x0, y0, t, targets) in enumerate(data):
#call get_new_observation
func.get_new_observation(x0)
#detach state from grad computation graph
state = states[-1][1].detach()
state.requires_grad=True
#run solver on the batch which will call func.forward() under the hood
pred, new_state = odeint(self.func, tuple([y0, state]), t)
#append the new_state
states.append((state, new_state[-1].view(1, 1, -1)))
if (batch+1)%self.k == 0:
loss = self.loss_fn(pred, targets)
tot_loss = tot_loss + loss
optimizer.zero_grad()
loss.backward()
optimizer.step()
step 4
class NN_Module(nn.Module):
def __init__(self, hidden_size):
super().__init__()
#net layers
self.rnn= nn.RNN(1, hidden_size, batch_first=True)
self.dense= nn.Linear(hidden_size, 1)
def get_new_observation(self, x0):
self.x0= x0
def forward(self, t, init_conditions):
#global idx
#RNN update equations
x0= self.x0
y0, h0= init_conditions
ht, hn= self.rnn(x0, h0)
y= self.dense(ht)
f = tuple([y, hn]) #f is a tuple (dy/dt, dh/dt) at t=T, where T is whenever the solver evaluates
return f
step 5
#main
#2 main steps
#1. make an instant of ODE_RNN_TBPTT: this is supposed to train a NN wrapped inside an odeint (Ordinary Differential Equation Integral)
#2. call get_data() and feed the trainer
#ODE_RNN_TBPTT inputs
func = NN_Module(hidden_size).to(device) #func implements f(t,y(t),params) representing dy/dt as NN
loss_fn = nn.MSELoss() #loss criterion
k = 1 #k1,k2 are no. of batchs per gradient update; assume k1=k2 for now
optimizer = torch.optim.Adam(func.parameters(), lr=1e-3)
trainer = ODE_RNN_TBPTT(func, loss_fn, k, optimizer)
#clear logs
#shutil.rmtree('/content/runs')
writer = SummaryWriter('runs') #create a logger
#test loop idx
ii= 0
for itr in range(1, niters + 1):
tot_loss = 0.0 #loss per itr
data, init_state, targets = get_data() #get training data
trainer.train(data, init_state) #feed the trainer
print("itr: {0} | loss: {1}".format(itr,tot_loss))
writer.add_scalar('loss', tot_loss, itr ) #log to writer
Results
I was able to get training results, but the loss is usually huge and descends in very small steps!
I also tried a GRU instead of the vanilla RNN.
Update: the gradients are vanishing even for very small batches. I guess that's because backprop through odeint takes into account all the steps the integrator takes.
itr: 1 | loss: 4.284154891967773
itr: 2 | loss: 4.283952236175537
itr: 3 | loss: 4.283847808837891
itr: 4 | loss: 4.283742904663086
itr: 5 | loss: 4.283634662628174
itr: 6 | loss: 4.283525466918945
itr: 7 | loss: 4.283415794372559
Questions
Are there any obvious methodological mistakes in the code, or is the problem with the training technique, such that I need to find another way to handle the data or change the network structure?
Any advice or recommendations are welcomed.
Thanks.
|
st31762
|
Hello
I am new to pytorch and I want to set different learning rates for some layers' weights and biases.
here is my code:
with open('DCN.txt', 'r') as Defor_layer:
my_list = Defor_layer.readlines()
params = list(filter(lambda kv: kv[0] in my_list, model.named_parameters()))
base_params = list(filter(lambda kv: kv[0] not in my_list, model.named_parameters()))
if args.optm == "SGD":
optimizer = SGD([
{'params': base_params},
{'params': params, 'lr':1}]
, lr=0.05, momentum=0.9, weight_decay=0.0002)
else:
optimizer = Adam([{'params': base_params},
{'params': params, 'lr':1}]
, lr=0.05, betas=(0.9, 0.999), eps=1e-08, weight_decay=0.0002)
and my params list looks like below, but longer
module.encoder.layers.7.DCNv2.conv_offset_mask.weight
module.encoder.layers.7.DCNv2.conv_offset_mask.bias
module.encoder.layers.8.DCNv2.conv_offset_mask.weight
module.encoder.layers.8.DCNv2.conv_offset_mask.bias
However, I am getting this error and I am not sure what it is about:
File "main_FE_Deco_v2_lr.py", line 191, in train
, lr=0.05, momentum=0.9, weight_decay=0.0002)
File "/home/kh/.conda/envs/torch41/lib/python3.6/site-packages/torch/optim/sgd.py", line 64, in __init__
super(SGD, self).__init__(params, defaults)
File "/home/kh/.conda/envs/torch41/lib/python3.6/site-packages/torch/optim/optimizer.py", line 43, in __init__
self.add_param_group(param_group)
File "/home/kh/.conda/envs/torch41/lib/python3.6/site-packages/torch/optim/optimizer.py", line 191, in add_param_group
"but one of the params is " + torch.typename(param))
TypeError: optimizer can only optimize Tensors, but one of the params is tuple
I appreciate any guidance.
|
st31763
|
I realized what the problem is.
I have both the name and the data in params, and that's why I got the error saying one of the params is a tuple. All I had to do was send only the data as params, not the names along with them.
|
st31764
|
Could you please show how to send only the data as params? I have the same error as yours.
|
st31765
|
Hi @luciaL before initializing the optimizer, adding the following two lines to obtain the params from the tuples in the two lists resolves the issue.
params = [i[1] for i in params]
base_params = [i[1] for i in base_params]
|
st31766
|
Is there a good way to have an infinite dataloader? That is, is there a class that automatically loops, providing a method like data_loader.get_next()? And how can full iterations (epochs) still be tracked?
|
st31767
|
Well one quick and dirty hack would be for your CustomDataset to return a very high number (e.g. np.iinfo(np.int64).max) in its __len__ .
As for get_next(), you can get the iterator from the dataloader and call next on that:
next(dataloader.__iter__())
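Alternatively, a small generator sketch that chains epochs without touching __len__, re-creating the iterator whenever the underlying loader is exhausted:
def infinite_batches(dataloader):
    while True:
        for batch in dataloader:
            yield batch
# usage: batches = infinite_batches(loader); x = next(batches)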
|