st49568 | Solved by fadetoblack in post #4
After this line:
train_dataset, valid_dataset = torch.utils.data.random_split(dataset, [len_train_set, len_valid_set])
You could modify the transforms, something like:
train_dataset.transforms = train_transform
valid_dataset.transforms = test_transform |
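A minimal sketch of a related approach (not the one in the reply above), assuming the underlying dataset returns untransformed samples and reusing the thread's names (dataset, len_train_set, len_valid_set, train_transform, test_transform); each split is wrapped so that it applies its own transform:
import torch
from torch.utils.data import Dataset, random_split

class TransformedSubset(Dataset):
    # hypothetical helper: applies a transform on top of a Subset returned by random_split
    def __init__(self, subset, transform=None):
        self.subset = subset
        self.transform = transform

    def __getitem__(self, idx):
        x, y = self.subset[idx]
        if self.transform is not None:
            x = self.transform(x)
        return x, y

    def __len__(self):
        return len(self.subset)

train_subset, valid_subset = random_split(dataset, [len_train_set, len_valid_set])
train_dataset = TransformedSubset(train_subset, train_transform)
valid_dataset = TransformedSubset(valid_subset, test_transform)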
st49569 | One simple idea would be to split the data beforehand and have train.csv and test.csv.
Then you can write:
train_dataset = MothLandmarksDataset('train.csv', '.', transform=transform_train)
test_dataset = MothLandmarksDataset('test.csv', '.', transform=transform_test) |
st49570 | Right, but I was hoping I wouldn't have to do that manually and could use PyTorch to do it automatically. Is there an automatic way to do what you mean via torch? |
st49571 | After this line:
train_dataset, valid_dataset = torch.utils.data.random_split(dataset, [len_train_set, len_valid_set])
You could modify the transforms, something like:
train_dataset.transforms = train_transform
valid_dataset.transforms = test_transform |
st49572 | github.com/pytorch/pytorch
the different train speed between pip install torch==1.2 and build torch1.2 from source 3
opened
Oct 13, 2020
closed
Oct 13, 2020
wuyujiji
❓ Questions and Help
Hi, recently I train a model with pytorch1.2. I found a phenomenon that train speed of each iter...
module: build
shadow review
triaged |
st49573 | Which libraries are you using for the custom build and which are used in the 1.2 binaries?
Also, why are you comparing the speed of such an old PyTorch version? |
st49574 | The libraries are listed in https://github.com/pytorch/pytorch/issues/46245. The reason for comparing the speed is that I want to reproduce the user's training speed; his PyTorch 1.2 environment was built by pip install, but mine can only be built from source because of our internal platform's limitations. |
st49575 | If you made sure the binary and your local build are equal, you could use profiling tools such as Nsight or the built-in profiler in PyTorch.
Also, note that your profiling should synchronize the device before starting and stopping the timer, but I assume you are already familiar with profiling PyTorch ops. |
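A minimal sketch of such synchronized timing, assuming a CUDA device is available; the model and input here are hypothetical stand-ins for the real training step being profiled:
import time
import torch

model = torch.nn.Linear(1024, 1024).cuda()      # hypothetical stand-in for the profiled model
x = torch.randn(64, 1024, device='cuda')

torch.cuda.synchronize()                        # wait for pending kernels before starting the timer
start = time.perf_counter()
for _ in range(100):
    out = model(x)
torch.cuda.synchronize()                        # wait for the timed kernels to finish
print('{:.3f} ms / iteration'.format((time.perf_counter() - start) / 100 * 1e3))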
st49576 | I have already profiled and saved the result to timeline.json; the most time-consuming op in each train step is IndexPutBackward (0.8s vs. 0.2s). |
st49577 | Say I have C classes, and I want to get a C*C similarity matrix in which each entry is the cosine similarity between class_i and class_j. I wrote the code below to compute the similarity loss based on the weights of the last-but-one fc layer. Here is the relevant part of the code for simplicity:
cos = nn.CosineSimilarity(dim=1, eps=1e-6)
for batch_idx, (data, target) in enumerate(self.data_loader):
    # C*M
    weights = self.model.module.model.fc[-1].weight
    num_iter = weights.size(0)
    # similarity matrix
    sim_mat = torch.empty(0, num_iter).cuda()
    for j in range(num_iter):
        weights_i = weights[j, :].expand_as(weights)
        sim = cos(weights, weights_i)
        sim = torch.unsqueeze(sim, 0)
        sim_mat = torch.cat((sim_mat, sim), 0)
    sim_mat = sim_mat - torch.diag(torch.diag(sim_mat))
    max_val = torch.max(sim_mat, dim=1).values
    loss_sim = (torch.sum(max_val) / num_classes**2)
    ...
    loss_sim.backward()
Why doesn't my loss_sim work in my training process? It seems that loss_sim doesn't backpropagate properly to affect the original model weights.
I know we can wrap the loss function into a class like:
class Loss_sim(nn.Module):
    def __init__(self):
        ...
    def forward(self, weights):
        ...
Should I write it like this? Is there any problem here because of the copy of the original weights?
Thanks in advance |
st49578 | Solved by KFrank in post #4
Howdy Hoody!
If you’re asking whether the assignment:
weights = self.model.module.model.fc[-1].weight
will prevent gradients from flowing back through the assignment
and break backpropagation, the answer is no.
In python, “variables” are references.
self.model.module.model.fc[-1].weight refer… |
st49579 | Howdy Hoody!
Perhaps the short answer is that your loss_sim is bounded below
by zero, and that might not be what you want.
Hoodythree:
I wrote the code below to compute the similarity loss based on the weights of the last-but-one fc layer.
I’m not sure what you are trying to do here, but I think you
want a loss term that pushes the rows of weights to be dissimilar
from one another.
It seems that loss_sim doesn't backpropagate properly to affect the original model weights.
Your loss_sim will backpropagate. Whether it will do so “properly”
depends on what you are expecting.
These lines:
sim_mat = sim_mat - torch.diag(torch.diag(sim_mat))
max_val = torch.max(sim_mat, dim=1).values
place zeros on the diagonal of sim_mat so that max_val can
never be negative.
The problem is that the rows of weights can all have negative
cosine similarity with one another, at which point loss_sim becomes
zero, has zero gradient, and no longer contributes to the training.
After quoting your code, I show a script that runs your version
of loss_sim packaged as a function, cc_sim, and compares it
with two possibly improved versions. cc_simA returns the mean
of the cosine similarities, while cc_simB removes the floor
of zero on loss_sim so that it can become negative and fall
to its most negative similarity (maximum dissimilarity).
This script shows that your version does backpropagate, but does
get stuck at zero, and that the improved versions don’t get stuck
at zero.
It also gives an example of a tensor, t, whose rows all have
negative cosine similarity with one another.
Your code:
cos = nn.CosineSimilarity(dim=1, eps=1e-6)
for batch_idx, (data, target) in enumerate(self.data_loader):
    # C*M
    weights = self.model.module.model.fc[-1].weight
    num_iter = weights.size(0)
    # similarity matrix
    sim_mat = torch.empty(0, num_iter).cuda()
    for j in range(num_iter):
        weights_i = weights[j, :].expand_as(weights)
        sim = cos(weights, weights_i)
        sim = torch.unsqueeze(sim, 0)
        sim_mat = torch.cat((sim_mat, sim), 0)
    sim_mat = sim_mat - torch.diag(torch.diag(sim_mat))
    max_val = torch.max(sim_mat, dim=1).values
    loss_sim = (torch.sum(max_val) / num_classes**2)
    ...
    loss_sim.backward()
The script:
import torch
torch.__version__
torch.random.manual_seed (2020)

def cc_sim (weights):
    cos = torch.nn.CosineSimilarity(dim=1, eps=1e-6)
    num_iter = num_classes = weights.size(0)
    # similarity matrix
    sim_mat = torch.empty(0, num_iter)
    for j in range(num_iter):
        weights_i = weights[j, :].expand_as(weights)
        sim = cos(weights, weights_i)
        sim = torch.unsqueeze(sim, 0)
        sim_mat = torch.cat((sim_mat, sim), 0)
    sim_mat = sim_mat - torch.diag(torch.diag(sim_mat))
    max_val = torch.max(sim_mat, dim=1).values
    loss_sim = (torch.sum(max_val) / num_classes**2)
    return loss_sim

def cc_simA (weights):
    cos = torch.nn.CosineSimilarity(dim=1, eps=1e-6)
    num_iter = num_classes = weights.size(0)
    # similarity matrix
    sim_mat = torch.empty(0, num_iter)
    for j in range(num_iter):
        weights_i = weights[j, :].expand_as(weights)
        sim = cos(weights, weights_i)
        sim = torch.unsqueeze(sim, 0)
        sim_mat = torch.cat((sim_mat, sim), 0)
    sim_mat = sim_mat - torch.diag(torch.diag(sim_mat))
    loss_sim = torch.mean (sim_mat)
    return loss_sim

def cc_simB (weights):
    cos = torch.nn.CosineSimilarity(dim=1, eps=1e-6)
    num_iter = num_classes = weights.size(0)
    # similarity matrix
    sim_mat = torch.empty(0, num_iter)
    for j in range(num_iter):
        weights_i = weights[j, :].expand_as(weights)
        sim = cos(weights, weights_i)
        sim = torch.unsqueeze(sim, 0)
        sim_mat = torch.cat((sim_mat, sim), 0)
    sim_mat = sim_mat - torch.diag (float ('inf') * torch.ones (3))
    max_val = torch.max(sim_mat, dim=1).values
    loss_sim = (torch.sum(max_val) / num_classes**2)
    return loss_sim

t = torch.tensor ([[1.0, 0.0], [-0.5, 1.0], [-0.5, -1.0]])
t.requires_grad = True
print ('cc_sim:')
loss = cc_sim (t)
loss.backward()
print ('loss =', loss)
print ('t = ...\n', t)
print ('t.grad = ...\n', t.grad)
with torch.no_grad():
    _ = t.grad.zero_()
print ('cc_simA:')
lossA = cc_simA (t)
lossA.backward()
print ('lossA =', lossA)
print ('t = ...\n', t)
print ('t.grad = ...\n', t.grad)
with torch.no_grad():
    _ = t.grad.zero_()
print ('cc_simB:')
lossB = cc_simB (t)
lossB.backward()
print ('lossB =', lossB)
print ('t = ...\n', t)
print ('t.grad = ...\n', t.grad)

nDim = 2
w = torch.randn ((3, nDim))
wA = w.clone()
wB = w.clone()
w.requires_grad = True
wA.requires_grad = True
wB.requires_grad = True
print ('w = ...\n', w)
print ('wA = ...\n', wA)
print ('wB = ...\n', wB)

lr = 5.0
print ('cc_sim:')
for i in range (10):
    loss = cc_sim (w)
    print ('loss =', loss)
    if i != 0:
        _ = w.grad.zero_()
    loss.backward()
    with torch.no_grad():
        _ = w.sub_ (lr * w.grad)
print ('w = ...\n', w)
print ('w.grad = ...\n', w.grad)

print ('cc_simA:')
for i in range (10):
    lossA = cc_simA (wA)
    print ('lossA =', lossA)
    if i != 0:
        _ = wA.grad.zero_()
    lossA.backward()
    with torch.no_grad():
        _ = wA.sub_ (lr * wA.grad)
print ('wA = ...\n', wA)
print ('wA.grad = ...\n', wA.grad)

print ('cc_simB:')
for i in range (10):
    lossB = cc_simB (wB)
    print ('lossB =', lossB)
    if i != 0:
        _ = wB.grad.zero_()
    lossB.backward()
    with torch.no_grad():
        _ = wB.sub_ (lr * wB.grad)
print ('wB = ...\n', wB)
print ('wB.grad = ...\n', wB.grad)
The output of the script:
>>> import torch
>>> torch.__version__
'1.6.0'
>>>
>>> torch.random.manual_seed (2020)
<torch._C.Generator object at 0x7f635efa4930>
>>>
>>> def cc_sim (weights):
... cos = torch.nn.CosineSimilarity(dim=1, eps=1e-6)
... num_iter = num_classes = weights.size(0)
... # similarity matrix
... sim_mat = torch.empty(0, num_iter)
... for j in range(num_iter):
... weights_i = weights[j, :].expand_as(weights)
... sim = cos(weights, weights_i)
... sim = torch.unsqueeze(sim, 0)
... sim_mat = torch.cat((sim_mat, sim), 0)
... sim_mat = sim_mat - torch.diag(torch.diag(sim_mat))
... max_val = torch.max(sim_mat, dim=1).values
... loss_sim = (torch.sum(max_val) / num_classes**2)
... return loss_sim
...
>>> def cc_simA (weights):
... cos = torch.nn.CosineSimilarity(dim=1, eps=1e-6)
... num_iter = num_classes = weights.size(0)
... # similarity matrix
... sim_mat = torch.empty(0, num_iter)
... for j in range(num_iter):
... weights_i = weights[j, :].expand_as(weights)
... sim = cos(weights, weights_i)
... sim = torch.unsqueeze(sim, 0)
... sim_mat = torch.cat((sim_mat, sim), 0)
... sim_mat = sim_mat - torch.diag(torch.diag(sim_mat))
... loss_sim = torch.mean (sim_mat)
... return loss_sim
...
>>> def cc_simB (weights):
... cos = torch.nn.CosineSimilarity(dim=1, eps=1e-6)
... num_iter = num_classes = weights.size(0)
... # similarity matrix
... sim_mat = torch.empty(0, num_iter)
... for j in range(num_iter):
... weights_i = weights[j, :].expand_as(weights)
... sim = cos(weights, weights_i)
... sim = torch.unsqueeze(sim, 0)
... sim_mat = torch.cat((sim_mat, sim), 0)
... sim_mat = sim_mat - torch.diag (float ('inf') * torch.ones (3))
... max_val = torch.max(sim_mat, dim=1).values
... loss_sim = (torch.sum(max_val) / num_classes**2)
... return loss_sim
...
>>> t = torch.tensor ([[1.0, 0.0], [-0.5, 1.0], [-0.5, -1.0]])
>>> t.requires_grad = True
>>> print ('cc_sim:')
cc_sim:
>>> loss = cc_sim (t)
>>> loss.backward()
>>> print ('loss =', loss)
loss = tensor(0., grad_fn=<DivBackward0>)
>>> print ('t = ...\n', t)
t = ...
tensor([[ 1.0000, 0.0000],
[-0.5000, 1.0000],
[-0.5000, -1.0000]], requires_grad=True)
>>> print ('t.grad = ...\n', t.grad)
t.grad = ...
tensor([[0., 0.],
[0., 0.],
[0., 0.]])
>>> with torch.no_grad():
... _ = t.grad.zero_()
...
>>> print ('cc_simA:')
cc_simA:
>>> lossA = cc_simA (t)
>>> lossA.backward()
>>> print ('lossA =', lossA)
lossA = tensor(-0.3321, grad_fn=<MeanBackward0>)
>>> print ('t = ...\n', t)
t = ...
tensor([[ 1.0000, 0.0000],
[-0.5000, 1.0000],
[-0.5000, -1.0000]], requires_grad=True)
>>> print ('t.grad = ...\n', t.grad)
t.grad = ...
tensor([[ 0.0000, 0.0000],
[ 0.0168, 0.0084],
[ 0.0168, -0.0084]])
>>> with torch.no_grad():
... _ = t.grad.zero_()
...
>>> print ('cc_simB:')
cc_simB:
>>> lossB = cc_simB (t)
>>> lossB.backward()
>>> print ('lossB =', lossB)
lossB = tensor(-0.1491, grad_fn=<DivBackward0>)
>>> print ('t = ...\n', t)
t = ...
tensor([[ 1.0000, 0.0000],
[-0.5000, 1.0000],
[-0.5000, -1.0000]], requires_grad=True)
>>> print ('t.grad = ...\n', t.grad)
t.grad = ...
tensor([[ 0.0000, 0.0994],
[ 0.1590, 0.0795],
[ 0.0795, -0.0398]])
>>>
>>> nDim = 2
>>> w = torch.randn ((3, nDim))
>>> wA = w.clone()
>>> wB = w.clone()
>>> w.requires_grad = True
>>> wA.requires_grad = True
>>> wB.requires_grad = True
>>> print ('w = ...\n', w)
w = ...
tensor([[ 1.2372, -0.9604],
[ 1.5415, -0.4079],
[ 0.8806, 0.0529]], requires_grad=True)
>>> print ('wA = ...\n', wA)
wA = ...
tensor([[ 1.2372, -0.9604],
[ 1.5415, -0.4079],
[ 0.8806, 0.0529]], requires_grad=True)
>>> print ('wB = ...\n', wB)
wB = ...
tensor([[ 1.2372, -0.9604],
[ 1.5415, -0.4079],
[ 0.8806, 0.0529]], requires_grad=True)
>>>
>>> lr = 5.0
>>>
>>> print ('cc_sim:')
cc_sim:
>>> for i in range (10):
... loss = cc_sim (w)
... print ('loss =', loss)
... if i != 0:
... _ = w.grad.zero_()
... loss.backward()
... with torch.no_grad():
... _ = w.sub_ (lr * w.grad)
...
loss = tensor(0.3133, grad_fn=<DivBackward0>)
loss = tensor(0.2794, grad_fn=<DivBackward0>)
loss = tensor(0.2203, grad_fn=<DivBackward0>)
loss = tensor(0.1282, grad_fn=<DivBackward0>)
loss = tensor(0.0270, grad_fn=<DivBackward0>)
loss = tensor(0.0421, grad_fn=<DivBackward0>)
loss = tensor(0., grad_fn=<DivBackward0>)
loss = tensor(0., grad_fn=<DivBackward0>)
loss = tensor(0., grad_fn=<DivBackward0>)
loss = tensor(0., grad_fn=<DivBackward0>)
>>> print ('w = ...\n', w)
w = ...
tensor([[-0.5426, -1.7768],
[ 1.8619, -0.0235],
[-1.0342, 1.1212]], requires_grad=True)
>>> print ('w.grad = ...\n', w.grad)
w.grad = ...
tensor([[0., 0.],
[0., 0.],
[0., 0.]])
>>>
>>> print ('cc_simA:')
cc_simA:
>>> for i in range (10):
... lossA = cc_simA (wA)
... print ('lossA =', lossA)
... if i != 0:
... _ = wA.grad.zero_()
... lossA.backward()
... with torch.no_grad():
... _ = wA.sub_ (lr * wA.grad)
...
lossA = tensor(0.5826, grad_fn=<MeanBackward0>)
lossA = tensor(0.1014, grad_fn=<MeanBackward0>)
lossA = tensor(-0.2646, grad_fn=<MeanBackward0>)
lossA = tensor(-0.3099, grad_fn=<MeanBackward0>)
lossA = tensor(-0.3240, grad_fn=<MeanBackward0>)
lossA = tensor(-0.3299, grad_fn=<MeanBackward0>)
lossA = tensor(-0.3322, grad_fn=<MeanBackward0>)
lossA = tensor(-0.3330, grad_fn=<MeanBackward0>)
lossA = tensor(-0.3332, grad_fn=<MeanBackward0>)
lossA = tensor(-0.3333, grad_fn=<MeanBackward0>)
>>> print ('wA = ...\n', wA)
wA = ...
tensor([[-1.0928, -1.7762],
[ 1.6100, -0.0603],
[-0.9511, 1.8146]], requires_grad=True)
>>> print ('wA.grad = ...\n', wA.grad)
wA.grad = ...
tensor([[ 1.7566e-03, -1.0688e-03],
[-3.6173e-05, -8.9793e-04],
[ 1.2300e-03, 6.3939e-04]])
>>>
>>>
>>> print ('cc_simB:')
cc_simB:
>>> for i in range (10):
... lossB = cc_simB (wB)
... print ('lossB =', lossB)
... if i != 0:
... _ = wB.grad.zero_()
... lossB.backward()
... with torch.no_grad():
... _ = wB.sub_ (lr * wB.grad)
...
lossB = tensor(0.3133, grad_fn=<DivBackward0>)
lossB = tensor(0.2794, grad_fn=<DivBackward0>)
lossB = tensor(0.2203, grad_fn=<DivBackward0>)
lossB = tensor(0.1282, grad_fn=<DivBackward0>)
lossB = tensor(0.0040, grad_fn=<DivBackward0>)
lossB = tensor(-0.1192, grad_fn=<DivBackward0>)
lossB = tensor(-0.0680, grad_fn=<DivBackward0>)
lossB = tensor(-0.1348, grad_fn=<DivBackward0>)
lossB = tensor(-0.1079, grad_fn=<DivBackward0>)
lossB = tensor(-0.1362, grad_fn=<DivBackward0>)
>>> print ('wB = ...\n', wB)
wB = ...
tensor([[-1.0508, -1.8293],
[ 1.7102, 0.5224],
[-1.1758, 1.3381]], requires_grad=True)
>>> print ('wB.grad = ...\n', wB.grad)
wB.grad = ...
tensor([[ 0.0950, -0.0278],
[ 0.0077, -0.0571],
[ 0.0489, 0.0305]])
>>>
Good luck.
K. Frank |
st49580 | KFrank:
torch.diag (float ('inf') * torch.ones (3))
Thank you for your timely help. Basically, I want to penalize the most similar pairs for each class; that's why I want a similarity loss that serves as a regularizing term. My total loss is L_total = L_CE + L_sim. My questions are as follows:
I want to know if this term can help to update the parameters of my network in the training process, because I did an assignment of the model weights here: weights = self.model.module.model.fc[-1].weight.
If I want to get a weighted similarity loss, is the code below, based on yours, correct?
import torch
torch.__version__
torch.random.manual_seed (2020)

def cc_sim (weights, w_s):
    '''
    weights: weights from NN model
    w_s: weights for every entry in similarity matrix, w_s.size = (num_classes, num_classes)
    '''
    cos = torch.nn.CosineSimilarity(dim=1, eps=1e-6)
    num_iter = num_classes = weights.size(0)
    # similarity matrix
    sim_mat = torch.empty(0, num_iter)
    for j in range(num_iter):
        weights_i = weights[j, :].expand_as(weights)
        sim = cos(weights, weights_i)
        sim = torch.unsqueeze(sim, 0)
        sim_mat = torch.cat((sim_mat, sim), 0)
    sim_mat = sim_mat * w_s
    sim_mat = sim_mat - torch.diag(torch.diag(sim_mat))
    max_val = torch.max(sim_mat, dim=1).values
    loss_sim = (torch.sum(max_val) / num_classes**2)
    return loss_sim

def cc_simA (weights, w_s):
    '''
    weights: weights from NN model
    w_s: weights for every entry in similarity matrix, w_s.size = (num_classes, num_classes)
    '''
    cos = torch.nn.CosineSimilarity(dim=1, eps=1e-6)
    num_iter = num_classes = weights.size(0)
    # similarity matrix
    sim_mat = torch.empty(0, num_iter)
    for j in range(num_iter):
        weights_i = weights[j, :].expand_as(weights)
        sim = cos(weights, weights_i)
        sim = torch.unsqueeze(sim, 0)
        sim_mat = torch.cat((sim_mat, sim), 0)
    # element-wise multiply
    sim_mat = sim_mat * w_s
    sim_mat = sim_mat - torch.diag(torch.diag(sim_mat))
    loss_sim = torch.mean (sim_mat)
    return loss_sim

def cc_simB (weights, w_s):
    '''
    weights: weights from NN model
    w_s: weights for every entry in similarity matrix, w_s.size = (num_classes, num_classes)
    '''
    cos = torch.nn.CosineSimilarity(dim=1, eps=1e-6)
    num_iter = num_classes = weights.size(0)
    # similarity matrix
    sim_mat = torch.empty(0, num_iter)
    for j in range(num_iter):
        weights_i = weights[j, :].expand_as(weights)
        sim = cos(weights, weights_i)
        sim = torch.unsqueeze(sim, 0)
        sim_mat = torch.cat((sim_mat, sim), 0)
    # element-wise multiply
    sim_mat = sim_mat * w_s
    sim_mat = sim_mat - torch.diag (float ('inf') * torch.ones (num_classes))
    max_val = torch.max(sim_mat, dim=1).values
    loss_sim = (torch.sum(max_val) / num_classes**2)
    return loss_sim

if __name__ == '__main__':
    num_classes = nDim = 10
    weights = torch.randn((num_classes, nDim), requires_grad=True)
    w_s = torch.FloatTensor(num_classes, num_classes).uniform_(0, 1)
    loss = cc_simB(weights, w_s)
    loss.backward()
    print('loss item:', loss)
    print('grad of weights:\n')
    print(weights.grad)
Thanks again! |
st49581 | Howdy Hoody!
Hoodythree:
I want to know if this term can help to update the parameters of my network in the training process, because I did an assignment of the model weights here: weights = self.model.module.model.fc[-1].weight.
If you’re asking whether the assignment:
weights = self.model.module.model.fc[-1].weight
will prevent gradients from flowing back through the assignment
and break backpropagation, the answer is no.
In python, “variables” are references.
self.model.module.model.fc[-1].weight refers to a tensor
in memory somewhere. The above assignment creates a new
reference that refers to the same tensor in memory. No new
tensor is created, nor is any data copied from one place to
another. Performing tensor operations on weights is essentially
identical to performing tensor operations on
self.model.module.model.fc[-1].weight, so backpropagation
will work identically.
To emphasize this point:
wtmp1 = self.model.module.model.fc[-1].weight
wtmp2 = wtmp1
wtmp3 = wtmp2
weights = wtmp3
would be almost equivalent to your assignment, with the only
difference being the three extra (and unnecessary) temporary
references that will be cleaned up when your script exits (or
when an enclosing code block goes out of scope).
If I want to get a weighted similarity loss, is the code below, based on yours, correct?
...
sim_mat = sim_mat * w_s
sim_mat = sim_mat - torch.diag(torch.diag(sim_mat))
max_val = torch.max(sim_mat, dim=1).values
This will do what I believe you want. Each element of sim_mat
will be multiplied by the corresponding element of w_s. This
will potentially change the result of torch.max(), depending on
the specific values involved.
Note, if you set up w_s so that it has zeros along its diagonal, the
multiplication will zero out the diagonal of sim_mat so you can forgo
sim_mat = sim_mat - torch.diag(torch.diag(sim_mat)).
Best.
K. Frank |
st49582 | Thanks for your clear explanation.
training748×663 231 KB
I still have an extra question here:
Why my training and validation accuracy curves remained almost the same after I added the sim_loss term like above cc_simB ? (Green is with sim_loss) It’s really wired. |
st49583 | The official website 3 mentions that nn.TransformerEncoderLayer is made up of a self-attention layer and a feedforward network: the self-attention layer comes first, followed by the feed-forward network. Here are some of the input parameters and an example:
d_model – the number of expected features in the input (required).
dim_feedforward - the dimension of the feedforward network model (default=2048)
encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, dim_feedforward=1024)
src = torch.rand(10, 32, 512)
out = encoder_layer(src)
print(out.shape)
the output shape is
torch.Size([10, 32, 512])
I have two questions:
Why is the output shape [10, 32, 512]? Isn't it [10, 32, 1024], since the feedforward network is connected after the self-attention layer?
In this example, does the input shape [10, 32, 512] correspond to [batchsize, seq_length, embedding]?
In my experience, when using nn.Embedding the shape is [batchsize, seq_length, embedding],
but it looks like the shape is [seq_length, batchsize, embedding] in this tutorial 6. |
st49584 | Solved by phan_phan in post #2
Hi,
The TransformerEncoder “transforms” each input embeddings with the help of neighboring embeddings in the sequence, so it is normal that the output is homogeneous with the input : it should be the same shape as the input.
You can look at the implementation of nn.TransformerEncoderLayer for mor… |
st49585 | Hi,
The TransformerEncoder "transforms" each input embedding with the help of the neighboring embeddings in the sequence, so it is normal that the output is homogeneous with the input: it has the same shape as the input.
You can look at the implementation of nn.TransformerEncoderLayer 26 for more details: you can see where dim_feedforward is used.
See the full nn.Transformer documentation 51 for details on the required shapes: it is effectively [seq, batch, emb].
Note that nn.Embedding works with any index shape: [size1, size2] -> [size1, size2, emb].
So you can use either [seq, batch, emb] or [batch, seq, emb] with nn.Embedding. |
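A minimal sketch checking both points (the output shape matches the input shape, and dim_feedforward is only used inside the layer):
import torch
import torch.nn as nn

encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, dim_feedforward=1024)

# [seq_len, batch, d_model] is the layout expected by the nn.Transformer modules
src = torch.rand(10, 32, 512)
out = encoder_layer(src)
print(out.shape)            # torch.Size([10, 32, 512]) -- same as the input, not 1024

# nn.Embedding maps an index tensor of any shape to that shape plus an embedding dim
emb = nn.Embedding(num_embeddings=1000, embedding_dim=512)
tokens = torch.randint(0, 1000, (10, 32))   # [seq_len, batch]
print(emb(tokens).shape)                    # torch.Size([10, 32, 512])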
st49586 | I’ve been spending the past few weeks working with ASR models, and so far I’ve managed to train a decent CTC model, start training a Seq2Seq model (takes too long to train), and I have been trying to get the RNN Transducer to work, but I haven’t found any implementation that works in Windows. Every implementation typically always uses WarpTransducer, which has no support for Windows. Has anybody made one, or know of one that will work?
Thanks in advance! |
st49587 | I'm a little confused as to how PyTorch keeps track of and updates the weight matrix (which is multiplied element-wise with the input matrix). Should the weight matrix be fed to the network itself and then tracked manually by the user after each update, or is it going to be updated and tracked automatically by the PyTorch library? |
st49588 | Solved by ptrblck in post #2
Generally you could use nn.Modules, which will keep the parameters as internal attributes, or use the functional API where you can keep track of all parameters manually.
I would recommend to take a look at some tutorials to see different work flows and use cases. |
st49589 | Generally you could use nn.Modules, which will keep the parameters as internal attributes, or use the functional API where you can keep track of all parameters manually.
I would recommend to take a look at some tutorials to see different work flows and use cases. |
st49590 | As a follow-up question (a slightly silly one at that), I was wondering how one can be sure that backpropagation isn't broken. I checked the gradient of the weight matrix that I initialized in __init__ and got a value after backward(); does that mean that backprop isn't broken up to that point? And printing loss.backward() would give me None either way?
Sorry, the last one sounds really dumb; I just want to make sure I'm not doing anything incorrectly.
Here is a snippet of what I was testing out, which seemed to work fine (also, if things are correct, other people can reference this in the future):
from __future__ import print_function
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch_geometric.nn import knn
from torchviz import make_dot
from skimage import io, transform
channel_in = 6
channel_out = 6
neighbours = 2
err_val = 0
CHANNEL_MLP = channel_in
class MLP_2L(nn.Module):
def __init__(self, channels):
super().__init__()
self.channel= channels
self.hidden = nn.Linear(int(channels) , int(channels/2))
self.output = nn.Linear(int(channels/2) , int(channels))
self.bn = nn.BatchNorm1d(channels)
def forward(self, input):
output = self.output(self.hidden(input))
output = F.relu(self.bn(output))
return output
class testnet(nn.Module):
def __init__(self, in_channel, out_channel):
super().__init__()
self.in_channel = in_channel
### mlp
self.mlp = MLP_2L(in_channel)
### initialize parameterized weight matrix
Fk = torch.randn(3,6,requires_grad = True)
self.Fk= torch.nn.Parameter(Fk)
def forward(self, test_features):
conv_out = self.mlp(test_features)
print(conv_out)
print(conv_out.requires_grad)
conv_out = torch.mul(conv_out, self.Fk)
print(conv_out)
print(conv_out.requires_grad)
return conv_out
### variables
raw_features = np.array([[0,0,0,0,0,0], [1,1,1,1,1,1], [2,2,2,2,2,2]])
features = torch.tensor(raw_features,dtype=torch.float,requires_grad=True)
### loss function
criterion = nn.MSELoss()
### initalize network
net = testnet(channel_in, channel_out)
for param in net.parameters():
print(type(param.data), param.size())
print("\n\n")
print(testnet)
### send the input through the layer to get output
output = net(features)
## get loss
loss = criterion(output, torch.randn(3,6,requires_grad = True))
print('loss:')
print(loss)
### get grad
loss.backward()
print('Fk grad:')
print(net.Fk.grad)
#### test pytorch with pcnn strutuce propsed using mlp, with relu and batch normalization afterward
channel_in = 6
channel_out = 6
neighbours = 2
err_val = 0
CHANNEL_MLP = channel_in
#### mlp that's been fed with 3 x neighbour_size at each input (with channel_size amount of input)
## input of Nx3xneighbours {or 3xneighbours per channel) mlp, return Nx3xneighbours
class MLP_2L(nn.Module):
def __init__(self, channels):
super().__init__()
self.channel= channels
self.hidden = nn.Linear(int(channels) , int(channels/2))
self.output = nn.Linear(int(channels/2) , int(channels))
self.bn = nn.BatchNorm1d(channels)
def forward(self, input):
output = self.output(self.hidden(input))
output = F.relu(self.bn(output))
return output
class FUSE_SUB_DEPTHNET(nn.Module):
def __init__(self, in_channel, out_channel, neighbours):
super().__init__()
### input variables
self.in_channel = in_channel
self.neighbours = neighbours
### MLP layer
self.mlp = MLP_2L(in_channel)
### initialize parameterized weight matrix
Fk = torch.randn(3,self.neighbours, self.in_channel,requires_grad = True)
self.Fk= torch.nn.Parameter(Fk)
def forward(self, test_xyz, test_feature):
### KNN and find the corresponding features and xyz
assign_index = knn(test_xyz, test_xyz, self.neighbours)
#tensor_idx = Depth_conv_otpt_idx.clone().detach()#.requires_grad_(True)
knn_neighbours = test_xyz[assign_index]
knn_features = test_feature[assign_index]
print(knn_neighbours.size())
print(knn_neighbours)
print(knn_neighbours.requires_grad)
conv_out = self.mlp(knn_features)
print(conv_out)
print(conv_out.requires_grad)
return conv_out
raw_x = np.array([[-1, -1,1], [-1, 1,4], [1, -1,6], [1, 1,9], [20,3,5,], [-4,-8,-30],[-0.5,1,-4], [0.5,-2,-4.1],[1,-4,5],[6,9,30], [-4,6,5],[31,42,1],[4,64,13],[44,59,-103],[1,55,671]])
raw_y = np.array([[-1, 0, 1], [1, 0, 1], [0,0,0]])
raw_features = np.array([[0,0,0,0,0,0], [1,1,1,1,1,1], [2,2,2,2,2,2]])
x = torch.tensor(raw_x,dtype=torch.float,requires_grad=True)
y = torch.tensor(raw_y,dtype=torch.float,requires_grad=True)
features = torch.tensor(raw_features,dtype=torch.float,requires_grad=True)
### loss function
criterion = nn.MSELoss()
### initalize network
Depth_cnn = FUSE_SUB_DEPTHNET(channel_in, channel_out, neighbours)
for param in Depth_cnn.parameters():
print(type(param.data), param.size())
print(Depth_cnn)
### send the input through the layer to get output
output = Depth_cnn(y, features)
## get loss
loss = criterion(output, torch.randn(6,6,requires_grad = True))
print('loss:')
print(loss)
### get grad
loss.backward()
print(Depth_cnn.mlp.hidden.bias.grad) |
st49591 | peepeepoopoo:
I checked the gradient of the weight matrix that I initialized in __init__ and got a value after backward(); does that mean that backprop isn't broken up to that point?
Yes, that’s correct.
peepeepoopoo:
And printing loss.backward() would give me None either way?
Yes, tensor.backward() doesn’t return anything. |
st49592 | On some of the systems I need to replace "basic_english" with "spacy", and then it works.
What is the difference between the basic_english and spacy tokenizers? |
st49593 | Why do you need to replace it with spaCy? What does not work correctly for you? The basic_english tokenizer should work as long as your language is English. It's in the name. It will do a very basic string normalization and then split on whitespace. If you use other parsers, such as spaCy, that library will be used to tokenize the text. |
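A minimal sketch comparing the two tokenizers, assuming torchtext is installed and (for the second one) a spaCy English model is available; the exact language argument can differ between torchtext/spaCy versions, and the example outputs are only indicative:
from torchtext.data.utils import get_tokenizer

text = "Don't tokenize me, please!"

basic = get_tokenizer("basic_english")    # lowercases, separates punctuation, splits on whitespace
print(basic(text))                        # e.g. ['don', "'", 't', 'tokenize', 'me', ',', 'please', '!']

# needs: pip install spacy && python -m spacy download en_core_web_sm
spacy_tok = get_tokenizer("spacy", language="en_core_web_sm")
print(spacy_tok(text))                    # e.g. ['Do', "n't", 'tokenize', 'me', ',', 'please', '!']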
st49594 | Hello, I use the following code to define my network. But I got the warning:
"Couldn't retrieve source code for container of "type " + obj.__name__ + ". It won't be checked for correctness upon loading."
What’s the meaning of this warning and how to solve it? Thanks!
class BasicBlock(nn.Module):
    def __init__(self):
        super(BasicBlock, self).__init__()
        self.conv = torch.nn.Conv2d(in_channels=64, out_channels=64,
                                    kernel_size=3, stride=1, padding=1)
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        out = self.conv(x)
        out = self.relu(out)
        return out

class MNN(nn.Module):
    def __init__(self, block, blocks):
        super(MNN, self).__init__()
        self.layer1 = torch.nn.Conv2d(in_channels=2, out_channels=64,
                                      kernel_size=3, stride=1, padding=1)
        self.layer2 = self.make_layer(block, blocks)
        self.layer3 = torch.nn.Conv2d(in_channels=64, out_channels=1,
                                      kernel_size=3, stride=1, padding=1)

    def make_layer(self, block, blocks):
        layers = []
        for i in range(0, blocks):
            layers.append(block())
        return nn.Sequential(*layers)

    def forward(self, x, y):
        out = self.layer1(x)
        out = self.layer2(out)
        out = self.layer3(out)
        return out + y |
st49595 | zhangboknight:
"Couldn't retrieve source code for container of type " + obj.__name__ + ". It won't be checked for correctness upon loading."
I have the same problem. Could you tell me how to avoid it?
Thanks. |
st49596 | Well, I appreciate your work, but it would be much better either to get a more precise answer, or to remove the warning from the code if the user can safely ignore it. |
st49597 | I probably typed that answer out from my phone, which is why it was concise.
Failing to retrieve the source code of a Python class might happen for a variety of reasons, including:
If you defined the class in the interpreter directly (like in ipython terminal / notebook, or python shell)
If you only have access to .pyc files but not the original .py files
If the class itself was auto-generated
The warning exists to let the user know that we cannot do some sanity checks upon loading the class back, which are around checking if the source code of the old definition and new definition are different. |
st49598 | Thank you for clarifying, @smth!
I’m with @geossy, preferring warnings to be resolved and to have none if possible.
Is there a workaround/hack for this situation if the class is defined in the notebook (other than
moving the class outside of the notebook or suppressing warnings)?
Thank you!
p.s. also the warning has a weird paste of the chunk of its source after the warning is rendered in jupyter nb (last line below):
UserWarning: Couldn't retrieve source code for container of type MyClass. It won't be checked for correctness upon loading.
"type " + obj.__name__ + ". It won't be checked |
st49599 | You can suppress the warnings by capturing them using a filter: https://docs.python.org/3/library/warnings.html#overriding-the-default-filter 431
There isn’t another mechanism I can think of. |
st49600 | thank you, @smth - but I meant solving the problem, not hiding it. Apologies if I wasn't clear.
The idea is that we want to edit a module in the notebook and have PyTorch find its source, which normally is not possible since the notebook is a JSON file.
Here is a hack I came up with for notebooks:
#cell0
# some code
[...]
#celln:
from torch import nn
class FeatureLoss(nn.Module): [... your code here ...]
#celln+1:
x=_i # this copies the contents of the prev cell verbatim
file = "./feature_loss.py"
with open(file, 'w') as f: f.write(x)
from feature_loss import FeatureLoss
and voila, you can have the cake (edit in the notebook) and eat it too (having the source pytorch can find).
Except you can’t rely on any of the local variables of the notebook unless you pass those as arguments to that auto-saved class. |
st49601 | We are open to disabling the warning in notebook settings, if you think that's useful. |
st49602 | I think warnings should be there regardless of the environment and they can be turned off by the user if need be, as you suggested originally, so all is good.
I was looking for a way to make pytorch find the source code that resides in the nb, which I succeeded in a hackish way (my previous comment).
Thank you, @smth |
st49603 | So just to be clear - this is a problem specifically with using a Jupyter notebook file as opposed to a plain old .py script? |
st49604 | Please, see Soumith’s answer above:
Got warning: Couldn't retrieve source code for container
I probably typed that answer out from my phone, which is why it was concise.
Failing to retrieve the source code of a Python class might happen for a variety of reasons, including:
If you defined the class in the interpreter directly (like in ipython terminal / notebook, or python shell)
If you only have access to .pyc files but not the original .py files
If the class itself was auto-generated
The warning exists to let the user know that we cannot do some sanity checks upon loading the class back, which …
i.e. Jupyter env is just one of such possible situations. |
st49605 | smth:
You can suppress the warnings by capturing them using a filter: https://docs.python.org/3/library/warnings.html#overriding-the-default-filter
There isn’t another mechanism I can think of.
For people looking for the code, here it is:
import warnings
warnings.simplefilter("ignore") |
st49606 | I use PyTorch to train YOLOv5. When I run three scripts, each script has a DataLoader with num_workers greater than 0, but I find that all of them run on CPU core 1, even though I have 48 CPU cores. Does anyone know why?
[image: htop screenshot, 2020-10-13 17:34:22, 1650×467, 343 KB] |
st49607 | Hi,
Does CPU here refer to a physical CPU or a core? Because I guess you have a single CPU, no?
In any case, pytorch is not doing anything to pin itself to a given CPU or core. So if that happens, that is from your config. |
st49608 | The funny thing is that I define two dataloaders in the script, for train and test, with the same config; the test dataloader can run on two cores, but the train dataloader can't. I can't understand it. |
st49609 | I have 2 CPUs with 24 cores per CPU, and I set num_workers=N > 1. There are N processes for the train loader and N processes for the test loader, but all the train loader workers only run on CPU core 1, while the test loader workers run on N cores at random. |
st49611 | Note that the dataloader creates processes, not threads, for its workers.
You might want to check that you don’t have core restrictions when you create the train loader. |
st49612 | I don't even know how to restrict the number of cores.
I can show the code.
dataloader, dataset = create_dataloader(train_path, train_label_path, imgsz, batch_size, gs, opt,
                                        hyp=hyp, augment=True, cache=opt.cache_images, rect=opt.rect, rank=rank,
                                        world_size=opt.world_size, workers=opt.workers, isTrain=True)
testloader = create_dataloader(test_path, test_label_path, imgsz_test, total_batch_size, gs, opt,
                               hyp=hyp, augment=False, cache=opt.cache_images, rect=True, rank=-1,
                               world_size=opt.world_size, workers=opt.workers, isTrain=False)[0]

def create_dataloader(path, label_path, imgsz, batch_size, stride, opt, hyp=None, augment=False, cache=False, pad=0.0, rect=False,
                      rank=-1, world_size=1, workers=8, isTrain=False):
    # Make sure only the first process in DDP processes the dataset first, so the others can use the cache.
    with torch_distributed_zero_first(rank):
        dataset = LoadImagesAndLabels(path, label_path, imgsz, batch_size,
                                      augment=augment,  # augment images
                                      hyp=hyp,  # augmentation hyperparameters
                                      rect=rect,  # rectangular training
                                      cache_images=cache,
                                      single_cls=opt.single_cls,
                                      stride=int(stride),
                                      pad=pad,
                                      rank=rank,
                                      data_debug=False,
                                      isTrain=isTrain)
    batch_size = min(batch_size, len(dataset))
    nw = min([os.cpu_count() // world_size, batch_size if batch_size > 1 else 0, workers])  # number of workers
    train_sampler = torch.utils.data.distributed.DistributedSampler(dataset) if rank != -1 else None
    dataloader = InfiniteDataLoader(dataset,
                                    batch_size=batch_size,
                                    num_workers=nw,
                                    sampler=train_sampler,
                                    pin_memory=True,
                                    collate_fn=LoadImagesAndLabels.collate_fn)
    return dataloader, dataset

class InfiniteDataLoader(torch.utils.data.dataloader.DataLoader):
    '''
    Dataloader that reuses workers.
    Uses same syntax as vanilla DataLoader.
    '''
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        object.__setattr__(self, 'batch_sampler', _RepeatSampler(self.batch_sampler))
        self.iterator = super().__iter__()

    def __len__(self):
        return len(self.batch_sampler.sampler)

    def __iter__(self):
        for i in range(len(self)):
            yield next(self.iterator)

class _RepeatSampler(object):
    '''
    Sampler that repeats forever.
    Args:
        sampler (Sampler)
    '''
    def __init__(self, sampler):
        self.sampler = sampler

    def __iter__(self):
        while True:
            yield from iter(self.sampler) |
st49613 | You can see that the train loader and the test loader are created by the same function, without any restriction on cores. |
st49614 | I am not sure how this will interact with distributed, I admit…
In any case, all the threads that you show in the htop screenshot above are idle, so it doesn't matter that they are all on the same cpu/core at the moment. The OS will move them around if they are actually active at the same time. |
st49615 | Yes. Just like in htop, they will run on different cores at different times. So it is strange that the dataloader only runs on core 1 when there are 48 cores available. |
st49616 | If they are never used at the same time, it is actually more efficient to have them all on the same core, as the core is "warm"; moving them to a different one would be bad.
But again, this is OS scheduler business. |
st49617 | Hi @klyjm!
I may have missed this above, but what’s your RAM freq and how does it compare to your CPU? Could you be suffering from insufficient RAM speed or a mismatch between the freq of your RAM and CPU? I’d poke around in the BIOS if you haven’t already. Also, keep in mind that overclocking RAM has seemed to have unpredictable effects for me in the past. Sometimes it seems to help, sometimes to hurt.
Good luck!
–SEH |
st49618 | The RAM is not overclocked, and other code runs well. I am trying to rebuild the development environment. |
st49619 | Gotcha. Only thing I could think of… Sorry! Issues like this can be soooooo frustrating. I’ll let u know if I can think of anything else. Hope you figure it out soon! |
st49620 | Hi,
I don’t think you should focus on that number so much.
Many processes/threads are created when you run PyTorch and most of them will be fairly idle. And the current device/core where they sit doesn't mean anything unless they are actually doing something.
In particular, if you check during testing, then I would expect the processes from the testing dataloader to do stuff at the same time and so to be scheduled on different cores.
But the training workers don't do anything, so they will just stay idle wherever they are.
And I expect it to be the opposite during training, where the training processes are scheduled on different cores and the test ones idle somewhere. |
st49621 | I updated Python to 3.8, and there is no bug anymore. I think this may be a bug in the older Python version, i.e. 3.6. |
st49622 | I am using the following code for implementing beam search for text generation.
def beam_search_decoder(data, k):
    sequences = [[list(), 0.0]]
    # walk over each step in sequence
    for row in data:
        all_candidates = list()
        # expand each current candidate
        for i in range(len(sequences)):
            seq, score = sequences[i]
            for j in range(len(row)):
                candidate = [seq + [j], score - torch.log(row[j])]
                all_candidates.append(candidate)
        # order all candidates by score
        ordered = sorted(all_candidates, key=lambda tup: tup[1])
        # select k best
        sequences = ordered[:k]
    return sequences
This performs beam search decoding for batch_size 1. I am working with a vocab_size of ~9900, so this function itself is extremely slow. Is there a faster way to do batch-wise beam search? |
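A minimal sketch of a vectorized alternative (an assumption on my part, not from the thread), still for a single example with data of shape [seq_len, vocab_size]; it keeps the k running scores in a tensor and uses torch.topk instead of building and sorting a Python list of k * vocab_size candidates at every step:
import torch

def beam_search_decoder_topk(data, k):
    # data: [seq_len, vocab_size] tensor of per-step probabilities (single example)
    seq_len, vocab_size = data.shape
    log_probs = torch.log(data)

    # initialise with the top-k tokens of the first step (lower cumulative -log p = better)
    scores, indices = (-log_probs[0]).topk(k, largest=False)
    sequences = indices.unsqueeze(1)                       # [k, 1]

    for t in range(1, seq_len):
        # every current beam can be extended by every token: [k, vocab_size]
        cand_scores = scores.unsqueeze(1) - log_probs[t].unsqueeze(0)
        scores, flat_idx = cand_scores.view(-1).topk(k, largest=False)
        beam_idx = flat_idx // vocab_size                  # which beam each winner came from
        token_idx = flat_idx % vocab_size                  # which token extends it
        sequences = torch.cat([sequences[beam_idx], token_idx.unsqueeze(1)], dim=1)

    return sequences, scores

# usage sketch
data = torch.rand(5, 9900).softmax(dim=-1)
seqs, scores = beam_search_decoder_topk(data, k=3)
print(seqs.shape, scores)   # torch.Size([3, 5]) and 3 cumulative negative-log-prob scores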
st49623 | The number of steps is a parameter in Adam's update calculations.
Does it mean that after enough steps the updates would be around zero, or am I wrong? |
st49624 | Hi, can you please elaborate your query a little bit? Are you referring to torch.optim.Adam? |
st49625 | Yes, this one:
pytorch.org
torch.optim.adam — PyTorch 1.6.0 documentation 1
Two values depend on the state['step'] variable.
bias_correction1 changes the effective lr at the start of training; after that it is essentially always 1.
I'm not sure about bias_correction2. |
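A minimal sketch of how the two corrections evolve with the step count, using the formulas from the linked source and the default betas:
beta1, beta2 = 0.9, 0.999   # Adam defaults

for step in (1, 10, 100, 1000, 10000):
    bias_correction1 = 1 - beta1 ** step   # rescales the first-moment estimate
    bias_correction2 = 1 - beta2 ** step   # rescales the second-moment estimate
    print(step, round(bias_correction1, 6), round(bias_correction2, 6))

# both corrections start below 1 and approach 1, so they only matter early in training;
# they do not drive the update itself to zero.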
st49626 | Hi,
I'm not sure if I have correctly implemented applying augmentations to only one class during training (binary classes). The code runs; however, I'm just unsure whether the augmentation actually happens in the __getitem__ function. Would anyone be able to tell me if this could cause any potential issues (for example memory or speed)?
if self.phase == 'train':
    self.transforms = T.Compose([T.RandomResizedCrop(224),
                                 T.ToTensor(),
                                 T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
    self.transforms_augs = T.Compose([T.RandomResizedCrop(224),
                                      T.RandomHorizontalFlip(),
                                      T.RandomVerticalFlip(),
                                      T.ToTensor(),
                                      T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
else:
    self.transforms = T.Compose([T.RandomResizedCrop(224), T.ToTensor(),
                                 T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

def __getitem__(self, index):
    if self.phase == 'train':
        path = self.imgs[index]
        label = int(path.split('/')[11])
        data = Image.open(path).convert('RGB')
        if label == 1:
            data = self.transforms_augs(data)
        else:
            data = self.transforms(data)
Cheers,
T |
st49627 | Solved by Caruso in post #2
Hi,
for me your augmentation looks good. You could visualize your input data by denormalizing it first, converting it from [B, C, H, W] to [B, H, W, C] using torch.permute(), maybe min-max it to be between 0-255, then convert it to a numpy array and e.g. show it with matplotlib imshow().
Greetings… |
st49628 | Hi,
for me your augmentation looks good. You could visualize your input data by denormalizing it first, converting it from [B, C, H, W] to [B, H, W, C] using torch.permute(), maybe min-max it to be between 0-255, then convert it to a numpy array and e.g. show it with matplotlib imshow().
Greetings. |
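A minimal sketch of that visualization, assuming a batch normalized with the ImageNet mean/std used in the code above:
import torch
import numpy as np
import matplotlib.pyplot as plt

mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

def show_batch(batch):
    # batch: normalized tensor of shape [B, C, H, W]
    imgs = batch * std + mean                       # undo Normalize
    imgs = imgs.clamp(0, 1).permute(0, 2, 3, 1)     # [B, H, W, C]
    imgs = (imgs.numpy() * 255).astype(np.uint8)
    fig, axes = plt.subplots(1, len(imgs), figsize=(3 * len(imgs), 3))
    for ax, img in zip(np.atleast_1d(axes), imgs):
        ax.imshow(img)
        ax.axis("off")
    plt.show()

# usage sketch with a random "batch" standing in for real augmented images
show_batch(torch.randn(4, 3, 224, 224) * 0.2)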
st49629 | Hello all,
I was curious: for hyperparameter search, which sometimes leads to issues like CUDA out of memory due to batch_size, image size, or related parameters, is this code structure an okay practice? If there is a better way of handling this I would love to hear comments!
for run_count, hyperparams in enumerate(HYPERPARAMS, start=1):
    # define model, dataloaders, etc.
    try:
        # Call Training Utils
        ...
    except RuntimeError as e:
        # Handle Possible Errors
        with open("run_parameters.txt", "a+") as text_file:
            text_file.write("*** Runtime Error: {} \n\n".format(e))
    finally:
        # Reset For Next Run
        del model
        del optimizer_ft
        del exp_lr_scheduler
        del dataloaders
        gc.collect()
        torch.cuda.empty_cache() |
st49630 | Hi,
If you're designing this from scratch, I would advise creating new processes to run the inner jobs, to ensure there are no issues with cleanup of leftover state.
You can check this issue for issues related to GPU OOM failures: https://github.com/pytorch/pytorch/issues/18853 6.
Also be aware that if an assert happens in a cuda kernel (like index out of range), the cuda driver cannot recover from it and you will have to restart the process. So your code example won’t help in that case as all the other trainings will fail. |
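A minimal sketch of the one-process-per-run idea, with a hypothetical train_one_run function standing in for the actual training code; when the worker process exits, its GPU memory and CUDA context are released even if a kernel assert occurred:
import torch.multiprocessing as mp

def train_one_run(hyperparams, queue):
    try:
        # hypothetical: build the model/dataloaders from hyperparams and train here
        queue.put({"status": "ok", "hyperparams": hyperparams})
    except RuntimeError as e:
        queue.put({"status": "error", "hyperparams": hyperparams, "error": str(e)})

if __name__ == "__main__":
    HYPERPARAMS = [{"batch_size": 32}, {"batch_size": 256}]   # hypothetical search grid
    ctx = mp.get_context("spawn")       # "spawn" is the safe start method when CUDA is involved
    for hyperparams in HYPERPARAMS:
        queue = ctx.Queue()
        p = ctx.Process(target=train_one_run, args=(hyperparams, queue))
        p.start()
        p.join()                        # process exit releases all of its GPU memory
        result = queue.get() if not queue.empty() else {"status": "crashed", "hyperparams": hyperparams}
        print(result)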
st49631 | I am new to PyTorch; kindly help me understand this error.
class LSTMClassifier(nn.Module):
    def __init__(self, input_dim, seq_size, hidden_dim, label_size, batch_size, bidirectional, num_layers):
        super(LSTMClassifier, self).__init__()
        self.input_dim = input_dim
        self.seq_size = seq_size
        self.hidden_dim = hidden_dim
        self.batch_size = batch_size
        self.num_layers = num_layers
        self.bidirectional = bidirectional
        self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers=num_layers, bidirectional=bidirectional, batch_first=True)
        self.hidden = self.init_hidden()
        self.fc = nn.Linear(hidden_dim*seq_size, 128)
        if bidirectional:
            self.fc = nn.Linear(2*hidden_dim*seq_size, 128)
        self.hidden2label = nn.Linear(128, label_size)

    def init_hidden(self):
        first_size = 1
        if self.bidirectional:
            first_size = 2
        h0 = Variable(torch.zeros(first_size*self.num_layers, self.batch_size, self.hidden_dim)).float()
        c0 = Variable(torch.zeros(first_size*self.num_layers, self.batch_size, self.hidden_dim)).float()
        return (h0, c0)

    def forward(self, x):
        lstm_out, self.hidden = self.lstm(x, self.hidden)
        lstm_out = torch.squeeze(lstm_out)  # original
        y = self.fc(lstm_out)
        y = F.relu(y)
        y = self.hidden2label(y)
        return y
…
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in linear(input, weight, bias)
   1674         ret = torch.addmm(bias, input, weight.t())
   1675     else:
-> 1676         output = input.matmul(weight.t())
   1677         if bias is not None:
   1678             output += bias
RuntimeError: size mismatch, m1: [320 x 2], m2: [20 x 128] at /pytorch/aten/src/TH/generic/THTensorMath.cpp:41 |
st49632 | Solved by ptrblck in post #2
The linear layer is raising this error, as the number of input features from the activation doesn’t match the in_features from the layer.
Check the shape of lstm_out and make sure it’s [batch_size, *, in_features].
PS: I would also be careful with using torch.squeeze without specifying the dim arg… |
st49633 | The linear layer is raising this error, as the number of input features from the activation doesn’t match the in_features from the layer.
Check the shape of lstm_out and make sure it’s [batch_size, *, in_features].
PS: I would also be careful with using torch.squeeze without specifying the dim argument, as it could also remove the batch dimension, if you are working with a single sample, which is usually not desired. |
st49634 | I changed the line from
# self.fc = nn.Linear(hidden_dim*seq_size, 128)  # original
to
self.fc = nn.Linear(self.hidden_dim, 128, bias=True)
The error is resolved… but the result is something unexpected… |
st49635 | I am trying to use torch.conv2d or torch.nn.functional.conv2d with groups specified, but it reports a runtime error. The code is something like this:
a = torch.rand(4, 3, 8, 8)
b = torch.ones(3, 3, 3)
out = torch.conv2d(a, b, groups=3)
The error is something like this:
RuntimeError: expected stride to be a single integer value or a list of 1 values to match the convolution dimensions, but got stride=[1, 1]
Changing torch.conv2d to torch.nn.functional.conv2d gives the same error.
The following code works:
a = torch.rand(4, 3, 8, 8)
b = torch.ones(3, 3, 3, 3)
out = torch.conv2d(a, b)
and the following also works:
a = torch.rand(4, 3, 8, 8)
b = torch.ones(3, 3, 3, 3)
out = torch.conv2d(a, b, groups=1) |
st49636 | Solved by ptrblck in post #4
3 groups with a filter using in_channels=3 would need 9 channels.
The grouped conv section or the docs give you more information on the usage of the groups argument.
If you want to process each input channel separately, use:
input = torch.rand(4, 3, 8, 8)
weight = torch.ones(3, 1, 3, 3)
out = tor… |
st49637 | The weight tensor is expected to have 4 dimensions as [out_channels, in_channels, height, width], which is the case for the second approach. |
st49638 | Thanks @ptrblck . The following code
a = torch.rand(4, 3, 8, 8)
b = torch.ones(3, 3, 3, 3)
out = torch.conv2d(a, b, groups=3)
gives this bug:
RuntimeError: Given groups=3, weight of size 3 3 3 3, expected input[4, 3, 8, 8] to have 9 channels, but got 3 channels instead |
st49639 | 3 groups with a filter using in_channels=3 would need 9 channels.
The grouped conv section 49 or the docs give you more information on the usage of the groups argument.
If you want to process each input channel separately, use:
input = torch.rand(4, 3, 8, 8)
weight = torch.ones(3, 1, 3, 3)
out = torch.conv2d(input, weight, groups=3) |
st49640 | In Torch, we use cutorch.getMemoryUsage(i) to obtain the memory usage of the i-th GPU.
Is there a similar function in Pytorch? |
st49641 | Do you need that inside your script? If so, I don't know how. Otherwise, you can run nvidia-smi in the terminal to check it. |
st49642 | Yes, I am trying to use it in a script.
The goal is to automatically find a GPU with enough memory left.
import torch.cuda as cutorch
for i in range(cutorch.device_count()):
    if cutorch.getMemoryUsage(i) > MEM:
        opts.gpuID = i
        break |
st49643 | In case anyone else stumbles across this thread, I wrote a script to query nvidia-smi that might be helpful.
import subprocess

def get_gpu_memory_map():
    """Get the current gpu usage.

    Returns
    -------
    usage: dict
        Keys are device ids as integers.
        Values are memory usage as integers in MB.
    """
    result = subprocess.check_output(
        [
            'nvidia-smi', '--query-gpu=memory.used',
            '--format=csv,nounits,noheader'
        ], encoding='utf-8')
    # Convert lines into a dictionary
    gpu_memory = [int(x) for x in result.strip().split('\n')]
    gpu_memory_map = dict(zip(range(len(gpu_memory)), gpu_memory))
    return gpu_memory_map |
st49644 | GPUtil is also a library that achieves the same goal.
But I'm wondering if PyTorch has some functions for this purpose.
My goal is to measure the exact memory usage of my model, which varies as the input size varies, so I'm wondering whether PyTorch has such a function so that I can get a more accurate GPU memory usage estimate. |
st49645 | Hi,
You can find these functions in the docs here 2.5k and below. You can get the current and max memory allocated on the GPU, as well as the current and max memory actually used to store tensors.
st49646 | Hi, thank you!
I found these functions later, but found that they did not match the nvidia-smi output. And what's the difference between max_memory_allocated/cached vs. memory_allocated/cached?
I raised these questions in Memory_cached and memory_allocated does not nvidia-smi result 463 |
st49647 | Just FYI, I was also looking for the total GPU memory, which can be found with:
torch.cuda.get_device_properties(device).total_memory |
st49648 | The different answers explain what the use case of the code snippet is, e.g. printing the information of nvidia-smi inside the script, checking the current and max. allocated memory, or printing the total memory of a specific device, so you can choose whatever fits your use case of "memory usage". |
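A minimal sketch pulling these built-in counters together (they report memory managed by PyTorch's caching allocator, which is one reason they differ from nvidia-smi):
import torch

if torch.cuda.is_available():
    device = torch.device("cuda:0")
    x = torch.randn(1024, 1024, device=device)          # allocate something to measure

    total = torch.cuda.get_device_properties(device).total_memory
    allocated = torch.cuda.memory_allocated(device)      # memory currently occupied by tensors
    peak = torch.cuda.max_memory_allocated(device)       # high-water mark since start/reset

    print("total     : {:.0f} MB".format(total / 1024**2))
    print("allocated : {:.1f} MB".format(allocated / 1024**2))
    print("peak      : {:.1f} MB".format(peak / 1024**2))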
st49649 | When I tried to test a trained model with torch, I got an error. The error is as follows:
Traceback (most recent call last):
  File "run.py", line 152, in <module>
    main()
  File "run.py", line 86, in main
    saver.write_test_img(ep, i, model, index_a = index_a, index_b = index_b)
  File "/home/zhangwei/LADN/src/saver.py", line 123, in write_test_img
    assembled_images = model.assemble_outputs()
  File "/home/zhangwei/LADN/src/model.py", line 933, in assemble_outputs
    row1 = torch.cat((images_a[0:1, ::], images_b1[0:1, ::], images_b2[0:1, ::], images_a4[0:1, ::], images_a3[0:1, ::]),3)
RuntimeError: All input tensors must be on the same device. Received cuda:1 and cuda:0
And my command is
CUDA_DEVICE_ORDER=PCI_BUS_ID CUDA_VISIBLE_DEVICES=0,1 python3 run.py --backup_gpu 1 --dataroot ../datasets/makeup --name makeup_test --resize_size 576 --crop_size 512 --local_style_dis --n_local 12 --phase test --test_forward --test_random --result_dir ../results --test_size 300 --resume ../models/light.pth --no_extreme
I'd really appreciate it if someone could help me with it. |
st49650 | Hi,
As mentioned in the error message, the Tensors you give to the cat operations are on different devices which is not allowed. You need to move all the Tensors to the same device before calling cat. |
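A minimal sketch of the fix, assuming two GPUs are visible; the tensors here are hypothetical stand-ins for the images_* tensors in the traceback:
import torch

device = torch.device("cuda:0")

# hypothetical tensors that ended up on different GPUs
a = torch.randn(1, 3, 8, 8, device="cuda:0")
b = torch.randn(1, 3, 8, 8, device="cuda:1")

# move everything to one device before concatenating
row = torch.cat([t.to(device) for t in (a, b)], dim=3)
print(row.shape, row.device)   # torch.Size([1, 3, 8, 16]) cuda:0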
st49651 | Thanks for your suggestion. But I didn't call the cat operation directly, did I? I am still a little confused. |
st49652 | From the error message you posted, it seems the cat operation is used here:
File “/home/zhangwei/LADN/src/model.py”, line 933, in assemble_outputs |
st49653 | Current situation
I have a multi-label classification problem for which being overconfident is a problem in the end application. The data is labeled with 1 or more from [A, B, C, D, E] , but in reality e.g. label B should not be treated as 1 or 0, but e.g. 0.7 (unfortunately unattainable).
Normal training
If I would use BCEWithLogitsLoss as normal on data like this:
import torch
loss_func = torch.nn.BCEWithLogitsLoss()
pred = torch.tensor([[0.1, 0.1, 0.7, 0.2, 0.7],
                     [0.8, 0.5, 0.1, 0.1, 0.2]])
target = torch.tensor([[0, 0, 1, 0, 1],
                       [1, 1, 0, 0, 0]])
loss_func(pred, target.type(torch.FloatTensor))
# tensor(0.6225)
I can successfully train a model. The problem is that the confidence values are like [0.0, 0.0, 0.99, 0.0, 0.98].
Goal
I want to say to the the loss function: “If confidence values of correct labels are above >=0.6, and wrong labels below <0.6, don’t calculate a loss”.
Attempt
Set the rows which are correct to binary format, so the loss is 0 for this row.
pred_binary = torch.where(pred >= 0.6, torch.tensor(1), torch.tensor(0))
compare = torch.where(pred_binary == target, torch.tensor(1), torch.tensor(0))
# tensor([[1, 1, 1, 1, 1],
# [1, 0, 1, 1, 1]])
compare_row = compare.type(torch.FloatTensor).mean(axis=1)
# tensor([1.0000, 0.8000])
select_row = torch.where(compare_row >= 1, torch.tensor([1]), torch.tensor([0]))
# tensor([1, 0])
select_row = select_row.type(torch.bool)
# tensor([ True, False])
pred[select_row, :] = pred_binary[select_row].type(torch.FloatTensor)
# tensor([[0.0000, 0.0000, 1.0000, 0.0000, 1.0000],
# [0.8000, 0.5000, 0.1000, 0.1000, 0.2000]])
loss_func(pred, target.type(torch.FloatTensor))
# tensor(0.5838)
The loss is indeed lower (0.6225 versus 0.5838).
Questions
Is there a smarter way to do what I want?
Or more efficient code?
Any feedback on what I should watch out for doing this? |
st49654 | Solved by KFrank in post #2
Hi Numes!
The short answer is to use a “less-certain” target:
target = torch.tensor([[0.3, 0.3, 0.7, 0.3, 0.7],
[0.3, 0.7, 0.3, 0.3, 0.3]])
As I understand it, the “ground-truth” labels you are given are all
exactly 0.0 or 1.0. But (because you understand the problem)
y… |
st49655 | Hi Numes!
NumesSanguis:
The data is labeled with 1 or more from [A, B, C, D, E] , but in reality e.g. label B should not be treated as 1 or 0, but e.g. 0.7 (unfortunately unattainable).
The short answer is to use a “less-certain” target:
target = torch.tensor([[0.3, 0.3, 0.7, 0.3, 0.7],
[0.3, 0.7, 0.3, 0.3, 0.3]])
As I understand it, the “ground-truth” labels you are given are all
exactly 0.0 or 1.0. But (because you understand the problem)
you realize that the ground-truth training data really shouldn’t be
that certain, and therefore you don’t want to train your model to
be that certain.
As a general rule, you should train your model to do what you want
it to do, rather than try to fix it up somehow after the fact.
So tell your model that you want less-certain, lower-confidence
predictions by training it with less-certain, lower-confidence targets,
as above.
Note, BCEWithLogitsLoss does take probabilities between 0.0 and
1.0 for its target values – they are not restricted to be exactly 0.0
or 1.0.
Best.
K. Frank |
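A minimal sketch of the suggestion, showing that BCEWithLogitsLoss accepts soft targets directly (values between 0.0 and 1.0):
import torch

loss_func = torch.nn.BCEWithLogitsLoss()
logits = torch.tensor([[0.1, 0.1, 0.7, 0.2, 0.7],
                       [0.8, 0.5, 0.1, 0.1, 0.2]])

hard_target = torch.tensor([[0., 0., 1., 0., 1.],
                            [1., 1., 0., 0., 0.]])
soft_target = torch.tensor([[0.3, 0.3, 0.7, 0.3, 0.7],
                            [0.3, 0.7, 0.3, 0.3, 0.3]])

print(loss_func(logits, hard_target))   # trains toward 0/1 confidences
print(loss_func(logits, soft_target))   # trains toward ~0.3/0.7 confidences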
st49656 | Hi K. Frank,
Sorry for my late reply!
I actually expected the model confidence to hover around the uncertainty value, but with 0.1 and 0.9 the confidence values were covering the full range between [0-1].
Thank you! |
st49657 | I have a dictionary, and the values corresponding to some of its keys are used in the forward function of my class. How can I make this dictionary trainable? |
st49658 | What should be trainable in this dict? Could you explain your idea and use case a bit more? |
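If the goal is a set of trainable per-key tensors (an assumption, since the use case was not clarified in the thread), one possible route is nn.ParameterDict, sketched here:
import torch
import torch.nn as nn

class DictModel(nn.Module):
    def __init__(self):
        super().__init__()
        # each value is a trainable tensor, registered under its key
        self.table = nn.ParameterDict({
            "a": nn.Parameter(torch.randn(4)),
            "b": nn.Parameter(torch.randn(4)),
        })

    def forward(self, key, x):
        return x * self.table[key]

model = DictModel()
print([name for name, _ in model.named_parameters()])   # ['table.a', 'table.b']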
st49659 | I’m trying to figure out why PT binary_cross_entropy gives different results than TF’s binary_cross_entropy.
colab.research.google.com
Google Colaboratory 3
Anyone? |
st49660 | Solved by fadetoblack in post #2
This is not a pytorch related issue, however, if you use from_logits=True in tensorflow’s binary_crossentropy function, you will get the same value as pytorch’s function.
If you check the pytorch docs for binary cross entropy, it can be seen in the example that sigmoid is applied to the outputs bef… |
st49661 | This is not a pytorch related issue, however, if you use from_logits=True in tensorflow’s binary_crossentropy function, you will get the same value as pytorch’s function.
If you check the pytorch docs for binary cross entropy 3, it can be seen in the example that sigmoid is applied to the outputs before passing into the cross entropy loss function.
>>> loss = F.binary_cross_entropy(F.sigmoid(input), target) |
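A minimal sketch of the PyTorch side of that comparison, showing that binary_cross_entropy on sigmoid outputs matches binary_cross_entropy_with_logits on the raw logits:
import torch
import torch.nn.functional as F

logits = torch.tensor([0.2, -1.3, 2.4])
target = torch.tensor([1.0, 0.0, 1.0])

loss_from_probs = F.binary_cross_entropy(torch.sigmoid(logits), target)
loss_from_logits = F.binary_cross_entropy_with_logits(logits, target)

print(loss_from_probs, loss_from_logits)   # the two values match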
st49662 | The prediction I provided is already calculated with nn.Sigmoid.
I’ll need to read about from_logits. Thank you. |
st49663 | Hello all, I have a problem as follows:
Dataset 1 has 1000 images
Dataset 2 has 10 images.
I want to train a model in which the images in Dataset 2 must always be used during training. For example, if I train with a batch size of 64, then each mini-batch should load all 10 images from Dataset 2 and the remaining 54 from Dataset 1. I am using a custom dataset with a distributed sampler. Is there any solution for that? |
st49664 | fadetoblack:
ed using a separate dataloader with batch size 10 for Dataset 2?
Good idea, but how do I get them to work together? |
st49665 | Create dataloader1 for Dataset 1 with batch size 54.
Create dataloader2 for Dataset 2 with batch size 10
Then do something like:
for x1,y1 in dataloader1:
x2, y2 = next(iter(dataloader2))
# then concatenate x = [x1,x2] and y = [y1,y2]
# use x,y for fwd pass and backwd pass
Relevant discussion 3 |
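A minimal runnable sketch of that pattern with dummy tensor datasets (the sizes and batch sizes follow the thread; since Dataset 2 has exactly 10 images and batch size 10, each draw from dataloader2 returns all of them):
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset1 = TensorDataset(torch.randn(1000, 3, 32, 32), torch.randint(0, 2, (1000,)))
dataset2 = TensorDataset(torch.randn(10, 3, 32, 32), torch.randint(0, 2, (10,)))

dataloader1 = DataLoader(dataset1, batch_size=54, shuffle=True, drop_last=True)
dataloader2 = DataLoader(dataset2, batch_size=10)

for x1, y1 in dataloader1:
    x2, y2 = next(iter(dataloader2))      # always the full 10-image Dataset 2
    x = torch.cat([x1, x2], dim=0)        # 54 + 10 = 64 samples per step
    y = torch.cat([y1, y2], dim=0)
    # forward/backward pass with x, y here
    print(x.shape, y.shape)
    break                                 # break just to keep the demo short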
st49666 | Hi, I have a 3D tensor of shape (batch_size, seq_len, dim), and some positions along the 2nd dimension are zero-padded. I have a mask generated from the lengths, and I want to select from the tensor using the mask. The behavior is like masked_select, but it should return a 2D tensor.
a = torch.rand((3, 3, 3))
a[1, 2] = 0
a[2, 2] = 0
a[2, 1] = 0
print(a)
tensor([[[0.7910, 0.4829, 0.7381],
[0.9005, 0.2266, 0.5940],
[0.8811, 0.8379, 0.9670]],
[[0.3192, 0.9537, 0.1001],
[0.5695, 0.0185, 0.2561],
[0.0000, 0.0000, 0.0000]],
[[0.8885, 0.0043, 0.3867],
[0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000]]])
b = torch.BoolTensor([[1, 1, 1],
[1, 1, 0],
[1, 0, 0]])
Expected output
tensor([[0.7910, 0.4829, 0.7381],
[0.9005, 0.2266, 0.5940],
[0.3192, 0.9537, 0.1001],
[0.5695, 0.0185, 0.2561],
[0.8885, 0.0043, 0.3867]])
Thanks! |
st49667 | Solved by Caruso in post #2
Hi,
you can use this mask for slicing, so c = a[b] should return your expected output |
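A minimal self-contained check of that indexing pattern; boolean indexing over the first two dimensions keeps the last dimension and flattens the selected rows:
import torch

a = torch.arange(2 * 3 * 4, dtype=torch.float32).view(2, 3, 4)   # [batch, seq, dim]
lengths = torch.tensor([3, 1])

# mask[i, t] is True for the real (non-padded) timesteps of sequence i
mask = torch.arange(a.size(1)).unsqueeze(0) < lengths.unsqueeze(1)

c = a[mask]            # shape [num_selected, dim]
print(mask)
print(c.shape)         # torch.Size([4, 4]) -- 3 rows from sample 0, 1 row from sample 1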