st49768 | @ptrblck Thank you for the reply!
Yeah, unfortunately it is not possible to use the test data for training as there is no target value.
So in this case, do you recommend not using data augmentation to give extra noise onto my training dataset and instead should aim to increase the validation dataset accuracy? |
st49769 | edshkim98:
and instead should aim to increase the validation dataset accuracy?
That’s hard to tell, since your validation dataset is not a good proxy of the test data.
I.e. even if your training and validation performance is great, your model might just fail on the test dataset, since the distribution could be too different.
Note that “looking” into the test set is not a good idea (you should not observe the test predictions during training), as you would leak the test information into the training.
One other approach, which comes to my mind, would be to use a small portion of the test set to train the model in an unsupervised fashion, e.g. to reconstruct the input.
For this you could use a special “branch” for the output layers to make sure the output shape fits the input shape and remove it later during the classification training. This could maybe pretrain some early layers, but you would have to experiment with it. |
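To make that pretraining idea a bit more concrete, here is a minimal sketch of what such a reconstruction "branch" could look like (the encoder/decoder layers and shapes are made up for illustration; this is not a drop-in recipe):
import torch.nn as nn

class PretrainableNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # early layers shared between pretraining and classification
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # reconstruction "branch": maps features back to the input shape
        self.decoder = nn.Sequential(
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )
        # classification head used later; the decoder is simply ignored then
        self.classifier = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes))

    def forward(self, x, reconstruct=False):
        feats = self.encoder(x)
        return self.decoder(feats) if reconstruct else self.classifier(feats)

# unsupervised pretraining on (a portion of) unlabeled data: minimize MSE(model(x, reconstruct=True), x)
# supervised training afterwards: use model(x) and a classification loss as usual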
st49770 | I am training a Roberta masked language model for which I read my input as batches of sentences from a huge file. For batch sizes of 4 to 16 I run out of GPU memory after a few batches. Batch sizes over 16 run out of memory right away. Only batch_size of 2 works. I made sure not to accumulate anything. Is the error a result of the size of the model, or am I managing my memory badly?
def train():
    model.to(device)
    model.train()
    total_loss = 0
    total_preds = 0
    for step, batch in enumerate(train_dataloader):
        if step % 512 == 0 and step != 0:
            print(f'Batch {step}---loss: {maskedlmloss.item()}')
            writer.add_scalar('loss/training_loss', maskedlmloss.item(), step)
            writer.add_scalar('optimizer/lr', get_lr(optimizer), step)
            writer.flush()
        encoded_inputs = tokenizer(batch, padding=True, truncation=True, return_tensors='pt')
        dic = collator(encoded_inputs['input_ids'].unbind())
        input_ids, labels = dic['input_ids'], dic['labels']
        model.zero_grad()
        maskedlmloss, prediction_scores = model(input_ids.to(device), labels=labels.to(device))
        total_loss += float(maskedlmloss.item())
        total_preds += 1
        maskedlmloss.backward()
        optimizer.step()
        scheduler.step()
    return total_loss / total_preds
Thanks for your help in advance |
st49771 | Are you using variable input shapes? If so, could you check if the largest batch would fit into the model? |
st49772 | All my batches are of the same size and I play with this size to find the largest that fits in the model. I am not sure if that’s what you are asking but the largest batch_size on which the model can carry out its computations without running out of memory is 2.
Thank you. |
st49773 | I am trying to define a loss function to compute the loss between reconstructed edges. The following is my implementation; however, I suspect I have made some error. I am calculating the edges using convolutions and then performing MSE over them.
def edge_loss(out, target, cuda=True):
    x_filter = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]])
    y_filter = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]])
    convx = nn.Conv2d(1, 1, kernel_size=3, stride=1, padding=1, bias=False)
    convy = nn.Conv2d(1, 1, kernel_size=3, stride=1, padding=1, bias=False)
    weights_x = torch.from_numpy(x_filter).float().unsqueeze(0).unsqueeze(0)
    weights_y = torch.from_numpy(y_filter).float().unsqueeze(0).unsqueeze(0)
    if cuda:
        weights_x = weights_x.cuda()
        weights_y = weights_y.cuda()
    convx.weight = nn.Parameter(weights_x)
    convy.weight = nn.Parameter(weights_y)
    g1_x = convx(out)
    g2_x = convx(target)
    g1_y = convy(out)
    g2_y = convy(target)
    g_1 = torch.sqrt(torch.pow(g1_x, 2) + torch.pow(g1_y, 2))
    g_2 = torch.sqrt(torch.pow(g2_x, 2) + torch.pow(g2_y, 2))
    return torch.mean((g_1 - g_2).pow(2))
The loss is then calculated as follows
loss = edge_loss(out, x)
loss.backward()
I do not want to update the weights of the convolution filters since these are the edge filters needed. Is this implementation correct? |
st49774 | If you don’t want trainable features, the functional api is a much better match, so torch.nn.functional.conv2d etc.
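A minimal sketch of what the functional version could look like, reusing the same Sobel kernels as above (the constant weights are created once outside the function so they are not re-allocated on every call):
import torch
import torch.nn.functional as F

# fixed Sobel kernels, shape (out_channels=1, in_channels=1, 3, 3)
x_filter = torch.tensor([[1., 0., -1.], [2., 0., -2.], [1., 0., -1.]]).view(1, 1, 3, 3)
y_filter = torch.tensor([[1., 2., 1.], [0., 0., 0.], [-1., -2., -1.]]).view(1, 1, 3, 3)

def edge_loss(out, target):
    # no trainable parameters: plain functional convolutions with constant weights
    wx = x_filter.to(out.device)
    wy = y_filter.to(out.device)
    g1 = torch.sqrt(F.conv2d(out, wx, padding=1) ** 2 + F.conv2d(out, wy, padding=1) ** 2)
    g2 = torch.sqrt(F.conv2d(target, wx, padding=1) ** 2 + F.conv2d(target, wy, padding=1) ** 2)
    return torch.mean((g1 - g2) ** 2)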
Best regards
Thomas |
st49775 | Does it mean a single task (dataset) sampled from the meta-set or does it mean a batch of tasks (datasets)?
References:
https://stats.stackexchange.com/questions/478255/what-does-the-term-episode-mean-in-meta-learning
https://github.com/tristandeleu/pytorch-meta/issues/78
https://www.quora.com/unanswered/What-does-the-term-episode-mean-in-meta-learning
https://www.reddit.com/r/MLQuestions/comments/hve478/what_does_the_term_episode_mean_in_metalearning/? |
st49776 | An episode is considered to be a batch of tasks. For example, an episode may be 5 classes, where we have 5 images per class known as our support set. We also have some number of query images in which we classify as one of those 5 classes. This is what’s considered an episode.
So we essentially have a mini train-set (our support set) and a mini test-set (our query set). This is where the meta-learning concept comes in.
Depending on your set-up, your episode can be sampled from several datasets (also known as domains) or a single dataset. In my experience, it’s typical to sample an episode from a single domain. The idea is that it’s more realistic to have your model adapt to a single unseen domain rather than several unseen domains.
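As a rough illustration (just a sketch, not tied to any particular library), sampling one N-way episode from a single dataset could look like this, where dataset_by_class is assumed to map each class label to its list of images:
import random

def sample_episode(dataset_by_class, n_way=5, k_shot=5, k_query=15):
    # pick the classes for this episode
    classes = random.sample(list(dataset_by_class.keys()), n_way)
    support, query = [], []
    for label, c in enumerate(classes):
        images = random.sample(dataset_by_class[c], k_shot + k_query)
        support += [(img, label) for img in images[:k_shot]]
        query += [(img, label) for img in images[k_shot:]]
    return support, query  # mini train-set and mini test-set for this episode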
I hope this helps your understanding. |
st49777 | Hi Alex, Thanks for taking the time to respond.
Before I respond let me share what I understand from the meta-learning papers I’ve read (which are a bunch in some detail at this point).
In meta-learning there is a meta-set which holds the collection of data-sets to learn from (usually also called tasks). Each task/data-set in the meta-set is usually split into a train/support set and a test/query set (usually 5+15 examples, afaik that's common). Each of these tasks/data-sets is built as follows:
If it's regression, one samples a function from a family of similar functions and creates the examples from an input range where y = f_i(x) for task i or data-set D_i, with 20 examples, and then splits it.
If it’s classification it’s a N-way, K-shot task (+ K_eval shots). So one usually samples 5 classes from the fraction of classes used for meta-training and then from each class sample K+K_eval examples (usually 5+15=20 total).
Then the meta-set has a bunch of these tasks/data-sets.
From Trist’s response it seems that an episode is 1 data-set/task. Not a (meta) batch of data-sets.
I am trying to confirm if this is correct, but he's published a paper with Yoshua Bengio, so he's likely a reliable source. It's weird to me because I thought your answer was the correct one, but I'm unsure.
Quote 2:
tristandeleu commented 2 days ago:
Usually an episode means one single dataset D_i . If you have a (meta-)batch of size 16, this means you have 16 episodes/datasets in your (meta-)batch. |
st49778 | I would agree with the way that you described the meta-set.
When you say:
Not a (meta) batch of data-sets
Do you mean, the batch is not sampled from multiple data-sets? I think my answer and his are about the same. He suggests that an episode is sampled from one data-set, and I would agree with this. I just don’t know if this is a strict requirement of an episode. What’s the main contradictory thought that you’re noticing? |
st49779 | To me a data-set is a task as I understand it, i.e. we generate 20 samples from a selected distribution (from a fixed function or from a mini-classification task where we sample, say, 5 random labels).
Let's take mini-Imagenet for example with meta-batch=1. It has 64 classes for meta-train and each has a total of 600 images. A data-set in the meta-set would be a sample of 5 classes from those 64 classes with a sample of 20 actual images from the 600. During one epoch we then create ceil(64/5) iterations/meta-batches (with the last meta-batch having only 4 classes, or we skip it). That is what seems to be happening in torchmeta, at least that's correct for regression.
I created a task of 100 samples per function with 20 functions, and then after 2 batches the epoch is done (the first batch has 16 data-sets, the next one 4):
[epoch=0]
0%| | 0/2 [00:00<?, ?it/s]
batch_idx = 0
train_inputs.shape = torch.Size([16, 5, 1])
train_targets.shape = torch.Size([16, 5, 1])
test_inputs.shape = torch.Size([16, 15, 1])
test_targets.shape = torch.Size([16, 15, 1])
batch_idx = 1
train_inputs.shape = torch.Size([4, 5, 1])
train_targets.shape = torch.Size([4, 5, 1])
test_inputs.shape = torch.Size([4, 15, 1])
test_targets.shape = torch.Size([4, 15, 1])
[epoch=1]
50%|█████ | 1/2 [00:00<00:00, 3.48it/s]
batch_idx = 0
train_inputs.shape = torch.Size([16, 5, 1])
train_targets.shape = torch.Size([16, 5, 1])
test_inputs.shape = torch.Size([16, 15, 1])
test_targets.shape = torch.Size([16, 15, 1])
batch_idx = 1
train_inputs.shape = torch.Size([4, 5, 1])
train_targets.shape = torch.Size([4, 5, 1])
test_inputs.shape = torch.Size([4, 15, 1])
test_targets.shape = torch.Size([4, 15, 1])
Done with test! a
import sys; print('Python %s on %s' % (sys.version, sys.platform))
100%|██████████| 2/2 [00:00<00:00, 3.49it/s]
code:
# loop through meta-batches of this data set, print the size, make sure it's the size you expect
from torchmeta.utils.data import BatchMetaDataLoader
from torchmeta.transforms import ClassSplitter
from torchmeta.toy import Sinusoid
from tqdm import tqdm

dataset = Sinusoid(num_samples_per_task=100, num_tasks=20)
shots, test_shots = 5, 15
# get metaset
metaset = ClassSplitter(
    dataset,
    num_train_per_class=shots,
    num_test_per_class=test_shots,
    shuffle=True)
# get meta-dataloader
batch_size = 16
num_workers = 0
meta_dataloader = BatchMetaDataLoader(metaset, batch_size=batch_size, num_workers=num_workers)
epochs = 2

print(f'batch_size = {batch_size}')
print(f'len(metaset) = {len(metaset)}')
print(f'len(meta_dataloader) = {len(meta_dataloader)}\n')
with tqdm(range(epochs)) as tepochs:
    for epoch in tepochs:
        print(f'\n[epoch={epoch}]')
        for batch_idx, batch in enumerate(meta_dataloader):
            print(f'\nbatch_idx = {batch_idx}')
            train_inputs, train_targets = batch['train']
            test_inputs, test_targets = batch['test']
            print(f'train_inputs.shape = {train_inputs.shape}')
            print(f'train_targets.shape = {train_targets.shape}')
            print(f'test_inputs.shape = {test_inputs.shape}')
            print(f'test_targets.shape = {test_targets.shape}') |
st49780 | ayalaa2:
What’s the main contradictory thought that you’re noticing?
My contradictory concepts are these:
First Trist’s definition:
episode = 1 single data set/task
Our definition of episode:
episode = 1 batch of data sets/tasks
in fact Trist implies this equation should hold:
total_episodes = num_meta_epochs * meta_batch_size |
st49781 | references I have that might help:
reference: arxiv.org/pdf/1606.04080.pdf
reference: proceedings.mlr.press/v48/santoro16.pdf
this is the quote I find unreadable:
More specifically, let us define a task T as distribution over possible label sets L. Typically we consider T to uniformly weight all data sets of up to a few unique classes (e.g., 5), with a few examples per class (e.g., up to 5). In this case, a label set L sampled from a task T , L ∼ T , will typically have 5 to 25 examples.
To form an “episode” to compute gradients and update our model, we first sample L from T (e.g.,
L could be the label set {cats, dogs}). We then use L to sample the support set S and a batch B
(i.e., both S and B are labelled examples of cats and dogs). The Matching Net is then trained to
minimise the error predicting the labels in the batch B conditioned on the support set S. This is a
form of meta-learning since the training procedure explicitly learns to learn from a given support set
to minimise a loss over a batch.
references for Trist’s answer:
In our setup, a task, or episode, involves the presentation of some dataset D = {d_t}_{t=1}^{T} = {(x_t, y_t)}_{t=1}^{T}. For classification, y_t is the class label for an image x_t, and for regression, y_t is the value of a hidden function for a vector with real-valued elements x_t, or simply a real-valued number x_t (here on, for consistency, x_t will be used). |
st49782 | In my opinion the right definition of an episode should be a batch of tasks (usually called a meta-batch). For regression, if we have 100 f_i from some family (e.g. sine functions), then 1 episode with a meta-batch size of 16 should be 16 functions, each with a support set and a query set.
For classification an episode is still a (meta) batch of tasks. In this case a task is a N-way K-shot classification task. e.g. 5-way, 5-shot would have 25 examples for the support set and if the Keval is 15 then 75 examples for the query set. In this case if we have meta-batch size of 16 then we sample 16 tasks, each with 25+75 examples. So a total of 16*100 examples for a meta-batch.
In fact with this definition 1 episode is the same as an iteration step. When meta-batch size is 1 then a task is an episode.
I can’t imagine why we’d define an episode as a task, which I thought at some point. In that case we have the same word for task and episode. But an episode of learning happens fully during each iteration.
Though, I’d prefer to not use this word at all since it seems redundant + RL already uses this term which adds to the confusion in my opinion. |
st49783 | I would appreciate it if anyone could suggest a simple and efficient way to manage and save experiment results. |
st49784 | Solved by Mauri in post #6
https://wandb.ai is a great tool for that. Hope it helps. |
st49785 | What would you like to save?
If you are referring to the model and optimizer, have a look at the Serialization docs. |
st49786 | Actually, my question was mainly about experiment management, i.e. how to monitor your model in an organized way (e.g., hyper-parameters, loss, plots). So far I use an Excel sheet to save my results and corresponding hyper-parameters manually; I was wondering if there is a simple and effective tool to save results and help with visualization as well. |
st49787 | Would appreciate, if you as an expert, present your best practices for managing ML experiment. |
st49788 | Oh, I’m no expert in managing ML experiments, so let’s wait for the real experts to give their opinion, which tools to use. |
st49789 | A variety of tools is reported here: https://neptune.ai/blog/best-ml-experiment-tracking-tools
Facing the same problem, getting lost in too many experiments… |
st49790 | I would like to thank you; for the last 3 months I have been using wandb.ai, and all I can say is that it is really amazing |
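For anyone landing here later, a minimal sketch of what tracking with wandb can look like (the project name and hyper-parameters are placeholders, and train_one_epoch/validate stand in for your own functions):
import wandb

config = {"learning_rate": 1e-3, "batch_size": 64, "epochs": 10}  # hypothetical hyper-parameters
wandb.init(project="my-experiments", config=config)

for epoch in range(config["epochs"]):
    train_loss = train_one_epoch()   # assumed to exist in your code
    val_loss = validate()            # assumed to exist in your code
    wandb.log({"train/loss": train_loss, "val/loss": val_loss, "epoch": epoch})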
st49791 | I’d like to put in a plug for runx, which is very lightweight, and we use it in many of my group’s research projects. https://github.com/NVIDIA/runx |
st49792 | I’m not sure this is the proper forum for questions of this type, let me know.
I am looking for the PyTorch way of doing the following:
Given
a = torch.Tensor([[1, 2, 3], [4, 5, 6]]) # 2 x 3
b = torch.Tensor([
[[ 1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]],
[[12, 11, 10], [9, 8, 7], [6, 5, 4], [ 3, 2, 1]]
]) # 2 x 4 x 3
I would like to get
r = [[14.0, 32.0, 50.0, 68.0], [163.0, 118.0, 73.0, 28.0]] # 2 x 4
with the inner result vectors being dot products of a, b sub-vectors as follows:
r = [
[a[0] \dot b[0][0], a[0] \dot b[0][1], a[0] \dot b[0][2], a[0] \dot b[0][3]],
[a[1] \dot b[1][0], a[1] \dot b[1][1], a[1] \dot b[1][2], a[1] \dot b[1][3]]
]
In yet other words, with the first dimension of both a and b being the batch dimension, I need to take the first vector transposed, batch-wise, and do a standard matrix multiplication with the 3 x 4 array, finally getting two batch-arranged four-element vectors.
I tried
r = torch.tensordot(a, b, dims=([1], [2]))
but this produces
[[[14.0, 32.0, 50.0, 68.0], [64.0, 46.0, 28.0, 10.0]], [[32.0, 77.0, 122.0, 167.0], [163.0, 118.0, 73.0, 28.0]]]
i.e. more than I need - I need only the diagonal of it
Is there a good guide to read about PyTorch matrix operations in a systematic way?
Extra: if there is more than one way of doing 1., which would be the most efficient in terms of CUDA operations, if the first dimension is large (hundreds), and two others relatively small (a dozen or two)? Where can I read more about PyTorch / CUDA efficiency? |
st49793 | Solved by Caruso in post #2
Hi @Tomasz_Dryjanski ,
have a look at torch.bmm, it computes a matrix multiplication for each corresponding matrix pair.
Here is an example of what you want to do:
import torch
a = torch.Tensor([[1, 2, 3], [4, 5, 6]]).float()
b = torch.Tensor([[[ 1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]],
… |
st49794 | Hi @Tomasz_Dryjanski ,
have a look at torch.bmm, it computes a matrix multiplication for each corresponding matrix pair.
Here is an example of what you want to do:
import torch
a = torch.Tensor([[1, 2, 3], [4, 5, 6]]).float()
b = torch.Tensor([[[ 1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]],
[[12, 11, 10], [9, 8, 7], [6, 5, 4], [ 3, 2, 1]]]).float()
a = a.unsqueeze(1) # B, 1, 3
print(a.shape, b.shape)
>> torch.Size([2, 1, 3]) torch.Size([2, 4, 3])
c = torch.bmm(a, b.transpose(1, 2))
c = c.squeeze(1)
print(c)
>> tensor([[ 14., 32., 50., 68.],
[163., 118., 73., 28.]])
albanD will hopefully answer your extra question :^)
Edit example for einsum, which albanD mentioned:
a = torch.Tensor([[1, 2, 3], [4, 5, 6]]).float()
b = torch.Tensor([[[ 1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]],
[[12, 11, 10], [9, 8, 7], [6, 5, 4], [ 3, 2, 1]]]).float()
torch.einsum("bn, bmn -> bm", a, b) |
st49795 | @Caruso answer above is good for 1.
for 2: I’m afraid there is no such thing. But there are functions to do all the matrix ops that you want in general so hopefully such a guide is not necessary. In general, I use torch.matmul() that performs generic batch matrix multiplication. And add extra dimensions where needed.
Note that sometimes, it is more efficient to do the product reduction by hand and you can do an element-wise product and a sum(dim=[-1, -2]) for example if you need to reduce two dimensions at once.
Finally, if you are interested, we also have a einsum() function that allows you to perform arbitrary reductions specified with Einstein notation.
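For example, the batched dot product from above can also be written with matmul, or with an element-wise product plus a reduction (using a and b as defined earlier in the thread):
# a: (B, 3), b: (B, 4, 3)  ->  r: (B, 4)
r = torch.matmul(b, a.unsqueeze(-1)).squeeze(-1)
# same result via broadcasting and a reduction over the last dimension
r2 = (b * a.unsqueeze(1)).sum(dim=-1)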
For 3: PyTorch / CUDA efficiency rules are the same as regular CUDA efficiency rules (that you can find online). The gist is: do only large ops |
st49796 | About efficiency: is there e.g. any difference between
torch.bmm(a.unsqueeze(1), b.transpose(2, 1)).squeeze(1)
and
torch.bmm(b, a.unsqueeze(2)).squeeze(2)
in terms of operation speed, assuming that b.size()[0] >> b.size()[1] and b.size()[0] >> b.size()[2]
(i.e. the batch size being much bigger than any other dimension)? |
st49797 | In theory no. Mostly because ops like transpose or squeeze don’t actually touch the content of the Tensor. And so won’t even need to run anything on the GPU: they only change the Tensor metadata stored in ram.
In practice, GPU optimization is quite hard and there is no silver bullet there. So you can actually see wildly different behaviors for similar ops just because you transpose or unsqueeze a different dimension.
So you will have to run the code to see which one is faster in practice.
Don’t forget the proper synchronization with torch.cuda.synchronize() to get proper timings on the GPU. |
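A minimal sketch of how such a timing comparison could be set up (the shapes here are arbitrary; only the synchronization pattern matters):
import torch

a = torch.randn(10000, 16, device='cuda')
b = torch.randn(10000, 4, 16, device='cuda')

def time_op(fn, iters=100):
    torch.cuda.synchronize()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        fn()
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters  # milliseconds per call

t1 = time_op(lambda: torch.bmm(a.unsqueeze(1), b.transpose(2, 1)).squeeze(1))
t2 = time_op(lambda: torch.bmm(b, a.unsqueeze(2)).squeeze(2))
print(f'variant 1: {t1:.4f} ms, variant 2: {t2:.4f} ms')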
st49798 | I tried to create 3d CNN using Pytorch. the following code works with 5 images but does not work with 336 images.
The error is: RuntimeError: $ Torch: not enough memory: you tried to allocate 166GB. Buy new RAM! at /opt/conda/conda-bld/pytorch_1550813258230/work/aten/src/TH/THGeneral.cpp:201
Can anyone help me please ?
class CNNModel(nn.Module):
    def __init__(self):
        super(CNNModel, self).__init__()
        self.conv_layer1 = self._conv_layer_set(3, 32)
        self.conv_layer2 = self._conv_layer_set(32, 64)
        self.fc1 = nn.Linear(64*28*28*28, 2)
        self.fc2 = nn.Linear(1404928, num_classes)
        self.relu = nn.LeakyReLU()
        self.batch = nn.BatchNorm1d(2)
        self.drop = nn.Dropout(p=0.15, inplace=True)

    def _conv_layer_set(self, in_c, out_c):
        conv_layer = nn.Sequential(
            nn.Conv3d(in_c, out_c, kernel_size=(3, 3, 3), padding=0),
            nn.LeakyReLU(),
            nn.MaxPool3d((2, 2, 2)),
        )
        return conv_layer

    def forward(self, x):
        # Set 1
        out = self.conv_layer1(x)
        out = self.conv_layer2(out)
        out = out.view(out.size(0), -1)
        out = self.fc1(out)
        out = self.relu(out)
        out = self.batch(out)
        out = self.drop(out)
        out = F.softmax(out, dim=1)
        return out

# Definition of hyperparameters
n_iters = 2
num_epochs = 2
# Create CNN
model = CNNModel()
#model.cuda()
print(model)
# Cross Entropy Loss
error = nn.CrossEntropyLoss()
# SGD Optimizer
learning_rate = 0.001
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
...
# train
# Definition of hyperparameters
n_iters = 2
num_epochs = 2
loss_list_train = []
accuracy_list_train = []
for epoch in range(num_epochs):
    #for i in range(x_train1.shape[0]):
    training_data = torch.Tensor(training_data)
    targets = torch.Tensor(targets)
    training_data = Variable(training_data.view(336,3,120,120,120))
    labels = targets
    # Clear gradients
    optimizer.zero_grad()
    # Forward propagation
    outputs = model(training_data)
    # Calculate softmax and cross entropy loss
    _, predicted = torch.max(outputs, 1)
    accuracy = accuracyCalc(predicted, targets)
    #labels = labels.tolist()
    #outputs = outputs.tolist()
    labels = labels.long()
    labels = labels.view(-1)
    loss = nn.CrossEntropyLoss()
    loss = loss(outputs, labels)
    # Calculating gradients
    loss.backward()
    # Update parameters
    optimizer.step()
    loss_list_train.append(loss.data)
    accuracy_list_train.append(accuracy)
    print('Iteration: {}/{} Loss: {} Accuracy: {} %'.format(epoch+1, num_epochs, loss.data, accuracy)) |
st49799 | Solved by BramVanroy in post #2
Well yes, that's normal. That's what batch sizes are for: generally speaking you can't do a forward pass with your full dataset, so you chop it up into batches. |
st49800 | Well yes, that's normal. That's what batch sizes are for: generally speaking you can't do a forward pass with your full dataset, so you chop it up into batches. |
st49801 | I tried to modify the code but it doesn’t work (it only works 3 times so with 9 images)
# Definition of hyperparameters
n_iters = 2
num_epochs = 2
loss_list_train = []
accuracy_list_train = []
for epoch in range(num_epochs):
    outputs = []
    for fold in range(0, len(training_data), 3):
        xtrain = training_data[fold : fold+3]
        #for i in range(x_train1.shape[0]):
        xtrain = torch.Tensor(xtrain)
        xtrain = Variable(xtrain.view(3,3,120,120,120))
        # Clear gradients
        optimizer.zero_grad()
        # Forward propagation
        v = model(xtrain)
        outputs.append(v)
    # Calculate softmax and cross entropy loss
    targets = torch.Tensor(targets)
    labels = targets
    _, predicted = torch.max(outputs, 1)
    accuracy = accuracyCalc(predicted, targets)
    #labels = labels.tolist()
    #outputs = outputs.tolist()
    labels = labels.long()
    labels = labels.view(-1)
    loss = nn.CrossEntropyLoss()
    loss = loss(outputs, labels)
    # Calculating gradients
    loss.backward()
    # Update parameters
    optimizer.step()
    loss_list_train.append(loss.data)
    accuracy_list_train.append(accuracy)
    print('Iteration: {}/{} Loss: {} Accuracy: {} %'.format(epoch+1, num_epochs, loss.data, accuracy)) |
st49802 | BramVanroy:
outputs.append(v.detach())
I tried with `outputs.append(v.detach())` but the problem is:
TypeError Traceback (most recent call last)
<ipython-input-3-051c943b10b7> in <module>
180 targets = torch.Tensor(targets)
181 labels = targets
--> 182 _, predicted = torch.max(outputs, 1)
183 accuracy = accuracyCalc(predicted, targets)
184 #labels = labels.tolist()
TypeError: max() received an invalid combination of arguments - got (list, int), but expected one of:
* (Tensor input)
* (Tensor input, Tensor other, Tensor out)
* (Tensor input, int dim, bool keepdim, tuple of Tensors out) |
st49803 | I solved it, but there is another error message.
I think that the problem is with detach() and loss.backward()
# Definition of hyperparameters
n_iters = 2
num_epochs = 2
loss_list_train = []
accuracy_list_train = []
for epoch in range(num_epochs):
    outputs = []
    outputs = torch.tensor(outputs)
    for fold in range(0, len(training_data), 3):
        xtrain = training_data[fold : fold+3]
        xtrain = torch.Tensor(xtrain)
        xtrain = Variable(xtrain.view(3,3,120,120,120))
        # Clear gradients
        optimizer.zero_grad()
        # Forward propagation
        v = model(xtrain)
        outputs = torch.cat((outputs, v.detach()), dim=0)
    # Calculate softmax and cross entropy loss
    targets = torch.Tensor(targets)
    labels = targets
    outputs = torch.Tensor(outputs)
    _, predicted = torch.max(outputs, 1)
    accuracy = accuracyCalc(predicted, targets)
    labels = labels.long()
    labels = labels.view(-1)
    loss = nn.CrossEntropyLoss()
    loss = loss(outputs, labels)
    # Calculating gradients
    loss.backward()
    # Update parameters
    optimizer.step()
    loss_list_train.append(loss.data)
    accuracy_list_train.append(accuracy)
    print('Iteration: {}/{} Loss: {} Accuracy: {} %'.format(epoch+1, num_epochs, loss.data, accuracy))
Result :
RuntimeError Traceback (most recent call last)
<ipython-input-2-73901ba48e8b> in <module>
195 loss = loss(outputs, labels)
196 # Calculating gradients
--> 197 loss.backward()
198 # Update parameters
199 optimizer.step()
/opt/tljh/user/envs/fethi_env/lib/python3.6/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
100 products. Defaults to ``False``.
101 """
--> 102 torch.autograd.backward(self, gradient, retain_graph, create_graph)
103
104 def register_hook(self, hook):
/opt/tljh/user/envs/fethi_env/lib/python3.6/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
88 Variable._execution_engine.run_backward(
89 tensors, grad_tensors, retain_graph, create_graph,
---> 90 allow_unreachable=True) # allow_unreachable flag
91
92
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn |
st49804 | You should probably move all your loss calculation, optimizer step, etc. inside your data loop. In most scenarios you want to run one loss backward (and optimizer step) after each model forward. |
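To illustrate that structure (a rough sketch only, adapted to the shapes used above; criterion is assumed to be an nn.CrossEntropyLoss created once outside the loops):
for epoch in range(num_epochs):
    epoch_loss = 0.0
    for fold in range(0, len(training_data), 3):
        xtrain = torch.Tensor(training_data[fold:fold+3]).view(3, 3, 120, 120, 120)
        ytrain = torch.Tensor(targets[fold:fold+3]).long().view(-1)
        optimizer.zero_grad()
        outputs = model(xtrain)            # forward pass for this small batch only
        loss = criterion(outputs, ytrain)  # loss computed while the graph for this batch still exists
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item()          # .item() detaches, so no graph is kept around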
st49805 | I was looking at the PyTorch pruning tutorial (https://pytorch.org/tutorials/intermediate/pruning_tutorial.html) and I am left confused about where in the process this pruning happens.
Does the setup in the tutorial happen before training, and then the model is trained with the pruning method somewhere in the loop?
Or do the strategies in the tutorial happen after training, when we have the final network weights? |
st49806 | Solved by albanD in post #2
Hi,
This tutorial shows how to use pruning for the weights that are used during training.
It is not a pre or post processing technique.
In particular, the pruning happens when you access the weights where you registered the pruning; they will automatically get pruned. |
st49807 | Hi,
This tutorial shows how to use pruning for the weights that are used during training.
It is not a pre or post processing technique.
In particular, the pruning happens when you access the weights where you registered the pruning; they will automatically get pruned. |
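For reference, a minimal sketch along the lines of that tutorial (the module name, amount, and training objects are placeholders): the pruning is registered once, and a normal training loop follows.
import torch.nn.utils.prune as prune

# register pruning on a layer's weight; this adds weight_orig and weight_mask,
# and 'weight' becomes the masked (pruned) tensor whenever it is accessed
prune.l1_unstructured(model.conv1, name='weight', amount=0.3)

# ordinary training loop afterwards: the forward pass uses the pruned weights
for x, y in loader:
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

# optionally make the pruning permanent at the end
prune.remove(model.conv1, 'weight')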
st49808 | Thanks, so just to verify, I would follow the tutorial, and then make a normal training loop without any other modification and the model will automatically be pruned while training? |
st49809 | I have two Tensors A and B,
A.shape is (b,c,100,100),
B.shape is (b,c,80,80),
how can I get tensor C with shape (b,c,21,21) subject to
C[:, :, i, j] = torch.mean(A[:, :, i:i+80, j:j+80] - B)?
I wonder whether there’s an efficient way to solve this?
Thanks very much.
Sorry if it’s not a proper title. |
st49810 | Solved by seungjun in post #4
Ah, sorry for the misunderstanding.
You can try below.
C = (A.unfold(2,80,1).unfold(3,80,1) - B.expand(b,c,21,21,80,80)).mean(dim=(-2,-1))
# variable version
_, _, H, W = A.shape
_, _, h, w = B.shape
C = (A.unfold(2,h,1).unfold(3,w,1) - B.expand(b,c,H-h+1,W-w+1,h,w)).mean(dim=(-2,-1))
The keys a… |
st49811 | If I understand you correctly, would it be the way to subtract B from part of A?
You can specify corresponding index ranges if the dimension lengths equal to each other.
C = A[:,:,80:100, 80:100] - B
# C shape: b x c x 20 x 20
You can further apply torch.mean afterwards by your needs. |
st49812 | I appreciate your reply, but you did misunderstand my question.
I want B to be a filter: do torch.mean(slice_A - B), then slide B to the next position, etc. And the result would have shape (b, c, A.shape[-1] - B.shape[-1] + 1, A.shape[-1] - B.shape[-1] + 1). |
st49813 | Ah, sorry for the misunderstanding.
You can try below.
C = (A.unfold(2,80,1).unfold(3,80,1) - B.expand(b,c,21,21,80,80)).mean(dim=(-2,-1))
# variable version
_, _, H, W = A.shape
_, _, h, w = B.shape
C = (A.unfold(2,h,1).unfold(3,w,1) - B.expand(b,c,H-h+1,W-w+1,h,w)).mean(dim=(-2,-1))
The keys are torch.Tensor.unfold and torch.Tensor.expand.
The unfold function extracts sliding windows, and the expand function views the tensor in an expanded size without copying values. |
st49814 | I asked the same question on stackoverflow. https://stackoverflow.com/questions/64313895/element-wise-operation-in-pytorch |
st49815 | my first attempt at visualizing with tensorboard.
built a convnet with nn.module class -
class ThreeLayerConvNet2(nn.Module):
    def __init__(self, in_channel, channel_1, channel_2, num_classes):
        super().__init__()
        pad1 = 2
        pad2 = 1
        ker1 = (5,5)
        ker2 = (3,3)
        feature_map1_size = (32 + 2 * pad1 - ker1[0] + 1, 32 + 2 * pad1 - ker1[1] + 1)
        feature_map2_size = (feature_map1_size[0] + 2 * pad2 - ker2[0] + 1, feature_map1_size[1] + 2 * pad2 - ker2[1] + 1)
        flat_size = np.prod(feature_map2_size) * channel_2
        self.conv1 = nn.Conv2d(in_channel, channel_1, kernel_size=ker1, padding=pad1,
                               padding_mode='zeros')
        nn.init.kaiming_normal_(self.conv1.weight)
        self.relu = nn.ReLU()
        self.conv2 = nn.Conv2d(channel_1, channel_2, kernel_size=ker2, padding=pad2,
                               padding_mode='zeros')
        nn.init.kaiming_normal_(self.conv2.weight)
        self.fc = nn.Linear(flat_size, num_classes)
        #self.batchnorm2d_1 = nn.BatchNorm2d(in_channel)
        #self.batchnorm2d_2 = nn.BatchNorm2d(channel_1)
        #self.batchnorm1d = nn.BatchNorm1d(flat_size)
        pass

    def forward(self, x):
        scores = None
        #x = self.batchnorm2d_1(x)
        x = self.conv1(x)
        x = self.relu(x)
        #x = self.batchnorm2d_2(x)
        x = self.conv2(x)
        x = self.relu(x)
        x = flatten(x)
        #x = self.batchnorm1d(x)
        scores = self.fc(x)
        pass
        return scores

num_classes = 10
in_channel = 3
channel_1 = 32
channel_2 = 16
learning_rate = 1e-2

model = ThreeLayerConvNet2(in_channel, channel_1, channel_2, num_classes)
optimizer = optim.SGD(model.parameters(), lr=learning_rate,
                      momentum=0.9, nesterov=True)
then initialized tensorboard -
from torch.utils.tensorboard import SummaryWriter
# default `log_dir` is "runs" - we'll be more specific here
writer = SummaryWriter('cs231n/final_cifar10_convnet')
tried to feed the board a graph like shown in tutorial -
dataiter = iter(loader_train)
images, labels = dataiter.next()
writer.add_graph(ThreeLayerConvNet2,images)
and got the following error.
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-73-49d61804c4c3> in <module>
1 dataiter = iter(loader_train)
2 images, labels = dataiter.next()
----> 3 writer.add_graph(ThreeLayerConvNet2)
~\.conda\envs\torch_env\lib\site-packages\torch\utils\tensorboard\writer.py in add_graph(self, model, input_to_model, verbose)
705 if hasattr(model, 'forward'):
706 # A valid PyTorch model should have a 'forward' method
--> 707 self._get_file_writer().add_graph(graph(model, input_to_model, verbose))
708 else:
709 # Caffe2 models do not have the 'forward' method
~\.conda\envs\torch_env\lib\site-packages\torch\utils\tensorboard\_pytorch_graph.py in graph(model, args, verbose)
281 processing.
282 """
--> 283 with torch.onnx.set_training(model, False): # TODO: move outside of torch.onnx?
284 try:
285 trace = torch.jit.trace(model, args)
~\.conda\envs\torch_env\lib\contextlib.py in __enter__(self)
79 def __enter__(self):
80 try:
---> 81 return next(self.gen)
82 except StopIteration:
83 raise RuntimeError("generator didn't yield") from None
~\.conda\envs\torch_env\lib\site-packages\torch\onnx\utils.py in set_training(model, mode)
36 yield
37 return
---> 38 old_mode = model.training
39 if old_mode != mode:
40 model.train(mode)
AttributeError: type object 'ThreeLayerConvNet2' has no attribute 'training' |
st49816 | Hi,
whats your PyTorch version?
is it possible to upgrade to 1.6 or nightly version? |
st49817 | currently not because i’m doing the cs231n course of stanford and the 1.4 is required. maybe it’s not a big deal. i will try to upgrade and see. |
st49818 | i am now simply trying to follow the pytorch - tensorboard tutorial and it's not working.
more specifically, the tensorboard web page starts but i can’t upload an image.
details:
conda 4.8.5 environment
windows 10
python 3.6.11
pytorch - 1.6
tensorboard - 1.15
pycharm IDE
i tried upgrading tensorboard, tensorboardX.
steps:
i run the program like the tutorial shows, in pycharm.
(not sure it’s critical but i’m new to pycharm and to module based programming. maybe i did something wrong there? full code added below)
in command line i activate the torch environment, change directory to the pycharm project/runs folder and then execute:
tensorboard --logdir==runs
i get the link to local host web page and see the tensorboard GUI but no image data found. nothing is found for that matter but i’m trying as a first step to upload an image, so nothing works.
pycharm code:
test.py
import torchvision
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch
from Net import Net
from data_load import classes, trainloader, testloader
from helper_func import matplotlib_imshow
from board import writer

net = Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

if __name__ == "__main__":
    # default `log_dir` is "runs" - we'll be more specific here
    # get some random training images
    dataiter = iter(trainloader)
    images, labels = dataiter.next()
    print(images.shape)
    # create grid of images
    img_grid = torchvision.utils.make_grid(images)
    # show images
    matplotlib_imshow(img_grid, one_channel=True)
    # write to tensorboard
    writer.add_image('four_fashion_mnist_images', img_grid)
board.py
from torch.utils.tensorboard import SummaryWriter
#from tensorboardX import SummaryWriter
writer = SummaryWriter('runs/fashion_mnist_experiment_1')
data_load.py
import torchvision
import torch
from helper_func import transform

# constant for classes
classes = ('T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
           'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle Boot')
# datasets
trainset = torchvision.datasets.FashionMNIST('./data',
                                              download=True,
                                              train=True,
                                              transform=transform)
testset = torchvision.datasets.FashionMNIST('./data',
                                             download=True,
                                             train=False,
                                             transform=transform)
# dataloaders
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
                                         shuffle=False, num_workers=2)
helper_func.py
import matplotlib.pyplot as plt
import numpy as np
import torchvision.transforms as transforms

# helper function to show an image
# (used in the `plot_classes_preds` function below)
def matplotlib_imshow(img, one_channel=False):
    if one_channel:
        img = img.mean(dim=0)
    img = img / 2 + 0.5  # unnormalize
    npimg = img.numpy()
    if one_channel:
        plt.imshow(npimg, cmap="Greys")
    else:
        plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()

# transforms
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5,), (0.5,))])
Net.py
import torch.nn as nn
import torch.nn.functional as F  # needed for the F.relu calls below

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 4 * 4, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 4 * 4)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
i mention all this code because so far i did all my programming in notebooks so i’m still confused about the order python interprets all this modules and maybe that caused an error. |
st49819 | Hi,
can you look if file is created in your ../runs/fashion_mnist_experiment_1 path?
But honestly your code looks right
I hope someone other can help you! |
st49820 | yes, in the folder fashion_mnist_experiment_1 i find many similar files; i guess they are generated every time i run the code.
file name: events.out.tfevents.1602411148.DESKTOP-9IVI3D
there are 2 files whose type is DESKTOP_9IVI3MD
and the rest are ‘0 file’
no clue what that means… |
st49821 | another strange detail that might contribute: there seems to be an error with the order of python commands.
i added two print commands, one in the main script and one in the board module. the code changed only where i mention it, the rest is the same:
import torchvision
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch
from Net import Net
from data_load import classes, trainloader, testloader
from helper_func import matplotlib_imshow
print('writer not') # change 1
from board import writer
and in board.py module:
from torch.utils.tensorboard import SummaryWriter
#from tensorboardX import SummaryWriter
writer = SummaryWriter('runs/fashion_mnist_experiment_1')
print('writer ready') # change 2
when i run the following is printed:
writer not
writer ready
writer not
writer not
writer ready
writer ready
torch.Size([4, 1, 28, 28])
it seems to be repeating the main script before proceeding to the if main statement, and also strangely doubling the print on the second repeat.
what can cause this and is it related to the dysfunction of tesnorboard perhaps?
*update
it seems that the num_workers=2 in the data_load module, in the torch.utils.data.DataLoader object, caused the double printing issue. after changing to num_workers=1:
writer not
writer ready
writer not
writer ready
torch.Size([4, 1, 28, 28])
writer yes
and after changing num_workers=0 i got the printout i was expecting:
writer not
writer ready
torch.Size([4, 1, 28, 28])
writer yes
i still don’t know why or how is this related to the tensorboard issue.
i have a cuda enabled GPU gtx 1050 |
st49822 | Can you write the from board import writer in the if __name__ == '__main__': block and try it again? |
st49823 | i inserted the from board import writer into the if __name__ block.
ran the code, seems to be fine, except still printing the print command inside board (from what i understand if i import writer from board it should ignore the print command inside board, no?)
i open anaconda prompt, activate the venv, change directory to pycharmprojects/deeplearning/runs
type tensorboard --logdir==runs
get this result:
TensorBoard 1.15.0 at http://DESKTOP-9IVI3MD:6006/ (Press CTRL+C to quit)
enter the link, still no image. also tried adding graphs to no use.
the folder runs/fashion_mnist_experiment_1 contains the tfevents files |
st49824 | Hello everybody,
I am trying to implement a CNN for a regression task on audio data. I am using mel-spectrograms as features with a pixel size of (64, 64). The network consist of two convolutional layers with max pooling and three additional fully connected layers.
I am facing problems with the input dimension of the first fully connected layer to flatten the output of the convolutional layers. The target size doesn’t match the input size and the following warning appears:
UserWarning: Using a target size (torch.Size([2592, 1])) that is different to the input size (torch.Size([1, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. return F.mse_loss(input, target, reduction=self.reduction)
By chance I figured out that with an input size of 1568 the warning doesn't appear and the network trains successfully.
I am using a batch size of 162 and one channel. The input dimension of X and y of one iteration are the following:
X.shape = [162, 1, 64, 64]
y.shape = [162, 1]
Here the code of the network
class Basic_CNN(nn.Module):
    def __init__(self):
        super(Basic_CNN, self).__init__()
        self.conv1 = nn.Conv2d(1, 16, (5, 5), padding=1, stride=1)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(16, 8, (5, 5), padding=1, stride=1)
        self.fc1 = nn.Linear(1568, 120)
        self.fc2 = nn.Linear(120, 64)
        self.fc3 = nn.Linear(64, 1)

    def forward(self, x):
        # Convolutional layer with ReLU and 2x2 pooling
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        # flatten output
        x = x.view(-1, 1568)
        # fc layers
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        # output layer
        x = self.fc3(x)
        return x
Could somebody be so kind and explain to me how to choose the input size of this specific layer? How does it depend on the output size of the previous layers? |
Thank you very much in advance.
p.s. : sorry for first deleted post |
st49825 | Hi,
the warning you are getting seems to be raised by your loss function if you look at the end of your message [return F.mse_loss(input, target, reduction=self.reduction)]. This is because MSE expects that your prediction and your target are of the same shape. Maybe you can show us a bit of your training loop, so we can help you more.
And now to answer your questions:
Could somebody be so kind and explain to me how to choose the input size of this specific layer? How is it dependend on the output size of the previous layers?
When you go from a Convolution to a Linear-Layer you want to flatten your learned features, because a Conv2d Layer outputs a 4d-Tensor [B, C, H_out, W_out] and a Linear Layer takes in a 2d-Tensor [B, F_in] . Those features are contained in your Channel, Width and Height dimension, so you want them flattened to be equal to F_in (C*H_out*W_out = F_in).
Normally you can also apply something like an adaptive average pooling before you flatten your input, because the height and width of the output of a convolution depend on the height and width of the input, which can cause C*H_out*W_out != F_in, and you would then get an error because of mismatching shapes between the weights in your Linear Layer and your input. By applying adaptive average pooling you define the output shape of the input given to the AdaptiveAvgPooling Module. When you now flatten this output, it's guaranteed to have the same Height and Width every time, and C*H_out*W_out = F_in will (hopefully) always be true.
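A small, self-contained sketch of that idea (14x14 is chosen here only so the flattened size matches the 1568 = 8*14*14 used in the model above):
import torch
import torch.nn as nn

pool_out = nn.AdaptiveAvgPool2d((14, 14))  # always produces a 14x14 map, whatever H, W come in
x = torch.randn(4, 8, 33, 57)              # arbitrary spatial size after the conv blocks
x = pool_out(x)                            # -> [4, 8, 14, 14]
x = x.view(x.size(0), -1)                  # -> [4, 1568], matches nn.Linear(1568, 120) above
print(x.shape)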
PS: Maybe you can also have a look at Conv1ds, because mel-spectrograms are normally sequential data of shape [B, n_mels, time], but just saying. |
st49826 | Hi,
thank you very much. This helps a lot.
This is the code of my training loop:
def train(train_loader, model, optimizer, criterion, n_epochs, batch_size):
    cost = []
    metric_hist = []
    for epoch in range(n_epochs):
        total_loss = total_metrics = 0
        for i, (x, y) in enumerate(train_loader):
            # reshape input to 4D => [batch_size, N_channels=1, Height, Width]
            x = np.reshape(x, [batch_size, 1, x.shape[1], x.shape[2]])
            # reshape target to 2D => [batch_size, N_channels=1]
            y = np.reshape(y, [batch_size, 1])
            # convert to float and load to GPU
            x = x.float().to(device)
            y = y.float().to(device)
            model.zero_grad()
            # forward
            y_pred = model(x)
            # calc loss & rmse
            loss = criterion(y_pred, y)  # LOSS = Mean Square Error
            metrics = rmse(y_pred, y)    # METRICS = Root Mean Square Error
            # backward
            optimizer.zero_grad()
            loss.backward(retain_graph=True)
            optimizer.step()
            # append results
            total_loss += loss.item()
            total_metrics += metrics.item()
            # print status every 100th step
            if (i+1) % 100 == 0:
                print(i+1, " out of ", len(train_loader), "iterations: ",
                      round(((i+1) / len(train_loader)) * 100, 2), "% Done -- LOSS: ", round(loss.item(), 3),
                      "-- RMSE: ", round(metrics.item(), 3))
        # calc average loss of epoch and append to cost
        avg_metrics = total_metrics / len(train_loader)
        avg_loss = total_loss / len(train_loader)
        cost.append(avg_loss)
        metric_hist.append(avg_metrics)
        print(epoch+1, " out of ", n_epochs, "epochs: ", round(((epoch+1) / n_epochs) * 100, 3),
              "% Done --", "LOSS: ", round(avg_loss, 2), "-- RMSE: ", round(avg_metrics, 3), '\n')
    return cost, metric_hist |
st49827 | Hi @BeneFr,
I’m pretty confused, so your input values seem to be of shape [B, 1, H, W] - like expected - and your target of shape [B, 1], which matches with the expected output shape [B, 1] of your model. However your error message states that your input for F.mse_loss() - so your model output - is of shape [2592, 1] and your target is of shape [1, 1]. (Also 2592 / 162 = 16)
Maybe you can print the shape of x and y before you input them to the model and also your model output y_pred? Do you use MSELoss anywhere else than for your loss and metrics? |
st49828 | Hi Caruso,
the shape of x is [162, 1, 64, 64] and the shape of y is [162, 1] before I input them to the model. The shape of the model output y_pred is [162, 1]. I only use MSELoss for my loss in the training and testing loop.
The warning doesn’t appear when I use 1568 as input for the flatten layer. I just don’t understand why. |
st49829 | Hi,
The warning doesn’t appear when I use 1568 as input for the flatten layer. I just don’t understand why.
What other shape do you use for the input of your flatten layer? The output of your last conv is of shape [B, 8, 14,14] -> flattened [B, 1568], so anything else would raise a size mismatch error. |
st49830 | Hi Caruso,
thank you very much for your help! I finally figured out how to calculate the conv layer output dimensions.
With the formula:
Width_out = (Width_in - Width_filter + 2 * padding) / stride + 1
and equivalently for the height.
Then with 2x2 pooling the result is halved. Afterwards, (Width_out * Height_out * N_output_channels) gives the number of inputs for the flatten layer.
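For example, with the dimensions above: (64 - 5 + 2*1)/1 + 1 = 62, halved by the 2x2 pooling to 31; then (31 - 5 + 2*1)/1 + 1 = 29, halved (rounded down) to 14; and finally 8 * 14 * 14 = 1568 inputs for the flatten layer, which matches nn.Linear(1568, 120).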
Am I correct so far? |
st49831 | Hi BeneFr,
yes, your formula is right; in the docs of the Conv2d layer you can find the formula too.
You can also print the shape after the last convolution, if you don’t want to calculate the output shape ^^ |
st49832 | Hello,
I wonder if someone can help me find a better way to do the following (i.e. I'm not a Python geek).
Let's say that in the forward method of my model I get a batch of images of size N, Cin, H, W; then I apply a treatment that produces a tensor of dim N, Cin, K, H, W, let us call it 'x',
with K = 1 + J*L + L**2 * J*(J-1)/2 for some J and L values.
Now, I compute some mean values over the two last dims (H, W):
meanCoeffs = torch.mean(x, axis=(3,4))
Then, I proceed like that
xnew = torch.zeros_like(x)
# lvl 0
xnew[:,:,0,:,:] = x[:,:,0,:,:]
# lvl 1
for j1 in range(0,J):
    for t1 in range(0,L):
        i = adict[(j1,t1)]
        xnew[:,:,i,:,:] = x[:,:,i,:,:] - meanCoeffs[:,:,i,None,None]
# lvl 2
for j1 in range(0,J-1):
    for t1 in range(0,L):
        i1 = adict[(j1,t1)]
        for j2 in range(j1+1,J):
            for t2 in range(0,L):
                i12 = adict[(j1,t1,j2,t2)]
                xnew[:,:,i12,:,:] = x[:,:,i12,:,:] - meanCoeffs[:,:,i1,None,None]
where in case of J=2, L=4 adict is a dictionary equals to
adict={(-1,): 0,
(0, 0): 1,
(0, 1): 2,
(0, 2): 3,
(0, 3): 4,
(1, 0): 5,
(1, 1): 6,
(1, 2): 7,
(1, 3): 8,
(0, 0, 1, 0): 9,
(0, 0, 1, 1): 10,
(0, 0, 1, 2): 11,
(0, 0, 1, 3): 12,
(0, 1, 1, 0): 13,
(0, 1, 1, 1): 14,
(0, 1, 1, 2): 15,
(0, 1, 1, 3): 16,
(0, 2, 1, 0): 17,
(0, 2, 1, 1): 18,
(0, 2, 1, 2): 19,
(0, 2, 1, 3): 20,
(0, 3, 1, 0): 21,
(0, 3, 1, 1): 22,
(0, 3, 1, 2): 23,
(0, 3, 1, 3): 24}
My question is : is there a better way to compute xnew (eg. vectorization) especially on cuda device (of course)
Thanks |
st49833 | Hi there,
first of all, thanks for the nice modular implementation of the transformer architecture in pytorch!
I recently tried to extract the attention output weights for some layers in a TransformerEncoder for an input sample and have some suggestions on how to improve this.
The option to get the attention weights is currently only present in MultiheadAttention with the need_weights argument. This already returns the mean attention weights instead of the attention weights per head.
First, it would be better to return the attention weights per head and let the user do the sum over heads if needed (some users want to see the attention weights per head).
Second, it would be good to introduce the same argument to TransformerEncoderLayer and TransformerDecoderLayer in their forward methods and use that argument when evaluating self.self_attn and self.multihead_attn (currently, the default value True is used, which calculates the attention weights but then they are discarded, which is clearly not optimal)
There should be a way to get the forward keyword arguments in forward hooks! Then it would be simple to install a forward hook in each TransformerEncoderLayer, evaluate with needs_weights=True and store the results somewhere. For me, this is currently not possible because the attention mask is passed as keyword argument. This is a severe shortcoming of hooks! They should have access to all the arguments that forward has access to. Keyword arguments could be passed as a separate dict.
What is your opinion on this? |
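For context, this is roughly how the weights are obtained from MultiheadAttention today (a minimal sketch; the shapes are illustrative, and note the returned weights are already averaged over the heads):
import torch
import torch.nn as nn

mha = nn.MultiheadAttention(embed_dim=16, num_heads=4)
x = torch.randn(10, 2, 16)  # (seq_len, batch, embed_dim)

attn_output, attn_weights = mha(x, x, x, need_weights=True)
print(attn_output.shape)   # torch.Size([10, 2, 16])
print(attn_weights.shape)  # torch.Size([2, 10, 10]) -- mean over the 4 heads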
st49834 | RuntimeError Traceback (most recent call last)
in ()
15 labels=labels.long()
16 lossi = loss(output, labels)
—> 17 lossi.backward()
18 optimizer.step()
19 print(f"batch {c}, loss–>{lossi.item()}")
1 frames
/usr/local/lib/python3.6/dist-packages/torch/autograd/init.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
125 Variable._execution_engine.run_backward(
126 tensors, grad_tensors, retain_graph, create_graph,
–> 127 allow_unreachable=True) # allow_unreachable flag
128
129
RuntimeError: cuDNN error: CUDNN_STATUS_NOT_INITIALIZED
Exception raised from createCuDNNHandle at /pytorch/aten/src/ATen/cudnn/Handle.cpp:9 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x7f5b063391e2 in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so)
frame #1: + 0x100ca68 (0x7f5b077aea68 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cuda.so)
frame #2: at::native::getCudnnHandle() + 0x108d (0x7f5b077b034d in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cuda.so)
frame #3: + 0xeda4cc (0x7f5b0767c4cc in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cuda.so)
frame #4: + 0xed59ee (0x7f5b076779ee in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cuda.so)
frame #5: + 0xed75db (0x7f5b076795db in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cuda.so)
frame #6: at::native::cudnn_convolution_backward_input(c10::ArrayRef, at::Tensor const&, at::Tensor const&, c10::ArrayRef, c10::ArrayRef, c10::ArrayRef, long, bool, bool) + 0xb2 (0x7f5b07679b32 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cuda.so)
frame #7: + 0xf3cd3b (0x7f5b076ded3b in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cuda.so)
frame #8: + 0xf6cb58 (0x7f5b0770eb58 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cuda.so)
frame #9: at::cudnn_convolution_backward_input(c10::ArrayRef, at::Tensor const&, at::Tensor const&, c10::ArrayRef, c10::ArrayRef, c10::ArrayRef, long, bool, bool) + 0x1ad (0x7f5b3e58588d in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so)
frame #10: at::native::cudnn_convolution_backward(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef, c10::ArrayRef, c10::ArrayRef, long, bool, bool, std::array<bool, 2ul>) + 0x223 (0x7f5b07678203 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cuda.so)
frame #11: + 0xf3ce25 (0x7f5b076dee25 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cuda.so)
frame #12: + 0xf6cbb4 (0x7f5b0770ebb4 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cuda.so)
frame #13: at::cudnn_convolution_backward(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef, c10::ArrayRef, c10::ArrayRef, long, bool, bool, std::array<bool, 2ul>) + 0x1e2 (0x7f5b3e594242 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so)
frame #14: + 0x2ec9c62 (0x7f5b40257c62 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so)
frame #15: + 0x2ede224 (0x7f5b4026c224 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so)
frame #16: at::cudnn_convolution_backward(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef, c10::ArrayRef, c10::ArrayRef, long, bool, bool, std::array<bool, 2ul>) + 0x1e2 (0x7f5b3e594242 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so)
frame #17: torch::autograd::generated::CudnnConvolutionBackward::apply(std::vector<at::Tensor, std::allocatorat::Tensor >&&) + 0x258 (0x7f5b400dec38 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so)
frame #18: + 0x3375bb7 (0x7f5b40703bb7 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so)
frame #19: torch::autograd::Engine::evaluate_function(std::shared_ptrtorch::autograd::GraphTask&, torch::autograd::Node*, torch::autograd::InputBuffer&, std::shared_ptrtorch::autograd::ReadyQueue const&) + 0x1400 (0x7f5b406ff400 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so)
frame #20: torch::autograd::Engine::thread_main(std::shared_ptrtorch::autograd::GraphTask const&) + 0x451 (0x7f5b406fffa1 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so)
frame #21: torch::autograd::Engine::thread_init(int, std::shared_ptrtorch::autograd::ReadyQueue const&, bool) + 0x89 (0x7f5b406f8119 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so)
frame #22: torch::autograd::python::PythonEngine::thread_init(int, std::shared_ptrtorch::autograd::ReadyQueue const&, bool) + 0x4a (0x7f5b4de9834a in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
frame #23: + 0xbd6df (0x7f5b6af1a6df in /usr/lib/x86_64-linux-gnu/libstdc++.so.6)
frame #24: + 0x76db (0x7f5b6bffc6db in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #25: clone + 0x3f (0x7f5b6c335a3f in /lib/x86_64-linux-gnu/libc.so.6)
The above is the error
output = fcn_resnet_50(images)[“out”]
labels=labels.long()
lossi = loss(output, labels)
lossi.backward()
optimizer.step()
number of classes match with number of levels
link of colab
colab.research.google.com
Google Colaboratory |
st49835 | Could you post an executable code snippet to reproduce this issue, please?
Also, which PyTorch and CUDA versions are you using? |
st49836 | Thank you for the quick reply. But it worked out; I was giving the wrong number of classes, so that is the reason it was giving the CUDA error |
st49837 | I am working on wrapping a function written in C++ to PyTorch. The C++ function takes 3 floats and produces 1 float.
Is there a C++ PyTorch API to perform the broadcasting of the wrapped function?
Probably something like np.nditer in numpy. |
st49838 | I think tensors will be automatically broadcasted in the C++ API in the same manner as in Python, won’t they?
To trigger broadcasting you could unsqueeze dimensions if necessary.
Let me know, if I misunderstood your use case. |
st49839 | Thanks for your answer.
I have a C++ code with signature double some_fcn(double a, double b), and I want to wrap it to PyTorch function Tensor some_fcn_wrap(Tensor& a, Tensor& b) with some broadcasting capability.
The problem of wrapping it is I need to implement the broadcasting inside the function, but I’m sure there’s a function (or class) in PyTorch’s C++ source code to do that.
That’s what I’m looking for. Something like numpy.nditer 1. |
st49840 | Hey,
I have a workload that has to take a model back and forth to cuda and cpu:
for i in range(iterations):
with torch.no_grad():
model.to('cuda')
... operations ...
with torch.no_grad():
model.to('cpu')
This code leads to memory overflow on CPU depending on how many iterations there are.
Am I missing something? or is it a bug? |
st49841 | If I understand the issue correctly your (CPU) RAM is increasing by pushing the model back and forth?
Could you post an executable code snippet, which reproduces this issue so that we could debug it? |
st49842 | Hi, I implemented a custom dataset in which both features and labels are images. I'm unable to read the masks; it is showing an assertion error. Can someone help with this please? I'm stuck here. Please find my code below:
class DirDataset(Dataset):
    def __init__(self, img_dir, mask_dir):
        self.img_dir = img_dir
        self.mask_dir = mask_dir
        try:
            self.ids = [s.split('.')[0] for s in os.listdir(self.img_dir)]
        except FileNotFoundError:
            self.ids = []

    def __len__(self):
        return len(self.ids)

    def __getitem__(self, i):
        idx = self.ids[i]
        img_files = glob.glob(os.path.join(self.img_dir, idx+'.*'))
        mask_files = glob.glob(os.path.join(self.mask_dir, idx+'_mask.*'))
        assert len(img_files) == 1, f'{idx}: {img_files}'
        assert len(mask_files) == 1, f'{idx}: {mask_files}'
        # use Pillow's Image to read .gif mask
        # https://answers.opencv.org/question/185929/how-to-read-gif-in-python/
        img = Image.open(img_files[0])
        mask = Image.open(mask_files[0])
        assert img.size == mask.size, f'{img.shape} # {mask.shape}'
        # img = self.preprocess(img)
        # mask = self.preprocess(mask)
        return torch.from_numpy(img).float(), \
               torch.from_numpy(mask).float() |
st49843 | What kind of error are you seeing?
PS: you can post code snippets by wrapping them into three backticks ```, which makes debugging easier |
st49844 | I would like to draw the loss convergence for training and validation in a simple graph. So far I found out that PyTorch doesn’t offer any in-built function for that yet (at least none that speaks to me as a beginner). I think it might be the best to just use some matplotlib code. I couldn’t figure out how exactly to do it though. I would be happy if somebody could give me hints how to incorporate the necessary code into my training & validation code:
def train_model(learning_rate, l2_penalty, epochs):
print(str(datetime.datetime.now()).split('.')[0], "Starting training and validation...\n")
print("====================Data and Hyperparameter Overview====================\n")
print("Number of training examples: %d, Number of validation examples: %d" %(len(training_dataframe), len(validation_dataframe)))
print("Learning rate: %.5f, Embedding Dimension: %d, Hidden Size: %d, Dropout: %.2f, L2:%.10f\n" %(learning_rate, emb_dim, encoder.hidden_size, encoder.p_dropout, l2_penalty))
print("================================Results...==============================\n")
optimizer = torch.optim.Adam(dual_encoder.parameters(), lr = learning_rate, weight_decay = l2_penalty)#*
loss_func = torch.nn.BCEWithLogitsLoss()
loss_func.cuda()
best_validation_accuracy = 0.0
for epoch in range(epochs):
shuffle_dataframe(training_dataframe)
sum_loss_training = 0.0
training_correct_count = 0
dual_encoder.train()
for index, row in training_dataframe.iterrows():
context_ids, response_ids, label = load_ids_and_labels(row, word_to_id)
context = autograd.Variable(torch.LongTensor(context_ids).view(-1,1), requires_grad = False).cuda()
response = autograd.Variable(torch.LongTensor(response_ids).view(-1, 1), requires_grad = False).cuda()
label = autograd.Variable(torch.FloatTensor(torch.from_numpy(np.array(label).reshape(1,1))), requires_grad = False).cuda()
score = dual_encoder(context, response)
loss = loss_func(score, label)
sum_loss_training += loss.data[0]
loss.backward()
optimizer.step()
optimizer.zero_grad()
training_correct_count = increase_count(training_correct_count, score, label)
training_accuracy = get_accuracy(training_correct_count, training_dataframe)
shuffle_dataframe(validation_dataframe)
validation_correct_count = 0
sum_loss_validation = 0.0
dual_encoder.eval()
for index, row in validation_dataframe.iterrows():
context_ids, response_ids, label = load_ids_and_labels(row, word_to_id)
context = autograd.Variable(torch.LongTensor(context_ids).view(-1,1)).cuda()
response = autograd.Variable(torch.LongTensor(response_ids).view(-1, 1)).cuda()
label = autograd.Variable(torch.FloatTensor(torch.from_numpy(np.array(label).reshape(1,1)))).cuda()
score = dual_encoder(context, response)
loss = loss_func(score, label)
sum_loss_validation += loss.data[0]
validation_correct_count = increase_count(validation_correct_count, score, label)
validation_accuracy = get_accuracy(validation_correct_count, validation_dataframe)
print(str(datetime.datetime.now()).split('.')[0],
"Epoch: %d/%d" %(epoch,epochs),
"TrainLoss: %.3f" %(sum_loss_training/len(training_dataframe)),
"TrainAccuracy: %.3f" %(training_accuracy),
"ValLoss: %.3f" %(sum_loss_validation/len(validation_dataframe)),
"ValAccuracy: %.3f" %(validation_accuracy))
if validation_accuracy > best_validation_accuracy:
best_validation_accuracy = validation_accuracy
torch.save(dual_encoder.state_dict(), '/output/saved_model_%d_examples.pt' %(len(training_dataframe)))
print("New best found and saved.")
print(str(datetime.datetime.now()).split('.')[0], "Training and validation epochs finished.") |
st49845 | Try visdom! https://github.com/facebookresearch/visdom 5.4k. Specifically vis.line 1.3k. |
st49846 | Visdom will create a small web application and send all plots to it, so that you can just let your model train on a server and see e.g. the loss curves, segmentation output etc. in your web browser.
Matplotlib will create the plots on your machine, which is sometimes not desired, e.g. if you just connect via SSH. |
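If you do want to stay with matplotlib, as asked above, a minimal sketch is to collect the per-epoch averages in plain Python lists and plot them after (or during) training; train_losses and val_losses below are assumed to be appended to inside the epoch loop:

```python
import matplotlib.pyplot as plt

train_losses = []  # append sum_loss_training / len(training_dataframe) each epoch
val_losses = []    # append sum_loss_validation / len(validation_dataframe) each epoch

# ... training loop fills the two lists ...

plt.plot(train_losses, label='train loss')
plt.plot(val_losses, label='validation loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.savefig('loss_curves.png')  # or plt.show() when running with a display
```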
st49847 | I think this tutorial is very useful for the task.
https://pytorch.org/tutorials/beginner/finetuning_torchvision_models_tutorial.html#model-training-and-validation-code 2.8k |
st49848 | I wrote livelossplot 188 (live training loss plot in Jupyter Notebook for Keras, PyTorch and others) exactly for that purpose.
PyTorch example: https://github.com/stared/livelossplot/blob/master/examples/pytorch.ipynb 645 |
st49849 | I concur with using visdom.
I recommend putting the code that handles drawing the visualizations in a different module for clarity, e.g.:
# module visualizations.py
from datetime import datetime
import visdom
class Visualizations:
def __init__(self, env_name=None):
if env_name is None:
env_name = str(datetime.now().strftime("%d-%m %Hh%M"))
self.env_name = env_name
self.vis = visdom.Visdom(env=self.env_name)
self.loss_win = None
def plot_loss(self, loss, step):
self.loss_win = self.vis.line(
[loss],
[step],
win=self.loss_win,
update='append' if self.loss_win else None,
opts=dict(
xlabel='Step',
ylabel='Loss',
title='Loss (mean per 10 steps)',
)
)
# module train.py
(...)
def train():
(...)
# Initialize the visualization environment
vis = Visualizations()
# Training loop
loss_values = []
for step, input in enumerate(loader):
# Forward pass
output = model(input)
loss = model.loss(output)
loss_values.append(loss.item())
# Backward pass
model.zero_grad()
loss.backward()
optimizer.step()
# Visualization data
if step % 10 == 0:
vis.plot_loss(np.mean(loss_values), step)
loss_values.clear()
Visdom is partially documented on its github page, and I found the demo 240 they provide to answer a lot of my questions on the usage. For more complex plots, use matplotlib and embed them using vis.matplot 54.
Here is an example of a visdom environment (screenshot omitted). |
st49850 | I’m not sure, as I’m not using Colab that much.
However, you could have a look at this Visdom issue 98, so that the Visdom devs could also have a look. |
st49851 | Livelossplot was very helpful. It was exactly what I needed for my notebooks. I used it with both PyTorch and Keras and the usage was very straightforward. |
st49852 | I have n points as a tensor of shape (n, 2), where the second dimension holds the x and y coordinates on an image. I want to convert them to a tensor of shape (n, W, H), where W and H are the image dimensions and each point is marked at its coordinate.
How can I do this without a for loop?
Thanks |
st49853 | Solved by ptrblck in post #2 |
st49854 | If you want to set a specific value for these coordinates into the image, this code should work:
img = torch.zeros(4, 5, 5)
x = torch.randint(0, 5, (4, 2))
img[torch.arange(img.size(0)), x[:, 0], x[:, 1]] = 1.
print(img) |
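As a small follow-up (not part of the original answer), the same advanced indexing generalizes to arbitrary image sizes and to per-point values:

```python
import torch

n, H, W = 4, 32, 48
img = torch.zeros(n, H, W)
coords = torch.stack([torch.randint(0, H, (n,)),
                      torch.randint(0, W, (n,))], dim=1)  # (n, 2) row/col coordinates
values = torch.rand(n)

# write one value per image at that image's coordinate
img[torch.arange(n), coords[:, 0], coords[:, 1]] = values
```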
st49855 | I'm trying to install PyTorch and torchvision (pip3 install torch==1.2.0+cpu torchvision==0.4.0+cpu -f https://download.pytorch.org/whl/torch_stable.html 1) on my Ubuntu AWS machine (ubuntu@ip-172-31-23-180), but I'm getting an error (terminal screenshot omitted). |
st49856 | Unsure, but maybe you could install the wheel directly from the URL (with pipenv?).
Btw. what’s your use case that you want to install such an old PyTorch version? |
st49857 | I am a bit new to ML and PyTorch and I’ve been trying to get to grips with this all. I’m trying to train an LSTM that gives me probabilites for my multi-class data. For example, I have a sentence and each sentence can be given a label from 0 to 4 (and only one label, so it’s not multi-label classification).
Right now, I was working from an example (using my own data), but it seems that was for binary classification. When passing y during training, it is just a tensor of labels corresponding to each sentence, like [1,4,2,0,2,2,3,1] (so that would be a batch of 8 sentences, each given one label). If I want to do multi-class classification, must I pass y so that it looks like [0,0,1,0,0], which corresponds to label 2? I'm just not sure in what form my labels must be passed.
Thanks in advance. |
st49858 | The first approach of passing the class indices to the criterion, which should be nn.CrossEntropyLoss or nn.NLLLoss, is correct. Both loss functions expect a target containing the class indices and are not expecting a one-hot encoded target (which would be the second approach). |
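A small shape sketch for a 5-class sentence classifier (the logits are random placeholders standing in for the LSTM output):

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

logits = torch.randn(8, 5)                        # [batch_size, num_classes] raw scores from the model
targets = torch.tensor([1, 4, 2, 0, 2, 2, 3, 1])  # class indices, not one-hot vectors

loss = criterion(logits, targets)
print(loss.item())
```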
st49859 | Hi. I am really new to deep learning and I am trying to create a school project using the ESNet architecture. For now I just want to test it with a sample dataset (Cats vs. Dogs), but I keep getting the error:
Expected input batch_size (4) to match target batch_size (1). |
st49860 | THIS IS THE ARCHITECTURE THAT I USED
#ESNet: An Efficient Symmetric Network for Real-time Semantic Segmentation
#Paper-Link: https://arxiv.org/pdf/1906.09826.pdf 1
###################################################################################################
import torch
import torch.nn as nn
import torch.nn.init as init
import torch.nn.functional as F
from torchsummary import summary
class DownsamplerBlock(nn.Module):
def __init__(self, ninput, noutput):
super().__init__()
self.conv = nn.Conv2d(ninput, noutput-ninput, (3, 3), stride=2, padding=1, bias=True)
self.pool = nn.MaxPool2d(2, stride=2)
self.bn = nn.BatchNorm2d(noutput, eps=1e-3)
self.relu = nn.ReLU(inplace=True)
def forward(self, input):
x1 = self.pool(input)
x2 = self.conv(input)
diffY = x2.size()[2] - x1.size()[2]
diffX = x2.size()[3] - x1.size()[3]
x1 = F.pad(x1, [diffX // 2, diffX - diffX // 2,
diffY // 2, diffY - diffY // 2])
print(x1.shape)
output = torch.cat([x2, x1], 1)
output = self.bn(output)
output = self.relu(output)
return output
class UpsamplerBlock (nn.Module):
def __init__(self, ninput, noutput):
super().__init__()
self.conv = nn.ConvTranspose2d(ninput, noutput, 3, stride=2, padding=1, output_padding=1, bias=True)
self.bn = nn.BatchNorm2d(noutput, eps=1e-3)
def forward(self, input):
output = self.conv(input)
output = self.bn(output)
return F.relu(output)
class FCU(nn.Module):
def __init__(self, chann, kernel_size,dropprob, dilated):
"""
Factorized Convolution Unit
"""
super(FCU,self).__init__()
padding = int((kernel_size-1)//2) * dilated
self.conv3x1_1 = nn.Conv2d(chann, chann, (kernel_size,1), stride=1, padding=(int((kernel_size-1)//2)*1,0), bias=True)
self.conv1x3_1 = nn.Conv2d(chann, chann, (1,kernel_size), stride=1, padding=(0,int((kernel_size-1)//2)*1), bias=True)
self.bn1 = nn.BatchNorm2d(chann, eps=1e-03)
self.conv3x1_2 = nn.Conv2d(chann, chann, (kernel_size,1), stride=1, padding=(padding,0), bias=True, dilation = (dilated,1))
self.conv1x3_2 = nn.Conv2d(chann, chann, (1,kernel_size), stride=1, padding=(0,padding), bias=True, dilation = (1, dilated))
self.bn2 = nn.BatchNorm2d(chann, eps=1e-03)
self.relu = nn.ReLU(inplace = True)
self.dropout = nn.Dropout2d(dropprob)
def forward(self, input):
residual = input
output = self.conv3x1_1(input)
output = self.relu(output)
output = self.conv1x3_1(output)
output = self.bn1(output)
output = self.relu(output)
output = self.conv3x1_2(output)
output = self.relu(output)
output = self.conv1x3_2(output)
output = self.bn2(output)
if (self.dropout.p != 0):
output = self.dropout(output)
return F.relu(residual+output,inplace=True)
class PFCU(nn.Module):
def __init__(self,chann):
"""
Parallel Factorized Convolution Unit
"""
super(PFCU,self).__init__()
self.conv3x1_1 = nn.Conv2d(chann, chann, (3,1), stride=1, padding=(1,0), bias=True)
self.conv1x3_1 = nn.Conv2d(chann, chann, (1,3), stride=1, padding=(0,1), bias=True)
self.bn1 = nn.BatchNorm2d(chann, eps=1e-03)
self.conv3x1_22 = nn.Conv2d(chann, chann, (3,1), stride=1, padding=(2,0), bias=True, dilation = (2,1))
self.conv1x3_22 = nn.Conv2d(chann, chann, (1,3), stride=1, padding=(0,2), bias=True, dilation = (1,2))
self.conv3x1_25 = nn.Conv2d(chann, chann, (3,1), stride=1, padding=(5,0), bias=True, dilation = (5,1))
self.conv1x3_25 = nn.Conv2d(chann, chann, (1,3), stride=1, padding=(0,5), bias=True, dilation = (1,5))
self.conv3x1_29 = nn.Conv2d(chann, chann, (3,1), stride=1, padding=(9,0), bias=True, dilation = (9,1))
self.conv1x3_29 = nn.Conv2d(chann, chann, (1,3), stride=1, padding=(0,9), bias=True, dilation = (1,9))
self.bn2 = nn.BatchNorm2d(chann, eps=1e-03)
self.dropout = nn.Dropout2d(0.3)
def forward(self, input):
residual = input
output = self.conv3x1_1(input)
output = F.relu(output)
output = self.conv1x3_1(output)
output = self.bn1(output)
output = F.relu(output)
output2 = self.conv3x1_22(output)
output2 = F.relu(output2)
output2 = self.conv1x3_22(output2)
output2 = self.bn2(output2)
if (self.dropout.p != 0):
output2 = self.dropout(output2)
output5 = self.conv3x1_25(output)
output5 = F.relu(output5)
output5 = self.conv1x3_25(output5)
output5 = self.bn2(output5)
if (self.dropout.p != 0):
output5 = self.dropout(output5)
output9 = self.conv3x1_29(output)
output9 = F.relu(output9)
output9 = self.conv1x3_29(output9)
output9 = self.bn2(output9)
if (self.dropout.p != 0):
output9 = self.dropout(output9)
return F.relu(residual+output2+output5+output9,inplace=True)
class ESNet(nn.Module):
def __init__(self, classes):
super().__init__()
#-----ESNET---------#
self.initial_block = DownsamplerBlock(1, 16)
self.layers = nn.ModuleList()
for x in range(0, 3):
self.layers.append(FCU(16, 3, 0.03, 1))
self.layers.append(DownsamplerBlock(16,64))
for x in range(0, 2):
self.layers.append(FCU(64, 5, 0.03, 1))
self.layers.append(DownsamplerBlock(64,128))
for x in range(0, 3):
self.layers.append(PFCU(chann=128))
self.layers.append(UpsamplerBlock(128,64))
self.layers.append(FCU(64, 5, 0, 1))
self.layers.append(FCU(64, 5, 0, 1))
self.layers.append(UpsamplerBlock(64,16))
self.layers.append(FCU(16, 3, 0, 1))
self.layers.append(FCU(16, 3, 0, 1))
self.output_conv = nn.ConvTranspose2d( 16, classes, 2, stride=2, padding=0, output_padding=0, bias=True)
def forward(self, input):
output = self.initial_block(input)
for layer in self.layers:
output = layer(output)
output = self.output_conv(output)
return output
"""print layers and params of network"""
if __name__ == '__main__':
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = ESNet(classes=2).to(device)
    summary(model,(1,100,100)) |
st49861 | and this is the rest of my code:
import numpy as np
from tqdm import tqdm
import torch.optim as optim
training_data = np.load("/content/thirdy/training_data_grayscale.npy", allow_pickle = True)
x = torch.Tensor([i[0] for i in training_data]).view(-1, 100, 100)
y = torch.Tensor([i[1] for i in training_data])
VAL_PCT = 0.1
val_size = int(len(x)*VAL_PCT)
print(val_size)
train_x = x[:-val_size]
train_y = y[:-val_size]
test_x = x[-val_size:]
test_y = y[-val_size:]
import torch.optim as optim
optimizer = optim.Adam(model.parameters(), lr = 0.001)
loss_function = nn.CrossEntropyLoss()
BATCH_SIZE = 1
EPOCHS = 1
for epoch in range(EPOCHS):
for i in tqdm(range(0, len(train_x), BATCH_SIZE)):
batch_x = train_x[i:i+BATCH_SIZE].view(-1, 1, 50, 50)
print(batch_x.shape)
batch_y = train_y[i:i+BATCH_SIZE]
print(batch_y.shape)
model.zero_grad()
outputs = model(batch_x)
loss = loss_function(outputs, batch_y)
loss.backward()
optimizer.step() # Does the update
print(f"Epoch: {epoch}. Loss: {loss}") |
st49862 | and this is the error I always get no matter how i try to apply the suggestions i found online:
0%| | 0/22452 [00:00<?, ?it/s]torch.Size([4, 1, 50, 50])
torch.Size([1, 2])
torch.Size([4, 2, 56, 56])
ValueError Traceback (most recent call last)
in ()
13 outputs = model(batch_x)
14 outputs = outputs.squeeze(0)
—> 15 loss = loss_function(outputs, batch_y)
16 loss.backward()
17 optimizer.step() # Does the update
3 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
2214 if input.size(0) != target.size(0):
2215 raise ValueError(‘Expected input batch_size ({}) to match target batch_size ({}).’
-> 2216 .format(input.size(0), target.size(0)))
2217 if dim == 2:
2218 ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
ValueError: Expected input batch_size (4) to match target batch_size (1). |
st49863 | I guess you are reshaping the intermediate tensors somewhere in your model and are reducing the batch size to 1.
Your current code is not easily readable, so please feel free to post an executable code snippet to reproduce this error by wrapping it into three backticks ``` |
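To make the guess above concrete: viewing a single 100x100 image as 50x50 tensors silently turns one sample into four, which matches the 4-vs-1 mismatch in the error message (a sketch with random data, not the original poster's images):

```python
import torch

batch_x = torch.randn(1, 1, 100, 100)   # one grayscale 100x100 image
reshaped = batch_x.view(-1, 1, 50, 50)  # view only reinterprets memory: 100*100 == 4 * 50*50
print(reshaped.shape)                   # torch.Size([4, 1, 50, 50]) -> batch size silently became 4
```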
st49864 | I am really new to this, sir, and am still trying to understand how networks work; I find it quite tricky, so any helpful comments are welcome! By the way, this is my complete code, and the error was different this time and I have no idea why!
###################################################################################################
#ESNet: An Efficient Symmetric Network for Real-time Semantic Segmentation
#Paper-Link: https://arxiv.org/pdf/1906.09826.pdf
###################################################################################################
import torch
import torch.nn as nn
import torch.nn.init as init
import torch.nn.functional as F
from torchsummary import summary
class DownsamplerBlock(nn.Module):
def __init__(self, ninput, noutput):
super().__init__()
self.conv = nn.Conv2d(ninput, noutput-ninput, (3, 3), stride=2, padding=1, bias=True)
self.pool = nn.MaxPool2d(2, stride=2)
self.bn = nn.BatchNorm2d(noutput, eps=1e-3)
self.relu = nn.ReLU(inplace=True)
def forward(self, input):
x1 = self.pool(input)
x2 = self.conv(input)
diffY = x2.size()[2] - x1.size()[2]
diffX = x2.size()[3] - x1.size()[3]
x1 = F.pad(x1, [diffX // 2, diffX - diffX // 2,
diffY // 2, diffY - diffY // 2])
output = torch.cat([x2, x1], 1)
output = self.bn(output)
output = self.relu(output)
return output
class UpsamplerBlock (nn.Module):
def __init__(self, ninput, noutput):
super().__init__()
self.conv = nn.ConvTranspose2d(ninput, noutput, 3, stride=2, padding=1, output_padding=1, bias=True)
self.bn = nn.BatchNorm2d(noutput, eps=1e-3)
def forward(self, input):
output = self.conv(input)
output = self.bn(output)
return F.relu(output)
class FCU(nn.Module):
def __init__(self, chann, kernel_size,dropprob, dilated):
"""
Factorized Convolution Unit
"""
super(FCU,self).__init__()
padding = int((kernel_size-1)//2) * dilated
self.conv3x1_1 = nn.Conv2d(chann, chann, (kernel_size,1), stride=1, padding=(int((kernel_size-1)//2)*1,0), bias=True)
self.conv1x3_1 = nn.Conv2d(chann, chann, (1,kernel_size), stride=1, padding=(0,int((kernel_size-1)//2)*1), bias=True)
self.bn1 = nn.BatchNorm2d(chann, eps=1e-03)
self.conv3x1_2 = nn.Conv2d(chann, chann, (kernel_size,1), stride=1, padding=(padding,0), bias=True, dilation = (dilated,1))
self.conv1x3_2 = nn.Conv2d(chann, chann, (1,kernel_size), stride=1, padding=(0,padding), bias=True, dilation = (1, dilated))
self.bn2 = nn.BatchNorm2d(chann, eps=1e-03)
self.relu = nn.ReLU(inplace = True)
self.dropout = nn.Dropout2d(dropprob)
def forward(self, input):
residual = input
output = self.conv3x1_1(input)
output = self.relu(output)
output = self.conv1x3_1(output)
output = self.bn1(output)
output = self.relu(output)
output = self.conv3x1_2(output)
output = self.relu(output)
output = self.conv1x3_2(output)
output = self.bn2(output)
if (self.dropout.p != 0):
output = self.dropout(output)
return F.relu(residual+output,inplace=True)
class PFCU(nn.Module):
def __init__(self,chann):
"""
Parallel Factorized Convolution Unit
"""
super(PFCU,self).__init__()
self.conv3x1_1 = nn.Conv2d(chann, chann, (3,1), stride=1, padding=(1,0), bias=True)
self.conv1x3_1 = nn.Conv2d(chann, chann, (1,3), stride=1, padding=(0,1), bias=True)
self.bn1 = nn.BatchNorm2d(chann, eps=1e-03)
self.conv3x1_22 = nn.Conv2d(chann, chann, (3,1), stride=1, padding=(2,0), bias=True, dilation = (2,1))
self.conv1x3_22 = nn.Conv2d(chann, chann, (1,3), stride=1, padding=(0,2), bias=True, dilation = (1,2))
self.conv3x1_25 = nn.Conv2d(chann, chann, (3,1), stride=1, padding=(5,0), bias=True, dilation = (5,1))
self.conv1x3_25 = nn.Conv2d(chann, chann, (1,3), stride=1, padding=(0,5), bias=True, dilation = (1,5))
self.conv3x1_29 = nn.Conv2d(chann, chann, (3,1), stride=1, padding=(9,0), bias=True, dilation = (9,1))
self.conv1x3_29 = nn.Conv2d(chann, chann, (1,3), stride=1, padding=(0,9), bias=True, dilation = (1,9))
self.bn2 = nn.BatchNorm2d(chann, eps=1e-03)
self.dropout = nn.Dropout2d(0.3)
def forward(self, input):
residual = input
output = self.conv3x1_1(input)
output = F.relu(output)
output = self.conv1x3_1(output)
output = self.bn1(output)
output = F.relu(output)
output2 = self.conv3x1_22(output)
output2 = F.relu(output2)
output2 = self.conv1x3_22(output2)
output2 = self.bn2(output2)
if (self.dropout.p != 0):
output2 = self.dropout(output2)
output5 = self.conv3x1_25(output)
output5 = F.relu(output5)
output5 = self.conv1x3_25(output5)
output5 = self.bn2(output5)
if (self.dropout.p != 0):
output5 = self.dropout(output5)
output9 = self.conv3x1_29(output)
output9 = F.relu(output9)
output9 = self.conv1x3_29(output9)
output9 = self.bn2(output9)
if (self.dropout.p != 0):
output9 = self.dropout(output9)
return F.relu(residual+output2+output5+output9,inplace=True)
class ESNet(nn.Module):
def __init__(self, classes):
super().__init__()
#-----ESNET---------#
self.initial_block = DownsamplerBlock(3, 16)
self.layers = nn.ModuleList()
for x in range(0, 3):
self.layers.append(FCU(16, 3, 0.03, 1))
self.layers.append(DownsamplerBlock(16,64))
for x in range(0, 2):
self.layers.append(FCU(64, 5, 0.03, 1))
self.layers.append(DownsamplerBlock(64,128))
for x in range(0, 3):
self.layers.append(PFCU(chann=128))
self.layers.append(UpsamplerBlock(128,64))
self.layers.append(FCU(64, 5, 0, 1))
self.layers.append(FCU(64, 5, 0, 1))
self.layers.append(UpsamplerBlock(64,16))
self.layers.append(FCU(16, 3, 0, 1))
self.layers.append(FCU(16, 3, 0, 1))
self.output_conv = nn.ConvTranspose2d( 16, classes, 2, stride=2, padding=0, output_padding=0, bias=True)
def forward(self, input):
output = self.initial_block(input)
for layer in self.layers:
output = layer(output)
output = self.output_conv(output)
return output
"""print layers and params of network"""
if __name__ == '__main__':
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = ESNet(classes=2).to(device)
summary(model,(3,100,100))
import numpy as np
from tqdm import tqdm
import torch.optim as optim
training_data = np.load("/content/thirdy/training_data.npy", allow_pickle = True)
x = torch.Tensor([i[0] for i in training_data]).view(-1, 3, 100, 100)
print(x.shape)
y = torch.Tensor([i[1] for i in training_data])
print(y.shape)
VAL_PCT = 0.1
val_size = int(len(x)*VAL_PCT)
print(val_size)
train_x = x[:-val_size]
train_y = y[:-val_size]
test_x = x[-val_size:]
test_y = y[-val_size:]
print(len(train_x))
print(len(test_x))
import torch.optim as optim
optimizer = optim.Adam(model.parameters(), lr = 0.001)
loss_function = nn.CrossEntropyLoss()
BATCH_SIZE = 10
EPOCHS = 1
for epoch in range(EPOCHS):
for i in tqdm(range(0, len(train_x), BATCH_SIZE)):
batch_x = train_x[i:i+BATCH_SIZE].view(-1, 3, 100, 100)
print(batch_x.shape)
batch_y = train_y[i:i+BATCH_SIZE].squeeze(1)
print(batch_y.shape)
model.zero_grad()
outputs = model(batch_x)
loss = loss_function(outputs, batch_y)
loss.backward()
optimizer.step() # Does the update
print(f"Epoch: {epoch}. Loss: {loss}")``` |
st49865 | Thanks for the code. Unfortunately it’s not executable, but based on the view operation I assume your input has the shape [batch_size, 3, 100, 100].
Based on this shape the output would have the shape [batch_size, 2, 104, 104] and thus the target should have the shape [batch_size, 104, 104] and contain values in [0, 1].
Using these shapes, the script works fine so you could check the shapes of your inputs, outputs, and targets and make sure they have the mentioned shapes. |
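A shape sketch of what the criterion expects here, using random tensors just to illustrate the check:

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

output = torch.randn(10, 2, 104, 104)         # [batch_size, classes, H, W] from the model
target = torch.randint(0, 2, (10, 104, 104))  # [batch_size, H, W] with class indices 0 or 1

loss = criterion(output, target)
print(loss.item())
```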
st49866 | Thank you so much sir for your response. But as a newbie I don’t understand why my model should have the shapes you have stated above, specifically the output’s shape and the target shape. Lastly, I still don’t fully understand what target really is. Can you please give an explanation? Thanks! |
st49867 | By the way sir, this is my complete code (I forgot to include the repo where I store my custom dataset in the code I posted previously):
!git clone https://gitlab.com/mariacassie/thirdy.git
###################################################################################################
#ESNet: An Efficient Symmetric Network for Real-time Semantic Segmentation
#Paper-Link: https://arxiv.org/pdf/1906.09826.pdf
###################################################################################################
import torch
import torch.nn as nn
import torch.nn.init as init
import torch.nn.functional as F
from torchsummary import summary
class DownsamplerBlock(nn.Module):
def __init__(self, ninput, noutput):
super().__init__()
self.conv = nn.Conv2d(ninput, noutput-ninput, (3, 3), stride=2, padding=1, bias=True)
self.pool = nn.MaxPool2d(2, stride=2)
self.bn = nn.BatchNorm2d(noutput, eps=1e-3)
self.relu = nn.ReLU(inplace=True)
def forward(self, input):
x1 = self.pool(input)
x2 = self.conv(input)
diffY = x2.size()[2] - x1.size()[2]
diffX = x2.size()[3] - x1.size()[3]
x1 = F.pad(x1, [diffX // 2, diffX - diffX // 2,
diffY // 2, diffY - diffY // 2])
output = torch.cat([x2, x1], 1)
output = self.bn(output)
output = self.relu(output)
return output
class UpsamplerBlock (nn.Module):
def __init__(self, ninput, noutput):
super().__init__()
self.conv = nn.ConvTranspose2d(ninput, noutput, 3, stride=2, padding=1, output_padding=1, bias=True)
self.bn = nn.BatchNorm2d(noutput, eps=1e-3)
def forward(self, input):
output = self.conv(input)
output = self.bn(output)
return F.relu(output)
class FCU(nn.Module):
def __init__(self, chann, kernel_size,dropprob, dilated):
"""
Factorized Convolution Unit
"""
super(FCU,self).__init__()
padding = int((kernel_size-1)//2) * dilated
self.conv3x1_1 = nn.Conv2d(chann, chann, (kernel_size,1), stride=1, padding=(int((kernel_size-1)//2)*1,0), bias=True)
self.conv1x3_1 = nn.Conv2d(chann, chann, (1,kernel_size), stride=1, padding=(0,int((kernel_size-1)//2)*1), bias=True)
self.bn1 = nn.BatchNorm2d(chann, eps=1e-03)
self.conv3x1_2 = nn.Conv2d(chann, chann, (kernel_size,1), stride=1, padding=(padding,0), bias=True, dilation = (dilated,1))
self.conv1x3_2 = nn.Conv2d(chann, chann, (1,kernel_size), stride=1, padding=(0,padding), bias=True, dilation = (1, dilated))
self.bn2 = nn.BatchNorm2d(chann, eps=1e-03)
self.relu = nn.ReLU(inplace = True)
self.dropout = nn.Dropout2d(dropprob)
def forward(self, input):
residual = input
output = self.conv3x1_1(input)
output = self.relu(output)
output = self.conv1x3_1(output)
output = self.bn1(output)
output = self.relu(output)
output = self.conv3x1_2(output)
output = self.relu(output)
output = self.conv1x3_2(output)
output = self.bn2(output)
if (self.dropout.p != 0):
output = self.dropout(output)
return F.relu(residual+output,inplace=True)
class PFCU(nn.Module):
def __init__(self,chann):
"""
Parallel Factorized Convolution Unit
"""
super(PFCU,self).__init__()
self.conv3x1_1 = nn.Conv2d(chann, chann, (3,1), stride=1, padding=(1,0), bias=True)
self.conv1x3_1 = nn.Conv2d(chann, chann, (1,3), stride=1, padding=(0,1), bias=True)
self.bn1 = nn.BatchNorm2d(chann, eps=1e-03)
self.conv3x1_22 = nn.Conv2d(chann, chann, (3,1), stride=1, padding=(2,0), bias=True, dilation = (2,1))
self.conv1x3_22 = nn.Conv2d(chann, chann, (1,3), stride=1, padding=(0,2), bias=True, dilation = (1,2))
self.conv3x1_25 = nn.Conv2d(chann, chann, (3,1), stride=1, padding=(5,0), bias=True, dilation = (5,1))
self.conv1x3_25 = nn.Conv2d(chann, chann, (1,3), stride=1, padding=(0,5), bias=True, dilation = (1,5))
self.conv3x1_29 = nn.Conv2d(chann, chann, (3,1), stride=1, padding=(9,0), bias=True, dilation = (9,1))
self.conv1x3_29 = nn.Conv2d(chann, chann, (1,3), stride=1, padding=(0,9), bias=True, dilation = (1,9))
self.bn2 = nn.BatchNorm2d(chann, eps=1e-03)
self.dropout = nn.Dropout2d(0.3)
def forward(self, input):
residual = input
output = self.conv3x1_1(input)
output = F.relu(output)
output = self.conv1x3_1(output)
output = self.bn1(output)
output = F.relu(output)
output2 = self.conv3x1_22(output)
output2 = F.relu(output2)
output2 = self.conv1x3_22(output2)
output2 = self.bn2(output2)
if (self.dropout.p != 0):
output2 = self.dropout(output2)
output5 = self.conv3x1_25(output)
output5 = F.relu(output5)
output5 = self.conv1x3_25(output5)
output5 = self.bn2(output5)
if (self.dropout.p != 0):
output5 = self.dropout(output5)
output9 = self.conv3x1_29(output)
output9 = F.relu(output9)
output9 = self.conv1x3_29(output9)
output9 = self.bn2(output9)
if (self.dropout.p != 0):
output9 = self.dropout(output9)
return F.relu(residual+output2+output5+output9,inplace=True)
class ESNet(nn.Module):
def __init__(self, classes):
super().__init__()
#-----ESNET---------#
self.initial_block = DownsamplerBlock(3, 16)
self.layers = nn.ModuleList()
for x in range(0, 3):
self.layers.append(FCU(16, 3, 0.03, 1))
self.layers.append(DownsamplerBlock(16,64))
for x in range(0, 2):
self.layers.append(FCU(64, 5, 0.03, 1))
self.layers.append(DownsamplerBlock(64,128))
for x in range(0, 3):
self.layers.append(PFCU(chann=128))
self.layers.append(UpsamplerBlock(128,64))
self.layers.append(FCU(64, 5, 0, 1))
self.layers.append(FCU(64, 5, 0, 1))
self.layers.append(UpsamplerBlock(64,16))
self.layers.append(FCU(16, 3, 0, 1))
self.layers.append(FCU(16, 3, 0, 1))
self.output_conv = nn.ConvTranspose2d( 16, classes, 2, stride=2, padding=0, output_padding=0, bias=True)
def forward(self, input):
output = self.initial_block(input)
for layer in self.layers:
output = layer(output)
output = self.output_conv(output)
return output
"""print layers and params of network"""
if __name__ == '__main__':
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = ESNet(classes=2).to(device)
summary(model,(3,100,100))
import numpy as np
from tqdm import tqdm
import torch.optim as optim
training_data = np.load("/content/thirdy/training_data.npy", allow_pickle = True)
x = torch.Tensor([i[0] for i in training_data]).view(-1, 3, 100, 100)
print(x.shape)
y = torch.Tensor([i[1] for i in training_data])
print(y.shape)
VAL_PCT = 0.1
val_size = int(len(x)*VAL_PCT)
print(val_size)
train_x = x[:-val_size]
train_y = y[:-val_size]
test_x = x[-val_size:]
test_y = y[-val_size:]
print(len(train_x))
print(len(test_x))
import torch.optim as optim
optimizer = optim.Adam(model.parameters(), lr = 0.001)
loss_function = nn.CrossEntropyLoss()
BATCH_SIZE = 100
EPOCHS = 1
for epoch in range(EPOCHS):
for i in tqdm(range(0, len(train_x), BATCH_SIZE)):
batch_x = train_x[i:i+BATCH_SIZE].view(-1, 3, 100, 100)
batch_y = train_y[i:i+BATCH_SIZE]
model.zero_grad()
outputs = model(batch_x)
print(outputs.shape)
loss = loss_function(outputs, batch_y)
loss.backward()
optimizer.step() # Does the update
print(f"Epoch: {epoch}. Loss: {loss}") |