st47068 | After .backward(), look at the embedding's .weight.grad attribute to get the updated indices. After optim.step(), clamp the weights at those indices. |
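A minimal sketch of that recipe (the embedding shape and the clamp range are illustrative assumptions, not from the post):
import torch
import torch.nn as nn

emb = nn.Embedding(10, 4)
optim = torch.optim.SGD(emb.parameters(), lr=0.1)

loss = emb(torch.tensor([1, 3])).sum()
loss.backward()
optim.step()

with torch.no_grad():
    # rows with a nonzero gradient are the ones that were just updated
    idx = emb.weight.grad.abs().sum(dim=1).nonzero().squeeze(1)
    emb.weight[idx] = emb.weight[idx].clamp(min=0.0)  # example clamp range (assumed)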
st47069 | Would the same effect be achieved if we add a ReLU layer after the embedding layer? |
st47070 | SimonW:
clamping
How did you manage to implement this in PyTorch? Would you mind sharing your code? |
st47071 | Hi!
My data loader:
class LoadData(Dataset):
    def __init__(self, ..., ...., ...):
        self.ns = (640, 640)
        .....
        .....
    def __getitem__(self, index):
        img = resize(img[index], self.ns)
        .....
        .....
        return img
    def set_size(self, ns):
        self.ns = ns
And the training loop:
for img in dataiterator:
    forward(img)
    backward()
    dataiterator.dataset.set_size(new_ns)
I would like to resize images dynamically after each iteration. When I do it as above, it does not work. Is there a simple trick to do that? Please help.
Best, |
st47072 | You won’t be able to manipulate the underlying .dataset through the DataLoader, if you are using multiple workers during the epoch, since each worker will use a separate copy of the Dataset.
You could manipulate the .dataset after each epoch and before the start of a new one (if persistent_workers=False) or you could iterate the Dataset and create the batches manually.
Alternatively, you could also try to forward specific arguments to the __getitem__ through a custom sampler to switch between different behaviors. |
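A minimal sketch of the sampler idea (resize and self.imgs are placeholders from the question, not a fixed API): the DataLoader passes whatever the sampler yields straight to dataset[item], so the sampler can smuggle extra arguments into __getitem__:
from torch.utils.data import Dataset, Sampler

class SizeSampler(Sampler):
    # yields (index, size) pairs instead of bare indices
    def __init__(self, length, sizes):
        self.length = length
        self.sizes = sizes
    def __iter__(self):
        for i in range(self.length):
            yield (i, self.sizes[i % len(self.sizes)])
    def __len__(self):
        return self.length

class LoadData(Dataset):
    def __getitem__(self, item):
        index, ns = item                    # unpack the index and the requested size
        img = resize(self.imgs[index], ns)  # resize/self.imgs as in the question
        return img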
st47073 | Thank you for your answer. Let me see how I can create a custom sampler and control the image size by sending an argument to __getitem__. |
st47074 | Hello
I am a beginner and I am trying to learn RCNN from TorchVision. I have 8 classes of objects, and I have written a dataset class with the __getitem__ below, where labels is a numpy array with the classes for the boxes on the image (for example [3, 5]).
def __getitem__(self, index):
    path_image = self.paths[index]
    image, boxes, labels, area = self.load_image_and_boxes(index)
    target = {}
    target['boxes'] = boxes
    target['labels'] = labels
    target['image_id'] = torch.tensor([index])
    target['area'] = area
    if self.transforms:
        sample = self.transforms(**{
            'image': image,
            'bboxes': target['boxes'],
            'labels': labels
        })
        image = sample['image']
        target['bboxes'] = torch.as_tensor(sample['bboxes'], dtype=torch.float32)
        target['bboxes'] = target['bboxes'].reshape(-1, 4)
        target['boxes'] = target['bboxes']
        del target['bboxes']
    else:
        target['boxes'] = torch.as_tensor(target['boxes'], dtype=torch.float32)
        target['boxes'] = target['boxes'].reshape(-1, 4)
    image = np.transpose(image, (2, 0, 1))
    image = torch.from_numpy(image)
    return image, target, path_image |
After training, the RCNN returns outputs with 8 items. This is one of them:
outputs[7]
{'boxes': tensor([[ 0.0000, 64.0552, 500.5320, 477.4697],
[ 22.2902, 121.1948, 360.7867, 475.3104],
[ 6.6014, 123.5036, 282.5276, 460.8509],
[206.0831, 77.8670, 505.2312, 443.2711],
[ 10.5778, 97.3200, 489.6339, 433.8982],
[ 16.5061, 66.0151, 495.6974, 367.8644],
[ 0.0000, 264.9781, 240.3872, 496.7976],
[141.8789, 51.5878, 508.8718, 501.8432],
[244.2009, 45.8100, 512.0000, 479.3023],
[243.4653, 92.4951, 512.0000, 474.3302],
[287.1657, 273.5756, 483.0078, 463.3998],
[ 14.1116, 115.1430, 275.1363, 383.5063],
[ 0.0000, 118.9819, 428.0742, 509.8760],
[336.0939, 133.6007, 501.2955, 395.3423],
[331.1056, 110.4328, 497.5609, 425.7774],
[259.7857, 140.4948, 505.6891, 472.5172],
[ 36.9824, 208.6160, 237.9554, 512.0000],
[ 19.1136, 186.5290, 151.8781, 499.3608],
[213.0657, 102.6356, 474.4019, 215.7374],
[ 0.0000, 266.3952, 198.0168, 487.8173],
[ 23.2956, 125.9192, 169.1113, 444.2588],
[ 64.2475, 91.6519, 478.7527, 428.1402],
[330.7071, 115.4189, 491.4273, 420.2543],
[ 0.0000, 96.2847, 219.0983, 512.0000],
[ 0.0000, 478.2526, 189.6664, 512.0000],
[ 11.6199, 131.2440, 319.1119, 458.8158]], grad_fn=),
'labels': tensor([2, 7, 2, 7, 6, 7, 2, 4, 2, 1, 4, 6, 4, 1, 7, 4, 4, 2, 4, 7, 6, 1, 2, 7,
3, 1]),
'scores': tensor([0.4158, 0.2980, 0.2887, 0.2255, 0.1818, 0.1811, 0.1805, 0.1661, 0.1560,
0.1539, 0.1526, 0.1276, 0.1204, 0.1092, 0.0959, 0.0930, 0.0845, 0.0823,
0.0796, 0.0788, 0.0780, 0.0750, 0.0691, 0.0663, 0.0524, 0.0515],
grad_fn=)}
So, I think I made a mistake in __getitem__, and that each item in outputs should only have boxes belonging to one class. But I don't know how to rewrite it properly. |
st47075 | Hi everyone
I have a conceptual question regarding for loops in the forward method of a Convolutional Neural Network and the corresponding backpropagation.
I read that when you want to loop over modules in the forward method you can make use of the nn.ModuleList class. But what if I would like to use a for loop for something else? How does this influence the backpropagation? Say I would like to loop over the channels of an image and process them individually, does that influence the backpropagation?
Any help is very much appreciated!
All the best
snowe |
st47076 | Solved by ptrblck in post #2
It “influences” the computation graph in the sense that each operation would be added to the graph and the backpropagation should just work. It shouldn't be necessary to take extra steps and you could just use for loops etc. without a problem. |
st47077 | It “influences” the computation graph in the sense that each operation would be added to the graph and the backpropagation should just work. It shouldn't be necessary to take extra steps and you could just use for loops etc. without a problem. |
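As a minimal illustration of this (the self.conv layer is assumed, just to have something to call in the loop):
def forward(self, x):
    outs = []
    for c in range(x.size(1)):                 # loop over the input channels
        outs.append(self.conv(x[:, c:c + 1]))  # process each channel individually
    return torch.cat(outs, dim=1)              # every iteration is part of the graph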
st47078 | Thank you @ptrblck
I was worrying that it might only use the last loop or so or overwrite them but nice to hear otherwise! |
st47079 | Keras has an option that can constrain the weights of the model to be non-negative:
tf.keras.constraints.NonNeg()
keras.io
Keras documentation: Layer weight constraints
What is the equivalent of this in PyTorch? Let's say my model is the one below; how should I change it to force the weights to be non-negative?
class LogisticRegression(torch.nn.Module):
    def __init__(self, input_dim, output_dim):
        super(LogisticRegression, self).__init__()
        self.linear = torch.nn.Linear(input_dim, output_dim)
    def forward(self, x):
        outputs = self.linear(x)
        return outputs |
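One common way to mimic NonNeg in PyTorch (a sketch; loader and criterion are placeholders) is to project the weights back onto the non-negative set after every optimizer step:
model = LogisticRegression(input_dim, output_dim)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
for x, y in loader:
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        model.linear.weight.clamp_(min=0.0)  # enforce non-negativity in-place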
st47080 | Continuing the discussion from Converting Multi-output model from Keras to Pytorch results in worse results: |
st47081 | You’ve unfortunately deleted the old topic.
Would you like to repost the original question here? |
st47082 | I found the answer in another post. My problem was with loading the data. I was basically training on a subset of the data. |
st47083 | I'm curious about the autoregressive setting: is it possible to implement it using pack_padded_sequence and pad_packed_sequence as input to a recurrent network? Since we need to wait for the output of timestep t, we cannot gather all predictions and pack them in advance. What should I do here? Is there any way to do this without looping over each time step? And is it possible to do it in batch processing?
Thank you |
st47084 | I am trying to install pytorch-geometric in a conda environment on 64-bit Windows 10, using python 3.7.9.
When I try to install one of the dependencies pip install torch-sparse, I get a big error. Same thing for the other dependencies, torch-scatter and torch-cluster. How can I install them?
The error: https://dpaste.org/O5Uq |
st47085 | I would try to make sure your current setup is able to build custom PyTorch extensions, e.g. by building the tutorial extension.
The issue is also tracked here. |
st47086 | Hi, I’m a little bit confused about the reproducibility of LSTM in pytorch.
Things I have already done:
def setup_seed(seed: int) -> None:
    CUDA = torch.cuda.is_available()
    np.random.seed(seed)
    random.seed(seed)
    torch.manual_seed(seed)
    if CUDA:
        torch.cuda.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        torch.backends.cudnn.benchmark = False
        torch.backends.cudnn.deterministic = True
Set CUBLAS_WORKSPACE_CONFIG before importing torch:
import os
# for reproducibility, must before import torch
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8" # noqa
I have 6 GPUs in my machine (CentOS 7, PyTorch 1.7, cudatoolkit 10.2): 2 Tesla V100 32GB, 2 Tesla V100 16GB, and 2 Tesla M40.
On the same GPU type the results are the same, but on different GPU types the results differ. Each result is reproducible.
Here is my model:
class PreEmbeddings(nn.Module):
    """Construct the embeddings from pretrained embeddings."""
    def __init__(self, config, pretrained_embeddings):
        super().__init__()
        pretrained_embeddings = pretrained_embeddings.astype('float32')
        self.word_embeddings = nn.Embedding.from_pretrained(torch.from_numpy(pretrained_embeddings))
        self.dropout = nn.Dropout(config["embed_dropout_prob"])
    def forward(self, input_ids, class_relatedness_ids=None):
        embeddings = self.word_embeddings(input_ids)
        embeddings = self.dropout(embeddings)
        return embeddings

class RelatedEmbeddings(nn.Module):
    """Construct the embeddings from relatedness between words and labels."""
    def __init__(self, config, related_embeddings):
        super().__init__()
        related_embeddings = related_embeddings.astype('float32')
        self.relatedness = nn.Embedding.from_pretrained(torch.from_numpy(related_embeddings))
    def forward(self, input_ids):
        relatedness = torch.mean(self.relatedness(input_ids), dim=1)
        return relatedness

class LSTMClassifier(torch.nn.Module):
    def __init__(self, config, pretrained_embeddings, related_embeddings):
        super().__init__()
        self.config = config
        self.word_embeddings = PreEmbeddings(config, pretrained_embeddings)
        self.relatedness = RelatedEmbeddings(config, related_embeddings)
        self.lstm = nn.LSTM(config["embed_dim"], config["embed_dim"]//2,
                            batch_first=True,
                            bidirectional=True,
                            num_layers=2
                            )
        self.fc1 = nn.Linear(
            config["embed_dim"]//2 + len(config['keywords']) * config['aug'], config["num_classes"])
    def forward(self, input_ids):
        word_embeddings = self.word_embeddings(input_ids)
        relatedness = self.relatedness(input_ids)
        lstm_out, (ht, ct) = self.lstm(word_embeddings)
        if self.config["aug"]:
            comb = torch.cat((ht[-1], relatedness), dim=1)
            x = self.fc1(comb)
        else:
            x = self.fc1(ht[-1])
        return x
Is it possible to get same result on different type gpu? |
st47087 | Solved by ptrblck in post #2
I don’t think this is universally possible due to the different hardware architectures.
The reproducibility should work on the same device though, which seems to be the case here. |
st47088 | heroadz:
Is it possible to get same result on different type gpu?
I don’t think this is universally possible due to the different hardware architectures.
The reproducibility should work on the same device though, which seems to be the case here. |
st47089 | I am working on a segmentation task with 2 classes: vessel and background.
There seem to be two options:
1. The output is a 1-channel mask (a grayscale image), using binary cross entropy as the loss function. I am confused about how metrics like FP and TP are calculated here: should I set a threshold to turn the grayscale picture into a black-and-white picture and then calculate the metrics?
2. The output is a 2-channel mask, using cross entropy; the mask is obtained via output.argmax(dim=1), and it looks easier to calculate FP and TP.
Is one of these options better than the other? |
st47090 | Both options are valid and your explanations are correct.
The difference is in treating the segmentation use case as a binary segmentation (first option) with nn.BCEWithLogitsLoss and a single output channel or as a 2-class multi-class segmentation (second option) with two output channels.
From what I saw in other posts the first option is often preferred, but the second one should also work.
@KFrank is an expert in these topics and might want to add something. |
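A minimal sketch of how the prediction masks would be obtained in both cases (logits and target stand for the model output and the ground-truth mask; 0.5 is just the usual default threshold):
# option 1: one output channel + nn.BCEWithLogitsLoss
pred = torch.sigmoid(logits) > 0.5       # boolean mask via thresholding
# option 2: two output channels + nn.CrossEntropyLoss
pred = logits.argmax(dim=1)              # class index per pixel
tp = ((pred == 1) & (target == 1)).sum() # count metrics on the hard mask
fp = ((pred == 1) & (target == 0)).sum()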
st47091 | Situation: I am trying to implement a Convolutional Text Binarizer, a CNN which accepts as inputs RGB images with superimposed text and returns as outputs a map F which is (after some more processing) the corresponding black and white binary image, with the text in black and the background in white. After the training and validation of the network I am trying to export it to ONNX so I can use it with openCV's dnn module.
Problem #1: In my architecture I use one Upsample layer, which results in this warning after the export to ONNX finishes:
UserWarning: You are trying to export the model with onnx:Upsample for ONNX opset version 9. This operator might cause results to not match the expected results by PyTorch.
ONNX's Upsample/Resize operator did not match Pytorch's Interpolation until opset 11. Attributes to determine how to transform the input were added in onnx:Resize in opset 11 to support Pytorch's behavior (like coordinate_transformation_mode and nearest_mode).
We recommend using opset 11 and above for models using this operator.
"" + str(_export_onnx_opset_version) + ". "
I tried to use my network anyway, without re-exporting it to opset version 11, and it works pretty well. There is some room for improvement, but it's a satisfying first attempt. I use the blob method from openCV to perform a forward pass of an image with text and I do get its corresponding binary image as output. But that warning still itches my brain a bit.
Problem #2: Naturally, the next thing I did was to try to re-export the model using opset 11, which seemed to work out fine; no UserWarning was raised. Then I tried the exact same thing as before and performed a forward pass for the same image to see if there were any obvious differences between the two versions, but then this error was raised:
error: OpenCV(4.3.0) C:\projects\opencv-python\opencv\modules\dnn\src\layers\padding_layer.cpp:41: error: (-215:Assertion failed) params.has("paddings") in function 'cv::dnn::PaddingLayerImpl::PaddingLayerImpl'
My two main questions:
Is the mismatch between the way that PyTorch and ONNX handle the Upsample operator
significant enough to cause me trouble, or can I just ignore the warning?
Why does openCV's dnn module load the opset 9 version of the model without any trouble but raise an error for the exact same model in its opset 11 version?
PS: I know this is not a forum for openCV users, so I don't expect any answers to my second question, but if by any chance somebody happens to think of a reason, I'd really appreciate knowing. |
st47092 | Hi, I am pretty new to PyTorch and neural networks in general and I am wondering about the weight vector estimate of linear networks. Suppose I have built a very simple model with 3 input nodes and 2 output nodes (no hidden layers) to use on a 2D dataset I generated (I know what the weight vector should look like). I only have two different classes in this set and I would like to extract the weight vector (the linear decision boundary) from my network once training is done. I was wondering what is the best way to do that.
So far I have tried something like this :
weights_out = list(model.fc1.weight.data)
which gives me a pair of 2D vectors, one for each output node I presume.
Thanks in advance for any help.
My network actually looks like this :
class Net(nn.Module):
    def __init__(self, dimensions, model_type):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(3, 2)
    def forward(self, x):
        x = self.fc1(x)
        return x |
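For a model like the Net above, a sketch of extracting the boundary (assuming the two class scores are compared directly):
W = model.fc1.weight.detach()   # shape [2, 3]: one weight row per output node
b = model.fc1.bias.detach()     # shape [2]
# the decision boundary is where both logits are equal:
#   (W[0] - W[1]) @ x + (b[0] - b[1]) = 0
w = W[0] - W[1]
c = b[0] - b[1]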
st47093 | Suppose there is an array A = tensor([[0.4869, 0.5144, 0.9086, 0.6139],
[0.5103, 0.8270, 0.4832, 0.8980],
[0.5234, 0.1135, 0.1037, 0.7451]])
And I want to shift the elements in each row and replace the shifted places by zeros, depending on another tensor t = tensor([0, 1, 3])
The output should be like out = tensor([[0.4869, 0.5144, 0.9086, 0.6139],
[0, 0.5103, 0.8270, 0.4832],
[0, 0, 0, 0.5234]])
I have already tried an implementation that uses the torch.gather function but that operation seems to consume a lot of memory and it runs into memory overflow when dealing with huge tensors. |
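A sketch of a gather-free alternative that builds the shifted rows with one boolean mask (checked against the small example above, but not benchmarked on huge tensors):
n, m = A.shape
cols = torch.arange(m, device=A.device).expand(n, m)
src = cols - t.unsqueeze(1)          # source column for every target position
valid = src >= 0                     # positions that receive a shifted value
rows = torch.arange(n, device=A.device).unsqueeze(1).expand(n, m)
out = torch.zeros_like(A)
out[valid] = A[rows[valid], src[valid]]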
st47094 | import os
import warnings
import torchaudio
from torch.utils.data import Dataset
from torchaudio.datasets.utils import (download_url, extract_archive, unicode_csv_reader, walk_files)
URL = "https://www.kaggle.com/mfekadu/darpa-timit-acousticphonetic-continuous-speech"
FOLDER_IN_ARCHIVE = "timitcorpus"

def load_timit_item(fileid, path, ext_audio):
    # Read labels
    labels = [int(c) for c in fileid.split("_")]
    # Read wav
    file_audio = os.path.join(path, fileid + ext_audio)
    waveform, sample_rate = torchaudio.load(file_audio)
    return waveform, sample_rate, labels

class timit(Dataset):
    _ext_audio = ".wav"
    def __init__(self, root, url=URL, folder_in_archive=FOLDER_IN_ARCHIVE,
                 download=False, transform=None,
                 target_transform=None):
        if transform is not None or target_transform is not None:
            warnings.warn("Show warning", DeprecationWarning)
        self.transform = transform
        self.target_transform = target_transform
        archive = os.path.basename(url)
        archive = os.path.join(root, archive)
        self._path = os.path.join(root, folder_in_archive)
        if not os.path.isdir(self._path):
            raise RuntimeError("Dataset not found. Please use download=True to download it.")
        walker = walk_files(self._path, suffix=self._ext_audio, prefix=False,
                            remove_suffix=True)
        self._walker = list(walker)
    def __getitem__(self, n):
        fileid = self._walker[n]
        item = load_timit_item(fileid, self._path, self._ext_audio)
        waveform, sample_rate, labels = item
        if self.transform is not None:
            waveform = self.transform(waveform)
        if self.target_transform is not None:
            labels = self.target_transform(labels)
        return waveform, sample_rate, labels
    def __len__(self):
        return len(self._walker)
When I run this command:
timit_data = torchaudio.datasets.timit('.', download=True)
it gives me: torchaudio.datasets has no attribute timit. I am following the Yesno dataloader. |
st47095 | torchaudio.datasets doesn't seem to have the “timit” dataset as seen here.
Your code is currently a bit hard to read and you can format it by wrapping it into three backticks ```.
Are you defining this dataset somehow?
E.g. load_timit_item seems to be undefined as well. |
st47096 | @Mohamed_Nabih I suggest you use something like this:
import os
import torchaudio
from torch.utils.data import Dataset
from torchaudio.datasets.utils import walk_files
def main():
timit = Timit('../TIMIT/')
x = timit[0]
def load_timit(file: str):
data_path = os.path.splitext(file)[0]
with open(data_path + '.TXT', 'r') as txt_file:
_, __, transcript = next(iter(txt_file)).strip().split(" ", 2)
with open(data_path + '.WRD', 'r') as word_file:
words = [l.strip().split(' ') for l in word_file]
words = [(int(hd), int(tl), w) for (hd, tl, w) in words]
with open(data_path + '.PHN', 'r') as phn_file:
phonemes = [l.strip().split(' ') for l in phn_file]
phonemes = [(int(hd), int(tl), w) for (hd, tl, w) in phonemes]
wav, sr = torchaudio.load(data_path + '.WAV')
return data_path, wav, transcript, words, phonemes
class Timit(Dataset):
def __init__(self, root: str):
self.root = root
self.walker = list(walk_files(root, suffix='.WAV', prefix=True))
def __getitem__(self, item):
return load_timit(self.walker[item])
def __len__(self):
return len(self.walker)
if __name__ == '__main__':
main()
It simply loads all files (regardless of train/test). You could do the train/test split by simply changing root='../TIMIT/data/TEST' or similar for train. |
st47097 | I am wondering how to deal with incongruent training and test labels with nn.CrossEntropyLoss.
For an ultra minimal example say we have:
logits = torch.tensor([-0.3080, -0.2961]).reshape(1, 2)
y = torch.tensor([0])
y_test = torch.tensor([2])
F.cross_entropy(logits, y)
F.cross_entropy(logits, y_test) # target is out of bounds
How can I deal with labels found in the test data that never occurred in the training? |
st47098 | Solved by KFrank in post #2
Hi Andrew!
The short answer:
Make sure that the classification model you build has outputs for all
classes in your classification problem.
An important note:
If your test set includes classes (“test labels”) that do not occur in
your training set (“training labels”), it will not be possible f… |
st47099 | Hi Andrew!
localh:
How can I deal with labels found in the test data that never occurred in the training?
The short answer:
Make sure that the classification model you build has outputs for all
classes in your classification problem.
An important note:
If your test set includes classes (“test labels”) that do not occur in
your training set (“training labels”), it will not be possible for your
model to learn to predict those missing classes correctly.
Some further explanation:
CrossEntropyLoss expects an input (the output of your model)
that has shape [nBatch, nClass], and a target (the labels) that
has shape [nBatch] and whose values are integer class labels that
run from [0, nClass - 1] (inclusive).
In your example your logits (the input) has shape [1, 2].
Therefore this example mimics a two-class classification problem,
so your y_test (the test target) values should be 0 or 1. The value
2 is “out of bounds,” and would be the label for the third class in a
three-class (or more) classification problem. (Your y_test has shape
[1], which is correct.)
To reiterate, if you are performing an nClass classification problem,
you must build a model that has nClass outputs, and your target
(label) values must run from 0 to nClass - 1. Your model can’t
determine from the data, on the fly, how many classes you have.
That has to be baked into your model (and consistent with the
target values).
Best.
K. Frank |
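Applying that to the minimal example from the question, the only change needed is a third output:
logits = torch.randn(1, 3)               # three outputs, so labels 0, 1, 2 are all valid
y_test = torch.tensor([2])
loss = F.cross_entropy(logits, y_test)   # no longer out of bounds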
st47100 | localh:
logits = torch.tensor([-0.3080, -0.2961]).reshape(1, 2)
y = torch.tensor([0])
y_test = torch.tensor([2])
F.cross_entropy(logits, y)
F.cross_entropy(logits, y_test) # target is out of bounds
Ah I see – the question seems silly now. I should have just made a tweak and played around a bit more. Thank you for your time and help! |
st47101 | Hi. I have a question: could somebody find the mistakes in my training and validation loops? Everything looks fine to me, but from the second epoch on (the first epoch looks normal), during validation every image from the validation dataset is predicted as an element of the same class (say I am trying to classify into one of four classes; then everything is predicted as 1 or e.g. 4).
It's based on an RCNN model.
My training function:
def train(train_loader, model, optimizer, epoch, device):
    model.train()
    loss_monitor = AverageMeter()
    with tqdm(train_loader) as _tqdm:
        for x, y in _tqdm:
            x = x.to(device)
            y = y.to(device)
            outputs = model(x, y)
            loss = outputs["loss_classifier"]
            optimizer.zero_grad()
            (outputs["loss_classifier"]).backward()
            optimizer.step()
    return loss  # I know it's unnecessary |
Validation function:
def validate(val_loader, model, epoch, device):
    model.eval()
    preds = []
    gt = []
    with torch.no_grad():
        with tqdm(val_loader) as _tqdm:
            for x, y in _tqdm:
                x = x.to(device)
                y = y.to(device)
                gt.append(y["class"].cpu().numpy())
                outputs = model(x, y)
                for output in outputs:
                    pred = F.softmax(output["age"], dim=-1).cpu().numpy()
                    pred = (pred * np.arange(0, pred.size)).sum(axis=-1)
                    preds.append(np.array([pred]))
                _tqdm.set_postfix(OrderedDict(stage="val", epoch=epoch),)
    mae = calculate_mae(gt, preds)  # function to calculate mae (classes are not independent: class 1 is closer to 2 than to 3)
    f1 = calculate_f1(gt, preds)  # function to calculate f1
    return mae, f1 |
Main loop:
model = PornRCNN.create_resnet_50()
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)
model = model.to(device)
model.set_age_loss_fn(loss_classifier)
scheduler = StepLR(
    optimizer, step_size=0.0001, gamma=0.2, last_epoch=start_epoch - 1,
)
best_val_f1 = 0
for epoch in range(start_epoch, num_epoch):
    train_loss = train(train_loader, model, optimizer, epoch, device)
    mae, f1 = validate(val_loader, model, epoch, device)
    if f1 > best_val_f1:
        model_state_dict = model.state_dict()
        best_val_f1 = f1
    scheduler.step()
Any ideas why it works like I said? Do you have tips on how to do it better?
I should add that the loss in training mode decreases normally, so that's not the problem. |
st47102 | Two things:
Tomash:
pred = F.softmax(output["age"], dim=-1).cpu().numpy()
pred = (pred * np.arange(0, pred.size)).sum(axis=-1)
preds.append(np.array([pred]))
What is happening in this block?
Have you trained your network for a sufficient number of epochs? Calculate the training f1 and mae to ensure the model is training as expected. |
st47103 | At first thanks for your reply.
This block just changes the format of prediction to be exactly the same like format of gt.
I trained my net for even 20-30 epochs but predictions only from first epoch looked kinda normal (weren’t good, but had good distribution). Since second epoch the results were always the same (f1 didn’t change and it always predicted one of the class for every element). I will check f1 and mae for training soon. Do you have any others suggestions? |
st47104 | I actually don't know how to check mae/f1 during training, because the outputs in RCNN training mode don't return predictions. Does anyone know how to do it? |
st47105 | Tomash:
mae, f1 = validate(val_loader, model, epoch, device)
If I understand it correctly, simply passing train_loader in place of val_loader should give you the mae and f1 for the training dataset. |
st47106 | If you’re facing the same issue with the training data, it implies your model is not learning.
Share the full latest code (model, training, validation logic etc). |
st47107 | Hello all, I have paired images, img1 and img2. I want to apply the same transform to both during training, e.g.:
transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
img1 = transform(img1)
img2 = transform(img2)
Is it possible to do this in the data loader of PyTorch? |
st47108 | You can concatenate the images along the channel dim and then apply the transform. You can check kornia; it does exactly what you want. |
st47109 | Alternatively, you could also use the functional API from torchvision as given in this example. |
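A minimal sketch of that functional approach, applying the same random decision to both images (the 0.5 probability mirrors RandomHorizontalFlip's default):
import random
import torchvision.transforms.functional as TF

def paired_transform(img1, img2):
    if random.random() < 0.5:        # one coin flip shared by both images
        img1 = TF.hflip(img1)
        img2 = TF.hflip(img2)
    return TF.to_tensor(img1), TF.to_tensor(img2)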
st47110 | If you are using multiple processes, each process would still apply the same transformations on the data and target. The transformations will be different for each sample (in a single process and in a distributed setup).
Could you explain a bit more, what your use case is and what you would like to do? |
st47111 | Thanks @ptrblck. We use 16 workers and the current dataloader is
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=RGB_MEAN, std=RGB_STD),
])
dataset_train = FaceDataset(DATA_ROOT, RECORD_DIR, train_transform)
train_sampler = torch.utils.data.distributed.DistributedSampler(dataset_train)
train_loader = torch.utils.data.DataLoader(dataset_train, batch_size=batch_size, shuffle=(train_sampler is None),
                                           num_workers=workers, pin_memory=True, sampler=train_sampler, drop_last=True)
SAMPLE_NUMS = dataset_train.get_sample_num_of_each_class()
NUM_CLASS = len(train_loader.dataset.classes)
And the current __getitem__ code is referenced from https://github.com/HuangYG123/CurricularFace/blob/8b2f47318117995aa05490c05b455b113489917e/dataset/datasets.py#L92
path, target = self.imgs[index]
sample = Image.open(path)
sample = sample.convert("RGB")
if self.transform is not None:
    sample = self.transform(sample)
Is it fine to replace the above code with your customized transformation? |
st47112 | Yes, it should be alright.
Note that using multiple workers is not a distributed setup, which we define as using multiple devices or nodes. |
st47113 | In PyTorch we can do comparisons between elements in tensors like so:
import torch
a = torch.tensor([[1,2], [2,3], [3,4]])
b = torch.tensor([[3,4], [1,2], [2,3]])
print(a.size())
# torch.Size([3, 2])
print(b.size())
# torch.Size([3, 2])
c = a[:, 0] < b[:, 0]
print(c)
# tensor([ True, False, False])
However, when we try to add a condition, the snippet fails:
c = a[:, 0] < b[:, 1] < b[:, 0]
The expected output is
tensor([ False, False, False])
So, for each element in a, compare its first element with the second element of the corresponding item in b, and compare that element with the first element of the same item in b.
Traceback (most recent call last):
File "scratch_12.py", line 9, in <module>
c = a[:, 0] < b[:, 1] < b[:, 0]
RuntimeError: bool value of Tensor with more than one value is ambiguous
Why is that, and how can we solve it? |
st47114 | BramVanroy:
c = a[:, 0] < b[:, 1] < b[:, 0]
c = (a[:, 0] < b[:, 1]) < b[:, 0]
I don't know if the parentheses will solve the problem; it depends on what you compare and when |
st47115 | That seems an odd construct. Aren’t you then doing implicit casting?
c = (floattensor(true or false)) < other_tensor
In “regular Python” we can do x < y < z, which is the same as “x < y and y < z”. |
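For tensors, the chained form can be expressed with the elementwise & operator instead of Python's implicit and (the parentheses are required, since & binds tighter than <):
c = (a[:, 0] < b[:, 1]) & (b[:, 1] < b[:, 0])
print(c)
# tensor([False, False, False])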
st47116 | To train my model, I must use AMP, but AMP causes Div/0 in Adam.
To fix this, I have to increase the default value of eps from 1e-8 to 1e-4. However, I noticed that by doing so, my model progresses much slower.
My default LR is 0.001.
What effect does eps have in Adam? |
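For reference, eps sits in the denominator of Adam's per-element update:
# theta <- theta - lr * m_hat / (sqrt(v_hat) + eps)
# when sqrt(v_hat) is small, eps dominates the denominator, so raising eps
# from 1e-8 to 1e-4 shrinks the effective step there, matching the slower progress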
st47117 | Hi, I have a problem with my MaskRCNN training process. The loss in training mode is decreasing well, but the validation metrics are always the same (from the first to the last epoch) and look very bad (mostly only one class is predicted). I wanted to take a look at how the predictions look during training, but I don't know how to get access to them, because output = model(x, y) returns only losses. Do you know how to get them?
I also have another question: do you know why validation doesn't work? Why are the predictions always the same during a single training process?
I should also add that I added another head to the backbone, and that's actually what I wish to train (it predicts the age of people in the picture).
training function:
def train(train_loader, model, optimizer, epoch, device):
    model.train()
    loss_monitor = AverageMeter()
    lr_scheduler = None
    if epoch == 0:
        warmup_factor = 1. / 1000
        warmup_iters = min(1000, len(train_loader) - 1)
        lr_scheduler = utils.warmup_lr_scheduler(optimizer, warmup_iters, warmup_factor)
    with tqdm(train_loader) as _tqdm:
        for x, y in _tqdm:
            x = x.to(device)
            for key, value in y.items():
                y[key] = torch.tensor(value).to(device)
            y_list = []
            for i in range(0, len(x)):
                y_list.append(y)
            outputs = model(x, y_list)
            print(outputs)
            # calc loss
            cur_loss = outputs["loss_age"]
            # measure accuracy and record loss
            sample_num = x.size(0)
            loss_monitor.update(cur_loss, sample_num)
            # compute gradient and do step
            optimizer.zero_grad()
            (outputs["loss_age"]).backward()
            optimizer.step()
            if lr_scheduler is not None:
                lr_scheduler.step()
            _tqdm.set_postfix(
                OrderedDict(stage="train", epoch=epoch, loss=loss_monitor.avg),
            )
    return loss_monitor.avg
validation function:
def validate(val_loader, model, epoch, device):
    model.eval()
    preds = []
    gt = []
    print("Validating function running...")
    with torch.no_grad():
        with tqdm(val_loader) as _tqdm:
            for x, y in _tqdm:
                x = x.to(device)
                for key, value in y.items():
                    y[key] = torch.tensor(value).to(device)
                gt.append(y["age"].cpu().numpy())
                outputs = model(x)
                print(outputs)
                for output in outputs:  # I just change format of predictions over here
                    pred = F.softmax(output["age"], dim=-1).cpu().numpy()
                    pred = (pred * np.arange(0, pred.size)).sum(axis=-1)
                    preds.append(np.array([pred]))
                _tqdm.set_postfix(OrderedDict(stage="val", epoch=epoch),)
    mae = calculate_mae(gt, preds)  # my own functions - work well
    f1 = calculate_f1(gt, preds)
    return mae, f1 |
main loop:
model = PornRCNN.create_resnet_50()
model = model.to(device)
model.set_age_loss_fn(loss_age)
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.0003,
                            momentum=0.9, weight_decay=0.0005)
lr_scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=1, T_mult=2)
num_epoch = 100
checkpoint_dir = Path("checkpoints")
for epoch in range(start_epoch, num_epoch):
    train_loss = train(train_loader, model, optimizer, epoch, device)
    mae, f1 = validate(val_loader, model, epoch, device)
Could anyone help me? |
st47118 | Tomash:
optimizer.zero_grad()
Might not be related, but why do you zero the gradients after you call forward? |
st47119 | Thank you for fast reply.
I did it exactly as shown in https://github.com/pytorch/vision/blob/master/references/detection/engine.py (in the train_one_epoch function):
loss_dict = model(images, targets)
losses = sum(loss for loss in loss_dict.values())
...
optimizer.zero_grad()
losses.backward()
optimizer.step() |
st47120 | Hello, I am using an LSTM that takes 5 sequence steps as input to predict another 5. I want to know how to predict more than 5 timesteps. I assume it's got something to do with the hidden_dim, but I can't figure it out.
Here is my code
class LSTM(nn.Module):
    def __init__(self, seq_len=5, n_features=256, n_hidden=256, n_layers=1, output_size=1):
        super().__init__()
        self.n_features = n_features
        self.seq_len = seq_len
        self.n_hidden = n_hidden
        self.n_layers = n_layers
        self.l_lstm = nn.LSTM(input_size=self.n_features, hidden_size=self.n_hidden,
                              num_layers=self.n_layers, batch_first=True)
    def init_hidden(self, batch_size):
        hidden_state = torch.zeros(self.n_layers, batch_size, self.n_hidden).to(device)
        cell_state = torch.zeros(self.n_layers, batch_size, self.n_hidden).to(device)
        self.hidden = (hidden_state, cell_state)
    def forward(self, x):
        lstm_out, self.hidden = self.l_lstm(x, self.hidden)
        return lstm_out
If anyone knows how to extend the prediction range or could suggest a better way of writing LSTM, I would really appreciate it. |
st47121 | Solved by googlebot in post #2
Generally speaking, RNNs work one step at a time, so as you may have noticed, seq_len param is never used, that’s by design. So step count is arbitrary, so long as you have per-step inputs. To do multi-step forecasting, Seq2Seq approach can be used, or step_forecast -> next_input assignments in a lo… |
st47122 | Generally speaking, RNNs work one step at a time, so as you may have noticed, seq_len param is never used, that’s by design. So step count is arbitrary, so long as you have per-step inputs. To do multi-step forecasting, Seq2Seq approach can be used, or step_forecast -> next_input assignments in a loop. |
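A sketch of the loop variant for the model above (n_future is a placeholder; x is the observed input after init_hidden and a forward pass have run; feeding the output straight back in only works here because n_hidden equals n_features, both 256, otherwise a projection layer is needed):
preds = []
inp = x[:, -1:, :]                  # last observed step, [batch, 1, n_features]
hidden = model.hidden               # state after consuming the observed steps
for _ in range(n_future):
    out, hidden = model.l_lstm(inp, hidden)
    preds.append(out)
    inp = out                       # feed the forecast back in as the next input
forecast = torch.cat(preds, dim=1)  # [batch, n_future, n_hidden]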
st47123 | I just finished finetuning a pretrained facial recognition model that spits out an encoding for every image you send into it. I would now like to deploy it in an EC2 instance
Deploying your ML Model with TorchServe
This tutorial tells you how to set TorchServe up to get a reply as a simple class. How do I set up something similar for my model? It needs to take an image, pass it through the model to get the encodings, and then compare them to every image encoding in the database and output the closest one.
Which is the most efficient way to achieve this?
Thanks! |
st47124 | Hi,
I have a dataset with positive and negative samples for a segmentation task where:
1. “positive” means, that the image contains at least one object
2. “negative” means, that the image contains no object of interest.
For every positive sample, there are roughly 3 negative samples in the dataset.
Now when I evaluate my model on my validation set, the dice-score will be very high in the beginning as the model just predicts 0 everywhere. After some iterations, the dice score will start to increase again but fails to reach the high scores from the start of the training. This makes the automatic saving of the best performing model very hard, if not impossible.
Any suggestions how to balance the dice-score such that it doesn’t favor negative examples during validation? |
st47125 | Hello,
In my opinion you can try to apply a weight to your dice score / loss function that is proportional to your dataset's sample ratio. If you have roughly 3 negative samples per positive in the dataset, you could reflect that in the loss by setting a weight of 3. |
st47126 | Sorry for the very late answer, but I am currently revisiting this issue.
Although your suggestion is possible, it would still lead to a dice score of at least 0.5 when the output of the network is a constant 0. What if the real best performance of the model is only 0.4?
Meanwhile I searched the net a bit and am a little bit confused that this issue does not come up more often. Maybe someone else can step in? |
st47127 | Okay, I guess the solution is to aggregate all predictions and calculate the dice score over all results simultaneously.
I.e. instead of:
result = (dice_score(pred1,gt1) + dice_score(pred2,gt2)) / 2
do
result = dice_score(torch.cat([pred1, pred2, ...], dim=1), torch.cat([gt1, gt2, ...], dim=1))
EDIT
This of course could need huge amounts of RAM and compute, therefore better aggregate the intersection and union and calculate the dice-score in the end. |
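A sketch of that aggregation (0.5 is an assumed binarization threshold; all_results stands for the collected validation predictions and targets):
intersection = 0.0
denom = 0.0
for pred, gt in all_results:
    p = (pred > 0.5).float()
    intersection += (p * gt).sum()
    denom += p.sum() + gt.sum()
dice = 2 * intersection / denom  # one global score, not a mean of per-image scores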
st47128 | I am running my own custom deep belief network code using PyTorch and using the LBFGS optimizer. After optimization starts, my GPU starts to run out of memory, fully running out after a couple of batches, but I’m not sure why. Should I be purging memory after each batch is run through the optimizer? My code is as follows (with the portion of code that causes the problem marked):
def fine_tuning(self, data, labels, num_epochs=10, max_iter=3):
    '''
    Parameters
    ----------
    data : TYPE torch.Tensor
        N x D tensor with N = num samples, D = num dimensions
    labels : TYPE torch.Tensor
        N x 1 vector of labels for each sample
    num_epochs : TYPE, optional
        DESCRIPTION. The default is 10.
    max_iter : TYPE, optional
        DESCRIPTION. The default is 3.

    Returns
    -------
    None.
    '''
    N = data.shape[0]
    # need to unroll the weights into a typical autoencoder structure
    # encode - code - decode
    for ii in range(len(self.rbm_layers)-1, -1, -1):
        self.rbm_layers.append(self.rbm_layers[ii])
    L = len(self.rbm_layers)
    optimizer = torch.optim.LBFGS(params=list(itertools.chain(*[list(self.rbm_layers[ii].parameters())
                                                                for ii in range(L)]
                                                               )),
                                  max_iter=max_iter,
                                  line_search_fn='strong_wolfe')
    dataset = torch.utils.data.TensorDataset(data, labels)
    dataloader = torch.utils.data.DataLoader(dataset, batch_size=self.batch_size*10, shuffle=True)
    # fine tune weights for num_epochs
    for epoch in range(1, num_epochs+1):
        with torch.no_grad():
            # get squared error before optimization
            v = self.pass_through_full(data)
            err = (1/N) * torch.sum(torch.pow(data-v.to("cpu"), 2))
        print("\nBefore epoch {}, train squared error: {:.4f}\n".format(epoch, err))
        # *******THIS IS THE PROBLEM SECTION*******
        for ii, (batch, _) in tqdm(enumerate(dataloader), ascii=True, desc="DBN fine-tuning", file=sys.stdout):
            print("Fine-tuning epoch {}, batch {}".format(epoch, ii))
            with torch.no_grad():
                batch = batch.view(len(batch), self.rbm_layers[0].visible_units)
                if self.use_gpu:  # are we using a GPU?
                    batch = batch.to(self.device)  # if so, send batch to GPU
                B = batch.shape[0]
            def closure():
                optimizer.zero_grad()
                output = self.pass_through_full(batch)
                loss = nn.BCELoss(reduction='sum')(output, batch)/B
                print("Batch {}, loss: {}\r".format(ii, loss))
                loss.backward()
                return loss
            optimizer.step(closure)
The error I get is:
DBN fine-tuning: 0it [00:00, ?it/s]Fine-tuning epoch 1, batch 0
Batch 0, loss: 4021.35400390625
Batch 0, loss: 4017.994873046875
DBN fine-tuning: 0it [00:00, ?it/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/deep_autoencoder/deep_autoencoder.py", line 260, in fine_tuning
optimizer.step(closure)
File "/home/anaconda3/envs/torch_env/lib/python3.8/site-packages/torch/autograd
/grad_mode.py", line 15, in decorate_context
return func(*args, **kwargs)
File "/home/anaconda3/envs/torch_env/lib/python3.8/site-packages/torch/optim/lb
fgs.py", line 425, in step
loss, flat_grad, t, ls_func_evals = _strong_wolfe(
File "/home/anaconda3/envs/torch_env/lib/python3.8/site-packages/torch/optim/lb
fgs.py", line 96, in _strong_wolfe
g_prev = g_new.clone(memory_format=torch.contiguous_format)
RuntimeError: CUDA out of memory. Tried to allocate 1.57 GiB (GPU 0; 24.00 GiB total capac
ity; 13.24 GiB already allocated; 1.41 GiB free; 20.07 GiB reserved in total by PyTorch)
This also racks up memory if I use CPU, so I’m not sure what the solution is here… |
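One knob worth checking (a suggestion, not a confirmed fix): LBFGS keeps history_size curvature pairs around, each the size of the full parameter vector, and the default is 100, which gets expensive with large models:
optimizer = torch.optim.LBFGS(params,
                              max_iter=max_iter,
                              history_size=10,  # default is 100; each entry stores full-parameter-sized state
                              line_search_fn='strong_wolfe')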
st47129 | I have a backward hook registered with newLayer.register_backward_hook(hook_function), and I do not know how to control the inputs to it. The function contains a line like:
def hook_function(self, grad_input, grad_output):
    self.average = self.average * 0.99 + grad_output[0].sum((0,2,3)) * 0.01
This results in:
RuntimeError: expected device cuda:0 but got device cuda:1
(Pdb) self.average
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], device='cuda:0',
dtype=torch.float64)
(Pdb) grad_output[0].sum((0,2,3))
tensor([ 0.0508, 0.0492, 0.0512, 0.0487, 0.0517, 0.0483, 0.0522, 0.0479,
-0.1974, -0.2026], device='cuda:1', dtype=torch.float64)
I need to know either how to define my self.average better, or whether I am doing something wrong somewhere else and it is grad_output that is wrong.
The above is with:
self.average = torch.Tensor(out_channels).zero_().to(gf.device).double()
I have also tried
self.average = nn.Parameter(torch.Tensor(out_channels).zero_().to(gf.device).double())
which results in
TypeError: cannot assign 'torch.cuda.DoubleTensor' as parameter 'normalPassAverageD' (torch.nn.Parameter or None expected)
at the same location |
st47130 | Solved by ptrblck in post #2
I don’t know how self.average is initialized, but would assume this should work:
self.average = self.average.to(grad_output[0].device) * 0.99 + grad_output[0].sum((0,2,3)) * 0.01
Could you check it and see if you are still getting an error?
In that case, could you post a small code snippet to re… |
st47131 | I don’t know how self.average is initialized, but would assume this should work:
self.average = self.average.to(grad_output[0].device) * 0.99 + grad_output[0].sum((0,2,3)) * 0.01
Could you check it and see if you are still getting an error?
In that case, could you post a small code snippet to reproduce this issue? |
st47132 | That worked! Thanks!
But this is all within a DataParallel. Isn’t that supposed to make it play nice with multiple devices? I would have thought calling .to(a specific device) would limit it to only using that device. Maybe I don’t understand how DataParallel works entirely. |
st47133 | It’s hard to tell what’s creating the issue without seeing the code, but generally you are right. nn.DataParallel should take care of pushing the parameters and buffers to the right devices. However, if self.average was defined as at tensor, you would run into this error. |
st47134 | It was in fact defined as a tensor. Is there another way I should define self tensors to be sure they are handled by DataParallel? |
st47135 | If you don’t need to update this tensor (i.e. no gradients should be calculated for it), you should define it as a buffer via: self.register_buffer('average', torch.tensor(...)). This would make sure that nn.DataParallel will push this buffer to the corresponding device. In your forward method you can access it via self.average. |
st47136 | Cool! That worked for all of my tensors that have their values modified, without the .to(). However, later in the same function there is one more tensor I am saving directly with
self.current = grad_output[0].detach()
This works fine on one GPU but when running on multiple it tells me 'AttributeError: (module) object has no attribute ‘current’ '. I imagine I could do a .copy_() or something, but I won’t actually know the shape until the network has been run on an image. Thoughts? I can also start a new thread if that would be appropriate. |
st47137 | Manually manipulating model attributes when running a data parallel approach is a bit tricky, since each device uses a replica of the model and your changes could be lost.
Feel free to create a new topic and describe your use case further. |
st47138 | Initializing a member tensor after creation with DataParallel (repost)
I have a member tensor that is created/saved during a backward hook. Hook looks like:
def saveGrad(self, grad_input, grad_output):
    self.currentGrad = grad_output[0].detach()
This works fine on one GPU but when running on multiple it tells me 'AttributeError: (module) object has no attribute ‘currentGrad’ '. I imagine I could do a .copy_() or something, but I won’t actually know the shape until the network has been run on an image. |
st47139 | Finally back to this after debugging other things. Apparently the .to() method made it run, but it doesn't get the same values, which I imagine is because it's not actually keeping the numbers the same. I'm using a registered buffer as well. I started a thread with all my current code; please take a look when you get a chance: Multi GPU Hook not correctly filling buffer |
st47140 | Have another thread now that includes a full code sample.
Multi GPU Hook not correctly filling buffer
Sorry, I kept a bit of extra code since I started with https://github.com/pytorch/examples/blob/master/mnist/main.py. and thought it would be easy to compare to that. I have further reduced the code below to only have the exact things to reproduce this problem. My network is 1 layer, the ‘main’ just inititalizes things and runs backprop on one random input. The only thing that is unique is my saveAverageD function and 2 custom modules. The custom modules and saveAverageD function were made b… |
st47141 | Could anyone explain the last four lines (lt, rb, inter, and the return expression) of the following code? I do not quite understand how : works inside [] in PyTorch.
def box_iou(boxes1, boxes2):
    # https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py
    """
    Return intersection-over-union (Jaccard index) of boxes.
    Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
    Arguments:
        boxes1 (Tensor[N, 4])
        boxes2 (Tensor[M, 4])
    Returns:
        iou (Tensor[N, M]): the NxM matrix containing the pairwise
            IoU values for every element in boxes1 and boxes2
    """
    def box_area(box):
        # box = 4xn
        return (box[2] - box[0]) * (box[3] - box[1])

    area1 = box_area(boxes1.t())
    area2 = box_area(boxes2.t())
    lt = torch.max(boxes1[:, None, :2], boxes2[:, :2])  # [N,M,2]
    rb = torch.min(boxes1[:, None, 2:], boxes2[:, 2:])  # [N,M,2]
    inter = (rb - lt).clamp(min=0).prod(2)  # [N,M]
    return inter / (area1[:, None] + area2 - inter)  # iou = inter / (area1 + area2 - inter) |
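The key is broadcasting: boxes1[:, None, :2] takes the (x1, y1) corner of every box and inserts a singleton dim, so torch.max compares an [N, 1, 2] tensor against an [M, 2] tensor and broadcasts to [N, M, 2], the pairwise maximum of top-left corners; rb is the pairwise minimum of bottom-right corners; (rb - lt).clamp(min=0).prod(2) multiplies the clamped width and height into the intersection area; and the return line divides by the union. A shape-only sketch:
import torch

b1 = torch.rand(4, 4)                                # N = 4 boxes
b2 = torch.rand(5, 4)                                # M = 5 boxes
print(b1[:, None, :2].shape)                         # torch.Size([4, 1, 2])
print(b2[:, :2].shape)                               # torch.Size([5, 2])
print(torch.max(b1[:, None, :2], b2[:, :2]).shape)   # torch.Size([4, 5, 2])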
st47142 | Hi,
I see that TorchServe has gRPC support. Are there PyTorch server/client C++ APIs? I saw a few client examples in Python, our product has a C++ requirement.
Thanks! |
st47143 | Why do I get this error? My image and label are the same size (166*190, 24-bit RGB).
RuntimeError: stack expects each tensor to be equal size, but got [3, 185, 204] at entry 0 and [3, 190, 166] at entry 1
The following is the resize code. Where is it wrong?
class Resize(object):
    """Resize image and/or masks."""
    def __init__(self, imageresize, maskresize):
        self.imageresize = imageresize
        self.maskresize = maskresize
    def __call__(self, sample):
        image, mask = sample['image'], sample['mask']
        if len(image.shape) == 3:
            image = image.transpose(1, 2, 0)
        if len(mask.shape) == 3:
            mask = mask.transpose(1, 2, 0)
        mask = cv2.resize(mask, self.maskresize, cv2.INTER_AREA)
        image = cv2.resize(image, self.imageresize, cv2.INTER_AREA)
        if len(image.shape) == 3:
            image = image.transpose(2, 0, 1)
        if len(mask.shape) == 3:
            mask = mask.transpose(2, 0, 1)
        return {'image': image,
                'mask': mask}

class ToTensor(object):
    """Convert ndarrays in sample to Tensors."""
    def __call__(self, sample, maskresize=None, imageresize=None):
        image, mask = sample['image'], sample['mask']
        if len(mask.shape) == 2:
            mask = mask.reshape((1,) + mask.shape)
        if len(image.shape) == 2:
            image = image.reshape((1,) + image.shape)
        return {'image': torch.from_numpy(image),
                'mask': torch.from_numpy(mask)}

class Normalize(object):
    '''Normalize image'''
    def __call__(self, sample):
        image, mask = sample['image'], sample['mask']
        return {'image': image.type(torch.FloatTensor)/255,
                'mask': mask.type(torch.FloatTensor)/255} |
st47144 | This is probably due to BatchNorm. You cannot use BatchNorm with batch_size=1. You should use InstanceNorm instead. |
st47145 | Thanks, I made the change, but I get this error:
RuntimeError: stack expects each tensor to be equal size, but got [3, 530, 500] at entry 0 and [3, 500, 530] at entry 3
The image (32-bit) and label (4-bit) are the same size (500*530). Is there anything wrong with my transpose process?
class Resize(object):
    """Resize image and/or masks."""
    def __init__(self, imageresize, maskresize):
        self.imageresize = imageresize
        self.maskresize = maskresize
    def __call__(self, sample):
        image, mask = sample['image'], sample['mask']
        if len(image.shape) == 3:
            image = image.transpose(1, 2, 0)
        if len(mask.shape) == 3:
            mask = mask.transpose(1, 2, 0)
        mask = cv2.resize(mask, self.maskresize, cv2.INTER_AREA)
        image = cv2.resize(image, self.imageresize, cv2.INTER_AREA)
        if len(image.shape) == 3:
            image = image.transpose(2, 0, 1)
        if len(mask.shape) == 3:
            mask = mask.transpose(2, 0, 1)
        return {'image': image,
                'mask': mask} |
st47146 | You probably have some image where the length condition is failing. This is a general error and you have to debug where the image is not getting transposed. |
st47147 | Just use print statements in the resize and check manually. The resize method looks correct, so there must be something wrong elsewhere (maybe image loading) |
st47148 | I am trying to reconstruct a rather complicated neural network for shape & appearance disentanglement. It includes several transformations on the tensors during the process. Now I am wondering whether and how to correctly send data to the GPU in order to be efficient without taking away useful memory space.
The first question arises within the Dataset class. It currently looks as follows:
class ImageDataset(Dataset):
    def __init__(self, images, arg):
        super(ImageDataset, self).__init__()
        self.device = arg.device
        self.bn = arg.bn
        self.brightness = arg.brightness_var
        self.contrast = arg.contrast_var
        self.saturation = arg.saturation_var
        self.hue = arg.hue_var
        self.scal = arg.scal
        self.tps_scal = arg.tps_scal
        self.rot_scal = arg.rot_scal
        self.off_scal = arg.off_scal
        self.scal_var = arg.scal_var
        self.augm_scal = arg.augm_scal
        self.images = images
        self.transforms = transforms.Compose([transforms.ToTensor(),
                                              transforms.Normalize([0.5], [0.5])
                                              ])
    def __len__(self):
        return len(self.images)
    def __getitem__(self, index):
        # Select image
        image = self.images[index]
        # Get parameters for transformations
        tps_param_dic = tps_parameters(1, self.scal, self.tps_scal, self.rot_scal, self.off_scal,
                                       self.scal_var, self.augm_scal)
        coord, vector = make_input_tps_param(tps_param_dic)
        # Make transformations
        x_spatial_transform = self.transforms(image).unsqueeze(0).to(self.device)
        x_spatial_transform, t_mesh = ThinPlateSpline(x_spatial_transform, coord,
                                                      vector, 128, self.device)
        x_spatial_transform = x_spatial_transform.squeeze(0)
        x_appearance_transform = K.ColorJitter(self.brightness, self.contrast, self.saturation, self.hue)\
            (self.transforms(image).unsqueeze(0)).squeeze(0)
        original = self.transforms(image)
        coord, vector = coord[0], vector[0]
        return original, x_spatial_transform, x_appearance_transform, coord, vector
The ThinPlateSpline function is a rather complicated function that performs a TPS transformation. During the process, some tensors are created, and since I need the function later again, I have to specify the device. As an example, it contains things like this:
def ThinPlateSpline(U, coord, vector, out_size, device, move=None, scal=None):
    coord = torch.flip(coord, [2])
    vector = torch.flip(vector, [2])
    num_batch, channels, height, width = U.shape
    out_height = out_size
    out_width = out_size
    height_f = torch.tensor([height], dtype=torch.float32).to(device)
    width_f = torch.tensor([width], dtype=torch.float32).to(device)
    num_point = coord.shape[1]
The tensors height_f and width_f are therefore created at each call of the function, and I wonder if this is a problem. Is there a better way to perform operations on my data within the architecture?
Also, should I send the data to the GPU within the Dataset class?
Thanks for your help! |
st47149 | Do the transforms on the GPU. Have the dataloader return unscaled 8-bit int images on the CPU. After these are collated, you can batch-transfer them to the GPU and then apply the first set of transforms, self.transforms (note: you would have to change the normalization mean and var to reflect unscaled values).
Also, the rest of the code can all be run on the GPU. |
st47150 | Thanks, that actually helped me a lot; in fact, I removed the transformations within the DataLoader completely and only return the image.
Another question:
For my training I am using a dataset that lies on a server. It contains about 10,000 images of size 256x256, so it is quite large. Currently I am loading the dataset as a numpy array:
# Load Datasets
data = load_images_from_folder()
train_data = np.array(data[:-1000])
train_dataset = ImageDataset(train_data, arg)
test_data = np.array(data[-1000:])
test_dataset = ImageDataset(test_data, arg)
# Prepare Dataloader & Instances
train_loader = DataLoader(train_dataset, batch_size=bn, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=bn)
The function load_images_from_folder just returns a list containing all images. Is that the correct approach?
Edit: I want to add that I am asking because it takes about 5 minutes until my network starts training. |
st47151 | In your case the dataset can fit into RAM and you are taking advantage of that. The initial slowdown that you are facing is a one-time slowdown, as you don't have to load the images again (loading is the biggest bottleneck in all cases).
The choice to load the entire dataset into RAM depends on what the bottleneck is, CPU or GPU. If the GPU is the bottleneck, there is no need to load the images into RAM. |
st47152 | I have the originally defined names for the VOC dataset, but I want to use a folder structure like this:
––plant
——Images
———plant001_rgb.png
———plant002_rgb.png
——Masks
———plant001_label.png
———plant002_label.png
How should I make the change? I am so confused; any help would be appreciated. My current __getitem__ is below, and a sketch of the adapted paths follows it.
def __getitem__(self, index):
    if index == 0:
        shuffle(self.train_lines)
    annotation_line = self.train_lines[index]
    name = annotation_line.split()[0]
    jpg = Image.open(r"./VOCdevkit/VOC2007/JPEGImages" + '/' + name + ".jpg")
    png = Image.open(r"./VOCdevkit/VOC2007/SegmentationClass" + '/' + name + ".png")
    if self.random_data:
        jpg, png = self.get_random_data(jpg, png, (int(self.image_size[1]), int(self.image_size[0])))
    else:
        jpg, png = letterbox_image(jpg, png, (int(self.image_size[1]), int(self.image_size[0])))
    png = np.array(png)
    png[png >= self.num_classes] = self.num_classes
    seg_labels = np.eye(self.num_classes+1)[png.reshape([-1])]
    seg_labels = seg_labels.reshape((int(self.image_size[1]), int(self.image_size[0]), self.num_classes+1))
    jpg = np.transpose(np.array(jpg), [2, 0, 1])/255
    return jpg, png, seg_labels
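A sketch of how the paths could be adapted to that layout (the directory names and file suffixes come from the tree above; everything else is an assumption):
import os
# build the name list from the Images folder, e.g. 'plant001'
names = [f[:-len('_rgb.png')] for f in os.listdir('./plant/Images') if f.endswith('_rgb.png')]
# inside __getitem__, replace the two Image.open calls with:
jpg = Image.open(os.path.join('./plant/Images', name + '_rgb.png'))
png = Image.open(os.path.join('./plant/Masks', name + '_label.png'))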
st47153 | Hi,
I am new to PyTorch, and I encountered a question.
There is no change in the training loss and validation loss after changing the learning rate several times (as shown below). But if I change the batch size, the training loss and validation loss for each epoch change.
I want to know why the training loss and validation loss for each epoch did not change.
Thanks!
SGD 0.001
-------------------------
7401
1.2.0
There are 1 CUDA devices
Setting torch GPU to 0
Using device:0
begin training!
Epoch:1/150
Train_loss:1.79059
Vali_loss:1.79172
Time_elapse:13.118939876556396
'
Epoch:2/150
Train_loss:1.78832
Vali_loss:1.78894
Time_elapse:23.398069381713867
'
Epoch:3/150
Train_loss:1.78577
Vali_loss:1.78627
Time_elapse:33.67959260940552
'
...
SGD 0.01
-------------------------
7401
1.2.0
There are 1 CUDA devices
Setting torch GPU to 0
Using device:0
begin training!
Epoch:1/150
Train_loss:1.79059
Vali_loss:1.79172
Time_elapse:13.118939876556396
'
Epoch:2/150
Train_loss:1.78832
Vali_loss:1.78894
Time_elapse:23.398069381713867
'
Epoch:3/150
Train_loss:1.78577
Vali_loss:1.78627
Time_elapse:33.67959260940552
'
... |
st47154 | Solved by Usama_Hasan in post #6
Can you share your Trainer function? It's unclear what we're doing wrong here. |
st47155 | Thanks for your reply!
global training, prediction, cuda
training = True
prediction = True
cuda = True
seed = 5

def run(opt="SGD", gpu=0, lr=0.001, lr_schedule=False, identifier='', scene_classification=False):
    # directories
    model_dir = r'.\model'
    train_dir = r'C:\Users\train_img'
    vali_dir = r'C:\Users\vali_img'
    test_dir = r'C:\Users\test_img'
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        print(torch.__version__)
        if not cuda:
            print("You have a CUDA device")
        else:
            torch.cuda.set_device(gpu)
            torch.cuda.manual_seed(seed)
    # building the net
    model = Unet_6(features=[32, 64])
    if cuda:
        model = model.cuda()
    new_identifier = '8' + identifier
    trainer = Trainer(net=model, train_dir=train_dir, vali_dir=vali_dir, test_dir=test_dir, model_dir=model_dir,
                      opt=opt, lr=lr, cuda=cuda, identifier=new_identifier, lr_schedule=lr_schedule)
    # training
    if training:
        bs = 8
        trainer.train_model(epoch=150, bs=bs) |
st47157 | Can you share your Trainer function? It’s unclear what’s going wrong here.
Lincoln10153:
Trainer(net=model, train_dir=train_dir, vali_dir=vali_dir, test_dir=test_dir, model_dir=model_dir,
opt=opt, lr=lr, cuda=cuda, identifier=new_identifier, lr_schedule=lr_schedule) |
st47158 | Thanks!
I double-checked my code just now, and I think I finally found the problem: I was using learning rate decay, so the learning rate I set was never actually applied during training.
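For anyone debugging something similar, printing the learning rate the optimizer actually uses each epoch makes this kind of silent override easy to spot (optimizer here stands for whichever optimizer object the Trainer creates internally):
for param_group in optimizer.param_groups:
    print(param_group['lr'])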
st47159 | Hello,
I have a problem in understanding convtranspose2d with stride in pytorch.
According to this paper https://arxiv.org/pdf/1603.07285.pdf, convtranspose2d with stride is equivalent to a convolution on a modified input obtained by inserting 0s between the entries (that’s why it is sometimes called a fractionally strided convolution).
I ran a quick test but it doesn’t work, and I don’t understand why. Could someone explain what is wrong here?
import torch
import torch.nn.functional as F

x = torch.rand(1, 1, 20, 20)
k = torch.rand(1, 1, 3, 3)
w = torch.zeros(3, 3)
w[1, 1] = 1
zerofill = F.conv_transpose2d(x, w.expand(1, 1, 3, 3), stride=2)  # interleave tensor x with 0s
out1 = F.conv_transpose2d(x, k, stride=2)
out2 = F.conv2d(zerofill, k, padding=1)
err = (out1 - out2).pow(2)
print(err.sum().item())  # large error
Thanks |
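A likely culprit, for what it’s worth: F.conv2d in PyTorch actually computes a cross-correlation, while conv_transpose2d is its adjoint, so reproducing it as a convolution on the zero-interleaved input requires flipping the kernel in both spatial dimensions. A minimal sketch of the corrected comparison:
out2 = F.conv2d(zerofill, k.flip([2, 3]), padding=1)  # flip kernel in H and W
err = (out1 - out2).pow(2)
print(err.sum().item())  # ~0, up to floating-point error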
st47160 | I read some tutorials for logistic regression in PyTorch, but they didn’t explain how I can implement a non-negative logistic regression model.
Basically, I have a linear classifier that needs to output a single probability (I assume using a sigmoid), and I need all the weights of the network to be non-negative.
It was suggested that I should use a non-negative variant of logistic regression to get non-negative weights, but I’m not sure how to implement this; my model is currently a simple one-layer linear classifier.
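One common way to enforce the constraint is to reparameterize: let the optimizer work on an unconstrained tensor, and map it through softplus so the effective weights are always non-negative. A minimal sketch (the class name is illustrative, not an established API):
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonNegLogisticRegression(nn.Module):
    def __init__(self, in_features):
        super().__init__()
        # unconstrained raw parameter; softplus maps it to weights >= 0
        self.raw_weight = nn.Parameter(torch.zeros(in_features))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        w = F.softplus(self.raw_weight)  # elementwise, always >= 0
        return torch.sigmoid(x @ w + self.bias)
An alternative is to keep a plain nn.Linear and clamp its weight to min=0 after each optimizer step; the reparameterization above avoids gradient steps that point outside the feasible set.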
st47161 | Hi everyone,
I am implementing a system for a Graph Neural Net (GNN) based on PyTorch and PyTorch Geometric (PyG).
Because of my algorithm, I need the system to perform operations that are not normally used in Convolutional Neural nets.
Examples of the functions I use in the code I wrote are:
torch.unique_consecutive()
torch.cumsum()
torch.eq()
Plus, I do have a few for loops in the extra code I wrote (shame on me).
The version of my system prior to the introduction of those operations runs with a GPU utilisation of ~99%, while the new version runs with a utilisation of ~30%. I would like to understand what is causing the low GPU utilisation and if there is a way to make my code run faster.
I can exclude the usual culprit for low GPU utilisation, i.e., time that the GPU spends idle waiting for a minibatch to be loaded to GPU RAM, because all of the data for my current problem resides on GPU RAM.
I tried using the NVIDIA profiler; I instrumented the code so that only one forward pass of the GNN is analysed:
torch.cuda.profiler.cudart().cudaProfilerStart()
...useful code here...
torch.cuda.profiler.cudart().cudaProfilerStop()
exit()
I show here the results of running the profiler in the two cases.
Old version of the code, which does not feature the code I wrote myself, with the ‘special’ operations:
$ export CUDA_LAUNCH_BLOCKING=1; nvprof -s --export-profile on --profile-from-start off -o main_master.nvvp -f python main.py
Type Time(%) Time Calls Avg Min Max Name
GPU activities: 11.42% 3.9044ms 19 205.50us 1.1840us 1.1311ms _ZN2at6native29vectorized_elementwise[...]
10.98% 3.7539ms 6 625.65us 184.64us 1.5057ms volta_sgemm_128x128_tn
10.47% 3.5787ms 24 149.11us 1.0240us 761.06us _ZN2at6native29vectorized_elementwise[...]
10.11% 3.4564ms 12 288.03us 1.5680us 1.4638ms _ZN2at6native29vectorized_elementwise_[...]
8.19% 2.7999ms 46 60.866us 864ns 414.82us _ZN2at6native29vectorized_elementwise[...]
8.18% 2.7964ms 4 699.11us 154.59us 1.2435ms volta_sgemm_128x64_tn
...
**0.09%** 31.776us 14 2.2690us 1.2160us 2.6240us [CUDA memcpy HtoD]
**0.02%** 6.9120us 6 1.1520us 1.0240us 1.3120us [CUDA memcpy DtoD]
Current version of the code, which includes the ‘special’ operations:
$ export CUDA_LAUNCH_BLOCKING=1; nvprof -s --export-profile on --profile-from-start off -o main_current.nvvp -f python main.py
Type Time(%) Time Calls Avg Min Max Name
GPU activities: 12.50% 6.1623ms 4 1.5406ms 303.33us 2.7780ms volta_sgemm_128x128_tn
9.61% 4.7417ms 4242 1.1170us 704ns 380.10us _ZN2at6native29vectorized_elementwise[...]
9.19% 4.5337ms 4 1.1334ms 266.08us 2.0008ms volta_sgemm_128x64_nn
**9.00%** 4.4410ms 4810 923ns 800ns 12.384us [CUDA memcpy DtoD]
8.58% 4.2312ms 1803 2.3460us 832ns 1.1516ms _ZN2at6native29vectorized_elementwise[...]
8.09% 3.9914ms 2 1.9957ms 1.9954ms 1.9960ms volta_sgemm_128x64_nt
**4.65%** 2.2946ms 1517 1.5120us 704ns 12.352us [CUDA memcpy DtoH]
...
**0.60%** 313.34us 166 1.8870us 640ns 2.6560us [CUDA memcpy HtoD]
I also used NVIDIA’s visual profiler:
[Screenshot: NVVP timeline comparing the two versions (nvvp_both)]
What I understand is that the newly introduced operations are causing small and frequent memory movements, especially between the GPU and the CPU (CUDA memcpy DtoH). This is probably not a very good idea, because those are operations that take time, during which the GPU stays idle.
Can anyone help me understand which of the functions I used is causing that? Pointers to reading materials are welcome.
Thanks everyone in advance! |
st47162 | Solved by ptrblck in post #2
Your general approach is very good.
Are you looking at the first iteration or are these profiles from later steps?
In the former case, you should skip the startup and first iterations, as e.g. the caching allocator would need to allocate memory, which can later be reused.
You can also add markers… |
st47163 | Your general approach is very good.
Are you looking at the first iteration or are these profiles from later steps?
In the former case, you should skip the startup and first iterations, as e.g. the caching allocator would need to allocate memory, which can later be reused.
You can also add markers using:
torch.cuda.nvtx.range_push('my name')
[...]
torch.cuda.nvtx.range_push('my nested name')
[...]
torch.cuda.nvtx.range_pop()
torch.cuda.nvtx.range_pop()
to get a better idea which method is using which kernels. |
st47164 | Thanks @ptrblck, your suggestion helped me very much.
I added a few markers, so now I can tell which parts of the code the memory operations refer to.
I also start the profiler after the first few iterations, as you suggest.
I post here the output I currently get from NVVP, it might be useful for other people in the future.
[Screenshot: NVVP timeline of the new version with the added markers (nvvp_both_new)]
I added markers for the edge update and the node update of my GNN, plus one marker for the loss function.
The loss function is clearly taking a lot more time in the new version of the code, so I will start my code optimisation from there. |
st47165 | I am trying to manually derive and update the gradients for a simple NN.
The input size is 12×12, and I apply ten 3×3 kernels, so I have a conv layer with a 10×10×10 output, which I flatten to a 1×1000 vector and connect to a fully connected layer with a 1000×2 weight matrix. I can get the fc layer’s weights to be updated, but how do I propagate the update back from the 1000×2 matrix to the 10×3×3 kernel weights?
st47166 | Your loss function contains an average. When you backprop from the loss to the fc layer, you will get a scalar value (a vector in the case of batching) due to the average. During the calculation of the conv layer’s gradients, you multiply that incoming gradient with the local gradients of the conv layer.
If you derive the equations then it will become clearer. If something needs more clarification, feel free to ask. |
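A concrete sketch of the shapes involved, checking a hand-derived conv-kernel gradient against autograd (the mean loss here is illustrative, standing in for whatever averaged loss the original post uses):
import torch
import torch.nn.functional as F

x = torch.rand(1, 1, 12, 12)
k = torch.rand(10, 1, 3, 3, requires_grad=True)  # 10 conv kernels of 3x3
W = torch.rand(1000, 2, requires_grad=True)      # fc weight matrix

h = F.conv2d(x, k)            # 1 x 10 x 10 x 10
out = h.reshape(1, -1) @ W    # 1 x 2
loss = out.mean()
loss.backward()

# Chain rule by hand: dL/dout is 1/2 for each of the two outputs (mean),
# dL/dh is that gradient pushed back through W and reshaped to the conv
# output shape, and dL/dk is the cross-correlation of the input with dL/dh.
g_out = torch.full((1, 2), 0.5)
g_h = (g_out @ W.t()).reshape(1, 10, 10, 10)
g_k = F.conv2d(x.transpose(0, 1), g_h.transpose(0, 1)).transpose(0, 1)
print(torch.allclose(k.grad, g_k, atol=1e-5))  # True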
st47167 | Hi Kushaj, thank you for your answer, but I am still confused. If we backpropagate from the loss to the fc layer weight, which here is 1000×2, how do we get a scalar? Shouldn’t it still be a 1000×2 gradient matrix, with all the weight values updated?